Roofing Estimation

RoofLens

From Drone to Done

RoofLens is a SaaS platform that converts drone and aerial photos into precise roof measurements, damage maps, and automated line-item estimates. Built for small-to-mid-size roofing contractors and insurance adjusters, it eliminates manual measuring and spreadsheet guesswork to produce ready-to-send PDF bids in under an hour, cutting estimating time by up to 70% and reducing disputes.

Product Details

Vision & Mission

Vision
Empower roofing contractors and insurers to close accurate, professional roof estimates instantly from aerial imagery, eliminating manual delays and disputes.
Long Term Goal
Within five years, enable 10,000 contractors to produce 100,000+ verified roof estimates annually, cutting average quoting time by 50% and reducing claim disputes by 30%.
Impact
Empowers small-to-mid-size roofing contractors and insurance adjusters by reducing estimating time by up to 70%, delivering ready-to-send line-item bids in under an hour, and cutting claim disputes by 30%, thereby increasing quoting accuracy and accelerating revenue collection.

Problem & Solution

Problem Statement
Small-to-mid-size roofing contractors and insurance adjusters waste hours and lose bids to manual ladder measurements and spreadsheet estimates because photos and satellite tools are inaccurate, labor-intensive, and fail to produce ready-to-send line-item quotes.
Solution Overview
RoofLens processes drone and aerial photos with photogrammetry and AI to generate precise roof measurements and damage maps. Automated line-item material counts and instant downloadable PDF estimates replace ladder measurements and spreadsheets, producing ready-to-send, accurate bids in under an hour.

Details & Audience

Description
RoofLens is a SaaS platform that converts drone and aerial photos into precise roofing measurements, damage maps, and ready-to-send estimates. It serves small-to-mid-size roofing contractors and insurance adjusters who need faster, more accurate quotes. RoofLens eliminates manual measuring and spreadsheet guesswork, cutting estimating time by up to 70% and reducing disputes by delivering accurate bids within an hour. Its automated photogrammetry-to-estimate export generates line-item material counts and downloadable PDF bids instantly.
Target Audience
Small-to-mid-size roofing contractors and insurance adjusters (ages 25–55) seeking fast, accurate estimates through drone-enabled workflows.
Inspiration
On a soggy afternoon I watched a roofer friend climb wet ladders, tape in hand, fumble photos and spreadsheets—then lose three bids after half a day measuring. Seeing his frustration and wasted work crystallized one idea: if aerial photos could turn directly into precise measurements and ready-to-send estimates, contractors wouldn’t lose revenue to ladders and guesswork. That image became the seed of RoofLens.

User Personas

Detailed profiles of the target users who would benefit most from this product.

Speedy Sales Sam

- 29–42, residential roofing sales rep at a growing, mid-size contractor.
- Works storm‑prone suburbs; 200+ miles weekly; 5–7 in‑home appointments daily.
- 3–7 years selling exterior trades; tech‑comfortable; iPad and CRM heavy.
- Compensation: base plus commission; accelerators for same‑day signatures and upsells.

Background

Started as a canvasser, losing deals while waiting on takeoffs. Adopted tablets and drones after faster competitors kept beating him. Now aims to leave every driveway with a signed, defensible bid.

Needs & Pain Points

Needs

1. Instant preliminary estimate from minimal photos.
2. Homeowner‑friendly visuals to explain scope.
3. Pay‑per‑job pricing for variable volume.

Pain Points

1. Waiting days for takeoffs kills momentum.
2. Inconsistent measurements trigger awkward callbacks.
3. Evening follow‑ups buried in paperwork chaos.

Psychographics

- Hungry for same‑day signatures and referrals.
- Trusts visuals more than roofing jargon.
- Impatient with back‑office delays and rework.
- Competitive; tracks close rate like a sport.

Channels

1. YouTube (sales tips)
2. Facebook Groups (roofing pros)
3. Google Search (local estimates)
4. LinkedIn (industry peers)
5. Apple Podcasts (sales coaching)

Franchise Standardizer Fran

- 35–50, Operations Director at a 5–12‑location roofing franchise.
- Manages 20–60 estimators and reps; multi‑state compliance responsibility.
- Bachelor’s in business/CM; KPI‑ and SOP‑driven; spreadsheet native.
- Budget owner for tooling; accountable for margin, cycle time, quality.

Background

Rose from senior estimator to ops after inconsistent bids hurt margins and brand. Built SOPs but fights tool sprawl and shadow processes. Seeks one enforceable estimating system.

Needs & Pain Points

Needs

1. Role‑based templates with locked line items.
2. Central approvals, SLAs, and exception alerts.
3. Team analytics by branch and rep.

Pain Points

1. Rogue estimates erode margins and trust.
2. Manual roll‑ups waste reporting hours weekly.
3. Disconnected tools confuse and slow teams.

Psychographics

- Obsessed with playbooks and consistent execution.
- Data‑first; dashboards before gut instincts.
- Risk‑averse on compliance and brand reputation.
- Empowers teams, demands measurable accountability.

Channels

1. LinkedIn (industry ops)
2. YouTube (admin tutorials)
3. Google Search (SOP templates)
4. Slack communities (operations)
5. Email newsletters (roofing)

Commercial CAD Casey

- 32–55, commercial estimator at a regional roofing contractor.
- Handles 5–10 bids weekly; roofs 40,000–200,000 sq ft.
- Proficient in Bluebeam/CAD; collaborates with GCs and architects.
- Office‑based with regional site walks for verification.

Background

Started as an architectural drafter before moving into estimating. Lost nights redlining prints when measurements didn’t match as‑builts. Now insists on traceable geometry and exportable layers.

Needs & Pain Points

Needs

1. Parapet, wall, and penetration counts labeled.
2. DXF/DWG exports at exact scale.
3. Adjustable tolerances per specification.

Pain Points

1. Aerial shadows obscure boundaries and edges.
2. Manual digitizing consumes late nights.
3. Scope misreads force costly change orders.

Psychographics

- Precision over speed; hates ambiguous edges.
- Collaboration‑minded with architects and subs.
- Documentation‑heavy; everything must be traceable.
- Cautious about assumptions and tolerances.

Channels

1. Bluebeam Forum (workflows)
2. LinkedIn (commercial estimators)
3. YouTube (CAD tips)
4. Google Search (spec sheets)
5. Reddit (r/Construction)

Policy Advocate Priya

- 35–60, licensed public adjuster; works multi‑state CAT deployments.
- Manages 8–20 active claims; contingency‑fee compensation.
- Travels frequently; home office evenings for documentation.
- Experienced with Xactimate; comfortable with drone/aerial imagery.

Background

Former roofing PM turned public adjuster after repeated scope disputes. Built a reputation for airtight files that survive carrier audits. Needs evidence that stands up in escalations.

Needs & Pain Points

Needs

1. Carrier‑aligned line items and codes.
2. Timestamped damage maps with notes.
3. Shareable audit trails for escalations.

Pain Points

1. Carrier pushback on scope definitions.
2. Disorganized photos derail negotiations.
3. Inconsistent third‑party reports undermine cases.

Psychographics

- Justice‑minded, fights for policyholder fairness.
- Evidence over emotion; meticulous documentation.
- Tenacious follow‑ups until resolution.
- Prefers tidy, timestamped paper trails.

Channels

1. LinkedIn (public adjusters)
2. Facebook Groups (PA forums)
3. X (Twitter) (CAT updates)
4. YouTube (case studies)
5. Email newsletters (insurance)

QA Compliance Quinn

- 28–45, QA/compliance lead at a mid‑size roofing company.
- Reviews 20–40 projects daily across multiple branches.
- Former crew lead; OSHA‑minded; checklist expert.
- Office‑based; coordinates constantly with field and desk teams.

Background

Moved from crew lead to QA after warranty claims exposed sloppy documentation. Built internal checklists but needs automated coverage verification and approval control.

Needs & Pain Points

Needs

1. Automated coverage checks and quality flags.
2. Locked templates with approval gates.
3. Exception reports by user and branch.

Pain Points

1. Missed photos trigger expensive re‑flights.
2. Template drift causes inconsistent bids.
3. Version confusion multiplies rework.

Psychographics

- Detail‑obsessed; zero tolerance for sloppiness.
- Risk‑averse; prioritizes compliance and safety.
- Coach mindset; prefers teachable moments.
- Loves clear standards and exceptions.

Channels

1. YouTube (QA workflows)
2. LinkedIn (roofing ops)
3. Google Search (checklists)
4. Slack communities (quality)
5. Email (policy updates)

Portfolio Planner Parker

- 34–52, manages 50–200 roofs for REITs or schools.
- Budget owner; accountable to CFO, boards, and tenants.
- Facilities background; now a data‑centric portfolio manager.
- HQ office‑based; site visits monthly or after storms.

Background

Started as a facilities tech, promoted after taming reactive maintenance chaos. Burned by opaque vendor bids and surprise failures, now demands comparable scopes and clear ROI forecasts.

Needs & Pain Points

Needs

1. Portfolio rollups with standardized condition grades.
2. Apples‑to‑apples bid comparisons.
3. Maintenance planning exports for boards.

Pain Points

1. Incomparable estimates stall approvals.
2. Surprise leaks disrupt operations.
3. Site access delays slow surveys.

Psychographics

- Budget‑disciplined; hates surprise costs.
- Prefers dashboards over trade jargon.
- Risk‑averse; prioritizes tenant uptime.
- Vendor‑neutral, outcome‑focused decisions.

Channels

1. LinkedIn facility management 2. IFMA forums discussions 3. Google Search vendor comparisons 4. YouTube maintenance planning 5. Email CRE newsletters

Product Features

Key capabilities that make this product valuable to its target users.

Capacity Balancer

Distributes triaged jobs across estimators and drone operators using live capacity, time windows, and skill tags (steep, tile, commercial). Prevents overload, keeps queues moving, and raises same‑day completion rates without manual juggling.

Requirements

Live Capacity & Availability Model
"As a dispatch coordinator, I want accurate live capacity for each estimator and operator so that auto-assignments avoid overload and meet daily targets."
Description

Maintain real-time capacity for estimators and drone operators by factoring shift schedules, PTO, active and queued workload, travel buffers, and per-day caps. Incorporate job duration models derived from historical averages by roof type, pitch, and complexity. Represent capacity in normalized units with current and forecasted availability per time window. Integrate with user profiles, calendar feeds, and the triage pipeline so capacity automatically updates on assignment, completion, or schedule changes. Expose query and subscription APIs for the balancer to retrieve snapshots and react to capacity changes.
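
To make the capacity model concrete, here is a minimal sketch of a windowed NCU ledger and the deduction applied on assignment, assuming the org defaults from the criteria below (1 NCU = 1 minute, 15-minute windows); the CapacityWindow shape and applyAssignment helper are illustrative, not a prescribed schema.

```typescript
// Illustrative capacity ledger: 1 NCU = 1 minute, 15-minute windows (assumed org defaults).
interface CapacityWindow {
  windowStart: Date;    // inclusive start of the 15-minute window
  availableNCU: number; // capacity remaining in this window
}

const WINDOW_MINUTES = 15;

// Deduct (modeled duration + travel buffer) across consecutive windows from `start`.
// Returns updated windows, or null if any window would go negative (assignment rejected).
function applyAssignment(
  windows: CapacityWindow[],
  start: Date,
  modeledDurationMin: number,
  travelBufferMin: number,
): CapacityWindow[] | null {
  let remaining = modeledDurationMin + travelBufferMin; // e.g. 90 + 20 = 110 NCU
  const updated = windows.map((w) => ({ ...w }));
  for (const w of updated) {
    if (remaining <= 0) break;
    if (w.windowStart < start) continue; // windows before the job are untouched
    const needed = Math.min(remaining, WINDOW_MINUTES);
    if (w.availableNCU < needed) return null; // would push a window negative
    w.availableNCU -= needed;
    remaining -= needed;
  }
  return remaining <= 0 ? updated : null; // ran out of windows => infeasible
}
```

Completion or cancellation would apply the same delta in reverse, restoring the 110 NCU to the same windows, matching the restore behavior in the first criterion.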

Acceptance Criteria
Real-time Capacity Update on Job Assignment
Given org default NCU = 1 minute and window size = 15 minutes And estimator E has 240 NCU available today from 09:00–17:00 local When a job J with modeledDuration = 90 minutes and travelBuffer = 20 minutes is assigned to E at 10:00 Then E’s available capacity decreases by 110 NCU across the impacted time windows starting at 10:00 And no time window shows negative availability And the capacity snapshot reflects the change within 2 seconds of the assignment event And an audit log entry records userId, jobId, deltaNCU, windows affected, and timestamp And moving J back to triaged or cancelling restores 110 NCU to the corresponding time windows within 2 seconds
Shift Schedule and PTO Blocking
Given estimator E’s shift is 08:00–12:00 and 13:00–17:00 local with a 1-hour break And E has an all-day PTO on 2025-09-10 synced from calendar When requesting capacity for E for 2025-09-10 Then all time windows show 0 available NCU When requesting capacity for E for 2025-09-11 Then windows outside the shift show 0 NCU and windows during the shift reflect availability based on per-day cap minus assigned workload And changes to the calendar feed (adding/removing PTO) update capacity within 2 minutes
Job Duration Model Application and Fallback
Given a job with attributes roofType = tile, pitch = steep, complexity = high And the historical dataset contains at least 30 completed jobs for that attribute cohort within the last 90 days When the model computes duration Then the predicted duration equals the median of that cohort rounded to the nearest 5 minutes and is stored as modeledDurationMinutes Given another job with fewer than 30 relevant historical samples When the model computes duration Then it falls back to the default for roofType, or to a global default if none exists, and records modelSource = fallback
Per-Day Capacity Caps and Travel Buffers
Given drone operator D has a per-day cap of 360 NCU for 2025-09-12 And D has two jobs scheduled: J1 (120 min + 15 min travel) and J2 (150 min + 20 min travel) When evaluating capacity for assigning a new job J3 (100 min + 10 min travel) on the same day Then the capacity snapshot for any time windows that would cause the daily cap to be exceeded reports 0 available NCU And the day-level available NCU equals 360 − (J1+travel + J2+travel) prior to considering J3 And travel buffers are deducted from the appropriate time windows between jobs
Capacity Snapshot Query API
Given a request to GET /capacity/snapshot?userIds={list}&role=estimator&start={iso}&end={iso}&tz=America/Chicago&window=15m When the request is valid Then the API returns 200 with an array of windows including windowStart, windowEnd, availableNCU, reservedNCU, capNCU, lastUpdatedAt And p95 latency is ≤ 300 ms for queries up to 7 days and 50 users And the response includes ETag and version for caching/optimistic concurrency And invalid parameters return 400 with a machine-readable error code and message
Capacity Change Subscription API
Given a client subscribes to /capacity/stream for org O filtered by role = operator When a job assignment, completion, calendar change, or per-day cap update affects any operator in O Then a change event is emitted within 2 seconds containing userId, reason, timeWindows, deltaNCU, newAvailableNCU, and sequenceId And events are delivered at-least-once with monotonically increasing sequenceId per user And the stream supports ≥ 1,000 concurrent subscribers without dropped events
Concurrent Updates and Idempotency Guarantees
Given two assignment events for jobs J1 (60 min + 10 min travel) and J2 (45 min + 5 min travel) target the same user U and arrive within 50 ms When the system processes both events Then capacity deductions are atomic and no time window becomes negative And the final availableNCU equals initialAvailableNCU − sum(deltas) exactly once And processing a duplicate assignment event for J1 (same jobId and sequence) does not deduct capacity twice And a completion event for J1 restores exactly the previously deducted NCU to the corresponding windows
Skill Tag Matching & Compliance Guardrails
"As an operations manager, I want assignments to respect required skills and certifications so that safety, quality, and compliance standards are maintained."
Description

Ensure only eligible resources are considered for assignment by enforcing skill tags (e.g., steep, tile, commercial), certifications, and required equipment. Provide admin configuration for tags and defaults per job template, with automatic tag inference from RoofLens triage metadata. Block or warn on overrides that violate compliance, require approval where configured, and log reasons when jobs are unassignable due to missing skills. Surface suggested eligible resources and nearest alternatives.
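
A minimal sketch of the eligibility filter this implies, with hypothetical Resource and JobRequirements shapes; violationCount shows one way to surface "nearest alternatives" that violate at most one constraint category, per the criteria below.

```typescript
// Hypothetical shapes; field names are assumptions for illustration only.
interface Resource {
  id: string;
  skillTags: Set<string>;
  certifications: { name: string; validThrough: Date }[];
  equipment: Set<string>;
}

interface JobRequirements {
  requiredTags: string[];
  requiredCerts: string[];
  requiredEquipment: string[];
  scheduledDate: Date;
}

// Count how many constraint categories (tags, certs, equipment) a resource violates.
function violationCount(r: Resource, job: JobRequirements): number {
  let violations = 0;
  if (!job.requiredTags.every((t) => r.skillTags.has(t))) violations++;
  if (
    !job.requiredCerts.every((c) =>
      r.certifications.some(
        (cert) => cert.name === c && cert.validThrough >= job.scheduledDate,
      ),
    )
  )
    violations++;
  if (!job.requiredEquipment.every((e) => r.equipment.has(e))) violations++;
  return violations;
}

// Eligible = zero violations; "nearest alternative" = exactly one violated category.
const isEligible = (r: Resource, job: JobRequirements) => violationCount(r, job) === 0;
const isNearAlternative = (r: Resource, job: JobRequirements) => violationCount(r, job) === 1;
```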

Acceptance Criteria
Eligible Resource Filtering on Assignment Panel
Given a job has required skill tags, certifications, equipment, and a schedule window When the Assign dialog is opened Then only resources who possess all required tags, have all required certifications valid through the job date, have the required equipment available and not double-booked, and have capacity within the time window are listed as Eligible And resources with missing tags, expired certifications, unavailable equipment, or capacity conflicts are excluded from the Eligible list And the Eligible list loads in under 800 ms for an organization with up to 500 resources And the UI displays the Eligible count and supports pagination when results exceed 50
Automatic Skill Tag Inference from Triage Metadata
Given an admin-defined inference map exists that maps triage attributes (e.g., roof_slope, material, building_type) to skill tags When a job is created or triage metadata is updated Then required skill tags are auto-applied according to the active inference rules And if multiple rules apply, the union of tags is applied without duplicates And if required tags cannot be inferred, the job is flagged as Needs Review and lists the missing tags And each inference event is recorded in the job audit log with rule identifier, old->new tags, actor=system, and timestamp
Admin Configuration of Skill Tags, Certifications, Equipment, and Template Defaults
Given a user with Admin role When they create, edit, or deactivate a skill tag, certification, or equipment type Then the system enforces unique names, required fields, and prevents deactivation if referenced by active job templates unless a replacement is mapped And changes are versioned with audit entries (user, before/after, timestamp) and are reversible via rollback of the last version And job template defaults can specify required tags, certifications, and equipment; changes apply only to jobs created after the save time And non-admin users attempting these actions receive a 403 error and no change is persisted
Compliance Override Warning and Approval Workflow
Given an organization guardrail policy is configured as Warn, Block, or Require Approval When a dispatcher attempts to assign a resource that violates required tags, certifications, or equipment Then the UI displays violation details and follows policy: Warn allows assignment only after entering a reason; Block prevents assignment; Require Approval creates a Pending Assignment for an approver And overrides and approvals capture reason, user, approver (if applicable), and timestamp and are immutable in the audit log And Pending Assignments cannot start work or reserve equipment until approved; if not approved within the configured hold time, they auto-expire and notify the requester
Unassignable Job Reason Logging and Notifications
Given a job has required tags, certifications, equipment, and time window constraints When no eligible resources exist Then the job state is set to Unassignable and a structured reason list is stored, including counts by category (missing tag, expired certification, equipment unavailable, capacity conflict, time window mismatch) And a notification is sent to the designated channel/role upon entering Unassignable (debounced to once per state change) And the reasons are visible in the UI and via GET /jobs/{id}/eligibility; the endpoint responds in under 500 ms and is filterable by category
Suggested Eligible Resources and Nearest Alternatives
Given a job is ready for assignment When viewing resource suggestions Then the top 5 eligible resources are listed, ranked by earliest completion ETA (considering travel time, current queue, and capacity) And up to 5 nearest alternatives are shown that violate at most one constraint, each with the specific violation highlighted and an available override/approval path if policy permits And selecting an alternative triggers the compliance workflow And suggestions compute in under 1.2 seconds for up to 500 resources and 1,000 open jobs
SLA and Time Window–Aware Scheduling
"As a scheduler, I want the system to place jobs within promised time windows so that customer expectations and SLAs are consistently met."
Description

Honor customer time windows, promised SLAs (e.g., same-day), and blackout periods when scheduling. Score candidates by ability to satisfy the earliest feasible window, prioritizing same-day completion. Account for operator travel time via mapping APIs and buffer for upload/processing stages. Reserve capacity for urgent jobs, highlight SLA risk, and visualize assigned windows on shared calendars. Detect and flag conflicts prior to confirmation.
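
The central feasibility rule here, that travel plus service duration plus upload/processing buffers must fit entirely inside the promised window, can be sketched as below; the names and the simple arrival handling are assumptions, not the platform's actual API.

```typescript
// Sketch: does travel + service + buffers fit entirely inside the customer window?
interface TimeWindow {
  start: Date;
  end: Date;
}

function fitsWindow(
  window: TimeWindow,
  earliestArrival: Date, // operator's earliest possible on-site arrival
  travelMin: number,     // from the mapping API, with live traffic
  serviceMin: number,
  bufferMin: number,     // upload + processing buffers
): { feasible: boolean; plannedStart?: Date; plannedEnd?: Date } {
  const plannedStart = new Date(
    Math.max(window.start.getTime(), earliestArrival.getTime()),
  );
  const totalMin = travelMin + serviceMin + bufferMin;
  const plannedEnd = new Date(plannedStart.getTime() + totalMin * 60_000);
  if (plannedEnd > window.end) return { feasible: false }; // candidate rejected
  return { feasible: true, plannedStart, plannedEnd };
}
```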

Acceptance Criteria
Earliest Feasible Window Selection and Same‑Day Prioritization
Given a job with one or more customer time windows and a same-day SLA, When the scheduler evaluates candidates, Then it selects a candidate whose start and end fall entirely within the earliest feasible window today; If no same-day candidate exists, select the earliest feasible window on the next available day. Given multiple same-day candidates, When scoring, Then the chosen candidate has the highest score where scoring prioritizes (1) SLA compliance, (2) earliest finish time within the window, (3) minimal total travel + buffer time. Given a scheduled job, Then the stored assignment includes selected window identifier, start/end timestamps, calculated score, and scoring rationale for auditability. Given a job is assigned to a same-day feasible window, Then no later-day assignment for that job is created by the scheduler in the same run.
Operator Availability, Time Windows, and Blackout Compliance
Given an operator blackout from 13:00–15:00 and a customer window 12:00–16:00, When evaluating feasibility, Then no portion of the planned start–end (including buffers) overlaps 13:00–15:00; If it cannot fit, the candidate is rejected. Given organization non-working days and operator PTO/holidays, When scheduling, Then assignments are not placed on those dates. Given a blackout is added or changed after a provisional hold is created, When the scheduler re-evaluates, Then impacted holds are flagged as conflicts prior to confirmation with reason "Blackout overlap".
Travel Time and Processing Buffers Included in Schedule
Given a candidate sequence of jobs for an operator, When computing feasibility, Then travel time from the previous job (or start location if none) to the new job is retrieved via the mapping API using live traffic and included in the plan. Given service duration and configured pre-flight, upload, and processing buffers, When validating a time window, Then the sum of travel + service duration + all buffers fits entirely within the window; otherwise, the candidate is rejected. Given buffers are applied, Then the planned end time equals planned start + service duration + all buffers and is used for SLA feasibility and subsequent travel calculations.
Skill and Capacity Matching with Urgent Capacity Reserve
Given a job requiring skills [e.g., steep, tile], When filtering candidates, Then only operators with all required skill tags are considered. Given operator daily capacity C and an urgent capacity reserve R%, When placing non-urgent assignments, Then the total scheduled work (including buffers) does not exceed (100%−R%) of C; urgent jobs may consume the reserved R%. Given no eligible operator meets skill and capacity constraints within the SLA, Then the job is flagged as SLA risk with reason "No skilled capacity within SLA".
Conflict Detection and Pre‑Confirmation Controls
Given a provisional assignment, When attempting confirmation, Then the system checks for overlaps on operator, estimator, equipment, and vehicle calendars and for double-booking of the customer time window; If any conflict exists, confirmation is blocked and a conflict detail is shown. Given a user with override permission, When they override a conflict, Then a reason is required and the override is recorded in the audit log with timestamp and user ID. Given a successful confirmation, Then the assignment state is "Confirmed" and no conflicts are present for any involved resource calendars.
Calendar Visualization and Timezone Accuracy
Given an assignment is confirmed, Then it appears on shared calendars for assigned operator and estimator within 5 seconds with correct local timezone start/end and location. Given users in different timezones view the same assignment, Then each sees local-time rendering that maps to a single consistent UTC timespan. Given an assignment is rescheduled, Then the prior calendar event is removed and the updated event appears within 5 seconds without duplicates. Given a calendar export (ICS) is requested, Then the file contains accurate start/end, location, SLA badge/label, and notes.
SLA Risk Scoring, Prioritization, and Alerts
Given a same-day SLA job not yet scheduled, When time remaining before SLA breach is less than estimated service duration + travel + buffers, Then the job’s SLA risk state becomes High and is visually highlighted. Given any job’s SLA risk state changes (None→Low→Medium→High), Then an alert is sent to the configured channel within 1 minute and the job is elevated in the scheduling queue ahead of lower-risk jobs. Given a job is scheduled to finish after its SLA deadline, Then it is marked "SLA breach predicted" and explicit user acknowledgment is required before confirmation. Given a job exits High risk due to a feasible assignment, Then the risk badge is cleared and the resolution is recorded in the audit log.
Auto-Assignment Load Balancer
"As a dispatcher, I want jobs automatically assigned to the best available resource so that queues move without manual juggling and same-day completion rates increase."
Description

Distribute triaged jobs using configurable strategies (least-loaded, round-robin within region, proximity for drone ops) weighted by skill match, capacity fit, time window feasibility, and SLA urgency. Support streaming and batch assignment as jobs arrive. Include deterministic tie-breakers and fairness over time, and enforce business rules (e.g., max concurrent inspections, per-day estimate limits). Produce assignment decisions with confidence scores and rationale, persist results, and update capacity accordingly.
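
A sketch of the deterministic least-loaded pick and the weighted confidence score; the tie-breaker order mirrors the first criterion below, while the field names and weight values are placeholders.

```typescript
// Deterministic least-loaded selection: smallest projected workload,
// then oldest last-assigned timestamp, then lowest userId (per the criteria below).
interface Candidate {
  userId: string;
  projectedWorkloadNCU: number;
  lastAssignedAt: Date;
}

function pickLeastLoaded(candidates: Candidate[]): Candidate | undefined {
  return [...candidates].sort(
    (a, b) =>
      a.projectedWorkloadNCU - b.projectedWorkloadNCU ||
      a.lastAssignedAt.getTime() - b.lastAssignedAt.getTime() ||
      a.userId.localeCompare(b.userId),
  )[0];
}

// Confidence as a weighted sum of normalized factor scores in [0, 1];
// weights sum to 1.0 as required, but these particular values are invented.
const WEIGHTS = { skill: 0.3, capacity: 0.25, timeWindow: 0.25, sla: 0.2 };

function confidence(s: {
  skill: number;
  capacity: number;
  timeWindow: number;
  sla: number;
}): number {
  return (
    WEIGHTS.skill * s.skill +
    WEIGHTS.capacity * s.capacity +
    WEIGHTS.timeWindow * s.timeWindow +
    WEIGHTS.sla * s.sla
  );
}
```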

Acceptance Criteria
Streaming Least-Loaded Estimator Assignment
Given an incoming triaged estimate job in region R with required skills and a feasible time window And eligible estimators with published capacity, per-day limits, and last-assigned timestamps When streaming mode is enabled and strategy least-loaded is active for estimators Then assign the job to the estimator with the smallest projected workload who meets skills and time window without breaching per-day limits And ties are broken deterministically by oldest last-assigned timestamp, then lowest userId And the decision is returned within 500 ms at P95 with a confidence score (0.0–1.0) and rationale listing at least the top 3 weighted factors And the assignment, confidence, rationale, and strategy are persisted with jobId, assigneeId, timestamp, and version And the assignee’s remaining capacity and next-available time are updated atomically And if no feasible assignee exists, the job remains unassigned with reason code NO_CAPACITY_OR_WINDOW and next review ETA computed
Regional Round-Robin Fairness
Given round-robin within region is configured for estimators in region R with eligible pool [E1,E2,E3] When 9 tie-eligible jobs arrive where all three are equally eligible by skills and capacity Then assignments rotate E1→E2→E3→E1→E2→E3→E1→E2→E3 without deviation And the rotation pointer persists across service restarts and is auditable And no single estimator receives more than 34% of tie-eligible assignments in any rolling window of 100 such jobs And ties within a rotation step are resolved by lowest userId deterministically
Proximity-Based Drone Operator Assignment with Time Windows
Given a drone mission job with location L, required skills, and time window [start,end] And eligible operators with prior scheduled locations and maxConcurrentInspections configured When proximity strategy is enabled for drone operators Then assign the job to the operator with minimum feasible travel time who can arrive within the window without exceeding maxConcurrentInspections And travel time uses the configured routing provider from prior job end time plus a 15-minute buffer And if multiple operators are equal on travel time, break ties by earliest next-available time, then lowest userId And the operator’s schedule is updated to block the slot and capacity counters adjusted atomically And the decision is persisted including travel ETA and buffer applied
SLA-Urgency Prioritization with Anti-Starvation
Given queued jobs with SLA priorities P1 (urgent), P2, and P3, and eligible assignees When assignment runs under mixed load for 1,000 jobs Then P1 jobs are assigned ahead of lower priorities when feasible, achieving ≥95% on-time assignment within their SLA windows And lower-priority jobs are not starved: each eligible assignee receives at least 1 P3 job per 10 assignments when P3 jobs have been waiting >10 minutes And SLA weight contribution to confidence is recorded in the rationale for each decision
Batch Assignment Determinism and Performance
Given a batch of 200 triaged jobs and a fixed snapshot of assignee state When batch assignment is triggered with seed S and deterministic mode enabled Then processing completes within 30 seconds at P95 and results are identical across repeated runs with the same input and seed And results are independent of input job order And per-job outcomes are persisted transactionally; partial failures are retried up to 3 times with exponential backoff And the API returns a summary including counts assigned/unassigned, average confidence, and unassigned reason codes
Business Rule Limits Enforcement and Reason Codes
Given configured rules: maxConcurrentInspections per operator, perDayEstimateLimit per estimator, and region boundaries When an assignment would violate any rule Then the assignment is not made and the job is marked unassigned with a specific reason code (e.g., MAX_CONCURRENT_REACHED, DAILY_LIMIT_REACHED, OUT_OF_REGION) And alternative eligible assignees are evaluated before returning unassigned And all rule checks and outcomes are recorded in the decision rationale and audit log
Confidence, Rationale, Idempotency, and Capacity Update
Given an assignment request for job J with idempotencyKey K When the same request is submitted multiple times within 24 hours Then exactly one assignment is created and subsequent requests return the same assignee, confidence, and rationale without side effects And confidence is computed from normalized weights of skill match, capacity fit, time window feasibility, SLA urgency, and proximity (for drone ops), summing to 1.0 And rationale lists each factor with its weight and score and is persisted with timestamp and version And capacity and schedules are updated atomically with the assignment or rolled back on failure
Manual Override & Reassignment with Audit
"As a dispatch lead, I want to override and rebalance assignments with safeguards so that I can handle exceptions without breaking compliance or creating conflicts."
Description

Provide UI and APIs for authorized users to override, lock, or reassign jobs with real-time conflict checks. Warn on violations of skills, capacity, and SLA constraints, and require justification where policy dictates. Capture a complete audit trail (who, when, why, previous and new assignee, rule inputs) and support bulk rebalancing during outages or weather holds with undo capability. Prevent double-booking and maintain historical assignment lineage.
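
A minimal sketch of the append-only lineage hop plus the stale-version (If-Match) check described in the criteria below; the shapes and the conflict error are illustrative assumptions.

```typescript
// Illustrative append-only assignment lineage with optimistic concurrency.
interface AssignmentHop {
  previousAssigneeId: string | null;
  newAssigneeId: string;
  actorId: string;
  source: 'UI' | 'API' | 'Auto';
  justification?: string;
  violatedRules: string[];
  timestamp: string; // ISO 8601, UTC
}

interface JobAssignment {
  jobId: string;
  assigneeId: string | null;
  version: number;
  lineage: readonly AssignmentHop[]; // write-once, append-only
}

// Reassign with an If-Match version check; a stale version maps to 409 Conflict.
function reassign(
  job: JobAssignment,
  ifMatchVersion: number,
  hop: AssignmentHop,
): JobAssignment {
  if (ifMatchVersion !== job.version) {
    throw new Error('409 Conflict: stale If-Match version, no state change');
  }
  return {
    ...job,
    assigneeId: hop.newAssigneeId,
    version: job.version + 1,
    lineage: [...job.lineage, hop], // prior hops are never mutated
  };
}
```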

Acceptance Criteria
Single Job Reassignment with Skill Violation Warning and Justification
Given a triaged job requiring skill tags ["steep"] scheduled within a 2-hour window and assigned to Estimator A And an authorized user with role "Dispatcher" is on the job details screen When the user selects Estimator B who lacks "steep" skill and clicks Reassign Then the system performs real-time checks within 2 seconds and displays a non-blocking warning listing violated rules (missing skill: steep; SLA risk: none; capacity: within limit) And the Reassign action is disabled until the user enters a justification of at least 15 characters When the user submits with justification and confirms Then the job is reassigned to Estimator B And an audit record is created capturing who, when (UTC timestamp), previous assignee, new assignee, justification, violated rules with rule inputs, and request source (UI) And the assignment lineage for the job shows the new hop appended with immutable fields
Hard Block on Double-Booking Prevents Override
Given Drone Operator C has an existing booking covering 10:00–12:00 on 2025-09-05 And Job J is scheduled 10:30–11:30 on 2025-09-05 And the user has "Admin" role When the user attempts to assign Job J to Drone Operator C Then the system detects a time window overlap and returns a blocking error within 2 seconds And the assignment is not changed And the UI/API does not offer an override option for double-booking violations And an audit entry is recorded for the failed attempt (who, when, reason: double-booking, previous assignee, attempted assignee, rule inputs, source)
Job Lock to Shield from Auto-Balancer
Given Job K is assigned to Estimator D And the user with role "Dispatcher" clicks Lock Assignment and provides justification When the lock is applied Then the job's assignment status is set to Locked and visible in UI and API And the auto-balancer must not move Job K while locked And any automated reassignment attempts during lock are skipped and logged And an audit record is created containing lock actor, timestamp, justification, lock scope, and TTL if any When the user unlocks the job Then the lock is removed and audit records updated with unlock actor and timestamp And the auto-balancer may include the job in future cycles
Bulk Rebalance During Weather Hold with Undo
Given 50 active field jobs in region R scheduled for today with status "Ready to fly" And a weather hold is declared for region R And a user with "Dispatcher" role selects all 50 jobs and chooses Bulk Reassign -> Hold Queue When the user provides a batch justification and confirms Then the system processes the batch atomically; either all 50 jobs move to Hold Queue or none do And each job receives an audit entry capturing batch operation id, who, when, previous assignee, new queue or assignee, justification, and rule inputs And the operation completes within 60 seconds and returns a success summary (moved: 50, failed: 0) And an Undo option is presented for 10 minutes; invoking Undo reverts all 50 jobs to their prior assignments and creates reversal audit entries
API Reassignment with Version Check and Idempotency
Given an API client with scope assign:write and idempotency key K issues PUT /jobs/{id}/assignment with payload {assigneeId, justification, ifMatch: version} And the job's current version matches the provided If-Match header When the request is processed Then the system performs conflict checks and returns 200 with body containing assignmentId, auditId, newVersion, and violatedRules (empty if none) And the same request retried with the same idempotency key within 24 hours returns 200 with the original response (no duplicate audit entries) When the If-Match version is stale Then the system returns 409 Conflict with a problem detail body and no state change And an audit entry is created for successful changes only
Assignment Lineage Visibility and Completeness
Given a job that has been reassigned three times When a user with "Viewer" role opens the Assignment History panel or calls GET /jobs/{id}/assignment-history Then the lineage displays an ordered list of all assignment hops with fields: previousAssigneeId, newAssigneeId, actorId, actorRole, source (UI/API/Auto), justification, violatedRules with rule inputs, timestamp (UTC), and correlation or batch id And records are immutable and tamper-evident (write-once with append-only semantics) And the API responds within 1 second for up to 100 hops
Simultaneous Reassignment Attempts Resolve with Conflict
Given two authorized users open Job M concurrently And both attempt to reassign to different assignees within 5 seconds When the first update is committed Then the second attempt receives a 409 Conflict (API) or inline conflict message (UI) prompting refresh And only one audit record exists for the successful change; the failed attempt logs an audit trail entry marked as rejected with reason: version conflict And the job reflects the first successful assignment only
Notifications with Accept/Decline and Auto-Reassign
"As an estimator or operator, I want to quickly accept or decline assigned jobs so that the schedule stays accurate and work can be reallocated without delay."
Description

Send real-time notifications to assignees with job details, scheduled window, location, and required skills via mobile/app/email. Allow assignees to accept or decline within a configurable response window; on decline or timeout, automatically reassign to the next best candidate per balancer rules. Sync accepted jobs to personal calendars and record acknowledgments for SLA verification. Provide throttling and quiet hours to avoid notification fatigue.
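
The accept/decline flow implies a small per-notification state machine; this sketch uses assumed state names, and the "first response wins" idempotency follows the criteria below.

```typescript
// Assumed lifecycle states for one assignment notification.
type NotificationState = 'sent' | 'accepted' | 'declined' | 'timed_out';

// First response wins; anything after leaves the state unchanged (idempotent).
// A null action means the configured response window expired.
function resolveResponse(
  state: NotificationState,
  action: 'accept' | 'decline' | null,
): NotificationState {
  if (state !== 'sent') return state;
  if (action === 'accept') return 'accepted';
  if (action === 'decline') return 'declined';
  return 'timed_out'; // balancer then reassigns to the next best candidate
}
```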

Acceptance Criteria
Real-time multi-channel notification delivery and payload
Given a job is assigned by the Capacity Balancer to a user with mobile push, in-app, and/or email enabled When the assignment is created Then the system sends notifications on all enabled channels within 10 seconds And each notification includes job ID, job type, address, map/deep link to location, scheduled time window, required skill tags, and Accept/Decline actions And notification send results are logged with channel, timestamp, and delivery status
Accept within response window updates assignment and calendar
Given a user receives a job notification with an active response window configured at the org level When the user selects Accept from any channel within the response window Then the job is confirmed assigned to that user and removed from other candidates’ queues And an acknowledgment is recorded with user ID, job ID, timestamp, channel, and device identifier And a calendar event is created on the user’s connected calendar within 30 seconds including title, time window, address, and job link/notes And subsequent Accept/Decline attempts for the same job by any user are rejected with an idempotent response and audit log entry
Decline triggers auto‑reassign to next best candidate
Given a user receives a job notification When the user selects Decline and optionally provides a reason Then the system re-evaluates candidates using Capacity Balancer rules (skills match, live capacity, time window, proximity) And assigns the job to the next best candidate within 60 seconds And sends notifications to the new assignee per channel preferences And records the decline, reason, reassigned-to user, and timestamps for SLA reporting And the decliner does not receive further notifications for that job
Timeout auto‑reassign on no response
Given a user is notified of an assignment with an active response window When the response window expires without an Accept or Decline Then the system marks the attempt as Timed Out And automatically reassigns the job using Capacity Balancer rules within 60 seconds And sends notifications to the newly selected candidate And records timeout details (assignee, sent time, expiry time, reassignment target, timestamps) for SLA verification
Quiet hours and throttling compliance
Given a user has configured quiet hours and a per-minute notification throttle When a notification would be sent during quiet hours Then the system defers the notification until quiet hours end And the response window countdown begins when the first notification is actually delivered to the user And when more than the throttle limit of notifications are pending, the system sends at most the configured number per minute in FIFO order And all deferrals and releases are logged with timestamps for audit
Calendar sync creation, updates, and removals
Given a job has been accepted and a calendar event has been created When the job is rescheduled or reassigned to a different user Then the original user’s calendar event is updated or removed within 60 seconds And the new assignee’s calendar receives a created/updated event within 60 seconds of reassignment And duplicate events are not created across repeated updates And calendar sync failures are retried up to 3 times and surfaced in system logs
Audit trail for SLA verification and reporting
Given any notification lifecycle occurs (sent, delivered, accepted, declined, timed out, reassigned) When the flow completes Then the system stores an immutable audit record containing job ID, user IDs, action type, timestamps (sent, delivered, acted), channel(s), device identifier (if available), and reassignment chain And audit records are searchable by job ID, user, date range, and action type And audit records are retained for at least 365 days and are exportable to CSV
Exception Handling & Escalations
"As an operations supervisor, I want clear exception alerts with recommended next steps so that I can resolve bottlenecks before SLAs are breached."
Description

Detect unassignable jobs (no eligible resource, capacity exhausted, weather holds), scheduling conflicts, and SLA-at-risk items. Route them to an exceptions queue with reason codes, recommended actions (relax constraints, extend windows, engage subcontractors, split tasks), and prioritization. Trigger escalations to managers via alerts and webhooks, and track resolution status and time-to-clear for operational reporting.
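
A sketch of the exception record and the queue ordering this describes (priority first, then time-to-breach ascending when present, then detection time); the shapes are assumptions aligned with the criteria below.

```typescript
// Reason codes and fields taken from the acceptance criteria below; shapes are assumed.
type ReasonCode =
  | 'NO_ELIGIBLE_RESOURCE'
  | 'CAPACITY_EXHAUSTED'
  | 'WEATHER_HOLD'
  | 'SCHEDULING_CONFLICT'
  | 'SLA_AT_RISK';

interface ExceptionRecord {
  exceptionId: string;
  jobId: string;
  reasonCodes: ReasonCode[];
  priority: 'High' | 'Medium' | 'Low';
  detectedAt: Date;
  timeToBreach?: number; // minutes; present for SLA_AT_RISK
}

const PRIORITY_RANK: Record<ExceptionRecord['priority'], number> = {
  High: 0,
  Medium: 1,
  Low: 2,
};

// Queue order: High > Medium > Low, then timeToBreach ascending (missing sorts last),
// then detectedAt ascending.
function sortExceptionsQueue(queue: ExceptionRecord[]): ExceptionRecord[] {
  return [...queue].sort(
    (a, b) =>
      PRIORITY_RANK[a.priority] - PRIORITY_RANK[b.priority] ||
      (a.timeToBreach ?? Infinity) - (b.timeToBreach ?? Infinity) ||
      a.detectedAt.getTime() - b.detectedAt.getTime(),
  );
}
```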

Acceptance Criteria
Detect Unassignable Job and Create Exception
Given a job J has at least one of: no eligible resource matching required skill tags, all eligible resources' capacity is exhausted for J's time window, or J's service area is under active weather hold When the Capacity Balancer runs auto-assignment Then job J is not auto-assigned And an exception E is created with reasonCodes including one or more of [NO_ELIGIBLE_RESOURCE, CAPACITY_EXHAUSTED, WEATHER_HOLD] And E includes fields: exceptionId, jobId, detectedAt (UTC), priority (default=Medium), reasonCodes[], recommendedActions[], state='Open' And E appears in the Exceptions Queue within 10 seconds of detection And job J status is updated to 'Exception' with a link to E
Scheduling Conflict Exception Detection
Given a job J is assigned to resource R And J's scheduled window overlaps another job assigned to R or violates the minimum travel buffer or time-window constraints When the scheduling check runs on create or update of J or R's schedule Then an exception E is created with reasonCodes=[SCHEDULING_CONFLICT] And E references conflicting jobIds and resourceId And E appears in the Exceptions Queue within 10 seconds And job J is flagged 'Conflict' until resolved
SLA-at-Risk Identification and Prioritization
Given an SLA target time-to-complete for job type T is configured as S hours from job creation And job J has not started or is unassigned And the projected completion time exceeds S by <= 2 hours or the time-to-assign exceeds 80% of S When the SLA monitor runs every 5 minutes Then an exception E is created with reasonCodes=[SLA_AT_RISK] And E.priority='High' And E includes breachETA and timeToBreach (minutes) And E appears in the Exceptions Queue within 10 seconds
Exceptions Queue Display, Recommendations, and Sorting
Given there are one or more open exceptions When the Exceptions Queue is viewed Then each exception displays reasonCodes, priority, detectedAt, jobId, and a list of recommendedActions from {RELAX_CONSTRAINTS, EXTEND_TIME_WINDOW, ENGAGE_SUBCONTRACTOR, SPLIT_TASK} And each recommendedAction includes actionCode, description, parameters (if any), and estimatedImpact metrics (e.g., eligibleResourcesDelta, completionETAChange minutes) And the queue is sorted by priority (High > Medium > Low), then by timeToBreach ascending if present, else by detectedAt ascending And a badge shows the total number of open exceptions
Manager Escalation via Alerts and Webhooks
Given an exception E is created with reasonCodes including SLA_AT_RISK or E.priority='High' or reasonCodes includes WEATHER_HOLD with detectedAt older than 2 hours When E is created Then an alert is sent to the designated manager channels (in-app and email if configured) within 30 seconds And a webhook POST is sent to the configured endpoint within 30 seconds with payload: exceptionId, jobId, reasonCodes, priority, recommendedActions, detectedAt, tenantId, signature, idempotencyKey And webhook delivery retries up to 5 times with exponential backoff on non-2xx responses And duplicate webhook deliveries carry the same idempotencyKey
Exception Resolution Workflow and Time-to-Clear Tracking
Given an open exception E exists for job J When a user applies a recommended action (e.g., RELAX_CONSTRAINTS) or manually resolves the underlying issue and clicks 'Mark Resolved' Then E.state transitions to 'Cleared' And E records resolutionCode, resolvedBy, resolvedAt (UTC), and timeToClear (minutes) And job J's status is updated accordingly (e.g., Assigned or Scheduled) And E appears in operational reports and APIs for the selected period with its timeToClear value

SLA Predictor

Shows a per-address probability of meeting the due time based on queue length, route ETA, daylight, and weather. Flags at‑risk jobs early and recommends actions (reassign, bundle, rush) to protect SLAs and customer promises.

Requirements

Real-time SLA Probability Engine
"As a dispatcher, I want to see a real-time probability that each job will meet its due time so that I can prioritize and sequence work to protect SLAs."
Description

Compute a per-address probability of meeting the due time by combining queue length, live route ETA, daylight windows (sunrise/sunset and civil twilight), and granular weather forecasts (precipitation, wind, temperature) with historical job durations by roof type, pitch, and crew performance. The engine produces a calibrated probability (0–1), confidence band, and risk tier, updating automatically when any input changes. It is timezone-aware, resilient to missing data via priors and fallbacks, and persists predictions with timestamps for traceability. Exposes internal thresholds for flagging, supports batch scoring for daily schedules, and meets performance targets (<200 ms per job, horizontal scaling). Integrates with RoofLens scheduling and job objects via service-to-service APIs.
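
The description leaves the actual model unspecified, so the sketch below only illustrates the output contract (calibrated probability, confidence band, risk tier) with a placeholder logistic score; the feature set, weights, and tier cutoffs are invented for illustration and would be replaced by a trained, calibrated model.

```typescript
// Illustrative only: a placeholder logistic score standing in for the real model.
interface SlaFeatures {
  queueLength: number;           // jobs ahead in the queue
  routeEtaMin: number;           // live route ETA to the address
  daylightRemainingMin: number;  // minutes until civil twilight ends
  precipProbability: number;     // 0–1
  historicalDurationMin: number; // modeled duration by roof type/pitch/crew
}

interface SlaPrediction {
  probability: number; // calibrated, in [0, 1]
  ciLow: number;
  ciHigh: number;
  riskTier: 'Low' | 'Medium' | 'High' | 'Critical';
}

function predict(f: SlaFeatures, minutesToDue: number): SlaPrediction {
  // Slack before the due time after queueing, driving, and working the roof.
  const slackMin =
    minutesToDue - (f.routeEtaMin + f.historicalDurationMin + 10 * f.queueLength);
  const z = 0.02 * slackMin + 0.005 * f.daylightRemainingMin - 2 * f.precipProbability;
  const p = 1 / (1 + Math.exp(-z)); // logistic squash to [0, 1]
  const halfWidth = 0.1;            // placeholder confidence band
  const riskTier: SlaPrediction['riskTier'] =
    p >= 0.9 ? 'Low' : p >= 0.7 ? 'Medium' : p >= 0.4 ? 'High' : 'Critical';
  return {
    probability: p,
    ciLow: Math.max(0, p - halfWidth),
    ciHigh: Math.min(1, p + halfWidth),
    riskTier,
  };
}
```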

Acceptance Criteria
Real-time Probability Output and Calibration
Given a scheduled roofing job with complete inputs (queue length, live route ETA, daylight window, granular weather forecast, historical durations by roof type/pitch/crew) When the SLA Probability Engine scores the job Then it returns fields: probability in [0,1], ci_low in [0,1], ci_high in [0,1] with ci_low ≤ probability ≤ ci_high, and risk_tier in {Low, Medium, High, Critical} based on current thresholds And probability calibration on the environment’s holdout set (≥1,000 jobs) achieves Expected Calibration Error ≤ 0.05 or Brier Score ≤ 0.18 And all numeric outputs use ISO units and are rounded to ≤3 decimal places without truncation
Automatic Re-scoring on Input Changes
Given an existing prediction for a job When any upstream input changes (queue length, route ETA, daylight window, weather forecast, or historical duration parameters) Then the job is re-scored within 10 seconds and a new prediction is persisted with a fresh timestamp and incremented prediction_version And the previous prediction remains queryable for at least 30 days And a prediction.updated event is emitted to the scheduling service with job_id, old_version, new_version, and deltas (probability, risk_tier)
Timezone and Daylight Handling
Given a job address in any timezone, including dates with DST transitions When the engine computes daylight windows Then sunrise, sunset, and civil twilight for the job’s geocoordinates and service date are used from an authoritative source and interpreted in the local timezone And route ETAs and due_time are normalized to the job’s local timezone before scoring And predictions within ±3 hours of DST changes reflect correct local offsets with no negative or duplicated local times
Missing Data Fallbacks and Confidence Behavior
Given inputs with missing or stale data When weather data is unavailable Then the engine uses a documented prior (location/seasonal climatology) and marks prediction.degraded=false if all other major inputs are present When historical duration data for the roof type/pitch/crew is missing Then the engine backs off to broader cohort priors and increases confidence interval width by at least 20% relative to baseline When two or more of {route ETA, weather, historical durations} are missing or stale (>30 minutes for ETA/weather, >90 days for historical) Then prediction.degraded=true is set and a fallback_reason list is persisted while still returning a probability and confidence band
Batch Scoring and Performance
Given a daily schedule batch of up to 5,000 jobs When the batch scoring API is invoked Then the service processes the batch with per-job latency P95 ≤ 200 ms and overall completion ≤ 5 minutes at 4 replicas And the service supports idempotent retries (same batch_id produces the same results without duplicates) And batch results include per-job status, error (if any), and prediction metadata (model_version, thresholds_version, timestamps)
API Integration and Persistence/Traceability
Given valid service-to-service authentication When clients call POST /v1/sla/score (single) or POST /v1/sla/score:batch (batch) Then the API responds 200 with a defined schema including job_id, probability, ci_low, ci_high, risk_tier, model_version, thresholds_version, created_at (ISO 8601 with timezone), and inputs_hash And predictions are persisted with an immutable record of input snapshots or hashes, model_version, thresholds_version, and service build commit for traceability And GET /v1/sla/predictions?job_id returns the latest prediction, with optional version and time-range filters to retrieve history
Flagging and Threshold Management
Given configurable thresholds for risk tiers and at-risk flags When an authorized client updates thresholds via PUT /v1/sla/thresholds Then the new thresholds_version is stored and audit-logged with actor, before/after values, and timestamp And all subsequent scores use the new thresholds within 60 seconds without service restart And existing persisted predictions retain their original thresholds_version and are not mutated And an at_risk flag is set whenever probability < at_risk_threshold, and an event is emitted for downstream consumers
Predictor Data Integrations
"As an operations manager, I want reliable integrations that feed routing, weather, and daylight into the predictor so that the predictions stay accurate without manual data entry."
Description

Establish secure, monitored integrations for all predictor inputs: routing ETA (existing dispatch/route service), queue length (job pipeline), weather (hourly/nowcast by lat/long), and daylight (sunrise/sunset calculations). Includes geocoding addresses to coordinates, timezone resolution, caching and refresh schedules, retries with exponential backoff, rate limiting, and fallbacks when upstreams degrade. Normalize units and schemas, validate payloads, and store a local, short-lived cache to reduce latency/cost. Manage credentials via secrets storage and provide observability (health checks, metrics, alerts) to ensure continuous, accurate data flow into the SLA Predictor.
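
Two building blocks named here, retry with exponential backoff plus jitter and a short-lived TTL cache, might look like the following sketch; the parameter values echo the criteria below (base 200 ms, factor 2, up to 3 attempts) and the helper names are assumptions.

```typescript
// Retry with exponential backoff and jitter (assumed policy: base 200 ms, factor 2).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 200,
  factor = 2,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const delayMs = baseMs * factor ** i * (0.5 + Math.random() * 0.5); // jitter
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // caller then falls back to cache or marks the input degraded
}

// Minimal per-integration TTL cache (e.g. weather 10 min, routing ETA 15 min).
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();
  constructor(private readonly ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    return hit && hit.expiresAt > Date.now() ? hit.value : undefined;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```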

Acceptance Criteria
Address Geocoding and Timezone Resolution
Given a canonical postal address When the system geocodes the address Then it returns latitude and longitude with at least 5 decimal places and geocoding_accuracy of rooftop or interpolated with confidence >= 0.8 And it returns an IANA timezone ID for the coordinates And p95 end-to-end latency is <= 1000 ms for warm cache and <= 2000 ms for cold cache And results are cached for 24 hours using a normalized_address cache key And on primary provider 5xx/timeout > 2s/HTTP 429, the system retries with exponential backoff (base 200 ms, factor 2, jitter) up to 3 attempts, then fails over to a secondary provider And if post-failover confidence < 0.8, the result is marked unresolved and coordinates/timezone are not persisted
Routing ETA Integration with Retries and Rate Limits
Given origin and destination coordinates with a planned departure timestamp When requesting ETA from the dispatch/route service Then the response is validated against schema v1 and normalized to duration_minutes (integer) and arrival_time (ISO 8601 with destination timezone offset) And p95 latency is <= 1500 ms for warm cache and <= 2500 ms for cold cache And provider rate limiting is enforced at 10 requests/second per API key with a token bucket (burst 100) And on HTTP 429/5xx/timeout > 2s, the client retries with exponential backoff (base 250 ms, factor 2, max 3 retries, jitter) And on persistent failure, a cached ETA not older than 15 minutes is returned with data_freshness = "stale"; otherwise the result is marked unavailable without blocking the caller
Weather Nowcast/Hourly Integration and Normalization
Given latitude/longitude and a time window from now to the job's ETA When retrieving weather data Then required fields are present and normalized: precipitation_probability (0–100%), precipitation_intensity (mm/h), wind_speed (m/s), wind_gust (m/s), temperature (°C), condition_code (enum) And the series includes nowcast for T+0 to +60 minutes when available, otherwise hourly values, all timestamps in ISO 8601 with timezone offset And responses are cached for 10 minutes; p95 latency <= 2000 ms cold, <= 1000 ms warm And if the primary provider returns 4xx/5xx/timeout > 2s or missing required fields, the system fails over to a secondary provider; if still incomplete, the result is marked partial with confidence < 1.0 And outlier values beyond 5 standard deviations of a 24h rolling baseline are discarded
Daylight Computation for ETA Window
Given coordinates and an ETA timestamp in the destination timezone When computing sun events Then sunrise and sunset times are produced for the date along with civil_twilight_start and civil_twilight_end (ISO 8601) And in_daylight is computed for ETA and for the ETA ± 30 minute window And results differ from a trusted reference dataset by <= 60 seconds for a test set of 100 locations/dates, including DST transitions And polar day/night edge cases return in_daylight = true for continuous day or false for continuous night, and next_sunrise/next_sunset within 365 days And computation completes within 50 ms p95
Queue Length Ingestion from Job Pipeline
Given active crews and regional assignment rules When ingesting pipeline data Then queue_length per crew reflects counts in states {scheduled, en_route, in_progress} and excludes {canceled, completed} And records include crew_id, region_id, queue_length (integer), last_updated (ISO 8601), and data_source And ingestion runs every 5 minutes; p95 ingestion cycle <= 1000 ms; pagination and idempotency are handled correctly And if crew assignment is missing, counts roll up to region; if upstream is unavailable for 2 consecutive runs, last known values are used if age <= 15 minutes, else queue_length is set to null and freshness = "stale" And payloads are validated against schema v2; invalid records are dropped and counted in metrics
Local Short-Lived Cache and Fallback Behavior
Given predictor input requests to external providers When caching is applied Then cache keys are namespaced per integration and environment and normalized to avoid duplicates And TTLs are: geocoding 24h, routing ETA 15m, weather 10m, queue length 5m, daylight 7d And stale-while-revalidate is enabled for up to 2x TTL with background refresh and concurrency coalescing And cache hit ratio maintains >= 40% over a 24h rolling window; exceeding max memory triggers LRU evictions without request failures And on upstream degradation, cached data within TTL or stale window is served; otherwise responses include degraded = true without blocking the caller
Observability, Alerts, and Secrets Management
Given integrations are deployed in production When monitoring and security controls operate Then each integration exposes a /health endpoint reporting status, dependencies, last_success timestamp, and build version And metrics emitted include request_count, success_count, error_rate, p50/p95/p99_latency, cache_hit_ratio, retry_attempts, and rate_limit_throttles And alerts fire within 5 minutes when error_rate > 5% (5m), p95_latency > 3s (5m), no successful call for 15 minutes, cache_hit_ratio < 40% (30m), or a secret will expire in < 14 days And credentials are stored in a managed secrets service, rotated at least every 90 days, access controlled via RBAC, and never logged or committed to code/config And audit logs record secret access and configuration changes; security and contract tests must pass 100% in CI before deployment
At-Risk Job Flagging & Alerts
"As a scheduler, I want at-risk jobs to be automatically flagged and alerted so that I can intervene before we miss customer commitments."
Description

Automatically flag jobs whose predicted on-time probability drops below configurable thresholds or whose daylight or weather constraints imply likely breaches. Display risk badges in job lists and job detail views, and emit real-time alerts through in-app notifications, email/SMS, and webhooks. Support organization- and job-type-specific thresholds, deduplication and cooldown windows, escalation rules, snooze/acknowledge actions, and auditing of the alert lifecycle. Enable bulk views and filters for "At Risk Today/This Week" to streamline daily standups and route reviews.

Acceptance Criteria
Threshold-Based Risk Flagging and Badges
Given organization- and job-type-specific SLA risk thresholds are configured And a job has a current predicted on-time probability And daylight and weather constraints are evaluated for the job’s address and ETA in the job’s local timezone When the probability falls below the applicable threshold OR daylight/weather constraints imply a likely breach Then the job is marked At Risk with severity Warning (<= warning threshold) or Critical (<= critical threshold or hard constraint) And a risk badge with severity appears in the jobs list row and the job detail header And the badge and severity update within 60 seconds of the latest model or constraint update And when the probability rises above thresholds and constraints no longer imply a breach Then the At Risk state and badges are removed within 60 seconds and the cleared state is recorded
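The badge severity logic above amounts to a small pure function. Threshold values are org- and job-type configuration, so everything here is illustrative.

```python
def classify_risk(on_time_probability: float, warning_threshold: float,
                  critical_threshold: float,
                  hard_constraint_breach: bool) -> str | None:
    """Map probability and constraint state to a badge severity (None = not at risk)."""
    if hard_constraint_breach or on_time_probability <= critical_threshold:
        return "critical"
    if on_time_probability <= warning_threshold:
        return "warning"
    return None
```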
Real-Time Multi-Channel Alert Emission with Dedup and Cooldown
Given in-app, email/SMS, and webhook channels are enabled and recipients are configured for the organization And a per-job cooldown window is configured (default 30 minutes) and deduplication keys are defined as (job_id, risk_reason, severity) When a job first becomes At Risk or its severity increases Then an in-app notification is created immediately and email/SMS and webhook alerts are dispatched within 30 seconds And each alert payload contains job_id, address, due_time, predicted_on_time_probability (0–100%), risk_reasons, severity, threshold_used, recommended_actions, and a deep link to the job And webhook deliveries include an idempotency key and HMAC-SHA256 signature using the org’s secret and are retried up to 3 times over 10 minutes until a 2xx is received And alerts with the same deduplication key within the cooldown window are suppressed and a suppression event is added to the audit log
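A sketch of the signing and deduplication described above: HMAC-SHA256 over the raw body with the org's secret, and a deterministic idempotency key derived from the (job_id, risk_reason, severity) tuple. The header names are hypothetical.

```python
import hashlib
import hmac
import json

def sign_webhook(body: bytes, org_secret: bytes) -> str:
    return hmac.new(org_secret, body, hashlib.sha256).hexdigest()

def idempotency_key(job_id: str, risk_reason: str, severity: str) -> str:
    return hashlib.sha256(f"{job_id}:{risk_reason}:{severity}".encode()).hexdigest()

payload = {"job_id": "J-1001", "severity": "critical", "risk_reasons": ["weather"]}
body = json.dumps(payload, separators=(",", ":")).encode()
headers = {
    "X-RoofLens-Signature": sign_webhook(body, b"org-webhook-secret"),
    "Idempotency-Key": idempotency_key("J-1001", "weather", "critical"),
}
```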
Snooze and Acknowledge Controls
Given a user with Alert Manage permission views an at-risk job When the user snoozes the alert for a selected duration (15m, 30m, 60m, custom up to 24h) Then outbound alerts for that job and risk reason are suppressed during the snooze period and the job badge shows Snoozed with remaining time When the user acknowledges the alert Then duplicate alerts for the same reason and severity are suppressed for 2 hours unless severity increases or a new risk reason is added And all snooze, unsnooze, and acknowledge actions reflect in the UI within 2 seconds and are written to the audit trail within 5 seconds
Escalation Policies and Business-Hour Respect
Given the organization defines an escalation policy with targets, channels, and timeouts (e.g., escalate after 15 minutes unacknowledged; immediately if Critical within 2 hours of due time) And business hours and holiday calendars are configured When an at-risk alert remains unacknowledged beyond its configured timeout OR a job becomes Critical within 2 hours of due time Then the alert is escalated to the next target via configured channels and marked Escalated And escalations are not sent during non-business hours unless the policy’s after-hours override is enabled And escalations do not fire while the alert is snoozed unless the policy’s snooze-override is enabled And each escalation event records target, channel, timestamp, and prior notification history in the audit log
Bulk At-Risk Views, Filters, and Actions
Given the user navigates to At Risk Today or At Risk This Week When filters (job type, crew, region, severity) and sort order are applied Then the list shows only jobs currently At Risk whose due date/time falls within the selected horizon in the user’s timezone And severity counts (Warning, Critical) display and update with filters And the first page (up to 100 rows) loads within 3 seconds for datasets up to 2,000 at-risk jobs, with pagination available And the user can multi-select jobs to assign/reassign, bundle, acknowledge, or export CSV And the CSV export includes columns: job_id, address, due_time, probability, severity, risk_reasons, threshold_used, last_updated
Alert Lifecycle Auditability
Given alert lifecycle auditing is enabled by default When any event occurs (detected, severity_changed, alert_sent, delivery_failed, suppressed, snoozed, unsnoozed, acknowledged, escalated, resolved) Then an immutable audit record is created within 5 seconds containing org_id, job_id, event_type, risk_reasons, old_state, new_state, actor (user_id/system), channel, timestamp (ms), delivery_id (if applicable), response_code (if applicable) And audit entries are queryable via UI and API by org_id, job_id, date range, event_type, and severity And audit data is retained for at least 12 months and exportable as CSV within 30 seconds for up to 10,000 records
Action Recommendations & What-Ifs
"As a dispatcher, I want actionable recommendations with projected impact so that I can choose the most effective mitigation with minimal disruption."
Description

Generate ranked mitigation options when a job is at risk, such as reassigning to a nearer crew, bundling with nearby stops, adjusting sequence, or marking for rush. For each recommendation, estimate uplift to on-time probability, expected cost/impact, and spillover effects on other jobs. Provide one-click actions or deep links to dispatch tools, plus sandboxed "what-if" simulations to compare scenarios before committing. Respect operational constraints (crew skills, capacity, service hours, daylight, weather windows) and explain assumptions used in each recommendation.

Acceptance Criteria
Ranked Recommendations Display for At-Risk Job
Given a job J with current on-time probability P below the organization’s SLA threshold T When the user opens the SLA Predictor panel for J Then the system generates recommendations within 5 seconds And the list is sorted in descending order of expected on-time uplift (ΔP) And each recommendation displays its action type (reassign/bundle/sequence/rush) and rank number And if fewer than 3 valid options exist, an "Insufficient options" state is shown alongside any available options And ties in uplift (within 0.5 percentage points) are broken by lower estimated cost, then by lower spillover loss, then by lexicographic order of action type to ensure deterministic ranking
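One deterministic ordering consistent with the tie-break rule above. Because "within 0.5 percentage points" is not transitive, this sketch approximates it by bucketing uplift into 0.5 pp bands before applying the cost, spillover, and action-type tie-breakers.

```python
def rank_key(rec: dict) -> tuple:
    uplift_bucket = round(rec["uplift_pp"] / 0.5)  # 0.5 pp bands
    return (-uplift_bucket, rec["cost"], rec["spillover_loss"], rec["action_type"])

recs = [
    {"uplift_pp": 12.3, "cost": 40.0, "spillover_loss": 1.0, "action_type": "reassign"},
    {"uplift_pp": 12.4, "cost": 25.0, "spillover_loss": 0.5, "action_type": "bundle"},
]
recs.sort(key=rank_key)  # both land in the same band, so the cheaper option ranks first
```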
Impact Estimation on Uplift, Cost, and Spillover
Given any displayed recommendation R for job J with baseline probability P When R is rendered Then it shows predicted probability after R (P'), uplift ΔP = P' − P (percentage points), incremental cost (org currency), route time change (minutes), and a list of impacted jobs with each ΔP_i And probabilities are rounded to 0.1 percentage points, costs to 2 decimals, and times to whole minutes And a confidence band is shown for P' (or "N/A" if unavailable) with tooltip definitions And when R is applied in the sandbox, simulated P', cost, and time match displayed estimates within ±1.0 pp for probability and ±5% for cost/time And clicking an impacted job opens its detail with pre-filter highlighting the spillover change
Constraint-Aware Filtering of Recommendations
Given crew skills, capacity limits, service hours, daylight bounds for the address/date, weather windows, and route legality constraints are configured When recommendations are generated for job J Then options violating any constraint are excluded from the display And each displayed option shows a "Constraints satisfied" indicator with the key constraints it depends on (e.g., Skill: Drone Pilot L2; Capacity: 1 slot free; Daylight: within window) And if no valid options remain, a "No valid recommendations" message is shown with the top three blocking constraints And attempting to commit a violating option is blocked with a descriptive error referencing the failed constraint(s)
One-Click Actions and Deep Links Execution
Given a displayed recommendation R When the user clicks Apply and confirms Then the corresponding dispatch change is executed and the live plan recalculates within 10 seconds And job J and all impacted jobs have their probabilities refreshed and visible in the UI And a success/failure toast is shown with a link to the activity log And if the user lacks permission or an external dispatch tool is the system of record, a deep link is presented that opens the target with job ID, crew ID, and sequence parameters prefilled And upon receiving the external callback/webhook, the system syncs state and marks R as Applied in the job timeline
What-If Simulation Sandbox with Scenario Comparison and Commit
Given an at-risk job J When the user creates a sandbox scenario Then they can adjust crew assignment, sequence position, bundling of nearby stops (within a configurable radius/time), and rush flag And the system computes P', ΔP, incremental cost, route time change, and spillover impacts within 5 seconds per scenario And up to 5 scenarios can be compared side-by-side with the best option highlighted by a configurable utility score (e.g., weight_uplift vs. weight_cost) And sandbox changes do not affect the live plan until the user clicks Commit on a selected scenario And Discard removes the scenario and reverts the view to baseline metrics And a shareable link or ID reproduces the scenario for another authorized user
Assumptions and Data Provenance Transparency
Given a displayed recommendation or sandbox scenario When the user expands Assumptions Then the panel lists: model version, data timestamps for queue length, route ETA, weather, daylight, source systems for each input, assumed average service time, traffic model, and fallback rules for missing data And each item shows its value and last-updated timestamp And if any input is older than the organization’s staleness threshold, a warning is shown with a Recompute action And clicking Explain provides a short rationale summarizing top drivers of the recommendation and key constraints considered
Safety Guardrails and Acknowledgements for Negative Spillover
Given a recommendation R causes any impacted job K to fall below SLA threshold T or to decrease by more than 5.0 percentage points When the user attempts to commit R Then a modal lists all negatively impacted jobs with current vs. projected probabilities and requires explicit acknowledgement before proceeding And the default modal action is Cancel And only users with Dispatcher (or higher) role can override and proceed And upon commit, an audit entry records the acknowledgement, user, timestamp, and before/after probabilities for J and all impacted jobs
Prediction Explainability & Audit Log
"As a compliance lead, I want to see why a prediction was made and maintain an audit trail so that I can explain decisions and resolve disputes."
Description

Surface clear reason codes and top contributing factors (e.g., weather severity, late ETA, limited daylight) for each prediction to build trust and aid decision-making. Record all predictions, threshold crossings, alerts, user overrides, and executed recommendations in an immutable audit log with timestamps and actors. Provide searchable views and export (CSV/API) with retention controls, redacting PII where not required. Include integrity checks and versioning for model/config changes to support dispute resolution and continuous improvement.

Acceptance Criteria
Explainable Prediction Panel per Job
Given a job with a computed SLA prediction When the job detail page is opened Then the UI displays the SLA-miss probability as a percentage with two decimal places And the top 5 contributing factors with signed percentage contributions that sum to within ±1% of total influence And human-readable reason codes mapped from contributing factors And raw input values for queue length, route ETA, daylight remaining, and weather severity index And ModelVersion and ConfigVersion identifiers And an InferenceTimestamp in UTC ISO 8601 with millisecond precision And a "Copy explanation" control copies a JSON payload containing probability, top factors, inputs, and version metadata to the clipboard And the explanation panel renders within 400 ms at P95 after job details load And all displayed values exactly match the backend explanation API for the prediction ID
At-Risk Flag and Action Recommendations
Given a configured SLA-miss threshold T and a computed prediction probability p for a job When p >= T Then an "At-Risk" badge is shown with severity color bands (Medium for [T..T+0.15), High for [T+0.15..1]) And a rationale sentence cites the top 2 contributing factors by name And exactly 3 recommended actions are listed with estimated uplift (%) and confidence band (Low/Med/High) And each recommendation provides an Execute action and a View impact details link And changing T in settings affects subsequent evaluations within 1 minute and displays the new ConfigVersion And any Execute or Dismiss action emits an audit log event with actor, timestamp, job identifiers, and recommendation id
Immutable Audit Log Event Coverage
Given the system processes predictions and user actions When any of the following occurs: prediction generated, threshold crossing evaluated, alert dispatched, user override saved, recommendation executed Then an audit event is appended for each occurrence containing EventId, EventType, TimestampUTC(ms), ActorId, ActorType, OrgId, JobId, AddressId, PredictionId (if applicable), ModelVersion, ConfigVersion, RequestId, TraceId, PreState, PostState And events are append-only; API requests to PATCH/DELETE an event are rejected with HTTP 405 and reason "immutable" And each event includes Hash and PrevHash (SHA-256) forming a verifiable chain And the integrity verification endpoint returns OK for an unbroken chain over any requested time range And event ingestion latency P95 is <= 1 second from event occurrence
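A sketch of the Hash/PrevHash chain named above: each event hash covers a canonical serialization of the event body plus the previous hash, so tampering with any record breaks verification from that point forward.

```python
import hashlib
import json

GENESIS = "0" * 64

def event_hash(event_body: dict, prev_hash: str) -> str:
    canonical = json.dumps(event_body, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canonical).encode()).hexdigest()

def verify_chain(events: list[dict]) -> bool:
    """Events in append order, each carrying Hash and PrevHash fields."""
    prev = GENESIS
    for event in events:
        body = {k: v for k, v in event.items() if k not in ("Hash", "PrevHash")}
        if event["PrevHash"] != prev or event["Hash"] != event_hash(body, prev):
            return False
        prev = event["Hash"]
    return True
```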
Audit Log Search and Filter UI/API
Given a user with AuditLog.View permission When they query by time range, OrgId, JobId/AddressId, EventType, ActorId, ModelVersion, ConfigVersion, and ThresholdCrossing flag Then the results include all and only matching events And results are sorted by TimestampUTC descending by default And the first page (size=100) renders within 2 seconds at P95 for up to 50k matching events And pagination, totalCount, and a Download CSV action are available in the UI And the API supports the same filters and returns JSON with items[], totalCount, limit, and offset
CSV and API Export with PII Redaction
Given a filtered audit log result set When the user exports to CSV without elevated scope Then PII fields (CustomerName, Email, Phone, StreetAddress) are redacted as "REDACTED" And the CSV contains a header row with canonical column names and exactly N data rows where N equals the result count And export time for up to 100k rows is <= 60 seconds And an ExportCreated audit event is emitted with actor, filter summary, row count, and pii=false When the user has scope audit:export:pii, role=Admin, and provides a required Justification text Then PII fields are unredacted in the export and the ExportCreated event records pii=true and the justification And the REST export endpoint returns content-type text/csv, supports gzip via Accept-Encoding, and preserves stable column order
Retention Rules and Legal Hold
Given org-level retention rules are configured per event type (e.g., Predictions=18 months, Alerts=36 months, Overrides=60 months) When the nightly retention job runs Then events older than the configured period are purged and a RetentionPurgeSummary event is appended including counts by type and time window And events tagged with a LegalHoldId are excluded from purge And admins can create, view, and release legal holds with optional expiry and scope (event types, job ids) And a Retention Simulation report shows would-be deleted counts without purging And purged data is no longer retrievable via UI or API And purge throughput is >= 50k events/min without causing API error rate to exceed 0.5%
Versioning and Reproducibility of Predictions
Given a model or configuration change is deployed When the change is activated Then new ModelVersion and ConfigVersion identifiers are generated with an effectiveFrom timestamp and a required ChangeLog entry (author, reason) And every new prediction references the active ModelVersion and ConfigVersion And for any PredictionId, the Replay API recomputes the probability using stored feature vector and versioned artifacts, matching the recorded value with absolute difference <= 0.001 And any mismatch emits a DriftDetected event with details And the ModelArtifact checksum (SHA-256) stored in the version record matches the stored artifact's checksum And the integrity check API lists all known versions with checksums and last-seen usage
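The artifact integrity check above, sketched as a streaming SHA-256 comparison; the version-record field name is illustrative.

```python
import hashlib

def artifact_checksum(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as artifact:
        for chunk in iter(lambda: artifact.read(1 << 20), b""):  # 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(version_record: dict, artifact_path: str) -> bool:
    return version_record["model_artifact_sha256"] == artifact_checksum(artifact_path)
```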
SLA Risk Dashboard & Job Detail UI
"As a branch manager, I want a clear dashboard and job view of SLA risk so that my team can monitor and act quickly across all active jobs."
Description

Deliver a responsive dashboard that lists all active jobs with their SLA probability, risk tier, route ETA, and daylight/weather overlays. Enable sorting, filtering (by crew, region, due time, risk level), bulk selection, and quick actions. The job detail view presents a timeline with key factors, confidence bands, recommended mitigations, and the ability to trigger actions. Ensure accessibility (WCAG 2.1 AA), internationalization for time/date formats, and accurate timezone handling across regions.

Acceptance Criteria
Ops Manager Reviews Portfolio Risk on Dashboard
Given active jobs exist across regions, When the user opens the dashboard, Then a table lists all active jobs with columns: Address, Crew, Region, Due Time (local), SLA Probability (%), Risk Tier (Low/Medium/High), Route ETA, Daylight Remaining (hh:mm), and Weather Risk (None/Moderate/Severe). Given data services are available, When the dashboard loads, Then a “Last updated” timestamp is visible and all displayed metrics are no more than 60 seconds old. Given daylight or weather data are unavailable for a job, When the row renders, Then a neutral indicator with a tooltip “Data unavailable” is shown and actions remain enabled. Given viewport widths from 320px to 1440px+, When rendering the dashboard, Then the layout is responsive with no horizontal scroll at 375px, sticky table header at ≥768px, and critical columns remain accessible via column priority or controlled horizontal scroll. Given a dataset of 200 active jobs, When loading the dashboard on a standard connection, Then first contentful paint ≤1.5s and table becomes interactive ≤2.5s.
Dispatcher Filters and Sorts Jobs to Identify At-Risk Work
Given active jobs are loaded, When the user applies filters Risk Tier = High AND Crew ∈ {A,B} AND Region = North AND Due By ≤ today 17:00 (job local time), Then only matching jobs are shown and the result count reflects the filtered set. Given a sort is selected (e.g., SLA Probability ascending), When applied, Then rows are ordered correctly and sorting completes in ≤300ms for up to 500 rows. Given filters and sort are active, When the page is refreshed or the URL is shared, Then the same filters and sort are restored from query parameters. Given filters are applied, When the user clicks “Clear all”, Then all filters reset and the full, unsorted list returns within 500ms. Given pagination or infinite scroll is enabled, When navigating pages with filters applied, Then the filter criteria persist across pages and the total filtered count remains accurate.
Coordinator Applies Quick Actions to Multiple At-Risk Jobs
Given a filtered list of at-risk jobs, When the user selects multiple rows, Then a bulk action bar appears with actions: Reassign, Bundle, Rush, and disabled actions show eligibility reasons via tooltip. Given the user has permission, When “Reassign” is executed and a new crew is chosen, Then all eligible jobs are updated, the affected rows show the new crew, and a confirmation toast summarizes successes and failures. Given some selected jobs are ineligible, When a bulk action runs, Then ineligible rows are skipped with inline reasons and the operation continues for eligible rows. Given a bulk action completes, When the dashboard refreshes, Then SLA Probability and Risk Tier recompute and reflect new values within 60 seconds. Given any bulk action is performed, When viewing the audit trail, Then an entry exists with actor, timestamp (UTC ISO 8601), action type, affected job IDs, and before/after fields. Given a network or service error occurs during an action, When the error is detected, Then the user sees a non-blocking error toast, can retry the failed subset, and no unintended partial state persists.
User Opens Job Detail to Evaluate SLA Risk and Mitigations
Given a job is selected from the dashboard, When opening its detail view, Then the header shows Address, Crew, Region, Due Time (job local), current SLA Probability (%), and Risk Tier. Given the detail view is loaded, When viewing the timeline, Then key factor events (dispatch, en route, checkpoints, weather alerts, daylight threshold) are plotted with timestamps, and the route ETA curve displays confidence bands (e.g., P50 and P90). Given recommendations are available, When viewing “Recommended Mitigations,” Then each item shows the action, estimated impact on SLA Probability (e.g., +12%), prerequisites, and an enabled “Trigger” control when eligible. Given a mitigation is triggered, When the action completes, Then the predicted SLA Probability updates to reflect the change within 60 seconds and the recommendation state updates (e.g., marked applied). Given the user returns to the dashboard, When navigating back, Then previous filters, sort order, and scroll position are preserved.
Screen Reader User Navigates Dashboard and Job Detail
Given keyboard-only navigation, When traversing dashboard and detail screens, Then all interactive elements are reachable in a logical order, have visible focus indicators, and no focus traps exist. Given assistive technology use, When reading SLA Probability, Risk Tier, Daylight, and Weather indicators, Then each has descriptive labels/ARIA roles (e.g., “SLA probability 72 percent, Medium risk”). Given color-based risk indicators, When color perception is limited, Then risk tiers are also conveyed via text/icons and meet contrast ratios ≥4.5:1 for text and ≥3:1 for large text/icons. Given live metric updates, When values change, Then ARIA live regions announce changes without shifting focus or causing unexpected context changes. Given filters and forms, When validation errors occur, Then errors are programmatically associated to inputs, announced by screen readers, and all controls have labels; touch targets are ≥44×44 px. Given automated accessibility testing, When running axe (or equivalent) on dashboard and detail, Then no WCAG 2.1 AA violations of severity “serious” or “critical” remain.
Regional Admin Views Jobs Across Multiple Timezones
Given jobs span multiple timezones, When listing them, Then Due Time displays in the job’s local timezone with offset (e.g., 15:00 MDT, UTC−06:00) and a tooltip shows the viewer’s local equivalent. Given locale preferences are changed, When switching locale, Then date/time formats follow the selected locale (12/24h, day/month order) and static UI strings render in that locale. Given sorting by Due Time, When jobs are in different timezones, Then sort is computed using UTC timestamps to ensure chronological correctness. Given daylight remaining is shown, When calculating values, Then computations use job site coordinates, date, and timezone rules including DST. Given a list export is requested, When generating the file, Then timestamps are ISO 8601 with timezone (e.g., 2025-09-04T17:00:00-06:00) and column headers are localized. Given DST transition days, When a due time falls within a shift, Then an indicator clarifies the transition and displayed times remain unambiguous.
Mobile Supervisor Uses Dashboard in the Field
Given a phone viewport (≤414px width), When opening the dashboard, Then the layout collapses to cards with primary indicators (Risk Tier, SLA %, Due Time), action menu is accessible, and no horizontal scroll appears. Given limited connectivity (≈400ms RTT, 1 Mbps), When loading 200 jobs, Then skeleton loaders appear within 200ms, time to interactive ≤3.5s, and input latency ≤100ms during scroll and filter operations. Given long lists on mobile, When reaching the bottom, Then incremental loading fetches the next 50 jobs, maintains ≤1 in-flight request, and cancels requests on navigation. Given touch interactions, When tapping controls, Then hit targets are ≥44×44 px, and the sticky bulk action bar does not obscure content. Given device rotation, When changing orientation, Then state (filters, selection, scroll) persists and content reflows without layout shift >0.1 (CLS).

Route Bundles

Auto-groups nearby addresses into optimized runs with turn‑by‑turn order. Assign or reassign an entire bundle in one click to slash windshield time and lift daily throughput.

Requirements

Smart Geo-Clustering Engine
"As a dispatcher, I want jobs auto-grouped into nearby bundles so that I can create efficient daily runs without manual sorting."
Description

Automatically groups scheduled inspections and estimates into geographically tight bundles based on real travel times, proximity, service duration per stop, job priority, and crew capacity. Integrates with RoofLens job data (address, roof complexity, required skills, appointment windows) to produce balanced daily runs that minimize windshield time while maximizing stops per crew. Supports exclusion rules, minimum/maximum bundle sizes, and manual overrides without breaking optimization constraints.

Acceptance Criteria
Balanced Daily Runs Generation
Given a planning day with N scheduled jobs and M available crews with defined shift lengths When the engine generates route bundles Then each bundle’s total planned time (service + travel) is within ±10% of its assigned crew’s shift length unless infeasible And the standard deviation of total planned time across bundles is ≤ 20% of the crew shift length And all jobs are assigned to exactly one bundle or explicitly flagged as unassigned with a reason code
Real Travel Time Optimization
Given origin/destination pairs for all candidate stops and current traffic profiles When the engine sequences stops within each bundle Then the total estimated drive time is reduced by ≥15% compared to a straight-line nearest-neighbor baseline on the same job set And turn-by-turn order is exported for each bundle And no stop-to-stop leg exceeds a 2x increase versus current-traffic ETA without a documented cause tagged on the leg
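The straight-line nearest-neighbor baseline referenced above exists only as a comparison yardstick. This sketch uses Euclidean distance on projected coordinates, a simplification that is reasonable for short intra-metro legs.

```python
import math

def nearest_neighbor_order(start: tuple[float, float],
                           stops: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Greedy baseline: always drive to the closest unvisited stop."""
    remaining, order, current = list(stops), [], start
    while remaining:
        nearest = min(remaining, key=lambda stop: math.dist(current, stop))
        remaining.remove(nearest)
        order.append(nearest)
        current = nearest
    return order
```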
Respect Appointment Windows and Service Duration
Given jobs with hard and soft appointment windows and estimated service durations When bundles are generated Then hard windows are satisfied for 100% of scheduled jobs And soft windows are satisfied for ≥95% of scheduled jobs; violations are flagged as soft-window breaches And service start times plus durations do not exceed the job’s allowed end time And any infeasible jobs are left unassigned with a “Window Infeasible” reason
Crew Capacity and Skills Matching
Given crews with capacities (max total service time per day) and skill tags, and jobs with required skills and complexity-adjusted durations When jobs are assigned to bundles Then no crew’s total planned service time exceeds capacity And every job is assigned only to a crew possessing all required skill tags And complexity factors are applied to service duration estimates within ±5% of the configured multiplier set And priority jobs (P1) are scheduled before lower-priority jobs when windows overlap
Exclusion Rules and Bundle Size Constraints
Given defined geographic exclusion zones, customer/insurer exclusions, and bundle size constraints (min/max stops per bundle) When the engine forms bundles Then no bundle contains a stop inside an exclusion zone or with an excluded tag And each bundle has a number of stops within the configured min/max bounds unless infeasible, which is flagged And jobs marked “Do Not Group” are not included in any bundle And cross-territory bundling does not occur when territory lock is enabled
Manual Overrides Without Breaking Constraints
Given a dispatcher manually moves a job between bundles or reorders stops When the engine re-optimizes Then hard constraints (skills, hard windows, exclusions, capacity) remain satisfied And only soft constraints may be violated; each violation is immediately surfaced with reason codes And the manual override persists across subsequent auto-optimizations unless reverted And the resulting plan recomputes in ≤ 3 seconds for bundles up to 25 stops
One-Click Bundle Assignment/Reassignment
Given a selected bundle and a target crew When the user assigns or reassigns the entire bundle in one click Then the bundle is reassigned preserving stop order unless a constraint would be violated And if a violation would occur, the engine proposes the minimal set of swaps or schedule shifts to regain feasibility and displays them And travel times are recomputed and total planned time remains within ±10% of the target crew’s shift length And the reassignment completes in ≤ 5 seconds for bundles up to 50 stops
Traffic-Aware Multi-Stop Route Optimization
"As a field lead, I want an optimized stop sequence with turn-by-turn directions so that I can minimize drive time and hit planned ETAs."
Description

Optimizes the stop order within each bundle using traffic-aware routing and configurable service times to produce the fastest feasible route. Generates turn-by-turn directions, ETAs, and distance metrics, with deep links to preferred navigation apps for mobile execution. Persists route sequences back to the job records for accountability and repeatability, and provides fallback routing if third-party map services degrade.

Acceptance Criteria
Traffic-aware optimization with service times
Given a route bundle with 5–25 geocoded stops, a specified start location, and a departure time When the user selects Optimize Route Then the system computes a stop order that minimizes total travel time using live traffic for the departure time and includes configured service durations And returns ETAs per stop, total route time, and total distance And completes the optimization within 5 seconds for bundles up to 25 stops And flags any unreachable stop with a clear error and excludes it from optimization until resolved
Default and per-stop service time configuration
Given an organization-level default service time is set and individual stops may define overrides When a route is optimized Then ETAs incorporate the per-stop override where provided and otherwise use the default When a user enters an invalid service time (negative or > 240 minutes) Then the input is rejected with a validation message and the prior valid value is retained When the default service time is changed and the route is re-optimized Then total route time and ETAs update to reflect the new default
Turn-by-turn directions and route metrics
Given a successfully optimized route When the user views Route Details Then turn-by-turn directions are generated for each leg between ordered stops And each leg shows distance and duration And each stop shows cumulative ETA and cumulative distance from the start And directions regenerate within 2 seconds if the route order changes
Deep links to preferred navigation apps
Given a user has set a preferred navigation app (Google Maps, Apple Maps, or Waze) When the user taps Open in Navigation on a mobile device Then the app opens within 3 seconds with the optimized multi-stop route in the correct order and start location And each stop’s coordinates and labels are passed accurately And if the preferred app is unavailable, the user is prompted with available alternatives and a browser-based fallback opens with the same route And departure time is included in the deep link where the target app supports it
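A sketch of a multi-stop deep link using Google's public Maps URLs scheme; Apple Maps and Waze need their own builders, waypoint-count limits apply, and departure-time support varies by target app, as the criterion notes.

```python
from urllib.parse import urlencode

def google_maps_deeplink(origin: tuple[float, float],
                         ordered_stops: list[tuple[float, float]]) -> str:
    *waypoints, destination = ordered_stops
    params = {
        "api": "1",
        "origin": f"{origin[0]},{origin[1]}",
        "destination": f"{destination[0]},{destination[1]}",
        "travelmode": "driving",
    }
    if waypoints:
        params["waypoints"] = "|".join(f"{lat},{lng}" for lat, lng in waypoints)
    return "https://www.google.com/maps/dir/?" + urlencode(params)
```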
Persist route sequence and audit trail to job records
Given an optimized route is saved When the save action completes Then the stop sequence index, ETAs, total time, total distance, departure time, service-time settings, traffic model, and optimizer version are written to each associated job record And the bundle displays the saved sequence on reopen And a versioned route history entry is created with user, timestamp, and delta from prior version And route data is retrievable via API for auditing
Fallback routing during map service degradation
Given the primary routing provider is degraded (timeouts > 2 seconds, 5xx errors, or rate limiting) When the user runs Optimize Route Then the system falls back automatically to an alternate provider or distance-based routing without live traffic And the UI indicates Fallback routing active and lowers ETA confidence And optimization completes within 8 seconds for bundles up to 25 stops And an operational alert and telemetry event are recorded And the system automatically resumes primary provider once health checks pass
Re-optimization after route changes
Given an optimized route exists When a stop is added, removed, has its service time changed, or a user locks specific stops in place And the user selects Re-optimize Then a new sequence is proposed that respects locked stops and constraints And a comparison shows change in total time, distance, and any stop order changes And the user can accept to save as a new version or cancel to retain the prior version
One-Click Bundle Assignment & Reassignment
"As an operations manager, I want to assign or reassign a whole bundle in one click so that I can adapt schedules instantly without rebuilding routes."
Description

Enables assigning an entire route bundle to a crew or adjuster in a single action, updating calendars, capacity, and workload automatically. Supports rapid reassignment when crews change or issues arise, triggering notifications (in-app, SMS, email) and syncing to mobile devices. Maintains an audit trail of assignments and timestamps for operational visibility.

Acceptance Criteria
Assign Entire Bundle to Crew in One Click
Given an unassigned route bundle with N scheduled stops and a selected crew with available capacity today And the dispatcher has Assign Bundle permission When the dispatcher clicks Assign on the bundle and selects the crew Then all N stops are added to the crew's calendar in the bundle's optimized order within 5 seconds And the crew's daily capacity and workload metrics update to include the N stops within 5 seconds And the assignment is synced to all assignees' mobile devices within 30 seconds if online, or on next sync if offline And in-app notification is delivered within 5 seconds and SMS/email within 60 seconds to all assignees configured for those channels And an audit trail record is created with bundle_id, from_assignee=null, to_assignee=crew_id, actor_user_id, timestamp (UTC), stop_count=N
Reassign Remaining Stops to New Crew
Given a bundle currently assigned to Crew A with some stops completed and some scheduled And Crew B is eligible to receive the bundle When the dispatcher reassigns the bundle from Crew A to Crew B in one click Then only the unstarted stops move to Crew B; completed stops remain on Crew A And all moved stops retain their turn-by-turn order and scheduled time windows And Crew A's capacity/workload decrease and Crew B's increase accordingly within 5 seconds And both crews receive notifications (in-app <=5s, SMS/email <=60s) indicating reassignment, moved_stop_count, and first stop time And an audit trail record captures from_assignee=Crew A, to_assignee=Crew B, actor_user_id, timestamp (UTC), moved_stop_count, reason (optional) And mobile apps reflect the new assignments within 30 seconds if online, or on next sync if offline
Validation and Conflict Handling on Assignment
Given a dispatcher attempts to assign a bundle to a crew When the crew lacks the required role or territory, or exceeds the daily stop capacity threshold Then the system blocks the assignment and displays a specific error reason within 2 seconds When the assignment would overlap existing time windows Then the system prompts for override with reason or suggests the next available start window And if override is confirmed with a non-empty reason, the assignment proceeds and the reason is logged in the audit trail And if the same bundle is assigned to the same crew twice, the operation is a no-op with a visible confirmation and a deduplicated audit entry
Notification Preferences and Consolidation
Given users have channel preferences set for in-app, SMS, and email When a bundle assignment or reassignment occurs Then recipients receive a single consolidated notification per event per channel (not one per stop) And the message includes bundle name/ID, total stop count or moved_stop_count, date, and first stop ETA And delivery status is tracked per channel with success or failure logged in the audit trail And failed SMS/email attempts are retried up to 3 times with exponential backoff and final failure is surfaced to the dispatcher
Audit Trail Completeness and Immutability
Given any bundle assignment or reassignment event When the event is persisted Then the audit trail record includes bundle_id, actor_user_id, from_assignee, to_assignee, affected_stop_ids, counts, timestamp (UTC), client_ip, reason (if provided), and channels_sent And records are immutable to non-admin users and any admin edit or appended correction is versioned with previous record retained And bundle history view lists events in reverse chronological order and filters by date, actor, and assignee And audit data is exportable to CSV with the same fields
Mobile Sync and Offline Resilience
Given an assignee’s mobile device is offline at the time of (re)assignment When connectivity is restored and the app syncs Then the new or updated bundle assignment appears with correct order, times, and metadata on first successful sync And no duplicate stops are created if multiple sync attempts occur And push notification failures due to invalid tokens fall back to SMS/email when configured And mobile and web show consistent assignee and stop lists within 30 seconds of sync completion
Role-Based Access and One-Click Safeguards
Given a user without Assign Bundle permission attempts to assign or reassign a bundle Then the action is disabled in UI and blocked by API with 403 and an audit entry for the denied attempt When a permitted user initiates assignment via one click Then a confirmation with assignee name and stop count is shown and is keyboard-accessible And the action completes with a success toast and the bundle status changes to Assigned within 5 seconds And every state change is traceable to the triggering user session
Constraint-Aware Scheduling (Time Windows, Skills, Equipment)
"As a scheduler, I want routing to honor time windows and crew capabilities so that customer commitments are met safely and legally."
Description

Respects appointment time windows, daylight hours, crew skill certifications (e.g., drone pilot), and equipment constraints (ladder size, drone availability) during bundling and routing. Validates constraints up front, flags conflicts, and proposes alternative runs that remain feasible. Incorporates soft vs. hard constraints and allows per-job overrides with clear impact indicators on ETA and utilization.

Acceptance Criteria
Time Window Compliance in Bundle Optimization
Given a bundle with 3 jobs: J1 (09:00–10:30 hard), J2 (11:00–13:00 hard), J3 (14:00–16:00 hard) in the same service area with realistic travel times When the user clicks Optimize Bundle Then the scheduled arrival for each job is within its time window And no arrival is earlier than the window start or later than the window end And ETAs are displayed in the correct local time zone And the bundle is marked Feasible
Daylight Hours Enforcement for Drone Flights
Given a job J4 that requires drone flight on 2025-10-12 at a location with civil twilight 06:45–19:15 local And the selected crew is available 07:00–18:00 When Optimize Bundle schedules J4 Then J4's on-site start is between 06:45 and 18:00 And no drone step is scheduled before 06:45 or after 19:15 And if daylight cannot be met for a drone-required job, the bundle is flagged Conflict: Daylight and suggestions include moving affected jobs to the next feasible day
Crew Skill and Certification Matching
Given Crew A without FAA Part 107 certification and Crew B with valid certification expiring 2026-01-01 And Job J5 requires Drone Pilot skill When assigning the bundle containing J5 Then J5 cannot be assigned to Crew A And the UI displays Conflict: Missing Skill for Crew A and recommends Crew B And assignment to Crew B succeeds and records the certification used And if Crew B's certification is expired on the scheduled date, assignment is blocked with Conflict: Expired Certification
Equipment Capacity and Availability Constraints
Given inventory: 1 drone available, 1x 24ft ladder, 1x 40ft ladder And Crew C has 24ft ladder; Crew D has 40ft ladder And Jobs: J6 requires 40ft ladder; J7 requires drone; J8 requires no special equipment When optimizing and assigning the bundle J6+J7+J8 to Crew C Then conflicts are detected before save: J6 requires 40ft ladder; J7 requires drone (not on Crew C) And the system proposes Crew D for J6 and any crew with a drone for J7 And if the single drone is already allocated in the same time block, J7 is marked Conflict: Equipment Unavailable and is not scheduled
Conflict Detection and Alternative Proposal
Given a bundle with 5 jobs where travel time makes J2's hard window 10:00–11:00 impossible after J1 When Optimize Bundle is executed Then the bundle is labeled Infeasible with a list of conflicts including job IDs, constraint types, and blocking details And the system generates at least 2 alternative run options that satisfy all hard constraints and increase total drive time by no more than 15% over the theoretical minimum And each alternative is presented with ETA, total drive time, utilization %, and affected soft-window penalties And alternatives are generated in under 10 seconds for up to 20 jobs
Soft vs Hard Constraints with Per-Job Overrides and Impact Indicators
Given Job J9 has a soft time window 12:00–14:00 with max lateness 15 minutes and penalty weight 5 And Job J10 has a hard window 15:00–16:00 When the user toggles J9's window from soft to hard and re-optimizes Then the route respects J9 strictly and any prior soft violation is eliminated And when J9 is soft, the route may schedule J9's start between 14:00 and 14:15 with a visible Soft Violation indicator showing minutes late and penalty cost And the UI displays impact indicators for any override: delta ETA per job, change in total drive time, and utilization % change And reverting the override restores the prior plan in a single click
One-Click Bundle Reassignment Preserving Feasibility
Given a feasible bundle assigned to Crew E And Crew F lacks a 40ft ladder required by J11 in the bundle When the user clicks Reassign Bundle and selects Crew F Then the system re-optimizes within 5 seconds and blocks reassignment with a clear list of conflicts and suggested crews/equipment to resolve And when the user selects Crew D (who meets all constraints), reassignment succeeds in one click and the route remains feasible with preserved job order where possible And no partial or silent drops of jobs occur; all assignments either pass or fail atomically
Interactive Map & List View with Drag-and-Drop Editing
"As a routing coordinator, I want to visualize and tweak bundles on a map so that I can quickly resolve edge cases the algorithm can’t fully anticipate."
Description

Provides a split map/list interface that visualizes color-coded bundles, stop sequences, and ETAs. Allows drag-and-drop of addresses between bundles or within a bundle to adjust order, with instant recalculation of route time, miles, and feasibility checks. Includes undo/redo, conflict warnings, and hover details (service time, notes, roof type) to streamline manual fine-tuning.

Acceptance Criteria
Split Map/List View with Color-Coded Bundles and Sequences
Given bundles A and B each contain at least one stop, When the interface loads, Then the map displays markers color-coded per bundle and numbered by stop sequence, and the list groups stops by bundle using the same colors. Given bundle headers are shown in the list, When I click one, Then the corresponding markers highlight on the map and non-selected bundles reduce opacity to 50%. Given ETAs are available, When the view loads, Then each stop shows its ETA in the list and as a marker label on map hover. Given a dataset of up to 100 stops across up to 5 bundles, When the interface loads on a modern browser, Then the view becomes interactive within 2 seconds.
Drag-and-Drop Address Between Bundles
Given an address belongs to Bundle A, When I drag it from the list to Bundle B’s list header, Then the address is reassigned to Bundle B and removed from Bundle A. Given I drag an address from the map, When I drop it onto a marker or bundle drop zone for Bundle B, Then it is reassigned to Bundle B. Given I drop an address into Bundle B’s list between two stops, When the drop completes, Then the address is inserted at that exact position and all stop sequence numbers are reindexed starting at 1. Given metrics are displayed, When the reassignment completes, Then route time, miles, and ETAs for both bundles recalculate and update within 1 second. Given the reassignment would violate a hard constraint (e.g., fixed appointment window), When I drop, Then the system blocks the change, shows a conflict modal with reasons, and makes no data changes.
Drag-and-Drop Reorder Within Bundle
Given a bundle with three or more stops, When I drag a stop to a new position within the same bundle list, Then the sequence updates to match the new order and map markers renumber accordingly. Given I drag a stop on the map, When I drop it onto a position indicator for the same bundle, Then the list order updates to match the new position. Given a reorder completes, When recalculation runs, Then bundle route time, miles, and all ETAs update within 800 milliseconds for bundles of up to 25 stops. Given I drag over an invalid target (different bundle without modifier or outside drop zones), When hovering the target, Then the UI shows a not-allowed cursor and no drop highlight appears.
Instant Recalculation of Time, Miles, and ETAs
Given any edit changes a stop’s sequence or bundle membership, When the edit commits, Then travel distance, duration, and per-stop ETAs recompute and display at both bundle and stop levels. Given network latency exceeds 1 second, When recalculation is pending, Then a progress indicator shows until completion and results return within 5 seconds or an error banner with retry appears. Given the routing service returns an error, When recalculation fails, Then previous metrics remain visible, the edit is preserved, and an error notification provides a retry action.
Constraint Feasibility Checks on Edit
Given configured constraints (max bundle duration, max stops per bundle, working hours, appointment windows), When a drag-and-drop edit is attempted, Then the system evaluates constraints before committing the change. Given a soft constraint would be exceeded, When the drop completes, Then a warning banner appears detailing the exceeded constraint and offers Proceed and Undo actions. Given a hard constraint would be violated, When the drop occurs, Then the edit is blocked; a modal lists violations and provides only a Cancel action. Given unresolved conflicts exist after an edit, When I hover the warning icon at bundle or stop level, Then a tooltip lists the specific conflicts and the affected stops.
Undo/Redo for Edits
Given I have performed at least one edit, When I press Ctrl+Z or click Undo, Then the last edit is reversed within 300 milliseconds and all UI, sequences, markers, and metrics revert accordingly. Given I have undone at least one edit, When I press Ctrl+Y (or Shift+Ctrl+Z) or click Redo, Then the previously undone edit reapplies and metrics recalculate. Given the edit history depth is 20 steps, When additional edits exceed this depth, Then the oldest history entry is discarded and the Undo control indicates no further history. Given an edit was blocked due to a hard constraint, When I attempt Undo/Redo, Then history integrity is maintained and no invalid state is introduced.
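A minimal history stack with the 20-step cap described above; "state" snapshots stand in for whatever edit representation the UI actually stores.

```python
class EditHistory:
    def __init__(self, max_depth: int = 20):
        self.undo_stack: list = []
        self.redo_stack: list = []
        self.max_depth = max_depth

    def record(self, state) -> None:
        self.undo_stack.append(state)
        if len(self.undo_stack) > self.max_depth:
            self.undo_stack.pop(0)  # discard the oldest history entry
        self.redo_stack.clear()  # a new edit invalidates the redo branch

    def undo(self):
        if not self.undo_stack:
            return None  # UI should indicate no further history
        state = self.undo_stack.pop()
        self.redo_stack.append(state)
        return state

    def redo(self):
        if not self.redo_stack:
            return None
        state = self.redo_stack.pop()
        self.undo_stack.append(state)
        return state
```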
Hover Details on Stops
Given I hover over a stop marker or its list item for at least 250 milliseconds, When hover occurs, Then a tooltip appears showing service time, notes, and roof type for that stop. Given a stop lacks notes or roof type, When I hover, Then the tooltip displays a placeholder (—) for missing fields without layout shift. Given I move the pointer away from the stop, When hover ends, Then the tooltip hides within 150 milliseconds.
Live Re-optimization & Exception Handling
"As a dispatcher, I want to re-optimize routes mid-day when plans change so that crews stay productive and customers remain informed."
Description

Monitors in-day events (cancellations, no-shows, delays, weather advisories) and offers fast re-optimization of a single bundle or the full schedule while preserving completed and locked stops. Syncs changes to assigned crews in real time with updated ETAs and navigation links, and highlights downstream impacts to customer time windows and crew capacity.

Acceptance Criteria
Single-Bundle Re-Optimization After Mid-Route Cancellation
Given a bundle with at least 6 stops where some are marked Completed and at least 1 stop is Locked And the assigned crew is en route with the mobile app online When a customer cancels an upcoming stop and the dispatcher clicks "Re-optimize Bundle" Then the optimization completes in ≤10 seconds And all Completed and Locked stops remain in their original sequence positions and timestamps are unchanged And the canceled stop is removed from the bundle and marked Canceled And remaining stops are resequenced to minimize total drive time while honoring time windows and service durations And updated ETAs are generated for all remaining stops And the assigned crew receives the new sequence, ETAs, and refreshed navigation link for the next stop in ≤5 seconds And any stop with an ETA projected outside its committed window is flagged At Risk with the minutes early/late displayed
Full-Schedule Re-Optimization on Weather Advisory
Given there are 3 or more active bundles across multiple crews And a weather advisory with a geofenced area and active time range is received When the dispatcher initiates "Re-optimize Full Schedule" Then the optimization completes in ≤20 seconds for up to 60 total remaining stops And all Completed and Locked stops in every bundle remain fixed in place And stops inside the advisory window are either moved outside the affected time range or flagged Unschedulable Today if no feasible placement exists And capacity limits per crew (max stops or total service minutes) are not exceeded; any overages are flagged with the amount over capacity And each crew’s updated sequence and ETAs are pushed to their devices in ≤5 seconds And the system highlights the count of impacted customers and which time windows are now At Risk with minute deltas
No-Show Handling within Customer Time Window
Given a crew marks the current stop as No-Show at time T with a proof note When the dispatcher clicks "Re-optimize Bundle" for that crew Then the no-show stop is moved to the end of the day if a feasible placement exists within the customer’s window; otherwise it is flagged for rescheduling And the optimization completes in ≤10 seconds And Completed and Locked stops are unchanged in sequence and timestamps And subsequent stops’ ETAs are advanced and recalculated And customers for the next two stops receive updated ETAs if notifications are enabled And the crew’s device receives an updated next-stop navigation link in ≤5 seconds And any newly created window violations are flagged At Risk with minute deltas
Delay Event Propagation and Impact Highlighting
Given an in-progress stop reports a delay of 30 minutes due to on-site conditions When the dispatcher applies the delay and triggers "Recalculate ETAs" (without changing sequence) Then all downstream ETAs in the bundle update in ≤3 seconds And any stop whose ETA falls outside its committed window is flagged At Risk with the exact minutes late And the bundle-level impact summary displays total added drive + service minutes and count of at-risk stops And the crew capacity utilization updates and flags if total planned service minutes exceed capacity And the crew’s device displays the new ETA for the next stop in ≤5 seconds
Locked and Completed Stops Preservation Across All Re-optimizations
Given a schedule with multiple bundles where some stops are Locked and others are Completed When any re-optimization is initiated (single-bundle or full-schedule) Then Locked and Completed stops retain their positions in sequence and are not resequenced And their planned arrival times and service durations remain unchanged And if constraints make a solution infeasible without moving a Locked or Completed stop, the system presents an error and aborts changes, leaving schedules unchanged
One-Click Bundle Reassignment with Real-Time Sync
Given a bundle assigned to Crew A with an optimized sequence When the dispatcher reassigns the entire bundle to Crew B in one action without re-optimizing Then the sequence order remains unchanged And Crew B receives the full stop list, current ETAs, and navigation link for the next stop in ≤5 seconds And Crew A’s device is updated to remove the reassigned bundle within ≤5 seconds And audit logs capture the reassignment with timestamp, actor, from/to crew, and stop count And if Crew B is at capacity, the system warns and requires explicit confirmation before proceeding
Routing Performance Analytics & ROI Tracking
"As a business owner, I want to measure windshield time savings and throughput gains so that I can prove ROI and optimize staffing."
Description

Tracks key metrics such as total drive time, miles, stops per day, on-time percentage, and route utilization per crew and date range. Compares Route Bundles performance against manual baselines to quantify windshield time reduction and cost savings. Offers exportable reports and dashboards that surface recurring bottlenecks, informing future capacity planning and territory design.

Acceptance Criteria
Aggregate Metrics by Crew and Date Range
Given I select a date range and one or more crews When I open the Routing Performance dashboard Then I see per‑crew and aggregate values for total drive time (minutes), total miles, stops completed per day, on‑time percentage, and route utilization And totals reflect only completed stops within the selected date range in the organization’s timezone And drive time excludes dwell time at stops and any breaks explicitly marked as off‑route And miles and time include only segments between the first and last stop of each assigned route And metric tooltips display their formulas and units
Baseline Comparison: Route Bundles vs Manual
Given the selected date range includes both Route Bundles and manually routed days And baseline classification is derived from the route’s routing method flag When I enable "Compare to baseline" Then the UI displays absolute and percentage differences for total drive time, total miles, and stops per day between bundles and manual baselines And cost savings are calculated using admin‑configured $/mile and $/hour rates and shown per crew and in total And each comparison displays the number of routes included for bundle and baseline groups And if either group has fewer than 5 routes in range, a clear "insufficient baseline" notice is shown and differences are disabled
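The savings arithmetic implied above, with the admin-configured rates passed in as arguments; the function and field names are illustrative.

```python
def cost_savings_usd(baseline_miles: float, bundle_miles: float,
                     baseline_minutes: float, bundle_minutes: float,
                     usd_per_mile: float, usd_per_hour: float) -> float:
    miles_saved = baseline_miles - bundle_miles
    hours_saved = (baseline_minutes - bundle_minutes) / 60.0
    return round(miles_saved * usd_per_mile + hours_saved * usd_per_hour, 2)
```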
On‑Time Percentage Calculation Accuracy
Given each stop has a scheduled ETA window and a completion timestamp And a grace period (minutes) is configured at the organization level with a default of 5 When on‑time percentage is calculated for the current filters Then a stop is counted on‑time if completion time ≤ ETA end + grace minutes And on‑time % = (on‑time completed stops ÷ completed stops) × 100, rounded to one decimal place And canceled, skipped, and unattempted stops are excluded from the denominator And the dashboard and exports use the same definition and yield identical values
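The on-time definition above as a direct computation; the stop dicts and field names are illustrative.

```python
from datetime import timedelta

def on_time_percentage(stops: list[dict], grace_minutes: int = 5) -> float | None:
    completed = [s for s in stops if s["status"] == "completed"]
    if not completed:
        return None  # canceled/skipped/unattempted stops never enter the denominator
    grace = timedelta(minutes=grace_minutes)
    on_time = sum(1 for s in completed if s["completed_at"] <= s["eta_end"] + grace)
    return round(100.0 * on_time / len(completed), 1)
```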
Route Utilization Metric Integrity
Given a route has a planned duration (minutes) and planned capacity (stops) And actual route time is measured from first stop arrival to last stop departure When utilization is computed Then utilization (time) = actual route time ÷ planned duration, expressed as a percentage and capped at 150% And routes with actual time < 15 minutes are excluded from utilization aggregates And routes with utilization < 60% or > 120% are flagged as under‑ or over‑utilized in the UI and exports And the utilization tooltip explains the formula and exclusion rules
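The utilization rule can be sketched the same way, assuming actual minutes are already measured from first-stop arrival to last-stop departure:

    def route_utilization(actual_minutes, planned_minutes):
        # Routes under 15 actual minutes are excluded from aggregates.
        if actual_minutes < 15:
            return None
        return min(actual_minutes / planned_minutes * 100, 150.0)  # 150% cap

    def utilization_flag(pct):
        if pct is None:
            return "excluded"
        if pct < 60:
            return "under-utilized"
        if pct > 120:
            return "over-utilized"
        return "normal"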
Export Reports to CSV and PDF
Given I have applied crew, date range, and any additional filters When I export the Performance Summary Then a CSV download starts within 5 seconds for datasets ≤ 50,000 rows and contains columns: date, crew, total miles, total drive time (minutes), stops completed, on‑time %, route utilization %, baseline miles, baseline drive time (minutes), savings miles, savings time (minutes), savings cost (USD) And a PDF summary with trend charts and top insights is generated within 30 seconds and matches dashboard values (differences only due to rounding to displayed precision) And both exports reflect the same filters and timezone as the dashboard And export attempts are logged with user, timestamp, and filter parameters
Bottleneck Detection and Surfacing
Given stop‑level delays are computed as (completion time − scheduled ETA end − grace, minimum 0) When viewing the Bottlenecks tab for the selected date range Then the system lists up to the top 10 recurring bottlenecks by ZIP code or 1 km grid tile where average delay per stop > 10 minutes and occurrences ≥ 3 distinct days And each entry shows average delay, total delayed stops, affected crews, and suggested time windows with lower historical delay And clicking a bottleneck filters the dashboard and map to the impacted stops and routes
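One way to sketch the delay metric and the surfacing thresholds (the zone key, whether ZIP code or 1 km grid tile, is assumed precomputed per stop):

    from collections import defaultdict

    def stop_delay_minutes(completed_at, eta_end, grace_minutes=5):
        # Floored at zero so early arrivals don't offset late ones.
        late = (completed_at - eta_end).total_seconds() / 60 - grace_minutes
        return max(0.0, late)

    def top_bottlenecks(stops, limit=10):
        zones = defaultdict(lambda: {"delays": [], "days": set()})
        for s in stops:
            zones[s["zone"]]["delays"].append(s["delay_min"])
            zones[s["zone"]]["days"].add(s["day"])
        rows = []
        for zone, v in zones.items():
            avg = sum(v["delays"]) / len(v["delays"])
            # Qualify: average delay > 10 minutes on >= 3 distinct days.
            if avg > 10 and len(v["days"]) >= 3:
                rows.append((zone, avg, len(v["delays"])))
        return sorted(rows, key=lambda r: r[1], reverse=True)[:limit]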
Data Freshness and Access Control
Given telematics and stop events are ingested continuously When a user with role Dispatcher or Admin opens analytics Then metric computations use data no older than 15 minutes from the current time And users with role Field Tech cannot access analytics pages or exports and see an access denied message And all analytics page views and exports are recorded in an audit log with user id, timestamp, and action details

GeoHeat Overlay

Layers hail size, wind swaths, and claim density on the triage board. Instantly spot high‑impact clusters and move them to the top, ensuring the worst‑hit customers are served first.

Requirements

Hazard Data Ingestion & Normalization
"As an operations manager, I want hazard data to be current and consistent so that I can trust the overlay when prioritizing responses."
Description

Integrate with one or more third‑party weather providers to ingest hail size and wind swath datasets on a scheduled basis (hourly when available, minimum daily). Normalize all inputs to a common geospatial reference (WGS84), unify schemas, deduplicate overlapping sources, and store as versioned time‑series layers with source, timestamp, and confidence metadata. Implement data quality checks, backfill the last 36 months for historical analysis, and provide failure alerts with automatic retries. Expose a query service optimized for tiled rendering and time‑window filtering to supply the GeoHeat overlay and downstream scoring.

Acceptance Criteria
Scheduled Hazard Ingestion and Versioning
Given providers configured with declared update frequencies (hourly when available, otherwise daily) and API credentials When the scheduler runs at the configured cadence per provider Then the system requests and retrieves the latest hail size and wind swath datasets for that provider And creates a new immutable versioned layer only if the provider has new data since the last active version And attaches metadata: source_id, provider_timestamp, ingest_timestamp (UTC), event_time_range, and confidence And marks the version as active only after all validations pass And achieves freshness targets: p95 ingest lag ≤ 15 minutes for hourly providers and ≤ 4 hours for daily providers And produces an audit log entry with run_id, version_id, record_counts, durations, and outcome
Geospatial Normalization and Schema Unification to WGS84
Given raw provider datasets with varying CRS, units, and attribute names When the transformation step executes Then all geometries are reprojected to WGS84 (EPSG:4326) with positional error ≤ 5 m at the 95th percentile And units are normalized: hail_size_in (inches), wind_speed_mph (mph) And timestamps are normalized to UTC and stored as event_ts (ISO 8601) And attributes are mapped to the unified schema: {hazard_type, magnitude, magnitude_unit, geometry, provider_id, provider_ts, event_ts, confidence, source_version, provenance} And features failing reprojection, schema, or range validation are quarantined with error codes and excluded from activation And the active version contains zero schema violations per contract
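A sketch of the reprojection and unit-normalization steps, using pyproj as one plausible choice (the production pipeline may use different tooling):

    from pyproj import Transformer

    MM_TO_IN = 1 / 25.4
    MS_TO_MPH = 2.2369362920544

    def to_wgs84(x, y, src_crs):
        # always_xy keeps (lon, lat) axis order regardless of CRS convention.
        transformer = Transformer.from_crs(src_crs, "EPSG:4326", always_xy=True)
        return transformer.transform(x, y)  # -> (lon, lat)

    def normalize_units(hail_mm=None, wind_ms=None):
        # Providers reporting metric units are converted to the unified schema.
        return {
            "hail_size_in": None if hail_mm is None else round(hail_mm * MM_TO_IN, 2),
            "wind_speed_mph": None if wind_ms is None else round(wind_ms * MS_TO_MPH, 1),
        }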
Deduplication and Confidence-Weighted Merge of Overlapping Sources
Given overlapping hail or wind features from multiple providers within the same spatial-temporal window When the merge step runs on normalized inputs Then duplicates are detected using a 100 m spatial grid and a 60-minute temporal window per hazard_type And conflicts are resolved by selecting the feature with highest confidence; on ties prefer newest provider_ts; on subsequent ties use provider priority list And merged features retain a provenance list of contributing source_ids and confidences And no two active features overlap within the same grid cell and temporal window after merge And aggregate magnitude is computed deterministically per rule set (e.g., max hail_size_in, max wind_speed_mph)
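A sketch of the dedup-and-merge rule, assuming features carry projected meter coordinates (x_m, y_m) for grid bucketing; field names are illustrative:

    from collections import defaultdict

    GRID_M = 100        # 100 m spatial grid
    WINDOW_S = 60 * 60  # 60-minute temporal window

    def dedup_key(f):
        return (f["hazard_type"],
                int(f["x_m"] // GRID_M),
                int(f["y_m"] // GRID_M),
                int(f["event_ts"].timestamp() // WINDOW_S))

    def merge_features(features, provider_priority):
        buckets = defaultdict(list)
        for f in features:
            buckets[dedup_key(f)].append(f)
        merged = []
        for group in buckets.values():
            # Highest confidence wins; ties fall to newest provider_ts,
            # then to the configured provider priority list.
            winner = dict(max(group, key=lambda f: (
                f["confidence"],
                f["provider_ts"],
                -provider_priority.index(f["provider_id"]))))
            winner["provenance"] = [(f["provider_id"], f["confidence"])
                                    for f in group]
            merged.append(winner)
        return merged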
Data Quality Checks, Alerts, and Automatic Retries
Given an ingestion or transformation run for a provider When a validation fails (schema, CRS, unit, geometry) or data freshness lag exceeds 2 scheduled intervals Then the run is marked failed, a retry is queued with exponential backoff up to 5 attempts, and an alert is sent to the on-call channel including run_id, provider_id, error_type, and next_retry_at And successful retries auto-resolve the incident and close the alert with resolution details And successful runs that deviate in record_count by more than ±30% from the 7-day moving average raise a warning alert without blocking activation And all failures and warnings appear on the data ops dashboard within 1 minute of detection
36-Month Historical Backfill and Idempotent Resume
Given a backfill job is initiated for a provider When processing historical data windows covering the last 36 months Then the job creates versioned layers per window without duplicating previously ingested versions (idempotent by provider_ts + checksum) And the job can resume after interruption and continue from the last successful window without reprocessing completed windows And backfill completeness reaches ≥ 99% of expected windows per provider with gaps listed in a report export (CSV) And throughput sustains ≥ 1 month of historical data processed per 10 minutes on standard cluster configuration And backfilled versions pass the same validations and dedup rules as real-time runs before activation
Tile-Optimized Query Service with Time-Window Filtering
Given a request to the hazard query service with z/x/y tile indices and a time filter (start_ts, end_ts) When the request is executed against active normalized layers Then the response includes only features intersecting the tile and time window, encoded as GeoJSON FeatureCollection in WGS84 And p95 latency for zoom levels 6–12 is ≤ 200 ms under 50 concurrent requests And response payload size for zoom 10 tiles is ≤ 500 KB; if larger, the service applies thinning/aggregation and sets a truncated flag=true And responses include caching headers with stable ETag per (z,x,y,start_ts,end_ts) and return 304 Not Modified when unchanged And the service supports filtering by hazard_type and returns correct aggregate properties per feature needed by GeoHeat (e.g., max_magnitude)
Claims Density Aggregation
"As a branch manager, I want to see where claims are concentrated so that I can allocate crews to the hardest‑hit neighborhoods."
Description

Aggregate claim/job locations from RoofLens projects, supported CRMs, and optional CSV import into a privacy‑safe geospatial grid (configurable geohash precision). Geocode and deduplicate records, apply tenancy isolation, and enforce minimum bucket thresholds to prevent re‑identification. Support filters by status (e.g., open, pending, closed) and event date, and refresh aggregates on a 15‑minute cadence. Produce heatmap‑ready tile data with counts and normalized intensity for consumption by the overlay and impact scoring engine.

Acceptance Criteria
Multi-Source Claim Ingestion and Geocoding
Given active connections to RoofLens projects, supported CRMs, and a valid CSV upload endpoint, When the ingestion pipeline executes, Then records created or updated in the last 15 minutes are fetched from all connected sources. Given a CSV file, When validation runs, Then required columns include either (address) or (lat,lng), plus status and event_date; files missing required columns are rejected with an error and no rows ingested. Given a CSV file with mixed valid and invalid rows, When validation runs, Then invalid rows are skipped with per-row error reasons and valid rows are ingested. Given a record lacking coordinates but with address fields, When geocoding runs, Then latitude and longitude are assigned with 6-decimal precision and confidence >= 0.8; otherwise the record is flagged geocode_failed and excluded from aggregation.
Deterministic Deduplication within Tenant
Given multiple records within the same tenant that share either an identical external_id OR are within 25 meters and have event_date within 24 hours, When deduplication runs, Then they are merged into a single canonical claim with a union of source references and the most recently updated attributes retained. Given records from different tenants with matching keys, When deduplication runs, Then no cross-tenant merge occurs. Given a merged claim, When grid counts are computed, Then the claim contributes exactly 1 to its grid bucket.
Tenancy Isolation and Access Control
Given a user authenticated to tenant A, When requesting claim-density tiles for any z/x/y and viewport, Then only tenant A’s data is included and other tenants’ data is excluded. Given a request scoped to tenants {A,B}, When tiles are generated, Then aggregates include only tenants A and B. Given any tile payload, When inspected, Then it contains no PII (names, street addresses, emails, phone numbers, claim IDs).
Privacy-Safe Geospatial Grid and Thresholding
Given a configured geohash precision per zoom level, When generating tiles, Then claims are aggregated into grid buckets using the configured precision for that zoom. Given a bucket with count < k_min (default 5), When emitting tile data, Then the bucket is not emitted (suppressed) to prevent re-identification. Given any emitted tile, When validated, Then no bucket represents fewer than k_min claims and normalizedIntensity values are in [0,1] with 2 decimal places.
Filters by Status and Event Date
Given a tiles request with a status filter (e.g., open,pending), When aggregates are computed, Then only claims matching those statuses are counted; unknown status values are ignored. Given a tiles request with an event_date range [start,end] in UTC, When aggregates are computed, Then only claims with event_date within the inclusive range are counted. Given no filters provided, When aggregates are computed, Then all statuses and all event dates are included.
15-Minute Refresh Cadence and Latency
Given the system schedule, When jobs are triggered, Then aggregation and tile generation start every 15 minutes at :00, :15, :30, and :45. Given new or updated source records, When the next scheduled run completes, Then corresponding tiles are updated within 5 minutes of job start and expose a lastRefreshed timestamp in ISO 8601 UTC. Given no data changes since the previous run, When the scheduled job runs, Then tiles are regenerated and lastRefreshed is updated to the current run time.
Heatmap-Ready Tile Output Contract
Given a tiles request specifying z,x,y, tenant scope, filters, and zoom level, When the tile is returned, Then the response is application/json containing { metadata: { z,x,y,lastRefreshed,kMin,precision }, buckets: [ { geohash, count:int>=0, normalizedIntensity:float } ] }. Given any tile with at least one bucket, When normalizedIntensity is computed, Then normalizedIntensity = round(count / max(counts in this tile), 2), and if max count = 0, all normalizedIntensity = 0. Given bucket data, When validated, Then count values are integers >= 0 and geohash strings match the configured precision for the zoom.
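The suppression and normalization rules above reduce to a small transform; this sketch assumes counts are already aggregated per geohash bucket:

    def build_tile_buckets(bucket_counts, k_min=5):
        # Suppress buckets below the k-anonymity threshold before
        # normalizing, so no emitted bucket represents < k_min claims.
        visible = {g: c for g, c in bucket_counts.items() if c >= k_min}
        if not visible:
            return []
        max_count = max(visible.values())
        return [{"geohash": g,
                 "count": c,
                 "normalizedIntensity": round(c / max_count, 2)}
                for g, c in visible.items()]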
Overlay Controls & Legend
"As a dispatcher, I want to toggle and adjust overlay layers so that I can compare hazards and claim density without losing map context."
Description

Provide interactive map controls to toggle hail, wind, and claims density layers; adjust opacity; reorder layer stacking; and select time windows via a date range slider or event picker. Include a color legend with units (inches for hail, mph for wind) and threshold filters to hide low‑impact areas. Implement hover tooltips with localized values at cursor, responsive layouts for desktop/tablet, keyboard navigation, and persistence of user preferences per account. Ensure smooth pan/zoom with lazy tile loading and graceful fallback when data is temporarily unavailable.

Acceptance Criteria
Layer Toggle and Opacity Controls
- Given the triage board map is loaded and the account has access to GeoHeat data, When the user toggles the Hail, Wind, or Claims Density control, Then the corresponding layer visibility updates within 300 ms and the control reflects its active/inactive state. - Given a layer is active, When the user adjusts its opacity slider between 0% and 100%, Then the map updates the layer opacity continuously with no more than 100 ms delay between slider movement and visual change. - Given multiple layers are active, When the user changes the opacity of one layer, Then only that layer’s opacity changes and others remain unchanged. - Given a layer’s data is currently unavailable, When the user attempts to toggle that layer on, Then the control shows a disabled state with a tooltip "Temporarily unavailable" and the layer is not added to the map. - Given the user modifies any layer visibility or opacity, When the user refreshes the app or signs in on another device, Then the previous settings are restored for that account.
Layer Stacking Reorder
- Given at least two layers are active, When the user drags a layer chip in the stack list to a new position, Then the map redraws with the new stacking order within 300 ms and the list order matches the visual order. - Given the reorder control is focused, When the user presses Alt+Up or Alt+Down, Then the focused layer moves up or down one position in the stack and the change is announced to assistive tech. - Given the layer order has been changed, When the user signs out and back in, Then the custom stack order is restored for that account and organization. - Given a layer is set to 0% opacity, When the user moves it in the stack, Then the stacking change persists and takes effect when opacity is increased.
Time Window Selection and Event Picker
- Given the date range slider is visible and data exists for the selected region, When the user adjusts the start or end handles and releases, Then the map filters to events within the range and updates within 500 ms. - Given the user opens the Event Picker, When a specific event is selected, Then the date range synchronizes to that event’s bounds and the map zooms/pans to the event’s extent. - Given the selected range yields no events, When the filter is applied, Then the map shows a non-blocking "No events in range" message and zero overlay tiles are rendered. - Given the user switches between the date range slider and the event picker, When a new selection is made, Then the other control reflects the new selection state without data loss.
Legend, Units, and Threshold Filters
- Given a layer is active, When the map renders, Then a legend is visible for that layer showing color scale, units (Hail in inches, Wind in mph, Claims Density in claims/mi²), and current thresholds. - Given the user selects a threshold preset (e.g., Hail ≥ 1.0 in, Wind ≥ 60 mph), When applied, Then overlay cells below the threshold are hidden within 300 ms and the legend displays the active cutoff. - Given the user enters a custom threshold, When the input is submitted, Then values are validated against layer bounds and errors are announced for invalid input. - Given multiple layers are active, When thresholds are adjusted for one layer, Then only that layer responds and others remain unchanged. - Given the device is tablet or desktop, When the legend is toggled collapsed/expanded, Then it remains readable and does not obstruct essential map controls.
Hover Tooltips with Localized Values
- Given a pointer is over a visible overlay cell, When hover is detected, Then a tooltip appears within 150 ms showing the topmost active layer name, value formatted in the user’s locale with units (e.g., 1.25 in, 65 mph), and event date/time localized to the user’s timezone. - Given multiple layers are active, When hovering, Then the tooltip displays the value for the topmost active layer only, consistent with the current stacking order. - Given a touch device, When the user long‑presses on the map, Then the tooltip appears and can be dismissed with a tap outside or the Close control. - Given the pointer leaves the overlay or the map is panned, When hover ends, Then the tooltip disappears within 200 ms without leaving artifacts.
Keyboard Navigation and Responsive Layout
- Given focus starts on the map controls, When navigating with Tab/Shift+Tab, Then all interactive elements (layer toggles, opacity sliders, reorder buttons, date slider, event picker, threshold inputs, legend toggle) are reachable in a logical order and have a visible focus indicator. - Given a control is focused, When Space/Enter is pressed to toggle or Arrow keys are used to adjust sliders (±1 step; Shift+Arrow ±10 steps), Then the control updates and the map reflects the change. - Given desktop (≥1024 px) and tablet (≥768 px and <1024 px) breakpoints, When the viewport changes, Then controls reposition to the designated layout without overlapping content and hit targets are at least 44×44 px on tablet. - Given a screen reader is active, When controls receive focus, Then accessible names and values are announced, including units and current thresholds, and instructions for sliders are provided. - Given orientation change on tablet, When the device rotates, Then the current map state (extent, active layers, filters) is preserved.
Smooth Pan/Zoom, Lazy Loading, and Graceful Fallback
- Given any overlay is active, When the user pans or zooms the map, Then interaction latency remains under 100 ms and overlays maintain at least 30 FPS on recommended hardware. - Given new tiles enter the viewport, When loading begins, Then only tiles within and near the viewport are requested and a lightweight placeholder indicates loading progress; tiles outside the viewport are not requested. - Given the overlay data service is slow (over 3 seconds) or returns an error, When a request fails, Then a non-blocking banner "Layer data temporarily unavailable" is shown, the UI remains operable, exponential retry is attempted up to 3 times, and no unhandled errors occur. - Given data becomes available after a temporary outage, When a retry succeeds, Then the overlay renders automatically and the banner dismisses itself. - Given all overlays are unavailable, When the user opens the map, Then the base map renders with a clear empty state and controls are disabled appropriately without breaking layout.
Impact Scoring & Clustering Engine
"As a sales lead, I want high‑impact clusters identified and scored so that I can target outreach to the right customers first."
Description

Compute a per‑cell impact score by combining normalized hail intensity, wind speed, and claims density using configurable weights. Detect spatial clusters of high‑impact cells (e.g., DBSCAN/HDBSCAN), output cluster centroids and bounds, and attach human‑readable explanations (e.g., hail 2.0 in, wind 60 mph, density 12/mi²). Expose scores and clusters via API to the triage board, and allow admin‑level tuning of weights and thresholds with versioned configurations. Recalculate incrementally as new data arrives and cache results for fast retrieval.

Acceptance Criteria
Per-cell Impact Score Calculation with Configurable Weights
Given an active configuration with normalization ranges and weights {w_hail, w_wind, w_density} that sum to 1 When a grid cell has inputs {hail_intensity, wind_speed, claim_density} Then each input is normalized using the active configuration’s ranges to values in [0,1] And the impact score equals (w_hail*h') + (w_wind*w') + (w_density*d') with absolute error ≤ 1e-6 And the impact score is clamped to [0,1] And missing inputs are treated as 0 and the cell output includes data_quality="partial" And the cell output includes score, component_contributions, config_version, and computed_at timestamp
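A minimal sketch of the scoring rule (the configuration shape shown here is illustrative):

    def impact_score(inputs, weights, ranges):
        # inputs:  raw {"hail": ..., "wind": ..., "density": ...} values
        # weights: {"hail": w1, "wind": w2, "density": w3}, summing to 1
        # ranges:  {"hail": (lo, hi), ...} from the active configuration
        score, contributions, quality = 0.0, {}, "full"
        for key in ("hail", "wind", "density"):
            raw = inputs.get(key)
            if raw is None:
                norm, quality = 0.0, "partial"  # missing inputs count as 0
            else:
                lo, hi = ranges[key]
                norm = min(max((raw - lo) / (hi - lo), 0.0), 1.0)
            contributions[key] = weights[key] * norm
            score += contributions[key]
        return min(max(score, 0.0), 1.0), contributions, quality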
High-Impact Cluster Detection with Centroids and Bounds
Given an active configuration specifying algorithm ∈ {DBSCAN, HDBSCAN}, score_threshold T, eps/epsilon, and min_samples When clustering is executed over all cells in an area Then only cells with score ≥ T are eligible for clustering And each cluster contains ≥ min_samples eligible cells And each cluster includes a centroid (geodesic average of member cell centers, EPSG:4326) rounded to 6 decimal places And each cluster includes bounds as both a convex hull polygon (WGS84) and a bounding box [minLon,minLat,maxLon,maxLat] And given identical inputs and configuration, repeated runs produce identical clusters, centroids, and bounds And each cluster is assigned a stable id derived from (config_version + area + member cell ids)
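For the clustering step, a DBSCAN sketch via scikit-learn with a haversine metric is one plausible realization (eps is supplied in meters and converted to radians):

    import numpy as np
    from sklearn.cluster import DBSCAN

    EARTH_RADIUS_M = 6_371_000

    def cluster_high_impact(cells, score_threshold, eps_m, min_samples):
        # cells: dicts with lat/lon in degrees and a score in [0, 1].
        eligible = [c for c in cells if c["score"] >= score_threshold]
        if not eligible:
            return []
        coords = np.radians([[c["lat"], c["lon"]] for c in eligible])
        labels = DBSCAN(eps=eps_m / EARTH_RADIUS_M,  # haversine uses radians
                        min_samples=min_samples,
                        metric="haversine").fit_predict(coords)
        clusters = {}
        for cell, label in zip(eligible, labels):
            if label != -1:  # -1 marks noise points, never clustered
                clusters.setdefault(label, []).append(cell)
        return list(clusters.values())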
Human-readable Cluster Explanations
Given a computed cluster and the active configuration When the explanation is generated Then the explanation string follows the template: hail {p90_hail}"; wind {max_wind} mph; density {mean_density}/mi² (where " is the inch mark) And p90_hail is the 90th percentile hail size in inches rounded to 0.1" And max_wind is the maximum wind speed in mph rounded to the nearest integer And mean_density is the average claims density per square mile rounded to 0.1 And the cluster output includes both explanation_fields {p90_hail, max_wind, mean_density} and the explanation string And units appear exactly as specified (" for hail, mph for wind, /mi² for density)
Impact Scores and Clusters API for Triage Board
Given a valid OAuth2 access token with scope triage:read When the client calls GET /api/v1/geoheat/impact?bbox={minLon,minLat,maxLon,maxLat}&asOf={ISO8601}&include={cells|clusters|both}&minScore={0..1} Then the API responds 200 with JSON containing config_version, last_updated, and arrays for requested entities And responses include ETag and Cache-Control headers And for bbox areas ≤ 25mi x 25mi, P95 latency ≤ 500 ms when served from cache and ≤ 2,000 ms on cold compute And pagination is provided via a cursor when cells > 10,000 And invalid parameters yield 400 with error.code and error.message; unauthorized or insufficient scope yields 401/403; rate-limited requests yield 429 with Retry-After And cells include {cell_id, center, score, component_contributions, data_quality}; clusters include {cluster_id, centroid, bounds, size, explanation} And results are filtered by minScore and include parameter
Admin Tuning and Versioned Configurations
Given a user with admin role When the user creates a configuration via POST /api/v1/geoheat/configs with weights, normalization ranges, algorithm, and thresholds Then weights must be ≥ 0 and sum to 1.0 (±0.001); ranges and thresholds must pass schema validation; algorithm ∈ {DBSCAN,HDBSCAN} And on success the system assigns an immutable semantic version (e.g., 1.2.0) and stores created_by and created_at When an admin activates a configuration version Then it becomes the active config_version for subsequent computations and is recorded in an audit log with actor, timestamp, and diff And only admins can create/activate/rollback (others receive 403) And rollback to any prior version is supported via POST /api/v1/geoheat/configs/{version}:activate And previously active versions remain immutable and queryable
Incremental Recalculation and Caching on New Data
Given new hail, wind, or claims data ingested for area A and time window t When the incremental recompute job runs Then only cells overlapping A are recomputed and cached keys for affected cells/clusters are invalidated And 95% of affected cells are available via the API with updated scores within 5 minutes of data ingestion And clusters overlapping A are recomputed and their ids remain stable if membership is unchanged; ids change if membership changes And ETag values change for any response whose underlying data changed And the system processes at least 100k affected cells per minute without backlog growth over a 10‑minute window
Auto‑Triage Reprioritization
"As a scheduler, I want the triage board to auto‑prioritize high‑impact jobs so that urgent cases are addressed before lower‑risk ones."
Description

Integrate impact scores with the triage board to automatically sort jobs, flagging "High Impact" cases above a configurable threshold. Provide bulk actions to move clustered addresses to the top, preserve manual overrides with an audit trail, and display score explanations inline for transparency. Support tie‑breakers (e.g., customer tenure, open claim status, SLA breach risk) and allow disabling at the user or workspace level via feature flags.

Acceptance Criteria
Auto-Sort by Impact Score Threshold
Given the user has access to the triage board and Auto‑Triage is enabled at both workspace and user levels When impact scores are available for jobs Then the board auto‑sorts jobs in descending Impact Score order within 2 seconds of load or refresh And any job with score >= the configured High Impact threshold displays a "High Impact" badge and is placed above jobs below the threshold And jobs with no score remain in their prior order beneath scored jobs and are labeled "No Score"
Configurable High Impact Threshold
Given a workspace admin opens Auto‑Triage settings When they set a High Impact threshold between 0 and 100 (step 1) and save Then the new threshold persists at workspace scope and takes effect on all boards within 30 seconds or next refresh And a default threshold of 70 is pre‑populated on first enablement And invalid entries (blank, <0, >100, non‑numeric) are blocked with inline validation and no save occurs
Deterministic Tie‑Breakers for Equal Scores
Given two or more jobs have equal Impact Scores within a tolerance of ±0.1 When the board orders these jobs Then tie‑breakers are applied in this order: 1) SLA breach risk (higher risk first), 2) open claim status (open before none), 3) customer tenure (longer tenure first), 4) created date (older first), 5) address alphabetical (A→Z) And the applied tie‑breaker(s) are displayed in a tooltip or details pane labeled "Sorting by: …" for each affected job And sorting is stable such that reloading the board produces the same order given unchanged data
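A sketch of the tie-breaker chain as a single sort key (fields are illustrative; bucketing the score to one decimal approximates the ±0.1 tolerance):

    def triage_sort_key(job):
        return (
            -round(job["impact_score"], 1),     # descending score, bucketed
            -job["sla_breach_risk"],            # higher risk first
            0 if job["has_open_claim"] else 1,  # open claims first
            -job["customer_tenure_days"],       # longer tenure first
            job["created_at"],                  # older first
            job["address"].lower(),             # A -> Z
        )

    # jobs: the board's current list of job dicts. Python's sort is
    # stable, so reloads with unchanged data keep the same order.
    jobs.sort(key=triage_sort_key)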
Manual Override Preservation with Audit Trail
Given a user with edit permissions manually reorders a job (drag‑and‑drop) or pins it When auto‑triage recalculates or the page reloads Then the overridden job maintains its manual position relative to other overridden jobs and is excluded from auto reordering until reverted And an audit entry is recorded capturing user, timestamp, action type, from/to position, and optional reason And the job shows a "Manual Override" indicator with an action to "Revert to Auto" that restores auto sorting immediately
Bulk Move Clustered Addresses to Top
Given the GeoHeat Overlay is visible and the user selects a geographic cluster of addresses When the user clicks "Move Selected to Top" and confirms Then only the selected jobs are moved into a Top section within 3 seconds, ordered by Impact Score and tie‑breakers And an Undo action is available for 30 minutes to revert the bulk move And an audit entry records the selection method (cluster polygon or filter), count of jobs moved, and initiating user
Inline Score Explanation for Transparency
Given a job on the triage board When the user clicks "Why this score?" or hovers the info icon Then a panel or tooltip opens within 300 ms showing the score breakdown (hail size, wind swath proximity, claim density, weights), data source timestamps, and the final Impact Score And the explanation matches the sorting calculation within a tolerance of ±0.1 And the explanation is accessible via keyboard (focusable, ESC to close, ARIA labels)
Workspace and User-Level Feature Flags
Given a workspace admin disables Auto‑Triage at the workspace level When any user loads the triage board Then auto sorting, High Impact badges, and bulk move options are disabled/hidden, while existing manual overrides remain intact And when the workspace is enabled but a specific user disables Auto‑Triage in their profile, only that user sees manual ordering while others see auto‑sorted boards And feature flag changes propagate to active sessions within 1 minute or on next refresh
Map Performance, Caching & SLOs
"As a user, I want overlays to load quickly and reliably so that I can work efficiently during storm responses."
Description

Pre‑render raster/vector tiles for hazard and density layers with CDN-backed caching and delta updates to meet p95 overlay tile load times under 500 ms and maintain interactive map panning at 45+ FPS on modern hardware. Implement client/server caching, ETags, and compressed payloads; degrade to last‑known tiles during provider outages; and instrument end‑to‑end telemetry with alerts on SLO breaches. Include load testing and capacity planning to support concurrent surge usage during major storm events.

Acceptance Criteria
p95 Overlay Tile Load Time < 500 ms
Given the GeoHeat overlay is enabled on the triage board for an active storm region and the QA baseline device is on stable broadband (50–100 ms RTT, 50+ Mbps) using the latest Chrome When the user pans and zooms to load at least 10,000 overlay tiles across zoom levels Z8–Z13 during a 15-minute session under the capacity-plan load profile Then the client-measured p95 tile load time (request start to tile render) is <= 500 ms and p99 <= 800 ms And the tile request error rate (4xx/5xx/timeouts) is <= 1% And the CDN cache-hit ratio for overlay tiles is >= 90% during the test
Map Panning Performance ≥45 FPS With Overlay On
Given the GeoHeat overlay is enabled with hail, wind, and claim density layers visible on the QA baseline device (4-core CPU, 16 GB RAM, integrated GPU, 1080p) on the latest Chrome When continuously panning for 30 seconds across a 10 km by 10 km area at zoom level Z11 Then the 1-second rolling average FPS never drops below 45 And the 95th percentile frame time is <= 22 ms And there are no input janks > 100 ms
Client/Server Caching, ETags, and Compression
Given a previously fetched overlay tile with ETag "X" When the client re-requests it with If-None-Match "X" before TTL expiry Then the server responds 304 Not Modified within 200 ms and includes Cache-Control with max-age >= 3600 and ETag "X" Given the same tile is in the browser cache and TTL not expired When the map re-renders it Then no network request occurs and the tile displays within 50 ms Given a cold tile fetch with Accept-Encoding "br,gzip" When the tile is requested from the origin via CDN Then the response includes Content-Encoding "br" or "gzip" and decodes successfully in the client Given the tile content has changed since last fetch When re-requested with If-None-Match "X" Then the server responds 200 OK with a new ETag and updated Cache-Control, and intermediaries respect the new validators
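A minimal conditional-request handler, sketched with Flask (render_tile is a hypothetical lookup returning the tile body and its stable ETag):

    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.route("/tiles/<int:z>/<int:x>/<int:y>")
    def tile(z, x, y):
        body, etag = render_tile(z, x, y)  # hypothetical cache/origin lookup
        if request.headers.get("If-None-Match") == etag:
            # Unchanged: revalidate cheaply instead of resending the tile.
            return Response(status=304, headers={"ETag": etag})
        resp = Response(body, mimetype="application/json")
        resp.headers["ETag"] = etag
        resp.headers["Cache-Control"] = "public, max-age=3600"
        return resp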
Pre-rendered Tiles and CDN Cache Warm
Given the nightly pre-render window from 00:00–02:00 UTC When the tile pre-render job runs Then raster/vector tiles for hail size, wind swaths, and claim density are generated for zoom levels Z4–Z13 across all configured geographies with <= 0.1% generation failures And tiles are published to the origin successfully And a CDN warm-up primes the top access decile of tiles based on the last 7 days, achieving >= 85% cache-hit within 30 minutes of publish And job duration, success rate, and missing-tile counts are logged to telemetry with zero critical errors
Delta Updates and Targeted CDN Invalidation
Given new hazard data arrives for specific geographic extents while the system is live When the delta update pipeline executes Then only tiles intersecting the changed extents are regenerated and invalidated at the CDN And 95% of updated tiles are available at the CDN edge within 10 minutes of data arrival and 99% within 20 minutes And users do not see mixed-epoch tiles within the same viewport more than 2 minutes after the update completes And origin QPS remains within ±10% of baseline during the update window
Degrade to Last-known Tiles on Provider Outage
Given the upstream provider is experiencing an outage (tile 5xx rate > 20% or health check failing for 2 consecutive minutes) When users load or pan the GeoHeat overlay Then last-known cached tiles render with p95 tile time <= 800 ms And a non-blocking banner indicates "Data temporarily stale" with a last-updated timestamp And failed tile requests use exponential backoff (max 3 retries) with a 2 s per-request timeout And map panning maintains the FPS criterion from the performance scenario And upon provider recovery, fresh tiles resume within 5 minutes and the banner auto-clears
Telemetry, SLO Monitoring, and Surge Capacity Validation
Given end-to-end telemetry is enabled across client, CDN, and origin with shared trace IDs When normal and surge loads defined in the capacity plan (peak concurrent sessions, RPS, geography mix) are exercised by users and synthetic clients Then dashboards report client-side p50/p95/p99 tile load times, FPS metrics, CDN cache-hit ratio, and origin latency by layer and zoom And SLOs are enforced: p95 tile load <= 500 ms and panning FPS >= 45 And alerts notify on-call within 5 minutes when SLOs are breached for 5 consecutive minutes, error rate > 1%, or CDN cache-hit < 85% And a quarterly load test reaches capacity-plan peak while meeting SLOs and documents >= 20% headroom

Rules Engine

Configure carrier, jurisdiction, or franchise rules that set priority, due times, and required line items. Enforce consistency at scale and reduce back‑and‑forth with carriers.

Requirements

Visual Rule Builder
"As an operations manager, I want to visually create and publish rules by carrier and jurisdiction so that estimates automatically follow our standards and reduce back-and-forth with carriers."
Description

A web-based interface to compose, validate, and publish rules that set priority, due times, and required line items based on carrier, jurisdiction, franchise, and claim attributes; supports condition/action blocks, templates per carrier, rule precedence and scoping, draft vs. published versions, schema validation, conflict detection, tagging, and change history; integrates with the estimate creation pipeline and permissions model to control who can create, review, and publish rules.

Acceptance Criteria
Compose Rule with Condition/Action Blocks
Given I am a user with the Rule Editor permission When I create a rule with conditions Carrier="ABC Insurance" AND Jurisdiction="TX" AND ClaimType="Hail" And I add actions Priority="High", DueTime="48 hours from claim create", RequiredLineItems=["Tarp Install", "Emergency Dry-In"] Then the builder renders a valid logical tree and JSON preview matching the schema And the rule can be saved as Draft without validation errors And reopening the draft shows the same condition/action configuration
Real-time Schema Validation and Error Messaging
Given I am editing a rule When I leave a required field empty or use an invalid operator for a field type Then inline error messages appear adjacent to the field with a clear description and expected format And the Publish action is disabled while any blocking errors exist And fixing the field clears the error state without a page reload
Draft, Review, Publish Lifecycle with Change History
Given a Draft rule exists When I submit it for review Then users with Reviewer role can approve or request changes with a required comment When a Reviewer approves and a Publisher publishes Then a new immutable version is created, status=Published, effective timestamp recorded, and the Draft does not affect live evaluations before publish And the change history logs author, timestamp, diff of changes, and review comments
Precedence, Scoping, and Conflict Detection
Given two rules exist where Rule A (priority 10, scope Carrier=ABC Insurance, Jurisdiction=TX) sets Priority=High and Rule B (priority 5, same scope) sets Priority=Normal When attempting to publish Rule B Then the system detects a conflict on action "Priority" for the overlapping scope and displays a conflict report identifying the rules, scopes, and actions And evaluation preview for a sample claim shows that only Rule A’s action applies due to higher precedence And if conflicts are unresolved (no deterministic precedence), publish is blocked until the user changes priority or scope
Carrier Templates and Rule Cloning with Tagging
Given a Carrier Template "ABC Insurance - Residential" exists When I create a new rule from this template Then the rule is instantiated with predefined conditions/actions, naming convention, and tags from the template, status=Draft And I can edit fields and save without modifying the original template And cloning a rule creates a new Draft with a unique ID and a "-copy" suffix in the name And I can add or remove tags on the rule and filter the rule list by those tags to return the new rule within 1 second
Estimate Pipeline Integration and Rule Effects
Given a claim with attributes {Carrier:"ABC Insurance", Jurisdiction:"TX", Franchise:"Austin", ClaimType:"Hail", CreatedAt:"2025-09-04T10:00:00Z"} When an estimate is created for this claim Then the rules engine is invoked and returns actions within 200 ms p95 over 10 consecutive runs And the estimate reflects Priority=High, DueTime <= "2025-09-06T10:00:00Z", and RequiredLineItems include all specified by matching rules without duplicates And an audit trail is attached to the estimate showing which rule versions fired and the matched conditions
Permissions and Access Control for Rule Operations
Given roles are configured as Creator, Reviewer, Publisher, and Viewer When a user without Publisher role attempts to publish a rule Then the action is denied with a clear message and HTTP 403 on the API And users can only view or edit rules according to their role And all create, update, review, and publish actions are recorded in the audit log with user ID and timestamp
Deterministic Rule Evaluation Engine
"As an estimator, I want rules to be applied automatically and consistently when I start or update an estimate so that priorities, due times, and required line items are set without manual steps."
Description

A stateless, horizontally scalable service that evaluates published rules at key events (intake, estimate creation, line-item edits, submission), producing normalized outputs for priority, due times, and required line items; guarantees deterministic ordering and conflict resolution, idempotent outcomes, and performance SLA under 50ms per evaluation; exposes API and event hooks, supports caching, retries, and metrics, and records evaluation context for auditing.

Acceptance Criteria
Evaluate Rules at Intake, Estimate Creation, Line-Item Edit, and Submission
Given a published ruleset version is active When an evaluation request is received with eventType in {intake, estimate_created, line_item_edited, submission_requested} and a valid context payload Then the engine evaluates rules and returns 200 with a normalized response containing fields: evaluationId (UUID), ruleVersionId (string), eventType (string), priority (integer), dueTimeUtc (ISO-8601), requiredLineItems (array of {code:string, quantity:number}), evaluationTimestampUtc (ISO-8601) And requiredLineItems contains no duplicate codes and each quantity is a non-negative number with up to 3 decimal places And dueTimeUtc is greater than or equal to evaluationTimestampUtc And the response validates against the published OpenAPI schema And no server-side session state is created beyond audit records
Deterministic Ordering and Conflict Resolution Across Rules
Given multiple rules match the same context and propose conflicting priority, dueTimeUtc, or requiredLineItems When the evaluation is executed on any node with identical inputs and the same ruleset version Then the outputs (priority, dueTimeUtc, requiredLineItems) are identical across runs and nodes And conflicts are resolved using this precedence, in order: higher specificity (carrier > jurisdiction > franchise > global), then higher rulePriority, then newer publishedAt timestamp, then lower ruleId lexicographic order And requiredLineItems are merged by code; on duplicate codes, the resulting quantity equals the maximum proposed quantity; the list is deterministically ordered by category ascending then code ascending And when any conflict is resolved, conflictResolutionTrace is included listing the winning ruleId for each field
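A sketch of the precedence chain and the line-item merge (scope labels and field names are illustrative):

    SPECIFICITY = {"carrier": 3, "jurisdiction": 2, "franchise": 1, "global": 0}

    def winning_rule(candidates):
        # Specificity, then rulePriority, then newest publishedAt; the
        # lexicographically lowest ruleId breaks any remaining tie, so
        # identical inputs always select the same rule on any node.
        return min(candidates, key=lambda r: (
            -SPECIFICITY[r["scope"]],
            -r["rulePriority"],
            -r["publishedAt"].timestamp(),
            r["ruleId"]))

    def merge_line_items(proposals):
        # Merge by code; duplicate codes take the max quantity. Output
        # order is deterministic: category ascending, then code ascending.
        merged = {}
        for item in proposals:
            code = item["code"]
            if code not in merged or item["quantity"] > merged[code]["quantity"]:
                merged[code] = item
        return sorted(merged.values(), key=lambda i: (i["category"], i["code"]))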
Idempotent Outcomes Across Retries and Nodes
Given the same input request (including identical context and resolved ruleset version) is sent multiple times or retried When evaluations are processed within a 10-minute window Then the outputs for priority, dueTimeUtc, and requiredLineItems are byte-for-byte identical across responses And an evaluationHash (SHA-256) derived from inputs, ruleset version, and outputs is identical across runs And event hooks emitted use evaluationHash as the deduplication key and are delivered at-most-once to each subscriber And supplying the same idempotencyKey in the request results in a single side effect (e.g., one webhook delivery), even across retries
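One way to derive such a hash: serialize to canonical JSON before digesting, so byte-for-byte equality holds across nodes and retries (the payload shape is an assumption):

    import hashlib
    import json

    def evaluation_hash(context, ruleset_version, outputs):
        # Sorted keys and fixed separators remove whitespace/order variance.
        payload = json.dumps(
            {"context": context, "ruleset": ruleset_version, "outputs": outputs},
            sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(payload.encode("utf-8")).hexdigest()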
Performance SLA: ≤ 50 ms per Evaluation
Given the service is operating with three or more instances and warmed caches When 10,000 evaluation requests are executed at 200 RPS distributed across {intake, estimate_created, line_item_edited, submission_requested} with median payload size ≤ 25 KB Then p95 end-to-end evaluation latency per request is ≤ 50 ms and p99 is ≤ 75 ms measured at the service boundary (excluding client network latency) And 5xx error rate is ≤ 0.1% and timeouts are ≤ 0.1% And under a cold cache, p95 latency is ≤ 80 ms after the first 100 requests
API Contract and Event Hooks Delivery
Given the public endpoint POST /v1/rules/evaluate and configured event hooks (webhooks or message bus) When a valid request is submitted Then the response status is 200 and the body matches the OpenAPI schema exactly (field names, types, enums) And a RulesEvaluated event is published within 2 seconds to the configured sink with headers X-Signature (HMAC-SHA256) and X-Evaluation-Hash And transient delivery failures are retried up to 5 times with exponential backoff (initial 1s, factor 2, jitter 0.2) before dead-lettering; duplicates are suppressed using evaluationHash for 24 hours And invalid requests return 400 or 422 with a machine-readable error code and JSON pointer to the offending field; no event is emitted
Caching Behavior and Rule Publication Invalidation
Given ruleset lookups are cacheable by (carrier, jurisdiction, franchise, publishedVersionId) When evaluations for the same cache key are executed under steady-state traffic Then cache hit rate is ≥ 85% and median item age is ≤ 2 minutes And a newly published ruleset invalidates all affected cache keys within 5 seconds globally And a Cache-Control: no-cache request header forces a cache bypass for that evaluation only And maximum staleness of rule data used in any evaluation is ≤ 5 minutes
Audit Logging and Operational Metrics
Given audit logging is enabled When an evaluation completes Then an immutable audit record is stored within 1 second containing: evaluationId, evaluationHash, ruleVersionId, inputContextHash, matchedRuleIds, conflictResolutionTrace (when present), outputs, timings, and nodeId And audit records are retrievable by evaluationId and evaluationHash via GET /v1/rules/evaluations/{id} with access control, and retained for 90 days And operational metrics are exposed at /metrics including counters and histograms for evaluations, latency, cache hits/misses, conflicts resolved, webhook successes/failures; alerts are defined for latency and error-rate SLOs
Jurisdiction & Carrier Data Mapping
"As an intake specialist, I want RoofLens to automatically resolve carrier and jurisdiction from job data so that the correct rule set is applied without manual lookup."
Description

Automated resolution of carrier, jurisdiction, franchise, and claim type from intake data using address geocoding, policy metadata, and integration mappings; normalizes fields into a canonical model for rule conditions, handles fallbacks when data is incomplete, and exposes a mapping catalog with versioning and access controls to ensure the correct rule set is selected every time.

Acceptance Criteria
Auto-Resolve Jurisdiction from Address Geocoding
Given an intake payload with service_address "123 Main St, Dallas, TX 75201, USA" And no explicit jurisdiction provided When the mapping service performs geocoding and jurisdiction resolution Then canonical.jurisdiction.code = "TX" And canonical.jurisdiction.county = "Dallas" And canonical.jurisdiction.country = "US" And mapping.confidence >= 0.95 And p95 latency for the geocoding step <= 2000 ms under normal network conditions And the response includes source.geocoding_provider and source.geocoding_timestamp
Map Carrier and Franchise from Policy Metadata and Integrations
Given policy.external_carrier_code = "SFG" and policy.franchise_code = "DFW-01" And an integration mapping exists: "SFG" -> canonical.carrier.id = "state_farm" When the mapping service processes the intake Then canonical.carrier.id = "state_farm" And canonical.franchise.id = "dfw-01" And mapping.source.carrier = "integration:xactimate" And three repeated runs with identical input produce byte-identical canonical.carrier and canonical.franchise sections (idempotent)
Normalize Intake Into Canonical Claim Model
Given an intake with claim_type = "hail", loss_date = "2025-08-15 14:30:00-05:00", and roof_area = "2500 sq ft" When normalization executes Then canonical.claim.type = "hail" And canonical.loss_date = "2025-08-15T19:30:00Z" (ISO 8601 UTC) And canonical.roof.area_m2 is within 0.1% of 232.26 And output validates against JSON Schema version "v1.2" with zero errors And unknown input keys are ignored without causing failures
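A sketch of the normalization step for these fields (roof area is assumed pre-parsed to a number; the canonical shape is abbreviated):

    from datetime import datetime, timezone

    SQFT_TO_M2 = 0.09290304

    def normalize_intake(intake):
        loss_local = datetime.fromisoformat(intake["loss_date"])
        return {
            "claim": {"type": intake.get("claim_type", "unknown").lower()},
            # Offset-bearing local timestamps convert to ISO 8601 UTC.
            "loss_date": loss_local.astimezone(timezone.utc)
                                   .strftime("%Y-%m-%dT%H:%M:%SZ"),
            # 2500 sq ft * 0.09290304 = 232.26 m² (two decimals).
            "roof": {"area_m2": round(intake["roof_area_sqft"] * SQFT_TO_M2, 2)},
        }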
Fallback Behavior for Incomplete or Ambiguous Data
Given an intake with address = "Springfield, 62704" and no carrier code or claim_type When mapping runs Then canonical.jurisdiction.code = "IL" (resolved from ZIP 62704) And canonical.carrier.id = "unknown" And canonical.claim.type = "unknown" And canonical.flags.requires_review = true with reasons ["missing_carrier","missing_claim_type","ambiguous_address"] And an audit event "mapping.fallback_applied" is emitted with trace_id and reasons
Deterministic Rule Set Selection From Canonical Mapping
Given canonical fields carrier.id = "state_farm", jurisdiction.code = "TX", claim.type = "hail", franchise.id = "dfw-01" And mapping_version = "2025.09.0" and evaluation_date = "2025-09-04" When rule set selection executes Then ruleset.selection_key = "state_farm|TX|hail|dfw-01" And ruleset.version selected is the highest effective on or before 2025-09-04 And selection completes in <= 50 ms p95 from the in-memory index And audit log records mapping_version, ruleset_id, ruleset_version, and selection_key
Mapping Catalog Versioning and Immutability
Given mapping catalog versions exist: 2025.07.0 (active) and 2025.09.0 (draft) And actor.role = "admin" When the admin publishes 2025.09.0 with effective_date = "2025-09-05" Then version 2025.09.0 status = "scheduled_active" with effective_date = "2025-09-05" And version 2025.07.0 remains immutable and retrievable by exact version id And jobs created before 2025-09-05 continue to use 2025.07.0; jobs on/after use 2025.09.0 And the publish action is recorded with actor_id, timestamp, and diff summary And GET /mapping-catalog/versions/2025.07.0 returns content identical to its original publish snapshot
Access Controls for Mapping Catalog
Given roles = ["admin","editor","viewer"] When a viewer attempts POST /mapping-catalog/publish Then the response status = 403 (forbidden) And an editor can POST /mapping-catalog/versions to create drafts but receives 403 on publish And an admin can publish and set effective_date successfully (201/200) And 100% of access attempts are logged with actor_id, role, action, resource, and outcome
Required Line Item Enforcement & Overrides
"As an estimator, I want the system to auto-add and enforce required line items with a clear override path so that my bids meet carrier expectations while allowing justified exceptions."
Description

Mechanisms to auto-add required line items defined by rules with default quantities and notes, present a compliance checklist, block estimate submission if items are missing, and allow role-based override with mandatory reason and attachments; logs all overrides for audit and supports mapping to standard code sets (e.g., Xactimate) to ensure carrier-aligned outputs.

Acceptance Criteria
Auto-Addition of Required Line Items with Defaults
Given a ruleset is active for the estimate’s carrier and jurisdiction When a new estimate is created or rules are re-applied Then the system auto-adds all required line items exactly once, with their default quantities, units, and notes from the rule, and associates the originating rule ID to each added item Given a required line item already exists on the estimate When rules are applied Then the item is not duplicated and its user-edited quantity and notes are not overwritten Given a required line item has defined default values in the rule When it is auto-added Then the quantity and note match the rule values; if no default is defined, those fields are left unpopulated and remain editable
Real-Time Compliance Checklist and Rule Traceability
Given an estimate subject to active rules When the estimate view loads Then a compliance checklist lists every required line item with a status of Present or Missing and displays the source rule name and priority Given a required line item is added, removed, or edited When the change is saved Then the checklist status updates within 1 second and remains consistent with the estimate contents Given a user clicks a checklist entry When the entry is selected Then the UI focuses or navigates to the corresponding line item for quick remediation
Submission Blocker for Missing Required Items
Given at least one required line item is Missing and no approved override exists When a user attempts to submit, export, or mark the estimate final Then the action is blocked and an error modal lists each missing item, its source rule, and the required actions (add item or request override) Given all required items are Present or have approved overrides When a user attempts to submit, export, or mark the estimate final Then the action succeeds without blocker Given an API submission endpoint is called with missing required items and no overrides When the request is processed Then the API responds with HTTP 422 and a machine-readable list of missing items and rule identifiers
Role-Based Override with Mandatory Reason and Attachment
Given a user with the OverrideRequiredItems permission When they initiate an override for a specific required item Then the system requires a reason text of at least 15 characters and at least one attachment before allowing the override to be saved Given a user without the OverrideRequiredItems permission When they attempt to initiate an override Then the UI disables the control and the API responds with HTTP 403 Forbidden Given a valid override is saved When the estimate checklist is refreshed Then the item’s status changes to Overridden and the estimate becomes eligible for submission subject to other checks
Override Audit Logging and Immutability
Given any override is created, updated, or revoked When the action is saved Then an immutable audit record is written containing estimate ID, line item identifier and code, rule ID, user ID and role, action type (created/updated/revoked), timestamp (UTC), reason text, and attachment IDs Given an audit record exists When a user attempts to modify or delete it Then the system prevents changes; corrections must append a new audit entry Given the audit trail is viewed for an estimate When filters are applied (date range, user, rule ID) Then the result set updates accordingly and can be exported to CSV
Standard Code Mapping for Carrier-Aligned Outputs
Given a required line item has a mapping to a standard code set (e.g., Xactimate) When the item is auto-added or exported Then the mapped code, unit, and description are used in the estimate UI, PDF, and data exports Given a required line item lacks a valid mapping When the user attempts submission Then the system flags a mapping error in the checklist and blocks submission until a mapping is provided or an authorized override is recorded Given an estimate is exported to a carrier-specific format When validation is performed Then the output passes schema or format validation for that carrier, reflecting mapped codes
Priority and Due Time Application from Rules
Given a rule defines a priority and due time for a required item When the item is added to the estimate Then the priority and due time are applied to the line item and displayed in the UI and checklist Given the due time elapses before submission When the checklist is viewed Then the item is marked Overdue with a timestamped indicator Given an estimate with items carrying priority and due time When the estimate is exported or submitted Then the priority and due time metadata are included in downstream outputs where supported
SLA Due Time Calculator with Business Calendars
"As a dispatcher, I want due times calculated using our business calendars and carrier rules so that work is scheduled accurately and SLAs are met."
Description

A calculator that derives due dates from rule-defined SLAs while honoring time zones, business hours, weekends, carrier/jurisdiction holidays, and daily cutoff times; updates task deadlines and notifications, recalculates when inputs change, provides a visible countdown in the job view, and includes an API and UI to manage calendars per carrier and franchise.

Acceptance Criteria
Compute Due Date Using Business Calendars and Time Zones
Given a job is created on 2025-03-12 16:30 in America/Denver with SLA = 2 business hours and business hours = 08:00–17:00 Mon–Fri When the due date is calculated Then the due date is 2025-03-13 09:30 America/Denver and the stored deadline includes timezone = America/Denver Given a job is created on 2025-03-12 16:46 in America/Denver with a daily cutoff = 16:45 and SLA = 2 business hours, business hours = 08:00–17:00 Mon–Fri When the due date is calculated Then work starts 2025-03-13 08:00 and the due date is 2025-03-13 10:00 America/Denver Given a job is created on 2025-03-13 16:00 in America/Denver with SLA = 4 business hours, business hours = 08:00–17:00 Mon–Fri, and 2025-03-14 is a carrier holiday When the due date is calculated Then 1 hour is counted on 2025-03-13, no hours on 2025-03-14 (holiday) or weekend, remaining 3 hours accrue on 2025-03-17 and the due date is 2025-03-17 11:00 America/Denver Given a job is created on 2025-03-12 15:00 in America/New_York with SLA = 5 business hours and business hours = 09:00–17:00 Mon–Fri When the due date is calculated Then the due date is 2025-03-13 12:00 America/New_York and all UI displays show the timezone abbreviation (e.g., EDT)
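The worked examples above all follow from a single accrual loop. A minimal sketch, under one reading of "daily cutoff" (intake after the cutoff rolls to the next business day), with weekends and holidays skipped as specified:

    from datetime import datetime, time, timedelta

    def add_business_hours(start, sla_hours, open_t=time(8), close_t=time(17),
                           holidays=frozenset(), cutoff=None):
        def is_workday(d):
            return d.weekday() < 5 and d not in holidays

        cur = start
        if cutoff is not None and cur.time() > cutoff:
            # Created after the daily cutoff: clock starts next morning.
            cur = datetime.combine(cur.date() + timedelta(days=1),
                                   open_t, tzinfo=cur.tzinfo)
        remaining = timedelta(hours=sla_hours)
        while True:
            day_open = datetime.combine(cur.date(), open_t, tzinfo=cur.tzinfo)
            day_close = datetime.combine(cur.date(), close_t, tzinfo=cur.tzinfo)
            if is_workday(cur.date()) and cur < day_close:
                window_start = max(cur, day_open)
                available = day_close - window_start
                if available >= remaining:
                    return window_start + remaining
                remaining -= available
            # Skip to the next day's opening time.
            cur = datetime.combine(cur.date() + timedelta(days=1),
                                   open_t, tzinfo=cur.tzinfo)

Running this against the first example (start 2025-03-12 16:30 America/Denver, sla_hours=2) accrues 30 minutes on 3/12 and returns 2025-03-13 09:30, matching the criterion.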
Recalculate Due Date on Input Changes with Audit Trail
Given an existing job with a computed due date When the SLA value is changed via Rules Engine Then the due date is recalculated immediately and updated in the UI within 5 seconds and an audit record stores old_due, new_due, change_reason = "SLA changed", rule_version, calendar_version, actor Given an existing job with a computed due date When the assigned business calendar (hours, cutoff, holidays) is updated or the job is reassigned to a different carrier/franchise Then the due date is recalculated, notifications are rescheduled, and an audit record stores the delta; no duplicate notifications are sent for the old schedule Given an existing job with a computed due date When the job timezone is changed Then the stored deadline is converted to the new timezone without changing the absolute instant and the countdown reflects the new local time within 5 seconds Given a recalculation request is received multiple times with the same inputs When processed concurrently Then the result is idempotent and only one audit entry is written for that unique change
Visible Countdown Timer in Job View
Given a job with a computed due date within business calendars When the job view loads Then a countdown shows remaining business time in days/hours/minutes, updates at least every minute, and pauses outside business hours Given remaining business time > 24 hours When the countdown renders Then the badge color is neutral and the tooltip displays exact due datetime, timezone, calendar name, and rule source Given remaining business time <= 24 hours and > 0 When the countdown renders Then the badge color is warning (amber) and the hours/minutes remaining are accurate to the minute Given the due date is in the past When the countdown renders Then the badge color is critical (red) and shows time overdue with a leading "+"; the overdue counter respects business hours
SLA Notifications and Escalations
Given a computed due date When notification schedules are generated Then messages are queued for T-24h, T-2h, T-0 (due), and every 4 business hours overdue, constrained to send within business hours of the applicable calendar Given a T-2h notification would fall outside business hours When scheduling Then it is moved to the first minute of the next business window that still precedes the due time; if no such window exists, it is sent at the business-window opening nearest before T-0 Given the due date changes due to recalculation When existing notifications are compared to the new schedule Then obsolete notifications are canceled and new ones are scheduled; no duplicate messages are sent for the same milestone Given email and in-app are enabled channels When a notification is sent Then both channels are delivered with tokens {job_id, due_at_local, due_at_utc, timezone_abbr, priority, rule_name}; delivery status and timestamps are logged per channel
Calendar Management API (Carrier and Franchise)
Given an authenticated user with calendar:write scope When POST /calendars with {name, time_zone, hours_by_day, cutoff_time, holidays[]} is called Then the API returns 201 with the created calendar, validates hours (no overlaps, within 00:00–24:00), cutoff within open hours, and rejects invalid time zones with 422 Given an existing calendar When PATCH /calendars/{id} updates hours, cutoff, or holidays Then a new calendar_version is created, effective_at is recorded, and GET returns the latest active version by default Given a carrier and a franchise When PUT /assignments sets {entity_type: carrier|franchise, entity_id, calendar_id} Then GET /assignments returns the mapping and POST /preview-due-date accepts {start_at, sla_hours, entity_type, entity_id, jurisdiction} and returns computed due_at with trace of applied hours, cutoff, holidays Given a delete request When DELETE /calendars/{id} Then the API rejects deletion if the calendar is in use and allows soft-delete when not; 409 is returned on conflict; audit entries are written for all mutations
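A sketch of the 422 validation rules named above (valid IANA time zone, non-overlapping hours, cutoff inside open hours). The payload shape and helper name are assumptions mirroring the criterion's field names; the 00:00–24:00 bound check is elided since zero-padded HH:MM strings satisfy it by construction.

```python
from zoneinfo import ZoneInfo, ZoneInfoNotFoundError

def validate_calendar(payload: dict) -> list[str]:
    """Return a list of error messages; empty means 201, non-empty means 422."""
    errors = []
    try:
        ZoneInfo(payload["time_zone"])
    except (ZoneInfoNotFoundError, KeyError):
        errors.append("time_zone: not a valid IANA zone")
    # hours_by_day assumed like {"mon": [["08:00", "12:00"], ["13:00", "17:00"]]}
    for day, ranges in payload.get("hours_by_day", {}).items():
        spans = sorted(ranges)
        for (s1, e1), (s2, _e2) in zip(spans, spans[1:]):
            if s2 < e1:                       # next range starts before prior ends
                errors.append(f"{day}: hour ranges overlap at {s2}")
    cutoff = payload.get("cutoff_time")
    if cutoff:
        inside = any(s <= cutoff <= e
                     for ranges in payload.get("hours_by_day", {}).values()
                     for s, e in ranges)
        if not inside:
            errors.append("cutoff_time: outside open hours")
    return errors
```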
Calendar Management UI with Preview Simulator
Given a user with Admin role opens Settings → Calendars When creating or editing a calendar Then they can set time zone, per-day business hours, daily cutoff, and add holidays (single date or range) with validation errors shown inline Given a calendar form with invalid inputs (overlapping hours, cutoff outside open hours, invalid date range) When attempting to save Then the Save action is disabled and specific error messages identify the exact fields to correct Given a saved calendar When using the Preview Due Date tool with inputs {start_at, sla_hours, carrier/franchise, jurisdiction} Then the UI displays the calculated due date/time, applied time zone, and a step-by-step trace of hours consumed across days including holidays and weekends Given unsaved changes exist When navigating away Then the user is prompted to confirm discard or stay; accessibility requirements are met (keyboard navigation, focus state, aria labels)
Rule and Calendar Precedence and Fallback
Given a job associated to a franchise with a franchise calendar and a carrier with a carrier calendar When selecting the business calendar for due-date calculation Then the carrier calendar is used; if no carrier calendar exists, the franchise calendar is used; if neither exists, the system default calendar is used Given jurisdiction-specific holidays are configured When calculating the due date Then the effective holiday set is the union of the selected business calendar holidays and the jurisdiction holidays, with duplicates de-duplicated Given conflicting business hours between carrier and franchise calendars When determining hours Then only the selected calendar’s hours and cutoff apply; no blending of hours occurs Given no valid calendar can be resolved When the calculator runs Then the task is marked "Calendar Missing", no due date is set, a blocking alert is shown in the UI, and an error event is emitted for monitoring
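A small sketch of the fallback order above: carrier calendar first, then franchise, then the system default, with jurisdiction holidays unioned into the effective holiday set. All names are illustrative.

```python
def resolve_calendar(carrier_cal, franchise_cal, default_cal,
                     jurisdiction_holidays=frozenset()):
    cal = carrier_cal or franchise_cal or default_cal
    if cal is None:
        # Surfaces as the "Calendar Missing" blocking state; no due date is set.
        raise LookupError("Calendar Missing")
    # Only the selected calendar's hours/cutoff apply; holidays are a union.
    effective_holidays = set(cal.holidays) | set(jurisdiction_holidays)
    return cal, effective_holidays
```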
Rule Simulation Sandbox
"As a QA analyst, I want to simulate rule evaluations before publishing so that I can validate outcomes and prevent production errors."
Description

A safe environment to test rules against sample or live payloads with a visual trace of which conditions matched, what actions fired, and the final outputs; supports version-to-version comparisons, saved test suites, fixture generation from real jobs, and read-only simulation in production to validate changes before publishing.

Acceptance Criteria
Run Simulation on Sample Payload with Visual Trace
Given an authenticated user with Rules:Simulate permission selects Rule Set A version v1 and a valid JSON payload (≤1 MB) from Fixtures or upload When the user clicks Run Simulation Then the engine executes in read-only mode and completes within 5 seconds at P95 And the results display final outputs (priority, due time, required line items) And the trace shows each evaluated condition with boolean result, data path, rule/condition ID, and evaluation order And fired actions are listed with parameters and timestamps And no changes are written to jobs, estimates, or rules And the user can expand/collapse trace nodes and download results as JSON and PDF
Compare Two Rule Versions on a Single Payload
Given a user selects Rule Set A versions v1 and v2 and a single payload (fixture or upload) When the user clicks Compare Then both versions execute read-only and complete within 7 seconds at P95 And the UI shows side-by-side outputs, matched conditions, and fired actions And differences are highlighted with counts of added/removed/changed outputs and actions And a diff report can be downloaded as JSON And if rule schemas are incompatible, a validation error explains missing/renamed fields without running the comparison
Create and Execute Saved Test Suite
Given a user creates a Test Suite with a name and at least 5 test cases, each referencing a fixture or uploaded payload and expected output assertions (e.g., priority=High, includes line item X) When the user runs the suite against a selected rule version Then the system executes all tests in parallel where possible and returns a pass/fail summary with counts and duration And per-test results show passed assertions, failed assertions, and link to the full trace And the suite history is stored with timestamp and executor And the user can re-run failed tests only And the platform supports 100 tests completing within 2 minutes at P95 And the suite can be exported/imported as JSON
Generate Fixture from Real Job Snapshot
Given a user with Fixture:Create permission is viewing Job J in production When the user clicks Create Fixture from Job Then the system snapshots the rules input payload at that moment and redacts PII fields (name, email, phone, street address) while retaining ZIP/region for jurisdiction logic And an immutable Fixture ID is created with a link back to Job J and timestamp And the user may add/edit expected outputs before saving And the fixture is stored in the organization Fixtures list and is selectable in the sandbox And no data on Job J is modified
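An illustrative redaction pass for fixture creation: it deep-copies the job payload (so the live job is never mutated) and masks the PII fields named above while leaving ZIP/region and everything else intact. The key names and flat payload shape are assumptions; nested payloads would need a recursive walk.

```python
import copy

PII_FIELDS = {"name", "email", "phone", "street_address"}

def make_fixture(job_payload: dict) -> dict:
    fixture = copy.deepcopy(job_payload)        # never mutate the live job
    for field in PII_FIELDS & fixture.keys():
        fixture[field] = "[REDACTED]"
    return fixture
```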
Read-Only Simulation in Production (Shadow Mode)
Given an admin enables Shadow Mode for Rule Set A candidate version vNext with a sampling rate between 1% and 100% When live jobs are processed Then vNext evaluates the same inputs in parallel without affecting live decisions or downstream actions And for each sampled job the system records live version outputs, vNext outputs, and a diff summary And added latency to the live request is ≤10% at P95 And a dashboard shows parity percentage, top differing rules/conditions, and error counts And Shadow Mode can be toggled on/off by admins without deploys
Access Control and Audit Logging for Simulations
Given role-based access control is configured When a user without Rules:Simulate attempts to run a simulation, comparison, or suite Then the system returns 403 Forbidden and logs the attempt When an authorized user runs a simulation/comparison/suite or toggles Shadow Mode Then an audit record is stored with user, org, timestamp, rule set and version(s), environment (sandbox or prod shadow), payload hash or fixture ID, runtime, and outcome (Pass/Fail/Error) And audit records are retained for 180 days and exportable to CSV
Validation and Error Handling in Sandbox
Given an uploaded or fixture payload fails schema validation or references missing fields When the user initiates a simulation Then execution is blocked and a structured error is returned listing JSON paths and reasons Given a runtime error occurs during rule evaluation When the simulation runs Then the result is marked Error with the failing rule/condition identifier and stack context, and a partial trace is returned And per-rule evaluation is timeboxed to 3 seconds and overall to 30 seconds; exceeding limits returns a Timeout error without side effects
Audit Trail & Compliance Reporting
"As a compliance officer, I want a complete audit trail and exportable reports of rule application so that we can resolve disputes and demonstrate adherence to carrier requirements."
Description

Immutable logging of rule versions, inputs, evaluation results, user actions, and overrides with timestamps and IDs; provides searchable history, exportable CSV/PDF reports per carrier or jurisdiction, and one-click evidence packs attached to the estimate to streamline dispute resolution and demonstrate adherence to carrier requirements.

Acceptance Criteria
Immutable Event Logging for Rule Evaluations and Overrides
Given a rule evaluation is triggered for an estimate When the rules engine executes Then the system writes an append-only audit event with fields: event_id (UUIDv4), event_type, estimate_id, rule_id, rule_version, carrier_id, jurisdiction_code, user_id (or system), timestamp_utc (ms precision), input_hash (SHA-256), evaluation_result (pass/fail and outputs), correlation_id, and request_id And the event is persisted with a write-once policy such that any update/delete via UI/API returns 403 and is separately logged as a blocked attempt And overrides capture before_value, after_value, override_reason (non-empty), override_by_user_id, override_timestamp_utc, and optional approver_user_id if approval is required And all audit events for a single evaluation share a common correlation_id And timestamps are stored/displayed in UTC and include timezone offset when rendered And a rule publish action records rule_version, publisher_user_id, change_summary, and Git/SemVer reference if available
Searchable Audit History by Carrier, Jurisdiction, and Date Range
Given at least 10,000 audit events exist across multiple carriers and jurisdictions When a user searches using any combination of filters: carrier_id, jurisdiction_code, estimate_id, rule_id, rule_version, user_id, event_type (evaluation, override, publish, export, evidence_pack), override_flag, and date range (UTC) Then results return the first page within 2 seconds for up to 10,000 matching events (95th percentile) with stable pagination (page size selectable: 25/50/100) And results are sortable by timestamp_utc (default desc), carrier_id, jurisdiction_code, user_id, and event_type And each result row displays: timestamp_utc, event_type, estimate_id, rule_id@version, carrier_id, jurisdiction_code, user_id, override_flag, and a link to full event details And the user can save and load named filter presets per workspace
Export Compliance Report to CSV and PDF per Carrier/Jurisdiction
Given a filtered audit history for a carrier and/or jurisdiction is displayed When the user selects Export CSV or Export PDF Then the generated file includes a metadata header (carrier_id, jurisdiction_code, date_range, generated_at_utc, generated_by, filter_hash) and rows with columns: event_id, timestamp_utc, event_type, estimate_id, rule_id, rule_version, user_id, evaluation_result, override_flag, approver_user_id, override_reason, correlation_id And the row count matches the on-screen filtered total (within the current export scope: current page or full set, as selected) And CSV conforms to RFC 4180 with UTF-8 encoding; the PDF supports A4 or Letter page size, selectable at export And the export job completes within 30 seconds for up to 50,000 rows and provides a secure, expiring download link (>= 60 minutes validity) And exports are logged as audit events (event_type=export) with file_hash (SHA-256) and download_count
One-Click Evidence Pack Attached to Estimate
Given an estimate with completed rule evaluations and any overrides exists When the user clicks Generate Evidence Pack on the estimate Then the system generates a single artifact (PDF or ZIP as configured) containing: rule versions applied, inputs (redacted as configured), evaluation outputs, timeline of user actions and overrides, carrier/jurisdiction context, and export/report copies And the evidence pack is automatically attached to the estimate record with a unique file_id, file_hash (SHA-256), size, and generated_at_utc And the artifact is available to share via a time-limited, access-controlled link and downloadable by authorized roles only And generation completes within 60 seconds for estimates with up to 500 audit events And regenerating later preserves prior versions and links each pack to the estimate and correlation_id used at generation time
Tamper Detection and Chain-of-Custody Verification
Given the audit store contains events for an estimate When an integrity verification is run (manually from UI or nightly job) Then the system validates a forward-only hash chain where each event stores event_hash and prev_event_hash per estimate or correlation_id And the verification returns Pass when no gaps or hash mismatches exist, else Fail with the first failing event_id and reason And any detected integrity failure raises a high-severity alert and creates an audit event (event_type=integrity_alert) And exported reports and evidence packs embed a verification manifest (checksum and chain summary) enabling offline verification
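A minimal sketch of the forward-only hash-chain check described above: each event stores event_hash and prev_event_hash, and verification walks the chain reporting the first break. The event shape and canonical-JSON hashing are assumptions; a real store would also verify per-estimate ordering.

```python
import hashlib
import json

def event_digest(event: dict) -> str:
    # Hash everything except the event's own hash; prev_event_hash is included,
    # which is what links each event to its predecessor.
    body = {k: v for k, v in event.items() if k != "event_hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def verify_chain(events: list[dict]) -> tuple[bool, str | None]:
    """Returns (Pass, None) or (Fail, first failing event_id)."""
    prev = None
    for ev in events:                          # assumed sorted by sequence
        if ev.get("prev_event_hash") != prev:
            return False, ev["event_id"]       # gap or reordering detected
        if ev.get("event_hash") != event_digest(ev):
            return False, ev["event_id"]       # tampered payload detected
        prev = ev["event_hash"]
    return True, None
```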
User Action Traceability with IDs and Timestamps
Given a user performs any action related to rules (create/edit/publish rule, run evaluation, override line item, export, generate evidence pack) When the action completes Then an audit event records: user_id, role, organization_id, session_id, source_ip, user_agent, timestamp_utc, and action-specific payload And viewing the event detail in the UI reveals these fields with field-level help and copy-to-clipboard controls And access to view audit events is controlled by role permissions; unauthorized access attempts are denied with 403 and logged as audit events And all timestamps display in UTC with ms precision and a local-time tooltip conversion

Smart Rebalance

When new rush jobs arrive, intelligently reshuffles priorities and reassigns work while honoring locks and user preferences. Notifies affected users with clear reasoning to minimize confusion.

Requirements

Dynamic Priority Rules Engine
"As a dispatch coordinator, I want rush jobs to automatically reprioritize and assign to the best available estimator so that we hit SLAs without manual reshuffling."
Description

Implements a configurable scoring and rules engine that continuously evaluates incoming and in-flight jobs to determine updated priorities and optimal assignees. Consumes attributes such as rush flag, due dates, customer tier, revenue impact, job complexity, geographic proximity, and required skills to calculate a deterministic score and ordering. Produces a ranked work queue and recommended assignments that align with business objectives and SLAs. Supports admin-configurable weighting, hard/soft constraints, tie-breakers, and skill/territory filters. Integrates with RoofLens job intake, scheduling, and user directory to fetch real-time availability and capacity. Provides versioned rule sets and a test harness for safe iteration and regression checks. Expected outcome is consistent, objective prioritization that reduces manual triage and accelerates turnarounds.

Acceptance Criteria
Deterministic Scoring and Stable Ordering
Given an active ruleset version and a fixed set of jobs with known attributes When the engine evaluates the set multiple times across different nodes within the same ruleset version Then each job receives the same numeric priority_score to three decimal places across runs And the ranked order is identical across runs And ties are resolved using the configured tiebreaker order; if unset, defaults apply: due_date asc, revenue desc, created_at asc, job_id asc
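The default tiebreaker order named above can be expressed as a single composite sort key, which is what keeps ordering deterministic across runs and nodes. A minimal sketch, assuming jobs are dicts whose due_date/created_at values are mutually comparable (datetimes or ISO 8601 strings):

```python
def ranked_queue(jobs: list[dict]) -> list[dict]:
    return sorted(
        jobs,
        key=lambda j: (-j["priority_score"],   # primary: higher score first
                       j["due_date"],          # tiebreak 1: earlier due date
                       -j["revenue"],          # tiebreak 2: higher revenue
                       j["created_at"],        # tiebreak 3: older job first
                       j["job_id"]),           # tiebreak 4: stable final order
    )
```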
Admin-Configurable Weights and Versioned Rule Sets
Given an admin updates attribute weights, constraints, and tiebreakers and publishes ruleset version V2 with an effective_at timestamp When the clock reaches effective_at Then all new evaluations use V2 And evaluations prior to effective_at continue to reference the previous version V1 And an audit log entry records who changed what and when And an admin can roll back to V1, after which new evaluations reference V1
Hard vs Soft Constraints and Skill/Territory Filters
Given jobs specify required_skills and territory and the directory lists user skills and territories When the engine recommends an assignee Then no recommendation violates any hard constraint (required_skills, territory locks, user lock) And soft constraints (user preferences) influence the score via the configured penalties without blocking assignment And each recommendation includes a constraint_evaluation with pass/fail per constraint
Capacity- and Availability-Aware Assignee Recommendation
Given real-time capacity (max_concurrent, in_progress) and availability windows for users When the engine assigns or recommends an assignee for each job in priority order Then only qualified users within availability windows and with remaining capacity are considered And capacity limits are not exceeded for any user And if no qualified user is available, the job is marked unassigned with reason_code = "no_qualified_capacity" And recommendations include estimated_start_time based on availability
Event-Driven Re-evaluation SLA
Given triggers including job_created, rush_flag_changed, due_date_updated, complexity_updated, user_status_changed, or capacity_changed When any trigger occurs Then impacted jobs are re-scored and the ranked queue is updated within 5 seconds (p95) And the recommended assignee is recalculated where constraints or capacity changed And only impacted jobs are recalculated (no full recompute) unless ruleset version changes
Test Harness and Regression Validation
Given a baseline test pack consisting of input jobs, directory snapshot, and expected scores/order for ruleset version Vn When the test harness is executed against the current implementation Then it outputs a pass/fail report with diffs for any job whose score or position differs from baseline And the harness can be run in dry-run mode without mutating live data And results are stored with timestamp and ruleset version for historical comparison
Ranked Queue and Score Explainability Output
Given the engine produces outputs via API and message bus When a client retrieves the ranked queue Then each job includes priority_score, rank_position, recommended_assignee (or null), and score_breakdown per attribute weight And the API supports filtering by team, territory, and required_skill and sorting only by rank_position And the response schema is versioned and backward compatible for at least one minor version
Assignment Locks & Preferences Enforcement
"As an estimator, I want Smart Rebalance to respect my locks, skills, and working hours so that I am not assigned work I cannot or should not take."
Description

Enforces user- and job-level constraints during reshuffles, ensuring Smart Rebalance never violates hard locks (e.g., do-not-move assignments, in-progress tasks) and always honors user preferences (working hours, PTO, do-not-disturb windows, preferred job types, territory, and skill certifications). Models capacity limits, max concurrent jobs, and equipment requirements to prevent overallocation. Surfaces conflicts with clear reasons and automatically seeks valid alternatives; if none exist, escalates via rules (e.g., request override approval). Integrates with user profiles, calendar availability, skill matrix, and geo/territory data sources to maintain accurate constraints. Outcome is trust in the system through predictable, policy-compliant reassignments.

Acceptance Criteria
Hard Locks Are Never Broken
Given assignment A1 is flagged Do-Not-Move and assignment A2 has status In Progress, both for user U1 When Smart Rebalance runs due to a new rush job arrival Then A1 and A2 retain the same assignee, schedule, and status with 0 field changes in the proposed plan And the audit log records exclusion entries for A1 (reason=LOCKED_ASSIGNMENT) and A2 (reason=IN_PROGRESS) And any plan variant that moves A1 or A2 is rejected before apply
Schedule Respects Availability Windows
Given user U2 has working hours 09:00–17:00 local time, a daily Do-Not-Disturb window 12:00–13:00, and PTO on 2025-09-10 When Smart Rebalance proposes or applies assignments for U2 Then no assignments are scheduled outside 09:00–17:00, none overlap 12:00–13:00, and none are placed on 2025-09-10 And if a job cannot fit within U2’s availability, U2 is not assigned and alternatives are sought And if no valid alternative exists, an override approval request is created with reason=AVAILABILITY_CONFLICT and the job remains unassigned
Skills and Equipment Must Match Job Requirements
Given job J requires certifications={FAA Part 107}, skills={Level 2 Drone Ops}, and equipment={Thermal Camera} When Smart Rebalance evaluates eligible assignees for J Then only users whose profiles satisfy all required certifications, skills, and equipment are considered eligible And J is assigned only to an eligible user And if no eligible user exists, J remains unassigned, a conflict is logged with reason=SKILL_OR_EQUIPMENT_MISMATCH, and an override approval request is created
Capacity and Concurrency Limits Prevent Overallocation
Given user U3 has weekly capacity=20 hours and MaxConcurrentJobs=2 When Smart Rebalance schedules or reassigns work Then U3’s total planned hours for the defined capacity period do not exceed 20 And at no point in the timeline does U3 have more than 2 simultaneous active assignments And any job that would cause a violation is reassigned to an eligible user; if none exist, the job remains unassigned and a conflict with reason=CAPACITY_EXCEEDED is raised with escalation to override approval
Territory and Geo Constraints Are Honored
Given users have territories and optional travel radii and jobs have geo-coordinates When Smart Rebalance assigns or reassigns jobs Then a job is assigned only to users whose territory contains the job location and, if configured, within the user’s travel radius And cross-territory assignments are not created unless an explicit policy rule allows them And if no eligible user exists within constraints, the job remains unassigned and a conflict with reason=TERRITORY_MISMATCH is logged and escalated per rules
Conflicts Are Explained and Users Notified
Given Smart Rebalance generates changes or encounters constraint conflicts When the plan is produced Then each affected user receives a notification within 60 seconds detailing the change(s), reason codes for each change or block, and the before/after assignment summary And the system presents a conflicts list with standardized reason codes and suggested alternatives And if an override is required, an approval request is routed to the designated role with the blocking reasons and evidence attached
Reliable Integration With Profiles, Calendar, Skills, and Territory Data
Given user profiles, calendars, skill matrix, and territory sources are connected When Smart Rebalance runs Then the decision uses the latest available data from each source and records source timestamps/versions in the audit trail for every decision And if any required source is unavailable or returns invalid data, the run fails safe (no changes applied), raises conflicts with reason=INTEGRATION_UNAVAILABLE, and notifies the operations/admin channel And once the source recovers, a retry can be triggered to complete the rebalance
Explainable Notifications
"As a team member whose queue changes, I want a clear explanation of what changed and why so that I can trust the system and adjust my plan quickly."
Description

Delivers clear, actionable notifications to all affected users whenever Smart Rebalance modifies priorities or assignments. Messages include what changed, why it changed (e.g., rush job arrival, SLA breach risk, rule fired), when it takes effect, and suggested next steps. Provides in-app announcements, inbox threads with change diffs, and optional email/mobile push with deep links back to updated queues. Supports batching to reduce noise, localization, quiet-hours compliance, and user-level preferences for channels and frequency. Integrates with the reason-code output from the rules engine to generate human-readable explanations and with the audit trail for traceability. Expected result is high user understanding and minimal confusion during reprioritizations.

Acceptance Criteria
In-App Announcement for Rebalance Change
Given a logged-in user has at least one assignment impacted by Smart Rebalance When the rebalance completes Then an in-app announcement banner is displayed to that user within 5 seconds of the event timestamp And the message includes: what changed (priority delta and/or reassigned from <prev_assignee> to <new_assignee>), why it changed (human-readable reason from rules engine), when it takes effect (timestamp in the user’s timezone), and suggested next steps (CTA to Review Queue) And the banner contains a link labeled "View details" that opens the corresponding inbox thread And the announcement is marked Unread until opened and moves to Read after open And the announcement is accessible (role=alert, keyboard dismissible, screen-reader readable) and meets WCAG contrast ratio ≥ 4.5:1
Inbox Thread with Change Diffs
Given a Smart Rebalance event affects one or more jobs for a user When an inbox thread is created or updated for that event Then the thread includes per-job diffs showing before/after values for priority, assignee, and SLA due time And the thread groups changes by job and supports sorting by priority and time changed And the thread includes deep links that open the updated queue filtered to impacted jobs And the thread stores an immutable snapshot of before/after states and the event timestamp And the thread metadata contains audit_event_id(s) for traceability And if another event impacts the same user within 10 minutes, it is appended as a new reply in the same thread
Email and Push Notifications Respect Preferences
Given a user has channel preferences configured When a rebalance event impacts that user Then notifications are sent only via channels the user has enabled And email and push content mirror the in-app fields (what, why, when, next steps) and include a deep link to the updated queue And email is dispatched within 60 seconds; push within 30 seconds, unless batched by policy And deep links use secure tokens that expire after 24 hours And per-user frequency caps are enforced: max 3 emails/hour and 5 pushes/hour; excess events are included in the next batch And delivery failures are retried up to 3 times with exponential backoff and logged on final failure
Quiet Hours Compliance
Given a user has quiet hours configured and active When a rebalance event occurs during quiet hours Then no email or push is sent during quiet hours And the event is included in a single digest delivered within 5 minutes after quiet hours end And in-app notifications are delivered silently (no sound/toast) and appear in the inbox And all times are evaluated in the user’s configured timezone
Localization and Formatting
Given a user with locale and timezone preferences When a notification is generated Then all text is localized to the user’s language using ICU messages; if a translation key is missing, English is used with no placeholder keys visible And timestamps are displayed in the user’s timezone and locale format And numbers and currencies in any counts or amounts follow the user’s locale And reason codes are mapped to localized, human-readable strings And mobile push payloads respect limits: title ≤ 64 characters; body ≤ 180 characters; truncate with ellipsis if exceeded
Reason-Code Mapping Integration
Given the rules engine outputs reason_code and rule_id for a rebalance change When a notification is generated Then the explanation text is composed from the reason_code template populated with event data And the rule_id and reason_code are stored in notification metadata and visible in details And if a mapping is missing, a generic explanation is shown and a warning is logged identifying the missing key And unit tests cover each reason_code template and required placeholders
Audit Trail Traceability
Given an audit trail entry exists for each Smart Rebalance event When a notification is created, updated, dismissed, or deleted Then the notification records the related audit_event_id(s) And a "View audit" link opens the audit view filtered to those IDs And user actions on the notification (open, dismiss, link click) are written to the audit trail with actor, timestamp, and action And deleting a notification does not remove or alter the original audit entries
Atomic Rebalance with Preview & Rollback
"As an operations lead, I want to preview and safely apply a rebalance with the option to roll back so that changes do not disrupt active work."
Description

Provides a safe-apply workflow for schedule changes: simulate a rebalance to preview impacts (assignment diffs, priority deltas, conflicts), then apply all changes atomically to avoid partial updates. Ensures idempotent operations, optimistic concurrency control, and consistent state across services. Implements automated rollback on failure and a manual revert option, both linked to a specific rebalance transaction ID. Includes guardrails such as change thresholds, circuit breakers, and retry policies to protect active work. Integrates with scheduling, notifications, and audit services so previews mirror real outcomes before committing. Outcome is reliable, low-risk reshuffling that preserves user confidence and operational continuity.

Acceptance Criteria
Preview Mirrors Apply Outcome
- Given a schedule state version S and a valid rebalance request R, When a preview is generated for R at version S, Then the preview returns assignment diffs (added/removed/reassigned), priority deltas per job, and conflicts (locks, capacity, skills) with counts and IDs.
- Given the same request R and unchanged schedule state version S, When apply is executed using the preview token, Then the resulting assignments, priorities, and resolved conflicts exactly match the preview output (100% parity).
- Given the preview token has a configured TTL, When apply is attempted after expiry, Then the system rejects with a stale-preview error and makes no changes.
Atomic All-or-Nothing Apply Across Services
- Given an approved preview token PT, When apply is executed, Then either all changes are committed in the scheduling service, notifications are sent to affected users with rationale, and an audit record with transaction ID T is written, or none of these occur (no partial state).
- Given a simulated failure in any downstream service during apply, When the failure occurs, Then the system aborts the apply, reverts any partial changes, writes an audit entry indicating abort with reason, and returns an error without sending partial notifications.
- Given the apply completes successfully, When verifying services, Then scheduling reflects new assignments, notifications include per-user change reason with before/after summary, and the audit log stores the full diff, all linked to T.
Idempotent Rebalance Operations
- Given a transaction ID T and request R, When the apply endpoint is retried N times due to transient errors, Then the final state equals exactly one successful apply, with no duplicate notifications and no duplicate audit entries (at most one per target).
- Given the same preview parameters and schedule state, When preview is requested multiple times, Then the preview output is deterministic and identical across requests.
- Given an apply is received with a previously completed transaction ID T, When processed, Then the system returns an idempotency confirmation and performs no additional changes (sketched below).
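A minimal idempotency sketch for the criteria above: a previously completed transaction ID short-circuits to a confirmation and performs no further changes. The in-memory store and function names are assumptions; a real system would persist the idempotency record transactionally with the commit.

```python
_completed: dict[str, dict] = {}          # transaction_id -> committed result

def commit_atomically(plan: list) -> dict:
    # Stand-in for the real all-or-nothing commit across services.
    return {"changes": len(plan)}

def apply_rebalance(transaction_id: str, plan: list) -> dict:
    if transaction_id in _completed:      # retried or duplicate request
        return {"status": "already_applied", **_completed[transaction_id]}
    result = commit_atomically(plan)
    _completed[transaction_id] = result
    return {"status": "applied", **result}
```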
Optimistic Concurrency Control on Apply
- Given a preview generated at schedule version S, When the schedule changes to S' before apply, Then the apply attempt fails with a 409 Conflict (or equivalent) indicating stale preview, and no changes are committed.
- Given a preview generated at S and apply executed while S is current, When processed, Then the apply succeeds; if any record-level conflicts arise, they are detected and cause a single, atomic failure rather than partial commits.
Automated Rollback on Apply Failure
- Given apply has started for transaction T and a failure occurs after some steps succeed, When the failure is detected, Then the system initiates automated rollback, restoring scheduling to the pre-apply state, suppressing or sending corrective notices, and recording a rollback event linked to T in the audit log.
- Given rollback completes, When verifying, Then the system state equals the exact pre-apply snapshot; temporary artifacts are cleaned; and a failure notification is sent to the initiator with reason and next steps.
Manual Revert by Transaction ID
- Given a successful apply with transaction ID T within the configured retention window, When an authorized user requests a revert, Then the system restores the schedule to the exact pre-apply snapshot, issues revert notifications with clear reasoning, and writes an audit entry linking the revert to T.
- Given a revert is requested outside the retention window or by an unauthorized user, When processed, Then the system denies the request with a clear error and makes no changes.
Guardrails: Thresholds, Circuit Breakers, Retries, and Locks
- Given a preview where proposed changes exceed the configured change threshold (e.g., >X% of active assignments or >Y total reassignments), When apply is requested, Then the system blocks the apply and returns a threshold-exceeded message with counts.
- Given lock flags on jobs/users and user preference constraints, When preview/apply runs, Then no locked assignments are modified and no assignments violate user preferences; any attempted violations are reported as conflicts.
- Given transient downstream failures, When apply runs, Then the system retries up to N times with exponential backoff and jitter; if error rate exceeds the circuit-breaker threshold within the observation window, Then further applies are rejected while the breaker is open and resume only after recovery criteria are met.
Comprehensive Audit Trail & Reason Codes
"As a compliance manager, I want an auditable history of schedule changes with reason codes so that I can resolve disputes and meet record-keeping obligations."
Description

Captures an immutable, searchable record of every rebalance event, including before/after queues, assignments, scores, fired rules, triggering signals, approvers, and notifications sent. Standardizes structured reason codes and human-readable summaries to explain each change. Supports filtering by user, job, timeframe, and rule version, and exports to CSV/PDF for sharing with clients or carriers to reduce disputes. Applies retention policies and access controls to protect sensitive data. Integrates with incident tracking and analytics for trend analysis on rush volume, rule effectiveness, and SLA adherence. Expected outcome is transparent, defensible decision history that builds trust with internal teams and external stakeholders.

Acceptance Criteria
Immutable Rebalance Event Logging
Given a rebalance is executed (manual or automatic) within a tenant When the rebalance completes Then a single audit event is persisted within 2 seconds (p99) containing: event_id, tenant_id, timestamp (UTC ISO 8601), initiator (user_id or system), affected_job_ids, before/after queue positions, before/after assignee_id per job, priority/score before/after, fired_rules [id, version, outcome], triggering_signals, locks/user_preferences considered, approver_ids (if any), and notifications [recipient_id, channel, template_id, sent_at] And the audit store is append-only; each event includes hash and previous_hash forming an immutable chain And attempts to alter or delete an event are blocked and logged as a separate security_event with actor_id and timestamp And 99.9% of rebalance executions have a corresponding audit event; any missing event raises a critical alert within 60 seconds
Structured Reason Codes and Summaries
Given a rebalance changes any job assignment, priority, or queue position When the audit event is created Then each change item includes a required reason_code from a controlled taxonomy (versioned), reason_version, and a human_readable_summary of 240 characters or fewer And unknown or deprecated reason_code values are rejected; the event is not persisted and an error is returned/logged And UI/API and exports display both reason_code and human_readable_summary for each change
Audit Log Filtering and Search Performance
Given an authorized user with AuditViewer scope When they filter by any combination of user_id, job_id, timeframe (start/end), rule_version, and reason_code (AND logic) Then the results include only matching events from the user's tenant and scope And results are paginated via cursors, sorted by timestamp desc by default, and include total_count And queries returning up to 10,000 events respond within 2 seconds (p95)
CSV/PDF Export for External Sharing
Given a filtered audit result set of 50,000 events or fewer When the user requests CSV and/or PDF export Then the export includes all visible fields and respects applied filters and sort order And the file includes metadata headers: generated_at, requester_id, tenant_id, filter_params, total_count, dataset_hash And sensitive fields are masked per policy (e.g., recipient contact partially masked) And the file is available for download within 60 seconds (p95) and an export_audit record is created And for result sets larger than 50,000, an asynchronous export job is queued and a download link is delivered within 15 minutes (p95)
Role-Based Access and Data Retention
Given a multi-tenant environment When any user requests audit data Then only roles Admin, Manager, or Auditor with appropriate scope can access; all others receive 403 And cross-tenant access is prevented; no events from other tenants are returned And audit records are retained for 24 months by default; upon expiration they are purged within 7 days and a purge_audit record is created And legal_hold flags suspend purging for specified job_ids or time intervals until cleared
Analytics and Incident Integration
Given audit events are generated When the analytics pipeline processes events Then 95% of events are delivered to the data warehouse within 5 minutes of creation and 99% within 15 minutes, matching the documented schema And daily trend tables for rush_volume_by_day, rule_effectiveness, and sla_adherence are populated by 02:00 UTC And if audit write failures exceed 5 within any 10-minute window or missing-event alerts fire, an incident webhook is sent to the incident system within 30 seconds, creating or annotating an incident with context
Approval and Notification Traceability
Given a rebalance requires approval and/or emits notifications When the audit event is created Then the event includes approver_id, decision (approved/denied), and decision_timestamp; if no approval is required these fields are null And all notifications are recorded with recipient_id, channel, template_id, status (sent/failed), error_code (if failed), and sent_at And searches by approver_id or notification recipient_id return matching events
Scalable, Low-Latency Rebalance
"As a product owner, I want rebalances to complete quickly at scale so that users experience minimal disruption during peak intake."
Description

Meets defined performance budgets so rebalances complete quickly even during intake spikes. Targets p95 end-to-end recompute under 3 seconds for 1,000 open jobs and 50 active users, with graceful degradation and backpressure under extreme loads. Implements efficient data access patterns, incremental recalculation, and horizontal scaling of the scoring service. Ensures concurrency safety when multiple triggers occur (e.g., simultaneous rush jobs) through queuing, deduplication, and merge strategies. Provides observability with metrics, traces, and alerts tied to SLOs for throughput, latency, and error rates. Outcome is responsive, reliable behavior that minimizes user disruption and keeps estimators productive.

Acceptance Criteria
p95 End-to-End Rebalance Latency at Target Load
Given a staging environment mirroring production with 1,000 open jobs and 50 active users When a single rush job triggers a global rebalance Then the measured end-to-end rebalance duration p95 is <= 3,000 ms over 200 consecutive runs And p99 is <= 5,000 ms And HTTP 5xx rate is 0 and client-visible error rate is < 0.1% during the test window And 100% of locks and user preferences are preserved in resulting assignments
Throughput and Backpressure Under Extreme Intake Spike
Given an intake spike of 30 rush jobs per minute for 10 minutes with 2,500 open jobs and 80 active users When rebalances are triggered by each intake event Then triggers are enqueued with deduplication so that effective queue depth never exceeds 200 And median time-in-queue is <= 2,000 ms and p95 <= 5,000 ms And the API responds 202 for queued operations and 429 for over-limit callers without increasing 5xx above 0.1% And 100% of unique triggers are processed exactly once with no data loss
Incremental Recompute for Partial Changes
Given that only 10% of jobs have changes to priority inputs When a rebalance is triggered Then no more than 15% of jobs are rescored/re-evaluated end-to-end And the final assignments match a full recompute baseline (identical assignments or score deltas <= 0.5%) And p95 latency improves by >= 40% versus a forced full recompute under the same load
Horizontal Scaling of Scoring Service
Given scoring service replicas are increased from N to 2N under steady load L When the autoscaler scales out based on CPU > 70% or queue depth > 100 for 2 consecutive minutes Then system throughput increases by >= 70% and p95 latency does not regress And no replica exceeds 80% CPU or 85% memory utilization for > 5 minutes post-scale And scale actions are rate-limited to >= 3 minutes between changes to prevent thrashing
Concurrency Safety for Simultaneous Triggers
Given two or more rush jobs arrive within 200 ms while a rebalance is in progress When multiple triggers are received Then triggers are deduplicated and merged into a single effective rebalance unit And final assignments are persisted exactly once with no duplicate reassignment events And database writes use transactions or optimistic concurrency to prevent lost updates; deadlocks = 0 and retry failures = 0 after bounded retries (<= 3) And event ordering is preserved using monotonic sequence IDs
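A coalescing sketch for the dedup/merge behavior above: while a rebalance is in flight, new triggers merge into one pending unit keyed by scope, so at most one effective rebalance per scope is queued. Class and method names are illustrative.

```python
import threading

class TriggerCoalescer:
    def __init__(self):
        self._lock = threading.Lock()
        self._pending: dict[str, set[str]] = {}   # scope -> merged job ids

    def submit(self, scope: str, job_ids: set[str]) -> bool:
        """True if a new rebalance unit was created; False if merged into one in flight."""
        with self._lock:
            if scope in self._pending:
                self._pending[scope] |= job_ids    # merge; no duplicate run
                return False
            self._pending[scope] = set(job_ids)
            return True

    def drain(self, scope: str) -> set[str]:
        """Atomically take the merged job set when the worker starts the rebalance."""
        with self._lock:
            return self._pending.pop(scope, set())
```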
Observability and SLO-Backed Alerts
Given SLOs of rebalance p95 <= 3,000 ms, error rate <= 0.5%, availability >= 99.9% When the system processes rebalances under normal and stressed conditions Then metrics (rebalance_latency_ms, rebalance_queue_depth, rebalance_error_rate, triggers_deduped_total, scoring_replica_count) are emitted every 10s with < 60s freshness And distributed traces include a 'rebalance' span with correlation IDs in >= 95% of requests And SLO burn-rate alerts (14d/1h and 6h/5m windows) fire within 2 minutes of threshold breach and create tickets with runbook links And dashboards show latency percentiles, throughput, queue depth, error budget burn and are peer-reviewed
Graceful Degradation of User Experience During Load
Given the system is under heavy load causing backpressure When a user submits a rush job via UI or API Then the user receives an immediate acknowledgment with status = queued and an estimated start time And UI progress indicators update at least every 5 seconds without freezing And assignment changes respect existing locks 100% and notifications are delivered once complete (no loss, possible delay) And median submission response time stays <= 500 ms despite backpressure

Triage Audit Trail

Logs the rationale behind every priority and assignment (hail metrics, due date, drive‑time, rule hits). Export a shareable record to defend decisions with carriers and management, reducing disputes.

Requirements

Decision Event Capture
"As an operations manager, I want every triage decision and its inputs to be automatically captured so that I can reconstruct why a job was prioritized or assigned the way it was."
Description

Automatically capture every triage decision (priority score, assignment, route choice) with timestamp, actor (user or automation), source system, and full input signal set including hail metrics at the service address and time, due dates and SLAs, drive-time estimates, rule hits, and prior values. Persist correlation IDs to link related operations, include ruleset/model versions and data source provenance, and support reliable ingestion via pub/sub with retry and offline queues to prevent data loss. Integrates with the existing triage service and job lifecycle events to ensure complete coverage across manual overrides and automated workflows.

Acceptance Criteria
Automated Triage Decision Event Capture
Given the triage service makes an automated decision for a job When the service computes a priority score, selects an assignee, and determines a route Then an immutable decision event is created and persisted within 2 seconds of computation end And the event includes: eventId (UUIDv4), jobId, timestamp (ISO 8601 with timezone), actor.type = "automation", actor.id = "triage-service", sourceSystem = "triage-service" And the event includes outputs: priorityScore (number), assignment.userId, assignment.queueId (nullable), route.choiceId (nullable) And the event passes schema validation and is retrievable by jobId and eventId
Manual Override Decision Event Capture
Given a user overrides any triage attribute (priority score, assignment, or route) via UI or API When the override is saved Then a decision event is appended within 2 seconds capturing: actor.type = "user", actor.id, actor.role, sourceSystem in {"ui","api"} And the event includes oldValue and newValue diffs for each changed field And the event includes user-provided reason/comment (optional) and overrideType = {"priority","assignment","route"} And the event preserves the previous correlationId if present, else generates a new one And the event is visible in the audit feed ordered by timestamp desc
Input Signal Set Completeness and Accuracy
Given any triage decision (automated or manual) is recorded When the event is created Then the inputSignals object must include: hail.metrics (provider, datasetVersion, observationTime, hailSizeMax, hailProbability), serviceAddress (geocode, lat, lon), dueDate, sla (name, targetHours), driveTime (provider, origin, destination, seconds, computedAt), ruleHits[] (ruleId, name, version, outcome), priorValues (before state of fields changed) And fields are populated (non-null) per spec unless explicitly marked nullable; otherwise the event is rejected and not persisted And numeric fields are within valid ranges (e.g., driveTime.seconds >= 0; hailSizeMax >= 0) And all timestamps are ISO 8601 with timezone and within 5 minutes of event timestamp
Correlation and Causality Linking
Given a job undergoes multiple triage operations (initial triage, reassignment, reroute) When each operation emits a decision event Then each event includes correlationId (UUID) shared across the related operations And each subsequent event includes parentEventId referencing the immediately preceding decision event for that job And querying by correlationId returns all related events in causal order with no gaps And correlationId and parentEventId are propagated across services (UI, API, triage-service)
Ruleset/Model Versioning and Data Provenance
Given a triage decision is produced using rules and/or models and external data providers When the decision event is created Then the event includes ruleset.id, ruleset.version (semver), model.version (semver, nullable), model.checksum (nullable) And dataProvenance[] contains one entry per external data source with provider, dataset, version/date, requestId/traceId And sourceSystem is one of the enumerated values {"triage-service","rules-engine","ui","api","external-integration"} And events missing required provenance are rejected and logged with error code PROVENANCE_MISSING
Reliable Pub/Sub Ingestion with Retry and Offline Queue
Given the event publisher cannot reach the pub/sub broker or storage sink When a decision event is generated Then the event is placed in a durable offline queue with 72-hour retention and retried with exponential backoff (max interval 5 minutes) And publishing uses at-least-once semantics; each event carries an idempotencyKey = eventId to enable deduplication downstream And in a resilience test with 10,000 events and a 15-minute network outage, 0 events are lost, backlog drains in < 2 minutes after recovery, and p95 end-to-end ingestion latency < 3 seconds once connectivity is restored And publish_success_rate >= 99.9% over a 24-hour window and is observable via metrics
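A sketch of the retry policy above: exponential backoff with jitter capped at 5 minutes, at-least-once delivery, and eventId reused as the idempotencyKey. The publish callable is a stand-in for the real broker client, and the 72-hour retention window is assumed to be enforced by the surrounding queue.

```python
import random
import time

def publish_with_retry(event: dict, publish, max_interval: float = 300.0) -> None:
    delay = 1.0
    while True:
        try:
            # eventId doubles as the idempotency key for downstream dedup.
            publish(event, idempotency_key=event["eventId"])
            return
        except ConnectionError:
            time.sleep(min(delay, max_interval) + random.uniform(0, 1))  # jitter
            delay = min(delay * 2, max_interval)   # cap backoff at 5 minutes
```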
Lifecycle Coverage Across Automated and Manual Flows
Given triage-related lifecycle changes occur (triage-created, triage-updated, reassigned, route-changed, SLA-updated, job-closed) When any such change happens via automation, UI, or external API Then a decision event is emitted for each change with changeType in the enumerated set and includes sourceSystem reflecting the origin And an integration test simulating 50 jobs across mixed flows yields a 1:1 mapping of lifecycle changes to decision events with no missing types And events for external triggers include external.systemName and external.requestId
Rationale Schema and Normalization
"As a data analyst, I want standardized rationale fields so that I can query and compare triage outcomes across jobs and over time."
Description

Define and implement a structured, versioned schema for triage rationale that normalizes common inputs (hail size/intensity, storm date/time, distance/drive-time, due date/SLA, rule hits, model scores) into typed fields with units, confidence, and provenance. Ensure tenant isolation, backwards compatibility through schema versioning, and support for geospatial and temporal context. Provide an abstraction layer that maps source-specific data into canonical fields to enable consistent querying, analytics, and cross-job comparisons.

Acceptance Criteria
Schema Versioning and Compatibility
- Given the canonical triage rationale JSON Schema registry, when the API is queried for the current schema, then it returns a SemVer (MAJOR.MINOR.PATCH) identifier and the full JSON Schema.
- Given a new rationale record payload, when it is validated, then it must conform to the current JSON Schema or the write is rejected with machine-readable error codes and paths.
- Given a persisted record with an older schema_version, when it is read via the API, then it deserializes without data loss and includes its original schema_version.
- Given a MINOR or PATCH schema release, when records written under prior MINOR/PATCH versions are read, then validation passes without migration (see the sketch below).
- Given a MAJOR schema release, when records written under prior MAJOR versions are read, then server-side migration produces a canonical response and records migration provenance (from_version, to_version, migrated_at, migrator_version).
- Given a request for a schema changelog, when the API is called, then it returns both human-readable notes and a machine-readable diff of added/removed/changed fields.
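A tiny sketch of the compatibility rule in the MINOR/PATCH bullet above, assuming plain SemVer strings; the helper name is illustrative.

```python
def needs_migration(record_version: str, current_version: str) -> bool:
    # Same MAJOR version reads without migration; a MAJOR bump triggers
    # server-side migration per the criteria above.
    return record_version.split(".")[0] != current_version.split(".")[0]

assert needs_migration("1.4.2", "1.9.0") is False   # MINOR/PATCH: no migration
assert needs_migration("1.4.2", "2.0.0") is True    # MAJOR: migrate
```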
Hail Metrics Normalization
- Given hail size inputs in inches or millimeters from supported sources, when ingested, then hail.size_mm is stored as a decimal in millimeters with precision to 0.1 mm and values outside [0,150] mm are rejected with errors (see the sketch below).
- Given source-specific hail intensity scales, when normalized, then hail.intensity_scale is mapped to an integer in [0..5] per mapping table, and mapping_version and source provenance are stored.
- Given storm date/time provided in any timezone, when processed, then storm.start_utc and storm.end_utc are stored as ISO 8601 UTC with timezone conversion recorded (original_tz, converted_by, converted_at).
- Given multiple hail sources for the same job, when aggregated, then per-source observations are preserved with confidence ∈ [0,1] and an aggregate hail.confidence is computed using configured policy and recorded with policy_id and version.
- Given missing hail intensity in a source, when normalized, then hail.intensity_scale is null, and the record passes validation with a required_fields list indicating which fields were absent.
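A normalization sketch for hail.size_mm per the first bullet above: inches convert to millimeters, values round to 0.1 mm, and out-of-range sizes are rejected. The accepted unit strings are an assumption about the source payloads.

```python
def normalize_hail_size(value: float, unit: str) -> float:
    mm = value * 25.4 if unit in ("in", "inch", "inches") else float(value)
    mm = round(mm, 1)                     # store with 0.1 mm precision
    if not 0.0 <= mm <= 150.0:
        raise ValueError(f"hail.size_mm out of range [0, 150]: {mm}")
    return mm
```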
Distance and Drive-Time Normalization
- Given job and base coordinates (WGS84), when distance is computed, then distance_km is stored as a decimal in kilometers (precision 0.01) using the configured routing provider and includes provenance (provider, provider_version, computed_at).
- Given routing results, when persisted, then drive_time_minutes is stored as an integer number of minutes and route_mode is recorded (e.g., driving, walking).
- Given an input distance in miles from a source, when normalized, then it is converted to kilometers with a relative error ≤ 0.5% versus double-precision reference conversion.
- Given routing provider failure or timeout, when ingestion runs, then distance_km and drive_time_minutes are null, and an error object is recorded (code, message, provider, occurred_at) without blocking the rest of the rationale write.
- Given a request for provenance, when the record is fetched, then the response includes the exact coordinates used, provider metadata, and any fallbacks applied.
Due Date, SLA, and Rule Hits Normalization
- Given a due date provided in local time, when normalized, then due_date_utc is stored as ISO 8601 UTC and sla.tier (e.g., Bronze/Silver/Gold) and sla.target_hours are captured per tenant configuration with config_version.
- Given rules evaluated during triage, when persisted, then rule_hits[] contains entries {rule_id, rule_name, rule_version, outcome, weight, evaluated_at_utc} and only outcome=true entries are counted in priority scoring.
- Given ML model decisions, when recorded, then model_scores[] contains {model_id, model_version, score, threshold, decision, confidence ∈ [0,1], provenance}.
- Given missing due date, when validated, then the record is accepted only if sla.policy_allows_missing_due_date=true in tenant config, otherwise validation fails with a specific error code.
- Given an export request, when generated, then due date, SLA, rule hits, and model scores appear in the normalized, labeled fields matching the canonical schema.
Tenant Isolation for Rationale Records
- Given multi-tenant data, when a rationale record is written, then tenant_id is required and stored immutably.
- Given an authenticated user from tenant A, when querying rationale records, then results include only records with tenant_id=A.
- Given an authenticated user from tenant A, when attempting to access a record with tenant_id=B, then the API responds 403 and no field values are leaked in error messages.
- Given a cross-tenant export attempt, when executed, then the export contains only the requesting tenant’s records and includes an audit entry recording requester, tenant_id, and record count.
- Given system logs, when inspected, then no PII or field values from tenant B appear in logs of tenant A requests (log redaction in place).
Abstraction Layer Mapping from Heterogeneous Sources
- Given input from NOAA_StormEvents_CSV, when ingested, then the adapter maps fields to canonical hail.* and storm.* with unit conversions and fills provenance {source_system, source_field, transform} for each mapped field.
- Given input from VendorX_WeatherAPI_v3, when ingested, then the adapter produces the same canonical fields without changes to the schema or consumers.
- Given routing data from OSRM or Google Distance Matrix, when normalized, then the abstraction layer outputs identical canonical fields (distance_km, drive_time_minutes, provider metadata) regardless of provider.
- Given a new source adapter is added, when end-to-end tests run, then no changes are required in analytics queries that target canonical fields.
- Given unmappable or missing source fields, when processed, then canonical fields are set to null and a warnings[] array captures each omission with code and context.
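A minimal sketch of the adapter contract implied by these criteria; the class names are illustrative, and the NOAA field mapping (hail magnitude reported in inches under MAGNITUDE) is an assumption for the example:

```python
from typing import Any, Protocol

class SourceAdapter(Protocol):
    """Every source implements the same interface, so downstream
    consumers only ever see canonical fields (hail.*, storm.*, ...)."""
    def to_canonical(self, raw: dict[str, Any]) -> dict[str, Any]: ...

class NoaaStormEventsAdapter:
    def to_canonical(self, raw: dict[str, Any]) -> dict[str, Any]:
        record: dict[str, Any] = {"warnings": []}
        if "MAGNITUDE" in raw:  # assumed: hail size in inches
            record["hail.size_mm"] = float(raw["MAGNITUDE"]) * 25.4
            record["provenance"] = {
                "source_system": "NOAA_StormEvents_CSV",
                "source_field": "MAGNITUDE",
                "transform": "inches_to_mm",
            }
        else:  # unmappable fields become null plus a warning entry
            record["hail.size_mm"] = None
            record["warnings"].append(
                {"code": "MISSING_SOURCE_FIELD", "context": "MAGNITUDE"}
            )
        return record
```

Adding a VendorX adapter then means writing one more class with the same `to_canonical` signature; analytics queries against the canonical fields stay untouched.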
Geospatial and Temporal Context Support
- Given a job address, when normalized, then job_location is stored as a WGS84 Point (lat, lon) with 6 decimal places and a spatially indexable type, and an optional service_area is stored as a Polygon when provided.
- Given any timestamp fields, when persisted, then they are stored as ISO 8601 UTC with milliseconds and include original timezone metadata if converted.
- Given a query for jobs within a 25 km radius and a storm time window, when executed, then results are correct per the spatial and temporal filters using the canonical fields.
- Given a rationale export, when generated, then it includes geospatial fields (coordinates, optional polygon) and temporal fields (UTC timestamps) in the normalized structure.
- Given invalid geometry (self-intersecting polygon), when validated, then the write is rejected with an explicit geometry_error code.
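The self-intersection rule in the last criterion can be enforced before the write. A minimal sketch, assuming Shapely is available; the error wording is illustrative:

```python
from shapely.geometry import Polygon

def validate_service_area(coords: list[tuple[float, float]]) -> Polygon:
    """Reject invalid (e.g., self-intersecting) polygons with an
    explicit geometry_error code before persisting."""
    poly = Polygon(coords)
    if not poly.is_valid:
        raise ValueError("geometry_error: polygon is invalid "
                         "(e.g., self-intersecting)")
    return poly

# A bow-tie ring such as [(0,0), (1,1), (1,0), (0,1)] self-intersects
# and is rejected; a simple quad passes.
```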
Immutable Audit Log and Versioning
"As a compliance officer, I want a tamper-evident, versioned audit log so that I can prove the triage history has not been altered."
Description

Store triage rationales in an append-only, tamper-evident log with per-job versioning that captures initial decisions, subsequent overrides, and re-triage events. Chain entries with cryptographic hashes, record rule/model versions, and provide consistent point-in-time snapshots for export. Enforce retention policies, soft-deletes via tombstones, and server-side encryption with key management. Optionally leverage object-lock or WORM-capable storage to meet carrier audit and compliance expectations.

Acceptance Criteria
Append-Only Write Enforcement
Given a valid triage event for a job, When an audit entry is created via the API, Then the system persists a new record with an immutable sequence number and server-generated UTC timestamp (ms precision). Given any attempt to update or delete an existing audit entry via the API, When the request is processed, Then the system rejects it with HTTP 405 or 409 and no mutation occurs. Given concurrent writes for the same job, When entries are committed, Then sequence numbers are strictly increasing with no duplicates. Given a read of a job's audit entries, When the list endpoint is called, Then entries are returned in ascending sequence order and include: jobId, entryId, sequence, authorId, rationale, serverTimestamp, and read-only flags.
Per-Job Versioning and Overrides Captured
Given a job with no prior triage, When the first decision is recorded, Then version=1 is assigned and the decision state is stored (priority, assignee, due date, hail metrics snapshot, drive-time, rule hits, rule/model version IDs, rationale text). Given an override or re-triage request, When processed, Then a new entry is appended with version incremented by 1 and a computed diff of changed fields from the prior version. Given an override request without a rationale, When validated, Then the request is rejected with HTTP 400 and a validation error indicating rationale is required. Given a request for a specific version, When the versioned read endpoint is called with version=N, Then the exact state as of version N is returned. Given a request for the latest decision, When the latest endpoint is called, Then the highest version number and its state are returned.
Cryptographic Hash Chain Integrity Verification
Given a new audit entry, When it is persisted, Then entryHash is computed over a canonical serialized payload and prevHash references the prior entry's hash for that job, using a recorded algorithmId (e.g., sha256:v1). Given a request to verify a job's audit chain, When the integrity endpoint is called, Then the service recomputes hashes across all entries and returns status=valid with the terminal hash if all links match. Given any mutation or missing entry in the chain, When verification runs, Then status=invalid is returned with the first failing sequence number and expected vs actual hash values. Given a genesis entry for a job, When inspected, Then prevHash is null (or a defined constant) and verification still returns status=valid.
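A minimal sketch of the hash-chain mechanics these criteria describe, assuming canonical JSON serialization and SHA-256 (matching the sha256:v1 algorithm ID above); field names follow the criteria, the rest is illustrative:

```python
import hashlib
import json

ALGORITHM_ID = "sha256:v1"
GENESIS_PREV = None  # or a defined constant, per the criteria

def entry_hash(payload: dict, prev_hash: str | None) -> str:
    # Canonical serialization (sorted keys, no whitespace) so the same
    # logical entry always hashes to the same bytes.
    canonical = json.dumps(
        {"payload": payload, "prevHash": prev_hash},
        sort_keys=True, separators=(",", ":"),
    ).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify_chain(entries: list[dict]) -> dict:
    """Recompute every link over entries sorted by ascending sequence;
    report the first failing sequence number on any mismatch."""
    prev = GENESIS_PREV
    for e in entries:
        expected = entry_hash(e["payload"], prev)
        if e["entryHash"] != expected:
            return {"status": "invalid", "sequence": e["sequence"],
                    "expected": expected, "actual": e["entryHash"]}
        prev = e["entryHash"]
    return {"status": "valid", "terminalHash": prev}
```

Because each entry's hash covers its predecessor's hash, mutating or removing any entry breaks every later link, which is what makes the first failing sequence number identifiable.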
Point-in-Time Snapshot Export
Given a job with multiple versions, When an export is requested for version=N (or timestamp=T), Then the system produces an immutable export bundle containing: job metadata, entries up to N/T, effective decision state, rule/model version IDs, and an integrity proof (first and terminal hash). Given repeated export requests for the same job and version parameters, When executed, Then the byte content of the export is identical and includes a stable exportId. Given a job with ≤ 500 audit entries, When an export is requested, Then the export is available for download within 10 seconds p50 and 30 seconds p95. Given an export URL is generated, When accessed within its validity period, Then it downloads successfully; after expiry (≥24 hours), access returns HTTP 410 Gone. Given later overrides occur after the exported version, When the same point-in-time export is re-requested, Then its content remains unchanged.
Retention Policy and Tombstone Soft-Delete
Given a tenant retention policy in days is configured, When entries exceed the retention threshold or a manual redact is requested, Then the system appends tombstone entries that reference the affected entryIds and policy reason without removing prior hashes. Given a read of a redacted entry, When the API is called, Then the content body is replaced with a redacted marker and reason code, while metadata (sequence, timestamps, hashes) remains readable. Given a chain containing tombstones, When integrity verification runs, Then status=valid is returned and includes the tombstone entries in the computation. Given an export that includes redacted periods, When generated, Then the export omits redacted content bodies but includes tombstone markers and still verifies end-to-end integrity. Given an attempt to hard-delete an entry, When the request is made, Then the system rejects the operation and instructs to use the redaction/tombstone process.
Server-Side Encryption and Key Management
Given audit data is stored at rest, When inspected via storage metadata, Then server-side encryption is enabled and each object/record references a managed key identifier. Given a scheduled key rotation event, When rotation completes, Then new writes use the new key while previously stored entries remain decryptable via authorized API access. Given an unauthorized principal attempts to read audit content, When accessing via API or direct storage, Then the request is denied (HTTP 403 for API) and raw ciphertext is not exposed via API responses. Given cross-tenant access is attempted, When a principal from Tenant A requests Tenant B's audit data, Then access is denied and encryption context prevents decryption. Given a decryption failure occurs, When logged, Then an auditable event is recorded with keyId, tenantId, and timestamp without leaking sensitive content.
Compliance Mode with Object Lock/WORM
Given a tenant enables compliance mode with a retention period R, When new audit artifacts are written, Then they are stored with object lock/WORM semantics and report a retentionUntil timestamp. Given any attempt to modify or delete a locked artifact before retentionUntil, When the request is processed, Then it is rejected with an error indicating compliance lock is in effect. Given a request to reduce retention or disable compliance mode, When submitted by a non-compliance role, Then the request is denied; increasing retention is allowed and is irreversible by non-compliance roles. Given a need to verify compliance state, When queried, Then the system exposes per-tenant compliance settings and per-artifact lock status and retentionUntil values. Given retentionUntil has passed, When policy enforcement runs, Then artifacts become eligible for tombstone/redaction according to the tenant's retention policy without breaking chain integrity.
Evidence Attachment and Linkage
"As an adjuster, I want to view and attach the exact evidence used in triage so that I can justify the decision to carriers and management."
Description

Enable attaching and linking supporting evidence to each decision version, including hail history maps and metrics, storm event data, drive-time screenshots, rule evaluation traces, and imagery thumbnails. Store references via secure URIs with signed, expiring links, enforce file type/size limits and malware scanning, and render safe previews in-app. Maintain data lineage and source attribution, and ensure each attachment is bound to the specific decision version for accurate reconstruction.

Acceptance Criteria
Upload Allowed Evidence Files with Validation
Given a valid decision version exists and I am authorized to manage its evidence And I select a file with type in {PDF, PNG, JPG, JPEG, CSV, JSON} And the file size is <= 50 MB and the total size of all attachments on this version after upload will be <= 500 MB When I upload the file and provide required metadata (attachment_type, filename) Then the upload succeeds and returns {attachment_id, decision_version_id, filename, content_type, size, created_at} And an audit log entry is recorded with {action: "attach", outcome: "success", user_id, decision_version_id, attachment_id} When the file type is not allowed or size limits would be exceeded Then the upload is rejected with HTTP 415 (unsupported type) or 413 (payload too large) and an error code {EVIDENCE_TYPE_INVALID or EVIDENCE_SIZE_EXCEEDED} And no attachment record or blob is persisted And an audit log entry is recorded with {action: "attach", outcome: "failure", reason}
Malware Scan and Quarantine on Upload
Given a file has been uploaded for a decision version When the system receives the file Then it is stored in quarantine and a malware scan is initiated And preview and download are disabled until scan_status = "clean" And scan results are persisted with {engine_name, engine_version, signature_date, scanned_at, scan_status} When scan_status = "infected" Then the file is quarantined (not publicly accessible) and marked permanently blocked And the user receives a non-sensitive notification that the file was blocked for security And an audit log entry records {action: "scan", outcome: "blocked", attachment_id} When no scan result is available within 5 minutes Then scan_status = "timeout" and access remains blocked And the user is prompted to retry the upload or contact support
Signed, Expiring Link Generation and Access Control
Given an authorized user requests access to an attachment When the system generates a link Then a signed URL scoped to {attachment_id, action: preview|download} is created And the URL expires in 15 minutes (±2 minutes clock-skew tolerance) And the URL is HTTPS-only and does not expose raw storage bucket paths And access attempts are logged with {user_id, attachment_id, action, ip, user_agent, timestamp} When the link has expired or has been revoked Then subsequent access returns HTTP 403 and no content is served And the event is recorded in the audit log And application logs mask signature query parameters to prevent token leakage
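One common way to satisfy the signed, expiring link criteria is an HMAC over the scoped parameters. A minimal sketch; the domain, key handling, and query layout are assumptions:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # illustrative; use a managed key

def sign_url(attachment_id: str, action: str, ttl_s: int = 900) -> str:
    """Issue a read-only URL scoped to one attachment and action,
    valid for 15 minutes by default."""
    expires = int(time.time()) + ttl_s
    msg = f"{attachment_id}:{action}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return (f"https://files.example.com/evidence/{attachment_id}"
            f"?action={action}&expires={expires}&sig={sig}")

def check_url(attachment_id: str, action: str, expires: int,
              sig: str, skew_s: int = 120) -> bool:
    """Verify signature and expiry with ±2-minute clock-skew tolerance,
    using a constant-time comparison."""
    if time.time() > expires + skew_s:
        return False
    msg = f"{attachment_id}:{action}:{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Revocation would be handled separately (e.g., a server-side denylist of token IDs), since a pure HMAC signature cannot be invalidated before its expiry.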
Attachment Binding to Specific Decision Version
Given an attachment is created for a decision version Then it is immutably bound to that decision_version_id and cannot be reassigned to another version via UI or API When a new decision version is created for the same job Then prior attachments remain associated with their original version and are not auto-migrated And a "Copy to Version" action (if invoked) creates a new attachment_id on the target version with provenance {parent_attachment_id} When the audit trail for a decision version is fetched or exported Then the attachment list exactly reflects the set bound to that version, enabling exact reconstruction
In-App Safe Preview Rendering
Given an attachment with scan_status = "clean" and a supported type When I open the evidence panel for the decision version Then the app renders a safe preview via a server-side proxy with CSP enforced (no external network requests) And for images (PNG, JPG, JPEG): a thumbnail and zoomable preview are shown And for PDFs: the first page is rendered as an image preview And for CSV/JSON: the first 100 rows/lines are shown in a read-only viewer without executing scripts And 90th percentile preview load time is <= 2 seconds for files <= 10 MB When the file type is unsupported or scan_status != "clean" Then a "No Preview Available" state is shown with an option to request a download link (subject to authorization)
Data Lineage and Source Attribution Capture
Given I attach evidence related to a triage decision When the attachment is saved Then the system records immutable metadata: {source_type (hail_api|storm_event|drive_time|rule_trace|imagery), provider, source_uri (if available), collected_at, location/bounding_box (if applicable), rule_id and rule_version (if applicable), created_by, content_type, size, sha256_hash, created_at} And the sha256_hash is computed on the stored blob and used for integrity verification When the same file is re-uploaded Then a new attachment record is created with its own attachment_id and metadata; existing records remain unchanged When reconstructing or exporting a decision version Then lineage and source attribution fields are included for each attachment to support defensibility
Audit Trail Export with Evidence References
Given a user exports the triage audit trail for a specific decision version When a "Shareable Export" is generated Then the export includes, for each attachment: {filename, content_type, size, sha256_hash, created_by, created_at, source_type, provider, collected_at, location/bounding_box (if available), rule_id/rule_version (if applicable), scan_status} And a signed preview/download link valid for 7 days is included for each attachment And the export is produced within 30 seconds for up to 100 attachments And after 7 days, links return HTTP 403 and no content is served And an audit entry records {action: "audit_export", user_id, decision_version_id, link_expiry} When an "Internal Export" is generated Then attachment links default to 15-minute TTL and require authentication to access
Shareable Audit Report Export
"As a project manager, I want to export a shareable audit report so that I can defend triage decisions with carriers and internal stakeholders."
Description

Generate exportable audit trail reports in PDF and CSV that include a summary, detailed decision timeline, normalized rationale fields, rule/model versions, and linked or embedded evidence. Provide one-click share links with expiring tokens, email/API delivery, configurable redaction profiles, watermarks, timestamps, and tenant branding. Support carrier-specific templates, localization/time zones, pagination for large jobs, and deterministic outputs for a given version to ensure consistency across reviews.

Acceptance Criteria
Deterministic PDF/CSV Export for a Given Job Version
Given a finalized triage job at version V with selected carrier template T and redaction profile R When a user exports the audit report as PDF and CSV Then both artifacts include: (a) summary section, (b) complete decision timeline with event timestamps, (c) normalized rationale fields (priority score, hail metrics, due date, drive-time, rule hits with IDs), (d) rule/model names with semantic version identifiers, and (e) evidence references (embedded thumbnails and/or signed links) And repeated exports with identical inputs on the same application version yield byte-identical outputs (matching SHA-256 for PDF and CSV) And each artifact embeds a report version identifier and generated-on timestamp in document metadata And CSV column order and headers are stable and documented And missing data fields are represented by explicit null placeholders, not omitted
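Byte-identical repeat exports require removing every source of nondeterminism: key order, whitespace, encoding, and wall-clock values (the generated-on timestamp must be read from the stored artifact, not regenerated). A minimal sketch for the structured-data side, under those assumptions:

```python
import hashlib
import json

def deterministic_export_bytes(report: dict) -> bytes:
    """Same logical report -> same bytes -> same SHA-256, so repeated
    exports at the same version can be compared by hash alone."""
    # Sorted keys and fixed separators remove dict-ordering and
    # whitespace nondeterminism; ensure_ascii pins the encoding.
    return json.dumps(report, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=True).encode("utf-8")

def export_sha256(report: dict) -> str:
    return hashlib.sha256(deterministic_export_bytes(report)).hexdigest()
```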
One-Click Share Link with Expiring Token
Given a successfully generated audit report artifact When the user creates a share link with an expiry duration E Then the system issues a unique, non-guessable URL containing a time-bound token scoped to read-only access of that specific artifact and parameters (template, redaction, version) And the link expires exactly at E, after which requests return HTTP 403/410 without revealing existence And the owner can revoke the link immediately, rendering the token unusable within 60 seconds And access via the link delivers the exact exported version (immutable), not a regenerated variant And all accesses are logged with timestamp, IP, user-agent, and token ID And the link page displays tenant branding and watermark as configured And rate limiting protects the link from excessive downloads (>50 requests/minute returns 429)
Email and API Delivery of Audit Report
Given an exported report and a list of recipients When the user sends via email Then recipients receive a message within 5 minutes with tenant branding, subject template, and either (a) the PDF attached if <=15 MB or (b) a secure expiring link if larger And hard bounces and complaints are recorded and surfaced to the sender with reason codes And the system redelivers transient failures up to 3 attempts with exponential backoff Given an API client When POST /reports/{id}/deliver is called with a valid token and payload Then the API returns 202 with a delivery_job_id, and GET /deliveries/{delivery_job_id} reports status transitions (queued, sending, sent, failed) and failure reasons And delivery requests are idempotent within 24 hours using Idempotency-Key; duplicates do not create multiple emails
Configurable Redaction Profiles and Watermarking
Given a tenant with redaction profiles defined (e.g., Carrier-Minimal, Full-PII) When a profile P is selected for export Then fields marked for P (policyholder PII, adjuster emails, internal comments) are removed or masked in both PDF body, PDF metadata, and CSV cells And all embedded images are scrubbed of EXIF/XMP GPS data when P requires it And the selected watermark text and opacity render on every PDF page within margins and do not obscure required data (AA contrast maintained for foreground text) And a Redaction Profile ID and Watermark flag are recorded in the report metadata page
Carrier-Specific Templates and Tenant Branding
Given a job associated to Carrier C and tenant branding B When the report is exported Then the carrier-specific template for C is applied (section ordering, field labels, disclaimers) with template version ID included on the cover page And tenant logo, colors, and footer details render according to branding B on both PDF and email And if C has no custom template, the system falls back to the default template with a recorded fallback reason And CSV column set and header names align to template C’s specification
Localization and Time Zone Rendering
Given tenant locale L and time zone Z (overridable per export) When the report is exported Then all timestamps in the PDF narrative render in Z with correct historical DST rules, and a legend notes the UTC offset And CSV timestamps use ISO 8601 with explicit offset (e.g., 2025-09-04T13:45:00-06:00) And date, number, and currency formats follow locale L conventions And localized labels and section titles match the selected language pack; missing translations fall back to English and are listed in metadata
Pagination and Evidence Linking for Large Jobs
Given a job with >500 evidence items or total PDF size exceeding 50 MB when fully embedded When exporting the PDF Then the report paginates sections with a generated table of contents and page numbers on every page And evidence items render as 200px thumbnails with captions while full-resolution files are provided via time-bound signed links And the final PDF remains <=50 MB and generation completes within 2 minutes for the test dataset And evidence links include MIME type, size, and checksum in an appendix table for verification
Rule Evaluation Traceability
"As a triage lead, I want transparent rule and model traces so that I can understand which factors drove a priority or assignment."
Description

Capture and persist a detailed rule engine evaluation trace for each triage decision, including rules evaluated, matched conditions, thresholds, input values, and contribution to the final score or assignment. Record ruleset identifiers and versions (e.g., Git commit), generate human-readable explanations, and provide diff views between versions to highlight changes affecting outcomes. Where ML models are used, include feature importances or SHAP-style summaries within storage and exports.

Acceptance Criteria
Persist Full Rule Evaluation Trace per Triage Decision
Given a triage run completes for a claim When the system finalizes the triage decision Then it persists a trace linked to triage_id containing: ruleset_id, ruleset_version (e.g., Git commit), engine_version, decision_timestamp (UTC ISO-8601), correlation_id, input_snapshot (hail metrics, due date, drive-time, claim metadata), aggregate_score, final_priority, assignment target And it records evaluated_rules[] with: rule_id, rule_name, rule_version, matched (boolean), conditions[] {condition_id, operator, threshold, input_value, eval_result}, weight, score_contribution And 100% of rules considered by the engine appear in evaluated_rules[] (count matches engine log) And the additional write overhead to persist the trace is ≤ 200 ms at p95 for 1000 concurrent triages And retrieving the trace by triage_id returns exactly the persisted values (field-for-field match)
Generate Human-Readable Triage Rationale
Given a stored evaluation trace for a triage decision When a user opens the Triage Audit Trail view Then the system renders an explanation that lists matched rules with plain-language reasons showing actual input values vs thresholds and the per-rule score contributions And it summarizes final priority and assignment with the top 3 drivers (by absolute contribution) And it displays a score breakdown (per-rule and cumulative) consistent with the numeric trace (no rounding errors > 0.01) And if ML was used, it includes a short summary of top features and their directional impact And the explanation is generated in ≤ 1.0 s for p95 and is deterministic for the same trace
Export Audit Trail to PDF and JSON with Redaction
Given a stored evaluation trace When a user selects Export and chooses PDF or JSON Then the export includes: raw trace, human-readable explanation, ruleset_id, ruleset_version (commit), engine_version, timestamps, claim_id, org_id, and checksum And fields marked PII in the data classification are redacted/masked in both formats And the JSON validates against schema versioned as audit_trace_schema_v1 and includes schema_version And the PDF renders sections: Inputs, Rule Hits, Score Breakdown, Assignment Rationale, ML Summary (if present) And a secure download link is produced that expires within 24 hours and is single-use And the exported content matches the on-screen data exactly
Diff View Between Ruleset Versions and Outcome Impact
Given two triage decisions for the same input snapshot produced under different ruleset versions When a user selects Compare in the Audit Trail Then the system displays a side-by-side diff highlighting added/removed/changed rules, thresholds, and weights And it computes and shows the score delta per changed rule and the total score delta And it indicates whether the final priority and assignment changed between versions And the diff computation completes in ≤ 2 s at p95 for traces with ≤ 200 evaluated rules And the user can export the diff to PDF and JSON including a summary of impact
Store and Expose ML Feature Importances/SHAP Summaries
Given an ML model contributes to a triage decision When the prediction is executed Then the trace stores: model_id, model_version/hash, prediction output, confidence/score, feature_values snapshot, and per-feature importances or SHAP values (top 10 at minimum) And units/scales in the feature_values align with the model’s expected inputs and are labeled And the human-readable and export views include a concise ML rationale summary and the full vectors in JSON And when no ML is used, ML fields are present as null and no ML summary is shown
Secure API Endpoint to Retrieve Audit Trace
Given a user with permission view_audit_trail When they call GET /api/v1/triage/{triage_id}/audit-trace Then the API returns 200 with the full JSON trace including schema_version and ETag And it returns 403 for users without permission and 404 for non-existent triage_id And p95 latency is ≤ 400 ms for traces ≤ 200 KB and the endpoint is rate-limited to 60 requests/min/user And each access is audit-logged with user_id, triage_id, timestamp, and outcome
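A minimal sketch of the retrieval endpoint, assuming FastAPI; the in-memory store, auth dependency, rate limiting, and audit logging are stand-ins for the real infrastructure:

```python
import hashlib
import json

from fastapi import Depends, FastAPI, HTTPException, Response

app = FastAPI()
TRACES: dict[str, dict] = {}  # stand-in for the real trace store

def current_user() -> dict:
    """Hypothetical auth dependency resolving the caller's permissions."""
    return {"id": "u1", "permissions": {"view_audit_trail"}}

@app.get("/api/v1/triage/{triage_id}/audit-trace")
def get_audit_trace(triage_id: str, response: Response,
                    user: dict = Depends(current_user)):
    if "view_audit_trail" not in user["permissions"]:
        raise HTTPException(status_code=403)
    trace = TRACES.get(triage_id)
    if trace is None:
        raise HTTPException(status_code=404)
    # Content-derived validator to satisfy the ETag criterion.
    body = json.dumps(trace, sort_keys=True)
    response.headers["ETag"] = hashlib.sha256(body.encode()).hexdigest()
    return trace
```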
Immutability, Integrity, and Retention of Audit Records
Given an existing persisted audit trace When any client attempts to modify or delete the trace Then the system prevents in-place mutation and instead allows only an append-only correction creating a new record linked via supersedes_id And each record includes a SHA-256 checksum and tamper checks run on read; mismatches trigger an alert and block export And all traces are stored encrypted at rest (AES-256) and served over TLS 1.2+ And traces are retained for 7 years by default, after which they are purged with a logged purge event referencing legal_hold exceptions
Access Control and Redaction Controls
"As an account administrator, I want granular access and redaction controls so that we can share audit records externally without exposing sensitive information."
Description

Implement role-based access control for viewing, querying, and exporting audit trails with per-tenant policies. Provide granular redaction settings for PII and sensitive business fields in both UI and exports, with audit logs of who viewed or exported what and when. Support legal holds, consent flags, IP allowlists for share links, and watermarking that embeds viewer identity to deter unauthorized redistribution.

Acceptance Criteria
RBAC Enforcement for Viewing, Querying, and Exporting Audit Trails
Given a user in tenant A with role Estimator has permissions audit_trail.view=true, audit_trail.query=true, audit_trail.export=false And cross-tenant access is denied by policy When the user views or queries audit trails for tenant A Then the UI and API return 200 OK with only tenant A records scoped by role permissions And when the user views or queries audit trails for tenant B Then the UI and API return 403 Forbidden with no data And when the user attempts to export audit trails in any format Then the request is blocked with 403 Forbidden and an audit event "export_denied" is recorded with reason "permission"
Granular Field-Level Redaction in UI and Exports
Given tenant A has a redaction policy masking fields homeowner_name, phone, email, exact_address, negotiated_rates for non-Admin roles And the redaction token is "[REDACTED]" When a Project Manager in tenant A views the audit trail UI and export previews Then the configured fields display as "[REDACTED]" and are omitted from client-visible payloads And when the same user downloads PDF, CSV, and JSON exports Then those fields contain "[REDACTED]" and no unredacted values exist in file metadata or embedded objects And when an Admin with audit_trail.view_sensitive=true performs the same actions Then unmasked values are visible
Immutable Access and Export Audit Logging
Given audit logging is enabled system-wide When any user views, queries, shares, or exports audit trails Then an audit event is captured with actor_user_id, actor_role, tenant_id, action, object_type, object_id, timestamp_utc (ISO-8601), source_ip, user_agent, redaction_policy_id, consent_flag_state, share_link_id (if applicable), and result (allowed/denied) And audit events are written within 2 seconds of the action and are immutable (append-only) And querying the audit log by object_id returns the new event And attempts to alter or delete audit events return 423 Locked and are audited as "tamper_denied"
Legal Hold Prevents Modification and Deletion
Given a claim's audit trail is under legal hold with hold_id and reason set by a Compliance Admin When any user attempts to delete, purge, or modify audit trail records or associated exports Then the operation is blocked with 423 Locked and references hold_id And retention for held records is extended until the hold is released And exports remain allowed for authorized roles but include a visible "Legal Hold" banner with hold_id and timestamp and are watermarked And releasing the hold requires Compliance Admin role, a justification, and is audited
Consent Flags Govern PII Exposure
Given consent_flag=false for a property owner on a claim When any user or share link viewer accesses UI or exports Then PII-marked fields are redacted for all roles and attempts to unmask return 403 Forbidden and are audited with reason "no_consent" And when consent_flag=true with consent_expiry in the future Then users with audit_trail.view_sensitive=true may view PII in UI and exports, and consent metadata (who, when, how) is displayed and embedded in export footer And when the current time exceeds consent_expiry Then PII is automatically re-redacted, access attempts are blocked, and events are audited
IP Allowlists Restrict Share Link Access
Given a share link for an audit trail export is created with allowed CIDRs 192.0.2.0/24 and 2001:db8::/32 and TTL=72h When a viewer requests the link from an allowed IP within TTL Then the content is delivered with HTTP 200 and access is logged with share_link_id And when a viewer requests from a disallowed IP, after TTL expiry, or after revocation Then the request is rejected with HTTP 403 (or 410 for expired) and no bytes are served, and the attempt is audited And updates to the share link allowlist take effect within 60 seconds
Identity Watermarking on All Exports
Given a user or share link viewer exports audit trail content to PDF, image, or spreadsheet When the export is generated Then a visible watermark includes viewer_full_name, user_id or share_link_id, tenant_id, timestamp_utc, source_ip, and document_id on every page And an invisible forensic watermark embedding the same fields is added to rasterized images and PDF objects And watermarking cannot be disabled for share links and can be disabled only by Tenant Owner for internal exports via policy switch, which is audited And running the watermark verification tool on the exported file returns a positive match for all embedded fields

Corridor Auto-Plan

One-tap AR corridors, auto-generated from the roof outline and camera FOV, set your ideal passes, gimbal angles, and overlap targets. Get airborne in under a minute with a plan that guarantees full-facet coverage and consistent capture across steep, complex roofs, with no manual patterning required.

Requirements

Auto-Generated AR Corridors
"As a drone pilot for roofing inspections, I want one-tap automatic corridors that guarantee full-facet coverage so that I can start flying quickly and avoid manual grid planning."
Description

Algorithmically generates flight corridors from the RoofLens roof outline, camera FOV, and target frontlap/sidelap to ensure full-facet coverage with minimal turns. Computes pass spacing, altitude, and corridor orientation (ridge-parallel or perpendicular) based on roof geometry, pitch, and multi-level structures. Produces an ordered pass plan with corridor width, speed guidance, and recommended gimbal pitch per pass. Handles complex shapes, dormers, and steep slopes, with constraints for turn radius and clearance from edges. Integrates with job records so plans are saved, versioned, and reusable offline, regenerating in under three seconds after parameter changes.
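The spacing math at the heart of this generator is compact. A minimal sketch for a nadir-looking camera over a flat facet (a real plan must also account for roof pitch and gimbal angle); all names are illustrative:

```python
import math

def corridor_spacing(altitude_m: float, hfov_deg: float,
                     sidelap: float) -> float:
    """Cross-track swath from camera FOV, then the pass spacing that
    achieves the target sidelap (e.g., 0.7 for 70%)."""
    swath = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    return swath * (1 - sidelap)

def pass_count(facet_width_m: float, spacing_m: float) -> int:
    return max(1, math.ceil(facet_width_m / spacing_m))

# e.g., 30 m altitude, 70° HFOV, 70% sidelap:
# swath ≈ 42.0 m, spacing ≈ 12.6 m, so a 40 m-wide facet needs 4 passes.
```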

Acceptance Criteria
Full-Facet Coverage and Overlap Compliance
Given a job with a finalized roof outline (including dormers and multi-level facets) and user-specified target frontlap and sidelap When Auto-Generated AR Corridors are created Then the union of pass footprints covers 100% of each roof facet interior with no uncovered gap larger than 0.1 m And achieved frontlap along passes is >= the target frontlap for at least 95% of sampled positions per facet and never less than (target - 2%) And achieved sidelap between adjacent passes is >= the target sidelap for at least 95% of sampled positions per facet and never less than (target - 2%)
Pass Spacing and Altitude Derivation from FOV and Targets
Given camera horizontal and vertical FOV and target frontlap and sidelap When the plan is generated Then the selected altitude and computed pass spacing produce a cross-track swath such that spacing = swath_width * (1 - target_sidelap) within ±1% or ±0.05 m (whichever is larger) And the recommended speed per pass yields along-track footprint spacing corresponding to (1 - target_frontlap) within ±1% based on the configured capture interval or frame sampling And all computed altitudes and speeds respect platform min/max limits
Orientation Optimization (Ridge-Parallel vs Perpendicular)
Given a roof with a dominant ridge axis When the plan is generated Then the chosen corridor orientation is explicitly set to ridge-parallel or ridge-perpendicular And compared to the alternate orientation, the selected orientation yields fewer or equal passes and fewer or equal total turns while meeting coverage and constraint requirements And if the lowest-turn option violates constraints, the chosen orientation is the lowest-turn feasible option that satisfies all constraints
Turn Radius and Edge Clearance Enforcement
Given configured minimum turn radius R and edge clearance C When inter-pass connectors and path geometry are generated Then all turn segments have curvature radius >= R And no waypoint, corridor centerline, or capture path lies within C of any roof boundary edge or obstruction polygon And lead-in and lead-out segments are added when necessary to satisfy R and C at the route start and end
Ordered Pass Plan with Speed and Gimbal Guidance
Given a generated plan When inspecting the output Then passes are returned as an ordered list where each pass includes: pass index, start coordinate, end coordinate, corridor width, recommended speed, and recommended gimbal pitch And the pass order forms a continuous route: the start of pass n+1 is within 2 m of the end of pass n or connected by a legal turn satisfying the minimum turn radius And recommended gimbal pitches are within camera mechanical limits and are directionally consistent with facet slope (steeper facets have more negative pitch recommendations)
Multi-Level, Dormers, and Complex Geometry Handling
Given a roof outline containing multiple elevation levels, dormers, and non-rectilinear facets When corridors are generated Then corridors are grouped per elevation level and no pass crosses between level polygons And every dormer facet is covered to the same overlap targets as primary facets And connector paths respect the minimum turn radius and do not traverse voids or non-roof areas
Regeneration Under 3 Seconds with Versioning and Offline Reuse
Given an existing plan saved to a job and cached for offline use When any of the parameters (frontlap, sidelap, camera FOV, orientation preference, altitude cap, edge clearance, turn radius) are changed and the plan is regenerated Then a new plan is produced and displayed in <= 3.0 seconds on supported devices And the plan is saved to the job with an incremented version number, timestamp, and parameter set used And the updated plan remains available offline and can be reloaded with no network connectivity
AR Corridor Visualization & Anchoring
"As a pilot, I want the corridors to stay locked to the roof in AR so that I can reliably follow passes even with device drift."
Description

Renders the generated corridors as 3D AR lanes locked to the roof using ARKit/ARCore sensor fusion with GNSS/IMU and the RoofLens roof outline as the world anchor. Supports quick alignment via user-placed anchor points (e.g., ridge apex) and dynamic re-anchoring to correct drift, maintaining sub-meter visual alignment. Provides directional arrows, lane width guides, and pass numbers with high-contrast, sunlight-readable styling and occlusion handling around chimneys and trees. Monitors alignment quality and prompts recalibration when thresholds are exceeded. Operates at smooth frame rates and degrades gracefully in low-texture or low-light conditions.

Acceptance Criteria
Initial AR Anchor Placement on Ridge Apex
Given the roof outline is loaded and AR tracking state is Normal When the user taps the ridge apex to place the primary anchor Then the AR corridors lock to the roof plane with lateral/vertical positional error ≤ 0.5 m at 10–30 m range, measured against the outline projection And the corridor heading deviates ≤ 5° from the roof ridge vector And the initial alignment is applied within 3 seconds of the anchor tap And the user can adjust the anchor and see updates applied within 500 ms And the resolved anchor pose (position, orientation, confidence) is persisted for the current session
Automatic Drift Detection and Re-Anchoring In-Flight
Given corridors are anchored and capture is active And the drift score exceeds thresholds (RMS position error > 0.5 m or yaw error > 5° for ≥ 2 s) When the system initiates auto re-anchoring or the user selects Recalibrate Then a new anchor is solved and applied within 2 seconds without restarting the AR session And the visual transition is smoothed over ≤ 300 ms with no overlay teleport > 1.0 m on screen And pass numbers and lane indices remain consistent before and after re-anchoring And an event with timestamps and drift metrics is logged to the session
Sub-Meter Corridor Alignment Across All Roof Facets
Given a multi-facet roof with a validated outline When the user views the structure from at least four distinct vantage points Then the median overlay-to-outline offset is ≤ 0.5 m and the 95th percentile offset is ≤ 1.0 m across all visible facets And the overlay remains adhered during device pitch/roll changes up to 45° And alignment stays within thresholds for at least 5 minutes under Normal tracking conditions
Sunlight-Readable Overlay Styling and Legibility
Given outdoor ambient illumination of 90–110k lux and device brightness ≥ 80% When corridors, directional arrows, lane width guides, pass numbers, and gimbal angle indicators are rendered Then on-frame measured contrast ratio is ≥ 7:1 for text/icons against the immediate background And pass numbers are legible at 5–30 m viewing distance with minimum apparent character height of 12 arcminutes And color selections remain distinguishable under protanopia and deuteranopia simulations while preserving semantic meaning And lane width guides represent target width with absolute error ≤ 0.10 m at 10–30 m range And the gimbal target vs actual indicator shows numeric error with accuracy ±2°
Occlusion Around Chimneys, Dormers, and Trees
Given environment depth/scene mesh data is available at confidence ≥ threshold When a corridor segment passes behind a detected obstacle Then the overlay is occluded with obstacle silhouette IoU ≥ 0.7 and edge bleed ≤ 10 px And when depth confidence drops below threshold for > 1 s, the renderer disables occlusion and draws hidden segments as dashed lines at 50% alpha And occlusion state changes within 200 ms of confidence transitions without dropping below 30 FPS
Real-Time Performance and Graceful Degradation
Given supported mid-tier devices (e.g., A14 Bionic / Snapdragon 8 Gen 1 class) When rendering AR corridors during continuous movement Then average frame rate is ≥ 30 FPS with dropped frames ≤ 5% over a 60 s interval And CPU utilization ≤ 70% and GPU utilization ≤ 80% over the same interval; device surface temperature rise < 8°C over 10 minutes And in low-texture or low-light (< 10 lux) conditions, the system auto-reduces effects (occlusion, label density) within 300 ms to maintain ≥ 24 FPS And on SLAM tracking loss, overlays pause with a "Reacquiring…" banner and tracking is restored within 3 s in ≥ 80% of attempts
Alignment Quality Monitoring and Recalibration Prompts
Given a continuous alignment quality score derived from AR tracking state and outline reprojection error When the score crosses the warning threshold (tracking ≠ Normal or RMS error > 0.75 m) for ≥ 1 s Then a recalibration prompt is shown within 1 s with haptic feedback And the prompt offers Recalibrate and Dismiss; Dismiss snoozes prompts for 60 s unless a critical threshold (RMS > 1.25 m) is exceeded, in which case a prompt reappears within 3 s And after successful recalibration, the prompt auto-dismisses and the quality indicator returns to Normal within 1 s And all prompt and recalibration events are timestamped and recorded in the session log
One-Tap Plan Creation & Quick Edit
"As a contractor, I want to create and adjust a flight plan in seconds so that I can get airborne quickly and tailor coverage to each roof."
Description

Offers a single action to auto-create the corridor plan using the active camera profile, target overlap, and safety constraints, then presents a concise summary (passes, estimated time, altitude range, battery count). Enables rapid adjustments for overlap %, pass orientation, minimum edge clearance, excluded areas, and speed, with instant re-generation. Works offline using cached roof outlines and device profiles, auto-saves to the RoofLens job, and supports undo/redo. Provides a preflight confirmation flow that checks connectivity to the aircraft and validates plan parameters before entering capture mode.

Acceptance Criteria
One-Tap Auto-Plan Creation (Offline/Online)
Given a job with a cached roof outline and an active camera profile And a target overlap percentage and safety constraints are set When the user taps "Auto-Create Plan" Then the system generates a corridor flight plan in under 3 seconds on supported devices And the plan covers ≥99% of roof facet area at or above the target overlap And all passes maintain ≥ the configured minimum edge clearance from roof boundaries and exclusions And gimbal angles and altitude values stay within the active camera profile limits And the plan appears on the map with pass paths and direction arrows
Plan Summary Presentation
Given an auto-generated plan exists When the plan is displayed Then a summary shows: pass count, estimated flight time (mm:ss), altitude range (min–max), and estimated battery count And the summary renders within 1 second of plan generation or update And units respect the app’s unit setting (imperial/metric) And estimated battery count = ceil(estimated flight time ÷ (per-battery endurance × 0.8)) using the active device profile
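The battery estimate in the summary is a direct formula; a minimal sketch (the 0.8 usable-endurance fraction comes from the criterion above):

```python
import math

def battery_count(flight_time_min: float, endurance_min: float,
                  usable_fraction: float = 0.8) -> int:
    """ceil(estimated flight time ÷ (per-battery endurance × 0.8))."""
    return math.ceil(flight_time_min / (endurance_min * usable_fraction))

# e.g., a 38-minute plan on 25-minute batteries:
# 38 / (25 * 0.8) = 1.9 -> 2 batteries.
```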
Quick Edit with Instant Regeneration
Given an auto-generated plan is visible When the user adjusts overlap %, pass orientation (±90°), minimum edge clearance, excluded areas (add/edit/remove), or flight speed Then the plan re-generates within 2 seconds of the last change And the map and summary reflect the new plan And coverage and edge clearance constraints remain satisfied And excluded areas persist after regeneration And edits can be chained without leaving the screen
Undo/Redo of Plan Edits
Given the user has made one or more edits to the plan When the user taps Undo or Redo Then the previous/next plan state is restored within 1 second And the summary updates to match the restored state And at least the last 20 actions are retained in the undo stack And Undo/Redo are disabled when no further actions are available
Offline Plan Creation and Editing
Given the device has no network connectivity And the job has cached roof outlines and device/camera profiles When the user creates or edits a plan Then all auto-plan and quick-edit functions work without network access And an offline indicator is shown And any save operations are queued locally without errors And upon reconnection, queued saves sync within 10 seconds
Auto-Save to Job
Given a plan is created or edited When the user pauses interaction for at least 1 second or navigates away Then the current plan state auto-saves to the RoofLens job within 2 seconds (online) or to the local queue (offline) And reopening the job restores the last saved state And at most the most recent change is lost after a crash or force-quit
Preflight Confirmation and Validation
Given a plan is ready and the user taps "Preflight" When the preflight check runs Then the app verifies aircraft connectivity and telemetry availability And validates plan parameters: altitude within controller and profile limits, speed within device limits, minimum edge clearance satisfied, and no excluded areas intersect with passes And if any validation fails, a blocking error explains the issue and “Enter Capture” is disabled And if all checks pass, “Enter Capture” is enabled And the full preflight completes in under 3 seconds
Auto Gimbal and Camera Settings
"As a pilot, I want the system to set gimbal and camera parameters automatically so that my images are sharp and consistent across steep facets."
Description

Automatically computes gimbal pitch for each pass to maintain a consistent off-nadir angle relative to local roof pitch and facet orientation. Sets camera exposure, focus, and shutter speed targets based on motion-blur limits for the planned ground speed and lighting, locking white balance and exposure to ensure uniformity across passes. Applies settings via supported SDKs (e.g., DJI, Skydio) with verification and fallback guidance where APIs are limited. Records applied settings into EXIF and the job log for downstream processing and dispute reduction. Allows safe user overrides with guardrails and restores defaults post-flight.

Acceptance Criteria
Compute Gimbal Pitch for Consistent Off-Nadir by Facet
- Given a planned corridor pass with facet pitch and orientation, when the plan is generated, then each capture point has a target gimbal pitch that maintains the specified off-nadir angle relative to the local facet normal within ±2.0°.
- Given transitions between facets within a single pass, when crossing a facet boundary, then the gimbal target updates within 1.0 s and before the next capture trigger.
- Given aircraft turn-in to a pass, when within 10 m of the first capture point, then the gimbal reaches its target pitch at least 0.5 s before the first image is captured.
- Given live telemetry, when an image is captured, then the recorded gimbal pitch is within ±2.0° of the computed target.
Motion-Blur-Limited Shutter and Focus Targeting
- Given camera sensor and focal length and planned ground speed and altitude, when computing exposure, then the target shutter speed ensures ≤1.0 pixel linear motion blur at the planned speed and GSD.
- Given ambient EV from a meter reading, when the target shutter cannot be achieved with ISO ≤ 1600 and aperture within device limits, then the system prompts to reduce speed and/or increase ISO, and blocks auto-start until the user acknowledges the recommendation.
- Given focus mode selection, when preflight checks run, then focus is set to AF-S or MF at hyperfocal distance, focus lock is confirmed within 2 attempts, or the user is prompted to tap-to-focus before start.
- Given exposure targets, when applying settings, then exposure mode is set to Manual (M) with the computed shutter, ISO, and aperture values.
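The blur budget in the first criterion pins down the shutter ceiling directly, since linear blur in pixels is speed × exposure ÷ GSD. A minimal sketch of that relationship, with illustrative names:

```python
def max_shutter_s(ground_speed_mps: float, gsd_m: float,
                  max_blur_px: float = 1.0) -> float:
    """Longest shutter that keeps linear motion blur within the
    pixel budget: blur_px = speed * exposure / GSD."""
    return max_blur_px * gsd_m / ground_speed_mps

# e.g., 5 m/s at a 1 cm GSD: 1.0 * 0.01 / 5 = 0.002 s, i.e. 1/500 s
# or faster.
```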
Lock White Balance and Exposure Across Passes
- Given preflight readiness, when starting the mission, then white balance is set to a fixed Kelvin value and exposure remains manual for the duration of all passes.
- Given consecutive captures during the mission, when inspecting EXIF, then white balance mode is "Manual" and color temperature varies by no more than ±100 K between any two images.
- Given exposure lock, when inspecting EXIF across all images in a pass, then exposure values (ISO, shutter, aperture) do not vary by more than ±0.1 EV unless a user override was applied and logged.
- Given a mid-mission resume, when the aircraft re-enters a pass, then the previously locked WB and exposure are reapplied within 2.0 s before the next capture.
SDK Application with Verification and Fallback Guidance
- Given a supported SDK (e.g., DJI, Skydio), when applying camera and gimbal settings, then each setting returns a success acknowledgement and a read-back value matching the target within defined tolerances (pitch ±0.5°, shutter ±1/3 stop, ISO exact, WB ±100 K).
- Given transient SDK errors, when a setting application fails, then the system retries up to 3 times with exponential backoff and surfaces the final error message to the user.
- Given an unsupported or limited SDK endpoint, when a setting cannot be programmatically applied, then the UI presents step-by-step manual instructions tailored to the detected platform and prevents mission start until the user confirms completion.
- Given any setting remains unverified after retries and manual guidance, when attempting to start the mission, then start is blocked and a "Settings Not Verified" status is shown with the specific failing fields.
EXIF and Job Log Recording of Applied Settings
- Given each captured image, when saved to storage, then EXIF contains: ExposureTime, ISOSpeedRatings/ISO, FNumber, WhiteBalanceMode, WhiteBalanceTemperature, FocusMode, FocusDistance, GimbalPitchDegree, GimbalYawDegree, FlightSpeed, CalculatedOffNadirAngle, and BlurLimitPixels.
- Given mission completion, when reviewing the job log, then there is an entry per pass that records the applied targets and read-back values for exposure, focus, white balance, gimbal pitch, platform type, SDK version, and any overrides or retries, with timestamps.
- Given a validation script running post-flight, when parsing all mission images, then 100% of images include the required EXIF fields and the values match the job log within defined tolerances.
- Given any image is missing required EXIF fields, when exporting the PDF bid, then the system flags the data gap in the job log and marks the affected images for review.
User Overrides with Safety Guardrails
- Given the user chooses to override a computed parameter preflight, when entering a value, then the UI constrains the input so that predicted motion blur does not exceed 1.5 pixels and off-nadir deviation does not exceed ±5.0° from the plan.
- Given an override would violate guardrails, when the user attempts to apply it, then the system clamps to the nearest safe value and displays a warning explaining the constraint.
- Given an override is active, when capturing images, then the override is applied consistently across the pass and is recorded in both EXIF (as a tag "Override=true" and "OverrideFields") and the job log with the user ID and timestamp.
- Given in-flight safety triggers (e.g., sudden lighting drop > 2 EV), when the override would lead to underexposure or excessive blur, then the system prompts to pause or reduce speed and does not automatically change locked exposure without user consent.
Restore Camera Defaults Post-Flight
- Given mission end or abort, when exiting the capture workflow, then the system restores the camera and gimbal to the user's saved defaults or the manufacturer's safe defaults within 10 seconds.
- Given defaults restoration, when reading back settings, then shutter, ISO, aperture, white balance mode, focus mode, and gimbal mode match the saved defaults within tolerances (pitch ±0.5°, WB mode exact).
- Given any setting cannot be restored, when notifying the user, then the UI lists the specific settings that remain non-default and provides a one-tap "Retry Restore" option.
- Given app relaunch after a crash during mission, when the aircraft reconnects, then the app detects an incomplete restore and attempts restoration again before permitting a new mission.
Coverage Assurance & Live Heatmap
"As an adjuster, I want live coverage feedback so that I don’t miss areas and can prove complete capture."
Description

Projects the real-time camera footprint onto the roof mesh to compute achieved frontlap/sidelap and coverage completion during flight. Displays an on-device heatmap and completion percent, issuing alerts for gaps or off-target overlap and suggesting recovery passes before landing. After flight, runs a verification pass that confirms coverage against thresholds and generates a coverage report saved to the RoofLens job and available in exported PDFs. Uses telemetry, pose, and image timestamps to reconcile any deviations from the planned corridors for auditability.
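The live frontlap/sidelap numbers reduce to footprint-versus-spacing arithmetic once each image footprint is projected. A minimal flat-ground sketch (the shipping computation projects onto the roof mesh using pose and gimbal angle); all names are illustrative:

```python
import math

def footprint_along_track_m(altitude_m: float, vfov_deg: float) -> float:
    """Along-track image footprint for a nadir-pointing camera."""
    return 2 * altitude_m * math.tan(math.radians(vfov_deg) / 2)

def achieved_frontlap(capture_spacing_m: float, altitude_m: float,
                      vfov_deg: float) -> float:
    """Fraction of one footprint shared with the next capture."""
    footprint = footprint_along_track_m(altitude_m, vfov_deg)
    return max(0.0, 1 - capture_spacing_m / footprint)

# e.g., 30 m altitude, 50° VFOV -> ~28 m footprint; captures every
# 5.6 m give achieved_frontlap ≈ 0.8 (80%).
```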

Acceptance Criteria
In-Flight Coverage Heatmap and Completion Percent
Given an active flight using Corridor Auto-Plan and configured coverage thresholds When the drone begins capturing images and streaming telemetry Then the device shall render a roof-mesh heatmap overlay with a visible legend and numeric completion percent And the heatmap and completion percent shall update at ≥2 Hz with capture-to-visual latency ≤500 ms (P95) And the projected footprint alignment error on the roof surface shall be ≤0.5 m (P95) relative to post-flight reconstruction And completion percent shall only increase when new covered area meeting overlap thresholds is added
Frontlap and Sidelap Computation Accuracy
Given planned frontlap and sidelap targets for the mission When images are captured with varying gimbal angles and airspeed Then the system shall compute achieved frontlap and sidelap per roof facet and display them numerically and on the heatmap And achieved frontlap and sidelap shall be within ±3 percentage points of an offline verification algorithm (P95) And overlap computations shall reflect gimbal/FOV changes within the next update cycle (≤0.5 s)
Gap and Overlap Alerts with Recovery Suggestions
Given the live heatmap and overlap calculations during flight When any roof area >0.25 m² remains uncovered or local overlap falls >10 percentage points below target across an area >1 m² Then the system shall issue visual, audible, and haptic alerts within 2 s of detection And the system shall propose at least one recovery pass with path, altitude, gimbal angle, and expected coverage gain And suggested recovery passes shall respect defined no-fly zones and battery reserve constraints And alerts shall auto-clear once the affected area meets thresholds or the user dismisses with justification
Post-Flight Coverage Verification and Report Generation
Given flight completion (auto or manual) and data sync availability When verification is triggered Then the system shall complete coverage verification within 2 minutes for roofs ≤600 m² (and proportionally scaled at ≤0.2 s/m² thereafter) And overall roof coverage shall be ≥98% by area and each facet ≥95% by area to pass And for ≥95% of each facet area, achieved frontlap ≥ (target − 5 pp) and sidelap ≥ (target − 5 pp) And a coverage report (metrics, facet breakdown, heatmap snapshots, deficiencies, and reflight tasks if failed) shall be saved to the RoofLens job And the coverage summary page shall be included by default in exported PDFs with timestamps, app/firmware versions, and thresholds used
Auditability via Telemetry/Pose/Image Timestamp Reconciliation
Given telemetry (lat/lon/alt, roll/pitch/yaw), camera intrinsics/extrinsics, and image timestamps captured during flight When generating the audit trail Then the system shall correlate each image with pose and planned corridor segment, flagging deviations >2 m lateral or >10° orientation from plan And telemetry–image timestamp skew shall be ≤50 ms (P95) after sync, with residual drift ≤10 ms over the mission And an audit log (timeline, deviation map, per-image metadata) shall be exportable as JSON/CSV, attached to the job, and referenced in the PDF report
Degraded Conditions and Complex Geometry Handling
Given steep or complex roofs (e.g., facets >60° pitch, dormers, obstructions) or intermittent GNSS When live coverage estimation confidence drops below 0.8 (internal metric) Then the heatmap shall label regions as Unknown (confidence) distinct from Uncovered (coverage) and exclude Unknown from completion percent And Unknown regions >2% of roof area shall trigger a recommendation for targeted reflight passes And visual-inertial fallback shall maintain coverage update rate ≥1 Hz with footprint alignment error ≤0.8 m (P95) under GNSS degradation
Safety, Compliance, and Keep-Out Zones
"As a pilot, I want automated safety constraints and warnings so that I can fly confidently and compliantly around complex roofs."
Description

Enforces geofencing and user-defined keep-out polygons with altitude and lateral buffers around obstacles derived from the roof model and detected structures. Suggests pass direction based on wind to reduce crosswind drift and enforces battery reserves with RTH triggers based on plan progress. Runs preflight checks for GPS health, compass status, firmware compatibility, and SDK connectivity, pausing guidance on signal degradation and prompting safe recovery actions. Logs safety events and plan deviations into the job timeline for compliance and insurance documentation.
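
A minimal sketch of the pre-arm clearance check described above, assuming 2D local-ENU waypoint coordinates and shapely polygons for keep-outs and obstacle footprints; the buffer defaults and the Waypoint/Obstacle types are illustrative, not part of the RoofLens SDK.

```python
from dataclasses import dataclass
from shapely.geometry import Point, Polygon

L_BUFFER_M = 5.0   # lateral buffer around keep-out edges (assumed default)
A_BUFFER_M = 10.0  # vertical clearance above obstacles (assumed default)

@dataclass
class Waypoint:
    x: float    # meters east
    y: float    # meters north
    alt: float  # meters AGL

@dataclass
class Obstacle:
    footprint: Polygon
    height: float  # obstacle top, meters AGL

def violating_waypoints(path: list[Waypoint],
                        keep_outs: list[Polygon],
                        obstacles: list[Obstacle]) -> list[int]:
    """Indices of waypoints that break lateral or vertical buffers.

    shapely's distance() returns 0 for points inside a polygon, so a
    single >= L_BUFFER_M check rejects both incursions and near-misses.
    """
    bad = []
    for i, wp in enumerate(path):
        p = Point(wp.x, wp.y)
        lateral_ok = all(ko.distance(p) >= L_BUFFER_M for ko in keep_outs)
        vertical_ok = all(
            wp.alt - ob.height >= A_BUFFER_M
            for ob in obstacles if ob.footprint.contains(p)
        )
        if not (lateral_ok and vertical_ok):
            bad.append(i)  # highlight these segments and block arming upstream
    return bad
```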

Acceptance Criteria
Geofencing and Keep-Out Enforcement with Buffers
Given a corridor auto-plan with user-defined keep-out polygons and obstacle-derived buffers (lateral L meters, altitude A meters) is prepared And the roof model and detected structures are loaded When the plan is validated and during live guidance Then all planned waypoints, paths, and camera FOV footprints maintain >= L meters lateral clearance from keep-out polygon edges and obstacle footprints And all planned altitudes above obstacles maintain >= A meters vertical clearance And the system blocks arming/launch if any segment violates L or A, highlighting offending segments on the map And during flight, any incursion toward buffered areas halts guidance within 1 second, commands a hold/backtrack, and displays a "Geofence Hit" alert And each prevented incursion is logged with timestamp, location, and buffer details in the job timeline
Preflight Safety Checks Gate
Given the aircraft is connected and a mission plan is selected When the operator runs preflight checks Then GPS health meets configured minimums (e.g., satellites >= S_min or HDOP <= H_max), compass status is OK, firmware is within the compatible versions list, SDK connectivity is stable for >= 5 seconds, home point is recorded, and geofence/keep-out data is loaded And if any check fails, the Start Mission control is disabled and a specific remediation message is shown per failed check And upon all checks passing, a digitally signed preflight report (including firmware versions, thresholds, pass/fail per item) is attached to the job timeline
Wind-Aware Pass Direction Suggestion
Given live wind vector data is available from the aircraft sensors or an approved weather source And candidate pass headings are derived from the roof outline When the system computes the recommended pass direction Then the suggested direction yields a crosswind component <= configured limit C_max (default 4 m/s) where feasible, otherwise minimizes crosswind magnitude And the UI presents the suggested direction, estimated drift reduction percentage, wind source, and timestamp And if the operator selects a direction with crosswind > C_max, a confirmation dialog with risk warning is shown and the override decision is logged
Dynamic RTH and Battery Reserve Enforcement
Given current battery state-of-charge (SOC), consumption rate, wind-adjusted return estimate, and mission progress are known When predicted landing SOC would fall below the configured reserve R% (default 20%) if the mission continues Then an RTH trigger is issued within 2 seconds, guidance transitions to Return-to-Home, and the operator is alerted And if the remaining mission segment cannot be completed while preserving R%, the system auto-splits the plan at the next safe waypoint and prompts for a battery swap And all RTH triggers and reserve calculations (inputs and estimates) are logged to the job timeline
Signal Degradation Pause and Recovery Prompts
Given corridor guidance is active When any of the following persist beyond 3 seconds: GPS HDOP > H_max or satellites < S_min, compass variance > V_max, RC link RSSI < R_min, or video link SNR < N_min Then guidance pauses within 2 seconds, the aircraft holds position (or rises to the configured safe loiter altitude within the corridor), and a recovery checklist is displayed And guidance resumes automatically only after all metrics are within limits for 10 consecutive seconds or the operator explicitly resumes And if degradation persists beyond 30 seconds, the operator is prompted to RTH or land, with one-tap execution available
Safety Event and Plan Deviation Logging
Given a mission is planned or in progress When any safety event (e.g., geofence hit, preflight failure, signal degradation, RTH trigger) or plan deviation (e.g., skipped pass, corridor edit, manual override) occurs Then an immutable entry is added to the job timeline including UTC timestamp, GPS coordinates, severity, actor (system/operator), reason code, and links to relevant media/telemetry And the timeline supports filtering by event type and can be exported as signed PDF and JSON for compliance/insurance documentation
No-Fly Zone and Authorization Compliance (Online/Offline)
Given airspace/no-fly data is available from the integrated provider and cached locally When a planned path intersects a prohibited zone Then mission generation is blocked with an error citing the data source and effective times of the restriction And when a planned path intersects an authorization-required zone, the operator is prompted to supply a valid authorization token; upon verification, planning proceeds and the token is attached to the job And when offline, cached airspace data not older than 24 hours is used; if older, mission start is disabled until sync or a policy-based override is acknowledged and logged

WindSmart Pathing

Live wind and gust sensing adapts headings, speed, and pass spacing on the fly. Visual and haptic prompts help counter drift and yaw so overlap stays on target, reducing blurry shots, re-flies, and battery swaps in breezy conditions.

Requirements

Onboard Wind Vector Estimation
"As a drone pilot using RoofLens, I want accurate live wind speed and direction at the aircraft so that the system can compensate my flight path and I can make safe decisions in gusty conditions."
Description

Derive real-time wind speed and direction at the aircraft from onboard telemetry (IMU, GPS drift, attitude, thrust), with localized weather data as a fallback source. Provide a continuously updated wind vector (≥2 Hz), latency <200 ms, and a confidence score. Expose the vector to guidance, pathing, and safety subsystems via a unified interface in the RoofLens flight app. Include calibration routines, sensor sanity checks, and graceful degradation when signal quality is low.
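
One way to picture the fusion core is the classic wind-triangle estimate for multirotors: wind equals GPS ground velocity minus the air-relative velocity inferred from attitude. The small-angle tilt model and the K_TILT constant below are assumptions for illustration, not RoofLens calibration values.

```python
import math

K_TILT = 8.5  # m/s of airspeed per radian of tilt (illustrative calibration)

def estimate_wind(v_north: float, v_east: float,
                  roll_rad: float, pitch_rad: float, yaw_rad: float):
    """Return (speed_mps, dir_deg_true) of the wind at the aircraft.

    In quasi-steady flight a multirotor's air-relative velocity is
    roughly proportional to its tilt: nose-down pitch drives forward
    airspeed, right roll drives rightward airspeed (small angles).
    """
    # Air velocity in the body frame (x forward, y right).
    va_fwd = K_TILT * (-pitch_rad)
    va_right = K_TILT * roll_rad
    # Rotate into north/east coordinates using yaw.
    va_n = va_fwd * math.cos(yaw_rad) - va_right * math.sin(yaw_rad)
    va_e = va_fwd * math.sin(yaw_rad) + va_right * math.cos(yaw_rad)
    # Wind triangle: wind = ground velocity - air velocity.
    wind_n, wind_e = v_north - va_n, v_east - va_e
    speed = math.hypot(wind_n, wind_e)
    # Meteorological convention: the direction the wind blows FROM.
    direction = math.degrees(math.atan2(-wind_e, -wind_n)) % 360.0
    return speed, direction
```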

Acceptance Criteria
Sustained Real-Time Wind Vector Updates During Survey Flight
Given the aircraft has nominal GPS lock (HDOP ≤ 1.5) and healthy IMU status When flying a standard survey grid at 4–7 m/s in steady wind for 10 minutes Then the wind vector is emitted at a rate ≥ 2.0 Hz for ≥ 95% of the flight duration And no individual update gap exceeds 1.0 s And each update includes a timestamp, wind speed, wind direction, and confidence And timestamps are monotonic and within 50 ms of the system clock
End-to-End Latency Under 200 ms
Given the wind vector is consumed by Guidance, Pathing, and Safety via the unified interface with synchronized clocks When measuring time from sensor fusion completion to consumer receipt over a 10-minute flight Then the 95th percentile end-to-end latency is ≤ 200 ms And the maximum observed latency is ≤ 250 ms And latency jitter (standard deviation) is ≤ 60 ms
Confidence Score Publication and Behavior
Given high-quality signals (HDOP ≤ 1.5, no IMU saturation, stable thrust) When hovering for 60 s and then flying straight for 60 s Then confidence is ≥ 0.8 for ≥ 90% of samples Given degraded signals (HDOP ≥ 3.0 or IMU saturation detected or frequent GPS velocity spikes) When maintaining flight for 2 minutes Then confidence is ≤ 0.5 for ≥ 80% of samples And confidence values are bounded in [0.0, 1.0] and update at the same cadence as the wind vector And a quality flag is set to LOW whenever confidence < 0.6
Unified Interface Subscription by Guidance, Pathing, and Safety
Given Guidance, Pathing, and Safety are subscribed to the unified wind interface When the module emits wind updates during a 5-minute flight Then all subscribers receive identical payloads (speed, direction, confidence, source, timestamp) for each sequence number And inter-subscriber delivery skew is ≤ 50 ms per update And subscribing or unsubscribing any one consumer mid-flight does not cause more than one missed update for other consumers And the interface reports version="v1" in metadata
Calibration Workflow and Persistence
Given the aircraft is on the ground with motors disarmed When the user initiates Wind Bias Calibration Then the routine completes successfully within 60 s or aborts with a clear error and retry instructions And on success, calibration parameters are persisted and survive power cycle Given in-flight calibration is initiated with prescribed maneuvers When the pilot completes the maneuver set within 2 minutes Then calibration converges and is applied without interrupting wind outputs And the user receives success/failure indication in-app
Graceful Degradation and Weather Fallback
Given GPS or magnetometer health is lost for > 5 s When localized weather data is available Then the module switches source=fallback_weather within 1 s and continues publishing at ≥ 1 Hz And confidence decreases by at least 0.3 relative to pre-failure average And upon sensor recovery, the module reverts to source=fused_onboard within 2 s And the step change between consecutive outputs during transitions is ≤ 3 m/s in speed and ≤ 30° in direction
Sensor Sanity Checks and Fault Handling
Given anomalous telemetry conditions occur (e.g., computed wind magnitude exceeds aircraft ground speed by > 10 m/s for > 2 s, or IMU gyro saturation persists > 1 s) When such conditions are detected Then the module flags status=UNRELIABLE_WIND and sets confidence=0.0 or suppresses publication within 100 ms And emits a fault event with root-cause code to Safety within 100 ms And records a log entry containing timestamp, sensor health, and last valid wind vector And under nominal conditions, false-positive UNRELIABLE_WIND events are ≤ 1 per flight hour
Adaptive Path Replanner
"As a solo roofer, I want the app to automatically adapt headings and speed when gusts hit so that my sidelap and image sharpness stay within spec without constant manual input."
Description

Continuously adjust headings, ground speed, and waypoint timing to counter crosswinds and gusts while maintaining survey geometry. Apply rate-limited setpoint changes to avoid aggressive maneuvers, respect geofences and roof perimeters, and comply with DJI/Autel SDK constraints. Fall back to guidance-only mode (no direct control) when SDK control is unavailable. Ensure deterministic behavior with reproducible parameters per job and seamless resume after temporary loss of link.
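
A sketch of the rate-limited setpoint smoothing this requirement calls for, using the heading-rate and speed-rate limits named in the acceptance criteria below (12°/s and 0.7 m/s²); function and constant names are illustrative.

```python
HEADING_RATE_LIMIT = 12.0  # deg/s, per the criteria below
SPEED_RATE_LIMIT = 0.7     # m/s^2, per the criteria below

def _wrap180(angle_deg: float) -> float:
    """Wrap an angle difference into (-180, 180]."""
    return (angle_deg + 180.0) % 360.0 - 180.0

def rate_limited_step(cur_heading: float, cur_speed: float,
                      tgt_heading: float, tgt_speed: float,
                      dt: float) -> tuple[float, float]:
    """Move setpoints toward targets without exceeding either rate limit."""
    dh = _wrap180(tgt_heading - cur_heading)
    max_dh = HEADING_RATE_LIMIT * dt
    dh = max(-max_dh, min(max_dh, dh))
    dv = tgt_speed - cur_speed
    max_dv = SPEED_RATE_LIMIT * dt
    dv = max(-max_dv, min(max_dv, dv))
    return (cur_heading + dh) % 360.0, cur_speed + dv
```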

Acceptance Criteria
Crosswind Compensation Maintains Overlap and Heading
Given a roof survey plan with target 75% frontlap and 65% sidelap at 40 m AGL, computed lane spacing L and photo interval T, and steady crosswind 4–8 m/s with gusts up to +3 m/s perpendicular to lanes When the mission runs with Adaptive Path Replanner enabled Then 95th-percentile lateral deviation from lane centerline is ≤ 0.2 × L And RMS heading error relative to lane axis is ≤ 3° And achieved sidelap is ≥ 60% on ≥ 99% of photos and median ≥ 65% And achieved frontlap is ≥ 70% on ≥ 99% of photos and median ≥ 75% And unplanned re-fly distance due to overlap shortfall is ≤ 1% of total track length
Rate-Limited Setpoint Smoothing Under Gusts
Given a wind step change of 3 m/s within 1 s during a pass and a required crosswind correction of ≥ 10° When the replanner updates heading, ground speed, and waypoint timing Then commanded heading rate never exceeds 12°/s And commanded ground-speed change rate |dv/dt| ≤ 0.7 m/s² And estimated lateral acceleration along track ≤ 3.0 m/s² And successive waypoint ETA updates differ by ≤ 1.0 s per update cycle And SDK command send frequency remains ≤ 10 Hz with no bursts over this limit
Geofence and Roof Perimeter Compliance During Replanning
Given a roof polygon R, a hard geofence G, and a safety margin M = 1.0 m inside R When the replanner computes path corrections under wind Then all commanded positions remain inside both G and the interior of R offset inward by M at all times And minimum distance to the roof edge is ≥ M for ≥ 99.9% of samples and never < 0.5 × M And zero boundary violations are recorded in telemetry and logs And if a correction would breach the margin, the replanner reduces speed or defers the photo trigger instead of exiting bounds
SDK Constraint Compliance and Fallback to Guidance-Only
Given the active SDK rejects or times out direct control commands for ≥ 1.0 s When this condition is detected during flight Then the system enters guidance-only mode within 1.0 s of detection And while in guidance-only mode, zero direct attitude/velocity/position commands are sent to the SDK, and operator visual and haptic prompts are issued at ≥ 1 Hz And all SDK calls made remain within documented capability and rate limits for the vendor, with zero uncaught errors And upon SDK control availability for ≥ 2.0 s, the system re-enters direct-control mode with heading and speed deltas respecting rate limits and with no single 100 ms period showing > 5° heading or > 1.0 m/s speed discontinuity
Deterministic Reproducibility of Setpoints per Job
Given a job’s parameters, SDK capability profile, random seed S, and recorded wind and link-event time series When the replanner is executed twice in log-replay with identical inputs Then the emitted setpoint stream (timestamps, positions, velocities, headings) matches exactly (byte-identical hash) And the exported job bundle contains S and a parameter/capability hash that reproduces the same setpoints when reprocessed And across different hardware of the same architecture, numeric deviations in logged setpoints are ≤ 1e-6 in magnitude
Seamless Resume After Temporary Link Loss
Given a running mission experiences a telemetry/control link loss lasting 5–60 s mid-pass When the link is restored Then the replanner resumes the mission without operator input, reacquiring the current lane within 5 s with lateral error ≤ 2 m and heading error ≤ 5° And the system does not duplicate more than 2 consecutive photos and leaves ≤ 2% uncovered roof area due to the interruption And resume actions respect the rate limits defined for heading and speed changes And mission progress, coverage map, and photo indexing remain consistent with pre-loss state
Pass Spacing Auto-Tuning
"As an estimator, I want lane spacing to auto-tune based on wind so that I don’t have to re-fly sections due to poor overlap."
Description

Dynamically compute and adjust lane spacing to maintain target sidelap/forward lap thresholds (e.g., 70%/80%) under varying winds. Differentiate upwind vs. downwind passes, compensate for yaw-induced footprint skew, and re-space subsequent passes when drift exceeds limits. Surface the effective spacing to the UI and log changes to the job for auditability.
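
The spacing rule the criteria below imply fits in a few lines: the effective footprint width shrinks by cos(|ψ|) under yaw skew, and lane spacing is the effective width times one minus the target sidelap. A sketch, with illustrative names:

```python
import math

def lane_spacing_m(base_footprint_width_m: float,
                   target_sidelap: float,
                   yaw_deg: float = 0.0) -> float:
    """Lane spacing that preserves the target sidelap under yaw skew."""
    effective_width = base_footprint_width_m * math.cos(math.radians(abs(yaw_deg)))
    return effective_width * (1.0 - target_sidelap)

# Example: a 20 m footprint at 70% sidelap gives 6.0 m lanes; 10 deg of
# sustained yaw tightens that to about 5.91 m.
print(round(lane_spacing_m(20.0, 0.70, 10.0), 2))
```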

Acceptance Criteria
Maintain Target Overlaps Under Crosswind Drift
Given a mapping mission targeting 70% sidelap and 80% forward lap with an estimated crosswind between 6 and 10 m/s And aircraft groundspeed, camera footprint, and trigger interval are known When WindSmart Pass Spacing Auto-Tuning runs in flight Then commanded lane spacing is computed such that predicted sidelap ≥ 70% for all upcoming passes And predicted forward lap ≥ 80% for all upcoming captures based on current groundspeed and trigger interval And realized overlaps computed from telemetry after each pass are ≥ targets for at least 95% of images per pass
Upwind vs Downwind Forward-Overlap Adjustment
Given alternating upwind and downwind legs where headwind/tailwind creates >20% difference in groundspeed When auto-tuning computes capture cadence and spacing per leg Then on upwind legs, capture interval and/or groundspeed are adjusted so predicted forward lap ≥ 80% And on downwind legs, capture interval and/or groundspeed are adjusted so predicted forward lap ≥ 80% And no leg deviates more than ±5% from the planned ground sampling distance as a result of these adjustments
Yaw-Induced Footprint Skew Compensation
Given sustained yaw |ψ| ≥ 8° for ≥ 2 seconds during a pass When computing effective footprint width and lane spacing Then the algorithm uses projected footprint width = base_footprint_width × cos(|ψ|) And commanded lane spacing is reduced accordingly so predicted sidelap remains ≥ 70% And realized sidelap derived from track and yaw telemetry is ≥ 70% for at least 95% of images in the affected segment
Auto Re-spacing After Drift Exceeds Limit Mid-Mission
Given measured cross-track drift indicates predicted sidelap on the next pass would fall below 70% And the aircraft is within 50 m of the next turn point When the system evaluates spacing Then it recomputes the next N (≥3) passes’ centerlines within 1 second And updates spacing so predicted sidelap on each recomputed pass is ≥ 70% without violating mission min/max lane spacing constraints And the updated plan introduces no unflown gaps > 0.5 m and no redundant coverage > 150% sidelap
UI Shows Effective Spacing and Overlap in Real Time
Given WindSmart auto-tuning is active in flight When lane spacing is adjusted Then the UI displays the current effective lane spacing (meters) and predicted sidelap/forward lap within 2 seconds of the change And displayed values are within ±0.5 m (spacing) and ±2% (overlaps) of the values recorded to the job log And the UI denotes pass direction (upwind/downwind) and flags if spacing differs by ≥5% between directions
Audit Log of Spacing Changes and Rationale
Given any automatic change to lane spacing or capture cadence occurs When the change is committed Then an entry is appended to the job log with timestamp, pass index, prior spacing, new spacing, prior/predicted sidelap and forward lap, wind estimate, yaw, and rationale code And 100% of such changes during a mission are logged And the log is exportable as JSON and included in the PDF job summary as a human-readable table
Visual and Haptic Drift Cues
"As a pilot in breezy conditions, I want clear visual and haptic prompts to counter drift and yaw so that I can keep the capture within quality tolerances."
Description

Provide on-screen vectors, color-coded guidance bars, and configurable haptic feedback that prompt micro-corrections to counter drift and yaw during manual or semi-autonomous flight. Ensure cues are legible in sunlight, minimally intrusive, and accessible (contrast and vibration patterns). Operate offline with cached assets and degrade gracefully if sensor inputs are limited.
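
A tiny sketch of the severity mapping, using the drift-speed buckets and the 0–2.5 m/s bar scaling the criteria below define:

```python
def drift_severity(drift_mps: float) -> str:
    """Map lateral drift speed to the green/amber/red buckets."""
    if drift_mps < 0.5:
        return "green"
    if drift_mps < 1.5:
        return "amber"
    return "red"

def bar_length_pct(drift_mps: float) -> float:
    """Scale bar length linearly: 0-2.5 m/s maps to 0-100%."""
    return max(0.0, min(100.0, drift_mps / 2.5 * 100.0))
```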

Acceptance Criteria
Sunlight Legibility of Drift Cues
Given ambient light is >= 80,000 lux or device brightness is set to maximum When drift vectors, guidance bars, and labels are rendered over live video Then essential cue elements meet contrast ratio >= 7:1 against the background And vector stroke width is >= 2 px at 1080p and scales proportionally with resolution And cue text font size is >= 14 sp without clipping on 5"–10" displays And cues remain clearly visible in a 60-second outdoor recording with no loss of legibility
Severity Encoding via Color and Pattern
Given lateral drift speed is computed from flight sensors When drift speed falls into [0.0–0.5) m/s, [0.5–1.5) m/s, or >= 1.5 m/s Then guidance bars display green, amber, or red respectively within 200 ms of threshold crossing And bar length scales linearly with drift speed (0–2.5 m/s => 0–100% length) And all severity encodings meet contrast >= 4.5:1 against the background And when Accessibility: Color-Blind Mode is ON, severity is additionally encoded by distinct patterns (solid/striped/dashed) and tooltips do not rely on color-only
Configurable Haptic Feedback Patterns
Given a controller or device with haptic capability is connected When drift speed exceeds 0.5 m/s, 1.5 m/s, or yaw rate exceeds 20°/s Then short (50 ms), medium (150 ms), and long (300 ms) pulses are emitted respectively within 80 ms of threshold crossing And haptic intensity is configurable from 0% to 100% in 10% increments with default 60% And users can enable/disable haptics per profile and settings persist across app restarts And if no haptics are available, visual cues continue and a single non-blocking notice "Haptics unavailable" is shown per session
Offline Operation with Cached Assets
Given the device has no internet connectivity When WindSmart Pathing and drift cues are enabled Then all required UI assets and rules load from local cache in <= 500 ms And no network requests are attempted during the flight session (0 outbound calls) And on cache miss, a minimal monochrome cue set loads in <= 300 ms and an offline warning is logged for later sync And total cached assets for this feature do not exceed 20 MB
Graceful Degradation on Limited Sensors
Given wind vector data is unavailable but IMU/gyro and GPS are available When cues are generated Then the system uses ground-speed drift and yaw-rate proxies and displays a "Low confidence" badge And cue update rate remains >= 6 Hz with on-screen latency <= 150 ms (p95) And if both GPS and IMU are unavailable for > 2 seconds, visual cues auto-hide, haptics disable, and a banner "Cues paused: sensors unavailable" appears without blocking flight controls And cues auto-resume within 500 ms after sensors recover
Minimal Intrusiveness and User Dismissal
Given cues are active during manual or semi-autonomous flight When overlays are rendered Then total overlay area covers <= 15% of the viewport and does not occlude the central 30% aiming region And system maintains >= 24 FPS (p99) with CPU utilization increase <= 10% over baseline And user can snooze cues for 5 seconds via single tap or mapped controller button; cues auto-restore afterward And overlays never obscure critical flight telemetry (altitude, battery, link status)
Cue Latency and Update Rate
Given normal device load (CPU < 80%) and active flight When wind/drift inputs change Then visual cue time-to-display is <= 120 ms (p95) with update rate >= 10 Hz And haptic cue time-to-vibrate is <= 80 ms (p95) with inter-pulse jitter <= 20 ms And under high load (CPU >= 80%), system degrades to >= 6 Hz while maintaining latency bounds and displays a "Reduced update rate" indicator
Overlap and Blur Assurance Monitor
"As a quality-focused operator, I want a live quality meter and automatic safeguards so that I avoid blurry shots and coverage gaps that cause disputes."
Description

Compute real-time coverage and image quality risk using ground speed, shutter speed, GSD, motion profile, and live wind vector. Display a quality meter, predict gap risk, and take protective actions: auto slow-down, adjust trigger rate, mark segments for redo, or pause capture if thresholds are violated. Emit alerts and log all interventions to support QA and dispute reduction.
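
The blur half of the risk metric reduces to pixel smear: ground distance covered during the exposure, divided by the GSD. A sketch using the 1.0 px heuristic that appears elsewhere in this document; the low/medium cutoffs are assumptions:

```python
def ground_smear_px(ground_speed_mps: float, shutter_s: float,
                    gsd_m_per_px: float) -> float:
    """Pixels of motion blur accumulated during one exposure."""
    return (ground_speed_mps * shutter_s) / gsd_m_per_px

def blur_severity(smear_px: float) -> str:
    if smear_px < 0.5:   # assumed cutoff
        return "low"
    if smear_px < 1.0:   # 1.0 px threshold from the logging criteria
        return "medium"
    return "high"

# Example: 6 m/s at 1/500 s with a 1.2 cm GSD is right at ~1.0 px.
print(ground_smear_px(6.0, 1 / 500, 0.012))
```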

Acceptance Criteria
Telemetry-Driven Risk Computation
Given live telemetry (ground speed, shutter speed, GSD, motion profile, wind vector) updating at ≥5 Hz When survey capture is active Then a coverage/blur risk score in the range 0–100 is computed and updated at ≥5 Hz And the score increases when ground speed-to-shutter ratio increases, angular rates increase, or crosswind component increases, holding other inputs constant And if any required input is stale >500 ms, the score state is set to Unknown and a warning event is emitted within 200 ms
Quality Meter UI and Thresholds
Given the risk score stream is available When the quality meter is displayed Then the meter shows the numeric score (0–100) and a color/state: Green (0–29), Yellow (30–59), Red (60–89), Critical (90–100) And end-to-end UI latency from score update to render is ≤200 ms at the 95th percentile And the meter refresh rate is ≥5 Hz while capture is active
Gap Risk Prediction vs Configured Overlap
Given configured forward and side overlap targets are set for the mission When predicting coverage using current ground speed, shutter interval, aircraft motion, and wind drift for the next 5 photos Then the predicted minimum forward and side overlap percentages are computed and displayed And if the predicted overlap is ≥5 percentage points below the target in any axis, Gap Risk is set to True and timestamped And the UI shows the predicted minimum overlap and the deficit (percentage points)
Protective Action — Auto Slow-Down and Trigger Rate Adjustment
Given Gap Risk = True due to forward overlap shortfall and autopilot speed control is enabled When policy "Auto Slow/Retime" is ON Then ground speed is reduced by up to 30% with deceleration ≤1 m/s² until predicted forward overlap ≥ target within 3 s or control limits are reached And camera trigger interval is adjusted by up to ±20% (within camera limits) to keep GSD variance within ±10% of planned And minimum safe airspeed and camera capability limits are respected; if limits prevent recovery, state escalates to Red within 1 s
Protective Action — Segment Mark-and-Redo
Given Critical state persists for ≥2 s or the blur metric exceeds threshold on ≥3 consecutive images When the condition is detected Then the active path segment is tagged "Redo Required" with start/end timestamps, GPS bounds, and affected image IDs And a redo task is queued to repeat the segment after the current pass, with an operator prompt to Accept or Skip And the segment is visually flagged on the map until the redo is completed or explicitly dismissed
Protective Action — Auto Pause on Critical Risk
Given Critical state occurs and recovery by auto slow/retime fails within 3 s and pause policy is enabled When the aircraft is in survey mode Then image triggering is paused within 1 s and the aircraft holds position or continues flight without capturing until risk <60 And capture resumes only after risk <60 for ≥5 s and either operator confirmation or auto-resume policy allows
Alerts and Intervention Logging
Given any threshold transition (Green↔Yellow↔Red↔Critical) or protective action occurs When the event is emitted Then a structured log entry is written within 100 ms containing timestamp, GPS, wind vector, ground speed, shutter speed, GSD, risk score, state, and action type And an operator alert is shown with a toast and a haptic pattern mapped to severity (Yellow=1 short, Red=2 short, Critical=3 long) And 100% of threshold transitions and protective actions are present in the flight log and included in the mission QA report (PDF/JSON)
Battery and Safety Guardrails
"As a field tech, I want wind-aware battery and safety guardrails so that I don’t run out of battery fighting headwinds or violate safety limits."
Description

Maintain wind-aware energy budgeting and return-to-home (RTH) reserves, forecasting consumption for each leg under headwind scenarios. Reorder or shorten passes to reduce upwind exposure when reserves are tight, and enforce configurable no-fly thresholds (gust limit, max tilt, GNSS quality). Trigger auto-pause/RTH when safety margins are breached and record the rationale in the flight log.
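
A minimal sketch of the wind-aware reserve check, assuming a constant-power cruise model; POWER_W, CAPACITY_WH, and the groundspeed floor are illustrative stand-ins for the real aircraft profile:

```python
POWER_W = 180.0     # average cruise draw (assumed)
CAPACITY_WH = 90.0  # usable battery energy (assumed)

def leg_energy_wh(distance_m: float, airspeed_mps: float,
                  headwind_mps: float) -> float:
    """Energy for one leg; headwind cuts groundspeed and stretches time."""
    groundspeed = max(airspeed_mps - headwind_mps, 0.5)  # floor avoids /0
    return POWER_W * (distance_m / groundspeed) / 3600.0

def can_start_leg(soc_pct: float, leg_m: float, rth_m: float,
                  airspeed: float, headwind_leg: float, headwind_rth: float,
                  reserve_pct: float = 20.0) -> bool:
    """True if the leg plus a wind-adjusted RTH still lands above reserve."""
    need_wh = (leg_energy_wh(leg_m, airspeed, headwind_leg)
               + leg_energy_wh(rth_m, airspeed, headwind_rth))
    predicted_soc = soc_pct - 100.0 * need_wh / CAPACITY_WH
    return predicted_soc >= reserve_pct
```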

Acceptance Criteria
Per-leg Wind-Aware Energy Forecast and RTH Reserve Enforcement
Given a mission with a defined home point, aircraft profile, battery capacity, and configured RTH reserve (percent SoC and minimum RTH time) And wind sensing is available from onboard sensors or telemetry When per-leg energy is forecast at a cadence of at least 1 Hz using wind-adjusted airspeed and current draw models Then the UI displays for the active and next leg: predicted end-of-leg SoC (%), predicted RTH SoC (%), and RTH time (mm:ss) And if any predicted end-of-leg SoC is below the configured reserve or predicted RTH time is below the configured minimum, the leg is not started and a Guardrail:ReserveBreach warning is raised And after each leg completes, the forecast vs actual energy consumption MAPE is <= 10% across the last 3 completed legs in steady winds (<= 10 m/s)
Adaptive Pass Reordering and Shortening Under Tight Reserves
Given a mission with mixed upwind/downwind passes and a configured RTH reserve When forecasted end-of-leg SoC for any upcoming upwind leg is below reserve Then the system reorders passes to execute downwind or crosswind legs first such that reserve is maintained And overlap and sidelap remain within mission spec with deviation <= 5% And if no ordering satisfies reserve, upwind legs are shortened incrementally (>= 25 m steps) until predicted end-of-leg SoC >= reserve or the pass is deferred And the replanned sequence and any truncated passes are presented for operator confirmation unless an auto-pause is active
Gust Limit No-Fly Enforcement
Given a configurable gust limit (m/s) is set When preflight gust magnitude exceeds the limit for >= 5 s within any 10 s window Then takeoff is inhibited with error Guardrail:GustLimit and the limit value is shown When in-flight gust magnitude exceeds the limit for >= 3 s Then auto-pause is engaged within 1 s and the aircraft holds position if safe And if gust magnitude remains above the limit for >= 10 s, RTH is initiated
Max Tilt Enforcement During Mission
Given a configurable max tilt (deg) threshold is set When roll or pitch exceeds the threshold for > 2 s during mission execution Then auto-pause is engaged within 1 s and warning Guardrail:TiltLimit is shown with measured tilt And if tilt exceeds the threshold continuously for > 8 s or occurs > 3 times within 60 s, RTH is initiated
GNSS Quality Guardrail for Takeoff and In-Flight
Given GNSS quality thresholds are configured (min satellites, max HDOP) When preflight GNSS quality is below thresholds (satellites < min or HDOP > max) for >= 5 s Then takeoff is inhibited with error Guardrail:GNSSQuality When in-flight GNSS quality is below thresholds for >= 3 s Then auto-pause is engaged and warning is shown And if GNSS quality remains below thresholds for >= 30 s, RTH is initiated
Guardrail Event Logging and Rationale
Given guardrail monitoring is active When any guardrail prevents an action, auto-pauses, or initiates RTH Then a flight-log entry is created within 5 s containing: ISO8601 UTC timestamp, lat/lon/alt, reason code, measured values (wind gust m/s, tilt deg, HDOP, satellites), configured thresholds, forecast values (end-of-leg SoC %, RTH SoC % and time), action taken, user override flag, and config version And the entry is immutable, append-only, and synchronized to the mission record within 60 s of landing And the log is exportable in JSON and included in the mission PDF summary with a concise rationale message
Wind Session Logging and Reports
"As a project manager, I want detailed wind and quality logs tied to each job so that I can audit flights, defend estimates, and improve team practices."
Description

Capture wind vectors, replanner adjustments, pass spacing changes, overlap metrics, blur risk events, and operator overrides. Sync logs to the RoofLens job record and generate a concise QA summary that can be attached to the PDF bid or exported (CSV/JSON). Provide post-flight heatmaps for overlap and motion blur likelihood to guide re-fly decisions and team training.
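
For illustration, one wind log record might take the following shape, reusing the field names the criteria below call out (speed_mps, dir_deg_true, gust_mps, ISO-8601 UTC timestamps); the WindSample type itself is hypothetical:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class WindSample:
    timestamp_utc: str  # ISO-8601 UTC with trailing Z
    lat: float
    lon: float
    alt_m: float
    speed_mps: float
    dir_deg_true: float
    gust_mps: float

now = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
sample = WindSample(
    timestamp_utc=now.replace("+00:00", "Z"),
    lat=39.7392, lon=-104.9903, alt_m=42.0,
    speed_mps=5.2, dir_deg_true=310.0, gust_mps=7.8,
)
print(json.dumps(asdict(sample)))  # one JSON line per >=1 Hz sample
```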

Acceptance Criteria
Log Wind Vectors and Replanner Adjustments During Flight
Given an active WindSmart mission with GNSS lock When the aircraft is airborne Then wind vectors (speed_mps, dir_deg_true, gust_mps) are sampled at ≥1 Hz and recorded with ISO-8601 UTC timestamps and aircraft lat/lon/alt Given wind speed changes by ≥1.0 m/s or direction by ≥10° When the change is detected Then a wind_change event is appended within 1 s with pre/post values and sample_ids Given the replanner alters heading or speed due to wind When the adjustment is applied Then a replanner_adjustment event is recorded with cause=wind_compensation, pre/post heading_deg, pre/post speed_mps, and affected_segment_ids Given logging is active When the mission ends Then the flight log contains no sample gaps >3 s and ≥99% of expected samples are present
Record Pass Spacing Changes and Overlap Metrics
Given a planned pass spacing S_plan_m When the replanner adjusts pass spacing Then a pass_spacing_change event is logged with pass_index, before_m, after_m, and rationale Given images are captured When overlap is computed Then per-image frontlap_pct and sidelap_pct are recorded and aggregated per pass with mean, min, and max values Given overlap metrics are computed When quality thresholds are evaluated Then the log stores a compliance flag per pass using defaults frontlap_pct ≥70% and sidelap_pct ≥60% (thresholds configurable) Given the mission completes When logs are validated Then ≥98% of captured images have associated overlap metrics or an error_code explaining omission
Detect and Log Blur Risk Events and Operator Overrides
Given motion blur heuristics are enabled When predicted ground smear exceeds 1.0 pixel at exposure time Then a blur_risk event is logged with severity ∈ {low, medium, high}, image_ids, airspeed_mps, shutter_s, and yaw_rate_dps Given the operator pauses capture, changes mode, or overrides yaw/speed When the override occurs Then an operator_override event is recorded with type, magnitude, duration_ms, and autonomy_resume_flag Given blur_risk or override events occur When the mission ends Then the summary includes counts and first/last timestamps for each event type and they are queryable by time range
Sync Flight Logs to RoofLens Job Record Post-Flight
Given a completed mission linked to a RoofLens job When the aircraft lands and connectivity is available Then logs upload and attach to the job within 2 minutes with status=uploaded and checksum_sha256 recorded Given connectivity is unavailable at landing When it is later restored Then logs sync automatically within 2 minutes without user action and with visible retry/backoff status updates Given the upload completes When the job record is viewed Then the flight log, QA summary, and heatmaps are listed with version, file_size, and download links that succeed (HTTP 200)
Generate QA Summary and Attach to PDF Bid
Given a job with at least one completed WindSmart mission When Generate Bid is triggered Then a QA Summary (≤2 pages, ≤300 KB) is generated containing wind stats, overlap compliance per pass, blur/override counts, and flight date/time Given the QA Summary is generated When the PDF bid is created Then the QA Summary is appended as an appendix or attached as a separate PDF and is visible in the bid package Given the QA Summary exists When viewed Then pass/fail indicators are shown for thresholds (default: frontlap_pct ≥70%, sidelap_pct ≥60%, blur_events = 0) with the actual thresholds displayed and changeable in settings
Export Logs and QA Data as CSV and JSON
Given a completed mission When the user selects Export CSV or Export JSON Then a file downloads within 10 s containing wind, replanner_adjustments, pass_spacing_changes, overlap metrics, blur_risk events, and operator_overrides Given the export is generated When fields are inspected Then keys and units match schema documentation (e.g., wind_speed_mps, wind_dir_deg_true, frontlap_pct, sidelap_pct) and timestamps are ISO-8601 UTC Given multiple missions are selected When bulk export is triggered Then a ZIP downloads containing one file per mission plus a manifest.json with mission_id, start_time, end_time, and file checksums
Render Post-Flight Overlap and Motion Blur Heatmaps
Given a completed mission with geotagged images When processing finishes Then overlap and motion_blur_likelihood heatmaps render on the job map within 5 minutes of upload completion Given heatmaps are visible When the user toggles layers or adjusts thresholds Then the legend updates and tooltips show grid cell value, min/max per pass, and linked image_ids Given heatmaps are computed When export is requested Then PNG and GeoJSON files download successfully and align to the base map with horizontal error ≤1 m

Overlap Guardian

A real-time forward/side overlap gauge overlays your corridor. If coverage dips below threshold, lanes turn red and the app nudges you to slow, tighten, or add a pass—producing reconstruction‑ready photo sets that eliminate QC failures.

Requirements

Real-time Overlap Computation Engine
"As a drone pilot, I want accurate real-time overlap metrics while flying so that I can adjust my flight and capture reconstruction-ready photos without re-flying."
Description

Compute forward and side overlap continuously at 1 Hz or higher from aircraft telemetry (position, altitude AGL, groundspeed, yaw, gimbal pitch/roll), camera intrinsics/extrinsics, and trigger interval. Detect dips below user-defined thresholds and expose normalized metrics (0–100%) to the UI and guidance logic via a local API. Support nadir and oblique capture, variable altitudes, and terrain following (using DSM if available or flat-earth fallback). Fuse RTK accuracy when present; degrade gracefully with standard GNSS by inflating uncertainty bands. Time-align shutter events with vehicle motion to account for motion blur and rolling shutter. Persist a per-pass coverage buffer to enable backtracking and post-flight reporting.
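
The nadir, flat-earth core of this computation fits in a few lines: footprints from AGL and FOV, forward overlap from trigger spacing, side overlap from lane spacing. Oblique capture and terrain following require the full projective model; this sketch shows only the fallback case, with illustrative names:

```python
import math

def footprint_m(agl_m: float, fov_deg: float) -> float:
    """Ground footprint extent along one axis for a nadir camera."""
    return 2.0 * agl_m * math.tan(math.radians(fov_deg) / 2.0)

def forward_overlap_pct(groundspeed_mps: float, trigger_interval_s: float,
                        agl_m: float, fov_along_deg: float) -> float:
    length = footprint_m(agl_m, fov_along_deg)
    pct = 100.0 * (1.0 - groundspeed_mps * trigger_interval_s / length)
    return max(0.0, min(100.0, pct))  # normalized 0-100% as required

def side_overlap_pct(lane_spacing_m: float, agl_m: float,
                     fov_cross_deg: float) -> float:
    width = footprint_m(agl_m, fov_cross_deg)
    pct = 100.0 * (1.0 - lane_spacing_m / width)
    return max(0.0, min(100.0, pct))
```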

Acceptance Criteria
Real-time Overlap Metrics at ≥1 Hz
Given a 10 Hz telemetry stream including position, altitude AGL, groundspeed, yaw, gimbal pitch/roll, camera intrinsics/extrinsics, and trigger interval When the engine is running Then forwardOverlap and sideOverlap are computed and updated at ≥1 Hz with end-to-end update latency ≤200 ms per update And forwardOverlap and sideOverlap are normalized to 0–100% (inclusive), with no values outside range And metric timestamps are monotonic and within 150 ms of the source telemetry time When the trigger interval changes mid-mission or groundspeed varies by ±50% Then forwardOverlap re-stabilizes within 1 s with no single-step spike >10% between consecutive updates
Threshold Breach Detection and Local API Publication
Given user-defined thresholds forwardThreshold and sideThreshold When either overlap metric dips below its respective threshold Then the engine flags a breach and publishes an event on the local API within ≤300 ms including metricName, value, threshold, timestamp, and severity (marginal if within 5% of threshold, critical if >5% below) And when the metric recovers above (threshold + 2% hysteresis) Then a recovery event is published within ≤300 ms And the local API endpoint for current state returns forwardOverlap, sideOverlap, thresholds, breachFlags, severity, and uncertainty in a single response within ≤100 ms
Nadir/Oblique Support Across Variable Altitudes
Given gimbal pitch angles of 0°, 20°, and 35°, camera roll up to ±5°, altitudes AGL varying from 30 m to 120 m, and straight flight lines When overlaps are computed Then forwardOverlap and sideOverlap match a reference photogrammetry model within ±5% across all configurations And yaw deviations up to ±15° from lane heading do not increase error beyond ±6% And outputs remain within 0–100% with no NaN or null values
Terrain Following with DSM and Flat-Earth Fallback
Given a DSM with ≤1 m resolution is available When terrain slope changes up to 20% occur along the corridor Then AGL derived from DSM has RMS error ≤1.0 m versus ground truth and overlap errors ≤5% When DSM input becomes unavailable mid-mission Then the engine switches to flat-earth mode within ≤1 s, sets mode='flat', and inflates overlapUncertainty by ≥3% absolute When DSM input becomes available again Then the engine switches back to terrain mode within ≤1 s and logs both mode transitions
RTK Fusion and GNSS Uncertainty Inflation
Given RTK FIX telemetry with reported σH≤0.03 m and σV≤0.05 m When overlaps are computed Then published overlapUncertainty (1σ) ≤2% and quality='RTK_FIXED' Given standard GNSS without RTK with σH≥0.5 m and σV≥1.0 m Then published overlapUncertainty (1σ) ≥5% and quality='GNSS' And uncertainty bands are included in the local API response alongside the overlap values
Shutter-Time Alignment and Motion/Rolling Shutter Compensation
Given camera trigger events and shutter profiles (exposure time and rolling-shutter readout time) When computing forward overlap on an aircraft traveling 0–15 m/s Then image capture time uses shutter midpoint and rolling-shutter temporal offset, with time-alignment error ≤20 ms versus hardware timestamps And forward overlap bias due to motion blur at 10 m/s with 1/250 s exposure is ≤3% versus ground truth
Per-Pass Coverage Buffer, Backtracking, and Post-Flight Reporting
Given a 6-lane corridor mission When images are captured Then a geospatial coverage buffer is updated per image and persisted to disk at least every 5 s without data loss on simulated crash When the app restarts Then the coverage buffer reloads identically (checksum match) and is available to guidance for backtracking When a gap >10 m² with overlap below threshold is detected and an additional pass is flown over the gap Then the buffer marks the area as covered and post-flight report API returns per-lane coverage ≥threshold with no remaining gaps And a post-flight endpoint returns JSON including per-lane forwardOverlapAvg, sideOverlapAvg, minOverlap, coveragePercent, and gapSegments with coordinates
Corridor Overlay and Lane Coloring
"As a field technician, I want a clear visual overlay that shows where coverage is insufficient so that I can correct my flight path in real time."
Description

Render the planned flight corridor with lane centerlines and dynamic polygons on the live map/video overlay. Color-code lanes based on computed overlap (green=meets, amber=at-risk, red=below threshold). Display an at-a-glance gauge for forward/side overlap and a mini-map showing current swath vs planned. Support pinch-zoom, north-up/track-up, high-contrast and colorblind-safe palettes. Maintain <100 ms UI latency from metric update to on-screen change. Operate offline with cached basemaps and job footprints. Allow quick toggles to show/hide overlays for de-cluttering.
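
A sketch of the lane-coloring rule as the criteria below state it, with FWD_T/SIDE_T thresholds in percentage points and a 5 pp margin separating green from amber:

```python
def lane_color(fwd_pct: float, side_pct: float,
               fwd_t: float, side_t: float, margin_pp: float = 5.0) -> str:
    """Red if either overlap misses its threshold; green only with margin."""
    if fwd_pct < fwd_t or side_pct < side_t:
        return "red"
    if fwd_pct >= fwd_t + margin_pp and side_pct >= side_t + margin_pp:
        return "green"
    return "amber"
```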

Acceptance Criteria
Live Corridor Rendering on Video Overlay
Given a loaded mission with a planned corridor and valid aircraft pose, When the live video/map view is active, Then render the planned corridor with lane centerlines and dynamic polygons aligned to planned coordinates with ≤1.5 m positional error at ≤120 m AGL; And overlay elements refresh at ≥10 Hz; And lane centerlines are continuous with no gaps >2 px at 2× zoom; And overlays never obscure flight-critical indicators (record, battery, link).
Lane Color Coding by Computed Overlap
Given forward overlap threshold FWD_T and side overlap threshold SIDE_T are configured, When computed forward and side overlaps are updated, Then lane color is Green when both overlaps ≥ threshold + 5 percentage points; And lane color is Amber when both overlaps ≥ threshold but either is < threshold + 5 percentage points; And lane color is Red when either overlap < threshold; And lane color transitions reflect the latest metric within 100 ms of metric receipt (95th percentile).
Forward/Side Overlap Gauge Responsiveness
Given live overlap metrics are streaming, When a new metric value arrives, Then the forward and side gauges update numerically (integer %) and graphically within 100 ms (95th percentile); And threshold markers are visible on each gauge; And update cadence is ≥5 Hz during flight; And gauge values match telemetry within ±1 percentage point over a 5‑minute run.
Mini‑Map Swath vs Planned with Orientation Modes
Given mini‑map is visible in North‑Up mode, When the aircraft moves, Then the planned footprint remains north‑aligned and the current swath trail updates within 250 ms; Given Track‑Up is selected, When aircraft heading changes, Then the mini‑map rotates to keep track at top within 300 ms without jitter >5°; And pinch‑zoom supports 0.5×–4× with first-frame response ≤80 ms and maintains geospatial alignment ≤1.5 m.
High‑Contrast and Colorblind‑Safe Palettes
Given Colorblind‑Safe palette is selected, Then lane states are distinguishable by both color and pattern (Green=solid, Amber=diagonal stripe, Red=crosshatch) and each pair’s luminance contrast ratio is ≥3:1; And simulated deuteranopia/protanopia/tritanopia tests yield ≥95% correct state identification across a 20‑screenshot set; Given High‑Contrast mode, Then overlay strokes/text achieve ≥4.5:1 contrast against underlying imagery.
Offline Operation with Cached Basemaps and Job Footprints
Given basemaps and job footprints are pre‑cached for the AOI + 500 m buffer, When the device is offline before or during flight, Then corridor overlays, lane coloring, gauges, and mini‑map load and function without network calls; And cached tiles render with no blank screens (missing tiles show a hatch placeholder); And UI thread frame time stays ≤16 ms avg and ≤50 ms at 95th percentile over 10 minutes.
Overlay Visibility Quick Toggles
Given the live view is active, When the user toggles Corridor, Lane Colors, Gauges, or Mini‑Map, Then the selected overlay hides/shows within 100 ms and its state persists for the current mission; And an All Overlays toggle hides/shows all in one action without pausing data capture; And the last-used visibility states restore on app resume within 1 second.
Adaptive Nudge Guidance
"As a pilot flying surveys, I want timely, simple guidance on how to fix overlap issues so that I can maintain quality without stopping the mission."
Description

Provide unobtrusive, actionable prompts when overlap risk is detected, suggesting the pilot slow down, tighten lane spacing, or add a pass. Use haptic, audio, and on-screen cues with rate limiting to avoid alert fatigue. Adapt recommendations based on wind, groundspeed variance, camera trigger rate, and remaining battery. Offer one-tap actions for common adjustments (e.g., reduce speed to X m/s, shrink lane spacing by Y%). Log each nudge and pilot response for QA and continuous improvement.
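
A minimal rate limiter matching the limits in the criteria below (one nudge per category per rolling 30 s, at most three nudges per minute); the NudgeLimiter name and monotonic-clock choice are illustrative:

```python
import time
from collections import deque

class NudgeLimiter:
    def __init__(self, per_category_s: float = 30.0, max_per_minute: int = 3):
        self.per_category_s = per_category_s
        self.max_per_minute = max_per_minute
        self.last_by_category: dict[str, float] = {}
        self.recent: deque = deque()  # timestamps of issued nudges

    def allow(self, category: str, now: float | None = None) -> bool:
        """True if a nudge of this category may be shown right now."""
        now = time.monotonic() if now is None else now
        while self.recent and now - self.recent[0] > 60.0:
            self.recent.popleft()  # drop nudges outside the rolling minute
        last = self.last_by_category.get(category)
        if last is not None and now - last < self.per_category_s:
            return False  # category cooldown still active
        if len(self.recent) >= self.max_per_minute:
            return False  # global cap reached
        self.last_by_category[category] = now
        self.recent.append(now)
        return True
```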

Acceptance Criteria
Nudge on Forward Overlap Risk at High Groundspeed
Given a mission with a forward overlap threshold T% and predicted forward overlap < T% for ≥ 2s based on current groundspeed and trigger rate When the risk is detected Then the app shows an on-screen banner labeled "Forward overlap risk" displaying current vs threshold values and a one-tap action "Reduce speed to X m/s" And When the pilot taps "Reduce speed to X m/s" Then the aircraft groundspeed setpoint changes to X ± 0.2 m/s within 2s and the banner updates to "Applied" And Then a single short haptic pulse and single chime play once, respecting device mute settings and user preferences And Then the same forward-overlap nudge is not re-issued for at least 30s unless predicted forward overlap decreases by ≥ 10 percentage points
Side Overlap Nudge under Crosswind Drift
Given a side overlap threshold T% and crosswind drift causes predicted side overlap < T% for ≥ 2 consecutive samples When the risk is detected Then the app shows a banner "Side overlap risk" with one-tap actions "Shrink lane spacing by Y%" and "Add pass" And When "Shrink lane spacing by Y%" is tapped Then lane spacing updates by Y% (computed to meet T% + 5% buffer) and replanning completes within 3s; subsequent lanes reflect the new spacing And When remaining battery reserve is insufficient to add a pass while maintaining ≥ 15% reserve Then the "Add pass" action is disabled with a tooltip "Insufficient battery" And Then a distinct two-short-pulse haptic pattern and a chime play once, respecting user preferences
Nudge Rate Limiting and Suppression
Given nudge categories {speed, spacing, add-pass} When multiple risks occur Then no more than one nudge per category is shown within any rolling 30s window and no more than 3 total nudges per minute And When a pilot dismisses or accepts a nudge Then the same category is suppressed for 10s And When snooze is activated for 2 minutes Then only critical risks (predicted overlap < T% − 20%) are allowed to alert with a reduced cue pattern And Then duplicate banners for the same root cause do not appear concurrently
Adaptive Recommendation Using Telemetry
Given wind speed and variance, groundspeed variance over 5s, camera trigger rate (requested vs actual), and remaining battery % When predicted forward and/or side overlap < T% Then the recommendation minimizes mission extension while restoring overlap ≥ T% with battery reserve ≥ 15% And When groundspeed variance over 5s > 1.5 m/s or actual trigger rate is > 10% below requested Then the recommendation favors speed reduction over adding a pass And When crosswind > 5 m/s Then the recommendation favors lane spacing reduction over speed change And When the payload supports trigger rate adjustment and this achieves overlap ≥ T% with lower cost than speed change Then a one-tap "Increase trigger rate to N Hz" is presented and applies within 2s
One‑Tap Action Execution and Undo
Given any nudge with a one-tap action When the action is tapped Then the commanded change (speed X, spacing Y%, or add pass) applies within 2s and is reflected in telemetry/plan UI And Then an "Undo" appears for 10s When "Undo" is tapped within 10s Then prior settings are restored within 2s And When the vehicle API does not support the action Then the app presents step-by-step manual instructions and logs the limitation without crashing And All changes respect safety bounds (min speed, max yaw/roll) and are rejected with an explanation if they would be violated
Multimodal Cues and Accessibility
Given user preferences for audio and haptics and the device mute state When a nudge is issued Then an on-screen banner occupies ≤ 20% of the viewport, includes action buttons, and meets contrast ratio ≥ 4.5:1 with accessible labels And Then audio plays at configured volume unless the device is muted; haptic plays if enabled, even when muted And Severity levels use distinct haptic patterns: advisory=1 short, escalated=2 short, critical=1 long And Screen readers (VoiceOver/TalkBack) read banner text and actions in logical order
Nudge and Response Logging for QA
Given logging is enabled When any nudge is shown, accepted, dismissed, or snoozed Then an event is persisted within 1s including: id, mission_id, UTC timestamp, nudge_type, root_cause, predicted forward/side overlaps and thresholds, wind_speed, groundspeed_mean/std, trigger_rate requested/actual, battery_percent, recommended action, user_action, and action_latency_ms And Then an outcome event records overlap values 30s after the action (or after dismiss) to compute effect size and result_status ∈ {improved, no_change, regressed} And Logs sync to the server within 5 minutes of creation or at mission end with retry/backoff; CSV export includes these fields; no PII beyond hashed pilot_id is stored And If network is unavailable Then logs are queued locally and survive app restarts
Threshold Profiles and Calibration
"As an estimator setting up a job, I want preset overlap profiles that match the deliverable so that pilots capture exactly what processing needs the first time."
Description

Allow configuration of forward/side overlap thresholds via selectable profiles (2D orthomosaic, 3D model, steep roof) with editable defaults. Auto-suggest thresholds based on altitude, camera FOV, and job type from RoofLens. Provide a quick calibration flow to verify trigger interval vs speed before takeoff. Persist per-organization defaults and enforce minimums set by admins. Expose thresholds via API for integration with mission planners.
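
The calibration math implied below is simple algebra on the forward-overlap formula: given the footprint length, trigger interval, and threshold, the maximum compliant groundspeed falls out directly. A sketch with illustrative names:

```python
def expected_forward_overlap_pct(speed_mps: float, interval_s: float,
                                 footprint_len_m: float) -> float:
    return 100.0 * (1.0 - speed_mps * interval_s / footprint_len_m)

def max_speed_for_threshold(threshold_pct: float, interval_s: float,
                            footprint_len_m: float) -> float:
    """Largest groundspeed that still meets the overlap threshold."""
    return footprint_len_m * (1.0 - threshold_pct / 100.0) / interval_s

# Example: 30 m along-track footprint, 2 s trigger interval, 80% target
# -> recommend a max groundspeed of 3.0 m/s.
print(max_speed_for_threshold(80.0, 2.0, 30.0))
```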

Acceptance Criteria
Selectable Threshold Profiles with Editable Defaults
- Given an authenticated user opens Overlap Guardian settings, When they view the Profile selector, Then the following profiles are available: "2D Orthomosaic", "3D Model", "Steep Roof".
- Given a profile is selected, When the settings panel loads, Then forward and side overlap fields auto-populate with that profile’s defaults.
- Given the user edits a default value and saves, Then the new default persists for the organization and is reapplied on next launch.
- Given an org-level default exists, When another user in the same org selects the profile, Then they see the updated org default.
- Given the user clicks "Reset to Profile Default", Then the values revert to the profile’s current org default.
Admin Minimum Threshold Enforcement
- Given an org admin, When they set minimum forward/side overlap for each profile, Then values must be between 50% and 95% inclusive.
- Given minimums are saved, When a non-admin attempts to set a profile default below the minimum, Then save is blocked and an inline error explains the minimum.
- Given thresholds are updated via API below minimum, Then the request is rejected with HTTP 422 and a validation error code "MIN_THRESHOLD_VIOLATION".
- Given minimums exist, When suggestions or calibrations propose values below minimum, Then the UI clamps suggestions to the minimum and marks them "clamped".
- Given an admin changes a minimum upward, When existing defaults are below the new minimum, Then they are automatically raised to the new minimum and a system audit log entry is recorded.
Auto-suggest Thresholds from Altitude, FOV, and Job Type
- Given altitude AGL, camera horizontal FOV (deg), and RoofLens job type are available, When the user selects or changes any of these, Then the app computes suggested forward and side overlap within 300 ms.
- Given job type "2D Orthomosaic", Then suggested forward >= 70% and side >= 60% unless clamped by admin minimums.
- Given job type "3D Model", Then suggested forward >= 80% and side >= 70% unless clamped by admin minimums.
- Given job type "Steep Roof", Then suggested forward >= 85% and side >= 75% unless clamped by admin minimums.
- Given camera FOV is unknown, Then the app uses the device profile default FOV for suggestions and displays "Using default camera FOV".
- Given the user accepts suggestions, Then the values populate the active profile but remain editable subject to minimums.
Pre-Flight Overlap Calibration Flow
- Given a selected profile and thresholds, When the pilot runs "Calibration" before takeoff, Then the app instructs a 10-second straight-line taxi/hover test and records GPS groundspeed and trigger interval.
- When calibration completes, Then the app computes expected forward overlap and reports "Pass" if within ±5% of the threshold or "Fail" otherwise.
- Given "Fail", Then the app recommends a max groundspeed (m/s) and/or trigger interval (s) that would achieve the threshold within ±5%.
- Given "Fail" and user role is not admin, Then mission arming is disabled until calibration passes or thresholds are raised by an admin.
- Given "Pass", Then the calibration result (timestamp, speed, interval, aircraft, camera) is stored and attached to the mission for audit.
Thresholds API for Mission Planner Integration
- Given a valid OAuth2 token with scope "overlap:read", When a client calls GET /v1/orgs/{orgId}/overlap-profiles, Then the API returns 200 with JSON including profile names, defaults, and admin minimums per profile.
- Given scope "overlap:write", When a client calls PUT /v1/orgs/{orgId}/overlap-profiles/{profileId} with valid values meeting minimums, Then the API returns 200 and the changes persist.
- Given invalid values below minimums, When PUT is called, Then the API returns 422 with error code "MIN_THRESHOLD_VIOLATION".
- Given changes are saved, Then a webhook event "overlap.profile.updated" is emitted within 5 seconds to registered endpoints.
- Given ETag is provided on GET, When a client supplies If-Match with a stale ETag on PUT, Then the API returns 412 Precondition Failed.
Real-time Overlap Gauge Behavior with Active Thresholds
- Given a mission is in progress, When predicted forward or side overlap falls below the active threshold for >1.0 seconds, Then the corresponding lane indicator turns red and a haptic/sound nudge is triggered once every 5 seconds while below threshold.
- Given predicted overlaps recover above thresholds for 2 consecutive samples, Then the indicators revert to normal and nudges stop.
- Given thresholds are changed mid-flight, Then the gauge updates immediately and applies the new thresholds within 250 ms.
- Given extreme values (e.g., thresholds >95%), Then the gauge remains functional without UI clipping or crashes.
Offline Profiles Access and Sync
- Given the device is offline, When the user opens profiles, Then the last-synced org profiles and minimums from the past 30 days are available read-only.
- Given the device is offline and the user attempts to edit defaults, Then changes are queued locally and marked "Pending Sync".
- When connectivity is restored, Then queued changes are synced within 60 seconds; conflicts are resolved by server last-write-wins and results are reflected in the UI.
- Given sync failure, Then the UI shows an error and retries with exponential backoff up to 5 minutes.
- Given the device has no cached data, When offline, Then the UI communicates that profiles cannot be loaded.
Post-Flight Coverage QC Report
"As a project manager, I want an automatic coverage report after each flight so that I can prove capture quality and avoid rework or disputes."
Description

Generate a per-mission QC report summarizing achieved forward/side overlap distributions, heatmaps of low-coverage areas, and a list of recommended recovery passes. Export as PDF for customers and JSON for automation. Attach the report to the RoofLens job and block upload if critical gaps are detected (admin-configurable). Include telemetry stats, camera model, RTK status, and timestamps to streamline audits and dispute resolution.

Acceptance Criteria
Per-Mission Overlap Distribution Summary Generation
Given a completed mission with 100–5,000 geotagged images and a flight path When the QC report is generated Then forward and side overlap are computed per image using the configured along-track and cross-track definitions And the report includes mean, median, min, max, standard deviation, p5, p25, p50, p75, p95 for both forward and side overlaps, rounded to 0.1% And the report shows the percentage of images meeting or exceeding the admin-configured overlap targets (default 75% forward, 65% side) And the sample size equals the number of valid images; invalid/missing-geotag images are counted and listed separately And generation completes within 3 minutes for ≤1,000 images and within 10 minutes for ≤5,000 images on standard infrastructure And calculations are deterministic: repeated runs on the same input produce identical statistics
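For illustration, the summary statistics named above can be computed deterministically with the Python standard library (rounding to 0.1% per the criterion); this is a sketch, not the production pipeline.

import statistics

def overlap_summary(values: list[float]) -> dict:
    """Deterministic stats for per-image overlap percentages (needs >= 2 values)."""
    qs = statistics.quantiles(values, n=100, method="inclusive")  # 99 cut points

    def pct(p: int) -> float:
        return round(qs[p - 1], 1)  # qs[4] is p5, qs[24] is p25, ...

    return {
        "mean": round(statistics.fmean(values), 1),
        "median": round(statistics.median(values), 1),
        "min": round(min(values), 1),
        "max": round(max(values), 1),
        "stdev": round(statistics.stdev(values), 1),
        "p5": pct(5), "p25": pct(25), "p50": pct(50),
        "p75": pct(75), "p95": pct(95),
    }

Because every operation here is a pure function of the input list, repeated runs on the same input yield identical statistics, satisfying the determinism requirement.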
Low-Coverage Heatmap Rendering
Given georeferenced image footprints and the mission AOI When the QC report is generated Then a coverage heatmap is produced on a grid with default cell size 1.0 m (admin adjustable 0.5–5.0 m) And cells with overlap below admin-configured thresholds are flagged as low coverage and visualized in red; others use a green→yellow gradient And the heatmap includes a legend with value ranges, min/max, target thresholds, and CRS (EPSG code) And the heatmap spatial extent matches the AOI bounds within ±0.5 m And the heatmap is embedded in the PDF and included in the JSON as a GeoJSON FeatureCollection with cell polygons and attributes {forward_overlap, side_overlap, is_low_coverage} And heatmap generation completes within 2 minutes for missions with ≤1,000 images
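A minimal sketch of the JSON-side cell encoding described above (grid construction omitted). The EPSG code shown is an arbitrary example, and the legacy GeoJSON "crs" member is used here only to carry the EPSG code the criterion requires.

def cell_feature(ring, fwd: float, side: float,
                 fwd_target: float, side_target: float) -> dict:
    """One heatmap cell as a GeoJSON Feature with the attributes named above."""
    return {
        "type": "Feature",
        "geometry": {"type": "Polygon", "coordinates": [ring]},
        "properties": {
            "forward_overlap": fwd,
            "side_overlap": side,
            "is_low_coverage": fwd < fwd_target or side < side_target,
        },
    }

collection = {
    "type": "FeatureCollection",
    "crs": {"type": "name", "properties": {"name": "EPSG:32614"}},  # example CRS
    "features": [
        cell_feature([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]],
                     fwd=71.2, side=58.0, fwd_target=75.0, side_target=65.0),
    ],
}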
Recommended Recovery Passes Generation
Given one or more contiguous low-coverage regions totaling ≥10 m² When the QC report is generated Then the system proposes recovery passes that, if flown, raise coverage in each region to meet targets And each proposed pass includes start/end coordinates, altitude AGL, speed, heading, gimbal pitch, estimated duration, and expected coverage gain And no more than 10 passes are proposed per mission; adjacent regions are merged where efficient; reason codes are provided when no passes are needed And proposed passes avoid no-fly polygons supplied with the mission and maintain minimum clearance constraints (altitude ≥ configured minimum) And proposed passes are included as a map overlay in the PDF and as GeoJSON LineString features in the JSON export
PDF and JSON Export and Schema Validation
Given a generated QC report When the user exports Then a PDF and a JSON file are produced with filenames RoofLens_QC_<MissionID>_<YYYYMMDDThhmmssZ>.pdf and .json And the PDF contains sections: Executive Summary, Overlap Distributions, Low-Coverage Heatmap, Recommended Recovery Passes, Telemetry & Sensor Details, Audit Metadata And the JSON validates against QC Report Schema v1.0 with top-level field schema_version="1.0" And both files include a SHA-256 checksum recorded in the job activity log And for missions ≤1,000 images, the PDF size is ≤20 MB and the JSON size is ≤10 MB without loss of required detail
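A sketch of the naming and checksum rules above, assuming the timestamp is taken in UTC at generation time.

from datetime import datetime, timezone
import hashlib

def export_names(mission_id: str) -> tuple[str, str]:
    """Filenames per the RoofLens_QC_<MissionID>_<YYYYMMDDThhmmssZ> pattern."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    base = f"RoofLens_QC_{mission_id}_{stamp}"
    return f"{base}.pdf", f"{base}.json"

def sha256_of(path: str) -> str:
    """Checksum recorded in the job activity log, per the criterion above."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()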
Automatic Attachment to RoofLens Job
Given a mission linked to a RoofLens job When the QC report is generated Then the PDF and JSON are attached to the job under the "QC Reports" section with versioning (v1, v2, ...) And access is restricted to roles with JobView or higher; downloads are logged with user ID and timestamp And if a report already exists for the mission, a new version is created; previous versions remain accessible read-only And a job timeline event "QC report generated" is created with links to both files
Critical Gap Detection and Upload Blocking
Given admin-configured critical gap rules (e.g., any low-coverage region > X m² or total low-coverage area > Y%) When the QC report is generated Then the system evaluates the mission against these rules And if any rule is triggered, reconstruction upload is blocked for the mission with a clear error message citing the specific rule(s) and measured values And admins and job owners with OverrideQC permission can proceed via an explicit override that records user, reason, and timestamp in the audit log And if no rules are triggered, upload proceeds without interruption And configuration changes take effect for new reports within 5 minutes and are reflected in report metadata
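A hedged sketch of how the admin-configured rules above might be evaluated; the rule parameters and message strings are illustrative.

def blocking_rules(low_regions_m2: list[float], aoi_m2: float,
                   max_region_m2: float, max_total_pct: float) -> list[str]:
    """Return the triggered critical-gap rules (empty list = upload proceeds)."""
    hits = []
    worst = max(low_regions_m2, default=0.0)
    if worst > max_region_m2:
        hits.append(f"low-coverage region {worst:.1f} m^2 exceeds {max_region_m2} m^2")
    total_pct = 100.0 * sum(low_regions_m2) / aoi_m2 if aoi_m2 else 0.0
    if total_pct > max_total_pct:
        hits.append(f"total low coverage {total_pct:.1f}% exceeds {max_total_pct}%")
    return hits

# Example: one 12 m^2 gap in a 400 m^2 AOI against X = 10 m^2, Y = 5%:
print(blocking_rules([12.0], 400.0, max_region_m2=10.0, max_total_pct=5.0))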
Telemetry, Camera, RTK, and Timestamps Inclusion
Given available EXIF and flight telemetry for mission images When the QC report is generated Then the report includes camera make/model, lens focal length, sensor size, shutter speed range, ISO range, and image count by camera And RTK status distribution is included (percent images RTK Fixed/Float/None) plus correction source; any gaps are flagged And flight telemetry stats include average and 95th percentile ground speed, altitude AGL, GSD (min/median/max), wind estimate if available, and mission start/stop UTC timestamps And missing or corrupted telemetry is reported with counts and impact notes; computations fall back using available data And report metadata includes app version, algorithm version, generation timestamp UTC, and a reproducibility hash of inputs
Offline and Fail-safe Operation
"As a pilot working in remote areas, I want the overlap system to work reliably offline and never compromise safety so that I can complete missions confidently."
Description

Ensure Overlap Guardian functions without network connectivity, including metric computation, overlays, nudges, and local caching of basemaps and job boundaries. Detect GNSS degradation and switch to conservative guidance, pausing nudges that could compromise safety. Fail closed: never issue prompts that conflict with obstacle avoidance or airspace rules. Provide a low-power mode that keeps metrics accurate while minimizing device and aircraft battery impact.

Acceptance Criteria
Core Offline Overlap Guidance
Given the device is in Airplane Mode and a cached job is opened When the flight session starts Then forward and side overlap metrics are computed at ≥2 Hz and overlays render with <150 ms latency without any network requests
Given coverage dips below the configured threshold When nudges are enabled Then nudge prompts render within 500 ms and update as metrics change while offline
Given the job basemap and boundary are cached When panning or zooming the map in-flight offline Then tiles and boundaries load from cache with no missing tiles within the planned corridor plus a 150 m buffer
Preflight Offline Readiness and Caching
Given network connectivity is available and a mission is loaded When the operator taps "Make Offline" Then the app downloads and caches basemap tiles for zoom levels 16–19 covering the planned corridor plus a 150 m buffer and the job boundary, showing progress to 100%
Given "Make Offline" has completed When network connectivity is lost Then the mission opens and runs with full basemap and boundary availability and shows an "Offline Ready" status
Given the total offline cache exceeds 1 GB When new offline areas are cached Then least‑recently‑used areas older than 30 days are evicted, preserving the current mission cache
GNSS Degradation Detection and Conservative Guidance
Given GNSS horizontal accuracy is >3 m for ≥3 s or satellite count is <8 When Overlap Guardian is active Then the system switches to Conservative Guidance, increases target overlap by +10%, and pauses speed‑up or widen‑lane nudges
Given Conservative Guidance is active When GNSS accuracy returns to ≤3 m for 10 s and satellite count is ≥12 Then normal guidance resumes and a "GNSS Recovered" banner is shown
Given Conservative Guidance is active When rendering gauges and lanes Then coverage lanes and gauges are shown in amber and a "GNSS Degraded" indicator is visible
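The degradation/recovery thresholds above imply a small state machine; a sketch with hypothetical names, not the flight code.

class GnssGuard:
    """Switch to Conservative Guidance per the thresholds above."""
    DEGRADE_ACC_M, DEGRADE_HOLD_S = 3.0, 3.0
    RECOVER_ACC_M, RECOVER_HOLD_S, RECOVER_SATS = 3.0, 10.0, 12

    def __init__(self):
        self.conservative = False
        self._bad_since = None
        self._good_since = None

    def update(self, h_acc_m: float, sats: int, now: float) -> bool:
        bad = h_acc_m > self.DEGRADE_ACC_M or sats < 8
        good = h_acc_m <= self.RECOVER_ACC_M and sats >= self.RECOVER_SATS
        if not self.conservative:
            self._bad_since = (now if self._bad_since is None else self._bad_since) if bad else None
            if self._bad_since is not None and now - self._bad_since >= self.DEGRADE_HOLD_S:
                self.conservative = True   # raise targets +10%, pause risky nudges
                self._good_since = None
        else:
            self._good_since = (now if self._good_since is None else self._good_since) if good else None
            if self._good_since is not None and now - self._good_since >= self.RECOVER_HOLD_S:
                self.conservative = False  # show "GNSS Recovered" banner
                self._bad_since = None
        return self.conservative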
Fail-Closed Safety Interlocks
Given obstacle avoidance is active or the flight controller reports avoidance When an overlap shortfall is detected Then no nudge suggests speed increase, path tightening, or an additional pass until avoidance clears
Given airspace/geofence rules prohibit flight outside the job boundary or such data is unavailable offline When calculating nudges Then the system suppresses prompts that would direct flight outside the boundary and displays a "Safety Interlock" tooltip
Given a conflict exists between a nudge and controller limits (max speed, altitude, or no‑fly constraints) When the conflict is present Then the nudge is not shown and the event is logged locally for audit
Low-Power Mode Accuracy and Efficiency
Given device battery ≤20% or aircraft battery ≤30% or the user enables Low‑Power Mode When Overlap Guardian is running Then processing frequency reduces to ≥1 Hz, screen brightness dims by 20%, and background sync/network calls are disabled
Given Low‑Power Mode is active When comparing overlap metrics to Normal Mode over a 2‑minute segment Then absolute error in forward/side overlap is ≤3% and the nudge decision matches ≥95% of the time
Given Low‑Power Mode is active for 10 minutes When measuring system power draw on the device Then CPU utilization is reduced by ≥25% versus Normal Mode for the same mission replay
Offline State UX and Deferred Sync
Given the app is offline When Overlap Guardian starts Then an "Offline" badge and a "Cache: Ready/Partial/Missing" status are shown within 1 s
Given the session is offline When telemetry and events (nudges, GNSS state changes, interlocks) are generated Then they are stored locally up to 50 MB per job with integrity checks
Given connectivity is restored When the app is in the foreground Then queued telemetry syncs within 5 minutes or presents a retry/error message if sync fails
Graceful Operation Without Cached Basemap/Boundary
Given no basemap tiles are cached and network is unavailable When the mission opens Then overlap metrics and gauges operate normally, the basemap layer is hidden, and a non‑blocking banner states "Basemap unavailable offline"
Given the job boundary is not cached and the flight is offline When computing corridor guidance Then the system uses the last known mission boundary or reconstructs a corridor from waypoints and disables prompts that rely on exact boundaries
Given missing caches are detected When the operator opens the mission preflight screen online Then the app prompts to "Make Offline" with an estimated download size and coverage area

Facet Finder

Computer vision identifies roof facets, dormers, valleys, and penetrations from the live feed, pinning any uncaptured surfaces with AR markers. Prompted obliques ensure all loss‑critical features are documented for accurate measurements and damage mapping.

Requirements

Real-time Facet Segmentation
"As a field adjuster, I want the app to automatically detect and outline roof features in real time so that I can capture complete, accurate measurements without manual tracing."
Description

Implements a low-latency computer vision pipeline that detects and segments roof facets, dormers, valleys, ridges, eaves, and penetrations directly from the live drone feed. Generates vectorized, georeferenced polygons with per-edge classification and maintains temporal consistency across frames for stable overlays. Integrates with RoofLens’ measurement engine to auto-compute areas, pitches, and linear totals, and exposes structured outputs for downstream damage mapping and estimating. Targets sub-200ms end-to-end inference latency on recommended field hardware and supports graceful degradation to frame-skip mode on lower-end devices. The outcome is accurate, real-time feature extraction that reduces manual tracing and accelerates end-to-end estimate creation.

Acceptance Criteria
Low-latency segmentation on recommended field hardware
Given a live 1080p30 drone video stream to a device that meets RoofLens’ recommended field hardware spec When the Real-time Facet Segmentation pipeline runs continuously for 5 minutes Then end-to-end latency (frame ingest to overlay render) p95 is ≤ 200 ms and p99 is ≤ 250 ms And the overlay update rate is ≥ 24 FPS with dropped overlays ≤ 1% of frames And the processing backlog never exceeds 3 frames
Temporal stability of overlays during orbit flight
Given an orbit flight around a residential roof with moderate parallax and lighting variation When facets are segmented and tracked across consecutive frames Then polygon track IDs remain stable with ≥ 95% continuity over any 30-second window And inter-frame IoU between matched polygons has a median ≥ 0.85 and p10 ≥ 0.75 (after motion compensation) And edge classifications do not flicker between types in consecutive frames more than 2% of the time And overlay vertex jitter at ridges is ≤ 3 pixels RMS
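For reference, the inter-frame IoU metric above can be computed as below; a sketch assuming the shapely package is available, with motion compensation out of scope.

from shapely.geometry import Polygon

def iou(a: Polygon, b: Polygon) -> float:
    """Intersection-over-union for two matched facet polygons."""
    union = a.union(b).area
    return a.intersection(b).area / union if union else 0.0

# Two consecutive detections of the same facet, slightly offset:
prev = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
curr = Polygon([(0.3, 0.2), (10.2, 0.2), (10.2, 8.1), (0.3, 8.1)])
assert iou(prev, curr) > 0.85  # median target per the criterion above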
Accurate georeferenced vectorization
Given calibrated camera intrinsics and synchronized pose (GPS/IMU) per device spec When polygons are vectorized and projected to a map coordinate system Then each roof-edge vertex is within ≤ 0.30 m of surveyed ground truth on the test set And outputs include CRS identifier (EPSG code), acquisition timestamp, and pose metadata And polygons are simple, closed, have consistent winding, and contain no self-intersections or overlaps beyond a 5 cm tolerance And per-edge attributes include type and length (meters) with length error ≤ 2% vs ground truth
Per-edge classification completeness and precision
Given diverse roof types (gable, hip, dormers, valleys) in a curated validation set When edges are labeled as eave, ridge, hip, valley, rake and penetrations are detected as points/polygons Then macro-averaged precision ≥ 0.90 and recall ≥ 0.90 for edge classes And dormer and penetration detection F1 ≥ 0.88 And ≥ 99% of total edge length is assigned a class (unclassified edge length ≤ 1%) And class labels remain consistent across frames with temporal flip rate ≤ 2%
Automatic measurement integration with RoofLens engine
Given successful segmentation of a roof with reference measurements When the measurement engine ingests the vectorized polygons and edge classes Then total roof area error is ≤ 3% vs baseline and per-facet area median error ≤ 3% And linear totals (eaves, ridges, hips, valleys) have error ≤ 2% or ≤ 0.30 m, whichever is greater And facet pitch absolute error ≤ 1.5° or ≤ 0.5/12 (rise/run), whichever is more permissive And computed measurements are available in UI and API within ≤ 2 seconds of overlay stabilization And measurement records include source segmentation ID, version, and confidence summary
Graceful degradation to frame-skip mode
Given a lower-end device or adverse conditions that cause sustained latency budget exceedance When p95 end-to-end latency > 200 ms for ≥ 1 second or compute utilization > 90% for ≥ 1 second Then the system switches to frame-skip mode (e.g., processing every Nth frame) to achieve overlay update interval p95 ≤ 500 ms And a non-blocking UI indicator displays current sampling ratio and recovery status And median IoU vs full-rate mode on the same scene remains ≥ 0.80 and area error ≤ 5% And the system automatically returns to full-rate after 10 consecutive seconds within budget And no crashes occur and memory usage drift is ≤ 5% over 10 minutes
Structured outputs API for downstream systems
Given a consumer requests current or completed segmentation results via SDK/API When the session has active or finalized outputs Then a versioned JSON schema is returned containing polygons, edges, classes, confidences, georeference, timestamps, and track IDs And API median response time is ≤ 300 ms for sessions with ≤ 500 facets And responses validate against the published JSON Schema and export losslessly to GeoJSON with attributes preserved And outputs contain no PII and include only necessary session metadata
AR Gap Markers
"As a drone pilot, I want AR markers to appear on areas I haven’t properly captured so that I know exactly where to fly to complete the survey."
Description

Automatically identifies occluded or insufficiently observed roof surfaces during capture and pins persistent AR markers at their estimated 3D locations, guiding the operator to revisit and document them. Uses device pose (IMU/visual-inertial odometry) and drone telemetry for world anchoring, updates marker states as coverage improves, and removes markers once quality thresholds are met. Integrates with session completeness metrics and the capture checklist, feeding coverage heatmaps to ensure all surfaces required for measurement and damage assessment are documented. Expected outcome is higher dataset completeness with fewer return visits and reduced dispute risk.

Acceptance Criteria
Real-time Gap Detection and Marker Placement
Given an active capture session with live video and valid pose/telemetry And a roof surface region whose coverage score < 0.80 or view-angle diversity < threshold When the insufficiency is detected Then an AR gap marker is created at the estimated 3D centroid of the uncovered region within 2 seconds And the marker is visible in AR with label showing surface type and distance And duplicate markers for the same region are merged using a 1.0 m spatial merge radius And the marker is stored with state = "New" and timestamp in the session record
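A minimal sketch of the 1.0 m spatial merge described above, assuming marker positions in a local metric frame (ENU); the averaging strategy is illustrative.

import math

MERGE_RADIUS_M = 1.0

def merge_or_add(markers: list[dict], candidate: dict) -> list[dict]:
    """Merge a new gap marker into an existing one within 1.0 m, else append."""
    cx, cy = candidate["x"], candidate["y"]  # local ENU metres (assumption)
    for m in markers:
        if math.hypot(m["x"] - cx, m["y"] - cy) <= MERGE_RADIUS_M:
            # Keep the earlier marker; refine its centroid estimate.
            m["x"], m["y"] = (m["x"] + cx) / 2.0, (m["y"] + cy) / 2.0
            return markers
    candidate["state"] = "New"
    markers.append(candidate)
    return markers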
World-Anchored Marker Stability and Accuracy
Given tracking is valid (visual-inertial odometry and telemetry healthy) When the operator translates ≥ 15 m around the site over 30 seconds Then each gap marker remains world-stable with on-screen position drift ≤ 0.30 m (95th percentile) And under device rotation/zoom the marker jitter is ≤ 3 px per frame (95th percentile)
Given a temporary pose loss ≤ 2 seconds occurs When tracking recovers Then markers re-anchor within 2 seconds with position error ≤ 0.50 m
Marker Lifecycle: Update, Resolve, and Auto-Removal
Given a marker with state = "New" When new imagery increases local coverage score by ≥ 0.20 Then the marker state updates to "In Progress"
Given coverage score ≥ 0.90 And image quality meets GSD ≤ 1.5 cm/pixel And at least 3 distinct view angles separated by ≥ 20° are captured When these conditions are satisfied Then the marker transitions to "Resolved" and is removed from AR within 1 second And the coverage heatmap and completeness metric update within 2 seconds And the resolution event is appended to the session audit log
Operator Guidance to Revisit Uncaptured Surfaces
Given one or more unresolved gap markers exist When the operator selects "Go to Gap" for a marker Then the UI displays bearing, range, and recommended oblique angle with tolerance ±15° And guidance prompts are updated at ≥ 1 Hz until the operator is within 5 m and ±15° of the target And a capture prompt (visual/haptic/voice) triggers when framing criteria are met Then the region’s coverage score increases by ≥ 0.20 within 10 seconds or a diagnostic is shown indicating unmet conditions
Capture Checklist and Session Gating Integration
Given unresolved gap markers > 0 When the user attempts to finish the capture session Then completion is blocked and a list of unresolved markers is displayed And an "Override and Submit" option is available And selecting override requires a reason code and free-text note and records user ID and timestamp
When all markers are resolved (count = 0) or an override is recorded Then the checklist item auto-completes and session completeness score updates to reflect ≥ 95% coverage
Persistence and Recovery Across Disconnections
Given gap markers exist during capture When the app is backgrounded for ≤ 30 minutes or the drone disconnects and reconnects within 60 seconds Then all markers and their states persist and reload within 3 seconds without duplication And marker IDs remain stable across reconnects And if network connectivity is offline, marker operations function locally and sync within 10 seconds after connectivity is restored
Oblique Capture Guidance
"As a roofing estimator, I want guided oblique prompts that tell me where and how to capture specific features so that my photos meet insurer standards and support accurate estimates."
Description

Provides dynamic prompts for required oblique angles and vantage points to document loss-critical features such as step flashing, valleys, and penetrations. Computes recommended headings, gimbal tilt, altitude, and standoff distance based on detected geometry and coverage gaps, and delivers clear on-screen cues with optional audio/haptic feedback. Enforces minimum imaging standards (GSD, overlap, angle of incidence) and adapts to airspace and geofence constraints. Integrates with the flight workflow, session scoring, and quality gates before measurement export. Outcome is consistent, insurer-acceptable imagery that supports accurate measurements and defensible damage mapping.

Acceptance Criteria
Loss-Critical Feature Oblique Prompt (Valleys, Step Flashing, Penetrations)
Given an active flight and a detected loss-critical feature with a coverage gap > 0 When the pilot is within 25 m horizontal standoff of the feature Then the system displays a prompt within 1.0 s with recommended heading (deg), gimbal tilt (deg), altitude (m AGL), and standoff distance (m) And the prompt updates at >=5 Hz as the aircraft moves And visual alignment indicators target an angle of incidence between 30 and 60 degrees And upon capturing two compliant obliques from azimuths separated by >=60 degrees, the feature’s coverage gap = 0 within 2.0 s
Imaging Standards Enforcement (GSD/Overlap/Incidence)
Given camera intrinsics and current altitude are known When the predicted capture at the target feature would yield GSD > 1.5 cm/px OR angle of incidence outside 30-60 degrees OR inter-oblique overlap < 60% on the feature polygon Then the capture control presents corrective guidance with required adjustments And any image taken below standard is auto-tagged Substandard and excluded from export And the feature remains flagged Needs Oblique until it has >=2 compliant obliques from distinct azimuths separated by >=60 degrees
Geofence and Airspace-Constrained Replanning
Given an active geofence and/or airspace altitude ceiling is loaded When the computed recommended altitude or standoff would violate a constraint Then the system recomputes compliant parameters within 1.5 s and updates the prompt And if compliance cannot be achieved while meeting imaging standards, the feature is marked Not Attainable with the blocking constraint and suggested mitigation And no guidance is shown that directs the pilot outside the geofence or above the ceiling
Multimodal Guidance Cues (Visual/Audio/Haptic)
Given guidance prompts are enabled When alignment error decreases below 10 degrees heading and 5 degrees tilt from the recommendation Then visual cues switch to green and a capture-ready indicator appears And optional audio beeps and haptic pulses are emitted within 300 ms of state change And audio/haptic toggles are user-configurable, persist across sessions, and default to Off
Workflow Integration, Session Score, and Quality Gate
Given a flight session with oblique guidance enabled When compliant obliques are captured Then the session score increases by +5 per compliant oblique and -3 per Substandard oblique, clamped to 0-100 And before measurement export, a Quality Gate blocks export if any loss-critical feature remains Needs Oblique or Not Attainable And the Quality Gate lists blocking items with per-feature status and one-tap recapture prompts
Export-Ready Imagery and Metadata for Insurer Acceptance
Given the pilot initiates export When all features have >=2 compliant obliques from distinct azimuths separated by >=60 degrees Then the export proceeds only if each selected image includes EXIF GPS (CEP <= 1.5 m), timestamp (UTC), yaw/pitch/roll, altitude AGL, and camera model And the export is labeled Export Ready and measurement generation is unlocked And if any check fails, export is blocked with a list of missing items and links to recapture
Confidence Scoring and Manual Overrides
"As an adjuster, I want to review low-confidence detections and quickly correct them so that the final measurements are accurate and defensible."
Description

Attaches per-feature confidence scores to each detected polygon and edge class, highlights low-confidence areas in the UI, and provides fast editing tools to adjust boundaries, add missing features, or reclassify edges. Tracks edits with an audit trail and propagates changes to measurements, damage maps, and line-item estimates in real time. Offers tunable thresholds to require operator confirmation before export when confidence is below policy limits. Outcome is transparent, controllable automation that preserves accuracy and trust while minimizing rework.

Acceptance Criteria
Per-Feature Confidence Score Attachment
Given a completed detection run, When the system saves roof features, Then each polygon (facet, dormer, valley, ridge, hip, eave, penetration) and each edge segment has a confidence score between 0.000 and 1.000 with three-decimal precision. Given a loaded project, When accessing a feature’s details, Then the confidence score, detector model version, and detection timestamp are displayed and retrievable via API. Given a detection result, When any feature lacks a confidence score, Then the project is marked invalid and an error is logged. Given the same imagery, model version, and parameters, When detection is re-run, Then confidence scores are identical.
Low-Confidence Visual Highlighting
Given default policy thresholds (Low < 0.75, Medium 0.75–0.90, High ≥ 0.90), When a project is opened, Then features below the Low threshold render with a red overlay and are listed in a "Review: Low Confidence" panel. Given a user adjusts the confidence threshold slider, When the value changes, Then the set of highlighted features updates within 200 ms without altering measurements. Given two features, one above and one below the threshold, When toggling "Highlight Low Confidence", Then only the below-threshold feature is highlighted. Given a highlighted feature, When hovering, Then a tooltip shows its confidence score and classification.
Manual Boundary Editing Tools
Given a selected polygon, When the user drags a vertex, Then the polygon updates with snapping tolerance ≤ 10 cm at ground scale and the edit completes within 150 ms. Given two adjacent polygons, When an edge is moved, Then topology remains watertight with no overlaps or gaps > 5 cm. Given a missing feature, When the user traces a new polygon and classifies it, Then the feature is created and participates in measurements. Given an edge segment, When the user reclassifies it (e.g., ridge to hip), Then the new class is saved and used in downstream calculations. Given a sequence of edits, When the user performs undo/redo, Then up to the last 50 actions are reversible and re-applied correctly.
Audit Trail for Manual Overrides
Given any manual edit, When it is saved, Then an immutable audit record is appended with user ID, timestamp (UTC), action type, feature ID(s), prior geometry hash, new geometry hash, prior class, and new class. Given the audit log, When viewed in the UI or exported as CSV/JSON, Then records appear in chronological order and cannot be edited or deleted by non-admin users. Given a project with N edits, When requesting the audit log via API, Then N records are returned and the latest matches the most recent edit. Given an audit record, When expanded, Then it links to a viewport state that replays the pre- and post-edit geometries.
Real-Time Propagation to Measurements and Estimates
Given a manual boundary change, When the edit is committed, Then all affected measurements recompute and update UI values within 2 seconds for projects with ≤ 200 features. Given a damage map linked to edited features, When recomputation occurs, Then polygon masks and counts reflect the new geometry. Given an open estimate, When measurements change, Then line-item quantities recalc and any dependent totals update without manual refresh. Given a recently exported PDF, When measurements change, Then the system flags the export as out-of-date until re-exported.
Export Gating by Confidence Policy
Given a project policy with export threshold T, When any feature has confidence < T, Then export actions are blocked by a modal listing offending features and requiring user confirmation to proceed. Given the modal, When the user confirms and optionally enters a note, Then export proceeds and the confirmation is logged to the audit trail. Given T = 0, When exporting, Then no gating occurs; Given T = 1.0, Then export requires all features at 1.000 confidence or explicit confirmation. Given a project admin, When they update T, Then the new threshold is persisted, audit-logged, and applied on next export attempt.
Model Versioning and Determinism
Given a detection model version M, When run twice on the same inputs and parameters, Then feature sets and confidence scores are identical. Given a model upgrade to version M+1, When opening an existing project, Then the previous scores remain unchanged until a re-run is explicitly triggered. Given a project, When exporting, Then the model version used for current scores is included in export metadata and API responses. Given a re-run, When it completes, Then the audit trail records old vs new model versions and the delta in confidence for affected features.
Camera Calibration and Scale Assurance
"As a quality manager, I want the system to self-calibrate and verify scale so that dimensional outputs are consistently accurate across jobs and devices."
Description

Ensures accurate scaling by auto-deriving intrinsic camera parameters from EXIF and self-calibration routines, correcting lens distortion, and fusing drone altitude/RTK GNSS and pose data to anchor measurements. Performs drift detection across the session, prompts for corrective passes if error exceeds thresholds, and validates outputs against known references (e.g., measured ridge length) when available. Integrates with the measurement engine and exports calibration metadata with the report for auditability. Outcome is consistent, verifiable dimensional accuracy across devices and flight conditions.

Acceptance Criteria
Auto-derive Intrinsics and Lens Distortion Correction
- Given a live drone camera feed with EXIF data available, when calibration starts, then intrinsic parameters (fx, fy, cx, cy) and distortion coefficients (k1–k3, p1–p2) are computed and stored within 3 seconds.
- Then radial reprojection RMS <= 0.8 px and principal point is within 3% of the image center.
- Given EXIF is missing or incomplete, when calibration starts, then self-calibration runs using feature tracks and returns parameters with reprojection RMS <= 1.2 px.
- Then calibration status is set to "Calibrated" when thresholds are met; otherwise a blocking error is shown.
Scale Anchoring with RTK GNSS/Altitude and Pose Fusion
- Given GNSS solution type is RTK Fixed, when scale anchoring completes, then mean absolute scale error for distances 5–30 m is <= 1.0% against ground truth targets.
- Given GNSS solution type is Float or Standard, when anchoring completes, then mean absolute scale error for distances 5–30 m is <= 3.0%.
- When altitude/pose input drops out, then the system falls back to last good fused state, flags scale confidence as "Degraded" within 1 second, and continues capture.
- Then the current scale confidence (Fixed/Degraded) is visible in UI and recorded in the session log.
Session Drift Detection and Corrective Pass Prompting
- Given an active capture session, when cumulative scale drift > 1.5% over the last 60 seconds or reprojection RMS > 1.0 px for 100 consecutive frames, then the operator is prompted within 5 seconds to perform a corrective pass with path guidance.
- When a corrective pass is completed, then drift <= 0.8% and RMS <= 0.8 px within 30 seconds; otherwise the prompt persists and the session is flagged "Needs Attention".
- Then all drift threshold crossings, prompts, user actions, and metric values are timestamped and stored in the audit log.
Reference-Based Validation and Optional Rescale
- Given a user enters a known reference length or selects two points with known separation, when validation runs, then the measured length and percent error are displayed.
- If percent error <= 1.5% (RTK Fixed) or <= 3.0% (other), then status "Validated" is shown; otherwise a warning appears with a "Recalibrate/Rescale" action.
- When the user accepts rescale, then global scale is updated to match the reference and all dependent measurements refresh within 5 seconds, and the change is recorded in the audit log with previous and new factors.
Measurement Engine Integration and Calibration Metadata Export
- Given a calibrated session, when the measurement engine computes facet areas and linear features, then it must use current intrinsics, distortion, and scale parameters.
- Then the generated PDF and JSON report include calibration metadata: camera model, EXIF focal length, computed fx/fy/cx/cy, k1–k3, p1–p2, calibration method (EXIF/self-cal), GNSS solution type, scale confidence, reprojection RMS, drift metrics, reference validation result, and timestamps.
- Then metadata in the report exactly matches session log values; if calibration fails thresholds, the report is marked "Unverified accuracy" and auto-approvals are disabled.
Cross-Device and Condition Consistency
- Given two supported camera models and two flights under differing conditions (lighting, wind), when the same roof is captured per guidance, then 95% of linear dimensions (n >= 20) are within ±2.0% of ground truth and areas within ±3.0%.
- Then inter-device variance (std dev) for linear measurements after calibration is <= 1.0%.
- Any outliers > 3.0% are flagged in the UI and included in the audit report.
Offline Edge Processing Fallback
"As a contractor working in remote areas, I want the app to function fully without internet so that I can complete captures and generate measurements on site."
Description

Provides an on-device inference mode for low-connectivity sites using optimized models (quantization/pruning) and a lightweight tracking pipeline, with deferred synchronization to the cloud once a connection is available. Maintains feature parity for essential functions: detection overlays, AR markers, guidance prompts, and local caching of edits and metadata. Implements conflict resolution and checksum-based verification during sync to preserve data integrity. Outcome is reliable field operation regardless of network conditions, preventing capture interruptions and data loss.

Acceptance Criteria
Auto Fallback to Offline Edge Processing
Given the device is actively capturing and network throughput drops below 256 kbps or pings fail for 5 consecutive seconds, When the connectivity check triggers, Then the app switches to on-device inference within 3 seconds without terminating the capture session. Given the switch to offline mode occurs, When inference resumes, Then detection overlays remain visible with no more than 5 consecutive dropped frames and per-frame processing latency increases by no more than 150 ms compared to the prior minute average. Given the mode switch, When it occurs, Then a one-time offline banner appears within 1 second and an analytics event "offline_mode_entered" is logged with timestamp and project ID.
Essential Feature Parity in Offline Mode
Given offline mode is active, When the Facet Finder runs on the reference device class, Then the detection pipeline maintains a minimum 12 FPS at 1080p and end-to-end latency <= 250 ms p50 and <= 400 ms p95. Given offline mode is active, When running the standard validation image set, Then roof facet/dormer/valley/penetration detection achieves F1 within 5% absolute of the current cloud model and no class F1 below 0.80. Given offline mode is active, When placing AR markers on identified uncaptured surfaces, Then average positional error at 5 m distance is <= 10 cm and drift over 30 seconds is <= 15 cm. Given offline mode is active, When loss-critical features are not yet documented, Then guidance prompts trigger with recall >= 95% and false prompt rate <= 5% on the validation route set. Given offline mode is active, When edits are made (add/remove markers, facet labels), Then the UI updates within 150 ms and the change is persisted locally within 200 ms.
Local Caching and Crash-Safe Persistence
Given offline mode is active, When a user edits facets, markers, or metadata, Then the change is written to durable local storage within 200 ms and is encrypted at rest with AES-256-GCM. Given the app force-closes or the device reboots during capture, When the app is reopened, Then the last session is automatically restored to within one action of its final state, with no more than one edit lost, and media files remain intact. Given storage I/O errors occur, When a write fails, Then the system retries up to 3 times with exponential backoff and logs an error event; if all retries fail, the UI displays a non-blocking warning within 2 seconds. Given multiple projects are cached, When listing projects offline, Then projects load within 2 seconds for up to 100 cached projects and search/filter operates locally.
Deferred Sync with Checksum Verification
Given offline-captured data exists and connectivity is restored with stable throughput >= 1 Mbps for 30 consecutive seconds, When auto-sync conditions are met, Then sync begins within 10 seconds and shows a progress indicator. Given assets are queued for upload, When each asset uploads, Then a SHA-256 checksum is computed locally and verified by the server; on mismatch, the asset is retransmitted up to 3 times before marking as failed. Given sync is interrupted, When connectivity drops again, Then sync resumes from the last confirmed checksum checkpoint without re-uploading verified bytes. Given all queued items are uploaded, When server acknowledgments are received, Then local items are marked as "synced" within 2 seconds and the offline banner is cleared.
Conflict Resolution During Sync
Given the same project was edited locally offline and in the cloud by another user, When sync runs, Then a deterministic last-writer-wins per field using vector-clock timestamps resolves conflicts and no duplicate markers or facets are created. Given a conflict was auto-resolved, When sync completes, Then an audit log entry is created with fields changed, old/new values, resolver, and timestamps, and the user is notified with a non-blocking toast and a "Review changes" link. Given a structural conflict (e.g., facet deleted remotely but edited locally) occurs, When sync runs, Then the system preserves both states by restoring the deleted facet as "Recovered (local)" and flags the project for review. Given user initiates manual review, When opening the conflict UI, Then differences are displayed within 2 seconds and user choices are applied within 1 second and synced on save.
Offline Mode UI Indicators and Guidance Prompts
Given the app switches to offline mode, When the switch completes, Then a persistent offline indicator appears within 1 second and is accessible (WCAG AA contrast) and a dismissible tooltip explains limitations. Given offline is active, When guidance prompts are triggered, Then audio/haptic prompts play within 200 ms of trigger and the prompt queue persists across mode transitions without duplication. Given a feature is cloud-only, When the user attempts to access it offline, Then the UI disables the action and displays an inline message with a "Queue for later" option that succeeds upon reconnection. Given return to online is detected, When stable for 30 seconds, Then the indicator changes to "Syncing…" with progress and reverts to normal state within 2 seconds after completion.
Storage Management and Eviction Policy
Given local cache is enabled, When cache size exceeds 80% of the configured 5 GB limit, Then the app warns the user and offers to free space without interrupting capture. Given cache reaches its limit, When eviction runs, Then only items already synced to cloud are evicted using LRU order, unsynced items are never evicted, and capture continues unless free space < 200 MB. Given device free space drops below 200 MB during capture, When threshold is hit, Then the app pauses new media capture and displays a blocking dialog with options to free space or reduce capture quality; no data already captured is lost. Given user changes cache size in settings, When applied, Then the new limit is enforced immediately and reflected in storage diagnostics within 2 seconds.

Altitude Gate

AR altitude bands lock in the ideal height for target GSD and local airspace rules. Audible cues alert if you stray high or low, keeping scale accuracy tight and preventing reshoots due to inconsistent image resolution.

Requirements

GSD-to-Altitude Band Calculator
"As a field pilot, I want the app to auto-generate an altitude band from my target GSD and camera so that my images have consistent scale without manual calculations."
Description

Automatically computes an optimal altitude band (min/max AGL) from a target ground sample distance (GSD), camera intrinsics (sensor size, focal length, resolution), and lens distortion profile. Applies safety margins for sensor noise and environmental drift to keep scale accuracy within tolerances required by RoofLens measurement algorithms. Exposes an API and UI control to set target GSD from job templates and mission presets, supports metric/imperial units, and writes the selected band into the flight plan so downstream capture, processing, and PDF estimate generation use consistent resolution assumptions.

Acceptance Criteria
Compute Altitude Band from Target GSD and Camera Intrinsics
Given a target GSD, camera sensor width/height (mm), focal length (mm), image resolution (px), and a valid lens distortion profile When the calculator is executed Then it returns min_agl_m and max_agl_m such that the effective GSD at both band edges equals the target GSD within ±2.0% at image center And min_agl_m < max_agl_m and (max_agl_m - min_agl_m) > 0.5 m And results are deterministic for identical inputs (bit-for-bit equality)
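The band computation above follows from the pinhole relation GSD = (sensor width / image width) × altitude / focal length; since GSD scales linearly with altitude, a ±2% GSD tolerance maps to a ±2% altitude band. A worked sketch; the camera values are an arbitrary example, not a supported-device spec.

def altitude_for_gsd(gsd_cm_px: float, sensor_width_mm: float,
                     focal_length_mm: float, image_width_px: float) -> float:
    """AGL (m) yielding the target GSD at image center (pinhole model)."""
    gsd_m = gsd_cm_px / 100.0
    return gsd_m * focal_length_mm * image_width_px / sensor_width_mm

def band(gsd_cm_px: float, tol_pct: float = 2.0, **cam) -> tuple[float, float]:
    """(min_agl_m, max_agl_m) keeping effective GSD within ±tol_pct of target."""
    nominal = altitude_for_gsd(gsd_cm_px, **cam)
    return nominal * (1 - tol_pct / 100.0), nominal * (1 + tol_pct / 100.0)

# Example: 1.0 cm/px, 13.2 mm sensor, 8.8 mm lens, 5472 px wide -> ~36.5 m AGL
lo, hi = band(1.0, sensor_width_mm=13.2, focal_length_mm=8.8, image_width_px=5472)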
Apply Safety Margins for Sensor Noise and Environmental Drift
Given default safety margins of altitude_drift_m = ±1.0 and focal_variation_pct = ±1.0 When the altitude band is computed Then the worst-case effective GSD error over the band, considering these margins, does not exceed ±2.0% of the target GSD And when custom safety margins are provided, the computed band changes accordingly and still maintains the ±2.0% constraint And if no feasible band exists under the provided margins, the calculator reports no-solution with a descriptive reason
Clamp Altitude Band to Local Airspace Limits
Given local airspace constraints with min_agl_m and max_agl_m When the computed altitude band exceeds these constraints Then the band is clamped to the intersection with [min_agl_m, max_agl_m] And if the intersection is empty, the operation fails with a 422 Unprocessable Entity and message code AIRSPACE_CONFLICT And the response includes the unclamped band for operator review
Metric and Imperial Unit Support
Given target GSD input in cm/px or in/px and a selected unit system (metric or imperial) When the altitude band is computed Then outputs include min_agl and max_agl in both meters and feet with correct conversion (1 m = 3.28084 ft) and consistent rounding (display: 0.1 m, 0.5 ft; internal: double precision) And converting inputs from metric to imperial (and vice versa) yields numerically equivalent bands within 0.1%
REST API for GSD-to-Altitude Band Calculation
Given POST /v1/altitude-band with JSON {target_gsd_value, target_gsd_unit, sensor_width_mm, sensor_height_mm, focal_length_mm, resolution_width_px, resolution_height_px, lens_profile_id, tolerance_pct?, safety_margin?, airspace_limits?} When called with valid parameters Then respond 200 with {min_agl_m, max_agl_m, min_agl_ft, max_agl_ft, target_gsd_value, target_gsd_unit, tolerance_pct, safety_margin_applied, airspace_clamped, calculations_version, request_id} And p95 latency ≤ 100 ms for payloads ≤ 2 KB And invalid/missing fields return 400 with field-level errors; no feasible band returns 422 with reason codes {AIRSPACE_CONFLICT|GSD_UNACHIEVABLE|MISSING_LENS_PROFILE} And all responses are schema-valid and cache-control set to no-store
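An illustrative request against the endpoint specified above; the host, lens_profile_id, and the exact shape of the 422 error body are assumptions.

import requests

payload = {
    "target_gsd_value": 1.0, "target_gsd_unit": "cm/px",
    "sensor_width_mm": 13.2, "sensor_height_mm": 8.8,
    "focal_length_mm": 8.8,
    "resolution_width_px": 5472, "resolution_height_px": 3648,
    "lens_profile_id": "lens_abc123",  # hypothetical profile ID
    "tolerance_pct": 2.0,
    "airspace_limits": {"min_agl_m": 0.0, "max_agl_m": 120.0},
}
resp = requests.post("https://api.rooflens.example/v1/altitude-band", json=payload)
if resp.status_code == 200:
    band = resp.json()  # min_agl_m, max_agl_m, min_agl_ft, max_agl_ft, ...
elif resp.status_code == 422:
    # Reason codes per the criterion: AIRSPACE_CONFLICT, GSD_UNACHIEVABLE,
    # MISSING_LENS_PROFILE. The error-body layout shown is an assumption.
    print(resp.json())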
UI Control to Set Target GSD from Job Templates and Mission Presets
Given a job template with a target GSD and unit system When a new mission is created from the template Then the Target GSD control is pre-populated and the computed altitude band is displayed instantly And editing the Target GSD or unit system recomputes and updates the band within 200 ms And invalid inputs (e.g., non-numeric, out-of-camera-capability range) are rejected with inline errors and no band update And saving the mission persists the target GSD, unit, and computed band to the mission preset
Persist Selected Altitude Band into Flight Plan and Downstream Use
Given a mission with a computed altitude band When the mission is saved/exported Then the flight plan includes {altitude_band: {min_agl_m, max_agl_m}, target_gsd: {value, unit}, camera_profile_id, lens_profile_id, tolerance_pct, safety_margin} And when the capture app loads the plan, it reads and displays the same band without recomputation drift (values equal within display rounding) And when processing runs, the job config echoes the same target GSD and band; the generated PDF includes the assumed GSD matching the plan And if any stage detects a mismatch > 0.1% between stored and consumed values, a consistency warning is logged and surfaced to the user
Local Airspace Compliance & Terrain Awareness
"As a pilot, I want the altitude band to automatically respect local airspace limits and terrain so that I stay compliant and safe during capture."
Description

Constrains the altitude band to local airspace rules and terrain-adjusted ceilings by combining airspace advisories (e.g., facility maps, geofences) with a local digital elevation model (DEM) to compute true AGL limits along the flight area. Validates requested bands against maximum allowable altitude, minimum safe altitude over obstacles, and mission geofence; provides actionable alternatives if the band is not permitted. Caches advisories and DEM tiles for offline use and records the applied constraints into the job metadata for auditability.

Acceptance Criteria
Terrain-Adjusted AGL Ceiling Computation
Given a planned flight polygon or path and a local DEM covering the entire flight area When the system computes terrain-adjusted AGL ceilings Then it samples the geometry at intervals ≤20 m along paths and a grid ≤50 m inside polygons And for each sample computes the max permitted AGL as the minimum of all applicable AGL ceilings from airspace advisories and mission vertical limits at that location And 100% of samples produce a value; missing DEM cells are filled by nearest-neighbor within 100 m or the sample is marked as blocked And the overall allowed maxAGL for the band is the minimum of all sample max AGLs, rounded down to the nearest 1 m And across a validation dataset with ground-truth, computed AGL values differ from truth by ≤ max(DEM RMSE, 1 m) And the computation completes in ≤2 s for path length ≤3 km or polygon area ≤0.5 km² on a modern mobile device
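A sketch of the "minimum over samples, rounded down to the nearest 1 m" rule above; the input structure (one list of applicable ceilings per sample) is an assumption.

import math

def allowed_band_ceiling(samples: list[list[float]]) -> float | None:
    """Overall allowed maxAGL: min across samples of each sample's most
    restrictive ceiling, rounded down to 1 m. None means a blocked sample."""
    per_sample = [min(ceilings) for ceilings in samples if ceilings]
    if len(per_sample) < len(samples):
        return None  # a sample had no usable ceiling -> treat as blocked
    return float(math.floor(min(per_sample)))

# Three samples, each with advisory + mission-limit ceilings (m AGL):
print(allowed_band_ceiling([[120.0, 90.5], [120.0, 88.2], [110.0, 95.0]]))  # 88.0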
Airspace Rules Enforcement on Requested Band
Given a requested altitude band [minAGL, maxAGL] and current airspace advisories covering the flight area When any portion of the area intersects an advisory with a max AGL lower than maxAGL Then the system flags non-compliance before Altitude Gate activation and prevents activation And it displays the most restrictive advisory name, source, and max AGL And it offers to cap the band to [minAGL, advisoryMaxAGL] if minAGL ≤ advisoryMaxAGL; otherwise it declines with the reason "Band floor above legal ceiling" And validation completes in ≤500 ms after the band is set or changed
Mission Geofence-Constrained Band Validation
Given a mission geofence polygon with an optional vertical ceiling When validating the requested altitude band and planned path Then 0% of the planned path length may lie outside the horizontal geofence; violations list offending segments with start/end coordinates And if a vertical geofence ceiling exists, maxAGL ≤ that ceiling at all samples; otherwise a violation is raised And minAGL is normalized to ≥0 m AGL; negative values are rejected And results are updated within ≤300 ms after geofence edits
Offline Advisory and DEM Caching
Given the user downloads a selected flight area for offline use When the device has no network connectivity during validation Then the system uses cached advisories and DEM tiles covering the flight area + 500 m buffer And the cache includes advisory metadata (id, version, effective dates) and DEM tiles sufficient to achieve ≤30 m ground sampling distance And offline validation outputs match online outputs for the same dataset within ±1 m AGL and identical pass/fail decisions And if cached advisories are older than 24 h, a "Stale airspace data" warning is displayed and the most restrictive rules are applied And the total cache footprint for a 1 km² area is ≤100 MB
Actionable Alternatives for Non-Permitted Band
Given a requested altitude band that is not permitted When presenting alternatives Then the system suggests at least one permitted band that preserves target GSD within ±10% by reducing maxAGL to the computed allowed ceiling And provides up to two additional options: split the mission to avoid the restricted area, or shift the geofence by ≥50 m away from the restriction boundary And alternatives are computed and displayed within ≤1 s And selecting an alternative updates the band and triggers re-validation, showing "Compliant" if all constraints are satisfied
Audit Metadata of Applied Constraints
Given a mission is saved or started When altitude constraints are applied Then the system writes to job metadata: timestamp, coordinate reference, DEM source and tile bounds, DEM resolution and vertical datum, advisory ids and versions, geofence id, computed allowed band [minAGL, maxAGL], any non-compliance reasons, offline/online flag, and sampling parameters And re-evaluation from the saved metadata reproduces the allowed band within ±1 m and the same set of advisory ids And the exported PDF includes a "Flight Constraints" summary (allowed band, advisory summary, DEM source) And metadata is retained with the job for ≥1 year
Obstacle Minimum Safe Altitude Enforcement
Given a known obstacles dataset containing obstacle locations and heights AGL When computing the minimum safe altitude floor for the band Then for any sample within 50 m of an obstacle, the band floor is raised so that minAGL ≥ obstacle height + 10 m buffer And if the requested minAGL is below this floor, the system raises the floor or flags non-compliance before activation And obstacle clearance checks complete within ≤500 ms for up to 500 obstacles in the area And in validation tests with seeded obstacles, no permitted band results in <10 m clearance over any obstacle
AR Altitude Band Overlay
"As a pilot, I want a clear visual overlay of the allowed altitude band so that I can keep the drone within the correct height at a glance."
Description

Renders a low-latency augmented reality overlay in the flight view that visually depicts the target altitude band as color-coded horizontal guides with real-time numeric readouts. Shows clear in-band/out-of-band states, supports high-contrast and colorblind-safe palettes, adapts for portrait/landscape orientations, and remains legible under glare. Integrates with mission HUD elements without obscuring critical telemetry and scales appropriately across supported mobile devices and drone SDK live feeds.

Acceptance Criteria
Low-Latency AR Overlay Update
Given the live drone video feed is active at 30–60 FPS and altitude telemetry is available at 5 Hz or higher When the aircraft altitude changes by at least 0.5 m (or 2 ft) Then the altitude band overlay and numeric readout update within 100 ms at p95 and within 50 ms median, measured from telemetry timestamp to on-screen paint And the overlay maintains at least 30 FPS at p95 with no more than 1% dropped frames over a continuous 5-minute flight
In-Band and Out-of-Band Visual States and Numeric Readouts
Given the target altitude band [Amin, Amax] is configured in the mission When current altitude Acur is within [Amin, Amax] Then the overlay presents the In Band state using the selected palette and a non-color cue (checkmark icon), and the numeric readout displays Acur with units matching app settings (m or ft) And the numeric readout accuracy is within ±0.5 m (±2 ft) of SDK altitude after smoothing and rounds to 0.1 m (0.5 ft)
When Acur < Amin Then the overlay presents the Below Band state with a downward arrow cue and shows the delta Amin − Acur
When Acur > Amax Then the overlay presents the Above Band state with an upward arrow cue and shows the delta Acur − Amax
Orientation Adaptation and HUD Non-Obstruction
Given supported devices are used in portrait or landscape and critical HUD zones are defined (battery, GPS, link, RTH, flight timer) When the device rotates or safe-area insets change Then the altitude band overlay reflows within 300 ms to maintain a minimum 8 dp padding from all critical HUD zones and system safe areas And no overlay element overlaps or occludes critical HUD text or icons at any time
High-Contrast and Colorblind-Safe Palettes
Given Display > Overlay Palette offers Default, High Contrast, and Colorblind-Safe options When a user selects a palette Then the selection applies immediately to the altitude band overlay and persists across app restarts and flights And all overlay states (In Band, Below, Above) are distinguishable under simulated Deuteranopia, Protanopia, and Tritanopia via automated checks And text elements meet WCAG AA contrast ratio of at least 4.5:1 against the live feed background and non-text indicators meet at least 3:1 And every state includes a non-color cue (icon or pattern) in addition to color
Legibility Under Glare and Sunlight
Given the device ambient light sensor reports at least 30,000 lux for 5 seconds When High-Glare mode auto-activates Then numeric readouts render at least 14 sp with outline or backdrop to maintain contrast of at least 4.5:1 against the live feed And guide lines render at least 3 dp thick with a contrast of at least 3:1 And the overlay remains readable at maximum device brightness on iPhone 13, iPhone SE (2nd gen), Pixel 7, and iPad Pro 12.9 under lab conditions of 50,000–100,000 lux
Cross-Device and Live Feed Scaling
Given screen sizes from 4.7 inches to 12.9 inches and input video resolutions 720p, 1080p, and 4K in the test matrix When the flight view is displayed Then altitude band guide spacing, line thickness (2–6 dp), and numeric text size (14–22 sp) scale proportionally with screen density and resolution And no overlay text or lines clip, truncate, or fall outside the visible safe area on any device And overlay render time per frame is under 5 ms median and under 10 ms at p95 on each device, with additional CPU and GPU load attributable to the overlay averaging 10% or less
Mission Parameter Synchronization and Persistence
Given the mission target altitude band is modified by the user mid-flight or via GSD target adjustment When the change is saved Then the overlay updates to the new [Amin, Amax] within 1 second and recalculates the state accordingly And unit changes between meters and feet apply to numeric readouts within 500 ms without app restart And on app relaunch during an active mission, the overlay restores the last known target band from persistent storage within 2 seconds
Audible Deviation Cues
"As a pilot, I want spoken or tonal cues when I drift out of the altitude band so that I can correct without taking my eyes off the drone."
Description

Provides configurable audible alerts when altitude drifts near or outside the band, with hysteresis to prevent alert spam. Includes progressive tones and optional voice prompts (e.g., "ascend 3 meters") that operate offline and respect device audio settings. Allows users to set sensitivity thresholds per mission template, temporarily mute cues, and auto-escalate alerts if deviation persists. Logs all alerts with timestamps for later review in the RoofLens job record.

Acceptance Criteria
Near-Band Deviation Alert Tone
Given an active mission with altitude band enabled and a configured near-band threshold T_near When aircraft altitude is within the band and crosses T_near toward the edge and remains beyond T_near for ≥ 1.0 s Then play a single short warning tone within 500 ms of the 1.0 s hold And do not speak a voice prompt at this level And do not repeat this tone again until the alert state is cleared per hysteresis rules
Out-of-Band Deviation Voice Prompt and Escalation
Given an active mission with altitude band enabled When aircraft altitude exits the band by Δ meters Then play an immediate alert tone within 500 ms of exit And speak a voice prompt within 1.5 s saying "ascend Δ meters" or "descend Δ meters" (Δ rounded to nearest whole meter) And if the aircraft remains out-of-band for ≥ 3 s Then repeat the alert tone every 2 s And if the aircraft remains out-of-band for ≥ 7 s Then repeat the voice prompt every 5 s until re-entry into band or cues are muted
Hysteresis and Anti-Spam Behavior
Given a near-band alert has fired When altitude returns toward band center Then suppress re-alerts until altitude has been back inside the non-alert zone by ≥ 0.5 m for ≥ 2 s And given an out-of-band alert has fired When altitude re-enters the band Then suppress re-alerts until altitude has remained within the band by ≥ 1.0 m margin for ≥ 2 s And limit alerts of the same level (near-band or out-of-band) to at most one new alert every 5 s unless escalation level changes
Per-Template Sensitivity Configuration
Given a mission template in preflight settings When the user sets a near-band threshold T_near in the range [0.5 m, 10.0 m] and saves Then the threshold is persisted with the template And when a mission is started from that template Then T_near is applied to alert logic without additional configuration And when the user switches to a different template before takeoff Then the active T_near updates immediately for that mission And if the user enters an invalid value (outside range or non-numeric) Then the value is rejected with a validation message and the previous valid value is retained
Temporary Mute and Auto-Resume
Given an in-flight session with audible cues enabled When the user activates Mute for a duration D in {30 s, 60 s, 120 s} or until manually unmuted Then suppress all tones and voice prompts immediately And continue evaluating alerts and logging them with muted=true and audible_played=false And show an active mute indicator with remaining time when D is finite And when D elapses or the user manually unmutes Then resume audible cues within 1 s And if an alert-eligible deviation is active on resume Then begin at the appropriate escalation level within 2 s
Offline Operation and Device Audio Respect
Given the device has no network connectivity When any alert is triggered Then tones and voice prompts are produced using on-device resources and no network calls are attempted And given device system audio is muted (volume=0) or in Silent/Do Not Disturb When alerts are triggered Then no sound is played and each event is recorded with audible_played=false And given system audio is non-zero and not in Silent/Do Not Disturb When alerts are triggered Then sounds play at the current system volume and do not override user audio settings
Alert Event Logging to Job Record
Given an alert event occurs (near-band, out-of-band, or escalation) When the event is handled Then append a log entry to the current RoofLens job record containing: timestamp (UTC ISO-8601), alert_level {near-band|out-of-band|escalated}, direction {ascend|descend}, deviation_meters (1 decimal), altitude_meters, band_min, band_max, prompt_type {tone|voice}, audible_played {true|false}, muted {true|false} And when the device is offline Then buffer log entries locally and sync them to the job record within 30 s of connectivity restoring And in the job record review view Then all alert entries appear in chronological order with no duplicates and with their original timestamps
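
For concreteness, a single buffered log entry satisfying the field list above might look like the following Python literal; all values are invented for illustration.

```python
# Hypothetical example only -- field names taken from the criterion above.
alert_log_entry = {
    "timestamp": "2025-06-14T18:22:31Z",   # UTC ISO-8601
    "alert_level": "out-of-band",          # near-band | out-of-band | escalated
    "direction": "descend",
    "deviation_meters": 3.4,               # 1 decimal place
    "altitude_meters": 35.4,
    "band_min": 28.0,
    "band_max": 32.0,
    "prompt_type": "voice",                # tone | voice
    "audible_played": True,
    "muted": False,
}
```
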
Sensor-Fused AGL Estimation
"As a pilot, I want accurate, stable altitude readings so that the band visualization and alerts are trustworthy in varying conditions."
Description

Delivers stable and accurate above-ground-level (AGL) readings by fusing barometer, GNSS/RTK, and manufacturer-provided height sensors using a Kalman filter with bias correction. Calibrates at takeoff using known ground elevation from DEM and continuously adjusts for pressure changes and GPS drift. Outputs high-rate (≥10 Hz) altitude to drive AR overlays and audible cues, with health diagnostics (e.g., sensor trust scores) and graceful degradation when one or more inputs are unavailable.
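
A toy version of the fusion core, assuming a two-state Kalman filter (AGL plus barometer bias) with scalar measurements; it omits DEM calibration, trust scoring, and scheduling, and the noise values are placeholders rather than tuned parameters.

```python
import numpy as np

class AglFusion:
    """Two-state Kalman sketch: x = [AGL, baro_bias]. Omits DEM calibration,
    trust scoring, and 10 Hz scheduling; noise values are placeholders."""

    def __init__(self, agl0: float = 0.0):
        self.x = np.array([agl0, 0.0])        # state estimate
        self.P = np.diag([1.0, 0.5])          # state covariance
        self.Q = np.diag([0.05, 1e-4])        # process noise per step

    def predict(self) -> None:
        # constant-altitude motion model: state unchanged, uncertainty grows
        self.P = self.P + self.Q

    def update(self, z: float, sensor: str, r: float) -> None:
        # barometer observes AGL + bias; GNSS and height sensor observe AGL
        H = np.array([1.0, 1.0]) if sensor == "baro" else np.array([1.0, 0.0])
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H + r                # innovation variance (scalar)
        K = (self.P @ H) / S                  # Kalman gain, shape (2,)
        self.x = self.x + K * y
        self.P = self.P - np.outer(K, H @ self.P)

    @property
    def agl(self) -> float:
        return float(self.x[0])

# Per tick: f.predict(); f.update(z_baro, "baro", r=0.5);
# f.update(z_gnss, "gnss", r=0.05 if rtk_fixed else 1.0); publish f.agl at 10 Hz.
```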

Acceptance Criteria
Takeoff Calibration Using DEM
Given DEM coverage is available at the launch coordinates and the vehicle is on a level pad When the system arms and performs auto-calibration at takeoff and the aircraft hovers at 1.5 m AGL for 10 seconds Then the fused AGL reports 1.5 m ± 0.2 m (mean over last 5 seconds) and records calibration timestamp, DEM source, and computed bias And upon landing back on the pad, the fused AGL returns to 0.0 m ± 0.2 m within 3 seconds
10 Hz AGL Output Stream
Given the fusion module is enabled and sensors are providing data When subscribing to the /agl_fused stream for 5 continuous minutes Then the effective sample rate is ≥ 10.0 Hz, with ≤ 1% inter-sample intervals exceeding 120 ms, and no missing sequence numbers And each message includes timestamp, AGL value (meters), and 1-sigma uncertainty (meters)
Accuracy and Stability Under RTK and Non-RTK
Given a ground-truth reference for altitude (e.g., laser range or total station) and flight profiles between 5 m and 60 m AGL When GNSS status is RTK Fixed Then fused AGL error has RMS ≤ 0.25 m and 95th percentile ≤ 0.50 m over the profile, and 1-second jitter (std dev) ≤ 0.10 m When GNSS status is Float/Single (non-RTK) Then fused AGL error has RMS ≤ 1.00 m and 95th percentile ≤ 2.00 m, and 1-second jitter ≤ 0.30 m
Continuous Bias Correction for Pressure and GNSS Drift
Given the aircraft hovers at a constant height for 20 minutes When ambient pressure is perturbed by ±1.0 hPa equivalent and GNSS vertical drift of up to 0.8 m is introduced (simulated or environmental) Then the Kalman filter bias terms converge within 30 seconds after each disturbance and fused AGL drift remains within ±0.30 m of pre-disturbance value And reported 1-sigma uncertainty increases during the disturbance and returns to within 10% of baseline within 60 seconds post-convergence
Health Diagnostics and Trust Scores
Given the system is operating normally When querying the diagnostics API or telemetry Then per-sensor trust scores in [0.0, 1.0] are published at ≥ 1 Hz for barometer, GNSS/RTK, and manufacturer height sensor, along with an overall fusion health {Good, Degraded, Critical} And when any sensor becomes stale (update age > 0.5 s) or noisy (variance > 2× nominal), its trust score drops below 0.40 and overall health updates accordingly, with timestamped log entries
Graceful Degradation on Sensor Loss
Given the aircraft is flying at ~30 m AGL and the fused stream is healthy When one sensor (e.g., barometer) becomes unavailable Then the fused AGL stream continues at ≥ 10 Hz without interruption, overall health changes to Degraded, and 1-sigma uncertainty increases by ≥ 2× baseline When two sensors become unavailable Then the fused AGL stream persists at ≥ 10 Hz using remaining input(s) and model prediction, health changes to Critical, and a user-visible alert is emitted And when sensors recover, trust scores ramp up and uncertainty returns to within 20% of baseline within 60 seconds
AR/Audible Cues Driven by AGL Feed
Given an Altitude Gate band derived from target GSD is configured to [28 m, 32 m] When the fused AGL exits the band (AGL < 28 m or AGL > 32 m) for > 0.5 s Then an audible cue is emitted within 0.5 s of violation and the AR overlay band indicator changes state And when fused AGL re-enters and remains within the band for ≥ 1.0 s, audible cues cease and the overlay returns to in-band state And during a steady 2-minute hover within the band, false cue rate is ≤ 1 per 10 minutes (projected)
Altitude Adherence Logging & Proof
"As an estimator, I want a report of altitude consistency so that I can prove capture quality to clients and avoid reshoots or disputes."
Description

Captures and stores mission-level adherence metrics including time-in-band percentage, min/max altitude, average deviation, and out-of-band events. Embeds these metrics into the RoofLens job record and exposes them in processing dashboards, exports (CSV/JSON), and optional PDF bid appendices to provide evidence of consistent image resolution. Flags missions with poor adherence for reshoot review and feeds quality signals into automated measurement confidence scoring.
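
A minimal sketch of the mission-level roll-up, assuming telemetry arrives as time-sorted (timestamp, AGL) pairs at ≥ 1 Hz; the field names mirror the criteria below, and the 3-second gap rule is applied per sample interval.

```python
def adherence_metrics(samples, band_min, band_max, target, gap_s=3.0):
    """samples: time-sorted (unix_ts, agl_m) pairs; assumes >= 2 samples."""
    in_band_t = out_t = total_t = 0.0
    events, prev_out = 0, False
    for (t0, a0), (t1, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        total_t += dt
        # telemetry gaps > gap_s count toward out-of-band events and duration
        out = dt > gap_s or not (band_min <= a0 <= band_max)
        if out:
            out_t += dt
            events += 0 if prev_out else 1
        else:
            in_band_t += dt
        prev_out = out
    alts = [a for _, a in samples]
    dev = [abs(a - target) for a in alts]
    return {
        "time_in_band_percent": round(100.0 * in_band_t / total_t, 1),
        "min_altitude_m_agl": min(alts),
        "max_altitude_m_agl": max(alts),
        "avg_deviation_from_target_m": round(sum(dev) / len(dev), 2),
        "max_deviation_from_target_m": round(max(dev), 2),
        "out_of_band_event_count": events,
        "cumulative_out_of_band_duration_s": round(out_t, 1),
    }
```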

Acceptance Criteria
Capture and Compute Altitude Adherence Metrics During Mission
Given an Altitude Gate with a configured target altitude and band (min/max, meters AGL) And drone telemetry is recorded at ≥1 Hz with timestamps When the mission completes and processing runs Then the system computes and stores: time_in_band_percent, min_altitude_m_agl, max_altitude_m_agl, avg_deviation_from_target_m, max_deviation_from_target_m, out_of_band_event_count, cumulative_out_of_band_duration_s And computed values match a reference calculation within ±1% for percentages and ±0.5 m for distances And telemetry gaps > 3 seconds are counted toward out_of_band_event_count and cumulative_out_of_band_duration_s And all metrics are labeled with units and derived as AGL using the project terrain baseline
Persist Metrics in RoofLens Job Record with Audit Trail
Given a mission’s adherence metrics are computed When the RoofLens job record is saved Then metrics are persisted as immutable fields on the job record with computed_at (ISO 8601 UTC) and metrics_version And any recomputation creates a new metrics_version and retains prior versions for audit And the job record stores data lineage: mission_id, drone_id, firmware_version, processing_engine_version And deletes/overwrites of metrics are prevented except via versioned recompute by authorized roles, with audit logs of who/when/why
Display Adherence Metrics and Events in Processing Dashboard
Given an authenticated user with role Estimator, Adjuster, or Owner When the user opens the Job Details processing dashboard Then the adherence metrics are visible with clear labels and units (meters AGL, seconds, percent) And a status indicator is shown using thresholds: Green if time_in_band_percent ≥ 90% and max_deviation_from_target_m ≤ 2; Yellow if 75–89% or 2 < max_deviation_from_target_m ≤ 4; Red otherwise And an events timeline lists each out-of-band event with start/end timestamp and duration And the dashboard renders the metrics within 2 seconds at p95 for up to 1,000 recent missions
Export Adherence Metrics to CSV and JSON
Given a user requests an export of jobs including adherence metrics When CSV or JSON export is generated Then each record includes fields: job_id, mission_id, target_altitude_m_agl, band_min_m, band_max_m, time_in_band_percent, min_altitude_m_agl, max_altitude_m_agl, avg_deviation_from_target_m, max_deviation_from_target_m, out_of_band_event_count, cumulative_out_of_band_duration_s, adherence_status, computed_at And JSON uses snake_case keys and ISO 8601 UTC timestamps; numeric values use dot decimal with up to 3 decimals And CSV includes a stable header in the same order; empty values are null in JSON and empty in CSV And exports exclude PII and API credentials And export completes within 5 seconds for up to 10,000 jobs
Include Optional PDF Appendix with Adherence Evidence
Given PDF bid generation is initiated And the setting “Include Adherence Appendix” is enabled When the PDF is generated Then an appendix page is appended containing a summary table of adherence metrics matching the export fields And a chart of altitude vs. time highlights out-of-band intervals And the appendix increases file size by ≤ 500 KB and adds ≤ 2 pages And if metrics are unavailable, the appendix is omitted and a one-line note is added to the PDF summary And the PDF renders without layout errors in Adobe Acrobat and Chrome viewers
Flag Poor Adherence and Trigger Reshoot Review
Given adherence thresholds are defined as: time_in_band_percent < 80% OR max_deviation_from_target_m > 5 OR cumulative_out_of_band_duration_s > 60 When a mission is processed Then the job is marked “Adherence: Needs Review” and displays triggered threshold(s) with actual values And a reshoot review task is created and assigned to the project owner And an in-app notification and email are sent to the project owner within 1 minute And the flag can be cleared only by Owner or Supervisor roles with a required note, and all actions are audit logged
Integrate Adherence Metrics into Measurement Confidence Score
Given the measurement pipeline computes a confidence score from 0 to 100 When adherence metrics are present Then the score is adjusted by a documented formula: up to −20 points for time_in_band shortfall, up to −10 points for max_deviation_from_target_m > 2, and up to −10 points for cumulative_out_of_band_duration_s > 30, not dropping below 0 And the score stores inputs, component adjustments, final value, and formula_version for traceability And unit tests verify deterministic outputs within ±0.1 for fixed inputs And when metrics are missing, a neutral adjustment (0) is applied and flag adherence_metrics_missing is set
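
The criterion above fixes the component caps and triggers but not the ramp inside each cap; a direct transcription with assumed linear ramps might look like the sketch below.

```python
def adjust_confidence(base_score, m):
    """base_score: raw 0-100 value; m: adherence metrics dict, or None."""
    if m is None:
        # neutral adjustment when metrics are missing, per the criterion
        return {"final": base_score, "adjustments": {},
                "adherence_metrics_missing": True}
    adj = {}
    # up to -20 points for time-in-band shortfall (assumed linear ramp)
    shortfall = max(0.0, 100.0 - m["time_in_band_percent"])
    adj["time_in_band"] = -min(20.0, 20.0 * shortfall / 25.0)
    # up to -10 points once max deviation exceeds 2 m (assumed linear ramp)
    over_m = max(0.0, m["max_deviation_from_target_m"] - 2.0)
    adj["max_deviation"] = -min(10.0, 10.0 * over_m / 3.0)
    # up to -10 points once out-of-band time exceeds 30 s (assumed linear ramp)
    over_s = max(0.0, m["cumulative_out_of_band_duration_s"] - 30.0)
    adj["out_of_band_time"] = -min(10.0, 10.0 * over_s / 30.0)
    final = max(0.0, base_score + sum(adj.values()))
    return {"final": round(final, 1), "adjustments": adj,
            "formula_version": "sketch-1", "adherence_metrics_missing": False}
```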

QC Instant Replay

Auto-builds a coverage heatmap and gap list on landing. Tap any gap to launch a micro-mission that fills it in 1–2 passes, then generate a QC pass certificate—fixing issues on-site and stopping costly drive-backs.

Requirements

Real-time Coverage Heatmap
"As a field operator, I want an instant coverage heatmap on landing so that I can see what areas need reshooting before leaving the site."
Description

Automatically generates a georeferenced coverage heatmap upon drone landing by analyzing captured imagery and flight logs to compute area coverage, overlap percentage, ground sampling distance (GSD), and camera angle compliance against job-specific thresholds. Renders an interactive overlay on the job map inside RoofLens, updates the job’s QC state, and persists tiles and metrics for audit. Supports progressive updates during flight where telemetry is available and seamlessly associates images with the active job.
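
Worst-case GSD in the metric set above is conventionally derived from altitude and camera intrinsics; a sketch of the standard nadir formulas, with the camera numbers in the closing comment purely illustrative.

```python
def gsd_cm_per_px(altitude_m: float, sensor_width_mm: float,
                  focal_mm: float, image_width_px: int) -> float:
    """Standard nadir GSD formula: ground width per pixel, in cm/px."""
    return (sensor_width_mm * altitude_m * 100.0) / (focal_mm * image_width_px)

def footprint_m(altitude_m: float, sensor_width_mm: float,
                sensor_height_mm: float, focal_mm: float) -> tuple:
    """Ground footprint (width, height) in meters of one nadir frame."""
    return (altitude_m * sensor_width_mm / focal_mm,
            altitude_m * sensor_height_mm / focal_mm)

# Illustrative 1-inch sensor (13.2 x 8.8 mm), 8.8 mm lens, 5472 px wide, 30 m AGL:
# gsd_cm_per_px(30, 13.2, 8.8, 5472) -> ~0.82 cm/px; footprint ~45 m x 30 m
```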

Acceptance Criteria
Heatmap auto-generation on landing
Given an active job with a defined coverage polygon and metric thresholds And a drone flight linked to the job completes with landing detected within the job geofence When landing is detected and imagery/flight logs are available Then a georeferenced coverage heatmap is rendered on the job map within 30 seconds And job QC state updates to "Coverage Analyzed" And metrics are computed and displayed: covered area (%), min/avg overlap (%), worst-case GSD (cm/px), camera angle compliance (%) And gaps below thresholds are listed with location references
Progressive in-flight coverage updates
Given live telemetry or live image upload is available for the active job When the drone is in-flight within job bounds Then the coverage heatmap updates at least every 5 seconds with new coverage And interim metrics are labeled "In-Flight (Provisional)" And pan/zoom and layer toggles respond within 200 ms during updates And on landing, provisional and final results reconcile within 30 seconds without duplicate cells
Georeferencing accuracy and alignment
Given the job polygon and base map in WGS84 (EPSG:4326) When the heatmap cells render Then positional error between cells and the job polygon boundary is <= min(1.5x worst-case GSD, 0.5 m) And any systematic offset > 0.5 m raises an "Alignment Warning" banner And tile metadata stores spatial reference, bounds, and resolution
Overlap, GSD, and camera angle compliance
Given job thresholds for frontlap, sidelap, GSD, and camera angle ranges When metrics are computed Then frontlap and sidelap are computed per cell and aggregated to min/avg for the job area And worst-case and average GSD (cm/px) are computed from altitude and camera intrinsics And camera angle compliance is computed as % of cells within the allowed gimbal pitch/roll range And each metric is compared to thresholds and marked Pass or Fail in the QC summary And the QC summary displays numeric values with units and computation timestamp
Interactive overlay and drill-down
Given the heatmap layer is enabled on the job map When a user taps/clicks a cell or a gap list item Then a details panel shows cell metrics (overlap, GSD, angle, timestamp) within 200 ms And clicking a gap recenters and zooms to the gap within 500 ms and highlights it And toggling the heatmap layer shows/hides it within 200 ms without page reload
Persistence and audit trail
Given heatmap tiles and metrics are generated When the job is reopened after generation or accessed via audit up to 30 days later Then tiles and metrics load from persistent storage within 2 seconds on a 20 Mbps connection And the audit record includes: flight ID, camera model, firmware, thresholds, algorithm version, computation timestamp, and input hash And if recomputed, a new version is created; prior versions remain read-only and retrievable
Image association with active job
Given new imagery and flight logs arrive while multiple jobs are nearby When associating assets to a job Then assets auto-associate to the active job if capture time overlaps the job session and centroid falls within the job geofence (tolerance 10 m) And ambiguous cases (association score < 0.8) require explicit user confirmation And duplicate assets by content hash are ignored And unassigned assets appear in an inbox for manual assignment
Gap Detection & Risk Ranking
"As a QA lead, I want a prioritized list of coverage gaps with severity so that I can fix the highest-impact issues first."
Description

Identifies uncovered or underqualified regions by evaluating coverage, overlap, GSD, and obliquity thresholds per roof facet. Produces a ranked list with coordinates, approximate size, affected measurements (e.g., ridge length, area), and a severity score tied to expected estimation error. Thresholds and ranking weights are configurable per organization or template. Exposes in-app filters and updates in real time as new imagery is added.
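
One plausible shape for the configurable ranking, assuming each gap carries pre-normalized per-metric deficits in [0, 1] and the organization or template supplies the weights; the shipped severity model additionally maps deficits to expected estimation error, which this sketch does not attempt.

```python
def rank_gaps(gaps, weights):
    """Rank gaps by a weighted severity score in [0.00, 1.00].

    Assumptions: gap["deficits"] holds per-metric deficits normalized to
    [0, 1] (0 = meets threshold), and `weights` sums to 1.0 as configured
    per organization or template."""
    def severity(gap):
        s = sum(weights[m] * gap["deficits"].get(m, 0.0)
                for m in ("coverage", "overlap", "gsd", "obliquity"))
        return round(min(1.0, max(0.0, s)), 2)

    for g in gaps:
        g["severity_score"] = severity(g)
    return sorted(gaps, key=lambda g: g["severity_score"], reverse=True)

# e.g., weights = {"coverage": 0.4, "overlap": 0.3, "gsd": 0.2, "obliquity": 0.1}
```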

Acceptance Criteria
Detect Gaps Per Roof Facet Using Configured Thresholds
Given organization template thresholds are set to coverage ≥ 95%, overlap ≥ 70%, GSD ≤ 1.5 cm/px, and obliquity ≤ 15° And a test project with two facets (Facet A, Facet B) where exactly 3 tiles on Facet A violate coverage and 0 tiles on Facet B violate any metric When the imagery is processed for QC Then exactly 3 gap polygons are created on Facet A and 0 on Facet B And each created gap records the metrics that failed and their measured values And every gap polygon is fully contained within its facet boundary with ≤ 0.5 m tolerance on the boundary And facets with no metric violations produce zero gaps
Produce Ranked Gap List With Coordinates and Severity
Given processed gaps with known reference sizes of 2.0, 1.2, and 0.8 m² and expected severities 0.85, 0.60, and 0.30 respectively When viewing the Gap List Then the list is sorted by severity descending (0.85 first, then 0.60, then 0.30) And each list item includes: id, facetId, centroid coordinates in WGS84 (lat,lon to 6 decimal places), size_m2 (within ±10% of reference), affected_measurements (non-empty array), and severity_score (0.00–1.00 with two decimals) And tapping a list item centers the map on its centroid within ≤ 2 m positional error
Organization and Template Configuration Overrides
Given Organization A default thresholds and weights (T1,W1) and a Template "High Precision" with stricter thresholds and different weights (T2,W2) And a project under Organization A with imagery that produces 6 gaps with (T1,W1) When the template "High Precision" is applied Then the system re-evaluates and returns a different gap set and ordering consistent with (T2,W2) and the gap count becomes exactly 9 And reverting to Organization defaults restores the original 6 gaps and original ordering And the same project cloned into Organization B with its own defaults yields results independent of Organization A (no cross-org bleed-through)
In-App Filtering and Sorting of Gaps
Given a project with 15 total gaps across facets A, B, and C with severities distributed between 0.10 and 0.95 and violations spanning coverage, overlap, and GSD When filter Severity ≥ 0.60, Facet = B, and Violation includes GSD is applied Then the filtered list count equals 4 and only gaps from Facet B with GSD violations and severity ≥ 0.60 are shown And clearing all filters restores 15 total gaps within 300 ms And changing sort to Size (ascending) orders the current result set by size_m2 increasing
Real-Time Update On New Imagery
Given an initial evaluation showing 8 gaps And 6 new images are uploaded that cover 3 of the existing gaps When ingestion and processing of the 6 images complete Then within 10 seconds the UI updates without manual refresh to show exactly 5 remaining gaps And severity scores for all impacted gaps are recalculated and the ranking updates accordingly And resolved gaps are removed from the list and map overlays
Severity Score Reflects Expected Estimation Error
Given a validation project with ground-truth measurements for area and ridge length and a defined severity model mapping metric deviations to expected estimation error (0.00–1.00) When severity scores are computed for all detected gaps Then severity is monotonically increasing with absolute estimation error for affected measurements And the Pearson correlation between severity score and absolute estimation error across gaps is ≥ 0.70 And gaps with severity ≤ 0.10 correspond to < 1% expected error and gaps with severity ≥ 0.90 correspond to ≥ 20% expected error
Tap-to-Launch Micro-Missions
"As a pilot, I want to tap a gap and have the app generate a quick corrective flight so that I can restore coverage with minimal time and effort."
Description

Enables one-tap generation of short corrective flight plans for a selected gap. Plans 1–2 efficient passes with required altitude, path width, speed, and gimbal angle to meet QC thresholds, taking into account known obstacles, site boundary, no-fly zones, remaining battery, and local regulations. Provides a preflight checklist and on-screen preview, supports send-to-drone via supported SDKs, and streams mission telemetry back to update the heatmap and audit logs.
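
A back-of-the-envelope sketch of the pass-count decision, assuming a nadir camera and a flat facet; swath geometry follows the same intrinsics-based footprint math as the coverage metrics, and the 2-pass cap with blocking behavior comes from the criteria below.

```python
import math

def plan_passes(gap_width_m, altitude_m, sensor_width_mm, focal_mm,
                sidelap=0.7, buffer_m=2.0, max_passes=2):
    """How many corrective passes a gap needs. Sketch under assumptions:
    nadir camera, flat facet, lateral buffer and sidelap from project config."""
    swath = altitude_m * sensor_width_mm / focal_mm       # ground swath per pass
    effective = swath * (1.0 - sidelap)                   # new ground per extra pass
    needed_width = gap_width_m + 2.0 * buffer_m           # gap plus lateral buffer
    if needed_width <= swath:
        return 1
    passes = 1 + math.ceil((needed_width - swath) / effective)
    if passes > max_passes:
        # per the criteria below, generation is blocked with a clear message
        raise ValueError("gap cannot be covered within the pass limit")
    return passes
```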

Acceptance Criteria
Tap Gap Generates 1–2 Pass Micro-Mission
Given a user taps a single unresolved gap on the QC heatmap for the active site When the system processes the request Then a micro-mission plan is generated and displayed within 5 seconds And the plan contains 1 or 2 passes only And each pass specifies altitude (m AGL), path width (m), speed (m/s), and gimbal angle (degrees) And the planned swath fully covers the gap polygon with at least a 2 m lateral buffer And the plan is labeled with the gap ID and estimated mission time
QC Geometry Meets Project Thresholds
Given project QC thresholds for overlap, GSD, and viewing angle are configured When the plan is generated Then computed forward and side overlap within the gap are greater than or equal to their configured thresholds And computed ground sampling distance (GSD) is less than or equal to the configured maximum And gimbal angle is within the configured range for the selected capture mode And the number of passes is minimized (1 pass if thresholds are satisfied with 1, otherwise 2) And plan generation is blocked with a clear message if thresholds cannot be satisfied within 2 passes
Obstacle, Boundary, and No‑Fly Compliance
Given the site boundary polygon, mapped obstacles with heights, and active no‑fly/geo‑fence zones When generating the plan Then all waypoints and path segments remain inside the site boundary minus a 2 m buffer And the plan maintains at least the configured 3D safety margin from obstacles And planned altitude never exceeds the lesser of the local legal ceiling and the project altitude cap And no path intersects restricted airspace; if unavoidable, generation is blocked with a message listing the blocking area(s) And the preview displays a compliance badge (Pass/Fail) for Boundary, Obstacles, and No‑Fly checks
Battery and Flight Time Validation
Given the connected drone’s current battery state of charge and health When a plan is generated Then estimated mission time plus return‑to‑home reserve is less than or equal to the remaining battery time estimate And the configured minimum battery reserve is maintained And if the requirement is not met, launch is blocked and the UI prompts for a battery swap, showing the deficit estimate And the preview displays estimated mission time, remaining flight time, and reserve
Preflight Checklist and On‑Screen Preview
Given a generated micro‑mission When the user opens the preview Then the map displays planned path, passes, waypoints, and footprint envelopes with altitude and gimbal annotations And a preflight checklist is presented and must be fully acknowledged before Send to Drone is enabled And the checklist includes: GPS lock acquired, home point set, IMU/compass status OK, camera storage available, obstacle sensors OK, takeoff area clear, battery reserve met And the user can view and save a mission summary including altitude, speed, gimbal angle, expected coverage, and estimated time
Send‑to‑Drone and Execution via Supported SDKs
Given a supported drone is connected and the checklist has passed When the user taps Send to Drone Then the mission uploads successfully via a supported SDK and returns a mission ID And the user can Start, Pause/Resume, and Abort the mission from within the app And if upload or start fails, a specific error with SDK code and recovery guidance is shown And the system records SDK name and version, drone model and firmware, operator ID, gap ID, and mission ID
Telemetry Streaming Updates Heatmap and Audit Logs
Given a micro‑mission is executing When telemetry is streaming Then position, altitude, speed, gimbal angle, and capture events are received at a rate of at least 1 Hz And coverage heatmap tiles for the gap area update within 3 seconds of each new capture event And upon mission completion, the gap status changes to Resolved and the audit log records planned vs flown metrics, QC compliance results, timestamps, and operator And if the mission is aborted, the gap remains Open and the audit log records the abort reason and partial coverage And all telemetry and audit data are persisted to the server within 30 seconds of mission completion when connected
Offline QC Processing
"As a field operator working in low-connectivity areas, I want QC to function offline so that I can finish the job without waiting for a signal."
Description

Executes heatmap generation and gap detection entirely on-device when connectivity is limited, using cached basemaps and configuration. Queues assets and metrics for background sync to the cloud when a connection is available, with conflict resolution and deterministic reprocessing to ensure parity between edge and cloud results. Provides clear offline indicators and guards against data loss on app close.
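
The data-loss guard in the criteria below hinges on atomic writes; a minimal on-disk queue sketch using the classic write-temp, fsync, rename pattern. The path and entry shape are illustrative.

```python
import json, os, tempfile, uuid

QUEUE_DIR = "qc_queue"   # illustrative on-device path

def enqueue(asset: dict) -> str:
    """Persist one QC asset so a force-close or crash never leaves a
    half-written entry: write a temp file, fsync, then atomic rename."""
    os.makedirs(QUEUE_DIR, exist_ok=True)
    entry_id = str(uuid.uuid4())
    final_path = os.path.join(QUEUE_DIR, f"{entry_id}.json")
    fd, tmp_path = tempfile.mkstemp(dir=QUEUE_DIR, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump({"id": entry_id, "asset": asset}, f)
        f.flush()
        os.fsync(f.fileno())             # force bytes to disk before rename
    os.replace(tmp_path, final_path)     # atomic rename into the queue
    return entry_id

def pending() -> list:
    """Reconstruct the queue on next launch; *.tmp leftovers are ignored."""
    os.makedirs(QUEUE_DIR, exist_ok=True)
    return sorted(p for p in os.listdir(QUEUE_DIR) if p.endswith(".json"))
```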

Acceptance Criteria
Offline Heatmap and Gap Detection on Landing
Given the drone lands and the device has no internet connectivity When the image ingestion completes Then the app generates a coverage heatmap and gap list entirely on-device without making any network calls And the initial heatmap and gap list render within 120 seconds for up to 250 images on a device meeting the published minimum specs And any processing failures are surfaced with a retriable error and logged with a timestamp and run ID
Use of Cached Basemaps and Configuration Offline
Given cached basemaps and QC configuration are present on the device When the app detects offline mode Then the last successfully downloaded QC configuration (version-stamped) is loaded from local storage And basemap tiles within a 500 m radius of the job centroid are served from cache And if required cache entries are missing or older than 30 days, the app displays a non-blocking warning and proceeds with placeholders without attempting network fetches
Offline Indicator and Action Guardrails
Given the device is offline When the QC workflow is initiated Then a persistent "Offline QC" banner is displayed and an offline badge appears on the job card And cloud-only actions (e.g., share-to-cloud, team handoff) are disabled with explanatory tooltips And the user can still view heatmap, gap list, and launch micro-missions And offline state transitions (offline->online, online->offline) are recorded in the session log
Queueing and Persistence Across App Close/Crash
Given QC assets (heatmap tiles, gap JSON, metrics, logs) are produced offline When the app is backgrounded, force-closed, or the OS kills the process Then all pending assets are written to an on-disk queue within 2 seconds of creation using atomic writes And on next launch, the app reconstructs the queue and resumes processing without duplication And no more than one asset per 10,000 is lost in a simulated power-loss test; any loss is reported with asset IDs
Background Sync on Connectivity Restoration
Given there are queued QC assets and metrics And the device regains internet connectivity When connectivity has been stable for 10 seconds Then the app starts background sync automatically and displays progress (items remaining, throughput, ETA) And uploads are resumable with checksum verification and exponential backoff (max backoff 5 minutes) And partial failures are retried up to 5 times before marking as "Needs Attention" with a user-resolvable action
Conflict Resolution and Deterministic Parity with Cloud
Given an offline-processed job has corresponding cloud assets with overlapping versions When the queued results are synced Then version conflicts are resolved by latest-timestamp-wins, preserving the superseded version in an immutable audit log And the cloud triggers deterministic reprocessing using the synced inputs and config seed And edge vs cloud outputs match within defined tolerances: heatmap tile IoU ≥ 0.99; linear measurements within ±0.5% or ±1 inch (whichever is greater); gap list IDs and coordinates match within ±10 cm And any parity failure flags the job as "Parity Review" and blocks certificate issuance until resolved
Offline Micro-Mission Generation and Update
Given a user taps a gap in the gap list while offline When a micro-mission is generated Then the flight path and camera settings are computed locally using cached basemap and the active aircraft profile And after flying 1–2 passes, newly captured images update the heatmap and remove the gap locally within 30 seconds of landing And a QC pass certificate can be generated offline as a draft and queued for sync, with a "Pending Cloud Parity" watermark until parity passes
QC Pass Certificate Generator
"As an estimator, I want a QC pass certificate attached to the job so that clients and carriers trust the measurements and reduce disputes."
Description

Validates that post-correction coverage meets configured QC thresholds and produces a tamper-evident QC Pass Certificate in PDF and JSON formats. Includes site boundary, coverage/overlap/GSD metrics, before/after gap list, timestamps, pilot and aircraft identifiers, firmware versions, weather snapshot, and signed hashes of flight logs and imagery. Attaches the certificate to the RoofLens job, embeds a summary in exported bids, and supports shareable links. If thresholds are not met, generates a QC Fail report with remediation guidance.
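
A sketch of the hash-and-sign step, using Ed25519 via the PyNaCl library as one possible algorithm choice (the criteria below require signed hashes verifiable against the RoofLens public key, not a specific algorithm); canonicalization via sorted-key JSON is likewise an assumption.

```python
import hashlib, json
from nacl.signing import SigningKey   # PyNaCl Ed25519; illustrative choice

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_certificate(cert: dict, artifact_paths: list, sk: SigningKey) -> dict:
    """Attach artifact hashes, canonicalize, and sign. Sketch only: field
    names follow the JSON criteria below; key management is out of scope."""
    cert["hashes"] = [{"path": p, "sha256": sha256_file(p)} for p in artifact_paths]
    payload = json.dumps(cert, sort_keys=True, separators=(",", ":")).encode()
    sig = sk.sign(payload).signature
    cert["signature"] = {"algorithm": "Ed25519", "value": sig.hex()}
    return cert

# Verification recomputes the canonical payload (minus "signature") and calls
# sk.verify_key.verify(payload, bytes.fromhex(value)); BadSignatureError => tampered.
```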

Acceptance Criteria
QC Pass Decision Logic at Job Closeout
Given a completed drone capture with post-correction coverage metrics and a job-level QC configuration When the system evaluates coverage, overlap, and GSD against the configured thresholds Then the QC outcome is set to "Pass" only if all configured thresholds are met or exceeded And the QC outcome is set to "Fail" if any configured threshold is missed And the decision record includes the exact metric values, threshold values, and evaluation timestamp And re-evaluating with the same inputs produces the same outcome and stored values
PDF QC Pass Certificate Content and Layout
Given a QC outcome of "Pass" and all required metadata present When the PDF certificate is generated Then the PDF includes: site boundary map, coverage heatmap, overlap metric summary, GSD metric summary, before/after gap list, capture start and end timestamps, pilot identifier, aircraft identifier, aircraft and controller firmware versions, weather snapshot with source and timestamp, unique certificate ID, certificate version, and signed hashes of flight logs and imagery And all required fields are populated and human-readable And the PDF is digitally signed and shows as unaltered in a standards-compliant PDF viewer And the PDF maps and charts render at a minimum effective resolution of 300 DPI on Letter/A4 pages
JSON QC Certificate Schema Compliance
Given a QC outcome of "Pass" When the JSON certificate is generated Then the JSON validates against the published QC Certificate JSON Schema version declared in the document And includes keys for siteBoundary, metrics.coverage, metrics.overlap, metrics.gsd, gaps.before, gaps.after, timestamps.start, timestamps.end, pilot.id, aircraft.id, firmware.aircraft, firmware.controller, weather.source, weather.observedAt, assets.flightLogs[], assets.imagery[], hashes[], certificate.id, certificate.version, signature.algorithm, signature.value And all numeric metrics include associated units And each asset entry includes a sha256 hash and byte length And the canonicalized JSON hash matches the signature's signed payload
Tamper-Evidence and Signature Verification
Given a generated QC certificate (PDF and JSON) and the referenced flight logs and imagery are accessible When a verifier validates the certificate using the RoofLens public key Then the JSON signature verifies successfully against the canonicalized payload And the embedded PDF signature validates with status "valid and unmodified" And recomputing sha256 over each referenced flight log and image matches the hashes listed in the certificate And any signature or hash mismatch causes verification to fail with a diagnostic naming the non-matching artifact
Attachment to Job and Bid Summary Embedding
Given a QC certificate has been generated for a job When the job is viewed in RoofLens Then the certificate (PDF and JSON) appears in the job's attachments with correct filenames and sizes And a one-page QC summary is embedded into newly exported bid PDFs and includes outcome, evaluation timestamp, coverage/overlap/GSD metrics, pilot and aircraft identifiers, and a link or QR code to the full certificate And re-exporting an existing bid updates the embedded summary to the latest certificate And deleting or revoking a certificate removes the summary from future exports while preserving previously exported bids unchanged
Shareable Link Generation and Access Control
Given a QC certificate exists When a user generates a shareable link with a specified expiry between 7 and 30 days Then a public URL is created that serves the certificate without authentication until expiry And each access is logged with timestamp, IP, and user agent And the link can be revoked, after which requests return HTTP 410 Gone And after expiry the link returns HTTP 410 Gone And the URL contains a signed, unguessable token with at least 128 bits of entropy
QC Fail Report with Remediation Guidance
Given one or more QC thresholds are not met after post-correction flights When the user requests certificate generation Then the system produces a QC Fail report (PDF and JSON) instead of a Pass certificate And the report lists each failing metric with measured value versus threshold, the unresolved gap list, and recommended micro-missions with estimated additional passes required And the Fail report attaches to the job and supports a shareable link under the same access controls as a pass certificate And the job UI displays remediation guidance and a call-to-action to launch the suggested micro-missions
Multi-Operator Session & Deconfliction
"As a site supervisor, I want my team to coordinate gap fills safely so that we finish faster without drones interfering with each other."
Description

Supports team operations by sharing the live heatmap and gap list across nearby devices, allowing operators to claim gaps and assign micro-missions. Implements soft locking and mission deconfliction to prevent duplicate work and reduce collision risk, including temporal separation and minimum distance rules for multiple aircraft. Synchronizes over local network or cloud when available and records assignments and outcomes for audit.
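
The minimum-distance rule reduces to a 3D separation check on live telemetry; a sketch using an equirectangular approximation, which is adequate at micro-mission scale. The default matches the minAircraftSeparationMeters value in the criteria below.

```python
import math

def separation_ok(a, b, min_sep_m=30.0):
    """3D separation check between two aircraft given as (lat_deg, lon_deg,
    alt_m) tuples. Equirectangular approximation; fine over short ranges."""
    R = 6371000.0                                  # mean Earth radius, meters
    lat = math.radians((a[0] + b[0]) / 2)
    dx = math.radians(b[1] - a[1]) * R * math.cos(lat)
    dy = math.radians(b[0] - a[0]) * R
    dz = b[2] - a[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) >= min_sep_m
```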

Acceptance Criteria
Real-Time Heatmap Share and Gap List Sync (LAN/Cloud)
Given 2–5 operators are joined to the same session on a local network, When one device marks coverage or creates/updates a gap, Then all other devices display the change within ≤2 seconds (95th percentile) and ≤5 seconds worst-case. Given LAN is unavailable and internet is available, When a change occurs, Then updates propagate via cloud within ≤5 seconds (95th percentile) and ≤10 seconds worst-case, and LAN is preferred when both paths are available. Given a device is offline (no LAN or internet), When changes occur, Then changes are queued locally as Pending Sync and are applied on other devices within ≤10 seconds of reconnection. Given concurrent edits to the same gap without a lock, When conflicts are detected, Then the system resolves by server timestamp (last-writer-wins) with deterministic tie-breaker (lowest device ID) and notifies affected users.
Operator Claims Gap with Soft Lock
Given an unclaimed gap is visible, When an operator taps Claim, Then the gap shows Claimed by <user> on all devices within ≤2 seconds (LAN) or ≤5 seconds (cloud) and becomes soft-locked to that user. Given a gap is soft-locked, When another operator attempts to claim or start a mission for that gap, Then the action is blocked and a message indicates Claimed by <user>. Given no activity (no mission start or update) occurs on a claimed gap for >120 seconds, When the timer elapses, Then the soft lock auto-expires and the gap returns to Unclaimed on all devices. Given a mission completes or the claimer manually releases, When released, Then the lock clears and the gap status updates within the same sync thresholds.
Micro-Mission Assignment and Acceptance
Given an operator has claimed a gap, When they assign a micro-mission (to self or another operator), Then a mission record is created with missionId, assignee, target area, constraints, and status Assigned, visible to all devices within sync thresholds. Given a mission is Assigned, When the assignee accepts, Then the mission status changes to Accepted and all other operators are prevented from starting the same mission. Given a mission is Assigned, When the assignee declines, Then the mission returns to Unassigned and the claimer retains the soft lock for 60 seconds (grace window). Given a mission is Accepted by one operator, When any other operator attempts to start it, Then the start is blocked with message Mission in progress by <user>.
Aircraft Deconfliction: Minimum Distance Enforcement
Given two or more aircraft are registered in the session with live telemetry, When a mission plan or active flight would reduce separation below minAircraftSeparationMeters (default 30 m, configurable), Then the system blocks arming/mission start for the second aircraft and displays a separation warning until predicted separation ≥ minimum. Given aircraft are already airborne, When live telemetry shows actual separation < minAircraftSeparationMeters, Then both operators receive high-priority alerts and guidance to pause/hold until separation is restored, and the event is logged. Given an admin updates minAircraftSeparationMeters, When subsequent missions are planned or started, Then the new value is enforced and displayed in the deconfliction UI.
Temporal Separation on Intersecting Flight Paths
Given two micro-missions have intersecting polygons or flight lines, When a mission is accepted while an intersecting mission is in progress or completed within the last separationWindowSeconds (default 15 s, configurable), Then start is delayed and a countdown is shown until the window elapses. Given two micro-missions do not intersect, When either is accepted, Then no temporal separation delay is applied. Given operator override permissions are disabled, When an operator attempts to bypass the temporal separation delay, Then the action is blocked and recorded.
Offline Operation and Conflict Resolution on Reconnect
Given a device loses connectivity, When an operator claims a gap or starts/completes a mission, Then the actions are allowed locally, marked Pending Sync, and time-stamped. Given two devices offline claim the same gap, When they reconnect at different times, Then ownership is resolved to the earliest claim timestamp and the losing device is notified with a prompt to stop or reassign within 10 seconds. Given a device reconnects after being offline, When sync begins, Then all queued actions are uploaded and reconciled within ≤10 seconds with idempotent application and conflict audit entries.
Audit Trail and Export
Given any claim, assignment, start, pause, completion, override, or deconfliction event occurs, When it is processed, Then an immutable audit record is created with sessionId, gapId/missionId, actor userId, aircraftId (if applicable), local and server timestamps, location (if applicable), outcome, and reason (if override/decline). Given an authorized user requests an audit export for a session, When the request is made, Then a downloadable JSON and CSV are generated within ≤10 seconds containing 100% of recorded events in order. Given a record needs correction (e.g., mission canceled), When corrected, Then a new audit event is appended and the original remains unchanged; exports include both with clear sequencing.

QR Verify

Adds a scannable QR and short link to every DisputeLock PDF that opens a lightweight verification page. Adjusters confirm authenticity in seconds—hash match, capture timestamps, and signer list—without installing anything. Cuts back‑and‑forth and speeds approvals while preserving offline PDF workflows.

Requirements

PDF QR and Short Link Injection
"As a roofing estimator, I want every DisputeLock PDF to include a scannable QR code and short link so that adjusters can instantly verify authenticity without logging in or installing apps."
Description

Embed a unique, high-contrast QR code and human-readable short link into every DisputeLock PDF at render time. The QR encodes a signed verification token and short URL; the printed short link offers a manual fallback. Placement is consistent across templates (e.g., footer margin), sized for camera readability and print/scan resilience (adequate quiet zone, ECC level M/H). The element is version-aware (new code per regenerated PDF), themeable with account branding, and configurable per account or document type. The injection runs in the existing PDF generation pipeline without altering core content layout or pagination and degrades gracefully for legacy PDFs.
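
A sketch of token minting along these lines, assuming HMAC-SHA256 over a base64url payload (as the integrity criterion below specifies) and a 128-bit random slug; the secret handling and payload field names are illustrative.

```python
import base64, hashlib, hmac, json, secrets, time

SECRET = b"replace-with-kms-managed-secret"   # placeholder, not real handling

def b64url(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def issue_verification_token(document_id: str, account_id: str, version: int):
    """Mint the signed token and short-link slug embedded in the QR."""
    payload = {"doc": document_id, "acct": account_id, "ver": version,
               "iat": int(time.time())}
    body = b64url(json.dumps(payload, sort_keys=True).encode())
    mac = hmac.new(SECRET, body.encode(), hashlib.sha256).digest()
    token = f"{body}.{b64url(mac)}"
    slug = secrets.token_urlsafe(16)       # 128 bits of entropy, non-sequential
    return token, slug

def verify_token(token: str):
    """Return the payload dict, or None if the token was tampered with."""
    body, mac = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), mac):
        return None                        # modified payloads fail closed
    return json.loads(base64.urlsafe_b64decode(body + "=" * (-len(body) % 4)))
```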

Acceptance Criteria
Footer QR and Short Link Placement
- Given a DisputeLock PDF is rendered, when the footer is produced, then a QR code and human-readable short link are placed at the right footer margin of page 1 by default, aligned consistently across all templates.
- And the placement maintains at least 6 mm clearance from core content and page edges, and does not overlap any existing elements.
- And the short link font size is >= 8 pt and fully readable at 300 DPI print.
- And when the account setting "Place QR on all pages" is enabled, then the QR and short link appear on every page at the same coordinates.
Encoded Token and Short URL Integrity
- Given the QR is generated, when decoded, then it contains a short URL with a unique slug and a signed verification token associated with the specific document ID, account ID, and PDF version number.
- And the token is signed with the system signing secret (HMAC-SHA256) and includes an issued-at timestamp.
- And server-side verification of the token succeeds for untampered payloads and fails for any modified payloads.
- And the printed short link text exactly matches the short URL encoded in the QR.
Print/Scan Resilience and Readability
- Given the PDF is printed on standard A4/Letter at 300 DPI in grayscale or color, when scanned by common smartphone cameras (iOS/Android) at 30–60 cm, then the QR decodes successfully in ≥ 99% of 50 random test prints.
- And the QR physical size is >= 22 mm x 22 mm with a quiet zone of >= 4 modules and ECC level M or higher.
- And the QR-to-background contrast ratio is >= 7:1; if brand color fails contrast, the QR is rendered in pure black on white.
- And the short link remains readable after printing and photocopying once.
Version Awareness on Regeneration
- Given a DisputeLock PDF is regenerated (new version), when rendering, then a new QR and short link slug are produced, distinct from the prior version.
- And scanning the old QR resolves to the prior version’s verification record, while the new QR resolves to the new version.
- And the token payload includes the PDF version number and created-at timestamp.
- And the regeneration event is logged with a reference to both slugs.
Branding and Theming Controls
- Given an account with branding configured, when the QR block is rendered, then label text and short link styling (font, color) use account theme while preserving minimum contrast and font size.
- And the QR symbol itself remains high-contrast and unbranded unless brand color meets contrast and ECC criteria.
- And when branding is missing or invalid, the system falls back to default styling without failing PDF generation.
Per-Account and Per-Document-Type Configuration
- Given account settings specify QR/short link placement, size, ECC level, and page scope per document type, when rendering a DisputeLock PDF of that type, then the configured values are applied.
- And disabling the feature at the account or document-type level results in no QR or short link being injected for that PDF, with a trace log entry recorded.
- And in the absence of explicit settings, safe defaults are applied (footer, page 1, 22 mm QR, ECC M).
- And settings can be overridden per-request via API flags, constrained within safe bounds (min size 18 mm; max size 30 mm).
Pipeline Integration and Graceful Degradation
- Given the existing PDF generation pipeline, when QR injection runs, then the total render time increases by no more than 300 ms on average across 100 sample renders.
- And no page count or core content layout changes occur; a visual diff excluding the reserved footer rectangle shows zero differences.
- And if QR injection fails (e.g., signing or short-link creation error), then the PDF is still generated without the QR/short link, an error is logged with correlation ID, and no user-visible crash occurs.
- And legacy PDFs generated before this feature remain unchanged, and re-downloads do not retroactively inject QR unless the PDF is re-rendered.
Mobile-First Verification Page
"As an insurance adjuster, I want a fast, mobile-friendly verification page from the QR scan so that I can confirm document authenticity and key details in seconds at the job site."
Description

Serve a lightweight, public, mobile-first page that loads in <1s on average 4G, displaying cryptographic hash match status (pass/fail), capture timestamps, signer list, and key document metadata. Provide clear visual status (green check/red alert), company branding, and print-friendly layout. The page requires no authentication, supports accessibility (WCAG 2.1 AA), respects privacy configurations (e.g., masking sensitive fields), and works across default phone camera QR scanners and modern browsers. Include timezone-aware timestamps, a verification ID, and a contact link for disputes.

Acceptance Criteria
4G Load Performance and Lightweight Page
Given a first-time visitor on a mid-range phone (Google Pixel 5 or iPhone 11) on simulated 4G (1.6 Mbps down, 300 ms RTT) When the verification URL is opened Then Largest Contentful Paint is <= 1.0 s median across 3 runs; And First Contentful Paint is <= 0.8 s median; And total transfer size (HTML+CSS+JS+images) is <= 200 KB; And no single JS bundle exceeds 50 KB; And primary content (status, metadata, timestamps, signer list) is visible without additional interaction.
Hash Status and Verification Metadata Display
Given a valid verification ID encoded in the QR or short link When the page loads Then it displays a green check and the label "Verified" if the stored document hash matches; Or a red alert and the label "Hash Mismatch" if it does not; And the cryptographic hash value is shown with its algorithm (e.g., SHA-256) and can be expanded to full length; And the signer list (names and roles) is displayed in the original capture order; And capture timestamps and key document metadata (document title, generation date, organization) are shown; And the Verification ID is clearly visible and copyable; And no authentication is required to view.
Timezone-Aware Timestamps
Given a viewer’s device timezone When timestamps are rendered Then all timestamps (e.g., document generated, signatures captured) are displayed in the viewer’s local timezone including numeric UTC offset; And an option is available to view timestamps in UTC (ISO 8601 format); And daylight saving time rules are respected for the displayed dates.
WCAG 2.1 AA Accessibility
Given keyboard-only and screen reader usage When navigating and reading the page Then all interactive elements are reachable via keyboard with visible focus order matching visual order; And semantic landmarks and headings are provided; And status messages are announced via ARIA live regions; And text/icon color contrast meets WCAG 2.1 AA (>= 4.5:1 for normal text, >= 3:1 for large text/icons); And the page passes automated axe-core checks with zero serious/critical violations.
Privacy Masking Configuration Enforcement
Given the account’s privacy settings require masking specific sensitive fields When the page renders Then those fields are masked per configuration (e.g., last 4 visible, remainder redacted) in both visual text and accessible names; And masked values do not appear in the DOM, page source, network responses, or URL/query parameters; And the page indicates that some fields are masked due to privacy settings without revealing the underlying values.
QR Scan and Short Link Compatibility
Given the QR code or short link on a DisputeLock PDF When scanned with default camera apps on iOS 15+ and Android 10+ or opened in Safari, Chrome, Firefox, or Edge (latest two versions) Then the link resolves over HTTPS with no more than one redirect; And the verification page loads without authentication and displays correctly on 360–1024 px viewport widths with no horizontal scrolling; And a clear error page with support guidance is shown for invalid or expired Verification IDs.
Print-Friendly Layout, Branding, and Contact
Given a user prints the verification page to Letter or A4 When using the browser’s Print function Then the output fits on a single page in portrait; And includes company branding (logo and name), visual status indicator, cryptographic hash, Verification ID, signer list, timestamps, and key metadata; And excludes navigation/interactive UI; And includes a visible dispute contact link (tel: or mailto:) that is clickable on-screen and rendered as text in print.
Document Hashing and Integrity Validation
"As a claims supervisor, I want QR verification to use cryptographic hashing and signed tokens so that any tampering is reliably detected and flagged."
Description

Generate a SHA-256 hash of the finalized PDF bytes and store it with version metadata in a write-once audit store. Issue a signed, time-bounded verification token embedded in the QR/short link. The verification endpoint recomputes the hash from the canonical document, validates the token signature and expiry, and returns pass/fail with reason codes (e.g., mismatch, expired, unknown). Prevent token guessing/enumeration and avoid PII in URLs. Support re-issue on document regeneration while preserving historical versions for compare.
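
The endpoint's decision ladder, sketched with the storage and revocation lookups injected as callables; the reason codes follow the set defined below, and signature checking is assumed to have already happened upstream.

```python
import hashlib, time

CLOCK_SKEW_S = 60   # allowed clock skew per the criteria below

def verify(claims, load_pdf_bytes, stored_hash_for, revoked_nonces, now=None):
    """Decision logic after signature validation. Sketch only: the loader,
    hash store, and revocation set are injected dependencies."""
    now = now or time.time()
    if claims is None:
        return {"status": "fail", "reasonCode": "TOKEN_INVALID"}
    if claims["iat"] > now + CLOCK_SKEW_S:           # issued in the future
        return {"status": "fail", "reasonCode": "TOKEN_INVALID"}
    if claims["exp"] < now - CLOCK_SKEW_S:
        return {"status": "fail", "reasonCode": "TOKEN_EXPIRED"}
    if claims["nonce"] in revoked_nonces:
        return {"status": "fail", "reasonCode": "TOKEN_REVOKED"}
    pdf = load_pdf_bytes(claims["docId"], claims["versionId"])
    if pdf is None:
        return {"status": "fail", "reasonCode": "DOC_UNKNOWN"}
    expected = stored_hash_for(claims["docId"], claims["versionId"])
    if hashlib.sha256(pdf).hexdigest() != expected:
        return {"status": "fail", "reasonCode": "HASH_MISMATCH"}
    return {"status": "pass"}
```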

Acceptance Criteria
Hash Generation on Finalized PDF
Given a PDF is finalized for distribution When the finalize operation completes Then the system computes SHA-256 over the exact finalized PDF bytes (no post-processing or re-serialization) And persists the hex-encoded hash value And a subsequent SHA-256 recomputation over the canonical stored bytes equals the persisted hash
Write-Once Audit Store with Version Metadata
Given storing a hash and associated metadata When creating the audit record Then the store writes an immutable record containing: documentId, versionId, hashHex, algo="SHA-256", byteLength, contentType="application/pdf", createdAt (UTC ISO-8601), createdBy Given an attempt to update or delete an existing audit record When the mutation is attempted Then the store rejects it with error code WORM_VIOLATION and no data changes occur And the denied attempt is logged with actor identity and timestamp
Signed Time-Bounded Verification Token Issuance
Given a finalized document with a stored hash When generating the QR and short link Then the system issues a compact signed token with claims: docId, versionId, hashId, iat, exp, nonce And exp - iat is configurable from 1 hour to 30 days (default 7 days) And the token is signed with an allowlisted algorithm using the platform private key; public keys exposed via JWKS And the token length is <= 512 bytes And the QR/short link embeds only the token (no PII, no guessable identifiers)
Verification Endpoint: Hash Recompute and Token Validation
Given a request with a token arrives at the verification endpoint When processing the request Then the endpoint validates the signature against active JWKS, checks exp/iat within <=60s clock skew, verifies nonce non-reuse, and confirms token not revoked And loads canonical PDF bytes for the token's docId/versionId, recomputes SHA-256, and compares to the stored hash And returns JSON with status "pass" when hashes match; otherwise status "fail" with reasonCode in {TOKEN_EXPIRED, TOKEN_INVALID, TOKEN_REVOKED, HASH_MISMATCH, DOC_UNKNOWN} And uses HTTP 200 for pass/fail, 429 for rate-limited, and 5xx for server errors And achieves median latency <= 200 ms and p95 <= 500 ms under 100 RPS
Token Guessing and Enumeration Resistance
Given unauthenticated requests with invalid or random tokens When 10,000 such requests are sent within 5 minutes from a single IP Then responses are uniform: HTTP 200 with status "fail" and reasonCode=TOKEN_INVALID, revealing no document existence And per-IP rate limiting caps to 60 requests/minute with exponential backoff; excess returns 429 And short links use non-sequential IDs with >= 128 bits of entropy And the verification endpoint does not accept docId/versionId as query parameters; only token-based validation is allowed
No PII in URLs and Tokens
Given a generated QR and short link When the URL and token are decoded and inspected Then neither the URL path, query parameters, nor token payload contains PII (names, emails, phone numbers, street addresses, claim numbers, GPS coordinates) And server access logs redact full tokens and query strings by default And CI includes automated static checks that scan URLs/tokens for PII patterns; builds fail on detection
Re-Issue on Regeneration with History Preservation
Given a document is regenerated resulting in changed bytes When it is finalized again Then a new versionId is created and a new hash record is stored while prior versions remain immutable and retrievable And previously issued tokens remain valid until exp and validate only against their original version's hash And the verification page displays docId, versionId, createdAt, and hash summary, clearly indicating Latest vs Historical And a compare endpoint accepts docId and two versionIds and returns a diff summary based on stored metadata
Audit Trail Capture and Presentation
"As an adjuster, I want to see who signed and when during verification so that I can approve estimates confidently without requesting additional documentation."
Description

Aggregate and persist an immutable audit trail for each DisputeLock document: creation/finalization timestamps, signer list and signing events, and key lifecycle events (e.g., edits, regenerations). Encrypt at rest, apply retention policies, and expose a minimal subset on the verification page (per privacy settings). Provide a structured, downloadable audit JSON for internal use while keeping the public page concise. Ensure consistency with existing DisputeLock audit data to avoid duplication.
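
Tamper evidence over an append-only log is typically built as a per-entry hash chain; a minimal sketch where each entry commits to its predecessor, so verification can name the first offending entry as the criteria below require. Entry fields are illustrative.

```python
import hashlib, json, time

GENESIS = "0" * 64

def append_event(chain, event):
    """Append an entry whose hash commits to the previous entry's hash."""
    entry = {
        "ts": time.time(),                     # UTC epoch seconds
        "event": event,                        # e.g., {"type": "signed", ...}
        "prev_hash": chain[-1]["entry_hash"] if chain else GENESIS,
    }
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    entry["entry_hash"] = hashlib.sha256(payload.encode()).hexdigest()
    chain.append(entry)
    return entry

def first_tampered(chain):
    """Index of the first entry failing verification, or None if intact."""
    prev = GENESIS
    for i, e in enumerate(chain):
        body = {"ts": e["ts"], "event": e["event"], "prev_hash": e["prev_hash"]}
        payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
        ok = (e["prev_hash"] == prev
              and hashlib.sha256(payload.encode()).hexdigest() == e["entry_hash"])
        if not ok:
            return i
        prev = e["entry_hash"]
    return None
```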

Acceptance Criteria
Persist Immutable Audit Trail on Finalization
Given a DisputeLock document is created and finalized When finalization completes successfully Then the audit trail persists creation and finalization timestamps in UTC with millisecond precision And the document ID, version, and content hash are recorded And the complete signer list and each signing event (actor, event type, timestamp) are recorded And key lifecycle events (edits, regenerations) are appended with timestamps And the write returns a success code and is readable via the internal audit API
Append-Only Immutability and Tamper Detection
Given an existing audit trail for a DisputeLock document When an attempt is made to update or delete a prior audit entry Then the operation is rejected and an error is returned And only new entries can be appended Given the audit trail is read When the integrity chain (per-entry hash referencing previous hash) is verified Then any tampering produces a verification failure with the first offending entry identified
Encryption at Rest with Managed Keys
Given audit data When it is persisted to storage Then the data is encrypted at rest using KMS-managed AES-256 keys And database/storage encryption status reports "enabled" When a key rotation is performed Then existing audit records remain readable and new writes use the rotated key When attempting direct storage reads without application-layer decryption Then plaintext audit content is not obtainable
Retention Policy and Legal Hold
Given a retention policy of N days is configured When an audit record becomes older than N days and is not on legal hold Then it is purged within 24 hours and is no longer retrievable via API or admin UI And a deletion event with timestamp and document ID is logged to the system audit When a legal hold is applied to a document Then its audit records are retained regardless of age until the hold is removed When the retention value changes Then the new policy applies prospectively to future expirations
Public Verification Page Shows Minimal Subset
Given a user scans the QR code or opens the short link on a DisputeLock PDF When the verification page loads Then it displays only: document hash match status, creation and finalization timestamps, signer count (and masked initials if allowed), and last regeneration timestamp And it excludes PII such as full names, emails, IP addresses, and device identifiers And it respects privacy settings (fields disabled by policy are omitted) And it requires no app install or login and loads within 500 ms p95 with payload <= 150 KB When the document is private or link access is disabled Then the page returns a redacted view or 404 with no sensitive data leaked
Downloadable Structured Audit JSON for Internal Use
Given an authenticated internal user with Estimator or Admin role When they request the audit JSON for a document Then a file is returned with content-type application/json and HTTP 200 And it conforms to schema version "audit.v1" with fields: document metadata, signer list, ordered events, hashes, and UTC ISO 8601 timestamps And the JSON includes the public-page subset markers indicating which fields are public And the response is denied with HTTP 403 for unauthorized users And typical files are <= 5 MB and stream within 2 seconds p95
Audit Consistency and De-duplication with Existing Data
Given a document already has legacy DisputeLock audit entries When the new audit service ingests or records events Then the unified timeline presents each logical event exactly once with stable event IDs And regeneration creates a new event without duplicating prior ones When the same event is submitted twice (idempotent key) Then only one entry is persisted And pre- and post-migration event counts and hashes match for the same document
Short Link Service with Custom Domain Support
"As a field adjuster, I want a readable short link on the PDF so that I can type it in when QR scanning isn’t possible or allowed on my device."
Description

Create a reliable short-link service that generates non-sequential, collision-resistant IDs (e.g., base62), protected against enumeration and abuse. Support account-level custom domains (e.g., verify.contractor.com), HTTPS-only redirects, and CDN-backed global edge routing for availability. Include manual entry fallback via the printed code and handle 404/expired states gracefully with guidance. Log scan and visit events for operational insights without exposing PII.
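
A minimal sketch of the ID generator, using rejection sampling over Node's CSPRNG so every base62 character is uniformly distributed (a plain modulo over the full 0-255 byte range would bias the low characters):

```typescript
import { randomBytes } from "crypto";

const ALPHABET =
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; // base62

// Rejection sampling: 248 is the largest multiple of 62 below 256, so bytes
// 248-255 are skipped; accepted bytes map uniformly onto the 62 characters.
export function shortId(length = 12): string {
  const out: string[] = [];
  while (out.length < length) {
    for (const byte of randomBytes(length)) {
      if (byte < 248 && out.length < length) out.push(ALPHABET[byte % 62]);
    }
  }
  return out.join("");
}
```

Each base62 character carries about 5.95 bits of entropy, so 10-12 character codes hold roughly 60-71 bits; reaching the 128-bit floor stated elsewhere in this document takes about 22 characters, which is worth reconciling when sizing the printed code.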

Acceptance Criteria
Non-Sequential, Collision-Resistant ID Generation
Given the service is requested to create a short link When 10,000,000 IDs are generated in a test run Then zero collisions occur and each ID is 10-12 characters using base62 [0-9A-Za-z]. Given two links are created within the same millisecond When their IDs are compared Then they are not lexicographically ordered by creation time and share no deterministic prefix beyond the path root. Given an ID is inspected When decoded Then it contains no embedded timestamps, counters, or account identifiers. Given security testing attempts to predict the next ID When evaluating success rate across 1,000,000 trials Then prediction success is indistinguishable from random guessing (no better than 1/62^N).
Enumeration and Abuse Protection
Given unauthenticated requests from a single IP When more than 60 resolve attempts occur within 60 seconds Then subsequent requests receive HTTP 429 with a Retry-After header and challenges are applied for 10 minutes. Given distributed probing across multiple IPs for a single domain When aggregate resolve attempts exceed 1,000 RPS for 60 seconds Then WAF rules apply challenge/blocks and events are logged for ops review. Given requests attempt to list or traverse IDs When accessing directory-style paths (/, /list, /ids) or using HEAD/OPTIONS probing Then no index is exposed and responses return 404/405 with no ID leakage. Given repeated failed lookups from the same IP When 10 invalid IDs are requested within 60 seconds Then subsequent requests are challenged (e.g., CAPTCHA) for 15 minutes.
Account-Level Custom Domain Support
Given an account owner adds verify.contractor.com When a DNS TXT validation token is created and detected Then the domain verifies within 5 minutes and the UI shows Verified status. Given a verified domain with a CNAME to the platform edge When traffic arrives at https://verify.contractor.com/{id} Then the request is routed through the custom domain and resolves the correct ID. Given a verified domain When TLS is provisioned via ACME Then a certificate is issued within 10 minutes and auto-renewed at least 30 days before expiry without outages. Given a domain remains unverified for 72 hours When links are created for that account Then links default to the platform short domain and the domain status indicates Misconfigured with actionable DNS guidance.
HTTPS-Only Redirect Enforcement
Given an HTTP request to http://{domain}/{id} When accessed Then respond with 301 to https://{domain}/{id} and include HSTS max-age=31536000; includeSubDomains. Given a TLS handshake using versions below TLS 1.2 or weak ciphers When attempted Then the connection is refused. Given a generated QR code or short link When scanned or clicked Then the destination loads over HTTPS only with no mixed-content warnings. Given the verification page is served When headers are inspected Then Content-Security-Policy and Referrer-Policy are present to prevent downgrade/mixed content.
CDN Edge Routing and High Availability
Given synthetic monitors in NA, EU, and APAC When resolving short links over 24 hours Then p95 TTFB <= 300 ms and p99 <= 500 ms. Given an edge POP experiences an outage When traffic is routed Then failover occurs within 60 seconds with error rate < 0.1%. Given burst traffic of 2,000 requests per second per region for 5 minutes When load tested Then success rate >= 99.9% and median TTFB <= 200 ms. Given a rolling 30-day measurement window When availability is calculated Then end-to-end availability for link resolution is >= 99.95%.
Manual Entry Fallback and Error State Handling
Given a PDF displays the printed short code (e.g., RL-5X7Q9T) When the code is entered at the verify domain or default short domain Then it resolves to the same verification page as the QR scan within 2 seconds p95. Given an expired or deleted link When accessed Then respond with HTTP 410 and render a branded guidance page with no PII, a clear expiration message, and a CTA to contact the contractor for a new link. Given a malformed or unknown code When accessed Then respond with HTTP 404 and render a branded guidance page instructing the user to recheck the code and providing a support link. Given assistive technology users When the guidance page is audited Then it meets WCAG 2.1 AA for structure, contrast, and keyboard navigation.
Privacy-Safe Scan and Visit Event Logging
Given a verification page is opened When logging occurs Then an event is recorded asynchronously with fields: linkId, timestamp (UTC), domain, userAgent, country/region, referrer; IP is stored only in anonymized form (/24 IPv4 or /48 IPv6); no emails, names, or device identifiers are stored. Given normal network conditions When measuring logging latency Then 95% of events are persisted within 2 seconds and 99% within 5 seconds without delaying page load. Given an operations dashboard query When viewing metrics Then only aggregated counts by day, domain, and region are available; there is no drill-down to individual IP addresses. Given data retention policies When enforced nightly Then raw events older than 30 days are deleted or aggregated, and exports provide aggregated metrics only.
Security, Privacy, and Abuse Protections
"As a contractor, I want verification to protect sensitive client information and resist abuse so that we maintain trust while enabling fast approvals."
Description

Limit public verification content to the minimum necessary, with optional project-level PIN or claim-number allowlist to reveal extended details. Enforce strong CSP, HSTS, and no third-party trackers/cookies; implement rate limiting and basic bot mitigation on the verification endpoint. Provide configurable redaction (e.g., mask homeowner contact info) and clear legal disclaimers. Maintain detailed server logs for forensics while respecting data protection obligations (GDPR/CCPA) and honoring data deletion/retention policies.
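
The header set below mirrors the criteria that follow, expressed as a plain object any Node framework can attach to verification responses (with Express, for example, res.set(VERIFY_PAGE_HEADERS)); the CSP value is a minimal sketch, not a tuned policy:

```typescript
// Serve these on every verification-page response; caching and CORS omitted.
export const VERIFY_PAGE_HEADERS: Record<string, string> = {
  "Strict-Transport-Security": "max-age=31536000; includeSubDomains; preload",
  // No 'unsafe-inline'/'unsafe-eval', no third-party origins, no framing.
  "Content-Security-Policy": "default-src 'self'; frame-ancestors 'none'",
  "Referrer-Policy": "strict-origin-when-cross-origin",
  "X-Content-Type-Options": "nosniff",
  "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
};
```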

Acceptance Criteria
Public Verification Page Minimal Disclosure
Given an unauthenticated user opens a valid QR Verify link When the verification page loads Then it displays only: a hash match indicator, a truncated SHA-256 fingerprint (first 8 + last 8), the PDF generation timestamp (UTC, ISO 8601), event timestamps for signature and lock (minute precision), and signer role counts (no names/emails) And it does not display any PII (names, emails, phone numbers, street addresses), GPS coordinates, pricing, line items, photos, or claim numbers And a clear legal disclaimer and links to the Privacy Policy and Terms are present And a “View extended details” control is visible that requires a PIN or claim number to proceed
Extended Details Revealed with Project PIN
Given a project has a verification PIN configured When the correct PIN is submitted on the verification page Then extended details are revealed, including full signer names/emails, the full SHA-256 hash, and complete event timestamps/audit entries And access persists only for the current session/tab and the PIN is not stored in the URL, localStorage, or sessionStorage When an incorrect PIN is entered 5 times within 10 minutes Then further PIN attempts are blocked for 15 minutes and a generic error/429 is returned And all PIN submissions are logged as success/fail without storing the PIN value And error messages are generic and do not reveal whether a given PIN format or length is correct
Claim-Number Allowlist for Extended Details
Given a project has a claim-number allowlist configured When a user submits a claim number that exists in the allowlist (case- and format-insensitive) Then extended details are revealed When a user submits a claim number not in the allowlist Then access is denied with a generic message and no details are revealed And claim-number submissions are rate-limited to 10 attempts per minute per IP with 429 returned when exceeded And submitted claim numbers are neither echoed back in responses nor persisted client-side or in URLs If both a PIN and an allowlist are configured Then satisfying either the PIN or an allowlisted claim number grants access to extended details
Security Headers and No Third-Party Trackers/Cookies
Given a GET request to the verification page endpoint Then the response includes the headers: Strict-Transport-Security: max-age=31536000; includeSubDomains; preload; Content-Security-Policy that disallows 'unsafe-inline' and 'unsafe-eval', restricts default-src to 'self', and blocks third-party origins; Referrer-Policy: strict-origin-when-cross-origin; X-Content-Type-Options: nosniff; Permissions-Policy disabling camera, microphone, geolocation; and frame-ancestors 'none' (or X-Frame-Options: DENY) And the verification page makes no network requests to third-party domains and contains no tracking pixels or analytics beacons And no third-party cookies are set; if any cookie is strictly necessary, it is first-party, Secure, HttpOnly, SameSite=Strict, and not used for tracking
Rate Limiting and Basic Bot Mitigation on Verification Endpoint
Given repeated requests to the verification page from a single IP When the request volume exceeds 60 requests per minute Then subsequent requests receive HTTP 429 with a Retry-After header Given credential submission (PIN/claim) attempts from a single IP or for a single project When attempts exceed 10 per minute per IP or 20 per 10 minutes per project Then additional attempts are temporarily blocked and receive HTTP 429 And lightweight, first-party bot mitigation (e.g., proof-of-work or unobtrusive JavaScript challenge) is applied only to suspicious traffic without third-party services And the P95 TTFB for legitimate traffic remains under 500 ms under nominal load with mitigations enabled And all rate-limit and mitigation events are logged with a request-id and without storing PII
Configurable Redaction of PII
Given project-level redaction settings are configured When the public verification page renders Then the selected fields (e.g., homeowner name, email, phone, street address, claim number) are masked using a consistent pattern and are not present unmasked in the DOM, HTML source, or API responses And the same redactions apply to generated PDFs unless an authorized, extended-details view is explicitly requested When extended details access is granted via PIN or allowlisted claim number Then unmasked values appear only in that view and are not injected into the public page source
Server Logging, GDPR/CCPA Compliance, and Retention/Deletion
Given any interaction with the verification endpoints Then server logs capture event type, timestamp (UTC), anonymized IP (e.g., /24 for IPv4 or /48 for IPv6), user-agent, project ID, outcome (success/failure), and a request-id/correlation-id And logs do not capture full PINs or full claim numbers (store only salted hashes if needed for forensics) And logs are retained per a configurable policy (default 90 days; options include 30/90/365) and automatically purged when expired When a GDPR/CCPA deletion request is processed for a project or data subject Then associated personal data is deleted or anonymized within 30 days, and verification logs are purged/anonymized to remove PII while preserving security-relevant metadata And administrators can export project-related data in a machine-readable format upon request within 30 days

Custody Ledger

Embeds an append‑only chain‑of‑custody record showing who captured, uploaded, edited, approved, and shared each report—complete with device IDs, GPS/time, and WebAuthn identity. Export or share as an audit sheet to defend evidence handling and reduce scope disputes.

Requirements

Tamper‑Evident Append‑Only Ledger
"As a contractor defending my estimate, I want a tamper‑evident history of every action on a report so that I can prove the integrity and handling of evidence during scope disputes."
Description

Implement an immutable, append‑only event log for each RoofLens report. Every action that affects evidence or report state (capture, upload, detection, edit, approval, share, export) is recorded as a discrete event containing: event type, actor reference, device ID, UTC timestamp, GPS fix with accuracy, hash of the affected artifact(s), and a cryptographic link to the previous event (hash chaining). Ledger segments are periodically sealed with signed checkpoints to enable efficient verification at scale. All evidence artifacts (images, PDFs, datasets) are content‑addressed (e.g., SHA‑256) to bind their integrity to the ledger. The ledger integrates at the domain layer so all product surfaces (mobile capture, web editor, automation, APIs) emit standardized events. This provides provable non‑repudiation and tamper evidence for dispute defense and compliance.
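
A minimal sketch of the chaining rule, assuming a deterministic canonical serialization (the JSON.stringify call below is a stand-in for a real canonicalization scheme such as JCS); field names and the genesis value are assumptions:

```typescript
import { createHash } from "crypto";

interface LedgerEvent {
  reportId: string;
  eventType: string;
  actorRef: string;
  deviceId: string;
  timestamp: string;        // UTC RFC 3339
  artifactHashes: string[]; // SHA-256 of content-addressed artifacts
  prevHash: string;         // hash of the previous event; "0".repeat(64) for genesis
}

// Canonical serialization must be deterministic (stable key order) so any
// verifier recomputes identical bytes from the same event.
function eventHash(e: LedgerEvent): string {
  return createHash("sha256").update(JSON.stringify(e)).digest("hex");
}

// Append succeeds only when the writer proves it saw the current chain tip,
// which is what surfaces CHAIN_CONFLICT for concurrent writers.
export function appendEvent(chainTip: string, e: LedgerEvent): string {
  if (e.prevHash !== chainTip) throw new Error("CHAIN_CONFLICT");
  return eventHash(e); // becomes the new chain tip
}
```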

Acceptance Criteria
Append-Only Enforcement on Ledger Writes
Given a report ledger with a recorded lastEventHash When a new ledger event is submitted with prevHash equal to the current lastEventHash Then the system appends the event, persists it once, and updates lastEventHash to the new event hash And there is no supported operation to update or delete any prior event And any attempt to modify or delete a prior event is rejected with error code LEDGER_IMMUTABLE and no data change occurs And if the submitted prevHash does not equal the current lastEventHash, the write is rejected with error code CHAIN_CONFLICT and the event is not persisted
Hash Chain Integrity Verification
Given an exported ledger for a report containing events in order When a verifier recomputes SHA-256 over the canonical serialization of each event and validates each prevHash link Then each event.hash equals the recomputed hash and each event.prevHash equals the prior event.hash And any single-byte change to any stored event causes verification to fail and identifies the first invalid event index And verification can be completed offline using only the exported data (no database calls)
Event Schema Completeness and Validation
Given a ledger event is submitted for persistence When server-side validation is executed Then the event includes required fields: reportId, eventId, eventType ∈ {capture, upload, detection, edit, approval, share, export}, actorRef, deviceId, timestamp (UTC RFC3339 with 'Z'), gps (lat ∈ [-90,90], lon ∈ [-180,180], accuracyMeters ≥ 0) or gps.missing with reason ∈ {no_signal, disabled, policy}, artifactHashes[] (each 64-char lowercase SHA-256 hex), prevHash (64-char lowercase SHA-256 hex) And any missing or invalid field causes rejection with error code EVENT_SCHEMA_INVALID and a field-specific message And timestamps are stored in UTC and retain millisecond precision And artifactHashes refer only to artifacts that exist or are being created atomically in the same transaction
WebAuthn Actor Identity Binding
Given a user-initiated action (e.g., edit, approval, share) that produces a ledger event When the user authenticates with WebAuthn and the assertion is verified server-side Then the ledger event records actorRef (userId), webAuthn.credentialId, webAuthn.challengeHash, and webAuthn.verificationResult = true And if WebAuthn verification fails, the action is aborted and no ledger event is written And for automation/system-initiated actions, the event records servicePrincipalId and a detached signature over the canonical event payload; invalid signatures are rejected
Content-Address Binding of Evidence Artifacts
Given an artifact (image, PDF, dataset) is captured, uploaded, or generated When the artifact is stored Then a SHA-256 hash is computed over the artifact bytes and recorded in artifactHashes of the corresponding ledger event And subsequent retrieval and re-hash of the artifact matches the recorded hash; any mismatch flags the artifact as TAMPER_SUSPECT and surfaces in audit exports And artifacts with identical hashes are de-duplicated at storage (single content-addressed object, multiple references) And the ledger event stores MIME type and byte length to support deterministic re-hash validation
Signed Checkpoint Sealing and Verification
Given a ledger segment with one or more events since the prior checkpoint When a checkpoint is created Then the checkpoint contains segmentStartEventId, segmentEndEventId, chainTipHash, rootHash (over the segment), createdAt (UTC), keyId, and signature over these fields using the configured platform signing key And offline verification with the published public key (by keyId) validates the signature and that recomputed rootHash matches the segment contents And altering any event within the sealed segment causes checkpoint verification to fail And public key rotation preserves validation of historical checkpoints via their recorded keyId
Cross-Surface Domain-Layer Event Emission
Given actions are performed via each product surface: mobile capture, web editor, automation pipeline, and public API When an action that changes evidence or report state completes Then a standardized ledger event is emitted by the domain layer with the same schema across surfaces and appears in the ledger within 2 seconds of action completion And if event emission fails, the originating action is rolled back or marked failed; no state change occurs without a corresponding ledger event And no code path exists that mutates report state bypassing the domain-layer ledger emission (validated by integration tests covering each surface)
WebAuthn Identity Binding for Actions
"As an account administrator, I want every sensitive action to be bound to a WebAuthn‑verified user so that our custody records are trustworthy and resistant to impersonation."
Description

Require WebAuthn (FIDO2) verification for privileged actions and bind each ledger event to a verified user identity. Store assertion metadata (credential ID, user handle, origin, challenge, sign counter) with the event to support later validation. Support platform and roaming authenticators across web and mobile, including passkeys. For field capture where continuous prompts are impractical, use short‑lived, WebAuthn‑backed session tokens and device keys to co‑sign events, with re‑verification on sensitive steps (approval, export, share). Map identities to org roles to reflect custody responsibilities. This ensures strong, phishing‑resistant identity attribution for chain‑of‑custody.
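
Full assertion verification (origin, rpId, signature) belongs to a WebAuthn library; the sketch below covers only the two replay checks the criteria call out, single-use challenges and a strictly increasing sign counter, with assumed store shapes:

```typescript
// Replay checks only; signature/origin verification is delegated elsewhere.
interface CredentialRecord {
  credentialId: string;
  signCount: number; // last counter value seen for this credential
}

const pendingChallenges = new Map<string, string>(); // sessionId -> challenge

export function checkReplay(
  sessionId: string,
  presentedChallenge: string,
  cred: CredentialRecord,
  assertionSignCount: number
): void {
  const expected = pendingChallenges.get(sessionId);
  if (!expected || expected !== presentedChallenge) {
    throw new Error("webauthn_verification_failed"); // stale or unknown challenge
  }
  pendingChallenges.delete(sessionId); // single-use: consume before proceeding
  if (assertionSignCount <= cred.signCount) {
    throw new Error("replay_detected"); // counter must strictly increase
  }
  cred.signCount = assertionSignCount; // persist the new counter
}
```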

Acceptance Criteria
Privileged Web Action Requires WebAuthn and Binds Ledger Event
Given a logged-in user with a registered WebAuthn credential and assigned org role When the user initiates a privileged web action (e.g., edit report, change estimate line-items) Then the system prompts for and verifies a WebAuthn assertion against the configured RP ID and origin And on success the action completes and a custody-ledger event is recorded bound to that user identity And the ledger event stores: credentialId (base64url), userHandle, origin, challenge (base64url), signCounter (post-verify value), authenticatorAttachment (platform|roaming), userVerification=required, verificationResult=true, eventId, timestamp And on cancel or verification failure the action is blocked, no success ledger event is created, and the API returns 401/403 with error code "webauthn_verification_failed"
Mobile Passkey Authentication for Privileged Action
Given a user on iOS or Android using platform passkeys or a roaming authenticator When the user initiates a privileged action within the mobile app or PWA Then a WebAuthn passkey prompt is shown and the assertion is verified for the expected RP ID/appID and origin And on success the action completes and a custody-ledger event is recorded with assertion metadata and device information And on unsupported authenticator or misconfigured RP ID/appID the action is blocked with error code "webauthn_origin_mismatch"
Ledger Event Stores WebAuthn Assertion Metadata for Validation
Given any successful privileged action or step-up verification When the custody-ledger event is persisted Then the record includes immutable fields: credentialId, userHandle, origin (or rpId), challenge, signCounter, authenticatorAttachment, userVerification, verificationResult, verificationMethod (direct|session-token), eventId, timestamp And all fields are stored in canonical formats (base64url for binary, ISO-8601 UTC for time) and pass JSON schema validation And attempts to modify these fields after write are rejected with 409 Conflict and no mutation occurs
Field Capture Uses Short-Lived WebAuthn-Backed Session Tokens with Device Co-Sign
Given a field user starts a capture session and completes a WebAuthn verification When the system issues a session token bound to the device key (public key registered for the app instance) Then the token TTL is <= 15 minutes and is audience- and device-bound (tokenId, deviceKeyId), and cannot be used from another device And each capture event within the session is co-signed by the device key and includes tokenId and a server-verified signature And on token expiry or revocation further capture events are rejected until the user re-verifies via WebAuthn And sensitive actions are excluded from token-only auth and require fresh WebAuthn (see re-verification criteria)
Sensitive Steps Require Fresh WebAuthn (Approval, Export, Share)
Given a user attempts Approval, Export PDF, or Share Report When the last successful WebAuthn verification in the session is older than 5 minutes or was performed on a different device Then the user is prompted for WebAuthn and the action proceeds only on successful verification And the resulting ledger event records verificationTime and stepType (approval|export|share) with assertion metadata And on cancel or failure the action is aborted with error code "step_up_required" and no success ledger event is written
Identity-to-Role Mapping Captured and Enforced in Ledger
Given the user performing an action belongs to an organization When the ledger event is created Then the event stores userId, displayName, orgId, and roleAtTime (e.g., Adjuster, Estimator, Approver) And authorization enforces that only permitted roles can perform the action; disallowed attempts return 403 with error code "role_not_authorized" And subsequent changes to the user’s role do not alter historical ledger entries; role changes are logged as separate events
Anti-Replay, Challenge, and Origin Validation for Assertions
Given the server issues a unique, single-use WebAuthn challenge for an action When an assertion is received Then the challenge must match the most recent unconsumed challenge for that session and is marked consumed on success And the signCounter must be strictly greater than the last stored counter for that credentialId; otherwise verification fails with error code "replay_detected" And the origin/rpId must match configured values and userVerification must be present; mismatches fail verification And failed verifications are logged to a security audit log without creating a success custody-ledger event
Trusted Time, GPS, and Device Metadata Capture
"As an insurance adjuster, I want precise time, location, and device details recorded with each photo and edit so that I can validate where and when evidence was captured."
Description

Capture high‑fidelity context for each event: device identifiers (mobile device model/OS, app version, drone UID/airframe serial), UTC timestamp synchronized via NTP with drift detection, and GPS coordinates/altitude with accuracy and fix quality. For images, extract and store EXIF/XMP alongside computed content hashes; for drones, include flight controller time and location when available. Detect and flag inconsistencies (e.g., EXIF time skew, missing GPS, simulated locations). Clearly mark confidence levels when signals are unavailable and prevent silent fallbacks. This augments the ledger with verifiable spatiotemporal provenance to strengthen evidence defensibility.
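
A sketch of the timestamping rule implied by the criteria below: prefer NTP time, record the measured drift, and downgrade confidence rather than silently falling back; the 2-second threshold comes from the criteria, while the shape names are assumptions:

```typescript
interface TimeStamp {
  timestampUtc: string;
  ntpStatus: "ok" | "unavailable";
  clockDriftSeconds: number | null;
  timeConfidence: "High" | "Low";
}

export function stampEvent(deviceUtcMs: number, ntpUtcMs: number | null): TimeStamp {
  if (ntpUtcMs === null) {
    // No silent fallback: device time is used, but confidence is marked Low.
    return {
      timestampUtc: new Date(deviceUtcMs).toISOString(),
      ntpStatus: "unavailable",
      clockDriftSeconds: null,
      timeConfidence: "Low",
    };
  }
  const drift = (deviceUtcMs - ntpUtcMs) / 1000; // deviceUtc - ntpUtc, in seconds
  return {
    timestampUtc: new Date(ntpUtcMs).toISOString(),
    ntpStatus: "ok",
    clockDriftSeconds: drift,
    timeConfidence: Math.abs(drift) <= 2.0 ? "High" : "Low",
  };
}
```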

Acceptance Criteria
Device/App and Drone Identifiers Captured per Event
Given a capture, upload, edit, approve, or share event is recorded from the mobile app When the event is persisted to the custody ledger Then the event record includes deviceModel, osName, osVersion, appVersion and a stable deviceId And no identifier fields are populated with generic placeholders like "unknown"; missing items are explicitly null with reason codes Given an uploaded drone image or telemetry file from a supported drone When the event is persisted Then the event record includes droneUid (UAS ID if present) and airframeSerial And the sourceType is recorded as "drone" versus "mobile" or "web" Given any event where required identifiers are unavailable due to platform limitations When the event is persisted Then the ledger sets deviceMetadataStatus = "Incomplete" with a machine-readable reason (e.g., permissionsDenied, notExposedByOS) And the UI surfaces a "Device Metadata Incomplete" flag on the event And no silent fallback values are injected
NTP-Synchronized UTC Timestamp with Drift Detection
Given NTP is reachable at the moment an event is recorded When the event timestamp is captured Then event.timestampUtc equals the NTP UTC time with millisecond precision And event.clockDriftSeconds is stored as (deviceUtc - ntpUtc) And timeConfidence = "High" if |event.clockDriftSeconds| <= 2.0, else "Low" And a flag clockDriftDetected = true is set when |event.clockDriftSeconds| > 2.0 Given NTP is unavailable for > 5 seconds at the moment of capture When the event timestamp is captured Then event.timestampUtc uses device UTC time And ntpStatus = "Unavailable" and timeConfidence = "Low" And the event timestamp is not mutated later when NTP resumes (no backfill or overwrite) Given the device local timezone changes during use When events are recorded before and after the change Then all event.timestampUtc values remain unaffected (UTC only) And the ledger export shows ntpStatus, clockDriftSeconds, and timeConfidence for each event
GPS Coordinates, Altitude, Accuracy, and Fix Quality Stored
Given the device has an active GNSS/fused location fix at capture time When an event is recorded Then the event includes latitude, longitude, altitudeMeters, horizontalAccuracyMeters, verticalAccuracyMeters, fixQuality (e.g., 2D/3D/RTK), provider, and locationTimestampUtc And locationConfidence = "High" when horizontalAccuracyMeters <= 10 and fixQuality is 3D or better; otherwise "Low" Given horizontalAccuracyMeters > 25 or fixQuality is unknown When the event is recorded Then locationConfidence = "Low" And a flag locationLowAccuracy = true is set Given location services are disabled, permission is denied, or no fix is available within 5 seconds When the event is recorded Then no coordinates are stored (no last-known fallback) And locationConfidence = "None" with reason (e.g., permissionDenied, noFix, servicesDisabled) And the ledger entry shows provider = "none" and locationMissing = true
Image EXIF/XMP Extraction and SHA-256 Content Hashing
Given a supported image (e.g., JPEG/TIFF) is uploaded When the file is ingested Then the system computes and stores sha256 of the original bytes (pre-transformation) And extracts and stores raw EXIF/XMP blobs plus parsed fields (e.g., DateTimeOriginal, GPS, CameraModel) And the ledger entry for the event references sha256 and parsed metadata Given the same exact file is uploaded again When ingestion completes Then the computed sha256 matches the prior value Given the file is modified by a single byte When ingestion completes Then the computed sha256 differs from the original Given EXIF DateTimeOriginal exists When compared to event.timestampUtc Then exifTimeSkewSeconds is recorded And exifTimeSkewDetected = true when |skew| > 30 seconds Given EXIF GPS coordinates exist and event GPS exists When their distance at capture time is computed Then exifGpsMismatchDetected = true when distanceMeters > 20 Given the image lacks EXIF/XMP data When ingestion completes Then metadataStatus = "Missing" with reason = "noEmbeddedMetadata" And no synthesized placeholder values are added
Drone Flight Controller Time/Location Inclusion and Cross-Checks
Given imagery or telemetry from a supported drone (e.g., DJI) is ingested When metadata is parsed Then flightControllerTimeUtc and flightControllerGps (lat, lon, alt, accuracy/fixQuality if available) are stored And aircraftSerial, remoteControllerId (if available), and droneUid are stored Given both flightControllerTimeUtc and event.timestampUtc are available When their difference is computed Then fcTimeSkewSeconds is stored and fcTimeSkewDetected = true when |skew| > 2 seconds Given both flightControllerGps and event GPS exist When their distance is computed Then fcGpsMismatchDetected = true when distanceMeters > 10 Given flight controller data is not present in the source files When ingestion completes Then droneTelemetryStatus = "Unavailable" with a machine-readable reason And no placeholder telemetry values are injected
Simulated/Mock Location and Anomaly Detection
Given the OS reports that mock/emulated location is enabled or the active location provider is not on the allowlist When an event is recorded Then simulatedLocationDetected = true and locationConfidence = "None" And the event cannot be labeled "Trusted Location" in the ledger/UI Given successive location fixes imply speed > 100 m/s without corresponding drone/device capability or telemetry When the event sequence is analyzed Then locationAnomalyDetected = true and locationConfidence is downgraded to "Low" Given any simulated or anomalous location is detected When the ledger is exported Then the export includes simulatedLocationDetected and/or locationAnomalyDetected flags and associated reasons
Ledger Timeline UI with Diffs and Filters
"As a project manager, I want an easy‑to‑scan custody timeline with diffs so that I can quickly audit who changed what and when without digging through raw logs."
Description

Provide an in‑app, per‑report timeline that visualizes the custody ledger as a readable sequence of events with filters by user, device, action type, and time range. Each event entry should link to affected artifacts and render contextual previews (image thumbnails, geometry overlays). For edit events, present before/after diffs of measurements and line items. Surface trust indicators and anomaly flags (e.g., hash mismatch, time gaps, missing GPS). Integrate the timeline into existing report and export views to enable quick audits without leaving the workflow.
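
The filter semantics sketched below assume a logical AND across the selected dimensions, matching the criteria that follow; since all timestamps are ISO 8601 UTC, plain string comparison preserves chronological order:

```typescript
interface TimelineEvent {
  actorRef: string;
  deviceId: string;
  eventType: string;
  timestamp: string; // ISO 8601 UTC, so string comparison orders correctly
}

interface TimelineFilter {
  users?: string[];
  devices?: string[];
  actions?: string[];
  from?: string;
  to?: string;
}

// A dimension left unset matches everything; set dimensions must all match.
export function matchesFilter(e: TimelineEvent, f: TimelineFilter): boolean {
  return (!f.users || f.users.includes(e.actorRef))
    && (!f.devices || f.devices.includes(e.deviceId))
    && (!f.actions || f.actions.includes(e.eventType))
    && (!f.from || e.timestamp >= f.from)
    && (!f.to || e.timestamp <= f.to);
}
```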

Acceptance Criteria
Chronological Timeline with Event Metadata
Given I open a report's Custody Ledger timeline When the timeline loads Then it displays a reverse‑chronological list (newest first) of all ledger events for that report And each event row shows: event timestamp in ISO‑8601 with local timezone plus relative time, actor display name, verified WebAuthn badge if verified, action type, device ID, and GPS coordinates if available And events without GPS display a visible "GPS missing" indicator And event ordering remains stable across refresh and pagination And expanding an event reveals full, read‑only metadata including event ID and cryptographic hash
Filter by User, Device, Action Type, and Time Range
Given the timeline is visible When I set any combination of user(s), device ID(s), action type(s), and an absolute time range Then only events matching all selected filters are displayed And the visible count updates within 1 second for reports with up to 500 events And active filters are shown as removable chips and can be reset with a single Clear All action And the filter state persists when navigating within the report and is encoded in the URL for deep‑linking And removing all filters restores the unfiltered timeline
Artifact Links and Contextual Previews
Given an event references artifacts such as photos, plans, or measurement geometries When I expand the event Then thumbnail previews lazy‑load and render within 1 second of entering the viewport And clicking a photo thumbnail opens a full‑size viewer in context And toggling a geometry reference overlays the geometry on the plan preview with on/off controls And each artifact label links to the corresponding section of the report in the same workspace And missing or inaccessible artifacts show a placeholder and an anomaly badge
Edit Event Diffs for Measurements and Line Items
Given an event records edits to measurements and/or line items When I open the event's diff view Then each changed field displays before and after values with units and precision consistent with project settings And additions show a + indicator, removals a − indicator, and modified values are highlighted And numeric changes display absolute and percent deltas And unchanged fields are collapsed by default with an option to expand all And the diff content is copyable and included verbatim in the audit export
Trust Indicators and Anomaly Flags
Given events include hash, timestamp, GPS, and identity metadata When the timeline renders Then each event shows a trust badge with one of: Verified, Warning, or Error And the following are flagged: hash mismatch -> Error; missing GPS -> Warning; time gap between consecutive events > 10 minutes -> Warning; client timestamp skew > 2 minutes from server -> Warning And selecting "Show anomalies only" filters the timeline to flagged events And clicking a trust badge reveals a tooltip explaining the reason and affected fields
In‑Workflow Integration and Export Audit Sheet
Given I am viewing a report When I open the Custody Timeline panel/tab Then it opens in‑context without leaving the report view and preserves scroll and filter state on close/reopen And when I export the report as PDF Then the export includes an Audit Sheet listing timeline events (respecting applied filters if chosen) with trust badges and edit diffs And when I click "Share Audit" Then a shareable link opens the report with the timeline and current filters applied And when I click "Download Audit" Then CSV and JSON exports of the full or filtered ledger are downloaded
Performance, Scalability, and Empty/Error States
Given a report with a large ledger (up to 5,000 events) When the timeline first loads Then above‑the‑fold content renders within 2 seconds on a standard workstation and the list uses virtualization to keep memory usage under 200 MB And when filters yield no matches Then an empty state with a "Clear filters" action is shown And when a network error occurs Then a non‑blocking error banner with Retry is displayed and previously loaded events remain visible And the timeline is read‑only with no controls that modify ledger data And the UI supports keyboard navigation, provides ARIA labels for trust/anomaly badges, and meets WCAG AA contrast for primary elements
Audit Sheet Export and Verifiable Bundle
"As a contractor submitting a claim, I want to export a verifiable audit sheet so that reviewers can independently confirm custody without requesting extra documentation."
Description

Enable one‑click export of the custody ledger as a shareable audit package in PDF, CSV, and JSON. The PDF should summarize key events, include a scannable QR code/deep link to the live verification page, and watermark with report identifiers. The JSON should include event payloads, hashes, and checkpoint signatures to allow offline verification. Generate short‑lived, signed share links with access controls and expiration. Append the audit sheet as an optional section in bid/report PDFs to reduce disputes and support carriers’ documentation requirements.
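
A sketch of the offline verification path for audit.json, assuming the hash-chaining style shown for the custody ledger above and Node's built-in Ed25519 support; the genesis value and the exact chain formula are assumptions about the bundle format:

```typescript
import { createHash, createPublicKey, verify } from "crypto";

interface BundleEvent { payload: unknown; payloadHash: string; prevHash: string }
interface Checkpoint { rootHash: string; signature: string; algorithm: "Ed25519" }

// Recompute the chain from the first event, then check the checkpoint
// signature against the org's published public key; no network calls needed.
export function verifyBundle(
  events: BundleEvent[],
  checkpoint: Checkpoint,
  publicKeyPem: string
): boolean {
  let prev = "0".repeat(64); // assumed genesis value
  for (const e of events) {
    if (e.prevHash !== prev) return false;
    const h = createHash("sha256").update(JSON.stringify(e.payload)).digest("hex");
    if (h !== e.payloadHash) return false;
    // Assumed chaining rule: next link hashes (prev || payloadHash).
    prev = createHash("sha256").update(prev + h).digest("hex");
  }
  if (prev !== checkpoint.rootHash) return false;
  const key = createPublicKey(publicKeyPem);
  // For Ed25519, Node's crypto.verify takes a null digest algorithm.
  return verify(null, Buffer.from(checkpoint.rootHash), key,
    Buffer.from(checkpoint.signature, "base64"));
}
```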

Acceptance Criteria
One-Click Multi-Format Audit Export
Given a user with "Export Audit" permission viewing a completed report's custody ledger When they click "Export Audit Package" Then a single ZIP is generated within 30 seconds containing audit.pdf, audit.csv, audit.json (and optionally manifest.json), with filenames prefixed by <reportId>_<UTC-ISO8601-timestamp> And the ZIP size and contents match the ledger scope selected (full report) And an export activity log entry is recorded with userId, deviceId, timestamp, and package checksums
PDF Audit Sheet: Summary, Watermark, QR/Deep Link
Given an export package is generated When opening audit.pdf Then the first page displays reportId, claimNumber (if provided), property address, customer name, and total event count And each page has a repeating diagonal watermark with the reportId and "RoofLens Audit" And a QR code is present on page 1 that encodes an HTTPS deep link to the live verification page with a signed token And the event table includes columns: seq, timestamp (ISO 8601 UTC), eventType, actorName, webAuthnCredId(last4), deviceId(last6), gpsLat, gpsLon, gpsAccuracyM, eventHash(short), checkpoint(boolean) And the PDF embeds the SHA-256 of audit.json in its metadata field AuditJsonSha256
JSON Verifiable Bundle: Hash Chain and Signatures
Given an export package is generated When inspecting audit.json Then it contains an ordered events array where each event has: payload, payloadHash(sha256), timestamp(ISO 8601 UTC), actorUserId, webAuthnCredId, deviceId, gps{lat,lon,accuracyM}, prevHash(sha256) And a top-level checkpoints array exists with objects {index, rootHash, signature, algorithm} And algorithm is Ed25519 and signatures verify successfully against the organization's published public key And recomputing the hash chain from the first event yields the final rootHash referenced by the last checkpoint And all verification steps succeed without network access (offline verification)
Signed Share Links: Access Controls and Expiration
Given a user selects "Create Share Link" for an audit package When they choose access mode = Token-only and TTL = 7 days Then the system returns an HTTPS URL with an unguessable, signed token (>= 32 bytes entropy) that grants read-only access to audit.pdf, audit.csv, and audit.json And accesses are logged with timestamp, ipHash, and userAgent And requests after expiration return HTTP 410 Gone and no file content Given access mode = Org-only When a non-authenticated requester opens the link Then they are prompted to authenticate as a member of the owning organization before any file is served Given the link is revoked by the creator or an admin When the link is accessed within 60 seconds of revocation Then the response is HTTP 410 Gone and the attempt is logged
QR/Deep Link Verification Page Behavior
Given a printed audit.pdf with QR is scanned on a mobile device When the QR deep link is opened Then the verification page loads over HTTPS and displays reportId, package SHA-256, first/last event timestamps, and status (Active/Expired/Revoked) And if the token is expired or revoked, downloads are disabled and the page states the reason without leaking whether a valid package exists And if the token is tampered, the response is HTTP 403 with no package metadata And if Active, selecting "Verify Package" recomputes server-side checksums of stored artifacts and reports Match/No Match
Append Audit Sheet to Bid/Report PDFs
Given an organization enables the "Append Audit Sheet" setting When a user generates a bid/report PDF for a report with a custody ledger Then the produced PDF includes the audit sheet as a final section starting on a new page, preserving original report pages unmodified And page numbering and table of contents (if present) reflect the added pages And the embedded audit sheet retains its watermark and QR code at printable resolution (>= 300 DPI) And when the setting is off, no audit sheet pages are appended
CSV Export: Structure and Fidelity
Given an export package is generated When opening audit.csv Then the file is UTF-8 encoded with LF line endings and a single header row: seq,timestamp,eventType,actorName,webAuthnCredId_last4,deviceId_last6,gpsLat,gpsLon,gpsAccuracyM,eventHash_short,checkpoint And the number of data rows equals the number of ledger events And timestamps are ISO 8601 UTC; GPS coordinates have 6 decimal places; empty values are blank And fields containing commas, quotes, or newlines are RFC 4180 compliant (properly quoted) And recalculating SHA-256 over audit.csv matches the checksum in manifest (if present)
Role‑Based Access, Redaction, and Privacy Controls
"As a compliance officer, I want to control and audit who can see detailed custody data and redact sensitive fields so that we meet privacy obligations while preserving evidentiary value."
Description

Introduce fine‑grained permissions governing who can view, export, or share custody details. Support field‑level redaction (e.g., hide exact GPS or device serials) based on recipient role and jurisdiction, while retaining full fidelity internally. Record access events to the ledger to maintain an audit of who viewed or shared custody data. Provide organization‑level policies for retention periods and PII handling to align with GDPR/CCPA and carrier guidelines.
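
Two representative redaction transforms from the criteria below, coordinate coarsening and salted serial hashing; the org-level salt and the 16-hex-character token length are illustrative choices, not specified behavior:

```typescript
import { createHash } from "crypto";

// Coarsen GPS for external recipients (3 decimals ~= 100 m resolution).
export function redactGps(
  lat: number, lon: number, decimals = 3
): { lat: number; lon: number } {
  const f = 10 ** decimals;
  return { lat: Math.round(lat * f) / f, lon: Math.round(lon * f) / f };
}

// Salted hash keeps tokens stable within an org without exposing the serial.
export function redactSerial(serial: string, orgSalt: string): string {
  return createHash("sha256").update(orgSalt + serial).digest("hex").slice(0, 16);
}
```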

Acceptance Criteria
Viewer Role Export and Share Restrictions
Given a user with the Viewer role has access to a report When they open the Custody Ledger UI Then the Export and Share controls are not visible Given a user with the Viewer role has the report ID When they call POST /reports/{id}/share Then the API responds 403 Forbidden and no share link is created Given a user with the Viewer role has the report ID When they call GET /reports/{id}/custody/export Then the API responds 403 Forbidden and no export file is generated Given a Viewer follows a deep link to an existing share without explicit grant When the link is opened Then the system responds 403 Forbidden and logs a denied access attempt
Jurisdiction-Driven GPS and Device Redaction for External Recipients
Given an Adjuster in Org A shares a report to a recipient with role Carrier Reviewer in EU jurisdiction When generating the share preview Then GPS coordinates are rounded to 3 decimal places and device serials are replaced with SHA-256 hashed tokens Given the EU Carrier Reviewer opens the shared report When viewing custody details via UI or API Then exact GPS (>=5 decimal places), raw device serials, and uploader IP do not appear anywhere in responses Given the recipient downloads the custody audit sheet When the file is inspected Then the Redaction Manifest lists fields redacted due to EU policy and recipient role with the applied policy version ID
Internal Full Fidelity vs External Redacted Views
Given an Org Admin is authenticated within Org A When they open custody details for a report Then exact GPS to 6 decimals, full device serials, uploader IP, and WebAuthn credential ID are visible Given the same report is accessed via an external share link without an Org A session When custody details are viewed Then only redacted fields per the share policy are shown and no unredacted data appears in UI or network responses Given CDN caching is enabled When a redaction policy changes Then external responses reflect updated redactions within 5 minutes and never serve unredacted content to external recipients
Access and Share Events Logged to Custody Ledger
Given any user or external recipient views custody details When the page or API response is delivered Then an append-only ledger entry is written within 2 seconds including timestamp (UTC ISO8601), actor identity (user ID or share token), WebAuthn credential ID (if applicable), IP, action (view/export/share), redaction profile ID, and result (success/denied) Given an admin attempts to modify or delete a ledger entry When using UI or API Then the system denies the action and writes a tamper-attempt audit entry while leaving the ledger unchanged Given a share link is created or revoked When the action completes Then a ledger entry records issuer, recipient role, jurisdiction, expiry, and policy snapshot ID
Organization Retention and PII Handling Enforcement
Given an organization policy sets retention_period_days=365 and pii_policy=GDPR When a share link older than 365 days is accessed Then the system returns 410 Gone and logs an access-denied event Given a data subject deletion request is approved When the GDPR erasure job runs Then PII fields (names, emails, phone, exact GPS, device serials, IPs) are anonymized in external artifacts within 30 days and the action is logged with an evidence ID Given a legal hold is applied to a report When retention would otherwise purge PII Then the system defers purge, marks items on-hold, and requires admin justification for any access, logging each access with a reason code
Export Includes Redaction Manifest and Excludes PII
Given an external recipient downloads custody audit files (PDF and CSV) When the files are inspected Then no redacted fields are present and the Redaction Manifest enumerates each redacted field, redaction method, reason (role, jurisdiction, policy), and policy version ID Given an export is generated When integrity metadata is computed Then the export includes a SHA-256 hash and a RoofLens signature; the hash is logged to the ledger and matches the downloaded file Given an internal user exports a full-fidelity audit When attempting to share it externally Then the system enforces recipient-specific redaction at share time by regenerating the export; direct forwarding of an internal unredacted export to external recipients is blocked with 403 unless explicitly allowed by policy
Automated Integrity Verification and Alerts
"As an operations lead, I want automated checks that validate the entire custody chain and alert me to anomalies so that issues are caught before a dispute arises."
Description

Run scheduled and on‑demand verification that recomputes artifact hashes, validates hash‑chain continuity, and checks WebAuthn assertions and checkpoint signatures. Display verification status and trust badges in the UI and include results in exports. Trigger alerts to admins when anomalies are detected (missing events, hash mismatch, signature failure, clock skew) with guided remediation steps. Provide an API endpoint to retrieve verification reports for external systems.
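
A sketch of the verification-report contract an external consumer might code against, using the field and anomaly-code names from the criteria below; the response type and fetch wrapper are assumptions:

```typescript
interface VerificationReport {
  runId: string;
  runType: "scheduled" | "on-demand";
  startedAt: string;   // ISO 8601 UTC
  completedAt: string;
  result: "Pass" | "Warn" | "Fail";
  checks: { name: string; result: string }[];
  anomalies: {
    code: "HASH_MISMATCH" | "CHAIN_BROKEN" | "WEBAUTHN_INVALID"
        | "CHECKPOINT_MISSING_OR_INVALID";
    eventId?: string;
  }[];
}

// Hypothetical client helper for the endpoint named in the criteria below.
async function fetchLatestVerification(
  reportId: string, token: string
): Promise<VerificationReport> {
  const res = await fetch(`/api/v1/reports/${reportId}/verification?run=latest`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`verification fetch failed: ${res.status}`);
  return res.json() as Promise<VerificationReport>;
}
```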

Acceptance Criteria
Nightly Scheduled Integrity Verification
Given it is 02:00 UTC and there exist reports updated in the last 24 hours When the scheduler triggers the nightly verification run Then a verification job is enqueued once per eligible report within 5 minutes And each job is tagged runType=scheduled and de-duplicated per report/run window And a verification record is persisted with fields: reportId, runId, runType, startedAt, completedAt, result, checks And the overall nightly run completes without orphaned or stuck jobs (0 jobs in running state after 60 minutes)
On-Demand Verification from Report View
Given a user with role Admin or Editor is viewing a report When they click Verify Now Then the system starts a verification job within 10 seconds and displays in-progress status And the UI updates to show Pass/Warning/Fail with timestamp upon completion And a verification record is persisted with runType=on-demand linked to the initiating user And unauthorized users (Viewer) cannot trigger verification (action hidden or returns 403)
Hash-Chain Continuity and Artifact Hash Recompute
Given a report with custody ledger and stored artifacts When verification runs Then the SHA-256 of each stored artifact matches the recorded artifactHash And each event's previousHash equals the hash of the preceding event with no missing sequence And event timestamps are strictly non-decreasing And any mismatch or missing link marks the run result=Fail with codes HASH_MISMATCH or CHAIN_BROKEN and identifies affected eventId or artifact
WebAuthn Assertion and Checkpoint Signature Validation
Given events signed via WebAuthn with stored public keys and metadata When verification runs Then each WebAuthn assertion verifies origin, rpId, challenge, signature, and non-decreasing signCount And each checkpoint signature validates against the configured ledger signing key And any invalid assertion sets result=Fail with code WEBAUTHN_INVALID and lists eventIds And any missing or invalid checkpoint sets result=Warn with code CHECKPOINT_MISSING_OR_INVALID
Anomaly Detection Alerts with Guided Remediation
Given a verification run detects anomalies (missing events, hash mismatch, signature failure, clock skew > 5 minutes) When the result is Fail or Warn Then email and in-app alerts are sent to org admins within 2 minutes including reportId, runId, anomaly codes, and firstSeenAt And the alert includes a link to a remediation guide specific to the anomaly type And an alert record is created with status=Open and can be acknowledged and resolved And repeated anomalies within 24 hours are deduplicated into a single thread per report
Verification Status Badges in UI and Exported Reports
Given a report has the latest verification result When the report is listed or viewed Then a status badge (Verified, Warning, Failed, Unknown) is displayed with lastVerifiedAt and clickable details And when exporting PDF and JSON, the verification summary and detailed findings are included in the export And the PDF shows a trust badge and verification page; the JSON includes checks[], anomalies[], and result fields And exports reflect the latest completed verification at export time
Verification Report Retrieval API
Given an authenticated API client with read permission When it requests GET /api/v1/reports/{reportId}/verification?run=latest Then the API returns 200 with JSON containing runId, runType, startedAt, completedAt, result, checks[], anomalies[], and signatureInfo And requesting a specific run via runId returns the matching record or 404 if not found And unauthorized requests return 401/403; requests for non-existent reports return 404 And response time is <= 500 ms P95 and responses are cacheable via ETag/Last-Modified

Revision Diff

Generates a clear, color‑coded change summary between versions—line items added/removed, quantity shifts, notes, and attachments. Includes author, timestamp, and reason notes so supplements read like a concise commit history, making negotiations faster and less contentious.

Requirements

Immutable Revision Snapshots
"As an estimator, I want each save to create an immutable version so that I can reliably compare changes later and defend my bid history."
Description

Capture a complete, read‑only snapshot of each estimate revision at save time, including line items, quantities, unit prices, taxes/fees, roof measurements, damage annotations, notes, attachments, and layout metadata. Stamp every snapshot with author ID, timestamp, and an optional reason note, and assign a monotonically increasing version number. Store snapshots in a way that guarantees reproducible diffs and prevents post‑hoc edits, integrating with existing RoofLens estimate entities and permissions so prior versions can be referenced, compared, and exported without data drift.
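
A sketch of snapshot construction, assuming the estimate state has already been canonically serialized; atomic version assignment (e.g., a database sequence or a transactional UPDATE ... RETURNING) is noted in comments rather than implemented:

```typescript
import { createHash } from "crypto";

interface Snapshot {
  estimateId: string;
  version: number;     // previous_version + 1, assigned atomically server-side
  authorId: string;
  savedAt: string;     // ISO 8601 UTC
  reason?: string;     // optional reason note, <= 500 characters
  contentHash: string; // SHA-256 over the canonical state bytes
}

export function buildSnapshot(
  estimateId: string,
  version: number,     // caller obtains this from an atomic counter
  authorId: string,
  canonicalStateJson: string,
  reason?: string
): Snapshot {
  if (reason && reason.length > 500) {
    throw new Error("reason note exceeds 500 characters"); // surfaces as 422
  }
  return {
    estimateId, version, authorId,
    savedAt: new Date().toISOString(),
    reason,
    contentHash: createHash("sha256").update(canonicalStateJson).digest("hex"),
  };
}
```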

Acceptance Criteria
Snapshot Created on Save
Given an existing estimate with pending edits and an optional reason note When the user clicks Save Revision Then a new snapshot is created that includes exactly: all line items with quantities and unit prices, taxes/fees, roof measurements, damage annotations, notes, attachments, and layout metadata And the snapshot is stamped with the saver’s authorId and an ISO 8601 UTC timestamp within 5 seconds of server time And the snapshot’s version number for that estimate is exactly previous_version + 1 (starting at 1) And the API response returns snapshotId and version And the snapshot can be retrieved by snapshotId and version and its fields match the saved working state at the time of save
Immutability Enforcement
Given a stored snapshot When a client attempts PATCH, PUT, or DELETE on the snapshot resource via API Then the request is rejected with 405 or 403 and no data is modified And any UI controls for editing snapshot fields are disabled and do not persist changes And a server-side contentHash recorded at create time equals the hash recomputed on read; any mismatch raises an alert and the snapshot is not mutated And an audit log entry is recorded for blocked mutation attempts including actor, timestamp, and action
Deterministic Diff Between Snapshots
Given two snapshots A and B of the same estimate When a diff is requested repeatedly across different times and after catalog/price list updates Then the diff JSON payload is identical across runs with a stable sort order and identical checksum And the diff enumerates added/removed/changed line items, quantity deltas, note changes, attachment changes, and layout changes; unchanged items are not flagged And the diff header displays A and B authorIds, timestamps (ISO 8601 UTC), versions, and reason notes
Role-Based Access to Prior Versions
Given an authorized user (Estimator, Admin, or Adjuster) with read access to the estimate When they request view, compare, or export of any snapshot Then access is granted and actions succeed with 200 responses Given an unauthorized user or a user from another tenant When they request any snapshot or diff by id or version Then a 403 (or 404 for cross-tenant isolation) is returned and no snapshot metadata (including attachment filenames or counts) is leaked And all snapshot queries are scoped by tenant/account and estimate id
Concurrent Saves Produce Monotonic Versions
Given two different users save revisions of the same estimate within 100 ms When both save operations complete Then exactly two snapshots exist with distinct version numbers N and N+1 for that estimate And no duplicate or skipped version numbers are created for that estimate And the version assignment is atomic; retries due to transient failures do not create extra versions And timestamps reflect commit order (earlier commit <= later commit)
Reason Note Validation and Persistence
Given a reason note up to 500 characters (Unicode allowed, newlines allowed) When provided at save Then it persists exactly as entered and is retrievable and rendered safely Given a reason note longer than 500 characters When save is attempted Then validation fails with a 422 error and a clear message; no snapshot is created Given HTML/JS content in the reason note When saved Then content is stored but rendered escaped to prevent XSS Given no reason note is provided When saving Then the reason field is null/empty and omitted from displays
Stable Export of Prior Versions
Given any snapshot When exporting to JSON and PDF multiple times Then each export contains exactly the snapshot’s state (line items with quantities and unit prices, taxes/fees, measurements, annotations, notes, attachments metadata/thumbnails, layout metadata, authorId, timestamp, version, reason note) And repeated exports produce identical byte-for-byte outputs (same SHA-256 hash) per format And changes to the current estimate, catalogs, or measurements after snapshot creation do not alter the export outputs
Line‑item Matching & Change Detection
"As an insurance adjuster, I want a precise summary of what changed between revisions so that I can quickly validate supplements and understand cost impacts."
Description

Implement a diff engine that reliably maps line items between revisions even when items are reordered, renamed, or regrouped. Use stable IDs where available and fallback fuzzy matching on SKU/code, description, unit, category, and price to detect adds, removals, quantity changes, price changes, note edits, and tax/fee impacts. Compute per‑line deltas and totals impact, and identify moved items without double‑counting. Include change detection for section/group headers (e.g., tear‑off, underlayment, flashing) and for measurement‑driven quantities so users can trace quantity shifts back to updated measurements.
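
As a sketch of the fallback matching, the composite score below combines description similarity, unit, category, and price proximity. The 0.85 threshold appears in the acceptance criteria that follow; the component weights and the use of difflib are illustrative assumptions, not the shipped algorithm.

    # Fuzzy line-item matching sketch under the assumptions above.
    from difflib import SequenceMatcher

    def match_score(a: dict, b: dict) -> float:
        if a.get("sku") and a.get("sku") == b.get("sku"):
            return 1.0  # exact SKU/code match wins outright
        desc = SequenceMatcher(None, a["description"].lower(),
                               b["description"].lower()).ratio()
        unit = 1.0 if a["unit"] == b["unit"] else 0.0
        category = 1.0 if a["category"] == b["category"] else 0.0
        hi = max(a["price"], b["price"]) or 1.0
        price = 1.0 - min(abs(a["price"] - b["price"]) / hi, 1.0)
        # Illustrative weights; only the threshold is specified.
        return 0.5 * desc + 0.15 * unit + 0.15 * category + 0.2 * price

    def is_same_item(a: dict, b: dict, threshold: float = 0.85) -> bool:
        return match_score(a, b) >= threshold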

Acceptance Criteria
Reordered Items Matched by Stable IDs
Given two estimate revisions containing the same set of line items with identical stable IDs but different display positions within the same group When the diff engine runs Then each line item is matched exactly once by stable ID And no add or remove is reported for those items And no "moved" indicator is shown when the group is unchanged And per-line deltas for unchanged fields are 0 And the totals impact delta is 0
Renamed Items Matched by Fallback Fuzzy Logic
Given two revisions where a line item lacks a stable ID in one or both revisions And the item shares SKU/code (exact match) or, if missing, achieves a composite match score >= 0.85 using description similarity, unit, category, and price proximity When the diff engine runs Then the items are matched as the same line item And a "renamed" change is reported if the description text differs And add/remove is not reported for the matched items And per-line deltas reflect any quantity or price changes And totals impact reflects the sum of per-line deltas And if no candidate reaches the threshold, the item is reported as added/removed respectively
Moved Items Identified Without Double-Counting
Given an item present in both revisions matched by stable ID or fuzzy logic And its parent section/group changes from A to B When the diff engine runs Then the item is reported once with "moved from A to B" And it is not counted in adds or removals And per-line deltas are 0 if only the group changed And section totals reflect the reallocation while grand total remains unchanged
Section Header Change Detection
Given two revisions with section/group headers that may be added, removed, renamed, or reordered When the diff engine runs Then header-level changes are reported as added, removed, renamed, or moved And moving a header does not produce add/remove events for unchanged child items And renaming a header does not affect child matching And section subtotal deltas are computed and reported for changed sections
Measurement-Driven Quantity Traceability
Given a line item whose quantity is derived from a named measurement with a reference ID and formula And the measurement's value changes between revisions When the diff engine runs Then the line-item quantity delta is attributed to that measurement change And the diff shows previous and new measurement values and the measurement ID/name And all line items linked to the changed measurement display the trace And if a quantity changes without a linked measurement change, the diff labels it as a manual quantity edit
Price, Tax, and Fee Change Detection and Totals Impact
Given per-unit price, tax rate, or fee values differ between revisions When the diff engine runs Then per-line deltas are computed for unit price, extended price, tax, fees, and line total And document-level deltas are computed for subtotal, taxes, fees, and grand total And the grand total delta equals the sum of all line total deltas within ±0.01 due to rounding And changes to tax or fees are reported even if quantity is unchanged And currency formatting and rounding are consistent across line and total deltas
Line-Item Notes and Attachments Change Detection
Given two revisions where a line item's note text and/or attachments differ When the diff engine runs Then note edits are reported as "note updated" with before/after presence indicated And attachment changes are reported as added or removed by filename (or hash) at the line-item level And no financial deltas are reported solely due to note or attachment edits And the matched status of the line item is preserved
Color‑Coded Diff Viewer
"As a sales rep, I want a clear, color‑coded view of changes so that I can scan and explain updates to a homeowner or adjuster in seconds."
Description

Provide an interactive UI that presents diffs with intuitive color semantics (added = green, removed = red, modified = amber) and icons, with side‑by‑side or inline views. Support filters (adds/removes/quantity/price/notes/attachments), grouping by trade or section, search, and expand/collapse for long estimates. Show tooltips with previous → new values and net deltas, and a legend explaining colors. Ensure responsive performance (initial render under 2 seconds for typical estimates), accessibility (keyboard navigation and contrast), and seamless navigation from the estimate screen within RoofLens.

Acceptance Criteria
Side-by-Side and Inline Diff Views with Color Semantics
Given two selected revisions of an estimate, when the user clicks View Diff from the estimate screen, then the Diff Viewer opens in the same tab focused on those revisions. Given the Diff Viewer is open, when the user lands on the page, then the side-by-side view is active by default and can be toggled to inline without a page reload. Given changes are rendered, when items are displayed, then added items are marked green, removed items red, and modified items amber, each with distinct state icons, consistently in both views. Given a user opens the legend, when the legend is toggled, then it clearly explains the color semantics and icons and can be dismissed via mouse or keyboard. Given a mixed set of changes, when labels and colors are applied, then no change is mislabeled or missing a state icon.
Filter by Change Type and Search
Given filter controls are visible, when the user toggles Adds, Removes, Quantity, Price, Notes, or Attachments, then only matching diffs remain visible and the change count updates accordingly. Given filters are set, when the user switches between side-by-side and inline, then the filter state persists. Given no items match the current filters, when the list updates, then a zero‑state message is shown with a clear reset option. Given the search box is used, when the user enters a query, then results are filtered case‑insensitively across line item description, SKU/code, note text, and attachment filenames with matches highlighted. Given both filters and search are active, when results are shown, then only items satisfying all selected filters and the search query are displayed. Given a search query has multiple matches, when the user presses Enter or uses next/previous controls, then focus moves to the next/previous match and the match index (e.g., 3/12) updates.
Group by Trade or Section with Expand/Collapse
Given the Group By control is set to Trade or Section, when diffs are rendered, then items appear under group headers with group names and per‑group change counts. Given group headers are visible, when the user clicks a header or uses keyboard to toggle, then that group expands/collapses and aria-expanded reflects the state. Given many groups exist, when the user clicks Expand All or Collapse All, then all groups expand/collapse accordingly. Given the user switches between side-by-side and inline views, when returning, then previously expanded/collapsed groups retain their state. Given long lists are scrolled, when grouping is enabled, then headers remain sticky at the top of the viewport within their group context.
Tooltips and Delta Calculations for Modified Items
Given a modified quantity or price line item, when the user hovers or focuses the info affordance, then a tooltip shows previous → new values with absolute and percentage deltas, formatted with correct units/currency (e.g., $1,250.00 → $1,500.00, +$250.00, +20%). Given a modified note, when the tooltip opens, then it indicates added/removed/edited and shows a concise snippet of the change. Given attachment changes exist, when hovering/focusing the attachment change indicator, then the tooltip lists added/removed filenames and counts. Given rounding rules, when deltas are computed, then currency rounds to 2 decimals and quantities to the estimate’s unit precision without off‑by‑one errors. Given the tooltip is shown, when the user presses Esc or moves focus away, then the tooltip dismisses; it never obscures the focused control or overflows the viewport.
Notes and Attachments Diff Rendering
Given a note was added, removed, or edited, when the diff is displayed, then added text is highlighted green, removed text red, and edits amber with clear inline markers in inline view or aligned side‑by‑side in split view. Given attachments changed, when the diff is displayed, then added attachments appear with a plus indicator and thumbnail/file‑type icon, removed with a minus indicator; unchanged attachments do not appear. Given an attachment is clicked, when supported by the browser, then a preview opens in a modal or new tab; otherwise the file downloads. Given long note content, when the user clicks Show more/Show less, then the content expands/collapses without losing scroll position, and the state persists when switching views. Given filters for Notes or Attachments are applied, when the list updates, then only note/attachment changes remain visible and search continues to match within note text and filenames.
Initial Render Performance and Responsiveness
Given a typical dataset (≈200 line items with mixed changes and up to 20 attachments) on Chrome latest on a mid‑tier laptop (≥8 GB RAM), when the user opens the Diff Viewer, then first interactive render completes within 2 seconds. Given large diffs (virtualized list of ≥1,000 rows), when the user scrolls, then average frame rate is ≥45 FPS and input latency remains <100 ms. Given viewport width ≥1024 px, when the viewer opens, then side‑by‑side view is available; for narrower widths, inline view becomes default and all controls remain accessible via a responsive layout. Given the initial data payload is loaded, when the user toggles view modes, filters, grouping, or search, then updates apply client‑side within 300 ms on the typical dataset and without additional network calls.
Accessibility: Keyboard Navigation and Contrast Compliance
Given the viewer is focused, when the user navigates with Tab/Shift+Tab, then all interactive elements (view toggle, filters, group selectors, search, expand/collapse, legend, attachments) are reachable in a logical order with a visible focus outline. Given a diff row is focused, when the user presses Arrow Up/Down, then focus moves to the previous/next row; Enter toggles expand/collapse where applicable, and shortcuts are announced in the legend. Given tooltips and legends, when triggered via keyboard, then their content is exposed to screen readers with appropriate roles and labels; color semantics are also conveyed via text or icons (not color alone). Given color‑coded elements, when evaluated, then contrast meets WCAG 2.1 AA (≥4.5:1 for normal text, ≥3:1 for large text/icons), and focus indicators meet contrast guidelines. Given collapsible groups and toggles, when states change, then ARIA attributes (e.g., aria-expanded, aria-pressed) update correctly and are announced.
Reason Notes & Audit Trail
"As a project manager, I want required reason notes and a visible audit trail so that negotiations are grounded in documented rationale and accountability."
Description

Require an author, timestamp, and a brief reason note when creating a new revision, with optional per‑line change notes. Display this metadata at the top of the diff and alongside individual changes where applicable. Lock notes after save to maintain integrity, and log all actions (who created, viewed, exported, or shared a diff) to the existing audit log. Expose audit entries in the UI for compliance and include them in exports when requested.
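
A minimal sketch of the audit-entry shape these criteria imply; the action names come from the criteria below, while the dataclass layout and the details field are assumptions.

    # Immutable audit entry sketch; field names mirror the spec.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)  # frozen: entries are immutable once written
    class AuditEntry:
        actor_id: str
        action: str      # e.g., "REVISION_CREATED", "DIFF_VIEWED",
                         # "DIFF_EXPORTED", "DIFF_SHARED"
        object_id: str   # revision/diff ID
        timestamp: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())
        details: dict = field(default_factory=dict)  # e.g., export format, share method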

Acceptance Criteria
Mandatory Reason Note on New Revision
Given a user initiates creating a new revision When they attempt to save without entering a reason note Then the save is blocked and a validation message "Reason note is required" is displayed Given a user enters a reason note shorter than 5 characters or longer than 300 characters When they attempt to save the revision Then the save is blocked and a validation message "Reason must be 5–300 characters" is displayed Given a user enters a valid reason note (5–300 characters) When they save the revision Then the revision is created and the metadata records the author (user ID and display name) and an ISO 8601 timestamp with timezone offset And the reason note is stored with the revision
Per-Line Change Notes Optional and Attributed
Given a user edits or adds a line item within a pending revision When they optionally add a per-line change note up to 300 characters and save the revision Then the note is saved and associated with that specific line change under the revision Given a user leaves the per-line change note empty When they save the revision Then the revision saves successfully and no note is displayed for that line in the diff Given a saved revision contains per-line change notes When viewing the diff Then each noted line displays its note text alongside the change and is attributed to the revision author
Display Metadata in Diff Header and Changes
Given a saved revision exists When a user opens the revision diff view Then the header displays Author (display name), Timestamp (ISO 8601 with timezone offset), and Reason note exactly as saved And for any line changes that include notes, the note text is displayed adjacent to the corresponding change item And no metadata fields are editable from the diff view
Lock Notes and Metadata After Save
Given a revision has been saved When any user (including the author) attempts to edit the reason note or any per-line change note Then the fields are read-only and an affordance indicates notes are locked after save Given a user needs to correct or augment notes When they create a subsequent revision Then the previous revision’s notes remain unchanged and the new revision allows new notes to be entered prior to saving
Audit Logging for Create, View, Export, and Share
Given a user creates a revision When the save succeeds Then an audit log entry is written with actor (user ID), action "REVISION_CREATED", object (revision/diff ID), and timestamp Given a user views a revision diff When the diff is rendered Then an audit log entry is written with action "DIFF_VIEWED" including actor, object ID, and timestamp Given a user exports a revision diff (PDF or other supported format) When the export completes Then an audit log entry is written with action "DIFF_EXPORTED" including actor, object ID, format, and timestamp Given a user shares a revision diff (e.g., generates link or sends email) When the share action completes Then an audit log entry is written with action "DIFF_SHARED" including actor, object ID, method, and timestamp Given any of the above actions occur When the user opens the audit UI Then the corresponding log entry is visible within 2 seconds of the action completing
Audit Log Visible in UI for Compliance
Given a user with permission to view audit logs opens a revision diff When they select the Audit tab/panel Then they see a list of audit entries filtered to that diff/revision, each showing Timestamp (ISO 8601 with timezone), Actor (display name), Action, and Details And entries are sorted by newest first and paginated at 50 per page And the user can filter by Action type and Date range And users without audit permissions do not see the Audit tab/panel
Include Audit Entries in Exports When Requested
Given a user initiates exporting a revision diff When they enable the "Include audit log" option and complete the export Then the exported artifact contains an appended Audit section listing all audit entries for that diff with columns: Timestamp (ISO 8601 with timezone), Actor, Action, Details Given a user exports a revision diff with the "Include audit log" option disabled When the export completes Then the exported artifact contains no audit entries Given audit entries exist for the diff When the export is generated Then the audit entries included match what is shown in the UI for the selected filters (or all entries if no filters were applied)
Attachments & Annotations Diff
"As an adjuster, I want to see how evidence photos and damage markups changed so that I can verify that estimate changes are justified."
Description

Detect and present changes to photos, documents, and drawing annotations between revisions, including added, removed, and updated items. For updated annotations, render before/after overlays of measurement lines and damage polygons with callouts highlighting moved vertices and label edits. Provide thumbnail previews, file metadata, and quick open to full‑size viewers. Tie attachment changes back to affected line items where applicable (e.g., new hail damage photos linked to replacement line items).

Acceptance Criteria
Added and Removed Attachments Diff Display
Given two revisions (A older, B newer) exist for the same job When the user opens Revision Diff → Attachments & Annotations Then the system categorizes attachment changes into Added (in B only) and Removed (in A only) And each item displays a thumbnail or file-type icon, filename, type, size, capture/upload timestamp, and uploader name And category headers display item counts And items are sorted by newest upload time by default with an option to sort by filename A→Z/Z→A And pagination or infinite scroll activates when a category exceeds 50 items And empty states display “No added attachments” or “No removed attachments” as applicable And the change list renders within 2 seconds at P90 for jobs with ≤300 attachments
Updated Attachments Detection and Presentation
Given an attachment exists in both revisions with modified content or metadata (e.g., file content, filename, description, tags) When the diff is generated Then the item is labeled Updated with a summary chip of changed fields (e.g., Content, Filename, Tags) And image content changes are shown with before/after thumbnails side-by-side with a swipe/toggle control And metadata differences are highlighted (additions in green, deletions in red, edits as before/after values) And a content-changed flag is set when checksum or byte size differs And clicking opens the full-size viewer defaulting to the after state And an audit event is recorded with attachmentId, changeType=updated, author, timestamp, and optional reason
Annotation Before/After Overlay with Vertex and Label Changes
Given a drawing annotation (measurement line or damage polygon) was modified between revisions When the user opens its diff view Then before (red) and after (green) overlays render on the same photo/canvas with pixel alignment And moved vertices display callouts showing delta distance and direction; only movements ≥0.1 ft (3 cm) are callout-worthy And label text edits show before/after callouts with deletions in red strikethrough and insertions in green underline And an opacity slider (0–100%) is available and defaults to 60% And a legend explains colors and symbols And measurement units respect the project/unit settings And exporting the diff (PNG/PDF) includes overlays and callouts
Attachment-to-Line Item Linking in Diff
Given a changed attachment or annotation is linked to one or more line items When the diff row is displayed Then linked line items appear as badges (code + short description) with change status chips (New/Updated/Unchanged) And clicking a badge navigates to that line item’s diff view And when no links exist, the row displays “No linked line items” And for new hail-damage photos tagged with a slopeId that drove a Replacement line item, a link is auto-shown via tag-to-line-item mapping (tag=hail AND slopeId matches line item) And the count of linked line items is shown per row
Quick Open to Full-Size Viewer from Thumbnails
Given any attachment or annotation thumbnail in the diff list When the thumbnail is clicked Then a full-size viewer opens in a modal within 2 seconds at P90 on broadband And the viewer supports zoom up to 400%, pan, before/after toggle, and download (if the user has permission) And deep links preserve state (?revA, ?revB, itemId, view=after|overlay) and re-open to the same state within 1 second after modal open And closing the modal returns keyboard focus to the originating list row
Non-Image Attachment Preview and Fallbacks
Given a non-image document (e.g., PDF, DOCX) or unsupported format appears in the diff When the item is displayed Then a file-type icon is shown and a first-page thumbnail is generated for PDFs; icon-only for unsupported types And content-level visual diff shows “Diff not supported” for unsupported formats while metadata diffs still render And Quick Open launches an in-app PDF viewer when available, otherwise triggers a secure download And updated metadata (e.g., title, tags) is highlighted as per the Updated pattern
Exportable Diff Report (PDF & Share Link)
"As a contractor, I want to export and share a polished change report so that stakeholders can review and approve supplements without logging into RoofLens."
Description

Generate a branded, paginated PDF and secure shareable link that capture the diff with color‑coding, metadata header (author, timestamp, reason), legend, and per‑section summaries including net total change. Support company logo/branding, configurable redaction of internal notes, and page anchors for quick navigation. Links should be tokenized with expiration and access tracking, and PDFs should reference immutable snapshot IDs to ensure the report matches the exact compared versions.
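
A sketch of share-link creation under the constraints in the criteria that follow (at least 128 bits of token entropy, 1–60 day expiry with a 14-day default, link bound to the snapshot pair and redaction setting); the URL shape and return structure are assumptions.

    # Tokenized share-link sketch; storage and URL host are hypothetical.
    import secrets
    from datetime import datetime, timedelta, timezone

    def create_share_link(snapshot_a: str, snapshot_b: str,
                          redaction: str, expires_days: int = 14) -> dict:
        if not 1 <= expires_days <= 60:
            raise ValueError("expiration must be between 1 and 60 days")
        token = secrets.token_urlsafe(32)  # 256 bits of entropy, exceeds the 128-bit floor
        return {
            "token": token,
            "snapshots": (snapshot_a, snapshot_b),  # bound to the compared pair
            "redaction": redaction,                 # fixed for the link's lifetime
            "expires_at": (datetime.now(timezone.utc)
                           + timedelta(days=expires_days)).isoformat(),
            "url": f"https://share.rooflens.example.com/d/{token}",  # hypothetical
        }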

Acceptance Criteria
Branded Diff PDF Generation with Metadata and Snapshot Integrity
- Given a project with an uploaded company logo, an identified author, and two selected revisions with valid snapshot IDs, when the user exports the diff as PDF, then a PDF is generated within 60 seconds that includes company logo and name in the header, author name, ISO 8601 UTC generation timestamp, compared revision labels, and both immutable snapshot IDs.
- Then the PDF filename follows the pattern "RoofLens_Diff_<ProjectId>_<RevA>_vs_<RevB>_<YYYYMMDDTHHmmssZ>.pdf".
- Then the reason for change is displayed in the header; if no reason was provided, the text "No reason provided" is shown.
- Then every page footer displays "Page X of Y" pagination.
- Then given identical inputs (snapshots, redaction setting, branding), repeated exports produce byte-identical PDFs (matching SHA-256 hash).
Color-Coded Change Legend and Per-Section Net Totals
- Given the diff includes added, removed, modified items, notes, or attachments, when the PDF is generated, then a legend appears on page 1 listing all diff categories with their visual styles (color/icon), and every diff row uses the corresponding style consistently.
- Then each major section includes a summary box showing counts of Added, Removed, Modified and the section Net Total Change in the project currency rounded to two decimals.
- Then the document includes a grand total Net Change that equals the sum of section totals within ±0.01 of currency.
- Then a grayscale/print-friendly cue (icon or pattern) accompanies each category so changes remain distinguishable when printed in black and white.
Configurable Redaction of Internal Notes
- Given redaction setting = Include, when exporting or sharing, then internal notes render verbatim wherever they appear.
- Given redaction setting = Redact, when exporting or sharing, then internal notes are replaced by "[Redacted]" and note indicators remain without revealing content; totals are unaffected.
- Given redaction setting = Omit, when exporting or sharing, then the notes column/section is hidden and the layout reflows with no empty placeholders; the legend hides "Notes Updated".
- Then the chosen redaction setting is embedded in the share link and cannot be changed by viewers; only the owner can generate a new link with a different setting.
Tokenized Share Link with Expiration, Revocation, and Access Tracking
- Given the user creates a share link for a diff, when the link is generated, then a unique, unguessable token (≥128 bits entropy) is issued over HTTPS and bound to the snapshot pair and redaction setting.
- Then the link has a default expiration of 14 days and is configurable between 1 and 60 days; the exact expiration timestamp (UTC) is stored server-side.
- When a viewer accesses the link before expiration, then the diff view loads; after expiration or revocation, the response is 410 Gone with an explanatory message.
- Then each access is logged with timestamp (UTC), IP address, user agent, and country/region; the owner can view the access log in the app.
- Then tokens cannot be reused across tenants; attempts to access with a mismatched tenant are denied.
PDF Pagination and Page Anchors for Quick Navigation
- Given the diff spans multiple sections and pages, when the PDF is generated, then each page shows "Page X of Y" in the footer.
- Then the PDF contains bookmarks/outlines for each major section and subsection; selecting a bookmark navigates to the exact section start.
- Then a table of contents at the beginning includes clickable links to sections.
- Then the share link URL supports #anchors for sections (e.g., #materials, #labor) and navigating to these anchors scrolls to the correct section.
- Then generation of a report with at least 50 diffed line items completes within 60 seconds and all anchors function.
Consistency Between Share View and PDF
- Given the same snapshot pair and redaction setting, when the share view is loaded and a PDF is exported, then the visible content (items, ordering, quantities, notes visibility), legends, and per-section/net totals match exactly.
- Then invoking "Download PDF" from the share view produces a PDF whose SHA-256 hash matches the PDF exported from the app for the same inputs and exporter version.
- Then updating the project after snapshot creation does not alter existing share views or previously generated PDFs; both continue to display the original snapshot content.
Snapshot Reference Security and Integrity
- Given the export/share is created, when inspecting the document properties and header, then both compared snapshot IDs are present and match server records.
- Then altering client-side parameters (other than the token) cannot change which snapshots are rendered; attempts are rejected with 400 or ignored without changing the result.
- Then if either snapshot ID is missing or invalid, export/share creation is blocked with a clear error message and no partial output is produced.
- Then all endpoints for export/share enforce HTTPS and set Cache-Control: no-store on share responses.
Diff API & Webhooks
"As a systems integrator, I want an API and webhooks for diffs so that I can sync revision changes with our CRM and claims workflows automatically."
Description

Expose REST endpoints to list revisions for an estimate, fetch diff summaries and detailed change sets, and retrieve export artifacts. Support filters (by change type, section, author, time window), pagination, and ETag caching. Provide webhooks that fire when a new revision is created or when a diff is exported/shared, including metadata for downstream CRMs or claims systems. Enforce authentication, rate limits, and audit logging consistent with RoofLens platform standards.
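
On the subscriber side, signature checking might look like the sketch below. The HMAC-SHA256 scheme and X-RoofLens-Signature header come from the criteria that follow; hex encoding of the digest is an assumption. Verifying over the raw request body before any JSON parsing avoids canonicalization mismatches.

    # Webhook signature verification sketch for subscribers.
    import hashlib
    import hmac

    def verify_webhook(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
        expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
        # compare_digest avoids timing side channels
        return hmac.compare_digest(expected, signature_header)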

Acceptance Criteria
List Revisions with Pagination and ETag Caching
Given an authenticated client and an estimate with at least 53 revisions When the client GETs /v1/estimates/{estimateId}/revisions?limit=20 Then the response status is 200 and contains exactly 20 revision records ordered by createdAt descending And the response includes paging.cursors.next when more results exist and paging.cursors.prev when applicable And the response includes an ETag header When the client repeats the request with If-None-Match set to that ETag and no revisions changed Then the response status is 304 and the body is empty
Filter Revisions by Change Type, Section, Author, and Time Window
Given an authenticated client and revisions spanning multiple change types, sections, authors, and dates When the client GETs /v1/estimates/{estimateId}/revisions with changeType=modified&section=Roofing&authorId={userId}&since=2025-01-01T00:00:00Z&until=2025-12-31T23:59:59Z Then only revisions whose diffs include at least one modified item in section 'Roofing' by the specified author within the time window are returned And the response count equals the number of matching revisions When the client supplies an invalid filter value Then the response status is 400 with a machine-readable error code and field-specific messages
Fetch Diff Summary for a Revision
Given an authenticated client and a known revisionId When the client GETs /v1/estimates/{estimateId}/revisions/{revisionId}/diff?view=summary Then the response status is 200 with a JSON body containing totals.added, totals.removed, totals.modified, totals.quantityDelta, affectedSections[], attachments.added, attachments.removed, author, createdAt, and reasonNote And totals and counts accurately reconcile with the corresponding detailed change set for that revision
Fetch Detailed Change Set for a Revision
Given an authenticated client and a known revisionId When the client GETs /v1/estimates/{estimateId}/revisions/{revisionId}/diff?view=detailed Then the response status is 200 with a JSON array of change objects each containing changeId, itemId, section, changeType, before, after, quantityDelta, unit, notesDiff, attachments.added[], attachments.removed[] And no fields are null where empty arrays or zero values are expected, and the list is deterministically ordered by section then itemId
Retrieve Diff Export Artifact with ETag and Content Negotiation
Given an authenticated client and an existing export artifact for a revision When the client GETs /v1/estimates/{estimateId}/revisions/{revisionId}/diff/exports/{artifactId} with Accept: application/pdf Then the response status is 200 with Content-Type application/pdf and a non-empty binary body and an ETag header When the client repeats the request with If-None-Match set to that ETag and the artifact is unchanged Then the response status is 304 and the body is empty When the client requests Accept: application/json Then the response status is 200 with Content-Type application/json and the JSON export payload
Webhook Delivery for Revision and Export/Share Events
Given a subscribed webhook endpoint configured with a shared secret When a new revision is created for an estimate Then RoofLens sends an HTTPS POST within 30 seconds containing eventType "revision.created", estimateId, revisionId, authorId, createdAt, and reasonNote And the request includes an X-RoofLens-Signature HMAC-SHA256 header computed with the shared secret When a diff export is generated or a share link is created for a revision Then RoofLens sends an HTTPS POST containing eventType "diff.exported" or "diff.shared", estimateId, revisionId, authorId, createdAt, exportFormat, artifactId, url, expiresAt (if share), changeTotals.added, changeTotals.removed, changeTotals.modified, and any mapped crmRef/claimRef And a 2xx subscriber response is recorded as delivered; non-2xx triggers retries with exponential backoff for up to 24 hours using idempotency keys to avoid duplicate processing
Authentication, Rate Limits, and Audit Logging Enforcement
Given a request without a valid access token or API key When the client calls any Diff API endpoint Then the response status is 401 with a WWW-Authenticate header and no sensitive data in the body Given a request with insufficient scopes When the client calls any Diff API endpoint Then the response status is 403 Given sustained request volume exceeding the tenant’s rate limit When the client continues calling Diff API endpoints Then responses include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers, and a 429 is returned when the limit is exceeded And every request (success or error) writes an audit log entry capturing actor, resource, action, timestamp, IP, userAgent, outcome, and correlationId

GeoSeal

Watermarks each photo, measurement, and page section with verified GPS, time, altitude, and capture confidence. A roll‑up badge summarizes coverage integrity, proving the imagery is job‑specific and unaltered—disarming claims of stock photos or post‑hoc edits.

Requirements

Metadata Verification Engine
"As a roofing estimator, I want each asset’s GPS, time, and altitude to be verified and confidence-scored so that I can prove the imagery is job-specific and trustworthy in bids and disputes."
Description

Build an ingest pipeline that extracts and validates GPS latitude/longitude, timestamp, altitude, and drone telemetry from EXIF/flight logs for every uploaded image and measurement. Cross-check capture coordinates against the job geofence, normalize timestamps to UTC with NTP-backed clock drift correction, and convert altitude to AGL using local elevation data. Compute a capture confidence score based on signal accuracy/HDOP, flight log concordance, device trust, and proximity to the job polygon. Persist a canonical, versioned metadata manifest per asset, flag anomalies (e.g., out-of-bounds, stale clock, edited EXIF), and expose verification results to downstream rendering and reporting services.
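
A sketch of the confidence computation using the weights given in the acceptance criteria below (signal accuracy/HDOP 40%, flight log concordance 25%, device trust 20%, proximity 15%), including the weight re-normalization the criteria require when components are missing; inputs are assumed to be pre-scaled to 0–1.

    # Capture confidence score sketch; thresholds and weights per the criteria.
    WEIGHTS = {"signal": 0.40, "concordance": 0.25, "device": 0.20, "proximity": 0.15}

    def confidence_score(components: dict) -> tuple[int, str]:
        available = {k: v for k, v in components.items()
                     if k in WEIGHTS and v is not None}
        if not available:
            raise ValueError("no scoring components available")
        total_w = sum(WEIGHTS[k] for k in available)  # re-normalize on missing inputs
        score = round(100 * sum(WEIGHTS[k] * v for k, v in available.items()) / total_w)
        category = "High" if score >= 85 else "Medium" if score >= 60 else "Low"
        return score, category

    # e.g., confidence_score({"signal": 0.9, "concordance": 0.8,
    #                         "device": 1.0, "proximity": 0.95}) -> (90, "High")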

Acceptance Criteria
EXIF and Flight Log Metadata Extraction
- Given an uploaded asset with embedded EXIF and/or an associated flight log, When the ingest pipeline processes it, Then it extracts GPS latitude/longitude (decimal degrees), device timestamp, altitude (with reference if present), heading, speed, camera make/model, and telemetry fields (e.g., HDOP/PDOP) into a structured record.
- Given an image without a flight log, When processed, Then EXIF-only fields are extracted; any missing required fields are set to null and an anomaly code "missing_field" is recorded per field.
- Given a batch of ≥100 valid input assets, When processed under nominal load, Then the extraction success rate is ≥99.5% and P95 extraction time per asset ≤1500 ms.
- Given unsupported file types, When encountered, Then the system records an "unsupported_format" anomaly, skips parsing safely, and continues processing other assets without system crash.
- Given supported formats (JPEG EXIF 2.3+, TIFF, DJI/Skydio flight logs .txt/.csv/.srt), When parsed, Then field mappings conform to the canonical schema and unit normalization rules.
Job Geofence Coordinate Validation
- Given a job polygon geofence and optional buffer distance (meters), When validating a capture coordinate, Then the asset is marked In-Bounds if within polygon±buffer; otherwise Out-of-Bounds.
- Given any validated coordinate, When distance to polygon is computed, Then the nearest-edge distance is stored with precision ≤0.5 m.
- Given an Out-of-Bounds asset, When flagged, Then anomaly code "oob_capture" is recorded with measured distance and coordinate in the manifest.
- Given a missing or invalid geofence, When validation runs, Then anomaly code "geofence_missing" is recorded and the in-bounds status is set to Unknown.
- Given validation execution, When run, Then calculations use WGS84 and consistent geodesic methods; P95 validation latency ≤200 ms per asset.
UTC Normalization with NTP Drift Correction
- Given a device timestamp T_device and NTP time reference, When normalizing, Then the output timestamp is ISO 8601 UTC with millisecond precision and includes an applied drift correction value.
- Given the computed drift magnitude, When |drift| > 5 minutes, Then anomaly code "stale_clock" is recorded with the drift amount; otherwise the drift is recorded without anomaly.
- Given NTP synchronization, When used, Then the NTP source, offset, and last sync time are persisted in the manifest for auditability.
- Given identical inputs and algorithm version, When reprocessed, Then the normalized UTC timestamp is identical (idempotent).
Altitude Conversion to AGL
- Given altitude MSL and capture coordinates, When converting, Then AGL = MSL − ground_elevation from a DEM with resolution ≤10 m, and both AGL and MSL values are persisted with units and references.
- Given only relative altitude or home point elevation, When MSL is unavailable, Then AGL is computed from available references; if insufficient data, anomaly "agl_unavailable" is recorded and AGL is null.
- Given DEM data gaps, When encountered, Then a fallback DEM is used; anomaly "elevation_fallback" is recorded along with an estimated AGL error.
- Given conversion execution, When run, Then P95 conversion latency ≤150 ms per asset and numerical precision is ≤0.1 m.
Capture Confidence Score Computation
- Given validated metadata, When computing the score, Then a 0–100 capture confidence score is produced using weights: signal accuracy/HDOP 40%, flight log concordance 25%, device trust 20%, proximity to job polygon 15%.
- Given the score, When categorized, Then thresholds are: High ≥85, Medium 60–84, Low <60; both numeric score and category are persisted.
- Given missing inputs, When scoring, Then available components are used, weights re-normalized, and anomaly "partial_score" is recorded with a rationale including component contributions and weights.
- Given job-level aggregation, When computed, Then a roll-up badge includes min/avg score and % in-bounds, and is deterministic for the same inputs.
Canonical Versioned Metadata Manifest Persistence
- Given an asset, When persisting, Then the manifest includes asset_id, job_id, processing_version, input hashes (image and flight log SHA-256), extracted fields, normalized UTC and AGL, geofence results, confidence score, anomalies array, processor build info, and timestamps.
- Given a reprocess with changed inputs or algorithm version, When saved, Then processing_version increments by +1; prior versions remain immutable and retrievable by version id.
- Given storage, When retrieving by asset_id (latest) or by version, Then P95 read latency ≤200 ms and durability target matches provider guarantees (e.g., S3's eleven-nines, 99.999999999%, durability).
- Given manifest integrity, When read, Then a cryptographic signature is verified; on mismatch, anomaly "manifest_tampered" is raised and read is blocked to downstream until resolved.
Downstream Verification Results Exposure
- Given a downstream service request, When calling the Verification API for an asset, Then the latest manifest subset (including confidence category, anomalies, UTC time, AGL, in-bounds flag, roll-up badge reference) returns within ≤300 ms P95.
- Given successful processing, When completed, Then an event is published to topic "metadata.verified" with asset_id, job_id, processing_version, score, category, anomalies, and timestamps.
- Given API evolution, When new fields are added, Then the API remains backward-compatible for at least two minor versions and old clients continue to function unchanged.
- Given authorization, When unauthenticated or unauthorized requests occur, Then the API returns 401/403 respectively and logs the attempt with a correlation id.
Contextual Watermark Overlay
"As a project manager, I want clear provenance watermarks on all outputs without obscuring details so that recipients can quickly validate authenticity while still assessing damage."
Description

Render legible, non-obtrusive watermarks onto every photo, measurement, and report section that display lat/long, capture time (UTC/local), altitude (AGL), and capture confidence. Support dynamic placement to avoid key image content, adjustable opacity/size, light/dark backgrounds, and per-page section stamping in PDFs and the web viewer. Ensure DPI-aware, vector-quality overlays for print, consistent typography across platforms, and localization of time/units. Provide configuration presets (strict, standard, minimal) with project-level defaults and per-export overrides while preserving alignment with the verified metadata manifest.
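
The deterministic placement rule is spelled out in the acceptance criteria below; here is a simplified sketch that treats key-content masks as axis-aligned rectangles.

    # Candidate-position selection sketch for watermark placement.
    PRIORITY = ["TR", "BR", "TL", "BL", "RC", "LC", "TC", "BC"]

    def overlap_area(a: tuple, b: tuple) -> float:
        # rectangles as (x0, y0, x1, y1)
        w = min(a[2], b[2]) - max(a[0], b[0])
        h = min(a[3], b[3]) - max(a[1], b[1])
        return max(w, 0) * max(h, 0)

    def place_watermark(candidates: dict, masks: list) -> str:
        # candidates: position name -> watermark rect; masks: key-content rects
        for name in PRIORITY:
            if all(overlap_area(candidates[name], m) == 0 for m in masks):
                return name  # first zero-intersection position wins
        # fall back to the candidate with the smallest total overlap
        return min(PRIORITY,
                   key=lambda n: sum(overlap_area(candidates[n], m) for m in masks))

Because both the candidate set and the priority order are fixed, the chosen position is identical for the same asset and resolution across web and PDF, as the criteria require.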

Acceptance Criteria
Metadata Completeness, Accuracy, and Localization
Given a photo, measurement diagram, or report section with a verified metadata manifest (latitude, longitude, UTC timestamp, altitude AGL, capture confidence) When it is rendered in the web viewer or exported to PDF Then the watermark displays Lat and Long in decimal degrees to 5 places; UTC in ISO 8601 (YYYY-MM-DDThh:mm:ssZ); Local time in the project time zone using locale formatting; AGL with locale unit (ft for en-US, m otherwise) to 1 decimal; Confidence as a percentage to 1 decimal And all displayed values equal the manifest values after conversion and rounding (lat/long 5 dp half-up; AGL 1 dp with 1 m = 3.28084 ft; confidence 0.1% half-up) And standardized labels are used: "Lat", "Long", "UTC", "Local", "AGL", "Conf" And the same formatting is applied consistently across photos, measurement diagrams, and report sections
Dynamic Placement Avoids Key Content
Given an asset image with key-content masks (roof polygons, annotations, legends, scale bars, badges) When the watermark position is computed Then it must not intersect any key-content mask and must maintain ≥ 8 px margin at 1x zoom And the engine evaluates 8 candidate positions (4 corners, 4 edge centers) and selects the first zero-intersection position by priority order: TR, BR, TL, BL, RC, LC, TC, BC And if no zero-intersection position exists, it selects the position with total overlap area ≤ 0.5% of the image area and zero overlap with annotations And the chosen position for a given asset and resolution is deterministic and identical in web and PDF outputs
Legibility and Non-Obtrusiveness Across Backgrounds
Given any background luminance or texture When the watermark is rendered Then the text contrast ratio with its immediate background is ≥ 3:1 And if contrast < 3:1, the renderer auto-switches to inverse theme and adds a 1 px outline to achieve ≥ 3:1 And default opacity is 30% with an adjustable range of 10%–60% in 5% increments; font size range is 8–12 pt in PDF and 10–16 px on web; defaults are 9 pt (PDF) and 12 px (web) And at 100% zoom on web, x-height is ≥ 6 px; in PDF at 300 DPI, cap height is ≥ 1.6 mm
DPI-Aware Vector Quality in PDF and Print
Given a PDF export at any page size and DPI setting (72, 150, 300, 600) When viewed at 400% zoom or printed at 300 DPI Then all watermark text is vector/selectable and shapes are vector with no rasterization artifacts And changing export DPI does not alter typographic point sizes; relative placement varies by ≤ ±1 px/pt compared to specification And in the web viewer, zooming from 50% to 400% preserves sharp edges with no blurring or reflow of watermark content
Per-Section Stamping in PDFs and Web Viewer
Given a multi-page report containing photos, measurement diagrams, and summary sections When exporting to PDF and viewing in the web viewer Then every photo page and measurement page includes a section-specific watermark using that section’s metadata And non-visual pages (e.g., summary) include the roll-up coverage badge in the header or footer per design spec And no page in the export lacks a watermark or badge; page-level badges display page index (e.g., 5/18) And the roll-up coverage badge appears on the cover and on each page header with a consistent verification ID/checksum
Consistent Typography Across Platforms
Given rendering on Chrome, Edge, and Safari on Windows and macOS, and in PDF viewers (Acrobat, Apple Preview) When the watermark is displayed Then the configured font family and weights are used consistently across platforms; PDF embeds the font; no fallback fonts are used And line height, letter spacing, and kerning match design tokens within ±1 px on web and ±0.5 pt in PDF And cross-platform visual regression for the same asset and zoom yields text bounding box differences ≤ 1 px in width/height
Configuration Presets, Defaults, Overrides, and Manifest Alignment
Given the presets Strict, Standard, and Minimal and a project-level default When a project is created Then a default preset is stored and applied to exports unless overridden per-export And per-export overrides can select any preset without changing the project default; the selected preset is recorded in export metadata/audit logs And presets behave as follows:
- Strict: includes Lat, Long, UTC, Local, AGL, Confidence; min size 9 pt (PDF)/12 px (web); opacity 40%; if any required manifest field is missing or any displayed value would differ from the manifest beyond rounding rules, the export is blocked and a discrepancy is logged
- Standard: includes all fields; min size 8 pt (PDF)/12 px (web); opacity 30%; missing fields display "N/A"
- Minimal: displays only the roll-up coverage badge; no field-level strings
And in all presets, overlay values are read from the verified metadata manifest; client-side values that differ are ignored
Cryptographic Seal & QR Verification
"As an insurance adjuster, I want a tamper-evident seal and scannable code on reports so that I can instantly prove no post-hoc edits were made to photos or measurements."
Description

Generate a cryptographic seal for each asset and exported report by hashing content plus canonical metadata (SHA-256) and signing with a managed private key (e.g., Ed25519/JWS). Embed a compact signature payload and a QR code on each page that links to a verification endpoint. On scan, show signature validity, canonical hash, and a diff check against the uploaded artifact to detect edits. Maintain key rotation, a transparency log of issued seals, and secure key storage (HSM/KMS). Ensure seals survive common file conversions and support offline verification via downloadable manifest files.
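
A minimal sketch of the hash-then-sign step with Ed25519. JWS packaging, the canonicalization rules, and HSM/KMS key custody are out of scope here; holding the private key in process memory is for illustration only, and signing the hex digest string is a simplification.

    # Seal sketch: SHA-256 over content + canonical metadata, signed with Ed25519.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def seal(content: bytes, canonical_metadata: bytes,
             key: Ed25519PrivateKey) -> tuple[str, bytes]:
        digest = hashlib.sha256(content + canonical_metadata).hexdigest()
        signature = key.sign(digest.encode("ascii"))
        return digest, signature

    # Verification with the matching public key raises InvalidSignature on tamper.
    key = Ed25519PrivateKey.generate()
    digest, sig = seal(b"photo-bytes", b'{"lat":33.7,"lon":-84.4}', key)
    key.public_key().verify(sig, digest.encode("ascii"))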

Acceptance Criteria
Per-Asset Hashing and Signing
Given an asset and its canonical metadata, When a seal is generated, Then a SHA-256 hash over the canonicalized content+metadata is computed per the published spec and matches the hash recomputed by the reference verifier. Given the computed hash, When signing, Then a compact JWS using EdDSA (Ed25519) is produced whose header includes alg=EdDSA and kid, and whose payload includes asset_id, canonical_hash, created_at, and spec_version. Given the corresponding public key, When verifying the JWS, Then the signature validates and the payload fields are consistent with the asset record. Given identical inputs on repeated runs, When regenerating the seal, Then the canonical_hash is identical and the payload asset_id remains unchanged.
QR Code Embedding and Scan Verification
Given a sealed multi-page PDF report, When exported, Then each page renders a scannable QR code at >= 0.6 inch with error correction level M or higher and a visible signature badge. Given a mobile device camera in typical indoor lighting, When scanning the page QR from screen or print, Then it resolves to the verification URL and loads within 2 seconds showing status (Valid/Invalid), canonical_hash, asset_id, and capture data. Given a QR with tampered URL parameters, When the endpoint is called, Then the request is rejected with HTTP 400 and no asset data is leaked. Given 50 scans per minute for the same asset, When requests are served, Then p95 response time is < 800 ms and the service remains available (HTTP 2xx).
Diff Check Against Uploaded Artifact
Given the original exported artifact for a sealed page, When it is uploaded to the verification endpoint, Then the system recomputes the canonical hash and returns validity=true with zero diffs. Given an artifact altered by pixel edits or changed measurement text, When uploaded, Then validity=false and a machine-readable diff is returned identifying changed regions/text with page coordinates. Given different asset types (raster photo, vector PDF), When diffing, Then the appropriate algorithm is applied and an overlay (PNG for photos, annotated PDF for reports) is available for download. Given an invalid or corrupted upload, When processed, Then the service returns HTTP 422 with an error code and no diff.
Seal Durability Across Conversions
Given a sealed PDF report, When converted to PDF/A-2b using default Ghostscript settings, Then the embedded QR remains scannable and verification via QR returns Valid with the same canonical_hash. Given a sealed PNG image, When recompressed to JPEG at quality 85 and resized within ±10%, Then the on-image QR remains scannable and verification via QR returns Valid. Given carrier metadata stripped during conversion, When verifying via QR, Then validation still succeeds because it does not rely on carrier metadata. Given a converted artifact uploaded for diff, When lossy changes cannot be normalized, Then the result clearly reports validity=false and indicates content alteration due to conversion.
Offline Verification via Manifest
Given a sealed asset, When the user downloads the manifest, Then it is a signed JSON containing asset_id, canonical_hash (hex), alg, kid, created_at, signature, and either an embedded JWK or a URL to fetch the public key set. Given an offline environment, When the manifest and the original artifact are verified with the reference CLI tool, Then signature verification succeeds and the computed hash matches the manifest canonical_hash. Given a manifest with an altered signature, When verified offline, Then the tool exits non-zero and reports verification failure. Given stale local keys, When verifying with a manifest that embeds the public key, Then verification still succeeds without network access.
Key Rotation and Backward Verification
Given an active signing key K1, When rotating to key K2, Then new seals within 1 minute carry kid=K2 and all seals signed with K1 remain verifiable. Given verification of a historical seal, When resolving kid, Then the JWKS and offline key bundle include the referenced key until its archival date, and the transparency log records the rotation event. Given a revoked key, When verifying a seal timestamped before the revocation, Then the service reports Valid with an annotation that the key was revoked after issuance; seals timestamped after the revocation are reported Invalid (signed with revoked key). Given HSM/KMS-backed signing, When observing operations, Then private keys are non-exportable, all sign operations are audit-logged (actor, time, request_id), and any export attempts are blocked and alerted.
Transparency Log and Public Audit
Given issuance of a new seal, When recorded, Then an entry is appended to an append-only transparency log with seal_id, asset_id (or its hash), canonical_hash, kid, and timestamp, and a signed checkpoint is published at least hourly. Given a seal_id, When requesting an inclusion proof, Then the API returns a Merkle inclusion proof that verifies against the latest checkpoint with the public log key. Given historical checkpoints, When an external auditor verifies consistency, Then any removal or reordering of entries is detected via consistency proofs. Given privacy constraints, When publishing entries, Then no PII is exposed; asset identifiers are salted hashes while preserving verifiable inclusion.
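
For the inclusion proofs above, auditor-side verification could look like this simplified sketch; the 0x00/0x01 domain-separation prefixes follow the RFC 6962 convention as an assumption, and the loop assumes a complete tree.

    # Merkle inclusion proof verification sketch (simplified).
    import hashlib

    def leaf_hash(entry: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + entry).digest()

    def verify_inclusion(entry: bytes, index: int, proof: list, root: bytes) -> bool:
        h = leaf_hash(entry)
        for sibling in proof:
            if index % 2 == 1:  # current node is a right child
                h = hashlib.sha256(b"\x01" + sibling + h).digest()
            else:               # current node is a left child
                h = hashlib.sha256(b"\x01" + h + sibling).digest()
            index //= 2
        return h == root  # must match the signed checkpoint's root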
Coverage Integrity Badge
"As a company owner, I want a single badge that summarizes capture quality so that I can quickly judge whether evidence will stand up to customer or carrier scrutiny."
Description

Compute and display a roll-up badge summarizing capture integrity across the job: percent roof area covered, photo count, altitude range, average GSD, duplicate/blur rate, time span, and overall capture confidence. Visualize as a color-coded badge (green/yellow/red) with thresholds and tooltips that explain any deficiencies (e.g., gaps over ridge lines, low GPS accuracy segments). Embed the badge in report headers, job dashboards, and export summaries, and expose underlying metrics via API for auditing and automated workflow rules.
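
A simplified sketch of the color assignment (the full thresholds appear in the acceptance criteria below; only a subset of metrics is shown): red wins if any red condition fires, then yellow if any metric misses its green bar, otherwise green.

    # Badge color sketch using a subset of the specified thresholds.
    def badge_color(m: dict) -> str:
        if (m["coverage_percent"] < 85.0 or m["duplicate_blur_rate_percent"] > 15.0
                or m["gsd_avg_cm"] > 5.0 or m["confidence_score"] < 70):
            return "red"
        if (m["coverage_percent"] < 95.0 or m["duplicate_blur_rate_percent"] > 5.0
                or m["gsd_avg_cm"] > 3.0 or m["confidence_score"] < 90):
            return "yellow"
        return "green"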

Acceptance Criteria
Roll-up Metric Computation and Storage
Given a job with at least one finalized roof polygon and a set of geo-tagged photos When the Coverage Integrity engine runs Then it computes and persists metrics: coverage_percent (0.0–100.0, 0.1 precision), photo_count (integer), altitude_min_m/altitude_avg_m/altitude_max_m (1 m precision AGL), gsd_avg_cm (0.1 precision), duplicate_blur_rate_percent (0.1 precision), low_accuracy_photo_ratio_percent (0.1 precision), time_span_minutes (integer), confidence_score (0–100 integer), computed_at (ISO-8601 UTC), algorithm_version (string) And coverage_percent excludes photos flagged as duplicate/blurred and those with horizontal_accuracy_m > 10 And time_span_minutes equals the difference (in minutes) between earliest and latest valid capture timestamps in the job’s timezone And all values are stored on the job record and are retrievable via UI and API
Badge Color Assignment by Thresholds
Given computed coverage integrity metrics When determining the badge color Then set color = green if all are true: coverage_percent >= 95.0; duplicate_blur_rate_percent <= 5.0; gsd_avg_cm <= 3.0; altitude_max_m <= 120; low_accuracy_photo_ratio_percent <= 5.0; time_span_minutes <= 120; confidence_score >= 90 Then set color = yellow if no red conditions are met and at least one is true: coverage_percent in [85.0, 94.9]; duplicate_blur_rate_percent in (5.0, 15.0]; gsd_avg_cm in (3.0, 5.0]; altitude_max_m in (120, 150]; low_accuracy_photo_ratio_percent in (5.0, 20.0]; time_span_minutes in (120, 1440]; confidence_score in [70, 89] Then set color = red if any are true: coverage_percent < 85.0; duplicate_blur_rate_percent > 15.0; gsd_avg_cm > 5.0; altitude_max_m > 150; low_accuracy_photo_ratio_percent > 20.0; time_span_minutes > 1440; confidence_score < 70 And the badge lists the top 1–3 failing metrics as causes for yellow/red
Tooltip Explanations and Deficiency Detailing
Given a computed badge in any color When the user hovers or taps the badge Then a tooltip shows each metric’s actual value (with units), its green threshold, and per-metric status (green/yellow/red) And deficiencies include “Gaps over ridge lines” when uncovered strip width along any ridge line > 0.5 m for a continuous segment > 2.0 m, displaying count and max width And deficiencies include “Low GPS accuracy segments” when any contiguous capture block ≥ 60 s contains ≥ 5 photos with horizontal_accuracy_m > 10, displaying the time range and count And numeric values are rounded as: coverage_percent 0.1; gsd_avg_cm 0.1; altitude 1 m; rates 0.1%; time minutes 1 And the tooltip provides a “View Coverage” action that opens the coverage overlay centered on detected gaps
Report Header Embedding
Given a computed badge When generating any PDF report Then the badge appears in the header of page 1 with the correct color and a summary string: "{coverage_percent}% covered, {photo_count} photos, avg GSD {gsd_avg_cm} cm, dup/blur {duplicate_blur_rate_percent}%, span {time_span_minutes} min, confidence {confidence_score}" And the badge renders at ≥300 DPI, fits within a 200×60 px area at 100% zoom, and maintains color fidelity in print and screen outputs And the badge appears consistently in PDFs generated via Chrome, Firefox, and Edge And the report metadata JSON embedded in the PDF includes the badge metrics and computed_at
Dashboard and Export Summary Embedding
Given a computed badge When viewing the job dashboard Then the badge is visible adjacent to the job title and updates within 5 seconds of metric recomputation When exporting the job summary (CSV and JSON) Then the export includes: coverage_percent, photo_count, altitude_min_m, altitude_avg_m, altitude_max_m, gsd_avg_cm, duplicate_blur_rate_percent, low_accuracy_photo_ratio_percent, time_span_minutes, confidence_score, color, computed_at, algorithm_version And CSV column names match the API field names; numeric fields use dot decimal; time spans are expressed in minutes
API Exposure for Auditing and Automations
Given a valid API token When calling GET /api/v1/jobs/{job_id}/coverage-integrity Then respond 200 application/json with fields: job_id, coverage_percent, photo_count, altitude_min_m, altitude_avg_m, altitude_max_m, gsd_avg_cm, duplicate_blur_rate_percent, low_accuracy_photo_ratio_percent, time_span_minutes, confidence_score, color, computed_at (ISO-8601), algorithm_version, deficiencies[] (code, severity, message), links.coverage_map And respond 404 for unknown job_id and 401 for invalid/expired token And include ETag and Last-Modified headers; conditional GET with If-None-Match returns 304 when unchanged And the payload validates against the published JSON Schema; numeric fields match the defined precisions
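A minimal sketch of the conditional-GET behavior, assuming a Flask-style handler; load_metrics is a hypothetical lookup, and a real handler would also enforce bearer-token auth (401) and emit Last-Modified as an HTTP-date:

```python
import hashlib
import json

from flask import Flask, jsonify, request

app = Flask(__name__)

def load_metrics(job_id: str):
    """Hypothetical: fetch the persisted coverage-integrity record, or None."""
    ...

@app.get("/api/v1/jobs/<job_id>/coverage-integrity")
def coverage_integrity(job_id):
    metrics = load_metrics(job_id)
    if metrics is None:
        return jsonify(error="not_found"), 404
    body = json.dumps(metrics, sort_keys=True)
    etag = '"' + hashlib.sha256(body.encode()).hexdigest() + '"'
    if request.headers.get("If-None-Match") == etag:
        return "", 304, {"ETag": etag}          # unchanged since last fetch
    return body, 200, {"ETag": etag, "Content-Type": "application/json"}
```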
Recompute Triggers and Performance SLAs
Given a job with up to 500 photos When photos are added, removed, or the roof outline changes Then the coverage integrity computation starts automatically and completes within 60 seconds of the last change being detected And for ≤300 photos the computation completes in ≤10 seconds at the 95th percentile; for 301–500 photos it completes in ≤25 seconds at the 95th percentile And the updated badge and metrics publish atomically to the dashboard, report preview, exports, and API And a webhook event coverage_integrity.updated is emitted with job_id, old_color, new_color, computed_at, and metrics_delta
Public Verification Portal & API
"As a homeowner or carrier reviewer, I want to independently verify a report’s authenticity online so that I don’t have to rely solely on the contractor’s word."
Description

Provide a public web portal and REST API where third parties can scan a QR or upload a PDF/photo to verify authenticity. Return signature validity, hash match status, job geofence proximity, capture time window, and any anomaly flags without exposing sensitive customer data. Implement rate limiting, uptime monitoring, audit logs, and a privacy-first data model that redacts PII while still proving provenance. Offer embeddable verification widgets and a webhook for automated status retrieval in claim systems.

Acceptance Criteria
Portal: QR Scan and File Upload Verification
- Given a valid GeoSeal QR code, When scanned in the public portal, Then the portal displays signature_valid, hash_match, within_geofence, geofence_distance_m, capture_window_status, anomaly_count, coverage_confidence, verification_id, and issued_at.
- Given a supported PDF/JPG/PNG (<= 15MB) without QR, When uploaded, Then the portal extracts/verifies GeoSeal metadata or file hash and displays the same summary fields.
- Given a supported input (<= 15MB), When processed, Then time-to-result P95 <= 3s and P99 <= 6s; if processing exceeds 10s, Then show a timeout state with a clear message.
- Given an invalid or unsupported input, When processed, Then show a non-technical error: "Unsupported file" or "Unverifiable"; Given a tampered QR/signature, Then show "Invalid signature" without stack traces.
- Given keyboard-only navigation or screen reader usage, When completing verification, Then all required actions are operable and labeled (WCAG 2.1 AA).
API: Verification Endpoint v1
- Given POST /verify with multipart file or GET /verify?token=..., When inputs are valid, Then respond 200 with JSON containing: signature_valid:boolean; hash_match: one of ["match","mismatch","unknown"]; within_geofence:boolean|null; geofence_distance_m:number|null; capture_window_status: one of ["within","outside","unknown"]; anomalies: array of {code:string, severity: one of ["info","warn","error"]}; coverage_confidence:number in [0,1]; verification_id:string UUID; issued_at: ISO-8601; schema_version:"v1" (a client-call sketch follows this list).
- Given Idempotency-Key provided, When the same payload is retried within 24h, Then return the same verification_id and body (idempotent).
- Given a supported input (<= 15MB), When processed, Then P95 latency <= 1.5s and P99 <= 3s.
- Given invalid input, Then return: 400 (bad parameters), 413 (>25MB), 415 (unsupported type), 422 (unverifiable), 429 (rate limited), 500 (server), each with error.code, message, request_id.
- Given any response, Then include X-Verification-Version:v1 and no PII fields (customer_name, email, phone, street_address, exact GPS).
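For illustration, a hypothetical client call against the endpoint above (host and file name are placeholders):

```python
import uuid

import requests

resp = requests.post(
    "https://verify.example.com/verify",          # hypothetical host
    files={"file": ("report.pdf", open("report.pdf", "rb"), "application/pdf")},
    headers={"Idempotency-Key": str(uuid.uuid4())},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()
if result["signature_valid"] and result["hash_match"] == "match":
    print("Verified:", result["verification_id"])
else:
    print("Failed checks:", [a["code"] for a in result["anomalies"]])
```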
Verification Logic: Geofence and Capture Window
- Given a job geofence polygon (WGS84) and asset GPS points, When verifying, Then within_geofence=true if shortest horizontal distance <= 50 meters; else within_geofence=false; geofence_distance_m is integer meters (rounded); altitude is ignored (see the distance sketch after this list).
- Given no geofence is provided, When verifying, Then within_geofence=null and geofence_distance_m=null.
- Given a job capture window [start,end] (UTC, ISO-8601), When all asset timestamps are within inclusive bounds, Then capture_window_status="within"; if any asset is outside, Then "outside"; if window missing, Then "unknown".
- Given device/server timestamp skew > 5 minutes, When verifying, Then add anomaly {code:"clock_skew", severity:"warn"}.
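A deliberately simplified geofence sketch: it takes great-circle distance to the nearest polygon vertex, which overestimates the distance to edges; production code would compute true point-to-polygon distance (e.g., shapely in a projected CRS) and return 0 m for points inside the polygon:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two WGS84 points.
    r = 6371000.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def geofence_check(point, polygon):
    """Return (within_geofence, geofence_distance_m) per the criteria above."""
    if not polygon:
        return None, None  # no geofence configured
    d = min(haversine_m(point[0], point[1], v[0], v[1]) for v in polygon)
    return d <= 50.0, round(d)
```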
Privacy: PII Redaction and Data Minimization
- Then portal and API responses must not contain PII: customer_name, email, phone, street_address, billing data, or exact GPS coordinates.
- Then responses may include only provenance fields: verification_id, signature_valid, hash_match, within_geofence, geofence_distance_m, capture_window_status, anomalies/anomaly_count, coverage_confidence, issued_at, schema_version.
- Then all identifiers are opaque (UUIDs or cryptographic hashes); no sequential/internal IDs exposed.
- Then unauthenticated uploads are retained <= 24 hours for processing and are not accessible via public URLs.
- Then automated checks scan JSON payloads and block deployment if PII field names are detected.
Operational: Rate Limiting and Abuse Controls
- Given unauthenticated portal requests, When >60 verifications are initiated from the same IP within 60 seconds (burst allowance 120 within 10s), Then subsequent requests return HTTP 429 with Retry-After and RateLimit-Limit/Remaining/Reset headers.
- Given API key-authenticated requests, When >600 requests/minute per key, Then return 429 with the same headers; allowlisted keys/IPs are exempt per configuration.
- Given a portal user triggers >=3 rate-limit events within 5 minutes, Then a CAPTCHA challenge is required before the next verification attempt.
- Given a single upload >25MB, Then return 413 with guidance to compress or use the API.
- Given normal traffic, Then rate limiting does not throttle legitimate users below 95th-percentile concurrency targets (documented baseline).
Integrations: Webhook Delivery and Security
- Given a registered webhook (URL + shared secret), When a verification completes, Then send POST within 5 seconds containing event_type:"verification.completed", verification_id, signature_valid, hash_match, within_geofence, geofence_distance_m, capture_window_status, anomaly_count, coverage_confidence, issued_at.
- Then include HMAC-SHA256 over the raw body in header X-Signature:"sha256=<digest>", require HTTPS, and include X-Verification-Id and X-Event-Type headers (receiver-side verification is sketched after this list).
- Then retries occur on non-2xx responses with exponential backoff (1m, 5m, 15m, 1h, 6h, 24h) up to 8 attempts, with a stable X-Idempotency-Key across retries.
- Then webhook secrets support rotation; during rotation, events are signed with both old and new secrets for a configurable overlap window.
- Then webhook payload excludes PII and is <=256KB.
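Receiver-side verification of the X-Signature header takes a few lines; accepting a list of candidate secrets lets both the old and new secret validate during a rotation overlap window:

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str,
                   secrets: list[bytes]) -> bool:
    """Check X-Signature: sha256=<digest> against HMAC-SHA256 over the raw body."""
    if not signature_header.startswith("sha256="):
        return False
    received = signature_header[len("sha256="):]
    return any(
        hmac.compare_digest(  # constant-time comparison
            hmac.new(secret, raw_body, hashlib.sha256).hexdigest(), received)
        for secret in secrets
    )
```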
Embeddable Verification Widget
- Given <script src=... data-verification-id=...>, When embedded on a third-party site, Then render a badge showing signature_valid, hash_match, within_geofence, and capture_window_status, linking to the public verification page.
- Then the widget loads asynchronously, is <50KB gzipped, does not block rendering, and loads no third-party trackers.
- Then the widget supports theme customization via data attributes (e.g., color, light/dark) and is WCAG 2.1 AA accessible.
- Then the widget fetches only the verification summary via a CORS-enabled endpoint and exposes no PII; compatible with strict CSP (no inline scripts required).
- Then on offline/429 conditions, the widget shows a graceful "Unavailable — retry" state without uncaught errors.
Watermark Privacy & Redaction Controls
"As a compliance lead, I want control over how much location/time detail is shown on watermarks so that we meet privacy policies without weakening authenticity guarantees."
Description

Enable admins to configure what metadata appears in visible watermarks (e.g., rounding coordinates, hiding exact timestamps) while preserving the full canonical metadata in the signed manifest. Support policy templates by client/market, role-based overrides, and an auditable redaction log. Ensure any redaction choices are reflected consistently across the web viewer and PDF outputs and do not invalidate the cryptographic seal, by keeping a stable canonicalization layer for signing that is independent of the display layer.

Acceptance Criteria
Admin sets GPS/altitude rounding for visible watermarks
Given an admin creates a policy with lat/lon rounding set to 0.001° and altitude rounding set to 1 m When a project with 12 photos and 4 measurement pages is processed Then all visible watermarks display lat/lon rounded to the nearest 0.001° and altitude rounded to the nearest 1 m And the signed manifest stores full-precision coordinates (≥7 decimal places) and raw altitude values And no visible watermark displays more precision than the policy allows
Admin hides timestamps from visible watermarks
Given an admin edits a policy to hide capture timestamps on all outputs When a user views photos and measurement pages in the web viewer and exports a PDF Then visible watermarks show a placeholder label “Time hidden per policy” with no exact timestamp And the signed manifest retains ISO 8601 timestamps with timezone offsets for each asset And seal verification succeeds and reports no mismatch between display and manifest
Apply client-specific policy template with role-based override
Given a client template “Acme-Midwest” requires hidden timestamps and 0.01° coordinate rounding And the org role matrix permits Admin override but denies Adjuster override When a Project Admin sets rounding to 0.005° for that project Then the project inherits all template settings except the rounding, which is overridden to 0.005° And when a user with role Adjuster attempts to unhide timestamps Then the action is blocked with an explanatory message and recorded in the audit log
Auditable redaction log with export and immutability
Given redaction settings are changed for a project (fields: timestamp hidden, lat/lon rounding from 0.01° to 0.005°) When viewing the project’s Redaction Log Then the log shows actor, role, timestamp (UTC), project ID, affected fields, old→new values, source (UI/API), and reason (if provided) And the log entry is append-only, tamper-evident (hash-chained), and includes a verifiable checksum And an authorized user can export the log to CSV and JSON with the same entries and checksums
Consistent redaction across web viewer and PDF outputs
Given a project uses policy “Privacy-Standard” (timestamps hidden, lat/lon 0.001°, altitude 1 m) When a user compares an on-screen photo and its corresponding exported PDF page Then the watermark text, precision, placeholders, and redaction indicators are identical across both And each page section (photos, plan sheets, measurement tables) applies the same policy rules And automated diffing of rendered text extracts shows zero mismatches in redacted fields
Stable canonicalization layer preserves signature despite display redactions
Given a project is signed using the canonical manifest (normalized field order, full metadata, stable units) When display policies are toggled between three presets (Strict, Standard, Open) Then the cryptographic signature and manifest hash remain identical across toggles And signature verification with the public key returns “Valid” for each exported PDF and the web bundle And no display-only field participates in the signed canonicalization payload
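The invariant above holds because only the canonical manifest is hashed and signed, while watermark text is derived as a pure view of it. A sketch with hypothetical field names (captured_at, lat, lon) and a hypothetical latlon_rounding_deg policy key:

```python
import hashlib
import json

def canonical_hash(manifest: dict) -> str:
    """Hash the signed payload: sorted keys, fixed separators, full precision.
    Display policies never touch this function's input."""
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def watermark_view(manifest: dict, policy: dict) -> dict:
    """Derive display-only watermark fields; toggling policy leaves the hash alone."""
    step = policy.get("latlon_rounding_deg", 0.001)
    return {
        "time": ("Time hidden per policy" if policy.get("hide_timestamp")
                 else manifest["captured_at"]),
        "lat": round(manifest["lat"] / step) * step,  # rounded to the policy step
        "lon": round(manifest["lon"] / step) * step,
    }
```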
Policy enforcement on invalid or conflicting overrides
Given a client template marks exact GPS visibility as Non-Overridable When a Project Admin attempts to set lat/lon rounding to “Exact” Then the system blocks the change, explains the Non-Overridable constraint, and suggests allowable values And the attempted change is logged with failure status in the Redaction Log And existing outputs remain unchanged and continue to verify against the current manifest

SignSeal

Applies standards‑based digital signatures (PAdES/LTV) from RoofLens and optional co‑signers. Once signed, the PDF is cryptographically locked; any change breaks the seal and is clearly flagged. Users get long‑term, court‑ready validation without extra tools.

Requirements

PAdES-LTV Signature Engine
"As a roofing contractor, I want RoofLens to apply a standards-based digital signature to my PDF bid so that recipients can verify authenticity and integrity in any standard PDF reader."
Description

Implement a server-side signing service that applies PAdES Baseline (B-LT) compliant digital signatures to RoofLens-generated PDFs. Embed the full certificate chain, OCSP/CRL responses, and signed attributes to enable long-term validation in standard PDF readers without plugins. Use SHA-256 or stronger algorithms, support multiple digest/signature algorithms, and ensure compatibility with Acrobat/Reader trust stores (AATL/EUTL). Integrate with the PDF generation pipeline so the seal is applied automatically at the end of estimate creation, with optional visible signature appearance and metadata binding to the job, customer, and version. Provide error handling, retries, and idempotency for signature application.
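A minimal sketch of the signing step, assuming the open-source pyHanko library, placeholder file paths, and a hypothetical TSA URL; the production service would hold the key in an HSM (see the HSM requirement below) rather than a PEM file:

```python
from pyhanko.pdf_utils.incremental_writer import IncrementalPdfFileWriter
from pyhanko.sign import signers
from pyhanko.sign.fields import SigSeedSubFilter
from pyhanko.sign.timestamps import HTTPTimeStamper
from pyhanko_certvalidator import ValidationContext

signer = signers.SimpleSigner.load(
    "signing_key.pem", "signing_cert.pem", ca_chain_files=("chain.pem",),
)
tsa = HTTPTimeStamper("https://tsa.example.com/tsr")  # hypothetical TSA endpoint
vc = ValidationContext(allow_fetching=True)           # fetch OCSP/CRL at signing time

with open("estimate.pdf", "rb") as inf, open("estimate-signed.pdf", "wb") as outf:
    signers.sign_pdf(
        IncrementalPdfFileWriter(inf),
        signers.PdfSignatureMetadata(
            field_name="RoofLensSeal",
            subfilter=SigSeedSubFilter.PADES,
            embed_validation_info=True,   # certs + OCSP/CRL into the DSS (B-LT)
            validation_context=vc,
        ),
        signer=signer,
        timestamper=tsa,
        output=outf,
    )
```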

Acceptance Criteria
Automatic PAdES B‑LT Sealing at Estimate Completion
Given an estimate PDF is produced by the RoofLens pipeline When the pipeline reaches the signing step Then the service applies a PAdES Baseline B-LT signature to the PDF And embeds all certificates required to build the chain to a trust anchor And embeds OCSP/CRL responses for each certificate in the chain And adds an RFC 3161 document timestamp And the resulting file opens in Adobe Acrobat/Reader with status "Signature is valid" and "LTV enabled" And no additional plugins or configuration are required for validation
AATL/EUTL Trust and Offline LTV Validation
Given a signed PDF produced by the service When opened in Adobe Acrobat/Reader configured with default AATL/EUTL trust stores and with network access disabled Then Acrobat/Reader reports the signature as valid with LTV enabled And the certificate chain resolves to a trust anchor present in AATL or EUTL And no "Unknown signer" or "Unable to verify" warnings are shown
Tamper Detection and Cryptographic Lock Enforcement
Given a signed PDF produced by the service When any byte of the PDF content is modified after signing (e.g., text edit, image change, or page insert) Then Acrobat/Reader flags the signature as invalid and indicates the document has been altered since signing And a DocMDP permission level is set to prevent changes after signing
Visible Signature Appearance and Metadata Binding
Given the job is configured for a visible signature appearance on page 1 When the PDF is signed Then a visible signature widget appears within the configured rectangle with signer name, signing time (UTC), and reason And the PDF XMP metadata contains job_id, customer_id, and estimate_version And those metadata fields are included in the signature's signed attributes or covered by the document digest such that tampering them invalidates the signature And opening the PDF shows the visible signature without overlapping existing content
Idempotent Retries and Fault Handling in Signing Service
Given a transient failure occurs during signing (e.g., HSM timeout or OCSP responder unavailable) When the service retries according to an exponential backoff policy Then only a single final signature revision exists if a retry eventually succeeds And the service uses an idempotency key derived from estimate_id and estimate_version to prevent duplicate signatures And if all retries fail, the pipeline surfaces a specific error code and human-readable message and leaves the PDF unsigned And a subsequent identical request with the same idempotency key returns the prior result without creating a new signature
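A sketch of the key derivation and retry loop; request_fn and TransientSigningError are hypothetical stand-ins for the signing client and its transient-failure signal:

```python
import hashlib
import random
import time

class TransientSigningError(Exception):
    """Hypothetical: raised for retryable failures (HSM timeout, OCSP outage)."""

def idempotency_key(estimate_id: str, estimate_version: int) -> str:
    # Identical estimate+version always maps to the same signing attempt.
    return hashlib.sha256(f"{estimate_id}:{estimate_version}".encode()).hexdigest()

def sign_with_retries(request_fn, key: str, attempts: int = 5, base: float = 0.5):
    for attempt in range(attempts):
        try:
            return request_fn(idempotency_key=key)
        except TransientSigningError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff with jitter before the next attempt.
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))
```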
Algorithm Support and Security Minimums
Given the service is using default cryptographic settings When signing a PDF Then the signature uses SHA-256 or stronger for all digests And SHA-1 is rejected with a clear error And RSA-PKCS#1 v1.5, RSA-PSS, and ECDSA signature algorithms are supported, with the chosen algorithm recorded in the signature dictionary and audit logs
Validation Data Completeness and Freshness (OCSP/CRL and Timestamp)
Given the service embeds validation data in the DSS When inspecting a signed PDF Then OCSP responses are present for each certificate where OCSP is available; otherwise CRLs are embedded And each OCSP/CRL entry is within its thisUpdate/nextUpdate validity window at signing time And the document timestamp is issued by a trusted TSA and validates successfully in Acrobat/Reader And Adobe Preflight PAdES B-LT validation passes without errors
Trusted Timestamping Integration
"As a compliance officer, I want every signature to include a trusted timestamp so that the signing time can be independently proven years later."
Description

Integrate RFC 3161-compliant TSA providers to attach trusted timestamps to each signature and document (signature-time and document-time stamps). Support at least two redundant TSA endpoints with automatic failover and configurable policies per region/tenant. Embed timestamp tokens within the PDF per PAdES requirements to prove the signing time independent of system clocks. Ensure reliable network performance, audit logging of TSA requests/responses, and graceful degradation if one TSA is unavailable. Expose settings to choose TSA providers and policies.

Acceptance Criteria
PAdES-Embedded Signature and Document Timestamps
Given a PDF is signed via SignSeal with timestamping enabled When signing completes Then each digital signature includes an RFC 3161 timestamp token in UnsignedAttributes per PAdES And the PDF includes a document-level DocTimeStamp entry per PAdES And Adobe Acrobat and an ETSI validator report both timestamps as valid and trusted And the timestamp time equals the TSA genTime in the token (±1s) And the PDF DSS contains the TSA certificate chain and revocation data enabling offline validation
Automatic TSA Failover During Outage
Given a policy with primary and secondary TSA endpoints is active And the primary endpoint is unreachable, returns 5xx, or produces an invalid response When a timestamp is requested during signing Then the system fails over to the secondary after a single 3s timeout on the primary without user action And the final PDF contains a valid timestamp from the secondary And end-to-end signing latency overhead from failover is ≤ 5s at P95 And the audit log records the failover event and the final provider used
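A sketch of the failover loop, with hypothetical endpoint URLs; tsq is a DER-encoded RFC 3161 TimeStampReq built elsewhere:

```python
import requests

TSA_ENDPOINTS = [
    "https://tsa-primary.example.com/tsr",     # hypothetical
    "https://tsa-secondary.example.com/tsr",
]

def request_timestamp(tsq: bytes) -> tuple[bytes, str]:
    """Return (DER-encoded TimeStampResp, endpoint used), failing over after 3s."""
    last_error = None
    for endpoint in TSA_ENDPOINTS:
        try:
            resp = requests.post(
                endpoint, data=tsq, timeout=3,
                headers={"Content-Type": "application/timestamp-query"},
            )
            resp.raise_for_status()
            return resp.content, endpoint
        except requests.RequestException as exc:
            last_error = exc   # the audit log would record this failover event
    raise RuntimeError("all configured TSA endpoints failed") from last_error
```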
Tenant- and Region-Scoped TSA Policy Configuration
Given a tenant admin configures a TSA policy per region with at least two endpoints, provider priority, policy OID, hash algorithm, and requirement mode (Required or BestEffort) When a user in that tenant signs from a specific region Then the resolved TSA provider and policy match the configured mapping And changes to policy take effect within 60s of save And invalid configurations (e.g., fewer than two endpoints when Required) are rejected with validation errors And the selected policy OID is present in the timestamp token And the chosen provider and policy identifiers are captured in audit logs
TSA Request/Response Audit Logging
Given a timestamp request is initiated When the TSA responds or the request times out/fails Then an immutable audit record is written including: tenant ID, document ID (SHA-256 digest), correlation ID, TSA provider and endpoint, policy OID, request nonce and hash algorithm, response status/result, TST serial number, genTime, round-trip latency, attempt count, and failover flag And sensitive payloads (private keys, full document content) are never logged; only digests are stored And audit records are available to tenant admins within 2 minutes and retained for 7 years And audit logs are append-only with integrity checks to detect tampering
Timestamp Time Independent of System Clocks
Given the signing host clock is skewed by ±24 hours When a document is signed with timestamping enabled Then the displayed signing time and stored event time equal the TSA genTime from the token (±1s) regardless of local clock And the audit log records both local clock and TSA genTime for comparison And third-party validators report the timestamp as valid
Network Performance and Retry Behavior
Given transient network errors occur when contacting the TSA When requesting a timestamp Then the client uses TLS 1.2+ with HTTP keep-alive and a per-attempt timeout of 3s And it retries up to 3 times with exponential backoff starting at 500ms plus jitter And successful timestamp acquisition meets SLOs: P50 ≤ 1.5s, P95 ≤ 5s, P99 ≤ 10s per request in production And metrics for success rate, latency percentiles, and error codes are emitted and visible in monitoring dashboards
Graceful Degradation and Post-Stamping Workflow
Given both configured TSA endpoints are unavailable And the tenant policy mode is BestEffort When a user signs a document Then the signing completes without a timestamp, the PDF is marked as not timestamped, and a background job queues a retry within 5 minutes And the UI shows "Timestamp pending" with last attempt status and next retry time And when the background job succeeds, a DocTimeStamp is added via incremental update without altering the signed content, and the user is notified And if the tenant policy mode is Required, the signing fails within 10s with a clear error and no partially signed file is produced
HSM-Backed Key and Certificate Management
"As a security administrator, I want signing keys stored and used inside an HSM with audited access so that signatures cannot be forged and compliance requirements are met."
Description

Provision and manage the RoofLens signing keys in a FIPS 140-2/3 validated HSM or cloud KMS with hardware-backed key protection, strict role-based access controls, and full audit logging. Support key rotation, certificate renewal, and chain management, with alerts ahead of expirations. Allow per-environment and optional per-tenant signing identities, and enable bring-your-own-certificate for qualified customers. Automate CSR generation and issuance with a publicly trusted document-signing CA compatible with common PDF readers’ trust lists. Prevent key export, enforce least-privilege usage, and provide disaster recovery procedures.
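As one possible backend, a cloud KMS keeps keys non-exportable by construction. A sketch against AWS KMS via boto3 (key ID and algorithm are illustrative; an on-premises HSM would use PKCS#11 instead):

```python
import hashlib

import boto3

kms = boto3.client("kms")

def sign_digest(key_id: str, pdf_bytes: bytes) -> bytes:
    """Sign a document digest; the private key never leaves the KMS/HSM."""
    digest = hashlib.sha256(pdf_bytes).digest()
    resp = kms.sign(
        KeyId=key_id,                    # e.g., an alias scoped per environment/tenant
        Message=digest,
        MessageType="DIGEST",            # sign the precomputed digest, not raw bytes
        SigningAlgorithm="ECDSA_SHA_256",
    )
    return resp["Signature"]
```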

Acceptance Criteria
Provision Non-Exportable Signing Keys in FIPS-Validated HSM
Given a user with KeyAdmin role and MFA is verified, When they request a new signing key with algorithm ∈ {ECDSA_P-256, ECDSA_P-384, RSA-3072} and environment=prod, Then the key is generated inside a FIPS 140-2/3 validated HSM/KMS, marked non-exportable, allowed operations ∈ {sign, generate_csr}, a unique keyId is returned, and P95 key creation latency ≤ 10s. Given a user without KeyAdmin role, When they attempt to create, delete, or rotate a key, Then the request is denied with HTTP 403 and an audit event is recorded. Given an existing signing key, When any API attempts key export or wrap-for-export, Then the operation is blocked, no key bytes are returned, HTTP 400/403 is returned, and a High-severity audit event is emitted. Given a key is successfully created, When the operation completes, Then an immutable audit record includes actor, role, MFA=true, source IP, requestId, keyId, algorithm, environment, timestamp (UTC), and outcome=Success.
Enforce Least-Privilege RBAC on Key Use and Management
Given system roles KeyAdmin, SignOperator, and Auditor are configured, When permissions are evaluated, Then KeyAdmin can create/rotate/delete keys and assign roles but cannot perform sign operations; SignOperator can perform sign and fetch associated certificate chain but cannot create/rotate/delete keys; Auditor can read metadata and audit logs but cannot sign/create/rotate/delete. Given an API token scoped to tenant T and environment E, When a sign request targets keyId not in scope (different tenant or environment), Then the request is denied with HTTP 403 and an audit event with reason=ScopeMismatch is recorded. Given repeated (≥5 within 1 minute) denied key-use attempts by the same principal, When rate-limiting is applied, Then subsequent attempts are throttled for at least 5 minutes and a Security alert is issued to admins.
Automate CSR and Certificate Issuance with Publicly Trusted CA
Given a signing key exists and organization certificate profile is configured, When Generate CSR is invoked, Then the CSR is created inside the HSM with subject fields per profile, keyUsage includes digitalSignature (and contentCommitment if configured), AIA/OCSP/CRLDP extensions requested as required by the CA, and the CSR never exposes private key material. Given the CA issues a document-signing certificate, When the certificate and chain are imported, Then the system validates public-key match to keyId, stores the full chain, verifies chain completeness to a root trusted by common PDF readers, and marks the certificate state=Active. Given a PDF is signed using the active certificate, When opened in default settings of major PDF readers, Then the signature validates as trusted without manual trust-store changes and revocation endpoints are reachable.
Key Rotation and Certificate Renewal with Expiry Alerts
Given a certificate will expire in 60/30/7/1 days, When the daily scheduler runs, Then alert notifications are sent to designated contacts via email and webhook with keyId, tenant, environment, and daysUntilExpiry. Given a rotation is scheduled for a key, When the cutover time is reached, Then a new key and certificate are used for all new sign operations, the previous key is set to state=DisabledForNewSigning but retained for verification history, and all changes are logged. Given a certificate renewal (same key) is requested before expiry, When renewal completes, Then the new certificate chain is associated with the existing key, and sign operations seamlessly use the renewed certificate with zero downtime (P95 sign latency increase ≤ 10%). Given any renewal or rotation fails, When the failure occurs, Then alerts with severity=High are sent immediately and the system automatically rolls back to the last known-good configuration.
Per-Environment and Optional Per-Tenant Signing Identities
Given environments dev, stage, and prod, When keys are created, Then each key is namespaced to its environment and cannot be listed or used from another environment; cross-environment access attempts return HTTP 403 and are audited. Given a tenant-enabled workspace, When per-tenant signing is configured, Then each tenant receives a distinct key/certificate pairing, and sign requests must include tenant context; attempts to use another tenant’s key are denied and logged. Given audit reporting is requested, When logs are queried by tenant and environment, Then results include only records within the requesting principal’s scope.
Bring-Your-Own-Certificate (BYOC) for Qualified Customers
Given a qualified customer and an HSM-resident keyId with a pending CSR, When the customer uploads a CA-issued document-signing certificate and chain, Then the system verifies the certificate public key matches the CSR/keyId, validates chain completeness and validity periods, checks OCSP/CRL endpoints are present, and associates the chain with keyId. Given the uploaded certificate fails validation (mismatch, untrusted root, expired, or missing revocation info), When import is attempted, Then the system rejects the upload with a specific error code and records an audit event with reason. Given BYOC is successfully enabled, When a PDF is signed, Then the signature validates in common PDF readers without additional trust configuration and LTV enablement is possible via available OCSP/CRL endpoints.
Disaster Recovery for HSM/KMS and Certificate Material
Given secure HSM backup procedures are configured, When backups run on schedule, Then key material is backed up using HSM-native secure wrap with M-of-N quorum, stored encrypted, and a completion audit event is recorded; no plaintext key material is ever exported. Given a simulated HSM/KMS outage in non-production, When DR restore is executed, Then keys, certificates, and configuration are restored, keyIds remain unchanged, sign operations resume within RTO ≤ 4 hours, and no permissions are broadened post-restore. Given DR testing is mandated quarterly, When a quarterly test completes, Then a report is generated containing recovery time, data loss window (RPO ≤ 15 minutes for metadata), and integrity checks (successful sign and verification), and the report is stored immutably.
Co-Signer Workflow and Routing
"As a project manager, I want to request co-signatures from homeowners and adjusters in a defined order so that all parties can approve the bid in one streamlined workflow."
Description

Enable inviting external co-signers (e.g., homeowner, adjuster) to sign the same PDF with sequential or parallel routing, configurable order, due dates, reminders, and expirations. Provide identity verification options (email link with one-time code, SMS OTP), ESIGN/UETA consent capture, and a clear signing UI. Apply compliant digital signatures for each co-signer and preserve LTV data for all signatures. Support declinations, reassignments, and cancellation, with secure access links and anti-tamper protections. Integrate notifications and status tracking within RoofLens jobs and expose events via webhooks for external systems.

Acceptance Criteria
Sequential and Parallel Co-Signer Routing
Given a document with co-signers configured in sequential order A -> B -> C, When the workflow is sent, Then only A receives an invite initially, And B is invited only after A signs, And C is invited only after B signs. Given a document with two parallel groups [A,B] -> [C,D], When the workflow is sent, Then A and B are invited simultaneously, And C and D are invited only after both A and B sign. Given a configured signing order and roles, When the sender reviews before sending, Then the UI displays the exact order and any parallel groupings, And changes to order are persisted upon send. Given a signer in a parallel group completes their signature, When other signers in that same group are still pending, Then the workflow remains in the same group until all signers in the group complete. Given routing is in progress, When a signer is removed or reassigned before they sign, Then the system maintains the original position in the order for the replacement and notifies affected parties. Given the final required co-signer completes, When the workflow state is updated, Then the document is finalized as Signed and Locked, And no further signatures can be added.
Signing Access Security and Identity Verification
Given an invite link is generated for a signer, When the link is issued, Then it contains a single-use, time-limited token with at least 128 bits of entropy, And the token expires after the configured validity window or after first successful use. Given a signer opens the invite, When identity verification method is Email OTP, Then a 6-digit one-time code is emailed, And the code expires in 10 minutes, And after 5 failed attempts the session is locked for 15 minutes, And up to 3 resends are allowed with new codes invalidating prior codes. Given a signer opens the invite, When identity verification method is SMS OTP, Then a 6-digit one-time code is sent by SMS to the configured number, And the code expires in 10 minutes, And after 5 failed attempts the session is locked for 15 minutes, And up to 3 resends are allowed with new codes invalidating prior codes. Given an expired or used invite link, When a signer attempts access, Then the system displays a non-PII error and blocks access, And allows the sender to issue a fresh link which invalidates prior tokens. Given OTP verification succeeds, When the signer proceeds, Then the audit log records method, timestamp, masked destination, and IP, And the signer is allowed to view the document; otherwise access is denied. Given excessive OTP or link requests from the same signer or IP, When rate limits are exceeded, Then the system returns 429 with retry-after and does not disclose whether the email/phone exists.
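A sketch of the token and OTP primitives implied above, using Python's secrets module; persistence, resend limits, and the 15-minute lockout are left to the calling service:

```python
import secrets
from datetime import datetime, timedelta, timezone

def new_invite_token() -> str:
    # 32 random bytes -> 256 bits of entropy, above the 128-bit floor.
    return secrets.token_urlsafe(32)

def new_otp() -> tuple[str, datetime]:
    # 6-digit one-time code with a 10-minute expiry.
    code = f"{secrets.randbelow(10**6):06d}"
    return code, datetime.now(timezone.utc) + timedelta(minutes=10)

def check_otp(submitted: str, stored: str, expires_at: datetime,
              failed_attempts: int) -> bool:
    if failed_attempts >= 5 or datetime.now(timezone.utc) > expires_at:
        return False  # locked or expired; caller enforces the lockout window
    return secrets.compare_digest(submitted, stored)  # constant-time comparison
```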
ESIGN/UETA Consent Capture
Given a signer has not provided ESIGN/UETA consent for this signing session, When they begin the session, Then the ESIGN/UETA disclosure is presented prominently with a required checkbox and an Agree action, And proceeding to view or sign the document is blocked until consent is granted. Given a signer grants consent, When consent is captured, Then the system records timestamp, IP address, user agent, disclosure version, and signer identifier in the audit trail, And a copy of the disclosure text is stored with the record. Given a signer declines consent, When they choose not to agree, Then the session is terminated, the workflow is halted for that signer, status is set to Declined - No Consent, and the sender is notified. Given a signer is re-invited for the same workflow, When prior consent exists and the disclosure version has not changed, Then consent is not required again and is referenced from the audit trail; otherwise updated consent is required.
PAdES Signatures with LTV for All Co-Signers
Given a signer completes their e-signing, When the system applies the digital signature, Then a standards-compliant PAdES signature is embedded for that signer, the PDF is incrementally updated, and prior signatures remain valid. Given a digital signature is applied, When LTV data is attached, Then OCSP/CRL responses and the signing certificate chain are embedded, And a trusted timestamp is applied, Enabling offline validation. Given the fully signed document is opened in a compliant PDF validator (e.g., Adobe Acrobat), When validation is performed offline, Then all signatures show as valid with LTV enabled, And the document shows as signed and locked. Given any post-sign modification to the PDF bytes, When validation is performed, Then at least one signature shows as invalid/tampered, And the system flags the document as altered. Given multiple co-signers complete in sequence or parallel, When each signature is applied, Then the signature appearance and metadata identify the signer, signing time, and reason, And the final document retains LTV data for all signatures.
Decline, Reassign, and Cancel Flows
Given a signer has been invited and has not signed, When the signer chooses to decline, Then they must provide a reason, the audit trail is updated, the sender is notified, and downstream routing is paused for that branch until the sender resolves (reassigns or cancels). Given a signer is pending, When the sender reassigns to a new email and optional name, Then the original link is revoked, a new secure invite is issued, audit entries record the change (old -> new), and the replacement inherits the same routing position and due date. Given a workflow is in progress, When the sender cancels the signing request, Then all pending links are revoked immediately, notifications are sent to affected parties, the document is marked Cancelled, and no additional signatures can be collected. Given a signer is reassigned after providing consent but before signing, When the reassignment occurs, Then the new signer must provide their own ESIGN/UETA consent and pass identity verification; prior consents remain in the audit trail but do not transfer. Given a signer declines after being the only remaining signer in a parallel group, When the decline is recorded, Then the workflow proceeds if other required signers in the group have completed and business rules permit continuation; otherwise it remains paused for sender action.
Clear Signing UI and Accessibility
Given a signer opens the signing session on a mobile device (viewport 360x640) or desktop, When the signing UI loads over a typical 4G connection, Then the above-the-fold content renders within 2 seconds and the full document is interactive within 5 seconds. Given required fields exist for a signer, When the session starts, Then the UI clearly indicates the number of required actions, guides the signer field-to-field, prevents completion until all required fields are satisfied, and provides a clear Finish action. Given accessibility requirements, When navigating the signing UI, Then all interactive controls are keyboard accessible, have appropriate ARIA labels/roles, and meet WCAG 2.1 AA contrast ratios; screen reader announcements are provided for errors and step changes. Given common errors (expired link, failed OTP, network loss), When an error occurs, Then the UI shows a human-readable, actionable message without exposing sensitive details and provides a path to retry or contact the sender. Given zoom and page navigation needs, When the signer interacts with the document, Then pinch-to-zoom (mobile) and zoom controls (desktop) work, page thumbnails are available, and the current page and progress are visible.
Notifications, Status Tracking, Reminders, and Webhooks
Given a workflow is sent, When recipients are invited, view, consent, verify, sign, decline, reassign, expire, or when the sender cancels, Then in-app status within the RoofLens job updates in real time with per-signer timestamps and a complete event history. Given reminders are configured, When a signer remains pending, Then automatic reminders are sent according to the configured cadence (e.g., every 2 days) until the signer completes or the due date/expiration is reached; manual reminders can be triggered by the sender; no reminders are sent after completion or cancellation. Given due dates and expirations are set, When the due date is reached, Then the signer is flagged Overdue and a reminder is sent; When the expiration time passes, Then pending invites are invalidated, status is set to Expired, and notifications are sent to the sender and affected signers. Given webhooks are configured with a shared secret, When any signing event occurs (invited, viewed, consented, verified, signed, declined, reassigned, reminder_sent, overdue, expired, canceled, completed), Then a webhook is delivered within 60 seconds including job_id, document_id, signer_id, event, timestamp, routing_position, verification_method, and a payload HMAC signature; 3 retries with exponential backoff occur for non-2xx responses. Given idempotency is required by downstream systems, When duplicate deliveries occur due to retries, Then each webhook contains a stable event_id to allow receivers to de-duplicate processing.
Seal Lock and Tamper Detection UX
"As an estimator, I want sealed PDFs to be read-only and visibly flag any alterations so that recipients immediately see if the document has been tampered with."
Description

Once all required signatures are applied, lock the document within RoofLens by setting appropriate PDF permissions and finalizing the signature fields so that any change breaks validation. Display clear in-app verification states (Valid, Modified, or Broken) with explanations and next steps. Prevent edits to sealed versions, forcing new revisions to create a new version and signature cycle. Ensure the sealed PDF shows a clear status in standard PDF readers without additional tools, and surface any validation failures prominently to users and admins.

Acceptance Criteria
Final Signature Locks and Seals PDF
Given a document with defined required signers When the final required signer applies a compliant digital signature Then the system applies a PAdES-LTV final signature with embedded OCSP/CRL and RFC 3161 timestamp And sets PDF permissions to disallow editing, form filling, commenting, page extraction, and further signing And marks all signature fields read-only And the document state becomes "Valid" in-app within 3 seconds of signature completion And the sealed version is assigned an immutable version ID and checksum
In-App Verification States and Guidance
Given any user views a document When the document is sealed and unaltered Then display a "Valid" badge with a tooltip explaining it is locked and court-ready and provide primary actions: Download, Share Given any user views a sealed document that has a newer unsealed revision created from it When viewing the sealed prior version Then display a "Modified" badge indicating a newer revision exists and provide CTAs: Open Latest Revision, Request Signatures Given any user views a sealed document whose integrity check fails When the app validates the signature Then display a "Broken" badge with cause (e.g., byte mismatch, invalid signature), last known valid timestamp, and CTAs: Download Original, Create New Revision And badges appear on both list rows and detail headers and meet WCAG AA contrast (>= 4.5:1)
Edit Prevention and New Revision Workflow
Given a sealed document with state "Valid" When a user attempts any edit action (annotate, rotate, replace pages, edit text, modify fields) Then block the action and present a modal to Create New Revision When the user confirms Then create a new version with incremented version number, copy content from the sealed PDF as the base, purge all signatures, reset required signers, and redirect to the new draft And the sealed version remains immutable and visible in version history And edit APIs for the sealed version return 403 with a machine-readable error code
Tamper Detection and Alerting
Given a sealed document stored by the system When the file bytes are altered (e.g., edited externally) and the document is opened, validated, or re-uploaded Then server-side validation detects signature failure or digest mismatch, sets state to "Broken" within 5 seconds, and records failure details (reason code, certificate chain status) Then notify the document owner and workspace admins via in-app alert and email within 60 seconds And block send/share/approve actions until a new revision is created And display a persistent header message on the document with cause and next steps
Validation in Standard PDF Readers
Given a sealed PDF downloaded from the system When opened in Adobe Acrobat Reader DC (current) and Foxit Reader without internet access Then both readers report the signature as valid and the document as locked with no unsigned changes since signing When a user attempts to edit in these readers Then the reader warns that changes will invalidate the signature When any change is saved Then re-validation in RoofLens marks the file as "Broken" upon upload or automated check
Long-Term Validation (LTV) Readiness
Given a sealing operation When the final seal is applied Then embed OCSP/CRL responses for the full certificate chain and an RFC 3161 trusted timestamp so that offline validation succeeds without network access And the signature profile conforms to PAdES-B-LT or better anchored to a trusted root in the configured trust store And the in-app Signature Details panel displays TSA time, certificate subject, issuer, serial, and revocation data status
Audit Trail and Admin Visibility
Given any sealing or validation event When it occurs Then append an immutable audit log entry with event type, actor, UTC timestamp, document version ID, checksum, certificate subject/serial, issuer, TSA token serial, and outcome (Pass/Fail) When a document transitions to "Broken" Then surface an admin dashboard banner within 60 seconds containing the document ID, owner, last valid time, and a link to audit details
Automated Long-Term Validation Refresh
"As a records manager, I want validation data to be embedded and automatically refreshed so that documents remain verifiable long after certificates or revocation info expire."
Description

Implement background jobs to periodically refresh embedded OCSP/CRL data and apply archival timestamps to upgrade documents to PAdES B-LTA where configured. Track revocation info expiry and proactively renew validation data before it becomes stale, even after signers’ certificates expire. Maintain a resilient cache of revocation data, handle large CRLs efficiently, and re-seal with chained timestamps to preserve evidence over time. Provide admin controls for retention policies and compliance reports on validation health across all sealed documents.

Acceptance Criteria
Proactive OCSP/CRL Refresh Before Expiry
Given a sealed PDF with embedded OCSP/CRL entries expiring at time T and a refresh threshold of 72 hours When the background refresher runs at or before T-72h Then the system fetches fresh OCSP/CRL for every certificate in each signer chain and embeds them as new DSS entries without altering visible content And then the document validates offline with LTV OK in Adobe/DSS with no external network calls And then the job records success with timestamps and responder URLs in the audit log
Archival Timestamp Upgrade to PAdES B-LTA
Given a PAdES B-LT document and the tenant has Upgrade to B-LTA enabled When a refresh cycle completes or the scheduled upgrade time occurs Then the system applies an ETSI EN 319 142-1 compliant archive timestamp (RFC 3161) from a configured trusted TSA, chaining with any prior timestamps, and embeds updated validation data And then external validators (EU DSS, Adobe) recognize the document as PAdES B-LTA with signature validity preserved And then offline validation succeeds without any external fetch
Evidence Preservation After Signer Certificate Expiry
Given a sealed document where the signer’s certificate has expired after the original signing time When a refresh occurs post-expiry Then the system preserves the original signing-time status (OCSP/CRL proving good status at/around signing time) and adds a new archive timestamp And then the signature remains valid at signing time under LTV policies without requiring the signer to re-sign
Large CRL Handling and Cache Efficiency
Given an issuer that publishes 100 MB full CRLs and 2 MB delta CRLs When the system refreshes revocation data Then it prefers delta CRLs, downloads no more than 5 MB for that issuer during the cycle, completes the job in under 5 minutes, and keeps peak memory usage ≤ 256 MB And then the CRL is stored compressed and deduplicated in the shared cache by issuer key ID so that subsequent documents using the same issuer complete refresh in under 30 seconds via cache hit
Resilient Retry, Backoff, and Rate Limiting
Given an OCSP/CRL/TSA endpoint that returns timeouts or 5xx errors When a refresh attempt is made Then the system retries with exponential backoff and jitter up to 6 attempts (total wait < 30 minutes), enforces a per-endpoint rate limit of 1 request/second, and opens a circuit breaker for 10 minutes after repeated failures And then the document is marked Refresh Deferred, existing LTV data remains intact, and the job is automatically re-queued And then if completion before the LTV expiry threshold becomes unlikely, the document is flagged At Risk in compliance reports at least 24 hours in advance
Admin Retention Policies and Compliance Reporting
Given an admin user managing SignSeal settings When they configure revocation cache retention (e.g., 90 days), refresh thresholds (e.g., 72 hours), and select a TSA profile via UI or API Then the settings are validated, versioned, and audited (who, what, when) and take effect on the next cycle without downtime And then the admin can generate a compliance report (CSV/PDF) filtered by date range and health status that lists per document: last refresh time, next LTV expiry, signature level (B-LT/B-LTA), health (Green/Amber/Red), and at-risk reasons And then generating a report for 10,000 documents completes in ≤ 60 seconds
Re-seal Integrity and Multi-Signature Verification
Given a document with multiple signatures (primary and co-signers) sealed by SignSeal When refreshed LTV data is embedded and/or an archive timestamp is applied Then all signatures remain valid, the PDF revision increments without visual changes, and the document stays read-only such that any byte modification breaks validation And then external validators (Adobe, EU DSS) confirm each signature’s validity and the timestamp chain back to a trusted TSA root
Court-Ready Evidence and Audit Package
"As an insurance adjuster, I want an exportable evidence package and full audit trail so that I can defend the document’s authenticity and signing process during disputes or litigation."
Description

Capture a comprehensive audit trail including signer identities, consent records, authentication steps, IP addresses, user agents, timestamps (system and TSA), document hashes before/after signing, certificate details, and validation results. Allow export of an evidence package containing the sealed PDF, audit log, cryptographic hashes, TSA receipts, and verification instructions. Ensure data integrity with tamper-evident logging and secure retention aligned with legal and customer policies. Make audit details viewable in-app for each document and accessible via API for downstream systems.

Acceptance Criteria
Complete Audit Trail Capture on Signing Event
Given a document is signed by all required parties via SignSeal When the final signature is applied and the document is sealed Then the audit log for the document includes for each signer: full name, email, unique signer ID, and role/order And includes an explicit consent record with timestamp and the version of the e-sign disclosure accepted And includes authentication steps with method, outcome, and timestamp(s) And includes IP address and user agent for each action (viewed, consented, authenticated, signed) And includes system timestamp (UTC, ISO 8601 with milliseconds) and a TSA timestamp token for sealing And includes SHA-256 hashes of the document immediately before signing and after sealing And includes signature certificate details (subject CN, serial, issuer, validity period, key usage) and validation result (valid/invalid, chain, revocation status)
Tamper‑Evident Audit Log Integrity
Given the audit log is stored for a document When an integrity verification is run on the log Then the verification succeeds and reports an unbroken hash chain or signature for all events And when any event is programmatically altered in a test environment Then verification fails and identifies the first compromised record And the system prevents runtime edits to persisted audit records (append-only) And each event includes its own SHA-256 digest and the previous event digest (or signature) to enable chain reconstruction
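A sketch of the hash-chain construction and a verifier that reports the first compromised record:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_event(log: list[dict], event: dict) -> dict:
    """Append-only: each record's digest binds the previous record's digest."""
    prev = log[-1]["current_hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True, separators=(",", ":"))
    record = {
        "event": event,
        "previous_hash": prev,
        "current_hash": hashlib.sha256((prev + payload).encode()).hexdigest(),
    }
    log.append(record)
    return record

def first_compromised(log: list[dict]):
    """Return the index of the first broken record, or None if the chain is intact."""
    prev = GENESIS
    for i, rec in enumerate(log):
        payload = json.dumps(rec["event"], sort_keys=True, separators=(",", ":"))
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["previous_hash"] != prev or rec["current_hash"] != expected:
            return i
        prev = rec["current_hash"]
    return None
```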
Evidence Package Export (ZIP) Contains Required Artifacts
Given a completed, sealed document When a user exports the evidence package Then a single ZIP is generated within 30 seconds and is downloadable for 24 hours And the ZIP contains: the sealed PDF; the audit log in JSON and human-readable PDF; a manifest.json listing all entries with SHA-256; original and final document hashes; TSA receipt files (.tsr/.tsd); signer certificate chain files (PEM/DER); and verification instructions (README.txt) And the manifest's hashes match the actual files in the ZIP And the ZIP filename includes the document ID and signing completion timestamp And the export action is logged in the audit trail with requester identity, IP, and timestamp
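A sketch of the ZIP assembly, taking artifact names mapped to bytes and emitting a matching manifest.json of SHA-256 digests:

```python
import hashlib
import json
import zipfile

def build_evidence_zip(zip_path: str, artifacts: dict[str, bytes]) -> None:
    """Write each artifact plus a manifest listing every entry's SHA-256."""
    manifest = {name: hashlib.sha256(data).hexdigest()
                for name, data in artifacts.items()}
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, data in artifacts.items():
            zf.writestr(name, data)
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
```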
In‑App Audit Details View Per Document
Given a user with permission opens a document's Audit tab When the page loads Then the audit timeline displays chronologically with pagination (minimum 50 events per page) And filters are available for signer, event type, and date range And selecting an event reveals full metadata including consent, auth method/outcome, IP, user agent, timestamps, hashes, certificate details, and validation results And timestamps display in the user’s locale with hover to reveal UTC ISO 8601 And the view supports export of currently filtered events to CSV and JSON
API Access to Audit Data
Given a system integrates via API with a valid OAuth 2.0 access token with scope audit.read When it requests GET /v1/documents/{documentId}/audit-events with optional filters (signerId, eventType, from, to, page, size) Then the response is 200 with paginated results and total count And each event includes: id, type, signer, consent, auth steps, IP, userAgent, timestamps (system and TSA where applicable), hashes, certificate details, validation status, previousHash, and currentHash And when requesting a non-existent documentId the API returns 404 And when missing scope the API returns 403 and no data is leaked And response time p95 is <= 500 ms for pages up to 100 events
LTV and TSA Validation of Sealed PDF
Given the sealed PDF from a completed signing When opened in Adobe Acrobat Reader with network connectivity disabled Then the signature status shows Valid with Long-Term Validation (LTV) and embedded revocation data And the document shows a trusted timestamp from a TSA that chains to a trusted root And independent verification using ETSI-compliant tools validates the PAdES signature and RFC 3161 tokens And the PDF's SHA-256 hash matches the hash recorded in the audit log and evidence manifest
Retention Policy and Legal Hold Enforcement
Given an admin configures a retention policy of N years for audit artifacts When documents reach their retention expiry and are not on legal hold Then the system performs defensible deletion of audit artifacts and evidence packages within 24 hours And a deletion receipt is written to an immutable log with document ID, deletion time, actor, and hashes And when a legal hold is applied to a document, deletions are suspended and the UI/API indicate hold status And retention setting changes are auditable and do not retroactively shorten existing retention

RedactSafe

Enables role‑based, permanent redactions (PII, policy numbers) that maintain a verifiable manifest. Share privacy‑compliant copies with carriers or homeowners without breaking tamper evidence, while keeping a secure, fully detailed original for internal records.

Requirements

Role-based Redaction Policies
"As a compliance officer, I want to configure role-based redaction templates so that shared documents exclude PII while internal staff retain access to originals."
Description

Define and enforce redaction rules by user role, document type, and field sensitivity (e.g., names, addresses, policy numbers) so that shared outputs automatically exclude protected data while internal roles retain full visibility. Integrates with RoofLens RBAC, export pipelines, and PDF generation to apply policies at share/export time and through API. Supports reusable policy templates per carrier and jurisdiction with versioning and change history.

Acceptance Criteria
Carrier Export Applies Role-Based Redactions
- Given a project containing PII fields (names, addresses, policy numbers) and a user with role "Estimator"
- And a redaction policy for role "Estimator" marking those fields as "redact on share/export"
- When the user exports a PDF for carrier sharing
- Then all fields defined in the policy are redacted in the exported PDF
- And no fields not listed in the policy are altered or redacted
- And the policy name, version, and timestamp are recorded in the export metadata
- And the original, unredacted source remains unchanged and accessible to roles permitted for full visibility
Internal Admin Access Retains Full Visibility
Given an Admin role with "view unredacted originals" permission and an active external-sharing redaction policy When the Admin opens the document in RoofLens Then the Admin sees full, unredacted content And an indicator shows that redactions will apply on external share/export And an internal-only download contains unredacted content And audit logs record the view/download event with role and policy context
Document-Type and Field Sensitivity Rules Enforcement
Given policy rules defined per document type (Estimate PDF, Damage Map, Photo Set) and sensitivity tags (PII, Sensitive-Operational) When a user with role "Field Adjuster" exports each document type Then only fields whose sensitivities are marked for that document type and role are redacted And redaction styles (e.g., black box, hash overlay) match policy configuration per field type And redaction is applied at correct coordinates for PDFs and exact pixel regions for images And OCR-extracted text matching policy fields is also redacted in outputs
Carrier/Jurisdiction Policy Templates with Versioning
Given reusable templates mapped to carriers and jurisdictions with semantic versions and change history When a user selects a carrier and jurisdiction at export Then the system applies the highest active template version matching the carrier and jurisdiction, falling back to defaults by precedence And the applied template version, change log reference, and effective date are stored with the export manifest And editing a template creates a new version and preserves prior versions as read-only And an audit trail captures who changed what, when, and a required reason
RBAC-Governed Policy Management
Given RoofLens RBAC defines permissions (create/edit policy, approve template, assign policies to roles) When a user without "edit policy" permission attempts to modify a policy or template Then the action is blocked with a 403/permission error and no changes persist And when a user with "approve template" permission publishes a template, it becomes selectable for exports And policy assignment to roles is only permitted by users with "assign policy" permission And all management actions are captured in immutable audit logs
API Export Applies Policies and Returns Redacted Outputs
Given a POST to /api/exports with target=carrier, role=Estimator, documentType=EstimatePDF, and policyTemplateId or inferred carrier/jurisdiction When the request is valid Then the API responds 202 with an export job id and upon completion provides URLs to the redacted PDF and a JSON manifest of applied redactions And the manifest includes policy id, version, list of fields redacted (name, type, page, coordinates), counts, and checksums of input/output And if no matching policy/template is found, the API returns 409 with a descriptive error and no export is produced And idempotency keys prevent duplicate exports for the same request within a defined window
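A hedged sketch of the request, assuming an Idempotency-Key header and a jobId response field (both illustrative; the criterion only requires an idempotency mechanism and an export job id):

```python
import uuid
import requests

payload = {
    "target": "carrier",
    "role": "Estimator",
    "documentType": "EstimatePDF",
    "policyTemplateId": "tpl_carrierA_v3",  # invented template id
}
resp = requests.post(
    "https://api.rooflens.example.com/api/exports",  # illustrative host
    json=payload,
    headers={
        "Authorization": "Bearer ...",
        # Resending with the same key inside the dedup window must not create
        # a second export job; the exact header name is an assumption.
        "Idempotency-Key": str(uuid.uuid4()),
    },
    timeout=10,
)
if resp.status_code == 202:
    job_id = resp.json()["jobId"]  # assumed field name; poll for the PDF + manifest URLs
elif resp.status_code == 409:
    print("no matching policy/template:", resp.json())
```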
Permanent Burn-in Redactions (PDF/Image)
"As a project manager, I want redactions to be irreversible in shared files so that sensitive data cannot be recovered by recipients."
Description

Render redaction regions directly into pixel data and vector content, flatten layers, remove hidden text, sanitize selectable text, and strip content streams so redacted information cannot be recovered. Applies to RoofLens PDFs, annotated damage maps, and aerial images, ensuring irreversible removal across all exported formats, including multi-page estimates and attachments.

Acceptance Criteria
Raster Exports: Pixel-Level Burn-In
Given a RoofLens image export (PNG and JPEG) with applied redaction regions, When opened in a professional image editor, Then the image contains a single flattened layer with no editable layers, masks, or alpha channels revealing original content. Given the exported PNG and JPEG, When sampling any pixel inside a redacted region, Then pixel values equal the configured redaction fill color exactly (e.g., #000000) with zero variance. Given the exported raster file, When extracting metadata and embedded previews/thumbnails, Then no unredacted preview or EXIF thumbnail contains original content from redacted regions. Given the exported raster file, When scanning for hidden channels or history data, Then no original pixels from redacted areas are recoverable.
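A minimal burn-in sketch using Pillow (the library choice and function names are ours, not the product's). Note that the exact-fill pixel check is most strictly testable on lossless PNG; JPEG's lossy encoding can perturb pixels at region boundaries.

```python
from PIL import Image, ImageDraw

def burn_in(src_path: str, dst_path: str, regions: list[tuple[int, int, int, int]]) -> None:
    """Flatten to RGB and overwrite each region's pixels so nothing survives underneath."""
    img = Image.open(src_path).convert("RGB")  # drops alpha channels and layers
    draw = ImageDraw.Draw(img)
    for box in regions:                        # (left, top, right, bottom) in pixels
        draw.rectangle(box, fill=(0, 0, 0))    # configured redaction fill, here #000000
    img.save(dst_path)                         # re-encode; no edit history is carried over

def verify_fill(dst_path: str, regions) -> bool:
    """Spot-check: every pixel inside a redacted region must equal the fill exactly."""
    img = Image.open(dst_path).convert("RGB")
    px = img.load()
    return all(
        px[x, y] == (0, 0, 0)
        for (l, t, r, b) in regions
        for x in range(l, r)
        for y in range(t, b)
    )
```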
Vector PDFs: Path Flattening and Object Removal
Given a RoofLens PDF containing text, vector paths, and images under a redaction box, When exporting the redacted PDF, Then all content objects intersecting the redaction region are removed from content streams rather than hidden behind overlays. Given the redacted PDF, When attempting to select or extract text within any redacted box, Then zero characters are returned. Given the redacted PDF, When inspecting structure trees, content streams, annotations, and OCG/layers, Then there are no objects, annotations, or optional content groups that reveal redacted content. Given the redacted PDF, When rasterizing pages at 2400 DPI and visually comparing, Then no ghosting or faint original content appears within redaction areas.
Hidden and OCR Text Sanitization
Given a document with invisible text layers and OCR text containing PII beneath imagery, When applying redactions and exporting, Then all hidden/OCR text within redacted regions is deleted from the PDF and not merely clipped or masked. Given the redacted output, When performing a full-text search for the redacted PII strings, Then zero hits are returned. Given the redacted output, When copy/pasting from within a redacted region in common viewers, Then nothing is copied. Given the redacted output, When running text extraction via PDFBox or similar tooling, Then no redacted strings are returned.
Content Stream and Metadata Scrubbing
Given a redacted PDF, When viewing raw content streams and object dictionaries, Then no substrings of removed text or image data for redacted zones are present. Given the redacted PDF, When inspecting XMP metadata, document info, and custom properties, Then no PII values from redacted regions are present. Given the redacted PDF, When analyzing for incremental-save remnants and previous revisions, Then no unredacted content is accessible from earlier revisions. Given the redacted PDF or image, When inspecting embedded fonts, ICC profiles, and associated resources, Then they contain no recoverable redacted PII strings.
Multi-Page and Attachment Coverage
Given a 12-page RoofLens estimate with two embedded image attachments and annotations, When applying redactions and exporting to PDF and a ZIP of page images, Then every targeted region across all pages and attachments is redacted per burn-in rules. Given the same source, When batch-exporting to PDF, PNG, and TIFF, Then redactions are present identically across all formats and pages. Given the exported multi-page PDF, When verifying coordinates, Then no page has redactions missing, offset, or misaligned by more than 1 pixel or 0.2 mm.
Irreversibility Validation via Recovery Attempts
Given a redacted PDF, When lowering opacity, deleting overlay rectangles, toggling layers, or revealing annotations in third-party editors, Then no original content appears because it has been removed. Given a redacted PDF or image, When running qpdf, pdfimages, strings, exiftool, and binwalk over the file, Then no original content from redacted regions is recovered. Given a redacted output, When recompressing, optimizing, or "Save As" in third-party tools, Then redacted areas remain opaque and unchanged, revealing nothing beneath.
Tamper Evidence and Manifest Consistency
Given an original internal record and a generated redacted share copy, When computing cryptographic fingerprints, Then the redacted copy has its own fingerprint and references the original’s fingerprint in a redaction manifest. Given the redacted share copy, When verifying the tamper-evident signature or checksum, Then validation succeeds post-redaction. Given the redaction manifest, When comparing listed regions, pages, fill color, and method with the output, Then they match exactly and indicate permanent burn-in applied.
Verifiable Redaction Manifest and Tamper Evidence
"As an insurance adjuster, I want a verifiable manifest of redactions so that I can prove what was removed without exposing the original document."
Description

Produce a cryptographically signed manifest detailing each redaction (page/coordinate bounds, content classification, policy ID, actor, timestamp, reason) and include SHA-256 hashes of original and redacted artifacts. Embed the manifest and signature in PDF attachments/metadata and expose a verification endpoint and in-app verifier, enabling recipients to confirm integrity and the exact scope of redactions without accessing the original.

Acceptance Criteria
Signed Redaction Manifest Generated on Export
Given a document with at least one redaction applied by an authorized user When the user exports the redacted PDF Then the system generates a redaction manifest for the export And the manifest includes SHA-256 hashes of the original artifact and the redacted artifact And the manifest is cryptographically signed with the platform signing key And the export job stores the signed manifest alongside the redacted PDF
Manifest Details Include Required Fields
Given a document with N redactions across one or more pages When the manifest is generated Then each redaction entry contains: page number, coordinate bounds (x1,y1,x2,y2), content classification, policy ID (if available), actor identifier, timestamp (ISO 8601 UTC), and reason And the manifest includes a total redaction count equal to N And all required fields are present for 100% of redaction entries
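One way the manifest and signature could be produced is sketched below. All field values are invented, and Ed25519 is an illustrative choice: the spec only names "the platform signing key", which in production would be loaded from a KMS or vault rather than generated inline.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_sha256(path: str) -> str:
    return hashlib.sha256(open(path, "rb").read()).hexdigest()

manifest = {
    "manifest_id": "man_001",                       # invented identifier
    "original_sha256": file_sha256("original.pdf"),
    "redacted_sha256": file_sha256("redacted.pdf"),
    "redactions": [
        {
            "page": 3,
            "bounds": [72.0, 540.5, 310.2, 558.0],  # x1, y1, x2, y2
            "classification": "policy_number",
            "policy_id": "pol_carrierA",
            "actor": "user_789",
            "timestamp": "2024-05-01T16:20:00Z",    # ISO 8601 UTC
            "reason": "PII - external share",
        }
    ],
}
manifest["redaction_count"] = len(manifest["redactions"])

# Canonical JSON bytes so the signature is reproducible across verifiers.
key = Ed25519PrivateKey.generate()  # sketch only; real key comes from the platform KMS
payload = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
signature = key.sign(payload)
```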
PDF Embeds Manifest and Signature
Given an export completes for a redacted document When the resulting PDF is inspected Then the PDF embeds the redaction manifest and its signature as PDF attachments and/or metadata And the embedded manifest and signature are discoverable by verifiers without external configuration And verification can proceed without external network access to retrieve the manifest or signature
Verification API Confirms Integrity and Scope
Given a client submits a RoofLens-produced redacted PDF to POST /api/redactions/verify When the API processes the file Then it returns HTTP 200 with verified=true when the signature validates and hashes match And the response includes original_sha256, redacted_sha256, manifest_id, signing_key_id, redaction_count, and an array of redaction entries (page, bounds, classification, reason) And the API-computed SHA-256 of the submitted PDF matches manifest.redacted_sha256
In-App Verifier Validates Without Original Access
Given a user opens the in-app verifier and selects a RoofLens redacted PDF When verification runs Then the app displays Verified = true when the signature validates and hashes match And the app displays the list of redactions with page and coordinate bounds, content classification, and reason And verification completes without accessing or requiring the original (unredacted) file
Tamper Detection Fails Verification on Altered Files
Given a redacted PDF or its embedded manifest/signature has been modified after export When the verification API or in-app verifier processes the file Then verification returns verified=false And the failure reason specifies which check failed (e.g., signature_invalid, redacted_sha256_mismatch, manifest_missing) And no redaction entries are displayed as verified
Privacy-compliant Share Links and Exports
"As a contractor, I want to share privacy-compliant copies with carriers and homeowners so that I meet privacy requirements without extra manual steps."
Description

Provide time-bound, access-controlled share links and downloadable exports that deliver only the redacted artifacts alongside the verification manifest. Integrate with RoofLens share modal and notifications, support recipient watermarking, optional print/download restrictions, and event logging for views/downloads to meet carrier and homeowner privacy requirements without manual post-processing.

Acceptance Criteria
Time‑Bound, Access‑Controlled Redacted Share Link
Given a project with approved redactions and an expiration time set When the owner creates a privacy‑compliant share via the RoofLens share modal and specifies recipients Then the share link returns only redacted artifacts and the verification manifest And the original, unredacted artifacts are not retrievable through the link or its APIs And access requires a valid recipient‑bound token or authenticated email match And requests from non‑authorized users return HTTP 403 And requests after the expiration return HTTP 410 and the token is revoked And link metadata displays share ID and expiration to the recipient
Watermarked Recipient‑Specific Views and Exports
Given watermarking is enabled for a share and a recipient R opens the link When R views or downloads any artifact Then each page/image displays a visible watermark containing R’s identifier (email or label), share ID, and timestamp And the watermark cannot be disabled by the recipient in the viewer And different recipients receive uniquely watermarked files And preview thumbnails also show the watermark And the downloaded file’s hash differs from the non‑watermarked equivalent while remaining verifiable by the manifest
Optional Print/Download Restrictions in Share Viewer
Given print/download restrictions are enabled for a share When a recipient opens the web viewer Then download and print controls are not visible or are disabled And Ctrl/Cmd+P triggers an in‑app block message and does not render print from the viewer And direct asset URLs are short‑lived, recipient‑scoped, and require valid tokens And PDFs generated via the viewer have print and copy permissions disabled in file metadata And when restrictions are disabled, download and print controls are available
Verification Manifest Delivery and Integrity
Given a redacted share is created When a recipient accesses the link Then a verification manifest is available for download alongside the artifacts And the manifest lists artifact IDs, redaction regions, timestamps, and SHA‑256 hashes of the redacted files And the manifest is signed with the platform signing key and includes a signature and public key reference And recomputing a file’s hash locally matches the manifest value And signature verification using the published public key succeeds
Tamper‑Evidence Preservation End‑to‑End
Given a recipient downloads any redacted artifact from a share When the file is altered in any way Then verification using the manifest/signature reports the file as modified/tampered And unaltered files verify as valid and untampered And internal originals remain unchanged and accessible only to authorized internal users And the share does not expose any reference or link to originals
Share Modal Configuration and Validation
Given a user opens the RoofLens share modal on a project with redactions When the user selects Privacy‑compliant Redacted Share, sets recipients, expiration, watermarking, and restrictions Then the modal validates required fields and enforces expiration between 1 hour and 30 days And it displays a count of included redacted artifacts and a redacted preview And saving creates the share, shows success state, and copies the share URL on demand And policy defaults (e.g., watermarking on) are applied per organization settings And any errors (invalid recipient, past expiration, missing artifacts) are shown inline with actionable guidance
Event Logging for Views and Downloads
Given any view or download occurs via a share link When the event happens Then an audit record is persisted within 1 second including share_id, artifact_id, recipient_id or token hash, event_type (view/download), timestamp (UTC), IP, and user_agent And duplicate events are de‑duplicated by share_id+artifact_id+recipient+event_type within a 5‑minute window And owners can filter and export events to CSV by date range and share_id And optional notifications are sent to the owner on first view and first download per share when enabled
Auto-Detection of PII and Sensitive Fields
"As an estimator, I want the system to auto-suggest sensitive data to redact so that I can prepare shareable documents faster and with fewer errors."
Description

Use OCR and entity recognition to detect and classify PII and domain-specific identifiers (names, addresses, emails, phone numbers, policy and claim IDs) within PDFs, tables, annotations, and images. Pre-populate suggested redaction zones with confidence scores and policy mappings, allowing users to confirm, adjust, or dismiss suggestions before burn-in. Supports language/locale settings and customizable sensitivity thresholds.

Acceptance Criteria
Detect PII across PDFs, images, tables, and annotations
Given a 20-page PDF (≤50 MB) containing vector text, embedded raster images, tables, and PDF annotations with PII classes (name, address, email, phone, policy ID, claim ID) When the document is uploaded with locale = en-US and default sensitivity thresholds Then analysis completes in ≤90 seconds And suggested redaction zones are returned for each detected entity with fields: class, page, bounding box (x,y,w,h in points), confidence (0–1), and policyRuleId And text rotated up to ±90° and skew up to 10° is detected And micro-averaged precision ≥0.95 and recall ≥0.90 against the project’s ground-truth test set of 200 documents
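One detected entity might be represented as follows; field names mirror the criterion above, while the values and rule id are invented for illustration.

```python
suggestion = {
    "class": "policy_id",
    "page": 4,
    "bbox": {"x": 126.0, "y": 512.3, "w": 140.5, "h": 12.8},  # points
    "confidence": 0.93,                       # 0-1, compared against the active threshold
    "policyRuleId": "rule_pii_policy_number", # invented rule id
    "status": "suggested",                    # -> accepted / adjusted / dismissed in review
}
```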
Review and adjust suggested redaction zones
Given detection results have been generated for a document When a Redacter opens Redaction Review Then each suggestion displays class label, confidence, policy rule name, and a visible bounding box on the page And the user can Accept, Adjust (resize/move), or Dismiss each suggestion individually or via multi-select And adjustments snap to underlying text regions with a tolerance of 3 points And keyboard shortcuts (A=Accept, D=Dismiss, Arrow keys=Move, Shift+Arrow=Resize) perform the same actions and are accessible And accepted items are marked “Ready for burn-in” without applying redaction yet
Workspace and per-session sensitivity thresholds
Given workspace defaults define per-entity minimum confidence thresholds When an Admin updates thresholds and saves Then new analyses use the updated thresholds and persist for the workspace And for an already analyzed document, when a Redacter adjusts the per-session threshold filter Then the suggestion list updates within ≤1 second by filtering existing results (no re-OCR) And a control allows re-running detection with the new thresholds; when invoked, a new analysis version is created and previous results remain accessible via version history
Locale-aware recognition for en-US and es-MX
Given the workspace locale is set to en-US When analyzing documents with US-format phone numbers, addresses, and policy IDs Then detection meets precision ≥0.95 and recall ≥0.90 on the en-US validation set And given the locale is set to es-MX When analyzing documents containing Spanish names, Mexican addresses and phone formats (+52) Then detection meets precision ≥0.93 and recall ≥0.88 on the es-MX validation set And date, number, and address tokenization follows locale rules so that addresses are not split across lines in bounding boxes
Transparent handling of OCR gaps and low-confidence items
Given pages with low contrast or unrecognized fonts When OCR confidence for a page drops below 0.5 Then the page is flagged “OCR incomplete” and suggestions are withheld for that page And the user is presented with options to enhance image and retry OCR or proceed without suggestions And entities below the active threshold appear under a “Low confidence (filtered)” panel and are not selected by default And all failure reasons are recorded in the detection manifest with page references
Role-based access to detection and suggestion actions
Given roles Viewer, Redacter, and Admin exist When a Viewer opens a document Then detection suggestions are hidden and no redaction actions are available And when a Redacter opens the document Then suggestions are visible and Accept/Adjust/Dismiss actions are enabled And when an Admin opens the document Then threshold and policy configuration controls are available And all actions on suggestions (create, accept, adjust, dismiss) are appended to the tamper-evident manifest with userId, timestamp (UTC), entityId, old/new bbox, previousStatus→newStatus, and manifest chain hash
Redaction Editor and Live Preview
"As a team member, I want an intuitive redaction editor with live preview so that I can finalize compliant documents accurately and efficiently."
Description

Offer an in-document editor to add, resize, and label redaction regions; assign reasons and policies; and view a live side-by-side preview of the final redacted output. Provide keyboard shortcuts, undo/redo, snap-to-annotation guides, and validation that all required fields are covered per selected policy before allowing share/export.

Acceptance Criteria
Create and Adjust Redaction Regions on a Document Page
Given a document page is loaded and the Redaction tool is active When the user drag-selects an area Then a redaction region is created with visible resize handles, a default label of "Unlabeled", and no policy selected Given a redaction region exists When the user drags a corner or edge handle Then the region resizes without aspect constraint, enforces a minimum size of 8x8 px, and saves final bounds to 1 px precision within page limits Given a redaction region exists When the user drags inside the region Then the region moves within page bounds and its final position is saved to 1 px precision
Assign Reason, Label, and Policy to a Redaction Region
Given a redaction region is selected When the properties panel is opened Then the user can select a Reason from a predefined list, enter a Label up to 50 characters, and select a Policy from available policies Given the selected Policy defines allowed reasons When the user saves region properties Then the region must include a Reason that is allowed by the selected Policy or an inline validation error is shown and the save is blocked Given the user enters a Label When saving properties Then leading and trailing whitespace is trimmed and empty labels are rejected with an inline error
Live Side-by-Side Preview Reflects Redactions in Real Time
Given the preview panel is open When a redaction region is created, moved, resized, deleted, or its properties change Then the preview updates to the redacted rendering within 300 ms of interaction end and visually matches final export styling Given multiple edits occur in rapid succession under 200 ms apart When rendering the preview Then updates are debounced so the preview reflects the latest state within 300 ms of the last edit Given the preview toggle is On When the document is reopened in the editor Then the preview panel state persists
Keyboard Shortcuts and Undo/Redo for Redaction Editing
Given the editor has focus When the user presses R Then the Rectangle Redaction tool becomes active Given a region is selected When the user presses Delete or Backspace Then the region is removed Given any edit action (create, move, resize, delete, property change) has occurred When the user presses Ctrl/Cmd+Z Then the last action is undone and the preview updates accordingly Given there is an undone action When the user presses Ctrl/Cmd+Shift+Z Then the action is redone and the preview updates accordingly Given a region is selected When the user presses an Arrow key Then the region nudges by 1 px; holding Shift nudges by 10 px Given the user begins drawing a new region When they press Esc Then the in-progress drawing is canceled and no new region is created Given an active editing session When performing sequential edits Then the undo history retains the last 50 actions across the session
Snap-to-Annotation Guides Improves Alignment
Given existing annotations or page edges are present When a redaction region edge, centerline, or corner is within 6 px of a guide Then the region snaps to the guide and a snap indicator is shown Given the user holds Alt/Option while dragging When within snap tolerance Then snapping is temporarily disabled Given snapping occurs When the user releases the mouse Then the snapped position is saved and reflected in the preview
Policy Coverage Validation Blocks Share/Export Until Complete
Given a policy is selected for the document When validation runs Then each policy-required field type must be covered by at least one redaction region assigned a matching Reason Given validation fails When the user attempts to Share or Export Then the action is blocked and a modal lists missing required field types with page numbers and counts Given validation passes with zero missing required items When the user opens Share or Export Then the action is enabled and proceeds without blocking errors Given the coverage state is displayed When regions are added or removed Then the counts of covered and missing items update in real time
Multi-Page Editing and Preview Consistency
Given a multi-page document is loaded When navigating between pages Then redaction regions are shown and editable only on the active page and the preview reflects the active page's redactions Given a region is copied on page N When the user pastes on page M Then the region is created on page M at the same coordinates if within bounds; otherwise its position is clamped to fit within page bounds Given the page zoom level changes When editing or hit-testing regions Then region geometry is preserved in document coordinates and cursor hit-testing remains accurate within 1 px
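A small sketch of the paste-with-clamping rule from the criterion above, assuming document coordinates with the origin at the top-left (the function and parameter names are ours):

```python
def clamp_region(x: float, y: float, w: float, h: float,
                 page_w: float, page_h: float) -> tuple[float, float, float, float]:
    """Paste at the same coordinates, clamped so the region always fits the target page."""
    w, h = min(w, page_w), min(h, page_h)  # a region can never exceed the page itself
    x = min(max(x, 0.0), page_w - w)       # shift inward rather than truncating content
    y = min(max(y, 0.0), page_h - h)
    return x, y, w, h
```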
Secure Original Preservation and Access Controls
"As a security administrator, I want originals preserved securely with tightly controlled access so that we can reference full details internally without risking external exposure."
Description

Store immutable originals in separately encrypted storage with strict RBAC, just-in-time access approvals, and full audit trails. Redacted copies are generated on demand; originals are never exposed via share links or downloads. Implement key rotation, secret management, and metadata/EXIF scrubbing to prevent leakage of hidden data across all export channels and APIs.

Acceptance Criteria
Immutable Originals in Separately Encrypted Storage
Given an original asset is uploaded When it is persisted Then it is encrypted with a key unique to the originals partition and distinct from derivative keys Given retention is configured to a policy window When a modify or delete is attempted before expiry Then the operation is rejected with 403/MethodNotAllowed and the attempt is immutably logged Given an authorized service reads an original When decryption occurs Then TLS in transit is enforced and KMS key usage is recorded with keyId and requestor context Given daily integrity verification is scheduled When the job runs Then a randomized sample of originals is hash-verified against the manifest and any mismatch triggers a P1 alert
Strict RBAC on Original Access
Given defined roles (Estimator, Adjuster, Viewer, Admin) When requesting access to an original Then only principals with the Original.View permission can initiate a JIT request; all others receive 403 without disclosing asset metadata Given a service token without originals:read scope When calling the originals API endpoint Then the request is denied and only redacted endpoints are returned in the error payload Given a permission change occurs When evaluating a new access request Then the decision reflects the latest policy within 60 seconds and is recorded in the audit log
Just-in-Time (JIT) Access Approval Workflow
Given a user submits a JIT request for an original When the request is created Then approvers receive a notification containing assetId, purpose, and requested duration Given an approver grants JIT for T minutes When the requester opens the asset Then access is view-only in-app, download is disabled, and the session auto-expires at T with no grace period Given an active JIT session exists When an approver revokes it Then access terminates within 30 seconds and subsequent requests return 401/expired-session Given a batch JIT request includes multiple assets When approvals are recorded Then each asset has an independent approval record and expiry
Audit Trail Completeness and Tamper Evidence
Given any action against originals (create, read, deny, approve, rotate) When the action occurs Then an immutable audit record is written including actor, time, IP, assetId, action, decision, and reason Given an audit export is generated When the manifest is built Then each record includes a hash and the export includes a Merkle root enabling verification of completeness and integrity Given a privacy review queries share events When filtering for original-byte egress Then zero events exist and the query returns only redacted asset deliveries
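A minimal Merkle-root computation over per-record hashes, as referenced in the export criterion. This assumes at least one record; strategies for odd-sized levels vary, and here the odd node is simply carried up.

```python
import hashlib

def merkle_root(record_hashes: list[bytes]) -> bytes:
    """Pairwise-hash each level up the tree; an unpaired node is carried up unchanged."""
    level = record_hashes
    while len(level) > 1:
        nxt = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level) - 1, 2)
        ]
        if len(level) % 2:
            nxt.append(level[-1])
        level = nxt
    return level[0]
```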
Originals Never Exposed via Share Links or Downloads
Given a user creates a share link for a job When an external party accesses the link Then only redacted copies are served and attempts to target original IDs or paths return 404/403 without revealing that an original exists Given a direct download endpoint is invoked with an original assetId When the request is processed Then the endpoint returns 403 and instructs the requester to use the JIT workflow Given CDN caching is enabled When redacted assets are cached Then cache keys and origins never map to original storage locations and originals are not present in CDN logs
Metadata and EXIF Scrubbing on All Exports and APIs
Given a PDF, JPEG, PNG, or ZIP export is generated When the file is produced Then EXIF/IPTC/XMP, GPS coordinates, camera serials, and hidden comments/layers are stripped except required PDF/A fields Given an image or PDF is delivered via API When metadata is scanned by automated tests Then no PII or hidden metadata is detected across 20+ common tags and the check passes with zero findings Given filenames are constructed for exports When files are saved Then filenames contain no PII/policy numbers/GPS and follow the pattern {jobId}_redacted_{artifact} Given CI runs on the export service When metadata regression is detected Then the build fails and blocks deployment to any environment
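A raster-side sketch of the scrub using Pillow: rebuilding the image from raw pixels drops EXIF/IPTC/XMP blocks, GPS tags, and embedded thumbnails. PDF sanitization (XMP, document info, hidden layers) needs a PDF-aware tool and is out of scope for this sketch.

```python
from PIL import Image

def scrub_raster(src: str, dst: str) -> None:
    """Copy pixel data only; no metadata blocks are carried into the new file."""
    img = Image.open(src).convert("RGB")  # flattens palette/alpha for simplicity
    clean = Image.new(img.mode, img.size)
    clean.putdata(list(img.getdata()))
    clean.save(dst)
```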
Key Rotation and Secret Management Without Downtime
Given quarterly key rotation is scheduled When rotation executes Then originals remain readable and are re-encrypted in the background with a 5xx error rate below 0.1% during the window Given an old key is retired When decryption is attempted with the retired keyId Then access is denied and an on-call alert is generated with affected asset counts Given secrets are managed in a vault When an engineer attempts to retrieve plaintext outside approved paths Then the request is denied and audited; applications use short-lived tokens only Given an emergency rotation drill is initiated When keys are rolled early Then recovery completes within 30 minutes and post-rotation integrity checks pass 100%

CodeCrosswalk

Automatically maps detected materials and damages to the exact jurisdiction‑ and carrier‑specific line codes (Xactimate, CoreLogic/Symbility, and franchise templates). Uses ZIP/county, policy type, and carrier rules to pick the right codes and units, maintains a versioned crosswalk, and highlights deltas when rules update—reducing rekeys, rejections, and reviewer friction.

Requirements

Jurisdiction-aware Code Selection Engine
"As a roofing estimator, I want RoofLens to automatically map detected materials and damages to the correct carrier- and jurisdiction-specific line codes so that I can generate compliant estimates without manual rekeying."
Description

Implements a deterministic rules-driven engine that maps RoofLens-detected materials and damages to the exact jurisdiction- and carrier-specific line codes (Xactimate, CoreLogic/Symbility, and franchise templates). Inputs include ZIP/county, carrier, policy type, dwelling type, and detected roof components. The engine applies precedence, exclusions, and conditional logic (e.g., deductible type, repair vs. replace) to select correct codes, units, modifiers, waste factors, and notes. Provides stable, idempotent results, with explicit fallbacks for unmapped items, and flags any confidence gaps. Integrates post-detection in the estimate pipeline and returns a normalized payload: line code, description, UoM, quantity, pricing context, rule reference, and rationale. Supports sub-structures (e.g., detached garage), multi-slope assemblies, and localized code sets. Must compute results within target latency (<2 seconds per job) and expose API/SDK entry points for synchronous and batch use.

Acceptance Criteria
Deterministic and Idempotent Mapping
Given fixed inputs including ZIP, county, carrier, policy type, dwelling type, deductible type, repair/replace intent, and detected roof components And a specified ruleset version and effective date When the engine is executed multiple times synchronously and asynchronously Then the normalized outputs (line items, order, quantities, UoM, modifiers, waste factors, notes, pricing context, rule references, rationale) are identical across runs And the outputs contain no run-specific non-deterministic fields And repeated submissions with the same idempotency_key return the same result without re-execution
Jurisdiction- and Carrier-Aware Code Selection
Given inputs specifying ZIP and county that map to a jurisdiction, and a carrier and policy type, and a dwelling type And detected materials and damages including multi-slope assemblies When the engine evaluates applicable rules with precedence and exclusions Then it selects the correct code set (Xactimate/CoreLogic/Franchise) and exact line codes, UoM, quantities, modifiers, waste factors, and required notes per jurisdiction/carrier policy And repair vs. replace and deductible type conditions alter selections as defined by the rules And no disallowed or excluded codes are emitted for the given jurisdiction/carrier/policy type
Explicit Fallbacks for Unmapped Items and Confidence Gaps
Given detected components lacking a mapped rule for the active jurisdiction/carrier When the engine processes the job Then each unmapped item is returned with a standardized fallback entry including placeholder code, description, UoM, estimated quantity, and a confidence_gaps array And each fallback includes a human-readable rationale, rule_reference=null, and a machine-readable reason code And the engine emits a top-level warning flag and does not drop unmapped items silently
Normalized Payload Completeness and Traceability
Given a job with a primary structure and a detached garage, each with multiple slopes When the engine returns results Then every line item includes: line_code, description, uom, quantity, modifiers, waste_factor, required_notes, pricing_context (price_list_id or carrier context), structure_id, slope_id, rule_reference (id and version), and rationale And line items are grouped or taggable by structure_id and slope_id to support downstream formatting And rule_reference.id and version correspond to an existing ruleset entry at the time of evaluation
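A normalized line item satisfying this criterion might look like the following; every value, including the line code and price list id, is invented for illustration.

```python
line_item = {
    "line_code": "RFG240",            # invented code for illustration
    "description": "Laminated composition shingle replacement",
    "uom": "SQ",
    "quantity": 25.80,
    "modifiers": [],
    "waste_factor": 0.10,
    "required_notes": ["Includes starter course per carrier guideline"],
    "pricing_context": {"price_list_id": "TXDA_MAY24"},  # invented price list
    "structure_id": "detached_garage",
    "slope_id": "slope_2",
    "rule_reference": {"id": "rule_1042", "version": "3.2.0"},
    "rationale": "Laminated shingle detected on 6/12 slope; replace per carrier policy",
}
```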
Latency Under Target SLO
Given production-equivalent hardware and configuration and up to 10 concurrent synchronous requests And jobs each containing up to 150 detected components When 200 consecutive jobs are executed through the synchronous API Then each job completes engine evaluation in under 2000 ms measured from engine entry to payload emission And the batch API processes 200 jobs with average per-job engine time under 2000 ms and overall completes without timeouts
API and SDK Entry Points for Sync and Batch
Given the published API and SDK When a client invokes the synchronous endpoint with required inputs (ZIP/county, carrier, policy type, dwelling type, components) and optional parameters (ruleset_version, effective_date, idempotency_key) Then the service returns HTTP 200 with the normalized payload on success within the latency SLO; HTTP 4xx/5xx with machine-readable errors on invalid inputs or internal failures And the batch endpoint accepts a list of jobs, returns a job_id, supports polling and callback webhooks, and delivers per-job results and errors And SDK methods exist for at least Node.js and Python with typed models matching the API schema
Versioned Rules Repository & Governance
"As an operations admin, I want to manage a versioned library of mapping rules with approvals and effective dates so that estimates are traceable and reproducible over time."
Description

Creates a centralized, versioned repository for crosswalk rules with effective-date windows by geography, carrier, template type, and policy constraints. Supports draft/review/publish workflow, role-based permissions, approvals, and rollback. Imports rule updates from carriers, vendor catalogs, and franchise templates; validates schema, detects conflicts, and runs regression checks against fixtures. Each rule is addressable (rule ID), semantically versioned, and annotated with source, citation, and changelog. Projects can be locked to a specific rule version for reproducibility, while allowing later comparison to newer versions. Exposes admin UI and APIs for CRUD, bulk import/export, and automated nightly sync jobs.

Acceptance Criteria
Publish Versioned Rule with Effective Dates and Scope Filters
Given a valid draft rule with ruleId, scope (ZIP/county, carrier, templateType, policyType), effectiveStart <= effectiveEnd, and mapped codes/units And the rule passes schema validation When an approver publishes the rule Then the rule status is Published with a semver assigned And the rule is retrievable via API/UI by geocode, carrier, templateType, policyType, and an as-of date within the effective window And requests outside the effective window do not return the rule And publishing another rule with an overlapping window and identical scope is blocked unless an explicit supersede link is provided, returning error code RULE_WINDOW_OVERLAP
Draft-Review-Publish Workflow and Role Permissions
Given roles Admin, RuleEditor, Approver, and Viewer are configured When a RuleEditor creates or edits a draft rule Then only Approver or Admin can approve and publish the draft And the author of the draft cannot self-approve when a separate approver exists And unauthorized roles attempting to publish receive HTTP 403 with reason INSUFFICIENT_ROLE And all actions (create, edit, submit, approve, publish) are logged with timestamp, actor, and before/after diffs
Carrier/Vendor Catalog Import with Schema Validation and Conflict Detection
Given a carrier or vendor catalog update file is submitted via bulk import or nightly sync When the import job runs Then each record is validated against the rules JSON schema; invalid records are rejected with line-level errors and the job fails if any critical errors exist And valid records create or update draft rules with source set (Carrier/Vendor/Franchise) and citation captured And conflicts with existing rules (same scope with overlapping dates or unit mismatches) are flagged with a conflict report and publishing is blocked until resolved or superseded And the job produces a summary with counts of inserted, updated, skipped, and errors accessible via API/UI
Automated Regression Checks Against Fixtures on Rule Changes
Given a library of fixture projects with expected codes/units/estimates is configured When a rule change is submitted for publish Then a regression test suite executes automatically against all relevant fixtures And publish is blocked if any fixture deviates beyond configured thresholds, with a detailed diff of line items, codes, units, and totals And passing results are stored with the ruleId@version for traceability
Semantic Versioning, Rule Identification, and Metadata
Given a rule is modified When the change is published Then semantic versioning is applied: patch for metadata-only/non-behavioral changes, minor for additive non-breaking, major for breaking changes And the rule is addressable by ruleId and version (ruleId@version) via API And metadata includes source, citation URL, change summary, author, approvedBy, and a timestamped changelog entry And querying by ruleId returns all versions sorted by version and effective window
Project Lock to Specific Rule Version and Delta Comparison
Given a project is created at time T with ruleset version V and is locked to V When newer rule versions are published Then all computations for the project continue to use V until explicitly upgraded And a comparison view shows deltas between V and the latest for the project scope, including added/removed/changed codes, units, and pricing impacts And upgrading re-runs estimates under the new version and preserves prior results with an audit trail
Rollback Published Rule Version
Given a published rule version V is causing defects When an Admin performs a rollback to prior stable version Vprev for the same scope Then V is marked Deprecated and excluded from resolution for as-of dates after the rollback timestamp And Vprev is reactivated as Current with a changelog entry recording the rollback reason and actor And API/UI reflect the rollback within 1 minute and list impacted projects and rules in an audit report
Delta Diff & Update Notifications
"As an estimator, I want to see what changed when mapping rules update and optionally apply those updates to my open estimates so that I can keep submissions compliant without surprises."
Description

Highlights and communicates differences when crosswalk rules change. For each affected estimate, shows a side-by-side diff of before/after codes, units, quantities, pricing context, and notes; indicates reason (rule ID change, effective-date rollover, new carrier guideline). Allows users to preview, selectively accept, or defer updates, and to re-run mapping in batch. Sends configurable in-app, email, and webhook notifications; provides a dashboard of impacted projects with severity ranking. All applied updates are logged with user, timestamp, previous value, new value, and justification for audit and rollback.

Acceptance Criteria
Side-by-Side Diff Rendering of Crosswalk Changes
Given an existing estimate with crosswalk version Vprev and a newer crosswalk version Vnew that changes at least one mapped line When the user opens the Delta Diff view for that estimate Then the system renders a two-column before/after comparison for every changed line item including: code, unit, quantity, price list/version or basis, notes, line total, and per-line delta And unchanged line items are hidden by default with a toggle to "Show unchanged" And each changed line displays a Reason badge with one of: Rule ID change, Effective-date rollover, New carrier guideline, Deprecated code, Unit normalization And the view displays the overall estimate total before, after, and absolute and percent delta And the diff loads in ≤ 2 seconds for estimates up to 300 line items on a standard broadband connection And numeric values display with locale-appropriate formatting and original unit precision preserved
Selective Acceptance and Deferral of Changes
Given a Delta Diff showing N changed line items When the user selects a subset of line items and clicks Apply Updates Then only the selected changes are committed to the estimate and deferred items remain flagged as "Pending" And the system prompts for a justification note (0–500 chars) if the org policy requires it; otherwise it is optional And the estimate version increments and a read-only snapshot of the previous version is created And taxes, waste, and O&P are recalculated consistently with org settings And the apply operation completes in ≤ 1.5 seconds for up to 100 selected changes And the UI provides "Accept All" and "Defer All" controls and a per-line toggle
Batch Re-run and Apply Across Impacted Projects
Given the Impacted Projects dashboard filtered to a set of K projects with pending crosswalk updates When the user with Batch_Update permission starts a batch re-run Then the system recomputes mappings using crosswalk version Vnew for all selected estimates and stores a preview diff for each And a batch job status panel shows counts for queued, processing, succeeded, failed, and skipped, updating at least every 5 seconds And individual failures do not block other estimates; failures include error codes and retry guidance And the batch is idempotent via a client-supplied idempotency key; retries do not duplicate work And throughput is at least 50 estimates per minute under default plan conditions And upon completion the user can "Apply All" or apply per-project from the results list
Configurable Notifications for Rule Updates
Given org-level notification settings for channels (in-app, email, webhook), severity threshold, frequency (immediate or daily digest), recipients, and webhook secret When a crosswalk update impacts at least one estimate in the org Then notifications are sent per settings and channel within the configured frequency window And in-app notifications display a badge count and link directly to the Impacted Projects dashboard And emails include project name, estimate ID, previous/new crosswalk versions, number of affected lines, and severity bucket in the subject or preheader And webhook POST payload includes: event_type, org_id, project_id, estimate_id, crosswalk_prev_version, crosswalk_new_version, severity_score, change_counts, and HMAC-SHA256 signature in the header And webhook delivery retries up to 3 times with exponential backoff on HTTP 5xx and marks permanent failure on 4xx And user- or org-level unsubscribe/opt-out preferences are honored
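Receiver-side signature verification is only a few lines; a sketch assuming the header carries the hex-encoded digest (the encoding is an assumption, since the criterion specifies only HMAC-SHA256 in a header):

```python
import hashlib
import hmac

def verify_webhook(raw_body: bytes, signature_header: str, secret: bytes) -> bool:
    """Recompute HMAC-SHA256 over the raw body and compare in constant time."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```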
Impacted Projects Dashboard with Severity Ranking
Given crosswalk changes affecting multiple estimates When the user opens the Impacted Projects dashboard Then the list shows each project with badges for carrier, jurisdiction, last updated, and a severity bucket And severity bucket is computed as: Critical if total delta ≥ 10% or any regulatory code change; High if 5% ≤ delta < 10%; Medium if 1% ≤ delta < 5%; Low if delta < 1% And the dashboard supports filtering by carrier, jurisdiction, severity, and date range (default last 90 days), sorting by severity and last updated, and pagination for 25 rows per page And clicking a row opens the Delta Diff view for that estimate And the dashboard loads within ≤ 2 seconds for up to 1,000 impacted estimates
Audit Logging and Role-Based Rollback
Given any update is applied (single or batch) When the update is committed Then an immutable audit record is created per changed field including: user_id, role, timestamp (UTC ISO 8601), estimate_id, line_item_id, field_name, previous_value, new_value, reason, crosswalk_prev_version, crosswalk_new_version, justification, and request_id And audit entries are viewable and exportable (CSV and JSON) by users with Audit_View permission And a user with Rollback permission can restore an estimate to any prior snapshot; the rollback creates its own audit record And rollback is blocked for locked estimates; the user sees a clear error with the lock reason And the audit log preserves order and is tamper-evident via hash chaining or provider-backed immutability
Reason Traceability and Context Pinning
Given an estimate that was mapped under specific context (ZIP/county, policy type, carrier) and crosswalk version Vprev When a newer crosswalk version Vnew triggers changes Then the diff displays the governing rule IDs, effective dates (before/after), and links or references to carrier guidelines where applicable And the estimate metadata pins the applied crosswalk version and context so re-runs are comparable across time And if context (ZIP/county, policy type, or carrier) has changed since original mapping, the diff explicitly flags the context change as a reason And deprecated or superseded rules show "Deprecated" or "Superseded" reasons respectively
Unit Normalization & Conversion Mapping
"As a carrier reviewer, I want quantities automatically converted and rounded to my required unit conventions so that estimates pass validation on first submission."
Description

Normalizes RoofLens measurement outputs into carrier- and template-specific units of measure and rounding conventions. Applies UoM conversions (e.g., SQ from sqft, LF, EA), waste factors, coverage multipliers, and minimums per material/damage type. Enforces rounding rules (e.g., banker’s vs. conventional), decimal precision, and carrier-specific preferences (e.g., drip edge in LF vs. EA). Validates consistency between quantities and selected codes; surfaces warnings for improbable combinations. Provides a reusable service used by the selection engine and export connectors to ensure consistent, reproducible quantities across all destinations.

Acceptance Criteria
Shingle Area SQ Normalization with Banker’s Rounding (Carrier A)
Given carrier=Carrier A, targetUoM=SQ, precision=0.01, roundingMode=Bankers, wasteFactor=10% (applied after SQ conversion), min=1.00 SQ, and measuredArea=2,345 sqft When normalization runs Then baseQuantitySQ = 23.45 And quantityWithWaste = 25.795 And roundedQuantity = 25.80 And normalizedQuantity = 25.80 SQ And result includes {uom:"SQ", precision:2, roundingMode:"Bankers"}
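The arithmetic in this scenario can be reproduced exactly with Python's decimal module; binary floats would make the .795 tie unreliable, which is why the rounding mode matters.

```python
from decimal import Decimal, ROUND_HALF_EVEN

sqft = Decimal("2345")
base_sq = sqft / Decimal("100")          # 23.45 SQ
with_waste = base_sq * Decimal("1.10")   # 25.7950 (waste applied after SQ conversion)
rounded = with_waste.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
assert rounded == Decimal("25.80")       # exact half; banker's rounding lands on the even digit
final = max(rounded, Decimal("1.00"))    # carrier minimum of 1.00 SQ
```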
Drip Edge UoM Selection by Carrier
Given carrier=Carrier X, preferenceUoM=LF, precision=1, roundingMode=Conventional, measuredDripEdge=128.4 LF When normalization runs Then normalizedQuantity = 128 LF And result includes {uom:"LF", precision:0} Given carrier=Carrier Y, preferenceUoM=EA with packagingLength=10 LF/EA, roundingMode=Ceiling, measuredDripEdge=128.4 LF When normalization runs Then normalizedQuantity = 13 EA And result includes {uom:"EA", conversion:"LF/10 -> ceil"}
Ridge Cap LF with Waste and Minimum Enforcement
Given targetUoM=LF, precision=1, roundingMode=Conventional, wasteFactor=15% (after measurement), min=10 LF, measuredRidge=8.2 LF When normalization runs Then quantityWithWaste = 9.43 LF And roundedQuantity = 9 LF And minApplied = true And normalizedQuantity = 10 LF
Underlayment Rolls: Coverage, Layers, Waste, Ceiling
Given targetUoM=EA (rolls), coveragePerEA=200 sqft/EA, layers=2, wasteFactor=10%, roundingMode=Ceiling, measuredArea=1,850 sqft When normalization runs Then effectiveArea = 3,700 sqft And areaWithWaste = 4,070 sqft And rawEA = 20.35 And normalizedQuantity = 21 EA
Consistency Across Selection Engine and Export Connectors with Rule Version Pinning
Given jobId=J123, rulesVersion="v1.8", identical inputs, and both SelectionEngine and ExportConnector invoke the Normalization Service When normalization runs in both contexts Then outputs match on {normalizedQuantity, uom, precision, roundingMode, minApplied, rulesVersion} And normalizationId is identical across contexts When rules update to "v1.9" but job remains pinned to "v1.8" Then subsequent exports use unchanged "v1.8" results And when compareToLatest is requested Then a diff object is returned showing any value changes under "v1.9" without altering pinned results
UoM-Code Consistency and Improbable Combination Warnings
Given selectedLineCode expectsUoM="SQ" but normalizedUoM="LF" When validation runs Then export is blocked with errorCode="UOM_MISMATCH" and a corrective hint showing expected UoM Given ridgeLength=400 LF, roofArea=1,000 sqft (10 SQ), warningThreshold.ridgeLFPerSQ=5.0 When validation runs Then a warning code="IMPROBABLE_RATIO_RIDGE_PER_SQ" is emitted with observed=40.0, threshold=5.0 And export may proceed while warning is persisted
Multi-Format Export Connectors
"As an estimator, I want one-click exports of the mapped line items in the exact format required by my carrier or franchise so that I can submit without reformatting."
Description

Generates ready-to-submit estimate payloads in carrier and franchise-required formats, including Xactimate, CoreLogic/Symbility, and franchise templates. Maps internal normalized line items to destination fields (codes, descriptions, UoM, quantities, notes, price context) and performs preflight validation to catch format or required-field issues. Supports one-click export via UI and asynchronous API jobs, with downloadable files and webhooks on completion. Includes destination-specific test suites and fixtures, schema version negotiation, and graceful error reporting with actionable remediation hints.

Acceptance Criteria
UI One-Click Export to Selected Destination
Given a completed estimate with a selected destination (Xactimate, CoreLogic/Symbility, or Franchise) and no preflight errors When the user clicks Export in the UI Then an export job is created with status "Queued" within 1 second And the UI displays live status updates (Queued, Processing, Completed, Failed) And upon completion, a downloadable file appears with a filename including estimate_id, destination, and schema_version And the download link remains valid for at least 60 minutes And the exported file validates against the negotiated destination schema without errors
API Export Job with Webhook Notification
Given an authenticated POST to /exports with estimate_id, destination, and callback_url When the request body is valid Then the API responds 202 with job_id and initial status "Queued" And the job transitions through statuses until "Completed" or "Failed" And upon completion, the system POSTs a signed JSON payload to callback_url containing job_id, status, destination, schema_version, and file_url when Completed And the signature is HMAC-SHA256 in header X-Signature using the tenant's shared secret And delivery is retried up to 3 times with exponential backoff starting at 30 seconds on non-2xx responses And file_url is time-bound (>= 60 minutes) and tokenized (single-use)
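A sketch of the delivery side, assuming a hex-encoded digest in X-Signature and the documented 3-retry, 30-second exponential backoff; the function and header encoding are illustrative.

```python
import hashlib
import hmac
import json
import time
import requests

def deliver(callback_url: str, payload: dict, secret: bytes) -> bool:
    """POST the signed completion payload; retry up to 3 times on non-2xx responses."""
    body = json.dumps(payload, separators=(",", ":")).encode()
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {"X-Signature": sig, "Content-Type": "application/json"}
    delay = 30
    for attempt in range(4):  # one initial attempt plus up to 3 retries
        try:
            r = requests.post(callback_url, data=body, headers=headers, timeout=10)
            if 200 <= r.status_code < 300:
                return True
        except requests.RequestException:
            pass  # network errors are treated like non-2xx responses here
        if attempt < 3:
            time.sleep(delay)
            delay *= 2  # 30 s, 60 s, 120 s
    return False
```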
Xactimate Export Mapping and Schema Compliance
Given an estimate with CodeCrosswalk-resolved Xactimate codes based on ZIP/county, policy type, and carrier When the estimate is exported to Xactimate Then each line item uses the resolved Xactimate line code, description, UoM, quantity, and price context And line item notes are preserved with proper escaping per Xactimate requirements And monetary totals and taxes match internal calculations within ±0.01 of the currency unit And the payload validates 100% against the negotiated Xactimate schema version
CoreLogic/Symbility Export Mapping and Required Fields
Given an estimate with CodeCrosswalk output for CoreLogic/Symbility When the estimate is exported to CoreLogic/Symbility Then each line item maps to the correct activity code, unit, quantity precision (max 2 decimals unless otherwise required), and price context And roof area, pitch, and waste factors populate their required fields And any unmapped item or missing required field is flagged in preflight and blocks export And the payload passes destination schema and required-field validation
Preflight Validation Blocks Invalid Exports with Actionable Hints
Given an estimate containing destination-required violations (e.g., null UoM, negative quantity, unmapped code, missing carrier/policy type) When export is attempted via UI or API Then the export is blocked and no export job is created And the response returns a list of validation errors with fields: field_path, code, message, severity, remediation_hint And correcting the flagged fields and reattempting export results in preflight success within 2 seconds for estimates up to 200 line items
Schema Version Negotiation and Audit Logging
Given a destination advertising supported schema versions (e.g., [v2.1, v2.2]) and a client request with schema_version="auto" When export is initiated Then the connector selects the highest compatible version not exceeding the destination's maximum (e.g., v2.2) And the negotiated version is stored with the export job and persisted in audit logs with timestamp and destination And if no compatible version exists, the export is blocked with error code SCHEMA_INCOMPATIBLE and an actionable remediation hint And if the selected version is deprecated, a deprecation warning is returned in the response and surfaced in the UI
Traceability, Overrides, and Audit Trail
"As a compliance manager, I want each code selection to show its source rule and allow documented overrides with approvals so that we can defend estimates during carrier review."
Description

Provides end-to-end traceability for every mapped line item, including source rule ID, version, effective dates, and citations. Allows authorized users to override code selection, UoM, or quantity with reason codes and optional attachments, with configurable approval workflows by role and carrier. Records a tamper-evident audit log capturing user, timestamp, previous and new values, and justification. Exposes an exportable audit report and embeds traceability metadata into generated estimate files where supported. Integrates with org-level RBAC and supports policy-based restrictions on override scope.

Acceptance Criteria
Traceability Metadata Visible on Each Mapped Line Item
Given an estimate is generated with mapped line items When I open a line item’s details in the estimate UI or API payload Then I see source_rule_id, rule_version, effective_start_date, effective_end_date, and source_citations populated And I see selection_context fields for zip, county, carrier, policy_type, and crosswalk_version And the values exactly match the rule repository for the referenced rule_version
Authorized Override with Reason Code and Optional Attachment
Given my role is permitted to override for the carrier and policy on this job And an override policy defines which attributes I may change (code, uom, quantity) When I propose a change to an allowed attribute And I select a reason_code from the configured list And I optionally attach supporting documentation Then the system validates my override scope against policy and accepts the submission And the override record stores user_id, role, timestamp (UTC ISO 8601), field, previous_value, new_value, and reason_code And the line item reflects the override immediately if policy requires no approval, otherwise the override status is Pending Approval and the original values remain in effect
Carrier- and Role-Based Approval Workflow for Overrides
Given an override requires approval per carrier and role policy When the override is submitted Then designated approvers are notified with the full change details and justification And an approver can Approve or Reject with an optional comment And on Approve the line item updates to the new values and the approval action is recorded And on Reject the line item remains unchanged and the rejection is recorded and the requester is notified
Tamper-Evident Audit Log for All Changes
Given any create, update, or delete affecting line item mapping, overrides, approvals, or exports When the event is committed Then an append-only audit record is written capturing user_id, role, timestamp (UTC ISO 8601), entity_id, field, previous_value, new_value, and justification And the audit ledger maintains an integrity hash that changes with each entry so that any alteration invalidates verification And only users with Audit.View permission can read entries and no user can modify or delete entries
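A common way to achieve this tamper evidence is a hash chain: each record's integrity hash covers its own content plus the previous record's hash, so altering any earlier entry invalidates every hash after it. A minimal sketch, with an in-memory list standing in for the append-only store:

```python
import hashlib
import json

def append_audit_entry(ledger: list[dict], entry: dict) -> dict:
    """Append a record whose integrity hash chains to the previous record."""
    prev_hash = ledger[-1]["integrity_hash"] if ledger else "0" * 64
    payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    integrity_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    record = {**entry, "prev_hash": prev_hash, "integrity_hash": integrity_hash}
    ledger.append(record)
    return record

def verify_ledger(ledger: list[dict]) -> bool:
    """Walk the chain; any altered record breaks every subsequent hash."""
    prev_hash = "0" * 64
    for record in ledger:
        body = {k: v for k, v in record.items() if k not in ("prev_hash", "integrity_hash")}
        payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["integrity_hash"] != expected:
            return False
        prev_hash = expected
    return True
```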
Exportable Audit Report with Filters
Given I have permission to export audit data When I filter by date range, job or claim ID, user, carrier, and rule_id And I choose Export CSV or Export PDF Then the exported file contains all matching entries with all audit fields and an integrity checksum And timestamps in the export are ISO 8601 with timezone offset And the export action is recorded in the audit log
Traceability Metadata Embedded in Generated Estimate Files
Given an estimate is generated in a supported output format When I open the output Then for PDF outputs a Traceability appendix lists each line item with source_rule_id, rule_version, effective dates, citations, and override status And for machine-readable outputs that support custom metadata the same fields are embedded per line item And when an output format does not support embedding a separate machine-readable manifest is provided and referenced from the estimate
RBAC and Policy-Based Restrictions on Override Scope
Given org-level RBAC and carrier policy define which attributes a role may override When a user attempts to override an attribute outside their permitted scope or to a disallowed code or UoM Then the system blocks the change and displays a policy-based error message And the denied attempt is recorded in the audit log with reason policy_violation

PricePulse

Keeps unit pricing current and defensible by syncing regional price lists with supplier feeds and storm‑surge adjustments. Flags stale or mismatched lists, shows real‑time margin/variance impact, and can lock to a carrier‑approved schedule per file—delivering up‑to‑date, consistent pricing with clear rationale.

Requirements

Supplier Feed Integration & Normalization
"As an estimator at a roofing company, I want RoofLens to automatically pull and normalize current unit prices from my suppliers so that my estimates reflect true costs without me maintaining spreadsheets."
Description

Implement connectors to major roofing suppliers (e.g., API and secure file drop/CSV) to ingest unit pricing, packaging, and UoM data on a scheduled and on‑demand basis. Normalize incoming data to a canonical SKU catalog with region, brand, and packaging mappings; handle currency, UoM conversions, and duplicate resolution. Support API auth (keys/OAuth), rate limiting, retries, and incremental updates with change detection. Provide admin tools for mapping SKUs and reviewing ingestion errors, plus fallbacks for manual upload when an API is unavailable. Ensure validated, normalized prices flow into PricePulse as the authoritative source for downstream pricing and analytics.

Acceptance Criteria
Scheduled and On-Demand API Ingestion with Auth, Rate Limits, Retries, and Incremental Sync
Given a supplier API connection with API Key or OAuth 2.0 Client Credentials configured and validated When a scheduled sync runs at the configured cron expression or an authorized user clicks "Sync Now" Then the system authenticates using the configured method and records the token acquisition outcome And requests only records changed since the last successful watermark; if the supplier lacks change markers, a full sync is performed And the client enforces the supplier’s rate limit as configured (e.g., <= 100 requests/min) without exceeding it And transient 429/5xx/network errors are retried up to 3 times with increasing backoff delays of 1 s, 4 s, and 9 s (a quadratic schedule) plus jitter; permanent 4xx errors are not retried And the job completes within 10 minutes for 50k SKUs or reports progress every 60 seconds for larger datasets And the run result includes counts for fetched, changed, created, updated, deleted, and failed records and persists them to the ingestion ledger
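A sketch of the retry loop this implies, using the stated 1 s / 4 s / 9 s delays plus jitter; `call` is a stand-in for whatever HTTP client function the connector actually uses, and the retryable status set is an assumption:

```python
import random
import time

RETRYABLE = {429, 500, 502, 503, 504}  # transient statuses; permanent 4xx are not retried

def fetch_with_retries(call, max_retries: int = 3):
    """Retry transient failures on the 1 s / 4 s / 9 s schedule with jitter.
    `call` is any zero-argument function returning (status_code, body)."""
    for attempt in range(max_retries + 1):
        status, body = call()
        if status < 400:
            return body
        if status not in RETRYABLE or attempt == max_retries:
            raise RuntimeError(f"supplier sync failed with HTTP {status}")
        delay = (attempt + 1) ** 2             # 1, 4, 9 seconds (quadratic schedule)
        delay *= 1 + random.uniform(0, 0.25)   # jitter to avoid synchronized retries
        time.sleep(delay)
```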
Secure File Drop and CSV Ingestion with Schema Validation and Error Handling
Given a PGP-encrypted or plain CSV/TSV/GZIP file in the configured SFTP/S3 drop location with a supplier-specific filename pattern When the polling job detects a new file or an admin triggers "Ingest File" Then the system validates file integrity, size <= 50 MB, and required columns per supplier profile; otherwise it rejects the file with a failure report And the parser auto-detects delimiter and encoding (UTF-8/UTF-16) and normalizes header aliases to canonical fields And valid rows are staged; invalid rows are quarantined with row number, column, and error reason; the overall job does not fail as long as >= 95% of rows are valid And the file is processed exactly once; duplicates are detected via checksum and moved to an "already_processed" archive without re-ingestion And upon completion, a summary report is generated with total rows, valid, invalid, upserted, and quarantined counts and is available in Admin > Ingestion
SKU Normalization with Canonical Mapping, UoM and Currency Conversions, and Duplicate Resolution
Given staged supplier items with supplier SKU, description, brand, packaging, UoM, currency, region, and price When normalization runs Then each item is mapped to a canonical SKU or flagged as "unmapped" with a mapping suggestion based on fuzzy match score >= 0.9 And UoM conversions apply configured rules (e.g., bundle -> squares, each -> roll) with precision to 4 decimals; prices are converted accordingly And currency is converted to the tenant’s currency using the FX rate effective at the file/job timestamp; rounding follows bankers’ rounding to 2 decimals And duplicates (same supplier SKU, region, effective date) are collapsed by keeping the latest timestamp and discarding older ones And items missing mandatory fields are quarantined with specific reasons; overall normalization fails if > 5% of items are quarantined And normalization outputs records with canonical SKU, region, effective_date, unit_price, UoM, packaging, and metadata ready for publishing
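The UoM and currency arithmetic can be kept exact with Decimal. This sketch assumes a hypothetical 3-bundles-per-square conversion factor, holds intermediate values at 4-decimal precision, and applies bankers' rounding (ROUND_HALF_EVEN) for the published 2-decimal price, per the criteria above:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Assumption for illustration: 1 bundle = 1/3 roofing square.
UOM_FACTORS = {("bundle", "square"): Decimal("1") / Decimal("3")}

def normalize_price(unit_price: Decimal, from_uom: str, to_uom: str, fx_rate: Decimal) -> Decimal:
    """Convert a supplier price to the canonical UoM and tenant currency."""
    factor = UOM_FACTORS[(from_uom, to_uom)]
    converted = (unit_price / factor).quantize(Decimal("0.0001"))  # price per target UoM, 4 dp
    in_tenant_ccy = converted * fx_rate
    return in_tenant_ccy.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

# e.g., $38.50 per bundle at 3 bundles/square -> $115.50 per square
print(normalize_price(Decimal("38.50"), "bundle", "square", Decimal("1")))
```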
Admin Tools for SKU Mapping, Error Review, and Reprocessing
Given a user with the PricePulse Admin role When they open Admin > Supplier Mappings Then they can search by supplier SKU, description, or brand and view current mapping status and last updated timestamp And they can create/edit a mapping to a canonical SKU, define UoM/pack conversion factors, and preview the resulting unit price before saving And they can bulk import or export mapping rules via CSV with validation feedback within 5 seconds for files up to 10k rows And they can view ingestion/normalization error queues with row-level details and trigger "Reprocess" after correcting data or mappings And all mapping and reprocess actions are audited with user, timestamp, before/after values, and reason
Manual Upload Fallback When Supplier API Is Unavailable
Given a supplier integration is marked unavailable or degraded When an authorized user uploads a price list via the Manual Upload wizard Then the system validates against the supplier’s schema profile and displays validation results within 10 seconds for files <= 20 MB And on success, the data enters the same staging and normalization pipeline as API feeds with the job linked to the originating file And on failure, the user is shown row-level errors and a downloadable CSV of rejected rows; no partial updates are committed unless >= 95% of rows are valid And the uploaded file is stored in secure object storage with retention >= 180 days and linked in the job audit
Change Detection, Versioning, and Publishing to PricePulse
Given normalized items are ready for publish When publish executes Then only items with detected changes in unit_price, packaging, UoM, or effective_date since the last published version are versioned And each published item carries an effective_date and source_job_id, and previous versions are retained for audit And a PricePulse "prices.updated" event is emitted per changed canonical SKU-region within 60 seconds of publish completion And downstream pricing and analytics reflect updated values within 2 minutes, and a consistency check verifies sample records match published data And the publish run fails fast if the PricePulse sink is unavailable and automatically retries up to 3 times with exponential backoff
Observability, Audit Logging, Security, and Idempotency
Given ingestion and normalization jobs execute When any job starts, progresses, or completes Then metrics (duration, throughput, success rate, retry count) are emitted to monitoring with labels supplier, region, job_type, and status And structured logs include correlation_id and source identifiers; audit logs capture user actions and data changes with before/after snapshots And secrets for API keys/OAuth are stored encrypted at rest, access is restricted to the ingestion service, and all accesses are logged And rerunning a job with the same source inputs does not create duplicate records (idempotency), verified by stable primary keys and checksums And a dashboard shows last sync time per supplier/region and highlights feeds older than 24 hours as "stale"
Regional Price List Versioning & Sync Rules
"As an operations manager, I want versioned, region‑specific price lists that auto‑apply by job location so that bids are consistent and compliant with local pricing and policy."
Description

Create region‑scoped price lists with versioning, effective/expiry dates, tax/freight/fuel surcharges, and ZIP/postcode‑to‑region mapping. Define selection rules that auto‑assign the correct price list to a project based on job address and organization policies. Provide side‑by‑side diffing between versions, rollback to prior versions, and a full audit log of changes. Expose APIs and UI to manage regions, inherit defaults, and override by branch. Guarantee consistent, defensible pricing across territories and seamless integration with estimate generation.

Acceptance Criteria
Auto-Assign Price List by Job Address and Policy Rules
Given a project with a job address ZIP/postcode mapped to Region A and an org default price list for Region A with an active version, When the project is created or its address is updated, Then the active version is auto-assigned within 2 seconds and the selection rationale is recorded in the audit log. Given a branch-specific override price list exists for Region A, When selection runs, Then the branch override takes precedence over the org default and is recorded as the applied rule. Given multiple eligible price lists due to overlapping rules, When selection runs, Then the system resolves using the configured rule priority order deterministically and records the resolved rule ID and priority in the audit log. Given a project file is locked to a carrier-approved schedule, When selection would otherwise change the assigned price list, Then the assignment is not changed and a "locked" reason is logged and displayed in the UI. Given a project address that does not match any region mapping, When selection runs, Then the global default price list is assigned and the project is flagged "no region match" with a link to fix mappings. Given an estimate is generated for a project, When pricing is calculated, Then the assigned price list version (including tax, freight, and fuel surcharges) is used for all unit prices and totals.
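A deterministic resolver consistent with these rules might look like the following sketch. The tier ordering (carrier lock, then branch override, then org default, then global default) and all field names are illustrative assumptions:

```python
def select_price_list(project: dict, price_lists: list[dict]) -> tuple[dict, str]:
    """Resolve the applicable price list deterministically and return the
    applied rule so the rationale can be written to the audit log."""
    if project.get("locked_schedule"):
        return project["locked_schedule"], "locked"            # carrier lock wins
    in_region = [pl for pl in price_lists if pl["region"] == project.get("region")]
    for tier in ("branch_override", "org_default"):            # override beats default
        matches = sorted((pl for pl in in_region if pl["kind"] == tier),
                         key=lambda pl: pl["rule_priority"])   # configured priority breaks ties
        if matches:
            return matches[0], tier
    fallback = next(pl for pl in price_lists if pl["kind"] == "global_default")
    return fallback, "no_region_match"                         # flag the project for mapping fixes
```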
Versioned Regional Price List with Effective/Expiry and Surcharges
Given a user with Pricing Admin role, When creating a new price list version, Then they must set region, effective start date/time, optional expiry date/time, and version label; save succeeds only if effective < expiry when provided. Given an existing price list has an active or future version, When saving another version with an overlapping effective window for the same list and region, Then the system rejects the save with a validation error indicating the overlap. Given a price list version includes tax, freight, and fuel surcharges, When values are entered, Then the system validates allowed ranges (0–100% for percentage fields and >= 0 for fixed amounts) and shows a preview of their effect on totals. Given a future-dated version exists, When the effective start is reached, Then it becomes active automatically and the previous active version becomes inactive without user action. Given a published version, When a user attempts to edit line items in-place, Then the system requires creating a new version derived from the prior one and links it as the successor. Given region-level defaults exist, When a branch overrides specific surcharge rates, Then the override is stored at branch scope and applied without modifying the parent region version.
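The overlap rejection reduces to a standard interval check; a minimal sketch, treating a missing expiry as open-ended:

```python
from datetime import datetime

def windows_overlap(start_a, end_a, start_b, end_b) -> bool:
    """Two effective windows overlap when each starts before the other ends;
    an open expiry (None) is treated as extending forever."""
    end_a = end_a or datetime.max
    end_b = end_b or datetime.max
    return start_a < end_b and start_b < end_a

def validate_new_version(existing_versions: list[dict], new_start, new_end) -> None:
    if new_end is not None and not new_start < new_end:
        raise ValueError("effective start must precede expiry")
    for v in existing_versions:
        if windows_overlap(v["start"], v["end"], new_start, new_end):
            raise ValueError(f"effective window overlaps version {v['label']}")
```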
Side-by-Side Diff of Price List Versions
Given two versions are selected for comparison, When the diff view loads, Then it displays added, removed, and changed line items with old value, new value, absolute delta, and percent delta for each changed field including surcharges and taxes. Given a user searches by item code or description in the diff view, When a query is entered, Then results are filtered within 300 ms on lists up to 10,000 items. Given the diff is displayed, When the user exports, Then CSV and PDF exports are generated with exactly the same counts and values as shown on screen. Given a known test change set of N modified items, M added items, and K removed items, When the diff runs, Then the totals and individual deltas match the fixture within 0.01 units and 0.01% tolerance.
Rollback to Prior Version with Comprehensive Audit
Given multiple prior versions exist, When a Pricing Admin initiates a rollback to version V, Then the system creates a new version V' that clones V's contents, sets effective start to now, and marks V' as active for the region. Given rollback completes, When estimates are created after the rollback time, Then they use version V'; When estimates created before the rollback are recalculated, Then they continue using their originally assigned version unless explicitly re-selected by a user with permission. Given a rollback is performed, When viewing the audit log, Then entries include actor, timestamp, source version ID, new version ID, justification note, and a diff summary. Given a user without Pricing Admin permissions, When attempting to perform rollback, Then the action is denied with 403 via API and disabled in the UI with an explanatory tooltip. Given rollback occurs, When notifications are dispatched, Then subscribers of the affected region/list receive an in-app and email notification within 1 minute.
ZIP/Postcode-to-Region Mapping Management and Conflicts
Given a CSV import of ZIP/postcodes with region codes, When the file is uploaded, Then the system validates format, rejects unknown region codes, and reports duplicates and overlaps with line numbers before any changes are applied. Given overlapping mappings are detected between Region A and Region B, When saving changes, Then the system blocks the save until the conflict is resolved by setting a rule priority or removing the overlap, and the resolution is recorded in the audit log. Given manual entry of a ZIP/postcode range, When saved, Then the range is expanded and stored as discrete codes without gaps and is searchable within the mapping UI. Given mapping changes are published, When auto-assignment runs for new projects, Then the new mappings are used; When viewing existing projects, Then assigned price lists remain unchanged unless a user triggers "re-evaluate assignment". Given a job address ZIP/postcode is not present in any mapping, When selection runs, Then the system assigns the global default and surfaces a warning with a link to add the missing code to a region.
API and UI Parity for Region and Price List Administration
Given authenticated clients with proper scopes, When calling the API to create/read/update/delete regions, mappings, price lists, versions, selection rules, and overrides, Then endpoints respond with 2xx on success, 4xx with field-level errors on validation failure, and 403 on insufficient permissions. Given API resources support pagination and filtering, When requesting lists with page and filter parameters, Then responses include total count, next/prev cursors, and only matching records. Given webhooks are configured, When a version is published or rolled back, Then a webhook is delivered with event type, resource IDs, and timestamps, with at-least-once delivery and signature verification. Given operations are performed via the UI, When compared to available API capabilities, Then all supported API operations are achievable in the UI with equivalent validations and feedback, including import/export of mappings and diff exports. Given API versioning policy is in place, When clients send an older API version header, Then backward-compatible responses are returned and deprecation warnings are included when deprecated fields are used.
Storm Surge Adjustment Engine
"As a sales manager, I want controlled surge adjustments applied during post‑storm periods so that our pricing reflects market conditions while remaining explainable to carriers and customers."
Description

Build an event‑aware pricing layer that applies temporary, explainable multipliers or offsets by region, category, and SKU during post‑storm periods. Ingest signals from external weather/storm feeds and internal indicators (claims volume spikes, supplier surge flags). Support configurable caps, decay curves, start/end dates, and business rules, with manual overrides requiring rationale. Provide preview/simulation of adjustments before publishing, and record every applied adjustment with its source and parameters. Ensure adjustments integrate cleanly with base lists and are reversible without data loss.

Acceptance Criteria
Auto-Activation from Storm and Internal Signals
Given an external storm feed event covering region R1 effective at a specified start time and an internal claims volume spike in R1 >= 150% of the 30-day rolling baseline within 24 hours, When the signal ingestion job runs, Then a new storm-surge adjustment record is created in Draft with region=R1, sources recorded (storm_feed, claims_spike), start_at set, a default decay preset applied, and no prices are altered until status is Published. Given a supplier surge flag for region R1 and category "Shingles" with severity High and no storm feed event, When the ingestion job runs, Then a storm-surge adjustment record is created scoped to R1/"Shingles" in Draft with source recorded (supplier_flag) and requires manual review before publish. Given no qualifying external or internal signals in a region, When the ingestion job runs, Then no new storm-surge adjustment is created.
Scoped Application, Caps, and Precedence Resolution
Given base price list L1 where SKU S100 (category "Shingles") in region R1 has price $100.00, and a Published event with rules: category("Shingles") multiplier=1.15 cap=20% uplift, and SKU(S100) offset=+$5.00, When pricing S100 during the event, Then final price = $120.00 (rounded to 2 decimals), with the SKU-level offset applied on top of the category-level multiplier and the net uplift enforced against the 20% cap. Given overlapping events E1 (multiplier 1.10 cap 25%) and E2 (offset +$3 cap 15%) both active for SKU S101 in region R1 with base $200.00, When pricing S101, Then the engine applies the business rule "most specific takes precedence; otherwise combine and enforce the lowest cap": final price = $223.00 and uplift ≤ 15% vs base.
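A sketch of the cap arithmetic consistent with the first worked example above: the SKU-level offset stacks on the category multiplier, and the net uplift is then clamped to the cap. Function and parameter names are illustrative only.

```python
def apply_surge(base: float, multiplier: float = 1.0, offset: float = 0.0,
                cap_pct: float | None = None) -> float:
    """Apply a multiplier, then an offset, then clamp the net uplift to the cap."""
    adjusted = base * multiplier + offset
    if cap_pct is not None:
        adjusted = min(adjusted, base * (1 + cap_pct / 100))  # enforce cap on net uplift
    return round(adjusted, 2)

# Reproduces the worked example: 100.00 * 1.15 + 5.00 = 120.00, exactly at the 20% cap.
assert apply_surge(100.00, multiplier=1.15, offset=5.00, cap_pct=20) == 120.00
```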
Time-Bound Decay Curves per Region Time Zone
Given event E1 in region R1 (time zone America/Chicago) with start_at 2025-09-05T00:00:00-05:00, end_at 2025-09-26T00:00:00-05:00, linear decay from multiplier 1.20 at start to 1.00 at end, When pricing at 2025-09-12T12:00:00-05:00, Then the computed multiplier is 1.129 (±0.001; 7.5 of the 21 days have elapsed) and is rounded to 3-decimal precision before application. Given the same event, When pricing at or after 2025-09-26T00:00:00-05:00, Then the multiplier equals 1.00 (no surge). Given an exponential decay curve configured with half-life 7 days from 1.20 toward 1.00, When pricing at day 7, Then multiplier ≈ 1.10 (±0.01).
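Both decay shapes are closed-form functions of elapsed time. This sketch reproduces the two worked values above (1.129 for linear decay at day 7.5 of 21, and about 1.10 for exponential decay at one half-life):

```python
from datetime import datetime, timedelta, timezone

def linear_multiplier(start: datetime, end: datetime, at: datetime,
                      m_start: float, m_end: float = 1.0) -> float:
    """Linear interpolation between start and end multipliers, rounded to
    3 decimals before application per the criteria."""
    frac = (at - start) / (end - start)
    frac = min(max(frac, 0.0), 1.0)           # clamp outside the event window
    return round(m_start + (m_end - m_start) * frac, 3)

def exp_multiplier(start: datetime, at: datetime, m_start: float,
                   half_life_days: float, m_floor: float = 1.0) -> float:
    """Exponential decay of the surge excess toward the floor."""
    days = (at - start).total_seconds() / 86400
    return round(m_floor + (m_start - m_floor) * 0.5 ** (days / half_life_days), 3)

tz = timezone(timedelta(hours=-5))
start = datetime(2025, 9, 5, tzinfo=tz)
end = datetime(2025, 9, 26, tzinfo=tz)
print(linear_multiplier(start, end, datetime(2025, 9, 12, 12, tzinfo=tz), 1.20))  # 1.129
print(exp_multiplier(start, datetime(2025, 9, 12, tzinfo=tz), 1.20, 7))           # 1.1 (one half-life)
```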
Preview/Simulation of Pricing Impact Before Publish
Given a Draft event configuration and an estimate file with 150 line items scoped to region R1, When the user runs Preview, Then the system produces a non-persistent simulation showing per-line base price, adjusted price, delta, and total margin/variance impact; no live price lists are modified; and the preview can be exported as CSV and PDF. Given the same draft, When the user publishes the event, Then a new price overlay version is created and becomes active for eligible files, and the first pricing evaluation under that version matches the preview results within ±$0.01 per line.
Audit Trail with Source and Parameters
Given an adjusted price is calculated for any line item due to an active event, When the audit API GET /price-adjustments/audit?fileId={id} is called, Then the response includes per line: event_id, rule_scope (region/category/SKU), rule_type (multiplier|offset), parameters (values, caps, decay, start/end), sources (storm_feed ids, internal indicators), user_overrides (if any), calculation timestamp, base_price, adjusted_price, and effective multiplier; and all values match the calculation used. Given an event is Edited, Published, Unpublished, or Reverted, When querying the event history, Then all state transitions are recorded immutably with user, timestamp, and diff; soft-deleting an event does not remove its audit entries.
Manual Override with Required Rationale and Permissions
Given a user without the PricingAdmin role attempts to change any parameter of an event, When they click Save, Then the system denies with 403 and no changes are persisted. Given a user with the PricingAdmin role edits an event rule, When they attempt to Save without entering a rationale of at least 10 characters, Then Save is disabled and an inline error states "Rationale required (min 10 characters)." Given the same user provides a rationale and Saves, Then the change is persisted, the rationale is stored, and the event version increments; if Enforce Caps is enabled, any values exceeding caps are rejected with validation errors and nothing is saved.
Reversal and Base List Integrity
Given a Published event has affected prices for region R1, When the event is Unpublished or reaches end_at, Then all subsequent pricing evaluations use base price list values with no residual multipliers or offsets; the base price list data remains unmodified. Given estimates F1 and F2 were priced during the event, When the event is Unpublished, Then F1 and F2 retain their historical priced snapshots with references to the event in audit; re-pricing them after unpublish yields prices without event adjustments. Given the event is re-published with identical parameters, When pricing the same items, Then computed adjusted prices are idempotent and equal to prior results within ±$0.01.
Stale/Mismatch Detection & Alerts
"As an estimator, I want to be alerted when a price list is outdated or my line items don’t match the active list so that I can fix issues before sending a bid."
Description

Introduce automated checks that flag stale price lists past freshness thresholds and detect SKU/UoM mismatches between estimates and the active list. Surface inline warnings during estimate creation, provide remediation suggestions (update list, remap SKU, accept exception), and optionally block submission based on policy. Deliver health dashboards and notifications (in‑app/email) for admins with drill‑downs to affected files. Log detections and resolutions for auditability, reducing pricing errors and rework.

Acceptance Criteria
Inline Stale Price List Warning During Estimate Creation
Given an estimator opens or edits an estimate with an active price list And the price list last-refreshed timestamp exceeds the configured regional freshness threshold When the estimator adds a line item, edits pricing, or opens the pricing panel Then an inline warning is displayed within 300ms indicating the list is stale And the warning shows: last refreshed date/time, threshold, and days stale And the warning offers actions: Update Price List (if permitted), Switch to Latest Regional List, View Change Log And the stale status badge appears on each affected line item priced from the stale list And if the price list is locked to a carrier-approved schedule that is still within its lock period Then the UI shows an informational “Locked—no action required” state instead of a stale warning And after a successful Update or Switch action, the stale warning clears without page reload And all warning displays and clears are tracked as UI events with file ID and user ID
SKU/UoM Mismatch Detection and Guided Remediation
Given an estimate has one or more line items And the active price list is selected for the file’s region When a line item’s SKU is not found in the active list or its UoM differs from the list’s definition Then the line item is flagged with a “Mismatch” tag and inline error text specifying SKU-not-found or UoM-mismatch And a Remediate button offers options: Remap to Suggested SKU, Select Different SKU, Convert UoM (with factor shown), or Accept Exception (with justification) And selecting Remap updates the line item to the chosen SKU and recalculates price before tax within 100ms for up to 200 items And selecting Convert UoM applies the displayed conversion factor and updates quantity and unit price consistently And selecting Accept Exception requires a justification (minimum 10 characters) and records the exception policy state And line items marked as Non-Catalog (custom) are excluded from mismatch detection by default unless policy includes custom items And all remediation actions are logged with before/after values, user, timestamp, and resolution type
Policy-Based Submission Blocking for Pricing Issues
Given an organization policy is configured to block submissions when price lists are stale beyond N days and/or when unresolved mismatches exist And a file has at least one blocking issue according to the policy When a user attempts to submit, finalize, or export a carrier/customer-ready estimate Then the submission is blocked and a modal lists each blocking issue with links to remediate And users with the Override role can proceed after entering an override reason (minimum 10 characters) And accepted exceptions with required approvals are not treated as blocking And the modal displays the policy name, version, and evaluation time And a blocking attempt creates an audit log entry with policy outcome, issues, user, and action And once all blocking issues are resolved, the same submission action succeeds without additional steps
Admin Price List Health Dashboard with Drill-Down
Given an admin opens the PricePulse Health dashboard When viewing Price List Health for a selectable time window (Last 7/30/90 days) and region filters Then KPIs display counts and percentages of files with: Up-to-date, Stale, SKU Mismatch, UoM Mismatch And charts and tables load within 3 seconds for up to 10,000 files And clicking a KPI drills down to a paginated list of affected files showing file ID, assignee, region, list name/version, issue type, days stale, and policy status And selecting a file opens its detail panel with direct links to the estimate and remediation actions And the dashboard supports CSV export of the current filtered view and preserves filter context in the export And data reflects detections updated within the last 5 minutes (timestamp shown)
In-App and Email Notifications for Detections
Given notification preferences are configured for estimators and admins When a stale price list or mismatch is first detected on a file Then the file assignee receives an in-app notification within 1 minute with issue type, severity, and deep link to remediate And an email notification is sent if the recipient has email alerts enabled And subsequent detections of the same issue type on the same file are batched to at most one email per 24 hours unless severity escalates And admins receive a daily summary email listing new and unresolved issues by region And notification delivery status (sent, bounced, opened) is recorded per message
Detection and Resolution Audit Logging
Given the system detects or resolves a stale list or SKU/UoM mismatch When the event occurs Then an immutable audit record is written with timestamp (UTC), file ID, price list ID/version, rule ID, issue type, severity, actor (system/user), action (detected, remapped, converted, exception accepted, override, updated), and before/after values And audit records are retained for at least 7 years and are queryable by file ID and date range And authorized users can export audit records to CSV/JSON with PII fields redacted according to policy And the audit trail allows reconstructing the price list state used at submission time
Real‑time Margin & Variance Impact
"As a business owner, I want real‑time visibility into how current pricing affects my margins so that I can tune bids confidently and protect profitability."
Description

Compute and display line‑item and total margins using current costs versus target sell prices, and show variance against a baseline or carrier schedule. Recalculate instantly when switching price sources, applying surge rules, or editing quantities. Provide freshness indicators, delta charts, and what‑if toggles to compare scenarios and understand margin impact before publishing. Enable export of margin/variance summaries into the estimate PDF and CSV for analysis.

Acceptance Criteria
Switching Price Source Updates Margins and Variance in Real Time
Given an open estimate with at least two available price sources When the user selects a different price source Then all line-item cost, sell, gross margin dollars, gross margin percent, and totals recalculate within 1,000 ms at the 95th percentile and the UI reflects the new values without a page reload Given a selected baseline (e.g., Carrier Schedule X or Original snapshot) When the price source is changed Then variance per line and for totals is recalculated and displayed as delta dollars and delta percent relative to the baseline Given currency and percent display rules When values are shown Then currency is rounded to 2 decimals and percent to 1 decimal using half-up rounding Given recalculation completes When the UI updates Then the active price source name, region, and effective timestamp are displayed in the header
Applying Surge Rules Recalculates Costs and Shows Impact
Given surge adjustment rules are configured for the job’s region When the user toggles surge on/off or changes the surge factor Then line-item costs and totals adjust per rule and margins/variance recalculate within 1,000 ms at the 95th percentile Given surge rules are applied When the user hovers the surge indicator Then a tooltip lists the rule name/ID, factor applied, scope (materials/labor), and effective date/time used in calculations Given surge is toggled off When the user compares scenarios Then the delta chart shows the difference between surge-on and surge-off scenarios in dollars and percent
Editing Quantities Recomputes Line and Total Margins Instantly
Given an estimate with existing line items When the user edits a quantity, unit rate, or waste factor Then the affected line(s) and totals recompute cost, sell, gross margin dollars, and gross margin percent within 500 ms at the 95th percentile Given recalculation occurs When values change Then dependent totals (subtotals, taxes if applicable, grand total) and variance figures update within the same render cycle without manual refresh Given a multi-line edit session When the user commits changes (enter/tab/out of field) Then the what-if scenario retains all edits for comparison until explicitly reset
Freshness Indicator and Stale Price Flagging
Given a selected price source with a last-update timestamp When the estimate loads Then a freshness badge shows age as Fresh (≤24h), Warning (>24h and ≤7d), or Stale (>7d) with the exact timestamp Given a region or schedule mismatch When the active price list’s region or schedule does not match the job’s region or assigned carrier schedule Then a Mismatched List warning is displayed with details of the mismatch Given a Stale or Mismatched List status When the user attempts to publish or export Then the system prompts for confirmation and records the acknowledgment on the estimate
Delta Charts and What‑If Toggle Compare Scenarios
Given a selectable baseline scenario When the user toggles to an alternative scenario (price source, surge state, or quantity edits) Then a delta chart renders line-item group deltas and total delta in dollars and percent within 1,000 ms at the 95th percentile Given a displayed delta chart When values are shown Then negative margin deltas display in red with a minus sign and positives in green with a plus sign, and tooltips show current, baseline, and delta values Given up to three scenarios are selected When the user cycles through scenarios Then the chart and summary badges update to the selected comparison without a page reload
Export Margin and Variance Summary to PDF and CSV
Given an estimate with computed margins and variance When the user exports to PDF Then the document includes a Margin & Variance Summary section with per-line and total margin dollars, margin percent, variance dollars, variance percent, active price source, baseline name, surge state, and data timestamps that match the on-screen values Given the same estimate When the user exports to CSV Then a file is generated with one row per line item plus totals and columns: line_id, description, qty, unit, cost, sell, margin_dollars, margin_percent, variance_dollars, variance_percent, price_source, baseline, surge_state, calc_timestamp Given rounding/display rules When comparing PDF/CSV to the UI Then currency values match to 2 decimals and percent to 1 decimal with identical rounding behavior
Lock to Carrier‑Approved Schedule Behavior
Given a file is locked to a carrier-approved schedule When the user attempts to switch price sources Then line-item sell prices remain fixed to the locked schedule while costs reflect the selected current source for margin calculation Given lock is enabled When margins are displayed Then gross margin dollars = sell_from_locked_schedule − cost_from_current_source and gross margin percent = (gross margin dollars ÷ sell_from_locked_schedule) × 100, and variance is computed against the locked schedule as baseline Given lock is enabled When the user views controls Then a lock icon and the schedule name/effective date are shown and what‑if toggles exclude selling price changes
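The locked-margin formula stated above reduces to two lines; the sell and cost figures in this sketch are invented for illustration:

```python
def locked_margin(sell_locked: float, cost_current: float) -> tuple[float, float]:
    """Margin under a carrier lock: sell prices come from the locked schedule,
    costs from the currently selected price source."""
    gm_dollars = sell_locked - cost_current
    gm_percent = gm_dollars / sell_locked * 100
    return round(gm_dollars, 2), round(gm_percent, 1)  # 2-dp currency, 1-dp percent

# e.g., locked sell $450.00 per square vs. current cost $315.00
print(locked_margin(450.00, 315.00))  # (135.0, 30.0)
```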
Carrier Schedule Lock per File
"As an insurance adjuster, I want to lock an estimate to my carrier’s approved price schedule so that revisions remain compliant and easy to justify."
Description

Allow an estimate to be bound to a specific carrier‑approved price schedule and version, preserving that schedule for the file’s lifecycle. Enforce read‑only pricing when locked, with role‑based overrides that require rationale and produce a difference report against the carrier schedule. Provide dual pricing view (carrier vs market) and ensure exports include the locked schedule reference and variance breakdown. Maintain compatibility with downstream revisions and supplements without breaking the lock.

Acceptance Criteria
Locking a File to a Carrier Schedule and Version
Given an unlocked estimate file with a selectable carrier price schedule and version When a user with Edit Estimate permission selects a carrier schedule and version and clicks Lock Schedule Then the file is marked Locked with schedule name, region, publisher, version, effective date, lock timestamp, and locking user displayed in the header And the locked schedule metadata is persisted to the file record and across sessions And unit prices for all existing line items resolve to the locked schedule and version And automatic price refreshes and background price syncs are disabled for this file
Enforcing Read-only Pricing Under Lock
Given a file with a locked carrier schedule When a user without Pricing Override permission attempts to edit any unit price, change the schedule/version, apply a custom price, or run Refresh Prices Then the action is blocked and a message explains the file is locked to the selected carrier schedule/version with a link to view lock details And quantity, waste, and measurement fields remain editable and totals recalculate using the locked unit prices And an audit entry is recorded for the blocked attempt including user, time, action type, and line item (if applicable)
Role-based Override With Rationale and Difference Report
Given a file with a locked carrier schedule and a user with Pricing Override permission When the user initiates a price override (line-item or global) Then the user must select override scope (line item, group, entire estimate) and enter a rationale of at least 10 characters And the system generates a Difference Report comparing overridden values to the locked schedule at line, section, and total levels (absolute and percent) And overridden fields are visually flagged and filterable, and an audit record (who, when, what, rationale) is stored And a Revert to Carrier control restores locked prices per the selected scope
Dual Pricing View: Carrier vs Market
Given a file with a locked carrier schedule and available market pricing from PricePulse for the same region and effective window When the user toggles Dual Pricing View Then the estimate displays Carrier Unit, Market Unit, Variance $, Variance %, and Total Variance at line and summary levels And the displayed market values never change the locked totals or exports unless an explicit override workflow is completed And if market pricing is unavailable for an item, the variance cells display N/A and are excluded from variance totals
Export Includes Locked Schedule Reference and Variance Breakdown
Given a file with a locked carrier schedule When the user exports the estimate to PDF and data formats (CSV/JSON) Then the export header includes carrier schedule name, publisher, region, version, effective date, lock timestamp, and locking user And the body reflects locked carrier prices and totals, and a Variance section shows carrier vs market deltas by line and summary And any overrides include the user rationale and a Difference Report appendix
Revisions and Supplements Preserve the Lock
Given a locked estimate When a user creates a revision or a supplement Then the lock persists automatically in the new revision or supplement And existing line items retain their locked unit prices while quantity changes recalculate totals using the locked prices And new line items use the locked schedule version; if an item code is missing, the user must map to an equivalent item or enter a custom price via the override workflow (with rationale and diff) And all changes are traceable and reflected against the locked baseline in the Difference Report
Controlled Unlock or Schedule Change via Revision
Given a file with a locked carrier schedule and a user with Admin Pricing Lock permission When the user attempts to change the schedule/version or unlock the file Then the system requires a rationale and creates a new estimate revision carrying the new lock, preserving the prior revision and its lock intact And the system presents an impact summary showing items affected and projected total variance before confirmation And an audit record captures who, when, from→to schedule/version, and rationale; prior exports remain associated with the previous revision
Pricing Rationale Audit Trail & Export
"As a project manager, I want a clear audit trail of pricing sources and adjustments so that I can defend estimates during reviews or disputes."
Description

Capture end‑to‑end pricing provenance per estimate, including supplier sources, list versions, surge adjustments, overrides, timestamps, and responsible users. Store an immutable audit log and generate an exportable appendix (PDF/JSON) with human‑readable rationale and machine‑readable metadata. Optionally produce a cryptographic hash/signature for tamper evidence and support retention policies and search. Embed references in the bid package to reduce disputes and speed approvals.

Acceptance Criteria
End-to-End Pricing Provenance Capture
Given an estimate exists and a pricing event occurs (supplier sync, price list update, surge adjustment, or manual override), When the estimate is saved, Then the system appends an immutable audit event with: estimate_id, event_id, event_type, timestamp (UTC ISO8601), user_id, session_id, supplier_id and name, price_list_id and version, surge_adjustment value and basis, affected line_item_ids, previous_value, new_value, override_reason (if any), and source_feed_version. Given any user or API attempts to modify or delete an existing audit event, When the request is processed, Then the system returns 403 Forbidden, writes no changes, and logs the attempt as a security event. Given the database, When querying the audit trail for an estimate, Then events are returned in strict chronological order by a monotonically increasing sequence number with no gaps for successful writes.
Tamper Evidence via Cryptographic Seal
Given the org-level or file-level setting "Cryptographic Seal" is enabled, When an estimate’s audit trail is finalized (status Ready to Send or Export), Then the system generates a canonical JSON of the audit trail, computes its SHA-256 hash, signs it with ECDSA (secp256r1) using the org’s private key, and stores the hash, signature, public key identifier, and seal timestamp with the record. Given a sealed audit trail, When verification is requested via UI or API, Then the system recomputes the hash from stored content, validates the signature, and returns a verification result of "valid" with the matching hash; if the content has changed, it returns "invalid" and blocks export. Given the export PDF, When generated, Then it displays the SHA-256 hash and includes a QR code that encodes the verification payload (hash, signature_id, timestamp).
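A sketch of the sealing step using the widely available `cryptography` package. The canonicalization (sorted-key, compact JSON) is an assumption, since the spec only requires "canonical JSON", and the throwaway key stands in for the org's managed key pair:

```python
import hashlib
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def seal_audit_trail(audit_events: list[dict], private_key) -> dict:
    """Canonicalize the trail, hash it with SHA-256, and sign the bytes with
    ECDSA over secp256r1 (P-256)."""
    canonical = json.dumps(audit_events, sort_keys=True, separators=(",", ":")).encode()
    digest = hashlib.sha256(canonical).hexdigest()
    signature = private_key.sign(canonical, ec.ECDSA(hashes.SHA256()))
    return {"sha256": digest, "signature": signature.hex(), "algo": "ECDSA-P256"}

# Demo with a throwaway key; production would resolve the org key by key ID.
key = ec.generate_private_key(ec.SECP256R1())
events = [{"event_id": 1, "event_type": "override"}]
seal = seal_audit_trail(events, key)
key.public_key().verify(
    bytes.fromhex(seal["signature"]),
    json.dumps(events, sort_keys=True, separators=(",", ":")).encode(),
    ec.ECDSA(hashes.SHA256()),
)  # raises InvalidSignature if the trail was altered after sealing
```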
Audit Trail Export: PDF and JSON Appendix
Given a user with export permission selects "Export Pricing Rationale" for an estimate, When the export is initiated, Then the system generates within 15 seconds a PDF appendix and a JSON file; filenames include estimate_id and UTC timestamp. Given the PDF appendix, When opened, Then it presents, per line item, unit price, supplier source, price list version, surge adjustment and basis, manual overrides with user and reason, and timestamps; it includes summaries for margin/variance impact and carrier lock status (if any). Given the JSON export, When validated, Then it conforms to the published schema version, includes all audit events and required metadata, and if sealing is enabled the JSON hash matches the value printed on the PDF. Given a bid package is generated, When the user chooses "Include Pricing Rationale", Then the PDF appendix is appended to the bid PDF and the JSON is attached or linked via a secure, expiring URL embedded in the PDF.
Retention Policy and Legal Hold Compliance
Given an organization admin sets audit retention duration and legal hold defaults, When saved, Then the settings are persisted and an admin-audit event is recorded. Given audit events older than the retention duration for a file without legal hold, When the scheduled retention job runs, Then the events are irreversibly purged, a retention-purge event is logged, and if sealing was enabled the seal metadata (hash, signature, key id, sealed_at) remains for verification. Given a file on legal hold, When the retention job runs, Then no audit events for that file are purged until the hold is removed by an authorized user; hold additions/removals require justification and are logged with user and timestamp. Given a compliance export is requested, When generated, Then it lists the current retention settings, active holds for the estimate, and the next scheduled purge date.
Search and Filter Audit Logs
Given a user with Audit Read permission accesses audit search, When they filter by estimate_id, project number, user, event_type, supplier, price_list_version, or date range and optionally keyword search override_reason, Then results return within 2 seconds for up to 10,000 events and include total count with pagination controls. Given API access, When calling the audit search endpoint with filters and pagination, Then the response matches the contract (HTTP 200, JSON schema-compliant), enforces authorization scopes, and excludes restricted estimates. Given a result row, When clicked in the UI, Then the system opens the estimate’s audit timeline scrolled to the selected event.
Stale/Mismatched Price List Flagging Recorded
Given PricePulse flags a stale or mismatched price list for an estimate, When the flag is raised, Then an audit event is created capturing detection_type, detected_at, detected_by (system), affected supplier and price list identifiers/versions, and estimated margin/variance impact. Given a user responds to the flag (update list, acknowledge exception, or ignore), When the action is taken, Then the action, user, timestamp, justification (if required), and resulting price changes are logged; the export includes both the flag and the resolution. Given audit search, When filtering by detection_type in {stale, mismatch}, Then all affected estimates are returned with counts by status (open, acknowledged, resolved).
Carrier Schedule Lock and Override Governance
Given a file is locked to a carrier-approved pricing schedule, When the lock is applied, Then an audit event records lock_on timestamp, locked_by, carrier schedule identifier/version, and the scope of lock. Given the file is locked, When a user attempts to change unit pricing, apply surge, or add an override that conflicts with the lock, Then the system blocks the change, displays the lock rationale, and logs the attempt with user, timestamp, and attempted field. Given an authorized user unlocks or applies an approved exception, When performed with required justification text, Then the action is permitted, the justification is mandatory and recorded, and the export explicitly shows the lock state and any exception details.

ProofLinks

Attaches verifiable evidence and reasoning to every line item—cropped, GeoSeal‑verified photos, detected damage labels, measurement references, and relevant code/policy citations. Reviewers can click through in the PDF or verification page for instant context, speeding approvals and strengthening supplements.

Requirements

Multi-Evidence Line-Item Attachments
"As a roofing estimator, I want to attach multiple forms of evidence to each line item so that reviewers can immediately understand and validate the charge without follow-up questions."
Description

Provide capability to attach multiple evidence objects (cropped photos, damage labels, measurement references, code/policy citations, and rationale notes) to any estimate line item. Supports many-to-one mappings, ordering, versioning, and required/optional flags per line item type. Evidence objects are stored as structured entities with metadata (type, source, capture time, provenance). UI indicates attached evidence counts and types; API endpoints allow create/read/update/delete. Export pipeline renders concise badges in the PDF and embeds references for the verification portal. Ensures no single attachment exceeds defined size limits and that total payload remains within export thresholds.

Acceptance Criteria
Attach Multiple Evidence Types to a Single Line Item
Given a user is editing an estimate line item When they attach multiple evidence objects of allowed types (cropped photo, damage label, measurement reference, code/policy citation, rationale note) Then the line item displays per-type counts and a total count in the UI And the attachments persist and are retrievable via API GET with matching counts and types And at least 20 evidence objects can be attached to a single line item without error Given a user removes an attached evidence object When they confirm deletion Then the UI counts update immediately And the evidence mapping is removed via API and no longer returned by subsequent GET calls Given the user reloads the estimate When the line item is opened Then all previously attached evidence objects appear with correct types, thumbnails/labels, and counts
Evidence Ordering and Reordering Persistence
Given a line item has 5 evidence objects When the user reorders them via drag-and-drop to a new sequence Then the new order is saved and reflected after page refresh And the API returns the evidence list in the same order (order index field) And the exported PDF and verification portal display evidence in that order Given the user inserts a new evidence at position 2 When they save Then the new evidence appears at position 2 and existing items shift accordingly across UI, API, and export
Required vs Optional Evidence Validation by Line Item Type
Given a line item type has required evidence rules (e.g., >=1 cropped photo and >=1 measurement reference) When the user attempts to mark the line item as Ready or export the estimate without meeting the rules Then the action is blocked And the UI shows an inline message listing the missing required evidence types And the API responds 422 with a machine-readable list of missing types Given the required evidence rules are satisfied When the user marks the line item as Ready or exports Then the action succeeds without validation errors Given optional evidence types are configured When the user saves the line item without optional evidence Then no validation error is raised
Evidence Versioning and Audit Trail
Given an existing evidence object attached to a line item When a user updates its content (e.g., new crop image or corrected citation) and selects Save as New Version Then a new version is created with an incremented version number And prior versions remain read-only and viewable And metadata captures editor, timestamp, and optional reason And the line item references the latest version by default Given a user opens Version History for an evidence object When they view differences Then changes between versions are displayed for all changed fields (content hash, notes, citations, labels) Given the latest version is not desired When the user selects Revert to Version N and confirms Then a new latest version is created based on Version N and becomes the active reference Given an export is generated When the line item has multi-version evidence Then the latest versions are used in the PDF badges and verification portal links And the portal provides access to the full version history
API CRUD for Evidence Objects with Required Metadata
Given an authenticated client with scope estimate:write When it POSTs /line-items/{lineItemId}/evidence with type, source, captureTime, provenance, and initial order Then the API responds 201 with id, version=1, and echoes metadata And the evidence is associated to the line item and returned by GET Given a client requests GET /line-items/{lineItemId}/evidence When the line item has attachments Then the API returns a paginated, ordered list including id, type, order, version, sizeBytes, source, captureTime, provenance, createdBy, createdAt Given content needs to change When the client POSTs /evidence/{id}/versions with new content fields Then a new version is created and becomes latest; PATCH on content fields is rejected with 409 and guidance to create a new version Given a client sends PATCH /evidence/{id} to update non-content fields (order, requiredFlagOverride) Then allowed fields update successfully and are reflected in subsequent GETs Given a client sends DELETE /evidence/{id} When the evidence has multiple versions Then the evidence and all versions are soft-deleted and excluded from default GET results And GET with includeDeleted=true returns them with deletedAt metadata Given invalid input (unknown type, missing required metadata, or invalid timestamps) When POST or PATCH is called Then the API responds 400 with field-level error details
PDF Badges and Verification Portal Deep Links
Given a line item has Photo x3, Measure x1, Code x2, Note x1, Label x2 When the estimate is exported to PDF Then the PDF shows concise badges per type with accurate counts and standard icons And clicking a badge in an interactive viewer opens a verification portal deep link listing ordered evidence with thumbnails/previews and key metadata And the deep link includes immutable identifiers (estimateId, lineItemId, evidenceIds with version numbers) Given the PDF is opened in a non-interactive viewer When badges are clicked Then no error occurs and badges still display counts and icons Given an evidence resource referenced by a deep link is unavailable When the portal loads the list Then the portal marks the item as unavailable with reason (deleted, permission) and shows audit metadata
Size Limits and Total Payload Threshold Enforcement
Given the configured single-attachment size limit is enforced When a user uploads an evidence object exceeding that limit Then the client blocks the upload when detectable and the server rejects with 413 Payload Too Large and a user-friendly message And the UI displays remaining per-file limit information Given the estimate approaches the configured total export payload threshold When the user initiates export and the projected package exceeds the threshold Then the export is blocked And the UI lists the contributing line items and aggregate size by type And the API responds 422 with per-line-item and per-type size details and suggestions (compress/remove) Given the user reduces sizes via recrop/compression or removes attachments When export is retried Then export proceeds once under the threshold Given an upload is interrupted When the connection drops mid-stream Then no partial evidence record is persisted and no orphan attachment increases the payload size
GeoSeal Provenance & Tamper Detection
"As an insurance adjuster, I want cryptographically verifiable photo provenance so that I can trust the authenticity and location of the evidence used to justify line items."
Description

On photo ingest, capture GPS coordinates, timestamp, altitude, device ID, and compute a cryptographic hash. Generate a server-signed GeoSeal that binds metadata to the media and store it with the asset. Display verification status (verified, unverifiable, or mismatched) alongside each image in the portal and as an icon in the PDF. Provide a verification panel showing raw metadata and seal validation results. Detect and flag metadata inconsistencies and post-edit alterations by rehashing on access. Gracefully handle non-GPS or third-party images by marking them as "unverified" while still attachable.

Acceptance Criteria
GeoSeal Creation on Photo Ingest
- Given an image is uploaded via API or portal, When ingestion starts, Then the system extracts EXIF GPS latitude/longitude (WGS84), altitude (meters), capture timestamp (UTC), and device identifier if present.
- Given a valid image file, When hashing is performed, Then the system computes a SHA-256 over the exact original bytes and persists the hash digest.
- Given extracted metadata and file hash, When a GeoSeal is created, Then the server signs a canonical JSON payload binding hash, metadata, and capture source with ECDSA P-256 using the platform private key.
- Given a GeoSeal is created, When the asset record is saved, Then the GeoSeal and public key identifier are stored immutably with the asset and are not user-editable.
- Given missing metadata fields, When the GeoSeal is created, Then absent values are recorded as null without blocking ingest.
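A compact sketch of the sealing step, using hashlib and the `cryptography` package (the payload fields and key-ID scheme are assumptions drawn from these criteria; in production the P-256 key would live in an HSM/KMS rather than in process):

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def create_geoseal(image_bytes: bytes, metadata: dict,
                   private_key: ec.EllipticCurvePrivateKey, key_id: str) -> dict:
    # SHA-256 over the exact original bytes
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = {
        "hash": digest,
        "lat": metadata.get("lat"),                   # null when absent; ingest never blocks
        "lon": metadata.get("lon"),
        "altitude_m": metadata.get("altitude_m"),
        "captured_at": metadata.get("captured_at"),   # UTC ISO 8601
        "device_id": metadata.get("device_id"),
        "key_id": key_id,                             # lets verifiers find the public key
        "sealed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical JSON (sorted keys, no whitespace) so the signed bytes are stable
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    signature = private_key.sign(canonical, ec.ECDSA(hashes.SHA256()))  # ECDSA P-256
    return {"payload": payload, "signature": signature.hex()}
```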
On-Access Verification & Rehash Tamper Detection
- Given an asset with a stored GeoSeal, When the image is accessed or exported, Then the system rehashes the current bytes and compares to the sealed hash.
- Given the sealed hash equals the current hash and the signature validates against the referenced public key, When verification completes, Then the status is set to "Verified".
- Given the signature fails validation or the hash does not match, When verification completes, Then the status is set to "Mismatched" and an audit event is recorded.
- Given key rotation has occurred, When verification runs, Then the system resolves and uses the correct historical public key via the key ID and verification still succeeds if the seal is intact.
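The matching verify-on-access step as a sketch (the `resolve_public_key` lookup by key ID is an assumed helper; resolving by ID is what lets verification survive key rotation):

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_geoseal(image_bytes: bytes, seal: dict, resolve_public_key) -> str:
    payload = seal["payload"]
    # Rehash the current bytes and compare with the sealed hash
    if hashlib.sha256(image_bytes).hexdigest() != payload["hash"]:
        return "Mismatched"
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    public_key = resolve_public_key(payload["key_id"])  # resolves rotated keys by ID
    try:
        public_key.verify(bytes.fromhex(seal["signature"]), canonical,
                          ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return "Mismatched"
    # Sealed but missing GPS: still attachable, reported as Unverified
    if payload["lat"] is None or payload["lon"] is None:
        return "Unverified"
    return "Verified"
```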
Verification Status Indicators in Portal and PDF
- Given images are listed in the portal, When verification status is available, Then each image shows a badge: green check "Verified", red X "Mismatched", gray dash "Unverified".
- Given a report PDF is generated, When images are rendered, Then the same badges appear adjacent to each image with a legend on the first page.
- Given a user hovers or focuses a badge, When a tooltip is triggered, Then the UI shows a short reason string (e.g., "Hash mismatch", "No GPS data").
- Given assistive technologies, When badges are rendered, Then they include accessible labels conveying status and reason.
Verification Panel: Raw Metadata and Validation Results
- Given a user clicks the verification badge or link, When the panel opens, Then it displays raw metadata: latitude, longitude, altitude (m), capture timestamp (ISO 8601 UTC), device ID, file hash, signature algorithm, key ID, and seal creation time.
- Given the verification results, When shown, Then the panel displays fields for "Signature valid" (true/false), "Hash match" (true/false), "Metadata completeness" (list of missing fields), and "Overall status".
- Given a need to export, When the user selects Copy JSON or Download, Then the canonical GeoSeal JSON and verification results are copied/downloaded.
- Given access control rules, When a user without permission tries to open the panel, Then access is denied and no sensitive key material is exposed.
Metadata Inconsistency Detection Rules
- Given an ingested image with GPS metadata, When values are outside plausible ranges (latitude ∉ [-90,90], longitude ∉ [-180,180] or altitude < -500 m or > 10000 m), Then the system flags "Inconsistent metadata" and sets status to "Unverified".
- Given the capture timestamp differs from server receive time by more than 5 minutes into the future or 7 days into the past, When verification runs, Then a warning is recorded and shown in the panel.
- Given EXIF has been rewritten after ingest but file bytes are unchanged, When verification runs, Then the sealed metadata prevails and discrepancies are listed without changing "Verified" if the hash and signature still match.
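These plausibility rules are simple range and skew checks; a sketch with thresholds taken directly from the bullets above (the return shape is illustrative):

```python
from datetime import datetime, timedelta

def metadata_warnings(lat, lon, altitude_m,
                      captured_at: datetime, received_at: datetime) -> list:
    warnings = []
    # Plausible-range checks; any hit flags "Inconsistent metadata" -> Unverified
    if lat is not None and not -90 <= lat <= 90:
        warnings.append("latitude out of range")
    if lon is not None and not -180 <= lon <= 180:
        warnings.append("longitude out of range")
    if altitude_m is not None and not -500 <= altitude_m <= 10000:
        warnings.append("altitude out of range")
    # Timestamp skew vs server receive time: > 5 min future or > 7 days past
    if captured_at > received_at + timedelta(minutes=5):
        warnings.append("capture time in the future")
    if captured_at < received_at - timedelta(days=7):
        warnings.append("capture time more than 7 days before receipt")
    return warnings
```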
Handling Non-GPS or Third-Party Images
- Given an uploaded image lacks GPS and/or device ID metadata or originates from a third-party source, When ingestion completes, Then the system still creates a GeoSeal with available fields and sets status to "Unverified".
- Given an "Unverified" image, When attaching to a line item or ProofLink, Then attachment is allowed and the badge displays "Unverified" with a reason (e.g., "Missing GPS").
- Given a user views the verification panel for an "Unverified" image, When opened, Then the panel explicitly lists missing fields and recommends steps to achieve verification.
Performance, Security, and Audit Logging
- Given a batch of up to 100 images, When ingestion executes, Then 95th percentile seal creation latency is ≤ 2.0 seconds per image and throughput is ≥ 20 images/minute per worker.
- Given a user opens an image or PDF, When verification runs, Then 95th percentile per-image verification completes in ≤ 300 ms using cached keys.
- Given seals are signed, When keys are used, Then private keys are stored in an HSM/KMS with audit logging and public key IDs are embedded in the seal.
- Given any status change to "Mismatched" or "Unverified", When it occurs, Then an immutable audit event is recorded with actor, timestamp, reason, and previous/new status.
Auto-Crop, Annotations, and Redaction
"As a field technician, I want photos to auto-focus on the relevant roof area with clear annotations so that reviewers immediately see what supports the charge without exposing sensitive details."
Description

Automatically crop photos to the area relevant to the line item by leveraging detected roof components and damage regions. Provide manual fine-tuning with handles and a before/after toggle. Support annotation overlays (arrows, labels, dimensions) with style presets and maintain a clean, consistent visual standard across exports. Include configurable redaction to blur house numbers, faces, or license plates while preserving the original in a secure archive. Maintain a transformation history to preserve chain-of-custody and allow reversion.

Acceptance Criteria
Auto-Crop to Detected Roof Component/Damage Region
- When a line item is created or opened and linked to a detected region, the system generates a crop around the target region with 8% ±2% padding and centers the region in the frame.
- The auto-crop completes in ≤ 2.0 seconds per 12MP image on the standard processing tier.
- On a labeled validation set, the crop bounding box achieves IoU ≥ 0.70 for ≥ 90% of samples; below-confidence detections (< 0.60) fall back to full-frame with a highlight overlay.
- The cropped image maintains a minimum longest edge of 1200 px; if the region is smaller, upscale with bicubic and flag "low-detail" in metadata.
- The crop is deterministic for identical inputs and records image_id, detection_id, line_item_id, source_bbox, padding, and algorithm_version.
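The IoU ≥ 0.70 gate is the standard intersection-over-union ratio for axis-aligned boxes; a minimal version, assuming boxes as (x1, y1, x2, y2) in source pixel space:

```python
def iou(a: tuple, b: tuple) -> float:
    # Boxes as (x1, y1, x2, y2): intersection area first, then union
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

# A predicted crop that misses a 2-px margin of the labeled region on two sides
assert iou((0, 0, 10, 10), (2, 2, 10, 10)) == 64 / 100  # below the 0.70 gate
```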
Manual Crop Fine-Tuning with Before/After Compare
- Users can adjust the crop via 8 handles with minimum crop size 256×256 px and an optional aspect-ratio lock toggle.
- Keyboard nudge moves the crop by 1 px; Shift+Arrow moves by 10 px; Ctrl/Cmd+Z and Ctrl/Cmd+Y (or Cmd+Shift+Z) provide undo/redo with no practical limit.
- A Before/After toggle switches within ≤ 300 ms and supports side-by-side comparison with synchronized zoom/pan.
- "Reset to Auto" restores the last auto-crop; all edits are non-destructive until export.
- Saving a manual adjustment records user_id, timestamp, and pre/post bounding boxes in transformation history.
Standardized Annotation Overlays with Style Presets
- Users can add arrows, text labels, and dimension lines; dimension lengths derive from calibrated scale with error ≤ 2% versus known references.
- Snap-to-edge operates with a 5 px tolerance; Shift enforces angle snapping at 0°, 45°, 90°.
- Presets available: Standard, High Contrast, Dark; each enforces font size ≥ 11 pt at 300 DPI, stroke ≥ 2 pt, consistent arrowhead sizes, and a restricted brand color palette.
- Text and dimension labels meet WCAG contrast ratio ≥ 4.5:1 via halo/outline; annotations scale to maintain legibility across PDF and web exports.
- Z-order ensures redactions appear above underlying pixels; annotations cannot be edited within exported assets.
Configurable PII Redaction with Original Preservation
- Automatic detection and blurring for faces, license plates, and house numbers with per-category on/off toggles (default on).
- Detection confidence threshold ≥ 0.50; reviewers can add/remove masks with brush sizes 8/16/32 px; blur sigma ≥ 20 px.
- Redactions are applied irreversibly in exports; the original unredacted file is retained in a secure archive with SHA-256 hash and role-based access.
- Redaction actions are recorded in transformation history; exports indicate the count and categories of redactions applied.
- If a redaction would obscure a dimension label, the user is prompted to reposition the annotation before export.
Transformation History and Chain-of-Custody with Reversion
- Every transformation (auto-crop, manual crop, annotation add/edit/delete, redaction) creates an immutable audit entry with user_id, UTC ISO 8601 timestamp, action_type, parameters (coordinates in source pixel space), and pre/post hashes.
- The original image’s SHA-256 hash is stored once and referenced by all entries; exported variant hashes are recorded per asset.
- Users can view history and revert to any prior state in ≤ 1.0 s for images ≤ 25 MB; reversion creates a new entry without deleting prior entries.
- Marking an export as "final" locks the asset; any subsequent edit requires admin override and records reason and approver in the audit trail.
- The verification view displays read-only history and validates hashes against the stored original.
Consistent Exports and ProofLink Verification Integration
- PDF exports embed cropped images with annotations and redactions at 300 DPI, support Letter and A4, and apply the selected style preset consistently.
- Each line item includes at least one ProofLink URL in the PDF that resolves to the correct verification page with the corresponding image and history.
- Export runtime is ≤ 90 seconds for 30 images (12MP) on the standard tier; resulting PDF size is ≤ 25 MB unless an admin override is applied.
- Repeating an export with identical inputs produces byte-identical PDFs (excluding timestamp metadata) and identical image hashes.
- The verification page renders the image and overlays in ≤ 2.5 seconds on a 10 Mbps connection and respects export locks (no edits allowed when marked final).
Measurement Traceback & Segment Highlighter
"As a reviewer, I want to click a line item quantity and see exactly which roof segments and rules produced it so that I can validate measurements quickly and reduce disputes."
Description

For each line item quantity, provide a clickable reference that traces back to the 3D/2D measurement model: slopes, edges, ridges, valleys, eaves, and penetrations. In the portal, highlight the contributing segments, show measurement totals, unit conversions, and rounding rules applied. Embed static snapshots in the PDF for offline review. Handle revised measurements by versioning and clearly indicating which estimate version each line item references.

Acceptance Criteria
Click-through Traceback Highlights Contributing Segments
Given an estimate with line items mapped to roof components (slopes, edges, ridges, valleys, eaves, penetrations) When a user clicks the traceback icon/link next to a line item in the portal Then the measurement viewer opens on the correct project and version, auto-zooms, and highlights only the segments that contribute to that line item And each highlighted segment is visibly outlined and labeled with its ID/type and dimension (length/area/count) And the summed raw measurement displayed matches the pre-conversion quantity used for the line item within ±0.1% or ±0.01 unit (whichever is larger) And multi-area contributions across elevations are all included in the highlight set And 2D vs 3D view is selected based on the line item’s stored view preference, with a toggle available to switch without changing totals
Measurement Details Panel Displays Totals, Conversions, and Rounding
Given a line item traceback is opened When the Details panel is viewed Then it shows: (a) raw measured total with base unit, (b) each conversion step (e.g., pitch/waste factors), (c) unit conversion output, (d) final estimate quantity, and (e) rounding rule name and value And the panel shows the exact formula trail with intermediate results for transparency And the final quantity in the panel equals the quantity on the estimate line item to the displayed precision And the panel lists all contributing segments with IDs and per-segment dimensions And switching unit system (Imperial/Metric) updates displayed units and intermediate values consistently without altering the stored estimate quantity And hovering a segment entry in the list highlights that segment in the viewer and vice versa
PDF Snapshot Embeds Offline Measurement Evidence
Given an estimate PDF is generated When a line item has a measurement traceback Then the PDF includes a static snapshot image showing the contributing segments highlighted with a legend And the snapshot includes: measurement model view (2D/3D), measurement version ID, estimate version ID, generated timestamp, and page reference And the snapshot caption summarizes: raw total, conversion steps, final quantity, and rounding rule applied And image resolution is at least 150 DPI effective at A4/Letter so segment labels are legible when printed And the line item also contains a clickable ProofLink URL that opens the verification page when online And if no traceback is available, the PDF shows a clear “No traceback available” note with a reason code
Versioning and Change Indicators Preserve Traceback Integrity
Given measurements are revised and saved When a new measurement model version is created Then the system assigns an immutable version ID and timestamps it And each estimate line item stores the referenced measurement version and estimate version And when a line item’s referenced measurement version is not the latest, the portal displays an “Out-of-date measurement” badge with an option to update and a diff summary And updating to the latest version requires explicit confirmation and records an audit log capturing prior values, version IDs, user, and timestamp And tracebacks always resolve to the stored version unless the line item is explicitly updated And the PDF always displays the version IDs that the line item references
Performance, Resilience, and Fallback Behavior
Given a user clicks a traceback link in the portal When the measurement viewer loads and highlights segments Then click-to-highlight completes within 1.5 seconds at p95 for projects up to 1,000 segments on a standard broadband connection And the measurement viewer initial load completes within 3.0 seconds at p95 And generating PDF snapshots adds no more than 2.0 seconds at p95 to PDF export time per 50 line items And if the interactive viewer fails to load, the UI shows a non-blocking error with a retry action and displays the same data in a static fallback panel using the latest available snapshot And all failures are logged with correlation IDs without exposing internal error details to end users
Comprehensive Feature-Type Coverage and Edge Cases
Given supported roof component types include slopes, edges, ridges, valleys, eaves, and penetrations When tracebacks are generated for each type Then the highlight, labeling, and totals correctly reflect the semantics of each type (e.g., area for slopes, length for edges, count/diameter for penetrations) And grouped/merged segments indicate grouping in the list with roll-up and per-segment values And ignored/excluded segments are not included in totals and are denoted with an exclusion reason when inspected And multi-pitch roofs apply the correct pitch factor per contributing slope And penetrations show count and per-penetration dimensions where available, with map pins in 2D and markers in 3D And automated tests include at least one fixture per component type and pass 100% for these scenarios
Jurisdictional Code/Policy Citation Mapper
"As an estimator, I want to attach the right code or policy citation for my location and carrier so that my line items are approved without lengthy negotiations."
Description

Maintain a curated, versioned library of building codes, manufacturer specifications, and insurer policy guidelines indexed by jurisdiction, insurer, product, and effective dates. Suggest relevant citations for a line item based on project address, insurer profile, and detected damage type. Allow users to search, attach, and quote specific sections with deep links. Flag expired or superseded citations and prompt for updates. Provide an admin UI for library maintenance and bulk imports.

Acceptance Criteria
Auto-Suggest Citations by Context
Given a project with a resolvable address (jurisdiction determined), an insurer profile, and a line item with detected damage type and product When the user opens the Citations panel for that line item Then the system returns ≥3 relevant citation suggestions ranked by relevance within 800 ms (p95) for libraries ≤50k records And the suggestions exclude expired or superseded versions by default based on the project As-Of date And each suggestion displays source type, title, section, jurisdiction/insurer, product (if applicable), effective date range, version ID, and a confidence score (0–1) And the user can Select individually or Add All; selections persist on save And if no suggestions are found, the UI shows "No suggestions" with a link to Search and logs telemetry event citation_suggestion_empty
Search and Filter Citation Library
Given the user opens the Citation Library When they search by keywords, exact phrase (quoted), section number, or boolean operators (AND/OR/NOT) and apply filters (jurisdiction, insurer, product, source type, As-Of date) Then results return within 1.5 s (p95) with total count and pagination (25 per page by default) And only versions effective on the As-Of date are shown unless "Include superseded" is enabled, in which case superseded items are included and visually flagged And each result shows source type, title, section, snippet with highlighted terms, jurisdiction/insurer, product tags, effective date range, version ID, and deep-link icon And clicking a result opens a details pane with full text, metadata, and a copyable deep link URL
Attach and Deep-Link Citations to ProofLinks
Given a line item in an estimate When the user attaches one or more citations from suggestions or search Then the line item shows attached citations with section ref, source, and version badge And exporting the estimate to PDF and the online verification page renders each attachment with a 200–400 character quoted snippet, source metadata, and a deep link that opens the exact section in the viewer And PDF rendering preserves snippet and metadata when opened offline; deep-link URLs remain visible as text And clicks on deep links in the verification page are tracked per citation and line item
Versioning, Effective Dates, and Supersession Handling
Given the library contains multiple versions of a citation When a new version supersedes a prior one Then the prior version is flagged Superseded and references the successor via superseded_by; the successor references supersedes And suggestions and default searches exclude Superseded/Expired citations based on the project As-Of date, with an option to include them And if an attached citation becomes expired or superseded relative to the project As-Of date, the UI shows an Update Available prompt with one-click Replace to attach the successor while retaining an audit trail And existing exported estimates preserve the originally attached citation text and metadata (no retroactive mutation)
Admin Bulk Import with Validation and Rollback
Given an Admin with Library Maintainer role uploads a CSV or JSON file following the documented schema When they click Validate Then the system performs schema and field validation (required: source type, title, section_ref, text, jurisdiction, effective_start; optional: effective_end, insurer/product tags, supersedes; version_id unique) and reports counts of New, Update, Duplicate, Error And a Preview shows sample rows with their intended action and warnings (e.g., date range overlaps)
When Import is confirmed Then records are upserted in batches with transactional rollback on any batch error; a downloadable import report lists row outcomes and error messages And all created/updated records are audit-logged with user, timestamp, and import job ID; permission is enforced (non-maintainers blocked)
Admin UI Maintenance and Guardrails
Given a Library Maintainer opens the Admin UI When creating or editing a citation/version Then field validation enforces: effective_start ≤ effective_end (if present), unique section within a source/version, valid jurisdiction code, valid URLs, and valid supersedes/superseded_by references And the maintainer can manage insurer and product tags, set supersession links, and preview the deep-link anchor And soft-delete is supported with restore; hard delete is blocked if the citation is referenced by any active line item, with a list of referencing items shown And all changes (create/update/delete/restore) are audit-logged with before/after diffs, user, timestamp
Interactive PDF Deep Links & Verification Portal
"As a claims reviewer, I want to click from the PDF directly into a verification page with all supporting evidence so that I can approve or challenge items efficiently."
Description

Generate PDFs with per-line-item icons that deep-link to a secure verification page containing evidence details, GeoSeal validation, measurement traceback, and citations. Links are signed, time-limited, and revocable. The portal renders fast, mobile-friendly views with pagination for multi-item reviews and preserves the export context. Record link click metrics and return status signals for approval workflows. Provide graceful degradation when links are disabled by including summary evidence snapshots in the PDF.

Acceptance Criteria
PDF Line-Item Icons and Deep Links
Given an estimate with at least 1 line item and Interactive Links enabled When a PDF is generated Then each line item row displays a clickable evidence icon adjacent to the line item text with accessible label "View Evidence" And each icon links to a unique verification URL containing a signed, unguessable token and the line item/export identifiers And the PDF contains no broken or empty link annotations (0% broken links on preflight) And clicking the icon in a standards-compliant PDF viewer opens the verification URL in the default browser And exporting an estimate with up to 500 line items completes within 120 seconds on the standard build tier
Secure Signed, Time-Limited, Revocable Links
Given a generated deep link with a signed token When the current time is before the token's expiry and the token is not revoked Then the verification page returns HTTP 200 over HTTPS only and renders the requested line item evidence And the token provides at least 128 bits of entropy and is signed (e.g., HMAC or equivalent) with server-held secret keys
When the token is expired Then the server returns HTTP 410 Gone with a non-sensitive expiry message and no evidence data
When the token is revoked via the admin/API revoke endpoint Then access attempts return HTTP 403 Forbidden within 60 seconds of revocation
And tokens are rate-limited to mitigate brute-force attempts and are never logged in full in server logs
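A minimal sketch of a token with these properties, using only the Python standard library (the payload layout, dot-separated encoding, and in-memory revocation set are illustrative; a production build might use a JWT library and a shared revocation store):

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

SECRET = secrets.token_bytes(32)   # server-held; in practice sourced from a KMS
REVOKED = set()                    # in practice a shared store checked per request

def mint_token(export_id: str, line_item_id: str, ttl_s: int = 7 * 86400) -> str:
    body = {"id": secrets.token_urlsafe(16),          # >= 128 bits of entropy
            "exp": int(time.time()) + ttl_s,
            "export": export_id, "item": line_item_id}
    raw = base64.urlsafe_b64encode(json.dumps(body, sort_keys=True).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, raw, hashlib.sha256).digest())
    return (raw + b"." + sig).decode()                # base64url never contains "."

def check_token(token: str):
    raw, _, sig = token.encode().rpartition(b".")
    expected = base64.urlsafe_b64encode(hmac.new(SECRET, raw, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return 403, None                              # forged or corrupted
    body = json.loads(base64.urlsafe_b64decode(raw))
    if body["id"] in REVOKED:
        return 403, None                              # revoked -> 403 Forbidden
    if body["exp"] < time.time():
        return 410, None                              # expired -> 410 Gone
    return 200, body
```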
Verification Portal Evidence Completeness
Given a valid deep link for a specific line item export snapshot When the verification page loads Then it displays: (a) cropped, GeoSeal-verified photos with badges and timestamps; (b) detected damage labels with confidence; (c) measurement traceback including referenced facets/edges and values; (d) relevant code/policy citations with sources; (e) line item summary (description, quantity, unit price, total) And it shows export context (estimate ID, export version, line item ID, created-at) matching the originating PDF And all values reflect the immutable export snapshot; subsequent project edits do not change the displayed data And GeoSeal validation status indicates Verified/Invalid with hash and capture metadata; Invalid hides sensitive EXIF and shows remediation guidance
Mobile Performance and Responsiveness
Given a mid-tier mobile device on throttled 4G (Lighthouse mobile defaults) When opening a verification page Then 95th percentile Largest Contentful Paint ≤ 2.5s and Time to Interactive ≤ 3.5s over the last 7 days And initial page payload (HTML+CSS+JS excluding images) ≤ 1.5 MB and total image bytes ≤ 3 MB for a typical single-item view And the layout is responsive at 320–414px widths with no horizontal scrolling, tap targets ≥ 44px, and text ≥ 16px base size And accessibility checks pass with contrast ratio ≥ 4.5:1 for text and all images have alt text or aria-labels
Pagination and Multi-Item Review Navigation
Given a verification session for an export with more than 20 line items When the reviewer opens the portal Then items are paginated with a default page size of 20 and controls for Previous/Next and direct page selection And the URL encodes the current page and filters so links are shareable and restorable on reload And Next/Previous Item actions preserve applied filters/sorts and load the correct adjacent item And the API returns accurate total counts and the last page correctly reflects remaining items And keyboard navigation (←/→) advances items when focus is within the review area
Click Metrics and Approval Status Signals
Given a deep link is opened When the verification page is requested Then a click event is recorded with timestamp, link ID, export version ID, line item ID, user agent, referrer, and anonymized IP prefix within 5 seconds And metrics are queryable via API by date range, estimate/export, and line item
Given approval actions are enabled for the workflow When a reviewer clicks Approve/Reject/Needs Info in the portal Then the status is persisted with actor metadata and a webhook is delivered to the registered callback URL within 2 seconds (p95), with retries and idempotency keys for up to 24 hours on failure And the portal visibly confirms the action and disallows duplicate submissions
Graceful Degradation with Summary Evidence Snapshots
Given Interactive Links are disabled at export time or the recipient's environment blocks external links When the PDF is generated Then each line item includes an embedded evidence summary (1–2 cropped photos, GeoSeal status summary, key measurements, and top citations) directly in the PDF And no external link annotations remain in the document (0 external URIs) And the PDF remains ≤ 25 MB for up to 100 line items with readable images (≥ 150 DPI effective) And a references section aggregates per-line-item evidence snapshots for quick offline review And all text remains searchable/selectable, not rasterized
Secure Sharing, RBAC, and Audit Logging
"As an operations manager, I want controlled, auditable access to ProofLinks so that sensitive project data is shared securely and compliance requirements are met."
Description

Implement role-based access controls for viewing, commenting, and downloading evidence. Support organization-level roles (admin, estimator, reviewer, external reviewer) and shareable, expiring links for third parties. Log all access, changes, and exports with timestamps, user IDs, IP addresses, and item references. Provide exportable audit reports and webhooks for approval/denial events. Ensure storage and transmission encryption in line with industry best practices.

Acceptance Criteria
Org RBAC: View, Comment, Download Permissions
Given a logged-in user within an organization, When they open a ProofLinks verification page, Then permissions are enforced per role as follows: Admin can view all evidence assets, add/delete comments, download PDFs and original evidence files, manage share links, and edit role assignments; Estimator can view all evidence assets, add comments, download PDFs and original evidence files, and create/disable share links for projects they own or are assigned to, but cannot edit org role assignments; Reviewer can view all evidence assets and add comments, can download PDFs, but cannot download original evidence files and cannot create/disable share links.
Given an external reviewer accessing via a share link, When link options allow comments and/or PDF download, Then only those enabled actions are available; all other restricted actions are disabled in UI and return 403 via API.
Given a user attempts an action they lack permission for, When the action is invoked, Then the UI displays "Insufficient permissions" and the API returns HTTP 403 with a stable error code and request_id.
Given org RBAC settings are updated by an Admin, When a role-holder refreshes or makes a new request, Then effective permissions reflect the change within 60 seconds.
Shareable Expiring Links for External Reviewers
Given a permitted org user creates a share link, When configuring the link, Then they can set scope (single proposal or entire project), expiration (absolute UTC datetime or duration up to 90 days), optional passcode, and allowed actions (view only, allow comments, allow PDF download).
Given a valid share link is accessed before expiry with the correct passcode (if set), When the external reviewer opens it, Then they can access only the scoped content and cannot navigate to other organization data.
Given a share link reaches expiry or is revoked, When it is accessed, Then the system returns 410 (web) or 403 (API) and the UI shows "Link expired or revoked".
Given a share link is configured as single-use, When the first successful access occurs, Then subsequent access attempts are blocked with HTTP 403 and logged.
Given a share link is created, Then a short HTTPS URL and QR code are generated; HTTP is not served; copying or previewing the link never reveals the passcode.
Comprehensive Audit Logging of Access, Changes, and Exports
Given any view, comment, share create/update/delete, role change, download, export, or webhook delivery event occurs, Then an audit log entry is written within 5 seconds containing: event_id (UUIDv4), event_type, occurred_at (UTC ISO-8601 with ms), actor_type (user_id or link_id), actor_ip, org_id, project_id, item_ref (line_item_id or asset_id), outcome (success/failure), http_status (if applicable), and request_id.
Given audit logs exist, When queried by an Admin for a time range, Then results are immutable, paginated, and ordered by occurred_at descending; updates and deletes are not permitted via UI or API.
Given a PDF export or evidence download is initiated, Then the exported file metadata includes the originating audit event_id; the corresponding audit entry records file hash and byte size.
Exportable, Filterable Audit Report
Given an Admin opens the Audit Reports page, When they filter by date range, project, user_id/link_id, action types, outcome, and http_status, Then the on-screen results reflect the filters and display a total count.
Given filtered results are shown, When the Admin exports, Then CSV and JSON downloads are available; files include headers/keys: event_id, occurred_at, event_type, actor_type, actor_id_or_link_id, actor_ip, org_id, project_id, item_ref, outcome, http_status, request_id, user_agent.
Then the exported file row count equals the on-screen filtered count; files up to 100,000 rows generate within 30 seconds; larger exports are queued and delivered within 15 minutes via a signed HTTPS link that expires in 24 hours.
Webhooks for Approval/Denial Events
Given webhook endpoints with shared secrets are configured by an Admin, When a line item decision is recorded as Approved or Denied by a Reviewer or via API, Then a webhook is sent within 10 seconds with event types prooflink.item.approved or prooflink.item.denied.
Then the JSON payload includes: event_id, occurred_at, org_id, project_id, line_item_id, decision (approved/denied), decided_by (user_id or link_id), reason (optional), and previous_decision (if any).
Then the HTTP request includes an X-RoofLens-Signature header using HMAC-SHA256 over the raw body with a timestamp; receivers can verify freshness within 5 minutes.
When the endpoint returns non-2xx, Then retries occur up to 6 times with exponential backoff up to 30 minutes; undelivered events are retained for 7 days in a dead-letter queue; all delivery attempts are captured in audit logs.
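A sketch of sender-side signing and receiver-side verification for that header (the `t=<unix>,v1=<hex>` layout is an assumption; the criteria only fix HMAC-SHA256 over the raw body plus a timestamp with 5-minute freshness):

```python
import hashlib
import hmac
import time

def sign(body: bytes, secret: bytes) -> str:
    # Signature covers the timestamp and the raw body, HMAC-SHA256
    ts = int(time.time())
    mac = hmac.new(secret, f"{ts}.".encode() + body, hashlib.sha256).hexdigest()
    return f"t={ts},v1={mac}"

def verify(header: str, body: bytes, secret: bytes, max_age_s: int = 300) -> bool:
    fields = dict(part.split("=", 1) for part in header.split(","))
    ts, mac = int(fields["t"]), fields["v1"]
    if abs(time.time() - ts) > max_age_s:        # enforce 5-minute freshness window
        return False
    expected = hmac.new(secret, f"{ts}.".encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)    # constant-time comparison
```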
Encryption for Storage and Transmission
Given any client-to-server or server-to-server connection, Then TLS 1.2+ is enforced with HSTS; TLS 1.0/1.1 and weak ciphers are rejected; all share links are HTTPS with valid certificates.
Given evidence files, PDFs, and audit logs at rest, Then they are encrypted with 256-bit keys managed by the cloud provider KMS; keys are rotated at least every 90 days; access is governed by least-privilege IAM policies.
Given presigned URLs for evidence downloads are issued, Then each URL is scoped to a single file and HTTP method, expires within 15 minutes, and can be revoked early by invalidating the underlying token.
Given backups are created, Then backups are encrypted at rest and in transit and restoration is tested at least monthly with results recorded in the audit log.

GapGuard

Prevents misses and conflicts by scanning the scope against roof geometry, climate zone, and carrier/franchise rules. Auto‑suggests required adds (e.g., ice‑and‑water % by zone, steep/two‑story charges, step flashing at walls) and flags redundancies or incompatible methods—cutting first‑pass errors and disputes.

Requirements

Pluggable Compliance Rule Engine
"As an estimator, I want the system to automatically apply the correct set of rules for my carrier and region so that I get consistent, compliant suggestions without manual cross-checking."
Description

Centralized engine that evaluates the job context (roof geometry, location/climate zone, carrier/franchise program, selected line items) against a library of rules to produce suggested adds and conflict/compatibility flags. Supports rule types such as Required Add, Threshold, Conflict, and Redundancy. Rules are scoping-aware (by carrier, franchise, program, jurisdiction, climate zone), versioned with effective dates, and have priority/precedence resolution. Provides a JSON/DSL rule format, import/export, and a test harness to validate rules against sample jobs. Ensures fast evaluation (<500 ms per job) and deterministic outputs for auditability.

Acceptance Criteria
Required Add Suggestion by Climate Zone and Geometry
Given a job with climateZone="CZ-5", eaveLengthFt=240, valleysLengthFt=60, and no ICE_WATER line item selected And a Required Add rule R1 scoped to CZ-5 with formula rolls = ceil(((eaveLengthFt + valleysLengthFt) * 3) / coveragePerRollSqFt) where coveragePerRollSqFt=200 When the engine evaluates the job Then output.suggestedAdds includes {lineItemCode:"ICE_WATER", quantity:5, ruleId:"R1", scopeMatch:["CZ-5"], type:"RequiredAdd"} And output.trace includes an entry for R1 with inputs, formula, intermediate values, and final quantity=5 And no conflict or redundancy flags reference ICE_WATER
Given a job with wallIntersectionLF=83 and no STEP_FLASH line item selected And a Required Add rule R2 that requires STEP_FLASH with quantity = roundUpToNearest(10, wallIntersectionLF) When the engine evaluates the job Then output.suggestedAdds includes {lineItemCode:"STEP_FLASH", quantity:90, ruleId:"R2", type:"RequiredAdd"} And output.trace contains a deterministic calculation path for R2
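Both rule formulas reduce to ceiling arithmetic; a tiny sketch that reproduces the worked quantities (function names mirror the rule expressions and are otherwise illustrative):

```python
import math

def ice_water_rolls(eave_lf: float, valley_lf: float, coverage_per_roll: float) -> int:
    # R1: rolls = ceil(((eaveLengthFt + valleysLengthFt) * 3) / coveragePerRollSqFt)
    return math.ceil((eave_lf + valley_lf) * 3 / coverage_per_roll)

def round_up_to_nearest(step: int, value: float) -> int:
    # R2 helper: round up to the next multiple of `step`
    return math.ceil(value / step) * step

assert ice_water_rolls(240, 60, 200) == 5   # ceil(900 / 200) = ceil(4.5)
assert round_up_to_nearest(10, 83) == 90    # step flashing quantity
```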
Threshold Rule: Steep/Tall Roof Charge
Given a job with pitch=7/12, stories=1, roofAreaSquares=30, and no STEEP_CHARGE selected And a Threshold rule T1 that requires STEEP_CHARGE when pitch >= 6/12 with quantity = roofAreaSquares When the engine evaluates the job Then output.suggestedAdds includes {lineItemCode:"STEEP_CHARGE", quantity:30, ruleId:"T1", type:"Threshold"}
Given a job with pitch=5/12, stories=2, roofAreaSquares=24, and no TWO_STORY_CHARGE selected And a Threshold rule T2 that requires TWO_STORY_CHARGE when stories >= 2 with quantity = roofAreaSquares When the engine evaluates the job Then output.suggestedAdds includes {lineItemCode:"TWO_STORY_CHARGE", quantity:24, ruleId:"T2", type:"Threshold"}
Given a job with pitch=5/12, stories=1, roofAreaSquares=28 When the engine evaluates the job with T1 and T2 present Then no STEEP_CHARGE or TWO_STORY_CHARGE suggestions are emitted
Conflict and Redundancy Detection for Scope Items
Given selected lineItems include ["FULL_TEAR_OFF", "OVERLAY"] And a Conflict rule C1 declares FULL_TEAR_OFF incompatibleWith OVERLAY with severity="Error" When the engine evaluates the job Then output.conflicts includes {ruleId:"C1", items:["FULL_TEAR_OFF","OVERLAY"], severity:"Error", recommendation:"Remove OVERLAY"} And output.trace records the conflict decision and recommendation deterministically
Given selected lineItems include ["STARTER_STRIP", "STARTER_STRIP"] And a Redundancy rule RDN1 flags duplicate occurrences of STARTER_STRIP When the engine evaluates the job Then output.redundancies includes {ruleId:"RDN1", item:"STARTER_STRIP", count:2, recommendation:"Remove duplicate"}
Given a Required Add rule would suggest OVERLAY but a confirmed selection FULL_TEAR_OFF exists and conflict rule C1 applies When the engine evaluates the job Then the OVERLAY suggestion is suppressed and output.trace references C1 as the suppression reason
Rule Scoping by Carrier, Program, Jurisdiction, and Climate Zone
Given carrierId="ACME", programId="Preferred", jurisdiction="CO", climateZone="CZ-5" And rules S1(scope:{carrier:"ACME", program:"Preferred", jurisdiction:"CO"}), S2(scope:{carrier:"Other"}), S3(scope:{climateZone:"CZ-5"}) When the engine evaluates the job Then only S1 and S3 are eligible and produce outputs; S2 produces no outputs And output.trace lists matchedScopes for S1 and S3 Given carrierId changes to "Centauri" with the same job context When the engine evaluates the job Then S1 is not applied; only rules scoped to Centauri or global scope are applied And all emitted outputs include scope metadata showing why they matched
Rule Versioning and Effective Date Selection
Given a job with jobDate=2025-09-04 And rule V0 {ruleId:"ICE_WATER", version:0, effectiveStart:2024-01-01, effectiveEnd:2024-12-31} And rule V1 {ruleId:"ICE_WATER", version:1, effectiveStart:2025-01-01, effectiveEnd:2025-12-31} When the engine evaluates the job Then only V1 is considered active and any outputs reference version:1
Given a job with jobDate=2026-01-02 and no successor version after V1 When the engine evaluates the job Then the ICE_WATER rule is excluded from evaluation and no ICE_WATER output is produced
Given two versions overlap on jobDate, V2(version:2, effectiveStart:2025-06-01, effectiveEnd:2025-12-31) and V1(version:1, effectiveStart:2025-01-01, effectiveEnd:2025-12-31) When the engine evaluates the job Then the engine selects the highest versionNumber (V2) deterministically; output.trace records the tie-break rationale
Priority and Precedence Resolution for Overlapping Rules
Given two Required Add rules target ICE_WATER: RC(priority:90, scope:"Carrier") yields quantity=2; RJ(priority:80, scope:"Jurisdiction") yields quantity=4 with combineStrategy="max" When the engine evaluates the job Then output.suggestedAdds includes ICE_WATER with quantity=4 and sourceRules:[RC,RJ] And output.trace records precedence: combineStrategy="max" chosen, quantities compared, result=4
Given two Required Add rules target DRIP_EDGE with equal priority=80 but different specificity (Program-specific vs Global) and combineStrategy="override" When the engine evaluates the job Then the more specific (Program-specific) rule wins; only that rule contributes to the final quantity And tie-break rule is deterministic: priority desc, specificity desc (carrier>program>jurisdiction>climate>global), then ruleId ascending
Given a Required Add and a Conflict rule both apply to the same line item When the engine evaluates the job Then Conflict rules take precedence to prevent emitting incompatible adds; the suppressed add is listed in trace with suppressionReason referencing the conflict ruleId
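The deterministic tie-break (priority desc, specificity desc, ruleId asc) maps directly onto a composite sort key; a sketch with an assumed numeric specificity ranking:

```python
# Higher number = more specific, per the stated order:
# carrier > program > jurisdiction > climate > global
SPECIFICITY = {"carrier": 5, "program": 4, "jurisdiction": 3, "climate": 2, "global": 1}

def winning_rule(rules: list) -> dict:
    # priority desc, specificity desc, then ruleId asc so ties resolve identically
    return sorted(rules, key=lambda r: (-r["priority"],
                                        -SPECIFICITY[r["scope"]],
                                        r["ruleId"]))[0]

rules = [{"ruleId": "R_PROGRAM", "priority": 80, "scope": "program"},
         {"ruleId": "R_GLOBAL", "priority": 80, "scope": "global"}]
assert winning_rule(rules)["ruleId"] == "R_PROGRAM"  # specificity breaks the tie
```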
Performance, Determinism, Audit Trace, and Rule DSL Import/Export with Test Harness
Given a corpus of 1,000 jobs and a ruleset of 500 active rules When evaluated on reference hardware (4 vCPU, 8 GB RAM) Then per-job evaluation latency meets p95 <= 500 ms and p99 <= 800 ms; memory usage stays below 250 MB per worker
Given the same job context and rules evaluated 10 times When comparing outputs Then suggestedAdds, conflicts, redundancies, and trace entries are byte-for-byte identical and stably ordered
Given a valid rule JSON/DSL document conforming to schema version X.Y When imported via API Then the engine stores it, returns 201 with ruleId/version, and it becomes available for evaluation
Given an invalid rule JSON/DSL (schema violation or unsafe expression) When import is attempted Then the API returns 400 with machine-readable error codes, field paths, and no partial rule is activated
Given any stored rule When exported Then the exported JSON/DSL round-trips: import(export(rule)) yields a semantically equivalent rule; normalized fields (e.g., sorted conditions) ensure deterministic equality
Given the bundled test harness with sample jobs and expected outputs per rule type (RequiredAdd, Threshold, Conflict, Redundancy) When the harness is run in CI Then it produces a pass/fail report (JSON) with 100% pass rate for reference fixtures and coverage by rule type >= 1 test each
Geometry-Aware Scope Validation
"As a roofing estimator, I want validations tied to the actual roof geometry so that required line items and quantities are accurate the first time."
Description

Maps RoofLens measurement outputs (facets, edges, eaves, rakes, valleys, pitch, stories, wall intersections, penetrations) to validation checks that infer required components and labor charges. Calculates quantities for items like step flashing at walls, drip edge, ridge cap, valley metal, starter, ventilation, steep/two-story charges based on thresholds. Detects missing required items relative to geometry and flags over- or under-quantification. Exposes normalized measurement features to the rule engine with unit conversions and rounding rules.

Acceptance Criteria
Step Flashing at Wall Intersections Auto‑Suggest and Quantity Validation
Given a measurement set with wall intersections totaling 124 LF and a configured step flashing factor of 1.5 pcs/LF and 20 pcs/bundle When the scope has no Step Flashing line item Then the system flags a missing required item and suggests Step Flashing with 186 pcs (10 bundles after ceiling rounding) and links the suggestion to the wall intersection geometry
Given a measurement set with wall intersections totaling 80 LF and configured factor 1.5 pcs/LF When the scope includes Step Flashing at 80 pcs Then the system flags under‑quantification showing expected 120 pcs and required delta 40 pcs
Given a measurement set with wall intersections totaling 0 LF When validation runs Then no Step Flashing requirement is suggested and no missing/overage flags are raised
Given a measurement set with wall intersections totaling 62 LF and a factor of 1.5 pcs/LF When the scope includes Step Flashing at 120 pcs Then the system flags over‑quantification showing expected 93 pcs and overage 27 pcs
Drip Edge Linear Footage Calculation and Rounding
Given eave length 220 LF and rake length 160 LF and drip edge sold in 10‑ft sticks with 7% waste When validation runs Then required drip edge is ceiling(((220+160)*1.07)/10) = 41 sticks and a missing item is suggested if not present
Given computed requirement 41 sticks When the scope includes Drip Edge at 36 sticks Then the system flags under‑quantification and recommends increasing to 41 sticks
Given computed requirement 41 sticks When the scope includes Drip Edge at 45 sticks and the carrier tolerance is 0 sticks Then the system flags over‑quantification and recommends reducing to 41 sticks
Given a measurement set where rake length = 0 and eave length > 0 When validation runs Then the required drip edge equals eave length only and no rake footage is included
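The step-flashing and drip-edge checks share one pattern: apply factor and waste, then ceil to the sellable unit while preserving the raw value for audit. A sketch reproducing the worked numbers above:

```python
import math

def sellable_quantity(raw: float, unit_size: float):
    # Keep the raw pre-rounded value for the audit trail; ceil to the next unit
    return raw, math.ceil(raw / unit_size)

raw_pcs, bundles = sellable_quantity(124 * 1.5, 20)         # step flashing
assert (raw_pcs, bundles) == (186.0, 10)                    # 186 pcs -> 10 bundles

raw_lf, sticks = sellable_quantity((220 + 160) * 1.07, 10)  # drip edge, 7% waste
assert sticks == 41                                         # ceil(406.6 / 10)
```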
Ridge Vent vs Box Vent Compatibility and Ridge Cap Quantity
Given total ridge length 140 LF and hip length 60 LF and ridge cap bundle coverage 20 LF/bundle When validation runs and no ridge‑related items are present Then the system suggests Ridge Cap at ceiling((140+60)/20) = 10 bundles and flags a missing item
Given selected ventilation includes Ridge Vent 120 LF and Box Vents qty 8 When validation runs Then the system flags method incompatibility and requires either ridge vent or box vents per rule set, not both
Given Ridge Vent 120 LF is selected and ridge cap coverage is required over vent When validation runs Then the system validates Ridge Cap >= ceiling((ridge LF covered by vent + remaining ridge + hips)/20) bundles and flags under/over‑quantification accordingly
Given ridge length = 0 and hip length = 0 When validation runs Then no Ridge Cap is required or suggested
Valley Method Requirement and Exclusivity
Given total valley length 85 LF and carrier rule requires either Open Valley Metal (10‑ft sticks, 5% waste) or Closed‑Cut Shingle Valley (no metal) When validation runs and neither method is in the scope Then the system flags a missing valley treatment and suggests a compliant method per default rule (e.g., Open Valley Metal at ceiling((85*1.05)/10) = 9 sticks)
Given the scope includes both Open Valley Metal and Closed‑Cut Shingle Valley When validation runs Then the system flags redundancy/incompatibility and requires selection of one method only
Given computed Open Valley Metal requirement 9 sticks When the scope includes 7 sticks Then the system flags under‑quantification and recommends 9 sticks
Given there are 0 LF of valleys When validation runs Then no valley method is required or suggested
Starter Strip Placement and Quantity Accuracy
Given eave length 240 LF and rule set allows starter on eaves only with bundle coverage 100 LF/bundle and 5% waste When validation runs and Starter is absent Then the system flags a missing item and suggests Starter at ceiling((240*1.05)/100) = 3 bundles
Given eave length 180 LF and rule prohibits starter on rakes When the scope includes Starter quantity derived from eaves + rakes Then the system flags non‑compliant placement and recomputes quantity using eaves only
Given computed Starter requirement 3 bundles When the scope includes 5 bundles and carrier tolerance is 0 bundles Then the system flags over‑quantification and recommends 3 bundles
Steep Pitch and Two‑Story Labor Charges Thresholding
Given average roof pitch 8/12 and steep threshold 7/12 and roof area 32 SQ and charge type per SQ When validation runs and no steep charge is present Then the system flags a missing Steep Charge and suggests 32 SQ at the configured steep rate
Given story count = 2 and two‑story threshold >= 2 stories and charge type per job When validation runs and no two‑story charge is present Then the system flags a missing Two‑Story Charge and suggests qty = 1 job at the configured rate
Given rules specify steep and two‑story charges are mutually allowed but not duplicative per SQ When the scope contains two separate steep charges totaling more than the roof area Then the system flags duplication and recommends a single steep charge totaling 32 SQ
Given average roof pitch 6/12 and story count = 1 When validation runs Then no steep or two‑story charges are required or suggested
Normalized Measurement Features Exposed to Rule Engine
Given a completed RoofLens measurement When the rule engine queries normalized features Then it receives facets area in SQ (1 SQ = 100 SF), edges classified as eaves/rakes/ridges/hips/valleys in LF, wall intersections in LF, pitch as ratio (e.g., 8/12), story count as integer, and penetrations with count and diameters
Given feature values in mixed units (e.g., meters, feet) When normalization runs Then units are converted to system defaults (SQ, LF, count) before evaluation and stored with precision of 2 decimals
Given a rounding rule set per item (e.g., sticks 10 LF, bundles coverage, rolls coverage) When quantity calculations execute Then results are rounded using ceiling to the next sellable unit and the raw pre‑rounded value is preserved for audit
Given a rule references an unavailable measurement feature When validation runs Then the engine returns a structured error and the UI flags the rule as inapplicable without blocking other validations
Climate Zone & Code Compliance Mapping
"As an adjuster, I want zone-based requirements applied automatically so that estimates reflect local codes and carrier standards without manual research."
Description

Determines climate zone and local code context from job address via geocoding and cached datasets (e.g., IECC/NOAA zones, municipality overlays). Maps zones to prescriptive requirements such as ice-and-water shield coverage percentages at eaves/valleys, underlayment types, ventilation ratios, and cold/heat climate practices. Maintains an updatable dataset with versioning and effective dates, and exposes zone-derived constraints to the rule engine. Provides graceful fallbacks when zones are ambiguous and logs data provenance for audits.

Acceptance Criteria
Geocode Address to Resolve Climate Zone and Municipality Overlay
Given a valid job address in a supported region, When zone mapping is requested, Then the system returns latitude/longitude with confidence >= 0.90, IECC climate zone ID, NOAA climatic region ID, and municipality overlay ID within 2,000 ms P95.
Given the same address was mapped within the last 365 days, When zone mapping is requested, Then cached geocoding results are used and returned within 800 ms P95.
Given an address cannot be geocoded, When zone mapping is requested, Then the system returns error code GEO_NOT_FOUND and no zone or requirements are assigned.
Map Zone and Jurisdiction to Prescriptive Requirements
Given resolved climate zone(s) and municipality overlay, When deriving prescriptive requirements, Then the output includes iceAndWater.eavesPercent (0–100), iceAndWater.valleysRequired (boolean), underlayment.type (enum), underlayment.layers (integer), ventilation.ratio (numeric), climate.cold (boolean), with units and controlled vocabulary per schema v1.
Given municipality overlay requirements are stricter than the broader zone, When deriving requirements, Then the stricter requirements are selected.
Given the output is validated, When checked against the JSON Schema, Then 100% of 50+ reference fixtures pass.
Ambiguous Zone Boundary Handling and Conservative Defaults
Given multiple geocoding candidates within 500 m or a boundary intersection, When mapping zones, Then ambiguity.score is computed in [0,1] and the top candidate is auto-selected only if score >= 0.80.
Given ambiguity.score < 0.80, When returning mapping, Then requiresConfirmation=true, the most stringent constraints across candidate jurisdictions are applied, and a user-facing warning is generated.
Given a user confirms the correct jurisdiction, When re-running mapping, Then requiresConfirmation=false and constraints reflect the confirmed jurisdiction.
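For the ambiguous-boundary case, "most stringent across candidate jurisdictions" can be merged field by field; a sketch under the assumption that larger percentages, more layers, and set required-flags are always the stricter choice (field names are flattened for illustration):

```python
def strictest(candidates: list) -> dict:
    # Field-by-field conservative merge across candidate jurisdictions
    return {
        "eavesPercent": max(c["eavesPercent"] for c in candidates),
        "valleysRequired": any(c["valleysRequired"] for c in candidates),
        "underlaymentLayers": max(c["underlaymentLayers"] for c in candidates),
        "cold": any(c["cold"] for c in candidates),
    }

zone_a = {"eavesPercent": 24, "valleysRequired": False,
          "underlaymentLayers": 1, "cold": False}
zone_b = {"eavesPercent": 36, "valleysRequired": True,
          "underlaymentLayers": 2, "cold": True}
assert strictest([zone_a, zone_b]) == {"eavesPercent": 36, "valleysRequired": True,
                                       "underlaymentLayers": 2, "cold": True}
```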
Dataset Versioning and Effective Date Resolution
Given multiple dataset versions with effectiveDate, When mapping at asOfDate T, Then the version selected is the latest with effectiveDate <= T and datasetVersionUsed is returned.
Given a job mapped at T0, When new datasets are published at T1 > T0, Then the job retains its original datasetVersionUsed unless re-evaluated explicitly.
Given an explicit datasetVersion is provided, When mapping, Then that version is used and echoed in the response and audit record.
Expose Zone-Derived Constraints to GapGuard Rule Engine
Given successful mapping, When the rule engine evaluates a job, Then constraints are available under context.zone.* including iceAndWater.eavesPercent, iceAndWater.valleysRequired, underlayment.type, underlayment.layers, ventilation.ratio, climate.cold, municipality.code.
Given constraints are injected, When rules execute, Then additional context injection overhead is <= 50 ms P95.
Given any required constraint is missing, When rules execute, Then a safe default is supplied per schema and a warning is logged.
Provenance and Audit Logging
Given any zone mapping run, When audit logs are queried by jobId, Then the record includes jobId, timestamp, inputAddress, normalizedAddress, coordinates, geocodingProvider, providerVersion, geocoderConfidence, datasetVersions (IECC, NOAA, municipality), effectiveDateUsed, citations[], ambiguity.score, requiresConfirmation, fallbackDecisions[], userOverride (optional), checksum.
Given an audit record exists, When retrieved within 24 months of creation, Then it is returned within 200 ms P95.
Given datasets change or a user override occurs, When mapping is re-run, Then a new immutable audit record is appended without altering prior records.
Performance, Reliability, and Caching SLAs
Given normal operating conditions, When processing 10,000 consecutive mapping requests, Then success rate for supported addresses is >= 99.5%, P95 latency <= 2,000 ms, and P99 latency <= 5,000 ms.
Given the geocoding provider is unavailable, When mapping a new address, Then the system serves cached mappings when available and otherwise returns GEO_UPSTREAM_UNAVAILABLE.
Given 50 concurrent requests per second, When mapping requests are processed, Then error rate is <= 0.5% and P95 latency <= 2,500 ms.
Conflict & Redundancy Detector
"As a project manager, I want automatic detection of conflicting or duplicate scope items so that I can resolve issues before sending the bid."
Description

Analyzes selected line items to identify mutually exclusive methods (e.g., recover vs tear-off), incompatible materials (e.g., synthetic vs felt underlayment), duplicate entries (e.g., double ridge cap), and overlapping charges (e.g., two steep charges). Assigns severity levels (error, warning, info) and recommends a single corrective action. Integrates with estimate editing to highlight offending items inline and prevents contradictory selections when configured as blocking.

Acceptance Criteria
Block mutually exclusive methods: Tear-off vs Recover
Given an estimate contains "Recover over existing shingles" for a roof area When the user adds "Tear-off existing shingles" for the same roof area Then the detector raises a conflict with severity "Error" within 300 ms and highlights both items inline And the detector recommends a single corrective action: "Remove the most recently added mutually exclusive item"
When the user clicks "Apply Fix" on the conflict Then the recommended item is removed, the conflict is cleared, and the estimate total recalculates immediately
Detect incompatible underlayments: Synthetic vs Felt
Given the estimate has a "Primary Underlayment" set to "Synthetic underlayment" When the user adds "15# felt underlayment" to any of the same roof areas Then the detector raises severity "Error" within 300 ms and highlights both line items inline And the detector provides a single recommended action: "Keep Synthetic underlayment; remove 15# felt underlayment"
When the user applies the recommended fix Then the felt underlayment line item is removed and the conflict list updates within 300 ms
Prevent duplicate ridge cap entries across sources
Given the estimate includes a "Hip & Ridge Kit (includes Ridge Cap LF)" When the user adds a standalone "Ridge Cap (LF)" covering ridge segments with ≥ 90% overlap with the kit coverage Then the detector flags a duplicate with severity "Warning" within 300 ms and highlights the standalone item And the detector recommends a single action: "Remove standalone Ridge Cap (LF)" And duplicates are detected even if SKUs differ but both map to material class "Ridge Cap"
When the overlap is < 10% Then no duplicate warning is raised
Flag overlapping steep/two-story charges by roof area
Given two "Steep charge" line items apply to the same roof sections Then the detector flags overlap with severity "Warning" within 300 ms and recommends: "Remove the duplicate steep charge" Given two "Two-story charge" line items apply to the same roof sections Then the detector flags overlap with severity "Warning" and recommends: "Remove the duplicate two-story charge" Given steep or two-story charges are scoped to distinct roof sections with 0% overlap Then no overlap warning is raised
Severity mapping and escalation rules
Rule: Mutually exclusive methods (e.g., Tear-off vs Recover) are assigned severity "Error"
Rule: Material incompatibilities (e.g., Synthetic vs Felt underlayment) are assigned severity "Error"
Rule: Duplicate entries (same functional material/charge covering ≥ 50% overlapping scope) are assigned severity "Warning"
Rule: Minor overlaps (allowances or charges with >0% and <50% overlapping scope) are assigned severity "Info"
Rule: When multiple issues affect the same conflict group, the displayed severity equals the highest severity in the group
Rule: Each conflict group exposes exactly one recommended corrective action
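Read as code, these rules reduce to a small lookup plus a max over each conflict group. A sketch, with the issue-type strings and data shapes as illustrative assumptions:

```python
SEVERITY_RANK = {"Info": 0, "Warning": 1, "Error": 2}

def issue_severity(issue_type: str, overlap: float = 0.0) -> str:
    if issue_type in ("mutually_exclusive_method", "material_incompatibility"):
        return "Error"
    if issue_type == "duplicate" and overlap >= 0.5:
        return "Warning"
    if 0.0 < overlap < 0.5:
        return "Info"                     # minor overlaps
    raise ValueError(f"unmapped issue: {issue_type} at {overlap:.0%} overlap")

def group_severity(issues: list[tuple[str, float]]) -> str:
    # The displayed severity is the highest severity in the conflict group.
    return max((issue_severity(t, o) for t, o in issues),
               key=SEVERITY_RANK.__getitem__)

assert group_severity([("duplicate", 0.9), ("minor_overlap", 0.2)]) == "Warning"
```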
Inline highlight and one‑click fix in estimate editor
Given any detected conflict exists When the estimate editor is open or a line item is changed Then offending items are highlighted inline with a severity-colored badge and message within 300 ms And a single "Apply Fix" control is shown per conflict group When the user clicks "Apply Fix" Then the recommended change is applied atomically, an undo entry is created, the conflict is removed, and highlights disappear within 300 ms And an audit event "gapguard.conflict.fixed" is recorded with conflict id, action, and user id
Blocking vs advisory behavior on contradictory additions
Given detector mode is set to "Blocking" When the user attempts to add a line item that creates an Error-level conflict Then the addition is rejected, a toast explains the contradiction and references the offending item, and no estimate totals change Given detector mode is set to "Advisory" When the user attempts the same addition Then the line item is added, the conflict is raised with severity "Error" within 300 ms, and the user can resolve it via the provided single "Apply Fix" action
Auto‑Suggest Quantified Adds
"As an estimator, I want the system to propose correctly quantified add-ons so that I can finalize a complete scope faster with fewer omissions."
Description

Generates one-click, prefilled line items for required adds based on rules and measurements, including computed quantities (LF/SF/EA), waste factors, and cost codes. De-duplicates against existing scope, merges with compatible items when appropriate, and presents a preview panel to apply all or selective suggestions. Supports organization-specific item catalogs and carrier program templates.

Acceptance Criteria
Generate Rule-Based Quantified Adds
- Given a completed roof geometry and an active rule set (climate zone + carrier template), when the user opens GapGuard suggestions, then suggestions are generated within 3 seconds for jobs ≤30 planes and within 5 seconds for jobs ≤100 planes. - Given rule expressions reference geometry metrics (eave LF, rake LF, valley LF, step-wall LF, roof area SF, slope, stories), when suggestions are generated, then each item includes computed quantity, unit (LF/SF/EA), applied waste factor, and mapped cost code. - Given packaging increments are configured for an item (e.g., 100 SF rolls, 10 EA), when quantities are calculated, then displayed quantities round up to the nearest increment and the raw pre-rounded quantity is stored for audit. - Given dependency rules (e.g., starter with shingle overlay), when a parent item is suggested, then required dependent items are also suggested with correctly computed quantities.
De-duplicate Suggestions Against Existing Scope
- Given the existing scope contains an item with the same cost code as a suggestion, when suggestions are generated, then the duplicate is suppressed and marked as Already in scope in the preview. - Given catalog equivalence mapping designates two items as equivalent, when either exists in the scope, then the other is not suggested. - Given an existing scope item partially overlaps a suggested quantity, when suggestions are generated, then the suggested quantity equals max(0, computed − existing) and the suggestion is omitted if the result is 0. - Given a suggestion is suppressed due to de-duplication, when the preview renders, then a Dedupe badge with reason code is shown for that rule outcome.
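The partial-overlap rule above is a clamped subtraction. A sketch, with the None return standing in for a suppressed suggestion (an assumption):

```python
def deduped_quantity(computed: float, existing: float) -> float | None:
    """Suggested quantity = max(0, computed - existing); None suppresses it."""
    remaining = max(0.0, computed - existing)
    return remaining if remaining > 0 else None

assert deduped_quantity(120.0, 50.0) == 70.0   # partial overlap: suggest the gap
assert deduped_quantity(120.0, 130.0) is None  # fully covered: omit suggestion
```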
Merge Compatible Items on Apply
- Given a suggested item is compatible with an existing line per rule mapping, when the user applies the suggestion, then the existing line is retained and its quantity increases by the suggested amount; no new line is created. - Given compatible items use different but convertible units, when merging, then the system converts units using configured factors before quantity update. - Given a merge occurs, when the scope updates, then the existing line’s cost code, pricing, notes, and attachments are preserved and only quantity and extended price change. - Given up to 50 merges in a single apply operation, when the user clicks Apply Selected, then merges complete within 2 seconds and the price delta reflects only the net change.
Preview Panel with Selective Apply and Price Delta
- Given suggestions are available, when the preview panel opens, then each row displays item name, cost code, unit, quantity (with waste and rounding), rule source ID, and any conflict/dedupe flags, plus a per-item and total price delta. - Given selection controls, when the user toggles Select All/None or individual rows, then the Apply button enables only when ≥1 row is selected and shows the selected count. - Given the user clicks Apply Selected, when processing completes, then applied lines appear in the scope with updated totals and the panel reflects which items were applied or remained unapplied due to conflicts. - Given network latency to the API of <300 ms at p95, when opening the preview, then first contentful render occurs within 1.5 seconds for up to 200 suggested rows.
Organization Catalog and Carrier Template Mapping
- Given an organization-specific item catalog is active, when suggestions are generated, then each suggestion resolves to the org’s mapped cost code, description, and unit for that rule. - Given a carrier program template is selected on the job, when suggestions are generated, then template-specific overrides (item mapping/pricing) are applied in preference to org defaults. - Given a rule has no mapping in the active catalog/template, when the preview renders, then the suggestion is disabled with a Missing Mapping flag and cannot be applied until mapping exists. - Given the mapping is added by an admin during the session, when the user refreshes suggestions, then the previously missing suggestion resolves and becomes applicable with correct pricing.
Waste, Unit Conversion, and Rounding Rules
- Given a rule defines a waste factor, when computing quantity, then waste is applied after base quantity calculation and before rounding to the item’s increment. - Given unit conversions are needed (in→ft, ft→LF, SF↔Squares), when quantities are computed, then conversions use standard factors (12 in = 1 ft; 1 Square = 100 SF) and results match within 0.5% of independent calculations across test fixtures. - Given per-item rounding precision is configured (e.g., 1 LF, 0.1 SQ, 1 EA), when quantities are displayed, then they are rounded up to the configured precision consistently. - Given the job’s waste profile preset is changed, when the user refreshes suggestions, then affected quantities recompute using the new waste values.
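These rules imply a fixed quantity pipeline: waste is applied after the base quantity and before rounding up to the configured precision, using the standard conversion factors cited. A sketch with illustrative example values:

```python
import math

IN_PER_FT = 12.0        # 12 in = 1 ft
SF_PER_SQUARE = 100.0   # 1 Square = 100 SF

def quantity(base: float, waste_pct: float, precision: float) -> float:
    with_waste = base * (1 + waste_pct / 100.0)           # waste before rounding
    return math.ceil(with_waste / precision) * precision  # always round up

assert quantity(104.2, 10.0, 1.0) == 115    # 114.62 LF rounds up to 115 LF
assert 2340 / SF_PER_SQUARE == 23.4         # SF -> Squares
```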
Conflict Detection for Redundant/Incompatible Methods and Overrides
- Given two suggestions are mutually exclusive per rules (e.g., Full Ice & Water Underlayment vs Eave+Valley Ice & Water), when suggestions are generated, then only the higher-priority suggestion is auto-selected and the alternative displays an Incompatible flag with reason. - Given a suggestion conflicts with an existing scope item, when the user attempts to apply both, then the system blocks application and prompts the user to resolve by choosing one. - Given geometry indicates an add is not applicable (e.g., no step-wall LF), when suggestions are generated, then the non-applicable item is not suggested. - Given org policy allows override, when the user chooses Override and Apply on an incompatible suggestion, then the system applies the selected item, records the override with reason code, and leaves the alternative unapplied.
Explainable Flags & Audit Trail
"As a franchise owner, I want transparent explanations for each suggestion so that my team and carriers can trust and verify why decisions were made."
Description

Attaches human-readable rationales to each suggestion/flag, citing the rule ID, data inputs (e.g., pitch 8/12, wall intersection 42 LF, Climate Zone 5), and source references (carrier guideline, code). Persists all rule evaluations, user actions, and versions in a job-level audit log. Exposes explanations in the UI and optionally includes a compliance appendix in exported PDFs for dispute reduction.

Acceptance Criteria
Flag Explanation Visible in UI
Given a job with at least one GapGuard flag When the user clicks or hovers the flag/exclamation icon in the scope or geometry panel Then a panel or tooltip opens within 300 ms showing a human-readable explanation And the explanation includes rule_id, severity, suggestion text, evaluated_on (ISO 8601), data_inputs (names and values), and source_references (name and citation/URL) And the user can copy the full explanation as plain text via a "Copy" control And explanations longer than 1,000 characters are truncated with an ellipsis and can be expanded to full text
Explanation Content Accuracy and Traceability
Given a deterministic test job with climate_zone=5, pitch="8/12", and wall_intersection_lf=42 that triggers rule "IWR-05" When GapGuard generates a suggestion requiring ice-and-water coverage per rule "IWR-05" Then the explanation lists rule_id "IWR-05" And data_inputs include climate_zone=5, pitch="8/12", wall_intersection_lf=42 (with units where applicable) And source_references include at least one carrier/code citation (e.g., guideline ID or code section) And all numeric values in the explanation match the engine evaluation results within ±0.1 of the stored values And the explanation includes a permalink to the rule definition or knowledge source
Audit Log Captures Rule Evaluations and User Actions
Given any job evaluated by GapGuard When rules are evaluated and suggestions are created, accepted, overridden, or ignored, and when exports are generated Then an immutable audit event is appended for each action with fields: event_id, job_id, actor (user_id or system), event_type, rule_id (if applicable), prior_value, new_value, user_comment (optional), timestamp (ISO 8601), and ruleset_version And audit events are ordered by timestamp and retrievable via UI and API with filters for event_type, actor, rule_id, and date range And the audit log implements tamper-evidence (e.g., hash chain) detectable on read And audit events are retained for at least 24 months
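One common way to meet the tamper-evidence requirement is a hash chain, where each event's hash covers its body plus the previous event's hash, so any edit to history breaks verification on read. A minimal sketch; storage and field details are assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only log; each event's hash covers its body and the prior hash."""

    def __init__(self) -> None:
        self.events: list[dict] = []

    def append(self, event: dict) -> None:
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        event = {**event,
                 "timestamp": datetime.now(timezone.utc).isoformat(),
                 "prev_hash": prev_hash}
        payload = json.dumps(event, sort_keys=True).encode()
        event["hash"] = hashlib.sha256(payload).hexdigest()
        self.events.append(event)          # prior records are never altered

    def verify(self) -> bool:
        # Recompute every hash on read; any tampering breaks the chain.
        for i, ev in enumerate(self.events):
            body = {k: v for k, v in ev.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            expected_prev = self.events[i - 1]["hash"] if i else "0" * 64
            if digest != ev["hash"] or ev["prev_hash"] != expected_prev:
                return False
        return True
```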
Ruleset Versioning and Reproducibility
Given a job initially evaluated with ruleset_version=v1 and later re-evaluated after a rules update to ruleset_version=v2 When the job is re-evaluated Then the audit log records both v1 and v2 evaluation results with their respective ruleset_version identifiers And the UI shows which ruleset_version produced each flag/suggestion And re-running the evaluation with v1 on the same inputs reproduces the original v1 outputs byte-for-byte And users can lock a job to a specific ruleset_version at export time
Compliance Appendix in PDF Export
Given a job with at least one GapGuard flag or suggestion explanation When the user exports a PDF with "Include Compliance Appendix" toggled on Then the PDF contains an appendix listing each applicable rule flag/suggestion with rule_id, title, rationale text, data_inputs, source citations (URL or code), evaluated_on timestamp, and user disposition (accepted/overridden/ignored) And if there are no flags or suggestions, the appendix states "No GapGuard flags or suggestions" And hyperlinks in citations are clickable And a job with up to 100 explanations exports in under 20 seconds on standard infrastructure
API Access to Explanations and Audit Trail
Given an authenticated user with read access to a job When they request GET /api/jobs/{job_id}/gapguard/audit with optional filters (date range, event_type, rule_id) and pagination (page_size<=100) Then the API returns 200 with JSON that includes explanations and audit events matching the filters, along with pagination metadata And 401 is returned for unauthenticated callers and 404 for non-existent or unauthorized job_id And personally identifiable information is redacted per policy in API responses and exports And for a job with 10,000 events the API responds within 500 ms at p95 under normal load
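A sketch of a client call against this endpoint. The endpoint path, filters, and page_size limit come from the criteria; the base URL, bearer-token auth, and response field names beyond pagination are assumptions.

```python
import requests  # illustrative HTTP client

resp = requests.get(
    "https://api.example.com/api/jobs/JOB-123/gapguard/audit",  # hypothetical host
    params={"event_type": "override", "rule_id": "IWR-05", "page_size": 100},
    headers={"Authorization": "Bearer <token>"},
    timeout=5,
)
if resp.status_code == 200:
    body = resp.json()
    for event in body.get("events", []):          # response shape assumed
        print(event.get("event_id"), event.get("event_type"))
    print("pagination:", body.get("pagination"))
elif resp.status_code in (401, 404):
    print("unauthenticated, or job not found/unauthorized:", resp.status_code)
```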
Override Workflow & Permissions
"As an operations lead, I want controlled overrides with justification so that we remain compliant while handling edge cases responsibly."
Description

Provides role-based controls to accept, dismiss, or override suggestions and to waive blocking conflicts with a required justification note. Supports carrier-locked rules where overrides are disabled, and organization settings to toggle blocking vs advisory behavior. Records user, timestamp, reason, and impact in the audit log and surfaces override summaries in the final estimate.

Acceptance Criteria
Role-Based Accept or Dismiss Suggestion
Given a user with the Estimator or Admin role is viewing GapGuard suggestions for a project When the user clicks Accept on a suggestion Then the suggestion is applied to the estimate, the suggestion status becomes "Accepted", and an audit entry is created with user ID and timestamp Given a user with the Estimator or Admin role is viewing GapGuard suggestions for a project When the user clicks Dismiss on a suggestion Then the suggestion status becomes "Dismissed" and an audit entry is created with user ID and timestamp Given a user without Accept/Dismiss permission attempts to perform these actions via UI or API When the action is initiated Then the controls are hidden or disabled in the UI and any API attempt returns HTTP 403 with error code GG_PERMISSION_DENIED and no estimate changes occur
Waive Blocking Conflict With Required Justification
Given a blocking conflict is present and the user has Waive permission When the user selects Waive Conflict Then a modal appears requiring a justification note with a minimum of 15 characters Given the modal is displayed When the user submits a valid justification Then the conflict status becomes "Waived", the estimate recalculates, and the audit log records user, timestamp, justification text, affected rules/line items, and estimate total delta Given the modal is displayed When the user submits with no justification or fewer than 15 characters Then the waiver is rejected and an inline validation message is shown without changing conflict status
Carrier-Locked Rules Cannot Be Overridden
Given a rule is marked as carrier-locked When any user views that rule in GapGuard Then Override/Waive controls are disabled and a tooltip explains "Override disabled by carrier" Given a rule is marked as carrier-locked When any user attempts to override or waive it via API Then the request is rejected with HTTP 423 and error code GG_RULE_LOCKED and no audit change entry is created (an access-denied attempt entry may be logged separately) Given a rule is carrier-locked When generating the estimate Then the rule’s required adds are enforced if applicable and cannot be removed by user action
Organization Setting: Toggle Blocking vs Advisory
Given an Org Admin opens GapGuard settings When the Admin toggles a rule set from Blocking to Advisory and saves Then future detections in the current project after re-scan become warnings (non-blocking), Waive is not required to finalize, and the change is audit logged with user and timestamp Given an Org Admin opens GapGuard settings When the Admin toggles a rule set from Advisory to Blocking and saves Then new conflicts block estimate finalization until waived with justification, and the change is audit logged Given a non-admin user views settings When they open the GapGuard settings page Then the Blocking/Advisory controls are read-only and show the current org-wide value
Audit Log Completeness for Override Actions
Given any Accept, Dismiss, or Waive action succeeds When the action completes Then an audit entry is created containing: action type, rule ID/name, project ID, user ID, user role, ISO 8601 UTC timestamp, justification (if provided), previous value, new value, estimate total delta, and source IP Given a system or network error occurs after an action is initiated When the operation cannot be fully committed Then neither the estimate change nor the audit entry persists (atomicity), and the user sees an error message with correlation ID Given an auditor filters the log When filtering by action type and date range in UI or API Then results are paginated and accurately reflect the filtered criteria
Override Summary Included in Final Estimate
Given at least one Accept, Dismiss, or Waive occurred on the project When the user generates the final PDF estimate Then a "GapGuard Overrides" section is included listing each item’s rule name, action type, justification (first 200 chars), user display name, date/time, and line-item/total impact; totals match the estimate values Given no overrides or waivers occurred When the user generates the final PDF estimate Then the "GapGuard Overrides" section is omitted or displays "No overrides" as per org template setting Given the estimate is shared via public link When a recipient views/downloads it Then the same override summary content is present and identical to the PDF
Concurrent Updates and Revert of Waiver
Given two authorized users have the same project open When User A waives a blocking conflict with justification Then User B sees the change within 5 seconds or on next interaction, including who waived and the justification Given a waived conflict exists and the org allows reversion When User B reverts the waiver Then the conflict returns to active status, the estimate recalculates, and a new audit entry links to the original waiver Given a user attempts to finalize the estimate with stale conflict state When their client state is older than the server state Then finalization is blocked with HTTP 409 or UI conflict message, prompting refresh

SmartAssemblies

One‑click assemblies expand into fully quantified line items—starter, drip edge, underlayment, ridge vent, pipe jacks—calculated from RoofLens measurements and local rules. Conditional components toggle based on materials and slopes, with prewritten notes for consistency and speed across teams.

Requirements

Assembly Template Builder
"As an estimator, I want to create and reuse assembly templates that auto-quantify components from measurements so that I can produce consistent, accurate estimates in one click."
Description

Provide a configurable template builder to define roofing system assemblies composed of components (e.g., starter, drip edge, underlayment, ridge vent, pipe jacks) with quantity formulas mapped to RoofLens measurements (area, eave/rake/ridge/valley lengths, facets, penetrations). Support waste factors, rounding and packaging rules (ceil to bundle/box), min/max quantities, dependencies, and material system presets (asphalt, metal, tile). Ensure templates can be saved, cloned, imported/exported, and organized by brand, material, and region for rapid reuse and consistency across estimates.

Acceptance Criteria
Create and Save Assembly Template with Mapped Measurement Formulas
Given I am in the Template Builder creating a new assembly template When I add components Starter, Drip Edge, Underlayment, Ridge Vent, and Pipe Jacks And I map each component’s quantity formula using RoofLens tokens area_sqft, eave_length_ft, rake_length_ft, ridge_length_ft, valley_length_ft, facet_count, and penetration_count And the formula validator reports no syntax or unknown-token errors And I enter required metadata: template name, material system, brand, and region Then I can save the template successfully And the template appears in the template list with the correct metadata And reopening the template shows all components and formulas exactly as configured
Apply Waste Factors at Component and Template Levels
Given a template has a global waste factor of 10% and Underlayment has a component-level waste factor of 5% And a sample measurement set is applied with area_sqft=1000 and other tokens populated When quantities are computed Then each component’s pre-pack quantity equals base_formula_result * (1 + waste%) And components without a component-level waste use the 10% global waste And Underlayment uses its 5% component-level waste instead of the global waste And waste factors must be between 0% and 100% inclusive; values outside this range block save with a validation error
Enforce Rounding and Packaging (Ceil to Bundle/Box)
Given a component has package_size=33.3 (sqft per bundle) and rounding mode set to "ceil to package" And the component’s pre-pack quantity computes to 98 sqft When packaging is applied Then the order quantity equals 3 packages (ceil(98 / 33.3) = 3) And the covered quantity reflects 3 * 33.3 = 99.9 sqft And if package_size is missing or <= 0 while "ceil to package" is selected, saving is blocked with a validation error And switching rounding mode to "none" yields an order quantity of 98 (no packaging applied)
Apply Min/Max Quantity Constraints with Defined Evaluation Order
Given a component has a base formula, a waste factor, min_quantity=2, max_quantity=10, and package_size=5 with rounding mode "ceil to package" And the evaluation order is defined as: base -> waste -> min/max clamp -> packaging When the base formula yields 6.2 units and a 10% waste is applied Then the pre-clamp quantity is 6.82 units And the clamped quantity remains 6.82 (between min 2 and max 10) And packaging rounds to 10 units (ceil(6.82 / 5) * 5) And if the clamped quantity were 1.5, the min rule would raise it to 2 before packaging And if the clamped quantity were 12, the max rule would cap it at 10 before packaging
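The stated evaluation order maps directly to a four-step pipeline; this sketch reproduces the worked example from the criteria above.

```python
import math

def component_quantity(base: float, waste_pct: float,
                       min_qty: float, max_qty: float,
                       package_size: float) -> float:
    q = base * (1 + waste_pct / 100.0)                 # 1) base  2) waste
    q = min(max(q, min_qty), max_qty)                  # 3) min/max clamp
    return math.ceil(q / package_size) * package_size  # 4) ceil to package

assert component_quantity(6.2, 10.0, 2, 10, 5) == 10  # 6.82 -> clamp -> 10
assert component_quantity(1.0, 0.0, 2, 10, 5) == 5    # min raises 1.0 to 2 -> 5
```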
Configure Component Dependencies and Conditional Toggles
Given Ridge Vent is configured with a dependency expression: ridge_length_ft > 0 AND material_system = "asphalt" And Pipe Jacks are configured with a dependency expression: penetration_count > 0 When a measurement set has ridge_length_ft=0 and material_system="asphalt" Then Ridge Vent is automatically excluded from the assembly When ridge_length_ft is changed to 120 Then Ridge Vent is included And when material_system is changed to "metal" Then Ridge Vent is excluded again And if a circular dependency between components is created, the builder blocks save with a clear validation error
Use Material System Presets with Brand/Region Organization
Given presets exist for Asphalt, Metal, and Tile that preload default components and rules When I create a new template and select the Asphalt preset with Brand="Brand X" and Region="Midwest" Then the builder pre-populates components and default formulas for Asphalt And I can modify components and rules and save the template And the saved template is tagged with Material=Asphalt, Brand=Brand X, Region=Midwest And filtering the template list by Material=Asphalt AND Brand=Brand X AND Region=Midwest returns this template And filtering by other combinations does not return this template
Clone and Import/Export Templates with Integrity Validation
Given a template exists with components, formulas, waste, min/max, packaging, dependencies, and metadata When I clone the template Then a new template is created with a unique ID and name auto-suffixed with "Copy" And all components and rules are identical to the source template When I export the template Then a JSON file is produced containing schema_version, metadata, components, formulas, waste, min/max, packaging, dependencies, and tags (brand/material/region) When I import that JSON file Then a new template is created that matches the exported configuration exactly, except for regenerated unique IDs and timestamps And if schema_version is unsupported or required fields are missing, the import fails with a descriptive error and no template is created
Conditional Logic Engine
"As an estimator, I want assemblies to automatically toggle components based on slope, material, and code rules so that I don’t miss required items or add unnecessary ones."
Description

Implement a rules engine that automatically includes, excludes, or swaps components within an assembly based on roof attributes (slope thresholds, story count, facet exposure), selected materials, climate zone, and local code requirements. Support IF/THEN logic, numerical thresholds, per-facet vs whole-roof evaluation, and default fallbacks. Expose user-visible toggles for ambiguous cases (e.g., ridge vent vs. box vents) with sensible defaults. Validate rules at design time and at runtime to prevent conflicts or missing mandatory components.

Acceptance Criteria
Per-Facet vs Whole-Roof Rule Scoping
Given a roof with three facets where A=3:12, B=6:12, C=6:12 and an assembly containing a per-facet rule "if slope < 4:12 then add Double-Layer Underlayment" and a whole-roof rule "add Drip Edge to all eaves" When the Conditional Logic Engine evaluates the assembly Then Double-Layer Underlayment is added only to facet A and not to facets B or C And the quantity for Double-Layer Underlayment is calculated from facet A’s area only And Drip Edge is added to all eaves across facets A, B, and C And the rollup shows per-facet quantities and a roof-level total for Drip Edge
Slope Threshold Underlayment Swap (Boundary Tested)
Given a rule set: if slope < 4:12 then Underlayment = Double-Layer; else Underlayment = Single-Layer And facet X has slope 4:12 And facet Y has slope 3.99:12 When the rules are evaluated Then facet X uses Single-Layer and does not receive Double-Layer And facet Y uses Double-Layer And operator semantics follow the rule definition (>=, >, <=, <) with correct boundary behavior
Ridge Vent vs Box Vents Toggle with Defaults
Given ridge length ≥ 12 ft, material = Asphalt Shingle, attic = Vented, and local code allows ridge vents When the assembly expands Then the visible "Vent Type" toggle defaults to "Ridge Vent" And the engine adds Ridge Vent with quantity computed from measured ridge length And if the user switches the toggle to "Box Vents" then Ridge Vent is removed and Box Vents are added with quantity = ceil(required exhaust NFA / per-vent NFA) And switching the toggle back to "Ridge Vent" reverses the components and recalculates quantities And if ridge length < 12 ft, the default is "Box Vents" while the toggle remains visible And the selected toggle state persists with the estimate and re-applies on regenerate
Climate Zone and Local Code Enforcement for Mandatory Components
Given climate zone CZ-5 or colder or jurisdictional code requiring eave ice protection When the assembly expands for Asphalt Shingle Then "Ice & Water Shield - Eaves" is included with coverage ≥ 24 inches inside the exterior wall line and quantity computed from measured eave length And the item is marked mandatory/locked and cannot be removed by the user And if code requires Drip Edge on eaves and rakes, both components are included and locked And attempts to delete or bypass mandatory components are blocked with an error indicating the governing rule/code
Material-Driven Component Inclusion/Exclusion
Given selected roof material = Metal Standing Seam When the rules evaluate the assembly Then components not applicable to metal (e.g., Starter Shingle, Ridge Cap Shingle, Rubber Pipe Jack) are excluded And metal-appropriate components (e.g., Ridge Cap - Metal, Underlayment - High-Temp, Metal Pipe Boot) are included And for facets with slope < 3:12 a Self-Adhered Membrane is added per facet in addition to High-Temp underlayment And quantities derive from RoofLens measurements (ridge length, boot count, area) with correct unit types
Design-Time Rule Validation and Conflict Detection
Given an author configures two rules that both target the same exclusive slot ("Vent Type") with overlapping conditions and no priority When saving or publishing the rule set Then the system blocks publication and lists conflicts including rule IDs, slot name, and component names And unreachable rules and circular dependencies are identified with warnings/errors And any slot lacking a matching rule and lacking a default raises an error "No applicable rule or default for slot" And all validation errors indicate the exact rule line/reference to fix
Runtime Validation and Default Fallbacks on Missing Data
Given expansion occurs with missing attributes (e.g., climate zone = null) and configured organizational defaults exist When the rules evaluate Then the engine applies the configured default values and logs which defaults were used And if a mandatory decision cannot be made and no default exists, expansion is blocked with an error listing the missing inputs and affected components And if conflicting components would be added simultaneously, rule priority resolves the conflict; if unresolved, expansion is blocked with a clear error And no estimate is produced with missing mandatory components
Local Code and Supplier Pack Rules
"As a contractor, I want assemblies to reflect my local code requirements and supplier SKUs so that my estimates are compliant and match what I can actually procure."
Description

Integrate a location-aware rule library keyed by ZIP/postcode and jurisdiction to enforce code-driven requirements (e.g., ice-and-water shield coverage from eaves, underlayment type by slope, drip edge mandatory on eaves/rakes). Map components to supplier-specific SKUs, packaging sizes, coverage rates, and availability. Allow selection of a supplier profile per job, with pricing and packaging applied to assembly calculations. Provide graceful fallbacks when a rule or SKU is unavailable and surface warnings for user review.

Acceptance Criteria
ZIP-Based Jurisdiction Rule Enforcement
Given a job address whose ZIP/postcode maps to Jurisdiction J with active roofing code rules R And RoofLens measurements include eave length L_e, rake length L_r, ridge length L_rg, and plane areas A_i When SmartAssemblies generates line items for the selected assembly Then each applicable rule in R is evaluated and applied to the calculation And a Rules Applied list is shown containing the rule IDs and versions matched for the job And mandatory components (such as drip edge on eaves/rakes and ice-and-water shield coverage from eaves) are included per R And computed quantities for those components map to the corresponding RoofLens measurements (eaves -> L_e, rakes -> L_r, ridges -> L_rg, areas -> sum(A_i)) with ≤0.5% variance
Supplier Profile Pricing and Packaging Application
Given a job has Supplier Profile S with SKU mapping table M defining SKU, coverage per unit C_u, package multiple P_u, and unit price U for each component type T And SmartAssemblies requires a raw quantity Q for component T When the assembly is calculated Then the system selects the SKU from M for T And computes units N = ceil(Q / C_u) And rounds units up to the next whole package multiple: N_p = ceil(N / P_u) * P_u And sets the line item's SKU to the selected SKU, quantity units to N_p, and extended price to N_p * U
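The packaging math above in code form, with illustrative example values:

```python
import math

def sku_order(q_raw: float, coverage_per_unit: float,
              package_multiple: int, unit_price: float) -> tuple[int, float]:
    n = math.ceil(q_raw / coverage_per_unit)                  # N = ceil(Q / C_u)
    n_p = math.ceil(n / package_multiple) * package_multiple  # next package multiple
    return n_p, round(n_p * unit_price, 2)                    # units, extended price

# e.g., 2,450 SF needed, 98.5 SF per roll, sold in 3-roll packs at $87.40/roll:
units, extended = sku_order(2450, 98.5, 3, 87.40)
assert units == 27 and extended == 2359.80
```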
Slope- and Material-Conditional Components
Given roof planes have slopes S_i and the assembly material is M And Jurisdiction J's rule set defines component inclusion and underlayment type by slope and material When SmartAssemblies expands the assembly Then for each plane i, the underlayment component type and layer count are selected per the slope/material rule table And ridge ventilation components are included only where the rule conditions are met And pipe jack counts are derived from detected penetrations filtered by rule thresholds and mapped to the required flashing type per M
Graceful Fallbacks and Warnings for Missing Rules or SKUs
Given a job where a required rule or SKU mapping for a component is missing When SmartAssemblies computes the assembly Then the system falls back in this order for rules: jurisdiction parent rule (if any) > system default rule And for SKU mapping: supplier generic category SKU (if configured) > system default generic SKU And any fallback applied sets a Warning flag on the affected line items And a consolidated warning panel enumerates each missing rule/SKU by component with counts and suggested actions
Supplier Availability and Substitution
Given Supplier Profile S marks a mapped SKU as unavailable for the job's fulfillment location and provides an ordered list of substitute SKUs in the same category When SmartAssemblies calculates the assembly Then the system auto-selects the first available substitute SKU and applies its coverage, packaging, and pricing rules And if no substitute is available, the SKU fallback policy is used and a Warning is shown And the substitution is recorded in the Rules Applied/Source panel for user review
Supplier Profile Switch Recalculates Packaging and Pricing
Given an assembly has been calculated with Supplier Profile S1 And the user switches the job to Supplier Profile S2 When the assembly is recalculated Then all SKUs are remapped to S2's mapping table And package counts, coverage conversions, and extended prices are recomputed using S2's data And any previously shown warnings are re-evaluated and updated to reflect S2's availability and rules
Jurisdiction Resolution and Precedence
Given the job address intersects multiple jurisdictions with overlapping rules And the system precedence is City > County > State When resolving the effective rule set Then the final rules applied equal the union of all applicable rules with higher-precedence rules overriding lower-precedence ones by rule key And the Rules Source report shows, for each applied rule, the jurisdiction source and any overridden rule
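A sketch of precedence resolution: apply rule sets from lowest to highest precedence so that higher levels override lower ones by rule key, recording the source jurisdiction and any overridden rule for the Rules Source report. Data shapes are assumptions.

```python
PRECEDENCE = ["state", "county", "city"]  # applied low -> high, so City wins

def effective_rules(rule_sets: dict[str, dict[str, dict]]) -> dict[str, dict]:
    merged: dict[str, dict] = {}
    for level in PRECEDENCE:
        for key, rule in rule_sets.get(level, {}).items():
            prior = merged.get(key)
            merged[key] = {**rule, "source": level,
                           "overrides": prior["source"] if prior else None}
    return merged

rules = effective_rules({
    "state":  {"ice_water": {"coverage_in": 24}},
    "county": {"drip_edge": {"required": True}},
    "city":   {"ice_water": {"coverage_in": 36}},
})
assert rules["ice_water"]["coverage_in"] == 36      # city overrides state
assert rules["ice_water"]["overrides"] == "state"   # surfaced in the report
assert rules["drip_edge"]["source"] == "county"
```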
One‑Click Expand and Quantify
"As an adjuster, I want to click once to produce a complete quantified estimate so that I can deliver fast, consistent bids."
Description

Enable a single action to expand a selected assembly into fully quantified line items with units, pricing, and prewritten notes, computed from RoofLens measurements and applied rules. Handle multi-facet roofs, multiple slopes, hips/ridges/valleys, penetrations, and waste/rounding logic. Insert the generated items into the estimate and render a ready-to-send PDF scope. Provide a real-time preview and complete the operation within 2 seconds for typical jobs, with progress feedback and graceful error handling for edge cases.

Acceptance Criteria
One-Click Expansion Produces Quantified Line Items
Given a user selects a SmartAssembly and triggers One‑Click Expand And the assembly defines components with units, waste, rounding, pricing source, and notes And a RoofLens measurement set is available for the property When the expansion runs Then the system creates line items for each included component And computes each quantity strictly from RoofLens measurements and assembly rules And applies component-level waste after base quantity and before rounding And applies the configured rounding mode and unit step per component And attaches unit, unit price, extended price, and prewritten note to each item And excludes components whose conditions evaluate to false based on material/slope/locale rules And all computed quantities and prices equal rule-calculated values exactly
Complex Roof Quantification and Waste/Rounding Logic
Given a measurement set with multiple facets, multiple slopes, hips, ridges, valleys, and penetrations When One‑Click Expand computes quantities Then linear components for ridges equal the sum of ridge lengths; hips equal the sum of hip lengths; valleys equal the sum of valley lengths And penetration-counted components equal the count of matching penetrations filtered by the component's selection rules And area-based materials use facet areas grouped by slope/material per assembly configuration And waste percentage is applied after base quantity and before rounding for each applicable component And the configured rounding mode (up/down/nearest) and unit step are applied per component And each line item exposes calculation details (raw, waste, rounded, final) for audit
Real-Time Preview Accuracy
Given the user opens the real-time preview for a selected assembly When the user changes any input affecting rules (e.g., material choice, waste factor) or measurements are updated Then the preview refreshes within 200 ms of the change And the preview lists the exact line items, quantities, units, unit prices, line totals, and notes that will be inserted And after insertion, the final estimate items match the preview 100% for item identity, quantity, unit, unit price, line total, note, and grouping And conditionally excluded components are removed from the preview within 200 ms when their conditions evaluate to false
Performance, Progress Feedback, and Graceful Error Handling
Given a typical job (total roof area ≤ 8,000 sq ft, ≤ 50 facets, ≤ 30 penetrations) When the user clicks One‑Click Expand Then compute, insert, and PDF render complete within 2,000 ms at p95 and 3,000 ms at p99 And if processing exceeds 300 ms, a progress indicator appears within 100 ms and updates at least every 500 ms And the UI remains responsive and allows the user to cancel before insertion commits And on recoverable errors (missing price list, missing measurements, rule evaluation failure), the system shows a clear message with resolution steps and a correlation ID, preserves estimate state, and enables retry And on cancellation or failure, no partial line items are committed to the estimate
Estimate Insertion, Grouping, and Idempotency
Given an assembly is expanded into an existing estimate When insertion occurs Then all generated items are grouped under the assembly name with the configured sort order And subtotals, taxes, and grand totals update correctly for the estimate And re-expanding the same assembly in the same estimate updates existing assembly items in-place without creating duplicates And an Undo action reverts the insertion (or update) as a single atomic step And rapid repeated clicks or concurrent expansion by the same user produce a single, de-duplicated result And the operation is recorded with timestamp, user, assembly identifier/version in the activity log
Pricing, Taxes, Currency, and Local Rule Application
Given an active price list, currency, locale (imperial/metric), and estimate date are set When One‑Click Expand runs Then unit prices are sourced from the active price list effective on the estimate date; if an item is missing a price, expansion halts with a clear missing-price message and no partial commit And taxes, markups, and labor/material allocations are applied per estimate settings And currency formatting and unit conversions reflect the locale, yielding correct units and monetary values And local code or climate-zone-driven components toggle strictly per rule evaluation And prewritten notes insert verbatim with placeholders resolved from estimate and property context
Ready-to-Send PDF Scope Generation
Given items have been inserted from an assembly expansion When the user requests the PDF scope Then the PDF includes company branding, client/claim details, assembly grouping, line items with quantities, units, unit prices, line totals, notes, subtotals, taxes, and grand total And all numeric values in the PDF equal the estimate data within the smallest currency unit And tables paginate correctly on Letter/A4 with repeated headers and no text truncation or overflow And PDF generation for a typical job completes within 1,000 ms at p95 And the generated PDF is attached to the estimate record and available for immediate download and email
Prewritten Notes and Spec Text Automation
"As a project manager, I want standardized notes auto-filled based on assemblies so that all estimates communicate scope clearly and consistently."
Description

Attach standardized, organization-approved notes to components and assemblies using tokenized templates (e.g., {manufacturer}, {slope}, {coverage_area}) that resolve from selections and measurements. Auto-insert notes into line items and final PDFs to ensure clear scope descriptions and reduce disputes. Support multi-language variants, formatting controls, and per-customer or per-carrier note sets to meet documentation standards.

Acceptance Criteria
Token Resolution for Assembly and Component Notes
Given an assembly with selected manufacturer "GAF", slope 6:12, and coverage_area 2,345 sq ft When a template containing {manufacturer}, {slope}, and {coverage_area} is applied Then the rendered note contains "GAF", "6:12", and "2,345 sq ft" exactly once each and no unreplaced tokens remain And numeric values respect the template’s rounding rule (0 decimals) and unit formatting with thousands separators And if any token lacks a source value and no default is defined, a validation error lists the missing tokens and the note is not inserted And if a default is defined for a missing token, the default value is inserted and flagged as a default in the render log
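A sketch of token resolution with defaults and missing-token validation; the regex-based approach and the error-handling details are illustrative assumptions.

```python
import re

TOKEN = re.compile(r"\{(\w+)\}")

def render_note(template: str, values: dict[str, str],
                defaults: dict[str, str]) -> str:
    missing: list[str] = []

    def sub(match: re.Match) -> str:
        name = match.group(1)
        if name in values:
            return values[name]
        if name in defaults:
            return defaults[name]   # would be flagged as a default in the render log
        missing.append(name)
        return match.group(0)

    rendered = TOKEN.sub(sub, template)
    if missing:
        # Validation error: the note is not inserted.
        raise ValueError(f"unresolved tokens: {missing}")
    return rendered

note = render_note(
    "Install {manufacturer} underlayment on {slope} slopes covering {coverage_area}.",
    {"manufacturer": "GAF", "slope": "6:12", "coverage_area": "2,345 sq ft"}, {})
assert note == "Install GAF underlayment on 6:12 slopes covering 2,345 sq ft."
```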
Auto-Insertion into Line Items and Final PDF
Given organization-approved notes attached at assembly and component levels When the SmartAssembly is expanded into line items Then the appropriate notes are auto-attached to each generated line item without duplication and in the configured order And component-level notes override assembly-level notes for the same subject where specified And when the estimate is exported to PDF, the notes appear under each line item’s notes section and match the on-screen preview byte-for-byte And if a component or line item is removed prior to export, any notes tied to it are excluded from the PDF and any downstream exports
Multi-Language Note Variant Selection
Given the project language is set to Spanish (es) When rendering notes that have EN and ES variants Then the ES variant is used for all rendered notes And where an ES variant is unavailable, the EN default is used and the fallback event is logged with a count of affected notes And numeric/date formatting adheres to the project locale (e.g., 1.234,56 and 04/09/2025 for es-ES) And when a carrier override to English is selected, the English variant supersedes the project language for all notes
Per-Customer/Carrier Note Set Assignment
Given the customer profile is set to "ABC Insurance" mapped to note set "ABC Claims" When SmartAssembly notes are rendered Then only notes from "ABC Claims" are attached where applicable And when the customer is changed to "XYZ Insurance", attached notes update to the "XYZ Claims" set and removed/added notes are summarized for user confirmation before commit And users without the "Template Admin" role cannot detach required notes from a carrier-mandated set
Formatting Controls Preservation in Outputs
Given a note containing bold, italics, bullet lists, numbered lists, and line breaks When viewing in-app and exporting to PDF and CSV Then formatting is preserved in-app and in PDF (no raw markup visible) and CSV contains plain-text equivalents without HTML artifacts And long lines wrap at the defined content width (6.5 in on Letter) without truncation, and page breaks do not split list items or words mid-line And hyperlinks render as clickable in PDF and as plain text in CSV with the URL preserved
Conditional Notes Based on Slope and Material
Given roof slope is >= 8:12 When notes are rendered Then the steep-slope safety note is included; if slope < 8:12 it is excluded And given material is Metal, shingle-specific notes are excluded and metal-specific notes included And when slope, material, or ventilation selections change, attached notes re-evaluate and update within 2 seconds, with the change recorded in the estimate history
Template Versioning and Approval Workflow
Given templates require approval When a template is edited and submitted Then it is unusable until approved by a user with the "Template Approver" role and the approved version is assigned a new semantic version (e.g., v1.2) And when rendering an estimate created with v1 after v2 is released, the estimate retains v1 text unless the user selects "Refresh to v2", which re-renders notes and records the action in the audit log And the audit log records who, what, when, version, and affected estimates for create, update, approve, and apply events
Manual Overrides with Audit Trail
"As an estimator, I want to adjust calculated quantities when field conditions warrant while keeping an audit trail so that I maintain accountability and flexibility."
Description

Allow users to override calculated quantities, waste percentages, component toggles, and SKU selections after expansion while clearly indicating deviations from calculated values. Record user, timestamp, and reason for each override, enable revert-to-calculated, and surface validation warnings when overrides risk code noncompliance. Provide optional PDF indicators for overridden lines to maintain transparency with customers and carriers.

Acceptance Criteria
Override Quantities and Waste on Expanded Assembly
Given a SmartAssembly has been expanded with calculated quantities and waste values And the estimate is in an editable state When the user edits a line item's quantity or waste percentage and clicks Save Then the edited field is marked "Overridden" with a visible badge and highlight And the delta from the calculated value is displayed as +/− amount or percentage adjacent to the field And the estimate subtotal and totals recalculate within 1 second of save And the override persists after page reload and across sessions
Audit Trail Capture for Overrides
Given a signed-in user with edit permissions modifies a quantity, waste percentage, component toggle, or SKU selection When the user clicks Save Then an audit record is created containing userId, userName, UTC timestamp (ISO 8601), assemblyId, lineItemId, field name, previousValue, newValue, and reason And saving is blocked unless a reason between 5 and 250 characters is provided And the audit record is visible in the line item's History panel within 1 second of save And subsequent overrides append new records without overwriting prior entries
Revert to Calculated Value (Single and Bulk)
Given one or more fields are in an Overridden state When the user clicks Revert on a single overridden field Then the value resets to the latest system-calculated value and the Overridden badge is removed And an audit record is created with action "revert", previousValue, and restoredCalculatedValue When the user selects Revert All Overrides at the assembly level and confirms Then all overridden fields within that assembly are reverted and badges removed And estimate totals recalculate within 1 second
Component Toggle Overrides with Dependency Recalculation
Given conditional components are auto-selected based on material and slope rules When the user manually toggles a component Off or On Then dependent quantities and costs are recalculated to reflect the new component state And any line items made obsolete by toggling Off are excluded from pricing and marked Excluded And an audit record captures the toggle action and impacted items And if the toggle violates a local rule, an inline validation warning appears referencing the rule identifier and jurisdiction
SKU Selection Override and Pricing Update
Given a line item has a default SKU selected When the user replaces it with another SKU from the catalog and saves Then unit price, taxes, and coverage factors update based on the new SKU And the line item's required quantity is recalculated if the new SKU has different coverage or packaging And an audit record logs oldSkuId, newSkuId, and reason And if the new SKU is incompatible with selected material/slope or local rules, an inline validation warning is shown with recommended compatible alternatives
Noncompliance Risk Validation and Acknowledgment
Given one or more overrides may risk code noncompliance When the system detects such a risk Then display a warning banner listing each affected line item, rule reference, jurisdiction, and severity (Warning or Error) And allow the user to proceed only after acknowledging each warning via checkbox And all acknowledgments are recorded in the audit trail with userId and UTC timestamp And the system does not auto-correct or block the override beyond requiring acknowledgment
PDF Indicators for Overridden Lines
Given an organization setting "Show override indicators on PDFs" exists and is enabled When the user generates a PDF estimate Then each overridden line item is marked with an asterisk and footnote "Manually adjusted from calculated value" And the PDF includes an Overrides Summary listing line item, field, calculated value, overridden value, user initials, and date When the setting is disabled Then the PDF contains no override indicators or summary And PDF generation duration does not increase by more than 10% compared to the same estimate with the setting disabled
Versioning and Team Publishing
"As an operations lead, I want to manage and publish standard assemblies to my team so that everyone estimates the same way."
Description

Provide versioned assembly templates with draft, review, and publish states. Enable role-based access (admin/estimator/viewer), organization-wide libraries, and personal workspaces. Offer change logs and side-by-side diffs between template versions, with the ability to migrate existing estimates to a newer version and preview impacts before applying to active jobs.

Acceptance Criteria
Draft, Review, Publish Workflow for Template Version
Given an admin creates a new assembly template and saves without submitting for review Then the template is saved in Draft state and is visible only to the creator and admins in their personal workspace Given a Draft template with all required fields populated When the creator submits it for review Then the template state changes to Review and a change-log entry is recorded with actor, timestamp, and summary Given a template in Review When an admin approves it Then the template state changes to Published, becomes visible in the Organization Library, and the Published version is immutable Given a template in Review When an admin requests changes Then the template returns to Draft with reviewer comments recorded in the change log Given a Published template When any user attempts to edit it Then the system creates a new Draft version linked to the Published version; the Published version remains unchanged and in use
Role-Based Access Controls for Template Versioning
Given a user with Admin role When accessing template features Then they can create, edit, approve, publish, archive, and view change logs and diffs for all templates Given a user with Estimator role When accessing template features Then they can create and edit templates in their personal workspace, submit for review to the Organization Library, view and use Published templates, view diffs, and initiate estimate migrations; they cannot publish or approve Given a user with Viewer role When accessing template features Then they can view and use only Published templates and cannot see Draft or Review templates, change logs for unpublished versions, or initiate migrations Given a Viewer attempts to access a Draft or Review template via direct link Then access is denied with HTTP 403 and the attempt is logged in the audit trail
Organization Library vs Personal Workspace
Given an Estimator creates a new template Then it is saved to their personal workspace by default and is not visible in the Organization Library Given a Draft template in a personal workspace When the user submits it for review Then a copy is created in Review state within the Organization Library while the personal Draft remains unchanged Given a Published template in the Organization Library When searched by users Then it appears in results for Admin, Estimator, and Viewer; Draft and Review templates do not appear for Estimators or Viewers Given an Admin attempts to delete a Published template Then deletion is blocked; only archival is allowed and the template remains retrievable for existing estimates Given a user filters templates by scope Then the system returns results scoped to Personal or Organization accordingly within 2 seconds for up to 500 templates
Change Log Creation and Integrity
Given any template event occurs (create, edit, submit for review, approve, request changes, publish, archive, migrate estimates) Then a change-log entry is created with event type, actor, timestamp (UTC ISO 8601), template ID, version, and a machine-generated summary of fields changed Given an Admin or the template owner views the change log Then entries are displayed in reverse chronological order and can be filtered by event type and date range Given a change-log entry exists When any user attempts to edit or delete it Then the action is disallowed; logs are immutable and only new entries can be appended Given a change-log entry references a versioned change When the user opens it Then the system links to the side-by-side diff for the referenced versions
Side-by-Side Diff Between Template Versions
Given two versions of the same template exist When the user opens the side-by-side diff Then added, removed, and modified components, rules, notes, and cost values are highlighted with counts for each change type Given a numeric field changed between versions Then the diff shows old value, new value, and percent change to two decimal places Given the diff view is loaded for a template with up to 200 line items and 50 rules Then the initial render completes within 3 seconds on a standard broadband connection Given the user filters the diff by change type or component category Then only matching differences are displayed and counts update accordingly
Preview Impacts Before Migrating Estimate
Given an active estimate created from template version X When the user opens "Preview migrate to version Y" where Y is a Published version of the same template Then the system shows a preview with a per-line-item change type (New/Removed/Modified/Unchanged), quantity changes, unit-price changes, and net total delta including taxes/fees Given an estimate contains manual overrides on quantities or prices When the migration preview is generated Then overrides are flagged and preserved by default with an option to overwrite; the preview clearly indicates the effect of either choice Given the preview is generated for an estimate with up to 300 line items Then results return within 10 seconds and match recomputed totals within $0.01 of applying the changes Given a migration would remove a component required by local rules for the job's slope/material Then the preview blocks application and displays a blocking validation error referencing the rule
Apply Migration to Active Job with Revisioning
Given an active estimate based on version X and a validated preview to version Y When the user confirms Apply Then the system creates a new estimate revision labeled with version Y, preserves the prior revision for rollback, updates the job with the new line items, and records a change-log entry Given the migration apply process encounters an error mid-operation Then all changes are rolled back atomically and the estimate remains on the prior revision with an error message logged and shown to the user Given the migration is applied Then the resulting totals (subtotal, taxes, fees, grand total) match the preview within $0.01 and all conditional components are toggled as previewed Given a user tries to apply a migration to a Closed job Then the action is blocked with an explanatory message; only Active jobs can be migrated

ExportPreflight

Runs a readiness check before Xactimate export: code completeness, required photo attachments, note compliance, grouping/order, and carrier‑specific formatting. Fix issues in one click, then output a clean ESX/CSV or send via API—ensuring imports load cleanly with minimal edits.

Requirements

Carrier Rule Profiles
"As an adjuster handling multiple carriers, I want to apply a carrier-specific profile so that my export is validated and formatted to that carrier’s rules without manual rework."
Description

Provides configurable profiles for carriers and TPAs that specify required line-item codes, grouping and ordering, tax and waste handling, naming conventions, photo and note requirements, and ESX/CSV export formatting. The preflight engine selects a profile per job (manual selection or auto-detection from claim metadata) and applies it to validate, normalize, and format the estimate accordingly. Includes versioning, effective dates, and regional variations to accommodate changing rules. Integrates with RoofLens’ estimate builder, pricing region settings, and export services to ensure Xactimate imports match carrier expectations with minimal edits and reduced rejections.

Acceptance Criteria
Profile Selection: Auto-Detect with Manual Override
Given a job with claim metadata carrier="Acme Insurance", region="TX", and lossDate "2025-07-15" with an active profile "Acme TX v3" (effective 2025-06-01 to 2025-12-31) When ExportPreflight initializes Then the selected Carrier Rule Profile is "Acme TX v3" and the selection source is recorded as "Auto-Detected" Given the same job and the user manually selects profile "Acme National v2" When ExportPreflight runs Then the override profile "Acme National v2" is applied and the selection source is recorded as "User Override" Given a job whose metadata matches no active profile When ExportPreflight initializes Then the profile "Generic Default" is applied and a warning "No carrier profile match; using default" is displayed
Required Codes: Validation and One-Click Fix
Given an estimate under profile "Acme TX v3" where required codes include ["RFG300", "FEE-DRONE"] and deprecated codes include ["RFG123"] And the estimate uses code "RFG123" and omits "RFG300" When ExportPreflight validates required and deprecated codes Then it flags 2 issues with messages identifying the missing and deprecated codes When the user clicks "Fix All" Then "RFG123" is replaced by the profile-mapped replacement "RFG310" And "RFG300" is added with quantity computed per profile formula q = (roofArea/100) * (1 + waste%) within ±1% of expected value And re-running preflight shows 0 code issues
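The profile quantity formula reads directly as code; this sketch assumes roof area in square feet and waste as a percentage, with the result in squares (1 square = 100 sq ft).

```python
def required_quantity(roof_area_sqft: float, waste_pct: float) -> float:
    """q = (roofArea/100) * (1 + waste%)."""
    return (roof_area_sqft / 100.0) * (1.0 + waste_pct / 100.0)

# A hypothetical 2,400 sq ft roof at 10% waste needs 26.4 squares of RFG300
print(round(required_quantity(2400, 10), 2))  # 26.4
```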
Grouping and Ordering Normalization
Given a profile that specifies group order ["Roof", "Gutters", "Interior"] and line-item order within each group And an estimate whose items are out of the specified order and groups When ExportPreflight normalizes grouping and ordering Then the estimate structure is reordered to match the profile exactly And a diff report lists moved items and new positions And re-running preflight reports "Grouping/Ordering: Pass"
Tax and Waste Rules by Region
Given profile "Acme TX v3" with materialTax=8.25%, laborTax=0%, shingleWaste=10% with rounding to 2 decimals And an estimate with $10,000 materials and $5,000 labor for shingles When totals are calculated in preflight Then material tax equals $825.00 ± $0.01 and labor tax equals $0.00 And shingle quantities reflect a 10% waste factor within ±0.01 squares of expected Given profile "Acme OR v1" with materialTax=0%, laborTax=0%, shingleWaste=12% When the same estimate is validated Then both taxes equal $0.00 and shingle quantities reflect a 12% waste factor within ±0.01 squares
Naming Conventions and Identifier Formatting
Given a profile rule: EstimateName = "{Carrier}-{ClaimNumber}-{InsuredLastName}", maxLength=60, allowedChars = A-Z a-z 0-9 and "-" And job metadata Carrier="Acme", ClaimNumber="CL-12345", InsuredLastName="Ramos/Smith" When ExportPreflight applies naming rules Then the export name becomes "Acme-CL-12345-RamosSmith" (invalid characters stripped) And the final name length is ≤ 60 characters And the same name appears in ESX/CSV headers as required
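A small sketch of the naming rule as it might be implemented. The sanitization order (strip invalid characters per part, then join and truncate) is an assumption consistent with the example output above.

```python
import re

def format_export_name(carrier, claim_number, insured_last_name, max_length=60):
    """Apply the rule "{Carrier}-{ClaimNumber}-{InsuredLastName}": strip
    characters outside A-Z a-z 0-9 and "-", then enforce the max length."""
    def clean(part):
        return re.sub(r"[^A-Za-z0-9-]", "", part)
    name = f"{clean(carrier)}-{clean(claim_number)}-{clean(insured_last_name)}"
    return name[:max_length]

print(format_export_name("Acme", "CL-12345", "Ramos/Smith"))
# -> "Acme-CL-12345-RamosSmith"
```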
Photo and Note Requirements Enforcement
Given a profile requiring ≥ 8 photos total with tags ["overview","elevation","damage"] and captions, and notes sections ["Scope Summary","Safety"] non-empty And the job has 6 photos with 3 missing captions and only "Scope Summary" note present When ExportPreflight validates artifacts Then it reports: photos missing=2, captions missing=3, missing note sections=["Safety"] When the user clicks "Fix All" Then note templates for missing sections are inserted and auto-filled with job metadata placeholders And photo counts remain flagged until the user adds required photos with tags and captions And re-running preflight after user additions shows all artifact checks Pass
Export Formatting and Xactimate Import Cleanliness
Given profile "Acme TX v3" requires ESX v28 schema, CSV column set "ACME_STD_2025", unit mapping to imperial, and API delivery to https://api.acme.com/claims/{claimId}/estimates And preflight validation passes When the export is generated and delivered Then ESX validates against v28 schema with 0 errors and 0 warnings And CSV contains all required columns in the specified order with correct headers And the API responds HTTP 200 with a non-empty importId And the post-import validator reports 0 normalization changes needed
Line-Item Code Completeness Check
"As a roofing estimator, I want the system to catch incomplete or invalid line items so that my export loads into Xactimate without errors or missing data."
Description

Automated validation that scans all estimate line items for missing or invalid Xactimate codes, unit types, quantities, pricing region/date, waste factors, tax settings, and required attributes. Flags duplicates, incompatible combinations, and ungrouped items; enforces grouping and order by trade/room per active carrier profile. Presents actionable, line-referenced messages and auto-suggests correct codes and prices using RoofLens’ code library and pricing API. Ensures generated ESX/CSV contains complete, consistent data for clean import with minimal post-editing.

Acceptance Criteria
Detect and Auto-Suggest Fixes for Missing or Invalid Xactimate Codes
Given an estimate contains line items with missing codes or codes not found in the active price list for the selected region/date, when the completeness check runs, then each offending line item is flagged with its line reference and a reason of "Missing code" or "Invalid code". Given flagged items exist, when the user clicks Auto-suggest, then at least 95% of flagged items display a single top suggestion including code, description, unit, and price for the active region/date from the RoofLens code library. Given suggestions are visible, when the user accepts a suggestion for a line, then that line is updated in-place and its flag is cleared. Given unresolved code flags remain, when the user clicks Fix All, then the top suggestion is applied to all unresolved flagged lines and a change summary is shown. Then after fixes are applied, 0 line items remain with missing or invalid codes.
Validate Units, Quantities, and Pricing Region/Date via Pricing API
Given the estimate has a selected pricing region and price list date, when the completeness check runs, then each line’s unit type must be one of the allowed units for its code; otherwise the line is flagged "Unit mismatch" with the allowed units listed. Given a line quantity is null, zero, or negative, when the check runs, then the line is flagged "Invalid quantity" and the suggested quantity is derived from linked RoofLens measurements or set to 1 if no measurement exists. Given the pricing region or price list date is missing, when the check runs, then export is blocked and the user is prompted to select region/date before proceeding. Given region/date are set, when prices are refreshed, then each line price matches the pricing API for the selected region/date within ±0.01 and lines with mismatches are updated automatically. Then after applying fixes, 100% of lines have a valid unit, a positive quantity, and a current price tied to the selected region/date.
Enforce Waste Factors and Tax Settings per Carrier Profile
Given the active carrier profile defines required waste ranges and tax rules by trade/code, when the completeness check runs, then any line missing a waste factor or outside the configured min/max is flagged with the expected range. Given taxability must match carrier profile and jurisdiction, when the check runs, then lines with incorrect tax settings are flagged with the expected taxable/non-taxable value. When the user selects Apply profile defaults, then missing/out-of-range waste factors are set to the profile default and incorrect tax settings are corrected for all flagged lines. Then after applying fixes, 0 lines remain with waste or tax rule violations.
Detect Duplicates and Incompatible Line-Item Combinations
Given duplicate lines are defined as same code, trade, room, unit, and description, when the check runs, then duplicates are grouped and flagged with a merge suggestion. When the user clicks Merge duplicates, then duplicate groups are collapsed into a single line per group with quantities summed and pricing recalculated, and all duplicate flags are cleared. Given a ruleset of mutually exclusive or incompatible code pairs/groups, when the check runs, then any violations are flagged with a recommended replacement or removal. When the user clicks Fix All on incompatibilities, then recommended replacements/removals are applied and all incompatibility flags are cleared. Then after fixes, the estimate contains no duplicate lines and no incompatible code combinations.
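One way the duplicate merge could work, sketched with plain dicts standing in for line items. The grouping key follows the definition above; price recalculation is left to the pricing engine.

```python
from collections import defaultdict

def merge_duplicates(lines):
    """Collapse duplicates (same code, trade, room, unit, description)
    into one line per group with quantities summed."""
    groups = defaultdict(list)
    for line in lines:
        key = (line["code"], line["trade"], line["room"],
               line["unit"], line["description"])
        groups[key].append(line)
    merged = []
    for dupes in groups.values():
        combined = dict(dupes[0])
        combined["quantity"] = sum(d["quantity"] for d in dupes)
        merged.append(combined)
    return merged

lines = [
    {"code": "RFG300", "trade": "Roofing", "room": "Main", "unit": "SQ",
     "description": "Laminated shingles", "quantity": 12.0},
    {"code": "RFG300", "trade": "Roofing", "room": "Main", "unit": "SQ",
     "description": "Laminated shingles", "quantity": 4.5},
]
print(merge_duplicates(lines)[0]["quantity"])  # 16.5
```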
Enforce Grouping and Ordering by Trade and Room
Given the active carrier profile specifies grouping by trade and room and a sort order within groups, when the check runs, then ungrouped lines are flagged and a preview of the target structure is generated. When the user selects Apply grouping/order, then lines are reorganized into the specified trade/room hierarchy and sorted as per the profile without altering quantities or prices. Then running the check again produces no grouping/order flags (idempotent), and group headers reflect the correct counts of lines per trade and room.
Export Readiness and Clean Import Validation
Given all completeness checks are passing, when the user exports to ESX/CSV or sends via API, then the internal import validator returns zero errors and zero blocking warnings. Given there are flagged issues, when the user clicks Fix All, then all auto-fixable issues are resolved in one action and non-fixable issues remain with clear, line-referenced guidance. Given any flag is selected in the message panel, when clicked, then the corresponding line is focused in the estimate editor and the suggested fix is visible. Then exported ESX/CSV or API payload imports into Xactimate without manual edits and the ExportPreflight status shows Ready.
Attachment & Evidence Completeness
"As a contractor submitting a claim package, I want to be alerted to missing or mis-labeled photos so that my submission meets carrier evidence requirements the first time."
Description

Validation of required photo evidence and documentation against the active carrier profile, including elevation overviews, damage close-ups, annotated images, and measurement references. Checks image labeling, association to elevations/slopes, EXIF timestamps, and minimum photo counts and angles. Supports auto-attach from RoofLens photo sets and drag-and-drop additions. Applies severity policies to block export on critical gaps or warn on advisories. Improves documentation completeness to reduce disputes and rejections during carrier review.

Acceptance Criteria
Carrier Profile Photo Requirements Enforcement
Given a job with an active carrier profile that defines required photo categories (elevation overviews, damage close-ups, annotated images, measurement references) with minCounts per elevation/roof plane/line item When ExportPreflight runs Attachment & Evidence Completeness validation Then for each required category and scope, the system verifies counts >= minCounts as defined in the profile And marks the category Pass when all scopes meet or exceed minCounts And marks the category Fail and lists missing counts per scope when any scope is below minCounts
Photo Labeling and Association Accuracy
Given photos in the job media library When validation runs Then each required photo must have a label from the allowed taxonomy defined by the active carrier profile And each labeled photo is associated to the correct elevation, slope, or roof plane where applicable And no required photo may remain with an "Uncategorized" label And duplicate photos (identical file hash) are counted once and flagged as duplicates
EXIF Date/Time and Location Compliance
Given required photos with EXIF metadata When validation runs Then each required photo must have an EXIF timestamp within the job visit date range or within +/- 24 hours of the scheduled date if no range is set And if GPS EXIF is present, the photo location must be within 100 meters of the job address centroid And photos missing EXIF fields are flagged as Advisory or Critical according to carrier profile policy
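The 100-meter GPS check implies a geodesic distance calculation; a haversine sketch like the following would be typical. The coordinates in the example are hypothetical.

```python
import math

def within_radius(photo_lat, photo_lon, job_lat, job_lon, radius_m=100.0):
    """Haversine distance between the photo's GPS EXIF point and the job
    address centroid, compared against the 100 m tolerance."""
    r = 6_371_000.0  # mean Earth radius in meters
    p1, p2 = math.radians(photo_lat), math.radians(job_lat)
    dp = math.radians(job_lat - photo_lat)
    dl = math.radians(job_lon - photo_lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance = 2 * r * math.asin(math.sqrt(a))
    return distance <= radius_m

# ~78 m apart: passes the 100 m check
print(within_radius(32.7767, -96.7970, 32.7774, -96.7970))  # True
```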
Minimum Angles and Elevation Coverage
Given the carrier profile specifies minimum angles or viewpoints for elevation overviews and slope coverage (e.g., N/E/S/W per elevation or minAngles per roof plane) When validation runs Then the system verifies that each elevation/roof plane has the required distinct angles/viewpoints based on EXIF heading or manual angle tags And any missing angles per elevation/slope are listed with required vs present counts
Auto-Attach and Revalidation on Add
Given photos exist in RoofLens auto-captured sets or are added via drag-and-drop When the user adds or removes photos Then the system attempts to auto-attach them to required categories and scopes using labels/AI and prior mappings And re-runs validation within 2 seconds of the change And updates pass/fail counts and severity badges in the Preflight panel without page refresh
Severity Policy Enforcement and Export Blocking
Given carrier-defined severity policies for evidence gaps (Critical vs Advisory) When validation results include any Critical gaps for attachments/evidence Then ESX/CSV export actions and API send are disabled and a blocking banner lists the critical items And when only Advisory gaps remain, export actions remain enabled and display a warning badge And the banner provides one-click Fix suggestions where available
Annotation Evidence for Damage Line Items
Given line items marked as requiring annotated images by the carrier profile When validation runs Then each such line item must have at least one annotated image linked to it with visible markups and a readable caption/note if required And missing annotations are flagged with the specific line item IDs and suggested candidate images for quick attach
Note Compliance Validator
"As an adjuster, I want guidance to make my notes compliant so that the carrier accepts the estimate without back-and-forth revisions."
Description

Analysis of line-item, room, and summary notes to enforce carrier-required disclaimers, cause-of-loss statements, measurement references, and prohibited terms. Provides macro templates, inline suggestions, and minimum-detail checks (e.g., slope, material, quantities) while supporting redaction of sensitive data. Generates pass/fail results with suggested edits and quick-insert options to achieve compliance prior to export. Reduces back-and-forth and increases first-pass acceptance.

Acceptance Criteria
Missing Carrier Disclaimer Auto-Fix Before Export
Given a claim with a carrier profile that requires a summary disclaimer And the project has no matching disclaimer text in Summary Notes When the Note Compliance Validator runs Then it marks the Disclaimer check as Fail and flags Summary Notes with a blocker And it presents the carrier-specific disclaimer macro as a quick insert option When the user applies Insert Macro Then the disclaimer text is inserted at the top of Summary Notes with the carrier label And revalidation updates the Disclaimer check to Pass
Cause-of-Loss Statement Enforcement
Given the export profile requires a cause-of-loss statement in Summary Notes And the configured keywords include hail, wind, fire, water, or theft When the validator scans Summary Notes Then it verifies presence of at least one configured cause-of-loss keyword and an event date in YYYY-MM-DD or MM/DD/YYYY format And if either element is missing it marks the check as Fail and provides a single actionable suggestion When the user inserts the cause-of-loss macro and adds the event date Then the check updates to Pass
Measurement References in Line-Item Notes
Given steep-slope roof replacement line items exist with associated measurements When the validator evaluates line-item notes Then each note must include: a slope value in n:12 format, a material type from the allowed list, and a quantity with unit (SQ or SF) And any note missing one or more required tokens is flagged as Fail with the missing fields listed When the user adds the missing tokens via inline suggestions Then the affected line-item checks update to Pass
Prohibited Terms Detection and Replacement Suggestions
Given the carrier profile defines a list of prohibited terms and approved alternatives When the validator scans all line-item, room, and summary notes Then every occurrence of a prohibited term is highlighted and listed with location context And the check status is Fail until all prohibited terms are removed or replaced via quick actions When the user selects Replace All Then all prohibited terms are replaced with approved alternatives And revalidation shows the check as Pass
Sensitive Data Redaction Prior to Export
Given redaction is enabled for exports And notes contain PII patterns such as phone numbers, emails, or policy numbers When the validator runs Then it flags PII findings as Warnings with a quick toggle to Redact When the user enables Redact PII for export Then all detected PII in exported ESX, CSV, and PDF is replaced with [REDACTED] And in-app notes retain the original content for authorized users And revalidation confirms no PII appears in the export preview
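A sketch of regex-based PII redaction along the lines described above. The patterns shown (phone, email, policy number) are illustrative; a real validator would use the carrier profile's configured pattern set.

```python
import re

# Illustrative PII patterns, not a production-grade set
PII_PATTERNS = [
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),      # phone numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),            # emails
    re.compile(r"\b(?:POL|PLCY)[-\s]?\d{6,10}\b", re.I),   # policy numbers
]

def redact(note: str) -> str:
    for pattern in PII_PATTERNS:
        note = pattern.sub("[REDACTED]", note)
    return note

print(redact("Call insured at 214-555-0137 re: policy POL-00482211."))
# -> "Call insured at [REDACTED] re: policy [REDACTED]."
```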
Inline Suggestions and Batch Quick-Fix
Given multiple note compliance failures are detected across the project When the user clicks Fix All Then macros, replacements, and missing tokens are applied in batch without altering non-flagged content And a summary lists the number of changes by category (disclaimers inserted, terms replaced, tokens added) And revalidation runs automatically and updates the overall Note Compliance status
Export Gate Based on Note Compliance Status
Given ExportPreflight is triggered for Xactimate export When any Note Compliance check has status Fail with severity Blocker Then the export actions (ESX, CSV, API) are disabled and the user is shown required fixes When all blockers are resolved and only warnings remain Then export actions are enabled and the final report summarizes remaining warnings and zero blockers And the export payload includes the compliant notes
One-Click Auto-Fix
"As a roofing estimator, I want to apply recommended fixes in one click so that I can reach export-ready status quickly and confidently."
Description

Automated remediation that applies system-recommended fixes to common preflight issues: mapping to standard Xactimate codes, auto-grouping and ordering items, defaulting missing fields (trade, tax, waste), relabeling photos, and inserting required notes/disclaimers. Presents a change summary with granular accept/reject controls, versioning, and undo. Re-runs validations after application to confirm a clean state. Accelerates export readiness and reduces manual editing time.

Acceptance Criteria
One-Click Auto-Fix: Map Items to Standard Xactimate Codes
Given a project with unmapped line items and available mapping recommendations When the user clicks One-Click Auto-Fix Then every unmapped line item with a single recommendation is assigned that Xactimate code And items with multiple viable recommendations are flagged as Needs Review without assignment And all code changes are captured in the Change Summary with old code, new code, item ID, and timestamp And a new estimate version is created
Auto-Grouping and Ordering of Line Items per Carrier Profile
Given line items without compliant grouping/order and a selected carrier profile When the user clicks One-Click Auto-Fix Then line items are grouped per profile rules (e.g., Trade > Area > Activity) And line items within each group are ordered per the profile's sequence definition And group labels meet profile naming constraints (character set and max length) And the export preview reflects the new grouping and order
Default Missing Fields: Trade, Tax, and Waste
Given line items missing trade, tax applicability, or waste percentage When the user clicks One-Click Auto-Fix Then defaults are applied per the active project template for each missing field And waste percentage is applied only to material-eligible items And tax flags are set per jurisdiction settings in the project And each defaulted field is annotated as Auto-Fix in the Change Summary And all defaults are reversible with Undo
Photo Relabeling to Carrier Naming Convention
Given attached photos with labels that do not meet the carrier naming convention When the user clicks One-Click Auto-Fix Then photo captions and filenames are updated to the pattern defined by the selected carrier profile And original filenames are retained in metadata And photos missing an area are assigned the project's default area or flagged for review And the Change Summary lists each relabeled photo with old and new label values
Insert Required Notes and Disclaimers
Given a selected carrier profile that requires specific notes and disclaimers When the user clicks One-Click Auto-Fix Then required notes are inserted into the estimate header and applicable line items And duplicate notes are not created And dynamic placeholders in the notes are resolved with project data And inserted notes appear in preview and will be included in ESX/CSV exports And the Change Summary lists each inserted note
Change Summary with Granular Accept/Reject, Versioning, and Undo
Given proposed changes generated by Auto-Fix When the user opens the Change Summary Then changes are grouped by category with per-category counts And each change can be individually accepted or rejected before apply And Accept All and Reject All actions are available And applying accepted changes creates a new version with a diff view to the prior version And clicking Undo reverts to the prior version and restores its validation state
Auto Re-run Validations to Confirm Clean Export State
Given Auto-Fix changes have been applied When validations re-run automatically Then all auto-fixable preflight errors present before Auto-Fix are resolved And any remaining errors are listed with reasons and next-step guidance And if no errors remain, the project status is set to Export Ready and ESX/CSV export buttons are enabled And initiating a test export completes without validation errors
Export Dry-Run & Error Preview
"As a user preparing a deliverable, I want to preview potential export errors so that I can resolve them before sending files to the carrier."
Description

A simulated ESX/CSV generation that predicts Xactimate import warnings and errors before producing final files or sending via API. Displays a prioritized checklist with severity levels, line references, and deep links to problem areas, along with suggested fixes or auto-fix options. Supports gating rules to block export on critical issues and allow warnings with acknowledgment. Shortens feedback loops and prevents failed imports.

Acceptance Criteria
Dry-Run Predicts Import Issues Without Final Export
Given a project with up to 500 line items and a selected carrier profile When the user clicks "Dry Run" from the ExportPreflight modal Then the system simulates ESX and CSV generation without creating files or calling external APIs And returns a checklist of predicted import issues within 15 seconds at the 95th percentile And each issue includes severity (Critical, Error, Warning), predicted Xactimate code/message if available, and precise line references (section, line number, item code) And the dry-run result is cached for the project until any relevant data changes (items, notes, photos, grouping, profile)
Prioritized Checklist Sorting and Filtering
Given dry-run results contain more than one issue When the checklist is displayed Then issues are sorted by severity (Critical > Error > Warning), then by section, then by line number And the UI shows total counts per severity and overall And filter controls allow filtering by severity, section, and item code and update the list in under 200 ms And clearing filters restores the default prioritized order
Gating Rules for Export Based on Issue Severity
Given organizational gating rules are enabled for the active carrier profile And the dry-run contains one or more Critical issues When the user attempts "Export" or "Send via API" Then the action is blocked with a banner explaining required fixes and a link to "View Issues" And the "Override" control is unavailable to users without Export.OverrideCritical permission Given only Warnings remain When the user proceeds to export Then the system requires a one-click acknowledgment capturing user, timestamp, and warning summary And the export continues
Deep Links to Problem Areas in Estimate Editor
Given an issue includes line references When the user clicks "Open" Then the system navigates to the estimate editor at the exact line item And highlights the item and the offending field And focuses the field ready for edit within 500 ms after view load When the user applies a valid fix and clicks "Re-run Dry Run" Then the issue no longer appears
One-Click Auto-Fix with Re-Run and Undo
Given an issue type supports auto-fix (e.g., missing trade code, incorrect grouping/order) When the user clicks "Auto-Fix" Then the system presents a confirmation summarizing the exact changes And upon confirm, applies changes atomically and logs a change summary (who, what, when) And automatically re-runs the dry-run and updates the checklist, removing resolved issues And provides a single-click "Undo" that reverts the auto-fix and re-runs the dry-run
Carrier-Specific Formatting and Completeness Validation
Given the carrier profile "ABC Mutual - Residential" is selected When the dry-run executes Then it validates required elements per profile: line item codes present, activities set, waste factor rules, note templates applied, grouping/order, and required photo attachments per line item And flags each violation with severity per rule, line references, and suggested fixes (template names or attachment counts) And marks photo-attachment violations as Critical if the profile defines them as mandatory
Consistency Between Dry-Run Predictions and Final Export Paths
Given a dry-run returns no Critical issues When the user exports ESX or CSV Then the generated file imports into Xactimate with zero errors and no more warnings than predicted by dry-run When the user chooses "Send via API" Then the receiving system accepts the payload with zero errors and no more warnings than predicted by dry-run And the system writes an audit record with user, timestamp, carrier profile, export target, dry-run hash, and outcome
Direct API Export & Delivery
"As a contractor, I want to deliver my estimate directly to the carrier’s system so that I don’t have to download and manually upload files."
Description

Direct transmission of validated estimates to Xactimate or third-party intake endpoints via secure APIs using OAuth/API keys, with retries, idempotency keys, and delivery receipts. Supports secure attachment upload, metadata mapping, and webhook callbacks for status updates. Provides a fallback to ESX/CSV download when APIs are unavailable. Records a full audit log of transmissions and outcomes. Streamlines delivery and provides confirmation without manual uploads.

Acceptance Criteria
Successful OAuth 2.0 Export to Xactimate
Given a validated estimate with ExportPreflight status "Pass" and an active OAuth 2.0 configuration for Xactimate When the user triggers "Export via API" Then the system obtains an access token using the configured OAuth grant and scopes And includes an idempotency key and correlation ID on all outbound requests And transmits the estimate payload over TLS 1.2+ to the Xactimate endpoint And receives a 200–202 response containing a provider receipt ID And persists the receipt ID and marks the export status as "Sent" And displays a success confirmation to the user within 5 seconds of the API response
API Key Export to Third-Party Intake Endpoint
Given a validated estimate and a configured third-party endpoint with API key authentication When the user triggers "Export via API" Then the system sends a POST to the configured URL over TLS 1.2+ And includes the configured API key header and an idempotency key And validates the payload against the mapped schema (required fields present, types correct) And treats a 200–202 response with a receipt ID as success And treats a 409 duplicate-for-same-idempotency-key as success and records the prior receipt ID And persists the outcome and shows confirmation within 5 seconds
Idempotent Retries With No Duplicate Records
Given the initial export attempt returns a transient failure (network timeout or HTTP 5xx) When the system retries automatically Then it reuses the same idempotency key for each retry And retries up to 3 times with exponential backoff starting at 5 seconds And stops retrying upon first 2xx success or any terminal 4xx error And records exactly one successful delivery per export (confirmed by 2xx or duplicate 409 with same receipt ID) And writes each attempt with timestamp, error code, and outcome to the audit log
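The retry contract above maps naturally to a small loop. In this sketch, `post` is a stand-in for the HTTP client returning (status_code, receipt_id), and the backoff schedule (5s, 10s, 20s) follows the criteria.

```python
import time
import uuid

def send_with_retry(post, payload, max_retries=3, base_delay_s=5.0):
    """Initial attempt plus up to 3 retries with exponential backoff,
    reusing one idempotency key so the receiver can deduplicate."""
    key = str(uuid.uuid4())
    for attempt in range(max_retries + 1):
        try:
            status, receipt_id = post(payload, key)
        except TimeoutError:
            status, receipt_id = None, None        # transient: retry
        if status is not None and 200 <= status < 300:
            return receipt_id                      # delivered
        if status == 409:
            return receipt_id                      # duplicate key: already delivered
        if status is not None and 400 <= status < 500:
            raise RuntimeError(f"terminal client error {status}")  # no retry
        if attempt < max_retries:
            time.sleep(base_delay_s * 2 ** attempt)
    raise RuntimeError("delivery failed after all retries")
```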
Attachment Upload and Metadata Mapping
Given the estimate has required attachments identified by ExportPreflight When the export runs Then each required attachment is uploaded securely (direct upload or pre-signed URL) before finalizing the export And the payload references all uploaded attachments per target schema And the number of uploaded attachments equals the number required; unsupported types or oversize files are rejected with clear errors And the export fails fast with an actionable message if any attachment upload fails; no partial export is recorded
Webhook Callback Handling and Status Updates
Given the provider sends a webhook callback for a previously sent export When the callback is received at the configured endpoint Then the system validates the signature using the configured secret and rejects invalid signatures with 401 And correlates the callback to the export via receipt ID or correlation ID And updates the export status based on the callback payload (e.g., Accepted, Processing, Imported, Error) And persists the full callback payload and timestamp in the audit log And surfaces the updated status in the UI within 10 seconds of a valid callback
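Signature validation typically means recomputing an HMAC over the raw request body and comparing in constant time. This sketch assumes an HMAC-SHA256 hex signature header; the exact scheme varies by provider.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC and compare in constant time; a False result
    maps to the 401 rejection described above."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

body = b'{"receiptId":"r-123","status":"Imported"}'
secret = b"webhook-secret"
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_webhook(secret, body, sig))        # True  -> update export status
print(verify_webhook(secret, body, "bad-sig"))  # False -> respond 401
```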
Fallback to ESX/CSV Download When APIs Are Unavailable
Given API export is unavailable due to authentication failure, configuration error, or provider outage detected (token fetch failure, DNS/timeout, or repeated 5xx) When the user attempts to export Then the system informs the user that API export is unavailable with a clear reason code And offers an immediate ESX/CSV download using the same validated estimate data And generates the ESX/CSV file within 30 seconds and makes it available to the user And records the fallback decision and associated error details in the audit log
Comprehensive Audit Logging of Transmissions and Outcomes
Given any export attempt (API or fallback) When the operation completes or fails Then the system appends an immutable audit record containing user, timestamp, endpoint identifier, idempotency key, request/response codes, provider receipt ID (if any), attempt count, attachment counts, and message And redacts configured sensitive fields before storage And makes audit records queryable by export ID and date range for authorized users

Template Lockstep

Centrally controlled estimate templates with versioning, one‑click rollouts, and safe rollback. Push updates across branches on a schedule, auto‑migrate in‑flight bids with a clear change diff, and lock critical sections to stop local edits. This ensures every office quotes the same assemblies, notes, and structure, cutting rework and protecting margins.

Requirements

Template Versioning & Changelog
"As an operations manager, I want to version estimate templates with clear change history so that I can control changes and trace what was used on any bid."
Description

Provide semantic versioning for estimate templates with immutable released versions and editable drafts. Each release includes auto-generated changelogs, manual release notes, and a visual diff against prior versions (assemblies, line items, pricing formulas, notes, and structure). Prevent direct edits to released versions; require new drafts for changes. Enable quick lookup of which template version was used on any bid and support compare/restore of historical versions. Integrates with RoofLens’ estimate engine and PDF export so generated bids reference the exact template version and metadata.

Acceptance Criteria
Draft Creation and Semantic Version Assignment
Given an existing template with latest released version v1.2.0, When a user creates a new draft from it, Then the draft is marked "Draft", is fully editable, and no immutable version number is assigned until release. Given a draft is being released, When the user selects a semantic version bump type (major/minor/patch), Then the system proposes the next version accordingly (2.0.0/1.3.0/1.2.1), enforces X.Y.Z format, and blocks release if the version is not greater than the current latest release or already exists. Given a new template with no prior release, When the first release is created, Then a valid semantic version (e.g., 1.0.0) is required and the release author and timestamp are recorded.
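The bump logic above is standard semantic-version arithmetic; a sketch follows (the uniqueness and must-be-greater checks against existing releases are omitted):

```python
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def bump(version: str, kind: str) -> str:
    """Propose the next release number, enforcing X.Y.Z format
    (e.g. 1.2.0 -> 2.0.0 / 1.3.0 / 1.2.1)."""
    m = SEMVER.match(version)
    if not m:
        raise ValueError(f"not a valid X.Y.Z version: {version}")
    major, minor, patch = map(int, m.groups())
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump type: {kind}")

print(bump("1.2.0", "major"), bump("1.2.0", "minor"), bump("1.2.0", "patch"))
# 2.0.0 1.3.0 1.2.1
```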
Immutability of Released Template Versions
Given a released template version, When a user attempts to modify assemblies, line items, pricing formulas, notes, or structure, Then the UI prevents edits and prompts to "Create Draft" to modify. Given a released template version, When an API update request targets it, Then the server rejects the request with HTTP 409 Conflict and no data is changed. Given a released template version from which a draft was created, When the draft is released, Then the original released version remains unchanged and accessible in version history.
Release Changelog Auto-Generation and Manual Notes
Given a draft compared to the immediately prior released version (or an empty baseline for the first release), When the draft is released, Then the system auto-generates a changelog with Added/Changed/Removed sections for assemblies, line items, pricing formulas, notes, and structure, including item identifiers and counts. Given a release in progress, When the user enters manual release notes, Then the notes are saved with the release, attributed to the author, and displayed in version history and release details. Given a release has been created, When viewing its changelog, Then the auto-generated content is immutable and accurately reflects the computed differences for that release.
Visual Diff Across Template Versions
Given any two versions (draft or released) of the same template, When the user opens the visual diff, Then differences are displayed side-by-side with highlights for added/removed/modified across assemblies, line items, pricing formulas, notes, and structure, and the user can filter by category. Given a visual diff is displayed, When the user expands a changed pricing formula or note, Then the diff shows token-level/text-level changes highlighting insertions and deletions. Given a visual diff is displayed, When loaded, Then a summary header shows total Added/Changed/Removed counts per category.
Bid and PDF Reference Exact Template Version Metadata
Given a bid is generated using template version v1.3.0, When the estimate is calculated and the PDF is exported, Then the bid stores the template ID and version "v1.3.0" and the PDF displays the template name, version, and release date in its metadata/footer. Given the template is later updated to a newer release, When the existing bid is reopened or recalculated, Then it continues to reference and use the original stored template version unless the user explicitly selects a different version. Given a bid was generated from a draft template version, When the PDF is exported, Then the PDF and bid metadata clearly label the template version as "DRAFT" with the draft timestamp.
Version Lookup, Compare, and Restore
Given any bid, When viewing bid details, Then there is a link to the exact template version used, opening it in read-only view within version history. Given a historical version (released) is selected in version history, When the user clicks "Restore as Draft", Then a new draft is created that copies that version exactly, the restore action is logged with user and timestamp, and no historical versions are altered. Given any two historical versions are selected, When the user clicks "Compare", Then the same visual diff view is shown and can be navigated by category; when no differences exist, Then the system shows "No changes detected". Given the version history list is displayed, When loaded, Then versions are ordered by release date descending and each entry shows version number, release author, release date, and a changelog/notes preview.
One‑Click Rollout & Safe Rollback
"As a template admin, I want to deploy or revert template versions with one click so that all offices stay aligned without manual updates."
Description

Enable atomic deployment of a selected template version to chosen branches/teams via a single action, with pre-flight validation (schema checks, pricing formula tests, required fields) and dry-run reporting. Support instant safe rollback to the previously active version, preserving state and documenting reason codes. Handle partial failures gracefully with retry and per-branch status reporting. Integrates with permissions, notifications, and audit logging to ensure controlled, consistent rollouts platform-wide.

Acceptance Criteria
Dry-Run Pre-Flight Validation for Selected Template and Branches
Given a user with "Template Rollout:Execute" permission selects template version V and branches [B1..Bn] And provides an optional rollout note When the user triggers "Dry Run" Then the system performs schema validation, pricing/formula compilation, and required-field checks for V against each branch And computes a change diff between each branch’s current active version and V And simulates auto-migration of in-flight bids, reporting counts of affected bids and any non-migratable items per branch And returns a dry-run report with a unique DryRunID, per-branch readiness (Ready, Warnings, Blocked), blocker details, and estimated migration counts And makes no persistent changes to templates, branches, or bids And records the dry run in the audit log linked to DryRunID
Atomic Rollout Execution Across Selected Branches
Given a successful DryRunID with no Blocked branches and user confirmation When the user clicks "Roll Out Now" Then the system assigns a unique RolloutID and begins rollout for all selected branches And for each branch, either fully activates version V and completes bid auto-migrations, or makes no change if any step fails And writes per-branch status transitions: Pending → In Progress → Succeeded or Failed with reason codes And publishes a rollout summary with counts of succeeded/failed branches, bids migrated, and links to per-branch diffs And updates the branch’s active template version only on success And emits completion notifications to configured recipients with RolloutID and outcomes
Partial Failure Handling and Idempotent Retry
Given a rollout with some branches Failed due to transient errors or recoverable validation issues When the automatic retry policy runs (up to 3 attempts with exponential backoff) or an authorized user triggers "Retry Failed Branches" Then only Failed branches are retried using the same RolloutID and template version V And successful retries transition branch status to Succeeded and update summaries and notifications And repeated execution of "Retry Failed Branches" is idempotent and does not duplicate migrations or alter Succeeded branches And each failure captures machine-readable reason codes and diagnostics for support
Instant Safe Rollback to Previously Active Version
Given a completed rollout RolloutID that changed one or more branches to version V When an authorized user triggers "Rollback" and selects a required reason code Then each affected branch atomically reverts its active template to the immediately previous version V_prev And no data loss occurs; existing bids remain accessible, and any partial migrations are either fully reversed or left consistent with V_prev rules And the system assigns a unique RollbackID, updates per-branch statuses, and logs the reason code and initiator And completion notifications are sent with RollbackID and outcomes
Permissions, Notifications, and Audit Coverage
Given platform roles and permissions are configured When a user without "Template Rollout:Execute" or "Template Rollout:Rollback" attempts dry-run, rollout, retry, or rollback Then the action is denied with a clear error and no side effects And all attempted and successful dry-runs, rollouts, retries, and rollbacks are recorded in the audit log with user, timestamp, template version(s), branches, counts, reason codes, and operation IDs And notifications are delivered to branch owners and subscribed roles on dry-run completion, rollout completion, failures, and rollback, including links to reports and operation IDs
Scheduled Rollout with Blackout Windows
Given a user schedules rollout of version V to branches [B1..Bn] at time T in timezone Z and provides a DryRunID not older than 24 hours And organization blackout windows are configured When the scheduled time T is reached Then the system re-validates pre-flight checks; if any branch is Blocked, rollout for that branch is skipped and stakeholders are notified with reasons And if within a blackout window for any branch, rollout for that branch is deferred until the window closes, with status Deferred And successful branches proceed as in immediate rollout under one RolloutID And the schedule record is updated with actual start/end times and per-branch outcomes
Scheduled Rollouts & Maintenance Windows
"As a regional manager, I want to schedule template updates during off-hours so that crews aren’t disrupted mid-estimate."
Description

Allow administrators to schedule template rollouts for a future date/time with branch-aware time zones, optional blackout windows, and phased waves. Provide pre-rollout notifications and reminders, plus pause/resume controls during execution. Include calendar views and ICS export to coordinate with field teams. Scheduling respects branch dependencies and prevents overlapping deployments that could impact active estimating sessions.

Acceptance Criteria
Schedule Rollout in Branch-Aware Time Zones
- Given an admin selects branches in different time zones and sets a rollout to start at 09:00 local branch time on a future date, When the schedule is saved, Then the system stores and displays the correct local start times per branch and the corresponding UTC times.
- Given a rollout time is set in the past for any branch, When saving, Then the system blocks the schedule and shows a validation error indicating the earliest allowable time.
- Given a branch observing daylight saving time, When a rollout is scheduled across a DST transition, Then the start time honors the branch’s local civil time at 09:00 on the chosen date.
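Resolving "09:00 local branch time" to concrete instants is the core of this behavior. A sketch using Python's zoneinfo, which applies whatever offset is in effect on that date and so handles the DST case above; branch names and zones are examples.

```python
from datetime import date, datetime, timezone
from zoneinfo import ZoneInfo

def branch_start_times(rollout_date, branches):
    """Resolve a 09:00 local-civil-time start per branch to UTC; the
    wall-clock time survives DST transitions."""
    out = {}
    for name, tz in branches:
        local = datetime(rollout_date.year, rollout_date.month,
                         rollout_date.day, 9, 0, tzinfo=ZoneInfo(tz))
        out[name] = (local, local.astimezone(timezone.utc))
    return out

# The day after the 2025 US DST transition: Dallas is back to UTC-6
for name, (local, utc) in branch_start_times(
        date(2025, 11, 3),
        [("Dallas", "America/Chicago"), ("Phoenix", "America/Phoenix")]).items():
    print(name, local.isoformat(), "->", utc.isoformat())
```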
Enforce Global and Branch Blackout Windows
- Given global and/or branch-specific blackout windows exist, When an admin attempts to schedule a rollout that starts within any blackout window for a targeted branch, Then the system prevents scheduling and indicates the conflicting blackout window.
- Given a rollout would cross into a blackout window mid-execution for remaining branches, When the blackout begins, Then those branches are not started and are automatically deferred to the next available non-blackout time with a visible reschedule notice.
- Given blackout windows are edited, When a change affects a future rollout, Then the calendar and rollout detail reflect the new times and any affected branches are revalidated within 60 seconds.
Phased Waves with Branch Dependencies
- Given an admin defines waves (Wave 1, Wave 2, etc.) with specific branch sets and stagger offsets, When saving, Then each branch is assigned to exactly one wave with a computed start respecting its local time zone.
- Given branch B depends on branch A, When the admin attempts to place B in a wave scheduled before A’s completion window, Then the system blocks the configuration and prompts to move B to a later wave.
- Given Wave 1 completes with status Pass/Fail by branch, When a dependency for a branch in Wave 2 fails, Then that dependent branch is automatically skipped and flagged with a dependency-failed status.
Pre-Rollout Notifications and Reminders
- Given a rollout is scheduled, When saved, Then pre-rollout notifications are queued to target roles (e.g., branch admins and estimators) at configurable lead times (e.g., 24h and 1h) per branch local time.
- Given the rollout schedule is edited or canceled, When changes are saved, Then updated notifications are sent within 5 minutes and prior reminders are canceled.
- Given notifications are sent, When delivered, Then the system records delivery status per recipient and exposes counts (sent, failed) on the rollout detail.
Pause and Resume During Execution
- Given a rollout has started, When an admin clicks Pause, Then no new branches or waves start, while in-progress branch executions are allowed to complete, and the rollout status changes to Paused within 10 seconds.
- Given a rollout is Paused, When an admin clicks Resume, Then the next scheduled branches/waves begin from the pause point respecting blackout windows and dependencies, and status changes to Running within 10 seconds.
- Given a rollout is Paused, When the pause exceeds a blackout start for pending branches, Then those branches are deferred to the next valid window upon Resume.
Calendar View and ICS Export
- Given scheduled rollouts and blackout windows exist, When viewing the calendar, Then month/week/day views display all items color-coded by type and filterable by branch and template.
- Given a user clicks Export ICS for a rollout or for a branch, When downloaded, Then the .ics file contains VEVENT entries with correct DTSTART/DTEND in the branch’s time zone, a unique UID, and summary/description matching the rollout details.
- Given the ICS is imported into a standard calendar client, When viewed, Then the event times match those shown in the RoofLens calendar for that branch.
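For the ICS requirement, a minimal VEVENT emitter might look like the following. DTSTAMP and a matching VTIMEZONE block, which strict RFC 5545 clients expect inside the enclosing VCALENDAR, are omitted for brevity; the summary text is illustrative.

```python
from datetime import datetime, timedelta
import uuid

def rollout_vevent(summary, start_local, duration, tz_name):
    """Emit one VEVENT with DTSTART/DTEND in the branch's time zone and a
    unique UID; a VCALENDAR wrapper around it is assumed."""
    fmt = "%Y%m%dT%H%M%S"
    end = start_local + duration
    return "\r\n".join([
        "BEGIN:VEVENT",
        f"UID:{uuid.uuid4()}@rooflens",
        f"DTSTART;TZID={tz_name}:{start_local.strftime(fmt)}",
        f"DTEND;TZID={tz_name}:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
    ])

print(rollout_vevent("Template rollout: v2.1.0 (Dallas)",
                     datetime(2025, 11, 3, 9, 0), timedelta(hours=1),
                     "America/Chicago"))
```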
Prevent Overlapping Deployments and Protect Active Sessions
- Given a rollout targeting a branch is scheduled, When a second rollout is created that would overlap the same branch’s time window, Then the system blocks the second rollout and shows the conflict.
- Given an active estimating session exists in a branch at the planned start time, When the rollout start time arrives, Then the rollout for that branch is delayed until the session ends or a max defer time is reached, and the action is logged.
- Given deferred starts occur due to active sessions, When the rollout completes, Then the audit trail lists each deferment with start/end timestamps and reasons.
Auto‑Migrate In‑Flight Bids with Diff Review
"As an estimator, I want my open bids to automatically adopt template updates with a clear diff so that I can apply changes confidently without redoing work."
Description

Detect open/in-progress bids impacted by a template update and auto-map changes (added/removed/renamed line items, adjusted assemblies, pricing formula updates). Present a clear side-by-side diff showing the impact on totals, taxes, and markups, offering accept-all, selective apply, or manual remap options. Support idempotent migrations and record migration notes on the bid. Ensure no data loss with rollback to a pre-migration snapshot. Integrate with PDF regeneration and the activity timeline so recipients see the updated estimate version.

Acceptance Criteria
Detect Impacted In‑Flight Bids
Given a template update is published from v1.3 to v1.4 with changes to specific items, assemblies, and pricing formulas And the system has bids in states Open, In Progress, Closed, Won, and Lost across versions v1.2–v1.3 When impact detection runs Then only bids in states Open or In Progress whose template version differs from v1.4 and intersect the changed entities are flagged as Impacted And each impacted bid displays current version, target version v1.4, impacted entities, and pre‑calculated deltas for subtotal, taxes, markups, and grand total And bids in states Closed, Won, or Lost, and bids already at v1.4 are excluded
Auto‑Map Template Changes to Bid Lines
Given a bid using template v1.3 contains line items A, B, C and assembly D with quantities, notes, and line‑level discounts And template v1.4 adds item E, removes item B, renames item A to A′, adjusts assembly D structure, and updates the pricing formula for C When auto‑migration runs Then item A is mapped to A′ preserving quantity, unit, notes, attachments, and line‑level discounts And removed item B is marked Removed by template and excluded from totals unless the user explicitly retains it as a custom item And added item E is inserted in the correct section order with template default quantity and pricing And assembly D child items update to v1.4 while preserving bid‑specific quantities where applicable And item C recalculates using the new pricing formula with existing bid inputs And a mapping audit entry records old and new identifiers for each change
Side‑by‑Side Diff With Financial Impact
Given an impacted bid opens the migration diff view When the diff renders Then two columns show Current Bid and Updated Template with per‑line change badges in {Added, Removed, Renamed, Formula Changed, Assembly Changed} And users can filter to Changed only and expand/collapse assemblies And subtotal, taxes, markups, and grand total deltas are displayed at header and per‑section and match recalculated values within $0.01 And hovering a renamed or formula‑changed line shows before/after identifiers and formula expressions
Apply Options: Accept All, Selective, Manual Remap
Given the diff presents Apply options When the user selects Accept All Then all proposed mappings apply and the bid’s template version updates to v1.4 And when the user selects specific lines and clicks Apply Selected Then only those selections apply; remaining changes stay pending in the diff And when the user chooses Manual Remap for an unmatched or ambiguous line Then the user can map it to an existing bid line or a template item; upon save, the mapping applies and is stored as a rule for this migration And after any apply action, user‑entered migration notes are saved on the bid and visible in the activity timeline
Idempotent Migration Behavior
Given a bid has successfully migrated to template v1.4 When the migration process is re‑run against v1.4 with no additional template changes Then zero changes are proposed and no duplicate lines are created And when previously pending changes are applied and the process is re‑run Then only remaining unapplied changes are proposed And re‑running migration with no net changes does not regenerate the PDF or modify the activity timeline except for a No changes entry
Rollback to Pre‑Migration Snapshot
Given a pre‑migration snapshot is captured atomically before any changes are applied When the user clicks Rollback after a migration Then the bid restores exactly to the snapshot state including line items, quantities, notes, pricing formulas, section order, taxes, markups, and template version And any items added by migration are removed and any removed items are reinstated And a rollback event with reason and user stamp is recorded in the activity timeline And the current PDF is replaced with the snapshot PDF or regenerated to match the snapshot state
PDF Regeneration and Timeline Update Post‑Migration
Given migration changes are applied to a bid When the user finalizes the migration Then a new estimate PDF is generated with an incremented estimate version label and attached to the bid And public share links and previously sent portal views show the updated PDF without changing the link URL And the activity timeline logs Template migration applied, PDF regenerated, and Recipient views updated entries with timestamps
Section Locking & Role‑Based Edit Controls
"As a company admin, I want to lock sensitive template sections so that local offices can’t alter pricing or legal text."
Description

Provide granular locks on critical template sections (assemblies, cost catalogs, pricing formulas, disclaimers, scope notes, tax/markup rules) to block local edits. Configure role-based permissions and branch-level policies for read-only, editable, or request-override with approval. Enforce locks at UI and API layers, with clear indicators and rationale. Temporary overrides require approver authorization and are time-bound with automatic reversion. Prevents margin erosion and ensures consistent legal/technical language across offices.

Acceptance Criteria
UI Read-Only Enforcement for Locked Sections
Given a template has assemblies, pricing formulas, disclaimers, scope notes, tax/markup rules marked as Locked When a user without an active override opens the estimate editor for that template Then all inputs for those sections render disabled and inline edit controls are hidden And any attempt to paste/type changes into those sections is prevented client-side And clicking Save triggers a non-blocking validation that lists each locked section and prevents save with error code SECTION_LOCKED And no changes to locked fields are persisted in the database
API Enforcement with Lock Metadata
Given a section is Locked and the caller token lacks an active override for that section on the target estimate When the client issues PATCH/PUT requests touching fields under that section via the public API Then the API responds 403 Forbidden with error.code=SECTION_LOCKED and includes lockId, sectionKey, templateVersion, policySource (Global|Branch), rationale, and expiresAt=null in error.details And the response contains no partial updates; all atomic operations are rolled back And response time p95 is <= 500 ms under load of 100 rps And the same enforcement applies to bulk endpoints; locked records are skipped with per-item errors and the call overall returns 207 Multi-Status
Role-Based and Branch Policy Edit Controls
Given branch-level policy for Pricing Formulas is Read-Only, for Scope Notes is Request-Override, and for Assemblies is Editable And roles are defined: Estimator, BranchManager, TemplateAdmin with permissions per policy When an Estimator attempts to edit Pricing Formulas Then the UI blocks editing and the API returns 403 SECTION_LOCKED When an Estimator attempts to edit Scope Notes Then the UI shows a Request Override action and the API allows no write without an approved override When a BranchManager edits Assemblies Then the UI allows editing and the API accepts writes with 200, and changes are persisted And when a Global (TemplateAdmin) lock exists on any section Then it supersedes branch policy and all non-overridden writes are blocked
Override Request, Approval, Time-Bound Access, and Auto-Reversion
Given a section is Locked with policy=Request-Override and an Estimator requests an override for estimate E with rationale and duration=30 minutes When an Approver (BranchManager or higher) reviews the request Then approval requires MFA confirmation and records approverId, reason, maxDuration policy check, and scope (sectionKey, estimateId, requesterId) And upon approval, the requester can edit only the approved section on estimate E for the approved duration; all other locked sections remain blocked And the UI shows a countdown banner with remaining time; the API includes X-Override-Expires-At header on successful writes When the duration expires or the estimate is marked Finalized (whichever comes first) Then editing is immediately blocked without user refresh and the lock state re-applies automatically And any in-progress unsaved edits are discarded with a clear warning, and no further writes succeed And if the request is denied, both UI and API return OVERRIDE_DENIED with denial rationale
Lock Indicators and Rationale Visibility
Given a user views an estimate containing locked sections When a locked section header is rendered Then a lock icon is displayed with tooltip text that shows policySource (Global|Branch), rationale, lastUpdatedBy, and link to view policy And sections with Request-Override show a visible Request Override CTA; sections with Read-Only hide the CTA And when an override is active, a prominent banner shows the section name, approver, and time remaining (mm:ss) And all lock indicators meet WCAG 2.1 AA contrast and have ARIA labels and keyboard focus order
Audit Trail for Locks, Overrides, and Approvals
Given any lock policy change, override request, approval/denial, expiry, or blocked write attempt occurs When the event is committed Then an immutable audit record is stored with timestamp (UTC), actorId, role, branchId, estimateId, sectionKey, actionType, outcome, rationale (if provided), and before/after policy snapshot (for policy changes) And the Audit API allows filtering by date range, branchId, sectionKey, and actionType, returning results with p95 latency <= 800 ms for up to 10k records And exporting the audit log to CSV produces a file whose checksum remains stable for identical queries And audit records are visible to the TemplateAdmin and BranchManager roles but not to the Estimator role
Branch Hierarchy & Propagation Rules
"As an admin, I want template updates to propagate through our branch hierarchy with controlled overrides so that each region gets the right configuration."
Description

Model organizational hierarchy (company → region → branch) with inheritance and scoped overrides. Let admins target rollouts to levels with options to include/exclude children, require local acknowledgment, or force apply. Support regional variables (e.g., tax rates, material availability) as parameters separate from locked content. Define conflict resolution precedence and guardrails to prevent local overrides of locked sections. Visibility tools show effective template at each branch and pending changes.

Acceptance Criteria
Hierarchy Modeling and Inheritance
Given a company with Region A and Branch A1 linked in a company → region → branch hierarchy And a company-level template T v1 exists And Region A defines an override for an unlocked field F in T When viewing the effective template at Branch A1 Then Branch A1 shows T v1 with Region A’s override for F applied And any locked sections from T v1 are not editable at Region A or Branch A1 And removing Region A’s override reverts Branch A1 to company defaults for F
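One way to read the inheritance rules is as layered merging: start from company defaults, then apply region and branch overrides in order, never touching locked fields. A sketch under those assumptions, with illustrative names:

  // Illustrative effective-template resolution for one branch.
  interface Layer { overrides: Record<string, unknown> }

  function effectiveTemplate(
    companyDefaults: Record<string, unknown>,
    lockedFields: Set<string>,
    layers: Layer[], // ordered: region first, then branch
  ): Record<string, unknown> {
    const result = { ...companyDefaults };
    for (const layer of layers) {
      for (const [field, value] of Object.entries(layer.overrides)) {
        if (!lockedFields.has(field)) result[field] = value; // locked fields are never overridden
      }
    }
    return result;
  }

Deleting a region override removes it from that layer, so the next resolution naturally reverts the branch to company defaults for that field, matching the criteria.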
Targeted Rollouts With Include or Exclude Children
Given template T v2 is scheduled for rollout targeting Region A with Include Children enabled and Branch A3 excluded When the scheduled rollout time is reached Then T v2 is applied to Region A and all its child branches except Branch A3 And Branch A3 remains on its prior effective version And other regions and branches outside Region A are unchanged And an audit log entry records the rollout scope, includes, excludes, timestamp, and actor
Require Local Acknowledgment Before Apply
Given template T v3 is targeted to Region B with Require Acknowledgment enabled And Branch B2 has not acknowledged the update When the scheduled time passes Then Branch B2 remains on its prior effective version and is flagged Pending Acknowledgment When a Branch B2 admin acknowledges in the UI Then T v3 is applied to Branch B2 immediately And the acknowledgment user, time, and version applied are recorded in the audit log
Force Apply Rollout Behavior
Given template T v4 is targeted to Company with Force Apply enabled And some branches have local overrides only on unlocked fields When the scheduled time passes Then T v4 is applied to all targeted scopes without requiring acknowledgment And locked sections in T v4 overwrite any conflicting local content And existing branch overrides on unlocked fields remain intact And an automatic change record shows which items were force-applied and which overrides were preserved
Locked Sections Guardrails
Given a section S in template T is marked Locked at the company level When a branch user attempts to edit, delete, or override S at a region or branch Then the action is blocked in the UI with an explanatory message And the API returns a 403 or validation error code with a lock reason And allowed edits to unlocked sections in the same template proceed without error And all blocked attempts are captured in the security audit log with user, time, and action
Regional Parameters Separate From Locked Content
Given Region C defines parameters tax_rate=8.25 and material_availability=[shingleA, shingleB] And template T v1 content is locked at the company level When Branch C1 generates an estimate using T v1 Then T v1 content is unchanged while the estimate uses Region C’s parameters for calculations and availability filtering When Region C updates tax_rate to 8.75 Then all child branches use 8.75 on subsequent estimates without changing the effective template version And parameter changes are tracked separately from template version history
Conflict Resolution Precedence and Visibility
Given precedence rules: for pushed content, Company takes precedence over Region and Region over Branch; Locked content overrides any lower-level override; and at the same level, a later effective rollout time supersedes an earlier one And multiple rollouts (v2 at Region D, v3 at Company) overlap for Branch D1 When computing the effective template for Branch D1 Then the engine deterministically selects the version and content per the precedence rules And the branch view displays the effective template version, source of each overridden field, and a diff of pending scheduled changes with effective dates And excluded nodes from any rollout retain their prior effective version
Audit Trail, Approvals, and Compliance Exports
"As a compliance lead, I want a complete audit trail of template changes and rollouts so that we can prove consistency and defend estimates."
Description

Capture a complete, immutable audit trail for template edits, approvals, rollouts, rollbacks, migrations, and lock changes with who, when, what changed, and why. Provide configurable approval workflows (single/multi-step) with SLA timers and escalation. Generate exportable logs (CSV/JSON/PDF) and signed snapshots for insurer and partner audits. Surface per-bid provenance showing template version, migration actions, and approvers. Retain records per data retention policy with secure storage and access controls.

Acceptance Criteria
Immutable Audit Trail for Template Lifecycle Events
Given any template lifecycle event (edit, approval, rollout, rollback, migration, lock change) When the event is committed Then an audit record is written with actor ID, role, timestamp (UTC ISO 8601), event type, entity IDs, change summary, and stated reason Given an audit record exists When a user attempts to modify or delete it via UI or API Then the system prevents the change and logs a tamper-attempt event Given an audit export is generated When its checksum/signature is verified Then the hash matches and the chain integrity is intact Given many audit records (≤10,000) When queried with filters (date range, actor, event type, template ID) and pagination Then the first page returns in ≤2 seconds, ordered by timestamp ascending/descending per request Given multi-tenant operation When writing or querying audit records Then records are strictly scoped to the tenant and branch context
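Tamper evidence of the kind described is commonly implemented as a hash chain: each record stores the hash of its predecessor, so editing or deleting any record invalidates every later hash. A sketch assuming a Node.js runtime; the exact scheme is an assumption, not the product's confirmed design.

  import { createHash } from "node:crypto";

  interface AuditRecord {
    actorId: string;
    role: string;
    timestampUtc: string; // ISO 8601
    eventType: string;
    changeSummary: string;
    reason: string;
    prevHash: string;     // "" for the first record in the chain
    hash: string;         // SHA-256 over the record content including prevHash
  }

  function sealRecord(r: Omit<AuditRecord, "hash">): AuditRecord {
    // Sketch only: production code would use a canonical serialization
    // rather than relying on JSON key order.
    const hash = createHash("sha256").update(JSON.stringify(r)).digest("hex");
    return { ...r, hash };
  }

  // Verifying an export walks the chain; a single altered record breaks it.
  function verifyChain(records: AuditRecord[]): boolean {
    return records.every((r, i) => {
      const { hash, ...rest } = r;
      const expectedPrev = i === 0 ? "" : records[i - 1].hash;
      return rest.prevHash === expectedPrev &&
        hash === createHash("sha256").update(JSON.stringify(rest)).digest("hex");
    });
  }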
Configurable Approval Workflow with SLA and Escalations
Given a template change requires approval When it is submitted Then the system routes to configured approver groups supporting single- and multi-step sequences and records the submission in the audit trail Given an approval step has an SLA of N hours (configurable) When the SLA elapses without action Then the request escalates to the next configured approver, the submitter and admins are notified, and the escalation is logged Given an approver takes action When approving or rejecting Then the decision, comment (why), timestamp, and approver identity are recorded immutably Given a change is pending approval When a user attempts to edit locked sections Then the edit is blocked with an explanatory message and the attempt is logged Given notifications are enabled When an approval event occurs (submit, remind, escalate, approve, reject) Then email and in-app notifications are delivered within 1 minute to the targeted recipients Given override is enabled for a specific role When an authorized user overrides an approval Then a second-factor challenge is required and the override is logged with reason
Exportable Audit Logs and Signed Snapshots
Given an authorized auditor When exporting logs Then CSV, JSON, and PDF formats are available for a selected time range and filters, with a maximum generation time of 60 seconds for ≤50,000 records Given an export is generated When the file is inspected Then a metadata header includes tenant, generator, time window, filters, record count, and checksum/manifest ID Given a signed snapshot is requested for a template version or bid When generated Then the PDF includes the complete content and approval chain and is accompanied by a detached signature file; the PDF also embeds a visible signature stamp with snapshot ID Given signature verification tooling and the published public key When verifying a snapshot or export Then verification succeeds; if it fails, the UI/API returns a clear verification-failed status and reason Given role-based export permissions When an unauthorized user attempts an export Then access is denied and the attempt is logged Given an export is downloaded When audited Then an audit record captures who, when, scope, file type, and source IP
Per-Bid Provenance and Provenance in Deliverables
Given a bid is viewed in the application When the provenance panel is opened Then it displays template ID, template version, migration steps applied, approvers, and timestamps in read-only form Given a bid PDF is generated When reviewed Then it includes a provenance section with the same fields and the signed snapshot ID Given a bid has been migrated When the user opens the change diff Then the diff between prior and current template versions is displayed within the bid context in one click Given an API client retrieves a bid When calling the bid details endpoint Then provenance is included in the response per the published schema Given authorization constraints When a user without access attempts to view another branch’s bid provenance Then access is denied and logged
Data Retention, Legal Hold, and Secure Access
Given a tenant-configured retention period When records exceed the configured period and are not on legal hold Then they are purged using cryptographic erasure and a purge receipt is written to the audit trail Given a legal hold is applied to a scope When the retention period elapses Then records in scope are retained until the hold is removed and the hold is auditable Given role- and branch-based access controls When a user queries audit data Then results are limited to their tenant and branch scope Given storage and transport requirements When data is stored or transmitted Then it is encrypted at rest (AES-256) and in transit (TLS 1.2+) and encryption keys are rotated at least annually Given access monitoring When an unauthorized access attempt occurs Then access is denied, an alert is generated, and the attempt is logged
Auto-Migration of In-Flight Bids with Audit and Rollback
Given a template update is scheduled for rollout When the rollout executes Then in-flight bids are auto-migrated per rules, a change diff is generated for each affected bid, and bid owners receive a notification Given an auto-migrated bid When a rollback is initiated within the configured window Then the bid reverts to the prior template version, totals are recalculated consistently, and the rollback is recorded in the audit trail Given a migration encounters an error When processing a bid Then the bid is skipped, the error with cause is logged, and a retry action is surfaced to authorized users Given migration rules intersect locked sections When a conflict is detected Then locked sections remain unchanged and the conflict is recorded with details in the audit log Given a multi-branch rollout completes When the summary is requested Then a report shows counts of migrated, skipped, failed bids and total duration for the rollout

Approval Matrix

Configurable approver workflows driven by role, dollar thresholds, margin floors, and exception types. Auto‑route bids for sign‑off, set SLAs, and approve/decline from web or mobile with reason codes. Keeps deals moving without email ping‑pong while guaranteeing that discounts and scope changes get the right eyes before sending.

Requirements

Configurable Approval Rules Engine
"As an operations admin, I want to configure rule-based approvals by amount, margin, and exception type so that bids are automatically routed to the correct approvers."
Description

Provide an admin UI to define approval rules driven by roles, deal amount thresholds, gross margin floors, discount percentages, line‑item exceptions, customer segment, region, and job type. Rules support condition operators, priority order, effective dates, versioning, and sandbox testing. On bid creation or update, the engine evaluates the estimate data and produces the required approver list and sequence. Ensures consistent governance, reduces errors, and automates routing while integrating with the pricing/estimate models and organization settings.

Acceptance Criteria
Create and Save Rule With Multi-Field Conditions
Given an Admin user with Manage Approvals permission When they create a rule with conditions: Role equals "Sales Rep"; Deal Amount >= 50,000; Gross Margin <= 22%; Discount Percentage > 5%; Line-item exceptions contains any ["Custom Fabrication","Non-Standard Material"]; Customer Segment equals "Commercial"; Region equals "West"; Job Type in ["Reroof","TPO"]; Priority = 10; Effective Dates set from today to 90 days from today Then the rule is saved and listed with all conditions, operators, priority, and effective dates exactly as entered And invalid operators, missing operands, or conflicting fields prevent save with clear inline error messages
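The rule above can be pictured as a small data structure. A hedged TypeScript sketch of how such a rule might be stored; the operator list matches the criteria, while AND semantics across conditions is an assumption consistent with the example:

  type Operator =
    | ">=" | ">" | "<" | "<=" | "between" | "equals" | "not equals"
    | "in" | "not in" | "contains any" | "contains all";

  interface Condition {
    field: string;    // e.g. "dealAmount", "grossMargin", "region"
    operator: Operator;
    operand: unknown; // number, [lo, hi] for between, or an array for set operators
  }

  interface ApprovalRule {
    conditions: Condition[]; // all conditions must match (AND semantics assumed)
    approverRoles: string[];
    priority: number;        // lower value = earlier in the approver sequence
    effectiveFrom: string;   // ISO date, interpreted in the organization's time zone
    effectiveTo: string;
  }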
Supported Condition Operators Evaluation
Given a bid with Deal Amount = 100000, Gross Margin = 0.20, Discount Percentage = 0.08, Customer Segment = "Commercial", Region = "West", Job Type = "Reroof", and Line-item exceptions tags include ["Non-Standard Material"] When evaluated against rules using operators >=, >, <, <=, between, equals, not equals, in, not in, contains any, contains all Then the engine returns matches consistent with operator semantics And examples: "Deal Amount >= 100000" matches; "Gross Margin between 0.18 and 0.22" matches; "Customer Segment in ['Residential']" does not match; "Line-item exceptions contains any ['Custom Fabrication','Non-Standard Material']" matches; "Discount Percentage <= 0.05" does not match
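A minimal evaluator for those operators, reusing the Operator type from the sketch above; inclusive bounds for between follow the worked example (0.20 matches 0.18–0.22):

  function matches(value: unknown, op: Operator, operand: any): boolean {
    switch (op) {
      case ">=": return (value as number) >= operand;
      case ">":  return (value as number) > operand;
      case "<":  return (value as number) < operand;
      case "<=": return (value as number) <= operand;
      case "between": {
        const [lo, hi] = operand as [number, number];
        return (value as number) >= lo && (value as number) <= hi; // inclusive, per the example
      }
      case "equals":     return value === operand;
      case "not equals": return value !== operand;
      case "in":     return (operand as unknown[]).includes(value);
      case "not in": return !(operand as unknown[]).includes(value);
      case "contains any":
        return (operand as unknown[]).some(x => (value as unknown[]).includes(x));
      case "contains all":
        return (operand as unknown[]).every(x => (value as unknown[]).includes(x));
    }
  }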
Priority-Based Approver Sequence Generation
Given three active rules match a bid with priorities 1, 5, and 10 requiring approver roles ["Finance Director"], ["Regional Manager"], and ["Sales Manager"] respectively When the engine evaluates the bid Then the returned approver sequence is ["Finance Director","Regional Manager","Sales Manager"] ordered by ascending priority And duplicate roles are de-duplicated, keeping the highest-priority position And each approver entry includes role identifier and sequence index starting at 1
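Sequence generation as described is a sort plus de-duplication. A sketch with assumed names; it reproduces the worked example (priorities 1, 5, 10 yield sequence indexes 1, 2, 3):

  interface MatchedRule { priority: number; approverRoles: string[] }

  function approverSequence(matched: MatchedRule[]): { role: string; sequence: number }[] {
    const ordered = [...matched].sort((a, b) => a.priority - b.priority);
    const seen = new Set<string>();
    const out: { role: string; sequence: number }[] = [];
    for (const rule of ordered) {
      for (const role of rule.approverRoles) {
        if (!seen.has(role)) { // duplicates keep their highest-priority position
          seen.add(role);
          out.push({ role, sequence: out.length + 1 }); // sequence index starts at 1
        }
      }
    }
    return out;
  }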
Effective Dates Applied During Evaluation
Given Rule A effective from 2025-01-01 to 2025-03-31 in the organization's time zone and Rule B effective from 2025-04-01 to 2025-12-31 When a bid is evaluated on 2025-03-31 at 23:30 and on 2025-04-01 at 00:30 in the same time zone Then Rule A applies to the 2025-03-31 evaluation and Rule B applies to the 2025-04-01 evaluation And rules outside their effective window are not considered
Rule Set Versioning and Activation
Given Rule Set Version 1 is Active and Version 2 is Draft When the Admin activates Version 2 Then Version 1 transitions to Archived and Version 2 becomes Active And all subsequent evaluations use Version 2 without downtime And rule history shows timestamps, actor, and change summary for the activation event
Sandbox Test Evaluation Without Side Effects
Given an Admin opens Sandbox Test mode and provides a sample bid payload When they run evaluation Then the system displays matched rules, non-matching rules with reasons, and the generated approver sequence as a preview And no approvals, tasks, or notifications are created or sent in any environment
Automatic Re-evaluation on Bid Update
Given a bid has been evaluated and the required approver sequence stored When the user updates any governed field (deal amount, gross margin, discount percentage, line-item exception tags, customer segment, region, or job type) Then the engine re-evaluates within 5 seconds and produces an updated approver sequence And the system records that the required approval policy changed with a diff of previous vs new approver sequence
Exception Detection and Auto-Triggering
"As an estimator, I want the system to detect exceptions and auto-trigger approvals so that I don’t have to manually track policy violations."
Description

Continuously detect policy exceptions such as margin below floor, discount beyond threshold, non-standard SKUs, manual line‑item overrides, or scope changes after approval. Highlight exceptions on the bid, block send until required approvals complete, and automatically generate or refresh the approval path. Integrates with the estimate engine to compute margins and deltas, tracking changes across revisions to determine when re‑approval is necessary.

Acceptance Criteria
Margin Below Floor Triggers Approval and Blocks Send
Given a bid’s computed gross margin is below the configured floor When the estimate is saved or updated Then a "Margin Below Floor" exception is created and visibly highlighted on the bid with current margin and floor values And an approval path is generated per Approval Matrix and set to Pending And the Send action is disabled in the UI and the Send API rejects with HTTP 409 ApprovalsPending And the exception remains until all required approvals are recorded
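The block-send behavior generalizes across exception types: any uncleared exception disables the action and the API refuses with 409. A sketch under assumed names:

  interface BidException { type: string; status: "Pending" | "Cleared" }

  // Margin check from this scenario; other detectors follow the same pattern.
  function detectMarginException(margin: number, floor: number): BidException | null {
    return margin < floor ? { type: "Margin Below Floor", status: "Pending" } : null;
  }

  // Gate shared by Send in the UI and the Send API (HTTP 409 ApprovalsPending).
  function assertSendable(exceptions: BidException[]): void {
    const open = exceptions.filter(e => e.status !== "Cleared");
    if (open.length > 0) {
      throw Object.assign(new Error("ApprovalsPending"), {
        status: 409,
        blocking: open.map(e => e.type),
      });
    }
  }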
Discount Beyond Threshold Exception Detection
Given a user applies a global or line-item discount exceeding the configured threshold for their role When the estimate is saved Then a "Discount Threshold Exceeded" exception is created with computed discount % and threshold And an approval path is generated and set to Pending And the bid cannot be sent until approvals are complete And upon all approvals being captured, the exception status changes to Cleared and Send is re-enabled
Non-Standard SKU Addition Triggers Exception
Given one or more added SKUs are flagged as Non-Standard in the catalog When those SKUs are present on the estimate Then a "Non-Standard SKU" exception is created listing the SKUs and quantities And an approval path is generated and set to Pending And Send remains blocked until approvals complete And if all non-standard SKUs are removed, the exception is automatically cleared and Send is re-enabled
Manual Line-Item Override Detection and Routing
Given a user manually overrides unit cost, unit price, or margin on any line item When the estimate is saved Then a "Manual Override" exception is created capturing field names and before/after values And the estimate engine recomputes totals and margin, recording the margin delta on the exception And an approval path is generated and set to Pending And Send is blocked until approvals are complete
Scope Change After Approval Requires Re-Approval
Given a bid revision is Approved with no active exceptions When items are added/removed, quantities changed, or pricing updated resulting in a total or margin delta that meets configured re-approval thresholds Then prior approvals are marked Superseded and a new revision is created And exceptions are re-evaluated against the new revision and the approval path is refreshed And Send is blocked until the refreshed approvals are complete
Exception Visibility and Audit Across Revisions
Given a bid with one or more active exceptions and multiple revisions When viewing the bid in the UI or via API Then each active exception displays type, source rule, computed values (e.g., margin %, discount %), severity, and required approvers And an audit log records exception create/update/clear events with user, timestamp, revision ID, and rule ID And a filter "Show changes since last approval" limits the list to exceptions introduced after the latest approved revision
Approve/Decline with Reason Codes (Web & Mobile)
"As a regional director, I want to approve or decline bids with reason codes from my phone so that I can keep deals moving while traveling."
Description

Enable approvers to approve, decline, or request changes from both web and mobile experiences with mandatory reason codes, optional comments, and attachments. Provide deep‑linked notifications to the exact approval item, display key bid context, and enforce justification collection for declines or overrides. Actions update bid status in real time, notify submitters, and are securely recorded for audit. Supports SSO auth and role‑based permissions.

Acceptance Criteria
Approve on Web with Mandatory Reason Code and Real‑time Updates
Given I am authenticated via SSO as an approver with role-based permission for the bid’s thresholds on web And I open a pending approval item from the approvals queue Then the approval view shows bid ID, customer name, property address, bid total, gross margin %, and any exception flags When I select a required reason code and choose Approve Then the bid status changes to Approved within 5 seconds and is reflected in the submitter’s bid view and the approvals queue And the submitter receives a notification containing a deep link to the bid within 5 seconds And an audit record is created capturing action=approve, timestamp (UTC), actor, role, device=web, reason code, previous status, new status, and pre/post margin values
Decline on Mobile with Reason Code, Optional Comment, and Attachment
Given I am authenticated via SSO on mobile with permission to act on the pending bid And I open the approval item from a deep-linked notification When I choose Decline And I select a required reason code And I optionally add a comment and one or more attachments Then submission is blocked until a reason code is selected And upon submit, the bid status updates to Declined within 5 seconds and the submitter is notified with a deep link And an audit record captures action=decline, reason code, comment (if any), attachment metadata, device=mobile, previous/new status, and timestamp (UTC)
Request Changes with Justification on Web or Mobile
Given I am authenticated via SSO and have permission to act on the pending bid on web or mobile And I have opened the approval item (from queue or deep link) When I select Request Changes And I select a required reason code And I optionally add comments and attachments Then the bid status changes to Changes Requested within 5 seconds and is reflected in the approvals queue and submitter’s view And the submitter receives a notification with a deep link to the bid’s change request context And an audit record captures action=request_changes, reason code, comments, attachment metadata, device type, previous/new status, actor, role, and timestamp (UTC)
Permission and Threshold Override Enforcement
Given the bid is flagged with exceptions (e.g., margin floor or dollar threshold exceeded) And I am authenticated via SSO When I lack override permission for the relevant role/threshold Then the Approve action is disabled or results in a 403 with a clear message indicating insufficient permission and no state change occurs When I have override permission and choose Approve And I select a required reason code And I provide mandatory justification text (minimum 1 character) Then the approval succeeds, the override is recorded in approval metadata, and the audit log marks the action as an override with justification captured
Key Bid Context Display in Approval View (Web & Mobile)
Given I open any pending approval item on web or mobile Then the approval view includes at minimum: bid ID, customer name, property address, submitter name, bid total (with currency), gross margin %, and a list of triggered exception types And all displayed values match the latest persisted bid data at time of render And fields restricted by role-based permissions are hidden from users lacking access, with no leakage in the API response
Concurrency Handling and Idempotent Actions
Given two approvers have the same pending approval item open When both submit conflicting actions (e.g., Approve and Decline) Then only the first committed action changes the bid status; the second receives a stale-state message and sees the updated status without creating a duplicate audit record And repeated taps/clicks by the same user within a short interval do not create duplicate actions (idempotency enforced by a unique action/request ID) And the approvals queue and bid detail reflect the final status for all viewers within 5 seconds
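Both requirements, first-writer-wins and duplicate suppression, fall out of an idempotency key plus optimistic versioning. A hedged in-memory sketch; a real service would persist the key store:

  const processed = new Map<string, string>(); // requestId -> resulting bid status

  function applyAction(
    bid: { id: string; status: string; version: number },
    action: { requestId: string; decision: "Approved" | "Declined"; expectedVersion: number },
  ): { status: string; stale: boolean; duplicate: boolean } {
    const prior = processed.get(action.requestId);
    if (prior !== undefined) {
      return { status: prior, stale: false, duplicate: true }; // repeated tap: no second audit record
    }
    if (action.expectedVersion !== bid.version) {
      return { status: bid.status, stale: true, duplicate: false }; // lost the race: report current state
    }
    bid.status = action.decision;
    bid.version += 1;
    processed.set(action.requestId, bid.status);
    return { status: bid.status, stale: false, duplicate: false };
  }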
Audit Trail Completeness and Integrity
Given any approval action (Approve, Decline, Request Changes, Override) is performed When an authorized auditor views the bid’s audit log Then each action includes: action type, timestamp (UTC), actor ID and display name, role, device (web/mobile), previous status, new status, reason code, justification text (if provided), attachment metadata (filename, size, checksum), and source IP And audit records are immutable and tamper-evident; attempts to alter or delete are rejected and logged And authorized users can export the audit log with the same fields without exposing data to unauthorized roles
Multi-Step and Parallel Workflows
"As a sales manager, I want complex approval flows with conditional and parallel steps so that the right stakeholders sign off without delays."
Description

Support serial and parallel approval stages with flexible completion rules (any‑one‑of, all‑of, n‑of‑m) and conditional branching based on bid attributes. Prevent sending until all required stages finish, and require re‑approval when protected fields change. Allow step-specific instructions and visibility controls so participants see only what they need. Works with rule engine outputs to generate the precise path per bid.

Acceptance Criteria
Parallel Finance/Legal Approval with 2-of-3 Completion
Given a bid totaling $25,000+ with exception type "Contract Language" And a workflow with Stage 1: Sales Director (serial) and Stage 2: Finance, Legal, Ops (parallel, completion 2-of-3) When Stage 1 is approved Then Stage 2 creates three pending approvals and displays "2 of 3 required" And when any two approvers approve with reason codes, Stage 2 auto-completes And if the remaining approver later declines, the stage remains complete and the workflow does not reopen And the audit log records each action with user, timestamp, decision, and reason code
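The completion rules reduce to a threshold check, which also explains why a later decline cannot reopen the stage: once the threshold is met, the stage's completion is recorded and further decisions land only in the audit log. A sketch:

  type CompletionRule =
    | { kind: "any-one-of" }
    | { kind: "all-of" }
    | { kind: "n-of-m"; n: number };

  function stageComplete(rule: CompletionRule, approvals: number, totalApprovers: number): boolean {
    switch (rule.kind) {
      case "any-one-of": return approvals >= 1;
      case "all-of":     return approvals >= totalApprovers;
      case "n-of-m":     return approvals >= rule.n;
    }
  }

  // The scenario's Stage 2: stageComplete({ kind: "n-of-m", n: 2 }, 2, 3) === true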
Conditional Branching on Margin Threshold to Executive Review
Given the margin floor for Product X is 35% And a bid margin is 32% And the workflow includes a conditional branch to "Executive Review" when margin < floor When the preceding stage completes Then the engine evaluates the condition and inserts "Executive Review" as the next serial stage And if the margin is edited to >= 35% before entering "Executive Review", the branch is skipped And if the margin changes to < 35% after exiting "Executive Review", re-approval rules (not branching) govern any reopenings
Block Send Until All Required Stages Complete
Given the workflow contains required stages A and B When any required stage is pending or declined Then the "Send Bid" action in UI and API is blocked And API attempts return HTTP 409 with error code APPROVALS_INCOMPLETE and list of incomplete stages And the "Send Bid" button remains disabled with a tooltip listing blockers And when all required stages are complete, the action becomes available without page reload
Re-Approval Trigger on Protected Field Changes
Given the workflow has completed Finance Approval based on price total and margin And protected fields include price total, margin, line items, and scope When any protected field changes after Finance Approval Then Finance Approval resets to "Needs Re-approval" and notifies assigned approvers And unrelated stages (e.g., Legal) remain complete unless their protected fields were changed And the audit log links the change event to the reopened stage(s) with diffs of fields
Step Instructions and Scoped Visibility
Given Legal Approval has step instructions and visibility limited to contract T&Cs and damage map And Finance Approval has visibility to pricing and margin but not contract T&Cs When a Legal approver views the task Then the instructions render above the approval controls and required attachments are accessible And pricing and margin fields are hidden or redacted And when a Finance approver views the task, contract T&Cs are hidden and pricing/margin are visible And users not assigned to the stage cannot see the step or its contents
Rule Engine Generates Workflow Path at Bid Creation
Given rule engine inputs include bid amount, margin, exception types, and customer segment And rules define serial and parallel stages per those inputs When a bid is created Then the system generates a workflow graph matching the evaluated rules, including completion rules (any-one-of, all-of, n-of-m) And the generated path is displayed to the creator with stage order, parallel groups, and completion requirements And if inputs change before any approval is recorded, the path is re-evaluated and updated; after any approval is recorded, only additive changes are allowed and no completed stage is removed
SLA Timers, Reminders, and Escalations
"As an approver, I want time-based reminders and escalations so that approvals don’t stall and I know when items need action."
Description

Allow per‑step SLA definitions with business hours and holiday calendars, send timed reminders to approvers, and auto‑escalate to designated alternates or managers when deadlines are missed. Provide pause/resume controls when bids are on hold, and surface countdowns and overdue indicators in queues. Capture SLA compliance metrics for reporting and continuous improvement.

Acceptance Criteria
Per-Step SLA with Business Hours and Holiday Calendars
Given a workflow step with an SLA of 8 business hours using calendar "US-Standard (Mon–Fri 09:00–17:00, US Federal Holidays)" and timezone America/New_York When an approval request is created on 2025-07-03 at 16:00 local time Then the due time is set to 2025-07-07 16:00 America/New_York and stored as 2025-07-07 20:00 UTC Given an approval request created outside business hours on 2025-06-10 at 19:30 America/New_York When the SLA is 8 business hours on the same calendar Then the due time is 2025-06-11 17:00 America/New_York Given a step configured with holiday calendar "None" When an approval request is created on a date that is a holiday in other calendars Then the due time calculation does not skip that date
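The due-time arithmetic above is a walk-forward over business windows. A simplified sketch in local time (no time-zone or DST handling, which the later criteria add) that reproduces the first example: 16:00 on 2025-07-03 plus 8 business hours, skipping July 4 and the weekend, lands at 16:00 on 2025-07-07.

  const DAY_START = 9, DAY_END = 17; // Mon–Fri 09:00–17:00 calendar

  function addBusinessHours(start: Date, hours: number, holidays: Set<string>): Date {
    const t = new Date(start);
    let remainingMs = hours * 3_600_000;
    const isWorkday = (d: Date) =>
      d.getDay() >= 1 && d.getDay() <= 5 && !holidays.has(d.toISOString().slice(0, 10));
    while (remainingMs > 0) {
      const open = new Date(t); open.setHours(DAY_START, 0, 0, 0);
      const close = new Date(t); close.setHours(DAY_END, 0, 0, 0);
      if (!isWorkday(t) || t >= close) { // roll to the next day's opening
        t.setDate(t.getDate() + 1);
        t.setHours(DAY_START, 0, 0, 0);
        continue;
      }
      if (t < open) t.setTime(open.getTime()); // clamp early starts to opening time
      const used = Math.min(close.getTime() - t.getTime(), remainingMs);
      t.setTime(t.getTime() + used);
      remainingMs -= used;
    }
    return t;
  }

  // addBusinessHours(new Date("2025-07-03T16:00:00"), 8, new Set(["2025-07-04"]))
  // -> 2025-07-07T16:00:00 local time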
Timed Reminder Notifications to Approvers
Given a step with SLA due at 2025-06-11 17:00 America/New_York and reminder offsets at 4 hours and 1 hour before due When the request is created on 2025-06-11 09:00 Then reminders are queued for 2025-06-11 13:00 and 2025-06-11 16:00 via in-app and email notifications to the assigned approver Given an approval created on 2025-06-11 at 16:30 with an 8 business hour SLA When reminders are configured at 4 hours and 1 hour before due Then reminders are scheduled for 2025-06-12 12:30 and 2025-06-12 15:30 within the step’s business hours Given the approver takes action before a scheduled reminder fires When the action timestamp precedes the reminder time Then all pending reminders for that step are canceled and none are sent Given an approval escalates to an alternate before a reminder fires When escalation occurs Then pending reminders for the original approver are canceled and new reminders are scheduled for the new assignee based on remaining time Given reminder delivery workers process a burst of events When multiple schedules target the same minute Then only one notification per channel per reminder offset is sent (no duplicates) and all sends are logged with timestamps
Auto-Escalation on Missed SLA to Alternates or Managers
Given a step with SLA due at 2025-06-11 17:00 and a designated alternate When no action is taken by 17:00:00 Then the approval is reassigned to the alternate at 17:00:00, the alternate receives an escalation notification, the original approver is notified of reassignment, and the audit log records the escalation with timestamp and reason "SLA missed" Given a step with no alternate but a configured manager When the SLA is missed Then the approval reassigns to the manager and notifications/audit entries are created Given an approval already escalated to an alternate with a new SLA of 8 business hours When the alternate also misses their SLA Then the approval escalates to the manager and only one escalation occurs per level (no loops) Given an approval has been reassigned due to escalation When the original approver attempts to approve Then the system blocks the action with message "Approval reassigned" and logs the attempt
Pause and Resume SLA Timers When Bid Is On Hold
Given a pending approval with 5:00:00 remaining on its SLA When a user with Approvals Admin role clicks Pause at 2025-06-11 13:00 Then the SLA timer stops, remaining time 5:00:00 is persisted, all pending reminders are suspended, and the queue item displays "Paused" Given the same approval is resumed at 2025-06-12 09:00 within the step’s business hours When Resume is clicked Then the SLA timer restarts with 5:00:00 remaining, reminders are rescheduled accordingly, and the new due time is 2025-06-12 14:00 local time Given an approval step is completed When Pause is attempted Then the Pause control is disabled and no timer changes occur Given an approval is paused When the bid is canceled or withdrawn Then no reminders or escalations are sent and the final audit record shows paused=true with end state
Countdowns and Overdue Indicators in Approver Queues
Given an approver queue item with 1 hour 30 minutes remaining When the queue is displayed Then the countdown shows "1h 30m" and updates at least every 60 seconds without page refresh Given an item passes its due time by 12 minutes When the queue is displayed Then the item shows an overdue badge and the timer reads "Overdue by 12m" in red Given the user sorts the queue by SLA When the queue loads Then items sort ascending by due time, with Paused items grouped after active items and before completed items Given the item is paused When the queue is displayed Then the countdown is replaced with "Paused" and no overdue styling is applied Given the user timezone is America/Los_Angeles and the step calendar is America/New_York When due times are displayed Then the due timestamp is converted to the user’s timezone while the underlying calculation uses the calendar timezone
SLA Compliance Metrics Capture and Reporting
Given an approval lifecycle that includes pauses and an escalation When the approval is completed Then the system stores metrics: slaMet (boolean), businessMinutesElapsed, pausedMinutesTotal, remindersSentCount, escalationsCount, dueAtUtc, decidedAtUtc, and stepId Given an approval that missed SLA by 45 business minutes When metrics are saved Then slaMet=false and slaBreachMinutes=45 Given the reporting API endpoint /reports/sla with date filters 2025-06-01..2025-06-30 When the endpoint is called Then results include stepId, dueAtUtc, decidedAtUtc, businessMinutesElapsed, pausedMinutesTotal, slaMet, slaBreachMinutes, remindersSentCount, escalationsCount, and approverRole Given the SLA dashboard is filtered by workflow step and date range When the charts render Then SLA compliance rate, average business time to decision, and breach count match the API aggregate values within 0.1%
Time Zone and Calendar Consistency (Including DST)
Given a step uses calendar Europe/Berlin (Mon–Fri 09:00–17:00) and an approver is in America/Chicago When an approval is created on 2025-03-28 at 16:30 Europe/Berlin with an SLA of 4 business hours Then the due time is 2025-03-31 12:30 Europe/Berlin and displays to the approver as 2025-03-31 05:30 America/Chicago Given the period spans the Europe/Berlin DST start on 2025-03-30 (Sunday) When calculating due time for the Monday business window Then the due time remains 12:30 Europe/Berlin and is not offset incorrectly by DST transition Given reminders are scheduled relative to due time for this step When users in different time zones view reminder timestamps Then reminders are scheduled and sent according to the step calendar’s timezone and displayed localized to each user’s timezone
Audit Trail and PDF Embedding
"As a compliance officer, I want a complete approval audit trail embedded in the bid package so that we can resolve disputes and meet audit requirements."
Description

Maintain an immutable audit log of approval events including user, role, timestamp, decision, reason code, and comments. Expose the log within the bid, export it as a report, and embed a summary section into the final PDF bid package. Link each approval to the exact bid version to show what changed between submissions. Enforce retention policies and provide searchable history for compliance and dispute resolution.

Acceptance Criteria
Immutable Approval Event Logging
Given a bid subject to Approval Matrix and a user submits an Approve/Decline with a reason code and optional comments When the action is committed Then an audit event is appended with: eventId (UUID), bidId, bidVersionId, userId, userDisplayName, userRole, decision (Approve|Decline|RequestChanges), reasonCode (from configured list), comments (<= 2000 chars), timestamp (UTC ISO-8601, ms precision), source (Web|Mobile|API), previousStatus, newStatus And the event is write-once: any attempt to edit or delete via UI or API returns HTTP 403 and no data changes occur And the event payload is checksum-hashed and validates against the stored hash And concurrent duplicate submissions with the same idempotency key within 60 seconds result in a single stored event And timestamps reflect server time, not client time
In-Bid Audit Trail Display
Given a bid detail page When a user with permission "View Approvals" opens the Audit Trail tab Then a table lists all approval events sorted by timestamp (desc) with columns: timestamp (localized to user TZ), user, role, decision, reason code, comments (truncated to 200 chars with full text on expand), bidVersionId And the table provides filters for date range, decision, role, reason code, user, bidVersionId, and keyword search on comments And pagination shows 50 rows per page with total count and supports jump to page And users without permission cannot access the tab; attempt returns 403 with no data leakage And initial load returns first page in ≤ 1.5s for up to 5,000 events
Audit Report Export
Given filtered audit trail results for a bid When the user clicks Export Then downloadable CSV and PDF files are generated containing exactly the filtered dataset with all required fields and column headers And file generation completes in ≤ 10s for up to 50,000 events; if > 10,000 events, an async export is queued and the user receives an email link on completion And filenames follow pattern: bid-<bidId>-audit-<timestampUTC>.{csv|pdf} And the PDF report includes cover metadata: bid number, property address, customer name, date range, generated-by (user and timestamp) And exports respect user permissions and tenant isolation; unauthorized export attempts are blocked (403) and logged
Embedded Approval Summary in Final Bid PDF
Given a bid is finalized for PDF package generation When the system generates the final PDF Then an "Approval Audit Summary" section is included before the signature page containing: current approval status, total approval event count, most recent decision with user/role/timestamp, list of approvers who acted with their decision and timestamp, latest reason code(s), and the bidVersionId being sent And the summary references the exact bid version included in the package And the PDF is flattened so the summary content is non-editable And a QR code or short URL is embedded that links to the secure online full audit trail (tokenized link expiring in 30 days) And total PDF generation time, including the summary, is ≤ 15s
Approval Event Version Linking and Change Diff
Given an approval event on a bid with multiple submissions When a user selects "View Changes" for that event Then a diff view compares the event's bidVersionId to the immediately previous submitted version showing: total price delta, margin delta, taxes delta, line items added/removed, quantity or unit price changes, and scope notes changes And each diff entry links to the corresponding bid section And if no prior version exists, the UI displays "Initial submission" with no diff And the diff is consistent with the version IDs stored on the event and reflects the numbers present in those versions And diff computation completes in ≤ 2s for up to 500 line items
Retention Policy Enforcement
Given tenant retention is configured to N years (default 7) When an audit event's age exceeds N years Then it is purged by a daily job and an aggregate purge record is appended to a retention ledger with counts and date range And purged events are no longer retrievable via UI or API And if a legal hold is applied to a bid or account, events under hold are excluded from purge until the hold is lifted And any change to retention settings is itself audited with user, timestamp, and old/new values And manual deletion attempts via UI or API are blocked (403) and logged
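Purge selection is a filter over age and hold status. A sketch with assumed names; cryptographic erasure and the purge receipt happen downstream of this selection:

  interface StoredAuditEvent { eventId: string; timestampUtc: string; bidId: string }

  function purgeCandidates(
    events: StoredAuditEvent[],
    retentionYears: number,  // tenant-configured, default 7
    heldBidIds: Set<string>, // scopes under legal hold
    now: Date = new Date(),
  ): StoredAuditEvent[] {
    const cutoff = new Date(now);
    cutoff.setFullYear(cutoff.getFullYear() - retentionYears);
    return events.filter(e =>
      new Date(e.timestampUtc) < cutoff && !heldBidIds.has(e.bidId));
  }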
Searchable Audit History
Given a user with permission opens Audit History search When they query by bidId, property address, customer name, user, role, decision, reason code, date range, bidVersionId, or comment keyword Then results return matching events across all accessible bids within the tenant And results include facets for quick filtering by decision and role And response time is ≤ 2s for indexed field queries and ≤ 5s for keyword searches across 1M events And access controls ensure users only see events for bids they are authorized to view And results can be exported using the same filters via the Export action
Delegation and Out-of-Office Rules
"As an approver, I want to delegate my approval authority when I’m unavailable so that bids continue to progress without bottlenecks."
Description

Allow approvers to set time‑bound delegates and out‑of‑office rules, with admin overrides for coverage. Auto‑route steps to delegates during the defined window, notify both primary and delegate, and record delegated decisions in the audit trail. Prevent self‑approval and enforce separation of duties where configured. Provide visibility into active delegations in the approval path preview.

Acceptance Criteria
Primary Approver Sets Time-Bound Delegate
Given an approver sets a delegate user and a start/end time window in their profile When an approval step is assigned to the approver during that window Then the step is automatically assigned to the delegate And the assignment displays "Delegated to <Delegate Name>" on the approval card And approval steps created outside the window assign to the primary approver And window calculations respect the approver’s configured time zone
Out-of-Office Rule Auto-Routes Approvals
Given an approver activates an out-of-office rule with a start and end time and a named delegate When an approval is triggered within the OOO window Then the system routes the approval to the delegate And if no action is taken by the delegate before the OOO end time, the item reassigns back to the primary approver And approvals triggered after OOO expiration assign to the primary approver
Admin Override Assigns Coverage for Absent Approver
Given an admin with the required permission sets a coverage override for an approver for a defined window with a coverage user When an approval step targets the approver during that window or when the approver is OOO without a delegate Then the step routes to the coverage user And all currently pending steps for the approver within the window are immediately re-assigned to the coverage user And upon window expiration the routing returns to normal
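Resolving the effective approver for a step can be read as a precedence check across active windows. In the sketch below, admin coverage is assumed to outrank a personal delegation when both are active; the criteria do not state that ordering explicitly.

  interface RoutingWindow { startUtc: number; endUtc: number; assignee: string }

  function effectiveAssignee(
    primary: string,
    stepCreatedUtc: number,
    adminCoverage: RoutingWindow | null,
    delegation: RoutingWindow | null,
  ): { assignee: string; reason: "Admin Override" | "Delegation" | "Primary" } {
    const active = (w: RoutingWindow | null) =>
      w !== null && stepCreatedUtc >= w.startUtc && stepCreatedUtc < w.endUtc;
    if (active(adminCoverage)) return { assignee: adminCoverage!.assignee, reason: "Admin Override" };
    if (active(delegation))    return { assignee: delegation!.assignee, reason: "Delegation" };
    return { assignee: primary, reason: "Primary" };
  }

The reason field feeds the notification and audit requirements in the scenarios that follow.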
Notifications Sent to Primary and Delegate
Given a step is delegated or routed due to OOO or admin override When the routing occurs Then both the primary approver and the delegate/coverage user receive notifications via in-app and email (if enabled) And notifications include the approval ID, due date/SLA, and the reason (Delegation, OOO, or Admin Override) And reminder notifications follow the existing SLA cadence while the item is with the delegate/coverage user
Audit Trail Records Delegated Decisions
Given a delegated or coverage-routed approval is acted on When the delegate/coverage user approves or declines and provides a reason code Then the audit trail records the actor as the delegate/coverage user, the primary approver they acted on behalf of, the reason code, timestamp, and channel (web or mobile) And the audit entry links to the delegation/OOO/override rule that enabled the routing And export and reporting views include these fields
Prevent Self-Approval and Enforce Separation of Duties
Given separation of duties rules are enabled for the workflow When a delegation, OOO, or admin override would assign an approval to the same user who created the bid, owns the opportunity, or is otherwise disallowed by SoD configuration Then the system prevents the assignment And it selects the next eligible approver per the approval matrix And if no eligible approver exists, the item is blocked with a clear error and an alert is sent to admins
Approval Path Preview Displays Active Delegations
Given a user opens the approval path preview for a bid When any step is subject to an active delegation, OOO rule, or admin override Then the preview clearly shows the effective approver as the delegate/coverage user with badges indicating Delegated/OOO/Admin Override and the time window And hovering/tapping reveals the primary approver and rule details And the preview updates in real time if a delegation starts, ends, or is revoked

Variance Bands

Define allowed ranges for waste factors, labor hours, line‑item quantities, and discounts by roof type and market. Soft warnings request justification; hard stops prevent risky submissions. Color‑coded guidance and inline suggestions help estimators stay compliant without slowing them down.

Requirements

Variance Band Administration Console
"As an operations admin, I want to configure allowed ranges by roof type and market so that estimators follow consistent, compliant standards."
Description

Provide an administrative interface to define and manage allowed ranges for waste factors, labor hours, line‑item quantities, and discounts. Support scoping by roof type, market, and customer segment; specify min/max thresholds, defaults, and enforcement type (soft warning vs hard stop). Include versioning with effective/expiration dates, preview of impacted SKUs/rules before publish, RBAC-controlled access, bulk import/export (CSV), change history, and rollback. Ensure validation of rule integrity (no overlapping effective windows without precedence), and surface a read-only preview of active rules to estimators.

Acceptance Criteria
Create and Publish Variance Band with Scope and Enforcement
Given I have "Variance Bands:Manage" permission and access the Administration Console When I create a variance band with scope = {roofType, market, customerSegment}, parameter = {wasteFactor|laborHours|lineItemQuantity|discount}, thresholds = {min, max}, default within [min,max], enforcementType = {Soft Warning|Hard Stop}, and effectiveStart (UTC) with optional effectiveEnd > effectiveStart Then the form validates required fields, numeric formats (percent fields 0.00–100.00 with up to 2 decimals; quantity/hours non-negative numbers), and prevents Save if invalid with inline error messages And when I Save as Draft, the band is stored with status = Draft and not applied to pricing/validation And when I Publish, the band is versioned (version += 1), status = Scheduled if effectiveStart in future or Active if effectiveStart <= now, and appears in the Active list
Validate Overlapping Effective Windows on Publish
Given there are existing Active or Scheduled bands for the same scope (roofType, market, customerSegment) and parameter When I attempt to Publish a new band whose effective window overlaps any existing band for that same scope and parameter without explicitly marking it to supersede the conflicted band(s) Then the system blocks Publish with an error listing the conflicting band IDs and their effective windows And when I either (a) set the existing band(s) effectiveEnd to be < new effectiveStart, or (b) mark the new band as Supersedes the conflicting band(s), Publish succeeds And overlapping windows across different scopes or different parameters are allowed
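The overlap rule is interval intersection scoped to the same (scope, parameter) pair, with an escape hatch for explicit supersession. A sketch with illustrative names:

  interface Band {
    id: string;
    scopeKey: string;            // e.g. `${roofType}|${market}|${customerSegment}`
    parameter: string;           // wasteFactor | laborHours | lineItemQuantity | discount
    effectiveStart: string;      // ISO 8601 UTC
    effectiveEnd: string | null; // null = open-ended
    supersedes?: string[];       // band IDs this publish explicitly replaces
  }

  function conflictingBands(candidate: Band, existing: Band[]): Band[] {
    const aStart = Date.parse(candidate.effectiveStart);
    const aEnd = candidate.effectiveEnd ? Date.parse(candidate.effectiveEnd) : Infinity;
    return existing.filter(b => {
      if (b.scopeKey !== candidate.scopeKey || b.parameter !== candidate.parameter) return false;
      if (candidate.supersedes?.includes(b.id)) return false; // explicitly superseded: allowed
      const bStart = Date.parse(b.effectiveStart);
      const bEnd = b.effectiveEnd ? Date.parse(b.effectiveEnd) : Infinity;
      return aStart <= bEnd && bStart <= aEnd; // windows intersect
    });
  }

Publish proceeds only when conflictingBands(...) returns an empty list; otherwise the returned IDs and windows populate the error described above.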
Preview Impacted SKUs and Rules Before Publish
Given I have a Draft variance band open for editing When I click "Preview Impact" Then a modal displays the list and count of impacted SKUs and estimator validation rules for the selected scope and parameter, with a side-by-side diff of current vs proposed thresholds and enforcement type And the modal shows the target effective window, and impact counts of zero are displayed explicitly rather than omitted And the Publish action is disabled until Preview Impact has been viewed in the current edit session
RBAC Access Controls for Administration Console
Given a user without "Variance Bands:Read" permission When they attempt to access the Variance Band Administration Console or its APIs Then access is denied (HTTP 403 for API; UI shows an access denied message) Given a user with "Variance Bands:Read" permission only When they view the console Then they can list and view bands but cannot create/edit/publish/import/export/rollback (controls disabled or hidden) Given a user with "Variance Bands:Manage" permission When they perform create/edit/publish/import/export/rollback Then the actions succeed and are audit-logged with user ID and timestamp
Bulk Import and Export of Variance Bands (CSV)
Given I have "Variance Bands:Manage" permission When I export variance bands (optionally filtered by scope and status) Then a CSV is downloaded within 10 seconds, UTF-8 encoded, comma-delimited, with headers: id,parameter,roofType,market,customerSegment,min,max,default,enforcementType,effectiveStartUTC,effectiveEndUTC,version,status When I import a CSV that matches the template Then the system validates every row (required fields, data types, value ranges, date formats ISO 8601 UTC) And if any row fails validation, the import is rejected with no partial writes and an error report CSV is provided listing rowNumber and errorMessage And a successful import creates Draft bands only; IDs are assigned on import; numeric precision and date constraints are enforced And maximum import size is 10,000 rows or 5 MB; larger files are rejected with an explanatory error
Change History and Rollback of Variance Bands
Given any create, update, publish, unpublish, or rollback operation occurs Then an immutable audit entry is recorded with timestamp (UTC), actor, action, and field-level diffs (old -> new), and an optional reason When I open the history for a band Then I can select a prior version and click Rollback And a new version is created copying that version’s values, with effectiveStart = now (UTC), effectiveEnd = null, and status set to Active if within window (otherwise Scheduled) And the previously Active version’s effectiveEnd is set to now (UTC) And the rollback is recorded in the audit log and visible in history/export
Read-Only Preview of Active Rules for Estimators
Given an estimator opens an estimate with a defined roofType, market, and customerSegment When the estimate screen loads Then a read-only panel lists the currently Active variance bands applicable to that job’s scope, showing parameter, min, max, default, enforcementType, and effective dates And the data matches the Admin Console’s Active records within a maximum cache delay of 5 minutes And no edit controls are present; links to administration are hidden for users without Manage permission
Context-aware Band Resolution Engine
"As an estimator, I want the system to automatically load the correct limits for my job context so that I don’t have to manually look up rules."
Description

Implement a rules resolution service that determines which variance bands apply to an estimate based on project context (roof type, market, insurer/program, date). Handle precedence, inheritance from global defaults, and graceful fallbacks when context data is incomplete. Normalize units and parameter types, compute derived limits (e.g., aggregate waste %), and cache resolved bands per estimate for performance. Re-resolve bands on context change and expose a deterministic explanation trace for transparency and debugging.

Acceptance Criteria
Resolve Bands by Full Context (Roof Type, Market, Insurer, Date)
Given a ruleset with R1: global default (effective 2024-01-01..9999-12-31) and R2: roof_type=Shingle, market=DFW, insurer=P123 (effective 2025-01-01..2025-12-31) And an estimate with roof_type=Shingle, market=DFW, insurer=P123, estimate_date=2025-09-04 When the engine resolves variance bands Then the resolved band source is rule_id="R2" And the resolved values exactly equal R2's configured bands And resolution is deterministic: 10 repeated resolutions return identical rule_id and values And date matching is inclusive of boundaries: estimate_date=2025-01-01 resolves to R2; estimate_date=2026-01-01 resolves to R1
Precedence and Inheritance from Global Defaults
Given rules R1: global default with waste_percent_max=10 and discount_percent_max=8 And rule R3: roof_type=Shingle, market=DFW with waste_percent_max=12 (discount_percent_max omitted) And an estimate with roof_type=Shingle, market=DFW, insurer missing, estimate_date=2025-05-10 When the engine resolves variance bands Then resolved.waste_percent_max=12 (from R3) And resolved.discount_percent_max=8 (inherited from R1) And no field explicitly set in R3 is overridden by R1 And precedence order is enforced: [roof+market+insurer] > [roof+market] > [roof] > [global] And tie-breaker within same specificity selects the rule with the most recent effective_start_date, then highest version
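Field-level inheritance plus precedence amounts to: order the matched rules from most to least specific and take the first value each field defines. A sketch reproducing the R1/R3 example (waste 12 from R3, discount 8 inherited from R1); names are illustrative:

  interface BandRule {
    id: string;
    specificity: number; // 3 = roof+market+insurer, 2 = roof+market, 1 = roof, 0 = global
    effectiveStart: string;
    version: number;
    fields: Partial<Record<string, number>>; // e.g. { waste_percent_max: 12 }
  }

  function resolveBands(matched: BandRule[], fieldNames: string[]) {
    const ordered = [...matched].sort((a, b) =>
      b.specificity - a.specificity ||
      Date.parse(b.effectiveStart) - Date.parse(a.effectiveStart) || // tie-breaker 1
      b.version - a.version);                                        // tie-breaker 2
    const resolved: Record<string, { value: number; sourceRuleId: string }> = {};
    for (const field of fieldNames) {
      for (const rule of ordered) {
        const v = rule.fields[field];
        if (v !== undefined) {
          resolved[field] = { value: v, sourceRuleId: rule.id }; // source feeds the explanation trace
          break;
        }
      }
    }
    return resolved;
  }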
Graceful Fallback on Missing or Unknown Context
Given rules exist for market=DFW and a global default And an estimate with roof_type=Tile (no roof-specific rule), market=DFW, insurer missing, estimate_date=2025-03-01 When the engine resolves variance bands Then a rule that omits roof_type but matches market=DFW is selected And missing fields are inherited from the global default And the response includes fallback=true and missing_dimensions=["roof_type","insurer"] And the engine returns HTTP 200 (or success code) and no exception is thrown
Unit Normalization and Type Coercion
Given rule R4 defines labor_hours_per_area_max=0.08 per_sqft and waste_percent_max=0.1 as a ratio And platform canonical units are area="square" (100 sqft), length="linear_foot", percent=0..100 with max 2 decimals When the engine resolves variance bands for an estimate using canonical units Then returned labor_hours_per_area_max=8.00 per_square And returned waste_percent_max=10.00 And numeric fields are returned as numbers (not strings) with scale <= 2 decimals And if a rule field uses an unsupported unit, the field is omitted with a note in the explanation trace and inherited from the next-precedence rule (if available)
Derived Limit: Aggregate Waste Percent
Given base_waste_percent_max=12, steep_pitch_adder_percent_max=3, complexity_adder_percent_max=2, and global_waste_percent_cap=15 in the resolved inputs When the engine computes derived limits Then aggregate_waste_percent_max = min(12+3+2, 15) = 15 And aggregate_waste_percent_max is included in the output as 15.00 And the explanation trace shows inputs, expression, and capped result
Deterministic Explanation Trace and Auditability
Given a resolved estimate id=E123 When requesting the explanation trace for E123 Then the trace is JSON containing: estimate_id, resolution_timestamp_utc, matched_rules (ordered), overrides (field, source_rule_id, old_value, new_value), inherited (field, parent_rule_id), unit_conversions (field, from_unit, to_unit, factor), derived_calculations (field, expression, inputs, result), fallback_info And repeated identical requests return byte-for-byte identical JSON except resolution_timestamp_utc And the trace contains no PII and the payload size is <= 50 KB for <= 10 matched rules
Caching and Re-resolution on Context Change
Given estimate id=E124 with resolved bands cached When resolving again within 24 hours without any context change Then cache_hit=true and end-to-end latency p95 <= 10 ms When any context field (roof_type, market, insurer/program, estimate_date) changes Then cache is invalidated, bands are re-resolved (cache_hit=false), and latency p95 <= 150 ms with 1000 rules configured And two concurrent context updates within 50 ms result in a single recomputation, last-write-wins, and cache_version increments by 1
Real-time Validation and Enforcement
"As an estimator, I want immediate feedback and blocking when my inputs are out of bounds so that I can correct issues before submitting."
Description

Validate estimator inputs against active variance bands on field change and on submission. Trigger soft warnings that allow continuation only after a justification is captured, and hard stops that block submission, PDF generation, and external exports until resolved. Support per-line-item and aggregate checks, clear inline messaging, and structured validation results consumable by UI and API clients. Ensure low-latency feedback, debounced evaluation, and resilient offline behavior, with queued validations syncing on reconnect.

Acceptance Criteria
Field Change Real-time Validation
Given active variance bands are loaded for the selected roof type and market And the estimator is editing a numeric field for a line item (e.g., waste %, labor hours, quantity) When the field value changes and the user stops typing for 300 ms (debounce window) Then validation executes and a result is returned to the UI within 500 ms end-to-end for 95% of events And while awaiting a result beyond 500 ms, a non-blocking "Validating..." indicator is shown And at most one validation request is in-flight per field per line item; newer edits cancel the prior request
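The debounce-and-cancel behavior above maps naturally onto a timer per field plus an AbortController per in-flight request. A client-side sketch, with placeholder endpoint path and UI hooks:

```typescript
const timers = new Map<string, ReturnType<typeof setTimeout>>();
const inflight = new Map<string, AbortController>();

declare function renderResult(fieldKey: string, result: unknown): void; // UI hook (assumed)
declare function showValidating(fieldKey: string): void;                // "Validating..." indicator

function onFieldChange(fieldKey: string, value: number): void {
  clearTimeout(timers.get(fieldKey)); // restart the 300 ms debounce window
  timers.set(fieldKey, setTimeout(() => validateField(fieldKey, value), 300));
}

async function validateField(fieldKey: string, value: number): Promise<void> {
  inflight.get(fieldKey)?.abort(); // at most one request in flight per field; newer edits cancel
  const controller = new AbortController();
  inflight.set(fieldKey, controller);
  // Non-blocking indicator if the round trip exceeds 500 ms.
  const slow = setTimeout(() => showValidating(fieldKey), 500);
  try {
    const res = await fetch("/api/estimates/validate-field", { // placeholder path
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ fieldKey, value }),
      signal: controller.signal,
    });
    renderResult(fieldKey, await res.json());
  } catch (err) {
    if ((err as Error).name !== "AbortError") throw err; // aborts mean a newer edit superseded this one
  } finally {
    clearTimeout(slow);
  }
}
```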
Soft Warning with Justification Gate
Given a value breaches a soft variance band but not a hard band When validation results are displayed Then the UI shows an amber soft-warning message including band name, expected range, and actual value And Submit/Continue remains disabled until the user selects a justification reason and optionally enters notes (notes min length 5 when provided) And the justification is required once per offending field per submission attempt unless the value returns within band And the saved estimate includes a justification object {bandId, fieldPath, severity:"soft", reasonCode, notes, userId, timestamp}
Hard Stop Enforcement on Submission and Exports
Given one or more hard variance band violations exist in the estimate When the user attempts to Submit, Generate PDF, or Export to an external system Then the action is blocked and a red error banner summarizes the count and locations of hard violations And deep links navigate to the first offending field And no PDF file or export payload is produced And the API submission endpoint responds 422 Unprocessable Entity with a validation array detailing each hard violation
Per-Line-Item and Aggregate Band Checks
Given variance bands are defined for both per-line-item and aggregate totals (e.g., total labor hours, total discount %) When the estimator edits any contributing field Then the system evaluates both the specific line-item band and the relevant aggregate band And violations are reported independently with distinct identifiers and messages And resolving all per-line-item issues does not clear aggregate violations unless the aggregate returns within band And aggregate calculations include or exclude items per band configuration flags (e.g., includeAccessories=true)
Inline Messaging and Color-Coded Guidance
Given a validation result for a field Then the input shows inline color coding: green=in-band, amber=soft warning, red=hard stop And helper text or tooltip includes expected range, actual value, and a suggested corrected value computed per band rule And clicking "Apply Suggestion" updates the field to the suggested value and re-triggers validation And accessibility is preserved: non-color indicators (icons/text) are provided and contrast meets WCAG 2.1 AA
Structured Validation Results for UI and API
Given any validation run completes Then results conform to schema: [{id, fieldPath, scope:"line|aggregate", severity:"info|soft|hard", code, message, expected:{min,max,unit}, actual:value, suggested:value?, bandId, itemId?, aggregateId?, ts}] And a root-level schemaVersion is included and incremented on breaking changes And unknown fields in the payload are ignored by clients without error And clients may request resultsOnly=true to receive only the validation array without the full estimate payload
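Typed out, the result schema above could look like the following; optionality follows the "?" markers in the criteria, and the response wrapper name is an assumption:

```typescript
type Severity = "info" | "soft" | "hard";

interface ValidationResult {
  id: string;
  fieldPath: string;
  scope: "line" | "aggregate";
  severity: Severity;
  code: string;
  message: string;
  expected: { min: number; max: number; unit: string };
  actual: number;
  suggested?: number;
  bandId: string;
  itemId?: string;
  aggregateId?: string;
  ts: string; // ISO timestamp (assumed)
}

interface ValidationResponse {
  schemaVersion: number; // incremented only on breaking changes
  results: ValidationResult[];
}
```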
Offline Mode with Queued Validations
Given the estimator is offline or the validation API is unreachable When the user edits fields or attempts to submit Then local rules execute where available and remote validations are queued with inputs and timestamps And the UI indicates "Offline - validation queued" and allows continued editing; hard stops are enforced using the last known rules cache And upon reconnect, queued validations are sent in order, results reconcile, and the UI updates; any hard violations immediately block submission/PDF/export And no data loss occurs; retries use exponential backoff with a maximum of 5 attempts per queued validation
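A minimal sketch of the retry policy for queued validations, assuming an in-memory queue (maximum 5 attempts with exponential backoff, delivered in order):

```typescript
interface QueuedValidation { inputs: unknown; queuedAt: string; attempts: number; }

const MAX_ATTEMPTS = 5;

async function flushQueue(queue: QueuedValidation[],
                          send: (q: QueuedValidation) => Promise<void>): Promise<void> {
  for (const item of queue) { // queued validations are sent in order on reconnect
    while (item.attempts < MAX_ATTEMPTS) {
      try {
        await send(item);
        break; // delivered; results reconcile in the UI
      } catch {
        item.attempts += 1;
        if (item.attempts >= MAX_ATTEMPTS) break; // give up and surface to the user
        // Exponential backoff: 1s, 2s, 4s, 8s between attempts
        await new Promise(r => setTimeout(r, 1000 * 2 ** (item.attempts - 1)));
      }
    }
  }
}
```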
Exception Justification and Approval Workflow
"As an estimator, I want to provide justification for acceptable variances and request approval when needed so that I can proceed without violating policy."
Description

Provide a guided flow to capture justifications for soft-warning variances, including required reason codes, free-text notes, and optional photo/document attachments. Enable configurable approval thresholds and routing to managers, with notifications, SLAs, and reminder escalations. Allow approvers to approve/deny with comments; unblock submission upon approval; and record all actions in an immutable log. Support mobile-friendly input, draft saving, and re-use of common justification templates.

Acceptance Criteria
Soft-Warning Variance Triggers Guided Justification Flow
Given an estimate contains one or more soft-warning variances, when the estimator attempts to submit, then a guided justification flow is presented before submission. Given the justification flow is open, when the estimator selects a variance, then a reason code selection from the configured list is required and free-text notes must meet the configured min/max length before proceeding. Given attachments are optional, when the estimator uploads photos/documents, then only configured file types are accepted and configured per-file and total size limits are enforced with visible progress and error messaging. Given required fields are incomplete for any variance, when the estimator attempts to continue, then the Continue/Submit action remains disabled and inline validation messages identify missing items per variance.
Configurable Approval Thresholds and Routing
Given an exception justification is submitted, when its metrics meet or exceed configured approval thresholds by variance amount/percentage, roof type, or market, then an approval request is created and routed to the configured approver(s) and level. Given multi-level approvals are configured, when a higher threshold applies, then approvals are required in sequence and downstream approvers are not notified until prior level approval is recorded. Given routing configuration is updated after submission, when an approval is in flight, then the approval path follows the snapshot captured at submission time.
Notifications, SLAs, and Reminder Escalations
Given an approval request is created, when notifications are enabled, then the assigned approver receives notifications via the configured channels immediately with a deep link to the request. Given an approver SLA duration and reminder cadence are configured, when a request remains pending beyond the SLA, then a reminder is sent at the configured cadence until the maximum reminders is reached. Given an escalation target is configured, when the maximum reminders is reached without action, then the request is escalated and the escalated approver is notified per configuration. Given a request is denied with changes requested, when returned to the estimator, then the approver SLA pauses and an estimator response SLA starts; upon resubmission, the approver SLA restarts.
Approver Decisions and Submission Unblocking
Given an approval request is pending, when an approver records an approval with optional comment, then the associated estimate submission is unblocked and can proceed. Given an approval request is pending, when an approver records a denial with required comment, then the estimator is notified with the comment and the submission remains blocked until a new justification is submitted and approved. Given multiple approvals are required, when any approver denies, then the request status becomes Denied and remaining approvals are canceled; when all required approvers approve, then the request status becomes Approved and the submission proceeds.
Immutable Audit Log of Exception Activity
Given any exception event occurs (create, edit, submit, approve, deny, comment, attach/remove, route, notify, escalate), when the event is committed, then an immutable log entry is appended with timestamp (UTC), actor ID, role, action, target (estimate/variance), prior/new status, reason code, notes hash, attachment checksums, and routing snapshot. Given users view the audit log, when they filter by estimate ID, variance ID, actor, action type, or date range, then matching entries are returned in chronological order and are read-only to all users. Given a log entry exists, when any user attempts to modify or delete it, then the system prevents the change and records the attempted modification as a separate security event.
Mobile-Friendly Input and Draft Saving
Given a mobile device is used, when the justification flow is opened, then all inputs render responsively, support device-native keyboard types, and support camera/photo picker for attachments without horizontal scrolling. Given partial justification data has been entered, when the estimator taps Save Draft or after the configured autosave interval elapses, then a draft is saved with last-edited timestamp and can be resumed from the estimate. Given a session is interrupted, when the estimator returns, then the latest draft content is restored; if a newer draft exists on another device, then the user is prompted to select which draft to keep.
Re-use of Justification Templates
Given justification templates exist, when the estimator selects a template, then the reason code and notes are prefilled and remain editable before submission. Given template scopes are configured (Personal, Team), when a user with permission creates or updates a template, then it is saved to the selected scope and becomes available in the template picker for users with access. Given a template is updated after it was used on a prior submission, when viewing the prior submission, then the stored justification values remain unchanged and do not reflect template updates.
Color-coded Guidance and Inline Suggestions
"As an estimator, I want clear visual cues and recommendations so that I can quickly choose compliant values without slowing down."
Description

Enhance the estimate UI with accessible color coding (green within band, amber near thresholds, red out-of-band) and tooltips showing allowed ranges and rationale. Offer inline suggestions derived from roof geometry, historical estimator performance, and market norms, with one-click apply and quick-revert. Ensure WCAG 2.1 AA compliance, keyboard navigation, and user preferences to toggle hints. Provide a compact summary panel listing current variances and suggested fixes.

Acceptance Criteria
Color Coding Reflects Variance Bands
Given an estimate with variance bands and warning tolerances configured by roof type and market When the user enters or edits a waste factor, labor hours, line‑item quantity, or discount value Then the field indicator updates within 250 ms as:
- Green when the value is within the configured band
- Amber when the value is within the configured warning tolerance of a band threshold
- Red when the value is outside the configured band
And the indicator state recalculates immediately upon configuration changes or unit changes And on Save/Submit: red items trigger a hard stop if the band is configured as hard‑stop; amber/red trigger a justification dialog if configured as soft‑warning
Accessible Tooltips with Ranges and Rationale
Given a field with a color indicator or help icon When the user hovers, focuses, or taps the indicator/help icon Then a tooltip appears within 150 ms showing:
- Allowed range (min/max) and warning tolerance in current units
- The rule source (roof type + market) and rationale text
- The current value and delta from nearest bound
And the tooltip is anchored without obscuring the input, dismissible via ESC or blur, and does not exceed the viewport And if configuration data is unavailable, the tooltip states that ranges are unavailable and why
Inline Suggestions Generation and Ranking
Given roof geometry is imported and historical + market data are available When a monitored field is amber or red, or the user opens suggestions Then the system displays up to 3 suggestions labeled by source (Geometry, Your history, Market norm) with confidence scores and brief explanations And suggestions are computed within 500 ms client‑perceived time And if any data source is unavailable, its suggestion is omitted with a note indicating the missing source And suggestions respect active variance bands and units
One‑Click Apply and Quick Revert
Given suggestions are visible for a field When the user clicks Apply on a suggestion Then the field value updates immediately without page reload, the color indicator recalculates, and an audit entry records user, timestamp, old/new values, and suggestion source And an Undo control is shown for at least 10 seconds (or until the next edit); clicking Undo restores the prior value and audit trail records the revert And Save/Submit validations re‑run after apply or revert
Accessibility and Keyboard Compliance (WCAG 2.1 AA)
Given the estimate UI is loaded When evaluated with axe‑core on views containing indicators, tooltips, suggestions, and the summary panel Then there are no serious or critical violations And color indicators meet contrast ≥ 4.5:1 against adjacent backgrounds and do not rely on color alone (include icon/shape/pattern) And all interactive elements are reachable and operable via keyboard (Tab/Shift+Tab, Arrow keys in lists, Enter/Space to activate, ESC to dismiss overlays) And focus order is logical and visible; tooltips open on focus and close on ESC/blur; elements have appropriate ARIA roles, names, and states; labels are programmatically associated
Summary Panel of Variances and Fixes
Given one or more monitored fields are amber or red When the user opens the Variance Summary panel Then it lists each variance with field name, current value, allowed range, color state, and the top suggestion with one‑click Apply And counts and totals update in real time as values change And clicking an item focuses the corresponding field in the form And the panel loads within 300 ms and supports filtering by severity (amber/red)
User Preferences to Toggle Hints
Given a signed‑in user opens Preferences or a hints toggle within an estimate When the user disables Inline Suggestions and/or Tooltips Then suggestions and tooltips are hidden immediately in the UI while mandatory warnings/hard‑stops remain active And the preference persists across sessions and devices for that user and market context And re‑enabling restores the features without reload
Compliance Audit Trail and Reporting
"As an operations manager, I want visibility into variances and approvals so that I can enforce policy and improve estimator training."
Description

Capture comprehensive event logs for variance evaluations, warnings, hard stops, justifications, approvals, and overrides with timestamps, users, rule versions, and context. Provide dashboards and filterable reports by market, roof type, user, and date; KPIs such as out-of-band rate, average approval time, and top offending line items; and export to CSV/PDF. Include data retention settings, PII controls, and the option to embed a compliance summary in generated bid PDFs and through the API.

Acceptance Criteria
Audit Event Logging for Variance Evaluations and Actions
Given a variance evaluation is performed on a bid When the system evaluates line items and applies variance band rules Then an event record is written for each evaluation and triggered outcome (warning, hard_stop) within 2 seconds of the action And each record includes: event_type, bid_id, line_item_id (nullable), market, roof_type, rule_id, rule_version, user_id (or 'system'), timestamp (ISO 8601 UTC), evaluated_value, prior_value (if applicable), threshold_min, threshold_max, decision, and a correlation_id shared by all events from the same evaluation And the event store is append-only: update and delete attempts are rejected and logged And the events are retrievable via UI and API with identical field values
Justification, Approval, and Override Linkage and Integrity
Given a soft warning requires user justification or a hard stop requires supervisor approval When the user submits a justification or an approver records an override decision Then a child record is created and linked to the originating event via correlation_id and event_id And justification records store the exact text entered and the submitting user_id and timestamp And approval/override records store approver_id, decision (approved|denied), approval_note (optional), and timestamp And bid submission after a hard stop succeeds only when an approved override record exists; otherwise submission is blocked and the block is logged
Filterable Compliance Dashboard and Reports
Given compliance events exist across multiple markets, roof types, users, and dates When a user applies filters by market(s), roof type(s), user(s), and date range Then the dashboard list and aggregates reflect only matching events and bids And filters combine with AND across dimensions and OR within multi-select values And results update within 2 seconds for datasets up to 10,000 rows And counts and aggregates match the equivalent API query results And the selected filters persist in the URL and are restored on page reload
KPI Calculations Accuracy
Given a known test dataset with labeled outcomes When the KPIs are computed for a specified date range and scope Then Out-of-band rate equals (number of bids with ≥1 out-of-band event) ÷ (number of bids evaluated) exactly (zero tolerance) And Average approval time equals the arithmetic mean of (approval_timestamp − first_override_request_timestamp) across approved overrides And Top offending line items are the top N line items ranked by count of out-of-band occurrences, breaking ties by most recent occurrence And KPI values in the UI match API responses and CSV/PDF exports for the same filters
Export Reports to CSV and PDF
Given a user has applied filters and column selections on the compliance report When the user requests Export CSV or Export PDF Then the exported file contains only the filtered rows and selected columns in the current sort order And each export includes a header with filter summary, report period, generation timestamp (in user timezone), total row count, and generating user And CSV conforms to RFC 4180 (comma-delimited, quoted as needed, UTF-8) and opens in Excel and Google Sheets without column corruption And PDF includes the KPIs, charts (if visible), and tabular detail with no truncated text and is generated within 60 seconds for up to 50,000 rows And the number of records in the file matches the UI and API for the same filter set
Compliance Summary Embedding in Bid PDFs and API
Given a bid is generated When Include compliance summary is set to true Then the bid PDF contains a Compliance Summary section with counts of warnings, hard stops, justifications, overrides, approver names, rule versions applied, and timestamps And when Include compliance summary is false the PDF omits this section And GET /api/bids/{bid_id}/compliance-summary returns JSON with the same data used in the PDF And the PDF and API values match the underlying event logs for that bid
Data Retention and PII Controls
Given a tenant-level retention period (in days) is configured When events exceed the retention period Then they are permanently deleted within 24 hours of crossing the boundary, and a system audit event records the deletion job run And future exports and dashboards exclude deleted data And PII controls allow administrators to toggle storage and display of PII fields (e.g., user name, email, IP) When a user without the Compliance PII View permission views the UI, API, or exports Then PII fields are masked, and access to unmasked PII is denied and logged
Variance Bands Public API and Webhooks
"As a partner developer, I want to programmatically manage and validate variance rules so that external tools stay in sync with RoofLens policies."
Description

Expose secure REST endpoints to CRUD variance bands, retrieve resolved bands for a given context, and validate external estimate payloads. Implement OAuth2 scopes, rate limiting, and idempotency. Publish webhooks for rule publish/update/expire events to keep partner systems synchronized. Provide versioned schemas, sandbox environment, and detailed error contracts aligned with internal validation messages.

Acceptance Criteria
Secure OAuth2 Scopes and Token Handling
Given a request without a Bearer token When any Variance Bands API endpoint is called Then the response is 401 Unauthorized with a WWW-Authenticate: Bearer header and error.code = "UNAUTHENTICATED" Given a token missing scope "variance_bands.read" When GET /v1/variance-bands is called Then the response is 403 Forbidden with error.code = "INSUFFICIENT_SCOPE" and error.required_scopes includes ["variance_bands.read"] Given a token with scope "variance_bands.write" When POST /v1/variance-bands is called with a valid body Then the response is 201 Created with Location header to the new resource and the resource persists Given an expired or invalid token When any endpoint is called Then the response is 401 Unauthorized with error.code in ["TOKEN_EXPIRED","TOKEN_INVALID"]
CRUD Variance Bands with Idempotency and Rate Limiting
Given a valid write-scoped token and header Idempotency-Key: K When POST /v1/variance-bands is retried with the same key K and identical body within 24h Then the same status code and response body are returned and only one resource exists Given a valid write-scoped token and header Idempotency-Key: K When POST /v1/variance-bands is retried with the same key K but a different body Then the response is 409 Conflict with error.code = "IDEMPOTENCY_KEY_BODY_MISMATCH" Given an existing variance band When PATCH /v1/variance-bands/{id} is called with a valid partial update Then the response is 200 OK and the updated fields are persisted Given a variance band referenced by active rules/contexts When DELETE /v1/variance-bands/{id} is called Then the response is 409 Conflict with error.code = "RESOURCE_IN_USE" and the band is not deleted Given a client exceeds the per-minute quota When additional requests are made to any endpoint Then the response is 429 Too Many Requests with RateLimit-Limit, RateLimit-Remaining, and RateLimit-Reset headers
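From the client side, the idempotency contract implies generating one key per logical create and reusing that same key across retries. A sketch, with the production base URL assumed and a simple rate-limit-aware retry loop:

```typescript
import { randomUUID } from "node:crypto";

async function createVarianceBand(body: object, token: string): Promise<Response> {
  const idempotencyKey = randomUUID(); // reuse the SAME key for every retry of this create
  for (let attempt = 0; attempt < 3; attempt++) {
    const res = await fetch("https://api.rooflens.com/v1/variance-bands", { // assumed base URL
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
        "Idempotency-Key": idempotencyKey,
      },
      body: JSON.stringify(body),
    });
    if (res.status === 429) {
      // Honor the server's rate-limit window before retrying
      const reset = Number(res.headers.get("RateLimit-Reset") ?? "1");
      await new Promise(r => setTimeout(r, reset * 1000));
      continue;
    }
    return res; // 201 on first success; a safe retry replays the same response
  }
  throw new Error("rate limited after 3 attempts");
}
```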
Resolve Effective Variance Bands by Context
Given query parameters roof_type, market, and effective_date When GET /v1/variance-bands:resolve is called Then the response is 200 OK with a resolved object where precedence = market override > roof_type default > global default and includes effective_from/effective_to Given no market-specific band exists for the effective_date When the resolve endpoint is called Then values fall back to the roof_type default, then to global defaults if roof_type default is absent Given any band whose effective_to < effective_date When resolving Then that band is excluded from consideration Given include=audit When resolving Then response includes source_ids, version, and published_at for each resolved field
Validate External Estimate Against Variance Bands
Given a valid estimate payload within all resolved variance bands When POST /v1/variance-bands:validate is called Then the response is 200 OK with result = "ok" and errors = [] Given one or more soft variance violations without provided justifications When validate is called Then the response is 422 Unprocessable Entity with error.code = "JUSTIFICATION_REQUIRED" and errors[*].pointer identifies each offending field Given soft variance violations with per-item justifications supplied in the request When validate is called Then the response is 200 OK with result = "ok_with_warnings" and warnings[*].severity = "soft" Given any hard variance violation When validate is called Then the response is 422 Unprocessable Entity with error.code = "HARD_BAND_EXCEEDED" and errors[*].message_key matches the internal validation key for the violated rule
Webhooks for Publish, Update, and Expire Events
Given a subscriber has registered a webhook with a shared secret When a variance band is published, updated, or expired Then a POST is delivered within 60 seconds with event.type in ["variance_band.published","variance_band.updated","variance_band.expired"], and headers X-RoofLens-Event-Id, X-RoofLens-Event-Schema: v1, and X-RoofLens-Signature (HMAC-SHA256 of raw body) Given the subscriber returns a non-2xx status When delivery is attempted Then the system retries at least 3 times with exponential backoff and stops retrying after a 2xx response or max attempts; 429 responses respect Retry-After Given duplicate deliveries of the same event id When the subscriber receives them Then the event.id is stable and can be used to deduplicate without data loss
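On the subscriber side, verifying X-RoofLens-Signature as stated (HMAC-SHA256 of the raw body) might look like this; hex digest encoding and the helper names are assumptions:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

function verifyWebhook(rawBody: Buffer, signatureHeader: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "utf8");
  const b = Buffer.from(signatureHeader, "utf8");
  // Constant-time compare; lengths must match first or timingSafeEqual throws
  return a.length === b.length && timingSafeEqual(a, b);
}

// Deduplicate on the stable event id before processing, since deliveries can repeat.
const seen = new Set<string>();
function shouldProcess(eventId: string): boolean {
  if (seen.has(eventId)) return false;
  seen.add(eventId);
  return true;
}
```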
Versioned Schemas and Sandbox Parity
Given the request includes Accept: application/vnd.rooflens.variance-bands.v1+json When any API endpoint is called Then the response conforms to the v1 schema; if an unsupported version is requested, the response is 406 Not Acceptable Given calls are made to https://sandbox.api.rooflens.com When the same endpoints are exercised Then responses conform to the same versioned schemas as production and use isolated data and rate limits Given a field is scheduled for deprecation in v1 When responses include the field Then Deprecation and Sunset headers are present with a documentation URL and timeline
Standardized Error Contracts and Correlation
Given any 4xx or 5xx response When an error is returned Then the body contains fields: code (stable machine code), message (human readable), details (array), pointer (JSON Pointer or field path), correlation_id, and source; and code/message_key match the internal validation catalog when applicable Given any request When a response is sent Then the response includes X-Request-Id and X-Correlation-Id headers; correlation_id is echoed in the error body on failures Given a client supplies Idempotency-Key When a response is returned Then X-Request-Id equals the provided Idempotency-Key for traceability
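Typed out, the error body described above could look like the following; field nesting and optionality are assumptions beyond what the criteria state:

```typescript
interface ApiErrorBody {
  code: string;               // stable machine code, e.g. "HARD_BAND_EXCEEDED"
  message: string;            // human readable
  message_key?: string;       // matches the internal validation catalog when applicable
  details: unknown[];
  pointer?: string;           // JSON Pointer or field path
  correlation_id: string;     // echoed from the X-Correlation-Id header
  source: string;
  required_scopes?: string[]; // present on 403 INSUFFICIENT_SCOPE responses
}
```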

Margin Locks

Live margin tracking with configurable floors and targets. When a bid dips below guardrails, the estimate locks sensitive fields and offers smart fixes—price adjustments, assembly swaps, or alternates—to bring profit back in range. Prevents over‑discounting while preserving deal velocity.

Requirements

Configurable Margin Guardrails
"As an account admin, I want to set margin floors and targets by job type and product so that every estimate adheres to our profitability policy."
Description

Enable administrators to define and manage margin floors and targets at multiple scopes (account-wide, customer segment, job type, region, product/assembly). Support effective-dated configurations, currency and tax considerations, and role-based visibility/enforcement. Provide a configuration UI with validation, import/export, and API endpoints for programmatic management. Changes propagate instantly to active estimates with safe fallbacks and audit logging.

Acceptance Criteria
Scope Hierarchy and Precedence Resolution
Rule: Precedence order is Product/Assembly > Job Type > Customer Segment > Region > Account-wide. Given multiple matching rules across scopes When computing the effective margin guardrail for an estimate or line item Then the highest-precedence matching rule is applied; if none match, the account-wide rule applies. Given a line item with a Product/Assembly rule and other lower-precedence matches When evaluating enforcement for that line item Then the Product/Assembly guardrails are used for that item regardless of lower-precedence matches. Given two matching rules within the same scope When computing the effective guardrail Then the rule with the latest effective start date wins; if tied, the most recently updated record wins. Given no rule exists for a higher-precedence scope When evaluating guardrails Then the system safely falls back to the next matching lower-precedence scope without error.
Effective-Dated Configurations and Instant Propagation
Given an admin creates a guardrail with a future start date When current UTC time is before the start date Then the rule is not active and does not affect estimates. Given an admin creates or updates a guardrail with a start date of now When the change is saved Then all active estimates recompute margins and guardrails within 5 seconds and reflect the new effective rule. Given an active estimate that becomes below floor due to a configuration change When propagation occurs Then the estimate indicates a guardrail breach and enforces locks per role policy without losing in-progress user input. Given a specific-scope rule expires or is disabled When recomputing guardrails Then the system applies the next applicable lower-precedence rule automatically (safe fallback) and records the change in audit logs.
Currency and Tax-Aware Margin Calculations
Given a job currency different from the account currency and guardrails defined at the account level When evaluating margin against floors/targets Then values are converted using the system exchange rate for the evaluation timestamp, rounded to the currency’s minor units. Given tax-inclusive pricing is enabled for a region When calculating margin for guardrail comparison Then margin is computed on net price excluding tax and compared to floors/targets. Given exchange rates are temporarily unavailable When evaluating guardrails Then enforcement downgrades to warning-only for affected estimates, a clear message is shown, and the event is logged for follow-up.
Role-Based Visibility and Enforcement
Given role permissions: Admin (configure + override), Manager (view + approve override), Sales Rep (view only) When a Sales Rep attempts to access the configuration UI or API write endpoints Then access is denied with 403 and no changes are applied. Given a Sales Rep edits an estimate that is below floor When trying to modify locked price-sensitive fields Then the action is blocked with a message referencing the applicable guardrail and user’s role. Given a Manager uses the approved override capability When overriding a below-floor estimate Then a reason note is required, scope is limited to the current estimate, and the override is recorded in audit logs. Given an Admin views an estimate When inspecting guardrails Then the Admin can see the effective rule and its source scope; Sales Reps see only the effective values, not underlying configuration details.
Configuration UI Validation and Safe Operations
Given a user enters floor and target values When saving Then both must be between 0 and 99.99 with up to 2 decimal places, and floor must be strictly less than target. Given a user defines a rule for a specific scope combination When saving Then effective start date and currency/tax basis are required; overlapping effective periods for the same exact scope are rejected with a clear error. Given a rule is in use by active estimates When attempting to delete Then the system requires disablement (soft-off) instead of hard delete, presents the number of affected estimates, and applies fallback automatically if disabled. Given client-side validation passes When saving Then server-side validation re-checks the same rules; on success, the new/updated rule ID is returned; on failure, field-level errors are returned without persisting partial data.
Bulk Import/Export and API Management
Given an Admin requests export When exporting guardrails Then the system provides CSV and JSON including id, scope fields, floors/targets, effective dates, currency, tax basis, status, updated_at, and updated_by. Given an Admin uploads a CSV/JSON import file When processing Then rows are validated; valid rows are applied idempotently using external_id or natural key, invalid rows are rejected with row-level errors; a summary (created/updated/rejected counts) is returned. Given concurrent updates via API When a client omits If-Match or uses a stale ETag Then the update is rejected with 409 Conflict and no changes are applied. Given API access with insufficient permissions When calling POST/PUT/PATCH/DELETE endpoints Then the request is rejected with 401/403 and no side effects occur. Given a client queries guardrails When calling GET with filters (scope, effective window, status) and pagination Then results are returned in under 2 seconds for up to 10k records with consistent ordering.
Audit Logging and Traceability
Given any create, update, disable, or delete of a guardrail When the action completes Then an immutable audit record is written capturing actor, timestamp, before/after values, scope, source (UI/API/import), and reason/comment if provided. Given guardrail changes affect active estimates When propagation occurs Then an estimate-level audit entry is recorded linking the estimate to the guardrail change IDs and indicating the enforcement outcome (ok, warning, locked). Given an auditor queries activity When filtering audit logs by date range, actor, scope, or estimate ID Then matching records are retrievable within 2 seconds and exportable to CSV; logs are retained for at least 24 months.
Real-time Margin Calculator
"As an estimator, I want instant margin feedback as I change quantities and prices so that I can keep the bid within targets without trial-and-error."
Description

Compute gross margin in real time at line, section, and total levels using current cost basis (materials, labor, equipment, waste, overhead, fees), applied taxes, and discounts. Update calculations within 200 ms of any edit, with debouncing to minimize flicker. Handle alternates, assemblies, and multi-currency rounding rules. Expose a breakdown panel, tooltips for formula transparency, and a stable API for margin metrics consumed by other components.

Acceptance Criteria
Real-time recalculation latency with debounced updates
Given an open estimate with visible line, section, and total margin widgets And a user editing a numeric field (price, quantity, or discount) on a line item When the user types continuously with keystroke intervals under 150 ms for at least 1 second Then margin values render no more than once every 200 ms during typing And final line, section, and total margin values render within 200 ms after the last keystroke And the rendered values equal the reference calculation for the same inputs
Correct gross margin math across all levels
Given a line with cost basis: materials 500, labor 300, equipment 50, waste 10, overhead 90, fees 20 (total cost 970) And a sell price of 1500 with a 10% line discount And an 8% tax applied to the discounted sell price When margin is calculated Then line gross margin amount = (1500 - 150) - 970 = 380.00 And line gross margin percent = 380.00 / (1500 - 150) = 28.1481% rounded to 28.15% And section and total gross margin percents are sell-price-weighted averages of included lines (excluding tax) And taxes are excluded from both margin amount and percent but included in totals display
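The same arithmetic as a checkable function; half-up rounding of the displayed percent is assumed, consistent with the rounding convention elsewhere in this section:

```typescript
interface CostBasis { materials: number; labor: number; equipment: number;
                      waste: number; overhead: number; fees: number; }

function lineGrossMargin(sell: number, discountPct: number, cost: CostBasis) {
  const sellAfterDiscount = sell * (1 - discountPct / 100);       // 1500 - 10% = 1350
  const totalCost = cost.materials + cost.labor + cost.equipment
                  + cost.waste + cost.overhead + cost.fees;       // = 970
  const marginAmount = sellAfterDiscount - totalCost;             // = 380.00
  const marginPercent = (marginAmount / sellAfterDiscount) * 100; // = 28.1481... -> 28.15%
  return { marginAmount, marginPercent: Math.round(marginPercent * 100) / 100 };
}

// Tax is applied to the discounted sell price for display totals, but is
// excluded from both margin amount and percent, per the criteria above.
lineGrossMargin(1500, 10, { materials: 500, labor: 300, equipment: 50,
                            waste: 10, overhead: 90, fees: 20 });
// -> { marginAmount: 380, marginPercent: 28.15 }
```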
Alternates and assemblies influence on margin
Given a line with two alternates (A and B) and assembly swap options When Alternate B is marked selected and Alternate A is inactive Then only Alternate B's cost basis and sell price contribute to section and total margins And toggling selection back to Alternate A updates all affected margins within 200 ms When the assembly on the active alternate is swapped Then the new assembly's cost basis fully replaces the prior assembly in all margin calculations with no residual costs
Multi-currency conversion and rounding consistency
Given project currency = EUR (minor unit 0.01, rounding mode = half-up) And cost basis maintained in USD with an exchange rate snapshot of 1.10 USD per EUR When margin is calculated for a line with USD costs and EUR sell price Then costs are converted to EUR using the snapshot, and all displayed amounts are rounded to 0.01 EUR using half-up And section and total amounts equal the sum of their rounded line amounts within ±0.01 EUR And repeated edits around rounding thresholds do not cause oscillating values or flicker
Breakdown panel and formula tooltips
Given the margin breakdown panel is opened When the user hovers or focuses a margin value Then a tooltip appears showing the formula (sell before tax - cost basis) / sell before tax with numeric inputs and labels for materials, labor, equipment, waste, overhead, fees, discounts, and taxes And the breakdown panel lists each component and its source and updates within 200 ms of any edit And the tooltip and panel are accessible via keyboard (focusable, ARIA labels) and dismiss with Esc
Margin metrics API contract and cadence
Given the v1 margin metrics API is enabled When a user edits any field affecting pricing or cost Then an event or response payload containing lineId, sectionId, currency, sellBeforeTax, discounts, taxRateOrAmount, costBasis breakdown, marginAmount, marginPercent, and timestamp is emitted within 200 ms of the final keystroke And the payload schema matches the published v1 contract (no breaking changes), with additive changes gated by versioning And contract tests validate payloads for lines with alternates, assemblies, and multi-currency
Discount precedence and tax application order
Given a line with a 10% line discount and an estimate-level 5% discount And an 8% tax rate When margin is calculated Then the line discount is applied first to the line sell price, then the estimate-level discount is applied to the subtotal And taxes are calculated after discounts and excluded from margin computation And margin amount and percent reflect the fully discounted sell price
Sensitive Field Locking
"As a sales rep, I want key pricing fields to auto-lock when I’m under the margin floor so that I don’t accidentally over-discount."
Description

Automatically lock configurable pricing fields (e.g., unit price, discount, markup) when the estimate margin falls below the defined floor. Visually indicate locked fields, preserve draft values, and provide contextual explanations with links to allowed actions. Respect role permissions for bypass and log all lock/unlock events. Persist lock state across sessions and ensure parity across web and mobile editors.

Acceptance Criteria
Auto-Lock Trigger When Margin Falls Below Floor
Given an estimate with a configured margin floor and sensitive fields (e.g., unit price, discount, markup) And the current margin is at or above the floor When a user change causes the recalculated margin to drop below the floor Then all configured sensitive fields become read-only within 1 second And their current values are preserved without modification And any attempted edit is blocked with an inline message: "Locked due to margin below floor"
Visual Indicators and Contextual Explanation on Locked Fields
Given sensitive fields are locked due to margin below floor Then each locked field displays a lock icon and disabled styling per the design system And focus or hover shows a tooltip: "Locked: Margin {current}% below floor {floor}%" And the tooltip includes a "View allowed actions" link And the lock state and message are announced to screen readers
Allowed Actions Panel Accessible From Lock Tooltip
Given a user clicks "View allowed actions" from a locked field tooltip Then an Allowed Actions panel opens within 300 ms And it lists only actions permitted by the user's role that do not directly edit locked fields And selecting an action deep-links to the relevant workflow without unlocking sensitive fields And if no actions are available, the panel shows "No actions available for your role" and a link to request approval
Role-Based Bypass Unlock
Given a user has the "MarginBypass" permission and the estimate is locked When the user selects "Unlock with bypass" Then the system requires an approval reason (minimum 10 characters) before proceeding And upon confirmation, sensitive fields become editable for that user session only And a persistent banner displays "Bypass active" with user, timestamp, and a "Disable bypass" control And all bypass actions are recorded in the audit log
Audit Logging of Lock and Unlock Events
Given any lock, unlock, bypass-on, bypass-off, or blocked-edit occurs Then an audit record is written with: estimate ID, user ID (or system), UTC timestamp, action type, current margin %, floor %, and affected fields And audit records are immutable and viewable in the estimate's Activity log And blocked edit attempts record the field name and control that initiated the attempt
Lock State Persistence Across Sessions and Platforms
Given fields are locked due to margin below floor When the estimate is reopened on web or mobile editors Then the same fields remain locked with identical messages and icons And raising the margin to the floor or above on one platform reflects on the other within 5 seconds after sync And offline mobile editing prevents edits to sensitive fields until sync confirms an unlock state
Automatic Unlock When Margin Recovers to Floor or Above
Given sensitive fields are locked due to margin below floor When changes increase the recalculated margin to be equal to or above the floor Then all previously locked fields become editable within 1 second And lock icons and tooltips are removed And an "Unlock due to margin recovery" audit event is recorded
Smart Fixes Recommendations
"As an estimator, I want actionable suggestions to bring my margin back into range so that I can correct bids quickly without deep manual analysis."
Description

Offer guided remediation options when margin breaches occur, including price adjustments (e.g., increase by X to hit target), assembly swaps to higher-margin equivalents, and presenting customer-facing alternates. Simulate outcomes before apply, display expected margin impact deltas, and allow one-click application with undo. Leverage catalog data and rules to ensure compatible substitutions. Track chosen suggestions and effectiveness for continuous improvement.

Acceptance Criteria
Smart Fixes Panel Triggered on Margin Breach
Given an estimate whose current gross margin is below the configured floor or target When the user opens the Smart Fixes panel Then the system generates recommendations within 2 seconds for estimates up to 200 line items And the panel displays categories: Price Adjustments, Assembly Swaps, and Alternates (if available) And each recommendation shows predicted margin delta in percentage (±0.1% precision) and currency, plus resulting margin if applied And if a category has no valid items, a "No compatible options" message with a reason code is shown
Price Adjustment Recommendation to Hit Target Margin
Given a configured target margin T% for the estimate And the current margin is below T% When Price Adjustment recommendations are generated Then the system calculates the minimal price change (absolute and %) needed to achieve ≥ T% margin after rounding rules And the recommendation specifies whether it adjusts global markup or specific line items And simulation shows updated totals and margin without persisting changes When the user clicks Apply on the recommendation Then the estimate updates in ≤ 1 second and the resulting margin is ≥ T% within ±0.1% tolerance And exactly the documented fields are modified And an Undo action is available to restore prior values
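The minimal price change falls out of the margin identity margin = (sell − cost) / sell, so sell must be at least cost / (1 − T). A sketch; rounding up to the currency minor unit is an assumption so the applied result never lands below target:

```typescript
function minSellForTargetMargin(totalCost: number, targetMarginPct: number): number {
  const t = targetMarginPct / 100;
  const raw = totalCost / (1 - t);
  return Math.ceil(raw * 100) / 100; // round UP so the margin never falls below T
}

function requiredPriceIncrease(currentSell: number, totalCost: number,
                               targetMarginPct: number) {
  const needed = minSellForTargetMargin(totalCost, targetMarginPct);
  const absolute = Math.max(0, needed - currentSell);
  return { absolute, percent: (absolute / currentSell) * 100 };
}

// Example: cost 970, target 35% -> sell >= 970 / 0.65 = 1492.31
```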
Assembly Swap Recommendations with Compatibility Rules
Given catalog-defined compatibility and substitution rules for assemblies And the estimate contains one or more assemblies eligible for higher-margin substitutes When Assembly Swap recommendations are generated Then each proposed substitute passes all catalog compatibility constraints (form factor, code, dimensions, region, vendor availability) And each recommendation shows cost/price changes and predicted margin delta (% and $) And up to the top 3 substitutes per assembly are listed, sorted by highest margin gain When a swap is simulated Then the previewed margin accounts for material, labor, waste, and cascading assemblies When the user clicks Apply on a swap Then the line items update to the substitute SKUs, pricing recalculates within 1 second, and an Undo restores the original assembly
Customer-Facing Alternates with Margin Impact
Given the estimate is below a guardrail margin When Alternates are generated Then each alternate is compatible per catalog rules and is flagged as Optional for customer selection And each alternate displays customer-facing name/description, price, and predicted impact on margin if accepted And the simulation shows current margin and potential margin with the alternate accepted When an alternate is applied to the estimate as optional Then the base estimate totals remain unchanged and the optional item is added correctly with its pricing And Undo removes the optional item and restores prior state
Simulation Preview Before Apply
Given one or more recommendations are selected When the user clicks Preview Then no estimate data is persisted And the preview displays diffs for: total price, gross margin (% and $), and affected lines And the simulation result matches the post-apply result within ±0.1% for margin and ±$0.01 for totals And the preview computes within 2 seconds for estimates up to 200 line items When the user clicks Cancel Then the estimate remains unchanged
One-Click Apply, Undo, and Audit Trail
Given a recommendation is visible When the user clicks Apply Then the change persists in ≤ 1 second with transactional integrity (all-or-nothing) And an Undo control is immediately available When the user clicks Undo Then the estimate reverts to the exact prior state (totals, margin, items) within 1 second And an audit record is stored with: recommendation type, before/after totals and margin, user, timestamp, and changed fields And the audit entry is visible in the estimate activity log
Effectiveness Tracking of Applied Recommendations
Given recommendations are applied on an estimate When the change is saved Then a tracking record is stored with: estimateId, recommendationId, type, predictedMarginDelta, achievedMarginDeltaAtApply, reachedTarget (boolean), userId, timestamp And the system aggregates adoption and effectiveness metrics by recommendation type for reporting When the estimate status changes to Finalized or Sent Then effectiveness records are updated with final achieved margin and reachedTarget status And data is retained for at least 12 months
Override and Approval Workflow
"As a sales rep, I want to request an exception with justification so that strategic deals can proceed with controlled discounts."
Description

Provide a request/approve mechanism for exceptions below margin floors, capturing reason codes and notes. Support tiered approvers based on deal size, customer segment, and variance from floor, with SLA timers, notifications, and mobile/email approvals. On approval, temporarily unlock relevant fields and allow bid progression; on rejection, maintain locks and suggest alternatives. Maintain a complete audit trail and integrate with CRM for approval status sync.

Acceptance Criteria
Submit Override Request Below Margin Floor
Given an estimate where the current gross margin is below the configured floor and Margin Locks have locked sensitive fields And the current user has permission to request overrides When the user selects Request Override, chooses a reason code from the active list, and enters notes of at least 10 characters Then the system creates an Override Request with status Pending and associates it to the bid And captures metadata: current margin %, floor %, variance % and $, deal size, customer segment, requester, and timestamp And enforces that only one active Override Request exists per bid And keeps all margin-locked fields locked until an approval is recorded And displays a persistent banner indicating Pending Approval with the request ID
Tiered Approver Routing by Deal Size, Segment, and Variance
Given an admin-configured routing matrix that maps deal size, customer segment, and variance bands to approver tiers and SLAs When an Override Request is created Then the system determines the required approver sequence per the matrix and assigns approvers to Tier 1..N And notifies the Tier 1 approver(s) immediately And prevents Tier k+1 review until Tier k has approved And if the routing matrix yields no match, routes to the default approver group And records the routing decision in the audit trail
SLA Timers, Reminders, and Escalation
Given a Pending Override Request with defined per-tier SLA durations When the request enters Tier 1 Then the system starts an SLA timer for that tier and displays a countdown in the bid UI And sends reminders at 50% and 90% of SLA elapsed to the responsible approver(s) And upon SLA breach, escalates to the configured escalation approver and marks the tier as Escalated And records all reminder, escalation, and breach timestamps in the audit trail
Mobile and Email Approvals with One-Click Actions
Given an assigned approver with a verified email and mobile device When the system sends the approval request Then the approver receives an email and mobile notification containing Approve and Reject actions secured by a time-bound token (minimum 30 minutes validity) And when the approver taps Approve, the request transitions to Approved for that tier, capturing approver, timestamp, and optional notes And when the approver taps Reject, notes are required and a rejection reason code must be selected before submission And the system confirms the action and updates the request status in real time for the requester
Approval Unlocks Relevant Fields and Allows Bid Progression
Given an Override Request has been fully approved for all required tiers When the approval is recorded Then the system temporarily unlocks only the fields locked by Margin Locks relevant to the variance, for the configured unlock window duration And allows the bid to progress to the next stage, export, and send actions And displays an Approved Override active indicator with remaining time And if changes during the window cause margin to fall below the approved variance or the window expires, the system re-locks fields and requires a new override And all field changes during the window are captured with before/after values in the audit trail
Rejection Maintains Locks and Suggests Alternatives
Given an Override Request is rejected at any tier When the rejection is recorded with reason and notes Then the bid remains locked by Margin Locks And the system generates at least three smart fix suggestions (e.g., price adjustments, assembly swaps, alternates) that bring margin back to within floor or target as configured And the requester can apply a suggestion or revise the estimate and submit a new override request And the rejected request is closed with status Rejected and reason recorded
Audit Trail and CRM Approval Status Sync
Given any state change to an Override Request (create, route, approve, reject, escalate, expire) When the change occurs Then the system writes an immutable audit record with user, role, timestamp, action, details, and affected fields And exposes the audit log in the bid’s history view and via API And syncs approval status, approver, timestamps, and reason codes to the connected CRM within 5 minutes And if the CRM sync fails, retries occur using exponential backoff for up to 24 hours and a visible warning appears in the bid with last attempt time
Bid Gating and Export Guardrails
"As a sales manager, I want bids that violate margin policy to be withheld from sending until approved so that we protect profitability consistently."
Description

Enforce margin compliance at key milestones by blocking finalization, PDF generation, and external system sync when margin is below floor without an approved override. Provide clear gating messages, read-only preview, and a path to request approval or apply smart fixes. Ensure enforcement via both UI and API to prevent circumvention. Optionally annotate generated PDFs with margin status metadata or watermark for internal use.

Acceptance Criteria
Block Bid Finalization Below Margin Floor (No Override)
Given a bid's calculated gross margin is below the configured floor and no override is approved When the user attempts to finalize or submit the bid Then finalization is blocked server-side And a gating message displays current margin %, required floor %, and the reason for blocking And a visible action to Request Override and a link to Smart Fixes are presented And the block is recorded in the audit log with user, timestamp, bid ID, and margin values
Prevent PDF Generation Below Floor; Show Gating Message and Read-Only Preview
Given margin < floor and no approved override When the user clicks Generate PDF Then the export request is rejected server-side and no file is generated And the UI opens a read-only estimate preview (no price edits) with a gating banner And actions offered are Request Override, Open Smart Fixes, and Cancel And the download/export button remains disabled until margin >= floor or override is approved And an audit entry records the blocked export attempt
Block External System Sync Below Floor (UI and API)
Given an external integration is configured and margin < floor with no override When the user triggers Sync in the UI or a client calls the sync API Then the operation is not queued or sent And the API responds 409 Conflict with code EXTERNAL_SYNC_BLOCKED and includes current_margin and margin_floor And the UI shows a gating message with the same code and guidance to request override or apply smart fixes And an audit entry records the blocked sync attempt
Override Approval Unblocks Gated Actions with Audit Trail
Given a user with approval permission approves a margin override for a specific bid with reason and expiration When the override is active Then finalize, PDF export, and external sync actions succeed even if margin < floor And all resulting actions are tagged with override_applied = true in the audit log, including approver, reason, and expiry And when the override expires or is revoked, gating is reinstated immediately
Smart Fix Application Restores Margin and Unlocks Actions
Given a bid is below the margin floor When the user applies a Smart Fix bundle that increases margin to at least the floor Then the system recalculates margin and updates the bid status to Compliant And finalize, PDF export, and external sync controls become enabled And the UI shows the new margin % in green with a confirmation toast And the audit log records the specific fixes applied and resulting margin And if margin remains below floor after fixes, all gated actions remain blocked
Server-Side Enforcement Prevents UI or Direct URL Circumvention
Given margin < floor and no approved override When a user attempts to access finalize, export, or sync endpoints directly via URL or manipulated client requests Then the server rejects the requests consistently with 409 responses and specific error codes (BID_FINALIZE_BLOCKED, PDF_EXPORT_BLOCKED, EXTERNAL_SYNC_BLOCKED) And no state changes, files, or external calls are produced And error responses include correlation_id for support and machine-readable fields: current_margin, margin_floor, requires_override = true
Optional PDF Margin Status Annotation and Watermark
Given the organization setting "Annotate PDFs with margin status" is enabled and a PDF export is permitted (margin >= floor or override approved) When a PDF is generated Then the PDF contains embedded metadata fields (xmp:margin_status, xmp:margin_percent) and a semi-transparent watermark reflecting status: Compliant, Below Floor - Override Approved, or Below Floor - Non-Compliant And the watermark is omitted when the setting is disabled And metadata is present in the file properties and retrievable via API And an audit entry records the annotation status
Margin Audit and Analytics
"As an operations leader, I want analytics on margin guardrail effectiveness so that I can refine policies and coach the team."
Description

Capture detailed telemetry on margin events, including frequency and duration of breaches, fixes applied, recovery rates, approvals, and user actions. Provide dashboards and exports segmented by rep, team, product, region, and time, with filters and KPI benchmarks. Support data governance (retention windows, PII minimization) and role-based access. Expose a data feed to BI tools for advanced analysis.

Acceptance Criteria
Telemetry Capture for Margin Events
- Given a margin breach occurs on an estimate, When the lock engages, Then a telemetry record is written with fields: event_id, tenant_id, org_id, user_id, role, estimate_id, bid_id, product_id(s), region, timestamp_start (UTC), event_type (breach|lock|fix|approval|override), breach_amount, floor_value, target_value, client_version, source (UI|API).
- Given the breach is resolved, When the lock disengages or a fix is applied, Then the same event is updated or a resolution record is appended with: timestamp_end (UTC), duration_ms, fix_type (price_adjustment|assembly_swap|alternate|scope_change|discount_removal|null), fix_delta_amount, fix_delta_margin_pp, recovery_status (recovered_to_floor|recovered_to_target|not_recovered).
- Given an approval flow is triggered, When an approval is submitted/decided, Then telemetry includes approval_id, approval_required (bool), approval_outcome (approved|rejected), approved_by_role, approval_turnaround_ms.
- Given high throughput (≥100 events/sec for 5 minutes), When events are ingested, Then ≥99.9% are persisted with end-to-end latency ≤2s and zero duplicates as verified by idempotency_key.
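A sketch of what the telemetry record above could look like as a typed schema. Field names mirror the criteria; the union types and the appendResolution helper are illustrative assumptions.

```typescript
// Field names below come from the criteria; the types and helper are assumed.
type EventType = "breach" | "lock" | "fix" | "approval" | "override";
type FixType =
  | "price_adjustment" | "assembly_swap" | "alternate"
  | "scope_change" | "discount_removal" | null;
type RecoveryStatus = "recovered_to_floor" | "recovered_to_target" | "not_recovered";

interface MarginTelemetryEvent {
  event_id: string;
  tenant_id: string;
  org_id: string;
  user_id: string;
  role: string;
  estimate_id: string;
  bid_id: string;
  product_ids: string[];
  region: string;
  timestamp_start: string; // UTC ISO-8601
  event_type: EventType;
  breach_amount: number;
  floor_value: number;
  target_value: number;
  client_version: string;
  source: "UI" | "API";
  idempotency_key: string; // used downstream to reject duplicates
  // Populated when the breach resolves:
  timestamp_end?: string;  // UTC ISO-8601
  duration_ms?: number;
  fix_type?: FixType;
  fix_delta_amount?: number;
  fix_delta_margin_pp?: number;
  recovery_status?: RecoveryStatus;
}

// Appends resolution fields to an open breach event.
function appendResolution(
  ev: MarginTelemetryEvent,
  endedAt: Date,
  fix: { type: FixType; deltaAmount: number; deltaMarginPp: number },
  status: RecoveryStatus,
): MarginTelemetryEvent {
  return {
    ...ev,
    timestamp_end: endedAt.toISOString(),
    duration_ms: endedAt.getTime() - Date.parse(ev.timestamp_start),
    fix_type: fix.type,
    fix_delta_amount: fix.deltaAmount,
    fix_delta_margin_pp: fix.deltaMarginPp,
    recovery_status: status,
  };
}
```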
Segmented Dashboards and KPI Benchmarks
- Given a manager opens the Margin Analytics dashboard, When filters for time range, rep, team, product, and region are applied, Then all widgets refresh within 2s and the URL reflects the filter state for sharing.
- Given KPI benchmarks are configured, When actuals exceed or underperform benchmarks, Then metrics visually indicate status (green/amber/red) and tooltips display benchmark values and variance.
- Given the selected filters, When metrics render, Then the dashboard shows at minimum: breach rate (% of estimates), average breach duration, median time to recovery, recovery rate to floor (%), recovery rate to target (%), average fixes per estimate, approval rate (%), approval turnaround time, and top fixes by share.
- Given a user clicks a chart datapoint, When drill-down is invoked, Then a detail table of underlying events appears with pagination, sortable columns, and counts consistent with the aggregate.
- Given no data matches the filters, When the page loads, Then a zero-state message is shown with no errors and an option to reset the filters.
Export and Download Functionality
- Given a user selects Export on any dashboard view, When CSV is chosen, Then the file includes only in-scope rows per current filters and columns: event_id, timestamp_start_utc, timestamp_end_utc, timestamp_start_local, timestamp_end_local, tenant_id, org_id, user_id (hashed unless role has View PII), team, rep, product_id, region, estimate_id, event_type, breach_amount, duration_ms, fix_type, recovery_status, approval_required, approval_outcome, client_version.
- Given the export size ≤250,000 rows, When the job runs, Then the download starts within 60s; if >250,000 rows, Then an async job is queued and completes within 5 minutes with a notification containing a secure download link that expires in 24 hours.
- Given an org timezone is configured, When exporting, Then timestamps are included in both UTC and org local time, and the file header documents the timezone.
- Given retention rules exclude older data, When exporting, Then the row count reflects only in-retention data and a footer notes the applied retention window and total rows.
Role-Based Access Control and Permissions
- Given a user with role Sales Rep, When viewing Margin Analytics, Then they can only see events tied to their own estimates, user identifiers are hashed, and approval comments are hidden.
- Given a user with role Team Manager, When viewing, Then they can see events for users in their team hierarchy; PII (names/emails) remains masked unless the user has the View PII permission.
- Given a user with role Admin or Analyst, When accessing analytics, Then they can view all segments, manage KPI benchmarks and retention settings, and configure BI data feeds.
- Given an unauthorized user attempts access, When hitting the endpoint or UI route, Then a 403 is returned and UI entry points are not rendered.
- Given a permission change is saved, When audited, Then an audit log entry exists with actor_id, change_summary, timestamp_utc, and previous vs new values.
Data Governance and Retention
- Given an organization sets a retention window (e.g., 365 days), When data ages beyond the window, Then a nightly purge deletes out-of-window telemetry, and a purge report is stored with counts and timestamps; no soft-deleted rows remain queryable.
- Given PII minimization is enabled by default, When telemetry is stored, Then only user_id (UUID) and role are persisted; names/emails are excluded unless explicitly allowlisted and, if stored, are field-level encrypted.
- Given a data subject deletion request is processed, When executed, Then telemetry is anonymized by rekeying user_id to a non-reversible surrogate within 7 days and excluded from PII-enabled exports.
- Given compliance export is requested, When generated, Then it includes a data dictionary, retention policy version, and encryption status for sensitive fields.
BI Data Feed Integration
- Given an analyst creates a service token with scope margin_analytics.read, When using the API, Then they can query /v1/analytics/margin-events with pagination (limit up to 10,000), updated_after cursor, and receive next_cursor ensuring no duplicates and eventual consistency <60s.
- Given a warehouse sync is configured, When connecting to Snowflake or BigQuery, Then daily full snapshots and 15-minute incremental loads are delivered to read-only schemas; schema changes follow semantic versioning and breaking changes are dual-written for ≥30 days.
- Given feed health monitoring, When checked, Then a status endpoint exposes lag_seconds, last_success_at, and delivered_row_counts; alerts are sent if lag >900 seconds or if last_success_at >30 minutes ago.
- Given network or auth failure, When retries occur, Then exponential backoff with jitter is applied up to 5 attempts and failures are surfaced in an admin alert center.
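For the BI feed, a client might walk the cursor contract above roughly as follows. Only the path, limit, and updated_after/next_cursor parameters come from the criteria; the response envelope (items plus next_cursor) is an assumed shape.

```typescript
// Sketch of paging /v1/analytics/margin-events with an updated_after cursor.
// The FeedPage envelope is an assumption for illustration.
interface FeedPage<T> {
  items: T[];
  next_cursor: string | null;
}

async function* marginEvents(
  baseUrl: string,
  token: string,
  updatedAfter: string, // initial cursor, e.g. an ISO timestamp
  limit = 10_000,       // API allows up to 10,000 per page
): AsyncGenerator<unknown> {
  let cursor: string | null = updatedAfter;
  while (cursor !== null) {
    const url =
      `${baseUrl}/v1/analytics/margin-events` +
      `?limit=${limit}&updated_after=${encodeURIComponent(cursor)}`;
    const res = await fetch(url, { headers: { Authorization: `Bearer ${token}` } });
    if (!res.ok) throw new Error(`feed error: ${res.status}`);
    const page = (await res.json()) as FeedPage<unknown>;
    yield* page.items;
    cursor = page.next_cursor; // null signals the end of the feed
  }
}
```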
KPI Definitions and Calculations Accuracy
- Given a margin floor of 30% and target of 35%, When an estimate drops to 28% and is adjusted to 31%, Then recovery_status = recovered_to_floor and recovery metrics increment accordingly; if adjusted to 36%, Then recovery_status = recovered_to_target.
- Given overlapping breaches on a single estimate, When calculating duration, Then time is measured from breach start to resolution without double-counting; concurrent product-level breaches roll up to estimate-level metrics using max(duration) and weighted averages for margins.
- Given approval thresholds are configured (e.g., discounts >10% require approval), When events are processed, Then approval_required is set correctly and approval_turnaround = approved_at - requested_at.
- Given rounding and display rules, When rendering metrics, Then percentages are displayed to one decimal place while calculations use full precision; totals reconcile to the sum of parts within ±0.1 percentage points.
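The recovery classification in the first bullet reduces to a small comparison, sketched here with the worked 30%/35% example.

```typescript
// Recovery classification exactly as the worked example defines it:
// floor 30%, target 35%, the adjusted margin decides the status.
type RecoveryOutcome = "recovered_to_floor" | "recovered_to_target" | "not_recovered";

function classifyRecovery(
  adjustedMarginPct: number,
  floorPct: number,
  targetPct: number,
): RecoveryOutcome {
  if (adjustedMarginPct >= targetPct) return "recovered_to_target";
  if (adjustedMarginPct >= floorPct) return "recovered_to_floor";
  return "not_recovered";
}

console.log(classifyRecovery(31, 30, 35)); // "recovered_to_floor"
console.log(classifyRecovery(36, 30, 35)); // "recovered_to_target"
console.log(classifyRecovery(28, 30, 35)); // "not_recovered"
```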

Region Profiles

Market‑specific guardrails packaged into reusable profiles: permitted materials, code‑required adds, crew rates, taxes, and carrier preferences by ZIP/county. Auto‑apply on job creation so branches inherit the right rules by default—speeding setup and eliminating regional inconsistencies.

Requirements

Auto-Apply Region Profile on Job Creation
"As a branch estimator, I want the correct regional rules to apply automatically when I create a job so that I save setup time and avoid compliance mistakes."
Description

Automatically detect the job’s ZIP/county from the entered service address and apply the highest-precedence matching Region Profile at job creation. The applied profile sets default permitted materials, code-required adders, crew labor rates, taxes, waste/overhead rules, and carrier preferences for the job’s estimate. Implement deterministic precedence (job-specific override > branch default > ZIP > county > state > global) with clear fallback behavior when mappings are missing or ambiguous, including user alerts and a selectable resolution flow. Provide a “Reapply Profile” action that safely reapplies current profile rules while preserving user overrides according to configurable conflict policies (e.g., warn, merge, replace). Ensure idempotency, audit logging of applied profile and parameters, and compatibility with downstream estimate generation and PDF outputs.
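A compact sketch of the precedence walk this requirement describes. Only the ordering itself (job-specific override > branch default > ZIP > county > state > global) comes from the requirement; the candidate-lookup shape and the fallback heuristic are assumptions.

```typescript
// Sketch of deterministic profile resolution; shapes are assumed.
interface RegionProfile { id: string; name: string; }

type PrecedenceLevel =
  | "jobOverride" | "branchDefault" | "zip" | "county" | "state" | "global";

const PRECEDENCE: PrecedenceLevel[] =
  ["jobOverride", "branchDefault", "zip", "county", "state", "global"];

type ProfileCandidates = Partial<Record<PrecedenceLevel, RegionProfile>>;

interface Resolution {
  profile: RegionProfile;
  source: PrecedenceLevel; // shown in the job header per the criteria
  fellBack: boolean;       // drives the non-blocking fallback alert
}

function resolveProfile(c: ProfileCandidates): Resolution | null {
  for (const level of PRECEDENCE) {
    const profile = c[level];
    if (profile) {
      // Landing below the ZIP level without an explicit override or branch
      // default means a more specific mapping was missing: surface an alert.
      const fellBack = level === "county" || level === "state" || level === "global";
      return { profile, source: level, fellBack };
    }
  }
  return null; // no profile at any level: block estimates, prompt for selection
}

// Example: no ZIP mapping exists, so the county profile applies with an alert.
console.log(resolveProfile({ county: { id: "p-2", name: "Harris County Default" } }));
// { profile: {...}, source: "county", fellBack: true }
```

Because the walk is a pure function of its inputs, identical inputs always select the same profile, which is the determinism the criteria call for.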

Acceptance Criteria
Apply highest-precedence profile at job creation
Given a valid service address is entered and resolves to ZIP and county When the job is created Then the system selects the Region Profile using deterministic precedence: job-specific override > branch default > ZIP > county > state > global And the applied profile’s permitted materials, code-required adders, crew labor rates, taxes, waste/overhead rules, and carrier preferences populate the job defaults And the job header displays the applied profile name and its precedence source And for identical inputs, the same profile is selected consistently on repeated attempts
Fallback with clear alert when mapping is missing
Given no matching Region Profile exists at the resolved precedence level (e.g., no ZIP match) When the job is created Then the system applies the next available fallback in order: ZIP > county > state > global And a non-blocking alert on the job states that a fallback occurred and identifies the selected profile and missing level And if no profiles exist at any level, the system blocks estimate generation, prompts the user to select a Region Profile via a dialog, and allows proceeding only after selection
Ambiguity resolution when multiple profiles match at same precedence
Given the resolved ZIP or county maps to multiple Region Profiles at the same precedence level When the user creates the job Then a required resolution dialog lists candidate profiles with key attributes and no profile is auto-applied And the user must select one profile to proceed; cancellation blocks job creation with a validation message And upon confirmation, the selected profile is applied and the selection source is recorded as "User-resolved ambiguity"
Reapply Profile with Warn policy preserves overrides
Given a job has an applied Region Profile and user-edited values that conflict with the profile defaults, and the conflict policy is set to Warn When the user clicks "Reapply Profile" Then a diff modal lists each impacted field with current vs. profile values and defaults to preserving user overrides And when the user confirms with defaults, conflicting user overrides remain, non-conflicting fields refresh from the profile, and a success toast confirms preservation And invoking "Reapply Profile" again without further changes results in no value changes and no duplicate adders or taxes (idempotent)
Reapply Profile with Replace policy resets to profile defaults
Given a job has an applied Region Profile and user-edited values that conflict with the profile defaults, and the conflict policy is set to Replace When the user clicks "Reapply Profile" Then a confirmation modal warns that conflicting fields will be overwritten by profile defaults And when the user confirms, all conflicting fields are set to profile defaults, non-conflicting fields refresh, and no duplicate adders/taxes are created And invoking "Reapply Profile" again without further changes produces no additional modifications (idempotent)
Downstream estimate and PDF reflect applied profile
Given a job with an applied Region Profile When the user generates an estimate Then permitted materials are restricted to the profile’s list, code-required adders are included as line items, crew labor rates and taxes are applied to calculations, waste/overhead rules are used, and carrier preferences are reflected in formatting and nomenclature And when the user exports to PDF, the PDF shows the same materials restrictions, adders, rates, taxes, waste/overhead, and carrier preferences as the on-screen estimate
Audit logging and traceability of profile application
Given any Region Profile is applied or reapplied (including via fallback or ambiguity resolution) When the action completes Then an immutable audit entry is recorded with: job ID, timestamp, actor (user/system), applied profile name and version, mapping inputs (ZIP, county, state), precedence path taken, conflict policy used, fields changed, fields preserved, and outcome (applied, no-op, overwritten) And when an authorized user views the job audit trail, they can filter for "Region Profile" events and export them to CSV with all details
Region Profile Data Model & Hierarchy
"As a product admin, I want a structured profile model with a clear jurisdiction hierarchy so that regional rules are represented accurately and resolve overlaps predictably."
Description

Define a normalized Region Profile entity that captures jurisdiction scope (ZIPs, counties, states), effective start/end dates, semantic version, and rule bundles: permitted materials and substitutions, code-required line items with conditional triggers (e.g., pitch, deck type, climate zone), crew labor rates, tax schema, waste factors, overhead/profit, and carrier-specific preferences. Support inclusion/exclusion lists, inheritance from base templates, and precedence resolution for overlapping jurisdictions. Maintain authoritative identifiers (FIPS, USPS ZIP, state codes) and many-to-many mappings to support ZIPs spanning multiple counties. Expose the model via internal APIs with validation, referential integrity, and migration scripts for safe evolution.

Acceptance Criteria
Jurisdiction Scope with Authoritative IDs and ZIP-to-County Many-to-Many
Given a request to create a Region Profile with stateCodes, countyFips, and zipCodes using USPS and FIPS identifiers When POST /internal/region-profiles is invoked with a valid payload Then the profile is persisted with normalized scope tables and canonical identifiers stored (USPS state code, FIPS county code, USPS ZIP) Given ZIP 93561 maps to counties 06029 and 06107 When the profile is saved Then both county relationships are stored and retrievable via GET /internal/region-profiles/{id}/scope Given an invalid USPS state code, FIPS county code, or USPS ZIP in the payload When validation runs Then the API responds 422 with field-level errors and no records are written Given duplicate ZIP or county entries in the payload When saving the profile Then duplicates are de-duplicated and the stored scope is unique
Effective Dating and Semantic Versioning
Given a profile version v1.2.0 with effectiveStart=2025-01-01 and effectiveEnd=2025-07-01 (start inclusive, end exclusive) When resolving for date=2025-06-30 Then version v1.2.0 is returned Given the same profile key has another version whose effective window includes 2025-07-01 When resolving for date=2025-07-01 Then that version is returned Given two versions for the same profile key and jurisdiction have overlapping effective windows When attempting to save the later version Then the API rejects with 409 Conflict detailing the overlap range Given a version string not matching MAJOR.MINOR.PATCH When validation runs Then the API rejects with 422 and a semver format error Given effectiveEnd is null When resolving an active version Then the window is treated as open-ended
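Resolution against effective windows could look like the following sketch, reusing the v1.2.0 example above. The ProfileVersion shape is an assumption; the inclusive-start/exclusive-end semantics, open-ended null end, and semver format come from the criteria.

```typescript
// Sketch of semver validation and effective-window resolution:
// start inclusive, end exclusive, null end means open-ended.
interface ProfileVersion {
  version: string;             // MAJOR.MINOR.PATCH
  effectiveStart: string;      // ISO date, inclusive
  effectiveEnd: string | null; // ISO date, exclusive; null = open-ended
}

const SEMVER = /^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)$/;

function validateSemver(v: string): void {
  if (!SEMVER.test(v)) throw new Error(`422: "${v}" is not MAJOR.MINOR.PATCH`);
}

function resolveVersion(versions: ProfileVersion[], date: string): ProfileVersion | null {
  const matches = versions.filter(
    (v) =>
      v.effectiveStart <= date &&
      (v.effectiveEnd === null || date < v.effectiveEnd),
  );
  // Overlap rejection at save time (409) guarantees at most one match here.
  if (matches.length > 1) throw new Error("409: overlapping effective windows");
  return matches[0] ?? null;
}

const versions: ProfileVersion[] = [
  { version: "1.2.0", effectiveStart: "2025-01-01", effectiveEnd: "2025-07-01" },
  { version: "1.3.0", effectiveStart: "2025-07-01", effectiveEnd: null },
];
console.log(resolveVersion(versions, "2025-06-30")?.version); // "1.2.0"
console.log(resolveVersion(versions, "2025-07-01")?.version); // "1.3.0"
```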
Permitted Materials and Substitutions Validation
Given materialsAllowed is a list of catalog codes and substitutions is a map of fromCode->toCode When saving the profile Then all codes must exist in the master catalog and all substitution targets must also be in materialsAllowed; otherwise respond 422 Given substitutions form a cycle (e.g., A->B and B->A) When validation runs Then the API rejects with 422 citing cyclic substitution Given case-variant or duplicate material codes in materialsAllowed When saving Then codes are normalized case-insensitively and stored uniquely Given GET /internal/region-profiles/{id}/materials When called Then the response includes materialsAllowed and a directed acyclic graph of substitutions
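Because each material code substitutes to at most one target, the cycle check reduces to walking substitution chains. A sketch, using the A->B/B->A example from the criteria; the map shape is an assumption.

```typescript
// Sketch of the cyclic-substitution check (fromCode -> toCode), which would
// back a 422 rejection per the criteria above.
function findSubstitutionCycle(subs: Record<string, string>): string[] | null {
  for (const start of Object.keys(subs)) {
    const seen = new Set<string>([start]);
    const path = [start];
    let current: string | undefined = subs[start];
    while (current !== undefined) {
      path.push(current);
      if (seen.has(current)) return path; // revisited a node: cycle found
      seen.add(current);
      current = subs[current];
    }
  }
  return null; // acyclic: substitutions form valid chains
}

console.log(findSubstitutionCycle({ A: "B", B: "A" })); // ["A", "B", "A"]
console.log(findSubstitutionCycle({ A: "B", B: "C" })); // null
```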
Conditional Code-Required Line Items Evaluation
Given rules contain conditional triggers (e.g., pitch>=6/12, deckType=OSB, climateZone=3) and output line items with quantity formulas When POST /internal/rules/resolve is called with site={pitch:"7/12", deckType:"OSB", climateZone:3} Then the response includes the expected code-required line items once, with computed quantities and unit bases Given a required trigger variable is missing from the request When resolving Then the engine applies the rule default or excludes the rule per definition and returns a decision trace indicating the outcome Given two rules produce the same line item under different conditions that are both true When resolving Then the higher-priority rule is applied and the item appears once with the selected parameters Given an unknown trigger key or out-of-range value in the request When resolving Then the API responds 422 and no mutation to stored rules occurs
Tax Schema, Waste Factors, O&P, and Crew Rates Modeling
Given taxSchema includes taxJurisdictionType, taxableCategories, and rates with effective dating When saving Then required fields are present, rates are nonnegative, and taxableCategories reference valid item categories; otherwise respond 422 Given crewLaborRates define trade, rateType (hourly|unit), currency, and effective window When resolving for a date Then the correct rate for that date is returned and overlapping windows for the same trade are rejected on save with 409 Given wasteFactor, overheadPercent, and profitPercent are provided When saving Then values must be within [0,1] and stored to 4 decimal places; out-of-range values are rejected with 422 Given carrierPreferences override rounding rules or line-item naming When resolving for a specified carrier Then carrier-specific values are applied, falling back to defaults when unspecified
Inheritance and Precedence Resolution for Overlapping Jurisdictions
Given a base template T0, a state profile S inherits T0, a county profile C inherits S, and a ZIP profile Z inherits C When resolving the effective rule set for a job located in Z Then precedence is applied as ZIP > county > state > base template and the merged rule set is returned Given a rule is explicitly excluded in Z via an exclusion list When resolving Then that rule is absent even if present in lower-precedence sources Given conflicting scalar values (e.g., wasteFactor) across levels When resolving Then the highest-precedence value is selected; for lists, a set-union minus explicit exclusions is used; for maps, key-level overrides apply Given a ZIP maps to two counties (C1, C2) and no ZIP-level profile exists When resolving Then a deterministic precedence is applied using an explicit priority index; if priorities tie, publish is rejected with 409 until resolved
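The merge semantics above (highest-precedence scalar wins, lists union minus explicit exclusions, key-level map overrides) might be sketched like this; the RuleBundle shape and field names are assumptions for illustration.

```typescript
// Sketch of hierarchical rule-bundle merging; shapes are assumed.
interface RuleBundle {
  wasteFactor?: number;                        // scalar: override
  materialsAllowed?: string[];                 // list: union
  materialsExcluded?: string[];                // explicit exclusions
  carrierPreferences?: Record<string, string>; // map: key-level override
}

// Sources ordered lowest to highest precedence, e.g. [base, state, county, zip].
function mergeBundles(sources: RuleBundle[]): RuleBundle {
  const merged: Required<RuleBundle> = {
    wasteFactor: 0,
    materialsAllowed: [],
    materialsExcluded: [],
    carrierPreferences: {},
  };
  const allowed = new Set<string>();
  const excluded = new Set<string>();
  for (const src of sources) {
    if (src.wasteFactor !== undefined) merged.wasteFactor = src.wasteFactor;
    src.materialsAllowed?.forEach((m) => allowed.add(m));
    src.materialsExcluded?.forEach((m) => excluded.add(m));
    Object.assign(merged.carrierPreferences, src.carrierPreferences);
  }
  merged.materialsAllowed = [...allowed].filter((m) => !excluded.has(m));
  merged.materialsExcluded = [...excluded];
  return merged;
}

const effective = mergeBundles([
  { wasteFactor: 0.1, materialsAllowed: ["SHNG-A", "SHNG-B"] },          // base
  { wasteFactor: 0.15, carrierPreferences: { rounding: "nearest-10" } }, // state
  { materialsAllowed: ["SHNG-C"], materialsExcluded: ["SHNG-B"] },       // zip
]);
console.log(effective.wasteFactor);      // 0.15 (highest precedence wins)
console.log(effective.materialsAllowed); // ["SHNG-A", "SHNG-C"]
```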
Internal API Validation, Referential Integrity, and Safe Migrations
Given foreign keys from scope join tables to states, counties, and zips When attempting to delete a referenced identifier Then the database blocks the delete and the API returns 409 with guidance to detach references first Given a migration introduces a new nullable field (e.g., climateZoneSource) with backfill When the migration runs in production Then it is idempotent, reversible, and does not cause downtime; read/write paths handle nulls until backfill completes Given JSON schema validation for Region Profile payloads When a request includes unknown fields in strict mode or omits required fields Then the API rejects with 422 listing offending fields Given an API version bump from v1 to v1.1 When clients call v1 endpoints post-deploy Then responses remain backward-compatible and deprecations are logged for observability
Enforcement of Guardrails in Estimator
"As an estimator, I want the system to enforce regional rules while I build estimates so that my bids are consistent, compliant, and faster to produce."
Description

Integrate Region Profile rules into the estimate builder to enforce permitted materials and automatically insert code-required adders when trigger conditions are met. Provide inline validations and contextual warnings for disallowed items, with role-gated override capabilities requiring justification. Apply profile-driven crew rates, taxes, waste, and carrier preferences to all line-item and total calculations. Display provenance (which rule triggered which change) and maintain an audit trail of all enforcement and overrides. Ensure the enforcement engine is performant, testable, and consistent across web UI, API-driven estimate creation, and PDF generation.

Acceptance Criteria
Auto-apply Region Profile on Estimate Creation (UI and API)
Given a job address ZIP/county linked to a single Region Profile When a user creates an estimate via Web UI or POST /estimates Then the profile is auto-attached to the estimate with profileId and profileVersion recorded Given overlapping ZIP and county profiles When both match Then the ZIP-level profile is selected and the selection rationale is captured in provenance Given no matching profile and a branch default profile exists When an estimate is created Then the branch default profile is attached; otherwise profileId remains null and the UI prompts to select a profile on first edit Given an estimate with a profile When the estimate is duplicated or imported via API Then the same profile and version are preserved unless explicitly overridden
Permitted Materials Enforcement with Inline Validation
Given a Region Profile with permitted and disallowed SKUs/categories When a user searches/selects a disallowed item Then the add action is blocked, an inline validation shows the violated rule and rationale, and permitted alternatives (if configured) are suggested Given a disallowed item is submitted via bulk import or API When processed Then the API responds 422 with error code RULE_MATERIALS_FORBIDDEN and details of offending items; no partial adds occur Given a user with override permission When attempting to add a disallowed item Then an override dialog requires justification (minimum 10 characters) before enabling confirm; on confirm, the item is added flagged override=true and visually marked Given a user without override permission When attempting to add a disallowed item Then no override option is presented and the add is blocked
Automatic Insertion of Code-Required Adders
Given a Region Profile defines code adders with trigger conditions and quantity formulas When line items or job attributes satisfy triggers Then the adder items are auto-inserted once per applicable scope with calculated quantities and marked systemAdded=true Given enforcement is re-run on an unchanged estimate When applied Then no duplicate adder items are created (idempotent) Given trigger inputs change such that a rule no longer applies When recalculated Then previously inserted adder items are auto-removed or quantity-adjusted accordingly Given an auto-inserted adder is manually removed by a user When the removal is attempted Then a non-dismissible warning explains the code requirement and offers re-apply And the removal, if persisted, is logged as an override requiring justification
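Idempotent adder insertion falls out naturally if system-added items are keyed by rule ID and re-derived on every enforcement pass. A sketch under that assumption, with a hypothetical ice-and-water rule; the rule and line-item shapes are illustrative.

```typescript
// Sketch: system-added adders are dropped and re-derived each pass, so
// re-running enforcement on unchanged inputs never duplicates items.
interface TriggerContext { pitchTwelfths: number; deckType: string; }

interface AdderRule {
  ruleId: string;
  applies: (ctx: TriggerContext) => boolean;
  quantity: (ctx: TriggerContext) => number;
  itemName: string;
}

interface LineItem { name: string; quantity: number; systemAdded: boolean; ruleId?: string; }

function applyAdders(items: LineItem[], rules: AdderRule[], ctx: TriggerContext): LineItem[] {
  // Keep manual items; re-derive system adders from the current triggers.
  const manual = items.filter((i) => !i.systemAdded);
  const adders = rules
    .filter((r) => r.applies(ctx))
    .map((r) => ({
      name: r.itemName,
      quantity: r.quantity(ctx),
      systemAdded: true,
      ruleId: r.ruleId, // provenance link back to the triggering rule
    }));
  return [...manual, ...adders];
}

const rules: AdderRule[] = [{
  ruleId: "ICE-WATER-01", // hypothetical rule for illustration
  applies: (c) => c.pitchTwelfths >= 6 && c.deckType === "OSB",
  quantity: () => 2,
  itemName: "Ice & water shield (code-required)",
}];

// Running twice on the same inputs yields identical output: idempotent.
const once = applyAdders([], rules, { pitchTwelfths: 7, deckType: "OSB" });
const twice = applyAdders(once, rules, { pitchTwelfths: 7, deckType: "OSB" });
console.log(once.length === twice.length); // true
```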
Apply Crew Rates, Taxes, Waste, and Carrier Preferences
Given a Region Profile with crew rates, taxes, waste factors, and carrier preferences When line items are added or quantities updated Then unit rates, waste, tax, and carrier-specific mappings are applied to line items and totals per profile Then UI-presented subtotal, tax, and total equal server-calculated values within 0.01 and match API responses for the same estimate state Given carrier preferences require mapping/grouping When exporting to PDF or carrier-specific formats Then the mapped names and grouping are used consistently Given a profile change mid-estimate When the profile is updated to a new version Then the estimator prompts to reapply; upon confirmation, all affected values are recalculated and provenance entries note the rule version change
Role-Gated Overrides with Justification and Audit Trail
Given a user without override permission When attempting to bypass any guardrail Then the action is blocked and a message indicates required permission Given a user with override permission When overriding materials, adders, rates, taxes, or waste Then a justification field (minimum 10 characters) is required before save is enabled Then the override is recorded in an immutable audit trail capturing estimateId, ruleId, userId, timestamp (UTC), field, previousValue, newValue, justification, and channel (UI|API) Then audit entries are retrievable via GET /estimates/{id}/audit and visible in the UI Audit tab, with pagination and search by ruleId and userId
Provenance Display for Rule-Driven Changes
Given any system-enforced change or override When viewing an affected line item or total in the estimator Then a provenance indicator is displayed; on click/hover it shows rule name, ruleId, trigger summary, and timestamp When fetching the estimate via API Then each affected line item includes a provenance array enumerating ruleId, type (enforced|override), explanation, and source profileVersion When exporting to PDF Then a Change Log appendix lists rule-driven changes and overrides with references to line items; inclusion can be toggled by profile setting or export option
Performance, Determinism, and Cross-Channel Consistency
Given estimates with up to 150 line items and up to 20 active rules When performing initial enforcement or recalculation Then p95 server processing time is <= 800 ms and p99 <= 1500 ms in load tests representative of production When the same estimate state is produced via Web UI, API, and during PDF generation Then resulting line items, adders, and totals are identical within 0.01 and no channel-specific deviations occur When enforcement runs multiple times on an unchanged estimate Then results are deterministic and idempotent: no additional items are created and no values drift Then automated test coverage of the enforcement engine is >= 85% lines/branches and integration tests validate UI/API/PDF parity for at least 10 representative Region Profiles
Profile Management UI & Templates
"As an operations manager, I want an easy way to author and publish profiles using templates so that our teams stay aligned without manual reconfiguration."
Description

Provide an admin console to create, edit, clone, import/export, and archive Region Profiles with draft/publish workflows and effective dates. Enable bulk assignment of ZIPs/counties via CSV and an interactive map picker with validation against USPS and FIPS datasets. Offer starter templates for common markets (e.g., hurricane, hail, cold-weather codes) that prefill guardrails. Include change history with side-by-side diffs, impact previews (branches/jobs affected), and publish scheduling. Enforce RBAC so only authorized roles can publish or retire profiles, with notifications to affected branches upon changes.

Acceptance Criteria
Draft, Publish, Clone, and Archive Region Profiles
Given I have the Profile Editor role When I create a new Region Profile with all required fields completed Then the profile is saved as Draft and appears in the Drafts list Given a Draft profile When I click Publish and enter an effective start date (today or later) and optional end date (after start) Then the profile status updates to Published and the effective window is stored Given an existing profile When I select Clone Then a new Draft is created copying guardrails and jurisdiction assignments, with the name auto-suffixed with "Copy" and no effective dates set Given a Published profile When I click Archive Then its status changes to Archived, it can no longer be applied to new jobs, and existing jobs retain their original profile reference
Import and Export Profiles with Schema Validation
Given a Region Profile When I click Export Then a JSON file conforming to the Region Profile schema v1 is downloaded Given a valid Region Profile JSON file When I click Import and upload the file Then a new Draft profile is created with all fields populated from the file Given an invalid or duplicate Region Profile JSON file When I attempt to import Then the import is blocked and an error report lists each violation with the JSON path and reason
Bulk ZIP/County Assignment via CSV and Map Picker
Given a CSV file for jurisdiction assignment When I upload a file with headers ZIP and/or COUNTY_FIPS Then rows are validated against USPS ZIP and FIPS county datasets and the system displays counts of valid and invalid rows Given the CSV contains only valid codes When I confirm import Then the listed ZIPs/counties are assigned to the profile with no duplicates, and a success summary shows totals added/unchanged Given the CSV contains any invalid codes When validation completes Then no changes are applied and a downloadable error file lists the line numbers, codes, and reasons Given I use the interactive map picker When I multi-select ZIPs or counties and apply Then the selected jurisdictions are added to the profile, respecting USPS/FIPS boundaries and de-duplicating overlaps
Starter Templates Prefill Guardrails
Given I create a new profile from a starter template (Hurricane, Hail, or Cold-Weather) When I select a template Then permitted materials, code-required adds, crew rates, taxes, and carrier preferences are prefilled in the Draft Given a templated Draft When I edit any prefilled field and save Then my changes persist and do not alter the base template Given a templated Draft When I view profile details Then the template name and version used are displayed
Change History with Side-by-Side Diffs
Given a profile with multiple saved versions When I open Change History Then I can select any two versions and view a side-by-side diff highlighting added, removed, and changed values across all sections and jurisdiction assignments Given I view a specific version in history When I open its metadata Then I see version number, author, timestamp, and status (Draft/Published/Archived) Given two adjacent versions with no differences When I view their diff Then the UI indicates no differences found
Impact Preview Before Publish
Given a Draft or edited Published profile When I click Preview Impact Then the system displays counts of affected branches and in-flight jobs within the selected effective window, and I can download detailed lists as CSV Given the impact preview is shown When I confirm publish Then the profile is published and an audit entry records the actor, timestamp, and preview summary Given I cancel from the impact preview When I close the modal Then no publish occurs and the profile status remains unchanged
RBAC, Scheduled Publish, and Branch Notifications
Given a user without the Publisher role When they attempt to publish, schedule, or retire a profile Then the action is denied with a clear permission error and is logged Given a user with the Publisher role When they schedule a future publish date and time Then the profile moves to Scheduled and automatically becomes Published at the scheduled time Given a profile is published or retired When the action completes Then affected branches receive notifications and an audit log entry captures actor, timestamp, and affected jurisdictions
Branch Defaults, Overrides, and Auditability
"As a branch admin, I want to set branch defaults and control overrides with full auditability so that we maintain consistency and understand deviations."
Description

Allow organization and branch admins to assign default Region Profiles per branch and optionally per carrier. Enable job-level profile overrides based on role permissions, with mandatory reason capture and automatic audit logging (who, when, from/to, reason). Surface override indicators in the job header and estimates. Provide reporting on override frequency, top reasons, and financial impact to guide governance and training. Ensure API parity for setting and querying defaults and overrides.

Acceptance Criteria
Auto-apply Branch and Carrier Default Region Profile on Job Creation
Given a branch has a default Region Profile A and a carrier-specific default Region Profile B for carrier X When a new job is created in that branch for carrier X via UI or API Then Region Profile B is set as the job’s effective profile before the job is first saved and persisted with source=CarrierDefault Given a branch has a default Region Profile A and no carrier-specific default for carrier Y When a new job is created in that branch for carrier Y via UI or API Then Region Profile A is set as the job’s effective profile before the job is first saved and persisted with source=BranchDefault Then the effectiveProfileId and source are stored on the job record and visible in the job header
Role-based Job-level Profile Override with Mandatory Reason
Given a user with the "Override Region Profile" permission opens a job When they change the job’s effective Region Profile to a different profile Then a reason field is required (minimum 10 characters), and Save is disabled until provided; on save the override is applied Given a user without the "Override Region Profile" permission When they view the job in the UI Then the override control is not rendered Given a user without permission When they call the override API Then the API responds 403 Forbidden with error code=REGION_PROFILE_OVERRIDE_FORBIDDEN and no change occurs Then the override operation enforces optimistic concurrency using job version/ETag; conflicting saves return 409 Conflict with no partial updates
Comprehensive Audit Log on Profile Override
Given any job-level Region Profile change (including revert to default) When the change is saved via UI or API Then a single immutable audit record is created capturing: jobId, previousProfileId, newProfileId, userId, userRole, occurredAt (UTC ISO-8601), channel (UI|API), reason (string), requestId Then audit records are append-only; attempts to update or delete an audit record return 405 Method Not Allowed Then GET /jobs/{id}/region-profile-audits returns records filterable by date range and userId, sorted by occurredAt desc by default, responding within 2 seconds for up to 10,000 records Then exporting audits to CSV includes all captured fields with headers and UTC timestamps
Override Indicators in Job Header, Estimates, and PDFs
Given a job’s effective Region Profile differs from its applicable default When viewing the job Then an "Overridden" indicator is displayed in the job header showing fromProfileName → toProfileName with a tooltip that includes reason and timestamp When generating an estimate in the UI Then an inline badge indicates "Region Profile overridden" with from/to and a link to the audit trail When exporting estimate PDFs Then a footnote states "Region Profile overridden from {from} to {to} on {date} by {user}" and includes the reason Given a job reverts to the applicable default When viewing the job and its estimates Then the override indicators are not displayed
Override Reporting: Frequency, Reasons, and Financial Impact
Given the reporting module When a user runs the "Region Profile Overrides" report for a selected date range Then the report shows: total overrides, overrides per branch, per carrier, per user, and a top-5 reasons list with counts and percentages Then financial impact is calculated as the difference between the job’s total estimate immediately before the override and immediately after the override; the report shows sum impact, average impact, and p50/p90 per branch and carrier Then filters include branch, carrier, profile, user, and reason; applying filters updates metrics within 5 seconds for up to 100,000 jobs Then the report supports CSV export with all visible metrics and a detailed row-level extract listing jobId, occurredAt, fromProfileId, toProfileId, reason, deltaAmount, branchId, carrierId
API Parity for Defaults and Overrides
Given programmatic access When using the API Then the following endpoints exist and are documented via OpenAPI: PUT /branches/{id}/region-profile-default, PUT /branches/{id}/carriers/{carrierId}/region-profile-default, GET /jobs/{id}/effective-region-profile, POST /jobs/{id}/region-profile-override, GET /jobs/{id}/region-profile-audits, GET /reports/region-profile-overrides Then API validations match UI rules: permission checks, required reason (min 10 chars) for overrides, profile existence, branch/carrier applicability; invalid requests return 400 with machine-readable error codes Then write endpoints are idempotent via Idempotency-Key header; duplicate requests within 24 hours do not create duplicate overrides or audits Then all endpoints enforce rate limiting (minimum 60 requests/min per API key) and return 429 with Retry-After when exceeded
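The Idempotency-Key behavior could be sketched as a replay cache in front of the write path: a repeated key within 24 hours replays the stored response instead of writing again. The in-memory Map stands in for whatever shared store a real deployment would use.

```typescript
// Sketch of Idempotency-Key handling; the Map is a stand-in for a shared
// cache or table, and the response shapes are assumptions.
interface StoredResponse { status: number; body: unknown; storedAt: number; }

const TTL_MS = 24 * 60 * 60 * 1000; // 24-hour replay window per the criteria
const idempotencyStore = new Map<string, StoredResponse>();

function handleWithIdempotency(
  key: string, // value of the Idempotency-Key request header
  write: () => { status: number; body: unknown },
): { status: number; body: unknown; replayed: boolean } {
  const hit = idempotencyStore.get(key);
  if (hit && Date.now() - hit.storedAt < TTL_MS) {
    // Duplicate request: no new override is created and no audit row written.
    return { status: hit.status, body: hit.body, replayed: true };
  }
  const result = write(); // performs the override and audit exactly once
  idempotencyStore.set(key, { ...result, storedAt: Date.now() });
  return { ...result, replayed: false };
}

const doOverride = () => ({ status: 201, body: { overrideId: "ov-1" } });
console.log(handleWithIdempotency("key-123", doOverride).replayed); // false
console.log(handleWithIdempotency("key-123", doOverride).replayed); // true
```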
Jurisdiction Data Ingestion and Geo Lookup
"As a system owner, I want accurate and current jurisdiction mapping and reliable geo lookups so that profiles apply correctly even when datasets change."
Description

Implement scheduled ingestion and normalization of authoritative ZIP/county/state datasets (USPS, Census/FIPS), handling ZIPs spanning multiple counties and boundary updates. Provide a geo lookup service that resolves a job address to the most specific applicable jurisdiction, with caching, retries, and graceful degradation when third-party services are unavailable. Detect coverage changes that alter which profiles apply and notify admins with suggested updates. Expose health metrics and alerts for data freshness and lookup error rates.

Acceptance Criteria
Nightly Authoritative Data Ingestion and Normalization
Given USPS ZIP, Census county/FIPS, and state datasets are configured as sources When the nightly 02:00 UTC ingestion job runs Then 100% of new or updated records are pulled, parsed, and normalized into a unified schema with fields: zip5, county_fips, county_name, state_fips, state, effective_date, source_version And the job completes within 30 minutes and writes a success heartbeat timestamp And any schema or record-level parse errors are quarantined and reported with counts per source without blocking valid records And normalization persists ZIP-to-multiple-county crosswalks And the resulting dataset is versioned with an immutable version_id and a changelog entry
Multi-County ZIP Resolution
Given a full street address within a ZIP that spans multiple counties When a jurisdiction lookup is requested Then the service geocodes to rooftop/parcel centroid and returns the county_fips of the polygon containing the point And returns an alternatives array of other counties within the ZIP with confidence scores And no ambiguous result is returned if the point is inside a single county polygon Given only a ZIP code is provided without street or city When a jurisdiction lookup is requested Then the service returns the jurisdiction based on the population-weighted centroid with confidence="low" and degraded=true
Address-to-Jurisdiction Geo Lookup Precision
Given a curated test set of 10,000 US addresses with known county/state ground truth across all states and territories When batch lookups are executed Then county_fips accuracy is >= 99.5% and state_fips accuracy is 100% And p95 latency is <= 300 ms per lookup at 50 RPS sustained, p99 <= 800 ms And each response includes: zip5, county_fips, county_name, state_fips, state, source_version, dataset_version, cache=true|false, degraded=true|false, confidence=high|medium|low And the API returns stable error codes (4xx for client issues; 5xx is not used for third-party outages due to graceful degradation)
Caching, Retries, and Graceful Degradation
Given the primary third-party geocoding provider is timing out When jurisdiction lookups are performed Then cache serves repeat addresses with TTL=30 days and cache hit ratio >= 60% on repeated batch workloads And the service retries up to 3 times with exponential backoff (100ms, 200ms, 400ms) plus jitter, then falls back to a secondary provider or last-known-good data And responses are marked degraded=true when fallbacks are used while still returning HTTP 200 with best-effort jurisdiction And a circuit breaker opens for 2 minutes when error rate > 20% over a rolling 1-minute window to prevent cascading failures
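A sketch of the retry-then-fallback path described above. The provider functions are assumptions; the attempt count, backoff schedule with jitter, and degraded flag come from the criteria, and the circuit breaker is omitted for brevity.

```typescript
// Sketch: 3 attempts against the primary provider with exponential backoff
// (100/200/400 ms) plus jitter, then fall back and flag the result degraded.
interface LookupResult { countyFips: string; degraded: boolean; }

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function lookupWithFallback(
  primary: (addr: string) => Promise<string>,   // assumed provider signature
  secondary: (addr: string) => Promise<string>, // fallback provider or cache
  address: string,
): Promise<LookupResult> {
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      return { countyFips: await primary(address), degraded: false };
    } catch {
      // 100ms, 200ms, 400ms base delay plus up to 50ms of jitter
      await sleep(100 * 2 ** attempt + Math.random() * 50);
    }
  }
  // Fallback path still returns HTTP 200 upstream, flagged degraded=true.
  return { countyFips: await secondary(address), degraded: true };
}
```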
Coverage Change Detection and Admin Notification
Given the nightly ingestion produces a delta where ZIP-to-county mappings or boundaries change When such changes would alter the Region Profile assignment for any branch or saved job Then a change set is generated listing affected ZIPs/counties, impacted profiles, and counts of branches/jobs And an in-app notification and email are sent to Org Admins within 15 minutes including suggested profile updates And admins can acknowledge or snooze the suggestion; acknowledgments are audit-logged with user, timestamp, and action And no duplicate notification is sent for the same change set version
Health Metrics Exposure and Alerting
Given the service is running When the /metrics and /health endpoints are queried Then metrics are exposed for: data_freshness_hours_by_source, ingestion_success_rate, last_ingest_timestamp, lookup_error_rate_1m/5m, cache_hit_ratio, lookup_latency_ms_p50/p95/p99, circuit_breaker_state And /health returns status=UP only if last_ingest_timestamp < 24h and lookup_error_rate_5m < 5% And alerts fire to the configured channel when data_freshness_hours_by_source > 48 or lookup_error_rate_5m >= 10%
Boundary Updates and Auditability
Given a county boundary is updated between dataset versions When a lookup is performed for an address near the updated boundary Then the result uses the latest polygon set and includes dataset_version in the response And the response includes boundary_change=true when the address lies within 100 meters of a changed polygon edge And if a prior lookup exists for the same address, an audit log entry records prior_result, new_result, and dataset_version_old->new
Profile Versioning, Effective Dates, and Migration
"As an operations lead, I want controlled rollout and migration of profile updates so that we minimize disruption and can audit what changed and why."
Description

Support semantic versioning and lifecycle states (draft, published, retired) for Region Profiles with effective start/end dates. Ensure jobs created after the effective date automatically use the latest version, while existing jobs remain stable unless explicitly migrated. Provide a migration tool that previews diffs (rates, materials, adders, taxes), estimates impact on totals, and applies changes with full audit logging. Emit events/webhooks on publish and migration for downstream integrations and branch notifications.

Acceptance Criteria
Enforce Semantic Versioning on Region Profiles
Given a user creates or updates a Region Profile version identifier When the version is saved Then the system must validate the identifier matches MAJOR.MINOR.PATCH where each segment is a non-negative integer (e.g., 1.0.3) And the version must be unique within the profile And versions marked Published or Retired cannot have their content or version identifier edited
Profile Lifecycle States and Transition Rules
Given a Region Profile version in Draft When a user publishes it Then the user must provide an effective_start (UTC) and optional effective_end (UTC) where start < end if end is provided And the system must prevent overlapping effective windows with any other Published version of the same profile And once Published, the version becomes read-only except for retiring Given a Region Profile version in Published When a user retires it Then the system must set effective_end to the retirement timestamp if not already set And no new jobs may auto-select this version after its effective_end And a Retired version cannot transition back to Draft or Published
Auto-select Latest Effective Version for New Jobs
Given a new job is created at timestamp T (UTC) and is associated to a Region Profile by ZIP/county When selecting a profile version Then the system must auto-select the single Published version whose effective_start <= T and (no effective_end or T < effective_end) And if multiple candidates would qualify, the publish action must have been blocked earlier by overlap prevention And if no candidate exists, the job creation flow must prompt for manual selection or block with an error explaining no active version is effective
Pin Existing Jobs to Original Profile Version
Given a job was created using Region Profile version A When a newer version B is Published or a prior version is Retired Then the job must remain pinned to version A for all calculations and re-pricing And the job record must display the pinned profile version identifier And no profile changes affect the job unless an explicit migration is executed
Migration Tool: Diff Preview and Impact Estimation
Given a user opens the migration tool for a job pinned to version A When the user selects a target version B Then the tool must display a structured diff across rates, materials, adders, and taxes including added/removed/changed items And the tool must compute and display estimated financial impact: line-item subtotals, taxes, grand total before vs. after, and deltas And the tool must surface blocking issues (e.g., unmapped or deprecated materials) with clear resolution guidance And the tool must restrict selectable targets to Published (not Draft/Retired) versions of the same Region Profile
Migration Apply: Atomic Update and Audit Logging
Given a user confirms migration of a job from version A to version B When Apply is executed Then the system must atomically update the job to version B and persist recalculated totals And create an immutable audit log entry capturing actor, timestamp, job_id, profile_id, from_version, to_version, diff summary, totals before/after, and outcome (success/failure) And on any failure, no partial changes persist and the audit log records the failure with error details
Publish and Migration Events/Webhooks Delivery
Given a Region Profile version is Published or a job migration completes When the event is emitted Then the system must publish events region_profile.published and region_profile.migrated with payload including ids, version identifiers, effective dates (for publish), job totals before/after (for migration), actor, and event timestamp And webhooks must be delivered to subscribed endpoints with an idempotency key and at least 3 retry attempts with exponential backoff on non-2xx responses And downstream integrations and branch notifications must receive the event within 60 seconds of occurrence in success cases
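At the event layer, the same idempotency-plus-retry discipline applies as elsewhere in this document: each emitted event carries an idempotency key so subscribers can deduplicate redelivered webhooks, and non-2xx responses are retried with exponential backoff up to the stated limit.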

Compliance Pulse

Real‑time dashboard and alerts tracking template drift, override frequency, approval latency, and branch margin variance. Drill into outliers, export audit packs, and send weekly scorecards to managers. Surfaces risk before it hits the bottom line and drives continuous improvement across the franchise.

Requirements

Live Metric Ingestion & Drift Computation
"As an operations manager, I want metrics to update in real time so that I can react to emerging risk without waiting for end‑of‑day reports."
Description

Implement a real-time data pipeline to ingest estimating events, template selections, overrides, approvals, and job outcomes, then compute template drift, override frequency, approval latency, and branch margin variance within 60 seconds of occurrence. Maintain baselines by branch, template version, carrier, job type, and rolling window, with seasonality adjustments and backfill for historical comparisons. Ensure idempotent processing, data quality validation, and late-event handling. Store time-series metrics and aggregates optimized for dashboard queries and alerts, and expose an API for the dashboard and alert engine to consume.

Acceptance Criteria
Real-Time Metric Computation SLA (<=60s)
Given a valid estimating event (template selection, override, approval, or job outcome) is produced at time T_ingest, When the pipeline ingests the event, Then impacted metrics are updated and available via the metrics API within 60 seconds at p95 and within 120 seconds at p99. Given a continuous stream at 2,000 events per minute for 30 minutes, When processing under sustained load, Then end-to-end compute latency meets the same SLOs and the processing backlog never exceeds 2 minutes. Given a service restart or deployment during active ingestion, When traffic resumes, Then no events are lost and latency SLOs are re-achieved within 5 minutes.
Correct Drift and Variance Computation vs Baselines
Given a reference dataset of 10,000 events with expected KPI outputs by branch, template version, carrier, job type, and hour, When the pipeline processes the dataset, Then computed values match expected counts exactly and match expected rates/percentages within ±0.01. Given overrides on 100 out of 1,000 eligible line items, When override frequency is computed, Then it equals 10.00% for the corresponding dimension combinations. Given approvals with timestamps t_submitted and t_approved, When approval latency is computed, Then mean, median, and p95 values match reference results within ±0.1 minutes. Given realized job margins and baseline target margins, When branch margin variance is computed, Then variance equals realized minus baseline target and rollups across dimensions equal the sum/weighted averages of their children within rounding tolerance.
Baseline Maintenance by Dimensions, Rolling Windows, and Seasonality
Given baselines per branch, template version, carrier, and job type, When queried for any timestamp within the last 24 months, Then a baseline snapshot is returned for rolling windows 7d, 30d, and 90d with seasonality adjustments applied per specification. Given the daily baseline job runs at 02:00 UTC, When it completes, Then new baseline snapshots are versioned, immutable, and available by 02:15 UTC with effective_from and effective_to fields. Given a backfill request for a historical range (e.g., 2023-01-01 to 2025-08-31), When executed, Then baseline snapshots for all required dimensions are populated within 6 hours and tagged with the backfill run identifier. Given a query at time T for any dimension set, When retrieving a baseline, Then the snapshot selected is the latest effective snapshot whose effective_from ≤ T < effective_to.
Idempotent Processing and Exactly-Once Semantics
Given duplicate events with the same event_id arrive up to five times and out of order, When processed, Then aggregate counts and computed metrics are identical to processing a single instance of each event. Given transient sink write failures, When writes are retried, Then no duplicate rows are created and no aggregates are double-counted. Given a pipeline restart and replay from the last committed offset, When processing resumes, Then no events are skipped or double-processed and checkpoint/offset commits are atomic and durable.
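At its core, the idempotency requirement is a processed-ID check applied before any aggregate mutation. A sketch with an in-memory set standing in for a durable store; in a real pipeline the processed-ID set and the aggregates would be committed atomically so a crash between the two cannot double-count.

```typescript
// Sketch: a processed-ID set makes duplicate or replayed events no-ops,
// so aggregates match single-delivery processing. Shapes are assumed.
interface MetricEvent { event_id: string; branch: string; overrides: number; }

const processed = new Set<string>();            // durable in a real pipeline
const overrideCounts = new Map<string, number>();

function applyEvent(ev: MetricEvent): void {
  if (processed.has(ev.event_id)) return;       // duplicate: no double count
  processed.add(ev.event_id);
  overrideCounts.set(ev.branch, (overrideCounts.get(ev.branch) ?? 0) + ev.overrides);
}

const ev: MetricEvent = { event_id: "e-1", branch: "Dallas", overrides: 1 };
for (let i = 0; i < 5; i++) applyEvent(ev);     // delivered five times
console.log(overrideCounts.get("Dallas"));      // 1
```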
Late Event Handling and Recomputations
Given events may arrive up to 7 days late relative to event_time, When a late event within the watermark is ingested, Then all affected metric windows are recomputed and exposed via API within 5 minutes of arrival. Given an event arrives later than the 7-day watermark, When processed, Then it is stored in a corrections queue and flagged for manual backfill without modifying finalized aggregates. Given out-of-order events within the watermark, When the watermark advances and windows finalize, Then recomputation is deterministic and repeated runs yield identical aggregates.
Data Quality Validation and Quarantine
Given an incoming event, When schema, required fields, type, and range validations execute, Then failing events are rejected to a quarantine store with error codes and payloads, and passing events continue to processing. Given referential integrity checks against branch, template version, and carrier catalogs, When a lookup fails, Then the event is quarantined and a data quality alert is emitted within 2 minutes. Given the daily DQ summary at 03:00 UTC, When the report is generated, Then it lists counts by error type and the proportion of rejected events remains below 0.5% over the prior 24 hours; exceeding the threshold triggers an on-call alert.
Metrics Store Performance and API Readiness
Given standard dashboard queries for aggregates by hour/day with filters on branch, template version, carrier, and job type, When executed under 50 concurrent users, Then p95 API latency is ≤ 300 ms for aggregate reads and ≤ 800 ms for 1-minute time-series over the last 30 days. Given pagination via limit and cursor, When retrieving high-cardinality time series, Then results are consistent and repeatable with correct next/prev cursors and no gaps or overlaps. Given OAuth2 client credentials authentication and per-client rate limits, When invalid tokens are used, Then the API returns 401; when rate limits are exceeded, Then the API returns 429 with a Retry-After header; when valid, Then 200 with correct data. Given a 99.9% availability target, When measured over a rolling 30-day window, Then API uptime meets or exceeds 99.9% with error budget tracking.
Alert Rules Engine & Threshold Management
"As a regional manager, I want configurable, routed alerts with noise controls so that I’m notified only about actionable anomalies for my branches."
Description

Provide a configurable rules engine to define alert thresholds and conditions per metric (drift %, override frequency, approval latency SLA breaches, margin variance). Support absolute and percentage thresholds, rolling windows, branch/template scoping, hysteresis to reduce noise, suppression windows, and schedules. Implement routing to managers based on org hierarchy and channels (in‑app, email, SMS), with escalation and deduplication. Log alert lifecycle (triggered, acknowledged, resolved) for auditability and performance review.

Acceptance Criteria
Template Drift % Threshold with Rolling Window and Hysteresis
Given a rule for metric "Template Drift %" with a 10% threshold over a 7-day rolling window and a 2% hysteresis, scoped to Branch=Dallas and Template=Residential A When the evaluated drift exceeds 10% for 2 consecutive 15-minute evaluations Then a single alert is triggered for Branch=Dallas, Template=Residential A And the alert remains active until drift falls below 8% for 2 consecutive evaluations And evaluations only include data from the defined branch and template And evaluations occur no more frequently than every 15 minutes
Approval Latency SLA Breach with Business Hours Schedule and Suppression
Given a rule for metric "Approval Latency" with threshold > 4 business hours, schedule set to Mon-Fri 08:00-18:00 local time, and a 2-hour suppression window after acknowledge When a job remains in Pending Approval for 4 hours 1 minute within scheduled hours Then an alert is triggered during scheduled hours via in-app and email to the Branch Manager And once acknowledged, no further notifications for the same branch/template scope are sent for 2 hours while the breach persists And no evaluations or notifications occur outside the scheduled hours
Override Frequency Threshold Per Template with Channel Deduplication
Given a rule for metric "Override Frequency" with threshold > 5 overrides per 100 estimates in the last 30 days, scoped to Branch=Phoenix, Template=Storm Claim v2, with channels in-app, email, and SMS When the override rate crosses the threshold for the scoped branch/template Then exactly one alert event is created for that scope And recipients receive at most one notification per channel per 24-hour period while the alert is active And repeated threshold crossings within the active period do not create duplicate alerts And notifications include template name, branch, current rate, threshold, and window
Escalation via Org Hierarchy on Unacknowledged Margin Variance
Given a rule for metric "Margin Variance" with threshold < -5% over a 14-day rolling window per branch, routed to Branch Manager with a 60-minute escalation to Regional Manager When the threshold is breached for Branch=Seattle and no acknowledgement occurs within 60 minutes of the initial notification Then the alert escalates to the Regional Manager with SMS and email notifications And escalation ceases immediately upon any acknowledgement And all acknowledgements record user, timestamp, and channel
Alert Lifecycle Logging and Audit Pack Export
Given an alert that transitions through the Triggered, Acknowledged, and Resolved states When each transition occurs Then each lifecycle event is logged with timestamp, actor, scope (branch/template), rule ID and version, metric values at trigger and resolution, and per-channel delivery outcomes And the audit pack export contains the event timeline, rule configuration snapshot, notification payloads, and evaluation samples used to trigger and resolve And lifecycle logs are immutable and retained for at least 24 months And audit packs under 10 MB are generated and downloadable within 60 seconds
Suppression and Maintenance Window Behavior
Given a daily suppression window for Branch=Denver from 00:00-01:00 local time and a weekly maintenance blackout Sat 22:00-Sun 02:00 When any rule condition would breach during a suppression or blackout window Then no new alerts are emitted and the evaluations are recorded as Suppressed with the applicable reason And if the condition still breaches 5 minutes after the window ends, an alert is emitted at the next evaluation And active alerts do not auto-resolve during suppression windows; their state remains unchanged
Outlier Explorer & Drill‑Down
"As a quality lead, I want to drill into outliers and see the specific jobs and changes so that I can diagnose root causes and coach the team effectively."
Description

Create interactive dashboard widgets with drill‑down into outliers by branch, estimator, template, carrier, job type, and timeframe. Enable click‑through from an alert to a pre‑filtered investigation view showing contributing jobs, change history, overrides applied, approval steps with timestamps, and comparisons to baseline and peers. Provide contextual KPIs, sparkline trends, and exportable tables (CSV) for offline analysis. Optimize queries for sub‑second filtering and paging on large datasets.
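As a sketch of the click-through behavior, the snippet below encodes an alert's context into a shareable, pre-filtered Investigation View URL so another authorized user reproduces the same state; the route and parameter names are assumptions for illustration.

```typescript
// Sketch: encode alert context into a deep link to the Investigation View.
interface AlertContext {
  branch?: string;
  estimator?: string;
  template?: string;
  carrier?: string;
  jobType?: string;
  from: string; // ISO-8601 start of timeframe
  to: string;   // ISO-8601 end of timeframe
}

function investigationUrl(base: string, ctx: AlertContext): string {
  const url = new URL("/compliance/investigation", base);
  for (const [key, value] of Object.entries(ctx)) {
    if (value !== undefined) url.searchParams.set(key, value); // only active filters
  }
  return url.toString();
}

// Example: link generated from a drift alert on one branch/template.
console.log(
  investigationUrl("https://console.rooflens.com", {
    branch: "Dallas",
    template: "Residential A",
    from: "2024-05-01T00:00:00Z",
    to: "2024-05-08T00:00:00Z",
  }),
);
```

Keeping the full filter state in the URL (rather than only in client memory) is what makes the shared-link criterion below testable.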

Acceptance Criteria
Dimension Filters and Widget Drill‑Down
Given a user on the Compliance Pulse dashboard with outlier widgets visible When the user applies filters for branch, estimator, template, carrier, job type, and timeframe in any combination Then the widget counts and visuals update to reflect the intersection of filters within 1,000 ms at the 95th percentile And a "Reset Filters" control becomes enabled showing the number of active filters. Given a user selects an outlier data point (card, bar, point, or table row) When the user clicks the item Then the Investigation View opens in the same tab with those filters pre-applied and the clicked dimension highlighted And the initial results render within 1,000 ms at the 95th percentile. Given the Investigation View has more than one page of results When the user pages next/previous or changes page size Then the next page of results returns within 500 ms at the 95th percentile And total result count remains constant across pages for the same filter set.
Alert Click‑Through to Pre‑Filtered Investigation
Given an outlier alert notification in the dashboard or email When the user clicks the alert Then the Investigation View opens with filters from the alert payload (dimension, timeframe, threshold type) And the view header shows alert title, triggered timestamp, and severity. Given the Investigation View from an alert When the page loads Then sections are visible: Contributing Jobs table; Change History timeline; Overrides panel; Approval Steps with user and timestamps; Baseline and Peer Comparison cards And all sections load without errors, showing skeleton placeholders during data fetch. Given the user shares the alert link When another authorized user opens the link Then the same pre-filtered state is reproduced exactly (filters, sorts, columns) within their permissions.
Contextual KPIs and Sparkline Trends
Given the Investigation View loads with a selected timeframe When displayed Then KPIs include: Outlier Jobs Count, Average Variance %, Override Rate %, Median Approval Latency (hours), 95th Percentile Approval Latency (hours), Template Drift Index And each KPI shows a delta versus baseline and versus selected peer group. Given KPIs are visible When the user hovers any sparkline Then a tooltip shows period, value, and delta versus previous period and baseline. Given the timeframe picker is changed (Last 7/30/90/365 days or custom) When the user applies the new timeframe Then KPIs and sparklines recompute within 1,000 ms at the 95th percentile.
Baseline and Peer Comparisons Controls
Given the Investigation View is open When the user opens the Baseline & Peers selector Then they can choose baseline type (Global, Branch, Carrier, Template) and peer grouping (All branches, Region, Similar volume decile) And the selection persists in the URL and session storage. Given a baseline/peer selection is applied When comparisons render Then variance badges show color-coded direction with thresholds: green within ±2%, amber 2–5%, red >5% And comparison values match recalculated aggregates within 0.1% tolerance.
Export Current View to CSV
Given a populated Contributing Jobs table with active filters and column selections When the user clicks Export CSV Then a CSV is generated containing exactly the rows matching current filters and the columns in the current visible order with types formatted (dates ISO‑8601, decimals 2 places) And the file is encoded as UTF‑8 with a header row. Given an export exceeds 50,000 rows When the user confirms Export All Then an asynchronous export job starts and completes within 60 seconds for up to 200,000 rows And the user receives a downloadable file notification in‑app and via email. Given an export completes When the file is opened Then the first line contains column headers and the final line count equals the reported row count; no duplicated or missing rows relative to table pagination.
Large Dataset Performance and Paging
Given a dataset of at least 1,000,000 jobs across 12+ months in the test environment When applying any single filter or any combination up to five filters Then query execution completes and results render within 1,000 ms at the 95th percentile and 300 ms at the median. Given paging across large result sets When moving between pages of 100 rows Then the server returns results within 500 ms at the 95th percentile And client memory usage remains under 200 MB and server CPU under 70% average during a 5‑minute sustained paging test. Given repeated identical filter requests When issued within 10 minutes Then cached responses are served with latency under 200 ms while cache invalidates within 60 seconds of underlying data change.
Data Consistency and Outlier Definition Integrity
Given a dashboard widget shows N outliers for a selection When the user clicks into the Investigation View Then the Contributing Jobs table row count equals N for the same filters (no discrepancy). Given each row in the Contributing Jobs table When viewed Then it displays outlier reason code(s), threshold breached, variance value, and calculation method version; values match the backend service for that job. Given updates to outlier detection configuration When a new version is deployed Then the Investigation View indicates the applicable rule version and effective date And historical results are tagged with the rule version used at time of calculation.
Weekly Scorecards & Scheduled Delivery
"As a franchise manager, I want weekly scorecards delivered automatically so that I can monitor performance trends without logging in daily."
Description

Generate automated weekly scorecards per branch and region summarizing drift, override rates, approval latency, and margin variance trends, including rankings versus peers and week‑over‑week changes. Support distribution list management, time‑zone aware scheduling, PDF attachment generation, and links to the live dashboard. Include delivery retries, failure notifications, and a digest mode that bundles multiple branches for a manager. Archive sent scorecards for audit and historical reference.
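A minimal sketch of DST-safe weekly scheduling, assuming the luxon library: keeping all arithmetic in the configured IANA zone lets the library absorb daylight-saving transitions so the local wall-clock send time is preserved.

```typescript
// Sketch: compute the next N run times for a weekly schedule in an IANA zone.
import { DateTime } from "luxon";

function nextRunTimes(
  zone: string,                          // e.g., "America/Denver"
  weekday: 1 | 2 | 3 | 4 | 5 | 6 | 7,    // ISO weekday: 1 = Monday … 7 = Sunday
  hour: number,                          // local wall-clock hour
  count: number,
): DateTime[] {
  let candidate = DateTime.now()
    .setZone(zone)
    .set({ weekday, hour, minute: 0, second: 0, millisecond: 0 });

  // If this week's slot already passed, start from next week.
  if (candidate.toMillis() <= DateTime.now().toMillis()) {
    candidate = candidate.plus({ weeks: 1 });
  }

  const runs: DateTime[] = [];
  for (let i = 0; i < count; i++) {
    runs.push(candidate);
    // plus({ weeks: 1 }) keeps the local wall-clock time across DST changes.
    candidate = candidate.plus({ weeks: 1 });
  }
  return runs;
}

// Example: preview the next 3 Monday 07:00 sends for a Denver branch,
// matching the "preview of the next 3 run times" criterion below.
nextRunTimes("America/Denver", 1, 7, 3).forEach((dt) => console.log(dt.toISO()));
```
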

Acceptance Criteria
Weekly Scorecard Generation (Branch and Region)
Given a configured weekly reporting window for a branch or region When the scorecard is generated Then it includes template drift %, override rate %, median approval latency (hours), and margin variance % at aggregate and per-job levels for the period And it includes week-over-week deltas for each metric with up/down indicators And it includes peer ranking position and percentile (branch ranked within region; region ranked company-wide) And it highlights the top 3 positive and negative margin variance outliers with job identifiers And all values are calculated from the same data version and timestamped And for tenants with up to 500 entities, generation completes within 5 minutes of the scheduled time
Time-Zone Aware Scheduling
Given a weekly schedule configured in an IANA time zone for a branch or region When the scheduled time occurs Then the scorecard is delivered at the configured local wall-clock time And daylight saving transitions are handled without duplicate or missed sends And the reporting window aligns to the local time zone week boundary And schedule updates made at least 1 hour before the next run take effect for that run; otherwise for the subsequent run And a preview of the next 3 run times matches the computed schedule
Distribution List Management and Access Control
Given a user with Scorecard Admin permission When they create or edit a distribution list Then they can add or remove email recipients, assign the list to branches and/or regions, and save changes And email addresses are validated and deduplicated per send across all assigned lists And only users with Scorecard Admin permission can modify lists; viewers can read but not change And changes are audited with who, what, and when and are effective for the next scheduled send And a test send can be triggered to a specified address without affecting the schedule
PDF Attachment Generation and Live Dashboard Deep Links
Given a generated scorecard for a branch or region When the email is composed Then a PDF attachment is included with consistent branding and the reporting period in the header And the file name follows the pattern: <entity_type>-<entity_name>-<week_end_YYYYMMDD>.pdf And embedded dashboard links open the live Compliance Pulse view filtered to the same entity and period And links require authentication and, if not authenticated, redirect to login then return to the filtered view And the PDF renders all tables and charts legibly on letter/A4 paper and is ≤ 5 MB
Delivery Retries and Failure Notifications
Given a transient delivery failure (e.g., SMTP 4xx or timeout) When sending a scorecard email Then the system retries up to 3 times with exponential backoff (1m, 5m, 15m) And on permanent failure (e.g., SMTP 5xx or DNS), the send is marked Failed without further retries And a failure notification is sent to the tenant notification channel and Scorecard Admins including error code, recipients, entity, and schedule time And all attempts and outcomes are logged and visible in delivery history And partial failures (some recipients) do not block other recipients
Manager Digest Mode Bundling Multiple Branches
Given a manager is assigned multiple branches and digest mode is enabled When the scheduled digest send occurs Then the manager receives a single email summarizing aggregate metrics and week-over-week deltas across their branches And the email includes a single attached PDF with a section per branch and a table of contents And the digest email size (including attachments) does not exceed 20 MB; if exceeded, attachments are omitted and secure links are included instead And dashboard deep links preserve branch filters when navigating from digest sections And the digest respects the manager’s configured time zone and schedule
Scorecard Archiving and Retrieval
Given a scorecard email is sent (successfully or with partial failures) When archiving is performed Then the archive stores the exact PDF(s), subject, body, recipients, entity, period, send timestamp, and delivery outcomes And the archive entry has a tamper-evident hash and unique ID And archived scorecards are searchable by entity, recipient, status, and week range and are retrievable within 60 seconds of send And authorized users can view, download, and export archives; unauthorized users cannot And archives are retained for at least 12 months or per tenant retention policy
Audit Pack Export & Evidence Retention
"As a compliance officer, I want downloadable audit packs with immutable evidence so that I can satisfy carrier and regulatory audits and resolve disputes quickly."
Description

Enable one‑click export of audit packs for selected periods, branches, or alerts containing flagged estimates, change logs, approval timestamps, template versions used, override details, manager comments, and supporting screenshots or attachments. Produce a ZIP with an index, metadata manifest, and checksums for integrity. Store immutable evidence snapshots with configurable retention (up to 7 years) and optional WORM storage. Provide export watermarking, access logging, and download expiration for compliance.
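For the integrity pieces, a minimal Node sketch that builds and later verifies a checksums file over an export directory; the `<sha256>  <relative_path>` line format is an assumption modeled on common checksum-file conventions, not the final spec.

```typescript
// Sketch: build checksums.txt for an export root and verify it afterwards.
import { createHash } from "node:crypto";
import { readFileSync, readdirSync, statSync } from "node:fs";
import { join, relative } from "node:path";

function sha256(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

function* walk(dir: string): Generator<string> {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (statSync(full).isDirectory()) yield* walk(full);
    else yield full;
  }
}

/** Produce one "<sha256>  <relative_path>" line per file under the export root. */
function buildChecksums(root: string): string {
  const lines: string[] = [];
  for (const file of walk(root)) {
    lines.push(`${sha256(file)}  ${relative(root, file)}`);
  }
  return lines.join("\n") + "\n";
}

/** Recompute hashes; returns the exact paths that are modified, missing, or corrupted. */
function verifyChecksums(root: string, checksumsTxt: string): string[] {
  const failures: string[] = [];
  for (const line of checksumsTxt.trim().split("\n")) {
    const [expected, relPath] = line.split(/\s{2}/);
    try {
      if (sha256(join(root, relPath)) !== expected) failures.push(relPath);
    } catch {
      failures.push(relPath); // missing file counts as a failure
    }
  }
  return failures;
}
```
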

Acceptance Criteria
One-Click Export for Period and Branch Selection
Given a user with Export permission selects a date range and one or more branches and clicks "Export Audit Pack" When the export job completes Then a ZIP is available for download within 5 minutes for up to 10,000 estimates in scope And the ZIP contains at minimum: /index.html, /manifest.json, /checksums.txt, and /files/* And only flagged estimates within the selected period/branches are included And each estimate directory contains: estimate.pdf, changelog.json, approvals.csv (ISO 8601 UTC), template_version.txt, overrides.csv, comments.md, attachments/* And manifest.json records export_id, requested_by, created_at (UTC), selection filters, item counts by artifact type And counts in manifest.json match actual files present
Integrity Manifest and Checksum Verification
Given a completed export ZIP When SHA-256 hashes in checksums.txt are recalculated for each file Then every hash matches the file contents And manifest.json includes for each file: relative_path, sha256, size_bytes, mime_type And if any file is modified, missing, or corrupted Then verification fails and identifies the exact paths with mismatched hashes
Immutable Evidence Snapshot with Configurable Retention and WORM
Given evidence snapshots are enabled with a retention policy between 1 and 84 months When an export completes Then an immutable snapshot of all included artifacts and manifest is stored with the policy applied And when WORM is enabled for the tenant or bucket Then no delete or update operations are permitted on snapshot objects until retention expires And placing a Legal Hold prevents deletion regardless of retention expiry until the hold is removed And upon retention expiry Then objects are purged within 24 hours and an audit log entry is recorded with export_id and object count
Authorization and Access Logging for Export Actions
Given role-based access control is configured When a user without Export permission attempts to create or download an audit pack Then the action is denied with HTTP 403 and is logged When a user with Export permission creates or downloads an export Then an immutable access log entry is recorded with export_id, user_id, role, action (CREATE/DOWNLOAD/DENY), timestamp (UTC), ip, user_agent And revoking a user's Export permission immediately blocks further downloads for non-expired links
Watermarking of Exported Documents and Images
Given watermarking is enabled (default ON) When an audit pack is exported Then all PDFs and images in the ZIP are watermarked with: product name, export_id, requester email, and UTC timestamp And the watermark is semi-transparent, non-destructive, and does not obscure text (minimum 12% opacity, diagonal placement) And original source files stored for operations remain unwatermarked And disabling watermarking (by tenant policy) results in unwatermarked exports and is recorded in manifest.json
Time-Bound Download Links and Expiration Enforcement
Given a completed export When a signed download URL is generated with an expiry between 15 minutes and 30 days Then the link is valid only until the expiry timestamp (UTC) and is single-tenant scoped When a request arrives after expiry Then the service returns HTTP 410 Gone and logs the attempt And reissuing a link creates a new signed URL with a new expiry and invalidates the prior link And rate limiting enforces a maximum of 5 downloads per minute per user per export
Alert-Scoped Audit Pack Export
Given a user views a specific compliance alert with an associated set of flagged estimates When the user clicks "Export Audit Pack" from the alert context Then the resulting ZIP includes only estimates tied to that alert And manifest.json includes alert_id, alert_type, and estimate_count for the alert And for alerts with more than 5,000 estimates Then the export runs as a background job and the user receives an in-app notification and email with the download link upon completion
Margin Variance Benchmarking & Controls
"As a finance leader, I want margin variance benchmarked with control limits so that I can intervene before profitability erodes."
Description

Calculate margin variance versus configurable benchmarks by region, branch, job type, and season, with control charts to detect sustained variance beyond control limits. Allow exclusion filters for promotional periods or extraordinary events and annotate charts with known drivers. Surface leading indicators and contributor analysis (template changes, frequent overrides, estimator mix) to guide corrective actions. Expose APIs for finance systems to pull benchmarked metrics.
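A minimal sketch of the control-limit math, assuming conventional three-sigma Shewhart limits over the trailing 12 completed periods (the acceptance criteria below fix the window, not the sigma multiplier).

```typescript
// Sketch: control limits and sustained-variance detection for margin variance.
interface ControlLimits {
  mean: number;
  ucl: number; // upper control limit
  lcl: number; // lower control limit
}

function controlLimits(variances: number[], window = 12, k = 3): ControlLimits {
  const recent = variances.slice(-window); // last N completed periods
  const mean = recent.reduce((s, v) => s + v, 0) / recent.length;
  const sd = Math.sqrt(
    recent.reduce((s, v) => s + (v - mean) ** 2, 0) / recent.length,
  );
  return { mean, ucl: mean + k * sd, lcl: mean - k * sd };
}

/** Sustained-variance rule: 3 consecutive points beyond either limit. */
function sustainedBreach(points: number[], limits: ControlLimits, run = 3): boolean {
  let streak = 0;
  for (const p of points) {
    streak = p > limits.ucl || p < limits.lcl ? streak + 1 : 0;
    if (streak >= run) return true;
  }
  return false;
}

// Example: variance = realized_margin - benchmark_margin, one value per period.
const variance = [0.4, -0.2, 0.1, 0.3, -0.1, 0.0, 0.2, -0.3, 0.1, 0.2, -0.2, 0.1];
const limits = controlLimits(variance);
console.log(limits, sustainedBreach([...variance, 2.5, 2.7, 2.6], limits));
```
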

Acceptance Criteria
Configure Margin Benchmarks by Region/Branch/Job Type/Season
Given I am an Admin with Benchmark Management permission When I create or update a Gross Margin (%) benchmark for a specific combination of Region, Branch, Job Type, and Season with defined effective start and end dates Then the benchmark is saved with a unique ID, version, and effective window And precedence is applied as Branch > Region > Job Type > Season > Global when multiple benchmarks could apply And overlapping effective windows for the same specificity are rejected with a validation error identifying the conflicting records And a change log entry is captured with user, timestamp, prior values, new values, and required change reason And the new benchmark is applied to variance calculations for jobs whose close date falls within the effective window And unauthorized users cannot create or modify benchmarks and receive a 403 error
Compute Margin Variance and Control Charts with Sustained Variance Detection
Given realized margin data and applicable benchmarks exist for the selected dimensions and time grain (day/week/month) When I load the Control Chart for a Region/Branch/Job Type/Season filter Then the system calculates variance = realized_margin - benchmark_margin for each period to two decimal places And displays mean, UCL, and LCL computed from the last 12 completed periods, omitting any data removed by active exclusion filters And flags any point beyond UCL or LCL with a red marker and tooltip showing values And creates a Sustained Variance alert when 3 consecutive points exceed UCL or LCL within the last 30 calendar days And renders the chart within 2 seconds for datasets up to 10,000 points on a standard network And the data and limits match a server-side recomputation within a tolerance of ±0.01 margin points
Exclude Promotional or Extraordinary Periods from Variance and Charts
Given exclusion filters support date ranges and tagged events (e.g., "Promo", "Storm Surge", "Extraordinary") When I apply an exclusion for a date range and/or one or more event tags Then excluded jobs are removed from variance calculations and from control limit baselines And the UI shows an "Exclusions active" badge and the percent of observations excluded And exports (CSV/PDF) and API responses reflect the same exclusions And an audit entry records who applied the exclusion, scope, and reason And removing the exclusion immediately recalculates metrics and limits within 2 seconds
Annotate Charts with Known Drivers and Persist in Exports
Given I have permission to manage annotations When I add an annotation to a specific period or range with a driver label, description, and optional link (e.g., change request ID) Then an annotation marker appears on the chart and shows details on hover including author and timestamp And the annotation is persisted, versioned on edit, and can be soft-deleted with recovery for 30 days And annotations are included in CSV and PDF exports with period/range coordinates and metadata And annotations are retrievable via API for the same filters applied to the chart
Leading Indicators and Contributor Analysis for Variance Drivers
Given contributor signals exist for template changes, override frequency, and estimator mix When I open the Contributors panel for a selected period or range Then I see a ranked list or waterfall of contributors with effect size in margin points and percent And the signed effects sum to within ±0.10 margin points of the observed variance for the selection And each contributor supports drill-down to the underlying job list filtered appropriately And contributor definitions and calculation windows (default last 90 days) are displayed in tooltips And the panel updates within 1 second when changing filters or time grain
Finance API: Pull Benchmarked Margin Metrics with Filters
Given I have a valid API key with scope finance.read When I call GET /api/v1/metrics/margin-variance with query params region, branch, jobType, season, from, to, grain, include=benchmarks,controlLimits,contributors,exclusions Then I receive a 200 response within 1.5 seconds for <=1000 periods with JSON containing periods[].{timestamp, realizedMargin, benchmarkMargin, variance, UCL, LCL}, exclusions[], and contributors[] when requested And unauthorized requests return 401, insufficient scope returns 403, invalid params return 400 with field-level errors, and rate limits are enforced at 600 req/min with 429 responses And all numeric values include units and 2-decimal precision; timezone is indicated via X-Timezone header And results respect benchmark precedence and active exclusions for the requested scope
Role‑Based Access & Data Permissions
"As a compliance admin, I want role‑based permissions and audit logs so that sensitive information is protected and every access is traceable."
Description

Implement RBAC and data scoping for Compliance Pulse, restricting visibility by role (estimator, manager, compliance, finance) and organizational unit (branch, region). Enforce field‑level masking for sensitive data in dashboards and exports, and require elevated permissions for audit pack downloads. Record comprehensive access and export logs, with anomaly detection for unusual access patterns. Integrate with existing SSO/IdP groups and support SCIM provisioning for automated user management.
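A minimal sketch of server-side field masking applied before dashboard or export payloads leave the service, so non-Finance roles never receive raw values; the sensitive-field list follows the description above, and the row shape is illustrative.

```typescript
// Sketch: role-aware, irreversible field masking for API/export serialization.
const SENSITIVE_FIELDS = ["unitCost", "margin", "netProfit"] as const;

type Role = "estimator" | "manager" | "compliance" | "finance";

function maskForRoles<T extends Record<string, unknown>>(row: T, roles: Role[]): T {
  if (roles.includes("finance")) return row; // Finance sees unmasked values
  const masked: Record<string, unknown> = { ...row };
  for (const field of SENSITIVE_FIELDS) {
    if (field in masked) masked[field] = "••••"; // raw value dropped server-side
  }
  return masked as T;
}

// Example: the same row serialized for two different role sets.
const row = { jobId: "J-1042", branch: "Dallas", margin: 0.31, unitCost: 182.5 };
console.log(maskForRoles(row, ["manager"]));            // margin/unitCost masked
console.log(maskForRoles(row, ["manager", "finance"])); // unmasked
```

Masking at serialization time (rather than in the client) is what makes the "irreversibly masked, no client-side decryption" export criterion below achievable.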

Acceptance Criteria
Role-to-Permission Mapping and Enforcement
Given a user with role Estimator, when accessing any Compliance Pulse page or API, then only View Dashboards is allowed and all other actions (exports, audit pack downloads, admin settings) are blocked (UI disabled, API returns 403). Given a user with role Manager, when accessing Compliance Pulse, then View Dashboards and Export Compliance Dataset are allowed, and Audit Pack Download is blocked by default (API returns 403). Given a user with role Compliance, when accessing Compliance Pulse, then View Dashboards and Export Compliance Dataset are allowed, and Audit Pack Download is blocked by default (API returns 403). Given a user with role Finance, when accessing Compliance Pulse, then View Dashboards and Export Compliance Dataset are allowed, and Audit Pack Download is blocked by default (API returns 403). Given a user holding multiple roles, when permissions conflict, then the most permissive action is applied, except for actions that explicitly require an elevated permission, which remain blocked unless that permission is granted. Given a change to a user’s roles, when the user re-authenticates or their token refreshes, then permission changes take effect within 5 minutes.
Organizational Unit Data Scoping (Branch/Region)
Given a user assigned to Branch A, when viewing dashboards or exporting data, then only Branch A data is visible/exported and selecting other branches is disabled; API requests referencing other branches return 403. Given a Region Manager for Region X, when viewing or exporting, then data includes branches within Region X only; attempts to access outside the region return 403 and are logged. Given a Global scope user, when viewing or exporting, then data across all branches and regions is accessible. Given a user’s OU assignments change, when they re-authenticate or their token refreshes, then OU scoping updates within 5 minutes. Given any export or audit pack generation, then the output contains only data within the user’s current OU scope.
Field-Level Masking in Dashboards and Exports
Given a user without the Finance role, when viewing sensitive fields (e.g., unit cost, margin, net profit), then values are masked in UI (e.g., “••••”) and masked in API/export payloads. Given a user with the Finance role, when viewing the same fields, then values are unmasked in UI and API/export payloads. Given a user with multiple roles including Finance, when viewing sensitive fields, then values are unmasked; without Finance, they remain masked. Given masked data in exports, when the file is opened, then sensitive values are irreversibly masked (no client-side decryption possible). Given a role change that removes Finance, when the user’s session refreshes, then previously cached unmasked values are cleared and masked values are shown. Given aggregated metrics (e.g., average margin) for non-Finance roles, when policies allow aggregation, then only aggregates are visible while underlying row-level sensitive fields remain masked.
Elevated Permissions for Audit Pack Downloads
Given a user without the AuditPack.Download permission, when attempting to download an audit pack, then the UI action is disabled and the API returns 403 with error code AUDIT_PACK_FORBIDDEN. Given a user with the AuditPack.Download permission, when initiating a download, then the audit pack is generated within 2 minutes, the signed download URL expires after 10 minutes, and a successful download is recorded. Given a user with the AuditPack.Download permission, when requesting an audit pack for data outside their OU scope, then the API returns 403 and no file is generated. Given any generated audit pack, when downloaded, then its content respects OU scoping and field masking policies unless the user also holds Finance. Given a shared or expired download link, when accessed by another user or after expiry, then the API returns 401/410 and no data is leaked.
Comprehensive Access/Export Logging and Anomaly Detection
Given any dashboard view, filter change, export request, or audit pack download attempt, then an immutable log entry is written within 5 seconds including user ID, roles, OU scope, action type, resource identifier, UTC timestamp, IP, user agent, result (success/failure), record count or file size, and error code if applicable. Given access logs, when queried by users with the Compliance role, then logs are searchable by time range, user, action, and OU scope; other roles cannot access logs (403). Given log retention policies, then logs are retained for 365 days and are tamper-evident (WORM or hash-chained) and exportable for audit. Given anomaly thresholds, when a user downloads >3 audit packs in 1 hour or exports >250,000 rows in 24 hours or triggers >5 cross-OU access denials in 24 hours, then an alert is created within 1 minute and notifications are sent to designated compliance managers. Given alert triage, when a compliance manager marks an alert as reviewed, then the alert status updates and is recorded in the audit trail. Given a temporary whitelist for a user, when anomalies match whitelist criteria within the whitelist period, then alerts are suppressed for that user and period.
SSO/IdP Group Mapping and Enforcement
Given SSO via the tenant’s IdP (SAML/OIDC), when a user signs in, then application roles are assigned from IdP group claims according to the configured mapping; if no mapped group is present, access is denied (403) and logged. Given changes to a user’s IdP group membership, when the user signs in again or their token refreshes, then role changes take effect within 5 minutes and any removed roles are revoked. Given a disabled user in the IdP, when attempting to sign in, then access is denied. Given SSO enforcement enabled for the tenant, when accessing the login page, then password-based authentication is disabled and only SSO is available. Given a user who belongs to multiple mapped groups, when roles are resolved, then the resulting role set is the union of mapped roles subject to elevated-permission exceptions.
SCIM Provisioning and Deprovisioning
Given SCIM enabled, when a user is created in the IdP with mapped groups and OU attributes, then the user is provisioned in RoofLens within 60 seconds with corresponding roles and OU assignments. Given updates to a user’s attributes (name, email, roles, OU) in the IdP, when SCIM updates are sent, then changes are reflected in RoofLens within 5 minutes. Given a user is deprovisioned or removed from all mapped groups in the IdP, then the RoofLens account is disabled within 60 seconds and active sessions are revoked within 5 minutes. Given transient SCIM delivery failures, when provisioning events cannot be delivered, then retries occur with exponential backoff and an admin notification is sent if unresolved after 15 minutes. Given conflicting SCIM updates, when simultaneous changes occur, then last-write-wins resolution is applied deterministically and a warning is logged.

Badge Provisioner

Central console to issue WebAuthn passkeys and scannable crew badges with role scopes, branch restrictions, and expirations. Supports bulk provisioning, print‑ready QR/NFC, instant revoke, and live status. Cuts IT overhead while enforcing least‑privilege access for field uploads.

Requirements

WebAuthn Passkey Issuance & Device Binding
"As an IT admin, I want to issue WebAuthn passkeys to crew and adjusters so that they can authenticate securely without passwords during field uploads and console access."
Description

Implement end-to-end WebAuthn registration and authentication ceremonies to issue passkeys bound to user accounts and physical authenticators (platform and cross‑platform). Support attestation, resident/non‑resident keys, UV/PIN policies, credential backup/duplication flags, and origin restrictions tied to RoofLens console and field uploader surfaces. Persist public key credentials and metadata mapped to badge identities and enforce lifecycle operations (create, rotate, rebind, delete) with admin approval workflows and recovery options. Integrate with existing auth to mint session tokens limited by badge role scope, branch restrictions, and expiration. Provide telemetry on success/failure, device type, and risk signals for policy decisions.
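For orientation, a browser-side sketch of the registration ceremony using the standard WebAuthn API; the challenge/verify endpoints are hypothetical, and the option values mirror the policies described above (rpId, UV required, resident-key preference).

```typescript
// Sketch: browser-side passkey registration against assumed RoofLens endpoints.
async function registerPasskey(badgeId: string, displayName: string): Promise<void> {
  // 1. Fetch a server-generated challenge (hypothetical endpoint).
  const { challenge } = await (await fetch("/webauthn/registration-challenge")).json();

  const options: CredentialCreationOptions = {
    publicKey: {
      rp: { id: "rooflens.com", name: "RoofLens" },
      user: {
        id: new TextEncoder().encode(badgeId), // user.id = badge identity
        name: displayName,
        displayName,
      },
      challenge: Uint8Array.from(atob(challenge), (c) => c.charCodeAt(0)),
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        residentKey: "preferred",     // per residentKey policy
        userVerification: "required", // per UV policy
      },
      attestation: "direct",          // attestation statement for metadata checks
      timeout: 60_000,                // <= 60s per the criteria below
    },
  };

  // 2. Run the create() ceremony on the authenticator.
  const credential = (await navigator.credentials.create(options)) as PublicKeyCredential;

  // 3. Send the attestation response for server-side verification and persistence
  //    of credentialId, publicKey, aaguid, flags, etc. (hypothetical endpoint).
  await fetch("/webauthn/registration-verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      id: credential.id,
      rawId: btoa(String.fromCharCode(...new Uint8Array(credential.rawId))),
      type: credential.type,
    }),
  });
}
```
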

Acceptance Criteria
Passkey Registration with Attestation and UV Enforcement
Given a badge identity with UV policy=required, attestation policy=preferred, and residentKey policy=preferred When the user initiates WebAuthn registration from https://console.rooflens.com using a platform authenticator Then the server issues a create() challenge with rpId=rooflens.com, user.id=badgeId, authenticatorSelection honoring policy, timeout<=60000ms And verifies clientData.origin=https://console.rooflens.com and rpIdHash=hash(rooflens.com) And validates the attestation statement via FIDO Metadata; if policy=preferred and trust is unavailable, proceed; if policy=required and trust fails, reject with 400 and no credential stored And enforces userVerification=required; if not satisfied, reject with 400 and no credential stored And persists credentialId, publicKey, aaguid, signCount, flags (residentKey, backupEligible, backupState), deviceType (platform/cross-platform), attestationTrustResult, mapped to the badge identity And completes server-side processing in <= 2,000 ms
Usernameless Field Upload Sign-in with Resident Credential
Given a non-expired, non-revoked badge configured with residentKey=required for field uploads and a previously registered resident credential When the user opens https://uploader.rooflens.com and initiates sign-in without providing a username Then the server calls navigator.credentials.get() with allowCredentials=[], rpId=rooflens.com, userVerification=required And verifies origin=https://uploader.rooflens.com, rpIdHash=hash(rooflens.com), signature, and credential linkage to the badge And validates branch restriction against the user's selected/assigned branch; if mismatch, deny with 403 and no token minted And updates and validates signCount to detect cloned authenticators; if counter regression is detected and the authenticator is not known to be counter-less, deny and flag risk=high And establishes an authenticated session upon success
Authenticated Session Token Scoped to Badge Role, Branch, and Expiration
Given a successful WebAuthn authentication for a badge When the server mints a session token Then the token contains claims: sub=badgeId, roles=badge.roleScopes, branches=badge.branchRestrictions, deviceId=credentialId, authn_method=webauthn, uv=present And the token exp is set to min(badge.expiresAt, now+12h) And tokens are not issued if the badge is expired or revoked And refresh tokens (if enabled) are rotated on each use and invalidated on revoke/rotate within <=60 seconds propagation
Passkey Lifecycle Operations with Admin Approval and Audit
Given an admin initiates a credential rotate, rebind, or delete for a badge and approvalRequired=true When a second admin approves the request Then the system enforces the workflow: pending->approved->active, with audit entries recording actor, action, target badge, credentialId, reason, timestamps And on rotate, the new credential becomes active and the old credential is invalidated immediately after successful authentication with the new credential, or automatically after 24 hours if not used And on delete, the credential is revoked immediately and cannot be used to authenticate And all changes propagate to all auth surfaces in <=60 seconds And attempts to authenticate with revoked credentials return 401 and are logged with reason=revoked
Backup Eligibility and Duplication Policy Enforcement
Given a role policy that disallows backup-eligible or backed-up credentials When a user attempts registration and the authenticator reports backupEligible=true or backupState=true Then the registration is rejected with 400 and reason=backup_not_allowed and no credential is stored And when the policy allows backupEligible but disallows backupState=true, registration of a credential with backupState=true is rejected; backupEligible=true and backupState=false is accepted and recorded And when policy allows both, the flags are stored and exposed in admin views and telemetry
Telemetry and Risk Signals for Registration and Authentication
Given any WebAuthn registration or authentication attempt When the event is processed Then telemetry records outcome (success/failure), timestamp, user agent, IP, geolocation (country/region), deviceType (platform/cross-platform), aaguid (if available), attestationTrustResult, UV result, backup flags, signCount behavior, surface (console/uploader), and latency And a risk score is computed using rules (e.g., impossible travel, origin mismatch attempt, counter regression); if risk>=8/10, require step-up or deny per policy and flag in audit And telemetry is queryable in the console within <=5 seconds of the event and retained for >=365 days And admins can export CSV for a selected date range and filters
Secure Recovery and Device Rebind
Given a user has lost access to their authenticator and the badge is not revoked When an admin completes identity verification per policy and issues a one-time recovery code Then the recovery code is valid for 15 minutes, single use, and scoped to rebind only And the user must complete WebAuthn registration on a new authenticator with UV=required within the validity window And upon successful rebind, all previous credentials for the badge are revoked and cannot authenticate And all actions are logged with actor, target badge, and timestamps; notifications are sent to the badge email and admin
Role & Branch Scoping Policy Engine
"As a branch manager, I want to restrict a badge to my branch and role so that crew can only upload to jobs we own and cannot see pricing or measurements."
Description

Deliver a centralized policy model to express least‑privilege access using role scopes (e.g., Field Uploader, Estimator, Adjuster), branch restrictions (office/region/site), and time‑bound expirations. Enforce policies consistently across API endpoints (upload, job access, estimate export) and UI components via middleware and attribute‑based checks. Support object‑level constraints (upload‑only, no pricing view), policy versioning, and atomic updates with rollback. Provide reusable evaluators for synchronous checks at request time and for token minting (claims embedding policy digests). Supply admin UX for assigning scopes during provisioning and REST/GraphQL APIs for automation.
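A minimal sketch of a reusable synchronous evaluator covering role scope, hierarchical branch paths, expiration, and object-level constraints; the claim shape mirrors the token-minting requirement, and the names are illustrative.

```typescript
// Sketch: request-time policy evaluation over token claims.
interface PolicyClaims {
  scopes: string[];      // e.g., ["FieldUploader"]
  branches: string[];    // e.g., ["region:west/office:phx"], hierarchical paths
  constraints: string[]; // e.g., ["upload_only", "no_pricing_view"]
  exp: number;           // unix epoch seconds
}

interface Decision {
  allow: boolean;
  reason?: string;
}

function evaluate(
  claims: PolicyClaims,
  action: string,         // e.g., "upload.create", "pricing.read"
  resourceBranch: string, // branch path of the target object
  requiredScope: string,
  now = Math.floor(Date.now() / 1000),
): Decision {
  if (now > claims.exp) return { allow: false, reason: "token_expired" };
  if (!claims.scopes.includes(requiredScope))
    return { allow: false, reason: "missing_scope" };
  // A claim of "region:west" covers "region:west/office:phx" (prefix match).
  const inBranch = claims.branches.some(
    (b) => resourceBranch === b || resourceBranch.startsWith(b + "/"),
  );
  if (!inBranch) return { allow: false, reason: "branch_out_of_scope" };
  if (claims.constraints.includes("no_pricing_view") && action.startsWith("pricing."))
    return { allow: false, reason: "constraint_no_pricing_view" };
  return { allow: true };
}
```

The same evaluator can run at token-minting time (to embed the decision inputs as claims and a policy digest) and at request time (to enforce them), which keeps UI and API enforcement consistent.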

Acceptance Criteria
API Role Scope Enforcement
Given a user has scope=FieldUploader and a valid token When they POST /api/v1/uploads with a jobId in an assigned branch Then the API returns 201 Created and the upload is associated to the job And When they GET /api/v1/jobs/{id} Then the API returns 403 Forbidden And When they POST /api/v1/estimates/{id}/export Then the API returns 403 Forbidden Given a user has scope=Estimator and a valid token When they GET /api/v1/jobs/{id} within an assigned branch Then the API returns 200 OK with job details And When they POST /api/v1/estimates/{id}/export within an assigned branch Then the API returns 202 Accepted and queues the export And When they POST /api/v1/uploads Then the API returns 403 Forbidden Given a user has scope=Adjuster without EditClaims sub-scope When they PATCH /api/v1/claims/{id} Then the API returns 403 Forbidden
Branch Restriction Enforcement
Given a token with branches=["region:west/office:phx"] and scope=Estimator When the user GETs /api/v1/jobs/{jobPhx} where jobPhx.branch=="region:west/office:phx" Then the API returns 200 OK And When the user GETs /api/v1/jobs/{jobTuc} where jobTuc.branch=="region:west/office:tuc" Then the API returns 403 Forbidden Given a token with scope=FieldUploader and branches=["site:den-001"] When the user POSTs /api/v1/uploads with jobId bound to site:den-001 Then the API returns 201 Created And When the user POSTs /api/v1/uploads with jobId bound to site:den-002 Then the API returns 403 Forbidden Given a token with scope=Admin and branchScope=Global When the user queries GraphQL { jobs(branch:"region:east") { id } } Then results include jobs from any branch
Time-Bound Expiration and Instant Revoke
Given a badge with exp=T (UTC) and a minted access token with exp=T When any request is made at time > T (accounting for 60s clock skew) Then the API returns 401 Unauthorized with error="token_expired" And the UI session is forced to sign out on next middleware check Given an active badge When an admin clicks Revoke in the console Then live status changes to Revoked within 10 seconds And any subsequent API request using that badge's credentials returns 401 Unauthorized within 30 seconds And scanning the badge QR/NFC shows "Access revoked" and no policy details And an audit log entry records who revoked, when, and reason
Object-Level Constraint: Upload-Only, No Pricing View
Given a user has scope=FieldUploader and constraint=no_pricing_view When they navigate to a job's estimate page in the UI Then pricing fields, totals, and export buttons are not rendered And attempts to invoke client routes for pricing are redirected with a toast explaining insufficient permissions And When they call GET /api/v1/estimates/{id}/pricing Then the API returns 403 Forbidden Given a user has constraint=upload_only When they POST /api/v1/uploads with a valid jobId in their branch Then the API returns 201 Created And When they PATCH /api/v1/jobs/{id} Then the API returns 403 Forbidden And GraphQL resolvers annotated with @requiresPricing deny access with code=FORBIDDEN
Policy Versioning, Atomic Update, and Rollback
Given active policy version=v1 and a draft=v2 When an admin publishes v2 Then all policy evaluators switch to v2 atomically within 15 seconds And no single request evaluates mixed versions (correlationId shows one version per request) And the system records an audit event policy_published { from:v1, to:v2, publishedBy } Given v2 is active When an admin triggers rollback to v1 Then all evaluators revert to v1 within 30 seconds And evaluations referencing v2's digest are rejected with error="inactive_policy" until re-minted And an audit event policy_rollback { from:v2, to:v1, reason } is stored And uptime is maintained with no 5xx rate increase > 0.1% during switchovers
Token Minting with Policy Digest Claims
Given a badge is provisioned with scopes, branches, constraints, and exp When an access token is minted Then the token includes claims: scopes[], branches[], constraints[], exp, policyVersion, policyDigest (sha256), sub, jti And the digest equals the server's active policy digest at mint time And the token is signed with kid referencing the current key Given a request presents a token When the middleware validates it Then it verifies signature, exp, and that policyDigest matches an active policy digest And if digest mismatches, the request is rejected with 401 Unauthorized error="invalid_policy_digest" And successful evaluations log policyVersion and digest to request metrics
Admin UX and Automation APIs for Provisioning
Given an admin creates a badge in the console When they select role scopes, branches, and expiration Then the UI validates combinations, prevents empty scope/branch, and previews the effective policy summary And on Save, the system returns a badge with ID, policyVersion, policyDigest, QR/NFC payload, and status=Active Given a CSV with columns: email, scopes, branches, expiry, constraints When the admin bulk imports it Then at least 95% of valid rows provision successfully, invalid rows are reported with line-level errors, and no partial creation occurs per row Given API clients use REST/GraphQL When they call POST /api/v1/badges or mutation createBadge with scopes, branches, expiry, constraints Then the service provisions badges identically to the console and returns IDs, policyVersion, policyDigest And all actions create audit entries and are visible in Live Status with filter by branch/scope
Print‑ready QR & NFC Badge Generation
"As an operations coordinator, I want to print QR/NFC crew badges in bulk so that field teams can scan in and upload photos without logging into a laptop."
Description

Generate secure, scannable crew badges with templated layouts (name, role, branch, expiration, serial) exportable as print‑ready PDF and digital PNG/SVG. Encode QR with short‑lived signed tokens or verification URLs and program NFC tags with NDEF payloads that reference the same token scheme. Provide design presets and custom branding, plus batch rendering for bulk runs. Avoid storing excessive PII on the badge; rely on cryptographic verification server‑side. Integrate with supported NFC encoders and printers, and link each badge to its WebAuthn credential and policy profile for unified lifecycle management.
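A minimal sketch of the PII-free token scheme: the QR (and the NFC NDEF record) carries only an HMAC-signed verification URL with a serial and expiry, and the server resolves everything else. The host, URL shape, and claim layout are illustrative; a production scheme might use JWTs or asymmetric signatures instead.

```typescript
// Sketch: mint and verify a short-lived, PII-free badge verification URL.
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.BADGE_SIGNING_KEY ?? "dev-only-secret";

function mintQrUrl(badgeSerial: string, ttlSeconds = 300): string {
  const exp = Math.floor(Date.now() / 1000) + ttlSeconds; // short-lived by default
  const payload = `${badgeSerial}.${exp}`;                // serial + expiry only — no PII
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `https://verify.rooflens.com/b?p=${payload}&s=${sig}`;
}

function verifyQr(payload: string, sig: string): "active" | "expired" | "invalid" {
  const expected = createHmac("sha256", SECRET).update(payload).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return "invalid"; // tampered
  const exp = Number(payload.split(".")[1]);
  if (Number.isNaN(exp) || Math.floor(Date.now() / 1000) > exp) return "expired";
  return "active"; // server then checks badge status (revoked/suspended) by serial
}
```
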

Acceptance Criteria
Single Badge PDF/PNG/SVG Export
Given a user selects a layout preset and enters name, role, branch, expiration date, and serial, and applies branding, When they generate a badge, Then the system returns three assets: a 1-page print-ready PDF, a PNG, and an SVG for the same badge. And the PDF and SVG embed fonts and render the QR code as vector. And raster elements in PDF/PNG are exported at >=300 DPI. And the visible fields match the inputs exactly, including capitalization and diacritics. And exports complete within 5 seconds P95 for a single badge under normal load. And file names include the serial and revision (e.g., {serial}_{rev}).
QR Token Encoding and Server Verification
Given a generated badge, When the QR is scanned, Then it resolves to a verification endpoint using a short-lived signed token or signed URL that contains no PII. And the server verifies the signature and token freshness before returning 200 with badge_id and status=Active. And scanning an expired, revoked, or tampered token returns 401/403 without revealing PII. And token TTL is configurable and defaults to a short-lived value (minutes). And an audit log entry records scan time, badge_id, result, and requester IP.
NFC NDEF Programming and Validation
Given a supported NFC encoder is connected and a badge is selected, When the operator writes the NFC tag, Then the tag contains a single NDEF record that references the same token scheme as the QR (URL or custom type) and no PII. And the payload reads successfully on iOS and Android default readers. And a read-back verification confirms the written payload matches what was generated. And if 'Lock after write' is enabled, the tag is set to read-only; otherwise it remains writable. And if the tag capacity is insufficient or the device disconnects, the operation fails with a clear error and does not change the tag. And the tag UID (if available) is stored and linked to the badge record.
Batch Rendering for Bulk Runs
Given a CSV or API batch of up to 500 badges with complete fields, When batch render is started, Then the system generates all PDFs, PNGs, and SVGs and returns a single ZIP download plus a per-record status report. And 95% of batches of 500 complete within 10 minutes under normal load. And failures are isolated to affected records with retry guidance, without aborting the entire batch. And re-running the same batch with the same inputs is idempotent and does not create duplicate serials. And duplicate or missing serials are flagged with explicit error messages before rendering.
Design Presets and Custom Branding
Given a tenant selects a design preset or configures a custom layout, When previewing and exporting, Then the preview matches the exported assets within 2px for all elements at 100% scale. And presets include placeholders for name, role, branch, expiration, serial, and QR. And tenants can upload a logo (SVG or PNG) and select brand colors and fonts from approved lists. And presets can be saved, named, and reused across batches within the tenant. And branding assets are stored and served securely, and are embedded (not externally linked) in exports.
PII Minimization on Badge and Payloads
Given badges are generated, Then the printed and digital badge content is limited to name, role, branch, expiration, serial, and QR/NFC indicators. And the QR/NFC payload contains no PII and only a signed token or verification URL. And no email, phone, birthdate, home address, or other PII appears in the payload or encoded data. And verification retrieves user details server-side only after authorization checks. And logs and URLs do not expose PII, secrets, or signatures in query strings.
Lifecycle Linkage to WebAuthn and Policy with Instant Revoke
Given a badge is linked to a user's WebAuthn credential and policy profile, When the badge or credential is revoked, suspended, or its policy scope changes, Then newly minted tokens are blocked immediately and previously issued tokens become invalid within 30 seconds. And scanning a revoked or out-of-scope badge returns 403 with a generic message and no PII. And an audit log records the change, actor, timestamp, affected badge_id, and policy deltas. And regenerating the badge creates a new serial revision and fresh token scheme tied to the current WebAuthn credential.
Bulk Provisioning & CSV/API Import
"As an IT admin, I want to onboard seasonal crew members at once so that I can cut setup time and avoid manual data entry errors."
Description

Enable mass onboarding through CSV upload and programmatic APIs. Provide column mapping for user identity, role scope, branch, expiration, and notification preferences; validate inputs with inline error messaging and downloadable error reports. Offer dry‑run previews, idempotency keys for safe retries, and rate limits. Trigger enrollment notifications with self‑serve passkey registration links and optional manager approvals. Expose webhooks for downstream HRIS/IdP sync and support de‑duplication against existing users/badges. Maintain full audit trails for each imported record and resultant actions.
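A minimal sketch of the idempotency-key contract in the criteria below — the same key with the same payload replays the stored result, and the same key with a different payload conflicts. The in-memory Map stands in for a shared store, and the response shapes are illustrative.

```typescript
// Sketch: server-side idempotency-key handling for the bulk import API.
import { createHash } from "node:crypto";

interface StoredResult {
  payloadHash: string;
  response: unknown;
  expiresAt: number; // keys honored for 24h
}

const store = new Map<string, StoredResult>();

function handleImport(
  key: string | undefined,
  payload: unknown,
  run: () => unknown, // executes the actual import
): { status: number; body: unknown } {
  if (!key) return { status: 400, body: { error: "X-Idempotency-Key required" } };

  const payloadHash = createHash("sha256")
    .update(JSON.stringify(payload))
    .digest("hex");

  const prior = store.get(key);
  if (prior && prior.expiresAt > Date.now()) {
    if (prior.payloadHash !== payloadHash)
      return { status: 409, body: { error: "Key reuse with different payload" } };
    // Replay: no duplicate rows created, no notifications re-sent.
    return { status: 200, body: { idempotentReplay: true, result: prior.response } };
  }

  const response = run(); // executed exactly once per key+payload
  store.set(key, { payloadHash, response, expiresAt: Date.now() + 24 * 3600 * 1000 });
  return { status: 200, body: { idempotentReplay: false, result: response } };
}
```
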

Acceptance Criteria
CSV Upload Mapping and Validation Feedback
Given I am a tenant admin with Bulk Provision permissions And I upload a CSV containing columns: email, role_scope, branch, expiration_date, notification_channel And I map CSV headers to system fields When I attempt to start the import Then the system validates that required fields (email, role_scope, branch) are mapped; if any are unmapped, the import is blocked with inline messages and the unmapped fields are highlighted And the system validates each row for data types and allowed values (email format, role_scope exists, branch exists, expiration_date is a valid future date or blank, notification_channel in [email, sms, none]) And invalid rows are surfaced inline with row number, field, and error message, and a downloadable error report (CSV) is provided with the same details And valid rows are accepted while invalid rows are skipped; the UI summarizes counts of total, created, updated, skipped_invalid, deduped And the system prevents creation if 100% of rows are invalid and displays "0 rows imported"
Dry-Run Preview Without Side Effects
Given I select "Dry Run" for a CSV import When processing completes Then no users, badges, or notifications are created or modified And an audit entry is recorded for the preview with a "no-op" flag And I see a preview summary with counts that would be created, updated, deduped, skipped_invalid, requiring_approval And I can download the same error report CSV that a real run would generate
API Safety: Idempotency Keys and Rate Limiting
Given I call the Bulk Import API with header X-Idempotency-Key=K and payload P When I retry the same request with the same key K within 24 hours Then the server returns the original result with an Idempotent-Replay indicator and does not create duplicates or re-send notifications And if I reuse key K with a different payload, the server responds 409 Conflict with message "Key reuse with different payload" and no changes are applied And if no X-Idempotency-Key is provided, the request is rejected with 400 and guidance to include a key And per-tenant rate limits are enforced at 10 bulk imports per minute and 1,000 rows per request; exceeding limits returns 429 Too Many Requests with a Retry-After header and rate limit headers (Limit, Remaining, Reset)
Enrollment Notifications and Manager Approval Flow
Given an import row with notification_channel=email and manager_approval_required=true When the import is executed successfully Then the designated manager receives an approval request with row context and has 7 days to approve or reject And only upon approval is the end user sent a passkey registration link; if rejected, the badge remains inactive and the row status is "Rejected by Manager" And the passkey registration link is single-use, expires in 14 days, and status transitions are tracked (Invited, Registered, Expired) And if delivery fails (hard bounce), the row is marked "Notification Failed" and is included in the error report with a reason code
Webhooks for Downstream HRIS/IdP Sync
Given a tenant has configured a webhook endpoint with a verified secret When import processing yields events (badge.created, badge.updated, import.row_failed, approval.requested, approval.completed) Then the system delivers signed HMAC-SHA256 webhook requests including event type, delivery_id, timestamp, import_id, row_number, and idempotency_key And a 2xx response acknowledges delivery; non-2xx responses trigger retries with exponential backoff (1m, 5m, 15m, 60m) up to 10 attempts And event ordering is preserved per import_id, and at-least-once delivery is guaranteed And tenants can view delivery status and manually retry failed deliveries from the console
De-duplication Against Existing Users/Badges
Given a CSV row whose email or employee_id matches an existing user or active badge in the tenant When the import runs Then no duplicate user or badge is created; the existing record is updated according to merge rules (role_scope, branch, expiration_date, notification_channel) And if merge rules conflict with tenant policy (e.g., branch change not allowed), the row is skipped with reason "Policy Conflict" and appears in the error report And deduped and updated counts are displayed in the summary, and the row outcome includes the target entity IDs
Audit Trail for Each Imported Record and Actions
Given any bulk import (CSV or API) When processing completes (including dry runs) Then an immutable audit log is stored per row with fields: import_id, row_number, actor (user or API key), source (CSV, API), timestamp, requester_ip, mapped fields, normalized payload, validations, dedup decisions, created/updated entity IDs, notifications sent, approvals requested/decisions, and webhooks dispatched And audit entries are queryable and exportable (CSV/JSON) by import_id and time range, with role-based access control And audit data is retained for at least 7 years and cannot be edited or deleted by tenant users
Instant Revoke, Suspend & Auto‑Expiry
"As a safety officer, I want to instantly revoke a lost badge so that unauthorized uploads are blocked immediately."
Description

Provide immediate revocation and suspension of badges and associated passkeys with propagation to token introspection, upload endpoints, and verification flows. Implement automatic expiration with configurable grace windows and notifications before/after expiry. Push revocation signals to caches and edge locations; include anti‑replay via nonce tracking and token rotation. Support bulk revoke for crews, branches, or imports, and expose APIs for SIEM/incident response integrations. Display clear state changes in the console and prevent new sessions while cleanly terminating active ones.
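A minimal sketch of the one-time-nonce tracking behind the anti-replay requirement; an in-memory map with a 24-hour TTL stands in for the shared store a multi-edge deployment would need.

```typescript
// Sketch: reject any nonce presented more than once within the replay window.
const seenNonces = new Map<string, number>(); // key -> expiry (ms epoch)
const NONCE_TTL_MS = 24 * 3600 * 1000;        // criteria track reuse within 24h

function checkNonce(tokenId: string, nonce: string): "ok" | "replay_detected" {
  const key = `${tokenId}:${nonce}`; // nonce is bound to the access token
  const now = Date.now();

  const expiry = seenNonces.get(key);
  if (expiry !== undefined && expiry > now) {
    // Second presentation within the window: caller should invalidate the token
    // and emit a replay_detected security event per the criteria below.
    return "replay_detected";
  }

  seenNonces.set(key, now + NONCE_TTL_MS);

  // Opportunistic cleanup so the map does not grow without bound.
  if (seenNonces.size > 100_000) {
    for (const [k, exp] of seenNonces) if (exp <= now) seenNonces.delete(k);
  }
  return "ok";
}

// Example: first use passes, immediate reuse is flagged.
console.log(checkNonce("tok-1", "n-abc")); // "ok"
console.log(checkNonce("tok-1", "n-abc")); // "replay_detected"
```
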

Acceptance Criteria
Immediate Single-Badge Revoke Propagates and Terminates Sessions
Given a badge with active sessions and valid tokens When an admin revokes the badge via console or POST /badges/{id}/revoke Then token introspection returns active=false and reason=revoked within 5 seconds globally And upload endpoints reject requests with 401 invalid_token error=badge_revoked within 5 seconds And QR/NFC verification returns state=revoked within 5 seconds at edge locations And no new sessions can be created for that badge immediately And all active sessions for that badge are terminated within 10 seconds, including WebSockets and streaming uploads And refresh/long-lived tokens for that badge are invalidated and cannot be used And an audit log entry records actor, badge_id, timestamp, scopes, and result=success
Badge Suspension Enforces Limited Access Without Permanent Revocation
Given a badge in good standing When an admin sets state=suspended via console or API Then new session creation is blocked with 403 error=badge_suspended And uploads and verification attempts are denied with 403 error=badge_suspended across APIs within 5 seconds And existing sessions are terminated within 10 seconds And associated passkeys remain enrolled and can be re-enabled later And removing suspension restores access without re-provisioning; new tokens can be minted after unsuspend And console and API reflect state=suspended with banner and timestamps
Automatic Expiry with Configurable Grace Window and Notifications
Given a badge with expiry_at set and grace_window_hours configured When current time reaches each of 7 days, 24 hours, and 1 hour before expiry_at Then owner and org admins receive notifications (email and in-app) containing badge_id and expiry_at When current time >= expiry_at Then badge state becomes expired and token introspection returns active=false reason=expired within 5 seconds And new session creation is blocked immediately with 401 error=badge_expired And existing sessions are terminated no later than grace_window_hours after expiry_at (or 15 minutes if grace_window_hours=0) And when current time = expiry_at + 24 hours, a post-expiry notification is sent And updating expiry_at before expiry extends validity and clears pending expiry alerts
Revocation Signal Fan-out to Caches and Edge Verification within SLA
Given globally distributed caches and edge verification endpoints When a badge is revoked, suspended, or expired Then cache entries related to the badge are purged or marked stale across 95% of edge locations within 5 seconds and 99% within 30 seconds And token introspection caches for that badge are bypassed until a fresh deny decision is observed And edge QR/NFC verification reflects the new state within 5 seconds And propagation latency metrics are recorded with p95<=5s and p99<=30s
Anti‑Replay via Nonce Tracking and Token Rotation
Given upload requests must include a one-time nonce bound to the access token When the same nonce is presented more than once within 24 hours Then the request is rejected with 401 error=replay_detected and the token is invalidated And a security event replay_detected is emitted within 5 seconds Given refresh tokens are single-use When a refresh token is reused Then the request is rejected with 401 error=refresh_token_reuse and the session is revoked Given QR/NFC badge credentials include a signed counter/nonce When the same credential is scanned twice without counter advance Then subsequent scans are denied with state=replay_detected and the attempts are rate-limited to 10 per minute
Bulk Revoke by Crew, Branch, or Import with Audit and Consistency
Given an admin selects a crew, branch, or import batch with N badges When Confirm Revoke All is executed via console or API (idempotency key provided) Then all targeted badges transition to state=revoked within 15 seconds for N<=10,000 And token introspection for each badge returns active=false reason=revoked within 5 seconds And no new sessions can be created for the targeted badges immediately And all active sessions for targeted badges are terminated within 10 seconds And a progress report shows total, succeeded, failed with retriable reasons And an audit log includes a summary entry and per-badge child entries And repeating the request with the same idempotency key makes no additional changes
SIEM/Incident Response APIs for Revoke, State Query, and Event Delivery
Given a service principal with scope=incident:write When POST /ir/revoke is called with badge_ids or scope filters and an idempotency key Then the system revokes the specified badges and returns 200 with job_id and counts; unauthorized callers receive 403; rate limit is 600 rpm per org with 429 on exceed And GET /badges/{id}/state returns one of [active, suspended, revoked, expired] with timestamps and last_reason And webhook or event stream delivers events [revoked, suspended, expired, session_terminated, replay_detected] with signature, retries using exponential backoff up to 24 hours, and 99% delivered within 30 seconds And all API responses include trace_id and are described in OpenAPI; audit logs capture caller, action, and result
Live Credential Status Dashboard & Audit Logging
"As a compliance lead, I want a live view and audit history of badge activity so that we can prove chain of custody during insurance disputes."
Description

Build a real‑time dashboard listing badges and passkeys with status (active/suspended/revoked/expired), last activity, branch, role, and expiration. Include powerful filters, exports, and drill‑downs to individual events. Capture a tamper‑evident audit log of issuance, scans, verifications, uploads, policy changes, and administrative actions, with retention and privacy controls. Stream live updates via WebSockets/SSE and surface alerts for anomalies (excessive failures, out‑of‑region scans). Provide evidence packages for disputes, including signed log excerpts and hash chains for integrity verification.
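
The tamper-evident log described here can be pictured as a hash chain; below is a minimal Python sketch where each entry commits to its predecessor's hash, so any in-place edit breaks verification. The hourly Merkle roots and Ed25519 signing from the acceptance criteria would layer on top; AuditChain and its field names are illustrative.

```python
import hashlib
import json
import time

class AuditChain:
    """Append-only hash chain: every entry embeds the previous entry's
    content hash, so mutating any stored entry invalidates all later ones."""
    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event: dict) -> dict:
        prev_hash = self.entries[-1]["content_hash"] if self.entries else "0" * 64
        body = {"event": event, "ts": time.time(), "prev_hash": prev_hash}
        body["content_hash"] = hashlib.sha256(
            json.dumps({k: body[k] for k in ("event", "ts", "prev_hash")},
                       sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> int | None:
        """Return the index of the first invalid entry, or None if intact."""
        prev_hash = "0" * 64
        for i, e in enumerate(self.entries):
            expected = hashlib.sha256(
                json.dumps({"event": e["event"], "ts": e["ts"],
                            "prev_hash": prev_hash},
                           sort_keys=True).encode()).hexdigest()
            if e["prev_hash"] != prev_hash or e["content_hash"] != expected:
                return i
            prev_hash = e["content_hash"]
        return None
```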

Acceptance Criteria
Real-Time Dashboard Displays Accurate Credential Statuses
- Given an authenticated admin with branch scope loads the dashboard, when there are active, suspended, revoked, and expired credentials across multiple branches, then the grid displays one row per credential with columns: Credential ID, Type (Badge/Passkey), Status (Active/Suspended/Revoked/Expired), Branch, Role, Last Activity (RFC3339), Expiration (RFC3339).
- Given new events occur (issuance, suspension, revoke, expiration, verification), when they are processed by the backend, then the dashboard reflects the new status and timestamps within 2 seconds for the 95th percentile and within 5 seconds for the 99th percentile.
- Given the user sorts by Last Activity descending and paginates, when moving between pages of 50 rows, then response times are <= 700 ms for the 95th percentile and sort order is preserved across pages.
- Given a credential with no activity exists, when displayed, then Last Activity shows "—" and is excluded from activity-based alerts.
- Given the user's branch scope is limited, when loading the dashboard, then only credentials within permitted branches are visible and counts match the filtered scope.
Advanced Filtering and Export of Credential Records
- Given the user applies filters for Status ∈ {Active, Suspended}, Branch ∈ {Denver, Austin}, Role ∈ {Uploader}, Type ∈ {Badge, Passkey}, Expiration between [2025-01-01, 2025-12-31], and Last Activity within last 30 days, when submitted, then results match all selected predicates (logical AND) and render within 1 second for the 95th percentile.
- Given a text search on Holder Name or Credential ID, when entering "Garcia", then only rows with matching terms (case-insensitive, substring) are returned.
- Given filters are applied, when the user clicks Export, then a file is generated within 10 seconds containing only filtered rows and visible columns, available in CSV and JSON; timestamps are RFC3339, booleans are true/false, and status values are from the allowed set.
- Given more than 100,000 matching rows, when exporting, then the system prompts to export in pages of 100,000 rows, and each page export succeeds; an audit event "export_initiated" is recorded with a summary of applied filters.
- Given the user lacks permission to a branch, when filtering by that branch, then zero results are returned and export excludes that branch.
Drill-Down to Individual Credential Event Timeline
- Given a user clicks a credential row, when the details view opens, then it shows credential metadata (ID, type, holder, branch, role, status, createdAt, expiresAt) and a reverse-chronological timeline of events: issuance, scans, verifications, uploads, policy changes, and administrative actions.
- Given the timeline loads, when fetching the first 50 events, then the server responds within 500 ms (p95) and includes eventId, eventType, timestamp (RFC3339), actor (user/service), IP, geo (ISO-3166 country, region), and device (UA/fingerprint) when available.
- Given more than 50 events exist, when the user scrolls, then additional pages of 50 load via "Load more" and maintain order without duplicates.
- Given the user applies an event-type filter (e.g., verifications), when applied, then only matching events are shown and counts update accordingly.
- Given an unauthorized user deep-links to a credential detail URL, when accessed, then a 403 is returned and no metadata is leaked.
Tamper-Evident Audit Log Integrity and Verification
- Given an audit event is written, when persisted, then it is assigned a content hash (SHA-256) and linked to the previous event via hash chain; an hourly Merkle root is computed and signed with a platform Ed25519 key.
- Given any audit dataset mutation is attempted in storage, when verification runs, then the chain verification endpoint detects the break and returns non-OK with the first invalid eventId.
- Given a client requests /audit/verify?from=2025-09-01T00:00:00Z&to=2025-09-01T12:00:00Z, when processed, then the API returns Merkle proofs, signed roots, and recomputed hashes that validate all events in range within 15 seconds for up to 1,000,000 events.
- Given clock skew exists, when events are ingested, then timestamps are normalized to UTC RFC3339 with recorded source clock offset; verification uses stored canonical timestamps.
- Given key rotation occurs, when signing hourly roots post-rotation, then certificates/keys are versioned and verification includes the correct public key material; continuity across rotations is provable.
Live Streaming Updates and Anomaly Alerts
- Given the dashboard is open, when a WebSocket is available, then the client establishes a secure wss connection; otherwise it falls back to SSE; authentication uses a short-lived token and unauthorized connections are rejected.
- Given new credential events occur, when streamed, then affected rows update in-place within 2 seconds (p95) and the connection remains stable with automatic exponential backoff reconnect on network loss.
- Given a credential experiences >5 failed verifications within 10 minutes, when detected, then an "Excessive Failures" alert is surfaced in the UI (banner + row badge) and an alert event is recorded in the audit log.
- Given a scan occurs outside the credential's allowed branch geofence, when detected, then an "Out-of-Region Scan" alert is surfaced and includes geo coordinates and reason; duplicate alerts are suppressed for 15 minutes per credential.
- Given the user acknowledges an alert, when actioned, then the alert marker is cleared for that user session but remains auditable; rate limiting ensures no more than 5 alert toasts per minute per client.
Evidence Package Generation for Disputes
- Given an admin selects one or more credentials and a time range, when "Generate Evidence Package" is clicked, then the system produces a ZIP within 60 seconds containing: JSON log excerpt, chain proofs (Merkle branches), signed hourly roots, a detached Ed25519 signature over a manifest, and a human-readable PDF summary.
- Given the evidence package is generated, when downloaded, then its manifest includes scope (filters, credentials, time range), hashes (SHA-256) for each file, and the platform public key fingerprint; a verify CLI command can validate the package offline.
- Given package size would exceed 200 MB, when generation is requested, then the user is prompted to refine scope or the system splits packages by day without breaking chain proofs.
- Given the requester lacks permission to included branches, when generation is attempted, then the request is rejected with 403 and no package artifacts are created.
- Given a package link is issued, when 24 hours elapse or a revoke occurs, then the link expires and subsequent downloads fail with 410; an audit event records the access and expiry.
Retention and Privacy Controls for Audit Data
- Given retention policies are configured per event type and branch, when set to values between 30 and 3650 days, then the system enforces deletion on schedule and deleted events are no longer retrievable via API, UI, or exports.
- Given a role with restricted PII access views the timeline, when rendered, then PII fields (e.g., IP, device fingerprint, holder name) are redacted or tokenized; exports respect the same redaction rules.
- Given a legal hold is placed on a set of events, when retention jobs run, then held events are preserved until the hold is cleared and the action is auditable.
- Given a retention configuration change is saved, when applied, then the change is logged with old value, new value, actor, timestamp, and a confirmation dialog warns if data loss will occur.
- Given a data subject erasure request is processed, when executed, then personal fields are erased within 30 days while preserving non-PII event integrity via tombstones that keep hash chain continuity.
Verification Endpoint & Mobile Scan Flow (Online/Offline)
"As a crew lead, I want to scan a badge on my phone at a job site with spotty service so that I can verify a worker’s permissions and let them upload immediately."
Description

Offer a public verification endpoint that validates QR/NFC tokens, checks policy/expiration/revocation, and returns minimally scoped directives (e.g., upload permissions for a job/branch). Deliver a mobile‑friendly scan experience with deep links into the RoofLens Field Uploader. Support offline verification via short‑lived signed tokens with embedded policy digests and CRL/denylist stamps, with automatic refresh when connectivity returns. Include rate limiting, anti‑replay, and device fingerprinting to reduce fraud. Provide supervisor mode for on‑site validation and quick troubleshooting of denied scans.
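
A compact sketch of the offline verification path, assuming a token laid out as a JSON payload plus a trailing 32-byte MAC; the HMAC stands in for the asymmetric signature the requirement describes, and all field names (badge_id, allowed_actions) are illustrative.

```python
import base64
import hashlib
import hmac
import json
import time

SKEW = 120  # ±2 minutes of allowed clock skew, per the criteria

def verify_offline_token(token_b64: str, key: bytes, denylist: set,
                         now: float | None = None) -> dict:
    """Offline checks: MAC, expiry with skew, and the prefetched denylist."""
    now = now or time.time()
    raw = base64.urlsafe_b64decode(token_b64)
    payload, mac = raw[:-32], raw[-32:]  # trailing 32-byte SHA-256 MAC
    if not hmac.compare_digest(mac, hmac.new(key, payload, hashlib.sha256).digest()):
        return {"ok": False, "reason": "offline_token_invalid"}
    claims = json.loads(payload)
    if now > claims["expires_at"] + SKEW:
        return {"ok": False, "reason": "offline_token_invalid"}
    if claims["badge_id"] in denylist:  # CRL/denylist stamp check
        return {"ok": False, "reason": "token_revoked"}
    return {"ok": True, "allowed_actions": claims["allowed_actions"]}
```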

Acceptance Criteria
Online Token Verification with Minimal Scope
Given a valid, unexpired, and not-revoked QR/NFC token When the public verification endpoint is called over HTTPS Then it returns HTTP 200 with minimally scoped directives (jobId, branchId, allowedActions) and a TTL that does not exceed policy And the response conforms to the published JSON schema and contains no PII or secrets Given an expired token When verified Then the endpoint returns HTTP 401 with error "token_expired" and no directives Given a revoked or denylisted token When verified Then the endpoint returns HTTP 403 with error "token_revoked" and no directives Given a malformed or unknown token When verified Then the endpoint returns HTTP 400 with error "invalid_token" Given normal network conditions When verifying a valid token Then P95 endpoint latency is <= 500 ms and an audit record (token hash, device fingerprint hash, outcome, timestamp, truncated IP) is stored
Mobile Scan Deep Link to Field Uploader
Given a user scans a valid RoofLens badge QR/NFC on a mobile device with Field Uploader installed When the token is verified Then the app opens via deep link and preloads directives for the specified job/branch and allowedActions Given the Field Uploader app is not installed When a badge is scanned Then the user is routed to the install page and, upon first launch, the original directives are applied via continuation Given a denied or invalid token When scanned Then the mobile UI displays a clear denial reason and a prompt to enter Supervisor Mode Given a token with single-use policy When used once successfully Then subsequent scans are rejected with "already_used" and no directives are applied Given accessibility settings are enabled When a scan succeeds or fails Then the UI provides haptic and voice feedback and meets WCAG AA contrast for the result screen
Offline Verification with Short-Lived Signed Tokens
Given a signed offline token issued <= 15 minutes ago containing a policy digest and CRL/denylist stamp When scanned without connectivity Then the mobile client verifies the signature using the embedded public key, validates expiry with <= 2 minutes clock skew, validates the policy digest, and grants only the encoded allowedActions Given an expired offline token or a CRL stamp indicating revocation since issuance When scanned offline Then access is denied with reason "offline_token_invalid" and no actions are permitted Given offline access was granted When connectivity returns before the token TTL elapses Then the client exchanges the offline token for an online session and immediately revokes access if the server indicates revocation Given the device remains offline past the token TTL When attempting restricted actions Then access is blocked until a successful online refresh occurs Given QR payload constraints When rendering offline tokens Then the QR payload size is <= 800 bytes and scans reliably on reference devices
Anti-Replay and Device Binding
Given a token is first presented from a device with fingerprint F When verification succeeds Then the token is bound to F for its remaining lifetime Given the same token is presented from a different device fingerprint When verified Then the request is denied with HTTP 409 and error "token_bound_to_other_device" and a replay alert is logged Given tokens include a one-time nonce When a previously seen nonce is presented again within the validity window Then the request is rejected with HTTP 409 and error "replay_detected" Given acceptable clock skew of +/- 2 minutes When validating nonce freshness and token timestamps Then tokens within the skew are accepted; otherwise rejected and logged
Verification Endpoint Rate Limiting and Abuse Controls
Given any source IP When it exceeds 60 verification requests within 60 seconds Then subsequent requests receive HTTP 429 with a Retry-After header indicating the remaining window Given any device fingerprint When it exceeds 120 verifications within 5 minutes Then responses return HTTP 429 and the mobile UI presents a human-verification challenge Given an API client exceeds burst and sustained thresholds When limits are breached Then the client is temporarily blocked for 10 minutes and an audit/security event is emitted Given traffic remains within thresholds When verifying tokens Then no rate limiting responses (429) are returned
Supervisor Mode Validation and Troubleshooting
Given a supervisor long-presses the scan result screen for 2 seconds and authenticates with a WebAuthn passkey When entering Supervisor Mode Then they can scan a badge to view token metadata (policy digest, expiration, revocation reason, last-seen device hash) without exposing secrets Given a denied scan When viewed in Supervisor Mode Then a "why denied" breakdown is shown with specific failing checks (expiration, revocation, scope mismatch, replay) and recommended fixes Given policy allows time-bound overrides When a supervisor issues a 30-minute override Then a signed override token is created, applied to the device, and an audit record with supervisor identity and scope is stored Given a badge is reported compromised When a supervisor taps Revoke in Supervisor Mode Then the badge is added to the CRL immediately and subsequent online verifications are denied; offline clients receive the update on next connectivity and revoke access

Device Lock

Bind badges to approved devices with hardware attestation and OS integrity checks. Uploads require a verified badge+device pair, blocking rooted/emulated devices and flagging anomalies in the Custody Ledger. Prevents spoofed photos and impersonation, strengthening evidence defensibility.

Requirements

Cross-Platform Attestation & Integrity Checks
"As a field inspector, I want my device to be verified by the operating system so that my uploads are trusted and defensible as evidence."
Description

Integrate hardware-backed attestation and OS integrity verification on iOS and Android to validate device authenticity prior to enabling sensitive actions. On iOS, leverage App Attest/DeviceCheck; on Android, use Play Integrity API with hardware-backed keys and evaluation types. Detect and score risk signals including jailbreak/root, emulator/virtual device, bootloader unlock, debug builds, and tampered OS. Surface a signed attestation verdict to the backend for every session and sensitive operation, cache short-lived results server-side, and enforce expiry to prevent replay. This requirement establishes the technical foundation that ensures only trustworthy devices can pair with badges and initiate uploads, directly reducing spoofed-photo and impersonation risk while fitting into RoofLens’ secure capture flow.
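
The short-lived verdict cache might look like the sketch below: one verified attestation is reused within the TTL, and an expired or mismatched verdictId is treated as a replay. VerdictCache and its method names are illustrative, and signing of the verdict is elided.

```python
import secrets
import time

VERDICT_TTL_SECONDS = 600  # 10-minute cache, per the acceptance criteria

class VerdictCache:
    """Caches attestation verdicts per device so repeated sensitive actions
    inside the TTL reuse one verification; expired verdicts force re-attestation."""
    def __init__(self):
        self._verdicts: dict[str, dict] = {}

    def issue(self, device_id: str, decision: str) -> dict:
        verdict = {
            "verdict_id": secrets.token_hex(16),
            "decision": decision,  # ALLOW | ALLOW_WITH_WARN | DENY
            "issued_at": time.time(),
            "expires_at": time.time() + VERDICT_TTL_SECONDS,
        }
        self._verdicts[device_id] = verdict
        return verdict

    def check(self, device_id: str, verdict_id: str) -> str:
        v = self._verdicts.get(device_id)
        if v is None or v["verdict_id"] != verdict_id:
            return "REPLAY"   # mismatched verdictId -> reject with 401
        if time.time() > v["expires_at"]:
            return "EXPIRED"  # TTL elapsed -> require fresh attestation
        return v["decision"]
```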

Acceptance Criteria
iOS App Attest: Hardware-backed Gate for Sensitive Actions
Given the app runs on iOS 14+ on a non-jailbroken device and a user initiates a sensitive action (session start, badge pairing, upload) When the client requests a server nonce and generates an App Attest assertion using a registered key Then the backend verifies the assertion signature, attestation chain, nonce, app and team identifiers, and that the environment is production And the verification confirms the device is not a simulator and not running under debugger And the backend issues an ALLOW decision with a signed verdict containing keyId, issuedAt, and expiresAt
Android Play Integrity: Strong Device Integrity Required
Given the app runs on Android 10+ with Google Play services and a user initiates a sensitive action or session When the client requests a Play Integrity token using a server-provided nonce Then the backend verifies the token signature with Google, nonce, package name, versionCode, and certificate digest And evaluationTypes includes HARDWARE_BACKED And deviceIntegrity includes MEETS_STRONG_INTEGRITY and account/app licensing verdicts indicate LICENSED And the backend issues an ALLOW decision with a signed verdict containing issuedAt and expiresAt
Block Jailbroken/Rooted, Bootloader-Unlocked, or Tampered OS
Given attestation indicates jailbreak/root, custom ROM, bootloader unlock, SELinux disabled, or OS tampering When the user attempts a sensitive action Then the backend returns DENY with reason codes for each detected signal And no sensitive operation is executed and no badge pairing or upload session is created And an anomaly event is recorded against the device and account
Prevent Emulator, Simulator, and Debug-Build Access
Given the app is running on an emulator/virtual device, iOS simulator, or a debug/non-production build When attestation or integrity verification is performed Then verification fails and returns DENY with reason EMULATOR or DEBUG_BUILD And the client receives a generic failure message without disclosing detection details And the attempt is logged with a high risk score and linked to the device fingerprint
Signed Verdict, Server-side Cache, and TTL Enforcement
Given a valid attestation is verified and a signed verdict is issued for the current session When subsequent sensitive actions occur within a cache TTL of 10 minutes Then the backend reuses the cached verdict and records the verdictId on each action And when the TTL expires, the next sensitive action requires a fresh attestation And reusing an expired or mismatched verdictId results in HTTP 401 with reason REPLAY And all verdicts are signed by a rotatable server key and include issuedAt and expiresAt
Replay Protection with One-time Nonces
Given the backend issues a cryptographically random 128-bit nonce per attestation request When a nonce is reused or an attestation token is replayed across requests Then the backend rejects the request with DENY and reason REPLAY_DETECTED And the event increments a device-level counter and applies rate limiting after 3 replays within 5 minutes
Risk Scoring, Thresholds, and Custody Ledger Flagging
Given attestation verification produces zero or more risk signals with configured weights When risk score >= 70 Then the backend blocks the action with DENY and writes a Custody Ledger entry containing signals, score, decision, and verdictId And when 30 <= risk score < 70 Then the backend returns ALLOW_WITH_WARN and writes a ledger entry with requiresReview=true And when risk score < 30 Then the backend returns ALLOW and writes a ledger entry And risk scoring configuration is versioned and covered by unit tests
Device Enrollment & Badge Binding
"As an org admin, I want to bind user badges to approved devices so that only authorized hardware can submit captures to RoofLens."
Description

Provide a guided enrollment flow that registers a device to a user’s badge after successful attestation. Generate and store a stable, privacy-preserving device fingerprint and a hardware-bound public key, then bind these to the badge and organization. Enforce organization-configured limits (e.g., max N devices per badge), require admin approval when limits are exceeded, and record enrollment metadata (time, geo, OS version, app build). Support re-attestation on app launch and periodic key rotation. Persist device-badge bindings in a secure registry to be referenced by policy checks across capture, upload, and account management flows.
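
A plausible shape for the fingerprint and binding record, shown as a Python sketch: a keyed hash keeps the fingerprint stable and non-reversible, and the binding carries the fields policy checks later consult. All names (hardware_id, org_salt, bind_device) are illustrative.

```python
import hashlib
import hmac

def device_fingerprint(hardware_id: str, org_salt: bytes) -> str:
    """Keyed, non-reversible hash of the hardware security identifier:
    stable across reinstalls (same input), unrecoverable without the salt,
    and free of PII."""
    return hmac.new(org_salt, hardware_id.encode(), hashlib.sha256).hexdigest()

def bind_device(registry: dict, badge_id: str, org_id: str,
                fingerprint: str, public_key_pem: str) -> dict:
    """Persist the badge-device binding referenced by later policy checks."""
    binding = {
        "badge_id": badge_id,
        "org_id": org_id,
        "device_fingerprint": fingerprint,
        "public_key": public_key_pem,  # hardware-bound key
        "key_version": 1,              # incremented on rotation
        "status": "Active",
    }
    registry[(badge_id, fingerprint)] = binding
    return binding
```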

Acceptance Criteria
Successful Device Enrollment and Badge Binding
Given a logged-in user with a valid badge and an attestation-capable device And org policy allows enrollment and current device count is below the configured limit When the user completes the attestation flow successfully Then the system generates a hardware-bound keypair and a stable, privacy-preserving device fingerprint And binds the device public key and fingerprint to the user’s badge and organization And persists the binding in the secure registry with status "Active" And the registry record includes enrollment metadata: server timestamp (UTC), coarse geolocation (if permission granted), OS version, device model, and app build number And the API returns a success response within 3 seconds (p95) And the device can immediately capture and upload subject to policy
Enrollment Blocked on Failed Attestation
Given a device that fails hardware attestation or OS integrity checks (e.g., rooted, emulated, tampered) When the user attempts to enroll the device Then enrollment is refused and no device-badge binding is created in the registry And the response includes error code "DEV_ATTEST_FAIL" and a human-readable reason And an audit event is recorded with timestamp, badge ID, org ID, and failure reason And the badge remains unchanged and the device is blocked from capture and upload
Device Fingerprint Privacy and Stability
Rule: The device fingerprint is deterministic on the same physical device across app reinstall and minor OS updates with ≥99% stability in a test cohort
Rule: The fingerprint changes if the hardware security identifier changes and differs across emulator instances
Rule: The fingerprint contains no PII and is produced via salted, non-reversible hashing using an approved algorithm
Rule: Collision rate is ≤0.1% in a 10,000-device test set
Rule: The fingerprint is stored only in the secure registry and device secure storage and is never shared with third parties outside processing
Hardware-Bound Key Generation and Rotation
Given device enrollment succeeds Then a keypair is generated inside a hardware-backed keystore and the private key is marked non-exportable And the public key is stored in the registry bound to badge+org with key_version = 1 When the rotation interval elapses (e.g., 90 days) or an admin triggers rotation Then a new hardware-bound keypair is generated, key_version increments, and the previous key is marked "Retired" And subsequent attestations and uploads use the latest active key And previously signed evidence remains verifiable using retained retired public keys
Organization Device Limit Enforcement and Admin Approval
Given org policy sets max_devices_per_badge = N And a badge already has N active bound devices When a user attempts to enroll an additional device Then enrollment is placed into "Pending Approval" with no active binding created And an approval request is visible to org admins with device metadata and reason And if an admin approves, the binding is created and status changes to "Active"; if denied, the request is closed and the device remains blocked And the requesting user is notified of the decision within 60 seconds
Re-Attestation on App Launch
Given a previously enrolled device When the app launches or returns to foreground after more than 12 hours idle Then a background re-attestation is performed within 2 seconds (p95) without user interaction And if re-attestation passes, no user-facing disruption occurs And if re-attestation fails, capture and upload are disabled, and the user is prompted to re-enroll And a re-attestation result event (pass/fail, reason) is recorded in the registry
Registry-Backed Policy Enforcement Across Flows
Given an Active badge-device binding exists When the user attempts to capture or upload from that device Then the policy check queries the registry and permits the action Given a device without an Active binding (Unbound/Retired) When the user attempts capture or upload Then the action is blocked with error code "DEVICE_NOT_BOUND" And an admin-initiated unbind takes effect across capture, upload, and account management within 60 seconds (p95) And policy checks complete in under 200 ms (p95)
Attested Upload Gate
"As a technician, I want uploads to be blocked if my device is unverified so that spoofed photos can’t enter our jobs."
Description

Enforce that every photo, video, and telemetry upload includes a fresh, server-validated attestation token tied to the active badge-device pair. Reject uploads when attestation is missing, expired, mismatched, or high-risk. Bind the attestation to the upload payload via nonce and timestamp to prevent replay and cross-session misuse. Return precise error codes to clients, fail closed by default, and expose policy toggles for org-level strictness (block vs. quarantine). This gate ensures only verified badge+device pairs can contribute evidence to jobs, blocking spoofed or impersonated submissions at the point of entry.
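
The gate's ordering of checks could look like this sketch, which returns the first failing error code from the criteria below; signature and badge-device binding checks are elided, and the in-memory nonce store stands in for a shared replay cache.

```python
import hashlib
import time

MAX_TOKEN_AGE = 120  # seconds, per the criteria
CLOCK_SKEW = 30      # seconds of acceptable skew
REPLAY_WINDOW = 600  # 10-minute nonce memory per badge+device

_seen_nonces: dict[str, float] = {}

def gate_upload(token: dict, payload: bytes, now: float | None = None) -> str:
    """Evaluate gate checks in order; return the first failing error code
    or 'accepted'. Token signature validation is assumed to have passed."""
    now = now or time.time()
    lo = token["issued_at"] - CLOCK_SKEW
    hi = token["issued_at"] + MAX_TOKEN_AGE + CLOCK_SKEW
    if not (lo <= now <= hi):
        return "ATTST_TIME_WINDOW"
    key = f"{token['badge_id']}:{token['device_id']}:{token['nonce']}"
    seen_at = _seen_nonces.get(key)
    if seen_at is not None and now - seen_at < REPLAY_WINDOW:
        return "ATTST_NONCE_REPLAY"
    _seen_nonces[key] = now
    if hashlib.sha256(payload).hexdigest() != token["asset_sha256"]:
        return "ATTST_PAYLOAD_BINDING_MISMATCH"
    return "accepted"
```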

Acceptance Criteria
Verified Pair + Fresh Attestation Upload Succeeds
Given an active badge is bound to an approved device and the client holds a server-issued attestation token for that badge+device pair And the token includes claims: badge_id, device_id, job_id, asset_sha256, nonce (128-bit), session_id, issued_at, expires_at And the token signature is valid and token age <= 120s with acceptable clock skew <= 30s And the nonce has not been seen in the last 10 minutes for this badge+device And asset_sha256 matches the uploaded payload When the client POSTs the asset with the attestation token Then the server validates the token, binding, and claims before writing any bytes to non-quarantine storage And the server returns HTTP 201 with asset_id and attestation_id within 600ms p95 of request receipt (excluding upload transfer time) And the asset is accepted into the job, processing is queued, and the attestation metadata is persisted and linked to the asset And a custody ledger entry is created with decision=accepted and includes policy_snapshot_id
Missing or Expired Attestation Is Rejected (Fail Closed)
Given an upload request is received without an attestation token When the server evaluates the request Then the server rejects with HTTP 400 and error.code=ATTST_MISSING and stores 0 bytes And a custody ledger entry is created with decision=rejected and reason=ATTST_MISSING Given an upload request includes an attestation token that is expired or not yet valid When the server evaluates the token Then the server rejects with HTTP 403 and error.code=ATTST_EXPIRED and stores 0 bytes And a custody ledger entry is created with decision=rejected and reason=ATTST_EXPIRED Given an upload request includes an attestation token with an invalid signature When the server validates the signature Then the server rejects with HTTP 403 and error.code=ATTST_INVALID_SIG and stores 0 bytes And the default behavior is fail-closed regardless of org policy toggles for quarantine
Mismatched Badge–Device Pair Is Rejected
Given the attestation token badge_id or device_id does not match the active bound pair for the authenticated user or org When the server compares token claims against binding records Then the server rejects with HTTP 403 and error.code=ATTST_PAIR_MISMATCH and stores 0 bytes And a custody ledger entry is created with decision=rejected and reason=ATTST_PAIR_MISMATCH Given the badge is revoked or the device is unbound at the time of evaluation When the server checks binding status Then the server rejects with HTTP 403 and error.code in {ATTST_BADGE_REVOKED, ATTST_DEVICE_UNBOUND} and stores 0 bytes And the response includes error.details.badge_id and error.details.device_id
Payload Binding Prevents Replay and Cross-Session Misuse
Given the attestation token includes a nonce and asset_sha256 bound to the specific payload and session_id When the same nonce is presented again within a 10-minute replay window for the same badge+device Then the server rejects with HTTP 409 and error.code=ATTST_NONCE_REPLAY and stores 0 bytes And a custody ledger anomaly flag is set: replay_detected=true Given the uploaded payload hash does not match token.asset_sha256 When the server computes the payload hash pre-acceptance Then the server rejects with HTTP 422 and error.code=ATTST_PAYLOAD_BINDING_MISMATCH and stores 0 bytes Given the attestation token session_id does not match the client session or the request When the server verifies session binding Then the server rejects with HTTP 403 and error.code=ATTST_SESSION_MISMATCH and stores 0 bytes Given the token issued_at/expires_at are outside the allowable window considering ±30s skew When time validity is checked Then the server rejects with HTTP 400 and error.code=ATTST_TIME_WINDOW
High-Risk Attestation Policy: Block vs. Quarantine
Given the attestation evaluation yields high risk due to claims (rooted=true, emulator_detected=true, os_integrity=false) or risk_score >= 0.80 And the organization policy toggle is set to Block When the upload is evaluated Then the server rejects with HTTP 403 and error.code in {ATTST_OS_INTEGRITY_FAIL, ATTST_EMULATOR_DETECTED, ATTST_RISK_BLOCKED} and stores 0 bytes And the custody ledger entry records decision=rejected, risk_score, and policy_snapshot_id Given the attestation evaluation yields the same high-risk conditions And the organization policy toggle is set to Quarantine When the upload is evaluated Then the server responds HTTP 202 with state=quarantined and error.code=ATTST_RISK_QUARANTINED And the asset is stored in a quarantine bucket/keyspace isolated from production workflows and hidden from job consumers And a review task is emitted to the moderation/QA queue with attestation_id and risk details
Precise Error Codes and Response Telemetry
Given any attestation gate rejection occurs When the server returns the response Then the JSON body includes fields: error.code (one of the documented codes), error.message (human-readable), error.details (structured), trace_id, and policy_snapshot_id (if policy-influenced) And HTTP status codes map as: 400 (missing/format/time window), 403 (authz/policy/mismatch), 409 (replay), 422 (binding mismatch), 202 (quarantine), 201 (success) And p95 response latency for error paths is <= 400ms Given a successful acceptance occurs When the server returns the response Then the JSON body includes fields: asset_id, attestation_id, received_at, storage_class, and processing_state=queued
Custody Ledger Records Decisions and Anomalies
Given any upload attempt (accepted, rejected, or quarantined) When the server finalizes the gate decision Then a custody ledger entry is durably written before the HTTP response commits And the entry includes: attestation_id, badge_id, device_id, job_id, asset_sha256, nonce, issued_at, decision (accepted/rejected/quarantined), reason/error.code, risk_score (if present), policy_snapshot_id, server_timestamp, client_ip, geo (if resolved) And ledger entries are immutable and tamper-evident (write-once with cryptographic hash chain linking previous entry) And anomalies are flagged with booleans: replay_detected, os_integrity_fail, emulator_detected, time_window_violation And ledger write failures cause the gate to fail-closed with HTTP 503 and error.code=ATTST_LEDGER_UNAVAILABLE
Custody Ledger Anomaly Logging
"As a claims reviewer, I want anomalies to be recorded with audit details so that I can defend estimates against disputes."
Description

Record all attestation outcomes and policy decisions in the Custody Ledger for each asset. Log device fingerprint hash, badge ID, attestation provider result, risk signals, app build, OS version, time, geo, and action (enrolled, allowed, quarantined, blocked). Link anomalies to the job record and surface them in PDF exports and API so downstream reviewers can audit provenance. Provide searchable filters and an exportable CSV for compliance responses. This creates a durable, auditable chain of custody that strengthens evidence defensibility and accelerates dispute resolution.
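
One way to picture a ledger record is the frozen dataclass below: write-once fields matching the list above, plus a content hash over the canonical JSON so entries can be chained. Field selection and types are illustrative.

```python
import hashlib
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class LedgerEntry:
    """Write-once Custody Ledger record; frozen=True mirrors the
    append-only, no-update-in-place rule."""
    ledger_entry_id: str          # UUIDv4
    asset_id: str
    job_id: str
    badge_id: str
    device_fingerprint_hash: str  # SHA-256
    attestation_result: str       # pass | fail | inconclusive
    risk_signals: tuple           # e.g., ("ROOTED", "GEO_OUT_OF_BOUND")
    action: str                   # enrolled | allowed | quarantined | blocked
    server_timestamp: str         # UTC ISO-8601, generated server-side
    prev_hash: str                # links to the previous entry's hash

    def content_hash(self) -> str:
        return hashlib.sha256(
            json.dumps(asdict(self), sort_keys=True).encode()).hexdigest()
```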

Acceptance Criteria
Log Completeness for Attestation Outcomes
Given an asset is uploaded using a badge+device pair When device attestation executes and a policy decision is made Then a single Custody Ledger entry is written containing: asset_id, job_id, badge_id, device_fingerprint_hash (SHA-256), attestation_provider (name, version), attestation_result (pass|fail|inconclusive), risk_signals (array), app_build, os_name, os_version, action (enrolled|allowed|quarantined|blocked), server_timestamp (UTC ISO-8601), client_timestamp (UTC ISO-8601), geo {lat, lon, accuracy_m}, requester_ip, ledger_entry_id (UUIDv4) And server_timestamp is generated server-side And if any field is unavailable, it is recorded as null and a corresponding risk_signals code MISSING_<FIELD> is added And the ledger entry is linked to the asset_id and job_id
Blocked or Quarantined Events with Risk Signals
Given attestation indicates a high-risk condition (e.g., rooted, emulated, badge-device mismatch, OS integrity fail) When the policy decision is evaluated Then the ledger entry action is set to blocked or quarantined accordingly And risk_signals includes one or more of: ROOTED, EMULATED, BADGE_DEVICE_MISMATCH, OS_INTEGRITY_FAIL, ATTESTATION_INCONCLUSIVE, GEO_OUT_OF_BOUND, CLOCK_SKEW_EXCEEDED And the attestation_result and attestation_provider are recorded And an attestation_blob_reference or nonce is stored for audit correlation And the asset is prevented from being marked allowed in the same flow
Anomaly Linking and PDF Surfacing
Given a job contains ledger entries with action in (quarantined, blocked) or non-empty risk_signals When a job PDF is generated Then the PDF includes a Chain of Custody section listing each anomaly with: server_timestamp, badge_id, device_fingerprint_hash (first 8 chars), action, attestation_result, risk_signals, geo (lat, lon) And the section includes the job_id and asset_id for each anomaly And the PDF includes a reference link or ID to fetch full details via API And if no anomalies exist for the job, the PDF displays "No anomalies recorded"
API Access and Filterable Ledger Records
Given an authenticated tenant-scoped user requests GET /api/v1/jobs/{job_id}/custody-ledger When optional query params are provided: action, badge_id, device_hash, risk_signal, date_from, date_to, page, page_size, sort Then only records for the caller’s tenant and job_id are returned And all filters and sort are applied correctly And the response contains pagination metadata: total, page, page_size, next And each record’s fields match the persisted ledger entry values And cross-tenant access is rejected with 403 And invalid parameters return 400 with a descriptive error
UI Search and CSV Export for Compliance
Given a user opens the Custody Ledger view When filters are applied (action, badge_id, device_hash prefix, risk_signal, date range, geo bounding box) Then the result set updates to show only matching records And p95 query latency is <= 3 seconds for up to 50k records And selecting Export CSV downloads a UTF-8 CSV with headers and columns: asset_id, job_id, badge_id, device_fingerprint_hash, attestation_provider, attestation_result, risk_signals, app_build, os_name, os_version, action, server_timestamp, client_timestamp, geo_lat, geo_lon, accuracy_m, requester_ip, ledger_entry_id And the CSV respects all applied filters and date range And exports larger than 100k rows are segmented with continuation tokens
Immutability and Auditability of the Custody Ledger
Given a ledger entry has been created When a correction or update is needed Then the original entry remains immutable (no updates in place) And a new entry is appended with supersedes set to the prior ledger_entry_id And each entry includes content_hash (SHA-256) and prev_hash to support chain verification And the job PDF displays a job-level chain_hash value And an API verification endpoint returns Pass when the chain validates end-to-end and Fail otherwise
Offline Capture with Deferred Attestation
"As a crew lead working offline, I want to capture photos and submit them later once verified so that I can work in low-signal areas without sacrificing chain of custody."
Description

Allow users to capture media while offline and queue assets locally in an encrypted store. Require a locally generated hardware-backed signature at capture time, then enforce full server-side attestation before accepting uploads when connectivity returns. Mark assets as “quarantined” until a valid badge-device attestation is verified within a configurable freshness window; expire or require manual review if the window is exceeded. Preserve capture timestamps, coarse location, and device state snapshot to maintain custody continuity even without network access, supporting field work in low-signal job sites without compromising integrity.
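
A sketch of the capture-time record, assuming an HMAC stands in for the hardware-backed signature (a real device would sign with a non-exportable keystore key); field names follow the description but are illustrative.

```python
import hashlib
import hmac
import json
import time

def sign_capture(media: bytes, badge_id: str, device_id: str,
                 device_key: bytes, location: tuple | None) -> dict:
    """Build the locally signed capture record queued while offline."""
    record = {
        "content_sha256": hashlib.sha256(media).hexdigest(),
        "captured_at_utc": time.time(),
        "coarse_location": location,  # None when permission is denied
        "badge_id": badge_id,
        "device_id": device_id,
        "status": "quarantined",      # until server-side attestation succeeds
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # HMAC stands in for the hardware-backed signature over the metadata.
    record["signature"] = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    return record
```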

Acceptance Criteria
Offline Capture Queues to Encrypted Store
Given a signed-in user with a verified badge-device pair and no network connectivity When the user captures a photo or video Then the asset is written to a local encrypted store using a keystore-backed key And the asset is queued with status "Quarantined" And the asset is not readable by other apps or via external file browsing And the queued asset persists across app relaunches and device reboots
Hardware-Backed Signature at Capture
Given a supported device with hardware-backed keys available When the user captures media (online or offline) Then the app generates a hardware-backed signature over the content hash, capture timestamp (UTC), coarse location (if available), device state snapshot hash, badge ID, and device ID And the signature is stored with the asset record And if a hardware-backed key is unavailable or attestation state is invalid, the capture is blocked and an error explains the reason
Deferred Upload Gate with Server-Side Attestation
Given a quarantined asset in the local queue and network connectivity is restored When the app attempts to upload the asset Then the server validates the badge-device pairing, verifies hardware attestation and OS integrity, and checks the local signature against the uploaded content and metadata And upon success the asset status changes from "Quarantined" to "Accepted" and it becomes eligible for estimates and PDFs And upon any validation failure the upload is rejected, the asset remains "Quarantined", and a failure reason is recorded and shown to the user And a Custody Ledger entry is created for the attestation result
Quarantine Freshness Window and Expiry/Review
Given a freshness window W (e.g., 24 hours) is configured And an asset was captured at time Tc and remains "Quarantined" When current time exceeds Tc + W without successful server-side attestation Then the asset transitions to "Expired" or "Pending Review" according to policy And the asset is blocked from inclusion in estimates/reports until accepted And the Custody Ledger records the transition with timestamps and policy applied
Metadata Preservation Without Network
Given the device is offline during capture When the user captures an asset Then the asset record stores immutable metadata: original capture timestamp (UTC), coarse location (if permission granted), device state snapshot (OS version, security patch, boot state, root/emulator flags), badge ID, and device ID And this metadata is included in the signed payload and transmitted upon reconnect And if location permission is denied, a "location unavailable" flag is stored; if mock location is detected, a flag is stored and uploaded
Rooted/Emulated Device Block at Capture
Given the device fails local integrity checks (e.g., rooted, emulator, bootloader unlocked) When the user attempts to capture media Then the capture is blocked and no asset is stored And the user sees an error indicating the device is not eligible for secure capture And upon reconnect, an attempted-capture anomaly event is written to the Custody Ledger with device integrity details
Badge-Device Mismatch on Upload
Given an asset was captured under badge B and device D and remains "Quarantined" When an upload attempt is made under a different badge or from a different device Then server-side attestation fails and the asset is not accepted And an anomaly event is written to the Custody Ledger including the observed badge/device IDs and timestamp And the client prompts the user to re-authenticate and re-pair before retrying
Admin Controls & Revocation
"As an org admin, I want to revoke compromised devices so that access is cut off immediately and evidence integrity is maintained."
Description

Deliver an admin console to review, approve, and revoke device registrations per badge and organization. Show device details (model, OS, last attestation, risk history) and active sessions. Provide immediate and scheduled revocation, require a reason code, and propagate revocation to invalidate tokens, end sessions, and wipe local keys on next check-in. Support bulk actions, CSV import/export, and API endpoints for MDM integrations. Notify affected users and record all admin actions in the Custody Ledger for auditability.
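
A hypothetical MDM client calling the revocation endpoint defined in the acceptance criteria below; the Idempotency-Key header makes retries safe. base_url and api_token are deployment-specific, and this stdlib-only sketch omits error handling.

```python
import json
import urllib.request
import uuid

def revoke_device(base_url: str, api_token: str, device_id: str,
                  badge_id: str, reason_code: str) -> dict:
    """POST /v1/admin/devices/{deviceId}/revoke with an idempotency key,
    so duplicate requests return the original result without side effects."""
    body = json.dumps({"badgeId": badge_id, "reasonCode": reason_code}).encode()
    req = urllib.request.Request(
        f"{base_url}/v1/admin/devices/{device_id}/revoke",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
            "Idempotency-Key": str(uuid.uuid4()),
        })
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```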

Acceptance Criteria
Admin Views Device Details and Sessions
Given an org admin is viewing a badge's registered device in the Admin Console When the admin opens the device detail panel Then the UI shows device model, manufacturer, hardware ID, OS name, OS version, OS integrity status, last attestation timestamp and outcome, attestation provider, risk score, and the last 10 risk events with timestamps and types And the UI shows active sessions with session ID (masked), client app version, IP, last activity timestamp, and location (city, region) And data freshness is within 5 seconds of the system of record And the admin can filter risk events by type and date range And the device's last check-in timestamp and method are displayed
Approval Workflow for Device Registration
Given a device registration is in Pending status for a badge within the admin's organization When the admin opens the pending registration Then the panel displays hardware attestation results, OS integrity status, device metadata, and risk summary And only users with Device Admin permission can approve or reject; others receive HTTP 403 When the admin clicks Approve Then the device status changes to Approved and the badge+device pair becomes eligible for uploads And a Custody Ledger entry is recorded with action APPROVE_DEVICE, admin user ID, device ID, badge ID, and timestamp When the admin clicks Reject Then the device remains Unapproved and the requesting client is notified of rejection And a Custody Ledger entry is recorded with action REJECT_DEVICE, admin user ID, device ID, badge ID, and timestamp
Immediate Revocation Propagates Security Actions
Given a device is Approved for a badge and has at least one active session When an admin selects Revoke Now and chooses a reason code from the predefined list Then the Confirm button is disabled until a reason code is selected And upon confirmation, a Custody Ledger entry is created with action REVOKE_DEVICE, reasonCode, admin user ID, device ID, badge ID, timestamp, and correlationId And all access tokens for the badge+device pair are invalidated within 60 seconds And all active sessions are terminated within 60 seconds and cannot make authenticated API calls And a push notification is attempted immediately; failures are retried with exponential backoff for up to 24 hours or until next check-in And on next app check-in, the client wipes local keys and displays a Device Revoked screen And the affected user receives in-app and email notifications within 2 minutes containing reasonCode and effective time
Scheduled Revocation Management
Given a device is Approved for a badge When an admin schedules a revocation with a future effectiveAt timestamp and a reason code Then scheduling requires a reason code and a future timestamp; past times are rejected with HTTP 400 And the schedule is displayed in a Scheduled Actions list with ability to edit time and reason or cancel up to 5 minutes before execution And a Custody Ledger entry is recorded at scheduling with action SCHEDULE_REVOKE_DEVICE and at execution with action REVOKE_DEVICE_EXECUTED And at the effectiveAt time (±60 seconds), tokens are invalidated and sessions terminated as per immediate revocation And the affected user is notified at execution time with reasonCode and effective time And if the device is offline, key wipe occurs on next check-in and is recorded in the Custody Ledger
Bulk Revocation via CSV Import
Given an admin has a CSV with columns badgeId, deviceId, reasonCode, effectiveAt(optional) When the admin uploads the CSV to perform bulk revocations Then the system validates headers, data types, org scoping, reasonCode values, and badge+device existence And a preview shows counts of valid rows, warnings, and errors with line numbers and messages When the admin confirms execution Then valid rows are processed and invalid rows are skipped without side effects And per-row outcomes (Created, Scheduled, Skipped, Error) are displayed and downloadable as a CSV report And at least 1,000 rows per minute are processed under normal load And each processed row creates an individual Custody Ledger entry and one batch summary entry is created for the upload
CSV Export of Device Registry
Given an admin is on the device registry list with filters applied When the admin clicks Export CSV Then the generated file includes for each device: deviceId, badgeId, user/badge display name, model, OS name, OS version, approval status, last attestation timestamp and outcome, risk score, last check-in timestamp, and active session count And the export respects current filters and column selections And files up to 50,000 records are generated within 30 seconds and streamed to the client And the file uses UTF-8, comma delimiter, quoted fields, and a header row with stable column names And a Custody Ledger entry is recorded with action EXPORT_DEVICES, admin user ID, filter summary, and record count
MDM API Endpoints for Revocation and Device Query
Given an MDM integration has an API token scoped to an organization When it calls POST /v1/admin/devices/{deviceId}/revoke with body {badgeId, reasonCode, effectiveAt(optional)} and an Idempotency-Key header Then the API validates org scope, device ownership, reasonCode, and effectiveAt; on success returns 201 Created with revocationId, status, and effectiveAt And duplicate requests with the same Idempotency-Key return the original response without creating duplicates And invalid inputs return 400 with error codes; cross-org access returns 403 When it calls GET /v1/admin/devices?badgeId={badgeId}&page=1&pageSize=50 Then the API returns device detail fields, pagination metadata, and sorting support And all API actions are recorded in the Custody Ledger with integrationId, actor type service, and timestamps
User Messaging & Remediation
"As a field user, I want clear guidance when my device fails checks so that I can fix the issue or request an override quickly."
Description

Provide clear, action-oriented in-app messaging for attestation failures and policy blocks. Map error codes to user-friendly explanations and step-by-step remediation (e.g., update OS, disable developer options, relaunch app, contact admin). Include a one-tap retry, an override request flow with justification and photo ID capture, and a support reference code linking to the Custody Ledger entry. Localize strings and respect accessibility guidelines. This reduces friction for legitimate users while maintaining strict enforcement against risky devices.
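
The code-to-message mapping could be as simple as a lookup table, sketched below with two codes that appear elsewhere in this document; unknown codes fall back to a generic message so detection details are never disclosed. Titles and steps are illustrative copy, not final strings.

```python
REMEDIATION = {
    # code: (user-facing title, remediation steps, retryable)
    "DEV_ATTEST_FAIL": (
        "Device could not be verified",
        ["Update your OS to the latest version",
         "Disable Developer Options",
         "Relaunch the app and try again",
         "Contact your admin if the problem persists"],
        True),
    "ATTST_EMULATOR_DETECTED": (
        "This device type is not supported",
        ["Switch to a compliant physical device",
         "Contact your admin to request an override"],
        False),
}

def present_error(code: str) -> dict:
    """Map an internal policy code to user-facing messaging; unknown codes
    fall back to a generic message so internals are never exposed."""
    title, steps, retryable = REMEDIATION.get(
        code, ("Something went wrong", ["Contact your admin"], False))
    return {"title": title, "steps": steps, "retryable": retryable}
```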

Acceptance Criteria
Friendly Messaging for Attestation Failure
Given an upload attempt fails device attestation with policy error code <code> When the error screen is displayed Then the user sees a plain-language title and explanation mapped from <code> without exposing internal terms And the screen presents 3–5 step-by-step remediation actions relevant to <code> And a visible Support Reference Code is shown and can be copied with one tap And the reference code is included in logs and linked to the corresponding Custody Ledger entry And the UI indicates whether Retry or Request Override is available based on the failure type And no personally identifiable information is shown on the error screen
One-Tap Retry After Transient Failure
Given a failure is flagged as retryable by the policy service When the user taps Retry Then device attestation is re-run and the original upload resumes without requiring app relaunch And a loading state is shown during the retry And no more than 3 retries are allowed within 10 minutes with exponential backoff of 5s, 15s, 30s And on success, the error banner is dismissed and the flow proceeds automatically And on reaching the retry limit, Retry is disabled and a message explains the limit
Override Request with Justification and Photo ID
Given a failure is flagged as non-retryable When the user selects Request Override Then a form requires a free-text justification with a minimum of 50 characters before submission And the flow captures front and back images of a government-issued photo ID within the app And the user must consent to data use before capture And on submission, a confirmation screen shows the Support Reference Code and submission timestamp And the request, justification, and ID images are uploaded to the admin review queue and linked to the Custody Ledger entry And if offline, the request is queued locally and auto-submitted within 5 minutes of reconnect And all captured PII is encrypted at rest and in transit, and access is audit-logged
Localized and Accessible Error Messaging
Given the device language is supported (at minimum English and Spanish) When an attestation or policy block message is shown Then all strings are localized for the device language with an English fallback if missing And strings use ICU-safe interpolation (no concatenated hard-coded text) And the screen is fully accessible: screen readers announce role, state, and error text; focus order is logical; actionable elements meet 44x44pt targets And text respects the device’s dynamic type up to 200% without truncation of critical content And color contrast meets WCAG AA (≥ 4.5:1) for text and interactive elements And status changes (errors, success) are announced via accessibility live regions
Contextual Remediation Steps by Failure Type
Given a block reason is categorized as one of: Rooted/Jailbroken, Emulator/Virtual Device, Outdated OS, Developer Options Enabled, System Time Skew, Integrity Service Missing/Outdated, Attestation Signature Invalid, or Network Tampering When the remediation panel is shown Then the app displays a numbered checklist tailored to the category with at least 3 actionable steps And each step includes a concise instruction (e.g., disable Developer Options, update OS to ≥ required version, correct system time via automatic network time) And a Check Again button is provided to re-validate immediately after the user completes steps And categories that cannot be remediated (e.g., Emulator, Rooted/Jailbroken) recommend switching to a compliant physical device and hide Retry
Support Reference Code and Custody Ledger Linkage
Given any attestation failure or override request occurs When the event is recorded Then a unique uppercase alphanumeric reference code (10 characters) is generated server-side and stored with the Custody Ledger entry ID And the code is displayed in-app with a Copy action and included in outbound support payloads And support tools can resolve the code to the exact ledger entry via API And the code contains no embedded PII and is not reversible to the ledger ID without backend access
Approved Override Unblocks Upload with Audit Trail
Given an override request for a specific badge+device pair is approved by an admin with an expiry window (e.g., 24 hours) When the user retries the upload within the approval window Then the upload proceeds while attestation still runs and the policy block is bypassed for that approval window only And the app shows an "Override active" banner with expiry time And on expiry or device/badge mismatch, the block is reinstated and the user is re-shown messaging and remediation And all actions (approval, bypass, success/failure) are linked to the Custody Ledger entry with timestamps and actor IDs

Offline Pass

Time‑boxed, geofenced offline credentials stored in secure hardware let crews scan and upload on sites with no signal. Captures are locally signed with GPS/time and replay protection, then auto‑sync and seal the chain‑of‑custody when back online. Keeps work moving without sacrificing trust.

Requirements

Offline Pass Issuance and Revocation
"As an operations manager, I want to issue and revoke time-limited, site-bound offline passes to specific crews and devices so that work can continue without signal while maintaining strict access control."
Description

Issue server-signed, time-boxed and geofenced offline passes bound to job IDs, crew members, and device fingerprints. Passes are delivered to the mobile app and stored using hardware-backed keystores, with device attestation to prevent cloning. Includes pass versioning, automatic expiration, remote revocation, and prefetch of revocation lists for offline checks. Integrates with RoofLens job assignments, roles, and notifications, and logs all lifecycle events for auditability.
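
A sketch of the offline validation order for a pass, assuming signature verification already happened; clock drift tolerance is applied on the issuance side only so expiry stays strict, and the haversine geofence check treats the fence as a circle. All field names are illustrative.

```python
import math
import time

MAX_CLOCK_DRIFT = 300  # 5 minutes of local clock drift, per the criteria

def within_geofence(lat: float, lon: float, fence: dict) -> bool:
    """Haversine distance against a circular geofence {lat, lon, radius_m}."""
    r = 6371000.0  # Earth radius in meters
    p1, p2 = math.radians(lat), math.radians(fence["lat"])
    dp = math.radians(fence["lat"] - lat)
    dl = math.radians(fence["lon"] - lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a)) <= fence["radius_m"]

def validate_pass(p: dict, device_fp: str, lat: float, lon: float,
                  revocation_list: set, now: float | None = None) -> str:
    """Offline checks in the order the requirement lists them; the pass
    signature is assumed to have been verified already."""
    now = now or time.time()
    if p["pass_id"] in revocation_list:
        return "pass_revoked"  # prefetched revocation list works offline
    if p["device_fingerprint"] != device_fp:
        return "pass_not_bound_to_device"
    # Drift tolerance on the not-before side only; expiry is enforced strictly.
    if not (p["issued_at"] - MAX_CLOCK_DRIFT <= now < p["expires_at"]):
        return "pass_expired"
    if not within_geofence(lat, lon, p["geofence"]):
        return "outside_geofence"
    return "ok"
```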

Acceptance Criteria
Issuing Offline Pass with Device Attestation and Hardware-Backed Storage
Given a job J is assigned to crew member C with offline permission and device D has presented a fresh attestation bound to its hardware-backed key and fingerprint DF When the admin requests issuance of an offline pass for J to C on D Then the server issues a pass P signed by the server key verifiable by the current public key, containing pass_id, job_id, crew_id, device_fingerprint, issued_at, expires_at, geofence, pass_version, and nonce And expires_at is no later than the configured maximum offline duration for the organization And the mobile app stores P only in a non-exportable hardware-backed keystore and marks it unavailable if hardware-backed storage is not present And an issuance event is written to the audit log with actor, job_id, crew_id, device_id, pass_id, timestamps, and outcome And C receives a notification of pass availability
Using Offline Pass In-Field Within Timebox and Geofence
Given device D holds pass P bound to job J and crew C, P is unexpired, and the device location is within the defined geofence And the device is offline or has no reliable signal When C initiates a capture or upload for J Then the app validates P (server-signature, binding to J/C/D, time window with max local clock drift of 5 minutes, and location within geofence) And the action is permitted and locally signed with the device key, embedding pass_id, GPS fix, timestamp, and anti-replay counter And the record is queued for auto-sync upon connectivity restoration
Automatic Offline Pass Expiration Enforcement
Given pass P on device D has reached expires_at When C attempts any protected action for job J using P while offline or online Then validation fails within 1 second and the action is blocked And the app clearly indicates expiration and offers re-request/refresh And an expiration event is recorded locally and uploaded to the audit log on next sync
Remote Revocation Enforced Offline via Prefetched Revocation List
Given pass P (or device fingerprint DF, or user C for job J) has been revoked server-side And device D has a revocation list RL fetched within its TTL When C attempts to use P while offline Then the app checks RL and blocks usage if P/DF/C-J appears in RL with a newer sequence than P’s issuance And the app displays the revocation reason code And a blocked-usage event is recorded locally and uploaded on next sync
Revocation on Unassignment or Role Change
Given C is unassigned from job J or loses a role that grants offline access When the server processes the change Then all active passes for C on J are revoked and added to RL within 60 seconds And affected devices receive a push notification when online and remove local copies of P within 10 seconds of receipt And offline devices enforce revocation via RL upon next attempt
Pass Versioning and Backward-Compatible Validation
Given a device with app version A receives a pass with pass_version V When validating P Then validation succeeds if V is within the supported compatibility range declared by app A and all critical fields are recognized And validation fails with an actionable error if V is unsupported or a critical field is unknown And the app prompts for update when failure is due to unsupported version
Comprehensive Audit Logging and Notifications
Given any pass lifecycle event occurs (issue, download, activation, validation success/failure, use, revocation, expiration, deletion, sync) When the event is processed Then an immutable audit entry is created with event_type, timestamps (device and server), actor, job_id, crew_id, device_id, pass_id, result, reason_code, and IP/device metadata And entries are queryable by administrators within 10 seconds of server receipt and retained per policy And notifications are sent to relevant roles for issuance and revocation events within 60 seconds when online, queued and delivered on reconnect otherwise
Hardware-backed Keys and Local Capture Signing
"As a field technician, I want my offline captures to be cryptographically signed on my device so that office staff and insurers can trust the data origin and integrity when they sync."
Description

Generate a per-device keypair in Secure Enclave/StrongBox on first run and bind passes to the device public key. Each offline capture bundle is signed locally over a content hash, pass ID, job ID, GPS coordinates, timestamp, and a monotonic sequence number with per-capture nonce for replay protection. Package captures in a tamper-evident container that also records OS version, app version, and limited device attestation signals. No private keys leave hardware; signing APIs fail closed if hardware protections are unavailable.
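
The signing payload can be illustrated independently of the platform keystore. A minimal sketch of the payload shape and the sequence/nonce replay fields; on device the actual signature would come from Secure Enclave/StrongBox, and durable persistence of the sequence counter is assumed rather than shown:

```python
# Sketch only: payload shape and replay fields for a locally signed capture.
import hashlib
import json
import os
import time

class CaptureSigner:
    def __init__(self, pass_id: str, job_id: str):
        self.pass_id, self.job_id = pass_id, job_id
        self.seq = 0  # monotonic per pass; persist across restarts in real use

    def build_payload(self, content: bytes, gps: tuple) -> bytes:
        self.seq += 1  # strictly increasing, never reused
        payload = {
            "content_hash": hashlib.sha256(content).hexdigest(),
            "pass_id": self.pass_id,
            "job_id": self.job_id,
            "gps": gps,
            "timestamp": int(time.time()),
            "sequence": self.seq,
            "nonce": os.urandom(16).hex(),  # per-capture, blocks replays
        }
        # Canonical encoding: altering any single field breaks verification.
        return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
```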

Acceptance Criteria
Per-Device Hardware Keypair on First Run
- Given a compatible device with Secure Enclave/StrongBox available, when the app runs for the first time, then an asymmetric keypair is generated inside the hardware module and the private key is marked non-exportable.
- Given the keypair exists, when the public key is requested across app restarts, then the same public key is returned and any attempt to export the private key results in an error.
- Given hardware protections are unavailable, when attempting to generate or use the keypair, then the operation fails closed with a specific error code and no software fallback keys are created.
Bind Offline Pass to Device Public Key
- Given a newly issued offline pass, when it is activated on device D, then the pass metadata is bound to D’s public key fingerprint and the binding is acknowledged by the service.
- Given pass P bound to device D, when P is presented or used on device E, then signing or upload using P is rejected with an "unbound device" error.
- Given pass P is bound to D, when the app is reinstalled or restarted on D without hardware key loss, then P remains usable with the same hardware-backed key.
Local Capture Signing Payload Completeness
- Given an offline capture is finalized, when constructing the signing payload, then it includes content hash, pass ID, job ID, GPS coordinates, timestamp, monotonic sequence number, and a per-capture nonce.
- Given the payload is signed, when verifying with the device public key, then the signature validates and altering any single field causes verification to fail.
- Given any required field is missing or empty, when attempting to sign, then the signing operation is aborted with a validation error and no bundle is emitted.
Monotonic Sequence and Replay Protection
- Given a pass with current sequence N, when a new capture is signed offline, then the sequence used is N+1 and is durably persisted.
- Given the same pass, when attempting to sign a capture with a previously used sequence number or nonce, then the operation is rejected before signing.
- Given multiple captures across app restarts and device clock changes, when signing them offline, then sequence numbers remain strictly increasing with no duplicates or regressions.
Tamper-Evident Container and Attestation Fields
- Given a signed capture bundle is packaged, when the container is created, then it includes OS version, app version, and device attestation fields and is covered by the signature/integrity checksum.
- Given the packaged bundle, when any byte of the container (including metadata) is modified, then signature verification fails.
- Given the container is inspected, when reading metadata, then OS version, app version, and attestation values match those recorded at signing time.
Fail-Closed Signing Behavior
- Given Secure Enclave/StrongBox is locked, missing, or returns an error, when attempting to sign a capture, then the operation fails closed with a specific error and no unsigned or partially signed bundle is produced.
- Given hardware access controls or attestation checks fail, when signing is requested, then the request is denied and no software key is generated or used as a fallback.
- Given a previous signing failure due to hardware unavailability, when the hardware becomes available and the user retries, then signing succeeds without sequence gaps or regressions.
Geofence and Timebox Enforcement Engine
"As a foreman, I want the app to allow captures only within the permitted location and time window so that we comply with job and policy constraints even without connectivity."
Description

Enforce pass geofence and validity window entirely on-device with coarse-to-fine location checks, hysteresis to handle GPS drift, and user feedback when out-of-bounds or expired. Cache minimal map data and geofence polygons for offline validation and provide a countdown to expiry. Block capture initiation when policies are violated and record reason codes for later audit. Operate efficiently to minimize battery usage and respect platform location permission models.
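
The core of the on-device check is a point-in-polygon test wrapped in a hysteresis gate. A minimal sketch using the sample counts from the criteria below (3 consecutive out-of-bounds samples to block, 2 in-bounds to clear); the >10 m distance-beyond-boundary filter on out-of-bounds samples is elided:

```python
# Sketch only: ray-casting point-in-polygon plus a hysteresis gate.
def point_in_polygon(x, y, poly):
    """poly: list of (x, y) vertices, e.g. (lon, lat). Standard ray casting."""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

class GeofenceGate:
    IN_TO_CLEAR, OUT_TO_BLOCK = 2, 3

    def __init__(self, polygon):
        self.polygon = polygon
        self.in_count = self.out_count = 0
        self.blocked = False

    def on_sample(self, lon, lat):
        """Feed one location sample; returns True while capture is blocked."""
        if point_in_polygon(lon, lat, self.polygon):
            self.in_count, self.out_count = self.in_count + 1, 0
            if self.blocked and self.in_count >= self.IN_TO_CLEAR:
                self.blocked = False
        else:
            self.out_count, self.in_count = self.out_count + 1, 0
            if self.out_count >= self.OUT_TO_BLOCK:
                self.blocked = True
        return self.blocked
```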

Acceptance Criteria
Start capture within geofence and validity window
Given a valid offline pass with a cached geofence polygon and start/end times And the device reports location accuracy <= 15 m And current UTC time is within the pass start and end And current location is inside the polygon When the user taps Start Capture Then capture initiation succeeds within 1 second And an audit record is created with reason_code='POLICY_OK', pass_id, lat, lon, accuracy_m, provider, utc_time And no policy warning is shown
Hysteresis near geofence boundary
Given the pass is currently within its validity window And the user is moving near the geofence boundary When 3 consecutive location samples, each > 10 m beyond the polygon boundary, are observed at <= 1 s intervals Then the app displays an Out of Bounds banner and disables Start Capture within 1 s When 2 consecutive in-bounds samples are observed Then the banner clears and Start Capture is re-enabled within 1 s And location sampling frequency during hysteresis evaluation does not exceed 1 Hz
Coarse-to-fine location cascade and offline validation
Given the device is offline and a pass polygon with its bounding box is cached locally Then the polygon cache size per pass is <= 500 KB and validation runs without network calls When the capture screen opens Then a coarse check using last-known or low-power provider completes within 500 ms And if the coarse fix places the device >= 200 m outside the bounding box, mark Out of Bounds and do not start high-accuracy GPS And if the coarse check is inconclusive (inside the box or within 200 m), start high-accuracy GPS with a 6 s timeout Then the final in/out decision uses the high-accuracy fix if available; otherwise block with reason_code='LOCATION_UNAVAILABLE'
Expiry countdown and warnings
Given a pass with an end_time in the future When the user is on the capture screen Then a visible countdown in mm:ss updates at 1 Hz to end_time And warning banners are shown at T-15:00 and T-05:00 When current time >= end_time Then Start Capture is disabled within 1 s and a tap logs reason_code='PASS_EXPIRED' When current time < start_time Then Start Capture is disabled and a tap logs reason_code='PASS_NOT_YET_VALID'
Policy violation blocking and reason code logging
Given any policy violation (geofence, time window, permission, accuracy, or location availability) When the user attempts to start capture Then the action is blocked And a local audit entry is appended containing pass_id, device_id, reason_code in {GEO_OUT_OF_BOUNDS, PASS_EXPIRED, PASS_NOT_YET_VALID, PERMISSION_DENIED, LOCATION_UNAVAILABLE, ACCURACY_INSUFFICIENT}, lat, lon, accuracy_m, provider, utc_time, app_version And the audit entry is retained offline and queued for sync And the UI displays a message matching the reason_code within 500 ms
Battery-efficient location monitoring
Given the capture screen is open and no capture is in progress Then the app requests location at <= 0.2 Hz when > 50 m from the geofence boundary And increases up to 1 Hz only when within 50 m of the boundary or during pre-capture checks And the additional battery drain attributable to the app over 60 minutes of monitoring is <= 3% on a reference device And average CPU utilization attributable to the app is <= 5% over the same period
Platform location permission compliance
Given location permission is not granted When the user opens the capture screen Then a single OS-native permission prompt is displayed And until permission is granted, Start Capture remains disabled and attempts log reason_code='PERMISSION_DENIED' When the OS grants Approximate location only Then the app requests Precise once; if declined, it proceeds with coarse checks and blocks start when accuracy > 50 m with reason_code='ACCURACY_INSUFFICIENT' When permission is revoked while the app is in use Then location updates stop within 1 s and a blocking banner is displayed
Encrypted Offline Capture Queue and Auto Sync
"As a crew member, I want my offline captures to automatically sync when we regain signal so that I don’t have to babysit uploads or risk data loss."
Description

Store photos, notes, and measurement artifacts in an encrypted offline queue tied to hardware-backed keys. Support resumable, chunked uploads with exponential backoff, duplicate suppression via content hashes, and ordering by pass sequence number. Auto-detect connectivity and policy (Wi‑Fi only, data saver) to trigger background sync, with clear progress indicators and failure reasons. Integrate with RoofLens job submission to ensure captures attach to the correct job upon sync.
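
The retry schedule named in the criteria below (exponential backoff from 1 s, capped at 5 minutes, with jitter, up to 6 attempts per chunk) can be expressed as a small generator; the ±20% jitter factor is an assumption, as any bounded jitter serves the same purpose:

```python
# Sketch only: the per-chunk retry schedule.
import random

def backoff_delays(base=1.0, cap=300.0, attempts=6):
    """Yield delays in seconds: 1 s doubling toward a 5-minute cap, with jitter."""
    for attempt in range(attempts):
        delay = min(cap, base * (2 ** attempt))  # 1, 2, 4, 8, ... capped at 300
        yield delay * random.uniform(0.8, 1.2)   # de-synchronize retrying devices

# Example: list(backoff_delays()) -> roughly [1.1, 1.9, 4.3, 7.8, 17.2, 30.5]
```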

Acceptance Criteria
Encrypt and Queue Captures Offline with Hardware-Backed Keys
Given an active offline pass and no network connectivity When a user saves a photo, note, or measurement artifact Then the item is stored in the offline queue encrypted at rest using AES-256-GCM with a device hardware-backed key And the plaintext is never written to disk And tamper detection causes decryption to fail if any byte is modified And the queue entry records a SHA-256 content hash, capture timestamp, GPS fix, pass ID, job ID, and sequence number
Content-Hash Duplicate Suppression Within a Pass
Given the offline queue already contains an item with the same SHA-256 content hash for the same pass ID and job ID When the user attempts to add another identical item Then the system prevents a second stored copy And increments a reference count and logs a dedup event And only one upload will be attempted for that content during sync
Resumable Chunked Uploads with Exponential Backoff
Given connectivity becomes available and background sync is allowed by policy When the system uploads queued items Then each item is uploaded in 4 MB chunks with per-chunk acknowledgment and persisted progress And on transient failures (timeouts, 5xx), retries use exponential backoff starting at 1s doubling up to 5m with jitter, up to 6 attempts per chunk And on non-retryable errors (4xx excluding 408/429), the item is marked Failed with the HTTP status, error body excerpt, and timestamp And if the app is killed or the device reboots, uploads resume from the last acknowledged chunk
Ordering by Pass Sequence Number
Given a set of queued items sharing the same pass ID and job ID with defined sequence numbers When syncing begins Then items are committed to the server in ascending sequence number order And no item with a higher sequence is committed before all lower sequences have been successfully committed And if an item fails, later sequence items for that pass are paused until the failed item is retried or skipped by user action
Connectivity Auto-Detection and Policy Compliance
Given the device transitions between Offline, Cellular, and Wi‑Fi connectivity states and the user policy is set to Wi‑Fi only or Data Saver When background sync evaluates conditions Then sync starts automatically only when conditions meet policy (e.g., Wi‑Fi for Wi‑Fi only, unmetered or user-consented on Data Saver) And sync pauses within 5 seconds when conditions no longer meet policy And a user-initiated "Sync Now" honors policy and shows a policy-blocked reason if prevented
User-Facing Progress Indicators and Failure Reasons
Given one or more items are queued or syncing When the user opens the Sync pane Then the UI displays per-item status (Queued, Uploading x%, Retrying in Ns, Failed [code], Completed), overall queue count, bytes remaining, and estimated time remaining And tapping a Failed item reveals the failure reason including last HTTP code, network state, and first 200 characters of server error message And successful uploads move to Completed and are removed from the queue view within 10 seconds
Correct Job Attachment on Auto Sync
Given each queued item stores its job ID and pass ID When the item finishes uploading Then the server associates the item with the correct RoofLens job And the client receives and records the server confirmation ID and attaches it to the local record And if the job ID is invalid or closed, the upload is aborted and the item remains in queue with status Failed (Job Attachment) and actionable guidance
Chain-of-Custody Verification and Audit Trail
"As a claims coordinator, I want an auditable chain-of-custody report when offline data syncs so that disputes can be resolved quickly with verifiable evidence."
Description

On sync, verify pass signatures, device-bound keys, geofence compliance, timebox validity, and replay protections (nonce and sequence). Create an immutable audit record linking capture bundles to job IDs, pass IDs, and device IDs, and attach a verification report to the job. Surface verification status in the web app and embed chain-of-custody metadata into generated PDFs and API responses. Route failed verifications to a review queue with granular reason codes and recommended remedies.
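
The replay checks reduce to a nonce seen-set and a monotonic sequence per (passId, deviceId). A minimal sketch with in-memory stores standing in for the atomic persistence the criteria require, and assuming signature, geofence, and timebox checks have already passed:

```python
# Sketch only: replay checks keyed by (pass_id, device_id).
seen_nonces = {}  # (pass_id, device_id) -> set of accepted nonces
last_seq = {}     # (pass_id, device_id) -> last accepted sequence number

def check_replay(pass_id, device_id, nonce, sequence):
    key = (pass_id, device_id)
    if nonce in seen_nonces.setdefault(key, set()):
        return "REPLAY_NONCE_DUPLICATE"
    if sequence <= last_seq.get(key, 0):
        return "REPLAY_SEQUENCE_REUSED"
    seen_nonces[key].add(nonce)  # persist atomically with the sequence update
    last_seq[key] = sequence
    return "OK"
```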

Acceptance Criteria
Signature and Device‑Bound Key Verification on Sync
Given a synced capture bundle containing passId, deviceId, bundleHash, captureTimestamp, signature, and signatureAlgorithm When the server verifies bundleHash and validates the signature against the pass public key using signatureAlgorithm and checks that the key is bound to the provided deviceId Then the verification succeeds, signatureVerified = true, deviceBindingVerified = true, and the result is recorded in the verification report And on failure, the verification is marked Failed with reason codes SIG_INVALID and/or KEY_DEVICE_MISMATCH and failing checks are included in the report And the canonical bundleHash is computed as SHA-256 over the canonicalized capture bundle payload and must match the signed hash
Geofence and Timebox Compliance Validation
Given a capture with GPS lat/long (WGS84) and capturedAt (UTC) and a pass with geofence polygon and validity window [validFrom, validTo] When the server evaluates location-in-polygon with a 5 m tolerance and capturedAt within [validFrom−120 s, validTo+120 s] Then geofenceVerified = true and timeboxVerified = true and distances/timestamp deltas are recorded in the verification report And if outside the polygon, mark Failed with reason code GEOFENCE_OUT_OF_BOUNDS and include distanceOutsideMeters And if outside the window, mark Failed with reason code PASS_TIMEBOX_VIOLATION and include deltaSeconds And if GPS or timestamp is missing, mark Failed with reason code MISSING_GPS_OR_TIME
Replay Protection via Nonce and Sequence
Given each capture includes a cryptographically random 128-bit nonce and a monotonically increasing sequenceNumber scoped to (passId, deviceId) When syncing, the server rejects a capture if its nonce was previously accepted for the same (passId, deviceId) within 180 days Then mark Failed with reason code REPLAY_NONCE_DUPLICATE and include firstSeenAt in the report And when syncing, the server rejects a capture if sequenceNumber <= lastAcceptedSequence for (passId, deviceId) Then mark Failed with reason code REPLAY_SEQUENCE_REUSED and include lastAcceptedSequence And on success, the server persists nonce to the seen set and updates lastAcceptedSequence atomically
Immutable Audit Record Creation and Linkage
Given verification (success or failure) completes for a capture bundle When the server writes an audit record Then an append-only record is created with auditId, jobId, passId, deviceId, captureId, bundleHash (SHA-256), verificationResults, createdAt (UTC), and writerVersion And attempts to update or delete the record are rejected (HTTP 409/Forbidden) and a new superseding record must be appended with supersedes = priorAuditId And the audit record is retrievable by auditId and by jobId, and bundleHash re-computation matches the stored value
Verification Status Surfaced in Web App and API
Given a job with one or more synced captures When verification completes for each capture Then the job detail UI displays a per-capture badge: Verified (green), Failed (red), or Pending (gray) within 5 seconds of completion And the job-level status aggregates as Verified if all captures verified, Failed if any failed, else Pending And GET /jobs/{jobId} and GET /captures/{captureId} include verificationStatus, verificationSummary, and verificationReportId fields consistent with the UI
Chain‑of‑Custody Metadata Embedded in PDFs and API
Given a job with verification results and an audit record per capture When generating the job PDF Then a Chain of Custody section is included per capture with: auditId, bundleHash, passId, deviceId, capturedAt (UTC), location (lat,long), geofence name/id, validity window, verificationStatus, and reasonCodes (if any) And a QR code encodes {auditId, bundleHash} and resolves to the public verification endpoint And API responses include a chainOfCustody object with the same fields and values that match the corresponding audit record
Failed Verifications Routed to Review Queue with Remedies
Given a capture verification outcome of Failed When persisting results Then a reviewQueue item is created within 10 seconds containing jobId, captureId, auditId, reasonCodes[], recommendedRemedies[], severity, and createdAt And the item appears in the Review Queue UI with status Open and can transition to In Review, Resolved, or Dismissed with audit trail of actions And recommendedRemedies map to reason codes (e.g., GEOFENCE_OUT_OF_BOUNDS -> "Confirm site boundary or re-capture within geofence") and are included in the report
Admin Console for Pass Management and Alerts
"As an administrator, I want a central console to manage offline passes and monitor usage so that I can control risk and keep crews productive."
Description

Provide a web console to create passes with geofence polygons and validity windows, assign them to crews and devices, view real-time pass status, and revoke individually or in bulk. Offer CSV export, webhooks, and email/SMS alerts for upcoming expirations, out-of-bounds attempts, and repeated sync failures. Enforce role-based access control and maintain an audit log of all administrative actions. Expose APIs for partner integrations to automate pass provisioning from scheduling systems.
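
The webhook signature described in the criteria below is a standard HMAC-SHA256 over the request body. A minimal sketch; the header name and secret distribution are assumptions, not a documented contract:

```python
# Sketch only: HMAC-SHA256 webhook signing/verification over the raw body.
import hashlib
import hmac

def sign_webhook(secret: bytes, body: bytes) -> str:
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    # compare_digest avoids leaking the signature via timing differences
    return hmac.compare_digest(sign_webhook(secret, body), signature_header)
```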

Acceptance Criteria
Create Pass with Geofence and Validity Window
Given I am logged into the Admin Console with the Pass Manager role When I create a pass by providing a name, selecting a validity start and end in UTC, and drawing or importing a simple polygon geofence (3–200 vertices, non-self-intersecting, GeoJSON or WKT) Then the system validates inputs, rejects invalid polygons or end times not later than start, and displays inline error messages And upon successful save, a unique pass_id is generated and shown, and the pass appears in the list with the configured geofence and validity window And the pass default status is Pending until first device activation
Assign Pass to Crews and Devices
Given an existing pass in Pending or Active status and existing crews and devices in the directory When I assign the pass to one or more crews and device IDs (or generate enrollment tokens) Then the assignments persist and are visible on the pass details and crew/device profiles And unassigned devices are prevented from activating or using the pass (UI disabled, API 403) And removing an assignment prevents further activations and takes effect on the next device sync
Status Dashboard with Real-Time Updates and Revocation Controls
Given passes exist in various states When I open the Status dashboard Then each pass row shows status (Pending/Active/Expired/Revoked), assigned crews/devices count, validity window, last check-in time (UTC), and latest location in/out of geofence And the dashboard refreshes automatically at least every 10 seconds and supports manual refresh When I select one or more passes and click Revoke with a reason Then selected passes change to Revoked within 10 seconds server-side, display the reason, and revocation is queued for offline devices and enforced on next contact
CSV Export of Passes and Events
Given I have applied filters on passes or events When I click Export CSV Then a UTF-8 CSV downloads within 30 seconds containing a header row and columns appropriate to the selected dataset And pass export includes: pass_id, pass_name, status, start_at_utc, end_at_utc, created_at_utc, revoked_at_utc, assigned_crews, assigned_devices_count, last_checkin_at_utc, last_known_location And event export includes: event_id, event_type, pass_id, device_id, crew_id, occurred_at_utc, lat, lon, metadata And timestamps are ISO 8601 (UTC, Z suffix) and fields are comma-separated with RFC 4180 escaping And an audit log entry records the export with requesting user and filter summary
Alerts via Email/SMS and Webhooks
Given alerts are enabled and recipients configured When a pass is 24 hours or 1 hour from expiry Then the system sends email and SMS alerts to recipients and posts a webhook event pass.expiring with pass_id and expires_at_utc When a device attempts to activate or use a pass outside its geofence Then an out-of-bounds alert email/SMS is sent and a webhook pass.geofence_violation is posted with device_id, pass_id, lat, lon, attempted_at_utc When a device has 3 consecutive sync failures within 30 minutes Then a sync failure alert email/SMS is sent and a webhook device.sync_failed_repeatedly is posted with device_id and failure_count And webhooks use HTTPS POST with JSON, include an HMAC-SHA256 signature header, and retry up to 5 times with exponential backoff on non-2xx responses
RBAC Enforcement and Auditing of Administrative Actions
Given roles Admin, Pass Manager, and Viewer are configured Then only Admin and Pass Manager can create, edit, assign, revoke passes; only Admin can configure webhooks and messaging providers; Viewer has read-only access And unauthorized actions return HTTP 403 for APIs and are disabled in the UI with explanatory tooltips And every administrative action (create/update/assign/revoke/export/configuration change) is recorded in the audit log with actor user_id, role, ip_address, user_agent, timestamp_utc, object type/id, before/after diff, and outcome (success/failure) And audit entries are immutable and searchable by actor, action, object, and date range
Partner API for Automated Pass Provisioning
Given a partner has OAuth 2.0 client credentials with the pass:provision scope When the partner calls POST /api/v1/passes with name, validity (start_at_utc, end_at_utc), geofence (GeoJSON Polygon), assignments (crew_ids, device_ids), and an Idempotency-Key header Then the API validates inputs, returns 201 with the created pass (pass_id, status, attributes), and records an audit log entry And repeated requests with the same Idempotency-Key return the original response without creating a duplicate And the API is documented via OpenAPI 3.0 and enforces per-client rate limits (e.g., 100 requests/min) returning 429 with Retry-After on limit exceedance And partners can configure webhooks to receive pass.created and pass.updated events
Offline Mode UX Indicators and Safeguards
"As a crew lead, I want clear offline status and guidance in the app so that my team can complete captures confidently and avoid preventable errors on site."
Description

Deliver clear in-app indicators for offline status, pass validity, geofence proximity, and remaining time. Include an offline preflight checklist that syncs time, downloads geofence data, and caches required assets before leaving coverage. Provide actionable error messages, low-storage warnings, and a safe shutdown that flushes the queue to persistent storage. Present a crew-facing session summary for sign-off, and meet accessibility and localization requirements.

Acceptance Criteria
Offline Status and Connectivity Indicator
Given the device loses network connectivity while in the capture flow When connectivity drops for >1 second Then an Offline status badge and queued-capture count appear within 2 seconds and a tooltip reads "Offline — captures queued"
Given connectivity resumes and remains stable for ≥10 seconds When the app detects network availability Then the status switches to Online, queued uploads begin automatically, and progress is visible; otherwise the indicator remains Offline
Given airplane mode is enabled or no SIM is present When the user opens the capture screen Then the indicator shows Offline and cloud-only actions are disabled with a tooltip "Unavailable offline"
Given offline mode is active When the user attempts a cloud-only feature (e.g., live map tile fetch not pre-cached) Then the action is prevented with a non-blocking banner linking to "Use cached map"
Pass Validity, Timer, and Geofence Proximity Indicators
Given a valid offline pass is loaded When the user is within the job geofence Then the UI displays Pass: Valid, a mm:ss countdown of remaining time, and geofence distance; the countdown updates at least once per second with drift ≤1s/min
Given the user moves within the last 50 meters of the geofence boundary When the proximity threshold is crossed Then the proximity indicator turns amber and displays remaining distance using locale units (m/ft)
Given the user exits the geofence radius When the boundary is crossed for >3 seconds Then a persistent red banner appears within 2 seconds, the capture button is disabled, and a "Navigate back" CTA is shown
Given the offline pass expires while offline When the countdown reaches 00:00 Then further captures are blocked immediately, the status reads Pass: Expired, and a "Renew pass" CTA is shown once back online
Offline Preflight Checklist Completion
Given the device is online and the user taps "Start Offline Session" When the preflight runs Then it completes all required checks and shows step-level pass/fail: (1) time sync with trusted source, (2) download geofence polygons for selected jobs, (3) cache required map tiles/imagery at specified zoom levels, (4) verify free storage meets configured minimum per expected captures, (5) validate offline pass present and unexpired in secure hardware, (6) sensor availability check (GPS, compass), (7) background execution permission verified
Given any preflight step fails When the user views the checklist Then failed steps show human-readable cause, error code, and a Retry action; the "Begin Offline" button remains disabled until all required steps pass
Given preflight completed successfully When the user starts the offline session within 24 hours Then cached assets and geofences are used without re-download; if older than 24 hours, the app requires revalidation before proceeding
Error Handling and Low-Storage Safeguards
Given an offline capture error occurs (camera failure, sensor unavailable, signature failure) When the error is detected Then an inline error shows within 2 seconds with a concise message, error code, and next-step guidance; a retry is offered only if the operation is idempotent and safe
Given device free storage falls below the configured threshold (default 500 MB) When the user is about to start a new capture Then a blocking warning appears with estimated space required per capture and actions to manage storage; new captures are prevented until threshold is met
Given free storage drops below threshold during an active capture When the capture is in progress Then the current capture is allowed to complete if possible, is committed to persistent storage, and subsequent captures are blocked with a clear warning
Given any offline error or storage warning is shown When the session summary is generated Then the messages persist in the session log with timestamps for later review and support
Safe Shutdown Flush to Persistent Storage
Given the app receives an OS terminate/background event or the user taps "End Offline Session" When unsynced items exist (captures, metadata, audit logs, signatures) Then the app flushes these to persistent storage atomically within 3 seconds, verifies checksums, and displays a progress indicator until completion
Given the device hard reboots or the app is force-quit during offline work When the app is relaunched Then all queued items are intact without data loss, the queue deduplicates by content hash, and the user is prompted to resume sync
Given the flush process is in progress When the user attempts to exit the app Then a non-dismissible modal prevents exit until flush completes or a 10-second timeout triggers a safe fallback that confirms what was persisted and lists any items requiring manual retry
Crew-Facing Session Summary and Sign-Off
Given the user ends an offline session When the summary screen is shown Then it displays start/end times, pass ID, crew member(s), number of captures, geofence compliance incidents, errors/warnings, and pending uploads; all values are available offline
Given the crew reviews the summary When they provide sign-off Then they must check acknowledgments and provide name/signature; a tamper-evident hash is generated, a PDF summary is queued for upload, and the summary is locked from further edits
Given unresolved blockers exist (e.g., failed captures, storage warnings) When attempting to sign off Then the app requires acknowledgment with reason entry before allowing completion and records this in the audit trail
Accessibility and Localization Compliance
Given a user relies on assistive technologies When navigating offline indicators, banners, dialogs, and controls Then all elements have accessible names/roles/states, support keyboard/focus navigation, meet WCAG 2.1 AA contrast, and provide text alternatives to color-only signals
Given device text size is increased up to 200% When viewing offline-related screens Then layouts reflow without truncating critical information, and interactive elements remain tappable and visible without overlap
Given the app language is set to English, Spanish, or French When viewing offline mode UI Then all strings are localized, dates/times are formatted per locale, and distance units display in m/km or ft/mi per locale settings

Site Auto‑Claim

After a badge scan, geofence verification auto‑selects the correct job, assigns the uploader role, and locks uploads to that job’s file. Preloads flight presets and checklists so capture starts immediately. Eliminates misfiled photos and speeds on‑site kickoff.

Requirements

Badge Scan Authentication & Session Start
"As a field technician, I want to start my session by scanning my badge so that I can begin capture immediately without manual login steps."
Description

Enable users to initiate a secure on-site session by scanning a company-issued badge (NFC/QR). Validate identity against the organization directory and SSO, issue a time-bound session token, and register device metadata. On success, preload minimal user profile, role entitlements, and organization context for subsequent steps. Provide graceful failure states (e.g., expired badge, unknown user) with actionable prompts. Enforce session timeout and re-authentication policies aligned with security standards.
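
A time-bound session token can be sketched with any signed, expiring token format; the example below uses PyJWT with HS256 as an assumption (the criteria allow "JWT or equivalent") and carries the claims named in the acceptance criteria:

```python
# Sketch only: time-bound session token via PyJWT (pip install PyJWT).
import time

import jwt

SIGNING_KEY = "rotate-me"  # production: per-environment secret, rotated

def issue_session(user_id, org_id, device_id, ttl_minutes=60):
    now = int(time.time())
    claims = {"userId": user_id, "orgId": org_id, "deviceId": device_id,
              "iat": now, "exp": now + ttl_minutes * 60}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def check_session(token):
    try:
        return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])  # checks exp
    except jwt.ExpiredSignatureError:
        return None  # caller returns 401 TOKEN_EXPIRED and prompts a re-scan
```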

Acceptance Criteria
Badge Scan via NFC or QR Initiates Authentication
Given the app is in the foreground with network connectivity, When a valid company-issued badge is scanned via NFC or QR, Then the app shows an "Authenticating…" state within 500 ms and sends a single auth request containing badgeId (hashed), scanType (NFC|QR), timestamp, and a cryptographic nonce.
Given a badge scan is in progress, When duplicate scans occur within 5 seconds, Then subsequent scans are ignored and no additional auth requests are sent.
Given camera permission is not granted, When the user selects QR scan, Then the app prompts for permission and offers NFC scan (if supported) without leaving the flow.
Identity Validation Against Org Directory and SSO
Given a badgeId from a scan, When validation is performed, Then authentication succeeds only if the org directory returns an active user and the SSO validates the user session for the same org.
Given the user is disabled or not found in the directory, When validation occurs, Then authentication fails with code AUTH_USER_INVALID and no session token is issued.
Given SSO is unavailable or returns an error, When validation occurs, Then authentication fails with code AUTH_SSO_ERROR and the UI presents actions: Retry and Contact Admin; no session token is issued; the event is audit-logged.
Time‑Bound Session Token Issuance and Storage
Given authentication succeeds, When a session token is issued, Then the token expiry is set to the configured policy value (<= 60 minutes), includes userId, orgId, issuedAt, expiresAt, deviceId, and is signed (JWT or equivalent).
Given a session token is received, When the app stores it, Then it is stored only in secure device storage (Keychain/Keystore) and is never written to plaintext logs.
Given a session token is expired, When the app calls a protected endpoint, Then the server returns 401 TOKEN_EXPIRED and the app prompts for re-authentication via badge scan.
Device Metadata Registration on Session Start
Given authentication succeeds, When the session starts, Then the client sends device metadata (deviceModel, osVersion, appVersion, deviceId, locale, timeZone) with the sessionId, and the server persists it linked to the session.
Given required metadata is missing, When the server validates the payload, Then it returns 400 METADATA_INVALID and the client retries with complete fields (max 1 retry).
Given metadata is accepted, When the server responds, Then it returns 201 CREATED with the persisted sessionId; the event is audit-logged with no PII beyond hashed identifiers.
Minimal Profile and Entitlements Preload
Given a session token is active, When the app preloads context, Then it fetches and caches minimal profile (displayName, userId, orgId), role, and entitlements, and completes within 2 seconds on a stable 5 Mbps connection (<= 200 ms RTT).
Given entitlements are loaded, When the user lacks uploader permission, Then upload-related UI and actions are disabled and a non-blocking banner indicates Insufficient permissions.
Given preload fails, When a network error occurs, Then the app retries once after a backoff delay and shows a retry action without blocking sign-in state.
Graceful Failure Handling with Actionable Prompts
Given a badge is expired, When it is scanned, Then the app shows Badge expired (code BADGE_EXPIRED) with the expiration date (if available) and actions: Retry with another badge and Contact Admin; no token is issued.
Given a badge is unknown, When it is scanned, Then the app shows Badge not recognized (code BADGE_UNKNOWN) and provides actions: Retry and Learn more; the event is audit-logged with a hashed badgeId.
Given the device is offline, When a scan is attempted, Then the app shows a No network message with actions: Try again and Report issue; no auth request is queued that would auto-submit without user confirmation.
Session Timeout and Re‑authentication Enforcement
Given the session is idle beyond the configured idleTimeout (e.g., 15 minutes), When the user attempts a protected action, Then the app requires re-authentication via badge scan and the previous token is revoked server-side.
Given the session exceeds the maxSessionDuration (e.g., 60 minutes), When background refresh would occur, Then refresh is denied and the user must re-authenticate; the server returns 401 TOKEN_MAX_AGE.
Given the user signs out, When sign-out is confirmed, Then the token is invalidated server-side, secure storage is cleared, and sensitive runtime data (profile, entitlements) is purged from memory.
Geofence Job Auto-Selection
"As a site lead, I want the correct job to be auto-selected when I arrive so that all captures are tied to the right project without manual searching."
Description

Use high-accuracy location services to verify presence within predefined site geofences and automatically select the correct job record. Apply accuracy thresholds and dwell-time rules to reduce false positives, and implement tie-breakers when multiple jobs overlap (nearest centroid, start time proximity, crew assignment). Provide a fallback selector if no confident match is found and log all decision criteria for auditing. Sync with scheduling/CRM to fetch upcoming jobs and geofence boundaries.
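
The tie-breaker cascade can be sketched as successive filters over the candidate jobs; the 3 m centroid tie window comes from the criteria below, while the candidate field names are illustrative:

```python
# Sketch only: tie-breaker cascade for overlapping geofences.
import math
import time

def pick_job(candidates, user_pos, crew_id, now=None):
    """candidates: dicts with 'centroid' (lat, lon), 'start_ts', 'crew_ids'."""
    if not candidates:
        return None
    now = now or time.time()

    def dist_m(job):  # equirectangular approximation; fine at geofence scale
        (la1, lo1), (la2, lo2) = user_pos, job["centroid"]
        x = math.radians(lo2 - lo1) * math.cos(math.radians((la1 + la2) / 2))
        y = math.radians(la2 - la1)
        return math.hypot(x, y) * 6_371_000

    d0 = min(dist_m(j) for j in candidates)
    tied = [j for j in candidates if dist_m(j) - d0 <= 3.0]  # nearest centroid
    if len(tied) > 1:  # next rule: scheduled start time closest to now
        t0 = min(abs(j["start_ts"] - now) for j in tied)
        tied = [j for j in tied if abs(j["start_ts"] - now) == t0]
    if len(tied) > 1:  # final rule: prefer jobs including the user's crew
        tied = [j for j in tied if crew_id in j["crew_ids"]] or tied
    return tied[0] if len(tied) == 1 else None  # None -> fallback selector
```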

Acceptance Criteria
Auto-Select Single Matching Job Within Geofence
Given a user scans their badge and the device reports horizontal accuracy ≤ 10 m And exactly one active job geofence contains the user’s location for ≥ 8 continuous seconds When geofence verification runs Then the system auto-selects that job record And assigns the user the "Uploader" role on that job And locks all new media uploads to that job’s file And preloads the job’s flight presets and capture checklist And displays a confirmation banner within 2 seconds
Accuracy Threshold and Dwell-Time Enforcement
Given a user is physically within a job geofence When the device’s reported horizontal accuracy is > 10 m at any time during evaluation Then auto-selection is deferred until accuracy ≤ 10 m is sustained for ≥ 8 continuous seconds And no job is auto-selected while conditions are unmet And an unobtrusive "Improving location…" status is shown until thresholds are satisfied
Overlapping Geofences Tie-Breaker Resolution
Given a user meets accuracy and dwell thresholds within overlapping geofences A and B When candidates are evaluated for auto-selection Then prefer the job whose geofence centroid is nearest to the user’s current location And if centroid distance difference ≤ 3 m, prefer the job whose scheduled start time is closest to now And if still tied, prefer the job that includes the user’s crew/assignment per CRM And if still tied after all rules, invoke the fallback selector instead of auto-selecting
Fallback Manual Selector When Confidence Is Low
Given a user scans their badge and no candidate meets accuracy/dwell thresholds within 30 seconds, or no geofence contains the location, or tie-breakers remain unresolved When the fallback selector is invoked Then present a searchable list of upcoming jobs within 1 km and ±2 days, sorted by start time proximity And require the user to select a job or choose "None of these" And require a reason code for any manual selection (e.g., No GPS, Out of range, Overlap tie, Other) And upon confirmation, lock uploads to the chosen job and proceed And record that the selection was manual with the provided reason
Decision Criteria Audit Logging
Given any auto-selection attempt or manual fallback selection When a selection decision is made Then log: timestamp, user/badge ID, device ID, app version, accuracy value(s), dwell duration, candidate job IDs, centroid distances, tie-breakers evaluated with outcomes, selected job ID (or none), selection mode (auto/manual), confidence value, CRM sync timestamp, and a correlation ID And persist the log within 2 seconds of the decision And make the log retrievable via audit API and in-app history for at least 30 days And redact PII fields as per policy on export/download
CRM/Scheduling Sync and Geofence Caching
Given the user signs in or a periodic sync interval elapses When syncing with the scheduling/CRM service Then fetch active/upcoming jobs for the next 7 days and their geofence polygons, completing within 5 seconds at p95 And update the local cache atomically with a TTL of 60 minutes And if offline, use the last successful cache; if cache age > 24 hours, disable auto-selection and show the fallback selector with a staleness warning And reflect job updates/deletions on the next successful sync
Role Auto-Assignment & Permissions Lock
"As an operations manager, I want roles to be assigned automatically on site so that access is controlled and compliant without manual admin work."
Description

Upon successful geofence verification, automatically assign the user the appropriate on-site role (e.g., Uploader) for the selected job and enforce least-privilege permissions. Lock restricted actions (e.g., changing job context, deleting media) during the on-site session. Support supervisor overrides and escalation rules, and automatically revoke elevated rights on session end or geofence exit. Integrate with the platform’s RBAC service and reflect role state in the UI.
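
A minimal sketch of the session-scoped permission lock with time-bound overrides; the role and action names are illustrative, not the platform's actual RBAC vocabulary:

```python
# Sketch only: least-privilege on-site session with supervisor overrides.
ONSITE_ROLE_SETS = {"Uploader": {"capture", "upload"}}

class OnSiteSession:
    def __init__(self, role: str):
        self.allowed = set(ONSITE_ROLE_SETS.get(role, ()))
        self.overrides = {}  # action -> expiry (epoch seconds)

    def can(self, action: str, now: float) -> bool:
        if action in self.allowed:
            return True
        expiry = self.overrides.get(action)
        return expiry is not None and now < expiry  # supervisor override window

    def grant_override(self, action: str, now: float, minutes: int = 15):
        self.overrides[action] = now + minutes * 60  # approval audited elsewhere

    def revoke_all(self):
        self.overrides.clear()  # on session end, idle timeout, or geofence exit
```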

Acceptance Criteria
On‑Site Role Auto‑Assignment After Geofence Verification
Given a user with a valid badge and GPS enabled initiates on‑site check‑in at a geofenced location for Job X And the user has no active on‑site session When geofence verification succeeds (within 30 m of Job X centroid for ≥10 s) and the badge scan is validated Then the system auto‑selects Job X And assigns on‑site role "Uploader" with the least‑privilege permission set for Job X within 2 seconds And any broader off‑site roles are constrained to the on‑site least‑privilege set for the session context And the UI displays a role chip "Uploader · On‑site" and the selected job name And an audit event "role_assigned" is recorded with userId, jobId, coordinates, timestamp
Enforced Permissions Lock During On‑Site Session
Given the user is in an active on‑site session as "Uploader" for Job X When the user attempts a restricted action (e.g., change job context, delete media, modify RBAC, edit job settings) Then the action is blocked with HTTP 403 and error code RBAC_LOCK And the UI shows an inline message "Action locked during on‑site capture" And the job context switcher remains disabled and visibly greyed out And no data is mutated And an audit event "action_denied_locked" is recorded with attemptedAction and reason
Supervisor Override with Time‑Bound Elevation
Given a supervisor with "Override:OnSite" permission is assigned to Job X and is online And a worker with role "Uploader" requests temporary permission "DeleteMedia" for 15 minutes with a reason When the supervisor approves the override and completes MFA Then the worker receives permission "DeleteMedia" scoped to Job X for 15 minutes And the UI shows an override badge with a countdown timer and the stated reason And all actions performed under the override are tagged with overrideId in the audit log And if the supervisor cancels the override, the elevated permission is removed within 5 seconds
Automatic Rights Revocation on Session End or Geofence Exit
Given a user has an active on‑site session for Job X with any elevated rights When the user ends the session, logs out, is idle for 15 minutes, or exits the geofence by >30 m for ≥60 s Then all temporary elevations are revoked and the user's permissions revert to baseline RBAC And new restricted actions are blocked immediately And in‑flight uploads continue to completion but no new deletes or edits can be initiated And the UI removes override indicators within 3 seconds And an audit event "role_revoked" is recorded with reason
RBAC Service Integration and Fail‑Safe Behavior
Given the RBAC service is reachable When assigning on‑site roles and permissions Then the system fetches and applies the current permission set version and caches it for the session And rejects mismatched or unknown permission sets with a clear error message Given the RBAC service is unavailable for up to 5 minutes When initiating an on‑site session with a cached "Uploader" permission set (age ≤ 24 hours) Then the system allows capture and upload only and blocks all restricted actions by default And displays a banner "Limited mode: RBAC offline" And retries RBAC sync every 30 seconds until restored
UI Role State Visibility and Feedback
Given a role state change occurs (assignment, override grant, override revoke) When the state change is applied Then the role chip, lock icons, disabled states, and tooltips update within 1 second And tooltips enumerate which actions are locked and why And any override shows a countdown in mm:ss that updates every second And accessibility announcements are emitted via ARIA live region for role and lock changes
Comprehensive Audit Logging for Role Events
Given any role assignment, locked‑action denial, override grant or revoke, or automatic revocation occurs When the event is written to the audit log Then the record includes eventId, userId, jobId, timestamp (UTC), location (lat,long), deviceId, roleBefore, roleAfter, permissionDelta, actorId (for overrides), reason, and result And records are immutable and queryable in admin search within 2 minutes of occurrence And the export API returns the event within the same 2‑minute SLA And a tamper‑detection hash is computed and stored with the record
Upload Binding to Job File
"As a field technician, I want my captures to be locked to the correct job so that nothing is misfiled or lost."
Description

Bind all captured media (photos, videos, thermal, notes) to the auto-selected job ID and prevent cross-job misfiling. Tag each asset with immutable metadata (job ID, timestamps, GPS, device, operator) at capture time, queue securely on-device, and sync to the correct job file. Provide visibility into the current binding state and block uploads to other jobs unless an authorized override is granted, which must be audited.
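
Immutability here can be enforced by hashing the metadata record at capture time so any later edit breaks the checksum. A minimal sketch with illustrative field names:

```python
# Sketch only: capture-time metadata sealed with a checksum.
import hashlib
import json
import time

def tag_asset(content: bytes, job_id, operator_id, device_id, gps):
    meta = {
        "job_id": job_id,
        "operator_id": operator_id,
        "device_id": device_id,
        "captured_at_utc": int(time.time()),
        "gps": gps,  # set to None with reason "No Fix" when unavailable
        "content_checksum": hashlib.sha256(content).hexdigest(),
    }
    # Seal the record: recomputing this hash exposes any later tampering.
    meta["meta_checksum"] = hashlib.sha256(
        json.dumps(meta, sort_keys=True).encode()).hexdigest()
    return meta
```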

Acceptance Criteria
Auto-Binding on Badge Scan and Geofence Match
Given a valid badge scan occurs within a single job’s geofence When the app resolves the job context Then the capture session auto-binds to that job ID within 2 seconds And the operator is assigned the Uploader role for that job And all capture pipelines are configured to target the bound job file And the UI header displays the bound job name/ID and operator role before capture can start
Immutable Metadata Tagging at Capture
Given the session is bound to a job ID When an asset (photo, video, thermal, note) is captured Then the system writes immutable metadata at capture time: job ID, operator ID, device ID/model, app version, capture timestamp (UTC), local timestamp with timezone, GPS lat/long with reported accuracy, media type, and content checksum And the user cannot edit or delete these fields in-app or via APIs; attempts are blocked and logged And if GPS is unavailable, GPS fields are set to null with reason "No Fix" and the event is logged; capture is not blocked
Upload Lock to Bound Job
Given a session with an active binding When the user attempts to change the upload destination to a different job Then the action is blocked with message "Uploads locked to Job <ID>" And upload APIs reject mismatched job IDs with HTTP 403 and error code UPLOAD_LOCKED And no asset from this session appears in any job other than the bound job upon server verification
Authorized Override With Audit Trail
Given a session bound to Job A When a user with "Override Upload Binding" permission initiates an override Then the app requires MFA and a required reason note (minimum 10 characters) And upon success, binding changes to Job B and is recorded in an immutable audit log entry with timestamp, actor, device, location, prior job ID, new job ID, and reason And all assets captured after override bind to Job B; assets captured before remain bound to Job A And unauthorized override attempts are denied with HTTP 403 and are audit-logged as failed attempts
Offline Secure Queue and Correct Sync
Given the device is offline or has degraded connectivity When assets are captured Then assets are stored on-device encrypted at rest and added to the bound job’s upload queue And the queue UI shows counts and total size by job And when connectivity is restored, the client syncs queued assets to the bound job using idempotency keys to ensure exactly-once server storage (no duplicates) And retries use exponential backoff with jitter up to 5 attempts; failures after the limit raise a user alert and leave items queued And successful uploads remove local copies per retention policy and update queue counts within 1 second
Binding State Visibility and Edge Conditions
Given the capture screen is open Then a persistent indicator shows current binding (job name/ID, operator, role, geofence status, queue count) and last sync time And changes to binding state update the indicator within 1 second When geofence is lost mid-session, capture continues and binding persists; the user is warned but not unbound When the operator’s Uploader role is revoked, further captures are blocked until rebinding or role restoration; queued items remain and sync when permitted And tapping the indicator opens a detail view with recent overrides, last sync result, and outstanding errors
All Media Types and Notes Are Bound
Given the session is bound When the user captures photos, videos, thermal imagery, or creates text/voice notes Then each asset inherits the same job ID binding and immutable metadata schema And video assets record start/end timestamps and GPS track when available And batch imports maintain per-asset metadata; assets cannot be retargeted to a different job without an authorized override And generated outputs (PDFs, estimates) only include assets bound to the job
Flight Presets & Checklist Preload
"As a drone pilot, I want presets and checklists ready on arrival so that I can start capture quickly and stay compliant with best practices."
Description

Automatically preload drone flight presets (altitude, overlaps, patterns, camera settings) and safety/compliance checklists based on client, roof type, and jurisdiction once the job is selected. Push presets to supported drone SDKs and require completion of critical checklist items before flight arming. Allow admins to manage versioned templates and per-client overrides, and cache presets for quick startup. Record checklist results with timestamps and operator identity.
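
The resolution order stated in the criteria below (per-client override, then roof type, then jurisdiction, then default) is a short lookup cascade. A minimal sketch; the keyed template store shape is an assumption:

```python
# Sketch only: template resolution cascade; each entry is the latest
# published version for that selector.
def resolve_template(templates, client_id, roof_type, jurisdiction):
    for key in (("client", client_id),
                ("roof_type", roof_type),
                ("jurisdiction", jurisdiction),
                ("default", None)):
        template = templates.get(key)
        if template:
            return template
    raise LookupError("no default template published")

# Example: resolve_template({("roof_type", "gable"): "Residential-Gable v2.3",
#                            ("default", None): "Default v1.0"},
#                           "acme", "gable", "TX") -> "Residential-Gable v2.3"
```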

Acceptance Criteria
Preset Selection and SDK Push on Job Selection
Given a job is selected via Site Auto‑Claim with client, roof type, and jurisdiction metadata And versioned templates exist for those selectors When the job becomes active in the mobile app Then the system resolves the template using priority: per‑client override > roof type > jurisdiction > default And preloads altitude, overlaps (frontlap/sidelap), flight patterns, and camera settings into the mission plan And displays the applied template name and version (e.g., "Residential‑Gable v2.3") to the operator And pushes the presets to the connected supported drone SDK and receives an acknowledgment within 3 seconds And if no matching template exists, the default template is applied and labeled accordingly
Critical Checklist Gate Before Flight Arming
Given critical checklist items are defined for the resolved template/jurisdiction And the operator is connected to a supported drone When the operator attempts to arm or start the mission Then arming is blocked until all critical items are completed And the UI lists remaining critical items with inline navigation to each And once all critical items are completed, arming becomes enabled within 1 second And skipped/non‑critical items do not block arming And each completion requires explicit operator confirmation (checkbox, signature, or value as defined by the item)
Admin Template Versioning and Overrides Management
Given an authenticated Admin opens the Templates Console When the Admin creates or edits a template with selectors (client, roof type, jurisdiction) and required fields (altitude, overlaps, patterns, camera settings) Then the template is saved as a new version with semantic versioning and an effective date And publishing makes the version available to mobile clients within 60 seconds And per‑client overrides take precedence over generic templates at runtime And the Admin can rollback to a prior version, which becomes the latest published version And non‑admin users cannot create, edit, or publish templates (permission error shown and logged) And all template changes are audit‑logged with actor, timestamp, and diff summary
Offline Caching and Fast Startup
Given the device is offline or on a high‑latency network And a matching template version was synced within the last 30 days When the job is selected Then presets load from cache within 2 seconds And the app marks presets as "cached" and records the version ID And when connectivity is restored, the app verifies freshness and replaces cached presets if a newer published version exists, within 60 seconds And if no cached presets exist, the app prompts the user and retries sync automatically until success or dismissal
Checklist Results Audit Trail
Given the operator completes any checklist item When the item state changes to Completed, Skipped, or N/A Then an immutable audit record is appended with job ID, operator identity (badge ID), device ID, template version, item ID, new state, ISO 8601 timestamp with timezone offset, and geolocation (if permission granted) And audit records are visible in the job timeline within 5 seconds of submission And export to CSV/PDF includes these fields and totals by state And attempted edits create new records; prior records remain preserved
SDK Push Failure and Unsupported Drone Handling
Given the connected drone SDK is unsupported or rejects the preset push When the push attempt fails Then the app displays a clear error with cause and a Retry action And logs the failure with SDK type, error code, and timestamp And allows manual mission configuration while maintaining the checklist gate for arming And tags the job with "SDK Push Failed" for admin visibility And subsequent successful pushes clear the tag automatically
Offline Mode & Deferred Sync
"As a field user, I want the feature to work without cell service so that my work continues uninterrupted in remote areas."
Description

Ensure core Site Auto‑Claim flows function with limited or no connectivity. Cache upcoming jobs, geofences, user entitlements, presets, and checklists for the day. Allow offline badge validation via signed tokens or last-known trust window, capture and locally encrypt assets, and queue metadata and logs for later sync. Provide clear offline/online indicators, conflict resolution policies, and automatic retry with exponential backoff once connectivity returns.

Acceptance Criteria
Offline Badge Validation via Signed Token
- Given the device has a cached signed badge token for the user and site that is within the trust window and there is no connectivity, When the badge is scanned, Then the user is validated offline within 2 seconds and granted access to Site Auto‑Claim.
- Given the cached token is expired or marked revoked per last-known revocation data and there is no connectivity, When the badge is scanned, Then access is denied and an offline error code (OA-401) is shown without crashing.
- Given system clock skew is within ±5 minutes of last sync, When validating the token offline, Then the allow/deny decision matches server-side validation for the same token and skew.
- Given an offline validation occurs, When logging the event, Then a local audit record is written including hashed token ID, user ID, site ID, decision, and timestamp.
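A sketch of the offline decision path described above. The token format (an HMAC‑signed JSON blob cached at last sync) is an assumption; only the OA-401 code, revocation check, and trust-window behavior come from the criteria:

```python
# Offline badge validation sketch; trust window and skew values are examples.
import hmac, hashlib, json

TRUST_WINDOW_S = 24 * 3600   # assumed trust window granted at last sync
CLOCK_SKEW_S = 5 * 60        # tolerated skew per the criteria above

def validate_badge_offline(token_blob: bytes, signature_hex: str, key: bytes,
                           revoked_hashes: set, now: float):
    expected = hmac.new(key, token_blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature_hex):
        return (False, "OA-401")            # bad signature -> deny, don't crash
    if hashlib.sha256(token_blob).hexdigest() in revoked_hashes:
        return (False, "OA-401")            # revoked per last-known data
    token = json.loads(token_blob)
    if now > token["issued_at"] + TRUST_WINDOW_S + CLOCK_SKEW_S:
        return (False, "OA-401")            # outside the trust window
    return (True, None)                     # validated offline; write audit record
```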
Geofence Auto‑Claim Offline
- Given cached geofences and jobs for the day and GPS is available, When the device enters a single matching geofence after successful offline badge validation, Then the correct job is auto-selected within 3 seconds and uploads are locked to that job.
- Given multiple geofences overlap the current location, When auto-claim triggers offline, Then the user is prompted to select from the overlapping jobs and the chosen job is locked for uploads.
- Given no matching geofence is found offline, When attempting to start capture, Then capture is blocked with an explanatory message and a retry option.
- Given a job lock is active offline, When the user attempts to switch jobs, Then the action is prevented and a notice is shown.
Daily Cache Availability and Freshness
- Given the app synced successfully earlier the same day, When operating offline, Then jobs, geofences, user entitlements, flight presets, and checklists are available from cache for 24 hours after last sync.
- Given a local storage cap (default 500 MB, configurable), When new data would exceed the cap, Then least-recently-used non-locked items are evicted without affecting the current job.
- Given cached data is stored locally, When inspected on device, Then it is encrypted at rest using OS keystore-backed keys and is unreadable by other apps.
- Given cached data becomes stale (older than 24 hours) and there is no connectivity, When attempting offline operations, Then the user is warned and allowed to proceed only if within a grace period of 2 hours; otherwise operations are blocked.
Offline Capture, Encryption, and Queueing
- Given the device is offline and within a locked job, When photos/videos are captured, Then assets are saved locally within 1 second per photo (average on target devices), encrypted at rest, and added to an upload queue with associated metadata (timestamp, GPS, device ID, job ID, capture type).
- Given the app is force-closed or the device reboots, When reopened, Then the queued assets and metadata are intact and the queue resumes without data loss.
- Given remaining free storage drops below 10% or the queue size exceeds 5,000 assets, When capturing, Then the user is warned; below 5% free storage, capture is prevented until space is freed.
- Given an asset is written to local storage, When its checksum is computed, Then a SHA-256 content hash is stored to support later deduplication and integrity checks.
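One plausible shape for the capture queue, sketched below; the SQLite table and column names are hypothetical, but the SHA-256 content hash and per-asset metadata mirror the criteria:

```python
# Sketch: enqueue an encrypted asset with a content hash for dedup/integrity.
import hashlib, json, sqlite3, time, uuid

def enqueue_asset(db: sqlite3.Connection, encrypted_path: str, raw_bytes: bytes,
                  job_id: str, gps: tuple, device_id: str, capture_type: str) -> str:
    db.execute("CREATE TABLE IF NOT EXISTS upload_queue ("
               "asset_id TEXT PRIMARY KEY, job_id TEXT, path TEXT, "
               "sha256 TEXT, metadata TEXT, queued_at REAL)")
    asset_id = str(uuid.uuid4())
    content_hash = hashlib.sha256(raw_bytes).hexdigest()   # survives reboots with the row
    metadata = json.dumps({"gps": list(gps), "device_id": device_id,
                           "capture_type": capture_type, "timestamp": time.time()})
    db.execute("INSERT INTO upload_queue VALUES (?, ?, ?, ?, ?, ?)",
               (asset_id, job_id, encrypted_path, content_hash, metadata, time.time()))
    db.commit()                                            # durable before returning
    return asset_id
```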
Offline/Online Status Indicators and UX Guards
- Given the device is offline, When in the Site Auto‑Claim flow, Then an 'Offline Mode' banner and icon are visible, showing queued item count and trust window remaining time.
- Given an action requires connectivity (e.g., share PDF, create new site, change company settings), When attempted offline, Then the action is disabled or intercepted with a toast explaining it requires connectivity.
- Given the trust window for offline badge validation is expiring within 10 minutes, When in Offline Mode, Then a countdown warning is displayed; on expiration, capture is halted until revalidated online.
Deferred Sync with Backoff and Conflict Resolution
- Given connectivity is restored, When any queued item exists, Then sync begins automatically within 10 seconds and uses exponential backoff (2s, 4s, 8s, up to 5m) on retryable failures, resetting after a successful request.
- Given a queued asset matches an already uploaded asset by content hash or UUID, When syncing, Then the duplicate is skipped and marked as 'deduped' without user intervention.
- Given the target job was modified or closed while offline, When syncing, Then assets are held in a 'Needs Review' state; the user is prompted to resolve to an existing job or create a new job, and no data is discarded without explicit confirmation.
- Given partial sync failures occur (e.g., 502, timeouts), When syncing, Then failed items remain queued with error codes and are retried according to backoff policies while successful items are not retried.
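The retry schedule above reduces to a small loop; a sketch in which upload_fn and RetryableError are placeholders for the real transport and its retryable error classes (502, timeout):

```python
# Sketch of deferred sync with exponential backoff: 2s, 4s, 8s, ... capped at 5m.
import time

class RetryableError(Exception):
    pass

def sync_with_backoff(queue: list, upload_fn, base: float = 2.0, cap: float = 300.0):
    delay = base
    while queue:
        try:
            upload_fn(queue[0])
            queue.pop(0)                 # successful items are not retried
            delay = base                 # reset backoff after a success
        except RetryableError:
            time.sleep(delay)
            delay = min(delay * 2, cap)  # 2s -> 4s -> 8s -> ... -> 300s
```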
Offline Audit Logging and Integrity
- Given any offline action occurs (badge validation, geofence lock, capture, queue changes), When logging locally, Then an append-only log entry is recorded with sequence number, event type, timestamps, device ID, and job ID.
- Given logs are synced after connectivity is restored, When viewed in the admin audit endpoint, Then the sequence is contiguous per device and includes an HMAC or signature to detect tampering.
- Given a log gap or tamper is detected during sync, When processing logs, Then the system flags the device, marks affected items with 'Integrity Warning,' and alerts an admin.
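One common way to satisfy the contiguity and tamper checks is an HMAC chain: each entry's MAC covers its body plus the previous MAC, so any gap, reorder, or edit breaks verification at sync time. A sketch, not the shipped log format:

```python
# Sketch of an append-only, tamper-evident event log with chained HMACs.
import hmac, hashlib, json

def append_entry(log: list, event: dict, key: bytes) -> dict:
    entry = dict(event, seq=len(log), prev_mac=log[-1]["mac"] if log else "")
    body = json.dumps(entry, sort_keys=True).encode()
    entry["mac"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list, key: bytes) -> bool:
    prev_mac = ""
    for i, entry in enumerate(log):
        body = json.dumps({k: v for k, v in entry.items() if k != "mac"},
                          sort_keys=True).encode()
        expected = hmac.new(key, body, hashlib.sha256).hexdigest()
        if (not hmac.compare_digest(entry["mac"], expected)
                or entry["seq"] != i or entry["prev_mac"] != prev_mac):
            return False   # gap or tamper: flag device, mark 'Integrity Warning'
        prev_mac = entry["mac"]
    return True
```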
Audit Trail & On‑Site Events Log
"As a supervisor, I want a detailed on-site event log so that I can resolve disputes and verify compliance."
Description

Capture a tamper-evident timeline of on-site events including badge scan, geofence fixes, job selection rationale, role assignment, checklist completion, flight start/stop, and upload bindings. Store with timestamps, user, device, and location metadata; expose the timeline in the job file and allow export for dispute resolution. Enforce retention policies and integrity checks, and surface anomalies (e.g., geofence mismatch) to admins.

Acceptance Criteria
Badge Scan Audit Entry (On‑Site Auto‑Claim)
Given a verified employee scans their badge within an active site geofence When Site Auto‑Claim starts Then the system writes one immutable audit entry with fields: event_type='badge_scan', user_id, badge_id, device_id, job_id (if selected), timestamp_utc (ms), gps_lat, gps_lng, gps_accuracy_m, geofence_id, distance_m_to_geofence_center, and signature_hash And the entry is visible in the job file timeline within 5 seconds of creation And the entry cannot be edited or deleted via UI or API; attempts are rejected with HTTP 403 and are themselves logged as event_type='audit_modify_attempt'
Geofence Verification & Job Selection Rationale Logged
Given multiple nearby jobs are candidates within 150 m When the system auto-selects a job Then an audit entry 'job_selection' records candidate_job_ids, selected_job_id, selection_reason, geofence_match_score, distance_m, and rule_version And if a human override occurs, an 'override' entry records actor_user_id, reason_text, and timestamp, and the original rationale remains preserved And if no job meets threshold, a 'no_selection' entry records threshold, best_match_score, and a manual selection prompt is shown and logged as 'manual_selection_prompt'
Role Assignment and Upload Lock Binding Logged
Given the system assigns the uploader role upon auto-claim When the assignment occurs Then a 'role_assignment' entry records assigned_role='uploader', user_id, job_id, permissions_granted, and expiry (if any) And any photo/video upload from the device during the session is bound to job_id and logged as 'upload_binding' with file_id, checksum_sha256, capture_timestamp_utc, and storage_uri And any attempt to upload to a different job_id is blocked and logged as 'upload_lock_violation' with HTTP 409 returned
Checklist Completion and Flight Start/Stop Logged
Given flight presets and the pre-flight checklist are preloaded for the job When the user completes checklist items Then each item completion is logged as 'checklist_item' with item_id, status (pass/fail/na), timestamp_utc, and user_id And starting a flight logs 'flight_start' with drone_id, controller_id, firmware_versions, gps_fix (true/false), and timestamp_utc And stopping a flight logs 'flight_stop' with duration_s, battery_pct_end, and any incidents; if the app crashes, the next session writes a 'flight_stop_recovered' with inferred end_time and reason='app_crash'
Timeline Exposure in Job File and Export
Given a job file is opened by an authorized user When the timeline view is requested Then the on-site events timeline displays in chronological order with a local/UTC time toggle and source icons, loading within 2 seconds for up to 1,000 entries And an export action allows PDF and JSON export containing the exact stored fields and signature chain for the selected date range And the JSON export validates against schema 'rooflens.audit.v1' and includes a verification report of signature checks (pass/fail per entry)
Retention Policy and Integrity Enforcement
Given an organization policy of 7 years retention for audit logs When audit entries reach retention end Then entries are deleted via a scheduled job that writes 'retention_delete' entries listing range and count_deleted and produces a deletion certificate artifact And before deletion, entries remain write-once (WORM); any mutation attempt returns HTTP 403 and logs 'audit_modify_attempt' And integrity checks compute and verify chained hashes for all entries at least daily; any mismatch triggers an 'integrity_alert' with affected_entry_ids
Anomaly Detection and Admin Surfacing
Given anomalies such as geofence mismatch (>30 m outside), clock skew > 2 minutes, duplicate badge scans within 60 seconds, or missing signatures When such an anomaly is detected Then an 'anomaly' entry is added with type, severity (low/med/high), details, and suggested action And an admin alert is sent via email and in-app within 5 minutes containing job_id and a deep link to the timeline And the timeline UI surfaces a red banner and filters to affected events by default until acknowledged by an admin; acknowledgment is logged as 'anomaly_ack' with admin_user_id

Live Rollcall

Real‑time presence and shift timeline powered by first/last scan events show who’s on site, for how long, and in what role. Managers get alerts for no‑shows and early departures, while SLA forecasts improve with verified field presence. Export attendance for compliance and payroll.

Requirements

Real-Time Presence Dashboard
"As an operations manager, I want a real-time view of on-site personnel so that I can allocate crews and ensure safety and SLA compliance."
Description

A live, auto-refreshing dashboard that shows who is currently on each job site, their role, check-in status, and time on site. Data updates sub-minute via WebSockets/SSE with a polling fallback. Users can filter by job, crew, role, and shift date, and view presence badges (on-time, late, no-scan). Integrates with RoofLens job sites and crew assignments, honors RBAC permissions, and logs an auditable trail of presence changes.

Acceptance Criteria
Sub-minute Live Updates with Fallback
Given a permitted user has the Real-Time Presence Dashboard open for Job Site A When a worker scans a check-in at Job Site A Then the worker appears or updates on the dashboard within 60 seconds without manual refresh Given a permitted user has the Real-Time Presence Dashboard open for Job Site A When a worker scans a check-out at Job Site A Then the worker’s row disappears from the “On Site” list or shows status “Checked out” within 60 seconds Given the dashboard is connected via WebSocket/SSE When no real-time messages are received for 10 seconds Then the client shows status “Reconnecting” and within 5 seconds activates 30-second polling until real-time reconnects Given the client is in polling mode due to a dropped real-time connection When a real-time connection is re-established Then the client resumes live updates and stops polling, and the status indicator shows “Live” Given the user loses network connectivity When the dashboard detects offline status Then the status indicator shows “Offline” and no errors block interaction with filters or historical views
Presence Roster and Time-on-Site Display
Given a permitted user opens the dashboard scoped to a single job site When there are active check-ins Then each row shows person name, role, crew, check-in timestamp (site local time), and a time-on-site counter in hh:mm that increments every minute Given multiple active workers are on site When the dashboard loads Then rows are sorted by most recent check-in time (descending) by default, and the user can sort by any visible column Given a worker has checked in more than once in the same day When viewing the live roster Then only the current active presence is shown, and time-on-site reflects duration since the latest check-in Given the job site has a defined time zone When displaying timestamps Then all times are rendered in the job site’s local time with UTC offset preserved in the underlying data
Presence Badges: On-time, Late, No-scan
Given a scheduled shift exists for a worker on a selected date with a scheduled start time When the worker checks in at or before the scheduled start time Then the worker’s badge shows “On-time” Given a scheduled shift exists for a worker on a selected date with a scheduled start time When the worker checks in after the scheduled start time Then the worker’s badge shows “Late” Given a scheduled shift exists for a worker on a selected date When the scheduled start time has passed and the worker has not checked in Then the dashboard shows a “No-scan” badge for that worker for that date Given a worker with a “No-scan” badge checks in later that day When the dashboard receives the check-in event Then the badge updates from “No-scan” to “Late” in real time Given a worker has no scheduled shift for the selected date When the worker checks in Then no on-time/late/no-scan badge is displayed for that worker
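The badge rules above reduce to a small decision function; a sketch, assuming all datetimes are already in the site's local zone:

```python
# Sketch of presence-badge derivation per the scenarios above.
from datetime import datetime
from typing import Optional

def presence_badge(scheduled_start: Optional[datetime],
                   first_check_in: Optional[datetime],
                   now: datetime) -> Optional[str]:
    if scheduled_start is None:
        return None                        # no scheduled shift -> no badge
    if first_check_in is None:
        return "No-scan" if now > scheduled_start else None
    return "On-time" if first_check_in <= scheduled_start else "Late"
```

Because the function is re-evaluated on each incoming event, a later check-in flips a "No-scan" badge to "Late" without any special-case handling.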
Filtering by Job, Crew, Role, and Shift Date
Given a user applies any combination of Job, Crew, Role, and Shift Date filters When the filters are changed Then the roster updates within 1 second to reflect the intersection of all selected filters Given multiple values are selected within a single filter (e.g., multiple crews) When the filter is applied Then results include records matching any selected value within that dimension (OR), combined across dimensions with AND logic Given filters are active When the user selects “Clear All” Then all filters reset and the unfiltered live roster is shown Given filters are active When the page URL is copied Then the current filter state is encoded in query parameters so opening the URL in a new tab restores the same filtered view Given a Shift Date is selected for a past day When viewing the roster Then the dashboard shows the presence state for that date (including no-scan where applicable) and does not display live timers
RBAC Enforcement and Data Scoping
Given a user lacks permission to a job site per RBAC rules When they attempt to access that site’s dashboard or data via direct URL or API Then access is denied with HTTP 403 and the UI shows a not-authorized message Given a user has permissions limited to specific sites and crews When they open the filters Then only permitted job sites, crews, and roles are available for selection, and only permitted records render in the roster Given a user’s permissions are updated during an active session When the user next performs an action or the token is refreshed Then the new permissions take effect without requiring a full page reload
Auditable Trail of Presence Changes
Given the system receives a presence change (check-in, check-out, badge status change) When the event is persisted Then an audit record is created with at least: eventId, personId, jobSiteId, (optional) crewId, eventType, eventTimeUTC, source, and previousStatus/newStatus where applicable Given an audit record exists When queried via the audit API with siteId and date range Then results are returned in reverse chronological order with pagination metadata and immutable event payloads Given a correction is made to a presence state When the correction is applied Then the original audit record remains unchanged and a new audit record is appended describing the correction Given an authorized admin requests an export for a date range When the export is generated Then the file includes all audit fields, uses UTC for timestamps, and passes schema validation
Integration with RoofLens Job Sites and Crew Assignments
Given a presence event references a RoofLens job site and (optional) crew assignment When the dashboard renders the roster Then job site names and crew names resolve from the canonical RoofLens records, and unknown IDs are not displayed in the UI Given a job site is archived in RoofLens When loading the dashboard and filters Then archived sites are excluded from the live roster and filter options by default Given a worker is not assigned to a crew When they check in Then the roster shows the worker with crew value “Unassigned” without blocking the display of presence Given a presence event references a non-existent job site ID When the event is processed Then the event is rejected with an error and does not appear on the dashboard
First/Last Scan Event Ingestion
"As a field technician, I want my check-ins to be captured even if I’m offline so that my time and presence are recorded accurately."
Description

Reliable ingestion and processing of check-in/out events from mobile scans (QR/NFC), manual confirmations, and third-party timeclocks. Supports offline buffering with backfill, idempotent event handling, geofence validation, timestamp normalization with time zones, and deduplication. Associates events to users, roles, jobs, and shifts to derive presence states and durations for the timeline.

Acceptance Criteria
Mobile Scan Ingestion and Presence Derivation
Given a verified user assigned to Job J and Shift S with Role R and a valid QR/NFC token for Job J When the user performs a check-in scan with network connectivity Then the system validates the token maps to Job J and the user’s assignment and shift window And persists an IN event with event_id (UUID v4), captured_at (device time and tz offset), source=mobile, device_id, and geo (lat,long,accuracy) And associates the event to user_id, role_id=R, job_id=J, shift_id=S And updates the presence timeline to show the user present from captured_at (UTC-normalized) When a subsequent valid check-out scan occurs for the same user and job Then an OUT event is persisted and associated as above And the timeline interval is closed as [IN.captured_at, OUT.captured_at] with duration computed to the second And presence state alternates (IN then OUT) with no overlapping intervals for the same user and job
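Pairing IN/OUT events into closed, non-overlapping intervals might look like the sketch below; the Event shape is illustrative, and ordering by captured_at_utc follows the criteria:

```python
# Sketch: derive closed presence intervals from alternating IN/OUT events.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    type: str                  # "IN" or "OUT"
    captured_at_utc: datetime

def derive_intervals(events: list) -> list:
    intervals, open_at = [], None
    for e in sorted(events, key=lambda ev: ev.captured_at_utc):
        if e.type == "IN" and open_at is None:
            open_at = e.captured_at_utc
        elif e.type == "OUT" and open_at is not None:
            intervals.append((open_at, e.captured_at_utc))   # closed interval
            open_at = None
        # an IN while open or an OUT while closed cannot create overlaps
    return intervals
```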
Offline Scan Buffering and Backfill
Given a mobile device is offline when a user scans a valid job token When connectivity is restored within 72 hours of the first buffered scan Then the client uploads buffered events in original capture order with original captured_at and geo And the server persists them with source=mobile and backfilled=true and processed_at in receive order And the presence timeline is recomputed using the original captured_at times When a buffered event is older than 72 hours at upload time Then the server rejects it with code=OFFLINE_WINDOW_EXCEEDED and does not alter the timeline And the client receives a response indicating which events were accepted or rejected
Idempotent Event Handling and Deduplication
Given an event with event_id E has already been accepted When the same event_id E is received again from any source Then the server returns 200 with idempotent=true and does not create a new presence state change Given two events with the same user_id, job_id, type (IN/OUT), source, and captured_at within 5 seconds and identical geo When both are received Then the later event is marked duplicate_of the earlier and ignored for timeline computation And a deduplication record is stored for audit with rule=content_hash_window And the resulting presence intervals remain unchanged
Geofence Validation on Scan
Given Job J has a circular geofence with radius R meters centered at (latJ,longJ) When a scan event is received with geo (latU,longU,accuracy=A) Then compute distance D between (latU,longU) and (latJ,longJ) And if D <= R + max(25m, A) the event is marked geofence_valid=true and may affect presence And if D > R + max(25m, A) the event is marked geofence_valid=false and does not alter presence And the API response includes geofence_valid and rejection_reason=OUT_OF_GEOFENCE when invalid And all invalid events are persisted for audit with no timeline update
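The distance test above is a standard haversine computation plus the max(25 m, accuracy) allowance; a sketch:

```python
# Sketch of the geofence check: great-circle distance vs. radius + allowance.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def geofence_valid(lat_u: float, lon_u: float, accuracy_m: float,
                   lat_j: float, lon_j: float, radius_m: float) -> bool:
    d = haversine_m(lat_u, lon_u, lat_j, lon_j)
    return d <= radius_m + max(25.0, accuracy_m)   # per the rule above
```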
Timestamp Normalization and Time Zone/DST Handling
Given events may arrive with device_captured_at and device_tz_offset or server_received_at When an event is persisted Then captured_at_utc is computed from device_captured_at and device_tz_offset with millisecond precision And original device_captured_at and device_tz_offset are stored unchanged And presence intervals are ordered and computed using captured_at_utc And across DST transitions, intervals never have negative duration and are disambiguated by UTC ordering And when job_time_zone is set, any derived local times for the timeline use job_time_zone consistently
Manual Check-In/Out Confirmation
Given a manager with permission MANAGE_ATTENDANCE selects a user and Job J When the manager records a manual IN or OUT with captured_at, reason_code, and optional geo Then the event is persisted with source=manual and created_by=manager_id and reason_code And it must not create overlapping intervals for the same user and job; otherwise the request is rejected with code=OVERLAP_DETECTED And geofence rules are applied; if invalid, the event is persisted but flagged geofence_valid=false and does not alter presence unless override=true is provided by a manager with override permission And all manual events are fully auditable (who, when, what changed)
Third-Party Timeclock Webhook Ingestion
Given a configured third-party integration with shared secret S and provider key P When the provider sends a webhook containing external_event_id, user_external_id, job_external_id, type, captured_at, and HMAC-SHA256 signature over the body using S Then the signature and a X-Timestamp header are validated (skew <= 5 minutes) and the request is accepted only if valid And events are deduplicated by external_event_id (idempotent on repeats) And events are processed asynchronously and ordered by captured_at_utc for timeline derivation And failures return 4xx for auth/validation errors and 5xx for transient errors, with retries supported via exponential backoff And all events are mapped to internal user_id and job_id; unmapped identifiers are rejected with code=UNMAPPED_REFERENCE and do not alter presence
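Receiver-side, the authentication steps might look like the sketch below, assuming the X-Timestamp header carries epoch seconds; the HMAC-SHA256-over-body and ≤5-minute skew rules come from the criteria:

```python
# Sketch of webhook verification: timestamp skew check + body signature check.
import hashlib, hmac, time

MAX_SKEW_S = 5 * 60

def verify_webhook(raw_body: bytes, signature_header: str,
                   timestamp_header: str, secret: bytes) -> bool:
    if abs(time.time() - float(timestamp_header)) > MAX_SKEW_S:
        return False                                 # stale or future-dated request
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```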
Role-Based Shift Timeline View
"As a site supervisor, I want to review and adjust shift timelines with reasons so that attendance records reflect reality and pass audits."
Description

An interactive timeline that visualizes each person’s shift segments per job, derived from first/last scan events, breaks, and role changes. Supports anomaly flags (missing checkout, overlapping shifts), reason-coded adjustments with permissions, and a complete audit log. Syncs with project schedules and weather holds to distinguish excused gaps from no-shows and exports as PDF for client or compliance review.

Acceptance Criteria
Generate Shift Timeline from Scan and Break Events
Given a person has at least one check-in and one check-out scan on Job X for Date D When the timeline view is opened for Job X on Date D Then the timeline shows a working segment from the first check-in to the last check-out excluding recorded break intervals And total worked duration equals the sum of working segments excluding breaks Given one or more break start and break end scan pairs exist When viewing the timeline Then each break is shown as a non-working gap labeled Break and excluded from totals Given new scan events are received for Job X When 30 seconds have elapsed Then the timeline reflects the new segments without requiring a page refresh
Segment Timeline by Role Changes
Given a role change event occurs at time T during an active shift on Job X When viewing the timeline for that day Then the working segment is split at time T And each resulting segment is labeled with its corresponding role Given multiple role change events occur in the same day When viewing role totals Then the total duration per role equals the sum of its labeled segments within the day
Detect and Flag Missing Checkout and Overlapping Shifts
Given a person has a check-in but no check-out by 23:59 local time for Date D When viewing anomalies for Date D Then the shift is flagged as Missing Checkout and excluded from final totals until resolved Given two working segments overlap in time for the same person and job When the timeline is rendered Then an Overlap anomaly is displayed on the affected segments And anomalies are included in exports and anomaly filters
Apply Reason-Coded Adjustments with Permission Controls
Given a user has Time Editor or Manager permission When the user adds, edits, or deletes a shift segment, break, or role label Then the system requires selection of a reason code from the configured list before saving And totals recalculate immediately after save Given a user lacks the required permission When the user attempts an adjustment Then the action is blocked and the user is shown an insufficient permissions message
Maintain Complete Audit Log for Adjustments
Given any manual adjustment to a segment, break, role label, or anomaly resolution is saved When the audit log is viewed for the person and date range Then an immutable record exists containing actor, UTC timestamp, change type, prior value, new value, and reason code And records are ordered chronologically and are read-only Given the audit log is exported When exporting adjustments for a job and date range Then the export contains the same audit fields with no data loss
Sync Schedule and Weather Holds to Classify Gaps
Given a project schedule exists for Job X from S to E on Date D and a weather hold exists from H1 to H2 When a person has no presence during part of S to E Then any gap overlapping H1 to H2 is labeled Excused Gap And any remaining uncovered gap is labeled No-Show Given a person checks out before E and no weather hold covers the remaining time When viewing the timeline Then the time from checkout to E is labeled Early Departure
Export Role-Based Shift Timeline to PDF
Given the timeline is displayed for Job X within date range D1 to D2 When the user selects Export as PDF Then a PDF is generated and downloaded with filename pattern JobX_Timeline_D1-D2.pdf And the PDF contains per-person rows with role-labeled segments, total worked hours, total break time, and anomaly indicators with labels And the PDF includes a section summarizing reason-coded adjustments and their counts And the exported values match the on-screen totals for the same filters
No-Show and Early Departure Alerts
"As a project manager, I want immediate alerts for no-shows and early departures so that I can reassign resources and keep the job on schedule."
Description

Configurable alert rules by job, crew, or user to detect missed check-ins, late arrivals, and early departures against scheduled shifts. Sends push, SMS, and email notifications with escalation, quiet hours, and snooze controls. Suppresses alerts for approved delays (e.g., weather), links directly to the timeline for context, and exposes webhooks for external workforce systems.

Acceptance Criteria
Detect No‑Show at Shift Start
Given a scheduled shift for user U on job J from 08:00–16:00 local with a No‑Show grace period of 15 minutes And no check‑in scan is recorded for U at J by 08:15 When the time reaches 08:15 Then the system creates a No‑Show alert within 60 seconds And the alert includes userId, crewId, jobId, shiftId, scheduledStart, graceMinutes, ruleId, createdAt And only one No‑Show alert is created per user per shift And if a check‑in scan occurs after the alert, the alert is auto‑resolved within 60 seconds and marked Resolved with resolvedAt and resolver="system"
Detect Late Arrival Against Shift Schedule
Given a scheduled shift for user U on job J from 08:00–16:00 local with a Late Arrival threshold of 10 minutes When U’s first check‑in scan at J is recorded at 08:12 Then the system creates a Late Arrival alert within 60 seconds with deltaMinutes=2 (minutes past the 08:10 threshold) And if U’s first check‑in occurs at or before 08:10, no Late Arrival alert is created And the alert references the firstScanId and firstScanAt timestamps
Detect Early Departure Before Shift End
Given a scheduled shift for user U on job J ending at 16:00 local with an Early Departure threshold of 20 minutes and inactivity verification window of 10 minutes And U’s last scan occurs at 15:30 When no further scans are recorded for U at J by 15:40 Then the system creates an Early Departure alert at 15:40 with minutesEarly=30 And if a subsequent scan occurs at or after 15:45 (within the threshold), the pending Early Departure alert is not created or is canceled if queued
Multi‑Channel Alert Delivery, Content, Timeline Deep Link, and Webhook
Given any alert (No‑Show, Late Arrival, Early Departure) is created for user U on job J And recipients have push, SMS, and email enabled When the alert is created Then push and SMS are sent within 60 seconds and email within 2 minutes And each message includes ruleType, personName, role, jobName, siteAddress or lat/lon, scheduledStart/End, observedEvent, deltaMinutes (if applicable), and a deep link to the Live Rollcall timeline (e.g., /timeline?jobId=J&userId=U&shiftId=S) And the deep link opens the timeline pre‑filtered to the shift on both web and mobile And a webhook POST is sent to the configured endpoint with JSON payload {alertId, type, jobId, crewId, userId, shiftId, timestamps, deltaMinutes?, status} And the webhook includes an HMAC‑SHA256 signature header X‑Signature and Idempotency‑Key, retries up to 5 times with 1,2,4,8,16 minute backoff until a 2xx response is received And notifications are deduplicated so no channel receives more than one delivery per alert unless escalated or manually re‑sent
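Sender-side, the delivery contract above could be sketched as follows. The X-Signature and Idempotency-Key header names and the 1/2/4/8/16-minute backoff come from the criteria; the requests usage and payload handling are illustrative:

```python
# Sketch of signed, idempotent alert-webhook delivery with fixed backoff.
import hashlib, hmac, json, time, uuid
import requests

def post_alert_webhook(url: str, payload: dict, secret: bytes) -> bool:
    body = json.dumps(payload).encode()
    headers = {
        "Content-Type": "application/json",
        "X-Signature": hmac.new(secret, body, hashlib.sha256).hexdigest(),
        "Idempotency-Key": str(uuid.uuid4()),   # stable across this alert's retries
    }
    for delay_min in (0, 1, 2, 4, 8, 16):       # first attempt + up to 5 retries
        if delay_min:
            time.sleep(delay_min * 60)
        try:
            resp = requests.post(url, data=body, headers=headers, timeout=10)
            if 200 <= resp.status_code < 300:
                return True                     # delivered; stop retrying
        except requests.RequestException:
            pass                                # transient failure; back off
    return False                                # exhausted; surface for review
```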
Escalation on Unacknowledged Alerts
Given an alert is delivered to Primary recipients (e.g., Shift Supervisor) with an escalation policy to Ops Manager after 5 minutes and to Director after 15 minutes And acknowledgment is defined as any of: in‑app Ack button, clicking a secure Ack link, or replying “ACK” by SMS When no Primary recipient acknowledges within 5 minutes Then the system escalates to Ops Manager via all enabled channels and records escalatedAt and level=2 And if acknowledged at any time, further escalation stops and ackAt and ackBy are recorded and visible on the alert And if still unacknowledged after 15 minutes, the system escalates to Director (level=3) And no duplicate escalations are sent to the same level
Quiet Hours and Per‑Recipient Snooze Controls
Given recipient R has quiet hours configured from 21:00–07:00 local When an alert is generated at 22:15 Then push and SMS are suppressed for R and an email is sent within 2 minutes with quietHours=true in metadata And suppressed channels are not re‑sent automatically when quiet hours end for that alert And when R snoozes No‑Show alerts for job J for 30 minutes at 08:00 Then R receives no No‑Show alerts for job J until 08:30, while other recipients are unaffected And snooze is scoped (alertType, jobId, userId), persisted with snoozeUntil, and audited
Suppression for Approved Delays (Weather Hold)
Given an approved delay window (reason="Weather") exists for job J from 07:30 to 09:00 local and applies to No‑Show and Late Arrival rules And user U is scheduled 08:00–16:00 with No‑Show grace=15 minutes and has not checked in by 08:15 When the system evaluates rules at 08:15 Then no No‑Show or Late Arrival alert is created and the evaluation is logged as suppressed with reason="Weather" And when the delay window ends at 09:00, the system re‑evaluates And if U remains unchecked‑in at 09:15, a No‑Show alert is created at 09:15 with suppressionHistory referenced And suppressed alerts do not trigger escalations or webhooks during the suppression window
Verified Presence Geofencing
"As a business owner, I want check-ins verified at the job site so that payroll and SLA reporting are trustworthy."
Description

Geo-verified check-ins confined to job site geofences with configurable radius and anti-spoofing checks (GPS integrity, device fingerprint, optional selfie). Provides exception workflows requiring supervisor approval and reason codes. Visualizes geofences on the map, supports multi-site projects, and ties verified presence to SLA and payroll calculations.

Acceptance Criteria
Check-In Allowed Within Geofence Radius
Given an active job site with a polygon geofence and a configured check-in radius R (default 50 m) And the user is assigned to the job and authenticated And the device provides a GPS fix with horizontal accuracy ≤ A (default 30 m) and timestamp age ≤ 10 s When the user's location point is within the geofence (or within R of its boundary) Then the Check In action is enabled And on tap the system creates a Verified Presence record with fields: userId, projectId, siteId, timestamp, lat, lon, accuracy, deviceFingerprint, source='GPS', status='Verified' And the record is visible in Live Rollcall within 5 s And an audit log entry is written with event='check-in', outcome='verified'
Check-In Blocked Outside Geofence
Given a configured job site geofence and check-in radius R When the user's location point is outside the geofence buffered by R Then the Check In action is disabled or, if tapped, returns error 'Outside job geofence' And the UI displays distance to nearest allowed boundary in meters And no Verified Presence record is created And a 'Request Exception' option is presented
Anti-Spoofing Enforcement for Verified Check-In
Given AntiSpoofing policy is enabled with thresholds A (max accuracy, default 50 m) and S (max location age, default 15 s) and RequireGPS can be true/false When the user attempts to check in Then the attempt is rejected with a specific reason if any of the following is true:
- OS reports mock location or emulator/simulator detected
- Horizontal accuracy > A
- Location age > S
- Location provider != GPS and RequireGPS = true
And the rejection is logged with antiSpoofing=true and a reason code And deviceFingerprint is captured and must match user's allowed device set; otherwise require re-authentication or admin approval per policy
Optional Selfie Verification Flow
Given site policy SelfieRequired = true And geofence and anti-spoofing checks have passed When the user attempts to complete check-in Then the app prompts for selfie capture with liveness detection And check-in completes only if liveness score ≥ L (default 0.8) and a face is detected And the selfie is stored and linked to the presence record per retention policy And after 3 consecutive liveness failures, the user is prompted to Request Exception and no verified presence record is created
Supervisor Exception Approval with Reason Codes
Given a user requests an exception for a blocked check-in When the user selects a reason code from the configured list and enters a note ≥ 10 characters Then a project supervisor receives an in-app and email notification within 30 s And the supervisor can Approve or Deny with a comment And on approval, a presence record is created with status='Exception-Approved', reasonCode saved, and it appears in Live Rollcall within 5 s And on denial, no presence record is created and the user is notified with the denial reason And all exception actions are captured in the audit log
Multi-Site Geofence Resolution and Map Visualization
Given a project with multiple active site geofences When the map view loads Then all site geofences render as overlays with site names within 2 s on a typical 4G connection And the app identifies the current site as follows:
- If inside exactly one geofence: select that site
- If inside multiple: select the one with smallest distance to centroid or highest priority
- If inside none but within R of any: mark 'nearby' and allow navigation
And the app displays 'You are in [Site Name]' or 'Outside all sites' status And the user can tap a site to view details and permissible area
SLA and Payroll Calculation from Verified Presence
Given verified presence records for a user on a project/day When calculating SLA and payroll Then SLA 'On-Site Start' = timestamp of first status in {'Verified','Exception-Approved'} check-in within geofence And SLA 'On-Site End' = timestamp of last verified or exception-approved check-out within geofence And payroll shift duration = sum of verified on-site intervals; records failing anti-spoofing are excluded unless Exception-Approved And recalculation occurs within 60 s of any presence record create/update And payroll export includes: presenceStatus, reasonCode (if any), check-in/out timestamps, total duration, siteId, accuracy
Attendance Export and Payroll Integration
"As an administrator, I want payroll-ready attendance exports so that I can process payroll and meet compliance with minimal manual work."
Description

One-click exports of approved attendance data in CSV, XLSX, and PDF with job codes, roles, overtime, breaks, and approval status. Schedules recurring deliveries via email or SFTP and offers an API endpoint. Supports locale-specific rules (daily/weekly overtime), customer-specific column mappings, and connectors for ADP, Gusto, and QuickBooks Time with audit metadata included.

Acceptance Criteria
One-Click Export of Approved Attendance to CSV/XLSX/PDF
- Given a manager with export permission and an active filter (date range, site/project), When they click Export and choose CSV, XLSX, or PDF, Then a file is generated and downloaded within 60 seconds containing only Approved records matching the filter and timezone selection.
- Given a generated export, When inspected, Then the filename follows {org}_{site}_{dateFrom}-{dateTo}_{timestamp}.{ext} and the export includes a header row.
- Given CSV export, When opened, Then it is UTF-8 encoded, comma-separated, double-quoted where needed, with CRLF line endings and no truncated rows for up to 25,000 records.
- Given XLSX export, When validated, Then it conforms to OpenXML, preserves data types (dates, numbers), and contains a single worksheet named "Attendance" with freeze-pane on header.
- Given PDF export, When opened, Then it renders a tabular report with page numbers, generated timestamp, filter summary, and fits standard Letter/A4 without clipped columns.
- Given no Approved records in the selected range, When exporting, Then the system produces an empty template with headers and a "No approved records" note in PDF, and zero data rows in CSV/XLSX.
Required Fields in Export: Job Codes, Roles, Overtime, Breaks, Approval Status
- Given any export format, When inspecting columns, Then it includes at minimum: employeeId, employeeName, role, jobCode, siteName, shiftDate, clockIn, clockOut, breakMinutes, totalHours, regularHours, overtimeHours, approvalStatus, approverName, approvedAt.
- Given a sample record, When validating data types, Then timestamps are ISO 8601 with timezone offset, durations are numeric with 2 decimal places (hours) and breakMinutes is an integer >= 0.
- Given each row, When calculating regularHours + overtimeHours, Then it equals totalHours within a tolerance of 0.01 hours.
- Given a record missing jobCode or role, When exporting, Then the corresponding cell is blank but the row is included and the file is valid.
- Given records with multiple breaks, When exporting, Then breakMinutes equals the sum of all breaks on the shift.
- Given approval state changes, When exporting Approved-only data, Then only the latest Approved version per shift is included, excluding Pending/Rejected.
Scheduled Recurring Deliveries via Email and SFTP
- Given an admin creates a schedule, When saving, Then they can choose frequency (daily/weekly/monthly), execution time, and timezone, and the next run time is computed correctly.
- Given an email delivery schedule, When it runs, Then recipients receive one message per schedule with the selected format attached (or a secure download link if size > 20 MB), subject/body templating applied, and delivery logged with a unique runId.
- Given an SFTP delivery schedule, When Test Connection is clicked, Then the system validates host, port, path, username, and SSH key successfully before allowing save.
- Given a scheduled run, When a transient failure occurs, Then up to 3 retries with exponential backoff are attempted and on final failure an alert email is sent to owners with runId and error details.
- Given a schedule configured for Approved-only data, When no Approved records exist for the window, Then the run is skipped with a "No data" log entry (no email/SFTP upload) if skip-empty is enabled.
- Given a schedule uses a saved column mapping and locale, When it runs, Then output honors that mapping and locale rule set.
- Given multiple schedules overlap, When both run, Then deliveries are independent and file names remain unique via timestamp + runId.
Attendance Export API Endpoint
- Given a valid API client, When calling the endpoint with Bearer token, Then the service returns 200 with attendance data and 401/403 for missing/invalid credentials.
- Given query parameters date_from, date_to (ISO 8601), site_id, role, approval_status, and timezone, When supplied, Then results are filtered accordingly and timestamps rendered in the requested timezone.
- Given large datasets, When requesting data, Then server supports cursor-based pagination (page_size <= 1000) and returns next_cursor until completion.
- Given rate limiting, When exceeding 100 requests per minute per token, Then the API returns 429 with Retry-After header.
- Given response schema, When validated, Then it matches the published OpenAPI spec including fields: employeeId, jobCode, role, shiftDate, clockIn, clockOut, breakMinutes, totalHours, regularHours, overtimeHours, approvalStatus, approverName, approvedAt.
- Given idempotent export generation, When the same parameter set is requested within 15 minutes, Then the API returns the same dataset checksum and metadata (generatedAt, rowCount).
Locale-Specific Overtime Rules (Daily/Weekly)
- Given an organization/site locale configuration, When set to US-CA, Then daily overtime applies after 8 hours and double-time after 12 hours per day, reflected in overtimeHours.
- Given locale set to US-Federal, When calculating, Then only weekly overtime applies after 40 hours per week with daily overtime disabled.
- Given locale set to CA-ON, When calculating, Then weekly overtime applies after 44 hours per week and daily overtime is disabled.
- Given shifts spanning DST transitions, When calculating totals, Then hours are computed using absolute timestamps so totalHours remains correct and rules apply to actual hours worked.
- Given unpaid breaks, When present, Then breakMinutes are subtracted before evaluating overtime thresholds.
- Given a nearest-5-minute rounding rule, When it is enabled for the locale, Then clockIn/clockOut are rounded before overtime evaluation and rounding details are stored in audit metadata.
- Given exports and API, When produced, Then totals reflect the configured locale rules effective at the shift date.
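The locale rules above reduce to a small hours-splitting routine; a sketch whose thresholds mirror the criteria (configuration examples, not payroll guidance), applied to one worker's week with breaks already deducted:

```python
# Sketch: split weekly hours into regular/overtime/double-time per locale rules.
LOCALE_RULES = {
    "US-CA":      {"daily_ot": 8,    "daily_dt": 12,   "weekly_ot": None},
    "US-Federal": {"daily_ot": None, "daily_dt": None, "weekly_ot": 40},
    "CA-ON":      {"daily_ot": None, "daily_dt": None, "weekly_ot": 44},
}

def split_hours(daily_hours: list, locale: str) -> dict:
    rules = LOCALE_RULES[locale]
    regular = overtime = double = 0.0
    for h in daily_hours:                            # one entry per worked day
        if rules["daily_ot"] is not None:
            double += max(0.0, h - rules["daily_dt"])
            overtime += max(0.0, min(h, rules["daily_dt"]) - rules["daily_ot"])
            regular += min(h, rules["daily_ot"])
        else:
            regular += h
    if rules["weekly_ot"] is not None:               # weekly threshold on regular time
        over = max(0.0, regular - rules["weekly_ot"])
        overtime += over
        regular -= over
    return {"regularHours": round(regular, 2), "overtimeHours": round(overtime, 2),
            "doubleTimeHours": round(double, 2)}
```

For example, split_hours([13], "US-CA") yields 8 regular, 4 overtime, and 1 double-time hour, summing to the 13 hours worked.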
Customer-Specific Column Mappings
- Given an admin creates a column mapping template, When saving, Then they can rename headers, reorder columns, include/exclude optional fields, and the template is versioned with a unique id.
- Given required fields, When attempting to save a mapping without employeeId, shiftDate, totalHours, or approvalStatus, Then validation prevents save with clear error messages.
- Given a mapping is selected for an export, When generating, Then the output uses the mapped header names and column order and excludes unchecked fields.
- Given a preview request, When run, Then a 20-row preview is rendered using the selected mapping and locale without writing a file.
- Given header name conflicts, When duplicate names are assigned, Then validation blocks save until resolved.
- Given scheduled exports and API requests referencing a mapping id, When executed, Then they use the current published version of that mapping and log the mapping id and version in metadata.
ADP, Gusto, and QuickBooks Time Connectors with Audit Metadata
- Given connector setup, When an admin authorizes ADP, Gusto, or QuickBooks Time, Then OAuth flow completes, tokens are stored encrypted, and connection status is Active.
- Given approved attendance, When a sync runs, Then only Approved entries are pushed, with idempotency ensured via externalId and duplicate submissions rejected gracefully.
- Given provider-specific schemas, When mapping fields, Then role/jobCode and hours (regular/overtime) map to provider earnings codes as configured per connector.
- Given audit metadata requirements, When data is transmitted, Then each record includes exportedAt (ISO 8601), approvedBy (userId), approvalTimestamp, source ("RoofLens Live Rollcall"), importBatchId, and checksum.
- Given transient provider errors, When encountered, Then the system retries per record up to 3 times with backoff and moves failures to a dead-letter queue with error detail.
- Given sync monitoring, When viewing the connector dashboard, Then last run status, counts (created/updated/failed), and a downloadable error report are available.
- Given SLAs, When approvals occur during business hours, Then connector push completes within 5 minutes at the 95th percentile.
SLA Presence Forecasting
"As an operations director, I want SLA risk forecasts based on on-site presence so that I can proactively adjust staffing and avoid penalties."
Description

Forecasts SLA adherence using real-time presence and historical crew-hour trends versus plan. Highlights under-staffed intervals, predicts risk to milestones, and recommends staffing adjustments. Incorporates weather data and job complexity tags to refine projections and surfaces confidence bands on the timeline and in downloadable reports.

Acceptance Criteria
Real-time Forecast Update on Presence Events
Given an active job with a published staffing plan and Live Rollcall enabled And at least one crew member currently on site When a first-scan or last-scan event is received from the field Then the SLA forecast is recomputed within 10 seconds of event receipt And the timeline updates to reflect new actual vs plan headcount for the affected interval And the forecast timestamp shows the latest recompute time in the user’s timezone And the system logs the recompute with job ID, event ID, latency, and model version
Under-Staffed Interval Detection and Highlighting
Given a planned crew-hour curve and actual presence aggregated in 15-minute intervals When actual crew-hours are below plan by more than 10% or by more than 0.5 FTE in any interval Then that interval is highlighted in red with a gap badge showing absolute and percentage shortfall And a tooltip displays planned FTE, actual FTE, and delta values And the summary panel shows the count of under-staffed intervals and total deficit hours for the selected date range
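Per 15-minute interval, the detection rule is a one-line predicate; a sketch with planned and actual expressed as FTE counts:

```python
# Sketch of the under-staffing test: >10% below plan or >0.5 FTE short.
def understaffed(planned_fte: float, actual_fte: float) -> bool:
    shortfall = planned_fte - actual_fte
    return shortfall > 0.5 or (planned_fte > 0 and shortfall / planned_fte > 0.10)
```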
Milestone Risk Prediction and Alerts
Given project milestones with planned dates and required cumulative crew-hours When the forecasted cumulative crew-hours indicate a risk of missing any milestone Then the system computes a probability of miss using historical variance and current trend And assigns a risk level: Low (<30%), Medium (30–60%), High (>60%) And triggers an in-app alert and optional email for High risk within 1 minute of computation And the milestone card shows the top three contributing factors with percentage attribution
Staffing Adjustment Recommendations and What-If
Given a milestone or SLA target marked Medium or High risk When the user opens the Recommendations panel Then the system proposes the minimal additional crew by role, start time, and duration to reduce risk below 20% And displays the expected new probability of on-time completion and updated confidence band range And allows a one-click what-if apply that updates the timeline as a hypothetical plan without altering the live plan And recommendations respect role qualifications and hour limits configured for the job
Weather and Job Complexity Integration
Given a job with a geocoded location and assigned job complexity tags When the weather provider returns precipitation, wind, and temperature forecasts for the job window Then productivity coefficients are adjusted per configured rules derived from historical data And a data freshness indicator shows last weather update time and source And if the weather API is unavailable for more than 5 minutes, the system falls back to last known values and flags forecast confidence as degraded And complexity tags are included in model inputs and visible in the forecast metadata panel
Confidence Bands on Timeline and Reports
Given the SLA forecast model outputs a completion time distribution When the timeline is rendered Then 50%, 80%, and 95% confidence bands are drawn with distinct styles and labeled in the legend And hovering displays percentile time and probability values within ±1 minute of the model outputs And downloadable PDF and CSV reports include the same percentile values and legend And report values match on-screen values within ±0.5%

Preflight Attest

Role‑based safety and compliance checklists (Part 107, PPE, access permission) are signed with the user’s passkey at scan. Non‑compliant steps trigger guidance or require a supervisor override. Attestations attach to the job and Custody Ledger, reducing risk and audit friction.

Requirements

Role-Based Preflight Checklist Engine
"As a pilot in command, I want a preflight checklist tailored to my role and job context so that I can complete all required safety and compliance steps efficiently and consistently."
Description

Provides configurable, standards-aligned checklists that adapt by user role (pilot in command, visual observer, supervisor), account policies, job type, and location. Supports FAA Part 107 items, PPE verification, property access permissions, and custom organization-specific items. Includes conditional logic, required/optional flags, versioning with effective dates, localization, and mobile-friendly UI with offline capability. Integrates with RoofLens job context to prefill known fields (job address, client, permit), records per-item responses, evidence attachments (photos, files), timestamps, and geolocation. Ensures repeatable, auditable preflight processes that reduce risk and speed on-site setup.

Acceptance Criteria
Role-Based and Policy-Driven Checklist Assembly
Given a user with role "Pilot in Command" assigned to a job with account policy set "A", job type "Residential Inspection", and location within the United States When the preflight checklist is generated for the job Then the engine includes all applicable FAA Part 107 items, PPE checks, property access permission checks, and organization-specific items that apply to that role, policy, job type, and location And items not applicable to the user’s role or policy are excluded And required/optional flags reflect the governing account policy And the assembled checklist is stored as an instance linked to the job with a unique checklist instance ID
Conditional Logic Evaluation for Items
Given a checklist template with a controlling item "Waiver present?" and a dependent item "Validate waiver ID" that is visible only when the controlling item equals "Yes" When the user answers "Waiver present?" = "Yes" Then the dependent item becomes visible and required if configured as required And when the user changes the controlling answer to "No", the dependent item becomes hidden and its prior response and attachments are cleared from the active instance while the change is preserved in the audit log And conditional logic evaluation occurs within 200 ms on a modern mobile device (baseline: 2023 mid-tier)
Job Context Prefill and Field Protections
Given a job with address, client name, and permit number in RoofLens When the preflight checklist loads Then the corresponding checklist fields are prefilled with the job address, client name, and permit number And system-prefilled fields are read-only unless a policy explicitly allows edits And any permitted edits to prefilled fields are logged with editor user ID, timestamp, prior value, and new value
Per-Item Response Recording with Evidence, Timestamps, and Geolocation
Given a device with location services enabled and camera/file access granted When the user records a response for any checklist item Then the system stores the response value, responder user ID, device timestamp in ISO 8601 UTC, and GPS coordinates with reported accuracy And if an attachment (photo/file) is added, the file is stored and linked with size, MIME type, SHA-256 hash, and capture timestamp And each saved response is immutable after final sign-off; subsequent corrections require a supervisor override entry creating a new version with full audit lineage
Offline Execution and Sync Integrity
Given the device is offline at the job site When the user completes all required checklist items and later regains connectivity Then the full checklist instance, including responses, metadata, and attachments up to 150 MB total, syncs to the server within 60 seconds of network availability And any sync conflict is resolved per item using last-write-by-signer with a visible resolution log attached to the instance And if sync fails, the user is shown a retriable error and the data remains locally persisted until successful sync
Passkey Attestation, Non-Compliance Handling, and Supervisor Override
Given one or more items are marked non-compliant by the user When the user attempts to finalize the preflight Then the system blocks finalization and presents guidance for each non-compliant item And if account policy permits, a supervisor may perform a passkey authentication to override specific non-compliant items after providing a reason And upon successful finalization, the system generates a signed attestation containing the checklist hash, signer user IDs, timestamps, geolocation, and override details, and attaches it to the job and the Custody Ledger
Localization and Versioning with Effective Dates
Given the user’s locale is set to Spanish (es-US) and today’s date is within a template version’s effective period When the checklist loads Then all checklist labels and item texts display in Spanish with English fallback only where translations are missing And the selected template version is the latest published version whose effective date is on or before today and not sunset And the checklist instance records the template version ID and locale used for auditability
Passkey Attestation & Identity Binding
"As a safety manager, I want preflight attestations signed with passkeys so that I can trust who signed them and defend our process during audits or disputes."
Description

Requires users to sign completed preflight checklists using platform-supported passkeys (WebAuthn/FIDO2) with device biometrics, binding the attestation to their verified identity. Captures and stores a signed payload including user ID, role, job ID, checklist version, per-item outcomes, timestamp, device info, and GPS fix. Performs server-side signature verification, prevents replay, and flags discrepancies. Provides graceful fallback when passkeys are unavailable (admin-configurable) while maintaining an audit trail. Enhances non-repudiation and downstream audit confidence.

Acceptance Criteria
Successful Passkey Attestation with Identity Binding
Given a verified RoofLens user with a registered WebAuthn passkey and a completed preflight checklist for a specific job And userVerification is required for the authenticator When the user initiates signing of the checklist attestation Then the client creates a WebAuthn assertion using a server-issued, job-bound challenge that includes the checklist hash And the assertion indicates uv=true and originates from an allowed RP ID/origin And the server verifies the signature using the stored public key for the credentialId And the server persists an attestation record containing userId, role, jobId, checklistVersion, perItemOutcomes, timestamp (UTC), deviceInfo, gpsFix, credentialId, signature, challengeId And the attestation is attached to the job and appended to the Custody Ledger And the API returns 201 with attestationId
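For context, the server-side checks behind this scenario follow the standard WebAuthn assertion-verification steps: validate the client data (type, challenge, origin), confirm the uv flag in the authenticator data, then verify the signature over authenticatorData || SHA-256(clientDataJSON) with the stored public key. The dependency-free TypeScript sketch below assumes the credential public key was stored as PEM at registration and omits the rpIdHash and signature-counter checks (noted in comments), so treat it as an outline rather than a complete verifier:

```typescript
import { createHash, verify as cryptoVerify } from "node:crypto";

// Illustrative input shape; in practice these come from the client's
// navigator.credentials.get() response, base64url-decoded server-side.
interface AssertionInput {
  clientDataJSON: Buffer;     // UTF-8 JSON produced during the ceremony
  authenticatorData: Buffer;  // rpIdHash (32 B) + flags (1 B) + signCount (4 B) + ...
  signature: Buffer;          // covers authenticatorData || SHA-256(clientDataJSON)
  publicKeyPem: string;       // assumption: credential public key stored as PEM
}

function verifyAssertion(
  input: AssertionInput,
  expectedChallenge: string, // base64url, server-issued, job-bound, includes the checklist hash
  expectedOrigin: string     // allowed RP origin, e.g. "https://app.example.com"
): boolean {
  const clientData = JSON.parse(input.clientDataJSON.toString("utf8"));

  // 1. Ceremony type, origin, and the server-issued challenge must all match.
  if (clientData.type !== "webauthn.get") return false;
  if (clientData.origin !== expectedOrigin) return false;
  if (clientData.challenge !== expectedChallenge) return false;

  // 2. Flags byte sits at offset 32; bit 0x04 is "user verified" (uv=true).
  if ((input.authenticatorData[32] & 0x04) === 0) return false;

  // 3. The authenticator signs authenticatorData || SHA-256(clientDataJSON).
  const clientDataHash = createHash("sha256").update(input.clientDataJSON).digest();
  const signedData = Buffer.concat([input.authenticatorData, clientDataHash]);
  // A complete verifier would also check rpIdHash === SHA-256(rpId) and
  // enforce a monotonically increasing signature counter per credentialId.
  return cryptoVerify("sha256", signedData, input.publicKeyPem, input.signature);
}
```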
Complete Signed Payload Capture and Storage
Given a successfully verified attestation Then the stored payload includes exactly these fields: userId, role, jobId, checklistVersion, perItemOutcomes (each with itemId, result, notes), timestamp (ISO 8601 UTC), deviceInfo (authenticator AAGUID, platformType, userAgent), gpsFix (lat, lon, accuracyMeters), credentialId, signature, challengeId, checklistHash And gpsFix.accuracyMeters <= 25 when location permission is granted; otherwise gpsFix is null and reason="permission_denied" is recorded And timestamp is within 2 minutes of server verification time And SHA-256(perItemOutcomes + checklistVersion) equals checklistHash referenced in the challenge
Replay Prevention with Single‑Use, Time‑Bound Challenges
Given a challengeId issued for attestation When the same challengeId is used again for any subsequent request Then the server rejects the request with 409 Conflict and does not create or update any attestation And the challengeId automatically expires 120 seconds after issuance And any accepted attestation is linked to a unique, consumed challengeId marked usedAt with timestamp
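A single-use, time-bound challenge store can be as small as the sketch below. The in-memory `Map` is an assumption for illustration only; production would need a shared store so every API node observes consumption state:

```typescript
import { randomBytes } from "node:crypto";

interface Challenge { value: string; checklistHash: string; issuedAt: number; usedAt?: number }
const challenges = new Map<string, Challenge>();
const TTL_MS = 120_000; // expires 120 seconds after issuance, per the criteria

function issueChallenge(checklistHash: string): { challengeId: string; value: string } {
  const challengeId = randomBytes(16).toString("hex");
  const value = randomBytes(32).toString("base64url"); // bound to the checklist hash via the record
  challenges.set(challengeId, { value, checklistHash, issuedAt: Date.now() });
  return { challengeId, value };
}

// Succeeds exactly once per challengeId; the caller maps "replayed" to
// 409 Conflict and "expired"/"unknown" to verification failure.
function consumeChallenge(challengeId: string): Challenge | "replayed" | "expired" | "unknown" {
  const c = challenges.get(challengeId);
  if (!c) return "unknown";
  if (c.usedAt !== undefined) return "replayed";
  if (Date.now() - c.issuedAt > TTL_MS) return "expired";
  c.usedAt = Date.now(); // mark consumed so any second use hits the replay branch
  return c;
}
```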
Discrepancy Detection and Flagging on Identity or Payload Mismatch
Given a verified WebAuthn assertion whose payload conflicts with server truth data (e.g., userId not assigned to the job, role mismatch, job status closed, or checklistVersion outdated) When the attestation is processed Then the system stores the attestation with status="flagged" and discrepancyCodes detailing each conflict And the job view displays a prominent "Attestation Flagged" banner with reasons And the Custody Ledger entry includes the discrepancyCodes and requires supervisor override to clear And the API responds 202 Accepted with flag details
Admin‑Configurable Graceful Fallback without Passkey
Given organizational policy allows fallback and the user cannot complete a passkey attestation (no credential, unsupported device, or authenticator failure) When the user selects an allowed fallback method Then the system records fallbackMethod, reason, and identity of approver (if supervisor co-sign is required) And captures the same payload fields (userId, role, jobId, checklistVersion, perItemOutcomes, timestamp, deviceInfo, gpsFix) and binds them to the user’s verified account And marks trustLevel="fallback" on the attestation and Custody Ledger entry And if policy disallows fallback, the system blocks submission and shows guidance to register a passkey; no attestation record is created
Server‑Side Signature Verification and Failure Handling
Given an attestation attempt with invalid signature, mismatched rpId/origin, missing uv=true, or expired/unknown challengeId When the server performs verification Then the server rejects the request with 400/401, does not persist an attestation, and does not attach anything to the job or Custody Ledger And logs a security event with failureReason, userId (if known), jobId (if provided), deviceInfo, and IP And the API response includes a machine-readable error code and remediation guidance
Real-Time Compliance Validation & Gating
"As a drone operator, I want the system to validate compliance as I complete the checklist so that I don’t start a flight that violates rules or company policy."
Description

Automatically evaluates checklist responses against FAA Part 107 rules and account policies in real time, including controlled airspace requirements (LAANC), daylight/twilight restrictions, weather minima, NOTAMs/TFRs, battery health, and site access authorization. Provides inline guidance to remediate issues, blocks job start and image capture when critical items are non-compliant, and restricts capture modules to read-only until cleared. Supports offline prechecks with cached advisories and queues validations for sync when back online. Admin toggles control airspace/weather provider integrations and policy thresholds.

Acceptance Criteria
Block Job Start in Controlled or Restricted Airspace Without Clearance
Given the planned flight area intersects controlled airspace or an active NOTAM/TFR And no valid authorization token (e.g., LAANC/waiver) is attached to the job When the user attempts to Start Job or initiate Image Capture Then the system calls the configured airspace provider and evaluates authorization within 3 seconds And if authorization is absent or denied, Start and Capture actions are blocked and capture modules are read-only And inline guidance is displayed with steps to obtain authorization and a Retry Validation action And the block is lifted only after a valid authorization is verified and stored on the job And the decision, provider response, and timestamps are recorded to the Custody Ledger
Enforce Daylight/Twilight Operations Policy
Given the site location and current/scheduled time map to civil daylight, civil twilight, or night per FAA definitions And the account policy is configured for Daylight only or Twilight allowed with anticollision lighting When the user attempts to Start Job Then the system determines sun state using the configured provider or on-device ephemeris within 1 second And if state is Night and policy is Daylight only, the start is blocked as critical and capture modules remain read-only And if state is Twilight and policy requires anticollision lighting, an attestation step is presented and start remains blocked until attested (or supervisor override where enabled) And outcomes and timestamps are recorded to the job and Custody Ledger
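The sun-state decision reduces to classifying solar elevation and applying policy. The sketch below assumes the elevation comes from an ephemeris library or the configured provider and uses the standard -6° civil-twilight boundary as an approximation of the FAA definition; the policy names and gate outcomes are illustrative:

```typescript
type SunState = "Daylight" | "CivilTwilight" | "Night";

// Classify from solar elevation in degrees above the horizon.
function classifySunState(solarElevationDeg: number): SunState {
  if (solarElevationDeg >= 0) return "Daylight";
  if (solarElevationDeg >= -6) return "CivilTwilight";
  return "Night";
}

// Gate decision per the scenario: Night is treated as a critical block under
// either policy shown here (an assumption for non-daylight operations), and
// Twilight requires an anticollision-lighting attestation when allowed.
function gateForPolicy(
  state: SunState,
  policy: "DaylightOnly" | "TwilightAllowedWithLighting",
  lightingAttested: boolean
): "allow" | "block" | "attest" {
  if (state === "Daylight") return "allow";
  if (state === "Night" || policy === "DaylightOnly") return "block";
  return lightingAttested ? "allow" : "attest";
}
```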
Validate Weather Minima Against Account Thresholds
Given current site weather is retrieved from the configured provider (METAR/TAF or equivalent) within the last 10 minutes And account thresholds exist for sustained wind, gusts, visibility, ceiling, and precipitation When Preflight Validation runs or the user attempts to Start Job Then the system compares observed values to thresholds And if any value exceeds a critical threshold, Start and Capture are blocked and inline remediation guidance (wait conditions, adjust plan) with Retry Validation is shown And if values are within a caution band where supervisor approval is required, the system requests supervisor override before allowing start And the evaluation details and decision are recorded to the Custody Ledger
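Once observations are parsed, the threshold comparison itself is simple. The sketch below shows one plausible shape; the field names and the percentage-based caution band are illustrative assumptions, since the spec leaves the band definition to account configuration:

```typescript
interface WeatherObs { windKts: number; gustKts: number; visibilitySm: number; ceilingFt: number; ageMinutes: number }
interface Thresholds { maxWindKts: number; maxGustKts: number; minVisibilitySm: number; minCeilingFt: number; cautionBandPct: number }

type WeatherDecision = "ok" | "caution" | "blocked" | "stale";

function evaluateWeather(obs: WeatherObs, t: Thresholds): WeatherDecision {
  if (obs.ageMinutes > 10) return "stale"; // observation must be from the last 10 minutes
  const critical =
    obs.windKts > t.maxWindKts || obs.gustKts > t.maxGustKts ||
    obs.visibilitySm < t.minVisibilitySm || obs.ceilingFt < t.minCeilingFt;
  if (critical) return "blocked"; // Start/Capture blocked with remediation guidance
  // Caution band: close enough to a wind limit that supervisor approval is needed.
  const scale = 1 - t.cautionBandPct / 100;
  const nearLimit = obs.windKts > t.maxWindKts * scale || obs.gustKts > t.maxGustKts * scale;
  return nearLimit ? "caution" : "ok";
}
```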
Battery Health Gate Prior to Capture
Given the flight battery telemetry is available from the connected aircraft or battery When Preflight Validation runs Then State of Charge must be >= the policy minimum (default 30%), cell voltage delta <= the policy maximum (default 0.05V), temperature within policy range, and cycle count <= policy limit And if any metric fails, Image Capture is disabled, modules remain read-only, and guidance to swap/charge/cool is displayed with Retry Validation And passing all metrics lifts the gate and allows capture And the measured values and decision are logged to the job
Require Site Access Authorization Before Capture
Given the job is marked as requiring site access authorization per account policy When the user completes the preflight attestation Then the user must attach or reference a permission artifact (uploaded document, signed digital form, or scanned QR) And the system validates the artifact freshness against policy, matches it to the job site (address/parcel), and verifies signer role And if missing or invalid, Start and Capture remain blocked until a valid artifact is provided or a supervisor override is granted And all checks and outcomes are recorded to the Custody Ledger
Offline Prechecks with Cached Advisories and Deferred Sync
Given the device is offline during Preflight Validation When validations require airspace, sun state, or weather data Then the system uses cached advisories not older than policy timeouts (e.g., airspace <=24h, weather <=2h, ephemeris local) And items with stale or missing critical data are marked Pending Validation and keep Start and Capture blocked; planner UI may be accessed read-only And non-critical items may proceed provisionally but Image Capture remains disabled until connectivity returns and validations complete And upon reconnection, queued validations auto-run, update gates in real time, and notify the user of results And offline/online transitions and final decisions are logged
Admin Toggles for Provider Integrations and Policy Thresholds
Given an admin updates airspace/weather provider toggles or policy thresholds in Settings When the changes are saved Then new configurations apply immediately to subsequent validations and to any in-progress preflight upon next validation attempt And if an integration is disabled, the system falls back to manual attestation and blocks critical checks until acceptable manual proof is provided And all configuration changes (who, what, when, old→new) are recorded to the audit log and reflected in validation decisions
Supervisor Override Workflow
"As a field supervisor, I want a controlled override process for edge cases so that necessary work can proceed without compromising our compliance record."
Description

Implements a structured override process for non-compliant checklist items, requiring supervisor selection, reason codes, risk acknowledgment text, and passkey signature from the approver. Supports time-bound and scope-limited overrides (single job, single flight, or duration), captures communication notes, and attaches all artifacts to the attestation. Triggers notifications for overrides and auto-creates follow-up tasks for post-incident review. Ensures exceptions are controlled, documented, and audit-ready.

Acceptance Criteria
Initiate Override on Non-Compliant Checklist Item
- Given a Preflight Attest checklist item is marked non-compliant, When the operator selects "Request Supervisor Override", Then the system requires selecting a supervisor with role=Supervisor and active account status.
- Given no eligible supervisor is available in the org, When the operator attempts to request an override, Then the system blocks submission, displays contextual guidance, and logs the blocked attempt with operatorId, jobId, checklistItemId, and timestamp.
- Given an override request is initiated, When it is saved, Then the system assigns a unique overrideSessionId and records jobId, flightId (if present), checklistItemId, operatorId, and createdAt.
Supervisor Authentication and Authorization via Passkey
- Given a supervisor is selected for an override request, When the supervisor chooses Approve, Then the system prompts for a WebAuthn passkey and approves only upon a successful cryptographic assertion bound to the supervisor account.
- Given passkey verification fails or is canceled, When the supervisor attempts approval, Then the override remains Unapproved and the system records status=Denied, reason=AuthFailed, and failedAt.
- Given more than 3 failed passkey attempts occur within 5 minutes, When a further attempt is made, Then the system rate-limits further attempts for 5 minutes and emits an audit log entry with rateLimit=true.
Capture Reason Codes, Risk Acknowledgment, and Communication Notes
- Given a supervisor is approving an override, When they proceed to finalize approval, Then the UI requires selection of a reasonCode from the managed list and acknowledgment of riskAckText via an explicit checkbox; both are mandatory.
- Given approval details are entered, When the supervisor adds communication notes, Then the system accepts 1–2000 characters and records notes with authorId and timestamp.
- Given an override is approved, When it is saved, Then the system persists reasonCode, riskAckVersion, riskAckAcceptedAt, notes (if any), supervisorId, and approval timestamp.
Time-Bound and Scope-Limited Overrides
- Given an override is being approved, When the supervisor selects scope, Then the system requires one of: Scope=Single Flight (current flight only), Scope=Single Job (all flights in this job), or Scope=Duration (time-bound access).
- Given Scope=Duration is selected, When start and end times are provided, Then the system enforces endTime > startTime and records both; the override becomes ineligible after endTime.
- Given an override has expired or is out-of-scope for a checklist item, When an operator attempts to reuse it, Then the system prevents reuse and prompts for a new override request.
Notifications and Custody/Audit Attachment
- Given an override is approved or denied, When the decision is saved, Then the system sends in-app and email notifications to the requester and job watchers within 60 seconds, including jobId, flightId (if any), checklistItemId, decision, supervisorId, reasonCode (if approved), scope, and expiry (if applicable).
- Given an override decision is saved, When the attestation is generated or updated, Then the attestation and job Custody Ledger each include an immutable entry with overrideSessionId, decision, supervisor signature reference, reasonCode, riskAck, notes, scope, and timestamps, retrievable via API and UI.
- Given the attestation PDF is downloaded, When the override exists, Then the PDF contains an "Override" section summarizing supervisor, decision, reasonCode, scope, and timestamps.
Auto-Creation and Tracking of Follow-Up Tasks
- Given an override is approved, When the decision is saved, Then the system auto-creates a follow-up task assigned to the Compliance queue, linked to the job and overrideSessionId, with dueDate set to 3 business days after override expiry or flight completion (whichever is later).
- Given task creation fails due to a transient error, When the system attempts to create the task, Then it retries up to 3 times with exponential backoff (see the sketch after this list) and, on final failure, notifies administrators and logs the error with a correlationId.
- Given the follow-up task is updated or completed, When its status changes, Then the override record reflects the linked taskId and current task status in the UI and API.
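A minimal backoff helper consistent with the retry criterion above; the 3-attempt cap comes from the requirement, while the 1-second base delay and the task-creation call in the usage comment are illustrative assumptions:

```typescript
// Generic retry-with-exponential-backoff around an async operation.
async function withBackoff<T>(op: () => Promise<T>, maxAttempts = 3, baseDelayMs = 1_000): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        const delayMs = baseDelayMs * 2 ** (attempt - 1); // 1 s, 2 s, 4 s, ...
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  // Final failure: the caller notifies administrators and logs a correlationId.
  throw lastError;
}

// Usage (hypothetical task-creation call):
// await withBackoff(() => createFollowUpTask(jobId, overrideSessionId));
```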
Attestation to Job Record & Custody Ledger
"As an insurance adjuster, I want verifiable preflight attestations linked to each job so that I can quickly validate safety and access compliance during claim reviews."
Description

Automatically attaches the signed attestation, evidence, and validation results to the RoofLens job record and writes an immutable digest to the Custody Ledger. Generates verifiable artifacts (PDF and JSON) with checksum and QR code for third-party verification without platform access. Exposes API endpoints and webhooks for insurers/customers to retrieve attestations, and enforces data retention policies with legal hold support. Provides a timeline view within the job showing who did what and when.

Acceptance Criteria
Attach Signed Attestation and Evidence to Job Record
Given an authenticated user with permission "Attest:Submit" completes a Preflight Attest for job {jobId} with required evidence and validation results When the user submits the attestation Then the job record displays an Attestations section containing the signed attestation JSON, evidence file list with filenames and SHA-256 hashes, validation summary per checklist item, submitter identity, and ISO-8601 timestamp And the attestation JSON includes a WebAuthn signature object referencing the user's credentialId and signature counter And downloadable links for the PDF and JSON artifacts are present in the UI and retrievable via API GET /v1/jobs/{jobId}/attestations and GET /v1/attestations/{attestationId} And attachments are immutable; any subsequent edit creates a new attestation version and does not modify the prior record
Write Immutable Digest to Custody Ledger
Given attestation {attestationId} for job {jobId} is finalized When the system computes a JCS-canonicalized JSON and its SHA-256 digest Then a Custody Ledger entry is written with fields: ledgerId, jobId, attestationId, digest (hex), actorUserId, ISO-8601 timestamp, and platform signature, and the entry is marked immutable And any attempt to update or delete the ledger entry is rejected with 409 Conflict; corrections are recorded only as new entries that reference the prior entry via a supersedes field And recomputing the digest from the published JSON artifact matches the ledger digest
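Recomputability is the key property here: anyone holding the published JSON artifact must be able to reproduce the ledger digest. A sorted-key canonicalization plus SHA-256, as sketched below, approximates JCS (RFC 8785) and is adequate for string/integer payloads; a production system should use a full JCS implementation to handle number-formatting edge cases:

```typescript
import { createHash } from "node:crypto";

// Recursive sorted-key serialization — a JCS approximation, not a full
// RFC 8785 implementation.
function canonicalize(value: unknown): string {
  if (value === null || typeof value !== "object") return JSON.stringify(value);
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  const keys = Object.keys(value as object).sort();
  const body = keys
    .map((k) => `${JSON.stringify(k)}:${canonicalize((value as Record<string, unknown>)[k])}`)
    .join(",");
  return `{${body}}`;
}

// Ledger digest: hex SHA-256 over the canonical form, so that recomputing it
// from the published JSON artifact matches the ledger entry.
function ledgerDigest(attestation: object): string {
  return createHash("sha256").update(canonicalize(attestation), "utf8").digest("hex");
}
```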
Generate Verifiable PDF and JSON Artifacts with Checksum and QR
Given an attestation is finalized When artifacts are generated Then the JSON artifact includes: schemaVersion, jobId, attestationId, checklist results, evidence references with SHA-256 hashes, webauthnSignature, and a checksum equal to the SHA-256 of the canonical JSON And the PDF artifact includes the same key metadata and embeds a QR code encoding a verification URL containing the ledger digest And both artifacts are named RoofLens_{jobId}_{attestationId}_{yyyyMMddHHmmssZ}.{json|pdf} and become available within 10 seconds of submission And any tampering with either artifact causes checksum verification to fail
Public Verification via QR Without Platform Access
Given a third party scans the QR code on the PDF or visits the verification URL When no authentication is provided Then the verification endpoint responds 200 with fields: verified (true|false), ledger digest, attestation timestamp, and minimal metadata without PII And the endpoint allows manual upload of a PDF/JSON artifact and returns verified true when the recomputed checksum matches the ledger digest, else verified false with reason digest_mismatch And invalid input returns 400; abuse is rate-limited to 60 requests/minute per IP with 429 on excess
API Endpoints and Webhooks for Attestation Retrieval
Given an API client holds an OAuth2 token with scope attestations:read When it calls GET /v1/jobs/{jobId}/attestations and GET /v1/attestations/{attestationId} Then responses include attestation metadata, artifact URLs, and ledger digest; 404 is returned if the resource does not exist; RBAC is enforced And GET /v1/attestations/{attestationId}/artifact.{pdf|json} returns the exact artifact with Content-SHA256 header and correct Content-Type And upon finalization, a webhook event attestation.created is delivered within 5 seconds, signed with HMAC-SHA256, with at-least-once delivery, exponential backoff retries up to 10 attempts, and signature verification data (timestamp, signature) in headers
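HMAC signing for webhook deliveries typically covers a timestamp plus the body, so receivers can reject both forgeries and replays. A minimal sketch, assuming a `${timestamp}.${body}` signing string and a five-minute tolerance window (both illustrative choices, as the spec only mandates HMAC-SHA256 with timestamp and signature headers):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sender side: sign the timestamped body and put both values in headers.
function signWebhook(body: string, secret: string, timestamp = Math.floor(Date.now() / 1000)) {
  const signature = createHmac("sha256", secret).update(`${timestamp}.${body}`).digest("hex");
  return { timestamp, signature };
}

// Receiver side: recompute and compare in constant time; reject deliveries
// whose timestamp falls outside the tolerance window to blunt replays.
function verifyWebhook(body: string, secret: string, timestamp: number, signature: string, toleranceSec = 300): boolean {
  if (Math.abs(Date.now() / 1000 - timestamp) > toleranceSec) return false;
  const expected = createHmac("sha256", secret).update(`${timestamp}.${body}`).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signature, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```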
Data Retention and Legal Hold Enforcement
Given a tenant retention policy of N days When an attestation exceeds N days and is not on legal hold Then the system purges the PDF/JSON artifacts and evidence files, redacts PII in the job record, and retains the ledger digest and minimal metadata And an audit log event attestation.purged is recorded with actor=system and timestamp And when a legal hold is applied by a user with role LegalAdmin, purges are deferred until hold release; applying/releasing hold records audit events And API requests for purged artifacts return 410 Gone and the Job Timeline displays a Purged entry
Job Timeline Shows Attestation and Ledger Events
Given a job with a submitted attestation When a user opens the Job Timeline tab Then events are listed for attestation.submitted, artifacts.generated, ledger.written, webhook.delivered, retention.purged, and hold.applied/hold.released, each showing actor, ISO-8601 UTC timestamp, and a link to details And events are ordered newest-first, filterable by event type, and the timeline loads within 2 seconds for up to 500 events And users without the ComplianceViewer permission see redacted sensitive fields while still viewing event presence
Scan-to-Start Mobile Check-in
"As a crew lead, I want to scan a job tag on-site to start preflight so that I’m sure I’m attesting for the correct property with minimal steps."
Description

Enables field users to initiate the preflight attest process by scanning a job QR code or tapping NFC at the site. Auto-loads the correct job, applies a geofence to confirm location, pre-populates known details, and launches the role-appropriate checklist. Supports offline caching of assigned jobs and secure sync when connectivity resumes. Minimizes tap count and reduces mismatch errors between field work and back-office records.

Acceptance Criteria
QR Scan Initiates Check‑in and Loads Correct Job
Given the user is authenticated and on the RoofLens mobile app home screen And the device camera is available And the QR code encodes a valid job ID for the user's organization When the user scans the QR code Then the app auto-loads the matching job and displays the job header (job ID, address, client name) within 3 seconds And navigates directly to the Preflight Attest start screen with the job context applied And the path from scan to checklist requires no more than 2 taps And if the user is not assigned to the job, a clear warning is shown and the user must confirm before proceeding And if the QR code is invalid or belongs to a different organization, the app blocks progression and displays an error message without loading a job
NFC Tap Initiates Check‑in at Site
Given the user is authenticated and NFC is enabled on the device And the site NFC tag contains a valid RoofLens NDEF payload with a job ID in the user's organization When the user taps the device to the NFC tag Then the app auto-loads the matching job and displays the job header within 3 seconds And navigates directly to the Preflight Attest start screen And the path from tap to checklist requires no more than 2 taps And if NFC is unavailable or disabled, the app prompts the user to use QR scan instead And if the tag payload is invalid or for a different organization, the app blocks progression and displays an error message
Geofence Validation Confirms On‑Site Presence
Given the device has location services enabled And the job has a geofence defined (center and radius) When check-in starts Then the app computes the user's distance to the job geofence And if within 100 meters of the geofence, marks location as verified and allows progression And if outside 100 meters, displays on-site guidance and requires supervisor override to proceed And if no location fix is obtained within 10 seconds, prompts retry or proceed offline with a "location unverified" flag And all outcomes are time-stamped and recorded in the job Custody Ledger
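Distance-to-geofence checks are commonly done with the haversine formula, which is more than accurate enough at the 100-meter scale involved. A small sketch, modeling the fence as the center-plus-radius circle described in the scenario:

```typescript
// Great-circle distance between two WGS-84 points, in meters.
function haversineMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6_371_000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Location is verified when the fix falls within the fence radius plus the
// 100 m tolerance from the scenario above.
function isOnSite(fix: { lat: number; lon: number }, fence: { lat: number; lon: number; radiusM: number }): boolean {
  return haversineMeters(fix.lat, fix.lon, fence.lat, fence.lon) <= fence.radiusM + 100;
}
```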
Role‑Appropriate Checklist Launch with Passkey Attestation
Given the user has one or more roles assigned on the job (e.g., Remote Pilot in Command, Visual Observer) And the job contains known details (job ID, address, scheduled date/time, customer, assigned aircraft if available) When the job check-in screen loads Then the app selects and presents the checklist matching the user's highest-permission role for this job And pre-populates the checklist header with job ID, address, scheduled date/time, user name, user role, and aircraft registration if available And prompts the user to authenticate with a platform passkey to sign the attestation before starting the checklist And if passkey authentication succeeds, the checklist becomes actionable And if passkey authentication fails or is canceled, progression is blocked and a retry option is shown
Offline Check‑in with Cached Jobs and Deferred Sync
Given the device is offline And the user has at least one assigned job cached on the device within the last 7 days When the user scans a QR code for a cached job or selects it from the assigned jobs list Then the app loads the job from cache and opens Preflight Attest in offline mode And pre-populates known details from the cache And saves all inputs and the attestation locally with encryption at rest And clearly indicates offline status and deferred sync And if the QR corresponds to a job not cached, the app blocks progression and shows "job not available offline" without data loss
Secure Sync on Connectivity Resume without Duplicates
Given the device regains connectivity after an offline check-in When the app syncs job context, attestation signature, geofence status, and checklist responses Then the data is transmitted over TLS 1.2+ and validated by the server And a single check-in record is created or updated on the server using an idempotent client-generated identifier And attachments (passkey attestation, timestamps, location status) are visible in the job and Custody Ledger And no duplicate check-in records are created even after multiple retries And the local offline cache is marked as synced and cleared of pending state
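Duplicate-free sync hinges on the client-generated idempotency identifier: the server treats a repeated identifier as a retry of the same check-in, not a new record. A minimal sketch, with an in-memory map standing in for a database table with a unique index on the identifier:

```typescript
interface CheckInRecord { clientCheckInId: string; jobId: string; payload: unknown; syncedAt: string }

// Keyed by the client-generated identifier; a DB unique index in production.
const checkIns = new Map<string, CheckInRecord>();

function upsertCheckIn(clientCheckInId: string, jobId: string, payload: unknown): { record: CheckInRecord; created: boolean } {
  const existing = checkIns.get(clientCheckInId);
  // A retry after a dropped response returns the original record unchanged,
  // so multiple sync attempts never create duplicates.
  if (existing) return { record: existing, created: false };
  const record: CheckInRecord = { clientCheckInId, jobId, payload, syncedAt: new Date().toISOString() };
  checkIns.set(clientCheckInId, record);
  return { record, created: true };
}
```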
Mismatch Handling and Error Messaging
Given a scan or tap initiates check-in for a job When the loaded job does not match the user's intended site (e.g., geofence mismatch > 100 m, organization mismatch, or stale/expired code) Then the app prevents starting the checklist, displays a clear, actionable message describing the mismatch reason, and provides safe options (retry scan, contact supervisor) And logs the mismatch event with timestamp, user ID, and reason to the Custody Ledger And no job context is altered in back-office systems until a valid check-in is completed or a supervisor override is explicitly applied
Non-Compliance Alerts and Escalations
"As an operations manager, I want to be alerted when preflight checks are blocked or overdue so that I can unblock the team quickly and maintain compliance."
Description

Sends real-time notifications to designated supervisors when a checklist is blocked, when an override is requested, or when required attestations are overdue. Supports push, SMS, email, and Slack/MS Teams, configurable quiet hours, and escalation chains with SLAs. Logs all notifications in the job timeline and links to the specific non-compliant item for quick action. Provides an admin dashboard to monitor outstanding blocks and override volume.

Acceptance Criteria
Real-Time Alert on Blocked Checklist Step
Given a mandatory Preflight checklist step is marked non-compliant and blocks job execution at time T And designated supervisors are configured for the job When the block is recorded Then notifications are dispatched to all designated supervisors within 60 seconds via their configured channels And each notification includes job ID, job name, blocked step name, severity, actor, timestamp (UTC and local), and a deep link to the blocked item And a job timeline entry is created with the same metadata and delivery statuses per channel
Override Request Escalation and Audit
Given an operator requests a supervisor override on a blocked step When the request is submitted Then the primary supervisor is notified within 60 seconds via configured channels with a one-click approve/deny action And if not acknowledged within SLA1 (configurable), the request escalates to the next level in the chain And acknowledgment at any level stops further escalation And on approve/deny, the system records approver identity (passkey credential ID), decision, timestamp, and comment in the job timeline and Custody Ledger And the override outcome is linked to the specific blocked item
Overdue Attestation Alerts
Given required attestations must be completed before flight start time F And one or more attestations remain incomplete When current time exceeds F by the configured grace period Then overdue notifications are sent to the responsible operator and designated supervisors via their configured channels within 60 seconds And reminders repeat per configured cadence up to the maximum attempts And if still incomplete at SLA2 threshold, escalation proceeds to the next supervisor level And each notification attempt is logged in the job timeline with per-channel delivery status
Multi-Channel Delivery Preferences and Fallback
Given a recipient has enabled a set of channels and an ordered preference list among {push, SMS, email, Slack, Teams} When an alert is triggered Then the system sends via the enabled channels according to the recipient’s delivery policy (single-primary or multi-channel) And if a channel returns a definitive failure, the next priority channel is attempted within 30 seconds And no messages are sent to disabled channels And delivery status, provider message IDs, and failures are recorded per channel in the job timeline
Quiet Hours Enforcement
Given a user has configured quiet hours for their local timezone When an alert is triggered during quiet hours Then the system suppresses user-facing delivery for that user And SLA timers for that user’s escalation level pause during quiet hours and resume when quiet hours end And a suppressed entry is recorded in the job timeline noting quiet hours and next action time And alerts to other recipients not in quiet hours proceed normally
SLA-Based Escalation Chain Behavior
Given an escalation chain with levels L1..Ln and SLA thresholds t1..tn When an alert is sent to L1 at time T0 Then if no acknowledgment is received by T0+t1, the alert escalates to L2, and so on through Ln And acknowledgment can be performed via in-app UI or any channel’s action link And upon acknowledgment, the system stops escalation, marks the alert as acknowledged, records acknowledger identity, method, timestamp, and total time-to-acknowledge, and updates the job timeline And any duplicate in-flight notifications are canceled
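Determining which level currently owns an unacknowledged alert is a cumulative-threshold calculation: the alert escalates past level k once elapsed time exceeds t1 + ... + tk. A sketch of that calculation follows; quiet-hours pauses, per the earlier scenario, would be handled by subtracting suppressed time from the elapsed value:

```typescript
// Which 0-indexed level in the chain L1..Ln currently holds the alert.
function currentEscalationLevel(sentAt: Date, slaMinutes: number[], now = new Date()): number {
  if (slaMinutes.length === 0) return 0;
  const elapsedMin = (now.getTime() - sentAt.getTime()) / 60_000;
  let cumulative = 0;
  for (let level = 0; level < slaMinutes.length; level++) {
    cumulative += slaMinutes[level];
    if (elapsedMin <= cumulative) return level; // still within this level's window
  }
  return slaMinutes.length - 1; // chain exhausted; the last level keeps the alert
}
```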
Admin Dashboard Monitoring and Drill-Down
Given an admin opens the Non-Compliance Alerts dashboard When the dashboard loads Then the admin sees real-time counts of outstanding blocks, pending overrides, and overdue attestations, with age buckets (0–15m, 15–60m, >60m) And the admin can filter by date range, job, severity, channel, supervisor, status (sent, delivered, failed, acknowledged) And metrics display include average time-to-acknowledge, escalation rate, and override approval rate for the selected period And each row links to the job timeline and the specific non-compliant item And dashboard data auto-refreshes at least every 60 seconds

Guest Pass

Sponsor‑approved, single‑job guest badges for subs, adjusters, or auditors with capture‑only rights, export limits, and automatic expiration. Images carry visible guest identity watermarks and full traceability. Enables secure collaboration without sharing accounts or expanding licenses.

Requirements

Sponsor Approval & Invite Flow
"As a sponsoring contractor, I want to issue a scoped, time-limited guest pass with approved permissions so that subs can capture images for a specific job without needing a paid license or broad account access."
Description

Provides a sponsor-driven workflow to create, configure, and deliver guest passes for a single job. Sponsors select the job, set expiration date/time, define capture-only permissions, and configure export limits. The system generates a unique, scoped invite delivered via email/SMS with one-time passcode (OTP) verification; no full account is required. The flow records invite creation, acceptance, and terms consent; supports resend, revoke, and reissue actions. Integrates with RoofLens Jobs so guests land directly in the assigned job context, and enforces configuration across web and mobile capture experiences. All actions are written to the job’s audit trail for compliance.

Acceptance Criteria
Sponsor creates and configures a single‑job guest pass
Given a sponsor is authenticated and has Job A selected When the sponsor opens Guest Pass > Create and sets an expiration date/time in the job’s time zone, selects capture‑only permissions, and sets an export limit (e.g., 0–5) And the sponsor clicks Create Then the system validates required fields and denies submission if any are missing or invalid (with inline errors) And a unique, single‑use invite token scoped to Job A is generated And a confirmation screen shows the configuration summary (job, expiration, permissions, export limit)
Invite delivery with OTP verification and no account creation
Given a guest pass is created with delivery via email and/or SMS When the system sends the invite Then the message is delivered within 60 seconds and contains a unique invite link And on first link open, the guest is prompted for a one‑time passcode (OTP) sent to the same channel(s) And the guest can complete access without creating a full RoofLens account And OTP expires after 10 minutes and is limited to 5 attempts before lockout And the guest must accept the guest terms to proceed; consent is recorded upon acceptance
Guest lands directly in assigned job context
Given a guest opens the invite link and completes OTP and terms consent When access is granted Then the guest is routed to Job A’s capture view (web or mobile) within 3 seconds of verification And the job switcher and global navigation to other jobs are hidden/disabled And attempts to navigate to any other job or route return 403 and redirect back to Job A
Capture‑only permissions enforced across web and mobile
Given a verified guest is in Job A under a capture‑only pass When the guest attempts to upload images or videos Then uploads succeed and are attributed to the guest identity And when the guest attempts to edit measurements, create estimates, change job details, or delete sponsor data Then the actions are blocked with a capture‑only notice and no data is changed And the same permission rules apply identically on web and mobile capture apps
Export limits enforced per invite configuration
Given a guest pass is configured with an export limit of N When the guest attempts exports (e.g., photo download, PDF/image export) from Job A Then each successful export decrements the remaining count for that invite And when remaining count reaches 0 Then export controls are disabled and further attempts are blocked with a limit‑reached message And export counts and outcomes are recorded per invite in the audit trail
Expiration, revoke, resend, and reissue behaviors
- Given a guest pass exists for Job A, When the expiration timestamp is reached, Then any new or active sessions for that invite are terminated within 60 seconds and access is blocked with an expired message.
- When the sponsor clicks Revoke, Then access is immediately blocked and the token is invalidated.
- When the sponsor clicks Resend, Then the original token is re-delivered without changing its validity.
- When the sponsor clicks Reissue, Then a new token is generated and the old token is invalidated.
- And all actions are reflected in the invite's status.
Audit trail captures full guest pass lifecycle
- Given audit logging is enabled for Job A, When the sponsor creates, resends, revokes, or reissues a guest pass, Then an audit entry is written with actor, action, timestamp, job, and invite ID.
- When the guest opens the link, requests/enters an OTP (success/failure), accepts terms, accesses, uploads, exports, or is blocked by permissions/limits/expiration, Then each event is logged with outcome, timestamp, channel, and originating IP/device.
- And audit entries are visible in the Job A audit trail within 10 seconds of the event.
Single-Job Scope Enforcement
"As a project manager, I want guest access limited to one job so that external collaborators can’t view or affect any other projects in our account."
Description

Ensures each guest pass is cryptographically bound to a single job_id, restricting UI navigation and API access to only that job’s assets and tasks. Removes global project lists, search, and cross-job navigation for guests. Validates job scope on every request server-side, blocks attempts to access other jobs, and returns sanitized errors. Optionally supports geofencing to the job address polygon for added assurance. Includes automated tests for scope checks and monitoring alerts for cross-scope access attempts.

Acceptance Criteria
Cryptographic Binding to Single Job ID
- Given a valid guest pass token containing claim job_id = X and a valid signature, When the server verifies the token, Then the session scope is set to job_id X only (a token-verification sketch follows this list).
- Given the same token, When any API or page request targets job_id X, Then the request is authorized subject to role permissions and returns 2xx.
- Given the same token, When any API or page request targets job_id Y ≠ X, Then the response is 404 Not Found with a generic message and no disclosure of job Y's existence.
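A minimal token shape satisfying these criteria is a signed claims blob re-checked on every request. The HMAC-based sketch below is illustrative (a real deployment would likely use standard JWTs); the important property is that the bound job_id comes from the verified token, never from the client:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

interface PassClaims { passId: string; jobId: string; exp: number } // exp = unix seconds

function mintPass(claims: PassClaims, secret: string): string {
  const body = Buffer.from(JSON.stringify(claims)).toString("base64url");
  const sig = createHmac("sha256", secret).update(body).digest("base64url");
  return `${body}.${sig}`;
}

// Every request re-verifies signature and expiry, then compares the
// server-side requested job against the bound job; "not_found" maps to a
// sanitized 404 so the existence of other jobs is never disclosed.
function authorize(token: string, requestedJobId: string, secret: string): "ok" | "not_found" | "expired" | "invalid" {
  const [body, sig] = token.split(".");
  if (!body || !sig) return "invalid";
  const expected = createHmac("sha256", secret).update(body).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return "invalid";
  const claims: PassClaims = JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
  if (claims.exp * 1000 < Date.now()) return "expired";
  return claims.jobId === requestedJobId ? "ok" : "not_found";
}
```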
Guest UI Locked to Single Job Context
Given a guest logs in with a Guest Pass bound to job X When the application loads Then only job X workspace is accessible and visible And global project list, global search, and job switcher controls are not rendered Given the same session When the guest navigates to /projects or /search or any cross-job route Then they are redirected to /jobs/X/capture and no cross-job data is shown
Server-Side Scope Validation on All Endpoints
- Given a guest pass bound to job X, When a REST request is made to /jobs/Y/* and Y ≠ X, Then the server returns a sanitized 404 and performs no database writes or side effects.
- Given a guest pass bound to job X, When a GraphQL operation references job_id Y ≠ X in variables or input, Then the server rejects the operation with a sanitized error and returns no partial data.
- Given a guest pass bound to job X, When a WebSocket subscription or stream requests data for job Y ≠ X, Then the subscription is rejected with a sanitized error and no events are streamed.
Direct URL and ID Tampering Blocked
- Given an asset_id that belongs to job Y ≠ X, When a guest bound to job X requests /assets/{asset_id} or attempts to generate a download for it, Then the server returns a sanitized 404 and no presigned URL or bytes are produced.
- Given a guest client attempts to submit payloads with job_id Y ≠ X (e.g., via modified form, headers, or query), When the server processes the request, Then the server enforces job X, ignores client-supplied job_id, and rejects mismatches with a sanitized 404.
Optional Geofence Enforcement
- Given geofencing is enabled for the guest pass with polygon P and buffer = 25 meters, When the guest attempts to start capture outside P+buffer, Then the client shows "Out of capture zone" and the server rejects the start with 403 and no media is accepted.
- Given geofencing is enabled for polygon P, When captures occur within P+buffer, Then uploads succeed and are attributed to job X.
- Given geofencing is disabled for the guest pass, When captures occur anywhere, Then uploads are accepted subject only to single-job scope rules.
Audit Logging and Alerting for Cross-Scope Attempts
Given any cross-scope access attempt occurs (requested job Y ≠ bound job X) When the server handles the request Then an audit log is written with guest_id, pass_id, bound_job_id, attempted_job_id, endpoint, timestamp, and masked IP And a monitoring alert is emitted to the configured channel within 60 seconds with severity ≥ Warning And the client receives a sanitized 404 (or generic error) without sensitive details
Automated Test Coverage for Scope Enforcement
Given the CI pipeline runs When unit, integration, and end-to-end tests for Single-Job Scope Enforcement execute Then all tests pass with zero failures And line coverage for scope enforcement modules is ≥ 90% and branch coverage is ≥ 80% And e2e tests verify UI lock to single job, URL tampering blocks, REST/GraphQL/WebSocket scope checks, and geofence on/off behavior
Capture-Only Mode
"As a guest subcontractor, I want a simple capture-only interface so that I can quickly upload site photos without risking accidental edits or exports."
Description

Implements a permission profile that limits guests to media capture and upload only. Disables measurement generation, editing, deletion, export, pricing, and sharing actions in UI; enforces the same restrictions via API authorization. Guides guests with contextual UI (e.g., disabled buttons with tooltips) and a streamlined capture screen. Supports background uploads, queued retries, and network-resilient transfers while preventing any data exfiltration paths. Sponsors can optionally enable limited annotations that do not alter measurements. All captured items are auto-associated with the job and the guest’s identity.

Acceptance Criteria
UI Restricts Guest to Capture-Only Actions
Given I am authenticated with a valid Guest Pass configured as capture-only for Job J When I land in the job workspace Then I am taken directly to the streamlined Capture screen and non-capture navigation items are hidden And the following actions are hidden or disabled: Generate Measurements, Edit Measurements, Delete Media, Export, Pricing/Estimate, Share Link And hovering or tapping a disabled control shows a tooltip: "Guest pass: capture-only. Contact sponsor to request access." When I attempt to access a restricted route via direct URL or deep link Then I am redirected to the Capture screen and shown a toast explaining the restriction
API RBAC Blocks Non-Capture Operations
- Given I present a guest_capture_only access token, When I call any of the following endpoints: POST /measurements/generate, PATCH /measurements/*, DELETE /media/*, POST /exports, POST /estimates, POST /shares, Then the API responds 403 Forbidden with error code RBAC_CAPTURE_ONLY and no side effects.
- When I call capture/upload endpoints (POST /media, PUT /uploads/*, GET /jobs/:id metadata only), Then the API responds 2xx and includes no measurement geometry, pricing, or export URLs in the payload.
- And the token cannot be exchanged or elevated to another role.
Network-Resilient Background Upload Queue
Given I am on the Capture screen and the device loses connectivity or the app is backgrounded When I capture photos or videos Then each item is enqueued locally with status Pending and remains in the queue across app restarts And uploads use TLS 1.2+ with chunking and SHA-256 integrity checks; duplicates are prevented by content hash And the client retries failed chunks with exponential backoff (min 2s, max 120s) up to 7 attempts per item When connectivity is restored or the app returns to foreground Then uploads resume within 60 seconds and item statuses progress to Uploaded or Failed with an actionable error And no captured media is lost or uploaded to non-RoofLens domains
Watermarked Media and Traceability
Given a guest captures an image or video for Job J When the media is displayed in any guest-accessible view Then a visible, non-removable watermark overlays the media including guest full name/email, Job ID, capture timestamp, and "Guest" And the stored media record contains immutable metadata: guest ID, job ID, device ID, capture timestamp, and checksum And an audit log entry is recorded for capture and upload events with IP and user agent
Optional Guest Annotations Without Measurement Modification
Given the sponsor has enabled "Allow Guest Annotations" for Job J When the guest opens the Capture screen Then only non-metric annotation tools are available: Arrow, Circle, Text Note, Highlight And attempts to create, edit, or delete measurements, calibration, scale, pricing, or sharing are blocked with a tooltip and no data change occurs And annotations are saved as a separate overlay layer linked to the media and audit logged with guest identity and timestamp And guests cannot export or share annotations; sponsors can view, edit, or remove them
Prevent Data Exfiltration in App and API
Given I am a guest in capture-only mode When I interact with media or job data in the UI Then there are no controls for Export, Share, Download Original, Copy to Clipboard, or External Share And right-click/context menu save is disabled on media, and images are served via time-limited, authenticated URLs with Content-Disposition: inline and CORS restricted to app origins And API endpoints that return files or exports respond 403 to guest tokens and do not emit presigned URLs And local caches are encrypted at rest and are not written to the device photo gallery
Automatic Association and Audit Logging
Given a guest captures media within Job J When the item is saved and/or uploaded Then the server associates the media to Job J and the capturing guest; the guest cannot change these associations And the job activity feed shows the capture event with guest identity And the audit API returns an immutable record for the event including timestamp, IP, device, app version, and checksum
Identity Watermarking & Metadata Tagging
"As a compliance officer, I want guest media visibly and metadata-marked with identity and job details so that we maintain chain-of-custody and deter misuse."
Description

Applies visible, non-removable watermarks to all guest-captured images and videos showing guest name, organization, date/time, job ID, and pass ID. Uses diagonal tiling plus edge seals to deter cropping. Embeds matching IPTC/XMP metadata and cryptographic hash references for tamper detection. Stores an original master for internal processing but serves only watermarked derivatives in all guest-visible views and any sponsor-initiated exports that include guest media. Watermark styles adapt to light/dark imagery for legibility. All watermark and metadata parameters are recorded to the audit log.

Acceptance Criteria
On-Capture Watermark and Metadata Embedding for Guest Media
Given a guest with an active Guest Pass captures a photo or video for a specific Job ID When the media is uploaded to RoofLens Then a watermarked derivative is created with diagonal tiled text and edge seals on all four borders containing Guest Full Name, Organization, UTC Date/Time, Job ID, and Pass ID And the guest cannot disable or alter watermark settings And matching IPTC and XMP metadata fields are embedded with the same tokens and a cryptographic hash reference And for video, the watermark overlay appears on all frames at the configured tiling density and container-level metadata includes the same tokens and hash reference And an unwatermarked original master is stored in secure internal storage and is not addressable via guest-facing endpoints And an audit log entry is recorded with watermark tokens, style variant, tiling density, opacity, algorithm/version, IPTC/XMP keys written, hash algorithm, derivative and master checksums, actor, and timestamps
Guest and Sponsor Views Serve Only Watermarked Derivatives
Given any user (guest or sponsor) views or requests guest-captured media via UI or API (thumbnail, preview, full-size, or playback) When the media resource is delivered Then only the watermarked derivative is served and no master file URL is issued And any direct attempt to access a master returns 403/Not Authorized and is audit-logged with requester identity And the served derivative retains IPTC/XMP metadata and the cryptographic hash reference And an audit log event "served_watermarked_derivative" is recorded with requester identity, derivative checksum, and timestamp
Sponsor Exports Include Watermarked Guest Media with Metadata
Given a sponsor initiates an export (PDF, ZIP, or API export) that includes guest-captured media When the export is generated Then all included guest media are the watermarked derivatives; masters are never embedded And the exported assets preserve visible watermarks in all embedded images/thumbnails and video frames And the export manifest lists for each guest asset the derivative checksum, watermark parameter summary (style, density, opacity), and hash reference And the exported files retain IPTC/XMP metadata matching the watermark tokens And an audit log event "export_generated" records the included guest media list and watermark/metadata parameters
Anti-Cropping Watermark Resilience
Given a watermarked derivative image is cropped by up to 15% from any single edge or uniformly by up to 10% from all edges When the cropped image is inspected Then at least one diagonal watermark tile with legible tokens and at least one edge seal remain visible in the resulting frame And automated tests validate this across images at 1024x1024, 2048x2048, and 4000x3000 resolutions with pass rate ≥ 95% And the diagonal tiling density and edge seal thickness are recorded in the audit log for each asset
Adaptive Watermark Legibility on Light and Dark Imagery
Given representative media sets with low-key (dark), high-key (bright), and mixed-contrast scenes When watermarks are applied Then the system selects a light or dark style that yields a minimum 4.5:1 contrast ratio for watermark text over local background for ≥ 95% of characters And edge seals maintain ≥ 3:1 contrast to their immediate background along all borders And automated tests compute and verify these contrast thresholds on at least 50 images and 3 videos, all passing And no watermark text is clipped or truncated at image/video bounds
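The 4.5:1 and 3:1 thresholds referenced here are WCAG-style contrast ratios computed from relative luminance. A sketch of the exact formula, which an automated test could run against sampled watermark-text and local-background pixels; the style-selection helper at the end is an illustrative assumption about how light/dark variants might be chosen:

```typescript
// WCAG 2.x relative luminance over 8-bit sRGB channels.
function linearize(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Ratio is (lighter + 0.05) / (darker + 0.05); the tests above require >= 4.5
// for watermark text and >= 3 for edge seals against the local background.
function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  const la = relativeLuminance(a);
  const lb = relativeLuminance(b);
  return (Math.max(la, lb) + 0.05) / (Math.min(la, lb) + 0.05);
}

// Hypothetical style selection: white text if it clears 4.5:1, else dark.
const pickStyle = (bg: [number, number, number]) =>
  contrastRatio([255, 255, 255], bg) >= 4.5 ? "light" : "dark";
```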
Tamper Detection via Hash and Audit Traceability
Given a previously exported watermarked derivative is re-ingested or verified by the system When any pixel-level modification or metadata removal results in a checksum that differs from the stored reference Then the asset is flagged as "Tamper Suspected", a detailed audit entry is written (expected vs observed checksum, missing metadata keys), and the altered file cannot replace the canonical derivative And if the checksums match and metadata keys are intact, the asset is marked "Integrity Verified" and logged And the hash algorithm (e.g., SHA-256) is consistent across capture, serve, and export events and is recorded in all related audit entries
Export Controls & Quotas
"As a sponsor, I want to set strict export limits for guests so that sensitive media cannot be downloaded or shared beyond what’s necessary for the job."
Description

Enforces configurable export rules per guest pass, including hard disable, file-type restrictions, resolution caps, daily/total quotas, and watermark-only derivatives. All export attempts are authorized server-side against pass policy and logged. Provides clear UI messaging to guests when exports are limited or blocked. Sponsors can override or extend quotas in real time. Integrates with existing PDF bid and media export pipelines to ensure guest media always carries watermarks and policy-compliant transformations.

Acceptance Criteria
Hard-Disabled Exports Enforcement
Given a valid guest pass for a single job with exports disabled by policy When the guest initiates any export (PDF bid, image, ZIP, CSV, API) from that job Then the server performs authorization against the pass policy and denies the request And responds with HTTP 403 and reason "exports_disabled" And no export artifact is generated or queued And the UI displays "Exports are disabled by your sponsor" with no retry action enabled And the attempt is logged with pass_id, guest_id, sponsor_id, job_id, export_type, outcome "blocked", reason "exports_disabled", timestamp, request_ip, user_agent, policy_snapshot_id
File-Type Restriction Policy
Given a guest pass allows only PDF and PNG export types When the guest opens export options Then only PDF and PNG choices are enabled in the UI; all other types are hidden or disabled with tooltip "Not allowed" When the guest attempts to export a disallowed type by any means (including direct API call) Then the server denies with HTTP 403 and reason "type_not_allowed" And the attempt is logged with requested_type, allowed_types, outcome "blocked", timestamp, policy_snapshot_id
Resolution Caps and Transformations
Given a guest pass with max image resolution 1600x1200 and max PDF DPI 150 When the guest exports an image or PDF from higher-resolution originals Then the server transforms outputs to not exceed the caps while maintaining aspect ratio And embeds the required watermark in the transformed output And strips any original-resolution sidecar data or embedded originals And records transformation details (pre_resolution, post_resolution, pre_dpi, post_dpi) in the export log And the UI indicates "Resolution limited by guest policy"
Quotas: Daily and Total Enforcement
Given a guest pass with a daily export count limit of 10 files and a total data size limit of 500 MB for the pass duration When exports complete successfully Then the server increments count and bytes_out and updates remaining counters atomically And when a requested export would exceed a limit Then the server denies with HTTP 429 and reason "quota_exceeded" including remaining_count and remaining_bytes in the payload And the UI displays remaining quotas in real time and resets the daily counter at 00:00 UTC And logs include pre_remaining and post_remaining for each successful export
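Quota enforcement is a check-and-decrement that must be atomic under concurrency. The sketch below shows the logic with plain objects; as the comments note, production would fold this into a single conditional database update rather than separate read and write steps:

```typescript
// Illustrative quota state; in production this is one atomic operation
// (e.g., a conditional UPDATE with a WHERE clause on the remaining values).
interface Quota { remainingCount: number; remainingBytes: number; dayKey: string; dailyLimit: number }

function tryConsume(q: Quota, fileBytes: number, todayUtc: string): { allowed: boolean; reason?: "quota_exceeded" } {
  if (q.dayKey !== todayUtc) {
    // Daily counter resets at 00:00 UTC, per the criteria above.
    q.dayKey = todayUtc;
    q.remainingCount = q.dailyLimit;
  }
  if (q.remainingCount < 1 || q.remainingBytes < fileBytes) {
    // Caller responds 429 with remaining_count and remaining_bytes.
    return { allowed: false, reason: "quota_exceeded" };
  }
  q.remainingCount -= 1;
  q.remainingBytes -= fileBytes;
  return { allowed: true };
}
```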
Watermark-Only Derivatives Across Pipelines
Given a guest pass requiring watermark-only derivatives When the guest exports via the PDF bid pipeline or the media export pipeline Then all outputs include a visible, non-removable watermark containing guest_name, pass_id, job_id, and timestamp And original, unwatermarked files are not downloadable And attempts to request originals are denied with HTTP 403 and reason "originals_restricted" And exported files pass an automated watermark presence check And all events are logged with watermark_template_id, verification_result, policy_snapshot_id
Server-Side Authorization and Audit Logging
Given any export request originating from a guest session When the request reaches the server Then authorization is evaluated solely server-side against the current pass policy and job scope, independent of client UI state And a decision record is persisted with pass_id, sponsor_id, guest_id, job_id, export_type, policy_snapshot_id, decision ("allow" or "deny"), reason_codes, latency_ms, bytes_out (if any), transformation_summary, request_ip, user_agent, timestamp And allowed exports proceed only after the decision is stored And denied requests produce no export artifact
Real-Time Sponsor Override or Extension
Given a sponsor opens the guest pass admin and extends the daily count limit by +5 and removes JPEG from disallowed types while the guest session is active When the sponsor saves the changes Then the new policy takes effect within 5 seconds and applies to the next guest request without requiring guest re-login And the guest UI reflects updated remaining quotas and enabled export types on next refresh or poll And an audit event is recorded with editor_id, changeset, previous_values, new_values, effective_at, and affected_pass_id And subsequent exports are evaluated against the updated policy
Time-Bound Expiration & Auto-Revocation
"As a sponsor, I want guest access to automatically end on a set date so that project security is maintained without manual follow-up."
Description

Defines a required expiration for each guest pass with automatic revocation at expiry. Tokens and sessions are invalidated server-side; guests are signed out and can no longer upload or view job assets. Provides sponsor controls for immediate revoke and extend, plus optional pre-expiration reminders to both sponsor and guest. Handles partial uploads gracefully at cutoff with clear user feedback. Integrates with job lifecycle (e.g., closes access when job is archived) and includes health checks to prevent stale active passes.

Acceptance Criteria
Auto-Revocation at Expiry (Server-Side Invalidation)
- Given an active guest pass with expiry T and a valid access token, When server time >= T, Then all requests using that token to view, list, upload, export, or download job assets are rejected with HTTP 401 and error code GUEST_PASS_EXPIRED, and no asset mutations occur. - Given a guest web session active at time T, When server time >= T, Then the UI is forced to sign out within 30 seconds and displays a "Guest pass expired" message with the exact expiry timestamp. - Given any refresh token issued under the pass, When used after T, Then it is rejected and not rotated, and the session cookie is invalidated. - Given the pass reaches expiry, When auto-revocation occurs, Then an audit log entry PASS_AUTO_REVOKED is recorded with pass_id, job_id, sponsor_id, guest_id, and expired_at.
Immediate Sponsor Revocation
- Given a sponsor with manage rights opens a guest pass, When they click Revoke Now and confirm with a reason, Then the pass status changes to Revoked, all tokens/sessions are invalidated within 15 seconds, and subsequent requests return HTTP 403 with error code GUEST_PASS_REVOKED.
- Given manual revocation is performed, Then an audit event PASS_REVOKED_MANUAL with reason and initiator_id is recorded and a notification is sent to the guest email immediately.
- Given other guest passes exist for the same job, When one pass is revoked, Then other passes and sponsor accounts remain unaffected.
Extend Guest Pass Before Expiry
- Given a pending or active guest pass with expiry T, When the sponsor updates expiry to T2 > now and within org policy max duration, Then the pass reflects the new expiry instantly and the guest UI shows the updated expiry within 10 seconds.
- Given an invalid extension request (past time, exceeds policy, or archived job), When submitted, Then the API rejects with HTTP 400 and error code one of PAST_TIME, EXCEEDS_POLICY, or JOB_ARCHIVED.
- Given an extension is saved, Then an audit event PASS_EXTENDED is recorded and notifications are sent to sponsor and guest.
Pre-Expiration Reminder Notifications
- Given reminders are enabled with schedules at 24h and 1h before expiry, When the pass approaches those thresholds, Then the system sends exactly one reminder at each threshold to both sponsor and guest with job name, pass_id, expiry in recipient timezone, and action links to extend or revoke.
- Given the pass is revoked or the job is archived before a threshold, When the threshold is reached, Then no reminder is sent.
- Given a reminder is delivered, Then an audit event PASS_REMINDER_SENT with schedule label and recipient is recorded.
Graceful Cutoff During Upload
- Given a multi-part upload in progress for a guest at time T-ε, When server time reaches expiry T, Then the server completes assembly only if all parts were fully received before T; otherwise the upload is aborted, partial parts are discarded, and no partial asset becomes visible.
- Given an upload is aborted due to expiry, Then the UI shows a non-dismissible banner stating "Your guest pass expired; the upload was not saved" and returns error code UPLOAD_ABORTED_PASS_EXPIRED to the client.
- Given the sponsor extends the pass within 15 minutes after T, When the guest retries the same upload, Then the upload can resume from the last confirmed part; otherwise a fresh upload is required.
Job Lifecycle Access Closure (Archive)
- Given a job with one or more active guest passes, When the job is archived, Then all associated guest passes are immediately revoked, all tokens/sessions invalidated, and subsequent requests return HTTP 403 with error code JOB_ARCHIVED for those passes.
- Given a job is unarchived, Then previously revoked guest passes remain revoked and cannot be reactivated; the sponsor must create a new pass.
- Given archival-triggered revocation occurs, Then an audit event PASS_REVOKED_JOB_ARCHIVED is recorded for each affected pass.
Health Checks for Stale Active Passes
- Given the system runs a background health check every 5 minutes, When passes with expiry in the past are detected with active sessions or valid tokens, Then the process invalidates them within the same run and records PASS_HEALTH_REVOKED events for each correction.
- Given clock drift or processing delay causes a pass to remain active more than 60 seconds past expiry, Then an alert is emitted to monitoring with metric guest_pass_revoke_lag > 60s and severity warning.
- Given the health endpoint /health/guest-passes is called by ops, Then it returns JSON with counts of active, expiring_24h, expired_unrevoked, and last_run timestamp, and responds within 500 ms under normal load.
End-to-End Audit Trail & Traceability
"As an insurance adjuster, I want a complete activity history for guest contributions so that I can verify who did what and when for claim reviews."
Description

Captures an immutable event log for the entire guest pass lifecycle: creation, invite delivery, OTP verification, login, capture events (with device, OS, IP, GPS where available), uploads, failed attempts, export requests, policy evaluations, revokes, and expiry. Events are linked to job, guest identity, and pass ID. Provides a timeline view in the Job Audit panel and exportable reports (CSV/PDF) for dispute resolution and carrier audits. Supports retention policies and searchable filters to rapidly investigate incidents.
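
The tamper-evidence requirement is typically met with a hash chain: each record carries the hash of its predecessor, so editing any record invalidates every later hash. A minimal sketch with an in-memory list standing in for the append-only backend (AuditLog and its helpers are illustrative, not RoofLens's actual store):

```python
# Sketch of a tamper-evident, append-only audit log using a hash chain.
import hashlib
import json
import time

class AuditLog:
    def __init__(self) -> None:
        self._records: list[dict] = []   # append-only; no updates or deletes

    def append(self, event_type: str, **fields) -> dict:
        """Write one immutable record linked to its predecessor's hash."""
        prev_hash = self._records[-1]["record_hash"] if self._records else "0" * 64
        record = {
            "event_type": event_type,
            "timestamp_utc_ms": int(time.time() * 1000),
            "prev_hash": prev_hash,
            **fields,                     # pass_id, job_id, ip_address, ...
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["record_hash"] = hashlib.sha256(payload).hexdigest()
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks every later hash."""
        prev = "0" * 64
        for rec in self._records:
            body = {k: v for k, v in rec.items() if k != "record_hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != rec["record_hash"]:
                return False
            prev = rec["record_hash"]
        return True
```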

Acceptance Criteria
Immutable Lifecycle Event Logging
Given a guest pass lifecycle event occurs (creation, invite_sent, otp_verified, login, capture, upload, failed_attempt, export_request, policy_evaluation, revoke, expire) When the event is recorded Then an immutable audit record is written to an append-only store with fields: event_id, event_type, job_id, pass_id, guest_id (or null for system), actor_role, timestamp_utc_ms, request_id, device_model, os_version, app_version (if applicable), ip_address, geo_lat, geo_lng, geo_accuracy_m, geo_source, outcome, reason_code (optional), metadata_json And updates to existing records are disallowed; corrections are appended as new events referencing prior_event_id And a tamper-evident mechanism (hash chain or WORM) validates 100% of records via a verification endpoint And timestamps are normalized to UTC with ms precision; client clock skew does not affect ordering And any event is retrievable by event_id within 1 second for up to 100k events per job
OTP and Invite Delivery Auditability
Given a sponsor creates and sends a guest pass invite via email or SMS When delivery and verification processes occur Then events are logged for invite_created, invite_sent (channel, provider, message_id), delivery_status (delivered, bounced, failed) with provider codes And otp_issued is logged without storing the OTP secret; otp_verified_success and otp_verified_failure include attempt_count, ip_address, device, and reason_code And after N consecutive failures (policy-configurable), an otp_lockout event is logged and further attempts are blocked for the lockout window And resend_invite actions are logged with rate-limit decisions (allowed/denied)
Capture/Upload Metadata Traceability
Given a guest uses a Guest Pass to capture and upload media When captures and uploads occur Then a capture_started and capture_completed event is logged per asset with device_model, os_version, app_version, ip_address, and GPS data when available And if GPS is unavailable or denied, the event records geo_source = none and reason_code (e.g., permission_denied, hardware_unavailable) And each uploaded asset logs upload_started and upload_completed with asset_id, byte_size, checksum, mime_type, and storage_location And a watermark_applied event links asset_id to the visible watermark payload (guest_name/id and timestamp) and the job/pass identifiers And delete or edit attempts by the guest are denied and logged as capture_edit_denied with reason_code
Export Requests and Policy Evaluation Logging
Given a user (guest or sponsor) requests an export from a job with export policies When the export request is processed Then a policy_evaluated event records evaluated_rules, rule_versions, inputs (role, pass_limits_remaining), and decision (allow/deny) And if allowed, export_requested and export_completed events include export_type (PDF/CSV), filter_criteria, record_count, file_checksum, and storage_location And if denied, export_denied is logged with reason_code (limit_exceeded, role_not_permitted, policy_window_closed) and limits_remaining And export limits tied to pass_id are enforced; after exceeding limits, all further export requests are denied and logged
Revocation and Expiry Enforcement Audit
Given a guest pass can be revoked by a sponsor or expire automatically When a revoke or expiry condition occurs Then pass_revoked (actor, reason, timestamp) or pass_expired (system) events are logged And subsequent access attempts via the pass are blocked and logged as access_denied with http_status, reason_code (revoked/expired), and ip_address And no capture, upload, or export events succeed after revocation/expiry; attempted actions are logged as denied And expiry evaluation uses server time; client time changes do not bypass expiry
Job Audit Timeline View and Reporting (CSV/PDF)
Given an authorized user opens the Job Audit panel When the timeline is displayed Then events for the selected job appear in strict chronological order with columns: localized_timestamp, event_type, actor, pass_id, guest, device, ip, outcome, summary And filters by date_range, event_type, actor_role, pass_id, guest_id/email, outcome, and ip are available and can be combined And applying filters returns results within 2 seconds for up to 50k events per job; pagination/infinite scroll shows no gaps or duplicates And selecting an event reveals full details including raw metadata_json and links to related assets/exports And exporting the current (filtered) view produces CSV and PDF files whose rows exactly match on-screen results, include filter metadata and generation timestamp, and generate within 10 seconds for up to 10k events
Retention Policy Enforcement and Legal Holds
Given an organization has a configured audit retention period and may apply legal holds When retention windows elapse or holds are applied/removed Then purge_jobs are scheduled and logged with purge_started and purge_completed events including scope and counts; purged events are no longer retrievable And events under legal_hold remain exempt from purge; hold_applied and hold_released are logged with actor and reason And purge and hold operations require authorized roles; all attempts (allowed/denied) are logged And integrity checks post-purge confirm hash-chain continuity for remaining events and report results as integrity_check_passed

SurgeCap

Set hard and soft spend caps by day, week, event, or branch that automatically throttle bulk uploads and require approval when limits are hit. Ties into SLA Predictor and GeoHeat Overlay to prioritize credits for highest‑impact addresses first, protecting budgets during storms without slowing critical jobs.

Requirements

Multi-Level Spend Cap Policies
"As an operations manager, I want to define hard and soft spending limits by branch and time window so that we keep costs controlled without halting essential work."
Description

Enable administrators to configure hard and soft caps on spend, credits, or job counts by day, week, custom event window, and branch. Support inheritance and overrides at organization → region → branch levels, time zone awareness, and automatic reset schedules. Caps apply to key consumption points (bulk uploads, estimate generation, credit usage) and enforce hard stops or soft warnings based on policy. Provide currency configuration, cap preview before activation, and full audit history of policy changes. Integrate with billing/credits and user permissions to ensure only authorized roles can create or modify caps.
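
Override resolution reduces to "the most specific enabled cap wins; a disabled cap falls through to its parent." A sketch of that lookup, with an illustrative Cap shape:

```python
# Sketch of effective-cap resolution across organization → region → branch.
from dataclasses import dataclass

@dataclass
class Cap:
    soft: int
    hard: int
    enabled: bool = True

def effective_cap(org: Cap | None, region: Cap | None, branch: Cap | None) -> Cap | None:
    """Return the most specific enabled cap, or None if no cap applies."""
    for cap in (branch, region, org):          # most specific first
        if cap is not None and cap.enabled:
            return cap
    return None

# Example: a branch override beats region and org values.
org_cap = Cap(soft=8_000, hard=10_000)
branch_cap = Cap(soft=500, hard=750)
assert effective_cap(org_cap, None, branch_cap) is branch_cap
```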

Acceptance Criteria
Admin defines hard/soft caps by metric, window, and scope
- Rule 1: System supports metrics Spend (currency), Credits (integer), and JobCount (integer).
- Rule 2: Admin can set SoftCap and HardCap per metric for Daily, Weekly, and Custom Event (start/end) windows; SoftCap must be <= HardCap; values must be >= 0.
- Rule 3: Target scope selection supports Organization, Region(s), and Branch(es); at least one scope is required.
- Rule 4: Time zone defaults to branch local for Branch scope, region default (inherited from constituent branches or region setting) for Region scope, and org default for Organization scope; window boundaries use the target scope time zone.
- Rule 5: Automatic reset schedules are created on save: Daily at 00:00 local; Weekly at 00:00 Monday local; Custom Event ends at the configured end timestamp; counters reset at each boundary.
- Rule 6: Spend caps require an organization currency configuration; spend thresholds are stored in minor units and displayed with correct currency formatting.
- Rule 7: Validation rejects missing required fields, invalid time windows (end <= start), and non-numeric/overflow inputs; on success, the system returns a policyId and effectiveAt timestamp.
Policy inheritance and override resolution
- Given an org-level Weekly Spend hard cap is defined and no region/branch overrides exist, When evaluating a branch in any region, Then the effective Weekly Spend hard cap equals the org-level cap.
- Given a region-level override exists for the same metric/window, When evaluating a branch in that region with no branch-level override, Then the effective cap equals the region-level value.
- Given a branch-level override exists for the same metric/window, When evaluating that branch, Then the effective cap equals the branch-level value, regardless of parent values.
- Rule: If both Daily and Weekly caps exist for a metric, enforcement applies the most specific scope per window and a breach of either window triggers enforcement.
- Rule: A disabled cap at a specific level causes fallback to the next parent level if present; if no parent cap exists, no cap is enforced for that metric/window.
- Rule: Effective policy evaluation uses the branch time zone to determine window boundaries.
Enforcement at bulk uploads, estimate generation, and credit usage
- Given a hard cap would be exceeded by a requested action (bulk upload, estimate generation, or credit usage), When the action is submitted, Then the system blocks the action, returns error code HardCapExceeded, includes remaining allowance (0), metric/window/scope details, and writes an audit event.
- Given a soft cap threshold would be crossed but the hard cap would not, When the action is submitted, Then the system pauses the action, notifies the user, and requires approval from a user with Cap:Approve permission for the scope; upon approval within SLA, the action proceeds; without approval it is canceled; all outcomes are audited.
- Rule: Bulk uploads are throttled to remaining headroom when policy permits partial processing; excess items are queued with status PendingCapReset or PendingApproval.
- Rule: Counters for Spend/Credits/JobCount are incremented atomically to prevent race conditions; concurrent requests cannot overshoot caps.
- Rule: Enforcement emits user-visible messages showing remaining allowance and next reset time in the target scope time zone.
Cap preview prior to activation
- Given a saved but inactive cap policy, When the admin selects Preview Impact for selected scopes, Then the system shows effective caps per metric/window, current usage, remaining headroom, and a count of items that would be blocked or require approval if activated now, without altering any counters or enforcement state.
- Rule: Preview reflects inheritance/overrides and time zone boundaries exactly as enforcement would.
- Rule: Activating a policy requires explicit confirmation; upon activation, effectiveFrom is set and preview state does not persist.
- Rule: If validation conflicts exist (e.g., SoftCap > HardCap), activation is blocked with actionable errors.
Audit history of cap policy lifecycle and enforcement
- Rule 1: Every create, update, activate, deactivate, and delete action on a cap policy records an immutable audit entry capturing timestamp (UTC), actor userId, actor role, source IP, scope, change set (before/after), and policyId.
- Rule 2: Every enforcement event (block, throttle, approval request, approval/denial) records an audit entry with event type, metric/window, scope, remaining allowance, and related requestId.
- Rule 3: Audit records are retained for at least 24 months, filterable by date range, actor, scope, and event type, and exportable to CSV; access requires Cap:View permission.
Permissions and scoped authorization
- Rule 1: Only users with Cap:Manage can create, update, activate/deactivate, or delete cap policies within their authorized scopes.
- Rule 2: Only users with Cap:Approve can approve soft-cap exceptions within their authorized scopes; approvals outside scope are rejected with 403 and audited.
- Rule 3: Users without required permissions cannot view sensitive fields (thresholds) for scopes they do not own; attempts are denied with 403 and audited.
- Rule 4: Branch-level managers may only manage policies for their branch; region admins for branches within their region; org admins for all scopes.
Currency configuration for spend caps
- Rule 1: The Organization Billing Admin can set the default currency (ISO 4217) used for Spend caps; branches and regions inherit this currency and cannot override it.
- Rule 2: Spend thresholds are entered and displayed in the configured currency with correct symbol and rounding; values are stored in minor units to avoid precision loss.
- Rule 3: Changing the organization currency is blocked if there are active Spend caps; the admin must deactivate or explicitly migrate caps before the currency change; the system provides a clear blocking message and audit entry.
- Rule 4: Enforcement, previews, reports, and notifications consistently use the configured currency; mixed-currency states are not permitted.
Adaptive Upload Throttling & Queueing
"As a back-office coordinator, I want bulk uploads to slow or queue as budgets tighten so that we avoid overruns while still progressing work."
Description

Automatically throttle bulk uploads when approaching soft caps and enforce rate limits or pauses at hard caps. Queue incoming jobs with fair-share allocation across branches and provide estimated start times, remaining budget indicators, and reordering based on priority policies. Support pause/resume, cancellation, and per-branch throughput controls to prevent budget overruns during surges while maintaining predictable processing for active jobs.
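
Fair-share allocation can be implemented as a smooth weighted round-robin over branch queues, with each branch's concurrency cap as its weight; this yields proportional splits without starving any branch. A sketch under those assumptions (branch names and the deque-based queues are illustrative):

```python
# Sketch of fair-share start allocation across branch queues,
# weighted by per-branch concurrency caps (smooth weighted round-robin).
from collections import deque

def assign_starts(queues: dict[str, deque], weights: dict[str, int], slots: int) -> list[str]:
    """Pick up to `slots` job starts, split roughly by weight, with no starvation."""
    starts: list[str] = []
    credit = {b: 0.0 for b in queues}
    for _ in range(slots):
        live = [b for b in queues if queues[b]]      # skip empty branch queues
        if not live:
            break
        total = sum(weights[b] for b in live)
        for b in live:
            credit[b] += weights[b] / total          # accrue proportional credit
        pick = max(live, key=lambda b: credit[b])
        credit[pick] -= 1.0                          # charge the served branch
        starts.append(queues[pick].popleft())
    return starts

# Caps 2:1 over 9 start slots yield a 6:3 split (A,B,A,A,B,A,A,B,A).
queues = {"A": deque(f"a{i}" for i in range(9)), "B": deque(f"b{i}" for i in range(6))}
print(assign_starts(queues, {"A": 2, "B": 1}, 9))
```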

Acceptance Criteria
Soft Cap Throttling on Bulk Uploads
Given a branch with a soft cap of 100 jobs/day and current utilization of 90 jobs (90%), When a user submits a bulk upload of 20 jobs within 1 minute, Then the system limits intake to 1 job every 5 seconds, queues the remainder, and displays "Soft cap throttling active", And no job is rejected due to soft cap, And running jobs are not slowed or paused.
Hard Cap Enforcement and Auto-Pause of New Starts
Given a branch with a hard cap of 120 jobs/day and current utilization reaches 120, When additional jobs are uploaded, Then no new job is allowed to start, queued jobs show status "Awaiting approval: Hard cap reached", and an approval control is presented to authorized roles, And remaining budget indicator shows 0 to hard cap and updates in under 2 seconds, And active jobs continue uninterrupted.
Fair-Share Queue Allocation Across Branches
Given Branch A concurrency cap is 2 and Branch B concurrency cap is 1, And there are at least 9 queued jobs for A and 6 for B, When the scheduler assigns the next 9 job starts, Then 6±1 starts go to A and 3±1 to B (approximate 2:1 share), And neither branch experiences starvation (no more than 2 consecutive start slots without service while both have queued jobs).
Estimated Start Times and Budget Indicators
Given at least 20 queued jobs across branches under steady-state load, When a user views the queue, Then each job displays an estimated start timestamp and a countdown, And 95% of jobs start within the greater of ±10% or ±5 minutes of their displayed estimate, And soft/hard cap remaining indicators are shown per branch and per period (day/week/event) and update within 5 seconds of job acceptance/completion.
Priority-Based Reordering Under Surge
Given a configured priority policy that sorts by SLA deadline ascending then branch priority weight, And a queue of at least 10 jobs exists, When a higher-priority job is uploaded, Then the queue reorders within 2 seconds so that no lower-priority job is ahead of a higher-priority one, And already running jobs are unaffected, And an audit log entry records the reorder with reason and timestamp.
Pause, Resume, and Cancel Controls for Queued/Running Jobs
Given a job in Queued state, When a user with Job Manager role clicks Pause, Then the job state changes to Paused, it is skipped by the scheduler, and an ETA is removed, When the user clicks Resume, Then the job re-enters the queue per current priority with a new ETA within 3 seconds, When a user clicks Cancel before start, Then the job is removed from the queue and any provisional budget reservation is released immediately, When a user clicks Cancel after start, Then processing stops within 30 seconds, partial usage is recorded, and the budget indicator reflects the consumed portion.
Per-Branch Throughput Controls and Rate Limits
Given an admin sets Branch A max concurrent processing to 3 and start rate limit to 6 jobs/minute, When load exceeds these limits, Then the system never runs more than 3 concurrent jobs and never starts more than 6 jobs per minute for Branch A, And changing either limit takes effect within 10 seconds and is applied to subsequent scheduling decisions, And active jobs remain predictable: once started, a job is not paused due to throughput changes unless a hard cap policy explicitly requires it.
Cap Breach Approval Workflow
"As a branch manager, I want an approval step when requests exceed limits so that exceptions are controlled and documented."
Description

Require approval when a submission would exceed a soft or hard cap, with configurable rules for who can approve, multi-step approvals, SLA-based auto-approval exceptions, and time-bound approval windows. Provide in-app notifications, email alerts, and mobile push to approvers, capture justification notes, and maintain a detailed audit trail. Allow one-time overrides, temporary cap increases, and emergency bypass codes governed by role-based access control.
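
At its core the workflow is a small state machine: sequential steps that cannot be skipped, a rejection at any step that is final, and a per-step window that expires or escalates. A sketch of those transitions (the step groups, window length, and expiry behavior are illustrative; a real policy might escalate rather than expire):

```python
# Sketch of a multi-step approval request with time-bound windows.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class ApprovalRequest:
    steps: list[str]                      # approver group per step, in order
    window: timedelta = timedelta(hours=1)
    current_step: int = 0
    status: str = "pending"               # pending | approved | rejected | expired
    step_started_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def decide(self, group: str, approve: bool) -> None:
        if self.status != "pending":
            raise ValueError(f"request already {self.status}")
        if group != self.steps[self.current_step]:
            raise PermissionError("wrong approver group for this step")
        if not approve:
            self.status = "rejected"       # any step's rejection is final
        elif self.current_step == len(self.steps) - 1:
            self.status = "approved"       # final step: release queued items
        else:
            self.current_step += 1         # advance; steps cannot be skipped
            self.step_started_at = datetime.now(timezone.utc)

    def tick(self, now: datetime) -> None:
        """Called periodically: expire (or escalate, per policy) stale steps."""
        if self.status == "pending" and now - self.step_started_at > self.window:
            self.status = "expired"
```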

Acceptance Criteria
Soft Cap Breach Triggers Approval and Throttling
Given a bulk upload would exceed a configured soft cap, When the submission is validated, Then an approval request is created within 3 seconds and the overage items are paused with cap details shown (cap type, limit, used, projected overage). Given approver assignment rules by branch, cap type, amount bucket, and schedule, When the request is created, Then the first eligible approver or approver group is assigned; if none are available, Then fallback to the Org Admin group. Given an approval is pending for a soft cap breach, When additional uploads arrive, Then items that do not increase the overage continue, and any overage items are queued and not processed. Given a pending soft cap approval, When the approval is rejected or expires, Then all queued items are canceled with a visible reason and can be resubmitted without duplicate charges. Given a pending soft cap approval, When the approval is granted, Then queued items are released to processing within 60 seconds and the cap usage is updated atomically.
Hard Cap Breach Requires Elevated Approval Only
Given a submission would exceed a configured hard cap, When validation occurs, Then the system blocks all affected items and creates a hard-cap approval request; no auto-approval is permitted. Given a hard-cap approval request, When approvers are evaluated, Then only users with the Hard Cap Approver role can approve or deny; other users see read-only status. Given a hard-cap breach, When an emergency bypass is used, Then only users with the Emergency Bypass role can apply a bypass code, and all affected items remain blocked until the bypass or approval is recorded. Given a hard-cap breach, Then SLA-based auto-approval exceptions must not execute. Given a hard-cap request is approved, Then all blocked items begin processing within 60 seconds and the new cap usage is reflected immediately.
Multi-Step Approval with Time-Bound Windows and Escalations
Given a cap breach matches a workflow with N steps (1–3), When approval begins, Then steps are enforced sequentially with distinct approver groups per step and cannot be skipped without an Override Steps permission. Given each step has a time window (configurable 5 minutes to 24 hours), When 80% of the window elapses without action, Then the request escalates to the configured fallback group; When the window expires, Then the system auto-rejects or auto-escalates per policy and notifies the submitter. Given any step is rejected, When rejection is saved, Then the request status becomes Rejected, queued items remain blocked/throttled accordingly, and the submitter is notified immediately. Given the final step is approved, When the decision is saved, Then the system releases all queued items within 60 seconds and records the full approval chain in the audit log.
SLA-Based Auto-Approval Exceptions for Critical Jobs
Given a soft cap breach and an exception policy is enabled, When items include addresses classified Critical by SLA Predictor or GeoHeat above configured thresholds, Then the system auto-approves only those critical items up to the remaining exception budget and records the reason as SLA Exception Auto-Approval. Given an exception budget (by day/branch) is configured, When auto-approval consumes budget, Then the remaining budget is decremented atomically; when budget is exhausted, Then further items require normal approval. Given a submission contains both critical and non-critical items, When validation runs, Then critical items are processed first by descending SLA/GeoHeat score and non-critical items are queued for approval. Given a hard cap breach, Then SLA-based auto-approval does not execute under any circumstance.
Approver Notifications and Deep-Link Actions
Given an approval request is created and assigned, When notifications are sent, Then the assigned approver receives in-app notification within 10 seconds, email within 60 seconds, and mobile push within 60 seconds, each including cap type, amount at risk, branch, and a secure deep link to approve/deny. Given multiple requests are generated for the same approver within 1 minute, When notifications are dispatched, Then duplicate pushes/emails are suppressed and a batched summary is sent instead. Given no eligible approver is assigned within 2 minutes, When escalation rules run, Then the fallback group is notified across all channels. Given the deep link is opened, When the approver authenticates, Then they can approve/deny within the link flow and the decision is reflected in the app within 5 seconds.
Justification Notes and End-to-End Audit Trail
Given an approve, deny, override, temporary increase, or bypass action, When the approver submits the decision, Then a justification note of 10–2000 characters is required. Given any decision is recorded, When the audit entry is written, Then it contains: request ID, submission IDs, cap scope (day/week/event/branch), cap type (soft/hard), limits before/after, decision, justification, actor identity and roles, decision channel (app/email/push/API), timestamps, IP/user agent, and any SLA/GeoHeat factors used. Given audit records exist, When users query the audit log, Then records are immutable, filterable by date, branch, actor, decision, and exportable as CSV and JSON. Given compliance policies, When storing audit data, Then retention is at least 2 years and access is controlled by RBAC.
Overrides, Temporary Cap Increases, and Emergency Bypass RBAC
Given a one-time override is requested, When an approver with the Cap Override role approves it, Then it applies only to the specific request, processes the affected items immediately, and cannot be reused. Given a temporary cap increase is requested, When an approver with the Cap Increase role approves it with amount and expiry, Then the cap limit increases by the approved amount until the expiry timestamp, after which it reverts automatically; all calculations and UI reflect the new limit within 10 seconds. Given an emergency bypass is needed, When a user with the Emergency Bypass role generates a single-use code (2FA required), Then the code expires in 15 minutes, can be used once to process the request, and its use is fully audited; daily usage limits per role are enforced. Given organization policy constraints, When overrides or increases exceed configured maximums, Then the system blocks the action with a clear error and no changes are applied.
Impact-First Credit Allocation (SLA + GeoHeat)
"As a dispatcher, I want high-impact addresses to be prioritized automatically so that we meet SLAs and focus resources where they matter most during storms."
Description

Integrate SLA Predictor and GeoHeat Overlay to assign an impact score to each address, prioritizing queue order and approvals for high-value or time-critical locations during surges. Provide configurable weighting (e.g., SLA risk, claim severity, geo intensity), tie-break rules, and transparent scoring explanations shown to users and in logs. Ensure safe fallbacks when predictors are unavailable and allow manual priority boosts with audit logging.
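
The score itself is a weighted sum, with weights re-normalized over whichever predictors are available so a single outage degrades scoring rather than blocking it. A sketch using the example weights from the criteria below (the signal names are illustrative):

```python
# Sketch of the weighted impact score with re-normalization when a
# predictor is unavailable (signals map predictor name -> value or None).
def impact_score(weights: dict[str, float], signals: dict[str, float | None]) -> float:
    """Weighted sum over available signals; live weights re-normalized to 1.0."""
    live = {k: w for k, w in weights.items() if signals.get(k) is not None}
    if not live:
        return 0.0                      # safe default: score 0, end of queue
    total = sum(live.values())
    return round(sum((w / total) * signals[k] for k, w in live.items()), 4)

weights = {"sla_risk": 0.60, "severity": 0.25, "geo": 0.15}
# All predictors healthy: 0.60*0.90 + 0.25*0.50 + 0.15*0.40 = 0.725
assert impact_score(weights, {"sla_risk": 0.90, "severity": 0.50, "geo": 0.40}) == 0.725
# Geo predictor down: remaining weights re-scale to 0.60/0.85 and 0.25/0.85
degraded = impact_score(weights, {"sla_risk": 0.90, "severity": 0.50, "geo": None})
assert degraded == round((0.60 / 0.85) * 0.90 + (0.25 / 0.85) * 0.50, 4)
```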

Acceptance Criteria
Weighted Impact Score Computation and Queue Ordering
Given weights SLA=0.60, Severity=0.25, Geo=0.15 are configured at the branch level And an address has predictor outputs SLA Risk=0.90, Claim Severity=0.50, Geo Intensity=0.40 When the impact score is calculated Then the score equals 0.60*0.90 + 0.25*0.50 + 0.15*0.40 = 0.725 (±0.001) And the score is stored with 4-decimal precision and displayed rounded to 2 decimals And the processing queue is sorted by stored score descending within 2 seconds of calculation
Transparent Score Breakdown in UI and Logs
Given an address is queued with an impact score When a user opens the score details panel Then the UI shows the total score (0–100 or 0–1 scale as configured), each predictor’s raw value, each weight, and each weighted contribution And the exact formula and values used match the latest configuration at the time of scoring And an immutable log entry exists with timestamp, user-facing explanation, predictor versions, weights, raw predictor values, contributions, final score, request/correlation ID And the log is retained for at least 90 days and is exportable as CSV/JSON
Predictor Outage Safe Fallbacks and Staleness Handling
Given a scoring run in which the SLA Predictor times out after 3 seconds or returns a 5xx When the score is computed Then the remaining available predictors are re-normalized to sum to 1.0 and used And a visible “Degraded scoring” badge is shown on the item with tooltip stating which predictors failed And a log entry records the failure type, duration, and fallback path Given both SLA and Geo predictors are unavailable and a last-known score exists from ≤15 minutes ago When the score is needed Then the last-known score is reused and marked as “Stale ≤15m” in UI and logs Given no predictors are available and no last-known score exists When the score is needed Then a safe default score of 0 is assigned, the item is placed at the end of the queue, and processing is not blocked
Manual Priority Boost with Audit Trail and Controls
Given a user with role Ops Manager or higher selects an address When they apply a manual boost of +15 (range 0 to +20 required) and enter a reason of at least 10 characters Then the boosted score equals min(base score + 15, max 100 or 1.0 as configured) And the queue reorders to reflect the boosted score within 5 seconds And an audit record captures user ID, timestamp, IP, request ID, prior score, boost amount, new score, and reason And only Admins can delete or modify boosts; all changes are versioned in the audit log And removing a boost restores the prior base score and reorders the queue accordingly
SurgeCap Integration: Caps, Throttling, and Approval Queue Prioritization
Given a soft cap of 400 credits/day and a hard cap of 500 credits/day are active for a branch And a bulk upload requests 600 credits When processing begins Then credits are automatically allocated in descending impact score order until 400 are used without approval And items beyond 400 are moved to an approval-required queue sorted by impact score And no item beyond the soft cap is processed without an approval record And upon approval of an additional 50 credits, the next 50 highest-impact items are processed first And processing never exceeds the hard cap; remaining items are throttled and timestamped for the next window
Dynamic Re-scoring and Deterministic Tie-Break Rules
Given weights are changed or new predictor data is received When re-scoring runs Then all affected items are re-scored and the queue is updated within 60 seconds And a re-scoring event is logged with before/after scores and reason (weights change or data refresh) Given two or more items have equal final scores to 4-decimal precision When ordering the queue Then tie-breaks are applied in this order: higher SLA Risk first, then higher Claim Severity, then earlier Claim Created At timestamp, then lexicographically ascending Address Line 1 And ordering is deterministic across pages and exports
Budget Utilization Dashboard & Alerts
"As a finance controller, I want real-time cap utilization and alerts so that I can proactively adjust budgets and avoid service disruptions."
Description

Deliver real-time visibility into cap utilization across organization, region, and branch with progress indicators toward hard/soft caps, forecasts of run-out times, and historical trends by day/week/event. Provide configurable alert thresholds with in-app, email, and SMS notifications for approaching or breached caps. Include drill-down to job lists, exportable reports, and filters by timeframe, branch, and event to support rapid decision-making.
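
Run-out forecasting needs only the trailing spend rate and the remaining headroom to a cap. A sketch using the 120-minute trailing window from the criteria below (the event list is a stand-in for real spend storage):

```python
# Sketch of run-out forecasting from a trailing spend rate.
from datetime import datetime, timedelta, timezone

def forecast_runout(spend_events: list[tuple[datetime, float]],
                    used: float, cap: float,
                    window: timedelta = timedelta(minutes=120)) -> datetime | None:
    """Estimate when `used` reaches `cap` given the trailing spend rate."""
    now = datetime.now(timezone.utc)
    recent = [amt for ts, amt in spend_events if now - ts <= window]
    rate_per_min = sum(recent) / window.total_seconds() * 60
    if rate_per_min <= 0 or used >= cap:
        return None                       # "No projected run-out" (or already hit)
    minutes_left = (cap - used) / rate_per_min
    return now + timedelta(minutes=minutes_left)
```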

Acceptance Criteria
Real-Time Cap Utilization Overview by Org/Region/Branch
Given I am an authenticated user with access to Organization X When I open the Budget Utilization Dashboard Then I see utilization (%) and remaining amount toward both hard and soft caps at organization, region, and branch levels And the values reflect spend events no older than 60 seconds from ingestion time And levels without configured caps display "No cap" with no denominator And soft-cap indicators turn amber at >=80% and red at >=100% And hard-cap indicators turn amber at >=80% and red at >=90% And "today" aggregations use my profile time zone
Run-Out Time Forecasting
Given a non-zero spend rate over the trailing 120 minutes When I view the selected scope and timeframe Then the dashboard displays estimated time remaining and timestamp to hit the soft and hard caps And the forecast refreshes at least every 5 minutes And if no cap is configured for a level, the forecast is hidden for that level And if spend rate is zero, the forecast displays "No projected run-out"
Historical Trends and Timeframe/Event Switching
Given I select Day, Week, or Event granularity When I view the utilization trend Then the chart and totals reflect the selected granularity and scope within 2 seconds And Day and Week views show up to the last 30 intervals; Event view spans the full event duration And historical data is available for at least the last 365 days And switching events updates all widgets and breadcrumbs consistently
Configurable Alerts for Approaching and Breached Caps (In-App, Email, SMS)
Given I have Manage Alerts permission When I create an alert rule for a scope with thresholds (e.g., 70%, 90%, 100%) and channels (in-app, email, SMS) Then the rule is saved and evaluated at least every 1 minute And crossing a threshold upward triggers notifications containing scope, cap type (hard/soft), current %, remaining amount, ETA, and a deep link And in-app alerts appear within 15 seconds, SMS within 2 minutes, and email within 5 minutes of the threshold crossing And the same threshold does not re-notify until utilization drops by at least 5 percentage points and crosses again, or 6 hours elapse And user-configured quiet hours suppress non-breach alerts; optional "Override for hard-cap breaches" sends regardless And failed deliveries are retried up to 3 times with exponential backoff and logged
Drill-Down to Job List from Caps and Alerts
Given I click a cap indicator or alert When the drill-down opens Then I see a job list pre-filtered to the same scope, timeframe, and event And the first page (50 rows) loads in under 2 seconds at the 95th percentile And columns include Job ID, Address, Branch, Event, Estimated Spend, Priority Score, Status, Uploaded Time And I can sort by Estimated Spend and Uploaded Time and filter by Status And a link to the job detail opens in a new tab with the same scope context
Exportable Reports for Utilization and Jobs
Given I have applied filters on the dashboard or job list When I click Export Then a CSV is generated with the same filters, including a header row and ISO 8601 timestamps with time zone And exports up to 100,000 rows complete within 60 seconds; larger exports up to 1,000,000 rows are queued and complete within 15 minutes And the exported data includes a summary row of totals for spend and remaining per scope And the download link is available only to users with permission to view the filtered scope
Dashboard Filters, Saved Views, and Shareable URLs
Given I adjust filters for timeframe (Today, This Week, Custom Range), branch (multi-select), region, organization, and event (multi-select) When I apply filters Then all dashboard widgets and lists update consistently within 2 seconds And the URL encodes the filter state so the view can be shared and restored And my last-used filters persist across sessions for my user And I can reset filters to Organization-wide and Today with a single action And I only see branches and events I am permitted to view
Event-Based Cap Scheduling & Templates
"As an operations planner, I want reusable event-based cap schedules so that we can activate surge protections quickly and return to normal operations smoothly."
Description

Allow creation of cap templates tailored for surge scenarios (e.g., storm response) that define ramp-up/ramp-down limits over a configurable event window. Support manual activation, scheduled start/stop, and event tagging for reporting. Provide branch-level exceptions, blackout periods, and auto-revert to baseline caps after the event. Include reconciliation tools to compare planned vs. actual consumption and suggest template adjustments for future events.
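
Template validation hinges on one invariant: the list of daily caps must line up with the event window, day for day, and every cap must be non-negative. A sketch of that check (function and field names are illustrative):

```python
# Sketch of ramp-template validation per the criteria below.
from datetime import date

def validate_ramp_template(start: date, end: date, daily_caps: list[int]) -> None:
    days = (end - start).days + 1         # inclusive window, e.g. Oct 1–5 = 5 days
    if days <= 0:
        raise ValueError("event window end must be after start")
    if len(daily_caps) != days:
        raise ValueError(f"expected {days} cap entries, got {len(daily_caps)}")
    if any(c is None or c < 0 for c in daily_caps):
        raise ValueError("caps must be non-negative integers")

# "Hurricane Surge": 5-day window with ramp-up/ramp-down caps.
validate_ramp_template(date(2025, 10, 1), date(2025, 10, 5), [50, 100, 75, 60, 40])
```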

Acceptance Criteria
Ramp Schedule Creation and Enforcement
Given an admin user creates an event template named "Hurricane Surge" with a 5-day window (start 2025-10-01 00:00 to 2025-10-05 23:59, org timezone) and daily caps [50,100,75,60,40] When the template is applied to a new event and the event is activated Then on each event day the platform enforces the corresponding cap for auto-approvals and shows consumed/remaining in the dashboard within 1 minute latency And once a day's cap is reached, additional bulk uploads are routed to the Approval Queue with reason "Cap Reached" and an audit log entry is created with timestamp and user/context And saving the template is blocked with a clear validation error if any cap is negative/null or if the number of cap entries does not equal the event window days
Scheduled Activation/Deactivation
Given an event created from a template with scheduled start 2025-10-01 06:00 and stop 2025-10-05 22:00 in America/Chicago When the current time in that timezone reaches the start time Then the event status changes to Active within 60 seconds and caps begin enforcement And when the current time reaches the stop time Then the event status changes to Ended within 60 seconds, cap enforcement stops, and baseline caps resume And activation and deactivation notifications are sent to designated roles (Admin, Ops) and logged
Event Tagging for Reporting
Given the template defines required tags [EventType=Storm, Severity=High] and optional tags [Region, Carrier] When a user creates an event from the template and supplies Region=Gulf and Carrier=ABC Then the event is saved only if all required tags are provided; otherwise a blocking validation error identifies missing tags And all uploads processed during the event inherit the event tags And the Reporting UI and API support filtering and aggregation by these tags, returning results within 5 seconds for 30 days of data
Branch-Level Exceptions
Given an event with a Day 2 global cap of 100 and branch exceptions Branch A=70 and Branch B=50 When uploads are processed on Day 2 Then Branch A auto-approvals stop at 70, Branch B stops at 50, and aggregate auto-approvals stop at 100 (whichever limit is reached first) And if a branch exception exceeds the global cap, the branch is capped at the global cap and the exception is flagged "Clipped" with an alert And the dashboard displays per-branch consumed/remaining and the reason when a limit is hit
Blackout Period Enforcement
Given an event defines blackout windows daily from 01:00–03:00 local time with bypass roles [OrgAdmin] and bypass branches [HQ] When the time enters a blackout window Then bulk uploads from non-bypass roles/branches are prevented from auto-approval and queued with reason "Blackout" And uploads from bypass roles/branches continue unaffected And blackout start/end are shown in the event details and recorded in the audit log
Auto-Revert to Baseline Caps
Given baseline caps exist per branch and an event is active When the event ends (scheduled stop or manual end) Then within 5 minutes all branches revert to baseline caps and the event template is no longer applied And any pending approvals created due to event caps retain their queue status and reason And a confirmation banner appears in the dashboard for 24 hours summarizing the revert
Reconciliation and Template Adjustment Suggestions
Given an event completed with a planned schedule and actual consumption recorded per day and per branch When a user opens the Reconciliation view Then the system displays per-day planned vs. actual, absolute and percentage variance, and MAPE at the event and branch levels; CSV export completes within 10 seconds And if variance exceeds ±20% on two or more consecutive days for any branch, the system generates suggested template adjustments (e.g., increase Day N cap by 20% for overage, decrease for underuse) and shows a preview diff And when the user accepts a suggestion, a new template version is created with a version tag and change log entry; when rejected, the suggestion is dismissed and recorded with reason

Branch Buckets

Divide your main wallet into sub‑wallets for branches, teams, or roles with fixed allocations, rollover rules, and expiry windows. Give local leads visibility and autonomy while preventing one region from consuming the entire balance, improving control across franchises.

Requirements

Bucket Creation & Hierarchy Management
"As an operations admin, I want to create and manage sub-wallets for each branch so that budgets are separated and one region cannot consume all credits."
Description

Enable organization admins to create, edit, and archive sub-wallets (Branch Buckets) under the main wallet with names, branch codes, and optional parent-child relationships. Support assigning default buckets to teams, roles, or users and allow one level of nesting. Display real-time balances and state (active/archived). Enforce uniqueness of names/codes per organization and prevent deletion when balances or pending reservations exist. Integrate with RoofLens organization and user directories for ownership and access mapping. Provide validation, error messaging, and safe rollbacks for all CRUD actions.
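
The one-level-nesting rule is easiest to enforce at write time: a parent must itself be top-level, and a bucket that already has children cannot gain a parent. A sketch of that validation (the in-memory index stands in for the real store):

```python
# Sketch of the one-level-nesting rule for Branch Buckets.
from dataclasses import dataclass

@dataclass
class Bucket:
    bucket_id: str
    org_id: str
    parent_id: str | None = None

def set_parent(child: Bucket, parent: Bucket, index: dict[str, Bucket]) -> None:
    """Validate and apply a parent link; raises on any hierarchy violation."""
    if child.org_id != parent.org_id:
        raise ValueError("cross-organization parenting is not allowed")
    if parent.bucket_id == child.bucket_id:
        raise ValueError("a bucket cannot be its own parent")
    if parent.parent_id is not None:
        raise ValueError("one level of nesting only: parent must be top-level")
    if any(b.parent_id == child.bucket_id for b in index.values()):
        raise ValueError("a bucket with children cannot itself gain a parent")
    child.parent_id = parent.bucket_id

a, b = Bucket("A", "org1"), Bucket("B", "org1")
index = {"A": a, "B": b}
set_parent(b, a, index)       # OK: A is top-level
# set_parent(a, b, index)    # would raise: B now has a parent (one level max)
```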

Acceptance Criteria
Create Branch Bucket with Unique Name and Code
Given I am an Organization Admin in RoofLens And I am on Branch Buckets > Create When I enter a Name and Branch Code that are unique within my organization And I optionally select a Parent bucket (or leave None) And I submit Then a new bucket is created with state Active and balance 0.00 And it appears in the bucket list with its Name, Branch Code, Parent (if any), and state Active And Name and Branch Code uniqueness are enforced case-insensitively per organization And if Name or Branch Code are missing or invalid, inline errors are shown and the Create action is blocked
Manage One-Level Parent-Child Hierarchy
Given two existing Active buckets A and B in the same organization And bucket B currently has no Parent When I set B's Parent to A and save Then B shows Parent = A and A shows its Child count incremented by 1 And the system prevents setting any bucket's Parent to a bucket that already has a Parent (enforcing one level of nesting) And the system prevents circular references and cross-organization parenting with a clear error message
Assign Default Buckets to Teams, Roles, and Users
Given I am assigning defaults in Branch Buckets > Defaults When I set Bucket X as default for Role R, Team T, and User U Then the runtime default bucket resolution follows: User default overrides Team default; Team default overrides Role default And archived buckets cannot be selected as defaults And removing a default immediately updates the computed default for affected users on the next request
Edit Bucket Properties with Validation and Safe Rollback
Given an existing bucket with Name=N1, Branch Code=C1, Parent=P1 When I edit its Name and/or Branch Code and/or Parent and save Then updates persist only if all validations pass, including uniqueness per organization And on any failure, no partial changes are saved and the bucket remains at N1/C1/P1 And a success shows the updated fields in the list immediately; a failure shows specific inline error messages
Archive and Delete with Balance/Reservation Enforcement
Given an Active bucket When I Archive the bucket Then its state changes to Archived And it is excluded from new spend/reservation pickers while remaining visible in the list with state Archived And existing balances and history remain intact Given an Archived bucket with balance 0.00 and no pending reservations When I Delete the bucket Then the bucket is hard-deleted, all associations (default assignments, hierarchy links) are removed, and the list no longer shows it Given an Archived bucket with non-zero balance or pending reservations When I attempt to Delete Then deletion is blocked and I see an explanatory error indicating the conditions that must be met
Display Real-Time Balances and State
Given the bucket list view is open When a transaction or reservation affecting any listed bucket is posted by any user in the organization Then the affected bucket's available balance and state indicators update in the UI within 5 seconds without manual refresh And a Last updated timestamp reflects the latest data pull
Access Control and Directory Integration
Given RoofLens organization and user/role/team directories are available When an Organization Admin accesses Branch Buckets Then they can create, edit, archive, and delete buckets within their own organization only And non-admin users cannot perform CRUD, but users mapped as Local Leads for a branch can view balances for buckets whose Branch Code matches their directory branch And all selection lists for users, roles, and teams are sourced from the RoofLens directories and filter to the current organization
Scheduled Allocations & Top-ups
"As a finance manager, I want to auto-allocate credits to each branch on a schedule so that I don’t have to manually rebalance budgets each week."
Description

Provide recurring allocation schedules from the main wallet to Branch Buckets with fixed amounts, start/end dates, and cadence (weekly, biweekly, monthly). Support proration for mid-cycle joins, immediate one-time top-ups, per-period caps, and auto-top-up when balances drop below a configurable threshold. Allow selection of funding source (main wallet balance or payment method) and define behavior when funding fails (retry policy, alerting, partial allocation). Log all allocation events and expose them in UI and API.
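
Proration is a day-count fraction with careful rounding; the subtle parts are rounding to cents and never rounding a real allocation down to zero. A sketch matching the formula in the criteria below (the function name is illustrative):

```python
# Sketch of first-period proration: round to cents, floor at $0.01
# for tiny non-zero amounts, per the criteria below.
from decimal import Decimal, ROUND_HALF_UP

def prorated_amount(base: Decimal, remaining_days: int, total_days: int) -> Decimal:
    fraction = Decimal(remaining_days) / Decimal(total_days)
    amount = (base * fraction).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    if amount == Decimal("0.00") and base > 0 and remaining_days > 0:
        return Decimal("0.01")            # never round a real allocation to zero
    return amount

# Monthly schedule of $300 created with 12 of 30 days left in the month:
assert prorated_amount(Decimal("300"), 12, 30) == Decimal("120.00")
```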

Acceptance Criteria
Create and Execute Recurring Allocation Schedule
Given an org admin creates a schedule with fixed amount A, cadence (weekly|biweekly|monthly), start date/time, optional end date, and timezone TZ, and selects funding source S When the schedule is saved Then the schedule is persisted with a unique ID and the next run time computed in TZ And when the next run time occurs, amount A is allocated from S to the selected Branch Bucket as a single atomic transaction And balances reflect the allocation immediately with no intermediate negative states And processing is idempotent such that retried executions within the same window do not duplicate allocations (same schedule_id and run_at) And for monthly cadence, if the selected day is missing in a month, the run executes on the last day of that month at the configured time in TZ And for biweekly cadence, runs occur every 14 days anchored to the start date/time in TZ And no allocations occur after the end date 23:59:59 TZ if an end date is set And if the schedule is paused before run time, no allocation occurs and the schedule retains the next run time for when it is resumed
Prorated First Allocation for Mid-Cycle Start
Given a recurring schedule with proration enabled is created after the current period has started with base amount A and cadence (weekly|biweekly|monthly) When the first scheduled allocation executes Then the allocated amount equals round_to_cents(A * remaining_days_in_period / total_days_in_period) And for monthly cadence total_days_in_period is the number of days in the calendar month in TZ; for weekly/biweekly, total_days_in_period is 7 or 14 respectively And if the computed prorated amount is less than $0.01 but greater than $0.00, allocate $0.01 And the allocation event is flagged prorated=true and records base_amount, prorated_amount, and fraction_used And if proration is disabled, the first allocation uses the full amount A
Immediate One-Time Top-Up
Given an authorized user initiates a one-time top-up for Branch Bucket B with amount T, funding source S, and an idempotency key K When the user confirms the top-up Then the system charges or debits S and allocates up to T to B as a single atomic transaction subject to per-period cap And if remaining_period_cap < T, allocate remaining_period_cap and flag reason=capped And the event is recorded with type=one_time_top_up, initiator=user_id, and idempotency_key=K And duplicate requests with the same K within 24 hours do not create additional allocations And the UI reflects the new balance and event within 5 seconds of success And if funding is declined, no funds are moved, status=failed, and the user is shown the failure reason; manual top-ups are not auto-retried
Enforce Per-Period Cap Across All Credits
Given Branch Bucket B has a per-period cap C and a defined period window in timezone TZ When any credit (scheduled, auto_top_up, one_time_top_up) would cause total_credited_this_period to exceed C Then only min(requested_amount, max(C - total_credited_this_period, 0)) is allocated And if the remaining cap is 0, the credit is skipped with status=skipped and reason=capped And the allocation event records cap_applied=true, cap_value=C, and remaining_cap_after And cap windows reset at period boundaries in TZ and are visible via UI/API And schedules advance to the next run without error when an allocation is skipped due to cap
Auto Top-Up on Low Balance Threshold
Given Branch Bucket B has auto top-up enabled with threshold L, top-up amount U, funding source S, and cooldown M minutes When B’s balance drops strictly below L Then an auto top-up is enqueued immediately And at execution time, if an auto top-up for B succeeded within the last M minutes, this run is skipped with reason=cooldown Else allocate min(U, remaining_period_cap) from S And the event is logged with type=auto_top_up, trigger=threshold_cross, and cooldown_applied flag And if funding fails, retries occur per policy (e.g., exponential backoff) and alerts are sent; no more than one successful auto top-up occurs per cooldown window
Funding Source Selection, Failure Handling, and Partial Allocation
Given an allocation (scheduled, auto, or one-time) is configured with funding_source in [main_wallet, payment_method], partial_allocation flag (true|false), and a retry policy (max_attempts, backoff) When the allocation runs Then if funding_source=main_wallet and main_wallet_balance >= requested_amount, allocate the full amount And if main_wallet_balance < requested_amount and partial_allocation=true, allocate available main_wallet_balance, mark status=partial, and record remaining_amount And if main_wallet_balance < requested_amount and partial_allocation=false, do not allocate any amount and mark status=failed_insufficient_funds And if funding_source=payment_method, attempt to charge the payment method; on failure, retry up to max_attempts using the configured backoff and then mark status=failed with failure_code And upon final failure, alerts are delivered to the org owner and designated branch lead within 1 minute, and all attempts are logged with attempt_number and outcome
Event Logging and Exposure in UI and API
Given any allocation attempt or outcome occurs (scheduled, auto top-up, one-time top-up) When the transaction is processed Then an immutable event is written with fields: event_id, bucket_id, schedule_id (nullable), event_type, amount_requested, amount_allocated, currency, pre_balance, post_balance, funding_source, initiator (user|system), status (succeeded|partial|failed|skipped), reason_code (nullable), retry_count, created_at (UTC), timezone And the event appears in the Branch Bucket ledger UI within 10 seconds and can be filtered by date range, event_type, status, bucket, and exported And the API GET /v1/buckets/{bucket_id}/allocation-events returns the above fields, supports cursor pagination, server-side filtering (date range, type, status), and is sorted by created_at desc And pre_balance + amount_allocated - debits equals post_balance for succeeded/partial events, enforced transactionally And event records are read-only after creation and deletions are disallowed except via redaction policy that preserves metadata and reason
Rollover & Expiry Policies
"As a regional lead, I want unused credits to roll over up to a limit and then expire so that we keep budgets fair and encourage timely usage."
Description

Allow bucket-level configuration of unused credit rollover percentages or absolute limits per period, with configurable expiry windows and grace periods. Define what happens to expired credits (return to main wallet or forfeit) and support calendar-anchored or allocation-anchored windows. Handle time zone consistency at the organization level. Surface "expiring soon" counters and tooltips in the UI and expose policy metadata via API. Ensure atomic application of rollover and expiry at period close with idempotent processing and comprehensive error handling.
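
At period close, each bucket's unused credits split into a rolled-forward portion and a remainder that proceeds to expiry handling. A sketch of that split under both policy types (the policy shape is illustrative):

```python
# Sketch of the period-close rollover split for a bucket's unused credits.
def split_at_close(unused: int, rollover_type: str, rollover_value: int) -> tuple[int, int]:
    """Return (rolled, to_expiry) for a bucket's unused credits at close."""
    if rollover_type == "percentage":
        rolled = unused * rollover_value // 100      # e.g. 25% of 400 -> 100
    elif rollover_type == "absolute":
        rolled = min(unused, rollover_value)         # e.g. cap 150 of 400 -> 150
    else:
        raise ValueError(f"unknown rollover_type: {rollover_type}")
    return rolled, unused - rolled

assert split_at_close(400, "percentage", 25) == (100, 300)
assert split_at_close(400, "absolute", 150) == (150, 250)
```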

Acceptance Criteria
Bucket-Level Rollover Configuration (Percent vs Absolute Cap)
Given an org admin sets bucket rollover_type="percentage" and rollover_value=25 and no absolute_cap And the bucket has 400 unused credits at period close When the period closes Then exactly 100 credits are added to the next period's opening balance as a ROLLOVER_APPLIED ledger entry And 300 credits are not rolled and proceed to expiry handling per policy Given an org admin sets bucket rollover_type="absolute" and absolute_cap=150 And the bucket has 400 unused credits at period close When the period closes Then exactly 150 credits are rolled forward as a ROLLOVER_APPLIED ledger entry And 250 credits are not rolled and proceed to expiry handling per policy
Configurable Expiry Window and Grace Period
Given bucket expiry_window=30d after period close and grace_period=5d And 300 non-rolled credits remain at close When time reaches close_timestamp + 30d Then those credits are marked expired and a CREDITS_EXPIRED ledger entry for 300 is recorded And those credits become unavailable at (close_timestamp + 30d + grace_period if grace_period>0 else close_timestamp + 30d) And any attempt to spend expired credits after the applicable timestamp is rejected with error_code="CREDITS_EXPIRED"
Expired Credits Disposition (Return vs Forfeit)
Given policy action_on_expire="return_to_main_wallet" And 200 credits expire from a bucket When expiry is processed Then 200 credits are transferred back to the main wallet in a single atomic transfer with an audit reference to the bucket and period And the bucket balance decreases by 200 and the main wallet balance increases by 200 Given policy action_on_expire="forfeit" And 200 credits expire When expiry is processed Then no transfer to the main wallet occurs and 200 credits are permanently removed from circulation And a FORFEIT ledger entry is recorded with the expired amount and timestamp
Window Anchoring and Org Time Zone
Given org_time_zone="America/Chicago" and period_anchor="calendar_month" When evaluating the August 2025 close Then the period closes at 2025-08-31 23:59:59 in America/Chicago and all rollover/expiry timestamps are computed in that time zone Given org_time_zone="America/Chicago" and period_anchor="allocation_anchored" and last_allocation_at=2025-08-10T09:15:00-05:00 and period_length=30d When evaluating the next close Then the period closes at 2025-09-09T09:15:00-05:00 and all related timers use America/Chicago, including across DST boundaries And UI and API consistently display times in the org time zone with clear offsets
Policy Metadata API Exposure
Given a valid API token with scope=wallet.read When calling GET /api/v1/buckets/{bucketId}/policy Then the response status is 200 and includes fields: rollover_type, rollover_value, absolute_cap, period_anchor, expiry_window, grace_period, action_on_expire, org_time_zone, next_close_at, next_expiry_at And all timestamp fields are RFC 3339 with the org time zone offset And an ETag header is present and changes when any policy field changes Given an invalid bucketId When calling the same endpoint Then the response status is 404 with error_code="BUCKET_NOT_FOUND"
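A plausible response body for the policy endpoint, assembled from the required fields above; the values, the duration formatting, and the note on ETag derivation are assumptions.

```python
# Illustrative GET /api/v1/buckets/{bucketId}/policy response body (values are
# invented; field list follows the criterion, timestamps RFC 3339 with org offset).
policy = {
    "rollover_type": "percentage",
    "rollover_value": 25,
    "absolute_cap": None,
    "period_anchor": "calendar_month",
    "expiry_window": "30d",
    "grace_period": "5d",
    "action_on_expire": "return_to_main_wallet",
    "org_time_zone": "America/Chicago",
    "next_close_at": "2025-08-31T23:59:59-05:00",
    "next_expiry_at": "2025-09-30T23:59:59-05:00",
}
# The ETag header could be derived from a hash of this content so that it
# changes whenever any policy field changes (an assumption, not mandated above).
```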
UI Expiring-Soon Counters and Tooltips
Given expiring_soon_threshold=7d and a bucket with 120 credits expiring within the next 7 days in the org time zone When a branch lead views the bucket dashboard Then an "Expiring soon" counter displays 120 next to the bucket And hovering over or focusing the info icon shows a tooltip with the expiry date/time, threshold basis, and policy summary, accessible via keyboard and screen readers (ARIA role=tooltip, label present) And the counter refreshes within 60 seconds of any change in expiring amounts And credits expiring beyond the threshold are not included in the counter
Atomic and Idempotent Period Close Processing
Given a scheduled period-close job runs with idempotency_key=orgId:bucketId:period_end When the job executes Then rollover and expiry are applied within a single transaction so that either all ledger entries and balance updates commit or none do And rerunning the job with the same idempotency_key results in no additional ledger entries and identical balances And concurrent executions for the same period result in exactly one committed outcome; others no-op with a clear audit record And failures return structured errors with retryable/non_retryable classification and alerts are emitted to monitoring
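A minimal sketch of the idempotency guard, assuming a key of the form orgId:bucketId:period_end as in the criterion; the in-memory dict stands in for a database table with a unique constraint, and the transactional rollover/expiry work is elided.

```python
# Minimal sketch of idempotent period-close processing keyed on
# orgId:bucketId:period_end. The dict stands in for a transactional store
# with a unique constraint on the key.
processed: dict[str, dict] = {}

def close_period(org_id: str, bucket_id: str, period_end: str) -> dict:
    key = f"{org_id}:{bucket_id}:{period_end}"
    if key in processed:                  # rerun: no new ledger entries
        return processed[key]
    # ... apply rollover + expiry inside one transaction here ...
    result = {"status": "closed", "idempotency_key": key}
    processed[key] = result
    return result

first = close_period("org1", "bktA", "2025-08-31")
second = close_period("org1", "bktA", "2025-08-31")
assert first is second  # identical outcome, no duplicate side effects
```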
Spend Controls & Role Permissions
"As a franchise owner, I want branch leads to have autonomy within limits so that spending stays controlled without central approvals for every job."
Description

Implement bucket-level access controls and spend rules, including who can view, spend from, or manage a bucket. Support per-user and per-role daily/monthly spend limits, allowed job types (e.g., measurement, damage map, estimate), and soft versus hard limits with optional approval workflows for overages. Map users to default buckets by branch/team and block cross-bucket spending unless explicitly permitted. Enforce rules at transaction time with clear error states and audit all policy evaluations.

Acceptance Criteria
Bucket Visibility by Role and Explicit Grants
Given a user without 'view' permission on Bucket A When they open the buckets list Then Bucket A is not visible in the list Given a user with 'view' permission on Bucket A When they open the buckets list Then Bucket A appears in the list Given a user with 'view' permission but without 'manage' permission on Bucket A When they open Bucket A settings Then the action is blocked with error code POLICY_MANAGE_DENIED and HTTP 403 Given a user with 'view' permission only on Bucket A When they attempt to initiate a transaction from Bucket A Then the action is blocked with error code POLICY_SPEND_DENIED and HTTP 403
Transaction-Time Spend Enforcement
Given a user with 'spend' permission on Bucket A and available balance >= $50 and job type 'measurement' allowed When they purchase a Measurement for Job #123 for $50 Then the transaction succeeds, $50 is deducted from Bucket A, and a transaction record is created with status 'Settled' Given a user without 'spend' permission on Bucket A When they attempt the same purchase Then the transaction is rejected before payment with error code POLICY_SPEND_DENIED, HTTP 403, and no balance change Given two concurrent spend attempts against Bucket A that would overdraw the balance When processed Then the system enforces atomicity; at most one succeeds and the other fails with error code INSUFFICIENT_FUNDS and no overdraft occurs
Per-User and Per-Role Daily/Monthly Spend Limits
Given a role limit of $500/day and $3000/month and a user-specific limit of $300/day When the user spends $250 in one day Then further spend over $50 that day is blocked with error code POLICY_SPEND_LIMIT_EXCEEDED_HARD and no balance change Given the same user has $2800 already spent in the current month When they attempt a $300 spend Then the transaction is blocked with error code POLICY_SPEND_LIMIT_EXCEEDED_HARD and no balance change Given both role and user limits are configured When enforcing at transaction time Then the most restrictive remaining limit (minimum of user vs role limit) is applied Given the daily/monthly reset boundaries at 00:00:00 UTC When the period rolls over Then counters reset and new spend is allowed within the new period limits
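A sketch of the "most restrictive remaining limit" rule; the helper and its parameters are hypothetical, but the asserts reproduce the $300-user/$500-role day scenario and the $2,800-of-$3,000 month scenario above.

```python
# The spend is checked against min(user remaining, role remaining) across
# both the daily and monthly windows; the tightest constraint wins.
def remaining_allowance(spent_today: float, spent_month: float,
                        user_daily: float, role_daily: float,
                        role_monthly: float) -> float:
    daily_remaining = min(user_daily, role_daily) - spent_today
    monthly_remaining = role_monthly - spent_month
    return max(0.0, min(daily_remaining, monthly_remaining))

# The user limit of $300/day is tighter than the role's $500/day:
assert remaining_allowance(250, 0, user_daily=300, role_daily=500,
                           role_monthly=3000) == 50
# $2,800 already spent this month leaves $200, so a $300 attempt is blocked:
assert remaining_allowance(0, 2800, user_daily=300, role_daily=500,
                           role_monthly=3000) == 200
```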
Allowed Job Types per Bucket and Role
Given Bucket A allows job types [measurement, estimate] and denies [damage_map] When a user with 'spend' permission purchases 'measurement' Then the transaction succeeds Given Bucket A denies 'damage_map' When a user attempts to purchase 'damage_map' Then the transaction is rejected with error code JOB_TYPE_NOT_ALLOWED and HTTP 403 and no balance change Given a role policy denies 'estimate' regardless of bucket When a user in that role attempts 'estimate' from any bucket Then the transaction is rejected with error code JOB_TYPE_NOT_ALLOWED_ROLE and no balance change Given job type permissions are updated to allow the previously denied type When the user retries the same purchase Then the transaction proceeds without requiring re-authentication if the session is still valid
Soft vs Hard Limits and Approval Workflow for Overages
Given Bucket A has a soft daily limit of $500 with approver Group X and a hard daily cap of $1000, and the user has 'spend' permission When they attempt a $600 transaction with $0 prior spend that day Then the transaction enters 'Pending Approval', no funds are deducted, and approvers in Group X are notified within 1 minute Given an approver approves within 24 hours When approval is granted Then the transaction executes, deducts $600, and the audit log links the approval decision to the transaction Given the approver rejects or the 24-hour window expires Then the transaction is canceled with error code APPROVAL_REJECTED or APPROVAL_EXPIRED and no balance change Given a transaction would exceed the hard daily cap of $1000 When attempted Then it is immediately rejected with error code POLICY_SPEND_LIMIT_EXCEEDED_HARD and no approval request is created
Default Bucket Mapping and Cross-Bucket Spending Block
Given a user is mapped to default Bucket B via branch/team When they initiate a purchase without selecting a bucket Then Bucket B is auto-selected for the transaction Given the same user attempts to spend from Bucket C without explicit cross-bucket permission When they submit the transaction Then it is rejected with error code CROSS_BUCKET_BLOCKED and HTTP 403 and no balance change Given an admin grants explicit permission to spend from Bucket C When the user retries the same transaction Then the transaction succeeds and is recorded against Bucket C
Policy Decision Audit Trail and Error States
Given any policy evaluation (view, manage, spend, job-type, limit, approval) When a decision is made Then an audit entry is stored with timestamp, request_id, user_id, role_ids, bucket_id, job_type, decision (allow/deny/pending), rule_ids_evaluated, and rationale text Given a denied action When the user is notified Then the UI/API returns a specific error code from {POLICY_VIEW_DENIED, POLICY_MANAGE_DENIED, POLICY_SPEND_DENIED, POLICY_SPEND_LIMIT_EXCEEDED_HARD, POLICY_SPEND_LIMIT_EXCEEDED_SOFT, APPROVAL_REQUIRED, APPROVAL_REJECTED, APPROVAL_EXPIRED, JOB_TYPE_NOT_ALLOWED, JOB_TYPE_NOT_ALLOWED_ROLE, CROSS_BUCKET_BLOCKED, INSUFFICIENT_FUNDS} and does not reveal sensitive rule configuration values Given audit logs When queried by authorized admins Then entries are append-only, filterable by date range, user, bucket, decision, and exportable as CSV Given a transaction approved via overage When viewing the audit trail Then the policy evaluation record links to the approval record and the resulting transaction record
Smart Transaction Routing & Reservation
"As a project manager, I want jobs to automatically charge the correct branch bucket so that billing stays accurate with minimal clicks."
Description

Automatically determine the charge bucket when a user initiates a RoofLens job using a deterministic order: project/batch override, user default, team default, then branch default. Reserve credits atomically at submission, release on failure, and convert reservations to charges on job start. Support configurable behavior when a bucket lacks funds (block, route to approval, or allow fallback if permitted). Ensure idempotent charging to prevent double-billing and handle partial refunds or job retries cleanly. Provide clear UI prompts for allowed overrides and transparent charge attribution on receipts and invoices.

Acceptance Criteria
Deterministic Bucket Selection Order
Given a project or batch has a charge bucket override configured When a user submits a RoofLens job under that project or batch Then the system selects the override bucket for reservation regardless of user, team, or branch defaults Given no project or batch override exists and the submitting user has a default bucket When the user submits a job Then the system selects the user's default bucket Given no project/batch override and no user default, but the user's team has a default bucket When the user submits a job Then the system selects the team's default bucket Given no project/batch override, no user default, and no team default, but the user's branch has a default bucket When the user submits a job Then the system selects the branch's default bucket Given none of the above defaults exist When the user submits a job Then the system blocks submission with an actionable error indicating no eligible charge bucket and how to resolve
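The selection order reduces to a first-non-null walk over the configured defaults. This sketch (with hypothetical helper names) mirrors the criterion, including the actionable error when nothing is configured.

```python
# Deterministic charge-bucket resolution:
# project/batch override -> user default -> team default -> branch default.
def resolve_bucket(project_override: str | None, user_default: str | None,
                   team_default: str | None, branch_default: str | None) -> str:
    """First non-null wins, in the order mandated by the criterion."""
    for candidate in (project_override, user_default, team_default, branch_default):
        if candidate is not None:
            return candidate
    raise LookupError("No eligible charge bucket; configure a default or an override")

assert resolve_bucket("proj_bkt", "user_bkt", None, None) == "proj_bkt"
assert resolve_bucket(None, None, None, "branch_bkt") == "branch_bkt"
```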
Atomic Reservation and Charge Conversion Lifecycle
Given the selected bucket has sufficient available credits When the user submits a job Then an atomic reservation is created for the required amount and the submission response includes a reservation ID And the bucket's available balance reflects the reserved amount without posting a charge Given a submission fails validation after reservation but before job start (e.g., invalid imagery) When the failure is recorded Then the reservation is released within 60 seconds and the available balance is restored And no charge is posted Given a job transitions from queued to processing When the job starts Then the existing reservation is converted to a finalized charge with a charge ID linked to the reservation ID And the reserved amount is removed from the reserved balance and reflected as a charge
Insufficient Funds Handling Modes
Given the selected bucket's configuration is set to Block on insufficient funds When a user submits a job that exceeds available credits Then the submission is blocked with a clear insufficient funds message And no reservation is created Given the selected bucket's configuration is set to Route to Approval on insufficient funds When a user submits a job that exceeds available credits Then an approval request is created with job details and required credits And the submission status is Awaiting Approval And no reservation or charge is created until approved Given the selected bucket's configuration allows Fallback when insufficient funds When a user submits a job that exceeds the selected bucket's available credits Then the system evaluates the next eligible bucket(s) in deterministic order permitted by policy And if an eligible bucket with sufficient funds is found, a reservation is created in that bucket and the user is notified of the fallback And if no eligible bucket is found, the behavior follows the bucket's configured policy (Block or Route to Approval)
Idempotent Charging and Duplicate Submission Protection
Given an idempotency mechanism is in place (e.g., Idempotency-Key per job submission) When identical submissions are received with the same idempotency key within the deduplication window Then only one reservation and, subsequently, one charge exist And subsequent responses return the original job and reservation identifiers Given transient network errors cause client retries without changing the idempotency key When the backend receives multiple requests Then the system processes exactly one and returns consistent status for the duplicates Given a recoverable error occurs after reservation but before job start and the client retries the submission with the same idempotency key When the retry is processed Then the existing reservation is reused and no additional reservation is created
Partial Refunds and Job Retries
Given a charge has been posted for a job that is later partially canceled or adjusted When a partial refund of X credits is initiated according to policy Then a refund entry linked to the original charge is created for X within 60 seconds And X credits are restored to the charged bucket's available balance And receipts reflect the refund with reference to the original charge Given a job fails after a charge is posted and a retry is initiated When the retry completes successfully Then the total net charge across the failed attempt and the retry does not exceed the price of a single successful job And any overage is refunded as a partial refund linked to the original charge Given multiple partial refunds are applied to a single charge When balances are calculated Then the sum of refunds never exceeds the original charge amount and audit logs show each refund event
Override Prompting and Transparent Charge Attribution
Given a user's role permits bucket override at submission When the user opens the submission form Then the UI displays the auto-selected bucket and an option to choose from authorized buckets And selecting a different bucket requires explicit confirmation and captures an override reason if configured as required Given a bucket override is confirmed When the job is submitted Then the system uses the chosen bucket for reservation and charge for this job only And an audit log records who performed the override, when, and the from/to buckets Given a receipt or invoice is generated for any job When the document is produced Then it includes the charged bucket name and ID, branch/team attribution, reservation ID, charge ID, and indicators for override, fallback, or approval where applicable Given a user lacks override permission When the user opens the submission form Then the override control is not visible and the auto-selected bucket is read-only
Branch Balance & Burn Analytics
"As a regional director, I want visibility into each branch’s balance and burn so that I can proactively adjust allocations and avoid work stoppages."
Description

Deliver dashboards and reports showing balances, allocated versus spent by period, burn rate, forecasted run-out dates, top spenders, and spend by job type per bucket. Provide filters by branch, team, role, and date range; CSV export; and scheduled email summaries. Visualize rollover and expiry impacts and call out anomalies (e.g., sudden spikes). Respect permissions so users only see authorized buckets. Optimize for performance with paginated queries and cached aggregates updated in near real time.

Acceptance Criteria
Branch Leader Reviews Balance & Burn Dashboard
1) Given a user with access to specific Branch Buckets, when they open the Balance & Burn dashboard for a selected date range, then for each authorized bucket they see: current balance, allocated vs spent for the period, burn rate (% and $/day), forecasted run-out date, top 5 spenders, and spend by job type.
2) Given large datasets, when the dashboard loads using cached aggregates, then P95 load time is <= 2.5 seconds and P99 <= 5 seconds.
3) Given new transactions posted to any included bucket, when aggregates refresh, then dashboard metrics reflect the changes within 3 minutes.
4) Given any table or list exceeding 50 rows (e.g., spenders), when viewed, then results are server-side paginated with a default page size of 50 and user-selectable options of 25/50/100.
5) Given user permissions, when viewing the dashboard, then unauthorized buckets and their contributions are excluded from all widgets, totals, and charts.
Analyst Applies Branch/Team/Role/Date Filters
1) Given filter controls for Branch, Team, Role, and Date Range, when the user applies any combination, then all widgets and tables update to reflect the intersection of those filters.
2) Given date selection, when the user chooses absolute dates or relative presets (Last 7 days, MTD, QTD, YTD), then data is computed in the account’s timezone and defaults to MTD on initial load.
3) Given applied filters, when the user navigates via back/forward or shares the URL, then the filter state is preserved via query parameters.
4) Given applied filters that yield no matching data, when results render, then the UI displays a clear “No data for selected filters” message without errors and shows zeroed summaries.
5) Given user permissions, when opening filter dropdowns, then only values (branches/teams/roles) the user is authorized for are available to select.
User Exports Current View to CSV
1) Given a filtered analytics view, when the user clicks Export CSV, then a CSV is generated within 10 seconds containing exactly the rows and metrics visible under the current filters and permissions.
2) Given CSV generation, when the file is created, then it includes headers: bucket_id, bucket_name, branch, team, role, period_start, period_end, opening_balance, allocation, rollover_in, rollover_out, expiry_amount, spent, burn_rate_pct, burn_rate_per_day, forecast_runout_date, top_spenders_count, and one column per job type for spend_by_job_type.
3) Given numeric and date fields, when exported, then numbers are unformatted (plain) and dates use ISO 8601 (YYYY-MM-DD or YYYY-MM-DDThh:mm:ssZ) with the account timezone noted in a metadata row.
4) Given large results, when the export would exceed 50 MB or 1 million rows, then the user is prompted to refine filters or optionally receive an asynchronous email download link.
5) Given permissions, when exporting, then unauthorized buckets are excluded and an audit record is written (user_id, timestamp, filters, record_count).
Ops Schedules Recurring Email Summaries
1) Given scheduling options (daily, weekly, monthly) and selected filters/buckets, when a user schedules a summary, then emails are sent at 8:00 AM in the account timezone containing key KPIs (balance, allocated vs spent, burn rate, forecast run-out) and a link to download the CSV with the same filters.
2) Given a new or updated schedule, when saved, then the first send occurs at the next scheduled time and subsequent changes take effect within 5 minutes.
3) Given access control, when generating the summary and downloadable CSV, then the content is scoped to the scheduler’s permissions and recipients must authenticate to access the linked data; unauthorized buckets are never included.
4) Given email operations, when an email is sent, then delivery status (success/bounce) and job metadata (schedule_id, recipients, filter hash) are logged for audit.
5) Given user preferences, when a recipient clicks unsubscribe/manage notifications, then they can pause or modify their summary subscriptions without affecting others.
Visualizing Rollover and Expiry Impacts
1) Given buckets with rollover and expiry rules, when viewing the analytics charts, then each bucket shows beginning balance, allocation, rollover in/out, expiring-soon amounts, and projected available balance after expiry for the selected period.
2) Given expiring funds, when an amount will expire within the next 14 days relative to the selected end date, then the bucket is highlighted with a warning badge and a tooltip indicating expiry date and amount.
3) Given an expiry event within the date range, when rendering the timeline, then the available balance decreases on the exact expiry date and all dependent metrics update accordingly.
4) Given user interaction, when hovering over a chart element, then a tooltip explains the calculation (inputs, rollover rules, expiry logic) and exact values.
5) Given permissions and filters, when visualizing, then only authorized and in-scope data is rendered and rolled up.
Anomaly Detection Flags Sudden Spend Spikes
1) Given historical daily spend per bucket, when a day’s spend exceeds 200% of the trailing 14-day average and the absolute increase is >= $500, then the system flags an anomaly within 10 minutes of data arrival.
2) Given a flagged anomaly, when displayed on the dashboard, then it shows bucket, date, spike amount, baseline average, and top contributing job types/spenders, with a link to drill down.
3) Given user actions, when an anomaly is acknowledged by an authorized user, then it is muted from email digests for 24 hours but remains visible with an “Acknowledged” status.
4) Given low-volume noise, when daily spend is < $200, then anomalies are not triggered even if percentage thresholds are met.
5) Given data retention, when anomalies are older than 90 days, then they are archived and excluded from default views (available via a historical filter).
Restricted User Sees Only Authorized Buckets
1) Given a user scoped to specific branches/teams/roles, when they access analytics, exports, or scheduled summaries, then only authorized buckets and aggregates are visible and included.
2) Given a direct URL or API request referencing an unauthorized bucket, when attempted, then the system returns 403 Forbidden and no sensitive data is leaked.
3) Given schedule management, when a user attempts to schedule emails for unauthorized buckets, then the action is blocked with an explanatory message; if a user loses access, affected schedules are auto-paused and the owner is notified.
4) Given cached aggregates, when a user’s permissions change, then cache keys invalidate and access changes are enforced within 5 minutes across dashboards, exports, and emails.
5) Given auditing, when access is denied or sensitive analytics are viewed, then an audit event is recorded with user_id, scope, bucket_ids, timestamp, and action.
Audit Trail, Alerts & Webhooks
"As a compliance officer, I want a complete audit trail and real-time alerts so that we can investigate disputes and integrate spending signals into our ERP."
Description

Maintain an immutable audit log for all bucket-related events, including creation, updates, allocations, rollovers, expiries, top-ups, spend transactions, overrides, and approvals, with actor, timestamp, IP, and before/after values. Provide configurable alerts for low balances, allocation failures, upcoming expiries, and breached limits via email and Slack. Expose webhook events with signed payloads and retry/backoff policies for external integrations. Offer a searchable audit UI with filters and export capability.

Acceptance Criteria
Audit Log for Bucket Lifecycle and Transactions
Given a user performs a bucket-related event (creation, update, allocation, rollover, expiry, top-up, spend transaction, override, or approval) When the operation is successfully committed Then exactly one audit record is appended containing: event_type, event_id (UUID), bucket_id, org_id, branch_id, actor_user_id, actor_email, actor_ip, occurred_at (UTC ISO 8601), before_values (JSON), after_values (JSON), correlation_id And the audit record is immutable; any attempt to update or delete it via API or UI returns 403 and is itself logged as a security event And duplicate audit records are not created for the same event_id
Audit UI Search, Filter, Sort, and CSV Export
Given a user with Audit Viewer permission opens the Audit Log UI When they apply filters by date range, event type(s), bucket(s), actor, IP, and free-text search Then the results reflect all filters, return within 2 seconds for up to 10,000 matching rows, and are sortable by occurred_at and event_type And pagination supports page sizes of 25, 50, and 100 with a visible total count When the user exports Then a CSV downloads containing only filtered rows with columns: event_id, occurred_at, event_type, bucket_id, org_id, branch_id, actor_user_id, actor_email, actor_ip, before_values, after_values, correlation_id And exports up to 50,000 rows complete within 10 seconds; larger requests return a clear error message
Low Balance Alerts via Email and Slack
Given a bucket has a low-balance threshold configured (amount or %) When the bucket balance crosses from above to at-or-below the threshold due to a spend or allocation Then an email is sent to configured recipients and a Slack message is posted to the configured channel within 60 seconds And the alert content includes bucket name, branch, current balance, threshold, currency, and a link to the bucket And no additional low-balance alerts are sent until the balance recovers above the threshold and crosses it again
Allocation Failure, Breached Limits, and Upcoming Expiry Alerts
Given an allocation fails or a spend/allocation breaches a configured limit When the failure or breach occurs Then an email and Slack alert are sent within 60 seconds including bucket, attempted amount, limit (or failure reason), and correlation_id And repeated alerts for the same condition on the same bucket are deduplicated for 15 minutes Given a bucket has funds expiring on date D and an expiry alert window (default 7 days) When the current date enters the window Then daily email and Slack reminders are sent until expiry or until funds are rolled over or reduced below the expiring amount
Webhook Delivery, Signing, and Retry/Backoff
Given a webhook subscription exists with endpoint URL, shared secret, and selected bucket-related event types When a matching event occurs Then the system POSTs a JSON payload to the endpoint within 60 seconds with headers: X-RoofLens-Event, X-RoofLens-Delivery, X-RoofLens-Timestamp, X-RoofLens-Signature And X-RoofLens-Signature is an HMAC-SHA256 of "<timestamp>.<payload>" using the shared secret And any HTTP 2xx response marks the attempt successful; non-2xx or network errors trigger retries at 1m, 5m, 15m, 1h, and 6h (max 5 attempts) with jitter And deliveries are at-least-once; duplicate deliveries reuse the same X-RoofLens-Delivery id for idempotency
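A receiver-side sketch of signature verification under the scheme above. Hex encoding of the HMAC digest is an assumption, since the criterion only specifies HMAC-SHA256 over "<timestamp>.<payload>"; the constant-time comparison is standard practice.

```python
# Verifying X-RoofLens-Signature: HMAC-SHA256 over "<timestamp>.<payload>"
# with the shared secret, compared in constant time.
import hashlib
import hmac

def verify_signature(secret: str, timestamp: str, payload: str, signature: str) -> bool:
    message = f"{timestamp}.{payload}".encode()
    expected = hmac.new(secret.encode(), message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

secret = "whsec_example"                    # shared secret from the subscription
ts, body = "1735689600", '{"event":"bucket.low_balance"}'
sig = hmac.new(secret.encode(), f"{ts}.{body}".encode(), hashlib.sha256).hexdigest()
assert verify_signature(secret, ts, body, sig)
```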
Webhook Subscription Configuration and Test Delivery
Given an Organization Admin configures webhooks When creating or updating a subscription Then they can set an https endpoint URL, shared secret, and select event types limited to bucket-related events And the shared secret is write-only (only last 4 characters visible after save) And a "Send test event" action sends a signed ping immediately and displays delivery status and response code
Consistent Timestamps and IP Capture Across Channels
Given any audit log entry, alert, or webhook is produced When the record or payload is inspected Then occurred_at is in UTC ISO 8601 with millisecond precision And actor_ip reflects the originating client IP (respecting X-Forwarded-For behind trusted proxies) And the same event_id and occurred_at values are consistent across the audit log, alert content, and webhook payload for the same event

Predictive Refill

Just‑in‑time auto top‑ups driven by forecasted demand from pipeline volume, scheduled flights, weather alerts, and historical burn. Set min/max refill amounts and preferred funding sources, get proactive alerts, and avoid work stoppages from surprise low balances.

Requirements

Demand Forecast Engine
"As an account owner for a roofing contractor, I want RoofLens to forecast upcoming credit usage based on our jobs and schedules so that top-ups happen just in time and prevent job delays without overfunding."
Description

Compute rolling forecasts of account balance consumption over a configurable look‑ahead window (e.g., 3–14 days) using inputs from active pipeline volume, scheduled drone flights, regional weather alerts, and historical burn rates. Expose a service that outputs projected daily usage, a recommended next refill date and amount, and confidence intervals, with graceful degradation to heuristic models (e.g., moving averages) when inputs are missing. Recalculate forecasts on relevant events (new jobs, schedule changes, severe weather alerts, sudden consumption spikes) and at a minimum daily cadence. Provide per‑organization/workspace scopes, configurable business hours/timezone handling, and a clear API contract for the refill decision module. Ensure low-latency responses (<300 ms p95) and resiliency (retries, circuit breakers) to keep refill decisions timely.
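For orientation, a plausible shape for the service's forecast output; field names are drawn from the contract criteria below, and the values are invented.

```python
# Illustrative forecast response (invented values; field names follow the
# API-contract acceptance criteria below).
forecast_response = {
    "scope": {"orgId": "org_123", "workspaceId": "ws_456"},
    "timezone": "America/Denver",
    "generatedAt": "2025-08-10T08:00:00-06:00",
    "modelVersion": "heuristic-ma-v1",
    "dailyUsage": [
        {"date": "2025-08-11", "expected": 42.0,
         "confidenceInterval": {"low": 30.0, "high": 55.0}},
        # ... one entry per day in the look-ahead window ...
    ],
    "recommendedRefill": {"date": "2025-08-12T09:00:00-06:00", "amount": 250},
    "inputSourcesUsed": ["pipeline", "flights"],
    "missingInputs": ["weather"],
}
```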

Acceptance Criteria
Configurable Rolling Forecast Window
Rule: When lookAheadDays is within [3,14], the service returns exactly lookAheadDays entries in dailyUsage with non‑negative numeric values.
Rule: When lookAheadDays is omitted, the service defaults to 7 and returns 7 entries.
Rule: When lookAheadDays is outside [3,14], the service returns HTTP 400 with errorCode="OUT_OF_RANGE" and does not compute a forecast.
Rule: Each dailyUsage entry includes a confidenceInterval {low, high} where 0 ≤ low ≤ high.
Rule: The response includes recommendedRefill {date, amount} where amount ≥ 0 and date is a valid ISO‑8601 timestamp.
Event-Driven Recalculation
Given a new job is added, a scheduled flight is created/updated/canceled, a severe weather alert is received for the organization’s service region, or a consumption spike event is detected When the event is recorded by the system Then a new forecast is generated for the affected org/workspace and available via API within 5 minutes, with generatedAt > previous generatedAt.
Rule: A daily batch refresh runs at least once every 24 hours per org/workspace even if no events occur.
Rule: Recalculation scope is limited to the impacted org/workspace; other tenants’ forecasts remain unchanged.
Graceful Degradation and Heuristic Fallback
Given one or more inputs (pipeline, scheduled flights, weather alerts) are unavailable or stale beyond their configured freshness thresholds When a forecast is requested Then the service returns a forecast using a heuristic model (e.g., moving average) without error (no 5xx) and marks metadata.fallbackUsed=true.
Rule: metadata.missingInputs lists each unavailable input source by key and reason.
Rule: The response still includes dailyUsage, a confidenceInterval per day, and recommendedRefill.
Rule: The modelVersion field indicates the heuristic model used.
Per-Organization and Workspace Scoping
Rule: Requests require orgId and optionally workspaceId; responses echo scope {orgId, workspaceId}.
Rule: Forecasts are computed using only data from the specified scope; no cross‑tenant or cross‑workspace data is read or returned.
Rule: Changing data in workspace A does not alter forecasts returned for workspace B within the same org.
Rule: If workspaceId is omitted, org‑level defaults are applied; if both exist, workspace overrides org defaults.
Rule: Attempts to access another org’s scope return HTTP 403.
Timezone and Business Hours Handling
Given an organization timezone and business hours configuration When the forecast is generated Then daily buckets align to calendar days in the org’s timezone (including DST transitions), and timestamps are returned in ISO‑8601 with timezone offset.
Rule: recommendedRefill.date falls within the org/workspace business hours; if that is not possible within the look‑ahead window, the earliest next business window is selected.
Rule: If business hours are not configured, platform-level defaults are applied and echoed in metadata.appliedBusinessHours.
Rule: Inputs with timestamps in other timezones are normalized to the org timezone before computation.
Performance and Resiliency SLA
Rule: p95 latency for forecast read requests is < 300 ms measured at the service boundary under nominal load; SLO metrics are exported and queryable.
Rule: Dependencies are wrapped with timeouts, retries (with backoff), and circuit breakers; a dependency outage results in fallback behavior rather than request failure when possible.
Rule: Operations are idempotent for the same (orgId, workspaceId, lookAheadDays, inputsVersion) tuple; duplicate requests do not create duplicate work or inconsistent results.
Rule: Transient failures are logged with correlation IDs; user‑visible error responses include stable error codes.
API Contract for Refill Decision Integration
Rule: An OpenAPI (v3) specification is published for endpoint GET /v1/forecast and includes: dailyUsage[], confidenceInterval per day, recommendedRefill {date, amount}, modelVersion, inputSourcesUsed[], missingInputs[], generatedAt, scope {orgId, workspaceId}, timezone.
Rule: The contract includes explicit error models with codes (e.g., OUT_OF_RANGE, UNAUTHORIZED, RATE_LIMITED) and examples.
Rule: The versioning policy is documented; non‑breaking changes are additive; breaking changes require a new major version path.
Rule: A contract test suite passes against the deployed service and a mock server, proving compatibility with the refill decision module.
Configurable Refill Rules & Thresholds
"As a finance admin, I want to set thresholds and caps for automatic refills so that our balances stay healthy without risking overspend or cash flow issues."
Description

Enable admins to define trigger thresholds and guardrails that govern automatic refills, including minimum balance to trigger, minimum/maximum refill amounts, rounding rules, daily/weekly caps, and blackout windows. Support per-organization defaults with workspace-level overrides and environment-based feature flags. Allow rule evaluation to incorporate forecast confidence (e.g., increase refill amount when high-confidence surge is predicted) and to halt automation when accounts are on hold. Provide REST/GraphQL APIs and admin UI for creation, validation, simulation (“what‑if” preview), and versioned change history. Respect timezones and business hours when calculating triggers and scheduling refills.

Acceptance Criteria
Auto-Refill Trigger, Amount Calculation, Rounding, and Cap Enforcement
Given an organization rule set with min_balance_trigger = 200, min_refill_amount = 150, max_refill_amount = 600, rounding_rule = "nearest_10", daily_cap_amount = 1000, weekly_cap_amount = 3000, and current_balance = 185 When the refill evaluator runs Then exactly one refill is created because current_balance < min_balance_trigger And the computed refill amount is rounded to the nearest $10 according to rounding_rule And the computed refill amount is not less than min_refill_amount and not greater than max_refill_amount And the sum of all refill amounts scheduled for the day does not exceed daily_cap_amount; any excess is deferred to the next allowable window And the sum of all refill amounts scheduled for the week does not exceed weekly_cap_amount; any excess is deferred and logged with reason = "weekly_cap_exceeded" And if current_balance >= min_balance_trigger, no refill is created
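A sketch of the amount computation implied above: apply rounding_rule, clamp to [min_refill_amount, max_refill_amount], and defer when a cap would be exceeded. The helper names and the $166 example are assumptions.

```python
# Illustrative refill-amount computation for the rule set above.
def compute_refill(needed: float, min_refill: float, max_refill: float,
                   nearest: float = 10) -> float:
    rounded = round(needed / nearest) * nearest      # rounding_rule = "nearest_10"
    return min(max(rounded, min_refill), max_refill)  # clamp to [min, max]

def within_caps(amount: float, scheduled_today: float, scheduled_week: float,
                daily_cap: float, weekly_cap: float) -> bool:
    """Any excess over a cap would be deferred to the next allowable window."""
    return (scheduled_today + amount <= daily_cap
            and scheduled_week + amount <= weekly_cap)

amt = compute_refill(needed=166, min_refill=150, max_refill=600)
assert amt == 170  # $166 rounds to the nearest $10
assert within_caps(amt, scheduled_today=0, scheduled_week=0,
                   daily_cap=1000, weekly_cap=3000)
```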
Blackout Windows, Business Hours, and Timezone-Accurate Scheduling
Given blackout_windows = daily 20:00–06:00 and business_hours = Mon–Fri 08:00–18:00 in timezone = America/Denver And a trigger occurs at 21:15 America/Denver on a Tuesday When the system schedules the refill Then the refill is deferred to the next business_hours opening outside blackout_windows, i.e., Wednesday 08:00 America/Denver And timestamps persisted and exposed via API/UI reflect America/Denver local time with the correct UTC offset, including DST transitions And if the next day is a non-business day (e.g., Saturday), the refill is scheduled for the following business day at 08:00 local time
Forecast Confidence Scaling of Refill Amounts
Given a rule with high_confidence_threshold = 0.80 and high_confidence_multiplier = 1.25 And a predicted demand surge with forecast_confidence = 0.86 and a base_refill_amount = 200 When the refill amount is calculated Then the applied amount = round(base_refill_amount * high_confidence_multiplier) respecting the configured rounding_rule and within [min_refill_amount, max_refill_amount] And if forecast_confidence < high_confidence_threshold, the multiplier is not applied (amount = round(base_refill_amount) within bounds) And an audit field captures forecast_confidence, multiplier_applied = true|false, and calculated_pre_round amount for traceability
Automation Halt When Account Is On Hold
Given an organization with account_status = "on_hold" When refill evaluation runs or a scheduled refill becomes due Then no new refills are created and any pending unsent refills are canceled with reason = "account_on_hold" And a notification event is emitted to admins indicating automation halt And API/UI surfaces a banner/status showing automation disabled due to hold
Workspace-Level Overrides Take Precedence Over Organization Defaults
Given organization default rules (e.g., rounding_rule = "nearest_10", min_balance_trigger = 250) And a specific workspace override (e.g., rounding_rule = "up_to_25", min_balance_trigger = 300) When a trigger occurs in that workspace Then the workspace override values are used for evaluation and scheduling And other workspaces without overrides continue to use organization defaults And the evaluation log shows the resolved effective configuration source per field (override or default)
Admin UI and API: Create, Validate, Simulate, and Version History
Given an admin with appropriate permissions When they create a rule via REST POST /v1/refill-rules and GraphQL mutation createRefillRule Then the API returns 201/OK with rule_id and the rule is visible in the Admin UI list within 5 seconds And invalid inputs (e.g., min_refill_amount > max_refill_amount, negative thresholds, overlapping blackout windows) return 422 with field-level error codes and messages And invoking simulation via REST POST /v1/refill-rules/{id}:simulate or GraphQL simulateRefillRule with inputs (balance, pipeline_volume, schedule, timezone) produces a deterministic, side-effect-free preview including: proposed_refills[], applied_rules[], and explanatory rationale And any saved change creates a new immutable version with version_id, author, timestamp, diff summary; rollback creates a new version referencing the prior version_id And all operations are recorded in an audit log retrievable by rule_id and version_id
Environment-Based Feature Flags Gate Rule Evaluation and Surfaces via API/UI
Given the feature flag predictive_refill_rules is enabled in "staging" and disabled in "production" When evaluation runs in staging Then rules are evaluated and refills may be scheduled according to configuration And when evaluation runs in production with the flag disabled, no automatic refills are evaluated or scheduled, and a skipped_with_reason = "feature_flag_disabled" is logged And REST/GraphQL endpoints expose the effective feature flag state per environment, and the Admin UI displays the current environment and flag state
Funding Source Management & Payment Orchestration
"As a finance admin, I want to add preferred funding sources with safe fallbacks so that refills succeed automatically even if the primary payment method fails."
Description

Allow multiple payment methods per account (primary and fallbacks) with prioritized routing and automated retries. Store cards and ACH mandates via a PCI‑compliant vault using tokenization, and support pre‑authorization checks and micro‑deposits for ACH where required. Implement idempotent payment execution, exponential backoff on failures, configurable retry windows, and automatic failover to backup methods. Expose webhooks and events for payment outcomes, update balance atomically upon confirmation, and reconcile against provider ledgers daily. Support regional compliance (e.g., SCA for EU cards) and surface clear error codes and recovery steps.

Acceptance Criteria
Primary-Fallback Routing for Predictive Refill Charge
Given an account with multiple funding sources prioritized and active And min/max refill amounts are configured When a Predictive Refill triggers a top-up of an amount within limits Then the highest-priority eligible source is selected And the tokenized instrument from the PCI-compliant vault is used And the selection, priority order, and reason are recorded in an immutable audit log And a payment_intent id with correlation id is returned
Idempotent Execution Prevents Duplicate Charges
Given a payment request contains an idempotency key K scoped to (account, amount, currency) When the same request is retried within the idempotency window (default 24h) Then only one provider charge is created And all retries return the same payment_intent id and final status And if any parameter differs with the same K, a 409 Conflict is returned and no new charge is created And idempotency records expire after the configured window and are purged
Configurable Retry with Exponential Backoff
Given the provider returns a transient error (network timeout, 5xx, rate limit) When executing a payment attempt Then the system retries up to R=3 times with exponential backoff (base=2, full jitter) within a window W=15 minutes And each attempt is logged with attempt number, timestamp, provider error code, and scheduled next retry time And if retries are exhausted, the client receives a standardized error code and human-readable recovery steps And retry counts and windows are configurable per environment
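The retry cadence described above, sketched with full jitter; R, the base of 2, and the 15-minute window come from the criterion, while the uniform-jitter formula is one common interpretation.

```python
# Retry timing: up to R=3 retries with exponential backoff (base=2) and
# full jitter, bounded by a W=15-minute window.
import random

def backoff_delays(retries: int = 3, base: float = 2.0,
                   window_seconds: float = 15 * 60) -> list[float]:
    delays, elapsed = [], 0.0
    for attempt in range(1, retries + 1):
        delay = random.uniform(0, base ** attempt)   # full jitter
        if elapsed + delay > window_seconds:
            break                                    # window exhausted
        delays.append(delay)
        elapsed += delay
    return delays

print(backoff_delays())  # e.g. [1.3, 2.9, 6.2] — random, bounded by the window
```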
Automatic Failover After Retry Exhaustion
Given failover is enabled and backup funding sources exist And the primary source hard-declines or exceeds the retry window When processing a top-up Then the system attempts the next-priority eligible source automatically And prevents circular retries between sources And records failover reason, attempt outcomes, and final disposition in the audit log And the user is notified via alert and webhook of failover attempts and results
ACH Verification and Card SCA/Pre-Auth Compliance
Given a new ACH bank account is added When micro-deposits are initiated Then two deposits are posted and must be verified within T=7 days before use And after 3 failed verification attempts the ACH method is locked and returns a specific error code Given an EU card is used When charging or setting up for future use Then SCA (e.g., 3DS) is invoked as required and status requires_action is surfaced when pending And a $0/$1 card pre-authorization check is performed; failures block use with standardized error codes And all cards/ACH mandates are stored via PCI-compliant tokenization; no PAN or bank account numbers persist in app databases
Payment Outcome Webhooks and Event Payloads
Given any payment attempt changes state When an outcome occurs (succeeded, failed, requires_action, refunded) Then an event is emitted and a webhook is delivered within S=5s with a signed payload And the payload includes account_id, intent_id, source_id, amount, currency, status, error_code, error_message, correlation_id, idempotency_key, and timestamps And webhook deliveries are idempotent; duplicates can be detected via event id and signature And missed deliveries can be replayed via an authenticated API endpoint
Atomic Balance Update and Daily Ledger Reconciliation
Given a provider confirmation of success (capture/settlement per method) When updating the account balance and internal transaction ledger Then both update atomically in a single transaction so no intermediate reads see a partial state And system-wide reads reflect the new balance within X=500ms And a daily reconciliation job at 02:00 UTC compares internal ledger to provider statements and flags discrepancies >= $0.01 And flagged discrepancies create tasks and alerts, freezing auto top-ups for affected accounts until resolved
Proactive Alerts & Approval Workflow
"As an operations lead, I want timely alerts and approvals for large or exceptional refills so that I can prevent work stoppages while maintaining spend control."
Description

Deliver proactive notifications when the forecasted balance will breach thresholds within a configurable window; when refills are scheduled, executed, or fail; and when manual intervention is required. Support in‑app, email, and SMS channels with user-level preferences and quiet hours. Provide approval workflows for refills above a configurable amount or during blackout windows, with role‑based access control, approver routing, SLAs, and escalation paths. Include a real‑time activity feed, digest summaries, and localized message templates. All approvals and actions must be auditable and tamper‑evident.

Acceptance Criteria
Forecast Breach Alert Window Notification
Given an account has a configured minimum balance threshold and a forecast window And the system predicts the balance will breach the threshold within that window When the prediction first meets the breach condition Then a proactive alert is generated within 5 minutes And the alert includes projected breach timestamp, predicted shortfall amount, recommended refill amount, account identifier, and a link to initiate approval And repeated alerts for the same breach are suppressed for 2 hours unless the predicted shortfall increases by at least 10% or the breach time moves sooner by at least 30 minutes
Refill Lifecycle Notifications and Activity Feed
Given a refill is scheduled, executed, fails, or requires manual intervention When the corresponding event occurs Then an in-app notification is created immediately and email/SMS are sent per user channel preferences within 2 minutes And a real-time activity feed entry is written within 5 seconds with event type, amount, funding source (if applicable), confirmation/error code, and actor And email delivery retries up to 3 times and SMS up to 2 times with exponential backoff; final failures are recorded And duplicate notifications for the same event ID are not sent across any channel
User Preferences, Quiet Hours, and Channel Fallback
Given a user configures per-event channel preferences and quiet hours with a timezone When an event triggers outside quiet hours Then notifications are sent on all opted-in channels When an event triggers during quiet hours Then notifications are deferred until quiet hours end unless the event is marked critical (failures or manual intervention), which bypasses quiet hours And if delivery on the primary channel fails permanently, the system falls back to the next opted-in channel within 2 minutes and records the fallback And opt-out requests for SMS/email are honored within 1 minute and no further messages are sent on that channel And preference changes take effect within 2 minutes and persist across sessions
High-Value and Blackout Refill Approval Workflow
Given a refill exceeds the configured approval amount or falls within a configured blackout window When the refill is initiated (automatic or manual) Then an approval request is created and execution is blocked until approved And the approval request displays amount, currency, funding source, scheduled window, initiator, and justification And approvers can Approve or Deny; Deny requires a reason; all actions capture actor and timestamp And expired approval requests auto-cancel and notify initiator and approvers
RBAC, Approver Routing, SLAs, and Escalations
Given RBAC roles are configured (Initiator, Approver, Admin) When an approval request is created Then only Approver/Admin roles may approve, and the initiator cannot approve their own request And routing rules assign approvers based on organization, region, and amount tiers; sequential multi-step approvals up to 3 steps are supported And an SLA timer starts at request creation; reminders are sent at 50% and 90% of SLA duration And on SLA breach, the request escalates to the next approver or an Admin; escalation is recorded and notifications are sent immediately, bypassing quiet hours And approvers may set delegates with effective date ranges; during delegation, approvals route to the delegate
Localized Message Templates and Content Validation
Given message templates exist for supported locales with required placeholders {org_name},{account_id},{event_type},{amount},{currency},{timestamp_local},{funding_source},{approval_link},{reason} When a notification is generated Then the locale resolves by user -> org default -> en-US fallback and renders localized text, date/time, and currency And if a required placeholder is missing in a locale template, sending is blocked, a validation error is logged, and only a complete fallback template is used And admins can preview any template with sample data before enabling it
Audit Trail Integrity and Tamper-Evidence
Given an append-only audit log with hash chaining is enabled When any alert is sent, preference is changed, approval action occurs, routing decision happens, SLA reminder or escalation is sent, or a refill state changes Then an audit entry is appended containing entity ID, actor, UTC timestamp with millisecond precision, event type, before/after snapshot, and a cryptographic hash linking to the previous entry And daily integrity checks validate the hash chain and alert on anomalies And a read-only API allows filtered export by date range, entity, and request ID; access is role-restricted And audit records are retained for at least 7 years
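A toy illustration of hash chaining for tamper evidence, assuming SHA-256 over canonical JSON; the real log would live in append-only storage, but the chain-verification idea the daily integrity check relies on is the same.

```python
# Each entry hashes its own content plus the previous entry's hash, so any
# mutation anywhere breaks the chain on the next integrity check.
import hashlib
import json

def append_entry(log: list[dict], entry: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(entry, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"entry": entry, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for rec in log:
        body = json.dumps(rec["entry"], sort_keys=True)
        if rec["prev_hash"] != prev_hash:
            return False
        if rec["hash"] != hashlib.sha256((prev_hash + body).encode()).hexdigest():
            return False
        prev_hash = rec["hash"]
    return True

log: list[dict] = []
append_entry(log, {"event": "approval_granted", "actor": "u_1"})
append_entry(log, {"event": "refill_executed", "amount": 600})
assert verify_chain(log)
log[0]["entry"]["amount"] = 1   # any edit invalidates the chain
assert not verify_chain(log)
```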
Data Connectors for Pipeline, Flights, and Weather
"As a technical administrator, I want reliable data feeds from our pipeline, flight schedules, and weather alerts so that forecasts reflect real demand without manual data entry."
Description

Integrate with internal job pipeline, flight scheduling systems, and third‑party weather services to supply the forecast engine with accurate, timely inputs. Provide connectors for common CRMs and job systems used by roofing contractors, flight planners, and at least one severe weather alert API, with clear mapping of fields to demand signals. Define polling and webhook ingestion strategies, schema validation, deduplication, and backfill processes. Implement retry/backoff, dead‑letter queues, and connector health dashboards with alerting on data freshness SLAs. Normalize data into a unified model that supports regional segmentation and seasonality.

Acceptance Criteria
CRM/Job Pipeline Connector Field Mapping and Ingestion
- Supports webhook ingestion where available and polling fallback at ≤5-minute intervals.
- Initial backfill ingests the last 180 days and completes within 60 minutes for datasets ≤50,000 records; emits a BackfillExceededSLA alert if breached.
- Required mappings present: external_id, stage/status, expected_start_date, created_at, updated_at, location (lat/long or address), roof_area_sqft (nullable), deal_value (nullable), region.
- 100% of ingested records pass schema validation or are routed to the DLQ with a reason code; overall validation pass rate ≥99%.
- Idempotent dedup: records with the same external_id and an updated_at ≤ the previously processed value are not re-ingested; duplicate write rate ≤0.1%.
- All timestamps normalized to UTC with the original timezone captured.
- Incremental freshness P95 lag ≤15 minutes during business hours.
- Each successful batch writes audit fields: source_system, ingestion_timestamp, checksum.
Flight Scheduler Connector for Upcoming Missions
- Ingests scheduled flights up to 14 days ahead; creates/updates/cancellations are reflected within ≤2 minutes via webhook or within the ≤5-minute polling interval.
- Required fields mapped: flight_id, job_id (nullable), scheduled_start, scheduled_end (UTC), site_location (lat/long), pilot_id (nullable), drone_type (nullable), status.
- Reschedules update the existing record (no duplicates); the latest times are visible and a change_log captures the prior schedule.
- Cancellations are marked status=cancelled within ≤2 minutes; the downstream demand signal updates within ≤5 minutes.
- Backfill of the last 30 days completes within ≤20 minutes for ≤10,000 flights.
- Idempotency and dedup behavior matches the pipeline connector; duplicate write rate ≤0.1%.
Severe Weather Alert API Integration and Forecast Normalization
- Integrates with at least one severe weather alert API covering all configured customer regions.
- New alerts are available to the forecast engine within ≤5 minutes of provider publish time at P95.
- Unified fields mapped: alert_id, type, severity, start_at, end_at, polygon/geojson, region; null-safe validation enforced.
- De-duplicates reissued alerts by alert_id + effective window; updates in place when severity escalates.
- Handles rate limits with exponential backoff (max 5 retries, capped at 15 minutes) and moves the request to the DLQ after max attempts, with the provider response logged.
- Expired alerts are archived and excluded from active demand calculations within ≤10 minutes of end_at.
Unified Demand Model with Regional Segmentation and Seasonality
- Normalizes sources into jobs_pipeline, flights, weather_alerts, and regional_demand_signals unified views.
- Region is computed deterministically from location per customer boundary rules; 100% of records have a region or are routed to the DLQ with a RegionUnresolvable reason.
- Seasonality features are computed nightly per region: a 3-year monthly seasonality index and 12-week rolling demand; coverage ≥95% of regions with sufficient data.
- The forecast engine queries unified views with P95 read latency <200 ms under 100 concurrent requests on a staging-sized dataset.
- Schema versioning is enforced; breaking changes deploy with a ≥2-week backward compatibility window.
- Lineage fields present on all records: source_system, source_version, external_id, first_seen_at, last_updated_at, processing_run_id.
Robust Ingestion Reliability: Retry, Backoff, DLQ, and Idempotency
- Retries transient failures (HTTP 5xx/timeouts) with exponential backoff (2^n seconds + jitter) up to 5 attempts per message.
- Non-retriable errors (HTTP 4xx excluding 429) go to the DLQ immediately with a reason; 429 responses honor Retry-After and are retried.
- DLQ retention ≥14 days; the reprocess operation replays selected messages idempotently with ≥99% success.
- An automatic circuit breaker engages when the error rate exceeds 5% over 5 minutes; ingestion pauses, an alert is sent, and a manual resume is required.
- Idempotency key = sha256(source_system + external_id + updated_at), as sketched below; duplicate write rate ≤0.1%.
- Exposes metrics: success_count, error_count, retry_count, max_lag_seconds, dlq_depth, with P50/P95 lag on dashboards.
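The idempotency key from the list above, spelled out; the "|" delimiter is an added assumption to avoid ambiguous concatenations.

```python
# sha256 over source_system + external_id + updated_at; the same record
# version always produces the same key, so replays dedupe cleanly.
import hashlib

def idempotency_key(source_system: str, external_id: str, updated_at: str) -> str:
    raw = f"{source_system}|{external_id}|{updated_at}"
    return hashlib.sha256(raw.encode()).hexdigest()

k1 = idempotency_key("crm", "deal_42", "2025-08-10T09:15:00Z")
k2 = idempotency_key("crm", "deal_42", "2025-08-10T09:15:00Z")
assert k1 == k2  # unchanged record dedupes; a newer updated_at yields a new key
```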
Connector Health Dashboard and Data Freshness Alerts
- Dashboard shows per connector: status, last_success_at, current_lag_seconds, throughput (records/min), error_rate, retry_rate, dlq_depth, and webhook delivery status.
- Enforces freshness SLAs: flights P95 lag ≤10 minutes; pipeline P95 lag ≤15 minutes; weather P95 lag ≤5 minutes during 7:00–19:00 local-time windows.
- Breach alerts are sent to Slack and email within ≤2 minutes when an SLA is exceeded for >10 minutes; a PagerDuty P2 fires for a weather SLA breach >15 minutes.
- Hourly synthetic monitors validate each connector end-to-end; failures create incidents with run logs attached.
- Access is restricted to Ops/Admin roles; all dashboard views and changes are audited.
Historical Backfill and Reconciliation
- Configurable backfill windows: pipeline up to 24 months; weather/flights up to 6 months; estimated record counts are shown pre-run.
- The backfill progress UI exposes percent complete and ETA; operators can pause/resume without data loss.
- A post-backfill reconciliation report compares source vs ingested counts by day and region; absolute variance ≤0.5%, or discrepancies are listed.
- Backfill events are flagged to suppress refill actions; the forecast engine operates in a backfill-safe mode to prevent false positives.
- Re-running a backfill over overlapping windows does not create duplicates; only a newer updated_at overwrites existing records.
Refill Scheduling & Safeguards
"As an account owner, I want refills scheduled intelligently with guardrails so that we avoid both work stoppages and unnecessary frequent charges."
Description

Orchestrate just‑in‑time refill execution based on forecast outputs and configured rules, choosing the optimal time to top up to avoid low balances while minimizing cash tied up. Enforce safeguards such as rate limits (e.g., one successful refill per N hours), daily/weekly caps, and anti‑thrash logic to prevent frequent small refills. Support timezone-aware scheduling, blackout periods, and idempotent job handling with concurrency control. Provide clear state transitions (scheduled, pending payment, confirmed, failed, canceled) and automatic re‑scheduling on transient failures.

Acceptance Criteria
Optimal just‑in‑time refill scheduling using forecast and min/max rules
- Given account minBalance M, maxBalance U, minRefill Amin, maxRefill Amax, processingLatency P, lookaheadHorizon H, and a forecasted balance curve; When the forecast shows balance will drop below M at time D within H; Then the system schedules one refill at time S such that S <= D - P and as late as possible while keeping predicted balance >= M until confirmation.
- Given the same inputs; When selecting the refill amount; Then the system computes an amount A in [Amin, Amax] so that the post‑confirmation balance <= U and the predicted balance remains >= M through H; And A is the minimal amount satisfying these constraints (see the sketch after this list).
- Given the forecast never crosses below M within H; When the scheduler runs; Then no refill is scheduled.
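A compact sketch of the scheduling rule above, using an hourly forecast array; the names (plan_refill, forecast) and the lowest-point shortfall heuristic are assumptions layered on the criterion's constraints.

```python
# Given a forecasted balance curve, schedule at the latest S <= D - P that
# precedes the first dip below M, and pick the minimal amount A in [Amin, Amax]
# that lifts the curve back to M without overshooting U.
def plan_refill(forecast: list[float], M: float, U: float,
                Amin: float, Amax: float, P: int) -> tuple[int, float] | None:
    """forecast[t] = predicted balance at hour t; P = processing latency (hours).
    Returns (schedule_hour, amount), or None if no breach is forecast."""
    breach = next((t for t, b in enumerate(forecast) if b < M), None)
    if breach is None:
        return None                        # balance never drops below M
    schedule_at = max(0, breach - P)       # as late as possible, pre-breach
    shortfall = M - min(forecast)          # lift the lowest point back to M
    amount = min(max(shortfall, Amin), Amax)
    assert forecast[schedule_at] + amount <= U, "would overshoot max balance"
    return schedule_at, amount

print(plan_refill([300, 250, 180, 120], M=200, U=1000, Amin=100, Amax=600, P=1))
# -> (1, 100): refill $100 one hour before the forecast first dips below $200
```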
Enforce rate limit of one successful refill per N hours
- Given rateLimitHours = N and a successful refill confirmed at t0; When any trigger attempts a refill at t1 where t1 < t0 + N; Then the attempt is not executed and is rescheduled no earlier than t0 + N with reason = "rate_limit".
- Given a refill is rescheduled due to rate limit; When the cooldown elapses; Then the rescheduled refill executes only if safeguards (caps, blackout, anti‑thrash) still permit; Otherwise it is re‑evaluated and either further rescheduled or canceled with the appropriate reason logged.
Enforce daily and weekly refill amount caps
- Given dailyCap D and weeklyCap W and cumulativeConfirmedDay Cd and cumulativeConfirmedWeek Cw; When scheduling a refill of amount A; Then the system ensures Cd + A <= D and Cw + A <= W before execution.
- Given Cd + A would exceed D (or Cw + A would exceed W) and there exists an adjusted amount A' >= Amin that keeps within caps; Then the system adjusts to A' and proceeds; Otherwise the refill is not executed and is rescheduled to the next period boundary with reason = "cap_exceeded".
- Given the period boundary is reached; When reevaluating; Then caps reset for that period and the refill may proceed if other safeguards allow.
Anti‑thrash aggregation to prevent frequent small refills
- Given antiThrashMinInterval Tthr and aggregationWindow Wagg and Amin; When multiple small shortfalls are predicted within Wagg; Then the system aggregates them into a single refill scheduled no more frequently than once per Tthr.
- Given the computed aggregated amount < Amin and next opportunity exists within Wagg; When deciding execution; Then the refill is delayed until amount >= Amin or Wagg expires, whichever comes first; If Wagg expires and amount < Amin, execute at Amin only if it does not violate caps and rate limits; otherwise reschedule.
- Given a refill was confirmed at t0; When any new trigger occurs before t0 + Tthr; Then it does not produce an additional refill and is merged into the next eligible schedule.
Timezone‑aware scheduling with blackout periods
- Given accountTimezone TZ and configured blackout windows B (expressed in TZ); When computing schedule time S; Then all calculations use TZ and S must not fall within any blackout window in B.
- Given the optimal time S* falls inside a blackout window; When scheduling; Then the system shifts S to the nearest allowed boundary outside the blackout while still satisfying minBalance M, rate limits, and caps; If no valid time exists before predicted breach, choose the earliest post‑blackout time and record reason = "blackout_shift".
- Given daylight saving transitions in TZ; When computing S; Then the system correctly handles ambiguous/missing local times and preserves intended ordering without duplicate or skipped executions.
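An illustrative sketch of blackout-aware shifting in the account timezone. The blackout window shown is an example, only the forward shift is implemented (the "nearest boundary" choice and DST ambiguity handling from the criteria are elided), and the minute-by-minute walk is for clarity, not efficiency.

from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

BLACKOUTS = [(time(0, 0), time(6, 0))]       # example: no refills 00:00-06:00 local

def in_blackout(local_dt: datetime) -> bool:
    return any(start <= local_dt.time() < end for start, end in BLACKOUTS)

def shift_out_of_blackout(candidate_utc: datetime, tz: str) -> datetime:
    """Walk a candidate schedule time forward to the first instant outside a blackout."""
    local = candidate_utc.astimezone(ZoneInfo(tz))
    while in_blackout(local):
        local += timedelta(minutes=1)
    return local.astimezone(ZoneInfo("UTC"))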
Idempotent job handling and concurrency control
- Given an idempotencyKey K derived from the refill intent; When duplicate schedule requests with the same K are received; Then only one job is created or retained and the others return a reference to the existing job without creating additional payment attempts.
- Given multiple workers attempt to execute the same job concurrently; When acquiring the execution lock; Then only one worker proceeds to payment and others no‑op, leaving the job in a consistent state.
- Given a retry of the same refill intent; When calling the payment provider; Then the same payment idempotency key is used to prevent double charges; And the system guarantees at‑least‑once scheduling with exactly‑once payment execution.
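A minimal in-memory sketch of these guarantees; the dicts and threading.Lock stand in for a database unique constraint and an advisory/row lock, and charge is a placeholder for the payment-provider call.

import threading
import uuid

jobs = {}            # idempotency key -> job record (a DB unique index in practice)
locks = {}           # job id -> execution lock (an advisory/row lock in practice)
registry_lock = threading.Lock()

def schedule_refill(idempotency_key: str, intent: dict) -> dict:
    with registry_lock:
        if idempotency_key in jobs:              # duplicate request: return the existing job
            return jobs[idempotency_key]
        job = {"id": str(uuid.uuid4()), "intent": intent, "state": "scheduled",
               "payment_key": idempotency_key}   # same payment key reused on every retry
        jobs[idempotency_key] = job
        locks[job["id"]] = threading.Lock()
        return job

def execute(job: dict, charge) -> None:
    if not locks[job["id"]].acquire(blocking=False):
        return                                   # another worker owns the job: no-op
    try:
        job["state"] = "pending_payment"
        charge(job["intent"], idempotency_key=job["payment_key"])  # provider dedupes by key
        job["state"] = "confirmed"
    finally:
        locks[job["id"]].release()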
State transitions and automatic re‑scheduling on transient failures
- Given allowed states {scheduled, pending_payment, confirmed, failed, canceled}; When a job starts; Then state transitions scheduled -> pending_payment; On provider success it transitions pending_payment -> confirmed with recorded confirmation details.
- Given a transient error (e.g., timeout, 5xx, network) during pending_payment; When detected; Then state becomes failed with failureType = transient and the job is automatically rescheduled with exponential backoff up to maxAttempts M; Each retry re‑evaluates safeguards (rate limit, caps, blackout, anti‑thrash) before execution.
- Given a non‑recoverable error (e.g., hard decline, invalid funding); When detected; Then state becomes failed with failureType = permanent and no auto‑reschedule occurs.
- Given a user/system cancellation; When applied before execution; Then state transitions scheduled -> canceled; If during pending_payment and supported by provider, cancellation is attempted and final state is canceled or failed accordingly.
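The transitions above can be pinned down in a small table; this sketch rejects anything the criteria do not allow (the failed -> scheduled edge models the automatic reschedule of transient failures).

ALLOWED = {
    "scheduled":       {"pending_payment", "canceled"},
    "pending_payment": {"confirmed", "failed", "canceled"},
    "failed":          {"scheduled"},            # only transient failures re-enter the queue
    "confirmed":       set(),
    "canceled":        set(),
}

def transition(job: dict, new_state: str) -> None:
    if new_state not in ALLOWED[job["state"]]:
        raise ValueError(f"illegal transition {job['state']} -> {new_state}")
    job["state"] = new_state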
Forecast Rationale, Audit Logs & Reporting
"As a finance manager, I want transparent logs and reports about forecasts and refills so that I can audit spend, reconcile payments, and tune our automation rules."
Description

Record and expose the rationale behind each refill decision, including the forecast snapshot, input signals, rules evaluated, and payment outcomes. Maintain an immutable audit log of configuration changes, approvals, notifications, and financial transactions with user and timestamp metadata. Provide dashboards and exports (CSV/API) for refill history, forecast accuracy, avoided work stoppages, and spend distribution by funding source. Support filtering by date range, workspace, and project tags, enabling finance and ops to reconcile and optimize settings.

Acceptance Criteria
Rationale Snapshot Visible for Each Refill Decision
Given an auto or manual refill decision is executed or simulated When a user opens the refill detail view or calls GET /refills/{id} Then the response includes a read-only rationale snapshot containing: forecast_at (ISO 8601 UTC), forecast_horizon, forecasted_demand_units, input_signals (pipeline_volume, scheduled_flights, weather_alerts, historical_burn) with values and as_of timestamps, rules_evaluated with rule_id, inputs, outcome, decision with refill_amount, min_amount, max_amount, funding_source_id, policy_version, and payment_outcome with authorization_id and status And the snapshot remains immutable after creation (subsequent configuration changes do not alter prior snapshots) And retrieval completes within 1.5s at P95 for a workspace with up to 10k refills
Append-Only Audit Log for Key Events
Given any configuration change, approval action, notification, or financial transaction occurs When the system writes an audit record Then the record includes event_type, entity_id, actor_user_id or system, workspace_id, project_tags[], timestamp (ISO 8601 UTC), before and after (where applicable), and correlation_id And audit records are append-only; attempts to update or delete return HTTP 403 and are themselves logged as audit_tamper_attempt events And records are chained via hash(prev_hash, record_id, payload) to enable tamper detection And audit records are queryable by date range, workspace, and project tags with P95 query latency <= 2s for 1M records And retention is at least 7 years
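The chaining rule hash(prev_hash, record_id, payload) transcribes directly into code; a sketch, with JSON canonicalization via sorted keys as an assumption since the exact serialization is not specified.

import hashlib
import json

def chain_hash(prev_hash: str, record_id: str, payload: dict) -> str:
    body = prev_hash + record_id + json.dumps(payload, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def verify_chain(records) -> bool:
    """records: audit entries in append order, each with record_id, payload, hash."""
    prev = ""
    for r in records:
        if r["hash"] != chain_hash(prev, r["record_id"], r["payload"]):
            return False          # broken link: possible tampering at or after this entry
        prev = r["hash"]
    return True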
Refill History Dashboard with Filtering and Export
Given a Finance or Ops user opens Refill History When they apply filters for date range, one or more workspaces, and project tags Then the table shows only matching records with columns: refill_id, date_time_utc, amount, currency, funding_source, status, initiator, and link to rationale And totals (count and sum) reflect the filtered set And P95 query plus render time is <= 2s for up to 100k records And when the user exports CSV, the file contains exactly the filtered rows with a header row, UTF-8 encoding, and ISO 8601 UTC timestamps
Forecast Accuracy Reporting (MAPE and Bias)
Given a selected date range and workspace/tag filters When the user opens Forecast Accuracy Then the system computes and displays MAPE (%) and bias (%) overall and by week, comparing forecasted demand vs actual consumption/spend And records with less than 48 hours of actualization lag are excluded from accuracy calculations And a drill-down from a weekly point lists included refills with forecast vs actual values and links to their rationale snapshots And CSV export and GET /reports/forecast-accuracy return aggregates that match the UI for the same filters
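The spec does not pin down the exact MAPE/bias aggregation, so the following is one reasonable reading: per-record absolute percentage errors averaged for MAPE, and bias as total forecast versus total actual.

def mape(pairs):
    """pairs: (forecast, actual) tuples; zero actuals are skipped to avoid division by zero."""
    terms = [abs(f - a) / a for f, a in pairs if a]
    return 100 * sum(terms) / len(terms) if terms else None

def bias(pairs):
    """Positive result means the forecast over-predicts consumption on average."""
    actual_total = sum(a for _, a in pairs)
    forecast_total = sum(f for f, _ in pairs)
    return 100 * (forecast_total - actual_total) / actual_total if actual_total else None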
Avoided Work Stoppages Identification
Given a minimum balance threshold is configured for a workspace When the forecasted balance without refill would cross the threshold within the forecast horizon And when a timely refill prevents the threshold breach Then the system records one avoided_stoppage event linked to the refill with fields: projected_min_balance_without_refill, threshold, refill_time, refill_amount, post_refill_balance And avoided stoppage counts and details appear on dashboards and exports, filterable by date range, workspace, and project tags And scenarios without a projected breach are not counted as avoided stoppages
Spend Distribution by Funding Source
Given date range, workspace, and project tag filters When the user views Spend by Funding Source Then totals per funding_source are shown and the sum equals the filtered refill total within 0.01 of the currency unit And percentages sum to 100% +/- 0.1% And clicking a funding source shows its underlying refills And CSV export and GET /reports/spend-distribution return aggregates and details that match the UI
Comprehensive Exports and API Pagination
Given any dashboard (Refill History, Forecast Accuracy, Spend Distribution) has filters applied When the user exports CSV or calls the corresponding API Then the output respects all filters and includes documented fields with stable column names, ISO 8601 UTC timestamps, and currency codes And large result sets are chunked: CSV produces multiple files when > 1,000,000 rows; API uses cursor-based pagination with default page_size=100 and max=1000 And repeated calls with the same cursor return the same page unless new records are appended And first-byte latency for exports is <= 5s for up to 5,000,000 rows and the job status is trackable via an export_id

Cost Tags

Require each address spend to carry tags (cost center, campaign, carrier, job ID) for clean attribution. Export detailed spend to accounting/ERP, track ROI by storm event or channel, and hold teams accountable with clear, drill‑downable reports.

Requirements

Required Cost Tags Enforcement
"As a finance manager, I want all spend records to require standardized cost tags so that attribution and downstream accounting are accurate and consistent."
Description

Implement a mandatory tag schema for all spend records associated with an address/job, including cost center, campaign, carrier, and job ID. Provide server- and client-side validation, field typing, and normalization (e.g., casing, whitespace, allowed characters). Support inheritance of default tags from parent objects (account, project, storm event) with the ability to override at the line-item level. Prevent save/submit of estimates, invoices, and procurement costs unless required tags are present and valid. Expose validation errors via API and UI, and support configurable required/optional fields per tenant. Outcome: every spend line is consistently attributed for accurate reporting and downstream integrations.

Acceptance Criteria
Block Submission Without Required Tags
Given a user attempts to save or submit an estimate, invoice, or procurement cost line without one or more required tags (cost center, campaign, carrier, job ID) When the action is performed via UI form or API POST/PATCH Then the operation is rejected and no record or update is persisted And the UI displays inline errors for each missing/invalid tag field And the API responds with HTTP 422 and field-level error details for each offending tag Given all required tags are present and valid When the user saves/submits the record Then the operation succeeds and the record status changes as expected
Tag Normalization and Format Enforcement
Rule: cost center must match ^[A-Z0-9_-]{1,32}$, trimmed and stored uppercase
Rule: campaign must match ^[A-Za-z0-9 _-]{1,64}$, trimmed and internal whitespace collapsed to single spaces
Rule: carrier must match a tenant-configured carrier code (case-insensitive), stored as the canonical uppercase code
Rule: job ID must match ^[A-Z0-9_-]{1,36}$, trimmed and stored uppercase
Given any tag values contain leading/trailing whitespace or mixed case When the record is validated Then values are normalized per rules above prior to persistence
Given any tag value violates its allowed pattern or list When validation runs client-side and server-side Then the value is rejected with a specific error message identifying the violated rule
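The rules above transcribe almost directly into code; a sketch, with the carrier lookup omitted because its dictionary is tenant-specific.

import re

COST_CENTER = re.compile(r"^[A-Z0-9_-]{1,32}$")
CAMPAIGN = re.compile(r"^[A-Za-z0-9 _-]{1,64}$")
JOB_ID = re.compile(r"^[A-Z0-9_-]{1,36}$")

def normalize_cost_center(value: str) -> str:
    value = value.strip().upper()                    # trim, store uppercase
    if not COST_CENTER.match(value):
        raise ValueError("cost center violates ^[A-Z0-9_-]{1,32}$")
    return value

def normalize_campaign(value: str) -> str:
    value = re.sub(r"\s+", " ", value.strip())       # collapse internal whitespace
    if not CAMPAIGN.match(value):
        raise ValueError("campaign violates ^[A-Za-z0-9 _-]{1,64}$")
    return value

def normalize_job_id(value: str) -> str:
    value = value.strip().upper()
    if not JOB_ID.match(value):
        raise ValueError("job ID violates ^[A-Z0-9_-]{1,36}$")
    return value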
Default Tag Inheritance With Line-Item Override
Given an account, project, or storm event has default tag values configured When a new spend line is created under that scope without explicit tag inputs Then the line auto-populates tags by inheritance using precedence: Line Item override > Project > Storm Event > Account And inherited values are visible in the UI and returned by the API Given a user edits an inherited tag on a line item When the value is overridden Then the override is saved only on that line item and persists across edits and submissions And subsequent changes to parent defaults do not alter existing line items
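One way to express the precedence Line Item override > Project > Storm Event > Account: merge scopes lowest-precedence first so later ones win. Each scope here is an optional dict of tag defaults; names are illustrative.

def resolve_tags(line_item, project, storm_event, account):
    """Merge defaults lowest-precedence first; later scopes override earlier ones."""
    tags = {}
    for scope in (account, storm_event, project, line_item):
        tags.update({k: v for k, v in (scope or {}).items() if v is not None})
    return tags

For example, an account default of cost_center="HQ" combined with a project override of cost_center="TX-01" and a line-item carrier="ACME" resolves to {"cost_center": "TX-01", "carrier": "ACME", ...}, matching the stated precedence.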
Tenant-Configurable Required/Optional Tags
Given a tenant admin configures which tags are required vs optional per spend type (estimate, invoice, procurement) When any user creates or edits a spend record for that type Then enforcement of required tags reflects the current tenant configuration And required fields are visibly marked in the UI and cannot be left blank And attempts to save via API without required tags are rejected with HTTP 422 and field-level errors Given the admin updates the configuration When a new create/edit occurs after the change Then the updated requirement rules take effect immediately for new validations (existing persisted records are unaffected until edited)
Consistent Validation Error Exposure (API and UI)
Given a validation failure for missing or invalid tags occurs When the request is made via API Then the response body includes, for each invalid field: field name, error code, human-readable message, and the rejected value And the HTTP status is 422 Unprocessable Entity Given a validation failure occurs in the UI When the form displays errors Then each invalid field shows an inline message, is announced to screen readers, and focus moves to the first errored field on submit
Bulk Create/Import Enforcement
Given a user imports spend lines via CSV upload or calls a bulk API endpoint When some rows lack required tags or contain invalid tag values Then invalid rows are rejected with row-level error details and no partial data for those rows is persisted And valid rows are created/updated successfully in the same run And a summary reports counts of processed, succeeded, and failed rows with references to errors
Exports and Reporting Use Normalized Tags Only
Given accounting/ERP export or reporting is executed for spend lines When the export/report is generated Then every line includes cost center, campaign, carrier, and job ID in their stored normalized forms And no line without all required valid tags is included in the export And tag values in exports match exactly what is stored on the spend line items, enabling downstream matching and attribution
Tag Dictionary & Governance
"As a system admin, I want to centrally manage allowed tag values and permissions so that tagging remains consistent and compliant across teams."
Description

Provide an admin console to define and manage allowed tag values and structures (e.g., predefined lists for cost centers, carriers, campaigns), including hierarchical tags, effective dates, deprecations, and merges (with automatic backfill/migration). Enable role-based permissions for who can create, edit, or apply tag values. Include bulk upload/export of dictionaries, validation rules (regex/length), and localization/aliases for display names. Ensure tenant scoping and auditability of governance actions. Outcome: controlled vocabularies that prevent tag sprawl and ensure clean, consistent attribution across the organization.

Acceptance Criteria
Create Hierarchical Tag Value with Effective Dates
Given a Tag Admin is authenticated within tenant scope And a tag dictionary type (e.g., Cost Center) exists with configured validation rules When the admin creates a new tag value with a unique key within the tenant, a display name, an optional parent tag, and an effective start date (and optional end date) Then the system persists the tag with its hierarchy, enforces key uniqueness within the tag type and tenant, validates fields against regex/length rules, and returns a success response And the tag is marked Active if the current date is within the effective period And the new tag appears in dictionary queries and UI pickers filtered by active status
Deprecate Tag Value and Prevent New Use After End Date
Given a tag value exists with an effective end date set by a Tag Admin When a user attempts to apply that tag to a new spend record with a transaction date after the end date Then the system blocks the assignment, surfaces a validation error, and does not persist the change And existing historical records prior to end date remain unchanged and reportable And the tag is labeled Deprecated in admin views and hidden from default pickers for new records
Merge Tag Values with Automatic Historical Backfill
Given two tag values (Source and Target) of the same type exist within the same tenant And a Tag Admin initiates a merge of Source into Target with a specified merge effective date When the merge is confirmed Then the system updates all historical references of Source on or before the merge effective date to Target without data loss And marks Source as Merged/Deprecated and creates an alias from Source to Target for lookups And records the before/after mappings in the audit log with actor, timestamps, and counts of updated records And dictionary/API reads resolve Source to Target post-merge
Role-Based Permissions for Create/Edit/Apply
Given role definitions exist (e.g., Tag Admin, Tag Editor, Tag Applier, Viewer) When a Tag Admin attempts create/edit/deprecate/merge actions Then the actions are permitted and audited When a Tag Editor attempts to edit allowed fields (e.g., display name, aliases, localization) but not merge or delete Then only permitted actions succeed; restricted actions are blocked with authorization errors When a Tag Applier assigns tags to spend records Then assignment succeeds only for active, allowed values And a Viewer cannot perform any create/edit/apply actions
Bulk Upload and Export of Tag Dictionary with Validation
Given a CSV/JSON template defining required columns (key, type, display_name, parent_key, start_date, end_date, locale, alias, validation_profile) When a Tag Admin uploads a file Then the system validates each row against schema and configured regex/length rules, rejects invalid rows with detailed error messages and row numbers, and imports valid rows And the import produces a summary (processed, created, updated, failed counts) and is idempotent by natural key within tenant And parent-child relationships and effective dates are established where provided And the admin can export the current dictionary for the tenant including all locales, aliases, and effective periods
Localization and Aliases for Tag Display Names
Given a tag value has localized display names and one or more aliases configured When a user views or searches tags with a specified locale Then the system returns the localized display name for that locale, falling back to the default locale if missing And searches resolve aliases to the canonical tag key without duplicating results And alias strings are unique per tag type within a tenant and cannot collide with existing keys or other aliases
Tenant Scoping and Immutable Audit Log of Governance Actions
Given multiple tenants exist When an admin from Tenant A queries or manages the tag dictionary Then only Tenant A's tags, aliases, and hierarchy are visible and mutable; no cross-tenant keys or references are accessible And all governance actions (create, edit, deprecate, merge, import/export) are captured in an append-only audit log with tenant ID, actor, action, timestamp, before/after values, and correlation ID And the audit log is filterable by action, actor, date range, and exportable by authorized roles
Address/Job Tagging UX
"As a project estimator, I want an easy way to assign and edit tags at the job and address level so that all related spend inherits the correct attribution."
Description

Design intuitive UI components to assign, view, and edit tags at the address/job level and on individual spend line items. Include quick-pick lists from dictionaries, search-as-you-type, recent/favorites, and clear required-field indicators with inline validation. Support inheritance preview (show which child records will be affected) and batch edit within a job. Provide status badges for "Missing Tags" in lists and mobile-friendly forms for field teams. Outcome: fast, low-friction tagging that increases completion rates without slowing workflows.

Acceptance Criteria
Job-Level Tag Assignment with Required Fields and Inline Validation
Given the Job Details > Tags panel is open and required tags are configured (cost center, campaign, carrier, job ID) When the panel loads Then required fields display a visible required indicator and accessible label that includes "Required". Given any required tag field is empty or invalid When the user attempts to save Then the Save action is disabled and inline errors render under each offending field with specific messages. Given a tag is dictionary-locked (no free text) and the input value is not in the dictionary When the user tries to commit the value Then an inline validation message appears "Select a value from the list" and the value is not saved. Given all required fields contain valid values from their dictionaries or permitted free text When the user taps Save Then the tags persist to the job within 1 second and a success toast confirms "Tags saved".
Inheritance Preview from Job to Child Line Items
Given a job has child spend line items and job-level tags have changed When the user chooses "Apply to line items" Then a preview modal displays the number of line items to be updated and a per-tag before→after summary. Given the preview modal is displayed When the user confirms Apply Then only the listed child records are updated and a success summary shows counts updated per tag. Given the preview modal is displayed When the user cancels Then no child records are modified.
Batch Edit Tags for Selected Line Items in a Job
Given the Line Items tab is open with items listed When the user selects two or more items Then a contextual action bar appears with "Batch Edit Tags". Given the Batch Edit dialog opens on a mixed selection When tag fields have differing existing values Then the field shows "Mixed" state and an "Overwrite" control per field is available. Given the user inputs new tag values and checks Overwrite for selected fields When the user confirms Apply Then those fields are updated on all selected items; unchanged fields remain as-is; a completion message shows the count of items updated.
Search-as-You-Type with Quick Picks, Recent, and Favorites
Given a tag input gains focus When the user types one or more characters Then suggestions appear within 300 ms, ranked by exact prefix match, favorites, and recency; navigating via keyboard or touch selects an entry. Given no suggestions match the input When dictionaries allow custom entries Then a "Create 'value'" option is displayed and can be selected to add the value; otherwise an explicit "No results" message appears. Given favorites and recent selections exist When the user opens a tag field without typing Then Favorites appear first, followed by up to 10 Recent selections for that tag; the user can star/unstar entries.
Missing Tags Status Badges and List Filtering
Given a list view of jobs/addresses or line items is displayed When an item is missing any required tag Then a "Missing Tags" badge appears inline with the item and shows the number of missing tags. Given the user clicks or taps a "Missing Tags" badge When the edit panel opens Then focus is placed on the first missing required field. Given the user fills all required tags for an item When the changes are saved Then the "Missing Tags" badge is removed within 1 second and list-level filter counts are updated.
Mobile-Friendly Tagging Form for Field Teams
Given a device viewport between 360 and 428 px wide When the tagging form loads Then the layout is single-column, all tap targets are at least 44x44 px, and the Save/Cancel bar is sticky at the bottom. Given the user edits tag fields on mobile When entering values Then the appropriate mobile keyboard is invoked per field type and the Next action advances to the next required field. Given a poor network connection (3G) When saving valid tags Then the form shows a loading indicator and does not allow double-submit; on success the save completes within a median of 3 seconds; on timeout after 10 seconds an actionable retry message appears.
Bulk Tagging & Import Mapping
"As an operations coordinator, I want to bulk apply and import tags mapped from external data so that I can quickly normalize legacy and high-volume records."
Description

Enable bulk operations to apply or modify tags across selected addresses/jobs/spend lines, with conflict detection and preview. Provide CSV import with column-to-tag mapping, validation reports, and partial success handling. Support rule-based auto-tagging (e.g., derive carrier from claim number pattern, map campaign from source UTM, default cost center by branch) and scheduled backfills for historical data. Offer public APIs to submit tag updates at scale with idempotency keys and rate limits. Outcome: efficient onboarding and ongoing maintenance of consistent tags across large datasets.

Acceptance Criteria
Bulk Apply/Modify Tags with Preview and Conflict Detection
Given I select addresses, jobs, or spend lines and choose tag operations (add, update, remove) When I click Preview Then I see total selected, total impacted, per-record before/after tag diffs, and conflicts flagged with reasons When I confirm Apply Then only non-conflicting records are updated, conflicting records are skipped, and a summary shows counts of updated/skipped/unchanged with reasons And no changes are made if I cancel And each successful update creates an audit log entry with user, timestamp, and before/after values
CSV Import with Column-to-Tag Mapping, Validation, and Partial Success
Given I upload a CSV When I map CSV columns to supported tags or ignore unmapped columns Then the system validates required fields, value formats, allowed lists, and referential integrity, and produces a row-level validation report When I proceed with import Then valid rows are applied, invalid rows are skipped, and I can download a CSV report of errors and warnings And re-importing the same file with the same mapping is idempotent and does not duplicate changes
Rule-Based Auto-Tagging and Precedence
Given rules exist (e.g., derive carrier from claim number regex, map campaign from UTM source, default cost center by branch) When new or updated records enter the system Then matching rules apply tags automatically And manual user-set tags override rule-derived tags And rules have explicit priority and effective date ranges; conflicts resolve by highest priority And invalid rule definitions are rejected with validation messages
Scheduled Backfills for Historical Data
Given I configure a backfill with a ruleset, scope filters, and a schedule When the job runs Then it previews impact counts before applying, applies per-record updates transactionally, and produces a completion summary with updated/skipped/unchanged and reasons And the job is resumable and idempotent; reruns do not duplicate changes And notifications are sent on start and completion (success or failure) to specified recipients
Public API for Bulk Tag Updates with Idempotency and Rate Limits
Given I call POST /tags/bulk-updates with a valid access token and an Idempotency-Key When I submit up to the maximum allowed items per request Then the API validates payload shape, permissions, and tag values; processes updates atomically per record; and returns per-item result statuses And repeating the request with the same Idempotency-Key and identical payload returns the original result without reapplying changes within the idempotency window And exceeding the tenant rate limit returns HTTP 429 with a Retry-After header And error responses use documented codes and messages; unauthorized or forbidden requests return 401 or 403
Saved Import Mapping Templates
Given I perform a CSV import and map columns to tags When I save the mapping as a template with a name and optional default flag Then the template is available on subsequent imports When I select a template Then the mapping pre-populates; missing or changed columns are highlighted for correction before import
Concurrency, Locking, and Safe Rollback
Given a bulk operation is running When another process modifies a targeted record before apply Then the system detects a version conflict and skips that record with a conflict reason And per-record updates are atomic; no partial tag state is persisted for a record And I can pause or cancel the operation; cancel stops new work, preserves already applied changes, and produces a summary And if a worker restarts mid-operation, processing resumes without duplicating previously applied changes
ERP/Accounting Export with Tags
"As an accounting analyst, I want exports that include detailed tag fields so that our ERP can reconcile spend by cost center, campaign, carrier, and job ID."
Description

Deliver export pipelines (CSV download, SFTP delivery, REST webhook) that include detailed spend line items with associated tags and identifiers (job ID, address, customer, GL mappings). Provide configurable field mappings per integration (e.g., QuickBooks, Sage, NetSuite), scheduling, incremental exports with idempotency, and retry/error reporting with alerts. Include a test/sandbox mode and sample files for validation. Outcome: seamless handoff of tagged spend to accounting/ERP for reconciliation and financial reporting by cost center, campaign, carrier, and job.

Acceptance Criteria
CSV Export Includes Tags and Identifiers
Given a user selects a date range and channel(s) When they trigger "Download CSV" Then the file downloads with a header row, comma delimiter, and UTF-8 encoding And Then each line item includes: export_batch_id, line_item_id, timestamp, address_id, address, customer_id, customer_name, job_id, cost_center_tag, campaign_tag, carrier_tag, GL_account, GL_class, GL_department, amount, currency, tax, memo And Then 100% of line items contain non-null values for cost_center_tag, campaign_tag, carrier_tag, and job_id or the export is blocked with a validation error listing offending row IDs And Then totals for amount and tax in the CSV equal the totals shown in the UI for the same filters within 0.01 currency units
SFTP Scheduled Incremental Export with Idempotency
Given an SFTP destination is configured and a daily schedule at 02:00 local time When the schedule triggers Then a file named <export_batch_id>_<YYYYMMDD_HHMM>.csv is atomically written (temp file then rename) to the configured directory And Then the export contains only line items with updated_at strictly greater than the last_successful_checkpoint and includes the new high_water_mark And Then if the same batch is retried, no duplicate line items are delivered (idempotent by line_item_id within export_batch_id) And Then on network failure, retries occur with exponential backoff up to 5 attempts; after final failure the batch is marked Failed and no partial file remains on the server
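A sketch of the incremental checkpoint and atomic-write behavior described above: export only rows past the high-water mark, write to a temp name, then rename. Row shape, CSV serialization, and the simplified filename are assumptions (the spec names files <export_batch_id>_<YYYYMMDD_HHMM>.csv).

import os

def export_batch(rows, checkpoint, batch_id: str, out_dir: str):
    """Export rows with updated_at past the checkpoint; return the new high-water mark."""
    new_rows = [r for r in rows if r["updated_at"] > checkpoint]
    if not new_rows:
        return checkpoint                              # nothing new: checkpoint unchanged
    tmp = os.path.join(out_dir, f"{batch_id}.csv.tmp")
    final = os.path.join(out_dir, f"{batch_id}.csv")
    with open(tmp, "w", encoding="utf-8", newline="") as f:
        for r in new_rows:
            f.write(",".join(str(r[k]) for k in sorted(r)) + "\n")
    os.replace(tmp, final)                             # atomic rename: no partial file visible
    return max(r["updated_at"] for r in new_rows)      # the new high-water mark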
REST Webhook Delivery and Retries
Given a webhook endpoint with HMAC secret is configured and enabled When an export batch is ready Then the system POSTs a JSON payload containing metadata {export_batch_id, high_water_mark, record_count, checksum} and the array of line items And Then requests include an X-Signature header computed per spec and use gzip compression if payload > 1 MB And Then 2xx response marks the batch Delivered; 429/5xx/timeouts trigger retries with exponential backoff up to 6 attempts; 4xx (except 429) mark the batch Failed without further retries And Then each attempt is logged with timestamp, response code, and latency and is visible in the Export History UI
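The signing scheme is "computed per spec" without further detail here, so this sketch assumes the common pattern of HMAC-SHA256 over the raw request body, delivered in the X-Signature header named above.

import hashlib
import hmac
import json

def sign_payload(payload: dict, secret: bytes):
    body = json.dumps(payload, separators=(",", ":")).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature      # deliver body with header X-Signature: <signature>

def verify_payload(body: bytes, signature: str, secret: bytes) -> bool:
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)    # constant-time comparison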
Per-Integration Field Mapping Configuration
Given a user selects an integration template (QuickBooks, Sage, NetSuite) When editing field mappings Then available source fields include all tags and identifiers; available target fields reflect the integration schema for the selected template And Then saving requires that all required target fields are mapped, data types are compatible, and transformation rules validate without errors And Then mappings are versioned with name, author, and timestamp; previous versions can be restored And Then a "Test with Sample" action generates a sample file/payload using the current mapping and shows pass/fail schema validation results
Test/Sandbox Mode and Sample Files
Given Test Mode is enabled for an integration When exports run (manual or scheduled) Then deliveries are sent to sandbox endpoints/paths and are marked as TEST in filenames/payload metadata And Then no records are marked as reconciled in production while Test Mode is enabled And Then users can download up-to-date sample CSV and JSON files that match the current mapping and schema
Error Reporting, Alerts, and Retry Controls
Given any export attempt fails or completes with partial errors When the run ends Then an in-app alert appears and an email/Slack notification is sent to configured recipients within 5 minutes And Then the alert includes export_batch_id, failure reason(s), failed row count, and recommended next actions And Then users with appropriate role can trigger "Retry batch" or "Rebuild from last checkpoint" from the Export History UI, and the action is audited
Financial Reconciliation and Completeness
Given a specified date range and filters (cost center, campaign, carrier, job) When an export is generated Then 100% of line items include non-null cost tags and job_id; rows with missing tags are rejected with explicit errors before delivery And Then the sum(amount) and sum(tax) per tag and overall in the exported data match the in-app Reports totals for the same filters within 0.01 currency units And Then record_count in batch metadata equals the delivered row count; checksum verifies file integrity for CSV/SFTP and JSON/webhook deliveries And Then address_id and job_id are not duplicated for the same line_item_id within a batch
ROI Dashboard & Drilldowns
"As a business owner, I want ROI dashboards with drilldowns by storm event and channel so that I can see which efforts drive profitable work."
Description

Provide analytics that aggregate revenue, cost, and margin by tags (storm event, campaign, cost center, carrier, job ID) with time filters and comparisons. Enable drill-down from summary tiles and charts to address-level details and underlying spend lines. Include saved views, scheduled email exports, and CSV download. Support cross-filtering, multi-select tags, and permissions-aware data visibility. Outcome: actionable insights to track ROI by event or channel and hold teams accountable with transparent, drill-downable reports.

Acceptance Criteria
Aggregate Metrics by Tags with Time Filters and Period Comparison
Given a user with access to the ROI Dashboard and tagged revenue/cost data is available When the user selects a date range and chooses one or more tag dimensions to group by (storm event, campaign, cost center, carrier, job ID) and selects a comparison period (Previous period, Same period last year, or Custom) Then the dashboard displays Revenue, Cost, Gross Margin, and Margin % aggregated correctly by the selected tags and date range And the grand totals equal the sum of the underlying permitted records for the applied filters (tolerance ±0.01 for currency, ±0.1% for percentages) And comparison values display absolute delta and percent change for each metric per tag group And currency values are rounded to 2 decimals and percentages to 1 decimal consistently across tiles and charts
Drill‑Down from Summary Tiles to Address and Spend Line Details
Given the user is viewing summary tiles or a chart on the ROI Dashboard When the user clicks a tile or a chart segment (bar, line point, pie slice) Then the app navigates to a details view listing addresses scoped to the clicked metric and preserves all active filters and date range And the address table includes columns: Address, Job ID, Storm Event, Campaign, Cost Center, Carrier, Revenue, Cost, Gross Margin, Margin % And selecting an address reveals underlying spend lines with fields: Date, Cost Item, Cost Tag(s), Amount, Source, Note/Reference And a breadcrumb indicates Dashboard > [Tile/Chart] > [Tag/Group] and Back returns to the same scroll/selection state on the dashboard And exporting from the details view (CSV) includes only the filtered addresses and spend lines in context
Saved Views with Scheduled Email Exports
Given the user has configured filters, tag groupings, comparisons, and sorts on the ROI Dashboard When the user saves the configuration with a unique name Then the saved view appears in the user's Saved Views list, is persisted across sessions, and can be set as default And the user can set visibility to Private or Share with selected roles/teams; only recipients with permission can see/use the view When the user schedules an email for a saved view with cadence (daily/weekly/monthly), time zone, format (PDF attachment, CSV attachment, or link), and recipients Then recipients receive the email within 15 minutes of the scheduled time, and the content reflects the saved view's current data and permissions And the owner can pause, edit, or delete the schedule; all changes are logged with user, timestamp, and action
CSV Download of Filtered and Visible Data
Given the user is viewing any dashboard or drill‑down table with active filters When the user initiates CSV download Then the generated CSV contains exactly the columns visible in the current view plus a metadata header with export timestamp, date range, and applied filters And the number of rows in the CSV matches the number of rows shown in the UI for the current view (within ±1 if pagination/truncation indicator is shown) And numeric fields use '.' as decimal separator, no currency symbols, ISO date format (YYYY‑MM‑DD), and UTF‑8 encoding with headers And for exports > 50,000 rows, an async job is queued and the user is notified; the download link is delivered in‑app and via email within 10 minutes And exports enforce the user's permissions; out‑of‑scope data is excluded
Cross‑Filtering and Multi‑Select Tagging
Given the user interacts with tag filters and charts on the ROI Dashboard When the user selects multiple values within a single tag dimension Then the filter logic applies OR within that dimension and AND across different tag dimensions, and all tiles/charts update accordingly And clicking a chart element adds a corresponding filter chip and cross‑filters the other visuals And applied filters are shown as removable chips; a Clear All action resets the dashboard to its default state And the URL reflects the current filter state so that reloading or sharing restores the same view And tag pickers support searching and selecting up to 1,000 values, including Select all in current search results
Permissions‑Aware Data Visibility
Given role‑based and row‑level permissions are configured for organizations, cost centers, carriers, campaigns, and addresses When a user views the ROI Dashboard, drill‑downs, exports, or saved views Then only data within the user's permission scope is visible and included in calculations and totals And tag values and addresses outside the user's scope are not shown in pickers or tables; shared saved views with out‑of‑scope data load with masked or excluded segments and a notice is displayed And scheduled emails and CSV exports deliver only permitted data for each recipient And all access is auditable with user, timestamp, action, and scope applied
Data Accuracy, Freshness, and Performance Guarantees
Given tagged revenue and cost data is ingested and cost tags may be updated throughout the day When the ROI Dashboard and drill‑downs are loaded Then aggregated totals match the sum of underlying permitted records within ±0.01 currency units and ±0.1% for Margin % across tiles, charts, and details And a Data as of timestamp is displayed; tag or cost changes are reflected in aggregates within 15 minutes of change commit And performance meets: initial dashboard render ≤ 3s for up to 100k records and ≤ 6s for up to 500k records; drill‑down details load ≤ 3s for up to 10k rows; interactions (filter add/remove) update visuals ≤ 1.5s And the system supports tag cardinality up to 10,000 unique values per tag dimension without errors or incomplete results
Tag Audit Trail & History
"As a compliance officer, I want a full audit history of tag changes so that we can trace decisions and correct errors."
Description

Record a complete, immutable history of tag changes on addresses, jobs, and spend lines, capturing who changed what and when, with before/after values and reason codes. Provide UI to view history, filter by user/tag/date, and revert selected changes with safeguards. Expose audit data via API and include it in export logs for compliance. Outcome: traceability that supports reviews, dispute resolution, and accountability across teams.

Acceptance Criteria
Audit entry creation on tag changes
Given a user with tag-edit permission updates, adds, or removes a tag on an address, job, or spend line via UI or API When the change request includes a non-empty reasonCode Then exactly one audit entry is appended within 2 seconds containing: auditId, entityType, entityId, tagKey, action(add|update|remove), oldValue, newValue, reasonCode, actorUserId, actorName, actorRole, source(UI|API), requestId, ipAddress, occurredAt (UTC ISO 8601 with milliseconds) And if reasonCode is missing or blank, the change is rejected with 400/validation error and no audit entry is written And bulk updates create one audit entry per tag per entity changed, correlated by the same requestId
Audit records immutability and access
Given any audit entry exists When any user attempts to edit or delete an audit entry via UI or API Then the system prevents modification (HTTP 403 or 405, as appropriate), no changes are made, and the attempt is logged as a security event And audit entries are read-only in all user-facing surfaces and only appended; no delete endpoints or UI affordances exist And audit entries are retained indefinitely unless a legal-retention policy is configured; purge actions (if any) are disabled for standard users and require an administrative legal-hold override
History UI: filter, sort, and paginate audit entries
Given a user with history-view permission opens the History UI for an address, job, or spend line When they apply filters for userId, tagKey, date range (from/to), action(add|update|remove|revert), source(UI|API), and entityType, and choose sort by occurredAt (asc/desc) Then the list shows only matching audit entries, sorted as requested And the first page loads within 2 seconds for datasets up to 100k entries; default page size is 50 and can be set to 25/50/100; totalCount is displayed And each row displays: tagKey, oldValue→newValue, reasonCode, actorName, occurredAt (UTC), action, source; clicking a row opens full details
Revert tag change with safeguards
Given a user with revert permission selects a past audit entry for a specific tag on an entity in the History UI When they click Revert, review the confirmation diff, and provide a non-empty reasonCode Then the system validates entity is active, tag is applicable, and revert complies with tag validation rules And if valid, the tag is set to the selected entry’s prior value and a new audit entry is appended with action 'revert', referencing the original auditId and capturing the revert reasonCode And if there have been subsequent changes to that tag, the revert still appends a new entry without mutating prior entries; if a concurrent change is detected, the operation fails with 409 Conflict and instructs the user to refresh And if validation fails, no change is applied and a precise error is shown
Audit trail API: query and retrieve
Given an authenticated client with scope audit:read When it calls GET /api/audit-logs with filters entityType, entityId (single or list), tagKey (single or list), userId, action, source, occurredAt[from,to], page, pageSize (max 100), sort (occurredAt asc|desc) Then the API returns 200 with results within 800 ms for up to 10k matching records, including pagination metadata (page, pageSize, totalCount) And each item includes: auditId, entityType, entityId, tagKey, action, oldValue, newValue, reasonCode, actorUserId, actorName, actorRole, source, requestId, occurredAt (UTC), ipAddress And unauthorized requests return 401; insufficient scope returns 403; invalid parameters return 400 with field-specific error messages
Compliance exports include audit history
Given a user triggers an export of spend/address/job data for a specified date range When the export completes Then the export package includes an audit_history.csv containing all audit entries for entities included in the export window And the file contains columns: auditId, entityType, entityId, tagKey, action, oldValue, newValue, reasonCode, actorUserId, actorName, source, occurredAt (UTC), requestId And only users with export permission can generate the file; sensitive fields (e.g., ipAddress) are excluded unless the user has compliance-admin role

Job Reserve

When you bulk import or create Route Bundles, the wallet earmarks the exact credits needed and shows reserved vs. available in real time. Prevents mid‑run failures, auto‑releases unused reserves after SLA windows, and keeps crews moving without manual juggling.

Requirements

Atomic Credit Earmarking for Route Bundles
"As an operations manager, I want credits to be automatically earmarked when I create or import a route bundle so that runs don’t fail midway due to insufficient funds."
Description

Implement an atomic reservation service that calculates the exact credits required per route bundle based on bundle size, imagery type, processing options, and SLA, then earmarks those credits as “Reserved” (not consumed) in the organization wallet. Prevent oversubscription under concurrency with transactional locking and idempotency keys. Support an all-or-nothing reservation policy per bundle with an optional configurable partial-reserve fallback and automatic rollback on failure. Expose REST endpoints and internal events for reservation create/update/cancel, and integrate with Route Bundle create/edit flows, wallet top-up, and multi-tenant org wallets.

Acceptance Criteria
Exact Credit Calculation per Bundle
Given a route bundle with N jobs, imagery_type, processing_options, and SLA provided And the org wallet has any available balance When the reservation calculation endpoint is requested for the bundle Then the service returns required_credits as an integer derived from configured pricing rules for imagery_type, processing_options, and SLA applied per job and summed And the calculation is deterministic for identical inputs And rounding is to whole credits using the configured rule And the response includes a per-job breakdown and a total
Atomic Reservation (All-or-Nothing Policy)
Given an org wallet with available_credits >= required_credits for the bundle When POST /reservations is called with policy="all_or_nothing" Then exactly required_credits are moved from wallet.available to wallet.reserved in a single transaction And wallet.consumed remains unchanged And the API responds 201 with reservation_id and status="reserved" Given available_credits < required_credits When POST /reservations is called with policy="all_or_nothing" Then the API responds 409 with error_code="INSUFFICIENT_CREDITS" And no changes are made to wallet balances
Concurrency and Idempotency Safety
Given two or more concurrent POST /reservations requests for the same bundle with the same Idempotency-Key header When executed within a 5-second window Then only one reservation record is created And all callers receive the same reservation_id and status Given concurrent reservations for distinct bundles that together exceed available_credits When executed Then no oversubscription occurs and at most one succeeds; others fail with 409; wallet.available never becomes negative
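A minimal sketch of the all-or-nothing earmark with idempotent replay under a wallet-level lock; a real implementation would use a database transaction with row locking, and the class names are illustrative.

import threading

class InsufficientCredits(Exception):
    pass                         # maps to the HTTP 409 in the criteria above

class Wallet:
    def __init__(self, available: int):
        self.available, self.reserved = available, 0
        self._lock = threading.Lock()
        self._by_key = {}        # idempotency key -> reservation record

    def reserve(self, key: str, amount: int) -> dict:
        with self._lock:         # stands in for a transactional row lock
            if key in self._by_key:                  # idempotent replay: same reservation back
                return self._by_key[key]
            if self.available < amount:
                raise InsufficientCredits()
            self.available -= amount                 # all-or-nothing move to reserved
            self.reserved += amount
            reservation = {"key": key, "amount": amount, "status": "reserved"}
            self._by_key[key] = reservation
            return reservation

Because both the balance check and the move happen under one lock, concurrent reservations can never drive available below zero, which is the oversubscription guarantee above.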
Partial-Reserve Fallback Configuration
Given org has partial_reserve_fallback=true and available_credits < required_credits When POST /reservations is called with policy="all_or_nothing" Then the service reserves exactly available_credits, records shortfall=required_credits - available_credits, and returns 202 with status="partial" And an event reservation.partial_reserved is emitted with reservation_id, org_id, bundle_id, reserved_credits, shortfall Given partial_reserve_fallback=false under the same credit conditions When POST /reservations is called with policy="all_or_nothing" Then the API responds 409 and no credits are reserved
Automatic Rollback and SLA-Based Auto-Release
Given a reservation exists and downstream processing fails before any credit consumption When the system receives a processing.failed signal for the bundle Then the reserved credits are returned to wallet.available within 60 seconds and reservation status becomes "cancelled" And an event reservation.cancelled is emitted with reservation_id and reason Given a reservation remains unused past its SLA expiry timestamp When the expiry time elapses Then the system auto-releases the reservation within 60 seconds and emits reservation.released with release_reason="sla_expired"
Route Bundle Create/Edit/Cancel Integration
Given a user creates a new route bundle via UI or bulk import When the bundle is saved Then the system calculates required_credits and attempts reservation per policy, updating wallet.available and wallet.reserved accordingly and surfacing success/error in the bundle UI Given a bundle is edited such that required_credits changes When the edit is saved Then the reservation is atomically adjusted by the delta (increase if available, or fail/partial per policy; decrease releases surplus) and emits reservation.updated Given a bundle is cancelled before processing starts When cancellation is confirmed Then the associated reservation is released immediately and emits reservation.cancelled
Multi-Tenant Isolation and Wallet Top-Up Effects
Given multiple org wallets exist in a multi-tenant environment When reservations are created, updated, or cancelled for one org Then no wallet balance, reservation record, or event is visible or applied to any other org Given an org has a partial reservation with shortfall > 0 When the org wallet is topped up Then the system auto-fulfills the reservation to full within 60 seconds, emits reservation.fulfilled, and updates wallet.available and wallet.reserved accordingly
Real-time Wallet Visibility (Reserved vs Available)
"As a dispatcher, I want to see reserved versus available credits in real time so that I can schedule crews with confidence and avoid delays."
Description

Provide real-time wallet visibility in UI and API showing Available, Reserved, and Consumed balances, plus a per-bundle allocation breakdown. Push live updates via WebSocket/SSE and render countdown timers to SLA expiry for each reservation. Include filters (project, date range, bundle, user), tooltips explaining states, and quick actions to release or extend holds. Ensure performant aggregation for large accounts, mobile-responsive views, and secure multi-tenant scoping.

Acceptance Criteria
Real-time Balances Panel and API Consistency
Given a signed-in user viewing the wallet panel for their tenant And the wallet has non-zero Available, Reserved, and Consumed balances When a reservation is created, extended, released, or consumed via any channel (UI/API) Then the UI balances for Available, Reserved, and Consumed update within 2 seconds of the event And the UI balances exactly match the /wallet API response for the same tenant within 1 second of the update And balances are displayed with 2 decimal precision and correct currency/units And loading/error states are shown if the wallet cannot be fetched, with a retry control And no balances from other tenants are ever displayed
Per-Bundle Allocation Breakdown at Scale
Given a tenant with at least 10,000 Route Bundles and active reservations When the user opens the Per-Bundle Allocation view Then the table lists Bundle ID, Project, User, Reserved Credits, Consumed Credits, SLA Expiry, and Status for each bundle And column sorting (asc/desc) works for all visible columns And pagination shows 50 rows per page by default and is configurable to 25/100 And p95 first-render time for a 50-row page is ≤ 800 ms with server-side aggregation enabled And the sum of Reserved and Consumed in the breakdown equals the wallet-level Reserved and Consumed for the applied filter set (±0 discrepancy) And on screens 320–767px wide the layout is mobile-responsive (no clipped text, horizontal scroll only for the table body, tap targets ≥ 44px)
SLA Countdown Timers and Auto-Release Behavior
Given a bundle reservation with a future SLA expiry timestamp When the user views the reservation in the UI Then a countdown timer shows HH:MM:SS remaining and decrements every second with ≤ 1s drift over 10 minutes And when the timer reaches 00:00:00 the reservation status changes to Released within 2 seconds And the wallet Reserved decreases and Available increases by the unused reserved amount atomically And audit logs capture auto-release with reservation ID, previous expiry, release time (UTC), and quantity And timers display in the user's local timezone while server-side events use UTC consistently
Multi-Field Filters and Aggregated Totals
Given filters for Project, Date Range (created/expiry), Bundle, and User When the user applies any combination of these filters Then the list and the aggregated totals (Available, Reserved, Consumed) update to reflect only the filtered scope And zero-result states display a clear message and a Reset Filters action And date ranges are interpreted in the user's timezone for UI display and converted to UTC for queries And the active filter set is encoded in the URL query string and restored on page reload and share And clearing all filters restores the unfiltered totals and list
State Tooltips and Context Help
Given info icons adjacent to Available, Reserved, Consumed, and SLA fields When the user hovers or focuses the info icon or label Then a tooltip appears within 150 ms containing a concise definition and an example for that state And tooltips are accessible (keyboard focusable, ESC to dismiss, ARIA-describedby for screen readers) And tooltips do not overflow viewport edges; on mobile they use a bottom sheet or centered popover And a Learn More link opens documentation in a new tab
Quick Actions to Release or Extend Holds
Given a user with the Wallet:ManageReservations permission When the user clicks Release on an active reservation Then a confirmation modal requires a reason and the action completes within 2 seconds And wallet balances reflect the release immediately (Reserved decreases; Available increases by the unused amount) And the reservation row status updates to Released and is no longer actionable When the user clicks Extend on an active reservation Then they must choose a new expiry within policy limits (max +72h from current time) or receive a validation error And a successful extension updates the SLA expiry and countdown within 2 seconds And all actions (release/extend) are idempotent, protected against double submit, and fully audit-logged
Live Updates via WebSocket/SSE with Secure Multi-Tenant Scoping
Given the client is authenticated and subscribes to live wallet updates for its tenant via WebSocket or SSE When reservations are created, extended, released, or consumed Then update messages are delivered with ordered sequence numbers and applied in order And a heartbeat is sent at least every 25 seconds; missed heartbeats trigger auto-reconnect with exponential backoff up to 60 seconds And if WS fails, the client falls back to SSE, then to 5-second polling, without user intervention And p95 end-to-end latency from event to UI update is ≤ 2 seconds under 1,000 events/min per tenant And subscription attempts to another tenant are rejected with 403 and no data leakage And message payloads contain only tenant-scoped fields and no PII beyond user display name/ID
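A rough sketch of the client-side fallback ladder (WebSocket, then SSE, then polling) with jittered exponential backoff capped at 60 seconds; the transport factories are placeholders, and sequence-number gap handling is left out.

import random
import time

def subscribe(transports, apply_update, max_backoff=60.0):
    """transports: ordered stream factories (WebSocket, SSE, polling), each yielding events."""
    backoff = 1.0
    while True:
        for open_stream in transports:               # try WS first, then SSE, then polling
            try:
                for event in open_stream():          # events carry ordered sequence numbers
                    apply_update(event)
                    backoff = 1.0                    # a healthy stream resets the backoff
            except ConnectionError:
                continue                             # drop to the next transport
        time.sleep(min(backoff, max_backoff) + random.random())   # jittered reconnect delay
        backoff *= 2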
Auto-release of Unused Reserves with SLA Windows
"As a finance admin, I want unused reserved credits to auto-release after the SLA window so that funds aren’t locked up and cash flow remains predictable."
Description

Introduce a resilient scheduler that automatically releases unused reserved credits when SLA windows expire or when bundles are canceled. Make SLA duration configurable per organization and per route type; support grace periods, manual extensions, and business-hours-aware timers. Handle partial consumption by releasing the remainder, ensure idempotent releases, persist job state for recovery on restart, and emit notifications and ledger entries for every release action.

Acceptance Criteria
Auto-Release on SLA Expiry
Given an active route bundle with 50 credits reserved and 0 consumed and an SLA of 4 business hours with no grace period When the SLA expiry timestamp is reached Then the system releases 50 credits back to the wallet within 60 seconds And wallet balances show reserved decreased by 50 and available increased by 50 And the bundle’s reserve status is set to "Released" And exactly one notification event of type "reserve_released" and one ledger entry with amount 50 and reason "SLA expired" are emitted
Release on Bundle Cancellation
Given an active route bundle with 80 credits reserved and 0 consumed When the bundle is canceled by an authorized user or automation Then the system releases 80 credits back to the wallet within 30 seconds And wallet balances reflect the release And the bundle’s reserve status is set to "Released - Canceled" And exactly one notification event and one ledger entry are emitted with reason "Bundle canceled"
Configurable SLA per Organization and Route Type
Given OrgA default SLA is 6 business hours and route type "inspection" override is 2 business hours When a new reserve is created for an "inspection" route Then the reservation’s expiry is persisted as now + 2 business hours in OrgA’s timezone When a new reserve is created for a route type with no override Then the reservation’s expiry is persisted as now + 6 business hours When SLA configuration changes after a reserve is created Then existing reservations retain their original persisted expiry unless manually extended
Grace Periods and Manual Extensions
Given a reserve with expiry T and a configured grace period of 30 minutes When T is reached Then no release occurs until T + 30 minutes Given an authorized user submits a manual extension of 1 hour before T + 30 minutes When the extension is saved Then the new expiry becomes T + 1 hour (plus any remaining grace behavior as configured) And an audit record captures user, timestamp, and reason And the reservation does not auto-release before the new expiry (and applicable grace)
Business-Hours-Aware SLA Timing
Given OrgA business hours are Mon–Fri 08:00–17:00 local time and weekends/holidays are excluded When a reserve is created Friday at 16:00 with a 4 business-hour SLA Then the expiry is computed as Monday 11:00 local time And the timer schedules firing at that local time When business-hours settings change after reserve creation Then existing reservations keep their computed expiry unless manually extended
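The business-hours arithmetic above (Friday 16:00 + 4 business hours = Monday 11:00) can be made concrete with a small sketch. This is illustrative only: it hard-codes Mon–Fri 08:00–17:00, ignores holidays and timezones, and steps by minutes for clarity rather than speed.

```typescript
// Minute-stepping business-hours addition; OPEN/CLOSE and the absence of a
// holiday calendar are simplifying assumptions.
const OPEN = 8, CLOSE = 17; // local working hours, Mon–Fri

function isBusinessMinute(d: Date): boolean {
  const day = d.getDay(); // 0 = Sunday, 6 = Saturday
  return day >= 1 && day <= 5 && d.getHours() >= OPEN && d.getHours() < CLOSE;
}

function addBusinessHours(start: Date, hours: number): Date {
  const t = new Date(start);
  let remaining = hours * 60; // business minutes still to accrue
  while (remaining > 0) {
    t.setMinutes(t.getMinutes() + 1);
    if (isBusinessMinute(t)) remaining--;
  }
  return t;
}

// addBusinessHours(Friday 16:00, 4) lands on Monday 11:00, matching the
// criterion above: one hour accrues Friday, three on Monday morning.
```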
Partial Consumption Remainder Release
Given a reserve of 120 credits and 45 credits have been consumed before expiry When a release is triggered by SLA expiry or cancellation Then only 75 credits are released to the wallet And wallet balances show reserved decreased by 75 and available increased by 75 And the ledger entry records released=75 and consumed=45 for traceability And the notification payload includes released and consumed amounts When consumption and release occur concurrently Then the final released amount equals max(reserved - consumed at commit time, 0)
Idempotency, Persistence, and Recovery
Given an in-flight release operation for a reserve of 60 credits When the release command is retried due to a transient error Then no duplicate release occurs And exactly one ledger entry exists with a stable idempotency key And at most one notification is delivered per subscriber Given the scheduler service crashes and restarts When it resumes Then all due releases are processed within 60 seconds of restart And previously completed releases are not executed again And job state (expiry, grace, extensions, next run) is read from durable storage
Pre-run Validation and Failure Prevention
"As a crew lead, I want the system to block runs that don’t have sufficient reserved credits so that my team doesn’t get stranded mid-route."
Description

Add a pre-run validation gate that verifies the required credits are reserved and consistent with the planned tasks before a run can start. Block initiation if mismatches are detected and provide remediation flows to top up the wallet, shrink the bundle, or re-attempt reservation. Enforce checks at both API and UI layers, support retries with exponential backoff, and apply distributed locks during run initiation to maintain reservation integrity and prevent mid-run failures.

Acceptance Criteria
Block Run on Reserved Credit Mismatch (API and UI)
Given a route bundle requires N credits and the wallet has M reserved credits where M < N When the user or API attempts to start the run Then initiation is blocked and zero jobs are started And the API responds 409 ReservationMismatch with fields: required=N, reserved=M, deficit=N-M And the UI displays a blocking banner with the deficit and remediation options: Top Up, Shrink Bundle, Retry Reservation And the Start Run control remains disabled until validation passes And an audit log entry is recorded with correlationId and reason=ReservationMismatch
Remediation: Wallet Top-Up from Validation Gate
Given a run is blocked for insufficient reserved credits and the user selects Top Up When the user purchases X credits successfully Then the wallet available balance increases by X and the system immediately re-attempts reservation And if reserved >= required after re-attempt, the run becomes startable and the UI updates within 3 seconds And if the purchase fails, no credits are reserved and the error is shown without enabling Start And the API exposes an idempotent endpoint to re-validate and reserve after top-up using the same idempotency key
Remediation: Shrink Route Bundle to Fit Available Credits
Given a run is blocked for reservation deficit When the user opens Shrink Bundle and deselects tasks Then the UI recalculates required credits in real time and shows remaining deficit until deficit <= 0 And Confirm Shrink is disabled while deficit > 0 And upon confirmation, the bundle is updated, validation is re-run, and if reserved >= new required, Start Run becomes enabled And deselected tasks are preserved as a draft list with counts and can be restored before start
Transient Failure Retries with Exponential Backoff and Jitter
Given the reservation or validation service returns a retryable error (HTTP 5xx, network timeout) When the system attempts to validate or reserve Then it retries up to 5 times with exponential backoff starting at 200 ms, multiplier 2, capped at 3 s, with full jitter And if a retry succeeds, the process continues without user intervention And if all retries fail, the API returns 503 ServiceUnavailable with error=ReservationServiceUnavailable and attempts=5 And the UI shows a temporary issue message and offers Retry, without enabling Start And logs include attempt count, last error, and correlationId
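A compact sketch of this retry policy follows; `withRetries` and the retryability predicate are illustrative names, and a real caller would classify only 5xx responses and timeouts as retryable, as the criterion states.

```typescript
// Up to 5 attempts; exponential backoff 200 ms * 2^(attempt-1), capped at
// 3 s, with full jitter (uniform delay in [0, backoff]).
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetries<T>(
  op: () => Promise<T>,
  isRetryable: (err: unknown) => boolean,
  maxAttempts = 5,
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (!isRetryable(err) || attempt >= maxAttempts) throw err; // surfaces as 503 upstream
      const backoff = Math.min(200 * 2 ** (attempt - 1), 3_000);
      await sleep(Math.random() * backoff); // full jitter
    }
  }
}
```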
Distributed Locking During Run Initiation
Given two or more concurrent start attempts for the same bundle When initiation begins Then a distributed lock scoped to bundleId is acquired with TTL=30 s and an idempotency key And only the lock holder proceeds to reservation and run start And non-holders receive 423 Locked or 409 AlreadyInProgress with retryAfter <= 5 s And no duplicate reservations or duplicate job starts occur And the lock is released on completion or expires after TTL, with state remaining consistent
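One common way to realize this lock is Redis SET with NX/PX plus a compare-and-delete release. The sketch below uses the ioredis client; the key naming and token scheme are assumptions.

```typescript
import Redis from "ioredis";
import { randomUUID } from "node:crypto";

const redis = new Redis();

// Acquire: succeeds only if no other holder exists; the TTL guards against a
// crashed holder leaving the bundle locked forever.
async function acquireRunLock(bundleId: string, ttlMs = 30_000): Promise<string | null> {
  const token = randomUUID(); // identifies this holder
  const ok = await redis.set(`run-lock:${bundleId}`, token, "PX", ttlMs, "NX");
  return ok === "OK" ? token : null; // null -> respond 423 Locked / 409 AlreadyInProgress
}

// Release: compare-and-delete so a lock that expired and was re-acquired by
// another holder is never deleted by the original caller.
async function releaseRunLock(bundleId: string, token: string): Promise<void> {
  const script = `if redis.call("get", KEYS[1]) == ARGV[1]
                  then return redis.call("del", KEYS[1]) else return 0 end`;
  await redis.eval(script, 1, `run-lock:${bundleId}`, token);
}
```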
Consistent Error Codes and UI Disablement on Validation Failure
Given validation fails for a known reason When the API responds Then the status codes and error codes are as follows:
- InsufficientReservedCredits -> 409 ReservationMismatch
- ReservationExpired -> 409 ReservationExpired
- ReservationServiceUnavailable -> 503 ServiceUnavailable
And the response includes required, reserved, deficit when applicable and a correlationId And the UI disables Start Run, focuses the error banner, and announces via ARIA live region within 1 second And primary CTAs match the reason: Top Up (mismatch), Shrink Bundle (mismatch), Retry (service unavailable), Re-Reserve (expired)
Reservation Expiry Handling and Auto-Reattempt Prior to Start
Given the existing reservation will expire in less than 60 seconds at the time of start attempt When start is initiated Then the system automatically attempts a single renewal or re-reservation using the same bundleId and idempotency key And if renewal succeeds and reserved >= required, the run proceeds And if renewal fails, the run is blocked with 409 ReservationExpired and remediation options displayed And no credits are consumed on failure, and the stale reservation is marked released And audit logs include priorReservationId, newReservationId (if any), and timestamps
Bulk Import Reservation Support (CSV/API)
"As an operations coordinator, I want bulk imports to automatically reserve the required credits so that large drops are ready to dispatch without manual juggling."
Description

Enhance bulk import workflows to compute required credits on the fly and place reservations transactionally per bundle during CSV uploads and API submissions. Provide asynchronous processing with job IDs, progress tracking, and granular error reporting for insufficient credits, invalid rows, and conflicts. Support idempotency keys to avoid double-reserving, apply rate limiting, and emit webhooks/callbacks upon completion or failure with a per-record outcome summary.

Acceptance Criteria
Transactional Credit Reservation per Bundle (CSV/API)
Given an organization wallet and a bulk import payload (CSV or API) containing multiple route bundles with computed credit costs When the submission is accepted for processing Then the system computes required credits per bundle before any side effects And reserves the exact number of credits per bundle atomically (all-or-nothing per bundle) And updates wallet balances immediately (reserved and available) after each successful reservation And bundles that cannot be funded due to insufficient available credits are rejected without any reservation while other eligible bundles proceed And a bulk job_id is returned with initial state "queued"
Asynchronous Job Creation and Progress Tracking
Given a created bulk job_id When the client queries GET /bulk/jobs/{job_id} Then the response includes state in [queued, running, succeeded, failed, partially_failed] And includes counts: total_records, processed, succeeded, failed, reserved_credits_total And includes timestamps: created_at, started_at (nullable), finished_at (nullable) And percent_complete = 100 × processed / total_records, accurate to the nearest 1% And upon completion, state is terminal and finished_at is populated
Granular Per-Record Outcome and Error Reporting
Given a bulk job completes processing When the client requests GET /bulk/jobs/{job_id}/results Then the response contains one entry per input row with fields: input_index, record_ref, status in [success, error], and details And error entries include standardized code in [INSUFFICIENT_CREDITS, INVALID_ROW, CONFLICT, DUPLICATE_ID, SCHEMA_MISMATCH] and a human-readable message And success entries include reservation_id and reserved_credits And errored records create no reservations or other side effects
Idempotency for Bulk Submissions
Given a client includes Idempotency-Key K with payload P When the client retries the same request with K and identical P within the configured idempotency TTL Then the API returns the original job_id and does not create new reservations And if the client retries with K but a different P, the API returns 409 Conflict (Idempotency Mismatch) and no side effects occur And idempotency scope is per-organization and keys expire after the TTL
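The key-plus-payload check can be sketched as follows. The in-memory store, hashing scheme, and job-ID format are illustrative; a production store would be a database or Redis entry with the configured TTL.

```typescript
import { createHash } from "node:crypto";

type Stored = { payloadHash: string; jobId: string };
const idempotencyStore = new Map<string, Stored>(); // assumed stand-in for a TTL'd store

function submitBulkJob(orgId: string, key: string, rawPayload: string): { jobId: string } {
  const payloadHash = createHash("sha256").update(rawPayload).digest("hex");
  const existing = idempotencyStore.get(`${orgId}:${key}`); // per-organization scope
  if (existing) {
    if (existing.payloadHash !== payloadHash) {
      throw new Error("409 Conflict: Idempotency-Key reused with a different payload");
    }
    return { jobId: existing.jobId }; // same key + same payload -> original job_id
  }
  const jobId = `job_${payloadHash.slice(0, 12)}`; // placeholder ID scheme
  idempotencyStore.set(`${orgId}:${key}`, { payloadHash, jobId });
  return { jobId };
}
```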
Rate Limiting and Safe Throttling
Given the client exceeds the configured rate limit for bulk import submissions When an additional submission is received Then the API responds 429 Too Many Requests with a Retry-After header And no job is created and no credits are reserved And subsequent requests succeed once below the limit without residual side effects
Webhooks/Callbacks with Per-Record Summary
Given a bulk job has a webhook_url configured When the job reaches a terminal state (succeeded, failed, or partially_failed) Then the system POSTs a JSON payload containing job_id, state, counts (total, succeeded, failed), reserved_credits_total, and a URL to download per-record results And the request is signed (HMAC) with a shared secret and includes an idempotency event_id And delivery is at-least-once with exponential retries for non-2xx responses up to the configured max_attempts And duplicate deliveries can be detected by event_id by the receiver
Conflict Detection and Concurrency Safety
Given two inputs in the same or concurrent jobs reference the same normalized bundle identifier When processed concurrently by multiple workers Then exactly one succeeds and creates a reservation; the other is marked error with code CONFLICT And no double reservation occurs under concurrent processing across workers or nodes
Notifications and Webhooks for Reserve Events
"As an administrator, I want timely alerts about reservation status changes so that I can proactively top up credits and adjust schedules."
Description

Deliver configurable notifications and webhooks for reservation lifecycle events including low available credits, reservation success/failure, impending SLA expiry, auto-release executed, and manual overrides. Offer in-app, email, and optional SMS channels with per-organization thresholds and per-user preferences. Include contextual deep links to wallet top-up and bundle details, deduplicate noisy alerts, and honor quiet hours and role-based visibility.

Acceptance Criteria
Low Available Credits Threshold Alert
Given an organization-level low-credits threshold T is configured and available credits drop to <= T When the threshold is crossed Then send a single alert within 60 seconds via enabled channels according to each recipient’s preferences (in-app required; email/SMS optional) And include current available credits, threshold T, organization name, and a deep link to Wallet Top-up And deliver to Org Owners and Billing Admins by default; other roles see in-app only if they have wallet visibility And suppress duplicate alerts for 30 minutes per organization per threshold direction unless credits rise above T and cross again And honor quiet hours by suppressing email/SMS and delivering in-app only
Reservation Outcome Notifications (Success/Failure)
Given a Route Bundle reservation attempt completes for reservation_id R When it succeeds Then notify the bundle creator and watchers within 60 seconds per their channel preferences and include bundle ID, credits reserved, reservation_id R, and a deep link to Bundle Details When it fails Then notify the same recipients within 60 seconds, include reservation_id R, failure reason code/message, and a deep link (Wallet Top-up if InsufficientCredits; otherwise Error Details) And only users with read access to the bundle receive the notification And deduplicate so only one notification per reservation_id outcome is sent (retries or internal reprocessing do not create additional alerts) And honor quiet hours by suppressing email/SMS and delivering in-app only
Impending SLA Expiry Warning
Given a reservation with SLA expiry at time E and an org-configured warning schedule of 60 minutes and 15 minutes prior When the current time crosses E-60m or E-15m and the reservation is still active Then send a warning within 60 seconds to the bundle creator and watchers per their channel preferences And include reservation_id, time remaining, and a deep link to Extend/Release actions if the user has permission (else Bundle Details) And do not send warnings after the reservation is released or expired And deduplicate so each threshold warning fires at most once per reservation And honor quiet hours by suppressing email/SMS and delivering in-app only
Auto-Release Executed Notification
Given a reservation reaches expiry and auto-release executes for auto_release_id A When the auto-release completes Then update wallet balances to reflect credits returned within 5 seconds and notify the bundle creator and watchers within 60 seconds per their channel preferences And include reservation_id, credits released, auto_release_id A, and a deep link to Bundle Details And deliver only to users with access to the affected bundle; Org Owners/Billing Admins may opt-in org-wide And deduplicate so only one notification per auto_release_id A is sent And honor quiet hours by suppressing email/SMS and delivering in-app only
Manual Override Event Notification
Given a user with role Org Owner or Billing Admin performs a manual override on a reservation (e.g., force release or adjust amount) When the override is saved Then create an auditable event with actor, timestamp, before/after values, and reason, and notify the bundle creator, watchers, and Billing Admins within 60 seconds per their preferences And include reservation_id, override_id, action taken, and a deep link to the affected Bundle/Wallet record And enforce role-based visibility so only authorized users can view details And deduplicate so only one notification per override_id is sent And honor quiet hours by suppressing email/SMS and delivering in-app only
Webhook Delivery, Security, and Retry
Given an organization has at least one active webhook endpoint configured When any reserve event occurs (low_credits, reservation_success, reservation_failure, sla_warning, auto_release, manual_override) Then POST a JSON payload within 60 seconds containing event_type, event_id (UUID), occurred_at (ISO-8601), organization_id, reservation_id, bundle_id (when applicable), and event-specific fields And include an HMAC-SHA256 signature header (X-RoofLens-Signature) computed with the org’s shared secret over the canonical payload And include X-Idempotency-Key equal to event_id to support idempotent processing by receivers And retry on non-2xx responses with exponential backoff starting at 1 s and doubling each interval, up to 6 attempts, then mark as failed and surface in an in-app delivery log with last error And do not send multiple POSTs for the same event_id to the same endpoint (one delivery attempt sequence per endpoint); multiple endpoints each receive the event once And provide a test mode that sends a sample event and records success on any 2xx response
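Receivers can verify the signature described above with a constant-time HMAC comparison. A minimal Node/TypeScript sketch; hex encoding and signing the raw request body are assumptions, since the exact canonical payload format is defined by the webhook contract.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Returns true when X-RoofLens-Signature matches HMAC-SHA256(secret, rawBody).
function verifyWebhook(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expectedHex = createHmac("sha256", secret).update(rawBody, "utf8").digest("hex");
  const expected = Buffer.from(expectedHex, "hex");
  const received = Buffer.from(signatureHeader, "hex");
  // timingSafeEqual prevents timing side-channels on the comparison.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```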
Ledger and Audit Trail for Credit Lifecycle
"As an auditor, I want a detailed ledger of all reservation and release events so that I can reconcile charges and resolve disputes quickly."
Description

Create an immutable audit ledger that records reserve, consume, extend, release, and refund events with timestamps, actor, origin (UI/API), bundle/job references, previous/new balances, and correlation IDs. Provide filtering, search, and CSV export in the admin UI along with a read-only API for integrations. Implement retention policies, tamper-evident signing, and reconciliation views to support dispute resolution and financial audits.

Acceptance Criteria
Immutable Ledger Entry for Credit Events
- For each event in {reserve, consume, extend, release, refund}, the system appends exactly one ledger entry capturing: event_id, event_type, utc_timestamp (ms precision), actor_id, origin {UI|API}, bundle_id, job_id (if applicable), previous_balance, delta_amount, new_balance, correlation_id, and signature.
- Constraint: previous_balance + delta_amount = new_balance; balances never become negative; numeric values use consistent currency/credit units.
- Entries are write-once: any attempt to update or delete an entry is rejected with HTTP 405/no-op and is logged.
- Idempotency: submitting the same event with the same correlation_id returns the original event_id and does not create a duplicate entry.
- Timestamps are stored/displayed in UTC ISO 8601 with millisecond precision.
- Extend events that do not change balances still produce a ledger entry with delta_amount = 0 and updated metadata (e.g., expiration).
Tamper-Evident Hash Chain and Signature Verification
- Each ledger entry includes a cryptographic signature and a hash-chain link (prev_hash) computed over a canonicalized payload; fields include alg and key_id.
- On write, the service validates the signature and chain; invalid signatures or a broken chain cause the write to be rejected with HTTP 400 and no entry is appended.
- A verification job validates 100% of new entries at write time and performs a full chain verification daily; any failure raises an alert and marks affected entries as suspect in the UI.
- The read-only API exposes signature, hash, and prev_hash; clients can verify end-to-end integrity.
- Key rotation is supported: entries signed with retired keys still verify using archived public keys identified by key_id.
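The hash-chain mechanics can be illustrated with a short sketch. The canonicalization shown (sorted keys of a flat payload) and the GENESIS sentinel are assumptions; real entries would also carry a key_id and an asymmetric signature as specified above.

```typescript
import { createHash } from "node:crypto";

interface LedgerEntry { payload: Record<string, unknown>; prevHash: string; hash: string }

// Canonical form: JSON with keys emitted in sorted order (flat payloads only).
const canonicalize = (p: Record<string, unknown>) =>
  JSON.stringify(p, Object.keys(p).sort());

function appendEntry(chain: LedgerEntry[], payload: Record<string, unknown>): LedgerEntry {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256").update(canonicalize(payload) + prevHash).digest("hex");
  const entry: LedgerEntry = { payload, prevHash, hash };
  chain.push(entry);
  return entry;
}

// Full verification: altering any entry breaks its own hash and every later link.
function verifyChain(chain: LedgerEntry[]): boolean {
  return chain.every((e, i) => {
    const prev = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const recomputed = createHash("sha256")
      .update(canonicalize(e.payload) + prev).digest("hex");
    return e.prevHash === prev && e.hash === recomputed;
  });
}
```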
Admin UI Filtering, Search, Pagination, and Sort
- Admins can filter ledger entries by date range (UTC), event_type (multi-select), actor_id, origin, bundle_id, job_id, wallet_id, and correlation_id; filters combine with AND logic.
- Keyword search supports exact match for event_id/correlation_id and substring match for actor_id.
- Default sort is timestamp desc; users can sort by any column asc/desc; sorting is stable across pages.
- Pagination offers page sizes 25/50/100 (default 50) and shows total results; empty result sets display a clear empty state.
- Performance: the filtered list renders within 500 ms at the 95th percentile for datasets up to 100k entries; pagination actions within 300 ms at P95.
CSV Export of Filtered Ledger Results
- Export downloads the current filtered result set in the same sort order; columns: event_id, event_type, utc_timestamp, actor_id, origin, bundle_id, job_id, previous_balance, delta_amount, new_balance, correlation_id, signature, hash, prev_hash.
- CSV is UTF-8, RFC 4180 compliant; values are properly quoted/escaped; timestamps are UTC ISO 8601 with milliseconds; numbers are unformatted plain decimals.
- If the result set is <= 100,000 rows, export completes synchronously and starts download within 5 seconds; otherwise an async job is created and a link is delivered within 10 minutes.
- Exports respect authorization; non-admins cannot access ledger export endpoints; unauthorized attempts return HTTP 403.
- File name pattern: ledger_YYYYMMDDThhmmssZ_<filter-hash>.csv.
Read-Only Ledger API for Integrations
- Provide GET /api/v1/ledger with filters: date_from/date_to, event_type, actor_id, origin, bundle_id, job_id, wallet_id, correlation_id; supports cursor-based pagination (limit, next_cursor).
- Responses are sorted by timestamp desc by default and are stable within a snapshot; clients can request asc via sort=timestamp:asc.
- The API is read-only: POST/PUT/PATCH/DELETE routes do not exist or return HTTP 405; no field in existing entries can be mutated.
- Rate limiting is enforced at 60 requests/min per token; over-limit requests return HTTP 429 with a Retry-After header.
- OpenAPI schema is published; field types and required fields match the schema; sample responses validate against the schema in CI.
Retention, Archival, and Protected Purge
- Retention period is configurable per environment (default 7 years); entries older than retention are moved to immutable archive storage and removed from hot indexes.
- Entries under dispute/hold are exempt from purge until the hold is cleared; purge respects the legal_hold=true flag.
- A daily job performs archival/purge and emits a signed report summarizing counts moved/purged and any failures; the report is viewable/downloadable in admin.
- Archived entries remain verifiable (signatures/hash-chain) and retrievable via export/API by super-admins only.
- Any purge/archival action is itself recorded as a ledger meta-event (no balance impact) for auditability.
Reconciliation View and Balance Snapshot Consistency
- The system generates daily closing balance snapshots per wallet at 00:00:00Z derived solely from ledger entries; snapshots are stored and signed.
- The reconciliation view compares the balance computed from ledger events over a selectable range to the wallet’s reported balance; when data is consistent, the difference equals 0.
- Any non-zero discrepancy is flagged with severity, provides drill-down to the contributing events (by correlation_id), and can be exported as CSV.
- A backfill job can recompute snapshots for a historical period and must produce the same results deterministically.
- P95 load: reconciliation for a 30-day period with 50k events completes within 5 seconds.

Guest Pockets

Issue time‑boxed, geofenced micro‑wallets to subcontractors or Guest Pass users with strict per‑job limits. Credits can only be spent on the assigned address(es), with full audit in the Custody Ledger—enabling secure collaboration without risking your main balance.

Requirements

Pocket Creation & Assignment
"As an operations manager, I want to create and fund a geofenced pocket tied to a job so that subcontractors can purchase only the RoofLens services needed for that address without exposing our main balance."
Description

Enable account owners to create and fund micro-wallets (Guest Pockets) tied to one or more job addresses, with configurable budget, allowed SKUs (e.g., AI Measurement Report, Damage Map, Automated Estimate), geofence geometry (radius or polygon), and validity window. Pockets can be assigned to existing subcontractor users or issued via Guest Pass to external collaborators. Funds are drawn from the main account balance and displayed as a dedicated pocket balance. Provide templates and cloning to speed setup for common job types. Persist pocket state and relationships to jobs, users, and the Custody Ledger. Surface creation/edit UI in the Job and Billing modules and expose the capability via API for automation.

Acceptance Criteria
Create and Fund Guest Pocket via Job UI
Given an account owner with sufficient main balance and a job with at least one address exists When the owner creates a Guest Pocket in the Job module and inputs name, selects one or more job addresses, sets a positive budget, selects allowed SKUs, defines a geofence (radius in meters or polygon with at least 3 vertices and no self-intersections), and sets a validity window (start < end) Then the pocket is created in Active state, the pocket balance equals the budget, the main balance is debited by the budget, and the pocket is associated to the selected job(s) and addresses And the pocket appears in both Job > Guest Pockets and Billing > Pockets lists with correct balances and metadata And a Custody Ledger entry records the debit with pocket ID, amount, actor, and timestamp And if any validation fails (insufficient balance, invalid geometry, empty SKU list, invalid dates), the system blocks creation and shows field-level errors without debiting funds
Assignee Management: Internal User and Guest Pass
Given a created pocket in Active state When the owner assigns the pocket to an existing subcontractor user Then that user can view and spend from only the assigned pocket and cannot view the main balance or other pockets And the assignment is persisted with user ID and effective time, and an assignment audit event is written to the Custody Ledger When the owner issues a Guest Pass for the pocket to an external email Then a one-time, expiring claim link is generated tied to the pocket and email, claiming requires email OTP verification, and on claim a guest user is provisioned with access scoped only to the pocket and assigned jobs And the Guest Pass link expires on first claim or at pocket end time, whichever comes first, and all claim and access events are logged in the Custody Ledger
Allowed SKUs and Spend Enforcement
Given a pocket with an allowed SKU list and a remaining balance When the assignee initiates a purchase using the pocket Then the purchase succeeds only if the SKU is in the allowed list, the selected job/address is within the pocket scope, the pocket is Active, and total cost (price plus applicable taxes/fees) is less than or equal to the remaining pocket balance And on success, the remaining pocket balance is decremented by the total cost and a Custody Ledger entry records the spend with SKU, amount, job ID, and asset ID And if the SKU is not allowed, the job is out of scope, the pocket is inactive/expired, or the cost exceeds the remaining balance, the transaction is blocked with a descriptive error and no balance change And concurrent purchase attempts against the same pocket are serialized to prevent double-spend
Geofence Definition and Enforcement (Radius/Polygon)
Given a pocket with a geofence defined as either a radius (in meters) around a job address or a polygon (at least 3 vertices, closed, non-self-intersecting) When a spend is initiated that attaches imagery or selects a job location with coordinates Then the system validates that the coordinates fall within the pocket geofence and one of the pocket’s assigned addresses before allowing the spend And if the uploaded asset lacks coordinates, the assignee must select one of the pocket’s assigned addresses; otherwise the spend is blocked And if the location is outside the geofence, the spend is blocked with an out-of-geofence error and no balance change
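Both geofence shapes reduce to standard geometry checks. A sketch under the assumption of WGS-84 coordinates in degrees, using the haversine formula for the radius case and ray casting for the polygon case:

```typescript
type Point = { lat: number; lng: number };

// Great-circle distance in meters (haversine; Earth radius ~6,371 km).
function haversineMeters(a: Point, b: Point): number {
  const R = 6_371_000, rad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * rad;
  const dLng = (b.lng - a.lng) * rad;
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(a.lat * rad) * Math.cos(b.lat * rad) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

const insideRadius = (p: Point, center: Point, radiusMeters: number) =>
  haversineMeters(p, center) <= radiusMeters;

// Ray casting: a point is inside a simple polygon if a ray from it crosses
// the boundary an odd number of times. Relies on the non-self-intersecting
// polygon that the creation flow already validates.
function insidePolygon(p: Point, polygon: Point[]): boolean {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const a = polygon[i], b = polygon[j];
    const crosses =
      (a.lat > p.lat) !== (b.lat > p.lat) &&
      p.lng < ((b.lng - a.lng) * (p.lat - a.lat)) / (b.lat - a.lat) + a.lng;
    if (crosses) inside = !inside;
  }
  return inside;
}
```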
Validity Window Configuration and Enforcement
Given a pocket configured with start and end timestamps When the current time is within the validity window Then spend and assignment actions are allowed When the current time is before the start or after the end Then new spends are blocked, the pocket state reflects Not Started or Expired respectively, and edits are limited to account owners And all state transitions (e.g., Active to Expired) are recorded in the Custody Ledger with actor and timestamp
Templates and Cloning for Rapid Setup
Given an account owner on the Pocket creation screen When the owner saves the current configuration as a template with name, allowed SKUs, default geofence type, and default validity duration Then the template is stored and appears in the template list for future jobs When the owner chooses Clone from Template on a job Then a new pocket draft is pre-populated from the template and allows overrides to budget, addresses, geofence geometry, and dates before creation And no funds are debited and no ledger entries are written until the owner confirms pocket creation
API: Create and Manage Guest Pockets
Given a valid API token with pocket:write scope When a POST /pockets request includes name, job_ids, budget, allowed_skus, geofence (radius or polygon), validity_window, and optional assignee (user_id or guest_pass email) Then the API returns 201 with pocket_id and created attributes, the main balance is debited by budget, pocket balance equals budget, and a Custody Ledger entry is created When a GET /pockets/{id} request is made Then the API returns 200 with pocket state, balances, relationships (job_ids, assignee), and audit references When a PATCH /pockets/{id} edits allowed_skus, geofence, or validity_window by an owner token Then changes are persisted and audit logged; edits that would violate constraints (e.g., end before start, invalid geometry) are rejected with 422 and no state change
Geofence Spend Enforcement
"As a finance admin, I want all pocket spends to be validated against the assigned geofence so that credits cannot be used on other properties."
Description

Enforce that all pocket-funded transactions occur within the assigned job geofence and address set. Validate spend requests by cross-checking order metadata (address normalization), asset GPS/EXIF, drone flight logs, and uploader device location (when available). Block or require approval for attempts outside the geofence or for mismatched addresses, and present clear error messaging. Store geospatial evidence and evaluation outcomes in the Custody Ledger for auditability. Integrate enforcement into the checkout/purchase flow for reports and processing orders across web and API channels.

Acceptance Criteria
Auto-Approve Spend Within Geofence (Web Checkout)
Given a Guest Pocket with an assigned geofence polygon and address set And enforcement mode is Active And the user initiates a pocket-funded web checkout for a RoofLens report or processing order When the order address normalizes to a canonical form that matches an address in the pocket’s assigned set And at least one available geo-evidence signal (asset EXIF GPS, drone flight log, uploader device location) is within the geofence using a 25 m tolerance And no available signal indicates a location more than 25 m outside the geofence Then the spend request is authorized without manual approval And the geofence evaluation completes within 2 seconds at p95 And a Custody Ledger entry is created with decision=Approved, reason_codes=["InsideGeofence"], and all evidence artifacts attached
Block Spend Outside Geofence (Auto-Block Mode)
Given a Guest Pocket with enforcement mode=Auto-Block And an assigned geofence polygon and address set When the normalized order address does not match the assigned address set Or all available geo-evidence signals are more than 25 m outside the geofence Then the spend request is blocked and no funds are reserved or captured And the user sees an inline error message with code=GEOFENCE_OUT_OF_BOUNDS and a human-readable explanation referencing the job address And a Custody Ledger entry is created with decision=Blocked, reason_codes=["AddressMismatch" or "OutsideGeofence"], and all evidence artifacts attached
Require Approval Outside Geofence (Approval Mode)
Given a Guest Pocket with enforcement mode=RequireApproval When the normalized order address does not match the assigned address set Or the highest-precedence geo-evidence signal is more than 25 m outside the geofence Then the spend request transitions to state=PendingApproval and no funds are deducted And the account owner is notified via in-app notification and webhook within 60 seconds And the UI/API presents a link to Review Geofence Evidence And if no action is taken within 48 hours, the request auto-expires with state=Expired and the Custody Ledger records decision=Expired
Address Normalization and Matching
Given an order with an entered service address When the system normalizes the address to a canonical format and geocodes it Then the normalized address is matched to the pocket’s assigned address set by canonical ID or by distance ≤ 25 m to any assigned address geocode And the match process returns match_status ∈ {Exact, Near, NoMatch} and confidence ∈ [0,1] And orders with match_status=Exact are eligible to proceed; Near requires RequireApproval; NoMatch is Blocked in Auto-Block mode And the Custody Ledger stores normalized_address, geocode_point, match_status, and confidence
Geo-Evidence Fusion and Precedence
Given multiple geo-evidence signals may be available for an order (drone flight log, asset EXIF GPS, uploader device location) When signals conflict or are missing Then precedence is applied: DroneFlightLog > AssetEXIF > DeviceLocation > AddressGeocode And if any higher-precedence signal is >25 m outside the geofence, the decision is OutsideGeofence regardless of lower-precedence signals And if only lower-precedence signals indicate Inside and higher-precedence are missing, the decision is RequireApproval (if configured) or Block (Auto-Block) And the final decision includes selected_signal, all_signal_distances, and reason_codes in the Custody Ledger
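One plausible reading of the precedence rule is a highest-available-signal scan, sketched below. The decision names and the 25 m tolerance mirror the criteria; the treatment of lower-precedence-only evidence follows the RequireApproval branch described above, and all type names are illustrative.

```typescript
type SignalKind = "DroneFlightLog" | "AssetEXIF" | "DeviceLocation" | "AddressGeocode";
type Decision = "Inside" | "OutsideGeofence" | "RequireApproval";

const PRECEDENCE: SignalKind[] = [
  "DroneFlightLog", "AssetEXIF", "DeviceLocation", "AddressGeocode",
];
const TOLERANCE_M = 25;

interface Signal { kind: SignalKind; distanceOutsideFenceMeters: number } // 0 = inside

function fuseSignals(signals: Signal[]): Decision {
  for (const kind of PRECEDENCE) {
    const s = signals.find((x) => x.kind === kind);
    if (!s) continue; // missing signal: fall through to the next precedence level
    if (s.distanceOutsideFenceMeters > TOLERANCE_M) return "OutsideGeofence";
    // Inside within tolerance: trust high-precedence evidence outright;
    // lower-precedence evidence alone routes to approval per the criterion.
    return kind === "DroneFlightLog" || kind === "AssetEXIF"
      ? "Inside"
      : "RequireApproval";
  }
  return "RequireApproval"; // no geo-evidence at all
}
```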
Custody Ledger Evidence and Immutability
Given any pocket-funded spend request is evaluated When a decision (Approved, Blocked, PendingApproval, Expired) is produced Then an append-only Custody Ledger entry is written before the client receives the response And the entry contains: pocket_id, order_id, geofence_id+version, normalized_address, geocode_point, evaluation_mode, selected_signal, all_signal_metadata (coords, timestamps, EXIF fields, flight summary), computed distances to geofence, decision, reason_codes, evaluator_version, channel (web/api), and artifact_hashes And the entry is retrievable via UI and API within 2 seconds p95 And the entry cannot be modified after write; any correction is a new appended entry with prior_entry_id reference
API and Web Enforcement Parity and Error Contracts
Given a pocket-funded purchase is initiated via Web UI or REST API v1 When geofence enforcement runs Then identical decision logic and reason_codes are applied in both channels And API responses follow contracts: Approved → 201 Created with body {state:"Approved", decision:"Approved"}; PendingApproval → 202 Accepted with body {state:"PendingApproval"}; Blocked → 422 Unprocessable Entity with body {code:"GEOFENCE_OUT_OF_BOUNDS" or "ADDRESS_MISMATCH"} And repeated API calls with the same Idempotency-Key return the original decision for 5 minutes And Web UI displays the same code and human-readable message as the API body And direct order creation endpoints cannot bypass enforcement (requests without enforcement_result are rejected with 400)
Time-boxed Expiry & Auto-Reclaim
"As an account owner, I want pockets to expire and auto-return unused funds so that I’m not carrying open exposure after a job ends."
Description

Support start and end times for each pocket with automatic suspension at expiry and auto-reclaim of any unused funds back to the parent account. Allow owners to pause/resume pockets, extend end dates, and set pre-expiration reminders (e.g., 24/72 hours). Ensure all state transitions are atomic and recorded in the Custody Ledger with before/after balances. Handle timezone normalization, partial refunds for in-flight orders, and idempotent retries to prevent double-reclaims.

Acceptance Criteria
Start and End Time Enforcement with Automatic Suspension
Given a pocket with start_time and end_time defined in UTC and an initial balance > 0 When a spend is attempted before start_time Then the spend is rejected with error code POCKET_NOT_ACTIVE and no ledger debit is recorded And when current_time reaches start_time Then the pocket state becomes Active and spends are permitted And when current_time reaches end_time Then the pocket state transitions to Suspended within 60 seconds and all spend attempts are rejected with error code POCKET_EXPIRED And the suspension transition is recorded atomically in the Custody Ledger with a single state-change entry including before/after status and unchanged balances
Automatic Reclaim of Unused Funds to Parent Account
Given a pocket has reached end_time and has a remaining balance X > 0 When the auto-reclaim process executes Then X is debited from the pocket and credited to the parent account in a single atomic operation And the Custody Ledger records correlated entries (one debit, one credit) with a shared correlationId and before/after balances for both accounts And the pocket balance becomes 0 and the pocket remains Suspended And re-invoking the same reclaim with the same correlationId is idempotent and performs no additional transfer And if X = 0 at execution time, no transfer occurs and a no-op ledger entry is not created
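The double-reclaim guard can be sketched as a correlationId-keyed lookup in front of the transfer; the in-memory map stands in for a unique database constraint, and all names are illustrative.

```typescript
interface Reclaim { correlationId: string; amount: number }
const completedReclaims = new Map<string, Reclaim>(); // real impl: unique DB constraint

function reclaimUnusedFunds(correlationId: string, pocketBalance: number): Reclaim | null {
  const prior = completedReclaims.get(correlationId);
  if (prior) return prior;              // idempotent replay: no second transfer
  if (pocketBalance === 0) return null; // nothing to move; no no-op ledger entry
  const transfer: Reclaim = { correlationId, amount: pocketBalance };
  // A real implementation would debit the pocket and credit the parent account
  // in a single DB transaction, writing both correlated ledger entries before commit.
  completedReclaims.set(correlationId, transfer);
  return transfer;
}
```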
Owner Pause and Resume Controls
Given an owner with Manage permissions issues a Pause on an Active pocket Then all spend attempts are rejected with error code POCKET_PAUSED and no funds move And the Custody Ledger records an atomic state-change entry with before/after status and unchanged balances When the owner issues Resume and current_time is between start_time and end_time Then spends are permitted within 30 seconds of resume When the owner issues Resume at or after end_time Then the operation is rejected with 409 CONFLICT and POCKET_EXPIRED, and no ledger entry is created
Extend End Date Behavior and Effects
Given an owner requests to extend a pocket’s end_time to a later timestamp When the new end_time > current end_time and the pocket is not yet past current end_time Then the end_time is updated, the pocket remains Active, and the Custody Ledger records an atomic state-change entry with before/after end_time and unchanged balances When the owner extends end_time after the pocket has expired and auto-reclaim has run Then the pocket state becomes Active only if current_time < new end_time; the pocket balance remains 0 until the owner funds it; no automatic re-funding occurs And all changes are recorded in the Custody Ledger with before/after end_time and balances
Pre-Expiration Reminder Notifications (24h/72h)
Given a pocket has end_time T and reminders configured for 72h and 24h before T When the system schedules reminders Then notifications are queued for T-72h and T-24h in UTC based on the pocket’s configured timezone and are not scheduled for times already in the past When each reminder time is reached Then a single notification is sent via configured channels (e.g., email, in-app, webhook) including pocket_id, address, end_time (localized), and remaining_balance And duplicate sends are prevented within a 2-hour window via a deduplication key And all sends are logged with correlationIds; failures are retried with exponential backoff up to 3 times
Timezone Normalization and Display
Given a user creates or edits a pocket providing start_time, end_time, and IANA timezone When the API receives the request Then it stores start_time and end_time in UTC (ISO 8601 with Z) and persists the provided IANA timezone And the API rejects invalid or ambiguous local times (e.g., nonexistent DST jumps) with 400 and validation details When the pocket is retrieved via API or UI Then times are returned in UTC and displayed in the user’s preferred timezone with correct conversion, and calculations (e.g., reminder scheduling, expiry) use UTC internally
Partial Refunds for In-Flight Orders at Expiry
Given an order authorized before end_time with authorization amount A is captured after end_time with captured amount C where 0 <= C <= A When the order completes Then the unused amount (A - C) is released back to the pocket and immediately reclaimed to the parent account if the pocket is expired And the Custody Ledger records correlated entries for capture, release (if any), and reclaim with before/after balances and a shared correlationId chain And retries of the completion webhook or job are idempotent and do not duplicate releases or reclaims And if the capture fails entirely, A is fully released then reclaimed; if C = A, no release occurs and only reclaim of remaining pocket balance (if any) proceeds
Spend Controls & Line-item Limits
"As a project manager, I want to restrict what a guest can buy and how much per transaction so that spend stays within the job budget."
Description

Provide granular controls per pocket: total cap, daily cap, per-transaction cap, and whitelisted SKUs/quantities (e.g., max 2 AI Measurement Reports, 1 Damage Map). Enforce limits consistently across UI and API, with deterministic rule evaluation and precise error messaging. Prevent non-whitelisted fees or add-ons from being charged to the pocket. Support rule previews during setup and real-time balance/limit checks during checkout. Log all rule evaluations and outcomes to the Custody Ledger.

Acceptance Criteria
Total Cap Enforcement with Channel Parity
Given a pocket with total_cap=500 and total_spend=480 When a 30-credit purchase is attempted via UI Then the transaction is denied with code=TOTAL_CAP_EXCEEDED, allowed_amount=20, overage=10, message="Pocket total cap exceeded", decision_id present, and no funds are captured, and a ledger entry is recorded When a 20-credit purchase is attempted via API with identical inputs Then the transaction is approved, total_spend becomes 500, the ledger entry is recorded, and the rule_order and rule_ids in the decision trace match those used in the UI
Daily Cap Accrual and Reset (UTC)
Given a pocket with daily_cap=200 and day boundary at 00:00:00 UTC When a 150-credit purchase occurs at 09:00 UTC Then it is approved and daily_spend becomes 150 When a 60-credit purchase occurs at 18:00 UTC Then it is denied with code=DAILY_CAP_EXCEEDED, allowed_amount=50, overage=10, and no funds are captured When time passes to 00:00:00 UTC the next day Then daily_spend resets to 0 without altering total_spend, and subsequent purchases re-accumulate toward the new day’s cap
Per-Transaction Cap at Checkout
Given a pocket with per_transaction_cap=100 When a checkout attempts to charge eligible subtotal=120 Then the transaction is denied with code=PER_TXN_CAP_EXCEEDED, offending_subtotal=120, cap=100, no partial authorization, and guidance to reduce cart below the cap When a checkout attempts to charge eligible subtotal=100 Then the transaction is approved
Whitelisted SKUs and Quantities Enforcement
Given a pocket with whitelist: AI_MEASUREMENT_REPORT max_qty=2 and DAMAGE_MAP max_qty=1; all other SKUs/add-ons disallowed When a cart contains AI_MEASUREMENT_REPORT qty=3 Then the transaction is denied with code=SKU_QTY_LIMIT_EXCEEDED, sku=AI_MEASUREMENT_REPORT, requested_qty=3, max_qty=2 When a cart contains RUSH_PROCESSING_ADDON (not whitelisted) Then the transaction is denied with code=SKU_NOT_WHITELISTED, sku=RUSH_PROCESSING_ADDON, and the pocket is not charged any fee or add-on When a cart contains AI_MEASUREMENT_REPORT qty=2 and DAMAGE_MAP qty=1 and no other items Then SKU checks pass and evaluation proceeds to cap checks
Deterministic Rule Evaluation and Error Payload
Given a fixed set of rules and identical inputs When the rules are evaluated multiple times across UI and API Then the decision is identical and the ordered rule_evaluations trace is byte-for-byte equal And the evaluation order is strictly: SKU_WHITELIST -> SKU_QTY -> PER_TXN_CAP -> DAILY_CAP -> TOTAL_CAP And on denial the response payload includes fields: code, message, rule_id, rule_type, rule_order_index, correlation_id, pocket_id, request_id, timestamp_iso, details (including allowed_amount/overage where applicable) And messages are plain-language, reference the specific violated rule, and do not expose internal stack traces
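The fixed order and short-circuiting behavior can be expressed as an ordered rule pipeline. The sketch below is illustrative (the cart model, pocket fields, and payload shapes are assumptions), but it shows why identical inputs always yield an identical decision and trace.

```typescript
interface Cart { items: { sku: string; qty: number; price: number }[] }
interface Pocket {
  whitelist: Record<string, number>; // sku -> max_qty
  perTxnCap: number; dailyCap: number; totalCap: number;
  dailySpend: number; totalSpend: number;
}
type Denial = { code: string; details?: Record<string, number> };
interface Rule { id: string; check: (c: Cart, p: Pocket) => Denial | null }

const subtotal = (c: Cart) => c.items.reduce((s, i) => s + i.qty * i.price, 0);

// Evaluation order is fixed: SKU_WHITELIST -> SKU_QTY -> PER_TXN_CAP ->
// DAILY_CAP -> TOTAL_CAP; the first denial short-circuits.
const RULES: Rule[] = [
  { id: "SKU_WHITELIST", check: (c, p) =>
      c.items.some((i) => !(i.sku in p.whitelist))
        ? { code: "SKU_NOT_WHITELISTED" } : null },
  { id: "SKU_QTY", check: (c, p) =>
      c.items.some((i) => i.qty > p.whitelist[i.sku])
        ? { code: "SKU_QTY_LIMIT_EXCEEDED" } : null },
  { id: "PER_TXN_CAP", check: (c, p) =>
      subtotal(c) > p.perTxnCap ? { code: "PER_TXN_CAP_EXCEEDED" } : null },
  { id: "DAILY_CAP", check: (c, p) =>
      p.dailySpend + subtotal(c) > p.dailyCap
        ? { code: "DAILY_CAP_EXCEEDED",
            details: { allowed_amount: p.dailyCap - p.dailySpend } } : null },
  { id: "TOTAL_CAP", check: (c, p) =>
      p.totalSpend + subtotal(c) > p.totalCap
        ? { code: "TOTAL_CAP_EXCEEDED",
            details: { allowed_amount: p.totalCap - p.totalSpend } } : null },
];

function evaluate(cart: Cart, pocket: Pocket): { decision: "approve" | "deny"; denial?: Denial } {
  for (const rule of RULES) {
    const denial = rule.check(cart, pocket);
    if (denial) return { decision: "deny", denial };
  }
  return { decision: "approve" };
}
```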
Rule Preview During Setup
Given an admin configures pocket rules (caps and SKU whitelist) When they run a rule preview for a specified sample cart and effective timestamp via UI Then the system returns Accept/Deny, remaining limits (per_txn_remaining, daily_remaining, total_remaining), and a full ordered rule_evaluations trace with rule_ids and outcomes And no balances or spends are mutated and no charges occur And an audit event of type=RULE_PREVIEW is logged to the Custody Ledger with correlation_id and the same evaluation trace When the preview is requested via API with the same inputs Then the response matches the UI preview
Custody Ledger Logging of Rule Evaluations and Outcomes
Given any approval, denial, or preview of a pocket spend When the decision is produced Then a Custody Ledger entry is written atomically with fields: event_type (SPEND_APPROVED|SPEND_DENIED|RULE_PREVIEW), pocket_id, actor_id/client_id, channel (UI|API), correlation_id, request_id, cart_snapshot, rule_evaluations[] (rule_id, rule_type, inputs, outcome, order_index), decision (approve|deny), message, pre/post counters (daily_spend, total_spend), timestamp_iso And entries are immutable, tamper-evident, and queryable by pocket_id and correlation_id And ledger writes succeed or the decision is not surfaced (no partial success)
Custody Ledger Audit Trail
"As a controller, I want a complete audit trail of pocket activity so that I can reconcile charges and resolve disputes."
Description

Write immutable ledger entries for every pocket event: creation, funding, configuration changes, geofence validations, successful/failed spends, refunds, approvals, expiry, and reclaims. Each entry captures actor identity (user or guest), timestamp, IP/device fingerprint, geospatial validation artifacts, before/after balances, and references to jobs, orders, and assets. Provide searchable UI and export (CSV/JSON) with role-based access controls to support reconciliation and dispute resolution.

Acceptance Criteria
Write Ledger Entries for All Pocket Events
Given an organization with Guest Pockets enabled and an existing pocket When the following events occur for that pocket: creation, funding, configuration change, geofence validation (success), geofence validation (failure), spend (success), spend (failure), refund, approval, expiry, reclaim Then exactly one ledger entry is appended for each event with eventType matching the event name, pocketId of the pocket, and a monotonically increasing createdAt timestamp And each entry includes a non-empty entryId unique within the ledger And the total number of new entries equals the number of events triggered
Ledger Immutability and Integrity
Given an existing ledger entry When any user (including Owner or Admin) attempts to update or delete the entry via UI or API Then the operation is rejected with HTTP 403 (or equivalent UI error), and no changes are persisted And the ledger is append-only and exposes a hash chain where entry.hash = H(entry.payload + previousHash) And verifying the hash chain over the last 10,000 entries yields no integrity violations in under 1 second
Complete Metadata Capture per Entry
Given a new ledger entry is created by any pocket event Then the entry contains non-null values for: entryId, eventType, pocketId, actorId, actorType (User|Guest|System), timestamp (UTC ISO-8601 to milliseconds), beforeBalance, afterBalance, currency And contains ipAddress and deviceFingerprint when the event originated from a client device; otherwise stores null And contains geospatialValidation for geofence-related events with: latitude, longitude, polygonId, inside (boolean), distanceMeters (>=0), method (GPS|Wi-Fi|ManualOverride), evidenceId And contains references where applicable: jobId, orderId, assetIds (array) And afterBalance = beforeBalance + deltaAmount, with deltaAmount signed and precise to 2 decimal places
Searchable Ledger UI with Advanced Filters
Given a user with access to the Custody Ledger When they search using any combination of: date range, eventType, actorId, pocketId, jobId, orderId, amount range, outcome (success|failure), geo status (inside|outside) Then the results reflect those filters and return the first page within 2 seconds for up to 10,000 matching entries And results are sortable by timestamp, amount, and eventType, and support pagination with pageSize selectable up to 200 And selecting a row reveals the full entry payload including links to related job, order, and asset records
Export Ledger Data to CSV and JSON
Given a filtered result set in the ledger UI When the user initiates an export to CSV or JSON Then the export contains exactly the records matching current filters and sorting, includes column headers/keys, uses UTF-8, and timestamps in UTC ISO-8601 And exports over 50,000 rows run asynchronously, notify the user on completion, and provide a download link valid for 24 hours And each export includes a SHA-256 checksum and recordCount metadata And the export enforces role-based scope and field redaction identical to on-screen results
Role-Based Access and Redaction
Given the following roles: Owner, Admin, Manager, Auditor, Guest When accessing the Custody Ledger Then Owner/Admin can view all entries for their organization; Manager can view entries for pockets and jobs within their assigned teams; Auditor has read-only access to all with export; Guest can view only entries for pockets to which they’ve been assigned And IP address and deviceFingerprint are visible to Owner/Admin/Auditor, masked for Manager (last octet and last 6 characters redacted), and hidden for Guest And attempts to access entries outside role scope return HTTP 403 and are logged as security events in the ledger
Geofence Validation Artifacts Recorded
Given a spend attempt occurs from a client within the app When geofence validation runs Then the resulting ledger entry stores latitude/longitude to 6 decimal places, polygonId, inside (boolean), distanceMeters, method (GPS|Wi-Fi|ManualOverride), and evidenceId And for failures the entry includes failureReason (OutsideGeofence|SpoofDetected|NoFix|Timeout) and the spend is blocked And the UI detail view renders a static map preview using the stored coordinates and polygonId without mutating any ledger data
Scoped Guest Access & Auth
"As a subcontractor, I want a simple, secure way to access the pocket for my job so that I can pay for required reports without creating a full account."
Description

Allow issuing Guest Pass links (email/SMS magic links) that grant least-privilege access scoped to the assigned pocket and job(s). Support optional MFA, device binding, and revocation. Require acceptance of terms and capture consent records. Trigger lightweight KYC prompts when cumulative pocket spend exceeds configurable thresholds. Provide a minimal guest workspace showing only allowed services, balance, geofence map, and expiration details to reduce friction while maintaining security.

Acceptance Criteria
Guest Magic Link Issuance and Authentication
Given an owner creates a Guest Pass scoped to pocket P and jobs [J1, J2] with contact email E or phone S, When they click Send Link, Then a single-use magic link with expiration T is created, delivered via the chosen channel within 60 seconds, and an issuance event with metadata (pocketId, jobIds, channel, expiration, issuerId) is written to the Custody Ledger. Given the recipient opens the magic link before T, When they complete the sign-in flow, Then a guest session is established without full account creation and an auth_success event is logged. Given a magic link has been used once, When it is attempted again, Then access is denied with reason link_already_used and the attempt is logged. Given optional MFA is enabled for the pass, When the recipient clicks the magic link, Then the configured factor(s) must be successfully completed before access is granted and mfa_success is logged. Given device binding is enabled for the pass, When the recipient completes first sign-in, Then the pass is bound to that device fingerprint; subsequent sign-ins from a different device are blocked with reason device_not_authorized and logged.
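Single-use enforcement typically hashes the token at rest and flips a used flag atomically on first redemption. A minimal sketch with in-memory persistence and illustrative names; the claim URL is a placeholder, and the result strings mirror the criterion’s reason codes.

```typescript
import { createHash, randomBytes } from "node:crypto";

interface PassRecord { tokenHash: string; expiresAt: number; used: boolean }
const passes = new Map<string, PassRecord>(); // assumed stand-in for a DB table

function issueMagicLink(passId: string, ttlMs: number): string {
  const token = randomBytes(32).toString("base64url"); // never stored in plaintext
  passes.set(passId, {
    tokenHash: createHash("sha256").update(token).digest("hex"),
    expiresAt: Date.now() + ttlMs,
    used: false,
  });
  return `https://example.invalid/guest/claim?pass=${passId}&t=${token}`;
}

type RedeemResult = "ok" | "link_already_used" | "pass_expired" | "invalid";

function redeem(passId: string, token: string): RedeemResult {
  const rec = passes.get(passId);
  const hash = createHash("sha256").update(token).digest("hex");
  if (!rec || rec.tokenHash !== hash) return "invalid";
  if (Date.now() > rec.expiresAt) return "pass_expired";
  if (rec.used) return "link_already_used";
  rec.used = true; // real impl: atomic compare-and-set to prevent redemption races
  return "ok";
}
```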
Least-Privilege Scope and Minimal Workspace
Given an authenticated Guest Pass scoped to pocket P and jobs [J1, J2], When the guest opens the workspace, Then only the following are visible: allowed services for P, current pocket balance, geofence map for assigned addresses, expiration date/time, and support link; no organization, billing, or unrelated navigation is shown. Given the guest requests any resource not within pocket P or jobs [J1, J2], When the API is called, Then the response is 403 forbidden with reason scope_violation and the event is logged. Given the guest attempts to view or modify settings, When the action is triggered, Then the UI controls are disabled/hidden and any direct API calls return 403. Given backend list queries are executed, When results are returned, Then they are filtered to scope with zero leakage verified by security tests (no records outside P, [J1, J2]).
Time-Boxing and Expiration Handling
Given a Guest Pass with expiration timestamp T, When current time is before T, Then the UI shows a countdown accurate within ±60 seconds and the API accepts requests. When current time reaches or exceeds T, Then new API calls return 401 pass_expired, the UI forces sign-out within 30 seconds, and an expiration event is logged. Given an owner extends expiration to T2 > T, When the pass is active, Then the new expiration is reflected in the guest UI within 15 seconds without requiring re-login and is logged as expiration_extended.
Geofence Enforcement and Map Display
Given a Guest Pass with assigned addresses A1..An represented as geofence polygons, When the guest initiates a spend or address-scoped action, Then the system validates location using device GPS (if permitted) with IP geolocation fallback and only allows the action if within any polygon. Given the guest is outside all geofences, When they attempt a spend, Then the action is blocked with reason geofence_blocked, the UI displays an explanatory message with a link to the map, and the event is logged. Given location permissions are denied or unavailable, When the guest attempts a spend, Then the UI prompts for permission and falls back to IP-based checks; if still outside geofence the action remains blocked. Given the guest opens the map, When the map loads, Then it displays geofence boundaries for A1..An and the guest’s approximate location (if available) with a privacy notice.
Terms Acceptance and Consent Capture
- Given a first-time or returning guest who has not accepted the latest terms version V, When they open the magic link, Then they must review and accept the terms before entering the workspace; declining returns them to a safe screen with no access.
- When the terms are accepted, Then a consent record is created containing passId, scoped identifiers (email/phone hash), termsVersion V, timestamp (UTC), IP, device fingerprint, and a consent hash; a consent_captured event is recorded in the Custody Ledger and a receipt is available for download.
- Given the terms are updated to version V+1, When the guest next signs in, Then the new terms are presented and must be accepted again before access is granted.
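A minimal sketch of building the consent record and its integrity hash, assuming SHA-256 over a canonical serialization; the ConsentRecord shape simply mirrors the fields listed above and is illustrative:

```typescript
import { createHash } from "crypto";

// Hypothetical consent record mirroring the fields in the criterion above.
interface ConsentRecord {
  passId: string;
  contactHash: string;   // email/phone hashed, never stored raw here
  termsVersion: string;
  acceptedAt: string;    // UTC ISO 8601 timestamp
  ip: string;
  deviceFingerprint: string;
  consentHash: string;   // integrity hash over the other fields
}

function captureConsent(fields: Omit<ConsentRecord, "consentHash">): ConsentRecord {
  // Serialize with sorted keys so the hash is stable, then digest;
  // any later edit to the record changes the hash and is detectable.
  const canonical = JSON.stringify(fields, Object.keys(fields).sort());
  const consentHash = createHash("sha256").update(canonical).digest("hex");
  return { ...fields, consentHash };
}
```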
KYC Threshold Prompt and Spend Control
- Given configurable thresholds T1 and T2 for cumulative pocket spend over a rolling 30-day window, When the guest's cumulative spend S associated with their identity reaches or exceeds T1, Then the guest is prompted for lightweight KYC and spending actions are paused until the required fields are submitted.
- When S reaches or exceeds T2, Then enhanced KYC is required (e.g., an ID scan) and spending remains blocked until verification passes.
- Given KYC verification succeeds, When the checks complete, Then spending is re-enabled and kyc_verified is logged with non-PII references; PII is stored only in the compliant vault.
- Given an admin updates the thresholds, When the change is saved, Then the new thresholds take effect within 5 minutes for subsequent checks and the change is logged as kyc_thresholds_updated.
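A minimal sketch of the rolling-window gate, assuming spends are available as timestamped records; the names (KycThresholds, requiredKyc) are illustrative:

```typescript
// Hypothetical thresholds: T1 triggers lightweight KYC, T2 enhanced KYC.
interface KycThresholds { t1: number; t2: number; }
type KycLevel = "none" | "light" | "enhanced";

interface Spend { amount: number; at: Date; }

// Cumulative spend over a rolling 30-day window ending now.
function rollingSpend(spends: Spend[], now = new Date()): number {
  const windowStart = now.getTime() - 30 * 24 * 60 * 60 * 1000;
  return spends
    .filter((s) => s.at.getTime() >= windowStart)
    .reduce((sum, s) => sum + s.amount, 0);
}

// Returns the KYC level that must be verified before spending may continue.
function requiredKyc(spends: Spend[], t: KycThresholds): KycLevel {
  const s = rollingSpend(spends);
  if (s >= t.t2) return "enhanced";
  if (s >= t.t1) return "light";
  return "none";
}
```

Because the window rolls, a guest who drops back below T1 as old spends age out would no longer be gated; whether that is desirable is a policy decision the thresholds admin controls.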
Revocation and Active Session Termination
- Given an owner selects Revoke on a Guest Pass, When they confirm, Then the magic link is invalidated immediately, all active sessions are terminated within 30 seconds, subsequent API calls return 401 pass_revoked, and a revocation event with revokerId and reason is logged.
- Given an owner selects Rotate Link, When rotation completes, Then a new magic link is generated and delivered to the recipient; the previous link is permanently invalid and attempts to use it are logged as link_rotated.
- Given device binding was enabled, When revocation or rotation occurs, Then all bound devices are cleared and must be re-established on the next successful sign-in.
Notifications & Approvals
"As a business owner, I want to be alerted to pocket activity and approve large spends so that I maintain control without micromanaging."
Description

Offer configurable notifications for key pocket events: funding, low balance thresholds, attempted rule/geofence violations, successful spends, and upcoming/actual expiry. Enable optional approval workflows for spends over a threshold, outside business hours, or flagged by risk signals. Support email, in-app, and webhook channels with delivery tracking and retries. Approvals must be time-bound, auditable, and fail-safe (auto-decline or auto-approve per policy).

Acceptance Criteria
Per-Event Channel Configuration and Dispatch
- Given a Guest Pocket with notification preferences configured per event type (funded, low_balance, rule_violation, spend_success, expiry_upcoming, expiry_reached) and per channel (email, in_app, webhook)
- When any configured event occurs for that pocket
- Then only the selected channels for that event dispatch within 60 seconds
- And unselected channels do not dispatch any message
- And each dispatched message contains at minimum: event_type, pocket_id, job_address_or_id, occurred_at (UTC ISO 8601), correlation_id
- And changes to notification preferences apply to events with occurred_at >= the change timestamp
- And every dispatch attempt is recorded with channel, status, correlation_id, and attempt_count in the Custody Ledger or an associated audit log
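A minimal sketch of the per-event, per-channel fan-out, assuming preferences are stored as a map from event type to enabled channels; the send callback stands in for whatever email/in-app/webhook transports exist:

```typescript
type EventType = "funded" | "low_balance" | "rule_violation"
  | "spend_success" | "expiry_upcoming" | "expiry_reached";
type Channel = "email" | "in_app" | "webhook";

// Preferences: for each event type, the set of channels that should fire.
type NotificationPrefs = Record<EventType, Channel[]>;

// Minimum payload fields required by the criterion above.
interface PocketEvent {
  event_type: EventType;
  pocket_id: string;
  job_address_or_id: string;
  occurred_at: string;   // UTC ISO 8601
  correlation_id: string;
}

function dispatch(
  event: PocketEvent,
  prefs: NotificationPrefs,
  send: (channel: Channel, payload: PocketEvent) => void
): void {
  for (const channel of prefs[event.event_type] ?? []) {
    send(channel, event); // only configured channels fire; others stay silent
    // Each attempt would also be recorded (channel, status, attempt_count)
    // in the audit log, per the criteria above.
  }
}
```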
Low-Balance Threshold Alerts
- Given a Guest Pocket with low-balance alerts configured as a percentage threshold, an absolute amount threshold, or both
- And an alert cooldown period of at least 6 hours is set
- When the pocket balance crosses from above to below any configured threshold due to a spend or fee
- Then a single low_balance alert is sent via the configured channels within 60 seconds
- And the alert payload includes pre_balance, post_balance, threshold_type, threshold_value, and remaining_percentage
- And no additional low_balance alert is sent for the same threshold until the cooldown elapses or the balance rises above and re-crosses the threshold
- And if multiple thresholds are configured, alerts are emitted for each threshold crossed, ordered from highest to lowest threshold
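A minimal sketch of the crossing-plus-cooldown logic, assuming percentage thresholds have already been converted to absolute amounts; the caller is expected to record lastAlertedAt after sending:

```typescript
interface ThresholdState {
  value: number;              // absolute amount for this threshold
  lastAlertedAt?: number;     // epoch ms of the last alert for this threshold
}

const COOLDOWN_MS = 6 * 60 * 60 * 1000; // the 6-hour cooldown from the criteria

// Fire one alert per threshold crossed from above to below, unless that
// threshold is still cooling down. Returns thresholds highest-first, per spec.
function thresholdsToAlert(
  preBalance: number,
  postBalance: number,
  thresholds: ThresholdState[],
  now = Date.now()
): ThresholdState[] {
  return thresholds
    .filter((t) => preBalance >= t.value && postBalance < t.value) // true crossing
    .filter((t) => !t.lastAlertedAt || now - t.lastAlertedAt >= COOLDOWN_MS)
    .sort((a, b) => b.value - a.value);
}
```

Keying the crossing test on both pre- and post-balance is what prevents repeat alerts while the balance merely sits below the threshold.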
Geofence and Rule Violation Attempt Alerts
- Given a Guest Pocket restricted to specified job-address geofences and per-job spend rules
- When a user attempts a spend that violates any restriction (e.g., outside geofence, exceeds per-transaction limit, exceeds job total limit, unassigned merchant)
- Then the spend is blocked with no funds deducted
- And a rule_violation alert is sent via configured channels within 30 seconds
- And the alert payload includes violation_reason (OUTSIDE_GEOFENCE | TX_LIMIT | JOB_LIMIT | UNASSIGNED_MERCHANT | OTHER), attempted_amount, currency, user_id (if available), device_location (if provided), geofence_id or job_id, and correlation_id
- And a declined entry with reason and metadata is written to the Custody Ledger and is searchable by pocket_id and correlation_id
Successful Spend Notifications with Ledger Link
- Given a Guest Pocket and an allowed spend that is authorized (and captured if applicable)
- When the spend posts successfully
- Then a spend_success notification is sent via configured channels within 60 seconds
- And the payload includes spend_id, pocket_id, job_id or address, amount, currency, remaining_balance, merchant_name or payee, line_item_count, approved_by (if approval occurred), and occurred_at (UTC ISO 8601)
- And the payload includes a ledger_entry_id and ledger_url that resolve to the Custody Ledger entry for audit
- And duplicate notifications are prevented using an idempotency_key per spend_id per channel
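The deduplication rule above keys on spend and channel together. A minimal sketch, assuming a persistent key store (modeled here as an in-memory set); the names are illustrative:

```typescript
// Idempotency key scoped per spend and per channel, as the criteria require.
const sentKeys = new Set<string>(); // stand-in for a persistent store

function idempotencyKey(spendId: string, channel: string): string {
  return `${spendId}:${channel}`;
}

// Returns true only the first time a given spend/channel pair is dispatched;
// retries and replays of the same pair are suppressed as duplicates.
function shouldSend(spendId: string, channel: string): boolean {
  const key = idempotencyKey(spendId, channel);
  if (sentKeys.has(key)) return false;
  sentKeys.add(key);
  return true;
}
```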
Expiry Lifecycle Notifications
- Given a Guest Pocket with an expiry_at timestamp
- When the current time reaches T-72h and T-24h prior to expiry_at
- Then an expiry_upcoming notification is sent at each threshold (or at the next available threshold if the pocket is created after T-72h)
- And when the current time reaches expiry_at, an expiry_reached notification is sent and the pocket status is updated to expired
- And if expiry_at is extended, previously scheduled notifications are canceled and rescheduled accordingly, with no duplicate sends
- And notifications include pocket_id, prior_expiry_at, new_expiry_at (if extended), remaining_balance, and policy_on_expiry (e.g., reclaim or freeze)
Time-Bound Approval Workflow with Fail-Safe Policy
- Given approval policies that require approval when any of the following are true: amount > configured_threshold, spend initiated outside business hours (per org timezone), or risk_flagged = true
- And a fail_safe policy is configured as either auto_approve_on_timeout or auto_decline_on_timeout
- When a spend is initiated that meets any approval condition
- Then the spend enters the pending_approval state and no funds are captured until a decision
- And approval requests are dispatched via configured channels to the approver group with a response SLA (e.g., 15 minutes) displayed to the requester
- And an approver can approve or decline in-app or via a secure email link; the decision is recorded with approver_id, decided_at, reason, and correlation_id
- And if no decision is received by SLA expiry, the fail_safe policy is applied automatically and recorded
- And all approval events (requested, approved, declined, timeout) are auditable in the Custody Ledger and linked to the resulting spend or decline
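A minimal sketch of the time-bound decision with a fail-safe fallback, assuming a Node runtime timer; a production system would persist the pending state and use a durable scheduler rather than setTimeout, and all names here are illustrative:

```typescript
type FailSafe = "auto_approve_on_timeout" | "auto_decline_on_timeout";
type Decision = "approved" | "declined";

interface PendingSpend {
  spendId: string;
  decision?: Decision;
  decidedBy?: string;
}

// Starts the SLA clock; if no human decision arrives in time, the
// configured fail-safe policy is applied and recorded as an audit event.
function startApprovalTimer(
  spend: PendingSpend, slaMs: number, failSafe: FailSafe,
  record: (spendId: string, decision: Decision, reason: string) => void
): NodeJS.Timeout {
  return setTimeout(() => {
    if (spend.decision) return; // an approver already decided in time
    spend.decision =
      failSafe === "auto_approve_on_timeout" ? "approved" : "declined";
    record(spend.spendId, spend.decision, "sla_timeout");
  }, slaMs);
}

// A human decision cancels the fail-safe path and records the approver.
function decide(spend: PendingSpend, timer: NodeJS.Timeout,
                decision: Decision, approverId: string): void {
  clearTimeout(timer);
  spend.decision = decision;
  spend.decidedBy = approverId;
}
```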
Delivery Tracking, Retry, and Idempotency for Email and Webhooks
- Given email and webhook channels are enabled for notifications
- When any notification is dispatched
- Then per-attempt delivery status is tracked as queued, sent, delivered, or bounced/failed for email, and as the HTTP status code for webhooks
- And webhooks must return 2xx to be considered delivered; otherwise, retries occur with exponential backoff and jitter, up to 7 attempts over 60 minutes
- And each webhook is signed with HMAC-SHA256 and includes event_type, occurred_at, pocket_id, correlation_id, and idempotency_key; receivers can safely deduplicate retried deliveries using the idempotency_key
- And email bounces are recorded and subsequent retries are suppressed for hard bounces
- And delivery outcomes are queryable by correlation_id and pocket_id within the platform for at least 90 days
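A minimal sketch of HMAC-SHA256 signing plus exponential backoff with full jitter, assuming Node 18+ (built-in fetch) and a shared secret per receiver; the header name, base delay, and cap are illustrative constants, not documented values:

```typescript
import { createHmac } from "crypto";

// Sign the raw request body; the receiver recomputes the HMAC with the
// shared secret and compares to verify authenticity and integrity.
function signWebhook(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Exponential backoff with full jitter: wait a uniform random time in
// [0, min(cap, base * 2^(attempt-1))). Constants here are assumptions.
function retryDelayMs(attempt: number): number {
  const base = 30_000;                 // 30s initial delay
  const cap = 15 * 60_000;             // cap any single wait at 15 minutes
  const exp = Math.min(cap, base * 2 ** (attempt - 1));
  return Math.random() * exp;
}

async function deliverWebhook(url: string, body: string, secret: string): Promise<void> {
  for (let attempt = 1; attempt <= 7; attempt++) {  // 7 attempts, per the criteria
    const res = await fetch(url, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Signature": signWebhook(body, secret),   // hypothetical header name
      },
      body,
    });
    if (res.ok) return;                             // any 2xx counts as delivered
    await new Promise((r) => setTimeout(r, retryDelayMs(attempt)));
  }
  // After the final failed attempt the outcome would be recorded as failed,
  // queryable by correlation_id and pocket_id per the criteria above.
}
```

Full jitter spreads retries from many failing receivers across time, which avoids the synchronized retry storms a fixed backoff schedule can cause.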

Product Ideas

Innovative concepts that could enhance this product's value proposition.

SurgeSmart Triage

Auto-prioritizes storm uploads by hail size, due date, and drive-time; bulk assigns rush jobs in one click. Cuts manual triage and speeds same‑day bids by 30%.

AR Flight Coach

Phone-guided AR corridors and live coverage heatmaps prevent gaps while flying. Instant QC flags missed facets and low overlap, slashing re‑flights.

DisputeLock PDF

Tamper-evident PDFs with cryptographic hashes, GPS/time stamps, and full revision trails. Desk adjusters verify integrity in seconds, reducing scope disputes.

Line‑Item Brain

Maps detected materials and damage to jurisdiction‑specific line items with prices, ready for Xactimate export. Delivers consistent, defensible estimates in one pass.

Franchise Guardrails

Locked estimate templates, variance thresholds, and approver checkpoints enforce consistency across branches. Real‑time margin alerts stop over‑discounting.

Passkey Field Badges

Passwordless sign‑in with WebAuthn and scannable crew badges authorizes on‑site uploads. Every image is traceable to a person and device.

CreditDrop Wallet

Prepaid, pay‑per‑address wallet with team quotas and auto top‑up. Solo users control spend; franchises cap storm‑surge burn.

Press Coverage

Imagined press coverage for this groundbreaking product concept.

RoofLens Launches AI-Powered Roof Measurement and Estimating Platform for Contractors and Adjusters

Imagined Press Article

For immediate release: RoofLens today announced the general availability of its SaaS platform that converts drone and aerial photos into precise roof measurements, damage maps, and automated line‑item estimates. Built for small-to-mid-size roofing contractors and insurance adjusters, RoofLens eliminates manual measuring and spreadsheet juggling to produce ready‑to‑send PDF bids in under an hour, cutting estimating time by up to 70% and reducing disputes with defensible, verifiable evidence.

The platform ingests imagery captured by on‑site technicians or third‑party aerial providers and reconstructs the roof in high fidelity, outputting accurate areas, perimeters, slopes, penetrations, and edge counts. RoofLens then maps detected materials and damage to jurisdiction‑ and carrier‑specific line items and prices, generating a clean, standardized estimate that can be exported to Xactimate, CoreLogic/Symbility, or shared as a tamper‑evident PDF.

"Speed and trust decide who wins the job," said Alex Morgan, CEO of RoofLens. "With RoofLens, a solo estimator can capture a roof, generate defensible quantities, and produce a polished, carrier‑aligned estimate in one sitting. It removes the busywork that slows teams down and replaces it with confidence—confidence that your numbers are right and your documentation will hold up."

Designed around the day‑to‑day realities of the field, RoofLens offers fast, no‑frills workflows for owner‑operators and robust controls for franchises and carriers. Solo Bid Builder users get per‑job pricing, same‑day turnaround, and ready‑to‑send PDFs. High‑volume teams can bulk import addresses, auto‑prioritize rush jobs, and keep work moving with live capacity and SLA insights. Insurance Desk Adjusters get defensible measurements, damage annotations, and an audit trail that simplifies reviews and minimizes back‑and‑forth.

Key capabilities include:

- Measurement and mapping: High‑accuracy area, edge, pitch, and penetration counts from drone and aerial imagery, with loss‑critical features labeled and measured.
- Automated estimating: CodeCrosswalk maps materials and damages to the right jurisdiction‑ and carrier‑specific line codes; PricePulse syncs regional price lists and adjusts for storm surges; SmartAssemblies expand one‑click assemblies into fully quantified line items with prewritten notes.
- Consistency and control: Template Lockstep enforces centrally managed estimate templates across branches; Approval Matrix routes bids for the right sign‑offs; Variance Bands and Margin Locks keep estimates within safe, profitable guardrails.
- Evidence and defensibility: DisputeLock PDF with SignSeal, GeoSeal, QR Verify, ProofLinks, and a Custody Ledger provides a verifiable chain‑of‑custody, tamper evidence, and click‑through context for every line item.
- Seamless export: ExportPreflight checks code completeness, required photo attachments, and carrier formatting before outputting ESX/CSV or sending via API.

"RoofLens gives our desk team the two things we value most: consistency and verifiability," said Priya Shah, a licensed public adjuster and early evaluator of the platform. "The line items match the carrier's expectations, photos are tied to time and GPS, and every change has a clear reason. That combination reduces friction and helps policyholders get fair outcomes faster."

The platform is equally at home in the driveway and the back office. Sales reps can present vivid visuals and fast, defensible numbers to earn homeowner trust on the spot. Office leads can triage storm events using live capacity, route bundles, and SLA predictors. Production planners can turn approved scopes into clean material orders and schedules with CSV/ERP exports, minimizing overages and delays.

RoofLens is engineered for transparency. Every report can carry a scannable QR that opens a lightweight verification page for reviewers; no software installs are required. Adjusters can confirm document integrity, review chain‑of‑custody entries, and click through ProofLinks to see exactly which photos and measurements support each line item. When templates update, CodeCrosswalk highlights deltas so users know precisely what changed and why.

"Contractors need to move at storm speed without sacrificing accuracy or profits," said Jordan Lee, Head of Product at RoofLens. "We built RoofLens so that a first‑time user can generate a defensible estimate in under an hour, while multi‑branch organizations get the controls they need to scale consistently: locked templates, guardrails, approvals, and a crystal‑clear audit trail."

Availability and pricing

RoofLens is available today in the United States and Canada. The platform supports a flexible, prepaid per‑address wallet with optional branch allocations, approvals, and auto top‑ups. Solo users can start on a pay‑per‑job basis, while high‑volume teams can unlock bulk pricing and advanced controls. New customers can be live in a day with guided onboarding and template setup.

About RoofLens

RoofLens is a software platform that turns drone and aerial photos into precise roof measurements, damage maps, and automated line‑item estimates. Purpose‑built for contractors, adjusters, and franchises, RoofLens eliminates manual measuring and spreadsheet guesswork, producing carrier‑aligned, ready‑to‑send PDFs in under an hour. With verifiable evidence and built‑in guardrails, teams work faster, reduce disputes, and protect margins.

Media contact

Press: press@rooflens.com
Phone: +1‑555‑013‑4210
Web: https://www.rooflens.com

RoofLens Unveils Storm Triage and Routing Suite to Speed Same‑Day Bids During Hail and Wind Events

Imagined Press Article

For immediate release: RoofLens today introduced a Storm Triage and Routing Suite designed to help roofing contractors and adjusters handle surge demand during hail and wind events. The suite combines live capacity management, automated prioritization, and turn‑by‑turn route optimization so teams can complete more high‑impact addresses the same day, without manual juggling.

Built directly into the RoofLens platform, the suite centers on Capacity Balancer, SLA Predictor, Route Bundles, GeoHeat Overlay, Smart Rebalance, and a Triage Audit Trail. Together, these capabilities orchestrate people, vehicles, and deadlines in real time, making it easy to move rush jobs to the front of the line while keeping staff utilization healthy and transparent.

"When a storm hits, the bottleneck moves from estimating to orchestration," said Jordan Lee, Head of Product at RoofLens. "Our Storm Triage and Routing Suite automates the hard parts—who should do what, in what order, and by when—so coordinators can focus on customers. The result is higher same‑day completion rates, fewer missed SLAs, and significantly less windshield time."

How it works

- Capacity Balancer distributes triaged jobs across estimators and drone operators using live capacity, time windows, and skill tags like steep, tile, or commercial. It prevents overload and raises same‑day completion without spreadsheets.
- SLA Predictor calculates a per‑address probability of meeting the due time based on queue length, route ETA, daylight, and weather. It flags at‑risk jobs early and recommends actions—reassign, bundle, or rush—to protect commitments.
- Route Bundles auto‑group nearby addresses into optimized runs with an ideal stop order. Leads can assign or reassign an entire bundle in one click to cut drive time and lift throughput.
- GeoHeat Overlay layers hail sizes, wind swaths, and claim density on the triage board. Coordinators instantly spot high‑impact clusters and move them to the top, ensuring the worst‑hit customers are served first.
- Smart Rebalance reshuffles priorities when new rush jobs arrive, honoring locks and preferences, and notifies affected users with clear reasoning to minimize confusion.
- Triage Audit Trail logs the rationale behind every priority and assignment—hail metrics, due date, drive‑time, and rule hits. The record can be exported to defend decisions with carriers and management.

The Storm‑Event Coordinator is a core RoofLens user persona: office leads triaging hundreds of addresses after a storm, balancing due dates, crews, and budgets. The new suite gives them an end‑to‑end command center that connects planning with execution and documentation. Coordinators can bulk import addresses, tag cost centers, reserve wallet credits for each run, and track live presence with scan‑in/scan‑out events from the field.

"During last season's hail surge we were drowning in sticky notes and map tabs," said Fran Alvarez, regional operations director at a multi‑branch roofing franchise and an early pilot customer. "With RoofLens, our coordinators could prioritize the most impacted neighborhoods, auto‑build route bundles, and see which visits were at risk hours in advance. We took on more work without burning out the team—and our SLAs actually improved."

The suite pairs tightly with RoofLens's evidence‑grade capture and estimating tools. Field Drone Operators get mobile flight presets and QC feedback to ensure usable results on the first visit. Estimators receive jobs with pre‑applied rules for the carrier and jurisdiction, and can produce standardized, defensible estimates quickly. When supplements are needed, Revision Diff and ProofLinks shorten negotiations by showing exactly what changed and why.

Triage doesn't stop at logistics. RoofLens also helps organizations manage surge budgets responsibly. Features like SurgeCap, Branch Buckets, Predictive Refill, Cost Tags, Job Reserve, and Guest Pockets provide granular control over spend by day, week, event, branch, or subcontractor. Credits are earmarked for route bundles, unused reserves auto‑release after SLA windows, and every dollar maps cleanly to the right storm event and cost center.

"Storm response is a reliability problem," added Alex Morgan, CEO of RoofLens. "Customers remember who showed up first and who delivered a clear, consistent bid. Our triage and routing suite is about making those promises and keeping them—at scale and under pressure."

Availability

The Storm Triage and Routing Suite is available today for all RoofLens customers in the United States and Canada. The capabilities are included in platform plans, with advanced budget controls available as add‑ons. New customers can be live in a day with guided setup.

About RoofLens

RoofLens is a software platform that turns drone and aerial photos into precise roof measurements, damage maps, and automated line‑item estimates. Purpose‑built for contractors, adjusters, and franchises, RoofLens eliminates manual measuring and spreadsheet guesswork, producing carrier‑aligned, ready‑to‑send PDFs in under an hour. With orchestration, evidence, and guardrails built in, teams work faster, reduce disputes, and protect margins.

Media contact

Press: press@rooflens.com
Phone: +1‑555‑013‑4210
Web: https://www.rooflens.com

RoofLens Introduces Evidence‑Grade PDFs and Chain‑of‑Custody to Reduce Insurance Scope Disputes

Imagined Press Article

For immediate release: RoofLens today announced the availability of an evidence and compliance toolkit that brings tamper‑evident documents, verified imagery, and a full chain‑of‑custody to every job. The toolkit, centered on DisputeLock PDF, SignSeal, GeoSeal, QR Verify, ProofLinks, RedactSafe, Revision Diff, and a Custody Ledger, gives contractors, independent adjusters, and carriers a shared foundation of truth that reduces scope disputes and speeds approvals.

Insurance reviews often stall when reviewers question whether photos are job‑specific, whether measurements are accurate, or why a line item changed. RoofLens solves these pain points at the document and data level. Each report can be cryptographically sealed, watermarked with GPS/time/altitude, and accompanied by a click‑through audit trail that shows exactly how each measurement and line item was derived.

"Reviewers want to trust what they're seeing," said Alex Morgan, CEO of RoofLens. "With RoofLens, that trust is earned by design. Every page section can carry a verifiable watermark, every file can be digitally signed, and every change is recorded with an author, timestamp, and reason. The result is fewer emails back and forth and faster, fairer outcomes for policyholders."

Key components

- DisputeLock PDF: Produces a tamper‑evident, court‑ready document that desk reviewers can verify in seconds. The moment a field is altered, the seal breaks and flags clearly.
- SignSeal: Applies standards‑based digital signatures (PAdES/LTV) from RoofLens and optional co‑signers so validation works long‑term without extra tools.
- GeoSeal: Watermarks each photo, measurement, and page section with verified GPS, time, altitude, and capture confidence. A roll‑up badge summarizes coverage integrity to disarm claims of stock photos or post‑hoc edits.
- QR Verify: Adds a scannable QR and short link to every DisputeLock PDF that opens a lightweight verification page—hash match, capture timestamps, and signer list—no install required.
- Custody Ledger: Embeds an append‑only chain‑of‑custody record showing who captured, uploaded, edited, approved, and shared, complete with device IDs, GPS/time, and WebAuthn identity.
- ProofLinks: Attaches verifiable evidence and reasoning to every line item—cropped photos, detected damage labels, measurement references, and relevant code/policy citations.
- RedactSafe: Enables role‑based, permanent redactions for PII and policy numbers that maintain a verifiable manifest, preserving tamper evidence while sharing privacy‑compliant copies.
- Revision Diff: Generates a clear, color‑coded summary of what changed between versions—line items added or removed, quantity shifts, notes, and attachments—so supplements read like a concise commit history.

For Insurance Desk Adjusters and carrier reviewers, the toolkit means less detective work. Reviewers can scan the QR on a PDF, confirm the document's integrity, and drill into the context behind any line item in moments. For Policy Advocate Priya and other public adjusters, it provides a consistent way to present claims with defensible evidence and a transparent rationale. For contractors, it reduces re‑measures, re‑flights, and callbacks by setting a higher standard of documentation from the start.

"RoofLens lets us lead with evidence, not opinion," said Quinn Parker, QA lead at a regional contractor who participated in early access. "When a reviewer asks why a step flashing line is included, we point them to the ProofLink: a photo, the measurement reference, and the code citation. The conversation shifts from 'if' to 'how fast can we approve this?'"

The toolkit integrates seamlessly with RoofLens capture and estimating workflows. Field operators benefit from on‑site QC to eliminate gaps and ensure consistent image resolution. Estimators get CodeCrosswalk and PricePulse to align line items and unit prices with carrier expectations. Before export, ExportPreflight checks that required photos and notes are present and that grouping, order, and formatting meet carrier requirements.

Security and privacy are first‑class concerns. RoofLens supports passkey‑based authentication and scannable crew badges to ensure each image is traceable to an authorized person and device. Device Lock verifies hardware integrity and blocks rooted or emulated devices, while Offline Pass allows secure capture at sites with limited signal. Every action is recorded in the Custody Ledger for accountability and compliance.

"Too often, good work is undermined by missing context or weak documentation," added Jordan Lee, Head of Product at RoofLens. "We designed our evidence toolkit to meet reviewers where they are—PDFs and links—and give them instant, verifiable context without extra software. It saves everyone time and reduces conflict."

Availability

The evidence and compliance toolkit is available today for all RoofLens customers. DisputeLock PDF, SignSeal, GeoSeal, QR Verify, ProofLinks, RedactSafe, Revision Diff, and the Custody Ledger can be enabled per template, with defaults controlled centrally for franchises and carriers.

About RoofLens

RoofLens is a software platform that turns drone and aerial photos into precise roof measurements, damage maps, and automated line‑item estimates. With verifiable evidence and built‑in guardrails, teams work faster, reduce disputes, and protect margins across residential and commercial projects.

Media contact

Press: press@rooflens.com
Phone: +1‑555‑013‑4210
Web: https://www.rooflens.com

RoofLens Debuts AR Flight Capture and QC Toolkit to Eliminate Re‑flights and Speed On‑Site Operations

Imagined Press Article

For immediate release: RoofLens today announced a comprehensive AR flight capture and quality control toolkit that helps field teams capture reconstruction‑ready photo sets on the first visit, even in challenging conditions. The toolkit combines Corridor Auto‑Plan, WindSmart Pathing, Overlap Guardian, Facet Finder, Altitude Gate, and QC Instant Replay to deliver consistent coverage, reduce blurry shots, and eliminate costly re‑flights.

On‑site capture quality is the foundation of fast, defensible estimates. Missed facets, low overlap, and inconsistent image resolution cause back‑office delays and repeat visits that drain time and budgets. RoofLens removes guesswork with live, in‑app guidance that turns best practices into a repeatable, guided routine for any technician.

"Field crews shouldn't need to be mapping experts to capture great data," said Jordan Lee, Head of Product at RoofLens. "RoofLens turns the roof outline into an AR playbook—where to fly, how high, what angle—then monitors overlap and coverage live. If something slips, the app nudges the pilot to adjust in the moment, not after they've driven away."

Toolkit highlights

- Corridor Auto‑Plan: One‑tap AR corridors auto‑generated from the roof outline and camera FOV set ideal passes, gimbal angles, and overlap targets. Crews get airborne in under a minute with a plan that guarantees full‑facet coverage, even on steep or complex roofs.
- WindSmart Pathing: Live wind and gust sensing adapts headings, speed, and pass spacing on the fly. Visual and haptic prompts help counter drift and yaw so overlap stays on target in breezy conditions, reducing blurry shots, re‑flys, and battery swaps.
- Overlap Guardian: A real‑time forward/side overlap gauge overlays the corridor. If coverage dips below threshold, lanes turn red and the app recommends slowing, tightening, or adding a pass.
- Facet Finder: Computer vision identifies roof facets, dormers, valleys, and penetrations from the live feed, pinning any uncaptured surfaces with AR markers. Prompted obliques ensure loss‑critical features are documented for accurate measurements and damage mapping.
- Altitude Gate: AR altitude bands lock in the ideal height for target ground sample distance and local airspace rules. Audible cues alert if the aircraft strays high or low, keeping scale accuracy tight and preventing reshoots due to inconsistent image resolution.
- QC Instant Replay: On landing, RoofLens auto‑builds a coverage heatmap and gap list. Tap any gap to launch a micro‑mission that fills it in one or two passes, then generate a QC certificate to attach to the job record.

The toolkit is designed for Field Drone Operators who value mobile UX, reliable coverage, and rapid feedback. It also benefits office teams by ensuring that every capture meets the quality bar for reconstruction, measurement, and damage detection, accelerating downstream estimating and reducing disputes.

"Before RoofLens, we'd discover missing obliques back at the office," said Casey Nguyen, commercial capture lead at an industrial roofing firm in the Midwest. "Now the app catches gaps on site and spins up micro‑missions to fix them. We've cut re‑flights dramatically and our estimators get usable data the first time."

Safety and compliance are integrated into the workflow. Preflight Attest provides role‑based safety checklists aligned to Part 107 and site requirements, signed with the user's passkey. Site Auto‑Claim locks uploads to the right job using geofence verification after a badge scan, eliminating misfiled photos and speeding on‑site kickoff. Device Lock ensures only verified hardware and accounts can upload, blocking rooted or emulated devices. Offline Pass supports secure capture in low‑signal environments, storing time‑boxed, geofenced credentials in secure hardware until sync.

Once synced, RoofLens's estimation engine turns consistent photo sets into precise measurements and automated line items aligned to jurisdiction and carrier rules. ExportPreflight confirms that required photos, notes, grouping, and formatting are present before Xactimate export. The result is a clean, defensible submission with minimal edits.

"Great capture is the surest way to protect SLAs and margins," added Alex Morgan, CEO of RoofLens. "Our AR toolkit helps new pilots fly like seasoned pros, and it gives managers proof the job was done right—on the first visit."

Availability

The AR flight capture and QC toolkit is available today on the RoofLens mobile app for supported iOS and Android devices and common prosumer drones. Customers can enable the features per template and user role, with training resources and quick‑start presets included.

About RoofLens

RoofLens is a software platform that turns drone and aerial photos into precise roof measurements, damage maps, and automated line‑item estimates. With guided capture, verifiable evidence, and built‑in guardrails, teams work faster, reduce disputes, and protect margins across residential and commercial projects.

Media contact

Press: press@rooflens.com
Phone: +1‑555‑013‑4210
Web: https://www.rooflens.com

RoofLens Rolls Out Franchise Guardrails and Pricing Intelligence for Consistent, Profitable Estimates at Scale

Imagined Press Article

For immediate release: RoofLens today launched a suite of guardrails and pricing intelligence designed for multi‑branch franchises and growing contractors that need to standardize estimating, protect margins, and accelerate approvals across markets. The suite spans Template Lockstep, Approval Matrix, Variance Bands, Margin Locks, Region Profiles, Compliance Pulse, CodeCrosswalk, PricePulse, GapGuard, SmartAssemblies, and ExportPreflight, delivering consistency by default and flexibility when it counts.

The challenge for distributed organizations is multiplying: more regions, more carrier preferences, and more ways to drift from the standard. RoofLens addresses this by centralizing the rules and templates that shape every estimate, then surfacing real‑time guidance and controls that keep local teams fast and compliant.

"Consistency is the first ingredient of scalable profitability," said Fran Delgado, Director of Operations Strategy at RoofLens. "We built this suite so franchise leaders can set the playbook once and know it's being followed—without slowing down the field. When exceptions arise, they're routed to the right approver with context and a clock, not lost in email."

What's included

- Template Lockstep: Centrally controlled estimate templates with versioning, one‑click rollouts, and safe rollback. Push updates across branches on a schedule, auto‑migrate in‑flight bids with a clear change diff, and lock critical sections to stop local edits.
- Approval Matrix: Configurable approver workflows driven by role, dollar thresholds, margin floors, and exception types. Auto‑route bids for sign‑off, set SLAs, and approve or decline from web or mobile with reason codes.
- Variance Bands and Margin Locks: Define allowed ranges for waste factors, labor hours, quantities, and discounts by roof type and market. Live margin tracking and guardrails prevent risky submissions, while color‑coded guidance and suggestions keep estimators moving.
- Region Profiles: Package market‑specific rules into reusable profiles—permitted materials, code‑required adds, crew rates, taxes, and carrier preferences by ZIP or county. Auto‑apply on job creation so branches inherit the right rules by default.
- Compliance Pulse: Real‑time dashboard and alerts tracking template drift, override frequency, approval latency, and branch margin variance. Drill into outliers, export audit packs, and send weekly scorecards to managers.
- CodeCrosswalk and PricePulse: Automatically map detected materials and damages to the exact jurisdiction‑ and carrier‑specific line codes. Sync regional price lists with supplier feeds and storm‑surge adjustments, flag stale or mismatched lists, and lock to a carrier‑approved schedule per file.
- GapGuard and SmartAssemblies: Prevent misses and conflicts by scanning the scope against roof geometry, climate zone, and carrier or franchise rules, then expand one‑click assemblies into fully quantified line items with prewritten notes.
- ExportPreflight: Run readiness checks before Xactimate export—code completeness, required photo attachments, note compliance, grouping, ordering, and carrier‑specific formatting—to minimize edits and rejections.

For Franchise Standardizer Fran and QA Compliance Quinn, two of RoofLens's core personas, the result is a step change in control and visibility. Templates and pricing stay in lockstep with policy and market changes, while exceptions are handled quickly and auditably. Estimators receive guidance in the flow of work, not as after‑the‑fact corrections, so they stay productive and confident.

"Before RoofLens, every branch had its own flavor of a 'standard' estimate," said Marcus Hill, VP of Operations at a national roofing franchise that piloted the suite. "Now our templates roll out overnight, our approvals have teeth, and our margins aren't left to chance. Reviewers spend less time nitpicking structure and more time approving deals."

The suite also helps production and finance teams. Production Planners can trust that area and edge counts, waste factors, and assembly choices produce accurate material orders and schedules, reducing overages and delays. Finance gains line‑of‑sight into pricing sources, discounting behavior, and margin trends by market, with exports ready for ERP ingestion and audit.

All controls are layered on top of RoofLens's evidence and capture foundation. Every estimate is backed by verifiable measurements and photos, complete with QR‑based verification and a custody trail. When rules or templates update, CodeCrosswalk maintains a versioned history and highlights deltas so teams understand what changed and why, which is vital for internal training and external reviews.

"Guardrails should feel like a safety net, not a speed bump," added Delgado. "Our approach nudges estimators toward the best path and gives leaders the levers to keep the business healthy—without sacrificing the responsiveness customers expect."

Availability

The guardrails and pricing intelligence suite is available today for RoofLens customers with multi‑user plans. Template Lockstep, Approval Matrix, Variance Bands, Margin Locks, Region Profiles, and Compliance Pulse are included in enterprise tiers; CodeCrosswalk, PricePulse, GapGuard, SmartAssemblies, and ExportPreflight are available across tiers with advanced options for franchises and carriers.

About RoofLens

RoofLens is a software platform that turns drone and aerial photos into precise roof measurements, damage maps, and automated line‑item estimates. With guided capture, verifiable evidence, orchestration, and guardrails, teams work faster, reduce disputes, and protect margins across residential and commercial projects.

Media contact

Press: press@rooflens.com
Phone: +1‑555‑013‑4210
Web: https://www.rooflens.com
