Architectural project management

PlanPulse

Turn Drawings into Decisions

PlanPulse is a lightweight web app that centralizes architectural drawings and client conversations into a single visual workspace for independent architects and small-firm project leads. Real-time versioned markups and one-click client approvals eliminate version chaos, halve revision rounds, and cut approval cycles by up to 40%.



Product Details

Explore this AI-generated product idea in detail. Each aspect has been thoughtfully created to inspire your next venture.

Vision & Mission

Vision
Empower independent architects and small firms to deliver faster, transparent projects through real-time visual collaboration and decisive client approvals.
Long Term Goal
Within 3 years, onboard 10,000 architecture teams and reduce average project delays by 30%, becoming the default client-facing collaboration layer for small firms.
Impact
Helps independent architects and small-firm leads cut approval cycles by up to 40%, halve revision rounds, and save 6–12 hours per project within three months by converting scattered feedback into traceable one-click sign-offs, reducing delays and improving on-time delivery.

Problem & Solution

Problem Statement
Independent architects and small-firm project leads lose days reconciling scattered client feedback, conflicting plan versions, and slow approvals because email, PDFs, and generic PM tools lack intuitive visual markups, versioned overlays, and client-friendly approval workflows.
Solution Overview
PlanPulse centralizes architectural drawings into a single visual workspace, using real-time, versioned visual markups and one-click client approvals to eliminate conflicting feedback and version chaos, converting scattered comments into definitive, traceable sign-offs on the drawing.

Details & Audience

Description
PlanPulse is a lightweight web app that centralizes architectural drawings and client conversations into a single visual workspace. It serves independent architects and small-firm project leads who need faster approvals and transparent client communication. PlanPulse eliminates version chaos and accelerates approvals, cutting approval cycles and saving six to twelve hours per project. Its standout feature is real-time, versioned visual markups that clients can annotate and approve with one click.
Target Audience
Independent architects and small-firm leads (ages 25–55) who need faster approvals and prioritize visual, client-facing collaboration.
Inspiration
I watched a young architect scribble angry red pen notes across a client's printed plan, then lug two thick print sets home to reconcile competing emails. The client annotated a PDF, the contractor messaged decisions in WhatsApp, and a week dissolved into version chaos. That tactile, frustrating scene sparked PlanPulse: a lightweight visual workspace with versioned markups and one‑click client approvals.

User Personas

Detailed profiles of the target users who would benefit most from this product.


Approval-Orchestrator Ava

- Owner’s representative PM at a mixed-use developer; oversees 3–5 parallel projects.
- Experience: 8–12 years in client-side delivery.
- Company size: 50–200 employees; distributed stakeholders and executives.
- Tools: Excel, Smartsheet, Procore, Teams; approval-tracking spreadsheets.

Background

Started in construction administration, burned by a nine-email approval thread that derailed a schedule. Switched to owner’s rep role and champions single-source-of-truth tools to prevent repeat disasters.

Needs & Pain Points

Needs

1) Live view of pending approvals by owner.
2) Single thread for all decision comments.
3) Exportable audit trail for leadership reviews.

Pain Points

1) Approvals splintered across email, chats, and PDFs.
2) Executives review outdated drawings without notice.
3) Manual status spreadsheets constantly out of date.

Psychographics

- Urgency-driven; allergic to vague status.
- Accountability hawk; timestamps or it didn’t happen.
- Values clarity over polish, outcome over output.
- Data-first storyteller for executive updates.

Channels

1) LinkedIn - owner’s rep groups
2) Procore Community - workflows
3) Microsoft Teams - org channels
4) Construction Dive - newsletter
5) YouTube - project controls


Version-Guardian Victor

- Role: BIM Coordinator at a 20–40 person architecture firm.
- Experience: 5–9 years; Revit and Navisworks power user.
- Location: Chicago; coordinates multi-office consultants.
- Tools: Revit, BIM 360, Bluebeam, Dynamo scripts.

Background

Moved from production drafting to firmwide standards after costly rework from wrong issue set. Now owns naming conventions and integrations that prevent drift.

Needs & Pain Points

Needs

1) Unbroken version history tied to sheets and models.
2) Diffs highlighting scope changes between issues.
3) Permissions mapping for internal staff and consultants.

Pain Points

1) Screenshot markups detached from authoritative files.
2) File naming drift causing downstream rework.
3) Consultants annotating outdated sheets unnoticed.

Psychographics

- Process purist; a single source beats heroics.
- Automates tedium; scripts wherever possible.
- Values traceability over speed hacks.
- Champions cross-discipline coordination rituals.

Channels

1) Autodesk University - sessions
2) Revit Forum - threads
3) r/Revit - tips
4) YouTube - Balkan Architect
5) AEC Hackathon - community


Constructability-Checker Casey

- Role: Preconstruction Manager at a regional general contractor.
- Experience: 10–15 years; estimator-to-precon track.
- Company size: 200–800 employees; design-build mix.
- Tools: Procore, Bluebeam, Excel, Assemble.

Background

Learned the hard way when late design tweaks triggered a wave of RFIs and change orders. Now insists on clear, timestamped decisions before GMP.

Needs & Pain Points

Needs

1) Single hub for constructability comments by discipline.
2) Client-approved alternates tracked with cost notes.
3) Notifications when design sets are superseded.

Pain Points

1) Bluebeam markups siloed from evolving drawing sets.
2) Late changes without explicit notification.
3) Conflicting attachments circulating in email.

Psychographics

- Risk radar always on; assumptions documented.
- Prefers clarity over breadth; decisive milestones.
- Collaborative yet firm on deadlines.

Channels

1) Procore Community - Q&A
2) LinkedIn - GC network
3) Bluebeam Forum - workflows
4) YouTube - precon tactics
5) ENR - newsletter


Finish-Focused Fiona

- Role: Interior Designer serving hospitality and high-end residential.
- Experience: 6–10 years; boutique studio co-lead.
- Location: Los Angeles; frequent site walkthroughs.
- Tools: AutoCAD, SketchUp, Adobe CC, Pinterest.

Background

Lost weeks reconciling conflicting finish schedules after text messages and PDFs clashed. Adopted visual diffs and centralized comments to restore client trust.

Needs & Pain Points

Needs

1) Side-by-side visual diffs for finish options.
2) Room-based threads with photos and swatches.
3) Mobile approvals during walkthroughs.

Pain Points

1) Feedback scattered across texts, emails, and photos.
2) Spec changes lost between drawing versions.
3) Clients approving outdated images.

Psychographics

- Aesthetics zealot; details decide the feeling.
- Client experience over process complexity.
- Visual thinker; hates spreadsheet sprawl.

Channels

1) Instagram - portfolio
2) Pinterest - boards
3) LinkedIn - professional updates
4) ArchDaily - inspiration
5) YouTube - product demos


Permit-Path Priya

- Role: Permitting Coordinator within a mid-size architecture firm.
- Experience: 4–7 years interfacing with multiple AHJs.
- Location: Midwest metro; hybrid work.
- Tools: Bluebeam Studio, ePlan portals, Excel trackers.

Background

After a resubmittal stalled for missing change explanations, built a checklist habit. Now curates airtight logs linking sheet deltas to responses.

Needs & Pain Points

Needs

1) Automated change logs between submittals.
2) Threaded AHJ comment responses per sheet.
3) Timestamped approval records for audits.

Pain Points

1) AHJ feedback scattered across portals and emails.
2) Manual change narratives take hours.
3) Version confusion causes resubmittal delays.

Psychographics

- Compliance-first; dates, references, receipts.
- Diplomatic with officials; persistent with teams.
- Deadline-driven; checklist addict.

Channels

1) Bluebeam Studio - sessions
2) AIA KnowledgeNet - codes
3) LinkedIn - permitting groups
4) ICC Community - forums
5) YouTube - code updates


Detail-Defender Dana

- Role: QA/QC Lead at a 40-person architecture firm.
- Experience: 12–18 years; former technical architect.
- Location: Remote-friendly; coordinates across time zones.
- Tools: Newforma, Bluebeam, SharePoint, checklists.

Background

Managed a dispute where untracked client approvals escalated to claims. Committed to traceability and standardization across studios.

Needs & Pain Points

Needs

1) Locked approvals tied to specific sheets.
2) Deviation alerts against standard templates.
3) Reviewer assignments with due dates.

Pain Points

1) Approvals buried in email attachments.
2) Inconsistent markup conventions by team.
3) Review deadlines slipping unnoticed.

Psychographics

- Process absolutist; exceptions invite risk.
- Detail-obsessed; zero tolerance for ambiguity.
- Coaches teams; prefers systems over heroics.

Channels

1) Newforma Community - best practices
2) LinkedIn - risk management
3) AIA Trust - resources
4) r/architecture - practice
5) YouTube - QA/QC talks

Product Features

Key capabilities that make this product valuable to its target users.

Auto-Route Matrix

Automatically maps each sheet to the right approvers using tags (discipline, scope, budget impact), project templates, and learned patterns. Eliminates manual routing, reduces misfires, and ensures the first review hits the right desk every time.

Requirements

Sheet Tag Extraction & Normalization
"As a project lead, I want tags to be auto-extracted and normalized for each sheet so that routing rules apply consistently without manual data entry."
Description

Automatically extract and normalize routing tags (e.g., discipline, scope, budget impact, phase) from each sheet using filename parsing, title-block fields, embedded metadata, and OCR for PDFs. Map raw values to controlled taxonomies with validation, deduplication, and confidence scoring. Provide a lightweight review UI for human correction when confidence is low. Persist tags at the sheet-version level and expose them via the PlanPulse sheet model and API. Support project-level custom taxonomies and synonyms to ensure consistency across teams and projects, supplying reliable inputs to downstream routing logic.
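The merge rule implied above (multiple extraction sources, highest confidence wins, fixed source precedence on ties) can be sketched as follows. This is an illustrative sketch only; the `Candidate` type, helper names, and example values are assumptions, not PlanPulse's actual implementation.

```python
# Illustrative sketch of per-tag-type candidate merging: the highest-confidence
# candidate wins, with ties broken by source precedence
# title-block > embedded_metadata > filename > OCR. Names/types are assumed.
from dataclasses import dataclass

SOURCE_PRECEDENCE = ["title-block", "embedded_metadata", "filename", "ocr"]

@dataclass
class Candidate:
    tag_type: str      # e.g. "discipline"
    raw_value: str     # e.g. "STRUCT"
    source: str        # one of SOURCE_PRECEDENCE
    confidence: float  # 0.00-1.00

def merge_candidates(candidates):
    """Collapse candidates to one winner per tag type."""
    winners = {}
    for c in candidates:
        best = winners.get(c.tag_type)
        if best is None:
            winners[c.tag_type] = c
            continue
        # Higher confidence wins; on a confidence tie, the source that comes
        # earlier in SOURCE_PRECEDENCE wins (negated index sorts that way).
        if (c.confidence, -SOURCE_PRECEDENCE.index(c.source)) > \
           (best.confidence, -SOURCE_PRECEDENCE.index(best.source)):
            winners[c.tag_type] = c
    return winners

cands = [
    Candidate("discipline", "Structural", "filename", 0.90),
    Candidate("discipline", "Structural", "title-block", 0.90),  # tie -> title-block
    Candidate("phase", "DD", "ocr", 0.65),
]
merged = merge_candidates(cands)
```

The merged result keeps one tag per type; provenance and confidence aggregation (storing all contributing sources, taking the maximum confidence) would layer on top of this selection step.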

Acceptance Criteria
Multi-Source Extraction and Merge Resolution

Given a PDF sheet is uploaded to a project with tag extraction enabled
When the extraction job executes
Then candidate tags are extracted from the filename, title-block fields, embedded PDF metadata, and OCR of the sheet content
And each candidate includes tag type, raw value, source, and a confidence scored between 0.00 and 1.00
And the system merges candidates per tag type using highest-confidence-wins; ties are broken by source precedence: title-block > embedded_metadata > filename > OCR
And the extraction job finishes with status "extracted" recorded on the sheet-version

Taxonomy Mapping and Synonym Normalization

Given project-level controlled taxonomies and synonyms exist for discipline, scope, budget_impact, and phase
When raw tag values are mapped
Then each value is normalized to a canonical taxonomy term or flagged as unmapped
And configured synonyms map to their canonical term
And unmapped values produce a validation warning with up to 3 suggested canonical terms
And the normalized tags are stored alongside their canonical IDs and source raw values

Deduplication and Confidence Aggregation

Given multiple sources produce the same tag type for a sheet-version
When normalization completes
Then duplicate values are consolidated into one normalized tag per type
And the final tag stores provenance as an array of contributing sources with their confidences
And the final confidence equals the maximum contributing confidence

Low-Confidence Review UI and Correction Flow

Given a sheet-version has any tag with confidence below the project threshold (default 0.80)
When a reviewer opens the Sheet Tag Review panel
Then low-confidence tags are highlighted and ordered ascending by confidence
And reviewers can select from suggested canonical terms or search the taxonomy with autocomplete
And saving applies validation against the project taxonomy and prevents invalid entries
And accepted corrections persist on the sheet-version and update the final normalized tags immediately
And the system records an audit log entry with user, timestamp, and before/after values

Version-Level Persistence and Immutability

Given a sheet has version V1 with finalized normalized tags
When version V2 is uploaded
Then V1 tags remain immutable and queryable
And V2 tags are re-extracted and normalized independently of V1
And manual corrections on V2 do not alter V1

API and Model Exposure of Tags

Given a sheet-version has normalized tags
When a client requests GET /api/sheet-versions/{id}
Then the response includes tags with fields: type, canonical_value, canonical_id, confidence, sources[], raw_values[]
And GET /api/sheets/{id} includes latest_version_tags with the same structure
And list endpoints support filtering by tags via query parameters (e.g., ?discipline=Architecture&phase=DD)

OCR Fallback and Performance SLA

Given a PDF lacks parsable title-block fields and embedded metadata for required tag types
When extraction runs
Then OCR is applied to the sheet and tag heuristics attempt extraction from the OCR text
And if OCR completes successfully, extracted values enter the same normalization pipeline
And end-to-end extraction and normalization completes within 10 seconds for a single-page PDF ≤ 10 MB
And failures set the sheet-version extraction_status to "failed" with an error_code and a retriable flag
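To make the API-exposure criterion concrete, a sheet-version payload might look like the sketch below. The field names follow the criterion; the values, IDs, and the filtering helper are invented for illustration and are not PlanPulse's actual API.

```python
# Hypothetical payload shape for GET /api/sheet-versions/{id}; field names
# come from the acceptance criterion above, values are made up.
sheet_version = {
    "id": "sv_123",
    "tags": [
        {
            "type": "discipline",
            "canonical_value": "Architecture",
            "canonical_id": "tax_arch",
            "confidence": 0.92,
            "sources": ["title-block", "filename"],
            "raw_values": ["ARCH", "A-101_ARCH"],
        }
    ],
}

def filter_by_tag(versions, tag_type, value):
    """Mimics list-endpoint filtering like ?discipline=Architecture."""
    return [
        v for v in versions
        if any(t["type"] == tag_type and t["canonical_value"] == value
               for t in v["tags"])
    ]
```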
Rules-based Routing Engine & Templates
"As a project lead, I want to define reusable routing templates that map tags to approvers so that new sheets are automatically sent to the right reviewers."
Description

Implement a deterministic routing engine that evaluates normalized tags against project routing templates to select approvers. Support rule priority, conditional logic (AND/OR), thresholds (e.g., budget impact bands), parallel and serial approval steps, and per-discipline exceptions. Provide conflict resolution and default fallbacks. Enable reusable project templates with versioning, change history, and safe publishing. Integrate with notifications and the approval workflow so routed sheets land in the correct approver inbox with the correct step order and due dates.
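The deterministic selection order described here (priority first, then specificity, then the default fallback on an unresolvable tie) can be sketched roughly as below. The `Rule` type and helper names are assumptions for illustration, not the engine's actual design.

```python
# Sketch of deterministic rule selection: highest priority wins, then higher
# specificity (number of tag conditions matched); an unresolvable tie falls
# back to the default route. Names/types are assumed, not PlanPulse's code.
from dataclasses import dataclass, field

@dataclass
class Rule:
    rule_id: str
    priority: int
    conditions: dict                       # tag_type -> required canonical value
    approvers: list = field(default_factory=list)

    def matches(self, tags):
        return all(tags.get(k) == v for k, v in self.conditions.items())

    @property
    def specificity(self):
        return len(self.conditions)

def select_rule(rules, tags, fallback):
    matched = [r for r in rules if r.matches(tags)]
    if not matched:
        return fallback                    # no coverage -> default route
    matched.sort(key=lambda r: (-r.priority, -r.specificity))
    top = matched[0]
    # Tie on both priority and specificity is unresolvable -> fallback.
    if len(matched) > 1 and (matched[1].priority, matched[1].specificity) == \
                            (top.priority, top.specificity):
        return fallback
    return top

r1 = Rule("R1", 10, {"discipline": "Structural"}, ["A1"])
r2 = Rule("R2", 8, {"discipline": "Structural", "scope": "Foundation"}, ["A2"])
fallback = Rule("F", 0, {}, ["project-lead"])
```

Given `tags = {"discipline": "Structural", "scope": "Foundation"}`, R1 is selected even though R2 matches more conditions, because priority outranks specificity, matching the conflict-resolution criteria.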

Acceptance Criteria
Deterministic Routing From Tags & Template

Given a sheet S tagged discipline=Structural, scope=Foundation, budgetImpact=12%, and a project routing template T with a rule R1 that maps to approvers A1 -> A2 (serial) with due offsets +2d and +3d
And all tags are normalized to the template’s taxonomy
When the routing engine evaluates S against T at 10:00 UTC
Then the selected approvers are exactly [A1, A2] in the defined serial order
And due dates are computed as 2 and 3 calendar days from the evaluation timestamp
And the routing result is identical for repeated evaluations with the same inputs
And the engine stores an audit log containing {sheetId, templateVersionId, matchedRuleId=R1, evaluatedAt, approverIds, dueDates}

Rule Priority & Conflict Resolution

Given a template with two rules R1 (priority=10, specificity=3) and R2 (priority=8, specificity=5) that both match sheet S
When routing runs
Then R1 is selected because it has higher priority
And the audit log records discardedMatches=[R2] with reasons=[lowerPriority]
Given two rules R3 (priority=5, specificity=5) and R4 (priority=5, specificity=3) that both match S
When routing runs
Then R3 is selected because it is more specific (more tag conditions matched)
Given two rules R5 (priority=5, specificity=5) and R6 (priority=5, specificity=5) that both match S, and a default fallback route F exists
When routing runs
Then F is applied and the log records tieBreak="fallback"

Conditional Logic (AND/OR) and Threshold Bands

Given a rule R defined as ((discipline in {MEP, Structural}) AND (budgetImpact between 10.00% and 20.00%, inclusive)) OR (scope contains "Life Safety")
And sheet S1 has discipline=MEP and budgetImpact=15.00%
When routing runs
Then R matches S1
And sheet S2 has discipline=Architectural and scope="Life Safety - Alarms"
When routing runs
Then R matches S2
And sheet S3 has discipline=Structural and budgetImpact=9.99%
When routing runs
Then R does not match S3 because of the threshold’s lower bound
And threshold comparisons are evaluated with two-decimal precision using normalized percentages

Parallel and Serial Approval Step Orchestration

Given a template defines Step 1: parallel [A1, A2] with dueOffset=+2d and Step 2: serial [A3 then A4] with dueOffset=+3d (per step activation)
And sheet S is routed at 12:00 UTC
When tasks are created
Then A1 and A2 each receive a pending task simultaneously with dueDate = 12:00 UTC + 2 days
And Step 2 tasks are not created until both A1 and A2 complete Step 1
When A3 completes Step 2.1
Then a task for A4 is created immediately with dueDate set to step activation + 3 days
And the workflow blocks final approval until all steps complete
And cancellations or reassignments preserve step order and are recorded in the audit log

Per-Discipline Exception Overrides

Given a base template route for discipline=Structural maps to approvers [A1]
And a per-discipline exception E for project P overrides Structural to [A5, A6] in parallel
When a Structural sheet in project P is routed
Then approvers [A5, A6] are selected and [A1] is not used
And the audit log records overrideId=E and reason="discipline exception"
And removal of E causes subsequent routings to revert to the base template behavior

Template Versioning, History, and Safe Publishing

Given template T has Draft v2 and Published v1
When routing runs
Then only Published v1 is used for decisioning
And changes saved to Draft v2 are recorded with author, timestamp, and a diff summary
When v2 is published
Then v2 becomes the active Published version and v1 remains in history as read-only
And an integrity check validates rule syntax, approver existence, and cycle-free step graphs before publishing; publishing fails with descriptive errors if validation fails
And a template cannot be deleted if referenced by any routing decision; attempts return a descriptive error

Notifications and Inbox Integration With Due Dates

Given sheet S is routed to approvers [A1, A2] in parallel with dueOffset=+2d and SLA=48h
When routing completes
Then inbox entries are created for A1 and A2 with dueDate = now + 48h and the correct step metadata
And a single notification is sent per approver via configured channels within 60 seconds, idempotent on retries
And the approver can open S from the inbox and see the route context (step number, due date, matched rule)
And overdue reminders are scheduled according to template SLAs (e.g., first reminder at +24h)
And if routing yields an empty approver set, the engine applies the configured fallback and logs the condition without creating inbox items
Approver Directory & Role Mapping
"As an architect, I want routing to target roles with current assignees and backups so that reviews continue even when someone is unavailable."
Description

Create a project-scoped directory of approver roles (e.g., Structural Lead, Client PM) mapped to individuals and backups with contact channels and working hours. Support synchronization with SSO/IdP groups where available, plus manual assignment. Routing rules target roles rather than individuals, with runtime resolution to the current assignee or designated backup based on availability. Enforce permission checks so only authorized recipients can view routed sheets. Provide APIs and UI for administrators to manage roles, memberships, and coverage rules.
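The runtime resolution rule (Primary if in working hours, otherwise the first in-hours backup by priority, otherwise the Primary flagged out-of-hours) can be sketched as below. The `Member` type, names, and hours are illustrative assumptions.

```python
# Sketch of role-to-assignee resolution based on working hours. All names
# and types are assumed for illustration.
from dataclasses import dataclass
from datetime import datetime, time
from zoneinfo import ZoneInfo

@dataclass
class Member:
    name: str
    tz: str          # IANA time zone of the member
    start: time      # working-hours start (local)
    end: time        # working-hours end (local)

    def in_hours(self, at_utc: datetime) -> bool:
        local = at_utc.astimezone(ZoneInfo(self.tz)).time()
        return self.start <= local <= self.end

def resolve(primary, backups, at_utc):
    """Return (assignee, out_of_hours_flag)."""
    if primary.in_hours(at_utc):
        return primary, False
    for b in backups:                 # backups are ordered by priority
        if b.in_hours(at_utc):
            return b, False
    return primary, True              # nobody in hours: flag out-of-hours

ava = Member("Ava", "America/Chicago", time(9), time(17))
vic = Member("Victor", "Europe/London", time(9), time(17))
# 13:00 UTC on 2024-06-03 is 08:00 in Chicago (off hours, CDT) and
# 14:00 in London (in hours, BST), so the backup is selected.
at = datetime(2024, 6, 3, 13, 0, tzinfo=ZoneInfo("UTC"))
```

The "no Primary and no Backups" case from the criteria would be a configuration error raised before this function is called.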

Acceptance Criteria
Project Role Creation and Validation

Given I am a Project Admin
When I create a new Approver Role with a Role Name, an optional Description, and Tags (discipline, scope, budget impact)
Then the Role Name must be unique within the project and the tags are persisted
Given the form is valid
When I click Save
Then the role is created, listed in the Approver Directory, and an audit log entry is recorded with actor, timestamp, and changed fields
Given a role is referenced by any active routing rule
When I attempt to delete it
Then deletion is blocked with an actionable error and I am offered Deactivate instead
Given a role is Deactivated
When routing rules evaluate
Then the role is excluded from targeting and a configuration warning is logged

Member and Backup Assignment with Contact Channels and Working Hours

Given a role exists
When I assign a Primary and optional Backups
Then only current project members can be selected; inviting a new member is available; and a Primary is required if the role is referenced by any active routing rule
Given a member is added to a role
When I set their Contact Channels (email required; phone/Slack optional) and Working Hours (days of week, start/end times, time zone)
Then inputs are validated and saved per member
Given a role has Backups
When I reorder the backups
Then the new priority order is saved and used in runtime resolution
Given I remove a member from a role
When I save changes
Then the member immediately loses role-based access and an audit entry is created

SSO/IdP Group Synchronization with Manual Overrides

Given an SSO/IdP integration is configured
When I map an IdP group to a role
Then group membership is synchronized at least every 15 minutes and on demand via "Sync Now"
Given a sync occurs
When users are added to or removed from the IdP group
Then corresponding project members are added to or removed from the role as backups; the designated Primary must be explicitly set and must belong to the mapped group
Given the Primary no longer belongs to the mapped IdP group after a sync and backups exist
When the sync completes
Then the top-priority backup is automatically promoted to Primary and an audit notification is recorded; if no backups exist, the role is marked Uncovered
Given I add a manual backup who is not in the mapped group
When subsequent syncs run
Then the manual backup remains in the role unless explicitly removed by an admin and is labeled as a manual override

Runtime Role Resolution for Routing

Given an active routing rule targets a Role
When a sheet triggers routing at time T
Then the system resolves the assignee by selecting the Primary if T is within the Primary’s working hours (in their time zone), otherwise selecting the first Backup (by priority) whose working hours include T
Given no Primary or Backup is within working hours at time T
When resolution runs
Then the Primary is selected and the route is flagged Out-of-hours for visibility
Given a role has no Primary and no Backups
When resolution runs
Then routing fails with a configuration error, delivery is not attempted, and project admins are alerted

Permission Enforcement for Routed Sheets

Given a sheet is routed to a Role and an assignee is resolved
When any user attempts to access the sheet
Then only the resolved assignee and Project Admins can view it; all others receive HTTP 403 and cannot access it via direct URL or shared link
Given the role assignment changes after routing (e.g., the next coverage window selects a different member)
When the new assignee is resolved
Then access is updated within 60 seconds: the previous assignee loses access, the new assignee gains access, and an audit entry records the change
Given a user without access is mentioned in a comment or notification
When they click a link to the routed sheet
Then the link does not reveal content and prompts them to request access from a Project Admin

Administrator APIs for Roles, Membership, and Coverage Rules

Given I have a Project Admin API token
When I call the Roles API to create, update, or delete a role
Then the API enforces admin scope and returns 201 on create, 200 on update, 409 on a duplicate Role Name, 400 on validation errors, and 423 when a delete is blocked by active references
Given I manage role membership via the API
When I add or remove members, set backup priority, and set member working hours, time zone, and contact channels
Then the changes persist and are reflected in GET endpoints and in runtime resolution within 60 seconds
Given concurrent updates occur
When requests include ETag/If-Match headers
Then optimistic concurrency prevents lost updates and returns 412 on a version mismatch
Given any admin API call is made
When it completes
Then structured audit logs are produced with actor, action, target, old/new values, and a correlation ID
Exception Handling & Manual Override
"As a project lead, I want to intercept and correct ambiguous routes so that no sheet stalls or reaches the wrong approver."
Description

Introduce a triage flow for cases where tags are missing, ambiguous, or routing rules conflict. Hold affected sheets in a visible triage queue, notify the project lead, and offer guided resolution: edit tags, select approvers, or choose a fallback template. Require a reason on manual overrides and capture corrections for continuous improvement. Apply SLA timers and escalation rules to prevent stalls, and log all actions for traceability.
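The SLA timer behavior described here (a reminder at 50% of the SLA, an escalation at breach, and a secondary escalation at 2x) can be sketched as a simple schedule. Event names and recipients are assumptions for illustration.

```python
# Sketch of the triage SLA/escalation schedule. The event labels and
# recipients in the comments are assumptions, not the shipped behavior.
from datetime import datetime, timedelta

def escalation_schedule(entered_at: datetime, sla: timedelta):
    """Return (event, fire_at) pairs for a sheet entering the triage queue."""
    return [
        ("reminder", entered_at + sla / 2),    # 50% of SLA: remind the lead
        ("escalation", entered_at + sla),      # breach: lead + escalation contact
        ("secondary", entered_at + 2 * sla),   # 2x SLA: practice admin, if set
    ]

t0 = datetime(2024, 6, 3, 9, 0)
events = escalation_schedule(t0, timedelta(minutes=5))  # 5-minute test SLA
```

Each fired event would also be appended to the audit trail, per the traceability requirement.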

Acceptance Criteria
Missing Tags Triage Queue Placement & Lead Notification

Given a newly uploaded or updated sheet lacks one or more required routing tags (discipline, scope, or budget impact)
When Auto-Route executes
Then the sheet is withheld from routing and added to the Triage Queue with status "Missing Tags"
And the Triage Queue row displays the sheet ID/name, the missing tag list, time entered, SLA deadline, and assigned project lead
And an in-app notification and email are sent to the project lead with a deep link to the triage item
And no approver notifications are sent until the triage item is resolved

Ambiguous/Conflicting Rule Detection & Lead Notification

Given a sheet’s tags map to multiple routing paths or conflicting rules
When Auto-Route executes
Then the sheet is added to the Triage Queue with status "Rule Conflict"
And the triage detail view lists each competing path with its approver set and the rule(s) that triggered it
And the system offers guided options: Edit Tags, Select Approvers, Choose Fallback Template
And an in-app notification is sent to the project lead with the conflict summary and a triage link

Guided Resolution: Edit Tags

Given a triaged sheet with missing or incorrect tags
When the project lead selects Edit Tags, updates values, and clicks Save
Then the system validates required fields and allowed values against the project taxonomy and returns inline errors if invalid
And on success, the updated tags are saved and visible in the triage detail
And Auto-Route re-evaluates; if a single routing path is found, the sheet is routed and removed from the Triage Queue
And the correction (before/after tags, actor, timestamp) is stored for learning and audit

Guided Resolution: Manual Route via Approvers or Fallback Template

Given a triaged sheet
When the project lead selects either Select Approvers or Choose Fallback Template and configures a route
Then the system requires a non-empty manual override reason before submission
And the chosen approvers/template are validated against role and availability constraints
And on confirm, the sheet is routed per the selection and the triage item is closed
And an audit record captures actor, timestamp, selection details, and the override reason
And the override decision is captured for continuous improvement

SLA Timers and Escalation

Given the project’s triage SLA threshold is configured to 5 minutes for testing and an escalation contact is set
When a sheet enters the Triage Queue
Then a visible countdown shows the time remaining to SLA breach on the triage item
And at 50% of the SLA time, an in-app reminder is sent to the project lead
And at SLA breach, an in-app alert and email are sent to the project lead and the configured escalation contact
And if unresolved at 2x the SLA, a secondary escalation is sent to the practice admin (if configured)
And all reminder and escalation events are logged in the audit trail

Learning Suggestions from Prior Corrections

Given a prior triage resolution corrected tags or selected a manual route for a specific sheet type within the same project template
When a new sheet with matching context (same template and an equivalent tag pattern) enters Auto-Route
Then the system displays a suggested route based on the prior correction with a confidence label
And the project lead can apply the suggestion in one click or dismiss it
And accepted suggestions route the sheet automatically and are recorded as reinforcement data

Audit Trail and Export

Given triage-related events occur (entry created, notifications, tag edits, manual overrides, fallback selections, SLA escalations)
When a user with admin or auditor permissions opens the sheet’s Audit Log
Then every event is listed with timestamp, actor, action type, before/after values, reason (if any), and correlation ID
And events are immutable (no edit or delete) and filterable by action type and date range
And admins can export triage audit events for a selected project and date range to CSV and JSON
Adaptive Routing Suggestions (ML)
"As a project lead, I want the system to learn our routing patterns and suggest approvers so that setup time and misroutes decrease over time."
Description

Add a learning component that analyzes prior routing decisions and approval outcomes to suggest approvers when rules lack coverage or multiple matches exist. Provide a confidence score and plain-language rationale (e.g., similar discipline/scope patterns). Auto-apply suggestions above a configurable confidence threshold; otherwise present as ranked recommendations in the triage UI. Support opt-in at the project level, anonymize cross-project signals where required, and record model/version context with each decision for auditability.
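The threshold gate described here (auto-apply at or above a configurable confidence, otherwise present ranked recommendations) reduces to a small decision function. The sketch below uses assumed names and the 0.50–0.99 bounds and 0.80 default from the acceptance criteria; it is illustrative, not the actual model-serving code.

```python
# Sketch of the auto-apply decision: top suggestion at/above the threshold
# is applied automatically; otherwise up to five ranked recommendations go
# to the triage UI. Names are assumptions for illustration.
def decide(suggestions, threshold=0.80):
    """suggestions: list of (approver_id, confidence) pairs, any order."""
    if not (0.50 <= threshold <= 0.99):
        raise ValueError("threshold must be between 0.50 and 0.99")
    # Sort by confidence descending; tie-break on approver_id so the
    # ranking is deterministic, per the auditability criterion.
    ranked = sorted(suggestions, key=lambda s: (-s[1], s[0]))
    if ranked and ranked[0][1] >= threshold:
        return ("auto_applied", ranked[0])
    return ("triage", ranked[:5])   # ranked recommendations for triage
```

Either branch would also emit the audit record (model name/version, threshold at decision time, full suggestion list) required by the auditability criteria.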

Acceptance Criteria
Auto-Apply Above Threshold
Given project ML routing opt-in is enabled and auto-apply threshold = 0.80 And a sheet triggers routing with either multiple rule matches or no rule coverage When the system generates ML approver suggestions with confidence scores Then if the top suggestion confidence ≥ 0.80, that approver set is auto-applied to the sheet And the UI labels the decision as "Auto-applied by ML" and displays the confidence and rationale text And an audit record is created capturing sheet ID, project ID, full suggestion list, selected approver(s), confidence, threshold, model name, model version, timestamp, and decision type = auto-applied
Ranked Recommendations Below Threshold
Given ML routing opt-in is enabled and auto-apply threshold = 0.80 And routing is triggered for a sheet When the highest confidence suggestion is < 0.80 Then the triage UI displays a ranked list (min 3, max 5) of recommendations sorted by confidence descending with deterministic tie-breaking And each recommendation shows approver identity, confidence (two decimals), and a plain-language rationale And selecting a recommendation assigns the approver(s) and writes an audit record with the chosen option and confidence And choosing "None of the above" allows manual assignment and writes an audit record And no auto-apply occurs in this flow
Confidence Score and Rationale Quality
Given the system generates ML approver suggestions Then each suggestion includes a numerical confidence between 0.00 and 1.00 with two decimal places And each suggestion includes a rationale that cites at least one factor: discipline match, scope similarity, budget impact similarity, template alignment, or historical approval outcomes And confidence values and rationales shown in UI match the values returned by the public API for the same event And rationales contain no client names, project titles, or emails when anonymization is enabled
Project-Level Opt-In Control
Given a new project defaults to ML routing opt-in = Off When routing runs while opt-in = Off Then no ML inference calls are made, no suggestions are displayed, and only rule/template routing occurs When a project admin enables ML routing opt-in Then subsequent routing attempts produce ML suggestions per specification And an audit entry records the opt-in change with actor, timestamp, and old/new values
Cross-Project Signal Anonymization
Given the organization setting "Anonymize cross-project signals" is On When ML suggestions are generated using cross-project data Then rationales and surfaced evidence exclude client names, project titles, emails, and other PII, using generic descriptors (e.g., "similar healthcare project") And training/inference logs store only hashed or aggregated identifiers for external projects And API/UI payloads expose no plaintext identifiers from other organizations When the setting is Off Then suggestions may reference internal (same-org) context while still excluding PII from other organizations
Model and Version Auditability
Given any ML routing suggestion event (auto-applied or recommended) Then the audit record includes: model_name, model_version, feature_set_version, training_window_identifier (if available), inference timestamp, threshold at time of decision, rule coverage status, and candidate rules count And audit records are retrievable via API and filterable by model_version and project ID And re-running inference with the same inputs and model_version in a controlled test produces the same ranked order (deterministic tie-breaking) for at least 99% of sampled events
Configurable Auto-Apply Threshold
Given a project admin can configure the auto-apply confidence threshold Then the allowed range is 0.50 to 0.99 in 0.01 increments with default 0.80 When the threshold is updated Then the new value takes effect for routing attempts started within 1 minute of the change and is recorded in audit logs with actor and timestamp And auto-apply triggers only when top suggestion confidence ≥ current threshold And attempts to set values outside the allowed range are rejected with a clear validation error
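The range-and-increment rule above (0.50 to 0.99 in 0.01 steps) is easy to get wrong with raw floats. A minimal validator sketch; the function name and error message are illustrative:

```python
def validate_threshold(value: float) -> float:
    """Accept only thresholds in 0.50-0.99 on the 0.01 grid."""
    cents = round(value * 100)
    # Reject out-of-range values and values off the 0.01 grid
    # (comparing in integer hundredths sidesteps float artifacts).
    if not (50 <= cents <= 99) or abs(value * 100 - cents) > 1e-9:
        raise ValueError("threshold must be 0.50-0.99 in 0.01 increments")
    return cents / 100
```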
Routing Audit & Performance Metrics
"As a firm owner, I want visibility into routing performance so that I can improve templates and reduce approval time."
Description

Maintain an immutable audit trail for every routing decision, including input tags, template snapshot, rule matches, ML suggestions, chosen recipients, timestamps, overrides, and outcomes. Provide dashboards and exports showing first-hit success rate, re-route frequency, time-to-first-review, and bottlenecks by role/discipline. Trigger alerts when misfire rates or cycle times exceed thresholds, enabling teams to refine templates and improve throughput.
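Under the metric definitions used in this feature, the dashboard numbers reduce to simple aggregates over the audit trail. A hedged sketch, assuming each sheet's routing history is available as a plain dict (field names are illustrative; a real dashboard would aggregate the per-sheet durations into medians and p95s):

```python
def routing_metrics(sheets):
    """Compute first-hit success rate, re-route frequency, and
    per-sheet time-to-first-review from routing history records.

    Each record is assumed to carry:
      first_route_acknowledged (bool), re_routes (int),
      routed_at / review_started_at (hours, as floats).
    """
    n = len(sheets)
    # First-hit success: first route acknowledged with no re-route events.
    first_hit = sum(1 for s in sheets
                    if s["first_route_acknowledged"] and s["re_routes"] == 0)
    return {
        "first_hit_success_rate": first_hit / n,
        "re_route_frequency": sum(s["re_routes"] for s in sheets) / n,
        "time_to_first_review": [s["review_started_at"] - s["routed_at"]
                                 for s in sheets],
    }
```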

Acceptance Criteria
Immutable Routing Audit Captured per Decision
Given a sheet is routed (initial or re-route) by Auto-Route Matrix When the routing decision is finalized Then the system writes an immutable audit record containing: input_tags, template_snapshot_id and hash, matched_rule_ids with versions, ml_suggestions with scores, chosen_recipient_ids with roles, decision_started_at, decision_completed_at, overrides (who, what, why), outcome, correlation_id, project_id, sheet_id, actor And attempts to update or delete the record are rejected and logged as mutation_blocked And the record is persisted within 2 seconds of decision_completed_at And the record is retrievable by sheet_id, correlation_id, and project_id
Audit Log Query, Filter, and Pagination
Given 10,000+ audit records exist for a project When a user filters by date range, discipline tag, rule_id, recipient_id, outcome, and template_version, sorts by decision_completed_at desc, and paginates 100 per page Then the correct subset is returned with stable sort, next/prev cursors, and total_count And p95 latency for the first page is <= 3 seconds And when a sheet_id is specified, all records for that sheet are returned in chronological order
Metrics Dashboard Accuracy and Definitions
Given a dataset with known ground truth of routes, re-routes, acknowledgments, and review start times When the Routing Performance dashboard is loaded for a selected date range Then metrics are computed as:
- first_hit_success_rate = percent of sheets where the first route was acknowledged by at least one required approver without any re-route
- re_route_frequency = average number of re-route events per routed sheet
- time_to_first_review = time from initial route decision_completed_at to first review_started_at
- bottlenecks_by_role_discipline = median and p95 time from assignment to review_started_at grouped by role and discipline
And displayed values match the ground truth within 0.1% (or exact for counts) And metric tooltips show these definitions
Threshold-Based Alerts on Misfires and Cycle Time
Given project-level thresholds are configured: misfire_rate > 10% (7-day rolling) or median time_to_first_review > 24h When computations exceed any threshold for two consecutive calculation intervals Then an alert is created, shown in-app, emailed to project admins, and POSTed to the configured webhook with project_id, metric, threshold, current_value, window, and deep link to the dashboard And duplicate alerts are suppressed for 60 minutes per metric-project pair And an alert is auto-resolved and a resolution notification is logged when the metric returns below threshold
Export of Audit Logs and Metrics
Given a project admin requests an export of audit logs and metrics for a date range in CSV or JSON When the export job is submitted Then a file is generated with all matching records and the following fields for each audit record: project_id, sheet_id, correlation_id, decision_started_at, decision_completed_at, input_tags, template_snapshot_id, matched_rule_ids, ml_suggestions, chosen_recipient_ids, overrides, outcome And a manifest includes record_count and SHA-256 checksum And timestamps are ISO 8601 with timezone offset And p95 export completion time is <= 60 seconds for 100k audit records And the export is downloadable and can be delivered to a configured S3 bucket
Dashboard Performance and Freshness
Given a project with 25,000 audit records in the selected range When a user opens the Routing Performance dashboard Then time-to-first-paint is <= 2 seconds and p95 time-to-interactive is <= 3.5 seconds on a standard laptop over median network conditions And metrics reflect new audit records within 60 seconds of write And applying any filter (date range, role, discipline, outcome) updates charts within 500 ms
Drill-Down from Metrics to Audit Detail
Given the Routing Performance dashboard is loaded with filters applied When the user clicks a metric segment (e.g., Structural bottleneck) Then a drill-down list shows impacted sheets with re-route count, time-to-first-review, and last reviewer And clicking a sheet opens its full audit trail within 1 second And returning to the dashboard restores prior filters and scroll position
Routing Preview & Simulation
"As a project lead, I want to simulate routing outcomes before dispatch so that I can catch errors and tune templates."
Description

Offer a no-side-effects preview that simulates routing for a selected sheet or batch. Display the decision path, including matched rules, conflicts, and ML rationale, along with the proposed approver sequence and due dates. Allow what-if edits to tags or template rules and show the predicted impact before dispatch. Expose simulation via UI and API to support bulk uploads and template authoring workflows.
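The no-side-effects guarantee is simplest to enforce by running the routing logic against a copy of the sheet, so what-if edits can never leak into real data. A simplified sketch with a toy tag-matching rule model (the real matcher, with priorities, conflicts, and ML rationale, is far richer; all names here are illustrative):

```python
import copy

def simulate_routing(sheet, rules, what_if_tags=None):
    """Pure simulation: routes a deep copy of the sheet, never the
    sheet itself, so tags and templates are untouched outside the session."""
    trial = copy.deepcopy(sheet)
    if what_if_tags:
        trial["tags"].update(what_if_tags)  # applied only to the copy
    # Toy matcher: a rule fires if its tag appears among the sheet's tags.
    matched = [r for r in rules if r["tag"] in trial["tags"].values()]
    approvers = [a for r in matched for a in r["approvers"]]
    return {"decision_path": [r["id"] for r in matched],
            "approver_sequence": approvers}
```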

Acceptance Criteria
Single Sheet UI Preview — Decision Path & No Side Effects
Given a sheet in PlanPulse with tags and a selected project template When the user clicks "Preview Route" from the sheet toolbar Then the system displays a simulation panel within 2 seconds showing: predicted approver sequence with names, roles, and due dates; ordered decision path with matched rules and their priorities; ML rationale including confidence score per assignment; and any conflicts flagged with resolution applied And no notifications are sent, no tasks created, and the sheet's status remains unchanged And the panel is labeled "Simulation—no side effects" and shows a Simulation ID
Batch Simulation via UI — Multi-sheet Results & Performance
Given the user selects 2–200 sheets in a project When they click "Simulate Routing" Then the system returns a results table with one row per sheet, each showing status (Simulated/Needs Input/Error), approver count, first approver, and first due date And per-sheet error details are available for rows marked Error And P95 time to simulate 100 sheets is <= 10 seconds and 200 sheets is <= 20 seconds And no simulations produce side effects
What-if Edits — Predictive Impact & Non-persistence
Given the simulation panel is open for a sheet or batch When the user changes tags (e.g., discipline, scope, budgetImpact) or toggles template rules in "What-if" mode Then the system re-runs the simulation and displays differences versus baseline, including: added/removed/reordered approvers; due date changes with +/- deltas; and rule/rationale changes And the original sheet tags and template rules remain unchanged outside the simulation session And the user can click "Reset to Baseline" to discard all what-if edits
API Simulation Endpoint — Bulk & Template Authoring Support
Given a client POSTs to /api/v1/routing/simulations with a valid access token and payload containing one or more sheets (id or metadata) and optional draft template rules When the request is valid Then the response is 200 with a JSON body returning, per item: simulationId, decisionPath (matchedRules, conflicts, resolutions), approverSequence, dueDates, mlRationale (per-step confidence), and warnings And when the payload is invalid Then the response is 400 with machine-readable errors per item And the endpoint supports up to 500 sheets per request and responds with P95 <= 15 seconds And no persisted changes are made to sheets or templates And requests without Routing.Simulate scope are rejected with 403
Conflict Explanation & ML Rationale Transparency
Given multiple routing rules could apply to a sheet When a simulation is run Then the UI and API expose the conflict set, the resolution strategy applied (e.g., rule priority > specificity > last-updated > ML tie-break), and the chosen outcome And each ML-derived assignment includes a confidence score (0–1), top 3 feature attributions, and a reason label And if the final confidence for any critical approver falls below 0.6, the simulation flags "Needs Review" on that sheet
Due Date Calculation & Calendar Handling
Given a project with a default timezone and working calendar (business days, weekends, holidays) and template SLA offsets per approver role When a simulation computes due dates Then each approver's due date is calculated by applying the SLA offset in project timezone and honoring the working calendar (skip non-working days) And if an approver-level custom SLA exists, it overrides the template role SLA And all due dates are displayed in the viewer's local timezone in the UI and returned in ISO 8601 with timezone offset via the API And if required inputs are missing, the simulation returns a warning and uses the project default SLA
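The conflict-resolution order stated in these criteria (rule priority, then specificity, then last-updated, then ML tie-break) maps naturally onto a lexicographic sort key. A sketch with illustrative field names:

```python
def resolve_conflict(rules):
    """Pick the winning rule: highest priority, then highest specificity,
    then most recently updated, then ML score as the final tie-break."""
    return max(rules, key=lambda r: (r["priority"], r["specificity"],
                                     r["updated_at"], r["ml_score"]))
```

Because the key is a plain tuple comparison, re-running the resolution with the same inputs always yields the same outcome, which supports the deterministic-ordering requirements elsewhere in this document.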

SLA Escalator

Applies per-rung deadlines with business-hour calendars and auto-escalation rules. If an approver stalls, it escalates to a delegate or manager and updates the ladder timeline, shrinking idle time without manual chasing.

Requirements

Business-Hour Calendars
"As a project admin, I want to define business hours and holidays for each project so that SLA deadlines reflect real working time and avoid penalizing off-hours."
Description

Provide organization- and project-level calendars to define working hours, weekends, and regional holidays used to compute per-rung deadlines. Support multiple time zones, daylight-saving adjustments, and calendar inheritance with project overrides. Deadlines and timers pause outside business hours and resume on the next working period to ensure fair SLA tracking. Admins can import holiday sets, set exceptions, and preview how a deadline will be calculated for a given approver. Calendar data is versioned and auditable, and integrates with PlanPulse approval records so each decision shows the calendar context used for its SLA.
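The pause-outside-business-hours arithmetic can be sketched as a loop over working days. This is a deliberately simplified model (one daily window, naive datetimes, no DST or per-day overrides; parameter names are my own), but it reproduces the worked examples in the criteria below:

```python
from datetime import date, datetime, timedelta

def add_business_hours(start, hours, workday=(9, 17), weekend=(5, 6), holidays=()):
    """Advance `start` by `hours` of business time, pausing outside
    working hours, on weekends, and on holiday dates."""
    remaining = timedelta(hours=hours)
    t = start
    while remaining > timedelta(0):
        open_t = t.replace(hour=workday[0], minute=0, second=0, microsecond=0)
        close_t = t.replace(hour=workday[1], minute=0, second=0, microsecond=0)
        if t.weekday() in weekend or t.date() in holidays or t >= close_t:
            t = open_t + timedelta(days=1)   # skip to the next day's opening
            continue
        if t < open_t:
            t = open_t                        # wait for today's opening
            continue
        available = close_t - t               # business time left today
        if available >= remaining:
            return t + remaining
        remaining -= available
        t = open_t + timedelta(days=1)
    return t
```

For example, a 3-business-hour timer started Friday 16:00 on a Mon-Fri 09:00-17:00 calendar consumes one hour on Friday and lands at Monday 11:00, matching the pause/resume scenario below.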

Acceptance Criteria
Project Overrides Organization Calendar
Given an organization calendar with working hours Mon–Fri 09:00–17:00 in America/New_York and weekends off When Project Alpha defines an override that sets Fridays to 09:00–15:00 And a 6-business-hour SLA rung starts on Friday at 13:00 local project time Then the deadline is Monday at 13:00 local project time (2 hours on Friday + 4 hours on Monday) And Project Beta, which does not define overrides, continues to compute deadlines using the organization calendar without the Friday change And removing the project override reverts subsequent deadline calculations to the organization calendar without affecting already computed records
Cross-Time-Zone and DST-Aware Deadline Calculation
Given an approver in America/New_York with a calendar of Mon–Fri 09:00–17:00 and an 8-business-hour rung When the rung is assigned on Thursday at 16:00 ET with no holidays Then the deadline is Friday at 16:00 ET (1 hour Thursday + 7 hours Friday) Given an approver in Europe/Berlin with a calendar of Mon–Fri 09:00–17:00 and Monday is a regional holiday When the rung is assigned on Friday at 16:00 CET with an 8-business-hour duration Then the deadline is Tuesday at 16:00 CET (1 hour Friday + 7 hours Tuesday) Given a locale where a DST spring-forward occurs between assignment and deadline When an SLA spans the transition Then business time counted equals the configured duration in minutes with no lost or double-counted minutes, and the wall-clock deadline adjusts accordingly
Timers Pause Outside Business Hours
Given a calendar of Mon–Fri 09:00–17:00 and a 3-business-hour rung starts on Friday at 16:00 local When the business day ends at 17:00 Friday Then 1 business hour is consumed and the timer status is Paused until Monday 09:00 And the deadline is Monday at 11:00 local Given time passes during a non-working period (weekend or defined off hours) When checking the remaining business time Then the remaining business time is unchanged during the pause and resumes decrementing at the next working start
Admin Imports Regional Holiday Sets
Given an organization admin uploads a valid holiday set file for "US Federal Holidays" for the current and next year When the import completes Then the holidays appear in the organization calendar with correct dates and labels and no duplicate entries And selecting the holiday set for a project applies those holidays to that project’s deadline calculations Given a rung spans a holiday date from the applied set When computing the deadline Then the holiday is excluded from business hours and the deadline shifts to the next working period
One-Off Exceptions and Blackout Windows
Given Project Alpha defines a one-off working exception on Saturday 10:00–14:00 local and otherwise follows Mon–Fri 09:00–17:00 When a 3-business-hour rung starts Friday at 16:00 local Then 1 hour is counted on Friday and 2 hours on Saturday, producing a deadline of Saturday at 12:00 local Given a blackout exception is set for a specific weekday (e.g., Tuesday 13:00–17:00) When a rung would otherwise consume business time in that window Then that period is excluded from the calculation and the deadline shifts forward accordingly
Approver-Specific Deadline Preview
Given an admin opens the Calendar Preview tool and selects Approver Jane Doe in Europe/London, the project calendar, and a duration of 10 business hours starting Wednesday at 15:30 local When the preview is generated Then it displays the computed deadline timestamp, the list of business time segments used (with start/end times and dates), and the calendar name and version And the previewed deadline matches the runtime calculation engine result for the same inputs within 1 minute Given the admin changes inputs (approver, time zone, duration, start time, or calendar version) When regenerating the preview Then the output updates immediately to reflect the new calculation
Versioned Calendar and Audit Integration on Approvals
Given an organization calendar v1 exists and is used for active approvals When an admin edits working hours and saves the changes Then a new calendar version v2 is created with a recorded changelog and effective timestamp And new approvals use v2 while existing approval records retain a reference to v1 for their SLA computations Given an approval record is opened in PlanPulse When viewing its SLA details Then it shows the calendar context used (organization/project, time zone, holiday set, exceptions, calendar version) and the business-time breakdown used to compute the deadline
Configurable Approval Ladder
"As a project lead, I want to configure per-rung approvers, deadlines, and escalation paths so that approvals flow predictably and reflect my client’s hierarchy."
Description

Enable project leads to define approval ladders composed of ordered rungs, each with primary approver(s), AND/OR approval logic, per-rung SLA targets (e.g., 8 business hours), and escalation paths (delegate, manager, multi-hop). Provide reusable ladder templates, per-client overrides, and validation to prevent cycles or unreachable states. Each rung stores notification preferences, grace periods, and fallback behavior on reassignment. The ladder attaches to a drawing set or versioned markup within PlanPulse, ensuring approvals stay in sync with visual workspaces and one-click approvals.
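The AND/OR rung semantics described above can be sketched with a small state object; class and method names are illustrative, and SLA timers, delegates, and notifications are omitted for brevity:

```python
class Rung:
    """One ladder rung: a set of approvers plus AND/OR completion logic."""

    def __init__(self, approvers, logic="AND"):
        self.approvers = set(approvers)
        self.logic = logic
        self.approved = set()

    def approve(self, who):
        if who in self.approvers:
            self.approved.add(who)
        return self.is_complete()

    def is_complete(self):
        if self.logic == "AND":
            return self.approved == self.approvers  # all must approve
        return bool(self.approved)                  # OR: any one suffices

def advance(ladder):
    """Index of the first incomplete rung, or None when the ladder is done."""
    for i, rung in enumerate(ladder):
        if not rung.is_complete():
            return i
    return None
```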

Acceptance Criteria
Configure Ordered Rungs with AND/OR Approval Logic
Given I create a ladder with 3 ordered rungs (R1, R2, R3), When I save it, Then the order is preserved and displayed as 1→2→3 in the ladder timeline. Given R2 is configured with AND logic and approvers Alice and Bob, When only Alice approves, Then R2 remains Pending with status "1/2 approved" and does not advance. Given R2 is configured with AND logic and approvers Alice and Bob, When both approve via one-click within the drawing workspace, Then R2 completes and the ladder advances to R3 within 1 second of the second approval. Given R1 is configured with OR logic and approvers Carol and Dan, When any one approver approves, Then R1 completes immediately and the remaining approver can no longer approve. Given a rung has both primary approvers and delegates defined, When the primary approver completes approval, Then delegates are not requested and receive a single cancellation notification.
Per-Rung Business-Hour SLAs and Grace Periods
Given project business hours are Mon–Fri 09:00–17:00 in the project time zone and R1 SLA target is 8 business hours with a 30-minute grace period, When R1 is activated Friday 16:30, Then the SLA breach time is Monday 16:30 and escalation triggers at 17:00. Given R1 SLA is set to use the client calendar (Mon–Thu 10:00–18:00), When R1 activates, Then countdown uses the client calendar exclusively and excludes non-business hours. Given a rung has "pause SLA during weekends" enabled, When the rung is active across a weekend, Then elapsed business hours do not increase during Saturday and Sunday. Given I edit an active rung to change SLA from 8 to 4 business hours, When I save, Then the remaining time is recalculated immediately based on time already elapsed in business hours. Given a rung has a 15-minute grace period, When the SLA target is reached, Then notifications indicate "in grace" and no escalation occurs until the grace period fully elapses.
Multi-Hop Auto-Escalation and Timeline Updates
Given R2 defines escalation path: Delegate → Manager → Department Head, When the SLA plus grace elapses with no approval, Then the approver role reassigns to Delegate and a timeline event "Escalated to Delegate" is recorded with timestamp. Given R2 is escalated to Delegate and remains idle for 50% of the original SLA, When that threshold is reached, Then it escalates to Manager and records a timeline event, closing the Delegate request. Given a rung escalates, When any current assignee approves, Then the rung completes and all outstanding requests are withdrawn; the timeline shows "Approved by [Role]" and the ladder advances. Given an escalated rung, When the original primary approver attempts to approve after escalation, Then the system blocks the action and displays "Approval no longer requested" within the workspace. Given escalation occurs, When notifications are sent, Then recipients match the rung's notification preferences and all prior recipients receive a single closure notice.
Validation Prevents Cycles and Unreachable States
Given I add an escalation path that returns to an earlier rung approver, When I attempt to save, Then the system rejects the configuration with error "Escalation cycle detected" and highlights the offending path. Given I configure a rung with AND logic but no approvers, When I attempt to save, Then the system blocks save and shows "At least one approver required" inline. Given I configure two rungs with mutually exclusive conditions that cannot be satisfied, When I validate, Then the system flags "Unreachable rung" and prevents activation until resolved. Given I import a template with unknown users, When I assign placeholders, Then save succeeds; When placeholders are left unassigned, Then save is blocked with explicit missing-assignee errors.
Reusable Templates and Per-Client Overrides
Given I save a configured ladder as a template named "Standard Client A", When I create a new project and apply this template, Then all rungs, SLAs, logic, and escalation paths are copied. Given a ladder is applied from a template, When I override R2 SLA from 8 to 6 business hours for Client X, Then the template remains unchanged and the project shows an "Overridden" badge on R2. Given overrides exist, When I view the ladder diff, Then I can see per-rung differences (SLA, approvers, logic, notifications) compared to the template. Given a template is updated after being applied to a project, When I view the project ladder, Then I am prompted to optionally sync changes; selecting "Sync" updates only non-overridden fields.
Notification Preferences and Reassignment Fallback
Given a rung's notification preferences include in-app and email only, When the rung activates, Then only in-app alerts and emails are sent; no other channels are used. Given a rung's fallback behavior on manual reassignment is "Continue SLA", When I reassign the rung to a different approver, Then the remaining SLA time is preserved and continues counting in business hours; a 15-minute grace period is restarted. Given a rung's fallback behavior on manual reassignment is "Reset SLA", When I reassign, Then the SLA countdown restarts from the full target and the timeline records "SLA reset on reassignment". Given I reassign a rung to an approver who lacks access to the drawing set, When I confirm, Then the system blocks the reassignment and prompts me to grant access or choose another approver.
Attachment to Drawing Sets and Versioned Markups
Given a ladder is attached to Drawing Set DS-101, When I open DS-101 in the visual workspace, Then each rung's current approvers see one-click Approve actions inline with the markup panel. Given Version V1 has R3 pending, When I create Version V2 from V1, Then a new ladder instance is created for V2 at R3 with all approvals reset, and V1's ladder is locked as read-only. Given an approver attempts to approve V1 after V2 exists, When they click Approve, Then the system prompts to open V2 and blocks changes to V1. Given a ladder is attached to a specific markup item, When that markup is deleted, Then the system prompts to detach or reattach the ladder; if detached, all outstanding approval requests are canceled.
SLA Countdown Engine
"As a project lead, I want stalled approvals to automatically escalate when deadlines are reached so that I don’t have to manually chase approvers."
Description

Implement a reliable background engine that tracks per-rung SLA timers using the active business-hour calendar, pausing outside working hours and resuming automatically. The engine evaluates escalation thresholds, triggers actions at precise cutoffs, and updates approval state atomically to avoid duplicate or missed escalations. It logs all state transitions, handles retries with idempotency, scales across projects, and exposes health metrics. On each tick, it recomputes remaining time, posts events to the timeline, and publishes analytics signals without blocking user actions in PlanPulse.
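The exactly-once requirement hinges on an idempotency key that is checked and recorded atomically with the state transition, so concurrent workers or retries after a crash collapse into no-ops. An in-memory sketch (a real engine would persist the key set transactionally alongside the approval state; names are illustrative):

```python
class EscalationEngine:
    """Toy engine: at most one committed escalation per (rung, cutoff)."""

    def __init__(self):
        self._seen = set()   # committed idempotency keys
        self.events = []     # emitted side-effects (timeline, analytics)

    def escalate(self, rung_id, cutoff_iso):
        key = (rung_id, cutoff_iso)
        if key in self._seen:
            return False     # duplicate detection: second worker no-ops
        self._seen.add(key)  # in production: same transaction as the state change
        self.events.append({"type": "escalated", "rung": rung_id,
                            "cutoff": cutoff_iso})
        return True
```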

Acceptance Criteria
Business-Hour Pause and Resume for SLA Timers
- Given a project calendar set to Mon–Fri 09:00–17:00 (Europe/London) with 2025-12-25 marked as a holiday, When an approval rung starts at 2025-12-24 16:50, Then the engine counts 10 minutes on 2025-12-24, pauses exactly at 17:00:00, resumes at 2025-12-26 09:00:00, and remaining_time is reduced by exactly 10 minutes.
- Given an in-progress rung at 16:59:30 on a workday, When the clock passes 17:00:00, Then remaining_time does not decrease until 09:00:00 next business day and no timeline ETA updates are posted during the paused interval.
- Given a calendar update (new holiday added) while a timer is paused, When the engine ticks next, Then resume_at recomputes using the updated calendar without manual restart and remaining_time never becomes negative.
Atomic Escalation at Cutoff with Idempotent Deduplication
- Given a rung with a 2h SLA started at 10:00:00, When cutoff 12:00:00 is reached, Then exactly one escalation is committed atomically with the state transition within ±1s of cutoff and no duplicate escalations occur.
- Given two workers detect the same cutoff concurrently, When both attempt escalation, Then only one succeeds and the other observes a no-op due to an idempotency key; only one timeline event and one analytics signal exist.
- Given a crash after state is persisted but before events are emitted, When the worker restarts, Then missing side-effects (timeline, notifications, analytics) are replayed exactly once using the same idempotency key.
Traceability: Timeline Events and Audit Logging on State Change
- Given state transitions (start, pause, resume, escalate, complete), When each occurs, Then a timeline event is appended containing ISO-8601 timestamp, actor/system, rung_id, previous_state, new_state, and a monotonically increasing sequence number.
- Given an escalation, When it is logged, Then an immutable audit record is written with correlation_id, calendar_id, cutoff_at, reason, and server_time with clock skew ≤ 500 ms from NTP reference.
- Given a request to view a rung’s audit trail, When fetched, Then entries are strictly ordered by sequence number, contain no gaps/duplicates, and are returned within 200 ms for up to 1,000 records.
Analytics Signal Publishing with Schema and Latency Guarantees
- Given any SLA state change, When processed, Then an analytics event is published to topic planpulse.sla.events with versioned schema and partition key = rung_id within 5 seconds at the 99th percentile.
- Given transient broker backpressure, When publish fails, Then the engine retries with exponential backoff up to 5 minutes without blocking core processing and emits at most one consumer-visible event (idempotency enforced).
- Given rollout of schema v2, When both v1 and v2 consumers exist, Then events include all fields required by v1 until a deprecation flag is enabled and no consumer error rate exceeds 0.1% during rollout.
Non-Blocking User Actions Under Engine Load
- Given 10,000 active timers across 300 projects, When users load drawings, save markups, or approve, Then the 95th percentile API latency for these actions increases by ≤ 10 ms versus baseline and no request is blocked by SLA engine locks.
- Given a heavy tick cycle causing DB contention, When user requests arrive, Then priority is given to user-initiated transactions; success rate remains ≥ 99.9% and SLA engine work is deferred via backoff.
- Given an engine outage, When users perform actions, Then actions succeed independently; upon recovery the engine catches up without requiring user retries and without reprocessing already-completed approvals.
Health Metrics Exposure and Alerting Readiness
- Given the engine is running, When GET /metrics is called, Then Prometheus metrics include: active_timers, ticks_processed_total, tick_lag_seconds, escalations_triggered_total, escalation_failures_total, retry_attempts_total, idempotent_dedup_count, worker_queue_depth, last_tick_timestamp, and per_tenant_active_timers.
- Given normal operation, When observed over 15 minutes, Then tick_lag_seconds ≤ 5s for 99% of samples; if tick_lag_seconds > 30s for 5 consecutive minutes, an alert fires.
- Given a terminal escalation failure, When retries exhaust, Then escalation_failures_total increments, an error log with correlation_id is emitted, and a pager alert triggers within 60 seconds.
Horizontal Scale Across Projects with Deterministic Tick Cadence
- Given ≥ 20,000 concurrently active rungs across ≥ 500 projects, When the engine runs, Then every active timer is evaluated at least once per minute and escalations fire within 10 seconds of cutoff for 99% of cases.
- Given shard rebalancing or worker restarts, When load shifts, Then no rung remains unassigned for > 60 seconds and duplicate evaluations are deduplicated via idempotency with no duplicate user-visible effects.
- Given a 5x load spike in Tenant A, When measured, Then Tenant B’s tick cadence and escalation latency meet the same SLOs (no cross-tenant starvation).
Auto-Escalation Notifications & Reassignment
"As a delegate or manager, I want clear escalation notifications with the relevant context so that I can take ownership and unblock the approval quickly."
Description

When a rung breaches its SLA, automatically notify the designated delegate or manager via in-app alerts, email, and optional chat integrations, including full context (drawing/version, prior comments, deadline, originating approver). Reassign the approval task while preserving original ownership history and maintaining a clear audit chain. Provide configurable reminder cadence, localized templates, and rate limiting to prevent notification fatigue. Update the approval record and subscriber feeds in real time so stakeholders see who is now accountable and by when.
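Two of the behaviors above — delegate-else-manager recipient selection and suppression of repeat notifications for the same breach — can be sketched as small pure functions. Names, the dict shapes, and the 60-minute window default are illustrative assumptions:

```python
def pick_recipient(rung):
    """Escalation recipient rule: the delegate if one is active,
    otherwise the manager."""
    delegate = rung.get("delegate")
    if delegate and delegate.get("active"):
        return delegate["id"]
    return rung["manager"]["id"]

def should_notify(sent_log, key, now, window_minutes=60):
    """Rate limiting: suppress a repeat notification for the same
    breach key within the suppression window (times in minutes)."""
    last = sent_log.get(key)
    if last is not None and now - last < window_minutes:
        return False          # still inside the window: suppress
    sent_log[key] = now       # record this send for future checks
    return True
```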

Acceptance Criteria
SLA Breach Triggers Multi-Channel Notification
Given an approval rung has an assigned approver and an escalation recipient (delegate or manager) defined And the rung’s SLA is evaluated using the workspace business-hours calendar When the SLA is breached Then the system sends a notification to the escalation recipient via in-app alert and email And if a chat integration is enabled for the workspace, the system posts a chat message to the configured destination And the recipient selection follows rung rules: use delegate if active; otherwise use manager And all notifications are dispatched within 60 seconds of breach detection And no duplicate notifications for the same breach are sent per channel
Automatic Reassignment With Audit Trail Preservation
Given a rung’s SLA breach triggers escalation When escalation is executed Then the approval task is reassigned to the escalation recipient as the new accountable owner And the original approver remains recorded as the originating owner in the task history And the audit log records: timestamp, system actor, reason "SLA breach", previous owner, new owner, rung identifier, previous due-by, new due-by And the previous owner’s pending task is closed or marked superseded without loss of comments or attachments And the reassignment is visible in the approval detail UI within 5 seconds of execution
Notification Payload Contains Complete Context
Given a notification is generated due to a rung SLA breach When the notification is sent on any channel Then the notification content includes: project name, drawing title, version tag, deep link to the drawing/version, link to prior comments, originating approver name, rung identifier, breached deadline with timezone, and new accountable owner And all placeholders are populated with non-empty values And any links resolve to the correct resource with the user’s access controls enforced
Real-Time Record and Feed Updates Upon Escalation
Given a rung is escalated due to SLA breach When the reassignment completes Then the approval record updates to show the new accountable user and updated due-by And the ladder timeline displays an "Escalated" event at the breach time with the recipient And all subscribers’ activity feeds receive an entry describing the reassignment within 5 seconds And viewers with the approval open see the change without manual refresh (live update)
Configurable Reminder Cadence Per Rung
Given an admin configures a reminder cadence for escalated rungs (e.g., every N business hours) When the cadence is saved Then reminders are sent according to the defined schedule during business hours only And disabling the cadence immediately stops future reminders for that rung And per-rung cadence overrides supersede workspace-level defaults And reminders stop automatically once the approval is completed or reassigned to a new owner outside the escalation chain
Localized Notification Templates and Formatting
Given a recipient has a preferred locale set When an escalation notification is sent Then the channel-specific template for that locale is used And date/time fields are formatted in the project’s timezone using the recipient’s locale conventions And if a locale template is missing, the system falls back to the default language template And template tokens {drawing}, {version}, {deadline}, {originatingApprover}, {assignee}, {link} are populated correctly
Notification Rate Limiting and Deduplication
Given rate limiting is configured as L notifications per recipient per approval thread per 24-hour rolling window per channel When events would exceed the limit Then additional notifications are suppressed and recorded with reason "rate_limited" in logs/audit And reminders triggered within a 5-minute window are deduplicated so only one per channel is sent And in-app alerts coalesce into a single entry with an incremented count and latest timestamp And rate limiting never suppresses the first breach notification
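The rolling-window limit, 5-minute dedup, and never-suppress-first-breach rules above compose as sketched below. This is an in-memory illustration under stated assumptions (a single process, timestamps supplied by the caller); a real deployment would back this with a shared store.

```python
from collections import defaultdict, deque

class NudgeRateLimiter:
    """Sketch: at most `limit` sends per key (recipient/thread/channel) per rolling
    24h window, reminders within 5 minutes deduplicated, first breach never suppressed."""
    def __init__(self, limit: int, window_s: int = 86400, dedup_s: int = 300):
        self.limit, self.window_s, self.dedup_s = limit, window_s, dedup_s
        self.sent = defaultdict(deque)  # key -> timestamps of accepted sends

    def allow(self, key, now: float, first_breach: bool = False) -> bool:
        q = self.sent[key]
        while q and now - q[0] >= self.window_s:  # evict sends outside the window
            q.popleft()
        if first_breach:              # the first breach notification always goes out
            q.append(now)
            return True
        if q and now - q[-1] < self.dedup_s:      # coalesce near-duplicate reminders
            return False              # caller would log reason="deduplicated"
        if len(q) >= self.limit:
            return False              # caller would log reason="rate_limited"
        q.append(now)
        return True
```

Suppressed sends return `False` so the caller can record the suppression reason in the audit log, as the criteria require.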
Ladder Timeline Visualization
"As a project lead, I want a visual timeline of the approval ladder and its escalations so that I can quickly identify bottlenecks and communicate status to clients."
Description

Render a live, interactive timeline that displays each ladder rung with its SLA target, elapsed/remaining time, and any escalation events. Use color coding and icons to indicate upcoming breaches, active escalations, and completed steps. Allow users to drill into an event to view timestamps, responsible parties, and comments. The timeline updates in real time as the engine progresses and is accessible from the PlanPulse workspace attached to the drawing or markup, providing a shared, visual source of truth for clients and architects.

Acceptance Criteria
Real-time timeline refresh during active approval
Given a PlanPulse workspace is open to a drawing/markup with an active approval ladder and the Ladder Timeline is visible When the SLA engine posts a rung status change, new comment, or escalation event Then the timeline reflects the change within 2 seconds without a full page reload And the updated rung/event is visually highlighted for 3 seconds And no duplicate or out-of-order entries appear in the visible sequence When network connectivity is lost and restored within 60 seconds Then missed timeline updates are backfilled within 5 seconds using server timestamps And a "Last synced <time>" indicator shows the most recent server time
Business-hours SLA countdown on a ladder rung
Given a rung with a 4 business-hour SLA and a project calendar of Mon–Fri 09:00–18:00 local time (excluding configured holidays) When the rung starts at 16:00 on a business day Then remaining time shows 4:00 at start and decrements only during business hours And at 18:00 the remaining time freezes at 2:00 and resumes at 09:00 the next business day And the breach time is calculated as 11:00 the next business day And all time values are accurate to the minute and match server-side calculations
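The business-hours deadline in this scenario can be computed by walking forward through working time until the SLA budget is spent. A minimal sketch, assuming a fixed Mon–Fri 09:00–18:00 calendar and omitting holidays for brevity:

```python
from datetime import datetime, timedelta

BUSINESS_START, BUSINESS_END = 9, 18  # Mon-Fri 09:00-18:00, per the scenario

def business_deadline(start: datetime, sla_hours: float) -> datetime:
    """Walk forward through business hours until sla_hours of budget is consumed."""
    remaining = timedelta(hours=sla_hours)
    t = start
    while remaining > timedelta(0):
        if t.weekday() >= 5 or t.hour >= BUSINESS_END:
            # past close or on a weekend: jump to 09:00 and re-check the day
            t = (t + timedelta(days=1)).replace(hour=BUSINESS_START, minute=0,
                                                second=0, microsecond=0)
            continue
        if t.hour < BUSINESS_START:
            t = t.replace(hour=BUSINESS_START, minute=0, second=0, microsecond=0)
            continue
        close = t.replace(hour=BUSINESS_END, minute=0, second=0, microsecond=0)
        step = min(remaining, close - t)   # consume budget until close of day
        t += step
        remaining -= step
    return t
```

For a rung started at 16:00 with a 4-hour SLA, this yields 11:00 on the next business day, matching the breach time in the scenario.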
Visual alerts for upcoming breach and active escalation
Rule: A rung with remaining >25% of SLA displays status "On Track" with green color and check icon
Rule: A rung with remaining between 10% and 25% displays "At Risk" with amber color and warning icon
Rule: A rung with remaining <10% displays "Critical" with red color and warning icon
Rule: After breach, the rung shows red "Breached" label and breach icon until resolved
Rule: An actively escalated rung displays an escalation arrow icon and "Escalated to <assignee>" text
And all iconography includes accessible labels describing the status for screen readers
Escalation event logging and display
Given an approver fails to respond before the rung SLA target When auto-escalation triggers to the delegate or manager Then an "Escalated" event is appended to the timeline with: previous assignee, new assignee, trigger ("SLA breach" or "Manual"), and server timestamp And the rung's current owner updates to the new assignee in the timeline within 2 seconds And the escalation event is clickable to open details And the audit trail for the ladder includes this event with an immutable ID
Drill-down details for a rung or event
Given the Ladder Timeline is visible When a user clicks a rung or any timeline event Then a details panel opens within 500 ms And it shows: start timestamp, SLA target datetime, elapsed time, remaining time (if active), breach time (if any), responsible party chain, and all comments with author and timestamp And it provides a deep link that, when copied, reopens the timeline scrolled to the same item And the panel closes via ESC key, backdrop click, or Close button
Workspace access and role-based visibility
Given a drawing or markup in PlanPulse has an approval ladder When a client user views the item's workspace Then an "Approval Timeline" entry point is visible and opens a read-only Ladder Timeline And internal users (architects/project leads) can filter rungs/events and access drill-down details And users without permission see a 403-style message instead of the timeline And when no ladder exists, the UI displays "No active approval ladder" instead of an empty timeline
Performance, ordering, and time display under typical load
Given a ladder of up to 10 rungs and 50 timeline events on a baseline device and network When the timeline is opened Then initial render completes within 1.5 seconds and interactive actions (expand, hover tooltips) respond within 200 ms And events are ordered strictly by server timestamp with stable ordering for equal timestamps using event IDs And all times display in the viewer's local time with a hover tooltip showing UTC
SLA Reporting & Audit Trail
"As a firm owner, I want SLA reports and a complete audit trail so that I can demonstrate compliance, quantify cycle-time reductions, and improve our approval process."
Description

Provide dashboards and exports that summarize SLA performance across projects, including average time per rung, breach rates, escalation frequency, and time saved. Offer filters by client, approver, project, and date range, with CSV export and an API endpoint for BI tools. Maintain an immutable, queryable event log of timer starts, pauses, escalations, reassignments, and approvals, linked to drawing versions and comments for full traceability. Reports surface trends that help firms optimize approval ladders and reduce idle time.

Acceptance Criteria
SLA Dashboard Metrics — Accuracy and Completeness
- Given a project set with SLA rungs, calendars, and events, When the SLA dashboard is loaded with no filters, Then it displays metrics: average time per rung (business-hour adjusted), breach rate per rung and overall, escalation frequency per approval and per rung, and time saved per approval and total.
- Given a known test dataset, When the dashboard metrics are compared to backend reference calculations, Then each metric matches within 0.1% or 1 second, whichever is larger.
- Given rungs with pauses, When average time per rung is computed, Then paused durations are excluded from business-hour time.
- Given projects with different business-hour calendars and time zones, When metrics are computed, Then each rung uses its project calendar and time zone for business-hour calculations.
- Given trend view for the last 12 weeks, When the dashboard loads, Then trend lines for breach rate, average rung time, and escalation frequency are shown per week.
- Given an aggregation scope up to 50,000 rungs, When the dashboard loads, Then metrics render within 3 seconds p95.
Filters — Client, Approver, Project, Date Range
- Given multiple clients, approvers, and projects, When the user applies any combination of filters, Then all metrics, charts, and tables reflect the intersection of selected filters.
- Given a date range filter, When the user selects start and end dates, Then included events and metrics are based on event occurred_at within the range in the user's time zone, inclusive of boundaries.
- Given filter selections, When the page is reloaded or shared via URL, Then the selections persist via URL parameters.
- Given multi-select fields for approvers and projects, When values are chosen, Then the result includes any records matching any selected values.
- Given no records match the filters, When the dashboard loads, Then an empty state appears with zeroed metrics and an option to clear filters.
CSV Export — Scoped, Structured, and Safe
- Given current filter selections, When the user exports CSV, Then the file contains only rows within the filtered scope.
- Given an export type selection, When "Rung Detail" is chosen, Then each row represents a rung instance with columns: project_id, project_name, client_id, client_name, approver_id, approver_name, approval_id, rung_id, rung_name, drawing_version_id, start_at, deadline_at, first_action_at, completed_at, breached (Y/N), escalations_count, business_hours_duration_minutes, time_saved_minutes.
- Given an export type selection, When "Summary" is chosen, Then each row represents an aggregate (by project or client) with columns: scope_id, scope_name, approvals_count, rungs_count, avg_business_hours_per_rung, breach_rate_percent, escalation_frequency_per_approval, total_time_saved_minutes.
- Given CSV generation, When any cell value begins with =, +, -, or @, Then it is prefixed with a single quote to prevent formula execution in spreadsheet applications.
- Given large datasets up to 1,000,000 rows, When export is requested, Then the job is queued and completes within 10 minutes, producing a UTF-8 CSV (comma delimiter, LF newlines) with a downloadable link and email notification on completion.
- Given date/time fields, When exported, Then they use ISO 8601 with timezone offset.
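The formula-injection rule above (prefix cells starting with =, +, -, or @) is a standard defense against spreadsheet formula execution. A minimal sketch using the standard library; the column set and row shape here are illustrative:

```python
import csv
import io

def sanitize_cell(value) -> str:
    """Prefix cells that spreadsheet apps could interpret as formulas,
    per the export rule: leading =, +, -, or @ gets a single quote."""
    s = str(value)
    return "'" + s if s[:1] in ("=", "+", "-", "@") else s

def export_rows(rows, header) -> str:
    """Write sanitized rows as comma-delimited CSV with LF newlines."""
    buf = io.StringIO(newline="")
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(header)
    for row in rows:
        writer.writerow(sanitize_cell(v) for v in row)
    return buf.getvalue()
```

Note the prefix also fires on legitimate negative numbers (`-5` becomes `'-5`); exporters that want to keep numerics intact typically whitelist numeric types before sanitizing.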
BI API Endpoint — Auth, Filtering, Pagination
- Given a valid OAuth2 access token with scope sla.read, When calling GET /api/v1/sla/metrics and GET /api/v1/sla/events, Then the API returns 200 with JSON per the published OpenAPI schema; otherwise returns 401 (unauthenticated) or 403 (unauthorized).
- Given query parameters client_ids, approver_ids, project_ids, date_from, date_to, rung_ids, escalated_only, and breached_only, When provided, Then results are filtered accordingly and match the UI for the same filters.
- Given large result sets, When page_size is set up to 1000, Then results are paginated via next_cursor without loss or duplication across pages.
- Given high request rates, When rate limits are exceeded (e.g., 600 requests/min per token), Then 429 is returned with a Retry-After header.
- Given a metrics query over 100k rung records, When executed, Then the API responds within 2 seconds p95.
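A BI client consumes the cursor pagination above by looping until `next_cursor` is null. A minimal sketch; `get_page` is a hypothetical callable standing in for an HTTP GET against `/api/v1/sla/events`, returning `(items, next_cursor)`:

```python
def fetch_all(get_page, page_size: int = 1000) -> list:
    """Drain a cursor-paginated endpoint without loss or duplication.
    get_page(cursor, page_size) -> (items, next_cursor); next_cursor is
    None when the final page has been returned."""
    items, cursor = [], None
    while True:
        batch, cursor = get_page(cursor, page_size)
        items.extend(batch)
        if cursor is None:
            return items
```

A production client would also honor 429 responses by sleeping for the `Retry-After` value before re-requesting the same cursor.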
Immutable Audit Trail — Event Integrity and Queryability
- Given any audit event has been written, When an update or delete is attempted via API or UI, Then the system rejects the operation and returns 405 or 403; only new compensating events may be appended.
- Given an audit event, When retrieved, Then it includes fields: event_id (UUIDv4), event_type (timer_started|timer_paused|timer_resumed|escalated|reassigned|approved), occurred_at (ISO 8601 UTC), actor_type (system|user), actor_id, project_id, client_id, approver_id (nullable), approval_id, rung_id, drawing_version_id, comment_id (nullable), metadata (object).
- Given an approval is viewed, When the Audit tab is opened, Then all related events are shown in chronological order with links to the referenced drawing version and comment.
- Given the audit log API is queried with filters (event_type, project_ids, approver_ids, date_from, date_to), When executed, Then it returns matching events with cursor pagination.
- Given a daily audit export, When generated, Then it includes a SHA-256 checksum file alongside the data file for integrity verification.
Time Saved Calculation — Baseline and Display
- Given baseline method "No Escalation" is selected, When time saved is calculated, Then for each approval it equals (simulated duration with escalations disabled using the actual event sequence and business hours) minus (actual duration with escalations), floored at 0 minutes.
- Given rungs with no escalation, When time saved is calculated, Then their contribution is 0.
- Given paused periods, When the simulation runs, Then paused durations are excluded from both baseline and actual durations.
- Given a user hovers over "Time Saved" in the dashboard, When the tooltip appears, Then it explains the baseline method and shows baseline vs actual minutes for the current scope.
- Given exports and API, When time_saved_minutes is requested, Then the value matches the dashboard within 1 minute for the same scope.
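The arithmetic behind these rules is small and worth pinning down: exclude paused spans from both durations, then subtract and floor at zero. A minimal sketch, with times expressed as minute offsets and pauses as `(start, end)` pairs assumed non-overlapping:

```python
def duration_excluding_pauses(start: float, end: float, pauses) -> float:
    """Minutes from start to end, excluding paused (s, e) spans.
    Each pause is clipped to [start, end] before being subtracted."""
    paused = sum(max(0.0, min(e, end) - max(s, start)) for s, e in pauses)
    return (end - start) - paused

def time_saved_minutes(baseline_minutes: float, actual_minutes: float) -> float:
    """Per the baseline rule: simulated no-escalation duration minus actual
    duration, floored at 0 so a slower-than-baseline approval contributes 0."""
    return max(0.0, baseline_minutes - actual_minutes)
```

Both the baseline simulation and the actual duration should pass through `duration_excluding_pauses` before the subtraction, so pauses never inflate the saving.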
Access Control Parity — UI, CSV, and API
- Given a user with Viewer access to specific projects, When viewing dashboards, exporting CSV, or calling the API, Then only data from those projects and their clients is accessible; other data is excluded.
- Given a client user, When accessing SLA reports, Then only data for their own client organization is visible.
- Given a user without sla.report permission, When attempting to access dashboards, exports, or API, Then access is denied with 403 and no data is returned.
- Given a superadmin, When accessing reports, Then all tenant data is available subject to tenant boundaries.
- Given row-level security, When permissions change, Then subsequent UI views, exports, and API calls immediately reflect the new access scopes.

Smart Nudges

Delivers context-rich reminders across email, Slack, and in-app at the moments approvers are most likely to act. Includes a mini visual diff and pending items so approvers can one-click approve or comment without hunting for context.

Requirements

Send-Time Optimization
"As a project lead, I want nudges sent when approvers are most likely to respond so that approvals happen faster with fewer reminders."
Description

An engine that predicts and schedules nudges at the moments each approver is most likely to act, using historical open/click/approval behavior, local time zone, quiet hours, project urgency, and deadlines. Provides heuristic rules initially with a path to ML-based optimization, supports per-user overrides, frequency caps, no-disturb windows, and urgency escalation. Integrates with PlanPulse’s activity stream to trigger candidate send windows on new revisions or pending approvals, and ensures privacy by using only necessary, aggregated signals.

Acceptance Criteria
Local Time Zone and Quiet Hours Scheduling
Given an approver with time zone set and quiet hours configured When the engine predicts a best send time for a nudge Then the scheduled send time is within the approver’s local allowed window (outside quiet hours) And daylight saving time transitions are correctly applied to the scheduled send time And if the predicted time falls in quiet hours, it is shifted to the next allowed window within 24 hours And a scheduling log entry records the time zone, quiet-hours rule applied, and final scheduled time
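The quiet-hours shift above can be sketched as follows. This is a simplified illustration: quiet hours are whole local clock hours (possibly wrapping midnight, e.g. 22–7), and DST correctness is assumed to come from passing a zone-aware datetime.

```python
from datetime import datetime, timedelta

def shift_out_of_quiet_hours(t: datetime, quiet_start: int, quiet_end: int) -> datetime:
    """If the predicted send time t falls inside quiet hours, shift it to the
    next allowed moment (the end of the quiet window, at most ~24h later)."""
    h = t.hour
    if quiet_start < quiet_end:                       # e.g. 13-15, same day
        in_quiet = quiet_start <= h < quiet_end
    else:                                             # e.g. 22-7, wraps midnight
        in_quiet = h >= quiet_start or h < quiet_end
    if not in_quiet:
        return t
    shifted = t.replace(hour=quiet_end, minute=0, second=0, microsecond=0)
    if shifted <= t:     # window end already passed today -> tomorrow morning
        shifted += timedelta(days=1)
    return shifted
```

The engine would apply this after prediction, then log the time zone, the quiet-hours rule applied, and the final scheduled time, per the criteria.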
Per-Approver Frequency Caps
Given daily and weekly frequency caps of N per day and W per week for the approver When multiple nudge triggers occur within a day and week Then no more than N nudges are sent that day and no more than W that week And caps reset at the approver’s local midnight And additional nudges beyond the cap are suppressed with reason code "cap_exceeded" And suppression counts appear in admin reporting for the relevant day/week
Per-User Send-Time and Channel Overrides
Given an approver override for preferred send time, preferred primary channel, and a no-disturb window When scheduling a nudge Then the nudge is scheduled for the next occurrence of the preferred time outside the no-disturb window And the preferred channel is used unless unavailable, in which case the system falls back to the next available channel And predictive scheduling does not violate override-defined constraints And the UI displays the next scheduled time and channel reflecting the override
Deadline-Driven Urgency Escalation
Given a pending approval with a deadline T and escalation thresholds at T-48h and T-4h When the item remains unapproved at each threshold Then send frequency follows the rule: ≤1/24h before T-48h, ≤1/8h between T-48h and T-4h, and ≤1/2h after T-4h until T or cap reached And quiet hours are bypassed only in the last 4 hours with an explicit "urgent" flag included in the message metadata And do-not-disturb windows are honored unless the approver has opted into "allow urgent" And the scheduling log records the escalation reason and next planned attempt time
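The frequency tiers above map directly to a minimum spacing between nudges based on time remaining to the deadline. A minimal sketch of that mapping:

```python
from datetime import timedelta

def max_nudge_interval(time_to_deadline: timedelta) -> timedelta:
    """Minimum spacing between nudges per the escalation thresholds:
    <=1 per 24h before T-48h, <=1 per 8h between T-48h and T-4h,
    <=1 per 2h inside the final 4 hours."""
    if time_to_deadline > timedelta(hours=48):
        return timedelta(hours=24)
    if time_to_deadline > timedelta(hours=4):
        return timedelta(hours=8)
    return timedelta(hours=2)
```

The scheduler would combine this interval with the quiet-hours and cap rules, bypassing quiet hours only inside the final 4 hours and only with the explicit "urgent" flag.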
Activity-Triggered Candidate Windows
Given a new revision is uploaded or an approval enters "awaiting approver" state in the activity stream When the event is recorded Then candidate send windows are computed within 60 seconds And if the approver is active in-app, the nudge is deferred by 15 minutes to avoid interruption And if the approver engaged with a previous nudge within the last 2 hours, the nudge is suppressed with reason "recent_engagement" And the scheduling decision (send, defer, or suppress) is written to an audit log with timestamp and reason
Privacy-Preserving Signal Use
Given the engine uses only necessary, aggregated signals (e.g., open rate buckets, click counts, last-engaged timestamps) and time zone When generating predictions and schedules Then no raw message content or full per-event clickstream is stored or processed And per-approver event data older than 90 days is aggregated or deleted per policy And analytics exports do not expose per-event PII And a data-usage log for each nudge lists accessed fields and retention timers
Model Toggle and Safe Fallback
Given a feature flag "ml_send_time" and a heuristic baseline When the ML model is enabled for 10% of approvers via randomized assignment Then ML generates predictions for the treatment group and heuristics for the control group And on model error or p95 prediction latency > 200 ms, the system falls back to heuristics within the same request And experiment metrics capture open-rate uplift, approval-latency reduction, and opt-out rate with predeclared success thresholds And disabling the flag immediately reverts all users to heuristics with no missed schedules
Channel Orchestration & Deduplication
"As an approver, I want to receive nudges in the channel I use most so that I don’t miss requests or get spammed across multiple places."
Description

A cross-channel delivery layer that selects the best channel (email, Slack, in-app) per recipient based on preferences, availability, and message type; deduplicates messages across channels; and falls back gracefully if a channel fails. Includes provider integrations (e.g., SES/SendGrid, Slack app with OAuth and chat scopes), rate limiting, retries with exponential backoff, idempotency keys, per-tenant quotas, and templating for consistent content. Ensures identity mapping between PlanPulse users and Slack/email identities and logs delivery outcomes for analytics and support.
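The preference-and-availability selection described above reduces to walking the recipient's ordered preferences and taking the first deliverable channel. A minimal sketch; the channel names and argument shapes are illustrative assumptions:

```python
def select_channel(preferences, availability, unhealthy=()):
    """Pick the first preferred channel that is currently deliverable.
    preferences: ordered list, e.g. ["slack", "email", "in_app"]
    availability: channel -> bool (presence, valid identity mapping, opt-in)
    unhealthy: channels disabled tenant-wide (e.g. Slack invalid_auth)."""
    for channel in preferences:
        if channel not in unhealthy and availability.get(channel, False):
            return channel
    return None  # caller marks the delivery unsendable and logs a diagnostic
```

Returning `None` rather than raising lets the orchestrator record a `status=unsendable` delivery with the missing identity fields, matching the identity-mapping criteria below.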

Acceptance Criteria
Preference- and Availability-Aware Channel Selection
Given a recipient with channel preferences prioritized as Slack > Email > In-app and Slack presence is active When a Smart Nudge of type "Approval Request" is routed Then the message is delivered via Slack DM And no email or in-app message is sent And the routing decision is logged with recipient_id, message_type, selected_channel, and decision_reason
Given the same recipient is in Slack DND or inactive and an email fallback is enabled in preferences When the same nudge is routed Then the message is delivered via Email within 60 seconds And no Slack or in-app message is sent
Given a recipient preference of Email-only for "FYI" message type When an FYI nudge is routed Then Email is selected regardless of Slack availability And the suppression of other channels is logged
Cross-Channel Deduplication Window
Given a nudge with a dedup_key shared across Slack, Email, and In-app When orchestration executes within a 15-minute deduplication window Then at most one message is delivered across all channels for that dedup_key per recipient And all non-selected channel deliveries are suppressed and logged as deduplicated
Given two triggers for the same dedup_key arrive within 2 seconds (race condition) When orchestration handles both Then only one delivery record is created per recipient And the second trigger returns a 200 with the same delivery_id and a deduplication flag
Given a recipient interacts with the delivered nudge (Approve/Comment) When any suppressed deliveries for the same dedup_key are scheduled Then they are canceled within 10 seconds of interaction And cancellations are logged with correlation_id and cancel_reason=recipient_interaction
Graceful Fallback with Retries and Single-Delivery Guarantee
Given a Slack delivery attempt returns transient errors (HTTP 5xx or rate limit 429) When retries are performed with exponential backoff (approximately 2s, 4s, 8s with jitter) Then no more than 3 retries are attempted within 2 minutes And retry attempts are logged with attempt_count and last_error
Given the final Slack retry still fails When fallback channels are available per recipient preferences Then Email is selected and delivered within 120 seconds of the first failure And only one successful delivery occurs across channels for the dedup_key
Given a permanent error (e.g., invalid_auth) is returned by Slack When orchestration evaluates fallback Then Slack is marked unhealthy for that tenant until re-authorization And Email is used if permitted And a support-visible alert is created with tenant_id and error_code
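The approximate 2s/4s/8s backoff schedule with jitter can be sketched as below; parameter names and the uniform-jitter choice are assumptions, not the exact production scheduler.

```python
import random

def backoff_delays(base: float = 2.0, retries: int = 3, jitter: float = 0.5) -> list:
    """Exponential backoff schedule with additive uniform jitter.
    Defaults yield roughly 2s, 4s, 8s plus up to 50% jitter each, capped at
    3 retries, so the worst-case total stays well inside the 2-minute budget."""
    delays = []
    for attempt in range(retries):
        delay = base * (2 ** attempt)                     # 2, 4, 8
        delays.append(delay + random.uniform(0, jitter * delay))
    return delays
```

Jitter spreads retries from many workers so they do not hammer the provider in lockstep after a shared outage.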
Provider Integrations and Identity Mapping
Given a tenant installs the PlanPulse Slack app via OAuth with scopes chat:write, users:read, users:read.email When an approver in that tenant is mapped to a Slack user ID Then chat.postMessage posts a DM to the correct user And the Slack message ts and channel are stored with the delivery record
Given the tenant configures SES or SendGrid with valid credentials When a test email is sent Then the provider returns a 2xx/accepted response and the Message-ID (or provider_id) is stored And SPF/DKIM status is recorded if available
Given no Slack mapping exists for a recipient but a verified email is present When routing a nudge Then Slack is skipped for that recipient with reason=identity_unmapped And Email is used if allowed by preferences
Given neither Slack mapping nor email is available When routing a nudge Then the delivery is suppressed with status=unsendable And a diagnostic event is logged for support with recipient_id and missing_identity_fields
Rate Limiting and Per-Tenant Quotas Enforcement
Given a tenant quota of 100 nudges per rolling hour When 120 nudge requests are received within 60 minutes Then only 100 are accepted for delivery And 20 are queued or rejected with HTTP 429 and retry_after indicated And all decisions are recorded with tenant_id and quota_metrics
Given provider-specific rate limits (e.g., Slack) are approached When orchestration batches messages Then the system throttles per provider guidance to avoid provider 429s And preserves message order per recipient
Given a queued nudge exceeds a max wait time of 15 minutes due to throttling When the wait time elapses Then the nudge is canceled with status=expired And the expiration is logged with reason=rate_limited_expiry
Idempotency and Safe Concurrency
Given multiple POST /nudges requests carry the same idempotency_key within 24 hours for the same recipient and dedup_key When requests are processed concurrently by multiple workers Then only one delivery is created And all requests return 200 with the same delivery_id and idempotent=true
Given a network timeout occurs after the provider accepted the message When the client retries with the same idempotency_key Then no duplicate provider calls are made And the original provider response metadata is returned
Given the idempotency store TTL is 24 hours When the same idempotency_key is reused after TTL expiry Then a new delivery may be created And the event is logged with reason=idempotency_ttl_expired
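The idempotency behavior above can be illustrated with a small in-memory store: the first request for a key performs the provider call; replays within the TTL return the original result without a second call. A single-process sketch only; a real store would be shared (e.g. a database with a unique key constraint) to make the concurrent-workers guarantee hold.

```python
import time

class IdempotencyStore:
    """Sketch of a 24h idempotency window: first request creates the delivery,
    replays within the TTL return the stored result flagged idempotent."""
    def __init__(self, ttl_s: float = 86400):
        self.ttl_s = ttl_s
        self._entries = {}  # idempotency_key -> (stored_at, delivery_id)

    def execute(self, key: str, create_delivery, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(key)
        if entry and now - entry[0] < self.ttl_s:
            return entry[1], True           # replay: same delivery_id, idempotent=true
        delivery_id = create_delivery()     # the single provider call
        self._entries[key] = (now, delivery_id)
        return delivery_id, False
```

After TTL expiry the key is treated as new, matching the `idempotency_ttl_expired` case in the criteria.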
Consistent Templating with Visual Diff and One-Click CTA
Given a nudge payload with variables (project_name, drawing_version, diff_image_url, pending_items, approve_url, comment_url) When rendering email, Slack, and in-app templates Then title and summary text match across channels And a mini visual diff or preview image is included when diff_image_url is present And Approve and Comment CTAs are present with signed, recipient-specific URLs
Given a required template variable is missing or invalid When rendering occurs Then the render fails fast before any send attempt And the delivery is suppressed with error=template_validation_failed And the error includes the missing_fields list
Given locale and branding settings per tenant When templates render Then localized copy and tenant branding (logo, colors) are applied consistently across channels And a render_time metric is captured and stays under 200 ms at P95
Mini Visual Diff Preview
"As an approver, I want to see a quick visual of what changed so that I can decide to approve or comment without opening the full drawing."
Description

Generation and embedding of a lightweight visual diff that highlights changes between the latest drawing revision and the approver’s last-reviewed version. Renders as a small image/HTML snippet with key annotations and a count of pending items, optimized to load under a size budget for email and Slack. Clicking opens the full diff in PlanPulse. Uses secure, expiring links (pre-signed URLs), CDN caching, and includes alt text for accessibility. Automatically regenerates on new uploads or markup changes and respects project permissions.

Acceptance Criteria
Email Nudge Mini Diff: Inline Preview, Pending Count, and Load Budget
Given an approver last reviewed revision Rn-1 for drawing D in project P And a newer revision Rn exists with changes since Rn-1 When a Smart Nudge email is generated for that approver Then the email includes an inline mini visual diff preview highlighting changed regions and an explicit pending items count And the preview payload (image plus wrapper HTML for the preview) is <= 150 KB And the preview becomes visible within 1.5 seconds at p95 on a 10 Mbps connection And clicking the preview opens the full diff in PlanPulse for D comparing Rn to the approver’s last-reviewed revision And the pending items count shown equals the backend pending items count at send time
Slack Nudge Mini Diff: Block Kit Rendering and One-Click Deep Link
Given an approver is targeted by a Smart Nudge delivered to Slack And the approver has last reviewed revision Rn-1 and revision Rn exists When the Slack message is posted Then the message contains a mini visual diff image with descriptive alt_text and a visible pending items count And the mini diff asset size is <= 150 KB And a primary action in the message opens the full diff in PlanPulse to Rn vs the approver’s last-reviewed revision And the message renders correctly on Slack desktop and mobile clients And if the image fails to load, the alt_text is shown and the primary action remains available
In-App Nudge Mini Diff: Render, Freshness, and Interaction
Given an approver is signed into PlanPulse and has a pending nudge for drawing D When the nudge is displayed in-app Then a mini visual diff preview is shown with highlighted changes and a pending items count And the preview loads from the CDN and renders within 500 ms at p95 after the nudge component mounts And activating the preview via mouse or keyboard opens the full diff view for D comparing the latest revision to the approver’s last-reviewed revision And if a newer revision or markup change occurs after page load, the in-app preview and count refresh within 30 seconds to reflect the latest
Secure Expiring Links: Pre-signed URL Validity and Permission Enforcement
Given a pre-signed URL is generated for user U to access the mini visual diff asset for drawing D When U accesses the URL before expiry and has project permission to view D Then the asset is served over HTTPS and displays successfully And the URL expires at the configured TTL (default 72 hours) And after expiry the URL returns HTTP 403 with no image content And if U’s project permission is removed after the nudge is sent, the URL returns HTTP 403 within 5 minutes of permission change And the token scope is limited to the specific asset and cannot be used to list or access other resources
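One common way to implement expiring, recipient-scoped asset links like these is an HMAC over the asset path, user, and expiry. A minimal sketch under stated assumptions: the signing key here is a hard-coded placeholder (a real deployment would fetch it from a KMS), and permission revocation would be enforced by an additional server-side check at serve time.

```python
import hashlib
import hmac
import time

SECRET = b"example-signing-key"  # hypothetical; real keys come from a secrets manager

def sign_url(asset_path: str, user_id: str, ttl_s: int = 72 * 3600, now=None) -> str:
    """Pre-sign an asset URL; expiry and user are bound into the HMAC so the
    token only works for this asset and this recipient."""
    now = int(time.time()) if now is None else now
    expires = now + ttl_s
    msg = f"{asset_path}|{user_id}|{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{asset_path}?uid={user_id}&exp={expires}&sig={sig}"

def verify(asset_path: str, user_id: str, expires: int, sig: str, now=None) -> bool:
    """False (serve HTTP 403) once expired, or if the signature was minted
    for a different asset or user. Constant-time comparison via compare_digest."""
    now = int(time.time()) if now is None else now
    if now >= expires:
        return False
    msg = f"{asset_path}|{user_id}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because the signature covers the exact path, a valid token cannot be replayed to list or fetch other assets, matching the scope restriction in the criteria.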
Automatic Regeneration on New Uploads or Markup Changes
Given a mini visual diff exists for drawing D comparing the latest revision to an approver’s last-reviewed revision When a new revision is uploaded for D or markups on the latest revision are changed Then the mini visual diff and pending items count are regenerated within 60 seconds And upcoming nudges use the regenerated preview and updated counts And already-sent nudges reference a cache-busted asset URL so that loading the preview or clicking through presents the updated diff within 60 seconds And the previous asset is invalidated and no longer served after regeneration completes
Accessibility: Alt Text and Assistive Technology Support
Given a mini visual diff preview is embedded in email, Slack, or in-app When rendered in each channel Then the preview includes meaningful alternative text that states the drawing name, compared revisions, number of change highlights, and pending items count And email images include an HTML alt attribute; Slack images include alt_text; in-app preview has an aria-label on the clickable container And the alt text length is between 50 and 160 characters And the in-app preview is keyboard focusable and activatable to open the full diff
CDN Delivery, Performance, and Cache Invalidation
Given a mini visual diff asset is generated When it is requested by recipients in core regions (NA, EU) Then the asset is served via CDN with p95 TTFB <= 200 ms And the cache hit ratio for the asset is >= 90% over a rolling 24-hour window And any regeneration event issues a global cache invalidation or new content-hashed URL so stale content is not served beyond 60 seconds And asset URLs include a content hash to ensure reliable cache-busting
One-Click Approve or Comment
"As an approver, I want to approve or leave a comment directly from the nudge so that I can act quickly without hunting for context."
Description

Actionable buttons in email, Slack, and in-app notifications that allow immediate approval or comment capture without navigating through the full workspace. Implements secure, single-use tokens tied to user identity and project permissions, optional step-up auth for sensitive actions, and graceful conflict handling if a newer revision exists. Captures context (revision ID, diff hash, pending items) with the action, updates the PlanPulse activity feed in real time, and provides deep links to the workspace for follow-up. Includes optimistic UI and clear confirmations.

Acceptance Criteria
Email: One-Click Approve — Happy Path
Given an approver receives a PlanPulse email nudge for Revision R with a mini visual diff and pending items And the email contains a valid, single-use token bound to the approver and project permissions When the approver clicks Approve from the email Then the system records an approval on Revision R And captures context {revisionId=R, diffHash present, pendingItemIds present, source=email} And invalidates the token immediately upon success And updates the project activity feed in real time (visible within 2 seconds) And displays a confirmation page with success state and a deep link to R in the workspace
Slack: One-Click Comment — Inline Capture
Given an approver receives a Slack nudge with mini visual diff and pending items for Revision R And the message includes an authenticated action to Add Comment via a single-use token When the approver submits a comment without leaving Slack Then the comment is saved on Revision R with the approver’s identity And captures context {revisionId=R, diffHash present, pendingItemIds present, source=slack} And the activity feed shows the new comment within 2 seconds And Slack replies with an ephemeral confirmation containing a deep link to R
In-App Nudge Banner: Approve with Optimistic UI
Given an approver views an in-app nudge for Revision R When the approver clicks Approve Then the UI immediately shows an optimistic Approved state with spinner And on backend success, the state finalizes and activity feed updates within 2 seconds And on backend failure, the UI reverts to the prior state and displays a clear error without duplicate actions And the action payload includes {revisionId, diffHash, pendingItemIds, source=in-app}
Sensitive Action Requires Step-Up Authentication
Given project policy marks approvals on Revision R as sensitive And the approver has not recently satisfied step-up authentication When the approver clicks Approve from any channel (email, Slack, in-app) Then the system prompts for step-up authentication (e.g., OTP or SSO recheck) And only upon successful verification does the approval execute And on failure, timeout, or cancel, no approval is recorded and a non-success confirmation is shown And all attempts are logged with outcome and reason
Newer Revision Exists — Graceful Conflict Handling
Given a one-click Approve token references Revision R And a newer Revision R+1 exists at click time When the approver uses the token Then the system prevents approval of R And presents a message indicating a newer revision exists with a deep link to R+1 And records a non-destructive attempted action event on R And expires the token to prevent reuse
Token Security, Expiry, and Permission Enforcement
Given a one-click action token is issued to User U for Revision R with permission P When the token is used by any user not U, or after expiry (e.g., >72 hours), or after permission P is revoked/changed to insufficient Then the action is rejected with a safe error and no side effects on R And the system logs the reason (mismatch, expired, insufficient permissions) And the token cannot be replayed after any attempt (success or failure)
Context Capture and Deep Link Integrity
Given any one-click approve or comment is executed on Revision R When the action is processed Then the stored payload includes revisionId=R, diffHash, pendingItemIds (count and IDs), source channel, timestamp, and actor ID And QA can retrieve this payload via API/logs for verification And the confirmation deep link opens the exact revision R with the diff view focused by default
Nudge Preferences & Quiet Hours
"As an approver, I want to control how and when I get nudges so that reminders fit my schedule and preferences."
Description

A preference center where users and admins configure when and how nudges are delivered: channel priorities, time zones, quiet hours, daily/weekly digests vs. immediate nudges, frequency caps, and per-project overrides. Supports snooze/pause, global unsubscribe for email, compliance with GDPR/CCPA requests, and import of presence signals (e.g., Slack DND) where available. Provides sensible defaults by role, previews of next scheduled nudge, and audit of preference changes.

Acceptance Criteria
Role-Based Sensible Defaults
Given a newly invited user assigned role R When they open the Nudge Preferences for the first time Then all preference fields equal the system-defined defaults for role R (channel priorities, cadence, quiet hours, time zone, frequency cap) And no nudges are sent as a side effect of loading defaults Given a user with role R and no prior edits to a field When they click “Reset to role default” for that field Then the field value reverts to the role R default and is saved And the UI labels the field as “Default (Role)” Given a user with role R When they modify any defaulted field Then the field is marked as “Custom” and persists across sessions
Suppression Controls: Quiet Hours, Time Zone, Snooze/Pause
Given the user’s time zone is America/New_York and quiet hours are 21:00–07:00 local When a nudge is scheduled at 22:15 local Then the nudge is not delivered during quiet hours And it is scheduled for delivery at the first eligible time after 07:00 local respecting cadence and caps Given quiet hours spanning a DST transition in the user’s time zone When clocks shift Then quiet hours continue to apply to the same local times Given the user activates Snooze for 90 minutes When a nudge would be delivered during that window Then it is not delivered during the snooze window And deliveries resume automatically after snooze expires Given the user enables Pause When any nudge becomes eligible Then it is not delivered while Pause is enabled And deliveries resume only after Pause is disabled
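The quiet-hours behavior, including windows that wrap past midnight and DST shifts, follows naturally if the check is done in the user's IANA time zone rather than in UTC. A sketch using Python's zoneinfo (function name and signature are assumptions):

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

def defer_for_quiet_hours(scheduled_utc: datetime, tz_name: str,
                          start: time, end: time) -> datetime:
    """If the scheduled moment falls inside the user's local quiet window
    (which may wrap past midnight, e.g. 21:00-07:00), push delivery to the
    window's end; otherwise deliver as scheduled. Returns a UTC datetime.

    Because the comparison uses wall-clock local time, quiet hours keep
    applying to the same local times across DST transitions.
    """
    tz = ZoneInfo(tz_name)
    local = scheduled_utc.astimezone(tz)
    t = local.time()
    if start <= end:
        in_quiet = start <= t < end
    else:  # window wraps past midnight
        in_quiet = t >= start or t < end
    if not in_quiet:
        return scheduled_utc
    end_day = local.date()
    if start > end and t >= start:
        end_day += timedelta(days=1)  # quiet window ends tomorrow morning
    release = datetime.combine(end_day, end, tzinfo=tz)
    return release.astimezone(ZoneInfo("UTC"))
```

The caller would then re-check cadence and frequency caps at the deferred time, per the criteria above.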
Channel Priority & Presence Signals (Slack DND)
Given channel priority is [Slack, In‑app, Email] and Slack is connected And Slack DND is active at the scheduled delivery time When a nudge is eligible Then it is not sent via Slack And it falls back to In‑app And if In‑app is unavailable, it falls back to Email Given Slack connection is revoked When a nudge is eligible Then Slack is skipped for delivery and marked unavailable in the UI And the next priority channel is used without duplicate sends Given Slack DND turns off before the next scheduled nudge When the next nudge is eligible Then Slack is used again as the primary channel
Delivery Cadence & Frequency Caps
Given delivery cadence is Immediate and the daily frequency cap is set to 3 with a minimum spacing of 15 minutes When 5 eligible nudge events occur within the same local calendar day Then no more than 3 nudges are delivered that day across all channels And each delivered nudge is at least 15 minutes apart And excess nudges are suppressed Given delivery cadence is Daily Digest at 08:30 local When eligible events occur throughout the day Then a single digest is delivered at 08:30 containing those items And no immediate nudges are sent for those items And if there are zero pending items, no digest is sent Given delivery cadence is Weekly Digest on Monday at 09:00 local When eligible events occur during the week Then a single digest is delivered at the configured time And immediate nudges for those items are not sent
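The daily cap and minimum-spacing rules can be evaluated against the user's delivery history. A sketch, under the assumption that timestamps are already expressed in the user's local time so "calendar day" means the local day:

```python
from datetime import datetime, timedelta

def may_deliver(history: list[datetime], now: datetime,
                daily_cap: int = 3, min_spacing_min: int = 15) -> bool:
    """Return True if one more nudge may be delivered now.

    Enforces at most `daily_cap` deliveries per local calendar day and at
    least `min_spacing_min` minutes since the most recent delivery; excess
    nudges are suppressed (not queued) per the acceptance criteria.
    """
    today = [t for t in history if t.date() == now.date()]
    if len(today) >= daily_cap:
        return False
    if today and now - max(today) < timedelta(minutes=min_spacing_min):
        return False
    return True
```
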
Per-Project Overrides
Given the user has global preferences set And a per‑project override is configured for Project A When a nudge is generated for Project A Then the override values (channel priority, cadence, quiet hours, frequency cap) are applied to that nudge And nudges for other projects continue to use the global preferences Given a per‑project override for Project A is removed When the next nudge for Project A is generated Then the global preferences are applied Given conflicting settings between global and project scope When evaluating delivery Then project‑level settings take precedence for that project And non‑overridden fields inherit from global values
Next Scheduled Nudge Preview
Given at least one pending item and current preferences When the preferences page loads Then a preview displays the next scheduled nudge time in the user’s local time, the channel to be used, and the project context Given the user changes any setting that affects scheduling or channel (e.g., cadence, quiet hours, channel priority, snooze/pause) When the change is saved Then the preview updates within 2 seconds to reflect the new next scheduled nudge Given Pause is enabled When viewing the preview Then the preview indicates delivery is paused and no time is scheduled Given Daily/Weekly Digest is enabled When viewing the preview Then the preview shows the next digest time Given there are no eligible pending items When viewing the preview Then the preview states that no nudges are currently scheduled
Audit Trail & Privacy Controls (Unsubscribe, GDPR/CCPA)
Given a user changes any preference value When the change is saved Then an audit record is created capturing actor, timestamp (UTC), field changed, old value, new value, scope (global/project), and source (UI/API/Import) And the record is visible in the audit list and exportable as CSV Given the user clicks the “Unsubscribe from all nudge emails” link in any nudge email When the unsubscribe confirmation is completed Then the user is globally unsubscribed from nudge emails immediately And a confirmation screen is shown And an audit record is logged with source “Email Unsubscribe” Given a valid GDPR/CCPA request is received to restrict or erase processing for a data subject When the request is marked active in the system Then all nudge processing for that user stops immediately across all channels And their email is set to globally unsubscribed for nudges And preference data is queued for deletion/anonymization per policy and its status is visible to admins And an immutable, minimal audit entry of the request is retained as permitted by law
Nudge Analytics & Audit Trail
"As a project lead, I want visibility into nudge effectiveness so that I can optimize settings and demonstrate accountability to clients."
Description

End-to-end measurement and compliance logging for nudges, including open/click rates, approve/comment conversions, time-to-approve by channel and send window, and A/B test results for templates and timing. Delivers project- and org-level dashboards, CSV/JSON export, and filters by date, channel, user, and project. Maintains an immutable audit log of who was nudged, when, via which channel, and the exact content variant, with retention policies and PII minimization. Feeds insights back into the optimization engine.

Acceptance Criteria
Event Tracking for Nudge Lifecycle
Given a nudge is dispatched via email, Slack, or in-app with template T and variant V When the recipient opens the nudge, clicks the primary CTA, submits an approve, or posts a comment via the nudge entry point Then an analytics event is recorded within 60 seconds with fields: nudge_id, recipient_user_id, project_id, org_id, channel, template_id, variant_id, ab_bucket, send_window, event_type, event_timestamp_utc And events are queryable in analytics within 5 minutes of occurrence And duplicate events are deduplicated by (nudge_id, recipient_user_id, event_type, session_id) within a 24-hour window And time_to_approve equals approve_timestamp_utc − send_timestamp_utc stored in integer minutes with ±1 minute precision And click-through and approve conversions are counted per unique recipient per nudge
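The 24-hour deduplication rule keyed on (nudge_id, recipient_user_id, event_type, session_id) might look like the in-memory sketch below; a real pipeline would use a shared store such as Redis with per-key TTLs, but the windowing logic is the same:

```python
from datetime import datetime, timedelta

class NudgeEventDeduper:
    """Drop repeat events with the same dedup key seen within 24 hours."""

    WINDOW = timedelta(hours=24)

    def __init__(self):
        self._seen: dict[tuple, datetime] = {}

    def accept(self, nudge_id: str, recipient: str, event_type: str,
               session_id: str, ts: datetime) -> bool:
        """Return True if the event should be recorded, False if it is a
        duplicate within the rolling window."""
        key = (nudge_id, recipient, event_type, session_id)
        last = self._seen.get(key)
        if last is not None and ts - last < self.WINDOW:
            return False
        self._seen[key] = ts
        return True
```
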
Project & Org Analytics Dashboards Rendering
Given analytics exist for the selected project or organization and date range When a Project Lead or Org Admin opens the Nudge Analytics dashboards Then the dashboards display: open rate, click-through rate, approve conversion rate, comment conversion rate, median and p90 time-to-approve, each broken down by channel and send window And A/B test results (per template and timing) display variant lift (%), sample size, p-value, and 95% significance flag And initial render completes within 3 seconds for cached queries and 8 seconds for uncached queries on datasets up to 10 million events And all visible numbers match underlying event counts within ±0.1%
Analytics Filtering & Segmentation
Given a user applies any combination of filters: date range, channel (email/Slack/in-app), user (by ID or display name), and project When filters are applied Then all tiles, charts, and tables update to include only events matching all selected filters And filter chips show active filters and can be cleared individually or all at once And zero-results states show 0 values and a clear-filters action And filtered totals equal the sum of visible breakdowns within ±0.1%
CSV and JSON Export of Analytics and Audit Log
Given a user selects Export with current filters for either Analytics or Audit Log When the export is requested Then both CSV and JSON exports are generated with identical row counts and filter parity And Analytics export rows include: nudge_id, event_type, timestamp_utc, channel, template_id, variant_id, ab_bucket, send_window, project_id, org_id, recipient_user_id_token, session_id And Audit Log export rows include: entry_id, action, actor_user_id, target_user_id_token, channel, template_id, content_variant_hash, payload_digest, timestamp_utc, request_id, previous_entry_hash And timestamps are in UTC; local_tz is provided as a separate column/field And exports larger than 100,000 rows run asynchronously and deliver a pre-signed download link valid for 24 hours with progress status And malformed or unauthorized export requests return a descriptive error without generating files
Immutable Nudge Audit Trail
Given any nudge send or related action occurs When the audit entry is written Then the log is append-only and stores: who initiated (system/service/user_id), who was nudged (recipient_user_id_token), when (timestamp_utc), channel, template_id, content_variant_hash, ab_bucket, request_id, previous_entry_hash And attempts to edit or delete an existing entry are blocked; only redaction is allowed, which replaces PII tokens but preserves non-PII metadata and appends a Redaction entry referencing the original And the integrity chain validates so that recomputing hashes detects tampering And with an org retention policy configured (e.g., 365 days), entries older than the policy and not under legal hold are purged/redacted by a daily job and a retention report is generated with counts affected
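The append-only integrity requirement can be sketched as a hash chain: each entry commits to its predecessor's hash, so recomputing the chain detects any in-place edit. Class and field names below are illustrative:

```python
import hashlib
import json

def entry_hash(entry: dict, previous_hash: str) -> str:
    """Deterministic digest of an entry chained to its predecessor."""
    body = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(previous_hash.encode() + body).hexdigest()

class AuditLog:
    """Append-only log; tampering with any stored entry breaks verify()."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, entry: dict) -> None:
        prev = self.entries[-1]["entry_hash"] if self.entries else self.GENESIS
        record = dict(entry, previous_entry_hash=prev)
        record["entry_hash"] = entry_hash(entry, prev)
        self.entries.append(record)

    def verify(self) -> bool:
        """Recompute every hash; False means the chain was tampered with."""
        prev = self.GENESIS
        for rec in self.entries:
            core = {k: v for k, v in rec.items()
                    if k not in ("previous_entry_hash", "entry_hash")}
            if rec["previous_entry_hash"] != prev:
                return False
            if rec["entry_hash"] != entry_hash(core, prev):
                return False
            prev = rec["entry_hash"]
        return True
```

Redaction would fit this model by appending a Redaction entry and replacing only PII token fields, leaving the chain's non-PII metadata intact.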
PII Minimization and Access Controls
Given analytics and audit events occur When their records are persisted Then recipient email/phone are not stored in clear text; only salted-hash tokens are retained, and Slack identities are stored as platform user IDs And message content is not stored; only template_id, content_variant_id, and diff digest are retained And non-admin users cannot access exports containing PII tokens; only Org Admins with Data Export permission can export datasets with pseudonymous recipient IDs And upon a data subject deletion request, tokens and back-references are removed within 30 days while aggregate metrics remain intact
Optimization Engine Feedback Loop
Given new analytics events are recorded When the hourly aggregation job runs Then aggregated metrics by channel, send window, template_id, and variant_id for the trailing 30 days are delivered to the optimization engine via an idempotent API And delivery uses at-least-once semantics with deduplication by (window_start, template_id, variant_id) And failures are retried with exponential backoff up to 24 hours and surfaced via system alerts without blocking dashboard freshness And accepted optimization parameters are versioned and become available to the Smart Nudges scheduler within 15 minutes of receipt

Parallel Paths

Splits the ladder into concurrent approval tracks (e.g., MEP, Structure, Interiors) with a merge gate. Teams progress independently, and the system advances only once all required tracks are complete, cutting overall cycle time.

Requirements

Track Configuration & Templates
"As a project lead, I want to quickly set up standardized parallel approval tracks using templates so that each discipline can work concurrently with clear roles and expectations."
Description

Provide UI and backend to define concurrent approval tracks within a project (e.g., MEP, Structure, Interiors), including track name, description, required vs optional, default reviewers and approvers, SLA targets, allowed file types and markup scopes, visibility (internal vs client-facing), and dependencies. Support reusable templates selectable during project setup and editable per project. Validate configuration to prevent circular dependencies and duplicate track keys. Persist settings at the project level and expose them to the workflow engine and permissions layer.

Acceptance Criteria
Create and Save Concurrent Track With Field Validation
Given a project track configuration form is opened When the user enters a unique track key matching ^[a-zA-Z0-9_-]{2,32}$ and a non-empty name And the user sets required/optional, visibility, SLA target (positive integer hours or days), allowed file types from the supported list, markup scopes from the supported list, and selects default reviewers/approvers that exist in the project team And all selected dependencies reference existing tracks in the same project Then the Save action is enabled When the user clicks Save Then the track configuration is persisted at the project level And retrieving the project settings shows the saved values exactly as entered
Prevent Duplicate Track Keys at Project Scope
Given a project that already contains a track with key "STRUCTURE" When the user attempts to create or import another track with key "STRUCTURE" Then the Save action is blocked And an error message states "Track key already exists in this project" And no changes are persisted When the user changes the key to a unique value Then the error clears and Save becomes enabled
Block Circular and Self Dependencies Among Tracks
Given tracks A and B exist in a project When the user sets A to depend on B and B to depend on A Then Save is blocked And an error message identifies a circular dependency path (A → B → A) Given a track C exists When the user sets C to depend on C Then Save is blocked with an error "A track cannot depend on itself" When the dependency graph is acyclic Then Save succeeds and the dependencies are stored
Apply Template at Project Setup and Allow Per-Project Overrides
Given a reusable template "Core+Shell" with three predefined tracks exists When the user selects the "Core+Shell" template during new project setup Then the project is pre-populated with the template's tracks and all field values When the user edits a track's SLA target and default approvers before saving the project Then the edits apply only to the project and do not modify the template When the project is created Then the persisted project configuration matches the edited values And the original template remains unchanged
Enforce Allowed File Types and Markup Scopes Within a Track
Given a track is configured to allow file types [PDF, DWG] and markup scopes [MEP] When a user attempts to upload a PNG to that track Then the upload is rejected with an error stating the allowed types When a user opens the markup tools within that track Then only the [MEP] markup scope tools are enabled and non-allowed scopes are disabled When a DWG file is uploaded Then the upload succeeds and is associated with the correct track
Honor Visibility Settings in UI and API (Internal vs Client-Facing)
Given a track is set to Internal visibility When a client user views the project workspace or calls the tracks API Then the internal track is not visible and cannot be interacted with When a project team member views the same Then the internal track is visible with full functionality per their permissions Given a track is set to Client-Facing When a client user with comment/approve rights accesses the project Then the track is visible and actions are permitted according to their role
Expose Track Config to Workflow Engine and Auto-Assign Roles
Given a project with tracks configured (required/optional, dependencies, SLA, default reviewers/approvers) When the configuration is saved Then the workflow engine receives the track definitions via the integration interface When a track instance is created by the workflow engine Then default reviewers and approvers are auto-assigned And a due date is computed from the SLA target using the project's working calendar And the track cannot start until its configured dependencies are completed And only required tracks gate the merge step while optional tracks do not block advancement
Merge Gate & Dependency Engine
"As a project lead, I want the system to advance only when all required tracks are approved so that we avoid premature sign-off and downstream rework."
Description

Implement a state machine and rules engine that evaluates each approval cycle against track completion requirements. The engine must block advancement until all required tracks reach Approved, allow optional tracks to complete asynchronously, and support admin override with reason logging and automatic stakeholder notification. Enforce intra-track sequencing where configured, handle rejections by reopening only the affected track, and recompute gate readiness in real time. Provide API/webhook events for gate-open, gate-closed, override, and track-status-changed. Persist decision outcomes for audits.
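The core gate rule (open only when every required track is Approved, with optional tracks never blocking) reduces to a small pure function the engine can re-evaluate on each track-status change. A sketch with assumed status strings:

```python
def evaluate_gate(tracks: dict[str, dict]) -> str:
    """tracks maps track name -> {'required': bool, 'status': str}.

    Returns 'Open' iff every required track is Approved; optional tracks
    are ignored, so their status changes never move the gate.
    """
    required = [t for t in tracks.values() if t["required"]]
    return "Open" if all(t["status"] == "Approved" for t in required) else "Closed"
```

Emitting gate-open/gate-closed events then amounts to comparing the new result against the previously stored gate state, which also ensures exactly one event per transition.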

Acceptance Criteria
Gate Opens When All Required Tracks Approved
Given an approval cycle with required tracks ["MEP","Structure"] and optional tracks ["Interiors"] And MEP=Approved and Structure=Approved and Interiors=In Review When the rules engine evaluates the merge gate Then the merge gate state is Open within 1 second And the cycle state is ReadyToMerge And exactly one gate-open event is recorded and a webhook POST is issued to all registered endpoints
Gate Remains Closed When Any Required Track Not Approved
Given an approval cycle with required tracks ["MEP","Structure"] and optional tracks ["Interiors"] And MEP=Approved and Structure=In Review When the rules engine evaluates the merge gate Then the merge gate state is Closed And the cycle state does not advance And no gate-open event is emitted
Optional Tracks Do Not Block Merge
Given the merge gate is Open because all required tracks are Approved And an optional track "Interiors" is In Review When "Interiors" changes to RevisionsRequired Then the merge gate remains Open And the cycle state is unchanged And a track-status-changed event is emitted for "Interiors" only
Admin Override With Reason And Notifications
Given the merge gate is Closed due to at least one required track not Approved And the actor has Admin permissions When the admin issues an override with a non-empty reason string Then the merge gate state changes to Open within 1 second And required track statuses remain unchanged And an override event is recorded with actor_id, reason, and timestamp And stakeholder notifications (in-app and webhooks) are sent to all subscribed stakeholders within 2 seconds
Intra-Track Sequencing Enforcement
Given track "Structure" is configured with sequence ["PeerReview","LeadApproval","Approved"] When a user attempts to transition "Structure" from "PeerReview" directly to "Approved" Then the transition is rejected with error code SEQUENCE_VIOLATION And the track remains in "PeerReview" When the user transitions "Structure" from "PeerReview" to "LeadApproval" and then to "Approved" Then both transitions succeed And the final status of "Structure" is "Approved"
Rejection Reopens Only Affected Track And Recomputes Gate
Given the merge gate is Open with required tracks MEP=Approved and Structure=Approved When a reviewer rejects "Structure" Then "Structure" status changes to RevisionsRequired And other tracks' statuses are unchanged And the merge gate state changes to Closed within 1 second And track-status-changed and gate-closed events are emitted
Event Emission, Webhooks, And Audit Logging
Given a gate state change, override, or track status change occurs When the system processes the change Then an event is recorded with type in ["gate-open","gate-closed","override","track-status-changed"], cycle_id, gate_id, track_id (if applicable), previous_status, new_status, actor_id (if applicable), and ISO-8601 timestamp And a webhook POST with the same payload and an HMAC signature header is sent to each registered endpoint And the event outcome is persisted in the audit log and retrievable via the audit API filtered by cycle_id And override events include the non-empty reason in both the event payload and audit record
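Webhook payloads can carry an HMAC signature header that receivers verify in constant time. A sketch (the secret source and the `sha256=<hex>` header format are assumptions; canonical JSON serialization keeps the signature stable):

```python
import hashlib
import hmac
import json

WEBHOOK_SECRET = b"per-endpoint-shared-secret"  # hypothetical, per registered endpoint

def sign_payload(payload: dict) -> tuple[bytes, str]:
    """Serialize the event payload and compute the signature header value."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return body, f"sha256={sig}"

def verify_signature(body: bytes, header: str) -> bool:
    """Receiver side: recompute over the raw body and compare in constant time."""
    expected = hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(header, f"sha256={expected}")
```
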
Track-Specific Version Streams & Markups
"As a discipline lead, I want my team’s markups and revisions to be tracked within our own stream so that parallel work does not overwrite or confuse other teams’ changes."
Description

Create isolated version streams per track that branch from the current drawing set, allowing each discipline to add annotations, attachments, and revisions without clashing. Maintain cross-links to the master drawing and enable a visual diff between a track’s latest and the master. On gate merge, consolidate approved track changes into the master revision with provenance metadata. Ensure one-click client approvals operate at both track and master levels and that rollback restores track state without affecting unrelated tracks.

Acceptance Criteria
Branching a Track from Current Master
Given a master drawing set at revision M3 and a new track named "MEP" is created When the track is initialized Then a new version stream for "MEP" is created branching from M3 with initial track revision T0 referencing M3 And the "MEP" stream uses its own sequential versioning (e.g., MEP-T1, MEP-T2) And no changes made in the "MEP" stream appear in other tracks or the master until a merge occurs
Isolated Markups, Attachments, and Revisions per Track
Given an "MEP" track revision MEP-T1 is open for editing and an "Interiors" track exists When a user adds annotations, uploads file attachments, and saves a new revision in "MEP" Then those annotations, attachments, and the revision are stored only in the "MEP" stream and are not visible in "Interiors" or the master And each stored attachment records metadata including track ID, author, timestamp, and track revision ID And when concurrent edits occur in different tracks on the same drawing area, no edit conflicts are raised prior to merge and each track retains its changes independently
Visual Diff: Track Latest vs Master
Given an "MEP" track latest revision MEP-T4 and the master is at M3 When the user requests a visual diff between MEP-T4 and M3 Then the system renders a diff highlighting additions, deletions, and modified elements from the track relative to the master And the diff provides toggleable layers for markups and underlying drawing changes And the diff view loads within 3 seconds for a drawing set up to 50 MB in size
Merge Gate: Consolidate Approved Tracks with Provenance
Given required tracks "MEP" and "Structure" are in Approved state and the master is at M3 When the merge gate is executed Then a new master revision M4 is created consolidating the approved changes from each required track And provenance metadata is recorded per contributing track including track name, contributing track revision IDs, approver identity, timestamps, and a diff summary And the contributing track revisions are locked against further edits and marked as Merged And a merge report is generated and linked to M4
One-Click Client Approvals at Track and Master Levels
Given a track revision (e.g., MEP-T2) is ready for client review When the client clicks Approve on the track Then the track status changes to Approved with timestamp, approver identity, and a digital acceptance receipt And when the client clicks Request Changes instead Then the track status becomes Changes Requested and a justification comment is required And given all required tracks are Approved When the client views the master Then the Approve action for the master is enabled; otherwise it is disabled with a list of pending tracks And when the client approves master revision M4 Then the master is marked Approved and further edits require creating a new master revision (e.g., M5)
Per-Track Rollback without Cross-Impact
Given a track "Interiors" has revisions INT-T0 through INT-T3 and has not been merged When a user performs a rollback to INT-T1 Then the current pointer becomes INT-T1 and INT-T2–INT-T3 remain in history but are superseded And the master and all other tracks remain unchanged And given the track had been merged previously When a rollback is performed on the track stream Then only the track stream state changes; the master remains at its current revision and subsequent merges from the track require re-approval
Cross-Linking and Navigation Between Track and Master
Given a track "MEP" branched from master M3 When viewing track revision MEP-T2 details Then a link to its base master revision M3 is displayed And when viewing master M3 Then links to all active track streams that branched from it are displayed with their current statuses And when the master advances to M4 after a merge Then existing track cross-links continue to reference their original base and indicate if a newer master exists
Role-Based Assignment & Access Control per Track
"As an admin, I want to restrict who can view and approve each track so that sensitive conversations and actions stay within the right audience."
Description

Extend RBAC to assign reviewers and approvers to specific tracks, limit approval actions to authorized users, and control visibility of track content for internal vs client-facing work. Support guest client access scoped to the tracks they are invited to, and ensure notifications and dashboards respect access boundaries. Provide bulk assignment, substitution for out-of-office rules, and audit of permission changes.

Acceptance Criteria
Track-Level Role Assignment via UI
Given a project admin on a project with tracks MEP, Structure, and Interiors When the admin assigns Alice as Reviewer and Bob as Approver on the MEP track via the Assign Roles modal Then Alice and Bob gain those roles on MEP only, and no roles are added to Structure or Interiors And the assignment persists after page refresh and API GET /tracks/{id}/roles reflects the change within 5 seconds Given a non-admin attempts to modify track roles When they submit an assignment change Then the request is rejected with 403 and no changes are persisted Given any assignment change is saved When the operation completes Then an audit record is written with actor, target user, track, role(s), timestamp (UTC), and reason (optional)
Approval Action Enforcement by Track
Given a user without the Approver role on Track A When they open a pending approval item on Track A Then Approve and Reject controls are not actionable (disabled or hidden) and API POST /approvals returns 403 Given a user with Reviewer role on Track A When they view the same item Then they can comment and request changes but cannot finalize approval Given a user with Approver role on Track A When they click Approve Then the item status on Track A changes to Approved, a track-approval event is emitted, and merge gate evaluation updates within 5 seconds
Internal vs Client Visibility Controls
Given an artifact in Track B is flagged Internal-only When a client user views Track B Then the artifact and its comments are not visible and counts exclude them Given an internal user views Track B When they open the artifact list Then Internal-only items are visible with an Internal badge Given the visibility of the artifact is toggled from Internal-only to Client-facing by an admin When a client user refreshes Then the artifact becomes visible to the client within 5 seconds and the visibility change is recorded in the audit log
Guest Client Access Scoped to Invited Tracks
Given a guest client is invited to Tracks C and D only When they sign in Then their workspace lists only Tracks C and D and direct navigation to any other track returns Access Denied (404 or 403) without leaking track names Given the guest clicks a deep link to an item in Track C When the page loads Then the item is accessible Given the guest clicks a deep link to an item in Track E (not invited) When the page loads Then access is denied and no item metadata is exposed Given the guest invitation is revoked or expired When the guest attempts access Then access is denied and any active tokens for that invitation are invalidated
Notifications and Dashboards Respect Access Boundaries
Given User X has access to Tracks A and B only When User X opens their dashboard Then counts, lists, and charts include items from A and B only and exclude all others Given a notification is generated for Track Z where User X lacks access When notifications are dispatched Then User X does not receive email, push, or in-app notifications for that event Given an admin grants User X access to Track Z When the dashboard is refreshed or the next sync runs Then Track Z items appear in dashboard widgets and query APIs within 60 seconds
Bulk Assignment and Out-of-Office Substitution
Given a project admin selects Tracks A, B, and C in bulk assignment When they assign Dana as Reviewer across selected tracks Then Dana appears as Reviewer on A, B, and C, with a summary indicating success/failure per track and API reflects changes within 5 seconds Given Approver Evan has an active Out-of-Office rule with delegate Kai from 2025-10-01 to 2025-10-10 When an approval request is created on Track B within that window Then Kai is auto-designated as temporary Approver for Track B approvals, receives notifications, and Evan cannot approve during the window Given the OOO window ends When the next approval action is initiated Then approval rights revert and Kai no longer has Approver permissions unless explicitly assigned
Permission Change Audit and Export
Given any permission change on any track When the change is committed Then an immutable audit entry is created with fields: project, track, actor, affected user, old role, new role, method (UI/API), timestamp (UTC), reason (optional), and requestId Given an auditor filters by track=MEP, actor=Admin, and a date range of 30 days When they run the search Then results return within 2 seconds for up to 10,000 records and match the filters exactly Given the auditor exports current filtered results as CSV When the export is requested Then a CSV with headers and all fields downloads within 10 seconds and only users with Admin role can export
Real-Time Track Dashboard, Notifications, and SLAs
"As an approver, I want clear, timely prompts and a live view of which tracks need my input so that I can keep the project moving without searching."
Description

Deliver a project dashboard showing per-track status, approver, time-in-state, and SLA countdowns, with real-time updates. Send actionable notifications and reminders to assigned reviewers and approvers, including escalation when SLA thresholds are breached. Provide digest summaries, snooze, and channel preferences. Surface blockers and required next actions directly in the card UI and allow quick approve/reject with comment where permitted.
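The reminder and escalation behavior reduces to a threshold check on SLA consumption. A minimal sketch, with the 50%/90% thresholds taken from the acceptance criteria and the function name purely illustrative:

```python
def sla_reminders(elapsed_s: float, sla_s: float) -> list:
    """Return which SLA events are due for a track, given elapsed time in
    state and the configured SLA target (both in seconds). Thresholds of
    50% and 90% plus escalation at breach mirror the acceptance criteria."""
    events = []
    consumed = elapsed_s / sla_s
    if consumed >= 0.5:
        events.append("reminder_50")
    if consumed >= 0.9:
        events.append("reminder_90")
    if consumed >= 1.0:
        events.append("escalation")
    return events
```

A real implementation would also pause the clock while a track is On Hold and, when business-time mode is enabled, count only working hours.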

Acceptance Criteria
Real-time per-track dashboard updates
Given a project with multiple approval tracks (e.g., MEP, Structure, Interiors) When a track’s status, current approver, time-in-state, or SLA value changes Then the dashboard reflects the change within 2 seconds via real-time transport And if real-time transport is unavailable, the dashboard falls back to polling at an interval of no more than 15 seconds And each track card displays status, current approver(s), time-in-state, and an SLA countdown timer in HH:MM format And updates are consistent across multiple open sessions for the same project
SLA countdowns, reminders, and escalation
Given a track is assigned with a configured SLA target When the assignment starts Then an SLA countdown begins using business time if enabled And reminders are sent to assigned approvers at 50% and 90% of SLA consumption And at SLA breach, an escalation notification is sent to the defined escalation recipients and the track card shows a red "SLA Breached" badge And SLA timers can be paused/resumed only when the track is placed On Hold with a required reason And all reminders and escalations are recorded in an immutable audit log with timestamp and recipients
Notification channels, actionable messages, and snooze
Given a user has notification channel preferences configured (in-app default; email and Slack/Teams optional) When the user is assigned as reviewer/approver or a reminder is due Then the notification is delivered to the selected channels within 1 minute And actionable notifications include Approve and Reject actions when the user has permission And Snooze options include 15 minutes, 1 hour, and until next business day 9:00 AM local And Do Not Disturb windows defer non-escalation notifications and send a summary when DND ends And users can opt out per project from non-escalation notifications; escalations cannot be disabled by responsible approvers
Daily and weekly digest summaries
Given a user has pending items in any project When it is 8:00 AM local on a business day Then a daily digest is sent listing pending reviews/approvals with status, time-in-state, and SLA remaining And weekly on Monday at 8:00 AM local, a digest includes counts, average time-in-state, and number of SLA breaches for the prior week And items already acted upon are excluded from the digest And digest delivery respects the user’s channel preferences And each digest item deep-links to its corresponding track card
Blockers and next actions surfaced on track cards
Given a track has unmet dependencies, missing required attachments, unanswered questions, or incomplete required approvals When the track card is rendered Then a Blocked indicator displays with specific blocker reasons And the Next Action displays a concise instruction with a primary CTA (e.g., Request File, Ask a Question, Remind Approvers) And resolving a blocker updates the card within 2 seconds and removes the Blocked indicator And when multiple approvals are required, the card shows who has approved and who is outstanding, including the quorum rule
Quick approve/reject with comment and permissions
Given an assigned approver is viewing a track card or an actionable notification When the approver selects Approve Then the track transitions to the next state, the approver and timestamp are recorded, and an optional comment is saved When the approver selects Reject Then a comment is required (minimum 3 characters), the track transitions to Revisions Needed (or configured fallback), and the requester is notified And an Undo option is available for 30 seconds; after that, reversal requires admin privileges and is audit-logged And users without permission see disabled Approve/Reject actions with a tooltip explaining the restriction
Time-in-state measurement and export
Given a track moves through workflow states When a state transition occurs Then the prior state’s duration is recorded to the second and accumulated per state And when business-time mode is enabled at the workspace level, weekends and configured holidays are excluded from timers And the dashboard displays time-in-state in a human-friendly format (e.g., 3d 4h 12m) And an export provides CSV with track ID, state, enter/exit timestamps (UTC), and durations And placing a track On Hold pauses timers until resumed
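The business-time mode in the last criterion can be approximated by accumulating only weekday time. A simplified sketch, assuming naive datetimes and a workspace-level set of holiday dates; a production timer would also honor On Hold pauses and time zones:

```python
from datetime import datetime, timedelta

def business_seconds(start: datetime, end: datetime, holidays=()) -> float:
    """Elapsed seconds between start and end, excluding weekends and any
    dates listed in holidays. Walks the interval one calendar day at a time."""
    total = 0.0
    cursor = start
    while cursor < end:
        # End of the current calendar day (midnight of the next day).
        day_end = (cursor + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0)
        chunk_end = min(day_end, end)
        # weekday() < 5 means Monday through Friday.
        if cursor.weekday() < 5 and cursor.date() not in holidays:
            total += (chunk_end - cursor).total_seconds()
        cursor = chunk_end
    return total
```

For example, Friday noon to Monday noon yields exactly 24 business hours, since Saturday and Sunday are excluded.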
Audit Trail and Cycle-Time Analytics for Parallel Paths
"As a practice owner, I want detailed audit and performance insights across parallel approvals so that I can demonstrate compliance and optimize throughput."
Description

Record an immutable, time-stamped event log per track and for the merge gate, including configuration changes, assignments, approvals, rejections, overrides, and notifications. Provide export and APIs for compliance. Add analytics that report cycle time by track, overlap between tracks, bottleneck contributors, and time saved versus sequential baselines. Offer filters by project, client, discipline, and date range, and visualize trends to inform process improvements.
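One of the metrics above, overlap between tracks, can be computed with a sweep line over track open/complete events. A sketch with hypothetical numeric timestamps; the function name and input shape are assumptions:

```python
def overlap_duration(windows: list) -> float:
    """Total time during which at least two tracks are simultaneously active.
    windows: list of (opened, completed) timestamps in seconds."""
    events = []
    for start, end in windows:
        events.append((start, 1))   # a track becomes active
        events.append((end, -1))    # a track completes
    events.sort()                   # ties close (-1) before open (+1)
    active, prev_t, total = 0, None, 0.0
    for t, delta in events:
        if active >= 2 and prev_t is not None:
            total += t - prev_t     # accumulate concurrent time only
        active += delta
        prev_t = t
    return total
```

The concurrencyIndex described in the criteria would then be this value divided by the span from the earliest open to the latest completion.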

Acceptance Criteria
Immutable Event Log per Track and Merge Gate
Given Parallel Paths is enabled on a project with multiple tracks and a merge gate When configuration changes, assignments, approvals, rejections, overrides, or notifications occur on any track or the merge gate Then the system appends a new audit event with fields: eventId (UUIDv4), projectId, trackId (nullable for merge), gateId (for merge), eventType (from allowed set), actorId (nullable for system), occurredAt (UTC ISO-8601 ms), and metadata payload And events are append-only; attempts to update or delete an event via UI or API return 403 and the original event remains unchanged And each event includes a hash field computed as SHA-256(previousHash + canonicalEventPayload) enabling verification of an unbroken chain per track and for the merge gate And verifying the hash chain for a track or merge gate returns true for 100% of events And event ordering within a track and within the merge gate is strictly by occurredAt then eventId with no duplicates
Compliance Export of Audit Trail
Given a user with Auditor permissions selects a project/client/discipline and date range filter When they request an export in CSV and in JSON formats Then the exported file contains all and only events that match the filters with the exact same count as the in-app filtered view And each file includes an export header with: exportId, generatedAt (UTC), filters applied, recordCount, and sha256 checksum of the payload And timestamps in the export are UTC ISO-8601 with millisecond precision And the CSV uses UTF-8 encoding, RFC-4180 quoting, and a stable column order documented in the API And for up to 100,000 events, the export completes within 10 seconds 95th percentile And re-running the same export (same filters within 5 minutes) produces identical checksum and record count
Audit Trail API Access and Security
Given a client has a valid OAuth2 access token with scope audit:read When they call GET /api/audit-events with filters (projectId, clientId, discipline, startAt, endAt), sort (occurredAt desc|asc), and pagination (limit<=1000, cursor) Then the API returns 200 with a paginated list of matching events, including nextCursor when more results remain And requests without a valid token return 401; tokens without audit:read return 403 And invalid parameters (e.g., endAt < startAt, limit>1000, unknown filter) return 400 with machine-readable error codes And responses include ETag; repeated calls with If-None-Match return 304 when no changes occurred within the filter window And rate limits enforce 60 requests/second per token; exceeding returns 429 with Retry-After And all timestamps are UTC; ordering is stable across pages
Cycle Time by Track Analytics
Given a ladder run with one or more tracks containing events track.opened, approval.rejected (optional), and track.completed (final) When cycle time is computed Then each track’s cycle time equals the sum of active intervals from the first track.opened to the final track.completed, including any rework intervals between approval.rejected and subsequent completion And cycle time is displayed in hh:mm (and days for >24h) and matches the exported analytics CSV values And tracks without completion show cycle time as null and are excluded from averages unless "include in-progress" is enabled And project-, client-, discipline-, and date-range filters recalculate cycle times to reflect only tracks within the filter And results for a known fixture dataset match expected durations within ±1 second
Overlap Between Tracks Analytics
Given a ladder run with two or more tracks that have active windows between their track.opened and track.completed events When overlap is computed Then overlapDuration equals the total time where at least two tracks are simultaneously active And the concurrencyIndex equals overlapDuration divided by the total window from earliest track.opened to latest track.completed And the UI renders an overlap timeline (Gantt-like) highlighting concurrent segments and displays overlapDuration and concurrencyIndex And applying filters (project, client, discipline, date range) recalculates overlap metrics and visualization accordingly And analytics export includes overlapDuration and concurrencyIndex for the filtered set
Bottleneck Contributors Analytics
Given audit events include assignment.created and subsequent approval.granted or approval.rejected with actorId and role When bottleneck contributors are calculated for a selected filter set Then for each actor and role the system computes median and 90th percentile of time-to-action (assignment.created to next approval.*) And the dashboard lists the top 5 actors and top 5 roles by highest median time-to-action, excluding system actors And clicking an actor or role filters/drills down to the contributing tracks/events And results update within 2 seconds P95 for up to 10,000 events in scope And exported analytics include actor/role, sampleSize, median, p90, and filter context
Time Saved vs Sequential Baseline and Trend Visualization
Given per-track cycle times and a merge gate with merge.completed event for a ladder run When time savings are computed Then sequentialBaseline equals the sum of the cycle times of all required tracks And actualParallel equals the duration from the earliest track.opened to merge.completed And timeSaved equals sequentialBaseline minus actualParallel; percentSaved equals timeSaved divided by sequentialBaseline rounded to the nearest 1% And negative timeSaved displays as a negative value and percent in red with a tooltip explanation And the UI shows a trend line (weekly and monthly aggregates) of percentSaved over the selected date range with filters by project, client, and discipline applied And analytics export includes sequentialBaseline, actualParallel, timeSaved, percentSaved, and aggregation grain
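The baseline arithmetic in the last criterion is simple enough to state directly; a sketch with a hypothetical function name, following the definitions above:

```python
def time_saved(track_cycle_times_s: list, parallel_duration_s: float):
    """Return (sequentialBaseline, timeSaved, percentSaved) per the spec:
    the baseline is the sum of required-track cycle times, timeSaved is the
    baseline minus the actual parallel duration, and percentSaved is rounded
    to the nearest whole percent (negative when parallel ran slower)."""
    baseline = sum(track_cycle_times_s)
    saved = baseline - parallel_duration_s
    percent = round(100 * saved / baseline) if baseline else 0
    return baseline, saved, percent
```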

Gatekeeper Checks

Runs pre-advance validations—resolved comments, locked versions, and conflict checks—before moving to the next rung. Prevents premature approvals and rework by ensuring the current stage is truly complete.

Requirements

Pre-Advance Validation Orchestrator
"As a project lead, I want an automatic pre-advance validation run so that I can prevent incomplete or conflicting work from moving forward."
Description

A centralized, pluggable engine that executes all gate checks (e.g., unresolved comments, version lock, cross-file conflicts) before any stage transition. It aggregates results into a standardized pass/fail/warn report, blocks progression on failed mandatory checks, and exposes events, API endpoints, and webhooks for integrations. The orchestrator integrates with the stage transition workflow and one-click approvals, supports parallelized rule execution for performance, and provides a consistent result schema for UI and audit consumption.

Acceptance Criteria
Standardized Validation Report Generation
Given a stage transition attempt for stageId S by user U with configured checks C1(pass), C2(fail, mandatory), C3(warn, optional), C4(pass), C5(fail, mandatory) When the orchestrator executes all gate checks Then it returns HTTP 200 with a JSON report containing fields: requestId(UUIDv4), projectId, stageId, actorId, startedAt(ISO-8601), endedAt(ISO-8601), durationMs, overallStatus(one of pass|fail|warn), counts{pass,fail,warn,error,skipped}, checks[] And each checks[] item includes: id, name, version, mandatory(boolean), result(one of pass|fail|warn|error|skipped), message, startedAt, endedAt, durationMs, artifacts[] And overallStatus = "fail" because at least one mandatory check failed And counts reflect the results exactly (pass:2, fail:2, warn:1, error:0, skipped:0) And the report is persisted and retrievable via GET /api/validations/{requestId} returning the identical payload
Stage Transition Block on Mandatory Failures
Given at least one mandatory check returns result = "fail" for a stage transition When POST /api/stages/{stageId}/advance is called Then the API responds 409 Conflict with body including overallStatus:"fail" and blockingReasons as a non-empty array of failing mandatory check ids And the stage status remains unchanged and no approval record is created And the one-click approval control in the UI is disabled and displays the failing checks list from the report schema And an audit log entry is created with reason:"blocked_by_gatekeeper" and the associated validation requestId
Parallelized Rule Execution and Timeout Handling
Given 10 checks that complete in ~100ms and 10 checks that would run for 600ms, with perCheckTimeout=500ms and maxConcurrency=10 When the orchestrator runs the validation Then the total duration is <= 800ms from startedAt to endedAt And any check exceeding perCheckTimeout is marked result="error" with message containing "timeout" And if a timed-out check is mandatory, overallStatus = "fail"; if optional and no other failures, overallStatus = "warn" And if the validation request is cancelled by the user before completion, in-flight checks are cancelled and remaining checks are marked "skipped" with overallStatus reflecting executed results
Pluggable Check Registration and Project-Level Configuration
Given a new check plugin is registered via POST /api/checks with required fields {id, name, version, entrypoint, permissions, defaultMandatory} And project P enables the plugin with mandatory=true via PUT /api/projects/{projectId}/checks/{checkId} When a validation runs for project P Then the report includes a checks[] item for the plugin id and its configured version and mandatory flag And disabling the check at the project level excludes it from execution and from the report And attempting to register a plugin with duplicate {id,version} returns 409 Conflict And misconfigured plugins (missing entrypoint) are not executed and appear once in the report with result="error" and a diagnostic message
Events, API Endpoints, and Webhook Delivery
Given a webhook subscription exists for event "validation.completed" with targetUrl, secret, and retryPolicy(maxAttempts=5, backoff=exponential starting at 2s) When a validation completes for any stage Then the system emits the event internally and delivers an HTTP POST to targetUrl within 2s of completion with JSON payload {requestId, projectId, stageId, overallStatus, counts, checks} And the POST includes an HMAC-SHA256 signature header X-PlanPulse-Signature computed with the shared secret And on a non-2xx response, deliveries are retried up to 5 attempts with exponential backoff and stop after a 2xx And each delivery attempt is recorded with status code and latency and is retrievable via GET /api/validations/{requestId}/deliveries
Workflow Integration with One-Click Approvals and Override Policy
Given a user with role ProjectLead clicks "Approve and Advance" for stageId S When all mandatory checks return result "pass" and no errors exist Then the stage advances, an approval record is created, and event "validation.passed" is emitted with the validation requestId Given a mandatory check fails and a user with role Principal selects "Override and Advance" and enters a justification of at least 20 characters When the override is submitted Then the stage advances despite failures, the validation report remains overallStatus="fail", and an audit entry is recorded with {actorId, justification, timestamp, requestId} And users without override permission receive 403 Forbidden when attempting to override
Audit Trail and Result Persistence
Given any validation run completes (pass, fail, or warn) When GET /api/validations/{requestId} is called Then the system returns the exact stored report along with an immutable audit envelope containing {hash, createdAt, createdBy, source} And the audit record includes correlationId linking the validation to the stage transition attempt and any approval/override actions And records are retained per policy (e.g., >= 365 days) and are tamper-evident (hash changes if any field is altered) And querying by stageId or projectId returns the full history ordered by startedAt descending
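The webhook signing scheme above (HMAC-SHA256 with a shared secret in the X-PlanPulse-Signature header) can be sketched with the standard library. The canonical JSON encoding used here is an assumption, since the spec does not define the exact byte serialization:

```python
import hashlib
import hmac
import json

def sign_payload(secret: bytes, payload: dict) -> str:
    """Hex HMAC-SHA256 over a canonical JSON body (sorted keys, no spaces)."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, payload: dict, signature: str) -> bool:
    # compare_digest gives a constant-time comparison, resisting timing attacks.
    return hmac.compare_digest(sign_payload(secret, payload), signature)
```

A subscriber would recompute the signature over the received body and reject any delivery where verification fails.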
Resolved Comment Enforcement
"As a reviewer, I want the system to block stage advancement until all relevant comments are resolved or properly waived so that no feedback is missed."
Description

Enforces that all discussion threads and review comments linked to the current stage’s drawings and markups are either resolved or explicitly waived before advancing. Includes filters for scope (current stage, current sheet set), bulk resolve tools, role-based waiver permissions with required rationale, and exclusion of archived or out-of-scope threads. Integrates with the conversation layer to surface remaining open items and prevents premature approvals that would otherwise trigger rework.
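The gate condition reduces to "no in-scope thread is still open." A minimal sketch with hypothetical thread records; status names follow the spec, but the field names are assumptions:

```python
def gate_open_items(threads: list, scope_ids: set) -> list:
    """Threads that still block advancement: linked to an in-scope entity,
    not Archived, and neither Resolved nor Waived."""
    return [
        t for t in threads
        if t["entity_id"] in scope_ids
        and t["status"] not in ("Resolved", "Waived", "Archived")
    ]

def can_advance(threads: list, scope_ids: set) -> bool:
    # Advancement is allowed only when nothing in scope remains open.
    return not gate_open_items(threads, scope_ids)
```

Note that, as the criteria require, archived and out-of-scope threads never count against the gate, while a Waived thread satisfies it exactly like a Resolved one.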

Acceptance Criteria
Block Advancement Until All Stage Comments Resolved or Waived
Given the user has the "Advance Stage" permission And there exist one or more Open discussion threads linked to drawings or markups within the current stage scope When the user attempts to Advance or Approve the stage via UI button, keyboard shortcut, or API Then the action is blocked And a blocking banner displays the number of Open items and a "View items" link within 1 second And the gate remains blocked until every in-scope thread is either Resolved or Waived And out-of-scope or Archived threads do not affect the gate And once all in-scope threads are Resolved or Waived, the Advance/Approve action becomes enabled immediately
Scope Filtering: Current Stage vs Current Sheet Set
Given there are threads linked to a mix of entities across stages and sheet sets When the user sets validation scope to "Current Stage" Then only threads linked to entities in the current stage are counted as in-scope and listed, updating within 1 second When the user sets validation scope to "Current Sheet Set" Then only threads linked to sheets in the active sheet set are counted as in-scope and listed And threads outside the selected scope or with status Archived are excluded from counts and lists And a thread linked to multiple in-scope entities is counted once in totals and listings
Bulk Resolve with Required Disposition and Audit Log
Given the user has permission to resolve comments And at least two in-scope threads are Open When the user selects multiple threads and clicks "Bulk Resolve" and confirms Then all selected threads change status to Resolved And each change records resolver identity, timestamp, and optional resolution note (max 500 characters) And any thread that fails to update surfaces a per-thread error while successfully updating others And the in-scope Open count decreases accordingly within 1 second
Role-Based Waiver with Mandatory Rationale
Given the user has "Waive" permission When the user selects an Open in-scope thread and chooses Waive And provides a rationale of at least 10 characters Then the thread status becomes Waived And the waiver records user, timestamp, and rationale And Waived threads satisfy the gate equivalently to Resolved Given a user without "Waive" permission attempts to waive a thread Then the action is rejected with a permission error and no status change When an authorized user revokes a waiver Then the thread returns to Open and the gate revalidates within 1 second
Conversation Layer Surfacing Remaining Open Items
Given there are in-scope Open threads When the user clicks "View items" from the gate banner Then the conversation panel opens filtered to in-scope Open threads And each listed item includes a link that navigates to the exact drawing/markup location And resolving or waiving an item from the panel updates the gate count and enablement state within 1 second
Continuous Revalidation and Audit Snapshot on Approval
Given the gate shows 0 remaining in-scope Open threads When a new comment is added to an in-scope drawing or markup before advancing Then the gate revalidates automatically within 2 seconds and blocks advancement until the new thread is Resolved or Waived When the stage advances successfully Then the system stores an immutable audit snapshot including counts of Open/Resolved/Waived/Archived threads, resolver identities, waiver rationales, selected scope, and timestamp And the snapshot is retrievable from the stage history view
Version Lock & Snapshot Freeze
"As a project lead, I want the current working set to be locked and snapshotted at gate pass so that approvals are tied to a consistent, immutable version."
Description

Locks the exact versions of all in-scope drawings and markups at the moment of gate pass, producing an immutable, read-only snapshot with checksums and metadata (author, timestamp, stage). Prevents further edits to locked assets, links the snapshot to the approval request, and supports roll-back to the last frozen set if the next stage is rejected. Ensures consistency between what was validated and what is approved, eliminating version drift.
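The checksum manifest and integrity check above can be sketched with standard SHA-256 digests. The asset paths and manifest shape here are illustrative only:

```python
import hashlib

def snapshot_assets(assets: dict) -> dict:
    """Freeze a checksum manifest: asset path -> SHA-256 hex digest of bytes."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in assets.items()}

def verify_snapshot(assets: dict, manifest: dict) -> bool:
    """Integrity check on retrieval: every recomputed digest must match the
    stored one exactly; any drift means the snapshot must be rejected."""
    return snapshot_assets(assets) == manifest
```

In the spec's terms, a mismatch here is what triggers the SECURITY_INTEGRITY_FAIL log entry and blocks access to the snapshot.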

Acceptance Criteria
Atomic Snapshot at Gate Pass
Given a project with defined in-scope drawings and markups at Stage S When a user with required permissions triggers Gate Pass for Stage S Then the system creates an immutable snapshot capturing the exact current versions of all in-scope drawings and markups And no out-of-scope assets are included in the snapshot And the snapshot operation completes atomically (all-or-nothing) within 3 seconds for up to 500 assets And the snapshot is assigned a unique Snapshot ID
Immutable Lock on In-Scope Assets
Given a snapshot exists for Stage S When any user attempts to create, edit, or delete a locked drawing or markup from that snapshot Then the action is prevented and the UI displays "Locked by Snapshot [Snapshot ID]" And API requests receive HTTP 423 LOCKED with error code SNAPSHOT_LOCKED And the asset remains unchanged in storage
Snapshot Metadata and Checksums
Given a snapshot is created When the snapshot is persisted Then the snapshot metadata includes: Snapshot ID, author (user ID), UTC timestamp to millisecond precision, stage identifier, and asset count And each asset in the snapshot records its source version ID and SHA-256 checksum And the metadata is retrievable via UI and via API endpoint GET /snapshots/{id}
Integrity Verification on Retrieval
Given a snapshot exists When the system performs an integrity check during snapshot retrieval or download Then recomputed checksums for every asset match the stored SHA-256 values And if any mismatch is detected, the system blocks access and logs SECURITY_INTEGRITY_FAIL with Snapshot ID and asset path And the user is shown "Snapshot integrity check failed" with a support reference code
Snapshot Linked to Approval Request
Given a snapshot exists for Stage S When the project lead sends an approval request for Stage S Then the approval request record stores the Snapshot ID and stage identifier And approvers are presented a read-only viewer bound to that Snapshot ID And one-click approval writes the same Snapshot ID into the approval decision record
Rollback to Last Frozen Set on Rejection
Given the next stage approval is rejected When an authorized user selects "Rollback to Frozen Set" Then the working set of in-scope assets is restored exactly to the snapshot contents (versions and markups) And any edits created after the snapshot are not present in the restored working set And the system logs a ROLLBACK event with Snapshot ID, actor, timestamp, and stage
Concurrency Handling at Gate Pass
Given users are concurrently editing in-scope assets When Gate Pass is initiated Then the snapshot includes only the last saved versions committed before the snapshot lock is applied And any save attempts after lock start are rejected with "Asset locked by snapshot" and API HTTP 423 And the snapshot contents are deterministic and identical across repeated retrievals
Cross-File Conflict Detection
"As a drafter, I want conflicts across drawings and markups identified before advancing so that I can resolve them and avoid downstream rework."
Description

Detects and reports conflicts before advancement, including unmerged parallel markups, divergent drawing branches, stale references between sheets, and mismatched sheet indices. Builds a lightweight dependency graph across the sheet set to identify impacts, highlights conflicting elements in context, and requires resolution or authorized waiver. Integrates with markup/version history to suggest safe merges or rebase actions.
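Divergent-branch detection reduces to an ancestry test on the version graph: two heads have diverged when neither is an ancestor of the other. A sketch assuming a simple single-parent history (real drawing histories may include merges, which would need a full DAG walk):

```python
def is_ancestor(parents: dict, a: str, b: str) -> bool:
    """True if version a is an ancestor of (or equal to) version b,
    given a mapping of version id -> parent id (None at the root)."""
    node = b
    while node is not None:
        if node == a:
            return True
        node = parents.get(node)
    return False

def diverged(parents: dict, head1: str, head2: str) -> bool:
    # Divergence: the heads sit on branches that must be merged or rebased.
    return not (is_ancestor(parents, head1, head2)
                or is_ancestor(parents, head2, head1))
```

For example, with v2a and v2b both branched from v1, the heads v3a (on the v2a branch) and v2b are divergent, while v1 and v3a are not.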

Acceptance Criteria
Gate Block on Unmerged Parallel Markups
Given a sheet has two or more open markup threads on parallel branches within the current stage When the user initiates Advance Stage Then the Gatekeeper halts advancement and lists each unmerged thread with branch name, author, last-updated timestamp, and affected element count And the conflicting elements are highlighted on-canvas with distinct colors per branch and are navigable from the list And per thread the user is offered actions: Review Diff, Merge, Request Waiver And the Gatekeeper panel displays a blocking badge with the total count of unmerged threads And the conflict detection completes in ≤ 3 seconds for projects up to 100 sheets and 50 parallel markup threads
Divergent Branch Detection and Safe Merge Suggestions
Given a drawing has branched versions whose heads are not ancestors of one another When the Gatekeeper check runs Then the system identifies divergence pairs and displays the minimal merge path for each pair And a suggested Safe Merge or Rebase action is generated with impacted elements list, conflict risk score (0–100), and estimated steps And a one-click Dry-Run Preview shows the diff without modifying the live branch And if the user accepts the suggestion, the merge/rebase executes or is queued without advancing the stage and updates branch tips And the detection and suggestion generation complete in ≤ 4 seconds for up to 20 branches per project
Stale Cross-Sheet Reference Detection
Given Sheet A contains a reference (viewport, callout, or annotation link) to Sheet B at version v And Sheet B has a newer unmerged version v+1 or higher When the Gatekeeper check runs Then the system flags a stale reference with Sheet A element ID, current target version, latest available version, and summary of changes affecting the reference And the referenced regions on both sheets are highlighted and navigable And the user is offered actions: Update Reference to latest, Open Compare, Request Waiver And advancement is blocked until all stale references are updated or explicitly waived by an authorized role
Mismatched Sheet Index Validation
Given the project maintains a Sheet Index registry with sheet numbers, titles, and sequence When the Gatekeeper check runs Then the system detects and lists mismatches including missing sheets, extras not in index, duplicate sheet numbers, title mismatches, and sequence gaps And each mismatch includes a suggested fix: Add to Index, Remove from Set, Renumber, or Rename with one-click actions And advancement is blocked until mismatches are resolved or waived by an authorized role And the check completes in ≤ 2 seconds for up to 500 indexed sheets
Dependency Graph Impact Visualization
Given a project sheet set with cross-sheet references, markups, and version links When the Gatekeeper builds the lightweight dependency graph Then it creates nodes for sheets, markups, and versions and edges for references, depends_on, and supersedes relationships And for each detected conflict the UI shows the conflict’s 2-hop neighborhood and counts of impacted nodes And clicking an impacted node pans/zooms the canvas to the element and filters the conflict list accordingly And graph build time is ≤ 1 second for up to 200 nodes and 600 edges
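The 2-hop neighborhood shown per conflict is a depth-limited breadth-first walk over the dependency graph. A minimal sketch, assuming an undirected adjacency-map representation of the sheet/markup/version graph (node names hypothetical):

```python
from collections import deque

def two_hop_neighborhood(graph, start):
    """Breadth-first walk, stopping expansion at depth 2, over an adjacency map."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if depth[node] == 2:
            continue  # nodes at depth 2 are included but not expanded further
        for nbr in graph.get(node, []):
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    return set(depth)

# Toy graph: a conflict node touching sheetA, which references sheetB, which references sheetC.
graph = {
    "conflict": ["sheetA"],
    "sheetA": ["conflict", "sheetB"],
    "sheetB": ["sheetA", "sheetC"],
    "sheetC": ["sheetB"],
}
```

The returned set (here excluding `sheetC`, which sits three hops out) is what the UI would highlight and count as impacted nodes.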
Authorized Waiver Workflow and Audit Trail
Given conflicts are present and the user has Project Lead or higher role When the user requests a waiver for selected conflicts Then the system requires reason text (min 10 characters), scope (specific conflicts), and duration (expires or indefinite) And an immutable audit entry is created with timestamp, user, conflicts waived, reason, and stage And advancement proceeds only for conflicts covered by active waivers; non-waived conflicts remain blocking And waived conflicts are badged in the Gatekeeper panel and on the stage header with a link to the audit log And waivers auto-clear when the underlying conflicts are resolved; audit entries remain read-only
Zero-Conflict Pass-Through and SLA
Given a project has no unmerged markups, no divergent branches, no stale references, and a consistent sheet index When the user initiates Advance Stage Then the Gatekeeper reports zero conflicts with a green Pass state and allows advancement without prompts And the complete Gatekeeper cycle time is ≤ 1.5 seconds for projects up to 200 sheets and 5 active branches And a pass result is recorded in the audit log with a hash of the dependency graph for traceability
Configurable Gate Rules & Waivers
"As an administrator, I want to configure which gate checks are mandatory and who can grant waivers so that the process aligns with our firm’s standards and risk tolerance."
Description

Provides project-level configuration of gate checks, severities (block vs warn), and rule templates by project type. Supports role-based waiver workflows with mandatory reason codes, optional attachments (evidence), time-bound waivers, and change history. Allows administrators to tailor Gatekeeper behavior to firm standards while preserving flexibility for edge cases without compromising auditability.

Acceptance Criteria
Admin Defines Project-Type Rule Templates
Given I am a Firm Admin on Gate Rules > Templates When I create a template named "Residential v1" for project type "Residential" with rules and severities (e.g., All comments resolved = Block; Current version locked = Block; Conflict checks clean = Warn) Then the template is saved and listed for the specified project type And it is selectable during project creation And its rules and severities are persisted and retrievable via API
Auto-Apply Template at Project Creation
Given I create a project of type "Residential" and select template "Residential v1" When the project is created Then the project's Gatekeeper rule set matches the template's rules and severities And Gatekeeper evaluations use these rules for advance checks And no extra rules outside the selected template are active by default
Project-Level Overrides with Audit
Given I am a Project Admin on Project Settings > Gate Rules When I change a rule's severity (e.g., Warn → Block) and disable another rule Then the project-specific rule set reflects the changes immediately And Warn severities never block advancement; Block severities block advancement when failing And disabled rules do not affect gate advancement And an audit entry records user, timestamp, and before/after values for each change
Request Waiver on Blocking Failure
Given a Gatekeeper check fails on rule "All comments resolved" with severity Block and my role can request waivers When I submit a waiver with a mandatory reason code, optional attachment(s), and an expiration date/time Then the waiver status becomes Pending and is linked to the specific rule and project And the gate remains blocked while the waiver is Pending And submitting without a reason code is rejected with a validation error And attachments are stored and retrievable; unsupported file types are rejected with a clear error
Approve/Reject Waiver with Role Controls
Given a waiver request is Pending and my role permits waiver approvals and I am not the requester When I approve the waiver and enter an approval comment Then the waiver status becomes Approved with the defined expiration And Gatekeeper treats the failing rule as waived (does not block) until expiration And the approval is recorded in change history with approver, timestamp, and comment When I reject the waiver Then the waiver status becomes Rejected and the gate remains blocked And the rejection is recorded with approver, timestamp, and reason
Waiver Expiration and Enforcement
Given a waiver for rule "All comments resolved" is Approved with expiration 2025-10-15T17:00:00Z When the expiration time passes Then the waiver automatically transitions to Expired And the next Gatekeeper evaluation enforces the rule's configured severity (e.g., Block) And an attempt to advance after expiration is prevented with message "Waiver expired" and an audit entry is created
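The expiration transition can be evaluated lazily at each Gatekeeper run rather than by a timer. A minimal sketch, assuming expirations are stored as UTC ISO 8601 strings as in the criterion above (the function name is hypothetical):

```python
from datetime import datetime, timezone

def waiver_state(status, expires_at_utc, now=None):
    """Return the effective waiver state, expiring approved waivers past their deadline."""
    now = now or datetime.now(timezone.utc)
    if status == "Approved" and expires_at_utc is not None:
        # fromisoformat in older Python versions does not accept a trailing "Z"
        expires = datetime.fromisoformat(expires_at_utc.replace("Z", "+00:00"))
        if now >= expires:
            return "Expired"
    return status
```

Evaluating on read keeps enforcement correct even if a background job is delayed: the next gate evaluation sees "Expired" and re-applies the rule's configured severity.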
Change History Visibility and Export
Given I am a Project Admin on Gatekeeper > Change History When I filter by event type (Rule/Waiver) and date range and export Then I see an immutable, chronological list including actor, timestamp (UTC), entity, action (Create/Update/Approve/Reject/Expire), and before/after values And the list contains all relevant changes for the project And the export downloads as a CSV containing the same columns
Gate Results Panel & Remediation Actions
"As a project contributor, I want a clear checklist of gate failures with direct actions to fix them so that I can quickly remediate and proceed."
Description

A dedicated UI panel that presents gate outcomes with clear pass/fail/warn states, grouped by rule, with inline links to the exact items to fix (e.g., unresolved thread, conflicting markup). Includes one-click re-run after remediation, exportable reports for stakeholders, and accessibility support (keyboard navigation, screen reader labels). Reduces friction by turning failures into guided, actionable steps.

Acceptance Criteria
View Gate Results grouped and summarized
Given a completed gate run with at least one Pass, one Warn, and one Fail result When the user opens the Gate Results Panel Then the panel displays results grouped by rule with the rule name, state badge (Pass/Warn/Fail), and item counts per state And failing groups are expanded by default and passing-only groups are collapsed And groups are ordered by severity: Fail, then Warn, then Pass And the panel header shows the last run timestamp and the user who executed the run
Navigate to failing items via inline links
Given a rule with one or more failing items (e.g., unresolved comment threads, conflicting markups) When the user clicks the inline link for an item in that rule Then the app navigates to the exact drawing or thread context, highlights the item, and sets keyboard focus to it And a breadcrumb or Back to results control returns the user to the Gate Results Panel at the same scroll and expansion state And the linked item count decreases only after the item is actually resolved and a re-run confirms the fix
Re-run gate checks after remediation
Given at least one previously failing or warning result and the user has made fixes When the user clicks Re-run Checks in the Gate Results Panel Then validation re-executes without a full page reload and shows an in-progress state with controls disabled And upon completion the results refresh, including updated states, counts, and last run timestamp And if all rules pass the panel displays All checks passed and the Advance to next rung action becomes enabled And if any rule still fails the Re-run Checks action is re-enabled and failure details remain visible
Export gate results for stakeholders
Given a completed gate run is visible in the Gate Results Panel When the user selects Export and chooses PDF or CSV Then a file is downloaded with a name including project, rung, and timestamp And the export includes rule name, state (Pass/Warn/Fail), counts, and a list of failing/warning items with identifiers and links And the export header includes project name, rung name, executor, and run timestamp And exporting does not change the on-screen state of the panel
Keyboard navigation and screen reader support
Given the Gate Results Panel is open When the user navigates using only the keyboard Then all interactive elements are reachable in a logical order with a visible focus indicator and can be activated via Enter/Space And rule groups support Arrow keys to expand/collapse and announce state changes via ARIA attributes And screen readers announce each rule with its name, state, and item counts, and announce re-run progress and completion via a live region And all state badges meet WCAG 2.1 AA contrast requirements
Error and timeout handling during validations
Given the validation service returns an error or times out during a run When the Gate Results Panel receives the error Then the panel displays a descriptive, non-blocking error message per affected rule and provides a Retry action And partial results (from successful rules) remain visible And the Re-run Checks action remains available and re-attempts the failed validations And the export indicates any rules with errors as Error with no item list
Gatekeeper Audit Trail & Evidence Export
"As a compliance stakeholder, I want a complete audit trail and exportable evidence of gate results so that we can demonstrate process adherence to clients and regulators."
Description

Creates a tamper-evident audit log capturing who initiated the gate, rule versions used, results, waivers (with approver and reason), and the snapshot references locked at pass time. Supports retention policies, time-stamped signatures, and export to PDF/JSON for client records or compliance. Enables post-mortems and external audits by providing verifiable evidence of due diligence before approvals.

Acceptance Criteria
Gate Pass Event Logged With Complete Fields
Given a user initiates a Gatekeeper advance and all validations pass When the gate transitions to Passed Then a single audit record is created within 1 second containing: gate_id, project_id, stage_id, initiator_user_id, initiator_name, initiated_at_utc (ISO 8601), rule_set_id, rule_set_version, rule_results (per-check id, name, outcome, duration), decision=pass, environment (app_version, server_region), and locked_snapshot_ids (drawing_version_ids, conversation_snapshot_ids). And the record includes a monotonically increasing sequence number for the project and a server-generated event_id (UUIDv4). And the record persists durably (acknowledged by storage) before the UI shows "Passed".
Tamper-Evident Chain and Verification
Given a sequence of audit records exists for a project When integrity verification recomputes each record's content_hash and validates the prev_hash link Then every record validates (content_hash matches stored hash) and prev_hash of record N equals content_hash of record N-1. And each record includes a server-side digital signature (key_id) over content_hash and critical fields; verification with the published public key succeeds. And if any byte of a stored record is altered Then verification fails and returns the id of the first invalid record within 5 seconds for up to 10,000 records.
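The content-hash/prev-hash chain described above is a standard hash-chaining pattern. A minimal sketch (omitting the digital-signature layer; field and function names are assumptions, not PlanPulse's schema):

```python
import hashlib
import json

def content_hash(record):
    """Canonical SHA-256 over the record body, excluding the chain fields themselves."""
    body = {k: v for k, v in record.items() if k not in ("content_hash", "prev_hash")}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()

def make_record(event_id, payload, prev_hash):
    """Append-style record construction: hash the body, then link to the predecessor."""
    rec = {"event_id": event_id, "payload": payload, "prev_hash": prev_hash}
    rec["content_hash"] = content_hash(rec)
    return rec

def verify_chain(records):
    """Return the event_id of the first invalid record, or None if the chain is intact."""
    prev = None
    for rec in records:
        if rec["content_hash"] != content_hash(rec) or rec["prev_hash"] != prev:
            return rec["event_id"]
        prev = rec["content_hash"]
    return None
```

Altering any byte of a stored record changes its recomputed hash, so verification pinpoints the first invalid record exactly as the criterion requires; a server-side signature over `content_hash` would additionally prevent an attacker from rewriting the whole chain.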
Evidence Export to JSON and PDF
Given an authorized user (Project Lead or above) requests an export for a specific gate_id When format=json Then the system returns a JSON file containing the complete audit record, embedded snapshot metadata, and verification artifacts (content_hash, prev_hash, signature, key_id) with field parity to internal storage. When format=pdf Then the system returns a PDF that renders the same fields as the JSON (1:1 field parity) and includes a QR code encoding a verification URL and the content_hash. And both exports include generated_at_utc (ISO 8601), exporter_user_id, exporter_name, and file names: Audit_{gate_id}_{generated_at_utc}.json|.pdf. And export completes within 10 seconds and is downloadable for at least 1 hour via a pre-signed URL.
Retention Policy and Legal Hold Enforcement
Given a workspace retention policy for Gatekeeper audit records is configured to 7 years When a record’s age exceeds the policy and no legal_hold is set Then the record and associated artifacts are permanently purged within 24 hours, and a purge_tombstone (record_id, purged_at_utc, policy_reference) is written to the admin audit log. And when legal_hold=true on a project or record Then the record is not purged until the hold is removed, regardless of age. And purge events are included in integrity verification and are tamper-evident. And attempts to export a purged record return HTTP 410 Gone with policy_reference and guidance.
Waiver Capture and Approval Binding
Given one or more Gatekeeper checks fail When an authorized approver applies a waiver and attempts to pass the gate Then the gate cannot pass unless each waived check includes: check_id, check_name, failure_reason, approver_user_id, approver_role, approver_reason_text (min 10 chars), approved_at_utc (ISO 8601), and approver_signature. And the audit record sets decision=pass_with_waivers and lists each waived check distinctly from passed checks. And exports clearly mark waived checks and include approver identity and signature artifacts.
Snapshot Reference Locking at Pass Time
Given a gate passes (with or without waivers) When the audit record is written Then drawing and conversation snapshots are recorded with immutable identifiers (file_id, version_id) and content hashes, and these references are immutable thereafter. And subsequent edits create new versions that are not retroactively linked to the passed audit record. And exports include stable URLs to the exact snapshot assets or a checksum-only placeholder if assets were deleted per retention policy.

OOO Delegation

Respects out-of-office windows by auto-assigning temporary delegates with full context transfer. Keeps approvals flowing during vacations and travel without losing accountability or history.

Requirements

OOO Scheduling & Policy Controls
"As a project lead, I want to schedule my OOO periods with clear start and end times so that PlanPulse automatically routes approvals while I’m away."
Description

Provide user-facing controls to define out-of-office windows (start/end, timezone, partial-day), recurring patterns, manual toggle, and per-project overrides, with an organization-level policy engine to enforce rules such as maximum duration, mandatory delegation, and blackout dates. On activation/deactivation, update the user’s presence state, pause/resume personal approval notifications, and display OOO badges on avatars and assignment pickers. Integrate with PlanPulse’s authorization layer to trigger delegation state transitions in real time across all sessions and devices.

Acceptance Criteria
Create timezone-aware partial-day OOO window
Given a user with profile timezone "America/Los_Angeles" And current time is 2025-10-15 12:50 local When the user schedules OOO from 2025-10-15 13:00 to 2025-10-15 17:00 local and saves Then the system stores start/end in UTC with correct timezone conversion And the presence state remains Available until 13:00 local When the clock reaches 13:00 local Then the presence state updates to OOO And personal approval notifications are paused And an OOO badge is displayed on the user's avatar and in assignment pickers When the clock reaches 17:00 local Then the presence state returns to Available And personal approval notifications resume within 60 seconds
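The UTC conversion in this criterion is where timezone bugs typically hide (DST boundaries in particular). A sketch of the storage-side conversion using the standard library's `zoneinfo`; function names are illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def ooo_window_utc(date, start_hm, end_hm, tz_name):
    """Convert a local partial-day OOO window to UTC start/end instants."""
    tz = ZoneInfo(tz_name)
    # With zoneinfo, attaching tzinfo via replace() is correct (unlike legacy pytz).
    start = datetime.fromisoformat(f"{date}T{start_hm}").replace(tzinfo=tz)
    end = datetime.fromisoformat(f"{date}T{end_hm}").replace(tzinfo=tz)
    return start.astimezone(ZoneInfo("UTC")), end.astimezone(ZoneInfo("UTC"))

def is_ooo(now_utc, start_utc, end_utc):
    """Half-open interval: OOO at the start instant, Available again at the end instant."""
    return start_utc <= now_utc < end_utc
```

For the example in the criterion, 13:00–17:00 on 2025-10-15 in America/Los_Angeles (PDT, UTC-7) maps to 20:00 UTC through 00:00 UTC the next day; activation and deactivation checks then compare against a single UTC clock.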
Set weekly recurring OOO with per-occurrence exceptions
Given a user defines a recurring OOO rule: every Friday 12:00–18:00 in their profile timezone starting 2025-10-03 When the next Friday 12:00 local occurs Then the presence state updates to OOO and personal approval notifications pause until 18:00 local When the user adds an exception to skip 2025-10-24 and modifies 2025-11-07 to 10:00–16:00 Then no OOO activation occurs on 2025-10-24 And on 2025-11-07 the presence state updates to OOO at 10:00 and returns to Available at 16:00 with notifications paused/resumed accordingly And no duplicate or overlapping OOO windows are created for the same time span
Manual OOO toggle immediate activation/deactivation
Given a user is currently Available When the user toggles OOO ON manually Then the presence state becomes OOO within 2 seconds across all active sessions and devices And personal approval notifications pause immediately When the user toggles OOO OFF manually Then the presence state returns to Available within 2 seconds across all active sessions and devices And personal approval notifications resume immediately
Per-project overrides and delegate routing
Given a user sets default OOO delegate D1 And sets a per-project override for Project Alpha to delegate D2 And sets a per-project override for Project Beta to pause approvals (no auto-delegation) When the user's OOO becomes active Then new approval requests in Project Alpha are auto-assigned to D2 with full context transfer And new approval requests in projects without overrides are auto-assigned to D1 with full context transfer And new approval requests in Project Beta are held with a status indicating the user is OOO and are not auto-assigned And assignment pickers display the user's OOO badge and indicate the active delegate or paused status per project
Org policy enforcement: max duration, mandatory delegate, blackout dates
Given org policy sets a maximum continuous OOO duration of 14 days, requires a delegate, and defines blackout dates from 2025-12-24 to 2025-12-26 When a user attempts to save an OOO window longer than 14 days Then the save is blocked with a clear validation error referencing the 14-day limit When a user attempts to save an OOO window overlapping any blackout date Then the save is blocked with a validation error listing the conflicting dates When a user attempts to save OOO without selecting a delegate Then the save is blocked with a validation error on the delegate field And no invalid OOO configuration is persisted
OOO badges and assignment picker indicators
Given a user's OOO is active When viewing team avatars, comments, and assignment picker Then the user's avatar shows an OOO badge with tooltip "OOO until {local end date/time}" And the badge is exposed to assistive technologies with equivalent text And in the assignment picker, the user is visually marked OOO and a "Delegated to {delegate}" label appears when delegation is active
Real-time delegation via authorization layer
Given user U activates OOO with delegate D When the OOO activation is confirmed Then the authorization layer grants D approval permissions on behalf of U across eligible projects within 3 seconds And all active sessions for U and D receive real-time events updating presence, delegation mappings, and assignment picker data within 3 seconds And any pending approval tasks assigned to U that match delegation rules are re-routed to D with preserved history When U deactivates OOO Then the authorization layer revokes the temporary permissions and restores original mappings within 3 seconds And no duplicate delegation records are created if activation events are replayed (idempotent)
Multi-level Delegate Assignment & Capacity
"As an architect, I want to appoint primary and backup delegates for my projects so that approvals continue even if the first delegate is unavailable."
Description

Allow users and admins to assign primary and backup delegates at multiple scopes (workspace, project, client, drawing set) with priority ordering, acceptance requirements, and conflict detection (e.g., delegate is OOO or over capacity). Include capacity controls (concurrent approvals, daily limits) and skill tags to guide routing by project type. Prevent circular delegation, surface an assignment matrix in admin settings, and integrate with the routing engine to select the best available delegate at runtime.

Acceptance Criteria
Multi-scope Primary/Backup Delegation with Acceptance and Priority
Given an admin configures delegates at multiple scopes (workspace, project, client, drawing set) with explicit priority order and an acceptance-required policy with a configured acceptance window And a new approval task is created under a scope that has configured delegates When the routing engine evaluates assignment Then the most specific scope’s configuration takes precedence over broader scopes And the highest-priority eligible delegate at that scope is notified with full context and an accept/decline action And if the delegate accepts within the configured window, the task is assigned and the acceptance event is audit-logged And if the delegate declines or times out, the next delegate in priority order is routed And if no delegates remain at that scope, routing falls back to the next broader scope in order
Conflict Detection for OOO and Capacity During Routing
Given a delegate is marked OOO for the task time window or has reached any configured capacity limit When the routing engine considers that delegate for assignment Then the delegate is skipped as ineligible with the reason recorded (OOO or Capacity) And the next eligible delegate is evaluated without notifying the ineligible delegate And the conflict is surfaced in the UI and audit log for the attempted assignment
Capacity Limits Enforced: Concurrent and Daily
Given a delegate has a concurrent approvals limit of 3 and a daily assignment limit of 10 And the delegate currently has 3 active approvals and 9 assignments today When two new approval tasks are created that would otherwise target that delegate Then any task that would exceed either limit is not assigned to the delegate and is routed to the next eligible delegate per priority and scope rules And if capacity frees up (an approval completes or is reassigned), subsequent routing decisions can assign to the delegate again And capacity evaluations occur at assignment time and are reflected in audit logs
Skill-Tag–Aware Routing and Tie-Breakers
Given an approval task requires skill tags ["Residential","MEP"] and candidate delegates have defined skill tags When the routing engine ranks candidates Then delegates whose tags fully cover the required tags are preferred over partial or non-matches And among full matches, priority order decides; ties break by lowest number of active approvals, then by alphabetical name for determinism And if no candidates have any matching tags, routing proceeds using priority and capacity only, and the lack of skill match is logged
Circular Delegation Prevention Across Scopes
Given existing delegate chains include A → B → C across any scopes When an admin attempts to add or modify an assignment that would create a cycle (e.g., C → A) Then the system blocks saving the change, shows an error indicating a circular delegation, and does not persist any part of the change And the error message includes the detected cycle path for remediation
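Cycle prevention here reduces to a reachability check before persisting the new edge: adding C → A closes a loop exactly when A can already reach C. A sketch that also recovers the cycle path for the error message (representation and names are assumptions):

```python
def find_cycle_path(edges, new_from, new_to):
    """If adding new_from -> new_to would close a delegation cycle, return the
    cycle path for the error message; otherwise return None.

    edges: adjacency map of existing delegations, e.g. {"A": ["B"], "B": ["C"]}.
    """
    # Depth-first search from new_to through existing edges, looking for new_from.
    stack = [(new_to, [new_from, new_to])]
    seen = set()
    while stack:
        node, path = stack.pop()
        if node == new_from:
            return path  # the proposed edge plus this path forms the cycle
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges.get(node, []):
            stack.append((nxt, path + [nxt]))
    return None
```

Running the check inside the same transaction that saves the assignment ensures no part of a cycle-creating change is persisted, matching the criterion's atomicity requirement.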
Delegation Assignment Matrix in Admin Settings
Given an admin opens the Delegation Assignment Matrix in settings When the matrix loads Then it lists for each scope target (workspace, project, client, drawing set): primary and backups with explicit priority, acceptance requirement flag, capacity limits, OOO status, and skill tags And entries with conflicts (OOO, capacity reached, missing skills, circular risk) are highlighted with inline indicators and tooltips And the matrix supports filtering by scope, user, tag, and conflict state And each entry links to an audit log detailing configuration changes and routing outcomes
Runtime Best-Available Delegate Selection and Decision Logging
Given multiple potential delegates exist across scopes for a new approval task When the routing engine runs at task creation time Then it selects the best available delegate by applying rules in order: scope specificity, skill match quality (full > partial > none), configured priority, availability (not OOO and within capacity), least active approvals, alphabetical tie-break And it emits a routing decision record including considered candidates, acceptance/rejection reasons, and the final assignee And if no eligible delegate is found, the system emits a routing failure event, leaves the task unassigned, and notifies admins for intervention
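The rule order in this criterion maps naturally onto an availability filter followed by a lexicographic sort key. A sketch under assumed candidate fields (`scope_rank` lower = more specific; all names are illustrative, not PlanPulse's data model):

```python
def rank_candidates(candidates, required_tags):
    """Rank eligible delegates per the spec's rule order:
    scope specificity, skill match quality (full > partial > none),
    configured priority, least active approvals, alphabetical tie-break.
    Availability (not OOO, within capacity) acts as a filter, not a sort key.
    """
    eligible = [c for c in candidates
                if not c["is_ooo"] and c["active"] < c["capacity"]]
    required = set(required_tags)

    def match_quality(c):
        overlap = len(required & set(c["tags"]))
        if overlap == len(required):
            return 0                      # full match
        return 1 if overlap else 2        # partial match, then none

    return sorted(eligible, key=lambda c: (
        c["scope_rank"], match_quality(c), c["priority"], c["active"], c["name"]))
```

Because tuple comparison is lexicographic, a full skill match at the same scope outranks a better configured priority with only a partial match, which is exactly the precedence the criterion specifies; an empty result corresponds to the routing-failure event.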
Context Transfer & Access Scoping
"As a delegate, I want all relevant project context and the right level of access so that I can make informed decisions without hunting for information."
Description

On delegation activation, automatically grant the delegate precise access to the owner’s active approval queue, latest drawing versions, markup history, client conversations, checklists, deadlines, and attachments. Provide granular scopes (view, comment, approve) per project, redact personal notes flagged as private, and generate a timestamped, read-only context snapshot for traceability. Synchronize permissions via PlanPulse RBAC and fully revoke temporary access upon deactivation.

Acceptance Criteria
OOO Activation Grants Scoped Access to Delegate
Given owner O has active projects P1 and P2 with pending approvals and related artifacts and assigns delegate D with a defined start and end window When the delegation activates Then within 30 seconds D is granted via RBAC access only to O’s active approval queue, latest drawing versions, markup history, client conversations, checklists, deadlines, and attachments for P1 and P2, and to no other projects or resources Then D receives new approval items routed to O during the active delegation window within 30 seconds of routing Then D cannot access archived items or superseded drawing versions except where linked from in-scope approval items
Per-Project Scope Enforcement (View/Comment/Approve)
Given O configures per-project scopes for D as {P1: approve, P2: comment} When D interacts with P1 and P2 resources Then in P1 D can view, comment, and approve; in P2 D can view and comment but approve attempts are blocked with HTTP 403 and an in-app "Insufficient permission" message Then all permitted and denied actions are logged with user=D, principalOnBehalf=O, projectId, timestamp, and action Then D’s effective permissions equal the union of D’s permanent permissions and the temporary delegation scopes, not exceeding O’s maximum scope per project
Private Notes Redaction During Context Transfer
Given O has notes or annotations flagged as Private in delegated projects When D views project artifacts, conversations, and markups under delegation Then all content flagged Private is hidden and replaced with a redaction placeholder; direct API requests for private content return HTTP 403 Then non-private notes remain visible and searchable to D according to granted scope
Timestamped Read-Only Context Snapshot Creation
Given delegation from O to D activates When activation completes Then the system generates a read-only context snapshot capturing: timestamp, O and D identifiers, delegation window, project IDs, approval queue state, latest drawing version IDs/hashes, visible markup history, visible conversations, checklists, deadlines, and attachment references Then the snapshot is immutable (write attempts return HTTP 405) and is accessible to O and org admins; it is linked to the delegation audit record and retrievable by delegation ID
Approval Attribution and Audit Trail Under Delegation
Given D has approve scope on project P under an active delegation from O When D approves, rejects, or comments on an approval item Then the record stores actor=D and principalOnBehalf=O with timestamp, itemId, projectId, version hash, and action Then the audit log contains an entry for each access and action by D under delegation, filterable by delegation ID, and visible to O and admins
RBAC Synchronization Integrity
Given a delegation is activated or its scopes are updated When RBAC synchronization runs Then temporary roles and permissions are created or updated idempotently for D per project and scope and propagated to caches within 30 seconds Then no temporary grants exist outside the delegation window; periodic reconciliation removes orphaned grants and records a remediation audit event
Delegation Deactivation Fully Revokes Access
Given a delegation window ends or O manually deactivates it When deactivation occurs Then all temporary permissions for D are revoked within 30 seconds and attempts to access delegated resources return HTTP 403 via UI and API Then any active sessions or tokens carrying delegation scopes are invalidated; deep links created under delegation return HTTP 403 Then D retains only pre-existing permanent permissions; the context snapshot remains accessible to O and admins
Approval Proxy & Immutable Audit Trail
"As a compliance-conscious firm owner, I want delegated approvals to be clearly attributed and tamper-proof so that accountability is preserved."
Description

Enable delegates to take approval actions explicitly labeled as “on behalf of” the owner across activity feeds, exports, and client-visible receipts. Capture an immutable audit record linking the action to the context snapshot, including actor, principal, timestamps, reason codes, and cryptographic hashes to detect tampering. Support reversal workflows that require owner sign-off after return, while preserving a complete, ordered chain of custody for compliance audits.

Acceptance Criteria
Proxy Approval Labeling Across Surfaces
Given an active OOO delegation window and a configured delegate for an owner When the delegate performs an approval action on an artifact (drawing, markup, spec) Then the action is labeled "Approved by <Delegate Name> on behalf of <Owner Name>" on the activity feed, artifact timeline, notifications, client-visible receipts, and CSV/PDF exports And the metadata displayed includes actor_id, principal_id, and approval timestamp (UTC) And the label and metadata are consistent across web and mobile surfaces
Immutable Audit Record with Cryptographic Chain
Given any approval action taken by a delegate on behalf of an owner When the action is saved Then an immutable audit record is created containing: action_id, actor_id, principal_id, action_type, reason_code, timestamps (created_at, recorded_at in UTC), context_snapshot_id, snapshot_hash, and previous_record_hash And any attempt to modify or delete the audit record via UI or API is rejected with a 409 and logged as a separate security event And a verification job can recompute hashes over the ordered chain and returns Pass for intact chains and Fail for any break
Client Receipt Proxy Disclosure
Given a client-visible receipt is generated for a delegated approval When the receipt is rendered (web view and downloadable PDF) Then it displays "Approved by <Delegate Name> on behalf of <Owner Name>" with the exact UTC timestamp and owner’s local timezone And it includes a Receipt ID, action_id, and a short hash (first 10 chars of snapshot_hash) for reference And the receipt content is read-only and matches the corresponding audit record fields
Post-Return Owner Review and Reversal
Given an owner’s OOO window has ended and proxy approvals exist during that window When the owner opens the Post-Return Review queue Then they can confirm or initiate reversal on any proxy approval And reversal requires a reason_code and creates a compensating action that reopens the item without deleting the original approval And the audit trail links the reversal to the original via original_action_id and preserves chronological order And notifications are sent to affected stakeholders on reversal And no reversal is finalized without explicit owner confirmation
Context Snapshot Binding to Approval
Given a delegated approval is executed on a specific drawing version and markup thread When the approval is recorded Then a context snapshot is captured or referenced, including drawing_version_id, markup_version_id(s), relevant comment IDs, and attachment checksums And snapshot_hash is computed over the serialized snapshot content and stored in the audit record And later changes to the drawing or markups do not alter the snapshot; the original snapshot remains accessible and renders identically And re-serializing the snapshot produces the same snapshot_hash
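A sketch of how snapshot_hash can satisfy the last criterion (re-serializing yields the same hash): canonical JSON with sorted keys makes the digest independent of dict insertion order. SHA-256 and the field values below are illustrative assumptions:

```python
import hashlib
import json

def snapshot_hash(snapshot: dict) -> str:
    """Canonical (sorted-key, whitespace-free) JSON makes the digest
    stable across re-serializations of the same snapshot content."""
    canonical = json.dumps(snapshot, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical snapshot content; real IDs come from the captured context.
snapshot = {
    "drawing_version_id": "dv-17",
    "markup_version_ids": ["mv-3", "mv-4"],
    "comment_ids": [101, 102],
    "attachment_checksums": {"plan.pdf": "9f2c0e"},
}
full = snapshot_hash(snapshot)
short = full[:10]  # the short hash shown on client receipts
```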
Delegation Validity and Access Control
Given delegation is configured with specific delegates, scopes, and a start/end time window When a user attempts an approval action on behalf of an owner Then the action is permitted only if the user is an active delegate, the action is within the time window, and within the delegated scope And a reason_code is required and validated against an allowed list And attempts outside the window, outside scope, or by non-delegates are blocked with a clear error and an audit security event is logged And early cancellation of delegation immediately blocks further proxy approvals and is reflected in subsequent error messages
Notifications, Consent & Escalation
"As a project manager, I want timely updates and automatic escalation when delegated approvals stall so that timelines are protected."
Description

Issue consent requests that delegates must accept or decline before delegation activates. Send targeted notifications on schedule creation, activation, actions taken, escalation, and deactivation to the owner, delegates, and key stakeholders via in-app, email, and Slack, respecting quiet hours and user preferences. Apply SLA timers to delegated approvals and automatically escalate to backup delegates or project admins when items age beyond thresholds, including traceable escalation logs and deep links to the item.

Acceptance Criteria
Delegate Consent Required Before Activation
Given an OOO delegation schedule with start time T and delegate D When D has not accepted the consent request by T Then the delegation does not activate, the owner is notified within 60 seconds via preferred channels, and a manage-delegates deep link is provided Given the consent request is sent at schedule creation When D accepts before T Then the delegation activates exactly at T, and the acceptance is logged with timestamp, actor, and network fingerprint Given D declines at any time before T When the decline is recorded Then the delegation remains inactive, the owner is notified within 60 seconds with a deep link to select a replacement, and the decline is logged Given consent is pending When the owner cancels the schedule Then the consent request is invalidated and logged
Targeted Notifications Respect Quiet Hours and Preferences
Given an event E ∈ {schedule creation, activation, delegate action on an approval, escalation, deactivation} When E occurs Then recipients are determined as: owner, active delegate(s), relevant backup delegate(s) for escalations, and configured project stakeholders Given a recipient r has channel preferences (in-app, email, Slack) and quiet hours defined When E occurs Then deliver an in-app notification immediately and deliver email/Slack according to r's preferences outside quiet hours; if r is in quiet hours, queue email/Slack until quiet hours end; do not send duplicates across channels Given a notification is generated When it is delivered Then it includes event type, item name/ID, project, and a deep link to the item; in-app delivery occurs within 60 seconds of E, email/Slack within 5 minutes once outside quiet hours Given a user has opted out of a channel When E occurs Then no notification is sent via that channel while mandatory in-app notices still appear
SLA-based Escalation to Backup Delegates or Project Admins
Given a delegated approval with SLA threshold S (hours) from activation time T0 When elapsed business time since T0 ≥ S and the approval is still Pending Then the item escalates to the first available backup delegate; if none, notify project admins and mark the item as escalated Then an escalation log entry is written with timestamp, previous assignee, new assignee or admin group, SLA threshold, and reason "SLA breach", linked to the item and schedule Then owner, original delegate, new assignee(s), and configured stakeholders are notified per preferences with a deep link; notifications are sent within 60 seconds (in-app) and 5 minutes (email/Slack) outside quiet hours Given a subsequent SLA breach post first escalation When the next threshold is reached Then escalate to the next backup or to project admins, writing a new log entry for each escalation level Given the original delegate acts after escalation When they approve/decline Then the action is accepted and logged with a "post-escalation" flag
SLA Timer Semantics and Timezone Handling
Given a delegation activates at time T0 When starting the SLA timer Then the timer starts at T0 and measures elapsed time using the project's business calendar and timezone; if no calendar exists, the timer uses 24x7 elapsed time Given the schedule is edited prior to activation to new start time T0' When saved Then the SLA start time updates to T0' and the change is logged Given the schedule is deactivated or canceled When processed Then the SLA timer stops, pending escalations are canceled, and the stop is logged Given quiet hours are configured When the SLA timer runs Then quiet hours do not pause or reset SLA timers
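The timer semantics above can be sketched as follows; the 9:00–17:00 Monday–Friday calendar is a stand-in for the project's configured business calendar, and the fallback to 24x7 elapsed time matches the criterion for projects with no calendar:

```python
from datetime import datetime, timedelta, timezone

BUSINESS_START, BUSINESS_END = 9, 17  # assumed 9:00-17:00 calendar

def elapsed_business_hours(t0: datetime, now: datetime) -> float:
    """Elapsed hours counting only Mon-Fri business hours."""
    total = timedelta()
    cursor = t0
    while cursor < now:
        # End of the current calendar day (or `now`, whichever is first).
        day_end = min(now, (cursor + timedelta(days=1)).replace(
            hour=0, minute=0, second=0, microsecond=0))
        if cursor.weekday() < 5:  # Monday..Friday
            start = cursor.replace(hour=BUSINESS_START, minute=0,
                                   second=0, microsecond=0)
            end = cursor.replace(hour=BUSINESS_END, minute=0,
                                 second=0, microsecond=0)
            lo, hi = max(cursor, start), min(day_end, end)
            if hi > lo:
                total += hi - lo
        cursor = day_end
    return total.total_seconds() / 3600

def sla_breached(t0, now, threshold_hours, business_calendar=True):
    """No business calendar configured -> plain 24x7 elapsed time."""
    if business_calendar:
        elapsed = elapsed_business_hours(t0, now)
    else:
        elapsed = (now - t0).total_seconds() / 3600
    return elapsed >= threshold_hours
```

Note that, per the last criterion, quiet hours deliberately do not appear anywhere in this computation: they delay notifications, not the SLA clock.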
Escalation and Consent Audit Log with Deep Links
Given any of the following occurs: consent request sent, consent accepted, consent declined, schedule activated, approval action by delegate, escalation, schedule deactivated When the event is committed Then an immutable audit record is created with fields: event type, item ID, project ID, actor user ID, target user/group (if applicable), timestamp (UTC and local), and deep link URL Given audit records exist When viewed in the Project Activity Log UI or retrieved via API with filters (date range, item ID, actor) Then matching records are returned and each deep link opens the exact item with the approval context highlighted Given a deep link is opened on a supported browser When network latency is typical broadband Then the item view loads and focuses the referenced approval within 2 seconds
Delegation Lifecycle and Attribution
Given consent has been accepted before start time T When the clock reaches T Then the delegation activates and all new approval requests created during the window route to the active delegate; owner and delegate receive activation notifications Given the delegate takes an approval action during the active window When the action is saved Then the item history records "Delegate <X> on behalf of Owner <Y>" with timestamp and action details Given the delegation window ends at T_end or is manually deactivated When deactivation occurs Then no new approvals route to the delegate; owner and delegate receive deactivation notifications; future routing follows standard assignment rules
Calendar Integration with Privacy Safeguards
"As a busy architect, I want my OOO status to sync from my work calendar so that I don’t have to manage it twice."
Description

Integrate with Google Workspace and Microsoft 365 calendars using OAuth with least-privilege scopes to read only Out of Office events. Map events to OOO windows with timezone handling and conflict resolution against manual schedules, support per-user opt-in and admin-enforced policies, and suppress sensitive event fields (titles, attendees). Securely store refresh tokens, perform periodic sync and webhook-based updates, and display the current source of truth for each OOO period.

Acceptance Criteria
OAuth Least-Privilege Scopes for OOO Read-Only
- Given a user connects Google Workspace, When the OAuth consent screen is shown, Then only read-only calendar scopes required to detect Out of Office windows are requested, and no write scopes or non-calendar scopes are requested. - Given a user connects Microsoft 365, When the OAuth consent screen is shown, Then only read-only scopes required to detect Out of Office windows (e.g., mailbox settings or calendar OOO) are requested, and no write scopes or non-calendar scopes are requested. - Given consent is granted, When inspecting granted permissions, Then the app cannot access event titles, descriptions, or attendees via the granted scopes. - Given the integration is active, When an API call that requires a non-granted scope is attempted, Then the provider returns an authorization error and the app does not retry more than 3 times.
Timezone Normalization and DST-Safe OOO Mapping
- Given an OOO item in provider timezone T1 that spans a DST boundary, When synchronized, Then the OOO window is stored in UTC and renders correctly in the user's current timezone with no ±1 hour drift. - Given an all-day OOO block on the provider calendar, When synchronized, Then the app represents it as 00:00–23:59:59 local for each day and delegates during the entire interval. - Given a partial-day OOO window that crosses midnight, When synchronized, Then delegation starts and ends at the exact boundary times. - Given the user changes their timezone, When viewing existing OOO windows, Then only the display adjusts; stored UTC start/end are unchanged. - Given a Microsoft 365 automaticRepliesSetting with a scheduled window, When synchronized, Then an OOO window is created matching the start/end times; if state is alwaysOn, Then an open-ended OOO window is represented until manually ended. - Given a Google Calendar Out of Office event with recurrence, When synchronized, Then recurrences are expanded within the sync horizon and mapped to discrete OOO windows.
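The DST-safe mapping can be illustrated with Python's zoneinfo: wall-clock times are interpreted in the provider's IANA zone and stored as UTC, so the spring-forward hour falls out of the real elapsed duration rather than causing ±1 hour drift. The zone and dates below are illustrative:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_utc_window(start_local: datetime, end_local: datetime, tz_name: str):
    """Interpret provider-local wall times in their IANA zone; store UTC."""
    tz = ZoneInfo(tz_name)
    start_utc = start_local.replace(tzinfo=tz).astimezone(timezone.utc)
    end_utc = end_local.replace(tzinfo=tz).astimezone(timezone.utc)
    return start_utc, end_utc

# An OOO window spanning the US spring-forward boundary (2024-03-10):
start_utc, end_utc = to_utc_window(
    datetime(2024, 3, 9, 9, 0), datetime(2024, 3, 11, 9, 0),
    "America/New_York")
# Wall-clock span is 48 h, but only 47 real hours elapse across the jump.
duration_hours = (end_utc - start_utc).total_seconds() / 3600
```

Because the stored values are UTC, a later change to the user's display timezone only affects rendering, as the fourth criterion requires.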
Conflict Resolution and Source-of-Truth Display
- Rule: If a manual OOO schedule and a calendar-derived OOO window overlap, Then the admin policy determines precedence (Calendar wins or Manual wins) and only the winning window is active. - Rule: If windows do not overlap, Then the active OOO periods are the union of both sources without gaps. - Given a conflict is resolved, When the UI shows the active OOO period, Then it displays a badge with Source = Calendar | Manual and the policy applied, plus "Last sync <timestamp>". - Given a change in either source, When sync completes, Then the resolved state updates within 5 minutes for periodic sync or within 60 seconds for webhook-triggered sync, and an audit entry is recorded.
Per-User Opt-In vs Admin-Enforced Policies
- Given org policy is Opt-In, When a user connects a provider, Then OOO import is enabled only for that user and remains disabled for others until they opt in. - Given org policy is Enforced, When a user attempts to disable or skip connection, Then the action is blocked with an org policy message and an admin override link. - Given group-based enforcement is enabled, When a user is added to an enforced group, Then their integration status switches to Required within 10 minutes and a notification is sent. - Given a user disconnects under Opt-In, When processed, Then tokens are revoked and OOO import stops within 5 minutes; under Enforced, Then disconnect is blocked.
Sensitive Event Data Suppression
- Given OOO data is synchronized, When persisting records, Then only start/end timestamps, timezone, recurrence ID, provider, and OOO flag are stored; titles, descriptions, attendees, locations, and raw payloads are not stored. - Given logging is set to debug, When sync runs, Then no PII from provider payloads appears in logs; sensitive fields are redacted. - Given the UI displays OOO information, When a user views it, Then no event title or attendees are shown; only "Out of Office", timing, and source. - Given an external client calls the OOO API, When a response is returned, Then sensitive fields are absent from the schema and cannot be requested via parameters.
Secure Refresh Token Storage and Revocation Handling
- Given OAuth completes, When a refresh token is issued, Then it is encrypted at rest using a KMS-managed key and scoped access controls restrict decryption to the OOO service. - Given three consecutive token refresh failures due to invalid_grant or 401, When the next sync cycle runs, Then the integration is auto-paused, the user is notified, and tokens are purged within 24 hours. - Given a user clicks Disconnect, When processed, Then provider tokens are revoked via API, deleted from storage within 15 minutes, and an audit event is recorded including actor, timestamp, and token fingerprint. - Given a system restore from backup, When the service restarts, Then refresh tokens are not restored and users must re-consent.
Periodic Sync and Webhook Updates Reliability
- Given no webhook is configured or webhooks are delayed, When an OOO change occurs at the provider, Then the change is reflected in the app within 15 minutes. - Given webhooks are configured and healthy, When an OOO change occurs, Then the change is reflected in the app within 60 seconds at the 95th percentile; retries use exponential backoff on 5xx with a max retry window of 15 minutes. - Given an incoming webhook, When signature validation fails, Then the request is rejected with 401, no state changes occur, and the event is logged with correlation ID. - Given 10 consecutive webhook delivery failures, When the threshold is reached, Then the subscription is disabled and admins are alerted within 5 minutes.
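Signature validation and the capped backoff described above might look like this; the signing scheme (hex HMAC-SHA256 over the raw body) and the base delay are assumptions, since real providers differ:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Constant-time comparison of the provider signature against our HMAC;
    a failed check maps to the 401-and-log behavior in the criteria."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def backoff_schedule(base: float = 2.0, cap_minutes: float = 15.0):
    """Exponential retry delays (seconds), capped so the whole retry
    window stays within the 15-minute budget named in the criteria."""
    delays, total, delay = [], 0.0, base
    while total + delay <= cap_minutes * 60:
        delays.append(delay)
        total += delay
        delay *= 2
    return delays
```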
Return Handoff & Reconciliation
"As a returning approver, I want a clear handoff and controls to finalize delegated work so that I can quickly regain ownership without losing context."
Description

At OOO end, notify participants, revoke temporary permissions, and deliver a concise handoff digest to the owner summarizing delegated actions, pending items, and exceptions. Provide a reconciliation view to ratify, override, or follow up on delegated decisions, reassign remaining tasks, and add owner notes for clients. Ensure all actions preserve audit continuity and update project timelines and accountability records without data loss.

Acceptance Criteria
OOO End Notifications to Owner and Participants
Given an OOO period with designated delegates has an end timestamp When the end timestamp is reached or the owner manually ends OOO early Then the system sends in-app and email notifications to the owner and to all participants who created, reviewed, or approved items during OOO within 2 minutes And the notifications include counts of delegated actions, pending items, and exceptions with deep links to the reconciliation view and the handoff digest And a notification delivery log entry is recorded per recipient with timestamp, channel, status (sent, delivered, failed), and correlation ID When a notification fails Then the system retries up to 3 times with exponential backoff and surfaces a non-blocking banner to the owner listing failed recipients
Automatic Revocation of Temporary Delegate Permissions
Given OOO ends When the end is triggered Then all temporary delegate permissions granted for the OOO window are revoked within 60 seconds and access tokens are invalidated And delegates immediately lose the ability to approve, reassign, or edit items tied to the owner's authority while retaining read-only access per the baseline access model When a delegate attempts an action requiring revoked permission after OOO end Then the system blocks the action and displays an OOO Ended message with a link to contact the owner And a permissions-change audit event is logged for each delegate with before/after scopes
Handoff Digest Delivery to Owner
Given OOO ends When the system compiles the handoff digest Then the digest includes delegated approvals (id, title, client, status, decision, delegate, timestamp), pending items, exceptions (blocked, escalated, missing info), and summary metrics And the digest is delivered via in-app message and as an email with a PDF attachment within 5 minutes; both contain consistent content and deep links to each item and the reconciliation view And the owner can acknowledge receipt; acknowledgment is timestamped and logged When digest generation fails Then the system alerts the owner and retries generation within 2 minutes; on repeat failure, a minimal fallback list view is provided
Reconciliation View: Ratify, Override, Follow Up
Given the owner opens the reconciliation view When reviewing a delegated decision Then the owner can choose Ratify, Override, or Follow Up for each decision and optionally add a note to the client or internal team And bulk actions are available for multi-select with per-item confirmation and the ability to apply a shared note When the owner selects Override Then the system requires a reason, records the new decision, notifies affected participants/clients, and updates task status accordingly When Follow Up is selected Then a follow-up task is created with assignee, due date, and reminder rules Then all actions are persisted without page reload and are undoable for 10 minutes
Reassignment of Remaining Tasks
Given there are pending items at OOO end When the owner opens the reconciliation view Then the owner can reassign any remaining tasks to self or others, set due dates, and adjust priority And reassignment triggers notifications to new assignees and removes delegates from task watchers as appropriate And reassigned tasks inherit full context (attachments, markup history, client threads) without loss When reassignment affects dependencies Then the system recalculates timelines and flags conflicts
Audit Continuity and Attribution
Given any action performed during OOO or reconciliation When an audit record is created Then the record includes actor (delegate or owner), acting-as (if delegate), original authority (owner), timestamp, previous state, new state, and rationale (if provided) And overrides retain both the delegate's original decision and the owner's superseding decision in an immutable chain with cross-links And all audit entries are filterable by actor, date range, project, and decision type and exportable as CSV When the same event is processed more than once Then idempotency keys prevent duplicate audit entries
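Idempotent audit writes can be sketched as a key derived from the event's identifying fields, so a re-delivered event never produces a second entry; the exact key fields are an assumption for illustration:

```python
import hashlib

class AuditLog:
    """Audit sink that deduplicates by idempotency key (a sketch of
    the duplicate-processing criterion above)."""

    def __init__(self):
        self.entries = []
        self._seen = set()

    @staticmethod
    def idempotency_key(event: dict) -> str:
        # Assumed key fields; any stable identifying tuple works.
        raw = f"{event['action_id']}|{event['actor']}|{event['timestamp']}"
        return hashlib.sha256(raw.encode("utf-8")).hexdigest()

    def record(self, event: dict) -> bool:
        key = self.idempotency_key(event)
        if key in self._seen:
            return False  # duplicate delivery: no second audit entry
        self._seen.add(key)
        self.entries.append(event)
        return True
```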
Timeline and Accountability Updates
Given reconciliation actions are completed When ratifications, overrides, or reassignments occur Then the project timeline is recalculated (task durations, due dates, critical path) within 2 minutes and a change log is recorded And accountability records reflect final owner responsibility for ratified items and shared responsibility for overridden items, with visible badges in the item history And cycle time metrics for the OOO window are updated and visible in analytics: number of delegated decisions, ratification rate, override rate, average time to ratify When data synchronization to analytics fails Then the system queues the update and retries until success without blocking user actions

Noise Sweep

Automatically suppresses low-value visual noise—like raster artifacts, lineweight jitter, and title-block churn—so heatmaps only glow where real design changes occurred. Tunable sensitivity lets each role see the right signal, cutting false positives and speeding reviews.

Requirements

Noise Classification Engine
"As a project lead, I want Noise Sweep to automatically filter out trivial drawing fluctuations between versions so that our heatmap only highlights meaningful design changes."
Description

Implements a core detection pipeline that differentiates low-value visual noise (e.g., raster speckle, lineweight jitter, hatch flicker, title-block churn) from substantive geometry and annotation changes between drawing versions. Ingests both vector PDFs and raster scans, performs geometric registration, and produces a suppression mask applied prior to heatmap generation. Combines deterministic heuristics with optional machine learning to achieve high precision and recall, outputs per-pixel or per-object confidence scores, and exposes tunable thresholds for downstream components. Ensures deterministic runs for the same inputs and supports batch processing for multi-sheet sets.

Acceptance Criteria
Multi-Format Ingestion & Geometric Registration
Given two drawing versions (any combination of vector PDF and raster scan at 150–600 DPI) When the engine ingests the pair Then it auto-detects formats and parses both without manual preprocessing And it performs geometric registration, reporting the chosen transform (rigid/affine/projective) and quality metrics (RMS error, inlier ratio) And the final alignment achieves RMS residual ≤ 1.0 pixel at 300 DPI (scaled proportionally by DPI) or ≤ 0.1% of the sheet’s shorter side, whichever is larger And pages requiring rotation, scale, or slight skew are normalized before differencing And if registration confidence < the configured threshold, the engine flags the sheet as "registration_failed" and does not emit a suppression mask
Noise vs Substantive Change Classification Quality
Given a labeled benchmark set covering raster speckle, lineweight jitter, hatch flicker, title-block churn, and true geometry/annotation edits across vector and raster sources When the engine runs in heuristics-only mode Then F1 ≥ 0.87 with precision ≥ 0.88 and recall ≥ 0.86 for detecting substantive changes at default thresholds When the engine runs in ML-enhanced mode (with the specified model available) Then F1 ≥ 0.92 with precision ≥ 0.92 and recall ≥ 0.92 at default thresholds And on noise-only pairs, false-positive highlighted area ≤ 3% of sheet area And suppression of title-block churn correctly marks ≥ 95% of churn pixels as noise
Suppression Mask Output & Alignment to Input Space
Given a registered input pair When a suppression mask is generated Then the mask is emitted at the registered input resolution and coordinate space (pixel-perfect for raster; geometry-aligned for vector) And maximum misalignment ≤ 0.5 pixel at 300 DPI (scaled proportionally by DPI) And the mask file(s) include explicit format metadata (e.g., PNG for raster with DPI, vector path set for vector) And applying the mask prior to heatmap reduces spurious highlights from labeled noise with IoU ≥ 0.90 against the ground-truth noise mask And at default thresholds, ≤ 5% of true-change pixels are erroneously suppressed
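The IoU figure used throughout these criteria is standard intersection-over-union; a minimal reference computation over binary masks (plain nested lists here, purely for illustration):

```python
def mask_iou(a, b) -> float:
    """Intersection-over-union of two same-shaped binary (0/1) masks."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            inter += pa and pb   # 1 only where both masks are set
            union += pa or pb    # 1 where either mask is set
    return inter / union if union else 1.0  # two empty masks match fully
```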
Deterministic Reproducibility
Given identical inputs, configuration (including thresholds), and model version/seed When the engine is executed 10 times on the same machine and on a second machine with the same CPU/GPU architecture Then all emitted outputs (suppression masks and confidence arrays) are byte-identical; if container metadata differs, numerical values differ by ≤ 1e-6 relative error And output metadata contains an execution fingerprint (hash of inputs, configuration, model) that verifies reproducibility
Threshold Tuning & Role Profiles
Given exposed tunable thresholds and predefined role profiles (e.g., Client, PM, Architect) When a user selects a profile or adjusts thresholds Then the effective thresholds are persisted and recorded in output metadata And increasing the suppression threshold yields a non-increasing mask area in ≥ 98% of benchmark cases, and decreasing it yields a non-decreasing mask area in ≥ 98% of cases And default role profiles produce distinct outputs with Jaccard distance ≥ 0.10 between adjacent profiles on the benchmark set
Batch Processing for Multi-Sheet Sets
Given a multi-sheet set of at least 250 sheets with mixed vector and raster sources When the engine runs the set as a batch job Then it produces per-sheet outputs and a job-level summary JSON including per-sheet status, timings, registration metrics, and aggregate statistics And individual sheet failures do not abort the batch; failed sheets are reported with actionable error codes And batch jobs are resumable by job ID, skipping completed sheets and processing only missing/failed ones And the output directory structure preserves input order and filenames and is stable across reruns
Confidence Scores: Range, Calibration, and Exposure
Given detected differences When confidence scores are produced Then per-pixel (raster) or per-object (vector) scores are emitted in [0,1] with 32-bit precision and alignment to the mask coordinate space And on the benchmark set, AUROC ≥ 0.95 and Expected Calibration Error (10-bin ECE) ≤ 0.08 at default thresholds And downstream components can set thresholds on these scores, and score histograms plus summary statistics (min, max, mean, p25, p50, p75, p95) are included in metadata
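The 10-bin Expected Calibration Error named above is the bin-weighted gap between mean confidence and empirical accuracy; a minimal sketch:

```python
def expected_calibration_error(confidences, labels, n_bins: int = 10) -> float:
    """ECE: partition scores into n_bins equal-width confidence bins,
    then take the bin-size-weighted mean of |accuracy - mean confidence|."""
    bins = [[] for _ in range(n_bins)]
    for conf, label in zip(confidences, labels):
        idx = min(int(conf * n_bins), n_bins - 1)  # conf == 1.0 -> last bin
        bins[idx].append((conf, label))
    n = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(y for _, y in bucket) / len(bucket)
        ece += (len(bucket) / n) * abs(accuracy - avg_conf)
    return ece
```

A well-calibrated detector keeps this value at or below the 0.08 ceiling the criterion sets; overconfident scores (high confidence, low accuracy) push it up.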
Role-Based Sensitivity Profiles
"As an architect, I want role-specific sensitivity presets so that each stakeholder sees the appropriate level of change detail without being overwhelmed."
Description

Provides preset and customizable sensitivity profiles by role (e.g., Architect, Client, Project Manager) to control what is considered noise versus signal. Includes a simple slider for quick tuning and an advanced panel for threshold and feature-weight adjustments. Supports per-project defaults, per-user overrides, and shareable profile links. Integrates with access control to ensure the correct profile is applied in reviews, and persists selections in session and exports for reproducibility.

Acceptance Criteria
Apply Default Role Profile on Review Open
Given a project has role-based sensitivity profiles defined and a signed-in user has the role "Client" When the user opens the Review workspace for that project Then the "Client" sensitivity profile is automatically applied to Noise Sweep And the applied profile name and role are visible in the UI header within 500 ms And the session state records the applied profile ID, version, and timestamp And the user’s access permissions hide the Advanced Panel if the role lacks "Advanced Tuning" rights And if the user switches role (without page reload), the applied profile updates to the new role’s mapped profile within 500 ms
Set and Enforce Per-Project Role Defaults
Given a Project Admin has permission to configure role defaults When the admin assigns a sensitivity profile to the roles Architect, Client, and Project Manager and clicks Save Then the mappings are stored at the project level with audit entries (admin, timestamp, profile IDs, versions) And any new review session for a user in one of those roles auto-applies the mapped profile unless a per-user override exists And removing a role mapping stops auto-application for that role in subsequent new sessions
User Override Persists in Session and Next Visit
Given a user with any role in a project has an applied role-based profile When the user adjusts sensitivity via the slider or Advanced Panel and clicks Save as "My Override" Then the override is stored as that user’s active profile for the project and role And the applied profile remains the override for the remainder of the session And after sign-out and sign-in, reopening the same project applies the user’s override And selecting "Reset to Project Default" removes the override and reapplies the project’s role default immediately
Quick Sensitivity Slider Adjusts Signal/Noise in Real Time
Given the Quick Sensitivity Slider is visible to the user When the user drags the slider by any increment Then heatmap rendering updates within 300 ms to reflect the new sensitivity level And the numeric sensitivity value is displayed and bounded within the configured min/max range And unsaved slider changes are marked as "Unsaved" and are included in the current session state And clicking Save commits the slider setting to the currently applied profile (override if present, otherwise role default copy)
Advanced Panel Saves Thresholds and Feature Weights to Profile
Given the user has "Advanced Tuning" permission and opens the Advanced Panel When the user edits thresholds and feature weights (e.g., raster artifact suppression, lineweight jitter tolerance, title-block churn masking) within allowed ranges and clicks Save Then input validation prevents out-of-range values and shows inline errors And the saved parameters are written to the profile as a new version with audit metadata And the live preview updates within 500 ms to reflect changes And clicking Cancel discards unsaved changes and restores the last applied version
Shareable Profile Link Applies Profile with Access Controls
Given a user with permission to share profiles selects a profile and creates a shareable link When a recipient with project access opens the link while viewing the same project Then the linked profile (exact version) is applied read-only to their current session and labeled "From Link" And the recipient can Save As to create a personal copy only if they have edit rights And an unauthorized or out-of-project recipient receives an access error and the profile is not applied And revoking the link or deleting the profile invalidates previously issued links within 60 seconds
Exports Embed Profile Metadata for Reproducibility
Given a user has a view with an applied sensitivity profile When the user exports a PDF or PNG from the Review workspace Then the export embeds the profile name, ID, version, parameters summary, and slider value in the file metadata and export footer And downloading the export and using "Open in PlanPulse" reconstructs the same profile and sensitivity settings in-app And two exports made with different profiles contain different embedded metadata and produce visibly different heatmaps in side-by-side comparison
Layer and Title-Block Exclusions
"As a reviewer, I want known static elements like title blocks and stamps excluded from the heatmap so that routine sheet churn doesn’t trigger false positives."
Description

Automatically detects and excludes title-block regions, legends, stamps, and other static layout elements from change detection and heatmap generation. Parses vector layers where available and applies learned templates or heuristics for raster-only inputs. Provides a UI to whitelist/blacklist layers or regions per sheet, and remembers choices across versions. Handles common CAD title-block churn (dates, issue numbers, plot stamps) without flagging them as design changes.

Acceptance Criteria
Auto-Exclude Title Block in Vector Sheets
Given a vector-based sheet with layers labeled for title block, legend, or stamp When change detection and heatmap generation run Then geometry on those layers is excluded from diff computation and heatmap output And no heatmap pixels exist within excluded polygons (count = 0) And the "Show Exclusions" toggle outlines excluded regions on the canvas And an audit log entry records excluded layer names and region counts
Detect Static Regions in Raster-Only Sheets
Given a raster-only sheet containing a standard title-block layout When Noise Sweep runs with default sensitivity Then the system detects and masks the title-block region with IoU ≥ 0.85 against the reference template or ground truth And changes to dates, issue numbers, or plot stamps within the block do not generate heatmap pixels And overall precision ≥ 0.98 and recall ≥ 0.95 for title-block detection on the provided test set
Per-Sheet Layer Whitelist/Blacklist UI
Given the Layer Exclusions panel is open for a sheet When the user whitelists or blacklists one or more layers and clicks Apply Then the heatmap recomputes and reflects the exclusions within 2 seconds And the panel displays an Active Exclusions list with layer names and region counts And the selection persists to the next upload of the same sheet automatically And the user can Undo/Redo the last exclusion change within the current session
Persist Exclusions Across Versions
Given a sheet with saved exclusion regions at version V1 When versions V2–V5 are uploaded with ≤ 5% change in title-block bounding box area and no orientation change Then the same exclusions auto-apply without prompting And if the title-block shifts > 5% area or rotates, the system flags "Needs Alignment" and pauses exclusions until confirmed And a banner explains why exclusions were paused with a one-click Re-align action
Suppress Title-Block Churn as Non-Design
Given only metadata inside the title block (dates, issue numbers, plot stamps, revision table entries) changed between versions When change detection runs Then the change summary reports 0 design changes And the heatmap shows zero signal outside excluded regions (count = 0) And a non-blocking log entry notes that metadata churn was suppressed
Manual Exclusion Region Overrides
Given auto-detection missed or mis-sized a static region When the user draws, edits, or deletes exclusion polygons and saves Then the manual polygons replace auto-detected ones for that sheet And the heatmap re-renders within 1 second And the overrides persist across future versions and can be exported/imported as JSON
Raster Artifact Smoothing
"As a small-firm architect, I want scan artifacts minimized before diffing so that minor raster noise doesn’t appear as changes in the review heatmap."
Description

Adds a preprocessing stage for scanned or rasterized drawings that reduces speckle, compression artifacts, and scan jitter while preserving line fidelity. Applies adaptive de-noising, de-skewing, and edge-preserving filters, with safeguards to avoid erasing thin vector-like lines. GPU-accelerated where available and optimized for large-format sheets to keep processing within real-time review constraints.
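One ingredient of this stage — speckle removal that spares longer line segments — can be sketched as a connected-component area filter. The function name and the 4 px² default are illustrative assumptions; a production pipeline would add de-skewing and edge-preserving denoising on top:

```python
# Illustrative speckle filter: drop 4-connected components of "on" pixels
# whose area is at or below a threshold, leaving line segments untouched.

def remove_speckle(img, max_area=4):
    """img: 2D list of 0/1 pixels; removes tiny components in place."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:  # iterative flood fill of one component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) <= max_area:  # speckle: erase it
                    for y, x in comp:
                        img[y][x] = 0
    return img
```

Because the filter keys on component area rather than local intensity, a continuous 1 px line survives while isolated specks of the same width are removed — the property the thin-line preservation criteria below demand.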

Acceptance Criteria
Preserve Thin Vector-like Lines
Given a benchmark of raster drawings containing synthetic 1–3 px lines at 300 DPI, When Raster Artifact Smoothing is applied with default sensitivity, Then at least 99.5% of ground-truth line pixels are preserved (recall ≥ 0.995).
Given the same input, When smoothing completes, Then no new gaps longer than 3 px are introduced along any continuous line segment.
Given the same input, When measuring median line width before vs after, Then absolute median change ≤ 0.2 px.
Speckle and Compression Artifact Reduction
Given a reference-backed test set with injected speckle and JPEG compression artifacts, When smoothing runs with default sensitivity, Then speckle count (connected components with area ≤ 4 px²) is reduced by ≥ 80% versus input.
Given the same set, When comparing to the clean reference, Then SSIM increases by ≥ 0.03 or PSNR improves by ≥ 2 dB relative to the noisy input.
Given the same set, When evaluating edge sharpness along true edges, Then average gradient magnitude reduction ≤ 10%.
Auto De-skew Accuracy
Given scans with global skew uniformly distributed in [−3°, +3°], When preprocessing runs, Then residual skew error ≤ 0.1° on the 95th percentile of pages.
Given pages with detectable title-block lines, When de-skew is applied, Then detected title-block edges align within ≤ 1 px RMS to horizontal/vertical axes at 300 DPI.
Given any page, When de-skew performs cropping to fill, Then content loss area ≤ 0.5% of page area.
Real-time Performance on Large-format Sheets (GPU/CPU)
Given a 36×48 in sheet at 300 DPI (≈10800×14400 px), When smoothing runs on reference GPU hardware (RTX 3060 or better), Then end-to-end preprocessing time ≤ 2.0 s and peak memory ≤ 2.0 GB.
Given the same sheet on reference CPU hardware (8-core 3.0 GHz, no discrete GPU), When smoothing runs, Then end-to-end preprocessing time ≤ 8.0 s and peak memory ≤ 2.0 GB.
Given a system with a CUDA-compatible GPU, When smoothing runs, Then GPU utilization ≥ 50% during the denoising stage and the GPU execution path is selected.
Given no GPU, When smoothing runs, Then the CPU fallback executes successfully with no functional regressions.
Sensitivity Control
Given the sensitivity parameter s in [0.0, 1.0], When s = 0.0, Then the smoothing stage is a no-op (output image hash equals input image hash).
Given the same input, When s increases from 0.2 to 0.8, Then speckle reduction is monotonic (non-decreasing) with s and reaches ≥ 90% at s = 1.0 while meeting thin-line preservation criteria.
Given two settings s1 < s2 on the same input, When comparing removed speckle counts, Then removed speckles at s2 ≥ s1.
Deterministic Output and Cross-Hardware Parity
Given identical input and parameters on the same hardware, When smoothing runs twice, Then outputs are bitwise identical (hash equality).
Given identical input and parameters on supported CPU and GPU paths, When outputs are compared, Then mean absolute pixel error ≤ 1 (8-bit) and ≤ 0.1% of pixels differ by more than 1 gray level.
Given different thread counts on the CPU path, When smoothing runs, Then outputs remain bitwise identical.
Confidence-Weighted Change Heatmap
"As a project lead, I want the heatmap intensity to reflect confidence in true changes so that I can quickly focus on the most credible edits."
Description

Integrates classifier confidence into the heatmap so that low-confidence differences are dimmed or suppressed based on current sensitivity settings. Provides tooltips and an inspector panel showing the reason code and confidence for any highlighted region. Supports threshold presets and a quick toggle to view only high-confidence changes, improving focus during time-constrained reviews.
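The dimming rule can be sketched as a confidence-to-opacity mapping. The 20% opacity cap mirrors the acceptance criteria; the function name and the exact dimming curve are assumptions, not the shipped rendering code:

```python
# Sketch: map classifier confidence to render opacity given a sensitivity
# threshold S. Below S a region is dimmed (capped at 20% opacity) or, in
# suppression mode, not drawn at all; at or above S, intensity grows
# monotonically with confidence and peaks at confidence = 1.0.

def region_opacity(confidence, threshold, suppress=False):
    """Return render opacity in [0, 1] for one change region."""
    if confidence < threshold:
        return 0.0 if suppress else min(0.2, confidence)  # deemphasized
    return confidence  # monotone: maximum intensity only at 1.0
```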

Acceptance Criteria
Confidence-Weighted Heatmap Rendering
Given detected change regions each have a classifier confidence in [0.0, 1.0] and a sensitivity threshold S is set When the heatmap renders Then any region with confidence < S is visually deemphasized at opacity <= 20% unless suppression mode is enabled And any region with confidence >= S is visible And heatmap intensity is monotonically increasing with confidence and reaches maximum intensity at confidence = 1.0 And no region with lower confidence renders with higher intensity than a region with higher confidence
Given suppression mode is enabled When the heatmap renders Then regions with confidence < S are not drawn at all
High-Confidence Only Quick Toggle
Given a sensitivity threshold S is active When the user enables the High-Confidence Only toggle Then only regions with confidence >= the High Confidence preset threshold are displayed and all others are hidden (not dimmed) And the toggle’s active state is visibly indicated And the number of visible regions is less than or equal to the number visible before enabling
Given the High-Confidence Only toggle is enabled When the user disables it Then the previously selected sensitivity threshold S is restored and dimmed regions reappear according to S
Tooltip Shows Reason Code and Confidence
Given a visible heatmap region When the user hovers over the region for >= 250 ms or focuses it via keyboard Then a tooltip appears within 100 ms showing the reason code and confidence as a percentage with one decimal (e.g., "Boundary shift — 92.3%") And the tooltip remains anchored to the region without overflowing the viewport And the tooltip dismisses on pointer exit, Escape key, or focus loss
Inspector Panel for Highlighted Region
Given a heatmap region is selected by click or keyboard When the inspector panel opens Then it displays that region’s reason code and confidence as a percentage with one decimal And it updates to reflect the currently selected region within 100 ms when selection changes And the panel can be closed via Escape or clicking outside And if the region is currently suppressed by sensitivity settings, the panel indicates the suppression state
Threshold Presets Apply and Persist
Given threshold presets exist: High Confidence, Balanced, Exploratory, each mapped to a defined confidence threshold value When a user selects a preset Then the sensitivity threshold S is set to that preset’s value immediately And the preset selection persists for that user within the project across sessions And switching presets updates the visible/dimmed/suppressed regions according to the new S
Missing Confidence or Reason Handling
Given a detected change lacks a confidence value or reason code When the heatmap renders Then the region is hidden by default and not counted in visible change totals And hovering that area shows a tooltip with reason "Unknown" and confidence "N/A" And the inspector panel for a selected region with missing data displays reason "Unknown" and confidence "N/A" And if data later becomes available from the classifier, the heatmap and details update within 500 ms
Real-Time Update Performance
Given a drawing with up to 1,000 change regions on a standard target device When the user adjusts the sensitivity slider or toggles High-Confidence Only or suppression mode Then the heatmap re-renders within 200 ms and the UI remains responsive (>= 30 FPS) during the update And tooltip show/hide latency is <= 100 ms And inspector panel open/update latency is <= 150 ms
Reviewer Transparency and Overrides
"As a reviewer, I want clear visibility into what Noise Sweep hid and the ability to temporarily reveal it so that I can approve with confidence."
Description

Surfaces a suppression summary indicating how many items were hidden, with one-click reveal of suppressed regions and a per-region override to restore visibility. Captures the active Noise Sweep settings at approval time and writes them to the audit trail for reproducibility. Ensures annotations and comments remain intact regardless of suppression state and provides a non-destructive toggle to compare original versus swept views.

Acceptance Criteria
Suppression Summary Displays Count and Breakdown
Given a drawing with Noise Sweep enabled and detectable noise items When the view loads or the Noise Sweep sensitivity is changed Then a suppression summary is visible showing the total number of suppressed items and a category breakdown (e.g., raster artifacts, lineweight jitter, title-block churn) And the counts match the detection results for the current view and sensitivity And the summary updates within 1 second after a sensitivity or viewport change And if no items are suppressed, the summary explicitly displays 0 and no warning styles
One-Click Reveal of Suppressed Regions
Given one or more regions are suppressed by Noise Sweep When the reviewer clicks the "Reveal suppressed" control Then all suppressed regions become temporarily visible with a distinct visual delineation (e.g., outline/overlay) And a single click on the control again re-hides the regions And the reveal action is non-destructive and does not modify the source file or saved view settings And the suppression summary count dynamically reflects the current reveal state
Per-Region Override to Restore Visibility
Given a region is currently suppressed by Noise Sweep When the reviewer selects that region and activates "Restore visibility for this region" Then that region renders fully regardless of global Noise Sweep settings And the override persists for the current session and shared link until explicitly cleared And the suppression summary decreases by the number of items affected in that region And the region displays an indicator that an override is active And clearing the override returns the view to the prior suppression state without loss of data
Audit Trail Captures Noise Sweep Settings at Approval
Given the reviewer initiates client approval on a drawing with Noise Sweep available When approval is submitted Then the system writes an audit log entry capturing the active Noise Sweep settings, including enabled state, sensitivity level, role/profile applied, version of the Noise Sweep algorithm, list of active per-region overrides, and reveal-all state And the audit entry includes document ID, revision ID, user ID, timestamp, and application version And the audit entry is immutable and retrievable via audit APIs/UI And loading the document using the logged settings reproduces the same visual result as at approval time
Annotations and Comments Remain Intact Across Suppression States
Given annotations and threaded comments exist on the drawing, including within areas that Noise Sweep may suppress When the reviewer toggles Noise Sweep on/off, reveals suppressed regions, or applies/clears per-region overrides Then all annotations and comment threads remain present, selectable, and editable And their anchors/positions are stable with no ID changes or loss of threading And annotation visibility is governed by annotation settings, not suppression state And annotation/comment counts remain unchanged across toggles
Non-Destructive Original vs Swept View Toggle
Given a drawing where Noise Sweep can be applied When the reviewer toggles between Original and Swept views Then the system switches views without altering the underlying file or saved revision And returning to the prior view restores the exact previous camera, zoom, layer visibility, and Noise Sweep state And the toggle responds within 500 ms on documents up to the defined size threshold And the compare toggle is available via UI control and keyboard shortcut And enabling compare does not change the audit trail until an explicit approval action is taken
Real-Time Rendering and Caching
"As a project manager, I want Noise Sweep to feel instantaneous during reviews so that our team can make decisions without waiting on the interface."
Description

Delivers responsive pan/zoom and toggling of Noise Sweep with a target interaction latency under 150 ms on typical plan sizes. Uses tile-based rendering, background workers, and memoized suppression masks per version pair and sensitivity profile. Includes graceful degradation on lower-end devices and progress indicators for long-running sheets, ensuring smooth reviews without blocking the workspace.
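The memoization scheme above can be sketched as an LRU map keyed by (old version, new version, sensitivity), so returning to a previously viewed pair skips recomputation while any key change triggers a fresh mask. The class name, capacity, and `compute_mask` stand-in are assumptions:

```python
from collections import OrderedDict

# Sketch: suppression masks cached per (old_version, new_version, sensitivity).
# compute_mask stands in for the real (expensive) diff computation.

class MaskCache:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.store = OrderedDict()
        self.computes = 0  # instrumentation for the sketch

    def get(self, old_v, new_v, sensitivity, compute_mask):
        key = (old_v, new_v, sensitivity)
        if key in self.store:
            self.store.move_to_end(key)      # LRU: mark as recently used
            return self.store[key]
        mask = compute_mask(old_v, new_v, sensitivity)
        self.computes += 1
        self.store[key] = mask
        if len(self.store) > self.capacity:  # evict least recently used
            self.store.popitem(last=False)
        return mask
```

Changing either the version pair or the sensitivity produces a new key, which is exactly the invalidation behavior the criteria below specify.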

Acceptance Criteria
Pan/Zoom Responsiveness on Typical Plans
Given a plan up to 10k×10k px with vector and raster layers loaded in the workspace When the user performs pan or zoom interactions (drag, wheel, pinch) Then the visible viewport re-renders within 150 ms for ≥95% of interactions and within 250 ms for ≥99% (end-to-end input-to-paint) on a reference mid‑range device And average frame rate during continuous pan/zoom remains ≥45 FPS with dropped frames ≤2% And no main-thread long task exceeds 50 ms during interaction
Noise Sweep Toggle Latency
Given Noise Sweep is off and the current sheet is visible at 100–200% zoom When the user toggles Noise Sweep on or off Then the viewport reflects the new state within 150 ms for ≥95% of toggles and within 250 ms for ≥99% And if a suppression mask is already memoized for the current version pair and sensitivity, the update occurs within 80 ms for ≥95% of toggles
Memoized Suppression Masks and Invalidation
Given a version pair (A→B) and a selected sensitivity profile S have been processed once When the user returns to the same version pair (A→B) and sensitivity S Then the memoized suppression mask is reused with a retrieval time ≤30 ms and no recomputation occurs And when the user changes either the version pair or sensitivity by any amount Then the previous mask is invalidated and a new mask is computed and memoized for the new (version pair, sensitivity) key
Tile-Based Rendering Efficiency and Cache Usage
Given tile-based rendering is enabled with a tile cache limit configured When the user pans/zooms within a 2× viewport radius of recently viewed areas Then tile cache hit rate is ≥80% over a 60‑second interaction window And peak tile cache memory usage does not exceed the configured limit And tiles evicted follow LRU policy with no visible seams or stale tiles on re-render
Background Workers and Non-Blocking UI
Given Noise Sweep computation and tile rasterization run in background workers When a long-running sheet requires >300 ms of processing Then the main thread remains responsive with interaction latency ≤100 ms for UI controls (chat, comments, selection) And a progress indicator appears within 100 ms of crossing the 300 ms threshold and updates at least every 1 s And users can cancel or navigate away; background work is aborted within 100 ms of cancellation
Graceful Degradation on Lower-End Devices
Given a lower-end device profile (dual-core CPU, integrated GPU, ≤4 GB RAM) is detected When the user pans, zooms, or toggles Noise Sweep Then adaptive strategies (reduced tile resolution, throttled tile count) are applied automatically And interaction latency remains ≤300 ms for ≥95% of interactions with ≥30 FPS sustained during motion And a non-blocking indicator communicates reduced quality mode while preserving feature functionality

Change Layers

Filter heatmap highlights by change type (geometry, dimensions, text, annotations, symbols) and discipline tags. One-click presets tailor the view for clients, QA/QC, or consultants, so each user focuses on the diffs that matter to their decisions.

Requirements

Change Type Classification Engine
"As an architect, I want changes automatically categorized by type so that I can filter and review only the diffs relevant to my task."
Description

Implement an automated diff classifier that detects and labels changes between drawing versions by type (geometry, dimensions, text, annotations, symbols). The engine should integrate with PlanPulse’s versioning pipeline, parsing vector and raster inputs, mapping detected changes to standardized categories, and emitting structured metadata consumable by the rendering layer and filters. Accuracy must support professional review via configurable thresholds, and the engine must handle sheet scale/rotation and de-noise minor artifacts to reduce false positives. Outputs must be idempotent, stored alongside version snapshots, and reusable across sessions to power filters, heatmaps, and approval workflows.
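The idempotency requirement implies stable change identifiers across reruns. One way to achieve that — an assumption for illustration, not necessarily PlanPulse's actual scheme — is to derive each change_id deterministically (a name-based UUID) from the fields that define the change, so reprocessing identical inputs yields byte-identical records:

```python
import uuid

# Sketch: deterministic change IDs via UUIDv5 over the defining fields.
# The namespace value and field list are illustrative assumptions.
NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")

def make_change_id(sheet_id, version_old, version_new, category, bbox_mm):
    seed = f"{sheet_id}|{version_old}|{version_new}|{category}|{bbox_mm}"
    return str(uuid.uuid5(NAMESPACE, seed))
```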

Acceptance Criteria
Accurate Multi-class Change Detection (Vector and Raster)
- Given two sheet versions in vector (PDF/DWG) or raster (TIFF/PNG) format, when the engine runs, then each detected change is labeled as one of [geometry, dimensions, text, annotations, symbols] and includes bbox_mm, confidence, and modality.
- Given the PlanPulse benchmark set (≥500 sheets), when evaluated with default thresholds, then per-class precision ≥ 0.95 and recall ≥ 0.90 for geometry and dimensions, and precision ≥ 0.93 and recall ≥ 0.88 for text, annotations, and symbols.
- Given an A1 sheet at 300 DPI with ≤1000 changes, when processed on the reference environment, then processing time per sheet ≤ 5 s and peak memory ≤ 2 GB.
Alignment Across Scale, Rotation, and Translation
- Given two versions with arbitrary rotation (±180°), scale (50–200%), and translation, when normalized, then IoU between mapped unchanged fiducials ≥ 0.90 and misalignment-induced false positives ≤ 2% on the alignment validation set.
- Given sheets with reoriented title blocks, when processed, then true movement is reported as geometry changes and not suppressed by normalization.
Noise Suppression and Minor Artifact Filtering
- Given raster scans containing speckle and compression artifacts, when using default thresholds, then noise-induced false positives ≤ 5 per A1 sheet at 300 DPI and min-area threshold default ≥ 6 mm² filters hatch jitter.
- Given anti-aliasing or line-weight differences ≤ 0.2 mm, when processed, then no changes are emitted for these differences.
- Given vector inputs with hidden layers off, when processed, then changes on hidden layers are ignored.
Schema-Compliant, Idempotent Metadata Output
- Given the engine emits metadata, then it conforms to schema v1.0 with fields: change_id (UUIDv4), sheet_id, version_id_old, version_id_new, category enum, discipline_tags[], bbox_mm {x,y,w,h}, confidence, element_ids[], engine_version, threshold_profile_id, created_at, hashes.det; all records validate against the JSON schema.
- Given identical inputs and thresholds, when reprocessed, then output is byte-identical and change_id values remain stable (idempotent).
- Given up to 5000 changes on a sheet, when stored, then metadata size ≤ 5 MB (compressed) and retrieval time ≤ 300 ms from the storage API.
Versioning Pipeline Integration and Storage
- Given a new version snapshot is created, when the pipeline finalizes ingestion, then the classifier runs within 10 s, associates output to the snapshot, and exposes GET /versions/{id}/changes returning 200 within 300 ms for ≤1000 changes.
- Given concurrent uploads of the same content, when deduplicated, then only one classification job executes and others reference the same metadata.
- Given a processing error occurs, when retried up to 3 times with exponential backoff, then failure is recorded without blocking snapshot availability, and an actionable error code is returned by GET /versions/{id}/changes.
Filter and Heatmap Compatibility for Change Layers
- Given a filter by change types [geometry, text] and discipline tags [Structural, MEP], when applied by the rendering layer, then the returned set equals the intersection exactly (100% match to metadata).
- Given the Client View preset mapping to [geometry, dimensions, symbols], when requested, then only those categories are returned and heatmap intensity scales linearly with confidence (0..1 -> 0..100%).
- Given no matches for a filter, when queried, then an empty list and count=0 are returned with 200 status.
Configurable Thresholds and Reproducible Profiles
- Given a project selects the "Conservative Review" threshold profile, when applied, then confidence and min-area thresholds reflect profile values in metadata and evaluation shows ≥30% reduction in false positives with ≤10% recall loss versus default on the benchmark.
- Given an authorized admin updates thresholds, when saved, then the change is versioned, auditable, and triggers reprocessing only for affected sheets; both old and new outputs remain retrievable via threshold_profile_id.
- Given thresholds are unchanged, when a sheet is reprocessed, then outputs remain identical (no drift across engine minor versions), or a schema-stamped engine_version bump is required to alter outputs.
Discipline Tagging & Multi-Filter
"As a project lead, I want to filter changes by discipline tags so that each stakeholder sees only their discipline-specific impacts."
Description

Provide a schema and UI to tag changes and sheets with discipline metadata (e.g., Architecture, Structural, MEP, Interiors, Landscape) and enable multi-select filtering across both change type and discipline. The filter panel must support AND/OR logic, saved defaults per project, and role-based visibility of internal disciplines. Integrate with project templates and import mappings from common CAD/BIM layer conventions to auto-assign tags where possible. The filtered result set should drive all downstream views (heatmap, lists, approvals) and maintain consistency across zoom levels and sheets.

Acceptance Criteria
Discipline Tagging at Change and Sheet Levels
- Given a sheet has discipline tags [Architecture, Interiors] and a new change is created on that sheet, When the change is saved without explicit discipline tags, Then the change’s effective discipline tags equal [Architecture, Interiors].
- Given a change is created on a sheet with [Architecture] but the user explicitly sets the change’s tags to [MEP, Structural], When the change is saved, Then the change’s effective tags equal [MEP, Structural] (overriding the sheet defaults) and are used in filtering.
- Given a user attempts to assign a discipline tag not in the allowed set {Architecture, Structural, MEP, Interiors, Landscape, Internal-*}, When the change is saved, Then validation fails with an inline error and the tag is not persisted.
- Given a sheet’s discipline tags are updated after changes already exist, When viewing existing changes, Then each change’s effective tags remain as last explicitly set for that change; changes without explicit tags reflect the updated sheet tags.
- Given API/export is requested for a sheet, When the payload is generated, Then each change includes both its explicit tags and computed effective tags.
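The effective-tag rule in these criteria — explicit change tags win, otherwise the sheet's current tags apply, with validation against the allowed set — can be sketched as follows. The function name is an assumption, and Internal-* prefix handling is omitted for brevity:

```python
# Sketch of the "effective tags" rule: explicit tags on a change override
# the sheet's tags; an empty selection inherits the sheet defaults.
ALLOWED = {"Architecture", "Structural", "MEP", "Interiors", "Landscape"}

def effective_tags(change_tags, sheet_tags):
    if change_tags:                       # explicit tags override the sheet
        bad = set(change_tags) - ALLOWED  # (Internal-* omitted for brevity)
        if bad:
            raise ValueError(f"unknown disciplines: {sorted(bad)}")
        return list(change_tags)
    return list(sheet_tags)               # fall back to sheet defaults
```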
Multi-Select Filtering with AND/OR Logic Across Change Type and Discipline
- Given selected change types = {Geometry, Text} and selected disciplines = {Architecture, Structural} with cross-category operator = AND, When filters are applied, Then the result set includes only changes where (type ∈ {Geometry, Text}) AND (discipline ∈ {Architecture, Structural}).
- Given the same selections with cross-category operator = OR, When filters are applied, Then the result set includes changes where (type ∈ {Geometry, Text}) OR (discipline ∈ {Architecture, Structural}).
- Given multiple selections within a single category, When filters are applied, Then selections within that category are combined with OR semantics.
- Given no selections in a category, When filters are applied, Then that category is treated as “Any”.
- Given filters are applied, When the user clicks “Clear All”, Then all selections are removed and the full unfiltered result set is shown.
- Given a project with 5,000 changes, When toggling any single filter pill, Then the visible results update across views within 300 ms on a modern laptop (baseline spec documented).
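The filter semantics described here — OR within a category, AND or OR across categories, empty selection meaning "Any" — are compact enough to express as a single predicate. A minimal sketch (function and field names are assumptions):

```python
# Sketch of the multi-filter predicate: OR within a category, AND/OR across
# the two categories, and an empty selection treated as "Any".

def matches(change, types_sel, disc_sel, cross_op="AND"):
    """change: dict with 'type' and 'disciplines'; selections are sets."""
    if cross_op == "OR" and types_sel and disc_sel:
        # OR only meaningfully combines when both categories have selections
        type_hit = change["type"] in types_sel
        disc_hit = bool(set(change["disciplines"]) & disc_sel)
        return type_hit or disc_hit
    type_ok = not types_sel or change["type"] in types_sel
    disc_ok = not disc_sel or bool(set(change["disciplines"]) & disc_sel)
    return type_ok and disc_ok
```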
Role-Based Visibility of Internal Disciplines
- Given an Internal-* discipline tag exists on changes, When a user with Client role opens the filter panel, Then Internal-* disciplines are not listed and changes tagged only with Internal-* are excluded from client-visible results.
- Given a user with Admin or PM role, When opening the filter panel, Then Internal-* disciplines are visible and selectable and their tags are displayed on changes.
- Given a Client user attempts to access an internal discipline via a shared filter URL or API, When the view loads, Then the internal selection is ignored and the user sees a sanitized filter state without Internal-*.
- Given audit requirements, When an Admin toggles visibility for Internal-* disciplines, Then the action is logged with timestamp, user, and project ID.
Saved Project Defaults and Template Integration
- Given a Project Owner configures filters (change types, disciplines, AND/OR) and clicks “Save as Project Default”, When any project member opens the project later, Then the saved default filter state is applied on initial load.
- Given a user temporarily changes filters during a session, When the user clicks “Restore Project Default”, Then the filters revert exactly to the saved default state.
- Given a new project is created from a template that contains saved default filters and discipline visibility rules, When the project is created, Then the new project inherits those defaults and rules.
- Given project defaults are updated, When a new session starts for any user, Then the latest defaults are applied; existing open sessions retain their current filters until refresh.
- Given a project has no saved defaults, When the filter panel is opened, Then sensible system defaults are applied: {All change types, All non-internal disciplines, AND}.
CAD/BIM Layer Mapping Import for Auto-Tagging
- Given an import of CAD/BIM files with layer/category names, When the user selects a mapping preset (e.g., A-* → Architecture, S-* → Structural, M-*/MEP_* → MEP), Then imported changes and sheets are auto-assigned corresponding discipline tags per the mapping.
- Given layers/categories that do not match any mapping rule, When the import completes, Then those items are flagged as Unmapped and listed for review with counts; the user can bulk-assign a discipline and save the rule to the project template.
- Given a user has saved a custom mapping to a project template, When the next import for the same project or a project created from that template runs, Then the saved mapping is applied automatically before fallback to defaults.
- Given a user reviews the mapping preview, When confirming, Then the preview displays counts per discipline and unmapped count, and the post-import results match the preview within ±0 variance.
- Given the user downloads the mapping table, When they upload an edited CSV mapping, Then the new rules are validated (no unknown disciplines, no circular rules) and applied to subsequent imports.
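The preset patterns quoted in these criteria (A-* → Architecture, S-* → Structural, M-*/MEP_* → MEP) are shell-style wildcards, so a first-match rule table is enough to sketch the auto-tagger. The function name, rule ordering, and the None-means-unmapped convention are assumptions:

```python
import fnmatch

# Sketch: first matching wildcard rule wins; None flags the layer as
# Unmapped for manual review and bulk-assignment.
PRESET = [("A-*", "Architecture"), ("S-*", "Structural"),
          ("M-*", "MEP"), ("MEP_*", "MEP")]

def map_layer(layer_name, rules=PRESET):
    for pattern, discipline in rules:
        if fnmatch.fnmatch(layer_name, pattern):
            return discipline
    return None  # unmapped: list for review with counts
```

Project-specific CSV mappings could be prepended to the rule list so saved custom rules apply before the preset fallback, as the criteria require.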
Filtered Results Drive Heatmap, Lists, and Approvals
- Given a filter is applied yielding N matching changes across selected sheets, When viewing the heatmap, Then exactly N highlights are visible and non-matching changes are hidden.
- Given the same filter, When viewing the Change List panel, Then the list contains exactly N rows and the count badge shows N.
- Given the same filter, When opening the Approvals queue, Then only the N matching changes are eligible for approval actions and the queue count equals N.
- Given the user exports the Change Report, When the export completes, Then it contains only the filtered N changes.
- Given the user modifies the filter, When switching between these views, Then counts and contents remain synchronized without discrepancy.
Consistency Across Zoom Levels and Cross-Sheet Navigation
- Given a filter is active, When zooming in and out on a sheet, Then the same set of matching changes remains visible/hidden at all zoom levels; no additional changes appear or disappear due to zoom.
- Given a multi-sheet selection with an active filter, When navigating to a sheet with zero matches, Then the view clearly shows an empty state (e.g., “No filtered changes on this sheet”) and heatmap displays no highlights.
- Given filters are active, When paging between sheets or panning in an infinite canvas mode, Then the filter state persists and applies to each sheet without reset.
- Given a user opens a deep link to a specific change while a filter is active, When the change is outside the current filter, Then the app warns the user and offers to expand the filter to include that change’s tags.
Heatmap Overlay & Opacity Controls
"As a client, I want a clear heatmap that emphasizes significant changes so that I can quickly grasp what’s new without reading technical notes."
Description

Render a performant change heatmap overlay that visually emphasizes areas of concentrated or higher-impact diffs. Support configurable color ramps, intensity scaling by change magnitude/frequency, opacity slider, and per-type toggles. Ensure legibility on light/dark backgrounds and at varying zoom levels, with adaptive clustering to prevent visual noise on dense drawings. The overlay must sync with selection (hover/click to reveal change details) and work seamlessly in PlanPulse’s canvas with pan/zoom at 60 FPS on typical project sizes.

Acceptance Criteria
60 FPS Heatmap During Canvas Pan/Zoom on Typical Project
Given a project with 3 plan sheets and 5,000 heatmap points loaded in the PlanPulse canvas When the user continuously pans and zooms for 30 seconds with the heatmap overlay enabled Then the average FPS is >= 60 and the 95th percentile frame time is <= 20 ms and no single frame exceeds 100 ms
Given the same session When toggling the heatmap overlay visibility on/off Then the toggle action completes in <= 150 ms and no input latency exceeds 50 ms
Given the same dataset When switching between sheets with the overlay enabled Then the overlay render time per sheet is <= 500 ms and GPU memory usage does not grow cumulatively by more than 10% across 5 switches
Configurable Color Ramps and Intensity Scaling by Change Magnitude/Frequency
Given three built-in color ramp presets (Warm, Cool, Grayscale) and a control to adjust start/end colors When the user selects a preset or adjusts the colors Then the overlay updates within 150 ms and the legend reflects the active ramp
Given change regions with magnitudes 1, 3, and 5 at equal frequency When the user selects the “By Magnitude” scaling mode Then the measured median heat intensity for magnitude 5 > magnitude 3 > magnitude 1 and the mapping is strictly monotonic
Given regions of equal magnitude but frequencies 10, 50, and 100 When the user selects the “By Frequency” scaling mode Then intensity ordering reflects 100 > 50 > 10
Given the “Combined” scaling mode with adjustable weights When the user adjusts the magnitude/frequency weights Then the legend displays at least 7 distinct color steps and the overlay re-computes intensities within 150 ms
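The three scaling modes in these criteria reduce to a monotone mapping from magnitude and frequency onto [0, 1]. A minimal sketch — the normalizers, default weights, and function name are assumptions standing in for tuned values:

```python
# Sketch: heat intensity by magnitude, frequency, or a weighted combination,
# normalized to [0, 1] and monotone non-decreasing in both inputs.

def intensity(magnitude, frequency, mode="combined",
              w_mag=0.5, w_freq=0.5, mag_max=5, freq_max=100):
    m = min(magnitude / mag_max, 1.0)
    f = min(frequency / freq_max, 1.0)
    if mode == "magnitude":
        return m
    if mode == "frequency":
        return f
    total = w_mag + w_freq
    return (w_mag * m + w_freq * f) / total  # combined, monotone in both
```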
Real-Time Opacity Slider for Heatmap Overlay
Given an opacity slider with a 0–100% range When the user drags the slider thumb Then overlay opacity updates continuously with latency <= 50 ms between input and visual change
Given the slider is set to 0% When the user releases the slider Then the heatmap overlay is not rendered and hover/click hit-testing on the overlay is disabled
Given the slider is set to 100% When the user releases the slider Then the heatmap overlay is fully opaque relative to the base compositing mode and remains responsive during pan/zoom
Given the slider has keyboard focus When the user presses ArrowUp/ArrowDown Then opacity adjusts by 1% per keypress; PageUp/PageDown adjusts by 10%
Per-Type Heatmap Toggles (Geometry, Dimensions, Text, Annotations, Symbols)
Given five per-type toggle controls (geometry, dimensions, text, annotations, symbols) When the user toggles any type off Then only heatmap contributions from the remaining enabled types are visible and the legend counts reflect only enabled types Given multiple types are enabled When the user toggles an additional type on Then the overlay updates within 150 ms to include that type’s contributions (set union) Given all types are toggled off When the overlay would otherwise render Then the overlay is hidden and hover/click hit-testing on the overlay is disabled
Legibility on Light/Dark Backgrounds Across Zoom Levels
Given a drawing with predominantly light backgrounds When the heatmap overlay is enabled Then the local contrast between overlay and base is >= 3:1 for at least 95% of heatmap pixels at 100% zoom and the legend text contrast is >= 4.5:1 Given a drawing with predominantly dark backgrounds When the heatmap overlay is enabled Then the system automatically selects stroke/halo or ramp variant to maintain >= 3:1 overlay contrast and >= 4.5:1 legend text contrast Given the user zooms between 10% and 800% When the overlay re-renders at each zoom level Then overlay elements adapt to maintain a minimum visual thickness/spot size of 2 px and avoid sub-pixel flicker
Adaptive Clustering to Reduce Visual Noise on Dense Drawings
Given a region containing > 50 change points within a 100x100 px window When adaptive clustering is enabled Then the number of rendered primitives in that window is reduced by >= 70% and each cluster displays an aggregate count badge Given clustered regions are visible When the user clicks a cluster Then the view expands (or zooms) to reveal constituent items within 300 ms and the aggregate count resolves into individual items Given clustering is toggled off When the same dense region is viewed Then all individual points render and the FPS does not degrade by more than 20% compared to the clustered view on the same dataset
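One common way to satisfy the clustering criteria above is screen-space grid bucketing: every occupied cell collapses to a single rendered primitive with an aggregate count badge. The cell size and all names below are illustrative, not PlanPulse's actual tuning:

```typescript
interface Point { x: number; y: number }
interface Cluster { cx: number; cy: number; count: number; members: Point[] }

function clusterPoints(points: Point[], cellPx = 100): Cluster[] {
  const cells = new Map<string, Point[]>();
  for (const p of points) {
    const key = `${Math.floor(p.x / cellPx)}:${Math.floor(p.y / cellPx)}`;
    const bucket = cells.get(key) ?? [];
    bucket.push(p);
    cells.set(key, bucket);
  }
  // Each occupied cell becomes one primitive, positioned at its centroid,
  // carrying the count for the badge and the members for click-to-expand.
  return [...cells.values()].map((members) => ({
    cx: members.reduce((s, p) => s + p.x, 0) / members.length,
    cy: members.reduce((s, p) => s + p.y, 0) / members.length,
    count: members.length,
    members,
  }));
}
```

Clicking a cluster would zoom to the bounding box of `members`, resolving the badge into individual items as the criteria require.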
Hover/Click Selection Syncs Overlay With Change Details
Given the heatmap overlay is visible When the user hovers a heat-affected region Then a tooltip appears within 150 ms showing change type, magnitude/frequency, and timestamp Given the user clicks a heat-affected region When selection occurs Then the corresponding change details open in the side panel and the selected region is highlighted and persists across pan/zoom until explicitly cleared Given multiple overlapping regions under the cursor When the user clicks or uses keyboard to cycle Then a disambiguation list allows selecting among items with selection latency <= 200 ms Given the overlay is hidden (opacity 0% or all types off) When the user hovers or clicks the canvas Then no heatmap tooltip or selection is triggered
One-Click Role Presets & Sharing
"As a consultant, I want one-click presets tailored to my role so that I can load a relevant view without manual configuration."
Description

Deliver preconfigured filter presets optimized for Clients, QA/QC, and Consultants that can be applied with a single click. Allow workspace admins to create, edit, and publish presets per project, including change types, disciplines, heatmap style, and annotation visibility. Presets must be shareable via link, attachable to review requests, and assignable as role-based defaults that apply on open. Persist the chosen preset through the session and across sheets, and provide a clear indicator of the active preset with a quick reset to the default view.

Acceptance Criteria
One-click role preset application
Given a project has published presets named "Client Review", "QA/QC", and "Consultant View" And a user with project access is viewing a sheet with Change Layers available When the user clicks the "Client Review" preset Then the app applies the preset’s configured change types, discipline tags, heatmap style, and annotation visibility to the current view And the Active Preset indicator displays "Client Review" And previously applied manual filters are overridden by the preset And only one preset is active at a time And clicking another preset replaces the active preset and updates the indicator accordingly
Admin creates, edits, and publishes a project preset
Given a workspace admin opens Project Settings > Presets When the admin creates a new preset with a unique name and configures change types, disciplines, heatmap style, and annotation visibility And saves it as Draft Then the preset is saved and visible only to admins in the Presets list with status "Draft" When the admin Publishes the preset Then the preset becomes available to all project members in the preset picker And editing a published preset creates a new Draft that must be Published to update what users see And two presets cannot share the same name within a project
Share preset via link
Given a published preset exists When a project member copies the preset share link And a recipient with project access opens the link Then the app opens the project and applies the referenced preset to the current sheet or the sheet encoded in the link And the link does not grant access beyond the recipient’s existing project permissions And if the recipient lacks access, the app displays Access Denied without revealing preset details And if the preset has been unpublished or deleted, opening the link shows "Preset unavailable" and loads the default view
Attach preset to review request
Given a user is creating a Review Request for selected sheets When the user attaches a published preset and sends the request Then recipients opening the review see the attached preset auto-applied on those sheets And recipients can switch presets or clear presets without altering the sender’s attached preset And the review detail page shows the name of the attached preset
Role-based default preset on open
Given an admin assigns default presets for roles (Client, QA/QC, Consultant) in Project Settings And defines priority if a user has multiple roles When a user with that role opens any sheet in the project with no preset currently active in the session Then the default preset for that role is auto-applied And if the user has multiple roles, the highest-priority role’s default preset is applied And if no default is set for the user’s roles, no preset is auto-applied
Persist preset across sheets and within session
Given a user has applied a preset When the user navigates across sheets within the project Then the same preset remains active across those sheets When the user reloads the app within the same authenticated session Then the preset remains active When the user signs out or the session expires Then no preset is persisted unless a role default applies on next sign-in
Active preset indicator and quick reset
Given a preset is active Then the UI displays a persistent Active Preset indicator with the preset name When the user modifies any underlying filter setting Then the indicator shows "Modified" state When the user clicks "Reset to Default View" Then all filters, heatmap style, and annotation visibility revert to the project default with no preset active And the Active Preset indicator is cleared
State Persistence & Deep Linking
"As a team member, I want the filtered state encoded in the URL so that I can share an exact view with collaborators."
Description

Encode the active Change Layers state (selected types, disciplines, heatmap settings, zoom/position, active sheet, and version pair) into the URL and session storage to enable exact-view restoration and collaboration. Opening a shared link should recreate the same visual context and selection. Integrate with PlanPulse’s comment and approval objects so that notes and approvals can reference a reproducible filtered state. Use versioned URL parameters to preserve backward compatibility as the settings schema evolves.
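A hedged sketch of the versioned encode/decode cycle: the `state_v` key and the abbreviated parameter names below follow the spirit of the acceptance criteria, but the exact schema is an assumption. Unknown keys are ignored and missing keys fall back to defaults, which is what keeps older links working as the schema grows:

```typescript
interface ViewState {
  sheet: string;
  versionPair: string;          // e.g. "V3..V5"
  types: string[];
  disciplines: string[];
  opacity: number;              // 0..100
  zoom: number;
  pan: { x: number; y: number };
}

const STATE_V = 2;

function encodeState(s: ViewState): string {
  const p = new URLSearchParams();
  p.set("state_v", String(STATE_V));
  p.set("sheet", s.sheet);
  p.set("vp", s.versionPair);
  p.set("types", [...s.types].sort().join(","));  // sorted for determinism
  p.set("disc", [...s.disciplines].sort().join(","));
  p.set("op", String(s.opacity));
  p.set("zoom", String(s.zoom));
  p.set("pan", `${s.pan.x},${s.pan.y}`);
  return p.toString();
}

function decodeState(
  qs: string,
  defaults: ViewState,
): { state: ViewState; futureVersion: boolean } {
  const p = new URLSearchParams(qs);
  const v = Number(p.get("state_v") ?? "1");
  const list = (k: string) => p.get(k)?.split(",").filter(Boolean);
  const pan = p.get("pan")?.split(",").map(Number);
  const state: ViewState = {
    sheet: p.get("sheet") ?? defaults.sheet,
    versionPair: p.get("vp") ?? defaults.versionPair,
    types: list("types") ?? defaults.types,
    disciplines: list("disc") ?? defaults.disciplines,
    opacity: p.has("op") ? Number(p.get("op")) : defaults.opacity,
    zoom: p.has("zoom") ? Number(p.get("zoom")) : defaults.zoom,
    pan: pan && pan.length === 2 ? { x: pan[0], y: pan[1] } : defaults.pan,
  };
  // futureVersion = true would drive the non-blocking "newer link" notice
  // while still restoring every field this app version understands.
  return { state, futureVersion: v > STATE_V };
}
```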

Acceptance Criteria
Deep link reproduces exact Change Layers view
Given a user has selected change types [geometry,text], discipline tags [Structural,MEP], heatmap intensity 70%, opacity 50%, preset "Client", zoom 120%, pan (x=4500,y=2200), active sheet A-102, and version pair V3..V5 When the user clicks "Copy link" and opens it in a new browser session by a recipient with project access Then the app loads the same project with sheet A-102 comparing V3..V5, reapplies the exact Change Layers selections and heatmap settings, restores the same zoom and pan, and renders identical diffs And the Change Layers panel reflects the same active selections and preset And no default setting overrides the restored state
Session restore after refresh and intra-app navigation
Given a user has an active filtered Change Layers view with a specific sheet, version pair, and viewport When the user refreshes the page, navigates to another route and returns, or closes and reopens the tab within the same session Then the previous Change Layers selections, heatmap settings, sheet, version pair, and viewport are restored automatically And the restored state matches the last user interaction without drift
Comment links capture reproducible filtered state
Given a user creates a comment while a specific Change Layers state (filters, disciplines, heatmap, viewport, sheet, version pair) is active When another user opens the comment via notification or the comment panel Then the app opens to the referenced sheet and version pair and reapplies the exact filters, heatmap settings, and viewport that existed at comment creation time And the comment is focused and visible in the sidebar
Approval links preserve decision context
Given a user records an approval on a filtered Change Layers view When an approver opens the approval link Then the same sheet, version pair, filters, heatmap settings, and viewport are restored And the approval record references a deep link containing a stable schema version parameter And restoring the link yields the same visible diffs the approver saw at approval time
Backward-compatible, versioned URL parameters
Given deep links previously generated with state_v=1 and links generated with state_v=2 When each link is opened in the current app version Then state_v=1 links restore all supported state fields and ignore unknown keys without error And state_v=2 links restore the full encoded state And unknown or missing parameters fall back to safe defaults without altering other restored values And opening a link with an unsupported future version displays a non-blocking notice and restores all compatible fields
Graceful handling of missing or unauthorized resources
Given a deep link references a sheet or version the recipient cannot access or that no longer exists When the link is opened Then the user is prompted to authenticate or request access without leaking names or metadata And if a resource is missing, the app loads the nearest available context (same project, latest accessible sheet/version) and displays a clear notice of what could not be restored And the URL and parameters contain only stable IDs and schema keys (no display names, emails, or secrets)
Canonical, deterministic URL and performance
Given two users produce a link from the same state with selections made in different orders When the links are compared and opened Then the URLs are identical due to normalized parameter ordering and encoding (no duplicates, stable casing) And opening the link restores the state in under 1500 ms on a 10 Mbps connection for projects up to 50 sheets and 500 diffs And the generated URL length for a state with up to 10 change types, 10 disciplines, and one preset is ≤ 1200 characters
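The "identical URLs from the same state" requirement can be met with a canonicalization pass run just before a link is produced: duplicate keys collapse (last write wins), key casing is normalized, and keys are emitted in sorted order. A minimal illustrative sketch:

```typescript
function canonicalizeQuery(qs: string): string {
  const seen = new Map<string, string>();
  for (const [k, v] of new URLSearchParams(qs)) {
    seen.set(k.toLowerCase(), v); // duplicates: keep the last occurrence
  }
  // Re-emit in sorted key order so two equivalent states yield one URL.
  const out = new URLSearchParams();
  for (const k of [...seen.keys()].sort()) out.set(k, seen.get(k)!);
  return out.toString();
}
```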
Accessibility & Keyboard Controls
"As a keyboard-first user, I want accessible controls and shortcuts so that I can navigate change layers efficiently without a mouse."
Description

Meet WCAG 2.1 AA by providing non-color indicators for change types, sufficient contrast in heatmap palettes, focus-visible states, ARIA labels, and screen reader descriptions for filters and overlays. Implement keyboard shortcuts for toggling change types, switching presets, adjusting heatmap opacity, and moving focus within the filter panel. Provide a reduced-motion option and tooltips with textual counts to support users with color-vision deficiencies and follow accessibility best practices.

Acceptance Criteria
Keyboard-only navigation of Change Layers panel
Given the Change Layers panel is closed and the canvas is focused When the user presses Shift+F Then the Change Layers panel opens and focus moves to the first interactive control in the panel within 100ms And the focus order follows: Change Types group -> Discipline Tags filter -> Presets group -> Opacity slider -> Apply -> Reset -> Close And Tab/Shift+Tab cycles through all interactive controls in the panel without trapping focus And Arrow keys navigate within grouped controls (Left/Right changes the focused toggle within Change Types and Presets; Up/Down changes items within Discipline Tags listbox) without moving focus outside the group And Space/Enter activates the focused control (toggle/checkbox/button) and Esc closes the panel returning focus to the element that invoked it
Discoverable and operable keyboard shortcuts for change filters and presets
Given the workspace is loaded and no text input is focused When the user presses G, D, T, A, or S Then the corresponding change type (Geometry, Dimensions, Text, Annotations, Symbols) toggles on/off and a non-intrusive toast confirms the state change When the user presses 1, 2, or 3 Then the corresponding preset (Client, QA/QC, Consultant) applies and announces the active preset When the user presses [ or ] Then heatmap opacity decreases or increases by 10% respectively, clamped between 10% and 100% When the user presses O Then heatmap opacity resets to its default (60%) When the user presses Ctrl+Arrow Up/Down while the panel is open Then focus jumps to the previous/next section within the panel And all shortcuts are inactive while focus is in a text-editable field And pressing Shift+/ opens the Shortcut Help overlay, which is keyboard navigable, labeled, and dismissible with Esc And no shortcut overrides default browser shortcuts in Chrome, Firefox, Safari, or Edge on desktop
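Modeling the shortcut rules as a pure function makes the "inactive while typing" guard and the 10–100% opacity clamp unit-testable without a DOM. The key-to-action mapping below mirrors the criteria above; the `Action` shape and default opacity handling are assumptions:

```typescript
type Action =
  | { kind: "toggleType"; type: string }
  | { kind: "applyPreset"; preset: string }
  | { kind: "setOpacity"; value: number }
  | null;

const TYPE_KEYS: Record<string, string> = {
  g: "geometry", d: "dimensions", t: "text", a: "annotations", s: "symbols",
};
const PRESET_KEYS: Record<string, string> = { "1": "Client", "2": "QA/QC", "3": "Consultant" };

function shortcutAction(key: string, inTextField: boolean, opacity: number): Action {
  if (inTextField) return null; // shortcuts never fire while typing
  const k = key.toLowerCase();
  if (TYPE_KEYS[k]) return { kind: "toggleType", type: TYPE_KEYS[k] };
  if (PRESET_KEYS[k]) return { kind: "applyPreset", preset: PRESET_KEYS[k] };
  if (k === "[") return { kind: "setOpacity", value: Math.max(10, opacity - 10) };
  if (k === "]") return { kind: "setOpacity", value: Math.min(100, opacity + 10) };
  if (k === "o") return { kind: "setOpacity", value: 60 }; // reset to default
  return null; // unhandled keys fall through to the browser
}
```

The caller would compute `inTextField` from the focused element and apply the returned action, leaving browser defaults untouched whenever `null` is returned.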
Non-color indicators and legend for change types
Given change highlights are visible on the drawing When any change type is enabled Then the highlight uses both color and a unique non-color indicator (icon/pattern/hatch) for that type And a persistent legend shows each enabled type with its icon/pattern sample, text label, and count (e.g., "Geometry — 23") And with a grayscale filter applied to the app (100% grayscale), each change type remains visually distinguishable and correctly mapped to its legend item And hovering or focusing a legend item highlights corresponding overlays and exposes a text tooltip "{Type} changes: {count}" that is also available on focus And printing or exporting to PDF in monochrome preserves pattern distinctions
Heatmap and UI contrast meets WCAG 2.1 AA
Given the default theme (light and dark) Then all text in the Change Layers UI (filters, tooltips, toasts) has a contrast ratio ≥ 4.5:1 against its background (≥ 3:1 for text ≥ 18pt/14pt bold) And all non-text UI components, including icons, toggles, slider tracks/handles, and focus indicators, have a contrast ratio ≥ 3:1 against adjacent colors And heatmap overlays include a 2px outline or halo that maintains ≥ 3:1 contrast with the underlying plan content in both light and dark backgrounds And no information is conveyed by color alone; every color-coded element has a text label and/or non-color indicator
Focus order and focus-visible states across filters and overlays
Given any interactive element in the Change Layers UI is focused via keyboard Then a visible focus indicator of at least 2px thickness with contrast ≥ 3:1 is shown and is not clipped or obscured And focus order is logical and matches the visual layout without jumping across sections And programmatic focus changes occur only after explicit user actions (e.g., opening the panel, applying a preset) And dismissing dialogs/overlays (panel, help, tooltips activated by keyboard) returns focus to the element that invoked them And there are no keyboard traps; the user can always Tab forward and backward to exit a region
Screen reader support for filters, tooltips, and counts
Given a screen reader (NVDA, JAWS, or VoiceOver) is active When navigating the Change Types group Then each toggle exposes an accessible name including the type and on/off state and current count (e.g., "Geometry, on, 23 changes") via aria-label or aria-labelledby + aria-describedby When navigating the Presets Then each preset exposes role=button with aria-pressed reflecting state and a description of its effect (e.g., "Client preset — shows geometry and dimensions only") When navigating the Opacity slider Then role=slider with aria-valuemin=10, aria-valuemax=100, aria-valuenow updates on change, and the slider has a programmatic label "Heatmap opacity" When filters change and counts update Then a polite aria-live region announces the changed counts within 500ms without duplicative announcements And all tooltips are reachable on focus and expose their text to assistive technologies
Reduced motion preference and control for heatmap effects
Given the user has OS-level prefers-reduced-motion enabled When the workspace loads Then all non-essential animations (heatmap pulsing, fades, sliding panels) are disabled or replaced with instant state changes And any remaining transitions are opacity-only and ≤ 100ms When the user toggles the in-app "Reduce Motion" setting Then the same reductions apply regardless of OS settings and the preference persists across sessions in the same browser And heatmap visibility changes (enable/disable types, switch presets, change opacity) occur without motion-based effects
Performance & Scalability Guarantees
"As a QA reviewer, I want smooth interactions on large drawings so that I can review changes without lag disrupting my workflow."
Description

Optimize data structures, tiling, and GPU-accelerated rendering to ensure responsive interactions on large sheets and across many changes. Target sub-150ms filter application, 60 FPS pan/zoom under typical loads, and incremental loading for very large drawings. Implement caching of diff metadata, worker-based computations for intensity aggregation, and telemetry to monitor real-world performance. Define graceful degradation strategies (e.g., cluster levels, simplified shaders) to maintain usability on low-spec devices.

Acceptance Criteria
Sub-150ms Filter Toggle Under Typical Load
Given a 12k×12k sheet containing 2,500 diff items across five change types and three discipline tags on a baseline device (4-core CPU, integrated GPU, 8 GB RAM, 1080p) When the user toggles any single change type, discipline tag, or preset in Change Layers Then the heatmap and highlights update within 150 ms at p95 over 100 toggles, measured from input to next fully painted frame And p99 update time is ≤ 250 ms And no UI thread stall exceeds 50 ms at p99 during the toggle sequence And interaction latency (input-to-pointer-up acknowledgment) remains < 100 ms at p95
60 FPS Pan/Zoom While Change Layers Active
Given Change Layers overlays are enabled with at least two change types and one discipline tag active on a 12k×12k sheet with 2,500 diffs When the user continuously pans for 10 seconds and zooms between 0.5× and 6× over another 10 seconds Then average frame rate is ≥ 60 FPS and p95 frame time ≤ 16.7 ms And frames exceeding 33 ms constitute ≤ 1% of total frames during the run And no visible hitch > 50 ms occurs during tile swaps or shader reconfiguration And GPU utilization remains below 90% on average during the interaction
Incremental Loading for Very Large Drawings
Given a very large sheet (≥ 30k×30k effective pixels or equivalent vector complexity) with 15,000 diff items When the user opens the sheet or navigates to a new viewport Then first visible content (low-res tiles) appears within 500 ms at p95 And viewport-resolution tiles complete within 2,000 ms at p95 while background tiles continue streaming And UI remains responsive with main-thread long tasks (>50 ms) ≤ 1 per 5 seconds during loading And a loading placeholder/shimmer is shown until first paint, then replaced progressively without flicker
Diff Metadata Caching and Warm-Start Performance
Given the user has previously viewed the same sheet and filter combinations in the current or prior session with caching enabled When the user re-opens the sheet and toggles any previously computed filter or preset Then cache hit ratio for diff metadata lookups is ≥ 85% over the session And warm-toggle update time is ≤ 80 ms at p95 over 100 toggles And cache invalidates within one frame when the underlying diff set changes (e.g., new revision) And persistent cache size is bounded to ≤ 50 MB per project with LRU eviction
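The 50 MB budget with LRU eviction can be sketched as a byte-bounded cache that relies on JavaScript `Map` insertion order to track recency. Entry byte sizes would come from an estimator over the diff metadata; all names here are illustrative:

```typescript
class LruByteCache<V> {
  private map = new Map<string, { value: V; bytes: number }>();
  private used = 0;
  constructor(private maxBytes: number) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    // Re-insert to mark as most recently used (Map preserves insertion order).
    this.map.delete(key);
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V, bytes: number): void {
    const old = this.map.get(key);
    if (old) { this.used -= old.bytes; this.map.delete(key); }
    this.map.set(key, { value, bytes });
    this.used += bytes;
    // Evict in least-recently-used order until back under budget.
    for (const [k, e] of this.map) {
      if (this.used <= this.maxBytes) break;
      this.map.delete(k);
      this.used -= e.bytes;
    }
  }

  get sizeBytes(): number { return this.used; }
}
```

Invalidation on a new revision would simply drop the affected keys, satisfying the "invalidates within one frame" criterion since deletion is synchronous.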
Web Worker Intensity Aggregation Without Main-Thread Jank
Given a sheet with 10,000 diff items requiring heatmap intensity aggregation When aggregation runs with Web Workers enabled Then main-thread blocked time per aggregation cycle is ≤ 10 ms at p95 And total aggregation completes in ≤ 120 ms at p95 And results match the single-thread reference within ±0.5% intensity at all sampled tiles And worker pool size equals min(available cores − 1, 4) and falls back gracefully when hardwareConcurrency is unavailable And if workers are unavailable, a degraded single-thread path completes in ≤ 300 ms at p95 with an on-screen indicator
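The pool-sizing rule above, min(available cores − 1, 4), is small enough to state directly as code. The fallback of one worker when `hardwareConcurrency` is unavailable is an assumption consistent with the degraded single-thread path:

```typescript
// Sizes the aggregation worker pool; hardwareConcurrency is undefined in some
// environments, in which case we fall back to a single worker.
function workerPoolSize(hardwareConcurrency?: number): number {
  if (!hardwareConcurrency || hardwareConcurrency < 1) return 1; // graceful fallback
  return Math.max(1, Math.min(hardwareConcurrency - 1, 4));
}
```

In the browser this would be called as `workerPoolSize(navigator.hardwareConcurrency)`.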
Performance Telemetry Coverage and Accuracy
Given telemetry is enabled and Change Layers is used for at least 1 minute in a session When users toggle filters, pan/zoom, and load tiles Then performance events are recorded for filterApplied.durationMs, panZoom.frameTime.p95, tile.firstPaintMs, aggregation.durationMs, and degradeMode.active And ≥ 95% of eligible sessions upload at least one performance heartbeat within 30 seconds (or on exit) with retry/backoff when offline And telemetry overhead remains < 1% CPU and < 50 KB/s average network per active minute via batching (≤ every 30 s or ≥ 10 events) And timestamps have ≤ 5 ms measurement error and contain no PII
Graceful Degradation On Low-Spec Devices Or Overload
Given the device is low-spec (≤ 2 cores or no WebGL2) or runtime performance drops (p95 frame time > 25 ms over 3 seconds) When Change Layers is active Then the renderer auto-switches to a degraded mode (e.g., simplified shaders, clustered highlights, 0.5× heatmap resolution) within 500 ms And average FPS in degraded mode is ≥ 30 and p95 frame time ≤ 33 ms And an unobtrusive indicator informs the user with an option to override And normal mode auto-restores after 5 seconds of stable performance (p95 frame time ≤ 20 ms)

Impact Meter

Scores each change region and sheet from 0–100 using factors like changed area, proximity to critical zones, code tags, and linked cost risk. Surfaces the highest-impact deltas first, helping teams prioritize review time and reduce rework.

Requirements

Impact Scoring Engine
"As a project lead, I want each change region to receive an objective 0–100 impact score so that I can prioritize reviews on the highest-risk changes first."
Description

Compute a normalized 0–100 impact score for every detected change region and sheet using weighted factors including: changed area ratio, proximity/overlap with critical zones, presence and severity of code tags, and linked cost risk. Provide project-level configurable weights with sensible defaults and guardrails (e.g., 0–1 per factor, sum normalized). Recalculate scores in near real time when markups, regions, or metadata change, and persist factor breakdowns for transparency and debugging. Expose scores and factor components via internal APIs to the UI and workflow services. Ensure determinism and testability, meet performance targets (e.g., <300 ms per region for up to 500 regions per sheet), and degrade gracefully when external data (cost or code) is unavailable. Integrate with PlanPulse’s versioned markup model so scores are scoped per version and comparable across versions.

Acceptance Criteria
Normalized 0–100 Score Computation per Region and Sheet
Given a detected change region and configured factor weights where each factor raw score ∈ [0,1] and each weight ∈ [0,1] with sum = 1 When the engine computes the region score Then the score is a float in [0,100] and equals 100 × Σ(raw_i × weight_i) And the returned breakdown includes, for each factor, the raw score, the applied weight, the weighted contribution, and the final normalized score Given multiple regions on a sheet and default aggregation When computing the sheet score Then the sheet score equals the maximum of its region scores and lies in [0,100] Given the same inputs and environment When the engine runs multiple times or processes regions in a different order Then the scores and breakdown values are identical within 1e-6
Project-Level Weight Configuration Guardrails and Defaults
Given a new project with no custom weights When the engine initializes Then default weights for all factors are persisted, each ∈ [0,1], and the set sums to 1 Given a weight update where all factor weights are provided and each ∈ [0,1] but the sum ≠ 1 When the update is saved Then the backend normalizes the set to sum to 1 and persists the normalized weights with an audit record of the change Given a weight update where any factor weight is outside [0,1] When saving the update Then the request is rejected with a validation error, no changes are persisted, and defaults remain active Given weights are updated successfully When saved Then all affected region and sheet scores are marked stale and recomputed using the new weights
Near Real-Time Recalculation on Change Events
Given any of the following committed changes on a sheet version: markup edit, change-region geometry change, critical zone update, code tag add/update/remove, cost link/value update, or weight change When the change is committed Then all impacted region scores are recomputed and persisted within 2 seconds P95, with per-region compute latency P95 < 300 ms And the sheet score is recomputed accordingly and persisted And the internal API returns the updated scores and factor breakdowns for the affected entities
Performance Under Load for Up to 500 Regions per Sheet
Given a sheet version containing 500 detected change regions and all required metadata available When a full recomputation is triggered Then at least 95% of region computations complete in under 300 ms each, no computation fails, and the operation completes within service timeouts
Graceful Degradation When External Data Is Unavailable
Given external cost or code-tag data is unavailable or times out for a region at compute time When the score is computed Then the corresponding factor raw score is treated as 0, the factor is flagged as "missing" in the breakdown, weights are not renormalized, and a "degraded" flag is set true for that computation When the previously missing external data becomes available Then the affected scores are automatically recomputed and the "degraded" flag is set false
Version-Scoped Scoring and Cross-Version Comparability
Given two versions V1 and V2 of the same sheet When scores are computed Then each region and sheet score is stored and retrievable scoped by versionId, and cross-version queries return results labeled with their versionId Given a region unchanged between V1 and V2 and identical weights and external data availability When scores are computed Then the scores and factor breakdowns are equal within 1e-6 across versions Given a region that changes between V1 and V2 When scores are computed Then the difference in scores reflects the changed factor values and remains comparable on the same 0–100 scale
Internal APIs Provide Scores and Factor Breakdowns with Persistence
Given a request to the internal API to retrieve scores for a version with scope = region or sheet When a valid versionId is supplied Then the API responds 200 with items including id (regionId or sheetId), versionId, score, per-factor raw scores, weights, weighted contributions, degraded flag, and computedAt timestamp Given a request for a specific region's latest factor breakdown When a valid regionId and versionId are supplied Then the API returns the latest persisted breakdown corresponding to the current score Given an invalid id or versionId When requested Then the API responds with 404, and given a malformed request, the API responds with 400
Critical Zone Mapping
"As an architect, I want to define and visualize critical zones on sheets so that the impact score reflects proximity to areas that carry higher project risk."
Description

Enable definition, management, and visualization of critical zones on each sheet to influence impact scoring. Support creation via polygon drawing, import from CAD/BIM layers, or from project templates (e.g., egress paths, structural cores, MEP shafts). Allow zone typing and per-zone risk weights, versioning alongside sheets, and fine-grained permissions. Compute proximity metrics (minimum distance and overlap ratios) between zones and change regions, feeding normalized values to the scoring engine. Display zones as overlays with toggles, tooltips, and a legend, ensuring clear visibility without obstructing markups.

Acceptance Criteria
Polygon Zone Creation & Editing
Given a sheet is open and the user has Zone Editor permission When the user draws a closed polygon with ≥3 vertices and clicks Save Then a new critical zone is created on the current sheet version with default Type="Unassigned" and Risk Weight=1.0 Given an existing zone is selected When the user drags vertices, adds/removes vertices, or moves the polygon Then geometry updates in real time, area is recalculated, and changes are not committed until the user clicks Save Given zone metadata is open When the user sets Type from the allowed list and enters a Risk Weight between 0.1 and 5.0 with up to 2 decimal places Then inputs are validated, invalid values are blocked with inline error, and valid values are saved with the zone Given the user performs zone edits When the user presses Undo or Redo Then the last 20 zone edit actions are reversible within the current editing session Given a zone is saved When the sheet is versioned Then the zone is stamped to the matching sheet version and retains a stable Zone ID across versions
CAD/BIM Layer Import to Zones
Given a sheet is open and the user has Zone Editor permission When the user imports zones from a supported file (DWG, DXF, or GeoJSON) and selects one or more layers Then polygons on selected layers are converted to zones, preserving geometry, with each zone assigned a Type from layer-name mapping and default Risk Weight from project defaults Given an import uses different units than the sheet When the user confirms the detected units Then imported zones are scaled to match the sheet and align within ±2 px of expected positions Given duplicate or self-intersecting geometries exist in the import When the import runs Then duplicates are merged by geometry, invalid polygons are skipped with a warning, and a summary report lists total imported, merged, and skipped Given an import completes When conflicts in zone names occur Then unique names are auto-resolved by appending an incrementing suffix and the mapping is shown in the report
Template Zone Application
Given a project template with predefined zones (e.g., Egress Paths, Structural Cores, MEP Shafts) exists When the user applies the template to a sheet Then all template zones are created on the sheet with their predefined Types, Risk Weights, names, and colors Given template zones overlap existing zones When the user applies the template Then the user is prompted to Merge, Keep Both, or Skip per conflict; the chosen action is applied and logged Given a template is applied When the user opens the zone legend Then template-added Types appear with their colors and can be toggled independently Given a template was previously applied to the sheet When the user attempts to apply the same template again Then the system prevents duplicate application and offers to re-sync changed template definitions with a preview of differences
Proximity Metrics & Normalization to Scoring Engine
Given project proximity normalization threshold T=5.0 m (configurable) and a change region R When R overlaps a critical zone Z by 25% of R’s area Then overlap_ratio(R,Z)=0.25, distance_norm(R,Z)=1.0, and both values are emitted to the scoring engine within 200 ms of saving R Given R does not overlap Z and the minimum edge-to-edge distance between R and Z is 2.5 m When metrics are computed with T=5.0 m Then overlap_ratio(R,Z)=0.0 and distance_norm(R,Z)=0.5 Given R does not overlap Z and min distance ≥ T When metrics are computed Then distance_norm(R,Z)=0.0 Given a sheet’s unit settings are defined When metrics are computed Then distances use sheet units, values are clamped to [0,1], rounded to 2 decimals for storage, and include per-zone outputs plus aggregate maxima per region Given a region is created, moved, resized, or deleted When computations run with up to 300 zones on the sheet Then metrics recompute and push updates to the scoring engine in under 300 ms for 95th percentile operations
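The criteria above imply a simple piecewise rule: any overlap saturates distance_norm at 1.0, and otherwise proximity falls off linearly to 0.0 at the threshold T. A minimal sketch (function and parameter names are illustrative; overlap area and minimum edge-to-edge distance are assumed precomputed from the polygon geometry in sheet units):

```python
def proximity_metrics(overlap_area: float, region_area: float,
                      min_distance: float, threshold: float = 5.0) -> tuple[float, float]:
    """Compute (overlap_ratio, distance_norm) for one region/zone pair,
    clamped to [0, 1] and rounded to 2 decimals for storage."""
    overlap_ratio = round(min(max(overlap_area / region_area, 0.0), 1.0), 2)
    if overlap_ratio > 0:
        distance_norm = 1.0  # any overlap saturates proximity
    else:
        # Linear falloff: 0 m -> 1.0, threshold T or beyond -> 0.0
        distance_norm = max(0.0, 1.0 - min_distance / threshold)
    return overlap_ratio, round(distance_norm, 2)
```

With T=5.0 m this reproduces the worked cases in the criteria: 25% overlap yields (0.25, 1.0), a 2.5 m gap yields (0.0, 0.5), and a gap at or beyond T yields (0.0, 0.0).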
Permissions & Access Control for Zones
Given project roles are configured with permissions View Zones, Create/Edit Zones, and Manage Zone Types/Weights When a user without View Zones opens a sheet Then zone overlays, tooltips, and legend are hidden Given a user lacks Create/Edit Zones When the user attempts to draw, edit, or delete a zone Then the action is blocked, a permission error is shown, and the attempt is logged with user, time, and action Given a user has Manage Zone Types/Weights When the user edits zone Type options or default weights Then changes apply project-wide, versioned with an audit trail, and propagate to new zones only (existing zones remain unchanged unless explicitly updated) Given zone-level overrides are configured for a specific zone When a user without override tries to edit that zone Then access is denied even if the user has general Create/Edit Zones permission
Overlay Visualization, Toggles, Tooltips, and Legend
Given a sheet with zones exists When the user toggles zone overlays by Type or All Zones Then zones appear/disappear without affecting the visibility of markups, with markups always rendered above zones Given zone overlays are visible When the user hovers or focuses a zone Then a tooltip shows Name, Type, Risk Weight, Created By, and Last Modified, and the tooltip is accessible via keyboard focus Given the zone legend is opened When the user views Types Then each Type shows a distinct color and label, colors meet WCAG contrast ratio ≥4.5:1 against the sheet background, and legend changes persist per user per sheet Given many zones overlap When zones are displayed Then default styling uses 35% fill opacity and 2 px outline to avoid obstructing markups, and hovering a zone temporarily raises its outline z-index without covering markups
Zone Versioning, History, Compare, and Rollback
Given zones exist on a sheet version V When the sheet is saved as version V+1 after zone edits Then a new zone layer snapshot is created for V+1, and each zone retains a stable Zone ID across versions Given two sheet versions V and V+1 When the user opens Zone Compare Then added zones are highlighted in green, removed in red, and modified geometry in yellow with a side-by-side or overlay diff Given a user has permission to manage versions When the user rolls back zone state from V+1 to V Then the prior zone geometry and metadata are restored, and the rollback is recorded in the audit log with timestamp, user, from-version, and to-version Given a user opens an older sheet version When zone overlays load Then the system displays the zones from the matching version without mixing data from other versions
Code Tag Recognition
"As a reviewer, I want code references to be detected within change regions so that compliance-sensitive changes are surfaced earlier."
Description

Detect and interpret code references within or adjacent to change regions to elevate compliance-sensitive changes. Use OCR on targeted regions with domain-specific parsing (regex and dictionaries) for building code sections, fire ratings, ADA, life-safety symbols, and jurisdiction-specific tags. Store detected tags with confidence scores, enable manual validation/override, and map tags to a severity taxonomy that feeds the scoring engine. Provide configurable tag dictionaries per project/jurisdiction and handle noisy drawings with fallbacks and user prompts for low-confidence cases.

Acceptance Criteria
OCR Tag Detection Within and Adjacent to Change Regions
Given a sheet with at least three change regions and code references inside the regions or within 15mm at plotted scale (building code sections, fire ratings, ADA, life-safety, jurisdiction tags) When targeted OCR and domain parsing run on those regions Then tags located inside the region or within the adjacency distance are detected and normalized by type and code And each detection stores a bounding box, raw text, normalized value, tag type, confidence score, parser source, and dictionary source And duplicate detections within a region are deduplicated And the adjacency distance defaults to 15mm and is configurable per project within 5–50mm
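One plausible shape for the regex-and-dictionary parsing step is sketched below. The patterns shown (ADA tags, hour-based fire ratings, IBC section numbers) are invented examples, not the product's actual per-jurisdiction dictionaries, and bounding boxes and confidence scores from the OCR layer are omitted:

```python
import re

# Illustrative patterns only; real dictionaries would be configured per project/jurisdiction.
TAG_PATTERNS = {
    "ada":          re.compile(r"\bADA[- ]?(\d{3})\b"),
    "fire_rating":  re.compile(r"\b(\d+)\s*(?:HR|HOUR)\s*(?:FIRE\s*)?RAT(?:ED|ING)\b", re.I),
    "code_section": re.compile(r"\bIBC\s*§?\s*(\d+(?:\.\d+)*)\b"),
}

def detect_tags(ocr_text: str) -> list[dict]:
    """Return normalized tag records parsed from the raw OCR text of one region."""
    tags = []
    for tag_type, pattern in TAG_PATTERNS.items():
        for m in pattern.finditer(ocr_text):
            tags.append({"type": tag_type,
                         "raw_text": m.group(0),
                         "normalized": m.group(1)})
    # Deduplicate identical detections within the region, per the criteria above
    seen, unique = set(), []
    for t in tags:
        key = (t["type"], t["normalized"])
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique
```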
Confidence Thresholding and Low-Confidence Prompting
Given each detected tag has a confidence score And the project-level default confidence threshold is 0.75 When a tag’s confidence is below the threshold Then the tag is flagged as "Needs validation" and a prompt shows ranked suggestions And the user can Accept, Edit, or Dismiss the suggestion And upon action, the tag’s status updates accordingly and the outcome is stored And changing the threshold reclassifies existing detections and applies to new runs immediately
Manual Validation and Override with Audit Trail
Given a detected tag (suggested or previously validated) When a user edits the normalized tag, adjusts the severity mapping, or rejects the suggestion Then the system records an override with user, timestamp, previous value, new value, and reason And the Impact Meter recalculates within 1 second to reflect the change And the user can undo the last change and view a complete audit history for the tag And only users with Editor (or higher) permissions can apply overrides
Severity Mapping and Impact Meter Integration
Given a change region with initial impact score S0 and no detected tags When a High-severity code tag is added or validated in that region Then the region’s impact score increases by the configured High-severity weight (±5%) relative to S0 And if multiple tags exist, the configured aggregation rule (default: max weight with cap) is applied And removing or downgrading a tag updates the score immediately And the sheet-level score reflects the aggregate of recalculated region scores
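The default aggregation rule named above ("max weight with cap") could be read as the sketch below. Whether severity weights apply multiplicatively or additively is a configuration detail the criteria leave open, so this is only one plausible interpretation:

```python
def aggregate_severity(base_score: float, tag_weights: list[float],
                       cap: float = 100.0) -> float:
    """One reading of 'max weight with cap': apply the largest tag weight
    as a multiplier on the region's base impact score, capped at `cap`."""
    if not tag_weights:
        return base_score  # no detected tags: score is unchanged
    return min(base_score * max(tag_weights), cap)
```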
Configurable Jurisdictional Tag Dictionaries
Given a project has jurisdiction set to "NYC" and global dictionaries enabled When the jurisdiction is switched to "IBC 2021" and a project-level custom dictionary is uploaded Then parsing uses precedence: project > jurisdiction > global for lookups and normalization And existing sheets are queued for async re-parse and tag statuses update upon completion And admins can add synonyms and symbol aliases that take effect on the next parse And dictionaries can be exported/imported in JSON and must pass schema validation before activation
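The project > jurisdiction > global lookup precedence maps naturally onto a layered dictionary. A minimal sketch with invented sample entries (the token-to-tag values are placeholders):

```python
from collections import ChainMap

# Hypothetical dictionaries mapping raw OCR tokens to normalized tags.
global_dict       = {"FR-1HR": "fire_rating:1", "ADA": "ada:general"}
jurisdiction_dict = {"FR-1HR": "fire_rating:1h_nyc"}    # e.g. an NYC override
project_dict      = {"ADA": "ada:project_alias"}        # project-level custom upload

# First map wins, giving precedence: project > jurisdiction > global.
lookup = ChainMap(project_dict, jurisdiction_dict, global_dict)
```

Adding a synonym to any layer takes effect on the next parse simply because lookups always consult the live maps.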
Noisy Drawing Handling and Fallbacks
Given a sheet with low-quality scans (noise, skew, ~150 dpi) When OCR confidence for a region remains below 0.5 after preprocessing Then a fallback pass runs (e.g., alternative OCR/configuration) And if confidence still remains below the project threshold, the region is flagged "Unreadable" and a manual tagging prompt is shown And processing continues for other regions without error And flagged regions appear under a "Needs Attention" filter for quick review
Data Persistence and Versioning of Detected Tags
Given detected tags exist on sheet version V1 When the sheet is superseded by version V2 and detection is re-run Then V1 detections and overrides are retained with version linkage, and V2 detections are stored as a new version And each tag record includes sheet_id, region_id, version_id, bounding box, raw_text, normalized value, tag type, confidence, mapping_id, override flag, created_by/updated_by, and timestamps And queries return latest validated tags by default with an option to retrieve full history And all data persists across sessions and is recoverable after system restart
Cost Risk Linkage
"As a project manager, I want to link changes to cost risk data so that impact scores reflect potential budget implications."
Description

Link change regions to cost-risk data so impact scores reflect potential budget implications. Support manual linking to cost items, rule-based inference from layers/symbols, and integrations for importing estimates (CSV/API). Normalize cost risk into bands or a numeric index, handle unknowns with configurable defaults, and automatically refresh scores when cost data updates. Provide UI cues for missing/stale cost links and an audit trail of edits. Ensure read-only mode for external integrations and robust error handling when sources are unavailable.

Acceptance Criteria
Manual Cost Link Assignment
Given a change region is selected and the cost catalog is available, When the user links one or more cost items to the region and clicks Save, Then the links persist to storage and the region's impact score recalculates within 1 second using the configured aggregation mode (max/sum/average) and normalization settings. Given one or more cost items are already linked to a region, When the user removes any linked item, Then the impact score recalculates within 1 second and the audit log records an entry with user, timestamp, region ID, and removed item IDs. Given a user searches the cost catalog to link items, When the user types 3 or more characters, Then the system returns matching results within 300 ms at the 95th percentile and displays item name, ID, and current risk band/index. Given a user attempts to link a cost item already linked to the region, When the link action is confirmed, Then the system prevents duplicate links and shows a non-blocking notice indicating the item is already linked.
Rule-Based Inference from Layers/Symbols
Given workspace rule mappings exist from CAD/BIM layers or symbols to cost items or risk bands, When a new change region is created or resized to intersect mapped layers/symbols, Then the system auto-links the corresponding cost items and assigns the inferred risk band/index within 1 second. Given multiple rules match a region, When inference runs, Then conflicts are resolved by rule priority (higher priority wins); if tied, the most specific rule (symbol over layer) wins; if still tied, the most recently updated rule wins; duplicates are deduplicated. Given an auto-inferred link has been applied, When the user manually edits the links (add/remove/replace), Then the override persists and future automatic inference is suppressed for that region unless the user explicitly selects Reapply Rules. Given inference occurs, When links are added or overridden, Then an audit entry is recorded capturing rule ID(s), matched layer/symbol identifiers, before/after link sets, user (system for auto), and timestamp.
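The conflict-resolution order above (priority, then specificity with symbol over layer, then most recent update) collapses into a single tuple-keyed max. Field names here are illustrative, and timestamps are shown as ISO strings so they compare lexicographically:

```python
def pick_winning_rule(matched_rules: list[dict]) -> dict:
    """Resolve rule conflicts: higher priority wins; ties fall to the more
    specific rule (symbol over layer); remaining ties fall to the most
    recently updated rule."""
    specificity = {"symbol": 1, "layer": 0}
    return max(matched_rules,
               key=lambda r: (r["priority"],
                              specificity[r["target"]],
                              r["updated_at"]))
```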
CSV Cost Estimate Import (External, Read-Only)
Given a user selects Import CSV and maps columns for item_id (required), title (required), risk_index or risk_band (at least one required), and optional estimate and external_source_id, When a UTF-8 CSV with up to 50,000 rows is uploaded, Then valid rows are imported and normalized, with a success summary showing rows inserted/updated/skipped. Given the CSV contains invalid or missing required fields, When import runs, Then invalid rows are skipped with a downloadable error report (line number and reason), and no partial data is saved for those rows. Given items are created or updated via CSV, When viewed in the cost catalog, Then they are marked as External and are read-only; any edit attempt is blocked with a message indicating the source of truth and a Link to manage imports. Given an imported row includes a risk_band or risk_index outside accepted values, When normalization occurs, Then values are mapped or clamped per configuration and flagged as Assumed in the UI. Given import completes, When linked regions reference updated items, Then impacted region and sheet scores refresh automatically per the refresh SLOs.
API Integration Sync and Source Unavailability Handling
Given an external cost system API is configured with valid credentials, When a scheduled or manual sync runs, Then new/updated cost items are ingested, normalized, and marked External (read-only), and a sync report shows created/updated/deleted counts. Given the external API is unavailable, times out, or returns 5xx/4xx errors, When sync runs, Then the system retains last-known-good data, marks cost data as Stale with a timestamp, displays a non-blocking banner in the Impact Meter, and retries with exponential backoff (1m, 5m, 15m, 30m) up to 24 hours. Given items are External, When a user attempts to edit them in the UI or via API, Then the operation is rejected with HTTP 409/ReadOnly and a descriptive message; audit logs capture the blocked attempt. Given a sync partially succeeds, When only some items update, Then only affected regions/sheets recalc; failures are listed in the sync report with item identifiers and error reasons; a manual Retry Failed option is available.
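The retry cadence above (1m, 5m, 15m, 30m, up to 24 hours) can be sketched as a schedule generator. The assumption that the 30-minute interval repeats until the 24-hour window closes is an interpretation; the criteria only list the first four delays:

```python
RETRY_DELAYS_MIN = [1, 5, 15, 30]   # backoff steps; last step assumed to repeat
MAX_WINDOW_MIN = 24 * 60            # give up 24 hours after the initial failure

def retry_schedule() -> list[int]:
    """Minutes after the initial failure at which each sync retry fires
    (sketch only; actual timer/queue wiring is omitted)."""
    schedule, elapsed, i = [], 0, 0
    while True:
        delay = RETRY_DELAYS_MIN[min(i, len(RETRY_DELAYS_MIN) - 1)]
        elapsed += delay
        if elapsed > MAX_WINDOW_MIN:
            return schedule
        schedule.append(elapsed)
        i += 1
```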
Normalization to Risk Bands and Numeric Index (with Defaults)
Given workspace settings define a risk scale using bands (e.g., Low/Medium/High/Critical) mapped to numeric indices [0–100] or direct numeric indices, When cost items are linked or imported, Then all risk values are normalized to an internal index in [0–100] using the configured mapping. Given a cost item has missing or unknown risk, When normalization runs, Then the system applies the configurable default risk index and band, flags the value as Assumed in the UI, and includes the assumption in audit logs. Given a cost item provides a numeric index outside [0–100], When normalization runs, Then the value is clamped to the nearest bound and the clamp is recorded in the audit log. Given normalization settings are changed by an admin, When saved, Then the change is versioned, recorded in the audit trail with before/after mappings, and users are prompted to Recalculate; upon confirmation, all region and sheet scores are recalculated.
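The normalization and default-handling rules can be sketched as below. The band-to-index mapping and the default of 45 are invented stand-ins for the configurable workspace settings; the second return value corresponds to the "Assumed" flag surfaced in the UI and audit log:

```python
from typing import Optional, Tuple

BAND_TO_INDEX = {"Low": 15, "Medium": 45, "High": 75, "Critical": 95}  # example mapping
DEFAULT_INDEX = 45  # stand-in for the configurable default risk index

def normalize_risk(risk_band: Optional[str] = None,
                   risk_index: Optional[float] = None) -> Tuple[int, bool]:
    """Return (internal index in [0, 100], assumed_flag)."""
    if risk_index is not None:
        # Out-of-range numeric indices are clamped to the nearest bound
        return min(max(int(risk_index), 0), 100), False
    if risk_band in BAND_TO_INDEX:
        return BAND_TO_INDEX[risk_band], False
    return DEFAULT_INDEX, True  # missing/unknown risk -> default, flagged Assumed
```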
Automatic Impact Score Refresh on Cost Data Changes
Given one or more regions are linked to cost items, When any linked cost item’s risk value or estimate changes via manual edit, CSV import, or API sync, Then all affected region and sheet impact scores recalculate and update UI ordering within 5 seconds for up to 500 affected regions (p95) and within 30 seconds for up to 5,000 regions (p95). Given a recalculation is in progress, When the user views the Impact Meter, Then a non-blocking in-progress indicator is shown and cleared upon completion; stale badges are removed once fresh scores are applied. Given a recalculation fails for any region, When the failure is detected, Then the previous score is retained, the region is flagged with a warning icon and tooltip containing the error, and a retry is attempted up to 3 times with backoff. Given recalculation completes, When the activity is logged, Then the audit trail includes the trigger source (manual, CSV, API), affected counts, duration, and a checksum/hash of the new scoring snapshot.
UI Cues for Missing/Stale Links and Comprehensive Audit Trail
Given a region has no linked cost items or only items with unknown risk, When the region is displayed, Then a Missing Cost Link badge is shown with a tooltip describing the condition and a quick action to Add Link. Given any linked cost item is marked stale due to outdated external data, When the region is displayed, Then a Stale Data badge is shown with last-sync time and a quick action to Retry Sync (if permitted). Given the user opens the Audit panel, When filtering by Cost Linking, Inference, Import, or Normalization, Then the panel lists events with who, when, what changed (before/after), source (user, rule, CSV, API), and region/item IDs; results are exportable to CSV. Given audit retention is configured to 12 months by default, When the retention window elapses, Then older entries are purged in accordance with policy, with a summary event recorded of the purge operation.
High-Impact Triage View
"As a reviewer, I want a sorted triage view of deltas by impact score so that I can focus my time on the most consequential changes."
Description

Surface the highest-impact deltas first through a dedicated triage view that lists change regions and sheets sorted by impact score. Provide filters (thresholds, factor presence, sheet, discipline), badges (e.g., code, cost, critical-zone proximity), and color-coded score bands. Support quick actions: assign reviewers, add comments, mark as reviewed, and bulk operations. Offer deep links to the visual workspace with the target region centered and highlighted. Ensure responsive performance for large projects, keyboard navigation, and accessible semantics. Enable export (CSV/PDF) for review meetings.

Acceptance Criteria
Triage view orders deltas by impact with visual cues
- Given multiple change regions and sheets with impact scores 0–100, When the triage view loads, Then items are listed in descending impact score. - Given items with badges for code, cost, and critical-zone proximity, When the triage view renders, Then each applicable badge is displayed per item and is visually distinct and text-accessible. - Given score bands configured as Low (0–33), Medium (34–66), High (67–100), When items render, Then each item shows a color-coded band and accessible label matching its score band. - Given a long list, When the user navigates across all items, Then ordering by score is preserved throughout.
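The banding and ordering rules above reduce to a small amount of logic; the band boundaries come straight from the criteria, while the item shape is illustrative:

```python
def score_band(score: int) -> str:
    """Map an impact score (0-100) to its configured band label:
    Low (0-33), Medium (34-66), High (67-100)."""
    if score <= 33:
        return "Low"
    if score <= 66:
        return "Medium"
    return "High"

def triage_order(items: list[dict]) -> list[dict]:
    """List triage items in descending impact score."""
    return sorted(items, key=lambda i: i["score"], reverse=True)
```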
Reviewer filters triage list by thresholds, factors, sheet, and discipline
- Given an impact threshold filter set to >= 70, When applied, Then only items with score 70–100 are displayed and the visible count reflects the filtered total. - Given factor presence filters (code, cost, critical-zone), When one or more are selected, Then only items possessing all selected factors are displayed. - Given sheet and discipline multi-select filters, When selections are applied, Then only items from those sheets and disciplines are shown. - Given multiple filters combined, When applied, Then results satisfy all filters (logical AND). - Given a Reset/Clear Filters control, When invoked, Then all filters are cleared and the full, sorted list is restored.
Reviewer executes quick and bulk actions from triage view
- Given a triage item, When a reviewer assigns a teammate, Then the assignee is saved, displayed on the item, and visible to other users on refresh. - Given a triage item, When a comment is added and submitted, Then the comment is persisted, timestamped, and visible in the item's activity. - Given a triage item, When marked as Reviewed, Then its state updates to Reviewed with time/user stamp and it is filterable by review state. - Given multiple selected items, When a bulk Assign or Mark Reviewed action is executed, Then the action applies to all selected items and reports success/fail counts. - Given bulk operations, When any item fails to update, Then the system surfaces an error per failed item without blocking successful updates.
Deep link opens visual workspace centered on target region
- Given a triage item with a deep link, When the link is activated, Then the visual workspace opens and loads the associated sheet with the target change region centered and visually highlighted. - Given the workspace view, When the deep link opens, Then the highlight has an accessible name and contrast compliant with WCAG 2.2 AA. - Given typical network latency (<200 ms to backend), When opening the deep link, Then the region is highlighted within 2 seconds of click.
Triage remains responsive on large projects
- Given a project containing at least 5,000 change regions across 300 sheets, When loading the triage view on a baseline device (4-core CPU, 8 GB RAM, latest Chrome), Then time to first interactive is <= 3 seconds and initial memory usage <= 400 MB for the tab. - Given the triage list loaded, When applying any single filter, Then the filtered results render within 300 ms at p95. - Given the triage list loaded, When scrolling through the list, Then frame rate stays >= 50 fps and no input is blocked for more than 100 ms at p95. - Given bulk actions on 200 selected items, When executed, Then the UI remains responsive and completes updates within 5 seconds at p95 with progress feedback.
Keyboard-only navigation and accessible semantics
- Given a keyboard-only user, When navigating the triage view, Then all interactive controls (filters, rows, quick actions, bulk action bar, pagination) are reachable via Tab/Shift+Tab in a logical order. - Given focus on the triage list, When pressing Up/Down Arrow, Then focus moves between items; Space toggles selection; Enter opens the focused item's deep link; Ctrl/Cmd+A selects all. - Given a screen reader user, When reading the list, Then rows expose role=listitem with programmatic names including score, badges, sheet, and review state; badges include accessible labels (e.g., "Code", "Cost", "Critical zone"). - Given WCAG 2.2 AA, When auditing, Then color contrast, focus indicators, and semantics meet success criteria.
Export triage list to CSV and PDF for review meetings
- Given active filters and current sort by impact descending, When exporting to CSV, Then the file includes only filtered items, maintains current sort, and contains columns: ID, Sheet, Discipline, Impact Score, Score Band, Factors/Badges, Assignee, Review State, Last Updated, Deep Link URL. - Given active filters, When exporting to PDF, Then the document includes a cover summary (project name, filter summary, export date), paginated list with the same columns visible in the UI (wrapping as needed), and page numbers. - Given an export request on up to 5,000 items, When generated, Then CSV completes within 5 seconds and PDF within 15 seconds at p95, with a progress indicator and file naming: PlanPulse_ImpactTriage_{YYYY-MM-DD}_{HHmm}.csv/.pdf.
Audit Trail & Threshold Alerts
"As a compliance lead, I want alerts and an audit trail for impact scores so that I can justify decisions and ensure that high-impact changes get additional review."
Description

Persist score histories and factor inputs per region and version to provide traceability. Display a timeline of score changes with reason codes (e.g., weight change, new code tag, cost update). Allow configurable thresholds that trigger in-app and email notifications, and optionally gate approvals when exceeded. Provide project-level settings for thresholds and recipients. Expose audit data and score events via API/webhooks for downstream reporting. Define retention policies and ensure audit data is immutable once captured.

Acceptance Criteria
Persist Score History with Factor Inputs per Region-Version
Given a region on a sheet has an Impact Meter score and factor inputs When the score is recalculated due to any change in factor inputs or weight configuration Then an audit entry is appended capturing: project_id, sheet_id, region_id, version_id, actor_id (or system), previous_score, new_score, factor_inputs_snapshot (changed_area, proximity_to_critical_zones, code_tags[], cost_risk, weights), reason_code, timestamp_utc (ISO 8601), correlation_id Given the audit trail for a region-version When queried via UI or API Then entries are returned in chronological order with the exact factor inputs that produced each score Given a recalculation that yields no score delta but factor inputs changed When persisted Then an audit entry is still recorded with a reason_code reflecting the input change Given a recalculation initiated by system configuration (e.g., weight change) When recorded Then actor_id is attributed to the setting changer or "system" and the reason_code is populated
Immutable Audit Trail Enforcement
Given an existing audit entry When a user or API attempts to update or delete it Then the operation is rejected (UI prevents edit/delete; API returns HTTP 409) and no data is changed Given an audit entry is created When its integrity is validated Then the entry includes content_hash and previous_hash linking a verifiable chain per region, and tampering breaks chain validation Given routine maintenance or migrations When executed Then audit entries within retention windows remain unaltered and only purge operations per retention policy are allowed, with purge operations themselves logged as system events
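The content_hash/previous_hash chain can be sketched as follows. The genesis value and the canonicalization choice (sorted-key JSON) are assumptions; the criteria only require that tampering with any entry break chain validation:

```python
import hashlib
import json

def entry_hash(payload: dict, previous_hash: str) -> str:
    """Chain hash for one audit entry: SHA-256 over the canonicalized
    entry payload concatenated with the previous entry's hash."""
    canonical = json.dumps(payload, sort_keys=True) + previous_hash
    return hashlib.sha256(canonical.encode()).hexdigest()

def verify_chain(entries: list[dict]) -> bool:
    """Each stored entry carries content_hash and previous_hash; altering
    any payload invalidates that entry and every later link."""
    prev = "0" * 64  # assumed genesis value for the first entry in a region
    for e in entries:
        if e["previous_hash"] != prev:
            return False
        if e["content_hash"] != entry_hash(e["payload"], prev):
            return False
        prev = e["content_hash"]
    return True
```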
Score Change Timeline with Reason Codes
Given the region or sheet timeline view is open When a score changes Then a new timeline item appears showing previous_score → new_score, timestamp_utc, actor, and reason_code chosen from the set [weight_change, area_change, proximity_change, new_code_tag, code_tag_removed, cost_update, manual_override] Given a timeline item When the user expands it Then a details panel shows factor deltas (e.g., changed_area +12 m², cost_risk Medium→High, code_tags +ADA-102) Given the timeline view When the user navigates between versions Then only the score change events relevant to the selected version context are displayed in order (newest first)
Configurable Thresholds with In-App and Email Alerts
Given project-level threshold settings When a user creates or updates a threshold rule with conditions (score ≥ T and/or score_delta ≥ D) and selects recipients Then the rule is saved and becomes active immediately for subsequent score events Given a score event meets a configured threshold rule When processed Then an in-app alert is created and email notifications are sent to the configured recipients within 2 minutes containing project, sheet, region, version, previous_score, new_score, reason_code, and a deep link Given a threshold rule is disabled or deleted When a subsequent score event would have matched it Then no in-app or email notifications are produced for that rule
Approval Gating on Threshold Breach with Overrides
Given a threshold rule is configured with Gate approvals enabled When a score event meets or exceeds the rule Then the Approve action for the affected region/sheet/version is disabled and a banner indicates the gating rule and triggering event Given approvals are gated by a threshold rule When a user with the Project Admin role selects Override gate and enters a justification of at least 10 characters Then the approval action becomes enabled for that attempt and an audit entry records the override actor, justification, timestamp, and related rule Given approvals are gated When the score later falls below all gating thresholds Then the Approve action automatically re-enables without requiring manual intervention
API and Webhook Exposure of Audit Data and Score Events
Given an authenticated API client When it calls GET /api/v1/projects/{project_id}/audit-entries with optional filters (region_id, version_id, since, reason_code) and pagination Then the API returns 200 with a stable-ordered list conforming to the schema including ids, scores, factor_inputs_snapshot, reason_code, actor_id, and timestamp_utc Given a project has a configured webhook endpoint and secret When a score_updated or threshold_triggered event occurs Then the system POSTs a JSON payload including event_type, identifiers, scores, factor_inputs_snapshot, reason_code, triggering_rule_id, HMAC-SHA256 signature header, and a unique delivery_id Given a webhook delivery receives an HTTP 5xx When retries are attempted Then exponential backoff up to 5 attempts is applied and idempotency is ensured via an Idempotency-Key header
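The HMAC-SHA256 signature header can be produced and checked as sketched below; the exact header name and body canonicalization are left open by the criteria, so sorted-key JSON is an assumption:

```python
import hashlib
import hmac
import json

def sign_webhook(secret: bytes, payload: dict) -> tuple[bytes, str]:
    """Produce the request body and its HMAC-SHA256 signature header value."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
    """Receiver-side check; compare_digest avoids timing side channels."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

A receiver that also records the delivery_id (or an Idempotency-Key) can safely discard replayed deliveries from the retry loop.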
Retention Policy Enforcement
Given a project-level audit retention policy in days (default 365) When the daily retention job runs at a scheduled time Then audit entries older than the configured retention are purged and a system audit event logs the purge count and timestamp Given retention is configured When users access audit data before its retention expiry Then all entries remain available and immutable until purged Given retention is set to 0 (no retention) or Infinite When the retention job runs Then behavior matches configuration (immediate purge for 0; no purge for Infinite) and is reflected in system audit events

Version Scrub

Scrub through versions with a slider to animate how changes appear, persist, or resolve over time. Spot regressions, confirm that prior comments were addressed, and capture a quick clip for contextual approvals.

Requirements

Interactive Version Slider
"As a project lead, I want to scrub through drawing versions with a timeline slider so that I can quickly understand how the design evolved and jump to points of interest."
Description

Provide an interactive timeline slider to scrub across ordered drawing versions, with play/pause, scrub speed controls (0.25x–4x), snap-to-version ticks, keyboard shortcuts (←/→, J/K/L), and touch gestures. The slider should display version metadata (timestamp, author, approval state) on hover and indicate sections with comments or regressions. Must integrate with existing version history and permission model, and respect filters (e.g., only approved versions). Accessible (ARIA roles, focus states) and responsive for desktop and tablet. Emits events for other panels (comments, approvals) to sync to the current playhead.

Acceptance Criteria
Transport Scrubbing With Play/Pause, Speed, and Snap-to-Version Ticks
Given the project has ordered drawing versions visible in the timeline When the user presses Play Then the playhead animates through visible versions in order and stops at the last visible version Given the playhead is animating at any speed When the user presses Pause or presses Play again Then animation stops and the playhead remains on the current snapped version Given the user drags the playhead between ticks When the user releases the drag Then the playhead snaps to the nearest visible version tick Given the speed control is opened When the user selects 0.25x, 0.5x, 1x (default), 2x, or 4x Then subsequent playback uses the selected speed and the selection persists for the session Given the user clicks a version tick When the tick is selected Then the playhead jumps to that exact version and the canvas updates to that version
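The snap-on-release behavior reduces to a nearest-neighbor pick over the visible tick positions; a one-line sketch, assuming ticks and the playhead share a normalized track coordinate:

```python
def snap_to_tick(playhead: float, ticks: list[float]) -> float:
    """Snap a released playhead position to the nearest visible version tick."""
    return min(ticks, key=lambda t: abs(t - playhead))
```

Because only visible ticks are passed in, the same helper also respects the filter and permission rules (hidden versions simply never appear in `ticks`).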
Keyboard Shortcuts For Timeline Navigation (←/→, J/K/L)
Given focus is not inside a text-editable field When the user presses the Left Arrow key Then the playhead snaps to the previous visible version or remains if already at the first Given focus is not inside a text-editable field When the user presses the Right Arrow key Then the playhead snaps to the next visible version or remains if already at the last Given focus is not inside a text-editable field When the user presses J Then playback starts in reverse at the currently selected speed Given any playback state When the user presses K Then playback pauses Given focus is not inside a text-editable field When the user presses L Then playback starts forward at the currently selected speed Given focus is inside a text-editable field When the user presses any of ←/→/J/K/L Then no timeline action occurs
Touch Gesture Scrubbing On Tablet
Given a tablet device with touch input When the user drags horizontally on the slider track Then the playhead follows the finger and on release snaps to the nearest visible version tick Given a tablet device with touch input When the user taps the Play/Pause control Then playback toggles with a touch target size of at least 44 by 44 pixels Given the user vertically scrolls the page outside the slider When a vertical gesture starts outside the slider hit area Then page scrolling works and the slider does not intercept the gesture Given the user performs a horizontal drag within the slider hit area When the gesture is recognized Then the page does not scroll and the slider scrubs instead
Version Metadata Tooltip And Comment/Regression Indicators
Given the cursor hovers over a version tick for at least 300 ms When the tooltip appears Then it shows timestamp, author name, and approval state for that version
Given versions with associated comments exist When the slider renders Then sections containing comments are visually indicated on the track with a distinct marker and accessible label
Given versions marked as regressions exist When the slider renders Then sections containing regressions are visually indicated with a distinct marker and accessible label that is distinguishable from comment markers
Filter And Permission Compliance In Visible Timeline
Given the filter "Approved only" is active When the slider renders Then only approved versions have ticks and are reachable by playhead and shortcuts
Given the current user lacks permission to view certain versions When the slider renders Then those versions are absent from ticks and are not preloaded or reachable via any control
Given filters are changed (e.g., from Approved only to All) When the filter is applied Then the visible ticks, version count, and total range update accordingly without requiring a page reload
Accessible, Focusable, And Responsive Slider (Desktop and Tablet)
Given the slider component is rendered When inspected by assistive technologies Then it exposes role="slider" (or appropriate roles for composite controls) with correct aria-valuemin, aria-valuemax, aria-valuenow, and an accessible name
Given the slider is focused via keyboard When Tab or Shift+Tab is used Then a visible focus indicator is shown on all interactive elements (slider, ticks, play/pause, speed control)
Given the user uses the keyboard to adjust the slider When Left/Right Arrow keys are pressed while the slider has focus Then the playhead moves to the previous/next visible version and the new value is announced by the screen reader
Given viewport width is >= 1024 px (desktop) or 768–1023 px (tablet) When the slider renders Then layout adapts without overlap or truncation and all controls remain operable, with hit targets >= 44 px on tablet
Event Emission And Cross-Panel Sync To Current Playhead
Given the playhead changes due to play, drag, tick click, shortcut, or touch gesture When the change occurs Then an event "version.playheadChanged" is emitted with payload {versionId, versionIndex, timestamp, playState, speed} and consumer panels receive it
Given the comments panel is subscribed When a "version.playheadChanged" event is received Then the comments panel updates to show threads for the current version
Given the approvals panel is subscribed When a "version.playheadChanged" event is received Then the approvals panel updates its state to the current version
Given the user is actively dragging the playhead When intermediate positions are generated Then events are debounced to no more than one per 50 ms to prevent UI thrash
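The 50 ms limit during drags can be treated as a time-based throttle wrapped around the emitter. The event name and payload fields come from the criteria above; `makeThrottledEmitter` itself is a hypothetical sketch.

```typescript
interface PlayheadEvent {
  versionId: string;
  versionIndex: number;
  timestamp: number;
  playState: "playing" | "paused";
  speed: number;
}

// Emit "version.playheadChanged" at most once per minIntervalMs.
// `now` is injected (e.g., performance.now()) to keep the sketch testable.
function makeThrottledEmitter(
  emit: (e: PlayheadEvent) => void,
  minIntervalMs = 50
): (e: PlayheadEvent, now: number) => void {
  let lastEmit = -Infinity;
  return (e, now) => {
    if (now - lastEmit >= minIntervalMs) {
      lastEmit = now;
      emit(e);
    }
  };
}
```

A production version would typically also emit a trailing event on drag release so the final playhead position is never dropped.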
Animated Diff Rendering
"As an architect, I want animated diffs that highlight changes between versions so that I can quickly spot what changed and what persisted."
Description

Render animated visual diffs between successive drawing versions, highlighting added, removed, and unchanged elements via color codes and opacity transitions. Support vector CAD/PDF layers and raster backgrounds, with options to toggle markups and layers and to adjust change intensity. Maintain spatial alignment across versions, handle scale differences, and provide fallbacks if alignment fails. Integrate with the drawing viewer pipeline and reuse existing layer toggles. Provide an API to query change sets for other components (e.g., regression alerts).
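The opacity transitions described above reduce to a per-element interpolation over the scrub position. A minimal sketch, with intensity normalized to 0–1 rather than the 0–100 UI value; names are illustrative:

```typescript
type ChangeKind = "added" | "removed" | "unchanged";

// Opacity of an element at scrub position t in [0, 1].
// intensity scales added/removed visibility; unchanged elements stay opaque.
function diffOpacity(kind: ChangeKind, t: number, intensity = 1): number {
  switch (kind) {
    case "added": return t * intensity;         // fades in toward the newer version
    case "removed": return (1 - t) * intensity; // fades out of the older version
    case "unchanged": return 1;
  }
}
```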

Acceptance Criteria
Scrub Slider Animates Diffs Between Two Versions
Given two successive versions are loaded in the viewer and the Version Scrub slider is visible When the user drags the slider from 0% to 100% Then added elements increase opacity from 0 to 1, removed elements decrease opacity from 1 to 0, and unchanged elements remain constant And the visual state at 0% equals the older version and at 100% equals the newer version And the animation renders at ≥24 FPS on documents ≤50 MB and canvas ≤4096×4096 on a standard workstation And no console errors are logged during the interaction
Color Coding and Change Intensity Control
Given animated diff rendering is enabled When the default color scheme is applied Then added elements render as #00C853, removed as #D50000, and unchanged as #9E9E9E And when the user adjusts Change Intensity from 0 to 100 Then the opacity of added and removed elements scales linearly with the intensity setting (0 → fully transparent, 100 → fully opaque) while unchanged elements remain at 100% opacity And at default intensity (100), color contrast against the background meets WCAG AA (≥4.5:1)
Layer and Markup Toggles Drive Diff Inclusion
Given the existing layer toggles and an Include Markups toggle are present When a layer is switched off Then elements on that layer are excluded from the animated diff across both versions When Include Markups is off Then markup changes are excluded from the diff And toggle states persist while scrubbing, on version switches, and after page reload within the same project session
Spatial Alignment and Scale Normalization
Given versions A and B with translation/rotation differences and scale variance ≤10% When diff mode initializes Then auto-alignment is applied to achieve average pixel registration error ≤2 px across the viewport and scale difference ≤0.1% And pan/zoom coordinates map consistently between versions (same world coordinate under cursor targets the same element)
Fallback Behavior When Auto-Alignment Fails
Given auto-alignment cannot meet tolerance within 1.0 s When diff mode is activated Then the viewer displays a non-blocking banner: "Auto-alignment failed; showing unaligned diff" And the renderer falls back to a non-animated overlay with 50% cross-fade and disables the scrub control And a Retry Alignment action is available and logged as an analytics event And the API exposes alignmentStatus = "failed" for the attempted diff
Vector CAD/PDF and Raster Background Support
Given a drawing that contains vector CAD/PDF layers over a raster background When animated diff is enabled Then per-element vector changes are detected and animated, and raster background changes are rendered via cross-fade only if the raster asset differs between versions And a control Include Raster Changes toggles inclusion of raster cross-fade in the diff And if a vector source is unsupported (e.g., encrypted PDF), the system falls back to rasterized diff with a warning banner
Change Set API for External Components
Given two version IDs and optional filters (layers, includeMarkups, intensity) When the Diff API endpoint is called Then it returns within 300 ms for ≤10k elements with JSON containing alignmentStatus, scaleFactor, and arrays added[], removed[], unchanged[] each with elementId, layerId, bbox, and centroid And the API respects provided filters so that excluded layers/markups are not present in results And identical requests with the same inputs return identical payloads (idempotent)
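The change-set payload and its filter behavior can be captured with a small shape plus a pure filter. The field names mirror the criteria above, while `applyLayerFilter` is a hypothetical sketch, not the actual service:

```typescript
interface DiffElement {
  elementId: string;
  layerId: string;
  bbox: [number, number, number, number]; // x, y, width, height
  centroid: [number, number];
}

interface ChangeSet {
  alignmentStatus: "ok" | "failed";
  scaleFactor: number;
  added: DiffElement[];
  removed: DiffElement[];
  unchanged: DiffElement[];
}

// Keep only elements on visible layers; pure, so identical inputs yield
// identical outputs (matching the idempotency requirement above).
function applyLayerFilter(cs: ChangeSet, visibleLayers: Set<string>): ChangeSet {
  const keep = (els: DiffElement[]) => els.filter((e) => visibleLayers.has(e.layerId));
  return { ...cs, added: keep(cs.added), removed: keep(cs.removed), unchanged: keep(cs.unchanged) };
}
```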
Comment-Resolution Overlay
"As a reviewer, I want comments to appear and update as I scrub so that I can confirm whether my feedback was addressed across versions."
Description

Overlay comment markers during scrubbing, anchored to referenced elements/regions, and dynamically show resolution status (open, addressed, rejected) per version. Support auto-seek to the version where a comment was resolved, and grey out markers when out of scope. Clicking a marker pauses playback and opens the comment thread. Integrate with the existing comment model and permissions; support bulk filtering by assignee, status, and tag. Provide search and deep links to a playhead position with specific comment context.

Acceptance Criteria
Show Resolution Status Per Version
Given a project with a comment whose status history is Open at v3, Addressed at v6, and Rejected at v7 When the scrubber playhead is moved to v3, v6, and v7 Then the on-canvas marker for that comment displays "Open" at v3, "Addressed" at v6, and "Rejected" at v7, and the thread header shows the same status values And the status values are sourced from the existing comment model for the corresponding version IDs
Anchoring Stability Across Revisions
Given a comment anchored to a region/element present in versions v1–v10 When the user scrubs from v1 to v10 at 100% zoom Then the marker remains within ≤10px of the anchor centroid at each version and maintains orientation relative to the region bounds And at zoom levels other than 100%, the allowed offset scales proportionally with zoom
Grey Out Markers When Out of Scope
Given a comment anchored to an element that is deleted in v5 When the playhead is at v5 or later and the anchor cannot be mapped with confidence ≥0.8 Then the marker appears greyed, is non-interactive, and displays an "Out of scope" tooltip on hover/focus And clicking/tapping the greyed marker does not pause playback or open a thread
Click Marker to Pause and Open Thread
Given the version scrub is playing and at least one visible marker is on-screen When the user clicks (or presses Enter/Space on) a visible marker Then playback pauses within 200ms, the comment thread opens in the side panel within 500ms, and focus moves to the thread panel And the playhead remains at the frame where the click occurred
Auto-Seek to Resolution Version
Given a comment that transitions to Addressed in v8 When the user selects "Go to resolution" from the comment actions Then the playhead jumps to v8 within 1s, highlights the marker for 2s, and the thread header indicates "Addressed at v8" And if the comment is not resolved, the action is disabled with a tooltip "No resolution version"
Bulk Filter and Search Overlay
Given comments with varying assignees, statuses, and tags When the user applies filters Assignee=[A,B], Status=[Open, Addressed], Tag=[MEP], and enters search text "duct" Then only markers and thread list items matching (Assignee ∈ {A,B}) AND (Status ∈ {Open, Addressed}) AND (Tag includes MEP) AND (text matches "duct" case-insensitive) are shown And the overlay count equals the number of visible comments, and non-matching markers are hidden across all versions while filters are active And users only see comments they have permission to view; restricted comments are excluded from results
Deep Link to Playhead with Comment Context
Given a deep link URL containing versionId and commentId parameters When a user with view permission opens the link Then the workspace loads the Version Scrub, sets the playhead to the specified version, brings the specified comment marker into view, opens its thread, and applies any filters encoded in the URL And if the user lacks permission for the comment or version, the app loads with an access error state and does not reveal restricted metadata And if the commentId or versionId is invalid, the app shows a "Not found" message and does not crash
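The deep-link parameters can be read directly off the query string with the standard `URL` API; a minimal sketch in which the parameter names match the criteria and the host in the usage example is a placeholder:

```typescript
// Extract the playhead/comment context from a deep link; missing
// parameters come back as null so the caller can show a "Not found" state.
function parseDeepLink(url: string): { versionId: string | null; commentId: string | null } {
  const u = new URL(url);
  return {
    versionId: u.searchParams.get("versionId"),
    commentId: u.searchParams.get("commentId"),
  };
}
```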
Regression Alerts
"As a project lead, I want automatic regression alerts while scrubbing so that I can catch unintended reintroductions before client review."
Description

Detect and surface potential regressions where previously removed or changed elements reappear or revert to older states. Use geometric and metadata heuristics with configurable sensitivity thresholds. Indicate alerts with timeline badges and in-view highlights; clicking jumps to the earliest and latest occurrences. Provide a review panel to accept/dismiss alerts and export a list for QA. Integrate with animated diff and comment overlay to cross-reference impacted areas.

Acceptance Criteria
Detect Reappearance of Removed Elements
Given an element exists in version V3 and is removed in version V4 When a geometrically similar element (similarity ≥ 0.95) or the same metadata ID reappears in version V6 Then a "Regression: Reappearance" alert is created linking V3, V4, and V6 And a timeline badge appears at V6 with the alert count And selecting the alert highlights the element in-view and opens alert details And the alert records earliest and latest occurrence versions and enables Jump to Earliest/Latest controls
Detect Reversion of Metadata/Style to Older State
Given an element’s properties change between V2 and V3 (e.g., layer, material, thickness, text value) When a later version V5 matches an older state by metadata hash equality and geometric similarity ≥ threshold Then a "Regression: Reversion" alert is created showing the reverted properties and their prior values And impacted properties are listed in alert details with before/after values And no alert is created when only differences below the current sensitivity tolerance are observed
Timeline Badges and In-View Highlights Behavior
Given one or more regression alerts are associated with a version When the scrub slider lands on that version Then a timeline badge with the total alert count is visible at that version marker And clicking the badge opens the alert summary panel And all impacted geometries for that version are highlighted with visible outlines and pins And when cycling next/prev within the alert, the viewport centers each geometry and dims non-impacted layers
Review Panel Accept/Dismiss and Persistence
Given an open regression alert in the review panel When the reviewer accepts it as Valid Regression Then the alert status becomes Accepted and it is removed from the active queue but retained in history with reviewer ID, timestamp (UTC), and optional note
Given an open regression alert When the reviewer dismisses it as False Positive Then it is hidden from active alerts for the current version range and logged with reviewer ID, timestamp (UTC), and note And if the same element regresses again with a new geometry/metadata hash, a new alert is created
Given an accepted or dismissed alert When the project is reloaded or accessed by another authorized user Then the decision and audit metadata persist
Given a recent decision (≤ 5 minutes) When Undo is invoked Then the decision is reverted and the alert returns to the active queue
Configurable Sensitivity Thresholds
Given sensitivity presets Low, Medium (default), High and a custom slider 0.0–1.0 When the user changes the threshold Then detections are recalculated and the active alert count updates within 3 seconds for projects up to 2000 elements And a tooltip displays the current geometric similarity tolerance and metadata strictness derived from the threshold
Given the threshold was changed When the user clicks Reset Then the threshold returns to Medium and the prior alert set is restored
Export Alerts for QA
Given a project has active and resolved regression alerts When the user exports alerts Then CSV and JSON files are generated within 5 seconds named PlanPulse_Alerts_<projectId>_<YYYYMMDDThhmmssZ> And each record includes alertId, type, impactedElementIds, earliestVersion, latestVersion, versionsAffected, status, reviewer, decisionNote, createdAt, decidedAt, sensitivity, projectId And geometries are referenced by stable IDs only (no embedded geometry payload)
Given the alerts list is filtered (e.g., status = Unresolved) When the user exports Then only the filtered set is exported and the applied filters are included in the export metadata
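The export filename convention can be derived from a UTC timestamp; `alertsExportName` is a hypothetical helper that compacts an ISO-8601 instant into the `YYYYMMDDThhmmssZ` form used above:

```typescript
// Build PlanPulse_Alerts_<projectId>_<YYYYMMDDThhmmssZ>.<ext> from a UTC instant.
function alertsExportName(projectId: string, at: Date, ext: "csv" | "json"): string {
  const stamp = at
    .toISOString()               // e.g. 2024-01-02T03:04:05.000Z
    .replace(/[-:]/g, "")        // drop date/time separators
    .replace(/\.\d{3}Z$/, "Z");  // drop milliseconds
  return `PlanPulse_Alerts_${projectId}_${stamp}.${ext}`;
}
```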
Integration with Animated Diff and Comment Overlay
Given a regression alert is selected When View in Diff is activated or scrub playback is running Then animated diff mode turns on, the timeline plays between earliest and latest versions, and impacted regions are outlined during playback
Given comments reference impacted element IDs When the alert is selected Then the comment overlay filters to comments linked to impacted elements and pins them in-view And clicking a pinned comment scrolls the alert detail to the corresponding impacted element
Given diff or comment overlay was enabled by an alert When the alert is deselected or review mode is exited Then the UI returns to the prior non-review state
Clip Capture and Share
"As an architect, I want to capture a short clip of the version evolution so that I can give clients context and get faster approvals."
Description

Allow users to define a time range on the timeline and export an animated clip of the version scrub as MP4, WebM, or GIF with selectable resolution and watermarking. Include optional overlays for version labels and comment status. Generate a shareable link with access controls and the ability to attach the clip to a one-click approval request. Store exports securely with lifecycle policies and record audit events.

Acceptance Criteria
Time Range Selection and Export Preview
Given a Version Scrub timeline with at least two versions is open When the user selects a start and end point within the timeline bounds where start < end Then the Export Clip dialog displays with the selected time range pre-populated And the preview plays only frames within the selected range And the Export action is disabled if the range is invalid (start >= end or outside bounds) And the displayed clip duration matches the selected range within one frame
Format and Resolution Selection
Given the Export Clip dialog is open When the user selects a format of MP4, WebM, or GIF Then the exported file uses the chosen container with the correct MIME type And MP4/WebM outputs are playable in modern browsers and GIF outputs animate correctly When the user selects a resolution preset or Match Source Then the exported clip pixel dimensions equal the selection
Watermark Embedding
Given the watermark option is available in the Export Clip dialog When watermarking is enabled Then the preview displays the watermark and the exported clip contains the watermark burned into every frame When watermarking is disabled Then no watermark appears in the preview or exported clip And the watermark is not a separate overlay track and cannot be removed without re-encoding
Overlays: Version Labels and Comment Status
Given toggle controls exist for Version Labels and Comment Status overlays When Version Labels is enabled Then the preview and exported clip display the correct version identifier for each frame according to the scrub timeline When Comment Status is enabled Then the preview and exported clip display accurate comment states (e.g., open/resolved) as of the corresponding timestamps When either overlay is disabled Then that overlay does not appear in the preview or exported clip
Shareable Link and Access Controls
Given a clip export completes successfully When the user generates a shareable link and selects an access scope (Public via unguessable token, Workspace members only, or Specific invitees) and an expiration Then a unique link is created and the chosen access scope and expiration are enforced on access And revoking the link immediately blocks access and returns 403 for subsequent requests And downloads/views via the link are logged with timestamp and actor where available
Attach Clip to One-Click Approval Request
Given a clip export is available When the user attaches the clip to a one-click approval request Then the approval request includes the clip thumbnail and a link to play the clip And the recipient can view the clip without additional authentication if permitted by the link’s access scope And an approval action records the clip ID/reference in the approval record
Secure Storage, Lifecycle, and Audit Events
Given clip exports are stored by the system Then exported artifacts are encrypted at rest and delivered over TLS in transit And access to stored clips is authenticated and authorized per link scope When a link expires or is revoked Then subsequent access is blocked and the binary and derivatives are scheduled for deletion per lifecycle policy And the system records audit events for export start, export completion, share link creation, access (success/denied), attachment to approval, revocation, and deletion with user ID (or link token), timestamp, and IP where available
Performance Preload and Caching
"As a user working with large plans, I want smooth scrubbing without lag so that I can review changes efficiently and stay in flow."
Description

Preload adjacent versions around the playhead, cache rendered frames, and leverage GPU acceleration to maintain smooth scrubbing performance on large drawings. Establish performance targets (≥30 fps for typical project size; initial render <2s) and memory budgets. Implement telemetry to monitor frame drops and loading times, and provide graceful degradation (e.g., reduced effects) on low-power devices. Work across Chromium-based browsers and Safari on desktop/tablet.

Acceptance Criteria
Smooth Scrub Performance (Typical Project)
Given the "Typical_A" dataset and supported environments (latest 2 Chromium-based browsers on Windows/macOS and Safari on macOS/iPadOS) When the user scrubs across 10 versions for 60 seconds at 1x speed, reversing direction every 10 seconds Then median FPS ≥ 30 on desktop and tablet And 95th percentile frame time ≤ 50 ms And dropped frames ≤ 5% And no visible stutter longer than 200 ms And no unhandled exceptions or console errors related to rendering
Initial Render Latency (First Frame)
Given cold start with empty caches on a supported environment When the user opens the Version Scrub view for a drawing of typical size Then the first interactive frame renders in ≤ 2000 ms And a skeleton/placeholder is shown within ≤ 200 ms until content is ready And no blocking spinner is shown for more than 500 ms
Adjacent Version Preloading Around Playhead
Given a sequence of versions with the playhead at version k When the user pauses or scrubs near version k Then assets for versions k-2 through k+2 are preloaded and decoded in the background And entering any preloaded version displays its first frame in ≤ 100 ms And while scrubbing at up to 2x speed, preload miss rate ≤ 5% over 60 seconds And no network bursts exceed 10 concurrent requests
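The k−2 through k+2 window must clamp at the ends of the timeline; a sketch with a hypothetical helper, where indices are positions among the visible versions:

```typescript
// Versions to preload around playhead index k, clamped to [0, total - 1].
function preloadWindow(k: number, total: number, radius = 2): number[] {
  const out: number[] = [];
  const lo = Math.max(0, k - radius);
  const hi = Math.min(total - 1, k + radius);
  for (let i = lo; i <= hi; i++) out.push(i);
  return out;
}
```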
Frame Cache Efficiency and Memory Budget Enforcement
Given the user scrubs back and forth across the same 8 versions for 60 seconds When revisiting previously rendered frames Then frame cache hit rate ≥ 70% And total memory used by the rendering cache ≤ 500 MB on desktop and ≤ 300 MB on tablet And when the budget is reached, least-recently-used eviction occurs within 50 ms without UI hitching And no out-of-memory crashes occur And JavaScript GC pauses remain ≤ 50 ms at p95
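The budgeted, least-recently-used frame cache can be sketched with a `Map`, whose insertion order doubles as recency order. `FrameCache` is illustrative only, not the real renderer cache:

```typescript
// LRU frame cache with a byte budget; evicts oldest entries when over budget.
class FrameCache {
  private frames = new Map<string, number>(); // key -> size in bytes, oldest first
  private used = 0;
  constructor(private budgetBytes: number) {}

  get(key: string): boolean {
    const size = this.frames.get(key);
    if (size === undefined) return false;
    this.frames.delete(key);
    this.frames.set(key, size); // re-insert to mark as most recently used
    return true;
  }

  put(key: string, size: number): void {
    const prev = this.frames.get(key);
    if (prev !== undefined) {
      this.used -= prev;
      this.frames.delete(key);
    }
    this.frames.set(key, size);
    this.used += size;
    // Evict least-recently-used entries, but always keep the newest frame.
    while (this.used > this.budgetBytes && this.frames.size > 1) {
      const [oldestKey, oldestSize] = this.frames.entries().next().value as [string, number];
      this.frames.delete(oldestKey);
      this.used -= oldestSize;
    }
  }
}
```

A production cache would additionally need to perform eviction off the hot path to meet the 50 ms no-hitching requirement above.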
GPU Acceleration and Fallback Verification
Given hardware acceleration is available When the user scrubs for 30 seconds under load Then GPU-backed rendering (WebGL2/WebGPU/Accelerated Canvas) is used as indicated by feature flags And main-thread render work p50 ≤ 20 ms per frame And GPU utilization increases by ≥ 20 percentage points over idle
Given hardware acceleration is not available or is disabled When the user scrubs the same content Then the software fallback renders without visual regressions And median FPS ≥ 24 And no runtime errors occur
Telemetry Coverage for Performance Metrics
Given telemetry is enabled When a user scrubs for at least 5 seconds Then the client emits events capturing fps, frame_time_ms, dropped_frames, preload_latency_ms, cache_hit, cache_miss, memory_usage_mb, gpu_used, device_profile, browser_family, and render_path And events are batched and delivered within 10 seconds of capture And in staging, sampling = 100%; in production, sampling ≥ 20% And the metrics appear in the performance dashboard with p50/p95 within 5 minutes And missing-data rate ≤ 1% of sessions over a 24-hour period
Graceful Degradation on Low-Power Devices
Given a low-power profile (battery saver enabled or measured CPU score below threshold) When Version Scrub is opened or scrubbing begins Then the app automatically reduces effects (e.g., disables motion blur, reduces antialiasing, lowers layer effects) to maintain performance And median FPS ≥ 24 while scrubbing the "Typical_A" dataset for 60 seconds And the first interactive frame renders in ≤ 2000 ms And a visible "Performance mode" indicator explains reduced effects and offers a toggle And the user's preference persists across sessions

Change Atlas

A set-level dashboard that visualizes change intensity across all sheets and disciplines. Drill down from overview to sheet and region in two clicks, search by tag or keyword, and export a concise change report for stakeholders.

Requirements

Change Scoring Engine
"As a project lead, I want objectively calculated change intensity scores so that I can compare impact across sheets and versions and focus my review efficiently."
Description

Implement a backend service that computes a consistent “change intensity” score per sheet, discipline, and region by aggregating vector diffs, markup edits, and comment activity across versions. Normalize metrics (e.g., changed area, count of entities, annotation churn) with configurable weights per discipline and tag, store versioned snapshots, and expose query endpoints for time ranges. Trigger recalculation on new uploads or markups, support incremental updates for large sets, and cache aggregates for fast retrieval by the dashboard. Ensure results are comparable across projects, enabling accurate prioritization and reporting within PlanPulse.

Acceptance Criteria
Score computation on new sheet version upload
Given a project with existing baseline versions and configured weights per discipline and tag And an existing sheet with a prior version and stored score When a new version of the sheet is uploaded with detectable vector diffs, markup edits, and comments Then the service computes a change intensity score for the sheet, its discipline aggregate, and any defined regions And normalized metrics are applied: changedArea / sheetArea, entityDeltaCount / priorEntityCount, annotationChurnRate And the final score is produced on a 0–100 scale with at most one decimal place And snapshot creation completes within 5 seconds for a sheet with up to 5,000 entities and 10 regions And the API responds with 201 Created and returns identifiers for the new snapshot records
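The normalized, weighted score above can be expressed as a clamped weighted mean scaled to 0–100 with one decimal place. Metric and weight names are illustrative, and the exact normalization PlanPulse uses may differ:

```typescript
interface SheetMetrics {
  changedArea: number;
  sheetArea: number;
  entityDeltaCount: number;
  priorEntityCount: number;
  annotationChurnRate: number; // assumed already normalized to [0, 1]
}

interface Weights { area: number; entities: number; churn: number; }

function changeIntensity(m: SheetMetrics, w: Weights): number {
  const clamp01 = (x: number) => Math.min(1, Math.max(0, x));
  const weighted =
    clamp01(m.changedArea / m.sheetArea) * w.area +
    clamp01(m.entityDeltaCount / Math.max(1, m.priorEntityCount)) * w.entities +
    clamp01(m.annotationChurnRate) * w.churn;
  const totalWeight = w.area + w.entities + w.churn;
  // Scale to 0–100 and round to one decimal place.
  return Math.round((weighted / totalWeight) * 1000) / 10;
}
```

Because every input is normalized before weighting, identical relative changes in two projects of different absolute size produce the same score, which is the comparability property the criteria require.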
Configurable normalization and weights per discipline and tag
Given a weighting configuration exists with metric weights per discipline and optional tag-specific overrides When a score calculation runs for a sheet whose discipline and tags match the configuration Then the weighted contribution equals sum(metric_i_normalized * weight_i) using the effective weights for that sheet And invalid configurations (missing required metrics or non-numeric weights) are rejected with 400 and no scores are computed And changing the configuration updates only subsequent calculations; existing snapshots remain unchanged unless explicitly reprocessed And each snapshot stores the configuration version used (configVersion is present and non-empty)
Versioned snapshot storage and time-range retrieval
Given multiple snapshots exist for a project across different timestamps When the client requests GET /scores with projectId, from, and to parameters Then the response includes only snapshots whose timestamps are within the inclusive [from, to] window And each item includes projectId, sheetId, discipline, regionId (nullable), versionId, score, timestamp, and configVersion And results are sorted ascending by timestamp by default and support cursor-based pagination via nextCursor And the response status is 200 for valid requests and 400 for invalid time ranges (e.g., from > to)
Incremental update efficiency on large project sets
Given a project with at least 2,000 sheets and existing aggregates computed And baseline full recomputation time is recorded When a single sheet receives a new markup that affects its score Then only that sheet and dependent aggregates (sheet, discipline, project) are recomputed And unrelated sheets' scores remain unchanged And the incremental recomputation time is at least 80% faster than the baseline full recomputation and completes within 30 seconds
Automatic recalculation on markup and comment activity
Given event listeners are active for new uploads, markup create/edit/delete, and comment create/edit/delete When any such event occurs for a sheet with a prior snapshot Then a recalculation job is enqueued within 2 seconds And events on the same sheet within a 5-second window are batched into a single recalculation And transient job failures are retried up to 3 times before marking the attempt as failed
Cached aggregates for dashboard fast retrieval and invalidation
Given cached aggregates exist for a project's set-level overview When the dashboard requests aggregates for the project without cache invalidation events Then the API returns the cached result with p95 latency under 200 ms for projects up to 500 sheets and 10 disciplines And when a relevant recalculation completes, the corresponding cache entry is invalidated within 5 seconds And a subsequent request after invalidation returns updated aggregates and repopulates the cache
Cross-project score comparability and scale consistency
Given two projects with different absolute sheet sizes and entity counts but identical relative changes (normalized inputs) When scores are computed using the same configuration Then the resulting sheet-level scores differ by no more than 1.0 on the 0–100 scale And a no-change input yields a score of 0.0 and a maximal-change test fixture yields a score of at least 90.0 in both projects
Change Heatmap Overview
"As a project lead, I want a visual overview of change hotspots across the entire set so that I can immediately see where attention is needed."
Description

Provide a set-level dashboard that visualizes change intensity across all sheets and disciplines using an intuitive color scale and tile/grid layout. Support filters for discipline, time range, version pair, and tag; display key counts (changed sheets, top disciplines); and highlight hotspots. Load within two seconds for up to 500 sheets via precomputed aggregates and CDN-cached thumbnails, and adhere to accessibility standards for color contrast and keyboard navigation. Integrate with PlanPulse’s project context and respect user permissions and visibility rules.

Acceptance Criteria
500-Sheet Heatmap Load Performance
Given a project with 500 sheets and precomputed change aggregates available and CDN-cached thumbnails When an authenticated user with access opens the Change Heatmap Overview Then the overview renders the filter bar, legend, summary counts, and first viewport of tiles and becomes interactive within 2,000 ms at p95 on a 10 Mbps, 100 ms RTT test profile And initial network payload before interactivity is ≤ 2.5 MB And below-the-fold thumbnails are lazy-loaded and deferred until scrolled into view And thumbnail requests are served from cdn.planpulse.io with Cache-Control max-age ≥ 86400 And performance metrics are captured to RUM with route "change-heatmap-overview" including TTI and payload size
Multi-Filter by Discipline, Time Range, Version Pair, and Tag
Given the overview is loaded with no filters When the user applies one or more filters for discipline, time range, version pair, and tag Then results and summary counts reflect the intersection (AND) across different filter types and union (OR) within the same type And the total changed sheet count updates accurately within 300 ms after the API response And active filters are encoded in the URL query string and restored on reload And clearing all filters returns the view to the unfiltered state And an empty-state message is shown when no sheets match
Color-Scaled Change Intensity with Legend
Given each sheet has an intensity score from 0 to 100 derived from precomputed aggregates When the heatmap tiles are rendered Then a 5-bucket color scale maps scores to colors with thresholds [0, 1–24, 25–49, 50–74, 75–100] And a persistent legend shows bucket ranges and color swatches And each tile shows its exact score on hover or focus tooltip And a colorblind-safe palette toggle is available and persists per user And non-color cues (pattern or icon) appear for the highest bucket to avoid color-only reliance
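The five-bucket mapping with thresholds [0, 1–24, 25–49, 50–74, 75–100] is a straightforward range lookup; `intensityBucket` is a hypothetical helper:

```typescript
// Map a 0–100 intensity score to its color bucket index (0 = no change).
function intensityBucket(score: number): number {
  if (score <= 0) return 0;
  if (score < 25) return 1;
  if (score < 50) return 2;
  if (score < 75) return 3;
  return 4;
}
```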
Accessible Heatmap and Keyboard Navigation
Given a user navigates the overview with a keyboard and assistive technologies When tabbing and arrowing through the interface Then all interactive elements (filters, legend toggle, tiles, hotspots toggle, pagination/scroll controls) are reachable via keyboard with visible focus And grid navigation supports arrow keys moving focus tile-to-tile row-wise And Enter opens the tile’s quick preview; Esc closes it and returns focus And aria-labels/roles convey tile metadata (sheet number, name, discipline, score) And color contrast meets WCAG 2.1 AA (≥ 4.5:1 for normal text, ≥ 3:1 for large text and non-text elements such as icons) And the view is usable with a screen reader, announcing counts and filter changes
Permissions-Respecting Data Visibility
Given the user’s project permissions restrict access to specific disciplines and sheets When the overview loads and when filters are applied Then only authorized sheets are displayed and counted And unauthorized disciplines are omitted from filter options and counts And attempts to query unauthorized data return 403 without leaking totals And summaries reflect the same visibility rules And all permission checks are enforced server-side and logged
Hotspot Highlighting
Given the current filtered result set contains N sheets with scores When hotspots are calculated Then tiles in the top 10% by score (minimum 5 tiles if N ≥ 50) are badged as hotspots And a Hotspots Only toggle filters the view to just these tiles And hotspot badges include an accessible label with the rank and score And the hotspot set recomputes when filters change
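The top-10%-with-floor rule can be sketched as follows; tie-breaking and behavior for N < 50 are assumptions, since the criteria only pin down the percentage and the floor:

```python
import math

def hotspot_ids(scores: dict) -> set:
    """Pick hotspot sheet IDs: top 10% by score, with a floor of 5 tiles
    when the filtered set has at least 50 sheets, per the criteria.
    Tie-breaking and the N < 50 behavior are assumptions in this sketch."""
    n = len(scores)
    if n == 0:
        return set()
    k = math.ceil(n / 10)          # top 10%, rounded up
    if n >= 50:
        k = max(k, 5)              # minimum of 5 hotspots for larger sets
    ranked = sorted(scores, key=lambda sid: scores[sid], reverse=True)
    return set(ranked[:k])
```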
Summary Counts and Top Disciplines
Given the overview is loaded When any filter state is applied Then a summary panel displays: total changed sheets in scope and the top 3 disciplines by average score with their counts And counts and rankings update consistently with visible tiles And hovering or focusing a discipline in the summary highlights its tiles And all counts are derived from the same API response as tiles to prevent drift
Two-Click Drilldown
"As an architect, I want to drill down from the overview to a precise sheet region in two clicks so that I can quickly inspect the exact area impacted."
Description

Enables navigation from the overview to a specific sheet and then to a highlighted change region in two clicks. Sheet view overlays change regions with intensity contours and provides a region pane with metadata (tags, affected disciplines, change metrics). Includes breadcrumbs, back navigation, and prefetching of sheet assets to keep interactions under 300 ms. Integrates with existing PlanPulse viewers for markups and comments, preserving selection state and applied filters throughout the drilldown path.

Acceptance Criteria
Two-Click Overview-to-Region Navigation
Given the Change Atlas overview is loaded with sheet thumbnails visible When the user clicks a sheet thumbnail Then the app opens that sheet view as the first navigation step Given the sheet view displays highlighted change regions When the user clicks a highlighted change region Then the app focuses that region and opens the region pane as the second navigation step Then the total number of pointer activations from overview to focused region equals 2 with no intermediate confirmations or additional clicks required
Sheet Overlay with Intensity Contours
Given a sheet view is open When the sheet view renders Then every detected change region is overlaid with intensity contours and a visible region boundary And contours align to the region geometry within ±2 px at 100% zoom And contours are visible at the default zoom without requiring additional user actions
Region Pane Metadata Completeness
Given a change region is focused in the sheet view When the region pane opens Then it displays values for all required fields: tags, affected disciplines, and change metrics And if any field has no data, it explicitly shows "None" instead of being blank And the metadata corresponds to the focused region's identifier and updates within 150 ms of region focus
Breadcrumbs and Back Navigation Persistence
Given the user has navigated Overview → Sheet → Region When the user clicks the "Sheet" breadcrumb Then the app returns to the sheet view within 300 ms and preserves prior zoom, pan, and filter settings When the user clicks the "Overview" breadcrumb from the sheet view Then the app returns to the overview within 300 ms and preserves previously applied filters and scroll position When the user uses the browser Back/Forward buttons from any of these states Then navigation mirrors the breadcrumb behavior and preserves state without a full page reload
Prefetching Enables Sub-300ms Interactions
Given the Change Atlas overview is loaded When a sheet thumbnail enters the viewport or receives hover/focus Then the app begins prefetching that sheet's assets and associated change region data When the user clicks a visible (prefetched) sheet thumbnail Then time from click to interactive sheet view (all overlays rendered) is ≤ 300 ms at the 95th percentile When the user clicks a highlighted change region with prefetched region assets Then time from click to open region pane with metadata rendered is ≤ 300 ms at the 95th percentile Then prefetching does not drop overview frame rate below 55 FPS during idle hover/scroll at the 95th percentile And client-side metrics are captured for these timings and emitted to telemetry
Filter and Selection State Preservation Through Drilldown
Given filters are applied at the overview (e.g., tags and disciplines) When the user drills down Overview → Sheet → Region Then the same filters remain applied and visible in the UI at each level, affecting overlays and lists consistently When the user navigates back up via breadcrumbs or browser back Then the previously applied filters remain applied and the last selected sheet/region remains highlighted in its parent view
PlanPulse Viewer Integration in Region Focus
Given a change region is focused When the user opens markups or comments Then the integrated PlanPulse viewer opens anchored to the focused region and displays only markups/comments scoped to that region And creating a new markup or comment associates it with the focused region without clearing selection or filters And opening/closing the viewer does not trigger a full page reload and does not exceed 300 ms to interactive at the 95th percentile
Tag and Keyword Search
"As a project lead, I want to search changes by tag or keyword so that I can surface all related modifications across the set instantly."
Description

Indexes change records, markups, comments, and sheet metadata to support fast search by tag, keyword, and discipline with typeahead suggestions. Supports boolean operators, exact phrase matching, and filter facets that mirror dashboard filters. Highlights matching regions and scrolls the dashboard or sheet view to relevant results. Leverages PlanPulse’s tagging model and maintains role-based visibility so sensitive internal tags and notes remain hidden from client viewers.

Acceptance Criteria
Typeahead Suggestions for Tags, Keywords, and Disciplines
Given the user types at least 2 characters in the global search bar When tags, keywords, or disciplines matching the typed prefix exist Then a typeahead list of up to 8 suggestions appears within 300 ms of the last keystroke And suggestions are deduplicated, prefix-highlighted, and labeled by type (Tag, Keyword, Discipline) And only values from PlanPulse’s tagging model and project configuration are suggested And ArrowDown/ArrowUp navigates the list and Enter selects the highlighted suggestion to populate and execute the search
Boolean Operators and Exact Phrase Queries
Given boolean operators AND, OR, NOT and double-quoted exact phrase syntax are supported When the user submits queries such as "kitchen AND plumbing", "(door OR window) NOT demolition", and '"sheet note"' Then results include only items that satisfy the boolean logic and exact phrase constraints And operator precedence is parentheses > NOT > AND > OR And queries are case-insensitive and diacritic-insensitive And invalid syntax yields a clear, inline error message without executing a partial search And result lists render within 1500 ms for queries returning up to 200 matches
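The stated precedence (parentheses > NOT > AND > OR) can be captured with a toy recursive-descent parser. This is a sketch, not PlanPulse's parser: infix NOT (as in "(door OR window) NOT demolition") is treated as AND NOT, and the matching semantics (whole-word for terms, substring for quoted phrases) are assumptions:

```python
import re

def parse(query):
    """Parse AND/OR/NOT, parentheses, and double-quoted phrases into a tree,
    honoring precedence parentheses > NOT > AND > OR. Bare words that happen
    to spell an operator are not supported in this sketch."""
    tokens = re.findall(r'"[^"]*"|\(|\)|[^\s()"]+', query)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def parse_or():
        node = parse_and()
        while peek() and peek().upper() == "OR":
            eat()
            node = ("or", node, parse_and())
        return node

    def parse_and():
        node = parse_not()
        while peek() and peek().upper() in ("AND", "NOT"):
            if eat().upper() == "AND":
                node = ("and", node, parse_not())
            else:  # infix NOT, e.g. "(door OR window) NOT demolition"
                node = ("and", node, ("not", parse_not()))
        return node

    def parse_not():
        if peek() and peek().upper() == "NOT":
            eat()
            return ("not", parse_not())
        return parse_atom()

    def parse_atom():
        tok = eat()
        if tok == "(":
            node = parse_or()
            assert eat() == ")", "unbalanced parentheses"
            return node
        if tok.startswith('"'):
            return ("phrase", tok.strip('"').lower())
        return ("term", tok.lower())

    tree = parse_or()
    assert pos == len(tokens), "trailing tokens"
    return tree

def evaluate(node, text):
    """Case-insensitive match of a parsed query against a document's text."""
    text = text.lower()
    kind = node[0]
    if kind == "term":
        return node[1] in re.findall(r"\w+", text)
    if kind == "phrase":
        return node[1] in text
    if kind == "not":
        return not evaluate(node[1], text)
    left, right = evaluate(node[1], text), evaluate(node[2], text)
    return (left and right) if kind == "and" else (left or right)
```

Invalid syntax surfaces as a parse error before any matching runs, consistent with the no-partial-search requirement.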
Facet Filters Mirror Dashboard and Stay in Sync
Given the Change Atlas dashboard has active filters (e.g., Discipline, Sheet, Date Range, Change Intensity, Tags) When the user opens the search panel Then search facets mirror the current dashboard filter set and facet counts And adjusting a facet in search updates the results and synchronizes the dashboard filters immediately And clearing all facets resets both search results and dashboard filters to default And facet selections persist when navigating between overview and sheet views during the session
Highlight and Scroll to Matching Regions in Sheet View
Given a search result corresponds to a geometric region on a sheet (e.g., a markup or change record) When the user clicks that result Then the sheet view opens the correct sheet (if not already open), pans/zooms to center the matching region, and overlays a visible highlight And the highlight remains visible for at least 3 seconds and can be dismissed with Esc or click-away And if multiple regions match on the sheet, Next/Previous controls navigate between them without reloading the sheet
Scroll and Two-Click Drilldown from Dashboard Overview
Given results span multiple sheets in the Change Atlas overview When the user clicks a sheet-specific result in the search results panel Then the dashboard scrolls to bring the corresponding sheet tile into view and visually emphasizes it And a second click drills down to that sheet and centers the exact matching region, achieving overview-to-region in no more than two clicks
Role-Based Visibility in Search Results and Suggestions
Given some tags and notes are marked Internal and restricted by role When a Client Viewer types or executes a search Then internal-only tags, notes, and their typeahead suggestions are excluded from results, counts, and highlights And if all matches are internal-only, the user sees a "No results" state with no indication of hidden content And a Project Editor with permission can see both internal and client-visible items, with visibility labels applied
Indexing Coverage and Freshness for Searchable Content
Given users create, edit, or delete change records, markups, comments, sheet metadata, or tags When the change is saved Then the search index reflects creates and updates within 60 seconds and removes deleted items within 60 seconds And retagging updates tag associations in the index without creating duplicates And indexing failures are logged and retried with backoff, with no stale suggestions persisting beyond 5 minutes
Exportable Change Report
"As a project lead, I want to export a concise change report for stakeholders so that they can review impact offline and align on next steps."
Description

Generates concise, branded change reports in PDF and CSV that summarize change intensity by sheet and discipline, include key tags and top-affected regions, and optionally embed thumbnails. Respects the current filter state (time range, disciplines, tags) and annotates the report with project, version pair, and timestamp. Runs as an asynchronous job with progress feedback and produces a shareable artifact stored in PlanPulse with retention controls and webhook/Email delivery options.

Acceptance Criteria
Export Respects Active Filters and Search
Given I am viewing Change Atlas with a selected time range, one or more disciplines, and tags or keywords applied When I initiate "Export Change Report" and choose PDF and/or CSV formats Then the exported dataset contains only changes matching the active time range, selected disciplines, and applied tags/keywords And the report header includes project name, version pair, and UTC timestamp And the export includes a summary of applied filters and search terms And PDF and CSV exports contain identical records and totals for all included fields And if no changes match, the export completes with an empty-state section and zeroed summaries without errors
Content Correctness: Sheets, Disciplines, Top Regions, and Tags
Given changes exist under the current filter state in Change Atlas When I export the change report Then each sheet entry shows its change intensity grouped by discipline consistent with the on-screen dashboard And the report lists top-affected regions per sheet in descending change intensity And the report includes key tags associated with included changes for each sheet And totals and per-discipline counts in the export match the on-screen totals for the same filters And numeric values use consistent units and precision per workspace settings
Branding and Optional Thumbnails
Given workspace branding (logo/name/colors) is configured and a Thumbnails option is available in export settings When I export to PDF and CSV with Thumbnails turned On Then the PDF includes branding on cover/header/footer and embeds sheet and top-region thumbnails alongside entries And the CSV includes branding metadata in a header row without breaking the column schema And if any thumbnail fails to generate, a placeholder is shown in PDF and the export completes without failing When I export with Thumbnails turned Off Then neither PDF nor CSV includes thumbnails and the layout remains compact
Asynchronous Export Job: Progress, Cancellation, and Failure Handling
Given export jobs run asynchronously with UI feedback When I start an export Then a progress indicator is shown and updates periodically until completion And I can cancel the export before completion, resulting in a canceled status and no stored artifact And on success, the job completes with a success status and an artifact record containing format, filters, size, and generated timestamp And on failure, the job status is failed, an error message is shown with a retry action, and the failure reason is logged And if the job exceeds the configured export timeout, it terminates with a timeout failure and no partial artifact is stored
Artifact Storage, Shareable Link, and Retention Controls
Given completed exports are stored as artifacts in PlanPulse with retention settings When an export completes successfully Then a shareable link is created that honors workspace sharing and permission policies And the artifact has a retention period set by the user or workspace default and is automatically purged upon expiry And deleting the artifact immediately revokes access via the shareable link And views/downloads of the artifact are logged with requester identity and timestamp
Webhook and Email Delivery
Given email recipients and/or a webhook endpoint are configured for export delivery When the export completes successfully Then an email is sent to the specified recipients containing the artifact link and, when size permits, the PDF attachment And a webhook POST is sent to the configured endpoint containing the artifact id, URL, format, filters summary, project, version pair, status, and generated timestamp And transient delivery errors are retried with exponential backoff up to the configured limit And if all retries fail, the requester receives a delivery failure notification
CSV and PDF Field Parity and Formatting
Given I generate both PDF and CSV change reports for the same filter state When the files are produced Then the CSV contains columns for project, version_pair, sheet_id, sheet_name, discipline, change_intensity, top_regions, key_tags, generated_at And the PDF presents corresponding fields with matching values And field names and ordering match the published export specification And CSV numeric values use dot decimal and no thousands separators; PDF uses locale-aware formatting And all text content in both files is UTF-8 encoded without character loss
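A sketch of the CSV side of this parity rule, using the column order published above; the two-decimal precision and the `DictWriter`-based serialization are assumptions:

```python
import csv
import io

# Column order from the acceptance criteria.
COLUMNS = ["project", "version_pair", "sheet_id", "sheet_name", "discipline",
           "change_intensity", "top_regions", "key_tags", "generated_at"]

def csv_number(value, precision=2):
    """Format a numeric cell with a dot decimal separator and no thousands
    separators, regardless of locale. Precision of 2 is an assumption."""
    return f"{value:.{precision}f}"

def write_report(rows):
    """Serialize report rows to CSV text in the published column order."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for row in rows:
        row = dict(row, change_intensity=csv_number(row["change_intensity"]))
        writer.writerow(row)
    return buf.getvalue()
```

The PDF renderer would consume the same row objects and apply locale-aware formatting instead of `csv_number`, keeping values identical at the source.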
Shareable Stakeholder Views
"As a small-firm project lead, I want to share a restricted Change Atlas view with clients so that they can see relevant changes without exposing internal notes."
Description

Provides secure, read-only share links for the Change Atlas and its exports with role-based scopes (client reviewer vs. internal) and expirations. Automatically redacts internal-only tags, comments, and markups for client-scoped links while preserving essential context. Includes access logging, revocation, and compatibility with existing PlanPulse SSO and project permissions to ensure controlled distribution of change insights.

Acceptance Criteria
Generate Read-Only Share Link for Change Atlas
Given I am a project lead with permission to share Change Atlas When I generate a share link for the current Change Atlas view with scope set to "internal" or "client reviewer" Then the link opens the Change Atlas in a browser without requiring edit permissions And the UI disables all write actions (edit markups, add/delete tags, comments, uploads, settings changes) And only viewing actions (zoom, pan, drill-down, search, filter, export) are available And any attempt to call write APIs via the browser returns 403 Forbidden And the shared view loads with the same filters and drill-down context encoded in the URL
Role-Based Redaction on Client-Scoped Share Links
Given internal-only tags, comments, and markups exist on the Change Atlas When a client-scoped share link is opened Then internal-only items are not rendered in the UI or included in API responses And client-visible items remain intact, including change metrics, public tags, version diffs, and region pins And the same redaction applies to any exports initiated from the client-scoped view And when an internal-scoped share link is opened, all items (internal and client-visible) are rendered
Share Link Expiration Enforcement
Given I set an expiration date and time on a share link When the current time is before the expiration Then the link is accessible and the view loads as scoped When the current time is at or after the expiration Then the recipient sees a clear "Link expired" message and no Change Atlas data or exports are retrievable And access attempts after expiration are denied and recorded in the access log
Immediate Revocation of Share Links
Given an active share link exists When I revoke the link Then subsequent attempts to open the link are blocked with a "Link revoked" message and no data is returned And any currently active sessions created from that link are invalidated within 60 seconds and cannot refresh or export And the revocation event is captured in the access log
Access Logging for Auditability
Given share links are created and used by stakeholders When a link event occurs (create, view, export download, revoke) Then it is logged with timestamp (UTC), link ID, scope, actor identity when authenticated, IP address, and user agent And project admins can view and export the access log for the project And access logs for client-scoped links do not include internal-only content details
SSO and Project Permission Compatibility
Given an internal-scoped share link When a user opens it Then the user must authenticate via PlanPulse SSO and have the project permission to view Change Atlas, or access is denied Given a client-scoped share link When it is opened Then no SSO is required to view, and the session remains read-only And only users with the project permission to create stakeholder share links can create or revoke links And removing a user's project access immediately prevents them from creating new links and from accessing internal-scoped links
Export Consistency and Secure Delivery from Shared Views
Given a shared Change Atlas view has active filters, search terms, and a scope When the viewer exports a change report from that shared view Then the exported content matches the on-screen data (filters, drill context, sort) for that scope And redaction rules based on the link scope are applied to the export And the export is delivered via a time-limited signed URL tied to the share link And downloads are blocked if the share link is expired or revoked

Zone Watch

Define or import named zones (life safety, cost-sensitive rooms, permit-critical areas) and get automatic heatmap flags when changes touch those zones. Keeps risk hotspots front and center and prevents accidental scope creep.

Requirements

Zone Creation & Import
"As a project lead, I want to quickly define or import named risk zones on my drawings so that I can track sensitive areas without redrawing them every time the plan changes."
Description

Enable users to define named zones within a drawing by drawing polygons, selecting existing geometry, or importing from CAD/BIM layers (e.g., DWG/DXF layer names, IFC zones). Support assigning zone metadata (type: life safety, cost-sensitive, permit-critical; tags; description) and color presets. Ensure zones are stored in PlanPulse as first-class entities linked to a project and sheet with scale-aware coordinates. Allow multi-sheet and multi-level support, snapping to drawing geometry, and precise editing (add/remove vertices, holes). Provide validation to prevent overlaps when disallowed and to enforce closed geometries. Include bulk import, conflict resolution for duplicate names, and unit consistency. Persist stable Zone IDs for cross-version tracking and integrations.

Acceptance Criteria
Draw Polygon Zone with Snapping and Closed-Shape Validation
Given a user is on a sheet and in Zone creation mode with snapping enabled When the user places at least 3 vertices and clicks Complete Then the final segment auto-connects to the first vertex within a ≤ 2 px tolerance and the polygon is closed And vertices within 8 px of drawing edges/vertices snap exactly to that geometry And if the shape is self-intersecting or open, a blocking validation message is shown and Save is disabled And on Save, a Zone entity is created with a unique Zone ID, linked to the current project, sheet, and level, storing scale-aware coordinates in sheet units
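The closed-shape and self-intersection checks above can be sketched as follows. This is an O(n²) illustration, assuming a redundant closing click within the tolerance is merged with the first vertex; a production check would use a sweep-line algorithm:

```python
def _segments_cross(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly intersect (cross interiors)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2) and
            0 not in (orient(p1, p2, p3), orient(p1, p2, p4),
                      orient(p3, p4, p1), orient(p3, p4, p2)))

def validate_zone(vertices, close_tol=2.0):
    """Validate a drawn zone: >= 3 vertices and no self-intersection once the
    final segment auto-connects back to the first vertex. A redundant closing
    click within `close_tol` px of the start is merged. Returns (ok, reason)."""
    if len(vertices) < 3:
        return False, "fewer than 3 vertices"
    first, last = vertices[0], vertices[-1]
    gap = ((last[0] - first[0]) ** 2 + (last[1] - first[1]) ** 2) ** 0.5
    ring = vertices[:-1] if gap <= close_tol else list(vertices)
    if len(ring) < 3:
        return False, "fewer than 3 distinct vertices"
    n = len(ring)
    edges = [(ring[i], ring[(i + 1) % n]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if (j - i) % n in (1, n - 1):   # adjacent edges share a vertex
                continue
            if _segments_cross(*edges[i], *edges[j]):
                return False, "self-intersecting"
    return True, "ok"
```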
Select Existing Geometry to Define Zone Boundary
Given an uploaded drawing with selectable closed polylines/regions When the user selects a closed polyline and chooses "Create Zone from Geometry" Then a Zone is created matching the geometry within ≤ 0.1% perimeter and area tolerance And if the geometry is not closed, the action is disallowed with an explanatory message And the Zone uses the source layer name as default name when available; otherwise a sequential "Zone <#>" name is assigned And snapping remains active for subsequent edits to this zone
Assign Zone Metadata and Color Preset
Given the Create/Edit Zone panel is open When the user inputs Name, selects Type (life safety, cost-sensitive, permit-critical), adds Tags and Description, and picks a Color Preset Then Name is required (1–100 chars) and unique within the sheet (case-insensitive) And Type is required and limited to the allowed values And up to 20 tags are accepted (each ≤ 30 chars); Description is limited to 1,000 chars And the selected Color Preset immediately updates the zone fill/stroke in canvas and legend And on Save, all metadata persists and is retrievable via UI and API
CAD/BIM Import with Layer/IFC Mapping, Bulk Import, and Unit Consistency
Given the user uploads a DWG/DXF/IFC file and opens the Zone Import dialog When they map source layers/IFC zones to zone types, confirm/override detected units, and click Import Then all closed geometries are imported as Zones with coordinates converted to sheet scale and ≤ 0.5% dimensional variance versus source And open polylines are skipped and listed with reason codes (e.g., not closed, self-intersecting) And duplicate zone names within a target sheet are resolved per selected rule (append numeric suffix, merge, or skip) with a post-import summary report And the importer supports at least 10,000 zones per operation with progress feedback and cancel support And each imported zone records its source file and layer/IFC identifier and is assigned a stable Zone ID
Overlap Prevention Based on Project Policy
Given the project setting "Allow Zone Overlap" is Off When a user creates or imports a zone that overlaps any existing zone on the same sheet and level by an area > 0.01 square units Then the operation is blocked, overlapping regions are highlighted, and a message lists the conflicting zone names And the user may edit geometry or cancel; no partial save occurs Given the project setting "Allow Zone Overlap" is On When a user creates or imports overlapping zones Then the zones are saved and both receive a non-blocking warning badge
Precise Editing: Vertices and Holes with Scale-Aware Storage
Given a user opens an existing zone in Edit mode When they add, move, or remove vertices, or add/remove an interior hole polygon Then the zone remains a valid closed polygon with zero self-intersections and all holes fully contained And snapping applies to edited vertices; keyboard nudging adjusts vertices by 1.0 units (Shift+Arrow) and 0.1 units (Arrow) And area and perimeter recalculate in real time with accuracy ±0.1% And on Save, updated coordinates and hole data persist with a version history entry capturing editor, timestamp, and change summary
Stable Zone IDs Across Sheets, Levels, and Versions for Integrations
Given a zone is created or imported When the underlying sheet is versioned or re-imported and the zone geometry matches the previous version at ≥95% similarity (area overlap and shape signature) Then the zone retains its original Zone ID across versions And the Zone ID is globally unique within the project and retrievable via API and export And when a zone is duplicated to another sheet or level, a new Zone ID is assigned and a parent reference to the source zone is recorded And if a zone is deleted and later recreated, a new Zone ID is generated and the retired ID remains in audit logs
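The ID carry-forward decision can be sketched as below. As a deliberate simplification, axis-aligned bounding-box IoU stands in for the spec's area-overlap/shape-signature similarity (real zones are polygons), and `uuid4` as the ID scheme is an assumption:

```python
import uuid

def bbox_iou(a, b):
    """Intersection-over-union of axis-aligned boxes (xmin, ymin, xmax, ymax).
    A simplified proxy for the polygon similarity the spec describes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def carry_forward_zone_id(old_zone, new_geometry, threshold=0.95):
    """Keep the original Zone ID when the new revision's geometry matches the
    old one at >= 95% similarity; otherwise mint a new ID."""
    if bbox_iou(old_zone["bbox"], new_geometry) >= threshold:
        return old_zone["id"]
    return str(uuid.uuid4())
```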
Zone–Revision Binding & Sync
"As an architect, I want my defined zones to stay aligned across drawing revisions so that I can trust alerts and heatmaps without babysitting every update."
Description

Maintain robust binding between zones and evolving drawing revisions. When a sheet is superseded, auto-realign zone geometries using layer mapping, control points, or vector diff alignment to keep zones positioned accurately. Detect scale, rotation, and origin shifts and prompt the user with a reconciliation UI if confidence is low. Preserve zone history across versions, including creation source and last verified revision. Expose a “Verify Alignment” action in the revision workflow and display confidence scores. Ensure compatibility with PlanPulse’s versioned markup system so zones appear consistently in historical views and diffs.

Acceptance Criteria
Auto-Realign Zones on Superseded Sheet
Given a sheet S at revision Rn with zones bound to S And a superseding sheet S' at revision Rn+1 is uploaded for S When the alignment job runs Then the system attempts alignment in this order: Layer mapping → Control points → Vector diff alignment And the selected method and its metrics are recorded on the revision And the post-alignment RMS error ≤ 2 px or ≤ 0.5% of sheet width (whichever is greater) And the alignment completes within 10 seconds for up to 200 zones on a ≤100 MP sheet
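The "whichever is greater" acceptance threshold reduces to a one-line check; a minimal sketch:

```python
def alignment_passes(rms_error_px, sheet_width_px):
    """Accept an alignment when RMS error <= max(2 px, 0.5% of sheet width),
    mirroring the 'whichever is greater' rule in the criteria."""
    return rms_error_px <= max(2.0, 0.005 * sheet_width_px)
```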
Detect and Report Transform Changes
Given a new revision S' with potential transform differences from S When alignment analysis runs Then the system computes and records: scale factor, rotation (°), and translation (x,y px) And displays these values in the alignment report panel And flags "Transform Change Detected" if |scale−1| > 1% or |rotation| > 1° or |translation| > 10 px And includes the flag and values in the audit log for the revision
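The flagging thresholds above can be sketched directly; treating the (x, y) translation as a single magnitude is one interpretation of "|translation| > 10 px":

```python
def transform_change_flag(scale, rotation_deg, tx_px, ty_px):
    """Flag 'Transform Change Detected' per the criteria's thresholds:
    |scale - 1| > 1%, |rotation| > 1 degree, or translation > 10 px.
    Using the translation vector's magnitude is an assumption here."""
    translation = (tx_px ** 2 + ty_px ** 2) ** 0.5
    return abs(scale - 1.0) > 0.01 or abs(rotation_deg) > 1.0 or translation > 10.0
```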
Low-Confidence Reconciliation Gate
Given the computed alignment confidence score for S → S' is < 0.85 or a Transform Change is flagged When the user enters the revision review Then the system blocks auto-commit of zone realignment and opens the Reconciliation UI And the UI provides actions: Accept as-is, Adjust control points, Recompute, Keep zones on previous revision, Cancel And no zone geometry is persisted on S' until the user confirms Accept or Recompute with confidence ≥ 0.85 And upon confirmation, lastVerifiedRevisionId is set to Rn+1 and the decision is audit logged
Verify Alignment Workflow Action
Given a revision Rn+1 awaiting zone verification When the user clicks "Verify Alignment" in the revision workflow Then the system displays overlay preview, chosen method, confidence score (0–1), and transform deltas And the action is fully keyboard-accessible and available via API And on user approval, zone alignment state is marked Verified, with user, timestamp, method, and confidence recorded And on rejection, the state remains Unverified and prompts for control point adjustment
Preserve Zone History Across Versions
Given a project with zones created via import, draw, or API across multiple revisions When viewing a zone’s History panel Then the system shows: creationSource, createdBy, createdAt, lastVerifiedRevisionId, lastVerifiedBy, lastVerifiedAt, lastAlignmentMethod, lastAlignmentConfidence And these fields persist across revisions and are immutable for past entries And exporting zone metadata for a given revision includes these fields with values as of that revision
Consistent Zones in Historical Views and Diffs
Given the user is viewing historical revision Rn in PlanPulse with versioned markups enabled When toggling Zones on in the viewer Then zone geometries render exactly as positioned at Rn, independent of later alignments And the Diff view between Rn and Rn+1 shows zone movement/resizing overlays and lists changed zones And markups linked to zones resolve to the correct zone instances for each compared revision
Alignment Fallback and Escalation
Given initial layer mapping and control point matching both fail to achieve confidence ≥ 0.85 When vector diff alignment is attempted Then if confidence remains < 0.85, the system does not persist any realignment And the user is prompted to add at least 3 control points before retry And the revision remains in Unverified state until confidence ≥ 0.85 or the user explicitly Accepts with a recorded override reason
Change Intersection Detection Engine
"As a small-firm owner, I want automatic detection when revisions touch sensitive zones so that I immediately see potential risk or scope creep without manual checking."
Description

Compute zone-touching changes by diffing successive drawing revisions and markup edits, then intersecting deltas with zone geometries. Track change metrics per zone (area delta, perimeter delta, object count/type changes, annotation presence) and support configurable buffer distances for near-boundary alerts. Generate a normalized severity score per zone based on magnitude, type, and rule weightings. Process diffs incrementally for performance and batch results for real-time feedback in the PlanPulse workspace. Provide APIs/events for downstream consumers (heatmap overlay, notifications, approvals). Handle large sheets with spatial indexing and throttled recomputation.

Acceptance Criteria
Detect and intersect drawing/markup deltas with zone geometries
Given a project with defined zones and a baseline sheet revision When a new sheet revision is uploaded or a markup is added/edited/deleted Then the engine computes geometric deltas between revisions and markup states And each delta is intersected with zone geometries to classify relation as inside/overlap/cross/touch/outside And a zone-change record is produced per affected zone containing sheet_id, from_revision_id, to_revision_id, change_set_id, zone_id, intersection_type, and timestamps And no records are produced for deltas with no intersection to any zone And the first batch of results is available within 700 ms for up to 500 deltas across up to 100 zones
Compute and expose per-zone change metrics
Given zone-intersecting deltas have been identified for a sheet When metrics are calculated for a zone Then area_delta (signed and absolute) and perimeter_delta (signed and absolute) are computed in model units And object_count_delta and object_type_changes (added/removed/modified by type) are reported And annotation_presence is true if any text/callout/markup exists in the zone after changes And metric values are consistent across repeated runs (deterministic) and within ±1% of a reference computation And metrics are persisted and retrievable via API by project_id, sheet_id, revision_id, and zone_id
Configurable buffer distances for near-boundary alerts
Given a global default buffer distance and optional per-zone overrides When a change occurs within distance <= effective_buffer of a zone boundary without intersecting the zone interior Then a near_boundary alert is emitted for that zone including the minimum distance and buffer used And when effective_buffer is 0, no near_boundary alerts are emitted And buffer distances respect sheet scale and units and can be configured to mm/in/ft equivalents And updating a buffer value takes effect on the next recomputation and is included in the change_set metadata And near-boundary evaluation completes within 200 ms for up to 100 zones per change_set
Normalized per-zone severity scoring and rule weightings
Given rule weightings for change types, magnitudes, and zone categories are configured When zone metrics are produced Then a severity_score in the range [0,100] is computed for each affected zone And the score is deterministic for identical inputs and monotonically non-decreasing with increased magnitude of change And missing rule weights fall back to documented defaults and are traceable in the score breakdown And the API returns severity_score along with contributing factors and applied weights for auditability And scores update in real time as metrics change, with recalculation completing within 150 ms per affected zone
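A minimal sketch of the weighted [0, 100] normalization described above; the `weights`/`caps` dictionaries and the default-weight fallback are assumptions, not PlanPulse's actual scoring rules:

```python
def severity_score(metrics: dict, weights: dict, caps: dict,
                   default_weight: float = 1.0) -> float:
    """Weighted, capped normalization of per-zone change metrics to [0, 100].

    Each metric magnitude is scaled against its cap (the value treated as a
    'maximal' change), clipped to 1.0 so the score stays bounded, then combined
    by weight. The clipping makes the score monotonically non-decreasing in
    each metric's magnitude.
    """
    names = set(metrics) | set(weights)
    total_w = sum(weights.get(n, default_weight) for n in names) or 1.0
    s = 0.0
    for n in names:
        w = weights.get(n, default_weight)  # missing weights fall back to a documented default
        cap = caps.get(n, 1.0)
        s += w * min(abs(metrics.get(n, 0.0)) / cap, 1.0)
    return round(100.0 * s / total_w, 1)
```

Returning the per-metric terms alongside the total would give the "contributing factors" breakdown the API criterion asks for.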
Incremental diff processing, throttling, and real-time batching
Given a burst of drawing and markup edits arriving within short intervals When the engine processes changes Then only impacted spatial tiles/partitions are recomputed (incremental), avoiding full-sheet recomputation And update batches are emitted at most every 250 ms during activity with a final consolidated batch within 1 s after 300 ms of inactivity And batch payloads include a stable change_set_id and do not duplicate zone-change records within the same burst And the PlanPulse workspace receives at least one batch per second under continuous edit streams And throughput sustains 20+ edits per second without backlog growth over a 60-second interval
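The coalesce-and-throttle behavior above can be sketched with an injected millisecond clock instead of real timers; `DeltaBatcher` and its record shape are illustrative, and the trailing consolidated batch after inactivity would be driven by a timer omitted here:

```python
class DeltaBatcher:
    """Coalesces per-zone change records and emits at most one batch per interval.

    Duplicate zone records within a burst are merged (last write wins), so a
    batch never contains two records for the same zone.
    """
    def __init__(self, interval_ms: int = 250, emit=None):
        self.interval_ms = interval_ms
        self.emit = emit or (lambda batch: None)
        self.pending: dict = {}        # zone_id -> latest record in this burst
        self.last_emit_ms = -10**9     # far in the past so the first add emits

    def add(self, now_ms: int, zone_id: str, record: dict) -> None:
        self.pending[zone_id] = record  # dedupe within the burst
        self.flush_if_due(now_ms)

    def flush_if_due(self, now_ms: int, force: bool = False) -> None:
        if self.pending and (force or now_ms - self.last_emit_ms >= self.interval_ms):
            self.emit(dict(self.pending))
            self.pending.clear()
            self.last_emit_ms = now_ms
```

`flush_if_due(now, force=True)` models the final consolidated batch when the edit stream goes quiet.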
Scalable performance on large sheets with spatial indexing
Given a sheet containing 50k vector objects and 500 zones with spatial indices built When processing a change_set of up to 1k deltas Then cold-start index build completes within 2 s and is reused for subsequent diffs And subsequent diff+intersection runs complete within 800 ms median and 1.5 s p95 And peak memory usage for indexing and diff stays under 1.5 GB And no request times out under a 5 s server timeout while maintaining correctness of intersections
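One way to meet the spatial-indexing requirement is a uniform grid: objects are bucketed by the cells their bounding boxes cover, so a diff probes only the cells under its delta instead of scanning all 50k objects. This is a hedged sketch (an R-tree is an equally plausible choice); the class and cell size are illustrative:

```python
from collections import defaultdict

class GridIndex:
    """Uniform-grid spatial index over object bounding boxes.

    insert() registers an object under every cell its bbox touches;
    query() returns candidate object IDs for a probe bbox. Candidates still
    need an exact intersection test, since cell membership is conservative.
    """
    def __init__(self, cell: float = 100.0):
        self.cell = cell
        self.cells = defaultdict(set)   # (gx, gy) -> {obj_id, ...}

    def _cells_for(self, bbox):
        x1, y1, x2, y2 = bbox
        c = self.cell
        for gx in range(int(x1 // c), int(x2 // c) + 1):
            for gy in range(int(y1 // c), int(y2 // c) + 1):
                yield (gx, gy)

    def insert(self, obj_id, bbox) -> None:
        for key in self._cells_for(bbox):
            self.cells[key].add(obj_id)

    def query(self, bbox) -> set:
        hits = set()
        for key in self._cells_for(bbox):
            hits |= self.cells[key]
        return hits
```

Cell size is the main tuning knob: too small and large objects fan out across many cells, too large and queries return many false candidates.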
Downstream APIs and events for heatmap, notifications, and approvals
Given authorized consumers are subscribed to events and have access to REST endpoints When zone-change results are produced Then a WebSocket event zone_change.v1 is emitted within 300 ms of computation with ordered per-sheet sequence numbers And a REST endpoint GET /api/projects/{pid}/sheets/{sid}/zones/changes returns the latest results with ETag support for idempotent caching And payloads include project_id, sheet_id, revision_ids, change_set_id, zone_id, metrics, severity_score, near_boundary, timestamps, and correlation_id And unauthorized requests receive 401/403, and events exclude any PII And the heatmap overlay and notifications services can render/update from a single event without additional queries (contains all required fields)
Heatmap Overlay & Legend
"As a project manager, I want an at-a-glance heatmap of impacted zones so that I can focus reviews on the highest-risk areas first."
Description

Render an interactive heatmap overlay on the drawing that color-codes zones by severity and change type. Provide a legend with filter controls (e.g., life safety, cost, permit) and time-scrub to review how zones changed across revisions. Support hover/click tooltips summarizing change metrics and links to the underlying diffs. Enable toggles per layer and per zone, adjustable opacity, and print/export to PDF for client packages. Ensure responsive performance on large drawings and accessibility (WCAG color contrast, non-color indicators). Integrate with PlanPulse’s visual workspace and version timeline so users can compare states side-by-side.
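The severity-to-color and change-type-to-indicator mapping might be resolved like this; the palette and pattern names below are hypothetical placeholders (the source does not specify them), chosen so that change type survives without color, per the WCAG requirement:

```python
# Hypothetical palette: severity drives fill color, change type drives a
# non-color indicator (hatch pattern) so the information is legible to
# color-blind users and in grayscale exports.
SEVERITY_FILL = {"Low": "#1b7f3a", "Medium": "#b07c00",
                 "High": "#c2410c", "Critical": "#991b1b"}
TYPE_PATTERN = {"Life Safety": "cross-hatch", "Cost": "diagonal", "Permit": "dots"}

def zone_style(severity: str, change_type: str, opacity: float) -> dict:
    """Resolve a zone's overlay style; opacity is clamped to [0, 1]."""
    return {
        "fill": SEVERITY_FILL[severity],
        "pattern": TYPE_PATTERN[change_type],
        "opacity": min(1.0, max(0.0, opacity)),
    }
```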

Acceptance Criteria
Enable Heatmap and Validate Color/Indicator Mapping
Given a project drawing with zones tagged with severity in {Low, Medium, High, Critical} and changeType in {Life Safety, Cost, Permit} And the Heatmap Legend defines color swatches for severities and non-color indicators for change types When the user turns Heatmap Overlay On from the workspace toolbar Then the overlay renders on the current viewport within 1000 ms And each visible zone’s fill color matches its severity swatch in the legend And each visible zone displays its change-type non-color indicator consistent with the legend And overlay-to-geometry misalignment is ≤1 px at 200% zoom And turning the overlay Off hides all heatmap elements within 200 ms
Filter Zones via Legend by Type and Severity
Given the Legend is open with filter controls for change types (Life Safety, Cost, Permit) and severities (Low, Medium, High, Critical) When the user selects only Life Safety and High severity Then only zones with changeType = Life Safety and severity = High are visible on the heatmap And the legend displays the count of active filters And applying or clearing any single filter updates the overlay within 300 ms And clicking Clear All restores visibility for all zones
Review Zone Changes Using Time-Scrub
Given a project with at least 3 revisions on the Version Timeline and Compare Mode is Off When the user drags the time-scrub handle to Revision R or uses Left/Right Arrow to step revisions Then the heatmap updates to reflect zone changes as of Revision R within 800 ms And the scrub snaps to discrete revisions and shows the revision label and timestamp And pausing the scrubber leaves the overlay at the last snapped revision Given Compare Mode is On with Revisions R1 and R2 selected from the timeline When the user opens side-by-side view Then the workspace displays synchronized panels for R1 and R2 with matched pan/zoom And scrubbing either panel advances both panels to the corresponding adjacent revisions in lockstep And filters and opacity settings apply to both panels by default and can be decoupled via a toggle
Toggle Layers/Zones and Adjust Opacity
Given a drawing with multiple CAD layers and zones listed in the Layers/Zones panel When the user hides layer "MEP" and zone "Stairwell A" Then heatmap elements associated with that layer and that zone are hidden within 300 ms And re-showing them restores their overlays And adjusting the Opacity slider from 0% to 100% in 5% increments updates overlay opacity in real time with changes applied within 100 ms And the user’s visibility and opacity settings persist for the session and are reset by clicking Reset View
Inspect Zone Tooltip Metrics and Open Underlying Diff
Given the heatmap is visible on the drawing When the user hovers over a zone for 300 ms Then a tooltip appears showing: Zone Name, Severity, Change Types, Revision context, Area Delta %, Markup Count Delta And the tooltip includes an actionable View Diff link When the user activates View Diff Then the underlying diff opens focused on the same zone and revision context within 800 ms in the PlanPulse diff panel And the same tooltip content is available via keyboard by focusing the zone in the Zones list and pressing Enter, with content announced to screen readers
Export Heatmap View and Legend to PDF for Client Package
Given the heatmap is visible with active filters and the legend shown When the user selects Export > PDF and chooses Current View or Full Sheet Then a PDF is generated that includes: rendered heatmap, legend with active filters and non-color indicators, revision label(s), timestamp, and scale bar And the PDF matches on-screen appearance with scale within ±1% and preserves colors/patterns (visual ΔE ≤ 3) And for a sheet up to A1 with ≤300 zones, export completes within 10 seconds and file size is ≤15 MB And the PDF opens without errors in common viewers (Adobe Acrobat, built-in browser viewer)
Performance and Accessibility Compliance on Large Drawings
Given a large drawing (e.g., 24 MP raster or complex vector) with 300 zones When the user enables/disables the overlay, applies filters, and pans/zooms Then initial overlay render time is ≤1500 ms, subsequent filter/toggle updates are ≤300 ms, and pan/zoom maintains ≥30 fps for 95% of interactions And additional memory used by the heatmap layer is ≤300 MB at peak And no main-thread long task exceeds 200 ms for the 95th percentile during these interactions (per Performance API) And legend text and UI controls meet WCAG 2.1 AA contrast (text ≥ 4.5:1) and non-text indicators meet ≥ 3:1 And non-color indicators for change types are visible on-screen and present in exported PDFs And all controls (overlay toggle, legend filters, time-scrub, compare toggle, opacity, export) are operable via keyboard with visible focus order and ARIA names/roles; tooltip content is announced via ARIA when focused
Zone Rules & Thresholds
"As a senior architect, I want to tailor thresholds and rules per zone so that alerts reflect our firm’s risk profile and local code priorities."
Description

Allow per-zone and template-based rules that define what constitutes a flag: minimum area/percentage change, specific object/category changes (e.g., egress width, doors, sprinklers), annotation keywords, and permit-critical always-flag logic. Support sensitivity presets (Strict/Standard/Lenient) and advanced conditions (AND/OR, time windows, buffer distance). Provide default templates for life safety and cost-sensitive zones aligned with common practice. Expose rule testing with recent diffs and display expected outcomes before saving. Store rules versioned with auditability and integrate their weights into severity scoring.
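The rule evaluation described above (thresholds, category triggers, AND/OR logic, permit-critical always-flag) can be sketched as a pure function; the `rule` and `change` dictionary shapes are assumptions for illustration, not PlanPulse's schema:

```python
def evaluate_zone(rule: dict, change: dict):
    """Evaluate one zone's rules against a change summary.

    Returns (flagged, reasons). Permit-critical always-flag short-circuits all
    other checks; otherwise each configured check contributes a boolean and the
    rule's logic (OR by default, AND optionally) combines them.
    """
    if rule.get("permit_critical_always_flag") and change.get("touches_zone"):
        return True, ["Policy: Permit-Critical Always-Flag"]

    checks, reasons = [], []
    if "min_area" in rule:
        ok = abs(change.get("area_delta", 0.0)) >= rule["min_area"]
        checks.append(ok)
        if ok:
            reasons.append(f"Threshold met: Area >= {rule['min_area']} m²")
    if "min_pct" in rule:
        ok = abs(change.get("pct_delta", 0.0)) >= rule["min_pct"]
        checks.append(ok)
        if ok:
            reasons.append(f"Threshold met: % Change >= {rule['min_pct']}%")
    if rule.get("categories"):
        hit = set(rule["categories"]) & set(change.get("categories", []))
        checks.append(bool(hit))
        for cat in sorted(hit):
            reasons.append(f"Trigger: Category = {cat}")

    combine = all if rule.get("logic", "OR") == "AND" else any
    flagged = bool(checks) and combine(checks)
    return flagged, reasons if flagged else []
```

Keeping evaluation pure makes the "Test Rules" preview trivial: run the same function over recent diffs without persisting anything.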

Acceptance Criteria
Per-Zone Area/Percentage Threshold Flagging
Given a zone has rules: Minimum Area Change = 5 m² and Minimum Percentage Change = 10% with OR logic by default When a diff increases the zone’s net area by 4.9 m² and 9.9% Then no flag is created for that zone Given the same zone rules When a diff increases the zone’s net area by exactly 5.0 m² (percentage < 10%) Then a flag is created and the flag details show “Threshold met: Area >= 5 m²” Given the same zone rules When a diff changes area by 1 m² but percentage by 10% or more Then a flag is created and the flag details show “Threshold met: % Change >= 10%” Given per-zone overrides exist over a template When the zone’s own thresholds differ from the template Then the zone-specific thresholds are used for flagging
Object/Category and Annotation Keyword Triggers
Given a zone rule tracks specific categories: Egress Width, Door, Sprinkler and keywords: ["egress","ADA","sprinkler"] (case-insensitive) When a diff modifies an egress width parameter within the zone Then a flag is created and the flag details list “Trigger: Category = Egress Width” Given the same rules When a new door object is added inside the zone boundary Then a flag is created and the flag details list “Trigger: Category = Door” Given the same rules When an annotation inside the zone contains the text “ADA ramp updated” Then a flag is created and the flag details list “Trigger: Keyword = ADA” Given the same rules When a door is added outside the zone and an annotation with keyword appears outside the zone Then no flag is created for this zone
Permit-Critical Always-Flag Enforcement
Given a zone is marked Permit-Critical with Always-Flag enabled When any change (geometry/object/annotation) touches the zone or its interior Then a flag is always created regardless of other thresholds or presets, and the flag details include “Policy: Permit-Critical Always-Flag” Given Always-Flag is disabled for the same zone When a change touches the zone Then flag creation is governed by the standard rules and thresholds Given a permit-critical zone with Always-Flag enabled When a change occurs within the configured buffer (if any) Then a flag is created and details include both “Policy: Permit-Critical Always-Flag” and the applied buffer distance
Sensitivity Presets and Advanced Conditions (AND/OR, Time Window, Buffer)
Given presets Strict, Standard, and Lenient are available and display their parameter values (min area, min %, tracked categories, logic, time window, buffer) When the user selects Strict Then the populated min area and min % are lower than those for Standard and Lenient, and the UI shows the exact values applied Given a zone uses AND logic between Area and Category triggers When a diff changes area above threshold but does not include any tracked category change Then no flag is created Given the same zone switches to OR logic When the same diff is applied Then a flag is created Given a time window condition of “last 14 days” is set When a change older than 14 days is included in the diff set Then it does not contribute to flagging Given a buffer distance of 1.0 m is set for the zone When a change occurs within 1.0 m outside the zone boundary Then it is treated as touching the zone for rule evaluation
Default Templates for Life Safety and Cost-Sensitive Zones
Given default templates Life Safety and Cost-Sensitive are provided When a user creates a zone and applies the Life Safety template Then the rule set pre-populates with tracked categories including (at minimum) Egress Width, Sprinklers, Fire-Rated Walls, and shows their default thresholds and logic Given the same templates When a user applies the Cost-Sensitive template Then the rule set pre-populates with tracked categories including (at minimum) Doors, Finishes, Equipment Counts, and shows default area/% thresholds and logic Given a user edits any pre-populated value When the zone is saved Then the edited values persist and the template label shows “Modified”
Rule Testing With Recent Diffs and Expected Outcomes Preview
Given a user is editing a zone’s rules When the user clicks “Test Rules” Then the system lists recent diffs for the zone (time-bounded per current settings) and displays for each: Would Flag? (Yes/No), Trigger Reasons, and Severity Preview Given the preview is displayed When the user adjusts a threshold, preset, or logic (AND/OR, time window, buffer) before saving Then the preview updates immediately to reflect expected outcomes without persisting the changes Given the user saves the rules after testing When the next diff arrives matching the tested conditions Then an actual flag is created whose outcome (flag/no-flag and reasons) matches the last preview under the saved configuration
Versioned Rules and Weighted Severity Scoring with Audit Trail
Given a zone’s rules are edited When the user saves changes Then a new version is created with timestamp, editor, and a diff of changed fields, and the version is visible in a history list Given multiple versions exist When the user selects two versions to compare Then a side-by-side or diff view shows added/removed/modified thresholds, categories, logic, presets, keywords, and weights Given a previous version is selected When the user clicks Rollback Then the selected version becomes the active ruleset and a new version entry records the rollback action Given rule weights are defined per trigger When a flag is generated Then the severity score and tier are shown with per-rule weight contributions, and increasing a rule’s weight (in a subsequent version) increases the computed score for the same diff when re-evaluated
Alerts & Approval Gates
"As a project lead, I want automatic alerts and enforced review gates for impacted zones so that critical issues are seen and resolved before approvals go out."
Description

Trigger real-time alerts when flagged changes occur in watched zones, with routing to roles (architect, client, consultant) via in-app notifications and email. Include digest mode to reduce noise and escalation paths for high-severity life safety events. Integrate with PlanPulse’s one-click approval by requiring acknowledgment of flagged zones before approval can be completed, and record who reviewed what. Provide per-project notification settings, snooze/defer options, and deep links to the heatmap state. Ensure alerts are idempotent and grouped by revision to avoid duplicates.
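The idempotency requirement (one alert per zone per revision, with repeat saves updating rather than duplicating) amounts to keying the alert store on `(revision_id, zone_id)`; this sketch uses an in-memory dict, where a real system would use a unique index:

```python
class AlertStore:
    """Deduplicates alerts by (revision_id, zone_id).

    A repeated save against the same revision and zone updates the existing
    alert's summary and timestamp instead of creating a duplicate.
    """
    def __init__(self):
        self.alerts: dict = {}

    def record(self, revision_id: str, zone_id: str, summary: str, ts: int):
        """Returns (alert, created): created is False when an alert was merged."""
        key = (revision_id, zone_id)
        if key in self.alerts:
            alert = self.alerts[key]
            alert["summary"] = summary
            alert["updated_at"] = ts
            return alert, False
        alert = {"revision_id": revision_id, "zone_id": zone_id,
                 "summary": summary, "created_at": ts, "updated_at": ts}
        self.alerts[key] = alert
        return alert, True
```

Digest grouping falls out of the same key: group the stored alerts by `revision_id` and emit one entry per zone.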

Acceptance Criteria
Real-time alert when watched zone is changed
Given a project with one or more watched zones enabled And a user with edit permissions saves a revision that modifies content intersecting a watched zone When the revision is saved Then an alert record is created and associated with the project, the revision ID, and the zone name And an in-app notification is delivered within 5 seconds of save And the alert includes severity derived from the zone type and a concise change summary And the heatmap displays a flag on the affected zone for that revision
Role-based routing of alerts to architect, client, and consultant
Given routing rules are configured mapping zone severities to project roles And users in those roles have valid accounts When an alert is generated for a watched zone Then mapped role members receive an in-app notification And mapped role members with email enabled receive an email within 30 seconds And users not mapped to the alert’s severity receive no notification And both notification types include a deep link to the affected revision’s heatmap
Digest mode batches non-critical alerts
Given a user has enabled digest mode for the project And the alert severity is Low or Medium When multiple alerts are generated during the digest window Then individual alert emails are suppressed for that user And a single digest email is sent at the configured digest time summarizing counts by zone and revision And in-app notifications for those alerts are marked as batched with a link to the digest And High severity alerts bypass digest and are sent immediately
Escalation for unacknowledged high-severity life safety alerts
Given a High severity Life Safety alert is generated And no assigned reviewer has acknowledged it When 10 minutes elapse without acknowledgment Then a reminder notification is sent to the originally routed roles And when 30 minutes elapse without acknowledgment Then the alert escalates to the project lead and client roles via in-app notification and email And the escalation is recorded in the alert’s audit trail with timestamps and recipients
Approval gate requires zone acknowledgment before one-click approval
Given a revision contains one or more unacknowledged flagged zones When a client attempts one-click approval Then the approval action is blocked And the user is prompted to review and acknowledge each flagged zone And upon acknowledgment of all flagged zones, the approval can be completed And the system logs reviewer identity, timestamp, and zone IDs acknowledged And the approval record references the acknowledged alert IDs
Per-project notification settings and snooze/defer controls
Given a user opens project notification settings When the user configures channel preferences by role and severity and saves Then subsequent alerts for that project respect those preferences for that user And when the user snoozes an alert for a specified duration Then no individual notifications for that alert are delivered to that user during the snooze And when the user defers an alert to digest Then the alert is included in the next digest instead of immediate email And High severity escalations ignore deferral and still deliver escalation notifications
Idempotent, revision-grouped alerts with deep links to heatmap
Given multiple saves are performed on the same revision affecting the same watched zone When alerts are processed Then only one alert per zone per revision is created And subsequent saves update the existing alert’s change summary and timestamp instead of creating duplicates And digest emails group alerts by revision with a single entry per zone per revision And the deep link opens the heatmap focused on the affected zone within the correct revision context
Audit Trail & Reporting
"As a compliance-focused architect, I want an auditable record of zone impacts and decisions so that I can justify changes to clients and permit reviewers."
Description

Maintain a comprehensive timeline of zone events: creation/edits, alignment verifications, rule changes, detected impacts, alerts sent, and approvals. Allow exportable reports (PDF/CSV) that summarize impacted zones per revision with snapshots and severity metrics. Provide filters by zone type, date range, and responsible party. Link entries to drawing versions and comments for traceability in client conversations. Ensure data retention policies and access controls align with PlanPulse project permissions, enabling shareable but secure audit packages for permits and client sign-off.

Acceptance Criteria
Recording Zone Lifecycle and Alignment Events
- Rule: For each of the following events — zone create, edit (geometry or metadata), rename, delete, alignment verification (pass/fail), rule change, approval — the system writes an immutable audit entry.
- Rule: Each audit entry includes: event_type, project_id, zone_id, zone_name, zone_type, drawing_version_id, actor_user_id, actor_display_name, actor_role, timestamp_utc (ISO 8601), and before/after diffs for edits and rule changes; alignment_result for verifications; approval_id and approval_outcome for approvals.
- Rule: Deletions persist a tombstone record retaining last-known zone identifiers and actor references.
- Given a user edits a zone boundary, When they save changes, Then an audit entry is created with a geometry_diff reference, the correct drawing_version_id, and is visible in the audit trail for that project.
- Given an alignment verification is run and fails, When the result is saved, Then an audit entry records alignment_result = "fail" with validation details.
Impact Detection and Alert Dispatch Auditability
- Rule: When a change affects a watched zone, an Impact Detected audit entry is created with severity_score, impacted_rules, impacted_area, and linked change_set_id.
- Rule: For each alert issued, an Alert Sent audit entry is created with recipient_id or external_email, channel (email/in-app), delivery_status (queued/sent/failed), attempt_count, and timestamps.
- Rule: Alert failures record failure_reason and are retriable with subsequent entries linked by correlation_id.
- Given a change triggers an alert to two recipients, When alerts are dispatched, Then two Alert Sent entries appear with delivery_status "sent" and link to the same change_set_id.
Export Audit Report per Revision (PDF/CSV)
- Rule: Export includes only entries scoped to selected revision(s) and current filters.
- Rule: CSV contains columns: project_id, revision_id, revision_label, revision_author, revision_created_at, zone_id, zone_name, zone_type, event_type, timestamp_utc, severity_score, alignment_result, approval_state, alert_count, drawing_version_id, snapshot_uri.
- Rule: CSV is UTF-8 encoded, comma-delimited, with header row and LF line endings.
- Rule: PDF contains a per-revision summary page and per-entry pages with snapshot thumbnails and zone highlights.
- Given the user selects revision R1 and clicks Export CSV, When the file downloads, Then its rows all have revision_id = R1 and match the on-screen count.
Filter by Zone Type, Date Range, and Responsible Party
- Rule: Filters support multi-select zone types, a date range (start/end inclusive in project timezone), and responsible party (actor_user or assigned_owner).
- Rule: Combining filters applies logical AND across dimensions.
- Rule: Clearing filters resets the result set to all entries for the project.
- Given filters [zone_type = "permit-critical", responsible_party = "Alex"], When applied with date range 2025-06-01 to 2025-06-30, Then only entries matching all three conditions are shown and the result count updates accordingly.
Traceability Links to Drawing Versions and Conversations
- Rule: Every audit entry includes a deep link to its drawing_version_id and, if applicable, to the associated comment thread ID.
- Rule: Following the drawing link opens the exact version with the impacted zone centered and highlighted.
- Rule: Following the conversation link opens the correct thread with the referenced comment auto-scrolled into view.
- Rule: If a link target is unavailable (permissions or deletion), a descriptive error is shown and access is logged.
- Given a reviewer clicks a comment link in an audit entry, When the conversation view opens, Then the referenced comment is in focus and the browser back action returns to the audit trail.
Access Control and Shareable Audit Packages
- Rule: Viewing the audit trail requires the project permission "View Audit"; exporting requires "Export Audit".
- Rule: Users without required permissions cannot see the audit menu actions and API endpoints return 403.
- Rule: Generating a shareable audit package produces a tokenized URL with configurable expiry and revocation; the package includes only the selected entries and embedded snapshots.
- Rule: External access via the token shows no in-app controls beyond viewing/downloading the package; all accesses are logged with IP, timestamp, and user-agent.
- Given a PM revokes a share link, When an external recipient tries it, Then access is denied and the attempt is logged.
Data Retention and Legal Hold Enforcement
- Rule: Each project has a configured audit retention period (e.g., 12/24/36 months); entries older than the period are purged and no longer retrievable via UI or API.
- Rule: Entries under an active legal hold are exempt from purge until the hold is cleared; holds are auditable with placer, reason, and timestamps.
- Rule: A daily retention job evaluates and purges eligible entries, producing a Retention Action audit entry summarizing counts purged.
- Rule: Changing the retention period updates the effective date and is recorded as an Audit Policy Change entry.
- Given retention is set to 12 months, When the purge job runs, Then entries older than 12 months (excluding legal holds) are removed and do not appear in exports or queries.

DeltaSync Engine

Syncs only the deltas you changed—down to sheet tiles and markup strokes—with resumable, bandwidth-aware transfers. Cuts wait times on weak site Wi‑Fi, avoids re-downloading whole sets, and guarantees version integrity when you come back online.

Requirements

Delta Tile & Stroke Diffing
"As an architect reviewing plans on site, I want only the tiles and markups I changed to upload so that syncing completes quickly over weak Wi‑Fi."
Description

Implement fine-grained diffing that detects and packages only modified sheet tiles and markup strokes for transfer. The client segments sheets into fixed-size tiles and assigns content-addressed hashes; vector markups carry stable stroke IDs and bounding metadata. On save or idle, the engine computes a delta manifest (added/updated/removed tiles and strokes) tied to the current drawing/version ID. This integrates with PlanPulse’s canvas renderer and versioning layer to avoid re-uploading unchanged assets, reducing bandwidth usage and accelerating sync on site Wi‑Fi while preserving full fidelity for real-time markups and approvals.
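The content-addressed tile diff can be sketched with standard-library hashing; tile keys as `(row, col)` tuples and the manifest field names are illustrative choices, not the DeltaSync wire format:

```python
import hashlib

def tile_hashes(tiles: dict) -> dict:
    """Map tile_key -> sha256 hex digest of the tile's bytes (content-addressed)."""
    return {k: hashlib.sha256(v).hexdigest() for k, v in tiles.items()}

def diff_manifest(prev: dict, curr: dict) -> dict:
    """Compare two {tile_key: hash} maps into added/updated/removed lists.

    Sorting the key lists makes the manifest deterministic for identical
    input, as the acceptance criteria require.
    """
    added   = sorted(k for k in curr if k not in prev)
    removed = sorted(k for k in prev if k not in curr)
    updated = sorted(k for k in curr if k in prev and curr[k] != prev[k])
    return {"added": added, "updated": updated, "removed": removed}
```

Stroke diffing follows the same pattern with stable stroke IDs as keys and a hash over geometry plus style as the value.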

Acceptance Criteria
Tile Hash Diffing Detects Changes
Given a sheet segmented into fixed-size tiles hashed by content When a user modifies pixels within any tile Then only those tiles produce new hashes and are flagged as updated in the delta set Given no visual change within a tile When diffing runs on save or idle Then the tile's hash equals the prior hash and it is excluded from the delta payload Given a new page or changed sheet dimensions When diffing runs Then tiles that appear or fall outside the new bounds are marked as added or removed respectively in the manifest
Delta Manifest Structure and Version Binding
Given a save or idle trigger When a delta manifest is generated Then it includes drawingId, baseVersionId, nextLocalVersionId, timestamp, tileSize, and tileDeltas {added|updated|removed} plus strokeDeltas {added|updated|removed} Given identical sheet state When the manifest is regenerated Then tile and stroke lists (including ordering) are deterministic for the same input and hashing function Given a manifest referencing baseVersionId V When applied server-side Then the resulting version content matches the client canvas state byte-for-byte for changed tiles and strokes
Stable Stroke IDs and Bounding Metadata
Given a newly created vector stroke When it is first persisted Then it is assigned a globally unique, stable strokeId and a computed bounding box and hash that persist across sessions Given an edit to an existing stroke’s geometry or style When diffing runs Then the same strokeId is emitted under updated with a new hash and bounding box Given a deletion of a stroke When diffing runs Then the strokeId is emitted under removed and does not reappear in subsequent manifests unless recreated as a new strokeId
Resumable, Bandwidth-Aware Delta Transfer Packaging
Given a delta payload with N updated tiles and strokes When upload begins over a constrained network Then only the changed items are sent in chunked form with per-chunk checksums and acknowledgments Given a mid-transfer interruption When the connection resumes Then only missing or unacknowledged chunks are retransmitted and the server assembles the payload without corruption, verified by end-to-end asset hashes Given upload concurrency would saturate the link When bandwidth drops below the configured target Then the client reduces concurrent transfers to maintain steady forward progress without exceeding the target rate
No Re-Upload of Unchanged Assets Across Saves
Given the user triggers save with no modifications since the last synced state When diffing runs Then the delta manifest contains zero added or updated tiles and strokes, and the upload body is ≤ 1 KB excluding headers Given repeated idempotent saves without changes When observing network traffic Then the server responds with a no-op acknowledgement and no asset bytes are transferred
Renderer Applies Deltas Without Visual Regression
Given a sheet with mixed unchanged and updated tiles and strokes When the delta is applied locally and after server round-trip Then the rendered canvas matches a full re-render pixel-for-pixel, within 1 px tolerance on tile boundaries Given tile updates adjacent to unchanged tiles When rendered Then no visible seams or stitching artifacts appear at tile edges Given a stroke update overlapping multiple tiles When applied Then affected tiles are correctly invalidated and re-rendered while unaffected tiles are not redrawn
Offline Edits Merge With Version Integrity
Given the user makes edits offline producing several local delta manifests When connectivity is restored Then the client uploads deltas in order against the recorded baseVersionId and the server produces a new version matching the last offline canvas state Given the server advanced the version while offline When syncing resumes Then the client rebases local deltas onto the latest server version, resolving conflicting edits deterministically and surfacing stroke conflicts for user resolution Given sync completes When the client performs a verification fetch Then all tile and stroke content hashes match the server’s values, confirming version integrity
Resumable Chunked Transfers
"As a project lead working in the field, I want transfers to resume automatically after a disconnect so that I don’t waste time or data re-uploading."
Description

Provide chunked upload/download with checkpointing so interrupted transfers can resume without restarting. The client splits deltas into configurable chunks, maintains encrypted session IDs and byte-range checkpoints locally, and leverages multipart APIs on the server. On reconnect, the engine verifies chunk integrity and continues from the last confirmed offset. Adaptive chunk sizing responds to latency and packet loss. Integrates with authentication refresh, storage quotas, and the version manifest to ensure atomic completion of a delta bundle.
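The resume-from-checkpoint step reduces to computing which chunks the server has not acknowledged; this sketch assumes acknowledgments are tracked as a set of chunk indices (byte-range bookkeeping would work the same way):

```python
def missing_chunks(total_size: int, chunk_size: int, acked: set):
    """Return (index, start, end) for every chunk not yet acknowledged.

    On resume, the client uploads only these chunks; the final chunk is
    truncated to the payload size.
    """
    n = (total_size + chunk_size - 1) // chunk_size  # ceil division
    out = []
    for i in range(n):
        if i not in acked:
            start = i * chunk_size
            out.append((i, start, min(start + chunk_size, total_size)))
    return out
```

After resume, re-uploaded bytes are bounded by the unacknowledged chunks, which is exactly the guarantee the acceptance criteria state.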

Acceptance Criteria
Resume upload after mid-transfer network drop
Given a delta bundle split into chunks with a persistent encrypted session ID and byte-range checkpoints stored locally And the connection drops after at least one chunk has been confirmed by the server When connectivity is restored and the client initiates resume using the same session ID Then the client queries the server for confirmed byte ranges and missing chunks And resumes from the first missing byte without re-uploading confirmed chunks And the number of re-uploaded bytes is less than or equal to the size of the last unconfirmed in-flight chunk And each resumed chunk passes server-side hash verification And the final bundle checksum matches the precomputed checksum And the server commits the bundle atomically and returns a success receipt bound to the session ID
Resume download after app crash or restart
Given a partially downloaded delta bundle with local byte-range checkpoints and a session ID persisted to disk When the app is force-closed or the device restarts during transfer And the user relaunches the app Then the client restores transfer state from checkpoints And resumes download from the last confirmed offset without re-downloading more than the last unconfirmed chunk And the final file hash matches the manifest hash And the UI progress indicator reflects the resumed progress within 2 seconds of relaunch And no user action is required to resume
Adaptive chunk sizing under high latency and packet loss
Given adaptive chunking is enabled with a configured minimum chunk size of 64 KB and maximum of 4 MB And the client measures rolling 10-second averages for RTT and packet loss When average RTT exceeds 250 ms or packet loss exceeds 2% for 3 consecutive measurement windows Then the client halves the current chunk size down to the configured minimum When average RTT is below 100 ms and packet loss is below 0.5% for 3 consecutive windows Then the client doubles the current chunk size up to the configured maximum And all adjustments remain within the configured bounds And for the 30 seconds following an adjustment, the per-megabyte retry rate does not increase
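The halving/doubling rule above is mechanical enough to sketch directly. A minimal, assumption-laden version (each measurement window is a hypothetical `(avg_rtt_ms, loss_pct)` pair):

```python
MIN_CHUNK, MAX_CHUNK = 64 * 1024, 4 * 1024 * 1024  # 64 KB .. 4 MB, per the criteria

def adjust_chunk_size(current, windows):
    """Apply the adaptive rule over the last 3 rolling measurement windows:
    halve (down to the minimum) when all 3 are degraded, double (up to the
    maximum) when all 3 are healthy, otherwise leave the size unchanged."""
    recent = windows[-3:]
    if len(recent) == 3:
        if all(rtt > 250 or loss > 2.0 for rtt, loss in recent):
            return max(current // 2, MIN_CHUNK)
        if all(rtt < 100 and loss < 0.5 for rtt, loss in recent):
            return min(current * 2, MAX_CHUNK)
    return current
```

Requiring 3 consecutive windows before any change is what keeps the size from oscillating on a single noisy measurement.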
Integrity verification and atomic completion against version manifest
Given all chunks of a delta bundle have been uploaded When the server assembles the bundle Then each chunk hash matches its transmitted hash And the reconstructed bundle checksum matches the client-side checksum And the bundle aligns with the target version manifest entries And the server applies the delta bundle atomically so that either a new version is visible with all changes or no changes are visible And clients reading the version manifest observe a single monotonically increased version number with consistent content hashes
Authentication expiry mid-transfer with seamless refresh
Given a long-running transfer with an access token that will expire during the operation When the server returns an authentication error indicating token expiry Then the client requests a token refresh using the configured auth flow without discarding session state And resumes the transfer from the last confirmed offset using the same session ID within 5 seconds of successful refresh And no chunk is duplicated on the server And no sensitive tokens or session IDs are written to plaintext logs And the transfer completes with hash and manifest validation as if uninterrupted
Server storage quota exceeded during transfer
Given the user's server-side storage quota will be exceeded by the pending delta bundle When the server responds with a quota-exceeded error during chunk upload Then the client stops sending further chunks for that session And marks the transfer as failed with an explicit quota error code And the server discards any partial bundle data associated with the session so that used storage does not increase after failure And the client clears local checkpoints for the failed session but retains the original delta bundle for retry And the user is shown the additional storage required to complete the transfer And after quota is increased, a new transfer session can upload the bundle to completion with no residual partial state on the server
Encrypted local session state for resumable transfers
Given the client persists session IDs and byte-range checkpoints to local storage for resumable transfers When the device is at rest and the app is not running Then the persisted session data is encrypted with AES-256-GCM using a key stored in the OS secure keystore And the plaintext session data is not readable from the file system by another app or user account And tampering with the stored data results in failed integrity verification and the app discards the session and starts a new one And a redacted debug export does not include session IDs or raw checkpoints
Bandwidth-aware Sync Scheduler
"As a user on spotty Wi‑Fi, I want the app to adapt sync speed and prioritize critical changes so that I can keep working without delays."
Description

Continuously assess network conditions (throughput, latency, metered status) to prioritize and throttle sync tasks intelligently. Critical small deltas and approval-blocking changes are sent first; large tiles are deferred or rate-limited. Provide background sync, pause/resume controls, and a lightweight status indicator in the UI. Scheduling respects user preferences (e.g., Wi‑Fi-only) and OS power constraints. Integrates with the delta manifest to reorder work without compromising version integrity.
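The priority ordering the criteria below demand (approval-blocking > critical > normal, FIFO within a class) maps naturally onto a heap keyed by `(priority_rank, sequence_number)`. A minimal dispatch-order sketch, with the class names taken from this spec and everything else assumed:

```python
import heapq
from itertools import count

PRIORITY = {"approval-blocking": 0, "critical": 1, "normal": 2}

class SyncScheduler:
    """Dispatch-order sketch only: higher-priority classes first, FIFO
    within a class (the monotonically increasing sequence number breaks
    ties deterministically). Throttling and network probing are omitted."""
    def __init__(self):
        self._heap, self._seq = [], count()

    def enqueue(self, item, priority="normal"):
        heapq.heappush(self._heap, (PRIORITY[priority], next(self._seq), item))

    def next_item(self):
        return heapq.heappop(self._heap)[2]
```

An approval-blocking change enqueued after a large normal tile still dispatches first, which is the behavior the "Prioritize Critical Deltas" scenarios verify.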

Acceptance Criteria
Prioritize Critical Deltas Under Constrained Bandwidth
Given measured throughput <= 1 Mbps or RTT >= 200 ms, When the queue contains both critical deltas (<=200 KB) and non-critical assets, Then the scheduler dispatches all critical deltas ahead of non-critical items. Given an approval-blocking change is enqueued, When the scheduler is active, Then its transfer begins within 2 seconds and completes before any non-critical transfer >1 MB starts. Given multiple critical deltas are pending, When dispatching, Then items are ordered by priority (approval-blocking > critical > normal) and FIFO within the same priority. Given bandwidth improves by >50% over the last 60 seconds, When the critical backlog is empty, Then the scheduler increases concurrency for non-critical items up to the configured cap.
Defer and Rate-Limit Large Tile Transfers
Given measured throughput <= 2 Mbps or RTT >= 200 ms, When a non-critical large tile (>5 MB) is queued, Then its start is deferred until no critical or approval-blocking items remain. Given a large tile transfer is active and the bandwidth budget is 300 KB/s, When instantaneous throughput exceeds the budget, Then the scheduler throttles the transfer to within ±10% of 300 KB/s. Given no critical items are pending for 30 seconds, When large tiles are queued, Then at most 1 concurrent large tile transfer is allowed. Given throughput > 10 Mbps and RTT < 100 ms, When large tiles are pending, Then the scheduler increases concurrency up to 3 large tile transfers.
Respect User Network Preferences and Metered Connections
Given the user preference is "Wi-Fi only", When the device is on cellular or a metered network, Then non-critical transfers are not started and existing non-critical transfers >1 MB are paused within 3 seconds. Given the connection is metered and no override is set, When critical deltas (<=200 KB each) are queued, Then only those critical deltas are synced and non-critical items remain queued. Given the user changes the network preference, When the new preference prohibits current transfers, Then the scheduler pauses them within 3 seconds and marks them "Paused by policy". Given the device connects to unmetered Wi-Fi, When previously paused items exist, Then they auto-resume within 5 seconds, preserving byte offsets.
Background Sync with Pause/Resume and Resumable Transfers
Given the app is in background and the OS permits background tasks, When sync is active, Then transfers continue and progress is persisted at least every 5 seconds. Given the user taps "Pause Sync", When transfers are in progress, Then all in-flight transfers are paused within 2 seconds and their byte offsets are saved. Given the user taps "Resume Sync", When previously paused transfers exist, Then they resume from the last saved byte offset without re-transferring completed bytes. Given the app is force-quit and relaunched, When the scheduler starts, Then it restores the previous queue and resumes resumable transfers from saved offsets. Given network drops mid-transfer, When connectivity returns within 10 minutes, Then the transfer resumes without data corruption or duplicate deltas.
Low Power Mode and OS Power Constraints
Given OS Low Power Mode or Battery Saver is active, When scheduling transfers, Then non-critical transfers are deferred and critical transfers are limited to 1 concurrent stream capped at 150 KB/s. Given battery < 20% and the device is not charging, When the app is foregrounded, Then large non-critical transfers (>5 MB) do not start unless explicitly resumed by the user. Given the OS background data restriction is enabled, When the app goes to background, Then the scheduler suspends background sync and marks items "Paused by OS policy". Given the device is connected to power and Low Power Mode is off, When items paused for power constraints exist, Then they resume within 5 seconds subject to network preferences.
Delta Manifest Integrity and Reordering on Reconnect
Given the device was offline and changes were queued, When connectivity is restored, Then the scheduler consults the delta manifest and orders transfers to satisfy dependencies before non-dependent items. Given a delta chunk completes, When its checksum/ETag does not match the manifest, Then the chunk is retried up to 3 times with exponential backoff and the item is marked "Failed" after max retries without committing the version. Given an approval-blocking change set comprises multiple deltas, When all required deltas have been synced, Then the change set is committed atomically and partial commits are not visible. Given partially transferred assets exist from prior sessions, When resuming, Then only missing bytes/deltas are requested and no full re-download occurs.
Lightweight Status Indicator Displays Scheduler State
Given the scheduler has active or queued transfers, When the user views the UI, Then a status indicator displays the number of queued items, number in progress, and current policy flags (e.g., Metered, Low Power) with updates at least every 2 seconds. Given the user taps the status indicator's Pause control, When sync is active, Then all transfers pause within 2 seconds and the indicator reflects the paused state. Given all transfers complete, When no tasks are pending, Then the status indicator shows "Up to date" within 2 seconds and background network activity stops. Given a transfer fails, When the user opens the indicator, Then a concise error state is shown with a retry option that requeues the item at appropriate priority.
Conflict-free Merge & Version Integrity
"As a collaborator editing the same sheet, I want my offline edits to merge safely so that we never corrupt versions or lose work."
Description

Ensure deterministic, lossless merge of concurrent edits when users work offline. For vector markups, apply CRDT/OT-based reconciliation with per-stroke timestamps and authorship to avoid conflicts; for raster tiles, use tile-level last-writer-wins with causality checks. Generate immutable version artifacts and maintain a full audit trail linking deltas to users, timestamps, and approval states. Prevents corruption, guarantees that approved versions are exact, and aligns with PlanPulse’s one-click approval workflow.
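The per-stroke reconciliation can be illustrated with a last-writer-wins register per stroke, which is one simple CRDT that satisfies the determinism and idempotence requirements below. This is a sketch, not the product's actual merge function; the `(timestamp, author_id, payload)` tuple shape is an assumption:

```python
def merge_strokes(a, b):
    """Deterministic per-stroke merge sketch. Each map is
    {stroke_id: (timestamp, author_id, payload)}. For a conflicting
    stroke_id the lexicographically greater (timestamp, author_id) pair
    wins, so the result is independent of merge order (commutative) and
    re-merging changes nothing (idempotent)."""
    merged = dict(a)
    for sid, op in b.items():
        if sid not in merged or op[:2] > merged[sid][:2]:
            merged[sid] = op
    return merged
```

Using the author ID as the tiebreak for equal timestamps is what makes the outcome deterministic across devices without any coordination.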

Acceptance Criteria
Offline Vector Markup Merge (CRDT)
Given two users edit vector markups on the same sheet while offline And each stroke has a stable UUID, timestamp, and author When both users reconnect and sync their deltas Then the merged sheet contains all non-deleted strokes with no duplicates or losses And concurrent create/move/delete operations resolve deterministically per CRDT rules without prompting users And per-stroke authorship and timestamps are preserved And reapplying the same deltas produces no further changes (idempotent)
Raster Tile Last-Writer-Wins with Causality
Given two devices modify the same raster tile T with deltas carrying parentVersionId and causal time And one delta is causally newer than the other When both deltas are synced Then the stored tile for the latest version reflects the causally newest writer (last-writer-wins) And the older, stale delta is not applied over the newer tile And both deltas are recorded with their causal relationship in the audit trail And the stale delta remains accessible via its own version artifact linked as an ancestor
Deterministic Replay and Content Hash Consistency
Given a base version V0 and a set of deltas D from multiple devices When D is applied to V0 in any permutation on a clean environment Then the resulting version Vn has the same content hash H across runs And H includes vector geometry, raster tiles, and deterministic metadata fields And H matches the stored immutable artifact hash for Vn
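A permutation-independent content hash follows from hashing a canonical serialization of the final state rather than the delta stream. A sketch under the assumption that version state is representable as a JSON-compatible mapping:

```python
import hashlib
import json

def content_hash(version_state):
    """Order-independent content hash sketch: serialize the version state
    canonically (sorted keys, fixed separators) and hash the bytes. Any
    permutation of delta application that produces the same final state
    therefore produces the same hash H."""
    canonical = json.dumps(version_state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

The real artifact hash would also fold in raster tile bytes and deterministic metadata, per the criterion above; the key property shown here is that the hash depends only on the resulting state.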
Immutable Version Artifact and Read-Only Guarantees
Given a merge completes and a version artifact Va is created When any client attempts to modify Va in place Then the system rejects the write and creates a new child version Vb with a new content hash And Va remains byte-identical and downloadable after the operation And Va metadata includes parentVersionId, creator userId, createdAt (UTC), change summary, and approval state
Complete Audit Trail Linking Deltas to Approvals
Given multiple deltas lead to an approved version Va When the audit trail for Va is queried Then every delta from the base to Va is present with fields: deltaId, userId, deviceId, timestamp (UTC), per-stroke/tile authorship, parentVersionId, and causality info And approval transitions (requested, approved, revoked) are present with userId and timestamps And exporting the audit trail returns a verifiable JSON including checksums/signatures for integrity And no gaps or orphan deltas exist in the chain
One-Click Approval Integrity Under Late Syncs
Given version Va is approved via one-click approval while contributors are offline When those contributors later sync deltas based on an ancestor of Va Then Va remains immutable and unchanged And their deltas are applied onto a new child version Vb requiring separate approval And the UI indicates Va is approved and Vb is pending approval And no post-approval changes appear in Va’s artifact or hash
End-to-End Integrity Verification
"As a user relying on accurate drawings, I want verification that synced data is complete and correct so that I can trust the versions I approve."
Description

Validate every transfer with cryptographic checksums per chunk/tile and a signed delta manifest. The client and server compute and compare hashes, retrying only failed chunks. Use ETag/If-Match headers for concurrency control and reject partial or stale writes. Store manifests alongside version metadata so clients can verify completeness before marking a sync as successful. This guarantees that what’s rendered and approved exactly matches what was produced by the authoring client.
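The "retry only failed chunks" behavior reduces to diffing the manifest's expected hashes against hashes of the bytes actually received. A minimal sketch, assuming a hypothetical manifest shape of `{chunk_id: expected_sha256_hex}`:

```python
import hashlib

def failed_chunks(manifest, received):
    """Return the sorted chunk IDs whose received bytes are missing or do
    not match the manifest's SHA-256 hash; only these are re-requested,
    everything else is accepted as verified."""
    bad = []
    for chunk_id, expected in manifest.items():
        data = received.get(chunk_id)
        if data is None or hashlib.sha256(data).hexdigest() != expected:
            bad.append(chunk_id)
    return sorted(bad)
```

Both sides can run the same check, which is what lets the client resume precisely from the server's reported set of missing or corrupt chunks.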

Acceptance Criteria
Chunk Hash Mismatch Triggers Targeted Retry
Given a client uploads chunk N with a SHA-256 hash Hc When the server recomputes the SHA-256 hash Hs over the received bytes Then the server accepts the chunk only if Hs equals Hc and responds 201 with ETag=Hc And if Hs does not equal Hc, the server rejects with 400 code "ChecksumMismatch", does not persist the chunk, and records the failure And the client retries only chunk N up to 3 attempts with exponential backoff starting at 500 ms And after 3 failed attempts, the sync aborts with status "FailedIntegrity" and no subsequent chunks are applied
Signed Delta Manifest Validation Before Apply
Given a delta manifest includes chunk IDs, order, byte sizes, per-chunk SHA-256 hashes, and a detached Ed25519 signature When the server receives the manifest Then the server verifies the signature against the project’s registered public key and rejects with 401 code "InvalidSignature" if verification fails And the server validates that all referenced chunks exist and their stored hashes match the manifest And the server will not apply any delta if manifest signature verification or content validation fails
ETag Concurrency Control Rejects Stale Client Write
Given an apply request includes If-Match set to the client’s known version ETag Vn When the server compares If-Match to the current version ETag Then if they differ, the server responds 412 Precondition Failed with code "VersionConflict" and performs no write And if If-Match is missing, the server responds 428 Precondition Required And if they match, the server applies the delta atomically, returns 200 with new ETag Vn+1, and persists no partial state on failure
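The conditional-write logic above (428 when `If-Match` is absent, 412 on a stale ETag, atomic apply otherwise) can be sketched as a compare-and-set. The `store` dict and `v<N>` ETag format are stand-ins, not the product's actual version record:

```python
def apply_delta(store, if_match, delta):
    """Conditional-write sketch mirroring the criteria. store is a
    hypothetical server-side version record: {"etag": "v1", "state": {...}}.
    Returns (http_status, current_etag)."""
    if if_match is None:
        return 428, store["etag"]          # Precondition Required
    if if_match != store["etag"]:
        return 412, store["etag"]          # Precondition Failed: stale client
    store["state"].update(delta)           # apply atomically (single-threaded sketch)
    store["etag"] = "v" + str(int(store["etag"][1:]) + 1)
    return 200, store["etag"]
```

Because a stale `If-Match` is rejected before any write, a client that missed an intervening version can never silently overwrite it; it must re-fetch and retry against the new ETag.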
Completeness Verification Blocks Success State Without All Chunks
Given a manifest lists M chunks for a delta When fewer than M chunks are stored with matching SHA-256 hashes Then the server refuses finalize/apply and returns 409 code "IncompleteTransfer" And the client must not mark the sync successful until the server responds "Applied" with manifestId and new ETag And the server response includes the count and IDs of missing chunks so the client can resume precisely
Resumed Sync Maintains Integrity After Network Interruption
Given an upload for manifestId is interrupted mid-transfer When the client reconnects and requests sync status Then the server returns the set of verified chunk IDs and their ETags for manifestId And the client uploads only missing chunks; re-uploading an already-verified chunk with matching content returns 200 and is idempotently ignored And after resumption completes, the server revalidates all hashes and the manifest signature before applying the delta
Rendered Output Digest Matches Authoring Client
Given the authoring client provides a canonical render hash set for affected tiles at 300 DPI computed as SHA-256 over rasterized sRGB bytes When the applied version is rendered on the receiving side Then each tile’s computed render hash matches the authoring client’s corresponding hash And if any tile hash mismatches, the version is flagged "IntegrityMismatch", approvals are blocked, and diagnostics are logged And on success, an integrity verification event is recorded linking manifestId to the render hash set
Offline Queue & Smart Retry
"As an architect traveling between sites, I want my changes to queue and sync automatically when I’m online so that I don’t have to babysit uploads."
Description

Maintain a durable local queue for all delta operations while offline, with deduplication, dependency ordering, and exponential backoff on failures. Provide user controls to view the queue, retry, cancel, or defer items, and respect metered connection policies. The queue persists across app restarts and device reboots, and coordinates with the scheduler to avoid contention with active drawing sessions. Ensures steady progress toward sync without user babysitting.
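Deduplication by content hash plus dependency ordering can be sketched as a small fixed-point loop. Item shape (`id`, `payload`, `deps`) is an illustrative assumption:

```python
import hashlib

def dispatch_order(items):
    """Return the transmit order for queued delta items. An item is emitted
    only after all of its dependencies; items with an already-seen payload
    hash are deduplicated (not transmitted) but still unblock dependents."""
    seen_hashes, done, order = set(), set(), []
    pending = list(items)
    while pending:
        progressed = False
        for item in list(pending):
            if not all(d in done for d in item["deps"]):
                continue  # blocked: waiting on an upstream delta
            digest = hashlib.sha256(item["payload"]).hexdigest()
            if digest not in seen_hashes:
                seen_hashes.add(digest)
                order.append(item["id"])
            done.add(item["id"])
            pending.remove(item)
            progressed = True
        if not progressed:
            break  # remaining items have unmet deps (missing or cyclic)
    return order
```

Items left in `pending` when the loop stalls correspond to the "Blocked: waiting on A" / "Cancelled: upstream failed" states in the criteria below.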

Acceptance Criteria
Queue Persistence Across App Restarts and Device Reboots
Given the device is offline with 10 queued delta items When the app is force-closed and the device is rebooted Then upon relaunch the queue restores the same 10 items in the same order with identical IDs and metadata Given an item had partially uploaded (>=1 acknowledged chunk) When the app restarts Then the upload resumes from the last acknowledged offset without re-sending more than 64KB Given no user action occurs When 30 days elapse Then all queued items remain present and durable (not lost or duplicated)
Deduplication and Dependency Ordering of Delta Operations
Given two queued deltas with identical content-hash and target When sync runs Then only one network transfer occurs and the duplicate is marked Deduplicated without transmission Given a delta B depends on delta A When sync runs Then A is sent and acknowledged before B is attempted and B shows Blocked: waiting on A until A succeeds Given a dependency fails permanently When processing dependents Then all dependents are marked Cancelled: upstream failed and are not retried
Exponential Backoff and Retry Policy
Given a retryable error occurs (HTTP 429/500/502/503 or network timeout) When retrying Then delays follow 1s, 2s, 4s, 8s... capped at 5 minutes with ±20% jitter Given 8 consecutive retry attempts fail When evaluating the item Then the item transitions to Failed and requires explicit user Retry to continue Given a non-retryable error occurs (HTTP 400/401/403/404/409) When processing the item Then it is marked Needs attention with no automatic retries Given a user taps Retry now on a Failed item When policy allows Then a new attempt starts within 2 seconds and backoff is reset
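The delay schedule above (1s, 2s, 4s... capped at 5 minutes, ±20% jitter) is a standard capped exponential backoff. A minimal sketch, with the injectable `rng` parameter added purely to make the jitter testable:

```python
import random

def backoff_delay(attempt, base=1.0, cap=300.0, jitter=0.2, rng=random):
    """Delay in seconds before retry number `attempt` (0-based): exponential
    growth from `base`, capped at `cap` (5 minutes), then scaled by a
    uniform ±`jitter` factor so simultaneous clients do not retry in
    lockstep."""
    delay = min(base * (2 ** attempt), cap)
    return delay * rng.uniform(1.0 - jitter, 1.0 + jitter)
```

Applying the cap before the jitter keeps the worst-case delay bounded at 6 minutes (300 s × 1.2) rather than letting jitter stack on an uncapped exponent.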
User Queue Management: View, Retry, Cancel, Defer
Given the Queue screen is opened When items are present Then each item displays id, type, size, progress, status (Queued/Uploading/Blocked/Deferred/Failed/Cancelled/Deduplicated), last error (if any), and dependency count Given a user selects an item and taps Cancel When confirmed Then the item and its dependents are marked Cancelled and will not be transmitted; cancellation persists across restart Given a user selects Defer and chooses a time window When saved Then the item remains Paused/Deferred and is not attempted until the window elapses or the user resumes Given a user taps Retry on a Failed item When dependencies and policies permit Then the item is placed at the head of the queue and an attempt begins within 2 seconds
Metered Connection Policy Compliance
Given the device is on a metered network and policy is No sync on metered When the queue is evaluated Then no transfer attempts start; items show Paused: metered network and 0 bytes are uploaded Given network state changes from metered to unmetered When policy permits sync Then paused items transition to Queued and syncing resumes within 5 seconds Given a user taps Retry on metered with No sync on metered enabled When evaluating the request Then the attempt is blocked and the UI indicates Policy blocked: metered network
Automatic Resume and Resumable Transfers on Connectivity Return
Given the device is offline with pending items When connectivity returns Then syncing starts automatically within 5 seconds subject to backoff and policies Given a transfer was interrupted mid-upload When connectivity returns Then the upload resumes using resumable transfers without re-uploading more than 64KB beyond the last acknowledged offset Given connectivity flaps three times within one minute When processing the same item Then the item is transmitted at most once; no duplicate commits occur server-side
Scheduler Coordination During Active Drawing Sessions
Given an active drawing session is editing a resource referenced by queued deltas When the scheduler runs Then those deltas are marked Deferred: active session and are not attempted until the session is idle or ends Given background sync is active during interactive drawing When measuring UI performance Then 95th percentile frame time stays ≤16.7ms on the reference device; sync yields to preserve interactivity Given the editor holds file locks for the active sheet When the queue evaluates I/O Then it does not attempt conflicting reads/writes and no I/O lock errors are logged
Sync Observability & Health Alerts
"As a project admin, I want visibility into sync performance and failures so that I can troubleshoot and ensure smooth delivery."
Description

Instrument client and server with metrics and tracing to monitor DeltaSync performance: bytes saved vs. full sync, completion times, resume rates, failure codes, and per-project health. Expose a lightweight in-app health indicator for users and dashboards/alerts for admins when failure rates spike or integrity checks fail. Ensure logs are privacy-safe and correlate events via sync session IDs. Improves troubleshooting, SLA adherence, and continuous optimization of sync behavior.

Acceptance Criteria
In-App Sync Health Indicator
Given a user is in a project with DeltaSync enabled and the app is online When the sync state changes between Syncing, Healthy, Degraded, Offline, and Error Then the in-app indicator updates within 2 seconds to reflect the correct state And Degraded is shown when the 15-minute rolling p95 completionTime exceeds baseline by ≥50% or the 15-minute failure rate is between 1% and 5% or resume rate is ≥10% And Error is shown when the last attempt ends in retry-exhausted or integrity-check-failed And tapping/clicking the indicator reveals last sync time, last error code, and a Retry action without leaving the workspace And the indicator and its panel contain no PII (names, emails, drawing content)
Admin Alert on Failure Rate Spike
Given alerting is configured for a project When the 15-minute rolling sync failure rate (excluding user-cancelled) exceeds 5% with ≥50 attempts Then a Failure Rate Spike alert is created within 60 seconds And alerts are deduplicated for the same project and cause within a 10-minute suppression window And the alert payload includes projectId/name, environment, affected users count, top 3 failure codes with percentages, and a link to pre-filtered traces by syncSessionId And the alert auto-resolves when the failure rate remains below 2% for 15 consecutive minutes
Integrity Check Failure Escalation
Given the server performs integrity verification on sync completion When an integrity check fails (e.g., hash mismatch, missing/extra tile) Then the session is marked Failed and excluded from Complete metrics And an Integrity Check Failed alert is sent to admins immediately and the in-app indicator shows Error within 2 seconds And the previous good version remains served; the corrupted version is quarantined and not exposed to clients And a retry is queued with exponential backoff starting at 30s, doubling to a max delay of 10m, max 5 attempts, with failure reasons logged and trace-linked
Metrics: Bytes Saved vs Full Sync
Given a DeltaSync session completes (success or failure) When emitting metrics from client and server Then both record bytesTransferred, fullBytesEquivalent, bytesSaved = max(fullBytesEquivalent - bytesTransferred, 0), and percentSaved rounded to 1 decimal And both record completionTimeMs, networkType, appVersion, region, projectId, userIdHash, and syncSessionId And the absolute difference between client and server percentSaved is ≤2 percentage points for 99% of sessions over 24h And metrics include resumed (true/false) and retryCount per session enabling resume rate computation on dashboards And p50/p95 of completionTimeMs and percentSaved are queryable per project over arbitrary 24h windows
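The savings fields above are fully specified, so they can be computed verbatim. A sketch of that calculation (the zero-division guard for an empty full-sync equivalent is an added assumption):

```python
def delta_savings(bytes_transferred, full_bytes_equivalent):
    """Compute the metric fields exactly as specified: bytesSaved is
    clamped at zero (a delta can cost more than a full sync but never
    reports negative savings) and percentSaved is rounded to 1 decimal."""
    saved = max(full_bytes_equivalent - bytes_transferred, 0)
    pct = round(100.0 * saved / full_bytes_equivalent, 1) if full_bytes_equivalent else 0.0
    return {"bytesSaved": saved, "percentSaved": pct}
```

Running the same function on both client and server is what makes the ≤2-percentage-point reconciliation check in the criterion meaningful: any larger divergence points at mismatched byte accounting, not mismatched math.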
Trace Correlation via syncSessionId
Given a new DeltaSync session starts on the client When the client generates a syncSessionId Then the same syncSessionId is present in all related client logs, server logs, spans, and metrics for that session And a trace viewer can retrieve a complete cross-service chain for 99% of 1000 sampled sessions And syncSessionId is UUIDv4, not reused, and collision probability < 1e-6 across 10 million sessions
Privacy-Safe Logging and Redaction
Given logs and traces are captured across client and server When events include fields that may contain PII (emails, full names, free-text comments, drawing paths/content) Then those fields are redacted or hashed before storage And an automated scanner over 10,000 test events finds 0 PII occurrences in stored logs And only syncSessionId, projectId, userIdHash, environment, and coarse device/app metadata are retained for correlation And log retention is capped at 30 days with verified secure deletion thereafter
Per-Project Sync Health Dashboard
Given an admin opens the Sync Health dashboard When selecting a project and a time range (default last 24h) Then the dashboard loads within 2 seconds and displays attempts, success rate, failure rate, top failure codes, percentSaved p50/p95, completionTime p50/p95, resume rate, and active users And filters for environment and region are available and applied within 1 second And exporting the visible metrics to CSV produces a file within 5 seconds that matches on-screen values And totals and rates shown are within 1% of backend counters over the selected window And clicking any metric opens a pre-filtered log/trace view scoped by projectId and syncSessionId

Context Packs

One-tap preflight bundles the exact sheets, zones, open comments, and assignments you’ll need for a site visit. Role-aware and size-estimated, it caches everything securely for offline use so nothing critical is missing when signal drops.

Requirements

One-Tap Pack Builder
"As a project lead on my way to a site visit, I want to generate a complete pack with one tap so that I don’t miss critical context when I go offline."
Description

A single action generates a Context Pack for a selected site visit or scope, aggregating exactly the needed sheets, zones, referenced details, linked views, RFIs, open comments, and assignments. The builder resolves cross-references so dependent content is included for offline completeness, and presents a preflight summary with item counts, estimated size, last-updated timestamps, and override toggles to add/remove items. Pack configurations can be saved as templates for reuse per project. Provides REST endpoints for web and mobile, integrates with Drawings, Comments, Assignments, and Files/CDN services, and emits analytics events for generation, overrides, and cancellations.
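Cross-reference resolution is, at its core, a transitive closure over the reference graph with per-item deduplication. A minimal sketch, assuming a hypothetical `refs` mapping from an item ID to the IDs it references (callouts, linked views, RFI attachments):

```python
def resolve_pack(roots, refs):
    """Return the full set of item IDs a Context Pack must include:
    the selected roots plus everything they transitively reference,
    with each item included exactly once (cycle-safe)."""
    pack, stack = set(), list(roots)
    while stack:
        item = stack.pop()
        if item in pack:
            continue  # already included; also breaks reference cycles
        pack.add(item)
        stack.extend(refs.get(item, ()))
    return pack
```

The real builder would additionally filter this set by the user's access rights and report unresolvable references as "omitted" with a reason code, per the criteria below.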

Acceptance Criteria
One-Tap Pack Generation for Selected Site Visit
Given an authenticated project member with permission to build packs and a selected site visit or scope When the user taps or calls the One-Tap "Build Context Pack" action via web or mobile Then the system aggregates exactly the items linked to that visit/scope: sheets, zones, referenced details, linked views, RFIs, open comments, and assignments within the user's access rights And the aggregation excludes duplicates across sources and maintains source-of-truth IDs for each item And the response returns a preflight summary with a new pack_id and HTTP 200/201 And the preflight counts per item type equal the number of unique items aggregated
Cross-Reference Resolution for Offline Completeness
Given the aggregation includes sheets with callouts and links When the pack is built Then all callout-referenced details and linked views are included in the pack without duplication And RFIs that reference drawings/files are resolved so that the referenced drawings/files are included And any reference that cannot be resolved due to permissions or missing assets is listed in preflight as "omitted" with a reason code, and the build does not fail And the preflight summary displays the final resolved counts after dependency inclusion
Preflight Summary, Estimates, and Override Toggles
Given a pack preflight has been generated When the user reviews the preflight Then the UI and API present counts by item type, an overall size estimate in MB, and last-updated timestamps for each item in ISO 8601 And the user can toggle individual items or entire groups on/off to add or remove them from the pack And toggling updates counts and size estimate within the same interaction and is reflected by the API in the preflight payload And proceeding is blocked with a clear error if the estimated size exceeds the configured offline cache limit, indicating the amount over the limit
Offline Caching and Security of Context Pack
Given the user confirms the preflight to finalize the pack When the device begins caching the pack Then all included drawings, comments, assignments, RFIs, and referenced files are available offline, verified by accessing the pack in airplane mode And cached content is encrypted at rest and inaccessible to other apps; clearing app data, signing out, or deleting the pack removes the cached content And if the user's role or permissions change to remove access to any included item, the next sync removes that item from the cache and updates the pack manifest
Pack Templates: Save, Reuse, and Permissions
Given a user has configured a preflight selection When the user saves it as a template with a unique name within the project Then the template is stored project-scoped with creator and visibility metadata And applying the template on the same project auto-selects items and resolves dependencies against the latest available revisions And only users with template-manage permissions can edit or delete the template; project members with view permissions can apply it And editing a template updates its version and audit trail without altering packs previously built from older versions
REST API Endpoints, Auth, and Idempotency
Given web and mobile clients integrate via REST When clients call the API Then the service exposes endpoints to create, read, update preflight overrides, delete packs, and manage templates, all scoped by project And all endpoints require authenticated, project-scoped access; unauthorized requests return 401 and forbidden requests return 403 And POST create operations accept an Idempotency-Key header; retries with the same key within the idempotency window return the same pack_id without creating duplicates And downstream links to Drawings, Comments, Assignments, and Files/CDN are returned as canonical IDs/URLs resolvable by those services; transient integration failures surface as 503 with a correlation_id
Analytics Events for Generation, Overrides, and Cancellations
Given analytics is enabled for the project When a pack generation is started, completed, cancelled, or an override is toggled Then an event is emitted for each action with payload including event_name, pack_id, project_id, user_id, source (web|mobile), timestamp (ISO 8601), counts_by_type, size_estimate_mb, overrides_added, overrides_removed, and duration_ms (for completed) And events include a stable event_id to enable downstream de-duplication and are retried on transient failures without blocking the user flow And events are observable via the analytics sink or test harness for verification within the pipeline's expected delivery window
Role-Aware Content Filter
"As a consultant with limited access, I want the pack to only include what I’m authorized to see so that sensitive information remains protected."
Description

Pack contents are tailored to the user’s role and permissions, automatically including or excluding sheets, layers, markups, comments, and assignments based on access control rules and project-specific overrides. Restricted layers and client-private threads are redacted for non-authorized roles while preserving navigability. The filter enforces identical constraints offline via encrypted manifests, logs access for audit, and supports admin-configurable role profiles. Integrates with IAM/permissions, Markup Layer service, Comments privacy settings, and device policy enforcement.

Acceptance Criteria
Architect Pack Filtering with Project Overrides
Given a user with role Architect and project-specific overrides denying "MEP_Layer" and granting "Site_Notes" When the user preflights a Context Pack for Project Alpha Then only sheets, layers, markups, comments, and assignments permitted by IAM entitlements and overrides are included And items denied are excluded or redacted per policy And client-private threads are excluded for non-authorized roles And counts of included items match the filter report And direct-link requests to excluded items return 403
Client Navigation with Redactions Preserving Continuity
Given a user with role Client without access to "Structural_Layer" and "Private_Thread_42" When navigating the Context Pack Then references to hidden content display redaction placeholders with a reason code And comment numbering and thread continuity are preserved And tapping a redacted item shows an access message without revealing content And spatial and breadcrumb navigation remain intact
Offline Enforcement via Encrypted Manifest
Given a cached encrypted manifest for Project Alpha generated at time T When the device is offline and the user opens the Context Pack Then access is limited strictly to items listed in the manifest And attempts to access non-manifest or newly revoked items are blocked And cached binaries decrypt only in-memory using device keystore And upon reconnection the manifest refreshes and removes newly revoked items before rendering And offline access attempts are queued for audit upload
Audit Logging of Allowed and Denied Access
Given audit logging is enabled When the user views, downloads, or attempts to open any sheet, layer, markup, comment, or assignment Then a single structured audit event is recorded per action with userId, roleId, projectId, itemId, action, result (allowed|denied), timestamp, and deviceId And offline events queue and upload within 60s of reconnect And no content bodies or sensitive text are logged beyond identifiers And duplicate events within 2s are de-duplicated
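The 2-second de-duplication rule can be sketched as below. The event fields are reduced to the identifiers named in the criterion; the class and field names are illustrative assumptions.

```python
from dataclasses import dataclass

DEDUP_WINDOW_S = 2.0  # duplicates within 2 s are dropped, per the criterion

@dataclass
class AuditEvent:
    user_id: str
    item_id: str
    action: str
    result: str       # "allowed" | "denied"
    timestamp: float  # epoch seconds

class AuditLog:
    """Record one structured event per action; identical events arriving
    within the 2-second window are de-duplicated."""
    def __init__(self) -> None:
        self.events: list[AuditEvent] = []
        self._last_seen: dict[tuple, float] = {}

    def record(self, ev: AuditEvent) -> bool:
        key = (ev.user_id, ev.item_id, ev.action, ev.result)
        last = self._last_seen.get(key)
        if last is not None and ev.timestamp - last < DEDUP_WINDOW_S:
            return False  # duplicate within window: dropped
        self._last_seen[key] = ev.timestamp
        self.events.append(ev)
        return True
```

Only identifiers are stored, matching the rule that no content bodies or sensitive text appear in the log.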
Admin Role Profile Update Propagates to Pack
Given an admin updates the Contractor role profile to hide "Client_Comments" and show "Zone_B" When a Contractor user preflights a Context Pack after the change Then the latest role profile is fetched from IAM And the pack contents reflect the updated visibility And any previously cached manifest for that user and project is invalidated And in conflicts project-specific overrides take precedence over the global role profile
Respect Markup Layer and Comment Privacy Integrations
Given a markup layer is set to Project Members Only and a comment thread is set to Client-Private When a non-authorized user preflights or navigates a Context Pack Then the restricted layer is excluded or redacted And the client-private thread is excluded for non-clients And a subsequent permission change takes effect within 5 minutes or on next preflight whichever is sooner And direct API calls to excluded resources return 403 with audit logged
Device Policy Enforcement for Secure Caching and Wipe
Given device policy requires secure storage and the device lacks a hardware-backed keystore or is rooted When the user attempts to preflight a Context Pack Then the preflight is blocked with a policy error and no content is cached And on compliant devices cached content and manifests are stored encrypted at rest with keys protected by the OS keystore And remote wipe invalidates tokens and purges cached content within 2 minutes of command receipt; if the device is offline and cannot receive the command, cached content is still purged via time-bomb expiry
Zone & Sheet Resolver
"As an architect walking a specific floor zone, I want the pack scoped to those areas so that onsite navigation is fast and focused."
Description

Automatically scopes the pack to exact zones and sheet subsets relevant to a visit by parsing sheet metadata, zone polygons, tags, and location context. Includes dependent sheets (key plans, legends, general notes) and any detail callouts referenced from within the selected zones to avoid missing cross-linked information offline. Provides a review screen to adjust included zones/sheets and highlights coverage on thumbnails. Integrates with BIM/metadata parser, Drawings indexer, and a graph of sheet/detail references.

Acceptance Criteria
Auto-scope Sheets by Selected Zones and Location Context
Given a project with sheets that have zone polygons and tags, and a site visit specifies selected zones and optional location bounds When the resolver runs Then it includes only sheets whose zone polygons intersect the selected zones or location bounds by an area > 0 And it excludes sheets with no qualifying intersection And it filters by visit tags so only sheets matching at least one required tag are included And it records the matched zones per included sheet for downstream display
Include Dependent Sheets (Key Plans, Legends, General Notes)
Given an initial set of included sheets When dependency resolution runs Then any key plan, legend, and general notes sheets referenced by the included set are added exactly once And no duplicates exist in the final manifest And unreferenced global notes sheets are excluded unless explicitly toggled on in review And dependency additions are labeled with dependency type for each sheet
Resolve Detail Callouts Cross-References Within Included Zones
Given detail callouts located within the selected zones that reference views on other sheets When cross-references are resolved Then all target sheets and views referenced by those callouts are included And only callouts whose marker center falls inside the selected zones are considered And transitive references are followed until closure or a maximum depth of 3, whichever comes first And circular references are detected and resolved without duplication
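The transitive resolution rule above (closure or depth 3, cycle-safe, no duplicates) can be sketched as a breadth-first walk over the sheet reference graph. The graph shape is an assumption for illustration: a mapping from sheet ID to the sheet IDs its callouts reference.

```python
MAX_DEPTH = 3  # follow transitive references until closure or depth 3

def resolve_callout_targets(start_sheets, reference_graph):
    """Breadth-first resolution of callout cross-references.

    reference_graph maps sheet_id -> list of referenced sheet_ids.
    Returns the set of referenced sheets (excluding the starting set),
    each included exactly once even if the graph contains cycles.
    """
    included = set(start_sheets)
    frontier = list(start_sheets)
    for _ in range(MAX_DEPTH):
        next_frontier = []
        for sheet in frontier:
            for target in reference_graph.get(sheet, []):
                if target not in included:  # guards against cycles and dups
                    included.add(target)
                    next_frontier.append(target)
        if not next_frontier:
            break  # closure reached before the depth limit
        frontier = next_frontier
    return included - set(start_sheets)
```

Because a sheet is added at most once, circular references terminate naturally rather than recursing.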
Review Screen Adjustments Recompute Scope and Dependencies
Given the review screen with the computed sheet and zone manifest When a user adds or removes a zone or toggles a sheet include/exclude Then the manifest and dependency set are recomputed within 500 ms And counts of direct sheets, dependent sheets, and callout targets update immediately And excluded dependent sheets are shown with a warning that related references may be unresolved And the user can undo the last change and restore the prior manifest
Thumbnail Coverage Highlighting Accuracy
Given sheet thumbnails with vector zone overlays When zone coverage is rendered Then the highlighted polygons match the underlying zone geometries within a tolerance of ±2 pixels at 1x thumbnail scale And included callout markers within selected zones are visually indicated And a legend shows color mapping for direct vs dependent coverage
Integration With BIM Parser, Drawings Indexer, and Reference Graph
Given available integrations to the BIM/metadata parser, drawings indexer, and reference graph When the resolver executes Then it fetches sheet metadata, zone geometries, and reference edges via the respective services And on any integration failure it surfaces a user-visible error banner that names the failing service And it uses the last successful cached dataset if available and marks the manifest as "from cache" And it blocks finalization if required data for scoping is missing (zones or references)
Performance and Deterministic Manifest Ordering
Given a project of up to 800 sheets, 2,000 callouts, and 300 zones on baseline hardware When the resolver computes scope Then it completes in 2.0 seconds or less at the 95th percentile And the resulting manifest is deterministically ordered by discipline, sheet number, then dependency type And repeated runs on identical inputs produce identical manifests and coverage sets
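The deterministic ordering requirement reduces to a stable three-level sort key. The dependency-type ranking below (direct before dependent before callout targets) is an assumed convention; the criterion only fixes the three sort levels.

```python
def manifest_sort_key(entry: dict) -> tuple:
    """Order by discipline, then sheet number, then dependency type."""
    dep_rank = {"direct": 0, "dependent": 1, "callout-target": 2}
    return (
        entry["discipline"],
        entry["sheet_number"],
        dep_rank.get(entry["dependency_type"], 99),  # unknown types last
    )

def order_manifest(entries: list[dict]) -> list[dict]:
    # sorted() is stable and pure, so identical inputs always yield
    # identical manifests, as the criterion requires.
    return sorted(entries, key=manifest_sort_key)
```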
Open Comments & Assignments Sync
"As a site lead, I want to capture and update comments and assignments offline so that decisions and follow-ups aren’t delayed by poor signal."
Description

Bundles all open comments, threads, and assignments linked to included sheets/zones and enables full offline read/write. Local changes are queued with timestamps, support quick actions (resolve, reassign, add photo note), and merge on reconnect with conflict detection and human-readable resolution prompts. Thumbnails for attachments are cached; full-res files are fetched on demand unless flagged as critical. Integrates with Comments and Tasks services, the sync engine, and notifications to update watchers upon successful merges.

Acceptance Criteria
Offline Bundle: Open Comments and Assignments Availability
Given a context pack contains sheets and zones with open comments and assignments at snapshot time T0 When the pack is downloaded and the device is offline Then all open comments, their threads, and assignments linked to those sheets and zones are readable offline and the offline counts equal the server snapshot at T0 Given a user opens an included thread offline When they expand the thread Then all replies up to T0 are visible with author, timestamp, and status badges Given offline mode When the user searches or filters comments by status, assignee, or zone Then results return in under 500 ms and match the offline dataset Given attachments exist in comments When viewing offline Then thumbnails render for 100% of attachments without broken placeholders
Offline Actions: Resolve, Reassign, Add Photo Note
Given the device is offline and the user has permission to modify a comment or assignment When they mark a comment resolved, reassign an assignment, or add a photo note Then the change is applied locally, displayed as Pending Sync, and added to a durable queue with ISO-8601 timestamp, author ID, and operation type Given pending offline actions exist When the app is force-quit and relaunched offline Then the pending actions persist and remain visible with their original metadata Given a photo note is added offline When saved Then a local thumbnail (<= 200 KB) and EXIF timestamp are stored and associated with the comment; full-resolution file is deferred unless marked critical Given an action requires unavailable data When attempted offline Then the UI blocks the action with a clear offline-required message and no queue entry is created
Reconnect Merge: Conflict Detection and Resolution Prompts
Given there are pending local changes and the same comment or assignment was modified remotely since T0 When connectivity is restored and sync starts Then conflicts are detected per entity and field (status, assignee, body, attachments) and the user is shown a human-readable prompt with local vs remote values, authors, and timestamps Given a conflict prompt is shown When the user chooses Keep Local, Keep Remote, or Merge Text (for comment body) Then the decision is applied, the audit trail records the resolution, and no data is silently lost Given N=50 pending changes with K=10 conflicts When syncing on a good connection Then non-conflicting changes are committed within 10 seconds and conflicts are batched into one review session Given conflicts are resolved When sync completes Then the local queue is emptied for committed items and any failures are retried with exponential backoff up to 5 times
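Per-entity, per-field conflict detection as described above amounts to a three-way comparison against the T0 snapshot: a conflict exists only when local and remote both changed the same field to different values. The dict-based entity shape is an assumption for illustration.

```python
CONFLICT_FIELDS = ("status", "assignee", "body", "attachments")

def detect_conflicts(base: dict, local: dict, remote: dict) -> list[dict]:
    """Three-way, field-level conflict detection for one entity.

    base is the snapshot at pack download time (T0). Returns the data
    needed for a human-readable prompt: field, local vs remote values.
    """
    conflicts = []
    for field in CONFLICT_FIELDS:
        local_changed = local.get(field) != base.get(field)
        remote_changed = remote.get(field) != base.get(field)
        if local_changed and remote_changed and local.get(field) != remote.get(field):
            conflicts.append({
                "field": field,
                "local": local.get(field),
                "remote": remote.get(field),
            })
    return conflicts
```

A field changed on only one side is not a conflict and can be committed without a prompt, which is what lets non-conflicting changes in a batch sync immediately.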
Attachment Handling: Cached Thumbnails and Critical Full-Res
Given attachments in included threads and assignments When the pack is downloaded Then thumbnails for all attachments are cached offline and occupy no more than 50 MB total by default Given an attachment is flagged Critical When the pack is downloaded Then the full-resolution file is pre-fetched and available offline Given an attachment is not Critical and the device is offline When the user taps to open full-resolution Then a message indicates full-resolution unavailable offline and the thumbnail remains viewable Given the device is online When the user opens a non-critical attachment Then the full-resolution file is fetched on demand and cached for the session, showing a progress indicator and succeeding within 3 seconds on a 10 Mbps connection
Watcher Updates: Post-Merge Notifications
Given offline changes are successfully merged to the server When commits are acknowledged Then watchers of affected comments and assignments receive a single notification per entity within 30 seconds containing action summary, author, and deep link to the sheet or zone Given multiple local changes affect the same thread during one sync window When merged Then notifications are deduplicated into one aggregated update per watcher Given a merge fails When retry attempts are exhausted Then no watcher notifications are sent and the user is shown an error with a retry option Given the user is the only watcher When the user performs the change Then no external notification is sent to themselves unless user settings explicitly allow self-notify
Sync Robustness: Queue Persistence, Idempotency, and Retry
Given the device loses connectivity mid-sync When connectivity returns Then the sync resumes without duplicating changes and maintains original timestamps and ordering Given identical offline actions are applied twice due to user retry When syncing Then the server processes them idempotently resulting in a single final state Given the app experiences an OS kill during offline use When relaunched Then the offline queue, cached data, and unsent attachments remain intact and checksums verify data integrity Given an API dependency returns a 5xx error When syncing Then the client backs off exponentially (1s, 2s, 4s, 8s, 16s) up to 5 attempts and surfaces a non-blocking warning Given the offline queue contains at most 200 operations When syncing on a 10 Mbps connection with typical latency Then end-to-end sync completes within 30 seconds excluding user conflict resolution time
Offline Cache & Secure Storage
"As a firm owner, I want offline packs to be securely cached so that our project data stays protected even if a device is lost."
Description

All pack assets are cached for offline use with at-rest encryption using the platform keystore (e.g., AES-256), remote wipe on logout or device revoke, biometric-gated access, and time-based pack expiry. Viewers support fast pan/zoom for vector PDFs, markups, and images without network access. Integrity checks (hash verification) guard against corruption, and sensitive PII fields follow redaction rules per role. Complies with regional data residency settings and logs security events. Integrates with Secure Storage, Viewer, and Device Management modules.

Acceptance Criteria
At-Rest Encryption via Platform Keystore
Given a device with PlanPulse installed and a Context Pack downloaded When the pack assets are written to local storage Then each asset is encrypted at rest using AES-256 with keys stored in the platform keystore, and no plaintext asset bytes are present on disk And attempts to read cached files via a file explorer or device debugging tools yield encrypted content And decryption occurs only within the app process using keystore-held keys and never writes decrypted bytes to persistent storage
Biometric-Gated Offline Access
Given a device with biometrics or system passcode enabled and cached packs available When the user attempts to open any cached pack while offline Then a biometric (or OS-governed passcode) prompt is required before any asset decryption begins And after 5 consecutive failed attempts the pack access is locked for 5 minutes And no decryption keys are released to the app until authentication succeeds And audit metadata records success/failure locally for upload when online
Remote Wipe on Logout or Device Revoke
Given cached packs exist on the device When the user logs out Then all cached assets, derived tiles, thumbnails, and encryption keys are securely erased within 5 seconds and become unrecoverable When the device receives a revoke signal from Device Management while offline or online Then the same secure erase completes within 60 seconds of the next app foreground or background heartbeat And after wipe, the viewer shows no packs offline and launching a pack is blocked
Time-Based Pack Expiry (Offline-Enforced)
Given a downloaded pack with an expiry timestamp T When the device local time passes T while offline Then the pack becomes immediately inaccessible and cannot be opened And any decrypted material in memory is cleared and cached decrypted tiles (if any) are deleted within 10 minutes And upon next connectivity the client requests renewal; if renewed, a new expiry T2 is applied; if not, the pack remains blocked
Hash-Based Integrity Verification and Recovery
Given each cached asset has a recorded SHA-256 hash When an asset is initially downloaded Then the computed hash matches the recorded hash before it is accepted into the cache When an asset is opened offline Then its hash is verified before rendering And on mismatch the asset is blocked from viewing, marked Corrupted, and a repair download is queued for next connectivity while unaffected assets remain viewable
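The verification step itself is a direct hash comparison, sketched below; the function name is illustrative.

```python
import hashlib

def verify_asset(data: bytes, recorded_sha256: str) -> bool:
    """Compare an asset's SHA-256 against the hash recorded in the
    manifest. A mismatch blocks rendering, marks the asset Corrupted,
    and queues a repair download for the next connectivity window."""
    return hashlib.sha256(data).hexdigest() == recorded_sha256
```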
Offline Viewer Performance for PDFs, Markups, and Images
Given a pack containing up to a 100-page vector PDF (≤150 MB), 50 markup layers, and 40 images When opened offline on a representative mid-tier device (≥4 CPU cores, ≥4 GB RAM) Then first page renders in ≤2 seconds at the 95th percentile And pan/zoom interactions maintain ≤100 ms input-to-frame latency at ≥45 FPS at the 95th percentile And vector fidelity is preserved up to 800% zoom without rasterization artifacts And toggling markup layer visibility completes in ≤200 ms at the 95th percentile And no network calls occur during viewing
Role-Based PII Redaction in Cached Assets
Given a user role that is not permitted to view designated PII fields When a pack is prepared and cached for that user Then PII fields are redacted per policy before encryption and storage And the cached bytes contain only the redacted variants with no embedded originals or reversible layers And offline viewing preserves redactions; search/copy does not reveal PII And a permitted role’s pack contains unredacted fields, while the unauthorized role’s device never stores unredacted PII
Size Estimation & Storage Budgeting
"As a project lead with limited device space, I want an accurate size estimate and quality options so that I can fit the pack on my phone without sacrificing essentials."
Description

Pre-download estimation computes total pack size using asset manifests, compression ratios, and markup deltas, then checks device free space and suggests quality presets (vector-only, medium-res rasters, exclude archived sheets). Users can set a storage budget per pack, see per-category size contributions, and monitor progress with remaining time and per-asset status. Auto-clean rules remove oldest or fully resolved packs, with manual clear options. Integrates with compression service, asset manifest, and device storage APIs.

Acceptance Criteria
Accurate Pack Size Estimation
Given a context pack with assets listed in the manifest (vectors, rasters, archived sheets, markup deltas) When the size estimation runs using current quality preset Then the computed total size is within ±10% of the actual downloaded size for that preset Given per-category estimation is enabled When estimation completes Then it provides a breakdown by category (Vectors, Rasters, Markup Deltas, Archived Sheets) whose sum equals the displayed total within ±1% Given the compression service is available When estimation runs Then the app requests compression ratios for rasters using the selected preset and applies the returned values to size calculations Given a manifest entry is missing required size metadata When estimation runs Then the asset is flagged as "Unknown size" and the UI displays a warning badge without blocking other calculations
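The per-category estimate can be sketched as below. The manifest entry shape and the compression service being reduced to a single raster ratio are assumptions for illustration; entries missing size metadata are flagged rather than blocking the estimate, per the criterion.

```python
def estimate_pack_size(manifest: list[dict], raster_ratio: float) -> dict:
    """Per-category size estimate in MB.

    Manifest entries carry 'category' and raw 'size_mb'; rasters are
    scaled by the compression ratio for the selected preset (here a plain
    float argument standing in for the compression service response).
    """
    categories = ("Vectors", "Rasters", "Markup Deltas", "Archived Sheets")
    breakdown = {c: 0.0 for c in categories}
    unknown = []  # entries with missing size metadata get a warning badge
    for entry in manifest:
        size = entry.get("size_mb")
        if size is None:
            unknown.append(entry.get("id"))
            continue
        if entry["category"] == "Rasters":
            size *= raster_ratio
        breakdown[entry["category"]] += size
    return {"breakdown": breakdown,
            "total_mb": sum(breakdown.values()),
            "unknown": unknown}
```

Because the total is the sum of the category values, the breakdown trivially equals the displayed total, satisfying the ±1% consistency check.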
Free Space Check and Pre-download Gate
Given device free space is retrieved from the storage API When the user taps "Start Download" Then the app blocks download if (Estimated Size + 5% overhead) > Free Space and displays the shortage in MB Given sufficient free space When the user taps "Start Download" Then the download begins within 1 second and no blocking modal is shown Given the user changes the quality preset or excludes archived sheets When the new estimate is computed Then the estimate updates within 500 ms and the gate re-evaluates with the new size
Role-aware Quality Preset Suggestions
Given the user’s role is Field Reviewer and the estimate exceeds available free space When suggestions are shown Then the first suggestion is "Vector-only + Medium raster thumbnails" and it reduces the estimate by at least 30% Given the user’s role is Design Lead When suggestions are shown Then available options include "High-res rasters", "Medium-res rasters", and "Vector-only" with "Exclude archived sheets" toggle, each selectable Given any suggested preset is selected When applied Then per-category estimates recalculate and display within 500 ms and the overall estimate reflects the change
User-defined Storage Budget Enforcement
Given the user sets a storage budget B (in MB) for the pack When the current estimate exceeds B Then the UI shows the over-budget amount (in MB and %) and suggests up to 3 presets that bring the estimate ≤ B Given the estimate is ≤ B When the user taps "Start Download" Then the download is allowed without additional prompts Given the estimate is > B When the user taps "Start Download" Then the app shows an override confirmation with explicit overage in MB; download only proceeds if the user confirms Given the download completes When actual on-disk size is computed Then the app displays actual size and flags "Budget overrun" if actual > B by more than 10%
Per-category Size Contributions and Toggles
Given estimation completes When the user views the pack details Then the UI shows size by category: Vectors, Rasters, Markup Deltas, Archived Sheets, each with MB and percentage of total Given the user toggles "Exclude archived sheets" When toggled off Then the total estimate decreases by the archived sheets MB and the UI updates within 500 ms Given the user expands a category When expanded Then the top 10 largest items in that category display with individual estimated sizes
Download Progress and Per-asset Status with Time Remaining
Given a pack download is in progress When the user opens the progress view Then the UI shows overall % complete, estimated time remaining updated at least every 2 seconds, and current throughput (MB/s) Given per-asset tracking is enabled When assets transition state Then each asset displays one of: Queued, Downloading, Completed, Failed, Skipped; failures show last error and retry count Given network connectivity is lost for >10 seconds When the download is paused Then the UI shows "Offline - waiting to resume" and download resumes automatically within 5 seconds after connectivity returns
Auto-clean Rules and Manual Clear
Given auto-clean is enabled with policy "Remove oldest or fully resolved packs first" When device free space drops below 10% or an auto-clean event is triggered Then the app deletes least-recently-opened packs and fully resolved packs first until free space ≥ 15% or the policy limit is reached, and shows a summary of space freed Given a pack is pinned by the user When auto-clean runs Then the pinned pack is never removed by auto-clean Given the user invokes Manual Clear on a pack and confirms When the action completes Then the pack’s cached data is removed, the freed space (in MB) is displayed within ±5% accuracy, and the pack status changes to "Not downloaded"
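The eviction policy above can be sketched as a sort-then-sweep. This is a simplified model: the pack shape is assumed, and the percent-of-storage each pack frees is taken as a precomputed field rather than derived from device storage APIs.

```python
def auto_clean(packs: list[dict], free_pct: float, target_pct: float = 15.0) -> list[str]:
    """Remove fully resolved packs first, then least-recently-opened,
    never pinned packs, until projected free space reaches the target."""
    removed = []
    candidates = [p for p in packs if not p.get("pinned")]  # pinned: never
    # Resolved packs sort first; ties broken by oldest last-opened time.
    candidates.sort(key=lambda p: (not p["resolved"], p["last_opened"]))
    for pack in candidates:
        if free_pct >= target_pct:
            break  # policy goal reached; stop deleting
        free_pct += pack["pct"]  # space this pack would free
        removed.append(pack["id"])
    return removed
```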
Resumable Download & Delta Refresh
"As a field architect moving through spotty coverage, I want downloads to resume and only fetch changes so that I don’t waste time or data."
Description

A background download manager performs resumable, chunked transfers with retry/backoff, network and power awareness, and partial usability of the pack while remaining items continue downloading. After initial download, delta refresh fetches only changed assets and flags stale items with a one-tap refresh. Supports manual refresh and scheduled quiet-time auto-updates. Integrates with CDN, sync engine, and service workers/mobile background tasks, and captures telemetry on throughput, error rates, and completion times for reliability tuning.

Acceptance Criteria
Resumable chunked download on intermittent network
Given a 500 MB Context Pack with 100 assets and connectivity drops after 40% completion When connectivity resumes within 15 minutes Then the download resumes from the last fully persisted chunk without restarting and completes with no duplicate byte ranges Given CDN supports HTTP Range requests When requesting chunks Then each chunk is independently verified by checksum and only failed chunks are retried Given the app is force-closed mid-download When the app is relaunched Then persisted progress is restored and remaining chunks resume within 5 seconds Given the device switches from Wi‑Fi to cellular When the IP/network changes mid-transfer Then the transfer continues via resume with at most one transient retry
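The resume-from-last-persisted-chunk behavior can be sketched without a network by letting a byte string stand in for the CDN object. In a real client each `fetch_range` call would be an HTTP Range request (`Range: bytes=offset-...`); the tiny chunk size and the simulated drop are illustration-only assumptions.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real chunks would be MB-sized

def fetch_range(asset: bytes, start: int, size: int):
    """Stand-in for an HTTP Range request; returns the chunk plus the
    server-side checksum used to verify it independently."""
    chunk = asset[start:start + size]
    return chunk, hashlib.sha256(chunk).hexdigest()

def download(asset: bytes, persisted: bytearray, fail_after=None) -> bool:
    """Resumable chunked transfer: resume from the last fully persisted
    chunk, verify each chunk, never re-download a completed byte range.
    Returns True when the transfer completes."""
    offset = len(persisted)  # resume point
    while offset < len(asset):
        if fail_after is not None and offset >= fail_after:
            return False  # simulated connectivity drop mid-transfer
        chunk, checksum = fetch_range(asset, offset, CHUNK_SIZE)
        if hashlib.sha256(chunk).hexdigest() != checksum:
            continue  # corrupt chunk: retry only this range
        persisted.extend(chunk)
        offset = len(persisted)
    return True
```

Calling `download` again after a drop picks up from the persisted length, so no duplicate byte ranges are transferred.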
Adaptive retry, exponential backoff, and power/network awareness
Given a chunk request fails due to HTTP 429/503 or timeout When retrying Then retries use exponential backoff with jitter (initial 1–2s, max 60s), honor Retry-After headers, and stop after 7 attempts with a surfaced error Given battery is below 15% and the device is not charging When a background download is active Then it pauses within 10 seconds and auto-resumes when charging or when battery is ≥ 25% Given the user preference is "Download on Wi‑Fi only" When the current connection is metered/cellular Then downloads defer and automatically resume on an unmetered network
Partial pack usability during ongoing downloads
Given at least one sheet and its dependent assets are downloaded When the user opens the pack offline while other items are still downloading Then downloaded items render within 2 seconds, pending items display placeholders with size/ETA, and the app remains responsive Given a user attempts to open an undownloaded item When the item is pending Then the UI shows a non-blocking "Pending download" message with an inline retry and no crash occurs Given comments and assignments linked to downloaded sheets When opened offline Then they are viewable and actionable per role, with write operations queued if a dependency is missing
Delta refresh downloads only changed assets
Given an initial pack has been downloaded When a delta refresh is triggered Then the client fetches a manifest and revalidates via ETag/Last-Modified, downloading only changed or new assets and receiving 304 for unchanged Given 100 assets with 5 changed When delta refresh completes Then total bytes transferred is ≤ the sum of the 5 changed assets + 10% protocol overhead, and completion time is ≤ 30% of a full download under the same network Given the server removes an asset from the pack When delta refresh runs Then the removed asset is purged locally within the same operation
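The manifest comparison behind delta refresh can be sketched as below, classifying assets the way conditional GETs would: an unchanged ETag corresponds to a 304, a changed or new ETag triggers a download, and assets absent from the server manifest are purged locally.

```python
def delta_refresh(local: dict, server_manifest: dict) -> dict:
    """Plan a delta refresh from cached ETags vs. the server manifest.

    local and server_manifest both map asset_id -> etag (shape assumed
    for illustration).
    """
    to_download = [a for a, etag in server_manifest.items()
                   if local.get(a) != etag]            # changed or new
    unchanged = [a for a, etag in server_manifest.items()
                 if local.get(a) == etag]              # would get 304
    to_purge = [a for a in local if a not in server_manifest]
    return {"download": sorted(to_download),
            "unchanged": sorted(unchanged),
            "purge": sorted(to_purge)}
```

Only the `download` list incurs transfer bytes, which is what bounds the delta refresh to the changed assets plus protocol overhead.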
Stale flagging with one-tap refresh
Given the server indicates newer versions or TTL expiry for specific assets When the user views the pack Then those assets are labeled "Stale" and a pack-level "Refresh" CTA is visible within 1 second of manifest fetch When the user taps "Refresh" Then only stale assets are fetched, stale labels clear upon success, and version timestamps update Given some assets fail to refresh When the operation completes Then the UI lists failed items with retry controls while successfully refreshed items remain updated
Scheduled quiet-time auto-updates and manual refresh
Given the user configures quiet hours (e.g., 02:00–05:00) with constraints "Wi‑Fi + charging only" When within quiet hours and constraints are met Then packs with updates auto-refresh in the background and a completion notification is posted on success or a single aggregated failure message on error Given it is outside quiet hours or constraints are not met When updates are available Then auto-refresh defers, while manual refresh is always available and starts immediately upon user action Given a manual refresh is running when quiet hours begin When the scheduler triggers Then no duplicate job is started and the existing job continues to completion
Background continuation, CDN compliance, and telemetry capture
Given the app is sent to background or the screen locks When downloads are in progress Then service workers/mobile background tasks continue the transfer up to OS limits, and if killed by the OS, the job resumes on next opportunity without losing completed chunks Given CDN endpoints issue redirects or require signed URLs and support Range requests When requesting assets Then the client follows redirects, signs/refreshes tokens as needed, and validates partial content (206) semantics per RFC 7233 Given downloads and refreshes execute When telemetry is captured Then throughput (bytes/sec), error counts/rates by code, retry counts, and completion times (p50/p95) are recorded with correlation IDs, no PII is collected, and events are available in analytics within 5 minutes

PinPoint Capture

Snap photos, record voice notes, or sketch and pin them to precise sheet coordinates—even without service. Auto-tags by room/zone and discipline, preserving context so drafters know exactly what to fix, reducing back-and-forth and rework.

Requirements

Offline-First Capture & Sync
"As a site architect, I want to capture and pin issues while offline so that I can document accurately in the field and have everything sync automatically when I’m back online."
Description

Enable capturing photos, voice notes, and sketches and creating pins without network connectivity. Store data locally with background sync when connectivity resumes, including retry logic, conflict resolution, and progress indicators. Encrypt local media, compress uploads, and enforce size/quality settings to balance fidelity with performance. Provide user feedback for queued items, partial syncs, and failures, ensuring no data loss and a seamless offline-to-online experience.

Acceptance Criteria
Offline Capture: Photos, Voice Notes, Sketches with Pin Placement
Given the device has no internet connectivity and a sheet is open When the user captures a photo, records a voice note, or creates a sketch and drops a pin at specific sheet coordinates Then the capture and pin are saved to local storage within 500 ms, assigned a temporary local ID, and displayed on the sheet at the exact coordinates with status "Queued" And the local record includes metadata: sheetId, coordinate (x,y,scale), captureType, timestamp (UTC), filesize, and deviceId And the locally stored media is encrypted at rest using the platform keystore And the user can view, edit title/description, and delete the queued item while offline, with changes persisted locally and reflected immediately in the UI
Auto Background Sync on Connectivity Resume with Progress Indicators
Given there are one or more queued items and the device transitions from offline to online When background sync initiates Then uploads start automatically in FIFO order by capture timestamp and display per-item progress (0–100%) and an overall queue progress indicator And sync continues when the app is backgrounded and resumes on next launch if interrupted, preserving progress state And each item transitions status through "Queued" → "Syncing" → "Synced" on success, with timestamps for start and completion
Robust Retry and Failure Surfacing
Given a queued item encounters a transient error (e.g., network timeout or HTTP 5xx) When syncing the item Then the client retries up to 5 times with exponential backoff starting at 2s, capped at 2 minutes, with jitter ±20% And if a permanent error occurs (HTTP 4xx excluding 408/429), the item is marked "Failed" immediately with an actionable error message And after max retries, the item status is "Failed" with a visible "Retry" action, and reattempts are idempotent via a stable upload key to prevent duplicates
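The retry policy above (5 attempts, 2 s base doubling, 2-minute cap, ±20% jitter, permanent 4xx except 408/429) can be expressed directly. A minimal Python sketch, with the random source injectable for testability (names are illustrative):

```python
import random

MAX_RETRIES = 5
BASE_DELAY_S = 2.0
CAP_S = 120.0      # 2-minute cap
JITTER = 0.20      # +/-20%

def backoff_delay(attempt: int, rng=random.random) -> float:
    """Delay before retry `attempt` (1-based): exponential from 2s, capped, jittered."""
    base = min(BASE_DELAY_S * (2 ** (attempt - 1)), CAP_S)
    return base * (1 - JITTER + 2 * JITTER * rng())

def is_permanent(status: int) -> bool:
    """Per the criteria: 4xx is permanent, except 408 (timeout) and 429 (rate limit)."""
    return 400 <= status < 500 and status not in (408, 429)
```

Idempotency is handled separately: each upload carries a stable key so that a retried request the server already applied is a no-op rather than a duplicate.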
Conflict Resolution for Concurrent Edits on the Same Pin
Given a pin and its annotations are edited offline by User A and the same pin is edited online by another user When the offline edits attempt to sync Then version checks detect a conflict using server and client version tokens And non-overlapping fields (e.g., title vs description) are merged automatically; overlapping fields trigger a conflict banner with options: "Keep Mine", "Use Server", or "Review" And the resolved outcome results in a single server pin record, with an audit trail capturing both versions and the chosen resolution And no duplicate pins are created on the sheet
Partial Sync and Upload Resume for Media
Given media uploads use chunked transfer for files > 1 MB When connectivity drops mid-upload Then the client resumes from the last confirmed chunk upon reconnect without re-uploading completed chunks, verified by chunk checksums And metadata and references remain consistent so the item shows status "Partially Synced" until the media completes And upon successful completion the server returns a final checksum/ETag that the client verifies before marking the item "Synced"
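The resume behavior above hinges on per-chunk checksums: on reconnect the client skips every leading chunk the server has already confirmed. A minimal stdlib Python sketch of that bookkeeping (chunk size and function names are illustrative; the spec only requires chunking for files over 1 MB):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # illustrative; not mandated by the spec

def chunk_checksums(data: bytes, size: int = CHUNK_SIZE) -> list[str]:
    """SHA-256 per chunk, used to verify which chunks the server has confirmed."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def chunks_to_send(data: bytes, confirmed: list[str],
                   size: int = CHUNK_SIZE) -> list[int]:
    """Resume point: skip leading chunks whose checksums the server confirmed."""
    local = chunk_checksums(data, size)
    n = 0
    while n < len(confirmed) and n < len(local) and confirmed[n] == local[n]:
        n += 1
    return list(range(n, len(local)))
```

The final server-returned checksum/ETag check from the criteria is the last gate: only after it matches does the item move from "Partially Synced" to "Synced".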
Media Encryption, Compression, and Size/Quality Enforcement
Given default capture settings are enabled When the user captures media Then local media is encrypted at rest and never written unencrypted to disk And uploads are compressed to meet targets: photos ≤ 2 MB at visually lossless quality, voice notes AAC at 64 kbps mono, sketches PNG/SVG ≤ 1 MB And if a media item exceeds the absolute limit (10 MB), the user is prompted to auto-compress or cancel before queuing And server receives uploads over TLS 1.2+ and the client verifies certificate pinning per app security policy
User Feedback: Queue, Partial Syncs, Failures, and Data Integrity
Given the user is working offline with multiple queued items When they open the Sync Queue view Then they see counts and filters for statuses: Offline, Queued, Syncing, Partially Synced, Failed, Synced, with per-item details and retry actions And pins placed offline render on the sheet with an offline badge and become standard pins after sync, preserving exact coordinates And force-closing the app or device restart does not lose any queued items; all items remain present and sync successfully upon reconnection And a batch test of 100 offline captures results in 100% of items synced with zero duplicates and matching checksums post-sync
Sheet Coordinate Pinning & Revision Resilience
"As a project lead, I want pins to stay anchored to the correct spot across drawing revisions so that context isn’t lost and I don’t have to re-pin items after updates."
Description

Allow users to place pins precisely on drawing sheets using a normalized sheet-coordinate system independent of device resolution and zoom. Support snapping to room/zone boundaries and geometry, and maintain pin anchors across sheet revisions using vector alignment and/or image registration. Detect and flag pin drift when sheets change, offering remap suggestions and a side-by-side compare view to confirm updated positions.

Acceptance Criteria
Normalized Device-Independent Pin Placement
Given a drawing sheet is open on any device and at any zoom level When a user places a pin at a visible point P on the sheet Then the stored pin coordinate is normalized to the sheet (x,y in [0.0,1.0]) with precision >= 1e-4 And reopening the sheet on any device/zoom renders the pin at the same sheet location with positional error <= 0.25% of the sheet’s shorter side or 2 screen pixels, whichever is greater And the normalized coordinate persists unchanged after save, sync, and reload
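Device-independent placement follows from storing sheet-normalized coordinates and converting through the current viewport. A minimal sketch, assuming a viewport described by a pan offset (in sheet units), a zoom factor, and the sheet dimensions (all field names are illustrative):

```python
def to_sheet_coords(px: float, py: float, vp: dict) -> tuple[float, float]:
    """Screen pixels -> normalized sheet coords in [0,1], independent of zoom/device.
    Rounded to 1e-4 per the stated precision requirement."""
    x = (vp["offset_x"] + px / vp["zoom"]) / vp["sheet_w"]
    y = (vp["offset_y"] + py / vp["zoom"]) / vp["sheet_h"]
    return round(x, 4), round(y, 4)

def to_screen(nx: float, ny: float, vp: dict) -> tuple[float, float]:
    """Inverse mapping: normalized sheet coords -> screen pixels for this viewport."""
    px = (nx * vp["sheet_w"] - vp["offset_x"]) * vp["zoom"]
    py = (ny * vp["sheet_h"] - vp["offset_y"]) * vp["zoom"]
    return px, py
```

Because only the normalized pair is persisted, the same pin round-trips through any device's viewport without accumulating error beyond the stated tolerance.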
Snap-to Rooms/Zones and Geometry
Given snap mode is enabled and the sheet contains vector geometry and room/zone polygons When the cursor/tap is within 12 screen pixels of an edge, vertex, room boundary, or room centroid Then the pin snaps to the nearest eligible target and a visual indicator shows the snap type and target name And the pin stores target metadata (entity_id, entity_type, room/zone id, discipline) alongside the normalized coordinate And if multiple targets are within the snap radius, a disambiguation list appears within 300 ms ordered by proximity; selecting one sets the anchor And holding Alt (desktop) or long-press (mobile) bypasses snapping, placing a free pin And snap placement accuracy is within <= 0.1% of the sheet’s shorter side relative to the target
Cross-Version Anchor Preservation via Alignment
Given existing pins on sheet version Vn and a revised sheet version Vn+1 is uploaded When the system runs alignment Then it attempts vector alignment first (geometry/layer matching); if unavailable/insufficient, it falls back to image registration And for each pin, a mapped coordinate on Vn+1 is computed And if the registration RMSE <= 0.3% of the sheet diagonal and match confidence >= 0.90 Then the pin is auto-re-anchored to Vn+1 and marked Re-anchored Else the pin remains at its last confirmed position and is marked Needs Review
Pin Drift Detection and Flagging on Sheet Update
Given pins have been mapped from Vn to Vn+1 When a pin’s displacement magnitude exceeds 0.5% of the sheet diagonal OR its mapped position changes room/zone membership OR the original target geometry no longer exists Then the pin is flagged as Drifted And the UI displays a drift badge with displacement distance (in mm/in if scale known; otherwise as % of sheet diagonal) and direction And the sheet-level drift counter increments and is exposed via API (drifted=true) And the pin appears in a Review queue
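The drift flag above is the disjunction of three conditions: displacement above 0.5% of the sheet diagonal, a room/zone membership change, or a vanished target. A minimal Python sketch working in sheet units (signature is illustrative):

```python
import math

DRIFT_THRESHOLD = 0.005  # 0.5% of the sheet diagonal

def is_drifted(old_xy, new_xy, sheet_w: float, sheet_h: float,
               old_room: str, new_room: str, target_exists: bool = True) -> bool:
    """Flag a mapped pin as Drifted per the three criteria above."""
    diag = math.hypot(sheet_w, sheet_h)
    disp = math.hypot(new_xy[0] - old_xy[0], new_xy[1] - old_xy[1])
    return (disp > DRIFT_THRESHOLD * diag
            or old_room != new_room
            or not target_exists)
```

The displacement reported in the UI is this same `disp`, rendered in mm/in when the sheet scale is known and as a percentage of the diagonal otherwise.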
Remap Suggestions with Confidence Scoring
Given a pin is flagged as Drifted When the user opens the remap dialog Then the system presents up to 3 suggested new anchor positions with labels (e.g., Matched Room A-102 Boundary, Closest Wall Edge) and confidence scores (0–1) And at least one suggestion references the original target entity if it still exists And selecting a suggestion updates the pin’s normalized coordinate, clears the Drifted flag, and records an audit log (old_coord, new_coord, method, confidence, user, timestamp) And the action is undoable with a single step revert
Side-by-Side Compare and Confirmation Workflow
Given there are pins marked Needs Review or Drifted between Vn and Vn+1 When the user opens Compare Then the app shows Vn and Vn+1 side-by-side with synchronized pan/zoom and linked pin highlighting And a wipe slider and blink toggle allow visual verification of geometry changes And the user can Accept per-pin or in bulk; on Accept, the mapped coordinates become the confirmed anchors on Vn+1 and review status is cleared And on Cancel, no changes are saved and pins retain their prior confirmed positions
Robustness to Sheet Transformations (Scale/Rotate/Crop)
Given a revised sheet version includes rotation (±180°), uniform scaling (50–200%), translation (≤10% of side length), and/or cropping (≤20% of area) When alignment runs Then the mapping compensates for these transforms and preserves pin anchors And for a validation set of synthetic transforms, the post-alignment positional error per pin is <= 0.3% of the sheet diagonal And pins whose mapped location falls into cropped-out regions are marked Orphaned and require user remap
Smart Auto-Tagging by Room/Zone & Discipline
"As a drafter, I want pins to be auto-tagged by room and discipline so that I can filter my queue and address the right items without manual sorting."
Description

On pin creation, automatically infer and apply room/zone and discipline tags by parsing sheet metadata, titles, and callouts, and by mapping coordinates to room polygons. Use heuristics and configurable rules to suggest discipline based on capture type and keywords. Allow quick override and ensure all tags are stored as structured fields for filtering, routing, and reporting.

Acceptance Criteria
Auto-Tag Room/Zone from Sheet Coordinates
Given a sheet contains defined room and zone polygons and a user drops a pin fully inside Room A polygon When the pin is created Then the pin stores room_id = Room A.id and room_name = Room A.name and tag_source_room = coordinate_map And the operation completes within 500 ms on a mid-tier device Given the same pin also lies within Zone B polygon When the pin is created Then the pin stores zone_id = Zone B.id and zone_name = Zone B.name and tag_source_zone = coordinate_map
Discipline Suggestion from Capture Type, Keywords, and Callouts
Given capture type = photo and the attached note contains the keyword duct and the sheet callout near the pin includes HVAC When the pin is created Then the suggested discipline_code = MEP with discipline_confidence >= 0.8 and tag_source_discipline = heuristic_rules Given the QA keyword set for disciplines is executed against the suggestion engine When precision and recall are computed Then precision >= 85% and recall >= 85% And average suggestion latency <= 700 ms on a mid-tier device
Ambiguity Handling for Overlapping or Out-of-Bounds Pins
Given a pin centroid is outside all room polygons When the pin is created Then room_id is null and room_name = Unassigned and tag_source_room = none and tag_status_room = Needs Review And the UI prompts the user to pick a room from the current sheet Given a pin overlaps multiple room polygons When overlap areas are calculated Then the room with the greatest overlap area is assigned; if the top two overlaps differ by less than 10%, the user is prompted to choose And resolution_method_room is recorded as auto or user
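The overlap resolution above reduces to a ranked pick with a tie band. A minimal sketch, assuming overlap areas are precomputed per room and reading "differ by less than 10%" as relative to the larger overlap (that interpretation is an assumption, not stated in the spec):

```python
def assign_room(overlaps: dict[str, float], tie_margin: float = 0.10):
    """Pick the room with the greatest overlap area.
    Returns (room_id or None, resolution_method):
      'auto' - unambiguous assignment
      'user' - top two overlaps within the tie margin; prompt the user
      'none' - pin outside all rooms; mark Needs Review
    """
    if not overlaps:
        return None, "none"
    ranked = sorted(overlaps.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1:
        top, second = ranked[0][1], ranked[1][1]
        if top > 0 and (top - second) / top < tie_margin:  # assumed: relative to top
            return None, "user"
    return ranked[0][0], "auto"
```

Whichever branch fires, `resolution_method_room` is persisted so reporting can distinguish automatic from user-confirmed assignments.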
Quick Override of Suggested Tags
Given a pin has suggested room, zone, and discipline When the user taps Edit Tags Then the user can change each of room, zone, and discipline in no more than 2 taps per field and save within 5 seconds And on save, tag_source_room or tag_source_zone or tag_source_discipline becomes user and corresponding confidence = 1.0 And previous values are appended to tag_audit with timestamp, old_value, new_value, and actor
Offline Tagging and Deferred Sync
Given the device is offline and the sheet polygons and metadata are cached When a pin is created Then room and zone tags are inferred locally using cached data; discipline suggestions use cached rules And if required data is missing, tag_status fields are set to Pending and placeholders are stored without blocking pin creation Given connectivity is restored When sync runs Then pending tags are inferred and updated, user overrides are preserved, and no tags are lost or duplicated
Structured Tag Storage, Filtering, Routing, and Reporting
Given a new pin is created and tagged When persisted Then the record includes structured fields: room_id, room_name, zone_id, zone_name, discipline_code, discipline_confidence (0..1), tag_source_room, tag_source_zone, tag_source_discipline, and rule_id_discipline (nullable) Given a filter is applied for room_id = X and discipline_code = ELEC When the pin list is refreshed Then only pins matching both fields are returned Given a routing rule exists for discipline_code = STR to notify the Structural queue When a pin is tagged with STR Then a route event is emitted to the Structural queue within 2 seconds Given a report export is generated When fields are selected Then all tag fields appear as discrete columns with correct values
Configurable Discipline Rules and Priority Order
Given an admin defines discipline rules with keyword patterns, capture-type conditions, and priority values, and publishes the config When a pin is created that matches multiple rules Then the rule with highest priority applies; ties are broken by the most specific capture-type condition; remaining matches are recorded as secondary And rule_id_discipline is stored with the applied suggestion Given a new ruleset is published When clients are online Then the new rules take effect immediately without app restart and are synced for offline use within 5 minutes
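The priority-and-specificity resolution above can be sketched as a sort: highest priority first, and on ties a rule with an explicit capture-type condition beats a wildcard. This Python sketch assumes a simple rule shape (`id`, `priority`, optional `capture_type`, `keywords`, `discipline`); the real rule schema may differ:

```python
def apply_discipline_rules(rules: list[dict], capture_type: str, text: str):
    """Return (primary_rule or None, secondary_rule_ids) per the tie-break rules."""
    def matches(r: dict) -> bool:
        type_ok = r.get("capture_type") in (None, capture_type)  # None = wildcard
        kw_ok = any(kw in text.lower() for kw in r["keywords"])
        return type_ok and kw_ok

    hits = [r for r in rules if matches(r)]
    if not hits:
        return None, []
    # Highest priority first; on ties, a specific capture-type beats a wildcard.
    hits.sort(key=lambda r: (-r["priority"], r.get("capture_type") is None))
    return hits[0], [r["id"] for r in hits[1:]]
```

The winning rule's id is what gets persisted as `rule_id_discipline`; the remaining matches map to the "recorded as secondary" clause.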
Multi-Modal Capture (Photo, Voice, Sketch)
"As an on-site inspector, I want to attach photos, voice notes, and quick sketches to a pin so that I can convey details unambiguously without writing long descriptions."
Description

Provide a unified capture interface to add photos (single or burst), record voice notes with basic trimming, and create sketches or markups directly on the sheet or on top of photos. Support multiple attachments per pin, EXIF/time/location capture where available, and queued transcription of voice notes when online. Include simple annotation tools (color, stroke, arrow, text) and enforce media constraints for size and format.

Acceptance Criteria
Capture and Pin Multiple Photos (Single/Burst) Offline
Given a sheet is open and the user taps a coordinate to place a pin, when the user selects Photo mode and captures a single photo, then the photo is attached to that pin and a thumbnail appears in the pin detail panel. Given the user selects Burst mode, when the user holds the shutter and captures up to 20 photos, then all photos (max 20) are attached to the same pin in chronological order and a count badge displays on the pin. Given the device lacks connectivity, when photos are captured, then attachments are saved locally with a Pending Sync state and appear immediately in the pin gallery. Given the device supports EXIF and permission for camera is granted, when a photo is captured, then EXIF timestamp and orientation are stored; when location permission is granted and a fix is available, GPS is stored; otherwise location is omitted.
Record, Trim, and Transcribe Voice Note
Given a pin is open, when the user selects Voice and taps Record, then recording starts and a waveform shows elapsed time. Given recording is stopped, when the trim handles are adjusted and Save is tapped, then the trimmed audio is saved as an attachment in M4A (AAC) format. Given max duration is 10 minutes, when the recording reaches 10:00, then recording auto-stops and prompts to save or discard. Given the device is offline, when a voice note is saved, then it is queued for transcription; when the device reconnects, then transcription starts automatically, shows Transcribing..., and on success an editable transcript is attached to the note. Given transcription fails due to network or service error, when Retry is tapped, then the job is re-queued and completes when the service is reachable. Given the user taps Play, then audio plays with pause, resume, and scrubbing controls.
Annotate Sheet or Photo with Drawing Tools
Given a sheet or photo is open in markup mode, when the user selects Pencil, Arrow, Text, or Shape tools, then marks are placed on the canvas and tool options include at least 6 colors and stroke widths from 1 to 12 px. Given the user taps Undo or Redo, then the last action is reversed or restored up to 50 steps. Given the user pans or zooms, then strokes remain anchored to the underlying sheet or photo coordinates with no visual drift. Given the user saves, then the annotation is stored as an editable vector layer and a flattened PNG preview, and reopening preserves editability. Given the user adds text, then font size, color, and rotation can be adjusted and text remains legible at 100% zoom.
Manage Multiple Attachments per Pin
Given a pin is selected, when the user adds photos, voice notes, and sketches, then up to 25 attachments can be associated to the pin and displayed in a unified gallery sorted newest first. Given the user long-presses an attachment, when Rename, Reorder, or Delete is chosen, then the action applies; delete requires confirmation and performs a soft delete pending sync. Given attachments exist on a pin, then the pin badge shows type-specific counts (e.g., 3 Photos, 1 Voice, 2 Sketches). Given the device is offline, when attachments are added or edited, then changes are queued; when online, they sync and each item transitions to Synced or shows an error state with retry.
Capture and Store Attachment Metadata (EXIF, Time, Location)
Given any attachment is created, then created_at and updated_at timestamps are stored in UTC with millisecond precision and associated to the authoring user. Given a photo contains EXIF, then camera make, model, focal length, exposure, and orientation are extracted and stored; if EXIF is absent, fields are null. Given location permission is granted and a GPS fix with <= 50 m accuracy is available within 5 seconds, then latitude and longitude are stored with the attachment; otherwise location is not stored and the capture proceeds without repeat prompts. Given an attachment is viewed in Details, then timestamp, author, file type, file size, dimensions or duration, and location (if present) are displayed; location opens a mini map view when tapped.
Enforce Media Size and Format Constraints
Rule: Accepted photo inputs are JPEG and HEIC; HEIC is converted to JPEG at 85% quality; max resolution 4096 px on the longest edge; max processed photo size 8 MB; captures exceeding limits are downscaled and recompressed to meet both limits. Rule: Accepted audio is M4A (AAC LC, 48 kHz); max duration 10 minutes; max file size 20 MB; recording is prevented beyond limits and shows Limit reached. Rule: Sketch or markup is stored as vector JSON plus a PNG preview with max 4096 px longest edge and max 4 MB; export to PDF is available on share. Rule: Burst mode allows up to 20 photos per capture; per pin total attachments are limited to 25; attempts beyond limits are blocked with a toast and no data is lost. Rule: Uploads larger than 25 MB per pin batch are chunked into 5 MB parts with resumable upload and exponential backoff up to 6 retries on failure.
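The photo-resolution rule above (max 4096 px on the longest edge, aspect ratio preserved) is a simple proportional downscale. A minimal sketch covering just that rule; the size/recompression rules would layer on top:

```python
def photo_fit(width: int, height: int, max_edge: int = 4096) -> tuple[int, int]:
    """Downscale so the longest edge is <= max_edge, preserving aspect ratio.
    Returns the original dimensions unchanged if already within the limit."""
    longest = max(width, height)
    if longest <= max_edge:
        return width, height
    scale = max_edge / longest
    return round(width * scale), round(height * scale)
```

For example, an 8192x6144 capture lands at 4096x3072 before the 8 MB size check and any recompression pass.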
Offline Capture and Background Sync
Given the device is offline, when the user captures photos, voice notes, or sketches, then all data are saved in an encrypted local store with a Pending Sync state per item and are accessible in the pin gallery. Given connectivity is restored, then a background worker syncs pending items in FIFO order while the app is in foreground or background, showing per-item progress and an overall sync indicator. Given a conflict occurs because a pin was deleted remotely, then local pending items are reattached to a new local pin at the same sheet coordinates and the user is prompted to confirm or discard. Given a sync completes successfully, then items transition to Synced and receive server-assigned IDs; on failure, an error banner appears and a Retry action re-queues the item. Given the device is in power-saving mode, then background sync reduces concurrency to 1 and defers until charging unless the user opens the pin detail.
Pin Workflow & Assignment
"As a project coordinator, I want to assign pins and track their status so that the right team members can act and I can verify resolution without chasing updates."
Description

Introduce a lightweight workflow for each pin with statuses (Open, Assigned, In Review, Resolved), assignees, due dates, and comment threads. Trigger notifications on assignment and status changes, and surface a concise activity log. Allow linking a pin to specific sheet revisions and exporting a pin’s history for audit and handoff.

Acceptance Criteria
Assign Pin to User with Optional Due Date
Given a pin exists in status "Open" and I have permission to manage assignments When I assign the pin to user "Drafter A" without a due date Then the pin status updates to "Assigned" And the assignee is recorded as "Drafter A" And the due date field is stored as null and displays "No due date" And an assignment entry with actor, assignee, and timestamp is appended to the pin's activity log Given a pin exists in status "Open" and project timezone is configured When I assign the pin to user "Drafter B" with a due date of 2025-10-05T17:00:00-04:00 Then the due date is persisted in ISO 8601 including timezone offset And the due date is displayed using the project timezone settings And an assignment entry including the due date is appended to the activity log Given a pin exists in status "Assigned" with assignee "Drafter B" When I reassign the pin to user "Drafter C" Then the assignee changes to "Drafter C" and status remains "Assigned" And the activity log records the old and new assignees with timestamp and actor
Status Lifecycle and Valid Transitions
Given a pin exists in status "Open" When I change status to "Assigned" Then the status updates to "Assigned" and the change is recorded with actor and timestamp in the activity log Given a pin exists in status "Assigned" When I change status to "In Review" or "Resolved" Then the status updates accordingly and the change is recorded with actor and timestamp in the activity log Given a pin exists in status "In Review" When I change status to "Resolved" or "Assigned" Then the status updates accordingly and the change is recorded with actor and timestamp in the activity log Given a pin exists in status "Resolved" When I change status to "Open" Then the status updates to "Open" and the change is recorded with actor and timestamp in the activity log Given any pin in any status When I attempt to change to the same status Then the request is rejected with a validation error and no activity is logged
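The lifecycle above is a small state machine, which keeps the valid-transition rules in one table rather than scattered across handlers. A minimal sketch encoding exactly the transitions listed (including the rejection of same-status changes):

```python
VALID_TRANSITIONS: dict[str, set[str]] = {
    "Open":      {"Assigned"},
    "Assigned":  {"In Review", "Resolved"},
    "In Review": {"Resolved", "Assigned"},
    "Resolved":  {"Open"},
}

def can_transition(current: str, target: str) -> bool:
    """True only for transitions the lifecycle table allows; same-status is rejected."""
    return target != current and target in VALID_TRANSITIONS.get(current, set())
```

A rejected call maps to the validation error in the criteria and logs nothing; an accepted call appends the actor/timestamp entry to the activity log.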
Pin Comment Thread
Given a pin detail view is open When I post a text comment "Please update door swing" Then the comment appears at the end of the thread with my name, timestamp (ISO 8601), and exact content And the comment is persisted and visible to all project members with access to the pin And a corresponding activity entry "Comment added" with actor and timestamp is recorded Given an existing thread with multiple comments When I load the pin detail view Then comments are displayed in chronological order (oldest at top, newest at bottom) with no gaps or duplicates Given a pin exists When I attempt to post an empty comment Then the action is blocked with a validation message and no comment or activity entry is created
Notifications on Assignment and Status Change
Given a pin is assigned to user "Drafter A" When I reassign the pin to user "Drafter B" Then user "Drafter B" receives an in-app notification within 60 seconds containing pin ID, sheet ID, coordinates, and the assigning actor And the assigning actor does not receive a notification for their own action And the notification links directly to the pin detail view Given a pin in status "Assigned" When I change the status to "In Review" Then the current assignee and the pin creator each receive an in-app notification within 60 seconds including the previous and new status, actor, and timestamp Given notification delivery preferences exist for email/push When an assignment or status change occurs Then email/push notifications are sent in addition to in-app notifications only if enabled for the recipient
Concise Activity Log Visibility
Given a pin has more than 10 activity events When I open the pin detail view Then the activity log shows the 10 most recent entries in reverse chronological order with a "View all" control Given I click "View all" on the activity log When the full log loads Then all historical entries are displayed in reverse chronological order And entries are immutable and include actor, timestamp (ISO 8601), and a concise summary (e.g., "Assigned to Drafter A", "Status: Assigned → In Review") Given key pin actions occur (create, assign/reassign, due date add/change/remove, status change, revision link/unlink, export history) When I view the activity log Then each of those actions is represented by a single concise entry
Link Pin to Specific Sheet Revisions
Given a pin exists and sheet ABC has revisions R1 and R2 When I link the pin to revision R2 Then the pin stores a reference to sheet ABC and revision R2 And the linked revision is displayed on the pin detail view And an activity entry "Linked to Sheet ABC Rev R2" is recorded with actor and timestamp Given a pin is linked to revision R2 When I unlink the revision Then the link is removed And an activity entry "Unlinked from Sheet ABC Rev R2" is recorded with actor and timestamp Given I am viewing sheet ABC revision R2 When I toggle "Show linked pins" Then I see all pins linked to revision R2 rendered at their coordinates
Export Pin History for Audit and Handoff
Given I am on a pin detail view When I click "Export history" Then a downloadable package is generated within 30 seconds that includes a PDF summary with pin metadata (ID, sheet/revision links, coordinates), current status, assignee, and due date, and a machine-readable JSON file containing the full activity log, assignment history, status changes, comment thread, and revision link history with ISO 8601 timestamps and actor IDs And an activity entry "History exported" is recorded with actor and timestamp Given an export completes When I open the PDF summary Then all fields render without truncation and match the current system of record at the time of export Given an export completes When I parse the JSON file Then the schema validates against the documented export schema and includes at least one entry for each change type applied to the pin
Visibility Controls & Client Sharing
"As a project lead, I want to control which pins clients can see so that we can collaborate transparently while keeping internal notes private."
Description

Provide per-pin visibility settings (Internal, Client-Visible) and role-based access controls to manage who can see, edit, or comment. Support generating a shareable, client-safe view that hides internal comments while preserving location and media context. Log shares and views for accountability and ensure consistency with the product’s approval flows.

Acceptance Criteria
Per-Pin Visibility Toggle and Enforcement
Given a project sheet with mixed pins When a Project Lead or Admin sets a pin's visibility to Internal Then only Admin, Project Lead, and Drafter roles can see the pin across app views and APIs, and Client users cannot see it anywhere And the pin is excluded from client exports, approvals, and client notifications Given a pin is set to Client-Visible When the change is saved Then the pin becomes visible in client views and share links within 2 seconds of sync And any internal comments and internal-only attachments on that pin remain hidden in client views Given a Drafter creates a new pin When saving the pin Then the default visibility is Internal And the Drafter cannot set visibility to Client-Visible unless granted permission scope "Make Client-Visible"
Role-Based Access to See, Edit, and Comment
Given a user with role Admin or Project Lead When accessing any pin Then they can view all pins, edit pin fields, change visibility between Internal and Client-Visible, delete pins, and comment in both internal and client threads Given a user with role Drafter When accessing pins Then they can view Internal and Client-Visible pins; edit pins they created; change visibility from Client-Visible to Internal; cannot set visibility to Client-Visible unless granted scope "Make Client-Visible"; and can comment only in the internal thread Given a user with role Client When accessing the application or a share link Then they can view only Client-Visible pins; cannot edit or delete pins; cannot change visibility; and can comment only if project setting "Client Comments Enabled" is true And all permissions are enforced at both UI and API layers; forbidden actions return HTTP 403 with no side effects
Generate Shareable Client-Safe View
Given a sheet containing Internal and Client-Visible pins When a Project Lead clicks "Create Client Share" and selects scope (This Sheet or Entire Project) Then the generated link renders only Client-Visible pins And each shown pin preserves location coordinates, attached media (photos, audio, sketches), and auto-tags (room/zone, discipline) And internal comments, internal-only attachments, and internal-only tags are excluded from the client view And the shared view is read-only except for "Approve" and "Client Comment" actions when enabled And an internal "View as Client" preview shows exactly what a client will see
Share Link Security, Expiry, Revocation, and Audit Logging
Given a newly created client share link Then it uses a non-guessable token with at least 128-bit entropy and is served only over HTTPS And the creator can set an expiry (24h, 7d, or custom date) and an optional password When a share link is expired or revoked Then subsequent access returns a client-safe 410 Gone page without revealing project metadata And the audit log records share-create/update/revoke events (timestamp, actor, scope) and each view event (timestamp, viewer identity if authenticated or link token ID, sheet/pin accessed) And audit entries are immutable and exportable to CSV with filters by date range, actor, and sheet
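The 128-bit-entropy requirement above is met directly by a CSPRNG-backed URL-safe token. A minimal Python sketch (16 random bytes = 128 bits; the function name is illustrative):

```python
import secrets

def new_share_token() -> str:
    """URL-safe share-link token with 128 bits of entropy (16 CSPRNG bytes)."""
    return secrets.token_urlsafe(16)
```

The token itself carries no project metadata; expiry, optional password, and revocation state live server-side keyed by the token, so a revoked or expired link can return the client-safe 410 page without leaking anything.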
Client Approval Flow Consistency
Given a client accesses a share link with Client-Visible pins When the client clicks "Approve" for a sheet or selected pins Then the system records approval against the current revision with timestamp, approver identity, and originating share ID And only Client-Visible pins are counted toward approval metrics; Internal pins neither block nor appear in the client’s approval scope And if any pin is marked "Requires Client Response," approval is blocked until resolved or explicitly overridden by Admin/Project Lead with a required reason note And internal users are notified of approvals; clients never receive notifications containing internal comment content
Comment Stream Segregation and Redaction
Given a pin with both internal and client comment threads When viewed via the client share link Then only the client thread is displayed; internal comments, mentions, and attachments are fully hidden And internal replies to client comments can be marked Internal or Client-Visible; selecting Internal moves the reply to the internal thread and hides it from clients And outbound emails/notifications to clients strip internal-only mentions and content And search within the client share returns results only from client-visible content and client comment threads

SignSafe Biometrics

Approve offline with a biometric-gated stamp that binds your identity to a specific version hash. When the device reconnects, the stamp commits to the audit ledger, flags any version drift, and requests a quick recheck if the sheet changed, keeping approvals trustworthy.

Requirements

Offline Biometric Approval Stamp
"As a project lead working on-site without internet, I want to approve a drawing with my biometric-gated stamp so that my decision is securely recorded and bound to the exact version even while offline."
Description

Enable approvers to apply a cryptographic stamp to a specific drawing sheet version while offline, gated by device biometrics. The approval artifact includes a signed payload with the sheet ID, immutable version hash, approver identity, device attestation, and timestamp, generated using secure enclave/keystore keys and cached until connectivity returns. Integrates with PlanPulse’s approval flow so offline actions appear in the UI as “Pending Sync,” preserving a seamless experience. Delivers strong identity binding and non-repudiation without exposing biometric data, reducing approval delays when field conditions lack connectivity.

Acceptance Criteria
Offline Biometric Approval
Given the device has no internet connectivity and the user is viewing a specific drawing sheet version, When the user taps Approve and successfully completes device biometric authentication within 30 seconds, Then the app generates an approval stamp locally without any server calls. Given biometric authentication fails or is canceled three times, When the user attempts to approve offline, Then the approval is blocked and an error message "Biometric authentication required" is shown. Given the device has no enrolled biometrics, When the user attempts offline approval, Then the action is disallowed and the user is prompted to enable biometrics to proceed.
Signed Payload Composition
Given offline approval succeeds, When the stamp is generated, Then the payload includes sheetId, immutable versionHash (SHA-256), approverIdentity, deviceAttestation, timestamp (ISO 8601 UTC), and a digital signature over the canonical payload. Given the payload is generated, When the signature is verified using the approver’s registered public key, Then the verification succeeds and the payload integrity is confirmed. Given the payload is generated, When inspected, Then no biometric templates, images, or raw biometric data are present in the payload or local cache.
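A minimal sketch of the payload composition and signature check, assuming a canonical JSON serialization. The HMAC here is a stand-in so the example runs anywhere; the actual stamp would be signed by a non-exportable asymmetric key in the secure enclave/keystore and verified against the approver's registered public key:

```python
import hashlib
import hmac
import json

def canonical_bytes(payload: dict) -> bytes:
    # Canonical form: sorted keys, no whitespace, so every device serializes identically.
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

def make_stamp(sheet_id: str, sheet_bytes: bytes, approver: str,
               attestation: str, ts: str, key: bytes) -> dict:
    payload = {
        "sheetId": sheet_id,
        "versionHash": hashlib.sha256(sheet_bytes).hexdigest(),  # immutable SHA-256
        "approverIdentity": approver,
        "deviceAttestation": attestation,
        "timestamp": ts,  # ISO 8601 UTC, e.g. "2025-01-01T12:00:00Z"
    }
    # Stand-in signature over the canonical payload; no biometric data is included.
    signature = hmac.new(key, canonical_bytes(payload), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_stamp(stamp: dict, key: bytes) -> bool:
    expected = hmac.new(key, canonical_bytes(stamp["payload"]), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["signature"])
```

Because the signature covers the canonical bytes, any change to any payload field fails verification.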
Secure Enclave Enforcement
Given the device supports a hardware-backed secure enclave/keystore, When generating the approval stamp, Then the signing operation uses a non-exportable private key stored in secure hardware and returns a valid hardware attestation statement. Given the device does not support hardware-backed keys or attestation, When attempting offline approval, Then the app prevents stamping and displays "This device is not eligible for offline approvals." Given an attempt is made to export or read the private key material, When evaluated via platform APIs, Then the key is marked non-exportable and cannot be retrieved from app storage.
Pending Sync Queue and UI
Given a stamp is generated offline, When the user navigates to the project dashboard or sheet list, Then the associated sheet displays a "Pending Sync" badge and the approval appears in a Pending queue with one entry per stamp. Given the app is force-closed and reopened while still offline, When viewing the sheet or Pending queue, Then the "Pending Sync" state persists and the queued artifact remains present. Given the artifact is stored locally, When inspecting device storage, Then the artifact is encrypted at rest using platform-secure storage and is inaccessible to other apps.
Connectivity Recovery and Ledger Commit
Given one or more pending offline approvals exist, When network connectivity is restored, Then the app uploads all queued artifacts within 30 seconds and awaits commit receipts from the audit ledger. Given a queued artifact is successfully committed, When a receipt containing the server timestamp and artifact hash is returned, Then the sheet status updates to "Approved," the "Pending Sync" badge is cleared, and the receipt link appears in the audit trail UI. Given an upload attempt fails due to a transient network error, When retry logic is applied, Then the app retries with exponential backoff up to 5 attempts and surfaces a non-blocking warning if retries are ongoing.
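The retry rule above (exponential backoff, up to 5 attempts) can be sketched as follows; `upload` is a hypothetical callable standing in for the artifact upload:

```python
import time

def upload_with_backoff(upload, max_attempts: int = 5, base_delay: float = 1.0) -> bool:
    """Retry a flaky upload with exponential backoff (1s, 2s, 4s, 8s between attempts).

    Returns True on success; False after the final attempt fails, at which point
    the UI should surface a non-blocking warning.
    """
    for attempt in range(max_attempts):
        try:
            upload()
            return True
        except ConnectionError:
            if attempt == max_attempts - 1:
                return False
            time.sleep(base_delay * (2 ** attempt))
    return False
```

A production queue would typically add jitter to the delay so many devices reconnecting at once do not retry in lockstep.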
Version Drift Detection and Recheck
Given a pending stamp references versionHash H1, When connectivity returns and the server reports the current sheet version hash is H2 not equal to H1, Then the commit is halted, the approval is flagged as "Recheck Required," and the approver is prompted to re-verify the latest sheet. Given the approver opens the recheck prompt, When they confirm approval on the updated sheet, Then a new signed payload is generated referencing hash H2 and committed to the ledger; the original offline artifact remains archived but is not applied to the new version. Given the approver declines the recheck, When the prompt is closed, Then the pending approval remains uncommitted and is marked "Canceled by Approver" in the audit trail.
Privacy and Non-Repudiation
Given offline biometric approval is used, When inspecting app storage and network payloads, Then no biometric images, templates, or raw sensor data are stored or transmitted; only the OS-level biometric success signal gates access to the signing key. Given a completed approval, When verifying the audit ledger record, Then the record contains approverIdentity, deviceAttestation, versionHash, timestamp, and signature sufficient to prove origin and integrity without exposing biometric data. Given the approver disputes an approval, When an auditor validates the ledger entry, Then the signature verifies against the approver’s registered public key and the device attestation validates for the signing time.
Version-Hash Binding & Drift Detection
"As an approver, I want my approval tied to an immutable version hash so that any changes after I reviewed the sheet are detected and my approval isn’t misapplied."
Description

Generate and store a deterministic, collision-resistant hash for each sheet version that includes the base drawing, markups, and critical metadata, then bind approvals to that hash. On reconnection, compare the locally approved hash to the latest server state to detect any drift. If drift is found, automatically invalidate the pending approval, flag it in the UI, and route it into the recheck workflow. Ensures approvals remain trustworthy by guaranteeing they reference the precise content reviewed at the time of stamping.

Acceptance Criteria
Deterministic Version Hash Generation
Given a sheet with defined base drawing, markups, and critical metadata, when a version hash is generated twice without any changes, then the hash values are identical 64-character lowercase hex strings. Given any change to the base drawing, any markup (geometry, style, visibility), or any critical metadata field, when a new hash is generated, then the hash value differs from the previous one. Given unsupported or missing required inputs, when a hash is requested, then the system returns a validation error and no hash is stored. Given a 10MB base drawing and up to 500 markups on a target device, when a hash is generated, then the operation completes within 1 second in 95% of attempts. Given 1,000 randomly mutated variants of a sheet, when hashes are generated, then no collisions occur across the set.
Canonical Hash Input Coverage and Normalization
Given identical logical content with different serialization order of markups or metadata keys, when a hash is generated, then the hash is identical due to canonical ordering. Given timestamps in different local timezones or formats, when normalized and hashed, then the resulting hash is identical for semantically equal timestamps using UTC with fixed precision. Given floating-point numeric fields, when normalized to the defined precision, then re-serialization and hashing yields the same hash value across runs. Given non-critical fields (UI state, viewport position, ephemeral IDs), when changed, then the hash remains unchanged because these fields are excluded from the hash input. Given the hashing operation, when requested via API, then the system can return the canonicalized payload used for hashing to allow verification that base drawing bytes, serialized markups, and schema-defined critical metadata fields were included.
Offline Approval Binding and Local Persistence
Given the device is offline and the user passes biometric gating, when the user approves a sheet, then a local approval record is created containing approver identity, the version hash, a signature over the hash, and a timestamp. Given the approved sheet is edited locally after approval, when new changes are saved, then a new version hash is produced for the edits and the original approval remains bound to the original hash. Given local persistence of approvals, when the app restarts, then the approval record is still present and verifiable, and any tampering with the record causes signature verification to fail. Given the same user attempts to approve the same version hash offline multiple times, when the action is repeated, then the system prevents duplicate approvals for the same user and hash.
Reconnection Drift Detection and Outcome Handling
Given pending local approvals, when the device reconnects, then each approval's bound hash is compared to the latest server hash for the corresponding sheet. Given the hashes match, when comparison completes, then the approval is marked Ready to Commit and proceeds to ledger commit. Given the hashes do not match, when comparison completes, then the pending approval is invalidated with status Invalidated - Drift and no commit is attempted. Given the sheet is missing or access is revoked on the server, when comparison is attempted, then the approval is flagged Invalidated - Not Found and the user is notified. Given up to 50 pending approvals, when reconnection occurs, then all comparisons complete within 5 seconds.
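The reconnection comparison above reduces to a small decision function; this sketch maps the cases onto the statuses named in the criteria:

```python
def classify_pending_approval(approved_hash: str, head_hash) -> str:
    """Compare an offline approval's bound hash to the latest server head.

    head_hash is None when the sheet is missing or access was revoked.
    """
    if head_hash is None:
        return "Invalidated - Not Found"
    if approved_hash == head_hash:
        return "Ready to Commit"
    return "Invalidated - Drift"
```

Only "Ready to Commit" approvals proceed to the ledger; the other two outcomes route into notification and recheck flows.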
Audit Ledger Commit and Idempotent Replay
Given a pending approval with a matching hash, when committed to the server, then a ledger entry is created containing approver ID, sheet ID, version hash, timestamp, and signature, and a commit ID is returned. Given transient network failures during commit, when retries occur, then the approval is eventually committed without duplication once the server is available. Given the same approval is replayed multiple times, when the server receives duplicates, then only a single ledger entry exists and the client is returned the original commit ID. Given a ledger entry is committed, when queried later, then the record is immutable and any subsequent change is an append-only entry that references the prior record.
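A minimal in-memory sketch of the idempotent-replay behavior above: commits deduplicate on the content hash, and a replay returns the original commit ID instead of appending a second entry. `AuditLedger` is a hypothetical name; the real service would back this with database uniqueness constraints:

```python
import hashlib
import json
import uuid

class AuditLedger:
    def __init__(self):
        self._by_hash = {}   # content hash -> commit ID (dedup index)
        self._entries = []   # append-only record

    def commit(self, approval: dict):
        """Return (commit_id, replayed). Duplicate content yields the original ID."""
        blob = json.dumps(approval, sort_keys=True, separators=(",", ":")).encode()
        content_hash = hashlib.sha256(blob).hexdigest()
        if content_hash in self._by_hash:
            return self._by_hash[content_hash], True
        commit_id = uuid.uuid4().hex
        self._by_hash[content_hash] = commit_id
        self._entries.append({"commitId": commit_id, "entry": approval})
        return commit_id, False
```

Keying deduplication on the content hash (rather than a client-chosen token) is what makes retries after a mid-commit crash safe: the replay cannot create a second entry.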
UI Flagging and Recheck Workflow Routing
Given a pending approval invalidated due to drift, when the sheet is opened, then the UI displays a prominent Recheck Required status within 1 second and disables one-click approval until recheck is completed. Given a drifted approval, when the user selects Recheck, then a diff view shows changes between the approved-hash snapshot and the current server version. Given the diff has been reviewed, when the user confirms and passes biometric gating, then a new approval is created bound to the current version hash and the prior invalidated record remains in the audit trail. Given a drifted approval, when notifications are enabled, then the approver receives an in-app alert and the project lead sees an entry in the project activity feed.
Encrypted Offline Approval Queue
"As a user who travels between low-connectivity sites, I want my offline approvals stored securely and synced automatically so that I don’t lose work and can trust nothing was altered."
Description

Store offline approval artifacts in a tamper-evident, encrypted local queue using platform keychain/keystore and integrity checks (e.g., chained hashes). Provide automatic retry and backoff for sync, along with a manual “Sync Now” option. Display clear per-item states (Queued, Syncing, Needs Recheck, Failed) and safe purge controls. Prevents data loss, blocks tampering, and gives users transparency and control over offline approvals pending upload.

Acceptance Criteria
Offline Artifact Encryption via Platform Keystore
Given the device is offline and the user approves a sheet When the approval artifact is persisted locally Then the artifact payload is stored only in encrypted form at rest using a non-exportable key from the OS keychain/keystore And neither plaintext payload nor private keys are ever written to disk And attempting to open the stored file outside the app yields non-human-readable ciphertext And attempting decryption with an invalid key fails and no queue entry is created; the user is notified of an encryption error
Tamper-Evident Chain Integrity for Offline Queue
Given the offline approval queue contains at least one item When any queued item’s file contents or metadata are modified outside the app Then the next integrity verification detects a hash mismatch and marks the item as Failed with reason "Integrity check failed" And all subsequent items referencing that hash are blocked from syncing until resolved And during sync, the server verifies a contiguous chain from the last committed head; on mismatch, the affected items remain uncommitted and are marked Failed
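The chained-hash integrity check above can be sketched like this: each queue entry stores the hash of the previous entry, so modifying any item breaks verification from that point forward, and the verifier reports the first bad index:

```python
import hashlib
import json

GENESIS = "0" * 64  # chain head before any entries exist

def chain_append(queue: list, payload: dict) -> None:
    """Append an entry whose hash covers both the payload and the prior entry's hash."""
    prev = queue[-1]["entryHash"] if queue else GENESIS
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True).encode()
    queue.append({"prev": prev, "payload": payload,
                  "entryHash": hashlib.sha256(body).hexdigest()})

def chain_verify(queue: list):
    """Return the index of the first tampered entry, or None if the chain is intact."""
    prev = GENESIS
    for i, item in enumerate(queue):
        body = json.dumps({"prev": prev, "payload": item["payload"]}, sort_keys=True).encode()
        if item["prev"] != prev or item["entryHash"] != hashlib.sha256(body).hexdigest():
            return i
        prev = item["entryHash"]
    return None
```

On sync, the server can run the same verification from its last committed head, which is what blocks the downstream items the criteria describe.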
Queue Persistence Across App Restarts and Reboots
Given one or more items are in the offline approval queue When the app is force-quit or the device reboots and the app is reopened Then all queued items and their states are preserved without data loss And each item passes a local integrity check (hash verification) prior to any sync attempt And if device free storage falls below a defined threshold (e.g., 5%), the app displays a non-blocking low-storage warning and does not auto-delete queued items
Automatic Sync with Exponential Backoff and Resume
Given the device regains network connectivity and there are queued items When an upload attempt fails due to a transient error (e.g., timeout, 5xx) Then retries occur with exponential backoff and jitter up to a capped maximum interval, without user intervention And on reconnect after interruption, partially uploaded items resume or re-attempt without creating duplicate server records And on success, items are removed from the queue and acknowledged exactly once
Manual "Sync Now" Action Overrides Backoff
Given there are items in Queued or retryable Failed state When the user taps "Sync Now" Then the system initiates immediate sync for eligible items, bypassing any active backoff timers And the action is disabled when no network connectivity is available And per-item progress and final outcomes (Success, Needs Recheck, Failed) are reflected in real time, and the last sync timestamp updates
Per-Item States and Transitions Display
Given a new offline approval is stored Then its initial state is Queued and it displays a visible state badge labeled exactly: "Queued" When a sync attempt starts for the item Then the state changes to "Syncing" and the UI shows an in-progress indicator When the server reports base-version drift for the approved sheet Then the state changes to "Needs Recheck" and the item is not committed When an unrecoverable error (e.g., integrity failure) occurs Then the state changes to "Failed" with a visible reason code And users can filter the list by state and open a detail view showing timestamp, version hash, retry count, and available actions (Retry, Recheck, Purge)
Safe Purge Controls with Confirmation and Audit
Given the offline queue contains items When the user selects Purge Selected or Purge All Then a confirmation modal shows the exact count and total size of items to be deleted and warns that Syncing items cannot be purged When the user confirms Then only eligible items are deleted, a local audit record is written with item IDs, actor, timestamp, and reason, and storage usage decreases accordingly And items in "Needs Recheck" require a second typed confirmation ("PURGE") before deletion And when the device is online, the purge audit record is uploaded to the audit ledger
Audit Ledger Sync & Idempotency
"As a compliance-focused administrator, I want offline approvals to post exactly once to the audit ledger with full verification so that our records remain accurate and defensible."
Description

Upon connectivity restoration, submit offline approval records to the audit ledger via an idempotent API that uses deterministic request IDs and server-side de-duplication. Include device attestation and signature verification steps server-side, and update the audit trail with a complete provenance chain. Provide robust error handling, partial-failure recovery, and observability (metrics, logs) to ensure reliable, exactly-once recording of approvals. Guarantees authoritative, verifiable entries in PlanPulse’s audit ledger without double-commits.

Acceptance Criteria
Offline Retry Yields Single Ledger Entry
Given an offline approval payload with deterministic request_id R and content hash H When the client submits the identical payload multiple times due to retries or concurrent sends Then the audit ledger contains exactly one entry for H, the first successful attempt returns 201 Created with ledger_entry_id E, and all subsequent identical attempts return 200 OK with idempotency_replayed=true and the same E Given two submissions with different request_id values but identical content hash H within a 7-day deduplication window When processed Then only one ledger entry exists for H, the later submission returns 200 OK with deduplicated=true and the same ledger_entry_id E; if any field contributing to H differs, a new entry is created Given concurrent identical submissions across processes or devices When processed under load Then no duplicate ledger entries are created and database uniqueness constraints on H (and/or E) prevent double-commits
Deterministic Request ID Validation
Given a received approval payload P with request_id R When the server recomputes the deterministic request_id R' from the canonicalized payload P Then R must equal R' and match the required 64-hex format; otherwise the server responds 422 Unprocessable Entity with error_code=request_id_mismatch and no ledger write occurs Given a request_id that violates format (length/charset) When submitted Then the server responds 400 Bad Request with error_code=invalid_request_id_format and no ledger write occurs Given a valid request with R == R' When accepted Then the response echoes request_id R and includes ledger_entry_id E upon creation or replay
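A sketch of the server-side validation above, assuming the deterministic request_id is defined as the SHA-256 of the canonicalized payload (function names and the exact status mapping are illustrative):

```python
import hashlib
import json
import re

def derive_request_id(payload: dict) -> str:
    """Recompute R' from the canonicalized payload; 64 lowercase hex chars."""
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def validate_request(payload: dict, request_id: str):
    """Return (HTTP status, error_code or 'ok'); no ledger write occurs on error."""
    if not re.fullmatch(r"[0-9a-f]{64}", request_id):
        return 400, "invalid_request_id_format"
    if request_id != derive_request_id(payload):
        return 422, "request_id_mismatch"
    return 200, "ok"
```

Checking format before recomputation lets the server reject malformed IDs cheaply, without canonicalizing the payload.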
Server-side Device Attestation and Signature Verification
Given an approval payload containing a biometric-gated signature S over the canonical payload and a device attestation token T When the server validates the request Then the signature S verifies against the registered public key, T is verified to a trusted root, bound to the app and device, and is fresh (issued within the last 10 minutes); on success the ledger entry is written with verification_status=verified and attestation_summary recorded Given an invalid or mismatched signature S When submitted Then the server responds 422 Unprocessable Entity with error_code=signature_invalid and no ledger write occurs Given missing, expired, or invalid device attestation T When submitted Then the server responds 401/403 with error_code in {attestation_missing, attestation_expired, attestation_invalid} and no ledger write occurs
Version Drift Detection and Recheck Trigger
Given an approval referencing sheet_id S at version_hash V1 When connectivity is restored and the server compares V1 to the current head version V2 for S Then if V1 != V2 the server writes the ledger entry against V1 with drift_detected=true, creates a recheck task for the approver, emits a notification event, and the API response includes recheck_required=true and head_version_hash=V2; if V1 == V2, drift_detected=false and status=approved Given a drift-detected entry When the approver completes the recheck Then the ledger records a linked recheck action referencing the original approval entry and the current version hash
Batch Sync Partial-Failure Recovery
Given a batch of N offline approvals with deterministic request_ids When the batch is submitted after connectivity restoration Then each item is processed independently with per-item status; items successfully persisted return 201 Created (or 200 idempotent on replay), failing items return 4xx/5xx with machine-readable error_code and retryable flag Given a network interruption during response streaming When the client retries the same batch with the same request_ids Then only items not previously persisted are processed; already persisted items return 200 OK with idempotency_replayed=true and the same ledger_entry_id Given a server crash mid-batch When the client retries with the same request_ids Then the final ledger state contains exactly one entry per approval, with no duplicates
Observability for Sync: Metrics, Logs, Traces
Given sync requests are processed When observing the system Then metrics expose counters {approvals_submitted_total, approvals_created_total, approvals_deduplicated_total, approvals_failed_total} labeled by outcome and reason, and histograms {sync_latency_seconds, verification_latency_seconds}; a /metrics endpoint (or equivalent) returns these for scraping Given any sync attempt When logging Then structured logs include request_id, ledger_entry_id (if assigned), user_id_hash, device_id_hash, version_hash, outcome, error_code, and latency_ms, with no raw PII; logs correlate via a correlation_id/trace_id across services Given distributed tracing is enabled When a sync occurs Then a trace spans API, attestation verification, signature verification, deduplication, and persistence with step durations and outcome annotations
Provenance Chain Completeness and Verifiability
Given a successful approval commit When retrieving the ledger entry E via the audit API or export Then E includes request_id, ledger_entry_id, user_identity_ref, device_attestation_ref, signature_algorithm, signature_digest, sheet_id, version_hash, timestamp, previous_version_hash (if applicable), drift_detected flag, and entry_hash; entry_hash equals the SHA-256 of the canonical entry content Given a project-level audit export When a verifier recomputes entry_hashes and follows previous_version_hash links Then the entire provenance chain validates without gaps or hash mismatches; any tampering produces a verification failure that identifies the first inconsistent entry Given an entry is updated only to append verification metadata When re-exported Then the original approval content hash remains unchanged and the append-only metadata is clearly separated to preserve immutability guarantees
Quick Recheck Diff & Reconfirm
"As an approver, I want a quick way to see what changed and reconfirm or revoke my approval so that decisions stay accurate without redoing the whole review."
Description

When version drift is detected, prompt the approver with a focused recheck flow that highlights changes via a visual diff (markups, geometry, metadata) and offers one-tap Reconfirm or Revoke. Notify impacted stakeholders, record outcomes in the audit trail, and update approval statuses across the workspace. Include time-boxed reminders and escalation rules. Restores trust by ensuring approvals reflect the current sheet content without restarting lengthy review cycles.

Acceptance Criteria
Detect Version Drift and Launch Recheck Prompt
Given the approver applied a SignSafe biometric approval to sheet S at version hash H_offline while offline And a newer committed version with hash H_current exists for sheet S when the device reconnects When the system detects H_offline != H_current Then sheet S approval state is set to "Recheck Required" immediately And the approver is shown a blocking recheck modal within 2 seconds of detection And the modal displays the previously approved hash, the current hash, timestamps for both, and a link to open the visual diff And other users do not see an "Approved" badge for sheet S until the recheck is completed
Visual Diff Highlights All Changes
Given a recheck is required for sheet S where H_offline != H_current When the approver opens the visual diff Then the diff renders within 3 seconds for sheets up to 20 MB at 95th percentile And added/removed/modified markups are color-coded and listed with authors and timestamps And geometry changes are outlined with overlays and a changes count is shown And metadata changes are listed with field-level old→new values And layer toggles (Markups, Geometry, Metadata) are available and on by default And unchanged regions are dimmed And the set of reported changes matches the version-control delta for S with 100% accuracy on test fixtures
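The added/removed/modified classification above is a straightforward set comparison once markups are keyed by stable ID; a minimal sketch (rendering, authorship, and overlays omitted):

```python
def markup_diff(before: dict, after: dict) -> dict:
    """Classify markups (keyed by ID) between two versions.

    Modified means the ID survives but any attribute (geometry, style,
    visibility) changed.
    """
    return {
        "added": [k for k in after if k not in before],
        "removed": [k for k in before if k not in after],
        "modified": [k for k in before if k in after and before[k] != after[k]],
    }
```

Driving the diff view from the same version-control delta used for hashing is what lets the reported changes match it exactly.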
One-Tap Reconfirm or Revoke Actions
Given the recheck modal is open for sheet S at H_current When the approver taps Reconfirm and passes biometric verification (or PIN fallback per policy) Then the approval is rebound to H_current, status becomes "Approved", and completion occurs within 1 second after verification (p90) And watchers of sheet S are notified and the modal closes When the approver taps Revoke and confirms (with optional reason) Then the prior approval is invalidated, status becomes "Revoked", the reason (if provided) is stored, and watchers are notified And both actions are idempotent if retried within 60 seconds
Stakeholder Notifications on Outcome
Given a recheck outcome (Reconfirm or Revoke) is finalized for sheet S When the outcome is recorded Then notifications are sent to impacted stakeholders (project owner, sheet assignees, prior approvers, watchers) via in-app, email, and push (if enabled) And each notification includes sheet name/ID, action, actor, timestamp, version hash bound, and a link to the diff or sheet And deliveries succeed within 60 seconds (p95) with 3 retry attempts and exponential backoff And notification failures are logged to the audit ledger
Audit Trail Entry for Recheck and Outcome
Given version drift was detected for sheet S When the recheck flow is initiated and completed Then the audit ledger contains immutable entries for: drift detection, recheck modal shown, diff viewed, action taken (Reconfirm/Revoke), biometric verification result (pass/fail), actor identity, device ID/IP, previous and current hashes, timestamps, and optional revoke reason And entries are time-ordered, tamper-evident, and queryable by sheet ID and approval ID And if connectivity is lost during recording, entries are queued and committed within 10 seconds of reconnection
Time-Boxed Reminders and Escalation
Given sheet S is in "Recheck Required" state When no approver action occurs within 24 hours (org-configurable) Then a reminder is sent to the approver via in-app and email When no action occurs within 48 hours (org-configurable) Then the item is escalated to the project owner and backup approver, and an "Escalated" flag appears on S And reminders stop once the approver acts or the approval is reassigned And the approver may Snooze reminders for 4 hours up to 2 times, with all snoozes recorded in the audit ledger
Approval Status Propagation Across Workspace
Given a recheck outcome is finalized for sheet S When the status changes to Approved or Revoked Then the status and badge update within 2 seconds across the sheet header, approvals panel, project overview dashboard, and client view (if shared) for active web sessions And subscribed clients receive real-time updates within 10 seconds; polling clients reflect changes within 30 seconds And a single idempotent event is emitted to prevent duplicate updates, and stale caches are invalidated
WebAuthn Biometric Support & Fallback
"As a cross-device user, I want to use my device’s native biometrics to sign approvals with a secure fallback so that I can approve reliably wherever I work."
Description

Implement WebAuthn/FIDO2-based authentication to leverage platform biometrics (Face ID, Touch ID, Android BiometricPrompt, Windows Hello) for gating offline approval signing. Provide enrollment, consent, and recovery flows, and define policy-driven fallbacks (e.g., passkey or device PIN) when biometrics are unavailable, while maintaining security posture and clear UX messaging. Ensure cross-browser/device compatibility and accessibility. Establishes a standards-based, privacy-preserving identity layer that never transmits biometric data off-device.

Acceptance Criteria
Platform Biometric Enrollment via WebAuthn
Given a signed-in user on a supported device and browser When the user initiates “Enable Biometric Approvals” Then the app displays clear consent stating biometrics remain on-device and only a WebAuthn credential is created And when the user consents, navigator.credentials.create() is called with publicKey using authenticatorAttachment="platform", userVerification="required", and attestation per policy And then a credential is created and the credentialId and public key are stored server-side bound to the user; no biometric data or templates are transmitted or stored And then the UI confirms enrollment success within 2 seconds and offers a test authentication And if creation fails, a mapped, actionable error is shown and the user can retry without losing session
Offline Approval Signing Gated by Biometrics
Given the device is offline, the user has an enrolled platform credential, and a drawing version hash H is selected When the user taps Approve Then navigator.credentials.get() is invoked with userVerification="required" using the platform authenticator And on success, an approval payload {versionHash: H, timestamp, clientDataJSON, authenticatorData, signature, credentialId} is created and stored locally encrypted via the OS keystore with status "pending-sync" And if verification fails or is canceled, no payload is stored and a non-blocking error is shown And after 5 consecutive failed attempts, the approve action is disabled for 5 minutes
Online Commit to Audit Ledger
Given one or more pending-sync approvals exist and network connectivity is restored When background sync runs or the user triggers Sync Then each approval is submitted idempotently to the audit ledger API within 10 seconds using an idempotency key And the server verifies the WebAuthn assertion per spec and binds the approval to the version hash And on success, the local record updates to status "committed" and stores the audit record ID And on transient error, retries use exponential backoff up to 6 attempts; on permanent error, the record is marked "failed" with a reason code and a user retry option
Version Drift Detection and Recheck Prompt
Given a pending approval exists for sheet S with version hash H1 When the current server version for S is H2 and H2 != H1 at sync time Then the pending approval is not committed and is marked "drifted" And the user is prompted to recheck S@H2 with a one-tap Re-approve flow And if the user re-approves successfully, a new approval for H2 is committed and the H1 approval is marked "superseded" And the audit trail records drift detection, recheck prompt, and supersession events
Policy-Driven Fallback When Biometrics Unavailable
Given biometrics are unavailable or the platform authenticator reports userVerificationUnavailable/lockout When the user attempts to approve Then the system evaluates fallbacks in policy order (e.g., passkey on a cross-platform authenticator, platform PIN) And if an allowed fallback is available, a WebAuthn assertion with userVerification="required" gates the approval exactly as with biometrics And if all fallbacks are disallowed or unavailable, the approval is blocked with clear UX messaging and guidance And all fallbacks inherit the same rate limits and lockouts as biometric flows
Credential Recovery and Revocation
Given a user has lost access to a device or wants to rotate authenticators When the user initiates recovery from a verified session or recovery channel per policy Then the user can register at least one new WebAuthn platform credential with userVerification="required" And the user can view and revoke existing credential IDs; revocation is immediate and prevents future approvals from those credentials And any offline pending approvals signed with a revoked credential are rejected at sync and shown as "revoked" And recovery and revocation events are appended to the audit trail
Compatibility & Accessibility Compliance
Given supported environments (latest stable Safari, Chrome, Edge, Firefox on macOS, iOS/iPadOS 16+, Android 10+, Windows 10/11) When executing enrollment, approval (online/offline), fallback, and sync test suites Then 100% of critical-path tests pass and at least 95% of all test cases pass across the matrix And unsupported combinations are detected at runtime with clear guidance and no silent failures And all flows are keyboard navigable, screen-reader labeled (ARIA), maintain logical focus order, and announce errors And visual elements meet WCAG 2.1 AA contrast requirements, and any time limits give the user at least 20 seconds or can be extended

Conflict Stitcher

If multiple teammates edit the same area offline, get a side-by-side overlay to choose theirs, yours, or a blended merge with rationale notes. Preserves attribution and queues a clean, reviewable resolution for cloud commit—no email firefights.

Requirements

Offline Conflict Detection
"As a project lead, I want automatic detection of overlapping offline edits so that I can quickly see exactly where conflicts exist without manual file comparisons."
Description

Detect and flag overlapping edits made while offline by comparing edit journals against the last synced baseline, identifying spatial overlaps and semantic collisions across vector markups, annotations, and attached files. Maintain a per-session conflict set with object-level granularity (IDs, bounding boxes, layers) and capture the offline sequence of operations to support deterministic reconciliation. Integrates with PlanPulse’s versioned markup model to scope conflicts to the current workspace and drawing, and prepares metadata for downstream compare/merge steps.

Acceptance Criteria
Detect conflicts on reconnect within scoped drawing
Given a user has offline edits in workspace W and drawing D and the cloud has new commits since the user’s last sync When the device reconnects or the user selects "Check Conflicts" Then the system diffs the offline edit journal against the last synced baseline of workspace W, drawing D And limits detection strictly to objects in drawing D and its layers And emits a conflict set only if overlaps or collisions are found And records a zero-conflict result with timestamp and journal hash if none are found
Spatial overlap detection for vector markups
Given two or more offline edits produce vector objects whose axis-aligned bounding boxes intersect with IoU ≥ T (default T=0.05) When conflict detection runs Then a conflict entry is created per overlapping object pair with fields: localObjectId, remoteObjectId, layerId(s), bboxLocal, bboxRemote, overlapArea, IoU, editTypes And edits with IoU < T are not flagged as spatial conflicts
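The IoU test above is a standard intersection-over-union computation on axis-aligned bounding boxes. A minimal sketch (box field names are illustrative):

```typescript
interface BBox { x0: number; y0: number; x1: number; y1: number; }

// Intersection-over-union of two axis-aligned bounding boxes.
function iou(a: BBox, b: BBox): number {
  const ix = Math.max(0, Math.min(a.x1, b.x1) - Math.max(a.x0, b.x0));
  const iy = Math.max(0, Math.min(a.y1, b.y1) - Math.max(a.y0, b.y0));
  const inter = ix * iy;
  const areaA = (a.x1 - a.x0) * (a.y1 - a.y0);
  const areaB = (b.x1 - b.x0) * (b.y1 - b.y0);
  const union = areaA + areaB - inter;
  return union > 0 ? inter / union : 0;
}

// Flag a spatial conflict when IoU meets or exceeds the threshold T
// (default 0.05 per the criterion).
function isSpatialConflict(a: BBox, b: BBox, t = 0.05): boolean {
  return iou(a, b) >= t;
}
```

The low default threshold means even a 5% overlap is surfaced for review; per-pair metadata (overlap area, IoU, edit types) would be attached when `isSpatialConflict` returns true.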
Semantic collision detection on the same object or attachment
Given the same objectId (or attachmentId) is present in both the local journal and remote delta since baseline When both modify at least one of the same properties (e.g., geometry, style, text, layer assignment, attachment metadata) or one deletes while the other modifies Then a semantic collision is recorded with type in {UPDATE_UPDATE, UPDATE_DELETE, DELETE_UPDATE, MOVE_MOVE, ATTACHMENT_CONFLICT, ANNOTATION_TEXT_CONFLICT} And the entry includes propertyPathsInConflict, valuesLocal, valuesRemote, and baselineValues And re-creations with new IDs are only mapped as collisions if hashMatchScore ≥ 0.9 (configurable)
Per-session conflict set granularity and persistence
Given conflict detection generates results When the conflict set is persisted for the session Then each conflict includes at minimum: conflictId(UUIDv4), objectId(s), layerId(s), bbox(es), editorUserId(s), sessionId, localOpSeqRange, remoteOpSeqRange, opTimestamps(ISO-8601), editType(s), collisionType, baselineVersionId, drawingId, workspaceId And the conflict set is written atomically to local storage and is replayable from the journal to reproduce identical results
Deterministic ordering and journal sequence capture
Given identical baseline, journals, and configuration When conflict detection executes multiple times on the same or different devices Then the produced conflict list order is deterministic using the sort key (drawingId, layerId, min(objectId), min(opTimestamp), conflictId) And ties are broken by lexicographic order of conflictId And operation sequence numbers are contiguous and preserved per editor in the conflict metadata
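The sort key above can be sketched as a comparator. One assumption worth noting: comparing `min(opTimestamp)` lexicographically is only equivalent to chronological order if timestamps are normalized ISO-8601 UTC with fixed precision, which the conflict-set criterion implies:

```typescript
interface Conflict {
  conflictId: string;        // UUIDv4, final tiebreaker
  drawingId: string;
  layerId: string;
  objectIds: string[];
  opTimestamps: string[];    // ISO-8601 UTC, so string order == time order
}

// Sort key per the criterion:
// (drawingId, layerId, min(objectId), min(opTimestamp), conflictId)
function sortKey(c: Conflict): [string, string, string, string, string] {
  return [
    c.drawingId,
    c.layerId,
    [...c.objectIds].sort()[0] ?? "",
    [...c.opTimestamps].sort()[0] ?? "",
    c.conflictId,
  ];
}

function compareConflicts(a: Conflict, b: Conflict): number {
  const ka = sortKey(a);
  const kb = sortKey(b);
  for (let i = 0; i < ka.length; i++) {
    if (ka[i] < kb[i]) return -1;
    if (ka[i] > kb[i]) return 1;
  }
  return 0; // identical keys imply the same conflictId, i.e., the same conflict
}
```

Because every component of the key is derived from persisted fields, sorting with this comparator on any device over the same conflict set yields the same order, which is what makes the detection run replayable.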
Prepare merge-ready metadata payloads
Given conflicts are found When preparing data for downstream compare/merge Then each conflict provides a payload containing: overlayRegion(bbox in drawing coordinates), snapshotLocal, snapshotRemote, baselineSnapshot, selectableOptions{yours,theirs,blend}, rationaleNotesPlaceholder, attribution(names,avatars) And each payload is queued for review with a stable deeplink URL containing conflictId
Performance and resource thresholds at scale
Given a drawing with up to 50 layers, 5,000 local offline edits, and 2,000 remote edits since baseline When conflict detection runs on the QA reference machine (≥8-core CPU, 16 GB RAM) Then 95th percentile detection latency is ≤ 2.0 seconds and peak additional memory usage is ≤ 200 MB And persistence of the conflict set completes in ≤ 200 ms And a subsequent incremental run with no new edits completes in ≤ 800 ms by reusing indexes
Side-by-side Overlay Compare
"As a designer, I want a clear visual compare of conflicting areas so that I can understand differences quickly and make the right merge choice."
Description

Provide a dual-pane compare mode with synchronized pan/zoom and an overlay slider to visually contrast "mine," "theirs," and the baseline. Highlight changed regions, show per-object deltas on hover, and allow layer toggles for precise inspection. Include keyboard shortcuts, minimap navigation, and pixel/geometry alignment aids to ensure accurate, fast evaluation of conflicts on large drawings.

Acceptance Criteria
Dual-Pane Synchronized Pan/Zoom
Given compare mode is active with left="mine" and right="theirs" When the user pans in one pane via mouse drag, trackpad, or arrow keys Then the other pane matches the same center within 50 ms and within ±2 px deviation Given compare mode is active When the user zooms in one pane via pinch, mouse wheel, or +/- keys Then the other pane matches zoom within ±0.5% and keeps the cursor focal point aligned Given sync lock is toggled off When the user pans/zooms a pane Then the other pane remains unchanged until sync is re-enabled Given the user holds Alt/Option while navigating When panning/zooming Then temporary unsynced navigation occurs; releasing Alt restores sync Given the user clicks Reset View Then both panes return to the last shared fit-to-extent with identical zoom and alignment
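Keeping the cursor focal point aligned during a mirrored zoom is a small coordinate calculation. With the usual screen mapping `screen = world * zoom + pan`, solving for the new pan gives the sketch below (the `View` shape is illustrative, not PlanPulse's actual viewport model):

```typescript
interface View { zoom: number; panX: number; panY: number; }

// Zoom while keeping the world point under the cursor fixed on screen.
// Applying the same result to the mirrored pane keeps both panes aligned.
function zoomAboutCursor(
  v: View,
  newZoom: number,
  cursorX: number,
  cursorY: number,
): View {
  // World point currently under the cursor: world = (screen - pan) / zoom
  const wx = (cursorX - v.panX) / v.zoom;
  const wy = (cursorY - v.panY) / v.zoom;
  // Choose the new pan so that world point maps back to the cursor position.
  return {
    zoom: newZoom,
    panX: cursorX - wx * newZoom,
    panY: cursorY - wy * newZoom,
  };
}
```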
Overlay Slider: Mine/Theirs/Baseline Blending
Given a compare pair is selected (mine vs baseline | theirs vs baseline | mine vs theirs) When the user drags the overlay slider Then blending updates smoothly at ≥60 FPS and the slider value reflects 0–100% Given keyboard focus is on the slider When the user presses '[' or ']' Then the slider decreases/increases by 5% (Shift modifies step to 1%) Given the user selects a compare pair via UI or keys '1','2','3' Then labels, legends, and color keys update within 100 ms and the slider position is preserved Given the slider is at 0% or 100% Then only one source is visible and the active source is clearly labeled Given accessibility support is required When using a screen reader Then the slider exposes ARIA role=slider with name "Overlay blend" and announces value changes
Change Highlights and Per-Object Deltas on Hover
Given a compare pair is active and diff computation completes Then changed regions render with legend: additions=green, deletions=red, modifications=amber Given the user hovers a changed object Then a tooltip appears within 150 ms showing object name/ID, author, timestamp, change type, and numeric delta (e.g., area %, length delta, color change) Given the user clicks a changed region Then a side panel opens with detailed attributes and rationale notes field; ESC closes and focus returns Given layer filters are applied Then highlights respect visible layers; hidden layers’ changes are not shown Given there are no changes in the current view Then a non-intrusive "No differences detected" notice appears and no highlights render Performance: For drawings up to 30,000 vector objects, initial diff visual highlights appear within 2 seconds on reference hardware
Layer Toggles and Linked Visibility
Given the layer panel is open When a layer visibility checkbox is toggled Then the layer hides/shows in both panes when Link Visibility is ON; when OFF only the active pane changes Given the user selects "Show changed layers only" Then only layers containing detected changes remain visible Given the user locks a layer Then that layer cannot be toggled until unlocked; its lock state is indicated and persists during the compare session Given the user reorders layer stacking Then both panes reflect the new stacking order consistently Given persistence rules Then layer visibility/lock/filter states persist within the compare session and reset on exit unless saved as a view preset
Minimap Navigation and Viewport Indicators
Given the minimap is visible Then it displays full drawing extent with two viewport rectangles and updates within 100 ms of any pan/zoom Given the user clicks or drags in the minimap while sync lock is ON Then both panes recenter to the target location; while OFF only the active pane recenters Given the user activates Fit to Extent from the minimap Then both panes fit the entire drawing with identical zoom and alignment Given the user presses 'M' Then the minimap toggles visibility and its last state persists during the compare session
Keyboard Shortcuts for Compare Controls
Given focus is not in a text input When the user presses shortcuts Then the following occur within 100 ms and show a transient hint: '['/']' overlay -5%/+5% (Shift=±1%); '1' mine vs baseline; '2' theirs vs baseline; '3' mine vs theirs; Arrow keys pan 50 px (Shift=200 px); '+'/'-' zoom ±10%; 'F' fit; 'H' toggle highlights; 'L' toggle layer panel; 'M' toggle minimap; 'Esc' exit compare mode; '?' open shortcut help Given focus is inside a text or number input Then shortcuts (except Esc) do not trigger actions Given the help overlay is opened with '?' Then all shortcuts are listed with descriptions and are keyboard/screen-reader accessible
Alignment Aids: Rulers, Guides, and Auto-Registration
Given alignment aids are enabled Then rulers render along top/left with project units, a crosshair tracks coordinates, and a snap grid toggles with 'G' Given the user invokes Auto-align Then a non-destructive transform (translation/rotation) is computed to register sources and applied within 1 second on reference drawings; an RMS error in pixels is displayed along with Apply and Reset controls Given Difference Mode is toggled with 'D' Then a high-contrast difference shader is applied to accentuate misalignment without altering source data Given the user adjusts manual alignment offsets Then offsets change in 1 px and 0.1° increments, are shown numerically, and Reset Alignment returns offsets to zero Given alignment is active Then an "Aligned" badge with current error metric is visible and alignment remains session-scoped without modifying underlying drawings
Per-Region Merge Choices
"As a collaborator, I want to choose per-area whether my changes, their changes, or a blend applies so that the final drawing reflects the best of both inputs."
Description

Enable granular resolution of each conflict with one-click choices of Yours, Theirs, or Blend at the region, object, or annotation level. Support batch selection across similar conflicts, immediate preview of the outcome, and the ability to undo/redo before finalizing. Queue unresolved items, enforce completion checks, and provide tooltips explaining the implications of each choice to reduce errors and rework.

Acceptance Criteria
One-Click Per-Region Choice (Yours/Theirs/Blend)
Given a conflict overlay is open with region, object, or annotation-level conflicts When the user selects a conflicted item and clicks Yours or Theirs Then the preview updates that item to the chosen version, the item is marked Resolved, and the resolution entry records item ID, choice, user, and timestamp And when the user clicks Blend Then the system applies the defined blend rules to that item in the preview, marks it Resolved, and records item ID, choice=Blend, user, and timestamp
Batch Resolve Similar Conflicts
Given multiple conflicts are detected and the user opens Similar Conflicts filter When the user selects a set of similar conflicts (n items) and applies a single choice (Yours/Theirs/Blend) Then the preview updates all n selected items accordingly, all are marked Resolved, and the batch action records count, item IDs, choice, user, and timestamp And when the user deselects specific items before applying Then only the remaining selected items are updated and marked Resolved
Immediate Visual Preview Before Commit
Given the conflict overlay is active When the user makes a merge choice on any conflicted item Then the merged result is displayed immediately in the preview layer with a diff legend, without altering the base drawing data And the user can toggle Preview On/Off to compare before/after for the current item And no changes are persisted to cloud storage until the user finalizes the merge
Pre-Commit Undo/Redo for Merge Decisions
Given the user has performed one or more merge choices in the current session When the user clicks Undo Then the last merge decision is reverted in the preview and the item status returns to Unresolved (or prior state) And when the user clicks Redo Then the reverted decision is reapplied And after Finalize Merge Then the undo/redo history for that session is cleared
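The undo/redo behavior above is the classic two-stack pattern: decisions push onto an undo stack, Undo moves the latest onto a redo stack, any fresh decision clears redo, and Finalize clears both. A sketch with an illustrative `Decision` shape:

```typescript
interface Decision { itemId: string; choice: "yours" | "theirs" | "blend"; }

class MergeHistory {
  private undoStack: Decision[] = [];
  private redoStack: Decision[] = [];

  record(d: Decision): void {
    this.undoStack.push(d);
    this.redoStack = [];            // a new decision invalidates redo history
  }

  undo(): Decision | undefined {
    const d = this.undoStack.pop();
    if (d) this.redoStack.push(d);
    return d;                        // caller reverts this decision in preview
  }

  redo(): Decision | undefined {
    const d = this.redoStack.pop();
    if (d) this.undoStack.push(d);
    return d;                        // caller reapplies this decision
  }

  finalize(): void {
    this.undoStack = [];
    this.redoStack = [];             // history cleared after Finalize Merge
  }
}
```

The caller is responsible for the side effects the criterion describes: on `undo()`, returning the item to Unresolved (or its prior state) in the preview layer.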
Completion Check and Unresolved Queue
Given there are unresolved conflicted items remaining When the user attempts to finalize the merge Then the system blocks finalization, displays the unresolved count, and opens the Unresolved Queue panel listing each item with type (region/object/annotation) And when all items in the queue are resolved Then the Finalize action becomes enabled And if the user exits the overlay with unresolved items Then the queue state persists for the next session
Decision Tooltips with Implications and Attribution
Given the conflict resolution toolbar is visible When the user focuses or hovers on Yours, Theirs, or Blend Then a tooltip appears describing the action, the impacted element count, and attribution implications, and is accessible via keyboard (Tab/Shift+Tab) and screen readers And when the user clicks elsewhere or presses Escape Then the tooltip dismisses
Blend Requires Rationale and Preserves Attribution
Given a conflicted item is selected When the user chooses Blend Then the system prompts for a required rationale note before marking the item Resolved And upon confirmation, the resolution stores the rationale text and preserves attribution metadata for contributing authors on the merged item And the review queue entry displays the choice=Blend, rationale, and attribution details
Blended Merge & Rationale Notes
"As a project lead, I want to blend compatible changes and record why choices were made so that the team has context for future reviews and avoids repeat conflicts."
Description

Offer a blended merge path that intelligently combines compatible edits (e.g., unioning vector geometry, layering annotations, reconciling text edits) while preventing unsafe merges on incompatible changes. Require or optionally capture rationale notes for each blended decision, attaching them to the resolution record. Present authorship badges on blended outputs and show a before/after diff to confirm intent.

Acceptance Criteria
Blended union of overlapping vector geometry
Given two overlapping vector shapes on the same layer edited offline by two different authors And both edits are topologically compatible (no delete-vs-modify conflict, same coordinate system, same units) When the user selects Blended Merge in the Conflict Stitcher Then the preview renders a unioned geometry result within 2 seconds And per-segment authorship badges are visible on hover for the resulting shape And a rationale note field is displayed and is optional when org policy is "Optional" And on Confirm, the note (if entered) is attached to the resolution record with author, timestamp, and affected object IDs And a before/after diff is shown highlighting the merged area, then the resolution is queued for cloud commit
Layered annotation blend with attribution
Given two or more annotation objects overlapping the same region edited by multiple authors And annotations are of compatible types (e.g., callouts, arrows, dimension markers) When the user selects Blended Merge Then annotations are layered according to project z-order rules And exact-duplicate annotations are deduplicated And each retained annotation shows an authorship badge stack with up to 3 avatars and a "+N" indicator when more And the preview appears within 2 seconds and the before/after diff highlights only added/removed/relayered annotations And on Confirm, the user can add a rationale note which is attached to the resolution record
Text block reconciliation with inline conflict choices
Given the same text block was edited offline by two authors When the user selects Blended Merge Then non-overlapping changes are merged automatically at word level And overlapping conflicts are presented inline with choices (Theirs, Yours, or Custom edit) And the user must resolve all inline conflicts before Confirm is enabled And the before/after diff displays character-level additions and deletions And the resolution record stores the chosen options and optional rationale note
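The word-level merge rules above can be illustrated with a deliberately simplified three-way merge. This positional sketch only handles edits that keep the word count unchanged; a production reconciler would first align the three texts with a diff algorithm (e.g., Myers diff). It shows the core rules: take whichever side changed a word, and surface an inline conflict when both sides changed the same word differently:

```typescript
type MergedWord =
  | { word: string }
  | { conflict: { yours: string; theirs: string } }; // user picks inline

// Simplified positional three-way merge at word level (equal-length inputs
// only; an assumption of this sketch, not of the feature).
function mergeWords(base: string[], yours: string[], theirs: string[]): MergedWord[] {
  if (base.length !== yours.length || base.length !== theirs.length) {
    throw new Error("positional sketch requires equal word counts");
  }
  return base.map((b, i) => {
    const y = yours[i];
    const t = theirs[i];
    if (y === t) return { word: y };                  // agreement or unchanged
    if (y === b) return { word: t };                  // only theirs changed
    if (t === b) return { word: y };                  // only yours changed
    return { conflict: { yours: y, theirs: t } };     // both changed: resolve inline
  });
}
```

Per the criterion, Confirm stays disabled until every `conflict` entry has been resolved to Theirs, Yours, or a custom edit.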
Unsafe blended merge prevention with reasons
Given conflicting edits are incompatible (e.g., one author deleted an object while another modified it; edits span locked layers; or unit systems differ) When the user attempts a Blended Merge Then the system blocks confirmation with an "Unsafe merge" banner listing all reasons And only "Choose Theirs" and "Choose Yours" remain selectable And the rationale note field remains available, but blended confirmation is disabled And the resolution record logs the prevented blended attempt with enumerated reasons
Rationale note enforcement policy
Given org policy "Rationale Notes" is set to Required for blended merges When the user clicks Confirm on a blended decision Then the Confirm action is disabled until a rationale note of 10–1000 characters is entered And empty or whitespace-only notes are rejected with inline validation And the saved note is immutable post-commit and visible in the resolution timeline and audit export
Before/after diff confirmation and fidelity
Given a blended merge preview is displayed When the user clicks Confirm Then a side-by-side before/after and an overlay diff are shown with a "Looks correct" checkbox And Confirm is disabled until the user checks "Looks correct" And the diff highlights 100% of modified objects and 0% of unchanged objects And the resolution snapshot (images plus change list) is attached to the resolution record
Authorship badges persistence through commit
Given blended output includes elements contributed by multiple authors When the resolution is queued and later committed to cloud Then authorship badges persist on the committed objects And hovering a badge reveals all contributors and their contributions And clicking a badge opens the attached rationale note(s) for that blended decision
Attribution Preservation & Audit Log
"As a firm owner, I want clear attribution and an audit of conflict resolutions so that accountability is preserved and client reviews are fully traceable."
Description

Persist authorship, timestamps, and decision provenance through the resolution workflow, including co-authorship on blended items. Generate an immutable audit trail that details each conflict, the selected outcome, the resolver, rationale notes, and any reversions. Expose a shareable, read-only report linked to the resulting version to support client transparency and internal QA.

Acceptance Criteria
Attribution Persistence Across Resolution Types
Given a conflict with candidate edits from User U1 at time T1 and User U2 at time T2 When Resolver R selects "Yours" (where the resolver R is U1, choosing U1's version) Then the committed item attribution contains exactly [U1] with authoredAt=T1 in ISO 8601 UTC and resolverId=R recorded in decisionProvenance When Resolver R selects "Theirs" choosing U2's version Then the committed item attribution contains exactly [U2] with authoredAt=T2 in ISO 8601 UTC and resolverId=R recorded in decisionProvenance When Resolver R selects "Blend" combining U1 and U2 Then the committed item attribution lists coAuthors=[U1, U2] with each contribution timestamp preserved (T1, T2), and decisionProvenance.selectionType="Blend" And all attribution records persist unchanged through queue-to-commit and after cloud commit retrieval via UI and API And userIds are immutable stable identifiers and all timestamps are normalized to UTC
Immutable Audit Trail Entry on Resolution
Given any conflict resolution is saved (Yours, Theirs, Blend) When the save completes Then an audit entry is appended with fields: conflictId, versionId, selectionType, resolverId, resolvedAt (ISO 8601 UTC), rationale (string or null), selectedCandidateIds, resultingItemHash (SHA-256), priorCandidateHashes[], attributionSnapshot And the audit store is append-only: attempts to edit an entry result in a new correction entry referencing the original (original remains readable) And each entry includes entryId and prevEntryHash scoped to the version to form a tamper-evident chain; recomputing the chain verifies integrity And the audit entry is visible in UI and retrievable via API within 2 seconds of save And for selectionType="Blend" the rationale is required and must be non-empty (>=5 characters)
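The tamper-evident chain in this criterion works like a hash-linked log: each entry stores the hash of its predecessor, so recomputing the chain from the first entry detects any alteration. A sketch with the field set trimmed to what the chain itself needs (the full audit payload would be serialized into `payload` as canonical JSON):

```typescript
import { createHash } from "node:crypto";

interface AuditEntry {
  entryId: string;
  payload: string;        // canonical JSON of the audit fields
  prevEntryHash: string;  // "" for the first entry in a version's chain
}

function entryHash(e: AuditEntry): string {
  return createHash("sha256")
    .update(e.entryId)
    .update(e.payload)
    .update(e.prevEntryHash)
    .digest("hex");
}

// Append-only: new entries link to the hash of the current tail.
function append(chain: AuditEntry[], entryId: string, payload: string): AuditEntry {
  const prev = chain.length > 0 ? entryHash(chain[chain.length - 1]) : "";
  const entry: AuditEntry = { entryId, payload, prevEntryHash: prev };
  chain.push(entry);
  return entry;
}

// Recompute every link; any edited or reordered entry breaks verification.
function verifyChain(chain: AuditEntry[]): boolean {
  let prev = "";
  for (const e of chain) {
    if (e.prevEntryHash !== prev) return false;
    prev = entryHash(e);
  }
  return true;
}
```

Corrections are handled the same way the criterion describes: never by editing an entry (which would break the chain), but by appending a new entry that references the original.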
Reversion Capture and Chain of Custody
Given a previously committed resolution with audit entry E1 When a user with revert permission reverts that resolution Then a new audit entry E2 is appended with type="Revert", references E1.entryId, reverterId, revertedAt (ISO 8601 UTC), and revertReason (required, >=5 characters) And the system restores the prior item content and attribution snapshot exactly as recorded before E1 (verified by resultingItemHash matching the referenced prior state) And E1 is not deleted or modified; both E1 and E2 appear in chronological order in UI and API And subsequent reports for the affected version include both the original resolution and the revert
Shareable Read-Only Audit Report Linked to Version
Given a version V is cloud-committed When a project member generates a shareable audit report link Then a tokenized, read-only URL is created (a random token of at least 32 characters) that renders the audit report for V And the report includes: totalConflictsResolved, counts by selectionType, per-conflict details (conflictId, selected outcome, resolverId, resolvedAt, rationale, attribution), and a cryptographic summary hash of the report contents And no edit or delete actions are available in the view; all write attempts return HTTP 403 And the link can be revoked by a project member; after revocation, the URL returns HTTP 410 And the report can be exported as a PDF matching on-screen content and containing the same summary hash And initial render for up to 200 conflicts completes in <=5 seconds under median network conditions
Rationale Notes Capture and Validation
Given the conflict overlay resolution dialog is open When the resolver selects "Blend" Then the Rationale field is required, enforces 5–2000 characters, trims leading/trailing whitespace, and blocks save until valid When the resolver selects "Yours" or "Theirs" Then the Rationale field is optional; if provided, it is persisted and displayed in the audit report and API And rationale input is stored and rendered as plain text (no HTML/JS execution), preserving line breaks And rationale is stored with authorId and createdAt (ISO 8601 UTC) in decisionProvenance
Audit Log API: Query, Filtering, and Integrity
Given an authenticated user with view permissions When they request GET /versions/{versionId}/audit?resolverId=...&selectionType=...&from=...&to=...&page=1&limit=50 Then the API returns 200 with a paginated list of audit entries matching the filters, sorted by resolvedAt ascending And the response schema includes: entryId, conflictId, selectionType, resolverId, resolvedAt, rationale, resultingItemHash, prevEntryHash, attributionSnapshot And requests without permission return 403; a non-existent version returns 404; invalid filters return 400 with machine-readable error details And performance: for up to 1000 entries, the API responds in <=2 seconds at p95 And integrity: for each page, the server includes a pageHash computed over entry hashes; client recomputation matches the provided pageHash
Reviewable Commit Queue
"As a reviewer, I want a queued, summarized commit to review and approve so that only clean, vetted resolutions are merged into the project version history."
Description

After conflicts are resolved, stage a clean, review-ready commit draft summarizing changes, affected regions, authors, and notes. Allow reviewers to approve, request changes, or reopen specific conflicts. Commit atomically to the cloud workspace, trigger notifications, and handle retries for transient failures, ensuring no partial or duplicated merges enter the mainline.

Acceptance Criteria
Stage Review-Ready Commit Draft Post-Conflict Resolution
Given all identified conflicts for a drawing are marked Resolved And the user has Commit Draft permission for the workspace When the user clicks Stage Commit Then the system creates a draft with fields: draftId (UUID), baseVersion, changeSummary, affectedRegionIds[], authors[], rationaleNotes, createdAt And generates a versioned visual diff artifact linked to the draft And the draft appears in the Review Queue within 5 seconds And the draft status is Pending Review And repeated Stage Commit for the same resolution set returns the same draftId (idempotent) and does not create duplicates
Display Summarized Changes with Affected Regions and Attribution
Given a staged draft exists When a reviewer opens the draft Then the UI displays a change summary with total change count and a list of affectedRegionIds with visual overlays And each change item shows contributing author userIds and associated rationaleNotes And the reviewer can filter the diff by region and by author And if required metadata is missing, the draft cannot be opened and shows error "Draft Metadata Incomplete" listing missing fields
Reviewer Actions: Approve, Request Changes, Reopen Conflicts
Given a reviewer with Review permission has opened a Pending Review draft When the reviewer selects Approve Then the draft status changes to Approved and records reviewerId, optional comment, and approvedAt timestamp When the reviewer selects Request Changes Then a comment and selection of affected regions or change items are required, the status changes to Changes Requested, and authors are notified When the reviewer selects Reopen Conflicts Then the reviewer must specify conflictIds or regions to reopen, the draft status changes to Conflicts Reopened, and the specified conflicts are returned to Conflict Stitcher And once a terminal action (Approved, Conflicts Reopened) is recorded, subsequent conflicting actions are rejected with HTTP 409 Conflict and no state change
Atomic Commit to Cloud Workspace with No Partial or Duplicated Merges
Given a draft is Approved and the mainline is at the draft's baseVersion When the commit operation is executed Then all changes are applied atomically to the cloud workspace with a new commitId; either all changes merge or none And an idempotencyKey tied to draftId ensures repeated commit requests yield the same single commitId and no duplicates And if the mainline has advanced with overlapping changes since baseVersion, the commit is rejected with status Outdated Draft and no changes merged And if pre-commit validation fails, the commit is aborted with status Failed and error details, with no partial changes merged
Notifications on Draft and Commit Events
Given notification channels are enabled (in-app and email) When a draft is staged Then assigned reviewers receive notifications within 1 minute containing draftId and a deep link When a draft is approved, changes requested, conflicts reopened, or a commit completes Then authors and watchers receive notifications within 1 minute containing event type, draftId or commitId, and links And duplicate notifications for the same event and recipient within 5 minutes are de-duplicated
Retries and Backoff for Transient Failures
Given an operation encounters a transient error (network timeout, HTTP 5xx, or 429) When staging a draft, dispatching notifications, or executing a commit Then the system retries up to 3 times with exponential backoff (2s, 4s, 8s) And all retries use the same idempotencyKey to prevent duplicate drafts, notifications, or commits And if all retries fail, the operation status is set to Failed with a retriable flag and a visible Retry action for authorized users And non-transient errors (HTTP 4xx excluding 408/429) are not retried and include actionable error messages
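The classification and schedule above can be sketched as two pure helpers. Treating HTTP 408 as retriable alongside 429 (the criterion's "HTTP 4xx excluding 408/429" rule) and representing a network-level failure as status 0 are assumptions of this sketch:

```typescript
// Transient errors per the criterion: network failure (status 0 here),
// server errors (5xx), request timeout (408), and rate limiting (429).
// All other 4xx codes are permanent and must not be retried.
function isRetriable(status: number): boolean {
  if (status === 0) return true;             // network timeout / no response
  if (status >= 500) return true;            // HTTP 5xx
  return status === 408 || status === 429;   // only retriable 4xx codes
}

// The spec's fixed schedule: up to 3 retries at 2s, 4s, 8s.
function retryDelaysMs(maxRetries = 3, baseMs = 2_000): number[] {
  return Array.from({ length: maxRetries }, (_, i) => baseMs * 2 ** i);
}
```

Every retry in the loop would carry the same idempotencyKey, so even if a response is lost after the server applied the operation, the replay is deduplicated rather than producing a second draft, notification, or commit.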
Performance & Resilience Guarantees
"As a field architect on a lightweight laptop, I want the conflict view to stay fast and recoverable so that I can resolve merges without crashes or delays."
Description

Ensure low-latency rendering of overlays and diffs on large drawings through progressive loading, tile-based rendering, and GPU-accelerated zoom/pan where available. Safeguard user work with autosave during resolution, conflict session checkpoints, and resumable sync. Support cross-browser compatibility and degraded modes for low-memory devices to keep the resolution flow responsive and reliable.

Acceptance Criteria
Progressive Tile Rendering for Large Drawings
Given a 300 MB multi-layer drawing with three overlay diffs enabled and a cold cache When the Conflict Stitcher view is opened Then the first screenful of tiles is visible within 800 ms at 100% zoom And 95% of tiles for the current viewport load within 2.5 s And p95 pan/zoom input-to-paint latency after initial load is <= 50 ms And no main-thread long task exceeds 50 ms during continuous pan/zoom for 10 s
GPU-Accelerated Zoom/Pan with CPU Fallback
Given a device with WebGL2 or WebGPU available When the user pans/zooms for 10 s in a 300 MB drawing at 150% zoom Then average frame rate is >= 45 FPS and p95 frame time <= 30 ms And GPU rendering is confirmed via performance markers Given a device without hardware acceleration or with blocked WebGL/WebGPU When the user pans/zooms for 10 s in the same drawing Then the app falls back to CPU rendering automatically And average frame rate is >= 24 FPS and p95 input latency <= 120 ms And the user receives a non-blocking "Reduced performance mode" notice within 1 s
Autosave During Conflict Resolution Sessions
Given the user is resolving conflicts with unsaved markup edits When the user makes changes Then an autosave occurs within 2 s of idle or at most every 10 s during continuous editing And the UI shows "All changes saved" within 500 ms of save completion And autosave does not block the UI for more than 50 ms on the main thread And after a crash or forced close, reopening restores the session to the last autosave with no more than 5 s of edits lost
Checkpoints and One-Click Resume
Given a conflict resolution session with multiple selections and notes When the user creates a checkpoint Then the checkpoint is saved within 500 ms including overlays, selections, and rationale notes And up to 20 checkpoints are retained per session with LRU eviction beyond 20 When the app is relaunched or connectivity is restored Then the last active checkpoint auto-restores within 2 s And switching to any prior checkpoint completes within 1 s without data loss
Resumable Sync for Cloud Commit
Given the user queues a resolution for cloud commit and the payload size is up to 200 MB When network connectivity drops for up to 10 minutes Then the upload pauses and resumes from the last confirmed byte within 2 s of reconnection And total re-uploaded data is <= 1 MB or 1% of payload, whichever is lower And the user sees "Sync paused" within 1 s of loss and "Sync resumed" within 1 s of restore And if no connection within 10 minutes, retries back off at 5 s, 15 s, 30 s, 60 s, then every 2 minutes capped at 15 minutes
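The reconnect backoff ladder in the criterion — 5 s, 15 s, 30 s, 60 s, then every 2 minutes until 15 minutes of total retry time — can be expressed as a precomputed schedule. A minimal sketch, assuming the 15-minute figure caps total elapsed retry time (the function name and structure are illustrative):

```javascript
const STEPS_MS = [5_000, 15_000, 30_000, 60_000]; // fixed initial backoff steps
const STEADY_MS = 120_000;                        // then every 2 minutes
const CAP_MS = 15 * 60_000;                       // give up after 15 minutes total

function retrySchedule() {
  const delays = [...STEPS_MS];
  let elapsed = STEPS_MS.reduce((a, b) => a + b, 0);
  // Append 2-minute retries while they still fit inside the 15-minute cap.
  while (elapsed + STEADY_MS <= CAP_MS) {
    delays.push(STEADY_MS);
    elapsed += STEADY_MS;
  }
  return delays;
}
```

The fixed steps total 110 s, so six 2-minute retries fit before the cap.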
Cross-Browser Compatibility Coverage
Given the latest two stable versions of Chrome, Edge, and Firefox on Windows 11 and macOS, and Safari on macOS and iPadOS When executing core Conflict Stitcher flows (open, overlay diffs, edit, autosave, checkpoint, sync) Then all tests pass with zero Sev-1 defects and <= 2 Sev-2 cosmetic issues per browser And performance thresholds from related criteria are met within ±10% And CI runs these browser suites on every main-branch build with a pass rate >= 99% over the last 20 runs
Degraded Mode for Low-Memory Devices
Given the device has <= 4 GB RAM or a browser memory pressure event occurs When opening Conflict Stitcher on a 300 MB drawing Then the app switches to degraded mode: single overlay enabled, 50% texture resolution, no live blend preview And peak tab memory usage p95 <= 1.5 GB during a 2-minute interaction script And p95 input latency for pan/zoom <= 150 ms and first contentful paint <= 1.2 s And a one-time non-blocking notice explains degraded mode within 1 s of activation

PeerLink Sync

Share the latest cached sheets and markups device‑to‑device via Bluetooth or Wi‑Fi Direct. It respects permissions, logs every handoff, and keeps the field crew aligned in basements and dead zones, then de‑duplicates to the cloud later.

Requirements

PeerLink Native Bridge
"As a field architect, I want the app to connect directly to nearby devices without internet so that my crew can receive the latest sheets and markups on site."
Description

Provide a lightweight native bridge for iOS and Android (via Capacitor or equivalent) to expose Bluetooth Classic/LE and Wi‑Fi Direct APIs to the PlanPulse web client. The bridge must handle OS‑level permissions, radio capability checks, and background execution constraints, exposing a unified JavaScript interface to initiate discovery, advertise presence, establish connections, and stream data. This component enables reliable device‑to‑device transfers in basements and dead zones where internet connectivity is unavailable while preserving the app’s web‑first architecture.

Acceptance Criteria
Permission Requests and Capability Detection
Given the app cold-starts with no prior permissions When peerLink.init() is invoked Then the OS prompts for required Bluetooth and Location permissions exactly once per permission type and the API resolves a permissions map within 2 seconds of user action Given the user denies any required permission When discoverPeers() or advertisePresence() is called Then the Promise is rejected with code E_PERMISSION_DENIED and no radio operations are started Given a device lacks Wi‑Fi Direct support When capabilities() is called Then it returns { wifiDirect: false } and Wi‑Fi Direct methods reject with E_UNSUPPORTED Given Bluetooth is disabled at OS level When peerLink.init() is called Then state=disabled is emitted and no scans/advertising begin until the user enables Bluetooth manually
Unified Discovery and Advertising API
Given Device A calls advertisePresence({ id, role }, transports:["ble","wifiDirect"]) and Device B calls discoverPeers({ role:"field-crew" }, transports:["ble","wifiDirect"]) with radios enabled When discovery is running Then Device B receives a peerFound event for Device A within 5 seconds including peerId, rssi, and availableTransports Given Device A stops advertising When Device B remains in discovery Then a peerLost event for Device A fires within 6 seconds Rule: advertisePresence payload size must be <= 128 bytes; attempts to exceed reject with E_PAYLOAD_TOO_LARGE without starting advertising Rule: stopDiscovery() and stopAdvertising() resolve within 1 second and no further discovery/advertising events are emitted thereafter
Connection Establishment and Transport Selection
Given both devices support BLE and Wi‑Fi Direct and are mutually discoverable When connect(peerId, { transport:"auto", payloadSizeHint: 5*1024*1024 }) is called Then a connection is established using Wi‑Fi Direct within 10 seconds or, on 2 consecutive Wi‑Fi Direct failures, a fallback to BLE is attempted automatically Rule: Only one active connection per peer; concurrent connect() attempts to the same peer reject with E_ALREADY_CONNECTED Rule: connect(peerId, { transport:"ble"|"wifiDirect" }) honors explicit transport selection or rejects with E_UNSUPPORTED when unavailable Then on success a connected event is emitted including peerId and transport; on failure a standardized error code and error event with correlationId are provided
Data Streaming Reliability and Integrity Offline
Given an active connection and no internet connectivity When sendStream("sheetBundle", 50*1024*1024) is invoked Then data is chunked into segments <= 256 KB with per-chunk acknowledgements and receiver emits transferProgress events and a streamComplete event with checksum matching the sender Rule: On connection drop mid-transfer, resume within 60 seconds continues from the last acknowledged chunk, re-sending at most one prior chunk Performance (reference devices): Wi‑Fi Direct average throughput >= 0.8 MB/s; BLE average throughput >= 40 KB/s; bridge memory overhead during transfer <= 50 MB Timeouts: If no ACK is received for 10 seconds, the chunk is retried up to 3 times before failing with E_TIMEOUT
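The chunking and resume rules above can be sketched as follows. The transport callback and function names are stand-ins, not the real bridge API; the constants (<= 256 KB segments, resume from the last acknowledged chunk) come from the criterion.

```javascript
const CHUNK_SIZE = 256 * 1024; // per the criterion: segments <= 256 KB

function chunkify(buffer) {
  const chunks = [];
  for (let off = 0; off < buffer.length; off += CHUNK_SIZE) {
    chunks.push(buffer.subarray(off, off + CHUNK_SIZE));
  }
  return chunks;
}

// transportSend(index, chunk) resolves once the receiver ACKs that chunk.
// On a resume, startIndex is the last acknowledged chunk index, so at most
// one already-delivered chunk is re-sent.
async function sendStream(buffer, transportSend, startIndex = 0) {
  const chunks = chunkify(buffer);
  for (let i = startIndex; i < chunks.length; i++) {
    await transportSend(i, chunks[i]);
  }
  return chunks.length;
}
```

A real implementation would add per-chunk checksums, progress events, and the 10 s ACK timeout with 3 retries described above; this sketch shows only the chunk/ack/resume skeleton.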
Background Execution Constraints
iOS: Given an active transfer When the app moves to background Then the bridge requests background time, completes within the allowed window or pauses gracefully emitting transferPaused, and resumes automatically within 3 seconds of foregrounding without data loss Android: Given an active transfer When the app moves to background Then a foreground service starts with a persistent notification and the transfer continues; if the process is killed, resumable metadata is persisted and the transfer resumes within 5 seconds of relaunch Rule: No discovery or advertising occurs when the app is terminated; upon relaunch, prior discovery/advertising state is not auto-started unless explicitly requested
Unified Error Codes and Event Model
Rule: The bridge exposes identical JS error codes on iOS and Android: E_PERMISSION_DENIED, E_UNSUPPORTED, E_BUSY, E_TIMEOUT, E_CONNECTION_LOST, E_PAYLOAD_TOO_LARGE, E_INVALID_ARG, E_INTERNAL, E_ALREADY_CONNECTED Rule: Event names and payload schemas are identical across platforms: stateChanged, peerFound, peerLost, connected, disconnected, transferProgress, transferPaused, transferResumed, streamComplete, securityEvent, error Given any API Promise rejects When the error is emitted Then an error event is also emitted with the same correlationId and normalized code/message Rule: version() returns a SemVer string; breaking changes increment MAJOR and are documented via getChangelog()
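A minimal sketch of the unified error model: the code list is taken from the rule above, while the factory function, event callback, and property names are illustrative assumptions about how a bridge might normalize errors and mirror each Promise rejection as an `error` event with the same correlationId.

```javascript
const ERROR_CODES = [
  'E_PERMISSION_DENIED', 'E_UNSUPPORTED', 'E_BUSY', 'E_TIMEOUT',
  'E_CONNECTION_LOST', 'E_PAYLOAD_TOO_LARGE', 'E_INVALID_ARG',
  'E_INTERNAL', 'E_ALREADY_CONNECTED',
];

function bridgeError(code, message, correlationId) {
  // Unknown native codes are normalized to E_INTERNAL so JS callers only
  // ever see the documented set.
  if (!ERROR_CODES.includes(code)) code = 'E_INTERNAL';
  const err = new Error(message);
  err.code = code;
  err.correlationId = correlationId;
  return err;
}

// Rejecting an API promise also emits an "error" event carrying the same
// correlationId, per the rule above.
function rejectAndEmit(emit, code, message, correlationId) {
  const err = bridgeError(code, message, correlationId);
  emit('error', { code: err.code, message: err.message, correlationId });
  return Promise.reject(err);
}
```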
Security and Privacy Safeguards
Rule: Advertising payloads must conform to an allowed schema (ephemeral sessionId, appVersion, capabilities); attempts to include PII or disallowed fields reject with E_INVALID_ARG Rule: Discovery/advertising start only after an explicit user action within the current session; auto-start is disabled by default Rule: No peer identifiers or payloads are written to system logs; debug logs remain in app sandbox and are toggleable via setDebug(true|false) Given a connection attempt without a valid session secret When handshake occurs Then the connection is rejected, a securityEvent is emitted with reason:"unauthenticated", and no data is exchanged
Offline Cache Packaging
"As a project lead, I want to share only the relevant sheets and markups as a single package so that transfers are fast and my team gets exactly what they need."
Description

Bundle selected sheets and associated markups into a compact, versioned share package with a signed manifest. The package must include dependencies (fonts, referenced images), per‑sheet metadata (revision IDs, authorship, timestamps), and content‑addressable identifiers for deduplication. Provide compression, size estimates, and selective inclusion (by sheet set, tag, or most‑recent edits) to minimize transfer time while ensuring complete context for recipients.

Acceptance Criteria
Package Creation with Signed Manifest
Given the user selects one or more sheets and associated markups When the user taps "Create Share Package" Then the system generates a single package file containing all selected sheets and their markups And the package includes a manifest file And the manifest contains package_id, package_version, schema_version, creator_id, created_at (UTC ISO 8601), and an items array listing every sheet, markup, and dependency with filename, byte_size, and SHA-256 hash And the manifest is digitally signed, including signature algorithm and key identifier And verifying the signature against the manifest content succeeds And the package file name includes the package_id and package_version
Dependency Inclusion and Validation
Given selected sheets reference external fonts and images When packaging is executed Then all referenced fonts used by text layers are included exactly once in the package And all referenced images (linked or embedded) are included exactly once in the package And the manifest lists each dependency with type (font/image), original path or identifier (if available), and SHA-256 hash And if any dependency cannot be resolved and the user chose "Require complete context", the packaging aborts with error code DEP_MISSING and lists the missing dependencies And if the user chose "Allow partial context", the package is created and each missing dependency is flagged missing=true in the manifest And upon validation, the presence and hash of each included dependency matches the manifest
Per‑Sheet Metadata Completeness
Given sheets and markups are included in the package When the manifest is generated Then each sheet entry includes sheet_id, sheet_name, revision_id, revision_sequence, authorship {author_user_id, author_display_name}, created_at (UTC ISO 8601), last_modified_at (UTC ISO 8601), and markup_count And each markup entry includes markup_id, parent_sheet_id, version_id, authorship {author_user_id, author_display_name}, created_at (UTC ISO 8601), and last_modified_at (UTC ISO 8601) And all timestamps are normalized to UTC and valid ISO 8601 format And validation fails if any required metadata field is missing or invalid
Content‑Addressable Identifiers and De‑dup Readiness
Given all items (sheets, markups, dependencies) are added to the package When identifiers are assigned Then each item has a content‑addressable identifier of the form "cai:sha256:<hex>" derived from its canonical bytes And duplicate content appears only once in the package, with multiple references pointing to the same CAI And the manifest references items by CAI, with byte_size and file path mapping And on importing two packages that share content, items with matching CAIs are not duplicated in storage And a deduplication report lists counts of reused vs new items
Compression and Size Estimate Accuracy
Given the user has selected items for packaging When the size estimate is displayed prior to packaging Then the UI shows an estimated total size and breakdown by category (sheets, markups, dependencies) And the final package size differs from the estimate by no more than ±10% for packages between 10 MB and 1 GB on the reference dataset And compression is applied to reduce total size unless content is detected as incompressible (e.g., already compressed formats) And the manifest records the compression method and original vs compressed size for each item And the final size is displayed upon completion
Selective Inclusion by Set, Tag, or Recent Edits
Given a project with multiple sheet sets, tags, and recent edits When the user filters by (a) sheet set(s), (b) tag(s), and/or (c) most‑recent edits within a selectable time window Then only sheets and markups matching the filter are included And all referenced dependencies required by the included items are included regardless of filters to preserve context And the UI displays the count of sheets, markups, dependencies, and the estimated size after filters are applied And clearing filters restores the full selectable list And test data confirms the included items exactly match the filter logic
Package Validation and Import Readiness
Given a created package is transferred to a recipient device When the recipient runs package validation Then the manifest signature verification succeeds and the manifest hash matches And all item SHA‑256 hashes match their computed values And if validation fails, the import is aborted with a descriptive error code and log entry And if the manifest schema_version is newer but backward‑compatible, the import proceeds with a warning; if incompatible, it blocks with an error And upon successful validation, sheets, markups, metadata, and dependencies are ready for import and deduplication by CAI And an audit log entry is recorded with sender_id, receiver_id, package_id, validation_result, and timestamp
Permissioned Share Controls
"As an admin, I want shares to respect project permissions so that sensitive drawings are not distributed beyond authorized team members."
Description

Enforce role‑based access for PeerLink Sync, restricting who can initiate shares and what projects/sheets can be included. Require explicit user consent on receive, display sender identity, and honor project‑level permissions. Include ephemeral, device‑scoped authorization tokens embedded in the package manifest to prevent unauthorized redistribution and to enable post‑transfer auditing.

Acceptance Criteria
Initiator Role Enforcement
Given a signed-in user without the PeerLink:InitiateShare permission When the user attempts to open the PeerLink Sync share panel or press Send Then the action is blocked, the Send control is disabled, and an error message "Insufficient permissions to initiate PeerLink Sync" is displayed And an audit event is recorded with fields {event=share_initiate_denied, reason=role_missing, user_id, device_id, timestamp} Given a signed-in user with the PeerLink:InitiateShare permission When the user opens the PeerLink Sync share panel Then the panel loads and allows item selection
Sender-Side Project/Sheet Scope Filtering
Given the initiator has view/share permission for Project A sheets [S1,S2] and no permission for Project B sheets [S3] When the initiator selects S1 and S3 for sharing Then S3 is automatically excluded prior to packaging, and the UI shows "1 item removed due to permissions" And the Send button remains enabled only if at least one permitted item remains Given the initiator selects only items they lack permission to share When they attempt to press Send Then Send is disabled and a message "No permitted items to share" is shown And an audit event is recorded {event=share_content_filtered, removed_count, remaining_count} Given items were selected earlier When the initiator presses Send Then the system revalidates the final set against the cached ACL and excludes any items whose permissions have changed since selection, showing the updated counts before transfer
Receiver Consent and Sender Identity Display
Given a receiving device detects an incoming PeerLink package When the consent dialog is presented Then it displays sender identity fields {sender_full_name, sender_org, sender_role, sender_device_name}, project list, sheet count, and package size And the only options are Accept and Decline; no auto-apply occurs Given the recipient taps Accept within the default 2-minute window When the token and package validate Then the import proceeds and the device shows a success summary Given the recipient taps Decline or no action is taken within 2 minutes When the window expires Then the package is discarded without importing any items And an audit event is recorded {event=share_receive_declined|timed_out, sender_id, recipient_device_id, token_id} And no data is applied to the workspace
Ephemeral Device-Scoped Token Validation
Given a package is created with an embedded manifest containing a signed token {token_id, issued_at, expires_at<=30m, sender_device_id, recipient_device_id, package_hash} When the intended recipient device, offline, attempts to import within the validity window Then the app verifies the signature and device binding locally and proceeds Given any device other than the intended recipient attempts to import the same package file When validation runs Then import fails with error "Authorization token not valid for this device" And an audit event is recorded {event=token_validation_failed, reason=device_mismatch, token_id} Given the intended recipient attempts to import after the token expires or with a tampered manifest When validation runs Then import is blocked with error "Authorization token invalid/expired" And an audit event is recorded {event=token_validation_failed, reason=expired|tampered, token_id}
Audit Trail and Post-Transfer Sync
Given a share attempt (success or failure) occurs When the event completes Then a local immutable log entry is written with fields {event_type, result, sender_user_id, sender_device_id, recipient_device_id, token_id, project_ids, sheet_ids, package_hash, timestamp, reason} Given the devices regain connectivity When background sync runs Then all local handoff logs and token statuses are uploaded to the cloud audit service And the audit UI shows a single canonical record per token_id/package_hash with deduplicated retries and their outcomes linked Given an attacker replays the same package to the same or different device When the system detects a previously used token_id Then the attempt is blocked and the audit shows it as a replay with references to the original successful/failed event
Recipient Permission Enforcement and Partial Import
Given the sender includes items beyond the recipient's project/sheet permissions (per recipient's cached ACL) When the recipient accepts the transfer and validation passes Then only items the recipient is permitted to access are imported And disallowed items are omitted and listed in a post-import summary with counts Given none of the items are permitted for the recipient When the recipient accepts Then zero items are imported, a message "No permitted items to import" is shown, and the event is logged {event=receive_filtered_all, token_id} Given omitted items due to permissions When the audit record is viewed Then it reflects {requested_count, imported_count, omitted_count, omitted_reasons=[permission_denied]}
Handoff Audit Trail
"As a PM, I want a clear audit of who shared what and when so that I can track responsibility and meet compliance requirements."
Description

Record a tamper‑evident log for each device‑to‑device handoff, capturing sender/recipient user IDs, device fingerprints, project and sheet IDs, package checksums, and timestamps. Store logs locally for offline use and sync to the cloud when available, linking each entry to the project activity feed. Provide an API and UI hooks for compliance reporting and dispute resolution.

Acceptance Criteria
Log Entry Creation on Successful Handoff
Given a device-to-device handoff of cached sheets/markups completes successfully When the sender confirms the handoff completion Then the system writes exactly one audit log entry with fields: handoff_id (UUIDv4), sender_user_id, recipient_user_id, sender_device_fingerprint, recipient_device_fingerprint, project_id, sheet_ids (array), package_checksum (SHA-256), transfer_medium (Bluetooth|Wi‑Fi Direct), start_timestamp (ISO 8601 UTC), end_timestamp (ISO 8601 UTC), outcome=success And the write is atomic (no partial records) and the entry is immutable thereafter And if the handoff fails after a connection attempt, an audit entry is recorded with outcome=failed and includes error_code and error_message
Tamper-Evident Integrity and Detection
Given an audit log entry has been persisted When its signature is verified using the device’s registered public key Then the signature validation passes for unaltered entries and fails for any modified field And entries with failed validation are flagged as "Integrity Warning" and excluded from export until explicitly acknowledged by an Admin
Offline Local Storage and Retry Sync
Given no internet connectivity at handoff time When the audit log entry is created Then it is stored locally in an encrypted audit store and marked sync_status=pending And the system retries cloud sync with exponential backoff up to a maximum interval of 15 minutes while offline And upon connectivity restoration, pending entries upload in FIFO order until sync_status=complete for all entries
Cloud Sync and Project Activity Linking
Given an audit log entry uploads successfully to the cloud When the project activity feed is refreshed Then a new activity item appears within 10 seconds linked by project_id and handoff_id, showing sender and recipient display names, sheet count, and timestamp And the activity item deep-links to the audit detail view for the same handoff_id And duplicate uploads are ignored using an idempotency key composed of handoff_id+package_checksum
Permissions and Access Control
Given a signed-in user requests audit entries for a project When the user has Audit:View permission Then the user can list and view audit entries for that project And when the user lacks Audit:View, the request is denied with 403 and no audit fields are returned And deletion of audit entries is prohibited; Admins may append a redaction marker with reason and timestamp without altering original fields
Compliance API for Reporting
Given a client calls GET /api/projects/{project_id}/handoffs with filters (date range, sender_user_id, recipient_user_id, outcome, device_fingerprint, sheet_id) When the dataset contains up to 50,000 entries Then the API responds within 500 ms (p95) with cursor-based pagination and returns only the requested fields plus a verification_signature And the API supports export=csv|json; CSV is RFC4180-compliant, UTF‑8, with header row and properly escaped values And the API enforces authorization: Audit:Export is required for export endpoints
UI Hooks for Dispute Resolution
Given a user views a sheet or markup When they open the Related Handoffs panel Then entries involving that sheet_id display with recipient, sender, start/end timestamps, outcome, and package_checksum And the user can copy a shareable deep link to any audit entry And the user can append an immutable dispute note (author, timestamp, text) to the audit entry, which appears in the project activity feed
Resumable Transfer & Reliability
"As a site engineer, I want transfers to resume automatically after interruptions so that we don’t have to restart large packages in the field."
Description

Implement chunked streaming with integrity checks, congestion control, and automatic retry to handle interference and intermittent radios. Support pause/resume, recipient‑side storage space checks, and user feedback (progress bar, ETA, error states). Optimize for low power by batching discovery and throttling transmissions based on battery level and thermal state.

Acceptance Criteria
Dead‑Zone Interruption Auto‑Resume
Given Device A and Device B are connected via PeerLink over Bluetooth or Wi‑Fi Direct and a 1 GB transfer has started When link connectivity is lost for up to 5 minutes Then the transfer auto‑pauses within 2 seconds of detecting failure and retries using exponential backoff between 1–30 seconds And upon reconnection the transfer resumes from the last verified chunk without re‑sending verified chunks And per‑chunk checksums validate all received chunks; corrupted chunks are re‑requested only And the final payload hash on Device B matches Device A’s manifest And the total duplicate bytes resent after resume are less than or equal to 2% of the payload
User‑Initiated Pause/Resume
Given an active transfer between Device A and Device B When the sender or recipient taps Pause Then data transmission halts within 2 seconds, state is persisted, and temporary files remain intact And when Resume is tapped within 24 hours the transfer continues from the last verified chunk with no data loss And when Cancel is selected partial data is deleted within 5 seconds and reserved space is freed And UI state transitions display Paused, Resuming, and Canceled accordingly on both devices
Recipient Storage Pre‑Check
Given the sender initiates a transfer of size S When the recipient reports available storage A before data flows Then the transfer only starts if A >= S + max(50 MB, 5% of S) with reservation success And if the pre‑check fails both devices show Insufficient Storage and no data is sent And if storage drops below the reserved threshold mid‑transfer the transfer auto‑pauses, an error is shown, and it resumes automatically when space is available
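The pre-check rule above translates directly into code: the transfer starts only if available storage A covers the payload S plus a headroom of max(50 MB, 5% of S). Function names are illustrative.

```javascript
const MB = 1024 * 1024;

function requiredHeadroom(sizeBytes) {
  // max(50 MB, 5% of S) per the criterion
  return Math.max(50 * MB, 0.05 * sizeBytes);
}

function canStartTransfer(availableBytes, sizeBytes) {
  return availableBytes >= sizeBytes + requiredHeadroom(sizeBytes);
}
```

The 50 MB floor dominates for payloads under 1 GB; above that, the 5% term takes over, so a 2 GiB package needs roughly 102 MB of headroom.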
Congestion Control Under Loss and Interference
Given chunked streaming is active with per‑chunk acknowledgments When packet loss exceeds 10% or repeated negative acknowledgments occur within a 30‑second window Then the sender reduces inflight chunks to 4 or fewer and reduces chunk size to 128–256 KB And retransmissions are paced to prevent buffer overflow with a maximum of 5 retries per chunk before surfacing Link Unstable with retry options And steady‑state throughput variance stays within ±20% over 30 seconds once adaptation engages
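The adaptation rule can be sketched as a pure function over the sender's window state. The 10% loss threshold, the <= 4 inflight limit, and the 128–256 KB chunk range come from the criterion; the clamp logic is an illustrative interpretation.

```javascript
const KB = 1024;

function adapt({ lossRate, inflight, chunkSize }) {
  if (lossRate > 0.10) {
    return {
      // Shrink the window to at most 4 inflight chunks...
      inflight: Math.min(inflight, 4),
      // ...and clamp chunk size into the 128-256 KB band.
      chunkSize: Math.min(Math.max(chunkSize, 128 * KB), 256 * KB),
    };
  }
  return { inflight, chunkSize }; // no adaptation below the loss threshold
}
```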
Progress, ETA, and Error Feedback
Given a transfer is active, paused, or failed When the UI renders transfer status Then progress percentage updates at least once per second and never regresses And ETA is displayed within 15 seconds of start and remains within ±20% accuracy thereafter And distinct error states are shown with clear messages for Link Lost, Insufficient Storage, Permission Denied, Thermal Throttling, and Battery Saver Throttle And all states are announced via accessibility labels for screen readers
Low‑Power Discovery and Throughput Throttling
Given the app is performing peer discovery When the app is foregrounded Then scan bursts are batched no more than once every 3 seconds And when the app is backgrounded scan bursts are batched no more than once every 30 seconds Given a transfer is active and the device battery is at or below 20% or OS Low Power Mode is enabled or thermal state is hot When throttling activates Then average radio power usage is reduced by at least 30% versus unthrottled baseline while maintaining a minimum transfer rate of 256 KB/s when link capacity allows And throttling is lifted within 10 seconds after conditions clear without restarting the transfer
Handoff Logging and Crash‑Safe Resume
Given a transfer is in progress When chunks are sent and acknowledged Then transfer metadata (peer IDs, package ID, size, start time, last chunk index, status) is durably persisted at least every 5 seconds or every 50 chunks, whichever comes first And on app crash or restart the app restores the transfer within 10 seconds and offers to resume from the last verified chunk And on completion or failure a handoff log entry is created with outcome and reason, queued for later cloud de‑duplication using payload hash + peer IDs + timestamps, avoiding duplicate log entries
End‑to‑End Encryption
"As a security‑conscious architect, I want device‑to‑device shares to be encrypted and authenticated so that sensitive plans are protected even if a device is compromised."
Description

Protect all PeerLink Sync payloads with modern end‑to‑end encryption (e.g., X25519 for key exchange and AES‑GCM for data), performing mutual authentication using signed app/device identities. Keys must be ephemeral per session, and manifests/signatures must be verified before import. Ensure zero‑knowledge transfers (no plaintext on transport layer), secure key storage, and compliance with regional cryptography regulations.

Acceptance Criteria
Mutual Authentication Handshake Before PeerLink Transfer
Given two PlanPulse devices have valid app/device identity certificates issued by the PlanPulse CA and at least one transport (Bluetooth or Wi‑Fi Direct) is available When a PeerLink Sync session is initiated Then both devices perform mutual authentication, validating certificate chains, key usage, expiration, and binding identities to the handshake transcript And revocation status is checked online (OCSP/CRL) when available or via cached status not older than 7 days when offline And the session is aborted with no payload exchange if any validation fails, with a non-sensitive error code logged And a handoff audit record is stored containing session ID, hashed peer IDs, transport type, and timestamps, with no secret material And the authenticated handshake completes within 1.5 s (P50) and 3.0 s (P95) under baseline conditions (RSSI > −70 dBm)
Ephemeral X25519 Session Keys With Rotations
Given a new PeerLink Sync session is starting When key agreement occurs Then each device generates a fresh ephemeral X25519 keypair and performs ECDH to derive shared secret(s) And traffic keys are derived via HKDF-SHA-256 with context {session_id, roles, cipher_suite=v1} And no ephemeral keypair is reused across sessions; keys are rotated after 1 GiB of ciphertext or 15 minutes, whichever occurs first And rekeying generates new ephemeral keypairs and updates traffic keys without exposing plaintext And obsolete keys are zeroized within 100 ms of rotation or session end and are never written to disk or included in crash logs And only truncated, non-sensitive key fingerprints (e.g., first 8 hex chars) may be displayed for operator verification
AES-GCM Encrypted Payloads Over Bluetooth/Wi‑Fi Direct
Given a mutually authenticated session with derived traffic keys When any payload (sheets, markups, manifests, or metadata) is transmitted over Bluetooth or Wi‑Fi Direct Then the payload is encrypted with AES-256-GCM using a unique nonce per message And AAD includes {session_id, message_counter, manifest_id, sender_id, receiver_id} And decryption occurs only if the GCM authentication tag verifies; any single auth failure aborts the session and logs a tamper event And no plaintext application data, filenames, or manifest contents appear on the transport; on-the-wire inspection reveals only ciphertext and framed lengths And a monotonically increasing message_counter is enforced to prevent replays; duplicates or out-of-window messages are rejected
Manifest and Content Signature Verification Pre‑Import
Given an encrypted payload has been received and decrypted successfully When the receiver validates the included manifest and per-object digital signatures using the sender’s verified signing key Then import proceeds only if all signatures verify, the manifest hash matches, and the manifest version is compatible And any verification failure blocks import, rolls back staged changes, and displays “Signature Verification Failed” with a support code And replay or downgrade attempts are detected via manifest IDs and session nonce binding and are rejected And the event is recorded in an audit log with evidence hashes and no content exposure
Secure Identity Key Storage and Ephemeral Key Disposal
Given a device holds an identity keypair for PlanPulse When the app stores and uses the identity private key Then the key is stored as non-exportable in hardware-backed secure storage where available (Secure Enclave/StrongBox/TPM), otherwise in the OS keystore with strongest protections enabled And access is limited to the app process, gated by device unlock, and blocked on root/jailbreak/debugger detection And identity/session keys are excluded from backups and crash reports And ephemeral session keys exist only in RAM, are page-locked, and are zeroized within 100 ms after session end
Regional Crypto Compliance and Policy Enforcement
Given the device region and organization crypto policy profile are resolved at runtime When a PeerLink Sync session is established Then only approved cipher suites/providers are used per the active policy (e.g., require FIPS 140-3 validated provider when FIPS mode is enabled) And if a required compliant provider is unavailable, PeerLink Sync is disabled with a clear “Crypto Policy Not Met” error and a telemetry flag And an inventory/report endpoint exposes the active cipher suite, provider, key lengths, and policy profile for the session And any attempted downgrade to a non-compliant algorithm is blocked and logged as a security event
Cloud De‑duplication & Merge
"As a project lead, I want offline changes to sync without duplicates and with clear conflict handling so that the cloud remains the single source of truth."
Description

Upon reconnection, reconcile received packages with the cloud by matching content‑addressable IDs and checksums to avoid duplicate uploads. Merge markups by version graph, detect conflicts at annotation granularity, and surface a minimal review UI when concurrent edits cannot be auto‑merged. Ensure idempotent uploads, accurate attribution, and preservation of offline audit logs linked to the resulting canonical versions.
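The content-addressable matching described above can be sketched as follows: the content ID is the SHA-256 of the artifact bytes, so re-uploading identical content (a retry, or the same package from a second device) is a no-op. `CloudStore` is a stand-in for the real versioning service, not its API.

```typescript
import { createHash } from "node:crypto";

class CloudStore {
  private blobs = new Map<string, Buffer>();

  // Returns the content ID and whether a new blob was actually created.
  upload(artifact: Buffer): { cid: string; created: boolean } {
    const cid = createHash("sha256").update(artifact).digest("hex");
    if (this.blobs.has(cid)) return { cid, created: false }; // duplicate: link only
    this.blobs.set(cid, artifact);
    return { cid, created: true };
  }

  get size(): number { return this.blobs.size; }
}

const store = new CloudStore();
const a = store.upload(Buffer.from("plan-v1.pdf bytes"));
const b = store.upload(Buffer.from("plan-v1.pdf bytes")); // retry / second device
// a.created === true, b.created === false, and only one blob is stored
```

Deriving the ID from content (rather than from an upload event) is what makes the push idempotent: pushing the queue twice creates zero new blobs the second time.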

Acceptance Criteria
Idempotent Reconnect Upload
Given a device has N cached packages including K previously uploaded packages with identical content IDs When the device reconnects and pushes the full queue twice due to network retries Then the cloud creates versions and blobs only for the N−K previously unseen content IDs And a second identical push results in 0 new versions and 0 new blobs And the reconcile report indicates duplicates were treated as no-ops with 2xx responses and idempotency keys
Content-Addressable De-duplication Integrity
Given each artifact includes a content ID (CID) and checksum When a package references an artifact whose CID already exists in the cloud with a matching checksum Then the client skips blob upload and only links references without creating duplicates And when a CID collision with mismatched checksum is detected Then the client aborts reconcile for that artifact, flags integrity_error, and creates no new version And the reconcile report lists counts for skipped, uploaded, and integrity errors
Order-Independent Reconciliation and Retry Safety
Given two devices produce packages derived from the same base and they arrive at the cloud in arbitrary order with intermittent failures When reconciliation runs with automatic retries and backoff Then the final version graph and annotation set are identical regardless of arrival order And no duplicate versions or annotations are created due to retries And replaying the same packages three times yields identical results (idempotent, at-most-once effects per operation)
Version-Graph Auto-Merge
Given two diverged versions Vb and Vc from common ancestor Va with non-overlapping annotations When reconciliation executes Then a merge node Vm with parents Vb and Vc is created And all non-overlapping annotations appear exactly once in Vm And the merge completes within 5 seconds for 2,000 annotations and 100 markups per branch on a standard node
Annotation-Level Conflict Detection and Review UI
Given the same annotation ID is edited concurrently on Vb and Vc producing conflicting fields When auto-merge cannot resolve the conflict Then the system creates a conflict record per annotation and surfaces a review UI listing only unresolved conflicts And each conflict displays both variants with author, timestamp, and field-level diffs And the user can resolve each conflict in no more than 3 clicks by choosing left/right or editing fields And after resolution a new canonical version Vr is created with zero remaining conflicts And the review UI loads within 1 second for up to 200 conflicts
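The two criteria above can be sketched as a three-way merge over annotation maps: IDs changed on only one branch merge automatically, while the same ID edited differently on both branches becomes a conflict record for the review UI. Annotation content is reduced to a string here for brevity; the real model is field-level.

```typescript
type Annotations = Map<string, string>;

// Three-way merge of two diverged versions against their common ancestor.
function merge(base: Annotations, left: Annotations, right: Annotations) {
  const merged: Annotations = new Map();
  const conflicts: string[] = [];
  for (const id of new Set([...left.keys(), ...right.keys()])) {
    const b = base.get(id), l = left.get(id), r = right.get(id);
    if (l === r) { if (l !== undefined) merged.set(id, l); }          // identical on both
    else if (l === b || l === undefined) { if (r !== undefined) merged.set(id, r); } // only right changed
    else if (r === b || r === undefined) merged.set(id, l);           // only left changed
    else conflicts.push(id);                                          // concurrent conflicting edits
  }
  return { merged, conflicts };
}

const base = new Map([["a1", "note"]]);
const left = new Map([["a1", "note"], ["a2", "move door"]]);
const right = new Map([["a1", "revised note"], ["a3", "add window"]]);
const { merged, conflicts } = merge(base, left, right);
// merged keeps a1's one-sided edit plus a2 and a3 exactly once; no conflicts
```

A merge node `Vm` would record `left` and `right` as parents; only the IDs in `conflicts` need to surface in the review UI.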
Accurate Attribution Preservation
Given packages include author IDs, device IDs, and edit timestamps for annotations and markups When de-duplication and merge complete Then all annotations in the canonical version retain original author attribution and edit timestamps And the canonical version metadata includes a contributor list with per-author counts And attribution remains unchanged after idempotent replays and is not reassigned to a service account
Offline Audit Log Preservation and Linking
Given offline actions and PeerLink handoffs generate audit log entries on devices When reconciliation completes Then all relevant audit entries are uploaded and linked to the resulting canonical versions and merge nodes And each entry preserves original device ID, actor, timestamp, and hash-chain linkage And querying a canonical version returns the complete ordered audit trail including offline periods and handoffs And no audit log entries are duplicated or lost during de-duplication

Low‑Power Ink

A latency‑tuned, battery‑savvy markup engine optimized for stylus input. Smooth strokes, palm rejection, and instant autosave work flawlessly offline, while heavy diff computations defer until charging—so you never lose ink or momentum.

Requirements

Latency-Optimized Ink Rendering
"As an architect sketching on-site, I want my ink to appear instantly and smoothly so that I can mark up drawings without distraction or lag."
Description

A rendering pipeline that delivers sub-10ms perceived latency on supported hardware by combining coalesced pointer events, stroke prediction, velocity-aware smoothing, and GPU-accelerated compositing. The engine samples stylus input at the browser’s maximum rate, filters jitter without introducing lag, and renders ink on a dedicated offscreen canvas to avoid main-thread jank. It gracefully degrades on low-power devices by dynamically adjusting sampling and smoothing while maintaining stroke fidelity. It integrates with PlanPulse canvases, supports pressure and tilt where available, and ensures consistent stroke appearance across zoom levels and exports.
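The "velocity-aware smoothing" step can be sketched as a one-euro-style low-pass filter: heavy smoothing at low speeds (which kills jitter) and light smoothing at high speeds (which avoids visible lag). The class name and constants below are illustrative assumptions, not tuned production values; one axis is shown, with x and y filtered independently in practice.

```typescript
class VelocityAwareFilter {
  private prev?: { x: number; t: number };
  private smoothed = 0;
  // minCutoff: smoothing floor at rest; beta: how fast smoothing relaxes with speed.
  constructor(private minCutoff = 1.0, private beta = 0.02) {}

  // Exponential-smoothing coefficient for a given cutoff frequency and timestep.
  private alpha(cutoff: number, dt: number): number {
    const tau = 1 / (2 * Math.PI * cutoff);
    return dt / (dt + tau);
  }

  sample(x: number, t: number): number {
    if (!this.prev) { this.prev = { x, t }; this.smoothed = x; return x; }
    const dt = Math.max(t - this.prev.t, 1e-3);
    const velocity = Math.abs(x - this.prev.x) / dt;
    const cutoff = this.minCutoff + this.beta * velocity; // faster → higher cutoff → less smoothing
    const a = this.alpha(cutoff, dt);
    this.smoothed = a * x + (1 - a) * this.smoothed;
    this.prev = { x, t };
    return this.smoothed;
  }
}
```

Because the output is always a convex combination of the input and the previous output, the filter cannot overshoot the raw samples, which matches the "filters jitter without introducing lag" intent for slow strokes.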

Acceptance Criteria
Sub‑10ms Ink Latency on Supported Devices
Given a supported device (>=120 Hz display and stylus) with hardware acceleration enabled and no power‑saver mode, When the user draws continuous strokes for 10 seconds, Then the measured end‑to‑end ink latency (PointerEvent.timeStamp to first pixel paint) is <= 10 ms at the 95th percentile and <= 15 ms at the 99th percentile. Given the same conditions, When drawing for 10 seconds, Then the frame drop rate during strokes is < 2% and there are no visual gaps > 16 ms. Given telemetry is enabled, When a drawing session ends, Then latency percentiles (50/95/99) are recorded for QA verification.
Jank‑Free Rendering via Offscreen GPU Compositing
Given a complex PlanPulse canvas (>= 5 layers, 1000+ existing paths), When the user draws and pans simultaneously, Then ink updates render on a dedicated offscreen canvas and are GPU‑composited above drawing layers and below UI overlays, with no main‑thread long tasks (>50 ms) overlapping pointermove handling. Given a DevTools performance capture, When drawing, Then main‑thread scripting time during strokes is <= 5% and raster/composite occurs on a worker/GPU thread. Given continuous stroke input, When the UI triggers layout/paint for panels, Then ink maintains continuity with no more than 1 missed frame per 5 seconds.
Max‑Rate Stylus Sampling with Coalesced Events
Given getCoalescedEvents is supported, When pointermove events occur at N Hz, Then the engine processes >= 95% of coalesced points and the effective sample rate is within 5% of the event rate. Given getCoalescedEvents is not supported, When pointermove events fire, Then every event is sampled (no throttling/debouncing) and per‑event processing time averages <= 1 ms with 95th percentile <= 3 ms. Given a 240 Hz stylus, When drawing steadily, Then the engine captures >= 230 samples per second.
Velocity‑Aware Smoothing Without Added Lag
Given slow strokes (<50 mm/s), When drawing straight lines, Then positional jitter amplitude is reduced by >= 50% compared to raw input while tip‑to‑ink lag remains <= 2 px at 1x zoom (95th percentile). Given fast strokes (>200 mm/s), When drawing curves, Then smoothing preserves curvature without overshoot and maximum lateral error vs. a cubic‑spline reference is <= 0.8 px at 1x (95th percentile). Given sudden direction changes, When cornering, Then prediction overshoot is <= 3 px and visually corrects within <= 30 ms.
Pressure and Tilt Support with Graceful Fallback
Given a device reporting PointerEvent.pressure, When pressure varies from 0.1 to 1.0 during a stroke, Then mapped stroke width/opacity vary monotonically with max deviation <= 5% from the configured response curve. Given a device reporting tiltX/tiltY, When tilt changes by 30°, Then brush orientation changes by 30° ± 5° and anisotropic width reflects tilt. Given a device without pressure/tilt, When drawing, Then strokes render with default width/opacity, no runtime errors, and no console warnings, with continuous appearance throughout the session.
Graceful Degradation on Low‑Power Devices
Given the device is on battery with power‑saver active or battery level <= 20% (if available), When drawing, Then the engine reduces sampling and increases smoothing such that CPU usage attributable to ink stays <= 20% of one core on mid‑range mobile and tip‑to‑ink lag stays <= 3 px at 1x (95th percentile). Given thermal throttling is detected (reduced rAF cadence), When drawing continuously for 30 seconds, Then ink remains continuous with dropped frames < 5% and maximum path deviation from high‑fidelity reference <= 0.7 px at 1x. Given charging resumes or power‑saver disables, When drawing, Then full sampling/smoothing is restored within 1 second.
Consistent Stroke Appearance Across Zoom and Export
Given document zoom levels 25%, 100%, and 400%, When rendering the same stroke, Then stroke appearance in document space remains consistent: width scales with zoom and varies by <= 2% in device‑independent units. Given exports to PNG @1x and @4x and to SVG, When exporting a canvas with pressure‑varied strokes, Then raster output has geometric RMS error <= 0.3 px vs. on‑canvas rendering and SVG preserves path topology and width profiles exactly. Given a high‑DPI display (devicePixelRatio >= 2), When drawing and zooming, Then strokes remain crisp with no double‑bleed or pixelation and antialiasing halo <= 1 px.
Robust Palm Rejection & Touch Filtering
"As a tablet user, I want the app to ignore my palm and stray touches while writing so that my markups stay precise and my view doesn’t jump."
Description

A touch-input classifier that prioritizes stylus events and suppresses accidental palm and finger contacts during inking. It uses pointerType awareness, contact geometry, movement heuristics, and temporal gating to ignore non-stylus inputs while a pen is active. Left- and right-handed modes adjust rejection zones relative to the writing edge, and a quick toggle allows manual override. The solution respects platform gestures outside the canvas and restores touch interactions when not inking. This integrates seamlessly with PlanPulse’s workspace, ensuring precise annotations on dense drawings without unintended pans or zooms.
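The temporal-gating rule in this description can be sketched as a small state machine: while a pen is down (and for a short window after pen-up), non-pen pointers are ignored on the canvas. The 150 ms restore window mirrors the acceptance criteria below; the event shape and class name are simplified assumptions.

```typescript
interface PointerSample { pointerType: "pen" | "touch"; phase: "down" | "move" | "up"; t: number; }

class PalmGate {
  private penDown = false;
  private penUpAt = -Infinity;
  private readonly restoreDelayMs = 150; // touch restored this long after pen-up

  // Returns true if the event should reach the canvas.
  accept(e: PointerSample): boolean {
    if (e.pointerType === "pen") {
      if (e.phase === "down") this.penDown = true;
      if (e.phase === "up") { this.penDown = false; this.penUpAt = e.t; }
      return true; // pen input is always honored
    }
    // Non-pen contacts are suppressed while the pen is active...
    if (this.penDown) return false;
    // ...and for a short grace period after pen-up.
    if (e.t - this.penUpAt < this.restoreDelayMs) return false;
    return true;
  }
}
```

The real classifier layers contact geometry, movement heuristics, and handedness-aware rejection zones on top of this gate; the gate alone already prevents palm-induced pans while inking.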

Acceptance Criteria
Active Pen Input: Suppress Palm/Finger Contacts on Canvas
Given a compatible device with active stylus support and the PlanPulse canvas is focused When a pen pointerdown occurs and a palm or finger contacts the canvas concurrently Then only pen strokes are rendered on the canvas And no marks are produced by non-pen contacts And no panning or zooming is triggered within the canvas And inking latency with suppression enabled is <= 20 ms at P95
Temporal Gating Ignores Non-Stylus Contacts While Pen Is Down
Given a pen pointerdown is active on the canvas When any non-pen contact begins within 300 ms of pen-down or while the pen is moving Then those non-pen contacts are ignored on the canvas (no stroke, no gesture) And within 150 ms after pen-up, normal touch interactions on the canvas are restored And pen-side button actions and stylus double-tap are not blocked
Multi-Contact Rest: No Ink or Gestures While Writing
Given a pen stroke is in progress on the canvas When 1 to 5 additional touch contacts rest on the canvas (stationary or drifting < 10 px/s) Then no additional strokes are created from those contacts And no canvas panning or zooming is initiated by those contacts And the pen stroke remains continuous and smooth with no visible gaps
Handedness Mode Adjusts Rejection Zone
Given Right-handed mode is selected When the pen writes near the right-hand writing edge and the palm touches within the rejection band Then palm/finger contacts within that band are ignored Given Left-handed mode is selected When the pen writes near the left-hand writing edge and the palm touches within the mirrored rejection band Then palm/finger contacts within that band are ignored And switching handedness updates the rejection band immediately and persists the preference
Quick Toggle Overrides Palm Rejection
Given the Palm Rejection toggle is set to Off When finger touches or gestures occur on the canvas Then touch interactions behave normally on the canvas (pans, zooms, finger ink if enabled) Given the Palm Rejection toggle is set to On When a pen is active on the canvas Then non-pen contacts are suppressed according to the rules And the toggle is keyboard accessible and exposes state via ARIA-pressed
Touch Interactions Restored When Not Inking; Gestures Outside Canvas Unaffected
Given no pen contact is active on the canvas When the user performs a two-finger pinch or drag on the canvas Then canvas zoom and pan gestures operate normally Given a pen stroke is active on the canvas When the user scrolls or pinches outside the canvas area Then the page or OS-level gesture executes normally And after pen-up, canvas touch gestures are re-enabled within 100 ms
Offline Autosave & Crash Recovery
"As a project lead working in a basement site with no signal, I want my ink to autosave instantly and recover after a crash so that I never lose markups."
Description

An offline-first persistence layer that journals strokes and tool state incrementally to durable storage with zero user action. Each stroke is appended to an operation log and periodically checkpointed into compact snapshots to minimize replay time. Writes are atomic and resilient to crashes; on restart, the session restores to the last drawn mark within one second. A service worker coordinates storage (IndexedDB) and sync handoff, and autosave operates entirely offline with conflict-safe IDs, enabling uninterrupted field markups. The design minimizes I/O to conserve battery and supports workspace-level encryption keys for local privacy.
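The journal-plus-checkpoint design can be sketched in memory as follows: every stroke op is appended to a log, and every `CHECKPOINT_EVERY` ops the state is folded into a snapshot so recovery replays only the tail. Arrays stand in for IndexedDB here, and the op/state shapes are illustrative.

```typescript
type StrokeOp = { seq: number; points: number[] };

const CHECKPOINT_EVERY = 1000;

class Journal {
  log: StrokeOp[] = [];
  snapshot: { upToSeq: number; state: number[] } = { upToSeq: 0, state: [] };

  append(op: StrokeOp): void {
    this.log.push(op);
    if (op.seq % CHECKPOINT_EVERY === 0) {
      // Fold everything so far into the snapshot; later recoveries replay only
      // ops with seq greater than upToSeq.
      this.snapshot = { upToSeq: op.seq, state: this.recover() };
    }
  }

  // Crash recovery: snapshot state plus replay of ops after the checkpoint.
  recover(): number[] {
    const tail = this.log.filter(op => op.seq > this.snapshot.upToSeq);
    return tail.reduce((state, op) => state.concat(op.points), this.snapshot.state.slice());
  }
}
```

Bounding the replay tail to at most `CHECKPOINT_EVERY` ops is what keeps cold-start restore within the one-second budget regardless of session length.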

Acceptance Criteria
Instant Offline Stroke Journaling
Given the device is offline and a workspace is open When the user draws continuously for 2 minutes producing ≥5,000 stroke points across ≥100 strokes Then 100% of emitted stroke points are appended to an IndexedDB operation log within 25 ms of receipt And no network requests are made during journaling (verified via network inspector and service worker logs) And average storage write throughput ≤60 KB/min and autosave CPU utilization ≤5% median during the session And no frame-drop attributable to autosave exceeds 1% over the session (measured via rAF metrics)
Crash Recovery to Last Mark ≤1s
Given an active drawing session with unsynced operations in the log When the app is force-terminated mid-stroke and relaunched into the same workspace Then the canvas restores to the exact last visible mark (no missing or duplicated points) within ≤1,000 ms of document open And the last tool state (tool type, color, thickness, transform/zoom) is restored And input latency to first post-recovery stroke ≤100 ms And recovery succeeds with op logs up to 10,000 entries without manual user action
Checkpoint Snapshots Limit Replay Time
Given ongoing drawing that appends operations to the log When either 2,000 new operations have been appended or 5 seconds of inactivity elapse (whichever comes first) Then a compact snapshot is created with size ≤200 KB and includes all ops up to that point And snapshot creation does not block input for any frame by more than 4 ms (p95) And on cold start with 50,000 total ops, total replay time to render the full state ≤300 ms and the number of ops replayed after the latest snapshot ≤2,000
Atomic Writes and Log Integrity Under Faults
Given journaling uses transactional writes with sequence numbers and checksums When power loss or browser kill is injected at random intervals during writes (10,000 trials) Then the operation log opens without corruption (checksum/sequence validation passes) in 100% of trials And at-most-once semantics hold (no duplicated records) and no partially written records are visible And no completed stroke segments are lost; at most the currently in-flight sub-segment (<20 ms of points) is absent
Service Worker Offline Autosave and Sync Handoff
Given the user creates ≥3,000 offline operations with conflict-safe IDs (e.g., UUIDv7/ULID) When connectivity is restored Then the service worker detects pending operations and enqueues background sync without blocking the UI And server-side receipt is exactly-once per operation (idempotent retries verified by 2 simulated network failures) And no ID collisions occur across two devices generating 1,000,000 IDs each (0 duplicates observed)
Workspace-Level Encryption at Rest
Given a workspace encryption key is provisioned locally When autosave writes operation logs and snapshots Then all persisted records are encrypted at rest (AES-GCM or equivalent) with unique nonces per record And clearing the local key renders stored data undecryptable (decryption fails and content is unreadable) And keys never leave the device (no export; verified by audit of network payloads) And encryption overhead adds ≤5% to save/restore time on the reference device
Battery- and I/O-Savvy Autosave Policies
Given the device is on battery (≤30% charge) and the user draws for 10 minutes producing ≥20,000 points When autosave runs continuously Then average disk write rate ≤100 KB/min and autosave CPU utilization ≤5% median, ≤10% p95 And no heavy diff/computation jobs are scheduled while on battery; such jobs only run when device is charging And frame drops attributable to autosave ≤1% over the session
Battery-Aware Compute Scheduler
"As a mobile user away from power, I want heavy processing to wait until I’m charging so that my battery lasts and inking stays smooth."
Description

A scheduling layer that adapts compute intensity to power conditions, prioritizing low-latency inking while deferring nonessential work. When battery is low or the device is not charging, heavy tasks (e.g., raster diffs, high-resolution thumbnail generation) are queued and executed during idle windows or when charging is detected; when battery information is unavailable, the scheduler uses user preferences and activity/idle signals to apply the same behavior. It exposes clear status cues and a manual 'Prefer Battery Saver' toggle. This preserves smooth inking and extends session duration without sacrificing data integrity.
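The scheduling policy described above reduces to a small decision function: given power signals (any of which may be unavailable) and the user preference, decide whether a heavy task runs now, runs CPU-capped during idle, or stays queued. Thresholds mirror the acceptance criteria; the types and function name are assumptions for the sketch.

```typescript
interface PowerState {
  batteryLevel?: number;       // 0–1, undefined if the battery API is unavailable
  charging?: boolean;
  preferBatterySaver: boolean; // the manual toggle
  idleMs: number;              // time since last user input
}

type Decision = "run" | "run-capped-idle" | "queue";

function scheduleHeavyTask(p: PowerState): Decision {
  if (p.preferBatterySaver) return "queue";  // manual override wins
  if (p.charging === true) return "run";     // charging: drain the queue
  if (p.batteryLevel === undefined) {
    // Battery info unavailable: fall back to activity/idle signals.
    return p.idleMs >= 30_000 ? "run-capped-idle" : "queue";
  }
  if (p.batteryLevel <= 0.2) return "queue"; // low battery: defer everything
  return p.idleMs >= 20_000 ? "run-capped-idle" : "queue";
}
```

Keeping the decision pure makes it trivial to unit-test every power/preference combination, which matters given how many of these signals are optional across platforms.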

Acceptance Criteria
Low Battery Defers Heavy Compute, Preserves Inking
Given device battery is at or below 20% and not charging And heavy tasks are enqueued (raster diffs, high‑res thumbnail generation) When the user is actively inking Then heavy tasks are placed in a low‑power queue within 200 ms And no heavy task executes while inking is active And inking p95 latency remains ≤ 25 ms during 60 s of continuous strokes And autosave succeeds with an interval ≤ 2 s between saves And a "Battery saver active — tasks queued" status appears within 500 ms
Charging Resumes Queued Tasks with Priority Controls
Given queued heavy tasks exist And the device begins charging or battery rises above 40% When the scheduler evaluates the queue Then queued tasks start within 2 s in FIFO order And background CPU usage for queued tasks is capped at 60% And inking p95 latency remains ≤ 25 ms while tasks run And the status updates to "Processing queued tasks" with progress (0–100%)
Battery Info Unavailable — Preference and Activity‑Driven Scheduling
Given OS battery APIs are unavailable or return undefined And the user has set "Prefer Battery Saver" to ON When the scheduler evaluates power state Then the system behaves as low‑power (queue heavy tasks) within 200 ms And status reads "Battery info unavailable — battery saver assumed" Given OS battery APIs are unavailable And the user has "Prefer Battery Saver" OFF And the workspace is idle for ≥ 30 s When the scheduler evaluates the queue Then a single heavy task may run during idle with CPU cap 40% And any pointer/keyboard input pauses queued tasks within 100 ms
Manual 'Prefer Battery Saver' Toggle Overrides Scheduling
Given the "Prefer Battery Saver" toggle is visible in Settings and Quick Actions When the user toggles it ON Then low‑power mode engages within 500 ms and future heavy tasks are queued And the preference persists across app restarts and device reboots When the user toggles it OFF while charging Then queued tasks resume within 2 s And status updates to reflect the new mode within 500 ms
User Status Cues for Power Mode and Queue Visibility
Given the app is running When power mode changes (automatic or manual) Then a persistent, accessible indicator shows the current mode textually And tapping/clicking the indicator opens a panel listing queued tasks with counts And the indicator and panel meet WCAG 2.1 AA (contrast ≥ 4.5:1, keyboard focus, ARIA roles) And the indicator does not overlap the canvas input area or occlude stroke paths
Data Integrity Across Queue Pause/Resume and Crashes
Given heavy tasks are queued or executing When the app crashes or the device reboots Then the queue and task metadata are persisted and recovered on next launch And partially computed outputs are validated or discarded before use And no data loss occurs for saved strokes or versions And a "Recovered queued tasks" notice is shown on resume
Idle‑Window Execution with Input‑Responsive Pausing
Given the user is inactive for ≥ 20 s and battery > 20% (not charging) And heavy tasks are queued When the scheduler detects an idle window Then queued tasks run with a background CPU cap ≤ 30% And any user input pauses queued tasks within 100 ms And subsequent inking shows no visible frame drops (p95 latency ≤ 25 ms)
Deferred Diff & Sync Pipeline
"As a small-firm lead, I want my offline markups to sync and generate diffs automatically when I’m back at my desk so that clients see clean versions without me managing uploads."
Description

A background pipeline that computes structural diffs of markups against the base drawing and synchronizes them when conditions are optimal. While offline or on battery, the pipeline records high-level operations; once charging or idle with connectivity, it batches operations, generates diffs and thumbnails in a Web Worker/Service Worker, and syncs them with PlanPulse’s versioning service. It ensures causal ordering, retries with exponential backoff, and produces one-click approval artifacts identical to online behavior. User-facing progress is non-blocking, with clear indicators when pending diffs are queued or completed.
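The retry behavior in this pipeline can be sketched as a backoff schedule matching the criterion below: factor-2 growth from 1 s, 0–10% jitter, capped at 300 s. The function name is illustrative; the random source is injectable so the schedule is testable.

```typescript
function retryDelayMs(attempt: number, random: () => number = Math.random): number {
  const baseMs = 1_000;
  const capMs = 300_000;
  const exp = baseMs * 2 ** attempt;    // 1 s, 2 s, 4 s, ... (factor 2)
  const jitter = 1 + 0.1 * random();    // 0–10% added jitter
  return Math.min(Math.round(exp * jitter), capMs);
}

// attempt 0 → ~1 s, attempt 4 → ~16 s, large attempts saturate at 300 s
```

Jitter prevents a fleet of reconnecting devices from retrying in lockstep; capping after jitter keeps the 300 s ceiling strict.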

Acceptance Criteria
Offline Ink Capture with Deferred Diff
Given the device is offline or on battery power and the user makes stylus markups, When strokes are completed, Then each high‑level operation is appended to the local queue within 50 ms per op and autosaved within 200 ms after the last stroke; And no diff or thumbnail computation runs; And a non‑blocking "Pending" badge increments within 500 ms; And queued operations persist across app reloads and OS process kills.
Auto Diff & Thumbnail on Charge + Connectivity
Given queued operations exist for a document and the device becomes charging, online, and idle for at least 5 seconds, When the pipeline runs, Then diffs and thumbnails are computed in a Web Worker/Service Worker without blocking the UI thread; And the UI maintains ≥55 FPS with no main‑thread block >100 ms; And batched operations are processed in document order and the queue drains; And completion updates the document state and clears the pending badge.
Causal Ordering and Idempotent Sync
Given operations O1..On are recorded with monotonic sequence numbers per document, When syncing to the versioning service, Then operations are applied server‑side in the same causal order; And re‑transmissions due to retry are idempotent (no duplicate effects); And the resulting server version checksum matches the local recomputed state checksum.
Retry with Exponential Backoff
Given a sync attempt fails with transient errors (network loss or HTTP 5xx), When the pipeline schedules retries, Then delays grow exponentially (factor 2) with jitter 0–10%, starting at 1 s and capped at 300 s; And after 10 consecutive failures, a non‑blocking warning is shown and an error event is logged; And retries resume automatically when connectivity returns; And on permanent errors (HTTP 4xx except 408/429), the item is marked Needs Attention and is not retried until user action.
One‑Click Approval Artifact Parity
Given deferred processing generates diffs and thumbnails, When the sync completes, Then the one‑click approval artifact (bundle + metadata) is hash‑equal to the artifact produced by the online path for the same operations; And the approval link is available immediately upon server ack; And user interaction flow requires no additional steps compared to fully online behavior.
Clear Non‑Blocking Progress Indicators
Given there are pending diffs or active sync jobs, When the user is drawing or navigating, Then no modal or blocking spinner is shown; And an unobtrusive indicator displays counts and states (Queued, Syncing, Complete) and is updated within 1 s of state change; And the indicator is accessible (aria‑live="polite", contrast ≥ 4.5:1); And tapping the indicator opens a queue view without interrupting current work.
Persistence and Recovery Across Restarts
Given queued operations exist, When the app or Service Worker is terminated and later restarted, Then the queue and scheduling state are restored from durable storage; And the pipeline resumes automatically under the same gating conditions; And no recorded operation is lost or duplicated across restart; And a simulated crash between strokes results in full recovery of all saved operations.
Lightweight Stroke Storage & Compression
"As an architect working across large drawings, I want markups to load fast and use minimal storage so that I can move between sheets without delays."
Description

A compact, battery-friendly stroke model that stores ink as vector paths with delta and variable-length encoding to minimize space and I/O. Pressure, tilt, and timestamp metadata are preserved, while quantization and path simplification keep visual fidelity. Snapshots compress with fast codecs and are chunked for partial loading, enabling quick open times on large sheets. The storage format integrates with PlanPulse’s export and versioning layers, supports forward compatibility, and keeps memory usage bounded to prevent GC pauses during long sessions.
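The delta + variable-length encoding described above can be sketched for one coordinate axis: quantized integer coordinates become consecutive deltas, each delta is zigzag-mapped (small signed values become small unsigned values), then written as LEB128-style varint bytes. This is an illustrative sketch of the technique, not the actual file format.

```typescript
// Zigzag: maps ..., -2, -1, 0, 1, 2, ... to 3, 1, 0, 2, 4, ...
function zigzag(n: number): number { return n < 0 ? -2 * n - 1 : 2 * n; }
function unzigzag(z: number): number { return z % 2 === 0 ? z / 2 : -(z + 1) / 2; }

function encodeDeltas(values: number[]): number[] {
  const out: number[] = [];
  let prev = 0;
  for (const v of values) {
    let z = zigzag(v - prev);
    prev = v;
    // Varint: 7 payload bits per byte, high bit set while more bytes follow.
    do { out.push((z & 0x7f) | (z > 0x7f ? 0x80 : 0)); z >>>= 7; } while (z > 0);
  }
  return out;
}

function decodeDeltas(bytes: number[]): number[] {
  const out: number[] = [];
  let prev = 0, acc = 0, shift = 0;
  for (const b of bytes) {
    acc |= (b & 0x7f) << shift;
    if (b & 0x80) { shift += 7; continue; } // continuation byte
    prev += unzigzag(acc);
    out.push(prev);
    acc = 0; shift = 0;
  }
  return out;
}
```

Because stylus samples arrive densely, most deltas fit in a single byte, which is where the ≥60% size reduction over raw 32-bit floats comes from.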

Acceptance Criteria
Delta-Encoded Vector Stroke Storage
Given raw stroke points recorded as 32-bit float x/y coordinates, When encoded with delta and variable-length encoding, Then the total byte size of points is reduced by at least 60% versus the raw 32-bit float baseline. Given an encoded stream of 10,000 points, When decoding in the reference test harness, Then total decode time is <= 10 ms (>= 1,000,000 points/second throughput). Given an encode-decode round trip of a stroke, When reconstructing positions at canvas scale 1.0, Then the maximum absolute position error per point is <= 0.25 px. Given a single-byte corruption within an encoded stroke stream, When decoding, Then the decoder detects the error, isolates it to the affected stroke, and continues without process crash.
Metadata Round-Trip Integrity (Pressure/Tilt/Timestamp)
Given per-point pressure values in [0,1], When encoded and decoded, Then absolute error per point is <= 1/255 and no reordering occurs. Given per-point tilt in degrees (0–180) and azimuth if available, When encoded and decoded, Then absolute error per component is <= 0.5°. Given monotonically non-decreasing per-point timestamps in milliseconds, When encoded and decoded, Then monotonicity is preserved and absolute error per point is <= 1 ms. Given strokes from devices lacking tilt or pressure sensors, When encoded, Then absent fields are flagged explicitly and overhead for missing metadata is <= +1 byte per point.
Visual Fidelity After Quantization and Simplification
Given an original stroke path and its simplified/quantized counterpart, When rendered at 100% zoom, Then the maximum perpendicular deviation between paths is <= 0.5 px. Given simplification applied across the standard stroke corpus, When comparing point counts, Then average point count reduction is >= 30% while meeting the 0.5 px error bound. Given simplified paths, When validated geometrically, Then no self-intersections are introduced relative to the original path topology. Given rasterized before/after masks at 1 px stroke width, When computing intersection-over-union (IoU), Then IoU >= 0.98.
Snapshot Compression and Chunked Partial Loading
Given an uncompressed snapshot of 10 MB, When compressed with the chosen fast codec profile, Then compression finishes in <= 100 ms and compressed size is reduced by >= 30%. Given a target chunk size of 512 KB, When saving a snapshot, Then produced chunks are within 256 KB–1 MB and each chunk includes a checksum that validates on load. Given opening a document where the initial viewport covers <= 5% of the sheet area, When loading, Then <= 10% of total snapshot bytes are read before first paint. Given a snapshot containing a corrupted chunk, When loading, Then unaffected chunks load successfully and the corrupted chunk is reported and skipped without crashing.
Large Sheet Quick Open Time
Given a document with 100,000 points across 1,000 strokes and a 200 MB on-disk history, When cold-opening from local storage offline, Then first visible content paints in <= 1.5 seconds. Given the same document, When warm-opening within the same session, Then first visible content paints in <= 500 ms. Given first paint occurs, When measuring I/O, Then total bytes read prior to first paint is <= 15 MB.
Forward Compatibility and Versioning Integration
Given a file containing unknown optional chunks from a higher minor format, When loaded and re-saved, Then all unknown chunks are preserved byte-for-byte. Given a file marked with format version N.M, When opened by a reader supporting N.(M-1..M+1), Then loading succeeds and unsupported features are ignored without loss of known data. Given two successive saves with identical content, When diffed by the versioning layer, Then unchanged strokes and chunks retain identical content hashes/IDs. Given export to a PlanPulse archive, When including storage payload and manifest, Then the manifest records schema version and feature flags required for forward compatibility.
Bounded Memory and GC Pause Constraints
Given a 60-minute continuous inking session averaging 20 points/second, When profiling heap usage attributable to storage, Then peak usage remains <= 80 MB and sustained allocation rate <= 30 MB/min. Given the same session, When profiling GC pauses attributed to storage operations, Then no single pause exceeds 16 ms. Given a runtime memory pressure signal from the OS/runtime, When received, Then the storage cache evicts least-recently-used chunks to keep cache size <= 20 MB without data loss.
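The eviction behavior in the last criterion is a standard LRU cache with a byte budget; shrinking the budget on a pressure signal and evicting oldest-first is lossless because evicted chunks remain on disk. A minimal sketch (class and method names are illustrative):

```python
from collections import OrderedDict

class ChunkCache:
    """LRU chunk cache with a byte budget; on memory pressure the budget
    shrinks and least-recently-used chunks are evicted without data loss."""
    def __init__(self, max_bytes=20 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.size = 0
        self._items = OrderedDict()        # chunk_id -> bytes, oldest first

    def get(self, cid):
        data = self._items.get(cid)
        if data is not None:
            self._items.move_to_end(cid)   # mark as recently used
        return data

    def put(self, cid, data):
        if cid in self._items:
            self.size -= len(self._items.pop(cid))
        self._items[cid] = data
        self.size += len(data)
        self._evict()

    def on_memory_pressure(self, new_max_bytes):
        """Handler for an OS/runtime pressure signal: tighten the budget."""
        self.max_bytes = new_max_bytes
        self._evict()

    def _evict(self):
        while self.size > self.max_bytes and self._items:
            _, evicted = self._items.popitem(last=False)  # oldest first
            self.size -= len(evicted)
```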

Window Planner

A visual scheduler for setting time-boxed review slots by discipline with drag-and-drop lanes, time‑zone awareness, and holiday calendars. It clarifies who reviews when, prevents overlaps and bottlenecks, and gives everyone a clear countdown so MEP, structural, and interiors stay on pace without back‑and‑forth emails.

Requirements

Discipline Lanes & Slot Management
"As a project lead, I want to visually schedule and adjust discipline-specific review slots so that I can coordinate reviews without back-and-forth emails and keep the team aligned on timing."
Description

Deliver a visual scheduler with vertical lanes per discipline (e.g., MEP, Structural, Interiors) and a horizontal time axis for creating and managing time‑boxed review slots. Users can create, resize, split, and drag‑and‑drop slots, with snapping to configured increments and working hours. Enforce configurable min/max slot durations, buffer times, and lane capacity. Each slot links to a PlanPulse drawing package/version and displays a live countdown and status (scheduled, in review, completed, overdue). Provide bulk operations (multi‑select move/extend), templated lane setups by project phase, undo/redo, autosave, and an audit log of changes. Persist schedules in UTC with deterministic ordering and recover gracefully from conflicts or network loss (optimistic UI with server reconciliation). Meet WCAG 2.1 AA for keyboard navigation and contrast, maintain 60 FPS interaction on modern hardware, and support responsive layouts for tablet use.

Acceptance Criteria
Create and Edit Slots with Constraints
Given a project configured with 15 min increments, working hours 09:00–18:00, min slot 30 min, max slot 4 h, and 15 min buffers; When a user drag-creates a slot from 08:50 to 09:20 in the MEP lane; Then the slot snaps to 09:00–09:30 and is created. When the user resizes the slot beyond 4 h; Then the resize is blocked and an inline message states "Max duration 4 h". When the user resizes the slot below 30 min; Then the resize is blocked and an inline message states "Min duration 30 min". Given an adjacent slot ending 11:00; When the user drags another slot to start at 11:05; Then the drop is blocked and guidance shows "Requires 15 min buffer"; When dropped at 11:15; Then it succeeds. When a 3 h slot is split at 1 h 30 min; Then two slots are created (each ≥30 min) with buffers enforced on both sides.
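The snapping and constraint rules above compose into two small pure functions: round to the increment and clamp to working hours (which is why 08:50 lands on 09:00), then validate duration and buffers against lane neighbors. A sketch under the configuration in the criterion (names and the validation API are assumptions):

```python
from datetime import datetime, timedelta

SNAP_MIN = 15                        # snap increment (minutes)
WORK_START, WORK_END = 9, 18         # working hours 09:00-18:00
MIN_SLOT = timedelta(minutes=30)
MAX_SLOT = timedelta(hours=4)
BUFFER = timedelta(minutes=15)

def snap(dt: datetime, increment_min: int = SNAP_MIN) -> datetime:
    """Round to the nearest increment, then clamp into working hours."""
    base = dt.replace(minute=0, second=0, microsecond=0)
    offset = timedelta(minutes=dt.minute, seconds=dt.second)
    steps = round(offset / timedelta(minutes=increment_min))
    snapped = base + steps * timedelta(minutes=increment_min)
    lo = dt.replace(hour=WORK_START, minute=0, second=0, microsecond=0)
    hi = dt.replace(hour=WORK_END, minute=0, second=0, microsecond=0)
    return min(max(snapped, lo), hi)

def validate(start, end, neighbors):
    """Return None if a slot is valid, else the blocking message.
    `neighbors` is a list of (start, end) slots in the same lane."""
    dur = end - start
    if dur < MIN_SLOT:
        return "Min duration 30 min"
    if dur > MAX_SLOT:
        return "Max duration 4 h"
    for ns, ne in neighbors:
        if start < ne + BUFFER and ns - BUFFER < end:
            return "Requires 15 min buffer"
    return None
```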
Lane Capacity and Overlap Prevention
Given lane capacity = 2 concurrent slots and two slots exist 10:00–11:00; When attempting to create a third overlapping slot in the same lane/time; Then creation is blocked with "Lane at capacity" and a suggestion chip "Next available: 11:15" is shown. When the user accepts the suggestion; Then the slot is created at 11:15 with buffers respected. When one of the overlapping slots is deleted; Then creating a new slot overlapping 10:00–11:00 succeeds if buffers are met.
Time Zones, Working Hours, Holidays, UTC Persistence, and Ordering
Given User A (UTC−07:00) and User B (UTC+01:00), and a slot stored as 2025-10-05T16:00Z–18:00Z; When both view the schedule; Then A sees 09:00–11:00 and B sees 17:00–19:00 local times. Given the project holiday calendar marks 2025-12-25 as a holiday; When attempting to create a slot on that date; Then creation is blocked unless "Allow holiday scheduling" is enabled; When override is enabled; Then the slot is created with a holiday badge. Given working hours 09:00–18:00; When dragging beyond hours; Then slot edges snap to the nearest boundary within hours. When multiple slots share the same start time; Then the rendered order is deterministic: by lane key ascending, then slot ID ascending. When the network is offline during a change; Then changes are queued locally and persisted to the server in UTC on reconnection without duplication. When the server returns a version conflict for a changed slot; Then the client refetches the authoritative schedule, replays the local change if valid, or presents a conflict dialog with options (keep server, keep mine, edit); The final resolution is persisted and logged.
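The UTC-persistence rule is what makes the two-viewer scenario work: store one instant, convert per viewer at render time. A sketch using Python's `zoneinfo` (assumes IANA tz data is installed; the zones below are chosen because they match the criterion's −07:00 and +01:00 offsets on 2025-10-05):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def render_local(start_utc: datetime, viewer_tz: str) -> str:
    """Render a UTC-persisted slot start in the viewer's local time."""
    return start_utc.astimezone(ZoneInfo(viewer_tz)).strftime("%H:%M")

# The slot is stored once, as UTC -- never as a local wall-clock time.
start = datetime(2025, 10, 5, 16, 0, tzinfo=timezone.utc)
```

Deterministic ordering then falls out of sorting on the stored UTC instant with `(lane_key, slot_id)` as the tiebreaker, independent of any viewer's zone.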
Slot Linking, Countdown, and Status Lifecycle
Given a slot linked to drawing package "A101 v3"; When the link is updated to "A101 v4"; Then the slot UI displays "A101 v4" within 2 seconds. Given current time < slot start; Then status = "scheduled" and countdown shows time until start (minute precision). At slot start time; Then status auto-transitions to "in review" and countdown shows time remaining; At end time if not completed; Then status = "overdue" with negative countdown. When a reviewer marks the slot "Completed" or the linked package receives client approval; Then status = "completed", countdown stops, and completion timestamp is recorded. When the linked package is archived; Then the slot shows an inline warning and disables the open-link action until relinked.
Bulk Multi-Select Move/Extend with Undo/Redo and Autosave
Given three selected slots across two lanes; When moving them forward by 60 minutes; Then all slots attempt to move; slots that violate constraints remain in place and a summary lists failures with reasons; successful moves preserve relative offsets. When extending the selection by +15 minutes; Then min/max durations and buffers are enforced per slot; failures are reported per slot. After any bulk operation; Then the change is autosaved within 1 second of idle and an audit log entry records user, timestamp, affected slot IDs, and old/new times. When Undo is invoked; Then the entire bulk operation is reverted across all affected slots; When Redo is invoked; Then the operation reapplies; Undo/Redo history persists within the scheduler for the session.
Templated Lane Setups by Project Phase
Given a "Design Development" template defining lanes (MEP, Structural, Interiors) with capacities and working hours; When applying the template to a project with existing lanes and slots; Then missing lanes are added, existing lane settings are updated, and existing slots remain unchanged. When previewing the template; Then a diff lists lanes and settings to be added/updated; When confirmed; Then changes apply atomically. After application; Then an audit log entry records template name, actor, timestamp, and affected lanes. When applying the same template again; Then the second application is a no-op (idempotent) with no slot changes recorded.
Accessibility, Keyboard Navigation, Contrast, and Performance
Given keyboard-only use; When focus enters the scheduler; Then users can navigate lanes and time cells with Arrow keys, create a slot with Enter, move/resize with Shift+Arrow in 15 min increments, split with S, and open details with Space; all functions are available without a mouse. All interactive elements meet WCAG 2.1 AA: focus indicators ≥3:1 contrast; text/icons ≥4.5:1 contrast; ARIA roles/states/names are present; screen readers announce slot title, discipline, package, start/end, status, and countdown. On modern hardware (2021+ MacBook Air or iPad Pro 2021+); During drag, resize, or scroll; Then the frame rate is ≥60 FPS at the 95th percentile over a 10 s interaction; initial render of up to 500 slots completes in ≤1.5 s; dragging latency ≤16 ms per frame. On tablet (≥768 px width); Then layout remains responsive with sticky time header, vertical lane scrolling, and full parity of touch and keyboard interactions.
Time Zone & DST Handling
"As a distributed reviewer, I want schedule times shown in my local time zone with accurate DST handling so that I never miss a slot due to time conversion errors."
Description

Implement end‑to‑end time‑zone awareness using the IANA tz database. Store canonical UTC timestamps and render local times per viewer with clear labels and optional dual‑time display (creator vs. viewer). Automatically detect user time zones and allow manual override per user and per project. Accurately handle daylight saving transitions (including gaps and overlaps), prohibit creating slots that ambiguously span DST changes, and provide safe suggestions. Ensure reminders and countdowns trigger according to each assignee’s local time while maintaining a single source of truth in UTC.

Acceptance Criteria
UTC Source of Truth for Window Planner Slots
Given a user schedules a review slot using local start and end times When the slot is saved Then the system stores start_at_utc and end_at_utc as UTC timestamps without local offsets And the canonical persisted timestamps are UTC-only (no local time stored as source of truth) And retrieving the slot in any viewer time zone renders times by converting from the stored UTC values And exporting or syncing internal reminders uses the stored UTC timestamps as the scheduling basis
Local Rendering and Dual-Time Display (Creator vs. Viewer)
Given a viewer in time zone A is viewing a slot created by a user in time zone B When the schedule is displayed Then the slot time is shown in the viewer’s local time with clear labels including abbreviation and offset (e.g., 3:00 PM, GMT+01:00) And a dual-time toggle reveals both Viewer Time and Creator Time, each labeled with the IANA zone name (e.g., Europe/Berlin, America/Los_Angeles) And if Viewer Time and Creator Time are the same zone, dual display is suppressed or merged to avoid redundancy And changing the viewer’s time-zone override updates displayed times immediately without altering the stored UTC values
Auto-Detect and Manual Overrides (User- and Project-Level)
Given a first-time user signs in with a browser reporting an IANA time zone When the app initializes Then the user’s profile time zone is set to the detected IANA zone And the user can manually override their profile time zone to any valid IANA identifier And invalid inputs (offset-only, Windows IDs, unknown names) are rejected with a helpful error And a project setting allows a project-level default time zone override And when a project time zone is set, project views default to that zone while still allowing per-viewer local rendering and dual-time display
DST Spring-Forward Gap Prevention and Safe Suggestions
Given a project in a time zone with a spring-forward DST gap (nonexistent local times) When a user attempts to create a slot whose start or end falls within the nonexistent local time window or crosses the gap ambiguously Then the creation is blocked with an explanation that the selected local time does not exist due to DST And the UI provides safe suggestions (e.g., next valid local time or adjusted duration) that map to valid UTC instants And the time picker disables nonexistent local times for the chosen date and time zone And accepting a suggestion results in a slot saved to UTC that corresponds to the suggested valid local time
DST Fall-Back Overlap Disambiguation and Ambiguity Prohibition
Given a project in a time zone with a fall-back DST overlap (repeated local hour) When a user selects a local time that occurs twice or creates a slot spanning the overlap Then the user is required to disambiguate by choosing the specific occurrence or UTC offset (e.g., first 01:30 PDT or second 01:30 PST) And the system records the disambiguation and maps it to a unique UTC instant And creation of slots that span the overlap without explicit disambiguation is prohibited And countdowns and reminders follow the chosen occurrence consistently
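Both DST cases, the spring-forward gap and the fall-back overlap, can be detected before save with PEP 495 `fold` semantics: render the same wall-clock time with `fold=0` and `fold=1` and compare offsets, then use a UTC round-trip to tell a nonexistent time from an ambiguous one. A sketch assuming IANA tz data is available (the function name is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def classify_local(dt_naive: datetime, tz_name: str) -> str:
    """Classify a wall-clock time as 'valid', 'nonexistent' (spring-forward
    gap), or 'ambiguous' (fall-back overlap) in the given IANA zone."""
    tz = ZoneInfo(tz_name)
    d0 = dt_naive.replace(tzinfo=tz, fold=0)
    d1 = dt_naive.replace(tzinfo=tz, fold=1)
    if d0.utcoffset() != d1.utcoffset():
        # The two folds disagree, so we are in a gap or an overlap.
        # A nonexistent time does not survive a round-trip through UTC.
        roundtrip = d0.astimezone(ZoneInfo("UTC")).astimezone(tz)
        if roundtrip.replace(tzinfo=None) != dt_naive:
            return "nonexistent"
        return "ambiguous"
    return "valid"
```

A "nonexistent" result blocks creation and drives the safe suggestions; an "ambiguous" result forces the explicit first/second-occurrence disambiguation before a unique UTC instant is stored.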
Reminder and Countdown Accuracy per Assignee Local Time
Given a review slot has a single UTC start time and multiple assignees across different time zones And reminder policies are set for 24 hours and 1 hour before start When reminders are triggered and countdowns are shown Then each assignee receives reminders at their correct local times derived from the UTC start (respecting any DST changes between scheduling and send-time) And changing an assignee’s time-zone override updates future reminder schedules without altering the UTC start And no duplicate reminders are sent for the same reminder policy and assignee
IANA Time Zone Database Usage and Validation
Given the application performs time conversions and renders time-zone choices When listing selectable time zones Then only IANA time zone identifiers are presented (e.g., America/Los_Angeles), not offsets or Windows IDs And any submitted time zone is validated against the installed IANA tz database And time conversions around known DST transitions for a representative set of zones match the IANA rules And the application exposes its loaded IANA tz database version in diagnostics
Holiday Calendars & Working Hours
"As a project coordinator, I want the scheduler to respect holidays and working hours so that review slots are realistic and don’t create avoidable delays."
Description

Provide organization‑level working hours, project‑level overrides, and per‑user PTO. Integrate regional holiday sets and allow importing calendars via ICS (Google/Microsoft 365). The scheduler should compute availability across lanes, warn on off‑hours or holidays, and offer one‑click adjustments to the next valid window. Allow authorized users to override with justification and optional escalation. Expose configuration for country/region, time window templates (e.g., 9–5 Mon–Fri), and blackout dates for client offices.

Acceptance Criteria
Set Organization Working Hours Template
Given I am an organization admin When I configure country/region and set working hours template "Mon–Fri 09:00–17:00" and save Then the template is persisted and becomes the default for all projects without overrides Given an existing project without an explicit override When I view project settings Then it displays the inherited "Mon–Fri 09:00–17:00" hours and associated region holidays Given the organization template is updated When I open an inheriting project Then the project reflects the updated hours and holidays within 60 seconds Given a user attempts to schedule outside organization working hours When they drop a slot at 19:00 local time Then the system warns "Off-hours for organization" and prevents scheduling unless the override flow is initiated
Project-Level Working Hours Override
Given I am a project owner When I enable "Project working hours override" and set "Tue–Sat 10:00–18:00" Then those hours supersede the organization template for this project Given the project override is active When I schedule a slot on Monday 11:00 Then the system warns "Outside project working hours" and offers "Adjust to next valid window" Given I click "Adjust to next valid window" When computation completes Then the slot moves to Tuesday 10:00 local project timezone within 2 seconds and a tooltip shows the adjustment reason Given a project override exists When I click "Revert to organization default" Then the project hours revert and all future unsent invites revalidate against the organization hours
Per-User PTO and Regional Holidays
Given a user has PTO set from 2025-12-24 to 2025-12-28 and the project region is "US-NY" When availability is computed Then those dates are blocked for that user and US-NY holidays are marked as unavailable across lanes Given conflicting regional holiday sets across participants When computing a cross-discipline slot Then the system treats a day as unavailable if any required participant has a holiday that day and shows per-user reason badges Given a user edits their PTO When they remove a date Then previously blocked time reopens and dependent tentative slots recalculate within 60 seconds
ICS Calendar Import (Google/Microsoft 365)
Given I provide a valid ICS URL for my Google Calendar When I click "Connect" Then events are imported within 2 minutes and marked as read-only blocks on my availability Given repeating events and updates When the ICS feed refreshes Then changes are applied idempotently without duplicate blocks, and deletions remove corresponding blocks Given an invalid ICS URL or authentication failure When import runs Then the system shows an error with remediation steps and no partial blocks are created Given I disconnect the ICS integration When the next refresh cycle runs Then all blocks sourced from that ICS are removed
Availability Computation and Warnings Across Lanes
Given disciplines MEP, Structural, and Interiors are required When I propose a slot Then the system computes overlap of all participants’ working hours, PTO, holidays, and blackout dates with time-zone normalization to the project timezone Given the slot falls partially outside any required participant’s valid window When I release the drag Then the slot is flagged with a warning icon and a message indicating which constraints are violated Given the slot is invalid When I click "Adjust to next valid window" Then the system snaps the slot to the earliest time that satisfies all constraints within 2 seconds or informs "No valid window in the next 30 days"
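Cross-lane availability is the intersection of every required participant's valid windows (working hours minus PTO, holidays, and blackouts, all normalized to one timeline first). The core operation is a pairwise interval intersection folded across participants, sketched here with plain numbers standing in for normalized timestamps:

```python
def intersect(a, b):
    """Intersect two sorted lists of (start, end) availability windows."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        s = max(a[i][0], b[j][0])
        e = min(a[i][1], b[j][1])
        if s < e:
            out.append((s, e))
        # Advance whichever window ends first.
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

def common_availability(windows_by_participant):
    """Fold pairwise intersection across every required participant."""
    parts = list(windows_by_participant)
    common = parts[0]
    for w in parts[1:]:
        common = intersect(common, w)
    return common
```

"Adjust to next valid window" is then the earliest window in the result long enough to hold the slot; an empty result over the search horizon yields the "No valid window in the next 30 days" message.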
Authorized Override with Justification and Escalation
Given I have the "Schedule Override" permission When I attempt to place a slot during a holiday or off-hours Then I am prompted to enter a justification (required) and choose optional escalation recipient(s) Given I submit an override When the slot is created Then an audit log records user, timestamp, justification text, and impacted constraints, and notifications are sent to selected recipients Given I lack the permission When I attempt the same action Then the system blocks the override and suggests requesting access or contacting a project owner
Client Blackout Dates and Region Configuration
Given client office blackout dates are configured (e.g., 2025-07-01 to 2025-07-05) and region is "UK" When proposing slots in that range Then all slots are marked unavailable with reason "Client blackout" Given multiple regions across project members When scheduling Then times are shown in the user’s local time and the project timezone, and countdown timers reflect the user’s locale correctly Given the region is changed in project settings When saved Then the associated holiday set updates and all future slots revalidate, with change logs listing added and removed holidays
Conflict Detection & Auto‑Resolution
"As a discipline lead, I want the planner to immediately flag and help resolve scheduling conflicts so that the review flow stays on pace without bottlenecks."
Description

Add real‑time validation to prevent or flag conflicts such as overlapping assignments for a reviewer, lane capacity breaches, and cross‑discipline dependency violations (e.g., Structural starts after MEP completes). Provide inline warnings, collision heatmaps, and actionable suggestions (shift by N hours, split slot, swap reviewer). Support soft conflicts with override reasons and hard blocks for policy rules. Implement optimistic locking and server‑side checks to avoid concurrent edit collisions, with non‑destructive merge suggestions when conflicts arise.

Acceptance Criteria
Real-time overlap detection and auto-suggestions per reviewer
Given a reviewer has an existing slot from T1 to T2 in any lane And the user creates or drags a second slot for the same reviewer that overlaps any portion of [T1,T2] When the overlap occurs (during drag, resize, or on drop) Then the UI displays an inline conflict warning within 150 ms and highlights the overlapping intervals And the system presents actionable suggestions: (1) Shift to nearest non-overlapping window, (2) Split slot to avoid overlap, (3) Swap to an available reviewer in the same discipline And the Save action behavior follows policy: if hard overlap policy = true, Save is disabled; if soft = true, Save requires an override reason (min 10 chars) When the user applies a suggestion Then the schedule updates with no remaining overlaps for that reviewer and the warning clears
Lane capacity limit enforcement and resolution
Given a lane has a configured capacity N concurrent slots And the current visible window includes existing bookings When the user creates, moves, or resizes a slot such that any time slice exceeds capacity N Then a capacity conflict is shown inline on the slot and the heatmap marks the overloaded cells And the system proposes suggestions: (1) Shift by the smallest delta to the nearest within-capacity window, (2) Split across two within-capacity windows, (3) Suggest an alternate lane with available capacity (if allowed) And Save behavior follows policy: hard capacity = block Save; soft capacity = require override reason (min 10 chars) When a suggestion is accepted Then the resulting schedule contains no time slice over N and the conflict indicators clear
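The "any time slice exceeds capacity N" check is a textbook sweep line: sort slot start/end events, keep a running count, and compare the peak against the lane capacity. A sketch (numbers stand in for timestamps; ends sort before starts so back-to-back slots do not count as concurrent):

```python
def max_concurrency(slots):
    """Sweep-line: maximum number of slots active at any instant."""
    events = []
    for s, e in slots:
        events.append((s, 1))    # slot opens
        events.append((e, -1))   # slot closes
    events.sort(key=lambda ev: (ev[0], ev[1]))  # at ties, close before open
    cur = best = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best

def exceeds_capacity(slots, new_slot, capacity):
    """Would adding new_slot push any time slice over the lane capacity?"""
    return max_concurrency(list(slots) + [new_slot]) > capacity
```

The same sweep also yields per-cell conflict density for the collision heatmap, since the running count at each event boundary is exactly the load in that slice.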
Cross-discipline dependency guardrails (MEP must complete before Structural starts)
Given a dependency rule: Structural reviews must start after MEP completes for the same package And a MEP slot ends at Tm When a user schedules or moves a Structural slot whose start Ts < Tm Then a dependency violation is raised with inline messaging referencing the blocking MEP slot And the system suggests the earliest allowed start = Tm plus configured buffer (if any) And if the rule is hard, Save is disabled until Ts >= Tm (+ buffer); if soft, Save requires an override reason (min 10 chars) When the blocking MEP slot is edited Then the Structural suggestion updates in real time to reflect the new earliest allowed start
Collision heatmap visualization and drilldown
Given conflicts (overlaps, capacity, or dependencies) exist within the visible time range When the scheduler view loads or a slot is modified Then the collision heatmap updates within 200 ms to represent conflict density by time cell And a legend explains intensity levels and conflict types When a user clicks a heatmap cell Then a drilldown lists conflicts in that cell, grouped by type and sorted by severity, each entry deep-linking to the affected slot(s) And when conflicts are resolved Then the heatmap and legend reflect the cleared state with no residual highlights
Optimistic locking with non-destructive merge flow for concurrent edits
Given two users are editing the same schedule board And User A saves changes creating server version V+1 When User B attempts to save conflicting changes against version V Then the server rejects with a version conflict response including a diff of impacted slots And the client opens a merge modal showing each conflict with options: Keep server, Keep mine, Split, Shift to next available, or Swap reviewer When User B resolves all conflicts and confirms Then the merged result preserves all non-conflicting edits, applies chosen resolutions, increments the version, and no slot data is lost
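The optimistic-locking contract above is small: every save carries the version it was based on, and a stale version is rejected along with the authoritative state so the client can run the non-destructive merge instead of overwriting newer edits. A server-side sketch (class and field names are illustrative):

```python
class VersionConflict(Exception):
    """Rejection carries the server state needed for the merge modal."""
    def __init__(self, server_version, server_slots):
        self.server_version = server_version
        self.server_slots = server_slots

class ScheduleStore:
    """Optimistic locking: a write against a stale version never lands."""
    def __init__(self):
        self.version = 0
        self.slots = {}   # slot_id -> (start, end)

    def save(self, base_version, changes):
        if base_version != self.version:
            # Stale write: return the diff material, lose nothing.
            raise VersionConflict(self.version, dict(self.slots))
        self.slots.update(changes)
        self.version += 1
        return self.version
```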
Time-zone and holiday-aware validation and suggestions
Given reviewers may have different time zones and lanes have assigned holiday/working-hour calendars When the system validates conflicts or computes suggestions Then all comparisons use UTC-normalized instants And suggestions respect each affected reviewer’s working hours and holidays, avoiding non-working periods unless policy allows soft overrides And the UI displays times in the current user’s locale while tooltips show the reviewer’s local time And across DST transitions, no false overlaps or gaps are produced
Soft-conflict override justification and audit trail
Given a soft conflict (overlap, capacity, or dependency) is present When a user chooses to proceed Then the system requires an override reason (min 10 chars, max 500) and records user, timestamp, conflict type, affected slot IDs, and reason And Save is blocked until a valid reason is provided When the record is saved Then the activity log shows the override entry with filters by conflict type, reviewer, and date range, and the slot detail panel surfaces the latest override reason
Reviewer Assignment & Multi‑Channel Notifications
"As an assigned reviewer, I want clear invites and reminders across my channels so that I can prepare and complete reviews within my scheduled window."
Description

Enable assignment of primary and backup reviewers per slot with capacity checks against individual workloads. Generate calendar invites (ICS) and send notifications via email and Slack/Teams with configurable lead times (e.g., 24h, 1h) and a live countdown link back to PlanPulse. Support reminder snooze, reassignment requests, and automatic notifications when slots shift. Ensure deliverability with retry/backoff, unsubscribe preferences, and localization. Log delivery status and surface per‑slot notification history for auditability.

Acceptance Criteria
Assign Primary and Backup with Capacity Validation
- Given an open review slot and selected users, when saving primary and backup assignees, then the system validates each assignee’s workload for the slot window against their configured capacity limit.
- Given a capacity limit would be exceeded, when attempting to assign, then the save is blocked with an error naming the over-capacity user and the required free hours.
- Given assignment succeeds, when viewing the slot, then the primary and backup are displayed and the same user cannot appear in both roles.
- Given no backup is selected, when saving, then the slot is saved with only a primary reviewer and passes capacity checks for that reviewer.
Calendar Invite (ICS) Creation and Updates
- Given a slot is created or assignees change, when saving, then an ICS VEVENT is generated with UID tied to the slot, SEQUENCE starting at 0 and incrementing on updates, VTIMEZONE for the slot time zone, correct start/end, and attendees for the current primary and backup.
- Given an initial assignment, when email notifications are sent, then the ICS is attached and imports correctly in Google, Outlook, and Apple Calendar with accurate title including project and discipline.
- Given a time change or reassignment, when saving, then updated ICS is sent with incremented SEQUENCE to current attendees and METHOD:CANCEL is sent to removed attendees.
- Given a slot is deleted, when deleting, then a METHOD:CANCEL ICS is sent to all prior attendees and the event is removed from their calendars.
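The UID/SEQUENCE pair from RFC 5545 is what lets calendar clients treat a re-sent invite as an update to the same event rather than a duplicate. A minimal sketch that emits UTC timestamps for brevity (a full implementation would also embed the VTIMEZONE block and ATTENDEE lines the criterion requires; names and the PRODID are illustrative):

```python
from datetime import datetime, timezone

def build_ics(uid, sequence, start_utc, end_utc, summary, method="REQUEST"):
    """Minimal iCalendar payload: a stable UID ties updates to one event,
    and an incremented SEQUENCE tells clients to replace the old copy."""
    fmt = "%Y%m%dT%H%M%SZ"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//PlanPulse//Window Planner//EN",   # illustrative PRODID
        f"METHOD:{method}",                          # REQUEST or CANCEL
        "BEGIN:VEVENT",
        f"UID:{uid}",
        f"SEQUENCE:{sequence}",
        f"DTSTART:{start_utc.strftime(fmt)}",
        f"DTEND:{end_utc.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
```

A reassignment would re-send this payload with `sequence + 1` to current attendees and a `method="CANCEL"` copy to removed ones.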
Multi-Channel Notifications with Configurable Lead Times
- Given project-level default lead times (e.g., 24h, 1h) or per-slot overrides, when a slot with assignees is saved, then notifications are scheduled at those offsets for each assignee.
- Given user channel preferences and available integrations, when scheduling, then email and Slack/Teams notifications are queued per recipient’s enabled channels.
- Given a scheduled send fires, when the message is delivered, then it contains a live countdown link that opens the slot in PlanPulse.
- Given recipients have time-zone preferences, when computing send time and countdown, then the recipient’s time zone is used; if absent, the slot time zone is used.
Reminder Snooze Controls
- Given a pre-start reminder, when the recipient clicks Snooze (15/30/60 minutes or Until 5 minutes before start), then a single replacement reminder is scheduled on the same channel at the chosen offset and any prior pending reminder for that lead time is canceled.
- Given current time is past the slot start, when the recipient attempts to snooze, then the snooze action is disabled and no new reminder is created.
- Given the recipient unsubscribes from the channel after snoozing, when the snoozed reminder reaches its send time, then it is suppressed and recorded as Suppressed with reason Unsubscribed.
Reviewer Reassignment Request Flow
- Given an assigned reviewer cannot attend, when they submit a reassignment request with a reason from email/Slack/Teams or the slot page, then the scheduler/owner is notified immediately in-app and via configured channels.
- Given the scheduler approves and selects a new reviewer, when saving, then capacity checks are applied and must pass; the new reviewer is assigned, receives ICS and notifications, and the original reviewer receives a cancellation notice.
- Given the scheduler declines, when actioned, then the requester is notified and the slot’s assignment remains unchanged.
Automatic Notifications and Rescheduling on Slot Changes
- Given an existing slot with scheduled reminders, when the slot start/end or time zone changes, then all pending reminders are canceled and rescheduled using the same lead-time offsets relative to the new start time.
- Given the change occurs within the shortest lead-time window before start, when saving, then an immediate “slot updated” notification is sent to current assignees on their enabled channels.
- Given assignees are added or removed, when saving, then new assignees receive initial notifications and ICS, and removed assignees receive cancellation notices and their future reminders are canceled.
Deliverability, Unsubscribe, Localization, and Audit Log
- Given a notification attempt fails with a transient error, when retrying, then the system retries up to 3 times with exponential backoff and jitter (approximately 1m, 5m, 15m); after final failure the status is marked Failed and no further retries occur.
- Given a user has unsubscribed from a channel, when a notification is scheduled on that channel, then it is not sent, is logged as Suppressed with reason Unsubscribed, and is excluded from retries.
- Given a recipient language preference, when generating a notification, then localized templates and locale-specific date/time formats are used; if a template is missing, English is used as fallback.
- Given any notification lifecycle event, when viewing the slot’s Notification History, then each entry shows timestamp, channel, recipient, template name and version, locale, status (Queued, Sent, Delivered, Failed, Suppressed, Bounced), attempt count, and provider message ID; the history view loads within 2 seconds for the 95th percentile.
- Given an audit requirement, when exporting per-slot notification history, then a CSV containing all fields for the selected date range is downloaded successfully.
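The ~1m/5m/15m retry schedule with jitter can be expressed as a lookup table rather than a strict exponential, since the criterion names the approximate delays directly. A sketch (the ±20% jitter width and the -1 "budget exhausted" sentinel are assumptions):

```python
import random

RETRY_SCHEDULE = [60, 300, 900]   # ~1m, 5m, 15m, per the criteria

def retry_delay(attempt: int, jitter: float = 0.2, rng=random.random) -> float:
    """Delay in seconds before retry `attempt` (0-based), with +/-20%
    jitter so simultaneous failures don't retry in lockstep. Returns -1
    once the retry budget is spent and the status should become Failed."""
    if attempt >= len(RETRY_SCHEDULE):
        return -1.0
    base = RETRY_SCHEDULE[attempt]
    return base * (1 + jitter * (2 * rng() - 1))
```

Injecting `rng` keeps the jitter deterministic under test while production uses the default randomness.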
Client Read‑only Timeline View
"As a client stakeholder, I want a simple read‑only view of the review schedule so that I can see what’s expected of me and when without needing full workspace access."
Description

Provide a secure, read‑only schedule view for clients with shareable, expiring links and optional password protection. Hide internal lanes or sensitive metadata based on role and project settings. Present clear countdowns, time‑zone labels, and status indicators without drag‑and‑drop controls. Support mobile‑friendly rendering, embeddable iframe mode, and access logging for audit. Offer quick copy‑link and export‑to‑PDF for formal distributions.

Acceptance Criteria
Expiring Share Link with Optional Password
Given an internal user with permission to share a project timeline When they generate a client timeline link with an expiry date/time and optional password Then the system creates a unique, unguessable link tied to that project and link settings And the link remains accessible until the exact expiry date/time (to the minute) in the project's timezone And after expiry, any request using the link returns an access-expired screen and HTTP 410 And if a password was set, the link requires the correct password before rendering any timeline data And the password prompt does not reveal whether the link exists beyond “Invalid credentials or expired link”
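The link semantics above come down to three checks in a fixed order: unguessable token, expiry before anything else, then the optional password, with a deliberately uninformative failure message. A sketch using Python's `secrets` (field names and the HTTP-style return codes mirror the criterion; password hashing is assumed to happen upstream):

```python
import secrets
from datetime import datetime, timedelta, timezone

def create_share_link(project_id, ttl_hours, password_hash=None):
    """Unguessable 256-bit token plus an absolute expiry instant."""
    return {
        "project_id": project_id,
        "token": secrets.token_urlsafe(32),
        "expires_at": datetime.now(timezone.utc) + timedelta(hours=ttl_hours),
        "password_hash": password_hash,   # None means no password required
    }

def check_access(link, now, supplied_password_hash=None):
    """Return an HTTP-style status: 410 after expiry, 401 on a bad
    password, 200 otherwise. Callers render the same 'Invalid credentials
    or expired link' message for both failures to avoid leaking which
    check failed."""
    if now >= link["expires_at"]:
        return 410
    if link["password_hash"] is not None and supplied_password_hash != link["password_hash"]:
        return 401
    return 200
```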
Read-only UI with Status Indicators, Time-Zone Labels, and Countdowns
Given a client opens a valid share link When the timeline renders Then no drag-and-drop handles, resize cursors, edit menus, or keyboard shortcuts for mutation are present And all API calls that would mutate data respond 403 from this link context And each review window displays a status pill (e.g., Upcoming/In Progress/Complete/Overdue) derived from server state And a countdown shows time remaining or elapsed with HH:MM granularity, updating at least once per minute And the view displays both the project's base timezone and the viewer's detected timezone label
Role-Based Redaction of Internal Lanes and Sensitive Metadata
Given project settings mark specific lanes and metadata fields as internal-only And access is via a client share link When the timeline renders or is exported Then internal-only lanes are omitted from the layout And internal-only metadata fields are not present in the DOM, network responses, or PDF output And lane ordering remains contiguous with no empty gaps where internal lanes were removed And totals or summaries exclude hidden content to prevent inference
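The redaction rule above — drop internal lanes entirely and recompute totals from what remains — can be sketched as follows. Lane structure and field names here are illustrative assumptions, not the product's data model.

```python
def redact_for_client(lanes, internal_fields=("cost_estimate", "assignee_notes")):
    """Client-link redaction sketch. `lanes` is a list of dicts with hypothetical
    keys: name, internal_only (bool), hours, metadata (dict). Internal-only lanes
    are dropped entirely and totals are recomputed so hidden work cannot be inferred."""
    visible = []
    for lane in lanes:
        if lane.get("internal_only"):
            continue                      # omitted from layout, DOM, and exports
        clean = dict(lane)
        clean["metadata"] = {k: v for k, v in lane.get("metadata", {}).items()
                             if k not in internal_fields}
        visible.append(clean)
    total_hours = sum(l.get("hours", 0) for l in visible)  # excludes hidden lanes
    return visible, total_hours
```

The key design point is that redaction happens server-side before serialization: internal lanes never reach the DOM, network responses, or PDF output, and summaries are computed only over the visible set.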
Mobile-Friendly Rendering Across Breakpoints
Given a client opens the share link on a mobile device When the viewport width is between 320px and 768px Then the timeline supports horizontal scrolling with sticky date headers And tap targets are at least 44px in both dimensions And text in lane headers truncates with ellipsis and shows full values on tap/long-press tooltip And initial render time is under 3 seconds on a mid-tier device over a Fast 3G network
Embeddable Iframe Mode with Domain Allowlist
Given an organization has configured an allowlist of embed hostnames When the share link is loaded with embed mode in an iframe on an allowed hostname Then the timeline renders without primary app chrome and fits the iframe width responsively And the iframe height can be programmatically resized via postMessage to avoid scrollbars And when embedded on a non-allowed hostname, rendering is blocked with a branded error and HTTP 403 And security headers (CSP/X-Frame-Options) permit only allowed origins
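A server-side sketch of the allowlist check and the matching CSP header, assuming exact hostname matching (the criteria do not specify wildcard handling) and HTTPS-only embed origins:

```python
from urllib.parse import urlparse

def embed_allowed(page_url, allowlist):
    """True only when the embedding page's hostname is on the org's allowlist
    (case-insensitive exact match); non-allowed hosts should receive HTTP 403."""
    host = (urlparse(page_url).hostname or "").lower()
    return host in {h.lower() for h in allowlist}

def embed_security_headers(allowlist):
    """CSP frame-ancestors limited to the allowed origins (sketch)."""
    origins = " ".join(f"https://{h}" for h in sorted(allowlist))
    return {"Content-Security-Policy": f"frame-ancestors {origins}"}
```

Enforcing the allowlist in both places matters: the CSP `frame-ancestors` directive blocks the browser from framing the page at all, while the server-side check backs it up with the branded 403 error the criteria call for.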
Access Logging and Audit Trail Export
Given any request to a share link (successful or denied) When the request completes Then an audit record is stored with timestamp (UTC), link ID, project ID, outcome (success/expired/password-fail), requester IP, and user agent And an authorized internal user can filter and export audit records to CSV by project and date range And the export includes a checksum and record count for integrity verification
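The export-with-integrity requirement can be sketched as below; the CSV column names are illustrative, and SHA-256 is an assumed checksum algorithm (the criteria only require "a checksum").

```python
import csv
import hashlib
import io

def export_audit_csv(records):
    """Serialize audit records to CSV and return (csv_text, sha256_hex, count).
    The checksum and record count let recipients verify the export was not
    truncated or altered in transit."""
    fields = ["timestamp_utc", "link_id", "project_id", "outcome", "ip", "user_agent"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, lineterminator="\n")
    writer.writeheader()
    for rec in records:
        writer.writerow({f: rec[f] for f in fields})
    text = buf.getvalue()
    return text, hashlib.sha256(text.encode("utf-8")).hexdigest(), len(records)
```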
Quick Copy-Link and Export-to-PDF for Formal Distribution
Given an internal user viewing the Window Planner When they open the share menu for the client view Then a Copy Link action copies the current share URL to the clipboard and shows a confirmation toast within 1 second And if a password is set, the UI clearly indicates “Password protected” without exposing the password And a single-click Export to PDF action is available on both internal and client read-only views And the generated PDF matches on-screen content for visible lanes, includes timezone labels and status indicators, and paginates without truncating events And the PDF file is named PlanPulse_<project>_Timeline_<YYYY-MM-DD>.pdf
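The required filename shape can be produced with a small helper; hyphenating spaces in the project name is an assumption, since the criteria do not specify how project names are sanitized.

```python
from datetime import date

def timeline_pdf_filename(project_name, on=None):
    """Build the PDF name in the required PlanPulse_<project>_Timeline_<YYYY-MM-DD>.pdf
    shape. Spaces in the project name are collapsed to hyphens (an assumption)."""
    safe = "-".join(project_name.split())
    stamp = (on or date.today()).isoformat()
    return f"PlanPulse_{safe}_Timeline_{stamp}.pdf"
```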
Schedule Analytics & SLA Tracking
"As a firm principal, I want analytics on review cadence and SLA compliance so that I can identify bottlenecks and improve delivery predictability."
Description

Deliver dashboards and exports that quantify schedule health: slots created vs. completed, on‑time completion rate, average delay by discipline, bottleneck heatmaps, and upcoming risk alerts. Allow filtering by project, phase, and date range, and export to CSV/JSON. Define SLAs per phase or discipline and surface compliance badges on lanes and slots. Compute metrics incrementally for performance and retain historical snapshots for trend analysis while respecting role‑based access controls.

Acceptance Criteria
View Schedule Health Dashboard by Project/Phase/Date
Given a Project Lead with access to Project A and phase "Design Development" and ≥100 slots across three disciplines within 2025-06-01 to 2025-06-30 When they open the Schedule Analytics dashboard with filters Project A, Phase "Design Development", Date Range 2025-06-01..2025-06-30 Then the dashboard loads in ≤3 seconds and displays: Slots Created, Slots Completed, On-time Completion Rate, Average Delay by Discipline (hours), Bottleneck Heatmap, Upcoming Risk Alerts And On-time Completion Rate = completed_on_time / completed_total rounded to 1 decimal place And Average Delay by Discipline = mean(max(actual_end - planned_end, 0)) in hours within the selected range And the Bottleneck Heatmap highlights the top 3 discipline-week cells by highest average queue time And Upcoming Risk Alerts list all slots due within 48 hours with negative slack to their applicable SLA
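The two headline formulas above can be sketched directly. Slot field names are assumptions, and the on-time rate is expressed here as a percentage rounded to one decimal place (one plausible reading of "rounded to 1 decimal place").

```python
from datetime import datetime, timedelta

def schedule_metrics(slots):
    """Compute on-time completion rate and per-discipline average delay.
    Each slot is a dict with hypothetical fields: discipline, planned_end,
    actual_end, status."""
    completed = [s for s in slots if s["status"] == "Complete"]
    on_time = sum(1 for s in completed if s["actual_end"] <= s["planned_end"])
    rate = round(100.0 * on_time / len(completed), 1) if completed else 0.0
    delays = {}
    for s in completed:
        # only positive slippage counts: max(actual_end - planned_end, 0), in hours
        hours = max((s["actual_end"] - s["planned_end"]).total_seconds() / 3600.0, 0.0)
        delays.setdefault(s["discipline"], []).append(hours)
    avg_delay = {d: round(sum(v) / len(v), 1) for d, v in delays.items()}
    return rate, avg_delay
```

Note that early completions contribute zero delay rather than negative hours, matching the `max(actual_end - planned_end, 0)` definition.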
Apply Filters Across Widgets and Exports
Given analytics widgets and the data table are visible When the user changes the Project filter from Project A to Project B Then every widget, heatmap, alerts list, and data table refresh to reflect only Project B within 1 second, and the Export actions produce files that match the visible data And changing Phase or Date Range likewise scopes all metrics and badges consistently across dashboard and exports And clearing filters restores defaults (current project, all phases, last 30 days)
Define SLAs and Surface Compliance Badges
Given SLAs exist: Phase "Design Development" = 48h, Discipline "MEP" = 24h When a slot belongs to that phase and discipline Then the effective SLA target is the most specific applicable SLA (discipline overrides phase) And the lane and the slot display a compliance badge: "On Track" if remaining_time ≥ warning_window (default 24h), "At Risk" if 0h ≤ remaining_time < warning_window, "Breached" if remaining_time < 0h And badges include text labels plus color with contrast ratio ≥ 4.5:1 and a tooltip showing SLA target, elapsed, and remaining time And the dashboard shows compliance rate = completed_on_time / completed_total for the selected scope, rounded to 1 decimal place
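The SLA precedence and badge thresholds above reduce to a few lines; the keyed-lookup representation of SLA definitions is an illustrative assumption.

```python
def effective_sla_hours(slot, slas):
    """Most specific SLA wins: a discipline-level SLA overrides a phase-level one.
    `slas` maps ("discipline", name) / ("phase", name) keys to target hours."""
    return slas.get(("discipline", slot["discipline"]),
                    slas.get(("phase", slot["phase"])))

def compliance_badge(remaining_hours, warning_window=24):
    """Badge text per the stated thresholds (default warning window: 24h)."""
    if remaining_hours < 0:
        return "Breached"
    if remaining_hours < warning_window:
        return "At Risk"
    return "On Track"
```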
Export Analytics to CSV and JSON
Given the dashboard is filtered to Project A, Phase "Design Development", Date Range 2025-06-01..2025-06-30 When the user selects Export CSV or Export JSON Then a file downloads within 5 seconds containing only records in the selected scope And each record includes fields: project_id, project_name, phase, discipline, slot_id, created_at, planned_start, planned_end, actual_start, actual_end, status, sla_target_hours, on_time (boolean), delay_hours, reviewer_user_id And the export contains a summary section with slots_created, slots_completed, on_time_rate, avg_delay_by_discipline And timestamps are UTC ISO 8601; numeric hour values have 1 decimal place; filename includes project, phase, date range, and generation timestamp
Incremental Metrics and Daily Historical Snapshots
Given analytics for Project A with >50,000 slots When a single slot transitions from "In Review" to "Complete" Then aggregate metrics endpoints reflect the change within 10 seconds without full recomputation blocking the UI And daily metric snapshots are generated at 00:00 UTC per project and phase and retained for ≥365 days And trend charts use snapshot values; editing a slot after the snapshot time does not alter previously stored snapshot points
Enforce Role-Based Access in Analytics and Exports
Given User U is not a member of Project A When U requests analytics (UI or API) for Project A Then the system returns 403 Forbidden and no analytics data or exports are produced Given User V has Viewer role in Project A When V opens analytics and performs exports Then V can view dashboards and download exports for Project A only; SLA definitions are read-only Given User W has Admin or Project Lead role When W creates or edits SLA definitions Then the changes save successfully and take effect on badges and metrics within 60 seconds

Version Pin

Locks each consultant window to a specific drawing version hash so feedback aligns to the exact set under review. If a newer set appears mid‑window, it prompts to freeze the current window or spawn a new window with carry‑over context—eliminating rework from commenting on the wrong version.

Requirements

Version Hash Lock
"As a project lead, I want to lock a review window to a specific drawing version so that all consultant feedback maps to the exact set I intended."
Description

Lock each consultant workspace window to an immutable drawing version hash. The client and server must resolve all assets (sheets, layers, references, markups) strictly by that hash, preventing silent upgrades when a newer set is published. Requests include the hash; the backend validates existence, permissions, and integrity before serving content. Attempting to navigate outside the pinned version requires explicit user action. This ensures feedback and approvals align precisely to the intended set, eliminating rework caused by commenting on the wrong version. Integrates with PlanPulse’s versioning service, CDN cache keys, shareable links, and approval flows so that all downstream artifacts remain traceable to the exact version under review. Expected outcome: consistent, reproducible review contexts and reduced revision churn.

Acceptance Criteria
Hash-Required Request Validation
Given any request to load a workspace or asset When the request omits versionHash Then the API responds 400 with code ERR_VERSION_HASH_REQUIRED and no content is served Given any request with versionHash that does not exist When the backend validates the hash Then the API responds 404 with code ERR_VERSION_NOT_FOUND Given any request with versionHash the user is not permitted to access When the backend checks permissions Then the API responds 403 with code ERR_FORBIDDEN_VERSION Given any request with versionHash that fails integrity verification of the version manifest When the backend computes/compares checksums Then the API responds 409 with code ERR_VERSION_INTEGRITY_FAILED and does not stream partial assets Given any successful request for a valid versionHash H When the response is returned Then the response includes headers X-Version-Hash: H and all nested asset URLs include H as path or query parameter
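The validation chain above implies a fixed check order — presence, existence, permission, integrity — which can be sketched as a single guard. The parameter names are illustrative; the error codes come from the criteria.

```python
def validate_version_request(version_hash, registry, permitted, verify_integrity):
    """Apply the checks in the order the criteria imply. `registry` and
    `permitted` are sets of known/authorized hashes; `verify_integrity` is a
    callable checking the version manifest. Returns (http_status, error_code)."""
    if not version_hash:
        return 400, "ERR_VERSION_HASH_REQUIRED"
    if version_hash not in registry:
        return 404, "ERR_VERSION_NOT_FOUND"
    if version_hash not in permitted:
        return 403, "ERR_FORBIDDEN_VERSION"
    if not verify_integrity(version_hash):
        return 409, "ERR_VERSION_INTEGRITY_FAILED"
    return 200, None
```

The ordering matters: checking existence before permission means a 404 never leaks which versions a user is barred from, and the integrity check runs last so no partial assets stream before the manifest is verified.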
Strict Asset Resolution by Version Hash
Given a workspace window pinned to version hash H When sheets, layers, references, thumbnails, and markups are requested Then every asset is resolved strictly by H and no asset from a different hash is returned Given H is pinned and a newer version H2 exists When the user continues working in the pinned window Then no content silently upgrades to H2 and all responses include X-Version-Hash: H Given CDN caching is enabled When assets for H are requested Then cache keys include H and byte-for-byte content for H remains stable even after H2 is published
Newer Set Detected — Freeze or Spawn Flow
Given a workspace window pinned to H is open When the system detects a newer version H2 for the same drawing set Then a non-blocking modal offers two choices: Freeze current window (stay on H) and Spawn new window (open H2) with carry-over of current sheet, viewport (zoom/pan), layer visibility, and any unsent draft comment text Given the modal is shown When the user selects Freeze current window Then the window remains on H, a persistent banner indicates Pinned to H, and no further prompts appear for H2 in this window Given the modal is shown When the user selects Spawn new window Then a new tab/window opens pinned to H2 with the specified context carried over, the original window remains on H, and no markups are duplicated unless the user explicitly runs a migration/import Given a version change notification for H2 has already been acknowledged in this window When additional checks occur Then the modal is not shown again for H2
Navigation Intercept Outside Pinned Version
Given a workspace pinned to H When the user attempts to open a link, sheet, or reference that belongs to H2 Then the app intercepts navigation with a confirmation dialog offering Open in new window (pinned to H2) or Stay on H, defaulting to Stay on H Given keyboard shortcuts or deep links are used When the target asset is not part of H Then the same intercept and explicit confirmation are required before any cross-version navigation occurs Given the user cancels the intercept When the action completes Then the current window remains on H with no navigation performed
Shareable Link Reproducibility
Given a user copies a Share View link from a window pinned to H When another permitted user opens the link Then the app loads version H with identical view state (sheet ID, viewport, layer toggles, markup visibility) and does not substitute a newer version Given a Share View link is tampered to remove or alter the version hash When the link is opened Then the app returns a safe error screen with 400 and does not fall back to latest Given assets are requested after opening a valid link When network requests are inspected Then all asset responses include X-Version-Hash: H and request URLs contain H
Approval Records Bound to Version Hash
Given a window pinned to H When the user submits an approval Then the approval record stores versionHash=H immutably and the audit log includes H and integrity metadata Given an approval request without a valid versionHash When it reaches the backend Then it is rejected with 400 ERR_VERSION_HASH_REQUIRED Given an approval exists for H When the approval is opened from history Then the app reproduces the exact H view; approving H2 requires a new, distinct approval action
Markup Persistence and Visibility by Hash
Given a window pinned to H When the user creates or edits markups Then the markups are saved with versionHash=H and are only auto-displayed in contexts pinned to H Given a different version H2 is active When viewing markups Then markups from H are hidden by default and can only be brought forward via an explicit Import from H action with user confirmation Given an export or print is initiated from a window pinned to H When the output is generated Then only markups associated with H are included
Version Drift Detection & Prompt
"As a consultant, I want to be alerted when a newer set is available so that I don’t accidentally comment on an outdated drawing."
Description

Continuously detect when a newer drawing set exists while a pinned window is active. On detection, present a non-dismissible prompt offering: (a) freeze this window on current version, (b) spawn a new window on the latest version with carry-over context, (c) view diff before deciding, (d) remind me later. Temporarily guard against posting to a non-target version by disabling submit actions until a choice is made. Integrates with the version registry, notification bus, and UI modal framework. Expected outcome: prevent misaligned comments during mid-session updates and guide users to the correct action with minimal friction.

Acceptance Criteria
Detect newer set while pinned window is active
Given a consultant window is pinned to version hash Vp and is active for user U And the version registry contains a newer drawing set Vn published after Vp When PlanPulse receives a versionUpdated event for the project or polling detects Vn while the window is active Then a Version Drift prompt is displayed in that window within 1000 ms And the prompt lists four options: Freeze on Vp, Spawn on Vn, View diff, Remind me later And the prompt clearly displays both Vp and Vn identifiers (short hash and timestamp) And only one prompt is displayed per window at a time And subsequent versionUpdated events update the prompt to reflect the latest Vx without stacking multiple prompts And an analytics event version_drift_detected is emitted with Vp, Vx, window_id
Non-dismissible prompt and submit guard
Given the Version Drift prompt is visible in a pinned window Then all submit actions in that window (Post Comment, Commit Markup, Approve/Request Approval) are disabled and visually indicated as disabled And an inline notice or tooltip explains that submissions are blocked until a choice is made When the user clicks outside the modal or presses Escape Then the prompt remains open (non-dismissible) and focus stays trapped within the modal And the first actionable control in the modal is focused on open and is reachable via keyboard (Tab/Shift+Tab) And the modal uses semantic accessibility roles (role="dialog", aria-modal="true") with a programmatic label And no background interaction occurs while the prompt is visible
Freeze this window on current version
Given a Version Drift prompt is visible for a window pinned to Vp When the user selects "Freeze this window on current version" Then the window remains pinned to Vp and its content does not reload to any newer version And submit actions in this window are re-enabled And a persistent banner or chip indicates "Frozen on Vp" with the short hash And further version drift prompts are suppressed for this window until the user unfreezes or navigates away And an analytics event version_freeze_selected is emitted with Vp and window_id
Spawn new window on latest with carry-over context
Given a Version Drift prompt is visible for a window pinned to Vp and the latest version is Vx When the user selects "Spawn a new window on the latest version" Then a new consultant window opens pinned to Vx within 2000 ms And the new window carries over viewport center, zoom level, active layers/filters, and the currently selected tool And any in-progress comment text and unsaved markups are copied as drafts into the new window (not posted) and remain in the original window as drafts as well And both windows clearly display their version identifiers (Vp on original, Vx on new) and labels (e.g., Frozen/Latest as applicable) And submit actions are enabled in both windows after the spawn completes And focus moves to the new window upon open And an analytics event version_spawn_selected is emitted with Vp, Vx, source_window_id, new_window_id
View diff before deciding
Given a Version Drift prompt is visible for a window pinned to Vp with a newer version Vx When the user selects "View diff before deciding" Then a diff view opens comparing Vp to Vx with the current viewport applied and pan/zoom controls available And while the diff view is open, submit actions in the originating window remain disabled And closing the diff view returns the user to the still-open Version Drift prompt without making a selection by default And no content change, spawn, or freeze occurs until the user explicitly chooses one of the three decisive options And an analytics event version_diff_viewed is emitted with Vp and Vx
Remind me later behavior
Given a Version Drift prompt is visible for a window pinned to Vp with a newer version Vx When the user selects "Remind me later" Then the prompt dismisses and submit actions are re-enabled in that window And an "Outdated version" indicator appears in the window header/toolbar with a link to review the latest version And the system snoozes further prompts for this window for 10 minutes or until the user navigates away, whichever comes first And if additional newer versions are published during the snooze, no prompt is shown until snooze ends, at which point the prompt reflects the latest version only And an analytics event version_remind_later_selected is emitted with Vp and Vx
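The snooze behavior above — 10 minutes or until navigation, whichever comes first, then prompt on the latest version only — can be sketched as a small per-window state object. Class and method names are illustrative.

```python
from datetime import datetime, timedelta

class DriftSnooze:
    """Per-window snooze sketch: after "Remind me later", suppress drift prompts
    for 10 minutes or until the user navigates away, whichever comes first."""
    def __init__(self, minutes=10):
        self.duration = timedelta(minutes=minutes)
        self.until = None
    def remind_later(self, now):
        self.until = now + self.duration
    def navigate_away(self):
        self.until = None          # navigation ends the snooze immediately
    def should_prompt(self, now):
        if self.until is not None and now < self.until:
            return False           # still snoozed; newer versions wait silently
        self.until = None
        return True                # when it fires, the prompt shows only the latest version
```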
Integration resilience and prevention of misaligned posting
Given the window is subscribed to the notification bus and has access to the version registry When the notification bus disconnects and reconnects Then the app re-establishes listeners without duplicating subscriptions or prompts When the version registry API temporarily fails Then the app retries every 15 seconds and does not issue prompts based on partial/ambiguous data When multiple newer versions are published before the user makes a choice Then the prompt (or spawned window) references only the latest version Vx at decision time And if a user attempts to submit content at the exact moment drift is detected, the submission is intercepted, the prompt is shown first, and no content is posted until a choice is made And no comment/markup/approval is ever posted to a version other than the window’s pinned version or, if spawned, the new window’s pinned version
Context Carry-Over Spawn
"As an architect, I want to carry my current review context into the new version so that I can continue without rebuilding filters and references."
Description

Enable one-click spawning of a new review window on the latest version while preserving working context: active participants, viewport/zoom, sheet selection, filter state, open threads, and unresolved issues. Migrate references to prior-version comments through stable anchors (sheet IDs, coordinates, semantic tags), and pre-link them as ‘from previous version’ to maintain traceability. Provide fallbacks when anchors shift (e.g., moved geometry) with best-effort mapping and a manual re-anchor tool. Integrates with PlanPulse’s diff/mapping service and thread model. Expected outcome: reviewers seamlessly continue work on new sets without losing context or duplicating effort.

Acceptance Criteria
One-Click Context Carry-Over to Latest Version
Given a reviewer is in a pinned review window on version hash vA with active participants, current sheet selection, viewport/zoom, filter state, open threads panel, and unresolved issues visible And a newer drawing set vB exists When the reviewer clicks "Spawn on latest" Then a new review window opens pinned to version hash vB And the new window preserves active participants, current sheet selection, viewport/zoom, filter state, open threads panel state, and unresolved issues list And the original window remains open and pinned to vA And the new window displays a banner indicating it was spawned from vA
Thread and Comment Traceability from Previous Version
Given threads and comments exist on version vA When spawning a new review window on version vB Then each carried thread in vB is labeled "From previous version" and pre-linked to its source thread on vA with bi-directional links including version hashes And author, timestamps, status (open/resolved), and assignments are preserved without modification And no duplicate carried threads are created if the same source window is spawned multiple times (idempotent linking) And 100% of eligible threads receive a traceability label and link
Stable Anchor Mapping via Diff Service
Given the diff/mapping service is available When migrating anchors for prior-version comments to version vB Then anchors are resolved using stable references in priority order: sheet ID, semantic tag, coordinate transform And for unchanged geometry, at least 95% of anchors auto-resolve with confidence ≥ 0.90 And each mapped anchor stores mapping source and confidence for audit And anchors on deleted sheets are flagged as "Sheet removed" and set to Needs Re-Anchor And mapping of up to 200 anchors completes within 1.5 seconds p95
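The priority-ordered resolution above can be sketched as follows. This is a simplification under stated assumptions: the confidence values, field names, and the `geometry_similarity` score are illustrative placeholders for what the diff/mapping service would actually report.

```python
def resolve_anchor(anchor, target_sheets, threshold=0.90):
    """Map a prior-version anchor onto the new set using the stated priority:
    sheet ID first, then semantic tag, then coordinate transform. Anchors that
    miss the confidence threshold land in the manual re-anchor queue."""
    sheet = target_sheets.get(anchor["sheet_id"])
    if sheet is None:
        return {"status": "needs_reanchor", "reason": "Sheet removed"}
    if anchor.get("semantic_tag") in sheet.get("tags", set()):
        result = {"mapping_source": "semantic_tag", "confidence": 0.95}
    else:
        # fall back to geometry: assume the diff service reports a similarity score
        result = {"mapping_source": "coordinate_transform",
                  "confidence": sheet.get("geometry_similarity", 0.0)}
    result["status"] = "mapped" if result["confidence"] >= threshold else "needs_reanchor"
    return result
```

Storing `mapping_source` and `confidence` on each result supports the audit requirement, and anything below the 0.90 threshold is flagged rather than silently placed.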
Fallback and Manual Re-Anchor Workflow
Given some anchors cannot be auto-resolved or have confidence < 0.90 When opening the spawned window on version vB Then those threads are flagged "Needs re-anchor" and listed in a re-anchor queue And selecting a flagged thread opens the re-anchor tool with a ghost overlay of the prior anchor context And the reviewer can set a new anchor by click or drag and save the change And upon save, the thread updates with the new vB anchor, retains the "From previous version" link, and logs user and timestamp And bulk re-anchoring allows sequential re-anchoring of at least 20 items without page reload And after re-anchoring all flagged items, the queue shows 0 remaining
Version Pin Consistency and Spawn Prompt
Given a review window pinned to version hash vA is active And the system detects a newer set vB When the prompt appears Then it presents two options: "Freeze current window" and "Spawn on latest" And selecting "Freeze current window" keeps the current window pinned to vA with no context changes And selecting "Spawn on latest" creates a new window pinned to vB with context carry-over per defined rules And both windows display their version hashes prominently And dismissing the prompt makes no changes to either window
Performance, Concurrency, and Reliability of Spawn
Given a project containing up to 500 threads and 50 unresolved issues When spawning a new window on version vB under normal load Then the new window becomes interactive within 2.0 seconds p95 and 5.0 seconds p99 And counts of open threads and unresolved issues in the new window match the source window And concurrent spawns by multiple reviewers do not create duplicate carried threads; linkage remains one-to-one per source thread And if the diff/mapping service times out (> 3 seconds) or errors, the window still spawns; anchors are marked "Pending mapping", retries occur up to 3 times in the background, and users are notified upon completion
Comment Binding & Read-Only Freeze
"As a QA coordinator, I want comments to be bound to the exact version they were made on so that our audit trail is accurate and disputes are minimized."
Description

Persist every markup, note, and approval with the originating version hash and enforce read-only behavior on archived/pinned versions. New comments cannot be posted to superseded versions unless explicitly allowed by role-based policy. When threads are cloned forward to a new version, maintain a bidirectional link to the source thread for traceability. The UI clearly labels version-bound threads and prevents cross-version confusion. Expected outcome: irreversible, auditable association of feedback to the correct version and reduced comment misplacement.

Acceptance Criteria
Version-Hash Binding for Feedback
- Given a drawing version with hash H exists and a user creates a markup, note, or approval on it, When the item is saved, Then the persisted record includes versionHash=H and the versionHash field is immutable thereafter.
- Given an API client attempts to PATCH or otherwise change the versionHash of an existing feedback item, When the request is processed, Then the service returns 403 Forbidden and writes an audit event with reason="immutable_version_binding".
- Given a thread bound to H exists, When the same project is viewed at any other version H', H'≠H, Then the thread is not listed in H' conversations except via an explicit cross-version reference link.
- Given the feedback export/API endpoint is called for the thread, When the payload is returned, Then versionHash=H is present for each item and requests missing versionHash are rejected with 422 Unprocessable Entity.
Read-Only Enforcement on Superseded/Pinned Versions
- Given version H is superseded by a newer version H2 or the window is pinned to H, When a user without override permission attempts to create, edit, delete, or react to a comment on H, Then all write actions are disabled in the UI and corresponding API calls return 409 Conflict with code="read_only_version".
- Given an existing comment on H, When the user attempts inline edit or resolve, Then the controls are disabled and a read-only banner is displayed indicating the superseding version.
- Given a direct API POST to create a comment on H, When processed, Then it is rejected with 409 Conflict and an audit event reason="write_blocked_read_only" is stored.
Role-Based Override for Superseded Posting
- Given project policy allows override posting to superseded versions and a user has permission "comment_on_superseded", When the user posts a new comment to H (superseded by H2), Then the comment is created on H, the action is tagged override=true in the audit log, and a UI warning is shown on the posted item.
- Given a user without the permission attempts the same action, When the request is made via UI or API, Then the action is blocked with 403 Forbidden and no comment is created.
- Given an admin updates the policy toggle for override posting, When the change is saved, Then enforcement reflects the new setting within 60 seconds across UI and API requests and the policy change is logged with actor, timestamp, and scope.
Forward Clone With Bidirectional Trace Links
- Given a thread T bound to version H exists and a newer version H2 is available, When the user selects "Clone forward" to H2, Then a new thread T2 is created on H2 with carried-over title, body, tags, participants, and unresolved status; T2 items are bound to H2.
- Given T and T2 exist, When either thread is opened, Then it displays a bidirectional link to its counterpart including version label, creator, and clone timestamp.
- Given the clone operation completes, When the audit log is queried, Then entries exist for clone_initiated and clone_completed with thread IDs, H, and H2.
- Given H2 is not present or user lacks permission, When clone is attempted, Then the operation is blocked with a clear error and no partial thread is created.
UI Version Labels and Misplacement Prevention
- Given a user is viewing a thread bound to version H, When the thread header renders, Then it shows the version label and the first 7 characters of the version hash (e.g., H[0..6]) and a status chip (Current/Archived/Pinned).
- Given a user is in a pinned window on H and a newer version H2 becomes available, When they attempt to reply in the current thread, Then the composer is disabled with a message referencing H2 and a one-click action to open the thread on H2 (cloned if needed).
- Given a user navigates from a thread on H to a drawing on H2 via global nav, When the UI changes context, Then a confirmation prompt warns about cross-version context and prevents accidental posting to the wrong version unless explicitly confirmed.
Auditability and Export of Version-Bound Feedback
- Given any create/update/delete/clone action on threads or comments occurs, When the audit subsystem records the event, Then the record includes actorId, projectId, threadId, itemId, action, versionHash, timestamp (UTC ISO-8601), requestId, and outcome, and records are append-only.
- Given an admin requests an export filtered by versionHash=H and a date range, When the export is generated, Then it contains only items bound to H and includes a SHA-256 checksum of the export payload for tamper-evidence.
- Given a user or API client attempts to modify or delete an audit record, When the request is processed, Then it is rejected with 403 Forbidden and a new audit event reason="audit_mutation_blocked" is added.
Version Pin UI & Controls
"As a reviewer, I want obvious indicators that my window is pinned so that I can share a stable link and avoid accidental version changes."
Description

Provide clear, persistent indicators and controls for pinning state: a version badge with short hash and timestamp, a pin/unpin toggle, quick-copy link that encodes the version hash, and a keyboard shortcut. Display warnings when attempting actions that would break the pin (e.g., navigating to an unpinned sheet). Ensure responsive layout and accessibility (ARIA roles, focus management) in modals and controls. Expected outcome: users immediately understand whether their window is pinned and can confidently share stable links for review.

Acceptance Criteria
Display Version Badge
Given the window is pinned to version V with hash H and timestamp T, When the workspace loads, Then a version badge is visible in the header showing the first 7 characters of H and T formatted in the user’s locale date and time. Given the window remains pinned to the same version, When navigating between sheets or panes, Then the version badge content does not change. Given the user hovers or keyboard-focuses the badge, When the tooltip appears, Then it shows the full 40-character hash and UTC timestamp. Given standard color theme, When the badge is rendered, Then its text/icon contrast ratio is ≥ 4.5:1 and it is visible without scrolling on viewports ≥ 320x568.
Toggle Pin State via UI Control
Given the workspace is viewing a versioned set, When the user activates the Pin toggle via mouse or Enter/Space, Then the window becomes pinned within 300 ms and the toggle reflects aria-pressed=true with label “Pinned”. Given the window is pinned, When the user deactivates the toggle, Then the window becomes unpinned, the badge updates to indicate “Unpinned,” and subsequent navigation will not show version-change warnings. Given the window is pinned, When navigating between sheets within the same version, Then the pinned state persists without additional user action. Given a toggle action fails, When the error occurs, Then a non-blocking error toast is shown and the prior pin state is preserved.
Copy Stable Link with Version Hash
Given the window is pinned to hash H, When the user clicks Copy Link, Then the clipboard receives a URL that includes a version parameter equal to H and a success toast appears within 200 ms. Given an unauthenticated or incognito browser opens the copied URL, When the page loads, Then the workspace opens pinned to H with the badge and toggle reflecting the pinned state. Given the window is unpinned, When the user opens the Copy Link control, Then the control is disabled and a tooltip explains “Pin to copy a stable link.”
Keyboard Shortcut for Pin/Unpin
Given the workspace has focus and no text input/editor is active, When the user presses Ctrl+Shift+P (Windows/Linux) or Cmd+Shift+P (macOS), Then the pin state toggles and an on-screen confirmation appears for 2–3 seconds. Given a text field or annotation editor has focus, When the shortcut is pressed, Then no toggle occurs and no focus is removed from the field. Given the pin toggle is hovered or focused, When its tooltip/help is shown, Then it lists the keyboard shortcut used to toggle the pin state.
Warning on Navigation That Breaks Pin
Given the window is pinned to hash H, When the user attempts to navigate to content that requires a different version, Then a warning modal appears before navigation with options “Stay on pinned version” and “Open in new window.” Given the warning modal is shown, When the user selects “Stay on pinned version,” Then no navigation occurs and the window remains on H. Given the warning modal is shown, When the user selects “Open in new window,” Then a new window/tab opens at the target resource pinned to the target version with current filters/markup context carried over, and the current window remains on H. Given the warning modal is shown, When 30 seconds elapse without user action, Then the modal remains open (no auto-dismiss) to prevent accidental version change.
Responsive Layout Behavior
Given a viewport width of 320–399 px, When the header renders, Then the version badge collapses to icon+short hash and the pin toggle moves into an overflow menu without causing horizontal scroll. Given a viewport width ≥ 400 px, When the header renders, Then the full version badge (short hash + timestamp), pin toggle, and copy link are visible without overlap or truncation. Given any viewport width from 320–1440 px, When interacting via touch, Then all actionable controls have hit areas ≥ 44×44 px and remain operable.
Accessibility and Focus Management
Given any interactive control (pin toggle, copy link, overflow), When navigated via keyboard, Then it is reachable in logical tab order, has a visible focus indicator, and operates with Enter/Space. Given the pin toggle changes state, When toggled, Then a polite ARIA live region announces “Pinned to [short hash]” or “Unpinned.” Given the warning modal opens, When presented, Then it has role=dialog, an accessible name, traps focus, sets initial focus to the primary safe action, supports Esc to close, and returns focus to the trigger on close. Given the pin toggle is rendered, When inspected, Then it exposes aria-pressed that reflects its state and has an accessible name including “Pin version.” Given any theme, When text/icons are rendered, Then contrast ratios meet WCAG AA (≥ 4.5:1 for text, ≥ 3:1 for non-text icons).
Version Decision Audit Log
"As a firm principal, I want an audit of version decisions so that I can defend approvals and quantify process improvements to clients."
Description

Record all version-related events and decisions per window: pin, unpin, drift detection, user selection (freeze/spawn/diff), and cross-version thread migrations. Store user, timestamp, IP/device fingerprint, project, sheet, and version hashes. Expose project-level reports and CSV export for compliance and client transparency; surface KPIs like reduced revision rounds and time saved. Integrates with PlanPulse’s audit service and analytics pipeline. Expected outcome: defensible traceability for approvals and data to demonstrate reduced approval cycles.

Acceptance Criteria
Capture Version Action Events
Given a user pins a window to a version hash When the action is confirmed Then an audit entry is written with event_type='pin' and context: event_id, window_id, project_id, sheet_id, version_hash, user_id, timestamp_utc, ip, device_fingerprint, correlation_id
Given a user unpins a window When the action completes Then an audit entry is written with event_type='unpin' and the same required context
Given drift is detected in a window When the prompt is shown Then an audit entry is written with event_type='drift_detected', original_version_hash, newest_version_hash, and required context
Given the user selects Freeze current window When the selection is made Then an audit entry is written with event_type='freeze_selected', decision='freeze', linked to the prior drift_detected via correlation_id
Given the user selects Spawn new window with carry-over When the selection is made Then an audit entry is written with event_type='spawn_selected', new_window_id, carried_context_ref, and required context
Given a version diff view is opened When the diff loads Then an audit entry is written with event_type='diff_opened', base_version_hash, compare_version_hash, and required context
Given a cross-version thread migration completes When the migration succeeds Then an audit entry is written with event_type='thread_migrated', thread_id, from_version_hash, to_version_hash, mapping_count, success=true
Required Audit Fields and Validation
Given any audit entry is written Then it includes required fields: event_id (UUIDv4), event_type, timestamp_utc (ISO 8601 Z), user_id, project_id, sheet_id (nullable for project-wide actions), window_id, version_hash fields as applicable, ip (IPv4/IPv6), device_fingerprint (non-empty), correlation_id (UUIDv4)
Given a required field is missing or invalid When write is attempted Then the write is rejected, a validation error is logged, and the client retries up to 3 times with exponential backoff
Given timestamps originate from various time zones When stored Then the service normalizes to UTC and also stores original_timezone_offset
Given duplicate submissions with the same event_id When processed Then the write is idempotent and no duplicate records are created
Given a malformed ip or device_fingerprint When processed Then the entry is rejected and surfaced as a non-retriable error to observability
Immutability, Tamper Evidence, and Service Integration
Given an audit entry is persisted Then it is immutable (no updates allowed) and any delete is a soft delete recorded in a separate immutable log with a reason, with performed_by restricted to the compliance role
Given an audit entry is stored Then a SHA-256 hash of the canonical payload is saved as event_hash and chained via previous_event_hash per window_id to enable tamper evidence
Given the audit service is temporarily unavailable When a write occurs Then events buffer locally (up to 500 events or 15 minutes) and flush in order per window_id upon recovery; on overflow, version actions are blocked and an "Audit service unavailable" error is shown
Given an audit entry is accepted Then it is forwarded to the analytics pipeline within 60 seconds and delivery status is recorded; on failure, retry up to 5 times with exponential backoff before dead-lettering
Given a read by correlation_id or window_id When requested Then the sequence of related events can be reconstructed in order using timestamp_utc and chain pointers
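One way to realize the hash chain described above (the sorted-key JSON canonicalization and the zeroed genesis value are assumptions; the spec only requires SHA-256 over a canonical payload chained per window):

```python
import hashlib
import json

GENESIS = "0" * 64  # previous_event_hash for the first event in a window's chain

def event_hash(payload: dict, previous_event_hash: str) -> str:
    """SHA-256 over the canonical payload, chained to the prior event's hash."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((previous_event_hash + canonical).encode()).hexdigest()

def append_event(chain: list[dict], payload: dict) -> None:
    """Append an entry whose hash commits to everything before it."""
    prev = chain[-1]["event_hash"] if chain else GENESIS
    chain.append({"payload": payload,
                  "previous_event_hash": prev,
                  "event_hash": event_hash(payload, prev)})

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; any edit to a past payload breaks verification."""
    prev = GENESIS
    for entry in chain:
        if entry["previous_event_hash"] != prev:
            return False
        prev = event_hash(entry["payload"], prev)
        if entry["event_hash"] != prev:
            return False
    return True
```

This gives tamper evidence, not tamper prevention: a modified entry still exists, but verification fails from that point forward, which is exactly what a compliance review needs.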
Project-Level Audit Report and CSV Export
Given a user with audit_viewer role opens Project Audit Reports When filtering by date range, user, event_type, sheet_id, version_hash, window_id Then results return within 3 seconds for datasets up to 10,000 events
Given the same filters are applied When Export CSV is clicked Then a CSV is generated within 30 seconds for up to 100,000 events and includes columns: event_id, event_type, timestamp_utc, user_id, user_name, role, ip, device_fingerprint, project_id, project_name, sheet_id, sheet_name, window_id, version_hash fields, correlation_id, event_hash, previous_event_hash
Given a timezone is selected in the report When viewing or exporting Then timestamps render in the selected timezone and the CSV header includes the UTC offset used
Given no results match the filters When exporting Then the CSV contains only the header row with correct columns
Given an export job completes When the file is ready Then a signed download URL is issued that expires in 24 hours and is accessible only to the requesting user
KPI Calculation and Exposure
Given the nightly analytics run When inputs are available Then compute and store KPIs per project: revision_rounds_reduction_percent and approval_cycle_time_saved_percent using defined formulas and baselines
Given baselines are not configured When rendering KPI cards Then show "Baseline required" with a link to set baselines and suppress percentage values
Given the date range filter changes When applied Then KPI values recompute for the range within 5 seconds and display a last_computed_at timestamp
Given KPI values are compared to ad-hoc recomputation When validated Then the difference is <= 1% for the same filters and timeframe
Given KPI drill-down is requested When opened Then underlying contributing events are listed with links back to the audit entries
Concurrency and Correct Association in Multi-Window Sessions
Given multiple windows are open on the same project and sheet When version drift occurs in one window Then only that window's decision event is logged against its window_id and version_hash without cross-window mixing
Given two version decisions occur within 100 ms across different windows When recorded Then each stream preserves order using a per-window sequence_number and timestamps
Given a window is closed before the write returns When the background task completes Then the event is persisted and linked to the correct window_id; on failure a user notification is queued
Given a network outage occurs When the client reconnects Then offline events sync in order and deduplicate by event_id to avoid duplicates
Access Control and Privacy in Audit Access
Given a user without audit_viewer permission When attempting to view reports or export CSV Then access is denied with HTTP 403 and no export job is created
Given a client stakeholder with limited project access When viewing the audit report Then only events for authorized projects are visible and ip/device_fingerprint are masked (IPv4 /24, IPv6 /48)
Given a 7-year retention policy When events exceed retention Then they are archived to cold storage and remain exportable within 48 hours upon compliance request, with an audit trail of the retrieval

Lockout Guard

Enforces start/stop rules with soft and hard lock modes: edit during the window, read‑only after close. Consultants can request a one‑click grace extension with auto‑calculated schedule impact and an audit trail, keeping momentum without sacrificing accountability.

Requirements

Lock Window Scheduling Engine
"As a project lead, I want to define edit windows for each drawing set so that contributors work within agreed timeframes and we avoid last-minute revision churn."
Description

Provides per-project and per-drawing-set scheduling of editable windows with explicit start and end times, recurrence options, blackout dates, and timezone awareness. On window open, all editing capabilities for drawings, versioned markups, and file uploads are enabled; on window close, the workspace transitions to a read-only state that blocks new versions, markups, and uploads across the PlanPulse editor and APIs. Handles daylight saving shifts, mid-window configuration updates with safeguard prompts, and edge cases such as users active at cutoff. Exposes a centralized configuration panel within Project Settings, integrates with the project timeline, and propagates enforcement flags to the UI, real-time collaboration services, and backend validations to ensure consistency and prevent bypass via API or offline sync. Expected outcome is predictable edit cycles that reduce last-minute churn and align all contributors to a single, enforced cadence.
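A minimal sketch of the open/closed decision, using Python's `zoneinfo` for IANA timezone and DST handling (the parameter shapes are assumptions; the real engine would also cover recurrence exceptions and mid-window configuration updates):

```python
from datetime import date, datetime, time
from zoneinfo import ZoneInfo

def window_is_open(now_utc: datetime,
                   project_tz: str,
                   start: time, end: time,
                   weekdays: set[int],        # 0=Mon ... 6=Sun
                   blackouts: set[date]) -> bool:
    """Evaluate the edit window in the project's local time (DST-safe)."""
    local = now_utc.astimezone(ZoneInfo(project_tz))
    if local.date() in blackouts:
        return False                          # blackout overrides the recurring window
    return local.weekday() in weekdays and start <= local.time() < end
```

Converting the UTC "now" into the project's zone before comparing against the local window is what makes a 09:00–17:00 rule survive DST transitions: the UTC boundary moves, but the local boundary stays fixed.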

Acceptance Criteria
Timezone- and DST-aware edit window enforcement
Given a project timezone is set (e.g., America/New_York) and a recurring Mon–Fri 09:00–17:00 local edit window exists When a DST shift occurs or the day/time reaches window boundaries Then editing is enabled between 09:00–17:00 local and blocked outside across UI, real-time services, and all APIs And attempts outside the window are rejected with 403_LOCKED and machine-readable code window_closed And open/close events are emitted within 3s and logged with both UTC and local timestamps
Blackout dates override recurring windows
Given a recurring window and a configured blackout date for the project or drawing set When the blackout date is in effect Then the workspace remains read-only for the entire blackout period regardless of normal windows And timeline and UI display "Blackout" with tooltip reason And create/update endpoints return 403_LOCKED with code blackout_active And normal scheduling resumes automatically at the end of the blackout
Safeguarded mid-window configuration updates
Given an edit window is currently open When a Project Admin edits the window configuration Then a confirmation modal summarizes impacts (new start/end, affected sets, users currently editing) and requires a reason And changes only apply on the next minute boundary at least 60s after confirmation And active users receive in-app notification of pending change And an audit record stores old/new values, admin ID, reason, and timestamps And cancelling the modal leaves settings unchanged
Cutoff handling for active sessions at window close
Given users are actively editing near the scheduled close time When the exact close timestamp is reached Then all in-progress changes up to the cutoff auto-save successfully And further edits, markups, versions, and uploads are blocked immediately across UI and APIs And clients receive a "Window closed" banner within 3s and sessions switch to view-only And offline clients receive deterministic errors on sync for mutations after cutoff and do not overwrite server state
System-wide enforcement flag propagation
Given the scheduling engine transitions state (open <-> read-only) When the enforcement flag updates Then a single source endpoint exposes current state with ETag and lastChanged And UI updates via WebSocket event within 3s (polling fallback within 30s) And backend validations consistently allow/block across all nodes and legacy endpoints with 403_LOCKED when blocked And offline sync rejects conflicting operations with error code offline_lock_conflict
Central configuration panel with timeline integration
Given a Project Admin opens Project Settings > Lock Window Scheduling When configuring per-project or per-drawing-set windows Then the form supports start/end times, timezone selection, recurrence (daily/weekly with day selection), and blackout dates And validation prevents end <= start and overlapping windows for the same scope And a timeline preview shows the next 8 weeks of windows and exceptions And saving persists a versioned configuration with change note and effective timestamp And only Project Admins can create/update; others can view but not edit
Grace extension with auto-calculated schedule impact
Given a window has ≤30 minutes remaining and a consultant requests a grace extension When a Project Admin approves the one-click extension Then the current window end extends by the approved duration and the project timeline reflects schedule impact on subsequent windows And all services and UI update within 3s; APIs allow edits until the new end time And audit trail records requester, approver, duration, reason, and recalculated dates And if the request is denied or times out, no changes occur and the requester is notified
Dual Lock Modes Control (Soft vs Hard)
"As a project owner, I want distinct soft and hard lock modes so that I can balance accountability with controlled flexibility based on project phase and risk."
Description

Implements two enforcement modes that determine post-window behavior and override capabilities. Soft Lock places the workspace in read-only while enabling controlled recovery actions (e.g., grace extension requests and owner-approved temporary unlocks) governed by policy constraints. Hard Lock enforces strict read-only with no extensions except explicit admin override with justification. Both modes display clear state indicators (banners, badges, disabled controls) and propagate mode flags to the permissions layer, editor services, and API responses. Admin and project owners can configure default mode per phase or drawing set and schedule automatic escalation (Soft to Hard) after a defined period. This modular design ensures accountability while preserving momentum when policy allows.
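The mode-to-capability mapping can be expressed as a small lookup table. The role and flag names below follow the spec's permission flags; modeling it as a static dict is an implementation assumption:

```python
SOFT, HARD = "SOFT", "HARD"

# Lock-mode capability matrix; canEdit is always False once a window closes.
PERMISSIONS = {
    (SOFT, "consultant"): {"canEdit": False, "canRequestExtension": True},
    (SOFT, "owner"):      {"canEdit": False, "canApproveTemporaryUnlock": True},
    (SOFT, "admin"):      {"canEdit": False, "canOverride": True},
    (HARD, "consultant"): {"canEdit": False, "canRequestExtension": False},
    (HARD, "owner"):      {"canEdit": False, "canApproveTemporaryUnlock": False},
    (HARD, "admin"):      {"canEdit": False, "canOverride": True},
}

def capabilities(lock_mode: str, role: str) -> dict:
    """Resolve the flags propagated to the permissions layer and API responses."""
    return dict(PERMISSIONS[(lock_mode, role)])
```

Centralizing the matrix in one place is what keeps the UI, editor services, and API responses consistent when a mode changes or escalates.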

Acceptance Criteria
Mode Flag Propagation Across Layers
Given lockMode=Soft, When permissions are fetched, Then Consultant{canEdit:false, canRequestExtension:true}, Owner{canEdit:false, canApproveTemporaryUnlock:true}, Admin{canEdit:false, canOverride:true} are returned. Given lockMode=Hard, When permissions are fetched, Then Consultant{canEdit:false, canRequestExtension:false}, Owner{canEdit:false, canApproveTemporaryUnlock:false}, Admin{canEdit:false, canOverride:true} are returned. Given any lock mode, When GET /workspaces/:id is called, Then the response includes lockMode and lockStateTimestamp and is not cached beyond 60s. Given any lock mode, When POST/PUT/PATCH to a write endpoint is called and canEdit=false, Then the API responds 403 with error.code in {LOCKED_SOFT, LOCKED_HARD} and includes lockMode in the body. Given a mode change event, When the editor initializes, Then UI controls reflect permissions and lock banners appear within 200 ms.
Soft Lock with Grace Extension Request and Audit Trail
Given lockMode=Soft and role=Consultant, When "Request Grace Extension" is clicked, Then a dialog opens with auto-calculated scheduleImpact and defaultExtensionMinutes per policy. Given the dialog, When the request is submitted with reason length ≥10, Then a request record is created (id, duration, scheduleImpact, reason, submitter, timestamp), an audit entry is written, and the owner is notified in-app and via email within 60s. Given a submitted request, Then the workspace remains read-only until approval and request status=Pending is visible to requester and owner. Given policy.maxExtensionsPerClose=N, When N requests are pending/approved, Then the request control is disabled with a policy tooltip. Performance: P95 request creation < 2 s; validation errors surface inline.
Owner-Approved Temporary Unlock Under Soft Lock
Given lockMode=Soft, When an owner approves a temporary unlock with duration D (0<D≤policy.maxUnlockMinutes) and dailyUnlockCount<policy.maxUnlocksPerClose, Then the workspace becomes editable for authorized roles only, a countdown banner appears, all edits are versioned and tagged with unlockSessionId, and read-only resumes automatically at expiry. Given a temporary unlock, When non-authorized roles attempt to edit, Then the API returns 403 UNAUTHORIZED_ROLE and UI controls remain disabled. Given unlock start and end, Then audit logs record approver, duration, start/end timestamps, and affected drawings; notifications are sent at start and end within 60s.
Hard Lock Strict Enforcement and Admin Override
Given lockMode=Hard, When any non-admin attempts a write action, Then write controls are disabled and the API returns 403 LOCKED_HARD with lockMode=HARD. Given lockMode=Hard, When an admin initiates an override, Then a justification (≥15 chars) and duration D (0<D≤policy.maxAdminOverrideMinutes) are required or validation errors are shown and no override begins. Given a valid override, Then the UI shows an "Admin Override" banner with countdown, only permitted roles can edit per policy, and all changes are versioned and associated with overrideSessionId. Given override end (time expiry or manual revoke), Then the system reverts to Hard Lock, blocks further edits, and audits justification, approver, duration, and affected items; notifications are dispatched within 60s.
Clear Lock State Indicators in UI
Given lockMode=Soft or Hard, When the workspace loads or mode changes, Then a banner displays mode-specific text and icon (Soft=amber, Hard=red), lock badges appear on drawing list and header, and disabled controls show tooltips explaining next steps. Given a mode change, Then indicators render within 300 ms, persist across reloads, and are announced via ARIA live region; color contrast meets WCAG AA (≥4.5:1). Given user locale, When mode indicators render, Then all copy is localized accordingly.
Automatic Escalation from Soft to Hard Lock
Given a phase or drawing set with escalationDelay=X hours in the project timezone, When currentTime ≥ closeTime+X, Then the system escalates to Hard Lock exactly once, updates mode flags across services within 60s, disables grace/unlock controls, and writes an audit event. Given scheduler retries, When escalation has already occurred, Then no duplicate audit events or notifications are created (idempotent). Given escalation, Then owners and consultants receive notifications within 60s, any in-progress temporary unlock is terminated with autosave, and subsequent API responses reflect lockMode=HARD.
Configure Default Lock Mode per Phase and Drawing Set
Given admin or project owner, When opening Settings > Lockout Guard, Then they can set defaultLockMode per phase and per drawing set, set escalationDelay, and define policy constraints (maxUnlockMinutes, maxUnlocksPerClose, maxExtensionsPerClose, defaultExtensionMinutes). Given Save, When configuration is submitted, Then validation enforces numeric ranges and logical constraints (e.g., Hard Lock disables grace requests), changes are versioned and auditable, and take effect for the next close event. Given GET /projects/:id/lockout-config, Then the API returns the active configuration; Given invalid PUT data, Then the API returns 400 with field-level errors.
One-Click Grace Extension Workflow
"As a consultant, I want to request a quick grace extension with minimal friction so that I can finish critical edits without derailing the overall schedule."
Description

Enables consultants and designated contributors to request a time-limited extension after a Soft Lock, using a single-click action from the locked workspace banner. The request form auto-suggests durations based on policy (max extension per window, daily caps) and requires a brief justification. Upon submission, approvers receive an actionable notification with in-message approve/deny controls; approval automatically extends the window, updates the project timeline, re-enables editing for the approved duration, and logs all actions to the audit trail. Denials communicate rationale to the requester and keep the lock intact. The workflow supports SLAs, auto-expiry of pending requests, and safeguards against overlapping or chained extensions beyond policy limits, preserving momentum without eroding schedule discipline.
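A sketch of the policy gate applied at submission time (field names such as `max_extension_minutes` and the state shape are placeholders for whatever the policy store actually uses):

```python
def can_request_extension(requested_minutes: int,
                          policy: dict,
                          state: dict) -> tuple[bool, str]:
    """Return (allowed, reason); the reason names which limit blocked the request."""
    if state["pending_requests"] > 0:
        return False, "only one pending request per workspace"
    remaining = policy["max_extension_minutes"] - state["minutes_used_this_window"]
    if requested_minutes > remaining:
        return False, "exceeds remaining per-window limit"
    if state["requests_today"] >= policy["daily_cap"]:
        return False, "daily cap reached"
    return True, "ok"
```

Returning a reason string rather than a bare boolean lets the form surface the specific limit that was hit, matching the requirement that errors explain which cap blocked the submission.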

Acceptance Criteria
Extension Request Launch and Form Prefill
Given the workspace is in Soft Lock and the user is a Consultant or Designated Contributor with prior edit access When the user clicks “Request Grace Extension” on the lock banner Then a modal request form opens with duration options auto-suggested per policy (max-per-window, daily cap, minimum increment) And the default option is the largest allowable duration within current limits And the form displays the calculated schedule impact for the currently selected duration And the justification field is required (min 10 characters) And the Submit button remains disabled until a valid duration is selected and justification is provided
Submission Validation and Policy Enforcement
Given the extension request form is open and policy limits are loaded When the user attempts to submit a duration that exceeds the remaining per-window limit or daily cap Then the submission is blocked with a clear error explaining which limit is exceeded And no request record is created
Given there is already a Pending or Active extension overlapping the current lock window for this workspace When a new request is attempted Then the system blocks submission with an error stating overlapping/chained extensions are not allowed beyond policy limits And only one Pending request per workspace is allowed; additional attempts are blocked with a message
Approver Actionable Notification and Auto-Extend
Given an extension request is successfully submitted When an approver opens the in-app or email notification and clicks Approve Then the system immediately extends the lock window by the approved duration starting at the approval timestamp And re-enables editing for authorized users for that duration And updates the project timeline to reflect the schedule impact equal to the approved duration And sends confirmation to the requester and watchers And writes audit entries for request and approval including new lock end time And the approval action is idempotent so repeated clicks do not double-apply the extension
Denial Keeps Lock Intact and Communicates Rationale
Given an approver reviews a pending extension request When the approver clicks Deny and provides a rationale (required) Then the request status is set to Denied and the rationale is recorded in the audit trail And the requester is notified with the denial rationale And the workspace lock state and project timeline remain unchanged
SLA and Auto-Expiry of Pending Requests
Given a request remains Pending When the configured SLA time elapses without approver action Then the request is auto-set to Expired, approver controls are disabled, and no extension is applied And both requester and approver receive an expiry notification And the requester may submit a new request subject to current policy caps and limits
Comprehensive Audit Trail Logging
Given any request lifecycle event occurs (Submitted, Approved, Denied, Expired, Auto-Lock Reapplied) When the event completes Then an immutable audit entry is recorded within 5 seconds containing actor, timestamp (UTC), action type, justification (if any), policy parameters evaluated, previous and new lock end times (if changed), and a link to the affected workspace And audit entries are visible in the project audit view and exportable
Auto-Re-Lock After Extension Ends
Given an approved extension is active When the approved duration elapses Then the workspace automatically returns to read-only Soft Lock state And further edits are rejected with a lock message And the audit trail records the auto-re-lock event with timestamp And the project timeline reflects no additional changes beyond the applied extension
Schedule Impact Calculator
"As an approver, I want to see the exact schedule impact of an extension before I approve it so that I can make an informed decision and avoid cascading delays."
Description

Automatically computes downstream schedule effects for any extension or lock-mode change by analyzing project milestones, dependent windows, approval deadlines, and client review periods in PlanPulse. Displays a clear before/after view with deltas (dates and durations), highlights conflicts, and flags policy violations or risks (e.g., client approval slipping past a contractual date). Integrates with the timeline/Gantt view, updates ICS/calendar events, and posts a summarized impact note to the project activity feed upon approval. Supports what-if previews for approvers before committing an extension, ensuring transparent trade-offs and informed decisions.
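At its core, the recalculation is a traversal of the project's dependency graph. A simplified sketch follows; it assumes every downstream item slips by the full delta, whereas the real calculator would also account for float, lead/lag offsets, and date constraints:

```python
from collections import defaultdict, deque
from datetime import date, timedelta

def propagate_shift(dates: dict[str, date],
                    deps: list[tuple[str, str]],   # (predecessor, successor) pairs
                    changed: str,
                    delta_days: int) -> dict[str, date]:
    """Return the 'after' dates once `changed` slips by delta_days."""
    after = dict(dates)
    successors = defaultdict(list)
    for pred, succ in deps:
        successors[pred].append(succ)
    shift = timedelta(days=delta_days)
    queue = deque([changed])
    visited = {changed}
    after[changed] += shift
    while queue:                                   # BFS over downstream items
        for nxt in successors[queue.popleft()]:
            if nxt not in visited:
                visited.add(nxt)
                after[nxt] += shift
                queue.append(nxt)
    return after
```

Diffing the returned map against the input yields exactly the before/after rows and per-item deltas the impact view displays; items absent from the visited set are the ones labeled "No Change".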

Acceptance Criteria
Extension or Lock‑Mode Change Impact Computation and Before/After View
Given a project with milestones, dependent windows, approval deadlines, and client review periods And a user initiates a grace extension or a lock‑mode change on a specific window When the Schedule Impact Calculator runs Then it recalculates start/end dates for all directly and indirectly dependent items using the project dependency graph And it respects lead/lag offsets and constraints (earliest start, must finish by) And it renders a before/after view listing each impacted item with original date(s), new date(s), and delta values in working days and calendar days And it displays the total number of impacted items and the net slip of the critical path in working days And items with no change are explicitly labeled as "No Change"
What‑If Preview Without Committing Changes
Given an approver opens a what‑if preview for a requested extension or lock‑mode change When the preview is displayed Then no changes are persisted to the timeline/Gantt, calendar (ICS) events, or activity feed And the preview is clearly labeled as "What‑If" with requester and timestamp And all conflicts, flags, and deltas are shown exactly as they would appear post‑approval And closing or cancelling the preview leaves the project schedule unmodified
Timeline/Gantt and ICS Calendar Update Upon Approval
Given an approver approves the requested extension or lock‑mode change When approval is confirmed Then the timeline/Gantt updates the dates of all impacted items in a single atomic operation And calendar (ICS) events for affected milestones and review periods are rescheduled to the new times while preserving attendees And all updates complete within 60 seconds of approval and are visible on refresh And failures to update external calendars are logged and surfaced to the approver with retry guidance
Conflict Detection and Policy Violation Flagging
Given the recalculated schedule introduces conflicts or breaches policies When the impact is presented in preview or after approval Then items crossing contractual dates, SLAs, or approval deadlines are flagged with severity (Violation vs Risk) and reason text And overlapping windows, negative float, and date constraint breaches are highlighted And each flag links to the underlying policy/date definition for traceability And if project policy is configured to block approvals on violations, the Approve action is disabled until resolved or an authorized override is applied
Activity Feed Summary Note on Approval
Given an extension or lock‑mode change is approved When the system records the change Then a summarized impact note is posted to the project activity feed including: approver, requester, timestamp (UTC and project local), count of impacted items, critical path slip (working days), earliest and latest changed dates, and any violations/risks And the note links to the before/after detail view and the modified window And the note respects project access permissions and is visible to members with read access
Performance and Scalability of Impact Calculation
Given a project of up to 200 milestones and 500 dependencies with up to 3 dependency levels When the calculator evaluates an extension up to 10 working days or a lock‑mode change Then server‑side calculation completes within 2 seconds p95 and 4 seconds p99 And the before/after view renders client‑side within 1 second p95 And UI interactivity is not blocked for longer than 200 ms during rendering
Project Calendar and Time Zone Rules
Given the project defines working days, hours, holidays, and a project time zone When the calculator computes new dates and deltas Then working‑day deltas exclude non‑working days and holidays, while calendar‑day deltas include them, and each metric is clearly labeled And windows configured to allow non‑working days are honored accordingly And all stored times are in UTC and displayed in each user’s local time, correctly handling DST transitions
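The two delta metrics above differ only in what they count. A sketch of the working-day variant (the Saturday/Sunday weekend and the half-open interval are assumptions; the project calendar could define either differently):

```python
from datetime import date, timedelta

def working_day_delta(start: date, end: date,
                      holidays: frozenset = frozenset()) -> int:
    """Working days in [start, end): weekends and holidays excluded.
    The calendar-day delta for the same span is simply (end - start).days."""
    count = 0
    day = start
    while day < end:
        if day.weekday() < 5 and day not in holidays:
            count += 1
        day += timedelta(days=1)
    return count
```

Labeling each metric explicitly matters because the same one-week slip reads as 7 calendar days but only 5 working days, and mixing the two is a classic source of schedule disputes.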
Audit Trail and Compliance Logging
"As a compliance reviewer, I want a complete audit log of lock activity so that I can verify policy adherence and resolve disputes quickly."
Description

Captures an immutable, queryable record of all lock-state transitions, extension requests, approvals/denials, overrides, and edits attempted during locked periods. Each event stores actor identity, role, timestamp, IP/device fingerprint, impacted artifacts (drawings, versions), justification text, and pre/post schedule snapshots. Provides exports (CSV/JSON) and filters by project, user, time range, and event type, with tamper-evident storage and retention policies aligned to firm compliance needs. Integrates with the activity feed for human-readable summaries and exposes an API endpoint for downstream BI/reporting. This ensures accountability, simplifies audits, and strengthens client trust.

Acceptance Criteria
Log Lock-State Transitions and Locked-Period Edit Attempts
Given a project with Lockout Guard enabled When a soft or hard lock starts or ends Then an audit event is created containing: actor identity, actor role, timestamp in UTC ISO 8601, IP address, device fingerprint, impacted artifact IDs and version IDs, justification text (if provided), and pre/post schedule snapshots (when applicable) And the event is immutable and cannot be altered or deleted after write And the event becomes queryable within 3 seconds of occurrence And any failed write is retried up to 3 times and surfaced as an error event if persistence ultimately fails When a user attempts to edit during a locked period Then an audit event of type "locked_edit_attempt" is recorded with the attempted action metadata and outcome=blocked
Capture Extension Requests, Approvals, Denials, and Overrides
Given a consultant submits a lock grace extension request When the request is submitted Then an audit event is recorded with requester identity, role, justification text (required), requested duration, and auto-calculated schedule impact (delta before/after snapshot) When an approver approves or denies the request Then a corresponding audit event is recorded capturing approver identity, role, decision (approved|denied), timestamp, and updated schedule snapshot When an admin or project owner performs a lock override Then an audit event records the override action, original and new lock state, impacted artifacts, justification (required), and pre/post schedule snapshots
Filterable and Searchable Audit Log UI
Given an auditor opens the Audit Log UI When filters for project, user, time range, and event type are applied Then only matching events are returned and counts reflect the filtered set And timestamps display in the viewer’s local timezone while underlying data remains UTC And results are sortable by timestamp (default desc) and paginated (page size selectable up to 200) And the first page returns within 2 seconds for up to 50,000 matching events And each row exposes a details panel showing the complete stored fields for the event
Export Audit Trail to CSV and JSON
Given a user with export permission applies filters to the audit log When Export CSV is requested Then a CSV is generated and downloaded containing only filtered events with columns: event_id, event_type, actor_id, actor_role, timestamp_utc, ip, device_fingerprint, project_id, artifact_ids, version_ids, justification, schedule_snapshot_before, schedule_snapshot_after And values are properly escaped and encoded in UTF-8 with a header row When Export JSON is requested Then a JSON array is generated and downloaded with the same fields and exact values as shown in the UI details And the number of exported records matches the filtered result count (subject to pagination selection) And exports include a generated filename with project identifier and timestamp
Tamper-Evident Storage and Retention Policy Enforcement
Given audit events are written to storage When an integrity verification job runs Then each event’s tamper-evident proof (e.g., hash chain) validates successfully, otherwise the system raises a critical alert and records an integrity_failure event And any read request includes a verification flag indicating integrity_pass or integrity_fail Given a firm-configured retention policy exists (e.g., X years per project) When events exceed the retention period Then they are archived or purged per policy and a retention_action event is recorded including counts and scope And attempts to access purged events return 410 Gone with reference to the retention_action event
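The tamper-evident proof named in the criterion (a hash chain) can be sketched like this — a simplified model in which each entry's hash covers the previous hash plus the event payload, so altering any historical event invalidates every subsequent link:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def chain_hash(prev_hash: str, event: dict) -> str:
    # Canonical JSON serialization so the same event always hashes identically.
    payload = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append_event(log: list, event: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"event": event, "hash": chain_hash(prev, event)})

def verify_chain(log: list) -> bool:
    """Integrity verification job: recompute every link; any mismatch fails."""
    prev = GENESIS
    for entry in log:
        if entry["hash"] != chain_hash(prev, entry["event"]):
            return False
        prev = entry["hash"]
    return True
```

A production system would anchor the chain in append-only storage and raise the critical alert / integrity_failure event on a failed verification, per the criterion.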
Activity Feed Integration for Human-Readable Summaries
Given an audit event is recorded When the activity feed is refreshed or pushed in real time Then a human-readable summary entry is displayed including event type, actor display name, affected artifact references, and timestamp And the entry links to the relevant drawing/version or lock settings view And the feed respects project permissions so that users only see entries they are authorized to view And sensitive fields (IP, device fingerprint) are not shown in the feed but remain available in the audit detail
Audit Events API for BI and Reporting
Given an authenticated client with proper scope calls the Audit Events API When requesting events with filters for project, user, time range, and event type Then the API returns a paginated JSON response within 2 seconds for up to 1,000 events per page, including all required fields and a next-page cursor And role-based access control ensures the client retrieves only events they are permitted to see And the API rate limits are enforced and communicated via headers And OpenAPI documentation describes fields, filters, pagination, and response codes
Role-Based Permissions and Overrides
"As an administrator, I want clear role-based controls for lock configuration and overrides so that only authorized users can change enforcement states."
Description

Defines granular permissions for configuring lock rules, requesting extensions, approving/denying requests, and executing emergency overrides. Maps capabilities to PlanPulse roles (Project Lead, Consultant, Client Approver, Admin) with per-project overrides and SSO group mapping. Enforces policy constraints (e.g., who can escalate from Soft to Hard, who can break a Hard Lock) and requires justification and optional multi-approver workflows for sensitive actions. Integrates with existing access control to ensure lock states are honored consistently across the editor, conversations, versioning, and API operations, preventing privilege drift and unauthorized edits.

Acceptance Criteria
Configure Lock Rules by Role (Per-Project)
Given an authenticated user with a role derived from SSO mapping or local assignment and a selected project When the user opens the project's Lock Rules settings or calls PATCH /projects/{id}/lock-rules Then only Project Lead and Admin may view and modify lock windows, soft/hard modes, escalation paths, and approver policies And Consultant and Client Approver receive 403 RBAC_DENIED and an audit entry is recorded with userId, projectId, action, timestamp, and reason And any change requires a justification text between 10 and 500 characters And successful changes are versioned with previous values and author and emit a lockRules.updated event And new rules are enforced across UI and API within 5 seconds of save
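A minimal sketch of the guard described above for the lock-rules endpoint — role names and the 403 code come from the criteria; the 422 status for an invalid justification is an assumption:

```python
LOCK_RULE_EDITORS = {"Project Lead", "Admin"}  # roles permitted by the criteria

def check_lock_rules_edit(role: str, justification: str):
    """Authorization + validation guard for PATCH /projects/{id}/lock-rules.
    Returns an (http_status, body) pair; a real handler would also write
    the audit entry and emit the lockRules.updated event on success."""
    if role not in LOCK_RULE_EDITORS:
        return 403, {"code": "RBAC_DENIED"}
    if not (10 <= len(justification) <= 500):
        return 422, {"code": "INVALID_JUSTIFICATION"}  # hypothetical error code
    return 200, {"code": "OK"}
```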
Consultant Grace Extension Request Submission
Given a project in Soft Lock or Hard Lock and a user with Consultant role on that project When the user clicks Request Extension and submits a duration between 1 and 72 hours with a justification of at least 20 characters Then the system calculates and displays schedule impact before submission And on submit, a request is created with status Pending, returns 201 with requestId, and is routed to configured approver(s) And the Consultant cannot change lock state directly and receives 403 if attempting to modify lock rules And duplicate pending requests by the same user for the same lock window are rejected with 409 DUPLICATE_REQUEST And an audit entry and notifications to approvers are created within 10 seconds
Approver Decision and Multi-Approver Policy for Extensions
Given a Pending extension request for a project and approver policies that may require 1 or 2 approvers When a Project Lead or Client Approver opens the request and chooses Approve or Deny with a required comment between 10 and 500 characters Then if the policy requires 2 approvers, both must approve within the SLA window (e.g., 24 hours) for approval to complete; any Deny immediately finalizes as Denied And upon final approval, the lock window is extended by the approved duration, schedule impact is updated, and enforcement state reflects the extension across editor, conversations, versioning, and API within 5 seconds And upon final decision, the requester and watchers are notified, and an immutable audit record is written containing approver(s), decision, timestamp, and comment And API returns 200 with final state (Approved/Denied) and updated lock state; unauthorized approvers receive 403 RBAC_DENIED
Emergency Hard Lock Break with Justification and TTL
Given a project under Hard Lock and an Admin or designated Emergency Override role initiates Break Hard Lock When the initiator provides a reason code and justification between 50 and 1000 characters and submits the override Then if the policy requires multi-approver (e.g., 2 of 3), a second approver must confirm within 30 minutes or the override auto-cancels And on final approval, edit access is enabled for a time-limited window (TTL up to 4 hours) visible in UI, then automatically reverts to the previous Hard Lock And all edits during the override window are flagged with overrideId and included in audit and version history And unauthorized users receive 403; partial failures roll back to the prior lock state and raise an alert; all actions are logged with userId, projectId, timestamps
SSO Group Mapping with Per-Project Role Overrides
Given an IdP SSO login response containing group claims and a project with defined role mappings and overrides When the user signs in and accesses the project Then the system maps IdP groups to PlanPulse roles per the configured mapping; per-project overrides apply after global mapping And Admins can create/update/delete per-project overrides; changes are audit logged and take effect within 60 seconds, invalidating cached permissions And removal of a role or override revokes elevated capabilities immediately; in-flight privileged API calls fail with 403 RBAC_ROLE_REVOKED and no changes are persisted And unknown or unmapped groups result in no project role assignment and no privileged access
Consistent Enforcement Across Editor, Conversations, Versioning, and API
Given a user with a defined role and a project in a given lock state When the user attempts protected actions (edit markup, upload new drawing/version, post/delete conversation messages, mutate via API) Then authorization and lock checks are applied consistently, allowing only actions permitted by role and lock state And restricted actions return 403 RBAC_DENIED or 423 LOCKED with a structured error body {code, reason, lockState, requiredRole}; allowed actions succeed And state or role changes propagate and take effect across all surfaces within 5 seconds (p95); authorization check latency is under 300 ms (p95) And no unauthorized edits or versions are persisted; daily integrity check reports zero discrepancies in attempted vs. persisted unauthorized changes
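The structured error body above can be modeled as a single authorization gate applied on every surface — a deliberately simplified sketch (a real check would consult a role hierarchy and per-action permission matrix rather than a single required role):

```python
def authorize_action(role: str, required_role: str,
                     lock_state: str, allowed_in_lock: bool):
    """Combined RBAC + lock check; returns (http_status, error_body_or_None)."""
    if role != required_role:  # simplification of a full role-capability lookup
        return 403, {"code": "RBAC_DENIED", "reason": "insufficient role",
                     "lockState": lock_state, "requiredRole": required_role}
    if lock_state == "hard" and not allowed_in_lock:
        return 423, {"code": "LOCKED", "reason": "project is hard-locked",
                     "lockState": lock_state, "requiredRole": required_role}
    return 200, None
```

Centralizing this gate is what makes the editor, conversations, versioning, and API return consistent `RBAC_DENIED` / `LOCKED` bodies rather than surface-specific errors.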
Notifications and UX Indicators
"As a contributor, I want clear alerts and visual cues about lock status and timing so that I can plan my work and avoid being blocked unexpectedly."
Description

Delivers timely, multi-channel notifications and unambiguous in-app cues for upcoming locks, lock activations, extension requests, and decisions. Includes countdown banners, disabled control tooltips, and a timezone-aware clock within the workspace. Sends pre-close warnings and lock/extension outcomes via email, in-app alerts, and optional Slack/MS Teams integrations with deep links to act. All messages reflect Soft/Hard state, remaining time, and next steps to reduce confusion. This cohesive UX ensures users understand status at a glance and can respond quickly, minimizing idle time and miscommunication.

Acceptance Criteria
Pre-Close Multi-Channel Warning
Given a workspace with a scheduled lock close time T in the project timezone And a user has at least one notification channel enabled (email and/or in-app and/or Slack/Teams) When the current time reaches T-24h, T-1h, and T-15m Then a pre-close warning is sent on each enabled channel within 60 seconds of each threshold And each message includes: lock mode (Soft/Hard), remaining time, project timezone label, and a clear next-step CTA And each message contains a deep link that opens the relevant workspace view with the countdown banner visible And delivery is recorded in the audit log with timestamp, channel, and recipient
Lock Activation UX Indicators (Soft vs Hard)
Given a workspace reaches its scheduled lock close time T When the lock transitions to Soft Then the in-app banner updates to "Soft Lock active" and shows next steps and a "Request Extension" CTA And only permitted actions remain enabled; disallowed controls are disabled with explanatory tooltips reflecting Soft state When the lock transitions to Hard Then all editing controls are disabled and labeled read-only And the banner and status pill display "Hard Lock active" with next steps and a link to request an extension (if allowed) And all cues display the correct lock mode and remaining time (0 for Hard)
Disabled Control Tooltips During Lock
Given a user hovers, focuses, or taps a disabled control while a Soft or Hard lock is active When the tooltip is displayed Then it explains why the control is disabled, shows the current lock mode (Soft/Hard), remaining time (or "Locked"), and the next-step CTA (e.g., Request Extension) And the tooltip appears adjacent to the control and dismisses on blur/escape/click outside And the tooltip includes a deep link to the appropriate action (e.g., open extension request dialog)
Timezone-Aware Clock and Countdown Banner
Given two users in different local timezones view the same workspace When they view the countdown banner and lock timestamps Then both see identical remaining time values and timestamps labeled with the project timezone And pre-close thresholds and lock activation are computed using the project timezone, not each viewer's local time And changing a viewer's profile timezone does not alter the countdown; only localized timestamp formatting changes
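The timezone rule above — identical remaining time for all viewers, timestamps labeled with the project timezone — follows naturally from storing the close time in UTC; the `America/New_York` project timezone below is an illustrative assumption:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

PROJECT_TZ = ZoneInfo("America/New_York")  # hypothetical project timezone

def remaining_time(close_at_utc: datetime, now_utc: datetime):
    """Viewer-independent: both users see the same countdown regardless of
    their profile timezone, because the arithmetic is done in UTC."""
    return close_at_utc - now_utc

def display_close(close_at_utc: datetime) -> str:
    """Lock timestamps are rendered in (and labeled with) the project timezone."""
    return close_at_utc.astimezone(PROJECT_TZ).strftime("%Y-%m-%d %H:%M %Z")
```

`zoneinfo` handles DST transitions from the IANA database, so a close time near a DST boundary still renders and counts down correctly.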
Extension Request Alert to Approvers with Deep Link
Given a consultant submits a grace extension request from the banner or tooltip And the system calculates a proposed new close time T' and schedule impact Δ When the request is submitted Then approvers receive in-app and email notifications (and Slack/Teams if enabled) within 60 seconds And each message includes: current lock mode (Soft/Hard), proposed close time T' with project timezone, schedule impact Δ, and approve/deny deep links And the workspace shows a visible "Extension pending" indicator and prevents duplicate requests by the same requester until a decision is made And the request is logged in the audit trail with requester, timestamps, and proposed impact
Extension Decision Outcome Notifications and UI Update
Given an approver approves or denies a pending extension request When the decision is recorded Then the requester and watchers receive notifications on their enabled channels within 60 seconds And approved decisions update the countdown banner and clock to T' and add an audit trail link; denied decisions maintain the current lock and show next steps And all messages include the decision, resulting lock state (Soft/Hard), remaining time (if applicable), and a deep link to view the decision and context
Deep-Link Navigation and Access Handling
Given a user clicks a deep link in an email, in-app alert, or Slack/Teams message When the user is authenticated and authorized for the target workspace Then they land on the exact workspace context (board/view) with the relevant banner/clock visible When the user is not authenticated Then they are routed to sign-in and returned to the original target after successful authentication When the user lacks access rights Then they see an access-denied message with a "Request access" CTA And all deep-link visits are captured in the audit log with source channel and target context

Nudge Cadence

Targeted reminders tuned to the window lifecycle: a pre‑brief heads‑up, mid‑window progress nudge, and final‑call alert. Each message includes the consultant’s open items and a one‑click Done or Blocker response, boosting on‑time completion while reducing manual chasing.

Requirements

Lifecycle-Triggered Nudge Scheduling
"As a project lead, I want automated reminders aligned to each review window so that deadlines are met without me manually chasing contributors."
Description

Automatically schedules three reminder touchpoints per review window—pre‑brief heads‑up, mid‑window progress nudge, and final‑call alert—driven by project timelines and deliverable due dates. The scheduler aligns to each consultant’s timezone and working hours, suppresses nudges when all items are complete, and re‑computes timing when dates shift. It integrates with PlanPulse’s milestone and approval states, ensures idempotent delivery (no duplicates), logs schedule decisions for auditability, and supports safe throttling to avoid over‑messaging on highly active projects.

Acceptance Criteria
Three Touchpoints Scheduled per Review Window
Given a review window with startDate and dueDate and the associated milestone state is Review Pending or In Review And default offsets are configured (preBriefOffset = 24h before startDate, midWindow = midpoint between startDate and dueDate, finalCallOffset = 12h before dueDate) unless overridden at workspace/project level And a consultant has a timezone and working hours configured When the scheduler evaluates the window Then it creates exactly three pending nudge events of types pre-brief, mid-window, and final-call for that consultant And the computed send times are:
- pre-brief = startDate - 24h
- mid-window = startDate + 50% of (dueDate - startDate)
- final-call = dueDate - 12h
And each send time is adjusted to the consultant’s working hours in their timezone by shifting to the nearest valid working slot (pre-brief and mid-window shift forward; final-call shifts to the last valid slot before dueDate) And no send time is in the past at creation; if a computed time would be past, it is shifted to the next valid slot before dueDate (or suppressed if none exists) And each event is persisted with status = scheduled and a dedupKey of consultantId:windowId:touchpointType
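The three computed send times and the dedupKey format specified above can be sketched directly from the formulas — a minimal illustration using the default offsets:

```python
from datetime import datetime, timedelta

def touchpoints(start: datetime, due: datetime,
                pre_brief_offset=timedelta(hours=24),
                final_call_offset=timedelta(hours=12)) -> dict:
    """Raw send times before working-hours/timezone adjustment."""
    return {
        "pre-brief": start - pre_brief_offset,
        "mid-window": start + (due - start) / 2,  # midpoint of the window
        "final-call": due - final_call_offset,
    }

def dedup_key(consultant_id: str, window_id: str, touchpoint: str) -> str:
    """dedupKey persisted with each event: consultantId:windowId:touchpointType."""
    return f"{consultant_id}:{window_id}:{touchpoint}"
```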
Timezone and Working Hours Alignment
Given a consultant with timezone America/Los_Angeles and working hours 09:00–17:00 Monday–Friday And a nudge’s computed time falls at 07:30 local time on a working day When the scheduler finalizes the send time Then the send time is shifted forward to 09:00 local time that day And if the computed time falls on a non-working day, it is shifted to 09:00 on the next working day not later than dueDate And daylight saving time transitions are honored according to the timezone rules And if no consultant timezone is configured, the project timezone is used; if none, UTC is used And if no working hours are configured, default 09:00–17:00 Monday–Friday are used
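The forward-shift rule for pre-brief and mid-window nudges can be sketched as below, using the default 09:00–17:00 Monday–Friday hours from the criterion (final-call shifting backward, and the dueDate cap, are omitted for brevity):

```python
from datetime import datetime, time, timedelta

WORK_START, WORK_END = time(9, 0), time(17, 0)
WORK_DAYS = {0, 1, 2, 3, 4}  # Monday-Friday

def shift_forward_to_working_slot(dt: datetime) -> datetime:
    """Shift a computed send time forward to the nearest valid working slot."""
    while True:
        if dt.weekday() not in WORK_DAYS or dt.time() > WORK_END:
            # Non-working day, or past end of day: move to 09:00 next day.
            dt = datetime.combine(dt.date() + timedelta(days=1),
                                  WORK_START, tzinfo=dt.tzinfo)
        elif dt.time() < WORK_START:
            dt = datetime.combine(dt.date(), WORK_START, tzinfo=dt.tzinfo)
        else:
            return dt
```

Run with timezone-aware datetimes in the consultant's zone, this reproduces the example in the criterion: a 07:30 local time on a working day shifts to 09:00 that day, and a weekend time shifts to 09:00 the next working day.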
State-Gated Scheduling and Suppression
Given a consultant’s review window for a deliverable When the associated milestone is not in Review Pending or In Review (e.g., Draft, Approved, or Cancelled) Then no new nudge events are scheduled for that window And any previously scheduled future nudges are canceled with reason "MilestoneClosed" when the milestone transitions to Approved or Cancelled And at the intended send time, if the consultant has zero open items in the window, the nudge is not sent and is marked suppressed with reason "NoOpenItems" And all suppression and cancellation actions are logged
Recompute Schedule When Dates Shift
Given a review window with unsent scheduled nudge events And the startDate and/or dueDate is updated When the scheduler re-evaluates the window Then all unsent future nudges are recalculated using the current offsets and alignment rules And updated send times replace previous times atomically And for each updated nudge, an audit record is written with previousTime, newTime, and reason "WindowDatesChanged" And sent or canceled nudges are not modified
Idempotent Delivery Across Retries and Concurrent Runs
Given a nudge identified by dedupKey = consultantId:windowId:touchpointType When multiple scheduler instances or retry attempts process the same nudge within any overlapping timeframe Then at most one message is sent for that dedupKey And at most one "sent" audit record is created And subsequent duplicate attempts are no-ops recorded with reason "Duplicate" And the nudge ends in a single terminal state (sent, suppressed, or canceled) without duplicates
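At-most-once delivery on the dedupKey can be sketched as a claim-before-send step — here with an in-memory set standing in for what would, in production, be an atomic database insert against a unique index on dedupKey:

```python
def send_once(nudge: dict, claimed_keys: set, audit: list) -> bool:
    """Claim the dedupKey before sending; duplicates become audited no-ops."""
    key = nudge["dedupKey"]
    if key in claimed_keys:  # stand-in for a DB unique-constraint violation
        audit.append({"action": "duplicate", "dedupKey": key, "reason": "Duplicate"})
        return False
    claimed_keys.add(key)  # atomic claim in a real store
    audit.append({"action": "sent", "dedupKey": key})
    return True
```

Because the claim happens before the send, concurrent scheduler instances racing on the same nudge produce exactly one "sent" audit record and one terminal state.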
Audit Logging of Scheduling Decisions
Given any scheduler action (scheduled, rescheduled, sent, suppressed, canceled, throttled, duplicate) When the action occurs Then an immutable audit log is written within 5 seconds including: timestamp (UTC), actor = "scheduler", projectId, windowId, consultantId, touchpointType, actionType, reasonCode, previousTime, newTime, dedupKey And audit logs are queryable by projectId and windowId And audit logs are retained for at least 180 days
Safe Throttling on Highly Active Projects
Given throttling defaults of minSpacing = 2h between nudges and dailyCap = 3 nudges per consultant per project in any 24h rolling window And a nudge would violate minSpacing or exceed dailyCap When the scheduler evaluates the send Then the nudge is deferred to the earliest time that satisfies working hours, timezone alignment, minSpacing, and dailyCap within the active window And if that earliest time would be after dueDate, the nudge is canceled with reason "ThrottledWindowEnded" And final-call nudges may exceed dailyCap by 1 to ensure at least one final-call is delivered before dueDate, while still respecting minSpacing and working hours And each throttle decision is logged with actionType = "throttled" and nextAttemptTime
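The throttle check above (2h minimum spacing, rolling 24h cap of 3, final-call allowed to exceed the cap by 1) can be sketched as a pure predicate over the consultant's recent send history:

```python
from datetime import datetime, timedelta

MIN_SPACING = timedelta(hours=2)
DAILY_CAP = 3

def may_send(send_time: datetime, recent_sends: list,
             is_final_call: bool = False) -> bool:
    """recent_sends: prior send times for this consultant + project."""
    in_window = [t for t in recent_sends
                 if send_time - t < timedelta(hours=24)]
    cap = DAILY_CAP + 1 if is_final_call else DAILY_CAP  # final-call exception
    if len(in_window) >= cap:
        return False
    if any(abs(send_time - t) < MIN_SPACING for t in recent_sends):
        return False
    return True
```

When `may_send` returns False, the scheduler would defer to the earliest compliant slot, or cancel with reason "ThrottledWindowEnded" if that slot falls after dueDate.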
Open Items Aggregation in Reminders
"As a consultant, I want reminders to include my specific open items so that I immediately know what to do without searching the workspace."
Description

Compiles each consultant’s outstanding items—markups, tasks, approvals, and comments—into a concise, personalized list embedded in every nudge. Items include due dates, status, and deep links back to the exact drawing or thread in PlanPulse. The aggregator de‑duplicates across versions, filters by assignment and permissions, highlights changes since the last nudge, and gracefully handles empty states (e.g., no open items). Performance is optimized for large projects with pagination and caching, and all data respects project access controls.

Acceptance Criteria
Consultant-Specific Open Items Compilation in Nudge
Given a project with open markups, tasks, approvals, and comments assigned to Consultant X And Consultant X has an active PlanPulse account with access to the project When a nudge is generated for Consultant X within the window lifecycle Then the nudge payload includes only open items assigned to Consultant X And each item displays a concise title/summary, current status, and due date if present (or "No due date") And each item includes a deep link that opens the exact drawing view or comment thread in PlanPulse And items are ordered by due date ascending, then by last-updated descending when due dates are equal And the payload includes one-click Done and Blocker action tokens per item that are single-use and expire in 24 hours And the total item count in the payload equals the number of open items returned by the aggregator
Cross-Version De-duplication of Items
Given a drawing with multiple versions where the same unresolved markup/task persists across versions When the aggregator compiles open items for a consultant Then only a single entry for the item is included in the nudge, referencing the latest drawing version And if the latest version marks the item as resolved/approved, the item is excluded from the nudge even if earlier versions show it open And the deep link takes the user to the latest version context with the correct anchor (e.g., markup ID) And the reported counts and pagination reflect the de-duplicated set of items
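The cross-version de-duplication rule can be sketched as a two-pass reduction: keep only the latest-version entry per item, then drop items whose latest version is resolved. Field names (`itemId`, `version`, `resolved`) are illustrative assumptions:

```python
def dedupe_items(items: list) -> list:
    """items: dicts with 'itemId', 'version', 'resolved'. Keep one entry per
    item (the latest version); exclude it entirely if that version resolved it,
    even when earlier versions still show it open."""
    latest = {}
    for it in items:
        cur = latest.get(it["itemId"])
        if cur is None or it["version"] > cur["version"]:
            latest[it["itemId"]] = it
    return [it for it in latest.values() if not it["resolved"]]
```

Counts and pagination then operate on this de-duplicated list, matching the criterion.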
Assignment and Access Control Filtering
Given items exist across multiple projects and drawings, including items Consultant X cannot access When the aggregator compiles open items for Consultant X Then only items assigned to Consultant X directly or via team/group membership are included And items assigned to other users or unassigned items are excluded And items from projects, drawings, or threads outside Consultant X's permissions are excluded And deep links for excluded items are not generated And the resulting list passes an access-control check such that attempting to open any deep link as Consultant X returns HTTP 200, while the same link with no access returns HTTP 403
Change Highlights Since Last Nudge
Given a previous nudge for Consultant X was sent at timestamp T And the system has recorded the snapshot of items included at time T When generating the next nudge at timestamp T2 Then items created after T are labeled "New" in the payload And items whose status changed since T include a "Status changed: <old> → <new>" annotation And items whose due date changed since T show the previous date and a delta (e.g., "Due moved +2d") And items resolved between T and T2 are not listed, and the payload includes a summary count of items resolved since last nudge And all change annotations are present in the payload fields for use by the nudge template
Graceful Empty State in Nudge Content
Given Consultant X has zero open items after applying assignment, permissions, and de-duplication When a nudge is generated for Consultant X Then the payload contains an explicit empty state flag And no item list is rendered and no one-click action tokens are generated And the payload includes a link to the project workspace and a short "You're all set" message key And the API returns HTTP 200 and logs no errors or warnings related to missing data
Performance, Pagination, and Caching at Scale
Given a large project with ≥5,000 open items across ≥200 drawings with realistic metadata sizes When generating a nudge payload server-side for a single consultant Then p95 server processing time for aggregation is ≤800 ms and p99 is ≤1500 ms measured over 200 runs And the first page includes up to 50 items with a next-page cursor and a deep link to "View more" in PlanPulse And subsequent page fetches return in ≤400 ms p95 with consistent ordering and no duplicates across pages And aggregation uses a cache layer such that repeated payload requests within 10 minutes hit cache with ≤60 s staleness and reflect permission changes within 60 s And peak memory usage for aggregation remains ≤150 MB per request
Precise Deep Links to Drawings and Threads
Given open items include markups, tasks, approvals, and comments across drawings and threads When the aggregator builds deep links for each item Then each deep link navigates to the exact target context (e.g., drawing version V with markup ID M focused; thread ID C scrolled to the comment) And links preserve viewport/coordinate anchors where available for markups And links respect project access controls and expire if tokenized (time-bound tokens valid ≤24 hours) And links are valid on desktop and mobile web and open in ≤1.5 s p95 under normal network conditions And opening a link without required permissions returns a 403 response and does not reveal existence of restricted content beyond generic messaging
One‑Click Done/Blocker Response Capture
"As a consultant, I want to mark items Done or flag a Blocker directly from the reminder so that I can update status quickly and surface issues early."
Description

Enables actionable buttons in each reminder for immediate status updates: Done closes or marks items complete; Blocker records a blocking issue and optionally captures a short note. Secure, signed action links work from email and push notifications without full sign‑in, expiring after first use or a set time to prevent abuse. Responses synchronize item status in PlanPulse in real time, trigger follow‑up rules (e.g., notify leads on blockers), and write a complete audit trail with actor, timestamp, and context.

Acceptance Criteria
Email Done Link Marks Open Item Complete
Given a reminder email contains a signed Done link for open item X assigned to user U When U clicks the Done link within the link’s validity window and the link has not been used Then item X status is set to Done and reflects in PlanPulse within 5 seconds And the action requires no full sign-in, using the token to authenticate U And the link is immediately invalidated after success And a confirmation page is shown stating the item was marked Done And an audit record is written with actor U, action Done, item X, source=email, timestamp
Blocker Response With Optional Note From Reminder
Given a reminder message contains a signed Blocker link for open item X assigned to user U When U activates the Blocker link and submits an optional note up to 500 characters Then item X status is set to Blocked and the note is stored with the item And the note input is optional; if blank, the Blocker action still succeeds And lead and watchers are notified within 60 seconds with the blocker note And an audit record includes the note text, actor, item, source channel, and timestamp
Single-Use, Expiry, and Signature Validation of Action Links
Given any signed action link contains claims for user, item, action, and expiry When the signature is invalid or claims do not match server records Then the request is rejected with no state change and a safe error page is displayed When the link is used after expiry or after a prior successful use Then the action is not applied and the page offers secure sign-in to proceed And the expiry duration is configurable per cadence between 15 minutes and 72 hours And all invalid/expired attempts are logged in the audit trail without mutating item state
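The signed, single-use, expiring action link can be sketched with an HMAC over the claims — a simplified model (an in-memory used-token set stands in for persistent single-use tracking, and the secret is a placeholder):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-secret"  # hypothetical key, kept server-side

def sign_action(claims: dict) -> str:
    """claims: user, item, action, exp (epoch seconds), tokenId."""
    payload = base64.urlsafe_b64encode(
        json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_action(token: str, now: float, used_tokens: set):
    """Returns the claims on success, None on any failure (no state change)."""
    payload_b64, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload_b64.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # invalid signature
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    if now > claims["exp"] or claims["tokenId"] in used_tokens:
        return None  # expired or already used
    used_tokens.add(claims["tokenId"])  # enforce single use
    return claims
```

`hmac.compare_digest` gives constant-time comparison; every rejected attempt would additionally be written to the audit trail per the criterion.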
Idempotency and Concurrency Protection for Action Processing
Given the same action link is clicked multiple times or opened in multiple clients When duplicate or concurrent requests are received within 10 seconds Then the action is applied at most once; subsequent requests return “already processed” And the final item status is consistent and no duplicate notifications are sent And the audit trail records each attempt and flags duplicates, with exactly one state-change event
Push Notification One-Click Actions With Offline Fallback
Given a push notification includes Done and Blocker action buttons for item X When the user taps an action while online and the app is installed Then the action executes in-app and PlanPulse reflects the change within 5 seconds When the app is not installed Then the action completes via a secure web fallback using the signed link When the device is offline at tap time Then the action queues locally and executes once online within 15 minutes with at-most-once semantics And all cases write audit entries including device info and channel=push
Audit Trail Completeness and Access Control
Given any Done or Blocker action is processed Then an immutable audit record is created containing actor user ID and email, project ID, item ID, action type, outcome (success/failure), timestamp (UTC ISO 8601), source channel, message ID, request IP, user agent, token ID, and note (if provided) And audit records are visible in the item activity log to users with Lead or Admin role And audit records are exportable as CSV and JSON for a selectable date range
Follow-Up Rules Triggered by Responses
Given an item is marked Blocked via an action link Then the project lead is notified via email and in-app within 60 seconds with the note (if any) And a follow-up task is created and assigned to the lead with a due date of 09:00 the next business day in the lead's local time And if the item remains Blocked for 48 hours, an escalation notification is sent to the account owner When an item is marked Done Then it is removed from upcoming nudges in the current window and excluded from the next cadence run
Multi‑Channel Delivery with Fallback
"As a project lead, I want reminders delivered on the channels my team uses so that they are seen and acted on promptly."
Description

Delivers nudges via email, in‑app notifications, and optional Slack/MS Teams, with future‑ready hooks for SMS. Channel selection follows user preferences and organization policy, with intelligent fallback if a message bounces or remains unopened. Messages use responsive templates, include localized time and date formats, and track delivery/open/click for reliability metrics. Supports unsubscribe/notification preferences, legal compliance (e.g., CAN‑SPAM/GDPR), and branded sender configuration per workspace.

Acceptance Criteria
Primary Channel Selection by User Preference and Org Policy
Given a recipient with saved channel preferences and an organization policy defining allowed channels And the workspace has optional integrations configured (Slack and/or MS Teams) and SMS is feature-flagged off by default When a nudge is dispatched Then the system selects the first channel in the recipient’s preference order that is allowed by org policy and is technically available (integration connected, consent present) And no message is sent via any channel disallowed by org policy or lacking required consent And SMS is only considered if the SMS feature flag is enabled and the recipient has SMS consent on file And the dispatch log records the selected channel, disallowed channels (with reasons), and feature flags evaluated
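The selection rule above is a filter over the recipient's preference order. A minimal sketch, assuming hypothetical names (`select_channel`, sets of channel strings), that also produces the per-channel rejection reasons the dispatch log needs:

```python
def select_channel(prefs, allowed, available, consents, sms_flag=False):
    """Return (channel, rejections): the first preferred channel that org policy
    allows, that is technically available, and that has consent on file.
    'sms' additionally requires the feature flag."""
    rejections = {}
    for ch in prefs:
        if ch not in allowed:
            rejections[ch] = "disallowed by org policy"
        elif ch == "sms" and not sms_flag:
            rejections[ch] = "sms feature flag off"
        elif ch not in available:
            rejections[ch] = "integration not connected"
        elif ch not in consents:
            rejections[ch] = "no consent on file"
        else:
            return ch, rejections  # dispatch log records both
    return None, rejections
```

Returning the rejection map alongside the chosen channel keeps the audit requirement ("disallowed channels with reasons, feature flags evaluated") a byproduct of selection rather than a second pass.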
Intelligent Fallback on Bounce or Unopened
Given a nudge was dispatched on a primary channel When a hard bounce or explicit delivery failure event is received Then a fallback attempt is sent via the next eligible channel within 2 minutes of the failure event When no open or click event is recorded within the configured fallback delay (default: 4 hours) Then a fallback attempt is sent via the next eligible channel And fallback stops immediately upon first open or click on any channel And a maximum of 2 fallback attempts are made per nudge, with no more than one attempt per channel And duplicate notifications for the same nudge are prevented across channels (idempotency key per nudge-recipient) And all attempts and outcomes (success/failure, timestamps, reason) are recorded in an audit log
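The fallback rules (stop on first engagement, at most two fallback attempts, at most one attempt per channel) fit a small per-nudge state machine. A sketch with hypothetical names:

```python
class NudgeFallback:
    """Tracks fallback state for one nudge-recipient pair: stop on first
    open/click, cap total fallbacks, and never reuse a channel."""

    def __init__(self, primary, eligible, max_fallbacks=2):
        self.tried = [primary]        # channels already attempted
        self.eligible = eligible      # ordered eligible channels
        self.max_fallbacks = max_fallbacks
        self.engaged = False

    def record_event(self, event):
        if event in ("open", "click"):
            self.engaged = True  # fallback stops immediately on any channel

    def next_fallback(self):
        """Return the next channel to try, or None if fallback is exhausted."""
        if self.engaged or len(self.tried) - 1 >= self.max_fallbacks:
            return None
        for ch in self.eligible:
            if ch not in self.tried:  # one attempt per channel
                self.tried.append(ch)
                return ch
        return None
```

The `tried` list doubles as the idempotency record per nudge-recipient; timing (the 2-minute bounce reaction and 4-hour unopened delay) would be driven by the event pipeline that calls `next_fallback`.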
Responsive, Accessible Templates Across Channels
Given the system supports email, in‑app notifications, Slack, and MS Teams When a nudge is rendered in each channel Then email templates render without layout breakage at common widths (≥320px mobile, ≥600px desktop) And in‑app notifications wrap text and actions without overflow at viewport widths from 320px to 1440px And Slack/MS Teams messages use supported markdown/blocks and do not exceed platform field length limits And all templates include alt text for images, maintain ≥4.5:1 text contrast, and expose actionable elements with keyboard focus And subject lines (≤78 chars) and preheaders (≤140 chars) are present for email And templates support dark mode without losing legibility And one‑click action buttons (Done/Blocker) are present and functional in all supported channels
Localized Time and Date Rendering
Given a recipient with a stored locale and time zone When a nudge includes dates/times (e.g., window start/end, due by) Then those values render in the recipient’s locale format (e.g., en‑US: MM/DD/YYYY, en‑GB: DD/MM/YYYY) and time zone And daylight saving transitions are correctly applied for the recipient’s zone And if the recipient’s locale is unavailable, the workspace default locale and time zone are used And machine‑readable timestamps (ISO‑8601 with offset) are included in metadata for tracking And the same localized values appear consistently across email, in‑app, Slack, and MS Teams
Delivery, Open, and Click Tracking with Reliability Metrics
Given a nudge is sent via any supported channel When delivery/open/click events occur (as supported by the channel) Then the system records an event with channel, timestamp (UTC), message ID, recipient ID, nudge ID, and event type And open events detected via privacy proxies (e.g., image proxying) are flagged as proxied and are not used to trigger fallback And click tracking records which action (Done/Blocker/Manage Prefs) was clicked and the channel source And per‑workspace metrics are available: send volume, delivery rate, bounce rate, open rate, click‑through rate, and median delivery latency by channel for selectable time ranges And event ingestion is idempotent (duplicate webhooks do not create duplicate events) And metrics are exportable via API and downloadable CSV
Unsubscribe and Notification Preferences Enforcement
Given per‑user notification preferences exist by nudge type and by channel When a nudge is being dispatched Then the system suppresses sending to any channel the user has unsubscribed from for that nudge type And all email messages contain a working unsubscribe/manage‑preferences link that updates preferences within 60 seconds And Slack/MS Teams messages include a Manage Notifications deep link to the in‑app preferences page And suppressed attempts are logged with suppression reason And re‑subscribe is honored immediately after user opt‑in And double opt‑in is required for SMS if/when enabled
Branded Sender Configuration and Compliance
Given a workspace with branding and sender settings configured When email nudges are sent Then emails use the workspace display name, logo, and color accents and are sent from a verified domain with SPF/DKIM/DMARC passing And each email includes a physical postal address and an unsubscribe link to comply with CAN‑SPAM And GDPR compliance is supported via a link to the privacy notice and honoring data subject preferences When Slack/MS Teams nudges are sent Then messages are delivered only if the workspace app is installed with required scopes and tenant consent; otherwise, dispatch is blocked with a clear error And if branded sender settings are incomplete or unverified, sending is blocked and an actionable configuration error is shown And compliance and sender verification statuses are visible in admin settings
Cadence Configuration and Quiet Hours
"As a project lead, I want to tune reminder timing and quiet hours so that nudges are helpful and non‑disruptive."
Description

Provides project‑level defaults and per‑consultant overrides for timing (offsets for pre‑brief, mid‑window, final‑call), working hours, weekends/holidays, and quiet hours. Includes rules to suppress nudges after recent activity or when all items are complete, plus a preview mode to simulate the next send for verification. Role‑based access controls restrict who can change cadence, and changes are versioned with an audit log. Includes template text editing with variables for personalization.

Acceptance Criteria
Cadence Offsets and Overrides
Given a project review window with start S and end E And project-level defaults are set: Pre-brief = 1440 minutes before S; Mid-window = 50% of (E−S) after S; Final-call = 120 minutes before E And consultant C has an override for Pre-brief = 720 minutes before S and no overrides for other offsets When the scheduler computes planned nudges for all consultants Then C’s Pre-brief is scheduled 720 minutes before S and C’s Mid-window and Final-call use the project defaults And consultants without overrides use all project defaults And only consultants with at least 1 open item at computation time are included And each queued nudge is stamped with the effective settings source (override or default)
Respect Working Hours, Weekends, Holidays, and Quiet Hours
Given consultant C has working hours 09:00–17:00 in their local time zone, weekends excluded, quiet hours 20:00–08:00, and holidays include 2025-12-25 And a nudge for C is scheduled at time T When T falls outside working hours, on a weekend, on a listed holiday, or within quiet hours Then the send time is deferred to the next permissible time within the same review window that respects C’s settings And if no permissible time remains before E, the nudge is suppressed with reason "No permissible send window" And the computed send time uses C’s time zone if set, otherwise the project time zone
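The deferral rule can be sketched as a forward scan for the next permissible slot, suppressing when nothing remains before the window end. This simplified sketch (hypothetical names) models working hours, weekends, and holidays; real quiet-hours and per-consultant time zones would add checks in the same predicate:

```python
from datetime import datetime, timedelta

def defer_send(t, window_end, work_start=9, work_end=17, holidays=(), step_minutes=15):
    """Walk forward in fixed steps until t lands inside working hours on a
    non-holiday weekday; return None if no permissible time remains before
    window_end (suppression reason: 'No permissible send window')."""
    while t < window_end:
        permissible = (
            t.weekday() < 5                              # not a weekend
            and t.date().isoformat() not in holidays     # not a listed holiday
            and work_start <= t.hour < work_end          # inside working hours
        )
        if permissible:
            return t
        t += timedelta(minutes=step_minutes)
    return None
```

A production scheduler would jump directly to the next boundary instead of stepping, but the stepped scan keeps the rule's semantics visible.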
Suppress After Recent Activity or Completion
Given the recent-activity suppression window is 60 minutes And consultant C performed a tracked activity at time A And C has N open items at evaluation time When a nudge is due at time T Then if N = 0 at T, the nudge is suppressed with reason "All items complete" And if T − A ≤ 60 minutes, the nudge is suppressed with reason "Recent activity" And suppressed nudges are logged with timestamp, consultant, reason, and rule identifier
Preview Mode Simulates Next Send
Given a user with Manage Cadence permission opens Preview for consultant C and nudge type X And current time is Now When the user clicks "Preview next send" Then the system displays the computed next send timestamp, target channel, and the resolved message template with variables evaluated for C And any applicable suppression is shown with its reason And no messages are enqueued or sent, and no counters are incremented And if there are unsaved draft changes, the preview uses the draft values; otherwise it uses the currently saved settings
Role-Based Access Control on Cadence Changes
Given only users with the Manage Cadence permission may create or modify cadence defaults, consultant overrides, quiet hours, or templates When a permitted user creates, updates, or deletes any cadence setting Then the operation succeeds and is attributed to that user When a non-permitted user attempts the same Then the operation is blocked, no changes are saved, and the user sees an authorization error
Versioning and Audit Log of Cadence Changes
Given the current cadence configuration is at version V When a user saves changes to any cadence setting or template Then a new version V+1 is created with timestamp, author, and a diff of changed fields And the audit log records project, consultant (if override), old value, new value, and user context And an authorized user can view version history and restore a prior version, which creates version V+2 with restored values
Template Text Editing with Variables
Given a cadence message template supports variables {consultant_name}, {open_item_count}, {window_start}, {window_end}, and {response_link} When a user edits and saves a template containing only supported variables Then validation passes and the template is saved When a template contains an unknown variable token Then validation fails with an error listing the unknown tokens and the template is not saved And in Preview or send computation, supported variables resolve to correct values for the target consultant
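Template validation against the supported variable set is a small token-extraction pass. A sketch (function names hypothetical; the variable list is the one stated above):

```python
import re

SUPPORTED = {"consultant_name", "open_item_count", "window_start", "window_end", "response_link"}

def validate_template(text: str):
    """Return (ok, unknown_tokens): every {token} in the template must be in
    the supported set, otherwise validation fails listing the unknown tokens."""
    tokens = set(re.findall(r"\{(\w+)\}", text))
    unknown = sorted(tokens - SUPPORTED)
    return (not unknown), unknown

def render(text: str, values: dict) -> str:
    """Resolve supported variables to their values for the target consultant."""
    return re.sub(r"\{(\w+)\}", lambda m: str(values[m.group(1)]), text)
```

Because `validate_template` runs at save time, `render` can assume every token resolves, which keeps Preview and send computation from failing mid-dispatch.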
Escalation and Follow‑Up Workflow
"As a project lead, I want automatic escalation on missed deadlines or blockers so that I can intervene before schedules slip further."
Description

Automatically escalates when final‑call passes without required completion or when a Blocker is reported. Sends a summary to the project lead with affected items, owners, and suggested next actions, and can schedule a targeted follow‑up nudge or reassign items per configuration. Supports snooze, acknowledgement, and SLA targets, and records escalation events for reporting. Integrates with PlanPulse approvals to prevent sign‑off until critical items are resolved or explicitly waived.

Acceptance Criteria
Escalation Triggers: Final‑Call Missed or Blocker Reported
Given a project has a configured decision window and final‑call has been sent And one or more required items remain incomplete at final‑call end When the final‑call window expires Then the system creates an escalation incident within 5 minutes And the incident aggregates all affected items, current owners, due dates, and priority And the incident is deduplicated so no duplicate incident is created for the same items within 15 minutes Given an item owner submits a Blocker response from any nudge When the Blocker payload is received Then the system creates an escalation incident within 5 minutes And the incident captures the blocker reason and requested assistance
Lead Summary Notification Content & Routing
Given an escalation incident is created When the summary is dispatched Then the configured project lead(s) receive an in‑app notification and an email And the summary contains: total affected items; each item’s title, current owner, due date, status (open/done/blocker), SLA remaining/breached, and suggested next actions; deep links to items and Resolve/Reassign actions And the email subject follows: "[PlanPulse][Escalation] <Project Name>: <N> items require attention" And only one summary is sent per escalation incident And delivery failures are retried up to 3 times over 10 minutes and surfaced in activity history
Configurable Follow‑Up Nudge & Reassignment Actions
Given an escalation incident exists and follow‑up automation is enabled When the lead schedules a targeted follow‑up nudge Then nudges are queued to affected owners with the selected cadence and include one‑click Done/Blocker controls And scheduled nudges respect owner time‑window preferences and project quiet hours When the lead confirms Reassign for selected items Then ownership transfers to the chosen user And the new owner receives immediate notification and is added to future nudges And the reassignment appears in the item’s history with previous and new owner
Snooze and Acknowledgement Controls
Given an escalation summary is visible to a project lead When the lead selects Snooze and chooses a duration (e.g., 1h, 4h, 1d) Then escalation nudges for the selected items are paused for that duration And the snooze expires at the selected time and nudges resume automatically And the snooze record includes actor, timestamp, and snooze‑until When the lead selects Acknowledge Then the incident is marked acknowledged without suppressing SLA breach handling And acknowledged state is visible in the UI and filterable in reports
SLA Targets, Timers, and Breach Handling
Given a project defines an SLA target for critical items (hours from final‑call) When items are resolved or explicitly waived before the SLA deadline Then no SLA breach is recorded and timers close with time‑to‑resolution captured When the SLA deadline elapses with unresolved/un‑waived critical items Then the system flags an SLA breach on the incident and items And a breach escalation is sent to the project lead and the configured escalation group within 5 minutes And breach indicators appear in the UI and in summary messages And timestamps for time‑to‑breach and time‑to‑resolution are recorded
Escalation Event Logging & Reporting
Given any escalation lifecycle event occurs (created, summary‑sent, snoozed, acknowledged, follow‑up scheduled, reassigned, waived, resolved, SLA‑breached) When querying Escalation Events for a project and date range Then each event appears with: eventId, UTC timestamp, projectId, itemId(s), trigger type, actor, action type, previous→new state, SLA status, notification channels And events are filterable by project, owner, trigger type, and SLA status And events export successfully to CSV with the same fields And metrics show counts by trigger, median time to first response, and mean time to resolution
Approvals Gating with Explicit Waiver
Given a project has open escalated critical items When a user attempts to approve a PlanPulse sign‑off Then the approval action is blocked and the UI lists the blocking items And a user with Waiver permission can waive specific items by selecting a reason and confirming And upon resolution or waiver of all blocking items, approval proceeds successfully And each waiver is recorded with reason, actor, timestamp, and affected items
Nudge Analytics and Optimization
"As a product owner, I want visibility into nudge effectiveness so that I can optimize cadence and demonstrate impact on approval cycle time."
Description

Aggregates metrics such as send volume, delivery rate, open/click rate, response type (Done/Blocker), time‑to‑completion after nudge, and on‑time completion rate by stage and project. Presents dashboards and CSV export, supports cohort comparisons and A/B testing of timing or channel, and recommends optimized cadence settings based on historical performance. All analytics respect user privacy settings and data retention policies.

Acceptance Criteria
Metric Aggregation and Accuracy by Stage and Project
Given 90 days of nudge events across multiple projects and stages When the analytics job runs Then metrics are computed per project and stage: send_volume, delivery_rate = delivered/sent, open_rate = opens/delivered, click_rate = clicks/delivered, response_rate_done = done_responses/delivered, response_rate_blocker = blocker_responses/delivered, median_time_to_completion_after_nudge, and on_time_completion_rate = on_time_completions/total_due Given events include timezone-aware timestamps When metrics are rendered Then calculations use UTC timestamps and date filters apply in the workspace time zone; displayed timestamps reflect the workspace time zone Given some events may be retried or duplicated When aggregations are computed Then deduplication by immutable event_id ensures counts are correct Given a stage or project has zero sent nudges in the filter range When rates are displayed Then rate fields show "—" and no division-by-zero errors occur Given data ingestion may be delayed up to 10 minutes When the dashboard loads Then metrics include data up to the last completed 5-minute window and show a "Last updated" timestamp Given a user has access to a subset of projects When viewing metrics Then only authorized projects and stages are included in aggregates and totals
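The rate formulas, event deduplication, and zero-denominator handling above can be sketched together. Names and the event shape are hypothetical:

```python
def compute_rates(events):
    """Aggregate metrics from raw events, deduplicating by immutable event_id;
    rates with a zero denominator render as an em-free dash placeholder."""
    seen = set()
    counts = {"sent": 0, "delivered": 0, "open": 0, "click": 0}
    for e in events:
        if e["event_id"] in seen:
            continue  # retried/duplicated webhook: count once
        seen.add(e["event_id"])
        counts[e["type"]] += 1

    def rate(num, den):
        return "—" if den == 0 else num / den  # no division-by-zero errors

    return {
        "send_volume": counts["sent"],
        "delivery_rate": rate(counts["delivered"], counts["sent"]),
        "open_rate": rate(counts["open"], counts["delivered"]),
        "click_rate": rate(counts["click"], counts["delivered"]),
    }
```

The same dedup-then-aggregate shape extends to the response-rate and time-to-completion metrics; grouping by project and stage would wrap this in a group-by over the event stream.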
Interactive Analytics Dashboard Filtering and Drill-Down
Given filters for date range, project, stage, consultant, channel, and message type When filters are applied Then all charts and tiles update within 2 seconds for datasets up to 50,000 events and totals remain consistent across components Given a data point in a chart is clicked When drill-down is requested Then a table shows contributing records (nudge_id, timestamp, project, stage, channel, response_type) up to 500 rows with pagination and the same filters applied Given a user switches Group by between project, stage, channel, and cohort When the selection changes Then the visualizations re-group accordingly and grand totals remain unchanged Given the dashboard is viewed on a 1280x800 display When loaded Then the layout is responsive without horizontal scrolling and tooltips display metric definitions on hover Given a user lacks access to certain projects When those projects are included in filters Then the filter options are disabled or the results exclude them with a notice indicating restricted items were omitted
CSV Export with Filters and Retention Compliance
Given current dashboard filters are set When the user clicks Export CSV Then the exported file contains only records and aggregates matching the filters and the current grouping Given an export exceeds 100,000 rows When initiated Then the export runs asynchronously and completes within 5 minutes for up to 500,000 rows, notifying the user upon completion with a secure download link that expires in 24 hours Given CSV formatting requirements When the file is generated Then headers use snake_case, timestamps are ISO 8601 with timezone offset, numbers use dot as decimal separator, delimiter is comma, text fields are quoted when containing commas, and line endings are LF Given a 180-day event-level data retention policy When exporting Then rows older than 180 days are excluded and aggregates cover only periods within retention Given privacy settings exclude certain users or projects from analytics and require PII redaction When exporting Then excluded records are omitted and identifiers are hashed; no message content appears in the file Given a user without export permission attempts to export When they click Export Then a permission error is shown and no file is generated
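Most of the CSV formatting contract (comma delimiter, quoting text fields containing commas, LF line endings, snake_case headers) falls out of the standard `csv` module with explicit settings. A sketch, with hypothetical field names:

```python
import csv
import io

def export_rows(rows, fieldnames):
    """Write rows per the export contract: snake_case headers, comma delimiter,
    LF line endings; QUOTE_MINIMAL quotes any field containing a comma."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames, lineterminator="\n")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Retention and privacy filtering (dropping rows older than 180 days, hashing identifiers) would happen on `rows` before they reach the writer, so the formatting layer stays policy-free.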
Cohort Comparison and Lift Calculation
Given two mutually exclusive cohorts defined by filters (e.g., channel=email vs channel=SMS) When Compare is activated Then the UI displays side-by-side metrics and computes absolute difference and relative lift for the selected primary metric Given either cohort has fewer than 200 delivered nudges When showing lift Then a "Low sample" warning appears and lift is hidden until both cohorts have n ≥ 200 Given cohort definitions overlap When the comparison is attempted Then the system blocks the comparison and prompts the user to adjust filters to be mutually exclusive Given the date range filter changes When applied Then cohort metrics and lift recompute and the baseline cohort selection persists Given the user resets filters When Reset is clicked Then comparison mode exits and the dashboard returns to default totals
A/B Test Setup and Attribution for Nudge Timing/Channel
Given a user creates an experiment specifying objective (e.g., on_time_completion_rate), factor (timing or channel), 2–4 variants, and traffic split When the experiment is started Then eligible nudges are randomly and deterministically assigned at the consultant level to prevent cross-over, with assignment logged per nudge_id Given the user enters baseline rate and minimum detectable lift When the sample size calculator runs Then the required sample per variant is shown and Start is disabled until target per-variant sample size is met Given the experiment is running When viewing results Then per-variant metrics, confidence intervals, and a winner flag appear only when p < 0.05 (two-sided) and each variant meets the required sample size Given the experiment is paused or ended When new nudges are created Then no further traffic is enrolled, and historical assignment logs are retained for 365 days or per retention settings Given a user lacks experiment permissions When accessing Experiments Then create/edit actions are hidden and results are view-only
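Deterministic consultant-level assignment is commonly done by hashing the experiment and consultant IDs into a stable bucket, so the same consultant always lands in the same variant and cross-over is impossible. A sketch with hypothetical names:

```python
import hashlib

def assign_variant(experiment_id, consultant_id, variants, weights=None):
    """Deterministically map a consultant to a variant via a stable hash bucket;
    weights (summing to 1.0) implement the traffic split."""
    h = hashlib.sha256(f"{experiment_id}:{consultant_id}".encode()).hexdigest()
    bucket = int(h, 16) % 10000 / 10000  # stable value in [0, 1)
    weights = weights or [1 / len(variants)] * len(variants)
    cumulative = 0.0
    for variant, w in zip(variants, weights):
        cumulative += w
        if bucket < cumulative:
            return variant
    return variants[-1]  # guard against floating-point rounding
```

Keying the hash on `experiment_id` as well makes assignments independent across experiments, while each nudge still logs the assignment per `nudge_id` as the criteria require.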
Automated Cadence Recommendations
Given at least 90 days of history and ≥ 1,000 delivered nudges per stage When generating recommendations Then the system proposes pre-brief, mid-window, and final-call timing and channel with expected lift and confidence interval versus current settings Given data per stage is below the threshold or impacted by privacy exclusions When generating recommendations Then the system returns "Insufficient data" for that stage and provides no recommendation Given a recommendation is displayed When Apply is clicked Then the proposed cadence is saved to the project template, versioned, and an activity log records who applied it, when, and the supporting metrics snapshot Given recommendations are shown When viewed by any role Then they never include message content and only reference aggregate metrics and timing windows; excluded users/projects are not considered Given a recommendation has been applied When viewing analytics later Then pre/post performance is tracked and labeled to attribute changes to the applied recommendation
Privacy, Permissions, and Data Retention Enforcement in Analytics
Given a user or project is marked Exclude from analytics When aggregates and exports are computed Then events from those entities are omitted from counts and rates Given event-level retention is 180 days When computing analytics or exports Then events older than 180 days are not used; if historical summaries are permitted, only pre-aggregated rollups are used with their retention labels Given role-based access controls When a Viewer accesses analytics Then they can view dashboards but cannot export CSV, configure experiments, or apply recommendations; Admins can perform all actions; Editors can export and run comparisons but cannot change retention/privacy settings Given PII fields (names, emails, phone numbers) When rendering analytics or generating exports Then PII is masked or hashed and message bodies are never stored or exported Given an audit trail is requested by an Admin When exporting the audit log Then the system returns access and export events for the past 365 days including user_id, timestamp, action, and object

Window Briefs

Auto‑assembled, discipline‑specific packets that pair visual diffs with scope tags, relevant sheets, and impact highlights. Consultants land on the exact changes that matter to them, speeding focused reviews and minimizing misinterpretation.

Requirements

Visual Diff Generation
"As a project architect, I want accurate visual diffs for selected versions so that consultants instantly see exactly what changed without downloading full sets or misreading minor updates."
Description

Generate high-fidelity visual diffs between selected drawing versions, normalizing scale, rotation, and sheet alignment to output color-coded overlays that clearly depict additions, deletions, and modifications. Support both vector and raster sources with fallbacks for scanned sheets, and preserve line weights and layers to keep discipline-relevant details legible. Expose a service endpoint and cache layer so Window Briefs can request thumbnails and full-resolution diffs on demand with predictable performance. Include configurable diff sensitivity to reduce noise from minor graphic artifacts. Integrate with PlanPulse’s version history and markup system to anchor diffs to specific commits and comments.

Acceptance Criteria
Alignment-Normalized Overlay between Versions
Given two versions of the same sheet with differing scale and rotation, when a diff is generated, then the system normalizes scale within ±0.5% and rotation within ±0.1° and aligns geometry with mean alignment error ≤ 2 px across ≥ 20 control points. Given misaligned scans with margin differences up to 3%, when a diff is generated, then automatic registration compensates and P95 alignment error remains ≤ 3 px. Given a user pans/zooms the overlay, when toggling diff visibility, then the registered alignment does not drift more than 1 px between redraws.
Color-Coded Add/Delete/Modify Classification
Given a generated diff, when classification runs, then additions are labeled Add, deletions Delete, and geometry changed in-place Modify with F1-score ≥ 0.95 on the curated test set. Given the overlay is displayed, when the legend is shown, then each class has a distinct, consistent color and the user can toggle each class on/off and the setting persists for the session. Given light and dark canvas backgrounds, when the overlay renders, then all class colors meet contrast ratio ≥ 4.5:1 against both backgrounds.
Mixed Source Support with Scanned Fallback
Given vector-to-vector PDFs, when a diff is generated, then original layer names are preserved and line weights deviate ≤ 10% from source at 100% zoom. Given vector-to-raster or raster-to-raster sources at ≥ 150 dpi, when a diff is generated, then a raster pipeline is used and unchanged areas achieve SSIM ≥ 0.95 to the source. Given a skewed scan up to 5°, when a diff is generated, then deskew and contrast normalization are applied and overlay alignment P95 error ≤ 3 px.
Diff Sensitivity Controls
Given sensitivity set to Low, when a diff is generated, then isolated artifacts with width < 3 px or area < 20 px² are ignored. Given sensitivity set to Medium (default), when a diff is generated, then artifacts < 2 px or area < 10 px² are ignored while 0.35 pt line changes are detected. Given sensitivity set to High, when a diff is generated, then line changes down to 0.25 pt and 1 px shifts are detected. Given a user changes sensitivity, when re-rendering, then cached re-render completes ≤ 2 s P95 and cold regeneration ≤ 10 s P95.
On-Demand API and Cache for Window Briefs
Given POST /api/diffs with {sheetId, fromVersionId, toVersionId, sensitivity, sizes:[thumbnail,full]}, when no cached diff exists, then respond 202 Accepted with diffId; when cached, respond 200 with URLs. Given GET /api/diffs/{diffId}, when status is ready, then return {status:ready, thumbnailUrl, fullUrl, fromCommitId, toCommitId, sensitivity, createdAt, ttl}. Given a warm cache, when requesting thumbnail, then P95 latency ≤ 500 ms; for full-res, P95 ≤ 2000 ms; cold generation P95 ≤ 15 s per sheet at A1@300 dpi equivalent; hourly 5xx error rate ≤ 0.5%. Given responses are returned, then they include ETag and Cache-Control headers; cache invalidates upon a new commit affecting the sheet.
Version History and Markup Anchoring
Given two selected versions from version history, when a diff is created, then metadata includes fromCommitId, toCommitId, and a deterministic diffKey = hash(sheetId, from, to, sensitivity). Given an existing markup comment on the fromVersion, when viewing the diff, then the comment pin appears at the correct registered coordinates and links to the original thread. Given a user opens a diff from a commit view, when navigating back, then links correctly open both commits; the diff remains accessible even after newer commits are added.
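The deterministic `diffKey` above can be sketched as a hash over the canonical inputs; this assumes SHA-256 and `|`-joined fields, which the spec does not mandate:

```python
import hashlib

def diff_key(sheet_id: str, from_version_id: str, to_version_id: str, sensitivity: str) -> str:
    """Deterministic cache key: identical (sheet, from, to, sensitivity) inputs
    always map to the same diff, so repeat requests hit the cache."""
    raw = "|".join([sheet_id, from_version_id, to_version_id, sensitivity])
    return hashlib.sha256(raw.encode()).hexdigest()
```

Because the key encodes both commit IDs and the sensitivity, a cached diff stays valid even after newer commits land; cache invalidation only needs to target keys whose sheet gained a new commit.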
Thumbnail and Full-Res Output Legibility
Given a generated thumbnail (max dimension 512 px), when viewed, then major changes that are ≥ 10 px at full-res are visually discernible and class toggles remain functional. Given full-resolution output, when inspected at 100% zoom, then line weights are preserved within ±10%, text ≥ 8 pt is legible, and unchanged areas exhibit PSNR ≥ 35 dB with no visible banding. Given export options, when the user selects background, then transparent and white backgrounds are supported; thumbnail size ≤ 300 KB; full-res raster ≤ 25 MB; vector output ≤ 10 MB.
Scope Tagging & Discipline Mapping
"As a discipline lead, I want changes auto-tagged to my scope so that I only receive packets with items that actually affect my work."
Description

Automatically classify changes into scope tags and map them to disciplines (e.g., Structural, MEP, Interiors) using a hybrid rules-and-ML approach that leverages layer names, title block metadata, sheet indices, and tagged markups. Provide an admin-managed mapping table and discipline profiles to customize tag-to-discipline routing per firm or project. Allow manual overrides in the UI to correct edge cases and improve future classification via feedback. Persist tags at the change cluster level so packets remain consistent across exports and notifications. Ensure the tagging service is idempotent and re-runnable when new versions arrive or mappings change.

Acceptance Criteria
Hybrid Auto-Classification from Drawing Metadata and Markups
Given change clusters with available layer names, title block metadata, sheet indices, and tagged markups When the tagging service runs Then each cluster is assigned one or more scope tags or marked Needs Review if confidence is below threshold And the service records which signals contributed to the decision for each cluster And ties are resolved by precedence: explicit rules overrule ML scores; otherwise highest-confidence ML prediction wins And on a validation set of at least 200 labeled clusters, micro-precision is >= 0.85 and recall is >= 0.80 And if one or more signals are missing, the service still classifies using remaining signals without error
Admin Mapping Table and Discipline Profiles Configuration
Given an Admin user When they create, update, or delete tags, disciplines, or tag-to-discipline routes at firm or project scope Then changes are validated (no duplicates, references exist), versioned, and audit logged with actor and timestamp And project-level mappings override firm-level mappings on conflicts; non-overridden entries inherit from firm level And only Admins can modify mappings; non-admins receive a permission error And upon saving mapping changes, the system triggers a re-tag of only affected projects and updates impacted clusters within 2 minutes
Manual Overrides with Feedback Capture
Given an authorized Project Lead or Admin viewing a change cluster When they manually edit the cluster’s scope tags and/or mapped disciplines in the UI Then the override persists on that cluster and is used for exports and notifications And the override survives subsequent automated re-runs unless explicitly cleared And the action is audit logged (before/after values, user, timestamp) And a feedback record is created within 5 seconds capturing features and the corrected label for future model training And users can revert to system classification, restoring the last non-overridden state
Cluster-Level Tag Persistence Across Exports and Notifications
Given a cluster with persisted tags and discipline mappings When a Window Brief is exported or a notification is sent Then the payload includes the persisted tags and mapped disciplines, matching what is displayed in the UI And re-exporting without changes produces identical tag/discipline values in JSON and PDF annotations And if a cluster is split, children inherit the parent’s tags and are flagged Needs Review And if clusters are merged, the resulting cluster receives the union of parent tags and is flagged Needs Review And cluster identifiers remain stable across re-runs unless a merge/split occurs, in which case new IDs are generated and linked in the audit trail
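The split/merge inheritance rules above reduce to two small operations: children of a split copy the parent's tags, and a merge takes the union of its parents' tags, with both outcomes flagged Needs Review. A minimal sketch, with illustrative function names and dict shapes:

```python
def split_cluster(parent_tags, n_children):
    # Each child inherits the full parent tag set and is flagged for review.
    return [{"tags": set(parent_tags), "needs_review": True}
            for _ in range(n_children)]

def merge_clusters(tag_sets):
    # The merged cluster receives the union of all parent tags.
    merged = set()
    for tags in tag_sets:
        merged |= set(tags)
    return {"tags": merged, "needs_review": True}
```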
Idempotent, Re-runnable Tagging Pipeline on New Versions and Mapping Changes
Given identical inputs (drawings, mappings, and overrides) When the tagging pipeline runs multiple times Then outputs are identical and no duplicate audit entries or notifications are produced And when a new drawing version arrives, only clusters impacted by detected changes are re-evaluated; others remain unchanged And when mappings change, only clusters whose routing is affected are re-evaluated and updated And concurrent runs for the same project are serialized or deduplicated so final state is consistent And each run is traceable with a run ID, inputs fingerprint, start/end time, and outcome status
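One way to realize the "inputs fingerprint" is to hash a canonical serialization of the run's inputs, so identical drawings, mappings, and overrides always produce the same fingerprint and duplicate runs can be detected. A sketch, assuming the inputs are JSON-serializable; field names are hypothetical:

```python
import hashlib
import json

def inputs_fingerprint(drawings, mappings, overrides):
    # sort_keys + fixed separators make the serialization canonical,
    # so key order in the input dicts does not change the hash
    canonical = json.dumps(
        {"drawings": drawings, "mappings": mappings, "overrides": overrides},
        sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

A run record would then store this fingerprint alongside the run ID, start/end time, and outcome status.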
Discipline-Based Routing for Window Briefs
Given discipline profiles and tag-to-discipline mappings are configured When generating a Window Brief for a specific discipline (e.g., Structural) Then the packet includes only clusters mapped to that discipline And clusters mapped to multiple disciplines appear once in each relevant packet without duplication within a packet And clusters with no discipline mapping are placed in an Unassigned queue and excluded from discipline packets And notifications respect discipline-specific recipient lists so each consultant only receives relevant changes
Relevant Sheet Aggregation
"As a structural consultant, I want a packet that includes only impacted sheets and necessary context so that I can review efficiently without hunting through the full set."
Description

Assemble a minimal set of affected sheets for each Window Brief by tracing change clusters back to their source sheets and related details, while pulling in one level of contextual sheets (e.g., plans referenced by elevations) to reduce back-and-forth. Maintain original sheet numbering and titles, and include deep links to open the exact viewport location within the sheet. Handle cross-file scenarios where changes span multiple linked models, ensuring deduplication and consistent ordering. Provide pagination and lazy loading for large packets to keep the web view responsive.

Acceptance Criteria
Minimal Affected Sheets Aggregation
Given a Window Brief is generated from a set of change clusters When tracing each cluster to its source sheet identifiers Then include each unique source sheet that contains at least one changed element And exclude sheets with no changed elements And the total included source sheets equals the count of unique source sheet identifiers referenced by the clusters
One-Level Contextual Sheets Inclusion
Given source sheets in the aggregation reference other sheets via callouts, section/elevation markers, or view references When contextual sheets are added Then include exactly one level of referenced sheets for context And do not include references beyond one level And do not include a contextual sheet if it is already included as a source sheet
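The one-level rule can be sketched as a single pass over the source sheets' references, never following a contextual sheet's own references. The `references` mapping shape (sheet id to referenced sheet ids) is an assumption:

```python
def aggregate_sheets(source_sheets, references):
    sources = set(source_sheets)
    context = set()
    for sheet in sources:
        for ref in references.get(sheet, ()):  # exactly one level
            if ref not in sources:             # already a source -> not context
                context.add(ref)
    return sources, context  # references of context sheets are never followed
```

With `references = {"A101": ["A201"], "A201": ["A301"]}` and source `{"A101"}`, only `A201` is pulled in as context; `A301` stays out because it is two levels removed.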
Original Sheet Number and Title Preservation
Given aggregated sheets from one or more files When the Window Brief is rendered Then display each sheet’s number and title exactly as stored in the source file metadata And do not alter numbering, prefixes, or title casing And preserve any discipline code prefixes present in the sheet number
Deep Link to Exact Viewport Location
Given an aggregated sheet that contains one or more change clusters When a user activates the deep link for a specific cluster Then the sheet viewer opens with the cluster’s bounding box centered within 10% of the viewport center And the cluster’s bounding box occupies between 30% and 90% of the viewport area And the deep link lands within the correct viewport on multi-viewport sheets
Cross-File Aggregation with Dedup and Stable Ordering
Given changes span sheets across multiple linked files When the aggregation is built Then deduplicate sheets using a composite unique key of file_id + sheet_id And order the final list stably by discipline code (ascending), then natural sheet number (ascending), then sheet title (ascending), then file_id (ascending) And repeated runs with identical inputs produce identical ordering
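The dedup key and stable ordering above can be expressed directly. The natural-number split (so A-2 sorts before A-10) is an implementation assumption, as are the dict field names:

```python
import re

def _natural(sheet_number):
    # "A-102" -> ("A-", 102, "") so numeric parts compare as integers
    parts = re.split(r"(\d+)", sheet_number)
    return tuple(int(p) if p.isdigit() else p for p in parts)

def order_sheets(sheets):
    # dedup on the composite key file_id + sheet_id
    unique = {(s["file_id"], s["sheet_id"]): s for s in sheets}
    # deterministic sort: discipline, natural number, title, file_id
    return sorted(unique.values(),
                  key=lambda s: (s["discipline"], _natural(s["number"]),
                                 s["title"], s["file_id"]))
```

Because the sort key is a pure function of the sheet data, repeated runs over identical inputs produce identical ordering, as the criterion requires.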
Pagination and Lazy Loading Performance
Given the aggregated list contains more than 50 sheets When the Window Brief is opened Then render the first page within 2 seconds on a 50 Mbps connection for sheet thumbnails up to 200 KB each And load sheets in pages of 20 items with lazy loading of thumbnails and metadata when items enter the viewport And prefetch of the next page starts when scroll position is within one viewport height of the list end, and the next page’s first thumbnails appear within 300 ms after crossing the threshold
Source vs Contextual Deduplication
Given a sheet qualifies both as a source sheet and as a contextual sheet via references When assembling the final list Then include that sheet only once in the aggregation And position it according to the global stable sort rules And ensure no duplicate entries appear across pages
Impact Highlights Computation
"As a consultant PM, I want a quick impact summary on each change so that I can prioritize my team’s review and avoid surprises downstream."
Description

Compute concise impact summaries per change cluster, including affected rooms/zones, related systems, quantity deltas (e.g., window count, size changes), and dependency flags (e.g., fire rating, egress implications). Assign an impact score and category (minor, moderate, major) using configurable rules to help recipients triage. Surface highlights inline with each diff and roll them up into a packet summary section. Integrate with scope tags, markups, and project metadata to increase accuracy and avoid false alarms. Provide transparent reasoning snippets so reviewers understand why a change was marked high impact.

Acceptance Criteria
Accurate Rooms/Zones and Quantity Deltas Extraction
Given a change cluster affecting windows in Rooms R101 and R102 and Zone Z1, when impact highlights are computed, then affected_rooms = [R101, R102], affected_zones = [Z1], and no other rooms/zones are listed. Given the cluster includes 2 added, 1 removed, and 3 resized windows, when quantity deltas are computed, then windows_added = 2, windows_removed = 1, windows_resized = 3, and total_size_delta_m2 is reported with precision ±0.01 m². Given a change cluster has no geometric overlap with any room/zone boundary, when computation runs, then affected_rooms and affected_zones are empty arrays. Given room and level metadata exist for multi-level plans, when computation runs, then affected_rooms include level-qualified identifiers (e.g., R201@L2) where applicable.
Related Systems and Dependency Flags Identification
Given a changed window is in a wall tagged fire-rated, when dependency flags are computed, then fire_rating_flag = true and the highlight lists the impacted fire safety system. Given a configured rule "egress_area_negative_delta triggers egress_flag", when a bedroom window net egress area delta < 0 is detected for the cluster, then egress_flag = true and the related system includes Life Safety. Given no scope tags or metadata indicate MEP, structural, or life-safety relevance, when computation runs, then related_systems contains only Facade/Architectural and no dependency flags are set. Given multiple systems are implicated by tags (e.g., structural lintel + fire-rated wall), when computation runs, then related_systems includes both Structural and Life Safety and both flags are present without duplication.
Configurable Impact Scoring and Categorization
Given a ruleset assigns points (e.g., +5 per egress_flag, +3 per fire_rating_flag, +1 per window resized), when a cluster has egress_flag=true, fire_rating_flag=false, and 2 resized windows, then impact_score = 5 + 2*1 = 7 and category = "moderate" per thresholds [0-3 minor, 4-7 moderate, 8+ major]. Given an admin updates the category thresholds to [0-4 minor, 5-9 moderate, 10+ major], when the same cluster is recomputed, then the impact_score remains 7 and the category recalculates to "moderate" under the new thresholds. Given no custom ruleset is configured, when computation runs, then the default ruleset is applied and the applied_ruleset_id is recorded in the result. Given two categories are tied by boundary rounding, when computation runs, then the higher category is chosen (major > moderate > minor) and the tie-break rule name is included in reasoning.
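The worked example above (egress flag plus two resized windows scoring 7, "moderate" under both threshold sets) can be traced in a small sketch. The ruleset shape is illustrative, not a defined schema:

```python
DEFAULT_RULESET = {
    "points": {"egress_flag": 5, "fire_rating_flag": 3, "window_resized": 1},
    "thresholds": [(0, 3, "minor"), (4, 7, "moderate"), (8, None, "major")],
}

def impact(cluster, ruleset=DEFAULT_RULESET):
    pts = ruleset["points"]
    score = (pts["egress_flag"] * bool(cluster.get("egress_flag"))
             + pts["fire_rating_flag"] * bool(cluster.get("fire_rating_flag"))
             + pts["window_resized"] * cluster.get("windows_resized", 0))
    for lo, hi, cat in ruleset["thresholds"]:
        if score >= lo and (hi is None or score <= hi):
            return score, cat
    return score, "minor"
```

Recomputing the same cluster under admin-edited thresholds changes only the category lookup, never the score, which mirrors the second criterion.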
Inline Diff Highlights and Packet Roll-up
Given a Window Brief contains 4 visual diffs, when the brief is rendered, then each diff panel displays an inline Impact Highlights block with fields: affected_rooms, related_systems, quantity_deltas, dependency_flags, impact_score, and category. Given multiple diffs affect the same room and flag, when the packet summary is generated, then the summary de-duplicates rooms/flags and displays aggregate counts (e.g., Room R101 affected in 3 diffs) and the highest category across those diffs. Given a diff has no computed impacts, when rendered, then the inline Impact Highlights block shows "No material impact detected" and the diff is excluded from the summary roll-up. Given sort preference = "highest impact first", when the packet summary is rendered, then items are ordered by category major > moderate > minor, then by descending impact_score, producing deterministic order.
Scope-Aware Filtering to Reduce False Alarms
Given the recipient consultant scope tags = [Structural], when a Window Brief is generated, then only highlights with related_systems including Structural are included; others are hidden from their packet but remain in the global record. Given a markup is tagged exploratory = true and ready_for_review = false, when computing impacts for packets, then changes from that markup are excluded from consultant-facing highlights. Given project metadata excludes Level L3 from current issue set, when impacts are computed, then affected_rooms on L3 are not surfaced in the packet summary and are marked as out-of-scope in internal logs. Given scope tags are updated, when a packet is regenerated, then included/excluded highlights reflect the new scope without stale entries.
Transparent Reasoning Snippets and Rule Trace
Given a cluster is categorized as major, when highlights are generated, then a reasoning_snippet is included with: (a) the primary facts (e.g., counts, rooms), (b) the top contributing rule names, and (c) referenced metadata keys; total length ≤ 240 characters. Given a user expands details on a highlight, when rule trace is requested, then the system returns the list of fired rules in order of contribution with their input facts and partial scores. Given no flags are set but quantity deltas exist, when reasoning is generated, then the snippet explains the category outcome (e.g., "2 resized windows; no dependency flags; score=2 → minor"). Given exported packet JSON is generated, when inspecting the highlight entry, then fields reasoning_snippet and fired_rules are present and non-empty for computed highlights.
Packet Composer & Templates
"As a project lead, I want to generate a consistent, branded packet for each discipline so that reviews are standardized and easy to consume."
Description

Compile Window Briefs into a structured, shareable artifact with a cover summary, discipline-scoped change list, relevant sheets, visual diffs, and impact highlights. Support per-discipline templates that control section order, fields, and branding, with the ability to save presets at the project level. Offer both interactive web views and exportable PDFs with stable anchors for referencing in emails or RFIs. Ensure all components are dynamically generated from source data so packets stay in sync when versions update, with change logs indicating what was added since the last brief. Provide accessibility-friendly layouts and keyboard navigation.

Acceptance Criteria
Compose Packet with Required Sections and Discipline Scope
Given a project with at least one published version and scoped changes for Discipline X When a user composes a Window Brief using the Discipline X template Then the packet contains a cover summary, a discipline-scoped change list, relevant sheets, visual diffs for each changed item, and impact highlights And then only items tagged with Discipline X appear in the change list And then each change list item links to its corresponding visual diff and relevant sheet section within the packet And then the cover summary includes project name, version range, discipline, packet timestamp, and author And then each change item displays impact highlights with at least scope tags and affected sheets; optional cost/schedule fields appear only if populated
Create and Apply Per-Discipline Templates with Branding and Section Order
Given a user with Template Admin permissions When they create or edit a template Then they can configure section order, visible fields per section, and upload branding assets (logo, primary color, secondary color) And then validation prevents saving a template missing required sections (cover summary and change list) And then the template can be saved as a project-level preset and set as default for a selected discipline And when a composer uses that discipline, then the default preset auto-applies; users without Template Admin cannot alter the template definition but can choose among allowed presets
Interactive Web View and PDF Export Parity
Given a composed packet When opened in the interactive web view and exported to PDF Then section order, numbering, visible fields, and change counts match exactly between formats And then section and item anchors are present in web (fragment IDs) and in PDF (bookmarks and clickable intra-document links) And then PDF export completes within 15 seconds for packets up to 150 changes and 50 sheets at ≥150 DPI for rasterized diffs, with a file size ≤50 MB And then all images and diffs render without placeholders or broken links in both formats
Dynamic Sync on Version Updates with Change Log
Given an existing packet composed for versions A..B When a new version C is published Then the packet indicates it is out of date and offers one-click refresh And when refreshed, then all sections re-generate from source data and reflect version C And then the change log enumerates items added since B with ID, timestamp, author, and brief summary And then anchors for unchanged items remain stable; removed items are listed in the change log as removed with a reason if available
Stable Anchors for Email/RFI Referencing
Given a change item with anchor ID When the packet is re-generated without the item's content changing Then the anchor ID and deep link remain unchanged And when the item is modified, then the anchor ID persists while its version metadata increments And when the item is deleted, then the prior deep link returns a 410 page with a pointer to the parent section And then anchor IDs match the pattern <discipline>-<changeId> and are unique within the project
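Anchor stability follows from deriving the ID only from the discipline and the change ID, never from content or generation time. A sketch of one plausible derivation (the slugging rule is an assumption; the `<discipline>-<changeId>` pattern is from the spec):

```python
import re

def anchor_id(discipline, change_id):
    # lowercase the discipline and collapse non-alphanumerics into hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", discipline.lower()).strip("-")
    return f"{slug}-{change_id}"
```

Regenerating a packet calls this with the same inputs and therefore yields the same anchor; content edits would bump a separate version field instead.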
Accessibility and Keyboard Navigation
Given the web packet When audited with axe-core 4.x default rules Then there are zero critical and zero serious violations, and no more than five minor violations And then all interactive controls are reachable via keyboard with visible focus states; tab order follows reading order; a skip-to-content control is present And then all non-decorative images and visual diffs have meaningful alt text or accessible names; decorative images are aria-hidden And then the exported PDF is tagged, has correct reading order, and passes PAC 2024 with no critical errors
Discipline-Scoped Filtering Accuracy
Given a test project with ground-truth discipline tags for changes When composing a packet for the MEP discipline Then the set of included changes equals the ground-truth MEP set with 100% recall and 100% precision (no extras, no misses) And then multi-tagged items that include MEP appear once and display all associated tags And then the change count in the cover summary equals the number of items in the change list And then items with tags outside MEP do not appear
Targeted Distribution & Access Control
"As a BIM coordinator, I want to share packets securely with the correct consultants so that reviews happen quickly without exposing unrelated project information."
Description

Deliver Window Briefs directly to the right reviewers via discipline groups, contact lists, or external emails, with expiring secure links and optional SSO for repeat collaborators. Enforce view permissions so recipients only see the sheets and comments they are authorized to access, with audit logs for compliance. Send configurable notifications and reminders and allow recipients to subscribe or opt out per discipline. Provide a share dialog that previews the recipient list and permissions before sending. Log delivery outcomes and bounce handling to ensure reliable outreach to external consultants.

Acceptance Criteria
Share to Discipline Groups, Contact Lists, and External Emails with Preview
Given a Window Brief is open and the sender selects discipline groups MEP and Structural, the contact list "Phase 2 Consultants", and adds external email "peer@vendor.com" When the Share dialog is opened Then it displays a deduplicated recipient list with a total count, showing for each recipient: name/email, source (group/list/manual), discipline tag(s), and effective permission level And invalid or malformed emails are flagged inline with a reason and excluded from the send count And the Send button is disabled until at least one valid recipient with at least one permission is present When the sender clicks Send Then a unique access link is generated per recipient and invitations are queued within 5 seconds And a Share activity record is created capturing the final recipient list, disciplines, and initial delivery status "Queued"
Expiring, Recipient-Bound Secure Links and Revocation
Given the sender sets link expiry to 7 days for the distribution When a recipient opens their link within 7 days Then the brief loads with HTTPS and the access is granted for that recipient When the same link is opened after 7 days Then the system returns an "Expired link" page and denies access (HTTP 410) and logs the attempt When the sender revokes access for a specific recipient from the Share activity Then that recipient’s token is immediately invalidated and subsequent requests with that token are denied (HTTP 401/403) and logged And regenerated links produce new tokens and do not re-enable any revoked token And each token is unique per recipient and cannot be used to access content for any other recipient
Optional SSO Enforcement for Repeat Collaborators
Given the organization enables "Require SSO" for collaborator domain example-partner.com When recipient user@example-partner.com opens their invitation link Then they are required to authenticate via the configured SSO provider and, upon success, gain access to the brief When SSO enforcement is disabled for that domain Then the same recipient may access using the secure invitation link without additional sign-in and their identity is associated to the invitation And all access attempts record the IdP (for SSO) or link verification (for magic link) in audit logs
Scope-Limited View Permissions for Sheets and Comments
Given a Window Brief includes sheets A, B, C and comments 1–10 And recipient R1 is authorized for Structural scope with access to sheets A and B and comments 1–4 only When R1 opens the brief Then only sheets A and B and comments 1–4 are visible and navigable in UI, search, and exports And attempts to access sheet C or comments 5–10 via deep link return HTTP 403 and are logged And any cross-navigation (e.g., next/prev) skips unauthorized content without revealing titles or metadata
Configurable Notifications, Reminders, and Per-Discipline Subscription Preferences
Given the sender configures notifications as: initial email at send time and a reminder after 72 hours for recipients who have not accessed the brief When the brief is sent Then initial notifications are sent to all valid recipients and reminders are only sent to those who have not accessed by 72 hours And each notification includes an unsubscribe/manage-preferences link When a recipient opts out of a discipline (e.g., MEP) via preferences Then they no longer receive notifications for future briefs tagged with MEP while still receiving other disciplines they remain subscribed to And subscription changes take effect immediately and are recorded in audit logs
Audit Logging for Share, Access, and Permission Events
Given an organization admin views audit logs for a Window Brief Then the log includes entries for: share created/updated, delivery status changes, access granted/denied, permission changes, link revocations, and subscription changes And each entry contains: UTC timestamp, actor (user or system), actor type (internal/external), brief ID, recipient identity (email or SSO subject), discipline tags, action type, outcome, IP address, and user agent And logs are immutable, filterable by date range, action type, and recipient, and exportable to CSV
Delivery Outcomes and Bounce Handling for External Outreach
Given a distribution is sent to external recipients When the email service returns delivery events Then each recipient’s status updates in Share activity to one of: Queued, Sent, Delivered, Soft Bounced (with retry count), Hard Bounced, or Failed And Soft Bounced messages are retried up to 3 times with exponential backoff; upon exceeding retries the status becomes Failed with reason And Hard Bounced addresses are suppressed from future sends, and the sender is notified with the provider reason code And all delivery outcomes and bounce reasons are logged for compliance
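The bounce-handling rules form a small state machine: soft bounces retry up to 3 times with exponential backoff and then become Failed, hard bounces suppress the address immediately. A sketch, with the status names taken from the spec and everything else illustrative:

```python
SOFT_RETRY_LIMIT = 3

def next_state(status, event, retry_count):
    """Return (new_status, new_retry_count, backoff_seconds_or_None)."""
    if event == "hard_bounce":
        return "Hard Bounced", retry_count, None   # address is suppressed
    if event == "soft_bounce":
        if retry_count >= SOFT_RETRY_LIMIT:
            return "Failed", retry_count, None     # retries exhausted
        retry_count += 1
        return "Soft Bounced", retry_count, 2 ** retry_count  # 2s, 4s, 8s
    if event == "delivered":
        return "Delivered", retry_count, None
    return status, retry_count, None               # unknown event: no change
```

The concrete backoff base (2 seconds, doubling) is an assumption; the spec only requires exponential backoff with a retry cap of 3.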
Review Tracking & Acknowledgement
"As a project manager, I want visibility into which consultants have reviewed and acknowledged changes so that I can unblock approvals and keep the schedule on track."
Description

Track recipient engagement for each Window Brief, including opens, time-in-packet, per-change acknowledgements, and comments anchored to specific diffs. Provide a dashboard for the project team to see who has reviewed what, overdue items against SLA targets, and a burn-down of unacknowledged changes. Allow one-click responses like “Reviewed—no impact,” “Needs clarification,” or “Change impacts scope,” which convert to tasks in PlanPulse where needed. Sync acknowledgements to the project timeline and approval logs to reduce administrative overhead. Export review status for inclusion in meeting minutes and client updates.

Acceptance Criteria
Per-Recipient Engagement Telemetry Capture
Given a recipient opens a Window Brief from a unique, authenticated link When the brief loads in a supported browser Then an Open event is recorded with briefId, recipientId, timestamp, userAgent, and sessionId And active time-in-packet is accumulated only while the tab is visible and the user is not idle for more than 60 seconds And a session is considered ended after 5 minutes of inactivity or an explicit close, whichever comes first And total time-in-packet is computed as the sum of active intervals per session with accuracy ±5 seconds And subsequent opens on the same day create new sessions tied to the same recipient and brief And telemetry for opens, sessions, and time-in-packet is visible on the Review Dashboard within 10 seconds of the event
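The time-in-packet rule can be modeled by treating each interval between user interactions (or until the tab is hidden) as a candidate span and capping it at the 60-second idle cutoff. This is a deliberately simplified sketch; the interval shape `(start, end, visible)` in seconds is an assumption:

```python
IDLE_CUTOFF = 60  # seconds without input before time stops accruing

def time_in_packet(intervals):
    """intervals: (start, end, visible) spans between interaction events."""
    total = 0
    for start, end, visible in intervals:
        if not visible:       # hidden tab never accrues time
            continue
        # any portion of a span beyond the idle cutoff counts as idle
        total += min(end - start, IDLE_CUTOFF)
    return total
```

For example, a 30-second active span plus a 170-second span with no further input contributes 30 + 60 = 90 seconds of active time.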
Per-Change Acknowledgement & Quick Responses
Given a recipient is viewing a Window Brief with N listed diffs When the recipient selects a diff Then the system presents one-click responses: “Reviewed—no impact”, “Needs clarification”, “Change impacts scope” And choosing any response records an acknowledgement with diffId, recipientId, responseType, timestamp, and optional note And each recipient can record at most one active acknowledgement per diff, with the ability to update it; all updates are audit-logged And selecting “Needs clarification” automatically opens a comment thread anchored to the diff and sets the diff status to Clarification Requested And selecting “Change impacts scope” creates a linked task in PlanPulse assigned to the project lead with default priority, SLA, and a backlink to the diff And the acknowledgement completion percentage updates immediately on the Dashboard and in the brief header
Comments Anchored to Specific Diffs
Given a recipient or project team member views a diff within a Window Brief When they add a comment on the diff Then the comment is anchored to the exact diff region with diffId, coordinate/selector, authorId, and timestamp And replies form a thread with chronological ordering and unread indicators And comments support @mentions with notifications sent to mentioned users within 60 seconds And authors can edit or delete their own comments within 15 minutes of posting; all edits are versioned and audit-logged And permissions restrict visibility to project members and invited consultants on that brief And the Dashboard displays a count of unresolved comment threads per recipient and per brief
Review Dashboard—Status, SLA, and Burn-Down
Given a project has at least one sent Window Brief When a project team member opens the Review Dashboard Then they see for each recipient: last opened timestamp, total time-in-packet, diffs acknowledged %, counts by response type, SLA due date, and overdue flag if applicable And the dashboard provides filters by discipline, recipient, brief, response type, and overdue status, and supports sort by any visible column And a burn-down chart of unacknowledged diffs over time is displayed and updates within 15 seconds of new acknowledgements And clicking a recipient row drills down to per-diff acknowledgement and comment details And all counts and percentages match underlying event data within ±1 item
SLA Tracking, Overdue Flags, and Reminders
Given an SLA policy is defined for Window Brief reviews (e.g., 3 business days from send) When a brief is sent to recipients Then each recipient-review is assigned a due date per the SLA with timezone awareness and business-day rules And any recipient with remaining unacknowledged diffs after the due date is flagged Overdue on the Dashboard And overdue recipients receive an automatic reminder email and in-app notification once per day until all diffs are acknowledged or the due date is extended And project leads receive a daily summary listing upcoming (next 24h) and overdue recipients And adjusting the SLA or due date recalculates flags and schedules within 60 seconds and is audit-logged
Sync to Project Timeline and Approval Logs
Given acknowledgements are recorded on diffs and tasks may be created When a recipient acknowledges a diff or updates their response Then an event is written to the Project Timeline including briefId, diffId, recipientId, responseType, timestamp, and any note And when all diffs in a Window Brief are acknowledged by all required recipients, a Brief Reviewed milestone is added to the Timeline and Approval Log And tasks created from “Change impacts scope” acknowledgements appear in PlanPulse Tasks with a backlink to the diff and inherit the brief’s metadata (project, discipline) And sync operations are idempotent, retry on transient failures up to 3 times, and surface errors to project leads And approval logs display an exportable audit trail of acknowledgement changes
Export Review Status for Minutes & Client Updates
Given a project team member needs to share review progress externally When they export review status from the Dashboard Then they can choose CSV or PDF and select filters (date range, brief, discipline, recipient, overdue) And the export includes for each recipient: brief name/version, sent date, last opened, total time-in-packet, counts by response type, % acknowledged, SLA due date, overdue flag, unresolved comments count, and links/IDs to tasks And per-diff detail can be toggled on for the export, including diff IDs, titles, and acknowledgement status And exports up to 500 recipients and 5,000 diffs generate in under 30 seconds and are accurate to the current Dashboard state And a secure download link is generated that expires in 7 days and is accessible only to authorized project members

Merge Gate

On window close, aggregates consultant markups into a single, conflict‑aware packet. Flags overlaps, suggests merges, and routes a concise accept/reject queue to the Project Lead—preserving attribution while accelerating the move to client‑ready revisions.

Requirements

Window-Close Merge Trigger & Autosave
"As a project lead, I want markups to be captured and queued for merging when a consultant closes their window so that no feedback is lost and I receive a ready-to-review packet automatically."
Description

On browser/tab close, navigation away, or session timeout, automatically capture the consultant’s latest markups and commit them to a pending merge packet tied to the current drawing version. The trigger debounces rapid events, detects unsaved edits, and falls back to offline persistence (e.g., IndexedDB) if the network is unavailable, retrying on reconnect to prevent data loss. A lightweight toast confirms the capture without blocking the user. The service tags the packet with session, drawing, version, and consultant metadata and queues it for server-side aggregation. The service integrates with PlanPulse session management, versioned drawings, and activity logs to ensure a zero-click path from consultant exit to a reviewable merge packet.

Acceptance Criteria
Autosave on Tab/Window Close
Given a consultant is editing markups on Drawing D Version V with network connectivity available And there are unsaved edits in the local editor When the user closes the browser tab or window Then the service serializes all unsaved edits into a single pending merge packet within 300 ms And tags the packet to Version V and queues it via API returning 202 Accepted within 2 s And a non-blocking toast "Markups captured for merge" appears within 1 s and auto-dismisses within 5 s And the close action is not blocked by any modal or prompt
Autosave on Navigation Away
Given a consultant is editing markups on Drawing D Version V When the user initiates navigation away (internal route change or external link/back/forward) Then the service captures the latest edits and creates one pending merge packet tied to Version V within 300 ms And the navigation proceeds without blocking And a non-blocking toast confirmation is displayed within 1 s And the server returns 202 Accepted within 2 s (if online)
Autosave on Session Timeout
Given a consultant is editing markups and their PlanPulse session expires or is force-logged-out When the session manager emits a timeout/expired event Then the service captures current edits and creates a pending merge packet within 500 ms And attempts to queue it to the server immediately (if online) with a 202 Accepted within 3 s And displays a non-blocking toast confirmation prior to redirecting to sign-in
Debounced Merge Trigger
Given multiple close/navigation/visibility events fire nearly simultaneously When two or more trigger events occur within 1500 ms Then only one pending merge packet is created and queued for that drawing/version And the packet includes the latest available edits at the time of final trigger And telemetry records the number of deduplicated trigger events for observability
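The debounce rule above can be sketched as a burst-collapsing function. This is a minimal illustration; the event shape and `collapseTriggers` name are assumptions, not PlanPulse's actual API, and a real trigger would serialize the latest edits at the final event of each burst.

```typescript
// Collapse trigger events arriving within the 1500 ms window into one capture,
// counting deduplicated events for telemetry per the criterion above.
type TriggerEvent = { kind: "close" | "navigate" | "visibility"; at: number }; // at = epoch ms

function collapseTriggers(
  events: TriggerEvent[],
  windowMs = 1500,
): { packetsCreated: number; dedupedEvents: number } {
  const sorted = [...events].sort((a, b) => a.at - b.at);
  let packetsCreated = 0;
  let dedupedEvents = 0;
  let prevAt = Number.NEGATIVE_INFINITY;
  for (const e of sorted) {
    if (e.at - prevAt < windowMs) {
      dedupedEvents++; // folded into the burst already being captured
    } else {
      packetsCreated++; // a fresh burst: exactly one pending merge packet
    }
    prevAt = e.at;
  }
  return { packetsCreated, dedupedEvents };
}
```

Chaining on the gap between consecutive events (rather than a fixed window from the first event) matches standard debounce semantics: three rapid events yield one packet and two deduplicated-event telemetry counts.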
Offline Persistence and Reconnect Retry
Given the network is unavailable at the time of a close/navigation/timeout trigger When capture runs Then the packet is serialized and persisted to IndexedDB within 500 ms under a pending queue And a toast "Captured offline—will sync on reconnect" is shown within 1 s When connectivity is restored or the app is reopened online Then the client retries upload within 10 s using exponential backoff (initial retry ≤ 2 s) And upon first 202 Accepted response the local pending entry is removed And no duplicate packets appear server-side
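The reconnect behavior above implies a simple backoff schedule. A minimal sketch follows; the criterion fixes only the first retry (≤ 2 s) and exponential growth, so the factor and the 60 s cap are assumptions.

```typescript
// Retry delays for re-uploading a persisted packet: first retry at <= 2 s,
// then exponential backoff, capped (cap and factor assumed, not specified).
function retryDelaysMs(attempts: number, firstMs = 2000, factor = 2, capMs = 60_000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(firstMs * factor ** i, capMs));
  }
  return delays;
}
// retryDelaysMs(4) → [2000, 4000, 8000, 16000]
```

On the first 202 Accepted, the client would delete the pending IndexedDB entry, which together with server-side packetId idempotency prevents duplicates.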
Metadata Tagging and Version Binding
Given a merge packet is created from a capture event Then it is tagged with sessionId, consultantId, drawingId, versionId, packetId, capturedAt (UTC), and client build info And versionId equals the active version in the editor at trigger time even if a newer version exists server-side And the server reflects these fields unchanged after acceptance
Idempotency and Single Activity Log Entry
Given a pending merge packet is retried due to network failures or repeated triggers When the same packetId is submitted multiple times Then the server accepts at most one and ignores duplicates (idempotent) And the PlanPulse activity log shows exactly one entry "merge_packet_captured" with that packetId within 2 s of first acceptance
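The idempotency contract above can be sketched as a packetId-keyed guard on the ingestion path; class and field names here are illustrative, not PlanPulse's actual service.

```typescript
// At most one accepted packet and one activity-log entry per packetId,
// however many times the client retries the same submission.
interface MergePacket { packetId: string; drawingId: string; payload: unknown; }

class PacketIngest {
  private seen = new Set<string>();
  readonly activityLog: { event: string; packetId: string }[] = [];

  submit(p: MergePacket): "accepted" | "duplicate" {
    if (this.seen.has(p.packetId)) return "duplicate"; // retries are ignored
    this.seen.add(p.packetId);
    this.activityLog.push({ event: "merge_packet_captured", packetId: p.packetId });
    return "accepted";
  }
}
```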
Conflict-Aware Aggregation Engine
"As a project lead, I want consultant inputs aggregated into one conflict-aware packet so that I can review everything in one place without hunting across files."
Description

Server-side consolidation that ingests multiple consultant packets for the same drawing version and produces a single, structured merge packet. It detects overlaps and conflicts using geometric intersection for vector annotations, proximity thresholds for raster notes, and semantic clustering for text comments. It deduplicates near-identical items, tags conflict types (positional, scope, specification), and preserves per-item IDs for idempotent reprocessing. The engine scales across large sheets via tiled processing, respects layer/discipline semantics, and aligns all coordinates to the authoritative drawing transform. Outputs a normalized dataset consumable by the review UI and audit log, integrating with PlanPulse’s versioning and permissions.

Acceptance Criteria
Vector Overlap Detection and Merge Suggestions on Same Drawing Version
Given a drawing version VID with two consultant packets P1 and P2 containing vector annotations aligned to transform T And there exist at least two annotations A1 in P1 and A2 in P2 where polygon_intersection_area(A1,A2) >= 0.5 mm^2 OR min_edge_distance(A1,A2) < 0.3 mm in sheet coordinates When the engine aggregates the packets Then it creates a conflict record linking A1 and A2 with conflict_type = "positional" And includes merge_suggestion = true if IoU(A1,A2) >= 0.80 and non-geometry attributes match within tolerance (stroke_width difference <= 0.2 mm; color ΔE <= 2.0) And retains both source item_ids and author attributions in the conflict record And does not flag any pair where intersection area < 0.5 mm^2 AND edge-to-edge distance >= 0.3 mm
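A sketch of the positional test above, with axis-aligned boxes standing in for the full polygon-intersection math (units are mm in sheet coordinates; the < 0.3 mm edge-distance branch and the attribute-tolerance checks are omitted for brevity).

```typescript
type Box = { x: number; y: number; w: number; h: number }; // mm, sheet coordinates

function intersectionArea(a: Box, b: Box): number {
  const w = Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x);
  const h = Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y);
  return w > 0 && h > 0 ? w * h : 0;
}

function iou(a: Box, b: Box): number {
  const inter = intersectionArea(a, b);
  return inter / (a.w * a.h + b.w * b.h - inter);
}

// Flag a positional conflict at >= 0.5 mm^2 intersection; suggest a merge at IoU >= 0.80.
function classifyOverlap(a: Box, b: Box): { conflict: boolean; mergeSuggestion: boolean } {
  const conflict = intersectionArea(a, b) >= 0.5;
  return { conflict, mergeSuggestion: conflict && iou(a, b) >= 0.8 };
}
```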
Raster Note Proximity Aggregation into Single Markup
Given a drawing version VID with raster note markups R1..Rn each with a bounding box in sheet coordinates And two notes Ri and Rj satisfy either centroid_distance(Ri,Rj) <= 5.0 mm OR min_edge_distance(Ri,Rj) <= 2.0 mm When the engine aggregates Then it clusters Ri and Rj into the same aggregate item with aggregate_id assigned deterministically from sorted source item_ids And produces exactly one review-queue entry for the cluster with source_ids including Ri and Rj And does not cluster any pair failing both thresholds And preserves per-note attribution in the aggregate item
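The clustering rule above can be sketched with union-find and a deterministic aggregate id derived from sorted source ids. Thresholds come from the criterion; the `Note` shape and `agg:` id prefix are illustrative.

```typescript
type Note = { id: string; x: number; y: number; w: number; h: number }; // bounding box, mm

function centroidDist(a: Note, b: Note): number {
  return Math.hypot(a.x + a.w / 2 - b.x - b.w / 2, a.y + a.h / 2 - b.y - b.h / 2);
}

function minEdgeDist(a: Note, b: Note): number {
  const dx = Math.max(0, a.x - (b.x + b.w), b.x - (a.x + a.w));
  const dy = Math.max(0, a.y - (b.y + b.h), b.y - (a.y + a.h));
  return Math.hypot(dx, dy);
}

// Cluster notes whose centroids are within 5 mm or edges within 2 mm.
function clusterNotes(notes: Note[]): Map<string, string[]> {
  const parent = new Map<string, string>(notes.map((n) => [n.id, n.id]));
  const find = (x: string): string => {
    while (parent.get(x) !== x) x = parent.get(x)!;
    return x;
  };
  for (let i = 0; i < notes.length; i++)
    for (let j = i + 1; j < notes.length; j++)
      if (centroidDist(notes[i], notes[j]) <= 5 || minEdgeDist(notes[i], notes[j]) <= 2)
        parent.set(find(notes[i].id), find(notes[j].id));
  const groups = new Map<string, string[]>();
  for (const n of notes) {
    const root = find(n.id);
    groups.set(root, [...(groups.get(root) ?? []), n.id]);
  }
  // Deterministic aggregate_id: sorted member ids, independent of input order.
  const out = new Map<string, string[]>();
  for (const ids of groups.values()) {
    const sorted = [...ids].sort();
    out.set("agg:" + sorted.join("+"), sorted);
  }
  return out;
}
```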
Semantic Clustering of Text Comments with Specification Conflict Detection
Given text comments C1..Cm from multiple consultant packets on drawing VID with anchor positions within 10 mm of each other When the engine computes semantic similarity between comments Then it clusters comments into the same group if cosine_similarity(embedding(Cx), embedding(Cy)) >= 0.85 And it does not cluster pairs with similarity < 0.85 And if two comments within a cluster contain differing numeric/spec tokens (e.g., diameter, gauge, material grade), it emits a conflict record with conflict_type = "specification" referencing the involved comment ids And the cluster object includes a representative text chosen deterministically (longest by token count; tie-break by lowest source_id)
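A sketch of the similarity gate and the representative-text selection above. The embedding vectors are assumed inputs here; the real engine would compute them from the comment text.

```typescript
// Cosine similarity over embedding vectors; comments cluster together at >= 0.85.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const sameCluster = (e1: number[], e2: number[]): boolean => cosine(e1, e2) >= 0.85;

// Representative text: longest by token count, tie-broken by lowest source id.
function representative(comments: { id: string; text: string }[]): { id: string; text: string } {
  const tokens = (s: string) => s.trim().split(/\s+/).length;
  return [...comments].sort(
    (a, b) => tokens(b.text) - tokens(a.text) || (a.id < b.id ? -1 : 1),
  )[0];
}
```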
Near-Identical Item Deduplication Across Consultants
Given two items from different packets that target the same layer/discipline and geometry type When the engine compares them Then it deduplicates them into a single canonical item if:
- For vector geometry: directed Hausdorff distance <= 0.5 mm and IoU >= 0.90
- For text comments: normalized Levenshtein similarity >= 0.95 after case/punctuation/whitespace normalization
- For raster notes: bounding box IoU >= 0.90
And it preserves all source item_ids under a sources array on the canonical item And it does not deduplicate items from different disciplines or layers
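The text-comment branch of the rule above can be sketched directly; the normalization (lowercase, strip punctuation, collapse whitespace) and the 0.95 threshold come from the criterion, while the ASCII-only normalization regex is a simplification.

```typescript
// Normalized Levenshtein similarity for text-comment deduplication.
function normalize(s: string): string {
  return s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
}

function levenshtein(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++)
    for (let j = 1; j <= b.length; j++)
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
  return dp[a.length][b.length];
}

function textSimilarity(s1: string, s2: string): number {
  const a = normalize(s1), b = normalize(s2);
  const maxLen = Math.max(a.length, b.length) || 1;
  return 1 - levenshtein(a, b) / maxLen;
}

const isDuplicate = (s1: string, s2: string): boolean => textSimilarity(s1, s2) >= 0.95;
```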
Conflict Type Tagging and Routing to Project Lead Queue
Given aggregated items with overlaps, proximity clusters, or semantic clusters on drawing VID When the engine classifies conflicts Then it tags conflicts as:
- positional when geometry thresholds are met and semantic content matches
- scope when items overlap within 10 mm but belong to different disciplines or layer scopes
- specification when semantic cluster contains divergent spec tokens
And the classifier achieves precision >= 0.95 and recall >= 0.95 on a labeled test set of at least 200 cases for each type And the resulting merge packet produces a review queue addressed only to users with role = "Project Lead" per permissions, rejecting others with 403
Idempotent Reprocessing with Stable IDs
Given the same set of input packet IDs {P1..Pk} for drawing VID with unchanged contents When the engine is run twice Then the normalized merge packet payloads are byte-identical except for fields explicitly marked as volatile (timestamps), and output items retain the same aggregate_id and source item_ids in the same deterministic order And re-running with an additional packet P{k+1} yields no duplication of previously aggregated items and only adds or updates items impacted by P{k+1} And a content hash (SHA-256) of the normalized payload, computed with volatile fields excluded, remains identical across runs with identical inputs
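The determinism requirement above implies canonical serialization before hashing. A sketch with sorted keys and assumed volatile field names stripped; the SHA-256 step itself is elided to keep the example dependency-free, since byte-identical canonical strings guarantee identical hashes.

```typescript
// Canonical, order-independent serialization of a normalized payload.
const VOLATILE = new Set(["capturedAt", "processedAt"]); // assumed volatile fields

function canonicalize(value: unknown): string {
  if (Array.isArray(value)) return "[" + value.map(canonicalize).join(",") + "]";
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .filter(([k]) => !VOLATILE.has(k))    // strip volatile fields before hashing
      .sort(([a], [b]) => (a < b ? -1 : 1)) // key order must not affect the hash
      .map(([k, v]) => JSON.stringify(k) + ":" + canonicalize(v));
    return "{" + entries.join(",") + "}";
  }
  return JSON.stringify(value);
}
```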
Tiled Processing, Transform Alignment, and Normalized Output Consumption
Given a large sheet (>= 10,000 x 10,000 sheet units) segmented into tiles no larger than 2048 x 2048 units and a total of >= 10,000 annotations across all packets When the engine processes on a server with 4 vCPU and 16 GB RAM Then total processing time <= 20 seconds and peak RSS memory < 4 GB And items within 2 mm of tile boundaries are neither dropped nor duplicated across tiles And all output coordinates, when transformed back to native drawing space via T^-1, have RMS error <= 0.25 mm and max error <= 0.75 mm And no merges occur across different disciplines or layers; discipline and layer metadata are preserved on every output item And the output validates against JSON Schema "merge_packet.v1" and the review UI endpoint /review/queues/{VID} returns 200 with item_count equal to the number of aggregate items; an audit log entry is recorded with a deterministic hash id And version mismatches are rejected with HTTP 409 and do not mutate stored outputs
Overlap Heatmap & Inline Diff View
"As a project lead, I want to quickly see and navigate overlapping feedback so that I can prioritize and resolve the most critical conflicts first."
Description

Interactive visualization that highlights hotspots where consultant markups overlap or contradict. A low-latency canvas/WebGL overlay renders a heatmap of density and conflict severity, while an inline diff panel lists grouped overlaps with thumbnails, authors, and affected layers. Clicking an item zooms to the exact region, with tooltips summarizing parties involved and conflict type. Filters by consultant, discipline, severity, and time let users isolate what matters. Design adheres to accessible color contrast and remains performant on large drawings via tile-based rendering and virtualization. Integrates with the aggregation engine’s tags and the review queue for smooth triage.

Acceptance Criteria
Heatmap Rendering Accuracy & Severity Legend
Given aggregated consultant markups contain known overlaps and contradictions labeled with severities (Low, Medium, High, Critical) When the drawing is opened in Merge Gate Then the heatmap overlay renders without errors And the legend displays all severity bins with distinct labels and swatches And each conflict region is rendered with the correct severity color per scale mapping And the number of rendered hotspots equals the number of unique conflict regions (±0%) And hotspot positions align to source geometry within ≤2 px at 100% zoom
Large-Drawing Performance via Tile-Based Rendering
Given a 36×48 in (300 DPI) drawing with ≥10,000 markups and ≥1,000 conflict regions When the heatmap first renders Then time-to-first-visible heatmap ≤1000 ms (95th percentile) And pan/zoom interaction maintains ≥45 FPS during continuous navigation (95th percentile) And per-frame input-to-paint latency ≤50 ms (95th percentile) And renderer peak memory usage ≤500 MB during navigation And tile fetch/prepare latency ≤150 ms median And if WebGL is unavailable, a 2D canvas fallback activates with time-to-first-visible ≤1500 ms and feature parity for viewing
Inline Diff List Grouping & Metadata Completeness
Given conflicts are available from the aggregation engine with tags and metadata When the Inline Diff panel is opened Then conflicts are grouped by spatial region and conflict type with grouping precision ≥95% And each list item shows a thumbnail, authors, affected layers, severity, and last modified time And items are sorted by severity (desc) then last modified (desc) by default And list virtualization renders the first 50 visible items in ≤100 ms with datasets of ≥5,000 items And toggling between groupings (by region/type/author) updates the list in ≤200 ms
Click-to-Zoom Navigation & Tooltip Summaries
Given a conflict exists in the list and on the canvas When a user clicks a list item or hotspot Then the viewport centers and zooms to the conflict region within ≤300 ms And the selected region’s bounding box aligns within ≤5 px at 100% zoom And a tooltip appears within ≤200 ms showing parties involved (names), conflict type, affected layers, and last modified time And pressing Esc or clicking outside dismisses the tooltip And pressing Enter on a focused list item performs the same zoom action
Multi-Facet Filters (Consultant, Discipline, Severity, Time) with Session Persistence
Given conflicts span multiple consultants, disciplines, severities, and timestamps When the user applies multiple filters Then results reflect logical AND across filter categories and OR within a single category selection And the heatmap, legend counts, and list update within ≤200 ms of filter change And the total conflict count displayed matches the filtered set (±0%) And filter state persists for the current session and resets only when the user clears filters And clearing all filters restores the full, unfiltered results
Accessibility & Color Contrast Compliance
Given default UI and heatmap palette When evaluated against WCAG 2.2 Then UI text and controls meet AA contrast ratios And a colorblind-safe palette and optional pattern overlays can be enabled by the user And conflict categories remain distinguishable under simulated Deuteranopia, Protanopia, and Tritanopia And all interactive elements are fully keyboard navigable with visible focus indicators and logical tab order And list items, tooltips, and controls expose appropriate ARIA roles, names, and states to screen readers
Tag Integration and Review Queue Synchronization
Given conflicts include aggregation engine tags and attribution When the user sends selected items to the Review Queue from the heatmap or diff list Then the items are enqueued with tags and attribution preserved within ≤300 ms And following a deep link from the Review Queue opens and centers the exact conflict region And Accept/Reject actions in the Review Queue update the conflict status in the heatmap and list within ≤500 ms And duplicate queue entries are not created for the same conflict-id And an audit log records user, timestamp, conflict-id, action, and outcome for each operation
Merge Suggestion Rules & Confidence Scoring
"As a project lead, I want merge suggestions with clear confidence and reasoning so that I can accept safe merges quickly and focus on ambiguous cases."
Description

Rule- and ML-assisted suggestions that propose safe merges and consolidations, accompanied by transparent rationales and confidence scores. Heuristics include geometric proximity, directional alignment, layer/discipline precedence, and semantic similarity of text using lightweight embeddings. Configurable thresholds per project allow tuning aggressiveness, with explainability (e.g., “90% similar text; same layer; within 8px”) shown in the UI. Suggestions are batched, previewable, and reversible; risky cases are flagged for manual review. The service logs decisions to improve future suggestions and integrates with review actions and audit trails.

Acceptance Criteria
Confidence Score & Rationale Display per Suggestion
Given a project with suggestion_threshold set to 0.80 and computed merge suggestions When the Project Lead opens the Merge Gate queue Then each suggestion displays an overall confidence between 0.00 and 1.00 rounded to two decimal places And the rationale lists contributing heuristics with metrics: text similarity (%), distance (px), alignment (degrees), and layer/discipline match (boolean) And the composed rationale text follows the format '<text% similar>; <layer match>; within <px>px; <deg>° aligned' And the displayed confidence equals the stored score within ±0.01
Heuristic Matching: Geometry, Alignment, and Text Similarity
Given two consultant markups where text cosine similarity ≥ 0.90, nearest-edge distance ≤ 10 px, and orientation delta ≤ 5° When merge suggestions are generated Then a single merge suggestion is produced with confidence ≥ 0.85 and its rationale lists the three metrics And if any of the following holds: text similarity < 0.60 OR distance > 20 px OR orientation delta > 15°, no merge suggestion is produced for that pair
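A sketch of the suggestion gate and rationale string: the hard-reject thresholds and the rationale format come from the criteria above, but the weighted scoring is purely an assumption for illustration.

```typescript
interface Pair {
  textSim: number;            // cosine similarity, 0..1
  distancePx: number;         // nearest-edge distance
  orientationDeltaDeg: number;
  sameLayer: boolean;
}

function suggest(p: Pair): { confidence: number; rationale: string } | null {
  // Hard rejects per the criterion: sim < 0.60, distance > 20 px, or delta > 15°.
  if (p.textSim < 0.6 || p.distancePx > 20 || p.orientationDeltaDeg > 15) return null;
  // Assumed weighting — not specified by the requirements.
  const score =
    0.5 * p.textSim +
    0.25 * (1 - p.distancePx / 20) +
    0.15 * (1 - p.orientationDeltaDeg / 15) +
    0.1 * (p.sameLayer ? 1 : 0);
  const confidence = Math.round(score * 100) / 100;
  const rationale =
    `${Math.round(p.textSim * 100)}% similar; ` +
    `${p.sameLayer ? "same layer" : "different layer"}; ` +
    `within ${p.distancePx}px; ${p.orientationDeltaDeg}° aligned`;
  return { confidence, rationale };
}
```

Downstream, suggestions at or above the project's suggestion_threshold would land in the Suggested list and the rest in Manual Review.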
Discipline/Layer Precedence in Conflict Resolution
Given a project precedence order configured as ['Structural','Architectural','MEP'] and two overlapping markups from Structural and Architectural with conflicting edits When merge suggestions are generated Then the proposed merge keeps the Structural markup as primary and suggests consolidating compatible attributes from Architectural And the rationale includes 'precedence: Structural over Architectural' And contributor attribution for both markups is preserved in the merged result metadata
Project-Level Threshold Configuration Effects
Given a project with suggestion_threshold = 0.80 and three candidate suggestions with confidences 0.76, 0.82, and 0.93 When the Project Lead views the Merge Gate queue Then only the 0.82 and 0.93 suggestions appear in the Suggested list, and the 0.76 item appears in Manual Review When the suggestion_threshold is increased to 0.90 and suggestions are recomputed Then only the 0.93 suggestion remains in the Suggested list, and the 0.82 and 0.76 items appear in Manual Review And the counts in each list update without page reload within 500 ms
Batch Preview, Apply, and Undo of Merge Suggestions
Given a queue containing 10 suggestions and the Project Lead selects 7 of them When they click Preview Then a preview shows before/after deltas for the 7 items with per-item enable/disable toggles When they click Apply Then all 7 merges are applied, attribution metadata is retained, and each item’s status changes to Applied And an Undo Last Apply control becomes available When Undo Last Apply is clicked within 5 minutes Then all 7 merges are fully reverted to their prior state with no loss of data or attribution
Risk Flagging and Manual Review Queue
Given candidate suggestions that have confidence < suggestion_threshold OR include conflicts among ≥ 3 contributors OR span different disciplines without a clear precedence resolution When merge suggestions are generated Then those items are placed into the Manual Review queue with a Risk badge and are excluded from Apply All And each risky item offers Accept, Reject, and Edit actions And no risky item is auto-applied on window close
Decision Logging and Audit Trail Integration
Given any Accept, Reject, or Undo action on a merge suggestion When the action completes Then a log entry is written containing: suggestion_id, project_id, actor_id, timestamp (UTC), action, prior_state_hash, new_state_hash, confidence, heuristics_snapshot, and attribution list And the affected drawing’s audit trail references this log within 2 seconds And the log entry is retrievable via the project audit API by suggestion_id and by date range
Attribution & Provenance Preservation
"As a project lead, I want merged items to retain contributor attribution and history so that accountability and context are not lost during consolidation."
Description

End-to-end preservation of contributor context for every markup and merged item, including author, role/discipline, timestamps, original geometry/text, and change lineage. When items are merged, the system maintains a provenance chain linking all source contributions, viewable as badges or a details panel. Exports to client-ready revisions retain metadata in the file manifest while keeping visuals clean by default. Permissions ensure only authorized viewers can see contributor identities. This guarantees accountability, facilitates future audits, and supports rollbacks without losing who-said-what context.

Acceptance Criteria
Capture Complete Metadata on Markup Creation
Given an authenticated contributor with an assigned role/discipline creates a markup on a drawing When they save the markup Then the system records authorId, authorDisplayName, roleDiscipline, createdAt (ISO 8601 with timezone), originalGeometry (vector/points), originalText (if any), sourceDrawingId, and markupId And the metadata is immutable thereafter (updates create new lineage entries rather than overwrite) And the markup’s details panel displays these fields exactly as stored And the metadata is retrievable via the internal API at GET /markups/{id}/provenance returning HTTP 200 with the same values
Provenance Chain on Merge and Nested Merge
Given multiple consultant markups are selected for merge by Merge Gate or auto-suggested When a merged item is created Then the merged item contains a provenance chain listing all source markupIds with their authorId, roleDiscipline, and timestamps in chronological order And badges on the merged item display the unique contributor count and per-discipline indicators And opening the details panel shows each source contribution and the merge operation(s), including nested merges with parent/child linkage And re-merging additional sources appends to lineage without losing any prior references
Permission-Gated Visibility of Contributor Identities
Given a user without the "View Contributor Identities" permission opens a drawing with markups When they view badges or the details panel Then author identities are anonymized (e.g., discipline-only labels) while non-PII provenance fields remain visible And a user with the permission sees full authorDisplayName and authorId And permission changes take effect within the same session upon next panel open And attempts to access identities via API without permission return HTTP 403 and no PII fields
Export Manifests Preserve Metadata with Clean Visuals
Given a project lead exports a client-ready revision When the export completes Then the visual export contains no provenance badges or identity labels by default And the export package includes a manifest file containing provenance for all included markups and merged items with fields: schemaVersion, markupId, authorId (or anonymized per policy), roleDiscipline, timestamps, originalGeometryHash, and changeLineage And the manifest validates against the published schema and includes a checksum that verifies on import And enabling the "Include review overlays" toggle adds badges to the visuals; disabling keeps visuals clean
Rollback Restores Prior State Without Losing Attribution
Given a project lead rolls back a drawing to a prior revision or un-applies a merge When the rollback is executed Then all markups and merged items revert to the selected state And all provenance chains remain intact, including original author and timestamps And a rollback event is appended to the lineage with actor, timestamp, and reason And no duplicate or orphaned lineage entries are created
Conflict-Aware Merge Queue Shows and Preserves Attribution
Given Merge Gate aggregates overlapping markups into an accept/reject queue on window close When the project lead views an item with flagged overlaps Then each decision card displays contributor badges for all involved consultants and disciplines And accepting a suggested merge results in a merged item whose provenance chain lists all sources; rejecting keeps sources separate with a lineage note of rejection And queue actions are audit-logged with actor, timestamp, and affected markupIds
Review Queue with Bulk Accept/Reject
"As a project lead, I want a streamlined accept/reject queue with bulk actions so that I can finalize a client-ready revision rapidly and consistently."
Description

Generates a concise, prioritized queue of conflicts and suggestions for the Project Lead, with batch accept/reject, per-item preview, inline edits, keyboard shortcuts, and one-click undo. Items show confidence, rationale, and affected scope, with filters and saved views for large sets. Completing the queue emits a new drawing revision and notifies stakeholders per project rules. All actions are logged for audit and analytics (e.g., cycle time, auto-merge rate). Integrates tightly with PlanPulse’s approval flow to speed transition to client-ready revisions.

Acceptance Criteria
Queue Generation and Prioritization on Merge Gate
Given consultant markups from multiple sources exist and the Merge Gate triggers on window close When the system aggregates and analyzes conflicts and suggestions Then a review queue is generated containing only conflict items and merge suggestions, excluding auto-merged non-conflicting changes And each queue item includes type (conflict/suggestion), severity/impact score, priority rank, and consultant attribution And conflicting overlaps are flagged with a visual badge and a link to related overlapping items And the queue is sorted by priority descending by default and can be re-sorted by the user And the queue appears in the Project Lead’s dashboard with an unread count and is assigned to the current project’s lead And initial render completes within 3 seconds for up to 500 items and within 6 seconds for up to 2,000 items
Bulk Actions with Undo and Shortcuts
Given one or more items are selected in the queue When the user presses A or clicks Bulk Accept Then all selected items transition to Accepted state within 1 second and show a success confirmation Given one or more items are selected in the queue When the user presses R or clicks Bulk Reject Then all selected items transition to Rejected state within 1 second and show a success confirmation Then an inline undo affordance is shown for 30 seconds; when the user clicks Undo or presses Ctrl/Cmd+Z, the last action is reverted in full within 1 second And keyboard navigation supports Up/Down to move focus, Space to toggle selection, and Shift+Click to select ranges And bulk operations are atomic by default; if any item fails, no items change and a retry with error details is offered; a granular retry option can be enabled for per-item retries
Per-Item Preview and Inline Edit Before Decision
Given the Project Lead opens a queue item When the item is previewed Then a visual preview shows the drawing region, proposed change, and diff vs baseline with zoom and pan controls And the preview loads within 800 ms from cache and within 2 seconds from network for items under 10 MB When the lead edits the item rationale or note inline Then changes auto-save within 500 ms and are included in the decision record When an inline edit modifies the affected scope Then the item’s confidence and priority are recalculated and updated in the list within 1 second If the user attempts to navigate away with unsaved edits Then a confirmation prompt prevents data loss
Visibility of Confidence, Rationale, and Scope
Given a queue item is visible Then it displays confidence as a percentage (0–100%) with a High (>=80), Medium (50–79), or Low (<50) badge And it displays rationale text (truncate at 280 characters with tooltip for full text) And it displays affected scope (sheet IDs, layers, and count of impacted areas) When hovering over the confidence value Then a tooltip reveals the top contributing factors used by the model If any of confidence, rationale, or scope is unavailable Then the UI shows "Unknown" with a warning icon and a link to source details And displayed values exactly match backend values (tolerance 0%)
Filters and Saved Views for Large Sets
Given a queue contains more than 200 items When the user applies filters (type, severity, confidence threshold, consultant, date, status) Then the result list and count update within 500 ms for up to 10 chained filters When the user saves the current view (filters, sort, and visible columns) Then the saved view persists per user and project and is available after sign-out/sign-in And the saved view is shareable via link respecting project permissions And saved views show a dynamic badge with current match count When underlying data changes (e.g., items resolved) Then filter results and counts recompute within 2 seconds And virtualization/pagination maintains 60 fps scrolling and loads the next segment within 200 ms
Queue Completion Emits Revision and Notifications
Given all items in the queue are in a terminal state (Accepted or Rejected) When the Project Lead clicks Complete Review Then a new drawing revision is created with an incremented revision number (e.g., R12 -> R13) and a changelog summarizing accepted count, rejected count, and affected sheets And the revision is immediately available in PlanPulse’s approval module for next-step transition And notifications are sent to stakeholders per project rules via configured channels within 60 seconds And notification failures are retried with exponential backoff and recorded for follow-up If any item remains unresolved Then the Complete action is disabled and a tooltip lists unresolved blockers
Audit Logging and Analytics
Given any action occurs in the review queue (view, select, accept, reject, edit, undo, complete) Then an audit log entry is recorded with timestamp (UTC ISO 8601), actor ID, item IDs, previous and new state, rationale delta (if edited), and request ID And log entries are immutable, queryable by project and date, and exportable as CSV Then analytics compute and update within 5 minutes: item cycle time, queue cycle time, auto-merge rate, bulk action rate, average confidence, and rejection reason distribution And an API provides paginated, filterable access to logs and metrics respecting project permissions And data retention follows project policy (default 18 months) and is GDPR-compliant, including user action export on request

Staggered Waves

Configure sequential or partially overlapping windows with dependency rules (e.g., structure before MEP). The system hands off automatically between waves, adjusts downstream timings when a wave slips, and maintains a clear critical path to keep the overall approval schedule tight.

Requirements

Wave Template Builder
"As a project lead, I want to create standardized wave templates by discipline so that I can set up consistent schedules quickly and reduce planning errors."
Description

Provide a configuration interface to define reusable scheduling templates composed of named waves (e.g., Structure, MEP, Interiors), each with default duration, start mode (fixed date or relative/offset), permitted overlap percentage, required roles, deliverables, and approval gates. Templates can be applied per project with override capability, and each wave can be linked to specific drawing sets and conversation threads in PlanPulse. Include validation to prevent circular references, enforce minimum/maximum durations, and ensure required disciplines are present. Persist templates at workspace level, support versioning, and allow cloning to accelerate setup across similar projects.

Acceptance Criteria
Create and persist wave template with core fields
Given I have workspace admin permission and open the Wave Template Builder When I create a template named "Core Shell - v1" and add waves "Structure", "MEP", and "Interiors" each with default duration (in days), start mode (fixed date or relative offset), permitted overlap (%), required roles, deliverables, and approval gates Then the template saves successfully and appears in the workspace template list And reopening the template shows all entered values persisted exactly as saved And template names must be unique within the workspace; duplicate names are blocked with a clear error
Block circular dependencies between waves
Given a template where waves can be set to start relative to other waves with offsets When I introduce a dependency cycle (directly or indirectly) such as Structure → MEP → Interiors → Structure Then the system blocks the save and highlights the waves forming the cycle And an error message states "Circular dependency detected" and lists the path of the cycle And no partial changes are persisted
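The cycle check above can be sketched as a depth-first search that returns the offending path for the error message; the adjacency-map input shape is an assumption.

```typescript
// Detect a dependency cycle among waves and return its path, or null if acyclic.
function findCycle(deps: Record<string, string[]>): string[] | null {
  const state = new Map<string, "visiting" | "done">();
  const stack: string[] = [];
  const visit = (node: string): string[] | null => {
    if (state.get(node) === "done") return null;
    if (state.get(node) === "visiting") {
      return [...stack.slice(stack.indexOf(node)), node]; // the cycle path
    }
    state.set(node, "visiting");
    stack.push(node);
    for (const next of deps[node] ?? []) {
      const cycle = visit(next);
      if (cycle) return cycle;
    }
    stack.pop();
    state.set(node, "done");
    return null;
  };
  for (const node of Object.keys(deps)) {
    const cycle = visit(node);
    if (cycle) return cycle;
  }
  return null;
}
```

Running the check before persisting the template satisfies the "no partial changes" requirement: a non-null result blocks the save and names the waves forming the cycle.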
Enforce permitted overlap and relative start rules
Given Wave B depends on Wave A and Wave B has a permitted overlap of 25% When I set Wave B to start relative to Wave A with an offset that would overlap more than 25% of Wave A’s duration Then validation fails with an error indicating the allowed overlap and the calculated excess And saving is blocked until the overlap is within 0–25% And permitted overlap accepts only integer values from 0 to 100 inclusive; values outside the range are rejected with inline errors
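The overlap validation above can be sketched as follows, under the assumption that the offset is expressed relative to the predecessor's finish, so a negative offset overlaps the tail of Wave A.

```typescript
interface OverlapCheck { ok: boolean; overlapPct: number; excessPct: number; }

// Validate Wave B's offset against Wave A's duration and the permitted overlap %.
function checkOverlap(predecessorDays: number, offsetDays: number, permittedPct: number): OverlapCheck {
  const overlapDays = Math.max(0, -offsetDays); // only negative offsets overlap
  const overlapPct = Math.round((overlapDays / predecessorDays) * 100);
  const excessPct = Math.max(0, overlapPct - permittedPct); // reported in the error
  return { ok: excessPct === 0, overlapPct, excessPct };
}
```

The returned excessPct is what the validation error would surface ("allowed overlap and the calculated excess").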
Link waves to drawing sets and conversation threads
Given I edit a wave in the template When I link it to one or more drawing set IDs and conversation thread IDs available in PlanPulse Then the links are saved with the template and displayed on reopen And removing a link updates the template accordingly on save And invalid or deleted resource IDs are flagged with a warning and cannot be saved
Apply template to project with per-project overrides
Given a template exists at the workspace level When I apply it to Project X Then all waves are instantiated in Project X with the template’s defaults and links And I can override per-project fields (duration, fixed dates/offsets, permitted overlap, required roles, deliverables, approval gates) without modifying the source template And project-level overrides are tracked with a visual indicator and audit entry And the project schedule recalculates dependencies immediately upon override save
Workspace-level template versioning and cloning
Given I have an existing template version v1 When I edit and save changes with a change note Then a new immutable version v2 is created, showing author, timestamp, and note in version history And I can view or restore prior versions without altering their contents And when I clone the template, a new template is created with a new ID and a suffixed name (e.g., "Core Shell - v1 (Clone)") with all waves and settings copied
Validate durations and required disciplines against policy
Given workspace policy defines min/max wave durations (e.g., 1–180 days) and required disciplines (e.g., Structure, MEP) When I attempt to save a wave with duration outside the policy range Then save is blocked and an inline error specifies the allowed range And when required disciplines are missing from the template, save is blocked with a message listing the missing disciplines And entering valid durations and adding required disciplines allows the template to save successfully
Dependency Rule Engine
"As a project scheduler, I want to define and enforce dependencies between waves so that work starts only when prerequisites are satisfied and overlaps remain controlled."
Description

Implement a robust dependency system supporting Finish-to-Start, Start-to-Start, Finish-to-Finish, and Start-to-Finish relationships with configurable leads/lags (in days or percentages). Allow discipline-level constraints (e.g., Structure must precede MEP rough-in) and resource constraints (e.g., same reviewer cannot be double-booked across overlapping waves). Enforce rules during schedule creation and updates, surface conflicts in real time, and provide inline fixes (auto-shift, adjust overlap within limits). The engine must expose APIs for read/write so other PlanPulse modules (approvals, markups) can react to dependency changes.

Acceptance Criteria
All Dependency Types with Leads/Lags (Days and Percentages)
Rule: Engine supports dependency types Finish-to-Start, Start-to-Start, Finish-to-Finish, and Start-to-Finish.
Rule: Lead/lag can be specified as signed integer days or a signed percentage of predecessor duration; negative values denote lead (overlap), positive values denote lag (gap).
Given a predecessor duration of 10 days and a +20% lag When the dependency is applied Then the successor's offset equals 2 days beyond the base relationship point.
Given a Finish-to-Start dependency with a -30% lead on a 10-day predecessor When the dependency is applied Then the successor starts 3 days before the predecessor finishes, maintaining FS semantics.
And in all cases the stored dependency includes type, offsetUnit (days|percent), and offsetValue, and is persisted and retrievable via API.
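The lead/lag arithmetic in the worked examples above is straightforward; a sketch where `offset_unit`/`offset_value` mirror the stored fields named in the criterion, while the function names themselves are illustrative:

```python
def offset_days(pred_duration, offset_unit, offset_value):
    """Resolve a lead/lag to days. Negative = lead (overlap), positive = lag."""
    if offset_unit == "percent":
        return pred_duration * offset_value / 100
    return offset_value                      # offset_unit == "days"

def fs_successor_start(pred_finish_day, pred_duration, offset_unit, offset_value):
    """Finish-to-Start: successor start = predecessor finish + lead/lag."""
    return pred_finish_day + offset_days(pred_duration, offset_unit, offset_value)
```

With a 10-day predecessor finishing on day 10, a +20% lag yields a successor start of day 12, and a −30% lead yields day 7, matching the two Given/Then examples.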
Discipline-Level Constraints: Structure Must Precede MEP Rough-In
Given a configured discipline rule "Structure precedes MEP rough-in" When a user attempts to set the MEP rough-in wave to start before Structure finishes Then the engine blocks save and surfaces a real-time conflict indicating the violating waves and rule. When the user selects an inline fix Then the engine either adds an FS dependency or auto-shifts MEP to the earliest valid start that satisfies the rule And revalidates to confirm no remaining conflicts.
Resource Constraints Prevent Double-Booking Reviewers Across Overlapping Waves
Given reviewer "Alex" is assigned to Wave A and Wave B And Wave A and Wave B overlap in time by any amount When the second assignment is made or dates are edited to overlap Then the engine flags a resource conflict in real time (≤1 second) and blocks save. When the user chooses an inline fix Then the engine offers to auto-shift the later wave to the first non-overlapping slot, reassign the reviewer, or adjust dependency overlap within configured limits And applies the selected fix atomically and revalidates with no new conflicts introduced.
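The "overlap by any amount" test above reduces to a standard half-open interval intersection check per reviewer. A minimal sketch (the assignment tuple shape is an assumption for illustration):

```python
def overlapping_assignments(assignments):
    """Flag reviewers assigned to time-overlapping waves.

    assignments: list of (reviewer, wave, start_day, end_day) tuples,
    end exclusive. Returns a list of (reviewer, wave1, wave2) conflicts.
    """
    conflicts = []
    by_reviewer = {}
    for reviewer, wave, start, end in assignments:
        for other_wave, o_start, o_end in by_reviewer.get(reviewer, []):
            if start < o_end and o_start < end:    # intervals overlap by any amount
                conflicts.append((reviewer, other_wave, wave))
        by_reviewer.setdefault(reviewer, []).append((wave, start, end))
    return conflicts
```

Running this check on every assignment or date edit is what lets the engine surface the conflict in real time before the save commits.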
Real-Time Conflict Detection and Inline Fix Application
Given a schedule with active dependency, discipline, and resource rules When a user edits dates, adds/edits/deletes a dependency, or changes assignments (UI or API) Then the engine updates the conflict list and markers in real time (≤1 second), labeling each conflict with type, impacted wave IDs, and suggested fixes. When a suggested fix is applied from the inline options Then the schedule updates atomically, the engine recalculates dependencies, and conflicts resolve or are updated accordingly And the user can undo to restore the prior valid state.
Downstream Recalculation on Predecessor Slip
Given a chain of waves connected by any supported dependency types When a predecessor wave start or finish is delayed Then the engine recalculates all affected successors to maintain dependency semantics and uphold discipline and resource constraints. Then successors shift no earlier than the earliest valid timestamps that satisfy all rules; if an automatic fix cannot satisfy constraints, a conflict is raised with suggested resolutions. And the set of adjusted waves and new dates is available for retrieval via API for dependent modules to consume.
Read/Write APIs and Change Events for Dependencies
Given REST endpoints for dependency CRUD When a client creates, updates, or deletes a dependency with valid payload Then the engine persists the change, recalculates schedules, and returns the updated dependency and a conflicts summary in the response. And an event (dependencyChanged) is published containing affectedWaveIds, before/after values, and recalculated dates so other modules (approvals, markups) can react. When a payload is invalid or violates rules Then the API responds with a 4xx status and structured error details including ruleType and offending wave IDs.
Slip Propagation & Reforecasting
"As a project lead, I want schedule changes to automatically ripple through downstream waves so that the overall plan stays realistic without manual recalculation."
Description

Automatically recalculate downstream start/end dates when a wave slips or completes early, honoring dependency rules, overlap limits, and resource constraints. Provide an immediate reforecast of the full plan with visibility into variance versus baseline, highlight critical path impacts, and generate suggested mitigations (e.g., compress overlap to allowed max, add buffer, reassign reviewer). Support manual overrides with clear annotations and maintain a baseline history for audit. Trigger recalculation on status changes, approval outcomes, or duration edits and publish deltas to all affected stakeholders.

Acceptance Criteria
Reforecast on Status Change or Duration Edit
Given a plan with Staggered Waves and defined dependencies When a wave’s status is set to Delayed with a slip of X days Then the system recalculates all downstream waves within 5 seconds and updates their start/end dates accordingly
Given a plan with Staggered Waves and defined dependencies When a wave is marked Approved earlier than planned by X days Then the system advances eligible downstream waves’ start dates while honoring dependency lags and overlap limits
Given a plan with Staggered Waves When a wave’s duration is edited Then only directly and indirectly dependent waves are recalculated; unrelated waves remain unchanged
Given a reforecast is executed When calculations complete Then the plan version is incremented and a reforecast timestamp is stored
Respect Dependencies, Overlap Limits, and Resource Constraints
Given dependency rules (e.g., Structure must finish before MEP with minimum lag L) When a reforecast runs Then no downstream wave starts before all predecessors meet their dependency and lag requirements
Given an overlap limit O% between specific waves When a reforecast proposes concurrent work Then the overlap between those waves does not exceed O% and is rounded to whole days
Given a shared resource R with daily capacity C across multiple waves When the reforecast schedules overlapping work Then total allocation to R on any day does not exceed C; excess work is pushed according to wave priority rules
Given constraints cannot be satisfied simultaneously When the reforecast runs Then the system flags a scheduling conflict, identifies the offending rules and waves, and prevents publishing until resolved
Baseline Variance Display and Baseline History
Given an established baseline When a reforecast completes Then each affected wave displays variance vs baseline for start, end, and duration in days and percentage
Given a reforecast completes or a manual override is saved When the change is committed Then a forecast snapshot is recorded in baseline history with timestamp, triggering event, user, and plan version
Given baseline history entries exist When the user opens Baseline History Then an immutable list of snapshots is shown and any snapshot can be compared side‑by‑side with the current forecast
Given data retention policies When baseline history grows Then at least the last 50 snapshots are retained and can be exported to CSV
Critical Path Recalculation and Impact Highlighting
Given a plan with dependencies When a reforecast completes Then the critical path is recalculated using the longest‑path method and the overall plan end date is updated
Given critical path recalculation completes When the plan view is refreshed Then waves on the critical path are highlighted, show zero total float, and non‑critical waves show positive float values
Given the critical path differs from the prior version When the reforecast is published Then a summary banner lists added/removed critical waves and net change to plan end date in days
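The longest-path method and total float referenced above correspond to the classic CPM forward/backward pass. A simplified sketch restricted to Finish-to-Start links with no lags (the real engine must also handle SS/FF/SF relationships, leads/lags, and working calendars):

```python
def total_float(durations, preds):
    """Per-wave total float via a CPM forward/backward pass.

    durations: {wave: days}; preds: {wave: [predecessor waves]}.
    Waves with zero float form the critical path.
    """
    order, seen = [], set()

    def topo(w):                         # dependency-respecting ordering
        if w in seen:
            return
        seen.add(w)
        for p in preds.get(w, []):
            topo(p)
        order.append(w)

    for w in durations:
        topo(w)

    early_finish = {}
    for w in order:                      # forward pass: earliest finishes
        es = max((early_finish[p] for p in preds.get(w, [])), default=0)
        early_finish[w] = es + durations[w]

    project_end = max(early_finish.values())
    succs = {w: [] for w in durations}
    for w, ps in preds.items():
        for p in ps:
            succs[p].append(w)

    late_start = {}
    for w in reversed(order):            # backward pass: latest starts
        lf = min((late_start[s] for s in succs[w]), default=project_end)
        late_start[w] = lf - durations[w]

    return {w: late_start[w] - (early_finish[w] - durations[w]) for w in durations}
```

For example, with Structure (10d) preceding both MEP (5d) and Interiors (8d), Structure and Interiors carry zero float (critical) while MEP carries 3 days of positive float.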
Mitigation Suggestions with Application and Feasibility
Given a slip causes the plan end date to move later When a reforecast completes Then the system generates at least three mitigation suggestions including: compress overlaps to allowed maximums, reassign reviewers/resources where capacity exists, and insert buffers at risk handoffs
Given mitigation suggestions are listed When a suggestion is selected Then the suggestion shows projected change to plan end date, risks, and whether all constraints will be satisfied
Given a user applies a mitigation suggestion When the system recalculates Then the plan updates only if all dependency, overlap, and resource constraints are honored; otherwise the suggestion is disabled with an explanation
Manual Overrides with Annotations and Audit Trail
Given a user edits a wave’s system‑calculated start or end date When Save is clicked Then a reason annotation is mandatory; without it, the save is blocked with inline validation
Given a manual override is saved When the plan is reforecast Then downstream waves respect the override while still honoring dependency, overlap, and resource constraints
Given a manual override violates constraints When Save is clicked Then the system blocks the change and lists the violated rules
Given a manual override exists When the user selects Revert Override Then dates return to system‑calculated values and an audit entry with user, timestamp, and change description is created
Stakeholder Delta Publication and Notifications
Given a reforecast completes When changes affect one or more waves Then a delta summary is generated including old vs new dates, variance in days, critical path impact, and a link to mitigation suggestions
Given a delta summary is generated When the reforecast is published Then notifications are sent to affected stakeholders via in‑app and email within 60 seconds, batched to one message per plan change
Given stakeholders not associated with changed waves When the reforecast is published Then no notifications are sent to those stakeholders
Given notifications are dispatched When delivery completes Then each notification includes the plan version ID and deep link to the reforecast view and delivery status is logged
Auto Handoff & Smart Notifications
"As a discipline lead, I want the next wave to auto-start with all context and notifications so that my team can begin immediately without coordination delays."
Description

On wave completion or entry into a ready state, automatically hand off to the next dependent wave by assigning owners, generating checklists, linking relevant drawing versions, and opening the associated conversation thread. Send targeted, role-based notifications (in-app and email) with context (what changed, what’s due, attachments) and suppress noise with digesting and escalation rules. Include readiness checks (dependencies met, approvals obtained) before handoff, and log all handoffs in the project timeline for traceability.

Acceptance Criteria
Readiness Gate Blocks Handoff When Dependencies Unmet
Given a next-dependent wave exists and the current wave is marked Complete And one or more required dependencies or approvals are unmet When the system evaluates readiness for handoff Then the handoff is prevented and the next wave remains in Not Started state And the next wave is flagged Blocked with a list of unmet items And in-app alerts are sent only to the responsible roles for the unmet items And no owners are assigned, no checklist is generated, and no conversation is opened And a timeline entry is created with status Blocked and the unmet items recorded
Auto Handoff on Wave Completion Assigns Owners and Artifacts
Given all dependencies and approvals for Wave A are satisfied And Wave A is marked Complete When handoff is triggered Then Wave B (next dependent) is set to In Progress within 60 seconds And owners are auto-assigned per the wave template mapping And a task checklist is generated with all template items and due dates relative to Wave B start And the latest approved drawing versions from Wave A are linked to Wave B And the associated conversation thread for Wave B is opened and watchers include assigned owners
Auto Handoff on Ready State Triggers Without Manual Intervention
Given all readiness checks for Wave A are satisfied And Wave A enters Ready for Handoff state (without being Complete) When the system processes the state change Then all dependent waves marked as Start on Ready are initialized within 60 seconds And their owners, checklists, and linked drawings are created as per template And no manual user action is required to start the dependent waves
Role-Based Notifications Include Context and Deep Links
Given a handoff (completion- or ready-based) initializes Wave B When notifications are sent Then only users mapped to roles required for Wave B receive notifications via in-app and email And each notification includes: wave name, what changed since last notification, what is due next with due dates, linked drawing/version IDs, checklist link, and conversation thread deep link And recipients do not receive duplicate notifications for the same event And notifications are delivered within 60 seconds of the triggering event And email notifications render correctly and include deep links that open the correct context in PlanPulse
Noise Suppression Digest and Escalation Rules
Given multiple handoff-related events occur for the same wave within a 15-minute window When notifications are prepared Then the system sends a single digest summarizing all events to each recipient instead of multiple individual messages And identical events within the window are deduplicated And if any item becomes due in less than 24 hours or is overdue, an escalation notification is sent immediately to owners and the project lead regardless of the digest window And users who have opted out of a role do not receive digests for that role
Handoff Logged to Project Timeline for Traceability
Given a handoff is executed (successful or blocked) When the project timeline is viewed Then a single immutable entry exists per handoff attempt with timestamp, actor (system), from-wave and to-wave IDs, readiness check results, owners assigned, checklist ID, linked drawing version IDs, and notification summary (recipients and channels) And timeline entries are filterable by event type = Handoff and exportable as CSV And clicking the entry opens the associated Wave B conversation thread
Critical Path Visualization
"As a project manager, I want to see and share the critical path so that stakeholders understand which waves drive the project completion date."
Description

Provide an interactive timeline view that highlights the critical path across waves, displays float/slack, and differentiates overlapping versus sequential segments. Support zoom levels, filtering by discipline or assignee, and color-coded status (on track, at risk, slipped). Enable click-through to underlying drawings, markups, and conversation threads in PlanPulse. Include scenario toggles (e.g., apply max overlap, add weekend work) to preview impacts before committing changes, and allow exporting a read-only share link for client visibility.

Acceptance Criteria
Critical Path and Slack Visualization
Given a project with at least 5 waves and defined dependencies including overlaps When the timeline view loads Then the critical path is highlighted using the style indicated in the legend and is continuous across all dependent waves And each non‑critical wave displays its total float in days with one decimal place via inline badge and tooltip And overlapping segments are visually stacked with overlap connectors, while sequential segments show non‑overlapping connectors And if a wave’s duration is increased so its float becomes 0, it is shown as critical within 1 second of the change
Timeline Zoom and Time Scale Control
Given the timeline view is open When the user zooms in or out via toolbar controls or Ctrl/⌘+scroll Then the time scale switches smoothly between Month, Week, and Day levels without label overlap And the viewport center remains within ±50px of the pre‑zoom center And minimum zoom shows 1 day ≥ 50px width; maximum zoom shows ≥ 6 months in one screen And the selected zoom level persists for the user session
Discipline and Assignee Filtering
Given the timeline contains waves tagged with multiple disciplines and assignees When the user applies a filter for Discipline = “Structure” and Assignee = “Alex Kim” Then only matching waves and segments remain fully visible and non‑matching items are hidden And the critical path visualization remains based on the full schedule, with any hidden critical segments represented by dashed connectors and a tooltip “Hidden by filter” And clearing filters restores all items and the continuous critical path line
Status Color Coding and Legend
Given waves have planned and actual dates with computed float When the timeline renders Then each wave segment is color‑coded: On Track (float ≥ 2.0d), At Risk (0.0d ≤ float < 2.0d), Slipped (float < 0.0d or missed milestone) And a visible legend defines these statuses and styles And colors/patterns meet WCAG AA contrast (≥ 4.5:1) or include equivalent patterns for color‑blind accessibility And status updates within 1 second after any schedule change
Click‑Through to Drawings, Markups, and Conversations
Given a user clicks any wave segment or milestone on the timeline When the details panel opens Then it loads within 500 ms and lists linked drawings, markups, and conversation threads with counts And selecting a drawing opens it in PlanPulse with the relevant markup highlighted and a back control returning to the same timeline position and zoom And deep links (URL) reproduce the same selection and viewport when reloaded
Scenario Toggles and Impact Preview
Given scenario toggles are available (Apply Max Overlap, Add Weekend Work) When a user enables one or more toggles in Preview mode Then a preview timeline is shown with changes highlighted and a banner “Preview – not applied” And the system displays deltas for project end date, per‑wave dates, and critical path membership changes And calculations complete within 2 seconds for projects up to 100 waves And clicking Apply commits the changes and updates history; clicking Cancel reverts with no persisted changes
Read‑Only Client Share Link Export
Given the user selects Export → Share Read‑Only Link When a link is generated Then the link opens a view‑only timeline with legend, zoom, and filter controls but no edit or apply actions And the owner can set expiration (date/time) and optional passcode; both are enforced on access And revoking the link immediately prevents further access And the shared view loads within 3 seconds on a standard broadband connection and is responsive on mobile and desktop
Approval Gate Sync
"As a project lead, I want approvals to act as start gates for downstream work so that no team progresses on unapproved deliverables."
Description

Integrate one-click client approvals as formal gates within waves. Define which deliverables require approval, block dependent waves until approvals are granted, and optionally allow conditional starts with risk flags. Upon approval, freeze the referenced drawing versions, update the schedule status, and trigger auto handoff to the next wave. If rejected, route the wave back to revision sub-waves with adjusted estimates and notify owners. Record all approval events in the audit trail and expose their state in the critical path view.

Acceptance Criteria
Block Dependent Waves Until Approval
Given a wave has one-click client approval gates defined for specific deliverables And one or more waves are configured as dependent on that wave When a user attempts to start the dependent wave before the upstream approval is granted Then the Start action is disabled in the UI with the reason "Blocked by Approval Gate" And any API attempt to start returns HTTP 409 with error code "APPROVAL_GATE_BLOCKED" And the dependent wave’s planned start date cannot be earlier than the upstream approval timestamp And the critical path view shows a red gate icon on the dependency line indicating "Approval Required"
Allow Conditional Start With Risk Flag
Given project settings allow conditional starts on approval-gated dependencies And the dependency between Wave A (gated) and Wave B is marked as "Conditional Allowed" When a user starts Wave B before Wave A’s approval is granted Then Wave B moves to In Progress And a visible risk flag is attached to Wave B and all its tasks with label "Conditional Start—Approval Pending" And a risk entry is created in the risk register linking Wave B to Wave A’s pending approval And the dependency gate state is set to "Pending Approval (Conditional)" And the critical path view displays an amber gate icon for the conditional dependency And auto handoff from Wave B to its dependents remains disabled until Wave A is approved
Freeze Referenced Drawing Versions On Approval
Given a wave contains deliverables that reference specific drawing versions When the client issues one-click approval for the wave Then the referenced drawing versions are frozen and locked against edits within that wave’s context And any attempt to modify a frozen version returns HTTP 423 with error code "DRAWING_VERSION_LOCKED" And subsequent edits create a new version that is not part of the approved package And the wave record stores the exact approved drawing version IDs and timestamps
Auto Handoff And Schedule Update On Approval
Given a wave is configured with auto handoff enabled to a downstream wave And the wave has an approval gate on its deliverables When the client approves the wave Then the wave status changes to "Approved" within 5 seconds And the downstream wave transitions to "Ready" or "In Progress" per configuration And the schedule is recalculated and the critical path view updates to reflect the new start/finish dates And assigned owners of the downstream wave receive notifications (in-app and email if enabled) And an approval gate closure timestamp is recorded for SLA reporting
Rejected Route To Revision Sub-waves With Adjusted Estimates And Notifications
Given a wave with an approval gate is submitted to the client When the client rejects the wave and provides comments Then the wave status changes to "Rejected" And a "Revision" sub-wave (or set of sub-waves) is created using the configured revision template And effort and dates are recalculated using the defined estimate adjustment rule (e.g., +25% of prior effort) And all downstream dependent waves remain blocked And owners and watchers are notified with the rejection reason and revised target dates And the critical path view updates to show the inserted revision work on the path if applicable
Audit Trail Logging And Critical Path Gate State Visibility
Given approval gates are enabled for the project When an approval decision occurs (Approved, Rejected, or Resubmitted) Then an immutable audit event is recorded with actor, timestamp, decision, affected deliverables, comments, and attachments And the event is visible in the project audit trail and exportable as CSV And the critical path view displays the current gate state icon with tooltip details and links to the audit event And filters allow highlighting waves by gate state (Approved, Pending, Conditional, Rejected) And API endpoints expose gate state and audit event IDs for each wave

Verify QR

Embed a scannable QR and short link on every exported sheet and packet that resolves to a live verification page showing version hash, signer identity, timestamp, and approval status. Inspectors, clients, and consultants can instantly confirm authenticity and catch stale or altered PDFs—no logins or guesswork required.

Requirements

Per-Sheet QR and Short Link Embedding
"As an inspector, I want to scan a QR on any printed sheet to open its verification page so that I can confirm the document is current and authentic without logging in."
Description

Programmatically add a high-contrast QR code and human-readable short link to every exported sheet and compiled packet without requiring user intervention. The QR encodes a secure short URL that resolves to the live verification page. Placement must be configurable (e.g., title block footer/right margin), respect safe areas, and adapt to page sizes and orientations. Rendering must be vector-based where possible, maintain print fidelity at 300–600 DPI, include a quiet zone, and meet minimum physical size for scannability. The embed must not alter drawing geometry or break digital signatures applied post-export. Batch exports must apply consistent placement and content across all sheets. The visible short link text should include a brief “Verify at” label for manual entry. Re-exports with changed content must update the QR/link automatically to reference the new version token.

Acceptance Criteria
Auto-Embedding on Single-Sheet Export
Given a user exports a single sheet without modifying default QR settings When the export completes Then the resulting PDF contains exactly one high-contrast QR code and one visible short link on the sheet And no user prompts or manual steps were required to insert the QR/link And the QR encodes an HTTPS short URL containing a non-guessable version token And scanning the QR with a standard smartphone camera resolves with HTTP 200 to the live verification page for that exact sheet version And the visible short link text matches the QR target URL exactly
Configurable Placement and Safe-Area Compliance
Given placement is configured to Title Block Footer When exporting sheets of sizes A4 portrait, A3 landscape, Letter portrait, and Tabloid landscape Then the QR and short link render within the configured region, maintain orientation, and remain at least 6 mm from any page edge And the QR/link do not overlap user-defined safe areas or title block content And switching placement to Right Margin applies the same rules with consistent anchoring relative to the page edges And placement is consistent across preview and final export
Print Fidelity, Vector Rendering, and Scannability
Given default rendering settings When exporting at 300–600 DPI Then the QR renders as vector paths in the PDF; if vector is not possible, it embeds as a raster image at ≥600 DPI And the QR includes a quiet zone of ≥4 modules on all sides And the printed physical QR size is ≥18 mm x 18 mm at 300 DPI (scaling proportionally at higher DPI) And contrast ratio between QR foreground and background is ≥4.5:1 And test prints on common office printers at 300 DPI are scannable by default iOS and Android camera apps from 12–18 inches under normal lighting
Batch Export Consistency Across Packet
Given a batch export of N (N≥5) sheets to a compiled packet and to individual PDFs When the export completes Then every sheet contains a QR and short link with identical placement relative to page edges within ±0.5 mm tolerance And the short link prefix and “Verify at” label text are identical across all sheets And each sheet’s QR/link token is unique per sheet-version and does not duplicate another sheet’s token And no sheet is missing the QR/link and no sheet contains more than one QR/link
Version Token and Re-Export Update Behavior
Given a sheet is exported (version V1) and its QR/link token is recorded When content on the sheet changes and the sheet is re-exported Then the new QR/link encodes a new version token (V2) different from V1 without requiring user action And scanning the old V1 QR continues to resolve to the V1 verification page, while scanning the V2 QR resolves to V2 And when the sheet is re-exported with no content changes, the version token remains the same as the prior export
Non-Intrusive to Drawing Geometry and Digital Signatures
Given the same sheet exported once with QR embedding enabled and once with embedding disabled When comparing vector geometry and rasterized output (excluding the QR/link overlay) Then the drawing geometry, coordinates, and linework extents are identical between files And the QR/link are added in their own overlay layer/group without modifying existing page content streams And when a third-party digital signature is applied to the exported PDF post-export, the signature validates as unmodified on reopen in Acrobat and other common PDF viewers
Human-Readable Short Link and Secure Resolution
Given a sheet is exported When inspecting the visible text Then the short link is prefixed with the label “Verify at ” and uses only lowercase alphanumerics and hyphens, with total visible length ≤60 characters And entering the printed short link manually in a browser resolves over HTTPS to the same verification page as the QR without requiring login And short links with invalid or expired tokens respond with HTTP 404/410 and do not reveal any personally identifiable information
Immutable Version Hashing
"As a consultant, I want to see a unique version hash for a sheet so that I can verify it hasn’t been altered since approval."
Description

On export, compute a cryptographic, deterministic content hash (e.g., SHA-256) of the finalized artifact at both sheet and packet levels and store it as an immutable property of the version. Hashing must occur after all render steps (including QR/link placement) to reflect the exact distributed binary. Persist the hash with associated metadata (project, sheet ID, export timestamp, exporter version) and expose it to the verification page and audit logs. Any content change must generate a new version and hash; historical hashes remain read-only. Provide internal APIs to retrieve and validate hashes by token, and safeguards to prevent accidental re-use of tokens across different hashes. Support integrity checks to detect tampering of downloaded PDFs by recomputing and comparing the hash server-side.

Acceptance Criteria
Post-Render Hash Computation on Export
Given a user exports a sheet or packet, When all render steps complete including QR and short-link placement, Then compute a SHA-256 hash of the exact final binary and associate it with the created version record.
Given the same exported binary is hashed multiple times across processes/environments, When SHA-256 is computed, Then the resulting hash value is identical each time.
Given a version record is persisted, When storing metadata, Then save hash value, algorithm (SHA-256), project ID, sheet/packet ID, export timestamp (UTC ISO-8601), and exporter version.
Given a completed export, When retrieving the version via internal API, Then the stored hash and metadata are returned and correspond to the exported artifact.
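The post-render hashing step described here might look like the following sketch; the field names are illustrative, not the real schema, but SHA-256 via the standard library is deterministic across processes and environments as the criterion requires:

```python
import hashlib
from datetime import datetime, timezone

def hash_export(final_pdf_bytes, project_id, sheet_id, exporter_version):
    """Compute the post-render content hash and its version metadata.

    Must run on the exact distributed binary, i.e. after QR/short-link
    placement, so the hash reflects what recipients actually hold.
    """
    digest = hashlib.sha256(final_pdf_bytes).hexdigest()
    return {
        "algorithm": "SHA-256",
        "hash": digest,                                     # 64 hex chars
        "projectId": project_id,
        "sheetId": sheet_id,
        "exportedAt": datetime.now(timezone.utc).isoformat(),  # UTC ISO-8601
        "exporterVersion": exporter_version,
    }
```

Hashing the bytes twice always yields the same digest, while any one-byte change to the binary yields a different one — which is exactly the version-bump behavior specified below.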
Immutable Hash Storage and Audit Exposure
Given a version exists with a stored hash, When attempting to update or overwrite the hash value or algorithm, Then the operation is rejected and the original value remains unchanged. Given a verification page is requested for a valid token, When the page loads, Then it displays the stored hash (value and algorithm), signer identity, approval status, and export timestamp as persisted. Given an export completes, When audit logs are recorded, Then an event includes version ID, hash value, algorithm, exporter identity, and timestamp, and the event is immutable/read-only. Given historical versions exist, When accessed read-only, Then their hashes and metadata remain retrievable without modification capabilities.
Version Bump on Any Content Change
Given any change to exported content (e.g., drawing edits, markups, embedded metadata, QR target URL), When re-exporting, Then a new version is created and a new SHA-256 hash is computed and stored. Given the exported binary differs by at least 1 byte from the prior version, When hashing, Then the new hash value differs from the previous version's hash. Given the exported binary is byte-identical to a prior version, When hashing, Then the computed hash equals the prior version's hash. Given a request attempts to reuse a previous version ID with different content, When processed, Then the request is rejected and no overwrite occurs.
Server-Side Integrity Check on Verification
Given a verification request provides a token and a PDF to validate, When the server recomputes the SHA-256 of the supplied binary, Then it compares the result to the stored hash for the token and returns a Match or Mismatch status. Given the recomputed hash matches the stored hash, When the verification page responds, Then it clearly indicates the document is authentic and shows signer identity, approval status, and timestamp. Given the recomputed hash does not match the stored hash, When the verification page responds, Then it clearly indicates the document is stale/tampered and shows the authoritative version's hash and current approval status. Given the supplied PDF is unreadable or corrupted, When validation runs, Then the response indicates a validation error and does not report a Match.
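The server-side comparison in the criteria above could be sketched as follows (the status strings are illustrative):

```python
import hashlib

def verify_uploaded_pdf(supplied_bytes: bytes, stored_hash: str) -> str:
    """Recompute SHA-256 server-side and compare against the stored hash.

    Returns 'match', 'mismatch', or 'error' — an unreadable upload is a
    validation error and must never be reported as a match.
    """
    if not supplied_bytes:
        return "error"  # unreadable/corrupted upload
    recomputed = hashlib.sha256(supplied_bytes).hexdigest()
    return "match" if recomputed == stored_hash.lower() else "mismatch"
```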
Internal Hash Retrieval and Validation APIs
Given a valid version token, When calling the hash retrieval endpoint, Then a 200 response returns hash value, algorithm, project ID, sheet/packet ID, export timestamp, and exporter version. Given an invalid or unknown token, When calling the hash retrieval or validation endpoints, Then a 404 response is returned. Given a valid token and a supplied hash, When calling the validation endpoint, Then a 200 response returns a boolean indicating whether the supplied hash matches the stored hash. Given excessive requests to internal endpoints, When rate limits are exceeded, Then a 429 response is returned and no hash data is leaked.
Token Collision and Reuse Protection
Given a new version is created, When generating a token, Then the token is unique across all versions and not previously bound to a different hash. Given a request attempts to bind an existing token to a different hash, When processed, Then the operation is rejected with a conflict and is audit logged. Given an idempotent retry attempts to bind the same token to the same hash, When processed, Then it succeeds without creating duplicate records. Given high-volume token generation, When statistically analyzed, Then observed collisions are zero and the designed collision probability is below 2^-64.
Hashing Performance and Reliability
Given a sheet PDF up to 50 MB, When hashing after render completes, Then SHA-256 computation finishes within 2 seconds at p95 on production hardware. Given a packet PDF up to 500 MB, When hashing after render completes, Then SHA-256 computation finishes within 10 seconds at p95 on production hardware. Given a transient hashing error occurs, When retried up to 3 times with exponential backoff, Then either a single hash is persisted exactly once or the export is marked failed without partial persistence. Given 100 concurrent exports, When hashing and persisting, Then each version stores the correct hash and metadata with no cross-request leakage or misassociation.
Public Verification Page
"As a client, I want a clear verification page from a QR scan so that I can instantly confirm the approval status and who signed it."
Description

Serve a public, no-login verification page for each version token that loads quickly and is mobile-friendly. Display project name, sheet identifier, version hash, approval status (Approved, Pending, Revoked, Superseded), signer identity (name/organization), and approval timestamp in local and ISO formats. Provide a prominent stale/superseded banner with a link to the latest approved version when applicable, without auto-redirecting. Do not expose confidential drawing content; show only verification metadata. Implement clear visual status indicators, accessibility conformant to WCAG 2.1 AA, noindex/nofollow meta tags, structured error states for invalid/expired tokens, and a print-friendly layout. Return appropriate HTTP codes (404 invalid token, 410 revoked). Target performance of <1.5s p95 TTFB+render on 4G. Include tenant branding elements and localization for dates and status labels.

Acceptance Criteria
Public Access Metadata-Only View
- For a valid version token URL, the page returns HTTP 200 without any login or session requirement.
- The page displays: project name, sheet identifier, version hash, approval status, signer name, signer organization, and approval timestamp in both local time and ISO 8601 (UTC).
- No drawing content (images, PDFs, model previews) is rendered or fetched; network logs show no requests to protected drawing assets.
- No download links to confidential drawing content are present on the page.
Status Representation and Localization
- Approval status is one of: Approved, Pending, Revoked, Superseded, matching the backend state for the token.
- A visual indicator (icon/color) reflects the current status and includes an accessible name; color contrast meets WCAG AA (>=4.5:1 for text).
- Timestamp shows both localized format (per user/tenant locale) and ISO 8601 UTC simultaneously, each clearly labeled; local time includes timezone abbreviation.
- Status labels and static text are localized for supported locales (e.g., en, es, fr, de) with English fallback.
- Signer identity shows name and organization; if organization is missing, only the name is shown without placeholder text.
Superseded/Stale Banner With Link (No Auto-Redirect)
- For Superseded versions, a prominent top-of-page banner labeled "Superseded" is displayed.
- The banner includes a link labeled "View latest approved version" that navigates to the latest approved verification page URL when one exists.
- No automatic redirect occurs on load (no 3xx responses, meta refresh, or JavaScript navigation) to the latest version.
- For versions that are not the latest Approved (e.g., Pending/Approved with a newer Approved), a "Stale" banner is shown with the same link behavior; the latest Approved version shows no stale banner.
- Banner is keyboard-focusable, screen-reader announced (aria-live=polite), and visible without overflow at 360px viewport width.
4G Performance Budget (<1.5s p95 TTFB+Render)
- Under emulated 4G (400ms RTT, 1.6Mbps down, 750Kbps up) on a mid-tier mobile device, the 95th percentile of (TTFB + First Contentful Paint) is <= 1500 ms across at least 30 cold-load runs.
- TTFB p95 <= 500 ms and FCP p95 <= 1000 ms on the same test runs.
- Measurements are captured via WebPageTest or Lighthouse throttling with results recorded in CI.
WCAG 2.1 AA Accessibility Compliance
- Automated axe-core scan reports zero critical and serious violations on desktop and mobile breakpoints.
- All interactive elements are fully keyboard accessible with visible focus; tab order matches visual order; a skip-to-content link is present.
- Semantic landmarks (header, main, footer) are used; all controls/images/icons have accessible names; the page has a descriptive title and lang attribute.
- Text and non-text contrast meet WCAG AA (text >= 4.5:1; non-text >= 3:1); status colors meet required contrast.
- Screen readers announce status/banners via aria-live regions; page is operable without a mouse.
Structured Error States and HTTP Codes
- Invalid token requests return HTTP 404 and render a structured error page labeled "Invalid verification link" with no project/sheet metadata displayed and a support/contact link.
- Revoked token requests return HTTP 410 and render a structured error page labeled "Revoked" with no project/sheet metadata displayed; revocation timestamp shown if available.
- Expired token requests return a structured "Expired" error page with an HTTP 4xx status; no project/sheet metadata displayed; includes guidance to contact the issuer.
- Error pages do not render or request any drawing content and maintain consistent branding and accessibility with the main page.
Mobile/Print-Friendly Layout, Robots, and Tenant Branding
- Responsive layout has no horizontal scroll at widths down to 320px; tap targets are >= 44px; viewport meta tag is set.
- Print stylesheet produces a clean, single-column layout containing all verification metadata and excluding navigation/interactive elements; no content is clipped on A4/Letter.
- Page sets <meta name="robots" content="noindex, nofollow"> and the X-Robots-Tag HTTP header to "noindex, nofollow".
- Tenant branding (logo/colors/footer) is applied per the token's tenant; no cross-tenant assets are loaded; branding elements meet contrast requirements.
Short-Link Service and Token Security
"As a project lead, I want secure short links behind the QR so that verification works reliably without exposing sensitive data."
Description

Provide a first-party short-link service that issues collision-resistant, non-sequential tokens (10–16 URL-safe chars) mapped to version records. Enforce HTTPS with HSTS, prevent open redirects, and ensure tokens contain no PII. Implement rate limiting, basic bot mitigation, and abuse monitoring. Tokens should be durable for the life of the version unless explicitly revoked; revocations must immediately invalidate resolution and return 410. Support UTM/scan parameters for analytics without affecting token resolution. Provide service health metrics and alerting, and ensure high availability with graceful degradation (e.g., cached resolution). Maintain audit logs of resolutions without storing sensitive user data.

Acceptance Criteria
Token Issuance and Mapping to Version Records
Given a request to create a short link for a specific version record When the service generates a token Then the token length is between 10 and 16 characters inclusive And every character is URL-safe (A–Z, a–z, 0–9, -, _) And the token contains no PII or internal identifiers (e.g., project ID, version ID, user ID, email) And in a batch generation test of 1,000,000 tokens, zero collisions occur And in a sequential sample of 10,000 tokens, fewer than 1% share a common prefix longer than 4 characters And the token resolves to the canonical verification page for the mapped version record with HTTP 200
HTTPS Enforcement, HSTS, and Redirect Safety
Given any HTTP (non-TLS) request to the short-link domain When the request is received Then it is redirected with HTTP 301 to the equivalent HTTPS URL And the HTTPS response includes Strict-Transport-Security with max-age >= 15552000 and includeSubDomains And TLS version is >= 1.2 and weak ciphers are disabled Given a resolve request containing redirect-like parameters (e.g., redirect, next, url) When the request is processed Then the service does not perform open redirects and only navigates to the internal verification page for the token
Rate Limiting, Bot Mitigation, and Abuse Monitoring
Given a single client IP issues more than 120 resolve requests within 60 seconds (burst up to 240) When the threshold is exceeded Then subsequent requests receive HTTP 429 with a Retry-After header And legitimate allowlisted monitors/scanners are exempt per policy Given an IP makes more than 5 invalid-token requests within 1 minute When additional invalid requests are made Then the service applies progressive backoff (e.g., 2s delay) or lightweight challenge for 10 minutes And counters for rate_limited_total and invalid_token_total are incremented And an abuse alert is triggered when more than 20 IPs exceed 500 requests/min each for 5 consecutive minutes
Token Durability and Immediate Revocation to 410
Given an active token mapped to a version record When a resolve request is made at any time prior to revocation Then the service returns HTTP 200 and the correct verification payload Given the token is explicitly revoked by an authorized actor When any resolve request is made after revocation Then the service returns HTTP 410 Gone within 5 seconds globally And all edge and application caches are purged for that token And the revoked token cannot be reactivated; a new issuance creates a distinct token
Support for UTM and Scan Parameters Without Affecting Resolution
Given a valid short link token with additional query parameters (utm_* and/or scan=*) When the URL is resolved Then the destination and status are identical to resolving the bare token And unknown query parameters are ignored for token matching And analytics attributes record UTM/scan parameters without altering resolution outcome or token identity
High Availability with Graceful Degradation via Cached Resolution
Given a primary datastore outage When resolving previously seen active tokens Then at least 99% of requests are served from cache with HTTP 200 and p95 latency <= 1s And unknown or unseen tokens during the outage return HTTP 503 with a Retry-After header (rather than a bare 5xx with no retry guidance) And cache TTL for active mappings is >= 24 hours and revocations purge cache within 5 seconds And monthly resolution availability meets or exceeds 99.9%
Observability: Metrics, Alerting, and Privacy-Safe Audit Logs
Given the service is running When the /metrics endpoint is scraped Then Prometheus metrics are exposed including counters (resolutions_total, resolutions_200_total, resolutions_410_total, invalid_token_total, rate_limited_total) and latency histograms And alerts page on-call when 5xx rate > 1% for 5 minutes or p95 latency > 500ms for 5 minutes or consecutive DNS/health check failures > 3 And each resolution event is audit-logged with timestamp, salted hash of token, outcome code, truncated IP (/24 for IPv4, /48 for IPv6), hashed user agent, and referrer domain only And audit logs contain no raw tokens or PII, are access-controlled, retained for 90 days, and are tamper-evident via daily hash chaining
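The privacy-safe audit-log shape above (salted token hash, truncated IP) could be assembled as follows (field names are illustrative):

```python
import hashlib
import ipaddress

def truncate_ip(ip: str) -> str:
    """Truncate to /24 for IPv4 and /48 for IPv6 before logging."""
    addr = ipaddress.ip_address(ip)
    prefix = 24 if addr.version == 4 else 48
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))

def audit_record(token: str, outcome: int, ip: str, salt: bytes) -> dict:
    # Raw tokens never reach the log; only a salted hash does, so a leaked
    # log cannot be replayed against the resolution endpoint.
    return {
        "token_hash": hashlib.sha256(salt + token.encode()).hexdigest(),
        "outcome": outcome,
        "client_net": truncate_ip(ip),
    }
```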
Approval and Signer Identity Sync
"As an inspector, I want to know who approved the document and when so that I can trust its authorization."
Description

Integrate the verification flow with PlanPulse’s approval workflow to surface signer identity and approval timestamps tied to the exact version hash. Support multiple signers, roles, and sequential or parallel approval paths. Pull signer display names and organization from internal profiles or external e-sign providers, masking emails by default. Reflect real-time status changes (Pending, Approved, Revoked) on the verification page and record a tamper-evident audit trail linking approvals to the version hash. Respect privacy settings to limit what is shown publicly while retaining sufficient attribution for inspectors. Provide admin tools to correct attribution errors without altering the underlying version hash.

Acceptance Criteria
Public Verification Page Shows Signers, Roles, and Approval Timestamps by Version Hash
Given a valid QR or short link containing a version hash When the link is opened without authentication Then the page responds 200 and renders within 2 seconds at p95 And it displays the exact version hash and overall status (Pending/Approved/Revoked) And for each required signer it shows role, display name, organization, and approval timestamp (ISO 8601 with timezone) if approved And emails are masked by default (local-part first character + "***" with full domain visible)
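The default email-masking rule (first character of the local part + "***", full domain kept) reads directly as a small helper — a sketch with a hypothetical function name:

```python
def mask_email(email: str) -> str:
    """Mask the local part: first character + '***'; the domain stays visible.

    e.g. 'jane.doe@studio.example' -> 'j***@studio.example'
    """
    local, _, domain = email.partition("@")
    if not local or not domain:
        return "***"  # malformed input: show nothing identifiable
    return f"{local[0]}***@{domain}"
```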
Multiple Signers with Roles and Sequential/Parallel Approval Paths
Given an approval flow configured with N required signers and roles When the flow mode is sequential Then signers beyond the current step cannot approve until the current step is Approved And overall status becomes Approved only when all steps are Approved When the flow mode is parallel Then any required signer can approve in any order And overall status becomes Approved only when all required signers are Approved And the verification page lists each signer with individual status (Pending/Approved/Revoked) and role
Real-Time Status Reflection on Verification Page
Given an open verification page for version hash H When any signer approves or revokes via PlanPulse or an external e-sign provider Then the page reflects the new status within 5 seconds without manual refresh And the page shows a last-updated timestamp in ISO 8601 And the system prevents showing statuses older than 60 seconds without an explicit "Out of date" notice
Tamper-Evident Audit Trail Linked to Version Hash
Given version hash H When an approval, revocation, or attribution correction occurs Then an append-only audit entry is created with: event type, H, signer unique ID or external provider ID, role, display name snapshot, organization snapshot, UTC timestamp, and event checksum And the verification page exposes read-only audit summaries (event type, role, display name snapshot, organization, timestamp) And attempts to modify existing audit entries are rejected and a new correction entry is appended referencing the original entry ID
Privacy Controls for Public Verification Page
Given project privacy setting is "Public identity details: Limited" When the verification page renders Then only role and organization are shown; display names are hidden And emails remain masked in all modes; full emails are never shown publicly Given privacy setting is "Public identity details: Full" Then role, display name, and organization are shown; emails remain masked And role and organization are always visible to provide sufficient attribution for inspectors
Admin Correction of Signer Attribution Without Changing Version Hash
Given an admin corrects a signer’s display name or organization for version hash H When the correction is saved Then the verification page reflects the corrected values within 5 seconds And the version hash H remains unchanged And an audit entry of type "Correction" is appended with admin user ID, UTC timestamp, old value, and new value And prior values remain visible in audit history
External E-Sign Provider Identity and Status Sync
Given a signer completes approval via an integrated e-sign provider When the provider webhook/callback is received Then the signer’s identity is mapped using provider user ID or verified email to an internal profile And the verification page shows "Verified via <Provider>" for that signer and displays role, display name, and organization from the mapped profile And status and timestamp appear within 60 seconds of provider completion And if the provider is unreachable for more than 5 minutes, the page shows last known status with a "Source unavailable" notice while retrying And emails remain masked on the public page
Supersession Detection and Stale Warning
"As a consultant, I want stale documents to be clearly flagged so that I don’t work from outdated information."
Description

Automatically determine when a newer approved version exists for the same sheet or packet and mark older versions as Superseded. The verification page must prominently warn users when viewing a superseded version and provide a link to the latest approved one. Supersession logic must account for branches (e.g., design options), unapproved drafts, and partial packet re-issues. Expose APIs to query supersession state and emit events when supersession occurs. Optionally notify project owners when superseded versions are scanned. Do not auto-redirect to preserve chain-of-custody and historical accuracy.

Acceptance Criteria
Superseded Warning Banner and Latest Link on Verification Page
Given a QR/short link to an approved sheet/packet version that has been superseded by a newer approved version within the same branch or via an explicit cross-branch supersession mapping, When the verification page is loaded, Then a prominent "Superseded" warning banner is displayed above the fold with role="alert" and color contrast meeting WCAG AA, And a link labeled "View latest approved version" points to the latest approved verification page URL, And the page does not auto-redirect.
Branch-Aware Supersession Logic
Given two approved versions of the same sheet exist on different branches (e.g., Option A and Option B) with no merge or explicit supersession mapping, When evaluating supersession for the earlier version within its branch, Then it is not marked Superseded by versions from other branches. Given an approved version is explicitly merged or mapped to supersede versions in another branch, When the merge/mapping is committed, Then the mapped prior versions are marked Superseded.
Drafts Must Not Supersede Approved Versions
Given a newer unapproved draft exists for a sheet or packet, When evaluating supersession for the latest approved version, Then the approved version remains Not Superseded. Given the draft becomes approved, When approval is recorded, Then supersession is recalculated and prior approved versions are marked Superseded accordingly.
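The branch and draft rules above can be captured in a pure evaluation function (a sketch with hypothetical types; the real model would also cover packets and partial re-issues):

```python
from dataclasses import dataclass

@dataclass
class Version:
    version_id: str
    branch: str
    approved: bool
    approved_at: float  # epoch seconds; 0.0 if unapproved

def is_superseded(version: Version, others: list[Version],
                  cross_branch_map: set = frozenset()) -> bool:
    """A version is superseded only by a *later approved* version on the same
    branch, or via an explicit cross-branch mapping — never by drafts."""
    if not version.approved:
        return False  # drafts have no supersession state of their own
    for other in others:
        if not other.approved:
            continue  # unapproved drafts never supersede anything
        same_branch = other.branch == version.branch
        mapped = (other.version_id, version.version_id) in cross_branch_map
        if (same_branch and other.approved_at > version.approved_at) or mapped:
            return True
    return False
```

Keeping this as a pure function over version records makes the branch rule (design options never implicitly supersede each other) and the draft rule independently testable.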
Partial Packet Re-issue Handling on Verification Page
Given a packet v1 is approved and only a subset of its sheets are re-issued and approved in packet v2, When the verification page for packet v1 is loaded, Then the page displays a "Partially Superseded" warning enumerating which sheets are superseded and which remain current, And provides per-sheet links to the latest approved versions for the superseded sheets, And does not auto-redirect. Given a QR/short link to a superseded individual sheet from v1, When scanned, Then the sheet’s page shows "Superseded" and links to that sheet’s latest approved version.
Supersession State API
Given a GET /api/v1/supersession/{resourceId} request with a valid resourceId for a sheet or packet version, When called, Then the API responds 200 with JSON containing: resourceType, resourceId, branch, isSuperseded (boolean), supersededBy (id|null), latestApprovedUrl (string|null), isPartial (boolean), supersededChildren (array of ids), evaluatedAt (ISO8601). Given an unknown resourceId, When called, Then the API responds 404. Given more than 60 requests per minute per IP, When called, Then the API responds 429 with a Retry-After header.
Supersession Event Emission
Given a new version is approved that supersedes one or more prior versions, When the approval is recorded, Then an event "supersession.created" is emitted within 5 seconds containing newVersionId, supersededVersionIds, scope (sheet|packet), branch, timestamp (ISO8601), and idempotencyKey. Given a transient delivery failure to a registered webhook, When the event is emitted, Then the system retries with exponential backoff for at least 24 hours or until acknowledged.
Optional Notifications on Scanning Superseded Versions
Given project owners have enabled "Notify on superseded scans" for a project, When a QR/short link for a superseded version is scanned, Then the system sends a notification within 2 minutes via configured channels (email and/or in-app) including scannedVersionId, latestApprovedUrl, project name, and redacted scanner metadata (e.g., IP truncated to /24), And the verification page remains publicly viewable without login and without auto-redirect. Given multiple scans of the same superseded version occur within a 10-minute window, When notifications are generated, Then they are deduplicated to at most one notification per version per project per window.
Print-Quality and Scan Robustness
"As a field inspector, I need QR codes that scan reliably on weathered prints so that I can verify documents on-site."
Description

Ensure QR codes remain scannable across common architectural page sizes (A3–A0, ANSI B–E) and under degraded conditions (B/W copies, low light, creases). Enforce minimum physical size (≥18 mm), error correction level (≥M), quiet zone (≥4 modules), and high contrast rendering. Validate output at 300–600 DPI with automated preflight checks that score scannability and block export if thresholds are not met. Auto-adjust placement to avoid title block collisions and allow a human-readable short link fallback. Provide internal test matrices and device validation across popular scanner apps and camera hardware.

Acceptance Criteria
Export QR Rendering Standards Enforcement
Given a sheet or packet export is initiated When the QR is generated and rendered into the output Then the QR physical size shall be ≥ 18 mm on paper at target scale And the QR error correction level shall be ≥ M And the quiet zone shall be ≥ 4 modules on all sides And the QR foreground/background contrast ratio shall be ≥ 7:1 And the QR shall be embedded as vector or rasterized at an effective resolution within 300–600 DPI at print scale And preflight shall verify each metric and report measured values And if any metric fails, the export shall be blocked with an error specifying which metric(s) to fix
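A preflight gate over these metrics might look like the following sketch (thresholds from the criteria above; the 0.4 mm module-size floor is a hypothetical diagnostic, not part of the spec):

```python
def qr_preflight(module_count: int, size_mm: float, quiet_zone_modules: int,
                 contrast_ratio: float, effective_dpi: float) -> list[str]:
    """Return the list of failed print-quality checks (empty list = pass)."""
    failures = []
    if size_mm < 18.0:
        failures.append(f"size {size_mm} mm < 18 mm minimum")
    if quiet_zone_modules < 4:
        failures.append(f"quiet zone {quiet_zone_modules} modules < 4")
    if contrast_ratio < 7.0:
        failures.append(f"contrast {contrast_ratio}:1 < 7:1")
    if not 300 <= effective_dpi <= 600:
        failures.append(f"effective DPI {effective_dpi} outside 300-600")
    # Module size in mm is a useful diagnostic for the preflight report;
    # the 0.4 mm floor below is an illustrative assumption.
    module_mm = size_mm / module_count
    if module_mm < 0.4:
        failures.append(f"module size {module_mm:.2f} mm too small")
    return failures
```

Returning the full failure list (rather than failing fast) matches the requirement that a blocked export names every metric the user must fix.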
Automated Scannability Preflight Score and Hard Fail
Given a composed sheet with QR ready for export When preflight runs scannability analysis including simulated print/recompress and minor skew/blur variations Then a scannability score between 0.00–1.00 shall be computed And the export shall pass only if the score is ≥ 0.90 And the preflight report shall include score, effective module size (mm), quiet zone (modules), contrast ratio, and DPI And if the score is < 0.90, the export shall be blocked with remediation guidance
Degraded Conditions Robustness (B/W, Low Light, Creases)
Given the QR is printed on A3 and A1 sheets at 300 DPI and 600 DPI When tested across these conditions: grayscale photocopy, ambient light 50–100 lux, and one crease crossing non-finder areas occluding ≤ 10% of modules Then each device/app combination in the validation matrix shall achieve ≥ 95% successful scans over 20 attempts per condition And median time-to-decode shall be ≤ 2.5 seconds per attempt And any combination below thresholds shall be flagged as a blocker
Auto-Placement and Collision Avoidance
Given a sheet template with a title block and existing content When auto-placement of the QR and short link is computed for page sizes A3–A0 and ANSI B–E Then the QR shall be placed in a non-overlapping region with ≥ 6 mm clearance from any graphic/text and ≥ 10 mm from sheet edges And the QR shall not overlap the title block or reserved zones (e.g., hole-punch margin) And the QR physical size and quiet zone requirements shall be preserved without scaling below thresholds And if no safe region exists, the export shall be blocked with a clear message and suggested placements
Short Link Fallback Legibility and Equivalence
Given a QR is included on the sheet When the export is generated Then a human-readable short link shall be printed within 15 mm of the QR And the short link shall be ≤ 25 characters, font size ≥ 9 pt on A3 (scaled proportionally for other sizes), and contrast ratio ≥ 7:1 And the short link shall resolve to the same verification page (version hash, signer identity, timestamp, approval status) as the QR target And preflight shall fail if the short link is missing, illegible, or resolves incorrectly
Device and App Validation Matrix Coverage
Given the internal validation matrix includes at minimum: iOS 15–18 on iPhone 12–15 (incl. Pro/Max) and Android 12–15 on Google Pixel 6–9 and Samsung Galaxy S21–S24; apps: native Camera, Google Lens, Samsung Camera, Adobe Scan When scanning printed A3 and A1 sheets under 200–500 lux at 30–50 cm distance, handheld Then ≥ 95% of device/app combinations shall successfully decode on first attempt within ≤ 2.0 seconds And all combinations shall succeed within ≤ 2 attempts And results shall be recorded with pass/fail per combination; any failure is a release blocker

Timeproof Notary

Attach an independent trusted timestamp to each approval and export using a standards‑based time‑stamping authority. This creates an incontrovertible record of when a decision was made and what exact file was approved, reducing disputes and meeting strict compliance or contract requirements.

Requirements

RFC 3161 TSA Integration
"As a project lead, I want PlanPulse to obtain trusted RFC 3161 timestamps from approved TSAs so that every approval has a legally defensible, third-party timestamp."
Description

Implement secure integration with one or more standards-based Time Stamping Authorities using RFC 3161, with configurable endpoints, credentials, hash algorithms (SHA-256 or stronger), nonce usage, policy OIDs, and TSA certificate chain validation including OCSP/CRL checks and network timeouts. Support automatic failover between multiple TSAs and exponential backoff retries to ensure high availability. Enforce TLS 1.2+ with certificate validation against trusted roots managed by PlanPulse. Provide admin UI and API to manage TSA profiles at workspace or project scope.

Acceptance Criteria
Admin UI: Create and Validate TSA Profile
Given I am a workspace admin, When I open TSA Settings and choose Create Profile, Then I can set: profile name (required), RFC 3161 HTTPS endpoint URL (required), credentials (if required by the TSA), hash algorithm (SHA-256, SHA-384, or SHA-512), policy OID (optional), nonce usage (toggle, default On), and request timeout in seconds (1–60, default 15). Given any invalid input (e.g., non-HTTPS URL, unsupported hash, missing required fields), When I attempt to Save, Then the save is blocked and field-level errors are shown. Given valid inputs, When I click Test Connection, Then a live RFC 3161 test request is sent and the UI displays Pass with round-trip latency on success or a descriptive error on failure. Given the test passed, When I save the profile, Then it is persisted, visible in the list, and can be set as the workspace default.
API: Workspace and Project-Scoped TSA Profiles
Given I have admin permissions, When I call POST /api/tsa/profiles with a valid body, Then a TSA profile is created and 201 with its ID and scope is returned. Given a workspace default profile exists, When a project has no override, Then timestamping operations in that project use the workspace default profile. Given I create a project-scoped profile and set it as the project default, When any timestamp is requested in that project, Then that profile is used. Given a non-admin token, When calling TSA profile create/update/delete endpoints, Then the API returns 403 Forbidden. Given I call GET /api/tsa/profiles and GET /api/tsa/profiles/{id}, Then the API returns profile metadata including scope, algorithms, endpoint, policy OID, and timeout values.
RFC 3161 Request: SHA-256+ and Nonce
Given a binary artifact to be approved, When a timestamp is requested, Then its message imprint is computed with the configured algorithm and algorithms weaker than SHA-256 are rejected. Then the TimeStampReq includes messageImprint, certReq=true, a cryptographically random nonce (>=128 bits) when nonce usage is enabled, and reqPolicy set to the configured policy OID (if any). When the TSA responds, Then status=granted or grantedWithMods is treated as success; other statuses cause the operation to fail with the returned failureInfo. Then the TSTInfo.messageImprint and nonce (if present) exactly match the request; otherwise the token is rejected and not stored.
Verify TSA Response and Certificate Chain with OCSP/CRL
Given a TimeStampResp, When validating, Then the CMS signature is verified and the TSA signing certificate chain builds to a trusted root in the PlanPulse trust store; the TSA certificate includes the timeStamping EKU and is valid at genTime. Then the configured policy OID, if specified, matches the token’s policy OID; otherwise the token is rejected. Then revocation is checked: OCSP is attempted first; if OCSP is unavailable, CRL is checked; revoked or indeterminate revocation status results in rejection. Then the verification result returns genTime, serialNumber, policy OID, TSA subject, and revocation status for audit; failures return a machine-readable reason code.
TLS 1.2+ and Network Timeout Enforcement
Given any outbound request to a TSA endpoint, Then TLS 1.2 or higher is enforced with certificate validation against the PlanPulse trusted roots and hostname verification; untrusted, expired, or mismatched certificates cause the request to fail. Given a request exceeds the configured timeout, Then the request is canceled, marked as timeout, and no partial token is stored. Given a successful connection, Then connection latency and outcome are recorded for observability without logging secrets.
Automatic Failover and Exponential Backoff
Given multiple TSA profiles are configured in priority order, When a request to the primary fails due to network error, TLS error, HTTP 5xx, timeout, or invalid RFC 3161 response, Then the system immediately attempts the next profile in order. Then retries per TSA follow exponential backoff with jitter: initial delay 500ms (configurable), factor 2.0, maximum 3 retries per TSA, and a maximum delay cap of 8s. Then if any TSA returns a valid timestamp, the operation succeeds and records which TSA was used; if all attempts fail, the operation returns an error summarizing all attempts. Then duplicate timestamping is prevented by ensuring only one successful token is stored per artifact per operation.
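The failover-and-backoff policy above can be sketched as follows. This is an illustrative outline under the stated parameters (500 ms initial delay, factor 2.0, at most 3 retries per TSA, 8 s cap, jitter); the function names and the `send` callback are hypothetical, and a real implementation would sleep between attempts and classify errors more precisely.

```python
import random

def backoff_delays(initial: float = 0.5, factor: float = 2.0,
                   max_retries: int = 3, cap: float = 8.0,
                   jitter: float = 0.1) -> list:
    """Per-TSA retry delays: exponential backoff with +/-10% jitter, capped."""
    delays = []
    for attempt in range(max_retries):
        base = min(initial * (factor ** attempt), cap)
        delays.append(base * (1 + random.uniform(-jitter, jitter)))
    return delays

def request_with_failover(profiles, send):
    """Try each TSA profile in priority order; return the first valid token."""
    errors = []
    for profile in profiles:
        for _ in range(1 + len(backoff_delays())):  # initial try + retries
            try:
                return profile, send(profile)  # records which TSA succeeded
            except Exception as exc:  # network/TLS/5xx/timeout/invalid response
                errors.append((profile, str(exc)))
    raise RuntimeError(f"all TSA attempts failed: {errors}")
```

Returning the profile alongside the token supports the requirement to record which TSA was used, and raising only after every profile is exhausted yields the required summary of all attempts.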
Deterministic Binding of Token to Approved Artifact
Given an approval action for a specific file version, When a timestamp is created, Then the token is stored with the message imprint, hash algorithm, TSA identifier, policy OID (if any), and file version ID. Then re-verification recomputes the hash over the exact stored bytes and succeeds only if it matches the token’s messageImprint; any byte-level modification causes verification to fail.
Hash-Only Submission and Content Integrity
"As a security-conscious architect, I want the TSA to receive only a hash of my files so that my clients’ designs remain confidential while still being verifiable."
Description

Generate a cryptographic digest of the exact approved artifact using SHA-256 and submit only the hash to the TSA, never the file contents, to preserve confidentiality. Persist the computed hash, TSA request nonce, and returned timestamp token (TST) with the approval record. Ensure deterministic hashing by normalizing file bytes and capturing the precise versioned binary as stored at approval time. Re-hash on access to detect post-approval modification and surface integrity status in the UI and API.

Acceptance Criteria
Deterministic SHA-256 of Approved Artifact
Given an artifact is stored in PlanPulse at approval time When the system computes the SHA-256 over the exact stored bytes Then the resulting digest is identical across repeated computations and matches a reference SHA-256 computed over an exported copy of the stored bytes And hashing results are identical across supported platforms for the same stored bytes
Hash-Only RFC 3161 TSA Request
Given an approval triggers time-stamping When the system constructs the RFC 3161 Time-Stamp Request (TSQ) Then the TSQ includes only the SHA-256 messageImprint of the artifact, a cryptographically random nonce (>=128 bits), and required RFC 3161 fields And no artifact file bytes are included in the request payload, headers, or attachments And outbound network logs contain no artifact content
Persist Hash, Nonce, and TST with Approval
Given a successful TSA response is received When the approval record is retrieved via API and UI Then it contains sha256 (hex), tsaNonce, tsaTst (DER/Base64), tsaGenTime, and tsaStatus fields And these fields are immutable after approval (write attempts are rejected) And the stored values byte-for-byte match the original TSQ and TSA response
Integrity Check on Access with UI/API Surfacing
Given an approved artifact is accessed (downloaded or viewed) When the system re-reads the stored bytes and computes SHA-256 Then it compares to the persisted hash and sets integrityStatus to "Intact" on match or "Tampered" on mismatch And integrityStatus is returned in GET /approvals/{id} and displayed in the UI within 2 seconds of access And an integrity-check audit event is recorded with timestamp and result
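The access-time integrity check reduces to a hash comparison. A minimal sketch, assuming the approval record stores the digest as lowercase hex; the function name is illustrative, and a real implementation would also emit the audit event the criterion requires.

```python
import hashlib

def check_integrity(stored_bytes: bytes, persisted_sha256_hex: str) -> str:
    """Recompute SHA-256 over the stored bytes and compare to the approval hash."""
    observed = hashlib.sha256(stored_bytes).hexdigest()
    return "Intact" if observed == persisted_sha256_hex.lower() else "Tampered"
```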
Version Binding at Approval Time
Given an artifact has multiple versions When version N is approved Then the persisted sha256, tsaNonce, and tsaTst are bound to version N only and remain unchanged if version N+1 is uploaded later And integrity checks for the approval of version N use version N bytes exclusively and report status independently from version N+1
Tamper Detection and Alerting
Given the stored bytes for an approved artifact are modified post-approval (via test injection) When the next integrity check occurs (on access or scheduled) Then integrityStatus becomes "Tampered", both expected and observed hashes are recorded, and a security alert is emitted to the audit log And the original approval sha256, tsaNonce, and tsaTst remain unchanged And API/UI reflect the tampered state until remediation
Nonce Uniqueness and Replay Protection
Given multiple TSA requests are issued for different approvals When nonces are generated Then each TSQ uses a unique cryptographically random nonce with at least 128 bits of entropy and no nonce is reused And if a TSA response indicates a replayed nonce, the approval is marked failed, the event is logged, and a retry occurs with a new nonce
Approval Event Binding and Snapshot
"As a client approver, I want my approval to be tied to the exact file and version I saw so that there is no ambiguity about what I approved."
Description

Bind each timestamp to a specific approval event by capturing approver identity, authentication context, project, drawing/version ID, markup state, and approval message, and by generating a read-only snapshot of the approved file set. Store snapshot URI, metadata, hash, and TST in an append-only ledger to provide an immutable evidence record. Disallow edits to snapshots while enabling superseding approvals that create new distinct records.

Acceptance Criteria
Approval Event Context Capture is Complete and Validated
Given an authenticated approver with explicit approval permissions on a project When they submit an approval with an optional message for a specific drawing version and current markup state Then the system must atomically capture and bind to the approval event: approver.userId, approver.displayName, auth.method, auth.assuranceLevel, auth.sessionId, project.id, drawing.id, version.id, markup.stateId or markup.stateHash, approval.message (exact text), client.ip, userAgent, server.timestamp (UTC ISO 8601 with milliseconds), and eventId (UUIDv4) And if any required field is missing or invalid, the approval is rejected with a 400 error and no snapshot or ledger entry is created And if the user lacks approval permission, the request is rejected with 403 and no side effects
Read-Only Snapshot Generation and Integrity Hashing
Given an approval event accepted by the system When the snapshot is generated Then the snapshot must include exactly the approved file set and markup state as rendered at approval time And the snapshot must be stored at a content-addressed URI and flagged read-only And a cryptographic hash (SHA-256) of the snapshot package and a manifest of included items (name, byte size, mime, individual hashes) must be produced And any attempt to modify, replace, or delete files within the snapshot storage after creation must be blocked with 403 and produce no change in hash or URI And rehydrating the snapshot package must deterministically reproduce the same root hash
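The manifest and root-hash requirements above imply a deterministic construction like the following sketch. The field names and hashing scheme here are illustrative assumptions, not the specified wire format; the point is that sorting entries and hashing a canonical encoding makes the root hash reproducible, and any byte change or rename changes it.

```python
import hashlib
import json

def build_snapshot_manifest(files: dict) -> dict:
    """files maps relative name -> bytes. Returns a per-file manifest plus a
    root hash that is deterministic for the same file set."""
    items = []
    for name in sorted(files):  # fixed ordering => reproducible root hash
        data = files[name]
        items.append({
            "name": name,
            "size": len(data),
            "sha256": hashlib.sha256(data).hexdigest(),
        })
    # The root hash covers the canonical JSON manifest, so modifying any
    # file's bytes, adding, removing, or renaming a file changes the root.
    canonical = json.dumps(items, sort_keys=True, separators=(",", ":"))
    root = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {"items": items, "rootHash": root}
```

Rehydrating the same stored bytes reproduces the same root hash, which is the property the acceptance criterion tests.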
RFC 3161 TSA Time-Stamp Token Attached and Verified
Given a computed snapshot root hash When the system requests a time-stamp from the configured RFC 3161 TSA Then a valid TST containing the hash, TSA serial, TSA signing time, and TSA certificate chain must be returned and validated (signature verified and policy OID recorded) And the approval is only finalized after the TST is successfully validated And upon TSA failure or timeout (>=10s), the approval is rejected with retriable error and no snapshot or ledger entry is persisted
Append-Only Ledger Write with Complete Evidence
Given a finalized approval with validated TST and snapshot When writing to the append-only ledger Then a new immutable record must be appended containing: eventId, serverTimestamp, project.id, drawing.id, version.id, snapshot.uri, snapshot.rootHash, snapshot.manifestHash, approval.message, approver.userId, auth.method, auth.assuranceLevel, TST.bytes (base64), TSA.serial, TSA.policyOid, and previousLedgerRecordId (or null for first) And the ledger must reject updates or deletions to existing records with 405 and preserve sequence ordering with monotonically increasing recordId And attempts to append a record with a duplicate eventId must be rejected with 409 And reading the record by eventId or recordId must return exactly the stored values
Superseding Approval Produces Distinct Immutable Record
Given an existing approval ledger record for a drawing/version When a new approval is performed for the same drawing at a later version or state Then a new approval eventId, snapshot, TST, and ledger record are created without altering the prior record And the new ledger record must include supersedes.eventId referencing the prior approval (when applicable) And any attempt to edit or overwrite a prior snapshot or ledger record must be rejected with 405
Evidence Export and Third-Party Verification
Given a valid approval ledger record When the evidence package is exported via API or UI Then the export must include: snapshot package (or immutable download link), manifest, snapshot.rootHash, TST (DER), TSA certificate chain, ledger record (JSON), and verification instructions And an offline verification using open-source RFC 3161 tools must validate the TST signature and signing time, confirm the snapshot.rootHash matches the exported snapshot, and confirm ledger record values match the export And the export must be reproducible and downloadable within 2 clicks in UI or a single GET call in API
Exportable Verification Bundle
"As a contracts manager, I want a portable export that proves what was approved and when so that I can share it with stakeholders and auditors without requiring PlanPulse access."
Description

Provide a downloadable verification bundle that includes the approved artifact or a reference to it, its cryptographic hash, the RFC 3161 timestamp token (DER), TSA certificate chain, and revocation data, packaged as a self-contained ZIP and as an embedded PDF/A-3 attachment when applicable. Include a human-readable summary and a machine-readable JSON manifest with verification instructions to enable offline validation by third parties.

Acceptance Criteria
Downloadable ZIP Verification Bundle on Approval
Given an artifact has been approved in PlanPulse When the user selects "Download Verification Bundle" Then a ZIP file is generated and downloaded within 5 seconds for artifacts ≤ 250 MB And the ZIP contains at minimum: manifest.json, summary.txt, timestamp.tsr (DER), tsa_chain.pem, and revocation data files (OCSP and/or CRL) And the ZIP additionally contains either the exact approved artifact file or an artifact_reference.json when the artifact is not embedded And the bundle is self-contained so that offline verification requires no external network calls And the ZIP format supports ZIP64 when the total size exceeds 4 GB And each file listed in manifest.json appears at the declared relative path inside the ZIP
Embedded PDF/A‑3 Verification Attachment for PDF Artifacts
Given the approved artifact is a PDF file When the verification package is generated Then a PDF/A‑3B compliant file is produced with zero errors as validated by veraPDF And the original approved PDF content is preserved as the base document (no visual alterations) And the verification attachments include: manifest.json, summary.txt, timestamp.tsr, tsa_chain.pem, and revocation data And each embedded file is marked with AFRelationship=Data and named deterministically And the embedded manifest references the embedded files by their attachment names And the exported ZIP is also produced alongside the PDF/A‑3
Cryptographic Hash Accuracy and Algorithm Labeling
Given the verification bundle has been generated When the SHA‑256 digest is recomputed over the exact approved artifact bytes (or the provided artifact bytes if embedded) Then the recomputed digest exactly equals the value declared in manifest.json (lowercase hex, no separators) And manifest.json declares hash.algorithm="SHA-256" and hash.value present And for non-embedded artifacts, manifest.json includes artifact.uri, artifact.mediaType, and artifact.size (bytes) And no hash values are present that do not correspond to a file or reference in the bundle
RFC 3161 Timestamp Token Validity and Chain
Given timestamp.tsr (DER) is included in the bundle When the token is validated offline using only files in the bundle Then the token signature verifies, the messageImprint algorithm matches SHA‑256, and the imprint equals the artifact hash in manifest.json And tsa_chain.pem contains a complete chain to the issuing CA (excluding the trust anchor), in order And included OCSP/CRL responses are current at or after the token genTime and indicate the TSA signing cert is not revoked And the token genTime is within ±5 minutes of the recorded approvalTime in manifest.json And the token policy OID is present and recorded in manifest.json
Human‑Readable Summary Completeness
Given summary.txt is opened from the bundle When its contents are inspected Then it lists: product (PlanPulse), feature (Timeproof Notary), approvalId, project name, artifact name, artifact version/revision, artifact location (file path or URI), hash algorithm and digest, approvalTime (UTC ISO‑8601), TSA name, token serial number/policy OID, and bundle generation time And it provides step‑by‑step offline verification instructions referencing files in the bundle And all listed fields are non‑empty and match the corresponding values in manifest.json And the summary is encoded in UTF‑8 and under 200 KB
JSON Manifest Schema and File Integrity
Given manifest.json is parsed When validated against schema version "1.0.0" bundled at schemas/verification-bundle-1.0.0.json Then it contains required sections: version, artifact, hash, approval, timestamp, tsa, revocation, files[], verificationInstructions[] And every file listed in files[] exists at the declared path and its SHA‑256 digest matches files[].sha256 And artifact.type is "embedded" or "reference" and fields present accordingly (filename for embedded; uri, mediaType, size for reference) And approval includes id and time (UTC ISO‑8601) And manifest.json is UTF‑8 without BOM and its size is ≤ 512 KB
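A manifest satisfying the required sections might look like the sketch below. The section names come from the criterion; every nested field value is a hypothetical placeholder, and real validation would use the bundled JSON Schema rather than a key-presence check.

```python
REQUIRED_SECTIONS = ["version", "artifact", "hash", "approval", "timestamp",
                     "tsa", "revocation", "files", "verificationInstructions"]

# Hypothetical example values throughout; only the top-level keys are specified.
manifest = {
    "version": "1.0.0",
    "artifact": {"type": "reference", "uri": "https://example.invalid/a.pdf",
                 "mediaType": "application/pdf", "size": 1048576},
    "hash": {"algorithm": "SHA-256", "value": "ab" * 32},
    "approval": {"id": "apr-123", "time": "2025-09-29T14:05:00Z"},
    "timestamp": {"file": "timestamp.tsr"},
    "tsa": {"chain": "tsa_chain.pem"},
    "revocation": {"ocsp": ["ocsp-0.der"], "crl": []},
    "files": [{"path": "summary.txt", "sha256": "cd" * 32}],
    "verificationInstructions": ["Recompute SHA-256 over the artifact bytes."],
}

def validate_manifest(m: dict) -> bool:
    """Check that every required top-level section is present."""
    return all(key in m for key in REQUIRED_SECTIONS)
```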
In-App Timestamp Verification UI
"As a project lead, I want to verify any approval’s timestamp inside PlanPulse so that I can quickly resolve disputes without external tools."
Description

Add a verification panel that validates timestamp tokens and certificate chains, checks revocation status, recomputes the artifact hash, and displays a clear pass/fail status with reasons, validation time, and TSA details. Support verifying internal records and externally provided bundles, with localized display of the trusted time and UTC as canonical time. Provide warnings for expiring TSA certificates and unavailable revocation endpoints.

Acceptance Criteria
Verify Internal Approval Record Timestamp
Given an internal PlanPulse approval record with an attached RFC 3161 timestamp token and stored artifact reference When the user opens the Verification panel and selects Verify Then the system validates the timestamp token signature against a trusted TSA certificate chain And builds a certificate path to a trusted root with timeStamping EKU present And recomputes the artifact hash using the algorithm specified in the token imprint and compares it to the imprint value And checks OCSP/CRL revocation status for TSA and intermediate certificates And displays overall status Pass if all checks succeed, otherwise Fail And shows TSA name, TSA certificate serial, policy OID, token serial, validation time, trusted time (UTC), and localized trusted time And lists each failure or warning with a machine-readable code and human-readable message
Verify External Timestamp Bundle Upload
Given a user uploads an external verification bundle containing an artifact file and an RFC 3161 token (.tsr/.tsd) with optional certificate chain When the user clicks Verify Then the system parses the bundle and extracts the token and any provided certificates And recomputes the artifact hash and matches it to the token imprint And validates the token signature and builds a chain to a trusted root (preferring provided intermediates when present) And checks revocation status for TSA and intermediates And returns Pass with TSA details and times if all checks succeed And returns Fail with specific reasons (e.g., TOKEN_PARSE_ERROR, HASH_MISMATCH, UNTRUSTED_ROOT, INVALID_SIGNATURE) if any check fails And records a verification result ID for audit
Certificate Chain and Revocation Enforcement
Given any timestamp token under verification When constructing the certificate path and validating revocation Then the chain must terminate at a trusted root and all certificates must be valid at the token's genTime And the TSA end-entity certificate must include Extended Key Usage timeStamping and not be a CA certificate And all crypto signatures (token and certificates) must verify with supported, non-deprecated algorithms And OCSP/CRL checks for TSA and intermediates must return status Good for the overall result to be Pass And if revocation endpoints are unreachable or responses are stale, the result is Pass with Warning: REVOCATION_UNAVAILABLE And if any certificate is Revoked, the result is Fail: CERT_REVOKED
Artifact Hash Recalculation and Mismatch Reporting
Given the provided artifact does not match the timestamp token imprint When verification runs Then the result is Fail: HASH_MISMATCH And the UI displays the expected algorithm and imprint from the token and the computed algorithm and hash of the provided artifact And provides Copy actions for both values And instructs the user to select the correct file to re-verify
Trusted Time Display in Localized and UTC Formats
Given a successfully validated timestamp token When rendering verification results Then the trusted time from the token is displayed in UTC using ISO 8601 (e.g., 2025-09-29T14:05:00Z) and labeled as Canonical (UTC) And the trusted time is also displayed localized to the user's locale/timezone with appropriate formatting And the verification (current) time is displayed separately in both UTC and local time And no client-local clock is used to compute the trusted time value; it is derived solely from the token
Warnings for Expiring TSA Certificates and Revocation Unavailability
Given verification otherwise passes When the TSA end-entity certificate expires within 30 days or an AIA/OCSP endpoint is unreachable/times out Then the overall result remains Pass with Warning And the UI shows a Warning badge with codes TSA_CERT_EXPIRING_SOON and/or REVOCATION_ENDPOINT_UNAVAILABLE And displays the TSA cert notAfter date and days-until-expiry when applicable And displays the affected endpoint URL(s) and error type for unavailability And logs warnings in the verification result record
Audit Trail, Retention, and Access Controls
"As a compliance officer, I want controlled access and auditable histories of timestamped approvals so that we meet contractual and regulatory requirements."
Description

Enforce role-based access to timestamped records and exports, maintain a tamper-evident audit trail of timestamp issuance, verification attempts, exports, and configuration changes, and support configurable retention periods and legal holds. Normalize all event times to UTC with millisecond precision and sign server logs to detect tampering. Provide export of audit logs for compliance reviews via UI and API.

Acceptance Criteria
RBAC: Access to Timestamped Records and Audit Exports
Given a user without the Audit.View permission When they request any audit log or timestamped record/export via UI or API Then the system returns HTTP 403 and logs an AccessDenied audit event referencing the attempted resource and user Given a user with Audit.View and scoped project access When they query audit logs Then results include only events for projects within their scope and exclude all other tenants/projects Given a user with Export.Create permission When they export audit logs Then the system generates an export file and logs an ExportCreated event with user ID, time (UTC ms), filters, file checksum (SHA-256), size, and IP address Given a non-admin user When they attempt to change RBAC or retention settings via UI or API Then the system returns HTTP 403 and logs a ConfigChangeAttempt event with outcome=denied Given an admin user When they update RBAC assignments Then the change is enforced immediately and a ConfigChanged event is written with before/after role mappings hashed
Audit Trail: Capture of TSA and Configuration Events
Given a timestamp issuance for a client approval When the TSA token is created Then an Issuance event is written with event_type=issuance, approval_id, tsa_policy_oid, token_hash, actor_id, request_id, outcome=success, and timestamp in UTC ms Given a verification request of a TSA token (success or failure) When verification completes Then a Verification event is written including validation_result, failure_reason (if any), actor_id (or null for public verification), request_ip, user_agent, latency_ms, and UTC ms timestamp Given an audit log export (UI or API) When the export job finishes Then an Export event is recorded with filters applied, record_count, file_hash, storage_location, and UTC ms timestamp Given a system configuration change (RBAC, retention, legal hold, signing key rotation) When the change is committed Then a ConfigChanged event is recorded with changed_fields, before_hash, after_hash, actor_id, approval_reference (if required), and UTC ms timestamp
Time Normalization: UTC Millisecond Precision
Given any audit event is persisted When the event is written to storage Then the event timestamp is stored in ISO 8601 format with millisecond precision and Z suffix (e.g., 2025-09-29T14:23:45.123Z) Given audit events are returned via API or export When a client retrieves events Then all timestamps are normalized to UTC with exactly three fractional digits and include the Z designator Given UI rendering of audit timestamps When events are displayed Then the underlying data remains UTC ms; any local display conversion is explicit and does not alter stored values Given system time synchronization When clock drift exceeds 500 ms from NTP reference Then the system raises a health alert and logs a SystemClockSkew event without blocking read access to existing logs
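The "exactly three fractional digits with Z designator" rule is stricter than most default formatters (Python's `isoformat`, for instance, emits six digits), so a small sketch of the normalization is worth spelling out; the function name is illustrative.

```python
from datetime import datetime, timezone

def to_audit_timestamp(dt: datetime) -> str:
    """Normalize to UTC, ISO 8601, exactly three fractional digits, Z suffix."""
    utc = dt.astimezone(timezone.utc)
    # Truncate microseconds to milliseconds and zero-pad to three digits.
    return utc.strftime("%Y-%m-%dT%H:%M:%S.") + f"{utc.microsecond // 1000:03d}Z"
```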
Log Integrity: Cryptographic Signing and Tamper Detection
Given audit records are appended When a new record is committed Then it is included in a hash chain (previous_hash, record_hash) and the batch manifest is signed with the current server private key, recording key_id Given an export of audit logs When the export is generated Then the package includes the signed manifest, chain anchors, and the public key certificate chain necessary for verification Given any modification or removal of an existing audit record outside of retention purge When verification is performed on the chain Then signature or chain verification fails, yielding a TamperDetected result and a corresponding alert event Given signing key rotation When a new key is activated Then the rotation is recorded as a ConfigChanged event and the manifests include cross-signing anchors so that verification across the rotation passes
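The hash-chain portion of the criterion can be sketched as below (the signed batch manifest and key rotation are omitted). Record field names match the criterion; the genesis marker and canonical-JSON encoding are assumptions.

```python
import hashlib
import json

def _record_hash(payload: dict, previous_hash: str) -> str:
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((previous_hash + canonical).encode()).hexdigest()

def append_record(chain: list, payload: dict) -> None:
    """Append an audit record linked to its predecessor by hash."""
    previous = chain[-1]["record_hash"] if chain else "0" * 64  # genesis marker
    chain.append({"payload": payload, "previous_hash": previous,
                  "record_hash": _record_hash(payload, previous)})

def verify_chain(chain: list) -> bool:
    """Walk the chain; any edited or removed record breaks verification."""
    previous = "0" * 64
    for rec in chain:
        if rec["previous_hash"] != previous:
            return False
        if rec["record_hash"] != _record_hash(rec["payload"], previous):
            return False
        previous = rec["record_hash"]
    return True
```

A failed `verify_chain` corresponds to the TamperDetected result the criterion requires.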
Retention Policies and Legal Holds
Given an org or project retention policy is configured When an admin sets retention to a valid duration within allowed bounds (e.g., 90–3650 days) Then the new policy is saved and logged as a ConfigChanged event with before/after values (hashed) and UTC ms timestamp Given audit records older than the effective retention period When the nightly purge job runs Then records not under legal hold are permanently purged and a Purge event is logged with count, range, and UTC ms timestamp Given a legal hold is applied to a scope (org/project/approval) When a purge job evaluates records under that scope Then those records are excluded from purge until the hold is removed, and HoldApplied/HoldReleased events are logged accordingly Given an export request overlaps purged ranges When the export completes Then the export manifest indicates any gaps due to retention with start/end ranges for missing data
Audit Log Export via UI and API
Given a user with Audit.View and Export.Create permissions When they filter by time range, event types, actors, and project, then export via UI Then the system returns a downloadable package containing: data (CSV and JSON Lines), a signed manifest, SHA-256 checksums, and a verification guide URL Given an API client with valid credentials and scope When it calls GET /api/audit-logs/export with query parameters (from,to,event_types,project_ids,actor_ids,page,page_size) Then the API responds 202 Accepted with an export job id, and subsequent GET returns 200 with file links, total_count, and checksums when ready Given pagination of API audit log queries When a client requests pages Then results are stable for the query window and include next/prev cursors and a consistent total_count Given any export completes When the job finalizes Then an Export event is recorded with parameters, record_count, file size, checksum, and UTC ms timestamp
Verification Attempts: Comprehensive Logging
Given a user or system attempts to verify a timestamped approval When the verification service runs Then an audit record is written with event_type=verification, approval_id, token_hash, validation_result, validation_errors (if any), actor_id (nullable), request_ip, user_agent, correlation_id, latency_ms, and UTC ms timestamp Given a public (unauthenticated) verification link is used When verification is attempted Then actor_id is null, and the event includes an indicator public=true while still capturing IP, user_agent, and outcome Given repeated verification attempts for the same approval When multiple attempts occur Then each attempt is recorded independently with unique request_id values and can be filtered by approval_id in exports
Resilient Queueing and Backfill
"As an architect, I want approvals to proceed even if the TSA is temporarily unavailable so that my workflow is not blocked, with clear indicators when the trusted time is attached."
Description

Queue timestamp requests asynchronously at approval time, provide immediate provisional approval status, and finalize once the TSA response is validated, with user notifications on success or failure. Support backfilling timestamps for approvals created during TSA outages while preserving the original approval event time as claimed time and clearly distinguishing the trusted TSA time. Surface operational metrics and alerts for TSA latency and error rates.

Acceptance Criteria
Provisional Approval with Async Timestamp Queueing
Given an approver submits an approval for a specific file version When the approval is saved Then the system enqueues an RFC 3161 TSA request containing the SHA-256 hash of the approved file and the approvalId And sets an idempotency key of approvalId+fileHash on the job And returns a response and updates UI/API to status "Provisional: Pending TSA" within 500 ms
Finalize Approval on Valid TSA Response
Given a pending approval with a queued TSA request When a TSA timestamp token is received within 60 s and its signature validates and imprint equals the stored SHA-256 hash Then the approval status changes to "Trusted" And the TSA token, TSA policy OID, serial, TSA trusted time, and certificate chain are stored immutably And the claimedApprovalTime remains unchanged And the approver and watchers receive success notifications containing both times and the token reference
Automatic Backfill After TSA Outage
Given the TSA health status is Unhealthy at approval time and approvals were marked Provisional When the TSA health returns to Healthy Then the system backfills timestamps for eligible provisional approvals using the original file hash and approvalId And on success marks status "Trusted (Backfilled)" and sets tsaTrustedTime to the token time while preserving claimedApprovalTime And on content hash mismatch or permanent error marks status "Failed (Backfill)" and includes reason in audit log And sends user notifications on success or failure
Retry, Backoff, and Dead-Letter with Alerts
Given a TSA request fails due to transient errors (timeouts, 5xx) When retrying Then exponential backoff with jitter is applied across up to 6 attempts within 24 hours And after exceeding attempts the job is moved to a dead-letter queue with error details visible in the ops dashboard And an alert is sent when TSA error rate > 2% over 5 minutes or DLQ size > 50 And operators can requeue DLQ jobs; requeued jobs preserve idempotency
Distinct Claimed vs TSA Trusted Times in UI, API, and Audit
Given any approval record retrieved via UI or API Then both claimedApprovalTime and tsaTrustedTime fields are returned; tsaTrustedTime is null until trusted And exports and audit logs label and include both times plus file hash and token serial And no operation can overwrite one time with the other
Operational Metrics and SLOs for TSA Latency
Given the system is processing TSA requests Then the ops dashboard displays queue depth, enqueue rate, processing rate, TSA p50/p95/p99 latency, success rate, error rate, and time-to-trust distribution And an alert triggers when p95 TSA latency > 10 s for 10 minutes or success rate < 98% over 15 minutes And metrics are tagged by TSA provider and environment
Idempotent and Tamper-Evident Timestamping
Given duplicate submissions for the same approvalId and fileHash When processed Then exactly one TSA token is stored and subsequent duplicates return the existing token without re-contacting TSA And the token, hash, and TSA response are stored immutably; any mutation attempt is blocked and logged And if a TSA token imprint does not match the file hash the job is failed and an alert is raised
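The dedupe behavior above amounts to a write-once map keyed on approvalId plus fileHash. A minimal in-memory sketch with illustrative names; a real store would be persistent and enforce immutability at the storage layer.

```python
def store_token_once(store: dict, approval_id: str, file_hash: str,
                     token: bytes) -> bytes:
    """Store at most one token per (approvalId, fileHash); duplicates return
    the existing token unchanged instead of re-contacting the TSA."""
    key = (approval_id, file_hash)
    if key in store:
        return store[key]
    store[key] = token
    return token
```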

Revocation Trail

Void or supersede an approval without deleting history. A structured revocation entry is chained to the original, recording reason, impact, and the new superseding version. The system auto‑notifies affected stakeholders and pauses dependent steps until revalidated—preventing silent reversals and preserving trust.

Requirements

Chained Revocation Entry Model
"As a project lead, I want to record a structured revocation of an approval with reasons and a superseding version so that the team has a clear, traceable source of truth."
Description

Design and implement a structured, append-only revocation record that is cryptographically and relationally linked to the original approval. Each revocation entry must capture revocation type (void or supersede), reason, impact assessment, actor identity, timestamps, source context (drawing/markup/version), and a reference to the superseding version when applicable. Enforce validation rules (e.g., superseding version must exist and be newer, approvals cannot be deleted once revoked, one approval can have multiple successive supersessions but only one active current state). Provide optimistic concurrency controls to prevent conflicting simultaneous revocations and ensure referential integrity across projects and versions. Expose the model to services that drive workflow pausing, notifications, UI visualization, and audit export. Expected outcome: a reliable, tamper-evident chain that preserves history and defines the current authoritative approval state.

Acceptance Criteria
Append-Only Revocation Entry with Cryptographic and Relational Linkage
Given an approved record A exists and the actor is authorized When the actor submits a revocation with type, reason, impact assessment, actor identity, timestamps, and source context (drawing/markup/version), and optional superseding version reference Then the system persists a new immutable revocation record R linked to A by foreign key And computes and stores a deterministic content hash of R and a previous-link hash chaining R to the latest prior entry for A (or a genesis marker) And update and delete operations on R are rejected with 405 Method Not Allowed And retrieving the revocation chain for A returns entries in append order and hash-chain verification passes
Supersede vs Void Validation Rules
Given revocation type = "supersede" When a superseding version reference V is provided Then V must exist, belong to the same project and source context lineage as the original approval, and have a strictly newer version identifier; otherwise persist is rejected with 422 and error code SUPERSEDE_INVALID_REFERENCE And reason and impact assessment are required fields Given revocation type = "void" When a superseding version reference is provided Then persist is rejected with 422 and error code SUPERSEDING_NOT_ALLOWED_FOR_VOID And reason and impact assessment are required fields
Single Authoritative Approval State Resolution
Given approval A may have zero or more revocation entries When the current state of A is requested Then exactly one authoritative state is returned: Approved if no revocations exist; Void if the latest valid revocation type is "void"; Superseded with reference V if the latest valid revocation type is "supersede" And earlier revocation entries remain in history but do not override the latest state And creation of a new revocation immediately updates the derived current state for A
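The state-resolution rule in the criterion above reduces to a small pure function. A minimal sketch, assuming hypothetical field names (`type`, `superseding_version`) and that the list is already in append order:

```python
def current_state(revocations: list) -> dict:
    """Resolve the single authoritative state of an approval from its
    revocation entries; only the latest entry defines the current state."""
    if not revocations:
        return {"state": "Approved"}
    latest = revocations[-1]
    if latest["type"] == "void":
        return {"state": "Void"}
    return {"state": "Superseded",
            "superseding_version": latest["superseding_version"]}
```

Deriving the state on read (or caching the derivation and refreshing it on every append) keeps history intact while guaranteeing exactly one authoritative answer.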
Optimistic Concurrency to Prevent Conflicting Revocations
Given two clients C1 and C2 read approval A at version v (or ETag e) When C1 and C2 submit revocations concurrently Then the first successful write commits and increments the version (or changes the ETag) And the second write with stale version/ETag is rejected with 409 Conflict and a retry token And at most one new revocation record is persisted for that version window And no partial or duplicate revocations are created
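The version-based check above can be sketched with a compare-and-set guard. This is a simplified in-memory model (the real store would enforce the check transactionally); `ConflictError` stands in for the HTTP 409 response:

```python
class ConflictError(Exception):
    """Maps to HTTP 409 Conflict with a retry token in the API layer."""

class ApprovalRecord:
    def __init__(self):
        self.version = 0          # incremented on every successful write
        self.revocations = []

    def revoke(self, entry: dict, expected_version: int) -> int:
        """Append a revocation only if the caller read the latest version."""
        if expected_version != self.version:
            raise ConflictError("stale version; re-read and retry")
        self.revocations.append(entry)
        self.version += 1
        return self.version
```

Both clients read version v; the first write commits and bumps the version, so the second write's stale token is rejected and no duplicate revocation is persisted.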
Cross-Entity Referential Integrity and Non-Deletion Rules
Given a revocation references approval A, source context (drawing/markup/version), and optional superseding version V Then A, the source context entities, and V must exist and belong to the same project; cross-project references are rejected with 422 And approvals with one or more revocations cannot be deleted; delete attempts return 405 Method Not Allowed And deletes of referenced drawings/markups/versions are blocked or soft-deleted in a way that preserves referential integrity; hard-delete attempts return 409 Conflict
Workflow Pause and Stakeholder Notifications on Revocation
Given a revocation record R is successfully persisted Then the system publishes an event (ApprovalRevoked or ApprovalSuperseded) containing R fields (revocation id, approval id, type, reason, impact, actor, timestamps, source context, superseding reference if any) within 1 second And workflow steps dependent on the approval are set to paused = pending_revalidation within 1 second And all affected stakeholders subscribed to the approval context receive a notification within 60 seconds And idempotent re-publishing does not create duplicate pauses or notifications
Audit Export and Tamper-Evidence Verification
Given an auditor requests an export for approval A When the export is generated via API Then it includes all revocation entries in order with full field values, content hashes, previous-link hashes, and a chain verification result = pass And if any entry has been altered outside append-only rules, chain verification = fail and the failing entry index is indicated And export is read-only and does not modify any records
Dependent Workflow Pause & Revalidation
"As a project manager, I want dependent steps to pause automatically on revocation so that no downstream work proceeds on outdated assumptions."
Description

Automatically identify and pause all dependent steps (tasks, checklists, deliverables, linked approvals, and integrations) when an approval is revoked. Provide a dependency graph resolver that maps from the revoked approval to affected downstream items, transitions them to a Paused state, and blocks progression until revalidation criteria are met (e.g., new approval recorded or risk waiver applied). Support configurable rules per project (what to pause, who can override, SLA timers), in-context prompts to revalidate, and automatic resumption once the superseding approval is confirmed. Log all pauses/resumptions and surface impact summaries in the workspace.
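The dependency-graph resolver described above is essentially a downstream traversal with a visited set, which also satisfies the acceptance criteria that each item is paused at most once and circular dependencies cannot loop. A sketch, assuming a hypothetical adjacency-list representation:

```python
from collections import deque

def affected_items(graph: dict, revoked: str) -> set:
    """Breadth-first traversal downstream from the revoked approval.
    The visited set guarantees each item appears once and breaks cycles."""
    seen, queue = set(), deque([revoked])
    while queue:
        node = queue.popleft()
        for dependent in graph.get(node, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

Each item in the returned set would then be transitioned to Paused with its prior status recorded for later resumption.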

Acceptance Criteria
Auto-Pause of Downstream Items on Approval Revocation
Given an approval has downstream dependencies mapped by the dependency graph resolver When the approval is revoked and the revocation entry is saved Then the resolver identifies all downstream items according to current project rules And all identified items that are not Completed or Cancelled transition to Paused And each paused item displays the pause reason, revocation ID, and a link to the revocation entry And progression actions (start, complete, submit, approve, merge, handoff, integration-run) are disabled on paused items And each item is paused at most once even if reachable by multiple dependency paths And items with circular dependencies are handled without deadlock or infinite loops And the system records the list of affected items and their prior statuses for resumption
Blocking Progress Until Revalidation With Clear Errors
Given an item is Paused due to an approval revocation When a user without override permission attempts to progress the item via UI or API Then the action is blocked and an error message references the revocation ID and required revalidation And the API responds with HTTP 409 and error code PAUSED_PENDING_REVALIDATION And an inline prompt offers revalidation options without allowing bypass When a user with override permission attempts to progress the item Then an override dialog requires justification and (if configured) a risk waiver selection And the system logs the override with user, timestamp, justification, and rule reference And the item only progresses if the specific override rule permits the attempted action
Project-Level Pause Rules, Overrides, and Revalidation SLAs
Given a project has configurable rules for pause scope, override roles, notification audience, and revalidation SLAs When an approval is revoked Then only items matching the project's pause scope are paused; excluded types or labeled items remain unaffected And SLA timers are created on paused items with due dates as defined by the project's revalidation SLA And overdue SLA timers trigger escalation notifications to the configured audience And override permissions are enforced based on role and context as defined by rules And all rule evaluations, decisions, and rule versions applied are captured in the audit log
Automatic Resumption After Superseding Approval
Given items are Paused due to a revocation And a superseding approval has been recorded and confirmed for the same scope When dependencies for an item are satisfied under current project rules Then the item automatically transitions from Paused back to its prior active state And original assignees and checklists are preserved; due dates are recalculated using remaining SLA And items that no longer qualify for resumption remain Paused with an updated reason And a resume event is logged with correlation to the revocation and superseding approval
Impact Summary and Audit Logging of Pauses and Resumptions
Given a revocation is recorded When the pause operation completes Then the workspace displays an impact summary with counts by type, list of affected items, affected stakeholders, and estimated SLA impact And a link to view the dependency graph with highlighted affected nodes is available And affected stakeholders receive notifications summarizing the impacts and required actions And the impact summary updates in real time as items are revalidated or resumed And the audit log contains entries for the revocation, each pause, each resume, rule snapshot applied, and notifications dispatched And users can export the impact summary and audit entries to CSV or JSON
External Integration Hold and Resume Semantics
Given an affected item has an external integration When it transitions to Paused due to revocation Then the system sends a signed webhook with revocation and pause details to the configured endpoint And the connector sets the external job or order to On Hold or equivalent state And inbound callbacks attempting to progress the paused item are rejected with HTTP 409 and error code PAUSED_PENDING_REVALIDATION And webhook deliveries are retried with exponential backoff and are idempotent via a correlation ID When the item resumes Then a signed resume webhook is sent and the external job or order is unlocked
In-Context Revalidation Prompts and Tasking
Given items are Paused due to revocation When a responsible user views a paused item or the workspace impact panel Then an in-context Revalidate prompt is shown with options to record a superseding approval, apply a risk waiver, or request clarification And selecting an option creates or links the necessary record and requires mandatory fields such as reason, approver, and scope And users can bulk-select paused items and perform a single revalidation action if project rules permit And upon successful revalidation, the item transitions to Ready or Active; otherwise it remains Paused with an explicit reason And all revalidation actions are logged with user, timestamp, and linked artifacts
Stakeholder Notification Engine
"As a client, I want to be immediately informed when an approval is revoked and what action is required so that I can respond promptly and avoid surprises."
Description

On revocation, trigger templated notifications to all affected stakeholders (approvers, watchers, assignees of dependent items, external clients) across email, in‑app, and chat (Slack/Teams). Messages must include the revocation reason, impact summary, superseding version link, required actions, and deadlines. Support batching, rate limiting, quiet hours, retries, and delivery status tracking. Provide role-based visibility controls, per-project templates, localization, and deep links to the relevant workspace context. Ensure idempotent dispatch for repeated updates on the same revocation chain.
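The idempotent-dispatch requirement above can be sketched with a deduplication key per stakeholder–channel–content combination. Names are illustrative; a production dispatcher would persist the key set rather than hold it in memory:

```python
import hashlib

def idempotency_key(chain_id: str, stakeholder_id: str,
                    channel: str, content: str) -> str:
    """Key over revocation chain, recipient, channel, and content hash, so
    repeated identical updates dedupe but material changes dispatch anew."""
    content_hash = hashlib.sha256(content.encode()).hexdigest()[:16]
    return f"{chain_id}:{stakeholder_id}:{channel}:{content_hash}"

class Dispatcher:
    def __init__(self):
        self.sent = set()  # persisted in a real system

    def dispatch(self, key: str, send) -> bool:
        """Invoke send() only for unseen keys; return False for duplicates."""
        if key in self.sent:
            return False
        self.sent.add(key)
        send()
        return True
```

A changed deadline or superseding version alters the content hash, producing a new key and therefore a fresh (update-flagged) delivery.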

Acceptance Criteria
Revocation Triggers Multi-Channel Stakeholder Notifications
- Given an approved item is revoked in PlanPulse with stakeholders including approvers, watchers, assignees of dependent items, and external clients - When the revocation is saved - Then a notification job is enqueued within 30 seconds for each stakeholder–channel combination (email, in-app, Slack/Teams) per preferences and project settings - And each message includes the revocation reason, impact summary, a deep link to the workspace context highlighting the revocation, a link to the superseding version, required actions for the recipient, and the response deadline - And if a chat integration (Slack/Teams) is not connected or the recipient lacks a mapped account, the system falls back to email and records the fallback in delivery metadata
Role-Based Visibility and Content Filtering
- Given role-based visibility rules are configured for approvers, watchers, assignees, and external clients - When notifications are generated - Then approvers receive the full reason and impact details, superseding version link, re-approval action, and deadline - And external clients receive only client-safe reason and impact summary; internal notes and non-shared attachments are excluded - And watchers receive an informational update without required actions - And assignees receive a revalidation/adjustment action with the dependent item shown as paused until revalidated - And deep links enforce RBAC so recipients can only access content permitted by their role
Per-Project Templates and Merge Fields
- Given per-project templates exist for each role and channel with defined merge fields - When generating a notification - Then the system selects the template by project, role, and channel; if absent, it falls back to the global default template - And all required merge fields (reason, impact, superseding version URL, required actions, deadline, deep link) are populated; missing fields cause the message to be withheld and an error logged to the audit trail - And the template ID and version are recorded in the message metadata for traceability
Localization and Timezone-Aware Deadlines
- Given a stakeholder with locale set to fr-FR and timezone set to Europe/Paris - When a notification is created - Then the message content uses the fr-FR localized template and localized static strings - And the deadline timestamp is rendered in the stakeholder's timezone with locale-appropriate formatting and includes an ISO/RFC 3339 machine-readable timestamp in metadata - And if a translation key is missing in fr-FR, the system falls back to the project default language and records a missing-translation event
Batching, Rate Limiting, and Quiet Hours Enforcement
- Given multiple notification events for the same stakeholder occur within the configured batch window (e.g., 2 minutes) - When dispatching notifications - Then the system consolidates them into a single message per channel that enumerates each revocation with identifiers and deep links - And per-stakeholder, per-channel rate limits defined at the project level are enforced; excess messages are queued rather than dropped - And if the planned send time falls within the stakeholder's quiet hours, the message is deferred to the first minute after quiet hours and labeled as deferred due to quiet hours in delivery metadata
Retry Logic and Delivery Status Tracking
- Given a channel provider returns a transient error for a send attempt - When the retry mechanism runs - Then the system retries up to 3 times with exponential backoff starting at 1 minute and capping at 15 minutes - And each message has a per-channel delivery status lifecycle tracked as queued, sent, delivered, failed, or bounced with timestamps and attempt counts - And terminal failures are surfaced to project admins and recorded in the audit log with the last provider error
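The retry schedule in the criterion above (up to 3 retries, doubling from 1 minute, capped at 15 minutes) can be expressed as a one-liner; this sketch just makes the arithmetic explicit:

```python
def backoff_schedule(max_retries: int = 3, base_seconds: int = 60,
                     cap_seconds: int = 900) -> list:
    """Delay before each retry attempt: doubles from base, capped.
    Defaults yield 60s, 120s, 240s for the three retries."""
    return [min(base_seconds * 2 ** i, cap_seconds) for i in range(max_retries)]
```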
Idempotent Dispatch for Revocation Chain Updates
- Given repeated updates occur for the same revocation chain without content changes within 24 hours - When dispatching notifications - Then the system deduplicates using an idempotency key comprised of revocationChainId, stakeholderId, channel, and content hash to prevent duplicate sends - And if content changes materially (e.g., updated deadline or superseding version), the existing in-app and chat threads are updated in place, and emails are sent with an Update subject indicator; the prior message is linked in the audit trail - And each idempotency decision is recorded with the computed key and reason
Immutable Audit Trail
"As a compliance officer, I want an immutable history of approvals and revocations so that audits can verify decisions and timelines with confidence."
Description

Maintain an append-only, tamper-evident audit trail for approvals and their revocations, including who performed the action, when, what changed, justification, and related artifacts. Implement hash chaining for entries to detect tampering, store signatures of critical events, and prevent hard deletion. Provide an audit view in the product with filtering and export (CSV/PDF) options, and enforce permissioned access. Integrate with the revocation model to present a complete, chronological narrative suitable for compliance and client transparency.
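Signature storage and verification for critical events can be sketched as below. This uses HMAC purely as a compact stand-in; the acceptance criteria's detached signatures with a `publicKeyRef` imply asymmetric signing (e.g. Ed25519) in production, which this sketch does not implement:

```python
import hashlib
import hmac

def sign_entry(payload: bytes, key: bytes) -> str:
    """Produce a detached signature over the canonical entry payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_entry(payload: bytes, key: bytes, signature: str) -> bool:
    """Constant-time check that the stored signature matches the payload."""
    return hmac.compare_digest(sign_entry(payload, key), signature)
```

On load, the audit view would run verification per entry and surface Signature Invalid for any mismatch, alongside the hash-chain integrity status.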

Acceptance Criteria
Hash-Chained Append-Only Entries
Given an existing audit trail for a project When a new approval or revocation is recorded Then a new entry is appended with a previousHash referencing the latest entry and an entryHash computed over canonicalized entry data And no prior entry is mutated Given any attempt to modify or delete an existing audit entry via UI, API, or database When the operation is executed Then the system rejects the change and appends an AuditViolation entry identifying the actor and method Given an entry’s entryHash or previousHash is altered When the chain verification job runs or the audit view loads Then the system flags the trail as Tampered, identifies the first failing entry, and displays integrityStatus=Tampered in the audit view and exports
Complete Event Capture for Approvals and Revocations
Given a user finalizes an approval or revocation When the audit entry is persisted Then the entry includes actorId, actorRole, actionType (Approval|Revocation), entryId, timestamp (UTC ISO 8601), justification, changedFields summary, relatedArtifactIds, relatedVersionId, supersedesEntryId (for revocations), ipAddress, and signaturesRef Given any required field is missing or invalid When persisting the entry Then the write is rejected with HTTP 422 Unprocessable Entity and no entry is appended Given a revocation supersedes a prior approval When the entry is stored Then supersedesEntryId links to the original approval entry and is resolvable from the audit view
Digital Signatures for Critical Events
Given an approval or revocation is finalized When the audit entry is created Then a detached digital signature is stored with signatureAlgorithm, signatureValue, and publicKeyRef, and signature verification against the stored entry payload succeeds Given an audit entry has an invalid or missing signature for a critical event When signature verification runs or the audit view loads Then the entry is flagged Signature Invalid and the integrity indicator reflects the failure in the list and exports
Audit View Filtering, Sorting, and Export
Given a user with access opens the Audit View for a project When they apply filters (actionType, actor, date range, version, keyword) and sort by timestamp Then the results return in chronological order with pagination and an integrity indicator (Valid|Tampered|Unknown) per result set Given a filtered result set of up to 10,000 entries When the user exports to CSV Then the CSV contains only the filtered rows and columns: entryId, timestamp, actor, actionType, justification, relatedVersionId, entryHash, previousHash, signatureStatus, integrityStatus; encoding UTF-8; comma delimiter; export completes within 10 seconds Given a filtered result set of up to 5,000 entries When the user exports to PDF Then the PDF contains only the filtered rows with the same columns rendered, includes project name header and page numbers, and is generated within 10 seconds Given no results match the filters When exporting is attempted Then the export actions are disabled and no file is generated
Permissioned Access to Audit Trail
Given a user without Audit.View permission When they attempt to access the audit view or export endpoints Then the system returns 403 Forbidden and no audit data is returned Given a user with Audit.View permission scoped to specific projects When they open the audit view Then they can see only audit entries for projects within their scope Given any audit view or export access occurs When the action completes Then an AuditAccess entry is appended recording actorId, timestamp, target project, and action (view|export)
Revocation Narrative Integration
Given an approval is later revoked one or more times When the audit view is opened for the associated version Then the audit list shows a complete chronological chain linking the original approval and each revocation via supersedesEntryId, displaying reason, impact, and superseding version for each revocation Given a user selects Export Narrative When the narrative is exported to PDF Then the PDF groups the approval with its revocations, preserves chronological order, and includes chain integrity and signature statuses for each entry
Role-Based Revocation Controls
"As an account admin, I want to control who can revoke approvals and require appropriate justification or co-approval so that reversals are governed and accountable."
Description

Define fine-grained permissions and safeguards governing who can revoke and under what conditions. Support policy rules such as mandatory justification, multi-party confirmation for high-impact items, prevention of self-revocation without secondary review, and escalation to admins for restricted projects. Enforce consistent checks across UI and API, log denied attempts, and present clear confirmation dialogs outlining impact. Allow per-project configuration and policy templates to standardize governance for firms of different sizes.
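The mandatory-justification rule (reason of at least 20 characters, impact from a fixed set, superseding version as a UUID) reduces to field-level validation shared by UI and API. A sketch with illustrative names; the returned error map would feed the 400 response's `error.fields`:

```python
import uuid

IMPACTS = {"Scope", "Schedule", "Cost", "Compliance", "Other"}

def validate_justification(reason, impact, superseding_version_id) -> dict:
    """Return a field -> message map; an empty map means the input is valid."""
    errors = {}
    if not reason or len(reason) < 20:
        errors["reason"] = "minimum 20 characters"
    if impact not in IMPACTS:
        errors["impact"] = "must be one of " + ", ".join(sorted(IMPACTS))
    try:
        uuid.UUID(str(superseding_version_id))
    except (ValueError, TypeError):
        errors["supersedingVersionId"] = "must be a valid version UUID"
    return errors
```

Running the same validator behind both channels is what makes the UI/API parity requirement testable: a request blocked in one channel fails identically in the other.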

Acceptance Criteria
Mandatory Justification Enforcement
Given a user with revoke permission attempts to revoke an approval under a policy requiring justification When the user initiates revocation via UI or API Then the system requires three fields: reason (minimum 20 characters), impact (one of {Scope, Schedule, Cost, Compliance, Other}), and supersedingVersionId (UUID of an existing version) before revocation can proceed And if any field is missing or invalid, the revocation is blocked; the UI displays field-level errors and the API returns 400 with error.code=REVOCATION_JUSTIFICATION_REQUIRED and error.fields detailing validation failures And the system logs a denied attempt including actorId, role, projectId, approvalId, channel (UI/API), timestamp, policyId, and validationErrors And if all fields are valid, the UI presents a confirmation dialog summarizing stakeholdersToNotifyCount, dependentStepsToPauseCount, and a link to the superseding version; the API requires confirm=true to proceed And upon confirmation, the request moves to the next applicable policy step without bypassing other policies
High-Impact Multi-Party Confirmation
Given the project policy marks the target approval as high-impact requiring two confirmations from distinct roles: Project Lead and Client Representative When an eligible initiator submits a revocation with valid justification Then the system creates a pending revocation requiring one confirmation from a Project Lead and one from a Client Representative, neither equal to the initiator And notifications are sent to users in those roles within the project And duplicate confirmations from the same user or same role are rejected with 409 and error.code=REVOCATION_CONFIRM_DUPLICATE And if both confirmations are not received within 48 hours, the revocation auto-expires with status=Cancelled and the initiator is notified And upon receiving both valid confirmations, the revocation finalizes, dependent steps are paused, stakeholders are notified, and the audit trail links both confirmations to the revocation entry
Self-Revocation Requires Secondary Review
Given the original approver attempts to revoke their own previously approved item When they submit a revocation request Then the system prevents immediate revocation and creates a review request requiring approval from a designated reviewer role (not the initiator) And the UI displays "Secondary review required" with reviewer queue details; the API returns 202 with error.code=REVOCATION_SELF_REVIEW_REQUIRED and reviewRequestId And until secondary review is approved, the approval remains in Approved state and no dependent steps are paused And all actions (initiation, reviewer assignment, decision) are recorded in the audit log linked to the approval
Restricted Project Admin Escalation
Given the project has the Restricted flag enabled in revocation policy When a non-Org-Admin attempts to revoke an approval Then the system blocks direct revocation and creates an escalated request with status=Pending Admin Review And the UI shows escalation status and target admin group; the API returns 202 with error.code=REVOCATION_ADMIN_REQUIRED and escalationId And only Org Admins can approve or deny the escalated revocation; attempts by others return 403 with error.code=POLICY_ENFORCEMENT_DENIED And upon admin approval, the revocation proceeds; upon denial, the request is closed and the initiator is notified And all escalation events are logged with admin actor, timestamps, and outcomes
Consistent Enforcement Across UI and API
Given a project with a defined revocation policy When the same revocation attempt (inputs and actor role) is performed via UI and via API Then both channels evaluate policies using the same engine and produce identical outcomes (Passed, Pending, or Denied) And on denial, both channels return the same machine-readable error.code and field-level details; UI text maps to the same localization key; API response is 4xx with error.code and error.fields And on pending workflows, both channels return a reference id (e.g., reviewRequestId) with status=Pending and the same nextSteps metadata And parity tests confirm that attempts blocked in UI cannot be completed via API, and vice versa, for justification, multi-party, self-revocation, and restricted-project rules
Denied Attempt Audit Logging
Given any revocation attempt is denied by policy When the denial occurs Then an immutable audit log entry is appended with: eventType=RevocationDenied, approvalId, projectId, actorId, actorRole, channel (UI/API), timestamp (ISO8601 UTC), clientIp, userAgent, policyId, policyRule, errorCode, correlationId, and payloadHash And the entry is queryable via Audit API by approvalId, actorId, date range, and policyRule within 5 seconds of the event And audit entries are visible in the project audit UI to users with AuditViewer role and are not editable or deletable by any non-system user And exporting audit logs to CSV includes the above fields and passes checksum verification
Per-Project Policy Templates and Overrides
Given an Org Admin applies the "Small Firm Standard" policy template to a project When the template is applied Then the project's revocation policy reflects template defaults for justification, multi-party confirmation, self-revocation, and restricted project rules And project-level overrides can strengthen but not weaken org-level minimums; attempts to weaken return 409 with error.code=POLICY_MINIMUMS_VIOLATION And the active policy snapshot (template name, overrides, version) is visible in the project settings UI and retrievable via Policy API GET /projects/{id}/revocation-policy And policy changes are versioned with who/when/what and take effect for new revocation attempts within 60 seconds; change events are logged and broadcast to the project team
Supersession Visualization UI
"As an architect, I want to see at a glance which approvals have been superseded and by which version so that I can navigate to the current source of truth."
Description

Provide clear visualizations in the drawing/version timeline and approval panels to indicate when an approval is voided or superseded. Show a chain view with backward/forward links, reason tooltips, and badges that highlight the current authoritative version. Offer one-click navigation between superseded and superseding versions, filters to view only current approvals, and a prominent Revalidate call-to-action on affected items. Ensure responsive design and accessibility, and integrate status cues in thumbnails and list views to prevent silent use of stale versions.

Acceptance Criteria
Timeline Indicators and Authoritative Badges
Given a version V1 approved and later superseded by V2, When the version timeline renders, Then V1 displays a Superseded badge and V2 displays an Authoritative badge. Given a version whose approval was voided with no superseding version, When the timeline renders, Then that version displays an Approval Voided indicator and no Authoritative badge. Given a chain V1 -> V2 -> V3 where V3 is latest, When the timeline renders, Then only V3 is marked Authoritative and both V1 and V2 are marked Superseded. Given any version with a revocation record, When its status icon is hovered with a mouse or focused via keyboard, Then a tooltip appears showing the revocation reason, impact, actor, and timestamp.
Approval Panel Chain View with Backward/Forward Links
Given a version that has a revocation or supersession chain, When the approval panel opens, Then a chain view shows each related version node with status (Approved, Superseded, Voided), actor, and timestamp. Given the chain view, When the current node is not the latest, Then a forward link labeled "View superseding version" is available. Given the chain view, When the current node has a predecessor, Then a back link labeled "View superseded version" is available. Given the chain nodes, When a node is focused, Then a tooltip or popover reveals the revocation reason and impact text for that transition.
One-Click Navigation Between Versions
Given V1 is superseded by V2, When the user activates "View superseding version" via click or keyboard (Enter/Space), Then the UI navigates to V2's detail view without a full page reload, updates breadcrumbs, and sets focus on V2's header. Given the user followed a chain link, When navigation completes, Then the target version is scrolled into view in the timeline and highlighted as selected (aria-current=true). Given a chain link points to a version the user cannot access, When the link is activated, Then the UI shows an access denied message and does not change the current selection.
Filter: Current Approvals Only
Given a project with mixed current and superseded approvals, When the user toggles the "Current approvals only" filter on, Then all superseded and voided approvals are hidden from the timeline, approval list, and thumbnails. Given the filter is on, When counts are displayed (e.g., "Approvals: n"), Then counts reflect only currently authoritative approvals. Given the filter is toggled, When the page is refreshed or a shareable URL is copied, Then the filter state persists via URL parameters and session storage. Given the filter is on, When the user clicks "Clear filters", Then all items reappear and counts recompute.
Prominent Revalidate Call-to-Action on Affected Items
Given a revocation that impacts dependent items (e.g., markups, tasks), When viewing any affected item, Then a prominent "Revalidate" button is displayed with an accessible label describing the dependency. Given the "Revalidate" button is activated, When the revalidation modal opens, Then it is prefilled with the affected version(s), shows the revocation reason, and provides Confirm and Cancel actions. Given the user confirms revalidation, When the modal closes, Then the item status updates to "Pending Revalidation" or "Revalidated" as appropriate, the banner is dismissed, and the timeline/list reflect the new state. Given the viewer lacks permission to revalidate, When viewing the affected item, Then the "Revalidate" button is disabled and a tooltip explains the required role.
Status Cues in Thumbnails and Lists to Prevent Stale Use
Given a superseded version is shown in a thumbnail grid or list, When it renders, Then it displays a "Superseded" badge/ribbon and the authoritative version is indicated with a checkmark badge. Given the user opens a superseded version, When the detail view loads, Then a warning banner states "This version is superseded" and offers a one-click "Switch to authoritative version" action. Given the user attempts to share, export, or request approval from a superseded version, When the action is initiated, Then an intercept dialog warns about staleness and defaults the primary action to use the authoritative version, requiring explicit confirmation to proceed with the superseded one. Given a voided approval, When displayed in thumbnails or lists, Then actions that would promote use (share/export/request approval) are disabled with an explanatory tooltip.
Accessibility and Responsive Design Compliance
Given a keyboard-only user, When navigating the timeline, chain view, filters, and Revalidate controls, Then all interactive elements are reachable in a logical order, operable via Enter/Space, and visibly focused. Given a screen reader user, When encountering badges and chain links, Then each control has an accessible name and state (e.g., Authoritative, Superseded, Voided), tooltips are exposed via aria-describedby, and dynamic status changes are announced via an aria-live region. Given color-blind accessibility needs, When viewing status cues, Then color is not the sole indicator; text labels or icons accompany color and meet a minimum 4.5:1 contrast ratio. Given different devices, When the viewport is <=480px, 481–768px, 769–1024px, and >1024px wide, Then the timeline and chain view adapt without loss of information, hit targets are at least 44x44 px on touch, and tooltips reposition to remain fully visible.
Revocation Events API & Webhooks
"As an integrator, I want webhook events and an API for revocations so that my downstream systems remain consistent with PlanPulse."
Description

Expose secure REST endpoints to query approval revocation chains and let clients subscribe to webhook events for revocations, supersessions, and revalidations. Provide signed webhook deliveries with retries, idempotency keys, and payload schemas that include the reason, impact, affected entities, and links to superseding versions. Implement pagination, filtering, rate limits, and API permissions aligned with role policies. Publish comprehensive developer documentation and examples so that external systems (e.g., PM tools, document controllers) can stay synchronized.

Acceptance Criteria
Query Revocation Chain Endpoint
Given a valid API token with read:revocations scope and access to project P When the client GETs /api/v1/revocations/chain?approval_id={approval_id} Then the response is 200 application/json with an array ordered by created_at ascending representing the full chain from original approval to current state And each item includes id (UUID), event_type ∈ {revoked, superseded, revalidated}, reason (string; required for revoked/superseded), impact ∈ {none, minor, major, critical}, affected_entities[] [{type,id,change_summary}], actor {id,name}, created_at (ISO-8601 UTC), supersedes_id (nullable), superseded_by_id (nullable), links {self, approval, superseding_version, chain} And response includes chain_summary {current_state, length} And 400 is returned for invalid or missing approval_id; 404 for unknown or unauthorized approval_id And conditional requests with If-None-Match or If-Modified-Since return 304 when unchanged
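The chain_summary object above can be derived directly from the ordered event list. A minimal sketch (field names follow the criterion; the events themselves are illustrative):

```python
def chain_summary(events):
    """events: chain items ordered by created_at ascending (per the criterion)."""
    if not events:
        return {"current_state": None, "length": 0}
    # The newest event in the ascending chain determines the current state.
    return {"current_state": events[-1]["event_type"], "length": len(events)}

chain = [
    {"event_type": "revoked", "created_at": "2024-05-01T10:00:00Z"},
    {"event_type": "revalidated", "created_at": "2024-05-02T09:30:00Z"},
]
assert chain_summary(chain) == {"current_state": "revalidated", "length": 2}
```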
Signed Webhook Deliveries with Retries and Idempotency
Given a subscription exists for events [revocation.created, revocation.superseded, revocation.revalidated] When such an event occurs Then a POST is sent within 5 seconds to the subscriber URL with JSON body and headers: X-PlanPulse-Event-Id, X-PlanPulse-Event-Type, X-PlanPulse-Timestamp, X-PlanPulse-Signature, Idempotency-Key And X-PlanPulse-Signature is HMAC-SHA256 over (timestamp + body) using the subscription secret; the example in docs verifies against the published sample And non-2xx or timeouts >10s trigger exponential backoff retries (max 8 attempts over 24h); retries stop after first 2xx; 410 Gone disables the subscription And all retries reuse the same Event-Id and Idempotency-Key for idempotence And each delivery attempt is recorded and retrievable via GET /api/v1/webhooks/deliveries/{event_id} with final_status and attempt_count
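Subscribers would verify deliveries by recomputing the HMAC. A sketch of the scheme named above (HMAC-SHA256 over timestamp + body, compared in constant time); the secret and payload values are illustrative:

```python
import hashlib
import hmac

def verify_planpulse_signature(secret: str, timestamp: str, body: bytes,
                               signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over (timestamp + body) and compare in constant time."""
    mac = hmac.new(secret.encode(), timestamp.encode() + body, hashlib.sha256)
    return hmac.compare_digest(mac.hexdigest(), signature_hex)

# Simulated delivery (values are illustrative):
secret = "whsec_example"
timestamp = "2024-05-01T12:00:00Z"
body = b'{"event_type":"revocation.created"}'
sig = hmac.new(secret.encode(), timestamp.encode() + body, hashlib.sha256).hexdigest()
assert verify_planpulse_signature(secret, timestamp, body, sig)
assert not verify_planpulse_signature(secret, timestamp, b'{"tampered":true}', sig)
```

Comparing with `hmac.compare_digest` (rather than `==`) avoids timing side channels when checking the `X-PlanPulse-Signature` header.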
Webhook Subscription Management API
Given a user with role integration_admin on project P and a valid token When they POST /api/v1/webhooks/subscriptions with {url:https, event_types[], secret(optional)} Then response is 201 with subscription {id, url, event_types, status:active, created_at, secret_last4} And url must be HTTPS and resolvable; 422 returned for invalid url or event_types; secret is autogenerated if absent and never returned in full And GET /api/v1/webhooks/subscriptions supports filtering by event_type and status; PATCH allows status changes (active|paused) and secret rotation; DELETE performs soft delete and stops deliveries And POST supports Idempotency-Key header; repeated creates with same key within 24h return 201 with the same id without side effects And POST /api/v1/webhooks/subscriptions/{id}/test returns 202 and enqueues a webhook.test delivery
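The Idempotency-Key semantics above (repeated creates return the same subscription with no side effects) can be sketched with an in-memory store; the 24-hour key expiry and secret handling are elided, and all names are assumptions:

```python
import uuid

class SubscriptionStore:
    """In-memory sketch of idempotent subscription creation (24h window elided)."""
    def __init__(self):
        self._by_idempotency_key = {}
        self._subs = {}

    def create(self, url, event_types, idempotency_key=None):
        if idempotency_key and idempotency_key in self._by_idempotency_key:
            # Repeated create with the same key returns the same subscription.
            return self._by_idempotency_key[idempotency_key]
        sub = {"id": str(uuid.uuid4()), "url": url,
               "event_types": event_types, "status": "active"}
        self._subs[sub["id"]] = sub
        if idempotency_key:
            self._by_idempotency_key[idempotency_key] = sub
        return sub

store = SubscriptionStore()
a = store.create("https://example.com/hook", ["revocation.created"], "key-1")
b = store.create("https://example.com/hook", ["revocation.created"], "key-1")
assert a["id"] == b["id"]  # same key within the window -> same subscription
```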
Revocation Event Payload Schema Completeness
Given any webhook event for revocation, supersession, or revalidation When a delivery is sent Then the JSON payload conforms to a versioned schema: {schema_version, event_id, event_type, occurred_at, approval_id, project_id, reason, impact, affected_entities[], superseding_version_id (nullable), links {approval, superseding_version, chain}, actor, previous_state, current_state} And reason is required for event_type ∈ {revocation.created, revocation.superseded} and null for revocation.revalidated And affected_entities entries include type ∈ {drawing, sheet, markup, comment, attachment}, id (string), change_summary (string) And schema_version follows semver; breaking changes increment major and are announced in changelog And payload size ≤ 256 KB; if exceeded, payload is replaced with minimal envelope plus link to GET full details
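The reason rule and the minimal-envelope fallback for oversized payloads could be enforced before delivery roughly as follows (a sketch; field names come from the schema above, the function name is an assumption):

```python
import json

MAX_PAYLOAD_BYTES = 256 * 1024
REASON_REQUIRED = {"revocation.created", "revocation.superseded"}

def prepare_delivery(payload: dict) -> dict:
    """Validate the reason rule; replace oversized payloads with a minimal envelope."""
    et = payload["event_type"]
    if et in REASON_REQUIRED and not payload.get("reason"):
        raise ValueError(f"reason is required for {et}")
    if et == "revocation.revalidated" and payload.get("reason") is not None:
        raise ValueError("reason must be null for revocation.revalidated")
    body = json.dumps(payload).encode()
    if len(body) > MAX_PAYLOAD_BYTES:
        # Minimal envelope plus a link to GET the full details.
        return {"schema_version": payload["schema_version"],
                "event_id": payload["event_id"],
                "event_type": et,
                "truncated": True,
                "links": {"full_details": payload["links"]["chain"]}}
    return payload

p = {"schema_version": "1.0.0", "event_id": "e1",
     "event_type": "revocation.created", "reason": "issued in error",
     "links": {"chain": "https://api.example/chain"}}
assert prepare_delivery(p) == p  # small, valid payload passes through unchanged
```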
Filtering, Pagination, and Sorting of Revocation Events API
Given multiple events exist for project P When the client GETs /api/v1/revocations with filters approval_id, event_type, actor_id, occurred_at_gte, occurred_at_lte and sort=-occurred_at and page[limit], page[cursor] Then response is 200 with stable, opaque cursor-based pagination; page[limit] ∈ [1,100], default 25 And results honor all filters and sort; next/prev cursors are included when applicable; empty results return 200 with an empty array and no next cursor And standard rate-limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset) are present on responses And P95 response time ≤ 500 ms for limit=25 on datasets up to 50k events within project P
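One way to implement the "stable, opaque" cursor is to base64-encode the last item's sort keys; clients treat the string as opaque. A sketch of that scheme (one possible design, not necessarily what the server uses):

```python
import base64
import json

def encode_cursor(last_id: str, last_occurred_at: str) -> str:
    """Opaque cursor: base64url of the last returned item's sort keys."""
    raw = json.dumps({"id": last_id, "occurred_at": last_occurred_at})
    return base64.urlsafe_b64encode(raw.encode()).decode()

def decode_cursor(cursor: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(cursor.encode()))

cur = encode_cursor("evt_123", "2024-05-01T12:00:00Z")
assert decode_cursor(cur) == {"id": "evt_123", "occurred_at": "2024-05-01T12:00:00Z"}
```

Keyset cursors like this stay stable when new events are inserted, unlike offset pagination, which can skip or duplicate rows between pages.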
Role-Based Access Control and Permissions Alignment
Given a token scoped to project P with assigned role When the client requests revocation chains or manages webhook subscriptions Then read access is allowed for roles ∈ {architect, project_lead, integration_admin}; write/manage subscriptions is allowed only for integration_admin And clients (external stakeholders) may read only events for approvals they originated; cross-project access is denied with 403; non-visible resources return 404 to unauthorized users And all access attempts are audited with actor, action, target, timestamp, and outcome
Developer Documentation and Examples Published
Given an external developer needs to integrate with PlanPulse When they visit the developer portal Then an OpenAPI 3.1 specification (JSON and YAML) for the Revocation Events API and Webhook endpoints is available and accurate to deployed behavior And event payload JSON Schemas with examples are published and downloadable; sample payloads cover each event_type and edge cases (no reason, large affected_entities) And runnable examples for cURL, Node.js, and Python show: querying chains, creating subscriptions with Idempotency-Key, and verifying webhook signatures And a quickstart guide includes a public test endpoint and a mock server to simulate webhook deliveries; a changelog documents versions, breaking changes, and deprecation timelines And documentation includes authentication, rate limits, errors, and retry semantics; links to support and issue reporting are present

KeySafe Roles

Manage signing keys and policies by role with support for hardware keys and time‑bound delegation. Rotate keys during staff changes without breaking the ledger, set who can sign what and when, and maintain continuity with emergency escrow—all while keeping signatures verifiable and accountable.

Requirements

Role-Based Signing Policies
"As a project administrator, I want to define signing permissions by role and document type so that only authorized stakeholders can approve specific drawings and revisions."
Description

Define and enforce granular signing permissions by role across projects, document types, and workflows. Administrators can create reusable policy templates that specify who can sign, what actions they can take (approve, reject, countersign), required co-signers or quorum thresholds, and contextual constraints such as time windows, project phase, and maximum approval limits. Policies are applied at the workspace, project, or folder level and integrate directly with PlanPulse’s one‑click approval UI and versioned drawing workflows, ensuring that only authorized roles can sign specific revisions. All policy decisions are recorded with a snapshot of the policy at signature time to preserve verifiability and accountability.

Acceptance Criteria
Create and Apply Policy Templates by Scope
Given I am an administrator with permission to manage signing policies in workspace W When I create a policy template T with rules for roles/actions, co-signers/quorum, contextual constraints (time windows, project phases), and maximum approval limits Then template T is saved as version 1 with a unique immutable version hash And When I apply T(v1) to workspace W, project P in W, or folder F in P Then the system records the scope binding and effective timestamp And evaluation for items in a scope uses the most specific bound policy (folder > project > workspace) And removing a binding at a scope immediately reverts evaluation to the next broader bound policy And all create/apply/remove events are audit-logged with actor, timestamp, scope, and policy version
UI Enforcement of Role Actions
Given user U with role R in project P is authenticated and viewing revision Rev123 of drawing D in folder F And effective policy evaluation for U on Rev123 allows {Approve, Reject} and disallows {Countersign} When U opens the one-click approval panel for Rev123 Then only Approve and Reject controls are enabled And Countersign is hidden or disabled with a policy reason tooltip And if U invokes the sign API for a disallowed action, the server returns 403 POLICY_VIOLATION with a machine-readable reason code and the evaluated policy version hash
Quorum and Co‑Signer Requirements
Given policy T(v1) on folder F requires a 2-of-3 quorum among roles {Lead Architect, Client Rep, QA} for Approve on document type "Construction Docs" When the first eligible signer submits an Approve Then the signature state becomes "Pending Quorum (1/2)" When a second distinct eligible signer submits Approve Then the revision transitions to "Approved" and records both signers and their roles And if any required role submits a Reject before quorum is met, the revision transitions to "Rejected" and pending approvals are voided And additional Approve signatures beyond quorum are recorded as "Additional Confirmation" without changing the final state
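The quorum state transitions above can be sketched as a small state machine (names and structure are assumptions; distinct-signer and additional-confirmation handling follow the criterion):

```python
class QuorumApproval:
    """Sketch of an m-of-n quorum flow for a single revision."""
    def __init__(self, required: int, eligible_roles: set):
        self.required = required
        self.eligible_roles = eligible_roles
        self.approvals = []  # (signer_id, role)
        self.extra = []      # confirmations beyond quorum
        self.state = f"Pending Quorum (0/{required})"

    def approve(self, signer_id: str, role: str) -> str:
        if role not in self.eligible_roles or self.state == "Rejected":
            return self.state
        if self.state == "Approved":
            self.extra.append((signer_id, role, "Additional Confirmation"))
            return self.state
        if any(s == signer_id for s, _ in self.approvals):
            return self.state  # distinct signers only
        self.approvals.append((signer_id, role))
        if len(self.approvals) >= self.required:
            self.state = "Approved"
        else:
            self.state = f"Pending Quorum ({len(self.approvals)}/{self.required})"
        return self.state

    def reject(self, signer_id: str, role: str) -> str:
        if role in self.eligible_roles and self.state != "Approved":
            self.approvals.clear()  # pending approvals are voided
            self.state = "Rejected"
        return self.state

q = QuorumApproval(2, {"Lead Architect", "Client Rep", "QA"})
assert q.approve("u1", "QA") == "Pending Quorum (1/2)"
assert q.approve("u2", "Client Rep") == "Approved"
```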
Time Window and Project Phase Constraints
Given policy T(v1) restricts signing to 09:00–18:00 in project timezone America/Chicago and to phases {"Design Development","Construction Docs"} When a signer attempts to approve at 19:30 project local time Then the action is blocked with reason "Outside allowed time window" When the project phase is "Concept" Then the action is blocked with reason "Disallowed in current phase" When the signer acts at 10:15 during "Design Development" Then the action proceeds and is recorded as compliant with T(v1)
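The time-window and phase checks above hinge on evaluating "project local time" correctly. A sketch using the scenario's values (the function name is an assumption):

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

ALLOWED_PHASES = {"Design Development", "Construction Docs"}
WINDOW_START, WINDOW_END = time(9, 0), time(18, 0)
PROJECT_TZ = ZoneInfo("America/Chicago")

def check_signing(now_utc: datetime, phase: str):
    """Evaluate the policy's time-window and phase constraints in project local time."""
    local = now_utc.astimezone(PROJECT_TZ)
    if not (WINDOW_START <= local.time() < WINDOW_END):
        return (False, "Outside allowed time window")
    if phase not in ALLOWED_PHASES:
        return (False, "Disallowed in current phase")
    return (True, "Compliant")

# 15:15 UTC on 2024-06-03 is 10:15 CDT: inside the window, allowed phase.
ok, _ = check_signing(datetime(2024, 6, 3, 15, 15, tzinfo=timezone.utc),
                      "Design Development")
assert ok
```

Storing the window in the project's IANA timezone (rather than a fixed UTC offset) keeps the 09:00–18:00 boundary correct across daylight-saving transitions.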
Maximum Approval Limit Checks
Given policy T(v1) sets a maximum approval limit of 10000 for role "Project Manager" on numeric attribute "ApprovalValue" And revision Rev456 has ApprovalValue = 12000 When a user with role "Project Manager" attempts to Approve Rev456 Then the action is blocked with reason "Exceeds approval limit (12000 > 10000)" and API responses return 403 POLICY_LIMIT When Rev456 has ApprovalValue = 10000 Then the Approve action is permitted And when a user with a higher limit role approves above 10000 as allowed, the event records the approver’s role and limit applied
Policy Snapshot at Signature Time
Given a signature event occurs on revision Rev789 under policy T(v2) When the signature is recorded Then the system stores an immutable policy snapshot including: template ID, version number, version hash, bound scope IDs (workspace/project/folder), evaluated rules (allowed actions, constraints, quorum), actor role, evaluation timestamp, and reason codes And the snapshot is retrievable in the signature details UI and via audit API And future edits creating T(v3) do not alter the snapshot attached to Rev789’s signature And the snapshot is included in the signature’s verification payload with a content hash that validates integrity
Policy Inheritance and Conflict Resolution
Given a workspace-level policy Tw(v1), a project-level policy Tp(v1), and a folder-level policy Tf(v1) When evaluating a signing request for an item within that folder Then Tf takes precedence over Tp and Tw for overlapping rules And if a rule is undefined in Tf, evaluation falls back to Tp; if undefined in Tp, it falls back to Tw And if conflicting allows/denies exist for the same action across levels, the more specific scope’s rule is enforced exclusively And the evaluator produces a deterministic report listing each rule’s source scope and version
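The folder > project > workspace precedence and fall-back behavior can be sketched as a deterministic resolver that also produces the per-rule source report the criterion calls for (structure and names are assumptions):

```python
def resolve_rule(action, bound_policies):
    """bound_policies: scope -> {"version": str, "rules": {action: "allow"|"deny"}}.
    The most specific scope that defines the rule wins; undefined rules fall back."""
    report = []
    for scope in ("folder", "project", "workspace"):  # most specific first
        policy = bound_policies.get(scope)
        if policy is not None and action in policy["rules"]:
            report.append({"scope": scope, "version": policy["version"],
                           "decision": policy["rules"][action]})
    if not report:
        return {"decision": "deny", "source": None, "report": report}
    # report[0] is the most specific scope defining the rule; it is enforced exclusively.
    return {"decision": report[0]["decision"], "source": report[0], "report": report}

policies = {
    "workspace": {"version": "Tw(v1)", "rules": {"Approve": "allow", "Countersign": "allow"}},
    "project":   {"version": "Tp(v1)", "rules": {"Countersign": "deny"}},
    "folder":    {"version": "Tf(v1)", "rules": {}},
}
assert resolve_rule("Countersign", policies)["source"]["scope"] == "project"
assert resolve_rule("Approve", policies)["decision"] == "allow"
```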
Hardware Key Authentication (WebAuthn/FIDO2)
"As a signer, I want to use a hardware security key when approving drawings so that my signatures are secure and resistant to phishing or credential theft."
Description

Enable strong, phishing‑resistant authentication for all signing operations using FIDO2/WebAuthn hardware security keys (e.g., YubiKey) and supported platform authenticators. Users can register multiple keys, set a primary, and recover via backup keys subject to role policy. Administrators can require step‑up hardware verification per role, document sensitivity, or transaction amount, and can enforce attestation policies where needed. The flow integrates with PlanPulse’s approval actions so that a hardware prompt appears seamlessly during signature, with device/browser compatibility guidance and fallbacks limited by policy and audit logging.

Acceptance Criteria
Enroll Primary Hardware Key With Attestation
Given a logged-in user with a role requiring hardware-key enrollment and an attestation allowlist, When the user registers a FIDO2/WebAuthn authenticator, Then the system requests attestation and accepts only if the attestation format and AAGUID meet policy and the attestation chain validates. Given the user successfully registers the first authenticator, When registration completes, Then the credential is stored with AAGUID, attestation metadata, and marked as Primary, and an audit log records user, RP ID, AAGUID, attestation result, and timestamp. Given the authenticator does not meet attestation policy, When registration is attempted, Then registration is blocked with a specific, actionable error message indicating non-compliance and the attempt is audit-logged with reason.
Step-Up Verification During Document Signature
Given role, document sensitivity, or transaction amount triggers step-up, When the user clicks Approve/Sign on a document version in PlanPulse, Then a WebAuthn assertion prompt is displayed inline without a full page reload, and approval proceeds only after successful assertion. Given a successful hardware assertion, When the server verifies rpId, origin, challenge bound to the document hash/version and amount, and signature counter, Then the signature is recorded as verified and the approval action is committed, with audit log linking assertion ID to the document version. Given the user cancels or the assertion fails verification, When the approval flow runs, Then the approval is not recorded, the user sees a clear failure reason, and the failed attempt is audit-logged with error code and policy that triggered step-up.
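Binding the WebAuthn challenge to the document hash/version and amount might look like the sketch below. This covers only challenge and origin checks on clientDataJSON; a real verifier must also validate authenticatorData (rpIdHash, flags, signature counter) and the assertion signature, typically via a FIDO2 library. The origin and challenge-derivation scheme are assumptions:

```python
import base64
import hashlib
import hmac
import json

EXPECTED_ORIGIN = "https://app.planpulse.example"  # hypothetical origin

def make_challenge(document_hash: str, version: str, amount: int, nonce: bytes) -> bytes:
    # One possible way to bind the challenge to what is being approved.
    return hashlib.sha256(f"{document_hash}|{version}|{amount}|".encode() + nonce).digest()

def verify_client_data(client_data_json: bytes, expected_challenge: bytes) -> bool:
    data = json.loads(client_data_json)
    enc = data.get("challenge", "")
    sent = base64.urlsafe_b64decode(enc + "=" * (-len(enc) % 4))  # restore padding
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == EXPECTED_ORIGIN
            and hmac.compare_digest(sent, expected_challenge))

# Simulated assertion round-trip:
challenge = make_challenge("sha256:ab12", "Rev123", 0, b"server-nonce")
enc = base64.urlsafe_b64encode(challenge).decode().rstrip("=")
client_data = json.dumps(
    {"type": "webauthn.get", "challenge": enc, "origin": EXPECTED_ORIGIN}).encode()
assert verify_client_data(client_data, challenge)
```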
Manage Multiple Keys and Primary Selection
Given a user with an existing primary hardware key, When the user adds an additional key, Then the new credential is stored as a backup (non-primary) and is available for authentication per policy with audit logging of enrollment. Given a user with two or more keys, When the user sets a different key as Primary, Then exactly one key is marked Primary, subsequent prompts default to the Primary, and the change is audit-logged with actor and timestamp. Given a role policy that requires at least one hardware key, When the user attempts to delete the last remaining key, Then the deletion is blocked with guidance to enroll a replacement first and the attempt is logged.
Enforce Role- and Sensitivity-Based Hardware Policies
Given an admin configures policies requiring hardware step-up for specified roles, document classifications, or amounts above a threshold, When a matching action is initiated, Then the system enforces a hardware assertion and blocks non-hardware methods. Given an attestation policy specifying allowed AAGUIDs/cert roots and disallowed platform authenticators for certain roles, When a user attempts registration or uses an authenticator for step-up, Then non-compliant authenticators are rejected and the decision (policy version, rule matched) is audit-logged. Given an action that does not meet any step-up criteria, When the user approves/signs, Then no hardware prompt is shown and the absence of step-up is traceable to the evaluated policy in logs.
Rotate Signing Keys Without Ledger Breakage
Given a staff change requires key rotation, When an admin disables an old credential and the user enrolls a new compliant hardware key, Then historical signatures remain verifiable against the old public key, and new signatures bind to the new credential without altering ledger history. Given a credential is deactivated, When the user attempts to sign with that credential, Then the attempt is blocked, the user is prompted to use an active key, and the event is audit-logged with deactivation reason. Given rotation completes, When viewing audit logs, Then entries show deactivation, new enrollment, actors, timestamps, and references to affected documents (if any pending at time of rotation).
Policy-Limited Fallback and Backup Key Recovery
Given a user’s primary key is unavailable and a compliant backup key exists, When the user performs a step-up during approval, Then the backup key can be used successfully and the audit log records that a backup credential was used. Given role policy disallows non-hardware fallback, When the user attempts approval without any enrolled hardware key, Then no SMS/OTP or password-only fallback is offered, approval is blocked, and the user is guided to policy-compliant recovery. Given policy defines admin-mediated recovery, When a user cannot satisfy hardware step-up, Then a recovery request can be submitted, approval actions are prevented until recovery is completed, and notifications plus an audit trail are generated.
Cross-Platform Authenticator Compatibility Guidance
Given a browser/OS that lacks required WebAuthn capabilities or transport support, When the user attempts to enroll or sign, Then the UI displays compatibility guidance with supported browsers/OS and transports, and the action is blocked until a supported environment is used. Given multiple authenticators are available, When prompting for step-up, Then the system prioritizes and suggests policy-compliant authenticators (e.g., cross‑platform hardware over platform if required) and records the selected transport (USB/NFC/BLE) in logs. Given an authenticator is removed mid-prompt or times out, When the user retries within the same flow, Then the system gracefully re-prompts without losing the pending approval context and logs the timeout/cancel event.
Time‑Bound Delegation & Scheduled Access
"As a project lead, I want to delegate my signing authority during leave within a defined time window so that approvals continue without compromising access control."
Description

Allow authorized users to delegate their signing authority to another user for a defined time window and scope (projects, folders, or document types). Delegations can be scheduled in advance, auto‑activate at start time, and auto‑expire, with optional daily time windows and role constraints. Delegations are clearly indicated in the UI, included in signature receipts, and logged with the delegator, delegate, scope, and policy snapshot. Notifications are sent on create/activate/expire, and any pending approvals are routed to the delegate during the active window to maintain continuity without broadening standing permissions.

Acceptance Criteria
Auto-Activation and Auto-Expiration of Scheduled Delegation
Given a delegator has created a delegation to a delegate with a defined start and end timestamp and a valid scope When the current system time is before the start timestamp Then the delegate cannot view, approve, or sign on behalf of the delegator within that scope and any attempt returns a 403 error with reason "Delegation inactive" and is audit-logged When the current system time reaches the start timestamp Then the delegation status changes to Active within 60 seconds, and all pending approvals for the delegator within the defined scope are reassigned to the delegate And both delegator and delegate receive a "Delegation Activated" notification within 60 seconds When the current system time reaches the end timestamp Then the delegation auto-expires within 60 seconds, reassigned tasks revert to the delegator, and the delegate is blocked from further actions under that delegation And both parties receive a "Delegation Expired" notification within 60 seconds
Daily Time Window Enforcement Within a Multi-Day Delegation
Given a delegation is scheduled across multiple days with a daily active window of 09:00–17:00 in the delegator’s configured timezone When the current local time is outside 09:00–17:00 but within the overall start/end dates Then the delegation is considered Inactive for that period and the delegate cannot act on behalf of the delegator When the current local time is between 09:00–17:00 and within the overall start/end dates Then the delegation becomes Active within 60 seconds and the delegate can act within scope And activation/pausing across the daily window boundaries is reflected in the UI and audit log with accurate local and UTC timestamps
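The combination of an overall start/end window and a daily local-time window can be sketched as a single activity check (names are assumptions; the timezone handling is the point):

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

def delegation_active(now_utc, start_utc, end_utc, daily_window=None, tz="UTC"):
    """daily_window: optional (start, end) datetime.time pair in the delegator's timezone."""
    if not (start_utc <= now_utc < end_utc):
        return False  # outside the overall delegation dates
    if daily_window is not None:
        local = now_utc.astimezone(ZoneInfo(tz))
        lo, hi = daily_window
        if not (lo <= local.time() < hi):
            return False  # within the dates but outside today's window
    return True

start = datetime(2024, 6, 3, 0, 0, tzinfo=timezone.utc)
end = datetime(2024, 6, 8, 0, 0, tzinfo=timezone.utc)
window = (time(9, 0), time(17, 0))
# 14:00 UTC on 2024-06-03 is 09:00 in America/Chicago (CDT): active.
assert delegation_active(datetime(2024, 6, 3, 14, 0, tzinfo=timezone.utc),
                         start, end, window, "America/Chicago")
# 23:30 UTC is 18:30 local: inside the overall dates but outside the daily window.
assert not delegation_active(datetime(2024, 6, 3, 23, 30, tzinfo=timezone.utc),
                             start, end, window, "America/Chicago")
```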
Scope-Based Routing of Pending and New Approvals to Delegate
Given a delegation is defined with scope including Project Alpha (all folders) and Document Type = "Markups" only When the delegation is Active Then all pending approvals belonging to the delegator that match Project Alpha and Document Type = "Markups" are routed/assigned to the delegate within 60 seconds And pending approvals outside the defined scope (e.g., other projects, other document types) remain with the delegator And any new approvals created during the active window that match the scope are assigned to the delegate upon creation And routing reverts to the delegator immediately upon delegation expiration or pause (outside daily window)
Role Constraint Validation and Non-Broadening of Permissions
Given a delegation is being created with a scope and required signing policy that demands Role = "Project Approver" When the selected delegate does not hold the required role or equivalent per policy at creation time Then the system blocks delegation creation with a validation error detailing the missing role(s) When a delegation is Active and the delegate attempts to sign outside the authorized scope or required role constraints Then the action is denied with a 403 error "Out of scope or role constraint" and the event is audit-logged And a valid delegation never grants the delegate access to projects/folders/document types or actions beyond what is explicitly scoped and permitted by policy
UI Indicators and Signature Receipts for Delegated Signatures (with Hardware Key Support)
Given a delegation is Active and the delegate initiates a signature within the authorized scope When the delegate signs using their registered key (hardware or software) Then the UI clearly displays "Acting on behalf of [Delegator]" and a delegation badge on the approval/sign dialog and item details And the resulting signature receipt includes: delegate identity, delegator identity, delegation ID, scope summary, key type and identifier (e.g., hardware key serial/fingerprint), timestamp, and policy snapshot hash And the receipt statement reads "Signed by [Delegate] on behalf of [Delegator] under Delegation [ID]" and is verifiable by checking the signature chain and policy snapshot And the signed item in the activity feed is labeled as a delegated signature
Notifications on Delegation Create, Activate, and Expire
Given a delegation is created with future start and defined end When the delegation is created Then delegator and delegate each receive an in-app and email notification within 60 seconds containing delegator, delegate, scope, start/end, and daily window (if any) When the delegation auto-activates at start time Then delegator and delegate each receive in-app and email notifications within 60 seconds confirming activation and listing the active scope When the delegation auto-expires at end time Then delegator and delegate each receive in-app and email notifications within 60 seconds confirming expiration and that routing has reverted And all notifications are captured in the audit log with delivery status
Comprehensive Audit Logging with Policy Snapshot for Delegations
Given delegation lifecycle events occur (Create, Activate, Daily-Window Start/Pause, Expire) and delegated signatures are performed When any such event happens Then an immutable audit log entry is recorded containing: event type, timestamp (UTC), delegator ID, delegate ID, delegation ID, scope definition, policy snapshot hash, actor ID (if applicable), and outcome And audit entries are retrievable by delegation ID and time range, exportable as JSON/CSV, and include cryptographic hashes to detect tampering And an audit trail for a delegated signature links the signature receipt to the activation event and policy snapshot used at signing time
Seamless Key Rotation with Ledger Continuity
"As a security administrator, I want to rotate signing keys when staff change or on schedule so that future approvals remain secure without breaking verification of past signatures."
Description

Provide controlled key rotation for users and service accounts without invalidating historical signatures. Each key is versioned with a unique key ID, and signatures embed the key ID and policy snapshot to ensure verifiable provenance. Rotation workflows include proof‑of‑possession, staged rollout, grace periods, and automated deactivation of old keys after cutoff. Admins can trigger emergency rotation on staff changes, enforce rotation cadences by role, and update role bindings atomically. The ledger preserves integrity by linking signatures to their original key versions while future signatures use the new key, avoiding breaks in verification chains.

Acceptance Criteria
Scheduled Rotation with Staged Rollout and Grace Period
Given a user with active keyId K1 and a scheduled new keyId K2 with cutoff timestamp T and a grace period G When current time is before T Then signatures using K1 and K2 are both accepted, each embedding its own keyId and the policy snapshot at signing time And the ledger links each signature to its keyId without collisions When current time is in [T, T+G) Then new signatures with K1 are rejected with a "key deactivated for signing" error while historical K1 signatures remain verifiable And K2 is the primary signing key and new signatures are marked with keyId K2 When current time is at or after T+G Then K1 status is Deactivated and cannot produce valid signatures
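The cutoff/grace evaluation above reduces to a simple decision over three intervals. A sketch (K1/K2 and the state names follow the criterion; everything else is illustrative):

```python
from datetime import datetime, timedelta, timezone

def signing_decision(key_id, now, cutoff, grace):
    """May this key produce NEW signatures? Historical verification is unaffected:
    old signatures still resolve their embedded keyId to the archived public key."""
    if key_id == "K2":
        return "accepted"
    if now < cutoff:
        return "accepted"  # before T, both keys may sign
    if now < cutoff + grace:
        return "rejected: key deactivated for signing"  # [T, T+G)
    return "rejected: key Deactivated"                  # >= T+G

T = datetime(2024, 7, 1, tzinfo=timezone.utc)
G = timedelta(hours=24)
assert signing_decision("K1", T - timedelta(hours=1), T, G) == "accepted"
assert signing_decision("K2", T + timedelta(hours=1), T, G) == "accepted"
assert signing_decision("K1", T + timedelta(hours=1), T, G).startswith("rejected")
```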
Proof-of-Possession Gate for Rotation
Given a rotation is requested for principal P replacing keyId K1 with a proposed new keyId K2 When P submits the rotation request Then the system requires proof-of-possession for K2 by having P sign a server-provided nonce N with K2 and validates the signature And requires authorization via either a signature by K1 over the rotation payload or an admin-approved emergency workflow And rejects the rotation if either proof-of-possession or authorization fails, recording reason codes And upon success, the audit log records K1 and K2 keyIds, nonce N, authorizer identity, timestamps, and hardware attestation details if applicable
Atomic Role Binding Update During Rotation
Given a role binding update and key rotation for principal P are applied together When the change is committed Then the update is atomic: either both the new role bindings and K2 activation succeed, or neither change is applied And signatures issued after commit embed the updated policy snapshot and keyId K2 And signatures issued before commit embed the previous policy snapshot and keyId K1 And no signature can be produced with a mismatched policy snapshot and keyId combination
Emergency Rotation on Staff Departure with Escrow
Given an admin triggers emergency rotation for user U due to suspected compromise or staff departure When the emergency rotation is executed Then K1 is immediately disabled for new signatures and marked as compromised And K2 is provisioned from escrow or an approved hardware device with enforced time-bound delegation if temporary access is granted And all new signatures use K2 with an embedded incident tag and policy snapshot at signing time And historical signatures under K1 remain verifiable via their keyId and are not invalidated And notifications are sent to designated watchers and U, and an audit trail entry is created with justification and approval
Service Account Zero-Downtime Rotation
Given a service account SA with active keyId K1 and staged keyId K2 and downstream verifiers that fetch public keys by keyId When rotation proceeds through the staged rollout Then both K1 and K2 public keys are published and discoverable during the rollout window And no signing or verification requests for SA fail due to missing keys during the rollout window And tokens minted before cutoff T remain valid until their configured expiry even after T And tokens minted at or after T are signed with K2 and verify successfully with K2's public key
Historical Verification Across Key Versions
Given ledger entries that were signed before and after a key rotation When a verifier validates any entry Then the verifier resolves the signature's keyId to the correct public key version and validates the signature And the policy snapshot attached to the signature is retrievable and matches what was in effect at signing time And subsequent changes to roles or policies do not alter the verification outcome of historical entries
Role-Based Rotation Cadence Enforcement
Given roles R1 and R2 with configured rotation cadences and optional grace periods When a key associated with a role reaches its rotation due date Then the system issues notifications to the key owner and admins and opens a rotation task And after the grace period elapses without rotation, signing with the overdue key is blocked with an error that indicates the due date And enforcing rotation cadence does not invalidate existing signatures; only future signing is blocked until rotation completes And reports and APIs expose compliance status by role, principal, keyId, due dates, and grace status
Emergency Escrow (Break‑Glass) with Dual Control
"As a compliance officer, I want a break‑glass process with dual authorization and audit so that urgent approvals can proceed without bypassing controls."
Description

Implement a tightly controlled emergency access mechanism that allows temporary signing capability when a key is unavailable, protected by dual authorization, time delay, and scope limitation. Escrow secrets are stored securely and require approvals from two distinct privileged roles to initiate, capturing rationale and ticket references. Resulting access is time‑boxed, restricted to specific projects or documents, and heavily audited. Automatic alerts, post‑incident reviews, and forced key rotation or revocation can be configured to restore steady‑state security. This maintains operational continuity without compromising accountability.

Acceptance Criteria
Dual Authorization to Initiate Break‑Glass
- Given a new break-glass request is submitted with target scope and TTL, When no approvals have been recorded, Then the request status is "Pending-Approval" and no signing token exists. - Given the first approver with privileged role "Security Officer" approves, When the approval is saved, Then the request status becomes "Awaiting-Second-Approval" and the approver identity is recorded. - Given the same user or same role attempts the second approval, When the submission occurs, Then the system rejects with error BG-ROLE-001 "Second approval must be from a distinct privileged role/user". - Given a second approver with privileged role "Project Owner" approves within the configured approval window, When the approval is saved, Then the system verifies distinct users and distinct roles and sets status to "Approved-Pending-Delay". - Given policy requires dual control, When the second approval is from a delegate mapped to the same role as the first, Then the system rejects with BG-ROLE-002 "Roles must be non-overlapping".
Configurable Delay Before Emergency Access
- Given a request reaches "Approved-Pending-Delay", When the delay timer starts with configured duration D (minimum 5 minutes), Then no token can be issued or used until D elapses. - Given an authorized canceller (Security Admin or either approver) cancels during delay, When cancel is invoked, Then the request transitions to "Canceled" and no token is issued. - Given D elapses without cancellation, When the system issues the emergency token, Then status becomes "Active" and an issuance audit event is recorded. - Given any signing attempt before D elapses, When a client calls the signing API with the pending request ID, Then the system returns 423 Locked with error BG-DELAY-001. - Given any state change (approval, cancel, issue), When it occurs, Then alerts are delivered to configured channels within 60 seconds.
Time‑Boxed Emergency Token Expiry and Revocation
- Given an emergency token is Active with TTL T minutes, When current time > activation_time + T, Then the token auto-expires and can no longer be used. - Given a signing attempt with an expired token, When the API is called, Then it returns 401/403 with BG-TTL-001 and records a denied event. - Given a Security Admin revokes the token early, When revocation occurs, Then subsequent signature attempts fail with 401/403 BG-REVOKE-001 and are audited. - Given the token is used to sign, When a signature is produced, Then signature metadata includes break_glass=true, request_id, approver_ids, issued_at, and expires_at. - Given the token expires or is revoked, When enforcement runs, Then all related sessions/caches are invalidated within 60 seconds.
Project/Document Scope Limitation
- Given a token is issued with allowed scope S (project_id(s)/document_id(s)), When a signing request targets an item not in S, Then the system blocks with 403 BG-SCOPE-001 and records sign_denied_out_of_scope. - Given a token is issued with scope S, When used for in-scope signing actions, Then signatures succeed; attempts to grant privileges, change scope, or access other projects are blocked with 403 BG-SCOPE-002. - Given scope S is defined at issuance, When audit metadata is viewed, Then the exact immutable scope S is present and matches enforcement decisions.
Mandatory Rationale and Ticket Reference Capture
- Given a user attempts to create a break-glass request, When rationale_text or ticket_reference is missing or fails the configured regex, Then the system rejects with 400 BG-JUSTIFY-001 and no request is created. - Given a request exists, When approvers review it, Then rationale_text and ticket_reference are displayed and must be included in the approval payloads. - Given the request lifecycle completes, When audit logs and exports are generated, Then rationale_text and ticket_reference are present, immutable, and redacted only per policy.
Comprehensive Audit Logging and Alerts
- Given any lifecycle event occurs (request_created, first_approval, second_approval, delay_started, canceled, token_issued, sign_attempt, sign_success, sign_denied, token_revoked, token_expired, review_opened, review_closed), When it happens, Then an immutable audit entry is written with timestamp (UTC), actor, actor_role, request_id, resource_ids, client_ip, user_agent, rationale, ticket, and outcome. - Given audit entries exist, When queried via the audit API, Then results are filterable by request_id, project_id, actor, and time range, and exportable as CSV and JSON. - Given alert destinations are configured (email, Slack, webhook), When critical events occur (created, approved, issued, first use, revoked, expired), Then alerts are delivered within 60 seconds with payload containing request_id, scope, TTL, approvers, and deep links; delivery outcomes are recorded.
Post‑Incident Review and Enforced Key Rotation
- Given policy post_incident_review=true, When a token expires or is revoked, Then a review task is auto-created for the Security Team with due_date <= 7 days and status "Open". - Given policy force_rotation_after_break_glass=true for impacted key(s), When the review opens, Then rotation of the affected primary signing key(s) is scheduled within 24 hours; new keys are activated and verification of historical signatures remains valid. - Given rotation completes, When verifying signatures made before and after rotation, Then pre-incident signatures validate against archived keys and new signatures validate against the new keys. - Given policy block_reuse_until_review=true, When a new break-glass is requested for the same project while review is Open, Then the system rejects with BG-POLICY-001. - Given the review is completed, When reviewers submit findings and mark Closed, Then the review status is "Closed" and any temporary restrictions are lifted per policy.
Verifiable Signature Ledger & Reporting
"As an auditor, I want verifiable records and exports of all signatures and policies applied so that I can confirm authenticity and demonstrate compliance to stakeholders."
Description

Maintain a tamper‑evident ledger of all signing events capturing signer identity, role, key ID/version, policy snapshot hash, timestamp, IP/device metadata, and the exact drawing revision hash. Expose verification badges in the UI, downloadable signature receipts for clients, and exportable reports (CSV/JSON) with filters by project, role, status, and time range. Provide webhook and SIEM integrations for security monitoring, plus dashboards for policy adherence, rotation status, and delegation activity. This ensures external and client‑side verifiability while supporting audits and compliance reporting.
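Tamper evidence typically comes from hash-chaining: each entry's chain hash covers the previous chain hash plus the entry body, so mutating any historical entry breaks verification of every later link. A minimal sketch under that assumption (field subset and genesis value are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # chain hash used before the first entry exists

def append_entry(ledger: list, entry: dict) -> None:
    """Append an entry whose chain_hash binds it to the previous entry."""
    prev = ledger[-1]["chain_hash"] if ledger else GENESIS
    body = json.dumps(entry, sort_keys=True)
    chain_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    ledger.append({"entry": entry, "chain_hash": chain_hash})

def verify_chain(ledger: list) -> bool:
    """Recompute every link; any mutation after the fact surfaces here."""
    prev = GENESIS
    for row in ledger:
        body = json.dumps(row["entry"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != row["chain_hash"]:
            return False
        prev = row["chain_hash"]
    return True

ledger = []
append_entry(ledger, {"signer_user_id": "u1", "key_id": "K1", "status": "signed"})
append_entry(ledger, {"signer_user_id": "u2", "key_id": "K2", "status": "signed"})
assert verify_chain(ledger)

ledger[0]["entry"]["status"] = "revoked"  # simulated tampering
assert not verify_chain(ledger)
```

The same recomputation backs the verify-ledger check in the criteria below: a "valid" result means the recomputed head hash matches the stored one.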

Acceptance Criteria
Tamper‑Evident Ledger Entry Completeness
- Given a user with role R signs drawing revision D using key K version X under policy P, When the signature is submitted, Then the system appends a ledger entry with fields: signer_user_id, signer_display_name, role, key_id, key_version, policy_snapshot_hash, timestamp_utc (ISO‑8601), ip_address, device_metadata (user_agent, device_id), drawing_revision_hash (SHA‑256), signature_status, ledger_entry_id, chain_hash, And the entry is immutable thereafter.
- Given the verify-ledger endpoint is called, When it recomputes the chain integrity, Then it returns status "valid" and the head chain_hash matches the recomputed value.
- Given any attempt to update a persisted field in a ledger entry, When the write is attempted by any user, Then the system rejects the mutation and logs an immutable_violation_attempt event.
UI Verification Badge on Signed Revision
- Given a drawing revision with a valid ledgered signature, When a project member opens the revision page, Then a verification badge displays "Signature Verified" and shows signer_display_name, role, key_fingerprint (last 8 chars), and timestamp_utc, And clicking the badge opens a details modal containing the ledger_entry_id and a "Download Receipt" action.
- Given a drawing revision whose signature is invalid due to a revoked/expired key or out-of-window delegation, When the revision page loads, Then the badge displays "Signature Invalid" and a concise failure reason, And the details modal lists the failed checks and links to remediation documentation.
Client‑Downloadable Signature Receipt
- Given a verified signature exists for revision D, When a user selects "Download Receipt", Then a receipt is generated within 3 seconds containing: ledger_entry_id, drawing_revision_hash, signer_identity, role, key_id, key_version, policy_snapshot_hash, timestamp_utc, ip_address, device_metadata, verification_result="valid", And the receipt is cryptographically signed by PlanPulse and includes a detached signature file.
- Given the public verification URL embedded in the receipt, When accessed via a time‑bound signed link before expiry (default 7 days), Then it returns JSON with verification_result and fields matching the receipt, And after expiry the endpoint returns HTTP 403.
- Given the receipt payload is altered, When the detached signature is verified with the PlanPulse public key, Then signature verification fails.
Exportable Reports with Filters and Performance
- Given ledger data across multiple projects, When a user exports with filters: project in [A,B], role in [Reviewer, Approver], status in [signed, invalid], time_range [T1,T2], Then the CSV and JSON outputs contain only matching rows and include columns: project_id, project_name, signer_user_id, role, key_id, key_version, policy_snapshot_hash, timestamp_utc, ip_address, device_metadata, drawing_revision_hash, signature_status, ledger_entry_id.
- Given an export with ≤ 100,000 matching rows, When the export is initiated, Then the file is available within 60 seconds, the CSV has a header row, and the JSON validates against the published export schema.
- Given any combination of filters, When applied, Then filters use AND semantics and empty filters are ignored.
- Given the export is downloaded, When its checksum is computed, Then it matches the SHA‑256 provided in the export manifest.
Webhook Delivery for Security and Audit Events
- Given the events signature_created, signature_verified, signature_invalidated, key_rotated, delegation_started, or delegation_ended occur, When a webhook endpoint is configured and enabled, Then a POST is delivered within 5 seconds per event with JSON including event_type, event_id, occurred_at, and the relevant ledger fields, And the request includes an HMAC signature header and timestamp.
- Given the receiver validates the HMAC using the shared secret and checks the timestamp against a 5‑minute replay window, When verification is performed, Then the signature is valid for untampered requests and invalid otherwise.
- Given a delivery attempt results in HTTP 5xx or a timeout, When retries are scheduled, Then the system retries up to 8 times with exponential backoff over 30 minutes and records all attempts in an admin‑visible log.
- Given consecutive delivery failures exceed the threshold, When the endpoint remains unhealthy, Then deliveries are auto‑disabled and admins are notified.
SIEM Integration Event Streaming
- Given a SIEM destination is configured (Syslog over TLS or HTTPS JSON), When ledger events are generated, Then events are streamed within 10 seconds using the documented schema, including all ledger fields plus project_id and environment, with at‑least‑once delivery semantics.
- Given a network interruption occurs, When the local buffer reaches 10,000 events, Then events are durably queued to disk and flushed on reconnect without loss or duplication.
- Given a SIEM health check is requested, When the status endpoint is called, Then it returns the connection state, last_successful_delivery_at, and backlog_size.
Audit Dashboards for Policy Adherence and Key Health
- Given the dashboards are opened with filters project=X and time_range [T1,T2], When data is loaded, Then widgets display policy adherence (% of signatures meeting policy), key rotation status (keys expiring in <30/60/90 days), and delegation activity (active/scheduled/expired), consistent with the ledger.
- Given identical filters are used for an export, When totals are compared to dashboard counts, Then the counts match exactly.
- Given new relevant events occur, When up to 5 minutes have elapsed, Then dashboard metrics reflect the changes.

AHJ Profiles

Use prebuilt, jurisdiction‑specific templates that assemble exactly the ledger appendix, forms, signature placements, and metadata each Authority Having Jurisdiction expects. One‑click exports validate against profile rules to cut rejections, resubmittals, and permit delays.

Requirements

AHJ Template Library
"As an architect, I want to select a jurisdiction profile for my project so that my workspace, forms, and exports are automatically configured to that AHJ’s requirements."
Description

Provide a centralized, versioned library of jurisdiction-specific profiles that define ledger appendix layouts, required forms, signature field configurations, metadata schemas, file/page ordering, acceptable file types, naming conventions, and export container formats. Each profile includes effective dates and jurisdiction metadata and can be pinned by project to ensure consistent submissions across revision cycles. Support admin-managed CRUD, semantic versioning, import/export of profiles (JSON), and access controls. Integrate the library directly into the PlanPulse project setup so selecting a profile immediately configures the workspace, forms, and export behavior for that AHJ.

Acceptance Criteria
Create and Version an AHJ Profile with Semantic Versioning
- Given I am an Admin with profile-create permission, When I create a new AHJ profile specifying ledger appendix layout, required forms, signature fields, metadata schema, file/page ordering, acceptable file types, naming conventions, export container format, jurisdiction metadata, and effective start date, Then the profile is saved with version 1.0.0 and is visible in the library.
- Given a profile at version X.Y.Z, When I save a backward-compatible change flagged as minor, Then version X.(Y+1).0 is created and the prior version remains immutable and selectable.
- Given a profile at version X.Y.Z, When I save a backward-incompatible change flagged as major, Then version (X+1).0.0 is created and existing projects pinned to prior versions are unaffected.
- Given a profile at version X.Y.Z, When I save a patch change flagged as patch, Then version X.Y.(Z+1) is created.
- Rule: Version strings must match MAJOR.MINOR.PATCH and be unique per profile.
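The bump rules above follow standard SemVer semantics and can be sketched in a few lines (the `bump` helper is illustrative, not part of the product API):

```python
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")  # MAJOR.MINOR.PATCH only

def bump(version: str, change: str) -> str:
    """Return the next version for a major, minor, or patch change."""
    m = SEMVER.match(version)
    if not m:
        raise ValueError(f"not MAJOR.MINOR.PATCH: {version}")
    major, minor, patch = map(int, m.groups())
    if change == "major":
        return f"{major + 1}.0.0"   # breaking change resets minor and patch
    if change == "minor":
        return f"{major}.{minor + 1}.0"  # compatible addition resets patch
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

assert bump("1.0.0", "minor") == "1.1.0"
assert bump("1.2.3", "major") == "2.0.0"
assert bump("1.2.3", "patch") == "1.2.4"
```

Because prior versions stay immutable, a project pinned to 1.2.0 is unaffected when 2.0.0 is published; the pin must be changed explicitly.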
Apply AHJ Profile During Project Setup to Auto-Configure Workspace
- Given I am creating a project and select an AHJ profile version, When I create the project, Then the project workspace is auto-configured with the profile’s forms, signature fields, metadata fields, file/page ordering, acceptable file types, naming conventions, and export container format.
- Given the workspace is configured, When I open the forms and metadata panels, Then all mandatory fields defined by the profile are marked required and are validated on save.
- Given the workspace is configured, When I upload a file, Then only file types allowed by the profile are accepted; disallowed types display a validation error that cites the violated rule and filename.
Pin AHJ Profile to Project for Revision Consistency
- Given a project has a selected AHJ profile version v1.2.0, When drawings are revised and new exports are requested, Then the project continues to use v1.2.0 until a user with permission explicitly changes the pinned version.
- Given a newer profile version exists, When a user views project settings, Then the current pinned version is displayed with an indicator if newer versions are available.
- Given a user with permission changes the pinned version to v1.3.0, When they confirm the change, Then the workspace reconfigures to match v1.3.0 and the new pinned version is shown in project settings.
Validate Export Against AHJ Profile Rules and Container Format
- Given a project pinned to an AHJ profile, When I trigger one-click export, Then the system validates that required forms are complete, required signatures are placed, all required metadata fields are filled, file names match naming conventions, file/page ordering matches the profile, and only acceptable file types are included.
- Given one or more validations fail, When export is attempted, Then the export is blocked and a summary lists each failing rule and the affected item(s).
- Given all validations pass, When export completes, Then the output package uses the profile’s specified container format and includes only the configured files in the configured order.
Access Control Enforcement for AHJ Profile Library
- Given I lack Admin privileges, When I attempt to create, update, delete, import, or export an AHJ profile, Then the action is denied with a 403 response and a UI error indicating insufficient permissions.
- Given I have Admin privileges, When I create, update, delete, import, or export an AHJ profile, Then the action succeeds and the result is reflected in the library.
- Rule: All users can read and select profiles during project setup; only Admins can perform CRUD or import/export.
Import/Export AHJ Profiles as JSON with Schema Validation
- Given I am an Admin, When I export an AHJ profile version, Then a JSON file is downloaded containing all profile fields, jurisdiction metadata, effective dates, and the SemVer version, and it passes validation against the profile JSON schema.
- Given a valid JSON file that conforms to the profile schema, When I import it, Then a profile is created if its unique ID is new, or a new SemVer version is added under the existing profile if the unique ID matches.
- Given an invalid JSON file, When I attempt import, Then the import is rejected and the validation errors identify the schema paths that failed.
Effective Date Handling and Jurisdiction Metadata Display
- Given multiple versions of a jurisdiction’s profile with different effective start and/or end dates, When I select the jurisdiction during project setup without specifying a version, Then the system suggests the latest version whose effective start date is on or before today and that is not past its effective end date.
- Given a profile is past its effective end date, When a user attempts to pin it to a new project, Then the profile is marked inactive and cannot be newly pinned.
- Given a user views a profile in the library, When the profile details are shown, Then the jurisdiction metadata and the profile’s effective date range are displayed.
Profile Rule Validation Engine
"As a project lead, I want automated checks against the selected AHJ’s rules so that I can catch issues early and avoid permit rejections and resubmittals."
Description

Implement an executable rules engine that enforces each AHJ profile’s validation logic before export, including required fields, conditional sections, cross-field dependencies, page counts, file types, naming patterns, and stamping requirements. Provide severity levels (error/warning), clear messages, and machine-readable codes that link back to specific fields, sheets, or forms. Run validations on-demand and automatically on key changes, with performance optimized for sub-3-second feedback on typical projects. Expose an internal rule specification (JSON/YAML) to author and test rules per profile and ensure deterministic, versioned outcomes.
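A declarative rule specification evaluated against a project snapshot might look like the following. The rule shape (field names, `check` callables, the violation payload) is illustrative only, not the actual PlanPulse spec; it shows severities, stable codes, location pointers, and error-before-warning ordering as required below:

```python
# Illustrative rules: each has a stable code, a severity, a target field,
# a predicate, and a human-readable message.
RULES = [
    {"code": "REQ-001", "severity": "error", "field": "applicant_name",
     "check": lambda v: bool(v), "message": "Applicant name is required"},
    {"code": "PAGE-001", "severity": "warning", "field": "page_count",
     "check": lambda v: isinstance(v, int) and v <= 100,
     "message": "Page count exceeds recommended maximum of 100"},
]

def validate(project: dict, rules=RULES) -> list[dict]:
    """Run every rule against the snapshot; collect machine-readable
    violations with a pointer to the offending field."""
    violations = []
    for rule in rules:
        if not rule["check"](project.get(rule["field"])):
            violations.append({
                "code": rule["code"],
                "severity": rule["severity"],
                "message": rule["message"],
                "location": {"type": "field", "id": rule["field"]},
            })
    # Errors sort before warnings, matching the ordering requirement below.
    return sorted(violations, key=lambda v: v["severity"] != "error")

result = validate({"applicant_name": "", "page_count": 120})
assert [v["code"] for v in result] == ["REQ-001", "PAGE-001"]
```

Because the rules and the snapshot are plain data, repeated runs over identical inputs are trivially deterministic, which is what lets the export manifest record a reproducible ruleset version.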

Acceptance Criteria
Block Export on Profile Rule Errors
Given a project with a selected AHJ profile and a pending export When the user initiates export Then the engine evaluates all rules for the selected profile against the current project snapshot And if any Error-severity violations exist, the export is blocked And if only Warning-severity violations exist, the export proceeds and a warning summary is displayed And the validation result payload contains counts by severity, a flat list of violations, and a timestamp And each violation includes code, severity, message, and a pointer to the exact field, sheet, or form And the export manifest stores the profileId and rulesetVersion used during validation
Inline Validation on Key Changes Within 3 Seconds
Given a project open in the workspace with an AHJ profile selected When any of the following occur: profile change, field value commit, sheet add/remove/reorder/rename, file attach/remove/rename, stamping toggle, or form add/remove Then validations re-run automatically with debouncing no more frequent than once per 300 ms for continuous edits And the updated results replace the prior results atomically And the 95th-percentile end-to-end validation time is <= 3,000 ms for projects with <= 50 sheets, <= 20 forms, and <= 30 attachments totaling <= 250 MB And a manual "Run Validation" action returns an identical result set for the same inputs
Conditional Sections and Cross-Field Dependencies
Given a rule that conditionally requires a section or form based on another field's value When the controlling field value changes to satisfy the condition Then the required section or form is validated and violations appear if missing or incomplete And when the controlling field value no longer satisfies the condition Then previously raised violations for that condition are cleared without residual errors And cross-field dependency rules re-evaluate when any referenced field changes
File, Naming, and Page Count Constraints
Given profile rules that specify allowed file types, naming patterns, page counts, and stamping requirements When the project includes files, sheets, and forms subject to those rules Then each artifact is validated against the applicable constraints And violations identify the specific artifact and the failed constraint (fileType, namePattern, pageCount, stamping) And naming patterns are evaluated using regex with case sensitivity per rule definition And page count rules support exact, min/max, and per-sheet-type constraints
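The naming and page-count checks above reduce to small predicates; a sketch under the stated rules (the rule-dict shape is an assumption):

```python
import re

def check_name(filename: str, pattern: str, case_sensitive: bool = True) -> bool:
    """Full-match regex naming check, with case sensitivity per rule."""
    flags = 0 if case_sensitive else re.IGNORECASE
    return re.fullmatch(pattern, filename, flags) is not None

def check_pages(pages: int, rule: dict) -> bool:
    """Page-count check supporting exact and min/max constraint forms."""
    if "exact" in rule:
        return pages == rule["exact"]
    return rule.get("min", 0) <= pages <= rule.get("max", float("inf"))

# Case-sensitive by default: sheet numbers must match exactly.
assert check_name("A-101_FloorPlan.pdf", r"A-\d{3}_[A-Za-z]+\.pdf")
assert not check_name("a-101_floorplan.pdf", r"A-\d{3}_[A-Za-z]+\.pdf")
# Same pattern accepted when the rule opts into case-insensitivity.
assert check_name("a-101_floorplan.pdf", r"A-\d{3}_[A-Za-z]+\.pdf",
                  case_sensitive=False)
assert check_pages(24, {"min": 1, "max": 50})
assert not check_pages(2, {"exact": 1})
```

A failing check would be reported as a violation naming the artifact and the failed constraint kind (fileType, namePattern, pageCount, stamping), per the criteria above.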
Severity, Messages, and Codes with Precise References
Given a ruleset with defined severity levels Error and Warning When violations are produced Then each violation includes a stable machine-readable code unique within the profile and rulesetVersion And each violation includes a human-readable message with actionable guidance And each violation includes a location pointer identifying type (field|sheet|form|file) and the corresponding IDs And the result set is sorted by severity (Error before Warning) then by location And codes and messages are consistent across repeated runs with the same inputs
Deterministic, Versioned Rule Execution
Given an AHJ profile rulesetVersion and an identical project input When validations are executed multiple times Then the result payloads are identical after excluding run metadata fields (timestamp and duration) And the payload includes rulesetVersion and inputHash fields And exports record and use the locked rulesetVersion from the last passing validation unless explicitly upgraded And changing to a new rulesetVersion alters results only according to rule differences
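The `inputHash` field above implies a canonical serialization of the project snapshot, since ordinary dict ordering would make the hash unstable. One way to sketch it (the canonicalization choice is an assumption):

```python
import hashlib
import json

def input_hash(snapshot: dict) -> str:
    """Canonicalize the snapshot (sorted keys, fixed separators) before
    hashing, so identical inputs hash identically regardless of key order."""
    canonical = json.dumps(snapshot, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

a = {"sheets": ["A-101", "A-102"], "profile": "city-x", "rulesetVersion": "2.1.0"}
b = {"rulesetVersion": "2.1.0", "profile": "city-x", "sheets": ["A-101", "A-102"]}
assert input_hash(a) == input_hash(b)   # key order does not affect the hash
assert input_hash(a) != input_hash({**a, "profile": "city-y"})  # content does
```

Recording `rulesetVersion` alongside `inputHash` in the result payload is what lets an auditor confirm that two validation runs saw exactly the same inputs and rules.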
Rule Spec Authoring and Test Harness
Given a JSON/YAML ruleset and its schema When a ruleset is loaded Then it is validated against the schema and rejected with a precise error path on schema violations And a built-in test harness can execute declared test fixtures (inputs and expected violations) for a profile And running the same fixtures yields deterministic Pass/Fail outcomes across runs And rule authors can override messages and severities per profile via the spec without code changes
One-Click Export Packaging
"As a small-firm architect, I want a one-click export that produces a fully compliant submission package so that I can submit without manual assembly or formatting errors."
Description

Enable a single action that triggers validation and, upon pass, assembles the submission package exactly as the AHJ specifies: merges drawings and ledger appendix, fills and flattens forms, inserts signature fields/stamps, orders pages, applies bookmarks, embeds barcodes/QR or watermarks if required, and sets document metadata. Output the package in the required container (single PDF portfolio or ZIP with prescribed folder structure) with compliant file naming. Generate a manifest and store the artifact with revision metadata in PlanPulse, supporting repeatable exports and auditability. If validation fails, block export and route the user to remediation via deep links.
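Manifest generation is the piece that makes exports repeatable and auditable: record each file's relative path and SHA-256 checksum in a stable order. A minimal sketch (the manifest fields shown are a subset; names are illustrative):

```python
import hashlib
import json

def build_manifest(files: dict[str, bytes], profile_id: str,
                   revision: str) -> dict:
    """Deterministic manifest: files sorted by path, each with its SHA-256,
    plus the profile and revision the package was built against."""
    return {
        "profile_id": profile_id,
        "revision": revision,
        "files": [
            {"path": path, "sha256": hashlib.sha256(data).hexdigest()}
            for path, data in sorted(files.items())
        ],
    }

files = {
    "drawings/A-101.pdf": b"%PDF-1.7 drawing bytes",
    "forms/permit.pdf": b"%PDF-1.7 form bytes",
}
m1 = build_manifest(files, "city-x", "rev-12")
m2 = build_manifest(files, "city-x", "rev-12")
# Identical inputs yield byte-identical manifests, supporting repeatability.
assert json.dumps(m1) == json.dumps(m2)
```

Diffing two such manifests is also what lets a subsequent export highlight deltas from the prior revision, as required in the criteria below.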

Acceptance Criteria
Successful One-Click Export for PDF Portfolio AHJ
- Given a project with completed drawings, a ledger appendix, all required forms populated, and an AHJ profile specifying a single PDF portfolio output, When the user clicks One-Click Export, Then validation executes against the selected AHJ profile and returns Pass.
- And the system assembles a single PDF portfolio with pages ordered per the profile rule.
- And all required source PDFs are merged and the ledger appendix is appended in the specified section.
- And required forms are filled from project data and flattened (no editable fields remain).
- And required signature fields/stamps are inserted at the specified pages and coordinates.
- And document bookmarks are created per the profile’s outline.
- And required barcode/QR/watermark assets are embedded per the profile.
- And PDF metadata fields (Title, Author, Subject, Keywords, Custom) are set to profile-specified values.
- And the file name conforms exactly to the profile’s naming template.
- And a manifest.json is generated with SHA-256 checksums for the portfolio and embedded components.
- And the export artifact and manifest are stored in PlanPulse with revision metadata and are downloadable.
- And the UI shows Export Successful with a link to the artifact and manifest.
Successful One-Click Export for ZIP Folder-Structure AHJ
- Given a project ready for submission and an AHJ profile specifying a ZIP output with a prescribed folder structure, When the user clicks One-Click Export, Then validation returns Pass and a ZIP package is created.
- And the folder hierarchy matches the profile exactly (folder names, nesting, and casing).
- And each required file is placed in its prescribed folder with a compliant file name.
- And any PDFs inside the ZIP contain the required bookmarks and flattened forms.
- And required watermarks/barcodes are applied where specified.
- And the manifest lists all files with their relative paths and SHA-256 checksums.
- And the export artifact and manifest are stored in PlanPulse with revision metadata and are downloadable.
Validation Failure Blocks Export and Routes to Remediation
- Given the project has one or more blocking rule violations per the selected AHJ profile, When the user clicks One-Click Export, Then validation returns Fail and no export package is generated.
- And the UI displays a validation report listing each failed rule with severity and description.
- And each failed rule provides a deep link to the precise remediation location (form field, drawing sheet, metadata entry).
- And the primary Export action remains disabled until all blocking issues are resolved.
- And after fixes are applied, re-running validation reflects the updated pass/fail status.
- And a downloadable validation report (PDF/JSON) is available for audit.
Deterministic Repeatable Export with Manifest and Audit Trail
- Given the same project state, inputs, and AHJ profile version, When One-Click Export is executed twice, Then the resulting package bytes and manifest checksums are identical.
- And the manifest records the profile ID/version, PlanPulse revision ID/SHA, source artifact versions, page order, bookmark tree, metadata snapshot, signer configuration, timestamps, and SHA-256 checksums for each file.
- And PlanPulse stores the artifact, manifest, and an immutable audit log entry (user, timestamp, outcome).
- And when any input changes, the next export increments the revision metadata and the manifest highlights deltas from the prior export.
Signature Fields and Stamps Compliance
- Given an AHJ profile requiring named signature fields/stamps at defined pages and coordinates, When the export is generated, Then each required signature field exists with the correct field name, role, page index, and bounding box within ±1 pt tolerance.
- And where stamps are required, the correct stamp image is placed and flattened with the specified DPI and opacity.
- And flattened outputs contain no unintended editable form fields.
- And the PDFs open without signature or validation errors in Adobe Acrobat and pass preflight checks for required signature field presence.
File Naming, Metadata, and Watermark/Barcode Compliance
- Given the AHJ profile defines a naming template, required PDF/XMP metadata, and optional watermark/barcode/QR requirements, When the export completes, Then every file name exactly matches the template with tokens resolved and only allowed characters present.
- And PDF/XMP metadata fields match the profile definitions and validation regexes.
- And required watermarks appear only on the specified pages at the defined position and opacity.
- And required barcode/QR codes are present, encode the specified payload, and are scannable at 300 DPI from print.
- And when not required by the profile, these elements are absent.
Signature Placement & E-Sign Mapping
"As a permit coordinator, I want signature fields to appear exactly where each AHJ expects them so that our submissions are accepted without signature placement corrections."
Description

Define per-profile signature placements using coordinate or anchor-based mapping for wet-sign placeholders and digital signature fields across forms and drawing sheets. Support multiple signers (applicant, engineer, owner, notary), signing order, date/time formats, and required seals or stamps. Provide a preview overlay and manual override while preserving profile constraints. Ensure exports meet common standards (e.g., PDF/A where required) and remain compatible with PlanPulse’s approval workflows and external e-sign providers when applicable.
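Anchor-based mapping boils down to simple coordinate arithmetic plus a fidelity check: the field lands at the detected anchor position plus the configured offset, and a tolerance test confirms preview and export agree. A sketch (units and the ±0.5 mm export tolerance follow the criteria below; function names are illustrative):

```python
def place_field(anchor_xy: tuple[float, float],
                offset: tuple[float, float]) -> tuple[float, float]:
    """Field position = detected anchor location + configured (dx, dy)."""
    return (anchor_xy[0] + offset[0], anchor_xy[1] + offset[1])

def within_tolerance(expected: tuple[float, float],
                     actual: tuple[float, float], tol: float = 0.5) -> bool:
    """True if both axes are within the ±0.5 mm export tolerance."""
    return all(abs(e - a) <= tol for e, a in zip(expected, actual))

# Anchor found at (120, 640) on the sheet; profile says place 15 mm right
# and 4 mm up of the anchor.
pos = place_field(anchor_xy=(120.0, 640.0), offset=(15.0, -4.0))
assert pos == (135.0, 636.0)

# Exported position within tolerance passes; a 1 mm drift fails.
assert within_tolerance((135.0, 636.0), (135.3, 635.7))
assert not within_tolerance((135.0, 636.0), (136.0, 636.0))
```

For rotated or scaled sheets, the anchor coordinates would first be passed through the sheet transform before the offset is applied, which is why placement stays adjacent to the intended callout.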

Acceptance Criteria
Coordinate-Based Signature Field Placement on Forms and Sheets
- Given an AHJ profile with a form or sheet page and a signature field defined by x,y,w,h in page units, When the profile is applied and an export is generated, Then the field appears at the exact coordinates within ±2 px (screen) and ±0.5 mm (export) tolerance.
- Given multiple coordinate-based fields on the same page, When rendered, Then no fields overlap and all remain within the page content box and outside the 10 mm minimum margin unless the profile explicitly allows otherwise.
- Given zoom or DPI changes, When previewing or exporting, Then the field dimensions and aspect ratios are preserved without scaling drift.
- Given a field designated as a wet-sign placeholder, When exporting a print/PDF-only package, Then the placeholder is visible but not an interactive form field.
- Given a field designated as a digital signature, When exporting an e-sign package, Then the field is an interactive signature form field with the correct role tag.
Anchor-Based Signature Mapping Using Labels and Callouts
- Given an AHJ profile field mapped to an anchor string or marker with an offset (dx,dy), When the source form template text matches the anchor, Then the field is placed at the anchor location plus the offset with ±2 px/±0.5 mm tolerance.
- Given rotated or scaled drawing sheets, When anchors are detected, Then placement adjusts to the sheet transform and remains adjacent to the intended callout.
- Given multiple identical anchors, When placement is computed, Then the nearest anchor in the specified page region, or the configured nth occurrence, is used.
- Given the anchor is not found, When validating the export, Then the export is blocked, and an error lists the missing anchor(s) and page(s) and prompts for manual override.
- Given manual override is used to place an anchor-mapped field, When saving the profile instance, Then the override is stored and flagged for review on subsequent exports.
Multi-Signer Roles and Enforced Signing Order
- Given the roles Applicant, Engineer, Owner, and Notary are required by the profile, When the signing package is initiated, Then all roles are included with unique assignable signature/date/seal fields.
- Given a sequential signing order [Applicant -> Engineer -> Owner -> Notary], When recipients attempt to sign, Then only the current role can sign; subsequent roles are blocked until prior roles complete.
- Given the profile specifies a parallel signing group for a subset of roles, When the package is initiated, Then those roles can sign in any order, and the next sequential step does not start until all parallel roles complete.
- Given a role is optional per profile rules, When the condition to omit is met, Then the related fields are removed from the package and validation passes.
- Given all required roles have signed, When exporting the final document, Then all fields are locked and the signature chain is present in the audit trail.
Profile-Specific Date/Time Formatting at Signature
Given a role-specific date field with a format (e.g., MM/DD/YYYY, DD.MM.YYYY, ISO 8601) and a timezone When the signer completes their signature Then the visible date/time matches the format and timezone and is embedded as selectable text Given the profile requires 24-hour time with seconds When the signer completes Then the time is captured as HH:mm:ss in the specified timezone Given a locale is specified When month names are required Then localized month names are used per locale Given a signed date field When the document is reopened Then the value is immutable and matches preview and export exactly Given metadata export When generating a compliant PDF Then XMP metadata includes signing timestamps in UTC and the profile timezone identifier
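Using Python's standard library, the visible-date/UTC-metadata split might look like this; the `PROFILE_FORMATS` table is a hypothetical subset of profile rules:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib, Python 3.9+

# Hypothetical profile rule table: visible date rendered in the profile
# timezone, metadata timestamp always kept in UTC (mirroring the XMP
# requirement above).
PROFILE_FORMATS = {
    "MM/DD/YYYY": "%m/%d/%Y",
    "DD.MM.YYYY": "%d.%m.%Y",
    "ISO 8601": "%Y-%m-%d",
}

def signing_timestamps(signed_utc: datetime, fmt: str, tz_name: str):
    """Return (visible_text, utc_iso) for a completed signature."""
    local = signed_utc.astimezone(ZoneInfo(tz_name))
    visible = local.strftime(PROFILE_FORMATS[fmt])
    return visible, signed_utc.astimezone(timezone.utc).isoformat()
```

The 24-hour-with-seconds rule is the same mechanism with a `%H:%M:%S` pattern; localized month names would additionally need the locale module or a translation table.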
Required Seals/Stamps Presence, Size, and Position Validation
Given the profile requires an Engineer seal with minimum diameter 38 mm at print scale When exporting Then the seal asset is vector or ≥300 DPI bitmap and meets or exceeds 38 mm on paper Given a required stamp bounding box on a sheet When placing the seal or notary stamp Then the stamp is fully within the box and maintains ≥2 mm clearance from other fields Given the seal is marked must-overlap-signature When exporting Then the seal overlaps the signature field by at least 10% of the signature area without obscuring the date text layer Given a required seal is missing When validating before export Then export is blocked with an error naming the missing item(s) and page(s) Given jurisdictions prohibiting rasterized seals When exporting to an e-sign package Then vector seals are preserved; print-only exports may rasterize at ≥300 DPI
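The must-overlap-signature rule reduces to rectangle-intersection arithmetic; a sketch with (x, y, w, h) rectangles in mm:

```python
def rect_intersection_area(a, b):
    """Intersection area of two (x, y, w, h) rectangles; 0.0 if disjoint."""
    left = max(a[0], b[0])
    right = min(a[0] + a[2], b[0] + b[2])
    bottom = max(a[1], b[1])
    top = min(a[1] + a[3], b[1] + b[3])
    if right <= left or top <= bottom:
        return 0.0
    return (right - left) * (top - bottom)

def seal_overlap_ok(seal, signature, min_fraction=0.10):
    """Must-overlap-signature rule: overlap is at least 10% of the
    signature field's area. (Date-text occlusion is checked separately.)"""
    sig_area = signature[2] * signature[3]
    return rect_intersection_area(seal, signature) >= min_fraction * sig_area
```

The ≥2 mm clearance and bounding-box containment checks are analogous comparisons on the same rectangle representation.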
Preview Overlay with Constraint-Aware Manual Override
Given a user opens preview for a profile-applied document When the overlay is shown Then all signature/date/seal fields are visible with role labels and snap-to guides Given the user drags a field When the new position violates a profile constraint (zone, margin, overlap) Then the move is prevented and a tooltip explains the constraint Given the user makes a manual override within allowed constraints When saving Then the override delta (x,y,w,h) is persisted per document version and recorded in the audit log Given the user clicks Reset to Profile When confirmed Then all overrides revert to profile defaults Given overlay-to-export fidelity When comparing overlay positions to exported PDF positions Then differences are within ±2 px/±0.5 mm
Export Compliance (PDF/A) and E-Sign Provider Compatibility
Given the profile requires PDF/A-2b When exporting Then the PDF validates as PDF/A-2b and non-conformances block export with actionable errors Given a digital signature workflow using an external provider When exporting an e-sign package Then all role-tagged fields map to provider field types and retain signing order and required/optional flags Given a wet-sign package When exporting Then interactive fields are flattened, seals/watermarks are preserved, and output is 100% scale per configured sheet size Given PlanPulse internal approval workflows When client approval occurs before signing Then approval status/history are preserved and attached to the finalized signed artifact Given an executed document is returned by an external provider When it is ingested Then the document is versioned into the workspace, mapped fields are locked, and a complete audit trail (timestamps, signer identifiers, IPs) is viewable
Metadata Schema Mapping & Transformations
"As an architect, I want project data to auto-populate AHJ forms with the right formats and units so that I don’t spend time retyping and risk formatting-related rejections."
Description

Map PlanPulse project data to each AHJ profile’s required fields with support for unit conversions, enumerations, date/number formatting, address components, parcel/APN formats, license identifiers, and contact roles. Provide field-level requirements, defaults, and read-only derivations, auto-filling forms and ledger appendices. Highlight missing or invalid values inline and expose a centralized data panel to edit once and propagate across all forms. Log transformations for transparency and auditing.

Acceptance Criteria
AHJ Field Mapping with Defaults and Derivations
Given an AHJ profile is selected for the project and the profile version is fixed When the mapping engine applies the profile to the project Then every AHJ-required field is resolved to a value via direct map, default, or derivation And fields lacking a resolved value are marked "Missing" with the profile's requirement text and do not pass validation And defaults apply only when the source value is empty/null and are labeled "Defaulted" And read-only derived fields are non-editable in UI and display their derivation expression or source fields And the active mapping configuration is saved with the project and includes the AHJ profile version ID
Unit Conversion and Formatting per Profile Rules
Given profile rules specify units and formats for numeric and date fields When values are transformed for output Then unit conversions use the specified factors (e.g., ft²→m² = 0.092903) and rounding mode per rule And numeric outputs match the specified precision and separators (e.g., 2 decimals, thousands separator) And date/time outputs match the required pattern (e.g., MM/DD/YYYY) and timezone handling per rule And addresses are split/assembled into components per profile (street, city, state, postal) with required casing And parcel/APN and license identifiers conform to mask/regex defined by the profile
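One concrete instance of these rules — the ft²→m² conversion with half-up rounding, two decimals, and a thousands separator — might be sketched as:

```python
from decimal import Decimal, ROUND_HALF_UP

FT2_TO_M2 = Decimal("0.092903")  # factor from the profile rule above

def format_area_m2(value_ft2: str) -> str:
    """Convert ft² to m², round half-up to 2 decimals, and render with a
    thousands separator. Decimal avoids binary-float rounding surprises."""
    m2 = Decimal(value_ft2) * FT2_TO_M2
    rounded = m2.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return f"{rounded:,}"
```

In a real mapping engine the factor, precision, separator, and rounding mode would all come from the profile rule rather than being hard-coded.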
Enumeration and Contact Role Mapping Validation
Given mapping tables exist for enumerations and contact roles for the selected profile When project values are prepared for export Then each enumerated value is translated to the AHJ-required code/value pair And any unmapped value triggers a validation error with suggestions and cannot pass export And required contact roles (e.g., Applicant, Architect of Record) are present and mapped to correct fields And profile-level overrides to mappings can be created without modifying global lists, and are versioned
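The unmapped-value error with suggestions could be implemented with a fuzzy match; `OCCUPANCY_CODES` here is a hypothetical mapping table, not a real AHJ list:

```python
import difflib

# Hypothetical enumeration mapping table for one field.
OCCUPANCY_CODES = {"Residential": "R-3", "Business": "B", "Assembly": "A-2"}

def translate_enum(value, table=OCCUPANCY_CODES):
    """Translate a project value to the AHJ-required code, or raise a
    validation error carrying close-match suggestions (the behavior
    described above); an unmapped value must never pass export."""
    if value in table:
        return table[value]
    suggestions = difflib.get_close_matches(value, table, n=3, cutoff=0.6)
    raise ValueError(f"Unmapped value {value!r}; did you mean {suggestions}?")
```

Profile-level overrides would be a second table consulted before the global one, so global lists stay unmodified and each override set can be versioned.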
Centralized Data Panel: Edit Once, Propagate Everywhere
Given the centralized data panel lists all mapped fields and their sources When a user edits a field value in the panel Then the updated value propagates to all bound form fields and ledger entries within 500 ms And dependent derived fields recompute automatically and update their displays And undo/redo restores prior values across all affected fields And read-only derived fields remain non-editable and show 'derived from' references
Inline Validation and Export Gatekeeping
Given forms populated from mapped data are open When validation runs automatically on change or manually on demand Then missing or invalid fields are highlighted inline with accessible error text and rule reference And a summary panel shows counts of errors and warnings with deep links to each field And one-click export is disabled while any required error exists and enabled when all required pass And validation of up to 500 mapped fields completes within 1 second on a standard project dataset
Transformation Logging and Audit Trail
Given transformation logging is enabled for the project When mapping and transformations are executed Then an audit record is written per field with source field(s), input value(s), rule ID and version, transformation type, output value, timestamp, user/process, and AHJ profile version And audit records are immutable, filterable by field/rule/error, and exportable to JSON and PDF appendix And a cryptographic checksum validates the exported audit log integrity And users can view diffs between successive exports for any field
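The integrity-checksum requirement is commonly met by hash-chaining entries, so tampering with any earlier record invalidates every later checksum. A sketch of the mechanism, not the exact wire format:

```python
import hashlib
import json

def append_record(log, record):
    """Append an audit record whose checksum is chained to the previous
    entry's checksum (or a zero seed for the first entry)."""
    prev_hash = log[-1]["checksum"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    checksum = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "checksum": checksum})
    return log

def verify_log(log):
    """Recompute the chain; any altered record breaks verification."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["checksum"]:
            return False
        prev = entry["checksum"]
    return True
```

Exporting the final chain head alongside the JSON/PDF appendix lets a reviewer validate the whole log with one comparison.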
Validation Report & Fix-It Guidance
"As a project lead, I want a clear validation report with direct links to fixes so that I can resolve issues quickly and confidently before exporting."
Description

Produce a human-readable and machine-readable validation report that groups issues by severity and category, shows exact locations (sheet, page, field), and offers actionable guidance or one-click navigation to fix areas in the workspace or forms. Support quick-fix automations where safe (e.g., renaming files to match patterns) and re-validate incrementally. Allow exporting or sharing the report as PDF/HTML for internal reviews and client visibility, and preserve a history of validation snapshots per revision.

Acceptance Criteria
Validation Run Produces Dual-Format Report
Given a project with an AHJ Profile selected and a current revision When the user clicks Validate against the selected profile Then the system generates a human-readable report (HTML view) and a machine-readable report (JSON) in a single run Then the machine-readable JSON conforms to schema id "planpulse.validation.v1" and passes JSON Schema validation Then both reports include metadata: projectId, revisionId, profileId, profileVersion, userId, timestamp (ISO 8601), ruleSetHash Then each reported issue includes: issueId (stable across runs if unchanged), ruleId, severity, category, message, locations[], quickFixAvailable (boolean) Then for projects up to 100 sheets and 200 form fields, the initial validation completes within 10 seconds on a standard workspace
Issues Grouped by Severity and Category with Precise Locations
Given validation issues exist When the report is viewed Then issues are grouped by severity in order: Blocker, Warning, Info Then issues are grouped by category at second level: Files, Forms, Signatures, Metadata, Content Then each issue displays exact location: sheetNumber/name, pageNumber, fieldPath, coordinates (x,y) when applicable, plus a deep link Then the report header displays total counts per severity and overall Then filters allow toggling severities and categories, and the visible list updates within 200 ms Then if no issues are present, the report displays "No issues found for this profile" with a success state
One-Click Navigation and Fix-It Actions
Given the user clicks Go to on an issue When invoked Then the workspace opens the corresponding sheet/page and scrolls/zooms to the target element, highlighting it for at least 3 seconds Then pressing Back in the report returns focus to the prior scroll position in the report Given an issue supports a safe quick fix When the user clicks Quick Fix and confirms the preview Then the system applies the change, records an audit entry (userId, timestamp, change summary), and marks the issue as resolved Then only the affected scope is re-validated, and the updated issue list reflects the resolution within 2 seconds
Incremental Re-Validation after Edits
Given the user edits one or more files/forms after an initial validation When Re-validate is triggered Then only changed artifacts and dependent rules are evaluated Then unchanged issueIds remain stable; new/removed issues are clearly labeled as Added/Resolved Then incremental validation completes within 2 seconds per changed artifact, not exceeding 30% of the last full run time for the same project Then a badge indicates the report is Updated with the new timestamp
Export and Share Report
Given the report is open When the user exports as PDF Then the PDF reproduces the current filters, sorting, and expanded/collapsed groups, includes a summary page, and embeds the run metadata Then PDF renders with selectable text, supports A4 and US Letter with automatic pagination, and is under 5 MB for reports under 1,000 issues When the user exports as HTML Then a self-contained HTML file is generated with embedded assets and a downloadable attachment containing the machine-readable JSON When the user creates a share link Then the link scopes access to the selected report snapshot, supports expiry choices (1h, 24h, 7d), can be revoked, and all access is audit logged
Validation Snapshot History per Revision
Given a validation run completes When Save Snapshot is invoked or auto-save on completion occurs Then an immutable snapshot is stored and associated with the current revisionId and profileId Then users can list snapshots chronologically, open any snapshot read-only, and compare any two snapshots to see Added/Resolved/Unchanged issues Then each snapshot retains its machine-readable JSON and human-readable rendering and shows the exact ruleSetHash used Then the system preserves at least the last 50 snapshots per project or 180 days (whichever is greater), and older snapshots are pruned per retention settings with admin override
Severity Gates Block One-Click Export
Given at least one Blocker severity issue is present in the latest validation for the current revision When the user attempts One-Click Export to the AHJ package Then the export action is disabled and a tooltip explains the gating rule with a link to the report Then users with the Override Export Gate permission see an Override and Export button When clicked Then a modal requires a reason note (minimum 10 characters) and confirmation Then the override event is logged (userId, timestamp, reason, snapshotId), and the export proceeds; otherwise, without override, no export is initiated Then if zero Blocker issues are present, One-Click Export proceeds without prompts
Profile Update & Notification Service
"As a firm admin, I want managed updates to AHJ profiles with clear change logs so that our projects stay compliant without unexpected disruptions."
Description

Provide a cloud-based update channel for AHJ profiles with signed releases, change logs, and effective dates. Automatically notify affected projects of updates, allow pinning to a specific profile version, and provide a diff view highlighting rule and template changes. Support staged rollout, deprecation warnings, and backward-compatible fallbacks. Offer admin controls to approve, delay, or mandate updates to ensure teams stay aligned with the latest AHJ requirements.

Acceptance Criteria
Signed Release Integrity for AHJ Profile Updates
Given a new AHJ profile version is published to the update channel with a signed release manifest When a client environment checks for updates Then the client validates the cryptographic signature against a trusted public key before surfacing the update And if signature verification fails or any artifact checksum mismatches, the update is not displayed, a SECURITY event with a unique trace ID is logged, and admins are alerted And the release metadata exposes semantic version, release timestamp (UTC ISO 8601), effective date (UTC ISO 8601), and publisher ID And all downloadable artifacts’ checksums match the manifest prior to download completion
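Artifact-checksum verification against the manifest can be sketched with the standard library; verifying the manifest's own cryptographic signature needs an asymmetric-crypto library and is omitted here:

```python
import hashlib

def verify_artifacts(manifest: dict, artifacts: dict) -> list:
    """Compare downloaded artifact bytes against the sha256 checksums in
    the release manifest; return the names that mismatch. Any mismatch
    should suppress the update and raise a SECURITY event, per the
    criterion above. Structures are illustrative assumptions."""
    bad = []
    for name, expected in manifest.get("checksums", {}).items():
        data = artifacts.get(name, b"")
        if hashlib.sha256(data).hexdigest() != expected:
            bad.append(name)
    return bad
```

In practice the signature check would run first, so an attacker cannot simply rewrite both the artifacts and their checksums.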
Change Log and Effective Date Visibility
Given a release is visible in the update panel When a user opens the release details Then the change log lists changes grouped by Rules, Templates, Forms, Signature Placements, and Metadata with per-item summaries and a breaking-change flag And the effective date displays in the user's local timezone with UTC reference and shows a countdown (days/hours) when within 30 days And the change log is retrievable via API at /profiles/{id}/versions/{version}/changelog as JSON including fields: id, category, action (add/remove/modify), summary, breaking (boolean), and links to impacted templates/rules
Project Impact Notification for Affected Profiles
Given a project uses AHJ profile X at version vA When version vB of profile X is published Then within 10 minutes the project shows an in-app notification badge and records an activity entry including impact counts (# rules added/removed/modified, # templates changed) and a breaking-change flag And email and in-app notifications are sent to users with roles Project Admin and Architect Lead, deduplicated per release per user And a per-project setting allows opting out of email while retaining in-app alerts (default: on), and all notification deliveries and preference changes are audited
Pinning to Specific Profile Version
Given a user with Project Admin role selects "Pin to version vA" for profile X When pinning is confirmed Then the project's validations and exports use vA regardless of newer versions until unpinned And auto-update prompts are suppressed and replaced with a "Pinned to vA" banner showing an Unpin action And unpinning requires confirmation and records an audit entry with user, timestamp, and from/to version And attempts by non-admin users to pin or unpin are blocked with a permissions error
Diff View of Rule and Template Changes
Given two profile versions vA and vB are selected When the user opens the diff view Then the system displays added/removed/modified items with stable IDs, old/new values, and highlights for breaking changes And filters by category (Rules, Templates, Forms, Signature Placements, Metadata) are available and persist per user session And a summary header shows total changes per category and overall And the diff can be exported as PDF and JSON; exports complete within 10 seconds for diffs of ≤5,000 changed items
Staged Rollout and Mandate Controls
Given an admin configures a staged rollout for version vB When they target 25% of projects by cohort (firm, region, or project tag) starting at a scheduled time Then only targeted projects receive the update prompt during that stage And the admin can pause/resume or adjust the percentage without creating a new version; changes take effect within 5 minutes And marking vB as Mandatory with a deadline enforces auto-update on unpinned projects at the deadline and displays a countdown banner at least 7 days prior And all actions (create, edit, pause, resume, mandate) are recorded in an audit log with actor, timestamp, and parameters
Deprecation Warnings and Backward-Compatible Fallbacks
Given version vA is marked Deprecated with an end-of-life (EOL) date When a project remains on vA Then a non-blocking deprecation banner is shown in the project and pre-export validation until EOL And exports and validations continue via a compatibility layer; missing rules fall back to prior mappings and emit warnings but not errors And after EOL, unpinned projects auto-update to the minimum supported version; pinned projects block export with a clear error and an "Update Now" action unless an admin override is granted And the deprecation status and dates are visible in the version selector UI and via API

TraceLink IDs

Assign immutable IDs to every comment, decision, and markup region, then watermark those IDs into exports. Anyone can reference the ID to jump back to the precise context in PlanPulse, proving what was approved and where—speeding resolution and ending he‑said‑she‑said debates.

Requirements

Immutable TraceLink Assignment
"As a project lead, I want each comment, decision, and markup to receive a permanent unique ID so that I can reliably reference the same item across edits and versions."
Description

Generate an immutable, globally unique TraceLink ID for every comment, decision, and markup region at creation time. The ID must be collision-resistant and opaque (e.g., UUIDv7/ULID) with an optional short human-friendly alias for display. Persist the ID across edits, moves, forks, and merges, maintaining referential integrity and lineage across file versions. IDs cannot be edited, reused, or reassigned; when content is deleted, retain a tombstoned record to preserve history. Expose the ID in UI tooltips, copy-to-clipboard actions, and detail panels; index IDs for low-latency lookup. Implement in the domain layer, store with object records, and emit in activity streams to underpin traceability and audits.
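A ULID-style generator satisfying the scheme above — a 48-bit millisecond timestamp plus 80 random bits, Crockford-base32 encoded into 26 characters — might look like this sketch:

```python
import os
import time

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def new_tracelink_id(now_ms=None) -> str:
    """Generate a 26-character ULID-style ID: opaque, collision-resistant,
    and roughly time-sortable. `now_ms` is overridable for testing only;
    production callers always use the current clock."""
    ts = int(time.time() * 1000) if now_ms is None else now_ms
    value = (ts << 80) | int.from_bytes(os.urandom(10), "big")
    chars = []
    for _ in range(26):
        chars.append(CROCKFORD[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))
```

Because the timestamp occupies the high bits, plain lexicographic ordering of IDs follows creation time, which keeps index scans and activity streams naturally ordered.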

Acceptance Criteria
ID Generation Is Immutable and Globally Unique
Given a user creates a comment, decision, or markup region, When the object is persisted, Then a TraceLink ID is assigned at creation time and is globally unique across all workspaces and projects. Given an existing object with a TraceLink ID, When the object is edited, moved, or renamed, Then the TraceLink ID remains unchanged. Given high-volume concurrent creation of 1,000,000 objects within 10 minutes, When IDs are generated, Then zero collisions occur and each ID conforms to the configured opaque scheme (UUIDv7 or ULID). Given any API or UI attempt to set or modify a TraceLink ID, When the request is processed, Then the request is rejected (HTTP 400/422) and the original ID remains intact. Given a service restart or deployment, When new objects are created, Then uniqueness and monotonic ordering (as applicable to scheme) are preserved.
Human-Friendly Alias Is Optional, Non-Editable, and Stable
Given aliasing is enabled, When an object is created, Then a short human-friendly alias (<=12 chars, Crockford base32) is deterministically derived from the TraceLink ID and stored. Given an object with an alias, When rendered in the UI, Then the alias is displayed alongside the TraceLink ID in tooltips and detail panels. Given a user attempts to edit an alias, When the action is submitted, Then the system disallows the change and the alias remains unchanged. Given a generated alias collides within a workspace, When the alias is assigned, Then the system automatically disambiguates (e.g., suffix) to maintain workspace-unique aliases without changing the underlying TraceLink ID. Given aliasing is disabled, When objects are created and viewed, Then no alias is generated and alias UI elements are hidden without errors.
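Deterministic alias derivation could hash the ID and re-encode a prefix of the digest; an illustrative sketch (the 8-character default is an assumption within the ≤12-char limit above):

```python
import hashlib

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def derive_alias(tracelink_id: str, length: int = 8) -> str:
    """Deterministically derive a short Crockford-base32 alias from a
    TraceLink ID, so re-derivation is always stable. Workspace-level
    collisions still need the disambiguation step described above."""
    assert length <= 12
    digest = hashlib.sha256(tracelink_id.encode()).digest()
    value = int.from_bytes(digest[:8], "big")
    chars = []
    for _ in range(length):
        chars.append(CROCKFORD[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))
```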
TraceLink Persistence Across Edits, Moves, Forks, and Merges with Lineage
Given an object is copied forward to a new file version or moved between folders/projects, When the operation completes, Then the original TraceLink ID is retained. Given a branch/fork is created from a drawing version, When objects are forked, Then their TraceLink IDs are retained and lineage metadata (source_version_id, branch_id) is recorded. Given two branches contain the same TraceLink ID and are merged without conflicting changes, When merge completes, Then the merged object preserves the TraceLink ID and consolidates lineage without duplication. Given two branches edited the same object (same TraceLink ID), When merged, Then a conflict is flagged but the resolved object retains the same TraceLink ID and all lineage entries are preserved. Given a user duplicates an object intentionally (copy-duplicate within the same file), When the duplicate is created, Then it receives a new TraceLink ID and a lineage parent_id pointing to the source ID.
Deletion Produces Tombstone and Prevents ID Reuse
Given an object with a TraceLink ID is deleted, When deletion is confirmed, Then a tombstone record is persisted containing id, object_type, timestamps, last known container/version, deleter, and reason, and the content becomes immutable/inaccessible. Given a tombstoned TraceLink ID, When any API or UI attempts to create a new object with that ID, Then the request is rejected and the ID remains permanently reserved. Given a lookup by a tombstoned TraceLink ID, When queried via API or UI, Then the system returns tombstone status with metadata; REST content reads return HTTP 410 Gone. Given audit/export pipelines run, When a deletion occurs, Then a "deleted" activity is emitted referencing the TraceLink ID and is visible in audit consumers.
UI Exposure and Copy/Deep-Link Navigation by ID
Given a user hovers over a comment, decision, or markup, When the tooltip appears, Then it shows the TraceLink ID and the alias if available. Given a user opens the detail panel for an object, When the panel renders, Then it displays the full TraceLink ID, alias (if present), and a copy-to-clipboard control that copies the canonical ID string. Given a user clicks Copy TraceLink, When pasting to a text field, Then the clipboard contains exactly the canonical TraceLink ID with no extra characters. Given a user navigates via a deep link /t/{id} or pastes an ID into global search, When navigation executes, Then the app focuses and highlights the exact object context within 500ms of scene render; if the ID is tombstoned, a Removed Item view with tombstone details is shown instead.
Indexed Lookup and Performance SLOs
Given ID and alias indexes are in place, When querying by TraceLink ID or alias via API/service, Then lookups resolve to the object or tombstone with p95 latency <= 60ms and p99 <= 150ms on a dataset of 10M records on staging baseline hardware. Given 500 concurrent lookup requests over 10 minutes, When load tests run, Then lookup error rate remains <0.1% and timeouts are 0. Given the index is rebuilt or the service cold-starts, When lookups resume, Then cold-start p95 <= 150ms within the first 2 minutes and steady-state SLOs are met thereafter. Given a non-existent ID or alias, When queried, Then the system returns HTTP 404 Not Found within the same latency SLOs.
Domain-Layer Enforcement and Activity Stream Emission
Given object creation via any API or UI, When the domain layer processes the command, Then it assigns the TraceLink ID server-side, stores it with the object record, and returns it; clients cannot assign IDs. Given a persistence attempt bypassing the domain service, When integration tests and code checks run, Then the build fails and no object can be stored without a TraceLink ID. Given create, update, move, fork, merge, or delete operations occur, When processed, Then an activity event is emitted containing TraceLink ID, alias (if present), object_type, action, actor, timestamp, and version references, and is consumable by the audit pipeline. Given an audit replay by TraceLink ID, When reconstructing history, Then the event sequence is contiguous and matches the current object state or tombstone without gaps.
TraceLink Context Jump
"As an architect, I want to jump to the precise drawing and markup tied to an ID so that I can resolve questions without hunting through files."
Description

Enable users to paste, scan, or click a TraceLink ID to navigate directly to its precise context: project, drawing file, version, viewport (zoom/pan), and selected entity or comment thread. Provide a universal resolver UI and URL pattern (/t/{id}) with QR support. Enforce permissions, presenting redacted views or access request options where necessary. Implement graceful fallbacks when the exact version is unavailable (e.g., nearest snapshot) with clear status messaging. Handle errors for unknown/archived IDs. Integrate with global search, share menus, and analytics to track resolution speed and usage.

Acceptance Criteria
Deep Link Resolution to Exact Context
Given a valid TraceLink ID and a logged-in user with access, When the user visits /t/{id} via paste or click, Then the app resolves to the exact project, drawing file, and drawing version, selects the target entity or opens the target comment thread, and restores the viewport (zoom within ±1% and pan within ±10 px of the saved state) within 1500 ms p95 on broadband. Given a valid TraceLink ID and a user who is not authenticated, When the user visits /t/{id}, Then the user is prompted to authenticate and upon success is returned to the exact context with the same performance thresholds. Given a valid TraceLink ID, When resolution completes, Then the browser URL reflects the canonical deep link for that context and an analytics event tr_resolve_success is emitted with fields {id, exact:true, duration_ms}.
QR Scan to Context (Mobile)
Given a QR code that encodes https://{host}/t/{id}, When scanned on iOS or Android, Then the device opens the link in the default browser and the app resolves to the same precise context as desktop, restoring viewport within ±5% zoom and ±24 px pan within 3000 ms p95 on 4G. Given the user is not authenticated on mobile, When the QR deep link is opened, Then the user is prompted to log in and, after success, is returned to the intended context without re-scan. Given the QR deep link is opened, When the resolver is loading, Then a mobile-friendly skeleton UI appears and an analytics event tr_resolve_qr is emitted including {id, platform, duration_ms}.
Permission Enforcement and Redacted Access
Given a valid TraceLink ID for content the user lacks permission to view, When the user opens /t/{id}, Then no sensitive drawing or comment data is returned by the network or rendered and a non-leaking resolver screen appears with an Access Required message and a Request Access CTA. Given project-level discoverability is enabled, When an unauthorized user opens /t/{id}, Then only the project name and high-level metadata permitted by policy are shown; all content is redacted and comment text is truncated or hidden per policy. Given the user has partial permissions (e.g., comment-only), When opening /t/{id}, Then only permitted elements of the context render (e.g., target comment thread without restricted markups) and restricted elements are replaced with placeholders. Given any access-denied outcome, When it occurs, Then an analytics event tr_resolve_denied is emitted with {id, reason, requested_role} and an auditable access-request entry is created if the CTA is used.
Graceful Fallback for Missing Version
Given a valid TraceLink ID whose exact drawing version is unavailable (deleted or not synced), When the user opens /t/{id}, Then the resolver loads the nearest available snapshot (prefer prior by timestamp, else next) and displays a banner indicating Fallback to version {v} with a link to View available versions. Given the target entity no longer exists in the fallback version, When the context loads, Then the viewport centers on the original saved bounds and an approximate highlight appears with a label Entity not found in this version (approximate location) and the selection panel indicates fallback mapping. Given a fallback occurs, When analytics are recorded, Then an event tr_resolve_fallback is emitted with {id, reason, from_version, to_version} and exact:false. Given the version is archived and user has restore permission, When opening /t/{id}, Then the UI offers Request restore and logs the request; if no permission, the action is hidden.
Unknown, Malformed, or Archived TraceLink IDs
Given an unknown or malformed TraceLink ID, When the user opens /t/{id}, Then the resolver returns HTTP 404 for unknown or 400 for malformed, shows a clear error state with the entered ID, and provides actions Copy ID and Report a problem. Given a TraceLink ID that has been explicitly deleted or retired, When the user opens /t/{id}, Then the resolver returns HTTP 410 and shows a Gone message with guidance and a link to related items if available. Given a TraceLink ID marked as Archived, When the user opens /t/{id}, Then the resolver displays a read-only archived view with an ARCHIVED watermark and a Restore request control if the user has permission; otherwise only metadata allowed by policy is shown. Given any error outcome, When it occurs, Then an analytics event tr_resolve_error is emitted with {id, http_status, error_code}.
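The status mapping in this criterion can be sketched as a pure function; `store` stands in for the real resolver index, and the ULID-shaped ID pattern is an assumption:

```python
import re

# Crockford base32 (no I, L, O, U), 26 characters — assumed ID shape.
ULID_RE = re.compile(r"^[0-9A-HJKMNP-TV-Z]{26}$")

def resolve_status(raw_id: str, store: dict) -> int:
    """Map a TraceLink lookup to the HTTP statuses described above:
    400 malformed, 404 unknown, 410 deleted/retired, 200 resolvable.
    Archived items resolve 200 but render read-only with a watermark."""
    if not ULID_RE.match(raw_id):
        return 400
    state = store.get(raw_id)
    if state is None:
        return 404
    if state == "deleted":
        return 410
    return 200  # "active" or "archived"
```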
Global Search and Share Menu Integration
Given a user enters a complete or partial TraceLink ID in global search, When results appear, Then the exact ID match is returned as the top result within 300 ms p95 and selecting it navigates in-app to the resolved context without full page reload. Given a user opens the Share menu on a comment, decision, or markup, When selecting Copy TraceLink, Then the clipboard receives the canonical URL https://{host}/t/{id} and an analytics event tr_share_copy is emitted. Given a user opens the Share menu, When selecting Generate QR, Then a downloadable QR code PNG is produced encoding https://{host}/t/{id} at 512x512 px with error correction level M. Given a TraceLink is clicked inside an in-app comment, When activated, Then the app performs client-side navigation to the target context and updates history state correctly for back/forward navigation.
Resolver Performance, Reliability, and Analytics
Given normal operating conditions, When resolving valid TraceLink IDs over a rolling 7-day window, Then p95 end-to-end resolution time is ≤1500 ms desktop broadband and ≤3000 ms mobile 4G; availability of the resolver endpoint is ≥99.9% monthly. Given each resolution attempt, When it completes, Then analytics capture {id, user_id (hashed), device, exact, duration_ms, outcome} and aggregate dashboards display median, p95, success rate, fallback rate, and access-denied rate by day. Given a regression in p95 time exceeding 10% week-over-week, When detected by monitoring, Then an alert is sent to the on-call channel within 5 minutes and a ticket is automatically created with the last 50 resolver traces attached.
ID Watermark in Exports
"As a client, I want exported drawings to show the IDs next to comments and approvals so that I can reference them in email and jump back to the exact context."
Description

Embed TraceLink IDs into all relevant exports (PDF, PNG, print) as visible labels adjacent to markup regions and as invisible metadata (e.g., PDF annotations/XMP). Watermarks must remain legible across scales and DPIs, support contrast and placement rules to avoid obscuring content, and include optional QR/deep link encoding. Provide export options to include an index page mapping IDs to summaries and toggles for client-facing redaction or minimal mode. Ensure IDs survive print/scan workflows via high-contrast rendering and checksum encoding. Integrate with the existing export pipeline and templates without degrading performance.

Acceptance Criteria
Visible ID labels across PDF/PNG exports
Given a document with M markup regions each holding a unique TraceLink ID When the user exports with Include IDs enabled to PDF or PNG Then each markup region renders exactly one visible label showing its ID, with a 12–24 pt offset from the region’s bounding box and no overlap with the markup geometry And in PDF the label text is vector-based with font size ≥ 9 pt; in PNG at 72/150/300 DPI the label cap-height is ≥ 12/25/50 px respectively And disabling Include IDs results in no visible ID labels in the export And the count of visible ID labels equals M
Embedded ID metadata in exported files
Given a document with TraceLink IDs and summaries When exporting to PDF Then the PDF XMP contains a planpulse:TraceLink namespace with an array of objects {id, page, bbox, summary, deepLink?} And for each visible ID a matching entry exists in XMP and a PDF annotation in the region’s area whose Contents includes that ID When exporting to PNG Then the PNG contains an iTXt chunk named PlanPulse.TraceLink with UTF-8 JSON mapping {id -> {page, bbox, summary, deepLink?}} And the metadata entries exactly match the set of visible IDs And metadata writing does not corrupt or remove any pre-existing document metadata unrelated to TraceLink
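The PNG side of this criterion can be sketched with the stdlib: an iTXt chunk is keyword, a NUL, a compression flag and method byte, a NUL-terminated language tag, a NUL-terminated translated keyword, then the text, framed by a big-endian length, the chunk type, and a CRC32 over type plus data (per the PNG specification). The mapping shape follows the criterion; the reader function is only for round-trip checking:

```python
import json
import struct
import zlib

def build_itxt_chunk(keyword, payload):
    """Serialize a PNG iTXt chunk carrying UTF-8 JSON.

    Data layout: keyword\\0, compression flag (0 = uncompressed),
    compression method (0), language tag\\0, translated keyword\\0, text.
    Chunk framing: length + type + data + CRC32(type + data).
    """
    text = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    data = keyword.encode("latin-1") + b"\0" + b"\0\0" + b"\0" + b"\0" + text
    chunk_type = b"iTXt"
    crc = zlib.crc32(chunk_type + data) & 0xFFFFFFFF
    return struct.pack(">I", len(data)) + chunk_type + data + struct.pack(">I", crc)

def read_itxt_text(chunk):
    """Round-trip helper: extract keyword and JSON text from a chunk above."""
    (length,) = struct.unpack(">I", chunk[:4])
    data = chunk[8:8 + length]
    keyword, rest = data.split(b"\0", 1)
    # Skip flag + method bytes, then the two empty NUL-terminated fields.
    text = rest[2:].split(b"\0", 2)[2]
    return keyword.decode("latin-1"), json.loads(text.decode("utf-8"))
```

The PDF XMP path would carry the same {id -> {page, bbox, summary, deepLink?}} objects under a planpulse:TraceLink namespace instead.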
Label contrast and placement rules
Given backgrounds of varying luminance and texture beneath candidate label positions When computing label style and placement Then the label achieves a contrast ratio ≥ 4.5:1 against its immediate background via auto color swap and 1–2 px halo/outline And the label’s filled bounding box does not intersect the markup region geometry; if no adjacent position is conflict-free, a leader line is drawn to the nearest free position within 100 pt of the region And labels never occlude other labels; ties are resolved by nudging with a minimum 6 pt separation And placement is stable across repeated exports of the same content (no more than 1 pt jitter)
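The 4.5:1 requirement uses the WCAG 2.x contrast ratio, which is defined over relative luminance; a sketch of the ratio and the auto color swap it drives:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an 8-bit sRGB triple."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L_lighter + 0.05) / (L_darker + 0.05); ranges from 1.0 to 21.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def pick_label_color(bg):
    """Auto color swap: black or white text, whichever contrasts more."""
    black, white = (0, 0, 0), (255, 255, 255)
    return white if contrast_ratio(white, bg) >= contrast_ratio(black, bg) else black
```

The 1-2 px halo/outline in the criterion covers the textured-background case where no single sampled color gives a reliable 4.5:1 everywhere under the label.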
Optional QR codes and deep links with checksum
Given export options Include QR next to IDs and Include deep links in metadata When Include QR is enabled Then a QR code encoding the deep link plus a CRC-16 checksum parameter is rendered adjacent to each label, with error correction level M, quiet zone ≥ 2 modules, and a printed size ≥ 8 mm on PDF at 300 DPI (or scaled equivalently in PNG) And scanning the printed QR for a sample of at least 30 IDs resolves to the exact PlanPulse context with ≥ 95% success on first attempt When Include QR is disabled Then no QR glyphs are present while deep links remain only in metadata if that option is enabled
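The criterion requires a CRC-16 checksum parameter but does not pin a variant; the sketch below uses CRC-16/CCITT-FALSE as one reasonable choice and appends it as a `crc` query parameter (both the variant and the parameter name are assumptions):

```python
def crc16_ccitt(data, poly=0x1021, init=0xFFFF):
    """CRC-16/CCITT-FALSE over bytes; check value for b'123456789' is 0x29B1."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def deep_link_with_checksum(base_url):
    """Append a CRC-16 of the bare URL as ?crc=XXXX (assumes the canonical
    /t/{id} link carries no other query string)."""
    return f"{base_url}?crc={crc16_ccitt(base_url.encode('utf-8')):04X}"

def verify_checksum(url):
    """Recompute the CRC over the URL minus its crc parameter and compare."""
    base, sep, param = url.rpartition("?crc=")
    return bool(sep) and f"{crc16_ccitt(base.encode('utf-8')):04X}" == param
```

The checksummed link is what the QR glyph encodes, so a misdecoded scan fails verification instead of resolving to the wrong context.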
ID index page and client-facing redaction/minimal mode
Given export options Include ID Index and Client minimal mode/redaction When Include ID Index is enabled Then the export contains a generated index page listing each ID with page number, region thumbnail, and summary, and the list count equals the number of visible IDs When Client minimal mode/redaction is enabled Then visible labels remain, but the index omits internal comments/author emails and only shows ID, page, and a summary truncated to 120 characters And the exported file contains no residual sensitive fields in metadata (e.g., no commentBody or authorEmail keys in PDF XMP or PNG iTXt)
Print/scan survivability of IDs
Given a PDF export with visible IDs and optional QR codes When the PDF is printed at 300 DPI on standard office paper and scanned at 200 DPI grayscale Then at least 95% of QR codes decode to the correct deep link And for pages without QR, alphanumeric IDs remain human-legible with post-scan character x-height ≥ 1.2 mm and pass checksum validation And labels retain contrast ratio ≥ 3:1 after print/scan and remain adjacent to their regions within 5 mm
Export pipeline integration and performance budgets
Given the existing export pipeline and templates When exporting a 20-page project with 100 markup regions and IDs to PDF and PNG Then total export time increases by ≤ 10% versus baseline (without IDs) or remains ≤ 8 seconds on reference hardware, and peak memory increases by ≤ 15% And resulting file sizes increase by no more than 2 MB plus 10 KB per ID for PDFs and 5% for PNGs at 300 DPI And existing template headers/footers/watermarks remain unchanged when Include IDs is disabled And enabling/disabling each new option persists per user and does not regress non-ID exports
Approval Proof & Audit Record
"As a project manager, I want a tamper-evident approval record tied to an ID so that I can demonstrate exactly what was approved and when."
Description

Capture an immutable proof record upon approval of any decision or markup region, bound to the TraceLink ID. Store the approved version hash, approver identity, timestamp, and a visual snapshot, along with a cryptographic content hash to make subsequent alterations tamper-evident. Lock the proof snapshot; any later changes create a new state linked via lineage. Provide a shareable proof view and export that are watermarked with the TraceLink ID. Surface status badges (Approved, Superseded, Revoked) wherever the ID appears. Integrate with audit logs and retention policies for compliance and dispute resolution.

Acceptance Criteria
Immutable Proof Record Created on Approval
Given a decision or markup region with a TraceLink ID and a pending approval When an authorized user approves it via UI or API Then the system creates a proof record bound to that TraceLink ID containing: approver identity (userId and displayName), UTC ISO-8601 timestamp, approved version hash, cryptographic content hash (SHA-256) of the approved content and the visual snapshot, and a PNG/JPEG snapshot And the proof record and snapshot are stored in append-only, read-only storage And attempts to modify or delete any proof record field are rejected and logged And the proof record is retrievable by TraceLink ID via UI and API within 1 second p95
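The hashing in this criterion can be sketched directly: the content hash covers both the approved content and the snapshot, so altering either is detectable later. Field names follow the criterion; everything else is illustrative:

```python
import hashlib
from datetime import datetime, timezone

def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

def create_proof_record(trace_id, approver, content, snapshot_png):
    """Bind approver, timestamp, and hashes into an append-only proof record."""
    return {
        "traceLinkId": trace_id,
        "approver": approver,  # {userId, displayName}
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "versionHash": sha256_hex(content),
        "contentHash": sha256_hex(content + snapshot_png),
    }

def verify_proof(record, content, snapshot_png):
    """Recompute both hashes; any mismatch makes tampering evident."""
    return (record["versionHash"] == sha256_hex(content)
            and record["contentHash"] == sha256_hex(content + snapshot_png))
```

In the lineage criterion that follows, a failed verify_proof against current content is exactly the "hash mismatch" condition that gets flagged in audit logs.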
Tamper-Evident Lineage on Post-Approval Changes
Given an approved TraceLink with an existing proof record When the underlying content or markup region is edited Then the original proof snapshot remains locked and unchanged And a new state is created with a parent pointer to the prior state (lineage) And upon subsequent approval of the edited content, a new proof record is created with a new content hash and version hash And the prior state is automatically marked Superseded And any hash mismatch between stored proof and recomputed content is detected and flagged in audit logs
Shareable Proof View and Watermarked Export
Given an approved TraceLink with a proof record When an authorized user generates a shareable proof link or export Then the proof view shows the visual snapshot, approver, timestamp, version hash, content hash, lineage, and status And the exported PDF/PNG is watermarked with the TraceLink ID and current status on every page And share links are view-only, carry no edit controls, and can be revoked by the creator or admin And a revoked link returns HTTP 403 and is unusable within 60 seconds of revocation And exported files embed metadata (TraceLink ID, proofRecordId, generatedAt UTC) in the file properties
Status Badges Surfaced Across the UI and API
Given any PlanPulse view that surfaces a TraceLink ID (canvas overlays, comments, decisions, search results, exports, API) When the status changes to Approved, Superseded, or Revoked Then a visible status badge with the current value is displayed consistently in all locations within 2 seconds And the badge includes a tooltip with approver and timestamp And the status is available via API for the TraceLink ID And badges meet AA contrast ratios for accessibility
Comprehensive Audit Log Integration
Given any event related to a TraceLink proof (approval, revocation, supersession, share link create/revoke, export generated) When the event occurs Then an audit log entry is written with actor (userId), timestamp (UTC ISO-8601), IP, eventType, TraceLink ID, proofRecordId (if applicable), previousStatus, newStatus, and requestId And audit entries are append-only and immutable And audit logs are filterable by TraceLink ID and exportable as CSV and JSON And audit retrieval by TraceLink ID returns results within 2 seconds p95
Compliance Retention and Legal Hold Enforcement
Given an organization retention policy configured (e.g., 7 years) and optional legal holds When a proof record or audit entry reaches end-of-retention with no active legal hold Then it is purged by a scheduled job and a purge audit entry is recorded And attempts to delete or alter proof records or audit entries before retention expiry are blocked and logged And applying a legal hold prevents purge and deletion until the hold is removed And share links and exports associated with purged records are invalidated and no longer accessible
Performance, Concurrency, and Idempotent Approval
Given normal load conditions When a user approves a decision or markup Then proof record creation completes within 2 seconds p95 and 5 seconds p99 And generating a proof export completes within 5 seconds p95 and 10 seconds p99 And concurrent approval attempts on the same TraceLink result in a single proof record; subsequent attempts receive a 409 Conflict with a pointer to the existing proof And repeated approval requests with the same idempotency key do not create duplicate proofs
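The concurrency and idempotency semantics above (single proof per TraceLink, 409 with a pointer, replay-safe idempotency keys) can be sketched with an in-memory store; names and ID formats are illustrative:

```python
class ApprovalService:
    """In-memory sketch of single-proof-per-TraceLink approval semantics."""

    def __init__(self):
        self._proofs = {}        # trace_id -> proof_record_id
        self._idempotency = {}   # idempotency_key -> (status, proof_record_id)

    def approve(self, trace_id, idempotency_key):
        # Replay of the same request returns the original result unchanged.
        if idempotency_key in self._idempotency:
            return self._idempotency[idempotency_key]
        if trace_id in self._proofs:
            # A competing approval already won: 409 with a pointer.
            result = (409, self._proofs[trace_id])
        else:
            proof_id = f"proof-{len(self._proofs) + 1}"
            self._proofs[trace_id] = proof_id
            result = (201, proof_id)
        self._idempotency[idempotency_key] = result
        return result
```

A production version would need the existence check and insert to be atomic (a unique constraint or compare-and-set), which the single-threaded sketch glosses over.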
Legacy Content Backfill
"As an admin, I want to assign IDs to existing content so that the entire workspace becomes consistently referenceable without disrupting current work."
Description

Migrate existing projects by backfilling TraceLink IDs for all historical comments, decisions, and markup regions. Provide a batch process with progress tracking, throttling, retries, and idempotency. Maintain a mapping from any legacy references to newly assigned IDs and create redirects to avoid broken links. Detect duplicates and consolidate with lineage notes. Offer an admin UI for conflict review and safe reruns. Produce coverage and exception reports per project to verify completeness without interrupting ongoing work.

Acceptance Criteria
Idempotent Backfill Assignment
Given a project contains legacy comments, decisions, and markup regions without TraceLink IDs When the legacy backfill job is executed Then each legacy item is assigned a unique immutable TraceLink ID And rerunning the job assigns no new IDs to already processed items And the count of newly assigned IDs equals the number of eligible items discovered at job start And existing createdAt/updatedAt timestamps and authorship metadata remain unchanged And concurrent end‑user edits during the job are saved and visible, with no job‑induced write conflicts
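The idempotency rule above reduces to "assign only where no ID exists yet", which makes reruns naturally safe; a minimal sketch, with the ID-minting callable left to the caller:

```python
def backfill_trace_ids(items, next_id):
    """Assign TraceLink IDs only to items that lack one.

    items: list of dicts; an item is eligible when its "traceLinkId"
    key is missing or None. next_id is a callable minting fresh IDs
    (its format here is illustrative). Returns the count assigned,
    so a rerun over already-processed items returns 0.
    """
    assigned = 0
    for item in items:
        if item.get("traceLinkId") is None:
            item["traceLinkId"] = next_id()
            assigned += 1
    return assigned
```

Timestamps and authorship are untouched because only the traceLinkId key is ever written.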
Progress, Throttling, and Pause/Resume
Given a backfill job is in progress for a project with at least 10,000 legacy items When an admin views the batch UI or queries the progress API Then processed, total, percent complete, success/retry/failure counts, and ETA are shown and refreshed at least every 5 seconds And the admin can set throughput between 50 and 500 items per minute And observed processing rate stays within ±10% of the configured throughput over a 1‑minute window And processor resource consumption does not exceed configured caps during throttled operation And the admin can pause the job and resume it, with pause taking effect within 10 seconds And on resume the job continues from the last checkpoint without reprocessing completed items
Retry and Error Handling with Backoff
Given transient errors (e.g., HTTP 5xx, DB deadlocks, timeouts) occur during processing of an item When the job attempts to persist or fetch data Then the job retries the item up to 3 times with exponential backoff starting at 1 second And on success within retries, the item is marked succeeded once with one TraceLink ID And on exhausting retries, the item is marked failed with error code and message captured And no duplicate IDs are created due to partial writes And a subsequent rerun processes only items in failed or unprocessed state
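The retry policy above (up to 3 retries, exponential backoff starting at 1 second) can be sketched with an injectable sleep so the schedule is testable without real delays:

```python
import time

class TransientError(Exception):
    """Stand-in for HTTP 5xx, DB deadlocks, timeouts."""

def process_with_retries(operation, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Run operation(); on TransientError retry up to max_retries times
    with delays of base_delay * 2**attempt (1s, 2s, 4s by default).

    Returns ("succeeded", result) or ("failed", last_error_message),
    matching the item states the backfill report tracks.
    """
    attempts = 1 + max_retries
    for attempt in range(attempts):
        try:
            return ("succeeded", operation())
        except TransientError as exc:
            if attempt == attempts - 1:
                return ("failed", str(exc))
            sleep(base_delay * (2 ** attempt))
```

Duplicate-ID safety on partial writes is a separate concern handled by the idempotent assignment rule, not by the retry loop itself.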
Legacy Reference Mapping and Redirects
Given legacy links or references (URLs, GUIDs, or anchor hashes) exist for items When a user or API client requests a legacy reference after backfill Then the system resolves the legacy reference to the new TraceLink ID and issues a redirect to the exact item context (project and drawing/version) within 300 ms server‑side And the mapping is exposed via a secured GET /mapping endpoint returning legacyRef and traceLinkId pairs to authorized roles And unknown or malformed legacy references return HTTP 404 with a diagnostic payload and link to search And all internal references inside PlanPulse are updated to use TraceLink IDs without breaking existing bookmarks
Coverage and Exception Reporting Per Project
Given a project’s backfill has completed or been paused When an admin downloads the coverage report Then the report shows per‑object‑type totals, assigned IDs, coverage percentage, duplicates consolidated, failures, and unresolved conflicts And the exception report lists each failed item with error type, last attempt timestamp, and retryable flag And report data matches the job’s final counters within 1% And reports are accessible via UI and API, exportable as CSV and JSON And generating reports does not block ongoing user activity
Duplicate Detection, Consolidation, and Lineage
Given the backfill encounters multiple legacy items deemed duplicates by matching rules When consolidation is performed Then exactly one canonical item is retained and assigned a TraceLink ID And all duplicates are linked to the canonical via lineage notes capturing original identifiers, timestamps, authors, and detection rules And all legacy references for any duplicate resolve to the canonical TraceLink ID And no attachments, geometry, or text content are lost; they are merged or preserved per rule And admins can review and override consolidation decisions without creating new TraceLink IDs on already assigned items
Admin UI for Conflict Review and Safe Reruns
Given an admin needs to resolve conflicts and rerun backfill When they open the Backfill Admin UI Then they can filter by project, object type, status (succeeded, failed, duplicate, pending), and error type And they can run a dry run that produces a proposed change set without persisting changes And they can trigger a scoped rerun (by project, date range, object type, or failed‑only) that is idempotent and preserves existing TraceLink IDs And all actions are restricted to admin role and are audit‑logged with actor, timestamp, scope, and outcome And the UI prevents destructive actions unless explicitly confirmed and validated
TraceLink API and Webhooks
"As an integrator, I want APIs and webhooks for TraceLink IDs so that external systems can reference and react to approvals automatically."
Description

Expose secure REST endpoints to resolve a TraceLink ID to its context and metadata, query approval status, and retrieve proof snapshots. Provide outgoing webhooks for key events (ID created, approved, superseded) with HMAC signatures, retries, and rate limits. Support token-scoped access, detailed permission checks, and audit trails. Generate deep links safe for email and third-party tools. Publish documentation and sample code to accelerate integrations with PM, CRM, and ticketing systems.

Acceptance Criteria
Resolve TraceLink ID to Context via REST
Given a valid access token with scope read:tracelinks and a TraceLink ID within an accessible project When the client GETs /api/tracelinks/{id} Then the response status is 200 and the body includes id, type, projectId, objectRef, createdAt, createdBy, status, deepLink, metadata And following the deepLink opens the exact comment, decision, or markup region in PlanPulse And the response includes ETag and Last-Modified headers And a non-existent ID returns 404 And an ID in a project the caller cannot access returns 403 And requests without a valid token return 401
Query Approval Status and History
Given a valid access token with scope read:status and a TraceLink ID When the client GETs /api/tracelinks/{id}/status Then the response status is 200 and the body contains state (pending|approved|superseded|revoked), decidedAt, decidedBy, and supersededBy when applicable And GET /api/tracelinks/{id}/history returns a 200 with a reverse-chronological list of state changes including timestamp, actorId, and reason And responses include ETag for caching and return 404, 403, or 401 as appropriate for missing, forbidden, or unauthenticated requests
Retrieve Proof Snapshot with Watermarked TraceLink ID
Given a valid access token with scope read:proofs and an existing TraceLink ID with proofs enabled When the client GETs /api/tracelinks/{id}/proof?format=pdf Then the response is 200 with content-type application/pdf and the snapshot visibly watermarks the TraceLink ID and current approval state And the response includes Content-Disposition with a filename containing the ID and a SHA-256 checksum in Digest header And HTTP Range requests are supported for large files And if a proof is not yet generated, the API returns 202 with a Retry-After header until available And optional GET /api/tracelinks/{id}/proof:url returns 200 with a time-limited signed URL valid for no more than 15 minutes
Webhook Delivery with HMAC Signatures and Retries
Given a registered webhook with a shared secret subscribed to tracelink.created, tracelink.approved, and tracelink.superseded When one of these events occurs Then PlanPulse POSTs to the destination within 5 seconds with JSON including id, event, occurredAt, projectId, actorId, and resourceUrl And each request includes headers X-PlanPulse-Signature (HMAC-SHA256 of the raw body), X-PlanPulse-Timestamp (epoch ms), and X-PlanPulse-Delivery-Id (UUID) And non-2xx responses trigger exponential backoff retries up to 8 attempts over 24 hours And deliveries are rate-limited to 10 requests per second per destination, with 429 responses respected and retried And duplicate deliveries use the same Delivery-Id to support idempotency and at-least-once delivery guarantees
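The signing scheme above (HMAC-SHA256 of the raw body, with timestamp and delivery-ID headers) can be sketched for both ends; the receiver's 5-minute replay window is an assumption, not stated in the criterion:

```python
import hashlib
import hmac
import time
import uuid

def sign_webhook(secret, raw_body, now_ms=None):
    """Produce the delivery headers for an outgoing webhook POST."""
    timestamp = str(now_ms if now_ms is not None else int(time.time() * 1000))
    signature = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return {
        "X-PlanPulse-Signature": signature,
        "X-PlanPulse-Timestamp": timestamp,
        "X-PlanPulse-Delivery-Id": str(uuid.uuid4()),
    }

def verify_webhook(secret, raw_body, headers, now_ms, tolerance_ms=300_000):
    """Receiver side: constant-time signature check plus a replay window
    on the timestamp (the 5-minute window is an assumption)."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    fresh = abs(now_ms - int(headers["X-PlanPulse-Timestamp"])) <= tolerance_ms
    return fresh and hmac.compare_digest(expected, headers["X-PlanPulse-Signature"])
```

Receivers should also deduplicate on X-PlanPulse-Delivery-Id, since retries reuse it under the at-least-once guarantee.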
Token-Scoped Access, Permission Enforcement, and Audit Trail
Given API tokens are issued with scopes and project bindings When a token lacking the required scope calls an endpoint Then the response is 403 with error code insufficient_scope and a WWW-Authenticate header indicating required scopes When a token has scope but the principal lacks project access Then the response is 403 forbidden without leaking resource existence beyond project context When a token is invalid or expired Then the response is 401 with error invalid_token And scope requirements are enforced as: read:tracelinks for context, read:status for status/history, read:proofs for proofs, manage:webhooks for webhook CRUD, read:audit for audit access And every API call and webhook delivery attempt is recorded in an immutable audit trail with timestamp, actorId, tokenId, ip, action, resourceId, status, latency, and requestId, retained at least 365 days
Deep Link Generation Safe for Email and Third-Party Tools
Given a valid access token with scope read:tracelinks and a TraceLink ID When the client GETs /api/tracelinks/{id}/deeplink Then the response is 200 with a URL that opens the exact context in PlanPulse and contains no PII in query parameters And deep links include no credentials; optional guest share links are signed, expire within 24 hours by default, and may be single-use And links render safely in email clients and third-party tools (no secrets in previews) and fall back to web if native app protocol is unavailable And stale, expired, or tampered links result in 401 without revealing resource existence
Public Documentation and Sample Code for Integrations
Given an external developer When they access the public API documentation Then an OpenAPI 3.1 specification is available at a stable URL, with endpoint reference, authentication, webhook signing and verification steps, retry and rate limit semantics, and an error catalog And ready-to-run sample code in JavaScript and Python demonstrates ID resolution, status query, proof retrieval, and webhook signature verification And a Postman collection and environment are downloadable and import without errors And a sandbox environment with test API keys enables end-to-end calls that return 2xx under the documented scenarios

Anomaly Guard

Continuously scan the ledger for hash mismatches, clock skew, duplicate signers, or out‑of‑order events. Suspect records are flagged with plain‑language explanations, quarantined from export, and routed for quick re‑attestation—keeping your audit trail clean without security expertise.

Requirements

Real-time Ledger Scan Engine
"As a project lead, I want the ledger automatically scanned for integrity anomalies so that I can trust the audit trail without manual reviews."
Description

Continuously evaluates every ledger event at write time and via scheduled backfills to detect hash mismatches, signature verification failures, duplicate signers on the same step/version, out-of-order sequence numbers, missing parent references, and timestamps outside configurable skew tolerances. Operates asynchronously to avoid blocking user actions, with idempotent detection and structured anomaly records (code, severity, evidence snapshot, detected-at). Integrates with PlanPulse’s event pipeline, publishing machine-readable findings to the UI, notification channels, and routing logic. Ensures performance budgets for high-volume projects and exposes observability (metrics, logs, traces) for tuning and auditability.

Acceptance Criteria
Write-Time Async Detection Non-Blocking
Given a user writes a ledger event through the PlanPulse UI or API When the event is persisted to the ledger store Then the write acknowledgment is returned to the user without waiting for anomaly scanning And the 95th percentile write acknowledgment latency is <= 300 ms And the scan for that event starts asynchronously within 200 ms (p95) of commit And if any rule is violated, an anomaly record is created and published within 2 seconds (p95) of commit
Scheduled Backfill Scan Idempotent
Given a scheduled backfill job is configured to scan a time range When the job executes for the specified range Then all events in the range are evaluated against the current detection rules And anomalies are not duplicated; correlation by (eventId, anomalyCode) results in a single open anomaly per unique issue And newly found issues produce anomaly records with detectedAt set to the detection time and fingerprint set consistently And the job exposes a summary with eventsScanned, anomaliesCreated, anomaliesSkipped, duration, and lastCursor committed And rerunning the same backfill range produces zero new anomalies when no new issues exist
Anomaly Coverage and Structured Record Format
Given events exhibiting each anomaly type When the engine scans these events Then an anomaly is created with code one of [HASH_MISMATCH, SIG_VERIFY_FAIL, DUPLICATE_SIGNER_SAME_STEP_VERSION, OUT_OF_ORDER_SEQUENCE, MISSING_PARENT_REFERENCE, CLOCK_SKEW_EXCEEDED] matching the issue And severity is CRITICAL for HASH_MISMATCH and SIG_VERIFY_FAIL, MAJOR for OUT_OF_ORDER_SEQUENCE and MISSING_PARENT_REFERENCE, and MINOR for CLOCK_SKEW_EXCEEDED and DUPLICATE_SIGNER_SAME_STEP_VERSION And each anomaly record includes anomalyId (UUID), code, severity, eventId, projectId, stepId/versionId (if applicable), detectedAt (UTC ISO-8601), detectorVersion, fingerprint, explanation (plain language), and evidence (JSON snapshot) And the evidence includes fields relevant to the code (e.g., storedHash and computedHash; signerIds/signatures; sequenceExpected and sequenceActual; parentId; eventTimestamp, serverTimestamp, skewMs) And anomaly records are machine-readable and serializable to JSON without loss
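The code-to-severity mapping and the structured record shape above can be pinned down in a small constructor; the detectorVersion value is illustrative:

```python
import uuid

SEVERITY_BY_CODE = {
    "HASH_MISMATCH": "CRITICAL",
    "SIG_VERIFY_FAIL": "CRITICAL",
    "OUT_OF_ORDER_SEQUENCE": "MAJOR",
    "MISSING_PARENT_REFERENCE": "MAJOR",
    "CLOCK_SKEW_EXCEEDED": "MINOR",
    "DUPLICATE_SIGNER_SAME_STEP_VERSION": "MINOR",
}

def make_anomaly(code, event_id, project_id, evidence, detected_at):
    """Assemble a structured anomaly record. The fingerprint keys
    deduplication by (eventId, anomalyCode), as the backfill
    criterion requires."""
    return {
        "anomalyId": str(uuid.uuid4()),
        "code": code,
        "severity": SEVERITY_BY_CODE[code],
        "eventId": event_id,
        "projectId": project_id,
        "detectedAt": detected_at,
        "detectorVersion": "1.0.0",  # illustrative
        "fingerprint": f"{event_id}:{code}",
        "evidence": evidence,        # code-specific JSON snapshot
    }
```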
Quarantine, Publishing, and Routing Integration
Given an anomaly exists and is Open for an event When an export is requested for any scope that includes that event Then the event is excluded from export output and the export payload includes a quarantine section listing affected eventIds and anomalyIds And a machine-readable finding is published to the UI and notification channels within 5 seconds (p95) of anomaly creation And a routing message for re-attestation is published containing eventId, anomalyId, code, severity, and evidence reference And when the anomaly is marked Resolved after re-attestation, subsequent exports include the event and a resolution message is published
Performance Budgets Under High Volume
Given a project producing 10,000 events per minute sustained for 15 minutes When the engine processes this load Then p95 detection latency from commit to anomaly publication is <= 5 seconds And the processing backlog queue depth does not grow over the 15-minute interval And the service maintains CPU utilization <= 70% and memory utilization <= 75% on the provisioned instance class And user-facing write acknowledgment p95 remains <= 300 ms
Observability: Metrics, Logs, Traces
Given the engine is running under normal and error conditions When metrics are scraped Then the following metrics are exposed with labels {code, severity, source}: anomaly_guard_detections_total, anomaly_guard_detection_latency_seconds, anomaly_guard_queue_depth, anomaly_guard_publish_failures_total, anomaly_guard_deduplications_total, anomaly_guard_backfill_scan_range_seconds, anomaly_guard_exports_quarantined_total And structured logs include traceId, eventId, anomalyId (when applicable), code, severity, and message, emitted at INFO for detections and WARN/ERROR for failures And distributed traces include spans for ledger.write, anomaly.scan, anomaly.publish with parent-child relationships linking to the originating write And sampling rate and log verbosity are configurable at runtime
Configurable Clock Skew Tolerance
Given clockSkewToleranceMs is set to 2000 When an event timestamp differs from server time by 2500 ms Then a CLOCK_SKEW_EXCEEDED anomaly is created with evidence.skewMs ≈ 2500 And when clockSkewToleranceMs is updated to 3000 without restart, the new threshold takes effect within 2 minutes for both write-time and backfill scans And an event differing by 2500 ms no longer triggers CLOCK_SKEW_EXCEEDED under the new tolerance
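The hot-reloadable tolerance in this criterion amounts to reading the threshold on every check rather than capturing it at startup; a minimal sketch:

```python
class ClockSkewRule:
    """Clock-skew check whose tolerance can change at runtime,
    mirroring the no-restart update behavior in the criterion."""

    def __init__(self, tolerance_ms):
        self.tolerance_ms = tolerance_ms

    def check(self, event_ts_ms, server_ts_ms):
        """Return a CLOCK_SKEW_EXCEEDED finding, or None if within tolerance."""
        skew_ms = abs(event_ts_ms - server_ts_ms)
        if skew_ms > self.tolerance_ms:
            return {"code": "CLOCK_SKEW_EXCEEDED", "evidence": {"skewMs": skew_ms}}
        return None
```

The same rule object serves both write-time and backfill scans, so a configuration update takes effect for both paths at once.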
Suspect Record Quarantine
"As a compliance officer, I want suspect records automatically quarantined from export so that all external deliverables remain clean and compliant."
Description

Automatically places flagged events into a non-destructive quarantine state that prevents inclusion in exports, approval packets, and downstream automations while preserving visibility to authorized users. Applies clear status badges in timelines, blocks external API exports of quarantined items by default, and records the quarantine action immutably in the ledger. Supports dependency awareness (e.g., derived artifacts inherit quarantine) and provides APIs/filters to include or exclude quarantined records for internal review workflows.

Acceptance Criteria
Auto-Quarantine on Flag Detection
Given an event is flagged by Anomaly Guard for a detectable reason (e.g., hash_mismatch, clock_skew, duplicate_signer, out_of_order) When the flag is raised Then the event state is set to quarantine_state='Quarantined' And quarantine_reason lists the detected reason(s) in plain language And quarantine_timestamp is recorded in UTC And only quarantine metadata is added; the event payload and version history remain unchanged And the event remains addressable by ID to authorized users
UI Visibility and Badges for Authorized Users
Given a user with an authorized role (Architect, Project Lead, Compliance Auditor) When viewing the project timeline or event detail Then quarantined events display a visible 'Quarantined' status badge with distinct color/icon And a tooltip or info panel shows the plain-language reason and timestamp And a 'Review/Retest' action is available if the user has permission to initiate re-attestation And users without authorization (e.g., Client Viewer) do not see quarantined events in lists and receive 403 on direct access
Exclusion from Exports, Approval Packets, and Automations
Given a project contains both quarantined and non-quarantined events When a user generates an export or approval packet Then only non-quarantined events are included by default And the export summary shows the count and IDs of excluded quarantined items And attempts to manually include a quarantined item are blocked with error code 'QUARANTINED_ITEM' And downstream automations subscribed to event changes do not trigger for quarantined events while they remain quarantined
External API Default Blocking
Given an external API client requests data via export-oriented endpoints When calling collection endpoints (e.g., GET /exports) without explicit review scope Then quarantined items are omitted from responses by default And direct export fetch for a quarantined item (e.g., GET /exports/{id}) returns 403 with error_code='QUARANTINED' and includes reason and remediation link And the blocked attempt is recorded with client_id and timestamp in audit logs
Immutable Ledger Quarantine Entry
Given a quarantine action occurs (automatic or manual) When the system writes to the ledger Then an immutable entry is appended containing actor (system/user), reason(s), prior_state, new_state='Quarantined', timestamps, and hash And any attempt to modify or delete this ledger entry is rejected with an integrity error And ledger verification confirms the chain remains valid including the quarantine entry
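The append-only, tamper-rejecting ledger above is typically built as a hash chain: each entry's hash covers its payload and the previous entry's hash, so any in-place edit breaks verification downstream. A minimal sketch (the entry shape is illustrative):

```python
import hashlib
import json

class AppendOnlyLedger:
    """Sketch of an append-only ledger whose entries are hash-chained so
    any in-place modification is detected by verification."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, payload):
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"payload": payload, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        """Recompute every hash and link; False on any break in the chain."""
        prev = self.GENESIS
        for entry in self.entries:
            body = {"payload": entry["payload"], "prev": entry["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != digest:
                return False
            prev = entry["hash"]
        return True
```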
Dependency Quarantine Inheritance
Given derived artifacts A1 and A2 are produced from event E1 When E1 enters quarantine Then A1 and A2 inherit quarantine_state='Quarantined' with reason 'Parent quarantined: E1' And any approvals or workflows involving A1/A2 move to status 'Blocked by Quarantine' And when E1 is cleared, A1/A2 are re-evaluated and only exit quarantine if no other quarantine conditions apply
Internal Review APIs and Filters
Given an internal reviewer with Compliance Auditor scope When calling list/search APIs with status=quarantined or includeQuarantined=true Then quarantined items are returned with full quarantine metadata And default list/search requests exclude quarantined items unless explicitly included And export endpoints remain blocked for quarantined items regardless of filters And the OpenAPI spec documents these parameters and default behaviors
One-click Re-attestation Flow
"As an architect, I want to quickly re-attest flagged events so that I can resolve false positives and keep work moving without waiting on specialists."
Description

Provides a guided remediation flow to resolve flagged anomalies with minimal friction, including re-hashing content, re-signing with approved keys, server-time normalization for skew issues, or confirming a legitimate override with rationale. Preserves the original event and appends a superseding attestation entry, linking evidence and resolution outcome. Supports batch remediation for repetitive anomalies, validates permissions, and updates anomaly status for real-time feedback across the workspace.

Acceptance Criteria
Single-Record Re-attestation via One-Click Flow
Given a flagged record with a hash mismatch and a user with Remediate Anomalies permission When the user initiates One-click Re-attestation and selects an approved signing key Then the system re-hashes the content, validates the format, re-signs with the selected key, appends a superseding attestation entry, marks the anomaly as Resolved, and lifts quarantine for that record within 2 seconds And a plain-language explanation of the original anomaly and the remediation steps is shown to the user And if the selected key is not on the approved list or is expired, the operation is blocked with an explicit error and no ledger changes occur And an audit log is written capturing actor, key-id, old-hash, new-hash, timestamps, and outcome
Clock Skew Normalization Remediation
Given a flagged record whose anomaly type is clock skew When the user runs the normalization step in the re-attestation flow Then the system computes a normalized server timestamp, re-orders the affected event if needed, appends a superseding attestation referencing the normalization, and updates anomaly status to Resolved And the original event timestamp remains immutable; the normalized timestamp is recorded only in the superseding entry And the event ordering in the ledger becomes non-decreasing across adjacent entries And the change propagates to all viewers within 5 seconds
Legitimate Override Confirmation with Rationale
Given a flagged record whose anomaly type allows legitimate override (e.g., duplicate signer, out-of-order event) When the user selects Confirm Legitimate Override, enters a rationale of at least 20 characters, and chooses an allowed policy reason code Then the system appends a superseding attestation with the rationale and policy code, sets outcome to Legitimate Override, and marks the anomaly as Resolved And if organizational policy requires secondary approval, the attestation is placed in Pending and routed to an approver; export remains quarantined until approval And attempts with missing rationale, disallowed policy code, or insufficient permissions are rejected with clear validation messages and no ledger changes
Immutable Original and Linked Superseding Attestation
Given any remediation action completes When the system writes the superseding attestation Then the original event remains immutable and readable, and the superseding entry includes a bidirectional link (original_id and superseding_id) And the UI exposes View Original and View Superseding actions from either record And evidence files (hash proofs, rationale note, key certificate) are attached to the superseding entry and checksummed; attachments are retrievable and match stored checksums
Batch Remediation for Repetitive Anomalies
Given a user with Remediate Anomalies permission selects 10–500 flagged records of the same anomaly type When the user runs batch re-attestation with a chosen approved key or selected remediation option Then the system processes records concurrently with progress feedback, producing per-record outcomes (Resolved, Pending Approval, Failed) And partial failures do not block other records; failed items are listed with error reasons and a one-click Retry option And the final summary shows counts by outcome and updates each record’s quarantine status accordingly
Permission Validation and Access Control Enforcement
Given any user attempts to initiate re-attestation (single or batch) When the user lacks Remediate Anomalies permission or the selected signing key requires Key Use permission not held by the user Then the system returns 403 for API calls or shows an Access Denied message in UI, performs no ledger mutation, and logs the attempt with user id and reason And cross-workspace remediation attempts are blocked; only records within the user’s current workspace scope are eligible
Real-Time Status Propagation and Export Quarantine Enforcement
Given a record is flagged and quarantined from export When an anomaly is resolved via any supported remediation path Then the record’s status updates to Resolved for all connected clients within 5 seconds, the quarantine is lifted, and the record becomes eligible for the next export And attempts to export while a record remains flagged return an error indicating the count and ids of quarantined records, with a link to open the remediation flow
Plain-language Anomaly Explanations
"As a client-facing PM, I want clear, plain-language explanations of anomalies so that I can understand what happened and take the right action without security expertise."
Description

Generates human-readable explanations for each anomaly that describe what was detected, why it matters, the impacted records, and the recommended next step in simple terms. Pairs explanations with concise evidence (e.g., hash before/after, signer fingerprint, timestamp delta) and a machine-readable code for automation. Surfaces explanations consistently in timeline tooltips, anomaly detail panels, and notifications, enabling non-experts to act confidently.

Acceptance Criteria
Hash Mismatch Explanation in Detail Panel
Given a ledger entry whose recomputed hash does not equal the stored hash When the user opens the anomaly detail panel Then the explanation states what was detected, why it matters, which records are impacted, and the recommended next step And the explanation readability (Flesch–Kincaid Grade Level) is <= 8.0 and each sentence <= 24 words And evidence includes original_hash, recomputed_hash, and impacted_record_ids And a machine-readable code equals "HASH_MISMATCH" And an action control labeled "Request re-attestation" is visible and enabled And the explanation text contains none of the following terms: nonce, merkle, HMAC
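The Flesch-Kincaid Grade Level check in this criterion can be automated. A rough sketch follows; the formula is the standard Flesch-Kincaid grade equation, but the syllable counter is a crude vowel-group heuristic rather than a dictionary-backed count, so a production check would need a proper readability library.

```python
import re

def syllables(word):
    """Crude heuristic: count vowel groups (overestimates silent-e words)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syl / len(words)) - 15.59
```

A CI gate could run this over every explanation template and fail the build when any result exceeds 8.0 or any sentence exceeds 24 words.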
Clock Skew Explanation in Timeline Tooltip
Given two consecutive events where the timestamp delta exceeds the configured clock_skew_threshold_ms or is negative beyond the threshold When the user hovers the anomaly icon in the timeline Then the tooltip shows a single-sentence plain-language explanation that includes the drift magnitude in seconds and why it matters And the tooltip text length is <= 140 characters And evidence in the tooltip data attributes includes timestamp_delta_ms and event_ids And the machine-readable code equals "CLOCK_SKEW" And the tooltip "View details" action opens the anomaly detail panel for the same anomaly_id
Duplicate Signer Explanation in Notification
Given multiple signatures from the same signer on a single revision exceeding the duplicate_signer_policy When a notification is sent to the project lead Then the notification body states what was detected, why it matters, the impacted revision_id, the signer display_name, and the recommended next step And evidence includes signer_fingerprint and signature_ids And the machine-readable code equals "DUPLICATE_SIGNER" And the notification includes a CTA labeled "Resolve duplicate" linking to the anomaly detail with the anomaly_id And the readability (Flesch–Kincaid Grade Level) is <= 8.0
Out-of-Order Events Explanation with Impacted Records
Given events are detected out of natural order (an event's sequence number is lower than its predecessor's) When the user views the anomaly detail panel Then the explanation states what was detected, why it matters, and lists affected event_ids in expected vs actual order And evidence includes expected_sequence and actual_sequence arrays and impacted_record_ids And the machine-readable code equals "OUT_OF_ORDER" And the recommended next step is present with an action labeled "Resequence events" or "Re-ingest events" And the impacted count displayed equals the number of event_ids listed
Consistency of Explanation Across Timeline, Detail, Notification
Given any supported anomaly is generated When the explanation is surfaced in the timeline tooltip, anomaly detail panel, and notification Then the message_id is identical across all surfaces And the natural-language text is identical across surfaces except for truncation in the tooltip to <= 140 characters with an ellipsis And the machine-readable code is identical across surfaces And evidence values are identical across surfaces And the recommended next step wording is identical across surfaces
Machine-Readable Code and Evidence in Webhook/API Payload
Given an anomaly is generated and recorded When an outbound webhook is emitted and the anomalies API endpoint is queried for that anomaly_id Then the payload contains fields: anomaly_id, code, message, impacted_record_ids, evidence, recommended_next_step{label,action_id}, severity, created_at And code is one of: "HASH_MISMATCH","CLOCK_SKEW","DUPLICATE_SIGNER","OUT_OF_ORDER" And evidence keys are type-consistent per code (e.g., hash_before/hash_after for HASH_MISMATCH; timestamp_delta_ms for CLOCK_SKEW; signer_fingerprint for DUPLICATE_SIGNER; expected_sequence/actual_sequence for OUT_OF_ORDER) And message text matches the UI message for locale "en-US" and message_id And the payload validates against schema version "v1.0" and contains no nulls in required fields
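A lightweight validator for the payload contract above might look like this. The field names, code enum, and per-code evidence keys are taken from the criteria; the exact structure of `recommended_next_step` and the error-message wording are assumptions, and a real service would likely enforce this with a JSON Schema instead.

```python
REQUIRED = {"anomaly_id", "code", "message", "impacted_record_ids",
            "evidence", "recommended_next_step", "severity", "created_at"}

# Evidence keys must be type-consistent per code, per the criterion above.
EVIDENCE_KEYS = {
    "HASH_MISMATCH": {"hash_before", "hash_after"},
    "CLOCK_SKEW": {"timestamp_delta_ms"},
    "DUPLICATE_SIGNER": {"signer_fingerprint"},
    "OUT_OF_ORDER": {"expected_sequence", "actual_sequence"},
}

def validate_payload(payload):
    """Return a list of problems; an empty list means the payload conforms."""
    errors = []
    missing = REQUIRED - payload.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if any(payload.get(f) is None for f in REQUIRED & payload.keys()):
        errors.append("required field is null")
    code = payload.get("code")
    if code not in EVIDENCE_KEYS:
        errors.append(f"unknown code: {code!r}")
    elif not EVIDENCE_KEYS[code] <= payload.get("evidence", {}).keys():
        errors.append(f"evidence keys incomplete for {code}")
    return errors
```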
Role-based Routing & Permissions
"As a workspace admin, I want anomalies routed to the right reviewer with proper permissions so that resolution is fast and controlled."
Description

Routes new anomalies to designated reviewers based on project, severity, and rule type, assigning owners, due times, and escalation paths. Enforces granular permissions for viewing, re-attesting, overriding, or changing detection rules, with audit logs for all actions. Integrates with email, Slack, and webhooks for notifications and acknowledgments, and supports queue views for teams handling multiple projects.

Acceptance Criteria
Auto-route Anomalies by Project, Severity, and Rule Type
Given routing rules include project="Atrium", severity="High", ruleType="HashMismatch" -> reviewer="alice", escalationPath="SecLead" When a matching anomaly is created Then owner="alice", dueTime set from High SLA (4h), escalationPath="SecLead", and the item appears in Alice's queue within 5 seconds Given multiple rules match When routing executes Then the most specific rule (project+severity+ruleType) is chosen deterministically and stored as routingRuleId on the anomaly Given no matching rule exists When routing executes Then the anomaly is placed in the project's default review queue; if that is unset, then route to the global "Unassigned" queue and notify project admins Given the resolved reviewer is inactive or lacks access to the project When routing executes Then fallback to the next escalation target and log the fallback in the audit trail
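The most-specific-rule selection described above could work roughly like this. In this sketch, `None` in a rule field acts as a wildcard, and ties break on the lowest rule id, which is one possible deterministic convention (the criterion only requires that the choice be deterministic and recorded).

```python
def route(anomaly, rules):
    """Pick the most specific matching rule; None in a rule field is a wildcard."""
    fields = ("project", "severity", "ruleType")

    def matches(rule):
        return all(rule.get(f) in (None, anomaly[f]) for f in fields)

    def specificity(rule):
        # project+severity+ruleType (3) beats any subset (2, 1, or 0).
        return sum(rule.get(f) is not None for f in fields)

    candidates = [r for r in rules if matches(r)]
    if not candidates:
        return None  # caller falls back to the default / "Unassigned" queue
    return max(candidates, key=lambda r: (specificity(r), -r["id"]))
```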
Ownership, Due Time, and Escalation
Given severity SLAs are configured (High=4h, Medium=24h, Low=72h) and Ack SLAs (High=30m, Medium=2h, Low=8h) When an anomaly is assigned Then dueTime and ackDueTime are computed from createdAt per severity and stored on the record Given ackDueTime passes without an Acknowledged event When the timer fires Then send escalation notifications to the first escalation target and set status="Escalated: Unacknowledged" Given dueTime passes without status in {Resolved, Re-attested, Overridden} When the timer fires Then reassign owner to the next escalation target, notify both previous and new owners, and append an escalation entry to the audit log Given the owner acknowledges via any supported channel When the acknowledgement is recorded Then pause further ack escalations for this anomaly and retain the original dueTime
Granular Permissions Enforcement
Given a user with Viewer role for project X When they open an anomaly Then they can view details but any attempt to re-attest, override, or change rules returns HTTP 403 and the UI controls are disabled Given a user with Reviewer role for project X When they re-attest an anomaly Then the action succeeds only if they provide a justification (min 10 characters) and the anomaly status updates to "Re-attested" Given a user with Approver role for project X When they perform an override Then the system requires a reason and records the override reason, actor, and timestamp; changes to detection rules remain forbidden (HTTP 403) Given a user with Admin role for project X When they attempt any of view, re-attest, override, or change detection rules/routing Then all actions are permitted and evaluated against project scope only; cross-project items remain inaccessible
Audit Log for Routing and Permissions Actions
Given any action on an anomaly or rule (assignment, re-attest, override, rule create/update/disable, routing change, permission grant/deny) When the action is committed Then an immutable audit log entry is created within 2 seconds containing timestamp (UTC ISO8601), actorId, projectId, targetId, actionType, before/after snapshot, channel (UI/API), and clientIp Given an auditor queries audit logs for anomalyId X When the results are returned Then they include a complete, chronological list of all related entries with no gaps, and entries are read-only Given an unauthorized action attempt occurs When it is blocked Then a "Denied" audit entry is recorded without exposing sensitive before/after content Given audit logs are exported via API When a request is made with proper permissions Then entries are returned in paginated JSON and CSV formats with consistent counts and checksum hash per page
Notifications and Acknowledgments via Email, Slack, and Webhooks
Given an anomaly is assigned or escalated When the event occurs Then email, Slack, and webhook notifications are dispatched within 60 seconds and include anomalyId, project, severity, ruleType, owner, dueTime, and deep links Given a recipient clicks an email Acknowledge link containing a signed, single-use token When the token is valid and unexpired (<=24h) Then the anomaly is marked Acknowledged with channel=email; otherwise a safe error page is shown and no state changes occur Given a Slack interactive Acknowledge action is used When the Slack signature validates Then the anomaly is marked Acknowledged with channel=slack and the message updates to reflect the new state Given a partner system sends an acknowledgment to the webhook ack endpoint When the request carries a valid HMAC signature and idempotency key Then the anomaly is marked Acknowledged with channel=webhook and duplicate requests do not alter the state more than once Given any notification delivery fails (bounce, 4xx/5xx) When retries are attempted Then up to 3 retries with exponential backoff are performed, failures are logged, and the owner is alerted via alternate channel if available
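The webhook acknowledgment path above combines signature verification with idempotency. A minimal sketch, assuming an HMAC-SHA256 hex digest over the raw body; the secret value, status codes, and in-memory stores are illustrative stand-ins for the real configuration and database.

```python
import hashlib
import hmac

SECRET = b"shared-secret"        # hypothetical per-partner shared secret
_seen_keys = set()               # processed idempotency keys
anomalies = {"a-42": "Flagged"}  # toy anomaly store

def handle_webhook_ack(anomaly_id, body, signature, idempotency_key):
    """Verify the HMAC-SHA256 signature, then apply the ack at most once."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return 401  # invalid signature: no state change
    if idempotency_key not in _seen_keys:
        _seen_keys.add(idempotency_key)
        anomalies[anomaly_id] = "Acknowledged"  # channel=webhook in a real store
    return 200  # duplicates succeed but never re-apply the change
```

`hmac.compare_digest` avoids timing side channels, and recording the idempotency key before mutating state is what keeps duplicate deliveries from altering the record more than once.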
Multi-Project Team Queues and Filters
Given a user is a member of Team T with access to projects A and B When they open the Team Queue Then they see anomalies for A and B only, sorted by severity desc then createdAt desc, with SLA badges showing time remaining or overdue Given the user applies filters (project, severity, ruleType, status, owner) When the results are shown Then counts, list items, and charts reflect the same filtered set and the filter state persists across sessions Given the queue contains more than 100 items When the view loads Then the first page (25 items) renders in <=2 seconds and pagination controls for 25/50/100 per page function correctly Given the user selects multiple anomalies they are authorized to modify When they perform a bulk action (Acknowledge, Assign to me, Reassign to user) Then the action completes for authorized items and skips others with clear per-item errors
Detection Rule Change Governance
Given a user with Admin role for project X When they create, update, disable, or rollback detection rule R Then the change requires a reason, increments rule version, records effectiveAt, and is applied to new anomalies within 60 seconds; existing anomalies are unaffected Given a user without Admin role attempts to change detection rules or routing When the request is processed Then the API returns HTTP 403 and no changes are persisted Given a rule change is made When notifications are configured for rule subscribers Then an email/Slack/webhook notification is sent within 60 seconds summarizing the change (ruleId, version, actor, reason, effectiveAt) Given an audit is performed on rule R When audit logs are queried Then all versions and changes for R are present with before/after details and actor attribution
Integrity Dashboard & Alerts
"As a project lead, I want an integrity dashboard and alerts so that I can monitor health and react quickly to issues across my projects."
Description

Provides a real-time dashboard summarizing anomaly counts, types, severities, MTTR, open vs. resolved status, and trends over time, filterable by project, client, user, and date range. Supports alert policies with thresholds and deduplication, sending notifications via email, Slack, and webhooks. Offers exportable reports, including a clean audit summary that excludes quarantined items and a full audit view for internal review.

Acceptance Criteria
Real-Time Anomaly Metrics Dashboard Load
Given the Integrity Dashboard is accessed by an authenticated user with view permissions When anomalies exist across multiple projects Then the dashboard displays total anomaly count, counts by type (hash mismatch, clock skew, duplicate signers, out‑of‑order), severity distribution, MTTR, open vs. resolved counts, and a trends-over-time chart within 2 seconds of page load And the "Last updated" timestamp indicates freshness within the past 5 seconds And the dashboard auto-refreshes every 15 seconds without full page reload, updating metrics if new anomalies arrive And when no anomalies exist, all widgets show zero values and an empty-state message
Filter by Project, Client, User, and Date Range
Given anomalies span multiple projects, clients, users, and dates When the user applies any combination of filters Then all metrics, tables, and charts reflect the filtered dataset using AND semantics And applied filters are visible as removable chips and persist in the URL so shared links reproduce the same view And date range is inclusive of start 00:00:00 to end 23:59:59 in the workspace timezone, which is displayed in the UI And clearing filters restores the default range of the last 30 days And filter application returns results within 1.5 seconds for up to 100k anomaly records
MTTR and Trend Accuracy
Given a selected date range with resolved anomalies When MTTR is calculated Then MTTR is computed as the median time from detection to resolution for the filtered set and displayed to the nearest minute And the trends chart shows daily counts stacked by anomaly type with a gapless x-axis and tooltips displaying counts and percent change versus the previous equal period And dashboard values match the backend /metrics API for the same filters with an absolute difference of <= 1%
Alert Policy Creation with Thresholds and Deduplication
Given a user with manage-alerts permission When the user creates an alert policy with selected anomaly types, minimum severity, scope (project/client), condition (> X anomalies in Y minutes), dedup window, and notification channels Then the policy is saved, enabled by default, and listed with its next evaluation time And triggering 10 qualifying anomalies within the window generates a single alert notification per policy-scope within the dedup window, aggregating the count And the policy auto-resolves after no qualifying anomalies for the configured quiet period and emits a resolution notification And a test button sends a test notification to all configured channels without affecting metrics
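The threshold-plus-deduplication evaluation described above can be sketched with a sliding window. Timestamps are in seconds, and the shape of the aggregated alert payload is illustrative; auto-resolution and quiet periods are omitted for brevity.

```python
from collections import deque

class AlertPolicy:
    """Fire at most one alert per dedup window when more than `threshold`
    qualifying anomalies arrive within `window_s` seconds."""

    def __init__(self, threshold, window_s, dedup_s):
        self.threshold = threshold
        self.window_s = window_s
        self.dedup_s = dedup_s
        self.events = deque()
        self.last_alert = None

    def record(self, ts):
        """Register a qualifying anomaly at time ts; return an alert or None."""
        self.events.append(ts)
        while self.events and self.events[0] <= ts - self.window_s:
            self.events.popleft()  # drop events outside the sliding window
        if len(self.events) > self.threshold and (
                self.last_alert is None or ts - self.last_alert >= self.dedup_s):
            self.last_alert = ts
            return {"count": len(self.events), "at": ts}  # aggregated count
        return None
```

With `threshold=3, window_s=60, dedup_s=300`, a burst of ten anomalies yields exactly one notification carrying the aggregated count, matching the single-alert-per-policy-scope requirement.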
Multi-channel Notifications: Email, Slack, Webhook
Given an alert fires When notifications are sent Then email is delivered to configured recipients with subject containing policy name, scope, and highest severity, and body includes top 5 examples and a deep link to the dashboard And Slack message is posted to the configured channel using blocks, threading updates for the same dedup key, and includes a "View in PlanPulse" link And a webhook POST is sent over HTTPS with JSON payload matching the published schema, signed with HMAC-SHA256 using the shared secret, and includes X-Signature and X-Timestamp headers And failed webhook deliveries are retried with exponential backoff up to 5 attempts and are visible in a delivery log with success/failure status And all notifications are emitted within 30 seconds of the alert trigger
Exportable Reports: Clean Audit Summary and Full Audit View
Given a user with Export Audit permission and active filters When the user requests an export Then the user can choose Clean Audit Summary (excluding quarantined items) or Full Audit View (including quarantined with reasons) and CSV or PDF format And the export job is queued and completes within 60 seconds for up to 50k records, after which a secure download link is presented And totals in the Clean Audit Summary reconcile with the dashboard counts for the same filters with zero difference And the Full Audit export includes columns: anomaly_id, type, severity, status (open/resolved/quarantined), quarantine_reason, detected_at, resolved_at, project, client, user, and explanation And download links expire after 24 hours and cannot be accessed without authorization
Time Sync & Skew Tolerances
"As a workspace administrator, I want configurable time synchronization and skew tolerances so that benign clock differences don’t flood the system with noisy alerts."
Description

Centralizes time synchronization by relying on server-issued timestamp tokens and NTP-backed server time, storing both client-reported and server-validated timestamps. Allows workspace-level configuration of acceptable skew tolerances and rule sensitivity to reduce false positives while preserving integrity guarantees. Provides SDK guidance for client timestamp handling and flags events exceeding thresholds for review.

Acceptance Criteria
Server Timestamp Token on Event Ingest
Given an event submission with payload and optional client_ts When the server receives the event Then the server issues and attaches a timestamp token that includes server_ts (UTC, ms precision) and event_id, signed with the server’s private key And server_ts is derived from the NTP-synced system clock at receipt time And the token verifies successfully using the server’s public key And the token is persisted with the event record
Dual Timestamp Persistence (Client and Server Times)
Given an event contains a client_ts value When the event is persisted Then the record stores client_ts (UTC, ms) and server_ts (UTC, ms) as immutable fields And the retrieval API returns both timestamps unmodified And if client_ts is absent or invalid, server_ts is still stored and client_ts is null And updates to either timestamp are rejected at the API and data layer
NTP-Backed Server Time Health Check
Given the server maintains NTP synchronization When time_sync_offset_ms is within the configured threshold (e.g., ≤100 ms) Then time source status is reported as time_sync_ok via health metrics And timestamp tokens continue to be issued When time_sync_offset_ms exceeds the threshold Then new events are flagged with reason server_time_unsynced for review And the health metrics expose the current offset and degraded status
Workspace Skew Tolerance Configuration
Given a workspace admin with appropriate permissions When they open Time Sync & Skew settings Then they can set skew_tolerance_seconds within 0–600 (default 120) And they can choose rule_sensitivity preset (strict|normal|lenient) that maps to tolerance values or multipliers And the chosen values are validated, saved, and auditable with who/when And the active tolerance is applied immediately to subsequent ingests in that workspace
Skew Within Tolerance Passes
Given a workspace with skew_tolerance_seconds = T And an incoming event has both client_ts and server_ts When abs(client_ts − server_ts) ≤ T Then the event is not flagged for time skew And no time-related anomaly reason is attached And the event proceeds through normal processing and approval flows
Skew Exceeds Tolerance Flagged with Reason
Given a workspace with skew_tolerance_seconds = T And an incoming event has both client_ts and server_ts When abs(client_ts − server_ts) > T Then the event is flagged for review with status Requires Time Review And a plain-language reason is attached in the form client_server_skew_exceeded: <measured_skew_seconds> And the event is routed to Anomaly Guard’s review queue And no auto-approval or downstream export is triggered until resolved
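The tolerance check in the two criteria above reduces to a small comparison. The reason-string format follows the criterion; returning `None` for the pass case is an assumption of this sketch.

```python
def check_skew(client_ts_ms, server_ts_ms, tolerance_s):
    """Return a flag reason if |client_ts - server_ts| exceeds the workspace
    tolerance; None means the event passes the time-skew check."""
    if client_ts_ms is None:
        return None  # no client time to compare: server_ts stands alone
    skew_s = abs(client_ts_ms - server_ts_ms) / 1000
    if skew_s > tolerance_s:
        return f"client_server_skew_exceeded: {skew_s:g}"
    return None
```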
SDK Guidance and Sample Implementation for Client Time
Given the SDK package and developer docs When developers follow the Time Sync guidance Then they can capture client_ts in UTC with ms precision, include it in event payloads, and avoid device-local timezone offsets And sample code (at least JS and Python) compiles/runs and sets client_ts correctly in integration tests And docs clearly describe fallback behavior when client time is unavailable (omit client_ts; rely on server_ts) And linting/CI checks validate presence and format of client_ts in sample apps
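The SDK guidance for capturing `client_ts` might look like this in Python. The fallback behavior (omit `client_ts`, rely on `server_ts`) follows the criterion; the function names are hypothetical.

```python
from datetime import datetime, timezone

def client_ts_ms():
    """UTC wall-clock time in whole milliseconds, independent of the
    device's local timezone setting."""
    return int(datetime.now(timezone.utc).timestamp() * 1000)

def build_event(payload):
    """Attach client_ts to an event payload; if client time is unavailable,
    omit it so the server falls back to server_ts alone."""
    event = dict(payload)
    try:
        event["client_ts"] = client_ts_ms()
    except OSError:
        pass  # no usable clock: documented fallback is to omit client_ts
    return event
```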

Milestone Mapper

Map contract line items and percentages directly to approval events (rungs, merge gates, final stamps) so invoices are generated the moment work is truly approved. Supports phased billing, retainage, tax rules, and multi‑currency. Each line item references its approval hash and TraceLink ID, eliminating manual invoice builds and disputes over what was earned.

Requirements

Approval Event Mapping Engine
"As a project lead, I want to map contract line items to approval events so that earned amounts are calculated automatically and unambiguously."
Description

Implements a rules engine that maps each contract line item and percentage to specific PlanPulse approval events (rungs, merge gates, final stamps). Stores and resolves an immutable approval hash and TraceLink ID per mapping to guarantee idempotent calculations and eliminate duplicate triggers. Supports partial approvals, percent-complete rollups, split allocations across multiple events, and validation to prevent unmapped or over-allocated amounts. Continuously listens to approval webhooks from the visual workspace, updates earned value in real time, and exposes a consistent API for downstream billing, reporting, and dashboards.

Acceptance Criteria
Create Mapping With Split Allocations
Given a contract line item with amount and currency and a mapping request allocating percentages to specific approval events (rungs, merge gates, final stamps) When the mapping is submitted via POST /v1/mappings Then the system validates that the sum of allocations for the line item equals 100% (precision: two decimals) and that each allocation references a valid event type and identifier And it rejects the request with HTTP 422 and error codes {SUM_NOT_100, INVALID_EVENT_REFERENCE} if validation fails And on success returns HTTP 201 with mapping_id, line_item_id, allocations[], created_at, and schema_version And initial approval_hash and trace_link_id for each allocation are null until the corresponding approval event is received
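The allocation validation could be sketched as below. The error codes come from the criterion; the set of valid event-type names is an assumption based on the feature description, and `Decimal` is used so the two-decimal precision rule is exact.

```python
from decimal import Decimal

# Event-type names are assumptions based on the feature description.
VALID_EVENT_TYPES = {"rung", "merge_gate", "final_stamp"}

def validate_mapping(allocations):
    """Return error codes for a mapping request; an empty list means valid."""
    errors = []
    total = sum(Decimal(str(a["pct"])) for a in allocations)
    if total.quantize(Decimal("0.01")) != Decimal("100.00"):
        errors.append("SUM_NOT_100")
    if any(a["event_type"] not in VALID_EVENT_TYPES for a in allocations):
        errors.append("INVALID_EVENT_REFERENCE")
    return errors
```

A non-empty result would map to the HTTP 422 rejection described above; an empty result lets the mapping persist with null `approval_hash` and `trace_link_id` until events arrive.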
Idempotent Processing of Approval Webhooks
Given a valid approval webhook payload containing approval_hash, trace_link_id, event_type, event_id, line_item_id, allocation_percentage, and approval_timestamp When the payload is processed for the first time Then the engine records a single immutable ledger entry keyed by approval_hash and trace_link_id And increases the line item earned_value by allocation_percentage of the line item amount (subject to remaining allocation) using the exchange rate at approval_timestamp if currencies differ And responds 200 with body {processed: true, approval_hash, trace_link_id} When the identical payload (same approval_hash and trace_link_id) is received again Then no additional ledger entry is created, earned_value remains unchanged, and the engine responds 200 with idempotent: true And concurrent duplicate deliveries result in at most one ledger entry due to uniqueness constraints on approval_hash and trace_link_id
Real-Time Earned Value Update SLA
Given a steady stream of approval webhooks at up to 120 events/minute When events are valid and downstream data stores are healthy Then p95 time from webhook receipt to API reflection of updated earned_value is <= 3 seconds and p99 <= 10 seconds And GET /v1/contracts/{id}/earned-value and GET /v1/line-items/{id} return updated values within those thresholds When downstream store outages occur Then events are queued with at-least-once delivery and retried with exponential backoff for up to 15 minutes without data loss And once stores recover, backlog is drained and ordering by approval_timestamp is preserved for calculations
Partial Approvals and Percent-Complete Rollups
Given a line item mapped 60% to a merge gate and 40% to a final stamp When the merge gate reports a partial approval of 50% Then the engine credits 30% (50% of 60%) of the line item amount as earned_value And subsequent partials for the same event cannot cause the cumulative credited amount for that event to exceed its 60% allocation And across all events, cumulative earned_value for the line item cannot exceed 100% of the amount And percent_complete reported by the API equals cumulative_earned/amount rounded to two decimals
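The partial-approval arithmetic in this criterion (a 60% allocation with a 50% partial credits 30% of the amount, capped at the allocation across cumulative partials) can be expressed directly. This is a sketch; function and parameter names are illustrative.

```python
def credit_partial(line_amount, allocation_pct, credited_pct, partial_pct):
    """Credit partial_pct of an event's allocation, never letting cumulative
    credits for that event exceed its allocation."""
    target_pct = allocation_pct * partial_pct / 100        # e.g. 60% * 50% -> 30%
    allowed_pct = max(min(target_pct, allocation_pct - credited_pct), 0)
    return line_amount * allowed_pct / 100
```

The same capping logic, applied per event, also enforces the line-item-wide rule that cumulative earned value cannot exceed 100% of the amount.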
Validation Against Unmapped or Over-Allocated Amounts
Given a create or update mapping request for a line item When the sum of allocations is less than 100% or greater than 100% Then the request is rejected with HTTP 422 and error code SUM_NOT_100 and no changes are persisted When an approval webhook references a line item/event that has no mapping Then the event is rejected with HTTP 409, error code UNMAPPED_EVENT, and no earned_value change is applied When an approval requires more than the remaining uncredited portion of its allocated percentage Then the engine caps the credit at the remaining allocation and records an OVER_ALLOCATION warning on the ledger without failing the request
Downstream API for Earned Value, Taxes, Retainage, and Multi‑Currency
Given a contract with multiple line items, taxes, retainage, and mixed currencies When a client calls GET /v1/contracts/{id}/earned-value?as_of=ISO8601 Then the response is 200 and returns, per line item: currency, original_amount, earned_amount, retained_amount, released_retainage_amount, tax_amount, net_billable_amount, percent_complete, allocation_breakdown[], approval_hashes[], trace_link_ids[] And currency conversion uses the FX rate captured at each approval_timestamp with source and rate fields included, and rounding follows ISO 4217 minor units And retainage is withheld at the configured rate until a final stamp event releases it, after which released_retainage_amount increases and retained_amount decreases accordingly And tax calculation applies the configured tax rules to earned_amount minus retained_amount and includes jurisdiction and rule_id in the response And the API is versioned (v1) and returns a stable schema matching the published OpenAPI document; unknown query params are ignored with 200
Phased Billing & Retainage Rules
"As a project lead, I want to configure phases and retainage so that billing aligns with contractual terms without manual spreadsheets."
Description

Provides configurable phase schedules, milestone percentages, and retainage parameters at contract or line-item level. Allows retainage accrual and conditional release (e.g., upon final stamp or punchlist completion), with automatic withholding and later issuance of release invoices. Supports per-phase payment terms, caps, minimums, and carry-forward of unearned amounts. Integrates with the mapping engine to update earned vs. retained values instantly upon approvals and exposes clear phase state in the UI and exports.

Acceptance Criteria
Configure Phase Schedules at Contract and Line-Item Levels
Given a contract with defined phases and line items When a user sets phase start/end dates, milestone percentages per phase and per line item, and retainage % at contract or line-item level Then validation ensures milestone percentages per scope sum to 100% ±0.01 And line-item retainage % overrides contract-level retainage % when both are set And invalid inputs (negative values, totals >100%, overlapping dates) are rejected with actionable errors And saving succeeds, persisting a versioned configuration with audit (user, timestamp, before/after), returning a 201 Created with config ID
Automatic Retainage Accrual on Approval Events
Given a line item with retainage % R and a milestone mapped to an approval event And an approval is posted with approval_hash and trace_link_id for amount A When the system processes the approval Then earned_gross = A And retained_amount = A × R rounded to currency precision And billable_now = A − retained_amount And an invoice draft for billable_now is created within 5 seconds referencing approval_hash and trace_link_id And a retained ledger entry is recorded and visible in phase and contract totals and via API
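The accrual arithmetic above can be sketched as follows (a minimal illustration; the function and field names are not PlanPulse's actual API):

```python
from decimal import Decimal, ROUND_HALF_UP

def accrue_retainage(amount: Decimal, retainage_rate: Decimal, minor_units: int = 2):
    """Split an approved amount A into retained and billable parts:
    retained = A * R rounded to currency precision; billable_now = A - retained,
    so the two parts always sum back exactly to A."""
    quantum = Decimal(1).scaleb(-minor_units)  # e.g. 0.01 for a 2-minor-unit currency
    retained = (amount * retainage_rate).quantize(quantum, rounding=ROUND_HALF_UP)
    return {
        "earned_gross": amount,
        "retained_amount": retained,
        "billable_now": amount - retained,
    }
```

Computing `billable_now` as the difference, rather than rounding it independently, guarantees the invoice draft and the retained ledger entry always reconcile to the approved amount.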
Conditional Retainage Release on Final Stamp or Punchlist Completion
Given retainage accrued for a phase or line item with release condition set to Final Stamp or Punchlist Complete And no open blockers exist for the selected release condition When the qualifying approval event occurs Then a retainage release invoice draft is generated for retained_to_date minus prior releases, rounded to currency precision And release entries reference the triggering approval_hash and trace_link_id And the phase/line-item status transitions to Retainage Released And duplicate or repeated triggers do not create duplicate releases (idempotent processing)
Per-Phase Payment Terms, Caps, Minimums, and Carry-Forward
Given a phase with payment terms T, cap C, minimum threshold Min, and carry-forward enabled When earned billable for the phase exceeds C Then the invoiceable amount is limited to C and the excess is carried forward as earned_unbilled to the next phase And when earned billable is below Min at billing time Then invoicing is deferred and the amount is marked pending_minimum in UI and API And generated invoices apply T to due_date calculations And all deferrals and carry-forward adjustments are logged in the audit trail
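The cap, minimum, and carry-forward rules above can be sketched as a single billing-time decision (names are illustrative, not the product's API):

```python
from decimal import Decimal

def phase_invoiceable(earned_billable: Decimal, cap: Decimal, minimum: Decimal) -> dict:
    """Apply a phase cap and minimum threshold at billing time:
    - above the cap: invoice the cap and carry the excess forward as earned_unbilled;
    - below the minimum: defer invoicing and mark the amount pending_minimum."""
    if earned_billable > cap:
        return {"invoice": cap,
                "carry_forward": earned_billable - cap,
                "status": "invoiced"}
    if earned_billable < minimum:
        return {"invoice": Decimal("0"),
                "carry_forward": Decimal("0"),
                "status": "pending_minimum"}
    return {"invoice": earned_billable,
            "carry_forward": Decimal("0"),
            "status": "invoiced"}
```

In the real system each branch would also emit the audit-trail entry the criterion requires; the sketch only shows the amount logic.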
Real-Time Earned vs. Retained Updates via Mapping Engine
Given line items are mapped to approval events When an approval is recorded Then phase and line-item cards in the UI update earned and retained figures within 2 seconds And contract/phase/line-item totals remain consistent with a max rounding variance of 0.01 in display currency And the public API reflects updated amounts (ETag/version incremented) on subsequent reads And no background job failures are logged for the update
Phase State Visibility in UI and Exports
Given a contract with multiple phases in varying states When the user opens Billing > Phases and exports data Then each phase shows a status in {Not Started, In Progress, Earned, Retained, Retainage Released, Closed} and displays earned_amount, retained_amount, released_amount And the export (CSV and JSON) includes phase_id, phase_name, status, earned_amount, retained_amount, released_amount, currency, approval_hashes[], trace_link_ids[] And exported totals match on-screen totals within 0.01 And the export can be re-imported without data loss for these fields
Mid-Stream Rule Changes and Retroactive Recalculation
Given approvals and invoices already exist for a contract When a user updates retainage % or phase schedule and chooses Apply From Next Approval or Retroactive Recalc Then for Apply From Next Approval, existing earned/retained amounts remain unchanged and new approvals use the updated rules And for Retroactive Recalc, the system recomputes earned vs retained and generates adjustment entries (invoice, credit memo) for deltas without reducing any posted invoice below zero And all recalculations are versioned with audit trail linking prior and new values And affected UI totals and exports reflect the adjustments within 2 seconds
Tax & Multi-Currency Compliance
"As a finance manager, I want tax rules and multi-currency support so that invoices are compliant and accurate across regions."
Description

Adds jurisdiction-aware tax handling (VAT/GST/sales tax), inclusive/exclusive pricing, reverse-charge flags, and item-level taxability. Supports multi-currency contracts and invoicing with FX rate sourcing, timestamped rate locks at approval time, and transparent rounding rules per currency. Maintains a base ledger currency for reporting while showing client-facing currency on invoices. Automatically applies tax rules to earned amounts as approvals occur and includes tax summaries in invoice artifacts and exports.

Acceptance Criteria
Jurisdiction-Aware Tax Determination and Item-Level Taxability
Given a contract with a defined project jurisdiction, client tax profile, and item-level taxability flags When an invoice is compiled from approved earned amounts Then the system selects the correct tax regime (VAT, GST, or Sales Tax) for that jurisdiction And applies the effective rate(s) and rules as of the approval timestamp And honors item-level taxability (taxable, reduced rate, exempt, out of scope) And stores applied tax regime, rate IDs, and rule version in invoice metadata And tax totals reconcile to the sum of line-level calculations
Reverse Charge Handling for VAT
Given an EU VAT jurisdiction with supplier and customer both taxable persons and the customer providing a valid VAT ID in another EU country When generating the invoice Then VAT is not charged on taxable items (zero VAT billed) And the invoice includes the required reverse-charge legend and the customer's VAT ID And the tax summary shows zero VAT collected with reason "Reverse charge" And the ledger records no VAT payable for the supplier
Inclusive vs Exclusive Pricing Calculations
Given a line item marked tax-inclusive with price P in the client currency and applicable tax rate r When calculating taxes Then tax amount equals P - P/(1+r) rounded per currency rules And net equals P minus tax And document totals equal the original P after rounding Given a line item marked tax-exclusive with net N and rate r When calculating taxes Then tax amount equals N*r rounded per currency rules And gross equals N plus tax And invoice totals equal the sum of line gross values
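Both formulas above translate directly into code (a sketch assuming a 2-minor-unit currency and half-up rounding; the configured rounding mode may differ):

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")  # assumes a currency with 2 minor units

def tax_from_inclusive(price: Decimal, rate: Decimal) -> dict:
    """Tax-inclusive: tax = P - P/(1+r); net = P - tax; the gross stays P."""
    tax = (price - price / (Decimal(1) + rate)).quantize(CENT, ROUND_HALF_UP)
    return {"net": price - tax, "tax": tax, "gross": price}

def tax_from_exclusive(net: Decimal, rate: Decimal) -> dict:
    """Tax-exclusive: tax = N * r; gross = N + tax."""
    tax = (net * rate).quantize(CENT, ROUND_HALF_UP)
    return {"net": net, "tax": tax, "gross": net + tax}
```

Deriving `net` from the rounded tax (inclusive case) keeps the document total equal to the original P after rounding, as the criterion requires.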
FX Rate Sourcing and Approval-Time Rate Lock
Given contract currency, client invoice currency, base ledger currency, and a configured FX source When an approval event occurs and an approval hash is created Then the system fetches FX rates as of the approval timestamp and locks them (contract→base, contract→client) And stores rate, source, timestamp, and links them to the approval hash and TraceLink ID And all ledger and invoice conversions for that approval use the locked rates And any re-generation or partial invoicing reuses the locked rates for those approvals
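The rate-lock idea can be sketched as an immutable record captured at approval time (field names and the provider name are illustrative assumptions):

```python
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)  # frozen: a locked rate can never be mutated after approval
class LockedFxRate:
    pair: str                 # e.g. "EUR/USD" (contract -> base)
    rate: Decimal
    source: str               # rate provider name (illustrative)
    approval_timestamp: str   # ISO 8601 timestamp of the approval event
    approval_hash: str        # links the lock to the approval and TraceLink ID

def convert(amount: Decimal, lock: LockedFxRate) -> Decimal:
    """Every conversion for an approval reuses the rate locked at approval time,
    so re-generation or partial invoicing can never drift with the market."""
    return (amount * lock.rate).quantize(Decimal("0.01"))
```

Making the record frozen mirrors the requirement that regenerated or partial invoices reuse exactly the locked rate rather than re-fetching it.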
Automatic Tax Application on Approval with Phased Billing and Retainage
Given a line item with phased billing percentages and optional retainage configuration When a rung approval earns X% of the line item Then tax is computed only on the earned portion according to item taxability and reverse-charge rules And retainage is taxed on release or at earning per contract configuration And the invoice generated from that approval includes correct tax for the earned portion and correct treatment for retainage And audit metadata links tax calculations to the approval hash and tax rule version
Transparent Currency Rounding Rules
Given computed line and total amounts in one or more currencies When rounding is applied Then each currency's minor unit precision is honored per ISO 4217 And the configured rounding mode is applied consistently across lines, taxes, and totals And any per-document rounding delta does not exceed one minor unit and is posted as a separate rounding adjustment line And the audit log records original amount, rounded amount, delta, currency, and rounding mode
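A minimal sketch of posting the rounding delta as its own adjustment line (half-up rounding assumed; the real engine applies whatever mode is configured):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_document(line_amounts, minor_units=2):
    """Round every line to the currency's minor unit with one consistent mode,
    then post any residual between the once-rounded exact total and the sum of
    rounded lines as a separate rounding-adjustment line for the audit log."""
    q = Decimal(1).scaleb(-minor_units)
    rounded = [x.quantize(q, ROUND_HALF_UP) for x in line_amounts]
    exact_total = sum(line_amounts).quantize(q, ROUND_HALF_UP)
    adjustment = exact_total - sum(rounded)
    return {"lines": rounded, "rounding_adjustment": adjustment, "total": exact_total}
```

The explicit adjustment line keeps line-level and document-level figures reconcilable, which is what lets the audit log record original amount, rounded amount, and delta per the criterion.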
Tax Summary and Dual-Currency Presentation in Artifacts and Exports
Given an invoice with multiple tax rates and jurisdictions and dual currencies (client and base) When generating the invoice artifact and data exports Then the invoice displays a tax summary per rate and jurisdiction showing net, tax, and gross And the export includes tax code, rate ID, rule version, jurisdiction, and amounts in client and base currencies And totals per currency reconcile to the sum of lines and tax summary And the invoice displays currency codes/symbols and the FX rate(s) and timestamp used
Auto-Invoice Generation & Dispatch
"As a billing admin, I want invoices to generate at approval so that cash flow accelerates and manual assembly is eliminated."
Description

Automatically generates draft or final invoices the moment mapped approval events are recorded, using the approval hash for idempotency and duplicate prevention. Supports customizable invoice templates, numbering sequences, and payment terms. Enables auto-send to clients with configurable review gates, and exports/syncs to accounting systems (e.g., QuickBooks/Xero) via connectors or standard formats (PDF, CSV, UBL). Bundles multiple approvals into a single invoice per rules (by client, project, phase) and prevents invoicing unearned items.

Acceptance Criteria
Idempotent Invoice Generation on Approval Event
Given a mapped approval event is recorded with approvalHash=H and traceLinkId=T And no prior invoice exists for H When invoice generation is triggered by the event Then exactly one invoice record is created within 5 seconds And the invoice stores approvalHash=H and traceLinkId=T And the invoice state is Draft or Final according to org configuration Given the same event H is re-delivered or retried any number of times When invoice generation runs Then no additional invoice is created And the existing invoice is returned in the audit trail and API response Given concurrent deliveries of H When processed Then the outcome is a single invoice with no duplicates
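The idempotency guarantee above boils down to keying creation on the approval hash and serializing concurrent deliveries. A toy in-memory sketch (a real system would use a database unique constraint on approvalHash instead of a process-local lock):

```python
import threading

class InvoiceGenerator:
    """Idempotent generation keyed on approvalHash: re-deliveries and
    concurrent deliveries of the same event return the one existing invoice."""
    def __init__(self):
        self._lock = threading.Lock()
        self._by_hash = {}   # approvalHash -> invoice record
        self._next_id = 1

    def on_approval(self, approval_hash: str, trace_link_id: str, amount):
        with self._lock:  # serialize concurrent deliveries of the same event
            if approval_hash in self._by_hash:
                return self._by_hash[approval_hash]  # no duplicate created
            invoice = {"id": self._next_id, "approvalHash": approval_hash,
                       "traceLinkId": trace_link_id, "amount": amount,
                       "state": "Draft"}
            self._next_id += 1
            self._by_hash[approval_hash] = invoice
            return invoice
```

Returning the existing record (rather than raising) matches the criterion that retries surface the same invoice in the API response and audit trail.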
Auto-Send with Configurable Review Gates
Given auto-send is enabled and reviewGate="Internal Review" is ON When an invoice is generated Then it is created in Draft state and routed to the Internal Review worklist And notifications are sent to reviewers within 1 minute And the invoice is not sent to the client Given reviewers approve the draft When the approval action is recorded Then the invoice is finalized and dispatched to the client via the configured channel(s) And a delivery receipt is captured Given reviewers reject the draft When rejection is recorded Then the invoice remains unsent and returns to Draft with reviewer comments Given auto-send is enabled and reviewGate is OFF When an invoice is generated Then it is finalized and dispatched immediately
Customizable Invoice Templates and Fields
Given invoice template "Standard A" is active for the project When an invoice is generated Then the client-facing PDF and web view use template "Standard A" And required fields are populated: invoiceNumber (if finalized), client, project, phase, line items, subtotals, taxes (if applicable), total, payment terms, due date, currency And no unresolved placeholders remain And the template selection is logged on the invoice record
Invoice Numbering Sequence Management
Given numbering pattern "INV-{YYYY}-{####}" is configured with scope=Project When an invoice is finalized Then the next sequential number is assigned atomically And numbers are unique within the scope and year And drafts have no final number until finalized And year rollover resets the counter to 0001 And aborted or failed finalizations do not consume numbers
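The scoped, year-resetting sequence above can be sketched like this (in production the counter would live behind a database transaction so failed finalizations never consume a number):

```python
class InvoiceNumberer:
    """Per-scope, per-year sequence matching the pattern INV-{YYYY}-{####},
    resetting to 0001 on year rollover."""
    def __init__(self):
        self._counters = {}  # (scope, year) -> last number issued

    def next_number(self, scope: str, year: int) -> str:
        key = (scope, year)
        self._counters[key] = self._counters.get(key, 0) + 1
        return f"INV-{year}-{self._counters[key]:04d}"
```

Keying the counter on (scope, year) is what makes numbers unique within the scope and year while letting each new year start again at 0001.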
Rule-Based Bundling of Multiple Approvals
Given bundling rule "client+project+phase, weekly window (Mon–Sun)" is active And three eligible approvals occur within the same window for the same tuple When billing runs Then exactly one invoice is produced containing all eligible line items And line items reference their approvalHash values And totals and subtotals reflect the sum of bundled items Given approvals differ by client, project, phase, currency, or payment terms When billing runs Then separate invoices are created per distinct grouping And no invoice mixes different clients or currencies
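The grouping rule reduces to bucketing approvals by the full distinguishing tuple, so no invoice can mix clients, currencies, or terms (a sketch with illustrative record shapes):

```python
from collections import defaultdict

def bundle_approvals(approvals):
    """Produce one invoice per distinct
    (client, project, phase, currency, payment_terms) tuple,
    carrying each line item's approvalHash for traceability."""
    groups = defaultdict(list)
    for a in approvals:
        key = (a["client"], a["project"], a["phase"],
               a["currency"], a["payment_terms"])
        groups[key].append(a)
    return [{"group": key,
             "line_items": [a["approvalHash"] for a in items],
             "total": sum(a["amount"] for a in items)}
            for key, items in groups.items()]
```

The weekly-window filter from the rule would run before this step; bundling itself is purely the tuple grouping shown.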
Guardrails Against Unearned or Duplicate Billing
Given a contract line item mapped to approval events with cumulative earned-to-date E and previously invoiced amount I When an invoice is generated Then the included amount for that line item equals max(E − I, 0), rounded to currency precision And if E − I <= 0 the item is excluded And attempts to invoice without a recorded approval are blocked with a validation error And any attempt to re-invoice an approvalHash already billed returns the existing invoice reference
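The guardrail formula is small enough to show directly (currency precision of two minor units assumed):

```python
from decimal import Decimal, ROUND_HALF_UP

def billable_delta(earned_to_date: Decimal, previously_invoiced: Decimal) -> Decimal:
    """Never bill ahead of earnings or below zero: max(E - I, 0),
    rounded to currency precision. A zero result means the line item
    is excluded from the invoice."""
    delta = max(earned_to_date - previously_invoiced, Decimal("0"))
    return delta.quantize(Decimal("0.01"), ROUND_HALF_UP)
```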
Accounting Sync and Standard Export
Given QuickBooks Online connector is authorized and mapped When an invoice is finalized Then it is pushed to QuickBooks within 2 minutes with an externalId equal to the approvalHash or bundled composite id And sync status is visible on the invoice Given any connector push fails When retries occur Then exponential backoff is applied up to 5 attempts and the last error is recorded And the invoice remains in PlanPulse without duplication Given no connector is configured When an invoice is finalized Then downloadable artifacts are generated: PDF, CSV, and UBL 2.1 And artifact contents match the invoice totals and line items
Dispute & Adjustment Workflow
"As a project lead, I want a way to handle disputes and change orders so that corrections can be made without losing audit integrity."
Description

Introduces a controlled process to pause, adjust, or reverse earned amounts when clients dispute scope or when change orders alter mappings. Supports issuing credit notes, partial reversals, and re-mapping with full traceability while preserving the original approval record. Provides role-based actions, notifications, and reconciliation views to align stakeholders and reissue corrected invoices without breaking the audit chain.

Acceptance Criteria
Pause Disputed Line Item and Freeze Billing
Given a mapped line item with an earned amount tied to an Approval Hash and TraceLink ID, When a user with Project Lead or Billing Admin role submits a dispute with a mandatory reason and selects the affected percentage or amount, Then the system records a Pause event with timestamp, actor, reason, impacted amount, Approval Hash, and TraceLink ID. And the billing status for the selected portion changes to "Paused" and is excluded from any new invoice generation runs. And unaffected line items and portions remain billable with no interruption. And the reconciliation view shows the paused amount and reason within 60 seconds of the action. And an in-app and email notification is sent to the Billing Admin and Client Contact within 60 seconds. And the system prevents pausing more than the currently uncredited earned amount.
Issue Partial Reversal via Credit Note
Given an invoice containing an earned amount previously approved and billed, When a Billing Admin issues a partial reversal for a selected portion, Then a Credit Note is generated in the original invoice currency and tax jurisdiction, referencing the original Invoice ID, Approval Hash, and TraceLink ID. And the reversal amount cannot exceed the net earned amount minus prior reversals; the UI blocks and explains any excess. And taxes, retainage, and rounding are recalculated according to original rules; any rounding difference is posted to the designated rounding account and is <= 0.01 in local currency. And Accounts Receivable and project ledger reflect balanced entries (debit/credit) for the credit note. And the client-facing PDF/HTML clearly labels the credit as linked to the original invoice with the dispute reason. And notifications are sent to Project Lead and Client Contact within 60 seconds.
Remap After Change Order Without Altering Approval Record
Given an approved Change Order that alters scope or percentages, When a user with Project Lead role initiates remapping, Then the original Approval Record remains read-only and intact, preserving its Approval Hash. And a new Mapping Version (n+1) is created with a unique TraceLink ID chaining to the prior version. And the system recalculates earned vs. unearned amounts under the new map, producing explicit delta entries for any over/under billings. And no line item is orphaned or double-counted; total contract value before tax remains invariant across versions. And impacted future invoice schedules are updated to reflect the new mapping. And the reconciliation view displays a side-by-side before/after map with user, timestamp, and reason captured.
Adjustment Audit Trail with Hash-Linked Events
Given any pause, reversal, remap, or credit note action, When the action is confirmed, Then an immutable audit event is written capturing actor, timestamp, action type, affected items, before/after amounts and mappings, justification, Approval Hash, and TraceLink IDs. And events are hash-chained so that tampering changes chain verification status to "INVALID". And an audit export (CSV/JSON/PDF) can be generated for a selected date range and project, producing totals that match the ledger exactly (0.00 tolerance). And audit events are filterable by action type, role, and line item within the UI and via API. And read-only Auditor role can access the full audit trail but cannot perform actions.
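Hash-chaining as described above means each event's hash covers the previous event's hash, so editing any entry invalidates everything after it. A minimal sketch (SHA-256 over canonical JSON; the real event payload would carry the fields listed in the criterion):

```python
import hashlib
import json

def append_event(chain, event: dict):
    """Append an audit event whose hash covers the previous event's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(event, sort_keys=True)  # canonical form for hashing
    h = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": h})
    return chain

def verify(chain) -> str:
    """Recompute every link; any tampered entry yields 'INVALID'."""
    prev = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return "INVALID"
        prev = entry["hash"]
    return "VALID"
```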
Role-Based Controls and Threshold Approvals
Given role assignments for Project Lead, Billing Admin, Client Reviewer, and Auditor, When users attempt actions, Then permissions are enforced: Project Lead can pause/dispute and propose remaps; Billing Admin can issue credit notes and finalize remaps; Client Reviewer can acknowledge or contest; Auditor is view-only. And any reversal or credit exceeding a configurable threshold (e.g., 10% of invoice or $5,000 equivalent) requires dual approval (Project Lead + Billing Admin) before posting. And users without required roles see disabled controls with explanatory tooltips, and all denied attempts are logged as audit events. And 2FA is required for posting credit notes in production environments. And permission changes take effect immediately and are logged with actor and rationale.
Notifications and Reconciliation View for Disputes
Given a dispute, reversal, or remap event, When the event is saved, Then notifications are dispatched to the configured stakeholders (Project Lead, Billing Admin, Client Contact) via in-app and email within 60 seconds. And the reconciliation view displays: open disputes, paused amounts, pending credits, posted credits, net AR impact, and deltas by currency and tax jurisdiction. And totals in the reconciliation view match the accounting ledger and invoice register for the selected project and date range with 0.00 variance. And filters by project, milestone, currency, tax region, and status return results within 2 seconds for datasets up to 10,000 items. And exporting the reconciliation view preserves the same totals and references Approval Hashes and TraceLink IDs.
Reissue Corrected Invoice and Preserve Audit Chain
Given a dispute is resolved and necessary adjustments are posted, When the Billing Admin reissues the invoice, Then the system generates a replacement invoice with a new invoice number that references the superseded invoice ID, Approval Hashes, and all related Credit Note IDs. And the superseded invoice is marked "Voided/Superseded" and excluded from aging, while payments and credits are correctly carried forward. And taxes, retainage, and currency conversions are recalculated per original tax rules and current FX rate policy; differences are itemized. And the replacement invoice total equals prior invoice total minus credits plus any approved deltas; no double-billing occurs. And clients receive the corrected invoice via configured channels within 5 minutes, and the audit trail links all related documents.
Traceability & Audit Ledger
"As an approver, I want an auditable trace from invoice to approval so that there are no disputes over what was earned."
Description

Maintains an immutable audit ledger that links each invoiced amount to its approval hash, TraceLink ID, approver identity, timestamp, and drawing/version context. Captures all configuration changes (mappings, phases, tax settings), invoice events, and adjustments with before/after snapshots. Provides exportable audit packages for clients and auditors and supports tamper-evident signatures to eliminate ambiguity in what was earned and when.

Acceptance Criteria
Visual Milestone Mapping UI
"As an architect, I want a visual mapper for milestones so that I can configure billing quickly and confidently."
Description

Delivers a drag-and-drop interface to connect contract line items to approval rungs, merge gates, and final stamps within the PlanPulse workspace. Displays live status, earned percentages, retainage withheld/released, tax previews, and currency breakdowns. Includes guided setup, validation warnings for unmapped/over-allocated items, and tooltips that reveal approval hashes and TraceLink details. Optimized for quick configuration and low error rates, with accessibility and responsive design standards.

Acceptance Criteria

Instant Capture

On one‑click client sign‑off, automatically create and capture a Stripe PaymentIntent for the mapped amount. Supports cards/ACH, SCA, partial captures, deposits, and smart retries. If no method is on file, it issues a secure pay link. Receipts embed a Verify QR code and ledger details, while refunds/voids follow the Revocation Trail, accelerating cash flow while keeping the audit trail clean.

Requirements

One-click PaymentIntent Orchestration
"As a project lead, I want payments to be automatically created and captured at client sign-off so that cash flow is immediate and I don’t have to issue manual invoices."
Description

On client sign-off, automatically create, confirm, and capture a Stripe PaymentIntent for the mapped amount, supporting cards and ACH, SCA challenges, idempotency keys, and metadata linking to the PlanPulse project, approval record, and drawing version. Persist PaymentIntent IDs and status in PlanPulse, update payment state via Stripe webhooks, and gate project transitions based on successful capture when configured. Ensure multi-currency support, safe retries for transient errors, and consistent error surfaces in the sign-off UI without duplicating charges.
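A sketch of assembling the PaymentIntent parameters at sign-off. The field names (`amount`, `currency`, `payment_method_types`, `metadata`) follow Stripe's API; the approval-record shape and the idempotency-key format are illustrative assumptions, and the sketch stops short of the network call:

```python
def build_payment_intent_params(approval: dict):
    """Build the parameters and idempotency key for a Stripe PaymentIntent.
    Stripe expects amounts in the currency's smallest unit; deriving the
    idempotency key from the approval means retries of the same sign-off
    can never create a second charge."""
    params = {
        "amount": approval["amount_minor_units"],   # e.g. 125000 == $1,250.00
        "currency": approval["currency"],
        "payment_method_types": ["card", "us_bank_account"],  # cards + ACH
        "metadata": {                               # links back to PlanPulse
            "project_id": approval["project_id"],
            "approval_id": approval["approval_id"],
            "drawing_version": approval["drawing_version"],
        },
    }
    idempotency_key = f"signoff-{approval['approval_id']}"  # illustrative format
    return params, idempotency_key
```

With the real library these would be passed to `stripe.PaymentIntent.create(**params, idempotency_key=idempotency_key)`, and webhook events would then drive the state updates described above.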

Acceptance Criteria
On-file Method & Secure Pay Link
"As a client, I want a secure pay link when no method is saved so that I can complete approval safely without sharing payment details over email."
Description

If a default payment method is on file, use it for immediate confirmation; otherwise issue a secure Stripe-hosted pay link or Checkout Session tied to the same approval, with configurable expiry, one-time use, and SCA-ready flows. Track link lifecycle and attach results to the approval record. Provide client notifications and an in-app banner with link status, while ensuring no sensitive payment data is stored in PlanPulse.

Acceptance Criteria
Amount Mapping & Deposits Engine
"As a project lead, I want deposits and partial captures to map from the approved scope so that billing stays aligned with the contract and milestones."
Description

Map capture amounts directly from approved scope: support deposits (percentage or fixed), retainers, partial capture schedules by milestone, and final balances. Include taxes, discounts, currency rounding, and fee presentation rules. Allow per-project configuration with role-based overrides and validation preventing capture above the approved total. Store the capture schedule and remaining balance within the project ledger to keep billing aligned with the approval.

Acceptance Criteria
Smart Retries & Dunning
"As a project lead, I want failed payments to retry automatically and notify the client so that I spend less time chasing collections and keep projects moving."
Description

Implement intelligent retry logic for failed confirmations/captures based on Stripe error categories (insufficient funds, network issues, SCA required), with backoff schedules, ACH settlement awareness, and webhook-driven state transitions. Trigger client dunning emails/in-app notifications with pay-link regeneration when needed, and provide a dashboard for staff to view failure reasons and manually trigger safe retries without risking duplicate charges.

Acceptance Criteria
Receipt with Verify QR & Ledger Embed
"As a client, I want a verifiable receipt with a QR code so that my finance team can confirm payment details and reconcile quickly."
Description

Generate immutable receipts upon successful capture containing itemization, taxes/fees, PaymentIntent ID, approval reference, and a Verify QR code pointing to a PlanPulse-hosted verification page. Embed ledger entry references and store the receipt under the project timeline. Email receipts to client and internal recipients, support branded templates, and allow one-click retrieval from the approval record and audit trail.

Acceptance Criteria
Refunds & Voids via Revocation Trail
"As an operations admin, I want refunds and voids to follow the revocation trail so that the audit remains complete and clients receive timely, consistent updates."
Description

When an approval is revoked or edited under authorized roles, orchestrate voids for uncaptured intents and full/partial refunds for captured payments. Capture reason codes, timestamps, actor identity, and affected line items in the Revocation Trail, synchronize with Stripe, update ledger entries, and notify stakeholders. Respect ACH refund windows and present status progression (submitted, pending, succeeded/failed) in the project timeline.

Acceptance Criteria
PCI-Safe Processing & SCA Compliance
"As a security officer, I want compliant, hardened payment processing so that we reduce risk and pass audits without slowing the team."
Description

Ensure PCI compliance by never storing raw PAN data, using Stripe Elements/Hosted flows, encrypting all secrets, and verifying Stripe webhook signatures. Enforce role-based access controls for payment actions, PII minimization, configurable data retention, environment isolation (test vs prod), comprehensive audit logging, and rate limiting. Support SCA flows (3DS, mandate management) for cards and ACH with clear UX prompts and fallback paths.

Acceptance Criteria

ScopeShift Billing

Detects approved changes outside base scope using Change Atlas, Impact Meter, and Zone Watch, then proposes micro‑milestones priced by rules (unit, percent, or flat) with optional override. Bundles them into the next invoice with a clear change narrative for one‑click client acceptance, updating contract value automatically so extras aren't lost or disputed.

Requirements

Change Scope Detection
"As a project lead, I want out-of-scope changes automatically identified from approved markups and discussions so that I can capture billable extras without manual tracking."
Description

Continuously compares approved markups and conversation decisions against the base scope using Change Atlas mappings and Zone Watch boundaries to automatically flag items that fall outside the contracted scope. Produces structured change records with references to drawings, zones, and decision threads, deduplicates repeated detections across versions, and queues uncertain cases for manual review. Triggers downstream pricing and proposal workflows only after an approval signal is captured in PlanPulse, ensuring signal integrity and minimizing false positives.

Acceptance Criteria
Impact Meter Computation
"As a project lead, I want the impact of each change quantified so that I can price it consistently and justify costs to clients."
Description

Calculates cost, effort, and schedule impact for each detected change by analyzing affected zones, quantities, disciplines, and phase timing. Outputs a normalized impact score and baseline effort/cost ranges that feed the pricing rules engine. Supports calibration per project template, exposes key drivers for transparency, and caches computations to keep the workspace responsive while maintaining accuracy across revisions.

Acceptance Criteria
Pricing Rules Engine with Override
"As a project lead, I want pricing rules that automatically calculate fees with an option to override so that I can balance standardization with professional judgment."
Description

Applies contract-aware pricing rules (unit-based, percentage of base, or flat fee) to each change using criteria such as change type, client profile, and contract addenda. Supports tiers, minimums/maximums, taxes, and currency handling, with deterministic rounding. Allows authorized users to override calculated prices with justification notes and tracks rule versioning for auditability. Emits a priced line item ready for micro‑milestone packaging.
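The three rule kinds with min/max clamps and deterministic rounding can be sketched as one dispatch function (rule and change shapes are illustrative):

```python
from decimal import Decimal, ROUND_HALF_UP

def price_change(rule: dict, change: dict) -> Decimal:
    """Apply one contract pricing rule to a detected change.
    rule["kind"] is "unit", "percent", or "flat"; optional "min"/"max"
    clamp the result before deterministic half-up rounding."""
    kind = rule["kind"]
    if kind == "unit":
        price = Decimal(change["quantity"]) * rule["unit_price"]
    elif kind == "percent":
        price = change["base_amount"] * rule["percent"] / Decimal(100)
    elif kind == "flat":
        price = rule["flat_fee"]
    else:
        raise ValueError(f"unknown rule kind: {kind}")
    if "min" in rule:
        price = max(price, rule["min"])
    if "max" in rule:
        price = min(price, rule["max"])
    return price.quantize(Decimal("0.01"), ROUND_HALF_UP)
```

An authorized override would replace the computed price downstream while recording the justification note and rule version, as the description requires.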

Acceptance Criteria
Micro‑Milestone Proposal & Grouping
"As a project lead, I want changes grouped into clear micro‑milestones so that clients understand deliverables and approvals fit our billing cadence."
Description

Transforms priced changes into micro‑milestones with clear deliverables, acceptance criteria, and target dates. Groups related items by zone, discipline, or billing cycle to reduce invoice clutter while preserving traceability back to individual changes. Supports partial acceptance, dependency ordering, and final review/edit by the project lead prior to client publishing.

Acceptance Criteria
Invoice Bundling & One‑Click Acceptance
"As a client, I want to accept billed changes in one click within the invoice so that approvals are fast and unambiguous."
Description

Bundles proposed micro‑milestones into the next invoice draft, generating a client-facing section with a concise change summary and a one‑click acceptance action. Syncs acceptance status back to the PlanPulse workspace, updates invoice totals in real time, and handles retries for failed submissions. Respects existing invoice numbering, tax settings, and payment terms, and provides a sandbox preview before publishing to clients.

Acceptance Criteria
Contract Value Auto‑Update & Audit Trail
"As a firm owner, I want contract values and financial records to update automatically on acceptance so that revenue is accurate and auditable."
Description

On client acceptance, automatically updates the project’s contract value, budget allocations, and schedule of values to reflect the approved changes. Records immutable audit entries including timestamps, approver identity, pricing rule versions, and override justifications. Supports controlled rollback for rescinded approvals via counter‑entries to maintain financial integrity across reports and exports.

Acceptance Criteria
Client Change Narrative Generator
"As a client, I want a clear narrative of each change with visuals so that I can see exactly what I’m approving and why it costs extra."
Description

Generates a clear, plain‑language narrative for each change that explains what changed, why it’s out of scope, where it occurs (zone references), and its impact on cost/schedule. Embeds before/after visual snippets and links back to source markups and conversations. Supports localization, accessibility standards, and consistent formatting across invoices and proposal previews.

Acceptance Criteria

Retainage Release

Set retainage rates per project/phase and let PlanPulse auto‑withhold on each invoice. Release holdbacks automatically when defined approvals, permit milestones, or punch‑list sign‑offs are met. Generate a notarized release certificate, support partial releases by sheet/zone, and keep everyone aligned on when remaining funds are due.

Requirements

Retainage Settings per Project and Phase
"As a project lead, I want to set retainage rates per project and phase with effective dates so that invoices withhold the correct amounts automatically."
Description

Provide a configurable retainage module that allows administrators and project leads to define default retainage rates at the organization level and override them per project and phase. Support effective dates, caps, rounding rules, inclusion/exclusion of taxes, multi-currency handling, and item-level exemptions (e.g., reimbursables). Enforce validation against contract values and phase budgets, require role-based permissions for edits, and surface changes in a traceable history. Integrate settings with invoice generation, project templates, and API endpoints for import/export, ensuring that any change immediately reflects in upcoming invoices while preserving historical calculations.

Acceptance Criteria
Invoice Auto-Withhold Engine
"As a billing coordinator, I want invoices to calculate and display retainage per line item so that billing is accurate and transparent to clients."
Description

Implement an invoice calculation engine that automatically computes and displays retainage per line item at invoice creation and update. Support exclusions, prior-period adjustments, change orders, credit memos, and progressive billing. Present clear subtotals for current billings, retainage withheld, retainage released, and outstanding retainage. Ensure recalculation on edits, preview before posting, and alignment with AIA-style pay applications where applicable. Write accounting events to the ledger, expose results via API and exports, and provide transparent client-facing invoice PDFs with line-level retainage visibility.
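The per-line calculation with the subtotals named above might look like the following sketch. Exempt lines (e.g. reimbursables) withhold nothing; amounts use `Decimal` with half-up rounding per line. The function name and tuple layout are illustrative assumptions.

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")

def compute_invoice(lines, rate, released=Decimal("0")):
    """lines: [(description, amount, exempt)]; rate e.g. Decimal('0.10')."""
    withheld = Decimal("0")
    billed = Decimal("0")
    detail = []
    for desc, amount, exempt in lines:
        # Exempt items (reimbursables, etc.) carry no retainage.
        hold = Decimal("0") if exempt else (amount * rate).quantize(CENT, ROUND_HALF_UP)
        detail.append((desc, amount, hold))
        billed += amount
        withheld += hold
    return {
        "lines": detail,
        "current_billings": billed,
        "retainage_withheld": withheld,
        "retainage_released": released,
        "amount_due": billed - withheld + released,
    }
```

Recalculation on edit is then just re-running the function over the updated line set, which keeps the preview, the posted invoice, and the ledger event derived from one source of truth.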

Acceptance Criteria
Milestone-Driven Release Orchestration
"As a project manager, I want retainage to auto-release when approvals or permit milestones are met so that cash flow isn’t blocked by manual steps."
Description

Enable configurable release rules that automatically trigger retainage releases based on PlanPulse events such as client approval of drawings/markups, permit issuance updates, or punch-list sign-offs. Support partial and percentage-based releases, AND/OR logic across multiple conditions, minimum hold durations, and approval chains (internal and client). Generate a release record with audit details, prevent over-release beyond amounts withheld, and allow authorized manual overrides with required reason codes. Schedule releases immediately or at set dates, notify stakeholders, and reconcile the invoice ledger accordingly.
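The AND/OR condition logic can be modeled as a small nested structure evaluated against the set of PlanPulse events that have occurred. A minimal sketch, with the node shape (`{"all": [...]}` / `{"any": [...]}` / event-name leaf) as an assumed representation:

```python
def rule_met(rule, events):
    """rule: nested {'all': [...]} / {'any': [...]} nodes with event-name leaves;
    events: set of event names that have occurred."""
    if isinstance(rule, str):
        return rule in events          # leaf: has this event happened?
    if "all" in rule:
        return all(rule_met(r, events) for r in rule["all"])
    if "any" in rule:
        return any(rule_met(r, events) for r in rule["any"])
    raise ValueError("unknown rule node")
```

Minimum hold durations and approval chains would layer on top as additional leaf types; keeping the combinator structure declarative makes release rules storable, auditable, and testable without code changes.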

Acceptance Criteria
Partial Release by Sheet/Zone
"As an architect, I want to release retainage for completed sheets or zones while holding others so that we can bill progressively as areas are finished."
Description

Provide granular retainage release by associating drawings and zones to budget line items or quantities, enabling selection of completed sheets/zones for proportional release. Support multiple partial releases over time, guard against double-counting, and lock released elements while preserving their linkage to the underlying drawings and markups. Offer a clear UI to select items, preview financial impact, and include a detailed breakdown in client-facing documentation and certificates.
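Proportional release with a double-counting guard can be sketched as below: each zone's share of total value determines its share of the withheld amount, and zones already released are skipped. The function name and value-weighting scheme are illustrative assumptions.

```python
def release_for_zones(zone_values, withheld_total, selected, already_released):
    """Proportional partial release: each zone's share of total value
    times the withheld amount; previously released zones are skipped."""
    total_value = sum(zone_values.values())
    newly = [z for z in selected if z not in already_released]
    amount = sum(zone_values[z] for z in newly) / total_value * withheld_total
    return round(amount, 2), already_released | set(newly)
```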

Acceptance Criteria
Notarized Release Certificate Generation
"As a firm principal, I want a notarized release certificate generated at each release so that clients and lenders have defensible documentation."
Description

Generate a formal release certificate for each retainage release that includes project metadata, parties, amounts, line-level breakdowns, released sheets/zones, applicable jurisdictional language, and timestamps. Integrate with e-sign and e-notary providers to support notarization where required, capturing seals and certificates of completion. Store the signed PDF in the project’s document repository, embed a verification hash/QR code, maintain version history on re-issue, and support locale and template variations by jurisdiction.
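The verification hash embedded in the QR code could be a digest over a canonical serialization of the certificate's metadata, so that key ordering does not affect the result. A minimal sketch (field names are hypothetical):

```python
import hashlib
import json

def certificate_digest(cert: dict) -> str:
    """Canonical JSON (sorted keys, no whitespace) -> SHA-256 hex digest
    suitable for embedding in the PDF and its QR code."""
    canonical = json.dumps(cert, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

A verifier recomputes the digest from the certificate data and compares it to the embedded value; any re-issue produces a new digest, which is what makes version history on re-issue tamper-evident.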

Acceptance Criteria
Funds Reconciliation and Stakeholder Notifications
"As an accounts receivable specialist, I want remaining funds and due dates to update and notify stakeholders after each release so that payments are collected on time."
Description

Automatically update outstanding balances, payment schedules, and due dates after each retainage release, reflecting changes across project, phase, and invoice views. Send configurable in-app and email notifications to clients, project leads, and finance with release details, revised amounts due, and next steps. Provide a dashboard widget summarizing remaining retainage by project/phase and upcoming releases, and expose webhooks for downstream accounting integrations (e.g., QuickBooks, Xero) to sync balances.

Acceptance Criteria
End-to-End Audit Trail and Compliance Export
"As a compliance officer, I want a complete, immutable audit trail of retainage settings, calculations, approvals, and releases so that we can meet contractual and regulatory requirements."
Description

Maintain an immutable, time-stamped log of retainage settings, invoice calculations, approval events, release triggers, manual overrides, certificates, and notifications. Provide role-based, read-only access for auditors with filterable views and export to CSV/PDF packages that include supporting artifacts. Ensure traceability from each release back to source milestones and drawing revisions, and support compliance needs for contracts modeled on AIA pay applications or similar frameworks.
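One common way to make such a log tamper-evident is hash chaining: each entry's hash covers its own payload plus the previous entry's hash, so altering any historical record breaks verification of everything after it. A minimal sketch, assuming JSON-serializable event payloads:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log, event):
    """Append an event; its hash covers the payload and the previous hash."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": h})
    return log

def verify_chain(log):
    """Recompute every hash; any tampering breaks the chain from that point on."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```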

Acceptance Criteria

Client Checkout

A branded, mobile‑ready portal that pairs approval with a secure pay step. Clients see what was approved, the related invoice, taxes, and funding options (saved method, ACH, split cost centers). Smart Nudges follow up gently on unpaid items. A clear, one‑place experience drives on‑time payments and reduces back‑and‑forth.

Requirements

Unified Approval-to-Pay Flow
"As a client stakeholder, I want to review what I approved and pay in the same place so that I can complete the process without confusion or extra back-and-forth."
Description

Couples the client approval event to invoice creation within PlanPulse, presenting a single consolidated screen that summarizes approved drawings/revisions and the corresponding charges, taxes, and payment actions. The flow preserves an immutable link between the approved version and the invoice, writes a complete audit trail of who approved and who paid, and prevents payment against superseded versions. Supports partial approvals by segmenting line items, and regenerates invoices when scope changes, maintaining traceability across versions. Provides a fallback pay-later link while keeping approval status and payment state synchronized in real time.

Acceptance Criteria
Branded Mobile Checkout Portal
"As a client on my phone, I want a branded, easy checkout screen so that I can approve and pay quickly without needing a desktop."
Description

Delivers a white-labeled, mobile-first checkout portal that inherits firm branding (logo, colors, typography) and supports custom subdomains, ensuring a consistent client experience. The portal implements responsive layouts across devices, WCAG 2.1 AA accessibility, fast initial load under 2 seconds on 4G, and localized time, currency, and copy. Clients access via secure, expiring magic links or authenticated sign-in, with session management and CSRF protection. The experience surfaces approval summary, invoice details, and payment options in a clean, single-page flow optimized for touch interaction.

Acceptance Criteria
Multi-Method Payments & Vaulting
"As a client accounts approver, I want to choose card or ACH and split costs by cost center so that our internal accounting stays accurate."
Description

Provides multiple funding options including saved card on file, new card entry with hosted tokenization, ACH bank transfer via instant auth or micro-deposits, and split payments across cost centers with configurable allocation percentages or amounts. Stores payment methods in a PCI-compliant vault for reuse on future approvals, supports payer notes, PO numbers, and tax IDs, and enforces business rules such as minimum ACH amounts, surcharges, or convenience fees where allowed. The UI guides clients through method selection and split allocation with validation and clear totals before submission.
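Split allocation has a classic pitfall: naive percentage math can leave the cent totals off by a penny. A largest-remainder assignment guarantees the parts sum exactly to the invoice total. A sketch (function name and percent-based split shape are assumptions):

```python
from decimal import Decimal

def allocate(total, splits):
    """splits: {cost_center: percent}; percents must sum to 100.
    Largest-remainder method so the cent amounts sum exactly to total."""
    if sum(splits.values()) != 100:
        raise ValueError("allocation percentages must sum to 100")
    cents = int(total * 100)
    raw = {cc: cents * p / 100 for cc, p in splits.items()}
    floors = {cc: int(v) for cc, v in raw.items()}
    remainder = cents - sum(floors.values())
    # Hand leftover cents to the largest fractional remainders.
    for cc in sorted(raw, key=lambda c: raw[c] - floors[c], reverse=True)[:remainder]:
        floors[cc] += 1
    return {cc: Decimal(v) / 100 for cc, v in floors.items()}
```

The same validation backs the "clear totals before submission" requirement: the UI can display the exact per-cost-center amounts the ledger will record.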

Acceptance Criteria
Tax Calculation & Invoice Transparency
"As a finance reviewer, I want to see clear invoice and tax details so that I can verify charges before authorizing payment."
Description

Integrates real-time tax calculation to display accurate taxes by jurisdiction and exemptions, shows a detailed line-item breakdown tied to the approved scope, and generates a numbered, downloadable invoice and receipt. Supports multi-currency display, VAT/GST fields, reverse-charge flags, and firm tax identity disclosure. The portal keeps totals synchronized as clients change payment options or split allocations and prevents submission if mandatory tax or billing fields are incomplete.

Acceptance Criteria
Smart Nudges & Dunning Automation
"As a project lead, I want automated reminders on unpaid approvals so that I spend less time chasing payments and keep projects moving."
Description

Automates polite, context-aware reminders for unpaid approvals using email and SMS with configurable cadences, quiet hours, and escalation paths. Messages include deep links back to the checkout portal, reflect the latest balance and due dates, and automatically pause on payment, dispute, or scope change. Implements intelligent retries for failed card payments, reason-code capture, and analytics on send, open, click, and conversion to help teams optimize follow-ups while staying compliant with consent and regional messaging regulations.
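The quiet-hours rule can be expressed as a small scheduling function: compute the next send time from the cadence, then shift it forward to the next morning if it lands inside the quiet window. A sketch assuming a fixed 21:00–08:00 local quiet window and the hypothetical name `next_nudge`:

```python
from datetime import datetime, timedelta

def next_nudge(last_sent, cadence_days, quiet_start=21, quiet_end=8):
    """Schedule the next reminder; anything falling in the 21:00-08:00
    quiet window is shifted to the next 08:00."""
    due = last_sent + timedelta(days=cadence_days)
    if due.hour >= quiet_start:
        # Late evening: push to 08:00 the following day.
        due = (due + timedelta(days=1)).replace(hour=quiet_end, minute=0,
                                                second=0, microsecond=0)
    elif due.hour < quiet_end:
        # Early morning: push to 08:00 the same day.
        due = due.replace(hour=quiet_end, minute=0, second=0, microsecond=0)
    return due
```

Pausing on payment, dispute, or scope change is then a matter of cancelling the scheduled send rather than special-casing the cadence math.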

Acceptance Criteria
Reconciliation, Receipts & Ledger Sync
"As a firm administrator, I want payments to reconcile automatically with our books so that reporting and audits are accurate with minimal manual work."
Description

Updates payment status in PlanPulse instantly, supports partial and split payments, issues itemized receipts, and synchronizes transactions, fees, refunds, and chargebacks to connected accounting systems. Provides downloadable settlement and payout reports, webhooks for real-time events, and an admin queue for error handling and retry on sync failures. All financial events are appended to the project and client timeline for end-to-end traceability.

Acceptance Criteria
Security, Compliance & Auditability
"As a security-conscious owner, I want the checkout to be compliant and auditable so that client data and payments remain protected."
Description

Ensures PCI DSS SAQ-A scope by using hosted payment fields and tokenization, enforces TLS 1.2+ in transit and encryption at rest for PII, and provides least-privilege access controls for staff. Captures consent records for ACH debits and messaging, applies fraud controls such as AVS/CVV checks and velocity limits, and logs immutable audit trails of approvals, invoices, payments, and notifications. Implements rate limiting, bot protection, and data retention policies aligned with NACHA and applicable privacy regulations.

Acceptance Criteria

Cashflow Forecast

Forecast earned‑versus‑owed by week using planned approval windows and milestone targets. Run what‑if scenarios for schedule slips or added scope and get alerts when forecasted cash dips below thresholds. Gives principals and project leads a forward view to plan staffing, expenses, and runway confidently.

Requirements

Revenue Rules & Milestone Mapping
"As a principal, I want to define how each project earns revenue by week based on milestones and approvals so that forecasts mirror our actual billing and collection patterns."
Description

Implement flexible revenue recognition rules that support fixed-fee by milestone, percent-complete over time, and time-and-materials with caps and retainers. Map PlanPulse project phases and milestone targets to billable items, define planned approval windows, and set payment terms (e.g., Net 15/30) to translate approvals into cash timing. Support partial approvals, split milestones, and overruns, with project-level defaults and per-milestone overrides. Produce weekly earned and owed amounts per project based on these mappings to drive the forecast.
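Translating an approval into cash timing is the core of this mapping: shift the approval date by the payment terms, then bucket the result into the forecast's weekly grid. A minimal sketch using ISO weeks keyed by their Monday (the bucketing convention is an assumption):

```python
from datetime import date, timedelta

def expected_receipt_week(approval_date: date, net_days: int) -> date:
    """Monday of the week in which payment is expected (approval + Net terms)."""
    due = approval_date + timedelta(days=net_days)
    return due - timedelta(days=due.weekday())   # snap back to Monday
```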

Acceptance Criteria
PlanPulse Data Sync & Financial Imports
"As a project lead, I want forecasts to pull live milestones and approvals from PlanPulse and align with our invoices and payments so that I don’t maintain duplicate data."
Description

Synchronize project structures, milestones, approval statuses, and target dates from PlanPulse as the system of record, and enable importing historical invoices and payments via CSV initially (with API connectors queued). Provide on-demand and scheduled syncs, basic currency handling, and conflict resolution with field-level audit logs. Ensure backfilling of historical financials to establish an accurate baseline for owed versus collected and to seed the forecast engine.

Acceptance Criteria
Weekly Forecast Engine
"As a principal, I want a weekly earned-versus-owed view so that I can plan staffing and expenses around expected cash movements."
Description

Generate week-by-week projections for 26–52 weeks, computing earned, billed, collected, and owed totals at firm, client, project, and milestone levels. Incorporate planned approval windows, payment terms, and revenue rules to time-shift revenue into expected cash receipts. Recalculate automatically when schedules change, with sub‑2‑second recalculation for typical portfolios and deterministic rounding to avoid penny drift. Handle holidays and firm calendar settings and expose a reliable data model for downstream visualization and alerts.
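"Deterministic rounding to avoid penny drift" typically means working in integer cents and assigning the remainder explicitly rather than rounding each week independently. A sketch of spreading a percent-complete amount evenly across weeks (function name is illustrative):

```python
from decimal import Decimal

def spread_evenly(amount: Decimal, weeks: int) -> list[Decimal]:
    """Split an earned amount across weeks in integer cents;
    leftover cents go to the earliest weeks so the list sums exactly."""
    cents = int(amount * 100)
    base, extra = divmod(cents, weeks)
    return [Decimal(base + (1 if i < extra else 0)) / 100 for i in range(weeks)]
```

Because the split depends only on the inputs, recalculation after a schedule change reproduces identical figures, which keeps firm-, project-, and milestone-level rollups in agreement.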

Acceptance Criteria
Expense Plan & Starting Balance Inputs
"As a principal, I want to include starting cash and planned expenses so that forecasted cash dips reflect reality, not just receivables."
Description

Capture a firm-level starting cash balance and planned operating expenses by week or month, including payroll, rent, software, subcontractors, and other non-reimbursables. Support recurring expense schedules, one-time entries, and CSV import. Optionally derive projected payroll from staffing plans tied to projects where available. Combine expenses with forecasted receipts to compute net cash trajectory and runway, maintaining edit history and effective-dated changes for auditability.
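The runway computation combines these inputs directly: walk the weekly receipts and expenses forward from the starting balance and report the first week the projection goes negative. A minimal sketch:

```python
def runway_weeks(starting_cash, weekly_receipts, weekly_expenses):
    """Weeks until projected cash first goes negative; None if it never does
    within the forecast horizon."""
    cash = starting_cash
    for week, (received, spent) in enumerate(
            zip(weekly_receipts, weekly_expenses), start=1):
        cash += received - spent
        if cash < 0:
            return week
    return None
```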

Acceptance Criteria
What‑if Scenario Modeling
"As a principal, I want to run what‑if scenarios for schedule slips or added scope so that I can see the cash impact before committing to changes."
Description

Enable creation of named scenarios that adjust key assumptions such as approval dates, milestone sequencing, added scope, staffing rates, write‑ups/downs, and payment terms. Present baseline versus scenario diffs for earned, owed, net cash, and runway by week, and allow saving, duplicating, and sharing scenarios without altering source data until explicitly applied. Provide quick presets for common shocks like two‑week slip or 10% scope increase.

Acceptance Criteria
Cash Threshold Alerts
"As a principal, I want proactive alerts before a cash dip so that I can take corrective action early."
Description

Offer configurable firm-wide and per-project cash thresholds and lead times that trigger alerts when net cash is forecast to dip below the threshold in any upcoming week. Deliver notifications via in‑app banners, email, and Slack with deduplication, snooze, and resolution tracking. Each alert links to the impacted weeks and projects and suggests actions such as accelerating approvals or creating a scenario.

Acceptance Criteria
Forecast Visualization & Export
"As a project lead, I want clear visualizations and exports so that I can share cash expectations with stakeholders and make decisions quickly."
Description

Provide an interactive weekly timeline chart and tabular view of earned, collected, owed, expenses, and net cash with filters by client, project, and tag. Highlight weeks breaching thresholds, support drill‑downs to milestones and approvals, and allow exporting to CSV and PDF with firm branding. Ensure accessibility and mobile-friendly layouts for quick reviews on the go.

Acceptance Criteria

LedgerLink Sync

Two‑way sync with QuickBooks/Xero to push invoices and line items, pull payment/fee status, and auto‑reconcile Stripe payouts to projects. Map GL codes, tax rates, and entities, with each transaction anchored to Timeproof Notary entries. Cuts duplicate entry and keeps finance airtight without leaving PlanPulse.

Requirements

QuickBooks/Xero OAuth Connector
"As a project lead, I want to securely connect our QuickBooks/Xero account to PlanPulse so that invoices and payments can sync automatically without manual exports."
Description

Implement secure OAuth 2.0 connections to QuickBooks Online and Xero with support for sandbox/production selection, tenant/company pickers, token refresh/rotation, least‑privilege scopes, and encrypted credential storage. Provide connect/disconnect flows, connection health status, and audit logs. On connect, auto‑register required webhooks/callbacks where supported. Enforce RBAC so only authorized users can manage connections. Persist external tenant IDs and metadata to drive downstream syncs without manual reconfiguration.

Acceptance Criteria
Invoice Push with Line‑Item Mapping
"As an architect, I want to create and push invoices with detailed line items from my project workspace so that accounting reflects exactly what was delivered and approved."
Description

Allow users to generate and push invoices from PlanPulse projects using Timeproof Notary entries, milestones, and reimbursables as source data. Map customers/contacts, projects/jobs (classes/tracking categories), GL income accounts, and tax rates per line item. Support draft vs. approved invoice creation, due date/terms, currency, and numbering options. Attach optional artifacts (e.g., approval snapshots) to invoices. Store external invoice IDs and maintain update flows (append/change/cancel) with idempotency to prevent duplicates. Enable one‑click send via the accounting system where available.

Acceptance Criteria
Payment and Fee Status Pull
"As a project manager, I want PlanPulse to pull payment status from QuickBooks/Xero so that I always see up‑to‑date balances without logging into the accounting system."
Description

Continuously pull or receive webhook notifications for payments, partial payments, refunds, fees, and credit notes from QuickBooks/Xero, updating invoice and project financial status in PlanPulse. Reflect allocations to specific invoices/line items, compute outstanding balances and aging, and lock paid invoices to prevent accidental edits. Trigger notifications and dashboards when payment states change. Maintain a full audit trail linking external payment records to their originating PlanPulse transactions.

Acceptance Criteria
Stripe Payout Auto‑Reconciliation
"As a firm owner, I want Stripe charges and payouts to auto‑reconcile to invoices and projects so that our books and project budgets stay accurate with no double entry."
Description

Ingest Stripe charges, fees, and payouts, and automatically reconcile them to PlanPulse invoices and projects. Create or match bank deposits and fee lines in QuickBooks/Xero using a clearing account, splitting net vs. fee amounts correctly. Match payments by invoice number, amount, customer, and metadata; surface exceptions for manual review. Handle partial payments, multi‑invoice payouts, refunds, disputes, and rounding tolerances. Display reconciliation status within PlanPulse and write back references to the accounting system.

Acceptance Criteria
GL, Tax, and Entity Mapping UI
"As an operations manager, I want to map our GL accounts, tax rates, and entities to PlanPulse items so that synced transactions post to the right places in our ledger."
Description

Provide a settings UI to map PlanPulse services/items to GL income accounts, tax rates (VAT/sales tax), and entities/classes/locations/tracking categories per connected tenant. Support defaults and overrides at workspace, project, and line‑item levels with validation against remote ledgers. Allow versioned mappings with change history, import/export of mapping presets, and environment‑specific configurations (sandbox vs. production). Prevent sync until required mappings are complete and highlight unmapped items during draft review.

Acceptance Criteria
Timeproof Notary Anchoring
"As a project lead, I want each billed line item to reference its Timeproof Notary entry so that every charge is audit‑ready and defensible."
Description

Anchor every invoiced line item and payment reference to immutable Timeproof Notary entries, storing notarized IDs, timestamps, approver identity, and content hashes. Block invoicing of non‑notarized time or scope and prevent destructive edits to notarized records after billing; changes create new versions with lineage. Expose an audit pane on invoices that traces each charge to its notarized source and client approval event, ensuring defensibility and compliance.

Acceptance Criteria
Sync Orchestrator and Error Handling
"As an admin, I want a reliable sync engine with clear error reporting and self‑healing so that accounting stays in sync without babysitting."
Description

Build an idempotent sync engine with queued background jobs, webhook handlers, and polling fallbacks that respect API rate limits. Implement retry with exponential backoff, deduplication keys, conflict detection/resolution, and data integrity checks. Provide a health dashboard with per‑connection status, detailed logs (with PII redaction), manual replay tools, and proactive alerts for failures or drift. Normalize pagination, timezone, and currency handling across providers and persist sync state to support incremental updates and recovery.
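Retry with exponential backoff is the workhorse here: wait 0.5s, 1s, 2s, ... between attempts and re-raise only after the final one. A minimal sketch (the injectable `sleep` parameter is an assumption that makes the helper testable):

```python
import time

def with_retry(fn, attempts=5, base_delay=0.5, sleep=time.sleep):
    """Call fn with exponential backoff; re-raise after the final attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * 2 ** i)   # 0.5s, 1s, 2s, ...
```

Pairing each retried job with a deduplication key (so a replayed webhook or a retried push cannot create a second invoice) is what makes the engine idempotent rather than merely persistent.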

Acceptance Criteria

Scope Wizard

A guided, 60-second setup that asks about building type, size, disciplines, delivery method, AHJ, scope, and risk zones. Instantly assembles approval paths, consultant windows, sheet bundles, and role presets tailored to your answers. Launch with a right-sized project skeleton, cutting guesswork and avoiding rework from mismatched setups.

Requirements

Adaptive Scope Questionnaire
"As an independent architect, I want a fast, guided setup that only asks what’s relevant so that I can configure a project accurately in under a minute without guesswork."
Description

Implement a guided, time-boxed questionnaire that adapts in real time based on prior answers to capture building type, size, disciplines, delivery method, Authority Having Jurisdiction (AHJ), scope inclusions, and risk zones. The flow must validate inputs, autosave progress, prefill from recent projects, and surface context tips that align with architectural workflows. Responses are normalized into a structured schema consumed by downstream rule evaluation. The UX must be accessible, mobile-friendly, and designed to complete in under 60 seconds at the 90th percentile, with graceful fallbacks for slow networks. This module integrates with user profiles and project creation APIs to initialize new projects and seed the rules engine.

Acceptance Criteria
Configurable Rules Engine
"As a project lead, I want the wizard to translate my answers into the right workflows and templates so that the project starts aligned with our process and jurisdiction."
Description

Provide a deterministic, versioned rules engine that maps questionnaire inputs to outputs including approval paths, consultant engagement windows, sheet bundle templates, role presets, and notification defaults. Rules are declarative, testable, and support jurisdictional and delivery-method variants with precedence and conflict resolution. Include a sandbox to simulate outcomes, unit tests for rule packs, and rollback to prior versions. Expose APIs for evaluation, return rationale traces for each decision, and ensure performance under 300ms per evaluation. Admin tooling allows product ops to publish updated rule packs without code deploys.
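A deterministic evaluation with precedence and rationale traces can be sketched as "most specific match wins": a rule applies when all of its conditions match the answers, more-conditioned rules take precedence, and each output records which rule set it. The rule dict shape (`when`/`then`/`priority`) is an assumed representation:

```python
def evaluate(rules, answers):
    """Apply declarative rules to questionnaire answers.
    More specific rules (more conditions) win; each output is traced
    back to the rule that set it."""
    applicable = [r for r in rules
                  if all(answers.get(k) == v for k, v in r["when"].items())]
    applicable.sort(key=lambda r: (len(r["when"]), r.get("priority", 0)),
                    reverse=True)
    outputs, rationale = {}, []
    for rule in applicable:
        for key, value in rule["then"].items():
            if key not in outputs:          # first (most specific) rule wins
                outputs[key] = value
                rationale.append((key, rule["id"]))
    return outputs, rationale
```

Because evaluation is a pure function of the rule pack and the answers, publishing a new versioned pack cannot retroactively change the rationale recorded for existing projects.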

Acceptance Criteria
Instant Project Skeleton Builder
"As a project owner, I want the system to build a right-sized project skeleton instantly so that I can start coordinating without manual setup or rework."
Description

Automatically assemble a new PlanPulse project from evaluated rule outputs, creating phases, approval workflows, task checklists, sheet bundle structures, and default roles/permissions in a single atomic operation. Provide a dry-run mode for preview, idempotent re-generation when inputs change, and a one-click undo. Ensure referential integrity with core modules (drawings, conversations, approvals) and seed initial artifacts (channels, folders, placeholders) to eliminate manual setup. Target end-to-end generation in under 2 seconds server-side with clear error handling and audit logging.

Acceptance Criteria
AHJ Knowledge Service Integration
"As an architect, I want the wizard to incorporate my AHJ’s specific requirements so that approval paths and deliverables are compliant from day one."
Description

Integrate a curated, updatable knowledge service of AHJ requirements including codes in force, submittal checklists, review timelines, digital submission portals, and fee triggers, keyed by geography and project type. Support geocoding or manual selection, multiple overlapping jurisdictions, and local amendments. Cache frequently used records, expose provenance and last-updated dates, and allow project-level overrides with clear disclaimers. Provide an admin ingestion pipeline for updates and a change-notification mechanism to flag impacted active projects.

Acceptance Criteria
Launch Preview and Overrides
"As a user, I want to preview and tweak the generated setup before launch so that the project reflects my exact needs without starting from scratch."
Description

Before creating the project, present a concise, editable preview of the generated approval paths, consultant windows, sheet bundles, and role presets. Allow targeted overrides (add/remove disciplines, adjust timelines, rename bundles, change approvers) with inline validation and real-time impact updates while preserving the under-60-second setup goal. Persist user overrides as preference hints for future projects and ensure overrides are respected during re-generation. Provide clear warnings when overrides create conflicts or noncompliance with AHJ rules.

Acceptance Criteria
Decision Trace and Analytics
"As a product stakeholder, I want traceability and metrics on how setups are derived and used so that we can improve accuracy and reduce setup time over time."
Description

Capture a complete, exportable decision trace linking inputs to rule evaluations and generated outputs for auditability and support. Instrument the wizard to measure completion time, drop-off points, error rates, and override frequency, and surface aggregate insights to improve rule quality and UX. Respect privacy by excluding drawing content and limiting PII, provide consent messaging, and store analytics in a secure, partitioned store. Expose dashboards and event streams for product and CS teams, and configure A/B experiments to optimize question order and defaults.

Acceptance Criteria

Template Gallery

Curated, ready-to-use templates by project type, region, and contract model. Each includes prebuilt approval ladders, sheet packs, role presets, and milestone billing hooks you can preview and clone. One-click start gives small teams a proven starting point that aligns stakeholders from day one.

Requirements

Gallery Browsing & Advanced Filters
"As a small-firm project lead, I want to quickly filter and search templates by project type, region, and contract model so that I can find a compliant starting point without sifting through irrelevant options."
Description

Provide a fast, searchable gallery that allows users to discover templates by project type, region, contract model, tags, and keyword search, with sort options (popularity, rating, latest) and pagination for large catalogs. Enforce visibility rules for public, firm-curated, and private templates, and ensure results render quickly with API-backed filtering and client-side caching. Integrate with PlanPulse roles and approval workflow metadata so filter facets accurately reflect template contents and compatibility with the user’s account and locale.

Acceptance Criteria
Template Preview with Interactive Components
"As an architect evaluating options, I want to preview a template’s workflows and deliverables so that I can confirm fit before committing to it."
Description

Enable a preview drawer/page that shows a non-editable summary of each template’s approval ladder, sheet pack composition, role presets, milestone billing hooks, and required inputs before cloning. Display version, maintainer, last updated date, and quality badges, and allow structured expansion to inspect steps, sheets, and billing triggers. Optimize for desktop and tablet, prefetch preview data on hover, and degrade gracefully when some components are unavailable.

Acceptance Criteria
One-Click Clone & Initialize
"As a project lead, I want to start a project from a template with one click so that my team and client are aligned on day one without manual setup."
Description

Provide a single action to clone a selected template into a new PlanPulse project workspace, instantiating the approval ladder, sheet pack, role presets, and milestone billing hooks. Map roles to existing firm users, prompt for any required fields (project name, region, currency) inline, and create an auditable initialization record. Ensure idempotent backend operations with rollback on failure, permission checks, and immediate readiness for client approval workflows.

Acceptance Criteria
Regional & Contract Localization Rules
"As an architect working across regions, I want templates to auto-adapt to local standards so that my approvals and billing align with regulations without manual rework."
Description

Apply localization rules during preview and cloning to adjust terminology, units (metric/imperial), holidays, currencies, taxes, and contract clauses based on the selected region and contract model. Validate that approval steps and billing triggers comply with local standards, surface incompatibilities before cloning, and allow safe overrides post-clone with audit tracking. Provide fallback defaults when regional data is incomplete.

Acceptance Criteria
Template Management & Curation Console
"As a firm curator, I want to govern a high-quality template catalog so that teams can trust and reuse proven setups consistently."
Description

Deliver an admin console for curators to create, edit, version, and publish templates with draft/published states. Include tagging, region/contract applicability settings, dependency checks for roles and billing hooks, a completeness validator, and sandboxes for preview testing. Support ownership, approvals for publishing, deprecation/retirement flows, and audit logs for all changes.

Acceptance Criteria
Template Analytics & Quality Signals
"As a template curator, I want analytics and feedback so that I can improve templates and highlight the ones that drive faster approvals."
Description

Collect and surface aggregate metrics such as clone count, activation rate, time to first client approval, and abandonment rate to inform sorting and badges like Popular and Verified. Provide a lightweight feedback mechanism and star ratings with moderation. Ensure privacy by aggregating data, excluding PII, and honoring firm-level opt-outs and data retention policies.

Acceptance Criteria

Auto-Tune Presets

Learns from your past projects to adjust SLAs, nudge cadence, approver ladders, and discipline splits per template. Suggests tweaks before kickoff and refines defaults over time so templates fit how your firm actually works. Fewer manual overrides mean faster setup and smoother approval cycles.

Requirements

Historical Data Ingestion & Feature Extraction
"As a project lead, I want PlanPulse to learn from our past projects without manual data cleanup so that suggestions reflect how our firm and clients actually work."
Description

Implement a secure, scalable pipeline that ingests historical project data (templates used, client, disciplines, approver ladders, SLA targets vs. actuals, number/timing of nudges, markup/comment volumes, overrides, and approval outcomes). Normalize and de-duplicate records across projects and clients, derive longitudinal features per template–client–discipline, and backfill at least 12–24 months where available. Provide feature stores optimized for training/inference, with PII minimization, data retention controls, and region-aware storage. Expose data quality metrics and alerts (freshness, completeness, drift) to ensure trustworthy inputs for Auto-Tune. Output a consistent schema that downstream services can query in real time at kickoff.

Acceptance Criteria
Preset Suggestion Engine
"As a project lead, I want kickoff-ready presets tailored to each client so that I can start faster and reduce manual edits."
Description

Build a rules-augmented ML service that generates recommended presets per new project/template: SLA targets by discipline, reminder (nudge) cadence, approver ladder sequence and delegation rules, and discipline splits. Accept contextual inputs (client, template, team size, project scale, contract type) and return a suggested configuration with confidence scores, rationale factors, and diffs from baseline. Provide fallbacks for cold-start scenarios, keep per-inference latency under 300 ms, and expose an API for batch and real-time calls. Ensure idempotent suggestions given the same inputs and include versioned model artifacts for traceability.
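The suggestion contract described above (presets plus a confidence score, rationale factors, and a diff from baseline, with a cold-start fallback) could be sketched as follows. All field names, the median heuristic, and the confidence formula are illustrative assumptions, not PlanPulse's actual API:

```python
from dataclasses import dataclass, field

# Baseline defaults a template ships with; values are illustrative.
BASELINE = {"sla_days": 5, "nudge_every_days": 2}

@dataclass
class Suggestion:
    presets: dict
    confidence: float
    rationale: list = field(default_factory=list)

    def diff_from_baseline(self) -> dict:
        """Only the fields that differ from the template baseline."""
        return {k: v for k, v in self.presets.items() if BASELINE.get(k) != v}

def suggest_presets(history: list) -> Suggestion:
    """history: past projects for this client/template pair, e.g.
    [{"sla_days": 3, "met_sla": True}, ...]."""
    if not history:
        # Cold-start fallback: baseline defaults with low confidence.
        return Suggestion(dict(BASELINE), confidence=0.2,
                          rationale=["no history; using baseline"])
    # Toy heuristic: median of the SLA targets that were actually met.
    met = sorted(p["sla_days"] for p in history if p["met_sla"])
    sla = met[len(met) // 2] if met else BASELINE["sla_days"]
    confidence = min(0.9, 0.3 + 0.1 * len(history))
    return Suggestion({"sla_days": sla, "nudge_every_days": max(1, sla // 2)},
                      confidence,
                      [f"{len(history)} past projects; median met SLA {sla}d"])
```

Because the function is a pure mapping from inputs to outputs, identical inputs yield identical suggestions, which is the idempotency property the requirement calls for.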

Acceptance Criteria
Pre-Kickoff Review UI
"As an architect, I want to quickly review and apply recommended tweaks before kickoff so that I maintain control while saving setup time."
Description

Design an inline review panel surfaced during template selection that presents suggested changes side-by-side with current defaults. Allow accept-all, per-category (SLA, nudges, ladder, splits), and per-field acceptance with instant previews. Display confidence, rationale snippets, and expected impact (e.g., projected approval cycle reduction). Support role-based permissions, one-click apply to project, and the ability to pin chosen defaults back to a template. Track user actions for analytics and learning signals. Ensure accessibility standards and responsive performance for typical project sizes.

Acceptance Criteria
Learning Feedback Loop & Model Updating
"As an operations lead, I want the system to improve with every project so that defaults continuously align with our real workflow."
Description

Capture post-kickoff outcomes (accepted vs. overridden suggestions, SLA adherence, actual approval durations, nudge effectiveness, escalations) and feed them into scheduled retraining jobs. Implement offline evaluation, drift detection, and automatic rollback to previous model versions when performance degrades. Weight feedback by acceptance and outcome quality, and allow human-in-the-loop labeling for edge cases. Update suggestion heuristics weekly (configurable), maintain experiment logs, and expose performance dashboards (override rate, cycle-time delta, satisfaction proxy).

Acceptance Criteria
Constraint Guardrails & Auditability
"As a compliance admin, I want enforced guardrails and an auditable trail of preset changes so that we meet client and regulatory requirements."
Description

Provide admin-configurable hard and soft constraints that suggestions must satisfy (e.g., SLA min/max, mandatory approvers, client-specific compliance rules, region-specific data and notification policies). Validate all recommendations against guardrails pre-apply and block with clear reasons when violated. Maintain a tamper-evident audit log for suggested changes, approvals, overrides, applied configurations, and model/version used. Support export to CSV/JSON and webhook delivery for compliance systems. Enforce RBAC, least-privilege access, and data residency controls.
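A minimal sketch of the pre-apply validation, assuming a simple guardrail schema (an SLA range and a mandatory-approver list): every hard-constraint violation is returned with a human-readable reason so the apply step can block with clear feedback.

```python
# Illustrative guardrail validation; the guardrail keys are assumptions,
# not a real PlanPulse schema.
def validate_suggestion(presets: dict, guardrails: dict) -> list:
    violations = []
    lo, hi = guardrails.get("sla_days_range", (1, 30))
    sla = presets.get("sla_days")
    if sla is not None and not lo <= sla <= hi:
        violations.append(f"sla_days={sla} outside allowed range [{lo}, {hi}]")
    required = set(guardrails.get("mandatory_approvers", []))
    missing = required - set(presets.get("approvers", []))
    if missing:
        violations.append(f"missing mandatory approvers: {sorted(missing)}")
    # Empty list => safe to apply; otherwise block and surface the reasons.
    return violations
```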

Acceptance Criteria
Preset Versioning & Rollback
"As a project manager, I want to compare and revert preset changes so that I can quickly recover if a suggestion underperforms."
Description

Introduce version control for template presets at the template–client and template–global levels. Record every applied suggestion as a new version with metadata (who applied, when, rationale, model version, diffs). Provide visual diffs, labels (e.g., Baseline, Stable, Candidate), and one-click rollback to any prior version with dependency checks. Link each version to observed performance metrics (cycle time, override rate) to inform selection. Ensure compatibility checks when templates evolve and preserve referential integrity across projects.
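One way the version history could be modeled: each applied suggestion appends a new version carrying metadata and a field-level diff, and rollback restores a prior state as a new version so the audit trail stays intact. The metadata fields here are illustrative:

```python
# Minimal preset version store with per-version diffs and rollback.
class PresetVersions:
    def __init__(self, baseline: dict):
        self.versions = [{"label": "Baseline", "presets": dict(baseline),
                          "diff": {}, "applied_by": "system"}]

    def apply(self, changes: dict, label: str, applied_by: str) -> dict:
        """Record a new version; the diff stores (old, new) per changed field."""
        current = dict(self.versions[-1]["presets"])
        diff = {k: (current.get(k), v) for k, v in changes.items()
                if current.get(k) != v}
        current.update(changes)
        self.versions.append({"label": label, "presets": current,
                              "diff": diff, "applied_by": applied_by})
        return diff

    def rollback(self, index: int) -> dict:
        """Restore a prior version by appending it as a new version,
        preserving the full history rather than deleting anything."""
        restored = dict(self.versions[index]["presets"])
        self.versions.append({"label": f"Rollback to v{index}",
                              "presets": restored, "diff": {},
                              "applied_by": "rollback"})
        return restored
```

Appending rather than rewriting keeps every state reachable, which is what makes one-click rollback and performance comparison across versions cheap.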

Acceptance Criteria

SheetPack Builder

Generates a scoped sheet list from a work plan, BIM model metadata, or a simple checklist. Tags sheets by discipline and zone, sets Version Pin and Change Layers defaults, and creates placeholders for not-yet-issued sheets. Start reviews on the right pages, not a blank directory.

Requirements

Multi-source Sheet Scope Builder
"As a project lead, I want to generate a sheet list directly from my work plan or model so that I don’t have to rebuild scope manually and can avoid omission and numbering errors."
Description

Provide an import wizard that generates a scoped sheet list from three input types: (1) work plans (CSV/XLSX/PDF table extraction), (2) BIM model metadata (e.g., Revit schedules, IFC exports), and (3) a manual checklist. Normalize inputs into a unified schema (sheet number, title, discipline, zone, status, source) with field-mapping templates per firm. Support deduplication, sequence validation, and partial imports, with a preview and error report before commit. Persist mappings for reuse and allow incremental additions without rebuilding the pack. Integrate with PlanPulse projects so the resulting pack is immediately available to tagging, defaults, and review flows.

Acceptance Criteria
Auto-Tagging by Discipline and Zone
"As a BIM coordinator, I want sheets auto-tagged by discipline and zone so that I can quickly filter and route reviews to the right stakeholders."
Description

Automatically tag each sheet by discipline and zone using naming conventions (e.g., sheet number prefixes), BIM parameters (e.g., discipline, level, building), and configurable taxonomies. Provide a rules engine with preview, confidence indicators, and batch edit/override. Support custom discipline sets and zone hierarchies (building > level > area). Persist tags to enable filtering, routing, and downstream review scoping. Include safeguards for ambiguous tags (flag for manual review) and an audit trail of rule applications.
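A toy example of convention-based tagging, assuming the common US prefix convention (A for Architectural, S for Structural, and so on) and a first-digit-is-level rule; real taxonomies would be firm-configurable, and unrecognized numbers are flagged for manual review rather than guessed:

```python
import re

# Example prefix taxonomy; in practice this would be configurable per firm.
DISCIPLINES = {"A": "Architectural", "S": "Structural", "M": "Mechanical",
               "E": "Electrical", "P": "Plumbing"}

def tag_sheet(number: str) -> dict:
    m = re.match(r"^([A-Z]+)-?(\d+)$", number.upper())
    prefix, num = (m.group(1), m.group(2)) if m else (None, None)
    discipline = DISCIPLINES.get(prefix)
    return {
        "number": number,
        "discipline": discipline,
        # Assumed convention: first digit of the sheet number is the level.
        "level": int(num[0]) if num else None,
        # Ambiguous or unknown numbers are flagged, not guessed.
        "needs_review": discipline is None,
    }
```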

Acceptance Criteria
Version Pin and Change Layer Defaults
"As an architect, I want consistent version and change layer defaults applied to my sheet pack so that reviews open with the right context without manual setup each time."
Description

Allow project-level and pack-level configuration of default Version Pin (baseline to compare against) and Change Layers (which change types are visible) with the ability to override per sheet or per tag group. Apply defaults automatically when packs are created and when reviews are launched from the pack. Expose bulk operations, inheritance rules, and a clear UI for what is pinned and why. Ensure defaults are stored with the pack and are retrievable via API for consistent behavior across sessions.

Acceptance Criteria
Placeholder Sheet Creation and Lifecycle
"As a project lead, I want to add placeholders for future sheets so that my review scope and routing can be set up before all sheets are issued."
Description

Enable creation of placeholders for not-yet-issued sheets with reserved numbers, provisional titles, expected disciplines/zones, and source intent. Visually distinguish placeholders in lists and reviews while keeping them selectable for routing and comments. Enforce numbering uniqueness, track readiness status, and notify when a real sheet matching the placeholder appears in an import/sync. Provide one-click merge of placeholder to real sheet while preserving tags, defaults, and discussion history.

Acceptance Criteria
Smart Review Start Pages
"As a client reviewer, I want reviews to open on the exact sheets I need to see so that I can focus immediately without navigating a directory."
Description

From a configured sheet pack, open review sessions directly on the relevant sheets rather than a blank directory. Pre-filter by discipline and zone, and apply pack defaults for Version Pin and Change Layers. Generate stable deep links that encode the selected pack and filters for easy sharing with clients. Preserve user context (last visited sheet, filter set) across sessions and devices. Provide a lightweight storyboard view to jump across the pack in a defined review order.

Acceptance Criteria
Validation and Gap Analysis
"As a project manager, I want automated checks that highlight missing or inconsistent sheets so that I can ensure the pack is complete before sending it for review."
Description

Validate the sheet pack against the selected source scope to detect duplicates, numbering gaps, missing required disciplines/zones, and sheets without tags. Provide a summary dashboard with actionable fixes (auto-renumber suggestions, rule proposals, add placeholders) and blockable criteria to mark the pack as Ready for Review. Export validation results as tasks or CSV for sharing. Log all checks in an auditable report attached to the pack.
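The duplicate and numbering-gap checks could be sketched like this, assuming PREFIX-NUMBER sheet numbering; a real validator would also cover required disciplines, zones, and untagged sheets:

```python
import re
from collections import defaultdict

def validate_pack(sheet_numbers: list) -> dict:
    """Detect duplicate numbers and per-prefix numbering gaps,
    e.g. A-101, A-103 => A-102 reported missing."""
    report = {"duplicates": [], "gaps": []}
    seen, by_prefix = set(), defaultdict(set)
    for number in sheet_numbers:
        if number in seen:
            report["duplicates"].append(number)
        seen.add(number)
        m = re.match(r"^([A-Z]+)-(\d+)$", number)
        if m:
            by_prefix[m.group(1)].add(int(m.group(2)))
    for prefix, nums in sorted(by_prefix.items()):
        # Any integer between the prefix's min and max that is absent is a gap.
        for n in range(min(nums), max(nums)):
            if n not in nums:
                report["gaps"].append(f"{prefix}-{n}")
    return report
```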

Acceptance Criteria
Incremental Sync with BIM Metadata
"As a BIM coordinator, I want to incrementally sync the sheet pack with model updates so that the scope stays current without losing my manual adjustments."
Description

Support re-syncing the sheet pack with updated BIM metadata or work plan changes. Provide a diff view that classifies changes (added, removed, renumbered, retitled, retagged), maintains manual overrides, and offers one-click apply with conflict resolution. Allow scheduling of sync checks and notifications when material scope changes are detected. Ensure placeholders are matched and resolved during sync to minimize manual reconciliation.
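A minimal sketch of the diff classification, keying sheets by number and bucketing changes so that apply logic can preserve manual overrides. Field names are illustrative:

```python
def diff_sheets(current: list, incoming: list) -> dict:
    """Classify sync changes between the pack and updated source metadata."""
    cur = {s["number"]: s for s in current}
    inc = {s["number"]: s for s in incoming}
    return {
        "added": sorted(inc.keys() - cur.keys()),
        "removed": sorted(cur.keys() - inc.keys()),
        "retitled": sorted(n for n in cur.keys() & inc.keys()
                           if cur[n]["title"] != inc[n]["title"]),
        "unchanged": sorted(n for n in cur.keys() & inc.keys()
                            if cur[n]["title"] == inc[n]["title"]),
    }
```

Renumber detection would layer on top of this (matching removed and added sheets by title similarity), which is why the buckets are kept separate rather than collapsed into a single change list.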

Acceptance Criteria

Role Auto-Map

Automatically maps invited teammates and clients to the correct template roles using past assignments, email domain rules, and project scope. Preloads delegates and signing permissions so approvals route correctly from day one. Prevents misfires and reduces admin time for principals and project leads.

Requirements

Domain Rule Engine
"As a project lead, I want invited users to be auto-mapped to roles by email domain so that setup is fast and consistent."
Description

Implements a configurable rules engine that maps invitees to predefined role templates based on email domain patterns, subdomains, and allow/deny lists. Supports wildcard matching, rule precedence, and per-contact exceptions. Validates suggested role against project context and default fallbacks, ensuring consistent, low-friction setup during invitations and project onboarding. Includes rule versioning, preview mode, and import/export to keep mappings consistent across projects and workspaces.
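The ordered, wildcard-capable matching described above can be illustrated with Python's fnmatch: list order encodes rule precedence, a None role serves as a deny-list entry, and a catch-all supplies the default fallback. The domains and role names are invented for the example:

```python
from fnmatch import fnmatch

RULES = [
    {"pattern": "*@contractor.example",  "role": None},              # deny list
    {"pattern": "*@*.acmearch.example",  "role": "Architect"},       # subdomains
    {"pattern": "*@acmearch.example",    "role": "Architect"},
    {"pattern": "*@*",                   "role": "Client Reviewer"}, # fallback
]

def map_role(email: str, rules=RULES):
    email = email.lower().strip()
    for rule in rules:  # list order encodes rule precedence: first match wins
        if fnmatch(email, rule["pattern"]):
            return rule["role"]
    return None
```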

Acceptance Criteria
Historical Role Predictor
"As a principal, I want the system to learn from past assignments so that role suggestions reflect how our firm actually works."
Description

Leverages prior project assignments and activity to suggest the most likely role for an invitee, factoring in firm-specific patterns, project type, and client relationship. Produces a confidence score and an explanation (e.g., "Assigned as Client Approver on 6 similar projects"), and gracefully degrades to rules/defaults when confidence is low. Provides opt-in controls, data retention boundaries, and organization-level privacy settings to comply with internal policies.

Acceptance Criteria
Scope-Driven Role Templates
"As a project lead, I want role templates to adapt to project scope so that the right roles and permissions are in place from day one."
Description

Auto-selects appropriate role templates and permission sets based on project scope metadata (e.g., architectural phase, contract type, and deliverable set). Ensures the correct roles, capabilities, and contact types are present before work begins, minimizing rework. Allows admins to define scope-to-template mappings, with overrides at the project level and immediate propagation to invite flows and approval routing.

Acceptance Criteria
Preloaded Approval Routing
"As a project lead, I want approval routes and delegates to preload from roles so that signoffs move to the right people without manual configuration."
Description

Automatically assembles approval chains, delegates, and signing permissions from the mapped roles at project creation or first invite. Supports sequential and parallel approvals, fallback delegates, and auto-rerouting on out-of-office or reassignment. Integrates with PlanPulse’s approval engine to enforce signing order, capture timestamps, and prevent misrouted requests, reducing delays and rework.

Acceptance Criteria
Invite Flow Auto-Map UI
"As an inviter, I want to review and override the suggested roles in the invite flow so that I stay in control and fix edge cases quickly."
Description

Provides an invite dialog that surfaces suggested roles with rationale, one-click accept/override, and bulk mapping for CSV or multi-email invites. Displays a live preview of resulting permissions and approval routing, with inline warnings for conflicts. Changes are persisted back to the rules/predictor (where allowed) to continuously improve suggestions without leaving the flow.

Acceptance Criteria
Override and Audit Trail
"As a compliance manager, I want an audit trail of role mappings and overrides so that we can trace decisions and pass audits."
Description

Captures every auto-mapping decision, manual override, and resulting permission/route change with actor, timestamp, source (rule/history/scope), and reason. Provides a searchable audit log and export for compliance, plus the ability to revert to a prior mapping state. Exposes event hooks for downstream systems (e.g., SOC reporting) without leaking sensitive content.

Acceptance Criteria
Permission Safety Guardrails
"As a security-conscious admin, I want guardrails that prevent over-permissioned mappings so that client data and drawings remain protected."
Description

Enforces least-privilege by validating suggested roles against permission thresholds and approval responsibilities. Flags and blocks over-permissioned assignments, highlights conflicts (e.g., requester equals approver), and requires secondary confirmation for escalations. Provides policy templates per organization to align mapping outcomes with security and contractual constraints.
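A sketch of the guardrail checks, assuming a simple numeric role-level model and an external-user threshold; both are illustrative stand-ins for an organization's policy templates:

```python
# Assumed role hierarchy and external-access ceiling, for illustration only.
ROLE_LEVEL = {"Viewer": 1, "Client Approver": 2, "Architect": 3, "Admin": 4}
MAX_EXTERNAL_LEVEL = 2  # external contacts capped at approver-level access

def guardrail_issues(mapping: dict, approval_route: dict) -> list:
    issues = []
    level = ROLE_LEVEL.get(mapping["role"], 0)
    if mapping.get("external") and level > MAX_EXTERNAL_LEVEL:
        issues.append(f"over-permissioned: external user mapped to {mapping['role']}")
    if (mapping["email"] == approval_route.get("requester")
            and mapping["role"] in approval_route.get("approver_roles", [])):
        issues.append("conflict: requester would also be an approver")
    # Non-empty => block the mapping and require secondary confirmation.
    return issues
```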

Acceptance Criteria

Template Sync

Safely propagate improvements to active projects as your standards evolve. Shows a clear diff of ladder changes, sheet pack updates, and role adjustments, with per-project accept, postpone, or partial-apply options. Keep projects current without disrupting live approvals.

Requirements

Template-to-Project Diff Engine
"As a project lead, I want a clear diff between my project and the updated template so that I can understand exactly what will change before I apply any updates."
Description

Implements a deterministic diff engine that compares the latest template baseline to each active project’s customized configuration, detecting and categorizing changes across phase/issue ladder definitions, sheet pack composition (additions, deletions, renames, reorders), and role/permission matrices. Produces a clear, visual diff with per-category counts and side-by-side context, including intelligent matching to handle renames and mapping suggestions. Integrates with PlanPulse’s project model to preserve existing markups and client approval states, flags conflicts, and supports deep links to affected sheets and roles. Provides an API surface for fetching diffs and a normalized schema to enable downstream apply, scheduling, and reporting workflows. Outcome: precise visibility into what will change before any update is applied, reducing risk and surprises.

Acceptance Criteria
Granular Apply Controls
"As an architect, I want to selectively apply parts of a template update so that I can keep my project aligned to standards without breaking my current workflow."
Description

Provides per-project and per-change controls to accept, postpone, or partially apply template updates with fine-grained selection by category (ladder, sheet pack, roles), by subset (specific phases, sheets, or roles), and by operation type (add/rename/delete/reorder/permission-change). Includes a preview with impact analysis, dependency checks (e.g., sheet renames that affect links/markups), and a dry-run mode. Supports batched apply with consistent ordering and transactional safety to ensure all-or-nothing application where required. Integrates with the diff engine, honors project-specific overrides, and records user rationale for auditability. Outcome: teams confidently apply only the changes they need without disrupting ongoing work.

Acceptance Criteria
Approval-Safe Update Guardrails
"As a project manager, I want template updates to avoid interfering with live client approvals so that my review cycles remain uninterrupted."
Description

Introduces protective rules that prevent disruptive changes during live client approval cycles. Detects sheets or markups currently under review and either auto-queues related updates, prompts for scheduling in a safe window, or offers scoped alternatives (e.g., apply to non-review sheets only). Validates that updates won’t reset approval states or invalidate markups; where necessary, provides migration steps to preserve annotations and history. Includes configurable policies (strict block, warn-and-queue, scheduled window) and integrates with one-click approvals to ensure continuity. Outcome: updates never derail or delay ongoing client approvals.

Acceptance Criteria
Sync Versioning and Rollback
"As an administrator, I want versioned records and rollback for template syncs so that I can recover quickly if an update causes issues."
Description

Creates versioned "sync packages" that encapsulate the exact set of template changes applied to a project, with immutable IDs, timestamps, authors, and affected entities. Captures pre- and post-state snapshots to enable one-click rollback and differential reapply. Exposes an audit trail per project and across the portfolio, with exportable logs for compliance and reviews. Integrates with PlanPulse’s existing versioning to keep markups and approval records consistent across rollbacks. Outcome: safe experimentation and rapid recovery from unintended updates.

Acceptance Criteria
Role-Based Sync Permissions
"As a standards owner, I want permissioned control over who can view and apply template updates so that template governance is enforced consistently."
Description

Adds granular RBAC for who can propose, review, and apply template updates at global, template, and project scopes. Supports separation of duties (e.g., standards owner proposes, project lead applies), approval workflows for high-impact changes, and least-privilege defaults. Integrates with existing roles and SSO/SCIM where available. Enforces permission checks across diff viewing, apply actions, scheduling, and rollback. Outcome: governance and control over how standards propagate, reducing organizational risk.

Acceptance Criteria
Update Notifications and Bulk Actions
"As a portfolio lead, I want actionable notifications and bulk apply options so that I can roll out template improvements across projects efficiently."
Description

Delivers multi-channel notifications (in-app, email) when new template updates are available, with project-level context and quick actions to review diffs. Provides a portfolio view to filter projects by update readiness, impact size, or policy state, and supports bulk operations (apply, postpone, schedule) with progress tracking and failure handling. Includes reminder nudges for postponed updates and a digest to reduce notification noise. Outcome: timely awareness and efficient rollout of standards across many projects.

Acceptance Criteria

AHJ Match

Bind a template to the correct permitting profile at kickoff. Auto-inserts required gates, signatures, and export packets for the selected jurisdiction so submittals pass on the first try. Eliminates rework from missing forms and mismatched approval steps.

Requirements

Smart AHJ Detection
"As a project lead, I want the system to auto-suggest the correct jurisdiction based on the project address so that I avoid choosing the wrong permitting profile and rework."
Description

Automatically identifies the likely Authority Having Jurisdiction (AHJ) for a project at kickoff by geocoding the site address and intersecting it with jurisdiction boundary data. Presents ranked suggestions with confidence scores, supports manual override, and handles overlapping or multi-jurisdiction scenarios (e.g., city, county, fire district). Persists the selection for auditability and downstream automation, minimizes misselection risk, and integrates directly into PlanPulse’s project creation flow.
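The boundary-intersection step could be sketched with a standard ray-casting point-in-polygon test; real geocoding and jurisdiction boundary data would come from external services, and the simplified lon/lat rings here are purely illustrative:

```python
def point_in_polygon(pt, polygon) -> bool:
    """Ray-casting test: count edge crossings of a horizontal ray from pt."""
    x, y = pt
    inside = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def suggest_ahjs(site, jurisdictions) -> list:
    """Return every jurisdiction containing the site; a project can sit in
    overlapping city, county, and fire-district boundaries at once, so all
    hits are surfaced for ranking and manual confirmation."""
    return [j["name"] for j in jurisdictions
            if point_in_polygon(site, j["boundary"])]
```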

Acceptance Criteria
AHJ Profile Registry
"As a compliance admin, I want a versioned registry of AHJ profiles so that required steps, forms, and rules are consistent and up to date across projects."
Description

Maintains a version-controlled registry of permitting profiles per jurisdiction, including required gates and tasks, signatory roles and sequence, mandatory forms and data fields, export structure and naming rules, submission portal details, and stamping/seal policies. Provides an admin interface and import pipeline for adding and updating profiles, enforces schema validation, tracks effective dates, and exposes read-only profiles to projects to ensure consistent, up-to-date compliance across the organization.

Acceptance Criteria
Template Workflow Injection
"As a project lead, I want the selected AHJ to automatically inject the required gates and approval steps into my project so that my workflow matches the jurisdiction’s process from day one."
Description

Binds the selected AHJ profile to a project and automatically injects jurisdiction-specific gates, tasks, and approval steps into the project’s workflow template. Maps AHJ signatory roles to PlanPulse roles, assigns default owners, locks system-required steps to preserve compliance, and supports safe overrides by authorized users. Ensures idempotency to prevent duplication on rebind and displays the injected workflow visually within the PlanPulse workspace.

Acceptance Criteria
Auto Packet Builder
"As a project lead, I want a jurisdiction-specific submittal packet auto-generated and pre-filled so that I can submit once with the correct forms and formatting."
Description

Generates a jurisdiction-specific submittal packet with the correct forms, ordering, and file structure based on the bound AHJ profile. Pre-fills forms from project metadata, supports fillable PDFs, bundles drawings and attachments per required conventions, and outputs a compliant single PDF or ZIP with correct naming. Stores the packet in the project library with version tags and enables one-click download or share for submission.

Acceptance Criteria
Signature Routing & Compliance
"As an architect of record, I want required signatures to be routed to the correct parties in the right order so that the packet is compliant without manual coordination."
Description

Creates and routes required signature tasks according to the AHJ’s specified roles and signing order, supporting sequential or parallel execution. Integrates with e-sign providers, captures stamps/seals and license metadata, validates signing completeness before packet finalization, and supports a controlled alternative workflow for wet signatures when e-sign is not accepted. Stores signed artifacts and certificates for audit and reuse across revisions.

Acceptance Criteria
Readiness Checker & Submission Checklist
"As a project lead, I want a real-time readiness check against AHJ requirements so that I can resolve gaps before attempting submission."
Description

Continuously validates project data, attachments, signatures, and workflow completion against the bound AHJ profile to determine submittal readiness. Surfaces a progress indicator with actionable gaps, enforces blocking on critical missing items, and provides direct links to resolve deficiencies. Performs file-type and formatting checks and prevents packet export until mandatory criteria are satisfied to increase first-pass approval rates.
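The readiness computation might look like the following sketch, assuming a profile that lists required forms and signatures; the gaps double as the actionable to-do list, and the ready flag gates packet export:

```python
def check_readiness(project: dict, profile: dict) -> dict:
    """Compare project state to the bound AHJ profile's requirements."""
    missing_forms = [f for f in profile["required_forms"]
                     if f not in project.get("forms", [])]
    unsigned = [r for r in profile["required_signatures"]
                if r not in project.get("signatures", [])]
    total = len(profile["required_forms"]) + len(profile["required_signatures"])
    done = total - len(missing_forms) - len(unsigned)
    return {
        "ready": not missing_forms and not unsigned,  # blocks packet export
        "progress": done / total if total else 1.0,   # drives the indicator
        "gaps": [f"form: {f}" for f in missing_forms]
              + [f"signature: {r}" for r in unsigned],
    }
```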

Acceptance Criteria
Change Propagation & Audit Trail
"As a project lead, I want to be notified when the AHJ profile changes and apply updates safely so that my project stays compliant without losing progress."
Description

Monitors AHJ profile updates and notifies project owners of impactful changes, presenting a clear diff and offering safe re-sync options that preserve completed work. Applies migrations to workflows and packets without overwriting user data, records the profile version used for each export, and maintains a comprehensive audit trail of selections and overrides for compliance reviews and historical traceability.

Acceptance Criteria

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Smart Approval Ladder

Auto-routes sheets to the right approvers based on tags and scope, with SLAs and nudges. One-click moves to the next rung, killing stalls.

Idea

Visual Diff Heatmap

Highlights deltas between versions in color-coded intensity, per sheet and across sets. Click any glow to jump to markup history.

Idea

Fieldproof Offline Kit

Review, annotate, and approve drawings without signal on tablets; syncs versioned markups and signatures when online. Includes conflict resolver.

Idea

Consultant Sync Windows

Time-boxed consultant review slots per discipline with auto reminders and lockouts. Ensures MEP/structural notes hit the right version before client approval.

Idea

Tamperproof Approval Ledger

Creates a tamper-evident, hashed log of approvals and comments, with e-signatures and timestamps. Exports AHJ-ready packets in one click.
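The tamper-evident property this idea relies on is typically achieved with a hash chain: each record hashes its entry together with the previous record's hash, so altering any past entry invalidates every later hash. A minimal sketch:

```python
import hashlib
import json

def append_entry(ledger: list, entry: dict) -> list:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(entry, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    ledger.append({"entry": entry, "prev": prev, "hash": digest})
    return ledger

def verify_ledger(ledger: list) -> bool:
    prev = "0" * 64
    for record in ledger:
        payload = json.dumps(record["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if record["prev"] != prev or record["hash"] != expected:
            return False  # chain broken: some entry was altered
        prev = record["hash"]
    return True
```

Exporting the chain alongside e-signature certificates would let a third party re-verify the approval history independently.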

Idea

Milestone Meter Billing

Tie invoices to approval milestones; auto-collect via Stripe after one-click client sign-off. Shows live earned-versus-owed meter.

Idea

Fast-Track Templates

Spin up projects with prebuilt approval paths, sheet bundles, and role presets by project type. Cuts setup to minutes.

Idea



This product was entirely generated using our AI and advanced algorithms.