Fleet Maintenance Software

FleetPulse

Stop Breakdowns Before They Happen

FleetPulse is a lightweight SaaS that centralizes OBD‑II telematics, maintenance scheduling, and repair-cost tracking for owner-operators and small fleet managers (3–100 vehicles). It flags engine, battery, and brake anomalies, automates inspections and service reminders, and reduces unplanned downtime and repair costs through early detection and consolidated records.


Product Details


Vision & Mission

Vision
Empower small fleet owners to prevent breakdowns, maximize uptime, and save time and money through predictive maintenance.
Long Term Goal
Within 3 years, enable 25,000 small fleets to reduce unplanned downtime by 35% and collectively save $100 million through predictive OBD‑II maintenance and affordable telematics.
Impact
Helps small fleet owners and local delivery managers reduce unplanned downtime by up to 40%, cut average repair costs by 20% per vehicle, and reclaim 2+ hours weekly by automating 90% of inspections and consolidating OBD‑II telematics and maintenance records.

Problem & Solution

Problem Statement
Small fleet owners and local delivery managers face unexpected vehicle breakdowns because maintenance records are scattered and telematics options are expensive or overly complex, leaving teams reactive and unable to detect emerging OBD-II issues early.
Solution Overview
FleetPulse centralizes inexpensive OBD‑II telematics and vehicle records into a single dashboard that prevents surprise breakdowns by flagging early engine, battery, and brake anomalies and automatically scheduling service reminders while consolidating repair expenses.

Details & Audience

Description
FleetPulse is a lightweight SaaS that centralizes OBD-II telematics, maintenance scheduling, and repair-cost tracking for small commercial fleets. It serves owner-operators, small fleet managers, and local delivery businesses managing 3–100 vehicles. FleetPulse prevents surprise breakdowns and cuts downtime by automating inspections and service reminders and consolidating expense logs. Its automatic OBD-II anomaly detection flags emerging engine, battery, and brake issues before they become failures.
Target Audience
Small fleet owners and managers (ages 30–55) in local delivery who want to prevent breakdowns and prefer simple telematics
Inspiration
At midnight I replaced a delivery van’s alternator after a missed service; at dawn the owner emptied a drawer of mismatched receipts and paper printouts. We plugged in a $5 OBD‑II dongle and watched battery voltage drift lower over weeks—obvious in seconds yet invisible in paperwork. That clash of cheap telemetry and chaotic records sparked FleetPulse: simple, automated OBD‑II monitoring and consolidated maintenance so small fleets never get blindsided.

User Personas

Detailed profiles of the target users who would benefit most from this product.


Warranty Watcher Wendy

- Age 32–48; service admin/coordinator for mixed-make fleet, 10–40 light-duty
- Suburban HQ; multiple outside shops; owns VIN/recall and warranty paperwork
- Associate degree; ASE C1 exposure; spreadsheet and PDF power user
- Budget impact: targets claim recoveries; reports to owner/ops lead

Background

Former dealership service advisor who learned OEM claim rules the hard way. After joining a small fleet, she clawed back thousands by organizing VIN histories and recalls.

Needs & Pain Points

Needs

1) VIN-linked recalls and warranty eligibility alerts
2) Timestamped, immutable service history evidence
3) One-click claim export bundles

Pain Points

1) Claims denied for missing or mismatched documentation
2) Recalls discovered after repairs already paid
3) Hard to trace root cause across visits

Psychographics

- Lives by receipts; documentation equals recovered dollars
- Trusts data over anecdotes; hates gray areas
- Prepares for audits like clockwork, no surprises

Channels

1) Gmail — daily processing
2) NHTSA recall lookup — verification
3) OEM service portals — bulletins
4) LinkedIn groups — warranty tips
5) Google Drive — docs


Fuel-Saver Felix

- Age 28–45; dispatch/ops lead at courier fleet, 15–30 vans
- Urban routes; fuel spend 20–35% of OPEX; P&L accountable
- Bachelor’s or equivalent; telematics- and coaching-savvy
- Android-first; manages fuel cards and weekly MPG KPIs

Background

Started as a courier, then led a route team. A brutal fuel-price spike forced him to weaponize telematics and maintenance, cutting idle time and fixing low-MPG culprits.

Needs & Pain Points

Needs

1) Real-time high-idle alerts per vehicle
2) Tire-pressure and misfire anomaly flags
3) Route-level fuel economy trend reports

Pain Points

1) Fuel spend spikes without clear cause
2) Drivers ignore idle and tire policies
3) Late detection of MPG-killing engine issues

Psychographics

- Obsessed with efficiency; every gallon must count
- Gamifies savings; motivates drivers with leaderboards
- Data-first decisions; detests idling and waste

Channels

1) SMS — instant alerts
2) Android app — FleetPulse
3) Looker Studio — dashboards
4) Reddit r/fleetmanagement — peer advice
5) LinkedIn feed — industry posts


Insurance Insight Isla

- Age 35–55; owner/manager of 10–50 commercial vehicles
- Works with regional broker; annual premium six figures; loss-sensitive
- Suburban office; light/medium-duty trucks; mixed drivers
- Spreadsheet-comfortable; meets insurer quarterly

Background

Scaled from five trucks to thirty while premiums ballooned. A painful claim denial pushed her to formalize inspections and document every repair.

Needs & Pain Points

Needs

1) Insurer-ready maintenance compliance reports
2) Defect closure proof with timestamps
3) Incident-to-maintenance correlation insights

Pain Points

1) Premium hikes after maintenance-related claims
2) Audit scramble for compliance evidence
3) Disconnected records across shops and units

Psychographics

- Risk-averse operator; seeks predictable, lower premiums
- Values professionalism, documentation, and insurer trust
- Measures safety by maintenance discipline

Channels

1) Gmail — broker communications
2) Zoom — quarterly reviews
3) LinkedIn — insurer insights
4) Google Search — policy info
5) Dropbox — document sharing


Rental-Ready Raj

- Age 27–45; founder of rideshare rental fleet, 20–80 sedans
- Downtown base; high utilization; weekend peaks; rapid turnarounds
- Former driver; bilingual team; multiple external repair partners
- Android-centric; uses Stripe and e-sign contracts

Background

Grew a rideshare rental side-hustle into a fleet. Learned that sloppy handoffs and missing records kill utilization and invite disputes.

Needs & Pain Points

Needs

1) Fast maintenance scheduling between rentals
2) Driver damage attribution with evidence
3) Availability calendar tied to service status

Pain Points

1) Vehicles idle awaiting simple, scheduled services
2) Disputes over driver-caused damage costs
3) Fragmented records across external shops

Psychographics

- Uptime fanatic; empty cars equal lost revenue
- Pragmatic negotiator; balances driver happiness and margins
- Mobile-first operator; moves fast, decides faster

Channels

1) WhatsApp — driver comms
2) Facebook Groups — local drivers
3) Google Calendar — turnarounds
4) Yelp — shop ratings
5) SMS — urgent updates


New-Fleet Navigator Nia

- Age 25–40; trades/service owner scaling 2–12 vans
- On the road daily; limited maintenance knowledge; budget constrained
- iPhone-first; relies on YouTube and Google for answers
- DIY bookkeeping in QuickBooks; first dispatcher soon

Background

Won a big contract and bought vans fast. A roadside breakdown cost her the client, sparking a quest for simple, proactive maintenance.

Needs & Pain Points

Needs

1) Guided setup with checklists and templates
2) Plain-language explanations of diagnostic codes
3) Auto-prioritized to-do list by severity

Pain Points

1) Overwhelmed by jargon and code complexity
2) Forgets intervals amid daily firefighting
3) Unsure which issues to prioritize first

Psychographics

- Curious self-educator; values clear, friendly guidance
- Risk-averse after past breakdown loss
- Seeks simplicity; avoids vendor lock-in

Channels

1) YouTube — maintenance how-tos
2) Google Search — codes explained
3) TikTok — small biz tips
4) Gmail — onboarding
5) Facebook Groups — trades owners


Data-Bridge Devon

- Age 30–50; operations analyst/bookkeeper for 10–100 vehicles
- Hybrid/remote; bridges operations and finance reporting
- Heavy Excel, QuickBooks Online, and BI tool user
- Responsible for monthly close, audits, and data standards

Background

Built the company’s first cost-per-unit model in Excel. Tired of CSV gymnastics, he now standardizes tags and automates reconciliations.

Needs & Pain Points

Needs

1) Stable, well-documented REST API
2) Custom tags and GL code mapping
3) Scheduled CSV exports to S3

Pain Points

1) Manually reconciling exports across systems
2) Inconsistent unit naming and tagging
3) API limits throttling essential workflows

Psychographics

- Systems thinker; demands clean, consistent data
- Automate everything; eliminate repetitive tasks
- Documentation devotee; loves clear API references

Channels

1) QuickBooks Online — accounting
2) Zapier — automations
3) Slack — team channel
4) Stack Overflow — API help
5) GitHub — scripts

Product Features

Key capabilities that make this product valuable to its target users.

Coverage Compass

Instantly checks OEM warranty eligibility using VIN, in‑service date, mileage, and prior repairs, then returns a clear green/yellow/red verdict with reason codes. It maps base, powertrain, emissions, and extended programs so you know exactly what’s covered before you spend time filing—cutting denials and guesswork.

Requirements

VIN Validation & Decode
"As a fleet manager, I want VINs validated and decoded automatically so that coverage checks are accurate and tied to the correct OEM and vehicle configuration."
Description

Validate VIN format and checksum at entry, then decode to year, make, model, trim, engine, emissions family, and OEM program identifiers. Link decoded attributes to the FleetPulse vehicle profile to ensure the correct OEM lookup path and component coverage mapping. Flag invalid or ambiguous VINs with actionable errors and prevent downstream queries that would return inconsistent results. Persist decoded data with source and timestamp to support re-use, auditability, and cross-feature consistency.
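The format and check-digit rules above are concrete enough to sketch. Below is a minimal, illustrative validator assuming the standard ISO 3779 transliteration table and weights (the check digit is strictly enforced on North American VINs); the function name and reason codes mirror the acceptance criteria rather than any confirmed FleetPulse API.

```python
# Minimal sketch of VIN format validation plus the ISO 3779 check digit.
import re

# Standard letter-to-value transliteration (I, O, Q are not valid VIN characters).
TRANSLIT = dict(zip("ABCDEFGHJKLMNPRSTUVWXYZ",
                    [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 7, 9,
                     2, 3, 4, 5, 6, 7, 8, 9]))
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def validate_vin(vin: str) -> dict:
    """Return a validation_status payload with reason_codes, per the criteria above."""
    if len(vin) != 17 or not re.fullmatch(r"[A-HJ-NPR-Z0-9]{17}", vin):
        return {"validation_status": "fail", "reason_codes": ["format_error"]}
    total = sum((int(ch) if ch.isdigit() else TRANSLIT[ch]) * w
                for ch, w in zip(vin, WEIGHTS))
    remainder = total % 11
    expected = "X" if remainder == 10 else str(remainder)
    if vin[8] != expected:  # position 9 holds the check digit
        return {"validation_status": "fail", "reason_codes": ["checksum_failed"]}
    return {"validation_status": "pass", "reason_codes": []}
```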

Acceptance Criteria
Real-time VIN Format and Checksum Validation at Entry
Given a user enters a VIN via UI or API, When the VIN field is submitted or loses focus, Then the system validates 17 characters, uppercase alphanumeric excluding I/O/Q, and returns validation_status.
Given the VIN passes format rules, When the checksum is computed per ISO 3779, Then checksum_status is pass or fail with reason_codes in {format_error, invalid_characters, checksum_failed}.
Given a valid VIN, When validation completes, Then the Decode action is enabled; otherwise it remains disabled and an inline error is displayed.
Given an API request with an invalid VIN, When POST /vin/validate is called, Then the API responds 400 with machine-readable error codes and no decode attempt is made.
Given typical conditions, When validation runs, Then the validation result is returned within 500 ms at p95.
Successful VIN Decode to Core Vehicle Attributes
Given a valid VIN, When a decode is requested, Then the response includes non-null fields: year, make, model, trim, engine_code, engine_displacement, fuel_type.
Given a decode completes, When compared to the reference test set, Then field accuracy is ≥ 99.5% on year/make/model and ≥ 98.0% on trim and engine_code.
Given typical load, When decode is requested, Then p95 end-to-end decode latency is ≤ 2 seconds.
Given the provider returns unknowns, When mapping occurs, Then unknown fields are explicitly flagged as unknown with reason and no default values are applied silently.
Decode Includes Emissions Family and OEM Program Identifiers
Given a valid VIN, When decoding completes, Then the response includes emissions_family_code for applicable model years or not_applicable with reason.
Given a valid VIN, When decoding completes, Then the response includes OEM program identifiers for base, powertrain, emissions, and extended programs when determinable, each with program_code and coverage_version.
Given OEM identifiers cannot be uniquely determined from VIN alone, When decoding completes, Then the response status is ambiguous and required_additional_fields (e.g., in_service_date, mileage, region) are listed; no OEM downstream lookup is initiated.
Link Decoded Attributes to FleetPulse Vehicle Profile
Given a vehicle profile exists for the VIN, When decode succeeds, Then the profile is updated atomically with decoded fields and a last_decoded_at timestamp, and a single canonical record is stored.
Given the same VIN is decoded multiple times, When values are unchanged, Then no duplicate attribute records are created; last_decoded_at is refreshed and profile_version is not incremented.
Given a decode completes, When Coverage Compass requests attributes, Then it retrieves the same canonical record ID and values as stored in the vehicle profile.
Actionable Errors for Invalid or Ambiguous VINs and Query Prevention
Given a VIN fails format or checksum validation, When a decode is requested, Then the system returns 422 with error_code and remediation_message and does not call any downstream OEM services.
Given a VIN decodes to multiple possible trims or engines, When a decode is requested, Then the system returns 409 ambiguous with a list of disambiguation_fields and example values to collect, and the UI presents a guided prompt to capture them.
Given an ambiguous or invalid VIN, When Coverage Compass is invoked, Then the OEM lookup action is disabled and a tooltip explains the required next step.
Persistence, Source, Timestamp, and Auditability
Given a decode completes, When persisted, Then the record stores VIN, decoded_payload, source_provider, provider_version, mapping_version, and a decoded_at timestamp.
Given the same VIN is requested again within 30 days and schema versions match, When a decode is requested, Then the cached record is reused and returned within 150 ms p95 with cache_hit=true.
Given any decoded field changes due to provider updates, When a new decode is stored, Then an audit entry records old_value, new_value, changed_at, changed_by, and change_reason, and at least the last 10 changes remain queryable.
OEM Coverage API Integration & Caching
"As a fleet manager, I want Coverage Compass to automatically query OEM warranty systems using a VIN so that I get up-to-date eligibility without making phone calls or dealer visits."
Description

Integrate with OEM and third-party warranty eligibility APIs to retrieve base, powertrain, emissions, and extended coverage details in real time. Support OAuth/token auth, rate limiting, retries with backoff, and circuit breakers. Implement per-VIN response caching with configurable TTL and regional endpoints to minimize latency and costs. Provide graceful fallbacks for OEMs without APIs via document upload/manual entry with verification. Log requests and responses with PII-safe redaction for observability and dispute resolution.
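The retry policy named here is easy to sketch. Below is a minimal helper under the parameters in the acceptance criteria (up to 3 retries, 500 ms base delay, ±20% jitter, retrying only transient failures); `TransientError` and `call_with_backoff` are hypothetical names, and a real client would also honor rate limits and circuit-breaker state.

```python
# Illustrative exponential backoff with jitter for transient OEM API failures.
import random
import time

class TransientError(Exception):
    """Raised for 5xx responses and network timeouts (retryable)."""

def call_with_backoff(fn, max_retries: int = 3, base_delay: float = 0.5):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except TransientError:
            if attempt == max_retries:
                raise                                  # retries exhausted
            delay = base_delay * (2 ** attempt)        # 0.5 s, 1 s, 2 s, ...
            delay *= random.uniform(0.8, 1.2)          # +/-20% jitter
            time.sleep(delay)
```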

Acceptance Criteria
OAuth/Token Authentication with OEM and Third-Party APIs
Given valid client credentials and a stored refresh token for an OEM API, When the access token is within 2 minutes of expiry, Then the system refreshes the token automatically and uses the new token for subsequent calls.
Given a request is made with an expired or invalid token, When the OEM returns 401 or 403, Then the system performs one token refresh and retries the request once, and if it still fails, returns error code CC-AUTH-401 with a human-readable message.
Tokens and client secrets are never written to logs or analytics; all such fields are masked as **** in 100% of entries.
All tokens at rest are stored in an encrypted secret manager and retrievable only by the Coverage Compass service identity.
Request Throttling and Exponential Backoff Retries
Given OEM rate limits are configured (e.g., 600 rpm), When 1,000 requests are enqueued within 60 seconds, Then the system sends no more than the configured rpm to the OEM and queues/delays the remainder without causing OEM 429 responses.
Given a transient 5xx or network timeout occurs, When invoking an OEM API, Then the system retries up to 3 times with exponential backoff starting at 500 ms with ±20% jitter, and stops retrying on any 4xx other than 408/429.
Given idempotent lookup inputs (VIN, in-service date, mileage, OEM), When a retry occurs, Then the same idempotency key is used so the OEM is not charged twice, and only one external call is observed in OEM logs.
Circuit Breaker Protection per OEM Endpoint
Given an OEM endpoint experiences errors, When the last 20 requests have a failure rate >= 50% and at least 10 requests were attempted, Then the circuit opens for 60 seconds.
While the circuit is open, When a coverage lookup is requested, Then the system serves from cache if a valid entry exists; otherwise it returns error CC-CB-OPEN and does not call the OEM.
After the open interval, When a half-open probe succeeds once, Then the circuit closes and normal traffic resumes; on failure, the circuit re-opens for another 60 seconds.
All circuit state changes emit metrics and structured logs with OEM identifier and trace ID.
Per-VIN Caching with Configurable TTL and Mileage Sensitivity
Given a successful coverage response for a VIN and OEM, When a subsequent request for the same VIN and OEM occurs within the configured TTL (default 24h, configurable 1h–7d) and the reported mileage delta is <= 500 miles, Then the response is served from cache and no OEM call is made.
Given the TTL has expired or the mileage delta > 500 miles, When a coverage request is made, Then the cache is bypassed and refreshed with the latest response, updating the cache entry’s timestamp, mileage, and provenance.
Cache entries are keyed at minimum by VIN and OEM, store reason codes and coverage categories, and include provenance (API source or manual verification).
Cache hit/miss and per-VIN eviction metrics are recorded.
Regional Endpoint Routing and Latency Targets
Given regional OEM endpoints exist (e.g., US, EU), When a request originates from a user in region US, Then the system routes to the US endpoint unless it is marked unhealthy by the health checker.
Given the primary region is unhealthy (2 consecutive connection errors within 30 seconds), When routing traffic, Then the system fails over to the next healthiest region and logs the failover with reason.
For in-region requests with a healthy OEM, the p95 end-to-end lookup latency (excluding user network) is <= 800 ms measured over a rolling 15-minute window.
Non-API OEM Fallback via Document Upload and Manual Entry
Given an OEM is configured as No API available, When a user initiates a coverage check, Then the UI presents options to upload warranty documents (PDF/JPG/PNG up to 10 MB) or enter data manually.
When a document is uploaded, Then OCR extracts VIN, program type, in-service date, mileage, and coverage terms; if any field has confidence < 0.90, the record is flagged for manual verification.
When manual verification is completed, Then the system returns a structured coverage result with the same schema as API responses, including verdict and reason codes, and stores the document as evidence linked by trace ID.
The workflow exposes statuses Draft, Pending Verification, Verified, and Rejected, and only Verified records are eligible for caching.
PII-Safe Logging and Auditability
For every coverage lookup, the system logs request/response metadata and reason codes with a trace ID, redacting access tokens and secrets (never present in logs) and masking VINs to the last 4 characters only.
Raw OEM payloads stored for dispute resolution have PII fields (owner name, address, phone, email) removed or hashed; automated tests verify redaction on a representative sample.
Logs and traces are queryable for 30 days with role-based access control; attempts to access logs without permissions are denied and logged.
Verdict Engine with Reason Codes
"As a service coordinator, I want a clear green/yellow/red decision with reasons so that I know whether to proceed under warranty or plan for out-of-pocket repairs."
Description

Compute a deterministic green/yellow/red coverage verdict by evaluating in-service date, current mileage, time/mileage limits, prior repairs, and program rules. Emit standardized reason codes (e.g., IN_WARRANTY_MILEAGE_OK, EXPIRED_TIME_LIMIT, PRIOR_REPAIR_EXCLUSION) and human-readable explanations. Make thresholds and rule parameters configurable per OEM/program. Expose the verdict and reasons via internal API for UI and alerts, and persist results with inputs for auditability and re-evaluation on data changes.
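As a rough illustration of how such a deterministic verdict could be computed, the sketch below assumes limits and warning windows arrive from per-OEM/program configuration; the `Program` fields and reason codes follow the acceptance criteria, but the function itself is illustrative, not the shipped rule engine.

```python
# Sketch of the green/yellow/red decision order: exclusions, then expirations,
# then warning windows, then in-warranty.
from dataclasses import dataclass
from datetime import date

@dataclass
class Program:
    time_limit_months: int
    mileage_limit_miles: int
    warning_window_months: int = 3     # assumed config values
    warning_window_miles: int = 3000

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def verdict(program: Program, in_service: date, mileage: int,
            today: date, excluded_by_prior_repair: bool = False):
    if excluded_by_prior_repair:
        return "RED", ["PRIOR_REPAIR_EXCLUSION"]
    age = months_between(in_service, today)
    reasons = []
    if age > program.time_limit_months:
        reasons.append("EXPIRED_TIME_LIMIT")
    if mileage > program.mileage_limit_miles:
        reasons.append("EXPIRED_MILEAGE_LIMIT")
    if reasons:
        return "RED", reasons
    if age >= program.time_limit_months - program.warning_window_months:
        reasons.append("REACHING_TIME_LIMIT")
    if mileage >= program.mileage_limit_miles - program.warning_window_miles:
        reasons.append("REACHING_MILEAGE_LIMIT")
    if reasons:
        return "YELLOW", reasons
    return "GREEN", ["IN_WARRANTY_TIME_OK", "IN_WARRANTY_MILEAGE_OK"]
```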

Acceptance Criteria
Deterministic Green/Yellow/Red Verdict Based on Limits
Given a VIN with inServiceDate and currentMileage and a selected program with timeLimitMonths, mileageLimitMiles, and a warningWindow configured, When the vehicle is within both limits and not within the warningWindow, Then the verdict is GREEN and reasonCodes include IN_WARRANTY_TIME_OK and IN_WARRANTY_MILEAGE_OK and explanations describe remaining time and miles.
Given the vehicle is within limits but within the warningWindow for either time or mileage, When the verdict is computed, Then the verdict is YELLOW and reasonCodes include REACHING_TIME_LIMIT or REACHING_MILEAGE_LIMIT with explanations quantifying proximity to the limit.
Given the vehicle exceeds either the timeLimitMonths or mileageLimitMiles, When the verdict is computed, Then the verdict is RED and reasonCodes include EXPIRED_TIME_LIMIT or EXPIRED_MILEAGE_LIMIT with explanations quantifying the overage.
Given a prior repair matches an exclusion rule for the program, When the verdict is computed, Then the verdict is RED and reasonCodes include PRIOR_REPAIR_EXCLUSION with an explanation referencing the prior repair record identifier and date.
Standardized Reason Codes and Explanations Emission
Given a computed verdict with multiple applicable rationales When reason codes are emitted Then reasonCodes is a non-empty array of unique strings matching ^[A-Z][A-Z0-9_]+$ and each appears in the program's reason code registry And explanations is a same-length array of human-readable strings mapped 1:1 to reasonCodes And each explanation mentions the relevant computed values (e.g., days/miles remaining or exceeded) without exposing raw internal IDs except where required for audit And reasonCodes are ordered deterministically by severity (blocking > warning > informative) and then by domain (time before mileage before other) And emitting any reason code not in the registry causes the evaluation to fail validation and not persist
Per-OEM and Per-Program Rule Configurability
Given OEM A Base program with 36 months/36,000 miles and OEM A Powertrain with 60 months/60,000 miles and a program-specific warningWindow, When the same VIN is evaluated against Base and then Powertrain, Then the engine selects the corresponding rule set by evaluatedProgramId and computes different verdicts as dictated by each program's limits.
Given a runtime configuration update for OEM B Emissions program (e.g., warningWindow changed), When a new evaluation occurs after the config version increments, Then the verdict uses the new parameters and the response includes ruleVersion matching the active config.
Given a program has a custom priorRepairExclusionRule, When a matching prior repair is present, Then the exclusion is applied only for that program and reflected in reasonCodes.
Internal API Returns Structured Verdict Response
Given a request with VIN, inServiceDate, currentMileage, priorRepairs[], evaluatedProgramId When POST /internal/coverage/verdict is called with valid payload Then the response status is 200 application/json with fields: evaluationId (UUID), evaluatedAt (ISO-8601), verdict (GREEN|YELLOW|RED), reasonCodes[], explanations[], evaluatedProgramId, ruleVersion, inputsEchoed {VIN, inServiceDate, currentMileage, priorRepairs[]} And reasonCodes[].length equals explanations[].length and is >= 1 And the response is deterministic for identical inputs and ruleVersion And P95 latency for evaluations using cached configs is <= 300 ms
Persistence and Auditability of Evaluations
Given a successful evaluation response When persistence is executed Then a record is stored with evaluationId, VIN, inputs snapshot, verdict, reasonCodes, explanations, evaluatedProgramId, ruleVersion, evaluatedAt And the record is immutable and retrievable by evaluationId and by VIN+evaluatedAt range And the stored inputs are sufficient to recompute the verdict offline and match the stored outcome under the same ruleVersion
Re-evaluation on Data or Rule Changes
Given an existing stored evaluation When any of currentMileage, inServiceDate, priorRepairs, or the applicable program ruleVersion changes Then a new evaluation is triggered or can be invoked, producing a new evaluationId and updated verdict/reasons And the previous evaluation remains intact for audit, with a link to the superseding evaluationId And if the new evaluation changes the verdict color, the statusChange field in the new record reflects the transition (e.g., GREEN->YELLOW, YELLOW->RED)
Coverage Program Mapping (Base/Powertrain/Emissions/Extended)
"As a shop manager, I want coverage programs mapped to standardized categories so that I can quickly see which components are covered under which program."
Description

Normalize disparate OEM warranty terms into a standardized schema covering program type, start/end dates, mileage caps, covered components/systems, and exclusions. Support concurrent programs per vehicle and VIN-specific extensions. Store mappings with versioning to accommodate OEM updates and regional variations. Provide a component-to-program matrix to inform repair estimates and prevent filing for non-covered items.
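A minimal sketch of the normalized schema and the component-to-program matrix, assuming exact component-ID matching and treating boundary values (end date, mileage cap) as covered, as the criteria below specify. Field names are illustrative, and the code assumes Python 3.10+.

```python
# Sketch: normalized program records and a component-to-program coverage matrix.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Program:
    program_type: str                  # Base | Powertrain | Emissions | Extended
    start_date: date
    end_date: date
    mileage_cap: int | None            # None = time-only program
    covered_components: set[str] = field(default_factory=set)
    exclusions: set[str] = field(default_factory=set)

def coverage_matrix(programs: list[Program], components: list[str],
                    event_date: date, odometer: int | None) -> dict:
    def active(p: Program) -> bool:
        in_time = p.start_date <= event_date <= p.end_date       # boundary = covered
        in_miles = (p.mileage_cap is None or odometer is None
                    or odometer <= p.mileage_cap)
        return in_time and in_miles

    matrix = {}
    for c in components:
        covering = [p.program_type for p in programs
                    if active(p) and c in p.covered_components
                    and c not in p.exclusions]
        matrix[c] = {"coveredByPrograms": covering,
                     "coverageStatus": "Covered" if covering else "Uncovered"}
    return matrix
```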

Acceptance Criteria
Standardize OEM Warranty Data into Schema
Given an OEM warranty source record with program details (program name, time limit, mileage limit, components list, exclusions, region) When it is processed by the mapping service Then a normalized Program record is created with fields: programType in {Base, Powertrain, Emissions, Extended}, startDate, endDate, mileageCap, coveredComponents[], coveredSystems[], exclusions[], oemCode, regionCode, sourceReference, schemaVersion And Then mandatory fields (programType, startDate or startBasis, endDate or duration, oemCode) are present and pass schema validation And Then coveredComponents and exclusions reference canonical component IDs; any unmapped terms trigger reasonCode "UNKNOWN_COMPONENT" and the record is rejected And Then the normalized record preserves source-to-target traceability for each normalized field
Support Concurrent Programs Per VIN
Given a VIN with an active Base program and an overlapping Powertrain program When both programs are stored Then the vehicle record holds multiple concurrent Program entries without conflict And Then component coverage queries return the union of covered components with each component annotated by its covering program(s) and respective limits And Then program expirations are evaluated independently such that expiry of one program does not alter the active status of the other And Then duplicates for the same programType and identical effective window are deduplicated by sourceReference; otherwise both versions are retained
Handle VIN-Specific Extensions and Overrides
Given a VIN-specific extended warranty that adds 24 months/24,000 miles coverage for the battery system When the extension is ingested Then a Program record with scope "VIN" is created and linked to the VIN only And Then the component-to-program matrix shows battery components covered by the VIN extension with precedence over base program exclusions for the effective period And Then other VINs of the same model are not affected by this extension And Then removal or expiration of the extension reverts component coverage to the underlying applicable programs
Versioning and Regional Variations
Given an OEM updates Powertrain coverage effective 2025-01-01 for region "US-CA" When the update is received Then a new Program mapping version is created with effectiveFrom 2025-01-01 and regionCode "US-CA" while the prior version remains retrievable And Then coverage lookups for events dated before 2025-01-01 in "US-CA" use the prior version; events on or after 2025-01-01 use the new version And Then coverage lookups for other regions continue to use their region-specific current version And Then an audit log records the creator, change summary, and timestamp for the new version
Component-to-Program Coverage Matrix Output
Given a VIN with multiple programs and defined components When the matrix is requested Then the response includes every canonical component with fields: componentId, coveredByPrograms[], exclusions[], timeLimit, mileageLimit, and coverageStatus in {Covered, PartiallyCovered, Uncovered} And Then any component not covered by any active program is marked Uncovered with reasonCode "NO_COVERAGE_MATCH" And Then any component explicitly excluded by all applicable programs is marked Uncovered with reasonCode "EXCLUDED_BY_PROGRAM" And Then the matrix response is returned within 300 ms for up to 500 components per VIN at the 95th percentile
Eligibility Boundary Calculations
Given a program with startDate S, endDate E, and mileageCap C When evaluating coverage at eventDate D and odometer M Then coverage is Active if S <= D <= E and (C is null or M <= C); otherwise Inactive with reasonCode "TIME_EXPIRED" or "MILEAGE_EXCEEDED" And Then programs with time-only limits (no mileageCap) evaluate solely on dates; mileage-only programs (no endDate) evaluate solely on mileage where permissible And Then boundary values on exactly endDate E or exactly at mileageCap C are treated as covered And Then missing odometer M results in time-only evaluation and a reasonCode "MILEAGE_UNKNOWN" included in the decision
Mileage & In-Service Reconciliation
"As an owner-operator, I want the system to use the most accurate mileage and in-service date so that coverage decisions reflect my vehicle’s true eligibility."
Description

Ingest mileage from OBD-II telemetry, odometer photos/manual entries, and last service records; detect staleness and conflicts; and select a canonical mileage with provenance. Determine in-service date from OEM records or sales data and allow verified overrides with documentation. Trigger automatic re-evaluation of coverage when mileage or in-service date updates cross thresholds. Surface confidence and freshness indicators to the UI and alerts.
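The source-priority rule described here can be sketched directly. The staleness thresholds below (48 hours for OBD-II, 30 days for manual entries, 90 days for service records) are taken from the acceptance criteria; the class and function names are illustrative.

```python
# Sketch: pick the canonical mileage with provenance-aware priority.
from dataclasses import dataclass
from datetime import datetime, timedelta

STALENESS = {"obd": timedelta(hours=48),
             "manual": timedelta(days=30),
             "service": timedelta(days=90)}

@dataclass
class MileageReading:
    source: str            # "obd" | "manual" | "service"
    miles: int
    observed_at: datetime

def canonical_mileage(readings: list[MileageReading], now: datetime) -> MileageReading:
    """Prefer fresh OBD-II, then the most recent non-stale source; if everything
    is stale, take the highest value so the odometer never rolls backward."""
    fresh = [r for r in readings if now - r.observed_at <= STALENESS[r.source]]
    obd = [r for r in fresh if r.source == "obd"]
    if obd:
        return max(obd, key=lambda r: r.observed_at)
    if fresh:
        return max(fresh, key=lambda r: r.observed_at)
    return max(readings, key=lambda r: r.miles)
```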

Acceptance Criteria
Resolve Canonical Mileage from Multiple Sources
Given OBD-II telemetry mileage, odometer photo/manual entries, and last service record mileage are available When a new mileage datapoint is ingested or any existing mileage source is updated Then the system updates the canonical mileage within 10 seconds using this priority: non-stale OBD-II > most recent non-stale source > highest value if all are stale, to prevent rollback And the canonical mileage includes value, unit, source type, source timestamp, and provenance ID And an audit log entry records previous canonical value, new canonical value, chosen source, comparator sources, user/system actor, and timestamp
Mileage Staleness Detection and Freshness Indicators
Given source-specific freshness rules When evaluating mileage sources Then mark OBD-II mileage as stale if last valid OBD mileage sample is older than 48 hours And mark manual/photo mileage as stale if older than 30 days And mark last service record mileage as stale if older than 90 days And expose per-source freshness_age_hours and stale=true/false in the API response And display a freshness badge in the UI for each source and for the canonical mileage derived from them
Conflict Detection and Mileage Tie-Breaking
Given at least two non-stale mileage sources are present When the absolute difference between any two non-stale sources exceeds max(250 miles, 2% of the canonical candidate) Then flag mileage_conflict=true with reason code MILEAGE_CONFLICT_DELTA_EXCEEDED And downgrade confidence to Low And apply tie-breaking: prefer OBD-II if its sample is <=24 hours old and not stale; otherwise prefer the most recent non-stale source; if still tied, select the highest value to avoid rollback And emit a conflict event for alerting and audit
In-Service Date Determination and Verified Override
Given a VIN with potential OEM and sales data When determining in-service date Then set source=OEM if an OEM-recorded in-service date exists; else set source=Sales if a dealership/sales record date exists; else leave unset until user action And allow override only to users with roles Fleet Admin or Service Manager And require a supporting document (PDF/JPG/PNG) on override; block save without it And record override metadata: previous date, new date, user, role, document ID, reason, timestamp And mark override_verified=true and source=Override if documentation is present
Automatic Coverage Re-evaluation on Threshold Crossing
Given configured warranty programs (base, powertrain, emissions, extended) with mileage and time limits When canonical mileage or in-service date updates cause a program boundary to be crossed or a reason code to change Then recompute Coverage Compass verdict and reason codes within 5 seconds And persist the new verdict with the triggering threshold (e.g., BASE_MILES_36000, POWERTRAIN_YEARS_5) And emit an event coverage_reevaluated with old/new verdicts and reasons And send an in-app alert when the verdict color changes (green/yellow/red)
Surface Confidence, Provenance, and Freshness in UI and API
Given canonical mileage and in-service date are available When returning data via API and rendering the vehicle detail screen Then include confidence_level High/Medium/Low computed as: High if ≥2 non-stale sources within 100 miles and freshest ≤24h; Medium if no conflicts and freshest ≤7d; Low otherwise And include for each: value/date, source, source_timestamp, freshness_age_hours, stale flag, and provenance/document IDs And show canonical mileage and in-service date badges for confidence and freshness in the UI And ensure tooltips/details expose reason codes when confidence is Medium/Low or conflicts exist
Prior Repair Exclusions Logic
"As a warranty coordinator, I want prior repairs considered automatically so that I avoid filing claims likely to be denied due to exclusions."
Description

Ingest and normalize repair history from FleetPulse maintenance records, uploaded invoices, and partner feeds to identify prior paid warranty claims or repairs that may influence eligibility. Apply OEM-specific exclusion rules (e.g., component replaced outside OEM network) and distinguish goodwill or recall work. De-duplicate events and link documentation to support appeals. Feed the outcome into the verdict engine as structured signals.
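As an illustration of the de-duplication rule, the sketch below groups events that share VIN and RO/reference with dates within ±2 days; the record fields are hypothetical, and production matching would add the fuzzy provider/component comparison described in the criteria.

```python
# Sketch: cluster repair events so each real-world repair yields one canonical event.
from dataclasses import dataclass
from datetime import date

@dataclass
class RepairEvent:
    vin: str
    ro_reference: str
    repair_date: date
    source_event_id: str

def is_duplicate(a: RepairEvent, b: RepairEvent) -> bool:
    return (a.vin.upper() == b.vin.upper()
            and a.ro_reference.strip().upper() == b.ro_reference.strip().upper()
            and abs((a.repair_date - b.repair_date).days) <= 2)

def merge_duplicates(events: list[RepairEvent]) -> list[list[RepairEvent]]:
    """Each returned cluster becomes one canonical event with source lineage."""
    clusters: list[list[RepairEvent]] = []
    for e in events:
        for cluster in clusters:
            if is_duplicate(e, cluster[0]):
                cluster.append(e)
                break
        else:
            clusters.append([e])
    return clusters
```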

Acceptance Criteria
Consolidate and De-duplicate Repair Events from Multiple Sources
Given a vehicle VIN appears in FleetPulse maintenance records, uploaded invoices, and partner feeds And the records share overlapping component, date, mileage, and RO/reference identifiers When the ingestion and normalization job runs Then the system normalizes each record to the canonical repair-event schema And merges duplicates into a single canonical event using deterministic keys (VIN + RO/reference + date within ±2 days) and fuzzy matching on provider and component And preserves source lineage with source_event_ids, merge_confidence, and merged_source_count fields And emits no more than one canonical event per real-world repair
Classify Payment Type and Work Category for Prior Repairs
Given an ingested repair event with payment lines, warranty claim indicators, and program names When the classification service evaluates the event Then it sets payment_type to one of {OEM_WARRANTY, EXTENDED_WARRANTY, CUSTOMER_PAY, RECALL, GOODWILL, UNKNOWN} And sets work_category to one of {BASE, POWERTRAIN, EMISSIONS, OTHER} And populates classifier_confidence between 0 and 1 And stores raw evidence fields used for classification
Apply OEM-Specific Exclusion Rules for Non-Network Repairs
Given OEM rule configuration for the vehicle brand is available And a prior repair replaced the same covered component outside the OEM network When exclusion rules are evaluated Then prior_repair_exclusion is set to true And reason_codes includes NON_OEM_REPLACEMENT And oem_applied is set to the matching OEM And rule_version is recorded
Exempt Recall and Goodwill Work from Exclusions
Given a prior repair is classified as RECALL or GOODWILL When exclusion rules are evaluated Then prior_repair_exclusion is set to false And reason_codes includes EXEMPT_RECALL or EXEMPT_GOODWILL as applicable And exemption_explanation is populated for audit
Time and Mileage Eligibility Context for Exclusions
Given vehicle in-service date and warranty program limits are known And a prior repair event has a repair_date and mileage_at_service When eligibility context is computed Then the system determines whether the prior repair occurred within or outside the warranty coverage window And records coverage_context as WITHIN_WINDOW or OUTSIDE_WINDOW And only applies exclusion if the prior repair occurred within the relevant coverage window per OEM rules
Link Evidence Documents to Canonical Events
Given invoices, claim PDFs, images, or partner artifacts are available for a canonical event When normalization completes Then the canonical event includes an evidence array with document_type, issued_date, source, checksum, and secure_url for each artifact And evidence_count is the number of linked documents And all secure_url links are accessible via authorized token and return HTTP 200
Emit Structured Signals to Verdict Engine with Quality Safeguards
Given exclusion evaluation completes for a VIN When signals are emitted to the verdict engine Then the payload conforms to the contract schema version v1.0 including fields: prior_repair_exclusion, reason_codes[], payment_type, work_category, coverage_context, classifier_confidence, evidence_count, lineage metadata, rule_version, data_quality And payload passes schema validation and is timestamped And if data_quality is below threshold or critical fields are missing, set verdict_hint=YELLOW and do not set prior_repair_exclusion=true
Coverage UI Badges & Drill-down
"As a fleet manager, I want an at-a-glance badge with a detailed breakdown on click so that I can make fast, informed repair decisions."
Description

Display a prominent green/yellow/red Coverage Compass badge on vehicle and work-order views, with tooltips summarizing the verdict and primary reason codes. Provide a drill-down panel showing program timelines, remaining months/miles, covered components, prior-repair impacts, and data freshness. Include export/print of a coverage summary and deep links to supporting documents. Ensure mobile responsiveness and accessibility compliance.

Acceptance Criteria
Coverage Badge Verdict and Tooltip on Vehicle and Work-Order Views
Given a vehicle or work order has a valid VIN with a computed coverage verdict and reason codes When the vehicle details view or work-order view loads Then a Coverage Compass badge is displayed prominently above the fold on both views And the badge color maps to the verdict: Green = Eligible, Yellow = Conditional, Red = Not Covered And the badge includes a visible text label of the verdict (not color-only) And hovering (pointer) or tapping (touch) reveals a tooltip showing the verdict and up to the top 3 primary reason codes plus a "View details" control And activating the badge or "View details" opens the coverage drill-down panel
Coverage Drill-down Panel Completeness and Data Accuracy
Given the user opens the coverage drill-down from the badge When the panel renders Then it displays sections for: Program timelines (Base, Powertrain, Emissions, Extended), Remaining coverage (months and miles), Covered components, Prior-repair impacts, and Data freshness And each program shows in-service date, end date, max miles/months, and remaining miles/months computed from the current recorded odometer and in-service date And Covered components lists inclusions and clearly marks exclusions where applicable And Prior-repair impacts list repair date, component, and its effect on coverage eligibility And Data freshness shows a Last updated timestamp in the user’s timezone and a relative time (e.g., "2 days ago") And all displayed values match the backend API response for the same VIN and mileage used to compute the verdict
Export and Print Coverage Summary
Given the coverage drill-down panel is open When the user selects Export Then a downloadable PDF is generated containing: VIN, in-service date, current mileage used, verdict (text and color indicator), primary reason codes, program timelines, remaining months/miles, covered components, prior-repair impacts, data freshness timestamp, and deep links to supporting documents And the PDF preserves clickable links and includes a generation timestamp And when the user selects Print Then a print-optimized view renders the same information without truncation on Letter and A4 page sizes, with page numbers and generation timestamp shown and non-essential chrome hidden
Deep Links to Supporting Documents
Given supporting document links are available for the VIN or program When the user activates a deep link from the drill-down or exported PDF Then the link opens in a new browser tab to the target HTTPS URL And the link text is descriptive (not just a raw URL) and indicates file type/size when known And if the link target is unreachable (HTTP 4xx/5xx) Then a non-blocking error message is shown and the link remains available for retry
Mobile Responsiveness for Coverage UI
Given a device with a viewport width between 320px and 767px When loading the vehicle or work-order view Then the Coverage badge remains visible without horizontal scrolling and the text label wraps without truncation of the verdict And the tooltip content is accessible via tap and dismissible via tap outside or a close control And the drill-down opens as a full-screen overlay with collapsible sections and no horizontal scrolling And all interactive controls in the coverage UI have touch targets at least 44x44px with at least 8px spacing
Accessibility Compliance for Coverage Indicators and Panel
Given keyboard-only and screen reader users interact with the coverage UI When navigating the badge, tooltip, and drill-down panel Then focus order follows reading order; the badge, "View details", and panel controls are reachable; Enter/Space activates; Escape closes the panel; focus returns to the trigger And screen readers announce the badge as "Coverage status: [verdict]" with appropriate roles/states; tooltips are reachable via focus; panel sections have programmatic names and headings And verdict is conveyed with text/icon in addition to color; text and badge contrast ratios meet WCAG 2.1 AA; link text is descriptive; non-text elements have accessible names

Proof Packager

Auto-assembles a complete, OEM-ready claim packet: DTC timelines, inspection photos, technician notes, service history, mileage logs, and receipts—ordered chronologically and matched to required forms. One click exports a clean PDF or portal bundle, reducing back‑and‑forth and speeding approvals.

Requirements

DTC Timeline Compiler
"As a fleet manager, I want an auto-generated DTC and anomaly timeline per vehicle and claim so that I can evidence fault progression and reduce back-and-forth with OEM reviewers."
Description

Automatically compiles a unified timeline of Diagnostic Trouble Codes and anomaly events (engine, battery, brake) from FleetPulse OBD-II streams, correlating them with mileage logs, inspections, and service history. Normalizes timestamps and time zones, deduplicates repeated codes, and captures key states (first occurrence, intermittent, cleared, recurrence). Outputs structured data for form population and a human-readable narrative section for the packet, ensuring reviewers see clear fault progression tied to vehicle identity (VIN) and mileage at each event.
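A minimal sketch of the de-duplication step, assuming events arrive pre-sorted by UTC timestamp. It collapses adjacent same-code events within a 5-minute window and flags first occurrences versus recurrences; the 10-minute recurrence rule and ECU-clear handling in the criteria are omitted for brevity.

```python
# Sketch: collapse repeated DTC occurrences into single events with counts.
from datetime import timedelta

DEDUP_WINDOW = timedelta(minutes=5)

def collapse_dtc_events(events: list[dict]) -> list[dict]:
    out: list[dict] = []
    for e in events:  # each: {"code": str, "timestamp_utc": datetime, "source": str}
        last = out[-1] if out else None
        if (last is not None and last["code"] == e["code"]
                and e["timestamp_utc"] - last["timestamp_utc"] <= DEDUP_WINDOW):
            last["occurrences_count"] += 1
            last["sources"].add(e["source"])
            continue
        seen_before = any(x["code"] == e["code"] for x in out)
        out.append({"code": e["code"],
                    "timestamp_utc": e["timestamp_utc"],
                    "occurrences_count": 1,
                    "sources": {e["source"]},
                    "state": "recurrence" if seen_before else "first_occurrence"})
    return out
```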

Acceptance Criteria
Normalize and Order Cross-Time-Zone Events
Given OBD-II DTC and anomaly events with timestamps from multiple sources and time zones When the compiler processes a request Then each output event includes timestamp_utc in ISO 8601 with 'Z' And includes timezone_offset_minutes and timestamp_source And events are strictly ordered by timestamp_utc ascending, with ties broken by source_priority then source_id And any input event lacking timezone info is normalized using the vehicle’s last-known GPS offset or account default and marked tz_assumed=true
DTC De-duplication with State Transitions
Given a stream containing repeated instances of the same DTC code, ECU clear messages, and intermittent occurrences within a 24-hour window When the compiler aggregates events Then identical DTC occurrences within a 5-minute dedup_window are collapsed into a single event with occurrences_count and sources listed And the first observed instance is flagged state="first_occurrence" And a code that disappears for ≥10 minutes and reappears is flagged state="recurrence" And an explicit ECU clear produces an event state="cleared" with cleared_at set And a code toggling present/absent within 24 hours without ECU clear is flagged state="intermittent"
Mileage Correlation at Each Event
Given odometer readings and mileage logs with timestamps When compiling the timeline Then each event is assigned odometer_miles using the nearest reading within ±15 minutes or linear interpolation between the nearest bracketing readings And values are rounded to 0.1 mile And if no reading within ±15 minutes and no brackets exist, odometer_miles is null and missing_mileage=true And any negative or non-monotonic trend triggers data_quality="mileage_anomaly" on affected events
Vehicle Isolation by VIN
Given an account with multiple vehicles and a compile request specifying a VIN When the compiler runs Then only events whose vin exactly matches the request are included And the output header contains vin, make, model, and year And if no events exist for the VIN in the date range, the timeline is empty with reason="no_events_in_range" and http_status=200 And if the VIN is unknown to the account, the response is an empty timeline with reason="vin_not_found" and http_status=404
Date Range and Component Filters
Given a compile request with a date_range and component filters {engine, battery, brake} and DTC families {P,B,C,U} When the compiler filters inputs Then only events within the inclusive [start,end] window and matching at least one selected component and family are included And totals for included_count and excluded_count are returned with excluded_count broken down by {out_of_range, component_mismatch, family_mismatch} And if no date_range is provided, the default window is the last 90 days from now
Dual Output: Structured Schema and Narrative
Given a successful compile When outputs are generated Then a JSON document conforming to schema_version "1.0" is returned and validates against the published JSON Schema And a human-readable narrative is produced that lists events chronologically with {date_local, code, description, state, mileage} And both outputs reference the same event_ids And when ingested by Proof Packager, the narrative renders without formatting errors and the total event count matches the JSON
Data Gaps and Conflict Flagging
Given missing, conflicting, or out-of-order source data When the compiler detects anomalies Then it does not fabricate timestamps or mileage And it appends data_quality flags from {clock_drift_detected, duplicate_event, inconsistent_timezone, missing_mileage, source_gap} to affected events And the narrative includes a "Data gaps and notes" section summarizing flags with counts And any inter-source clock drift >120 seconds is corrected by aligning to the majority source and flagged clock_drift_detected=true
OEM Template Manager
"As an operations admin, I want configurable OEM claim templates with mapped data fields so that packets always meet each manufacturer’s requirements without manual editing."
Description

Provides a versioned library of OEM-specific claim templates and mapping rules that connect FleetPulse data fields (VIN, mileage, DTCs, labor hours, receipts) to required form fields. Supports conditional logic by component/system and warranty type, required vs optional fields, field transformations (units, date formats), and per-OEM validation rules. Includes an admin UI for updating templates without code, rule testing, and preview to ensure packets always conform to current OEM requirements.
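The resolution rule (exactly one Published template whose conditions match and whose effective window covers the claim date) can be sketched as below; the `Template` shape and error strings echo the FP-TPL codes in the acceptance criteria but are otherwise hypothetical.

```python
# Sketch: resolve the applicable OEM template for a claim context.
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class Template:
    template_id: str
    version: int
    status: str                       # Draft | Published | Archived
    effective_start: date
    effective_end: date
    matches: Callable[[dict], bool]   # conditional logic over the claim context

def resolve_template(templates: list[Template], ctx: dict,
                     claim_date: date) -> Template:
    candidates = [t for t in templates
                  if t.status == "Published"
                  and t.effective_start <= claim_date <= t.effective_end
                  and t.matches(ctx)]
    if not candidates:
        raise LookupError("FP-TPL-001: No applicable template found")
    ids = {t.template_id for t in candidates}
    if len(ids) > 1:
        raise LookupError(f"FP-TPL-002: Conflicting templates: {sorted(ids)}")
    return max(candidates, key=lambda t: t.version)  # latest Published version
```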

Acceptance Criteria
Selects Correct OEM Template by OEM, Component, and Warranty Type
Given a library containing multiple OEM templates with conditional rules for OEM, component/system, and warranty type And at least one template is Published and Active And a claim context with vehicle OEM, selected component/system, and warrantyType provided When the template manager resolves a template for the claim context Then exactly one Published template whose conditions evaluate to true is selected And the selected template identifier and version are returned to the caller And if zero templates match, the resolver returns error code FP-TPL-001 with message 'No applicable template found' and logs the claim context And if more than one template matches, the resolver returns error code FP-TPL-002 with the list of conflicting template IDs and blocks further processing
Enforces Versioning and Effective Dates
Given multiple versions of an OEM template exist with statuses Draft, Published, or Archived and effectiveStart/effectiveEnd dates configured And a claimDate is present in the claim context When resolving the template for the claim Then the system selects the latest Published version whose effective window includes the claimDate And versions with status Archived or Draft are never selected for runtime use And if the claimDate is outside all Published effective windows, resolution fails with FP-TPL-003 'No version effective on claim date' And the returned metadata includes templateId, version number, status, and effective window
Maps FleetPulse Fields to OEM Form Fields with Transformations
Given a Published template defines field mappings from FleetPulse data (VIN, mileage, DTC timeline, laborHours, receipts) to OEM form fields And the template specifies transformations (units, date format, string casing, numeric rounding) When generating a preview payload using a complete sample dataset Then VIN maps to the OEM field with exactly 17 characters and passes the OEM VIN regex if configured And mileage is converted and formatted per template settings (e.g., miles→kilometers using the configured factor, with defined rounding) And dates are output in the configured format (e.g., YYYY-MM-DD) and timezone per template And DTC timeline is sorted chronologically ascending and includes code, description, and timestamp per mapping And laborHours and receipt totals are formatted as numeric values per OEM rules And unmapped fields remain absent, and mapped fields with null source values render as empty only if marked optional; otherwise validation flags them
Validates Required vs Optional Fields and Conditional Logic
Given a Published template defines required and optional fields with conditional expressions (e.g., required when component == 'engine' and warrantyType == 'powertrain') And per-OEM validation rules (regex patterns, enumerations, min/max lengths, numeric ranges) are configured When validating a populated payload against the template Then all required fields for which conditions evaluate to true must be present and non-empty And optional fields may be empty without causing failure And per-field validation errors are aggregated and returned with field paths, rule names, and messages And the validation result includes a pass/fail boolean and counts of errors and warnings And failure blocks publish/export with error code FP-TPL-004
Admin UI Allows No-Code Template Edit, Test, and Preview
Given a user with the Template Admin role When the user creates or edits a template in the admin UI Then the user can define mappings, conditional rules, transformations, and validations without writing code And the user can save changes as Draft with version increment And the user can run a built-in Rule Tester by supplying sample claim context and data to see which conditions match and which fields populate, with pass/fail results And the user can generate a visual preview (PDF layout and/or portal payload view) from the Draft And publishing a Draft to Published requires a confirmation step and succeeds only if validation passes And non-admin users cannot create, edit, publish, or archive templates (access denied with HTTP 403)
Pre-Publish Linting and Per-OEM Validation Rules
Given a Draft template ready for publish And per-OEM validators are configured (enumerations, field length limits, attachment requirements, date rules) When the admin initiates Publish Then the system runs a lint/validation suite against the template and a required minimal sample dataset And errors (e.g., invalid enum, max length exceeded, missing required attachment) block publish and list offending fields with rule references And warnings (e.g., deprecated field use) do not block publish but require explicit confirmation to proceed And the publish action returns a summary with counts of errors and warnings and a success/failure flag And on success the template status changes to Published and becomes available for runtime resolution
Audit Trail and Rollback of Template Versions
Given templates undergo create, edit, publish, and archive operations When any change is saved or a status transition occurs Then an audit record is persisted with userId, timestamp, action, templateId, version, and a diff of rules/mappings And admins can view the full version history and change diff in the UI And admins can perform a rollback which creates a new version that restores the selected version's rules and marks it as Draft And upon publishing the rollback version, it becomes the active Published version for resolution And all audit records remain immutable and exportable as JSON/CSV
Evidence Ingestion & OCR
"As a technician, I want to snap photos and upload receipts that are auto-read and tagged so that the packet contains complete, searchable evidence without extra paperwork."
Description

Ingests inspection photos, technician notes, and receipts from web and mobile, applying file validation, virus scanning, and metadata capture (timestamp, user, vehicle, GPS/EXIF). Performs OCR on receipts and handwritten notes to extract vendors, parts, totals, taxes, and dates, normalizing currencies and units. Auto-rotates and compresses images, supports redaction of PII, and stores originals plus derived text in secure object storage with searchable metadata linked to the claim.
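Two of the intake rules below (SHA-256 checksum capture and per-claim duplicate detection) are simple to sketch; the function is illustrative, with virus scanning and object storage reduced to a placeholder comment.

```python
# Sketch: checksum-based duplicate detection at upload time.
import hashlib

def ingest(data: bytes, claim_checksums: set[str]) -> dict:
    digest = hashlib.sha256(data).hexdigest()
    if digest in claim_checksums:
        # Link the existing object instead of storing the file twice.
        return {"status": "duplicate", "checksum": digest}
    claim_checksums.add(digest)
    # ...virus scan, MIME sniffing, and object storage would happen here...
    return {"status": "accepted", "checksum": digest, "size_bytes": len(data)}
```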

Acceptance Criteria
Web Upload: File Validation & Virus Scan
Given I am an authenticated user on the web uploader, When I select files (jpg, jpeg, png, pdf, heic) up to 25 MB each and at most 100 files per batch, Then the system accepts the files and begins upload with a visible progress indicator per file.
Given a file is received by the server, When antivirus scanning completes, Then only clean files are persisted; infected files are quarantined, rejected with the error "File failed virus scan," and are not stored nor linked to any claim.
Given a file completes upload, When validation runs, Then the system records checksum (SHA-256), MIME type, and byte size, and rejects mismatched extensions or corrupted files with actionable error codes.
Given a user attempts to upload a duplicate file within the same claim, When the checksum matches an existing object, Then the system prevents duplicate storage, links the existing object, and informs the user "Duplicate detected".
Given any validation or scanning failure occurs, When the batch contains mixed results, Then successful files remain, failed files are listed with per-file reasons, and the user can retry only the failed files.
Mobile Upload: EXIF/GPS Metadata Capture
Given I capture or select a photo in the mobile app with location permissions granted When I upload it to a specific claim and vehicle Then the system captures and stores EXIF (timestamp, orientation), GPS lat/long with accuracy if present, uploader user ID, vehicle ID, and claim ID Given device time zone is not UTC When metadata is stored Then the capture timestamp is converted and stored in UTC with the original local time zone offset preserved in metadata Given GPS is unavailable or denied When the photo is uploaded Then the system flags gps_status="missing", stores EXIF as available, and does not block the upload Given EXIF orientation indicates rotation When the photo is processed Then the derived display/OCR image is correctly oriented while the original is preserved unmodified
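A small sketch of the UTC conversion rule above, assuming EXIF timestamps arrive as ISO-8601 strings with an explicit offset; the returned field names are assumptions:

```python
from datetime import datetime, timezone

def normalize_capture_time(exif_timestamp: str) -> dict:
    """Store the capture time in UTC while preserving the original local offset."""
    local = datetime.fromisoformat(exif_timestamp)  # e.g. "2025-07-10T14:05:00-04:00"
    if local.tzinfo is None:
        # No offset in EXIF: store as-is and flag, rather than guessing a zone.
        return {"captured_at_utc": None, "tz_status": "missing"}
    return {
        "captured_at_utc": local.astimezone(timezone.utc).isoformat(),
        "original_offset_minutes": int(local.utcoffset().total_seconds() // 60),
    }
```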
OCR Extraction: Receipts and Handwritten Notes
Given a receipt or handwritten technician note image or PDF is ingested When OCR and handwriting recognition run Then the system extracts and returns a structured payload containing vendor_name, date, currency_code/symbol, line_items (description, quantity, unit, unit_price, line_total), subtotal, tax, total, and detected invoice/receipt number if present Given OCR completes When confidence scores are calculated Then totals and dates have confidence >= 0.95, vendor_name >= 0.90, and any field below threshold is flagged requires_review=true Given a multi-page PDF receipt is uploaded When OCR runs Then text is extracted from all pages in order and associated page numbers are maintained Given a rotated or skewed image up to ±15 degrees When OCR runs Then the system deskews automatically and extracts fields with the same accuracy thresholds Given the user corrects any OCR field via UI When the correction is saved Then the system stores the corrected value with an audit trail (who, when, before/after), and updates the structured payload and downstream indexes
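The structured payload and confidence flagging might look like the following sketch; the threshold values mirror the criteria above, while any field names beyond those listed are assumptions:

```python
# Thresholds from the criteria: totals/dates >= 0.95, vendor_name >= 0.90.
CONFIDENCE_THRESHOLDS = {"total": 0.95, "date": 0.95, "vendor_name": 0.90}

def flag_low_confidence(payload: dict, confidences: dict) -> dict:
    """Mark the payload requires_review when any field misses its threshold."""
    payload["requires_review"] = any(
        confidences.get(field, 0.0) < minimum
        for field, minimum in CONFIDENCE_THRESHOLDS.items()
    )
    return payload

receipt = {
    "vendor_name": "ACME Auto Parts",
    "date": "2025-07-10",
    "currency_code": "USD",
    "line_items": [{"description": "Alternator", "quantity": 1, "unit": "pcs",
                    "unit_price": 189.99, "line_total": 189.99}],
    "subtotal": 189.99, "tax": 15.20, "total": 205.19,
}
flag_low_confidence(receipt, {"total": 0.97, "date": 0.96, "vendor_name": 0.88})
# vendor_name is below 0.90, so requires_review is True
```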
Currency and Units Normalization
Given OCR extracted monetary amounts and detected currency When normalization runs Then amounts are converted to the fleet base currency configured for the account using the exchange rate on the receipt date; both original and normalized amounts are stored with rate source and timestamp Given line items include quantities and units (e.g., mi, km, gal, L, pcs) When normalization runs Then units are standardized to account defaults (e.g., miles, gallons) with unit conversion factors recorded; original units are preserved alongside normalized values Given totals and taxes are present When normalization completes Then subtotal + tax equals total within ±0.01 in base currency; discrepancies are flagged for review Given the receipt date is missing but a transaction date exists within OCR text When normalization requires a date Then the earliest valid date in the document is used; if none found, the upload is marked requires_review=true
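A hedged sketch of unit and totals normalization under the ±0.01 tolerance above; the conversion factors and field names are illustrative:

```python
UNIT_FACTORS = {("km", "mi"): 0.621371, ("L", "gal"): 0.264172}  # illustrative

def normalize_quantity(value: float, unit: str, target: str) -> dict:
    """Convert to the account default unit, preserving the original value."""
    factor = 1.0 if unit == target else UNIT_FACTORS[(unit, target)]
    return {"original": value, "original_unit": unit,
            "normalized": value * factor, "unit": target, "factor": factor}

def totals_coherent(subtotal: float, tax: float, total: float) -> bool:
    """True when subtotal + tax matches total within ±0.01 in base currency."""
    return abs(subtotal + tax - total) <= 0.01
```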
Image Processing: Auto-Rotate, Compress, and Preserve Originals
Given an image with EXIF orientation or oversized dimensions (>4096 px on the longest edge) When processing runs Then a derived image is auto-rotated and resized to a max 4096 px longest edge and compressed to <= 2 MB while preserving legibility for OCR (SSIM >= 0.98 vs the auto-rotated original) Given any image is processed When storage occurs Then the original binary is stored unmodified; all derivatives are stored separately with a derivation manifest including processing steps, parameters, and parent checksum Given HEIC or WEBP images are uploaded When derivatives are created Then a JPEG derivative (quality >= 85) is produced for compatibility, with color profile converted to sRGB Given the same original is reprocessed (e.g., new settings) When processing runs again Then the process is idempotent with versioned derivatives; existing derivatives are not overwritten without version increment Given a typical 12 MP photo When processing runs under nominal load Then P95 end-to-end processing time (rotate + compress + OCR kickoff) is <= 3 seconds
Redaction of PII Prior to Storage and Export
Given an ingested document contains PII (names, phone numbers, email addresses, street addresses, credit card PAN beyond last 4) When automatic PII detection runs Then detected PII regions are suggested with bounding boxes and redaction types; precision >= 0.90 and recall >= 0.85 on validation set Given a user with redaction permission reviews a document When the user applies manual redaction boxes Then the redactions are applied to all derivatives and exports; the original remains unaltered; OCR text is updated to mask redacted tokens Given a redacted document is exported in a claim packet When the Proof Packager runs Then no redacted PII is visible in the PDF or portal bundle; redaction audit (who, when, what) is included in the claim metadata but not in the visible packet Given a user lacks redaction permission When they attempt to view redacted originals Then access is denied; only redacted derivatives are accessible
Storage and Search: Link Originals and Derived Text to Claim
Given a clean, validated file and its OCR payload When storage runs Then the system stores objects in encrypted storage (AES-256 at rest, TLS 1.2+ in transit), assigns a unique immutable object ID, and links it to the claim ID and vehicle ID with referential integrity Given metadata is captured When indexing completes Then the following fields are searchable by exact and partial match: vendor_name, date range, user, vehicle, claim ID, GPS radius, file type, invoice/receipt number, and total amount; P95 search latency <= 500 ms on datasets up to 1M documents Given a delete is requested by an authorized user When deletion is performed Then a soft-delete flag is set, objects are hidden from default results, a retention policy of 30 days is enforced before hard delete, and all accesses are logged Given an external system requests a secure download When a signed URL is generated Then the URL is scoped to the specific object, expires within 15 minutes by default, and honors claim-level access controls Given ingestion finishes for a batch When all items have statuses Then an event "claim.evidence.ingested" is emitted with counts by type, failures, and searchable IDs for downstream Proof Packager assembly
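One plausible shape for the scoped, expiring download URL, sketched with a generic HMAC signature rather than any specific object store's presigned-URL API; the host and secret are placeholders:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"replace-with-vaulted-key"  # placeholder; real key lives in the vault

def signed_download_url(object_id: str, ttl_seconds: int = 900) -> str:
    """Build a URL scoped to one object that expires in 15 minutes by default."""
    expires = int(time.time()) + ttl_seconds
    message = f"{object_id}:{expires}".encode()
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    query = urlencode({"expires": expires, "sig": signature})
    return f"https://files.example.com/{object_id}?{query}"  # hypothetical host
```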
Chronological Packet Assembler
"As a claims coordinator, I want all evidence ordered chronologically with clear sections so that reviewers can follow the story quickly and approve faster."
Description

Merges DTC timeline, service history, mileage logs, technician notes, photos, and receipts into a single, chronologically ordered packet with clear sections, captions, and cross-references. Generates an index/table of contents, bookmarks, and page numbers; normalizes time zones; and supports manual overrides for ordering with change tracking. Persists packet versions for auditability and allows reassembly after new evidence arrives while preserving a prior exported snapshot.

Acceptance Criteria
Chronological merge across mixed time zones
Given a DTC event at 2025-07-10 08:30-07:00, a receipt at 2025-07-10 10:10-04:00, and a technician note at 2025-07-10 15:05Z for the same vehicle And the fleet default time zone is America/Chicago When the packet is assembled Then all timestamps are normalized to the fleet default time zone and displayed with time zone abbreviation And items are ordered ascending by normalized timestamp: receipt (09:10 CDT), technician note (10:05 CDT), DTC (10:30 CDT) And each item shows its ISO-8601 original timestamp in metadata And if two items share the same normalized timestamp, they are ordered by original event timestamp to millisecond precision, then by stable UUID ascending
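The worked example above can be reproduced with standard-library time-zone handling; this sketch assumes ISO-8601 inputs with explicit offsets (Python 3.9+; note that datetime.fromisoformat only accepts a bare "Z" suffix on 3.11+, so the UTC note is written with +00:00):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

FLEET_TZ = ZoneInfo("America/Chicago")  # fleet default time zone

events = [
    ("DTC",       "2025-07-10T08:30:00-07:00"),
    ("receipt",   "2025-07-10T10:10:00-04:00"),
    ("tech note", "2025-07-10T15:05:00+00:00"),
]

# Normalize each timestamp to the fleet zone, then sort ascending.
normalized = sorted(
    (datetime.fromisoformat(ts).astimezone(FLEET_TZ), name, ts)
    for name, ts in events
)
for local, name, original in normalized:
    print(local.strftime("%H:%M %Z"), name, "| original:", original)
# Prints: 09:10 CDT receipt, 10:05 CDT tech note, 10:30 CDT DTC
# (ties would fall back to millisecond precision, then stable UUID)
```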
Generate clear sections with captions and cross-references
Given artifacts from DTC timeline, service history, mileage logs, technician notes, photos, and receipts exist for a single claim When the packet is assembled Then the packet includes top-level sections in this order: DTC Timeline, Inspections & Technician Notes, Service History, Mileage Logs, Photos, Receipts, Appendices And each artifact is placed under its appropriate section And each artifact has an auto-generated caption including vehicle identifier, normalized date/time, source type, odometer (if present), and a concise summary or filename And DTC entries list clickable cross-references to related service entries, technician notes, photos, and receipts by item number And cross-references navigate to the correct page and anchor within the packet And artifacts without relationships display “No related items”
Table of contents, bookmarks, and page numbering in export
Given the assembled packet is exported to PDF When the export completes Then a Table of Contents appears on page 1 with section titles and accurate page numbers And each section and artifact has a PDF bookmark labeled with section and caption And page numbers are consecutive starting at 1 and displayed as “Page X of Y” in the footer And all TOC entries and bookmarks navigate to the exact target page and anchor And 100% of internal links validate with no broken destinations
Manual ordering overrides with change tracking
Given a user with role Claims Manager or higher enables manual ordering for a packet version When the user changes an item’s position via drag-and-drop or by setting a sequence number Then the packet reflects the new order without altering the underlying timestamps And an override record is captured with user ID, timestamp, item ID, old position, new position, and optional reason And an “Override” badge is shown next to manually moved items And the user can revert an item or the entire packet to auto-order, restoring the computed chronological sequence And all override records are included in the packet’s audit log and are exportable
Versioning and reassembly after new evidence
Given packet version v1 was exported at time T And new artifacts (e.g., a receipt and technician note) are ingested at time T+1 When the user selects Reassemble Then the system creates packet version v2 and merges new artifacts into the chronological sequence without duplicating items from v1 And version v1 remains immutable and retrievable, including its exact exported file And a version diff lists added, removed, and reordered items between v1 and v2 And the exported filename and packet metadata include the packet ID, version, and timestamp
Handling undated or conflicting timestamps
Given an artifact is missing a timestamp or contains conflicting time zone metadata When the packet is assembled Then the artifact is placed in an Appendix section labeled Undated or Conflicted Evidence and excluded from the main chronological sequence And the UI prompts the user to supply or correct the timestamp and time zone And upon user entry and save, reassembly positions the artifact chronologically and removes it from the Appendix And the metadata change is recorded in the audit log with previous value, new value, user ID, and timestamp
Claim Completeness Validator
"As a fleet manager, I want a pre-submit completeness check so that I avoid claim rejections due to missing or mismatched information."
Description

Runs OEM-specific completeness and consistency checks before export or submission, ensuring presence and coherence of required data (VIN, mileage at failure/repair, DTCs, inspection photos, labor hours, receipts). Flags missing or stale information, detects mismatches (e.g., mileage progression), and provides targeted guidance to resolve gaps. Supports blocking and non-blocking rules, calculates an export-readiness score, and integrates with notifications to prompt users to collect missing items.

Acceptance Criteria
OEM Rule Set Selection and Blocking/Warning Behavior
Given a claim with OEM X and claim type Y When validation runs Then the validator loads the X/Y rule set and enumerates required fields and severities Given any blocking-required field is missing or empty When the user attempts to Export or Submit Then the action is prevented and a blocker summary is shown with a count of blockers and warnings Given any non-blocking-required field is missing or empty When validation runs Then the claim is marked with a warning and Export/Submit remain enabled Given all blocking rules pass When the user clicks Export Then export proceeds without blocker error
Mileage and Date Consistency Checks
Given mileage_at_failure, mileage_at_repair, and last_recorded_mileage exist with their dates When validation runs Then mileage_at_failure >= last_recorded_mileage and mileage_at_repair >= mileage_at_failure Given mileage progression is violated When validation runs Then a "Mileage progression mismatch" is flagged on offending fields with configured severity and an edit deep link Given mileage_at_failure equals mileage_at_repair When failure_date equals repair_date Then no mismatch is raised Given a mileage unit differs from the configured unit for the OEM When validation runs Then a unit mismatch is flagged and blocks if the OEM requires a specific unit
DTC Timeline Presence and Recency
Given a claim has a failure_date When validation runs Then at least one DTC snapshot exists within N days before failure_date as defined by the OEM rule set Given no DTC snapshot meets recency When validation runs Then "Stale or missing DTC evidence" is flagged with configured severity Given DTC events include detected, confirmed, and cleared timestamps When validation runs Then timestamps are non-decreasing in the order detected -> confirmed -> repaired/cleared; otherwise flag "DTC timeline inconsistency" Given DTCs recorded by OBD and by technician notes disagree on codes present When validation runs Then discrepancies are listed by code with severity Warn
Inspection Photos and Technician Notes Completeness
Given the OEM requires at least P inspection photos When validation runs Then the count of attached inspection photos >= P and each has a timestamp within M days of failure_date Given required photo metadata (timestamp or angle tag) is missing When validation runs Then a photo metadata deficiency is flagged with configured severity; if angle tags are required, missing required angles are identified Given technician notes are required with fields symptom, cause, and correction When validation runs Then notes exist with minimum length L characters and all mandatory fields present; otherwise missing segments are flagged Given notes or photos reference VIN or mileage When validation runs Then values match the claim's VIN and mileage_at_failure; otherwise an inconsistency is flagged
Receipts, Labor, and Cost Coherence
Given labor_hours, parts line items (qty, unit_price), taxes/fees, labor_rate, and receipt_total When validation runs Then subtotal_parts + (labor_hours * labor_rate) + taxes/fees equals receipt_total within tolerance T; otherwise flag "Financial totals mismatch" Given the OEM requires receipts When validation runs Then at least one receipt file exists, is readable, and is dated on/after failure_date and on/before submission_date Given fractional labor_hours and defined rounding rules When totals are displayed Then rounding is applied per rule set and line-item sums match displayed totals Given receipt currency differs from OEM currency When validation runs Then conversion uses the configured rate date and converted totals match within tolerance; otherwise flag "Currency conversion missing"
Export-Readiness Score and Notifications
Given validation results include blockers and warnings with configured weights When validation completes Then the system computes export_readiness_score (0–100) and displays the score with a category breakdown Given export_readiness_score >= threshold_pass and no blockers When the user views the claim Then the claim status is "Ready to Export" Given at least one missing item When notifications are enabled Then an actionable notification is sent to the claim owner listing missing items with deep links to resolve each Given a flagged item is resolved When validation re-runs Then the score updates accordingly and the notification task auto-completes or updates remaining items
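A minimal sketch of how the weighted score and the no-blockers gate could compose; the weight model and pass threshold here are assumptions, not configured FleetPulse values:

```python
def export_readiness_score(results: list[dict]) -> int:
    """results: [{"severity": "blocker"|"warning", "weight": float, "passed": bool}]"""
    total = sum(r["weight"] for r in results) or 1.0
    earned = sum(r["weight"] for r in results if r["passed"])
    return round(100 * earned / total)  # 0-100 as per the criteria

def ready_to_export(results: list[dict], threshold_pass: int = 90) -> bool:
    """'Ready to Export' requires no failing blockers and a passing score."""
    no_blockers = all(r["passed"] for r in results if r["severity"] == "blocker")
    return no_blockers and export_readiness_score(results) >= threshold_pass
```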
One-Click Export: PDF & Portal Bundle
"As a claims coordinator, I want a one-click export that produces a PDF and portal-ready bundle so that I can submit immediately without manual formatting."
Description

Produces a clean, branded PDF and a portal-ready bundle from the assembled packet with a single action. Populates OEM forms (fillable or flattened), embeds bookmarks and hyperlinks to evidence, and applies filename conventions and watermarking when required. Generates accompanying machine-readable artifacts (JSON/XML/CSV) per OEM spec, packages assets into a ZIP, supports background generation with progress indicators and webhooks, and stores the immutable artifact and hash for audit and re-download.

Acceptance Criteria
One-Click Export Generates PDF and Portal Bundle
Given an assembled claim packet exists and the user has Export permission And an OEM profile is selected with default export options When the user clicks "Export Claim Packet" Then the system creates a single background job that produces both a branded PDF and a portal bundle And the job completes within 120 seconds for packets <= 200 MB and <= 500 assets And the UI shows a success notification and enables Download buttons/links for both artifacts And both artifacts are named per pattern <Fleet>-<VIN>-<ClaimID>-<YYYYMMDD>-v<Seq>.<ext>
OEM Forms Population and Flattening
Given the selected OEM profile defines required and optional form fields with mappings When the export runs Then 100% of required fields are populated from packet data; optional unmapped fields remain blank And field validations pass (e.g., VIN length = 17, dates = YYYY-MM-DD, mileage is non-negative integer) And if the OEM profile requires flattened forms, form fields in the PDF are flattened and non-editable; otherwise they remain fillable And populated forms are embedded in the PDF and also included as separate files in the bundle if required by the OEM profile
PDF Bookmarks and Evidence Hyperlinks
Given the packet includes sections (DTC timeline, inspection photos, technician notes, service history, mileage logs, receipts) When the PDF is generated Then the PDF contains top-level bookmarks for each section in chronological order by event timestamp And each evidence reference in the PDF links to the corresponding asset within the bundle (or internal URI) and opens to the correct item And 100% of hyperlinks function when the ZIP is extracted and viewed offline And all bookmarks and links are free of broken destinations
Filename Conventions and Conditional Watermarking
Given the OEM profile defines a filename pattern and watermark rules When the export completes Then every file in the ZIP (PDF, forms, evidence, machine-readable artifacts) follows the pattern exactly, uses ASCII-safe characters, and is <= 120 characters long And when export is marked Draft, the PDF is watermarked "For Review - Not Submitted" with 5%–15% opacity; when Final, no watermark is applied And watermarks do not obscure required text or form fields And any filename that fails validation causes the job to fail with a descriptive error recorded in logs
Machine-Readable Artifacts per OEM Specification
Given the OEM profile specifies machine-readable outputs (JSON and/or XML and/or CSV) with schemas When the export runs Then the system generates each required artifact And JSON validates against the OEM JSON Schema; XML validates against the OEM XSD; CSV headers, delimiter, and quoting match the spec exactly And numeric values use dot decimal, timestamps are ISO 8601 UTC, and VINs are uppercase alphanumeric And a manifest.json lists each file with SHA-256 hash, byte size, and relative path, and hashes match the actual files
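A sketch of manifest.json generation consistent with the criteria above, hashing the final file bytes and recording bundle-relative paths; the directory layout is whatever the export wrote:

```python
import hashlib
import json
from pathlib import Path

def build_manifest(bundle_root: str) -> str:
    """List each file with SHA-256 hash, byte size, and relative path."""
    root = Path(bundle_root)
    entries = []
    for path in sorted(root.rglob("*")):            # sorted for determinism
        if path.is_file() and path.name != "manifest.json":
            data = path.read_bytes()
            entries.append({
                "path": path.relative_to(root).as_posix(),
                "bytes": len(data),
                "sha256": hashlib.sha256(data).hexdigest(),
            })
    return json.dumps({"files": entries}, indent=2)
```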
ZIP Packaging and Bundle Structure
Given the export completes successfully When the bundle is created Then a single ZIP is produced with the structure: / (root) contains claim.pdf, manifest.json, and folders: forms/, evidence/, machine_readable/, logs/ And the ZIP opens without errors in Windows and macOS default archive tools And re-running the same export with identical inputs within 10 minutes produces a byte-identical ZIP and the same SHA-256 hash And total ZIP size equals the sum of contained files within 1% tolerance
Background Processing, Progress, and Webhooks
Given an export job is initiated When the job runs in the background Then the UI displays a job ID, progress in >= 10% increments, and an ETA, updating at least every 5 seconds while active And the job status transitions through Queued -> Running -> Completed or Failed And on completion or failure, a webhook POST is sent to the tenant-configured URL with a signed signature header and payload including jobId, status, startedAt, completedAt, durationMs, artifact URLs, and hashes And transient errors are retried up to 3 times with exponential backoff; terminal failures are logged and surfaced with a user-visible retry action
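One plausible implementation of the signed signature header, assuming an HMAC-SHA256 scheme; the header name X-FleetPulse-Signature and the exact payload fields are illustrative, not a documented contract:

```python
import hashlib
import hmac
import json

def signed_webhook(secret: bytes, payload: dict) -> tuple[dict, bytes]:
    """Serialize the payload and sign the exact bytes that will be POSTed."""
    body = json.dumps(payload, separators=(",", ":")).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    headers = {"Content-Type": "application/json",
               "X-FleetPulse-Signature": f"sha256={signature}"}  # assumed name
    return headers, body

headers, body = signed_webhook(b"tenant-webhook-secret", {
    "jobId": "job_123", "status": "Completed",
    "startedAt": "2025-07-10T15:00:02Z", "completedAt": "2025-07-10T15:01:41Z",
    "durationMs": 99000, "artifacts": ["claim.pdf", "bundle.zip"],
})
# The receiver recomputes the HMAC over the raw body and compares digests.
```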
Submission Status Tracking & Portal Integration
"As a fleet manager, I want to see the submission status and errors from OEM portals so that I can resolve issues quickly and know when a claim is approved."
Description

Integrates with supported OEM portals for direct submission using secure, vaulted credentials. Queues submissions with retries, handles rate limits, and polls or receives webhooks for status updates. Maps external portal states (received, needs info, approved, rejected) to internal statuses, shows a submission event timeline, surfaces errors with remediation guidance, and provides a fallback path with instructions for manual upload while retaining status tracking.

Acceptance Criteria
Credential Vault and Portal Authentication
Given an org admin enters valid OEM portal credentials and clicks Connect When the system validates and saves the credentials Then the credentials are stored in a secure vault and never displayed in plaintext in UI or API responses And a connectivity check to the OEM portal succeeds within 5 seconds And access tokens are refreshed automatically at least 5 minutes before expiration And revoking credentials prevents new submissions within 60 seconds and shows Reauthentication required on submission attempts
Direct Submission Queue with Retries and Rate Limits
Given a claim packet is ready and the user clicks Submit to Portal When the submission job is created Then the job is enqueued with a unique idempotency key for the claim+portal combination And the UI shows status Queued within 2 seconds And only one in-flight job per claim+portal is allowed; duplicates are deduplicated by idempotency key for 24 hours And transient errors (HTTP 5xx, timeouts, 429) are retried with exponential backoff starting at 1s, capping at 5m, up to 10 attempts or 24h TTL And permanent errors (HTTP 4xx excluding 408/429) stop retries and set status Failed with the portal error code and message
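A compact sketch of the dedup key and retry classification described above; the key derivation is an assumption (any stable claim+portal digest would serve):

```python
import hashlib

def idempotency_key(claim_id: str, portal: str) -> str:
    """One in-flight job per claim+portal; duplicates dedupe on this key for 24 h."""
    return hashlib.sha256(f"{claim_id}:{portal}".encode()).hexdigest()

def is_transient(status: int | None) -> bool:
    """Retry timeouts (None), 5xx, 408, and 429; other 4xx fail permanently."""
    return status is None or status >= 500 or status in (408, 429)

def next_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """Exponential backoff starting at 1 s, capped at 5 min per the criteria."""
    return min(cap, base * 2 ** attempt)
```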
Status Sync via Webhooks and Polling
Given a submission has been accepted by the portal When a portal webhook is received with status received, needs info, approved, or rejected Then the internal status is updated to Received, Needs Info, Approved, or Rejected within 10 seconds and an event is added to the timeline And if webhooks are unavailable When polling runs Then the system polls the portal every 15 minutes with jitter until a terminal state (Approved or Rejected) or 7 days elapse And status changes detected via polling are applied within 2 minutes and added to the timeline And if conflicting updates arrive, the most recent portal timestamp wins
Submission Event Timeline
Given a submission exists When a user views the claim's Submission Timeline Then events are shown in chronological order with timestamp (UTC and org local), event type, source (system, webhook, user), and details And the timeline includes at minimum: queued, attempt started, attempt failed with code, retry scheduled, submitted/acknowledged, status changes, errors, manual edits, and credential changes affecting the submission And the timeline can be exported as JSON and included in the Proof Packager PDF
Error Surfacing with Remediation Guidance
Given a submission encounters an error When the user opens the submission details Then an error banner displays a human-readable message, portal error code, last attempt time, next retry time (if applicable), and suggested remediation steps And sensitive values (passwords, tokens) are never displayed or logged And for authentication errors, a Reconnect Portal action is shown And for validation errors, a View Missing Info action deep-links to required fields And clicking Retry Now triggers an immediate retry and records an event if the error is transient; otherwise the button is disabled with explanation
Manual Fallback Submission with Tracking
Given a portal is unsupported or down When the user selects Manual Upload Then the system generates a downloadable packet and step-by-step instructions for manual portal upload And the user can enter an external reference number and upload a submission confirmation file And the system tracks Manual Submitted, Approved, Rejected, and Needs Info statuses with the same timeline model And the UI reminds the user to update status every 24 hours until a terminal state is set or 14 days pass

Portal Bridge

Push claims directly into major OEM and dealer portals with prefilled fields, required attachments, and validation checks. FleetPulse returns claim IDs, syncs status changes, and alerts you when more info is requested—eliminating duplicate data entry and giving real‑time visibility end‑to‑end.

Requirements

OEM Portal Submission Connectors
"As a fleet manager, I want to submit claims directly to OEM and dealer portals from FleetPulse so that I eliminate duplicate data entry and speed up reimbursement."
Description

Build and maintain connectors to major OEM and dealer portals (e.g., Ford, GM, Daimler/Freightliner, Volvo/Mack, PACCAR) enabling direct claim submission from FleetPulse. Support both official APIs and resilient headless browser/RPA fallbacks where APIs are unavailable. Handle authentication variants (OAuth2, SSO/SAML, API keys, session cookies), enforce portal-specific schemas and endpoints, and respect rate limits. Normalize connector interfaces behind a common adapter, log request/response payloads with PII-safe redaction, and surface claim IDs returned by portals to the FleetPulse claim record. Ensure high availability and versioning to handle portal updates without service interruption.

Acceptance Criteria
API Submission via OAuth2 to OEM Portal
Given a valid FleetPulse claim with all required fields and attachments mapped to the target OEM schema And valid OAuth2 client credentials configured for the OEM portal When the user submits the claim via the OEM connector Then the connector obtains or refreshes an access token and calls the correct portal endpoint over TLS 1.2+ with the mapped payload And local schema validation blocks submission if any required or conditional field is missing or invalid, returning field-level errors to the user And on 2xx response the connector extracts the portal claim ID and writes it to the FleetPulse claim within 5 seconds And the connector returns a success result to the caller and enqueues status sync And request/response bodies are logged with PII redacted and a correlation ID, with no secrets in logs
Headless Browser/RPA Fallback Submission
Given the target OEM portal lacks a public API or its API health check is failing for 3 consecutive minutes And a service account with non-interactive login is configured for the portal When the user submits the claim Then the connector uses a headless browser to authenticate, navigate to the claim form, populate required fields, and upload attachments per portal constraints And the workflow retries up to 2 times per step on transient DOM or network errors with capped backoff And upon successful submission the connector captures the confirmation page, parses the claim ID, saves it to the FleetPulse claim, and stores a screenshot/PDF as an attachment And the entire fallback run completes within 3 minutes for 95th percentile submissions And all automation steps and selectors are logged with PII redaction and correlation ID
Authentication Variants and Secure Secret Handling
Given a connector configured for one of the supported auth modes (OAuth2, API key, SSO/SAML, session cookie) When authentication is performed Then secrets are stored in a managed secrets vault, never logged, and rotated per policy And OAuth2 tokens are refreshed proactively at 80% of expiry and retried once on failure before surfacing an actionable error And API keys are transmitted only via HTTPS, scoped to least privilege, and validated by a successful test call at setup time And SSO/SAML flows support IdP metadata configuration and persist a renewable session without interactive MFA; if MFA is enforced, submission is blocked with a clear error And session cookies are renewed before expiry and invalid sessions trigger auto re-login with backoff
Portal-Specific Schema Mapping and Validation
Given a FleetPulse claim in the normalized model with attachments When mapped to a portal-specific payload Then all required and conditional fields, enumerations, and units are transformed to match the portal schema And client-side validation prevents submission if any field violates portal constraints, returning specific per-field messages and remediation hints And attachment types, sizes (<= 25 MB each unless portal-specific limit differs), and counts adhere to portal rules; noncompliant files are rejected pre-submit And a JSON payload preview is available for API-based portals; for RPA, a field-value summary is presented
Rate Limiting, Retry Strategy, and Idempotency
Given the portal enforces rate limits or returns transient errors (HTTP 429/5xx/timeouts) When the connector sends requests Then per-portal rate limits are enforced client-side and Retry-After headers are honored And retries use exponential backoff with jitter up to a 2-minute cap and a maximum of 5 attempts per request And an idempotency key derived from the FleetPulse claim ID and payload hash prevents duplicate submissions across retries and processes And metrics emit success, retry, throttle, and failure counts per portal and are visible in monitoring And the user is informed of queued or delayed submissions with ETA when backpressure is active
Connector Versioning, Health Checks, and Zero-Downtime Updates
Given a connector update or a portal change is detected When a new connector version is deployed Then blue/green or canary rollout is used with automatic rollback if error rate exceeds 2% or median latency degrades by >25% And connector interfaces are versioned; older versions continue to operate for at least 30 days or until migration completes And health checks cover auth, schema validation, endpoint reachability, and a synthetic submission to a sandbox where available, running at 5-minute intervals And monthly availability for submission endpoints meets or exceeds 99.9%, excluding scheduled maintenance, with SLIs and alerts configured
Common Adapter Interface Compliance
Given the FleetPulse connector SDK and adapter contract When a connector is built or updated Then it implements the required methods (submit, getStatus, getPortalLimits, getSchema, health) with standardized request/response shapes and error codes And unit contract tests pass at 100% for required methods and 95% for optional features And integration tests validate cross-connector parity for at least Ford, GM, Daimler/Freightliner, Volvo/Mack, and PACCAR And breaking changes require a new major version with migration notes; minor/patch versions remain backward compatible
Claim Prefill & Validation Engine
"As a service coordinator, I want claim forms auto-populated and validated with our vehicle and diagnostic data so that submissions are accurate and accepted on the first attempt."
Description

Implement a rules-driven engine that assembles claim payloads from FleetPulse data sources: VIN, vehicle profile, warranty coverage, OBD-II DTCs, freeze-frame, odometer, fault timestamps, maintenance history, inspection results, parts/labor estimates, and photos. Map fields to each portal’s schema, auto-fill required values, and validate against portal constraints (required docs, value ranges, allowed codes). Provide inline error messages and pre-submission checks to maximize first-pass acceptance. Support default templates per OEM and claim type, and allow admin overrides for field mappings and business rules.

Acceptance Criteria
OEM-Specific Powertrain Claim Prefill Completeness (OEM A)
Given a FleetPulse vehicle record has VIN (17 chars), make, model, year, and active OEM A powertrain coverage And diagnostic data exists with at least one OBD-II DTC, freeze-frame, odometer reading, and fault timestamp When the engine generates a claim payload for OEM A, claim type "Powertrain" Then VIN, Make, Model, Year, WarrantyProgram, Odometer, DTCCode(s), FreezeFrame, and FaultTimestamp are auto-filled from FleetPulse data And field names and formats match OEM A schema (e.g., ISO-8601 timestamps, integer odometer, uppercase VIN) And the payload validates against OEM A schema with zero errors And mandatory OEM A fields are 100% populated when corresponding FleetPulse data exists And payload generation completes within 1.5 seconds for payloads ≤1 MB at p95
Required Document Assembly and Attachment Validation
Given OEM portal rules for the selected OEM/claim type require: ≥2 photos, last 12 months maintenance history (PDF), latest inspection report (PDF), and parts/labor estimate (PDF) And the FleetPulse record contains corresponding artifacts When the engine assembles the claim package Then each required document is attached and referenced to the correct portal field IDs And file types and sizes meet constraints (JPEG/PNG ≤10 MB each; PDF ≤20 MB) And if any required item is missing, the pre-submission check lists each missing artifact by name and required count And duplicate attachments are not included And the attachment manifest passes portal validation with zero errors
Portal Constraint Enforcement: Allowed Codes and Value Ranges
Given the portal defines allowed component codes, allowed DTCs for the claim type, and numeric ranges (e.g., odometer 0–1,000,000) When the payload contains a disallowed code or an out-of-range value Then inline, field-level errors appear next to each offending field with messages formatted as: "<Field>: <Problem>. Allowed: <Allowed/Range>. Actual: <Actual>" And the API returns machine-readable error codes for each violation And the Submit action remains disabled until all blocking errors are resolved And upon correction, the errors clear in real time without page reload
Pre-Submission Validation Report and Submit Gate
Given a claim draft with all required fields and attachments present When the user triggers Validate Then a validation summary displays zero errors and any non-blocking warnings And the Submit button becomes enabled And validation completes within 2 seconds at p95 And the validation event is logged with timestamp, user ID, and payload checksum
Default Template Application by OEM and Claim Type
Given a default template exists for the selected OEM and claim type When a user creates a new claim of that OEM and claim type Then the engine auto-applies the template's field mappings, default values, and required-attachments set And the applied template name/version is visible in the UI and included in payload metadata And non-admin users cannot change default template assignments And changing OEM or claim type switches to the correct default template automatically
Admin Overrides for Field Mappings and Business Rules
Given an admin updates a mapping to map FleetPulse.odometer_km to portal.mileage_mi with km→mi conversion (rounded to nearest integer) And adds a business rule "reject if mileage_mi < prior_mileage" When the admin publishes the changes Then new validations use the updated mapping and rule within 60 seconds And violations trigger a blocking error: "Mileage must not decrease" And all changes are versioned with admin ID, timestamp, and diff; rollback restores prior behavior immediately And non-admin users cannot create, edit, or publish mappings/rules
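The km→mi override in the example reduces to a small mapping rule; this sketch treats the rounding and the non-decreasing check as a blocking validation, with all names illustrative:

```python
KM_PER_MILE = 1.609344

def map_mileage(odometer_km: float, prior_mileage_mi: int | None) -> int:
    """Map FleetPulse.odometer_km to portal.mileage_mi, rounded to nearest integer."""
    mileage_mi = round(odometer_km / KM_PER_MILE)
    if prior_mileage_mi is not None and mileage_mi < prior_mileage_mi:
        raise ValueError("Mileage must not decrease")  # blocking business rule
    return mileage_mi

map_mileage(32000.0, 19000)  # -> 19884 mi
```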
Graceful Handling of Missing or Partial Source Data
Given some expected data sources are unavailable (e.g., no freeze-frame, missing inspection report) When generating and validating a claim payload Then the engine classifies gaps as Blocking or Non-Blocking per rules and lists them in the validation summary And blocking gaps prevent submission; non-blocking gaps show warnings only And the engine omits null/empty fields when the portal forbids them and applies safe defaults where allowed And the user can save as Draft with a completeness score (%) and guided prompts to supply missing data
Status Sync & Normalization
"As a fleet owner, I want real-time, normalized claim status updates and IDs in FleetPulse so that I have clear visibility and can manage follow-ups efficiently."
Description

Continuously retrieve and reconcile claim statuses from OEM/dealer portals via webhooks where available or scheduled polling with backoff otherwise. Capture claim IDs, status transitions, reason codes, payout amounts, and requested actions. Normalize heterogeneous portal status codes into FleetPulse’s canonical lifecycle (Submitted, Under Review, Info Requested, Approved, Denied, Paid, Closed) and update the claim timeline. Ensure deduplication and idempotent updates, and expose a real-time status view and history in FleetPulse.

Acceptance Criteria
Canonical Status Normalization & Mapping Across Portals
Given a configured mapping table from external portal codes to FleetPulse canonical lifecycle (Submitted, Under Review, Info Requested, Approved, Denied, Paid, Closed) When a status update with an external code arrives from any portal Then the system maps it to the correct canonical status per the active mapping table and updates the claim’s current status And the applied mapping version, external code, and source portal are recorded on the timeline event And if the external code is not recognized, the claim’s canonical status remains unchanged, the raw event is stored, and the event is flagged as unmapped for review
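A minimal sketch of the mapping table and the unmapped-code handling; the portal names and external codes are invented for illustration, and the real table would be versioned configuration:

```python
CANONICAL = {"Submitted", "Under Review", "Info Requested",
             "Approved", "Denied", "Paid", "Closed"}

PORTAL_MAP = {  # (portal, external_code) -> canonical status; illustrative
    ("oem_a", "RCVD"): "Submitted",
    ("oem_a", "INREV"): "Under Review",
    ("oem_a", "MOREINFO"): "Info Requested",
    ("oem_b", "PAID_OUT"): "Paid",
}

def normalize_status(portal: str, code: str) -> str | None:
    """Return the canonical status, or None to flag the raw event as unmapped."""
    status = PORTAL_MAP.get((portal, code))
    assert status is None or status in CANONICAL
    return status  # None: store the raw event, leave claim status unchanged
```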
Webhook Ingestion: Idempotency, Deduplication, and Ordering
Given multiple webhook deliveries with the same external event ID or identical payload hash When processing the deliveries Then only one timeline event is created and downstream side effects occur once (idempotency) And subsequent duplicate deliveries are acknowledged without creating additional events Given webhook events for the same claim arrive out of order When building the claim timeline Then events are ordered by portal event timestamp (fallback to received time when missing) And the claim’s current status reflects the latest event by portal event timestamp
Polling Fallback with Exponential Backoff and Rate-Limit Compliance
Given a portal without webhooks or with webhook outages When status sync runs Then the system polls the portal at the configured interval (default 5 minutes) and records last successful sync and next scheduled poll time And on HTTP 429 or 5xx responses, the system applies exponential backoff up to a configured max delay and retries without exceeding portal rate limits And once connectivity is restored, the system fetches incremental updates since the last checkpoint to avoid gaps or duplicates
Claim Timeline and Audit History Completeness
Given any status update is ingested (webhook or polling) When creating a timeline entry Then the entry includes: external claim ID(s), source portal, portal event timestamp, received timestamp, external status code, canonical status, reason code(s) if provided, requested action(s) if provided, payout amount and currency if provided, and mapping version And the history is append-only; corrections are represented as new entries referencing the prior entry And the UI timeline and API return this full history in chronological order with stable identifiers for each event
Requested Information Capture and Exposure
Given a portal update indicating the portal requires additional information When processing the update Then the claim’s canonical status is set to Info Requested And the timeline entry captures the requested action details (message, required documents/fields, and due date if provided) And the current claim view and status API expose the requested actions and due date And when a subsequent non-Info Requested status is received, the claim exits Info Requested and the timeline reflects the transition
Payout, Approval/Denial, Payment, and Closure Handling
Given a portal update indicating approval with an estimated or final payout When processing the update Then the canonical status is set to Approved (or Paid when payment is confirmed) and the payout amount, currency (ISO 4217), and relevant dates are captured on the timeline Given a portal update indicating denial When processing the update Then the canonical status is set to Denied and any denial reason codes/messages are captured on the timeline Given a portal update indicating the claim is closed by the portal When processing the update Then the canonical status is set to Closed only when the portal provides a terminal close state, and the close reason (if any) is recorded
Real-Time Status View Consistency and Latency
Given a new status update is ingested When viewing the claim in FleetPulse UI or fetching via the status/history API Then the displayed current status, last updated timestamp, external claim ID, and source portal are consistent across UI and API And P95 end-to-end latency from ingestion to UI/API availability is under 10 seconds And the history endpoint returns events ordered by portal event timestamp (descending), with pagination and a stable cursor to support continuous reads
Attachment Auto-Packaging & Templating
"As a service coordinator, I want required claim attachments prepared and formatted automatically so that I meet each portal’s submission rules without manual effort."
Description

Automatically collect, generate, and transform required attachments per portal/claim type, including inspection checklists, diagnostic snapshots, photos, cost estimates, invoices, and service records. Convert to accepted formats (PDF/JPEG), compress and sanitize metadata, apply required file naming conventions, and assemble a submission bundle that meets each portal’s checklist. Generate templated cover sheets and OEM-specific forms populated with claim metadata, and track attachment provenance for auditability.

Acceptance Criteria
Auto-Collect Required Attachments by Portal and Claim Type
Given a selected portal and claim type, when auto-packaging starts, then the system derives the exact required and optional attachment list from the configured checklist for that portal/claim type. Given eligible artifacts exist in FleetPulse, when auto-collection runs, then the most recent, context-relevant versions are selected per mapping rules (claim- or vehicle-scoped) with deterministic tie-breaking. Given a required attachment is missing, when auto-collection completes, then the claim is marked "Attachments Incomplete" and a list of missing items with source hints is displayed. Given all required attachments are found, when auto-collection completes, then the claim is marked "Attachments Complete". Given a user manually adds/replaces an attachment, when auto-collection is re-run, then manual selections persist unless superseded by newer artifacts according to the recency policy configured.
Format Conversion and Compression Compliance
Given portal-specific format rules are configured, when assembling the bundle, then documents are converted to compliant formats (e.g., PDF) and images to compliant formats (e.g., JPEG) as required by the portal. Given portal-specific maximum file size and resolution thresholds, when processing images and PDFs, then compression/scaling is applied to meet thresholds while preserving legibility, and the result passes automated readability checks (e.g., minimum 200 DPI for documents). Given a file cannot be converted while meeting constraints, when packaging, then the system blocks submission of that file and surfaces an actionable error detailing the violated constraint(s).
Metadata Sanitization and Privacy Controls
Given outbound images and PDFs, when sanitization runs, then EXIF GPS, device identifiers, author/creator, and application metadata are removed unless explicitly required by the portal. Given configured PII redaction rules, when generating documents from templates, then specified fields are irreversibly redacted (burn-in for images, true redaction objects for PDFs) prior to packaging. Given sanitization completes, when validation runs, then a sanitization check passes and no disallowed metadata keys remain in any packaged file.
File Naming Convention Enforcement
Given portal-specific filename templates and constraints are configured, when filenames are generated, then each attachment name matches the template tokens (e.g., {ClaimID}_{VIN}_{Type}_{Seq}) and allowed character set and does not exceed the maximum length. Given multiple attachments of the same type exist, when naming, then stable, gapless sequence numbers are assigned deterministically. Given illegal characters or overlength names occur, when normalization runs, then names are sanitized and truncated while preserving required tokens and uniqueness. Given pre-submission validation runs, then zero filename errors are reported for a submission-ready bundle.
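A hedged sketch of filename templating with sanitization and length capping; the token grammar follows the example above, and note the naive truncation here would need to preserve required tokens and the extension in a real implementation:

```python
import re

def render_filename(template: str, tokens: dict, max_len: int = 120) -> str:
    """Substitute tokens, then enforce an ASCII-safe character set and length."""
    name = template.format(**tokens)              # e.g. {ClaimID}_{VIN}_{Type}_{Seq}
    name = re.sub(r"[^A-Za-z0-9._-]", "_", name)  # replace disallowed characters
    return name[:max_len]                         # naive cap; see caveat above

render_filename("{ClaimID}_{VIN}_{Type}_{Seq}.pdf",
                {"ClaimID": "CLM-00042", "VIN": "1FTBR3X84PKA12345",
                 "Type": "receipt", "Seq": "003"})
# -> "CLM-00042_1FTBR3X84PKA12345_receipt_003.pdf"
```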
Checklist-Based Bundle Validation and Submission Readiness
Given a portal checklist for the selected claim type, when bundle validation runs, then all required attachments are present and each file passes format, size, naming, and content-type validations, producing a per-item pass/fail report. Given the bundle passes all validations, when readiness is evaluated, then the bundle state changes to "Submission-Ready" and a signed manifest hash is stored. Given any validation fails, when readiness is evaluated, then submission is blocked and blocking issues are presented with remediation guidance and deep links to the offending items.
Generated Cover Sheets and OEM Forms Prefilled with Claim Metadata
Given the selected portal requires a cover sheet, when packaging runs, then a cover sheet is generated from the correct template version and populated with claim metadata (e.g., Claim ID, VIN, mileage, DTCs, failure date, facility, contact). Given OEM-specific forms are required, when packaging runs, then forms are generated and populated with the latest claim data and required signature fields are flagged for e-sign or manual signature per portal capability. Given any required field is missing, when form generation runs, then a field-level error is raised and the bundle is not marked submission-ready until resolved.
Attachment Provenance and Audit Trail
Given each attachment in the bundle, when packaging completes, then the manifest records provenance: source entity, original filename, original checksum (SHA-256), transformation steps (convert/compress/sanitize), resulting filename, resulting checksum, actor, and timestamp. Given any attachment is modified after packaging, when re-packaging occurs, then the manifest version increments and a diff of changed artifacts and steps is recorded. Given an auditor requests evidence, when exporting the audit, then the system produces a machine-readable manifest (JSON) and a human-readable report that reconcile with the packaged files' checksums.
Notification & Task Routing for Portal Requests
"As a fleet manager, I want actionable alerts and tasks when a portal requests more information so that we can respond before deadlines and keep claims moving."
Description

Provide configurable alerts and workflow when portals request additional information or take key actions. Create tasks with due dates and owners, surface missing items, and provide one-click upload/response back to the originating portal. Support in-app notifications, email, and optional SMS, with SLA timers and escalation rules. Log all communications and responses in the claim timeline to maintain end-to-end traceability.

Acceptance Criteria
Task Creation on Portal Info Request
Given an active claim linked to an OEM/dealer portal and a unique portal event ID for "Request Additional Information" When the portal notifies FleetPulse via webhook or polling callback Then a single task is created within 10 seconds with:
- title "Portal info request"
- due date calculated from the configured SLA rule
- owner assigned per routing rules (default: claim owner)
- checklist of required fields and attachments parsed from the portal request
- SLA timer started and visible
And the task is not duplicated for the same portal event ID And the claim header shows an "Information requested" banner with a deep link to the task
Configurable Multi-Channel Alerts
Given user and organization notification preferences (in-app, email, SMS) and verified SMS opt-in When a portal info-request task is created, escalated, reassigned, due-soon (<=25% SLA remaining), breached, or resolved Then notifications are delivered per preference:
- in-app within 5 seconds
- email within 60 seconds
- SMS within 60 seconds if opted-in and enabled at org level
And each notification includes claim ID, portal name, request type, current task status, owner, due date/time in recipient’s timezone, and a deep link to the task And quiet hours and do-not-disturb settings are respected, deferring non-urgent notifications to the next allowed window And duplicate notifications for the same event are suppressed within a 60-second window
SLA Timers and Escalation
Given an SLA rule exists for "Additional Information" per portal and claim type (default 24h; min 1h; max 168h) When a related task is created Then the SLA countdown displays remaining time in hours/minutes and color-codes: green (>50%), amber (<=50% and >0%), red (breached) And at 75% elapsed a reminder is sent to the owner and watchers And upon SLA breach the task status changes to "Escalated", the escalation assignee/group receives notifications, and the breach timestamp/cause are recorded And reassigning or pausing (on-hold) adjusts the SLA per configured pause rules and is reflected in the timeline
One-Click Upload and Portal Response
Given a portal info-request task with a required items checklist When the user clicks "Upload and Respond" Then the UI allows drag-and-drop and mobile capture for attachments (pdf, jpg, png, docx up to 25 MB each; max 10 files) And validates file type, size, and required metadata before enabling "Send" When "Send" is clicked Then FleetPulse transmits the payload to the originating portal API with mapped fields and attachments And on success updates the task to "Waiting on Portal", records the portal response ID, and marks completed checklist items And on partial success (some items accepted) surfaces which items remain and leaves the task "Open" with remaining checklist items
End-to-End Claim Timeline Logging
Given audit logging is enabled by default When any of the following occurs: portal event received, task created/updated/reassigned/escalated, notification sent, user upload/response, portal response received, error/retry Then a timeline entry is recorded with UTC timestamp, actor (user/system), channel (portal/in-app/email/SMS/API), event type, summary, and links to artifacts (files, tasks, response IDs) And entries are immutable, searchable by claim ID, portal event ID, and date range And the timeline can be exported to CSV including all above fields
Portal Status Change Sync
Given a claim is linked to a portal with claim ID mapping When the portal posts "claim status changed" or "more info needed" Then FleetPulse updates the internal claim status to match the portal code via mapping table within 15 seconds And creates or reopens the corresponding task if additional information is needed And ensures idempotency by ignoring duplicate events with the same portal event ID while still logging receipt And notifies the claim owner of the status change per notification preferences
Error Handling, Retry, and Fallback
Given a user attempts to "Upload and Respond" and the portal API returns an error When the error is transient (HTTP 5xx or network) Then FleetPulse retries up to 3 times with exponential backoff (1s, 5s, 15s) And if still failing, marks the task "Action Required - Delivery Failed", logs the error code/message, attaches the payload, and notifies the owner and escalation group When the error is validation-related (HTTP 4xx) Then FleetPulse displays field-level errors, does not retry automatically, and leaves the task "Open" with unmet checklist items highlighted And all failures and retries are recorded in the claim timeline
Retry, Idempotency & Audit Logging
"As a platform admin, I want robust retries and a full audit trail so that transient errors don’t result in lost claims and issues can be diagnosed quickly."
Description

Introduce a durable job queue for submissions and sync operations with exponential backoff, circuit breakers, and dead-letter handling. Use idempotency keys per claim and portal action to prevent duplicate submissions. Persist structured audit logs of every request/response, payload hash, user action, and connector version to support troubleshooting and compliance. Provide an admin dashboard to replay failed jobs safely and export audit trails as needed.

Acceptance Criteria
Durable Queue with Exponential Backoff for Submissions and Sync
Given a claim submission or status sync job is enqueued And the external portal returns transient errors (HTTP 429/5xx) or network timeouts When the job is processed Then the system retries using exponential backoff with jitter per default config (initialDelay=2s, factor=2, maxDelay=5m, jitter=±20%) And it persists job state and nextAttemptAt across service restarts And it stops retrying after maxAttempts=8 without manual intervention And it marks the job Success on the first 2xx response and records the external claim ID if present And it marks the job Failed only after maxAttempts are exhausted without a 2xx response
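The default retry config above translates directly into a delay function; this sketch applies the ±20% jitter around the capped exponential base:

```python
import random

def backoff_delay(attempt: int, initial: float = 2.0, factor: float = 2.0,
                  max_delay: float = 300.0, jitter: float = 0.20) -> float:
    """initialDelay=2s, factor=2, maxDelay=5m, jitter=±20% from the config above."""
    base = min(max_delay, initial * factor ** attempt)
    return base * random.uniform(1 - jitter, 1 + jitter)

# Attempts 0..7 (maxAttempts=8): bases 2s, 4s, 8s, ... 256s before jitter,
# after which the job is marked Failed and dead-lettered.
delays = [backoff_delay(n) for n in range(8)]
```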
Circuit Breaker for Unhealthy Portal Endpoints
Given a portal endpoint experiences 5 consecutive failures within 1 minute or ≥50% error rate over the last 20 requests When additional jobs target that endpoint Then the circuit opens for 2 minutes and short-circuits calls with outcome "Skipped - Circuit Open" (does not decrement remaining retries) And metrics and audit logs record the circuit state change with reason And after cooldown the circuit enters Half-Open allowing up to 3 trial requests And if all trial requests succeed, the circuit closes; otherwise it reopens for another 2 minutes
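A minimal circuit-breaker sketch matching the thresholds above; a production version would also track the ≥50% rolling error rate over the last 20 requests and emit the metrics and audit events described:

```python
import time

class CircuitBreaker:
    """Closed -> Open (5 consecutive failures) -> Half-Open (after a 2-minute
    cooldown, up to 3 trial requests) -> Closed, or back to Open on failure."""

    def __init__(self, failure_limit=5, cooldown_s=120.0, trial_limit=3):
        self.failure_limit = failure_limit
        self.cooldown_s = cooldown_s
        self.trial_limit = trial_limit
        self.consecutive_failures = 0
        self.opened_at = None      # None means Closed
        self.trials_passed = 0

    def allow(self) -> bool:
        if self.opened_at is None:
            return True                                        # Closed
        if time.monotonic() - self.opened_at < self.cooldown_s:
            return False  # Open: "Skipped - Circuit Open", retries not decremented
        return True                                            # Half-Open trial

    def record(self, success: bool) -> None:
        if self.opened_at is not None:                         # Half-Open result
            if not success:
                self.opened_at = time.monotonic()              # reopen for cooldown
                self.trials_passed = 0
                return
            self.trials_passed += 1
            if self.trials_passed >= self.trial_limit:         # all trials passed
                self.opened_at, self.consecutive_failures, self.trials_passed = None, 0, 0
            return
        self.consecutive_failures = 0 if success else self.consecutive_failures + 1
        if self.consecutive_failures >= self.failure_limit:
            self.opened_at = time.monotonic()                  # trip the circuit
```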
Dead-Letter Handling with Alerting and Retention
Given a job has exhausted maxAttempts or has been skipped due to an open circuit for 3 consecutive cooldowns When dead-letter conditions are met Then the job is moved to the Dead-Letter Queue with fields: jobId, claimId, portal, action, idempotencyKey, attemptCount, lastError, timestamps, and lastResponseStatus And an alert is sent to the configured channel within 1 minute containing jobId and reason And DLQ items are retained for 30 days and are visible and filterable in the admin dashboard And DLQ jobs never auto-execute and require explicit replay
Idempotency Keys Prevent Duplicate Submissions
Given a submission or portal action uses an idempotency key scoped to claimId+action+portal and valid for 24 hours When the same key is used across retries, service restarts, or manual replays within the window Then only one portal-side mutation occurs and all attempts return the same outcome (external claim ID and response body) And concurrent attempts with the same key are de-duplicated so that one proceeds while others await and receive the final outcome And if the portal returns a known duplicate/conflict response, the job is treated as idempotent success and reconciles the external claim ID And the idempotency key is stored with the job and audit entries
Structured Audit Logging of All External Interactions
Given any outbound request to a portal or inbound callback related to a claim or sync occurs When the interaction is processed Then a structured audit record is written with: timestamp (UTC), actor (user email or service), claimId, portal, endpoint, method, request header whitelist, request payload SHA-256 hash, idempotencyKey, jobId/correlationId, connectorVersion, response status code, response latency (ms), and outcome And secret values (API keys, tokens, passwords) are redacted and payload bodies are not stored And audit entries are immutable (no updates) and deletions only occur via retention policy with a logged reason And audit records are queryable by date range, claimId, portal, idempotencyKey, jobId, and actor
Admin Dashboard Safe Replay of Failed Jobs
Given an Admin views a failed or dead-lettered job in the dashboard When they initiate Safe Replay Then the UI displays the original request summary, idempotencyKey, last error, and preflight checks for credentials and circuit state And the replay executes with the original payload and idempotencyKey by default and writes a new audit entry linked to the original job And on success, the job status updates to Success and the item is removed from the DLQ; on failure, the replay attempt count increments and the item remains in the DLQ And permissions enforce that only Admins can replay and all replay actions are audit logged with actor and timestamp
Audit Trail Export with Filtering and Integrity Checks
Given an Admin needs to export audit logs for a date range When they request an export with optional filters (claimId, portal, jobId, actor, idempotencyKey) and format (CSV or JSONL) Then the system generates a downloadable file containing matching records with the standard audit fields and no redacted secrets And the export completes within 60 seconds for up to 100,000 records and includes a file-level SHA-256 checksum And an audit entry is created for the export action with the filter criteria and file metadata
Connector Configuration & Credential Vault
"As an administrator, I want a secure, simple way to configure portal connectors and credentials so that onboarding and maintenance are fast and compliant."
Description

Deliver an admin UI to configure portal connections, credentials, field mappings, and default templates per OEM. Store credentials in an encrypted secrets vault with role-based access controls, rotation policies, and just-in-time decryption at runtime. Validate credentials during setup, test connectivity, and provide health/status indicators for each connector. Support per-tenant isolation and audit who changed what and when.
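
A minimal sketch of the just-in-time decryption pattern, using Fernet from the `cryptography` package as a stand-in for the tenant-scoped vault key; `jit_secret` is an illustrative name, and true memory purging is best-effort in Python.

```python
from contextlib import contextmanager
from cryptography.fernet import Fernet  # stand-in for a tenant-scoped KMS/vault key

@contextmanager
def jit_secret(vault_key: bytes, ciphertext: bytes):
    """Decrypt just in time, yield for the duration of one portal call, then purge."""
    secret = bytearray(Fernet(vault_key).decrypt(ciphertext))
    try:
        yield secret
    finally:
        for i in range(len(secret)):  # best-effort zeroing; never logged or written to disk
            secret[i] = 0

key = Fernet.generate_key()                        # held by the vault, tenant-scoped
stored = Fernet(key).encrypt(b"portal-api-token")  # what is persisted at rest
with jit_secret(key, stored) as token:
    pass  # call the OEM portal here using `token`
```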

Acceptance Criteria
Connector Creation with Credential Validation
Given a user with Connector Admin role in Tenant T When they select an OEM portal and enter all required connector settings and credentials Then the Save action is enabled and the connector is created with status "Not Connected" Given valid credentials for the selected OEM When the user clicks Validate Then the system performs an authentication call and returns "Valid" within 5 seconds and sets status "Connected" Given invalid or expired credentials When the user clicks Validate Then the system displays a descriptive error message and blocks saving until resolved Given a partially completed form When the user attempts to save Then required fields are highlighted and an error summary lists missing or invalid fields
Encrypted Credential Vault and RBAC
Given credentials are saved Then they are stored encrypted at rest using tenant-scoped keys and are never retrievable in plaintext via UI or API Given a user without Connector Admin role When they view a connector Then secret fields are masked and edit/rotate actions are unavailable Given a Connector Admin When they attempt to reveal a secret Then the UI requires re-authentication and reveals for a maximum of 60 seconds before re-masking Given any access to secrets Then an audit event is recorded with user, connector, action, timestamp, and IP, with secret values redacted
Connectivity Test and Health Monitoring
Given an existing connector When the user clicks Test Connectivity Then the system performs a live authentication and endpoint reachability check and displays Health = Healthy/Degraded/Down with last-checked timestamp Given a configured connector Then automated health checks run every 5 minutes and update the health status accordingly Given a health check failure persists for two consecutive intervals Then an alert is sent to the tenant's configured notification channels Given the OEM returns an error during testing Then the UI surfaces the error code/message and a link to troubleshooting documentation
Field Mapping and Default Templates per OEM
Given Tenant T configures an OEM portal When the admin maps internal claim fields to OEM fields in the mapping UI Then required OEM fields are indicated and cannot be left unmapped without a default value Given a mapping and default template are saved When a sample claim is previewed Then the rendered payload shows all mapped values and defaults and flags any unmapped required fields Given invalid mappings (type mismatch or unsupported value) When saving Then the system blocks save and indicates the offending fields with validation messages Given an OEM template Then it can be versioned, named, set as default, and rolled back to a prior version
Per-Tenant Isolation of Connectors and Secrets
Given a connector created in Tenant A Then it is not visible, editable, or callable from Tenant B via UI or API Given any API request with cross-tenant identifiers Then the system returns 403 and logs the attempt without disclosing existence Given stored secrets Then encryption keys and storage namespaces are tenant-scoped, and backups preserve the same isolation Given background jobs for health checks or sync Then they execute within the tenant context only
Configuration Change Audit Trail
Given any create, update, rotate, or delete action on a connector, mapping, or template Then an immutable audit record is written with actor, tenant, object, before/after diffs, timestamp, and request ID Given audit records When filtered by tenant, actor, object, or date range Then results return within 2 seconds for up to 10,000 records Given viewing of audit details Then secret values are redacted while field names and metadata remain visible Given system time variations Then audit timestamps are stored in UTC with millisecond precision
Credential Rotation Policy and Just-in-Time Decryption
Given a connector When an admin sets a rotation policy (interval in days and notification lead time) Then the system tracks last-rotated date and sends reminders per schedule Given a credential past its rotation due date Then the connector status shows "Rotation Due" and alerts are sent; connectivity continues unless enforcement is enabled Given runtime calls to an OEM portal Then credentials are decrypted just in time in memory, used for the call, and purged immediately after, with no secrets written to logs or disk Given any decryption event Then an audit record is created with purpose, connector, timestamp, and outcome without exposing the secret

Bulletin Match

Automatically links DTCs and symptoms to relevant TSBs, recalls, and customer satisfaction programs. It suggests approved diagnostics, labor ops, and part numbers, and cites the source in the claim packet, so reviewers see exactly why the fix qualifies, boosting first-pass approvals.

Requirements

VIN-Aware DTC-to-Bulletin Matching
"As a small fleet manager, I want DTCs auto-linked to applicable TSBs and recalls for my vehicle’s VIN so that I can repair faster and qualify for coverage."
Description

Implements a matching engine that uses VIN-decoded attributes (make, model, year, engine, trim), active and historical DTCs, mileage, and captured symptoms to automatically link vehicles to applicable TSBs, recalls, and customer satisfaction programs. Supports real-time lookups from live OBD-II events and batch processing for backfill. Handles superseded/revised bulletins, multiple concurrent DTCs, and synonymized symptom keywords. Ensures OEM applicability rules (VIN ranges, build dates, option codes) are enforced and returns normalized bulletin IDs, applicability notes, and effective dates. Targets sub-2s p95 response time and 99.9% matching service uptime, with audit logs for each decision to support claim reviews.
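
A sketch of the applicability evaluation over VIN ranges, build dates, and option codes; the rule shape is an assumption, and the lexicographic VIN comparison is a simplification of OEM serial-segment rules.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ApplicabilityRule:
    vin_start: str                       # inclusive boundaries, per the conformance tests
    vin_end: str
    build_from: date | None = None
    build_to: date | None = None
    required_options: set[str] = field(default_factory=set)

def is_applicable(rule: ApplicabilityRule, vin: str, build: date, options: set[str]) -> bool:
    if not (rule.vin_start <= vin <= rule.vin_end):  # simplified VIN-range check
        return False
    if rule.build_from and build < rule.build_from:
        return False
    if rule.build_to and build > rule.build_to:
        return False
    return rule.required_options <= options          # all option codes must be present
```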

Acceptance Criteria
Real-time OBD-II Event Match Returns Applicable Bulletins
Given a connected vehicle with a valid VIN decoded to make, model, year, engine, and trim, with at least one active DTC, historical DTCs within the configured lookback window, a mileage reading, and captured symptom text When a new OBD-II event is received by the matching service Then the service returns a response within 2,000 ms at the 95th percentile over a rolling 5-minute window And the response includes only bulletins of type TSB, Recall, or CSP that are applicable to the VIN and any mileage constraints specified by the OEM And each returned bulletin includes normalized bulletin ID, bulletin type, applicability notes, and effective start/end dates when provided by the OEM And no duplicate bulletins are returned
VIN Applicability Rules Enforcement
Given OEM applicability rules that specify VIN ranges (inclusive/exclusive), build dates, and option codes for a set of bulletins And vehicles whose VIN/build/options either satisfy or violate those rules When the matching engine evaluates applicability Then bulletins whose rules are satisfied are marked applicable and included in the result set And bulletins whose rules are not satisfied are excluded from the result set And VIN boundary cases (start and end VIN) are handled per rule definition with 0 false positives and 0 false negatives in the conformance test set
Superseded and Revised Bulletins Handling
Given a data set where bulletin B1 is superseded by B2 and both would otherwise match When the matching engine returns results Then only the latest active bulletin (B2) is returned And applicability notes indicate that B2 supersedes B1 And revised bulletins reflect the latest effective dates and revision identifiers in the response And the audit log records the supersession chain
Multiple Concurrent DTCs and Symptom Synonyms
Given multiple active DTCs and symptom text that includes synonyms of OEM terminology (e.g., “hard start” vs “extended crank”) When the engine matches bulletins Then synonym expansion is applied using the approved synonym dictionary version in effect And bulletins matched via either DTCs or synonymized symptoms are included And duplicates across DTC- and symptom-derived matches are deduplicated into a single bulletin entry And result ordering is deterministic for identical inputs
Batch Backfill Processing Parity and Resumability
Given a batch request covering N vehicles and a historical date range When the batch job runs Then for any vehicle snapshot processed in batch, the returned bulletin set is identical to a real-time lookup using the same inputs and rules version And the job supports resumable execution via a checkpoint so retries do not reprocess completed items And the job produces an idempotency key per vehicle snapshot; re-running the same batch with the same key yields no duplicate outputs
Service Performance and Availability SLOs
Given production-like traffic for small fleets When measuring over a calendar month Then the p95 end-to-end response time for real-time lookups is less than 2,000 ms And matching service uptime is at least 99.9%, excluding pre-announced maintenance windows, as measured by successful health checks and match request responses
Audit Logging and Claim Review Traceability
Given any match decision, including no-match outcomes When the request completes Then an audit record is written that includes: request ID, timestamp, VIN-decoded attributes used, list of active and historical DTCs with timestamps, mileage, captured symptoms, rules version, synonym dictionary version, data sources consulted, all candidate bulletins evaluated with pass/fail reasons, final matched bulletins, and normalized bulletin IDs And the audit record is retrievable by request ID via the audit API And the audit record includes OEM source identifiers sufficient for claim packet citation
Source Aggregation & Normalization
"As a service coordinator, I want FleetPulse to pull and normalize bulletins from OEM and NHTSA sources so that matches are complete and trustworthy."
Description

Integrates with authoritative sources (e.g., NHTSA recall API, OEM service/TSB portals and licensed feeds) to ingest, deduplicate, and normalize bulletin content into a canonical schema. Captures bulletin metadata (IDs, titles, affected systems, VIN applicability rules, labor ops, parts, supersessions, effective dates), full-text content, and regional variations. Maintains revision history with diffing, source citations, and deep links. Supports incremental updates via webhooks or scheduled pulls with ETag/If-Modified-Since, backfills historical bulletins, and provides monitoring, retries, and alerting on ingestion failures. Enforces data licensing terms and access controls at the source and record levels.
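
A sketch of the conditional pull using `requests`; the endpoint URL and return shape are assumptions. A 304 skips the download, while 429/5xx responses raise and feed the retry/backoff path described above.

```python
import requests

def pull_bulletins(url: str, etag: str | None, last_modified: str | None):
    """Conditional GET. Returns (payload, etag, last_modified); payload is None on 304."""
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    resp = requests.get(url, headers=headers, timeout=30)
    if resp.status_code == 304:
        return None, etag, last_modified  # no change; log no-change stats
    resp.raise_for_status()               # 429/5xx surface here for backoff handling
    return resp.json(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")
```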

Acceptance Criteria
Canonical Schema Normalization & Deduplication
Given bulletin payloads from NHTSA and two OEM feeds When ingestion runs Then each bulletin is mapped to the canonical schema with required fields populated: source_id, external_ids, title, affected_systems, vin_applicability_rules, labor_ops, parts, supersessions, effective_start/end_dates, region_codes, and full_text Given two bulletins share identical source identifiers or normalized fingerprints When normalization completes Then only one canonical record exists with merged source citations and feed provenance preserved Given a superseding bulletin is present in any source When normalized Then the record is marked current and all predecessors are linked via supersessions with reason and dates Given a bulletin is missing any required canonical field When validated Then ingestion rejects the record, logs a validation error with field-level details, and exposes it via the ingestion errors dashboard
Incremental Updates via ETag/If-Modified-Since
Given a prior successful pull with stored ETag and Last-Modified When the next scheduled pull executes Then requests include If-None-Match and If-Modified-Since and skip downloads on 304 Not Modified while logging no-change stats Given a bulletin changed since the last pull When fetched Then a 200 response is processed, the canonical record is updated, and a new revision is created reflecting only the changed fields in the diff Given the source enforces rate limits When HTTP 429 is returned Then the system applies exponential backoff and resumes within the configured window without data loss, and metrics record the throttling event Given transient network or 5xx errors occur When pulling updates Then the system retries up to 5 times with exponential backoff; on final failure it raises an alert and schedules a catch-up pull
Webhook Ingestion with Verification, Retries, and Alerting
Given an OEM webhook notification is received When processing begins Then the HMAC signature and allowed source IPs are verified before acceptance; invalid signatures result in a 401 and no processing Given a valid webhook references a bulletin update When processed Then the bulletin is fetched, normalized, revisioned, and the webhook is acknowledged within 2 seconds P95 Given the same webhook is delivered more than once When handled Then processing is idempotent and does not create duplicate records or revisions Given a transient failure occurs during webhook processing When retried Then up to 5 attempts are made with exponential backoff; on exhaustion an alert is sent to on-call with source, bulletin ID, error, and correlation ID within 1 minute
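
A sketch of the signature check this criterion requires, using a constant-time comparison; the hex-encoded SHA-256 header format is an assumption, since OEMs vary in how they encode signatures.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_header: str) -> bool:
    """HMAC-SHA256 over the raw body; compare in constant time to avoid timing leaks."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# On failure the handler responds 401 and performs no processing, per the criterion.
```
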
Revision History, Diffing, and Source Citations
Given any change to a bulletin’s content or metadata When saved Then an immutable revision is appended capturing timestamp, actor (ingestor), source feed, deep link(s), and a field-level diff (added/removed/changed values) Given an API consumer requests a bulletin’s revisions When queried Then at least the last 20 revisions are retrievable with diffs, source citations, and deep links for each Given a bulletin is included in a claim packet When generated Then the packet contains the latest bulletin’s citation (publisher, doc ID, version/date) and deep link URL that resolves to the source
VIN Applicability Rules and Regional Variations
Given a bulletin specifies VIN ranges, model years, plant codes, calibrations, or build dates When ingested Then these are captured as structured, evaluable rules and not only as free text Given a VIN and a target region that match a bulletin’s rules When evaluated via API Then the bulletin is returned as applicable with the matched rule details Given a VIN or region that do not match a bulletin’s rules When evaluated Then the bulletin is not returned (or marked not applicable) and the decision is explainable Given multiple regional variants exist for a bulletin When normalized Then each variant is preserved with region code/locale and linked to its parent bulletin family
Historical Backfill and Supersession Chains
Given a backfill job for the last 10 years is started When executed Then all historical bulletins in scope are ingested and normalized with progress tracking and the job is resumable after interruption without duplicates Given supersession relationships span multiple years When backfilled Then chains are fully reconstructed so that only the latest bulletin is current and predecessors are marked superseded with dates Given a bulletin is withdrawn or replaced by OEM/NHTSA When detected Then the canonical record is updated to reflect withdrawn/replaced status, effective end date is set, and the source deep link is stored
Licensing Enforcement and Record-Level Access Controls
Given a licensed OEM feed is restricted to specific tenants When records are queried Then only authorized tenants can access those records; unauthorized requests receive 403 and the attempt is audit-logged Given a source requires attribution or retention limits When records are served or stored Then attribution fields are included and retention/expiry rules are enforced (e.g., records hidden or deleted after license expiry) with events logged Given an API token lacks source-level permission When listing sources or querying bulletins Then endpoints for that source are hidden or denied without leaking existence Given any read or write to bulletin data occurs When completed Then an audit log entry is recorded with actor, action, record, timestamp, source, and outcome
Diagnostic & Repair Recommendation Builder
"As a technician, I want approved diagnostics and parts suggestions tied to the bulletin so that I can follow OEM guidance and reduce rework."
Description

Generates actionable repair guidance from matched bulletins, including OEM-approved diagnostic steps, labor operation codes, standard labor times, required special tools, and validated part numbers with supersession awareness. Presents recommendations as a checklist that can be applied to a work order with one click, with regional/OEM variants resolved by VIN. Includes cost and coverage indicators (warranty, recall, CSP) and links to full OEM procedures. Provides structured data output to power downstream analytics and inventory reservations.
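
An assumed minimal shape for the structured output mentioned here (the v1.0 schema sections appear in the export criterion below); all identifiers and values are hypothetical examples, and real payloads would be validated against the published JSON Schema.

```python
recommendation_export = {
    "schema_version": "1.0",
    "vehicle": {"vin": "1ABCDEF2345678901"},
    "sources": [{"type": "TSB", "id": "TSB-24-001"}],                 # hypothetical ID
    "steps": [{"id": 1, "action": "Check battery voltage", "expected": ">= 12.4 V"}],
    "labor": [{"op_code": "A1234", "hours": 0.8}],                    # hypothetical op code
    "parts": [{"activePN": "PN-100B", "qty": 1, "supersession": {"chain": ["PN-100A"]}}],
    "coverage": [{"item": "A1234", "badge": "Warranty"}],
}
```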

Acceptance Criteria
VIN-Resolved OEM Variant Selection
Given a valid VIN and at least one DTC or symptom is present for the vehicle in FleetPulse When Bulletin Match returns candidate bulletins, recalls, and CSPs Then the system filters to only those applicable to the VIN’s OEM, market/region, model year, engine, drivetrain, and trim, recording exclusion reasons for each item filtered out And the selected variant determines diagnostic steps, parts, labor ops, and times presented And the recommendation generation completes in ≤ 2.0 seconds at the 95th percentile under a load of 50 concurrent requests And an audit log entry is created with VIN, selected source IDs, variant attributes, and timestamp
Checklist Generation with OEM Steps and Tooling
Given applicable sources are identified for the VIN When recommendations are generated Then a checklist is produced with ordered, numbered OEM-approved diagnostic steps, each step containing: step ID, action text, expected result, measurement fields (type and units), pass/fail criteria, and required special tools (with tool IDs) And labor operation codes and standard labor times are included per job as specified by the OEM And validated part numbers are included with supersession awareness, showing current active PN and linking deprecated PNs, plus required quantities And each checklist item is individually checkable with timestamp and user ID capture on completion And the checklist can be saved as draft and re-opened without data loss
One-Click Apply to Work Order
Given a user with permission to edit work orders views a generated recommendation When the user clicks Apply to Work Order Then tasks, labor ops, and parts (with quantities) are added to the selected work order atomically And any existing duplicate tasks or parts in the work order are deduplicated, merging quantities and preserving notes And coverage indicators and source citations are attached to the work order and claim packet And the operation completes in ≤ 1.0 second locally and ≤ 3.0 seconds end-to-end including API calls And a rollback is performed and an error message displayed if any step fails, with no partial updates persisted
Coverage Indicators and Cost Estimation
Given a VIN and dealership/fleet coverage configuration When recommendations include items tied to warranty, recall, or CSP programs Then each item displays a coverage badge (Warranty/Recall/CSP/Customer Pay) and link to the governing source And covered items show customer pay = 0 and carrier pay calculated per labor ops and parts at configured rates And out-of-coverage items show estimated costs using configured labor rates and parts pricing, with total and per-line subtotals And if program eligibility cannot be resolved, the item is flagged “Coverage Unknown” with a reason and is excluded from covered totals
Source Citation and Traceability
Given recommendations are generated from one or more sources When the user views details or exports the claim packet Then each step, labor op, and part line includes source citation(s) with OEM bulletin/recall/CSP IDs, publication dates, section/step references, and deep links to full procedures And the claim packet export includes a machine-readable citation manifest that lists all sources used And reviewers can access citation links without FleetPulse authentication (public OEM links or whitelisted portals)
Structured Data Export for Analytics and Inventory Reservations
Given a completed or saved recommendation When data is exported via API or message bus Then the payload conforms to schema version v1.0 with sections: vehicle (VIN), sources[], steps[], labor[], parts[] (including supersession: chain[], activePN, qty), coverage[], totals, timestamps, and user IDs And payload passes JSON Schema validation with 100% conformance And the inventory reservations service receives the parts list and returns reservation IDs or backorder statuses, which are stored on the work order And transient failures trigger retries with exponential backoff up to 3 attempts and are logged with correlation IDs
Offline/No-Match Handling and UX Feedback
Given no applicable bulletins, recalls, or CSPs are found for the VIN and inputs When the user attempts to generate recommendations Then the system displays a clear “No applicable sources found” state with troubleshooting options and does not produce a checklist And the action completes in ≤ 1.0 second without errors And telemetry records the no-match outcome and inputs for analytics And if source services are unavailable, a user-friendly error is shown with a retry option and no partial state is saved
Claim Packet Auto-Citation
"As a warranty administrator, I want claim packets to auto-include source citations and evidence so that reviewers can approve on the first pass."
Description

Automatically assembles a claim-ready packet containing matched bulletin/recall identifiers, quoted excerpts of relevant sections, source citations with deep links, ingestion timestamps, and bulletin version hashes. Attaches vehicle evidence (VIN, mileage, DTC freeze-frame data, symptom notes, inspection checklist results, photos) and recommended labor ops/parts. Supports exports to PDF and common EDI/portal formats, and embeds a machine-readable manifest for automated payer validation. Ensures packets are reproducible by snapshotting the exact bulletin revision used at time of claim.
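
A sketch of how the version hash and attachment checksums could be recorded in the manifest so a rebuild is verifiable; field names are assumptions rather than the published manifest schema.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_manifest(bulletin_snapshot: bytes, attachments: dict[str, bytes]) -> dict:
    """Hash the exact bulletin revision used at claim time plus every attachment."""
    return {
        "bulletin_version_hash": sha256_hex(bulletin_snapshot),
        "attachments": [{"id": k, "sha256": sha256_hex(v)} for k, v in attachments.items()],
    }

# Reproducibility: a rebuild must hash to the same values recorded at generation time.
print(json.dumps(build_manifest(b"<bulletin rev R content>", {"photo-1": b"<jpeg bytes>"}), indent=2))
```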

Acceptance Criteria
Auto-Citation with Deep Links and Version Hash
Given a vehicle with matched DTCs/symptoms to one or more bulletins/recalls When a claim packet is generated Then the packet includes for each match: identifier, title, publisher, publication date/effective range, and bulletin version hash And each citation contains a quoted excerpt of the relevant section (between 1 and 500 words) And each citation provides a deep link URL that returns HTTP 200 and resolves to the cited section And each citation records the ingestion timestamp in ISO 8601 UTC and the content source And the bulletin version hash in the packet equals the hash of the content snapshot used
Vehicle Evidence Aggregation and Integrity
Given a session with VIN, odometer reading, DTC freeze-frame data, symptom notes, inspection checklist results, and photos When a claim packet is generated Then the packet includes VIN, mileage with unit, event timestamp, and all DTCs with SAE codes and freeze-frame payloads And the packet includes inspection checklist results with itemized pass/fail, inspector identity, and timestamp And the packet includes technician notes as UTF-8 text and photos as image attachments with capture timestamps And each attachment is referenced by a stable ID and SHA-256 checksum in the manifest And the packet contains no missing or orphaned attachment references
Recommended Labor Ops and Parts with Source Traceability
Given matched guidance prescribing diagnostics, labor operations, and parts When a claim packet is generated Then the packet lists recommended diagnostics/labor operations with operation codes, descriptions, and standard time units And the packet lists recommended parts with part numbers, descriptions, and quantities And each recommended item cites its source bulletin/section anchor And in cases of conflicting guidance, items are included with a conflict flag and the applied priority rule is recorded in the manifest
Export to PDF and Portal/EDI Formats
Given a generated claim packet When the user exports to PDF Then the PDF renders all packet fields and citations with page numbers and includes the embedded machine-readable manifest And all source hyperlinks and attachment references in the PDF are clickable and functional And PDF generation completes within 10 seconds for packets up to 25 attachments totaling 50 MB When the user exports to each supported portal/EDI format Then each output file validates against its published schema with zero errors And each export includes all required fields, attachments or attachment references, and passes the built-in export validator
Embedded Machine-Readable Manifest Validation
Given a generated claim packet When the embedded manifest is extracted Then it validates against the FleetPulse Claim Manifest schema version 1.0.0 with zero validation errors And it includes canonical IDs, bulletin/recall identifiers, version hashes, ingestion timestamps, recommended items, and attachment checksums And a round-trip re-import of the manifest reconstructs the packet with field-for-field equivalence And the internal payer-validation routine returns OK with no missing or inconsistent fields
Revision Snapshot and Reproducibility
Given a claim packet generated at time T using bulletin revision R And the bulletin is updated to revision R+1 after T When the packet is re-opened or re-exported Then it continues to reference revision R with the same version hash and citations And a rebuild from the embedded manifest produces a byte-identical PDF and matching checksums for all attachments And audit logs record the snapshot revision, user, and timestamps for generation and export
Match Confidence & Explainability
"As a shop foreman, I want to see why a bulletin was matched and how confident it is so that I can trust or adjust the recommendation."
Description

Calculates and displays a confidence score for each match with transparent rationale (e.g., VIN range match, DTC correlation, symptom keyword overlap, bulletin recency). Highlights any conflicting applicability criteria and suggests additional data to raise confidence (e.g., capture build date or option code). Provides thresholds for auto-attach versus manual review and a feedback mechanism for users to confirm or reject matches, feeding a learning loop that refines rules and synonym dictionaries over time.
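
A weighted-factor sketch of the score and rationale; the weights are assumptions, since the criteria below only require a deterministic 0-100 score whose displayed factor contributions sum to 100%.

```python
# Assumed weights; the real model may be tuned by the learning loop described above.
WEIGHTS = {"vin_range": 0.40, "dtc_correlation": 0.35, "symptom_overlap": 0.15, "recency": 0.10}

def confidence(factors: dict[str, float]) -> tuple[float, dict[str, float]]:
    """factors: per-factor match strengths in [0, 1]. Returns (score 0-100, % contributions)."""
    raw = sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)
    contributions = {
        name: round(100 * WEIGHTS[name] * factors[name] / raw, 1) if raw else 0.0
        for name in WEIGHTS
    }
    return round(raw * 100, 1), contributions

score, breakdown = confidence(
    {"vin_range": 1.0, "dtc_correlation": 0.8, "symptom_overlap": 0.5, "recency": 0.9}
)
# score == 84.5: below the default AutoAttachThreshold of 85, so the match Needs Review.
```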

Acceptance Criteria
Confidence Score Calculation and Display
Given a vehicle with VIN, active DTCs, and reported symptoms and a set of candidate bulletins When Bulletin Match results are generated Then each match displays a confidence score between 0.0 and 100.0 with one decimal place And the score is deterministic for identical inputs And the rationale lists factor contributions for VIN range, DTC correlation, symptom keyword overlap, and bulletin recency And each factor displays its percentage contribution And the factor contributions sum to 100% ± 0.5% And the calculation timestamp is shown in the user's timezone
Applicability Conflict Highlighting
Given a bulletin match where any applicability criterion fails (VIN range, model year, engine, drivetrain, market, build date, option code) When viewing the match details Then a Conflicts section is displayed and visually emphasized And each failing criterion is listed with expected value/range and the vehicle's actual value And each conflict includes a source snippet citation from the bulletin And if no conflicts exist, a "No conflicts detected" indicator is displayed
Data Gap Suggestions to Raise Confidence
Given a match with confidence below the AutoAttachThreshold and missing or low-confidence data When the match details are opened Then the system displays up to three prioritized suggested data items to capture (e.g., build date, option code, trim) And each suggestion shows the estimated confidence lift if provided And upon entering the requested data and re-evaluating, the confidence updates within 2 seconds and shows the delta change And suggestions are hidden once confidence meets or exceeds the AutoAttachThreshold
Auto-Attach vs Manual Review Thresholds
Given tenant-level thresholds AutoAttachThreshold (default 85) and ReviewThreshold (default 60) When a bulletin match is evaluated Then if confidence >= AutoAttachThreshold and no conflicts exist, the match is auto-attached to the claim and labeled Auto-Attached And if ReviewThreshold <= confidence < AutoAttachThreshold or any conflicts exist, the match is flagged Needs Review and not auto-attached And if confidence < ReviewThreshold, the match is not attached and is hidden behind a "Show low-confidence" toggle And all attach decisions are logged with match id, thresholds, decision, user id (if applicable), and timestamp And threshold changes by a tenant admin take effect within 5 minutes and are audit logged
Reviewer Feedback Capture
Given a reviewer is viewing a bulletin match When they choose Confirm or Reject Then the system requires selection of a reason code from a configurable list and allows an optional comment up to 500 characters And the feedback is saved with user id, timestamp, vehicle id, match id, decision, and reason And the match status updates to Confirmed or Rejected accordingly And feedback can be reversed within 24 hours with full audit trail preserved
Learning Loop Updates Influence Future Matches
Given at least 10 feedback records on a specific bulletin–DTC pair in the last 30 days When the nightly learning job runs Then synonym dictionaries and match rules are updated using that feedback And on re-evaluation of the same fixture case after the job, confirmed pairs increase confidence by at least 5 points on average and rejected pairs decrease by at least 5 points on average And a change log entry is produced summarizing updated synonyms/rules and affected match counts
Claim Packet Rationale and Source Citation
Given a match that is auto-attached or reviewer-confirmed When a claim packet is generated (PDF and JSON exports) Then the packet includes the bulletin/recall/program identifier, publication date, and source URL And it includes the confidence score and a rationale summary covering VIN range match, DTC correlation, symptom overlap, and any applicability conflicts And the packet cites page/section references for applicability rules used And the citations render consistently in both export formats
Work Order & Scheduler Integration
"As a fleet manager, I want matched bulletins to populate work orders and parts lists so that I can schedule repairs without re-entering data."
Description

Enables one-click conversion of matched bulletin recommendations into work order tasks, including prefilled labor ops, parts lists, required tools, and estimated times. Syncs with FleetPulse’s scheduler to reserve bays and technicians, checks parts availability, and triggers purchase requests if stock is low. Tags work orders with coverage type (warranty/recall/CSP/customer pay) and updates maintenance history and repair-cost tracking upon completion. Maintains an audit trail linking the work order to the cited bulletin and claim packet.

Acceptance Criteria
One-Click Conversion Creates Draft Work Order with Prefilled Details
Given a vehicle with a matched bulletin recommendation and the user has create permissions When the user clicks "Convert to Work Order" from the Bulletin Match view Then a new draft work order is created and linked to the vehicle and the originating bulletin And the work order contains prefilled tasks including labor operation codes, parts list, required tools, and estimated labor times from the bulletin And the total estimated labor time equals the sum of task estimates And the work order can be opened from the confirmation toast within 3 seconds (p95) And the work order status is set to Draft and is editable prior to scheduling And duplicate prevention prompts the user if an open work order already exists for the same vehicle and bulletin
Scheduler Sync Reserves Bay and Technician Without Conflicts
Given a draft work order with estimated duration and required skills When the user selects a target date range and clicks "Reserve in Scheduler" Then the system assigns the earliest available bay and technician who meet skill and shift constraints without creating conflicts And the scheduled start and end times are written to the work order And conflicts (overlaps, blackout periods, technician time off) are detected and an alternative list of at least 3 next-available slots is suggested And reservations respect the organization's time zone settings And the reservation appears on the scheduler calendar within 2 seconds (p95) and is visible to the assigned technician
Real-Time Parts Check and Auto Purchase Request
Given a work order with a prefilled parts list When the user opens the Parts tab or triggers a stock check Then on-hand quantity and location are displayed for each part and items with insufficient stock are flagged And if on-hand plus inbound is less than required quantity, a purchase request is generated with suggested quantity to fulfill the shortfall and linked to the work order And duplicate purchase requests for the same work order and part are prevented And stakeholders with Parts Manager role receive a notification of the new purchase request And if parts ETA exceeds the scheduled start date, the system warns the user and offers to adjust the schedule
Coverage Type Tagging Drives Billing and Approvals
Given a matched bulletin with a known program type (warranty, recall, CSP) or a manual selection by the user When the work order is created from the bulletin Then the coverage type field is set to one of: Warranty, Recall, CSP, or Customer Pay And labor and parts line items are priced according to the coverage rules: Warranty/Recall/CSP billed to program payer; Customer Pay billed to fleet/customer rate card And customer-facing totals show $0 for covered items where applicable And work orders tagged Warranty/Recall/CSP include the bulletin citation and are marked as "Requires Claim Packet"
Work Order Completion Updates Maintenance History and Costs
Given an in-progress work order with actual labor and parts consumption recorded When the user marks all tasks complete and closes the work order Then the vehicle maintenance history is updated with task details, completion date, odometer, technician notes, and parts used And repair-cost tracking is updated with actual labor hours, labor cost, parts cost, taxes/fees, and total, attributed to the correct coverage type And inventory on-hand is decremented by the quantities consumed And the work order status changes to Completed and becomes read-only except for administrative corrections
End-to-End Audit Trail from Bulletin to Claim Packet
Given a work order created from a bulletin recommendation When creation, scheduling, parts requests, edits, and completion events occur Then an immutable audit log entry is recorded for each event with timestamp, user, action, and before/after values And the work order retains links to the originating bulletin ID, TSB/recall/CSP identifiers, and the generated claim packet ID And the audit trail can be exported as PDF or JSON and attached to the claim packet And users without Audit permissions cannot alter or delete audit entries
Failure Handling and Rollback on External System Errors
Given scheduler or inventory services are temporarily unavailable When the user attempts to reserve a slot or run a parts availability check during conversion Then the system presents a clear error message and leaves the work order in Draft without partial reservations or duplicate purchase requests And all failed integrations are logged with correlation IDs And the system retries background synchronization up to 3 times with exponential backoff And the user may proceed with manual override (deferred scheduling or parts confirmation) and the work order is flagged for follow-up

Payout Optimizer

Simulates expected reimbursement across warranty, recall, and goodwill routes, recommending the best path by OEM policy. It proposes compliant labor times, operation codes, and parts pricing to maximize recovery while avoiding flags that trigger rework or denials.

Requirements

OEM Policy Rules Engine & Versioning
"As a warranty coordinator, I want the system to maintain OEM policy rules by VIN and effective date so that recommendations are always compliant and current."
Description

Implements a centralized, versioned repository of OEM warranty, recall, and goodwill policies, including TSBs, labor time tables, operation code catalogs, regional variations, VIN applicability ranges, effective dates, coverage limits, and submission rules. Supports automated ingestion from OEM feeds and manual curation with approvals. Ensures each recommendation references the exact policy snapshot in effect at the time of service. Integrates with FleetPulse’s vehicle catalog to map policies by VIN, model, trim, in-service date, and mileage, providing a single source of truth for compliance across simulations and exports.
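
A sketch of snapshot resolution by effective date, so a recommendation can pin the exact version in force at the time of service; the record shape is an assumption.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyVersion:
    snapshot_id: str
    version: int
    effective_from: date
    effective_to: date | None  # None = still in effect

def resolve_snapshot(versions: list[PolicyVersion], service_date: date) -> PolicyVersion | None:
    """Most recent approved version whose effective window covers the service date."""
    in_effect = [
        v for v in versions
        if v.effective_from <= service_date
        and (v.effective_to is None or service_date <= v.effective_to)
    ]
    return max(in_effect, key=lambda v: v.version, default=None)
```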

Acceptance Criteria
Automated OEM Policy Feed Ingestion & Versioning
Given an OEM policy feed containing new, updated, and invalid records When the scheduled ingestion job runs hourly and is also triggered manually Then a new immutable policy snapshot is created for successfully processed records, previous snapshots remain unchanged, and the snapshot is timestamped and versioned And the job completes within 15 minutes for a feed of up to 10,000 records And idempotent re-runs with the same input do not create duplicate snapshots or duplicate policy versions And invalid records are quarantined with machine-readable error codes and human-readable messages, and do not block valid records And a processing report is generated with counts of created, updated, superseded, and rejected records And all changes are traceable with source feed identifiers and checksums
Manual Policy Curation with Approval Workflow & Audit Trail
Given a curator edits or creates a policy entry through the admin UI When they submit the change for review Then the change moves to Pending Approval and is not used by simulations or exports until approved And an approver with appropriate permissions can Approve or Reject with mandatory comments And upon approval, a new version is created with incremented version number, effective timestamp, and immutable snapshot ID And the system records an audit trail capturing who, what, when, before/after diffs, and reason And rollback to any prior approved version is available and creates a new version with a reference to the rolled-back source And permissions prevent non-approvers from approving their own changes
VIN-Scoped Policy Resolution with Regional Variants
Given a vehicle VIN, dealer region, in-service date, and current mileage When the rules engine resolves applicable warranty/recall/goodwill policies Then only policies matching VIN range or decoded model/trim, the specified region, and the service date within effective date windows are returned And overlapping policies are prioritized by specificity (VIN-range over model-level), then by most recent effective version not after the service date And policies for other regions are excluded And if trim is unavailable, model-level fallback is used only when the policy allows fallback And the response includes the resolved policy IDs, snapshot ID, and the decision path used
Immutable Policy Snapshot Referenced in Recommendations & Exports
Given a Payout Optimizer simulation executed for a service date When recommendations are generated Then each recommendation references the exact policySnapshotId and policyVersion used for resolution And subsequent policy updates do not change the recommendation outcome or its references And re-running the same simulation inputs reproduces identical results when using the stored snapshot And exports (e.g., claim packages) include policySnapshotId and policyVersion for auditability And attempts to modify or delete a snapshot referenced by any recommendation are blocked with a clear error
Coverage Limits, Effective Dates, and Mileage Enforcement
Given a warranty policy with 3 years/36,000 miles coverage and an effective date range When resolving eligibility for a vehicle at 2 years and 35,000 miles Then the policy is marked eligible When resolving eligibility for the same vehicle at 3 years 1 month or 37,000 miles Then the policy is marked ineligible with explicit reasons (time exceeded, mileage exceeded) And unit handling respects regional units (miles vs. kilometers) with correct conversion and rounding rules per OEM And if the service date precedes the policy effective start or follows the end, the policy is marked ineligible with reason And goodwill and recall policies apply their distinct coverage rules as defined, with reasons returned
Submission Rule Validation API for Payout Optimizer
Given a draft claim containing operation codes, labor times, parts, prices, documentation, and a target OEM When the draft is validated via the rules engine API Then the response returns pass/fail per rule, corrective guidance, and a list of required missing artifacts And only operation codes and labor times permitted by the applicable policy snapshot are accepted; disallowed items include explicit violation codes And pricing validations use the policy snapshot's parts pricing and discount rules And the API responds within 500 ms p95 under 100 concurrent requests and scales to 10 requests/second sustained without error And the same inputs against the same snapshot produce deterministic results
Operation Codes, Labor Times, and Parts Catalog Consistency
Given an operation code resolved for a VIN and region When the rules engine looks up labor time tables and parts pricing Then the returned labor times, allowed operations, and parts lists match the policy snapshot in effect for the service date and region And deprecated operation codes are flagged with their superseding codes per OEM mapping And labor time rounding and caps follow OEM-specific rules (e.g., tenths, max hours) And exported recommendations format codes, times, and currency per OEM submission requirements And any discrepancy between op code and labor table versions is detected and blocked with a clear error
Eligibility Matcher by VIN & Service History
"As a fleet manager, I want automatic eligibility checks against warranty, recall, and goodwill criteria so that I know which reimbursement paths are available for each repair."
Description

Determines coverage eligibility for each repair event by evaluating VIN-specific factors (in-service date, mileage, prior claims), telematics signals (DTCs, freeze-frame, battery/engine/brake anomalies), maintenance history, and active recalls/campaigns. Flags edge conditions such as aftermarket modifications, missed maintenance intervals, or prior denials that can affect eligibility. Outputs eligible paths (warranty, recall, goodwill) with reasons and required evidence, feeding downstream simulation and compliance checks.
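
A sketch of the warranty-path check using the 36-month/36,000-mile example from the criteria below; the month arithmetic and reason codes mirror those scenarios, and real limits would come from the rules engine rather than defaults.

```python
from datetime import date

def warranty_eligibility(in_service: date, ro_open: date, odometer_miles: int,
                         months: int = 36, miles: int = 36_000) -> tuple[bool, list[str]]:
    """Returns (eligible, reasons) in the shape the criteria below expect."""
    reasons = []
    elapsed = (ro_open.year - in_service.year) * 12 + (ro_open.month - in_service.month)
    if elapsed > months or (elapsed == months and ro_open.day > in_service.day):
        reasons.append("age_exceeded")
    if odometer_miles > miles:
        reasons.append("mileage_exceeded")
    return (not reasons, reasons or ["within_months", "within_miles"])

# From the criteria: in-service 2023-01-01, RO opened 2025-12-31 at 35,900 mi -> eligible.
print(warranty_eligibility(date(2023, 1, 1), date(2025, 12, 31), 35_900))
```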

Acceptance Criteria
Warranty Eligibility by VIN, In‑Service Date, and Odometer
Given OEM policy config for the VIN’s OEM: base_warranty_months=36, base_warranty_miles=36000, start=in_service_date And vehicle VIN "1ABCDEF2345678901" with in_service_date "2023-01-01" And RO open date "2025-12-31" and odometer "35,900" When the Eligibility Matcher runs Then it returns path "warranty" as Eligible=true with reasons ["within_months","within_miles"] and required_evidence includes ["RO_open_timestamp","odometer_photo"] Given the same vehicle And RO open date "2026-01-02" and odometer "34,000" When the Eligibility Matcher runs Then it returns path "warranty" as Eligible=false with reasons ["age_exceeded"] and includes cutoff_date "2026-01-01" Given the same vehicle And RO open date "2025-06-01" and odometer "36,001" When the Eligibility Matcher runs Then it returns path "warranty" as Eligible=false with reasons ["mileage_exceeded"] and includes limit_miles=36000 and observed_miles=36001
Recall Campaign Eligibility and Prior-Claim Exclusion
Given OEM recall feed lists campaign "R-123" status "OPEN" for the VIN And claim history shows no completion of R-123 When the Eligibility Matcher runs Then it returns path "recall" as Eligible=true with reasons ["campaign_open"] and required_evidence ["campaign_id:R-123","campaign_bulletin","VIN_lookup_proof"] Given OEM recall feed lists campaign "R-123" status "OPEN" for the VIN And claim history shows completion date "2024-02-10" When the Eligibility Matcher runs Then it returns path "recall" as Eligible=false with reasons ["campaign_already_performed"] and includes completion_reference_id Given recall feed returns no open campaigns for the VIN When the Eligibility Matcher runs Then it does not return a "recall" path
Goodwill Path Suggestion Based on Policy and Maintenance Compliance
Given OEM policy config goodwill_window_percent=10 and goodwill_max_age_over_months=6 And the vehicle exceeds base warranty by 5% miles and 2 months And maintenance history shows last 3 services within <=7500 miles and <=6 months intervals And no disqualifying flags are present When the Eligibility Matcher runs Then it returns path "goodwill" as Eligible=true with reasons ["near_miss","maintenance_compliant"] and required_evidence ["maintenance_records","dealer_notes","photos"] Given the vehicle exceeds warranty by >10% miles or >6 months When the Eligibility Matcher runs Then it returns path "goodwill" as Eligible=false with reasons ["beyond_goodwill_window"] Given maintenance history shows a missed interval exceeding grace_percent=20 When the Eligibility Matcher runs Then it returns path "goodwill" as Eligible=false with reasons ["missed_maintenance_interval"]
Telematics Evidence Correlation with Repair Event
Given telematics shows DTC "P0301" with freeze-frame timestamp within 7 days before RO open And the repair operation code targets a cylinder 1 misfire repair When the Eligibility Matcher runs Then it attaches evidence ["DTC:P0301","freeze_frame","time_window_ok"] to applicable paths and stores evidence timestamps and sensor values Given no DTC within the last 30 days or only DTCs unrelated to the operation code When the Eligibility Matcher runs Then it does not attach telematics evidence and adds reason ["no_correlated_telematics"] where applicable Given anomaly detection flagged battery voltage sag >20% within 48 hours of RO open for a battery-related repair When the Eligibility Matcher runs Then it attaches evidence ["battery_anomaly"] and adds reason ["objective_evidence_present"] to the eligible path
Edge Condition Flagging: Aftermarket Mods, Missed Maintenance, Prior Denials
Given service history indicates an aftermarket tuner affecting powertrain When the Eligibility Matcher runs Then it adds flag ["aftermarket_mod_powertrain"] and sets path "warranty" as Eligible=false for powertrain-related repairs with reasons ["aftermarket_mod_disqualifier"] Given a missed maintenance interval beyond grace_percent=20 per policy When the Eligibility Matcher runs Then it adds flag ["missed_maintenance"] and sets path "warranty" as Eligible=false for related component failures and path "goodwill" as Eligible=false Given a prior claim denial with the same symptom_code within cooldown_days=180 When the Eligibility Matcher runs Then it adds flag ["prior_denial_recent"] and sets path "warranty" as Eligible=false with reasons ["repeat_denial_cooldown"] Given a prior approved claim older than cooldown_days=180 When the Eligibility Matcher runs Then it does not add a disqualifying prior-claim flag
Output: Eligible Paths with Reasons and Required Evidence
Given any evaluation request with vin, ro_id, and context When the Eligibility Matcher completes Then it returns a payload matching schema: {paths:[{type in ["warranty","recall","goodwill"], eligible:boolean, reasons:[string], required_evidence:[string]}], evaluation_id:string, vin:string, ro_id:string} And for each returned path where eligible=false, reasons is non-empty And for each returned path where eligible=true, required_evidence is populated with actionable items Given evaluation_id returned in a prior response When a downstream consumer retrieves the evaluation Then the payload is retrievable and immutable and is published to the Payout Optimizer within <=2000ms at the 95th percentile Given reasons include policy references When the payload is generated Then each reason includes policy_code and clause_reference when available
Claim Path Simulator & Payout Forecast
"As a service advisor, I want a forecast of expected payout and approval likelihood across claim paths so that I can choose the route that maximizes recovery and minimizes rework."
Description

Calculates expected reimbursement for each viable path using policy rules, labor times, parts pricing, shop/warranty rates, deductibles, caps, and likelihood of approval. Produces a recommended path with confidence, sensitivity analysis (e.g., varying labor time or parts selection), and projected cycle time. Leverages historical outcomes where available and integrates with FleetPulse repair-cost tracking to quantify savings and update forecasts over time.
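
A sketch of the per-path expected-value calculation; the deductible-then-cap ordering here is an assumption, since the edge-case criterion below requires the policy-defined sequence, and all figures are illustrative.

```python
def expected_recovery(labor: float, parts: float, deductible: float,
                      cap: float | None, approval_prob: float) -> float:
    """Expected net reimbursement: (gross - deductible, capped) x P(approval)."""
    net = max(labor + parts - deductible, 0.0)
    if cap is not None:
        net = min(net, cap)  # per-claim cap; real ordering comes from the policy snapshot
    return round(net * approval_prob, 2)

paths = {
    "warranty": expected_recovery(420.0, 310.0, 0.0, None, 0.92),     # 671.60
    "goodwill": expected_recovery(420.0, 310.0, 100.0, 500.0, 0.55),  # 275.00
}
best = max(paths, key=paths.get)  # recommend the highest expected net recovery
```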

Acceptance Criteria
Multi-Path Reimbursement Calculation and Recommendation
Given a repair order with VIN, concern/cause/correction, labor operations, parts list, and applicable shop and warranty rates When the simulator is executed Then it computes expected reimbursement for each eligible path (warranty, recall, goodwill) And it outputs a per-path breakdown (labor, parts, deductibles, caps, taxes, total) And it recommends the path with the highest net recovery And it displays a confidence score (0–100%) for the recommendation And it records the timestamp and policy ruleset version used
Policy Compliance and Denial Risk Guardrails
Given an OEM policy library is loaded for the vehicle and concern When generating operation codes, labor times, and parts pricing for a selected path Then all suggested operation codes are valid for the OEM and model year And labor times are within policy bounds for the operation And parts pricing adheres to OEM pricing and markup rules And any non-compliance is flagged with the specific violated rule and remediation And non-compliant configurations are excluded from recommendations
Likelihood of Approval Modeling with Historical Outcomes
Given historical outcomes for similar claims exist (same OEM, platform, symptom, and repair) When computing likelihood of approval Then the probability is derived from historical outcomes with time-decay weighting And the confidence score references this probability source And if no relevant history exists, baseline OEM approval priors are used and marked as such And the model version, features used, and sample size are logged
Sensitivity Analysis on Labor Time and Parts Selection
Given a baseline forecast and recommendation are available When the user varies labor time by ±10% or toggles OEM vs aftermarket parts Then the forecast recalculates within 2 seconds And deltas for payout, approval likelihood, and cycle time are displayed And recommendation change points are highlighted And the user can export a comparison table (CSV) of baseline vs variants
Projected Cycle Time per Path
Given shop capacity, OEM approval SLAs, and parts ETAs are available or defaulted When computing projections per path Then a projected cycle time is shown with components (approval, parts, repair) And missing inputs are surfaced with defaults clearly labeled And the recommendation incorporates downtime cost weighting if provided And an error margin is displayed with the underlying assumptions
Integration with Repair-Cost Tracking and Savings Attribution
Given the claim is completed and actuals are recorded in FleetPulse When actual costs and reimbursements are linked to the forecast Then forecast vs actual variance is computed per component and in total And model bias is updated using the new data point And savings attributed to the chosen path are written to vehicle- and fleet-level reports And an alert is raised if the mean absolute percentage error (MAPE) exceeds 15% over the last 20 claims
Edge Cases: Deductibles, Caps, and Rate Application
Given scenarios with deductibles, per-claim caps, and differing shop vs warranty rates When calculating totals Then rates are applied correctly per labor type and path And deductibles and caps are applied in the policy-defined sequence And totals are accurate to within $0.01 against known test vectors And unit tests cover at least 90% of permutations across these parameters And paths yielding net negative recovery are not recommended
Repair Procedure & Pricing Recommendation
"As a technician or service writer, I want suggested op codes, labor times, and parts pricing that comply with OEM rules so that I can build accurate, compliant RO lines quickly."
Description

Maps diagnostics (DTCs, inspection notes) to OEM-compliant operation codes, labor procedures, and standard labor times, and recommends part numbers with compliant pricing (warranty rates, list, core handling, markup caps, and taxes). Supports bundling related operations, detecting mutually exclusive procedures, and proposing required documentation artifacts per operation. Allows controlled overrides with justification while preserving compliance checks.
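
A sketch of the bundling arithmetic (gross labor, overlap deductions, net labor) surfaced in the criteria below; per-operation overlap values are assumed inputs supplied by OEM overlap rules.

```python
def net_labor_hours(ops: list[dict]) -> tuple[float, float, float]:
    """Returns (gross, overlap_deduction, net) across bundled operations."""
    gross = sum(op["hours"] for op in ops)
    overlap = sum(op.get("overlap_deduction", 0.0) for op in ops)
    return gross, overlap, round(gross - overlap, 1)

# e.g. two related ops sharing teardown time: 2.5 h gross, 0.6 h overlap, 1.9 h net.
print(net_labor_hours([{"hours": 1.5}, {"hours": 1.0, "overlap_deduction": 0.6}]))
```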

Acceptance Criteria
Operation Code & Procedure Mapping
Given a VIN that decodes to a specific OEM/model/year/engine and a set of DTCs plus inspection notes When the system generates recommendations Then it returns at least one OEM-compliant operation code with linked labor procedure(s) valid for the decoded configuration per the active OEM catalog And each recommendation includes the OEM source/reference ID and catalog version And ambiguous inputs produce up to three ranked options with disambiguation hints And the recommendation response time is <= 2 seconds per vehicle
Standard Labor Time Compliance
Given recommended operation code(s) for a decoded VIN When labor times are calculated Then the system selects the OEM standard time applicable to the VIN configuration and any relevant TSB/campaign rule And displays labor time units and the authoritative source reference And in warranty/recall routes, dealer custom times cannot be applied And any user-entered time variance > 0.1 hours is flagged for review
Parts Recommendation & Pricing Compliance
Given selected operation code(s) and a chosen payout route (warranty, recall, goodwill) When parts are determined and priced Then the system returns current OEM part numbers including supersessions and core handling requirements And calculates prices using the correct rate basis for the payout route (warranty rates, list) with jurisdictional taxes applied And enforces markup caps per OEM policy and flags any violation And provides an itemized breakdown (labor, parts, core, taxes, totals) suitable for submission
Bundling Related Operations
Given multiple DTCs and inspection notes on the same repair order When recommendations are generated Then the system bundles related operations using OEM overlap rules to avoid duplicate labor And shows gross labor time, overlap deductions, and net labor time And consolidates duplicate parts across bundled operations And displays expected payout delta compared to an unbundled scenario
Mutually Exclusive Procedure Detection
Given a set of selected operation code(s) When mutually exclusive procedures exist (e.g., repair vs replace for the same component) Then the system prevents simultaneous selection of conflicting procedures And presents the user with the conflict rationale and policy citation And suggests compliant alternative combinations that resolve the conflict
Required Documentation Artifacts Proposal
Given finalized operation code(s) and a payout route When preparing the submission checklist Then the system lists required documentation artifacts per operation (e.g., photos, test results, calibration logs, customer complaint, signatures) And specifies acceptable formats and minimum content for each artifact And prevents submission until all mandatory artifacts are attached and associated to their operation code(s) And records artifact completeness status per operation
Controlled Overrides with Compliance Re-check
Given a user with override permissions When the user changes an operation code, labor time, or pricing element Then the system requires a justification and, where policy dictates, an attachment And records the override in an immutable audit log with user, timestamp, before/after values And re-runs compliance checks and blocks submission for non-overridable violations And, where possible, proposes compliant alternatives to satisfy the requested change
Compliance Validator & Denial Risk Scoring
"As a warranty administrator, I want pre-submission compliance checks and denial risk scores so that I can fix issues before submission and avoid rejections."
Description

Performs pre-submission validation against OEM portal rules and common denial triggers, verifying required data and artifacts (photos, DTC snapshots, mileage and time stamps, technician certifications, parts provenance). Scores denial risk based on patterns such as repeated claims, mismatched op codes, or out-of-window mileage and provides specific remediation steps. Integrates with FleetPulse inspections to ensure evidence is captured before submission.

Acceptance Criteria
Pre-Submission Evidence Validation and Attachment
Given a claim draft with VIN, mileage, DTCs, technician, and parts list When the validator runs Then it verifies required artifacts exist: ≥3 photos (overall, component, odometer), a DTC snapshot, mileage timestamp, technician certification, and parts provenance, returning pass/fail per artifact Given each artifact passes the existence check When format checks run Then photos are JPG/PNG ≥1024x768 with EXIF timestamp; DTC snapshot includes codes and freeze-frame; mileage includes timestamp and units; technician cert is unexpired; parts provenance includes supplier, lot/serial, and purchase evidence Given artifacts are missing or invalid When results are displayed Then specific error codes and remediation text are shown per artifact and claim submission is blocked Given a FleetPulse inspection within the last 7 days or 300 miles When validation runs Then matching photos and DTC snapshot auto-attach to the claim and pass the existence check Given a claim with ≤20 artifacts When validation runs Then results are returned within 2 seconds at the 95th percentile
OEM Rule Compliance for Operation Codes, Labor, and Pricing
Given the OEM, vehicle VIN/model-year, and chosen route (warranty/recall/goodwill) When rule validation runs Then the operation code is valid for the OEM, vehicle, symptom/DTC, and route; otherwise the validator fails with a specific code and message Given proposed labor times When compared to OEM standards Then labor time ≤ OEM standard for warranty/recall and ≤ OEM standard + 10% for goodwill; otherwise flagged as noncompliant Given proposed parts pricing When checked against OEM policy Then markup is within the OEM-allowed range for the route; otherwise flagged with the delta Given a VIN with an open recall/campaign When route selection conflicts with OEM policy Then the system recommends the recall route and filters the op-code list accordingly Given out-of-window warranty mileage/time When the route is warranty Then the validator fails warranty compliance, increases denial risk, and suggests goodwill route evaluation
Denial Risk Scoring and Explainability
Given a complete claim draft When denial risk scoring runs Then a score from 0–100 is returned with bands: Low (0–29), Medium (30–69), High (70–100) Given a score is returned When the user opens Risk Details Then the top 5 contributing factors with weights and the exact data points driving them are displayed Given the same claim input When rescored within the same validator model version Then the score is deterministic within ±1 point Given scoring request volume ≤ 20 RPS When scoring runs Then p95 latency is < 300 ms and error rate < 0.5% Given a High risk band When attempting submission Then a warning is shown and submission requires remediation or an authorized override
Actionable Remediation and One-Click Revalidation
Given any failed validation or high-impact risk factor When the user views Remediation Then step-by-step actions with deep links are shown (capture missing photo, correct op code from OEM list, adjust labor, upload certification, attach parts invoice) Given one or more remediation actions are completed When the user clicks Revalidate Then the validator reruns and updates pass/fail and risk within 2 seconds p95 without a full page reload Given all blocking issues are resolved and risk ≤ 69 When validation completes Then the claim status indicator turns green and submission is enabled Given only non-blocking advisories remain When proceeding to submission Then submission is allowed and advisories are logged to the claim
Submission Guardrails and Role-Based Override
Given critical validation failures (e.g., missing required artifacts, invalid operation code) When attempting to submit Then submission is blocked for all roles with explicit blocking reasons Given a user with Supervisor role and at least one noncompliant item When selecting Override Then a justification of ≥20 characters is required and recorded with timestamp, user ID, claim ID, overridden items, and current risk score Given an override submission When the claim is sent Then the audit trail stores cryptographic hashes of the pre- and post-override payloads and the validator model version Given an overridden submission When viewing dashboards Then the claim displays an Overridden tag and the current risk band
Inspection Workflow Integration and Evidence Freshness
Given a vehicle with a FleetPulse inspection When creating a claim Then the system prompts to link the most recent inspection within the last 7 days or 300 miles; otherwise prompts to initiate a new inspection Given a linked inspection When validation runs Then verify photos include VIN plate and odometer images and that their timestamps are ≤7 days before submission Given an inspection captured offline on mobile When the device reconnects Then evidence syncs within 5 minutes and the validator auto-runs, updating validation status Given the inspection VIN does not match the claim VIN When linking is attempted Then linking is blocked with a specific error
Comprehensive Audit Package for OEM Disputes
Given a submitted claim When the user requests Export Package Then a ZIP is generated within 30 seconds including claim JSON, all evidence files, validation report, denial risk report, OEM ruleset version, model version, and the audit log Given the package is generated When integrity verification runs Then a manifest with SHA-256 hashes per file and a master hash is included and validates Given data retention policy of 24 months When claim age < 24 months Then all artifacts are accessible; otherwise archived indicators are shown and retrieval SLA is ≤ 24 hours Given the export includes personal data When the package is built Then PII is redacted per policy except fields required by OEM, and redaction is logged
Submission Package Export & OEM Integrations
"As a dealer liaison, I want to export submission packages to OEM portals and DMS formats so that I can submit claims without re-entering data."
Description

Generates submission-ready claim packages for OEM portals and DMS systems in required formats (XML/EDI/CSV/PDF), including attachments and metadata. Provides per-OEM templates, sandbox/test modes, error handling, retries, and audit-friendly checksums. Offers API and webhook endpoints to push approved claims to partner systems and to receive status updates, reducing rekeying and submission time.

Acceptance Criteria
OEM XML Package Generation with Attachments
Given an approved claim for OEM A with all required fields and attachments present When the user selects "Export Package" with template "OEM A XML vX.Y" Then the system generates a ZIP containing claim.xml, attachments/, manifest.json, and checksum.txt And claim.xml validates against the OEM A XSD with zero errors And each attachment is included with correct filename, MIME type, and reference IDs that match manifest.json And checksum.txt contains SHA-256 for claim.xml, each attachment, and a package-level SHA-256 And the export uses UTF-8 encoding and consistent line endings (LF)
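
A minimal sketch of the package assembly described above, assuming hypothetical helper names (build_package, sha256_hex) and omitting XSD validation and MIME metadata: each file is hashed with SHA-256, manifest.json records the per-file digests, and checksum.txt adds a package-level digest computed over the files in a stable order.

    import hashlib
    import json
    import zipfile

    def sha256_hex(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def build_package(zip_path: str, claim_xml: bytes, attachments: dict[str, bytes]) -> None:
        # Collect package members; attachment names mirror the attachments/ folder.
        files = {"claim.xml": claim_xml}
        files.update({f"attachments/{name}": data for name, data in attachments.items()})
        manifest = {name: sha256_hex(data) for name, data in files.items()}
        # Package-level hash over members in a stable (sorted) order for reproducibility.
        package_hash = sha256_hex(b"".join(files[name] for name in sorted(files)))
        checksum_lines = [f"{digest}  {name}" for name, digest in manifest.items()]
        checksum_lines.append(f"{package_hash}  PACKAGE")
        with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for name, data in files.items():
                zf.writestr(name, data)
            zf.writestr("manifest.json", json.dumps(manifest, indent=2))
            zf.writestr("checksum.txt", "\n".join(checksum_lines) + "\n")
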
Multi-Format Export Compliance (XML/EDI/CSV/PDF)
Given an approved claim and OEM B template "v3.1" is selected When the user selects export formats XML, CSV, and PDF Then the XML output validates against OEM B schema and uses UTF-8 encoding And the CSV file uses comma delimiter, UTF-8 encoding, a header row matching the mapping spec, and one row per operation line And the PDF is text-searchable, includes the claim ID in the header of each page, and renders all required fields and attachment references And filenames follow the convention {claimId}_{format}.{ext} And the export reports individual pass/fail results per artifact
Sandbox/Test Mode Submission Routing
Given Test Mode is enabled for OEM submissions When the user submits a claim package to OEM C Then the request is sent to the OEM C sandbox endpoint with the required test flag in the payload And production credentials are not used And the UI displays a "Test submission" banner and prevents mixing test and production items in the same batch And the submission is excluded from production payout calculations while still generating status updates
Resilient Error Handling and Retries
Given a transient failure (HTTP 5xx or network timeout) occurs during submission When the system pushes the package to an OEM endpoint Then the system retries up to 3 times with exponential backoff (2s, 4s, 8s) And on success, only one external submission record exists due to idempotency key {claimId}-{destination} And on validation errors (HTTP 4xx with error codes), the system does not retry and returns field-level error messages And the user can reattempt after fixing errors without creating duplicate records
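
A minimal sketch of this retry policy; send, TransientError, and ValidationError are hypothetical stand-ins for the HTTP client and its error mapping. The idempotency key is derived deterministically from {claimId}-{destination} so repeated attempts collapse to a single external submission record.

    import time

    RETRY_DELAYS = (2, 4, 8)  # seconds of exponential backoff between the 3 retries

    class TransientError(Exception): ...   # maps HTTP 5xx / network timeout
    class ValidationError(Exception): ...  # maps HTTP 4xx with error codes

    def submit_with_retries(send, claim_id: str, destination: str, package: bytes):
        key = f"{claim_id}-{destination}"  # deterministic idempotency key
        for delay in (*RETRY_DELAYS, None):
            try:
                return send(package, headers={"Idempotency-Key": key})
            except ValidationError:
                raise              # 4xx: surface field-level errors, never retry
            except TransientError:
                if delay is None:
                    raise          # initial attempt plus 3 retries exhausted
                time.sleep(delay)
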
Partner API Push and Secure Webhooks
Given a partner is registered with an API key, webhook URL, and shared secret When the partner calls POST /api/v1/claims/{id}/submit with a valid token Then the API responds 202 Accepted and enqueues the submission job And outbound webhooks include claim status updates with HMAC-SHA256 signature header X-Signature and unique event_id And the system deduplicates webhook events by event_id for 24 hours and acknowledges with 2xx only after processing And all inbound and outbound traffic requires TLS 1.2+; weaker protocols are rejected with 426 Upgrade Required
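
A sketch of the signing and deduplication behavior above, using Python's standard hmac module; the in-memory _seen dict is a stand-in for a shared store such as Redis.

    import hashlib
    import hmac
    import time

    def sign_webhook(secret: bytes, body: bytes) -> str:
        # Value carried in the X-Signature header on outbound events.
        return hmac.new(secret, body, hashlib.sha256).hexdigest()

    def verify_webhook(secret: bytes, body: bytes, signature: str) -> bool:
        # compare_digest gives a constant-time comparison.
        return hmac.compare_digest(sign_webhook(secret, body), signature)

    _seen: dict[str, float] = {}  # event_id -> expiry time

    def is_duplicate(event_id: str, ttl_s: float = 86400.0) -> bool:
        # Deduplicate webhook events by event_id for 24 hours.
        now = time.monotonic()
        expiry = _seen.get(event_id)
        if expiry is not None and expiry > now:
            return True
        _seen[event_id] = now + ttl_s
        return False
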
Audit Trail, Checksums, and Reproducibility
Given a claim package has been exported or submitted When an auditor views the audit details Then the log shows timestamp, user/actor, OEM template name and version, destination endpoint, payload hash (SHA-256), attachment hashes, and response codes And recalculating hashes from stored artifacts matches the recorded hashes And re-exporting the package using the same template version and source data produces identical bytes (hash equality) And any hash mismatch raises an alert and blocks further submission until acknowledged
Per-OEM Template Selection, Mapping, and Validation Rules
Given OEM D with template version v2.3 is selected When the user previews or exports the package Then only fields required by OEM D v2.3 are included and mapped per the template specification And deprecated fields from prior versions are excluded And client-side validation blocks export with a list of missing or invalid fields if requirements are not met And switching the template version updates the mapping, preview, and validation rules immediately
Audit Trail, Explainability & Governance
"As an auditor or manager, I want traceable explanations and logs of decisions and overrides so that I can defend claims and meet governance requirements."
Description

Captures a complete, immutable audit trail of inputs, rule versions, simulation parameters, recommendations, user overrides, and exports. Presents an explainability panel that cites the policies and data points that drove each recommendation, enabling dispute resolution and training. Enforces role-based access, PII redaction, and retention policies, with exportable logs for compliance reviews.

Acceptance Criteria
Immutable Audit Log for Payout Simulation Runs
Given a user with permission "Run_Simulation" executes the Payout Optimizer for a vehicle or claim When the simulation completes Then the system appends a new audit record with fields: runId (UUIDv4), claimId (optional), vehicleId, maskedVIN (last 6 visible), userId, userRole, startedAt/endedAt (UTC ISO-8601), clientIp, inputPayloadHash (SHA-256), inputFieldList, rulesetVersion, policyBundleVersion, pricingCatalogVersion, simulationParameters, routeEvaluations, recommendedRoute, and recommendationId And the audit store is append-only and rejects updates or deletes via API, returning HTTP 405 for mutation attempts And each record is chained using previousRecordHash to produce an immutable hash chain; verifying the chain over the last 10,000 records succeeds with no gaps And retrieval of a specific run by runId returns within 2 seconds P95 for the last 90 days of data
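
A minimal hash-chain sketch, assuming audit records are JSON-serializable dicts: each appended record embeds the previous record's hash before being hashed itself, so verifying the chain is a single forward walk that recomputes every digest.

    import hashlib
    import json

    GENESIS = "0" * 64  # previousRecordHash for the first record

    def _digest(entry: dict) -> str:
        # Canonical JSON (sorted keys) so the hash is stable across writers.
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append_record(chain: list[dict], record: dict) -> dict:
        entry = {**record, "previousRecordHash": chain[-1]["recordHash"] if chain else GENESIS}
        entry["recordHash"] = _digest(entry)  # hash computed before the field is added
        chain.append(entry)
        return entry

    def verify_chain(chain: list[dict]) -> bool:
        previous = GENESIS
        for entry in chain:
            body = {k: v for k, v in entry.items() if k != "recordHash"}
            if entry["previousRecordHash"] != previous or _digest(body) != entry["recordHash"]:
                return False  # tampered record or gap in the chain
            previous = entry["recordHash"]
        return True
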
Explainability Panel with Policy and Data Citations
Given a recommendation is displayed for a completed run When the user opens the Explainability panel Then the panel lists the policies and clauses (policyId, clauseId, version, effectiveDate) that influenced the recommendation, with deep links to source documentation And each cited policy shows the specific data points used (e.g., DTC codes, mileage, in-service date, prior repairs) with their values at evaluation time And the panel shows how labor time and parts pricing were derived, including operation codes and calculations And any denial-risk flags are shown with the rule that triggered them And the panel displays the rulesetVersion and policyBundleVersion used for the run And the entire explanation can be exported as a JSON object and a PDF with the associated runId
Deterministic Replays via Versioned Snapshots
Given an auditor has permission "Replay_Run" and a target runId When the auditor replays the run using the stored rulesetVersion, policyBundleVersion, pricingCatalogVersion, and inputPayload snapshot Then the resulting recommendationId, recommendedRoute, and payout amounts match the original within a tolerance of ±$0.01 And the replay event is appended to the audit log with a replayOf reference to the original runId And if any referenced version is unavailable, the system returns an error with code REPLAY_VERSION_MISSING and does not proceed
User Override Capture, Justification, and Traceability
Given a user with "Override_Recommendation" permission views a completed run When the user overrides a recommended route or edits labor time, operation codes, or parts pricing Then the system requires a justification (min 20 characters) and at least one reason code from a configurable list And the system allows optional evidence attachments (PDF/JPG/PNG) up to 25 MB total And the override is stored as a new audit entry referencing the prior value, including before/after values, userId, userRole, timestamp, and evidence fingerprints (SHA-256) And the Explainability panel displays the override history and justification alongside the original recommendation And exports include override details unless the requesting role lacks "Export_Overrides" permission
RBAC Enforcement and PII Redaction in UI and Exports
Given predefined roles (Owner, Manager, Adjuster, Auditor, ReadOnly) and granular permissions When a user without "View_Audit_Log" attempts to access audit records Then access is denied with HTTP 403 and the attempt is itself logged And fields classified as PII (driverName, driverEmail, phone, address) are redacted by default in UI and exports, showing tokens or masked values And only users with "PII_View" may view unmasked PII, and such access events are logged with who, when, and which fields were revealed And VIN is masked to last 6 by default in UI, with full VIN visible only to "PII_View" holders
Compliance Log Exports, Signatures, and Retention
Given a user with "Export_Compliance" permission requests an export with filters (date range, vehicleId/VIN, userId, rule/policy version, claimId) When the export job is created Then the system generates a time-boxed export (max 1 million records or 2 GB per file) in CSV and JSON Lines with a manifest including schemaVersion, recordCount, createdAt, and SHA-256 checksums And the export is PII-redacted by default; including PII requires "PII_View" and explicit confirmation And the export is signed with an application certificate; the signature verifies successfully with the published public key And the download link is available within 5 minutes P95, expires after 24 hours, and all downloads are logged And logs are retained for 7 years by default with configurable retention and legal hold; expiry purges are automatic, logged, and verifiable by count

Claim Clock

Tracks filing windows, mileage caps, and documentation deadlines per OEM and vehicle. Get proactive alerts as coverage windows near expiry and nudges to submit now, so you never miss reimbursements due to timing or incomplete evidence.

Requirements

OEM Coverage Rules Engine
"As a fleet manager, I want accurate OEM coverage rules applied to each vehicle so that I know exactly what claims are eligible and when."
Description

A rules repository and engine that models OEM warranty programs, extended coverages, and recall policies, including filing windows, mileage/hour caps, documentation requirements, and exceptions by region or vehicle line. Rules are versioned with effective dates and support multiple OEMs per fleet, and the engine exposes a deterministic API to evaluate coverage for a given vehicle or service event. Integrates with FleetPulse VIN decoding and vehicle profiles to normalize OEM terminology, supports an admin UI for safe rule updates, and provides validation and unit-testable rule definitions to ensure accuracy.

Acceptance Criteria
Deterministic coverage evaluation for service event
Given a vehicle with a decoded VIN, associated OEM, regionCode, inServiceDate, and a service event payload containing eventDateTime (UTC), odometerMiles, and attachedDocuments[] When the client invokes POST /coverage/evaluate with that payload Then the API responds 200 within p95 <= 300 ms and p99 <= 700 ms for payloads <= 1 KB, and the response contains: coverageDecision in [Covered, NotCovered, NeedsMoreInfo], reasonCodes[], ruleId, ruleVersion, ruleEffectiveStart, ruleEffectiveEnd (nullable), filingWindowStart, filingWindowEnd (nullable), mileageCapMiles (nullable), hoursCap (nullable), regionApplied, docsRequired[], docsMissing[], submissionReadiness in [Ready, Blocked] And repeated calls with identical inputs return identical response bodies except for traceId and responseTimestamp fields And requests with invalid schema return 400 with error details; unknown VINs return 404; undecodable VINs return 422 with reasonCodes including 'VINDecodeFailed'
Effective-dated rule selection and backdating
Given an OEM has multiple Published rule versions with effectiveStart and effectiveEnd dates and a serviceEvent.eventDateTime When /coverage/evaluate is called Then the engine selects the Published rule where effectiveStart <= eventDateTime < effectiveEnd (a null effectiveEnd means open-ended), preferring the most recent effectiveStart when overlaps exist And the response echoes selected ruleId and ruleVersion and includes ruleEffectiveStart and ruleEffectiveEnd And if no Published rule covers eventDateTime, coverageDecision = NeedsMoreInfo with reasonCodes including 'NoActiveRuleForDate' and nearestFutureRuleVersion populated when available And unit tests include cases for: past-dated event, boundary at start, boundary at end, overlapping versions, and future-dated event with no active rule
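
A sketch of the selection rule, assuming a simple RuleVersion shape: the effective window test is start-inclusive and end-exclusive, a missing effectiveEnd leaves the window open, and overlapping candidates resolve to the most recent effectiveStart.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class RuleVersion:
        rule_id: str
        version: int
        effective_start: datetime
        effective_end: Optional[datetime]  # None = open-ended

    def select_rule(published: list[RuleVersion], event_at: datetime) -> Optional[RuleVersion]:
        candidates = [
            r for r in published
            if r.effective_start <= event_at
            and (r.effective_end is None or event_at < r.effective_end)
        ]
        if not candidates:
            return None  # caller maps this to NeedsMoreInfo / 'NoActiveRuleForDate'
        # Overlapping versions: prefer the most recent effectiveStart.
        return max(candidates, key=lambda r: r.effective_start)
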
VIN normalization and OEM term mapping
Given a VIN decodes to OEM-specific attributes (e.g., vehicleLine, powertrain, inServiceDate) and terminology When /coverage/evaluate is called Then the engine normalizes OEM terms to FleetPulse canonical fields and includes both canonical fields and oemRawTerms in the response And normalization failures set coverageDecision = NeedsMoreInfo with reasonCodes including 'VINNormalizationFailed' and field-level errors are returned And unit tests cover at least 3 OEMs with 2 vehicle lines each, validating canonical mappings for powertrain, vehicleLine, and inServiceDate derivation
Regional exceptions and vehicle line overrides
Given rules define regional exceptions and vehicleLine-specific overrides When a vehicle has regionCode and vehicleLine provided to /coverage/evaluate Then precedence is applied as: vehicleLine override > region-specific rule > global default And the response includes appliedPrecedence in ['vehicleLine','region','global'] and regionApplied And if regionCode is missing, the global default applies and reasonCodes includes 'RegionDefaulted' And unit tests verify conflicts where overrides change coverageDecision or cap values
Documentation validation and Claim Clock deadlines
Given a rule specifies docsRequired with acceptableTypes per document and a filing window relative to inServiceDate or eventDateTime When /coverage/evaluate is called with attachedDocuments[], eventDateTime, odometerMiles, and inServiceDate Then the response includes filingWindowStart, filingWindowEnd, docsRequired[], docsMissing[], and mileageCapMiles (nullable) And submissionReadiness = 'Ready' only when docsMissing is empty, eventDateTime is within [filingWindowStart, filingWindowEnd], and odometerMiles <= mileageCapMiles (if present) And alertLevel is 'Urgent' when days until filingWindowEnd <= 14, 'Upcoming' when <= 45, else 'Normal' And unit tests verify boundary conditions at 0, 14, and 45 days and at mileageCapMiles - 1, mileageCapMiles, and mileageCapMiles + 1
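
The alert banding reduces to a small threshold function; a sketch assuming day-granularity dates, with the 0/14/45-day boundaries from this criterion as the natural unit-test cases:

    from datetime import date

    def alert_level(today: date, filing_window_end: date) -> str:
        days_left = (filing_window_end - today).days
        if days_left <= 14:
            return "Urgent"
        if days_left <= 45:
            return "Upcoming"
        return "Normal"
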
Admin rule update workflow and safeguards
Given an admin proposes a ruleset change in the UI When the change is validated and published Then JSON schema validation passes, all linked unit tests pass, and linter reports 0 errors and <= 5 warnings And publishing increments ruleVersion, records changelog {actor, timestamp, summary}, and requires dual approval for Production while allowing single approval for Staging And a dry-run regression suite on Production publish shows <= 2% decision regressions vs prior Published version across the baseline dataset; otherwise publish is blocked And rollback restores the prior Published version within 2 minutes and emits an audit event with rollbackActor and reason
VIN-to-Coverage Mapping
"As a small fleet owner, I want each vehicle mapped to its coverages so that I can see remaining eligibility at a glance."
Description

A mapping service that links each vehicle’s VIN, in-service date, warranty start/end, odometer/engine hours, and ownership history to all applicable OEM and extended coverages. Automatically ingests VIN decode results and telematics odometer/engine-hours readings from FleetPulse, reconciles them with user-entered dates, and computes remaining time and mileage. Supports multiple concurrent coverages per vehicle, tracks data provenance, and exposes a clear coverage summary within each vehicle profile.

Acceptance Criteria
Auto-Ingest VIN Decode and Telematics Readings
Given a vehicle is added with a valid VIN and is linked to an active FleetPulse telematics device When the VIN decode event and the latest odometer and engine hours readings are received by the mapping service Then the vehicle’s make, model, year, engine, odometer, and engine hours are stored in the coverage mapping record And the mapping record is linked to the vehicle profile by VIN And the ingestion source for each stored field is recorded as VIN Decode or Telematics with timestamp and event ID
Reconcile User-Entered Dates with System Sources
Given user-entered in-service date and warranty start/end dates exist alongside OEM- or integration-sourced values for the same fields When the values differ Then the system flags the conflict and prompts an authorized user to select the active value for each conflicting field And the selected active value is marked with its source and rationale; non-selected values are preserved with source and timestamp And coverage computations use only the active value And an audit trail records who resolved the conflict and when
Support Multiple Concurrent Coverages per Vehicle
Given a vehicle has an OEM base warranty and at least one additional applicable coverage (e.g., extended drivetrain, emissions) When coverage records are ingested or entered Then each coverage is stored as a separate record linked to the vehicle And overlaps in effective dates are allowed without deduplication And each record includes coverage name, type, start date, end date, mileage cap, hours cap, and transferability flag And the vehicle profile coverage summary lists all active and future coverages
Compute Remaining Time, Mileage, and Engine Hours
Given a coverage record with start date, end date, mileage cap and/or engine hours cap, and the vehicle’s current odometer and engine hours When the system computes coverage remaining Then remaining days = max(0, end date − today) And remaining miles = null if no mileage cap, else max(0, mileage cap − current odometer) And remaining hours = null if no hours cap, else max(0, hours cap − current engine hours) And coverage status is Active if today is within start/end and remaining miles/hours (if present) > 0; otherwise Expired And computations update automatically when a newer odometer/hours reading arrives or dates are edited
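
The remaining-coverage math above translates directly into code; a sketch assuming a simple Coverage shape, with None standing in for an absent cap:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Coverage:
        start: date
        end: date
        mileage_cap: Optional[int]    # None = no mileage cap
        hours_cap: Optional[float]    # None = no engine-hours cap

    def coverage_remaining(c: Coverage, today: date, odometer: int, hours: float) -> dict:
        remaining_days = max(0, (c.end - today).days)
        remaining_miles = None if c.mileage_cap is None else max(0, c.mileage_cap - odometer)
        remaining_hours = None if c.hours_cap is None else max(0.0, c.hours_cap - hours)
        active = (
            c.start <= today <= c.end
            and (remaining_miles is None or remaining_miles > 0)
            and (remaining_hours is None or remaining_hours > 0)
        )
        return {
            "remaining_days": remaining_days,
            "remaining_miles": remaining_miles,
            "remaining_hours": remaining_hours,
            "status": "Active" if active else "Expired",
        }
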
Data Provenance and Auditability
Given any field in the coverage mapping (e.g., VIN decode attributes, dates, caps, ownership, odometer/hours) is created or updated When the change is saved Then the system stores source (User, Telematics, OEM, Import, API), source identifier, timestamp, and actor (user ID or integration) And previous values are retained in a change history with who and when And the coverage summary displays the current value’s source via an info indicator And an export endpoint returns current values with provenance and change history
Coverage Summary Display in Vehicle Profile
Given a vehicle profile is opened When the Coverage Summary section is viewed Then for each coverage the UI displays: name, type, status (Active/Future/Expired), start date, end date, mileage cap, hours cap, remaining days, remaining miles, remaining hours, and source indicators And a filter allows toggling All, Active, Future, and Expired And conflicting field indicators (if any) are shown with a link to resolve And clicking a coverage row opens a details view with full provenance and raw values
Deadline & Window Tracker
"As a maintenance coordinator, I want clear claim deadlines so that I can schedule repairs and filing before coverage expires."
Description

A computation service that derives and persists critical claim timelines per vehicle and event, including earliest filing date, last allowable filing date, and documentation submission deadlines. Recalculates dynamically as new odometer readings, engine hours, or service events arrive from FleetPulse. Supports time zones, business-day calculations with regional holidays, and OEM-specific grace-period logic. Provides timeline UI components and APIs for dashboards and reports.

Acceptance Criteria
Compute Filing Windows with Grace and Business Days
Given an OEM policy with earliest_wait_days=W (default 0), filing_window_days=X, documentation_due_days=Y, grace_business_days=G, business_day_rule="next_business_day", and region=R with holiday calendar HR And a claimable event at event_at_local in time_zone TZ When the tracker computes timelines Then earliest_filing_date = add_business_days(event_at_local.date, W, HR) adjusted per business_day_rule in TZ And last_allowable_filing_date = add_business_days(event_at_local.date, X + G, HR) in TZ And documentation_deadline = add_business_days(earliest_filing_date, Y, HR) in TZ And each deadline timestamp is set to 23:59:59 in TZ and stored in UTC
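
A sketch of the business-day arithmetic these formulas assume, with the holiday calendar HR modeled as a set of dates; pinning the result to 23:59:59 in TZ and converting to UTC is left to the caller:

    from datetime import date, timedelta

    def add_business_days(start: date, days: int, holidays: set[date]) -> date:
        # Count only weekdays that are not in the regional holiday calendar.
        current, remaining = start, days
        while remaining > 0:
            current += timedelta(days=1)
            if current.weekday() < 5 and current not in holidays:
                remaining -= 1
        return current

    def next_business_day(d: date, holidays: set[date]) -> date:
        # business_day_rule="next_business_day": roll forward off weekends/holidays.
        while d.weekday() >= 5 or d in holidays:
            d += timedelta(days=1)
        return d
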
Dynamic Recalculation on New Telemetry
Given an active timeline for vehicle V and event E computed at T1 with odometer O1 and engine_hours H1 And an OEM policy with mileage_cap=M and/or hours_cap=H When new telemetry arrives with odometer O2 and/or hours H2 at T2 > T1 Then if crossing a cap boundary changes any deadline, the service recalculates earliest_filing_date, last_allowable_filing_date, and documentation_deadline within 5 seconds of ingestion And creates exactly one new timeline version (version = previous + 1) with change_reason="telemetry_update" And if O2/H2 are stale (timestamp < last processed), no recalculation occurs and no new version is created And duplicate telemetry messages (same source_id and timestamp) are idempotently ignored
Timeline Persistence and Versioning
Given a computed timeline for vehicle V and event E When persisting to storage Then the record includes vehicle_id, event_id, oem_id, version (>=1), active (boolean), computed_at (UTC), effective_from (UTC), effective_to (nullable UTC), change_reason, and source_event_id And only one active version exists per (vehicle_id, event_id) at any time And superseding a timeline sets previous.active=false and previous.effective_to=now() And retrieving history by (vehicle_id, event_id) returns all versions ordered by version asc with no gaps
Regional Holidays and Business-Day Adjustment
Given region=R with holiday calendar HR and business_day_rule="next_business_day" unless overridden by OEM And any computed deadline lands on a weekend or HR holiday When applying business-day adjustment Then the deadline moves to the next business day per HR (or per OEM override rule) And vehicles in different regions R1 != R2 use their respective calendars; a holiday in R1 does not affect deadlines in R2
Time Zone Normalization and DST Safety
Given an event_at in local time and vehicle preferred_time_zone=TZ When computing and storing deadlines Then all stored timestamps are UTC with the original TZ captured in a tz field for display And displayed values are rendered in TZ with correct DST handling (no missing or duplicated calendar days) And converting a stored UTC deadline back to TZ preserves the original wall time within 1 second
Timeline UI Component Rendering
Given a vehicle dashboard loads timelines for vehicle V When rendering the Timeline component Then it displays earliest_filing_date, last_allowable_filing_date, documentation_deadline, and status ("Open", "Due Soon", "Expired") And "Due Soon" applies when days_until_last_allowable <= 7 (configurable) and >= 1 And expired timelines are highlighted with an error color and an "Expired" badge And tooltips disclose applied grace days and any business-day/holiday adjustments And the component supports keyboard navigation and meets WCAG 2.1 AA contrast for status indicators
Timeline APIs for Dashboards and Reports
Given the API consumer calls GET /claim-timelines with filters (vehicle_id, oem_id, date_range, status, region), sort, and pagination (limit, cursor) When the request is valid Then the response is 200 and items[] include vehicle_id, event_id, oem_id, earliest_filing_date, last_allowable_filing_date, documentation_deadline, status, version, active, computed_at, tz, business_day_rule, grace_days, and adjustment_notes And by default only active versions are returned; history=true returns all versions And invalid filters return 400 with field-level errors And P95 latency <= 500 ms for up to 1,000 items and supports ETag-based caching
Proactive Alerts & Escalations
"As a service manager, I want proactive alerts before deadlines so that nothing falls through the cracks."
Description

A configurable notification system that triggers nudges when thresholds are met (e.g., 30/14/7 days before deadline or 90% of mileage cap). Supports in-app, email, and SMS channels, digest mode to reduce noise, acknowledgement tracking, snooze, and escalation to supervisors if alerts go unaddressed. Integrates with FleetPulse’s notification center and role-based permissions, and records delivery/read receipts for accountability.

Acceptance Criteria
Time-Based Deadline Alerts (30/14/7 Days)
Given a claim with filing deadline D and fleet timezone TZ When the current time in TZ reaches 09:00 on D-30, D-14, and D-7 Then one alert is generated per subscribed user-channel for that claim and window And Given an alert for a specific window was already sent successfully to a user-channel When the same window recurs Then no duplicate is sent to that user-channel And Then each alert payload includes claimId, vehicleIdentifier (VIN), deadlineDate, daysRemaining, and a call-to-action link to start or continue the claim submission
Mileage Cap Threshold Alerts (≥90% and Exceeded)
Given a claim with mileage cap M and latest odometer O When O >= 0.9 * M and no prior 90% alert exists for the claim Then send a 90% threshold alert with percentOfCap rounded to one decimal place within 10 minutes of ingesting the reading When O > M and the claim is not closed Then send an Exceeded Cap alert marked Urgent within 10 minutes of ingesting the reading Then alerts include claimId, vehicleIdentifier (VIN), capMileage, currentMileage, percentOfCap, and readingTimestamp
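
A sketch of the threshold logic, assuming the caller tracks whether the one-time 90% alert has already fired for the claim:

    def mileage_alerts(cap: float, odometer: float, sent_90: bool, claim_open: bool) -> list[str]:
        alerts = []
        if odometer >= 0.9 * cap and not sent_90:
            pct = round(100.0 * odometer / cap, 1)  # one decimal place per the criterion
            alerts.append(f"90% threshold reached ({pct}% of cap)")
        if odometer > cap and claim_open:
            alerts.append("URGENT: mileage cap exceeded")
        return alerts
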
Multi-Channel Delivery & Preferences with Receipts
Given user U has enabled channels C ⊆ {In-App, Email, SMS} and has RBAC permission to view claim X When an alert for claim X is generated Then notifications are delivered only via channels in C and only to users with permission for claim X Then for each attempted channel, delivery status is recorded as Sent, Delivered (where supported), or Failed with timestamp and provider messageId When user views the in-app notification or clicks the tracked link in email Then read/open is recorded with timestamp; for SMS, Delivered is recorded when supported by the carrier When a channel attempt fails Then the system retries up to 3 times with exponential backoff and logs failure after the final attempt
Digest Mode (Daily/Weekly) with Critical Bypass
Given user U has Digest Mode set to Daily at time T When alerts marked digestable are generated during the window [T, next T) Then individual alerts are suppressed and included in a single digest delivered at time T When an alert is Critical (deadline <= 7 days or mileage cap exceeded) Then it bypasses Digest Mode and is delivered immediately Then the digest groups items by claim and vehicle, includes counts and severities, and provides deep links to take action; the digest generation is idempotent to prevent duplicates
Acknowledgement Tracking & State Synchronization
Given an alert instance related to a specific claim-window is visible to authorized recipients When any authorized recipient acknowledges the alert in any channel or in the Notification Center Then the alert state becomes Acknowledged for that alert across all channels and recipients Then further reminders and any pending escalations for that alert stop within 1 minute Then the audit log records userId, channel, timestamp, and source for the acknowledgement
Snooze per User per Alert
Given user U has an unacknowledged alert A When U snoozes A until a datetime S Then A does not re-notify U via any channel before S When S is reached and A remains unacknowledged Then notifications to U resume per U’s channel preferences Then snooze actions are stored with userId, alertId, startTime, endTime, and optional reason in the audit log Then snoozing by one user does not affect delivery to other users
Escalation Workflow to Supervisors for Unaddressed Alerts
Given an alert has not been acknowledged by any assigned recipient within 48 hours or after 2 reminder cycles (whichever occurs first) When escalation rules are active Then an escalation notification is sent to the designated supervisor role(s) per RBAC mapping via their enabled channels When a supervisor acknowledges the escalated alert Then the base alert is marked Acknowledged and all further reminders and escalations stop Then each escalation event records original recipients, timestamps, supervisor recipients, and delivery/read receipts in the audit log
Evidence Checklist & Intake Validation
"As a technician, I want a clear list of required evidence and easy uploads so that my claim isn’t rejected for missing documents."
Description

A dynamic checklist generator that enumerates required documentation per OEM rule (e.g., repair order, photos, OBD-II DTC snapshot, inspection report) and streamlines intake via upload, mobile capture, and auto-attach from FleetPulse telematics and inspection modules. Validates completeness, timestamps, VIN matching, file types, and clarity thresholds; flags gaps with remediation guidance. Stores artifacts securely with audit trails and links them to the related vehicle and service event.

Acceptance Criteria
Dynamic OEM-Specific Checklist Generation
Given a vehicle VIN that maps to an OEM and warranty/program rule set When the user starts a new claim and opens the evidence checklist for a selected service event Then the system generates the checklist from the active rule set version for that OEM/program and vehicle configuration and displays all Required and Optional items defined by that rule set And the checklist includes item types mandated by rules (e.g., Repair Order, OBD-II DTC snapshot, Inspection Report, Photo set, Odometer reading) And each item shows its Required/Optional flag, acceptance rules summary, and current completion state (Not Started/In Progress/Complete) And the checklist loads within 2 seconds (p95) for checklists of up to 50 items And the rendered checklist item count and identifiers match the rule catalog definition for the selected program (no missing or extra items)
Auto-Attach Telematics and Inspection Artifacts
Given the selected service event has associated FleetPulse inspection records and telematics data within ±72 hours of the event timestamp When the checklist is generated Then the system auto-attaches the latest relevant artifacts (DTC snapshot, odometer reading, inspection report) to their corresponding checklist items And each auto-attached artifact matches the vehicle VIN exactly (17-character exact match) And auto-attach completes within 10 seconds of checklist generation And auto-attached items are marked Complete with source label (FleetPulse Telematics or FleetPulse Inspection) and capture timestamps And if no eligible artifacts are found, items remain incomplete and display actionable guidance
Evidence Intake via Upload and Mobile Capture
Given a user opens a checklist item that requires documents or photos When the user uploads files or captures images via mobile camera Then the system accepts files with MIME types application/pdf, image/jpeg, image/png up to 25 MB per file and up to 20 files per item And uploads show progress and succeed within 3 seconds after server receipt for files ≤ 10 MB on a 5 Mbps connection (simulated) And captured images are auto-rotated, timestamped, and associated with the checklist item And all files pass antivirus scanning; files flagged as malicious are rejected with a clear error message And duplicate files (same SHA-256 checksum) are detected and deduplicated with an option to reuse the existing file
Validation of Completeness, VIN, Timestamps, and File Types
Given the user attempts to submit the evidence package When validation runs Then submission is blocked if any Required checklist item is incomplete And any attached document must be tagged with or contain a VIN that exactly matches the vehicle’s 17-character VIN; mismatches are highlighted with item-level errors And artifact timestamps must fall within the claim’s allowed window per Claim Clock (e.g., repair date within coverage period; evidence captured before filing deadline); violations are flagged with specific reasons And files must have allowed MIME types and be within size limits; invalid files are rejected at upload with reason And a validation summary lists all failures with links that focus the corresponding item for remediation
Image and Document Clarity Thresholds
Given an image or scanned document is uploaded or captured When clarity checks run Then images with resolution below 1024x768 or blur score below threshold (variance of Laplacian < 150) are flagged as failing clarity And OCR is performed on Repair Orders and Inspection Reports; if key fields (VIN, repair date, odometer) have OCR confidence < 0.85, the item is marked Needs Attention with instructions to retake or upload a clearer copy And users may override a clarity warning only after providing a reason; the override is recorded and the item’s state changes to Complete (Overridden) And items failing clarity cannot be counted as Complete unless a clear version is provided or an override is recorded
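
A sketch of the clarity gate using OpenCV's variance-of-Laplacian measure, a standard sharpness heuristic (low variance means few strong edges, i.e. blur); the 150 threshold and 1024x768 floor come from this criterion, while the return shape is illustrative:

    import cv2  # opencv-python

    BLUR_THRESHOLD = 150.0             # variance-of-Laplacian floor
    MIN_WIDTH, MIN_HEIGHT = 1024, 768

    def clarity_check(image_path: str) -> dict:
        image = cv2.imread(image_path)
        if image is None:
            return {"ok": False, "reason": "unreadable file"}
        height, width = image.shape[:2]
        if width < MIN_WIDTH or height < MIN_HEIGHT:
            return {"ok": False, "reason": f"resolution {width}x{height} below minimum"}
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()
        if blur_score < BLUR_THRESHOLD:
            return {"ok": False, "reason": f"blur score {blur_score:.0f} below threshold"}
        return {"ok": True, "blur_score": blur_score}
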
Gap Flagging and Remediation Guidance
Given one or more checklist items are incomplete or invalid When the user views the checklist Then each gap displays a specific remediation action (e.g., “Capture odometer photo”, “Attach OBD-II DTC snapshot”) with concise steps And applicable items provide deep links to initiate capture flows (camera, OBD snapshot request, inspection module) directly And if the filing deadline is within 7 days per Claim Clock, a high-priority nudge appears prompting completion of remaining items now And when all Required items are valid, the checklist header shows 100% complete and the Submit action becomes enabled
Secure Storage, Audit Trail, and Linkage
Given an artifact is attached, updated, or removed When the change is saved Then the artifact is stored encrypted at rest (AES-256 or cloud-provider equivalent) with access restricted to users authorized for the vehicle and claim And an immutable audit record is created capturing actor (user/service), action (create/update/delete/override), UTC timestamp, artifact type, checksum (SHA-256), source (upload/mobile/telematics/inspection), and version linkage And artifacts are linked to the vehicle (VIN), service event (service_event_id), and claim id; retrieval by any of these keys returns within 1 second (p95) for indexed queries And deletion respects retention policy; attempted deletion of required evidence for active claims is blocked and logged with reason
Submission Packet & OEM Portal Links
"As a fleet admin, I want a ready-to-submit claim packet so that I can file quickly with each OEM."
Description

A one-click export that assembles a standardized claim packet with vehicle details, failure description, timestamps, mileage/engine hours, and all required evidence, formatted to OEM-specific templates. Provides deep links to OEM portals and auto-fills known fields where supported, with the option to generate a shareable packet for third-party administrators. Tracks submission status via manual updates or webhooks when integrations exist.

Acceptance Criteria
One-Click OEM Packet Generation
Given a vehicle with an assigned OEM template and complete claim data When the user selects Export Claim Packet Then the system generates the packet in the OEM-specific template within 5 seconds And the packet contains vehicle identifiers (VIN, unit ID), failure description, failure timestamp, odometer and engine hours at failure and at submission, and required evidence references And the packet passes schema validation against the OEM template with 0 errors And the generated file name follows {VIN}_{YYYYMMDD}_OEM_{templateVersion}_v{n} with an extension matching template requirements (PDF or ZIP)
Missing Data Validation Before Export
Given an OEM template that defines required fields and some are missing for the selected vehicle/claim When the user attempts to Export Claim Packet Then export is blocked And the user is shown a checklist of missing fields with inline links to complete them And the Export button becomes enabled only when all required fields are satisfied or the OEM allows placeholders And if placeholders are allowed, the user must confirm, placeholder fields are clearly marked in the packet, and a warning is logged
OEM Portal Deep Link with Autofill
Given an OEM portal that supports deep linking and field autofill and an authenticated FleetPulse session When the user clicks Open OEM Portal Then a new browser tab opens to the OEM portal URL within 2 seconds And the portal displays a pre-populated claim form with VIN, mileage/engine hours, failure date/time, and contact info matching FleetPulse records And the deep link is single-use and expires after 15 minutes And failures (HTTP error or portal unreachable) display a retry guidance message and are logged
Shareable TPA Packet Link
Given a generated claim packet When the user selects Create Share Link for TPA Then the system creates a secure, unguessable URL that expires in 7 days by default And the user can set a custom expiry (1 hour to 30 days) and optional access PIN And each access is logged (timestamp, IP, user-agent) and visible in the packet activity log And the owner can revoke the link; revoked links return HTTP 410
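
A sketch of the unguessable-link generation using Python's secrets module; the host is a placeholder, and persistence, PIN handling, and access logging are omitted:

    import secrets
    from datetime import datetime, timedelta, timezone

    def create_share_link(packet_id: str, ttl: timedelta = timedelta(days=7)) -> dict:
        # token_urlsafe(32) yields a 256-bit, URL-safe random token.
        token = secrets.token_urlsafe(32)
        return {
            "packet_id": packet_id,
            "url": f"https://share.example.invalid/p/{token}",  # placeholder host
            "expires_at": datetime.now(timezone.utc) + ttl,
            "revoked": False,  # revocation flips this; access then returns HTTP 410
        }
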
Evidence and Telemetry Attachment Packaging
Given a claim tied to a failure event with telematics and inspection data available When the packet is generated Then the packet includes: DTC snapshot at failure, telemetry trend (±24 hours), latest inspection report, photos/documents tagged to the event, and maintenance history relevant to the failed component And all timestamps are in ISO 8601 with timezone; odometer and engine hours include units And a manifest.json lists all attachments with SHA-256 checksums and file sizes And total attachment size does not exceed 25 MB; assets are compressed when necessary without loss of required readability
Submission Status Tracking and Audit Trail
Given a claim packet that is Draft or Submitted When a user updates status manually or an OEM webhook callback is received Then the claim status transitions only via allowed states: Draft → Submitted → Acknowledged → Approved/Rejected And each transition records actor/source, timestamp, previous/new status, and optional notes in an immutable audit log And duplicate webhook events are idempotently ignored; retries are handled with exponential backoff And users receive in-app notifications on status changes and the UI refreshes within 5 seconds
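
The allowed transitions form a small state graph; a sketch, with audit-log writing and notifications left to the caller:

    ALLOWED_TRANSITIONS = {
        "Draft": {"Submitted"},
        "Submitted": {"Acknowledged"},
        "Acknowledged": {"Approved", "Rejected"},
        "Approved": set(),   # terminal
        "Rejected": set(),   # terminal
    }

    def transition(current: str, new: str) -> str:
        # Reject any move outside the graph; callers record actor, timestamp,
        # previous/new status, and notes in the immutable audit log on success.
        if new not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"illegal transition {current} -> {new}")
        return new
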
Template Localization and Units Formatting
Given an OEM template with a specified locale and unit system When generating a packet for that OEM Then dates, times, and numbers follow the template’s locale (e.g., MM/DD/YYYY vs DD/MM/YYYY; dot vs comma decimal separators) And distance and temperature units match the OEM requirement (mi/km, °F/°C) with conversion accuracy within 0.5% and appropriate rounding rules And terminology and field labels match the OEM template version configured for the fleet’s market
Mileage/Time Forecasting
"As an owner-operator, I want to know when I’ll hit mileage limits so that I can plan service and claims before losing coverage."
Description

A predictive service that estimates when each vehicle will hit coverage mileage/hour caps using recent telematics usage patterns, duty cycles, and seasonality. Surfaces projected dates in dashboards, adjusts alert lead times based on risk of overage, and supports CSV export and API access for planning. Continuously retrains with new data to improve accuracy.

Acceptance Criteria
Dashboard Projected Cap Date Display
Given a fleet user views Claim Clock > Forecasts for a vehicle with ≥14 of the last 21 days of telematics data, when the page loads, then the row shows: projected_cap_date (YYYY-MM-DD in vehicle timezone), days_to_cap (integer), remaining_to_cap (miles or hours), cap_type (mileage|engine_hours|time), cap_value (numeric), confidence (0–100), and last_updated_at (ISO 8601 with timezone). And sorting by projected_cap_date ascending/descending returns a stable order; ties are secondary-sorted by vehicle_id. And vehicles without sufficient data show status="Insufficient Data", no projected_cap_date, confidence=0, and an info tooltip stating "Need at least 14 of last 21 days of data". And values reflect the latest completed forecast run (see update cadence), never older than 48h unless flagged stale=true visibly in the row.
Nightly Forecast Update and Data Freshness
Rule: A forecast job runs daily and completes by 03:00 in each vehicle’s primary timezone, incorporating data ingested up to 23:59 local time of the previous day. Rule: If no new telematics data is received for a vehicle in the last 48h, mark forecast stale=true and reduce confidence by 20% (floor at 0), while still showing the last computed projection with a "Data gap" banner. Rule: New vehicles produce a low-confidence forecast after ≥7 days of data and a standard-confidence forecast after ≥14 days. Rule: Each forecast record includes forecast_generated_at and data_window_start/end timestamps for auditability.
Risk-Based Alert Lead Time Adjustment
Given the system computes probability_risk that a vehicle will hit its cap within the next 30 days, when probability_risk > 0.60 (High), then the alert lead time is increased to 45 days before projected_cap_date (or before the earliest relevant OEM filing deadline, whichever is sooner) and a "Submit Now" nudge is issued if the deadline is within 10 days and projected_cap_date ≤ deadline - 3 days. And when 0.30 ≤ probability_risk ≤ 0.60 (Medium), the lead time is 30 days. And when probability_risk < 0.30 (Low), the lead time is 15 days. And the system suppresses duplicate alerts for the same vehicle/cap/deadline within a 72-hour window while updating badge counts in the dashboard. And alerts include probability_risk (0–1), projected_cap_date, basis (miles|hours|time), and the deadline used for calculation.
CSV Export of Forecast Snapshots
Given a user exports "Forecasts" with current filters, when the CSV is generated for up to 100 vehicles, then it completes in ≤10 seconds and contains a header and one row per vehicle with columns: fleet_id, vehicle_id, vin, oem_program, cap_type, cap_value, unit, current_value (miles|hours), remaining_to_cap, projected_cap_date (YYYY-MM-DD), days_to_cap, probability_risk_30d, confidence, forecast_generated_at (ISO 8601), model_version, vehicle_timezone. And numeric values use dot decimal and no thousands separators; dates are in vehicle timezone; line endings are LF. And if no vehicles match, an empty file with only the header is returned. And the exported values match the dashboard snapshot at the time of export.
Forecast API for Planning Integrations
Given a caller with valid OAuth2 token requests GET /v1/forecasts/caps?vehicle_id={id}, when the vehicle exists and the caller is authorized, then return 200 with JSON: { vehicle_id, cap_type, cap_value, unit, current_value, remaining_to_cap, projected_cap_date (ISO 8601 date), days_to_cap, probability_risk_30d (0–1), confidence (0–100), forecast_generated_at (ISO 8601), model_version, stale (bool), vehicle_timezone }. And unsupported vehicle_id returns 404; unauthorized returns 401; rate-limited returns 429 with Retry-After. And list endpoint GET /v1/forecasts/caps?fleet_id={id}&limit=50&cursor=... supports pagination; p95 latency ≤800 ms for cached responses; rate limit ≥60 requests/min per tenant; responses include ETag for conditional GETs.
Continuous Retraining with Accuracy Guardrails
Rule: Models retrain weekly (Sunday 02:00 UTC) or when ≥14 new days of data per vehicle cohort are available or drift_score > 0.2. Rule: Backtest on the last 60 days must achieve: for vehicles with ≥30 days of history, median absolute error of projected cap date ≤7 days and median absolute percentage error of remaining_to_cap ≤12%; failing models are automatically rolled back to the last passing version. Rule: Each forecast carries model_version and trained_at; a training audit log with metrics and data ranges is persisted and viewable by admins. Rule: After retraining, confidence scores are recalibrated to maintain monotonic alignment with empirical accuracy (Kendall tau ≥ 0.6 on validation).
Support for Mileage, Engine Hours, and Time Caps
Given a vehicle has an OEM cap defined as mileage, when forecasting, then remaining_to_cap = cap_miles - current_odometer and projected_cap_date is based on mileage/day usage; units displayed in miles with 0 decimal places. Given a vehicle has an OEM cap defined as engine hours, when forecasting, then remaining_to_cap = cap_hours - current_engine_hours and projected_cap_date is based on hours/day usage; units displayed in hours with 1 decimal place. Given a vehicle has a time-based cap (e.g., 24 months from in-service), when forecasting, then projected_cap_date = min(calendar_cap_date, usage-based estimate if applicable) and marked basis=time. And when both mileage and hours caps exist, the earlier projected_cap_date across bases is used for Claim Clock alerts and displays the limiting basis in the UI and API.
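
A sketch of the limiting-basis selection, substituting a naive linear usage projection for the seasonal model described earlier; all shapes and names are illustrative:

    from datetime import date, timedelta
    from typing import Optional

    def projected(remaining: float, per_day: float, today: date) -> Optional[date]:
        # Linear projection; the production model weights duty cycles and seasonality.
        return None if per_day <= 0 else today + timedelta(days=int(remaining / per_day))

    def limiting_projection(today: date,
                            miles_left: Optional[float], miles_per_day: float,
                            hours_left: Optional[float], hours_per_day: float,
                            calendar_cap: Optional[date]) -> tuple[Optional[date], str]:
        candidates = []
        if miles_left is not None and (d := projected(miles_left, miles_per_day, today)):
            candidates.append((d, "mileage"))
        if hours_left is not None and (d := projected(hours_left, hours_per_day, today)):
            candidates.append((d, "engine_hours"))
        if calendar_cap is not None:
            candidates.append((calendar_cap, "time"))
        if not candidates:
            return None, "none"
        # Earliest date wins; the second element names the limiting basis.
        return min(candidates, key=lambda pair: pair[0])
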

Adaptive Idle Bands

Automatically tunes idle thresholds by vehicle class, ambient temperature, AC usage, PTO status, and typical route traffic. Cuts false alerts and targets wasteful idling, building driver trust and delivering real fuel savings without micromanagement.

Requirements

Context Signal Ingestion
"As a fleet manager, I want the system to account for temperature, A/C use, PTO, and traffic so that idle alerts reflect real-world operating conditions and reduce false positives."
Description

Ingest and normalize real-time context signals that influence idling thresholds: ambient temperature (from device sensors or geo-based weather API), A/C compressor engagement (OBD-II/J1939 PIDs), PTO status (digital I/O or J1939), elevation and traffic congestion (third‑party traffic provider keyed by GPS), and vehicle state (engine RPM, speed, gear, parking brake). Provide a unified context payload at 1–5s cadence to the Adaptive Idle Bands engine with consistent timestamps and units. Implement resilient retries, caching for weather/traffic, backfill on connectivity loss, and feature flags to toggle inputs by market. Ensure privacy-safe use of location data and configurable data retention (e.g., 90 days raw, 12 months aggregates).

Acceptance Criteria
Unified Context Payload Cadence and Schema
Given engine ignition is ON and at least one supported signal is available When the ingestion pipeline is running Then a unified context payload is emitted to the Adaptive Idle Bands engine every 1–5 seconds, with p95 inter-emission interval <= 5.0s and p50 >= 1.0s And each payload includes a single ISO 8601 UTC timestamp (ts) with monotonic non-decreasing order and max jitter ±200ms from device clock And all numeric units are normalized: temperature in °C, speed in km/h, elevation in meters, engine speed in RPM; boolean flags for acCompressorOn, ptoEngaged, parkingBrakeApplied And fields absent due to feature flags or missing signals are omitted or null and tagged with metadata quality = "missing" or reason = "disabledByPolicy" And a schemaVersion is included; minor version changes are backward compatible for at least one release cycle
OBD/J1939 Signal Normalization and Mapping
Given vehicles across OBD-II and J1939 protocols with known ground-truth for A/C compressor and PTO states When signals are ingested and normalized Then acCompressorOn and ptoEngaged match ground truth with >= 99.0% accuracy and < 0.5% false positives per 24h validation window And engineRPM, vehicleSpeedKph, gear, and parkingBrakeApplied are populated with correct units and value ranges (rpm 0–8000, speed 0–200 km/h, gear in {-1,0,1..18}) And conflicting readings are resolved by precedence (digital I/O > J1939 > OBD-II); irresolvable conflicts set field = null with metadata quality = "conflict" And make/model-specific scalers/offsets from configuration are applied and verified by automated tests covering top 10 makes with zero unit-conversion defects
Weather and Traffic Providers with Caching and Retries
Given GPS position is available When fetching ambient temperature and traffic congestion from third-party providers Then weather responses are cached per 5 km geohash with TTL = 5 minutes; traffic responses cached per 1 km geohash with TTL = 60 seconds And on provider timeout > 1s or error, retry with exponential backoff (3 attempts, base 500ms); if still failing, set fields = null and metadata quality = "stale" or "unavailable" And external call rates stay <= 80% of provider quota at p99 during a 2x expected peak load test And provider responses are normalized to required units and include providerId and fetchedAt (UTC) in metadata
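
A minimal sketch of the caching-plus-backoff behavior in this criterion, assuming a hypothetical call_provider client and leaving geohash derivation (5 km cells for weather, 1 km for traffic) to the caller:

```python
import time

_cache = {}  # (kind, geohash) -> (expires_at, value)
TTL_S = {"weather": 300.0, "traffic": 60.0}  # 5 min / 60 s, per the criterion above

def fetch_with_cache(kind, geohash, call_provider):
    """Geohash-keyed cache plus 3-attempt exponential backoff (base 500 ms)."""
    key = (kind, geohash)
    hit = _cache.get(key)
    if hit and hit[0] > time.monotonic():
        return hit[1]                               # fresh cached value
    for attempt in range(3):
        try:
            value = call_provider(geohash)          # provider client is assumed
            _cache[key] = (time.monotonic() + TTL_S[kind], value)
            return value
        except TimeoutError:
            time.sleep(0.5 * (2 ** attempt))        # 0.5 s, 1 s, 2 s waits
    return None  # caller nulls the field and tags quality = "stale"/"unavailable"
```
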
Connectivity Loss Backfill and Ordering
Given the device experiences connectivity loss up to 24 hours When connectivity is restored Then buffered context payloads are uploaded in strict chronological order with no duplicates and original timestamps preserved And backfill proceeds at >= 10x real-time until caught up, or completes within 10 minutes for a 1-hour outage (whichever is sooner) And if backlog exceeds 24 hours or buffer capacity, oldest data is dropped and a gap event with start/end timestamps is emitted And 99% of backfilled payloads are acknowledged by the Adaptive Idle Bands engine within 2 seconds of upload
Feature Flags by Market/Tenant
Given a market or tenant has a feature flag disabled for an input signal (e.g., traffic) When the flag is toggled via the configuration service Then the change takes effect within 5 minutes without service restarts And the pipeline ceases outbound calls for the disabled provider and omits or nulls the corresponding fields with reason = "disabledByPolicy" And an audit log entry records actor, timestamp, flag name, old/new values, retained for 12 months And enabling/disabling flags does not degrade payload cadence beyond p95 <= 5s
Privacy-Safe Location and Data Retention
Given location data is collected When storing raw context Then GPS coordinates are encrypted at rest; access is least-privileged via service identity; raw records are purged after 90 days by default, configurable per tenant within 30–180 days And aggregates retain only coarse location (geohash precision 5) with no precise coordinates, purged after 12 months by default, configurable within 6–24 months And a daily purge job removes data older than configured retention with >= 99.9% deletion success and emits an auditable report of records purged And privacy mode "strict" redacts GPS from payloads to downstream consumers other than Adaptive Idle Bands, verified by contract tests
Pipeline Observability and SLOs
Given the ingestion service is operating under typical and peak loads When observing system metrics and traces Then 99.0% of context payloads are delivered to the Adaptive Idle Bands engine within 2 seconds of observation over a 30-day window, and end-to-end data loss < 0.1% And alerts trigger within 5 minutes when missing-signal rate per field > 1% for 15 minutes, provider error rate > 5% for 10 minutes, or queue lag > 30 seconds for 10 minutes And dashboards expose per-signal freshness, retry counts, queue lag, and p50/p95/p99 latencies with data retained for 90 days
Adaptive Idle Band Engine
"As a fleet operator, I want idle thresholds to adapt automatically to each vehicle’s context so that drivers are only flagged for truly wasteful idling."
Description

Compute dynamic idle thresholds per vehicle using a rules-plus-learning approach that adjusts for vehicle class, ambient temperature, A/C/PTO status, elevation, time-of-day, and route traffic. Maintain per-vehicle baselines learned over a rolling window (e.g., 14–28 days) and produce a current idle band (low/high) and classification for each idle event. Enforce guardrails (minimum/maximum bounds, hysteresis, cooldown) to prevent oscillation, and widen bands under extreme heat/cold or active PTO. Deliver decisions within 5 seconds of context updates to support near real-time alerting. Expose outputs via service API and event stream with versioned configs and audit logs for traceability.
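
A simplified sketch of the guardrail logic described above (global bounds, hysteresis, cooldown, and context widening); the parameter names and the 1.5x widening factor are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Band:
    low: float    # seconds of idle considered normal
    high: float   # seconds beyond which an idle event is flagged

def widen_for_context(band, extreme_temp, pto_active, factor=1.5):
    """Widen the band under extreme heat/cold or active PTO."""
    if extreme_temp or pto_active:
        return Band(band.low, band.high * factor)
    return band

def apply_guardrails(proposed, current, last_change_ts, now,
                     min_high, max_high, hysteresis_s, cooldown_s):
    """Clamp to global bounds; suppress changes within hysteresis or cooldown."""
    clamped = Band(max(proposed.low, 0.0),
                   min(max(proposed.high, min_high), max_high))
    if now - last_change_ts < cooldown_s:
        return current                   # cooldown active: keep current band
    if abs(clamped.high - current.high) < hysteresis_s:
        return current                   # delta below hysteresis: no oscillation
    return clamped
```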

Acceptance Criteria
Dynamic Idle Band Computation with Full Context
Given a vehicle with class, ambient temperature, A/C state, PTO state, elevation, time-of-day, and route traffic available and a learned per-vehicle baseline from a configured window W in [14,28] days When the engine computes the current idle band Then it returns numerical low/high thresholds within configured min/max bounds And applies rule-based adjustments for each context factor and learned offsets from the baseline And produces deterministic thresholds for identical inputs (tolerance 0) And stores the decision with a timestamp and correlation id
Latency on Context Updates
Given the engine is running and subscribed to context sources and the service API/event stream are available When any single context attribute changes (e.g., A/C toggles, ambient temperature updates, PTO engages, traffic score updates) Then an updated idle band and (if applicable) reclassification decision are produced within 5 seconds (decision_timestamp - context_update_timestamp <= 5s) And the updated decision is retrievable via API and published to the event stream And the payload includes the latest context version used
Rolling Baseline Learning and Decay
Given historical idle and context data exist for a vehicle across the past 30 days and the baseline window W is configured within [14,28] days When the learning job runs on schedule or on-demand Then the per-vehicle baseline statistics are computed using only data within W (older data excluded or downweighted per config) And the resulting baseline is versioned (baselineVersion incremented) and timestamped And subsequent idle band computations reference the new baselineVersion And a change log entry is written with prior and new summary stats
Guardrails Prevent Oscillation
Given rapidly fluctuating context inputs that would cause frequent band changes When the engine evaluates consecutive decisions Then low/high thresholds never exceed configured global min/max bounds And band changes occur only when the delta exceeds configured hysteresis H And no band update occurs more than once within the configured cooldown period T And the audit log records when guardrails suppressed a potential change
Extreme Temperature and PTO Widening
Given ambient temperature is beyond configured extreme thresholds OR PTO is actively engaged When computing the idle band Then the band width is increased by the configured widening factor for the condition And classification of idle events during these conditions uses the widened band And no alert is produced solely due to expected PTO-driven idling unless duration exceeds the widened high threshold plus any override rules
Per-Idle Event Classification with Reasons
Given an idle event is detected with start/end timestamps and associated context snapshot When the engine evaluates the event Then it assigns a classification label for the event and returns the low/high thresholds used And includes reason codes indicating which rule adjustments and learned baseline contributed to the decision And the classification, thresholds, and reason codes are available via API and emitted on the event stream
API/Event Stream Outputs, Versioning, and Auditability
Given a decision has been made for a vehicle When a client retrieves the decision via API or subscribes to the event stream Then the payload includes: vehicleId, decisionId, decisionTimestamp, idleBandLow, idleBandHigh, classification, contextVersion, configVersion, baselineVersion, and correlationId And an audit log record links decisionId to input context hashes/snapshots and configuration versions And any change to configVersion or baselineVersion is captured with who/when/what metadata and is retrievable by correlationId
Vehicle Class Mapping
"As a fleet admin, I want vehicles correctly classified with sensible defaults so that adaptive idle settings are accurate from day one without manual setup."
Description

Automatically derive vehicle class and fuel system attributes from VIN decode and telematics metadata (GVWR, fuel type, cylinder count, engine displacement) to seed baseline idle bands. Provide an admin UI and API to review and override class assignments, group vehicles (light/medium/heavy duty; gas/diesel/hybrid), and set default baselines by group. Ensure backward compatibility for unknown/partial VINs via heuristics and allow bulk import/export. Persist mapping changes with audit trails and propagate updates to the adaptive engine in near real time.
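
A rough sketch of GVWR-based class derivation with a heuristic fallback; the cut points (10,000 lb and 26,000 lb) are illustrative approximations of common duty-class boundaries, not values specified here:

```python
def classify(gvwr_lb, fuel_type, from_vin):
    """Duty class from GVWR; cut points (10,000 / 26,000 lb) are illustrative."""
    if gvwr_lb <= 10000:
        cls = "Light Duty"
    elif gvwr_lb <= 26000:
        cls = "Medium Duty"
    else:
        cls = "Heavy Duty"
    return {
        "class": cls,
        "fuelSystem": fuel_type,
        "source": "VIN" if from_vin else "Heuristic",
        "confidence": 0.95 if from_vin else 0.7,   # heuristic floor per criteria
    }

print(classify(14500, "Diesel", from_vin=True))   # Medium Duty, source=VIN
print(classify(9300, "Gas", from_vin=False))      # Light Duty, source=Heuristic
```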

Acceptance Criteria
VIN/Telematics Derivation Seeds Baseline Idle Bands
Given a vehicle with a decodable VIN returning GVWR=14500 lb, fuelType=Diesel, cylinders=8, displacement=6.7L And the vehicle is newly added to FleetPulse When the VIN decode job runs or an on-demand decode is triggered Then the system classifies the vehicle as class=Medium Duty and fuelSystem=Diesel And persists engine metadata (cylinders, displacement) with source=VIN And seeds the vehicle’s baseline idle band from the Medium Duty Diesel default And the classification and baseline are available in Admin UI and via API GET /vehicles/{id}/classification within 5 seconds of processing And the operation is logged with jobId and decode source
Admin Override of Vehicle Class and Fuel System
Given an admin opens a vehicle’s Classification panel in the Admin UI And the current classification is Medium Duty Diesel When the admin changes class to Heavy Duty and fuelSystem to Diesel and clicks Save Then the mapping updates and is persisted within 2 seconds And an audit record is created capturing before/after values, userId, timestamp (UTC), reason (optional), and correlationId And if the vehicle has no custom idle baseline, the baseline is reset to the Heavy Duty Diesel group default; otherwise the custom baseline is retained And the API GET /vehicles/{id}/classification reflects the change within 5 seconds And the change is queued for propagation to the adaptive engine
Classification API Update with Validation and Concurrency Control
Given a client with scope=fleet.classification:write and a valid ETag for vehicle V1 When the client PATCHes /vehicles/V1/classification with {class: "Light Duty", fuelSystem: "Gas"} and If-Match: <etag> Then the service validates values against allowed enums and GVWR constraints and returns 200 with the updated resource and a new ETag And the response includes version, updatedAt (UTC), updatedBy, and source="Manual" When the client repeats the same PATCH Then the operation is idempotent and returns 200 with no additional audit duplication When a client sends an outdated ETag Then the service returns 409 Conflict with guidance to refetch When a client lacks the write scope Then the service returns 403 Forbidden
Set Group Default Baselines and Backfill
Given an admin sets the default idle baseline for group=Light Duty Gas to [X parameters] When the admin clicks Save Then the system applies the new default to all Light Duty Gas vehicles that do not have a custom baseline and reports the count updated And vehicles with custom baselines are left unchanged and reported separately And newly added Light Duty Gas vehicles automatically receive this default at classification time And Admin UI and API GET /groups/LightDutyGas/defaults reflect the change within 5 seconds And an audit record is created with before/after default values, userId, and affectedCounts
Heuristic Classification for Unknown or Partial VINs
Given a vehicle with missing or undecodable VIN but telematics reports GVWR=9300 lb and fuelType=Gas When classification runs Then the system assigns class=Light Duty and fuelSystem=Gas using source=Heuristic with confidence>=0.7 And seeds the baseline from Light Duty Gas defaults And flags the record status=Needs Review in Admin UI When a full VIN becomes available later and decodes to Medium Duty Diesel Then the system reclassifies accordingly, updates the baseline per rules for custom vs default, and writes an audit entry linking heuristic->vin change And any downstream consumers receive only the latest classification
Bulk Import/Export of Classifications
Given an admin requests export of classifications When the export is generated Then the system provides CSV and JSON formats containing vehicleId, VIN, class, fuelSystem, source, baselineSource, updatedAt, updatedBy Given an admin uploads an import file with up to 10000 rows When the import runs in async mode Then each row is validated; valid rows update classifications; invalid rows are rejected with row-level errors And the job report includes total, succeeded, failed counts, sample errors, and a downloadable error file And changes from successful rows create per-row audit entries and emit propagation events And a dry-run=true option performs full validation without persisting changes And partial failures do not block successful rows
Near Real-Time Propagation to Adaptive Idle Engine
Given any new or updated vehicle classification or group default affecting a vehicle When the change is persisted Then an event is published to the adaptive engine stream with vehicleId, new values, version, and correlationId And the adaptive engine applies the update within P95<=60s and P99<=120s of commit time And if delivery fails, the system retries with exponential backoff and surfaces an alert in monitoring And propagation latency metrics are recorded and visible in Ops dashboards
Smart Idle Alerts & Suppression
"As a driver, I want alerts only when idling is genuinely excessive so that notifications are relevant and I’m not micromanaged."
Description

Generate driver and manager alerts only when idle duration exceeds the adaptive band plus configurable dwell time. Implement deduplication, time-based backoff, and context-aware suppression (e.g., depot geofences, mandated warm-up windows, active PTO). Provide severity tiers and escalation for chronic exceedances. Surface alerts via mobile push, email, and in-app inbox with concise context (location, temp, A/C/PTO status, estimated fuel burn). Log all alert decisions for review and tuning; support per-group schedules to avoid off-hours noise.
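
A minimal sketch of the alert decision with dwell time, suppression, and doubling backoff; function and parameter names are assumptions:

```python
def should_alert(idle_s, limit_s, dwell_s, alerts_sent, seconds_since_last_alert,
                 base_backoff_min=15.0, max_backoff_min=120.0,
                 suppression_reason=None):
    """Alert once past limit + dwell; follow-ups wait out a doubling backoff."""
    if suppression_reason is not None:      # depot geofence, warm-up window, PTO
        return False
    if idle_s <= limit_s + dwell_s:
        return False                        # adaptive band not yet exceeded
    if alerts_sent == 0:
        return True                         # first alert for this stop
    backoff_min = min(base_backoff_min * (2 ** (alerts_sent - 1)), max_backoff_min)
    return seconds_since_last_alert >= backoff_min * 60

# First breach alerts immediately; the second waits B=15 min, the third 30 min.
print(should_alert(idle_s=900, limit_s=600, dwell_s=120,
                   alerts_sent=1, seconds_since_last_alert=600))  # False (< 15 min)
```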

Acceptance Criteria
Adaptive Threshold Breach Triggers Alert
- Given an adaptive idle limit L derived from vehicle class, ambient temperature, A/C usage, PTO status, and traffic profile, and a configured dwell time T, When engine idle duration exceeds L + T outside suppression contexts, Then exactly one alert is generated within 60 seconds of crossing L + T for the driver and assigned manager. - Given the idle ends before crossing L + T, When evaluation completes, Then no alert is generated. - Given an alert is generated, When the alert record is inspected, Then it contains L, T, and measured idle duration D in seconds.
Per-Stop Deduplication and Time-Based Backoff
- Given an idle event has already triggered an alert for the current stop, When the idle continues beyond L + T, Then no additional alerts are sent for a configurable backoff window B minutes. - Given the idle continues past the backoff window B and still exceeds L + T, When B expires, Then a single follow-up alert is sent and the next backoff doubles up to a configured maximum Bmax. - Given a second idle occurs within 5 minutes and GPS displacement < 150 meters of the prior idle, When determining event identity, Then it is treated as the same stop and dedup/backoff rules apply; otherwise, it starts a new stop and may generate a new alert.
Context-Aware Suppression: Depot, Warm-Up, Active PTO
- Given the vehicle is inside a configured depot geofence, When idle exceeds L + T, Then no alert is sent and suppression_reason = depot_geofence is logged. - Given a cold start with coolant temp below C and ambient temp below A, When within a configured warm-up window W minutes, Then no idle alert is sent and suppression_reason = warmup is logged. - Given PTO is active per OBD parameter, When idle exceeds L + T, Then no alert is sent and suppression_reason = pto_active is logged. - Suppressed events do not contribute to escalation metrics.
Severity Tiers and Escalation for Chronic Exceedances
- Given an idle alert is generated, When D (idle duration beyond L) is computed, Then severity is assigned per configuration: Tier1 if D ∈ [T1_min, T1_max], Tier2 if D ∈ [T2_min, T2_max], Tier3 if D ≥ T3_min. - Given a vehicle accrues N Tier2+ events within a rolling window of W days (excluding suppressed events), When the Nth event occurs, Then an escalation email is sent to the manager group and the alert is tagged escalated = true. - Given an escalation was sent in the last E hours, When additional Tier2+ events occur, Then escalations are rate-limited to at most one per E hours. - Given zero Tier2+ events for R consecutive days, When evaluating escalation state, Then the escalation status resets.
Multi-Channel Delivery with Required Context
- Given an alert is generated, When delivering notifications, Then the driver receives a mobile push, the manager receives an email, and both see the alert in the in-app inbox. - Each delivered alert includes: reverse-geocoded location (nearest street, city, state), ambient temperature, A/C status, PTO status, estimated fuel burned F, and idle duration D. - When measured end-to-end, Then 95th percentile delivery latency per channel is ≤ 60 seconds and 99th percentile is ≤ 120 seconds. - Given a device is offline, When push delivery fails, Then it is retried for up to 24 hours and the in-app inbox records the alert immediately.
Decision Logging and Auditability
- Given idle evaluation runs at the configured interval I seconds, When a decision is made (alert, suppress, backoff, or no-action), Then a log entry is written within 5 seconds containing vehicle_id, UTC timestamp, location, L, T, D, suppression_reason (if any), backoff state, severity, evaluator version, and channel outcomes. - Given an admin queries by vehicle and date range, When exporting audit logs up to 100k records, Then results return within 10 seconds and include all logged fields. - Logs are retained for at least 13 months and are immutable to non-admin users.
Per-Group Quiet Hours and Time Zone Handling
- Given a manager group has quiet hours Q in time zone TZ, When an alert is generated during Q, Then push and email to that group are suppressed, the alert appears in the in-app inbox, and suppression_reason = quiet_hours is recorded for those channels. - Given quiet hours are active, When the next send window begins, Then a single daily digest email of suppressed alerts is sent at the configured digest time. - Given a vehicle belongs to multiple groups with different quiet hours, When resolving sends, Then quiet-hour evaluation applies per recipient’s group and TZ. - Given a daylight saving transition in TZ, When evaluating quiet hours, Then local time rules are honored without duplicate or missed quiet periods.
Policy Controls & Overrides
"As a compliance manager, I want to configure guardrails and exemptions so that adaptive idling aligns with company policy and regional regulations."
Description

Offer fleet-level and group-level policy settings to bound and customize adaptive behavior: minimum/maximum idle thresholds, allowed warm-up/cool-down windows by temperature, PTO-exempt job codes, AC heat-index rules, depot/geofence exemptions, and seasonal schedules. Include role-based access control, change history, preview/simulate mode to see projected alert volume, and one-click rollback. Ensure policies are validated and safely hot-reloaded by the adaptive engine without service interruption.
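
A small sketch of bound validation and fleet-versus-group resolution, mirroring the precedence criteria below; names are illustrative:

```python
def validate_group_threshold(group_minutes, fleet_min, fleet_max):
    """Reject group values outside fleet bounds (surfaced as HTTP 422 + field error)."""
    if not (fleet_min <= group_minutes <= fleet_max):
        raise ValueError(
            f"Idle threshold must be between {fleet_min:g} and {fleet_max:g} minutes")

def effective_threshold(fleet_default, group_minutes=None):
    """Resolve a vehicle's threshold and its source for the Effective Policy view."""
    if group_minutes is not None:
        return group_minutes, "Group"
    return fleet_default, "Fleet"

validate_group_threshold(10, 3, 15)        # passes
print(effective_threshold(8.0, 10.0))      # (10.0, 'Group')
# validate_group_threshold(20, 3, 15)      # raises -> mapped to HTTP 422
```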

Acceptance Criteria
Fleet and Group Policy Bounds & Precedence
- Given a fleet-level policy defines minIdle=3 minutes and maxIdle=15 minutes, When a group attempts to save idleThreshold=20 minutes, Then the save is rejected with HTTP 422 and a field error "Idle threshold must be between 3 and 15 minutes".
- Given fleet bounds are 3–15 minutes and a group sets idleThreshold=10 minutes, When policies are saved, Then vehicles in that group evaluate idle using 10 minutes, vehicles with no group use the fleet default, and no vehicle evaluates outside 3–15 minutes.
- Given an "Effective Policy" view for a chosen vehicle, When the page loads, Then it displays the resolved idle threshold and its source (Fleet or Group) accurately for 100% of tested vehicles.
Temperature-Based Warm-up/Cooldown Windows
- Given an admin defines warm-up/cooldown windows: ≤32°F = 10 minutes warm-up, 33–80°F = 3 minutes warm-up, >80°F = 5 minutes cooldown, When idling occurs at ambient 28°F after engine start, Then no idle alert is generated until 10 minutes have elapsed.
- Given idling at ambient 75°F after engine start, When the engine idles, Then no idle alert is generated until 3 minutes have elapsed.
- Given idling at ambient 95°F after engine stop, When cooldown rules apply, Then no idle alert is generated until 5 minutes have elapsed.
- Given overlapping or gapped temperature ranges are entered, When saving the policy, Then the save is blocked with a clear validation error and HTTP 422.
- Given ambient temperature is unavailable for an event, When evaluating warm-up/cooldown, Then a default 0-minute window is applied and the evaluation is tagged "ambient_unknown".
PTO-Exempt Job Codes Handling
- Given PTO-exempt job codes [A12, B07] are configured, When a vehicle reports PTO=true with jobCode=A12 and is idling beyond the threshold, Then no idle alert is generated and the evaluation is tagged "pto_exempt".
- Given PTO=true with jobCode=C99 (non-exempt) or PTO=false, When idling beyond the threshold, Then idle alerts are generated per policy.
- Given PTO=true with a blank/missing jobCode, When idling beyond the threshold, Then no exemption is applied and normal idle rules evaluate.
AC Heat-Index Idle Allowance
- Given a heat-index rule is configured (if AC=true and heatIndex ≥ 90°F, add +5 minutes to the idle threshold), When a vehicle idles with AC=true and heatIndex=95°F and baseline threshold=10 minutes, Then an idle alert is generated only after 15 minutes.
- Given AC=false or heatIndex=85°F, When evaluating idling, Then no additional allowance is applied.
- Given heat index cannot be computed (missing temp/humidity/lookup), When evaluating idling, Then the allowance is not applied and the evaluation is tagged "heat_index_unknown".
- Given an allowance outside 0–15 minutes is entered, When saving the policy, Then the save is blocked with field-level validation errors.
Depot/Geofence Idle Exemptions
- Given a geofence "Depot A" marked Idle Exempt, When a vehicle is inside the geofence, Then idle alerts are suppressed; upon exit, evaluation reverts to normal and idle timers reset.
- Given a geofence "Yard B" marked Idle Relaxed with multiplier 1.5x, When a vehicle is inside the geofence with a 10-minute baseline threshold, Then the effective threshold is 15 minutes.
- Given a vehicle crosses a geofence boundary, When entry/exit occurs, Then exemption/relaxation state transitions within 5 seconds of receiving a GPS fix.
- Given GPS is unavailable and last fix age > 120 seconds, When evaluating exemptions, Then no geofence-based exemption is applied and the evaluation is tagged "location_unknown".
Seasonal Policy Schedules by Timezone
- Given a seasonal schedule "Winter" with start=Nov 1 and end=Mar 31 in timezone America/Denver, When a vehicle idles on Dec 15 at 10:00 local time, Then the Winter policy is applied.
- Given the same schedule, When a vehicle idles on Apr 1 at 10:00 local time, Then the Winter policy is not applied.
- Given overlapping date ranges are configured within the same scope, When saving the schedule, Then the save is blocked with a validation error and HTTP 422.
- Given a DST transition day in the configured timezone, When evaluating schedule applicability, Then coverage is computed in local time without duplicate or skipped applicability windows.
- Given an "Effective Policy" view for a vehicle and timestamp, When loading the view, Then it displays the active schedule name accurately.
Policy Governance: RBAC, Audit, Preview/Simulate, Hot-Reload & Rollback
- Given roles Admin, Manager, and Viewer, When a Viewer attempts to create, edit, preview, publish, or rollback policies, Then the action is denied with HTTP 403 and no changes are persisted.
- Given an Admin edits fleet-level policies or a Manager edits group-level policies within assigned groups, When saving changes, Then the save succeeds and scope is enforced.
- Given any create/update/publish/rollback, When the action completes, Then an immutable audit record is written with policyId, version, actorId, actorRole, scope, UTC timestamp (ISO-8601), change diff, reason (optional), and source IP; the audit list is filterable and exportable to CSV.
- Given unsaved policy changes, When the user clicks "Preview", Then a 14-day what-if simulation runs and returns projected idle alert counts and percent deltas by fleet, group, and vehicle class within 60 seconds for fleets ≤1,000 vehicles or 10M events; no production alerts or policies are modified.
- Given a policy is published, When the publish completes, Then the adaptive engine hot-reloads the configuration without restart, with propagation time ≤30 seconds, 0 dropped evaluations, and no duplicate/missed alerts.
- Given a prior policy version exists, When an Admin clicks "Rollback" on that version and confirms, Then the system restores that version, writes a rollback audit entry linking from/to versions, and hot-reloads within 30 seconds under the same non-disruption guarantees.
Idle Savings Reporting
"As a fleet owner, I want transparent reports of idle trends and savings so that I can validate ROI and refine policies."
Description

Provide dashboards and exports quantifying idle time, fuel burned vs. avoided, cost savings, and CO2 reduction by vehicle, driver, group, and time period. Compare against historical baselines and simulate impact under alternative policies. Attribute events to context factors (temperature, traffic, PTO) to explain why alerts did or did not trigger, building trust. Include drill-down to raw events, hotspot maps, and scheduled weekly/monthly reports via email/CSV. Support API access for BI tools.
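
A sketch of the core savings arithmetic, assuming a fleet-configured idle burn rate and fuel price; the 2.68 kg CO2 per liter factor is the commonly used diesel value (roughly 2.31 for gasoline):

```python
def idle_savings(idle_hours, baseline_idle_hours,
                 burn_l_per_h=3.0, fuel_cost_per_l=1.20, co2_kg_per_l=2.68):
    """Fuel avoided = (baseline - actual) idle hours x idle burn rate."""
    fuel_burned_l = idle_hours * burn_l_per_h
    fuel_avoided_l = max(baseline_idle_hours - idle_hours, 0.0) * burn_l_per_h
    return {
        "fuel_burned_l": round(fuel_burned_l, 1),
        "fuel_avoided_l": round(fuel_avoided_l, 1),
        "cost_savings": round(fuel_avoided_l * fuel_cost_per_l, 2),
        "co2_reduction_kg": round(fuel_avoided_l * co2_kg_per_l, 1),
    }

# 15 idle hours avoided -> 45.0 L fuel, 54.0 currency units, 120.6 kg CO2.
print(idle_savings(idle_hours=40, baseline_idle_hours=55))
```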

Acceptance Criteria
KPI Dashboard by Entity and Time Range
Given FleetPulse has collected OBD-II and telematics data for vehicles with Adaptive Idle Bands enabled And the user has permission to view the selected vehicles/drivers/groups When the user opens the Idle Savings dashboard and selects a scope (Vehicle/Driver/Group) and a time range (absolute or relative) Then the dashboard displays, for the selected scope and period: Total Idle Time (hh:mm), Fuel Burned During Idle (gal/L), Fuel Avoided (gal/L), Cost Savings (account currency), and CO2 Reduction (kg) And the metrics are available as overall totals, per-entity breakdowns, and daily/weekly time series And units follow account settings (US/Metric) and currency follows the account currency And all metrics reflect data up to the latest data refresh with a maximum freshness lag of 15 minutes And applying or changing filters updates the dashboard with p95 render time <= 3 seconds for scopes up to 100 vehicles over up to 12 months
Historical Baseline and Delta Comparison
Given a default baseline of trailing 90 days prior to the selected period (excluding overlap) And the user can choose an alternate baseline (custom date range or same period last year) When a baseline is selected Then the UI displays deltas and percent changes versus baseline for Idle Time, Fuel Burned, Fuel Avoided, Cost Savings, and CO2 Reduction And the baseline definition and date range are clearly indicated on the dashboard and in exports And exported CSVs include baseline fields and delta/percent-change columns for each metric And calculations match documented formulas within ±0.5% tolerance for test datasets
Idle Policy What‑If Simulation
Given a user with Manage Policies permission configures a What‑If scenario with alternate idle thresholds/bands by vehicle class, ambient temperature, AC usage, PTO status, and traffic And selects a scope (Vehicle/Driver/Group) and analysis period When the user runs the simulation Then the system produces projected Idle Time, Fuel Burned, Cost, and CO2 metrics and their deltas/percent changes versus actuals for the same scope and period And results are available at both aggregate and per-entity levels And the simulation does not modify live policies or alerts and is labeled as Draft/Simulated And the simulation run is assigned a stable ID and can be exported to CSV and retrieved via API And p95 completion time <= 60 seconds for up to 100 vehicles over 30 days
Context Attribution and Alert Explainability
Given idle events are stored with context tags (ambient temperature band, AC/HVAC usage, PTO status, traffic congestion band, vehicle class) When a user inspects an aggregate or a single event Then the UI displays top context factors with contribution percentages (summing to 100% for the attributed portion) explaining idle drivers and savings And for each event, the alert state shows Triggered or Suppressed with a rule reason code (e.g., TEMP_HIGH, PTO_ACTIVE, TRAFFIC_HEAVY) And at least 95% of events include temperature, PTO, and AC attribution when source signals are present; missing data is labeled Unknown And the same attribution fields and reason codes are included in CSV exports and API responses
Drill‑Down to Raw Idle Events
Given a user clicks a metric or entity on the Idle Savings dashboard When navigating to Raw Idle Events Then a table lists events with columns: event_id, start_ts, end_ts, duration_s, vehicle_id, driver_id, lat, lon, geohash, fuel_burned (gal/L), context flags (temp_band, ac_on, pto_on, traffic_band), alert_triggered (bool), reason_code And the table supports filtering by duration threshold, time-of-day, date range, entity, context flags, and map bounding box And the table supports sorting by duration, fuel_burned, and start_ts And selecting an event opens a detail view with a map pin and a 5‑minute OBD‑II trace around the event window And users can export the current filtered set to CSV; generation completes within 60 seconds for up to 100k events
Idle Hotspot Heatmap
Given the user opens Idle Hotspots and selects a scope and time period When the map renders Then a heat layer shows hotspots weighted by total idle minutes or fuel burned with selectable grid sizes (e.g., 250 m, 500 m, 1 km) And clusters are clickable to reveal hotspot stats and a link to the underlying filtered events And filters for day-of-week and hour-of-day refine the heatmap And the map uses grid snapping to avoid exposing precise addresses by default And p95 tile load time after pan/zoom is <= 2 seconds
Scheduled Reports, CSV Exports, and BI API Access
Given a user with Manage Reports permission creates a scheduled Idle Savings report (weekly or monthly) with selected scope, metrics, timezone, and recipients When the schedule executes Then recipients receive an email within the configured delivery window containing a KPI summary, a link to the dashboard, and either an attached CSV (<= 20 MB) or a secure download link that expires in 7 days And delivery failures trigger a notification to the report owner with retry status and error details And ad‑hoc CSV exports from dashboards include the same fields, filters, and baseline context as displayed And the REST API exposes endpoints for aggregates, baselines, simulations, events, and hotspots with OAuth2 client credentials, filtering, pagination, and field names consistent with CSV And API responses include rate limiting headers and return 429 with Retry‑After when limits are exceeded
Driver Idle Feedback Digest
"As a driver, I want simple weekly insights about my idling that consider my working conditions so that I can improve without constant interruptions."
Description

Deliver a weekly driver-friendly summary highlighting top actionable idle reductions without shaming: time-in-idle vs. peers, context-adjusted goals, most common locations/times, and tips tailored to vehicle and climate. Exclude PTO and extreme-weather events automatically. Provide mobile-first visuals, single-tap acknowledgment, and optional coaching nudges. Localize content and schedule delivery by driver’s timezone.
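
A minimal sketch of timezone-aware digest scheduling using Python's zoneinfo, with the Monday 08:00 default from the criteria below and a UTC fallback when no timezone is on file:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_digest_send(now_utc, tz_name, weekday=0, send_hour=8):
    """Next send slot: `weekday` (0=Monday) at send_hour local time in the
    driver's timezone, falling back to UTC when none is on file."""
    tz = ZoneInfo(tz_name) if tz_name else ZoneInfo("UTC")
    local_now = now_utc.astimezone(tz)
    days_ahead = (weekday - local_now.weekday()) % 7
    candidate = (local_now + timedelta(days=days_ahead)).replace(
        hour=send_hour, minute=0, second=0, microsecond=0)
    if candidate <= local_now:           # this week's slot already passed
        candidate += timedelta(days=7)
    return candidate

now = datetime(2025, 6, 6, 16, 0, tzinfo=ZoneInfo("UTC"))   # a Friday
print(next_digest_send(now, "America/Chicago"))             # Monday 08:00 local
```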

Acceptance Criteria
Localized Content and Timezone-Scheduled Delivery
Given a driver profile with timezone Tz, preferred language L, and preferred digest weekday D (default Monday) When the weekly digest is generated and delivered Then the digest is delivered between 08:00 and 10:00 local time in Tz on D And exactly one digest is delivered per driver per week And if Tz is missing, the system uses the fleet timezone; if unavailable, UTC And all strings in the digest are localized to L (supported: en, es, fr, pt; otherwise fall back to en) And numbers, dates, and times are formatted per L and Tz And delivery failures are retried up to 3 times with exponential backoff and logged
Context-Adjusted Idle Metrics and Goals
Given 8 weeks of historical trips for the driver, vehicle class, ambient temperature, AC usage, PTO flags, and typical route traffic patterns When calculating weekly idle metrics and goals for the digest Then the digest shows total idle minutes (excluding PTO and extreme-weather minutes), idle as a percent of engine-on time, and a context-adjusted target range (min–max minutes) And the target is computed using Adaptive Idle Bands by vehicle class and climate decile, adjusted for AC usage when ambient temperature > 27°C (80.6°F) And trips whose idle ratio is more than 3 standard deviations from the driver's baseline mean are excluded as outliers And the digest estimates potential fuel savings if the driver meets the target using the fleet-configured idle fuel burn rate (default 0.8 L/hour or 0.2 gal/hour), shown in the driver’s unit system And all calculations are reproducible given the same inputs
Peer Comparison Without Shaming
Given a peer cohort defined as drivers operating the same vehicle class and similar route type within the fleet over the past 8 weeks When presenting the driver’s idle standing Then the digest displays the driver’s percentile and quartile within the cohort And cohort size must be ≥ 20; otherwise show “Insufficient peer data” and hide percentile visuals And no individual peer names or identifiers are shown (aggregate statistics only) And language is neutral and excludes banned terms: worst, bad, blame, shame, punish And the visual palette uses neutral colors (no red emphasis on the driver) And the comparison is computed using the same context-adjusted idle metric as the driver’s
Exclusion of PTO and Extreme-Weather Idle
Given idle events with timestamp, GPS location, PTO status, and local weather When computing idle minutes for the digest Then minutes with PTO engaged are excluded from idle totals and from goal/peer computations And minutes occurring with heat index ≥ 38°C (100.4°F) or wind chill ≤ −18°C (−0.4°F) are excluded And weather is resolved from a provider with ≤ 15-minute granularity using event location/time And if weather lookup fails, the minutes are included but the digest flags “weather data unavailable” and no weather-related tips are generated for those minutes And exclusions are reflected consistently across all displayed metrics
Most Common Idle Locations and Times
Given GPS breadcrumbs and idle event segments for the past week When identifying patterns Then idle events are clustered using geohash precision 7 (≈150 m) and clusters within 250 m are merged And the top 3 clusters with ≥ 10 total idle minutes in the week are shown And each cluster displays a human-readable name via reverse geocoding, average idle duration per stop, most common hour-of-day, and most common weekday And clusters at the same POI across multiple days are aggregated into one entry And location entries excluded by PTO/extreme-weather rules are not shown
Tailored Tips by Vehicle and Climate
Given the driver’s vehicle class, recent climate profile (last 8 weeks of ambient temperatures), and observed idle patterns When generating tips for the digest Then 1–3 tips are selected from a curated library matching vehicle class and climate conditions And each tip is actionable, estimates potential weekly fuel/time savings based on the driver’s data, and links to a short resource or micro-lesson And tips avoid judgmental language and use neutral phrasing And the same tip is not repeated in more than 3 consecutive digests unless the underlying pattern persists above threshold (e.g., > 30 idle minutes/week at that pattern) And tips are suppressed if total actionable idle is < 10 minutes for the week
Mobile-First UX with Single-Tap Acknowledgment and Coaching Nudges
Given the driver opens the digest on a mobile device with viewport width ≤ 375 px and a 1.5 Mbps connection When rendering the digest and interacting Then Largest Contentful Paint ≤ 2.5 s and total load time ≤ 3 s And primary tap targets are ≥ 44×44 px with visible focus states And a single-tap “Got it” action marks the digest as acknowledged, records driver_id, digest_id, timestamp, and device_id, and returns a success state without full page reload And an optional “Coach me” nudge opens a pre-populated message or scheduling flow and logs an analytics event And all interactions (open, ack, nudge) are captured with event time and device context

SafeStop Nudges

Driver-friendly prompts that appear only when safe (park brake set, transmission in neutral, or at a full stop) and respect Do Not Disturb windows. Clear, plain-language tips encourage engine-off or tire checks without distraction, reducing alert fatigue and boosting follow-through.

Requirements

Safe-State Detection Gate
"As a driver, I want prompts to appear only when I’ve safely stopped so that I’m never distracted while driving."
Description

Implements a deterministic gate that only allows nudges to render when the vehicle is in a verified safe state: park brake engaged, transmission in neutral, or vehicle speed equals zero for a configurable dwell time. Evaluates OBD-II and extended PIDs locally to minimize latency, with debouncing to avoid transient false positives (e.g., momentary zero-speed at traffic lights). Includes heuristics and fallback logic where specific signals are unavailable, treats stale or conflicting telemetry as “unsafe,” and exposes tunable parameters per fleet. Ensures fail-closed behavior so prompts never appear while moving, logs gating decisions for audit, and provides a health check to monitor signal quality and coverage across makes/models.
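
A simplified, fail-closed sketch of the gate: stale, conflicting, or moving readings reset the dwell window and keep it closed. Parameter names mirror the tunables in the criteria below; thresholds are illustrative:

```python
import time

class SafeStateGate:
    """Fail-closed: the gate opens only after a continuous safe dwell; stale,
    conflicting, or moving inputs reset the dwell and keep the gate CLOSED."""

    def __init__(self, dwell_time_s=5.0, stale_timeout_ms=2000.0,
                 moving_speed_threshold_kph=0.5):
        self.dwell_time_s = dwell_time_s
        self.stale_timeout_ms = stale_timeout_ms
        self.moving_speed_threshold_kph = moving_speed_threshold_kph
        self._safe_since = None  # monotonic timestamp when safe state began

    def evaluate(self, speed_kph, park_brake, in_neutral, signal_age_ms, now=None):
        """Return (open, reason_code) for one batch of locally sampled PIDs."""
        now = time.monotonic() if now is None else now
        if signal_age_ms > self.stale_timeout_ms:
            self._safe_since = None
            return False, "stale"
        if speed_kph > self.moving_speed_threshold_kph:
            self._safe_since = None
            return False, "motion"
        if park_brake and speed_kph > 0:          # logically inconsistent inputs
            self._safe_since = None
            return False, "conflict"
        if park_brake or in_neutral or speed_kph == 0:
            if self._safe_since is None:
                self._safe_since = now            # start the dwell window
            if now - self._safe_since >= self.dwell_time_s:
                return True, "open"
            return False, "dwelling"
        self._safe_since = None
        return False, "unsafe"
```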

Acceptance Criteria
Gate opens only after verified safe-state dwell
Given FleetPulse Safe-State Detection Gate is enabled for the vehicle and fleet configuration defines dwell_time_s And local OBD-II/extended PID sampling is active When any safe-state source (park_brake_engaged OR transmission_in_neutral OR vehicle_speed_kph == 0) remains true continuously for at least dwell_time_s measured by a monotonic clock And no conflicting signal is observed during the dwell window Then the gate transitions to OPEN and allows nudge rendering And any interruption of the safe-state before dwell_time_s resets the dwell counter and the gate remains CLOSED
Fail-closed on motion, stale, or conflicting telemetry
Given the gate is evaluating inputs at runtime with configured moving_speed_threshold_kph and stale_timeout_ms When vehicle_speed_kph > moving_speed_threshold_kph OR any required signal is stale beyond stale_timeout_ms OR signals conflict logically Then the gate is CLOSED and no nudge is rendered And a gating decision is recorded with reason_code indicating motion, stale, or conflict
Local evaluation meets latency budget without cloud dependency
Given the device has valid local access to required PIDs and decision_latency_budget_ms is configured When a new batch of PID values is received Then the gate computes a decision locally without any network call And the elapsed time from last PID sample receipt to decision publication is less than or equal to decision_latency_budget_ms at the 99th percentile over a 15-minute workload
Fallback heuristics when specific PIDs are unavailable
Given park_brake_engaged and transmission_in_neutral PIDs are unavailable or unsupported for the vehicle and fallback_enabled is true When evaluating safe-state Then the system considers the state SAFE only if vehicle_speed_kph == 0 continuously for at least dwell_time_s AND engine_rpm <= rpm_idle_threshold AND accelerator_pedal_position <= accel_idle_threshold for the same window And if any fallback signal is unavailable or stale, the state is UNSAFE and the gate remains CLOSED And the decision log includes fallback_used=true and lists missing PIDs
Per-fleet tunables validated, applied, and safe by default
Given a fleet admin updates gate parameters (dwell_time_s, moving_speed_threshold_kph, stale_timeout_ms, rpm_idle_threshold, accel_idle_threshold, decision_latency_budget_ms) When the update is submitted Then values are validated against documented safe ranges and rejected with errors if out-of-range or inconsistent And on successful update, the new configuration version is propagated and used by subsequent gate evaluations without requiring app restart And if configuration is missing or invalid at runtime, the system applies safe defaults that keep the gate CLOSED while moving
Deterministic audit logging of gate evaluations
Given gating decisions occur during operation and audit logging is enabled When the gate evaluates to OPEN or CLOSED or changes state Then a structured log entry is written containing timestamp, vehicle_id, decision_state, reason_code, rule_path, input_signals_with_values_and_ages, and config_version And logs are queryable by vehicle_id and time range via the diagnostics interface And repetitive CLOSED decisions due to continuous motion are rate-limited to at most one audit entry per rate_limit_interval_s while preserving state changes
Health check reports signal quality and coverage
Given the health_check endpoint is requested or a scheduled health report runs When computing health Then the system reports per-vehicle metrics: pid_support_matrix for park_brake, transmission_neutral, speed, rpm, accelerator; signal_update_rate_hz; stale_rate; conflict_rate; last_seen_ts And fleet-level aggregates include the percentage of vehicles with full, partial, or no safe-state coverage and the global stale_rate and conflict_rate And health_status is computed as OK, WARN, or CRIT based on configurable thresholds and included in the response And vehicles with partial or no coverage are automatically marked fail_closed=true
Do Not Disturb Windows
"As a driver, I want the app to respect my Do Not Disturb times so that I’m not interrupted during rest or off-duty."
Description

Adds configurable Do Not Disturb schedules that suppress nudges during off-duty, sleep, or user-defined quiet hours. Supports fleet-wide defaults with per-driver and per-vehicle overrides, recurring patterns (e.g., weekdays, shifts), time zone awareness, and daylight saving adjustments. When a nudge is generated during DND, it is queued and re-evaluated for the next eligible safe window, expiring if conditions are no longer relevant. Provides APIs and UI for managing schedules, validates conflicts, and records suppression reasons to analytics for transparency. No overrides are allowed for SafeStop Nudges to ensure zero intrusion during protected periods.
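
A minimal sketch of DND evaluation with cross-midnight windows and union semantics across fleet/driver/vehicle scopes; zoneinfo handles DST so a local 22:00–06:00 window stays anchored to wall-clock time:

```python
from datetime import datetime, time as dtime
from zoneinfo import ZoneInfo

def in_window(local, start, end):
    """True inside [start, end); start > end means the window crosses midnight."""
    if start <= end:
        return start <= local < end
    return local >= start or local < end      # e.g., 22:00-06:00

def dnd_active(now_utc, windows):
    """windows: (tz_name, start, end) per scope; effective DND is their union."""
    for tz_name, start, end in windows:
        local = now_utc.astimezone(ZoneInfo(tz_name)).time()
        if in_window(local, start, end):
            return True
    return False

# Fleet 21:00-05:00 plus driver 22:00-07:00 behaves as one quiet span 21:00-07:00.
now = datetime(2025, 1, 15, 12, 30, tzinfo=ZoneInfo("UTC"))  # 06:30 in Chicago
print(dnd_active(now, [("America/Chicago", dtime(21, 0), dtime(5, 0)),
                       ("America/Chicago", dtime(22, 0), dtime(7, 0))]))  # True
```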

Acceptance Criteria
DND Suppresses and Queues Nudges
Given a SafeStop nudge is generated while a Do Not Disturb schedule is active for the driver or vehicle And the nudge includes an expiry timestamp When the delivery engine evaluates the nudge Then the nudge is not delivered And the nudge is queued with state "suppressed_dnd" And the suppression reason "dnd_active" is recorded with the applicable schedule identifier(s) And no OS or in-app prompt, sound, or badge is presented
Post-DND Re-evaluation at Next Safe Moment
Given a nudge queued due to DND And the DND window ends at 07:00 in the schedule's time zone And the vehicle enters a SafeStop state after 07:00 and before the nudge expiry When the delivery engine re-evaluates queued nudges Then the nudge is delivered at the first SafeStop moment after 07:00 And the queue state changes to "delivered" with a delivery timestamp And if no SafeStop occurs before the expiry timestamp Then the nudge transitions to "expired" with reason "expired_no_safe_window"
Recurring and Cross‑Midnight Quiet Hours
Given a recurring weekday DND from 22:00 to 06:00 in the schedule time zone When a nudge is generated Tuesday at 23:30 Then the nudge is suppressed and queued And suppression continues seamlessly past midnight until 06:00 Wednesday And if a SafeStop occurs at or after 06:00 Wednesday and before expiry Then the nudge becomes eligible for delivery
Time Zone and DST-Aware Schedules
Given a driver's DND schedule anchored to America/Chicago from 22:00 to 06:00 When daylight saving time starts Then the DND window still ends at 06:00 local time with no early release or extension beyond local 06:00 And when daylight saving time ends Then the DND window still ends at 06:00 local time without duplicate boundary processing And when the vehicle operates in a different time zone Then the DND applies according to the driver's schedule time zone
Effective Schedule Resolution (Fleet, Driver, Vehicle)
Given a fleet default DND 21:00–05:00, a driver override 22:00–07:00, and a vehicle override 00:00–04:00 in the same time zone When computing the effective DND for that driver in that vehicle Then the effective DND is the union of all applicable windows (continuous quiet from 21:00 to 07:00) And overlapping windows are merged without gaps And removing an override recalculates the union immediately And if no schedules exist for any scope Then no DND suppression is applied
API/UI Management and Conflict Validation
Given an admin creates or updates a DND via API or UI with name, scope (fleet/driver/vehicle), time zone, recurrence, and start/end When the payload is valid and non-conflicting within the same scope Then the schedule is saved with a unique identifier and timestamps And requesting a preview returns the next 7 days of quiet intervals in local time And creating overlapping schedules within the same scope returns a 409 Conflict via API and an inline conflict error in UI And deleting a schedule removes it from the effective DND and from subsequent previews immediately
Zero-Override and Analytics Transparency
Given DND is active When any SafeStop nudge is generated or scheduled for delivery Then no delivery occurs and no override control is presented to drivers And any attempt to mark a nudge as urgent does not bypass DND And analytics record an event with suppression reason "dnd_active", scope (fleet/driver/vehicle), schedule id, and timestamp
Contextual Nudge Rules Engine
"As a fleet manager, I want context-aware nudges based on vehicle conditions so that drivers receive relevant, actionable tips."
Description

Introduces a configurable rules engine that evaluates vehicle context and maintenance telemetry to trigger targeted nudges, such as reducing extended idle (engine on while stopped beyond a threshold), checking tires on pressure deviations, verifying battery health on low voltage events, or investigating brake anomalies flagged by diagnostics. Rules support boolean logic, thresholds, and time windows, are evaluated after the safe-state and DND gates, and provide de-duplication, prioritization, and expiration. Message templates are plain-language, localized, and include clear recommended actions. Fleet admins can enable/disable rules and adjust thresholds, with versioning to track changes and rollback if needed.
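
A rough sketch of rule evaluation with priority and cooldown, assumed to run only after the safe-state and DND gates pass; the rule definitions and thresholds are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class NudgeRule:
    rule_id: str
    priority: int                       # higher wins on concurrent triggers
    cooldown_s: float
    condition: Callable[[dict], bool]   # boolean logic over the context payload
    template: str

RULES = [
    NudgeRule("brake_anomaly", 90, 3600,
              lambda c: c.get("brake_dtc", False),
              "Brake issue flagged. Please schedule an inspection."),
    NudgeRule("extended_idle", 50, 7200,
              lambda c: c["rpm"] > 0 and c["speed_kph"] == 0
                        and c["idle_s"] >= c.get("idle_threshold_s", 300),
              "Engine has idled a while. Consider switching off."),
]

def pick_nudge(context, last_fired, now):
    """Highest-priority eligible rule wins; losers would be logged as
    suppressed_by_priority (logging omitted in this sketch)."""
    eligible = [r for r in RULES
                if now - last_fired.get(r.rule_id, float("-inf")) >= r.cooldown_s
                and r.condition(context)]
    return max(eligible, key=lambda r: r.priority) if eligible else None
```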

Acceptance Criteria
Safe-State and DND Gate Enforcement
- Given a rule condition becomes true while the vehicle is moving (speed > 0) and neither parking brake is engaged nor transmission is Neutral, When the rules engine evaluates eligibility, Then no nudge is displayed and the suppression reason is recorded as "unsafe_state". - Given any safe condition is true (parking brake engaged OR transmission Neutral OR vehicle speed = 0 for ≥ 3 seconds) and the driver is outside a Do Not Disturb window, When a rule condition becomes true, Then the nudge becomes eligible for display. - Given a Do Not Disturb window is active for the driver or fleet, When a rule condition becomes true, Then no nudge is displayed, no sound/vibration is emitted, and suppression reason is recorded as "dnd_active". - Given a nudge was suppressed due to DND, When the DND window ends and the triggering condition still holds after the rule’s cooldown, Then the nudge can display on the next evaluation cycle. - Given a nudge is visible, When the vehicle exits safe state (e.g., speed > 0) Then the nudge auto-dismisses within 1 second and is not re-shown until after the rule’s cooldown window elapses.
Extended Idle Nudge Trigger
- Given engine RPM > 0 and vehicle speed = 0 continuously for the configured idle threshold (default 5 minutes), and safe-state is true, When the threshold elapses, Then display the extended idle nudge with plain-language message and a clear recommended action to turn off the engine. - Given an admin changes the idle threshold to T minutes and saves, When the next evaluation occurs, Then the new threshold T is enforced within 60 seconds and the ruleset version is incremented and audit-logged. - Given the idle nudge is displayed, When the driver turns the engine off within 60 seconds, Then record outcome = "acted" with timestamp and suppress further idle nudges for the cooldown period (default 2 hours). - Given the idle nudge message template contains placeholders, When rendered, Then all placeholders resolve (e.g., duration, estimated fuel cost) and the localized message length is ≤ 180 characters.
Tire Pressure Deviation Nudge
- Given one or more tires deviate from baseline by ≥ 10% or ≥ 3 PSI (whichever is greater) for ≥ 30 seconds and safe-state is true, When evaluated, Then display a nudge prompting a tire check with the worst deviation value and tire position if available. - Given tire pressure returns within threshold before safe-state becomes true, When evaluated, Then no nudge is displayed and no queued reminder is created. - Given per-vehicle tire deviation threshold is customized, When evaluation runs, Then the customized threshold is used and recorded in the trigger event metadata. - Given multiple tires exceed threshold, When the message is rendered, Then it includes the count of affected tires and the maximum deviation, localized to the driver’s language or fallback to en-US.
Battery Low-Voltage and Brake Anomaly Nudges
- Given battery voltage < 12.0V for 3 consecutive readings within 30 seconds and safe-state is true, When evaluated, Then display a battery health nudge with a recommended action to schedule a battery test and start a 24-hour cooldown for this rule. - Given a diagnostic trouble code or ABS sensor event flags a brake anomaly and safe-state is true, When evaluated, Then display a brake check nudge with priority = High and a recommended action to schedule inspection. - Given both battery low-voltage and brake anomaly are eligible simultaneously, When prioritized, Then only the higher-priority nudge (brake) is displayed and the lower-priority event is logged as "suppressed_by_priority".
Nudge De-duplication, Cooldown, and Expiration
- Given a nudge for a specific rule has been shown to a driver-vehicle pair, When the same rule condition remains true within the configured cooldown (default 2 hours), Then no additional nudge is shown and the event is logged as "deduplicated". - Given a nudge is displayed, When there is no interaction for 15 minutes or the vehicle exits safe-state, Then the nudge expires, auto-dismisses, and the event is logged as "expired" with cause. - Given the triggering condition persists after expiration and the cooldown has elapsed, When re-evaluated, Then the nudge may be re-shown.
Priority Resolution for Concurrent Triggers
- Given each rule has an integer priority (higher wins) and multiple rules become eligible, When evaluated, Then only one nudge is shown at a time: the highest-priority rule; ties are broken by most recent trigger time. - Given a lower-priority nudge is visible and a higher-priority rule becomes eligible, When re-evaluated, Then the higher-priority nudge replaces the visible one within 2 seconds and the replaced nudge is logged as "preempted". - Given one or more nudges are suppressed by priority, When logging, Then each suppression record includes rule id, priority, trigger time, and winning rule id.
Admin Configuration, Versioning, and Localization
- Given an admin enables/disables a rule or edits thresholds/logic, When saving, Then inputs are validated (numeric ranges, boolean logic syntax), invalid configs are blocked with a clear error, and valid changes take effect within 60 seconds. - Given a configuration change is saved, When persisted, Then a new ruleset version is created with user id, timestamp, diff summary, and reason, and is available in audit logs. - Given a rollback to a prior version is requested, When confirmed, Then the prior version is restored, version incremented, and the restored settings are effective within 60 seconds. - Given localized message templates are defined, When rendering a nudge, Then the system selects the driver’s locale; if unavailable, it falls back to en-US; all placeholders (e.g., {{vehicle_name}}, {{threshold}}) resolve, a recommended action is present, and the final rendered string is ≤ 180 characters without truncation.
Driver-Safe Nudge UI
"As a driver, I want simple, clear prompts with easy actions when safe so that I can quickly follow through without hassle."
Description

Delivers a distraction-minimized, mobile and in-cab UI for nudges with plain-language copy, large touch targets, optional haptics, and accessible color contrast. Presents a single clear action (e.g., “Turn engine off”) with supportive secondary options (Snooze, Dismiss) and a one-tap “Done” to capture follow-through. Supports offline operation with local queuing, renders only when gated safe, and defers rich details behind an optional expand state to reduce cognitive load. Provides localization, right-to-left support, and configurable tone guidelines. Captures client-side telemetry for response latency and UX health.

Acceptance Criteria
Safe Gating: Show Nudges Only When Safe To Engage
Given vehicle speed > 0 AND parking brake is not set AND transmission is not in Park/Neutral, When a nudge becomes eligible, Then the nudge UI shall not render and the eligibility is deferred. Given vehicle speed == 0 for at least 2 seconds OR parking brake is set OR transmission is in Park/Neutral, When a nudge becomes eligible, Then the nudge UI renders within 1 second on supported surfaces (mobile and in-cab) with interactive controls enabled. Given a nudge is visible, When the vehicle leaves the safe state, Then the UI auto-dismisses or disables interaction within 300 ms and logs a "became-unsafe" deferral event.
Do Not Disturb: Honor OS Focus Modes and App Quiet Hours
Given OS Do Not Disturb/Driving Focus is active OR app-defined Quiet Hours are active, When a nudge becomes eligible, Then the nudge is queued locally and not shown and no sound/haptic is emitted. Given DND/Quiet Hours end and safe-gating conditions are met, When the app surface is active, Then any queued nudge displays within 5 seconds and a "deferred-by-dnd" reason is recorded. Given DND remains active beyond the nudge’s validity window, When the window expires, Then the nudge is dropped and a "dropped-by-expiry" event is recorded.
Minimal UI: Single Primary Action, Secondary Options, and Collapsed Details
Given a nudge is displayed, Then exactly one primary CTA is shown using plain-language imperative copy (max 40 characters) and is visually most prominent. Then secondary options “Snooze” and “Dismiss” are available, visually de-emphasized, and accessible via touch/keyboard. Given the nudge is initially collapsed, When the user taps “Expand,” Then rich details (e.g., codes, tips) reveal in an expandable region; When collapsed again, Then only the concise message and actions remain visible. Given the user taps “Done,” Then completion is recorded, the UI dismisses within 300 ms, and the nudge will not reappear until after its configured cooldown and only if conditions are met. Given the user taps “Snooze,” Then the nudge will not reappear before the configured snooze interval elapses and safe-gating is true; a snooze reason is logged.
Accessibility and Haptics: Contrast, Touch Targets, Assistive Tech
Given the nudge renders, Then all body text meets WCAG 2.1 AA contrast (>= 4.5:1) and large text meets >= 3:1; focus indicators are visible with >= 2 px contrast outline. Then all interactive targets (primary CTA, Snooze, Dismiss, Expand) are >= 48x48 dp with >= 8 dp spacing and are reachable via screen reader and keyboard/rotary controls. Given system Dynamic Type / font scaling up to 200%, Then primary CTA and message do not truncate; if overflow occurs, an accessible expand pattern reveals full text without overlapping controls. Given system "Reduce Motion/Haptics" is enabled OR app haptics are disabled, Then no haptic feedback is emitted; otherwise, a short confirmation haptic is emitted on primary CTA/Done. Given RTL locales, Then layout, focus order, and animations are mirrored and readable by assistive tech.
Localization, RTL, and Configurable Tone Guidelines
Given the app locale is set, Then all nudge UI strings are sourced from localization files and rendered in the selected locale without truncation or overlap in the top 5 supported locales. Given an RTL locale (e.g., ar, he), When a nudge renders, Then the UI mirrors horizontally, text direction is RTL, numerals and punctuation follow locale rules, and focus order follows RTL reading order. Given tone guidelines per locale, When copy is built, Then it passes configured tone checks (plain-language, driver-friendly) and avoids idioms that do not localize cleanly. Given units are locale-dependent, Then units in details (e.g., km/mi, °C/°F) render per locale configuration.
Offline Operation: Local Queue and Eventual Sync
Given the device is offline, When a nudge displays and the user interacts (Done/Snooze/Dismiss), Then the action and telemetry are persisted to a durable local queue that survives app restarts/crashes. Given connectivity is restored, Then queued records are transmitted in FIFO order within 60 seconds, with de-duplication by event_id and retries using exponential backoff on transient failures. Given the queue is at capacity, When a new record is added, Then the oldest non-critical telemetry is dropped first and a "queue-trim" health event is recorded; user-facing UI remains responsive. Given offline state, Then nudge rendering and interactions remain fully functional without server round-trips.
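A sketch of the durable local queue these criteria describe: FIFO flush, server-side de-duplication via `eventId`, oldest-non-critical trimming at capacity, and exponential backoff on transient failures. The record shape and the `persist` callback are assumptions for illustration; a real client would back them with platform storage:

```typescript
interface QueuedRecord {
  eventId: string;    // server de-duplicates on this
  critical: boolean;  // non-critical telemetry is trimmed first at capacity
  payload: unknown;
}

class OfflineQueue {
  private items: QueuedRecord[] = [];
  constructor(
    private capacity: number,
    private persist: (items: QueuedRecord[]) => void, // durable write, survives restarts
  ) {}

  enqueue(rec: QueuedRecord): void {
    if (this.items.length >= this.capacity) {
      // Drop the oldest non-critical record first (emit a "queue-trim" health event here).
      const idx = this.items.findIndex(r => !r.critical);
      this.items.splice(idx === -1 ? 0 : idx, 1);
    }
    this.items.push(rec);
    this.persist(this.items);
  }

  // Flush in FIFO order; back off exponentially on transient send failures.
  async flush(send: (rec: QueuedRecord) => Promise<void>): Promise<void> {
    let delayMs = 1_000;
    while (this.items.length > 0) {
      try {
        await send(this.items[0]);
        this.items.shift();
        this.persist(this.items);
        delayMs = 1_000; // reset after a success
      } catch {
        await new Promise(r => setTimeout(r, delayMs));
        delayMs = Math.min(delayMs * 2, 60_000); // cap the backoff; retries indefinitely in this sketch
      }
    }
  }
}
```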
Client-Side Telemetry: Response Latency and UX Health
Given a nudge is scheduled to render, Then render_start_ts and render_latency_ms (from eligibility to visible) are captured. Given a nudge is visible, When the user takes an action, Then action_type (done/snooze/dismiss), action_ts, response_latency_ms (visible->action), and dwell_time_ms are captured. Then each event includes nudge_id, vehicle_id (hashed), driver_id (hashed or session-scoped), locale, surface (mobile/in-cab), safe_state_at_render, and app/device version. Given connectivity, Then telemetry is sent within 60 seconds; otherwise it is queued per Offline Operation criteria. Given a telemetry failure, Then the error is logged with code and retried; PII is not stored in payloads.
Rate Limiting and Alert Fatigue Controls
"As a driver, I want fewer, smarter prompts so that I don’t tune them out."
Description

Implements multi-level throttling to prevent overload: per-driver daily caps, per-nudge-type cooldowns, progressive backoff when prompts are dismissed, and merging of compatible nudges into a single consolidated prompt at the next safe moment. Includes session-level suppression after a hard dismiss and de-duplication across identical root causes within a set horizon. All limits are configurable per fleet, with safeguards to preserve critical safety relevance while minimizing interruptions. Logs suppression and merge events for analytics and tuning.
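The suppression layers described here compose into one ordered decision. A hedged sketch of that pipeline, using the default limits from the acceptance criteria below; all type and field names are hypothetical:

```typescript
interface ThrottleConfig {
  driverDailyCap: number;      // default 3 (non-critical)
  perTypeCooldownMs: number;   // default 4 h
  dedupHorizonMs: number;      // default 24 h
  backoffScheduleMs: number[]; // default [15 min, 1 h, 4 h]
}

interface NudgeContext {
  isCritical: boolean;
  deliveredTodayCount: number;           // non-critical deliveries since day boundary
  lastSameTypeDeliveryMs: number | null;
  lastSameRootCauseMs: number | null;
  softDismissCount: number;              // consecutive soft dismissals this session
  hardDismissedThisSession: boolean;
  lastDismissMs: number | null;
  nowMs: number;
}

type Verdict = { deliver: true } | { deliver: false; reason: string };

function throttle(cfg: ThrottleConfig, ctx: NudgeContext): Verdict {
  if (ctx.isCritical) return { deliver: true }; // criticalOverride path; hourly cap enforced upstream
  if (ctx.lastSameRootCauseMs !== null && ctx.nowMs - ctx.lastSameRootCauseMs < cfg.dedupHorizonMs)
    return { deliver: false, reason: "dedup_horizon" };
  if (ctx.hardDismissedThisSession)
    return { deliver: false, reason: "session_suppression" };
  if (ctx.lastSameTypeDeliveryMs !== null && ctx.nowMs - ctx.lastSameTypeDeliveryMs < cfg.perTypeCooldownMs)
    return { deliver: false, reason: "type_cooldown" };
  if (ctx.softDismissCount > 0 && ctx.lastDismissMs !== null) {
    const step = Math.min(ctx.softDismissCount, cfg.backoffScheduleMs.length) - 1;
    if (ctx.nowMs - ctx.lastDismissMs < cfg.backoffScheduleMs[step])
      return { deliver: false, reason: "progressive_backoff" };
  }
  if (ctx.deliveredTodayCount >= cfg.driverDailyCap)
    return { deliver: false, reason: "daily_cap" };
  return { deliver: true };
}
```

Order matters: the stricter suppressors (dedup, session suppression, cooldown) run before the daily cap so the capped budget is not consumed by prompts that would be suppressed anyway.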

Acceptance Criteria
Per-Driver Daily Cap with Critical Safeguard
Given per-fleet config: driverDailyCap=3 (non-critical), criticalHourlyCap=1, fleetTimeZone=US/Central And the day boundary is 00:00 in fleetTimeZone When a driver has more than 3 eligible non-critical nudges in that day Then only the first 3 are delivered at SafeStop-eligible moments and the rest are suppressed with reason=daily_cap And critical nudges may still be delivered up to 1 per rolling hour with reason=critical_override when over the daily cap And a merged prompt counts as 1 toward driverDailyCap And counters reset at the next day boundary or when driverId changes
Per-Nudge-Type Cooldown Enforcement
Given per-fleet config: perTypeCooldown.EngineIdleTip=4h When an EngineIdleTip nudge is delivered (standalone or within a merged prompt) Then subsequent EngineIdleTip nudges are suppressed for 4h with reason=type_cooldown And the cooldown is scoped per driverId and vehicleId And critical variants flagged override=true are exempt from this cooldown
Progressive Backoff on Consecutive Soft Dismissals
Given per-fleet config: backoffSchedule=[15m, 1h, 4h] for soft dismissals When the driver soft-dismisses the same rootCauseGroup nudge consecutively within a session Then the next eligible delivery is delayed by 15m after the first soft dismiss, 1h after the second, and 4h after the third and subsequent dismissals And the backoff counter resets when the driver completes the suggested action, the root cause clears, or a new session starts And each applied backoff is logged with reason=progressive_backoff and dismissCount
Merge Compatible Nudges into a Single Consolidated Prompt
Given per-fleet config: mergeRules allow merging of non-critical maintenance and efficiency nudges; maxMergedItems=3; mergeWindow=30m And multiple compatible nudges are triggered during the mergeWindow before a SafeStop-eligible moment occurs When the next SafeStop-eligible moment occurs Then a single consolidated prompt is delivered containing up to 3 items And each merged item starts its own per-type cooldown and de-dup horizon, while the consolidated prompt counts as 1 toward driverDailyCap And duplicate items with identical root causes within the merge are de-duplicated And a merge event is logged with childNudgeIds and reason=merge
Session-Level Suppression After Hard Dismiss
Given a "hard dismiss" action is available on prompts And per-fleet config defines a session as an ignition cycle where engine_on to engine_off > 5m When the driver hard-dismisses a prompt (single or consolidated) Then any further prompts from the same rootCauseGroup are suppressed for the remainder of the session with reason=session_suppression And the suppression resets at the start of the next session And critical nudges may bypass session suppression once per session if criticalOverride=true
De-duplication Across Identical Root Causes Within Horizon
Given per-fleet config: dedupHorizon=24h and rootCauseKey=DTC+sensor+vehicleId When identical rootCauseKey events recur within 24h Then only the first eligible prompt is delivered and the rest are suppressed with reason=dedup_horizon And if severity escalates from warning to critical, de-duplication is bypassed for the escalated event And de-duplication scope is per driverId and vehicleId
Fleet-Configurable Limits, Guardrails, and Auditability
Given an admin user with Fleet Admin role When the admin sets driverDailyCap (1-10), perTypeCooldowns (5m-48h), dedupHorizon (1h-7d), backoffSchedule (up to 5 steps, each 5m-48h), maxMergedItems (1-5), mergeWindow (5m-2h), criticalOverride (on/off) Then inputs outside bounds are rejected with validation errors and no changes are applied And valid changes persist, are versioned with audit entries (who, what, when, old->new), and propagate to enforcement within 5 minutes And safe defaults are applied for new fleets: driverDailyCap=3, perTypeCooldown default=4h, dedupHorizon=24h, backoffSchedule=[15m,1h,4h], maxMergedItems=3, mergeWindow=30m, criticalOverride=on
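Those guardrails map directly onto range checks. A minimal sketch of the bounds and safe defaults from the criterion above, with illustrative field names (minutes-based for brevity, not FleetPulse's actual schema):

```typescript
// Bounds and defaults taken from the acceptance criterion above.
const BOUNDS = {
  driverDailyCap:     { min: 1, max: 10 },
  perTypeCooldownMin: { min: 5, max: 48 * 60 },
  dedupHorizonMin:    { min: 60, max: 7 * 24 * 60 },
  maxMergedItems:     { min: 1, max: 5 },
  mergeWindowMin:     { min: 5, max: 120 },
} as const;

const DEFAULTS = {
  driverDailyCap: 3,
  perTypeCooldownMin: 4 * 60,
  dedupHorizonMin: 24 * 60,
  backoffScheduleMin: [15, 60, 240], // up to 5 steps, each 5 min to 48 h
  maxMergedItems: 3,
  mergeWindowMin: 30,
  criticalOverride: true,
};

type FleetLimits = typeof DEFAULTS;

// Returns validation errors; an empty array means the change may be applied.
function validate(limits: FleetLimits): string[] {
  const errors: string[] = [];
  const check = (name: keyof typeof BOUNDS, value: number) => {
    const { min, max } = BOUNDS[name];
    if (value < min || value > max) errors.push(`${name}=${value} outside [${min}, ${max}]`);
  };
  check("driverDailyCap", limits.driverDailyCap);
  check("perTypeCooldownMin", limits.perTypeCooldownMin);
  check("dedupHorizonMin", limits.dedupHorizonMin);
  check("maxMergedItems", limits.maxMergedItems);
  check("mergeWindowMin", limits.mergeWindowMin);
  if (limits.backoffScheduleMin.length > 5) errors.push("backoffSchedule: max 5 steps");
  for (const step of limits.backoffScheduleMin)
    if (step < 5 || step > 48 * 60) errors.push(`backoff step ${step} outside [5 min, 48 h]`);
  return errors; // any error rejects the whole change, per the criterion
}
```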
Follow-through Tracking and Effectiveness Analytics
"As a fleet manager, I want to measure nudge effectiveness so that I can improve policies and reduce idle and repair costs."
Description

Tracks nudge lifecycle events end-to-end, including generated, gated, suppressed, delivered, viewed, actioned (Done), snoozed, and dismissed, with timestamps, driver/vehicle context, and rule identifiers. Correlates follow-through with telematics outcomes such as reduced idle duration, stabilizing battery voltage, or resolved tire pressure alerts. Surfaces dashboards for adoption and impact, cohort comparisons, and rule-level conversion, and provides CSV/BI exports. Enforces data minimization, retention windows, and role-based access to protect driver privacy.

Acceptance Criteria
End-to-End Nudge Event Lifecycle Logging
Given a nudge is generated for a driver–vehicle by rule_id R with correlation_id C When any lifecycle transition occurs (generated, gated, suppressed, delivered, viewed, actioned, snoozed, dismissed) Then an immutable event is persisted with fields {event_type, correlation_id, rule_id, driver_id_pseudo, vehicle_id, occurred_at_utc_ms, source, app_version, reason_code(nullable)} And duplicate submissions with the same {correlation_id, event_type, occurred_at_utc_ms} are idempotently de-duplicated And events are queryable within 5 minutes at p95 from occurrence and durable across service restarts And 99.9% of delivered events have a corresponding generated event sharing the same correlation_id
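One way to read the idempotency requirement: the triple {correlation_id, event_type, occurred_at_utc_ms} is the natural de-duplication key. A small sketch, with an in-memory set standing in for whatever durable store the pipeline actually uses:

```typescript
type LifecycleEventType =
  | "generated" | "gated" | "suppressed" | "delivered"
  | "viewed" | "actioned" | "snoozed" | "dismissed";

// Field list taken from the criterion above; types are assumptions.
interface LifecycleEvent {
  event_type: LifecycleEventType;
  correlation_id: string;   // ties all transitions of one nudge together
  rule_id: string;
  driver_id_pseudo: string; // pseudonymized per the privacy criterion
  vehicle_id: string;
  occurred_at_utc_ms: number;
  source: string;
  app_version: string;
  reason_code: string | null;
}

// Duplicate submissions with the same triple collapse to one persisted event.
function dedupKey(e: LifecycleEvent): string {
  return `${e.correlation_id}:${e.event_type}:${e.occurred_at_utc_ms}`;
}

const seen = new Set<string>(); // durable store in production
function persistOnce(e: LifecycleEvent, write: (e: LifecycleEvent) => void): boolean {
  const key = dedupKey(e);
  if (seen.has(key)) return false; // idempotent drop
  seen.add(key);
  write(e);
  return true;
}
```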
Gating and Suppression Reason Attribution
Given a nudge evaluation results in non-delivery due to safety or policy When the system applies gating or suppression Then a gated or suppressed event is recorded with reason_code in {moving, no_park_brake, not_in_neutral, DND_active, cooldown_active, offline, other_policy} And 100% of gated/suppressed events include a non-null reason_code and correlation_id And dashboard/API aggregates by reason_code equal the raw event counts within ±0.5% for the same filter set
Follow-through (Done) Attribution and Conversion Windows
Given a nudge is delivered (and optionally viewed) under correlation_id C When the driver marks Done in-app or an approved telematics signal auto-resolves within the configured action_window for rule_id R Then an actioned event is recorded with {action_source in [driver, auto_telematics], action_window_seconds} And conversion for R is computed as actioned/delivered within the action_window and exposed via API and dashboard And time_to_action p50/p90 are computed per rule and cohort and exclude actions outside the window
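The conversion and latency percentiles above can be computed per rule from delivered/actioned pairs. A sketch assuming a simplified `Delivery` record (hypothetical shape); actions outside the window are excluded from both conversion and time-to-action, as the criterion requires:

```typescript
interface Delivery {
  correlationId: string;
  deliveredAtMs: number;
  actionedAtMs?: number; // absent if never actioned
}

function ruleConversion(deliveries: Delivery[], actionWindowSec: number) {
  const windowMs = actionWindowSec * 1_000;
  const inWindow = deliveries.filter(
    d => d.actionedAtMs !== undefined && d.actionedAtMs - d.deliveredAtMs <= windowMs,
  );
  const times = inWindow
    .map(d => d.actionedAtMs! - d.deliveredAtMs)
    .sort((a, b) => a - b);
  // Nearest-rank percentile over the sorted time-to-action samples.
  const pct = (p: number) =>
    times.length ? times[Math.min(times.length - 1, Math.floor((p / 100) * times.length))] : null;
  return {
    conversion: deliveries.length ? inWindow.length / deliveries.length : 0,
    timeToActionP50Ms: pct(50),
    timeToActionP90Ms: pct(90),
  };
}
```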
Outcome Correlation to Telematics Improvements
Given a rule_id with a defined outcome metric and measurement windows When analytics runs the daily job Then the system computes for each rule: pre_metric, post_metric, absolute_delta, relative_change, sample_sizes, and 95% CI when n >= minimum_n, else flags insufficient_data And outcome metrics are defined as: idle_reduction = idle_minutes_per_engine_hour; battery_health = days_with_low_voltage_events_rate; tire_pressure = low_PSI_alerts_per_100_engine_hours And results are available by rule, vehicle, driver cohort, and fleet, and refresh successfully within 24 hours of data availability
Adoption and Impact Dashboard with Cohorts and Rule-level Conversion
Given a user opens Analytics When the default view loads for the last 30 days Then KPIs are shown: nudges_generated, delivered, viewed_rate, action_rate, snooze_rate, dismiss_rate, avg_time_to_action, rule-level conversion And filters are available and functional: date_range (up to 365 days), fleet, depot, vehicle_class, driver_group, rule_id, device_os And cohort comparison renders side-by-side metrics and deltas for selected cohorts And p90 response time for aggregated views <= 2 seconds for up to 1,000,000 lifecycle events
CSV and BI Export Parity and SLAs
Given an authorized user requests an export via UI or API When the export is generated Then the CSV schema includes columns: correlation_id, rule_id, event_type, occurred_at_utc_ms, driver_id_pseudo, vehicle_id, reason_code, source, app_version, action_source, action_window_seconds And row counts match the equivalent dashboard/API query within ±0.5% And exports up to 10,000,000 events complete within 10 minutes and are delivered via time-limited signed URL (expires <= 24h) And exports support incremental parameters since and until (UTC) and preserve ordering by occurred_at_utc_ms, correlation_id And BI connectors deliver identical columns and data types as CSV
Privacy: Data Minimization, Retention, and Role-Based Access
Given privacy requirements are enforced When lifecycle events are stored Then only necessary fields are persisted, driver identities are pseudonymized (no name/email/phone), and no precise GPS/location is stored in event logs And retention windows are configurable per tenant with defaults: raw_events_TTL = 180 days, aggregates_TTL = 730 days; data past TTL is hard-deleted and no longer retrievable via UI, API, or export And RBAC is enforced: Driver role can view only own events and personal conversion; Fleet Manager/Owner can view fleet-wide and export; Dispatcher can view fleet but cannot export; unauthorized access attempts return 403 and are audited And privacy configurations and access audits are exportable to support compliance reviews
Admin Configuration and Policy Controls
"As a fleet manager, I want to tailor nudge rules and schedules to my fleet so that they fit our operations and compliance needs."
Description

Provides a management console and APIs for fleet admins to tailor SafeStop Nudges: enable/disable specific nudge types, set thresholds and cooldowns, define DND schedules, and adjust safe-state parameters within allowed bounds. Includes role-based permissions, audit trails for policy changes, draft-and-publish workflows with effective dates, and validation to prevent unsafe configurations. Changes propagate to clients with versioned configurations and rollback support. Includes seed presets for common fleet profiles to accelerate onboarding.

Acceptance Criteria
Role-Based Permissions for Policy Management
Given a user with policy:admin permission, when they access the SafeStop policy console or API, then they can create, edit, publish, schedule, rollback, and delete policy drafts for fleets within their scope. Given a user with policy:edit but not policy:publish, when they attempt to publish a draft via UI or API, then the action is blocked with 403 Forbidden, the Publish controls are disabled in UI, and no changes are persisted. Given a user with policy:read only, when they attempt any write operation (create/edit/publish/rollback/delete), then the system returns 403 Forbidden and logs an audit entry with outcome=denied and reason=insufficient_permissions. Given an API request missing the policy:write scope, when it calls POST/PUT/PATCH/DELETE endpoints, then the response is 403 Forbidden and the configuration remains unchanged.
Do Not Disturb (DND) Schedules Enforcement
Given an admin defines one or more DND windows with an associated IANA timezone and publishes, when the effective period starts, then no SafeStop nudges are delivered to drivers within those windows for that timezone. Given overlapping or adjacent DND windows, when the policy is saved, then the system normalizes them into a single continuous window without gaps. Given a DND window where end time is earlier than start time, when saved, then it is treated as an overnight window that ends on the next calendar day in the specified timezone. Given a daylight saving time transition in the specified timezone, when DND windows are in effect, then nudges are suppressed according to the defined local wall-clock times across the DST change without delivering during the intended quiet hours. Given DND is active, when a nudge would otherwise be triggered, then it is suppressed and recorded in telemetry as suppressed_reason=DND without showing any in-cab prompt.
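Overnight windows and DST are the tricky parts of this criterion. A sketch of a wall-clock membership test where an end time earlier than the start denotes a wrap past midnight; `Intl.DateTimeFormat` carries the DST handling. The window shape and function names are illustrative:

```typescript
interface DndWindow { startMin: number; endMin: number } // minutes since local midnight

// True when the local wall-clock minute falls inside the window;
// end < start means an overnight window that wraps past midnight.
function inDndWindow(localMin: number, w: DndWindow): boolean {
  return w.startMin <= w.endMin
    ? localMin >= w.startMin && localMin < w.endMin  // same-day window
    : localMin >= w.startMin || localMin < w.endMin; // overnight wrap
}

// Convert a UTC instant to local minutes-of-day in the policy's IANA timezone;
// evaluating in wall-clock terms keeps suppression correct across DST changes.
function localMinutes(utc: Date, timeZone: string): number {
  const parts = new Intl.DateTimeFormat("en-US", {
    timeZone, hour: "2-digit", minute: "2-digit", hour12: false,
  }).formatToParts(utc);
  const get = (t: string) => Number(parts.find(p => p.type === t)?.value ?? 0);
  return (get("hour") % 24) * 60 + get("minute");
}

// Example: a 22:00-06:00 quiet window in US/Central.
const quiet: DndWindow = { startMin: 22 * 60, endMin: 6 * 60 };
console.log(inDndWindow(localMinutes(new Date(), "US/Central"), quiet));
```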
Parameter Thresholds, Cooldowns, and Safe-State Bounds Validation
Given a draft policy with parameter values (thresholds, cooldowns, and safe-state parameters), when a value is outside its schema-defined min/max bounds, then saving the draft fails with 422 Unprocessable Entity including parameter path, code=OUT_OF_RANGE, and the allowed [min,max]. Given a draft policy with mutually unsafe settings, when validation runs, then it fails with 422 code=SAFETY_RULE_VIOLATION and a message indicating the violated rule, and the draft is not saved. Example rule: at least one safe-state gate must remain enabled (park_brake_required OR in_gear_neutral_required OR speed_threshold_kph==0). Given all values are within bounds and pass safety rules, when the draft is saved or published, then the operation succeeds and the values are persisted and distributed per the publish workflow. Given a cooldown value is reduced below the current live value, when published, then clients apply the new cooldown on next config fetch without requiring a device reboot.
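The example safety rule quoted above encodes naturally as a validator. A sketch returning 422-style errors; the parameter names come from the criterion, while the surrounding types are assumptions:

```typescript
interface SafeStateParams {
  park_brake_required: boolean;
  in_gear_neutral_required: boolean;
  speed_threshold_kph: number;
}

interface ValidationError { code: string; path: string; message: string }

// The spec's example rule: a draft may not disable every safe-state gate.
function checkSafetyRules(p: SafeStateParams): ValidationError[] {
  const anyGateEnabled =
    p.park_brake_required ||
    p.in_gear_neutral_required ||
    p.speed_threshold_kph === 0; // the zero-speed gate counts as enabled
  return anyGateEnabled ? [] : [{
    code: "SAFETY_RULE_VIOLATION",
    path: "/safe_state",
    message: "At least one safe-state gate must remain enabled",
  }];
}
// A non-empty result maps to a 422 response and the draft is not saved.
```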
Draft, Review, and Publish with Effective Dates
Given a saved draft, when not published, then no clients receive its settings and the current production version remains in effect. Given a valid draft, when Publish Now is invoked, then a new configuration version is created with effective_at ≤ 30 seconds from action and clients adopt it within 5 minutes of their next sync. Given a valid draft, when Publish Later is scheduled with a future effective_at timestamp, then no clients adopt the changes before that timestamp, and clients adopt the new version within 5 minutes after effective_at. Given a scheduled publish in the future, when an authorized user cancels it before effective_at, then the schedule is removed and no new version becomes active. Given an invalid draft (validation errors), when publish is attempted, then the operation is blocked with 422 and a list of errors; no partial publish occurs.
Versioned Configuration Propagation and Rollback
Given a publish event, when it completes, then the system assigns a monotonically increasing version identifier and returns it in the API/UI, and clients include that version in subsequent heartbeat headers (e.g., X-FleetPulse-Policy-Version). Given clients are online, when a new version becomes effective, then 95% of clients report the new version within 5 minutes and 100% within 30 minutes assuming network availability. Given a rollback to a prior version is initiated and confirmed by an authorized user, when the rollback is executed, then that prior version becomes the latest effective version, a rollback event is recorded, and clients revert to it within 5 minutes of their next sync. Given an offline client reconnects after one or more publishes or rollbacks, when it next syncs, then it downloads and applies only the latest effective version, skipping superseded versions.
Audit Trail for Policy Changes and Access
Given any create, update, publish, schedule, cancel, or rollback action, when it occurs, then an immutable audit record is stored with fields: action, actor_id, actor_scopes, timestamp_utc, source_ip, target_entity, version_before, version_after, diff, result, and optional comment. Given a request to modify or delete an audit record, when attempted, then the operation is rejected with 405 Method Not Allowed or 403 Forbidden and no changes are made. Given a user with policy:audit permission, when they query the audit API with filters (date range, actor, action, version), then the system returns matching entries with pagination and allows CSV and JSON export. Given a user without policy:audit permission, when they access audit endpoints or UI, then access is denied with 403 and the attempt is itself audited.
Seed Presets Application and Override
Given the presets listing endpoint is called, when the response is returned, then it includes for each preset: preset_id, name, version, and description for the fleet profile. Given an admin selects a preset and creates a draft from it, when the draft is created, then preset-defined fields are populated and labeled with preset_id and preset_version as provenance. Given a draft created from a preset, when the admin overrides one or more fields, then the overrides are persisted while preset provenance remains recorded; publishing still requires all validations to pass. Given the vendor publishes a new preset version, when an existing policy based on an older preset is in use, then it remains unchanged until an admin explicitly applies or upgrades to the newer preset version. Given a preset is applied or upgraded, when the action completes, then an audit entry records preset_id, preset_version, and the number of overridden fields.

PTO Guard

Identifies PTO/refrigeration or mandated-idle scenarios and auto-tags them as exempt. Keeps coaching fair, prevents unnecessary nagging, and ensures managers see accurate idle metrics that stand up to audits and driver feedback.

Requirements

Signal-Fusion PTO/Idle Detection Engine
"As a fleet manager, I want the system to automatically detect PTO, reefer, and mandated idle events from multiple signals so that idle metrics reflect operational reality without manual review."
Description

Implements a real-time detection engine that fuses OBD-II/J1939 signals (e.g., PTO status, engine load, fuel rate, RPM), GPS speed, parking brake, accelerometer, auxiliary inputs, and optional ELD/reefer telemetry to identify PTO engagement, refrigeration operation, and mandated-idle scenarios (e.g., pre/post-trip inspections, DPF regens, safety checks). Provides configurable rules and thresholds per vehicle profile and job type, geofence-aware logic for known loading docks/cold storage sites, and fallbacks when certain sensors are unavailable. Runs streaming classification with sub-minute latency, buffers offline, and reconciles on reconnect. Emits structured events with reason codes and confidence scores, enabling downstream tagging, reporting, and coaching suppression. Integrates with FleetPulse’s telematics pipeline and event store, ensuring data lineage and scalability across 3–100-vehicle fleets.
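As one concrete instance of the rules this engine evaluates, here is a hedged sketch of the PTO-engagement check from the first acceptance criterion below (PTO status on, near-zero speed, held for a 30-second dwell), written as a streaming classifier. The frame and event shapes are illustrative, not the actual pipeline schema, and the 10-second disengagement debounce is omitted for brevity:

```typescript
interface SignalFrame {
  ts: number;           // epoch ms
  ptoStatus?: boolean;  // OBD-II/J1939 PTO state; absent when the signal is missing
  gpsSpeedMph?: number;
}

interface DetectionEvent {
  type: "PTO_ENGAGED";
  startTs: number;
  reasonCodes: string[];
  confidence: number;
}

// Emits PTO_ENGAGED once the fused condition has held for the dwell period,
// then stays quiet until the engagement ends.
class PtoDetector {
  private qualifiedSince: number | null = null;
  private open = false; // true while an engagement event is open

  onFrame(f: SignalFrame, dwellMs = 30_000): DetectionEvent | null {
    const qualifies = f.ptoStatus === true && (f.gpsSpeedMph ?? Infinity) <= 1;
    if (!qualifies) { this.qualifiedSince = null; this.open = false; return null; }
    if (this.open) return null; // already emitted for this engagement
    this.qualifiedSince ??= f.ts;
    if (f.ts - this.qualifiedSince >= dwellMs) {
      this.open = true;
      return {
        type: "PTO_ENGAGED",
        startTs: this.qualifiedSince,
        reasonCodes: ["PTO_STATUS_ON", "ZERO_SPEED"],
        confidence: 0.8,
      };
    }
    return null;
  }
}
```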

Acceptance Criteria
Real-Time PTO Engagement Detection and Event Emission
Given a vehicle with validated VIN in FleetPulse and streaming OBD-II/J1939 data available And PTO status = On from OBD/J1939 or auxiliary input mapped to PTO And GPS speed <= 1 mph When these conditions persist for >= 30 seconds Then emit a PTO_ENGAGED event within 60 seconds of first qualification And include fields: vehicle_id, event_id (idempotent), start_time, location, reason_codes ["PTO_STATUS_ON","ZERO_SPEED"], confidence >= 0.80, lineage pointers to raw signals And write the event to the FleetPulse event store via the telematics pipeline successfully Given a prior PTO_ENGAGED event is open When PTO status = Off for >= 10 seconds Then emit PTO_DISENGAGED and close the prior event with end_time, duration, and fuel_burn_estimate And do not emit duplicate engagement/disengagement events while state remains unchanged for > 5 minutes Given a fleet of 100 vehicles concurrently streaming When PTO engagement occurs across vehicles Then average detection-to-event latency < 60 seconds and 99th percentile < 120 seconds
Refrigeration Operation (Reefer) Identification and Exempt Idle Tagging
Given a vehicle with auxiliary input mapped to "reefer_on" or optional reefer telemetry reporting compressor run And engine is On and GPS speed <= 1 mph When "reefer_on" persists for >= 60 seconds Then emit an EXEMPT_IDLE event within 60 seconds with reason_codes ["REEFER_OPERATION","ZERO_SPEED"], confidence >= 0.80 And tag the interval as exempt idle in downstream tagging metadata Given reefer telemetry is unavailable When auxiliary input indicates "reefer_on" and fuel_rate > configured idle baseline for >= 60 seconds Then emit EXEMPT_IDLE with confidence >= 0.70 and include reason_codes ["AUX_INPUT_REEFER","FUEL_RATE_ABOVE_BASELINE"]
Mandated Idle Detection: Inspections, DPF Regens, Safety Checks
Given parking brake = Applied and GPS speed = 0 and driver has started a pre/post-trip inspection (from ELD/DVIR or in-app checklist) When these conditions persist for >= 120 seconds Then emit EXEMPT_IDLE with reason_codes ["INSPECTION","PARK_BRAKE","ZERO_SPEED"], confidence >= 0.85 within 60 seconds Given OBD/J1939 indicates DPF regeneration active (e.g., aftertreatment status) and GPS speed <= 1 mph When regen state persists for >= 60 seconds Then emit EXEMPT_IDLE with reason_codes ["DPF_REGEN"], confidence >= 0.90 within 60 seconds Given a safety check mode is enabled from the app or input and engine is On, speed = 0 When mode persists for >= 60 seconds Then emit EXEMPT_IDLE with reason_codes ["SAFETY_CHECK"], confidence >= 0.80
Geofence-Aware Exempt Idle at Loading Docks and Cold Storage
Given the vehicle enters a configured geofence labeled as Loading Dock or Cold Storage site And GPS accuracy <= 25 meters and engine is On and GPS speed <= 1 mph When the vehicle remains within the geofence for >= 90 seconds Then emit EXEMPT_IDLE with reason_codes ["GEOFENCE_SITE","ZERO_SPEED"], confidence >= 0.80 within 60 seconds Given geofence configuration is updated in FleetPulse When a new geofence is created or an existing one is edited Then the detection engine applies the updated geofences within 10 minutes to active vehicles Given the vehicle is outside any configured geofence When idling occurs Then no geofence-based exempt rule is applied
Per-Vehicle and Job-Type Configurable Rules and Thresholds
Given a vehicle profile and job type are assigned (e.g., "Bucket Truck" + "Utility Service") And configuration defines thresholds (e.g., engine_load >= 20%, fuel_rate >= 0.6 gph, rpm >= 900) and minimum durations per rule When the vehicle meets the configured thresholds Then the detection decision uses the profile/job-type thresholds and emits events accordingly Given a configuration change is published (new thresholds or rule enable/disable) When the change is saved Then running classifiers pick up the change within 5 minutes and apply it to new detections (not retroactively) Given conflicting thresholds between default fleet profile and vehicle-specific profile When classification runs Then vehicle-specific profile takes precedence, then job-type override, then fleet default
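The precedence rule in this criterion (vehicle-specific profile, then job-type override, then fleet default) is a straightforward ordered merge. A sketch with hypothetical threshold fields:

```typescript
interface Thresholds {
  engineLoadPct?: number;
  fuelRateGph?: number;
  rpm?: number;
}

// Later spreads win: fleet default < job-type override < vehicle-specific.
function resolveThresholds(
  fleetDefault: Required<Thresholds>,
  jobType?: Thresholds,
  vehicle?: Thresholds,
): Required<Thresholds> {
  return { ...fleetDefault, ...jobType, ...vehicle };
}

// Example: "Bucket Truck" + "Utility Service" (values illustrative)
const resolved = resolveThresholds(
  { engineLoadPct: 15, fuelRateGph: 0.5, rpm: 800 }, // fleet default
  { engineLoadPct: 20, fuelRateGph: 0.6 },           // job-type override
  { rpm: 900 },                                      // vehicle-specific
);
// => { engineLoadPct: 20, fuelRateGph: 0.6, rpm: 900 }
```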
Resilience: Sensor Fallbacks and Offline Buffering/Reconciliation
Given PTO status signal is unavailable for a vehicle When GPS speed <= 1 mph and engine_load >= configured threshold and auxiliary input "boom_active" = On for >= 60 seconds Then emit EXEMPT_IDLE with reason_codes ["PTO_SIGNAL_MISSING","AUX_INPUT_BOOM","ENGINE_LOAD_HIGH"], confidence >= 0.60 within 60 seconds Given GPS is unavailable but accelerometer shows stationary and parking brake = Applied When idling persists for >= 90 seconds Then apply relevant exempt rules using non-GPS signals and include reason_codes ["GPS_MISSING"], and confidence reflects degraded sensing (>= 0.60) Given the device loses connectivity for 30 minutes while exempt/engaged events occur When connectivity is restored Then all buffered events are delivered in order with original timestamps within 2 minutes, without duplication, and with lineage preserved And the buffer retains at least 24 hours of events without loss
Downstream Tagging, Coaching Suppression, and Idle Metrics Accuracy
Given EXEMPT_IDLE or PTO_ENGAGED events are active for a vehicle during a trip When idle metrics are computed for the same period Then exempt intervals are excluded from coachable idle time and from non-exempt idle KPIs And daily idle report reflects total_idle = raw_idle - exempt_idle within ±1 second per event Given an EXEMPT_IDLE event is active When real-time coaching would otherwise trigger an idle alert Then no idle alert is sent to the driver or manager, and an audit log records the suppression with the event_id Given a manager reviews an audit When selecting an exempt interval Then the UI displays event reason_codes, confidence, and lineage references to support driver feedback and audits
Auto-Tagging and Classification Rules
"As a compliance lead, I want idle exemptions to be auto-tagged with clear reason codes so that reports and audits have defensible, consistent classifications."
Description

Automatically classifies detected events into standardized exemption categories (e.g., PTO, Refrigeration, Mandated Idle: Inspection, Mandated Idle: DPF Regen) and tags affected idle intervals as exempt. Applies tags in real time and supports retroactive backfill when late data arrives. Stores evidence (signal snapshots, geofence match, rules fired, confidence) and immutable reason codes to support audits and driver feedback. Tags propagate to analytics, scorecards, alerts, and APIs, ensuring consistent treatment across FleetPulse modules. Includes a policy layer to define fleet-wide defaults and per-vehicle overrides with versioning and change history.

Acceptance Criteria
Real-Time Auxiliary Idle Classification (PTO/Refrigeration)
Given PTO Guard policy is enabled for the vehicle and min_idle_duration is configured And engine_on = true and speed = 0 And (PTO_status = ON or Reefer_status = ON) When the idle interval reaches min_idle_duration Then an exempt idle tag is created within 5 seconds with category "PTO" if PTO_status = ON else "Refrigeration" And tag.start_at = first timestamp engine_on = true and speed = 0 and condition true And tag.end_at = timestamp when condition becomes false or engine_on = false or speed > 0 And tag includes confidence >= 0.90 and a non-null reason_code And no more than one exempt category is applied to the interval
Mandated Idle: Inspection Geofence Auto-Tagging
Given the vehicle location is within an active Inspection geofence (within configured tolerance) And engine_on = true and speed = 0 And min_idle_duration is configured When the idle interval reaches min_idle_duration Then an exempt idle tag is created within 5 seconds with category "Mandated Idle: Inspection" And tag.geofence_id equals the matched geofence identifier And if multiple exemption conditions match, the policy-defined priority determines the final category
Mandated Idle: DPF Regeneration Detection and Exemption
Given engine_on = true and speed = 0 And DPF_regen_status = ACTIVE from OBD-II or inferred by the rules engine When regen remains active for at least 30 seconds Then an exempt idle tag is created within 5 seconds with category "Mandated Idle: DPF Regen" And tag duration equals the intersection of regen-active window and the idle interval And when regen ends, the tag closes within 5 seconds
Propagation Consistency Across Analytics, Scorecards, Alerts, and APIs
Given an idle interval has an exempt tag When analytics are computed for any period containing the interval Then idle totals and rates exclude the exempt duration and match across dashboard, driver scorecards, and scheduled reports within 0.1 minutes Given an exempt tag exists When real-time coaching or alerting logic evaluates idling Then no idle alerts or coaching prompts are emitted for the exempt duration Given an exempt tag exists When a consumer retrieves events via the public API Then the payload includes tag.category, reason_code, confidence, evidence_id, and idle metrics reflect the exemption Given an exempt tag is added, updated, or closed When caches and derived stores refresh Then all product surfaces reflect the change within 60 seconds
Retroactive Backfill on Late Data
Given the platform receives late telemetry or auxiliary signal data up to 72 hours after occurrence When reevaluating idle intervals within that window Then the system retroactively creates, adjusts, or removes exempt tags to match rules and policies And reprocesses affected analytics, scorecards, alerts, and API aggregates within 15 minutes And writes a change_log entry capturing before/after state, reason_code, data_source = "backfill", and timestamps Given the same late data is processed multiple times When reprocessing runs Then operations are idempotent and do not create duplicate tags
Evidence and Immutable Reason Codes for Audit
Given an exempt tag is created When the tag is persisted Then an evidence record is stored containing time-bounded signal snapshots, geofence match result, rules fired, inputs, and computed confidence And the tag is assigned an immutable reason_code and evidence_id And edits to reason_code and evidence payload are blocked; only append-only annotations are allowed Given an auditor or driver feedback request references a tag When exporting audit evidence Then the export includes tag metadata, evidence payload, and policy version identifiers
Policy Defaults, Overrides, Versioning, and Change History
Given a fleet-wide default PTO Guard policy exists and a per-vehicle override is configured When classification runs for that vehicle Then the per-vehicle override takes precedence over the fleet default Given a new policy version is created with an effective_from timestamp When classification runs for events at or after effective_from Then the new policy version is applied, and events before effective_from retain prior version behavior and tags Given a policy is created, updated, or disabled When viewing change history Then the system shows who, what, when, before/after diff, and change reason, with history retained for at least 12 months Given multiple exemption categories match simultaneously When resolving the final category Then the policy-defined priority order yields a single category and records the decision path in evidence
Vehicle and Equipment Configuration Profiles
"As an operations admin, I want to define vehicle and site profiles so that the system can accurately distinguish exempt idle from avoidable idle."
Description

Provides self-serve configuration of vehicle capabilities and site context that improve detection accuracy: PTO presence and wiring, aux input mappings, reefer type, known job sites/geofences, typical operating windows, and regulatory constraints by region. Supports CSV import, API provisioning, and UI editing with validation. Includes templated profiles by vehicle class and the ability to copy/apply profiles in bulk. Changes are versioned with effective dates and audit trails, and are consumed by the detection engine and tagging policies at runtime.

Acceptance Criteria
UI Profile Creation and Validation
- Given a fleet admin opens Configuration Profiles and clicks New, when they enter a unique name (3–64 chars), select a vehicle class, PTO presence (Yes/No), reefer type (None/Diesel/Electric/Hybrid), and a valid ISO 3166-2 regulatory region, then Save is enabled and the profile is created as Active (v1) with effective_date = now.
- Given PTO presence = Yes, when Save is clicked without selecting a PTO wiring input (DI1–DI8), then the form blocks save and shows an inline error on PTO wiring.
- Given aux input mappings are configured, when two mappings reference the same digital input, then the form blocks save and highlights the conflicting mappings.
- Given operating windows are added, when any day has overlapping time ranges, then the form blocks save and identifies the overlapping ranges.
- Given an invalid regulatory region code is entered, when Save is clicked, then the profile is not saved and an inline validation error is shown.
- Given a successful save, then an audit record is written with actor, source=UI, timestamp, and field-level changes.
CSV Import of Profiles with Dry-Run and Error Reporting
- Given a CSV with required headers (name, vehicle_class, pto_presence, reefer_type, regulatory_region, optional fields...), when the admin selects Dry Run and uploads the file, then the system returns a validation report with total rows, valid rows, and row-level errors without creating or updating any profiles.
- Given a valid CSV (<= 10,000 rows, <= 10 MB), when the admin selects Import, then each valid row creates a new profile or a new version if name exists and column update_if_exists=true; invalid rows are skipped and reported with row and column errors.
- Given a row with PTO presence = Yes and missing pto_wiring, when importing, then that row fails with a specific error code and message.
- Given a row includes geofence geometry in WKT columns, when the WKT is invalid or self-intersecting, then that row is rejected with an explicit geometry error.
- Given the same CSV is re-imported with the same Run ID within 24 hours, when Import is executed, then no duplicate versions are created and the prior results are returned.
- After a successful import, an audit entry is created summarizing counts (created, updated, failed) and linking to the file hash.
API Provisioning of Profiles (Idempotent Upsert)
- Given an API client with scope profiles:write, when it PUTs /api/v1/profiles/{external_id} with a valid payload, then the service upserts the profile keyed by external_id and returns 201 Created with version_id on create or 200 OK with new version_id on update.
- Given Idempotency-Key header is provided, when the same request is retried within 24 hours, then the service returns the original response and does not create duplicate versions or assignments.
- Given vehicle_ids are included (max 1000), when the request succeeds, then each listed vehicle is associated to the profile and GET /api/v1/vehicles/{id} reflects the association.
- Given the payload violates a validation rule (e.g., invalid regulatory region, overlapping operating windows), when the request is processed, then the service returns 422 with JSON-pointer paths to each invalid field and machine-readable error codes.
- Given the client exceeds 60 write requests per minute, when additional requests arrive, then the service responds 429 with Retry-After.
- All successful upserts generate audit entries with actor (API token), source=API, and field-level diffs.
Templated Profiles by Vehicle Class and Bulk Apply
- Given a template library is available, when an admin selects a vehicle class template (e.g., Class 8 Tractor, PTO) and clicks Copy, then a tenant-scoped draft is created pre-populated from the template and requires a unique name before activation.
- Given 1–1000 vehicles are selected, when the admin clicks Apply Profile and confirms Replace vs Skip for conflicts, then associations are processed asynchronously and a progress indicator shows started, in-progress, and completed states.
- Given vehicles already have a different profile, when Replace is chosen, then the existing association is replaced; when Skip is chosen, then those vehicles are omitted and counted in the summary.
- The bulk apply operation completes within 120 seconds for 1000 vehicles and surfaces a summary with counts (associated, replaced, skipped, failed) and downloadable CSV of failures.
- All template copy and bulk apply actions create audit records including actor, source=UI, counts, and affected IDs.
Versioning, Effective Dates, and Audit Trail
- Given a profile v1 is Active, when an admin edits and sets effective_date in the future, then v2 is created with state=Scheduled and v1 remains Active until the effective_date, after which v2 becomes Active and v1 is Archived automatically.
- Given an admin attempts to set an effective_date more than 30 days in the past, when saving, then the change is rejected with a validation error to preserve audit integrity.
- Given multiple versions exist, when viewing the History tab, then the UI lists versions with created_at, created_by, effective_date, and a change summary; selecting a version shows the immutable snapshot and field-level diff vs prior version.
- Given a profile has active vehicle associations, when attempting to hard-delete the profile, then deletion is blocked; soft-delete is allowed only when no active associations and all versions remain in audit for 7 years.
- Every create/update/delete action records who, when, what (field diffs), and source (UI/CSV/API) and is retrievable via GET /api/v1/audit?entity=profile&entity_id={id}.
Runtime Consumption by Detection Engine and Tagging Policies
- Given a vehicle is assigned to a profile with effective_date <= now, when the PTO wiring config is changed from DI2 to DI3, then within 5 minutes the detection engine uses DI3 and PTO Guard stops exempting idle on DI2 and starts exempting idle on DI3.
- Given a job site geofence with an operating window is configured, when a vehicle idles within that geofence during the window, then the idle segment is tagged Exempt: Job Site Window and excluded from coachable idle metrics; idles outside the window are not exempted.
- Given reefer_type = Diesel and a regulatory region that exempts active refrigeration, when reefer_on is detected during idle, then the segment is tagged Exempt: Reefer Operation with profile_id and version_id attached to the tag reason.
- Tagging decisions are visible on the trip timeline and in the event audit, including reason_code, profile_id, version_id, and timestamp; metric rollups reflect exemptions within 10 minutes.
Geofence and Operating Window Configuration and Validation
- Given an admin inputs a geofence geometry (WKT polygon or circle), when saving, then the system validates geometry is non-self-intersecting, within valid bounds, and has an area between 10 m² and 100 km²; invalid geometries block save with specific errors.
- Given a geofence name is entered, when saving, then the name must be unique per tenant and 3–64 characters or the save is blocked.
- Given operating windows are added to a geofence, when ranges overlap on the same day or timezone is missing, then save is blocked with an error indicating the conflicting ranges or missing timezone.
- Given a CSV of geofences is imported, when rows contain invalid geometry or missing columns, then those rows are rejected with row-level errors; valid rows are created.
- Given a tenant reaches 5,000 geofences, when attempting to add another, then the request is rejected with a limit exceeded error and guidance to clean up or request an increase.
- Given vehicles are associated to geofences by tag or explicit list, when viewing vehicle details, then associated geofences are listed and filterable by tag.
Coaching Suppression and Fairness Controls
"As a driver, I want the app to recognize exempt idle and avoid coaching me for it so that feedback feels fair and relevant."
Description

Integrates exemption tags into driver coaching and scoring so that exempt idle never triggers nagging notifications or penalizes scores. Adds context-aware UI: driver app banners indicating exempt status during active events, and manager console indicators showing why coaching was suppressed. Allows configurable thresholds to flag excessive exempt idle (e.g., PTO running unusually long) without counting against standard idle metrics. Ensures consistency across notifications, scorecards, and weekly summaries, with clear separation between exempt and non-exempt behaviors.

Acceptance Criteria
Exempt Idle Suppresses Coaching and Score Penalties (Driver Session)
Given PTO/refrigeration tag is active for a vehicle When engine idle is detected for more than the standard idle threshold Then no idle coaching notifications (push, SMS, in-app) are sent to the driver for the exempt duration And the driver scorecard excludes the exempt idle duration from standard idle metrics and penalties And a banner labeled "Exempt idle (PTO active)" appears in the driver app within 5 seconds and persists while exempt is active And the event is recorded as "Idle - Exempt" with start/end timestamps and source tag Given exempt status ends while the engine remains idling When idle continues beyond the threshold Then subsequent idle is treated as non-exempt and subject to normal coaching and scoring rules
Manager Console Shows Suppression Rationale
Given an idle coaching event was suppressed due to an exemption tag When a manager views the driver's event timeline or details Then an indicator "Coaching suppressed: Exempt (PTO)" is visible with reason code, duration, and data source And expanding the event reveals the rule that triggered suppression and the exemption period And the suppression indicator is present in both list and detail views And the events API includes suppression_reason, is_exempt, and source_tag fields for the same event Given a manager applies the filter "Suppressed coaching only" When the timeline reloads Then only events with coaching suppression are shown
Configurable Exempt Idle Threshold and Non-Penalizing Alert
Given org-level settings define excessive_exempt_idle_threshold per-event (X minutes) and per-day per-driver (Y minutes) When an exempt idle segment exceeds X minutes Then the system creates an "Exempt Idle Alert" to the manager and logs it on the driver's timeline And this alert does not reduce the driver’s score nor increase standard idle metrics And reports display "Exempt idle (over threshold)" separately from standard idle Given the daily sum of exempt idle exceeds Y minutes for a driver When the daily summary is generated Then a daily Exempt Idle Alert is created and visible on manager dashboards and exports Given thresholds are updated by an admin When new data arrives after the change Then new thresholds apply prospectively; existing recorded alerts remain unchanged; reports label the active threshold version used
Consistency Across Notifications, Scorecards, and Weekly Summaries
Given a week contains N minutes of exempt idle and M minutes of non-exempt idle for a driver When viewing the driver scorecard, manager weekly summary, and notification analytics Then all three surfaces display identical exempt and non-exempt totals and breakdowns And only non-exempt idle contributes to the score calculation And suppressed notifications are counted in "suppressed" analytics but not delivered And totals reconcile within 0.1 minute across all surfaces and APIs Given data ingestion is delayed When metrics are rechecked later Then eventual consistency is achieved within 10 minutes
Mid-Event State Changes Are Segmented Properly
Given an idle event begins while no exemption is active When a PTO/refrigeration exemption activates mid-idle Then the idle is split into a non-exempt segment followed by an exempt segment with precise timestamps And coaching triggers only for the non-exempt segment per standard rules And the driver scorecard includes only the non-exempt segment in idle penalties And the UI timeline displays contiguous segments with clear exempt/non-exempt markers Given exemption deactivates while idle continues When idle persists beyond the threshold Then a new non-exempt segment is created and standard coaching/scoring resumes
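The segmentation behavior above amounts to splitting one idle interval at each exemption transition. A minimal sketch; the input shapes are assumptions for illustration:

```typescript
interface ExemptionChange { ts: number; exempt: boolean } // transitions, ascending by ts

interface IdleSegment { startTs: number; endTs: number; exempt: boolean }

// Split one idle interval [startTs, endTs) at each exemption transition, so
// coaching and scoring apply only to the non-exempt segments.
function segmentIdle(
  startTs: number,
  endTs: number,
  initiallyExempt: boolean,
  changes: ExemptionChange[],
): IdleSegment[] {
  const segments: IdleSegment[] = [];
  let cursor = startTs;
  let exempt = initiallyExempt;
  for (const c of changes) {
    if (c.ts <= startTs) { exempt = c.exempt; continue; } // transition before idle began
    if (c.ts >= endTs) break;
    if (c.exempt !== exempt) {
      segments.push({ startTs: cursor, endTs: c.ts, exempt });
      cursor = c.ts;
      exempt = c.exempt;
    }
  }
  segments.push({ startTs: cursor, endTs, exempt });
  return segments;
}

// Example: PTO activates mid-idle and deactivates before the idle ends.
console.log(segmentIdle(0, 600_000, false, [
  { ts: 120_000, exempt: true },
  { ts: 480_000, exempt: false },
]));
// => non-exempt [0, 120 s), exempt [120 s, 480 s), non-exempt [480 s, 600 s)
```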
Offline/Intermittent Connectivity Handling for Exempt Idle
Given the driver device is offline during an exempt idle event When connectivity is restored and telemetry is reconciled Then any queued idle coaching notifications corresponding to exempt periods are discarded and not delivered retroactively And the driver app shows the "Exempt idle" banner within 5 seconds of receiving the exemption status And the server records suppression with original event timestamps based on buffered telemetry And duplicate events/notifications are prevented Given partial data is received out of order When processing completes Then exempt/non-exempt segmentation and suppression decisions are consistent with final telemetry
Audit Export and API Parity for Suppressed Coaching and Idle Separation
Given a manager exports an audit report for a date range When the export is generated Then each idle event includes columns: is_exempt, suppression_reason, source_tag, duration, contributed_to_score, threshold_breach_flag And driver- and vehicle-level totals for exempt vs non-exempt idle match on-screen reports within 0.1 minute And the same data is retrievable via API endpoints with matching values and field names And exports are available in CSV format and respect org data-retention policies
Audit-Ready Idle Reports and Exports
"As a small fleet owner, I want defensible idle reports with evidence and exports so that I can satisfy audits and resolve driver disputes quickly."
Description

Delivers reports that separate exempt from non-exempt idle by driver, vehicle, site, and time period, including event details, reason codes, signal evidence, and geofence matches. Provides time-series charts, summaries, and drill-down to individual events. Supports export to CSV and PDF with embedded evidence snapshots and references to immutable event IDs. Includes an API endpoint for programmatic access and retention settings aligned with compliance needs. Ensures totals reconcile with fleet-level metrics and driver scorecards.

Acceptance Criteria
Exempt vs Non‑Exempt Idle Segregation by Driver/Vehicle/Site/Time
Given a user selects a fleet, date range, and timezone (and optional driver, vehicle, and site filters) When the Idle report is generated Then idle minutes and fuel impact are separated into Exempt and Non‑Exempt for each driver, vehicle, site (geofence), and time bucket And totals per dimension equal Exempt + Non‑Exempt and reconcile to the grand totals And PTO/refrigeration/mandated‑idle events are auto‑classified as Exempt using PTO Guard reason codes and signal evidence And ignition‑on, speed=0 events outside exempt rules are classified as Non‑Exempt And the active filters and data cut are displayed in the report header
Event Drill‑Down: Reason Codes, Signals, Geofence, Immutable ID
Given a user is viewing the Idle report tables or charts When they click a row or chart point representing idle time Then an event detail view opens showing: immutable EventID, start/end timestamps (ISO 8601, selected timezone), duration (seconds), driver, vehicle (name, ID, VIN), site/geofence (name and ID), exempt status, reason code, and signal evidence (speed, RPM, PTO state, reefer state, idle mandate flag) And evidence snapshot images or signal trace thumbnails are displayed with capture timestamps and are time‑aligned to the event window And the user can navigate to previous/next event and return to the originating list context preserving filters and sort And the values in the detail view exactly match the same event in CSV/PDF/API outputs
Time‑Series Charts and Summaries Consistent With Tables
Given idle data exists for the selected time period When the user views time‑series charts in the Idle report Then a stacked time‑series displays Exempt vs Non‑Exempt idle per chosen bucket (hour/day/week) with clear legends And tooltips show exact values and percentages for each stack segment And zooming or changing the bucket size updates the tables and summary widgets to the visible range And chart totals within the viewport match table totals within the same filters and range And empty or zero‑data periods render as zero values without breaking the series
CSV Export With Evidence Fields and Immutable Event IDs
Given a user applies filters to the Idle report When they export to CSV (event‑level) Then the CSV contains one row per idle event with columns: EventID (immutable), StartTime, EndTime, DurationSec, DriverID, DriverName, VehicleID, VIN, SiteID, SiteName, ExemptFlag, ReasonCode, SpeedAvg, RPMAvg, PTOState, ReeferState, GeofenceMatchID, GeofenceMatchName, SourceDeviceID, CreatedAt, UpdatedAt And the file header includes the applied filters, timezone, and generation timestamp And timestamps are ISO 8601 and machine‑parseable; numeric fields use dot decimal and no thousands separators And the number of rows equals the number of events shown in the UI for the same filters And large exports complete via server‑side pagination but produce a single deduplicated file without missing rows
PDF Export With Embedded Evidence Snapshots
Given a user applies filters to the Idle report When they export to PDF Then the PDF includes the report header (filters, timezone), time‑series charts, per‑dimension summaries, and a paginated list of events with EventID references And each listed event displays embedded evidence snapshot thumbnails (e.g., signal trace images) with capture timestamps and links to full evidence And totals and breakdowns in the PDF equal the on‑screen values for the same filters And the document includes page numbers, generation timestamp, and user watermark And for reports exceeding 1,000 events, the PDF includes full summaries and top N events with evidence; remaining events are referenced by EventID with a note to use CSV/API for full detail
Reconciliation With Fleet Metrics and Driver Scorecards
Given a selected timeframe, timezone, and filters When comparing idle totals and exempt splits between the Idle report, the fleet‑level dashboard, and driver scorecards using the same data cut Then totals match exactly after the ingestion cutoff timestamp is reached And if data is still processing, the UI displays a data freshness banner and discrepancies automatically resolve within 15 minutes And a nightly automated reconciliation job flags and alerts on any variance >0.5% between sources
API Access and Data Retention Compliance Controls
Given an API client with scope idle.read When it calls GET /api/v1/idle-events and GET /api/v1/idle-summary with specified filters (date range, timezone, driver, vehicle, site, exempt flag) Then the responses include event records and aggregates equivalent to the UI, with stable pagination cursors, deterministic sorting, and filter echoing And each event includes immutable EventID, timestamps (ISO 8601), driver/vehicle/site identifiers, exempt flag, reason code, and evidence URLs signed with time‑limited access And rate‑limit headers are returned; unauthorized/forbidden requests receive 401/403 with error codes And admins can configure retention periods (e.g., 24–36 months) and apply legal holds to specific EventIDs And events past retention are purged from UI/API/exports while purge actions are logged with timestamp, actor, and counts for audit
Dispute and Admin Override Workflow
"As a dispatcher, I want an efficient way to review and resolve idle exemption disputes so that metrics stay accurate and drivers trust the system."
Description

Enables drivers to flag suspected misclassifications and attach notes; routes items to managers with all supporting evidence (signals, location, rule hits). Provides approve/deny/bulk actions, with automatic recalculation of metrics and scorecards upon resolution. All actions are recorded with user, timestamp, and reason, maintaining a full audit trail. Includes SLA timers and notifications to prevent backlog and an analytics view to surface recurring misclassification patterns for rule tuning.

Acceptance Criteria
Driver Flags Misclassified PTO/Idle Event
Given a driver is viewing an event classified as Idle (Non‑Exempt) or PTO Exempt, When they tap Dispute, Then they must select a predefined reason and may add an optional note between 10 and 500 characters before submitting. Given a valid submission, When processed, Then the system creates a dispute with a unique ID, links it to the event, captures driver ID, UTC timestamp, and device metadata, and sets the event status to Under Review. Given duplicate attempts, When a driver tries to dispute the same event again, Then the system prevents duplication and shows the existing dispute and its status. Given intermittent connectivity, When the driver submits offline, Then the dispute is queued and auto-submits within 60 seconds of connectivity while preserving the original device timestamp.
Evidence Package Routed to Manager
Given a new dispute is created, When routing occurs, Then the assigned manager receives in‑app and email notifications within 2 minutes including dispute ID, vehicle, driver, and SLA due time. Given the manager opens the dispute, When viewing details, Then the UI displays the raw signals snapshot (RPM, speed, PTO state, coolant temp, battery voltage), GPS map/location, PTO Guard rule hits, and original classification rationale. Given access control, When a manager lacks permission for the vehicle or driver, Then access is denied and the attempt is logged with user and timestamp.
Manager Resolve and Bulk Actions
Given a pending dispute, When the manager selects Approve, Then they must choose a new classification (PTO Exempt, Mandated Idle Exempt, or Non‑Exempt Idle) and provide a reason of at least 5 characters, after which the event is reclassified. Given a pending dispute, When the manager selects Deny, Then they must enter a reason of at least 5 characters and the original classification is retained. Given multiple selected disputes, When the manager applies Bulk Approve or Bulk Deny, Then the action is applied to all selected items within 60 seconds and both batch‑level and item‑level audit entries are created. Given concurrent updates, When a dispute has already been resolved by another admin, Then further actions are blocked and a message indicates Already Resolved.
Automatic Metrics and Scorecard Recalculation
Given a dispute is resolved (approved or denied), When the resolution is saved, Then idle metrics, PTO‑exempt totals, and driver scorecards are recalculated and visible in dashboards within 5 minutes. Given recalculation completes, When coaching alerts are affected, Then impacted alerts are closed, updated, or reopened accordingly and an activity entry is added to the driver and vehicle timelines. Given API consumers, When recalculation completes, Then updated aggregates are available via API and UI with consistent values within 5 minutes of resolution. Given a resolution outcome, When notifications are sent, Then both driver and manager receive an in‑app notification (and email for managers) with the outcome, reason, and a link to the event within 2 minutes.
Full Audit Trail and Immutability
Given any dispute lifecycle event (create, view, comment, approve, deny, bulk action), When it occurs, Then an audit entry records user, role, action, UTC timestamp, before/after classification, reason, and source IP. Given audit logs are requested by dispute ID, When returned, Then entries are chronological, read‑only, and immutable; updates create new entries and never overwrite existing ones. Given compliance export is requested for a date range, When generated, Then a signed CSV with checksum and total record count is available for download within 2 minutes.
SLA Timers and Escalations
- Given a dispute is created, When timers start, Then a 48 business‑hour SLA countdown is set and visible to driver and manager.
- Given SLA thresholds, When 24 business hours remain, Then a reminder is sent to the assigned manager; when 4 hours remain, an escalation is sent to the supervisor; when the SLA is breached, status changes to Overdue and daily escalations are sent until resolution.
- Given fleet calendars are configured, When calculating business hours, Then weekends and configured holidays are excluded from SLA calculations.
- Given a dispute is resolved, When SLA evaluation occurs, Then the outcome (Met or Breached) is recorded and available in reporting.
Misclassification Analytics for Rule Tuning
- Given resolved disputes exist, When viewing Analytics, Then the dashboard shows approve/deny rates by rule, vehicle make/model/year, driver, location cluster, and PTO state over the last 7/30/90 days with trend lines.
- Given pattern thresholds, When any rule has a ≥5% approval rate with at least 20 disputes in the selected period, Then it is flagged as Candidate for Tuning with a link to 10 sample events.
- Given an export is requested, When processed, Then a CSV of the current analytics view (all dimensions and metrics) is generated within 2 minutes.
ELD and Reefer Telemetry Integrations
"As a technical lead, I want to integrate external ELD and reefer signals so that PTO Guard can rely on authoritative states and reduce false positives."
Description

Adds connectors to ingest explicit PTO/engine state and reefer on/off or load data from common ELDs and refrigeration units via APIs or files. Normalizes and maps external states into FleetPulse’s event model with graceful degradation when integrations are unavailable. Includes health monitoring, retries, and data freshness indicators. Improves detection precision by cross-validating OBD-II/J1939 inferences with external signals and provides configuration to prioritize authoritative sources per vehicle.

Acceptance Criteria
Real-time ELD PTO State Ingestion via API
- Given vehicle V is linked to an ELD connector with valid credentials and priority ELD > OBD, When the ELD API delivers a PTO=ON event with source_ts T, Then FleetPulse creates VehicleEvent(type=PTO_ENGAGED, value=true, vehicle_id=V, source=ELD, occurred_at=T) within 60 seconds and marks authoritative=true.
- Given a PTO=OFF event is received for vehicle V, When processed, Then FleetPulse creates VehicleEvent(type=PTO_ENGAGED, value=false) within 60 seconds, closes any open PTO window, and computes duration in seconds.
- Given duplicate PTO events share the same source_event_id and source_ts, When delivered multiple times, Then only one VehicleEvent is persisted and subsequent duplicates are ignored (idempotent).
- Given transient 5xx or network timeouts from the ELD provider, When fetching events, Then the connector retries with exponential backoff up to 3 attempts within 120 seconds and processes each source event at most once.
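The retry and idempotency requirements above reduce to a small ingestion loop. The Python sketch below is illustrative only — fetch_events, persist_event, and the in-memory seen_keys store are hypothetical stand-ins for the real provider client and durable storage:

    import time

    class TransientError(Exception):
        """Stands in for provider 5xx responses and network timeouts."""

    def ingest_eld_events(fetch_events, persist_event, seen_keys, max_attempts=3):
        # Retry transient failures with exponential backoff (2s, 4s), then give up
        # so the failure surfaces to connector health monitoring.
        delay = 2
        for attempt in range(1, max_attempts + 1):
            try:
                events = fetch_events()
                break
            except TransientError:
                if attempt == max_attempts:
                    raise
                time.sleep(delay)
                delay *= 2
        for ev in events:
            # Idempotency: dedupe on (source_event_id, source_ts) so redelivered
            # events are processed at most once.
            key = (ev["source_event_id"], ev["source_ts"])
            if key in seen_keys:
                continue
            seen_keys.add(key)
            persist_event({
                "type": "PTO_ENGAGED",
                "value": ev["pto_on"],           # True for PTO=ON, False for PTO=OFF
                "vehicle_id": ev["vehicle_id"],
                "occurred_at": ev["source_ts"],  # preserve the source timestamp
                "source": "ELD",
                "authoritative": True,
            })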
Reefer Telemetry File Import and Mapping
- Given hourly files in S3/SFTP matching the configured schema (CSV/JSON) with fields including reefer_on, setpoint, return_air, discharge_air, load_id, and source_ts, When the importer runs, Then FleetPulse parses valid rows and emits mapped Reefer_ON/OFF and telemetry events with occurred_at=source_ts and source=REEFER within 5 minutes of file availability.
- Given rows fail schema or type validation, When encountered, Then invalid rows are routed to a dead-letter queue with row index and error code, and valid rows continue processing.
- Given a file is delivered or processed more than once, When the importer executes, Then records with the same (vehicle_id, source_event_id or hash, source_ts) are deduplicated within a 24-hour window.
- Given timestamps include source timezone offsets, When parsing, Then all occurred_at values are normalized to UTC and source_tz is stored in metadata.
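As a rough illustration of the row-level contract above — validation, UTC normalization, and windowed dedup — consider the following sketch. The field names mirror the schema in the criteria, while emit_event, dead_letter, and seen_keys are hypothetical:

    from datetime import datetime, timezone
    import hashlib
    import json

    REQUIRED_FIELDS = ("vehicle_id", "reefer_on", "source_ts")

    def process_row(row, row_index, emit_event, dead_letter, seen_keys):
        """Validate one telemetry row, normalize its timestamp to UTC,
        dedupe on a content hash, and emit a mapped reefer event."""
        if any(field not in row for field in REQUIRED_FIELDS):
            dead_letter(row_index, "missing_field")   # bad rows don't block the file
            return
        try:
            ts = datetime.fromisoformat(row["source_ts"])  # keeps the source offset
        except (TypeError, ValueError):
            dead_letter(row_index, "bad_timestamp")
            return
        if ts.tzinfo is None:
            dead_letter(row_index, "missing_tz_offset")
            return
        # Dedup key per the criteria: (vehicle_id, content hash, source_ts).
        digest = hashlib.sha256(json.dumps(row, sort_keys=True).encode()).hexdigest()
        key = (row["vehicle_id"], digest, row["source_ts"])
        if key in seen_keys:
            return  # file delivered or processed more than once
        seen_keys.add(key)
        emit_event({
            "type": "REEFER_ON" if row["reefer_on"] else "REEFER_OFF",
            "vehicle_id": row["vehicle_id"],
            "occurred_at": ts.astimezone(timezone.utc).isoformat(),
            "source": "REEFER",
            "metadata": {"source_tz": str(ts.utcoffset())},
        })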
Graceful Degradation When Integrations Unavailable
- Given vehicle V prioritizes ELD over OBD for PTO, When ELD connector health is ERROR or last_seen_at is more than 5 minutes old, Then FleetPulse falls back to OBD-II/J1939 inference for PTO detection, creating events with authoritative=false and confidence=inferred.
- Given a PTO-exempt idle period was inferred during an outage, When authoritative ELD data later arrives within 15 minutes of the same time window, Then FleetPulse reconciles by updating the tag to authoritative=true with provenance=ELD without duplicating the tag.
- Given integrations are unavailable for > 24 hours for vehicle V, When idle metrics are displayed, Then the UI and API include a data-freshness warning and identify segments derived from inference with an inferred badge.
Health Monitoring, Retries, and Alerts
- Given any connector is active, When evaluated every minute, Then health is computed as OK (last_seen_at ≤ 2 min, error_rate < 1%), DEGRADED (2–5 min or error_rate 1–10%), or ERROR (> 5 min, auth failure, or error_rate > 10%) and surfaced in the Integrations Health dashboard and API.
- Given transient errors (HTTP 5xx or timeouts), When requests fail, Then retries use exponential backoff starting at 2s with a 30s max for up to 3 attempts; HTTP 429 honors Retry-After and enforces provider rate limits; HTTP 4xx (except 429) are not retried.
- Given health remains DEGRADED or transitions to ERROR for more than 10 consecutive minutes, When detected, Then a tenant-scoped alert is sent to configured channels and an incident entry is created with last_error and suggested actions.
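The three-state health model is a pure function of staleness and error rate. A minimal sketch under the thresholds stated above (function and argument names are illustrative; the age and rate are assumed to be computed by the minute-interval evaluator):

    def connector_health(last_seen_age_s: float, error_rate: float,
                         auth_failed: bool = False) -> str:
        """Map connector staleness and error rate to OK / DEGRADED / ERROR."""
        if auth_failed or last_seen_age_s > 300 or error_rate > 0.10:
            return "ERROR"      # silent > 5 min, auth failure, or > 10% errors
        if last_seen_age_s > 120 or error_rate >= 0.01:
            return "DEGRADED"   # silent 2-5 min, or 1-10% errors
        return "OK"             # fresh within 2 min and < 1% errors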
Data Freshness Indicators in UI and API
- Given a vehicle and integration, When calling GET /vehicles/{id}/integrations, Then the response includes last_seen_at, last_event_type, and freshness_state ∈ {FRESH, STALE, STALE_CRITICAL}, computed with thresholds of 0–5 minutes, 5–30 minutes, and > 30 minutes respectively.
- Given the Integrations panel in the web UI, When rendered, Then each vehicle/integration row displays freshness_state and last_seen_at and supports filtering by freshness_state.
- Given a metrics or audit export is generated, When downloaded, Then per-vehicle freshness_state and last_seen_at are included to support external audits.
Cross-Validation and Source Prioritization
- Given vehicle V has source priority [ELD, REEFER, OBD], When conflicting PTO state signals occur within a 2-minute window, Then FleetPulse selects the highest-priority source as authoritative, records losing_sources and conflict=true on the decision, and proceeds with exempt tagging.
- Given no prioritized external source has data in the window, When OBD inference indicates PTO, Then an event is created with authoritative=false and confidence=inferred and is eligible for later reconciliation.
- Given conflicts exceed 5% of PTO windows over a rolling 7-day period for vehicle V, When nightly analytics run, Then a diagnostic alert is created recommending a source calibration review.
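One way to express the prioritization rule in code, assuming signals have already been grouped into the 2-minute window; the Signal shape and return dict are illustrative, not an existing FleetPulse API:

    from dataclasses import dataclass

    PRIORITY = ["ELD", "REEFER", "OBD"]  # per-vehicle source configuration

    @dataclass
    class Signal:
        source: str      # "ELD", "REEFER", or "OBD"
        pto_state: bool

    def resolve_pto(signals: list) -> dict:
        """Pick the highest-priority source among PTO signals observed
        within the same 2-minute window and record conflict metadata."""
        by_rank = sorted(signals, key=lambda s: PRIORITY.index(s.source))
        winner = by_rank[0]
        conflict = any(s.pto_state != winner.pto_state for s in by_rank[1:])
        return {
            "pto_state": winner.pto_state,
            "selected_source": winner.source,
            # Only an external source counts as authoritative; OBD is inference.
            "authoritative": winner.source != "OBD",
            "confidence": "inferred" if winner.source == "OBD" else "reported",
            "losing_sources": [s.source for s in by_rank[1:]],
            "conflict": conflict,
        }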
Auditability of Exempt Idle Tagging Decisions
- Given an idle segment is tagged exempt due to PTO or reefer, When a user opens the audit drawer or calls GET /idle-segments/{id}, Then the response includes decision_provenance with sources_considered, selected_source, rules_applied, confidence, timestamps, and raw_source_refs.
- Given an audit export is requested for a date range, When completed, Then the CSV/JSON contains one row per exempt segment with vehicle_id, time bounds, decision fields, and a checksum; data is read-only after 30 days.
- Given an admin attempts to modify an exempt tag older than 7 days, When attempted, Then the system blocks the change and requires a correction request linked to the audit trail.

Idle Scorecards

Weekly driver and vehicle scorecards with trends, peers, and bite-size goals. Surfaces quick wins (e.g., minutes per stop) and quantifies fuel and CO2 saved, motivating sustained behavior change and friendly competition across the fleet.

Requirements

Idle Event Detection & Aggregation
"As a fleet manager, I want accurate detection and aggregation of idle time by driver and vehicle so that I can trust the scorecards and target coaching effectively."
Description

Detect idling events from OBD‑II signals (engine on, RPM > 0, vehicle speed = 0) with a configurable minimum duration, aggregate by driver, vehicle, stop, and location, and store normalized metrics for minutes, frequency, and percentage of engine-on time. Integrate with FleetPulse’s telemetry pipeline and driver–vehicle assignment to attribute events to the correct user and shift, handle time zones, and support exceptions for PTO usage, cold starts, and geo-fenced exclusions. Persist raw and summarized data to power score computation, trends, and reporting.

Acceptance Criteria
Detect Idle Event with Configurable Minimum Duration
- Given OBD-II telemetry where engine_on is true, RPM > 0, and vehicle_speed = 0, When the continuous condition persists for at least the configured min_idle_seconds, Then the system creates a single idle event with start_time, end_time, duration_seconds, vehicle_id, and source_device_id.
- And the duration is measured in whole seconds and is never rounded up beyond the observed time.
- And min_idle_seconds defaults to 120 seconds and is configurable per fleet, with a per-vehicle override taking precedence.
- And if the condition terminates before min_idle_seconds elapses, no idle event is created.
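The detection rule is effectively a small state machine over the telemetry stream. A minimal sketch, assuming samples arrive in time order as dicts with epoch-second timestamps; the event shape is illustrative:

    MIN_IDLE_SECONDS = 120  # fleet default; a per-vehicle override takes precedence

    def detect_idle_events(samples, min_idle_seconds=MIN_IDLE_SECONDS):
        """Yield one idle event per continuous run of engine_on,
        RPM > 0, and speed == 0 lasting at least min_idle_seconds."""
        start = None
        last_ts = None
        for s in samples:  # each: {"ts", "engine_on", "rpm", "speed"}
            idling = s["engine_on"] and s["rpm"] > 0 and s["speed"] == 0
            if idling:
                if start is None:
                    start = s["ts"]
                last_ts = s["ts"]
            elif start is not None:
                duration = last_ts - start  # whole seconds, never rounded up
                if duration >= min_idle_seconds:
                    yield {"start_time": start, "end_time": last_ts,
                           "duration_seconds": duration}
                start = None
        # Close out a run that was still open when the stream ended.
        if start is not None and (last_ts - start) >= min_idle_seconds:
            yield {"start_time": start, "end_time": last_ts,
                   "duration_seconds": last_ts - start}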
Aggregate Idle Metrics by Driver, Vehicle, Stop, and Location
- Given a set of detected idle events for a vehicle within a calendar day (local to the vehicle location), When aggregating metrics, Then the system produces rollups by driver_id, vehicle_id, stop_id, and place_id (geo-fenced site, or 8-char geohash if none).
- And for each grouping, the system outputs idle_seconds_total, idle_event_count, and idle_minutes_total = floor(idle_seconds_total/60).
- And a stop_id is defined as consecutive idle events within the same engine_on session and within a 75-meter radius.
- And aggregation windows include daily (local), weekly (ISO week, local), and custom date ranges.
- And aggregation excludes any events whose excluded_reason is not null.
Attribute Events to Correct Driver and Shift with Time Zone Handling
- Given driver-vehicle assignment records with time intervals and shift definitions (local start/end times), and an idle event with start_time, end_time, and GPS coordinates, When attributing the idle event, Then the event is assigned to the driver active at the event midpoint; if multiple assignments overlap, the one with the longest overlap is chosen.
- And if no assignment overlaps, driver_id is set to null and attribution_status = "unassigned".
- And the event is split at local midnight boundaries for daily metrics, using the IANA time zone of the event's location at the event midpoint.
- And during DST transitions, local times are resolved using standard TZ database rules with no overlaps or gaps in daily totals.
- And shift_id is assigned based on local-time overlap with defined shifts; events crossing shifts are split at shift boundaries.
Exclude PTO, Cold Starts, and Geo-Fenced Areas from Idle Detection
- Given an idle candidate segment, When PTO_status = true at any time during the segment, Then the segment (or the overlapping portion) is excluded with excluded_reason = "PTO".
- When the segment starts within the configured cold_start_window_minutes after the first engine_on event following at least 6 hours of engine_off, Then up to cold_start_window_minutes (default 5) of initial idling is excluded with excluded_reason = "COLD_START".
- When the GPS location falls within any exclusion geofence tagged idle_exempt = true, Then overlapping portions are excluded with excluded_reason = "GEOFENCE".
- And excluded durations are subtracted before event creation and aggregation.
- And all exclusions are logged with rule_id, timestamps, and the amounts excluded.
Compute Normalized Idle Metrics (Minutes, Frequency, Percentage of Engine-On Time)
- Given a time window and grouping dimensions (driver, vehicle, stop, location), When computing normalized metrics, Then idle_minutes = floor(sum(idle_duration_seconds)/60).
- And idle_event_count = count(idle_events).
- And engine_on_seconds_total is computed from telemetry in the same window and group.
- And idle_pct_engine_on = round((sum(idle_duration_seconds)/engine_on_seconds_total)*100, 2), bounded to [0,100].
- And groups with engine_on_seconds_total = 0 report idle_minutes = 0, idle_event_count = 0, and idle_pct_engine_on = 0.
- And metrics are consistent between daily and weekly rollups (a week equals the sum of its constituent days within the same local TZ).
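These definitions translate directly into code. A small sketch, assuming per-group inputs have already been collected for the window (names are illustrative):

    def normalized_idle_metrics(idle_duration_seconds, engine_on_seconds_total):
        """Compute idle minutes, event count, and idle percentage of
        engine-on time exactly as specified in the criteria above."""
        idle_seconds = sum(idle_duration_seconds)
        if engine_on_seconds_total == 0:
            # No engine-on time in the window: report zeros across the board.
            return {"idle_minutes": 0, "idle_event_count": 0,
                    "idle_pct_engine_on": 0.0}
        pct = round(idle_seconds / engine_on_seconds_total * 100, 2)
        return {
            "idle_minutes": idle_seconds // 60,               # floor division
            "idle_event_count": len(idle_duration_seconds),
            "idle_pct_engine_on": min(max(pct, 0.0), 100.0),  # clamp to [0, 100]
        }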
Persist Raw and Summarized Idle Data for Scorecards and Reporting
- Given the idle detection and aggregation outputs, When persisting data, Then raw idle events are stored as immutable records containing ids, timestamps (UTC), local_tz, driver_id, vehicle_id, excluded_seconds, and attribution_status.
- And summarized daily and weekly tables are written with partitioning by local_date and vehicle_id for efficient retrieval.
- And records are available via internal API endpoints for score computation within 15 minutes of event end (95th percentile).
- And data retention is 24 months for summaries and 90 days for raw data, with archival to cold storage thereafter.
- And schema changes are versioned; new writes do not mutate historical records.
Handle Telemetry Gaps, Duplicates, and Clock Skew in Idle Detection
- Given incoming telemetry may contain duplicates, gaps, or timestamp skew, When duplicate messages with identical device_timestamp and message_id are received, Then only one is processed and the rest are dropped with dedupe_reason logged.
- When a data gap > 15 seconds occurs during an idle candidate, Then the idle segment is split at the gap and each part is evaluated separately against min_idle_seconds.
- When device GPS time differs from server ingest time by more than 5 seconds, Then device time is used for ordering and event timing, with a skew_correction value recorded.
- And overall detection accuracy is validated on a synthetic dataset: precision and recall for idle events are both >= 0.98.
Weekly Idle Score & Trends
"As a fleet manager, I want a clear weekly idle score with trends so that I can see who is improving and where to focus."
Description

Compute a weekly idle score per driver and vehicle using weighted factors such as idle minutes per engine hour, average idle per stop, and idle percentage, normalized for route type, climate band, and vehicle class. Generate week-over-week trends, deltas, and percentile ranks, and persist historical scores for comparison in FleetPulse analytics. Expose APIs/services to serve scores to the scorecard UI, notifications, and exports.

Acceptance Criteria
Weekly Idle Score Calculation (Driver and Vehicle)
- Given a completed ISO week in the fleet's home timezone, telemetry for each driver and vehicle that includes engine-on duration, engine hours, idle minutes, and stop events for that week, and data completeness per entity that meets minimum thresholds (>= 8 engine hours OR >= 80% engine-on coverage, AND >= 5 stops), When the weekly scoring job executes, Then a numeric idle score in the range 0–100 (higher is better) is produced for each eligible driver and each eligible vehicle.
- And the score is a weighted composite of idle minutes per engine hour, average idle minutes per stop, and idle percentage, using configurable weights with defaults of 0.5, 0.3, and 0.2.
- And scores are deterministically reproducible for the same inputs.
- And entities failing completeness thresholds receive no score and are flagged with reason "INSUFFICIENT_DATA".
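A sketch of the composite and the eligibility gate. It assumes each factor has already been normalized to a 0–100 "higher is better" scale against its cohort baseline (that normalization is specified in the next criterion); names and the exact factor scaling are illustrative:

    DEFAULT_WEIGHTS = {
        "idle_min_per_engine_hr": 0.5,
        "avg_idle_min_per_stop": 0.3,
        "idle_pct": 0.2,
    }

    def weekly_idle_score(normalized_factors, weights=DEFAULT_WEIGHTS):
        """Weighted composite of normalized factors; deterministic for
        identical inputs and bounded to the 0-100 scoring range."""
        score = sum(weights[name] * normalized_factors[name] for name in weights)
        return round(min(max(score, 0.0), 100.0), 1)

    def is_eligible(engine_hours, engine_on_coverage, stops):
        """Completeness gate: >= 8 engine hours OR >= 80% coverage, AND >= 5
        stops; ineligible entities are flagged INSUFFICIENT_DATA instead."""
        return (engine_hours >= 8 or engine_on_coverage >= 0.80) and stops >= 5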
Normalization by Route Type, Climate Band, and Vehicle Class
- Given baseline tables exist for combinations of route type, climate band, and vehicle class, each trip for the week is labeled with route type and climate band, and the vehicle has a class assigned, When raw factor values are normalized during score computation, Then normalization uses the matching cohort baselines for each factor and weights trip contributions proportionally by time.
- And if a cohort baseline is missing, the fleet-wide baseline is used; if the fleet-wide baseline is also missing, the global baseline is used and the result is flagged "BASELINE_FALLBACK".
- And two entities with identical raw idle behavior but different climates or route types produce normalized factor values within 1 point of each other.
- And an entity performing 20% worse than its cohort baseline yields a lower composite score than one performing 10% worse, all else equal.
Week-over-Week Trends and Deltas
- Given a current week W score and a prior week W−1 score for an entity, When trend metrics are generated, Then delta is computed as score(W) − score(W−1).
- And percent change is computed as (delta / max(1, score(W−1))) × 100, rounded to one decimal place.
- And trend direction is Flat when |delta| < 0.5, Up when delta ≥ 0.5, and Down when delta ≤ −0.5.
- And when W−1 is missing, delta and percent change are null and direction is "N/A".
- And a 12-week history array is emitted when available; otherwise as many weeks as exist.
Percentile Ranks Within Fleet and Cohort
- Given scores for all eligible entities in the fleet for week W, and known cohort memberships for vehicle class, route type, and climate band, When percentile ranks are computed, Then a fleet_percentile in [0,100] is assigned using the nearest-rank method over all eligible fleet entities (ties averaged).
- And a cohort_percentile in [0,100] is assigned within the matching cohort.
- And when the comparison set size is < 5, the respective percentile is null and flagged "INSUFFICIENT_COHORT".
- And entities with identical scores receive identical percentiles.
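Tie-averaged nearest-rank percentiles are easy to get subtly wrong, so a reference sketch may help. This is one plausible reading of the convention above — higher scores rank higher, tied scores share the average of their rank positions — and the exact convention should be confirmed before implementation:

    def percentile_ranks(scores):
        """Nearest-rank percentiles over a {entity: score} dict, with tied
        scores averaged; returns None for all when the set is too small."""
        n = len(scores)
        if n < 5:
            return {entity: None for entity in scores}  # INSUFFICIENT_COHORT
        values = sorted(scores.values())  # ascending: worst score first
        ranks = {}
        for entity, s in scores.items():
            below = sum(1 for v in values if v < s)
            equal = sum(1 for v in values if v == s)
            avg_rank = below + (equal + 1) / 2  # average rank of the tied block
            ranks[entity] = round(avg_rank / n * 100)
        return ranks

Entities with identical scores necessarily get identical percentiles here, satisfying the last criterion.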
Historical Persistence and Recompute Governance
- Given weekly scores are produced for entities, When results are persisted, Then each record is stored with keys (entity_type, entity_id, week_start, version).
- And the latest version is returned by default via APIs, with previous versions retained for audit.
- And records are immutable once finalized, except through a versioned recompute.
- And retention is at least 24 months.
- And recomputes are triggered by late telemetry or baseline changes and emit an audit event with the cause and counts of impacted records.
Score APIs for UI, Notifications, and Exports
- Given authenticated access with a valid OAuth2 token and scope fleetpulse.scores.read, When a client requests GET /v1/scores/idle with entity_type=driver|vehicle, week or date_range, and optional filters (entity_ids, route_type, climate_band, vehicle_class), Then the API responds 200 with results including score, factors, data_completeness, delta, percent_change, trend_direction, fleet_percentile, cohort_percentile, and history.
- And pagination parameters limit and cursor are supported with a maximum page size of 200.
- And typical response latency is <= 300 ms p95 for cached reads and <= 1 s p95 for uncached reads.
- And invalid parameters return 400, unauthorized requests return 401/403, nonexistent entities return 404, and rate limiting returns 429.
- And a CSV export is available via POST /v1/exports/idle-scores, which initiates an async job retrievable via GET /v1/exports/{job_id}.
Data Freshness, Scheduling, and Late Data Handling
- Given the weekly batch schedule is configured, When the scheduler runs each Monday at 04:00 in the fleet's home timezone for the prior ISO week, Then all eligible entities receive scores, trends, and percentiles by 05:00.
- And late telemetry arriving within 72 hours triggers an automatic recompute; later arrivals queue a backfill job.
- And repeated runs for the same week are idempotent.
- And completion and recompute events are published with counts of processed entities and failures.
Scorecard Delivery & Scheduling
"As a driver, I want to receive a weekly scorecard at a predictable time so that I can review my performance and plan improvements."
Description

Generate weekly scorecards with summary KPIs, trends, highlights, and deep-link entry points, and deliver via in-app module and email with mobile-responsive layouts. Support per-user time zone scheduling, retry on failure, and opt-in/opt-out preferences through FleetPulse notifications. Log deliveries, opens, and clicks to measure engagement and feed goal-tracking.

Acceptance Criteria
Scorecard Content Completeness & Accuracy
- Given a driver/vehicle has eligible OBD-II idle data for the prior full week in their local time zone and an active FleetPulse account, When the weekly scorecard is generated, Then it includes: summary KPIs (idle minutes, idle rate %, fuel saved, CO2 saved), the week date range, change vs prior week, fleet percentile, top 3 highlights/quick wins, and at least three deep-link CTAs (e.g., View trends, See stops, Set goal).
- And all numeric values match the analytics source of truth within ±1% or ±0.5 minutes (whichever is larger).
- And fuel and CO2 saved calculations use the configured fleet factors and display correct units.
- And names and labels render without truncation for values up to 40 characters.
Per-User Time Zone Weekly Scheduling
- Given user U has Scorecard Delivery set to Weekly on a specified day-of-week and local time in time zone T, When that local date-time occurs, Then a scorecard for the immediately preceding complete week in time zone T is generated and queued within 10 minutes.
- And if T observes DST, the trigger fires at the configured wall-clock time after transitions.
- And if U's time zone is changed before the next run, the new time zone is used for subsequent deliveries.
- And no more than one scorecard is generated per user per week (no duplicates).
Email Delivery & Mobile Responsiveness
- Given the Email channel for Scorecards is ON for user U and a weekly scorecard is generated, When the email is sent, Then it is accepted by the MTA within 15 minutes of generation and passes SPF, DKIM, and DMARC authentication.
- And the subject is "[FleetPulse] Your Idle Scorecard — {YYYY-MM-DD – YYYY-MM-DD}".
- And the body is responsive at 320–600 px widths with font size ≥14 px, tap targets ≥44 px, images with alt text, and a total payload ≤150 KB.
- And all deep links resolve to the correct app destinations with the correct context (user, week range) on iOS, Android, and Web.
- And hard and soft bounces are recorded with reason codes.
In-App Module Delivery & Latest State
- Given In-App delivery is ON for user U and a weekly scorecard is generated, When U opens FleetPulse within 7 days of generation, Then the Idle Scorecards module displays the most recent weekly scorecard with the correct date range and a "New" badge until opened.
- And the module loads in ≤2.0 seconds at the 90th percentile on a 4G connection.
- And CTA buttons open the correct in-app destinations pre-filtered to the scorecard's week.
- And if the user has zero eligible trips for the week, the module shows a clear "No data for last week" state.
Delivery Failure Retry & Alerting
- Given a scorecard delivery attempt (email send or in-app notification/render) fails with a retriable error, When the failure is recorded, Then the system retries up to 3 times with exponential backoff intervals of approximately 5 minutes, 30 minutes, and 2 hours.
- And duplicate deliveries are prevented across retries.
- And after the final failed attempt, the scorecard is marked Failed with the terminal error code.
- And if the failure rate exceeds 1% of attempted deliveries over any 10-minute window, an alert is sent to on-call within 5 minutes.
Notification Preferences Opt-In/Opt-Out
- Given user U updates Scorecard notification preferences in FleetPulse, When U turns Email OFF, Then no scorecard emails are sent to U until Email is turned ON again.
- And when U clicks Unsubscribe in any scorecard email, U's Email preference is set to OFF immediately and confirmed.
- And when U turns In-App OFF, no in-app notifications or scorecard banners are surfaced for U.
- And all preference changes are audit-logged with user_id, channel, old_value, new_value, and timestamp.
Engagement Event Logging: Deliveries, Opens, Clicks
- Given a weekly scorecard is generated and delivered via channel C for user U, When the delivery is attempted, opened, or a link is clicked, Then events are emitted: scorecard_generated, scorecard_delivery_attempt, scorecard_delivery_result, scorecard_open, and scorecard_click.
- And each event includes user_id, driver_or_vehicle_id (if applicable), week_start, week_end, channel, attempt_number, status, timestamp, message_id (email), and link_id (clicks).
- And events are available in the analytics warehouse within 15 minutes at the 95th percentile.
- And open/click events are deduplicated by (message_id, link_id, user_id) and attributed to the originating scorecard for goal-tracking updates.
Quick Wins & Goal Recommendations
"As a driver, I want actionable, small goals tailored to my habits so that I know exactly how to improve."
Description

Produce personalized, bite-size goals based on recent idle patterns, top offending locations, and peer benchmarks, such as reducing idle at specific stops or cutting average idle per stop by a target. Estimate attainable impact windows, show expected fuel and CO2 savings for each goal, and track completion over subsequent weeks with automated follow-up in the next scorecard.

Acceptance Criteria
Weekly Personalized Idle-Reduction Goals Generation
- Given a driver has at least 5 trips and ≥10 minutes of total idle in the last 7 days, When the weekly scorecard is generated, Then the system produces between 1 and 3 personalized idle-reduction goals for that driver.
- And each goal is either (a) reduce average idle per stop by a target amount/percent or (b) reduce idle at a specific recurring stop.
- And each goal includes a baseline value, target value, and time window.
- And if data sufficiency is not met, no goals are created and a "Not enough recent data" message is displayed.
Location-Targeted Quick Win from Top Offending Stops
- Given the driver has at least one stop in the last 14 days with ≥3 occurrences that ranks in the top 3 by total idle minutes, When generating weekly goals, Then at least one goal targets the highest-impact qualifying stop.
- And the goal specifies the stop name/geofence, current average idle per stop, target idle per stop, and expected weekly savings.
- And the goal is not created if the stop is predicted to occur fewer than 2 times in the impact window.
Peer Benchmark–Anchored Target Setting
- Given peer benchmarks for a similar vehicle class and route type are available with ≥10 peers, When setting target values for goals, Then targets close at least 30% of the gap to the fleet median idle per stop, capped at a 25% reduction in a single week.
- And the goal displays the driver's current percentile vs peers and the projected percentile if the goal is met.
- And if peer benchmarks are unavailable, the system falls back to the driver's 4-week personal median as the target anchor.
Fuel and CO2 Savings Estimation Display
- Given the vehicle's idle fuel burn rate and emissions factor are configured or defaulted by class, When rendering each goal in the scorecard, Then the expected savings are shown as fuel (gal or L per org settings) and CO2 (kg) for the goal's time window.
- And calculations use: savings = (baseline idle − target idle) × predicted occurrences × burn rate; CO2 = fuel × emissions factor.
- And values are rounded to one decimal place (fuel) and to the nearest whole kg (CO2).
- And each goal shows a tooltip indicating the burn-rate source (configured/default).
Attainable Impact Window and Follow-Up Scheduling
- Given predicted stop frequency for the next week is derived from the last 4 weeks, When creating a goal, Then the goal includes a 7–14 day impact window aligned to the next scorecard period when predicted occurrences are ≥2.
- And a follow-up entry for the goal is scheduled in the next weekly scorecard.
- And the follow-up shows a progress status: Completed, On Track, or Off Track.
Goal Completion and Progress Evaluation
- Given a goal defines a baseline, target, scope (stop-specific or average per stop), and time window, When evaluating in the next scorecard, Then the goal is marked Completed if the target is met or exceeded across ≥80% of applicable stops/occurrences during the window.
- And marked On Track if improvement is ≥50% toward the target but the target is not fully met.
- And marked Off Track otherwise.
- And the outcome date is recorded and goal history is retained for at least 12 weeks.
Goal Limit, Clarity, and Conflict Checks
- Given multiple candidate goals are generated, When finalizing the weekly scorecard, Then no more than 3 goals are published per driver.
- And goals are de-duplicated, and overlapping scopes for the same stop within the same window are prevented.
- And each goal has a plain-language headline ≤140 characters and a "Why this goal" link referencing recent data and the peer gap.
- And goals that duplicate an active goal, or one completed within the last 2 weeks, are suppressed.
Peer Benchmarking & Leaderboards
"As a fleet manager, I want peer benchmarks and leaderboards so that I can motivate drivers through fair, transparent comparisons."
Description

Provide peer comparisons and leaderboards across configurable cohorts (site, route type, vehicle class), showing percentile rank, week-over-week movement, and badges for improvements. Support anonymization, nicknames, and opt-out for privacy, and ensure rankings only compare like-for-like vehicles and routes for fairness.

Acceptance Criteria
Configurable Cohort Leaderboards (Site, Route Type, Vehicle Class)
- Given an org admin selects one or more cohort dimensions (site, route type, vehicle class) and a week, When the leaderboard is generated, Then only drivers/vehicles matching all selected dimensions and with valid data for that week are included.
- Given a cohort filter change, When applied, Then leaderboard results update within 2 seconds for cohorts up to 2,000 entities.
- Given a cohort with fewer than 5 eligible entities, When loading the leaderboard, Then display "Insufficient cohort size" and do not show ranks.
- Given eligible entities, When the leaderboard renders, Then sort ascending by Idle Minutes per Engine Hour, breaking ties by greater week-over-week improvement, then greater engine hours, then alphabetically by nickname.
- Given numeric outputs, When displayed, Then units/precision are: Idle Minutes per Engine Hour (min/h, 1 decimal) and engine hours (h, 0 decimals).
Percentile Rank and Week-over-Week Movement Indicators
- Given a cohort with N ≥ 5 and an entity with valid metrics for the current and prior complete week, When computing percentile, Then percentile rank equals the percentage of the cohort with strictly worse scores plus half of equals, rounded to the nearest whole percentile (0–100).
- Given current and prior percentile, When displayed, Then show an arrow up if percentile increased by ≥1 pp, down if it decreased by ≥1 pp, and flat otherwise, and display the signed change value in pp.
- Given no valid prior-week metric or a prior-week cohort with N < 5, When rendering movement, Then display "—" and no arrow.
- Given ranks, When displayed, Then show both rank (k of N) and percentile, consistent with the leaderboard ordering.
Improvement Badges (Weekly Awards)
- Given a weekly cohort with N ≥ 10, When computing improvement, Then award "Most Improved" to the top 10% by reduction in Idle Minutes per Engine Hour, with a minimum absolute improvement ≥ 0.5 min/h and a minimum baseline ≥ 2.0 engine hours.
- Given ties at the 10% cutoff, When awarding, Then include all entities tied at the cutoff but cap awardees at 10% + 2.
- Given badge issuance, When the week rolls over, Then badges reset and are recomputed within 6 hours of week close (Monday 00:00 org-local).
- Given badge display, When viewing leaderboards, Then show the badge label, criteria met, and week ending date; badges are scoped to the active cohort.
Anonymization and Nicknames
- Given anonymization is enabled for the viewer, When viewing the leaderboard, Then all peer identities are masked as stable pseudonyms (e.g., Driver #123) and the viewer sees their own chosen display name.
- Given a nickname create/edit, When saving, Then enforce 3–30 characters, uniqueness within the org, and profanity filtering; changes propagate across the app within 1 hour.
- Given a CSV export with anonymization enabled, When downloaded, Then peer identifiers are pseudonymized and no email, phone, or employee ID fields are present.
Opt-Out from Peer Leaderboards
- Given a user toggles opt-out, When saved, Then their entity is excluded from all peer-visible leaderboards and from rank/percentile calculations within their org within 24 hours.
- Given opt-out status, When viewing one's own scorecard, Then personal metrics and cohort aggregates are visible without showing any peer identities or ranks.
- Given audit requirements, When exporting audit logs, Then each opt-out/in event includes user, timestamp (UTC), and actor (self/admin) and cannot be edited or deleted.
Like-for-Like Comparison Enforcement
- Given an idling leaderboard, When generating the cohort, Then include only entities matching vehicle class, route type, and fuel type exactly; do not mix across these attributes.
- Given an entity changes route type or vehicle class mid-week, When computing weekly metrics, Then use the majority-of-time assignment; if there is no majority, exclude the entity from that week's leaderboard.
- Given insufficient like-for-like peers (N < 5), When a user views the leaderboard, Then show personal metrics and the cohort definition but hide ranks, percentiles, and badges.
Data Freshness and Completeness Thresholds
- Given the weekly period ends at 23:59:59 Sunday org-local, When the new week begins, Then leaderboards, ranks, and badges for the prior week are available by 06:00 Monday org-local.
- Given data completeness thresholds, When computing weekly idle metrics, Then include only entities with ≥ 2.0 engine hours and ≥ 3 recorded stops in the week; otherwise mark them as ineligible for ranking.
- Given late-arriving telematics within 48 hours, When recomputation runs, Then ranks, percentiles, movements, and badges are recalculated and change logs capture the before/after rank for impacted entities.
Fuel and CO2 Savings Estimation
"As a sustainability lead, I want quantified fuel and CO2 savings from reduced idling so that I can report impact and justify the program."
Description

Convert reduced idle time into estimated fuel saved and CO2 avoided using configurable factors by fuel type, engine size, and ambient temperature. Surface savings at driver, vehicle, and fleet levels within scorecards and dashboards, and provide exportable summaries for sustainability reporting.

Acceptance Criteria
Configurable Idle-to-Fuel and CO2 Factors
- Given an admin configures idle fuel burn rate factors by fuel type (gasoline, diesel), engine displacement band (e.g., 1.0–2.9L, 3.0–5.9L, 6.0L+), and ambient temperature band (≤0°C, 1–25°C, >25°C), with an effective start date and an emission factor per fuel type, When the system processes reduced idle minutes for a vehicle on a date within an active factor set, Then it selects the factor set matching the vehicle's fuel type, engine displacement band, and the ambient temperature band for the idle period.
- And computes fuel_saved = reduced_idle_minutes × fuel_burn_rate (L/min or gal/min per org setting).
- And computes co2_avoided = fuel_saved × emission_factor (kg/unit fuel).
- And stores factor_set_id, version, and applied_bands alongside each calculation for auditability.
- And if any required factor is missing, the calculation is skipped and the record is marked "Awaiting Factors" with a machine-readable reason.
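A sketch of the factor lookup and arithmetic above, assuming factor sets are supplied as a list of dicts keyed by the three bands; the field names and helper are illustrative:

    def band_for_temp(temp_c):
        """Map an ambient temperature to the configured band labels."""
        if temp_c <= 0:
            return "<=0C"
        return "1-25C" if temp_c <= 25 else ">25C"

    def compute_savings(reduced_idle_minutes, fuel_type, displacement_band,
                        temp_c, factor_sets):
        """Select the matching factor set and derive fuel and CO2 savings;
        a missing factor marks the record 'Awaiting Factors'."""
        temp_band = band_for_temp(temp_c)
        match = next((f for f in factor_sets
                      if f["fuel_type"] == fuel_type
                      and f["displacement_band"] == displacement_band
                      and f["temp_band"] == temp_band), None)
        if match is None:
            return {"status": "Awaiting Factors",
                    "reason": "no_matching_factor_set"}
        fuel_saved = reduced_idle_minutes * match["burn_rate_per_min"]
        co2_avoided = fuel_saved * match["emission_factor_kg_per_unit"]
        return {
            "status": "computed",
            "fuel_saved": round(fuel_saved, 1),    # 1 decimal per org rounding
            "co2_avoided_kg": round(co2_avoided),  # nearest whole kg
            "factor_set_id": match["id"],          # stored for auditability
            "applied_bands": (fuel_type, displacement_band, temp_band),
        }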
Weekly Savings at Driver, Vehicle, and Fleet Levels
- Given the weekly reporting window is Mon–Sun in the organization's time zone, and the baseline is defined as the trailing 4 complete weeks' average idle minutes for the same entity ending before the current week start, When the current week's idle minutes are lower than the baseline, Then the system computes fuel and CO2 savings for driver, vehicle, and fleet as the sum of event-level savings using the applicable factors.
- And displays values in scorecards and dashboards with units per org setting (gal/L, kg).
- And rounds fuel to 1 decimal place and CO2 to the nearest whole kg.
- And shows a trend vs baseline (%) and vs prior week (%).
- And if idle minutes are ≥ baseline, savings display as 0 with a "No reduction vs baseline" note.
Ambient Temperature Determination and Precedence
- Given each idle event has a timestamp and geolocation, When determining the ambient temperature band for factor selection, Then the system uses, in order of precedence: (1) OBD intake air temperature within ±5 minutes of the event, (2) external weather service ambient temperature at the event location/time, (3) the organization default band.
- And the chosen source and resolved temperature band are stored with the calculation.
- And if no source is available, the calculation is skipped with reason "No Temperature Source".
Peer Benchmarks and Bite-Size Goal Savings Projection
- Given peers are defined as drivers/vehicles within the same fleet and engine size band, When viewing a driver or vehicle scorecard, Then show current idle minutes per operating hour vs the peer median and top quartile.
- And compute a bite-size goal to reduce idle minutes per stop by a configurable default of 1 minute (min 0.5, max 3.0), up to the peer median.
- And estimate next-week fuel and CO2 savings if the goal is met, using the same factor-selection logic.
- And label these values as "Projected" and visually distinguish them from realized savings.
Exportable Sustainability Summary
- Given a user selects a date range and aggregation level (driver, vehicle, fleet), When exporting the Idle Savings Summary, Then the export generates CSV and XLSX within 60 seconds for up to 10,000 entity-weeks.
- And includes the columns: entity_id, entity_name, week_start, week_end, idle_minutes_reduced, fuel_saved (gal/L), co2_avoided_kg, fuel_type, engine_size_band, temp_band, factor_version_id, coverage_pct, timezone, and parameters_hash.
- And includes per-entity and file-level totals that equal the sum of the rows.
- And units and rounding match organization settings.
- And the parameters_hash encodes the date range, org ID, factor version, and unit settings to ensure reproducibility.
Data Coverage, Errors, and Diagnostics
- Given OBD-II data coverage may be incomplete, When weekly data coverage for an entity is below 90% of expected engine-on time, Then the system flags the record as "Low Coverage" and either scales savings proportionally or sets savings to N/A, based on an org-level setting.
- And counts of gaps, factor-missing events, and temperature fallbacks are stored and visible in diagnostics.
- And all calculation errors are logged with correlation IDs and are retriable without creating duplicate savings records.
Admin Configuration & Privacy Controls
"As a fleet admin, I want configurable thresholds and privacy controls so that the scorecards fit our operations and compliance requirements."
Description

Offer admin settings for idle thresholds, exception rules (PTO, cold start windows, geo-fence exclusions), goal intensity levels, notification schedules, cohort definitions, and privacy options. Enforce RBAC for who can view driver-level data, record changes with audit logs, and align data retention and masking with regional regulations.

Acceptance Criteria
Configure Idle Thresholds by Scope
- Given I am an Admin with Settings:Write permission, When I set the global idle threshold to 5 minutes and save, Then the value is persisted, versioned, and immediately visible in Settings.
- And a validation error is shown if I enter a value outside 0–30 minutes or a non-integer.
- When I set a vehicle-group threshold of 7 minutes for "Vans" and a vehicle-level threshold of 9 minutes for VIN 1HGCM82633A123456, Then the effective threshold precedence is Vehicle > Group > Global for scorecard calculations.
- And the new thresholds are applied during the next calculation cycle (<24 hours).
- And rolling back to a prior version restores the previous values and records the revert in the audit log.
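The Vehicle > Group > Global precedence reduces to a first-match lookup. A minimal sketch with illustrative names and the 0–30 integer validation rule from above:

    def effective_idle_threshold(vehicle_id, group_id, vehicle_overrides,
                                 group_overrides, global_threshold):
        """Resolve the idle threshold (minutes) for a vehicle:
        vehicle-level override first, then group, then global."""
        if vehicle_id in vehicle_overrides:
            return vehicle_overrides[vehicle_id]
        if group_id in group_overrides:
            return group_overrides[group_id]
        return global_threshold

    def validate_threshold(value):
        """Reject non-integers and values outside 0-30 minutes."""
        if not isinstance(value, int) or not (0 <= value <= 30):
            raise ValueError("idle threshold must be an integer between 0 and 30 minutes")
        return value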
Define Idle Exception Rules (PTO, Cold Start, Geo‑Fence)
- Given I am an Admin with Settings:Write permission, When I create a PTO exception rule "Exclude idle when PTO engaged >= 30 seconds", Then idle minutes during PTO engagement are excluded from scorecards and flagged as "Excluded by PTO" in diagnostics.
- When I create a cold-start rule "Exclude the first 5 minutes after engine start when coolant temp < 40°C or ambient < 0°C", Then idle minutes matching the rule are excluded and totals decrease accordingly.
- When I add a geo-fence exclusion polygon named "Depot A", Then idle events within the polygon are excluded.
- And if multiple exception rules match the same idle event, the event is excluded once and attributed to the highest-priority rule.
- And a preview shows the estimated excluded idle minutes for the past 7 days before saving.
- And saving creates an audit entry with the rule name, scope, actor, and before/after values.
Set Goal Intensity Levels and Goal Computation
- Given goal intensity levels are defined as Conservative (<= 6 min/stop), Standard (<= 4 min/stop), and Aggressive (<= 2 min/stop), When an Admin selects "Standard" for the fleet and saves, Then the target thresholds are applied to all cohorts without overrides starting the next cycle and shown in the admin preview.
- And drivers' weekly scorecards display their target, actual, delta, and estimated fuel and CO2 savings based on the configured factors.
- When an Admin overrides intensity to "Aggressive" for cohort "Class8 West", Then only members of that cohort use the overridden targets.
- And changing intensity mid-week does not alter already-published scorecards; changes apply at the next publication.
- And invalid intensity values are rejected with validation errors and no changes are applied.
Schedule Notifications with Timezone and Quiet Hours
- Given the fleet default timezone is America/Denver and quiet hours are 21:00–07:00 local, When an Admin schedules weekly scorecard notifications for Mondays at 09:00 and selects Email + In-app, Then notifications are delivered to recipients at 09:00 in their local timezone, respecting quiet hours.
- And if 09:00 falls within a recipient's quiet hours, delivery is deferred to the next allowed window the same day.
- And no recipient receives duplicate notifications for the same reporting period.
- And undeliverable addresses are recorded with bounce reasons in the notification log.
- And disabling a channel immediately suppresses future sends for that channel without affecting the others.
Create and Apply Cohorts for Peer Comparisons
- Given I create cohort "Class8 West" with rules VehicleClass = Class8 AND Region = West, When I save the cohort, Then a membership preview count is displayed and the cohort becomes available for targeting and peer comparisons.
- And membership updates automatically every 24 hours based on current attributes.
- And peer comparisons in scorecards use the cohort when its size is >= 5; otherwise they fall back to fleet-wide peers.
- And deleting or disabling the cohort removes it from future calculations without altering historical scorecards.
- And an audit log entry records cohort create/update/delete with the rule definitions and actor.
Enforce RBAC and Privacy for Driver‑Level Data
- Given roles exist for Owner, Fleet Manager, Dispatcher, Mechanic, Driver, and custom roles with granular permissions, When a user without Drivers.ReadSensitive attempts to view a driver-level scorecard, Then access is denied (HTTP 403 for API; redacted view for UI) and the event is logged.
- And such users see only aggregated/anonymized metrics (e.g., hashed driver IDs, percentile ranks, cohort averages).
- When a user with Drivers.ReadSensitive views or exports driver-level data, Then full PII fields are visible/available and exports include only data within their scope (fleet or assigned cohorts).
- And location precision for unauthorized roles is masked to >= 1 km and timestamps are rounded to the nearest 15 minutes.
- And privacy options can be set per region to default to anonymized views for non-privileged roles.
Audit Logging and Regional Retention/Masking Compliance
- Given auditing is enabled, When any admin changes a setting (thresholds, exceptions, goals, notifications, cohorts, privacy/RBAC), Then an immutable audit record is created within 30 seconds including actor ID, role, timestamp (UTC), IP, entity, fields before/after, and reason.
- And audit records are filterable by actor, entity, and date range, and exportable to CSV.
- Given regional policies are configured (e.g., EU: mask driver PII after 13 months and delete raw GPS after 90 days; US: retain 24 months), When data reaches its regional retention limit, Then scheduled jobs purge or irreversibly mask the data, and subsequent UI/API requests return no PII for the affected period.
- And executing a "Delete Driver Data" action completes within 7 days, removing or anonymizing the driver's PII across scorecards, notifications, and logs while preserving aggregate metrics.
- And all retention/masking actions are recorded in the audit log with counts of records affected.

PSI Heatmap

Visualizes low-PSI hotspots by route, time of day, and weather; flags chronic leakers and underinflation patterns. Recommends optimal check intervals and nearby air locations to protect tires, improve MPG, and prevent roadside flats.

Requirements

PSI Telemetry Ingestion & Normalization
"As a fleet manager, I want accurate, normalized, and timely PSI data across my vehicles so that I can reliably compare readings and act on low-pressure trends."
Description

Implement a reliable pipeline to ingest tire pressure data from OBD-II/TPMS sources across mixed vehicle makes and sensors; normalize units (psi/kPa), validate ranges, de-duplicate, and time-align readings with GPS coordinates and wheel position (axle/location). Tag each reading with vehicle, tire position, timestamp, and route segment. Handle intermittent connectivity with buffering and backfill, and ensure data quality through outlier detection and sensor health checks. Provide a clean, standardized PSI stream to downstream analytics (heatmap, detection, recommendations) with sub-minute latency and well-defined schemas.

Acceptance Criteria
Normalize Units and Validate PSI Ranges
- Given an input PSI reading with value=240 and unit='kPa', When the pipeline processes the record, Then the output contains psi=34.8 rounded to one decimal place, unit='psi', and original_unit='kPa'.
- Given an input PSI reading with unit='psi' or 'kPa', When processed, Then the output includes a numeric psi field and the original_unit field is preserved exactly as received.
- Given a vehicle with configured plausible PSI bounds, When a reading is below the min or above the max, Then the record is tagged quality='invalid', reason='out_of_range', and it is excluded from the analytics stream.
- Given an input reading missing its unit or with a non-numeric value, When validated, Then the record is routed to the dead-letter queue with reason='schema_violation' and is not forwarded downstream.
- Given a reading with excessive precision, When normalized, Then psi is rounded to one decimal place consistently across all outputs.
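A sketch of the unit normalization and range check, including the worked example from the criteria (240 kPa → 34.8 psi). The conversion constant is the standard 1 kPa = 0.1450377 psi; the function shape and bound arguments are illustrative:

    KPA_TO_PSI = 0.1450377

    def normalize_psi(value, unit, min_psi, max_psi):
        """Convert kPa readings to psi, round to one decimal place, and
        tag out-of-range values as invalid per the criteria."""
        if unit == "kPa":
            psi = value * KPA_TO_PSI
        elif unit == "psi":
            psi = value
        else:
            return {"quality": "invalid", "reason": "schema_violation"}
        psi = round(psi, 1)  # one decimal place everywhere downstream
        record = {"psi": psi, "unit": "psi", "original_unit": unit}
        if not (min_psi <= psi <= max_psi):
            record.update(quality="invalid", reason="out_of_range")
        return record

    # Worked example from the criteria: 240 kPa -> 34.8 psi.
    assert normalize_psi(240, "kPa", 20, 120)["psi"] == 34.8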
Tag with Vehicle, Tire Position, Timestamp, GPS, and Route Segment
- Given a valid input reading with vehicle and wheel metadata, When processed, Then the output includes vehicle_id, tire_position from the allowed enum, timestamp_utc in ISO 8601 with millisecond precision, lat, lon, and route_segment_id.
- Given a reading and a set of GPS samples, When time-aligned, Then the nearest GPS point within 5 seconds is attached; otherwise gps_status='missing' and lat and lon are null.
- Given tire position aliases from the source, When mapped, Then tire_position is normalized to the allowed enum and mapping_source is recorded; if unmapped, tire_position='unknown' and quality='degraded'.
- Given a reading outside any mapped route, When segmenting, Then route_segment_id='off_route'.
- Given an input with local time, When processed, Then timestamp_utc is computed using the declared timezone and clock_skew_ms is recorded.
De-duplicate and Order TPMS Readings
- Given multiple identical readings with the same vehicle_id, tire_position, source_id, and timestamp_utc to the second, When processed, Then only one record is forwarded and duplicate_count is incremented.
- Given out-of-order arrivals within a 60-second window, When processed, Then records are emitted downstream in timestamp order with ordering='in_order'.
- Given a record arriving later than the 60-second reordering window, When processed, Then the record is forwarded with ordering='late' and late_by_ms populated.
- Given a retry that replays previously accepted records, When processed, Then downstream receives no additional duplicates, due to idempotent ingest_id keys and natural-key checksums.
- Given dedup activity, When monitored, Then a dedup_dropped_total metric is incremented per dropped record.
Handle Intermittent Connectivity with Buffering and Backfill
- Given a device experiences connectivity loss of up to 30 minutes, When connectivity is restored, Then backfilled readings are accepted and forwarded in chronological order with no more than 0.5% loss.
- Given 30 minutes of backlog at up to 200 readings per second across all vehicles, When backfilling, Then the system catches up within 2 minutes while maintaining p95 latency under 45 seconds for new live data.
- Given a service restart during ongoing backlog ingestion, When resumed, Then buffered records are recovered from durable storage and no gaps appear in downstream sequence numbers.
- Given offline readings with original timestamps, When processed, Then timestamp_utc is preserved from the device payload and arrival_time_utc is recorded separately.
- Given buffer saturation is approached, When managed, Then flow control engages and a buffer_utilization_percent metric is emitted at 10-second intervals.
Outlier Detection and Sensor Health Checks
- Given a tire's psi changes by more than 5 psi within 1 minute while vehicle_speed_mph > 10, When analyzed, Then the reading is flagged quality='suspect' with outlier_reason='roc_high'.
- Given psi variance is less than 0.1 psi over 30 minutes while vehicle_speed_mph > 10, When analyzed, Then the sensor is flagged health='stuck_sensor'.
- Given expected sampling of 1 Hz and more than 10% of samples missing over a rolling 60-minute window, When analyzed, Then the sensor is flagged health='intermittent'.
- Given sustained underinflation of more than 20% below the configured baseline for longer than 5 minutes, When analyzed, Then readings are flagged quality='suspect' with outlier_reason='under_inflation'.
- Given a device reports low battery or link quality, When analyzed, Then the record includes sensor_health_detail and the sensor is flagged health='degraded'.
Provide Standardized PSI Stream and Schema Versioning
- Given a validated reading, When emitted, Then it conforms to JSON Schema v1.0, including the fields ingest_id, tenant_id, vehicle_id, tire_position, timestamp_utc, psi, original_unit, lat, lon, route_segment_id, quality, source_id, and schema_version.
- Given a reading fails schema validation, When emitted, Then it is routed to the dead-letter queue with a non-empty error_code and error_detail and is not published to the analytics topic.
- Given a non-breaking schema update to v1.1, When deployed, Then v1.0 consumers continue to process events and schema_version is set to '1.1'.
- Given an unknown field is present in the input payload, When processed, Then the field is ignored and an enrichment note is added without failing validation.
- Given a request for the schema, When served, Then the registry returns the current JSON Schema with a unique version identifier and checksum.
Latency, Throughput, and Reliability Targets
- Given a normal operating load of up to 200 readings per second, When measured over 15 minutes, Then end-to-end p95 latency from sensor timestamp to analytics-topic availability is <= 10 seconds and p99 is <= 45 seconds.
- Given a rolling 30-day window, When measured, Then ingestion availability is >= 99.9% and data loss is <= 0.1%, excluding device-side loss.
- Given duplicate-producing conditions such as retries and reconnects, When measured, Then the post-dedup duplicate rate on the analytics topic is <= 0.1%.
- Given peak bursts of up to 400 readings per second for 60 seconds, When measured, Then no throttling errors occur and the backlog drains within 2 minutes.
- Given the system is under target load, When monitored, Then CPU utilization remains <= 70% and memory utilization <= 75% on the ingestion tier.
Route-Based PSI Heatmap Visualization
"As an owner-operator, I want to see low-PSI hotspots along my routes by time of day so that I can plan checks and avoid problematic segments."
Description

Deliver an interactive geospatial heatmap that visualizes PSI deviations along traveled routes, highlighting low-PSI hotspots by segment. Include filters for vehicle, route, date range, time of day, tire position, and severity bands. Support map tiling and clustering for performance, a clear legend for thresholds, and tap-to-drill into segment details with historical PSI traces per tire. Provide both web and mobile views with smooth pan/zoom, and synchronize selections with the vehicle/tire detail pages to maintain context across FleetPulse.

Acceptance Criteria
Interactive Route-Based PSI Heatmap Rendering
- Given processed PSI deviation data is available for one or more routes within the selected date and time window, When the heatmap view is opened, Then the map renders route segments color-coded by severity band with no missing segments for which data exists.
- Given a segment has an average PSI deviation that falls into a defined severity band, When that segment is displayed, Then its color matches the legend's color for that band exactly.
- Given there is no PSI data for the current filters, When the map loads, Then an empty state is shown with "No PSI data for selected filters" and a shortcut to adjust filters.
- Given the user toggles the PSI layer visibility, When the layer is off, Then no segments are colorized; When turned back on, Then the previous visualization state is restored.
Multi-Dimensional Filters: Vehicle, Route, Date, Time of Day, Tire Position, Severity
- Given default entry to the view, When no filters are set by the user, Then the system applies defaults: date range = last 7 days, time of day = 00:00–24:00, severity = all, vehicle/route/tire position = all.
- Given one or more filters are changed, When applied (web: on change; mobile: on Apply), Then the map updates within 1,000 ms and the active filter chips reflect the selection.
- Given multiple filters are set, When results are computed, Then filter logic is AND across dimensions and OR within multi-select values of the same dimension.
- Given the user taps Clear All, When confirmed, Then all filters revert to defaults and the map refreshes accordingly.
- Given the user selects a severity band via the legend, When toggled, Then the severity filter updates and only those segments are shown.
Map Tiling and Clustering Performance
- Given vector/raster tiles are used for the base map and heat layer, When panning or zooming on a broadband connection (≥25 Mbps), Then visible tiles load within 600 ms on average and no tile shows as blank for more than 1,000 ms.
- Given ≥10,000 hotspot features in view, When zoomed out, Then clusters are displayed instead of individual features, with counts accurate to ±1% and expanding to individual features within 200 ms after zooming in.
- Given continuous panning for 5 seconds, When measuring frame timing, Then the average frame time is ≤ 22 ms (≈45 FPS) and no single long task exceeds 100 ms.
Legend and Severity Thresholds
- Given severity thresholds are configured, When the heatmap loads, Then a fixed legend is visible showing labels, exact PSI deviation ranges, and the corresponding colors.
- Given thresholds are updated in configuration, When the page is refreshed, Then the legend and segment colors reflect the new ranges and colors.
- Given the user taps a legend band, When toggled on/off, Then only segments in the selected bands are highlighted/hidden and the filter chips update to match.
Segment Drill-Down with Historical PSI Traces
- Given a segment is visible, When the user taps/clicks it, Then a details panel (web: right drawer; mobile: bottom sheet) opens within 700 ms.
- Then the panel shows: segment ID/route name, segment distance, time window covered, vehicles and tire positions included, and a count of observations.
- Then the panel includes a chart of PSI over time per tire for the selected date range, with per-tire toggles, a synchronized time axis, and tooltips showing timestamp, tire position, and PSI value.
- Given the selected tire belongs to a vehicle, When "Open Tire Details" is tapped, Then the tire detail page opens with the same date/time filters applied.
Context Synchronization with Vehicle/Tire Details
- Given the user navigates from the heatmap to a vehicle or tire detail page, When opened, Then the target page receives and applies the current vehicle, tire position, date range, time of day, and severity filters.
- Given the user returns via back navigation, When the heatmap view reappears, Then the prior map viewport, zoom level, layer visibility, and filters are restored exactly.
- Given a deep-link URL with encoded filters and viewport, When opened on web or mobile, Then the heatmap reconstructs the same view and selections.
Responsive Web and Mobile Pan/Zoom
- Given supported environments (Web: latest Chrome, Edge, Firefox, Safari; Mobile: iOS 15+ and Android 10+), When panning/zooming, Then input latency ≤ 100 ms and average FPS ≥ 50 on web and ≥ 45 on mobile across a 60-second interaction on mid-tier devices.
- Given the user performs pinch-zoom or double-tap zoom, When executed, Then zoom occurs centered on the gesture with no visible tearing and tiles remain aligned to features.
- Given tappable segments and controls, When measured, Then interactive targets meet ≥ 44 px (mobile) and ≥ 32 px (web) minimum touch/click area.
Weather Context Enrichment
"As a fleet manager, I want PSI readings contextualized by weather so that I can tell the difference between normal temperature effects and actual leaks."
Description

Integrate ambient weather data (temperature, precipitation, humidity) aligned to GPS and timestamp to contextualize PSI readings and heatmap hotspots. Compute weather-adjusted baselines and annotate segments as cold-start, hot ambient, wet conditions, etc. Expose a weather filter and comparison mode (with/without normalization) to help distinguish temperature-driven PSI changes from true leaks. Persist weather snapshots with each PSI record for reproducibility and downstream analytics.

Acceptance Criteria
Weather-PSI Alignment by GPS/Timestamp
Given a PSI reading with latitude, longitude, and timestamp When the system queries the weather provider Then a weather snapshot is attached containing temperature_C, humidity_pct, precipitation_mm_hr, condition_code, source, station_id, observation_ts, and weather_quality And the observation is selected from the nearest station within 10 km and within ±5 minutes of the PSI timestamp And if multiple candidates meet the window, the nearest by geodesic distance is selected; ties are broken by most recent observation And if no observation is available within 25 km and ±30 minutes, weather_quality is set to missing and weather fields are null, and the PSI record is still ingested And all stored units are standardized to SI (Celsius, mm/hr, percent) and derived imperial values are computed on read
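The selection rule could be implemented roughly as below. The StationObs shape is an assumption, the haversine helper approximates geodesic distance, and the caller is expected to widen to 25 km / ±30 minutes before marking the snapshot missing.

```typescript
// Sketch of station selection: nearest within 10 km and +/-5 min,
// ties broken by most recent observation. Shapes are illustrative.
interface StationObs {
  stationId: string;
  lat: number;
  lng: number;
  observationTs: Date;
  temperatureC: number;
}

function haversineKm(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(bLat - aLat);
  const dLng = toRad(bLng - aLng);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLng / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(h));
}

function pickObservation(
  psiLat: number, psiLng: number, psiTs: Date, candidates: StationObs[],
): StationObs | null {
  const FIVE_MIN_MS = 5 * 60 * 1000;
  const eligible = candidates.filter(
    (o) =>
      haversineKm(psiLat, psiLng, o.lat, o.lng) <= 10 &&
      Math.abs(o.observationTs.getTime() - psiTs.getTime()) <= FIVE_MIN_MS,
  );
  if (eligible.length === 0) return null; // caller widens, then marks missing
  eligible.sort(
    (a, b) =>
      haversineKm(psiLat, psiLng, a.lat, a.lng) -
        haversineKm(psiLat, psiLng, b.lat, b.lng) ||
      b.observationTs.getTime() - a.observationTs.getTime(), // tie: most recent
  );
  return eligible[0];
}
```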
Immutable Weather Snapshot with PSI Record
Given a stored PSI record with a weather snapshot When the record is retrieved via API or export Then the weather snapshot values exactly match the stored values used at ingestion time And the snapshot is immutable; rehydration from the provider does not overwrite stored values And the snapshot includes weather_version and normalization_version for reproducibility And snapshot data is retained and available for at least 3 years And any update attempts create a new versioned record without altering the original snapshot
Weather-Normalized PSI Baseline Computation
Given a PSI reading with ambient temperature T_C in Celsius When normalization is computed to a 20°C baseline Then normalized_psi_20C = raw_psi * (273.15 + 20) / (273.15 + T_C), rounded to the nearest 0.1 psi (or 1 kPa) And unit conversions between psi and kPa incur a cumulative error ≤ 0.1 psi (0.7 kPa) And the normalized value and normalization_version are persisted with each PSI record And the computation completes within 200 ms per record at p95 in batch processing And normalized values are available to the heatmap and analytics endpoints
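The formula translates directly into code: pressure scales with absolute temperature (Gay-Lussac's law), referenced to a 20 °C baseline. A minimal sketch with a worked example:

```typescript
// Direct translation of the normalization criterion above.
function normalizedPsi20C(rawPsi: number, ambientTempC: number): number {
  const normalized = (rawPsi * (273.15 + 20)) / (273.15 + ambientTempC);
  return Math.round(normalized * 10) / 10; // nearest 0.1 psi
}

// Example: 100 psi read at 35 °C normalizes to ~95.1 psi at 20 °C,
// so a ~5 psi "drop" here is temperature, not a leak.
console.log(normalizedPsi20C(100, 35)); // 95.1
```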
Trip Segment Weather Annotations
Given trips segmented by ignition and motion When evaluating each segment start Then segments starting after ≥3 hours of vehicle inactivity are annotated cold_start And segments with ambient temperature ≥ 32°C (89.6°F) at any point are annotated hot_ambient And segments with precipitation_mm_hr ≥ 0.2 or condition_code indicating rain/snow/sleet are annotated wet_conditions And all applied annotations are stored with segment metadata and are filterable in the heatmap And annotation accuracy is ≥ 99% on a labeled test set (±1 minute and ±1 km alignment tolerance)
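A compact sketch of those three annotation rules, assuming illustrative segment and sample shapes (the isRainSnowSleet flag stands in for a condition_code lookup):

```typescript
// Hypothetical shapes; thresholds mirror the criterion above.
interface SegmentSample {
  temperatureC: number;
  precipitationMmHr: number;
  isRainSnowSleet: boolean; // derived from condition_code
}
interface Segment {
  inactivityBeforeStartHrs: number;
  samples: SegmentSample[];
}

function annotate(seg: Segment): string[] {
  const tags: string[] = [];
  if (seg.inactivityBeforeStartHrs >= 3) tags.push("cold_start");
  if (seg.samples.some((s) => s.temperatureC >= 32)) tags.push("hot_ambient");
  if (seg.samples.some((s) => s.precipitationMmHr >= 0.2 || s.isRainSnowSleet)) {
    tags.push("wet_conditions");
  }
  return tags;
}
```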
Heatmap Weather Filter and Normalization Compare Mode
Given the PSI Heatmap is loaded for a fleet When the user applies a Weather Filter (temperature range, humidity range, dry/wet toggle) Then the map, legends, and aggregations update within 2 seconds at p95 to reflect only matching segments And selected filters are reflected in the URL/query state for shareable views And when the user toggles Compare Mode (Normalized vs Raw) Then the UI shows side-by-side or overlay views where counts, color scales, and tooltips include both raw_psi and normalized_psi_20C And toggling Compare Mode on/off does not alter the underlying selection or map extent And Reset Filters restores default (no weather filter, normalization off) within 300 ms
Leak Detection Discriminates Temperature-Driven Variations
Given a test dataset with ambient temperature swings of ±15°C and no actual leaks When leak detection runs on normalized_psi_20C Then zero leak alerts are raised for those tires (0 false positives) And given a test dataset with slow leaks ≥ 2 psi/day at near-constant ambient (±2°C) When leak detection runs on normalized_psi_20C Then ≥ 90% of leaking tires are flagged within 24 hours with confidence ≥ 0.8 And alerts suppressed by normalization are labeled temp_compensated in audit logs for traceability
Weather Service Resilience and Backfill
Given the weather provider is unavailable at ingestion time When PSI records are processed Then records are stored with weather_quality = pending and queued for backfill And backfill retrieves snapshots and updates records within 24 hours for ≥ 99% of pending items And upon backfill, normalization and annotations are recomputed deterministically, and downstream caches are refreshed within 15 minutes And no duplicate snapshots are created; each PSI record has at most one finalized weather snapshot And all backfill events are logged with correlation IDs for audit
Chronic Leaker Detection & Scoring
"As a maintenance planner, I want the system to flag chronic leakers so that I can schedule targeted inspections before they cause roadside flats."
Description

Develop analytics to identify recurring underinflation patterns per tire, producing a risk score based on frequency, magnitude, and duration of low-PSI events after weather normalization. Define configurable thresholds for chronic status, surface a ranked list with trend charts, and attach flags on vehicle/tire profiles. Generate recommended actions (inspect valve stem, check bead, schedule repair) and integrate with alerts and maintenance scheduling to reduce roadside failure risk and tire wear.

Acceptance Criteria
Weather-Normalized Low-PSI Event Scoring
Given a tire with a recorded manufacturer cold PSI spec and ambient temperature data, When normalization is applied to PSI readings over the last 30 days, Then low-PSI events are detected when normalized PSI is ≥10% below spec for ≥15 consecutive minutes. Given detected low-PSI events with frequency, magnitude, and duration, When the risk score is computed, Then the score is a weighted composite scaled 0–100 and is deterministic for the same input within ±1 point. Given maintenance windows flagged for the vehicle/tire, When computing events and scores, Then readings within maintenance windows are excluded from event detection and scoring. Given a completed scoring run, When compared against a reference validation dataset, Then precision and recall for chronic classification are each ≥0.85.
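The criterion pins the 0–100 scale and determinism but not the weighting. One plausible realization follows; the weights and per-component saturation caps are assumptions to be tuned against the reference validation set, not specified values.

```typescript
// Weighted composite over frequency, magnitude, and duration of low-PSI
// events (weather-normalized). Weights and caps are illustrative.
interface LowPsiStats {
  eventsPerWeek: number;    // frequency over the scoring window
  meanDeficitPct: number;   // magnitude: mean % below spec (normalized PSI)
  totalDurationMin: number; // cumulative low-PSI duration
}

function riskScore(s: LowPsiStats): number {
  const clamp01 = (x: number) => Math.min(1, Math.max(0, x));
  const freq = clamp01(s.eventsPerWeek / 7);      // saturates at 7 events/week
  const mag = clamp01(s.meanDeficitPct / 30);     // saturates at 30% below spec
  const dur = clamp01(s.totalDurationMin / 1440); // saturates at 24 h cumulative
  const score = 100 * (0.4 * freq + 0.35 * mag + 0.25 * dur);
  return Math.round(score); // integer output keeps reruns within +/-1 point
}
```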
Chronic Threshold Configuration & Defaults
Given fleet-level default thresholds (frequency per week, magnitude %, cumulative duration minutes), When applied without overrides, Then chronic status is assigned when any threshold condition is met. Given a user with Admin role, When updating chronic thresholds at fleet or vehicle/tire level, Then inputs are validated, saved with audit trail (who, what, when), and take effect in the next scoring run within 15 minutes. Given conflicting overrides (vehicle vs tire), When evaluating a tire on that vehicle, Then the most specific level (tire) takes precedence. Given thresholds are updated, When testing against a sandbox dataset, Then the resulting chronic classification changes are previewable before publishing.
Ranked Chronic Leaker List with Trends
Given the latest scoring results, When viewing the Chronic Leakers list, Then items are ranked by risk score in descending order and display tire ID, vehicle, position, last event time, score components, and confidence. Given filter inputs (route, time-of-day bucket, weather bucket, vehicle group), When applied, Then the list and counts update within 2 seconds and the filters persist in the URL. Given a selected item, When opening details, Then a 30-day normalized PSI trend chart renders with shaded low-PSI intervals and event markers and loads within 2 seconds. Given the list exceeds one page, When paginating or exporting, Then pagination returns consistent results and CSV export includes all filtered rows with the same columns.
Profile Flagging & Cross-Surface Visibility
Given a tire reaches chronic status, When viewing the vehicle or tire profile, Then a visible "Chronic Leaker" flag with current score, first-detected date, and last-reviewed date is shown and links to details. Given a tire’s score falls below the recovery threshold for 14 consecutive days, When reevaluated, Then the chronic flag is automatically cleared and the event is logged. Given PSI Heatmap and Route views, When chronic tires exist on a route, Then a chronic indicator icon is shown on affected segments and tooltips list impacted tires. Given API access with valid token, When calling the chronic flags endpoint, Then the API returns current flags with score, confidence, and timestamps for the requested assets.
Recommended Actions & Maintenance Integration
Given a tire classified as chronic with predominant slow-loss pattern, When generating recommendations, Then the system suggests actions (inspect valve stem, check bead seating, inspect puncture) mapped to observed patterns. Given a user selects "Schedule Repair" from a chronic tire, When confirming, Then a maintenance work order is created with prefilled actions, asset/tire position, SLA, and linked diagnostic context, avoiding duplicates if an open work order exists. Given a maintenance work order is closed, When the next scoring cycle completes, Then the tire is placed on a 7-day watchlist and a post-repair verification check is scheduled. Given a recommendation is dismissed, When scoring reruns without material change (score delta < 5), Then the recommendation is not re-suggested for 7 days.
Alerting Rules for Chronic Leakers
Given alerting is enabled and a tire crosses the chronic threshold, When evaluated, Then a single alert is sent via configured channels (in-app, email, push) with asset, position, score, last event, nearest air locations, and quick actions. Given a tire remains chronic, When subsequent evaluations occur, Then duplicate alerts are suppressed for 24 hours unless the risk score increases by ≥10 points or a new severe event (≥20% below spec) occurs. Given on-call schedules and escalation rules, When an alert is unacknowledged for 4 hours within business hours, Then it escalates to the next contact group and is logged. Given a user snoozes an alert for an asset, When re-evaluated during the snooze window, Then no alerts are sent for that asset and the evaluation is logged.
Data Quality Handling & Event Stitching
Given PSI readings contain sensor errors or extreme outliers, When ingesting data, Then readings flagged as invalid or beyond 5σ from a 24-hour rolling median are excluded from event detection. Given gaps in telemetry up to 30 minutes, When detecting events, Then adjacent low-PSI intervals are stitched into a single event; gaps >30 minutes split events. Given missing or stale weather data, When normalizing, Then the nearest station within 25 km is used; if none, a fallback model is applied and confidence is downgraded. Given insufficient data density (<12 valid readings/day over 7 days), When scoring, Then the tire is marked low-confidence and cannot transition into chronic status; a data quality notification is created.
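The gap-stitching rule reduces to a single pass over time-sorted intervals; a sketch, assuming each detected low-PSI interval carries start/end timestamps in milliseconds:

```typescript
// Adjacent low-PSI intervals separated by a telemetry gap of at most
// 30 minutes merge into one event; larger gaps split events.
interface Interval {
  startMs: number;
  endMs: number;
}

function stitch(intervals: Interval[], maxGapMin = 30): Interval[] {
  const maxGapMs = maxGapMin * 60 * 1000;
  const sorted = [...intervals].sort((a, b) => a.startMs - b.startMs);
  const events: Interval[] = [];
  for (const iv of sorted) {
    const last = events[events.length - 1];
    if (last && iv.startMs - last.endMs <= maxGapMs) {
      last.endMs = Math.max(last.endMs, iv.endMs); // merge across the gap
    } else {
      events.push({ ...iv }); // gap > 30 min starts a new event
    }
  }
  return events;
}
```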
Adaptive Check Interval Recommendations
"As a fleet manager, I want recommended PSI check intervals that adapt to tire condition and routes so that I can minimize downtime and extend tire life."
Description

Provide dynamic PSI check interval recommendations per vehicle/axle/tire using chronic-leak scores, recent anomalies, route characteristics, and ambient conditions. Present rationale in plain language, allow user overrides with notes, and write accepted intervals to the Maintenance Scheduler, generating inspection tasks and reminders. Continuously update recommendations as new data arrives and track outcomes to improve future suggestions.

Acceptance Criteria
Initial Per-Tire Recommendation Computation
Given a vehicle/axle/tire has PSI telemetry available and no existing user override When the recommendation engine runs on the latest data batch Then it outputs a per-vehicle/per-axle/per-tire check interval expressed in whole hours And the output includes tireId, intervalHours, generatedAt (UTC), confidence, and a ranked list of contributing factors And the computation completes within 2 seconds per vehicle under standard load And no errors are logged at error or fatal level for the run
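The engine's output contract could look like the following TypeScript shape; fields beyond those named in the criterion (tireId, intervalHours, generatedAt, confidence, contributing factors) are assumptions for illustration.

```typescript
// Illustrative output record for a per-tire interval recommendation.
interface ContributingFactor {
  name: string;      // e.g., "chronic leak score", "route severity"
  weightPct: number; // assumed field: share of the decision, for the rationale
}

interface IntervalRecommendation {
  tireId: string;
  intervalHours: number;         // whole hours, per the criterion
  generatedAt: string;           // UTC ISO-8601
  confidence: "Low" | "Medium" | "High";
  factors: ContributingFactor[]; // ranked, most influential first
}
```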
Plain-Language Rationale Display
Given a recommendation exists for a tire When the user opens the recommendation details Then the rationale is displayed in plain language under 120 words And it names the top drivers (chronic leak score, recent anomalies (7d), route severity, ambient temperature range) And it quantifies each driver with meaningful units or percentages And it avoids undefined acronyms or telemetry jargon
Accept Recommendation to Schedule Inspection
Given a new recommendation for a tire has not been accepted or overridden When the user selects Accept Then the interval is written to the Maintenance Scheduler for that tire And an inspection task is created with a due date based on the accepted interval And a reminder is scheduled per the fleet’s notification settings And repeated Accept actions do not create duplicate tasks And the recommendation record is marked Accepted with timestamp and user id
User Override of Interval With Note
Given a recommendation is visible for a tire When the user enters a custom interval (hours) and adds a note Then the override is saved with tireId, intervalHours, note, user id, timestamp, and previous recommendation snapshot And the Maintenance Scheduler updates future tasks to use the override interval And the UI indicates the interval is user-overridden and displays the note And validation rejects negative, zero, or non-numeric values And an audit log entry is created for the override event
Continuous Recalculation on New Data
Given new PSI, route, or weather data is ingested for a tire When the scheduled recomputation occurs Then the recommendation is re-evaluated using the newly ingested data And if the interval change exceeds a configurable threshold, future tasks and reminders are updated accordingly And the user receives an in-app notification summarizing the change and its drivers And change history records previous and new intervals with factors and timestamps And tasks already in progress are not modified
Fallback Behavior With Sparse Data
Given fewer than 3 distinct days of PSI readings for the tire or missing route/weather data When the engine computes a recommendation Then a conservative default interval defined by org policy is returned And the confidence is set to Low and the rationale states the data gap And no scheduler updates occur without explicit user acceptance And once sufficient data is available, subsequent runs return computed intervals with Medium or High confidence
Incorporate Inspection Outcomes Into Future Recommendations
Given an inspection task generated from a recommendation is completed with recorded PSI and findings When the outcome is saved Then the tire’s chronic-leak score and related features are updated within 24 hours And subsequent recommendations for that tire reflect the updated score And the rationale cites recent inspection outcome when it materially affects the interval And analytics capture acceptance vs override rates and outcome correlation for continuous improvement
Nearby Air Location Recommendations
"As a driver, I want to quickly find suitable air locations along my route when PSI is low so that I can address the issue safely with minimal delay."
Description

Surface nearby and along-route air sources (truck stops, service stations, depots) when underinflation is detected, ranked by detour time, operating hours, PSI capacity, and suitability for vehicle size. Provide distance, ETA, cost info when available, and one-tap navigation handoff. Offer offline caching of recently used POIs and fallbacks when data is stale. Integrate selections into the trip timeline and log remediation actions for audit and cost tracking.

Acceptance Criteria
Underinflation Alert Surfaces Ranked Air Locations Along Route
Given an active trip route and a tire pressure reading below the vehicle’s configured underinflation threshold, When an underinflation alert is raised, Then the system displays at least 5 recommended air locations that are either along the active route or within a 15-minute total detour. Given recommendations are displayed, When the list is rendered, Then each item shows name, category (truck stop/service station/depot), distance from current location, additional detour time, ETA, open/closed status with hours snippet, max PSI capacity, vehicle-size suitability, cost (price or “Unknown”), last verified timestamp, and data source. Given multiple recommendations, When sorted, Then items are ordered by ascending detour time; ties are broken by open over closed, then by sufficient PSI capacity over insufficient, then by vehicle suitability over unknown, then by lower cost when available, then by alphabetical name. Given a request is made, When data is fetched over a typical 4G connection, Then the first paint of the recommendations list occurs within 3 seconds for p95 of requests with a cold cache.
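The stated tie-break chain maps naturally onto a comparator; a sketch with illustrative field names (the spec does not define these identifiers):

```typescript
// Ordering: detour time asc, then open > closed, then sufficient PSI
// capacity, then suitable > unknown, then lower cost, then name A-Z.
interface AirLocation {
  name: string;
  detourMin: number;
  isOpen: boolean;
  meetsPsiCapacity: boolean;
  suitability: "suitable" | "unknown" | "unsuitable";
  costUsd?: number; // undefined when cost is unknown
}

function rank(a: AirLocation, b: AirLocation): number {
  const boolKey = (x: boolean) => (x ? 0 : 1); // true sorts first
  const suitKey = (s: AirLocation["suitability"]) => (s === "suitable" ? 0 : 1);
  return (
    a.detourMin - b.detourMin ||
    boolKey(a.isOpen) - boolKey(b.isOpen) ||
    boolKey(a.meetsPsiCapacity) - boolKey(b.meetsPsiCapacity) ||
    suitKey(a.suitability) - suitKey(b.suitability) ||
    (a.costUsd ?? Infinity) - (b.costUsd ?? Infinity) ||
    a.name.localeCompare(b.name)
  );
}

// locations.sort(rank) yields the display order used by the list.
```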
One-Tap Navigation Handoff to External Maps
Given a recommendation is selected, When the user taps “Navigate”, Then the app opens the device’s default maps app with the selected POI’s latitude/longitude and name prefilled as the destination. Given navigation is handed off, When the maps app opens, Then the app records the handoff event with timestamp and destination ID, and presents an in-app link to return to FleetPulse. Given offline mode, When the user taps “Navigate”, Then the app presents coordinates and full address with a “Copy address” option and attempts to open the maps app if available offline. Given a user cancels navigation, When they return to FleetPulse, Then the recommendation list state is preserved.
Display and Validation of Hours, PSI Capacity, Cost, and Vehicle Suitability
Given a recommendation has operating hours data, When current time is evaluated in the POI’s local timezone, Then the item shows Open/Closed/Closing Soon (within 30 minutes) accurately. Given a vehicle’s required tire PSI max is known, When recommendations are calculated, Then locations with max PSI capacity below the required value are labeled “Insufficient PSI” and are not ranked above locations that meet the requirement. Given vehicle dimensions and class are known, When recommendations are filtered, Then locations not suitable for the vehicle (e.g., height/weight/turning radius restrictions) are excluded by default and may be included only when the user toggles “Include Unsuitable”. Given cost data is available, When rendering the item, Then the price is displayed with currency; if unavailable, the item shows “Unknown”.
Fallback Behavior When No Suitable Air Locations Found
Given no locations meet the suitability, PSI capacity, and open filters within a 30-minute detour of the route, When the search completes, Then the app shows a zero-results state with options to expand radius (to 60 minutes), include closed locations, and include unsuitable locations. Given the user expands the search, When new results are fetched, Then the list updates accordingly and clearly indicates any items that are closed or unsuitable. Given no results after expanded search, When the zero-results state persists, Then the app displays the nearest emergency assistance phone numbers and the option to retry search.
Offline Caching and Stale Data Handling
Given the device is offline, When the user opens recommendations, Then the app displays a banner “Offline — Showing cached POIs” and lists up to the last 20 air locations used in the past 30 days, sorted by most recently visited. Given cached POIs are shown, When rendering, Then each item includes last updated timestamp and an “Info may be outdated” badge if older than 24 hours. Given connectivity resumes, When the user pulls to refresh, Then live data replaces cached data and the offline banner is removed. Given no cached POIs exist, When offline, Then the app allows manual entry of an address for navigation and explains that live POI data requires connectivity.
Trip Timeline Integration and Remediation Logging
Given a recommendation is selected, When the user marks “Air Added” at that location, Then the system creates a trip timeline event with fields: timestamp, POI ID, latitude/longitude, tires serviced, PSI before/after per tire (if available from TPMS), cost amount and currency (optional), receipt photo (optional), and notes (optional). Given TPMS/OBD-II updates are received within 15 minutes of the event, When the updated PSI meets or exceeds the vehicle’s configured target range, Then the event is auto-marked “Verified” and linked to the corresponding underinflation alert. Given no telemetry update is received within 15 minutes, When 15 minutes elapse, Then the app prompts the user to confirm remediation or edit the logged details. Given a timeline event is created, When viewing the trip, Then the event appears in chronological order and contributes to maintenance analytics and cost totals.
Detour and Along-Route Calculation Accuracy
Given an active route is set, When calculating detour time for a POI, Then detour is computed as the additional travel time versus staying on the current route to the destination, using current traffic when available. Given two POIs, one along the route with zero detour and one off-route, When ranking, Then the along-route POI is ranked higher if both meet minimum suitability and PSI capacity. Given the user has no active route, When recommendations are requested, Then results are computed from current GPS position using travel time and distance without detour metrics.

One-Tap Fix

Turns nudges into immediate actions: start an auto-shutdown timer, schedule a tire top‑off, navigate to the nearest air source, or notify a manager when the driver can’t comply. Removes friction so the right action takes seconds, not calls.

Requirements

Contextual Action Drawer
"As a driver, I want a single, obvious action presented for each nudge so that I can resolve issues instantly without searching through menus."
Description

A persistent, context-aware action surface that appears directly on alerts and nudges, presenting the single most relevant primary action (e.g., “Start Shutdown Timer”) with optional secondary actions (e.g., “Navigate to Air Source,” “Notify Manager”). It eliminates multi-step navigation by converting diagnostic insights into one-tap executions. The drawer adapts to vehicle state (speed, gear), driver role, and fleet policies; it is disabled or modified when unsafe (e.g., moving vehicle) and supports haptic feedback and large-touch targets. Actions deep-link into native flows (shutdown timer, scheduling, navigation) and third-party apps where appropriate. All invocations are logged with timestamp, user, vehicle, and outcome for audit and analytics. Integrates with FleetPulse alerting, notifications, and maintenance modules to ensure a closed loop from detection to resolution.

Acceptance Criteria
Primary Action Selection for Low Tire Pressure While Moving
Given a low tire pressure alert is active for vehicle V and the vehicle speed > 0 mph And the driver's role is Driver and fleet policy permits navigation while moving When the Contextual Action Drawer is shown on the alert Then exactly one primary action is displayed: "Navigate to Nearest Air Source" And secondary actions include "Notify Manager" and "Schedule Tire Top-off" And any action requiring a stationary vehicle (e.g., "Start Shutdown Timer") is present but disabled with a safety message And the drawer renders within 300 ms of the alert becoming visible
Safety Gating and Disable States While Vehicle In Motion
Given the vehicle speed > 0 mph or the gear is not in Park When the Contextual Action Drawer appears for any alert or nudge Then all actions that require a stationary vehicle are disabled and visibly indicated as disabled And tapping a disabled action shows a non-blocking safety notice And when speed returns to 0 mph and gear is Park for at least 3 seconds Then the previously disabled actions automatically become enabled without requiring a screen reload
One-Tap Start of Engine Auto-Shutdown Timer
Given the vehicle is stopped (speed = 0 mph) and gear is Park And the fleet policy allows remote shutdown timer start for this vehicle When the user taps the primary action "Start Shutdown Timer" in the drawer Then the native Shutdown Timer flow opens within 500 ms via deep-link And the flow is prefilled with the vehicle ID, driver ID, and policy-recommended duration And when the user confirms the timer, the timer starts and a success confirmation is shown And the user can navigate back to the originating alert, which now reflects the action state as "Started"
Schedule Tire Top-off Creates Maintenance Task
Given a low tire pressure alert is active and the vehicle can be driven When the user taps the secondary action "Schedule Tire Top-off" in the drawer Then the Maintenance module opens via deep-link with a prefilled service request (serviceType = Tire top-off, vehicleId, dueBy within 24 hours, source = Alert ID) And upon submitting, a maintenance task/work order is created and linked to the originating alert And the drawer updates to show the action state "Scheduled" with the task ID And a confirmation toast is shown within 2 seconds
Navigate to Nearest Air Source via Maps Integration
Given location permissions are granted and the vehicle is moving or stopped When the user taps "Navigate to Nearest Air Source" in the drawer Then the app deep-links to the default maps app with destination set to the nearest supported air source within 25 miles using current GPS And if no compatible maps app is installed, an in-app map view opens with the same destination and turn-by-turn fallback And the navigation intent includes vehicle and alert context in the referrer for analytics
Haptic Feedback and Large Touch Targets
Given a mobile device with haptics support When the Contextual Action Drawer is rendered Then all tappable targets (primary and secondary actions) have a minimum touch area of 44x44 points and pass WCAG 2.1 AA contrast for text/icons And tapping an enabled primary action triggers a success haptic and visual state change within 100 ms And tapping a disabled action triggers a gentle warning haptic and displays the safety message
Complete Audit Log on Action Invocation
Given any action in the Contextual Action Drawer is tapped When the action is invoked (success or failure) Then an audit event is recorded within 2 seconds with fields: timestamp (UTC ISO-8601), userId, vehicleId, alertId, actionId, context (speed, gear, role), outcome (success|failure), failureReason (if any), and target (deep-link destination) And audit logging failures do not block the action execution and are retried asynchronously And the event is visible in the analytics store within 5 minutes
Safe Auto-Shutdown Timer
"As a driver, I want to start a safe engine shutdown countdown when alerted so that I can prevent damage and reduce idle without risking safety."
Description

Initiates a controlled engine shutdown countdown when conditions warrant (e.g., overheating, extended idling) with safety interlocks and vehicle capability detection. If a remote command is supported, the system executes via OBD-II or OEM APIs; otherwise it provides a driver-guided timer with clear prompts. Preconditions include vehicle stationary, in Park/Neutral, and not in traffic-sensitive contexts; if unmet, the UI offers alternative actions (e.g., navigate to safe pull-off). Features include adjustable countdown, cancel/extend, audible/visual warnings, and automatic escalation if shutdown cannot complete. Every event captures vehicle vitals, location, and user actions, writing to the maintenance timeline and calculating idling/emissions savings. Role- and policy-based controls require additional confirmation for certain vehicles or hours.

Acceptance Criteria
Eligibility and Safe Context Gate
Given a shutdown trigger is active (e.g., coolant temp above policy threshold or idling duration exceeds policy) And the driver taps One-Tap Fix for Safe Auto-Shutdown When the system evaluates preconditions Then it verifies GPS speed <= 1 mph for >= 5 seconds And verifies gear selector is Park or Neutral And verifies parking brake engaged when sensor is available And verifies no towing/in-motion signals are present And sets eligibility = true when all checks pass And displays "Safe to initiate shutdown" with the default countdown value Given any precondition above fails When the system evaluates preconditions Then eligibility = false And the shutdown action is disabled And the UI displays which specific preconditions are unmet And the reason is logged with a timestamp and sensor values
Alternative Actions When Preconditions Unmet
Given shutdown eligibility = false due to unmet preconditions (e.g., vehicle moving, gear not in Park/Neutral, or hazardous context) When the driver taps One-Tap Fix Then the UI presents alternative actions: Navigate to safe pull-off, Start reminder timer, Notify manager And "Navigate to safe pull-off" selects a nearby safe location within 5 miles, shows distance and ETA, and provides turn-by-turn navigation And "Start reminder timer" schedules a re-check in 2 minutes (configurable 1–10 minutes) And "Notify manager" sends a push/email with vehicle, location, trigger reason, and unmet preconditions within 10 seconds And shutdown initiation remains blocked until eligibility becomes true
Remote Auto-Shutdown via OBD-II/OEM APIs
Given the vehicle capability detection indicates remote engine stop is supported via OBD-II or OEM API And shutdown eligibility = true When the driver confirms Start Shutdown Then the system starts a visible countdown (default 30 seconds, policy range 10–120 seconds) And at T-5 seconds it sends the remote engine stop command via the configured channel And it verifies RPM = 0 within 10 seconds of command execution Then the UI shows Shutdown successful, records completion time, and ends the countdown And the timeline entry is created within 10 seconds of completion Given the command is rejected or RPM != 0 after 10 seconds When verification fails Then the system switches to the driver-guided path with clear on-screen instructions And logs failure details (API channel, error code, latency, last RPM) And sends a notification to the manager if policy requires escalation on remote failure
Driver-Guided Shutdown Timer (No Remote Control)
Given remote shutdown is unsupported or has failed verification And shutdown eligibility = true When the driver taps Start Shutdown Then a full-screen countdown starts (default 30 seconds, visible HH:MM:SS) And audible beeps occur every 5 seconds, increasing to every 1 second in the final 10 seconds And the UI displays step-by-step prompts to place vehicle in Park/Neutral, engage parking brake, and turn engine off at T=0 Then at T=0 the UI instructs "Turn engine off now" and presents a Confirm Engine Off button And if RPM remains > 0 or no confirmation is received within 15 seconds, the system repeats prompts, highlights risk, and offers Notify manager And if no confirmation after 60 seconds post T=0, the system records shutdown not completed and triggers escalation per policy
Countdown Controls and Alerts
Given a shutdown countdown is active When the driver taps Adjust Then the driver can set the countdown between 10 and 120 seconds in 5-second increments, constrained by policy Given a shutdown countdown is active When the driver taps Extend Then the countdown increases by 30 seconds up to a maximum of 120 seconds and logs the extend action with reason selection if policy requires Given a shutdown countdown is active When the driver taps Cancel Then the system asks for confirmation (two-step) and requires a reason if the trigger severity is high (e.g., coolant > critical threshold) And upon cancel, the system stops alerts, records the cancel event, and offers alternative actions Given any alert is playing When device volume or Do Not Disturb would suppress safety tones Then the app uses critical alert channel (where supported) to ensure audible warnings are delivered
Event Capture, Savings Calculation, and Timeline Write
Given any shutdown flow (remote or driver-guided) is initiated When the event is recorded Then the record includes: timestamps (start, T=0, completion), driver ID, vehicle ID, VIN, location (lat/long), odometer, coolant temp, oil temp (if available), RPM, battery voltage, active DTCs, trigger type (overheat/idling), path (remote/driver), countdown settings, user actions (extend/cancel/confirm), outcome (success/fail), and error codes And the entry posts to the vehicle maintenance timeline within 10 seconds (or queues offline and syncs within 2 minutes of connectivity) Given the engine-off time is known When calculating savings Then idling time avoided = baseline idle projection minus actual post-trigger idle duration And fuel saved = idle rate (vehicle-specific or default 0.6 gal/hr) * avoided idle hours And CO2 avoided is computed using 8.887 kg CO2 per gallon (configurable) And cost saved uses the organization’s fuel price setting And all calculations are included in the timeline entry and analytics rollups
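The savings arithmetic is simple enough to pin down in a few lines. A sketch: the defaults mirror the criterion (0.6 gal/hr idle rate, 8.887 kg CO2 per gallon), while the fuel price argument stands in for the organization's fuel price setting.

```typescript
// Idle/fuel/CO2/cost savings per the criterion above.
function idleSavings(
  baselineIdleHrs: number,
  actualIdleHrs: number,
  idleRateGalPerHr = 0.6,     // vehicle-specific or default
  kgCo2PerGallon = 8.887,     // configurable emissions factor
  fuelPriceUsdPerGal = 4.0,   // placeholder for the org's fuel price setting
) {
  const avoidedIdleHrs = Math.max(0, baselineIdleHrs - actualIdleHrs);
  const fuelSavedGal = idleRateGalPerHr * avoidedIdleHrs;
  return {
    avoidedIdleHrs,
    fuelSavedGal,
    co2AvoidedKg: fuelSavedGal * kgCo2PerGallon,
    costSavedUsd: fuelSavedGal * fuelPriceUsdPerGal,
  };
}

// Example: a projected 1.5 h idle cut to 0.25 h saves 0.75 gal and ~6.7 kg CO2.
console.log(idleSavings(1.5, 0.25));
```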
Policy- and Role-Based Confirmation and Audit
Given a policy requires additional confirmation for specified vehicles or hours When a driver initiates shutdown Then a second confirmation screen appears with policy rationale and requires a reason selection And if the user role lacks permission for immediate shutdown under current policy, the UI blocks the action and offers Request manager approval And policy evaluation works offline using last-synced policy cache and re-validates on reconnect And all confirmations, denials, overrides, and approvals are written to an audit log with user, role, policy ID, timestamps, and geolocation And audit records are immutable and retrievable via the admin console within 15 seconds of sync
One-Tap Service Scheduling
"As a small fleet manager, I want drivers to schedule common services with one tap so that issues are queued and resolved without back-and-forth calls."
Description

Creates a service appointment in one tap for common issues (e.g., tire top-off) by pre-filling vehicle, fault context, urgency, and preferred vendor rules. Integrates with FleetPulse maintenance calendars and external provider networks where available; if no network, it generates an internal work order and notifies the assigned technician/manager. Supports time-window selection, mobile technician dispatch, and automatic task attachments (photos, fault codes). Confirms with the driver and manager, posts to the vehicle’s service schedule, and updates reminders. Handles rescheduling, conflicts (vehicle availability/route), and cost capture with expected vs. actual comparisons. All outcomes feed into repair-cost tracking and compliance reporting.

Acceptance Criteria
One-Tap Appointment Prefill and Creation
Given a driver is viewing a vehicle with an active serviceable fault in FleetPulse and One-Tap Fix is available When the driver taps "One-Tap Schedule" Then a service appointment is created within 3 seconds with prefilled vehicle ID, VIN (if available), GPS location, fault code(s) and description, fault timestamp, urgency derived from rules, and preferred vendor per vendor rules And the appointment is persisted and appears on the vehicle’s service schedule within 1 second of creation And the related maintenance reminder is updated to "Scheduled" with the appointment date/time And the driver sees a confirmation with appointment date/time, vendor name, location, and contact details
External Network Scheduling with Internal Work Order Fallback
Given preferred vendor rules exist and the vehicle’s region supports an integrated external provider network When the appointment is created Then FleetPulse sends the booking request to the provider API and receives a confirmation within 10 seconds And the external confirmation/reference number is stored with the appointment and shown to the user And notifications are sent to the driver and manager with the external confirmation number Given no external network is available, the API call fails, or vendor rules specify no external provider When the appointment is created Then an internal work order (WO) is generated with a unique WO ID And the assigned technician/manager is notified in-app and by email within 30 seconds with the WO details
Time Window Selection and Mobile Technician Dispatch
Given the system has the vehicle’s route and availability data for the selected date When the driver opens time selection during One-Tap scheduling Then at least three non-conflicting time windows are displayed, sorted by earliest feasible availability And selecting a time window sets the appointment window and reserves it on the maintenance calendar within 2 seconds And if the driver selects "Mobile Technician", a dispatch job is created with target location (current GPS or selected), time window, and contact details, and is visible to the provider/technician And once a provider accepts, an ETA is displayed to the driver and manager within 10 seconds of acceptance
Automatic Attachment of Fault Data and Media
Given a service appointment is being created from a detected fault When the system prepares the appointment payload Then the latest fault code snapshot (DTCs), fault timestamp, and freeze-frame data (if available) are attached automatically And up to 5 recent photos captured from the vehicle/driver in the last 24 hours are pre-attached automatically (if available) And the UI offers an optional prompt to add photos if none are available without blocking the one-tap flow And total attachment size is limited to 25 MB; if exceeded, the user is prompted to deselect or compress before submission And all attachments are accessible to the assigned provider/technician via the appointment or work order
Conflict Detection and Guided Rescheduling
Given an appointment conflicts with existing maintenance bookings or the vehicle’s route/availability When the conflict is detected at creation or during a reschedule attempt Then the conflict reason is displayed to the user And at least three alternative conflict-free time windows are offered And choosing a new window updates the appointment within 2 seconds and automatically updates the maintenance calendar and reminders And all impacted parties (driver, manager, provider/technician) receive updated notifications And the change is audit-logged with timestamp, actor, previous window, and new window
Driver and Manager Confirmation & Notifications
Given an appointment is created or rescheduled via One-Tap Service Scheduling When confirmations are dispatched Then the driver receives an in-app confirmation immediately and a push notification within 5 seconds And the manager receives a push and/or email notification within 30 seconds (per notification preferences) And confirmations include appointment date/time or window, vendor/technician, location, and a deep link to view/cancel/reschedule And delivery and read/acknowledgement statuses are recorded for both recipient types And if push delivery fails, SMS fallback is sent to the manager’s phone on file when consent is present
Cost Capture with Expected vs Actual and Reporting Integration
Given an appointment is successfully scheduled When the scheduling flow completes Then an expected cost estimate and cost category (e.g., tires, brakes) are recorded from vendor rules or historical averages And upon work completion, actual cost and line items are captured via provider API or manual entry by technician/manager And the system calculates and stores variance (absolute and percentage) between expected and actual And costs and appointment outcomes are posted to repair-cost tracking and compliance reports within 5 minutes And cost amounts are stored in the org’s default currency and normalized to USD for reporting
Nearest Air Source Navigation
"As a driver with a low tire alert, I want to navigate to a suitable air source in one tap so that I can address the issue quickly and safely."
Description

Provides a curated list and one-tap navigation to the nearest compatible air sources based on vehicle class, PSI requirements, operating hours, amenities, and geofenced safety constraints. Aggregates data from internal POIs and third-party sources, with health checks and user feedback to maintain accuracy. Presents top options with distance/ETA, cost (if available), and capacity indicators, and deep-links into preferred navigation apps. Supports offline mode with cached POIs and routes, and records selection, arrival confirmation, and success/failure to improve recommendations and calculate downtime avoided.

Acceptance Criteria
One-Tap Nearest Air Source Results
Given the driver has an active low-tire-pressure nudge and location is available When the driver taps "Navigate to air source" Then the app returns a list within 3 seconds containing the top 3 nearest compatible air sources sorted by ETA ascending And each list item shows name, distance, ETA, open/closed status, cost if available, and capacity indicator And if fewer than 3 compatible sources are within 25 miles, show all available and display "Limited options" note
Compatibility Filtering by Vehicle Class and PSI
Given the vehicle profile includes class and required PSI When results are generated Then only sources with vehicle access matching the class and max PSI >= required PSI are included And if zero compatible sources exist within 25 miles, display an empty state that explains the constraint and provides a "Retry" action
Operating Hours and Safety Geofence Enforcement
Given current time and geofenced safety zones are known When listing results Then sources outside allowed safety zones are excluded And closed sources are excluded by default And if no open sources exist within 25 miles, up to 3 closed sources are shown labeled with next open time
Deep-Link Navigation Launch
Given a source is selected When the driver taps "Go" Then the app opens the preferred navigation app with destination coordinates and name within 1 second And if the preferred app is unavailable, the app prompts to choose from installed alternatives and proceeds And a deep-link launch event is recorded with source ID, nav app, and timestamp
Offline Mode with Cached POIs and Routing
Given the device is offline When the driver taps "Navigate to air source" Then the app presents cached POIs updated within the last 7 days, sorted by estimated distance using last known GPS And each item is labeled "Offline" and omits live cost/capacity And if no cached POIs exist, the app shows an error with "Retry" and "Save for later" actions And if no navigation apps are installed, the app displays the address and coordinates with a "Copy" action
POI Data Health Checks and Freshness
Given internal and third-party POI sources are configured When hourly health checks run Then the system flags a source as degraded if error rate exceeds 5% over the last hour and pauses ingestion until recovery And POIs without successful verification in the past 30 days are labeled "Unverified" and de-prioritized below verified entries And after 3 unique user reports within 7 days, a POI status changes to "Needs review" And degraded or unverified POIs are excluded from the top 3 unless no verified options exist
Selection, Arrival, and Outcome Logging
Given a driver selects an air source When the deep link is launched Then the app logs selection event with user ID, vehicle ID, source ID, timestamp, and context (online/offline) And upon geofence arrival within 50 meters or user-confirmed arrival, the app logs arrival event with timestamp and ETA error in minutes And upon user marking success/failure, the app logs outcome with reason codes; downtime avoided is calculated and stored And all events are retriable and synced within 60 seconds of connectivity restoration
Manager Notify & Exception Escalation
"As a manager, I want immediate, contextual notifications when a driver cannot comply so that I can make a quick, documented decision and keep the fleet moving safely."
Description

Enables drivers to report inability to comply (e.g., unsafe to stop) in one tap, sending a rich, actionable notification to the assigned manager with fault context, live location, vehicle state, and suggested next steps. Managers can acknowledge, reassign, approve deferrals, or trigger alternative actions (e.g., dispatch mobile service) from within the notification. Includes SLA timers, reminders, and an audit trail of acknowledgements and decisions. Supports multi-channel delivery (in-app, push, SMS/email fallback) and role-based routing. All events are written to the vehicle and driver timelines for compliance and incident review.

Acceptance Criteria
One-Tap Exception Report Notifies Assigned Manager
Given a driver has an active nudge and is unable to comply When the driver taps "Can't comply" in One-Tap Fix Then an exception event is created within 2 seconds and assigned to the vehicle's primary manager And in-app and push notifications are dispatched within 5 seconds of the tap And the notification payload includes fault context (code and description), severity, detection timestamp, live GPS location (lat/long and map link), vehicle identifier (name and VIN or plate), current vehicle state (speed, ignition state, battery voltage), and suggested next steps And the notification presents actionable buttons: Acknowledge, Approve Deferral, Reassign, Dispatch Mobile Service And the driver is shown an on-screen confirmation that the manager has been notified
Multi-Channel Delivery, Fallback, and Role-Based Routing
Given a manager is assigned to the vehicle and has valid contact endpoints When an exception event notification is sent Then deliver via in-app inbox and push to all active manager devices And if push delivery confirmation is not received within 10 seconds, send SMS and email fallback to the manager And route only to users with Manager or On-Call Manager roles for the vehicle's tenant And if the primary manager is marked Off Duty or Unavailable, auto-route to the on-call manager and CC the team inbox And do not deliver notifications to users outside the vehicle's tenant
Manager Actions Executed From Notification
Given a manager opens the exception notification via any channel When the manager taps Acknowledge Then the exception status changes to Acknowledged, the SLA timer stops, and the driver receives a confirmation banner within 5 seconds When the manager selects Approve Deferral Then a deferral-until timestamp and reason are required, the status becomes Deferred, and follow-up reminders are scheduled for both driver and manager When the manager selects Reassign Then a new assignee is required, ownership transfers to that assignee, and they receive a new notification while the previous owner is recorded in the audit trail When the manager selects Dispatch Mobile Service Then a provider is selected, a work order is created via integration, the provider name, ETA, and work-order ID are stored on the event, and the driver is notified with the ETA
SLA Timers, Reminders, and Escalation
Given an exception event is created Then start an Acknowledge SLA timer set to 5 minutes And if not acknowledged by 5 minutes, send Reminder #1 to the owner and record it in the audit trail And if still not acknowledged by 10 minutes, auto-escalate ownership to the on-call manager, notify them, and record Escalation #1 And send up to 3 reminders at 10-minute intervals until the event is Acknowledged, Deferred, or Resolved And when status is Deferred, SLA timers pause and resume when the deferral expires
Comprehensive Audit Trail and Timeline Writes
Given any exception lifecycle event occurs (creation, delivery, reminder, escalation, acknowledgement, reassignment, deferral, dispatch, resolution) Then write an immutable entry to both the vehicle timeline and driver timeline with UTC timestamp, actor, action, channel, details payload, and a correlation ID And entries appear in both timelines within 3 seconds of the event And exported timeline reports include these entries in CSV and JSON formats And each entry records the previous entry hash to provide append-only integrity verification
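The "previous entry hash" clause describes a hash chain. A minimal sketch using Node's built-in crypto module, with the entry's field set abbreviated for illustration:

```typescript
// Each timeline entry stores the SHA-256 hash of the previous entry, so any
// retroactive edit breaks the chain on verification (append-only integrity).
import { createHash } from "crypto";

interface TimelineEntry {
  ts: string;       // UTC ISO-8601
  actor: string;
  action: string;
  prevHash: string; // hash of the previous entry ("" for the first entry)
}

function entryHash(e: TimelineEntry): string {
  return createHash("sha256").update(JSON.stringify(e)).digest("hex");
}

function verifyChain(entries: TimelineEntry[]): boolean {
  return entries.every((e, i) =>
    i === 0 ? e.prevHash === "" : e.prevHash === entryHash(entries[i - 1]),
  );
}
```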
Driver Offline and Retry Behavior
Given the driver's device has no network connectivity When the driver taps "Can't comply" Then the request is queued locally and shown as Queued within 1 second And when connectivity is restored, the exception is sent automatically within 10 seconds and shown as Sent And if delivery fails for 5 consecutive minutes, the driver is prompted to retry and error details are logged locally And repeated taps while queued result in a single exception event (deduplicated within a 2-minute window)
Secure Fallback Content and Access Control
Given SMS or email fallback is used for an exception notification Then the message includes a secure deep link that requires authentication or a signed token that expires within 30 minutes And the SMS/email body omits PII beyond vehicle name, last 4 of VIN, coarse location (city/state), and a brief fault summary And opening the link logs a view event and enforces role-based access so only assigned or role-authorized managers can see full details And unauthorized users see Access Denied and no event details are leaked
Telemetry-to-Action Rules Engine
"As a fleet owner, I want actionable rules tied to telemetry so that the system recommends the right one-tap action at the right time for each vehicle."
Description

A configurable rules engine that maps sensor readings, OBD-II codes, and patterns (e.g., rising coolant temp, persistent TPMS low) to prioritized one-tap actions in the drawer. Supports fleet-level and vehicle-level policies, thresholds, and conditional logic (vehicle load, weather, route constraints). Includes a simulation mode to test rule changes on historical data, versioning with rollback, and transparent rationale (“why this action”). Captures outcomes to enable iterative tuning and optional ML-assisted recommendations. Ensures governance by requiring approvals for high-impact actions (e.g., shutdown) and maintaining a complete change log.

Acceptance Criteria
Telemetry Mapping to Prioritized One‑Tap Actions
Given a connected vehicle streaming telemetry at ≥1 Hz and ruleset version R active for the fleet When OBD-II code P0300 is received OR coolant_temp rises ≥10°C within 5 minutes OR TPMS pressure remains < threshold for ≥10 minutes Then the mapped one-tap action appears in the One‑Tap Fix drawer within 2 seconds of event detection And the action is ordered by descending rule priority score, breaking ties by most recent event time And duplicate actions for the same event window (5 minutes) are suppressed And rule evaluation latency is ≤200 ms per rule at p95 with up to 100 concurrent vehicles
Policy Scope and Precedence (Fleet vs Vehicle)
Given a fleet-level rule "TPMS < 30 psi ⇒ Navigate to nearest air source" and a vehicle-level override for vehicle V1 "TPMS < 32 psi ⇒ Navigate to nearest air source" When V1 reports 31 psi Then only the vehicle-level rule triggers and the action appears for V1 And when vehicle V2 (without override) reports 31 psi, only the fleet-level rule triggers And when two rules map to different actions for the same trigger, the higher priority rule’s action is shown and the lower priority action is hidden And the effective policy view shows the resolved source of each applied rule (vehicle override or fleet default) for the evaluated event
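Scope resolution ("most specific wins, then priority") can be sketched as a two-stage filter. All shapes here are illustrative, not the engine's actual model:

```typescript
// Among rules that fire, vehicle-level overrides shadow fleet defaults,
// then the highest-priority remaining rule supplies the shown action.
interface Telemetry {
  vehicleId: string;
  tpmsPsi: number;
}
interface Rule {
  id: string;
  scope: "fleet" | "vehicle";
  vehicleId?: string; // set when scope === "vehicle"
  priority: number;   // higher wins among same-trigger rules
  trigger: (t: Telemetry) => boolean;
  action: string;
}

function resolveAction(rules: Rule[], t: Telemetry): Rule | null {
  const fired = rules.filter(
    (r) => r.trigger(t) && (r.scope === "fleet" || r.vehicleId === t.vehicleId),
  );
  if (fired.length === 0) return null;
  const hasOverride = fired.some((r) => r.scope === "vehicle");
  const effective = hasOverride ? fired.filter((r) => r.scope === "vehicle") : fired;
  effective.sort((a, b) => b.priority - a.priority); // highest priority shown
  return effective[0];
}
```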
Contextual Conditions (Load, Weather, Route Constraints)
Given context providers for vehicle load, weather, and route constraints are configured When gross_vehicle_weight ≥ 80% of capacity AND ambient_temperature < 0°C AND the route includes a grade ≥6% within the next 10 km Then the rules engine prioritizes the "Start engine‑cooldown timer" action above non-contextual actions for the next 15 minutes And if any contextual data is unavailable, the rule evaluates the defined fallback branch and logs a missing‑context event with the specific field name And updates to contextual inputs trigger re‑evaluation within 10 seconds of change, with cache TTL ≤5 minutes
Historical Simulation Mode (No Side Effects)
Given a proposed ruleset version R' and a selected historical window (e.g., last 30 days for 10 vehicles) When the user runs simulation Then no live notifications or actions are emitted to drivers or managers And the system outputs counts of triggers and actions by rule, with a diff versus baseline version R (added/dropped/changed actions) and sample event links And if outcome labels exist, estimated precision/recall deltas versus baseline are reported And the simulation completes within 5 minutes at p90 for 10 vehicles × 30 days, or surfaces progress with cancellable execution And all simulation artifacts are stored and time‑stamped for later review
Versioning, Rollback, and Change Log Integrity
Given any saved change to the ruleset When the change is published Then an immutable version number increments and is recorded with author, timestamp, diff, rationale, and associated approvals (if any) And rollback to any prior version can be initiated by authorized users and completes propagation to all evaluators within 2 minutes at p95 And the change log presents a complete chronological history and cannot be altered or deleted, only appended And the active version is clearly indicated in the admin UI and on the evaluator nodes
Governance and Approvals for High‑Impact Actions
Given an action categorized as High‑Impact (e.g., Shutdown) requires approval When such a rule is created or its threshold is modified Then the rule’s state is Pending and cannot activate until approved by a user with the Approver role And approval requests notify designated approvers within 30 seconds and record approver identity, decision, and comment And driver-facing one‑tap actions tied to Pending rules are disabled with an "Awaiting approval" label and offer a one‑tap manager notification And all approvals and denials are auditable and linked to the specific ruleset version
Transparent Rationale and Outcome Capture
Given a one‑tap action is displayed in the drawer When the user taps "Why this action" Then a panel shows the rule name, version, matched conditions, contributing telemetry values with timestamps, thresholds, and any contextual inputs used And the rationale panel loads within 500 ms at p90 and is localized based on the user’s language setting And when the user executes, snoozes, or dismisses the action, an outcome event is recorded within 2 seconds including action_id, rule_version, vehicle_id, user_id, decision, time_to_action, and a snapshot of relevant telemetry And optional ML‑assisted recommendations are clearly labeled as "Suggested", include confidence, respect opt‑in, and cannot auto‑activate High‑Impact actions
Offline Action Queue & Retry
"As a driver operating in areas with poor coverage, I want my one-tap actions to still work and sync later so that I don’t lose time or duplicate efforts."
Description

Ensures One-Tap Fix works in low/no connectivity by locally confirming intent, queuing action payloads (with idempotency keys), and retrying with exponential backoff when the network is available. Provides clear driver feedback about queued vs. executed actions and offers offline fallbacks (e.g., locally cached POIs for air sources, locally run shutdown timer). Handles conflict resolution on reconnect and synchronizes outcomes to the server, preserving accurate timelines and analytics. Encrypts stored payloads at rest and purges them per retention policy.

Acceptance Criteria
Queue and Retry One-Tap Action Offline
Given the device has no network connectivity and a driver taps a One-Tap Fix action When the driver confirms the action Then the app must store a single action payload with an idempotency key, mark it "Queued (Offline)", and not attempt a network call immediately
Given a queued payload exists When the network becomes available Then the client must retry delivery with exponential backoff (initial delay 2s, multiplier 2.0, max interval 5m, jitter ±20%) until a server 2xx is received or max attempts (8) are exhausted
Given the server returns 2xx for a payload When the payload is acknowledged Then the client must transition the action state to "Executed", record sent_at and ack_at timestamps, and remove it from the retry queue
Given max attempts are exhausted without success When the last attempt fails Then the client must surface a non-blocking error, keep the action in "Needs Attention", and allow manual retry
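A minimal sketch of this backoff schedule, using the parameters stated above (2 s initial delay, ×2.0 multiplier, 5 min cap, ±20% jitter, 8 attempts); the helper name is illustrative, not part of the actual FleetPulse client:

    import random

    INITIAL_DELAY_S = 2.0   # initial delay 2s
    MULTIPLIER = 2.0        # backoff multiplier
    MAX_INTERVAL_S = 300.0  # max interval 5m
    JITTER = 0.20           # ±20%
    MAX_ATTEMPTS = 8

    def retry_delay(attempt: int) -> float:
        """Seconds to wait before retry `attempt` (1-based): exponential growth,
        capped at the max interval, then randomized by the jitter factor."""
        base = min(INITIAL_DELAY_S * MULTIPLIER ** (attempt - 1), MAX_INTERVAL_S)
        return base * random.uniform(1 - JITTER, 1 + JITTER)

    # Attempts 1..8 wait roughly 2, 4, 8, 16, 32, 64, 128, 256 seconds (±20%);
    # after MAX_ATTEMPTS the payload moves to "Needs Attention" for manual retry.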
Offline Fallbacks: Local Shutdown Timer and Cached POIs
Given no connectivity and One-Tap Fix "Start Auto-Shutdown Timer" is invoked When the driver selects a duration Then a local countdown timer must start immediately, persist across app background/kill/reboot, and execute the shutdown reminder at expiry with vibration and sound
Given no connectivity and One-Tap Fix "Navigate to nearest air source" is invoked When POIs are loaded from the local cache Then the app must present at least 20 nearest air sources within 10 seconds using a cache not older than 7 days, sorted by distance, and open turn-by-turn navigation on selection
Given connectivity is restored while a fallback is active When the server-capable version of the action becomes available Then the app must reconcile and replace the fallback with the online action without losing user intent or history
Driver Feedback: Queued vs Executed States
Given a One-Tap Fix action is queued offline When the driver views the action card Then the UI must display status "Queued (Offline)", the queued_at timestamp, and the next retry ETA
Given a retry attempt occurs When the attempt starts and ends Then the UI must update the "Last attempt" timestamp and show a transient "Sending..." state not exceeding 5 seconds without freezing input
Given the action is executed server-side When the ack is received Then the UI must transition to "Executed" within 1 second and show executed_at, with accessibility labels updated accordingly
Given the action cannot be sent after max attempts When the state becomes "Needs Attention" Then the UI must offer "Retry now" and "Dismiss" actions and log the event for analytics
Conflict Resolution and Idempotency on Reconnect
Given an action with idempotency key K was queued offline When the device reconnects and the server reports an existing action with the same K or a matching dedup signature Then the client must not re-execute the action, must attach local queued_at/sent_at to the server record, and mark the local payload as "Reconciled"
Given an action was modified on another device while offline (e.g., rescheduled time) When the client syncs Then the server version must win, the client must show "Updated on another device", and present the final schedule without duplication
Given two conflicting outcomes are detected between local and server states When sync completes Then the client must apply the server's conflict resolution decision, avoid duplicate execution, display "Resolved on sync" with final state, and emit a conflict_resolved event containing local and server states
Offline Payload Security and Retention
Given any action payload is stored locally When it is written to disk Then it must be encrypted at rest using AES-256-GCM with keys stored in the OS secure keystore, and be unreadable if the keystore is unavailable
Given the user logs out or the app is uninstalled When secure storage is cleared Then all queued payloads and keys must be purged irrecoverably within 2 seconds of logout and upon uninstall hook
Given a payload has been acknowledged by the server When ack is persisted Then the payload body must be purged within 10 minutes, retaining only minimal metadata (ids, timestamps, status)
Given an unsent payload exceeds retention When its age surpasses 24 hours Then the client must mark it expired, stop retries, purge the body, and surface "Expired - Retention Limit" to the driver
Outcome Sync and Accurate Timelines
Given a queued action was created offline at time Tq and sent at Ts When the client syncs with the server Then the server must store queued_at=Tq, sent_at=Ts, executed_at from server processing, preserving monotonic order
Given the device clock drift exceeds 2 minutes When syncing timestamps Then the client must include its offset estimate and the server must normalize timestamps so that analytics error is ≤ 2 seconds for relative durations
Given outcomes are synchronized When analytics are queried Then offline-initiated actions must be indistinguishable from online actions except for the offline flag and must appear in reports within 60 seconds of final ack
Resilience Under Device Constraints
Given the device enters battery saver or background restrictions When retries are scheduled Then the client must use OS-friendly scheduling, coalesce retries, and cap background CPU time to < 30 seconds per hour
Given local storage available to the app is low (< 50 MB free) When adding a new payload to the queue Then the client must cap the queue at 200 payloads, apply FIFO eviction of the oldest expired items, and notify the driver if a new intent cannot be queued
Given the device reboots while retries are pending When the app restarts Then the retry schedule and timers must be restored within 5 seconds of launch without requiring user interaction
Given a single action is tapped rapidly multiple times When generating payloads Then only one payload with a unique idempotency key per action instance must be enqueued and subsequent taps must be ignored or debounced with feedback

Coach Digest

Daily or weekly summaries for managers highlighting idling hotspots, top savers, exception counts, and estimated fuel/CO2 impact. Includes a prioritized coaching list and suggested messages to scale what works across teams.

Requirements

Digest Scheduling & Timezone Delivery
"As a fleet manager, I want to configure when I receive the Coach Digest in my local time so that I get actionable summaries at the right moment in my day."
Description

Enable managers to configure Coach Digest frequency (daily or weekly), delivery day/time, and time zone, with account-level defaults and per-user overrides. The system must generate digests based on a consistent data cutoff window, respect user quiet hours, and backfill missed digests after outages. Integrates with the notification scheduler to queue deliveries, validates time zones for distributed teams, and ensures deterministic delivery windows for downstream analytics. Expected outcome: timely, predictable summaries that arrive when managers can act.

Acceptance Criteria
Account-Level Default Schedule Configuration
- Given an admin sets account defaults for frequency (daily or weekly), delivery day/time, and IANA time zone, When the settings are saved, Then users without overrides inherit these defaults and the next delivery is scheduled.
- Given invalid frequency or time zone is submitted, When saving, Then the system rejects the change with a validation error and no schedules are updated.
- Given account defaults are updated, When saved, Then future deliveries recalculate based on the new settings without duplicating already sent digests.
- Given the defaults are set, When queried via API, Then the stored values and the computed next delivery timestamp are returned.
Per-User Override With Validation
- Given a user sets a personal override for frequency, delivery day/time, time zone, and quiet hours, When saved, Then the user's schedule supersedes the account defaults for future digests.
- Given an invalid IANA time zone or a delivery time outside allowed range, When saving, Then the override is rejected with a descriptive validation error.
- Given a user removes their override, When saved, Then subsequent digests use the account defaults and the next delivery timestamp updates accordingly.
- Given both daily and weekly are selected, When saving, Then the system prevents save and prompts to choose exactly one frequency.
Quiet Hours Deferral
- Given a user has quiet hours defined, When a digest would be delivered during quiet hours, Then delivery is deferred to the earliest minute after quiet hours in the user's time zone.
- Given quiet hours span the entire configured delivery day, When the scheduled time falls within quiet hours, Then the digest is delivered at the first available minute after quiet hours end, even if that is the next calendar day.
- Given delivery is deferred due to quiet hours, When the digest is sent, Then the data window remains based on the original scheduled cutoff and is labeled with the original scheduled date.
- Given multiple deferrals occur consecutively, When delivering, Then only one digest is sent per configured period with no duplicates.
Deterministic Data Cutoff Windows
- Given a daily digest scheduled at 08:00 local time, When generating content, Then the data window is the previous local calendar day 00:00:00–23:59:59 and is stamped in the payload.
- Given a weekly digest scheduled for Monday 08:00 local time, When generating content, Then the data window is the previous Monday 00:00:00 through Sunday 23:59:59 local and is stamped in the payload.
- Given delivery is delayed (e.g., quiet hours or retries), When generating or sending, Then the data window does not change and downstream analytics receive the same window identifiers.
- Given a retry or backfill occurs, When the same digest is regenerated, Then totals and identifiers are consistent with the original cutoff window.
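A sketch of the daily cutoff computation implied above, using Python's zoneinfo; the function name is illustrative:

    from datetime import datetime, time, timedelta
    from zoneinfo import ZoneInfo

    def daily_cutoff_window(scheduled_local: datetime) -> tuple[datetime, datetime]:
        """Previous local calendar day, 00:00:00-23:59:59, in the schedule's zone."""
        prev_day = (scheduled_local - timedelta(days=1)).date()
        tz = scheduled_local.tzinfo
        return (datetime.combine(prev_day, time(0, 0, 0), tzinfo=tz),
                datetime.combine(prev_day, time(23, 59, 59), tzinfo=tz))

    # Example: a digest scheduled for 08:00 Chicago time on 2024-03-12
    start, end = daily_cutoff_window(
        datetime(2024, 3, 12, 8, 0, tzinfo=ZoneInfo("America/Chicago")))
    # start = 2024-03-11 00:00:00-05:00, end = 2024-03-11 23:59:59-05:00

Because the window is derived from the original scheduled time, quiet-hours deferrals and retries naturally leave it unchanged, as the criteria require.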
Outage Recovery and Backfill
- Given one or more scheduled digests were not delivered due to an outage, When the system recovers, Then missed digests are backfilled in chronological order up to the configured backfill limit per user and frequency.
- Given a digest was already delivered before the outage, When backfilling, Then it is not resent.
- Given backfilled digests are sent, When delivered, Then they are tagged as backfill and retain their original data cutoff windows and scheduled timestamps.
- Given the backfill queue exceeds rate limits, When processing, Then the scheduler throttles sends while guaranteeing order and idempotency.
Notification Scheduler Integration and Idempotency
- Given a schedule exists for a user, When the next delivery is due, Then a single notification job is enqueued with an idempotency key and executed within ±5 minutes of the scheduled time.
- Given a transient delivery failure, When the job runs, Then it retries up to 3 times with exponential backoff without creating duplicate digests.
- Given a configuration change (frequency, time, time zone, quiet hours), When saved, Then existing future jobs are updated or canceled and rescheduled accordingly.
- Given multiple schedules across users and time zones, When enqueuing, Then the system scales without dropping jobs and logs audit entries for enqueue, attempt, success, and failure.
Time Zone Validation and DST Handling
- Given a user selects a time zone, When saving, Then only valid IANA time zone identifiers are accepted.
- Given a scheduled time that falls into a DST spring-forward gap, When scheduling, Then the delivery occurs at the next valid local minute after the gap.
- Given a scheduled time during a DST fall-back repeated hour, When scheduling, Then exactly one delivery occurs at the first occurrence of the local time.
- Given a user changes their time zone, When saved, Then the next delivery and data cutoff windows are recalculated based on the new time zone starting with the next schedule cycle, without resending already delivered digests.
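The gap and fold rules above can be satisfied with zoneinfo's PEP 495 semantics; a sketch, with an illustrative helper name:

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    def first_valid_local(dt_naive: datetime, tz: ZoneInfo) -> datetime:
        """Resolve a wall-clock time per the rules above: times inside a
        spring-forward gap roll to the next valid minute; ambiguous fall-back
        times take the first occurrence (fold=0, per PEP 495)."""
        local = dt_naive.replace(tzinfo=tz, fold=0)
        # A nonexistent (gap) time does not survive a round trip through UTC.
        while local.astimezone(timezone.utc).astimezone(tz).replace(tzinfo=None) != dt_naive:
            dt_naive += timedelta(minutes=1)
            local = dt_naive.replace(tzinfo=tz, fold=0)
        return local

    # 2024-03-10 02:30 does not exist in Chicago; delivery rolls to 03:00 CDT.
    print(first_valid_local(datetime(2024, 3, 10, 2, 30), ZoneInfo("America/Chicago")))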
Telemetry Aggregation & Impact Modeling
"As an operations analyst, I want the digest to aggregate key telemetry into clear metrics with estimated fuel and CO2 impact so that I can quantify savings and prioritize actions."
Description

Aggregate OBD-II and telematics events over the selected period to compute idling hotspots, top savers, exception counts, and estimated fuel/CO2 impact. Define metric formulas (e.g., idle minutes, fuel burn from idle, harsh events per 100 miles), unit normalization (mpg vs. L/100km), and confidence scoring for data quality gaps. Perform geospatial clustering for hotspots, trend comparisons vs. prior period, and baseline modeling per vehicle class. Provide APIs that return summarized datasets optimized for digest rendering. Expected outcome: accurate, comparable metrics that quantify opportunities and savings.

Acceptance Criteria
Idling Hotspot Geospatial Clustering
Given a selected time period, fleet scope, and idle events defined as engine_on=true AND speed<=2 mph for >=60 seconds with valid GPS When the hotspot clustering job runs using DBSCAN with eps=200 meters and minPts=5 idle events per cluster Then it returns hotspots with fields: id, centroid_lat, centroid_lon, total_idle_minutes, contributing_vehicle_count, confidence_score, rank, and cluster_bounds And clusters with total_idle_minutes < 10 are excluded And events with GPS_accuracy > 50 meters are excluded from clustering And processing completes within 5 minutes for up to 100 vehicles and 30 days of data And repeated runs on the same input produce identical hotspot IDs and ranks
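A sketch of the clustering step with scikit-learn's DBSCAN; the haversine metric takes coordinates in radians, so eps is 200 m divided by the Earth's radius (function and variable names are illustrative):

    import numpy as np
    from sklearn.cluster import DBSCAN

    EARTH_RADIUS_M = 6_371_000

    def cluster_idle_events(lat_lon_deg: np.ndarray) -> np.ndarray:
        """Cluster idle events (N x 2 array of [lat, lon] in degrees).
        Returns a cluster label per event; -1 marks noise (no hotspot)."""
        coords_rad = np.radians(lat_lon_deg)
        db = DBSCAN(eps=200 / EARTH_RADIUS_M,  # 200 m expressed in radians
                    min_samples=5,             # minPts=5 idle events per cluster
                    metric="haversine")
        return db.fit_predict(coords_rad)

Note that DBSCAN's label order is not inherently stable across reruns; the determinism requirement (identical hotspot IDs and ranks) would need IDs derived from cluster contents, e.g., a hash of sorted member event IDs, rather than raw labels.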
Idle Fuel and CO2 Impact Estimation
Given idle_minutes per vehicle and either measured idle fuel rate or a class baseline idle fuel rate When computing impact Then fuel_burned_idle = (idle_minutes / 60) * idle_fuel_rate and CO2_emitted = fuel_burned_idle * emission_factor where emission_factor = 8.887 kg/gal (gasoline) or 10.16 kg/gal (diesel) And results are returned at hotspot, vehicle, and fleet levels with 2-decimal precision and include delta_abs and delta_pct vs prior period (delta_pct omitted when prior = 0) And unit localization is applied: gallons/kg for imperial and liters/kg for metric with 1 gal = 3.78541 L conversion And confidence is reduced by 0.2 when baseline rate is used instead of measured, floored at 0.0
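The impact formulas above, as a sketch using the stated emission factors and gallon/liter conversion; the function name is illustrative:

    EMISSION_FACTOR_KG_PER_GAL = {"gasoline": 8.887, "diesel": 10.16}
    LITERS_PER_GAL = 3.78541

    def idle_impact(idle_minutes: float, idle_fuel_rate_gal_per_hr: float,
                    fuel_type: str, metric_units: bool = False):
        """Fuel burned at idle and CO2 emitted (kg), rounded to 2 decimals."""
        fuel_gal = (idle_minutes / 60) * idle_fuel_rate_gal_per_hr
        co2_kg = fuel_gal * EMISSION_FACTOR_KG_PER_GAL[fuel_type]
        fuel = fuel_gal * LITERS_PER_GAL if metric_units else fuel_gal
        return round(fuel, 2), round(co2_kg, 2)

    # e.g. 90 idle minutes at 0.8 gal/h of diesel -> (1.2 gal, 12.19 kg CO2)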
Exception Events Normalized per Distance
Given harsh_brake, harsh_accel, and harsh_corner events and total distance for the period per entity When computing normalized rates Then exceptions_per_100_miles = (event_count / max(distance_miles, 1)) * 100 and exceptions_per_100_km uses kilometers with the same formula And trips shorter than 0.5 miles (0.8 km) are excluded from numerator and denominator And per-driver and per-vehicle rankings are returned with ties broken by higher distance, then by stable entity ID And outputs include delta_abs and delta_pct vs prior period
Baseline Modeling per Vehicle Class
Given vehicle class with >=10 active vehicles and >=90 days of history When computing class baselines Then baseline_idle_fuel_rate and baseline_exception_rate_per_100_miles are calculated as a trimmed mean with 5% tails removed and are versioned with baseline_version and effective_date And classes with insufficient data fall back to global baselines with fallback_source included in outputs And baselines are recalculated weekly and historical API calls default to the baseline_version effective during the requested period unless a version override is provided
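A sketch of the baseline calculation, assuming "5% tails removed" means 5% trimmed from each end (which matches scipy's trim_mean semantics):

    from scipy.stats import trim_mean

    def class_baseline(observed_rates: list[float]) -> float:
        """Trimmed mean of per-vehicle rates with 5% cut from each tail."""
        return float(trim_mean(observed_rates, proportiontocut=0.05))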
Confidence Scoring and Data Quality Handling
Given completeness metrics per channel (OBD, GPS) and sensor reliability factors When computing confidence_score in [0.0, 1.0] Then confidence_score = round(min(OBD_completeness, GPS_completeness) * sensor_reliability_factor, 2) And entities with confidence_score < 0.5 are excluded from top lists and flagged with machine-readable reason codes And aggregates include confidence_score; records with unknown fuel_type use a default emission factor and include reason code UNKNOWN_FUEL_TYPE
Top Savers Identification and Coaching Prioritization
Given per-entity (driver or vehicle) current and prior period metrics for idle fuel and exceptions When computing top savers Then fuel_saved = max(prior_idle_fuel - current_idle_fuel, 0) and exception_reduction = max(prior_exceptions_per_100 - current_exceptions_per_100, 0) And combined_score = (fuel_saved_normalized * 0.7) + (exception_reduction_normalized * 0.3) And return top 10 entities with: entity_id, name, fuel_saved, co2_avoided, exception_reduction, combined_score, and a suggested_message based on improvements And exclude entities with confidence_score < 0.5 or distance < 100 miles (160 km) in the current period
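A sketch of the weighted ranking above; the spec does not say how fuel_saved_normalized and exception_reduction_normalized are scaled, so max-normalization across the eligible population is assumed here:

    def top_savers(entities: list[dict], n: int = 10) -> list[dict]:
        """Rank already-eligible entities by combined_score = 0.7 * normalized
        fuel savings + 0.3 * normalized exception reduction (max-normalized,
        an assumption)."""
        max_fuel = max((e["fuel_saved"] for e in entities), default=0) or 1
        max_exc = max((e["exception_reduction"] for e in entities), default=0) or 1
        for e in entities:
            e["combined_score"] = (0.7 * e["fuel_saved"] / max_fuel
                                   + 0.3 * e["exception_reduction"] / max_exc)
        return sorted(entities, key=lambda e: e["combined_score"], reverse=True)[:n]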
Digest Rendering API: Summaries, Trends, Performance, and Unit Normalization
Given an authenticated GET /v1/coach-digest/summary with params period, compare_to=prior, scope=org|team|vehicle_class, units=imperial|metric, tz=IANA When invoked Then respond 200 with payload containing hotspots[], top_savers[], exceptions_summary, fuel_co2_summary, trends[], and meta including baseline_version and confidence_notes conforming to JSON schema v1.2 And p95 latency <= 800 ms and p99 <= 1500 ms for <=100 vehicles and 30-day period; includes ETag and Cache-Control: max-age=900 And numeric fields adhere to requested units; per-100 metrics respect miles vs km; fuel economy supports mpg and L/100km with conversion mpg = 235.215 / L_per_100km And errors return 400 for invalid params, 422 for unsupported combinations, 429 for rate limiting, and 500 for server errors with machine-readable error codes
Coaching Prioritization Engine
"As a safety and efficiency coach, I want a ranked list of who to coach and why so that I focus my time where it will drive the greatest savings and behavior change."
Description

Generate a prioritized list of drivers/vehicles to coach by scoring potential impact, exception severity, trends, and confidence. Include explainability (top factors and supporting trips/events), de-duplication across behaviors, and configurable guardrails (minimum data volume, exclude new drivers). Cap list length per digest and ensure deterministic ranking for auditability. Expose scores and reasons via API to power digest and detail views. Expected outcome: a clear, defensible list that directs manager attention to the highest-impact opportunities.

Acceptance Criteria
Score Composition, Range, and Persisted Components
Given a configured lookback window (daily: 24h, weekly: 7d) and scoring weights for impact, severity, trend, and confidence When the engine runs for the specified digest period Then it computes a numeric score in the range [0,100] for each eligible coaching subject And the same inputs and configuration produce the same score And the engine persists each component value and the final score with subjectId, periodStart, periodEnd, and configVersion
Explainability: Top Factors and Evidence
Given a ranked coaching subject is returned by the engine When the client requests the subject's explanation Then the response includes the top 3 contributing factors with contribution percentages that sum to 100% ±1% And the response includes 1–5 supporting trips/events with IDs, timestamps, and metric deltas And the response includes estimated fuel and CO2 impact for the period And each trip/event ID resolves via existing trip/event APIs
Behavior De-duplication per Coaching Subject
Given a subject has multiple exception behaviors within the digest window When the prioritized list is generated Then at most one entry exists per subject (driverId or vehicleId) per digest And the entry aggregates behaviors into a reasons array with behaviorType, severity metric, and impact estimate And no duplicate subject entries appear in the same digest output
Guardrails: Minimum Data Volume and New Driver Exclusion
Given configuration values minTrips=N, minEngineHours=H, and excludeNewDriversDays=D When eligibility is evaluated for each subject Then subjects with trips < N or engineHours < H in the window are excluded with exclusionReason recorded in audit And subjects first seen within the last D days are excluded with exclusionReason recorded in audit And eligibility parameters are configurable and versioned (configVersion)
Digest List Cap and Ordering
Given a configured list cap K for the digest type When ranked results are produced Then the output contains at most K eligible subjects ordered by descending score And ties are broken deterministically by: higher potential impact, then higher severity, then larger trend magnitude, then ascending subjectId And the API response includes totalEligibleCount and returnedCount
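The tie-break chain above maps directly onto a composite sort key; a sketch with illustrative field names:

    def rank_subjects(subjects: list[dict], cap: int) -> list[dict]:
        """Deterministic ordering: score desc, then potential impact desc,
        severity desc, trend magnitude desc, and finally subjectId asc."""
        ordered = sorted(subjects, key=lambda s: (
            -s["score"], -s["impact"], -s["severity"],
            -abs(s["trend"]), s["subject_id"]))
        return ordered[:cap]

Because every field participates in the key and the final component is a unique ID, reruns over identical inputs produce identical orderings, which supports the audit requirement.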
Deterministic Ranking and Audit Trail
Given identical inputs and configuration for a period When the engine is rerun for that period Then the scores, ordering, and explanations are identical And an audit record is stored with period, configVersion, scoringVersion, inputDataHash, algorithmParameters, generatedAt, and operatorId And audit records are retained for ≥90 days and are retrievable via an internal endpoint
API Exposure for Digests and Detail Views
Given GET /coach-prioritizations is called with periodStart, periodEnd, scope, and cap When the request is valid and data exists Then the response is 200 and includes items with subjectId, subjectType, score, reasons, topFactors, evidence, periodStart, periodEnd, configVersion, and scoringVersion And P95 response time is ≤500 ms for up to 100 returned subjects And pagination is supported via cursor when totalEligibleCount > cap And invalid parameters return 400 with validation errors
Suggested Coaching Message Templates
"As a manager, I want suggested coaching messages tailored to each driver and behavior so that I can quickly provide consistent, effective guidance."
Description

Provide a library of behavior-specific coaching templates (e.g., idling reduction, harsh braking) with dynamic placeholders for driver name, vehicle, events, and quantified impact. Support tone variants, localization, and versioning, with guardrails for respectful and compliant language. Allow managers to copy/paste or insert messages into their preferred channels and log that guidance was sent. Expected outcome: faster, consistent, and higher-quality coaching communications that scale across teams.

Acceptance Criteria
Manager inserts resolved coaching message into Slack and logs send
Given a manager has Coach Digest open and selects a "Reduce Idling" template for a specific driver and vehicle When they click "Insert into Slack" and choose a DM or channel Then the posted message contains resolved placeholders for {driver_name}, {vehicle_label}, {idling_events_7d}, {est_fuel_saved}, and {est_co2_saved} with correct values for the last 7 days And a guidance log entry is created with fields: manager_id, driver_id, vehicle_id, template_id, template_version, tone, language, channel="slack", destination_id, message_id, timestamp, status="sent" And the end-to-end action completes in ≤ 2 seconds at the 95th percentile
When the Slack API returns an error Then the UI displays the error, no "sent" log is created, and a failure log is recorded with error_code and retry_available flag
Manager copies rendered message to clipboard for external email or chat
Given a manager has selected a template, tone, language, driver, and vehicle When they click "Copy message" Then the clipboard receives a plain-text message with all placeholders resolved and no markup tokens (e.g., {driver_name}) And newline/paragraph formatting is preserved and URLs are retained as naked links And a non-blocking confirmation toast displays within 500 ms And a guidance log is recorded with channel="clipboard", status="copied", and no external destination_id
Localization with fallback to default language when translation is unavailable
Given the org default language is en, the manager locale is es-MX, and the driver preferred language is es When the manager opens the "Harsh Braking" template Then the Spanish (es) variant is used if a Published es version exists
When no es variant exists Then the default language (en) is used and the UI displays a notice "Showing default language" And numbers, dates, and units are formatted per the manager locale/org settings And the guidance log records language_used and fallback=true|false
Tone variant selection with organizational default
Given a template offers tone variants [Supportive, Direct, Data-Driven] and the org default is Supportive When the manager previews the template without manually selecting a tone Then Supportive is preselected
When the manager selects a different tone Then the preview updates within 200 ms and the selected tone is used for render and send/copy And the guidance log includes tone=<selected_variant>
Template versioning and publishing controls
Given a template version is Draft When a standard manager attempts to send or copy it Then the action is disabled with tooltip "Publish required"
Given version X is Published and version X+1 is Draft When a manager sends the template Then version X is used and recorded in the log
When version X+1 is Published Then subsequent renders use X+1 while historical logs continue referencing their original version numbers And the template details view shows version history with version, author, publish_date, and change_summary
Respectful language and compliance guardrails
Given the rendered message content (including any manager edits) contains profanity, discriminatory terms, or prohibited phrases per policy When the manager attempts to send or copy Then the action is blocked, offending phrases are highlighted, and suggested alternatives are shown And no guidance log with status="sent" or "copied" is created; a non-persisted validation event is recorded for telemetry with category="guardrail_block" And guardrail validation completes in ≤ 500 ms at the 95th percentile
When the content is compliant Then send/copy proceeds without guardrail warnings
Placeholder data availability and safe fallbacks
Given required placeholders are {driver_name} and {vehicle_label}, and optional placeholders include {event_count}, {est_savings} When any required placeholder has no source data Then send/copy is blocked with a message identifying missing fields
When optional placeholders are missing Then rendering substitutes "N/A" or configured default text and no raw tokens like {…} appear in the output And automated tests verify zero unreplaced tokens in 100% of successful sends/copies And the guidance log includes a placeholder_fallbacks list when defaults were applied
Multi-Channel Digest Delivery (Email & In-App)
"As a manager, I want to receive the digest via email and see it in-app so that I don’t miss critical insights wherever I am."
Description

Render the Coach Digest as a responsive email and an in-app dashboard module. Include compact charts, hotspot callouts, and deep links to detailed views. Ensure accessibility (alt text, high contrast), unsubscribe and frequency controls, and rate limiting to prevent spam. Track delivery success/failures and retries. Expected outcome: reliable, readable digests available both in the inbox and within FleetPulse, increasing visibility and engagement.

Acceptance Criteria
Responsive Email Digest Rendering
Given a manager is subscribed to Email digests and a digest is generated for their fleet, When the system composes the email, Then the HTML body size is <= 100 KB and images are referenced via HTTPS CDN with cache-busting query strings.
Given the email is viewed on common clients, When rendered at 320px, 768px, and 1024px widths, Then there is no horizontal scroll, body text is >= 14px, and tap targets are >= 44px height.
Given the digest content is assembled, When the email is generated, Then it includes compact charts, hotspot callouts, top savers, exception counts, and estimated fuel/CO2 impact for the period.
Given images may be blocked, When the email is opened, Then all non-text visuals include meaningful alt text and a plain-text fallback section preserves key information.
Given branding requirements, When the subject and preheader are set, Then the subject contains "Coach Digest", tenant name, and period (Daily/Weekly), and the preheader summarizes key exceptions.
Given cross-client testing, When validated in latest Gmail (web/Android), Outlook 365 (web), and Apple Mail (iOS), Then critical layout and content are intact with no broken sections or missing data.
In-App Dashboard Digest Module
Given a manager has dashboard access, When they open FleetPulse, Then a "Coach Digest" module is visible showing the most recent digest for the user’s selected frequency.
Given the module is rendered, When content loads, Then it displays compact charts, hotspot callouts, top savers, exception counts, and estimated fuel/CO2 impact.
Given interactive elements, When a user clicks a chart, callout, or list item, Then they navigate via deep links to the corresponding detailed view with context preserved.
Given responsive requirements, When viewed at 375px, 768px, and 1280px widths, Then there is no content overflow and tap targets are >= 44px.
Given performance goals, When loading over a 4G connection, Then initial render completes in <= 2.0 seconds p95 using cached API data and subsequent refreshes in <= 1.0 second p95.
Given missing data, When the digest has no content for the period, Then an explanatory empty state is shown with a link to configuration/setup.
Accessibility Compliance for Digests
Rule: All text and interactive elements in email and in-app module meet WCAG 2.1 AA contrast ratios (>= 4.5:1 for normal text, >= 3:1 for large text).
Rule: All charts and images include concise, meaningful alt text; in-app charts provide a "View data as table" toggle compatible with screen readers.
Rule: The in-app module is fully operable via keyboard (logical tab order, visible focus, no traps) and exposes appropriate ARIA roles/labels for interactive components.
Rule: Email includes a plain-text alternative and retains essential information when images are blocked.
Rule: Color is not the sole means of conveying information; icons/patterns/text are used to convey state and severity in charts and callouts.
Unsubscribe and Frequency Controls
Given a recipient opens a Coach Digest email, When they click the Unsubscribe link, Then they arrive at a confirmation page that immediately disables Email digests for that user without login and sends a confirmation notice.
Given a user opens Manage Digest Settings, When authenticated, Then they can select frequency (Daily/Weekly/Off) and channels (Email/In-App) and save changes.
Given settings are changed, When the next scheduling run occurs, Then the new preferences are honored and no email is sent if frequency is Off.
Given compliance requirements, When any digest email is sent, Then the footer includes Unsubscribe and Manage Settings links and organization contact information.
Given audit needs, When preferences are updated, Then an audit log records user, fields changed, old/new values, timestamp, and source (email link or in-app).
Rate Limiting and Deduplication
Rule: The system sends at most one Coach Digest email per user per frequency period (daily or weekly), inclusive of retries.
Rule: A per-recipient cap of 2 Coach Digest emails in any rolling 24-hour window is enforced; excess sends are dropped and logged.
Rule: Identical-content digests detected within a 12-hour window are deduplicated and not resent.
Rule: A circuit breaker halts all digest sends if provider-level failure rate exceeds 20% over 10 minutes; alerts are raised and sends resume only after operator clearance.
Rule: In-app digest notifications are limited to one per period and suppressed if the module has already been viewed that day.
Delivery Tracking and Retry Logic
Given an email digest is queued, When the provider responds, Then the system records status transitions with timestamps and provider message IDs for queued, sent, delivered, bounced, or complained events.
Given a transient failure (e.g., 5xx, timeout), When detected, Then the system retries up to 3 times with exponential backoff (5m, 30m, 2h) without violating rate limits.
Given a hard bounce or complaint, When recorded, Then the user’s email channel is marked Suspended, future email digests are suppressed, and an admin-visible reason is shown.
Given reporting needs, When viewing delivery metrics, Then success, failure, and retry counts are available for a selected period and exportable as CSV.
Given duplicate prevention, When retries occur, Then a canonical checksum correlates attempts and prevents duplicate deliveries.
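One way to implement the canonical checksum mentioned above, so retried sends of the same digest correlate; a sketch, with illustrative names:

    import hashlib
    import json

    def digest_checksum(digest_payload: dict) -> str:
        """Checksum over a canonical serialization: sorted keys and fixed
        separators yield byte-identical output for semantically equal digests."""
        canonical = json.dumps(digest_payload, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

Comparing checksums before send would also cover the 12-hour identical-content deduplication rule above.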
Deep Links to Detailed Views
Given a user clicks any chart, hotspot callout, or list item in the digest (email or in-app), When the link opens, Then the detailed view loads with tenant, date range, and relevant filters (vehicle/driver/hotspot) applied.
Given unauthenticated email clicks, When a link is opened, Then the user is redirected to login and returned to the intended page with context preserved; optional signed tokens expire in 7 days.
Given analytics needs, When links are generated, Then they include UTM parameters (utm_source=digest, utm_medium=email|inapp, utm_campaign=coach_digest).
Given reliability needs, When nightly link checks run, Then all deep links return HTTP 200 and render within 2.5 seconds p95; failures are logged with severity and surfaced to ops.
Personalization & Segmentation Controls
"As a regional lead, I want to tailor the digest to my team’s vehicles and goals so that the insights are relevant and actionable."
Description

Allow users to tailor digest content by selecting vehicle groups, regions, driver cohorts, and metrics of interest. Provide threshold tuning (e.g., idle minutes, exception severity), top-N list sizes, and the ability to exclude certain assets. Support saved presets at org, team, and user scope with audit tracking of changes. Expected outcome: digests that reflect each manager’s scope and goals, improving relevance and actionability.

Acceptance Criteria
Segmentation by Vehicle Groups, Regions, and Driver Cohorts
Given a manager selects one or more vehicle groups, regions, and driver cohorts in a preset When the scheduled Coach Digest is generated for that manager Then every section (idling hotspots, top savers, exceptions, fuel/CO2) includes only in-scope assets/drivers matching all selected segments And out-of-scope assets/drivers do not appear anywhere in the digest And KPIs, counts, and percentages are calculated only from the in-scope population And the active segment labels are shown in the digest header And segment membership is evaluated at send time; later changes do not retroactively alter an already sent digest
Threshold Tuning for Idling and Exception Severity
Given idling threshold is set between 0 and 120 minutes/day and exception severity threshold is set to one of [Info, Warning, Critical] When thresholds are saved to a preset Then items below the idling threshold are suppressed from idling hotspot sections And exceptions below the selected severity are excluded from exception counts And estimated fuel/CO2 impact uses the configured thresholds in its calculations And attempting to save invalid values (e.g., idling > 120, negative values, unknown severity) shows a validation error and prevents save And if no thresholds are configured, system defaults are applied and indicated in the UI
Metrics Selection and Ordering
Given a user selects a set of metrics (e.g., idle minutes, exception counts, fuel cost, CO2) and orders them in a preset When the digest is generated Then only the selected metrics appear, in the chosen order, across all relevant sections And unselected metrics are not shown And the selection persists for subsequent digests until changed And if a selected metric has no data for the period, the digest displays "No data" for that metric without failing generation
Top-N List Size Configuration
Given a user sets Top-N list size between 3 and 50 (default 10) in a preset When the digest generates top lists (e.g., idling hotspots, top savers) Then each list includes up to N distinct items sorted by the relevant metric descending And when ties occur at the Nth position, a deterministic secondary sort by asset ID ascending is applied And if fewer than N eligible items exist, only available items are shown with no placeholders And the configured N is captured in the audit trail upon save
Explicit Asset Exclusions Override Segments
Given a user excludes specific assets in a preset (up to 500 assets) even if those assets are part of selected groups/regions When the digest is generated Then excluded assets do not appear in any section or subtotal And exclusion takes precedence over segment inclusion And removing an asset from the exclusion list restores its eligibility in the next digest And attempting to exclude an asset not visible to the user’s scope is prevented with an authorization error
Preset Scopes, Permissions, and Inheritance
Given presets can be saved at Org, Team, and User scope When resolving the effective preset for a manager Then precedence is User > Team > Org; the highest available scope applies And org admins can create/update/delete Org presets; team managers can manage Team presets for their teams; users can manage only their own User presets And managers can switch among presets they have access to via a selector, with changes taking effect on the next digest And concurrent edits are protected by version tokens; conflicting saves are rejected with a retry prompt
Audit Trail for Preset Changes
Given auditing is enabled When a preset is created, updated, or deleted (including changes to segments, thresholds, metrics, Top-N, exclusions, scope, and schedule linkage) Then an immutable audit record is written with actor, action, scope, preset ID, timestamp (UTC ISO 8601), and before/after field-level diffs And audit records are retained for at least 400 days and are filterable by actor, date range, and scope And audit records can be exported to CSV by org admins And unauthorized change attempts are logged with a denied outcome and no state change
Engagement & Outcome Tracking
"As a fleet manager, I want to see which coaching actions were taken and their impact so that I can refine my approach and prove ROI."
Description

Capture digest open/click events (email and in-app) and optional logging of coaching actions taken. Correlate actions with subsequent changes in targeted metrics (e.g., idle time reduction) and summarize “since last digest” outcomes. Provide a lightweight dashboard and export for ROI reporting while respecting privacy and role-based access. Expected outcome: feedback loop that proves impact and guides continuous improvement of coaching content and focus.

Acceptance Criteria
Email Digest Engagement Tracking
Given a digest email with tracking parameters is sent to a manager When the recipient opens the email in a non-bot client Then a unique open event is recorded with fields: digest_instance_id, recipient_user_id, channel='email', event_type='open', occurred_at_utc, recipient_timezone
Given the same recipient opens the email multiple times within 24 hours When tracking fires Then only one unique open is counted and additional opens are stored as non-unique events
Given the recipient clicks any tracked link in the digest When the click redirect is executed Then a click event is recorded with fields: digest_instance_id, recipient_user_id, channel='email', event_type='click', link_id, occurred_at_utc, and added latency from redirect is <=500ms
Given an open request from a known prefetcher or bot user agent When the pixel is requested Then the event is flagged bot=true and excluded from unique open rates
Given the email provider experiences a transient outage When events cannot be delivered Then events are queued and retried for up to 24 hours with exponential backoff; unrecoverable failures are sent to a dead-letter queue with alerting
In-App Digest View and Interaction Tracking
Given a manager views the Daily or Weekly Coach Digest screen in-app When the screen is visible for at least 2 seconds Then a view event is recorded with fields: digest_instance_id, viewer_user_id, channel='in_app', event_type='view', occurred_at_utc, session_id
Given the manager taps a suggested coaching message or driver profile from the digest When the tap triggers navigation Then a click event is recorded with fields: digest_instance_id, viewer_user_id, event_type='click', target_entity_id|link_id, occurred_at_utc
Given the device is offline When events cannot be sent immediately Then events are persisted locally and synced within 5 minutes of reconnect, preserving original timestamps and order
Coaching Action Logging (Optional, Privacy-Aware)
Given a manager sends a coaching message from the digest or records a coaching action When the action is confirmed Then a coaching_action record is stored with fields: action_id, actor_user_id, target_scope (driver|team), target_id, action_type, template_id (nullable), digest_instance_id (nullable), occurred_at_utc, notes (nullable)
Given the workspace has coaching action logging disabled When a manager attempts to log an action Then the UI hides logging controls and no action data is stored
Given the target user has opted out of analytics tracking When an action would attribute to that user Then personally identifying fields are anonymized and the action is excluded from per-user reports
Given role-based access controls When a user without Coach or Manager role accesses action logs Then the action list and export endpoints return 403 Forbidden
Outcome Correlation and Since-Last-Digest Summary
Given a coaching action on a driver at time T and sufficient data (>=10 trips or >=5 engine-hours) in both pre (T-7d to T-1d) and post (T to T+7d) windows When the nightly correlation job runs Then absolute and percent changes are computed for target metrics (idle time, hard brakes, overspeed) and stored as effect_size records linked to action_id
Given insufficient data in either window When the job runs Then the correlation is marked status='insufficient_data' and excluded from aggregate impact
Given multiple actions for the same driver within overlapping windows When correlating Then the earliest action is attributed and overlapping duplicates are suppressed to prevent double-counting
Given a new digest is generated When rendering the "since last digest" section Then per-team aggregates display opens, clicks, actions, and net metric deltas with +/- signs and the exact time range used, or "insufficient data" when applicable
Role-Based Access and Privacy Controls
Given Admin, Manager, Coach, and Viewer roles When each user opens the engagement dashboard Then data scope is limited to their permitted teams; users outside scope are excluded, and names/PII are masked for aggregated views when sample size <5 entities
Given a user requests raw event details When they lack the Exporter or Admin role Then the API returns 403 and the UI hides export controls
Given the workspace setting "Include PII in exports" is off When a CSV is generated Then direct identifiers (email, phone, VIN) are removed or hashed
Given a data subject deletion/redaction request When the scheduled privacy job executes Then associated engagement and action records are purged or anonymized within 30 days
Engagement and ROI Dashboard
Given a date range and team filter When the dashboard loads Then it displays engagement KPIs (unique open rate, unique click rate, avg interactions per digest, actions logged) and outcome KPIs (idle hours delta, gallons saved, CO2 saved) with a last_updated timestamp
Given configured conversion factors (idle_to_gallons, gallons_to_CO2) When outcome KPIs are calculated Then gallons and CO2 are computed using the configured factors and displayed with units
Given no data in the selected period When the dashboard loads Then it shows zeros and a "No data for selected period" message without errors
Given 95th percentile load When the dashboard queries data Then the initial view responds within 2 seconds for cached ranges and within 5 seconds for uncached ranges
CSV Export for ROI Reporting
Given a user with Exporter or Admin role and a selected period When they request an export Then a CSV is generated within 60 seconds containing columns: period_start, period_end, team_id, manager_id, digests_sent, unique_opens, unique_clicks, actions_logged, idle_hours_delta, gallons_saved, co2_saved
Given the export is ready When downloaded Then all timestamps are ISO 8601 UTC, numbers use dot decimals, and headers match the specification
Given row-level privacy rules When a row would expose fewer than 5 distinct drivers Then the row is suppressed or aggregated to protect privacy
Given an export generation error When it occurs Then the user is notified with a retriable error and the failure is logged with a correlation_id

Skill Matrix Sync

Maps technician certifications, specialties, and tool access, then auto-assigns jobs to the best-fit tech. Reduces rework, shortens cycle times, and keeps bays flowing by matching each task with proven capability—no more guesswork staffing.

Requirements

Technician Skill Profile Model
"As a shop manager, I want a complete, up-to-date profile of each technician’s certifications, specialties, and tool access so that jobs can be matched to proven capabilities."
Description

Define a normalized technician profile that stores certifications (type, issuer, level, expiry), specialties (systems, OEMs, vehicle classes), job history, proficiency ratings, tool access, shift availability, location/site, language, and labor/union constraints. Provide CRUD UI, CSV bulk import, and secure APIs to ingest/update data from HR/LMS systems. Link each profile to FleetPulse user accounts and permission models. Enforce validation and auditing for changes, with real-time updates that propagate to assignment decisions. This enables accurate, scalable matching across sites and shifts and underpins all Skill Matrix Sync logic.

Acceptance Criteria
Normalized Technician Profile Data Model & Validation
Given an authorized user with permission "skillmatrix.manage" When they create a technician profile including technicianId, siteId, userAccountId, languages (ISO 639-1), shiftAvailability (time ranges + timezone), specialties (system/OEM/vehicle class), certifications (type, issuer, level, expiry), jobHistory, proficiencyRatings (0–5), toolAccess, laborUnionConstraints, and location Then the profile is saved with normalized structures, technicianId is unique per tenant, and a 201 response is returned
Given a certification entry with expiry in the past When the profile is saved Then the certification is stored as inactive and excluded from eligibility calculations
Given invalid field values (e.g., unknown specialty category, invalid ISO language code, negative proficiency, malformed dates) When attempting to save Then the request is rejected with HTTP 422 and a per-field error list
Given duplicate certifications with the same type+issuer+level When saving Then duplicates are merged into one record
Given overlapping shiftAvailability ranges When saving Then the request is rejected with HTTP 422 indicating the overlapping intervals
Technician Profile CRUD UI with Permissions
Given a user with permission "skillmatrix.manage" When they create, edit, or delete a technician profile via the UI Then the operation succeeds, a success toast is shown, and the change appears in the list within 2 seconds
Given a user lacking permission "skillmatrix.manage" When viewing the technician profile screen Then create/edit/delete controls are hidden or disabled and server-side authorization blocks writes with HTTP 403
Given a delete action on a linked profile When the user confirms deletion Then the profile is soft-deleted (status=inactive), the user link is removed, and associated audits are retained
Given a failed network request during save When the user retries Then no partial updates are persisted and the save either succeeds fully or fails with an error message
Given a profile edit When saved Then an audit entry is created capturing who, when, and what changed
CSV Bulk Import of Technician Profiles
Given a CSV matching the published template headers When uploading a file up to 10,000 rows Then processing completes within 2 minutes and an import summary {processed, created, updated, failed} is returned
Given dry-run=true on import When the file is processed Then no records are persisted and a full validation report is returned
Given duplicate rows by (technicianId, siteId) within a file When importing Then the last occurrence is applied and the duplicates are reported in the results
Given per-row validation failures When importing Then invalid rows are skipped, valid rows are processed, and the response lists each failed row with line number, field, and error message
Given re-import of the same dataset with the same idempotencyKey When importing again Then no duplicate records are created and the import summary matches the original
Secure APIs for HR/LMS Ingestion and Updates
Given a client authenticated via OAuth2 client_credentials with scope "skillmatrix.profile.write" When calling POST /api/skillmatrix/technicians with a valid payload and Idempotency-Key Then HTTP 201 is returned with the new resource id, and a retried request with the same Idempotency-Key does not create a duplicate
Given requests exceed 600 write calls per minute per client When additional requests arrive Then HTTP 429 is returned with a Retry-After header
Given a TLS connection attempt below version 1.2 When connecting to the API Then the connection is refused
Given a PATCH to /api/skillmatrix/technicians/{id} updating certifications When processed Then certifications are upserted according to type+issuer+level keys, and an audit record is written
Given a subscribed webhook endpoint When a technician profile is created or updated Then an event "skillmatrix.profile.updated" is delivered within 10 seconds including technicianId, version, and change summary
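A minimal sketch of the Idempotency-Key contract described above; the in-memory store and function names are illustrative, and a real service would persist keys with a TTL:

    import uuid

    _responses: dict[str, dict] = {}  # Idempotency-Key -> first response (illustrative)

    def create_technician(idempotency_key: str | None, payload: dict) -> dict:
        """Replay the stored result for a repeated key instead of creating a duplicate."""
        if idempotency_key and idempotency_key in _responses:
            return _responses[idempotency_key]
        response = {"status": 201, "id": str(uuid.uuid4()), "technician": payload}
        if idempotency_key:
            _responses[idempotency_key] = response
        return response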
Linking Technician Profiles to FleetPulse User Accounts
Given an existing technician profile and a FleetPulse user When linking via UI or API Then a one-to-one link is created, only users with permission "skillmatrix.manage" can edit the linked profile, and the technician role can view their own profile
Given a link attempt where the user is already linked to another profile When submitting Then the request is rejected with HTTP 409
Given a FleetPulse user is deactivated When the next sync runs Then the linked technician profile is marked inactive and becomes ineligible for new auto-assignments; in-progress assignments are not altered
Given an unlink request by an authorized admin When submitted Then the link is removed without deleting the profile and an audit entry is recorded
Real-Time Propagation of Profile Changes to Assignment Decisions
Given a queued job requiring certification X and specialty Y When a technician profile gains or loses certification X or specialty Y Then the job's candidate list updates within 10 seconds to reflect eligibility changes
Given a profile change that reduces eligibility for a technician on an in-progress job When the change is processed Then no automatic reassignment occurs for the in-progress job and a warning is logged
Given a bulk import that updates multiple profiles When the import completes Then a batched recalculation of eligibility is triggered and completes within 5 minutes for 10,000 updated profiles
Auditing and Change History for Technician Profiles
Given any create, update, link/unlink, import, or delete operation on a technician profile When the operation completes Then an immutable audit record is written capturing actor (userId or clientId), timestamp (UTC), source (UI/API/Import), before/after diff, and optional reason
Given an authorized user with permission "audit.read" When querying audit logs by technicianId and time range Then results are returned within 2 seconds for up to 10,000 records
Given an attempt to modify or delete audit records When submitted Then the operation is rejected and the attempt is logged
Given an export request for audit logs with filters When generated Then a CSV or JSON file is produced containing the filtered results and is available for download
Job Competency Mapping
"As a maintenance planner, I want each task to list the exact skills and tools required so that assignment is consistent and compliant."
Description

Create a mapping layer that links FleetPulse work order tasks/job codes to required competencies, certifications, tool sets, and minimum proficiency levels. Support mandatory vs optional requirements, co-sign/mentor rules, dependency sequencing, and OEM/vehicle-class templates. Provide a UI for authoring and versioning mappings, plus import from OEM procedures and maintenance schedules. Standardized, versioned mappings ensure consistent, compliant assignments and reduce rework across all maintenance and repair workflows.

Acceptance Criteria
Task-to-Competency Mapping Display and Validation
Given a work order task/job code with a saved mapping When a user opens the mapping detail or assignment view for that task Then the system displays required competencies, certifications, tool sets, and minimum proficiency levels with mandatory/optional tags And each requirement includes: requirementType, identifier, mandatory (true/false), minProficiency (0–5) And attempting to save a mapping missing any required field blocks the save and shows inline validation messages identifying the missing fields And the mapping can be retrieved via API/UI with a schema that includes all saved requirements and metadata (id, jobCode, version, effectiveDate)
Mandatory vs Optional Requirements Enforcement
Given a mapping where some requirements are marked mandatory and others optional When the system evaluates technician eligibility for auto-assignment to the task Then only technicians satisfying all mandatory requirements and minimum proficiency thresholds are eligible And technicians matching more optional requirements are ranked higher than those matching fewer optional requirements And if no technician satisfies all mandatory requirements, the system does not auto-assign and emits a compliance alert in the assignment view and event log
Mentor/Co-Sign Rule Definition and Triggering
Given a mapping that permits supervised assignment below minimum proficiency with a defined mentor/co-sign rule (eligible roles/levels, co-sign required at completion) When a technician below the minimum proficiency is selected with a qualified mentor assigned Then the work order requires mentor co-signature before closure and blocks closure until co-sign is provided And if no qualified mentor is available, the assignment attempt is blocked with an explanatory error And the audit log records mentee, mentor, rule applied, and timestamps for assignment and co-sign
Task Dependency Sequencing Persistence and Execution
Given a mapping that defines step/task dependencies (e.g., Step B depends on Step A) When a work order is created from the mapping Then tasks are ordered according to dependency rules And the system prevents starting a dependent task until all its prerequisites are marked complete And attempts to reorder tasks in violation of dependencies are rejected with an explanatory message and are not saved
OEM and Vehicle-Class Template Application and Override
Given an OEM/vehicle-class template exists for a job code When a user creates a new mapping for that job code and matching vehicle class Then the template pre-populates competencies, certifications, tools, proficiency levels, and dependency rules And tenant-level overrides can be saved without modifying the base template, with the mapping storing a reference to the template id and version And the UI displays both inherited values and any overrides, with a clear indicator of overridden fields
Mapping Authoring UI with Versioning and Rollback
Given an existing mapping at version v1 When a user edits and saves changes with a required change note Then a new immutable version v2 is created, v1 remains read-only, and both retain author, timestamp, and change note metadata And users can view a side-by-side diff of versions showing changes to competencies, tools, proficiency levels, and rules And users can perform a rollback, which creates a new version based on a prior version and sets/updates an effective date; assignment logic uses the version effective at evaluation time
OEM Procedure and Maintenance Schedule Import with Validation
Given a valid OEM procedure or maintenance schedule file/API response mapped to the system schema When an import is executed Then the system creates a draft mapping with parsed competencies, certifications, tools, proficiency levels, and dependencies And invalid records are rejected with line-level error details; valid records are imported without loss And import results display counts of created, updated, and rejected items with identifiers, and per-job-code import is atomic (all-or-none) And duplicate entries (same job code and template/version) are detected and either merged according to defined rules or flagged for user resolution before finalize
Best-Fit Auto-Assignment Engine
"As a dispatcher, I want jobs auto-assigned to the best-qualified available technician so that bays keep moving without manual guesswork."
Description

Implement a scoring engine that evaluates available technicians against job requirements and operational constraints (job priority, SLA/due date, shift, current workload, location/bay match, travel time for mobile jobs). Apply tie-breakers (utilization balance, seniority, last-service familiarity) and return the selected technician with an explanation and confidence score. Trigger on work order creation, schedule changes, or tool/bay events; support batch assignment and rapid recompute. Write assignments back to calendars and FleetPulse work orders. Target sub-300 ms per single assignment and graceful fallback when no qualified tech is available.
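
A minimal sketch (Python 3.10+) of the mandatory-requirement filter and tie-break stage only; the base suitability score and the operational constraints are elided, and the Technician/WorkOrder shapes and field names (e.g., projected_utilization) are illustrative assumptions, not the shipped API.

    from dataclasses import dataclass, field

    @dataclass
    class Technician:
        technician_id: str
        certifications: set[str]
        projected_utilization: float  # projected utilization for the shift/day, 0.0-1.0
        seniority_years: float
        last_serviced: set[str] = field(default_factory=set)  # asset IDs recently serviced

    @dataclass
    class WorkOrder:
        asset_id: str
        required_certifications: set[str]

    def pick_technician(order: WorkOrder, candidates: list[Technician]) -> Technician | None:
        # Hard filter: every mandatory requirement must be satisfied.
        eligible = [t for t in candidates if order.required_certifications <= t.certifications]
        if not eligible:
            return None  # caller moves the order to pending_unassigned
        # Tie-breaker order from the spec: lowest projected utilization, then
        # higher seniority, then last-service familiarity (simplified here to a
        # yes/no flag), then a stable deterministic order by technician_id.
        return min(
            eligible,
            key=lambda t: (
                t.projected_utilization,
                -t.seniority_years,
                0 if order.asset_id in t.last_serviced else 1,
                t.technician_id,
            ),
        )
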

Acceptance Criteria
Auto-Assign on Work Order Creation
Given a new work order with required certifications, specialties, tool/bay needs, priority, due date/SLA, location, and shift window And at least one available technician satisfies all mandatory requirements When the work order is created via API or UI Then the engine selects one technician, returns technician_id, confidence_score in [0.00,1.00], and an explanation listing matched constraints and any tie-breakers applied And writes the assignment to the technician’s calendar and the work order within 1 second of selection And p95 server-side decision latency for the single assignment is <= 300 ms (excluding network) And the selected technician does not exceed configured workload/utilization caps and meets location/bay constraints
Deterministic Tie-Breaking and Utilization Balancing
Given two or more candidate technicians have equal base suitability score for a work order When selecting among tied candidates Then tie-breakers are applied in this order: (1) utilization balance (prefer lowest projected utilization for the shift/day), (2) seniority (prefer higher), (3) last-service familiarity (prefer who most recently serviced the asset) And if still tied, a stable deterministic order by technician_id is applied And the explanation explicitly lists the applied tie-breaker(s) and final ordering decision And the chosen assignment does not violate configured utilization caps
Recompute on Schedule or Capacity Change
Given a schedule or capacity event (technician PTO/calendar block, shift change, work order start/end change) When the event is ingested Then impacted assignments are re-evaluated within 2 seconds and reassigned if a better or required change is identified And manual or locked assignments are not changed unless an explicit override flag is present And updated assignees are written back to calendars and work orders within 1 second of decision And an audit log records trigger type, before/after assignee, decision latency, and rationale
Tool/Bay Availability Event Handling
Given a required tool or bay becomes unavailable or available When the event is received Then all affected work orders are re-evaluated within 2 seconds And if the requirement cannot be met, the work order is moved to pending_unassigned with reason "tool/bay unavailable"; otherwise it is reassigned to a qualified technician with access And calendars and work orders reflect the change within 1 second And the explanation includes the tool/bay constraint that caused reassignment or deferral
Mobile Job Travel-Time Constraint
Given a mobile work order with a service location and technicians with known or last reported locations When auto-assigning the work order Then estimated travel time is included in scoring and the selected technician’s ETA meets the SLA/due window And the explanation includes estimated travel time and distance used in the decision And if no candidate can meet the SLA due to travel constraints, the engine defers assignment, marks reason "no SLA-feasible candidate", and returns within 300 ms p95
Graceful Fallback When No Qualified Technician
Given no available technician satisfies all mandatory requirements for a work order When attempting auto-assignment Then the engine does not assign; it sets status to pending_unassigned, records unmet constraints, and lists up to three nearest candidates with missing qualifications and earliest available times And an alert is posted to the dispatch queue per configuration And p95 server-side decision latency remains <= 300 ms And the work order includes a next-review timestamp based on escalation policy
Batch Assignment and Rapid Recompute
Given a batch of up to 500 work orders awaiting assignment When batch auto-assign is triggered Then the engine processes in parallel with throughput >= 10 assignments/second and per-item p95 decision latency <= 300 ms And returns a batch summary including counts for assigned, pending_unassigned, reassigned, and errors And transient failures affect <= 0.1% of items and are retried up to 3 times with exponential backoff And each item writes back to calendars and work orders and creates an audit entry with explanation and confidence_score
Certification Verification & Expiry Alerts
"As a compliance officer, I want alerts before certifications expire so that we avoid unsafe or noncompliant assignments."
Description

Store certification artifacts and issuer metadata, track issuance and expiry dates, and validate acceptable documentation formats. Generate automated reminders to technicians and managers before expirations and block or warn on assignment when mandatory certifications are expired or missing. Provide compliance dashboards and exportable reports, and integrate with LMS/issuer APIs where available to auto-refresh status. This ensures safety, warranty adherence, and regulatory compliance within Skill Matrix Sync.

Acceptance Criteria
Upload and Validate Certification Documents
Rule: Accept only PDF, JPG, JPEG, or PNG files; reject all other formats with a clear error message. Rule: Maximum file size is 20 MB; larger files are rejected with an error. Rule: Required metadata on upload: certification_type, issuer_name, issuance_date; expiry_date is required if the certification_type expires; credential_id is optional. Rule: issuance_date must be on or before today; expiry_date must be after issuance_date. Rule: On successful upload, store the file, calculate and persist a checksum, and link the artifact to the technician’s profile and certification record. Rule: Create an audit entry capturing created_by, created_at (UTC), source_ip, and file checksum.
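
A minimal validation sketch for these upload rules, assuming the metadata arrives already parsed; parameter names such as cert_type_expires are hypothetical.

    from datetime import date

    ALLOWED_EXTENSIONS = {".pdf", ".jpg", ".jpeg", ".png"}
    MAX_FILE_BYTES = 20 * 1024 * 1024  # 20 MB limit from the rules above

    def validate_upload(filename: str, size_bytes: int, cert_type_expires: bool,
                        issuance_date: date, expiry_date: date | None) -> list[str]:
        """Return a list of validation errors; an empty list means the upload is acceptable."""
        errors = []
        if not any(filename.lower().endswith(ext) for ext in ALLOWED_EXTENSIONS):
            errors.append("unsupported file format")
        if size_bytes > MAX_FILE_BYTES:
            errors.append("file exceeds 20 MB limit")
        if issuance_date > date.today():
            errors.append("issuance_date must be on or before today")
        if cert_type_expires and expiry_date is None:
            errors.append("expiry_date is required for expiring certification types")
        if expiry_date is not None and expiry_date <= issuance_date:
            errors.append("expiry_date must be after issuance_date")
        return errors
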
Expiry Tracking and Reminder Scheduling
Rule: A certification is "Expiring Soon" when expiry_date is within a configurable threshold (default 60 days) of the user’s local date. Rule: Schedule reminders at 60, 30, and 7 days before expiry at 09:00 in the recipient’s local timezone. Rule: Send reminders to the technician and their manager via email and in-app notifications; send SMS only if the recipient has opted in. Rule: Prevent duplicate reminders for the same window; reschedule when expiry_date changes. Rule: Mark each reminder with delivery status (sent, bounced, failed) and timestamp; failures are retried up to 3 times with exponential backoff. Rule: Cancel all pending reminders immediately when a renewed certification with a later expiry_date is approved.
Assignment Blocking on Missing or Expired Mandatory Certifications
Given a work order requires certification_type X When the system auto-assigns or a dispatcher manually assigns a technician Then only technicians with a valid (today < expiry_date) certification_type X are eligible for assignment. And if no eligible technician exists, the assignment is blocked and the job remains unassigned with reason "No valid X certification" logged. And if the rule for X is configured as "Warn" (not "Block"), the system allows assignment only after the dispatcher enters a justification (min 10 characters); the override is audit-logged and flagged on the work order. And blocked or overridden attempts are visible in the compliance report within 1 minute.
Reminder Content, Preferences, and Dismissal Behavior
Rule: Reminder emails include technician_name, certification_type, issuer_name, expiry_date (with timezone), and a deep link to Manage Certifications; templates pass accessibility checks (WCAG AA for email contrast). Rule: In-app banners appear on the dashboard starting 60 days before expiry; dismissing snoozes the banner for 24 hours per user per certification. Rule: Users may opt out of non-critical email reminders per certification_type; managers cannot disable critical compliance emails for their direct reports. Rule: Localization: reminders render in the recipient’s selected language; fallback to English if unavailable. Rule: All links include secure, single-use tokens expiring in 24 hours; expired links prompt re-authentication.
Compliance Dashboard and Exportable Reports
Rule: Dashboard shows counts and percentages by status (Valid, Expiring ≤60 days, Expired) and supports filters for location, team, technician, certification_type, and status (multi-select). Rule: Dashboard loads within 3 seconds for organizations up to 5,000 technicians and 50,000 certifications. Rule: CSV and PDF exports respect active filters and include: technician_id, technician_name, certification_type, issuer_name, credential_id, issuance_date, expiry_date, status, days_to_expiry, last_verified_at, location, team. Rule: Exports complete within 30 seconds for up to 10,000 rows and are available for download for 7 days with signed URLs. Rule: Access control: managers/admins can view organization-wide data; technicians see only their own certifications. Rule: All exports are logged with requester, timestamp, filter summary, and row count.
Issuer/LMS API Sync and Auto-Refresh
Given an issuer/LMS integration is configured with valid credentials When the nightly sync runs at 02:00 tenant local time or a user triggers "Refresh Status" Then the system fetches current status, issuance, and expiry for mapped certifications and updates records when changes are detected. And upon update, reminder schedules are recalculated and obsolete reminders are canceled. And on API failures, the system retries up to 3 times with exponential backoff; persistent failures generate an admin alert and appear in a sync error log. And all sync operations are audit-logged with timestamp, issuer endpoint, result, and records updated.
Audit Trail and Access Control for Certification Records
Rule: Role-based access control: admins/managers can view/edit all certifications; technicians can view/edit only their own; document download is restricted to authorized roles. Rule: Every create/update/delete on certifications and documents is audit-logged with actor, timestamp (UTC), fields changed (old/new), and source_ip. Rule: Deleting a document requires admin role; the certification record remains with status "document missing" and references the deleted artifact id. Rule: Unauthorized API/UI access attempts return HTTP 403 and are captured in security logs. Rule: Audit logs are immutable and retained for at least 2 years; queries are filterable by actor, technician, certification_type, and date range.
Tool and Bay Availability Sync
"As a service coordinator, I want assignments to reserve the needed tools and bay so that work can start immediately when the tech is ready."
Description

Integrate with FleetPulse inventory and bay scheduling to verify required tools and bay types during assignment. Automatically reserve tools and bays when a job is assigned; surface conflicts and propose alternative times or locations; release reservations on reassignment or cancellation. Support mobile tool checkout and real-time status updates so technicians arrive work-ready. This prevents idle time and dead-on-arrival assignments caused by missing equipment or occupied bays.

Acceptance Criteria
Auto-Reserve Tools and Bay on Assignment
Given a work order with a scheduled time window, required tool list, and bay type And all required resources are available When the dispatcher assigns the job to a technician Then the system reserves all required tools and one compatible bay for the job window And reservation IDs are written to the work order And resource calendars reflect the holds within 5 seconds And the assignment succeeds without manual steps
Conflict Detection and Alternative Scheduling
Given a work order whose required tools and/or bay are not all available at the requested time/location When the dispatcher attempts assignment Then the assignment is blocked with a clear conflict message naming each unavailable resource And the system proposes at least 3 alternative time slots and/or locations that satisfy all requirements within the next 5 business days And selecting an alternative rechecks availability and creates the reservations within 5 seconds And the work order records the chosen alternative and the conflict reason
Reservation Release on Reassignment or Cancellation
Given a work order with existing tool and bay reservations When the dispatcher reassigns the job to a different time, technician, or location Then the previous tool and bay reservations are released within 5 seconds And new matching reservations are created for the updated assignment atomically (no gap where both sets are held) And an audit log captures who changed the assignment, what was released, and what was reserved And if the job is cancelled, all reservations are released and the work order shows no active holds
Mobile Tool Checkout and Real-Time Status Updates
Given a technician with an active reservation opens the mobile app at the shop When they scan or NFC-tap the reserved tool and confirm checkout Then the tool status changes from Reserved to Checked Out for that work order within 5 seconds And the work order preflight shows Tool Ready And checkout is denied with an explanatory message if the tool is not reserved for that technician/time And returning the tool updates status to Available and clears the work order linkage within 5 seconds
Compatibility and Access Validation
Given a work order requiring a specific bay type (e.g., lift, EV-safe) and tools with access restrictions When the dispatcher assigns the job Then the system validates bay type compatibility with the vehicle and job And validates technician access to each required tool before reserving And blocks assignment with explicit reasons if any compatibility or access rule fails And no reservations are created when validation fails
Concurrency and Double-Booking Prevention
Given two users or automations attempt to assign jobs that require the same tool or bay for overlapping times When both assignments are submitted nearly simultaneously Then only one reservation succeeds for each contested resource And the losing transaction receives a conflict response within 2 seconds without partial holds And the system remains consistent with no duplicate or overlapping reservations for the same resource and time window
Notifications and Calendar Visibility
Given a successful assignment that creates reservations When reservations are created, changed, or released Then the assigned technician and dispatcher receive notifications within 10 seconds And shop calendars for tools and bays display the updated holds with job ID, technician, and time window And iCal/ICS feeds and APIs reflect the change on the next sync (<= 60 seconds)
Supervisor Override with Guardrails and Audit
"As a shop foreman, I want to override auto-assignments when needed with clear risk indicators so that I can handle exceptions without losing control."
Description

Enable supervisors to manually override or reassign auto-assignments with inline guardrails that flag risks (missing certification, below-threshold proficiency, tool/bay conflicts). Require justification notes for risky overrides, record a full audit trail (who, what, when, why), and support locking/unlocking of assignments with role-based permissions. Provide a candidate comparison view to aid decision-making. This preserves operational flexibility while maintaining accountability and safety standards.

Acceptance Criteria
Risk-Flagged Override Requires Justification
Given a supervisor with override permission initiates reassignment And the selected technician triggers at least one guardrail risk (missing certification and/or below-threshold proficiency and/or tool/bay conflict) When the supervisor attempts to confirm the override Then the system displays a risk badge for each detected risk And disables the Confirm action until a reason category is selected And a justification note of at least 20 characters is entered (max 1000; no whitespace-only) And an acknowledgment checkbox is required for red-severity risks And the selected reason, justification note, and detected risks are persisted with the override
Complete Audit Trail on Override/Reassignment
Given an override, reassignment, lock, or unlock action completes via UI or API When the action succeeds Then an immutable audit record is created within 2 seconds And it contains: jobId, vehicleId, previousAssigneeId, newAssigneeId, actionType (override|reassign|lock|unlock), performedByUserId, performedByRole, performedAt (UTC ISO 8601), source (UI|API), clientIp, correlationId, detectedRisks[], justificationReason, justificationNote, beforeScore, afterScore, previousLockStatus, newLockStatus And the record is viewable to users with Audit.view permission in the Audit log And failed or canceled actions do not create a success audit record but do log an error event with status=failed
Role-Based Override and Lock Controls
Given role permissions are configured (Supervisor, Technician, Admin) When a user without override permission attempts to override, lock, or unlock an assignment Then the UI actions are disabled and API returns 403 Forbidden And users with Supervisor or Admin roles can perform override, lock, and unlock And locked assignments cannot be changed until unlocked And all lock/unlock actions require a reason and are written to the audit log And lock state is visibly indicated on job details and job cards
Candidate Comparison View for Decision Support
Given a supervisor opens the override flow for a job When they open Candidate Comparison Then the view loads within 1.5 seconds for up to 100 candidates And shows at least the top 5 ranked candidates including the current assignee And for each candidate displays: required certs met/missing, proficiency vs threshold, tool access, bay compatibility, availability window, current workload (# open jobs), historical success rate for similar jobs, and distance/ETA if available And risk conditions per candidate are highlighted with color-coded badges And the list supports sorting by match score, proficiency, or workload and filtering by certification/tool And closing the view returns to the override modal without losing selections
Guardrails for Tool and Bay Conflicts
Given the selected technician lacks required tooling or the target bay is incompatible/occupied for the job window When the supervisor attempts to confirm the override Then a red conflict banner identifies the specific missing tool or bay constraint And the system proposes the earliest compatible time window or required tool list And an "Acknowledge risk" checkbox plus justification note are required to proceed And the conflict details (type, affected resources, time window) are recorded in the audit entry
Searchable and Exportable Audit Log
Given an authorized user opens the Audit log When they search by date range, jobId, vehicleId, performedBy, actionType, or risk type Then results return within 2 seconds for up to 10,000 records And results are paginated at 50 per page with a total count And Export CSV downloads within 5 seconds and includes column headers, applied filters, UTC timestamps, and full justification notes (properly quoted)
Coverage Insights and Training Recommendations
"As an operations manager, I want visibility into skill gaps against our pipeline so that I can schedule training or staffing before work is delayed."
Description

Deliver dashboards that visualize skill coverage against scheduled work and common OBD-triggered repairs, with heatmaps by site and shift. Highlight gaps, predict upcoming risk windows, and run what-if scenarios for vacations or tool downtime. Recommend training or hiring to close gaps, and export learning plans to an LMS. Use rework and first-time-fix data from FleetPulse to prioritize high-impact skill development, enabling proactive workforce planning that reduces downtime.

Acceptance Criteria
Dashboard: Skill Coverage vs Scheduled Work & OBD Repairs
Given I have access to the FleetPulse Skill Matrix Sync coverage dashboard When I select a date range (up to 90 days) and filter by site and shift Then the dashboard displays, per skill, required hours from scheduled work and forecast OBD-triggered repairs and available certified hours including tool-access constraints And shows a coverage ratio = available_hours / required_hours with thresholds: green >= 1.0, amber 0.90–0.99, red < 0.90 And displays counts of impacted jobs and vehicles per skill for the selected period And required_hours includes buffer from historical average variance selectable at 0%, 5%, or 10% And data freshness is <= 15 minutes for scheduled work and <= 60 minutes for OBD forecasts And the user can download a CSV containing: date, site, shift, skill, required_hours, available_hours, coverage_ratio, impacted_jobs, impacted_vehicles
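
A small sketch of the coverage ratio and color banding defined above; the buffer argument (0, 0.05, or 0.10) mirrors the selectable variance buffer, and treating zero required hours as fully covered is an assumption.

    def coverage_band(required_hours: float, available_hours: float,
                      buffer_pct: float = 0.0) -> tuple[float, str]:
        """Thresholds from the spec: green >= 1.0, amber 0.90-0.99, red < 0.90."""
        required = required_hours * (1 + buffer_pct)  # buffer selectable at 0%, 5%, or 10%
        if required == 0:
            return float("inf"), "green"  # no demand: treated as covered (assumption)
        ratio = available_hours / required
        band = "green" if ratio >= 1.0 else "amber" if ratio >= 0.90 else "red"
        return round(ratio, 2), band
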
Heatmaps by Site and Shift
Given I open the Coverage Heatmap tab When I apply filters for date range, sites (multi-select), shifts, and skill categories Then a heatmap renders showing coverage_ratio per skill × site for each selected shift with a consistent legend (green >= 1.0, amber 0.90–0.99, red < 0.90) And tooltips reveal site, shift, skill, required_hours, available_hours, coverage_ratio to 2 decimals And I can toggle normalization between absolute ratio and z-score within site And the heatmap loads in <= 3 seconds for up to 10 sites, 3 shifts, 100 skills And I can export the heatmap as PNG and the underlying data as CSV And color palette meets WCAG 2.1 AA contrast for non-text elements
Gap Detection and Risk Window Prediction
Given coverage and forecast data are available for the next 14 days When I open the Gaps & Risk tab Then the system lists gaps where coverage_ratio < 1.0 per day and shift, sorted by severity (lowest ratio first) And it predicts risk windows (date, shift, site, skill) with an associated probability of service delay > 24 hours And model performance on a 6-month rolling backtest is displayed with Precision@Top-N >= 0.70 and Recall@Top-N >= 0.60 for N equal to average daily gap count And each risk window includes top 3 drivers (e.g., vacation overlap, tool conflict, OBD spike) with contribution percentages And users can subscribe to daily email alerts that include top 10 upcoming risk windows for their sites
What-If: Vacations and Tool Downtime
Given I create a what-if scenario naming unavailable technicians and/or tools with start and end timestamps and affected sites When I run the simulation Then coverage ratios, gaps, and risk windows recompute for the selected period and are compared to baseline with deltas shown per skill and site And the simulation completes in <= 10 seconds for up to 150 technicians, 300 scheduled jobs, 50 tools over a 14-day window And I can save the scenario, add a description, and share read-only links with other users in my org And I can revert to baseline with a single action and confirm the reset And saved scenarios are versioned and auditable with creator, created_at, and last_run_at
Training and Hiring Recommendations
Given gaps and rework/first-time-fix data are available for the last 180 days When I open the Recommendations tab and set a target date to close gaps Then the system generates a ranked list of skills with suggested training modules and/or hiring recommendations to achieve coverage_ratio >= 1.0 by the target date And each recommendation shows estimated impact on downtime (hours avoided), increase in first-time-fix rate, required training hours, and time-to-competency And ROI is calculated per recommendation using avoided downtime value minus training/hiring cost, with inputs editable and defaults documented And at least the top 10 high-impact recommendations are presented, sortable by ROI, downtime avoided, or FTFR lift And actions allow creating learning plan items per technician or open requisitions per site/shift/skill
Export Learning Plans to LMS
Given I have selected training plan items (technician, skill, module, due date) When I click Export to LMS Then the system supports CSV download and HTTPS webhook to a configured LMS endpoint using OAuth2 And CSV includes headers: employee_id, employee_name, site, skill_id, skill_name, module_id, module_name, due_date, hours, priority And webhook requests include an idempotency_key, and retries occur up to 3 times with exponential backoff on 5xx And success and failure counts are shown, with per-record error messages for any failures And an audit log entry is created with timestamp, actor, destination, record_count, success_count, failure_count, and a downloadable copy of the payload
Prioritization Using Rework and First-Time-Fix
Given historical work orders with rework flags and first-time-fix outcomes over the last 180 days When the system computes skill priorities Then it applies a weighted score per skill: score = 0.6 * normalized_rework_rate + 0.4 * (1 - normalized_FTFR) with a 30-day half-life time decay And prioritized skills are used to order gaps and recommendations by impact And users can toggle Weighted vs Unweighted views; Weighted is default and clearly labeled And backtest over the last 6 months shows a >= 10% increase in correlation between priority score and downtime events versus the unweighted baseline And the date range used, decay parameters, and weights are viewable and exportable for audit
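
A sketch of the weighted score and half-life decay stated above; how the decay weights are folded into the normalized per-skill rates (per event, then aggregated) is an implementation choice not pinned down by the spec.

    from datetime import date

    HALF_LIFE_DAYS = 30.0

    def decay_weight(event_date: date, as_of: date) -> float:
        """Exponential time decay with the 30-day half-life from the spec."""
        age_days = (as_of - event_date).days
        return 0.5 ** (age_days / HALF_LIFE_DAYS)

    def skill_priority(normalized_rework_rate: float, normalized_ftfr: float) -> float:
        """score = 0.6 * normalized_rework_rate + 0.4 * (1 - normalized_FTFR); inputs in [0, 1]."""
        return 0.6 * normalized_rework_rate + 0.4 * (1.0 - normalized_ftfr)
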

Parts Staging

Auto-builds bay-by-bay picklists from upcoming work, consolidating shared parts across vehicles and flagging shortages early. Creates ready-to-grab staging bins so vehicles aren’t stuck on lifts waiting—cutting dwell time and missed promises.

Requirements

Bay-aware Work Queue Sync
"As a service manager, I want the bay schedule to sync automatically with upcoming work so that parts staging reflects the latest plan and bins are ready before vehicles arrive."
Description

Continuously ingest upcoming inspections, maintenance tasks, and repair orders from FleetPulse’s scheduler and assign them to specific bays and start times. Maintain a live, bay-level queue that reflects reschedules, cancellations, and priority changes. Provide a configurable staging horizon (e.g., next 24–72 hours) to determine which jobs feed the staging pipeline. Expose a normalized job payload (vehicle, VIN, WO ID, tasks, parts mappings, bay, ETA) that downstream picklist and forecasting services consume. Ensures staging operates on accurate, timely work demand and aligns with shop capacity.

Acceptance Criteria
Real-time Scheduler Ingest and Bay Assignment
Given a new or updated job appears in the FleetPulse scheduler within the staging horizon When the sync service receives the change via webhook or poll Then the job is ingested and persisted with assigned bay_id and scheduled_start within ≤15 seconds And if the upstream job specifies a bay that is available, the same bay_id is preserved And if the specified bay is unavailable, the job is tagged bay_conflict and withheld from downstream publication And if required upstream fields are missing or invalid, the job is rejected from the queue and the error is logged with a correlation ID
Live Queue Reflects Reschedules, Cancellations, and Priority Changes
Given a job exists in the live bay-level queue When its start time, bay, or priority changes upstream Then the queue reflects the latest change within ≤15 seconds and maintains a stable job_id with an incremented source_version And no duplicate entries exist for the job in the queue When the job is canceled upstream Then the job is removed from the queue and a job.canceled event is emitted within ≤15 seconds When multiple updates arrive out of order Then last-write-wins is applied based on source_version (or event time) and the queue state is consistent
Configurable Staging Horizon Filters Jobs
Given the staging horizon is configurable between 24 and 72 hours with a default of 48 hours When the horizon is set to H hours Then only jobs with scheduled_start ≥ now and < now + H are included in the staging queue and downstream emissions And jobs outside the horizon are excluded from the queue and emissions When the horizon value is changed Then the filter takes effect within ≤60 seconds without duplicating already-emitted jobs And boundary conditions at exactly now + H are excluded, and at now are included
Normalized Job Payload Contract
Given a job is ready for downstream consumption When the payload is produced Then it includes required fields: job_id, vehicle_id, vin, wo_id, tasks[], parts_mappings[], bay_id, scheduled_start, estimated_duration, eta_to_bay, priority, status, source_version, last_updated, payload_version And timestamps are ISO 8601 UTC (Z) strings; identifiers are non-empty strings; quantities are positive numbers And status is one of [scheduled, rescheduled, canceled, bay_conflict, ready_for_staging] And parts_mappings[] items contain part_number, quantity_required, and task_id And payload_version is present and set to a supported version And any payload missing required fields is not emitted and is logged with validation errors
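
An illustrative instance of the normalized payload; all values are invented, and the tasks[] item shape and the unit of estimated_duration are assumptions, since the contract above does not pin them down.

    # Illustrative instance of the normalized job payload (values are made up).
    job_payload = {
        "payload_version": "1.0",
        "job_id": "job-7f3a",
        "vehicle_id": "veh-0042",
        "vin": "1FTBW2CM5HKA12345",
        "wo_id": "WO-2025-00187",
        "tasks": [{"task_id": "task-1", "code": "BRAKE-INSPECT"}],  # item shape assumed
        "parts_mappings": [
            {"part_number": "BP-2210", "quantity_required": 2, "task_id": "task-1"}
        ],
        "bay_id": "bay-3",
        "scheduled_start": "2025-06-02T13:00:00Z",  # ISO 8601 UTC
        "estimated_duration": 90,                   # minutes (unit is an assumption)
        "eta_to_bay": "2025-06-02T12:45:00Z",
        "priority": "normal",
        "status": "ready_for_staging",
        "source_version": 4,
        "last_updated": "2025-06-02T11:58:31Z",
    }
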
Capacity-Aware Bay Scheduling and Conflict Resolution
Given existing assignments and bay operating hours When assigning a new job without an explicit bay Then the system selects an available bay and start time that do not overlap existing jobs in that bay, allowing a 5-minute buffer before/after And if no bay/time is available within the horizon, the job is marked bay_conflict within ≤15 seconds When a job’s priority is raised to rush Then the queue is re-ordered to place it at the earliest feasible slot and displaced jobs are re-slotted without creating overlaps in any bay When a bay is taken offline Then impacted jobs within the horizon are reallocated to other bays or marked bay_conflict within ≤60 seconds
Resilience, Idempotency, and Recovery
Given duplicate upstream events for the same wo_id and source_version When they are processed Then only one queue record and one downstream emission exist (idempotent behavior) Given the sync service restarts or experiences an outage of up to 15 minutes When it recovers Then it rebuilds the queue for the full horizon and reconciles missed changes within ≤2 minutes without emitting duplicate downstream events Given a downstream delivery failure When retries are attempted Then exponential backoff is applied for up to 5 attempts, after which the message is moved to a dead-letter queue and an alert is generated
Downstream Access, Performance, and Versioning
Given downstream services need the queue When requesting GET /work-queue?horizon=H&page=1&page_size=100 with a valid token Then the API responds 200 with the normalized payload and P95 latency ≤300 ms When subscribing to the work-queue.jobs topic Then change events are delivered within ≤5 seconds of upstream change and contain payload_version When the client exceeds 600 requests per minute Then the API returns 429 with rate-limit headers When a breaking payload change is introduced Then a new payload_version and parallel endpoint/topic are provided and the previous version continues to function until deprecation
Auto Picklist Generation per Bay
"As a parts manager, I want auto-built picklists per bay that consolidate shared parts so that I can pull once efficiently and have everything ready when each job starts."
Description

Generate bay-specific parts picklists from the live work queue by expanding each scheduled task into required parts and quantities using parts-task mappings and kits. Consolidate and de-duplicate shared parts across vehicles within each bay and staging window, optimize sequence by job start time, and clearly mark required-by timestamps. Output printable and digital picklists, with support for kit expansion, alternates, and notes. Integrate with FleetPulse inventory (or connected third-party systems) to display on-hand, reserved, and available-to-stage quantities for each line.

Acceptance Criteria
Auto Picklist Generation for a Single Bay and Staging Window
Given a live work queue with tasks assigned to Bay X that start within the selected staging window and parts-task mappings/kits are configured When the user triggers Auto Picklist Generation for Bay X for that window Then the system generates a picklist labeled with Bay X and the selected window And expands each scheduled task into required parts and quantities via mappings and kit definitions And includes for each part line: part number, description, aggregated required quantity, unit of measure, vehicle ID(s), job ID(s), and required-by timestamp(s)
Consolidation and De-duplication of Shared Parts Within Bay
Given two or more tasks in the same bay and staging window require the same part number When the picklist is generated Then only one line exists for that part number on the picklist And the line quantity equals the sum of quantities required across those tasks And the line references all contributing jobs and vehicles And no duplicate part lines for the same part number remain on the picklist
Sequence by Job Start and Required-By Timestamps
Given jobs in the bay have distinct scheduled start times within the staging window When the picklist is generated Then each part line shows a required-by timestamp equal to the earliest contributing job's scheduled start time And the picklist is sorted ascending by required-by timestamp And when two lines share the same timestamp, they are secondarily sorted by part number ascending
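
A compact sketch covering the consolidation and sequencing criteria above: one line per part number, quantities summed, required-by set to the earliest contributing job start, sorted by required-by then part number. The input and output shapes are illustrative assumptions.

    def consolidate_lines(task_lines: list[dict]) -> list[dict]:
        """task_lines items carry: part_number, quantity, job_id, vehicle_id, required_by."""
        merged: dict[str, dict] = {}
        for line in task_lines:
            entry = merged.setdefault(line["part_number"], {
                "part_number": line["part_number"],
                "quantity": 0,
                "jobs": set(),
                "vehicles": set(),
                "required_by": line["required_by"],
            })
            entry["quantity"] += line["quantity"]
            entry["jobs"].add(line["job_id"])
            entry["vehicles"].add(line["vehicle_id"])
            # Required-by is the earliest contributing job's scheduled start.
            entry["required_by"] = min(entry["required_by"], line["required_by"])
        # Ascending by required-by timestamp, then part number, per the sequencing criterion.
        return sorted(merged.values(), key=lambda e: (e["required_by"], e["part_number"]))
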
Inventory Availability Display and Calculation
Given inventory data provides on-hand and reserved quantities for each part from FleetPulse or a connected third-party system When the picklist is generated Then each part line displays on-hand, reserved, and available-to-stage (ATS = on-hand - reserved) And ATS is computed correctly for each part And count-based parts display integer quantities while weight/volume parts display two-decimal precision
Kit Expansion, Alternates, and Notes
Given a scheduled task references a kit When the picklist is generated Then the kit is expanded into its component part lines with quantities multiplied by the task quantity And if a primary part is unavailable (ATS < required), configured alternates are listed beneath the primary with their identifiers and ATS values And part-level and task-level notes are displayed on the relevant line(s)
Shortage Detection and Early Flagging
Given any part line where ATS < required quantity When the picklist is generated Then the line is flagged as Shortage and shows the shortage quantity (required - ATS) And the picklist includes an aggregated shortages section listing all such parts And if alternates exist with sufficient ATS, the alternate with the highest ATS is recommended on the shortage line
Printable and Digital Picklist Outputs
Given the picklist is generated When the user selects Print Then a paginated printer-friendly PDF is produced with bay/window header, sorted lines, and footers showing page X of Y and the generation timestamp And when the user views digitally, the picklist is available in-app with search, a Shortages filter, and export to CSV And both outputs contain identical line items, quantities, and required-by timestamps
Consolidated Demand & Shortage Forecasting
"As a parts buyer, I want to see upcoming shortages with lead-time-aware risk so that I can order or transfer parts early and prevent bay delays."
Description

Aggregate parts demand across all bays within the staging horizon and compare it to on-hand and already-reserved inventory to predict shortages before work begins. Calculate risk levels based on lead times, supplier SLAs, and job start times; recommend purchase orders or transfers; and surface viable alternates or superseded SKUs. Provide a dashboard highlighting critical shortages, ETAs, and impacted work orders, enabling early intervention to avoid vehicles waiting on lifts.

Acceptance Criteria
Aggregated Demand Within Staging Horizon
Given a configured staging horizon (H days) and upcoming work orders (WOs) with required parts and quantities exist across bays When the consolidation job runs manually or on its schedule Then the system aggregates demand per SKU across all WOs whose planned start is within H, excluding canceled or completed WOs And produces for each SKU: total_required, on_hand, reserved, available_to_promise (ATP = on_hand - reserved), projected_shortage = max(0, total_required - ATP) And orders allocations FIFO by WO planned start time, with bay number as tiebreaker And completes the calculation in ≤ 5 seconds for up to 5,000 part lines spanning up to 100 vehicles
Shortage Forecast Timing and WO Impact Allocation
Given a per-SKU FIFO allocation order and computed ATP When allocating quantities sequentially across WOs by planned start time Then the earliest WO where ATP is insufficient is marked as the first impacted WO and all subsequent unfilled WOs are listed as impacted And each impacted WO shows WO ID, bay, vehicle, required_qty, allocated_qty, and planned start time And shortages are flagged only for WOs with planned start ≥ now and within the staging horizon And the system updates impacted WOs correctly when reservations change
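
A sketch of the FIFO allocation described above, assuming ATP has already been computed as on_hand - reserved; the demand record shape is illustrative.

    def allocate_fifo(atp: int, demands: list[dict]) -> tuple[list[dict], list[str]]:
        """Allocate ATP to work orders FIFO by planned start, bay number as tiebreaker.

        demands items carry: wo_id, planned_start, bay, required_qty.
        Returns per-WO allocations plus the IDs of impacted (unfilled) WOs.
        """
        remaining = atp
        allocations, impacted = [], []
        for wo in sorted(demands, key=lambda d: (d["planned_start"], d["bay"])):
            allocated = min(remaining, wo["required_qty"])
            remaining -= allocated
            allocations.append({**wo, "allocated_qty": allocated})
            if allocated < wo["required_qty"]:
                impacted.append(wo["wo_id"])  # first entry is the first impacted WO
        return allocations, impacted
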
Risk Level Calculation Using Lead Times and SLAs
Given supplier lead_time (business days), supplier SLA on-time percent, and time_to_start for the earliest impacted WO per SKU When computing risk_level Then apply: High if (lead_time + intake_buffer_days) > time_to_start OR SLA_on_time < 90%; Medium if (lead_time + intake_buffer_days) is within 0–2 business days of time_to_start OR SLA_on_time between 90% and 95%; Low otherwise And intake_buffer_days is configurable (default 1 business day) And risk_level is shown per SKU and per impacted WO and is deterministic for identical inputs And risk_level recalculates immediately upon changes to lead time, SLA, or WO schedule
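
The risk rules restated as a sketch (durations in business days); treating exactly 95% on-time as Medium is an assumption, since "between 90% and 95%" leaves the upper boundary open.

    def risk_level(lead_time_bd: int, sla_on_time_pct: float,
                   time_to_start_bd: int, intake_buffer_bd: int = 1) -> str:
        """Deterministic High/Medium/Low classification per the spec."""
        effective_lead = lead_time_bd + intake_buffer_bd
        if effective_lead > time_to_start_bd or sla_on_time_pct < 90:
            return "High"
        # At this point effective_lead <= time_to_start and SLA >= 90%.
        if time_to_start_bd - effective_lead <= 2 or sla_on_time_pct <= 95:
            return "Medium"
        return "Low"
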
Purchase Order and Transfer Recommendations
Given a projected_shortage S for a SKU and known MOQ and pack_size When generating recommendations Then quantity_to_buy = ceil(max(0, S - qty_on_open_POs_arriving_before_required_by) / pack_size) * pack_size and rounded up to at least MOQ And required_by_date = earliest_impacted_start - intake_buffer_days (business days) And if an internal location has surplus arriving by required_by_date covering ≥ 90% of S, propose a transfer instead of a purchase And recommendations include: SKU, description, quantity, source (supplier or location), required_by_date, expected_ETA, unit_cost, extended_cost, and linked impacted WOs And no recommendation is created if existing open POs/transfers fulfill ≥ 95% of S by required_by_date
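
The order-quantity rule as a sketch; returning 0 when open POs already cover the shortage (rather than forcing the MOQ) is an assumption consistent with the no-recommendation rule above.

    import math

    def quantity_to_buy(shortage: int, incoming_before_required_by: int,
                        pack_size: int, moq: int) -> int:
        """Net the shortage against open POs arriving in time, round up to
        pack size, then enforce the minimum order quantity."""
        net = max(0, shortage - incoming_before_required_by)
        if net == 0:
            return 0  # already covered; no order needed (assumption)
        packs = math.ceil(net / pack_size) * pack_size
        return max(packs, moq)
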
Alternate and Superseded SKU Suggestions
Given a shortage on SKU A and an alternates/supersession mapping catalog When generating alternates Then suggest up to 3 alternates ranked by earliest availability date, then lowest price And only show alternates that are compatible for the vehicle and not in categories flagged as no-alternate And each alternate displays supplier, expected ETA, unit_cost, and adjusted risk_level And upon user selection of an alternate, the forecast updates within 60 seconds to reflect reduced projected_shortage
Shortages Dashboard: Visibility and Performance
Given a computed shortage forecast When the user opens the Parts Staging > Shortages dashboard Then list all SKUs with projected_shortage > 0 within the staging horizon with columns: SKU, description, total_required, ATP, projected_shortage, risk_level, earliest_impacted_start, ETA, impacted_WO_count And sort by risk_level (High, then Medium, then Low) and then earliest_impacted_start ascending And provide filters for date range, bay, vehicle, supplier, and risk_level that persist per user And row details reveal impacted WOs and quick actions: Create PO, Create Transfer, View Alternates And initial load time is ≤ 2 seconds for up to 1,000 shortage rows
Real-time Refresh, Notifications, and Audit Trail
Given changes to WO schedules, inventory on-hand/receipts, reservations, or PO/transfer statuses When such a change occurs Then the forecast and dashboard reflect updates within 60 seconds And if a SKU’s risk_level increases to High or a new shortage emerges within the horizon, send in-app and email notifications to Parts Manager and Shop Lead including SKU, magnitude, and earliest_impacted_start And suppress duplicate notifications for the same SKU and reason within a 10-minute window And log each forecast run and user action (accepting recommendation, selecting alternate) with timestamp, actor, inputs, and output diffs retrievable via an audit view
Scan-to-Stage with Bin Labels
"As a technician, I want clearly labeled bins with scannable line items so that I can grab the right parts fast and confirm picks without leaving the bay."
Description

Create ready-to-grab staging bins per bay and job, generating label sets that include bay, vehicle, WO, and QR/barcodes for each line item. Support mobile scan workflows to confirm picks, handle partial fills, and record picker, time, and location. Provide printable bin summaries and integrate with common label printers. Persist staged bin status and location in FleetPulse so technicians can quickly retrieve the correct bin on job start.

Acceptance Criteria
Auto-build bins and labels from scheduled WOs
Given there are approved Work Orders scheduled within the next 48 hours and assigned to bays When the user initiates "Build Staging Bins" Then the system creates one bin per unique Bay + Work Order and assigns a unique Bin ID And the system generates a label set consisting of one Bin label and one label per WO line item requiring parts And each label includes: Bay number, Vehicle identifier (Unit # or VIN last 6), Work Order number, and scannable QR + Code128 barcode And the Bin label barcode encodes the Bin ID; each item label barcode encodes Part SKU, WO Line ID, and Required Qty And the label set is available to print within 5 seconds for batches up to 50 WOs And the system records the label generation timestamp and the user who initiated it
Mobile scan-to-stage confirms picks
Given a picker scans a Bin label to open a staging session on mobile When the picker scans an item barcode matching a required WO line Then the system increments Picked Qty up to the Required Qty and provides visual and audible confirmation And if the scanned part does not match any required line for the open bin, the system blocks the pick and shows an error And if the scan would exceed the Required Qty, the system blocks the excess and prompts to confirm a potential substitute flow (disabled by default) And the system records picker ID, device ID, timestamp, and stock location for each successful scan And the session can be paused and resumed without losing counts
Partial fills and shortage recording
Given a WO line requires a quantity greater than current on-hand When the picker scans all available quantity for that line Then the system marks the line as Partially Staged and records Short Qty = Required - Picked And the bin summary and WO show a Shortage badge for the affected line And the remaining Required Qty stays open for future staging runs And an activity entry is logged with picker ID, time, and location noting the partial fill
Printable bin summary and label printer integration
Given a staged bin exists When the user selects "Print Bin Summary" Then the system generates a printable summary containing Bin ID, Bay, Vehicle, WO, and a table of Part SKU, Description, Required, Picked, Short And the summary is available as PDF and ZPL output And label print jobs (bin and item labels) can be sent to configured printers supporting Zebra ZPL and DYMO LabelWriter And the system returns a print job status (Queued, Printing, Succeeded, Failed) and captures any printer error message And when a user requests a reprint for a bin or item label, the barcode data is identical to the original and the reprint is logged
Persisted bin status and location for technician retrieval
Given a bin is staged for a WO assigned to a bay When the technician scans the Bin QR at job start Then the app opens the matching WO context and displays the correct bin contents And the bin status transitions from Staged to In Use and the location is updated to the active bay And scanning a non-matching bin shows a mismatch warning and does not change the WO state And the last-known bin location and status are visible on the WO and bay dashboard
Audit trail visibility for staged picks
Given picks have been recorded for one or more WO lines When a supervisor opens the Bin History or WO Parts tab Then the system displays an immutable audit trail per line item including picker ID, timestamp, quantity picked, stock location, device ID, and action (pick/reprint/void) And the audit trail can be exported to CSV and filtered by user, date range, and action type And any voided pick requires a reason code and is reflected in adjusted Picked Qty and audit trail
Real-time Inventory Reservation & Substitutions
"As a parts clerk, I want staged parts to be reserved with support for approved substitutions so that inventory stays accurate and jobs aren’t blocked by double-picks."
Description

Reserve inventory when items are picked or marked staged, decrement available-to-stage in real time, and release reservations on cancellation or reschedule. Support substitutions and supersessions with approval rules and traceability. Track lots/serials when applicable, manage cores/returns, and allow return-to-stock for unused parts with condition logging to keep inventory accurate and prevent double-allocation.

Acceptance Criteria
Real-time Reservation on Pick and Stage
Given a work order line requires an in-stock item When a user marks the line item as Picked or Staged with quantity Q Then a reservation is created for quantity Q, the item's available-to-stage is decremented by Q within 2 seconds, and the updated quantity is visible to all users viewing the item And an audit record is captured with timestamp, user, work order, line, source location/bin, item, lot/serial (if applicable), and quantity Given another user attempts to pick/stage the same item simultaneously and insufficient availability remains When they confirm the action Then the system prevents the action and shows an error indicating remaining availability, and no negative quantities are committed Given a partial pick/stage When the user enters a quantity lower than the required amount Then the reservation and decrement reflect the partial quantity, and the remaining requirement remains open on the work order
Automatic Release on Cancel or Reschedule
Given a reservation exists for a work order line When the work order is canceled Then all associated reservations are released within 2 seconds, available-to-stage is incremented accordingly, lot/serial associations are cleared, and an audit entry is recorded with reason "Work order canceled" Given a reservation exists for a work order line When the work order is rescheduled and saved Then all associated reservations are released within 2 seconds, counts are restored, lot/serial associations are cleared, and an audit entry records the prior and new schedule Given a partial line cancellation or quantity reduction When only a portion of the reserved quantity is removed Then only the corresponding reserved quantity is released and logged
Approved Substitutions and Supersessions
Given a required item is unavailable or below the needed quantity When a user views alternatives Then the system displays approved substitutions and supersessions with current availability, unit of measure, and cost variance Given a user selects a substitute or superseded part When approval is required by rules (e.g., role, variance threshold) Then the system requests approval and blocks reservation until approval is granted And upon approval the reservation is created against the substitute, and the work order line records a link from original to substitute/superseded part Given a substitution is executed Then the system logs approver, reason, timestamp, original SKU, substitute/superseded SKU, and reserved quantity for traceability Given a user attempts to reserve a non-approved substitute Then the system rejects the action with a clear message and creates no reservation
Lot/Serial Selection and Validation
Given an item is lot- or serial-tracked When reserving or staging it Then the user must select or scan a specific lot/serial per unit reserved, and the system validates the lot/serial is in stock and not already reserved or installed Given barcodes are scanned When a valid lot/serial barcode is scanned Then the selected lot/serial populates automatically and increments the reserved quantity by one per scan Given a lot has expiration/recall status per rules When the lot is selected Then the system warns or blocks selection according to policy and does not reserve blocked lots Given a reservation for a lot/serial-tracked item is released Then the lot/serial association is removed and the unit becomes available-to-stage again, with an audit log entry
Core Charge and Return Workflow
Given an item has a core charge When it is reserved/staged Then the work order shows a core-required flag and deposit amount, and a pending core return record is created linked to the work order, item, and reservation Given the installed part has an associated core When the core is received Then the system records the received core with condition and serial (if applicable), updates the core return status, and applies the core credit to the work order/accounting integration; all events are logged Given the core is not returned within the configured window When the window elapses Then the core return remains outstanding with status "Overdue" and no credit is issued automatically
Return-to-Stock for Unused Parts with Condition Logging
Given an unused staged item is returned from the bay When the user performs Return to Stock Then the system requires quantity, condition code (e.g., New-sealed, New-opened, Damaged), reason, and optional notes/photos; increments on-hand and available-to-stage per rules (e.g., New-sealed immediately available; others moved to quarantine); and records an audit entry Given the item is lot/serial tracked When returning to stock Then the exact lot/serial originally reserved must be provided and validated; otherwise the return is blocked Given a return is quarantined due to condition When QA approves the item Then the item transitions from quarantine to available-to-stage and the movement is logged
Concurrency and Double-Allocation Protection
Given two users attempt to reserve the final unit of an item simultaneously from different devices When both submit the pick/stage action Then only one reservation succeeds and the other is rejected with a clear message, available-to-stage never drops below zero, and both attempts are audit logged with correlation IDs Given a client experiences a timeout and retries a pick/stage request When the client supplies an idempotency key Then the API processes the request exactly once and does not create duplicate reservations Given 100 concurrent reservation attempts are made for the same SKU with only 10 units available When all requests complete Then the system creates exactly 10 reservations, rejects 90 with appropriate errors, and the final available-to-stage reflects the correct quantity with 0 double-allocations
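
A minimal sketch of the conditional-decrement-plus-idempotency-key pattern these criteria imply, using sqlite3 for brevity; the table names and columns are invented, and a production store would additionally need appropriate isolation or row locking.

    import sqlite3

    def reserve(conn: sqlite3.Connection, sku: str, qty: int, idem_key: str) -> bool:
        """Reserve qty atomically; returns True when the reservation is created."""
        cur = conn.cursor()
        # Idempotency: a replayed request with the same key returns the stored
        # outcome and never decrements availability twice.
        row = cur.execute(
            "SELECT succeeded FROM reservations WHERE idem_key = ?", (idem_key,)
        ).fetchone()
        if row is not None:
            return bool(row[0])
        # Conditional decrement: only succeeds when enough stock remains, so
        # available-to-stage can never go negative under concurrency.
        cur.execute(
            "UPDATE stock SET available = available - ? WHERE sku = ? AND available >= ?",
            (qty, sku, qty),
        )
        ok = cur.rowcount == 1
        cur.execute(
            "INSERT INTO reservations (idem_key, sku, qty, succeeded) VALUES (?, ?, ?, ?)",
            (idem_key, sku, qty, int(ok)),
        )
        conn.commit()
        return ok
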
Readiness & Shortage Notifications
"As a service manager, I want proactive alerts about bin readiness and shortages so that I can adjust the schedule and avoid missed promises."
Description

Send role-based alerts when bins are ready, partially filled, or at risk due to shortages, with ETAs and impacted work orders. Provide in-app, email, and push notifications, plus escalation rules as job start times approach. Surface readiness states on the work order timeline so service managers can re-sequence work or communicate with drivers proactively.

Acceptance Criteria
Bin Ready Notification to Assigned Roles
Given a staging bin reaches 100% fill for a scheduled work order and is assigned to a bay When the system updates bin readiness status Then the assigned technician, parts clerk, and service manager receive notifications within 30 seconds via their enabled channels (in-app, email, push) And each notification includes bin ID, work order ID, vehicle, bay, scheduled start time, and a deep link to the bin and work order And an in-app notification is created and marked unread until viewed or explicitly marked read And duplicate “bin ready” notifications for the same bin-work order pair are suppressed for 15 minutes And an audit log entry captures event time, recipients, channels attempted, and delivery outcomes
Partial Fill Shortage Alert with ETAs and Impacted Work Orders
Given a staging bin is 1–99% filled and at least one required part is missing When the system detects or receives an ETA for the missing part(s) Then a partial readiness alert is sent to the assigned technician, parts clerk, and service manager via their enabled channels And the alert lists each missing part (SKU, description), quantity short, source (supplier/warehouse), committed ETA per part, and confidence if available And the alert shows calculated earliest feasible job start based on the latest ETA among required parts And the alert includes any other work orders impacted by the same part shortage And the alert auto-updates (new notification or in-app update) within 5 minutes of any ETA change or fulfillment event And the partial alert is cleared when the bin reaches 100% fill or the part requirement is removed
Escalation Ladder Prior to Job Start
Given a work order has a scheduled start time T and its bin is not 100% ready When the current time passes T-120 minutes without resolution Then an escalation notification is sent to the service manager via in-app and email, marked as “At Risk: 120m” When the current time passes T-60 minutes without resolution Then an escalation notification is sent to the service manager and parts clerk via in-app, email, and push, marked as “At Risk: 60m” When the current time passes T-15 minutes without resolution Then an escalation notification is sent to the service manager and shop lead via in-app, email, and push, marked as “At Risk: 15m” And escalations stop immediately if the bin reaches 100% ready or the work order is re-sequenced to a later time And each escalation requires acknowledgement in-app; unacknowledged alerts are retried up to 3 times at 2-minute intervals And escalation events are recorded with timestamps, recipients, and acknowledgements for audit
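
A sketch of the escalation ladder check, intended to run on a periodic tick; the caller is assumed to stop invoking it once the bin reaches 100% ready or the work order is re-sequenced, and recipient/channel routing per tier is omitted.

    from datetime import datetime, timedelta

    # Ladder from the spec: minutes before scheduled start -> alert label.
    ESCALATION_TIERS = [(120, "At Risk: 120m"), (60, "At Risk: 60m"), (15, "At Risk: 15m")]

    def due_escalations(start: datetime, now: datetime, already_sent: set[str]) -> list[str]:
        """Return the labels whose T-minus threshold has passed and not yet fired."""
        return [
            label
            for minutes, label in ESCALATION_TIERS
            if now >= start - timedelta(minutes=minutes) and label not in already_sent
        ]
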
Notification Delivery and Preferences Compliance
Given users have role-based subscriptions and channel preferences configured for readiness and shortage events When an event triggers a notification Then notifications are delivered only via the user’s enabled channels among in-app, email, and push And if push delivery fails due to missing token or 3 consecutive errors, an email fallback is sent and the failure is logged And notification timestamps and ETAs are rendered in the recipient’s time zone And delivery and open/read receipts are captured where supported (in-app and push)
Work Order Timeline Readiness States and Live Updates
Given a service manager views the work order timeline When a bin readiness state changes Then the corresponding work order card updates within 30 seconds to one of: Not Started, Partial-Shortage, At Risk (ETA later than start), or Ready And a color-coded badge and tooltip display the current state, percent fill, and next ETA And clicking the card opens a side panel with bin contents, missing parts, ETAs, and list of impacted work orders And the timeline can be filtered by readiness state and sorted to bring Ready work orders to the top within their bay And all changes are persisted and reflected across sessions
Proactive Resequencing and Driver Communication from Timeline
Given a work order is marked At Risk or Partial-Shortage within 60 minutes of its scheduled start When the service manager opens the work order on the timeline Then the UI presents a prompt to re-sequence or notify the driver And drag-and-drop or action controls allow moving the work order to a later slot or swapping bays without data loss And upon confirming a schedule change, the driver and assigned technician are notified via their enabled channels with the new start time and reason And the change and communications are logged with timestamp, actor, recipients, and message preview
Consolidated Shortage Alert Across Multiple Work Orders
Given a part shortage affects two or more upcoming work orders within the next 48 hours When the system detects the shared shortage Then a single consolidated shortage notification is sent to the parts clerk and service manager via their enabled channels And the notification lists the impacted work orders (IDs, vehicles, bays, scheduled start times), total quantity short, quantity on order, and per-order shortfall And the notification includes the earliest and latest ETAs for outstanding quantities and a link to a consolidated shortage view And per-work-order shortage alerts are suppressed for 15 minutes after the consolidated alert to prevent duplication And subsequent quantity or ETA changes update the consolidated view and trigger a summarized update within 5 minutes

Downtime Optimizer

Sequences jobs to minimize total vehicle idle hours by blending due dates, severity, dispatch windows, and estimated durations. Suggests the next-best slot that keeps wheels turning and spreads workload evenly across bays and shifts.

Requirements

Multi-factor Optimization Engine
"As a fleet manager, I want jobs sequenced by urgency, due dates, and duration to minimize downtime so that more vehicles remain available for revenue-generating routes."
Description

Implements a scheduling optimizer that minimizes total vehicle idle hours by sequencing maintenance jobs using weighted inputs such as fault severity from OBD‑II, due dates, estimated durations, dispatch windows, travel times, and parts readiness. Supports hard and soft constraints and exposes tunable weights per fleet. Integrates with FleetPulse maintenance jobs, telematics fault classifier, and service calendar to produce ranked job sequences that respect compliance thresholds while keeping vehicles in service longer. Outputs include optimized sequences, objective scores, and constraint explanations for transparency.
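
To make the weighted sequencing concrete, here is a minimal Python sketch of how per-fleet tunable weights could rank jobs once hard constraints have filtered the candidate set. The field and weight names are illustrative assumptions rather than the shipped schema, and a production engine would use a constraint solver rather than a greedy sort.

```python
from dataclasses import dataclass

@dataclass
class WeightProfile:
    # Hypothetical tunable weights exposed per fleet.
    severity_weight: float = 3.0
    due_date_weight: float = 2.0
    duration_weight: float = 1.0

@dataclass
class Job:
    job_id: str
    severity: int             # e.g., 1 (Low) .. 4 (Critical) from the fault classifier
    hours_until_due: float
    est_duration_hours: float
    parts_ready: bool         # hard constraint: no start before parts readiness

def priority_score(job: Job, w: WeightProfile) -> float:
    """Higher score = sequence earlier. Soft factors only; hard
    constraints are enforced by filtering, never by weighting."""
    urgency = 1.0 / max(job.hours_until_due, 1.0)   # nearer due date, more urgent
    return (w.severity_weight * job.severity
            + w.due_date_weight * urgency
            - w.duration_weight * job.est_duration_hours)

def rank_jobs(jobs: list[Job], w: WeightProfile) -> list[Job]:
    feasible = [j for j in jobs if j.parts_ready]   # drop hard-constraint violators
    return sorted(feasible, key=lambda j: priority_score(j, w), reverse=True)
```

Under this shape, raising severity_weight by 50% (the W1 profile in the criteria below) lifts Critical jobs toward the front of the ranking without ever admitting an infeasible job.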

Acceptance Criteria
Reduce Total Idle Hours vs Baselines
Given baseline schedules produced by FIFO and due-date-only heuristics for test fixture FP-T1 (100 vehicles, 240 open jobs) And the optimization engine is configured with default weights and all data feeds available When the engine generates an optimized sequence Then the predicted total idle hours is reduced by at least 15% vs FIFO and at least 8% vs due-date-only And hard constraint violations count equals 0 And the response includes objective_total_idle_hours and comparator_baselines with values for FIFO and due-date-only
Hard Constraints Enforcement and Feasibility Handling
Given jobs with dispatch windows, parts_ready_at timestamps, travel times between locations, bay/shift capacities, and compliance deadlines When generating a schedule Then no job start_time is earlier than parts_ready_at And no job is scheduled outside its dispatch window And no resource capacity is exceeded per bay/shift interval And no compliance deadline is violated And arrival_time at each job is >= previous_job_end_time + required_travel_time And if constraints are infeasible, the engine returns solver_status = "Infeasible", an empty schedule, and constraint_explanations listing the blocking constraints with job_ids/resources
Tunable Weights Per Fleet Affect Sequence
Given a fleet with default weight profile W0 and an alternate profile W1 where severity_weight is increased by 50% while due_date_weight remains unchanged And a snapshot containing Critical, High, and Medium severity jobs When optimizing with W0 to produce S0 and with W1 to produce S1 on the same snapshot and random_seed Then the average position (rank) of Critical jobs in S1 improves by at least 2 positions compared to S0 And hard constraint violations count remains 0 in both schedules And results are deterministic given the same random_seed And weight profile updates are persisted with updated_by and effective_at metadata
Transparent Output with Scores and Explanations
Given an optimization run completes with solver_status in {"Optimal","Feasible"} When retrieving the output payload Then each scheduled job includes objective_contribution, soft_penalties breakdown by constraint key, and top_3_drivers with percentage attribution summing to 100% ±1% And schedule_meta includes objective_total_idle_hours, runtime_ms, solver_status, data_snapshot_id, and version And for any soft constraint violated, a human-readable explanation string is present with the penalty value And the payload validates against JSON schema version "v1.0" with 0 validation errors
Integration with FleetPulse Data Sources
Given FleetPulse APIs for maintenance jobs, telematics fault severity, and the service calendar are reachable When the engine requests data for a run Then the data snapshot used is no older than 5 minutes at run start And on transient API failures, the engine retries 3 times with exponential backoff of 0.5s, 1s, and 2s before falling back to a cached snapshot not older than 15 minutes And source_status per feed is recorded as "fresh", "cached", or "stale-fail"; "stale-fail" aborts the run with solver_status = "DataUnavailable" And all calls and snapshot IDs are logged with a correlation_id
Performance and Scalability Under Load
Given a workload up to 100 vehicles, 300 open jobs, and at most 6 bays across 3 shifts on the standard compute tier When optimizing with default settings Then the 95th percentile runtime over 50 runs is <= 5000 ms and the 99th percentile is <= 7000 ms And if a 4000 ms time limit is reached, a fallback heuristic returns a schedule within an additional 2000 ms And the fallback schedule’s objective_total_idle_hours is within 12% of the best bound found by the solver And peak memory usage does not exceed 1.5 GB
Incremental Re-Optimization and Next-Best Suggestion
Given a live schedule and one of the following events: early completion of a job, a bay outage, or a new Critical fault When the engine re-optimizes the remaining horizon Then a next-best suggestion is produced within 2000 ms And the resulting order changes at most 30% of remaining job positions while maintaining 0 hard constraint violations And impacted jobs include change_reason in {"early-completion","resource-outage","critical-fault"} with before/after timestamps
Bay and Shift Capacity Awareness
"As a shop supervisor, I want the optimizer to respect bay and technician availability so that schedules are feasible and evenly spread across shifts."
Description

Makes the optimizer capacity-aware by ingesting service bay calendars, technician shift schedules, and skill tags, ensuring proposed schedules do not exceed available bays or staff. Balances workload across shifts and days to reduce bottlenecks and overtime. Integrates with FleetPulse service calendar and user management to reflect live availability and time off, producing realistic start/end times per job that can be committed directly.
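
As a simplified illustration of capacity awareness, the sketch below checks bay occupancy per 15-minute timeslice, the granularity used in the acceptance criteria below. It validates a proposed assignment set rather than generating one, and the function names are hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

SLICE = timedelta(minutes=15)   # criteria below check capacity per 15-minute timeslice

def timeslices(start: datetime, end: datetime):
    """Yield the slice boundaries a job block occupies."""
    t = start
    while t < end:
        yield t
        t += SLICE

def violates_bay_capacity(assignments: list[tuple[datetime, datetime]],
                          open_bays: int) -> bool:
    """assignments: (start, end) pairs already rounded to 15-minute
    increments. Returns True if any timeslice exceeds the open bays."""
    load: dict[datetime, int] = defaultdict(int)
    for start, end in assignments:
        for t in timeslices(start, end):
            load[t] += 1
            if load[t] > open_bays:
                return True
    return False
```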

Acceptance Criteria
Do not exceed bay and on‑shift skill capacity
Given service bay calendars, technician shift schedules, and technician skill tags are synced and current When the optimizer generates a schedule for a planning horizon (e.g., next 14 days) Then in every 15‑minute timeslice, assigned jobs per site do not exceed available open bays And each scheduled job has assigned technician(s) whose skill tags cover all required job skill tags And no technician is double‑booked in overlapping 15‑minute intervals And no assignments occur outside technician on‑shift hours or bay open hours And hard‑constraint violation count equals 0
Reflect live availability and time‑off changes within SLA
Given a bay closure, technician PTO/sick leave, or calendar hold is added/edited/removed in FleetPulse When the change is saved Then the capacity model is updated within 60 seconds And any not‑started affected assignments are flagged and queued for re‑optimization within 2 minutes And the optimizer produces a list of impacted jobs with before/after feasibility status And re‑optimization proposals honor all capacity and skill constraints
Balance workload across shifts and days without overtime
Given a set of schedulable jobs and configured org settings (balancing_tolerance_pct=15, overtime_allowed flag, overtime_cap_pct=10) When the optimizer generates the schedule for the next 7 days Then projected bay utilization per shift is within ±15 percentage points of the site’s average projected utilization for that day, unless infeasible (document balancing exceptions) And assigned technician hours per shift do not exceed shift capacity unless overtime_allowed=true And if overtime is allowed, assigned overtime hours per shift do not exceed 10% of total shift hours And the optimizer selects among feasible options the plan with the lowest utilization standard deviation across shifts
Respect job durations, dispatch windows, and shift boundaries
Given each job has an estimated duration, earliest start time, latest finish time, and required skill tags When the optimizer schedules jobs Then the scheduled start time is within the job’s dispatch window And the scheduled end time equals start plus duration, rounded to the nearest 15‑minute increment And the job block does not cross bay closed periods or technician off‑shift time unless the job template has allow_split=true And if allow_split=true, segments are each ≥30 minutes and the sum of segment durations equals the job duration within ±5 minutes And jobs that cannot be placed within constraints are returned as unscheduled with reasons
Suggest next‑best feasible slots on conflict
Given a user attempts to schedule a job into a slot that violates capacity, skills, or shift constraints When the user invokes Find Next Best Then the system returns the three earliest feasible slots that satisfy bay, technician, skill, and dispatch window constraints And each suggestion includes bay, technician(s), start/end times, and projected impact on idle hours And the response time is ≤2 seconds with up to 500 active jobs in the horizon And the original infeasible request includes a clear reason code (e.g., NO_BAY, NO_SKILL, OFF_SHIFT)
Commit schedule updates to FleetPulse calendars atomically
Given a proposed schedule with specific bay and technician assignments When the user clicks Commit Then bay and technician calendar blocks are written atomically (all‑or‑nothing) using optimistic concurrency checks And if a conflict is detected (version mismatch or new hold), the commit is rejected with a conflict error and no partial writes occur And committed jobs appear in the FleetPulse service calendar within 1 second of success And an audit record is created with job ID, bay ID, technician IDs, user ID, timestamp, and previous/next times
Honor skill‑tag requirements and multi‑technician jobs
Given a job requires N technicians and specific skill tags When the optimizer schedules the job Then it assigns N distinct on‑shift technicians whose combined skill tags satisfy the job’s requirements for the full overlapping time window And if such a combination is unavailable, the job is not scheduled and is flagged with MISSING_SKILLS along with the missing tags And no assigned technician is scheduled concurrently on another job during the same timeslice And team/crew preferences are honored where configured (otherwise any valid combination is acceptable)
Next-best Slot Recommendations
"As a dispatcher, I want clear next-best slot suggestions with impact metrics so that I can schedule confidently without manual calculation."
Description

Surfaces a ranked list of next-best schedule slots for each pending job, showing predicted impact on key metrics such as idle hours delta, lateness risk, and bay utilization. Provides one-click commit to the maintenance schedule with automatic notifications to drivers and vendors. Includes audit logging of accepted or overridden suggestions for continuous tuning of optimization weights.
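
The ranking and tie-break chain in the acceptance criteria below maps naturally onto a composite sort key. A hedged sketch, assuming hypothetical field names and the convention that idle_hours_delta is negative when a slot reduces idle time:

```python
from dataclasses import dataclass

@dataclass
class SlotCandidate:
    slot_id: str
    idle_hours_delta: float   # negative = reduces total fleet idle hours
    lateness_risk: float      # 0.0 .. 1.0
    hours_from_due: float     # absolute distance from the job's due date
    bay_utilization: float    # projected % for the affected shift

def utilization_deviation(pct: float, low: float = 50.0, high: float = 85.0) -> float:
    """Distance outside the 50-85% target band; 0 inside the band."""
    return max(low - pct, pct - high, 0.0)

def rank_slots(candidates: list[SlotCandidate]) -> list[SlotCandidate]:
    # Tuple order mirrors the tie-break chain in the criteria: greatest
    # idle-hours reduction, then lower lateness risk, then proximity to
    # the due date, then least deviation from target bay utilization.
    return sorted(candidates, key=lambda c: (
        c.idle_hours_delta,
        c.lateness_risk,
        c.hours_from_due,
        utilization_deviation(c.bay_utilization),
    ))
```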

Acceptance Criteria
Ranked Next-Best Slots per Job
Given a pending maintenance job with due date, severity, dispatch windows, estimated duration, vehicle class, and the current bay/shift/vendor calendars When the user opens the Next-best Slots panel for that job Then the system displays a ranked list of at least 5 eligible slot recommendations within the 14-day horizon, sorted by greatest reduction in total fleet idle hours, with ties broken by lower lateness risk, then by proximity to due date, then by minimizing deviation from target bay utilization (50–85%) And each recommendation shows: idle hours delta (+/− hours, 1 decimal), lateness risk (%), projected bay utilization (%) for the affected shift, slot start/end times, bay, and vendor And ineligible slots that violate dispatch windows, bay capacity, shift coverage, vendor hours, or vehicle–bay compatibility are excluded And the recommendations include a model/weights version ID and generation timestamp And if no eligible slots exist, a "No eligible slots" state is shown with a link to view blocking constraints
One-Click Commit to Schedule
Given a displayed slot recommendation for a job When the scheduler clicks Commit Then the system atomically books the job into that slot with correct bay, start/end time, vendor, and vehicle, updates the maintenance calendar and job board within 2 seconds (P95), and prevents double-booking And if a conflicting booking has occurred since generation, the commit is aborted, the user sees a clear conflict message, and recommendations refresh And the operation is idempotent for 30 seconds to avoid duplicate bookings on repeated clicks And an audit record is created capturing: user, timestamp, job ID, previous schedule (if any), chosen slot, metrics snapshot, and model/weights version
Driver and Vendor Notifications on Commit
Given a successful commit of a job to a slot When the commit completes Then the assigned driver and vendor receive notifications via configured channels (in-app and email by default; SMS if enabled) within 60 seconds And the notification includes job ID, vehicle identifier, service type, location/bay, slot start/end with timezone, and an iCal attachment And failed deliveries are retried up to 3 times with exponential backoff; persistent failures surface a warning on the job with a Resend action And notifications respect configured quiet hours by deferring email/SMS until quiet hours end unless the slot starts within 12 hours, in which case only in-app is sent immediately
Audit Logging of Accepted and Overridden Suggestions
Given a suggestion is accepted or overridden by manually scheduling a different slot for the same job When the action is saved Then an immutable audit event is stored with unique ID, timestamp, user, job ID, vehicle, accepted_slot or overridden_slot, reason_code (optional), metrics_snapshot (idle_hours_delta, lateness_risk, bay_utilization), constraints_applied, and model/weights version And audit events are retained for at least 18 months and are exportable to CSV via admin reporting And the audit entry is visible on the job timeline within 5 seconds of the action
Eligibility and Constraint Compliance
Given fleet constraints including dispatch windows, driver availability, bay/shift schedules, vendor hours, regulatory inspection intervals, vehicle–bay compatibility, and overlapping jobs When generating slot recommendations Then only slots that satisfy all applicable constraints are presented And for Critical severity jobs due within 48 hours, at least one pre-due-date suggestion is presented if feasible; otherwise lateness is clearly flagged on all suggestions And the same vehicle is never double-booked across overlapping times
Performance, Freshness, and Determinism
Given up to 100 pending jobs and a 14-day planning horizon When opening the Next-best Slots panel Then top 5 suggestions per job render within 2 seconds at P95 and 4 seconds at P99 And given no changes to inputs, repeated generation within a session produces the same ranking order and metrics (deterministic) and displays the same model/weights version And when underlying schedule or constraints change, the UI refreshes the suggestions automatically within 10 seconds or prompts the user to refresh And where required data are missing, the UI labels metrics as unavailable and ranks those suggestions last
Real-time Event-driven Re-optimization
"As a fleet manager, I want the schedule to adapt when new faults occur or jobs slip so that downtime is minimized and delivery commitments are preserved."
Description

Continuously monitors telematics alerts, job progress updates, delays, and parts ETA changes to trigger incremental re-optimization without disrupting in-progress work. Automatically recalculates sequences and flags affected jobs, sending actionable alerts and updated next-best slots. Provides throttling and change windows to avoid excessive schedule churn during peak operations.
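
One plausible implementation of the throttling mentioned above is a sliding-window limiter keyed by vehicle or bay; the limits shown mirror the example policy in the acceptance criteria (3 per vehicle per hour, 1 per bay per 10 minutes). The class is an illustrative sketch, not the production design.

```python
import time
from collections import defaultdict, deque

class ReoptThrottle:
    """Sliding-window rate limiter. Suppressed requests would be queued
    and logged with a next-eligible time by the caller."""

    def __init__(self, max_events: int, window_seconds: float):
        self.max_events = max_events
        self.window = window_seconds
        self._history: dict[str, deque] = defaultdict(deque)

    def allow(self, key: str, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        events = self._history[key]
        while events and now - events[0] > self.window:
            events.popleft()                 # expire events outside the window
        if len(events) >= self.max_events:
            return False                     # over the limit: suppress or queue
        events.append(now)
        return True

vehicle_throttle = ReoptThrottle(max_events=3, window_seconds=3600)   # 3/vehicle/hour
bay_throttle = ReoptThrottle(max_events=1, window_seconds=600)        # 1/bay/10 min
```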

Acceptance Criteria
Telematics/Event Triggers for Incremental Re-optimization
- Given qualifying events (telematics DTC severity S2+; job progress state change; reported delay > 10 minutes; parts ETA variance > 15 minutes), When an event is validated, Then an incremental re-optimization is initiated within 5 seconds of validation.
- Given an incremental re-optimization is triggered, When it runs, Then the affected scope is computed based on dependencies (vehicle, bay, technician, and precedence links) and limited to only impacted jobs.
- Given an incremental re-optimization completes, When results are ready, Then updated sequences and next-best slots are persisted within 10 seconds end-to-end from event ingestion for fleets up to 100 vehicles and 300 jobs.
- Given a non-qualifying event (below thresholds), When evaluated, Then no re-optimization is initiated and the event is recorded with reason "below-threshold".
Protect In-Progress Work During Re-optimization
- Given jobs with status In Progress or Bay Occupied, When re-optimization runs, Then their start time, assigned bay/technician, and sequence position remain unchanged (0 moves).
- Given dependent downstream jobs exist, When re-optimization adjusts the plan, Then only downstream jobs may shift and no upstream job is pulled ahead of an in-progress job.
- Given an authorized user applies an Allow Move In-Progress override to selected jobs, When re-optimization runs, Then only those selected jobs may be moved and an audit entry (user, timestamp, before/after) is stored.
- Given SLAs for non-disruption, When re-optimization finishes, Then 100% of in-progress jobs retain their active assignment and start time within a tolerance of 0 minutes.
Subset Recalculation with Performance SLA
- Given a fleet up to 100 vehicles and 300 scheduled jobs, When an incremental re-optimization affects ≤ 30 jobs, Then compute time is ≤ 2 seconds at the 95th percentile.
- Given unaffected jobs exist, When re-optimization completes, Then 0 start times, bay assignments, or technician assignments for unaffected jobs change.
- Given the affected scope would exceed 30% of total jobs, When detected, Then the engine either (a) limits changes to the top 30% most impacted jobs or (b) schedules a full re-plan job if outside frozen windows, and records the chosen path.
- Given repeated events target the same subset within 2 minutes, When re-optimization is requested again, Then the engine reuses cached dependency graphs to keep compute time ≤ 1 second P95.
Alerting and Next-Best Slot Distribution
- Given jobs are changed by re-optimization, When results are committed, Then each affected job is flagged and an alert is generated containing job_id, vehicle_id, reason_code, previous_start, new_start, delta_start_minutes, next_best_slot window, required_action, and a deep link to the schedule.
- Given alert channels are configured, When re-optimization completes, Then in-app alerts render within 2 seconds and email notifications are queued within 60 seconds.
- Given multiple events impact the same job within 5 minutes, When alerts are generated, Then a single consolidated alert is sent with aggregated reasons and the most recent schedule deltas.
- Given a user acknowledges an alert, When acknowledgement is recorded, Then the alert is no longer shown as actionable and the acknowledgement is logged with user and timestamp.
Throttling and Change Window Enforcement
- Given throttle limits (max 3 re-optimizations per vehicle per hour; max 1 per bay per 10 minutes), When incoming events exceed limits, Then excess re-optimizations are queued or suppressed per policy and each suppression is logged with reason and next eligible time.
- Given a frozen change window (e.g., 07:30–09:30 local), When events occur within the window, Then schedule changes are not applied automatically; a pending proposal is created for review after the window with an ETA and impact summary.
- Given a soft change window policy, When events occur, Then only jobs starting more than 30 minutes in the future may be moved; jobs within 30 minutes remain fixed.
- Given admin updates throttle or window settings, When saved, Then new settings take effect within 30 seconds and are versioned with an audit entry.
Plan Snapshotting, Rollback, and Auditability
- Given a re-optimization is initiated, When it starts, Then a pre-change schedule snapshot is saved with a unique version ID.
- Given a re-optimization is applied, When users request a diff, Then the system presents before/after for all changed jobs including start time deltas, bay/technician changes, and affected dependencies.
- Given a rollback is requested within 15 minutes of a change, When executed, Then the schedule reverts to the prior snapshot within 5 seconds and all alerts are updated to reflect the rollback.
- Given any re-optimization or rollback, When completed, Then an immutable audit record is stored capturing trigger, scope, user (if any), timestamps, and outcome (success/fail).
Constraint Profiles and Blackout Windows
"As a planner, I want to define hard and soft constraints for vehicles and jobs so that the optimizer respects real-world limits while still finding workable schedules."
Description

Allows per-vehicle and per-job constraint profiles, including dispatch windows, driver rest periods, compliance deadlines (e.g., inspections), customer delivery commitments, and location blackout windows. Supports hard versus soft constraints with penalty weights, validation at commit time, and clear explanations when a plan cannot satisfy all constraints.
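
The soft-constraint objective reduces to the weighted sum spelled out in the acceptance criteria below. A minimal sketch, with assumed field names:

```python
from dataclasses import dataclass

@dataclass
class SoftViolation:
    constraint_id: str
    weight: float
    violation_minutes: float

def total_penalty(violations: list[SoftViolation]) -> float:
    """penalty = sum(weight_i * violation_minutes_i); the optimizer
    prefers the plan with the lowest total across the batch."""
    return sum(v.weight * v.violation_minutes for v in violations)

def penalty_breakdown(violations: list[SoftViolation]) -> dict[str, float]:
    """Per-constraint contributions, for the UI breakdown."""
    return {v.constraint_id: v.weight * v.violation_minutes for v in violations}
```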

Acceptance Criteria
Hard Constraints Enforcement at Schedule Generation
Given a vehicle V with a constraint profile containing blackout windows and driver rest periods marked Hard; When the optimizer proposes next-best slots for job J on V; Then no proposed slot overlaps any Hard constraint window by ≥1 minute; And any manual attempt to place J overlapping a Hard window is blocked; And the error shows violated constraint names, window start/end, and J’s proposed start/end; And the API returns error code CONSTRAINT_HARD_VIOLATION with HTTP 409 on invalid commit.
Soft Constraints Penalty Weighting and Trade-offs
Given job J has Soft constraints with weights (e.g., dispatch preference window and delivery buffer deadline) and vehicle V has no conflicting Hard constraints; When the optimizer evaluates candidate schedules; Then it computes penalty = Σ(weight_i × violation_minutes_i) across J’s Soft constraints; And selects the plan with minimal total penalty across the batch; And the UI shows per-job and aggregate penalty scores and a breakdown by constraint; And updating a Soft weight triggers re-optimization and updates scores within 2 seconds for batches ≤100 jobs.
Per-Job Profile Overrides and Precedence Rules
Given vehicle-level profile P_v and a job-level profile P_j for job J on vehicle V; When P_j conflicts with P_v; Then job-level Hard constraints take precedence over vehicle-level Soft or Hard for J; And job-level Soft weights override vehicle-level Soft weights where specified, otherwise inherit; And removing a job-level override immediately reverts to vehicle-level behavior; And an audit log entry records the source (job vs vehicle), precedence decision, timestamp, and actor ID.
Location Blackout Windows and Bay Capacity
Given location L has blackout windows and bay capacity C, and jobs J1..Jn are eligible for L; When the optimizer sequences and assigns jobs; Then no job’s runtime overlaps L’s blackout windows by ≥1 minute; And at no time do concurrent assignments at L exceed capacity C; And if L is unavailable during a job’s only feasible window and an alternate location L2 is allowed, the optimizer proposes L2; And if no feasible location exists, the job is marked Unplaceable with reason LOCATION_BLACKOUT and a next-best time outside the blackout.
Commit-Time Validation with Clear Explanations
Given a draft plan contains potential constraint violations; When the user clicks Commit; Then validation completes within 2 seconds for plans ≤500 jobs; And if any Hard violations exist, the commit is blocked with HTTP 409; And the response lists per job: constraint_id, name, type (Hard/Soft), defined window/datetime, scheduled start/end, delta_minutes, penalty_weight, and suggested_fix; And if only Soft violations exist, the user can acknowledge and proceed; And the commit then succeeds with HTTP 200 and includes warnings.
Timezone and DST-Aware Constraint Evaluation
Given vehicles, locations, and jobs define windows and deadlines in differing timezones including DST transitions; When the optimizer evaluates feasibility and sequences jobs; Then all constraints are evaluated in their defined timezone and normalized to UTC for computation; And DST transitions do not produce spurious overlaps or gaps (e.g., a 01:30–02:30 window during fall-back is treated as 90 minutes); And the UI displays times in the user’s preferred timezone with clear TZ labels; And the API returns times in ISO 8601 with explicit offsets (e.g., 2025-09-12T10:00:00-05:00).
What-if Scenario Simulator
"As an operations lead, I want to simulate different scheduling strategies and compare their KPIs so that I can choose the plan that best reduces downtime and risk."
Description

Enables creation and comparison of alternative schedules by adjusting weights, constraints, and capacity parameters. Calculates and visualizes KPIs such as total idle hours, lateness, bay utilization, and overtime before committing. Supports cloning the current plan, side-by-side comparison, and safe rollback.
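
For the side-by-side comparison, a scenario's KPIs can be diffed against the current plan per metric; the sketch below uses assumed field names for the four KPIs named above.

```python
from dataclasses import dataclass

@dataclass
class ScenarioKPIs:
    total_idle_hours: float
    total_lateness_hours: float
    avg_bay_utilization_pct: float
    total_overtime_hours: float

def kpi_diff(current: ScenarioKPIs, candidate: ScenarioKPIs) -> dict[str, float]:
    """Candidate minus current; negative idle, lateness, and overtime
    deltas are improvements."""
    return {
        "idle_hours": candidate.total_idle_hours - current.total_idle_hours,
        "lateness_hours": candidate.total_lateness_hours - current.total_lateness_hours,
        "bay_utilization_pct": candidate.avg_bay_utilization_pct - current.avg_bay_utilization_pct,
        "overtime_hours": candidate.total_overtime_hours - current.total_overtime_hours,
    }
```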

Acceptance Criteria
Clone Current Plan and Adjust Weights
Given an active committed plan exists, when the user selects "Clone plan" in Downtime Optimizer, then a new scenario is created with a unique ID, inheriting current weights, constraints, and capacities, and the committed plan remains unchanged. Given the cloned scenario is open, when the user adjusts weights for due date, severity, dispatch windows, and durations and clicks "Recalculate," then the optimizer uses the updated weights only for that scenario and records the parameter changes with timestamp and author in scenario metadata. Given optimization is running, when runtime exceeds the configured optimizationTimeout (default 60s), then the UI displays a non-blocking timeout notice and preserves the scenario without partial changes
Side-by-Side KPI Comparison
Given at least two scenarios exist (including the current plan), when the user selects them for comparison, then a side-by-side view displays KPIs for each: total idle hours, total lateness (hours), average bay utilization (%), and total overtime (hours). Given the comparison view is open, when the user toggles normalization (per vehicle or per job), then KPI values recalculate and update within 2 seconds. Given scenarios have differing job counts, when KPIs render, then each KPI is labeled with units and calculation basis and tooltips expose formulas used
KPI Accuracy and Reproducibility
Given a scenario with known input data, when KPIs are calculated, then totals equal the sum of per-job metrics and rounding error does not exceed 0.1 unit per KPI. Given the same scenario is recalculated without input changes, when run repeatedly, then KPI results are identical within 0.01 units. Given overtime rules are configured (shift hours and max per tech), when assigned work exceeds limits, then computed overtime hours reflect the overage and are never negative
Constraint and Capacity Compliance
Given bay capacities, shift calendars, and dispatch windows are configured, when the simulator produces a schedule, then no job is placed outside its dispatch window and per-timeslot bay capacity is never exceeded. Given user changes would violate a mandatory constraint, when the optimizer runs, then the run fails fast with a clear error listing violated constraints and offending jobs, and no partial schedule is saved. Given soft constraints (weights) are set to extremes, when the simulation completes, then all hard constraints remain satisfied
Scenario Save, Name, and Management
Given the user creates a new scenario, when they save it, then they can provide a unique name, optional description, and tags, and duplicate names are prevented within the same fleet and creator. Given scenarios exist, when the user filters by tag, date range, or author, then the list updates within 1 second for up to 50 scenarios. Given a scenario is archived or deleted, when the user confirms, then it is removed from default lists, retained for 30 days for restore, and links to it remain non-breaking
Commit and Safe Rollback
Given a what-if scenario is approved, when the user clicks "Commit as current plan," then the system versions the previous plan, marks it read-only, and promotes the scenario within 5 seconds. Given the current plan was committed from a scenario within the last 30 days, when the user selects "Rollback," then the prior plan is restored as current with all metadata and schedule data intact, and stakeholders receive notifications per their preferences. Given a commit or rollback occurs, when auditors view history, then an immutable audit trail shows actor, action, timestamps, and KPI diffs between versions
Next-Best Slot Impact Preview
Given the user opens a job within a scenario and requests "Next-best slot" suggestions, when the user selects a suggestion, then a preview updates showing KPI deltas (idle hours, lateness, utilization, overtime) for the entire scenario before apply. Given the user applies a suggested slot, when the update is confirmed, then the schedule and KPIs recalculate and render within 2 seconds. Given a suggestion would violate any hard constraint, when suggestions are listed, then it is flagged as invalid and cannot be applied

Slot Swapper

When parts slip, a tech calls out, or a tow‑in arrives, the schedule rebalances automatically. Presents low-impact swap options with predicted finish times so coordinators can approve changes in one click and maintain SLAs without chaos.

Requirements

Real-time Disruption Detection
"As a service coordinator, I want disruptions to be detected and normalized automatically so that I don’t have to hunt for issues and can react before SLAs are put at risk."
Description

Continuously monitor and ingest disruption signals—parts delays/ETAs, technician call-outs, tow-ins, and urgent work orders triggered by telematics—to automatically flag at-risk appointments and capacity constraints. Normalize heterogeneous inputs into a standard disruption event model with timestamps, severity, impacted resources (techs, bays, vehicles), and expected duration shift. Integrate with FleetPulse’s scheduling and work-order services to identify affected jobs, compute available slack, and mark candidates for rebalancing without manual triage. This enables timely, automated responses that reduce coordinator load and prevent SLA breaches.
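
To illustrate normalization, the sketch below maps one hypothetical raw roster payload into the standard event model from the acceptance criteria (rendered in Python snake_case) and rejects payloads missing required fields. The incoming payload keys are assumptions, not a defined feed contract.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
import uuid

class DisruptionType(Enum):
    PARTS_DELAY = "parts_delay"
    TECH_CALLOUT = "tech_callout"
    TOW_IN = "tow_in"
    URGENT_WO = "urgent_wo"

class Severity(Enum):
    INFO = "Info"
    MINOR = "Minor"
    MAJOR = "Major"
    CRITICAL = "Critical"

@dataclass
class DisruptionEvent:
    source: str
    type: DisruptionType
    occurred_at: datetime                     # normalized to UTC
    severity: Severity
    expected_duration_shift_minutes: int      # may be negative
    tech_ids: list[str] = field(default_factory=list)
    bay_ids: list[str] = field(default_factory=list)
    vehicle_ids: list[str] = field(default_factory=list)
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def normalize_tech_callout(payload: dict) -> DisruptionEvent:
    """Normalize one roster/HR payload shape (keys are hypothetical)."""
    try:
        return DisruptionEvent(
            source="roster",
            type=DisruptionType.TECH_CALLOUT,
            occurred_at=datetime.fromisoformat(payload["occurred_at"]).astimezone(timezone.utc),
            severity=Severity(payload["severity"]),
            expected_duration_shift_minutes=int(payload["shift_minutes_lost"]),
            tech_ids=[payload["tech_id"]],
        )
    except (KeyError, ValueError) as err:
        # Mirrors the reject-with-reason rule: do not persist, report why.
        raise ValueError(f"rejected tech_callout payload: {err}") from None
```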

Acceptance Criteria
Normalize Heterogeneous Disruption Events
Given disruption payloads arrive from parts ETA API, roster/HR, tow-in webhook, and telematics When the ingestion service processes each payload Then it produces a DisruptionEvent with fields: eventId (UUID), source (enum), type (parts_delay|tech_callout|tow_in|urgent_wo), occurredAt (ISO-8601 UTC), severity (Info|Minor|Major|Critical), impactedResources (techIds, bayIds, vehicleIds), expectedDurationShiftMinutes (integer, may be negative), correlationId (optional), expiresAt (optional) And events missing any required field are rejected with errorCode and reason and are not persisted And the normalized event is persisted and queryable within 5 seconds of ingestion And PII not required for scheduling is excluded or masked before persistence And a metrics counter increments per source and type
Auto-Flag At-Risk Appointments on Tech Call-Out
Given a technician has scheduled appointments today and a tech_callout DisruptionEvent with an unavailable interval arrives When the event is persisted Then all appointments overlapping the interval are flagged atRisk=true within 30 seconds (P95) And capacityLostMinutes equals the total overlap minutes across affected appointments And for each affected appointment, availableSlackMinutes is computed And appointments with availableSlackMinutes >= abs(expectedDurationShiftMinutes) are marked candidateForRebalancing=true And a single notification is published with counts: affectedAppointments and rebalancingCandidates
Parts ETA Slip Recomputes Shift and Rebalancing Candidates
Given an appointment linked to a parts order with a tracked ETA and remainingSlackMinutes When a parts_delay DisruptionEvent indicating an ETA slip of S minutes arrives Then expectedDurationShiftMinutes for the appointment equals S And if S > remainingSlackMinutes then atRisk=true else atRisk=false And if S > 0 then candidateForRebalancing=true And the disruption event references partsOrderId and etaSource And the scheduler state reflects the change within 30 seconds (P95)
Tow-In Arrival Triggers Capacity Impact and Candidate Identification
Given the shop schedule for today and bay compatibility rules When a tow_in DisruptionEvent with requiredBayType and estimatedWorkMinutes arrives Then capacityImpactNext4hMinutes is computed and stored And all appointments in the next 4 hours that conflict by bay type are flagged atRisk within 30 seconds (P95) And at least one candidate job is identified where availableSlackMinutes >= estimatedWorkMinutes or a compatible bay swap exists And the tow-in work order and impacted appointments share a correlationId
Telematics Urgent Work Order Flags Vehicle Conflicts
Given telematics emits an urgent_wo DisruptionEvent for vehicle V with severity >= Major When the event is ingested Then a workOrderId is created or referenced on the event And any scheduled appointment involving V in the next 24 hours is flagged atRisk within 30 seconds (P95) And if V is currently in service, expectedDurationShiftMinutes is computed from severity-to-duration mapping and is >= 0 And the earliest impacted appointment is marked candidateForRebalancing=true
End-to-End Detection Latency and Throughput SLOs
Given a steady stream of 50 disruption events per minute from mixed sources When events are ingested and processed Then 95% of affected appointments are flagged atRisk within 30 seconds of event occurredAt And 99% within 60 seconds And the event drop rate is <= 0.1% over any rolling 60-minute window (ingested vs persisted)
Idempotent and Ordered Processing of Duplicate/Out-of-Order Events
Given identical external events (same externalEventId and source) are delivered multiple times within 24 hours, possibly out of order When the system processes the payloads Then only one DisruptionEvent exists per externalEventId+source And duplicates update the existing record without creating duplicate flags or notifications And older late-arriving events do not overwrite newer severity or expectedDurationShiftMinutes (ordering by occurredAt) And attemptCount increments and lastSeenAt updates on the existing record
Swap Optimization Engine with Predicted Finish Times
"As a dispatcher, I want the system to propose swap options with accurate predicted finish times so that I can choose the least disruptive plan and still meet SLAs."
Description

Generate and evaluate feasible swap sets across appointments, technicians, bays, and parts within the affected time horizon to rebalance the schedule with minimal impact. Enforce hard constraints (technician certifications, shift hours, legal breaks, bay capabilities, parts availability/reservations) and soft preferences (customer priority, travel reduction, technician continuity). Produce predicted finish times per option using a duration model informed by historical job data, vehicle attributes, technician performance, and telematics signals; apply buffer policies and uncertainty bands. Compute an impact score factoring SLA risk, total lateness, number of customers moved, overtime, and parts re-picks. Output top-ranked, low-impact options for review.
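
The buffer policy and impact score are specified numerically in the acceptance criteria below; this sketch transcribes them directly. The weights are the example values from the criteria and would be fleet configuration in practice.

```python
from dataclasses import dataclass

def buffered_duration(base_minutes: float) -> float:
    """Buffer policy from the criteria: max(10% of predicted duration, 15 min)."""
    return base_minutes + max(0.10 * base_minutes, 15.0)

@dataclass
class OptionMetrics:
    sla_risk: float           # 0..1
    lateness_minutes: float
    customers_moved: int
    overtime_minutes: float
    parts_re_picks: int

# Example weights from the criteria below; fleet-configurable in practice.
W_SLA, W_LATE, W_MOVED, W_OT, W_PARTS = 5.0, 3.0, 2.0, 4.0, 1.0

def impact_score(m: OptionMetrics) -> float:
    """Lower is better; options are returned sorted ascending."""
    return (W_SLA * m.sla_risk
            + W_LATE * m.lateness_minutes
            + W_MOVED * m.customers_moved
            + W_OT * m.overtime_minutes
            + W_PARTS * m.parts_re_picks)
```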

Acceptance Criteria
Hard-Constraint Feasibility Across Time Horizon
Given an affected time horizon with appointments, technicians, bays, and parts inventory with reservations When the engine generates swap sets Then every suggested swap set must satisfy all hard constraints: And each job's assigned technician holds all required certifications And every assignment falls within the technician’s shift hours excluding configured legal breaks And the assigned bay satisfies the job’s capability requirements And required parts are available at the scheduled time and existing reservations are honored And no technician, bay, vehicle, or appointment has overlapping assignments And technician travel time between sequential jobs is feasible within scheduled gaps And no appointment is scheduled outside the affected time horizon
Soft-Preference Scoring and Prioritization
Given soft preferences configured for customer priority, travel reduction, and technician continuity with weights wp_priority=3, wp_travel=2, wp_continuity=1 When the engine evaluates feasible swap sets Then it computes a soft_preference_score per set using the configured weights and exposes the score in the output And for two sets with identical impact_score, the set with higher soft_preference_score ranks higher And updating the soft-preference weights in configuration changes the ranking deterministically within the same data set
Predicted Finish Times with Buffers and Uncertainty Bands
Given a duration model and policy configuration buffer_policy=max(10% of predicted duration, 15 minutes) with uncertainty bands P50 and P90 When the engine computes each option Then for every appointment in every option it outputs base_predicted_duration_minutes, applied_buffer_minutes, buffered_duration_minutes, predicted_finish_timestamp, P50_finish_timestamp, and P90_finish_timestamp And predicted_finish_timestamp >= start_timestamp + buffered_duration_minutes And P90_finish_timestamp >= P50_finish_timestamp >= start_timestamp And applied_buffer_minutes >= 0 and equals buffer_policy(base_predicted_duration_minutes)
Impact Score Computation and Ranking
Given weights w_sla=5, w_late=3, w_moved=2, w_ot=4, w_parts=1 And per-option metrics SLA_risk (0..1), lateness_minutes, customers_moved, overtime_minutes, parts_re_picks When the engine evaluates each swap set Then it computes impact_score = w_sla*SLA_risk + w_late*lateness_minutes + w_moved*customers_moved + w_ot*overtime_minutes + w_parts*parts_re_picks And options are returned sorted ascending by impact_score And ties are broken by lower lateness_minutes, then lower customers_moved, then lower overtime_minutes, then higher soft_preference_score
Top-Ranked Options Output Contract
Given a schedule disruption (part delay, technician absence, or tow-in) When the engine runs on the affected horizon Then it returns between 3 and 10 options if at least 3 feasible sets exist; otherwise it returns all feasible options And each option payload includes a stable option_id, changed_appointments list, before_after times, assigned_technician_ids, bay_ids, parts_changes, per-appointment predicted finish times with P50/P90, impact_score, soft_preference_score, and SLA risk summary And no option modifies appointments outside the affected horizon And all options are uniquely identifiable and can be retrieved by option_id
SLA Risk Guardrails and Rejection of Harmful Plans
Given SLA thresholds max_per_appointment_breach_prob=0.20 and max_global_breach_prob_increase=0.05 When the engine evaluates options Then it excludes any option where any appointment exceeds max_per_appointment_breach_prob And it excludes any option where the aggregate SLA breach probability increases by more than max_global_breach_prob_increase versus current plan And remaining options include per-appointment flags at_risk=true when predicted finish exceeds the SLA window
Parts Reservation Integrity and Re-Picks Accounting
Given parts reservations linked to appointments with facility and pick list When generating swap sets that move appointments Then reservations are preserved when appointments remain in the same facility and day And if a move requires a different facility or warehouse, the option increments parts_re_picks and provides an updated pick list and source location And options requiring parts without available lead time by the new start time are excluded
Ranked Options Panel and What‑If Preview
"As a coordinator, I want to preview ranked swap options and see the exact impacts before committing so that I can make confident, fast decisions."
Description

Present coordinators with an interactive list of the top N swap options ranked by lowest impact, including per-appointment predicted start/finish times, SLA risk indicators, customers affected, overtime risk, and required approvals. Provide a timeline preview and visual diffs of the schedule before and after applying an option. Allow filtering by constraints (no customer time window violations, no overtime, limit moves per customer) and pinning of must-keep appointments. Ensure options are explainable with constraint reasoning (e.g., "kept Tech A due to certification").
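
The constraint filters and pinning described above can be expressed as a single predicate over a candidate option. The dictionary shape below is an illustrative assumption, not the actual option payload:

```python
def passes_filters(option: dict, *, no_window_violations: bool,
                   no_overtime: bool, max_moves_per_customer: int,
                   pinned_ids: set[str]) -> bool:
    """Return True if the option survives every enabled panel filter."""
    moved = option["moved_appointments"]          # hypothetical payload field
    if any(appt["id"] in pinned_ids for appt in moved):
        return False                              # pinned appointments never move
    if no_window_violations and option["window_violations"] > 0:
        return False
    if no_overtime and option["overtime_minutes"] > 0:
        return False
    moves_per_customer: dict[str, int] = {}
    for appt in moved:
        c = appt["customer_id"]
        moves_per_customer[c] = moves_per_customer.get(c, 0) + 1
        if moves_per_customer[c] > max_moves_per_customer:
            return False
    return True
```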

Acceptance Criteria
Ranked Top N Options With Impact Details
Given a disruption (e.g., tech call-out) resulting in at least one conflicted appointment and default N=5 When the Ranked Options Panel opens Then it displays up to N options sorted ascending by impact score (lowest impact first) And each option shows for every affected appointment: predicted start time and predicted finish time in the coordinator’s local timezone And each option shows an SLA risk indicator with values {None, Low, Medium, High} And each option shows the count of customers affected And each option shows an overtime risk flag {True/False} And each option shows the list of required approvals (empty if none) And ties in impact score are broken by earliest predicted overall completion time And if fewer than N options exist, all available options are shown And the initial list renders in ≤ 3 seconds for a schedule of ≤ 100 active appointments and ≤ 12 technicians
Impact Scoring and Rank Determinism
Given a test dataset with known expected impact scores for candidate swap options When the system generates the ranked options Then the displayed ordering matches the expected lowest-to-highest impact scores And reloading the panel without schedule changes preserves the exact ordering And two options with equal impact scores are ordered by earliest predicted overall completion time, then by stable option identifier And generating the same options set multiple times yields identical scores and order
Constraint Filters: Time Windows, Overtime, Move Limits
Given the coordinator enables filters: No customer time window violations=On, No overtime=On, Limit moves per customer=1 When options are generated or a filter is toggled Then all options violating any enabled filter are excluded from the list And a result count is displayed and updates with each filter change And results refresh within ≤ 1 second of a filter change for the dataset defined in the ranked list criteria And when Limit moves per customer is increased to 2, additional compliant options appear And when filters exclude all options, an empty state is shown with a clear prompt to relax filters
Pinned Appointments Are Immutable
Given appointment A is pinned by the coordinator When the system generates swap options Then no returned option moves appointment A’s start time, end time, assigned technician, or bay And pinned appointments are visually marked and unchanged in both before and after timelines in preview And attempting to approve an option that would move a pinned appointment is blocked with an explanatory message And unpinning appointment A allows subsequent option generations to include moves of A
What-If Timeline Preview and Visual Diff
Given the Ranked Options Panel is open When the coordinator clicks any option Then a what-if preview opens within ≤ 1 second showing the relevant timeline for affected technicians on the impacted day And differences are highlighted: moved appointments, changed technician assignments, and start/end time deltas in minutes (±) And a summary displays total minutes shifted, customers affected, projected SLA breaches avoided/created, and predicted finish times per technician And zoom (15 min, 1 hr) and pan controls are available And closing the preview returns to the ranked list without applying changes
Explainable Options With Constraint Reasoning
Given an option in the ranked list When the coordinator expands "Why this option?" Then the system displays constraint reasoning including at least one concrete rule reference (e.g., "kept Tech A due to certification", "respected customer window 10:00–12:00", "avoided overtime per policy") And each reason identifies the affected appointment or resource and the specific constraint that triggered it And the collapsed summary line is ≤ 240 characters and states the primary reason And explanations are consistent with enabled filters and the changes shown in the preview
One-Click Approval and Apply
Given an option that requires only coordinator approval When the coordinator clicks Approve Then changes are applied atomically within ≤ 2 seconds, all affected appointments update to their predicted times, and a success confirmation is shown And the ranked list and timeline refresh to reflect the new schedule immediately. Given an option that requires additional approvals When the coordinator clicks Approve Then an approval request is recorded and a status of "Pending approvals" is shown including the required approver types And the schedule is not changed until all required approvals are received And once approvals are complete, applying the option completes within ≤ 2 seconds. Given the schedule has changed since options were generated When Approve is clicked Then the system prevents applying the stale option and prompts the coordinator to refresh options due to conflicts
One‑Click Apply with Transactional Cascade and Notifications
"As a coordinator, I want to commit a chosen swap in one click and have all stakeholders notified automatically so that the schedule and communications stay consistent without manual follow-up."
Description

Enable single-action approval to apply the selected swap set transactionally across scheduling, work orders, technician assignments, bay allocations, and parts reservations. Ensure idempotency and conflict resolution if multiple coordinators act simultaneously. Trigger downstream communications: in-app alerts and mobile push to technicians, SMS/email to customers when their times change, and webhooks for external systems. Sync updates to calendars and ensure all dependent records (labor clocks, check-in times) reflect the new schedule. Provide success/failure feedback and partial retry with safe rollback on errors.
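
A minimal sketch of the idempotency guarantee, assuming an in-memory result cache keyed by the client's idempotency key. A production version would persist keys with a TTL and use per-resource locking rather than one process-wide lock, but the contract is the same: duplicates commit exactly once and replay the stored result.

```python
import threading

class IdempotentApplier:
    def __init__(self, commit_fn):
        self._commit_fn = commit_fn              # performs the atomic transaction
        self._results: dict[str, dict] = {}
        self._lock = threading.Lock()            # sketch-level serialization

    def apply(self, idempotency_key: str, swap_set: dict) -> dict:
        with self._lock:
            cached = self._results.get(idempotency_key)
            if cached is not None:
                return cached                    # duplicate click/retry: same payload
            result = self._commit_fn(swap_set)   # commits exactly once per key
            self._results[idempotency_key] = result
            return result
```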

Acceptance Criteria
Atomic One‑Click Apply Across Schedule, WOs, Techs, Bays, and Parts
Given a selected swap set modifies schedule slots, work orders, technician assignments, bay allocations, and parts reservations When the coordinator clicks One‑Click Apply Then all target records are updated in a single atomic transaction and each affected record’s version is incremented. Given any sub‑operation fails during the transaction When One‑Click Apply runs Then no target records are persisted and the system returns a single error with correlationId and reason codes. Given a successful commit affecting up to 25 jobs When One‑Click Apply completes Then recalculated start/finish times and SLA risk indicators are updated and visible within 3 seconds P95 and 8 seconds P99. Given parts inventory constraints exist When One‑Click Apply executes Then parts reservations are re‑allocated without violating availability or hold policies.
User Feedback, Partial Retry, and Safe Rollback
Given the core transaction commits successfully but downstream side‑effects (notifications/syncs) partially fail When One‑Click Apply completes Then the UI shows “Committed with follow‑ups” and the system retries failed side‑effects with exponential backoff for up to 24 hours, max 10 attempts per endpoint/channel. Given a side‑effect retry eventually succeeds When the system processes retries Then the UI activity log updates within 10 seconds and the status changes to “All follow‑ups delivered”. Given side‑effects exhaust all retries When failures persist Then the coordinator receives an in‑app alert with a consolidated list of failed endpoints/channels and a one‑click “Retry now” action. Given a core transaction step fails pre‑commit When One‑Click Apply executes Then the system performs a full rollback and no notifications or downstream syncs are triggered.
Idempotency and Duplicate Request Safety
Given the coordinator double‑clicks Apply or the client retries the same request within 60 seconds using the same idempotency key When the backend receives duplicate requests Then exactly one transaction is committed and all responses return the same result payload and correlationId. Given a network timeout occurs after the server commits When the client retries with the same idempotency key Then no duplicate notifications, reservations, or assignments are created. Given two distinct swap sets reference the same work order but have different idempotency keys When both are submitted sequentially Then the second request is processed only if the underlying resource versions match; otherwise it fails with Conflict and includes the current versions.
Concurrent Coordinator Conflict Resolution
Given two coordinators submit conflicting swap sets that touch the same vehicle, technician, bay, or work order within a 5‑second window When the system processes requests Then operations are serialized by resource and only the first transaction commits; the second returns Conflict with a diff of changes since plan generation. Given two coordinators submit non‑overlapping swap sets When processed concurrently Then both transactions commit without deadlocks and within 8 seconds P99. Given a conflict response is returned When the coordinator opts to re‑apply Then the client can request a refreshed swap set and re‑submit within one click.
Technician In‑App and Push Notifications on Schedule Change
Given a technician’s assignment time, bay, or job order changes due to the swap When One‑Click Apply commits Then the technician receives an in‑app alert and a mobile push within 15 seconds P95 containing work order ID, vehicle, new start time, bay, and coordinator note. Given the device token is invalid or push fails When delivery is attempted Then the system logs the failure, suppresses further push to that token, and still records an in‑app alert. Given the technician opens the alert When viewed Then an acknowledgement action is available and the acknowledgement timestamp is stored on the assignment.
Customer SMS/Email for Appointment Time Changes
Given a customer’s appointment time changes by 5 minutes or more When One‑Click Apply commits Then an SMS and email are sent per the customer’s contact preferences within 60 seconds P95 including the previous and new time window and the shop contact number. Given the customer has opted out of SMS or email When messages are prepared Then only permitted channels are used and no restricted channel is sent. Given a message send fails permanently When delivery providers return a hard failure Then the system records the failure, stops retrying that channel, and alerts the coordinator in‑app. Given the customer’s timezone differs from the shop’s When composing messages Then times are rendered in the customer’s timezone with offset.
Webhooks, Calendar Sync, and Dependent Record Updates
Given webhooks are configured for schedule changes When One‑Click Apply commits Then a POST is sent to each active endpoint within 10 seconds with HMAC‑SHA256 signature, eventId, idempotencyKey, and a schedule diff; failures are retried with exponential backoff for up to 24 hours. Given technician and bay calendars are enabled When One‑Click Apply commits Then calendar events are updated to the new times and resources within 10 seconds P95 and no overlapping events are created. Given dependent records exist (labor clocks not started, check‑in times pending) When the schedule shifts Then unstarted labor clocks are moved to the new start, check‑in windows adjust accordingly, and no negative or zero‑length durations are produced. Given reporting queries run after commit When queried Then they reflect the new schedule and assignments within 5 seconds.
SLA Guardrails and Policy Rules
"As an operations manager, I want enforceable rules that prevent the team from breaking SLAs unless explicitly approved so that we stay compliant and avoid penalties."
Description

Apply configurable policy rules that protect contractual SLAs and business constraints during both optimization and apply phases. Support hard blocks (e.g., cannot violate customer service windows or compliance checks) and soft warnings with override paths. Allow account-level policies such as tow-in priority, max moves per job, blackout times, and penalty thresholds. Require justification and optional approver for overrides; persist reason codes. Surface rule violations in the options panel and suppress options that cannot be executed under current policies.
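
One way to model the hard-block/soft-warning split is to have each policy rule return a typed violation: any hard violation suppresses the option outright, while soft violations surface as warnings feeding the override flow. Names below are illustrative.

```python
from dataclasses import dataclass
from enum import Enum

class RuleKind(Enum):
    HARD = "hard"     # suppress the option; no override path
    SOFT = "soft"     # show a warning; override needs justification/approver

@dataclass
class Violation:
    rule_name: str
    kind: RuleKind
    detail: str       # e.g., "Violates service window (10:00-12:00)"

def evaluate_option(option, rules) -> tuple[bool, list[Violation]]:
    """rules: callables that inspect the option and return a Violation
    or None. Returns (suppressed, violations) for the options panel."""
    violations = [v for rule in rules if (v := rule(option)) is not None]
    suppressed = any(v.kind is RuleKind.HARD for v in violations)
    return suppressed, violations
```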

Acceptance Criteria
Hard Block: Customer Service Window Cannot Be Violated
Given an account policy that customer service windows are hard blocks And a job with a committed service window from 10:00 to 12:00 And a proposed swap schedules the job to start at 12:15 When the optimization engine generates options or a coordinator attempts to apply the swap Then the option is not shown in the options list And attempting to apply the swap directly is blocked with an error "Violates service window (10:00–12:00)" And no override control is available And the schedule remains unchanged
Hard Block: Compliance Checks Prevent Assignment
Given a policy that requires a certified brake technician for brake-related jobs (hard block) And the only available technician lacks the required certification When the optimization engine evaluates swap options and when a coordinator attempts to apply a conflicting swap Then no options are generated that assign the job to the uncertified technician And the apply action is blocked with an error "Compliance check failed: Required certification missing" And no override path is presented And no changes are persisted
Soft Warning Override with Justification and Approver
Given an account policy with an SLA penalty warning threshold of $200 and approver role = Ops Manager And a proposed swap increases predicted SLA penalty by $240 but does not violate any hard block When the options panel displays the swap Then the option appears with a Warning badge and shows the predicted penalty delta ($240) When the coordinator selects Override Then a Justification modal requires a Reason Code selection and a free-text note of at least 10 characters And an approver field defaults to Ops Manager and is required When the approver approves the override Then the swap is applied, the schedule updates, and an audit record stores reason code, note, approver ID, user ID, timestamp, and impacted jobs And if the approver rejects, the schedule remains unchanged and the option is marked Rejected with rationale
Account Policy: Max Moves Per Job Enforced
Given an account policy Max Moves Per Job Per Day = 2 (hard block) And Job A has already been moved 2 times today When the optimization engine searches for swap options that would move Job A again Then no such options are generated and a suppression counter notes the reason "Max moves reached" When a coordinator attempts to apply a third move for Job A manually Then the action is blocked with an error "Max moves per job (2) reached today" and no override is available And the move count for Job A remains at 2
Account Policy: Blackout Times Respected
Given shop blackout time from 12:00 to 12:30 and lift L3 blackout from 15:00 to 15:30 (hard blocks) And a proposed swap would schedule work on lift L3 from 12:15 to 12:45 When optimization runs or a coordinator attempts to apply the swap Then the option is suppressed because it overlaps a blackout window And applying the swap is blocked with an error specifying the conflicting blackout window(s) And no override is available for blackout hard blocks
Tow-In Priority Policy in Optimization
Given an account policy that gives tow-in jobs Urgent priority over non-urgent scheduled work, provided no hard blocks are violated And a tow-in arrives at 10:10 with an estimated triage duration of 30 minutes And at least one qualified bay and technician are available by 10:20 When the optimization engine rebalances the schedule Then at least one option places the tow-in to start by 10:25 (within 15 minutes of arrival), if feasible And any displaced jobs respect Max Moves Per Job and other policies And options that would violate hard blocks (e.g., blackout, service window) are not generated And each option displays predicted start/finish times and SLA impact deltas
Options Panel: Violation Surfacing and Suppression
Given account-level policy rules are configured (hard and soft) When the options panel renders swap suggestions Then options with any hard rule violation are suppressed and not shown And the UI displays a count of suppressed options with breakdown by rule name And options with soft violations display the rule name, severity Warning, and impact metrics (lateness minutes and/or penalty cost) And selecting a warning-listed option reveals an Override action subject to policy-configured justification and approver And applying an overridden option persists the reason code(s) and note to the audit trail
Change Audit and One‑Step Rollback
"As a compliance lead, I want a complete audit trail with the ability to roll back a change so that we can investigate issues and recover quickly from mistakes."
Description

Record an immutable audit trail for every proposed and applied swap, capturing actor, timestamp, disruption source, option chosen, before/after schedule states, notifications sent, and rule overrides with reasons. Provide a bounded-time one-step rollback that restores the previous schedule state and reverses dependent updates where safe, while logging the rollback as a new audit entry. Offer searchable exports for compliance reviews and post-mortems.
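
A minimal Python sketch of the hash-chained, append-only log the criteria below describe, assuming SHA-256 over a canonical JSON rendering; the field names follow the criteria where they are named and are otherwise illustrative.

import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # append-only in this sketch; a real store would enforce immutability

def append_audit_entry(payload: dict) -> dict:
    # Each entry embeds the previous entry's hash, forming a tamper-evident chain.
    previous_hash = AUDIT_LOG[-1]["contentHash"] if AUDIT_LOG else "0" * 64
    entry = {
        "sequence": len(AUDIT_LOG) + 1,  # monotonically increasing
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "previousHash": previous_hash,
        **payload,
    }
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    entry["contentHash"] = hashlib.sha256(canonical.encode()).hexdigest()
    AUDIT_LOG.append(entry)
    return entry

append_audit_entry({"status": "proposed", "actorId": "u-17", "swapOptionId": "opt-3"})
append_audit_entry({"status": "applied", "actorId": "u-17", "appliedSwapId": "swap-9"})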

Acceptance Criteria
Audit Entry on Proposed Swap
Given a coordinator selects a swap option in response to a disruption and clicks Propose When the proposal is saved Then an audit entry is created with: actorId, actorRole, timestamp (UTC ISO 8601, ms), disruptionSourceType and id, swapOptionId and version, affectedWorkOrders list, predictedFinishTimes, beforeScheduleHash, proposalDiff, environment, and status=proposed And the audit entry is immutable and assigned a monotonically increasing sequence number And the audit entry is retrievable by id within 1 second
Audit Entry on Applied Swap with Before/After State and Notifications
Given a coordinator approves a proposed swap When the swap is applied to the schedule Then a new audit entry is recorded capturing: actorId, timestamp, disruptionSource, appliedSwapId, beforeScheduleSnapshotHash, afterScheduleSnapshotHash, affectedWorkOrders with before/after start-end times, technician and bay, ruleOverrides with reasonCodes, notificationsSent with type-channel-recipient-status-timestamps, and slaImpact delta And the after schedule state in the audit entry exactly matches the persisted schedule state at commit (hash equality) And all notification sends are logged in the same audit entry with delivery outcomes And the entry is append-only and linked to the previous entry via previousHash
One-Step Rollback Within Configured Window
Given an applied swap is the most recent schedule-changing entry and its age is <= configuredRollbackWindowMinutes (default 120, min 15, max 240) When an authorized coordinator clicks Rollback for that audit entry Then the system atomically restores the schedule to the exact beforeScheduleSnapshot referenced by that entry And reverses dependent updates where safe: reopens rescheduled work orders, cancels or resends notifications with reversal reason, restores technician and bay allocations, and recalculates SLA commitments And writes a new audit entry of type rollback that references the rolledBackAuditId and includes reversal actions and outcomes And disables further rollback on both the new rollback entry and the original applied entry And completes the rollback within 3 seconds for changes affecting up to 200 work orders
Rollback Safety Guardrails and Denials
Given an applied swap has irreversible dependencies (e.g., a technician has clocked in on a moved job, an inspection has started, a part has been consumed, or a subsequent schedule change has been applied) When a user attempts to roll back that swap Then the system prevents the rollback And displays specific blocking reasons and the earliest resolvable step, if any And records a rollbackAttempt audit entry with actorId, timestamp, targetAuditId, denialReasons, and no schedule changes applied And leaves notifications unchanged and marks rollback action as unavailable on the target audit entry
Searchable Audit and Export
Given at least 100,000 audit entries exist When a user with AuditExport permission searches by any combination of date range, actorId, vehicleId, workOrderId, disruptionSourceType, swapStatus (proposed, applied, rollback), and ruleOverride flag Then results return within 3 seconds for up to 10,000 matching records with stable sorting by timestamp descending And clicking Export produces a downloadable CSV or JSON file with schema: auditId, timestamp, actor, eventType, disruptionSource, beforeHash, afterHash, affectedWorkOrders, notifications, ruleOverrides, slaImpact, previousHash And all timestamps are UTC ISO 8601, ids are stable, and text fields are UTF-8 And exports include only records the user is authorized to view
Audit Immutability and Chain Integrity Verification
Given any audit entry is written When a system integrator calls the chain verification endpoint Then each entry exposes contentHash and previousHash forming a hash chain And the endpoint returns integrity=true when recomputed hashes match and the chain is unbroken And update or delete operations on audit entries are disallowed at the data layer and API; any attempted mutation creates a new audit entry of type amendmentAttempt with actorId, timestamp, targetAuditId, and outcome=blocked And only users with AuditAdmin role can access the verification endpoint; reads do not modify the chain
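
Continuing the audit-log sketch above, the chain verification endpoint's core check might recompute every hash and confirm each link, roughly as follows.

import hashlib
import json

def verify_chain(entries: list) -> bool:
    # Returns integrity=True only if every hash recomputes and every link matches.
    previous_hash = "0" * 64
    for entry in entries:
        if entry["previousHash"] != previous_hash:
            return False  # broken link
        body = {k: v for k, v in entry.items() if k != "contentHash"}
        canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
        if hashlib.sha256(canonical.encode()).hexdigest() != entry["contentHash"]:
            return False  # tampered content
        previous_hash = entry["contentHash"]
    return True

Run against the AUDIT_LOG built in the earlier sketch, verify_chain returns True until any stored entry is mutated.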

Clear-To-Roll

Verifies fixes before release by checking DTC clearance, short road-test thresholds, and sensor stability. Automatically reopens the job or schedules a follow-up if anomalies persist, so vehicles return to service with confidence and stay there.

Requirements

DTC Clearance Verification
"As a technician, I want the system to confirm that repaired DTCs stay cleared across cycles so that I can confidently return the vehicle without repeat faults."
Description

Validate that repair-resolved Diagnostic Trouble Codes (DTCs) are cleared and remain cleared across defined ignition/drive cycles before a vehicle is released. The system reads active and pending DTCs, verifies the malfunction indicator light (MIL) state, and checks key Mode $01/$03 data points to ensure no reappearance of target codes. It binds the verification to the originating work order and vehicle VIN, captures timestamps and technician ID, and enforces pass/fail criteria. If communication with the OBD-II adapter fails, it retries with graceful error handling and logs root causes. Successful verification updates the work order and vehicle status; failures trigger workflow rules (reopen or follow-up).
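
A rough sketch of the per-cycle check and retry behavior described here; read_mode is a hypothetical stand-in for the actual OBD-II adapter call (not a real library), and the 1s/2s/4s backoff mirrors the criteria below.

import time

def read_mode(adapter, mode: str):
    # Hypothetical placeholder; a real implementation would speak ELM327/J2534.
    raise NotImplementedError

def read_with_retry(adapter, mode: str, delays=(1, 2, 4)):
    # Retry a failed read with exponential backoff before declaring Fail-Technical.
    last_error = None
    for delay in (0, *delays):
        time.sleep(delay)
        try:
            return read_mode(adapter, mode)
        except IOError as error:
            last_error = error
    raise RuntimeError(f"Fail-Technical after {len(delays) + 1} attempts: {last_error}")

def check_cycle(adapter, target_dtcs: set) -> dict:
    # One check: stored DTCs (Mode $03), pending (Mode $07), MIL state (Mode $01 PID 01).
    stored = set(read_with_retry(adapter, "03"))
    pending = set(read_with_retry(adapter, "07"))
    mil_on, dtc_count = read_with_retry(adapter, "0101")
    return {
        "target_clear": not (target_dtcs & (stored | pending)),
        "mil_off": not mil_on,
        "dtc_count": dtc_count,
    }

A verification run would call check_cycle once per ignition cycle and pass only if every check reports target_clear and the final check shows the MIL off with a zero DTC count.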

Acceptance Criteria
Cross-Cycle DTC Clearance Confirmation
Given a work order with one or more target DTCs marked resolved for a specific VIN And verification thresholds are set to 2 ignition cycles and a minimum 5-minute cumulative drive time When the system performs verification and reads Mode $03 (stored DTCs), Mode $07 (pending DTCs), and Mode $01 PID 01 (MIL status/DTC count) after each cycle and at the final check Then none of the target DTCs appear in Mode $03 or Mode $07 at any check And the MIL is OFF and the DTC count is 0 at the final check And the verification outcome is Pass; otherwise the outcome is Fail with reason "Target DTC Present"
Mode $01/$03 Data Capture and Evidence Storage
Given DTC clearance verification is initiated When the system polls the OBD-II interface for each required check Then it captures and stores an immutable evidence snapshot per check including timestamp, ignition cycle index, Mode $03 DTC list, Mode $07 DTC list, Mode $01 PID 01 value, and adapter connection metadata And the evidence is retrievable by workOrderId and VIN within 2 seconds And evidence records are append-only and include an audit entry for creator, timestamp, and source
Work Order and VIN Binding
Given a technician starts verification from a work order for a specific VIN When the verification record is created Then it contains workOrderId, VIN, technicianId, startTimestamp, endTimestamp, adapterId, and outcome And if any required field is missing, the verification cannot be marked complete and the outcome is Fail-Validation with reason And the verification record is linked to and visible from the originating work order's activity log
OBD-II Communication Retry and Root-Cause Logging
Given a read attempt to the OBD-II adapter fails due to transport or protocol error When the system retries up to 3 times with exponential backoff (1s, 2s, 4s) for the failed step Then on success within retries, verification proceeds and the retry count is logged And on failure after retries, the verification ends with outcome Fail-Technical and a rootCause classified as one of [Timeout, ProtocolMismatch, AdapterNotFound, VehicleNotResponsive, PowerLoss] And the error log includes adapterId, signal metrics if available, vehicle voltage if available, timestamp, and the last error message
Pass/Fail Outcome Updates and Workflow Actions
Given a verification outcome is determined When the outcome is Pass Then the work order status is set to "Ready to Release" and the vehicle status is set to "Clear-To-Roll" within 5 seconds, idempotently When the outcome is Fail-Functional (target DTC present or MIL ON) Then the work order is reopened with reason "DTC Clearance Failed" and a follow-up task is created and assigned to the original technician When the outcome is Fail-Technical (communication failure) Then a follow-up verification job is scheduled within 24 hours and the current work order remains "In Progress" And all workflow changes are audit-logged with user, timestamp, and evidence link
Pending vs Stored DTC Decision Rules
Given target DTCs are absent in Mode $03 but present in Mode $07 at any required check When evaluating the outcome Then the verification result is Fail-Functional with reason "Pending DTC (target)" And all non-target DTCs observed are recorded as findings but do not change the target DTC clearance determination; the overall release decision remains subject to MIL state criteria
Guided Road-Test Protocol
"As a technician, I want a guided road-test checklist with clear thresholds so that I can validate repairs consistently and efficiently."
Description

Provide a guided, configurable short road-test workflow that ensures minimum distance/time, speed bands, RPM ranges, brake applications, and temperature windows are met to validate repairs under realistic conditions. The app prompts step-by-step actions, shows progress against thresholds, captures GPS and OBD-II telemetry in real time, supports limited offline buffering, and flags unmet criteria. Templates can be selected by vehicle profile or repair type, and results are automatically attached to the work order with a pass/fail summary and captured sensor traces.
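
As a sketch of the threshold tracking, accumulating time-in-band at 1 Hz might look like the following; per the criteria below, paused samples are excluded and speed bands require accumulated totals, not continuity. The sample shape is an assumption.

from collections import defaultdict

def accumulate_band_seconds(samples, bands):
    # samples: dicts like {"speed_mph": 42.0, "paused": False}, one per second
    # bands:   {"35-55": (35, 55), ...} band names with inclusive limits
    seconds = defaultdict(int)
    for sample in samples:
        if sample["paused"]:
            continue                      # paused time never counts toward thresholds
        for name, (low, high) in bands.items():
            if low <= sample["speed_mph"] <= high:
                seconds[name] += 1        # continuity not required, only totals
    return dict(seconds)

telemetry = [{"speed_mph": s, "paused": False} for s in (30, 40, 45, 50, 60)]
print(accumulate_band_seconds(telemetry, {"35-55": (35, 55)}))  # {'35-55': 3}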

Acceptance Criteria
Template Selection and Parameter Loading
Given a vehicle profile and repair type are present on the open work order When the user taps Start Road-Test Then the app suggests the top matching template(s) by vehicle profile and repair type, or applies the default template if only one match exists And the selected template’s thresholds (minimum distance, minimum time, speed bands, RPM ranges, brake application count, temperature window) are loaded exactly as configured and displayed to the user before the test starts And each threshold is labeled as Mandatory or Optional per the template And the applied template name, ID, and version are recorded with the session prior to data capture
Guided Prompts and Live Progress Tracking
Given a template is loaded and the user begins the road-test When the workflow starts Then the app displays step-by-step prompts generated from the template in the prescribed order And per-criterion progress indicators update at least once per second based on live telemetry And a criterion is marked Complete only when its threshold is met; otherwise it remains In Progress And the Finish Test action is disabled until all Mandatory criteria are Complete And if the user attempts to finish early, the app presents a list of unmet criteria with current vs required values
Threshold Validation: Distance, Time, Speed, RPM, Brake, Temperature
Given telemetry capture is active with a loaded template When the system evaluates thresholds during the test Then Minimum Distance passes when GPS-derived distance >= configured value (tolerance ±1% or 10 m, whichever is greater) And Minimum Time passes when active test duration (excluding Paused time) >= configured value And each Speed Band passes when accumulated time within the band >= configured seconds without requiring continuity And RPM Range criteria pass when engine RPM reaches each configured range for >= configured seconds And Brake Applications pass when counted off→on transitions of the brake pedal status PID >= configured count And Temperature Window passes when coolant temperature enters and remains within the configured window for >= configured seconds And all evaluations exclude periods when the test is Paused
Real-Time Telemetry Capture and Data Quality
Given the device is online and the road-test is running When capturing GPS and selected OBD-II PIDs Then samples are recorded at 1 Hz or faster for GPS and each selected PID with monotonic UTC timestamps And maximum timestamp skew between GPS and OBD-II streams is <= 200 ms And no online data gap exceeds 2 consecutive seconds; any gap is logged as a data quality warning And each sample includes vehicle ID, work order ID, template ID, PID identifier, value, and timestamp metadata And sensor traces are preserved at full sampling resolution for attachment and review
Offline Buffering and Sync
Given network connectivity is lost during an active road-test When the app transitions offline Then GPS and up to 20 selected PIDs are buffered locally for at least 15 minutes at 1 Hz without data loss And offline samples are flagged with offline=true in metadata And the UI displays an Offline badge and remaining buffer capacity, warning the user at >=80% utilization And upon reconnection, buffered data is synced in chronological order without duplicates, and data quality warnings are cleared if sync succeeds And the test cannot be finalized as Pass until all buffered data relevant to Mandatory criteria has successfully synced
Pass/Fail Summary and Work Order Attachment
Given the user ends the road-test When evaluation completes Then a Pass result is produced only if all Mandatory criteria pass; otherwise the result is Fail And the summary lists each criterion with threshold, achieved value(s), and pass/fail status, plus any data quality warnings And GPS and OBD-II sensor traces for the test interval are attached to the work order and accessible within 60 seconds if online, or within 60 seconds of reconnection if offline And the attachment includes session ID, user ID, vehicle ID, work order ID, template ID/version, start/end timestamps, and app version And unmet criteria are clearly flagged in the summary for technician follow-up
Sensor Stability Analytics
"As a fleet manager, I want automatic analysis of critical sensors post-repair so that hidden issues are caught before the vehicle returns to service."
Description

Analyze key sensor signals before, during, and after the road test to detect instability, drift, or out-of-range patterns that indicate unresolved issues. Metrics include variance and trend checks for fuel trims (STFT/LTFT), O2 sensor switching behavior, coolant temperature stabilization, battery voltage under load, misfire counts, and brake-related telemetry where available. The system compares post-repair signals to pre-repair baselines or fleet norms, computes a stability score, and highlights anomalies with visual traces and thresholds. Results feed the overall Clear-To-Roll decision and drive automatic follow-up actions.
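
Taking the fuel-trim checks as an example, the variance and trend math reduces to a few lines; this sketch uses Python's standard-library statistics.linear_regression (3.10+) and the slope/variance thresholds from the criteria below.

import statistics

def fuel_trim_stable(stft_samples, window_seconds=300, sample_hz=1):
    # Last 5 minutes of steady cruise: pass when |slope| <= 1 %/min
    # and variance <= 6 (%^2), per bank.
    n = min(len(stft_samples), window_seconds * sample_hz)
    window = stft_samples[-n:]
    minutes = [i / (60 * sample_hz) for i in range(n)]
    slope, _ = statistics.linear_regression(minutes, window)  # drift in %/min
    variance = statistics.pvariance(window)
    return abs(slope) <= 1.0 and variance <= 6.0

stft = [2.0, 2.5, 1.5, 2.0] * 75       # 300 s of samples at 1 Hz
print(fuel_trim_stable(stft))          # True: flat trend, variance ~0.125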

Acceptance Criteria
Baseline Selection and Normalization
Given a vehicle has ≥10 minutes of mixed-drive pre-repair telemetry within the last 30 days, When Sensor Stability Analytics runs, Then it uses that dataset as the baseline and records its timestamp and coverage. Given the pre-repair baseline is missing or insufficient, When analysis runs, Then it selects fleet norms matched by VIN/engine family, model year, fuel type, and similar ambient temperature (±10°C) and payload class (±20%) and records the baseline source. Given any baseline is selected, When comparisons execute, Then values are normalized for ambient temperature, altitude, and fuel type and the normalization parameters are stored with the result. Given no suitable baseline can be found, When analysis executes, Then the vehicle is marked "Needs Baseline", Clear-To-Roll is blocked with a reason code, and a 30-minute drive baseline task is created and assigned.
Fuel Trim Stability During Road Test
Given the engine is in closed-loop and coolant temp ≥80°C, When cruising at 35–55 mph for ≥3 minutes, Then per-bank LTFT |value| ≤10% and mean STFT between -5% and +5%. Given the last 5 minutes of steady cruise, When analyzing trend, Then |slope(STFT)| ≤1%/min and variance(STFT) ≤6 (%^2) per bank. Given 2–3 moderate acceleration events, When analyzing combined trims, Then |STFT + LTFT| ≤15% for ≥95% of samples per bank. Given any threshold is breached for >10 consecutive seconds, When detected, Then flag a "Fuel trim instability" anomaly with bank, timestamps, and threshold overlays.
O2 Sensor Switching Health (Gasoline Engines)
Given upstream O2 sensors are in closed-loop and coolant temp ≥80°C, When idling for 60 seconds, Then each upstream sensor switches ≥8 times per 10 seconds with amplitude ≥0.6 V and spends ≤70% of the time on either the rich or lean side. Given a steady 1500–2500 rpm hold for 60 seconds, When analyzing, Then switching frequency ≥10 per 10 seconds and cross-counts ≥7 per 10 seconds per upstream sensor. Given any upstream O2 sensor flatlines (>5 seconds without crossing) or amplitude <0.2 V, When detected, Then flag an "O2 switching abnormal" anomaly with sensor ID and bank. Given the vehicle is diesel or O2 switching is not applicable, When analysis runs, Then these checks are skipped and marked "N/A" without penalizing the score.
Coolant Temperature Warm-up and Stabilization
Given a cold start with ambient 0–30°C, When the road test begins, Then coolant reaches 80–105°C within 10 minutes or 5 miles, whichever comes first. Given the thermostat opening event is detected, When analyzing steady-state cruise after opening, Then temperature variance ≤3°C over 3 minutes and no single drop >10°C. Given ambient <0°C, When evaluating warm-up, Then time/mileage thresholds are relaxed by 50% and the adjustment is annotated. Given persistent under-temperature or oscillation beyond thresholds, When detected, Then flag a "Cooling system instability" anomaly with timestamps and overlaid thresholds.
Battery and Charging System Under Load
Given engine cranking from ≥80% SOC, When analyzing the crank event, Then minimum battery voltage does not drop below 9.6 V for more than 1 second. Given idle with headlights, blower, and rear defrost on, When analyzing charging, Then system voltage is within 13.5–14.8 V and AC ripple ≤0.5 V p‑p. Given a 2000 rpm hold with the same loads, When analyzing, Then voltage remains within 13.5–14.8 V and differs from idle by ≤0.5 V. Given voltage sag or ripple exceeds thresholds, When detected, Then flag a "Battery/charging anomaly" with operating condition, sample window, and min/max values.
Misfire and Brake Telemetry Anomaly Detection
Given per-cylinder misfire counts are available, When analyzing a 10-minute mixed drive, Then misfire rate is <5 per 1000 revolutions per cylinder and no continuous misfire lasts >3 seconds. Given brake pressure/travel telemetry is available, When analyzing downhill and stop‑go segments, Then steady‑pedal noise RMS ≤10% full scale, no oscillations >15% full scale for >2 seconds, and ABS activation rate ≤0.05 per mile in dry conditions. Given thresholds are exceeded, When detected, Then flag "Ignition misfire" or "Brake telemetry abnormal" with sensor/channel, context, and timestamps; otherwise mark the metric "Pass" or "N/A" as appropriate.
Stability Score, Visualization, and Follow-up Automation
Given all metric analyses are complete, When computing the stability score, Then the system outputs a 0–100 score with component weights: Fuel trims 25%, O2 switching 20%, Coolant 15%, Battery/charging 15%, Misfire 20%, Brake telemetry 5%, and stores component scores. Given stability score ≥85 and no critical anomalies open, When Clear-To-Roll evaluation triggers, Then this requirement contributes "Pass" and the system stores visual traces with baseline and threshold overlays, accessible in the repair job. Given score <85 or any critical anomaly is flagged, When evaluation triggers, Then the job is automatically reopened with reason codes and a follow-up drive-test task scheduled within 72 hours, and notifications are sent to the assigned technician and fleet manager. Given any metric is marked "N/A", When computing the score, Then weights are re-normalized to 100% and the score audit (inputs, weights, and totals) is stored with the result.
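
A small sketch of the weighted score with N/A re-normalization specified above; the weights are the ones stated in the criteria.

WEIGHTS = {
    "fuel_trims": 0.25, "o2_switching": 0.20, "coolant": 0.15,
    "battery_charging": 0.15, "misfire": 0.20, "brake_telemetry": 0.05,
}

def stability_score(component_scores: dict) -> float:
    # Weighted 0-100 score; metrics marked None (N/A) drop out and the
    # remaining weights are re-normalized back to 100%.
    applicable = {k: v for k, v in component_scores.items() if v is not None}
    total_weight = sum(WEIGHTS[k] for k in applicable)
    return sum(WEIGHTS[k] * v for k, v in applicable.items()) / total_weight

# Diesel example: O2 switching is N/A, so its 20% is redistributed pro rata.
print(stability_score({
    "fuel_trims": 90, "o2_switching": None, "coolant": 88,
    "battery_charging": 95, "misfire": 100, "brake_telemetry": 80,
}))  # 92.4375, which clears the >=85 Clear-To-Roll bar
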
Auto Reopen or Follow-up Scheduling
"As a service coordinator, I want the system to automatically reopen jobs or schedule follow-ups based on test results so that no failing vehicle slips back into service."
Description

Automate post-test outcomes by reopening the work order with diagnostic findings if DTCs reappear or stability checks fail, or by scheduling a follow-up inspection when borderline metrics are detected. The system assigns the task to the appropriate shop or technician, proposes earliest available slots on the maintenance calendar, and notifies stakeholders via in-app, email, or SMS. All generated tasks link to the originating repair, include reason codes and evidence, and update the vehicle’s availability timeline to reduce unplanned downtime.
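
The outcome routing described here might reduce to a mapping like this sketch; the status string follows the criteria below, while the function shape and return fields are assumptions.

def post_test_action(dtc_reappeared: bool, stability_failed: bool, borderline: bool) -> dict:
    # Failures reopen the work order; borderline results schedule a follow-up.
    if dtc_reappeared or stability_failed:
        return {"action": "reopen_work_order",
                "status": "Reopened - Post-Test Failure",
                "attach_evidence": True}
    if borderline:
        return {"action": "schedule_follow_up",
                "due": "within configured SLA window",
                "reason_codes": ["BORDERLINE_METRIC"]}
    return {"action": "none"}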

Acceptance Criteria
Auto Reopen on Post-Test Failure
Given a vehicle has a completed repair with Clear-To-Roll post-test enabled And the configured post-test window and thresholds are active When any monitored DTC reappears during or within the post-test window Or any configured sensor stability check fails during the road test Then the system reopens the originating work order And attaches the latest diagnostic snapshot, DTCs, and sensor readings as evidence And sets the work order status to "Reopened - Post-Test Failure" And records standardized reason codes for each failure condition And time-stamps the action and actor as "System"
Auto Follow-Up on Borderline Metrics
Given a vehicle's post-test results stay within failure thresholds but trigger configured borderline metrics And a follow-up inspection SLA is defined When the post-test completes Then the system creates a new follow-up task linked to the originating repair And sets the due date within the configured SLA window And pre-populates standardized reason codes for borderline metrics with evidence snapshots And assigns the task priority per configuration
Smart Assignment to Shop/Technician
Given shop and technician capabilities, certifications, and availability are maintained And the vehicle's home base and current location are known When a work order is reopened or a follow-up task is created Then the system assigns the task to the best-matching shop or technician based on capability, proximity, and availability rules And resolves assignment conflicts by applying configured tie-breakers And records the assignee selection rationale in the task audit log
Earliest Slot Proposal and Booking
Given the maintenance calendar has open slots, blackout periods, and resource constraints When a task is created or reopened Then the system proposes the three earliest available slots that satisfy duration and resource requirements And, if auto-booking is enabled, books the earliest acceptable slot And, if no slots exist within the SLA, flags a scheduling exception and proposes the next best alternatives And writes the scheduled time to the task and assignee calendars
Multichannel Stakeholder Notifications
Given stakeholder notification preferences for in-app, email, and SMS are configured When a task is reopened or a follow-up is scheduled Then the system sends notifications via each enabled channel within 60 seconds And includes task type, vehicle identifier, reason codes, proposed or confirmed time, and links to evidence And records delivery status and retries on transient failures up to the configured limit And creates an in-app alert that persists until acknowledged
Task Linkage, Reason Codes, and Evidence
Given a task is created by the Clear-To-Roll process When the task is persisted Then the task stores a link to the originating repair and post-test session ID And includes at least one standardized reason code And attaches evidence artifacts including DTC logs, sensor graphs, and timestamps accessible to the assignee And prevents task closure until attached evidence is viewable
Vehicle Availability Timeline Update
Given a vehicle's availability timeline is maintained When a work order is reopened or a follow-up is scheduled Then the vehicle's availability is updated to reflect scheduled downtime and travel buffers per configuration And conflicts with existing commitments are detected and surfaced And utilization metrics are recalculated within 5 minutes And a change log entry is added with before and after availability windows
Release Gate and Overrides
"As a fleet manager, I want a release gate with controlled overrides so that vehicles only return to service when they truly meet safety and reliability criteria."
Description

Enforce a release gate that prevents a vehicle from being marked Available until all Clear-To-Roll checks pass. Authorized users can issue a documented override with a mandatory reason and risk acknowledgment. The gate integrates with fleet status, dispatch views, and APIs so downstream systems reflect availability in real time. All decisions, overrides, and timestamps are audit-logged, and reversal flows are supported if subsequent anomalies are detected.
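
A framework-agnostic sketch of the check a PATCH /vehicles/{id}/availability handler might perform; the 409 body fields follow the criteria below, and the function shape is an assumption.

def availability_response(requested: str, failing_checks: list) -> tuple:
    # Returns (http_status, body); the gate blocks Available while checks fail.
    if requested == "Available" and failing_checks:
        return 409, {
            "error_code": "release_gate_blocked",
            "reasons": [check["name"] for check in failing_checks],
            "failing_checks": failing_checks,
        }
    return 200, {"status": requested}

status, body = availability_response("Available", [
    {"name": "dtc_clearance", "detail": "P0301 active"},
])
assert status == 409 and body["error_code"] == "release_gate_blocked"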

Acceptance Criteria
Gate Blocks Availability Until Clear-To-Roll Passes
Given a vehicle with one or more failing Clear-To-Roll checks When a user attempts to set the vehicle to Available via the FleetPulse UI Then the action is blocked, the vehicle status remains Unavailable, and the UI lists the specific failing checks Given the same vehicle When an API client sends PATCH /vehicles/{id}/availability with value "Available" Then the response is 409 Conflict with error_code "release_gate_blocked" and a failing_checks array, and the vehicle remains Unavailable Given a bulk-availability operation that includes the vehicle When the operation executes Then the vehicle is skipped with a per-item failure result and other eligible vehicles are processed
Auto-Release and Real-Time Propagation
Given a vehicle currently held by the release gate due to pending checks When all Clear-To-Roll checks pass on the next evaluation cycle Then the system automatically updates the vehicle status to Available and records an auto-release decision Given the auto-release occurs When the dispatch board is viewed and GET /vehicles/{id} is called Then both reflect Available within 5 seconds of the decision time Given the auto-release occurs When webhooks are configured Then a status_changed webhook is delivered at least once within 30 seconds with old_status, new_status, and decision_type "auto_release"
Authorized Override with Mandatory Reason and Risk Acknowledgment
Given a user with permission release.override and a vehicle failing Clear-To-Roll checks When the user initiates an override release Then the UI requires a non-empty reason (minimum 10 characters) and a checked risk acknowledgment before enabling Confirm Given the override is confirmed When processed Then the vehicle becomes Available, an override banner is shown on the vehicle detail, and the override is revocable by users with release.override Given an unauthorized user attempts an override When submitted Then the request is rejected with 403 Forbidden in API and the UI shows an authorization error
Comprehensive Audit Logging of Gate Decisions
Given any release decision or attempt (blocked, auto-release, manual override, reversal) When it occurs Then an immutable audit record is created containing vehicle_id, actor_type, actor_id, action, source (UI/API/system), timestamp (ISO-8601 UTC), previous_status, new_status, reason, failing_checks (if any), and correlation_id Given audit records exist When queried by vehicle_id and time range Then results include all matching records in chronological order and are exportable as CSV and JSON Given an audit record When retrieval is attempted with its id Then the record content cannot be modified or deleted via public APIs
Automatic Reversal on Post-Release Anomalies
Given a vehicle set to Available by auto-release or override When a monitored anomaly (e.g., DTC reappears, sensor instability beyond threshold, failed road-test criteria) is detected on the first completed trip or within 24 hours, whichever comes first Then the system changes status to Needs Attention, reopens the previous job or schedules a follow-up task, and records a reversal decision with detected_anomalies listed Given the reversal occurs When dispatch and API consumers refresh Then the updated status propagates within 5 seconds and a status_changed webhook with decision_type "reversal" is emitted Given a user with permission release.revoke When the user manually reverses an Available status due to a discovered issue Then the status changes to Needs Attention and the manual reversal is audit-logged with reason
API and Dispatch View Consistency and Contracts
Given a status change event to or from Available When the dispatch view loads and API GET /vehicles and /vehicles/{id} are called Then all surfaces show the same status value and last_changed timestamp Given a client attempts to set Available but the gate blocks it When calling PATCH /vehicles/{id}/availability Then the response is 409 with a documented problem+json body including error_code "release_gate_blocked", reasons[], and link to failing checks Given webhooks are enabled When a status changes Then a signed webhook is delivered with an HMAC signature header and a monotonically increasing sequence id to preserve ordering
Concurrency and Race-Condition Safety at Release Gate
Given concurrent events where an override is submitted while new failing checks are ingested When processed Then the system resolves deterministically such that no momentary Available is observable if checks are failing, and the final state reflects the latest validated decision Given concurrent API writes When clients include If-Match with the last ETag from GET Then conflicting updates return 412 Precondition Failed and no status change occurs Given high-throughput updates When observed in logs and UI Then no duplicate or out-of-order status_changed webhooks are delivered for the same decision_id
Configurable Thresholds by Vehicle Profile
"As an administrator, I want to configure road-test and sensor thresholds by vehicle profile so that validation aligns with each asset’s operating realities."
Description

Allow administrators to define and version test thresholds, sensor limits, and protocol steps by vehicle class, engine type, and duty cycle, with per-VIN exceptions. Provide preset templates (e.g., light-duty gas, medium-duty diesel) and validation on save to avoid conflicting settings. Changes are tracked with effective dates and applied consistently across the Clear-To-Roll workflow, enabling tighter standards for critical assets and relaxed criteria where appropriate.
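
The resolution precedence spelled out in the criteria below (per-VIN exception, then class+engine+duty profile, then class+engine preset, then global defaults) might be sketched as follows; the data shapes are assumptions.

def resolve_thresholds(vin, vehicle, profiles, presets, global_defaults, exceptions):
    # profiles:   {(class, engine, duty): {field: value}} latest effective versions
    # presets:    {(class, engine): {field: value}} preset template mappings
    # exceptions: {vin: {field: value}} per-VIN overrides, applied field-by-field
    key3 = (vehicle["class"], vehicle["engine"], vehicle["duty"])
    key2 = (vehicle["class"], vehicle["engine"])
    base = profiles.get(key3) or presets.get(key2) or global_defaults
    resolved = dict(base)
    resolved.update(exceptions.get(vin, {}))  # VIN exceptions win per field
    return resolved

profiles = {("LD", "GAS", "DELIVERY"): {"coolant_max_c": 105}}
presets = {("LD", "GAS"): {"coolant_max_c": 108, "min_drive_min": 5}}
print(resolve_thresholds("VIN123", {"class": "LD", "engine": "GAS", "duty": "DELIVERY"},
                         profiles, presets, {"coolant_max_c": 110}, {}))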

Acceptance Criteria
Create Profile With Thresholds And Save-Time Validation
Given an admin defines a new profile keyed by vehicle class, engine type, and duty cycle When any sensor limit has min >= max, a required field is empty, or two protocol steps share the same sequence number Then the save is blocked and field-level errors identify each conflicting setting Given all sensor limits have min < max, protocol steps have unique sequence numbers, and required fields are populated When the admin saves with an immediate effective date Then the profile is created as version 1 with the specified effective date/time and is available for resolution Given an existing profile for the same class+engine+duty has an overlapping effective date range When the admin attempts to save another overlapping version Then the save is blocked with an error indicating overlapping effective dates for that profile key
Apply Preset Template To Initialize Profile
Given an admin selects the “Light-Duty Gas” or “Medium-Duty Diesel” preset template while creating a profile When the template is applied Then default threshold values, sensor limits, and protocol step order populate the form Given the admin edits some fields after template application When the profile is saved Then the saved version reflects edited fields and retains template defaults for unedited fields, and the template source is recorded in the version metadata
Versioning With Effective Dates And Clear-To-Roll Lock-In
Given profile version v1 is effective now and version v2 is scheduled with a future effective date T1 When a Clear-To-Roll run starts before T1 Then the run uses v1 for all validations and logs the applied profile version ID Given the same schedule When a Clear-To-Roll run starts at or after T1 Then the run uses v2 for all validations and logs the applied profile version ID Given an admin creates a new immediate version v3 When saved Then v2 remains in history, v3 becomes effective for new runs, and historical runs continue to reference the version locked at their start time
Per-VIN Exceptions Override Profile Values
Given a vehicle VIN has an exception overriding one or more threshold fields When a Clear-To-Roll run executes for that VIN Then the overridden fields use the exception values and all other fields come from the resolved profile version Given an exception is defined with an invalid constraint (e.g., min >= max) When saving the exception Then the save is blocked with field-level errors Given an exception is removed When the next run executes Then the vehicle reverts to using the resolved profile values for all fields
Profile Resolution Precedence And Fallbacks
Given a vehicle has class C, engine E, and duty D and a profile exists for C+E+D When resolving thresholds for a run Then the system selects the latest effective version for C+E+D Given no C+E+D profile exists but a default preset template is mapped to C+E When resolving thresholds Then the system uses the mapped preset template values Given neither a C+E profile nor mapping exists When resolving thresholds Then the system uses the global default template values Given the VIN has exceptions When resolving thresholds Then per-VIN exception values take precedence over the resolved profile/template values for their specific fields
Change Tracking And Audit History
Given an admin creates or edits a profile version or a per-VIN exception When the change is saved Then the system records actor, timestamp, affected entity (profile key or VIN), before/after values, and effective date in an immutable audit log Given a user views a profile’s history When requesting change details between two versions Then the system displays a diff of changed fields and the effective date/time for each version Given a compliance export request for profile changes within a date range When exporting Then the system provides a CSV/JSON file containing the audit records with all recorded metadata
Role-Based Access For Configuration
Given a user without Administrator role When attempting to create, edit, or delete a profile version or per-VIN exception Then the action is denied and the user can only view resolved values Given a user with Administrator role When creating, editing, or scheduling versions and exceptions Then the actions succeed subject to validation rules and all changes are audited
Compliance Audit Trail and Reporting
"As a fleet owner, I want a documented audit trail and reports of repair validation so that I can prove compliance, manage vendors, and reduce repeat failures."
Description

Capture a complete, immutable record of the Clear-To-Roll session including DTC snapshots, sensor traces, GPS route, pass/fail criteria, technician notes and signatures, timestamps, and device metadata. Provide searchable history at the vehicle and work-order level, plus exportable PDF/CSV reports for audits, warranties, and customer communication. Offer KPIs such as first-pass fix rate, recurrence within 7/30 days, and average time-to-release to drive continuous improvement.
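
The two KPI rules formalized in the criteria below might compute as in this rough sketch; the work-order record shape (ctr_attempts, passing_ctr_at, recurrence flags) is hypothetical.

def first_pass_fix_rate(work_orders, window_days=30):
    # Numerator: first CTR attempt passed with no recurrence inside the window.
    # Denominator: work orders with at least one CTR attempt in the period.
    attempted = [wo for wo in work_orders if wo["ctr_attempts"]]
    if not attempted:
        return None
    fixed = [wo for wo in attempted
             if wo["ctr_attempts"][0]["passed"]
             and not wo.get(f"recurrence_within_{window_days}d", False)]
    return len(fixed) / len(attempted)

def avg_time_to_release_hours(work_orders):
    # Mean hours from work-order creation to the passing Clear-To-Roll, one decimal.
    durations = [(wo["passing_ctr_at"] - wo["created_at"]).total_seconds() / 3600
                 for wo in work_orders if wo.get("passing_ctr_at")]
    return round(sum(durations) / len(durations), 1) if durations else None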

Acceptance Criteria
Immutable Clear-To-Roll Session Record
Given a Clear-To-Roll session is finalized by a signed-in technician, When "Finalize Session" is confirmed, Then the system stores an immutable session record containing: sessionId, vehicleId, VIN, workOrderId, startTimestampUTC, endTimestampUTC, deviceId, deviceFirmware, appVersion, technicianId, technicianSignature(method, image/hash), DTC snapshot (active and cleared with timestamps and freeze-frame), evaluated pass/fail criteria with outcomes, sensor trace segments (≥1 Hz) from the road test, GPS route polyline with start/end coordinates, distance, and duration, and technician notes. Given the record is stored, When any user attempts to edit or delete original fields, Then the system prevents direct mutation, creates a new append-only version linked via previousVersionId, and preserves the original; each version includes a SHA-256 contentHash and previousHash for chain integrity. Given integrity verification is executed, When contentHash is recomputed, Then it matches the stored hash for 100% of records; mismatches are blocked from export and flagged with severity "Critical". Given required fields are missing, When "Finalize Session" is attempted, Then the save is rejected with field-level errors and no partial record is created.
Vehicle and Work-Order Searchable History
Given a user is on a vehicle profile, When they open "Clear-To-Roll History", Then the list shows all CTR sessions for that vehicle with pagination (50 per page) and can be filtered by date range, technician, pass/fail status, and DTC code. Given a user is on a work order, When searching by workOrderId or vehicle VIN/plate, Then matching sessions are returned with exact matches ranked first. Given a dataset of up to 50,000 sessions for the tenant, When a filtered search is executed, Then the first page of results returns within 2 seconds p95. Given no sessions match filters, When search is executed, Then an empty state is shown with zero results and no errors. Given a session row is selected, When the user opens it, Then the full immutable record view opens within 2 seconds and displays all captured fields.
Exportable Audit Reports (PDF and CSV)
Given a session detail view, When "Export PDF" is clicked, Then a PDF is generated containing the session header (vehicle, VIN, workOrderId), timestamps in user-selected timezone, DTC snapshot, sensor trace charts, GPS route map, evaluated criteria with pass/fail, technician notes, signature image or hash, and device metadata. Given the same session, When "Export CSV" is clicked, Then a CSV is generated with one row per session and a companion CSV for sensor trace timeseries with columns: timestampUTC, sensorName, value, unit; values match stored data exactly. Given a multi-select of up to 500 sessions, When "Bulk Export" is initiated, Then PDF is delivered as a zip and CSV as consolidated files within 60 seconds p95; larger requests are processed asynchronously with email notification and in-app download link. Given an export is generated, When the file is created, Then an audit log entry is recorded with who, when, what, and a checksum; the file name follows pattern FleetPulse_CTR_{vehicleId|workOrderId}_{YYYYMMDDThhmmssZ}.{pdf|csv|zip}. Given a session contains no GPS data, When a PDF export is generated, Then the report clearly indicates "GPS unavailable" without failing the export.
KPI Accuracy and Drilldowns
Rule: First-pass fix rate = count of work orders whose first Clear-To-Roll attempt passed and had no DTC recurrence within the selected window divided by total work orders with at least one Clear-To-Roll attempt in the period; displayed for 7-day and 30-day windows. Rule: Average time-to-release = mean duration from work order creation timestamp to the timestamp of the passing Clear-To-Roll for work orders that achieved a pass in the selected period; units shown in hours with one decimal. Given a date range and filters (vehicle, technician, depot), When KPIs are rendered, Then values match a direct query against session/work-order records within ±0.1% and update within 15 minutes of new data. Given a KPI tile is clicked, When the user drills down, Then the list of contributing work orders/sessions is shown and totals reconcile exactly with the KPI numerator/denominator. Given missing or partial data, When metrics are computed, Then affected items are excluded with a "data incomplete" badge and an info tooltip explains the rule.
Recurrence Detection and Linking (7/30 Days)
Rule: A recurrence is detected when any DTC code present at the time of a passing Clear-To-Roll reappears as active on the same vehicle within 7 or 30 calendar days of the pass timestamp. Given a recurrence is detected, When the new DTC event is ingested, Then the prior session is flagged "Recurrence within 7 days" or "Recurrence within 30 days", a link is created between events, and the KPI sources update within 15 minutes. Given multiple DTCs recur, When detection runs, Then only one recurrence flag per window is applied to the prior session and duplicates are suppressed. Given no recurrence occurs within the window, When the window elapses, Then the session remains unflagged and contributes to the first-pass fix numerator. Given a recurrence is manually dismissed by an authorized user, When dismissed, Then the dismissal reason is required, audit-logged, and KPI recalculations reflect the dismissal.
Technician Signature and Identity Verification
Given a technician is finalizing a session, When they sign, Then the system requires authenticated user context, captures signature method (drawn/typed/cert), signature image or PKI certificate hash, signer userId, and signedTimestampUTC. Given the session is later versioned, When any field affecting pass/fail is changed via an append-only new version, Then a new technician signature is required and the previous signature remains associated with the earlier version. Given a device is offline, When a session is finalized and signed, Then the signature payload and session data are stored locally with a content hash and synchronize within 10 minutes of reconnect without loss or modification. Given a report is exported, When the report renders, Then the technician name and signature (image or hash) are visible and verifiable in the output. Given an unauthorized user attempts to finalize without a signature, When "Finalize Session" is clicked, Then the action is blocked with a 403 error and audit log entry.

TimeFit Estimates

Learns real labor times by tech, vehicle, and task to refine duration estimates. Prevents over/under-booking, improves promise accuracy, and reveals where training or process tweaks can unlock more throughput.

Requirements

Unified Job Time Capture
"As a service manager, I want actual labor time captured automatically per task so that estimates can be trained on accurate data and my team avoids manual timekeeping errors."
Description

Capture precise actual labor time for each work order and task with start, pause, resume, and complete events. Support multiple input channels (mobile app clock-in/out, shop kiosk, and OBD-II ignition/state signals) with offline caching and automatic sync. Attribute each time segment to technician, vehicle (VIN), task code, bay, and required tools; handle multi-tech and parallel tasks. Provide deduplication and exception handling for missing or conflicting events, configurable rounding rules, and audit logs. Persist records in FleetPulse maintenance history to create reliable ground truth for model training and reporting.
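
As one example of the configurable rounding, a sketch that nets out paused intervals and applies a 6-minute increment; the mode names and interval shape are illustrative.

import math

def net_active_seconds(intervals):
    # Sum active (start, end) pairs in epoch seconds; paused gaps are simply absent.
    return sum(end - start for start, end in intervals)

def apply_rounding(raw_seconds: float, increment_minutes: int = 6, mode: str = "floor"):
    # Returns both raw and rounded values, since both must be persisted.
    increment = increment_minutes * 60
    units = raw_seconds / increment
    rounded = {"nearest": round, "floor": math.floor, "ceil": math.ceil}[mode](units)
    return raw_seconds, rounded * increment

raw = net_active_seconds([(0, 500), (700, 1400)])  # 1200 s active, 200 s paused
print(apply_rounding(raw, 6, "floor"))             # (1200, 1080)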

Acceptance Criteria
Mobile App Clock-In/Out with Offline Caching and Auto-Sync
Given a technician is assigned to a work order and loses connectivity When they tap Start, Pause, Resume, or Complete in the mobile app Then the event is timestamped locally in UTC with millisecond precision and queued for sync Given queued events exist and connectivity is restored When the app detects network availability Then all queued events sync within 60 seconds, preserving original timestamps and event order Given a successful sync When the technician views the work order timeline Then the newly synced events appear in chronological order and total actual labor time updates within 5 seconds Given the device clock differs from server time by more than 60 seconds When events are synced Then server applies NTP-corrected offset and records both device and server timestamps in the audit log
Shop Kiosk Time Capture for Multi-Tech Attribution
Given a kiosk is configured and a technician authenticates via PIN/RFID When they select a work order and task code and tap Start Then a new active time segment is created attributed to technician, VIN, task code, bay, and required tools Given multiple technicians authenticate and start on the same task When they clock in Then separate time segments are created per technician and total labor time for the task equals the sum of per-tech segments Given a technician has an active segment on Task A When they attempt to start Task B on the kiosk Then the system blocks the action or prompts to pause/complete Task A based on shop policy configuration Given a technician clocks out from the kiosk When the action is successful Then the technician sees a confirmation with segment duration to the second and the segment appears in the timeline within 5 seconds
OBD-II Ignition/State Signals for Auto-Suggested Time Segments
Given a vehicle’s OBD-II device reports ignition ON with a recognized VIN linked to an open work order When no active labor segment exists for that vehicle Then the system creates an auto-suggested Start event tagged as source=OBD and notifies the assigned technician(s) Given ignition OFF is reported and an active segment exists for that vehicle When no manual Pause/Complete was recorded within the last 2 minutes Then the system creates an auto-suggested Pause event tagged source=OBD Given both manual and OBD events exist for the same minute When deduplicating Then manual technician-entered events take precedence and OBD-suggested duplicates are suppressed and logged Given an OBD signal is delayed or out-of-order by up to 5 minutes When processing Then the system inserts the event in correct chronological position and flags if it would cause negative or overlapping durations
Pause/Resume, Rounding Rules, and Audit Logging
Given shop rounding is configured (e.g., nearest 6 minutes, floor) When a segment is completed Then rounding is applied to the net active time (excluding paused intervals) according to the rule and stored with both raw and rounded values Given a technician pauses and resumes multiple times When the segment completes Then the sum of all active intervals equals raw time and matches the audit log entries with start/stop pairs Given any change to a time segment (create, update, delete, resolve) When it occurs Then an immutable audit entry is written capturing who, what, when (UTC), source channel, and previous/new values
Exception Handling for Missing or Conflicting Events
Given a segment is started but no Pause or Complete is received within the shop’s auto-close threshold (e.g., 12 hours) When the threshold elapses Then the system auto-completes the segment at the threshold time and creates an exception for review Given overlapping segments exist for the same technician across different tasks beyond the allowed overlap policy When detected Then the system flags a conflict, prevents further overlap, and routes the item to the exception queue Given an exception exists When a supervisor opens it Then they can choose a resolution action (adjust times, split segment, delete duplicate, accept OBD suggestion) and must enter a reason, after which totals recalculate and the audit log records the resolution
Parallel Tasks and Resource Attribution
Given two technicians work in parallel on two different tasks for the same vehicle When both are clocked in Then each task accrues labor independently and the vehicle’s total labor equals the sum of all active segments across tasks Given a time segment is created When recorded Then it is attributed to technician, VIN, task code, bay, and required tools, and these attributes are immutable after completion except via supervisor resolution with audit Given a bay or required tool is unavailable or already allocated When a technician attempts to start a segment requiring it Then the system warns of the conflict and blocks or queues according to shop policy
Persistence to Maintenance History and Data Availability
Given a segment is synced or completed When persistence occurs Then the record is written to FleetPulse maintenance history within 5 seconds and is retrievable via API and UI using work order ID or VIN Given a persisted record When retrieved Then it includes fields: technician ID, vehicle VIN, task code, bay, required tools, source channel, start/end UTC timestamps, pauses, raw and rounded durations, and audit reference ID Given reporting and model training jobs run hourly When new records exist Then they are included in the next run and any failures are logged with retry attempts up to 3 times
Adaptive Duration Estimation Engine
"As a dispatcher, I want accurate duration estimates with confidence indicators so that I can schedule realistically and set reliable customer promises."
Description

Deliver an estimation service that learns task durations by technician, vehicle, and task using historical job times and contextual signals (vehicle year/make/model, mileage, DTCs, environment, task complexity). Output predicted duration with confidence interval and variance metrics, and provide reason codes/features for transparency. Include cold-start defaults (OEM/flat-rate and global averages), outlier detection, continuous retraining, model versioning, and A/B evaluation. Expose REST/GraphQL endpoints and integrate with FleetPulse work orders, inspections, and service reminders.
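
The cohort back-off in the criteria below might be sketched like this; the cohort labels and the 30-job floor come from the acceptance criteria, while the counts structure is an assumption.

MIN_COHORT_JOBS = 30  # per the personalization criteria below

def select_model(counts: dict) -> str:
    # Pick the narrowest cohort with enough history, else fall back to baseline.
    # counts: completed-job counts in the last 180 days, keyed by cohort name.
    for cohort in ("TECHNICIAN+YMM+TASK", "TASK+YMM", "TASK"):
        if counts.get(cohort, 0) >= MIN_COHORT_JOBS:
            return cohort
    return "BASELINE"  # OEM flat-rate or global average, with a COLD_START reason code

print(select_model({"TECHNICIAN+YMM+TASK": 12, "TASK+YMM": 45}))  # TASK+YMM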

Acceptance Criteria
Duration Prediction API Response Completeness
Given a valid request to REST POST /v1/estimates or GraphQL mutation estimateDuration with technicianId, vehicle {vin or ymm}, taskCode, mileage, dtcs[], environment, and complexity When the request is processed Then the response contains predictedDurationMinutes (number), confidenceIntervalMinutes {lower, upper} at 95%, varianceMinutesSquared (number), modelVersion (string), reasonCodes[] with {feature, contribution, direction}, requestId (string), and timestamp (ISO-8601) And predictedDurationMinutes >= 0.1 and <= 1440 And confidenceIntervalMinutes.lower <= predictedDurationMinutes <= confidenceIntervalMinutes.upper And HTTP 200 is returned on success; invalid or missing fields return HTTP 400 with fieldErrors[] identifying each invalid field And P95 latency <= 800 ms under nominal load (<=50 RPS)
Cold-Start Estimation Fallback Logic
Given no matching historical data for technician-task-vehicle cohorts When a prediction is requested Then the engine returns a baseline estimate from OEM_FLAT_RATE; if unavailable, from GLOBAL_AVG by task and vehicle class And the response includes reasonCodes containing COLD_START and baselineSource in {"OEM_FLAT_RATE","GLOBAL_AVG"} And baseline MAE per task is computed daily and exposed via GET /v1/metrics/baseline-mae
Outlier Detection and Data Hygiene
Given a new historical job record When ingesting for training Then if durationMinutes < 5 or > 720 or beyond 3.5 median absolute deviations for its cohort, the record is excluded from training and tagged with outlier=true and outlierReason And weekly outlier exclusion rate per task is computed; if rate > 5%, an alert is emitted and the cohort is flagged for review
Personalization by Technician, Vehicle, and Task
Given >= 30 completed jobs for a technician-task-YMM cohort in the last 180 days When predicting for that cohort Then the personalized model is used and response personalizationScope includes ["TECHNICIAN","YMM","TASK"] And offline backtest for that cohort shows MAE improvement >= 10% vs the global model over a rolling 30-day window; otherwise the system auto-falls back to a broader cohort model Given insufficient data (< 30) When predicting Then the system backs off to TASK+YMM, then TASK-only, then BASELINE in that order, and records the applied cohort in the response
Continuous Retraining and Model Versioning
Given daily data ingestion has completed When the nightly training pipeline runs Then a new candidate model version is produced with immutable versionId, trainingDataWindow, featureSetHash, codeCommitSha, and evaluation metrics stored in the registry And only candidate versions that meet validation gates (global MAE <= current MAE and P95 absolute error non-increasing) are eligible for A/B tests And the registry API exposes activeVersion and the last 5 historical versions via GET /v1/models/timefit
A/B Evaluation and Promotion Criteria
Given an active model and a candidate model When an A/B test runs for minimum 7 days or 5,000 predictions, whichever is first Then the candidate is promoted only if it achieves at least 5% reduction in MAE with 95% confidence, no regression in P95 absolute error, and P95 latency does not increase by > 10% And the experiment record contains experimentId, start/end, trafficSplit, metrics, decision, and is retrievable via GET /v1/experiments/{id}
FleetPulse Workflow Integration
Given a user adds a task to a FleetPulse work order When the work order line is saved Then a duration prediction is fetched and stored on the line with modelVersion, predictedDurationMinutes, confidenceIntervalMinutes, and reasonCodes, and displayed in the UI within 2 seconds Given an inspection creates DTC findings When a service reminder is generated Then prediction output informs the suggested scheduling window; if (confidenceIntervalMinutes.upper - confidenceIntervalMinutes.lower) / predictedDurationMinutes > 0.5, a Low confidence indicator is shown and a link to reasonCodes is available Given the estimation service returns error or times out (> 2 s) When saving a work order line Then the system falls back to baseline estimate, logs the error, and marks estimateSource = "BASELINE"
Scheduler Capacity Integration
"As a scheduler, I want jobs placed into available slots using learned durations and buffers so that shop load is balanced and on-time delivery improves."
Description

Integrate learned duration estimates into FleetPulse’s maintenance calendar to auto-calculate daily capacity by bay and technician. Suggest optimal time slots, prevent over/under-booking, and surface buffer recommendations based on estimate confidence and historical variance. Respect shop hours, technician skills/availability, required special tools, and existing appointments. Provide conflict detection, drag-and-drop rescheduling, and propagate updates to reminders, ETAs, and notifications.
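
A sketch of one way the buffer could be derived from confidence and variance; the scaling factors are assumptions, and only the flag thresholds (confidence < 0.6, variance > 20%) come from the criteria below.

def recommend_buffer(estimate_minutes: float, confidence: float, variance_pct: float) -> dict:
    # Illustrative scaling only; a real rule set would be configured per shop.
    base = estimate_minutes * 0.10                          # small fixed cushion
    uncertainty = estimate_minutes * (1 - confidence) * 0.5
    spread = estimate_minutes * (variance_pct / 100) * 0.5
    return {
        "buffer_minutes": round(base + uncertainty + spread),
        "high_uncertainty": confidence < 0.6 or variance_pct > 20,
    }

print(recommend_buffer(90, confidence=0.5, variance_pct=25))
# {'buffer_minutes': 43, 'high_uncertainty': True}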

Acceptance Criteria
Daily Capacity Calculation per Bay and Technician
Given learned duration estimates and resource calendars exist When the day view of the maintenance calendar loads for a selected date Then the system calculates and displays total available hours per bay and per technician excluding non-working hours and breaks And occupied time from existing appointments is subtracted to show remaining capacity per resource And technicians with custom shifts or time-off reflect accurate availability in capacity totals
Slot Suggestion for New Work Order
Given a work order with vehicle, tasks, required skills, and special tools When the user requests suggested time slots for a specified date range Then the system returns the top 5 optimal contiguous slots ranked by fit score within 2 seconds And each suggested slot falls within shop hours, with a bay and a qualified technician concurrently free And the slot duration equals the learned estimate plus the recommended buffer
Over/Under-Booking Prevention
Given existing appointments and computed capacity for the selected resources When a user attempts to book time that exceeds bay or technician capacity or overlaps required special tool usage Then the system blocks the booking and lists the specific conflict(s) preventing it And provides the next three valid alternative slots that resolve the conflict Given a proposed duration is shorter than the learned estimate plus minimum buffer When attempting to save the appointment Then the system warns of under-booking and suggests the minimum acceptable duration; saving requires admin override
Buffer Recommendation by Confidence and Variance
Given an estimate has a confidence score and historical duration variance When preparing a slot duration Then the system computes and displays a buffer value in minutes derived from the configured confidence and variance rules And if confidence < 0.6 or variance > 20%, the slot is flagged as High Uncertainty and the buffer is increased per rule When a user edits the buffer Then the system records the override and reason code and preserves the original recommendation for analytics
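The buffer rules themselves are configurable; one plausible shape, with the 0.6 confidence and 20% variance thresholds taken from the criterion and the buffer percentages invented purely for illustration:

    def recommend_buffer(estimate_minutes, confidence, variance_pct):
        # High uncertainty per the criterion's thresholds.
        high_uncertainty = confidence < 0.6 or variance_pct > 20.0
        # Hypothetical rule: 10% base buffer, doubled under high uncertainty.
        buffer_pct = 0.20 if high_uncertainty else 0.10
        buffer_minutes = round(estimate_minutes * buffer_pct)
        return buffer_minutes, high_uncertainty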
Technician Skill and Availability Enforcement
Given tasks require specific certifications or skills When assigning a technician or generating suggestions Then only technicians with required skills are eligible and shown in suggestions Given a technician has PTO or overlapping work When attempting assignment Then the system blocks selection and suggests the next available qualified technician and time window Given no qualified technician exists for the requested window When generating suggestions Then the system proposes the earliest alternative date/time that satisfies skill and availability constraints
Drag-and-Drop Rescheduling with Conflict Detection
Given an existing appointment on the calendar When the user drags and drops it to a new time or bay Then the system validates capacity, skills, tools, and shop hours before committing changes And if conflicts are detected, the move is rejected with explicit conflict reasons and the nearest valid alternatives are shown And if valid, the appointment is updated and saved without data loss
Reminder, ETA, and Notification Propagation
Given a scheduled appointment is created or rescheduled When the change is saved Then customer- and technician-facing notifications reflect updated start time, duration, and ETA And pre-check/inspection reminders are rescheduled to maintain their relative lead times And email/SMS/push deliveries are sent and logged with timestamps and statuses within 60 seconds of save
Technician Feedback & Override Loop
"As a lead technician, I want to adjust estimates and record reasons when work differs so that the system learns from real conditions and future estimates improve."
Description

Enable technicians and managers to adjust pre-job estimates, add notes, and select reason codes (e.g., rust, seized fastener, parts delay). Prompt for post-job confirmation of actuals and variance cause. Flag large discrepancies for review and feed labeled outcomes back into the training dataset. Include permission controls, validation thresholds, and audit trails to ensure a robust human-in-the-loop learning process.

Acceptance Criteria
Pre-Job Estimate Override With Reason Codes
Given a technician with "Override Estimate" permission opens a job When they change the pre-job estimate value Then the system enforces min/max bounds from configured validation thresholds and shows an inline error if violated Given the technician submits an override within bounds When no reason code is selected Then the Save action is disabled with a required-field message Given the technician selects reason code "Other" When saving the override Then a free-text note (minimum 10 characters) is required Given the override exceeds the configured manager_approval_delta When the technician attempts to save Then the override is saved in Pending Approval state and the manager is notified Given a pre-job estimate is overridden When saved Then the job displays the Final Pre-Job Estimate and the audit trail records before/after values, user, timestamp, and reason code
Post-Job Actuals Confirmation and Variance Capture
Given a job is marked work complete by the technician When closing the job Then entry of actual labor time is required to proceed Given actual labor time is entered When variance versus the final pre-job estimate exceeds the configured variance_threshold Then selection of a variance cause code and a note (minimum 10 characters) is required Given actuals have not been submitted within 24 hours of job completion When the deadline passes Then an email and in-app reminder are sent to the technician and CC'd to the manager Given actuals are submitted and saved When saved Then variance is auto-calculated, displayed on the job, and written to the audit trail
Large Discrepancy Flagging and Manager Review
Given a job's variance exceeds the configured review_threshold When actuals are saved Then the job is auto-flagged Needs Review and appears in the manager review queue Given an item is in the review queue When a manager opens it Then they can Approve, Request Follow-up, or Reject and must provide a comment Given a manager submits a decision When processed Then the technician is notified and the decision plus comment are recorded in the audit trail Given an item remains Needs Review for more than 2 business days When the SLA timer elapses Then it is escalated to the designated supervisor and marked Overdue
Labeled Outcome Ingestion into Training Dataset
Given a job has final pre-job estimate, actual labor time, reason/variance codes, and review status When the nightly ingestion job runs Then a labeled record with fields [job_id, vehicle_id, task_id, tech_id, pre_estimate_system, pre_estimate_final, actual_time, reason_code, variance_code, review_outcome, created_at, completed_at] is appended to the training dataset Given any required field is missing or invalid When ingestion runs Then the record is rejected with an error logged and visible in a data quality dashboard Given ingestion completes When successful Then the metric labels_ingested_count for the run is at least 95% of eligible records or an alert is raised Given a record has been ingested When the next model training run starts Then it references the latest dataset version and logs the count of new labels used
Role-Based Permissions Enforcement
Given user roles are configured When a user without Override Estimate permission tries to change a pre-job estimate Then the control is disabled and an access denied event is logged Given a technician submits an override exceeding manager_approval_delta When no manager approval exists Then schedule capacity and customer promise time are not updated from the pending override Given an Admin opens Settings When updating thresholds or reason/variance codes Then changes require confirmation and are logged with actor, before/after, and timestamp Given an API client attempts to modify estimates When not authenticated with a role permitting that action Then the API returns HTTP 403 and no data changes occur
Audit Trail Completeness and Immutability
Given any create/update/delete affecting estimates, actuals, codes, or thresholds When the action is committed Then an audit entry is written with before/after values, actor, role, IP/device, job_id, and ISO 8601 timestamp Given an audit entry exists When viewed by an Admin or Manager Then it is readable in the UI and exportable as CSV with consistent column headers Given an audit entry exists When an attempt is made to edit or delete it via UI or API Then the system prevents modification and logs a tamper-attempt event Given daily integrity checks run When completed Then a checksum chain validates no audit entries were altered; failures generate a P1 alert
Thresholds and Reason Codes Configuration
Given an Admin is in Settings When configuring variance and approval thresholds Then thresholds can be set globally and overridden per task and vehicle class with documented precedence rules Given a reason code is added or edited When saved Then it becomes available immediately to new overrides with a unique code and display label Given a reason code is in use When an Admin attempts to delete it Then deletion is blocked; only deactivation is allowed and existing records remain intact Given configuration changes are saved When effective Then their version and effective timestamp are stored and visible in change history
Throughput & Variance Insights
"As an operations manager, I want visibility into where estimates diverge from actuals so that I can target training and process changes to unlock more throughput."
Description

Provide dashboards and reports showing estimated vs. actual duration, variance distributions, promise accuracy, capacity utilization, and throughput by technician, task, vehicle family, and time period. Highlight top variance drivers and training opportunities, and quantify potential throughput gains from process improvements. Support drill-through to work orders, configurable KPIs, scheduled email/PDF/CSV exports, and role-based visibility.

Acceptance Criteria
Est. vs. Actual Dashboard with Variance Distributions
Given a user with Analyst or Manager role and at least 6 months of work orders across technicians, tasks, vehicle families, and dates When they open the Throughput & Variance dashboard and apply date range, dimension (technician|task|vehicle family|time period), and filters Then charts and tables render in under 5 seconds for up to 10,000 work orders and show Estimated, Actual, and Variance (Actual - Estimated) with totals consistent across widgets Given the user switches the grouping dimension When the grouping is changed Then variance distribution (histogram or box plot) and summary stats (median, p90, p95) update to reflect only the current cohort Given some work orders lack an estimate or actual When variance is computed Then those records are excluded from variance calculations and counts are shown in a "Data Gaps" indicator Given the user resets filters When Reset is pressed Then all widgets revert to default state and global totals match the unfiltered dataset
Configurable KPIs including Promise Accuracy
Given a user with Manage KPIs permission When they add the Promise Accuracy KPI with a tolerance (minutes) and a target (%) Then the KPI tile appears on the dashboard within 2 seconds, persists to the user’s profile (and optionally role), and displays current value and target delta Given work orders have promised completion timestamps and actual completion timestamps When Promise Accuracy is calculated for a selected period Then it equals (count of work orders completed on or before the promised time plus the configured tolerance) divided by (count of work orders with a promise), expressed as a percentage with 1 decimal and shown with a trend by period Given an invalid configuration (e.g., negative tolerance or target > 100%) When saving the KPI Then the system blocks save and presents an inline validation message Given a KPI is removed or reordered When changes are saved Then the KPI deck reflects the new configuration and remains after reload
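A sketch of that Promise Accuracy calculation, assuming each work order carries promised_at and completed_at timestamps (completed_at may be missing for open orders); everything else is illustrative:

    from datetime import timedelta

    def promise_accuracy(work_orders, tolerance_minutes):
        # Denominator: work orders that carry a promise at all.
        promised = [wo for wo in work_orders if wo.get("promised_at")]
        if not promised:
            return None
        tol = timedelta(minutes=tolerance_minutes)
        on_time = sum(
            1 for wo in promised
            if wo.get("completed_at")
            and wo["completed_at"] <= wo["promised_at"] + tol
        )
        return round(100.0 * on_time / len(promised), 1)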
Capacity Utilization and Throughput by Technician and Period
Given technician availability calendars define available labor hours per period When utilization is computed for a selected period and technician Then Utilization = Actual labor hours logged / Available hours and Throughput = count of completed tasks and total labor hours completed, grouped by day/week/month Given actual hours include overlapping time logs for a technician When aggregations are computed Then overlapping intervals are not double-counted in Actual labor hours Given a manager updates a technician’s availability When the change is saved Then the next dashboard refresh recalculates utilization and throughput within 60 seconds
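Not double-counting overlapping time logs is a standard interval merge; a sketch assuming each log is a (start, end) datetime pair:

    def actual_labor_hours(time_logs):
        # Merge overlapping or touching intervals before summing.
        merged = []
        for start, end in sorted(time_logs):
            if merged and start <= merged[-1][1]:
                merged[-1][1] = max(merged[-1][1], end)
            else:
                merged.append([start, end])
        return sum((e - s).total_seconds() for s, e in merged) / 3600.0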
Top Variance Drivers & Training Opportunities
Given variance metrics exist by technician, task, and vehicle family When the Top Drivers view is opened for a selected date range Then a ranked list of top 5 segments by total positive variance minutes is displayed with average variance, count, and contribution % Given a segment exceeds configured thresholds (e.g., avg variance >= 15 minutes and count >= 20) When thresholds are applied Then the segment is flagged as a Training Opportunity with a link to view representative work orders Given filters are changed When the Top Drivers view is refreshed Then rankings and flags update to reflect only the filtered cohort
Throughput Gain Simulation (What-If)
Given a user selects a driver segment and an improvement scenario (e.g., reduce average duration by 10% or cap p90 variance at N minutes) When the simulation is run Then projected capacity freed (hours/week) and additional jobs/week at current demand are calculated and displayed within 2 seconds with assumptions listed Given a simulation is saved with a name When the dashboard is reloaded Then the saved scenario can be re-applied and yields consistent results for the same underlying data Given a simulation would exceed available capacity constraints When results are computed Then additional jobs/week are capped by available technician hours and the cap is indicated
Drill-Through to Work Orders
Given a user clicks an aggregate cell, bar, or point in any dashboard widget When the drill-through action is invoked Then the Work Orders list opens filtered to the exact cohort, row count matches the aggregate, and columns include Technician, Vehicle, Task, Estimate, Actual, Variance, and timestamps Given a user clicks a row in the Work Orders list When navigating to details Then the work order detail view opens in a new tab or panel and a back action returns to the dashboard with filters preserved Given the user lacks permission to view work orders When attempting to drill through Then the drill-through is disabled or an authorization error is shown without exposing data
Scheduled Exports Respect Role-Based Visibility
Given a user with Report Scheduling permission configures a scheduled export with format (PDF|CSV), frequency, timezone, filters, and recipients When the schedule triggers Then each recipient receives the export within 10 minutes of the scheduled time and data is restricted to the recipient’s role-based visibility Given a scheduled export includes recipients with different access scopes When the export is generated Then separate exports are produced per unique access scope and audit logs record job ID, recipients, time, and row counts Given a user previews an export When Preview is requested Then the preview matches the scheduler’s current filters and format, and CSV uses UTF-8 with comma delimiter and header row; PDF renders all visible widgets with pagination Given a schedule is paused or deleted When the next trigger time occurs Then no emails are sent and the action is recorded in the audit log
Data Governance & Privacy Controls
"As an admin, I want granular controls over who can see technician-level time and performance data so that we comply with policy and maintain trust."
Description

Implement role-based access to technician-level time data and predictions, configurable visibility rules, and consent settings for any cross-fleet learning. Provide encryption in transit/at rest, audit logs, data retention policies, and regional compliance options. Allow anonymization/aggregation for benchmarking while protecting individual and organizational privacy.

Acceptance Criteria
RBAC and Configurable Visibility Controls
Given a user with the Technician role, when they access TimeFit Estimates, then they can view only their own time records and predictions for assigned tasks/vehicles. Given a user with the Manager role and assigned teams/locations, when they access TimeFit Estimates, then they can view technician-level time data and predictions only for their assigned scope per active visibility rules. Given an Admin, when they create or update a visibility rule specifying role filters, vehicle groups, task categories, and locations, then the rule is enforced consistently in UI and API within 5 minutes. When an unauthorized user requests time data or predictions outside their scope, then the API/UI returns HTTP 403 and no sensitive fields are included in the response body. When list, detail, search, and export endpoints are called with valid tokens, then RBAC and visibility rules are uniformly enforced across all endpoints. When a visibility rule is updated or deleted, then the change is applied within 5 minutes and an audit event records actor, timestamp, rule id, and before/after diffs.
Cross-Fleet Learning: Consent Management
Given the default org state is opt-out, when no consent is recorded, then the org’s and technicians’ data are excluded from any cross-fleet model training and benchmarking pipelines. When an Admin opts in at the org level, then consent is stored with actor, timestamp, scope, and policy version, and eligible new data begins flowing to cross-fleet pipelines within 24 hours. When a technician-level opt-out is recorded, then that technician’s data is excluded from cross-fleet learning even if the org is opted in. When org-level consent is withdrawn, then future training and benchmark exports exclude the org’s data within 24 hours and feature stores are scrubbed of the org’s data within 72 hours; actions are auditable. When a consent ledger export is requested, then CSV and JSON are available including org id, subject id (if applicable), action (opt-in/out), scope, policy version, actor, timestamp, and reason.
Encryption in Transit and at Rest
Given any client-server communication, then TLS 1.2+ with strong ciphers is enforced; HTTP requests are redirected to HTTPS and HSTS is enabled. Given data at rest in primary databases, object storage, search indexes, and backups, then encryption using AES-256 or cloud KMS CMKs is enabled and verified. When KMS keys are rotated on a 90-day schedule, then rotation completes without downtime and key change events are logged. When secrets or tokens are processed, then they are never logged in plaintext and are redacted in application logs and error traces. When external scanners test endpoints, then no weak ciphers or deprecated protocols (TLS 1.0/1.1) are accepted and the scan reports zero critical findings related to transport security.
Audit Logging and Access Review
When any read, create, update, delete, or export occurs for time data, predictions, visibility rules, RBAC roles, consent settings, or benchmarking outputs, then an audit event is recorded with actor id, subject/object ids, action, result, IP, user-agent, and UTC ISO-8601 timestamps. Given audit logs, then they are append-only, tamper-evident (hash-chained/WORM), and protected from modification by non-audit roles. When an auditor filters logs by actor, action, date range, and object id, then the system returns results within 5 seconds for a dataset up to 1,000,000 events and supports CSV/JSON export. Given default retention is 365 days, then org admins can configure retention between 90 and 1825 days; changes apply prospectively and are themselves logged. When client clock skew is within ±5 minutes, then server-side timestamps remain authoritative and event ordering in audit views is monotonic.
Data Retention, Purge, and Legal Hold
Given configurable retention policies per data class (time records, predictions, OBD snapshots, audit logs), then org admins can set durations in days within allowed bounds (e.g., 30–1825) and preview effective dates before saving. When data exceeds its retention period and is not under legal hold, then purge jobs delete it from primary stores, replicas, analytics warehouses, and search indexes within 72 hours and record deletion audit events. When a legal hold is placed on an entity, user, or scope, then deletions are suspended until the hold is removed; holds capture reason, actor, scope, and timestamps and are auditable. When an admin requests deletion of a technician’s data for a date range, then the system completes deletion across stores within 72 hours and produces a downloadable deletion report summarizing objects removed and stores touched. When backups containing expired data are encountered, then they expire per backup policy and are not restored except for disaster recovery; after any restore, the purge process re-applies within 24 hours.
Regional Compliance and Data Residency
Given an org-selected data region (US, EU, or APAC), then all storage, backups, and processing for TimeFit data remain in-region and cross-region transfers are blocked by policy and network controls. When users outside the selected region access data, then requests are served via in-region services without persisting data out-of-region; such access is logged with geography. When a GDPR data subject access request (export) is initiated for a technician, then a complete export (machine-readable JSON and human-readable PDF) is available within 7 days and logged. When a GDPR/CCPA deletion request is approved, then personal data is deleted or irreversibly anonymized within 30 days, with exceptions (e.g., legal holds) documented in the completion receipt. When an admin requests a region change, then a migration plan (scope, timeline, downtime) is presented, requires explicit confirmation, and the migration keeps data in compliant states throughout; all steps are logged.
Anonymized and Aggregated Benchmarking Safeguards
Given benchmarking dashboards or exports, then only aggregated metrics are shown when k-anonymity thresholds are met (k >= 10 organizations and >= 30 technicians per cell). When thresholds are not met, then metric values are suppressed or bucketed and a "threshold not met" indicator is displayed; no raw records are exposed. Given identifiers, then organization names, technician IDs, VINs, and exact timestamps are excluded from benchmarking outputs; only generalized dimensions or salted hashes are used. When repeated queries could enable differencing attacks, then the system applies rate limiting and query noise to prevent reconstruction and generates security alerts on suspicious patterns. When a user attempts drill-through from a benchmark to record-level views, then the navigation is blocked and the action is logged.
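A minimal sketch of the k-anonymity gate, using the thresholds from the criterion; the cell structure is hypothetical:

    K_MIN_ORGS = 10
    K_MIN_TECHS = 30

    def benchmark_cell(cell):
        # Suppress any cell that does not meet both k-anonymity thresholds.
        if cell["org_count"] < K_MIN_ORGS or cell["tech_count"] < K_MIN_TECHS:
            return {"value": None, "suppressed": True,
                    "reason": "threshold not met"}
        return {"value": cell["metric"], "suppressed": False}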
Cold-Start Defaults & Flat-Rate Mapping
"As a new customer, I want reasonable initial time estimates without historical data so that scheduling works accurately from day one."
Description

Import OEM/aftermarket flat-rate guides and map them to FleetPulse’s task taxonomy to seed initial estimates when local history is sparse. Configure bias/weighting rules to gradually shift from defaults to learned values as data accrues. Provide an admin UI for mapping management, unit conversions, and exception rules by vehicle family or task.

Acceptance Criteria
Flat-Rate Guide Import & Unit Normalization
Given I am an Admin with Mapping:Write permission And I have a flat-rate guide file in CSV, XLSX, or JSON containing required columns: provider, guide_version, make, model, year_start, year_end, task_code, task_desc, labor_time_value, labor_time_unit (hours|minutes) When I upload the file via the Import dialog and click Validate Then the system validates schema, required columns, data types, and year ranges; rows with missing/invalid required fields are flagged as errors with row numbers and messages And the system converts labor_time_unit to decimal hours with precision 0.1 h (e.g., 90 minutes -> 1.5 h) And a preview displays total rows, valid row count, and an error list When I click Import on a file with 0 validation errors Then the system creates/updates the guide identified by provider+guide_version and stores all rows without duplicates And importing up to 10,000 rows completes within 60 seconds in 95% of attempts And re-uploading the same provider+guide_version is idempotent (0 duplicate rows added) When validation errors exist Then the import is blocked and 0 rows are committed
Task Mapping to FleetPulse Taxonomy and Publish Gate
Given a successfully imported flat-rate guide When I open Mapping Manager Then unmapped task_codes receive auto-suggested FleetPulse task matches with a confidence score (0–100) And I can approve, edit, or create mappings individually or in bulk And each mapping entry is provider+guide_version+task_code -> fleetpulse_task_id, default_labor_time_hours And the system enforces one active mapping per provider+guide_version+task_code And I can mark task_codes as Ignored with a reason When I click Publish Then Publish is allowed only if 100% of imported task_codes are either Mapped or Ignored And the published mapping becomes the active default source for estimates immediately
Seeding Estimates When Local History Is Sparse
Given a service estimate is requested for task T on vehicle V And a published mapping provides a default_labor_time_hours for T applicable to V And the local completed job history for (T, V family) is fewer than N_min observations (default N_min=5; admin-configurable 1–20) When the estimate is generated Then the system uses the mapped default_labor_time_hours (after any applicable exceptions and unit conversions) And the UI/API labels the source as "Default (Flat-Rate)" And if no mapping applies, the system returns an actionable error "No default mapping for task T" and logs the miss with task, vehicle, and requesting user Given the history count for (T, V family) is >= N_min When the estimate is generated Then the system uses the configured weighting/blending rules instead of pure defaults
Bias/Weighting Transition from Defaults to Learned Values
Given an admin sets prior_weight w0 between 1 and 50 inclusive (default 5) And selects aggregation statistic for learned values: mean or median (default median) And there are n local observations for (task T, vehicle family V) with learned_value (in hours) And the mapped default value is default (in hours) When an estimate is generated Then the blended estimate E = round_to_0.1h( (w0/(w0+n))*default + (n/(w0+n))*learned_value ) And for n=0, E equals default And as n increases, E monotonically approaches learned_value; for n >= 10*w0, |E - learned_value| <= 10% of |default - learned_value| And the UI/API labels the source as "Blended (n=<n>, w0=<w0>, stat=<stat>)" And all inputs (default, n, learned_value, w0, stat) and the computed E are stored in an audit record for the estimate
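The blending rule above is a standard shrinkage estimator; a direct transcription, assuming both inputs are already normalized to decimal hours:

    def blended_estimate(default_hours, learned_hours, n, w0=5):
        # E = (w0/(w0+n)) * default + (n/(w0+n)) * learned, to 0.1 h.
        # n = 0 returns the default; as n grows, E approaches the
        # learned value.
        e = (w0 / (w0 + n)) * default_hours + (n / (w0 + n)) * learned_hours
        return round(e, 1)

For example, with default_hours=2.0, a learned median of 1.5 h, w0=5, and n=15, E = (5/20)*2.0 + (15/20)*1.5 = 1.625, which rounds to 1.6 h.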
Exception Rules by Vehicle Family or Task
Given an admin defines exception rules targeting combinations of vehicle attributes (make, model, year range, trim, engine) and task And an exception action is either Override to fixed hours, Adjust by +/- percent, or Adjust by +/- hours And exception priority is: task+trim+engine > task+vehicle family > task only > vehicle family only > global default When generating an estimate Then at most one highest-priority matching exception is applied And adjustments modify the default value prior to blending/weighting And conflicts are resolved deterministically by priority, then newest published version timestamp And the UI/API discloses the active exception rule id, reason, and numerical effect (e.g., +15% -> 1.5 h to 1.7 h) And unit conversions are accurate to 0.1 h
Admin UI for Mapping Management & Conversions
Given role permissions where Admin can create/edit/publish and Viewer can read-only When I search, filter, and paginate mapping entries by provider, guide_version, task_code, fleetpulse_task_id, and mapping_status Then results return within 2 seconds for datasets up to 100,000 entries (server-side pagination) And inline edit accepts hours or minutes and displays normalized decimal hours to 0.1 h And bulk edit applies to up to 1,000 rows with validation and a preview of changes before commit And changes save as Draft until explicitly Published; Discard reverts to last published state And concurrent edits are protected by optimistic locking; on conflict, the user sees a clear message and must refresh or retry
Audit Trail, Versioning, and Rollback of Mappings
Given any import, mapping edit, exception change, publish, or rollback action When the action is saved Then the system records an immutable audit entry with timestamp, actor, action type, scope (guide, mapping, exception), version id, and before/after diff And an Audit History view lists entries with filter/export to CSV And I can select a prior version and perform Rollback Then Rollback creates a new version identical to the selected one and promotes it to Active Published And subsequent estimates use the rolled-back defaults immediately And audit visibility is restricted to Admin role

Combo Service Builder

Bundles co-schedulable tasks that share wheel-off, fluids, or parts to avoid repeat tear-downs. Recommends efficient service combos per vehicle visit, cutting labor duplication and reducing the number of shop trips needed.

Requirements

Unified Service Task Taxonomy
"As a small fleet manager, I want a standardized catalog of service tasks with clear prerequisites and overlaps so that the system can reliably identify what to bundle during a visit."
Description

Provide a normalized catalog of service tasks with rich metadata (e.g., wheel-off required, fluid drain, shared parts, estimated labor time, prerequisite steps, interval rules by time/mileage/engine hours, DTC/OBD triggers, regulatory tags, and vehicle applicability). Ingest OEM schedules and existing custom tasks from FleetPulse, de-duplicate synonyms, and version-control task definitions. This taxonomy becomes the source of truth the Combo Service Builder uses to detect co-schedulable tasks and compute overlaps, ensuring consistent recommendations across scheduling, alerts, and reporting.

Acceptance Criteria
OEM and Custom Task Ingestion with Duplicate Normalization
Given an OEM schedule import and existing FleetPulse custom tasks with a synonymMap When ingestion runs Then tasks sharing the same OEMTaskCode OR listed as synonyms are merged into a single canonical task with a stableId And all original sourceIds are retained in aliases[] and remain searchable And no two canonical tasks exist with the same canonicalName and identical applicability scope And re-running ingestion with the same inputs produces zero additional changes (idempotent) And a dedup summary is stored with totalSources, totalCanonicalTasks, and mergedAliasesCount
Mandatory Metadata Validation for Service Tasks
Given a create or update request for a service task When the payload is validated Then required fields exist and are non-empty: canonicalName, operationType, wheelOffRequired, fluidDrainRequired, sharedParts[], estLaborMinutes.min/max, prerequisiteTaskIds[], intervalRules{time/miles/hours with comparator}, dtcTriggers[], regulatoryTags[], applicability{make/model/year/engine}, version And estLaborMinutes.min >= 1 and estLaborMinutes.max <= 480 and min <= max And dtcTriggers conform to SAE J2012 formats (e.g., P0XXX, C1XXX) And regulatoryTags match the controlled vocabulary (e.g., DOT, OSHA, EPA) And applicability contains at least one dimension (make/model/year/engine) and values exist in reference data And invalid requests are rejected with 400 and field-level error codes
Task Versioning and Audit History
Given an existing canonical task vN When a canonical field (name, metadata, applicability, intervals, triggers) changes Then a new immutable version vN+1 is created with effectiveFrom timestamp and the prior version remains retrievable And GET /taxonomy/tasks/{id}?version=vN returns the exact historical definition And GET without version returns the latest non-deprecated version And previous versions cannot be modified; attempted updates return 409 And an audit record captures actor, timestamp, changed fields (diff), and reason
Vehicle Applicability Resolution by VIN
Given a vehicle identified by VIN When VIN decoding yields make, model, year, engine, and trim Then GET /taxonomy/tasks?vehicleId={id} returns only tasks whose applicability matches the decoded attributes (including engine/trim constraints) And tasks excluded by applicability do not appear in the result And where multiple OEM schedules exist, the engine/trim-specific schedule supersedes the base schedule And the response includes the taskId, version, and applicability basis used for inclusion And p95 latency <= 500ms for 10 concurrent requests on a catalog of 50k tasks
Interval Rules and Trigger Evaluation
Given a task with intervalRules of 6 months OR 6,000 miles OR 200 engine hours (whichever comes first) and linked dtcTriggers [P0300] And a vehicle with lastCompletedAt=2025-01-01, lastOdometer=20,000 mi, lastEngineHours=800 h When current readings are date=2025-07-02, odometer=25,900, engineHours=995, and active DTCs=[] Then the task status is Due because 6 months have elapsed (5,900 mi and 195 h remain under their thresholds) And nextDueAt is computed for the earliest threshold with basis=TIME and includes projected mileage/hours at that date And upon a completion event, the nextDueAt recalculates from the new completion values And if DTC P0300 becomes active, the task status is set to Immediate regardless of interval remaining And clearing all dtcTriggers reverts status to interval-based
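A sketch of the whichever-comes-first evaluation in that scenario, assuming snapshots carry a date, odometer, and engine-hours reading; the dateutil dependency and all field names are illustrative:

    from dateutil.relativedelta import relativedelta

    def evaluate_task(last_done, now, rules, active_dtcs, dtc_triggers):
        # An active linked DTC overrides whatever interval remains.
        if any(code in active_dtcs for code in dtc_triggers):
            return "Immediate"
        time_due = now["date"] >= last_done["date"] + relativedelta(
            months=rules["months"])
        miles_due = now["odometer"] - last_done["odometer"] >= rules["miles"]
        hours_due = (now["engine_hours"] - last_done["engine_hours"]
                     >= rules["engine_hours"])
        return "Due" if (time_due or miles_due or hours_due) else "OK"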
Co-schedulable Tear-Down and Shared Parts Flags
Given two tasks that both require wheel-off and removal of front rotors and share front brake pads When tasks are saved in the taxonomy Then both tasks expose wheelOffRequired=true, sharedParts includes 'front brake pads', and tearDownKeys includes 'front_axle_brake_rotor_off' And tasks that are incompatible due to conflicting prerequisites do not share tearDownKeys And the presence of a matching tearDownKey is sufficient for the Combo Service Builder to treat tasks as co-schedulable And removing a sharedPart or tearDownKey from one task breaks the co-schedulable linkage on subsequent queries
Combo Optimization Engine
"As a fleet manager, I want the system to recommend optimal service combos for each visit so that I reduce duplicate labor and minimize the number of shop trips."
Description

Implement an optimization service that recommends visit-level bundles by evaluating tasks due or forecasted within configurable windows and identifying shared teardown steps (wheel-off, fluid drain, component access) to minimize redundant labor and shop trips. Incorporate predicted due dates from telematics/usage, task criticality and safety rules, deferral limits, and cost/time models to propose ranked combos with rationale (e.g., shared wheel-off saves 1.2 hours). Recalculate on data changes and nightly, expose results via API/UI, and support what-if simulations before scheduling.

Acceptance Criteria
Visit-Level Combo Recommendation with Forecast Window
Given vehicle V1 has tasks T_A (due_in_days=7), T_B (forecast_due_in_miles=400), and T_C (due_in_days=45) with config window_days=30 and window_miles=500 When the optimization engine runs for V1 Then T_A and T_B are included in candidate combos and T_C is excluded And each returned combo includes: combo_id, tasks[], suggested_visit_date, estimated_labor_hours, estimated_parts_cost, estimated_total_cost, expected_trip_count_savings, rationale[] And suggested_visit_date falls within the configured window And at least one combo is returned if two or more tasks fall within the window
Shared Teardown Detection and Savings Calculation
Given tasks T_A and T_B require wheel-off with teardown_hours (T_A=0.8, T_B=0.6) and combined_wheel_off_hours=0.8 When the engine bundles T_A and T_B Then saved_labor_hours for shared teardown equals 0.6 and is rounded to 0.1h in outputs And the combo rationale contains "shared wheel-off saves 0.6 hours" And estimated_labor_hours equals sum(individual_task_labor) - saved_labor_hours And for task T_C that shares no teardown with T_A or T_B, no shared-savings is credited
Safety-Critical Task Handling and Deferral Limits
Given T_S (safety_critical=true) is due_in_days=0 with deferral_limit_days=0 and T_N (safety_critical=false) is due_in_days=20 with deferral_limit_days=30 When the engine optimizes combos Then T_S is scheduled in the earliest possible visit (<= today) And no candidate defers T_S beyond its deferral limit And T_N may be deferred but not beyond its deferral limit And any candidate violating deferral limits is discarded before ranking
Combo Ranking with Rationale Output
Given three candidate combos C1, C2, C3 with computed estimated_total_cost, saved_labor_hours, expected_trip_count_savings and weights w_cost=0.6, w_trips=0.3, w_time=0.1 When the engine ranks candidates Then each combo includes ranking_score and rationale[] with quantified savings and due dates per task And combos are returned sorted by descending ranking_score, ties broken by earliest suggested_visit_date then combo_id And the top-ranked combo yields the best score under the configured weights
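The spec fixes the weights but not the scoring function itself; one plausible weighted form, normalizing cost against the most expensive candidate so that cheaper combos score higher (the normalization is an assumption, not part of the spec):

    def ranking_score(combo, max_cost, w_cost=0.6, w_trips=0.3, w_time=0.1):
        # Higher is better; ties are broken upstream by earliest
        # suggested_visit_date, then combo_id.
        cost_term = 1.0 - (combo["estimated_total_cost"] / max_cost)
        return (w_cost * cost_term
                + w_trips * combo["expected_trip_count_savings"]
                + w_time * combo["saved_labor_hours"])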
Real-Time and Nightly Recalculation Triggers
Given a new telemetry reading updates predicted due date for T_A and changes its due_in_days from 40 to 10 When the update is ingested Then V1's combos are recalculated within 60 seconds and exposed via API/UI with an incremented version and updated_at timestamp And if no underlying input changed, the engine does not increment version and returns 304 Not Modified to conditional GETs And nightly at 02:00 UTC the engine recomputes combos for all active vehicles
API and UI Exposure of Combo Results
Given an authenticated request GET /v1/combos?vehicle_id=V1&top_n=3 When combos exist Then the API returns 200 with up to 3 combos including fields: combo_id, tasks[], suggested_visit_date, estimated_labor_hours, estimated_parts_cost, estimated_total_cost, expected_trip_count_savings, ranking_score, rationale[], version, updated_at And the response includes ETag and Cache-Control: max-age=60 And the UI Combos panel displays the top combo with savings summary, rationale bullets, and "Updated <time>" within 1 minute of recalculation And when no combos exist the API returns 204 and the UI shows a neutral empty state
What-If Simulation Prior to Scheduling
Given a dispatcher selects tasks [T_A, T_B] and sets simulate=true and window_days=45 When POST /v1/combos:simulate is called Then the API returns 200 with simulation=true, combos[], and does not persist results to the primary recommendations And each simulated combo includes baseline_reference and deltas: delta_estimated_total_cost, delta_saved_labor_hours, delta_expected_trip_count_savings And subsequent GET /v1/combos returns unchanged non-simulated recommendations
Fitment & Co-schedulability Rules
"As a service advisor, I want combos to include only compatible tasks and parts for a given vehicle so that work orders are accurate and safe to execute."
Description

Create a rules engine that validates task and parts compatibility per vehicle (year/make/model/engine, axle configuration, wheel size, fluid specifications) and enforces co-schedulability constraints (e.g., cannot combine tasks that require conflicting states; can combine tasks sharing the same wheel-off). Support importing OEM data and parts catalogs, managing exceptions, and surfacing rule violations clearly so illegal or unsafe bundles are never recommended.

Acceptance Criteria
Block Incompatible Fitment by Vehicle Attributes
Given a vehicle profile with VIN decoded to year, make, model, engine, axle_config, wheel_size, and fluid_specs When a user attempts to add a task or part whose required attributes do not match the vehicle attributes Then the system prevents the add operation Then the UI displays a blocking banner listing rule_id FITMENT_MISMATCH and the specific mismatched attributes Then the attempted task or part does not appear in the bundle Then an audit log entry with rule_id FITMENT_MISMATCH, user_id, timestamp, vehicle_id, and task_id is recorded Then the API responds 409 with code FITMENT_MISMATCH
Auto-Combine Shared Wheel-Off Tasks per Visit
Given at least two due tasks on the same axle that require wheel-off When the user generates recommended combos Then the system proposes a single combo including all eligible wheel-off tasks on that axle Then the total wheel-off events in the combo equals 1 Then tasks with conflicting states are excluded with rule_id CONFLICTING_STATE Then the UI shows estimated labor minutes saved greater than 0 for the combo
Enforce Co-schedulability for Conflicting Operational States
Given a candidate bundle containing tasks that require mutually exclusive states for the same time window, such as wheels_off vs. wheels_on, fluids_drained vs. fluids_filled, vehicle_lifted vs. on_alignment_rack, engine_hot vs. engine_cold, or battery_disconnected vs. battery_connected When the user attempts to save the bundle Then the save is blocked with rule_id CONFLICTING_STATE Then the blocking message lists the exact state pairs causing the conflict Then no work order can be created from the bundle
Apply Rule Precedence Across OEM, Catalog, and Custom Sources
Given OEM fitment rules and aftermarket catalog rules exist for the same task or part When the rules engine computes eligibility Then precedence is OEM over Catalog over Custom and the applied rule_source is recorded Then the UI displays rule_source and version_id on the eligibility tooltip Then deactivating an OEM rule causes the next lower precedence rule to apply on recompute
Import and Validate OEM and Parts Catalog Datasets
Given a vendor dataset provided as CSV or JSON with required fields When the import job runs Then the system validates the schema and required fields and rejects files that fail validation Then valid rows are ingested and versioned under a new dataset_id with checksum and source Then invalid rows are rejected with a downloadable error report including line numbers and reasons Then the job status reports counts for inserted, updated, and rejected records
Safety-Gated Exceptions and Overrides Workflow
Given a user with role RulesAdmin creates a scoped exception to override a rule for a vehicle or vehicle_family When the user submits the exception Then the system requires a justification, scope, start_date, end_date, and target rule_ids Then a second approver with role SafetyReviewer must approve before the exception becomes active Then exceptions marked safety_critical cannot be created and the system returns 403 with code SAFETY_LOCKED Then all approved exceptions are logged with approver_id, activation timestamp, and expiry
Rules Engine Performance and Determinism
Given a vehicle with 50 candidate tasks When the rules engine evaluates fitment and co-schedulability Then the evaluation completes within 500 ms at the 95th percentile in the service environment Then repeated evaluations with identical inputs produce identical outputs and rule_traces Then the response includes a rule_trace array listing applied rule_ids and decision outcomes
User Constraints & Preferences
"As an owner-operator, I want to set limits and preferences for service combos so that recommendations align with my schedule, budget, and compliance needs."
Description

Provide configuration of policy controls that shape combo recommendations, including maximum visit duration, downtime windows per vehicle, shop preferences, budget caps, warranty considerations, deferral thresholds, regulatory inspection cycles, and preferred parts/brands. Support fleet-level defaults and per-vehicle overrides, and ensure the optimization engine respects these constraints when generating bundles.

Acceptance Criteria
Fleet Defaults and Vehicle Overrides Precedence
Given fleet-level default policy values exist and vehicle V has overrides for one or more policies When generating combo recommendations for V Then the resolved policy set uses V's overrides for those keys and fleet defaults for others Given vehicle V has no overrides When generating combo recommendations Then the fleet-level default policy set is used Given a user updates any policy value at fleet or vehicle scope When the change is saved Then a new version is recorded with timestamp, userId, scope (fleet/vehicle), and change diff in the audit log Given recommendations are displayed When the user opens the policy constraints panel Then the resolved policy set and source per key (fleet or vehicle) are shown Given the API is called to retrieve the resolved policy for vehicle V When V exists Then HTTP 200 returns the resolved policy set; When V does not exist Then HTTP 404 is returned
Max Visit Duration and Downtime Windows Enforcement
Given vehicle V has maxVisitMinutes = M and configured downtime windows W within a planning range R When generating a visit plan for V in R Then no single visit exceeds M minutes and all task start/end times fall fully within at least one window in W Given required tasks cannot fit within M and W When generating a plan Then tasks are split into the minimum number of visits that satisfy M and W or a "no feasible plan" result is returned listing the conflicting constraints Given a visit recommendation is produced When displayed in the UI Then the estimated visit duration and the matched downtime window are shown Given the engine estimates task durations from historical or default values When actual duration variances occur beyond ±15% Then the system suggests re-optimization or splitting in the next planning cycle
Budget Caps and Cost-Aware Bundling
Given per-visit budget cap B and rolling-period budget cap P are configured for vehicle V or at fleet default When generating combo recommendations Then the sum of estimated labor + parts + shop fees + taxes per visit <= B and the sum within the defined rolling period <= P Given multiple feasible bundles satisfy all constraints When selecting a final recommendation Then the engine chooses the least-cost bundle; ties are broken by fewer visits, then earliest completion Given no bundle can satisfy budget caps When generating recommendations Then the engine proposes the lowest-cost alternative with the minimal number of visits and flags "budget cap exceeded" with overage amounts per visit and period Given costs are displayed When viewing a recommendation Then line-item costs and total are shown with the policy source of each cost assumption (e.g., labor rate, tax rules)
Warranty-Safe Recommendations
Given component C has an active warranty through date D and task T would void or duplicate warranty coverage When generating combo recommendations Then T is excluded or scheduled after D unless T is required for safety or regulatory compliance Given multiple part options exist for a task on a warranted system When selecting parts Then warranty-preserving parts/brands are preferred and the rationale is displayed in the recommendation details Given warranty information is incomplete or stale for a component When a potentially warranty-impacting task is included Then the system flags "warranty data missing" and requires explicit user confirmation to proceed; the confirmation is audit-logged
Deferral Thresholds and Risk Disclosure
Given a health metric m (e.g., brake pad thickness, battery SOH) is above the configured deferral threshold for m When generating a visit plan Then non-critical tasks related to m are deferred and excluded from the current bundle Given m is below threshold or projected to breach before the next planned visit based on trend When generating a visit plan Then related tasks are included in the current bundle and marked as priority Given a task is deferred due to thresholds When displaying the plan Then the expected time-to-threshold breach and next evaluation date are shown Given a user attempts to override a deferral below threshold When saving the override Then a reason and userId are required and the plan is re-optimized; the override is audit-logged
Regulatory Inspection Cycle Compliance
Given a regulatory cycle R with due date D for vehicle V and a planning horizon H When generating combo recommendations within H Then required inspection tasks for R are included and scheduled no later than D and within downtime windows Given regulatory due dates conflict with other constraints (budget, duration, shop availability) When generating the plan Then regulatory tasks are prioritized and non-regulatory tasks are redistributed or deferred to maintain compliance Given no feasible plan can meet D under current constraints When generating the plan Then a high-priority "regulatory risk" exception is produced with blocking constraints listed and the earliest feasible schedule suggestion provided Given vehicles operate across time zones When calculating due dates and downtime windows Then the vehicle's assigned time zone is used consistently
Shop and Parts/Brand Preferences and Blacklists
Given preferred shops S, blacklisted shops S', required certifications C, and max service radius R are configured When selecting a shop for a recommendation Then only shops in S within R that meet C are considered and shops in S' are excluded Given preferred parts/brand list P and blacklist P' When selecting parts Then parts/brands from P are used; parts in P' are never selected; if no option in P satisfies warranty and budget, the closest compliant alternative is proposed with an explicit exception requiring user confirmation Given multiple shops and parts options meet all constraints When finalizing the recommendation Then the tie-breaker order is shortest travel time, earliest availability, then lowest cost Given no shop or part meets constraints When generating the plan Then the system flags a "preference conflict" and suggests specific constraints to relax (e.g., expand radius by 10%, allow brand Q)
Savings & Trip Reduction Estimator
"As a fleet manager, I want to see estimated savings for each combo so that I can justify accepting the recommendation and measure ROI over time."
Description

For each recommended combo, calculate projected labor hours saved, teardown steps avoided, parts overlaps, and number of shop trips prevented versus performing tasks separately. Display cost/time impact with assumptions and confidence, and after completion reconcile actuals to refine estimates and improve future recommendations. Expose summaries at the visit, vehicle, and fleet levels.

Acceptance Criteria
Compute Savings vs Separate Tasks
Given a recommended combo containing two or more tasks and a defined baseline of the same tasks performed in separate visits When the estimator runs Then it outputs projected_labor_hours_saved (rounded to 0.1 h and >= 0), teardown_steps_avoided (integer >= 0), parts_overlap_count (integer >= 0), and shop_trips_prevented (integer >= 0) And it persists these outputs on the visit record with a timestamp And baseline labor is calculated as sum(task.labor_hours) for separate visits, combo labor is calculated as shared_labor + unique_labor, and projected_labor_hours_saved equals max(baseline - combo, 0)
Display Cost/Time Impact with Assumptions and Confidence
Given organization labor_rate and parts prices are available When viewing a combo recommendation Then the UI displays cost_saved_currency = labor_rate * projected_labor_hours_saved + value_of_parts_overlap (>= 0), time_saved_hours = projected_labor_hours_saved, and shop_trips_prevented And an Assumptions panel lists co-schedule window (days and mi/km), labor_rate used, teardown_step_library_version, and parts_reuse_policy And a confidence_score 0–100 and band are shown using thresholds (High >= 80, Medium 50–79, Low < 50); if any required input is missing, confidence is hidden and a "missing assumption" indicator appears
Trip Reduction Computation
Given each task has a due window [start,end] in time or odometer and a co-schedule_threshold is set When computing minimal visits Then tasks whose windows overlap within the threshold are grouped into one visit; tasks marked must_separate are excluded from grouping And baseline_visits equals number_of_tasks; combo_visits equals number_of_groups plus must_separate_tasks; shop_trips_prevented equals max(baseline_visits - combo_visits, 0) And the estimator outputs the list of tasks per visit for audit
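A sketch of that minimal-visit grouping, treating due windows as numeric intervals (e.g., days from today) and greedily merging tasks whose windows overlap within the co-schedule threshold; all names are illustrative:

    def group_into_visits(tasks, threshold):
        # must_separate tasks each get their own visit.
        separate = [[t] for t in tasks if t["must_separate"]]
        groupable = sorted(
            (t for t in tasks if not t["must_separate"]),
            key=lambda t: t["window"][0],
        )
        groups = []
        for task in groupable:
            start, end = task["window"]
            if groups and start <= groups[-1]["end"] + threshold:
                groups[-1]["tasks"].append(task)
                # Track the shared end of the current visit window.
                groups[-1]["end"] = min(groups[-1]["end"], end)
            else:
                groups.append({"tasks": [task], "end": end})
        return [g["tasks"] for g in groups] + separate

With baseline_visits = len(tasks) and combo_visits equal to the number of groups returned, shop_trips_prevented = max(baseline_visits - combo_visits, 0) as specified.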
Post-Visit Reconciliation of Actuals
Given a combo visit marked completed When the user enters or imports actual labor_hours, teardown_steps_performed, parts_consumed, and actual_visit_count Then the system stores actuals, computes variances vs estimates (absolute and %), and marks the combo as Reconciled And if telemetry/clocked time is available, the system pre-fills actual labor_hours and allows override with reason And after reconciliation, summaries and reports use actuals; if not reconciled, they use estimates flagged as estimated
Model Learning and Estimate Refinement
Given at least 5 reconciled combos for a task category or vehicle When the nightly learning job runs Then it updates error metrics (MAE and MAPE) by category/vehicle, recalibrates parameters affecting labor sharing and teardown overlaps, and updates confidence scores And new estimates generated after the job include updated parameters and a model_version And an audit log preserves prior estimates and versions for traceability
Summaries at Visit, Vehicle, and Fleet Levels
Given a date range filter When viewing the Visit Summary Then show cost_saved, hours_saved, teardown_steps_avoided, and trips_prevented for that visit with source (Actual or Estimated) When viewing the Vehicle Summary Then show totals and averages per visit over the range, with CSV export and API pagination When viewing the Fleet Summary Then show fleet totals, top vehicles by cost_saved and hours_saved, and a confidence distribution; all values limited to the selected range
Edge Cases and Data Quality Handling
Given no overlapping teardown steps, wheel-off, fluids, or parts are detected When the estimator runs Then projected_labor_hours_saved, teardown_steps_avoided, parts_overlap_count, and shop_trips_prevented are all 0 and the UI displays "No combo savings opportunity" Given any required assumption (labor_rate, co-schedule_threshold, teardown library) is missing When the estimator runs Then it blocks calculation, shows a validation error listing missing fields, and does not persist partial outputs Given incomplete historical data leads to low confidence When the estimator runs Then confidence_score is <= 30 with a "Low confidence" badge; negative savings are never displayed (floored at 0)
Calendar & Work Order Sync
"As a dispatcher, I want accepted combos to automatically populate the maintenance calendar and work orders so that execution is coordinated without manual re-entry."
Description

Integrate accepted combo bundles with FleetPulse’s maintenance scheduler to create a single visit containing grouped tasks, target shop, and planned duration. Generate consolidated work orders/POs, handle reschedules and partial completions, and sync status updates from shop feedback and telematics. Provide APIs/webhooks to push combos into external shop management systems to streamline execution.

Acceptance Criteria
Create Single Visit with Grouped Combo Tasks
Given a user accepts a combo bundle for a vehicle When they click "Schedule Combo Visit" Then a single calendar visit is created containing all tasks in the bundle And the visit title includes the vehicle identifier and combo name And the visit planned duration is calculated by summing task base durations while applying shared setup time (wheel-off, fluid drain) only once per shared type And the planned duration is displayed and stored rounded to the nearest 5 minutes And if an open visit already exists for the same vehicle and combo within ±1 day Then the system blocks creation and prompts the user to open the existing visit instead And if any task in the bundle is already scheduled in another open visit Then the user must choose to merge or exclude that task before creation And an audit log entry is recorded with user, timestamp, and task list
Assign Target Shop and Planned Duration to Visit
Given the vehicle has a preferred shop configured When the visit is created Then the target shop defaults to the preferred shop; otherwise choose the nearest in-network shop within 25 miles of the vehicle's last known location And the user may override the target shop prior to saving And the planned duration is persisted on the visit and on each task as planned labor minutes And the visit start and end are stored in the shop's local timezone And if the selected day cannot fit the planned duration within shop hours Then the system suggests the next three available slots And saving the visit succeeds only if start/end fall fully within shop business hours
Generate Consolidated Work Order/PO for Combo Visit
Given a visit is created for an accepted combo bundle When the visit is saved Then a single Work Order (WO) is generated with a unique WO ID and status "Planned" And the WO includes a line item per task with labor minutes and rate And parts across tasks are deduplicated with quantities aggregated And a single Purchase Order (PO) addressed to the target shop is generated with consolidated parts and labor totals And PO total equals the sum of line items plus taxes/fees from the shop's tax profile And the WO and PO are attached to the visit, retrievable via UI and API, and downloadable as PDF And if any task exceeds the customer approval threshold Then the PO status is "Pending Approval" and no dispatch webhook is sent until approved
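A sketch of the parts-deduplication step for the consolidated PO, assuming parts are keyed by part number with per-unit pricing (field names are illustrative):

```python
def consolidate_parts(tasks: list[dict]) -> list[dict]:
    """Merge parts across all tasks in the combo, aggregating quantities so the
    PO carries one line per part number. Shapes are illustrative assumptions."""
    lines: dict[str, dict] = {}
    for task in tasks:
        for part in task.get("parts", []):
            line = lines.setdefault(part["part_no"], {
                "part_no": part["part_no"],
                "unit_price": part["unit_price"],
                "qty": 0,
            })
            line["qty"] += part["qty"]  # aggregate duplicates across tasks
    return list(lines.values())
```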
Reschedule Combo Visit with Conflict Resolution
Given a scheduled combo visit exists When a user reschedules it to a new date/time Then all grouped tasks remain linked to the visit And planned duration persists unless shop hour constraints require adjustment And if the new slot conflicts with shop blackout periods or vehicle availability Then the reschedule is blocked and reasons are displayed And on successful reschedule, notifications are sent to the shop, driver, and subscribed systems within 60 seconds And a "visit.updated" webhook is emitted with the new schedule And if moved within 24 hours of original start, mark as "Short Notice" and require shop acknowledgement; if no acknowledgement within 2 hours, send escalation email And if the target shop changes to a different timezone, start/end are converted and stored in the new shop's local time and the audit trail is updated
Handle Partial Completion and Follow-up Scheduling
Given a combo visit is in progress When the shop marks a subset of tasks Completed and others Deferred or Blocked Then the visit status becomes "Partially Completed" And the system creates a follow-up visit draft containing only remaining tasks And the suggested earliest follow-up date is based on parts ETA and vehicle availability And original WO/PO actuals are updated to include only completed tasks And a change order is created if actuals vary by more than 5% from planned And remaining tasks are excluded from the original WO/PO and linked to the new follow-up WO/PO And notifications are sent to the owner-operator summarizing completed vs pending tasks and the proposed follow-up
Sync Bidirectional Status from Shop and Telematics
Given the shop updates task statuses via portal or API to In Progress, Completed, or Blocked When the update is received Then FleetPulse updates the corresponding task and visit statuses within 30 seconds and records an audit entry And for tasks tagged "Requires Road Test Verification" When telematics detects at least 10 miles driven within 24 hours after completion and no repeat relevant DTCs Then the task auto-transitions to "Verified" And if a new relevant DTC occurs within 7 days of completion Then flag the visit as "Post-Service Issue" and notify the owner-operator And all inbound updates store source, timestamp, and prior/new values immutably
Push Combos to External Shop Systems via API/Webhooks
Given an external shop system integrates with FleetPulse When it calls POST /api/v1/shops/{shopId}/visits with OAuth2 client credentials, HMAC-SHA256 signature, and an Idempotency-Key header Then requests with the same Idempotency-Key are processed exactly once And on synchronous success return 201 with visitId, woId, poId; on async processing return 202 and emit a "visit.created" webhook when complete And emit webhooks for visit.created, visit.updated, workorder.created, and workorder.status.changed with HMAC signatures, exponential backoff retries up to 24 hours, and a schema version field And enforce rate limits of 100 requests per minute per client; exceedances return 429 with Retry-After And payloads missing required fields return 400 with machine-readable error codes And a sandbox environment and OpenAPI spec are provided for testing
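A sketch of the two request-level guarantees named above: HMAC-SHA256 verification and exactly-once processing keyed on Idempotency-Key. The hex-digest header format and the in-memory key store are assumptions; the criteria only mandate the mechanisms.

```python
import hashlib
import hmac
from typing import Callable

def verify_signature(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
    """Recompute the body's HMAC-SHA256 and compare in constant time.
    Assumes the header carries a hex digest; the spec leaves the format open."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Exactly-once processing: remember each Idempotency-Key with its stored
# response so retries return the original result instead of re-executing.
_processed: dict[str, dict] = {}  # a real service would persist this

def process_once(idempotency_key: str, handler: Callable[[dict], dict],
                 payload: dict) -> dict:
    if idempotency_key in _processed:
        return _processed[idempotency_key]
    _processed[idempotency_key] = handler(payload)
    return _processed[idempotency_key]
```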
Override & Audit Logging
"As a fleet manager, I want to override combo recommendations with documented reasons so that decisions are traceable and the system can learn from real-world constraints."
Description

Allow users to modify or reject combo recommendations, add/remove tasks, adjust service windows, or choose alternate shops while capturing reason codes and comments. Maintain a complete audit trail with before/after metrics and feed override data back into analytics to improve rule tuning and optimization behavior over time.

Acceptance Criteria
Modify Combo Tasks with Reason Capture
Given a vehicle has an auto-generated combo recommendation When the user removes one or more recommended tasks Then the system requires selection of a reason code from the configured list before saving And prevents save until a valid reason code is provided And optionally allows a free-text comment (0–500 chars) that is validated for length And displays a before/after task list diff prior to confirmation And upon save, persists an audit record capturing removed task IDs, reason code, comment, user, timestamp, and recommendation version Given a vehicle has an auto-generated combo recommendation When the user adds one or more tasks to the combo from the task catalog Then the system requires selection of a reason code before saving And optionally allows a free-text comment (0–500 chars) And displays a before/after task list diff and updated labor/parts estimates And upon save, persists an audit record capturing added task IDs, reason code, comment, user, timestamp, and recommendation version
Adjust Service Windows with Before/After Audit
Given a recommended combo with due windows for each task When the user modifies due mileage or due date windows or defers a task to a later visit Then the system validates the change against policy bounds (min/max deferral) and blocks out-of-bounds entries with an error And requires a reason code prior to save And recalculates and displays before/after metrics: number of tasks in visit, estimated labor hours, parts cost, next due miles/days per affected task, and visit date And upon save, writes an audit record including old/new window values, recalculated metrics, reason code, user, timestamp, and recommendation version
Select Alternate Shop and Log Impact
Given a recommended shop is suggested for the combo When the user selects an alternate shop Then the system verifies the user has permission to change shops and blocks if not authorized And displays calculated deltas for distance/ETA, labor rate, estimated total cost, and earliest appointment availability between the original and alternate shop And requires a reason code prior to save And upon save, persists an audit record including original vs new shop IDs, appointment time, cost/time deltas, reason code, user, and timestamp
Immutable, Searchable Audit Log
Given one or more overrides have been saved When the audit log is viewed Then each entry contains: actor ID and role, timestamp (UTC), source (UI or API), vehicle ID, visit/booking ID (if applicable), combo/recommendation ID and version, action type, fields changed with old/new values, reason code, and comment And entries are append-only and cannot be edited or deleted via UI or API And the log supports filtering by date range, actor, vehicle, action type, reason code, and export to CSV And audit records are retained per organization retention policy and remain queryable until expiry
Transactional Overrides and Failure Handling
Given an override is being saved When the audit record cannot be persisted Then the system does not apply the override and shows an error to the user indicating the save failed And no changes are committed to the recommendation until the audit write succeeds atomically And the system retries the write up to 3 times on transient errors and surfaces a correlation ID on failure And API endpoints are idempotent using a client-supplied idempotency key to prevent duplicate audit entries and state changes
Override Events Feed Analytics
Given an override is successfully committed When the analytics pipeline consumes override events Then an event conforming to the defined schema (including action type, reason code, before/after fields, vehicle, timestamps, and user role) is enqueued within 1 minute of commit And analytics counters and the feature store are updated within a 15-minute SLA for downstream model/rule tuning And no PII beyond user ID/role is included; comments are excluded unless explicitly flagged for sharing by org policy And failures in analytics delivery do not affect the transactional save and are retried asynchronously with backoff

ChainLock Ledger

Creates an immutable, time‑stamped trail of every inspection, defect, repair step, road test, and sign‑off—complete with event hashes that flag any tampering. Auditors and insurers can verify integrity at a glance, reducing disputes and speeding reviews.

Requirements

Append-Only Ledger Core
"As a fleet compliance manager, I want an immutable, append-only event ledger for each vehicle so that I can prove maintenance and inspection history has not been altered."
Description

Implements an immutable, append-only event store that records every inspection, defect, repair step, road test, and sign-off across 3–100 vehicle fleets. Ensures write-once semantics with strictly sequential event indices per vehicle, idempotent writes, and audit-grade durability. Integrates with existing FleetPulse modules (OBD-II telemetry, inspections, maintenance scheduling, and repair-cost tracking) via a normalized event schema and ingestion pipeline. Supports pagination, filtering by vehicle/date/type, and retention policies while preventing edits or deletes; corrections are modeled as new compensating events. Provides multi-tenant isolation, per-vehicle chains with organization-wide indexing, and backfill/migration utilities for historical data.

Acceptance Criteria
Append-Only, Sequential Indexing, and Idempotent Writes
Given vehicle V with last_index N, when a new event E is appended, then E.index = N+1 and all prior indices remain immutable. Given two concurrent append requests for vehicle V, when processed, then resulting indices are strictly sequential with no duplicates or gaps. Given an existing stored event with idempotency_key K, when the same write with K is retried, then the original event is returned and no additional event is created.
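A minimal in-memory sketch of these three append semantics (strictly sequential indices, no gaps under concurrency, idempotent retries); a production store would need durable, transactional persistence:

```python
import threading

class VehicleLedger:
    """Append-only event chain for one vehicle. Field names are illustrative."""

    def __init__(self) -> None:
        self._events: list[dict] = []
        self._by_key: dict[str, dict] = {}
        self._lock = threading.Lock()  # serializes appends: no gaps, no dupes

    def append(self, payload: dict, idempotency_key: str) -> dict:
        with self._lock:
            existing = self._by_key.get(idempotency_key)
            if existing is not None:
                return existing  # retried write: return the original event
            event = {
                "index": len(self._events) + 1,  # E.index = N + 1
                "payload": payload,
                "idempotency_key": idempotency_key,
            }
            self._events.append(event)
            self._by_key[idempotency_key] = event
            return event
```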
Immutability and Tamper-Evident Hash Chain Verification
Given a vehicle chain with events [1..M], when verifying the chain, then for each i>1 events[i].previous_hash equals hash(events[i-1]) and verification returns status=valid. Given any modification to a stored event, when chain verification runs, then a hash mismatch is detected at the first altered index and verification returns status=invalid with the failing index. Given a request for the current chain head, when queried, then the API returns the head_hash that deterministically represents the chain state.
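A sketch of the chain walk, assuming each stored previous_hash was computed over the full prior event record using a canonical JSON form (the canonicalization rules themselves are specified under Cryptographic Hash Linking below):

```python
import hashlib
import json

def event_hash(event: dict) -> str:
    """Hash an event over a canonical JSON form (illustrative)."""
    blob = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def verify_chain(events: list[dict]) -> dict:
    """Return valid/invalid plus the first failing index and the head hash."""
    for i in range(1, len(events)):
        if events[i]["previous_hash"] != event_hash(events[i - 1]):
            return {"status": "invalid", "failing_index": events[i]["index"]}
    head = event_hash(events[-1]) if events else None
    return {"status": "valid", "head_hash": head}
```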
Audit-Grade Durability and Crash Consistency
Given an append request that is acknowledged, when the service is restarted unexpectedly, then the appended event remains present and readable. Given a crash after a write is received but before acknowledgment, when the system recovers, then the event is either absent or present exactly once; no partial or duplicate records exist. Given power loss during high-throughput appends, when recovery completes, then per-vehicle sequences are contiguous with no gaps or out-of-order indices.
Normalized Schema and Cross-Module Ingestion
Given an OBD-II DTC alert for vehicle V, when ingested, then an event of type telemetry_fault with standardized fields (vehicle_id, event_time, type, payload, idempotency_key, previous_hash) is appended to V’s chain. Given a completed inspection for vehicle V, when submitted, then an event of type inspection using the same schema is appended and linked via previous_hash. Given a maintenance/repair step or a cost update for vehicle V, when recorded, then events of types repair_step and cost_update are appended using the normalized schema. Given historical records for vehicle V, when backfilled via the migration utility, then they are appended in import order with current indices while preserving original event_time in the payload.
Query Pagination, Filtering, and Retention Visibility
Given vehicle_id V, date range [A,B], and types [T1,T2], when listing events, then only events for V within [A,B] of types [T1,T2] are returned. Given page_size P and continuation cursor C, when requesting successive pages, then ordering is deterministic, no events are duplicated or omitted across pages, and the final page indicates end-of-data. Given an organization-wide query, when filtering by type and date, then results include only events from vehicles in that organization with each result including vehicle_id. Given an active retention policy R, when listing without auditor scope, then events beyond R are excluded from default queries while chain verification over the hidden range still succeeds.
Compensating Corrections Instead of Updates/Deletes
Given any attempt to update or delete event E, when the request is submitted, then it is rejected and no stored data changes. Given a correction C referencing E.id, when C is appended, then E remains unchanged and both E and C are visible in the stream with C referencing E in its metadata.
Multi-Tenant Isolation with Per-Vehicle Chains and Org-Wide Indexing
Given two organizations OrgA and OrgB, when a user from OrgA queries, then no events from OrgB are returned and cross-tenant identifiers are not accepted. Given per-vehicle chains in OrgA, when retrieving chain metadata, then each vehicle has an independent index starting at 1 and its own chain head hash. Given an organization-wide index for OrgA, when listing across vehicles, then only OrgA’s vehicles are included and access control is enforced.
Cryptographic Hash Linking & Event Identity
"As an auditor, I want each event to include a cryptographic hash linked to the prior event so that any tampering is immediately detectable."
Description

Calculates a deterministic SHA-256 hash over normalized event payloads, metadata, and the previous event’s hash to create a tamper-evident chain. Produces a content-addressable Event ID and stores the previous-hash pointer so that any change propagates as detectable breaks downstream. Enforces canonical serialization, stable field ordering, and explicit null handling to avoid hash drift. Exposes verify endpoints that recompute hashes and return mismatch indicators for auditors and insurers. Supports algorithm agility and versioning for future crypto upgrades without breaking historical verification.

Acceptance Criteria
Deterministic Hash Generation via Canonical Serialization
- Given canonicalization version v1 and semantically identical events produced on different platforms, When the SHA-256 digest is computed, Then the resulting 64-hex-character hash is identical across platforms.
- Given the same event with differing key order, whitespace, or numeric/string formatting, When canonicalization v1 is applied, Then the SHA-256 digest equals the baseline digest.
- Given an event payload, metadata, and previous_hash pointer, When hashing, Then the digest equals SHA-256(canonicalize_v1(payload, metadata, previous_hash)).
- Given a field explicitly set to null, When canonicalization v1 runs, Then a null marker is serialized and contributes deterministically to the digest.
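One plausible reading of canonicalization v1, sketched below: sorted keys, no insignificant whitespace, UTF-8 encoding, and explicit nulls serialized as JSON null (which keeps null distinct from an absent field). These rules are assumptions consistent with the criteria above, not a normative definition.

```python
import hashlib
import json

def canonicalize_v1(payload: dict, metadata: dict, previous_hash: str) -> bytes:
    """Stable field ordering, compact separators, explicit null markers."""
    doc = {"payload": payload, "metadata": metadata, "previous_hash": previous_hash}
    return json.dumps(doc, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def digest_v1(payload: dict, metadata: dict, previous_hash: str) -> str:
    return hashlib.sha256(canonicalize_v1(payload, metadata, previous_hash)).hexdigest()

# Identical content hashes identically regardless of key order,
# and an explicit null contributes to the digest:
a = digest_v1({"dtc": "P0301", "note": None}, {"v": 1}, "0" * 64)
b = digest_v1({"note": None, "dtc": "P0301"}, {"v": 1}, "0" * 64)
assert a == b and len(a) == 64
```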
Hash Chain Linking with Previous-Hash Pointer
- Given event B references event A via previous_hash, When chain verification runs, Then B.previous_hash equals hash(A) and the link passes.
- Given any modification to event A after event B was recorded, When verifying from A forward, Then the first failing event is B and first_broken_event_id == B.
- Given a genesis event, When created, Then previous_hash equals the configured genesis marker (e.g., 64 zeroes) and verification treats it as valid.
Content-Addressable Event ID and Idempotency
- Given a new event is accepted, When hashing completes, Then Event ID equals the hex-encoded SHA-256 digest of the canonicalized components.
- Given the exact same canonical event is submitted again, When processed, Then the same Event ID is returned and no duplicate ledger record is created.
- Given any byte-level change to the canonicalized components, When hashing, Then a different Event ID is produced.
- Given an Event ID, When the event is retrieved, Then the stored_hash field equals the Event ID.
Verify Endpoints Detect Tampering and Pinpoint Breaks
- Given a chain_id and optional start/end bounds, When GET /chains/{chain_id}/verify is called, Then the response includes chain_valid (bool), last_verified_event_id, and first_broken_event_id (nullable).
- Given an event ID, When GET /events/{id}/verify is called, Then the response includes computed_hash, stored_hash, match (bool), and the algorithm and canonicalization version used.
- Given a tampered event in the chain, When verification runs, Then chain_valid == false and reason_code is one of {previous_hash_mismatch, payload_hash_mismatch, algorithm_unsupported}.
Algorithm Agility and Versioned Verification
- Given historical events recorded with algorithm=sha256 and canonicalization=v1, When system defaults change to algorithm=sha3 and canonicalization=v2, Then historical events still verify using their recorded algorithm and canonicalization.
- Given ingestion rotates from sha256/v1 to sha3/v2 at activation time T, When events are created after T, Then they carry the new identifiers and earlier events remain unchanged and verifiable.
- Given an event declares an unknown algorithm identifier, When /verify is called, Then verification returns algorithm_unsupported and does not alter stored records.
Explicit Null vs Missing Field Handling
- Given two payloads where field F is absent in one and explicitly null in the other, When hashed under canonicalization v1, Then the resulting digests are different.
- Given the schema later adds optional field G with default null, When verifying a historical event recorded before the change, Then its Event ID remains unchanged.
- Given field F changes from null to empty string "", When hashed, Then the digest changes and a new Event ID is produced.
Trusted Timestamping & Clock Discipline
"As an insurer adjuster, I want trusted, consistent timestamps on all events so that I can accurately reconstruct sequences during claims reviews."
Description

Attaches trusted, UTC-normalized timestamps to every event using server-side receipt time plus device-captured time, with drift detection and reconciliation rules. Synchronizes mobile and edge devices via NTP and records drift offsets; when offline, captures signed device time and anchors it to server time upon sync. Maintains monotonic ordering per vehicle chain, flags out-of-order submissions, and exposes time provenance in the verification view and API. Ensures consistency across time zones and daylight saving time transitions, enabling accurate sequence reconstruction for audits and claims.

Acceptance Criteria
UTC-Normalized Server Receipts
Given an incoming event with device-captured time and timezone, When the server receives it, Then persist serverReceiptAtUtc in ISO 8601 with 'Z' and millisecond precision. Given the same event, When normalizing, Then persist deviceCapturedAtUtc alongside deviceCapturedAtLocal and deviceTimezone. Given the event is stored, When retrieved via API, Then serverReceiptAtUtc and deviceCapturedAtUtc are present, non-null, and correctly formatted. Given server NTP offset is measured, When offset exceeds 1000 ms at ingest, Then set systemTimeHealth="DEGRADED" and include serverNtpOffsetMs in the event provenance.
Device Drift Detection and Reconciliation
Given a device time sync occurs via NTP, When |offset| > 2000 ms, Then record deviceDriftMs and set driftStatus="OUT_OF_TOLERANCE"; else set driftStatus="IN_TOLERANCE". Given an event is ingested, When both deviceCapturedAtUtc and serverReceiptAtUtc are available, Then compute and persist perEventDriftMs = deviceCapturedAtUtc - serverReceiptAtUtc. Given reconciliation rules, When |perEventDriftMs| > 2000 ms, Then set effectiveEventTimeUtc = serverReceiptAtUtc and reconciliationMethod="SERVER_RECEIPT". Given reconciliation rules, When |perEventDriftMs| <= 2000 ms, Then set effectiveEventTimeUtc = deviceCapturedAtUtc and reconciliationMethod="DEVICE_TIME_CORRECTED".
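The reconciliation rule above reduces to a single tolerance check; a sketch, with the 2000 ms threshold taken from the criteria and field names matching the provenance schema:

```python
from datetime import datetime

DRIFT_TOLERANCE_MS = 2000  # threshold from the criteria above

def reconcile(device_captured_at_utc: datetime,
              server_receipt_at_utc: datetime) -> dict:
    """Trust the device clock within tolerance; otherwise use server receipt."""
    drift_ms = (device_captured_at_utc - server_receipt_at_utc).total_seconds() * 1000
    if abs(drift_ms) > DRIFT_TOLERANCE_MS:
        method, effective = "SERVER_RECEIPT", server_receipt_at_utc
    else:
        method, effective = "DEVICE_TIME_CORRECTED", device_captured_at_utc
    return {
        "perEventDriftMs": drift_ms,
        "effectiveEventTimeUtc": effective,
        "reconciliationMethod": method,
    }
```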
Offline Capture and Anchoring on Sync
Given the device is offline, When a user records an event, Then capture deviceCapturedAtLocal, deviceTimezone, deviceCapturedAtUtc, and sign the payload with the device key. Given connectivity is restored, When the event syncs, Then verify the signature, persist syncReceiptAtUtc, and compute anchorOffsetMs = syncReceiptAtUtc - deviceCapturedAtUtc. Given anchoring completes, Then preserve the original signed device timestamps immutably and set effectiveEventTimeUtc according to reconciliationMethod.
Monotonic Ordering per Vehicle Chain
Given events A and B belong to the same vehicle, When effectiveEventTimeUtc(A) <= effectiveEventTimeUtc(B), Then ensure chainIndex(A) < chainIndex(B). Given two events share the same effectiveEventTimeUtc, When ordering, Then apply deterministic tiebreaker: ascending serverReceiptAtUtc then eventId to produce unique chainIndex values. Given insertion of a new event would break monotonic order, When processing, Then insert, recompute affected chainIndex values deterministically, and mark chainReorderOccurred=true for impacted events.
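A sketch of the deterministic ordering key; it assumes the timestamp fields are ISO 8601 UTC strings, which sort correctly as plain text:

```python
def chain_order_key(event: dict) -> tuple:
    """Tiebreaker per the criteria: effective time, then server receipt,
    then eventId, yielding a total and deterministic order."""
    return (event["effectiveEventTimeUtc"],
            event["serverReceiptAtUtc"],
            event["eventId"])

def assign_chain_indices(events: list[dict]) -> None:
    """Recompute chainIndex so per-vehicle ordering stays strictly monotonic."""
    for index, event in enumerate(sorted(events, key=chain_order_key), start=1):
        event["chainIndex"] = index
```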
Out-of-Order Submission Detection and Flagging
Given an event arrives with effectiveEventTimeUtc earlier than the current latest for the vehicle, When processed, Then set timeAnomalyFlag="OUT_OF_ORDER" and include details {previousEventId, deltaMs}. Given an event is flagged OUT_OF_ORDER, When viewed in verification UI, Then display a warning banner and anomaly details within 1 second of ingestion. Given the event is retrieved via API, Then include {timeAnomalyFlag, deltaMs, previousEventId} in the response payload.
Time Provenance in Verification View and API
Given an event exists, When opened in the verification view, Then display serverReceiptAtUtc, deviceCapturedAtLocal, deviceTimezone, deviceCapturedAtUtc, perEventDriftMs, effectiveEventTimeUtc, reconciliationMethod, ntpSource, serverNtpOffsetMs, signatureVerified. Given clients call GET /events/{id}, When the event is returned, Then the response contains the same provenance fields with documented types and ISO 8601 formatting where applicable. Given any time-related field has been altered after signing, When verifying, Then set hashMismatch=true and highlight altered fields in the UI and response.
Time Zone and DST Consistency
Given a user preference timezone (e.g., America/New_York), When viewing events spanning a DST transition, Then local display times reflect DST rules while UTC values remain unchanged. Given sorting by time in UI or API, When ordering events, Then use effectiveEventTimeUtc for sort and ensure order is stable across timezones. Given generating a daily report, When grouping by day, Then compute day boundaries in the user’s timezone and verify underlying UTC ranges are correct and inclusive.
Verification Portal & Public Read-Only API
"As an external auditor, I want a self-serve portal and API to verify ledger integrity so that I can complete reviews quickly without requesting raw data."
Description

Provides a self-serve web interface and read-only API for auditors and insurers to verify ledger integrity at a glance. Displays green/red integrity status, chain continuity, timestamp provenance, and any detected anomalies for a selected vehicle, date range, or event type. Generates downloadable proof bundles containing event records, hash links, and verification results for inclusion in audit files. Implements role-limited access, rate limiting, and audit logging, while integrating with FleetPulse’s authorization model for secure external sharing.

Acceptance Criteria
Auditor verifies ledger integrity via web portal
- Given an authenticated auditor with access to Vehicle V and a selected date range and event types, When they open the Verification Portal and submit the query, Then the portal displays integrityStatus (Green or Red), chainContinuity (Continuous or Broken), timestampProvenance (Trusted or Unverified), and anomalies count within 3 seconds for up to 10,000 events.
- And each event row shows eventId, eventType, timestamp with source, eventHash, and previousHash link; clicking previousHash navigates to the linked event.
- And integrityStatus is Green only if every event’s hash link resolves to the expected previousHash and all validations succeed; otherwise Red with machine-readable reason codes.
- And results are read-only and cannot be modified by the user.
Insurer downloads proof bundle for a date range
- Given an authorized user scopes a vehicle, date range, and event types, When Generate Proof Bundle is clicked, Then a downloadable bundle (ZIP) is produced within 60 seconds for up to 50,000 events containing: event records (JSONL or CSV), hash-link chain file, verification report (Green/Red with reasons), and a manifest with SHA-256 checksums.
- And the bundle’s SHA-256 is displayed and matches the manifest; the manifest lists all included files and counts.
- And the verification report reflects the same status and anomaly list shown in the portal for the same query.
- And the download link is valid for 7 days, tenant-scoped, and revocable; revoked links return 410 and cannot be reactivated.
Public read-only API returns verification status
- Given a valid read-only API key or share token scoped to vehicle/date range, When GET /verification is called with vehicleId, from, to, and optional eventType filters, Then the API returns 200 with JSON including integrityStatus, chainContinuity, timestampProvenance, anomalies[], and pagination metadata.
- And response time is <= 800 ms p95 for queries returning up to 5,000 events; larger responses use pagination with stable cursors.
- And attempting to use write methods returns 405; no PII beyond authorized fields is present in responses.
- And invalid scope or expired token returns 403 with an error code; missing parameters return 400; exceeding rate limits returns 429 with Retry-After.
Role-limited access integrated with FleetPulse authorization
- Given FleetPulse roles and share tokens, When a user or external party attempts to access the portal or API, Then access is granted only if the principal has ExternalVerifier permission or holds a valid share token scoped to specific vehicles, date ranges, and event types.
- And scope boundaries are enforced server-side; out-of-scope requests return 403 and are audit-logged with reason codes.
- And share tokens are time-bound with explicit expiry, are revocable, and cannot be used to access other FleetPulse features.
- And revocations take effect within 60 seconds and invalidate active sessions and tokens.
API and portal rate limiting and abuse controls
- Given default tenant-level limits, When a client calls the verification API, Then rate limits of 100 requests per minute per API key and 20 requests per minute per IP are enforced with 429 responses and a Retry-After header on excess.
- And proof bundle generation is limited to 5 bundles per hour per tenant and 1 concurrent generation per tenant; excess attempts return 429 with a descriptive error.
- And limits are configurable per tenant by admins within platform maximums; changes take effect within 5 minutes and are audit-logged.
- And rate limiting events do not impact other tenants and do not degrade portal availability beyond 1% error rate p95.
Comprehensive audit logging for verification access
- Given any portal view or API call, When the request completes, Then an immutable audit log entry records tenantId, principalId or tokenId, vehicleId(s), query parameters (with sensitive values hashed), timestamp (UTC), result code, response size, and originating IP.
- And audit logs are retained for at least 365 days, are queryable by tenant admins, and are exportable in CSV within 60 seconds for up to 100,000 entries.
- And tamper attempts on logs are detectable via hash chaining; verification of logs yields Green/Red status with reasons.
- And access to audit logs is role-restricted; unauthorized access attempts return 403 and are logged.
Deterministic tamper detection and chain continuity checks
- Given a dataset with injected issues (altered event payload, missing event, duplicate event, out-of-order timestamp, invalid previousHash), When verified via portal and API, Then integrityStatus is Red, chainContinuity identifies the break(s), and anomalies include machine-readable reason codes for each failure type.
- And the first failing eventId is identified with a pointer to the prior valid event; total anomalies count equals the number of injected issues.
- And the same dataset without issues yields integrityStatus Green; results are identical across portal and API for the same query.
- And timestamp provenance displays source (GPS/NTP/manual) and flags Unverified when source is missing without affecting hash-valid events.
Role-Based Digital Sign-Offs & Attestations
"As a maintenance manager, I want role-based digital sign-offs on inspections and repairs so that accountability and non-repudiation are enforced."
Description

Captures step-level sign-offs for inspections, repairs, and road tests with role-aware digital attestations (driver, mechanic, manager). Binds each sign-off to the event hash and timestamp, preventing post-signature edits and enabling non-repudiation. Integrates with FleetPulse RBAC to enforce who can attest to which steps, supports multi-signer workflows, and queues offline signatures for later submission. Surfaces attestation status in work orders and in the verification portal to demonstrate accountability and process compliance.

Acceptance Criteria
RBAC-Restricted Attestations
Given a work order step configured with required role(s), When a user with a permitted role attempts to attest, Then the system records a sign-off bound to the event hash with signer identity, selected role, and server-side UTC timestamp. Given a work order step configured with required role(s), When a user without a permitted role attempts to attest, Then the system rejects with a permission error, creates no sign-off, and records an audit event. Given a user holds multiple roles, When they attest, Then the role used is explicitly selected and stored and cannot be changed after submission. Given a step configured to allow any of multiple roles (e.g., Driver OR Mechanic), When an authorized role attests, Then the step counts that requirement as satisfied.
Post-Signature Immutability & Non-Repudiation
Given a step has a recorded sign-off, When any user attempts to edit signed fields or delete the sign-off, Then the system prevents the change and returns a read-only/immutable message. Given a signed step requires correction, When a revision is created, Then a new event with a new event hash is generated and requires fresh attestations; the prior sign-off remains visible and is marked "Superseded." Given a sign-off record, Then it includes the event hash, signer identity, role, and server-side UTC timestamp; verification of the stored hash against the canonical event payload succeeds. Given any discrepancy between stored hash and recomputed hash, Then the system flags the record as tampered, blocks "Complete" status, and exposes the tamper flag via UI and API.
Multi-Signer Workflow Completion & Ordering
Given a step requires multiple distinct roles (e.g., Driver + Mechanic + Manager), When all required roles have valid sign-offs, Then the step attestation status becomes "Complete." Given only a subset of required sign-offs are present, Then the step status is "Pending" and the UI lists which roles remain. Given optional signers are configured, Then "Complete" is reached when all required signers have attested regardless of optional signers. Given an ordering rule (e.g., Mechanic before Manager) is configured, When a sign-off is attempted out of order, Then the system rejects the attempt and explains the required order. Given a workflow policy that disallows one user fulfilling multiple required roles, When the same user attempts a second required role sign-off, Then the system rejects it with a policy error.
Offline Attestation Queue & Sync
Given the device is offline, When an authorized user attests a step, Then the sign-off is stored locally as "Queued," bound to the captured event payload snapshot and a provisional device timestamp, and is not editable. When connectivity is restored, Then the client submits the queued sign-off; the server validates RBAC, recomputes the event hash from the server record, stamps a server-side UTC timestamp, and persists the sign-off if valid. Given the underlying step changed while offline, When sync occurs, Then the queued sign-off is rejected as "Conflict," the user is notified, and a fresh attestation is required against the current step. Given duplicate queued submissions, When the client retries, Then the server processes idempotently using a client-generated UUID to avoid duplicate sign-offs.
Work Order and Verification Portal Visibility
Given a work order with attestation-enabled steps, Then each step displays a status badge (Pending, Partially Signed, Complete, Rejected, Superseded) and lists remaining required roles. Given an attestation exists, Then the work order view and verification portal display signer identity, role, server UTC timestamp, and event hash, with actions to copy the hash and download an attestation artifact. Given a tamper flag on any attested event, Then the verification portal shows "Integrity Check Failed," blocks the step from showing "Complete," and provides the failing hash comparison. Given a user or auditor enters a work order ID or event hash in the verification portal, Then the portal returns the attestation details and integrity status without permitting edits.
Event Hash Integrity & External Verification
Given an event payload and its stored event hash, When the hash is recomputed using the documented algorithm, Then it matches the stored hash for untampered records. Given an API request for an attested event, Then the API returns the canonical payload used for hashing, the event hash, signer identity, role, and server UTC timestamp. Given a request to export an attested work order, Then the system generates a verifiable bundle (payload + hashes + signatures) that external tools can validate, and the export hash equals the server-stored hash. Given any modification to the payload after attestation, Then the recomputed hash no longer matches and the system records and displays a tamper alert.
Evidence Attachments & Content Addressing
"As a fleet manager, I want to attach and verify evidence files within the ledger so that supporting documents are provably linked to each event."
Description

Allows attaching photos, PDFs, invoices, OBD-II snapshots, and test results to events, storing files in secure object storage and recording their cryptographic hashes in the ledger. Uses hash-based addressing to detect alteration, supports large-file uploads with resumable transfers, and performs antivirus scanning and metadata extraction. Generates thumbnails/previews for UI while preserving original files, enforces access controls for auditors and insurers, and includes attachment integrity within exported proof bundles.

Acceptance Criteria
Resumable Large-File Upload with Hash Verification
Given a permitted attachment type (photo, PDF, invoice, OBD-II snapshot, or test result) of size up to 2 GB And the client initiates a resumable upload with 8 MB chunks When the network connection drops mid-transfer and the client retries within 24 hours Then the upload resumes from the last confirmed chunk without data duplication And upon completion the server computes the SHA-256 of the stored object And the ledger records the attachment’s SHA-256, byte size, MIME type, uploader, event ID, and timestamp in an immutable entry And the API returns the computed SHA-256 and storage address to the client And the UI shows the attachment as "Verified" only after server-side hash computation succeeds
Malware Scanning Blocks Infected Attachments
Given a user uploads any permitted file type that is infected (e.g., contains the EICAR test signature) When server-side antivirus scanning runs before ledger finalization Then the file is quarantined and not persisted to the content-addressed bucket And no ledger entry is committed for that attachment And the user receives a clear error message indicating malware detection And a security audit log records uploader, event ID, file metadata, detection signature, and timestamp And a clean file uploaded immediately after is accepted and proceeds to hash verification
Metadata Extraction and Content Addressing
Given a JPEG photo and a PDF invoice are uploaded successfully When post-upload processing runs Then metadata is extracted and stored: MIME type, byte size, SHA-256, original filename, capture/create time (EXIF for JPEG if present), page count for PDF, and uploader ID And the object storage key equals the hex-encoded SHA-256 (content-addressed) And a second upload of identical content results in de-duplication: no new object is stored, but the new event references the same hash And metadata is retrievable via API and filterable by event ID and vehicle ID
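Content addressing and de-duplication follow directly from keying objects by their own digest; a sketch, with an in-memory dict standing in for the object store:

```python
import hashlib

_objects: dict[str, bytes] = {}  # stand-in for secure object storage

def store_content_addressed(data: bytes) -> tuple[str, bool]:
    """Store bytes under their hex-encoded SHA-256. Identical content maps to
    the same key, so a second upload of the same bytes creates no new object;
    the caller just records another reference to the existing hash."""
    key = hashlib.sha256(data).hexdigest()
    created = key not in _objects
    if created:
        _objects[key] = data
    return key, created
```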
Preview Generation Preserves Originals
Given an image (JPEG/PNG) and a PDF are attached When preview generation completes Then a thumbnail is created for images (longest side 512 px, JPEG) and the first page thumbnail for PDFs (PNG) And the original files remain byte-for-byte identical to the uploaded content (same SHA-256) And preview artifacts are stored under distinct keys and are excluded from integrity calculations And previews are available in the UI within 30 seconds of upload completion in 95% of cases
Scoped Access for Auditors and Insurers
Given an external auditor or insurer is granted read-only access scoped to a claim/audit case When they request an attachment download Then access is provided via a pre-signed URL that expires in 15 minutes and is single-use And only attachments linked to vehicles and events within the granted scope are accessible And any write, delete, or re-upload attempt is denied with HTTP 403 And all access events are logged with actor, role, event ID, attachment hash, IP, and timestamp And internal users with broader roles can access per RBAC rules while external access remains least-privilege
Canonical OBD-II Snapshot Attachments
Given an OBD-II snapshot is uploaded as JSON When the system canonicalizes the payload (stable key ordering, trimmed insignificant whitespace, normalized numeric precision) prior to hashing Then the computed SHA-256 is identical for semantically equivalent snapshots that differ only in formatting And the hash differs if any PID/DTC value changes And extracted metadata includes VIN (if present), PIDs included, DTCs present, and snapshot timestamp And the snapshot is stored and referenced by its hash in the ledger entry
Proof Bundle Export with Attachment Integrity
Given an auditor requests a proof bundle for selected events with attachments When the bundle is generated Then it includes a manifest listing each attachment’s filename, byte size, MIME, SHA-256, content-address key, event ID, and ledger inclusion proof And a verifier tool/endpoint can recompute hashes from the bundle and return Pass if all bytes match and proofs validate, otherwise Fail with the first mismatched item identified And the system can export a bundle for up to 500 attachments within 60 seconds under normal load And if access restrictions apply, the manifest includes the hash with a redacted flag while still enabling integrity verification
Continuous Chain Integrity Monitoring & Alerts
"As an operations lead, I want automatic integrity checks and alerts so that any tampering or data anomalies are surfaced immediately."
Description

Runs scheduled and on-demand verification jobs that traverse chains to detect hash breaks, missing predecessors, out-of-order timestamps, and clock drift beyond thresholds. Surfaces issues on an integrity dashboard and triggers alerts via email and in-app notifications, with integrations to incident channels. Supports automatic quarantine of suspect records from downstream analytics, provides guided remediation workflows (e.g., resubmit from source, annotate discrepancies), and logs verification history for audit traceability.

Acceptance Criteria
Nightly Scheduled Verification Detects Chain Anomalies
Given the verification scheduler is enabled for a tenant and chains contain both valid and invalid links And anomaly thresholds are configured (clock drift threshold = 5 minutes; timestamp order = strict; hash integrity = strict) When the nightly verification job runs at its scheduled time Then it traverses all chains in scope without skipping any segment And it identifies and classifies anomalies: hash break, missing predecessor, out-of-order timestamp, and clock drift beyond threshold And it records total records scanned and counts per anomaly type And it completes within the tenant’s configured verification SLA And it writes a signed verification summary (job ID, start/end time, scope, parameters, counts, digest) to the verification history
On-Demand Chain Verification Is Idempotent and Read-Only
Given an authorized user (Auditor or Admin) selects an on-demand verification for specified chains and time range When they trigger the verification job Then the system verifies only the requested scope And it does not mutate ledger records or stored hashes And repeating the job with identical parameters produces identical results (idempotent) And the request and results are logged with a correlation ID in verification history And concurrent on-demand runs respect configured concurrency limits without starving scheduled jobs
Integrity Dashboard Surfaces Issues and Supports Drill-Down
Given one or more anomalies have been detected in the last 24 hours When a user opens the Integrity Dashboard Then the dashboard shows total chains scanned, total records scanned, anomalies by type, affected vehicles/assets, last run status, and 7-day trend And the user can filter by anomaly type, chain, vehicle/asset, time range, and job ID And selecting an anomaly opens a detail view with event ID, predecessor ID, timestamps, observed drift, computed vs stored hashes, detection time, quarantine status, and remediation status And dashboard metrics refresh within 30 seconds after a job completes
Multi-Channel Alerts Trigger and Deduplicate
Given alerting is enabled and recipients and incident channels are configured for a tenant When a verification job detects one or more anomalies Then email and in-app notifications are sent to configured recipients within 60 seconds of detection And if Slack, Microsoft Teams, or Webhook is configured, an incident message is posted with tenant ID, job ID, anomaly types, counts, severity, and deep links And alerts for the same anomaly are deduplicated within the configured suppression window And escalation is triggered if an anomaly remains unresolved past the configured duration And failed alert deliveries are retried with exponential backoff up to 5 times and surfaced on the dashboard
Automatic Quarantine of Suspect Records from Analytics
Given quarantine is enabled for the tenant When verification flags records as suspect Then those records are marked as quarantined with reason code and source job ID And quarantined records are excluded by default from analytics queries, dashboards, exports, and downstream pipelines And authorized users can explicitly include quarantined records via includeSuspect=true, which is visibly indicated in responses and UI And lifting quarantine requires a successful remediation followed by re-verification that clears the issue And all quarantine state changes are audit-logged with actor, timestamp, action, and justification
Guided Remediation: Resubmit and Annotate Discrepancies
Given an anomaly exists on record R When a user initiates the remediation workflow for R Then the system recommends actions based on anomaly type (resubmit from source, correct timestamp offset, annotate suspected tampering) And mandatory fields (justification text; source reference) are enforced before submission; attachments are optional And resubmission retrieves the authoritative payload from the configured source connector and recomputes hashes And upon successful remediation, the link is re-verified and the state transitions from Suspect to Valid with chain continuity restored And failed remediation attempts leave the state as Suspect and log the failure with error details and next-step guidance And all annotations are immutable, time-stamped, and visible in record history
Verification History Is Immutable, Searchable, and Exportable
Given verification jobs (scheduled and on-demand) have been executed When an auditor queries verification history Then they can filter by tenant, job type, time range, chain, vehicle/asset, job status, anomaly type, and actor And each entry includes signed digest, parameters, start/end time, records scanned, anomaly counts, affected record IDs, quarantine actions, alerts sent, and final outcome And history entries are append-only; modification attempts are blocked and logged And history can be exported to CSV and JSON and shared via signed URLs that expire per policy And history is retained for at least 24 months, configurable per tenant within legal constraints

PhotoProof Timeline

Auto-threads photos and videos by VIN, timestamp, and GPS, pairing before/after shots and technician notes into a clear narrative. Smart annotations and callouts highlight defects and fixes so reviewers grasp context fast and request fewer clarifications.

Requirements

Auto-Thread by VIN/Timestamp/GPS
"As a fleet manager, I want photos and videos to auto-organize by vehicle and time so that I can review the full story of an issue or repair without manual sorting."
Description

Automatically ingest media metadata (VIN, capture timestamp, GPS coordinates) and associate each photo/video with the correct vehicle record and service event. Order items chronologically into a continuous timeline per VIN, deduplicate near-identical uploads, and reconcile clock skew using server time and EXIF data. Integrate with FleetPulse’s OBD-II events and work orders to anchor media around inspections and repairs, ensuring a unified narrative that reduces manual sorting and context gaps.

Acceptance Criteria
Auto-associate Media to Correct VIN
Given a photo or video upload contains a valid 17-character VIN in metadata (EXIF/XMP), a capture timestamp, and GPS coordinates When the ingestion service processes the upload Then the media is associated to the vehicle record with that VIN in FleetPulse within 5 seconds of receipt And the normalized fields (vin, corrected_capture_at, gps_lat, gps_lng) are persisted with the media record And the association result is exposed via API within 5 seconds of completion
Chronological Timeline Ordering per VIN
Given a VIN with multiple associated media items with corrected timestamps spanning multiple days When the PhotoProof timeline is requested for that VIN Then items are sorted ascending by corrected_capture_at And items with identical corrected_capture_at are secondarily ordered by server_received_at ascending And the ordering is consistent across refreshes and API pagination And all timestamps are presented in the fleet’s configured timezone
Deduplicate Near-Identical Uploads
Given two or more uploads for the same VIN where perceptual hash (pHash) Hamming distance ≤ 5, corrected_capture_at values are within 10 minutes, and GPS coordinates are within 100 meters When the deduplication job runs after ingestion Then one item is retained as canonical and the others are marked as duplicates referencing the canonical media_id And duplicates are excluded from the default timeline list API but remain retrievable via an include_duplicates flag And the dedup decision (distance, time_delta, gps_delta) is recorded in audit metadata
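A sketch of the three dedup gates above, assuming 64-bit integer pHashes and a GPS distance precomputed by a haversine step not shown here:

```python
def hamming_distance(phash_a: int, phash_b: int) -> int:
    """Bit-level Hamming distance between two 64-bit perceptual hashes."""
    return (phash_a ^ phash_b).bit_count()

def is_near_duplicate(a: dict, b: dict, gps_delta_m: float) -> bool:
    """pHash distance <= 5, capture times within 10 minutes, GPS within 100 m.
    Item fields ('phash', 'captured_at') are illustrative assumptions."""
    return (hamming_distance(a["phash"], b["phash"]) <= 5
            and abs((a["captured_at"] - b["captured_at"]).total_seconds()) <= 600
            and gps_delta_m <= 100)
```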
Clock Skew Reconciliation and Timestamp Correction
Given an upload where device capture timestamp differs from server_received_at by ≥ 2 minutes When timestamp reconciliation executes Then corrected_capture_at is computed using EXIF DateTimeOriginal adjusted by the measured device-server offset for the session And corrected_capture_at is used for all ordering and anchoring logic And time_correction metadata (offset_seconds, time_source, reconciled=true) is persisted And if EXIF capture time is missing, corrected_capture_at defaults to server_received_at And the absolute difference between (device_time + offset_seconds) and corrected_capture_at is ≤ 5 seconds
Anchor Media to OBD-II Events and Work Orders
Given media associated to a VIN with corrected_capture_at and GPS coordinates And there exist OBD-II events and/or open work orders for that VIN When anchoring is performed Then the media is linked to the OBD-II event whose window is nearest to corrected_capture_at if corrected_capture_at is within 60 minutes of the event window and GPS is within 200 meters of a ping during that window And the media is associated to the open or in-progress work order whose scheduled/actual window overlaps corrected_capture_at or whose service location is within 200 meters And if multiple candidates tie on time and distance, the media is flagged as ambiguous and placed in a review queue without automatic anchoring
Exception Handling for Missing or Conflicting Metadata
Given an upload missing VIN, or with a VIN not found in FleetPulse, or missing GPS/timestamp, or with conflicting VIN sources When ingestion runs Then the media is held in an Unassigned queue with reason codes describing the issue And if a work order ID is present in the upload context and maps to a single VIN, the media is associated to that VIN and work order And if VIN is missing but GPS and timestamp exist, the system attempts a best match by finding a single VIN with an OBD ping within 150 meters and 24 hours of corrected_capture_at; if exactly one candidate exists, auto-associate, else remain Unassigned And all exception outcomes are available via API and audit log
Mobile Capture & Metadata Extraction
"As a technician, I want the app to capture and auto-fill metadata when I take photos so that uploads attach to the right vehicle and job with minimal effort."
Description

Provide mobile/web upload with offline capture, background sync, and automatic extraction of EXIF (timestamp, GPS), VIN decoding (via barcode/plate scan), and device time validation. Enforce minimum quality thresholds (focus, resolution), auto-orient media, compress for efficient upload, and flag missing metadata for technician correction. Seamlessly attach uploads to the correct vehicle and work order through quick-select and scan flows.

Acceptance Criteria
Offline Capture & Background Sync
Given the device has no internet connectivity When a technician captures photos or videos for a selected vehicle and work order Then the media and extracted metadata are stored locally with encryption at rest And a background upload queue entry is created per media item And when connectivity is restored, syncing auto-starts within 10 seconds And items upload in original capture order with preserved timestamps And failed uploads retry up to 3 times with exponential backoff (2s, 4s, 8s) And server-side idempotency prevents duplicate records on retries And the technician can view per-item status: Pending, Syncing, Synced, or Failed with Retry
EXIF Extraction & Metadata Validation
Given a photo or video with EXIF metadata (timestamp, timezone, GPS, orientation) When the media is queued for upload Then the system extracts and stores capture_timestamp, timezone_offset, latitude, longitude, horizontal_accuracy (m), and orientation And if GPS is missing or horizontal_accuracy > 100 meters, the item is flagged as Needs Review And if timestamp is missing, the item is flagged as Needs Review And flagged items display inline prompts for the technician to supply or correct values before finalizing upload
Device Time Validation
Given a capture occurs on a device with a configurable time drift threshold set to 5 minutes When the media is prepared for upload Then the system compares EXIF capture time to server time And if absolute drift > 5 minutes, the item is marked Time Mismatch And the technician is prompted to confirm or correct the capture time And the corrected capture time is stored as capture_time_corrected while preserving the original as capture_time_original And the upload cannot be finalized until the time mismatch is resolved
VIN/Plate Scan & Decode
Given the technician initiates VIN capture via barcode scan or license plate scan When a valid barcode or plate is scanned Then the system decodes to a 17-character VIN and queries the tenant's vehicle list And if exactly one vehicle matches, it is auto-selected And if multiple vehicles match, the technician must select the correct one from a disambiguation list And if no match is found, the technician is prompted to create a new vehicle or enter VIN manually And VIN decoding returns year, make, model, and trim and stores them on the media record
Minimum Quality Thresholds Enforcement
Given the camera preview is active When a photo is taken Then the app validates focus sharpness (edge variance) and resolution And the photo must be at least 1600x1200 pixels and pass a sharpness threshold And if thresholds are not met, the capture is rejected and the user is prompted to retake When a video is recorded Then the app validates resolution and frame rate And the video must be at least 1280x720 at ≥24 fps And if thresholds are not met, the recording cannot be uploaded until reshot meeting thresholds
Auto-Orientation & Bandwidth-Efficient Compression
Given media is captured in any device orientation When the media is processed for upload Then images are auto-rotated to upright using EXIF orientation and saved preserving EXIF And images are resized to a maximum long edge of 2048 px and JPEG quality 85%, targeting ≤3 MB per image And videos are transcoded to H.264 1080p at ~6 Mbps with AAC 128 kbps, targeting ≤50 MB per 60 seconds And aspect ratio is preserved with no stretching or pillarboxing; letterboxing is applied only as needed
Correct Association to Vehicle and Work Order
Given a vehicle is selected via scan or quick-select And one or more open work orders exist for that vehicle When the technician selects the target work order and uploads media Then the media record is persisted with vehicle_id and work_order_id And if multiple open work orders exist, the app requires an explicit choice before allowing upload And if no open work order exists, the technician can save to the vehicle only or create/select a work order before finalization And the associated media appears under the selected work order and vehicle timeline immediately after sync
Before/After Media Pairing
"As a service advisor, I want the system to automatically pair before and after images so that I can quickly demonstrate the impact of repairs to reviewers."
Description

Detect and pair before/after shots around a service event by analyzing timestamps, work order phases, filenames, and technician prompts. Visually present pairs side-by-side with clear labels and allow manual override for edge cases. Support multi-step sequences (before, during, after) and persist pairing decisions for auditability and reuse in exported reports.

Acceptance Criteria
Auto-Pairing by Metadata and Service Window
Given a service event exists for VIN V with defined start/end times and service location And media items are uploaded with VIN V, timestamps, filenames, GPS, and optional technician labels (e.g., "before", "after") When the pairing job runs Then items captured within 24h before event start and 24h after event end are considered candidates (configurable) And candidates are matched into 1:1 before/after pairs where the before time < after time And the system uses work-order phase boundaries, technician labels, filenames, timestamps, and GPS proximity to infer roles And items from other VINs or outside the candidate window are excluded And the system assigns a confidence score (0.0–1.0) to each pair
Deterministic Tie-Breaking and Confidence Thresholds
Given multiple candidate matches exist for a media item within the service window When selecting a best-fit pair Then the system applies the following priority deterministically: (1) explicit technician label match, (2) same work-order phase alignment, (3) minimal timestamp delta, (4) filename hints and numeric sequence patterns, (5) GPS distance to service location And the chosen pair’s confidence must be ≥ 0.70 (configurable) And if no candidate meets the threshold, the item remains Unpaired and is queued for review And an item cannot be assigned to more than one pair unless participating in a multi-step sequence
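One way to express this priority chain is a fall-through comparator that only consults the next criterion on a tie; the candidate fields below are illustrative assumptions, not a defined schema:

```ts
interface PairCandidate {
  labelMatch: boolean;         // explicit technician label agreement
  phaseAligned: boolean;       // same work-order phase
  timestampDeltaMs: number;    // |after - before|
  filenameSequenceGap: number; // distance from filename numeric hints
  gpsDistanceM: number;        // distance to the service location
  confidence: number;          // 0.0–1.0
}

// Rank candidates by the deterministic priority order; each `||` falls
// through to the next criterion only when the previous one ties.
function pickBestCandidate(cands: PairCandidate[]): PairCandidate | null {
  const ranked = [...cands].sort((a, b) =>
    Number(b.labelMatch) - Number(a.labelMatch) ||
    Number(b.phaseAligned) - Number(a.phaseAligned) ||
    a.timestampDeltaMs - b.timestampDeltaMs ||
    a.filenameSequenceGap - b.filenameSequenceGap ||
    a.gpsDistanceM - b.gpsDistanceM
  );
  const best = ranked[0];
  return best && best.confidence >= 0.7 ? best : null; // threshold is configurable
}
```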
Multi-Step Sequence (Before/During/After) Support
Given at least three media items for the same VIN around a service event When their timestamps bracket the event from pre- to post-service Then the system forms an ordered sequence labeled Before, During (one or more steps), After And During steps are segmented by work-order sub-phases or gaps ≥ 10 minutes (configurable) And playback/order follows ascending timestamp And sequences are visually grouped and exported as a single narrative block And each media item belongs to at most one sequence
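A sketch of the gap-based segmentation of During steps, assuming capture timestamps are already sorted ascending and the 10-minute threshold comes from configuration:

```ts
const STEP_GAP_MS = 10 * 60 * 1000; // configurable; default per the criteria

// Group sorted timestamps into steps: start a new step whenever the gap to
// the previous capture is at least STEP_GAP_MS.
function segmentDuringSteps(sortedTimestamps: number[]): number[][] {
  const steps: number[][] = [];
  for (const t of sortedTimestamps) {
    const last = steps[steps.length - 1];
    if (last && t - last[last.length - 1] < STEP_GAP_MS) last.push(t);
    else steps.push([t]);
  }
  return steps;
}
```

Work-order sub-phase boundaries, where available, would take precedence over the pure time-gap rule.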
Side-by-Side Presentation with Clear Labels
Given a paired before/after or multi-step sequence exists When a user opens the PhotoProof Timeline for the service event Then paired items render side-by-side (or step-by-step for sequences) with badges: "Before", "During n", "After" And each panel displays capture time, technician, and location summary And images render at matched heights; videos display playable thumbnails And labels are announced to screen readers and color contrast meets WCAG AA
Manual Override and Audit Logging
Given a user with permission "Edit Media Pairing" views a pair or sequence When they reassign, relabel, add/remove items, or split/merge pairs/sequences Then the system records an audit entry with user id, action, previous state, new state, timestamp, and optional reason And the change persists immediately and marks confidence as Manual And the user can undo and redo the last 10 pairing actions And manual overrides are not overwritten by future auto-pairing runs unless explicitly reverted
Persistence Across Sessions and Export Reuse
Given pairs and sequences exist for a service event When the timeline is reloaded or accessed from another device Then the same pairings render identically And exporting a report (PDF/CSV/ZIP) includes paired media in the same order with labels and captions And exported assets include a manifest with pairing metadata (pair/sequence id, role, step index, timestamps) And the export includes an audit summary of manual overrides applied
Edge Case Handling and Review Queue
Given media items have missing or conflicting metadata (e.g., no GPS, mismatched VIN in EXIF, identical timestamps) When auto-pairing runs Then such items are not auto-paired unless an explicit, consistent technician label exists And conflicts (e.g., cross-VIN, out-of-window) are flagged with reasons And unpaired items appear in a "Needs Review" queue filterable by event and VIN And reviewers can resolve flags via manual override And reasons for non-pairing are recorded in the audit log
Inline Technician Notes Linking
"As a technician, I want to add notes tied to specific photos so that reviewers understand exactly what they’re seeing without follow-up questions."
Description

Enable technicians to attach time-stamped notes, voice-to-text transcriptions, and part references directly to specific media or timeline points. Support rich text with standardized defect codes, link notes to work order tasks and OBD-II DTCs, and display them inline in the timeline. Provide quick templates and tag suggestions to speed entry and improve consistency.

Acceptance Criteria
Add Note Anchored to Photo or Video Timestamp
- Given a technician is viewing a VIN’s PhotoProof timeline and has selected a photo or scrubbed a video to timestamp T, When they click "Add Note" and enter note content, Then the note is saved with anchors (vin, media_id or timeline_id, video_offset_ms if applicable, captured_at), author_id, and created_at.
- Given a note is saved, When the timeline is displayed, Then the note appears inline directly beneath the target media and is ordered by its anchor timestamp; multiple notes at the same anchor are ordered by created_at.
- Given the media item is re-ordered within the timeline, When the timeline is refreshed, Then the note remains attached to its original media/timepoint and continues to render under it.
- Given a user submits an empty note, When validation runs, Then the save is rejected with the message "Note content required" and no note is created.
- Given a note is anchored to a video timestamp, When the user plays the video and reaches T, Then a callout indicator is displayed at that moment and clicking it focuses the note.
Voice-to-Text Transcription with Editable Output
- Given microphone permission is granted, When a technician records and saves a voice note, Then the audio is attached to the note and an initial transcription appears within 10 seconds of upload completion.
- Given a transcription is displayed, When the technician edits the text and saves, Then the edited text persists on reload and the original audio remains available for playback.
- Given the voice-to-text service fails to transcribe, When the note is saved, Then the UI shows a clear error with a "Retry transcription" action and preserves the audio attachment.
- Given a voice note exists, When the note is rendered inline, Then a voice icon is shown and tapping it plays the audio without leaving the timeline.
Structured Part Reference in Notes
- Given the parts catalog is connected, When a technician types at least 2 characters in the Part Reference field, Then an autosuggest list returns up to 5 matching parts with number and name.
- Given a catalog match is selected, When the note is saved, Then the part is stored as a structured reference (part_id, part_number, part_name) and rendered as a clickable chip opening part details.
- Given no catalog match is desired, When the technician confirms free-text entry, Then the part is saved as free-text and rendered with an "Unmatched" indicator.
- Given a note may include multiple parts, When 2 or more are added, Then all are saved and rendered as separate chips in the note.
- Given a part reference was added by mistake, When the technician removes it before saving, Then it is not persisted.
Rich Text Formatting and Standardized Defect Codes
- Given the rich text editor is focused, When the technician applies formatting (bold, italic, bullet list, hyperlink), Then the formatting is saved and rendered consistently inline in the timeline.
- Given a configured defect code catalog exists, When the technician invokes Add Defect Code and selects a code, Then the code is stored as a structured reference (code_id, code, description) and rendered as a badge with a tooltip showing description.
- Given an invalid or inactive defect code is entered, When validation runs, Then the note cannot be saved and the user is prompted to choose a valid code.
- Given multiple defect codes are relevant, When up to 5 codes are added, Then all codes are saved and rendered as separate badges.
Link Note to Work Order Tasks and OBD-II DTCs
- Given there are existing work orders for the VIN, When the technician searches and selects a task to link, Then the note stores the link (work_order_id, task_id) and renders it as a chip that navigates to the task.
- Given active or recent OBD-II DTCs exist for the VIN, When the technician links one or more DTCs, Then the note stores structured links (dtc_code, description, recorded_at) and renders them as chips with code and short description.
- Given a linked task is later closed, When the note is rendered, Then the task chip displays a "Closed" state but remains clickable.
- Given duplicate link attempts occur, When the note is saved, Then links are de-duplicated so each task/DTC appears only once.
- Given a user lacks permission to modify links, When they attempt to add or remove a link, Then the action is blocked and an authorization error is shown.
Inline Timeline Display and Video Callouts
- Given the PhotoProof timeline is opened for a VIN, When notes exist, Then each note renders inline under its anchored media with: timestamp, author, text/transcription preview, and badges for parts, defect codes, tasks, and DTCs when present.
- Given a note exceeds two lines of text, When rendered, Then it is collapsed with a "Show more" control that expands to full content without page reload.
- Given a video contains anchored notes, When the video plays, Then a callout appears at each note’s timestamp and clicking a callout scrolls to and highlights the corresponding inline note.
- Given the user filters by tag or code, When the filter is applied, Then the timeline hides non-matching notes and updates the note count without affecting media order.
Quick Templates and Tag Suggestions for Fast Entry
- Given quick templates are configured, When the technician opens the note editor, Then a template picker is available and selecting a template pre-fills the note body, recommended tags, and placeholders.
- Given contextual signals (e.g., selected DTC, part, prior defects) exist, When the editor is focused, Then tag suggestions are shown as chips prioritized by context and selecting a chip adds the tag to the note.
- Given the technician types "#" in the editor, When they continue typing, Then matching tag suggestions update in real time and Enter selects the highlighted tag.
- Given a template is applied and then edited, When the note is saved, Then the final edited content persists and the template is not altered globally.
Smart Annotations & Defect Tags
"As a reviewer, I want clear callouts and standardized defect tags on images so that I can quickly identify issues and verify fixes."
Description

Offer region-based annotations (arrows, boxes, highlights) with callouts that snap to detected components and common defect areas. Provide a library of standardized defect tags (e.g., brake wear, fluid leak) with color coding and severity, and suggest tags based on OBD-II events and prior selections. Ensure annotations render consistently across web, mobile, exports, and maintain edit history.

Acceptance Criteria
Snap-to-Component Callouts on Detected Parts
Given a photo with component detection data for the VIN When a user places or drags a callout within 24px of a detected component boundary or anchor Then the callout snaps to the nearest component anchor and displays a snapped visual state And the annotation stores componentId, anchorType, and snapConfidence ≥ 0.8 And upon save and reload, the callout remains snapped to the same component anchor Given a photo without detection data When a user places a callout Then no snap occurs and the callout displays an unsnapped state And the user can toggle Snap off/on; when toggled off, snapping is disabled for that annotation
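The snapping behavior reduces to a nearest-anchor search within the 24 px radius; a sketch with illustrative types:

```ts
interface Anchor { id: string; x: number; y: number; }

// Snap a dropped callout to the nearest detected-component anchor within
// `radius` pixels; return null to render the unsnapped state instead.
function snapCallout(
  x: number, y: number, anchors: Anchor[], radius = 24,
): { anchor: Anchor; dist: number } | null {
  let best: { anchor: Anchor; dist: number } | null = null;
  for (const a of anchors) {
    const dist = Math.hypot(a.x - x, a.y - y);
    if (dist <= radius && (!best || dist < best.dist)) best = { anchor: a, dist };
  }
  return best;
}
```

A per-annotation Snap toggle would simply bypass this function when disabled.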
Region-Based Annotations: Arrows, Boxes, Highlights
Given the annotation toolbar is open on web and mobile When the user selects Arrow, Rectangle, or Highlight tool Then they can create, move, resize, and rotate the annotation with visible handles And annotations respect a minimum size of 8x8 px and are clamped within media bounds And created annotations persist and reopen with identical geometry after save And keyboard shortcuts A (Arrow), R (Rectangle), H (Highlight), Delete (remove), and Cmd/Ctrl+Z (undo) function on web And touch interactions support one-finger draw/select and two-finger pan/zoom on iOS/Android
Standardized Defect Tag Library with Severity and Color
Given the tag selector is opened for an annotation or media item When the user searches and selects tags Then only tags from the standardized library are attachable (no freeform entries) And each tag displays name, severity {Low, Medium, High, Critical}, and a color chip (#HEX) And multiple tags (up to 10) can be attached to a single annotation or media item And selected tags persist and render with the same color and severity badge across web, mobile, and export And color-to-severity mapping is consistent and meets WCAG AA contrast (≥ 4.5:1) on light and dark themes
Tag Suggestions from OBD-II Events and Prior Selections
Given OBD-II DTC events exist within 7 days of the media timestamp for the same VIN When the tag selector opens Then the system shows up to 3 suggested tags ranked by confidence ≥ 0.6 based on DTC→tag mappings And each suggestion displays its source (OBD-II) and a confidence percentage And accepting a suggestion attaches the standardized tag in one tap Given no relevant OBD-II events When the tag selector opens Then the system suggests up to 3 tags based on the user’s prior selections for this VIN in the last 30 days And suggestions are dismissible and do not reappear for that asset once dismissed
Cross-Platform Rendering and Export Fidelity
Given an asset annotated on one platform (web/iOS/Android) When it is opened on another supported platform or exported to PDF Then annotation positions match within ±2 px at 2× zoom and text sizes within ±0.5 pt And tag colors match their defined HEX codes without clipping or blurring And layer ordering of annotations and tags is preserved And exported PDFs embed vector shapes for arrows/boxes/highlights and selectable text for callouts Given a video with time-coded annotations When exported to MP4 Then annotations appear at intended timestamps within ±100 ms at 30 fps
Annotation and Tag Edit History with Revert
Given a user creates, updates, or deletes an annotation or tag When the change is saved Then an immutable history entry is recorded with id, userId, ISO-8601 UTC timestamp, action {create|update|delete}, and before/after payload And the history pane shows a chronological list with diff highlights And selecting Revert restores the chosen version and writes a new history entry referencing revertedFrom id And history remains accessible after logout/login and is retained for at least 24 months And concurrent edits trigger optimistic locking; on conflict, the user is prompted to review and merge before saving
Performance and Stability Under Typical and Heavy Load
Given a timeline with 10 media items each containing 10 annotations and 5 tags When opened on a mid-tier device (iPhone 12 or equivalent Android; Chrome desktop on 2019 i5) Then time-to-interactive is ≤ 2 s at p95 and interactions (select, drag, resize) complete within 100 ms at p95 Given an export of 20 annotated photos to a single PDF When initiated Then the export completes within 15 s at p95 and the file size is ≤ 25 MB with annotation fidelity preserved Given a single media item with 100 annotations When panning and zooming Then frame render times are ≤ 16 ms for 90% of frames and no crashes or memory leaks occur during a 5-minute session
Timeline Narrative View & Filters
"As a fleet manager, I want a clear timeline with powerful filters so that I can quickly find relevant media and understand context across inspections and repairs."
Description

Present a scrollable, chronological narrative per vehicle with grouped events (inspection, repair, road test), collapsible sections, and synchronized media, notes, and sensor events. Provide filters by date range, event type, tag, technician, and location, plus quick jump to anomalies flagged by FleetPulse telematics. Support keyboard navigation, compare mode, and fast-loading thumbnails with on-demand full-resolution media.

Acceptance Criteria
Chronological Vehicle Timeline With Grouped Events
- Given I select a VIN, when the Timeline view loads, then events display in ascending chronological order by event start time using the fleet’s timezone setting.
- Given events exist, when rendered, then they are grouped under collapsible headers by event type (inspection, repair, road test) and by calendar day, each header showing the event count.
- Given a group header, when I toggle it, then its open/closed state persists while on the VIN timeline and across filter changes within the session.
- Given initial load, when fetching the timeline, then the first 50 events render within 1.5 seconds at p75 and infinite scroll appends the next 50 within 800 ms after reaching 90% scroll.
- Given no events match, then an empty state appears with clear messaging and a Reset Filters action.
Advanced Filters: Date, Type, Tag, Technician, Location
- Given any combination of filters, when applied, then results reflect logical AND across filter groups and OR within multiple selections inside a single group.
- Given a date range, when I set start/end, then only events overlapping that range are shown; timezone aligns to the fleet setting; default range is last 30 days.
- Given Tag/Technician/Location pickers, when I type 2+ characters, then suggestions appear within 300 ms and support multi-select.
- Given any filter change, when I stop interacting, then the timeline updates within 700 ms of the last change and scroll resets to the top.
- Given filters applied, then the URL updates with shareable query parameters; reloading the URL restores the same results.
- Given no results, then a clear "No events match filters" state displays with an option to Clear All.
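The first criterion's AND-across-groups / OR-within-group semantics could look like the following sketch; the event shape and group names are assumptions:

```ts
interface TimelineEvent {
  type: string; tags: string[]; technicianId: string; locationId: string;
}
type Filters = Partial<{
  types: string[]; tags: string[]; technicianIds: string[]; locationIds: string[];
}>;

// OR within a group's selections; an empty or absent group imposes no
// constraint. Groups are then combined with AND.
function matchesFilters(e: TimelineEvent, f: Filters): boolean {
  const orWithin = (sel: string[] | undefined, values: string[]) =>
    !sel || sel.length === 0 || sel.some(v => values.includes(v));
  return (
    orWithin(f.types, [e.type]) &&
    orWithin(f.tags, e.tags) &&
    orWithin(f.technicianIds, [e.technicianId]) &&
    orWithin(f.locationIds, [e.locationId])
  );
}
```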
Quick Jump to Telematics Anomalies
- Given anomalies exist for the VIN, when I click Next/Previous Anomaly or press Alt+N/Alt+P, then the timeline scrolls to the anomaly event and highlights it with a focus ring and an "Anomaly" badge.
- Given filters are applied, when jumping to anomalies, then only anomalies within the filtered result set are navigated and the controls are disabled if none exist.
- Given an anomaly list panel, when opened, then it lists anomalies with timestamp, type, and severity; selecting one navigates to it.
- Given arrival at an anomaly event, then the media/notes panel opens to the anomaly timestamp automatically.
Synchronized Media, Notes, and Sensor Events
- Given an event with video and sensor data, when I scrub the video, then the sensor graph cursor moves within ±100 ms of the video timestamp and the note/annotation for that time is highlighted.
- Given play/pause, when I play either media or sensor playback, then the other remains synchronized within ±150 ms throughout playback.
- Given an annotation timestamp, when playback reaches that time, then the callout appears, is readable, and can be toggled without obscuring critical media.
- Given missing media types, then the UI shows "No sensor data" or "No media available" and hides inapplicable controls without errors.
- Given before/after paired media, when toggled, then switching occurs within 300 ms with clear Before/After labeling.
Keyboard Navigation and Accessibility
- Given the timeline has focus, when I press Up/Down, then focus moves to the previous/next visible event; Home/End moves to first/last; Page Up/Down scrolls by one viewport.
- Given a group header has focus, when I press Left/Right or Enter, then it collapses/expands and announces state via aria-expanded for screen readers.
- Given an event has focus, when I press Enter, then the details panel opens; when I press Esc, then it closes and focus returns to the originating event.
- Given anomaly navigation, when I press Alt+N/Alt+P, then it navigates to the next/previous anomaly.
- Given keyboard-only use, then all interactive elements are reachable in a logical tab order with visible focus styles and meet WCAG 2.1 AA for keyboard and contrast.
- Given I press ?, then a shortcuts overlay appears and is dismissible with Esc.
Compare Mode: Side-by-Side Event Review
- Given I select exactly two events, when I click Compare, then a side-by-side view opens with synchronized playback controls and a unified timeline scale.
- Given linked playback is enabled by default, when I play or scrub on one side, then the other mirrors the position within ±150 ms; a Link/Unlink toggle is available.
- Given media or sensor durations differ, then the shorter timeline is padded with end markers and playback stops at its end without desynchronization.
- Given I exit Compare, then I return to the timeline at the previous scroll position and selections are cleared.
- Given mixed event types, then common metadata fields align and mismatched fields are clearly labeled or hidden.
Fast Thumbnails and On-Demand Full-Resolution Media
- Given the timeline loads, when thumbnails are requested, then 95% of visible thumbnails render within 600 ms on a 10 Mbps connection; placeholders show until loaded.
- Given lazy-loading, when scrolling, then media outside the viewport is not fetched; items entering the viewport begin loading immediately.
- Given a thumbnail is clicked, when requesting full-resolution, then the high-res media becomes viewable within 2 seconds at p90 on a 10 Mbps connection; a progress indicator shows and the request can be canceled.
- Given a fetch error, then a retry button and error message appear; up to 3 automatic retries with exponential backoff are attempted.
- Given a mobile viewport, then thumbnails are optimized to <= 200 KB each and respect device pixel ratio.
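The retry rule in the fourth bullet might be implemented as a simple exponential backoff around the media fetch; a sketch (the base delay is illustrative, not specified):

```ts
// Up to `maxRetries` automatic retries with exponential backoff; after the
// final failure the error propagates so the UI can show the manual Retry button.
async function fetchWithBackoff(url: string, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    try {
      const res = await fetch(url);
      if (res.ok) return res;
      throw new Error(`HTTP ${res.status}`);
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delayMs = 500 * 2 ** attempt; // 500 ms, 1 s, 2 s, ...
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```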
Secure Sharing & Audit Trail
"As an operations lead, I want to securely share a timeline snapshot with third parties so that they can review evidence without accessing our entire system."
Description

Enable role-based access and shareable, expiring links to a specific timeline snapshot or report with optional watermarking and download controls. Generate exportable PDFs/MP4 reels that preserve ordering, annotations, and notes. Maintain a full audit trail of views, shares, edits, and pairing changes for compliance and dispute resolution.

Acceptance Criteria
Enforce Role-Based Access on PhotoProof Timelines
Given a PhotoProof timeline snapshot belongs to Fleet X and an authenticated user without a role on Fleet X, When the user attempts to view the snapshot or its report, Then the request is denied with HTTP 403 and an audit event "access_denied" is recorded including userId, fleetId, resourceId, reason "role_mismatch", IP, and timestamp. Given an authenticated user with Owner or Fleet Manager role on Fleet X, When the user opens the snapshot/report, Then the content is displayed and an audit event "view" is recorded with userId, role, resourceId, IP, and timestamp. Given an authenticated user with Technician role on Fleet X, When the user attempts to generate a share link, Then the UI action is blocked (disabled) and an audit event "share_attempt_blocked" is recorded with reason "insufficient_role".
Create and Revoke Expiring Share Links for a Snapshot
Given a user with permission to share snapshots selects a specific snapshot and sets an expiration time (e.g., 72 hours), When the user generates a share link, Then the system creates a unique URL token bound to that snapshot version and expiration and records an audit event "share_created" including creatorId, snapshotId, expiresAt, and linkId. Given the generated share link, When an external viewer opens it before expiration, Then the snapshot renders read-only and an audit event "view" is recorded with linkId and viewer metadata. Given the same link after expiration or after the owner revokes it, When any party attempts to access it, Then the request is denied with HTTP 403 and an audit event "share_access_denied" is recorded with reason "expired" or "revoked". Given a share link is revoked by the creator or an Owner/Manager, When revocation is confirmed, Then the link becomes unusable within 60 seconds globally and an audit event "share_revoked" is recorded with linkId and revokerId.
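A minimal sketch of how such links could be issued and checked, assuming an opaque random token stored server-side against the snapshot version; field names are illustrative:

```ts
import { randomBytes } from "crypto";

interface ShareLink {
  token: string;               // unguessable, opaque URL token
  snapshotVersionId: string;   // link is bound to one snapshot version
  expiresAt: Date;
  revokedAt: Date | null;
}

function createShareLink(snapshotVersionId: string, ttlHours: number): ShareLink {
  return {
    token: randomBytes(32).toString("base64url"),
    snapshotVersionId,
    expiresAt: new Date(Date.now() + ttlHours * 3_600_000),
    revokedAt: null,
  };
}

// Checked on every request; a false result maps to HTTP 403 plus a
// "share_access_denied" audit event with reason "expired" or "revoked".
function canAccess(link: ShareLink, now = new Date()): boolean {
  return link.revokedAt === null && now < link.expiresAt;
}
```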
Watermarking Controls on Shared Media
Given a share link is configured with watermarking enabled, When a recipient views PDF pages, images, or MP4 frames from the snapshot, Then a non-removable overlay watermark displays linkId, VIN, timestamp, and the label "FleetPulse Review" on all pages/frames. Given a share link is configured with watermarking disabled and the viewer is an internal authenticated user with access, When the user views or exports content, Then no watermark is applied. Given any viewer attempts to bypass watermarking via URL parameters or client overrides, When content is requested, Then the server still applies the configured watermark policy and records an audit event "watermark_enforced".
Download Permissions on Shared Links
Given a share link is created with downloads disabled, When the recipient attempts to download original media or the full archive, Then direct download endpoints return HTTP 403, the UI shows no download controls, and an audit event "download_attempt_blocked" is recorded with linkId. Given a share link is created with downloads enabled, When the recipient requests an export, Then the system allows download of the configured export (PDF or MP4) and records "download" with linkId and bytesTransferred. Given an API client uses the share token to call media download endpoints contrary to the link’s download setting, When the request is made, Then the server denies with HTTP 403 and records "download_attempt_blocked" with endpoint and linkId.
Export Fidelity for PDF and MP4 Reels
Given a timeline snapshot with N media items including paired before/after, annotations, and notes, When exporting to PDF, Then items are ordered by VIN and timestamp ascending, each pair renders consecutively, all annotations render at correct coordinates within ±2 px of preview, and all technician notes are included under their respective items. Given the same snapshot, When exporting to MP4, Then frames/clips follow the same order, on-frame callouts and annotations appear at the correct timecodes within ±100 ms of preview, and per-item captions include VIN, timestamp, and note summary. Given the same snapshot is exported twice with identical settings, When comparing outputs, Then page/frame counts, item order, annotation counts, and text content are identical across exports and an audit event "export_generated" is recorded for each export.
Comprehensive Audit Trail of Views, Shares, Edits, and Pairing Changes
Given any of the following events occur: view, share_created, share_revoked, export_generated, edit_annotation, edit_note, pairing_created, pairing_modified, pairing_removed, download, download_attempt_blocked, access_denied, When the event is processed, Then an immutable audit record is written within 2 seconds capturing eventType, actorId (or linkId for anonymous), actorRole (if applicable), resourceId, fleetId, timestamp (UTC), IP, and userAgent. Given audit records exist, When queried via UI or API with filters (date range, event type, actor, resource), Then matching results are returned and can be exported to CSV and JSON. Given an admin attempts to alter or delete an existing audit record, When the action is submitted, Then the system denies the action and records an audit event "audit_mutation_blocked".
Snapshot Immutability and Versioned Sharing
Given a user creates a snapshot for a timeline, When subsequent edits (annotation changes, note edits, pairing changes) are made to the underlying timeline, Then the existing snapshot content remains unchanged and a new snapshot version must be created to reflect changes. Given a share link points to snapshot version V, When the underlying timeline is later edited, Then the link continues to render snapshot V exactly as captured and an audit event "snapshot_view" includes the version. Given a user attempts to overwrite snapshot content, When the request is made, Then the system denies the action and prompts creation of a new snapshot version, recording an audit event "snapshot_overwrite_blocked".

Signature Relay

Routes role-based e‑signatures (driver, technician, supervisor) in the right order, capturing time, device, and location for each attestation. Built‑in nudges prevent bottlenecks, delivering a fully acknowledged dossier that stands up to audits and claims questions.

Requirements

Sequential Role-Based Routing
"As a fleet supervisor, I want signatures to route automatically in the correct order by role so that compliance doesn’t depend on manual follow-ups and nothing gets signed prematurely."
Description

Implements configurable, ordered signature workflows that route documents through required roles (e.g., Driver → Technician → Supervisor) without allowing out-of-order attestations. Supports template-based workflows per document type (DVIRs, work orders, repair approvals) with conditional steps (e.g., skip Technician if no defects found) and re-approval rules when data changes after a prior signature. Integrates with FleetPulse maintenance and inspection records to auto-populate recipients based on assigned vehicle, job, or shift, and blocks progression until mandatory fields and checks are complete.

Acceptance Criteria
DVIR: Enforce Driver → Technician → Supervisor Order
Given a DVIR template with sequence Driver → Technician → Supervisor and recipients assigned When the Technician attempts to sign before the Driver has signed Then the system blocks the action with a message indicating Driver signature is pending and no signature is recorded When the Driver signs successfully Then the Technician is notified and is able to sign When the Technician signs successfully Then the Supervisor is notified and is able to sign When the Supervisor signs successfully Then the document is marked Fully Acknowledged and no further signatures can be added
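A sketch of the ordering check that would back these criteria: only the first unsigned role in the template sequence may sign. Types are illustrative:

```ts
type Role = "Driver" | "Technician" | "Supervisor";

// The first role in the sequence without a signature is the only one
// eligible to sign; null means the document is fully acknowledged.
function nextPendingRole(sequence: Role[], signed: Set<Role>): Role | null {
  for (const role of sequence) {
    if (!signed.has(role)) return role;
  }
  return null;
}

function canSign(sequence: Role[], signed: Set<Role>, actor: Role): boolean {
  return nextPendingRole(sequence, signed) === actor;
}
```

Enforcing this one check server-side keeps mobile, web, and API behavior consistent, as the Security criterion below requires.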
DVIR: Conditional Skip of Technician When No Defects
Given a DVIR template where the Technician step is required only if defects are reported And the Driver marks No defects and completes all mandatory fields When the Driver signs Then the workflow routes directly to the Supervisor and no Technician recipient is requested When the Driver later edits the DVIR to add a defect before the Supervisor signs Then the Technician step is inserted, the Supervisor step is paused, and the assigned Technician is notified
DVIR: Re-approval After Post-Signature Data Change
Given a DVIR with sequence Driver → Technician → Supervisor and the Driver has signed When the Driver edits any field tagged as approval-bound (e.g., defect list) Then the Driver’s prior signature is cleared, the document returns to the Driver stage, and downstream roles cannot sign until the Driver re-signs And the audit trail records that the signature was cleared due to data change including timestamp and user
Templates: Route Per Document Type
Given templates defined for DVIR (Driver → Technician? → Supervisor), Work Order (Technician → Supervisor), and Repair Approval (Supervisor → Driver) When a new document is created from each template Then the role sequence on the document matches its template and conditional steps are applied And users without Manage Workflow Templates permission cannot modify the role order on the document
Recipients: Auto-Populate From Vehicle/Job/Shift Records
Given vehicle V123 is assigned to Driver Dan, Technician Tia, and Supervisor Sue for the active shift in FleetPulse records When a DVIR for vehicle V123 is created Then the system auto-populates the recipients for Driver, Technician, and Supervisor as Dan, Tia, and Sue respectively And if a recipient is missing in records for any required role, the document is flagged as Incomplete Route and progression is blocked until the recipient is provided
Validation: Mandatory Fields Block Progression
Given a DVIR with required fields odometer, location, pre-trip checklist responses, and defect list When the Driver attempts to sign with any required field missing Then the sign action is prevented and a list of missing fields is presented When all required fields and checks are complete Then the sign action becomes available When the Technician attempts to sign while any reported defect lacks a repair action and resolution status Then the sign action is prevented and the defects requiring action are listed
Security: Block Out-of-Order Across Interfaces
Given a document with a defined role sequence When a user attempts to sign out of order via the mobile app, web app, or API Then the request is rejected consistently and no signature is recorded And the message indicates which prior role is pending And the attempt is logged in the audit trail with user, channel, and timestamp
Identity & Permission Verification
"As a compliance manager, I want each signature bound to a verified identity and authorized role so that attestations are defensible during audits and claims reviews."
Description

Enforces strong signer identity verification and role authorization at the point of signature. Supports SSO/OAuth2 for managers, PIN/MFA for drivers in the mobile app, and policy-based checks to ensure the signer matches the expected role for the document and asset (e.g., assigned driver of vehicle). Prevents proxy signing, records authentication method and session details, and denies signatures when role, assignment, or permission constraints are not met. Centralizes configuration with per-tenant policies and integrates with FleetPulse user/role directory.

Acceptance Criteria
Manager SSO Authorization at Signature (Web)
- Given a manager initiates a signature for a document requiring supervisor role, when the tenant has OAuth2/OIDC SSO configured, then the user is redirected to the IdP and, upon successful auth, a token with tenant scope and role claims is validated before enabling the signature action.
- Given SSO succeeds but the token lacks the supervisor role or has a tenant mismatch, when control returns to FleetPulse, then the signature action is disabled and a clear authorization error is displayed and logged.
- Given an SSO session age exceeds the policy max_auth_age of 15 minutes, when a manager starts a signature, then a step-up re-authentication is required prior to applying the signature.
- Given the manager account is disabled in the directory, when attempting to sign, then the signature is denied and the audit log records directory_status=disabled and the IdP user ID.
Driver PIN + MFA Verification (Mobile)
- Given an assigned driver initiates a signature in the mobile app, when prompted for a 6-digit PIN and a second factor (TOTP or push), then the signature control unlocks only after both factors succeed within 30 seconds.
- Given 5 consecutive failed PIN or MFA attempts within 10 minutes, then the driver account is locked for 15 minutes and the signature is blocked.
- Given the device is offline and MFA delivery is unavailable, when policy permits one-time offline recovery codes, then a single valid recovery code can be used once to complete the signature and is immediately invalidated and flagged in the audit.
- Given biometric second factor is enabled by policy and supported by the device, when the driver opts in, then biometric auth can satisfy the second factor; on failure, the flow falls back to PIN+MFA.
Role and Assignment Policy Enforcement per Document and Asset
- Given a document specifies a required role (driver, technician, supervisor) and an expected asset/work order, when a user attempts to sign, then the system validates both the user's role and their active assignment to the asset/work order and only allows signing if both pass.
- Given a driver is not currently assigned to the vehicle listed on the document, when they attempt to sign, then the signature is denied with message "Not assigned to asset" and no partial signature data is saved.
- Given a temporary delegation exists with expiry 2025-09-12T23:59:00Z, when the user attempts to sign after expiry, then the signature is denied and the delegation is marked expired in the audit log.
- Given a document requires the order driver → technician → supervisor, when a later role attempts to sign before a prior role has signed, then the system blocks the attempt until sequence prerequisites are met.
Proxy Signing Prevention and Session Binding
- Given a device with multiple accounts, when any user attempts to sign "as" another user, then the signature is bound to the authenticated userId and any mismatched "sign as" identifier is ignored and recorded as a policy violation; the signature is denied.
- Given the session has been idle for more than 5 minutes prior to signing, when the user proceeds to sign, then re-authentication (SSO for managers, PIN/MFA for drivers) is required.
- Given a signature link is shared externally, when a non-authorized user opens it, then the UI displays "Access denied" and no signature action is available until proper authentication and authorization succeed.
- Given kiosk/shared device mode is enabled, when a user completes a signature, then the session is terminated and cached credentials are cleared within 5 seconds to prevent proxy signing.
Audit Trail Capture: Auth Method, Time, Device, and Location
- Given a successful signature, then the system records timestamp (UTC), device fingerprint, app version, IP address, geolocation (GPS accuracy ≤ 100m or network-derived), authentication method (SSO, PIN+MFA, biometric), tenant policy ID, and user/session IDs, and stores them immutably with the dossier.
- Given a permission or policy denial, then an audit record is created with reason_code (role_mismatch, assignment_missing, policy_violation, auth_failure), request_id, and userId (if known), and no signature payload is persisted.
- Given location is unavailable, when policy requires location, then the signature is blocked; when policy allows fallback, then a location_unavailable flag and source=none are recorded in the audit.
- Given a dossier is retrieved via API, then each signature entry returns the above auth and context fields and a hash chained to the document version to ensure tamper evidence.
Denial and Messaging on Policy Violations
- Given any identity, role, assignment, or sequence check fails, then the user receives a consolidated error with human-readable reason and remediation and a support reference code within 500 ms after evaluation.
- Given multiple failures occur, then messaging prioritizes in order: identity > role > assignment > sequence > policy, showing only the highest-priority error to the user while all failures are logged with distinct codes.
- Given a denial via API, then the response is HTTP 403 with a machine-readable error payload including error_code, reason, request_id, and retry_after (if applicable), and the client UI disables the signature action until state changes or cooldown elapses.
Tenant-Level Policy Configuration and Directory Integration
- Given a tenant admin updates identity policies (max_auth_age, 2FA required, location required, offline allowances), when changes are saved, then policies propagate and take effect within 60 seconds for new signature attempts.
- Given a user role change in FleetPulse directory or IdP group mapping, when synchronization occurs, then the new role is enforced on the next authorization check and any cached authorization is invalidated within 2 minutes.
- Given a tenant disables offline recovery codes for drivers, when a driver attempts offline signing, then the attempt is blocked and a policy_violation is recorded with policy_id.
- Given the directory/IdP is unreachable, then signature attempts fail closed after up to 3 seconds of retries with exponential backoff, a "Directory unavailable" message is shown, and an admin alert is generated.
Geo‑Time‑Device Attestation Capture
"As a claims analyst, I want each signature to include time, device, and location details so that I can validate the context and detect potential fraud."
Description

Automatically captures precise timestamp (UTC and local), device identifiers (app version, OS, device model), and geolocation at the moment of signature. Includes accuracy metrics, fallbacks for low-signal environments (last known fix, network-assisted location), and explicit consent prompts where required by policy. Stores the metadata immutably with hash-based integrity, surfaces it in the UI and export, and flags signatures with insufficient accuracy per configurable thresholds.

Acceptance Criteria
Capture UTC and Local Timestamp at Signature
Given a user completes an e‑signature in Signature Relay When the user confirms the signature Then the system records signedAtUtc in ISO 8601 Z with millisecond precision And records signedAtLocal in ISO 8601 with timezone offset And when normalized to UTC, signedAtLocal equals signedAtUtc to the millisecond And both timestamp fields are persisted with the signature event and are read‑only thereafter
Record Device Identifiers on Signature
Given a signature event is captured When metadata is assembled Then metadata includes app.version (SemVer), app.buildNumber, device.osName, device.osVersion, and device.model And each field is present and non‑empty, or null with a corresponding deviceInfo.reason if unavailable And app.version matches the pattern MAJOR.MINOR.PATCH (e.g., 2.3.15) And all device fields are persisted atomically with the signature event
Capture Geolocation with Accuracy Metrics
Given location services are enabled and policy allows capture When the user confirms the signature Then the system records location.latitude and location.longitude in WGS84 And records location.accuracyMeters (float), location.provider in {gps, network}, and location.timestamp (ISO 8601 Z) And the fix used is no older than orgSetting.maxFixAgeSeconds (default 30s) And all location fields are persisted atomically with the signature event
Low‑Signal Fallbacks and Fix Age Handling
Given no GPS fix is available within orgSetting.locationTimeoutSeconds (default 3s) or accuracyMeters > orgSetting.requiredAccuracyMeters (default 50m) When the user confirms the signature Then the system attempts a network‑assisted location And if still unavailable, uses the last known fix where fixAgeSeconds <= orgSetting.maxLastKnownAgeSeconds (default 300s) And records location.method in {gps, network, lastKnown, none} and fixAgeSeconds And if no acceptable fix exists, sets locationCaptured=false and reason='unavailable' And all fallback decisions are included in the signature metadata
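The fallback ladder could be expressed as a single decision function; the defaults mirror those named above, and the shapes are assumptions:

```ts
interface Fix { lat: number; lon: number; accuracyM: number; ageS: number; }

type LocationResult =
  | { method: "gps" | "network" | "lastKnown"; fix: Fix }
  | { method: "none"; reason: "unavailable" };

// Prefer an accurate GPS fix, then network-assisted, then a recent last-known
// fix; otherwise record that no acceptable location was captured.
function resolveLocation(
  gps: Fix | null, network: Fix | null, lastKnown: Fix | null,
  requiredAccuracyM = 50, maxLastKnownAgeS = 300,
): LocationResult {
  if (gps && gps.accuracyM <= requiredAccuracyM) return { method: "gps", fix: gps };
  if (network) return { method: "network", fix: network };
  if (lastKnown && lastKnown.ageS <= maxLastKnownAgeS)
    return { method: "lastKnown", fix: lastKnown };
  return { method: "none", reason: "unavailable" };
}
```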
Explicit Consent Prompting and Recording
Given orgSetting.requireLocationConsent=true or the user is in a jurisdiction requiring explicit consent When the user reaches their first signature that would capture location on this device/app version Then the app displays a consent prompt with Accept and Decline options and links to policy text And on Accept, the system stores consent.userId, deviceId, app.version, consentedAtUtc, jurisdiction, and policyVersion and proceeds to capture location And on Decline, the system proceeds without location, sets locationCaptured=false and reason='declined', and marks the signature for review And subsequent signatures do not re‑prompt for 12 months or until policyVersion changes or consent is revoked
Immutable Storage and Hash‑Based Integrity
Given a signature and its metadata are ready to persist When the write completes Then the system computes contentHash=SHA‑256 over a canonicalized JSON payload and stores it with the record And persists the record in an append‑only (write‑once) store with an audit entry referencing contentHash And a verify endpoint returns valid=true when the recomputed hash matches and no tampering is detected And any attempt to modify persisted signature metadata is rejected (HTTP 409) and the original record remains unchanged
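A sketch of the canonicalize-then-hash step, assuming recursive key sorting as the canonical form (the spec does not fix a canonicalization algorithm):

```ts
import { createHash } from "crypto";

// Serialize with object keys sorted recursively so that semantically
// identical records always produce the same byte string.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonicalize(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value); // string/number/boolean/null
}

function contentHash(record: object): string {
  return createHash("sha256").update(canonicalize(record)).digest("hex");
}
```

The verify endpoint would recompute contentHash over the stored payload and compare it to the persisted value.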
UI/Export Surfacing and Accuracy‑Based Flagging
Given a user views a signature detail screen or downloads a report/export When the attestation metadata is rendered Then the UI shows signedAtUtc, signedAtLocal, app.version, device.osName, device.osVersion, device.model, location.latitude, location.longitude, accuracyMeters, provider/method, fixAgeSeconds, consent status, and a contentHash prefix And CSV/PDF/JSON exports include the same fields with defined names And if accuracyMeters > orgSetting.requiredAccuracyMeters or fixAgeSeconds > orgSetting.maxFixAgeSeconds or locationCaptured=false, the signature is flagged location_insufficient with reason codes and a visible badge And the flag and reasons are included in exports
Smart Nudges & SLA Escalations
"As a shop manager, I want automated nudges and escalations to keep signature workflows moving so that work orders and inspections don’t stall."
Description

Delivers automated, context-aware reminders via push notification, SMS, and email to pending signers based on configurable SLAs and quiet hours. Supports escalation chains to alternates and supervisors when deadlines are missed, with throttle and retry logic to avoid spam. Tracks nudge history and response metrics, exposes a dashboard of bottlenecks, and integrates with FleetPulse notification preferences and duty schedules to respect on/off-shift windows.

Acceptance Criteria
SLA-Based Reminder Scheduling and Pause/Resume
Given a signature request is created for a role with a configured SLA and reminder cadence And the recipient's time zone, notification preferences, quiet hours, and duty schedule are defined When the request is created at time T Then the system schedules the first reminder according to the configured cadence, within the recipient's allowed window And schedules subsequent reminders until the SLA deadline, respecting quiet hours and duty schedules And pauses countdown timers during off-shift/quiet hours and resumes at the next allowed window And cancels all scheduled nudges immediately when the signature is completed or the request is withdrawn And records the next scheduled nudge time and deadline in the request metadata
Quiet Hours and Duty Schedule Compliance
Given a nudge is due during the recipient's quiet hours or off-shift period When evaluating send eligibility Then the nudge is deferred to the next on-shift, non-quiet window in the recipient's time zone And no notifications are sent during disallowed windows And the SLA timer excludes deferred intervals so that deadlines reflect working time And the system logs a suppression event with reason "quiet_hours" or "off_shift"
Multi-Channel Delivery with Fallback and Retries
Given a recipient has an ordered list of allowed channels and device/contact info When sending a nudge Then the system attempts the highest-ranked available channel And if delivery fails or the channel is unavailable, the system falls back to the next channel within the configured fallback window And retries use exponential backoff with a configurable maximum per channel And per-attempt delivery outcomes are captured (queued, sent, delivered, failed, bounced) And the final nudge status reflects the best successful channel or a terminal failure
Escalation Chains on Missed Deadlines
Given the SLA deadline is reached without the required signature When the breach is detected Then an escalation is sent to the designated alternate And if still unsigned after the configured escalation interval, an escalation is sent to the supervisor And escalations respect each recipient's preferences, quiet hours, and duty schedules And escalations stop immediately if the primary signer completes the signature And each escalation event records level, recipients, timestamps, and outcome
Throttle and Digest to Prevent Spam
Given multiple nudges for the same recipient are due within the configured digest window When preparing messages Then the system consolidates them into a single digest per channel And enforces a minimum send interval and a daily maximum per recipient per document And suppresses duplicate content within the duplicate window And records any suppression with reason and reference to the superseding message
Nudge History and Response Metrics Capture
Given any nudge attempt, suppression, or escalation occurs When persisting history Then the log includes: document_id, recipient_id, role, channel, scheduled_at, attempted_at/sent_at, outcome, retry_count, escalation_level, suppressed_reason, message_id And response events include: opened_at/clicked_at and signed_at when applicable And metrics computed per recipient/role/document include: time_to_first_open, time_to_sign, SLA_breach_flag, number_of_escalations And history is queryable via API by date range, recipient, role, document, outcome, and escalation level
Bottleneck Dashboard and Reporting
Given a user opens the Nudge Bottleneck Dashboard for a specified date range When data loads Then the dashboard displays: average time-to-sign by role, SLA breach rate, pending signatures by age bucket, top blockers (users/docs), escalation volume by level, and channel performance And supports filters for role, team, document type, vehicle, and time zone And reflects events ingested within the last 5 minutes And allows CSV export of the current filtered view
Delegation & Substitution Controls
"As an operations lead, I want controlled delegation when staff are unavailable so that critical documents can still be signed on time without compromising compliance."
Description

Allows temporary delegation of signature authority per role with granular constraints (time-bound, asset-bound, document-type-bound) and optional approval by supervisors. Records the original assignee, delegate identity, and rationale, and enforces conflict-of-interest rules (e.g., a technician cannot both perform and approve the same repair). Automatically revokes delegations at expiration and captures full audit trails of delegated actions.

Acceptance Criteria
Delegation Creation Validations and Duplicate Prevention
Given a user with permission to manage delegations When they attempt to create a delegation without role, delegate, start time, end time, or rationale Then the system rejects the request and displays field-level errors for each missing input Given the start time is not earlier than the end time When the user saves the delegation Then the system rejects the request with an error "End time must be after start time" Given the selected delegate is the same as the original assignee When the user saves the delegation Then the system rejects the request with an error "Delegate must differ from assignee" Given an active or scheduled delegation already exists with an overlapping time window for the same assignee, role, asset scope, document-type scope, and delegate When the user attempts to save another overlapping delegation Then the system rejects the request with an error "Overlapping delegation exists" Given all inputs are valid When the user saves the delegation Then the system creates a delegation with a unique ID and persists the role, assignee, delegate, time window, asset whitelist, document-type whitelist, and rationale
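The duplicate-prevention rule reduces to a half-open interval overlap test combined with a scope-equality check; a sketch with assumed field names:

```ts
interface Delegation {
  assigneeId: string; delegateId: string; role: string;
  assetScope: string[]; docTypeScope: string[];
  start: Date; end: Date;
}

const sameScope = (a: Delegation, b: Delegation) =>
  a.assigneeId === b.assigneeId && a.delegateId === b.delegateId &&
  a.role === b.role &&
  JSON.stringify([...a.assetScope].sort()) === JSON.stringify([...b.assetScope].sort()) &&
  JSON.stringify([...a.docTypeScope].sort()) === JSON.stringify([...b.docTypeScope].sort());

// Two windows overlap iff each starts before the other ends.
const overlaps = (a: Delegation, b: Delegation) =>
  a.start < b.end && b.start < a.end;

function hasConflict(candidate: Delegation, existing: Delegation[]): boolean {
  return existing.some(d => sameScope(candidate, d) && overlaps(candidate, d));
}
```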
Time-Bound Delegation Activation and Expiry Auto-Revocation
Given a delegation with start time T1 and end time T2 When current time < T1 Then the delegate cannot sign under the delegation and any attempt is blocked with reason "Delegation not yet active" Given the same delegation When T1 <= current time < T2 Then the delegate can sign only within the delegation's asset and document-type scope; all other actions are blocked with reason "Out of scope" Given the same delegation When current time >= T2 Then the system automatically marks the delegation as expired and blocks further use; any in-progress signature submitted after T2 is rejected with reason "Delegation expired" And all blocked attempts are logged with timestamp, user, document ID, and reason code
Asset- and Document-Type Scoped Delegation Enforcement
Given an active delegation scoped to assets [A1, A2] and document types [DVIR, Repair Approval] When the delegate attempts to sign a document for asset A3 Then the system blocks the action with reason "Asset not in scope" Given the same delegation When the delegate attempts to sign a document of type "Fuel Receipt" Then the system blocks the action with reason "Document type not in scope" Given the same delegation When the delegate signs a DVIR for asset A2 Then the system accepts the signature and records that it was performed under the delegation ID
Supervisor Approval Workflow for Delegations
Given the organization requires supervisor approval for delegations When a requester submits a new delegation Then the delegation status is set to Pending Approval and it cannot be used by the delegate Given the same pending delegation When a supervisor approves it Then the delegation transitions to Scheduled or Active based on its time window and becomes usable accordingly Given the same pending delegation When a supervisor rejects it Then the delegation cannot be activated, the requester is notified, and the audit log records approver identity, timestamp, and rejection rationale
Conflict-of-Interest Rule: Technician Cannot Approve Own Repair
Given a work order WO123 includes labor recorded by User U in a technician role When User U attempts to approve WO123 as approver (either as original assignee or as a delegate) Then the system blocks the approval with error "Conflict of interest: performer cannot approve" and does not record a signature And the system logs the prevented action with rule ID, user ID, work order ID, timestamp, and reason
Delegated Signature Audit Trail Completeness and Immutability
Given a signature is performed under a delegation When the signature is recorded Then the audit trail entry includes: delegation ID, original assignee ID, delegate ID, role, document ID, document type, asset ID (if applicable), action (signed/attempted), UTC timestamp, device identifier or user agent, IP address, and geolocation (lat/long with accuracy when available), supervisor approver (if any), and the delegation rationale And the audit trail entry is immutable (no updates permitted), and any attempt to modify it is rejected and separately logged And the audit entry is retrievable via UI and API by delegation ID, document ID, asset ID, and user filters
Manual Early Revocation of Delegation and Session Invalidation
Given an active delegation When a supervisor with appropriate permission revokes the delegation before its end time Then the delegation status immediately becomes Revoked and the delegate can no longer sign under it Given a signing session was in progress under the revoked delegation When the delegate submits the signature after revocation Then the submission is rejected with error "Delegation revoked" and no signature is recorded And the system logs the revocation event with revoker identity, timestamp, and reason
Offline Signature Capture & Sync
"As a driver, I want to capture required signatures even without connectivity so that my route and inspections aren’t delayed."
Description

Enables drivers and technicians to sign documents offline in the mobile app with secure local storage, signer authentication, and on-device attestation (time, device, last known location) until a GPS fix is available. Queues signatures for background sync, performs conflict detection if content changed while offline, and ensures idempotent uploads to prevent duplicates. Provides clear UI indicators for offline state and pending sync, and respects enterprise policies on offline allowances.

Acceptance Criteria
Offline Signature Capture Without GPS Fix
Given the mobile device has no network connectivity and no current GPS fix When an authorized signer signs Document A in the app Then the app stores the signature locally with fields: signatureId (UUID), documentId, documentVersionHash, signerRole, signerUserId, deviceId, deviceTimestamp (ISO 8601 with timezone), and lastKnownLocation set to null with locationStatus="pending" And the document shows a "Pending location" badge And the signature is visible in the document timeline as "Captured (offline)" And no network calls are attempted during capture
Attestation Enrichment After GPS Fix
Given a signature with locationStatus="pending" exists locally And the app is running in foreground When the device obtains a GPS fix with accuracy ≤ 50 meters Then the app updates the signature attestation with latitude, longitude, accuracy, and gpsFixTimestamp And the document timeline updates to "Location captured" And the update occurs within 60 seconds of the GPS fix
Secure Local Storage and Signer Authentication
Given the user has not authenticated in the app within the last policy-defined interval When the user attempts to sign offline Then the app requires re-authentication per enterprise policy (PIN/biometric/SSO) before enabling the signature pad And upon capture, the signature payload is encrypted at rest with a device-keystore–backed key And exporting or viewing the payload via system file pickers is blocked And local payloads are auto-deleted within 5 minutes after confirmed server acknowledgment or per policy retention, whichever is stricter
Background Sync and Idempotent Uploads
Given one or more offline signatures are queued for upload And valid auth tokens are present When the device regains network connectivity Then the client attempts background sync within 30 seconds And each upload includes an idempotency key derived from signatureId And transient failures are retried with exponential backoff up to the policy limit And repeated uploads with the same idempotency key result in exactly one server record And upon server acknowledgment the local queue count decreases accordingly And no duplicate signatures appear in the server audit log for the same signatureId
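A sketch of the idempotent upload, deriving the Idempotency-Key header directly from the locally generated signatureId so retries of the same capture always carry the same key (the header name and endpoint path are illustrative):

```ts
// Repeated calls with the same signatureId let the server collapse retries
// into exactly one record; failures propagate so the queue can back off.
async function uploadSignature(payload: { signatureId: string }): Promise<void> {
  const res = await fetch("/api/signatures", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Idempotency-Key": payload.signatureId, // stable across retries
    },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);
}
```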
Conflict Detection When Document Changed Offline
Given a signature was captured offline against documentVersionHash=v1 And the server document has advanced to version v2 while the device was offline When the client syncs the queued signature Then the server rejects auto-attachment with a conflict response referencing v1≠v2 And the client marks the item as "Conflict – review required" without losing the local audit data And the signer is prompted to review v2 and re-sign And no signature is attached to v2 without explicit re-sign
Role and Order Enforcement During Offline Capture
Given the workflow order is Driver → Technician → Supervisor for Document A When a Driver signs offline Then the signature is recorded as Driver and marked complete locally And the workflow does not advance to Technician until sync confirms server state And attempts by Technician or Supervisor to sign offline before their turn are blocked with an "Out of order" message And after successful sync, the next role receives a nudge according to notification policy
UI Indicators and Policy Enforcement for Offline Allowances
Given the enterprise policy defines maximum offline signatures and maximum offline age When the app enters offline mode Then a persistent offline banner is shown within 1 second and cleared within 1 second of connectivity restoration And documents with queued signatures display a pending-sync badge with an accurate count And a Pending Sync screen lists each item with status (pending, uploading, failed, conflict) and allows manual retry And when policy thresholds are reached, further offline signing is blocked with a clear policy message and a "Connect to proceed" action And any allowed supervisor override is captured with user, reason, timestamp, and is included in the audit trail
Audit Dossier & Export
"As a risk officer, I want a complete, tamper‑evident dossier export so that we can satisfy audits and insurer requests quickly and confidently."
Description

Generates a consolidated, tamper-evident dossier per document or work order, including signature sequence, signer identities, attestation metadata, nudge/escalation history, and any re-approval events. Provides export in human-readable PDF and machine-readable JSON with checksums, plus direct links back to the underlying FleetPulse records. Supports retention policies, legal hold, and secure sharing with external stakeholders via time-limited links and watermarking.

Acceptance Criteria
Generate Tamper-Evident Dossier After Signature Completion
Given a work order requiring role-based signatures and all required signatures have been captured in the configured order When the final required signature is recorded Then the system generates dossier version V1 within 10 seconds And computes and stores a SHA-256 checksum of the dossier content And marks the dossier as read-only And any subsequent edit to the underlying record creates a new dossier version (V2, V3, ...) with a new checksum And a recomputed checksum that does not match the stored checksum flags the dossier as "tamper suspect" and blocks export
Include Full Signature Sequence and Attestation Metadata
Given a dossier is generated for a work order signed by driver, technician, and supervisor When the dossier is viewed or exported Then it displays a signature timeline including role, signer name, unique signer ID, and sequence number for each signature And for each signature it records timestamp in ISO 8601 with timezone, device identifier, public IP address, and geolocation within 100 meters or a reason "location unavailable" And each signature entry includes a verification link to the corresponding signer session record
Capture Nudge, Escalation, and Re-approval History
Given nudges and/or escalations occurred during signature collection and at least one change required re-approval When the dossier is generated Then it includes a chronological history of all nudges and escalations with timestamp, channel (email/SMS/in-app), recipient, and status (delivered/opened/clicked) And re-approval events list the change summary, impacted fields, initiator identity, and re-approving roles with timestamps And the counts of nudges, escalations, and re-approvals in the dossier exactly match the event log for the work order
Export Dossier as PDF and JSON with Checksums and Backlinks
Given a dossier exists When a user requests an export Then the system returns a human-readable PDF and a machine-readable JSON within 5 seconds for dossiers up to 10 MB And both files include an embedded SHA-256 checksum (visible in PDF footer, top-level field in JSON) And the JSON validates against schemaVersion "1.0" with zero validation errors And both exports include clickable links back to the FleetPulse work order, signature entries, and event records that resolve for authenticated users
Secure Time-Limited Sharing with Watermarking
Given a user creates an external share link for a dossier with an expiry of 7 days When a recipient opens the PDF via that link before expiry Then access is permitted without authentication and logged with timestamp, IP, and user agent And each PDF page is watermarked with recipient email, access timestamp, and link ID And the link automatically expires at the configured time (to the minute) and returns an expiration message thereafter And revoking the link takes effect immediately, preventing further access
Retention Policy and Legal Hold Enforcement
Given an organization retention policy of 2 years for dossiers and an optional legal hold flag When a dossier reaches its retention end date without a legal hold Then it is purged within 24 hours and a purge record (dossier ID and checksum) is written to the audit log And all active external share links are invalidated immediately
When a legal hold is applied before the retention end date Then the dossier remains retained and exportable until the hold is removed And attempts to delete the dossier while on legal hold are blocked and logged

Policy Match

Pre‑flight checks each dossier against FMCSA/DOT and insurer evidence lists, flagging gaps (e.g., missing torque sheet or post‑repair road test) and auto‑suggesting the exact artifacts needed. Prevents rework and rejected submissions.

Requirements

Policy Rules Catalog & Versioning
"As a compliance manager, I want a versioned catalog of insurer and FMCSA evidence rules so that dossiers are checked against the latest, policy-specific requirements."
Description

Maintain a centralized, version-controlled catalog of FMCSA/DOT and insurer evidence requirements, scoped by insurer, policy, coverage type, vehicle class, and geography. Support multiple ingestion paths (REST API, CSV upload, and admin UI) with effective dates, change logs, and deprecation handling. Provide a rules DSL to express conditional requirements (e.g., “if brake service performed then require torque sheet and post-repair road test within 24 hours”). Map each rule to FleetPulse data objects and artifact types (inspections, service orders, torque sheets, OBD-II fault clears, road test logs) to enable automatic checks. Expose a read-optimized cache for low-latency validation and ensure backward compatibility by retaining historical rule versions for auditing and re-validation.
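Illustrative sketch (Python): the time-window check at the heart of a rule like "if brake service performed then require torque sheet and post-repair road test within 24 hours". A real engine would compile the DSL text; the function and violation codes here are assumptions used for illustration.

    from datetime import datetime, timedelta
    from typing import List, Optional

    def check_brake_rule(brake_service_performed: bool,
                         service_completed_at: datetime,
                         torque_sheet_at: Optional[datetime],
                         road_test_at: Optional[datetime],
                         window: timedelta = timedelta(hours=24)) -> List[str]:
        """Return violation codes; an empty list means the rule passes."""
        violations: List[str] = []
        if not brake_service_performed:
            return violations  # condition not triggered, nothing is required
        if torque_sheet_at is None:
            violations.append("MISSING_ARTIFACT:torque_sheet")
        if road_test_at is None:
            violations.append("MISSING_ARTIFACT:road_test")
        elif road_test_at > service_completed_at + window:
            violations.append("EVIDENCE_TIME_WINDOW_EXCEEDED:road_test")
        return violations

Evaluated with a road test at T+26h this returns the time-window violation, matching the DSL acceptance criterion below.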

Acceptance Criteria
CSV Rule Ingestion with Effective Dates and Scope
Given a CSV matching the published schema (insurer_id, policy_id, coverage_type, vehicle_class, geography, rule_dsl, effective_start, effective_end?, version_notes) When an authorized admin uploads it via the ingestion endpoint or Admin UI Then the system validates the schema and rejects files with unknown/missing required columns with code CSV_SCHEMA_INVALID And each row is validated for required fields, date ranges (effective_start < effective_end when provided), and DSL compilation And rows that fail validation are rejected with per-row error details while valid rows are committed (partial success) and a summary is returned And re-uploading an identical CSV within 24 hours is idempotent and creates no new versions (HTTP 200, no-op) while changed rows produce incremented versions And each committed change creates a change-log entry (rule_id, old_version, new_version, author, source=CSV, timestamp, diff) And newly published/updated rules become queryable via the catalog within 60 seconds
REST API Rule Ingestion with Version Control and Change Logs
Given a client with manage:rules scope and a valid payload containing scope attributes, DSL, effective dates, and notes When the client POSTs to /api/rules Then the API responds 201 with rule_id and version, persists the rule, and writes a change-log entry (source=API, request_id) And concurrent updates require If-Match ETag; mismatches return 409 CONFLICT without creating a new version And effective date overlaps are validated; overlapping active windows for identical scope require explicit supersede=true or are rejected with RULE_WINDOW_CONFLICT And providing deprecates_version marks the referenced version deprecated with a deprecation_reason and optional sunset_date And all validation errors are returned in a structured list with codes and line/field references
Admin UI Rule Management with DSL Validation and Deprecation
Given an authorized rules-admin user in the Admin UI When the user creates or edits a rule in the DSL editor Then the editor performs real-time syntax and reference validation and blocks Publish until errors are resolved And a Preview shows required artifacts for a selectable sample context (insurer, policy, coverage, vehicle_class, geography, service type, event times) And publishing requires a change summary and, when deprecating an existing version, a deprecation_reason and optional sunset_date And successful publish creates a new version, updates the catalog, and records the change log with diff And unauthorized users attempting to access Rule Management see a 403 and cannot publish changes
Rules DSL Expresses Conditional Evidence Requirements
Given the DSL expression "if brake_service_performed then require torque_sheet and road_test within 24h" When evaluated against a service order where brake_service_performed=true, torque_sheet exists, and road_test occurred at T+26h Then validation fails with code EVIDENCE_TIME_WINDOW_EXCEEDED and details include road_test required_by=T+24h, actual=T+26h And when evaluated where torque_sheet and road_test both exist within 24h of service completion Then validation passes with no missing artifacts And the DSL supports operators and constructs: if/else, and/or/not, exists(), count(), comparisons, time windows (minutes/hours/days), and geography/policy scoped overrides And invalid symbols or unmapped references cause compile-time error DSL_REF_UNKNOWN with the offending token and position
Mapping Rules to FleetPulse Data Objects and Artifact Types
Given a mapping registry linking DSL symbols to FleetPulse objects (inspections, service_orders, torque_sheets, obd_fault_clears, road_test_logs) When a rule references torque_sheet or road_test Then the compiler resolves the references to artifact types and data paths; unresolved references fail with MAP_REF_UNKNOWN And mappings are versioned and included in the rule version metadata to ensure deterministic evaluation across versions And changes to mappings are audited with author, timestamp, and diff and do not retroactively alter historical rule behavior And evaluation unit tests confirm that required artifacts are detected from the mapped data sources for at least one example per mapped type
Read-Optimized Cache for Low-Latency Rule Retrieval
Given one or more rules published to the catalog When the validation service queries for applicable rules by scope (insurer, policy, coverage_type, vehicle_class, geography) and effective_date Then the cache returns the resolved rule set with p95 latency ≤ 30ms and p99 ≤ 60ms for up to 50k active rule versions And cache invalidation occurs within 60 seconds of publish/update/deprecate events; new versions appear without requiring service restart And on cache miss, the service falls back to the source store and populates the cache; operational metrics report hit_rate ≥ 95% in steady state And the cache exposes a health endpoint reporting freshness (last_update), hit_rate, and error rates, and degrades gracefully on backend outages
Historical Versions Retained for Auditing and Re-Validation
Given historical rule versions and change logs stored with effective windows When a client requests GET /api/rules?as_of=2024-06-01T00:00:00Z for a scope Then the API returns the rule versions effective at that instant, including deprecation status where applicable And the audit endpoint returns an immutable change log trail for each rule_id/version; deletion/mutation of historical entries is blocked (WORM semantics) And the re-validation API accepts dossier_id and as_of and reproduces the validation outcome using the historical rules and mappings And exporting the catalog for a past date yields a consistent snapshot that matches the audit trail (hash validated)
Pre-flight Validation Engine
"As a fleet manager, I want an automated pre-flight validation of each repair/claim dossier so that gaps are flagged before submission and rework is avoided."
Description

Implement an on-demand and real-time validator that executes the rules catalog against each dossier prior to submission. Cross-reference artifacts from FleetPulse modules (maintenance events, inspection reports, repair orders, OBD-II events, attachments) to detect missing or stale evidence. Return structured results with severity levels, rule references, rationale, and remediation hints. Support incremental re-validation when new artifacts are added, and run validations at key workflow points (post-repair, pre-claim, pre-DOT audit). Design for performance (sub-second for typical dossiers; async batching for large fleets) and resilience (queue/retry on external data fetch failures).
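Illustrative sketch (Python): the shape a structured rule result could take, with the deterministic ordering (severity descending, then ruleId) that the criteria below require. Field names follow the criteria; the class itself is an assumption.

    from dataclasses import dataclass, field
    from typing import List

    SEVERITY_RANK = {"Critical": 0, "Warning": 1, "Info": 2}

    @dataclass
    class RuleResult:
        rule_id: str
        rule_version: str
        title: str
        severity: str                                  # Info | Warning | Critical
        rationale: str
        remediation_hints: List[str] = field(default_factory=list)
        evidence_refs: List[str] = field(default_factory=list)
        affected_artifacts: List[str] = field(default_factory=list)

    def order_results(results: List[RuleResult]) -> List[RuleResult]:
        # Deterministic ordering: severity descending, then ruleId.
        return sorted(results, key=lambda r: (SEVERITY_RANK[r.severity], r.rule_id))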

Acceptance Criteria
On-Demand Pre-Submission Validation Response
Given a typical dossier (<= 50 artifacts) and the user triggers validation via UI or POST /validate with an idempotency key When the engine executes the rules catalog Then a synchronous response is returned within 1000 ms p95 And the response includes a summary with counts by severity {Info, Warning, Critical} And each rule result includes: ruleId, ruleVersion, title, severity, rationale, remediationHints[], evidenceRefs[], affectedArtifacts[], timestamp And results are deterministic for identical inputs and the same catalog version And results are ordered by severity descending, then ruleId
Cross-Module Evidence Correlation and Gap Detection
Given a dossier with a closed brake repair order lacking a torque sheet attachment When validation runs Then rule BRAKE_TORQUE_SHEET_REQUIRED returns severity Critical with rationale referencing the repair order and remediation hint to attach the torque sheet
Given a dossier with an inspection report older than the rule-defined freshness window When validation runs Then rule INSPECTION_FRESHNESS flags severity Warning or Critical according to the catalog threshold and references the stale report
Given a dossier with an active OBD-II MIL event without a documented resolution artifact When validation runs Then rule OBD_MIL_RESOLUTION_REQUIRED flags severity Critical and requests the post-repair road test or technician resolution note
Incremental Re-Validation on Artifact Changes
Given a dossier previously validated with a Critical finding for BRAKE_TORQUE_SHEET_REQUIRED And a torque sheet artifact is subsequently attached to the same repair order When incremental validation is triggered automatically within 2 seconds of the artifact save Then only rules impacted by the changed artifact are re-evaluated And the prior Critical finding is cleared or downgraded as appropriate And unaffected rule results retain their resultIds and timestamps And the delta update is available to the UI/API within 500 ms p95
Workflow Triggers: Post-Repair, Pre-Claim, Pre-DOT Audit
Given a repair order is marked Completed When the status change is saved Then validation auto-runs and results are posted to the dossier timeline within 2 seconds p95
Given a user initiates a claim submission from FleetPulse When the Pre-Claim step is reached Then validation auto-runs and any Critical findings are surfaced with clear remediation CTAs; claim submission remains disabled until all Critical findings are resolved
Given a user generates a DOT audit packet When the pre-audit step begins Then validation auto-runs and the audit readiness badge reflects Pass only when no Critical findings remain
Performance and Scalability: Sync Typical, Async Batching for Large Fleets
Given a typical dossier (<= 50 artifacts) When validation is invoked synchronously Then the p95 latency is <= 1000 ms and p99 <= 1500 ms under nominal load (<= 20 concurrent requests)
Given a bulk validation request for >= 1000 dossiers When the API /validate/batch is called Then the service responds 202 with a jobId within 2 seconds p95 And progress can be polled via /jobs/{jobId} with percentage complete and counts by status And p95 job completion time is <= 10 minutes for 1000 dossiers with no data-source outages And no more than 0.1% of dossiers are retried due to transient errors beyond one retry cycle
Resilience to External Data Fetch Failures
Given a rule requires external data (e.g., insurer policy evidence list) and the fetch fails transiently When validation runs Then the engine retries up to 3 times with exponential backoff (initial 1s, max 8s) And if still failing, affected rule outcomes are marked Deferred with severity None and include an errorCode, errorMessage, and retryAvailable=true And a background retry is scheduled within 15 minutes or upon webhook notification of data availability And the dossier is not marked Pass for gates that depend on Deferred rules; the UI/API clearly distinguishes Deferred from Fail (a backoff sketch follows these criteria)
Given the external source recovers When the retry succeeds Then Deferred outcomes are replaced with definitive results without requiring user action
Structured Results Contract and Rule Versioning
Given the rules catalog at version X.Y.Z When validation executes Then every rule result includes ruleId and ruleVersion=X.Y.Z and a documentationUrl referencing the catalog entry And the response conforms to the published JSON schema (schemaVersion present; validation passes 100%) And running the same dossier with the same catalog version produces byte-identical results modulo timestamp fields And when the catalog version changes to X.Y.(Z+1), the response reflects the new version and any modified ruleIds while maintaining backward compatibility for unchanged fields
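
Illustrative sketch (Python) of the deferral-and-backoff behavior in the resilience criterion above; TransientFetchError and the fetch callable are placeholders, not part of the spec.

    import time

    class TransientFetchError(Exception):
        """Placeholder for a retriable external-fetch failure."""

    def fetch_with_backoff(fetch, retries: int = 3,
                           initial_s: float = 1.0, max_s: float = 8.0):
        # One initial try plus up to `retries` retries, sleeping 1s, 2s, 4s
        # (capped at 8s); after the final failure the outcome is Deferred, not Fail.
        delay = initial_s
        for attempt in range(retries + 1):
            try:
                return fetch()
            except TransientFetchError as err:
                if attempt == retries:
                    return {"status": "Deferred", "errorCode": type(err).__name__,
                            "retryAvailable": True}
                time.sleep(delay)
                delay = min(delay * 2, max_s)
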
Evidence Retrieval, Auto-Link, and Suggestions
"As a service coordinator, I want the system to auto-find existing evidence and suggest exactly what’s missing so that I can complete dossiers quickly and correctly."
Description

Automatically discover and link existing evidence across FleetPulse (e.g., inspection images, torque sheets, technician notes, telematics road test data) to the active dossier. When evidence is missing, present precise, policy-aligned suggestions that specify the exact artifact name, acceptable formats, capture location in the app, and required fields (e.g., torque wrench ID, spec range, technician signature). Provide quick-create templates and mobile-friendly capture flows with pre-filled asset metadata (VIN, odometer, work order). Validate file types, timestamps, and signatures at capture time to reduce back-and-forth.
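
Illustrative sketch (Python): deduplicated auto-linking, where an artifact is linked at most once, keyed by its unique ID and content checksum as the criteria below require. The data shapes are assumptions.

    from typing import Dict, List

    def auto_link(linked: Dict[str, str], candidates: List[dict]) -> int:
        """linked maps artifact_id -> sha256; returns the number of new links."""
        added = 0
        for art in candidates:
            if art["artifact_id"] in linked:
                continue  # already linked once; never link the same ID twice
            if art["sha256"] in linked.values():
                continue  # identical content already linked under another ID
            linked[art["artifact_id"]] = art["sha256"]
            added += 1
        return added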

Acceptance Criteria
Auto-Link Existing Evidence to Active Dossier
Given a dossier is opened for an asset (VIN and active work order present) and matching evidence exists in FleetPulse within the policy lookback window (default 30 days) When the dossier loads Then the system auto-links all matching artifacts (inspection images, torque sheets, technician notes, telematics road test data) within 5 seconds And each linked item displays artifact type, source module, capture timestamp, and the policy rule it satisfies And no artifact is linked more than once (deduplication by unique artifact ID and checksum) And the Linked Evidence counter equals the number of artifacts successfully linked
Policy-Aligned Missing Evidence Suggestions
Given the dossier is evaluated against FMCSA/DOT and insurer evidence lists and gaps are detected When the user opens the Policy Match panel Then the system lists each missing item with: exact artifact name, acceptable file formats, capture location in app (path), and required fields And suggestions are tailored to the vehicle class, policy version, and jurisdiction And each suggestion includes a one-tap Create action And the suggestions list updates within 1 second after new evidence is added or linked
Quick-Create Templates with Pre-Filled Asset Metadata (Mobile)
Given a user taps Create for a required artifact (e.g., torque sheet) on mobile When the template opens Then VIN, asset ID, odometer, and work order fields are pre-filled from the active dossier And all policy-required fields are present and clearly marked as required And GPS location and capture timestamp are auto-captured And the template supports offline mode and syncs within 2 minutes after connectivity is restored And the artifact can be submitted in ≤3 taps when no edits are needed
Capture-Time Validation of Files, Timestamps, and Signatures
Given a user uploads or captures an artifact When they attempt to save Then the system validates file type against the allowed list for that artifact and blocks invalid types with an actionable message And the capture timestamp must fall within the configured policy window (e.g., ±24 hours of work order completion) or a justification field is required And technician signature is required when specified by policy and must include technician ID, name, and verifiable signature hash And any missing/invalid required fields are highlighted inline and submission is prevented until resolved
Telematics Road Test Auto-Detection and Linking
Given a work order is marked repair complete for an asset with active telematics When telematics data indicates a post-repair road test meeting policy thresholds (distance ≥3 miles and speed ≥45 mph within 48 hours) Then a Post-Repair Road Test artifact is auto-created and linked to the dossier And the artifact includes start/end timestamps, distance, max speed, driver/device ID if available, and a route snapshot And if thresholds are not met within 48 hours, a suggestion is created prompting the user to perform and capture a road test
Evidence Deduplication, Versioning, and Provenance
Given multiple artifacts of the same type exist for the same work order and near-identical timestamps (±5 minutes) When linking evidence to the dossier Then only the most recent version is linked by default and earlier versions are accessible via version history And each artifact displays provenance: uploader, source module, original filename, and SHA-256 checksum And no more than one link exists per unique artifact ID in the dossier And all link, relink, and unlink actions are recorded in the audit log with user, action, timestamp
Submission Gatekeeper & Overrides
"As an operations lead, I want submission to be blocked or warned when required artifacts are missing, with controlled overrides, so that we prevent rejected claims while retaining flexibility."
Description

Enforce configurable submission policies that block or warn when required evidence is missing based on rule severity and target recipient (insurer vs. regulator). Present a concise gap summary with links to fulfill each requirement. Allow role-based overrides with mandatory reason codes and attach the override decision to the dossier timeline. Support both UI and API submission flows, ensuring consistent gating behavior and returning machine-readable errors for integrations.
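
Illustrative sketch (Python): recipient-aware gating in which block-severity gaps stop the submission (HTTP 422 in the API flow) and warn-severity gaps pass through as warnings. The dictionary shapes are assumptions; the status codes and SUBMISSION_BLOCKED code follow the criteria below.

    from typing import List

    def gate_submission(gaps: List[dict], recipient: str) -> dict:
        # Gaps for other recipients (insurer vs. regulator) are ignored.
        relevant = [g for g in gaps if g["recipient"] == recipient]
        blocking = [g for g in relevant if g["severity"] == "block"]
        warnings = [g for g in relevant if g["severity"] == "warn"]
        if blocking:
            # UI disables submit; API answers HTTP 422 SUBMISSION_BLOCKED.
            return {"status": 422, "code": "SUBMISSION_BLOCKED", "errors": blocking}
        if warnings:
            # Submission is accepted but the warnings travel with the response.
            return {"status": 200, "warnings": warnings}
        return {"status": 201}  # clean submission, new record created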

Acceptance Criteria
Blocking vs Warning Behavior by Recipient and Severity
Given a dossier missing a regulator-required artifact with severity "block" When a user attempts submission via the UI to the regulator Then the primary submission action is disabled, a blocking banner lists at least one blocking gap, and no network submission call is executed
Given a dossier missing an insurer-required artifact with severity "warn" When an integration submits via API to the insurer Then the API responds HTTP 200, includes a non-empty warnings[] array with ruleId, recipient, severity, artifactType, message, remediationUrl, and the submission is accepted and recorded
Given a dossier missing at least one "block"-severity requirement for the target recipient When an integration submits via API Then the API responds HTTP 422 with code "SUBMISSION_BLOCKED", an errors[] array populated with the missing items, and no submission record is created
Gap Summary and Fulfillment Links in UI Submission Modal
Given gating detects missing evidence When the submit modal opens Then a summary header displays total counts by severity and by recipient, and renders within 1.5 seconds
Given a listed gap item When the gap row is displayed Then it shows artifactType, ruleId, recipient, severity, concise message, and a single call-to-action that deep-links to the exact upload/task screen; following the link and completing the action returns the user to the modal with that gap resolved
Given there are zero block-severity gaps for the target recipient When viewing the modal Then the primary submit button is enabled; otherwise it is disabled unless a permitted override is initiated
Role-Based Override With Mandatory Reason Codes and Timeline Attachment
Given a user without the SubmitOverride permission When they attempt to initiate an override via UI or API Then the UI control is hidden/disabled and the API returns HTTP 403 with code "FORBIDDEN_OVERRIDE"
Given a user with the SubmitOverride permission and at least one block-severity gap When they choose to override Then they must select a reasonCode from a managed list and enter a rationale of at least 10 characters; the confirm action remains disabled until both are provided
Given an override is confirmed When the submission proceeds Then the dossier timeline records an immutable entry containing timestamp (UTC ISO-8601), actorId, recipient, overridden ruleIds, reasonCode, rationale, and submissionId; the same details are returned in the API response metadata
Consistent Gating Logic Across UI and API
Given the same dossier snapshot and target recipient When evaluated via UI pre-flight and via the submissions API Then the set of gaps (ruleIds, severities, recipients, artifactTypes) is identical and includes ruleEngineVersion and policyVersionId
Given concurrent evaluations at submission time When the request is processed Then a read-consistent snapshot is used (no partial updates), and results are deterministic for a given idempotency key
Given the same API payload is retried with the same Idempotency-Key within 24 hours When processed Then the server returns the same HTTP status and body and does not create duplicate submission records
API Error and Warning Contract for Submissions
Given submission is blocked by policy When calling POST /submissions Then the response is HTTP 422 application/json with fields: code="SUBMISSION_BLOCKED", correlationId, policyVersionId, ruleEngineVersion, and errors[] each containing ruleId, recipient, severity, artifactType, message, remediationUrl; p95 latency ≤ 800 ms at 50 rps
Given submission is accepted with warnings When calling POST /submissions Then the response is HTTP 200 application/json with submissionId, correlationId, policyVersionId, ruleEngineVersion, and warnings[] with the same item schema; warnings[].severity="warn"
Given no gaps exist for the target recipient When calling POST /submissions Then the response is HTTP 201 application/json with submissionId and no errors or warnings arrays present
Policy Configuration Resolution and Versioning
Given policies exist for a fleet and recipient When a submission is evaluated Then the engine resolves the active ruleset by fleetId, recipient, and effectiveAt (server time) and applies each rule's severity; the response includes the policyVersionId used
Given no fleet-specific policy exists for the recipient When a submission is evaluated Then the default global policy is applied and policyVersionId reflects the default version
Given a policy change is published When evaluated within 60 seconds Then new evaluations reflect the updated policy; a cached policy must not remain effective longer than 60 seconds
Real-time Re-evaluation After Gap Resolution
Given one or more blocking gaps are present When the user attaches the required artifact or completes the required task Then the gate re-evaluates and updates the gap summary within 3 seconds; if no blocking gaps remain, the submit action becomes enabled
Given a previously blocked API submission is retried after gaps are resolved When calling POST /submissions Then the response contains no blocking errors for the previously missing rules and returns HTTP 200 or 201 depending on creation semantics
Given a required artifact is removed prior to submission When viewing the submit modal Then the gap list and summary reflect the reintroduced gap within 3 seconds and the submit action is disabled (absent override)
Compliance Pack Generation & Audit Trail
"As an auditor, I want a downloadable compliance pack and a complete audit trail so that I can prove adherence and resolve disputes efficiently."
Description

Generate a standardized, exportable compliance pack (ZIP with indexed PDF/JSON) that bundles all evidence, validation results, and rule versions used at time of validation. Apply timestamps, hash checksums, and user/signature metadata. Maintain an immutable audit trail of validations, overrides, rule updates, and dossier changes with retention settings. Provide secure, expiring share links and access controls for external reviewers (insurers, auditors) with view-only scopes.
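
Illustrative sketch (Python): assembling a pack with per-file SHA-256 checksums recorded in manifest.json, then a ZIP-level checksum over the finished archive. The function name and file map are assumptions; the manifest fields follow the criteria below.

    import hashlib
    import json
    import zipfile
    from typing import Dict

    def build_pack(zip_path: str, files: Dict[str, bytes]) -> str:
        """Bundle evidence files plus a manifest; return the ZIP-level checksum."""
        manifest = []
        with zipfile.ZipFile(zip_path, "w") as zf:
            for name, data in files.items():
                zf.writestr(name, data)
                manifest.append({"path": name, "bytes": len(data),
                                 "sha256": hashlib.sha256(data).hexdigest()})
            zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        with open(zip_path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()  # stored with the pack record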

Acceptance Criteria
One-Click Compliance Pack Generation (ZIP with Indexed PDF/JSON)
Given a dossier with a completed validation run exists When a user with role "Owner" or "Manager" clicks "Generate Compliance Pack" Then the system produces a ZIP within 30 seconds and stores it under the dossier's Compliance Packs And the ZIP contains at minimum: index.pdf, manifest.json, validation_report.json, ruleset.json, and an /evidence folder with all referenced artifacts And manifest.json lists every file path, byte-size, MIME type, and SHA-256 checksum And the ZIP filename includes dossierId, validationRunId, and ISO-8601 timestamp And the pack is downloadable and its download URL requires authenticated access or a valid share link
Embedded Timestamps, Checksums, and User/Signature Metadata
Given a compliance pack has been generated Then every metadata record (manifest.json, validation_report.json) includes createdAt, validatedAt (UTC ISO-8601), createdBy.userId, createdBy.fullName, createdBy.role And each evidence item includes capturedAt (if provided), uploadedAt, uploadedBy.userId, and optional eSignatureId where an e-signature was captured And the ZIP-level SHA-256 checksum is computed and stored with the pack record And recomputing checksums of all files matches the manifest and ZIP-level checksum
Immutable Audit Trail of Validations, Overrides, Rules, and Dossier Changes
Given audit logging is enabled When a validation is run, an override is applied, a rule set is updated, or a dossier field/evidence is changed Then an append-only audit entry is written with eventType, actorId, actorRole, timestamp (UTC), objectId, changeSummary, and preHash/postHash pointers And attempts to modify or delete audit entries via API or UI are rejected with 403 and are themselves logged And audit entries are retrievable by dossierId and time range within 200 ms for 95th percentile of queries up to 10k entries And the audit log is included by reference in the compliance pack (audit_log_pointer with hash and range)
Retention Policy Configuration and Enforcement
Given an organization retention policy is set for packs and audit logs (e.g., 7 years) When records reach expiration Then a scheduled job tombstones the records, permanently deletes payloads, and writes a deletion entry with reason, timestamp, and content hashes And non-expired records remain immutable and accessible And administrators can generate a "Proof of Deletion" report listing IDs and hashes for a selected period And retention changes only affect future records; existing records retain their original expiry
Secure, Expiring Share Links with View-Only Access Controls
Given a user creates a share link for a compliance pack When the user sets an expiry between 1 hour and 30 days and optionally restricts recipients by email allowlist Then the system issues a non-guessable URL token and enforces view-only scope (no edits, uploads, or deletes) And access requires verified email matching the allowlist or possession of a one-time code sent to the invitee And access attempts after expiry or revocation return 410 Gone and are logged And share links do not grant access to other dossier resources beyond the selected pack
Share Link Access Logging and Revocation
Given a share link exists When any recipient views, previews, or downloads the pack Then an access event is logged with linkId, viewerEmail (or anonymous flag), ip, userAgent, timestamp, action, and outcome And the pack owner can revoke the link; subsequent attempts immediately fail with 410 Gone within 60 seconds of revocation And access logs are viewable to the pack owner and exportable as CSV/JSON
Reproducible Validation Using Stored Rule Versions
Given a compliance pack with ruleset.json and validation_report.json When a user selects "Revalidate with Original Rules" Then the system re-runs validation against the same evidence snapshot using the stored ruleset and produces a new report with an identical rules_evaluated hash and identical pass/fail outcomes And if current rules differ from stored rules, the UI clearly labels the revalidation as "Using Archived Rules" And a mismatch in results flags a warning and creates an audit entry referencing both reports
Notifications & Tasking for Missing Evidence
"As a technician, I want clear tasks and reminders for missing evidence items so that I know exactly what to capture and by when."
Description

Create actionable tasks for each unmet requirement with assignees (technician, driver, coordinator), due dates, and SLA rules. Deliver multi-channel notifications (in-app, email, mobile push) with deep links to capture flows and one-tap completion. Provide status tracking, reminders, and escalation to managers for overdue items. Offer a dashboard card summarizing open gaps by dossier, vehicle, and policy to focus daily operations.
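
Illustrative sketch (Python): SLA-driven due-date and priority assignment for a missing-evidence task. The offsets in the table are illustrative, and a real implementation would also honor the fleet's business hours and holidays per the criteria below.

    from datetime import datetime, timedelta, timezone
    from typing import Optional

    # Illustrative SLA table: artifact type -> (due-date offset, priority).
    SLA_RULES = {
        "torque_sheet": (timedelta(hours=24), "High"),
        "road_test_log": (timedelta(hours=48), "Medium"),
    }

    def new_task(artifact_type: str, created_at: Optional[datetime] = None) -> dict:
        created_at = created_at or datetime.now(timezone.utc)
        offset, priority = SLA_RULES.get(artifact_type, (timedelta(days=3), "Low"))
        return {"artifact_type": artifact_type, "status": "Open",
                "created_at": created_at, "due_at": created_at + offset,
                "priority": priority}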

Acceptance Criteria
Auto-Create Tasks for Missing Evidence
Given a dossier fails Policy Match due to 1+ missing evidence items When Policy Match completes its check Then the system creates exactly one task per missing artifact with fields: taskId, dossierId, vehicleId, policyId, artifactType, assigneeRole (technician|driver|coordinator), assigneeId (nullable), priority, dueAt, slaPolicyId, status=Open, createdAt And dueAt is calculated using the active slaPolicyId and the dossier/policy timezone And each task is linked to the originating dossier and vehicle and appears in the assignee’s Open queue within 10 seconds And duplicate tasks for the same artifactType and dossier are not created within a 24-hour window (idempotency) And an audit log entry is written with actor=system and reason=PolicyMatchGap
SLA-Based Due Date and Priority Calculation
Given an SLA rule exists mapping artifactType to dueAt offset and priority (e.g., torque sheet -> 24h, High) When a task is created or the SLA configuration for its artifactType is updated Then dueAt is computed as createdAt + SLA offset honoring business hours and holidays defined for the fleet And priority is set per SLA rule (Low|Medium|High|Critical) And if the SLA rule changes after task creation, the task dueAt/priority recalculates once and the change is captured in task history with old/new values and timestamp And tasks with Critical priority are flagged for immediate notification And tasks past dueAt are marked Overdue and included in escalation evaluation
Multi-Channel Notifications with Deep Links
Given a new Open task is assigned to a user with notification channels enabled (in-app, email, push) When the task is created or reassigned to a new assignee Then the assignee receives an in-app notification within 10 seconds containing task title, dossierId, vehicle label, policy name, artifactType, dueAt (local time), and a deep link to the capture flow And an email is sent within 60 seconds with the same details and a primary CTA button labeled “Submit Evidence” linking via a signed URL token that expires in 24 hours And a mobile push notification is delivered within 60 seconds with concise details and a deep link intent And notifications are deduplicated per event per channel; retries occur up to 3 times on transient failures And all notifications are recorded with delivery status (Sent|Delivered|Failed) and timestamp
One-Tap Completion from Notification
Given an assignee opens a notification and taps the deep-link CTA When the capture flow opens Then dossierId, vehicleId, policyId, and expected artifactType are pre-populated and read-only And the user can upload or capture the required artifact in one flow and submit And upon successful validation (correct artifactType, file size ≤ 25MB, required fields present), the task auto-transitions to Completed within 5 seconds and the dossier updates to reflect the new evidence And the notification source shows a success state; the user is returned to the task list or dossier view And if policy permits Not Applicable, the user can select N/A with a required justification, transitioning the task to Not Applicable status And if offline, the app queues the upload and marks the task In Progress, auto-completing when connectivity is restored
Status Tracking, Reminders, and Task Lifecycle
Given a task exists with status Open and a dueAt timestamp When the assignee first opens the capture flow Then the task transitions to In Progress and lastActivityAt is updated And reminder notifications are sent at T-24h, T-1h, and at +1h overdue until completion (max once per 24h thereafter) And completing the required artifact transitions the task to Completed, stops reminders, and records completedAt And tasks support statuses: Open, In Progress, Blocked (with required reason), Completed, Not Applicable, Overdue (derived flag when now > dueAt) And all status changes are audit-logged with actor, old/new status, timestamp, and optional note
Escalation to Manager on Overdue Items
Given an Overdue task remains not Completed past the escalation threshold defined by the SLA (e.g., 4h overdue for High, 1h for Critical) When the threshold is reached Then the system escalates to the designated manager(s) for the fleet/location via in-app, email, and push, including task details, age, and assignee And the task records escalationAt, escalatedTo, and escalationLevel=1 And further escalations occur at configured intervals (e.g., every 24h) up to a max of 3 levels, notifying next-level managers And escalation resets if the task is reassigned or moved to Completed/Not Applicable And all escalations are visible in the task timeline and exportable reports
Dashboard Card: Open Gaps by Dossier, Vehicle, and Policy
Given open or overdue tasks exist across dossiers and vehicles When a user opens the Operations dashboard Then a card displays counts of open gaps by dossier, by vehicle, and by policy, including totals and overdue counts And the card highlights top 5 vehicles and policies by overdue tasks and shows SLA risk badges (due within 24h, overdue) And clicking a segment drills down to a pre-filtered task list within 1 navigation And data reflects the last 60 seconds and loads in ≤ 1.0s for up to 5,000 tasks And users can filter the card by location, assignee role, priority, and policy And access respects user permissions, hiding vehicles or dossiers the user is not authorized to view

Redaction Shield

One‑click redaction of PII and sensitive fields (driver IDs, phone numbers, exact home locations, internal pricing), with a private master file and a share‑safe version. Share confidently with auditors and carriers while staying privacy‑compliant.

Requirements

One-Click Redaction Toggle
"As a fleet manager, I want to create a share-safe copy of records with one click so that I can quickly share information without exposing sensitive data."
Description

Provide a single-action control in UI and API to generate a share-safe version of any supported artifact (trip logs, DVIR inspections, maintenance records, invoices, exports) by applying the active redaction policy. The action must leave the master record unchanged, preserve schema compatibility, and complete within typical UI latency (<2s for single records). The flow includes inline progress, success/failure messaging, and a link to the newly created redacted artifact. The redaction applies masking for driver identifiers, phone numbers, VINs (partial), internal pricing, and removes/obfuscates precise home locations while retaining operational utility (e.g., city/ZIP-level granularity). Supports batch selection for multi-record redaction and idempotency to avoid duplicate artifacts.
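
Illustrative sketch (Python) of the masking strategies named in the criteria below: a stable tenant-scoped token for driver IDs, middle-digit phone masking, and partial VIN masking. How the tenant salt is provisioned is an assumption.

    import base64
    import hashlib

    def mask_driver_id(driver_id: str, tenant_salt: str) -> str:
        # Stable, irreversible tenant-scoped token: SHA-256, base32, 10 chars.
        digest = hashlib.sha256((tenant_salt + driver_id).encode()).digest()
        return base64.b32encode(digest).decode()[:10]

    def mask_phone(e164: str) -> str:
        # Keep the "+" and leading digit plus the last two digits; mask the middle.
        return e164[:2] + "*" * (len(e164) - 4) + e164[-2:]

    def mask_vin(vin: str) -> str:
        # First 11 characters replaced, last 6 retained, length stays 17.
        return "*" * 11 + vin[-6:]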

Acceptance Criteria
UI Single-Record One-Click Redaction
Given a user with Redaction permissions is viewing a supported artifact (trip log, DVIR, maintenance record, invoice, or export) When the user clicks the One-Click Redact control Then the system applies the active redaction policy to create a new share-safe artifact without modifying the master record And shows inline progress within the artifact view until completion And completes the operation within 2 seconds p95 for artifacts ≤1 MB And displays a success message with a link labeled "Open Share-Safe Copy" And labels the new artifact as "Share-Safe" and the original as "Master" And writes an audit log entry capturing user, artifact type/id, policy id, timestamp, and outcome
API Redaction Endpoint Idempotency and Response
Given a client calls POST /v1/redactions with artifact_id, artifact_type, and policy_id and an Idempotency-Key header When the same request (identical body and Idempotency-Key) is retried within 24 hours Then the service returns 200 with the same redacted_artifact_id and does not create a duplicate artifact And the initial successful call returns 201 Created with Location header to the redacted artifact and a JSON body including redacted_artifact_id, status="completed", started_at, completed_at And if the artifact is already redacted for the same inputs but no Idempotency-Key is sent, the service deduplicates by deterministic content hash and returns 200 with the existing redacted_artifact_id And error responses include machine-readable codes and do not create partial artifacts
Field Masking Rules and Schema Compatibility
Given the active redaction policy is applied to an artifact When the redacted artifact is generated Then the redacted artifact validates against the same schema version as the master (no required fields removed) And all sensitive fields preserve their original field names and data types And driver identifiers are replaced by a stable, irreversible tenant-scoped token (SHA-256 base32, 10 chars) per original id And phone numbers are masked to E.164 with middle digits replaced (e.g., +1*******34), preserving string type And VINs retain the last 6 characters with the first 11 replaced by '*', length remains 17 And internal pricing numeric fields are set to 0 while preserving numeric type And a non-required metadata block redaction.policy_id, redaction.applied_at, redaction.masked_fields[] is added And validation tests confirm no breaking type changes for masked fields across all supported artifact types
Home Location Obfuscation to City/ZIP Granularity
Given an artifact contains a location classified as a driver home or inside a geofence tagged "Home" When the redaction policy is applied Then precise coordinates are replaced with the centroid coordinates of the detected ZIP (or city if ZIP unavailable), preserving numeric lat/lon types And street address lines are removed or replaced with "REDACTED" while city, state, and ZIP remain populated And reverse-geocode confidence and geocode_precision="zip" are recorded in optional metadata And automated tests verify that obfuscated coordinates are ≥1 km from the original and within the same city/ZIP And map rendering in the share-safe artifact displays city/ZIP but not street-level markers
Batch Redaction with Progress and Partial Failure Handling
Given a user selects multiple supported records (up to 500) and starts a batch redaction job When the job runs Then the UI shows a batch progress indicator with counts of completed, pending, failed, and deduplicated items updated at least every 2 seconds And each record is processed idempotently so duplicates are not created for repeated items And partial failures are reported per-item with retriable error codes, while successful items are not rolled back And p95 per-record processing time within the batch is ≤2 seconds for records ≤1 MB And job completion provides a summary and a downloadable CSV of failures with artifact_id and reason And API exposes GET /v1/redactions/jobs/{job_id} with status, totals, and links to created artifacts
Master Record Integrity and Share-Safe Linking
Given a redacted artifact is created via UI or API When comparing the master record before and after redaction Then no fields on the master are modified and the master revision/id remains unchanged And the redacted artifact has a new immutable id and a link back to its source master id And the UI presents a persistent link from the master to its latest share-safe copy and from the share-safe copy back to its master And access controls default to: master = private, share-safe = shareable per tenant policy And deleting a share-safe copy does not affect the master; deleting the master invalidates links from its share-safe copies
Policy-Driven Redaction Rules Engine
"As a compliance officer, I want configurable redaction rules so that shared data meets each recipient’s privacy requirements without losing necessary operational detail."
Description

Implement a centrally managed, versioned rules engine that maps PII/sensitive fields across data models and applies masking strategies: remove, hash, or partially mask (e.g., last 4 digits), with pattern support (regex) for phone/email/ID formats. Include geospatial anonymization options for home locations and sensitive stops: coordinate rounding, radius blur, and geohash precision controls; plus home-location inference based on frequent overnight stops. Support multiple policy templates (e.g., Auditor vs Carrier), sandbox testing on sample data with before/after diffs, and rollback to prior rule versions. Provide API/console to author, validate, and publish rules with guardrails to prevent empty or over-broad redactions.
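
Illustrative sketch (Python) of two of the geospatial anonymization methods: coordinate rounding to D decimals and seeded radius blur, reproducible for a given seed. The meters-per-degree conversion is a flat-earth approximation adequate at these distances.

    import math
    import random

    def round_coords(lat: float, lon: float, decimals: int) -> tuple:
        return round(lat, decimals), round(lon, decimals)

    def radius_blur(lat: float, lon: float, radius_m: float, seed: int) -> tuple:
        rng = random.Random(seed)                    # seeded for reproducibility
        angle = rng.uniform(0, 2 * math.pi)
        dist = radius_m * math.sqrt(rng.random())    # near-uniform over the disc
        dlat = (dist * math.cos(angle)) / 111_320.0  # ~meters per degree latitude
        dlon = (dist * math.sin(angle)) / (111_320.0 * math.cos(math.radians(lat)))
        return lat + dlat, lon + dlon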

Acceptance Criteria
Versioned Policy Authoring & Publishing Guardrails
Given a Policy Admin is in the Redaction Shield console with a valid session When they create a new policy draft with a unique name and semantic version (e.g., 1.2.0) and run Validate Then the system validates syntax, schema, and field references and returns status = Valid with any non-blocking warnings And publishing is blocked with an error if the rule set would redact 0 fields on a 1000-record representative sample or redact >50% of fields in any single model without an explicit override+justification And on Publish, the policy is assigned an immutable versionId, status = Active, and becomes read-only (except deprecate/rollback actions) And API endpoints to create/validate/publish return 400 for invalid schema, 404 for unknown fields, and 409 for name/version conflicts
Field Mapping & Masking Strategies Across Data Models
Given data models Driver, Vehicle, Trip, ServiceTicket, and Notes are registered in the schema registry When a policy maps specific fields to actions remove, hash(SHA-256 + system-managed salt), or mask(partial preserving last 4) Then those fields are transformed accordingly while preserving output data types and JSON structure And free-text fields (e.g., Notes.description) are scanned using configured regex patterns; matches are redacted using the specified action without altering non-matching text And example validations: Driver.phone -> ***-***-1234; Driver.externalId -> HASHED; ServiceTicket.internalPricing -> REMOVED And pattern library includes phone, email, and alphanumeric ID formats; tests on a labeled 500-record set achieve ≥95% precision and ≥95% recall; false positives are reported in validation output
Geospatial Anonymization Controls for Sensitive Locations
Given a policy specifies geospatial methods coordinate rounding, radius blur, and geohash precision When rounding is set to D decimals Then latitude/longitude are rounded to D decimals with maximum displacement consistent with decimal precision and no coordinates fall outside valid ranges
When radius blur is set to R meters with random seed S Then output points lie within R meters of the original, are reproducible with seed S, and have near-uniform angular distribution
When geohash precision is set to P Then original coordinates are replaced with a geohash of precision P; reverse decoding yields a cell containing the original point And policies may combine methods in order (e.g., blur then geohash) and the pipeline is recorded in policy metadata
Home-Location Inference from Overnight Stops
Given trip-stop history for a driver over at least 30 days When the system computes clusters of stops between 20:00–06:00 local time Then any cluster with ≥4 nights/week across ≥4 of the last 6 weeks and not within registered depot polygons is marked as a candidate home location And if a policy requires home-location redaction, the configured geospatial anonymization is applied to those locations in share-safe outputs And admins can opt a driver out of inference; opt-outs are honored and logged And validation on a labeled sample set shows ≤5% false positives; metrics are surfaced in the validation report
Audience-Specific Policy Templates (Auditor vs Carrier)
Given policy templates Auditor and Carrier exist When the Auditor template is applied Then Driver PII (name, phone, email, license, home location) is redacted; internal pricing is retained; VIN is partially masked (last 6 preserved)
When the Carrier template is applied Then Driver PII and home locations are redacted; internal pricing is removed; VIN is fully preserved; vehicle health codes are preserved And template application is logged with template name and resulting effective ruleset diff And switching templates produces deterministically different outputs on the same input dataset as per template definitions
Sandbox Testing with Before/After Diffs
Given a user selects a policy draft or version and a sandbox dataset up to 10,000 records When they run a Sandbox Test Then the system produces a non-mutating share-safe preview and a before/after diff highlighting changed fields and counts per field/model And no writes occur to the private master dataset; preview artifacts auto-expire after 7 days unless pinned And users can export the diff and preview as JSON and CSV; total runtime for 10k records is ≤60 seconds with progress feedback
Rollback to Prior Policy Version with Audit Log
Given an Active policy version vX.Y.Z exists and a prior version vX.Y.(Z-1) is available When a Policy Admin initiates Rollback to vX.Y.(Z-1) with a reason Then vX.Y.Z becomes Deprecated, vX.Y.(Z-1) becomes Active, and an audit record captures actor, timestamp, from->to versions, and reason And new redaction jobs reference the Active version at execution time; previously generated share-safe files retain their original version metadata And API returns 404 if the target version does not exist and 409 if rollback would violate dependency constraints
Master–Share-Safe Dual Storage
"As a security-conscious admin, I want master data separated from redacted derivatives so that sensitive information stays protected even when sharing."
Description

Create a derivation pipeline that generates immutable, linkable redacted artifacts from master records, storing them in a separate collection/bucket with strong referential links, checksums, and metadata (policy version, timestamp, actor). Enforce strict access boundaries so only authorized roles can fetch masters; redacted artifacts receive distinct URLs and short-lived, signed access tokens. Support cascading rebuilds when policies change (background job to re-derive artifacts) and revocation that invalidates previously issued links. Ensure storage quotas and lifecycle policies are respected and that derived artifacts are tagged for retention and purge.
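
Illustrative sketch (Python): a short-lived signed token for a single redacted artifact, an HMAC over the artifact ID plus expiry, verified per-artifact and read-only. The URL shape and function names are assumptions.

    import hashlib
    import hmac
    import time

    def sign_url(artifact_id: str, ttl_s: int, secret: bytes) -> str:
        expires = int(time.time()) + ttl_s
        msg = f"{artifact_id}:{expires}".encode()
        sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
        return f"/artifacts/{artifact_id}?expires={expires}&sig={sig}"

    def verify(artifact_id: str, expires: int, sig: str, secret: bytes) -> bool:
        if time.time() > expires:
            return False  # expired signature -> 401/403
        msg = f"{artifact_id}:{expires}".encode()
        expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, expected)

Note that stateless HMAC tokens cannot be recalled individually, so the 60-second revocation requirement below implies an additional server-side deny list (or key rotation) consulted on every request.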

Acceptance Criteria
Immutable Redacted Artifact Generation
Given a master record exists with ID <master_id> and a redaction policy version <policy_version> is active When a user with role Fleet Admin triggers "Create Share-Safe Artifact" for that master Then the system persists a new redacted artifact in the dedicated redacted bucket/collection And the artifact is write-once immutable (subsequent PUT/DELETE attempts are rejected with 403/409) And the artifact metadata includes master_id, artifact_id, policy_version, derived_at (UTC), actor_id, and sha256 checksum And the master record remains unchanged
Strong Referential Links and Manifest
Given a redacted artifact exists for master_id <master_id> When the API GET /masters/<master_id>/artifacts is called Then it returns the list of linked redacted artifacts with artifact_id, policy_version, checksum, created_at, and current_status And each returned artifact_id resolves via GET /artifacts/<artifact_id> to the same metadata and location And GET /artifacts/<artifact_id>/master returns the canonical master_id And if the linkage is broken, the API returns 404 and emits an integrity alert
Access Control Segregation (Masters vs Share-Safe)
Given role Driver, External Auditor, or Carrier attempts to fetch a master record When GET /masters/<master_id> is called without MasterReader privilege Then access is denied with 403 and an audit log entry is recorded And role Fleet Admin and Compliance Officer with MasterReader privilege can fetch masters successfully And redacted artifacts are not retrievable via direct bucket paths; access requires a signed URL scoped to artifact_id with least privilege And the signed URL is limited to read-only and a single artifact per token
Signed URL Expiry and Revocation
Given a signed URL for artifact_id <artifact_id> is issued with TTL=10 minutes When the URL is used within 10 minutes Then the download succeeds and is logged with actor, artifact_id, and IP
When the same URL is used after 10 minutes + 5 seconds Then the request is rejected with 401/403 due to expired signature
When an admin revokes access for artifact_id <artifact_id> Then all previously issued URLs for that artifact become invalid within 60 seconds And new URLs issued after revocation work normally
Policy Change Cascading Rebuild
Given redaction policy is updated from version X to version Y When the background re-derivation job is triggered Then all artifacts derived under version X are queued for rebuild once per master (idempotent) And rebuilt artifacts are stored as new immutable objects tagged policy_version=Y And prior artifacts are marked superseded and their signed URLs are revoked And job metrics expose total, completed, failed, and retried counts via /ops/derivations And failures are retried up to 3 times with exponential backoff and surface alerting on exhaustion
Quotas, Lifecycle Tags, and Retention Enforcement
Given storage quota for the redacted bucket is configured to Q bytes and retention policy is set to 90 days When creating redacted artifacts would exceed Q Then the operation fails fast with 507 Insufficient Storage and a clear error code quota_exceeded without partially written artifacts And each created artifact is tagged with retention_until = created_at + 90d and lifecycle_class = share_safe And the lifecycle policy deletes the artifact after retention_until + 1 day grace while leaving master intact And all creations, deletions, and denials are audit-logged with reason codes
Role-Based Access and Field-Level Permissions
"As an account owner, I want fine-grained permissions around PII visibility so that my team and partners only see what they’re authorized to see."
Description

Extend RBAC to define who can view masters, create redacted artifacts, and share links. Provide least-privilege presets (Owner, Manager, Auditor, Carrier) and enforce field-level visibility flags in UI and API to block PII exposure by role. Require admin approval or elevated permission to disable redaction or to include master data in any external share. Integrate with SSO/OAuth scopes for programmatic access and provide audit-safe permission change logs. Deny-by-default posture for external contexts.
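As a rough illustration of the deny-by-default, field-level masking described here (the role names match the presets above; the policy table, `PII_FIELDS`, and function are hypothetical):

```python
# Hypothetical role -> visible-PII policy; roles absent from the table see nothing.
PII_FIELDS = {"driver_id", "driver_phone", "home_location", "internal_pricing"}
PII_VISIBILITY = {
    "Owner": PII_FIELDS,   # full visibility, subject to approval for shares
    "Manager": set(),      # PII masked in UI and exports
    "Auditor": set(),
    "Carrier": set(),
}

def apply_field_visibility(record: dict, role: str) -> dict:
    """Mask unauthorized fields server-side so the client never receives
    values it is not permitted to display (deny-by-default for unknown roles)."""
    allowed = PII_VISIBILITY.get(role, set())
    return {
        field: ("••••" if field in PII_FIELDS and field not in allowed else value)
        for field, value in record.items()
    }
```

Masking on the server, not in the client, is what makes the dev-tools bypass scenario below hold: the value is simply never in the response.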

Acceptance Criteria
Least-Privilege Role Presets Enforce Access and Actions
Given a new tenant with default roles Owner, Manager, Auditor, Carrier When an Owner requests master vehicle and driver records in the UI or API Then access to master data is granted and redaction can be toggled on/off subject to approval policies Given a Manager requests master vehicle and driver records When viewing in the UI or exporting Then master data is visible, but sharing master data requires admin approval and is blocked until approved Given an Auditor signs in When browsing records Then only redacted artifacts are visible and all PII fields are masked; master views are blocked with 403 in API and masked in UI Given a Carrier follows a share link When accessing the content Then only redacted artifacts are accessible; navigation outside the shared artifact is blocked; API returns 403 for any master endpoints Given any user without an assigned role When attempting to access any resource Then access is denied by default (401/403)
Field-Level PII Visibility Flags Enforced in UI
Given a vehicle record containing fields driver_id, driver_phone, home_location, internal_pricing flagged as PII or sensitive When viewed by a Manager Then fields flagged as PII/sensitive according to Manager’s policy are masked with placeholder (e.g., ••••) and excluded from inline copy/export Given the same record viewed by an Auditor When opening details and exports Then all PII/sensitive fields are omitted from UI and downloadable files; tooltip indicates “Redacted by policy” Given the same record viewed by an Owner When toggling “Show master values” in UI Then the toggle is disabled until a valid approval exists; without approval, values remain masked and an inline notice explains the policy Given a user with no permission for field F When attempting to reveal F via UI dev tools or client-side parameters Then the backend response does not include F and UI continues to display a redacted placeholder
API Enforcement of Field-Level Visibility with OAuth Scopes
Given an API client authenticated with OAuth scopes: telematics.read, redactions.read (no pii.read) When calling GET /api/v1/vehicles/{id}?fields=driver_id,odometer Then response includes odometer and omits driver_id; HTTP 200 with driver_id absent from payload schema Given an API client with pii.read but no approval to include master in shares When calling GET /api/v1/exports?include=master Then HTTP 403 is returned with error code PERM_MASTER_SHARE_REQUIRES_APPROVAL and no data is streamed Given an unauthenticated or external-context request (share token only) When calling any /api/v1/masters/* endpoint Then HTTP 401/403 is returned; deny-by-default posture is enforced Given a client attempts to bypass masking via include=all or wildcard fields When scopes do not include pii.read Then PII-marked fields are still omitted from the response
Admin Approval Required to Disable Redaction or Share Master Data
Given a Manager initiates a share and selects “Include master data” When submitting the share request Then an Approval Request is created requiring an Owner/Admin approver; share status remains Pending and cannot be sent until approved Given an Owner/Admin receives the approval task When approving with reason and time-bound window (e.g., 24h) Then the system records approver, timestamp, reason, scope (datasets/fields), and expiry; the share can be sent within the window Given the approval window expires When the recipient accesses the link or the Manager attempts to resend Then master inclusion is revoked automatically and content reverts to redacted; access attempts to master return 403 Given an Owner/Admin rejects the request When the Manager retries the action Then the system blocks it and surfaces the rejection reason
External Share Links Are Redacted, Scoped, and Expiring by Default
Given a user generates a share link for an auditor When the link is created Then the link token is scoped to redacted:read only, includes no PII fields, and has an expiry configured by policy (default ≤ 30 days) Given a recipient appends query parameters (e.g., include=master, fields=driver_phone) When requesting the resource Then the server ignores unsupported parameters and continues to return only redacted fields; attempts to fetch master endpoints return 403 Given the share is revoked or expired When the link is used Then the server returns HTTP 410 (Gone) or 403 and no data payload Given rate limits and IP allowlisting are enabled for external links When abuse thresholds are exceeded or IP is not allowed Then access is throttled or denied and logged
Audit-Safe Logs for Permission and Redaction Changes
Given any change to role assignments, field visibility flags, approval grants/denials, or share scope When the change is committed Then an immutable audit log entry is written capturing actor, target, old_value, new_value, reason, timestamp (UTC), IP/agent, and SSO identity Given an auditor role requests logs for a date range When calling GET /api/v1/audit-logs?from=..&to=.. Then logs are returned with pagination, signed hash chain or tamper-evident checksum, and can be exported (CSV/JSON) without PII values Given an attempt is made to modify or delete an audit log entry When executed via UI or API Then the operation is denied and the attempt itself is logged as a security event
SSO/OAuth Role Mapping and Deny-by-Default for Unmapped Users
Given SSO is enabled with IdP group-to-role mappings (Owner, Manager, Auditor, Carrier) When a user authenticates with an IdP group mapped to Auditor Then the user session receives Auditor role only and master data remains inaccessible; PII fields are masked in UI/API Given a user authenticates with no mapped IdP group When accessing any FleetPulse resource Then no roles are assigned and access is denied by default (401/403); no data is returned Given IdP group mapping is updated by an Owner/Admin When the next user login occurs Then the new role takes effect immediately; changes are recorded in audit logs Given an OAuth token is minted for programmatic access When scopes requested exceed the user’s role capabilities Then the authorization server denies the extra scopes and issues a token limited to permissible scopes
Share-Safe Export and Delivery
"As an operations lead, I want to export and share redacted reports in standard formats so that auditors and carriers can consume them without extra tooling."
Description

Enable export of redacted data in PDF, CSV, XLSX, and JSON, with layout templates for common documents (DVIR, maintenance invoices, trip summaries). Include optional watermarking (e.g., “Redacted”), header/footer disclaimers, and a machine-readable redaction manifest detailing policy version, fields masked, and processing time. Provide expiring share links with password protection, download limits, and embeddable viewer components; capture recipient access events for visibility. Ensure exports remain structurally consistent for downstream ingestion.
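A minimal sketch of assembling the companion redaction manifest; the key names follow the acceptance criteria below, while the function signature and construction are illustrative:

```python
import hashlib
import uuid
from datetime import datetime

def build_redaction_manifest(artifact_bytes: bytes, fields_masked: list,
                             policy_version: str, schema_version: str,
                             started_at: datetime, completed_at: datetime) -> dict:
    """Produce the manifest.json delivered alongside (or embedded in) an export."""
    return {
        "exportId": str(uuid.uuid4()),
        "policyVersion": policy_version,
        "sourceSchemaVersion": schema_version,
        # e.g. [{"field": "driver_phone", "maskMethod": "mask"}]
        "fieldsMasked": fields_masked,
        "processingStartedAt": started_at.isoformat(),
        "processingCompletedAt": completed_at.isoformat(),
        "processingMillis": int((completed_at - started_at).total_seconds() * 1000),
        # SHA-256 checksum of the exported artifact, per the criteria below
        "checksum": hashlib.sha256(artifact_bytes).hexdigest(),
        "watermarkApplied": False,
    }
```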

Acceptance Criteria
One-Click Multi-Format Export With Templates
Given a user selects a redacted dataset and a layout template (DVIR, Maintenance Invoice, or Trip Summary) When they click Export and choose PDF, CSV, XLSX, or JSON Then the export is generated within 30 seconds and is downloadable without error And the exported content reflects the chosen template’s structure and fields And CSV/XLSX/JSON preserve stable headers, column order, and data types across repeated exports And PDF layout renders page headers, footers, and pagination correctly on A4 and Letter sizes And redacted values appear in place of sensitive fields according to the active policy
Optional Watermark and Disclaimers Applied
Given watermarking and header/footer disclaimers are toggled on with default text "Redacted" and a provided disclaimer up to 500 characters When the user exports to PDF or opens the embeddable viewer Then the watermark appears diagonally across each page at 20–30% opacity and does not obscure primary data And the header/footer disclaimer renders on every page without overlapping content And when watermarking/disclaimers are toggled off, neither appears in the export or viewer And CSV/XLSX/JSON include no visual watermark, but carry a metadata flag `watermarkApplied: true|false` in the manifest
Machine-Readable Redaction Manifest Included
Given a share-safe export is initiated When the export completes Then a machine-readable redaction manifest is produced containing at minimum: policyVersion, fieldsMasked (list with field names and mask method), processingStartedAt, processingCompletedAt, processingMillis, exportId, and sourceSchemaVersion And JSON exports include the manifest at $.meta.redactionManifest And CSV/XLSX/PDF exports include a companion manifest.json delivered alongside the file or embedded (XLSX sheet named "Manifest", PDF embedded file), retrievable via API and share link And the manifest validates against the published JSON Schema and lists a checksum (SHA-256) of the exported artifact
Expiring Password-Protected Share Links With Download Limits
Given a user creates a share link and sets an expiry (duration or date), a password, and a max download count When a recipient accesses the link Then the viewer prompts for the password and denies access after 5 consecutive failures for 15 minutes And each completed file download decrements the remaining count; when zero or expired, access returns 410 Gone via API and viewer shows "Link expired" And the owner can revoke the link at any time; revoked links are unusable within 60 seconds And passwords require a minimum of 10 characters and are stored salted and hashed (no plaintext) And each share link is a single URL per artifact version and cannot be guessed (>=128-bit entropy)
Recipient Access Events Logged and Reportable
Given a share link exists When recipients view or download the artifact or fail authentication Then the system logs events including: timestamp (UTC), eventType (view|download|password_failed|expired|revoked), linkId, exportId, recipient user-agent, IP (anonymized to /24 for IPv4, /48 for IPv6), geolocation city/region (if available), success flag, and bytes transferred And the owner can view events in the UI within 1 minute of occurrence and export them as CSV And events are retrievable via API with pagination and time-range filters And no PII beyond the above is stored; event data is retained per tenant retention settings
Embeddable Viewer Renders Share-Safe Artifacts
Given the embeddable viewer is integrated via iframe or JS component with a valid share link When loading a PDF or tabular export (CSV/XLSX/JSON rendered as table) Then the viewer renders within 3 seconds on a 10 Mbps connection, is responsive on 320–1920 px widths, and respects password gating And optional download controls can be disabled by the owner; when disabled, raw file download is not available from the viewer And watermarks and disclaimers are visible in the viewer exactly as in the exported PDF And the viewer prevents indexing by search engines via headers (X-Robots-Tag: noindex)
Structural Consistency for Downstream Ingestion
Given a redacted dataset is exported as CSV, XLSX, or JSON When downstream systems ingest the file Then all required columns/fields are present with stable names and order as defined by sourceSchemaVersion And masked fields retain their data types (e.g., strings remain strings, numbers remain numbers) with placeholder values (e.g., "REDACTED", 0, or null per policy) but never change the field’s type And row counts match the unredacted dataset unless the policy explicitly drops entire fields; no rows are dropped due to redaction And the export declares sourceSchemaVersion in file-level metadata and passes validation against the published schema for that version
Automated Redaction in Workflows and Integrations
"As a small fleet owner, I want redaction to be automatic in my sharing workflows so that I don’t accidentally disclose sensitive information."
Description

Default all external shares to use redacted variants unless explicitly overridden by authorized roles. Apply redaction automatically to scheduled reports, email shares, partner integrations, and public links. Expose API parameters (e.g., redacted=true) and webhooks that deliver only redacted payloads to external recipients. Provide organization-level settings to select the default policy per channel (Auditor, Carrier) and guardrails that block sending masters outside the tenant boundary.
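One reading of the channel policy and tenant-boundary guardrail is a pre-egress check that resolves the variant before anything is queued; the error codes reuse those named in the criteria below, and everything else (function names, policy shape) is an assumption:

```python
class EgressBlocked(Exception):
    """Raised before any data egress; the action is never queued or retried."""

def resolve_outbound_variant(requested: str, is_external: bool, channel: str,
                             channel_policy: dict, has_override: bool) -> str:
    """Pick the variant ('master' or 'redacted') for a send, deny-by-default.
    channel_policy maps channel -> 'redacted_only' | 'allow_override'."""
    if not is_external:
        return requested                # intra-tenant sends are unrestricted here
    if requested != "master":
        return "redacted"               # external default: redacted variant
    if channel_policy.get(channel, "redacted_only") == "redacted_only":
        raise EgressBlocked("OUTBOUND_MASTER_BLOCKED")       # hard boundary
    if not has_override:
        raise EgressBlocked("REDACTION_OVERRIDE_REQUIRED")   # 403
    return "master"                     # authorized, audited override
```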

Acceptance Criteria
Default Redaction on External Email Share
Given a user shares a report or record via email to a recipient outside the tenant boundary and no override is used When the share is sent Then the attachment or link is the redacted variant by default And fields designated sensitive by the org redaction template (e.g., driver_id, phone_number, home_location, internal_pricing) are masked or removed And the master (unredacted) file is not transmitted outside the tenant
Scheduled Reports Use Redacted Variant by Default
Given a scheduled report job targets any external destination (email or external storage) and no override permission is applied When the schedule executes Then the delivered file is the redacted variant And sensitive fields are masked/removed per the active redaction template And if the job is configured to deliver a master file externally, the run fails before transmission with error code OUTBOUND_MASTER_BLOCKED and no data leaves the system
Public Link Enforces Redacted View
Given a user generates a public link for a report or record When the link is accessed without tenant authentication Then only the redacted variant is accessible And requests for the master view via query params or headers are rejected with 403 Forbidden And the response metadata indicates redaction_level=redacted
Partner Integration Webhook Delivers Redacted Payload
Given an outbound webhook is configured to post to a partner endpoint not owned by the tenant When a qualifying event fires Then the HTTP payload contains only redacted data per the org template And sensitive fields are omitted or tokenized; raw values are not present And the request includes header X-Redacted: true And attempts to configure a webhook to send master payload externally are blocked with validation error
API Redaction Parameter and Permission Enforcement
Given an API client calls an export or share endpoint with redacted=true When the request is authorized Then the response body contains only redacted fields per the org template Given the same endpoint is called with redacted=false for an external share or export When the caller lacks the Redaction Override permission Then the request is rejected with 403 Forbidden and error code REDACTION_OVERRIDE_REQUIRED And no unredacted data is returned
Organization Policy Per Channel with Overrides
Given an Org Admin sets the default redaction policy per channel (Auditor, Carrier, Public Link, Email, Webhook) When the policy is saved Then new outbound actions on that channel default to the configured redaction setting And if a channel is set to Redacted Only, users cannot select master for that channel even with override And if a channel is set to Allow Override, only users with Redaction Override permission can change an individual action to master; all others see the option disabled
Guardrail Blocks Master Outside Tenant Boundary
Given any flow attempts to transmit a master (unredacted) file or payload to a recipient outside the tenant boundary (email, public link, webhook, partner connector) When the action is initiated Then the system blocks the transmission before any data egress And the UI shows a blocking error and the API returns 403 with error code OUTBOUND_MASTER_BLOCKED And the action is neither queued nor retried automatically
Redaction Audit Trail and Compliance Evidence
"As a compliance manager, I want a verifiable audit trail of redaction activity so that I can prove privacy compliance during audits."
Description

Record a tamper-evident log of every redaction event: actor, source records, policy version, fields affected, output checksum, recipients, IP, and retention policy applied. Provide dashboards and exportable reports that demonstrate what was shared, when, and under which policy to support audits and incident response. Trigger alerts for redaction failures or policy mismatches and surface remediation guidance. Store logs in append-only or hash-chained storage with configurable retention aligned to compliance requirements.
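A hash-chained, append-only log can be sketched in a few lines: each entry commits to its predecessor, so any edit, deletion, or gap is detectable when the chain is recomputed. The class and in-memory storage are illustrative only:

```python
import hashlib
import json

GENESIS = "0" * 64

class RedactionAuditLog:
    """Append-only log where entry_hash = SHA-256(prev_hash + canonical event)."""

    def __init__(self):
        self._entries = []

    def append(self, event: dict) -> dict:
        prev = self._entries[-1]["entry_hash"] if self._entries else GENESIS
        body = json.dumps(event, sort_keys=True)       # canonical serialization
        entry = {
            "event": event,
            "prev_hash": prev,
            "entry_hash": hashlib.sha256((prev + body).encode()).hexdigest(),
        }
        self._entries.append(entry)
        return entry

    def verify_chain(self) -> int:
        """Return -1 if intact, else the index of the first bad entry."""
        prev = GENESIS
        for i, e in enumerate(self._entries):
            body = json.dumps(e["event"], sort_keys=True)
            expected = hashlib.sha256((prev + body).encode()).hexdigest()
            if e["prev_hash"] != prev or e["entry_hash"] != expected:
                return i                                # first bad index
            prev = e["entry_hash"]
        return -1
```

The scheduled integrity job in the criteria below would run verify_chain end-to-end and alert on any non-negative result, reporting that index as the first break.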

Acceptance Criteria
Tamper-Evident Redaction Event Logging
Given a user triggers a one-click redaction on selected records under policy P.vX When the redaction completes successfully Then the system appends a log entry capturing: timestamp (UTC ISO8601), actor user_id and role, request_id, source record IDs, policy_id and version, fields redacted (names and counts), output artifact checksum (SHA-256), recipients (IDs/emails), requester IP, retention_policy_id and expiry_at And the log entry includes prev_hash and entry_hash computed over all captured fields And the operation returns 201 Created with the log entry ID
Append-Only and Hash-Chain Integrity Verification
Given existing log entries 1..N with a valid hash chain When any client attempts to update or delete an existing log entry Then the request is rejected with 405 Method Not Allowed or 409 Conflict and a security audit event is recorded And only create/append operations are permitted to succeed with 201 Created And a scheduled integrity job recomputes hashes end-to-end hourly and raises a Critical alert within 5 minutes if any break or missing entry is detected, including the first bad index and expected vs actual hash And the Verify Chain API returns chain_ok=true for an untampered log
Audit Dashboard — Evidence of Sharing and Policy
Given an auditor selects a date range and optional filters (actor, recipient, vehicle/driver ID, policy, status) When the dashboard loads for the selection Then it lists all share events with columns: timestamp, actor, recipients, policy_id/version, source_records_count, fields_redacted_count, output_checksum, status, retention_policy And clicking any row opens a detail panel with all logged fields and remediation guidance if status != success And for <=30 days of data and <=10k events, initial load completes within 2 seconds at P95
Exportable Compliance Reports (CSV/PDF)
Given an auditor requests an export for a defined date range and filters When the export is generated Then CSV and PDF files include all log fields plus report metadata (generated_at UTC, requested_by, filter summary, report_checksum SHA-256) And exports for <=100k events complete within 60 seconds at P95, streamed if larger And the downloadable artifact checksum matches report_checksum And exports exclude data outside retention policies and distinctly mark redaction events that have a failure status
Real-time Alerts for Redaction Failures and Policy Mismatches
Given a redaction job fails or applies a policy that does not match the requested policy/version When the failure or mismatch is detected Then a High-severity alert is created within 60 seconds containing: job_id, actor, policy_requested vs policy_applied, error details, impacted records count, remediation steps link And notifications are sent to configured channels (email, Slack/Webhook) with deduplication for identical issues within 15 minutes And acknowledgment and resolution status are tracked and visible in the dashboard
Retention Policy Enforcement and Legal Hold
Given retention_policy_id and expiry_at are recorded per log entry When the entry reaches expiry_at and no legal hold exists Then the entry becomes non-viewable to standard users and is purged within 24 hours by a retention job, emitting a purge audit event with reason=retention_expiry and a tombstone record containing entry_hash and purge_at And if a legal hold is applied before expiry, the entry remains accessible to authorized roles and is excluded from purge until the hold is released And attempts to manually delete entries prior to expiry or during legal hold are blocked and an alert is recorded

Export Seal

Generates auditor‑ready PDFs with watermarking, page numbers, cryptographic hash, and a clickable evidence index. Includes a QR code to an online verification page, boosting credibility and accelerating approvals.

Requirements

Auditor-Ready PDF Generator
"As a fleet manager, I want to export an auditor-ready PDF that consolidates all relevant fleet evidence into a standard layout so that I can quickly provide compliant documentation without manual assembly."
Description

Generate a single, self-contained PDF package for selected vehicles and date ranges that compiles inspections, OBD-II events, maintenance logs, invoices, and photos into a standardized, auditor-ready layout. Include a cover sheet with company/fleet metadata, reporting period, generation timestamp, and summary metrics; standardized section headers; and an appendix area for raw evidence. Ensure consistent typography, margins, and accessibility (bookmarks, tagged reading order), offline readability, and deterministic rendering across browsers. Integrate with FleetPulse data services, respecting tenant boundaries, time zones, and unit preferences. Support batched exports (up to 500 records), graceful degradation for missing artifacts, and retryable background jobs with progress updates and email/webhook completion notifications. Target sub-60s generation for typical fleets and enforce resource limits to protect system stability.
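The idempotency and retry behavior called for above might look like the following sketch, where jobs are keyed on a hash of their canonicalized parameters; the in-memory store and field names are hypothetical:

```python
import hashlib
import json
import time

JOBS: dict = {}  # idempotency_key -> job record; stand-in for a durable job store

def submit_export_job(params: dict, window_hours: int = 24) -> dict:
    """Identical parameter submissions inside the window return the same job
    (and hence the same artifact) instead of duplicating work."""
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    job = JOBS.get(key)
    if job and time.time() - job["submitted_at"] < window_hours * 3600:
        return job
    job = {"idempotency_key": key, "status": "Queued", "attempt": 0,
           "submitted_at": time.time(), "params": params}
    JOBS[key] = job
    return job

def next_retry_delay(attempt: int, base_seconds: float = 5.0) -> float:
    """Exponential backoff for transient failures (retried up to 3 times)."""
    return base_seconds * (2 ** attempt)
```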

Acceptance Criteria
Cover Sheet and Standardized Layout Generation
Given an authenticated FleetPulse admin selects 1–10 vehicles and a date range within a single tenant When they request an Auditor-Ready PDF export Then the PDF includes a cover sheet with company/fleet name, reporting period (in user’s time zone), ISO 8601 generation timestamp with offset, and summary counts of inspections, OBD-II events, maintenance logs, invoices, and photos And every page (except cover) displays page numbers as “Page X of Y” in the footer And a semi-transparent “Export Seal” watermark is present on all content pages And section order is exactly: Cover, Summary, Inspections, OBD-II Events, Maintenance, Invoices, Photos, Appendix, with each section starting on a new page And one embedded font family is used consistently for body text and headings, and uniform page margins are applied across all pages And the PDF passes an automated preflight check that verifies embedded fonts and consistent margins across pages
Evidence Compilation and Clickable Index
Given selected vehicles and a date range yield inspections, OBD-II events, maintenance logs, invoices, and photos When the export is generated Then each evidence item is included once in its proper section with vehicle identifier, timestamp, and source reference And a clickable Evidence Index appears after the Summary, listing each item with a page link anchor that navigates to the first page of that item And all intra-document links (index to section, section to appendix references) are functional in Adobe Acrobat, Chrome PDF viewer, and Apple Preview And if a referenced artifact is missing or unreadable, a placeholder entry with reason (“missing,” “corrupt,” or “permission denied”) is inserted without failing the export, and the item is listed in an Appendix “Exceptions” table And photos render with orientation respected (EXIF-aware) and include captions with vehicle, timestamp, and source
Accessibility and Offline Readability
Given accessibility requirements When the export is opened in a PDF reader without network access Then the document is fully readable offline with all fonts and images embedded and no external resource calls And the PDF is tagged with a logical reading order; section headings are tagged as H1/H2; tables are tagged with header cells; decorative images are marked as artifacts And a bookmark outline mirrors the section hierarchy and allows navigation to each section And images (non-decorative) include alt text derived from captions (vehicle, timestamp, type) And the file passes an automated accessibility check (e.g., PAC or Acrobat Preflight) with no errors in tagging structure, reading order, or missing fonts And the QR code is present and scannable at 300+ DPI, but the document does not rely on it for core content
Tenant Boundaries, Time Zones, and Unit Preferences
Given a user belonging to Tenant A selects vehicles and a date range When the export is generated Then only data scoped to Tenant A is present; any cross-tenant identifiers are excluded, and access to other tenants’ data is denied (logged with audit event) And all timestamps render in the user’s selected time zone with explicit UTC offset (e.g., 2025-09-12T14:03:22-05:00) And distance, temperature, pressure, and currency values render using the user’s unit preferences and locale (e.g., mi/km, °F/°C, psi/kPa, currency symbol and thousands separator) And vehicle sections are sorted alphanumerically by vehicle name, and items within each section are sorted chronologically ascending by timestamp
Batch Export, Background Jobs, and Notifications
Given a selection that yields up to 500 evidence records across vehicles When the estimated generation time exceeds the synchronous threshold Then the request is enqueued as a background job and the UI shows status transitions: Queued → Processing → Finalizing → Completed (or Failed) And progress updates are emitted at least every 5 seconds via websocket or polling with percentage complete or item counts And upon completion or failure, an email and a webhook are sent containing jobId, parameters summary, status, duration, file size (on success), and a time-limited download URL And background jobs are retryable up to 3 times for transient errors with exponential backoff, and idempotency ensures identical parameter submissions within 24 hours return the same artifact without duplicating work And cancellation requests transition the job to Canceled and no partial file is made available for download
Performance Targets and Resource Limits
Given a typical fleet workload defined as ≤25 vehicles over a ≤90-day period with ≤150 inspections, ≤500 OBD-II events, ≤200 maintenance logs, ≤150 invoices, and ≤300 photos total When the export runs under normal load Then 95% of such jobs complete PDF generation within 60 seconds as measured by server-side job duration metrics And per-job resource usage remains within configured CPU/memory limits without impacting SLOs for other tenants (validated via monitoring dashboards) And if resource limits are approached, the exporter applies graceful degradation (e.g., downscales images in appendix, omits thumbnails) and inserts an Appendix note listing applied degradations, while still preserving all textual data and index links And jobs exceeding hard caps fail fast with a clear error status and guidance to narrow the scope
Deterministic Rendering and Cryptographic Verification
Given identical inputs and a frozen generation timestamp When the export is generated using rendering pipelines invoked from Chrome, Firefox, and Safari Then the resulting PDFs have identical page counts, page breaks, and line wrapping; rasterized page images differ by ≤1% pixels due to antialiasing tolerance And a SHA-256 hash of the final file is computed, displayed on the cover, and embedded in the PDF’s XMP metadata And the cover includes a QR code encoding a verification URL that contains the hash; visiting the URL returns Valid when the uploaded file’s hash matches a server-stored record and Invalid otherwise And the Evidence Index entries, bookmarks, and section anchors remain functional across the tested viewers
Dynamic Watermark & Pagination
"As a compliance officer, I want clear watermarks and page numbers on every page so that recipients can trust the document’s provenance and easily reference specific sections during review."
Description

Apply configurable watermarks (e.g., “Auditor Copy,” environment, fleet name) and page numbering (Page X of Y) to every page without obscuring content. Watermark opacity and placement must adapt to portrait/landscape pages and image-heavy appendices. Include generated-on timestamp and UTC offset in footers. Provide admin-level controls to toggle watermark text, opacity, and inclusion per export type while maintaining default secure settings. Ensure watermarks and pagination survive print, screen, and PDF/A compatibility and do not break internal links or bookmarks.

Acceptance Criteria
Admin Configures Default and Per-Export Watermark Settings
Given I am an admin user with access to Export settings And default security presets are applied When I open the Watermark settings Then watermarking is enabled by default for all export types And the default watermark text includes the literals "Auditor Copy", the {environment}, and the {fleetName} And the default opacity is 15% And non-admin users cannot view or modify watermark settings When I set a per-export override to disable watermarking for the "Internal Draft" export type And I generate an "Internal Draft" export and an "Auditor Package" export Then the "Internal Draft" export contains no watermark And the "Auditor Package" export contains the configured watermark
Tokenized Watermark Text Renders Correctly
Given a fleet named "Acme Logistics" in the "Production" environment And the watermark text template is "Auditor Copy — {environment} — {fleetName}" When I generate any export Then the watermark text on every page is "Auditor Copy — Production — Acme Logistics" And no unresolved tokens or braces appear in the rendered watermark text And the watermark text appears exactly once per page
Adaptive Watermark Placement and Opacity Across Orientations and Image-Heavy Pages
Given a document with mixed portrait and landscape pages and image-heavy appendices And watermark opacity is configured to 15% and placement is diagonal across the page When I generate the export Then the watermark appears on every page, scaled to the page size and oriented consistently for portrait and landscape pages And the watermark is rendered beneath all text and table vector content And text remains fully selectable and copyable without missing characters And on image-heavy pages the watermark remains visible and legible at 100% zoom and in printed output without obscuring or distorting charts, tables, or captions
Consistent Page Numbering 'Page X of Y'
Given any export with N pages When the export is generated Then each page footer displays "Page X of Y" with X as the 1-based page index and Y = N And X increments by 1 on each subsequent page with no resets across sections or appendices And page numbers are present and readable in screen view and printed output
Footer Timestamp with UTC Offset
Given the account timezone is America/Chicago And the export is initiated at 2025-09-12 14:30 local time When I generate the export Then the footer on every page displays the generated-on timestamp in ISO 8601 format "YYYY-MM-DDThh:mm:ss±hh:mm" And the UTC offset matches the account timezone at that instant And the timestamp value is identical on all pages and matches the export job completion time within ±60 seconds
PDF/A Compatibility and Persistence Across Viewers and Print
Given PDF/A-2b is the required archival standard When I validate the exported PDF with an industry-standard preflight tool Then the document passes PDF/A-2b compliance with zero errors or warnings And the watermark, page numbers, and footer timestamp are present and visually consistent in Adobe Acrobat, Apple Preview, and Chrome And when printed to paper and via a generic PDF printer, the watermark, page numbers, and footer timestamp remain present
Internal Links and Bookmarks Unaffected
Given the export contains a clickable evidence index, table of contents, internal section links, and bookmarks When the export is generated with watermarking and pagination enabled Then all internal links navigate to the correct destinations And all bookmarks expand and jump to the correct pages And no link targets or bookmark references are broken, duplicated, or misnumbered as a result of watermarking or pagination
Document Integrity Hash & Manifest
"As an auditor, I want a verifiable cryptographic hash and metadata manifest for the exported document so that I can confirm the file has not been altered after generation."
Description

Compute a SHA-256 hash of the final PDF binary and embed it in the PDF metadata and footer; because a file cannot contain its own digest, hashing treats the embedded digest fields as fixed-length placeholders, in the same way PDF digital signatures exclude their own byte range. Emit a signed JSON manifest containing document ID, generation timestamp, exporter identity, hash, page count, and dataset parameters (vehicle IDs, date range, filters). Store the manifest immutably with a versioned record and expose it through an internal API for verification services. Ensure hashing occurs post-render so it covers the exact bytes delivered, and include safeguards against re-generation collisions. Provide optional organization-level signing using a managed key to strengthen non-repudiation.
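A sketch of sealing a rendered document, with HMAC standing in for the managed signing key (a real deployment would hold an asymmetric key in a KMS); the manifest keys follow the criteria below:

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

MANAGED_KEY = b"kms-held-key-material"  # stand-in for a managed signing key

def seal_document(pdf_bytes: bytes, document_id: str, exporter: str,
                  page_count: int, dataset_params: dict) -> dict:
    """Hash the exact post-render bytes, then emit a signed manifest whose
    signature covers the canonical payload."""
    manifest = {
        "documentId": document_id,
        "generationTimestamp": datetime.now(timezone.utc).isoformat(),
        "exporterIdentity": exporter,
        "hash": hashlib.sha256(pdf_bytes).hexdigest(),
        "pageCount": page_count,
        "datasetParameters": dataset_params,
    }
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    manifest["platformSignature"] = hmac.new(
        MANAGED_KEY, canonical.encode(), hashlib.sha256).hexdigest()
    return manifest
```

An optional organization signature would be a second signature over the same canonical payload, tagged with the org key identifier (kid), as the criteria below describe.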

Acceptance Criteria
Post-Render SHA-256 Hash Embedded in PDF
Given a PDF export has been rendered and stored for download When the exact served bytes are hashed with SHA-256 (with the embedded digest fields treated as fixed-length placeholders) Then the digest equals the PDF metadata value at key "Integrity-Hash-SHA256" And the footer of every page contains the same 64-character hex digest prefixed by "Hash:" And the digest equals the "hash" field in the associated manifest And downloading the same document ID multiple times yields identical bytes and digest
Signed JSON Manifest with Required Fields
Given a PDF export completes When the system emits the JSON manifest for that document Then the manifest contains: documentId (UUID), generationTimestamp (RFC3339 UTC), exporterIdentity (string), hash (64-char hex), pageCount (integer >= 1), datasetParameters.vehicleIds (array of IDs), datasetParameters.dateRange.start (RFC3339 UTC), datasetParameters.dateRange.end (RFC3339 UTC), datasetParameters.filters (object) And the manifest is signed using a platform-managed key and the signature verifies against the registered public key And the manifest "hash" value matches the SHA-256 of the final served PDF bytes And pageCount equals the actual number of pages in the PDF
Immutable Versioned Manifest Storage
Given a manifest has been stored under a documentId and version When a write attempt targets an existing manifest version Then the system rejects the change and no data is altered And a subsequent export for the same dataset parameters creates a new immutable version (version = previous + 1) without overwriting prior versions And GET by documentId returns the latest version by default and GET with an explicit version returns that exact version
Internal Verification API Provides Manifest
Given an internal client requests a manifest When it calls GET /internal/export-seal/manifests/{documentId}?version={optional} Then the API responds 200 application/json with the manifest and signatures for valid IDs And unknown documentId returns 404 Not Found And unauthorized callers are rejected with 401/403 according to auth policy And the response includes ETag of the manifest payload for integrity checks
Re-Generation Collision Safeguards
Given an export request is submitted with identical dataset parameters as a previous export When the system processes the request Then a new documentId is generated and associated with a new manifest version; the prior document and manifest remain retrievable And attempts to force reuse of an existing documentId are rejected with a conflict and do not overwrite existing artifacts And concurrent generation requests for the same dataset do not produce duplicate documentIds
Organization-Level Manifest Signing
Given an organization has a managed signing key configured When a manifest is emitted for that organization’s export Then the manifest includes an additional organization signature that verifies with the organization public key and includes a key identifier (kid) And if no org key is configured, only the platform signature is present and valid And all signatures cover the same canonical payload that includes documentId, generationTimestamp, exporterIdentity, hash, pageCount, and datasetParameters
QR Code Verification & Landing Page
"As an external auditor, I want to scan a QR code on the PDF and view a verification page so that I can instantly confirm the document’s authenticity without needing system access."
Description

Generate a unique, expiring verification URL bound to the document ID and hash, render it as a QR code on the cover and footer, and host a public verification landing page that displays document metadata, hash value, and an integrity pass/fail result. The page must validate that the provided file’s hash matches the stored manifest and indicate expiration or revocation status. Support optional access controls (tokenized link, expiration window, and IP throttling) while allowing auditors to verify without a FleetPulse account. Log all verification events for compliance analytics.
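Token minting and upload verification could be sketched as follows; the in-memory store, URL shape, and status strings are illustrative, and uploaded bytes are hashed and discarded rather than persisted:

```python
import hashlib
import secrets
import time

VERIFICATIONS: dict = {}  # token -> record; stand-in for a database

def create_verification_url(document_id: str, sha256_hex: str,
                            ttl_days: int = 90) -> str:
    """Mint a high-entropy token bound to one document ID and hash."""
    token = secrets.token_urlsafe(32)  # 256 bits of entropy
    VERIFICATIONS[token] = {
        "document_id": document_id,
        "hash": sha256_hex,
        "expires_at": time.time() + ttl_days * 86_400,
        "revoked": False,
    }
    return f"https://verify.example.invalid/v/{token}"

def verify_upload(token: str, uploaded_bytes: bytes) -> str:
    """Return the integrity result for a file supplied by the auditor."""
    record = VERIFICATIONS.get(token)
    if record is None:
        return "Invalid"   # 401 Invalid token
    if record["revoked"]:
        return "Revoked"   # 410, integrity checks disabled
    if time.time() > record["expires_at"]:
        return "Expired"   # 410 Link expired
    computed = hashlib.sha256(uploaded_bytes).hexdigest()
    return "Pass" if computed == record["hash"] else "Fail"
```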

Acceptance Criteria
Generate Unique, Expiring Verification URL and QR Placement
Given a finalized Export Seal PDF exists with document ID <doc_id> and SHA-256 hash <hash> When the system generates the verification URL Then the URL contains a cryptographically random token uniquely bound to <doc_id> and <hash> And the token expiration window is set per configuration (default 90 days) and stored in the manifest And the URL is HTTPS-only and includes no PII And a QR code encoding the URL is rendered on the PDF cover and on every page footer at ≥300 DPI And scanning the QR from any page opens the verification URL And requests after expiration return HTTP 410 and show “Link expired” with the expiration timestamp
Public Verification Landing Page Displays Metadata and Integrity Result
Given a user opens a valid, unexpired verification URL without being logged in When the landing page loads Then the page is accessible without a FleetPulse account And it displays document metadata: document name, document ID, export timestamp (UTC), organization, and hash algorithm (SHA-256) And it displays the stored hash value exactly as recorded And it shows link status (Active/Expired/Revoked) with timestamp And it displays an integrity status badge (Unknown until a file is provided; Pass/Fail after validation) And it provides a downloadable manifest JSON
File Hash Validation via Upload on Verification Page
Given a user has the exported PDF file corresponding to the verification URL When the user uploads the file on the verification page Then the system computes the file’s SHA-256 hash in the browser when supported, otherwise server-side And if the computed hash equals the stored hash, the page shows Integrity: Pass with the computed hash and timestamp And if the computed hash differs, the page shows Integrity: Fail with mismatch details and support link And the uploaded file is not persisted and is discarded immediately after hashing And no more than one integrity result is shown at a time and is reset when a new file is provided
Optional Access Controls: Tokenized Link and Expiration Without Login
Given optional access controls are enabled for the document’s verification URL When an unauthenticated user opens the link Then the system validates the embedded token and expiration without requiring login And invalid or tampered tokens return HTTP 401 with “Invalid token” And expired tokens return HTTP 410 with “Link expired” And when access controls are disabled, any user with the URL can access the page
Revocation Handling and Status Indication
Given an authorized admin revokes a document’s verification URL When any user accesses the revoked URL or scans its QR code Then the landing page shows Status: Revoked with revocation timestamp and optional reason And integrity checks are disabled and display “Verification unavailable for revoked document” And the HTTP response status is 410 for subsequent API calls related to this URL
Verification Event Logging for Compliance Analytics
Given any verification interaction occurs (page view, file hash check, expired/invalid access, throttled request) When the event is processed Then an immutable log entry is created with fields: timestamp (UTC), document ID, token ID, event type, outcome, requester IP (hashed), and user agent And logs are retained per policy and available in compliance analytics with filters by date range, outcome, and document ID And no uploaded file contents are stored in logs
IP Throttling and Rate Limiting on Verification Endpoints
Given repeated requests from the same IP address to verification endpoints When the request rate exceeds the default threshold of 10 requests per minute per IP Then the system returns HTTP 429 with a Retry-After header and does not perform integrity computation And throttled attempts are logged with outcome Throttled And legitimate short bursts (e.g., 5 requests within a few seconds) are allowed without throttling as long as the per-minute limit is not exceeded And the threshold is configurable per environment
Clickable Evidence Index & Appendices
"As a maintenance supervisor, I want a clickable evidence index that jumps to each artifact so that I can quickly locate supporting materials during audits and dispute resolution."
Description

Build a clickable evidence index on the cover or early pages that lists all included sections and artifacts with page numbers and internal links (bookmarks/anchors) to each section and appendix item (e.g., fault snapshots, inspection photos, work orders). Ensure index entries reflect applied filters and sort order, and provide ‘Back to Index’ links in appendix sections for smooth navigation. Automatically include thumbnails and captions for image artifacts when applicable, and fall back gracefully for unsupported formats by linking to a descriptive stub. Preserve link functionality across viewers and print-to-PDF workflows.

Acceptance Criteria
Evidence Index shows all included sections and artifacts with correct links and page numbers
Given a report with included sections and artifacts When I export the PDF with Export Seal Then an Evidence Index is present within the first 3 pages And the index contains an entry for every included section and artifact and no others And each entry displays the correct starting page number of its target And clicking any entry navigates to the exact target header or artifact start And no index entry results in a broken or incorrect link
Index reflects applied filters and sort order
Given filters and a sort order are applied in the UI (e.g., vehicle, date range, artifact type, sort by newest) When I export the PDF Then only items matching the filters appear in both the index and the body And the order of index entries matches the selected sort order And the order in the appendix/body matches the index order And page numbers correspond to that rendered order without gaps from filtered-out items
Back to Index links in appendix navigate to originating index entry
Given any appendix artifact or section page When I click a link labeled "Back to Index" Then the PDF navigates to the Evidence Index at the originating entry’s location And the originating entry is visible without additional scrolling And each appendix artifact/section includes a Back to Index link at the top of its first page and at the end of the section And no Back to Index link is broken
Image artifacts render thumbnails and captions automatically
Given an included image artifact (JPEG or PNG) When the PDF is generated Then the appendix shows a thumbnail with preserved aspect ratio sized between 1.5in and 3.0in in width And a caption is rendered beneath the thumbnail using the artifact title; if missing, fallback to filename and capture date/time And thumbnails are embedded at 150 DPI or higher without noticeable pixelation And the index entry for the image includes the same caption text
Unsupported or non-renderable artifacts use descriptive stub pages
Given an included artifact with an unsupported or failed-to-render format (e.g., .mp4, proprietary CAD) When the PDF is generated Then the index entry links to a stub page within the PDF (not external) describing the artifact And the stub page includes artifact name, file type, size (if available), created/captured date, and reason: "Unsupported format — not embedded" And the stub page includes a Back to Index link And no placeholder produces a broken link or empty page
Links and anchors function across common viewers and print-to-PDF workflows
Given the exported PDF When opened in Adobe Acrobat Reader (Windows), Apple Preview (macOS), Chrome and Edge built-in viewers, and iOS Files Then all index entries, Back to Index links, and bookmarks navigate to correct destinations And when the PDF is printed to a new PDF via Windows "Microsoft Print to PDF" and macOS "Save as PDF" Then the resulting PDF retains working internal links for index entries, Back to Index links, and bookmarks
PDF bookmarks mirror index hierarchy with stable anchors
Given the exported PDF When viewing the document outline/bookmarks panel Then the bookmarks mirror the Evidence Index hierarchy down to artifact level And selecting any bookmark navigates to the correct target And all internal anchor IDs are unique and consistent on repeated exports with the same filters and sort order
Export Permissions & Audit Trail
"As an account admin, I want exports to be permissioned and fully logged so that our organization remains compliant and can trace any shared documents back to their origin."
Description

Restrict export capabilities to authorized roles and fleets, requiring MFA if enforced by the org policy. Record a comprehensive audit trail for each export (who, when, what scope, hash, delivery channel), and expose these events in admin reports and via webhook. Enforce per-tenant rate limits and concurrency caps to prevent abuse. Allow admins to revoke verification links and set retention policies for manifests and generated files. Ensure all personally identifiable information respects existing redaction settings and data residency controls.
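The signed webhook deliveries described in the criteria below might be shaped like this; the secret, header names, and idempotency-key derivation are assumptions rather than a documented contract:

```python
import hashlib
import hmac
import json
import uuid
from typing import Optional

WEBHOOK_SECRET = b"per-tenant-signing-secret"  # hypothetical

def build_webhook_delivery(event_type: str, payload: dict,
                           event_id: Optional[str] = None) -> dict:
    """Shape an export.succeeded / export.failed delivery. event_id is minted
    once per event and reused across retries so consumers can deduplicate."""
    event_id = event_id or str(uuid.uuid4())
    body = json.dumps({"event": event_type, "event_id": event_id, **payload},
                      sort_keys=True)
    return {
        "headers": {
            "X-Event-Id": event_id,
            # Stable across retries: derived from the canonical body.
            "X-Idempotency-Key": hashlib.sha256(body.encode()).hexdigest(),
            "X-Signature": hmac.new(WEBHOOK_SECRET, body.encode(),
                                    hashlib.sha256).hexdigest(),
        },
        "body": body,
    }
```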

Acceptance Criteria
Role- and Fleet-Scoped Export Authorization
Given a tenant with RBAC policy granting export:generate scoped by fleet And a user assigned export:generate for Fleet A but not Fleet B When the user requests an export for Fleet A Then the API returns 200 and generates the export only for Fleet A And no data from Fleet B is included Given the same user When the user requests an export that includes Fleet B or mixed fleets (A+B) Then the API returns 403 with error code access_denied and no file is generated Given a user without export:generate on any fleet When the user requests any export Then the API returns 403 and no file is generated
MFA Enforcement on Export
Given org policy mfa.export.required=true and validity_window_minutes=T And the user’s session has no MFA verified within T minutes When the user requests an export Then the API responds 401 with error code mfa_required and provides a challenge link Given the same policy And the user completes MFA successfully When the user retries the export within T minutes Then the API returns 200 and generates the export Given org policy mfa.export.required=false When the user requests an export Then no MFA challenge is required and the export proceeds if authorized
Comprehensive Export Audit Trail Recording
Given a successful export completes When the export record is written Then the audit trail stores: export_id, org_id, user_id, user_role, tenant_region, fleet_ids scope, time_started, time_completed, parameters (date range, filters), delivery_channel (download|email|webhook), file_size_bytes, file_mime, file_hash_sha256, verification_url And the file_hash_sha256 matches the generated file’s SHA-256 And the audit event is immutable and append-only And the audit event is persisted and queryable within 5 seconds of completion Given a failed export When the failure occurs Then an audit event records export_id, org_id, user_id, time_started, time_failed, failure_reason, and no file_hash is stored
Admin Reports and Webhook Exposure of Export Events
Given an org admin with reporting access When viewing Export Events in Admin Reports Then they can filter by date range, user, fleet_id, delivery_channel, outcome (success|failure) And results are paginated and sortable by time_completed Given webhooks are configured with a signing secret When an export succeeds Then an export.succeeded event is delivered with payload including export_id, org_id, user_id, fleet_ids, time_completed, file_hash_sha256, delivery_channel, verification_url And the request includes an HMAC-SHA256 signature header And deliveries are retried up to 3 times with exponential backoff if non-2xx Given an export fails When the failure is recorded Then an export.failed event is delivered with export_id, failure_reason, and associated context Given idempotency requirements When duplicate webhook deliveries occur Then downstream can deduplicate via a stable event_id and idempotency_key in headers
Per-Tenant Rate Limits and Concurrency Caps
Given tenant limits are configured as L exports per rolling 60 seconds and C concurrent exports When the tenant submits more than L exports within 60 seconds Then subsequent requests return 429 Too Many Requests with a Retry-After header And no additional exports start processing Given current running exports equal C When another export request arrives Then the API returns 429 with error code concurrency_limit and the request is neither queued nor started Given limits apply per tenant When two users in the same tenant submit exports Then rate limiting counts aggregate at the tenant level Given an export has already been generated When the user downloads an existing file Then rate limits do not apply to file downloads, only to generation requests
Verification Link Management and Revocation
Given a completed export with a verification_url embedded in the PDF QR code When an org admin revokes the verification link Then subsequent requests to the verification_url return HTTP 410 Gone And the QR code scan resolves to the same 410 state And a revocation audit event is recorded with admin user_id, time, export_id Given a revoked link When anyone attempts to reactivate it Then reactivation is rejected and a new export is required to obtain a new verification_url Given retention policy expiry When the verification link surpasses its configured TTL Then the verification_url automatically expires with HTTP 410 and is marked expired in reports
PII Redaction and Data Residency Compliance
Given tenant redaction setting = none|partial|full When generating the PDF, evidence index, verification page, admin report row, and webhook payload Then PII fields adhere to the setting: none=raw, partial=masked (e.g., driver_email=j***@example.com), full=omitted And no unredacted PII appears when partial or full is set Given tenant region = R When storing generated files, manifests, and audit events Then data is stored and served only from region R, and cross-region replication is disabled for customer data And webhook deliveries originate from region R endpoints Given conflicting redaction directives from multiple sources When resolving effective policy Then the most restrictive redaction level prevails

Lifecycle ROI

Projects total cost of ownership forward—repair frequency, fuel burn, parts wear, and remaining coverage—to pinpoint the break‑even month for keep vs replace. Delivers a clear Repair, Replace, or Monitor verdict with projected savings and a confidence range so managers act decisively, not by gut feel.
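A toy version of the break-even computation, purely to make the verdict logic concrete; the cost streams, monitor band, and thresholds are illustrative, not the scoring model:

```python
def keep_vs_replace(keep_monthly: list, replace_monthly: list,
                    monitor_band: float = 0.05) -> tuple:
    """Compare cumulative projected cost of keeping a vehicle (rising repair
    frequency, fuel burn, parts wear) against replacing it (acquisition cost
    amortized into the monthly stream), and return a verdict."""
    keep_total = replace_total = 0.0
    break_even_month = None
    for month, (keep, repl) in enumerate(zip(keep_monthly, replace_monthly), 1):
        keep_total += keep
        replace_total += repl
        if break_even_month is None and replace_total <= keep_total:
            break_even_month = month  # replacement pays for itself from here
    savings = keep_total - replace_total  # > 0 means replacing saves money
    if abs(savings) <= monitor_band * keep_total:
        return "Monitor", break_even_month, savings
    return ("Replace" if savings > 0 else "Repair"), break_even_month, savings
```

The confidence range would come from running the same comparison over sampled cost trajectories rather than a single point forecast.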

Requirements

Unified Cost & Telematics Data Pipeline
"As a fleet manager, I want all my vehicle costs and telematics in one accurate, up-to-date source so that Lifecycle ROI outputs are trustworthy and reflect real operations."
Description

Build a robust ingestion and normalization layer that consolidates OBD-II telemetry (DTCs, fuel rate, idle time, odometer), maintenance logs, repair invoices, parts purchases, and fuel transactions into a single, asset-scoped dataset. Standardize units and currencies, deduplicate events, categorize costs (labor, parts, fuel, downtime), and reconcile timestamps across sources. Implement outlier detection, missing-data imputation, and VIN-to-asset mapping. Provide daily batch updates with near-real-time deltas for critical signals (faults, mileage). Expose a consistent schema to the Lifecycle ROI models and persist lineage/assumption metadata for auditability and repeatability.
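A sketch of the asset-scoped record shape served to the models, mirroring the field contract listed in the acceptance criteria below:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class UnifiedRecord:
    """One normalized, asset-scoped row in the unified dataset."""
    tenant_id: str
    asset_id: str
    vin: str
    timestamp_utc: str                     # normalized from source_timestamp
    source_system: str
    source_record_id: str
    signal_or_event_type: str              # e.g. "fuel_rate", "repair_invoice"
    value_standard: Optional[float]        # canonical units (km, L, h)
    unit_standard: Optional[str]
    currency_code: Optional[str]
    amount_standardized: Optional[float]   # base currency at transaction-time FX
    cost_category: Optional[str]           # labor | parts | fuel | downtime
    lineage_id: str
    outlier_flag: bool = False
    imputation_flag: bool = False
    confidence_score: float = 1.0
    ruleset_version: str = "v1"
    schema_version: str = "1.0"
```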

Acceptance Criteria
Daily Multi-Source Batch Ingestion & Normalization
Given OBD-II telemetry, maintenance logs, repair invoices, parts purchases, and fuel transactions sources are connected for a tenant When the daily batch runs between 02:00 and 04:00 UTC for the prior calendar day Then >=99.5% of available source records are ingested and scoped to asset_id And all quantities are converted to configured canonical units (distance, volume, time) and base currency using the FX rate at each transaction timestamp And VIN-to-asset mapping succeeds for >=99.9% of ingested records; records without a mapping are quarantined with actionable error codes And duplicate raw files or messages do not increase the number of persisted business events (idempotent load) And the run manifest persists counts for ingested, transformed, quarantined, and rejected records with reasons, and the batch is marked successful only if total error rate <0.5%
Near-Real-Time Critical Signal Deltas
Given live OBD-II stream delivering DTCs and odometer deltas When new DTC or odometer updates are received Then the normalized delta events are available to downstream consumers within 120 seconds end-to-end P95 and 300 seconds P99 And duplicate DTCs within a 10-minute window per asset are coalesced to a single active fault instance And odometer readings are monotonic non-decreasing per asset; non-monotonic readings are flagged and excluded from mileage deltas And in case of upstream outage <=60 minutes, the pipeline backfills missed events and reaches freshness parity within 10 minutes of recovery
Outlier Detection & Missing-Data Imputation
Given normalized telemetry for fuel_rate, idle_time, odometer, speed, and engine_load When values violate configured per-asset-class bounds or rate-of-change thresholds Then the records are flagged with outlier_flag=true and outlier_reason, excluded from model features, and routed to a quarantine table And missing critical fields within gaps <=60 minutes are imputed using documented methods (carry-forward or linear interpolation) with imputation_method and confidence_score persisted And for any asset, imputed coverage for critical signals does not exceed 10% of total samples in a 24-hour window; if exceeded, a data_quality_alert is emitted And all outlier and imputation decisions are reproducible via stored ruleset_version and parameters
Cross-Source Deduplication & Cost Categorization
Given maintenance log entries and corresponding repair invoices for the same asset When events occur within a configurable reconciliation window (default +/-72 hours) and share matching vendor, odometer band, and line items Then they are merged into a single maintenance_event with a stable event_fingerprint And repeated ingestion of the same source files or messages yields no additional maintenance_event records (idempotency) And 100% of cost line items are assigned to one of: labor, parts, fuel, downtime; uncategorized_rate <=1% per batch, with residuals routed to review And total_amount_standardized equals the sum of categorized amounts in base currency within +/-0.5% tolerance
Consistent Schema & Contract Exposure for Lifecycle ROI Models
Given the downstream model requests the unified dataset snapshot for a tenant When the dataset is served Then each record includes the required fields: tenant_id, asset_id, vin, timestamp_utc, source_system, source_record_id, signal_or_event_type, value_standard, unit_standard, currency_code, amount_standardized, cost_category, lineage_id, outlier_flag, imputation_flag, confidence_score, ruleset_version, schema_version And the published JSON Schema and data dictionary for schema_version are available via the contract endpoint and pass contract tests And schema changes are backward compatible; fields are only added or marked deprecated with a deprecation window >=2 minor versions; no breaking change is released without a schema_version increment
Lineage, Assumptions, and Reproducibility
Given a model run_id and the dataset_version_id recorded at inference time When a replay is requested for that run_id Then the pipeline produces an identical dataset (row count and content hash match) using persisted lineage, transformation_steps, assumptions, and FX rates And daily dataset snapshots are retained for >=24 months, addressable by dataset_version_id and timestamp And each record’s lineage trail can be traversed from unified record back to source file/message within a single query using lineage_id and source_record_id
VIN-to-Asset Mapping & Timestamp Reconciliation
Given incoming records with VINs and optional asset_ids from multiple sources When the VIN maps to multiple active assets within a tenant Then the record is quarantined within 5 minutes with conflict_reason and a mapping_resolution task is created And when the VIN maps uniquely, asset_id is resolved deterministically and persisted And records missing timezone info are normalized to timestamp_utc using source-level defaults; all records persist both source_timestamp and timestamp_utc And cross-source events within a 5-minute window for the same asset are time-reconciled and marked as the same logical event for deduplication
Usage-Based Degradation & Repair Forecasting
"As a fleet manager, I want ROI to factor in how and where each vehicle is used so that repair and wear predictions match our real-world patterns."
Description

Develop predictive models that estimate component wear and repair frequency per vehicle based on duty cycle, mileage, engine hours, idle %, load profile, climate, and historical failures. Produce monthly expected events and cost distributions for high-impact systems (engine, battery, brakes, tires). Incorporate survival/hazard modeling by VIN/engine family with cold-start defaults when history is sparse. Calibrate models with backtesting, and continuously learn as new data arrives. Output feeds the TCO projections with confidence intervals for each asset.

Acceptance Criteria
Monthly System-Level Failure and Cost Forecast per Vehicle
Given a vehicle with at least 90 days of telematics and environmental data (duty cycle, mileage, engine hours, idle %, load profile, climate) and any available historical failures When the monthly forecasting job executes Then the system produces a 12-month horizon for each system (engine, battery, brakes, tires) with fields: month, expected_event_rate, cost_mean, cost_p50, cost_p90, ci95_lower, ci95_upper, model_version, reason_code And all expected_event_rate and cost values are non-negative and costs are in USD And the forecast is stored with a timestamp and is retrievable via the Forecast API by vehicle_id within 500 ms p95 for a single vehicle request And forecasts older than the latest run remain versioned and queryable for 24 months
VIN/Engine-Family Survival Modeling with Cold-Start Fallback
Given a vehicle with sparse history (less than 3 months of telematics or zero recorded failures) or missing key features for the last 30 days When a forecast is requested Then the system uses the engine-family and regional climate survival/hazard baseline and marks reason_code = "cold_start" And the output includes valid monthly horizons and confidence intervals as defined for fully observed vehicles And in backtests on the cold_start cohort, 95% CI coverage for monthly cost is between 92% and 98%, and event-rate calibration error (|predicted - observed|) averaged over deciles is ≤ 0.10 absolute
Backtesting and Calibration Accuracy Thresholds
Given a 24-month labeled dataset per system partitioned into train (first 12 months) and test (last 12 months) with no leakage When the backtesting pipeline runs Then for each system and vehicle decile of predicted monthly event probability, the observed event frequency lies within ±10 percentage points of the prediction And the empirical coverage of the predicted 95% cost interval on test months is between 92% and 98% And the calibration curve (probability scale) has slope in [0.9, 1.1] and intercept in [-0.02, 0.02] And CRPS for monthly cost distributions per system is at least 5% lower than the incumbent model’s CRPS, or at or below a fixed threshold agreed with Data Science (whichever is easier to meet)
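The decile-calibration and interval-coverage checks reduce to simple arithmetic over backtest outputs. A pure-Python sketch, assuming parallel lists of predictions and observed outcomes:

    def decile_calibration(pred_probs, observed):
        """Max |predicted - observed| event rate across prediction deciles."""
        pairs = sorted(zip(pred_probs, observed))
        n, worst = len(pairs), 0.0
        for d in range(10):
            chunk = pairs[d * n // 10:(d + 1) * n // 10]
            if not chunk:
                continue
            mean_pred = sum(p for p, _ in chunk) / len(chunk)
            mean_obs = sum(o for _, o in chunk) / len(chunk)
            worst = max(worst, abs(mean_pred - mean_obs))
        return worst  # acceptance: <= 0.10 (ten percentage points)

    def interval_coverage(lowers, uppers, actuals):
        """Fraction of actual costs inside the predicted 95% interval."""
        hits = sum(l <= a <= u for l, u, a in zip(lowers, uppers, actuals))
        return hits / len(actuals)  # acceptance: between 0.92 and 0.98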
Continuous Learning, Drift Monitoring, and Safe Promotion
Given new labeled data (events, costs, telematics) ingested daily When the monthly retraining job executes Then candidate models are trained per system and evaluated on the fixed rolling 12-month test set using the acceptance metrics And a candidate is auto-promoted only if it meets or exceeds all calibration and coverage thresholds and improves CRPS by ≥5% relative to the incumbent And population drift is monitored weekly with PSI; PSI > 0.2 on any key feature triggers an out-of-cycle retrain And the promoted model is versioned, registered, and forecasts for all active vehicles are regenerated within 24 hours of promotion And rollback to the prior model is possible within 30 minutes if monitoring alerts fire
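The PSI trigger is a standard population-stability computation over shared bins. A minimal sketch of the formula and the 0.2 retrain threshold named above:

    import math

    def psi(expected_frac, actual_frac, eps: float = 1e-6) -> float:
        """PSI between baseline and current bin fractions (same bin edges)."""
        total = 0.0
        for e, a in zip(expected_frac, actual_frac):
            e, a = max(e, eps), max(a, eps)  # avoid log(0)
            total += (a - e) * math.log(a / e)
        return total

    baseline = [0.25, 0.25, 0.25, 0.25]
    current = [0.10, 0.20, 0.30, 0.40]
    if psi(baseline, current) > 0.2:
        print("PSI > 0.2: trigger out-of-cycle retrain")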
Feature Engineering, Data Quality, and Fallback Rules
Given raw inputs for duty cycle, mileage, engine hours, idle %, load profile, climate, and historical failures When the preprocessing pipeline runs Then units are normalized (e.g., miles, hours, °C/°F mapped consistently), and climate is mapped from GPS/ZIP to a standard climate index And outliers beyond the 0.5th/99.5th percentiles are winsorized; remaining extreme values are flagged And if any key feature has >20% missingness in the lookback window, the model uses an imputed-and-flagged value and/or engine-family fallback with an appropriate reason_code And a data_quality score (0–100) is computed and included; forecasts with data_quality < 70 use widened intervals (e.g., CI width × 1.25)
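The winsorization rule can be sketched as a clamp to the 0.5th/99.5th percentiles; index-based percentiles are used here for brevity:

    def winsorize(values, lo_pct: float = 0.005, hi_pct: float = 0.995):
        """Clamp to percentile bounds; values at the bounds were flagged."""
        s = sorted(values)
        lo = s[int(lo_pct * (len(s) - 1))]
        hi = s[int(hi_pct * (len(s) - 1))]
        return [min(max(v, lo), hi) for v in values], lo, hi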
TCO Projection Integration and Verdict Consistency
Given monthly system-level forecasts exist for a vehicle When the Lifecycle ROI module computes projected TCO and break-even month Then it consumes the forecast payload (vehicle_id, system, month, expected_event_rate, cost_mean, cost_p50, cost_p90, ci95_lower, ci95_upper, model_version, reason_code) And the computed break-even month and the Repair/Replace/Monitor verdict are reproducible given the same forecast inputs And the UI/API display the verdict with the 95% cost confidence band and indicate reason_code when present And contract tests validate the schema and field semantics between Forecasting and TCO modules
Warranty & Coverage Modeling
"As a fleet manager, I want remaining warranty coverage included in projections so that I see true expected out-of-pocket costs when deciding to keep or replace."
Description

Ingest OEM and extended warranty terms, coverage windows (miles, months), deductibles, exclusions, and policy limits; map to components and failure codes. Calculate remaining coverage per asset and adjust projected repair costs to covered vs. out-of-pocket amounts over time. Support manual entry for aftermarket contracts and store proof-of-coverage artifacts. Surface upcoming coverage expirations as inputs to the break-even analysis.

Acceptance Criteria
Import and Parse Warranty Terms (OEM and Extended)
Given a valid JSON or CSV payload containing policy name, issuer, effective date, in-service anchor, coverage windows (months, miles), deductible, per-claim limit, aggregate limit, covered components, and exclusions, When the payload is uploaded via UI or POSTed to the import API, Then a policy record is created with all fields populated and a success response including policy_id is returned.
Given a payload missing any required field (policy name, effective date, coverage windows, deductible, limits), When import is attempted, Then the import fails with a 400 response and field-level error messages are displayed without creating a policy.
Given a payload with invalid data types or units (e.g., negative miles, non-date effective date), When import is attempted, Then validation errors specify the offending fields and no policy is created.
Given duplicate import of an already-existing policy (same issuer, policy number, asset, and effective date), When import is attempted, Then the system de-duplicates and returns the existing policy_id with an idempotent success notice.
Map Coverage to Components and Failure Codes
Given a DTC-to-component mapping table and a policy with covered components and exclusions, When a failure event with DTC codes and component IDs is processed, Then the system resolves each event component to a coverage category and determines covered or excluded status per policy.
Given an unmapped DTC code is encountered, When processing coverage, Then the event is marked Unknown Coverage, no policy is applied, and the code is queued for mapping review.
Given a component is explicitly excluded in the policy, When a related event is processed, Then the event is marked Excluded and out-of-pocket is set to 100% for that component under that policy.
Compute Remaining Coverage by Time and Mileage
Given an asset with an in-service date and current odometer reading and a policy with coverage windows (months, miles), When remaining coverage is calculated, Then remaining_months and remaining_miles are computed and stored to the asset-policy record with a next_expiration_date.
Given either remaining_months <= 0 or remaining_miles <= 0, When coverage status is evaluated, Then the policy status for the asset is set to Expired; otherwise it is Active.
Given current odometer or in-service date is updated, When recalculation runs, Then remaining coverage values are updated within 5 seconds and any status change is emitted as an event.
Apply Coverage to Projected Repair Costs (Deductible, Exclusions, Limits, Multi-Policy)
Given a projected repair event with itemized parts and labor and one or more Active policies, When coverage is applied, Then eligible costs are split into covered_amount and out_of_pocket using policy rules, excluding components marked Excluded.
Given a per-claim deductible is defined, When multiple line items belong to the same repair event, Then the deductible is applied once per event before limits and the applied_deductible is returned.
Given per-claim and aggregate limits, When covered_amount exceeds a limit, Then coverage is capped at the limit, any excess is assigned to out_of_pocket, and aggregate_remaining is decremented accordingly.
Given multiple policies are applicable, When coordination is executed, Then policies are applied in configured priority order without exceeding total event cost and each policy’s contribution is returned with policy_id and contribution_amount.
Manual Entry and Validation of Aftermarket Contracts
Given a user creates a manual warranty contract via the UI, When required fields (issuer, policy number, effective date, coverage windows, deductible, limits, covered components/exclusions, asset assignment) are provided, Then the contract is saved and linked to the asset with a generated policy_id.
Given required fields are missing or invalid (e.g., end date before effective date, negative miles), When save is attempted, Then inline validation errors prevent save and identify each invalid field.
Given a manual contract overlaps an existing policy for the same asset and issuer/policy number, When save is attempted, Then a duplicate/overlap warning is shown and the user must confirm override or cancel.
Given a manual contract is updated, When changes are saved, Then a versioned audit log entry captures before/after values, editor, and timestamp.
Store and Retrieve Proof-of-Coverage Artifacts
Given an asset-policy record, When a user uploads proof-of-coverage files (PDF, PNG, JPG) up to 25 MB each, Then the files are stored, virus-scanned, and linked to the policy with filename, size, uploader, and upload timestamp.
Given an authorized user requests an artifact, When download is initiated, Then the original file is returned intact within 3 seconds and the access is logged.
Given a newer artifact version is uploaded with the same filename, When save completes, Then the prior file is retained as a previous version and the latest is marked current.
Given a user attempts to delete an artifact, When deletion is confirmed, Then the file is no longer retrievable from the UI but an audit trail of the deletion is preserved.
Surface Coverage Expirations to Break-Even Analysis
Given expiration thresholds are configured (e.g., 60 days and 3,000 miles), When an asset-policy’s remaining_months or remaining_miles falls within thresholds, Then an Upcoming Coverage Expiration signal is emitted with asset_id, policy_id, and expected expiration date.
Given Lifecycle ROI analysis is executed for an asset, When coverage signals are present, Then the engine consumes remaining coverage and expiration data to adjust projected repair costs (covered vs out-of-pocket) for the analysis window and includes the assumptions in the analysis payload.
Given a policy transitions from Active to Expired, When the next analysis runs, Then the model stops applying coverage to projected repairs beyond the expiration point and the UI displays an expiration notice in the verdict context.
TCO & Break-even Calculator (Keep vs Replace)
"As a fleet manager, I want a precise break-even month and projected savings for keep versus replace so that I can make financially sound replacement decisions."
Description

Compute forward-looking cash flows for each asset under Keep and Replace strategies, including fuel, maintenance/repairs, parts, downtime cost, insurance, registration, financing, taxes, and residual/resale value. For Replace, support candidate vehicle profiles (class, make/model/year), purchase price, lead time, incentives, expected MPG, and maintenance baselines. Apply fleet cost of capital to discount cash flows and derive the month where Replace cumulative cost falls below Keep (break-even), along with projected savings and NPV. Persist assumptions and inputs per run for traceability.

Acceptance Criteria
Dual-Strategy Cash Flow Projection (Keep vs Replace)
Given an asset with baseline assumptions for fuel economy, maintenance/repairs, parts, insurance, registration, financing terms, taxes, downtime cost, residual/resale value, analysis horizon H months starting at S, and currency And a Replace candidate profile with class, make/model/year, purchase price, incentives, expected MPG, maintenance baseline, lead time L, and residual/resale assumptions When the user runs the TCO & Break-even calculator Then the system generates monthly cash-flow tables for Keep and Replace for months 0..H inclusive, with line items: fuel, maintenance/repairs, parts, downtime cost, insurance, registration, financing (principal and interest), taxes, and residual/resale value at disposal And the output includes per-month, per-category amounts, monthly totals, and cumulative totals for each strategy
Replace Candidate Profile Validation
Given the user selects Replace strategy When required fields are missing or invalid (class, make, model, year, purchase price > 0, expected MPG > 0, maintenance baseline provided, lead time L ≥ 0) Then the calculator does not execute and displays field-level error messages enumerating all invalid or missing inputs And when all required fields are valid, the calculator executes and applies configured defaults for optional fields (e.g., incentives default to 0 if not provided)
Cost of Capital Discounting
Given an annual fleet cost of capital r_annual is provided When the calculator computes discounted cash flows Then it converts r_annual to a monthly discount rate r_month = (1 + r_annual)^(1/12) − 1 And each month t cash flow is discounted by 1/(1 + r_month)^t from the analysis start month And when r_annual = 0%, discounted totals equal undiscounted totals within ±0.01 of the currency unit
Break-even Month Derivation
Given discounted cumulative cost series for Keep and Replace across the horizon When the Replace cumulative cost first becomes strictly less than the Keep cumulative cost Then the break-even month equals that month index relative to the analysis start and the corresponding calendar month is displayed And if no crossing occurs within the horizon, the break-even month is null and the result clearly states "No break-even within horizon"
Projected Savings and NPV Reporting
Given discounted cash flows for both strategies When results are presented Then the system displays NPV difference over the horizon (NPV Keep − NPV Replace) and projected savings at the break-even month (Keep cumulative − Replace cumulative at break-even) And values include the currency code/symbol, are rounded to two decimals, and use positive numbers to indicate savings with Replace And a downloadable detail includes per-month discounted totals for both strategies
Lead Time Handling and Overlap
Given a Replace lead time L months is specified When L > 0 Then Keep costs continue through months 0..L−1, the purchase price and incentives are applied at month L, and Replace operating costs (fuel, maintenance, insurance, registration) begin no earlier than month L, with no double-counting across strategies And if L = 0, Replace costs begin at month 0 And if L ≥ horizon length, the purchase cash flow is excluded from the horizon and no break-even can occur within the horizon
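The discounting, lead-time, and break-even rules above compose into one short worked sketch. The flat monthly costs and dollar figures are invented; only the mechanics (monthly-rate conversion, purchase placed at month L, first strict crossing) follow the criteria:

    def monthly_rate(r_annual: float) -> float:
        """Convert an annual cost of capital to a monthly discount rate."""
        return (1.0 + r_annual) ** (1.0 / 12.0) - 1.0

    def discounted_cumulative(cash_flows, r_annual):
        """Cumulative discounted cost series for months 0..H."""
        r = monthly_rate(r_annual)
        total, series = 0.0, []
        for t, cf in enumerate(cash_flows):
            total += cf / (1.0 + r) ** t
            series.append(total)
        return series

    def break_even_month(keep_cum, replace_cum):
        """First month where Replace cumulative cost is strictly below Keep."""
        for t, (k, rp) in enumerate(zip(keep_cum, replace_cum)):
            if rp < k:
                return t
        return None  # "No break-even within horizon"

    H, L = 36, 2                    # horizon and lead time, in months
    keep = [900.0] * (H + 1)        # Keep: flat monthly operating cost
    replace = [900.0] * L           # Keep costs continue through month L-1
    replace += [12000.0 - 2000.0]   # purchase price minus incentives at month L
    replace += [400.0] * (H - L)    # lower operating cost after replacement
    keep_cum = discounted_cumulative(keep, 0.08)
    repl_cum = discounted_cumulative(replace, 0.08)
    print(break_even_month(keep_cum, repl_cum))  # month index, or None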
Run Persistence and Audit Trail
Given a calculator run completes When the run is saved automatically Then the system persists a unique run ID, user ID, asset ID, timestamp, analysis horizon and start month, discount rate, all Keep and Replace inputs, and all outputs (cash-flow tables, cumulative series, break-even, NPV, savings) And the user can retrieve prior runs by asset and run ID and view the exact inputs and results used for each run
What‑if Scenario Configurator
"As a fleet manager, I want to test different assumptions and replacement options so that I can see how sensitive the verdict is before committing to a decision."
Description

Provide an interactive layer to vary assumptions such as monthly miles, fuel price, shop labor rate, parts inflation, downtime hourly cost, financing rate/term, resale value, and lead time. Allow saving named scenarios, comparing multiple candidates side-by-side, and applying presets (e.g., fuel spike, high utilization, supply delay). Validate inputs, show sensitivity to top drivers, and allow default values from historical fleet data. Feed scenarios directly into the TCO and break-even engine.

Acceptance Criteria
Assumption Inputs Validation
Given the configurator is open, When the user enters monthly miles outside 100–30,000, Then an inline error “Monthly miles must be between 100 and 30,000” is shown and Save and Run are disabled.
Given the user enters a fuel price outside $1.00–$15.00 per gallon or with more than 2 decimal places, When focus leaves the field, Then an inline error “Fuel price must be between $1.00 and $15.00 with max 2 decimals” is shown and Save and Run are disabled.
Given the user enters a shop labor rate outside $0–$500/hr, When focus leaves the field, Then an inline error is shown and Save and Run are disabled.
Given the user enters parts inflation outside −20% to +100% per year, When focus leaves the field, Then an inline error is shown and Save and Run are disabled.
Given the user enters a downtime hourly cost outside $0–$10,000/hr, When focus leaves the field, Then an inline error is shown and Save and Run are disabled.
Given the user enters a financing APR outside 0%–40% or a term outside 1–84 months, When focus leaves the fields, Then inline errors are shown and Save and Run are disabled.
Given the user enters a resale value below $0 or above 150% of current valuation, When focus leaves the field, Then an inline error is shown and Save and Run are disabled.
Given the user enters a lead time outside 0–365 days, When focus leaves the field, Then an inline error is shown and Save and Run are disabled.
Given all inputs are valid, When the user clicks Save or Run Analysis, Then the action proceeds without validation errors.
Save and Load Named Scenarios
Given a valid set of inputs, When the user clicks Save As and provides a unique name up to 60 characters, Then the scenario is saved with an ID, owner, and timestamp, and a success confirmation is shown.
Given the user enters a name that already exists in their organization, When they attempt to Save As, Then the action is blocked and an inline error “Name already exists—choose a different name” is shown.
Given an existing saved scenario is open with changes, When the user clicks Save, Then the scenario overwrites the previous version and updates the last-modified timestamp.
Given saved scenarios exist, When the user opens the Load dialog, Then a list of scenario names with last-modified timestamps is displayed and selecting one loads all fields exactly as saved within 1 second local render time.
Given a scenario is loaded, When any field value changes, Then an Unsaved Changes indicator appears until the scenario is saved or reverted.
Side-by-Side Comparison of Scenarios
Given at least two saved scenarios exist, When the user opens Compare, Then they can select up to 4 scenarios to include in a side-by-side view.
Given scenarios are selected for comparison, Then each column displays: Verdict (Repair/Replace/Monitor), Break-even month, TCO at 12/24/36 months, Projected savings vs baseline, Confidence range, and Top 3 cost drivers.
Given the compare view is open, Then numeric values use consistent units and formatting across scenarios and any differing assumptions are highlighted.
Given scenarios were computed with different engine versions, When the compare view loads, Then a banner warns of version mismatch and a Recalculate action is available to recompute all scenarios on the current engine.
Given a scenario included in compare is edited and re-saved, When the user refreshes the compare view, Then the displayed metrics update within 2 seconds of the rerun completing.
Apply and Revert Presets
Given an open scenario with valid inputs, When the user applies the Fuel Spike (+30%) preset, Then the fuel price field increases by 30% and is tagged as modified by preset.
Given an open scenario with valid inputs, When the user applies the High Utilization (+25%) preset, Then the monthly miles field increases by 25% and is tagged as modified by preset.
Given an open scenario with valid inputs, When the user applies the Supply Delay (+60 days) preset, Then the lead time field increases by 60 days and is tagged as modified by preset.
Given at least one preset has been applied, When the user clicks Revert Preset, Then all preset-applied changes revert to their prior manual values and preset tags are removed.
Given applying a preset would violate validation constraints, When the user attempts to apply it, Then the application is blocked with an explanatory message and no values change.
Sensitivity to Top Cost Drivers
Given a complete, valid scenario, When the user triggers Run Sensitivity, Then the system varies each input one-at-a-time by ±10% (or logical step) for: monthly miles, fuel price, shop labor rate, parts inflation, downtime cost, financing rate, resale value, and lead time, and computes the impact on 24‑month TCO and break-even month (a sketch of this sweep appears below).
Given sensitivity results are computed, Then the UI displays a ranked list or tornado chart of the top 5 drivers with each driver’s ΔTCO and Δ break-even month and direction (+/−).
Given normal system load, When sensitivity is run on a scenario with up to 20 inputs, Then results return in ≤3 seconds at p95.
Given sensitivity is one-at-a-time, Then a tooltip discloses the method used and assumptions about independence.
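A sketch of the one-at-a-time sweep: perturb each driver by ±10%, re-run the TCO computation, and rank by absolute impact. The tco_24mo function here is a hypothetical stand-in for the real engine call:

    def tco_24mo(inputs: dict) -> float:
        # hypothetical stand-in: fuel + labor + downtime over 24 months
        gallons = inputs["monthly_miles"] / 10.0
        return 24 * (gallons * inputs["fuel_price"]
                     + inputs["labor_rate"] * 2.0
                     + inputs["downtime_cost"] * 1.5)

    def sensitivity(inputs: dict, step: float = 0.10):
        """Rank drivers by worst-case |delta TCO| under a +/-10% perturbation."""
        base = tco_24mo(inputs)
        impacts = []
        for key in inputs:
            up = dict(inputs); up[key] = inputs[key] * (1 + step)
            dn = dict(inputs); dn[key] = inputs[key] * (1 - step)
            delta = max(abs(tco_24mo(up) - base), abs(tco_24mo(dn) - base))
            impacts.append((key, delta))
        impacts.sort(key=lambda kv: kv[1], reverse=True)
        return impacts[:5]  # top drivers for the tornado chart

    print(sensitivity({"monthly_miles": 2500, "fuel_price": 4.00,
                       "labor_rate": 120.0, "downtime_cost": 85.0}))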
Default Values from Historical Fleet Data
Given a vehicle (or group) is selected, When the user creates a new scenario, Then fields auto-populate with defaults derived from the last 12 months of historical data for that vehicle or its class, and a “Defaults applied from historical data” banner is shown.
Given less than 90 days of vehicle-level history is available, When defaults are applied, Then fleet-level averages are used; if unavailable, industry defaults are used, and the banner indicates the source.
Given defaults are applied, When the user toggles Reset to defaults, Then any overridden fields revert to the default values and are marked accordingly.
Given defaults are applied, Then each defaulted field shows a data source tag (Vehicle/Fleet/Industry) and can be manually overridden without error if within validation constraints.
Scenario Runs Feed TCO and Break-even Engine
Given a valid scenario is open, When the user clicks Run Analysis, Then the payload sent to the engine includes monthly miles, fuel price, shop labor rate, parts inflation, downtime hourly cost, financing APR, financing term, resale value, lead time, scenario ID, vehicle ID, and timestamp.
Given the engine processes the request, Then it returns Verdict, Break-even month, TCO for the selected horizon, Projected savings, and Confidence range, and the UI renders results within 2.5 seconds at p95.
Given an analysis run completes, Then the system logs scenario ID, engine version, and response time to audit logs.
Given the engine fails or times out, When the user runs analysis, Then a clear error banner with Retry is shown and prior results remain visible without being overwritten.
Verdict, Confidence & Explainability
"As a fleet manager, I want a simple verdict with a confidence range and reasons why so that I can act quickly and defend the decision to stakeholders."
Description

Generate a clear Repair, Replace, or Monitor verdict per asset with a confidence range derived from model uncertainty and scenario variance. Present top contributing factors (e.g., rising fuel burn, expiring warranty, high repair likelihood) and show how they influence the outcome. Define guardrails that default to Monitor when uncertainty exceeds thresholds. Enable alerts when the verdict changes or confidence drops. Log versioned model metadata to support audit and continuous improvement.

Acceptance Criteria
Per-Asset Verdict and Confidence Output
Given an asset with >=90% data completeness over the last 12 months and no active guardrails When Lifecycle ROI analysis is executed for the asset Then the system returns a single verdict in {Repair, Replace, Monitor} And returns class probabilities for {Repair, Replace, Monitor} to two decimal places that sum to 100% ± 0.1% And returns a 90% confidence interval [L,U] for the chosen verdict where 0 <= L <= U <= 100 and (U - L) <= 30 percentage points And the chosen verdict equals the argmax of the class probabilities
Explainability: Top Contributing Factors and Influence
Given a generated verdict with probabilities and a confidence interval When the result is presented Then the system displays the top 5 contributing factors with human-readable labels, units, and signed contributions in percentage points toward the chosen verdict And each factor includes its underlying feature value and directionality (+ increases likelihood, - decreases likelihood) for the chosen verdict And a what-if sensitivity is available for each factor showing the change in chosen verdict probability for a ±10% change in the factor value And the list includes at least one cost-based, one reliability-based, and one coverage-based factor when available
Guardrails: Default to Monitor Under High Uncertainty
Given an asset where any of the following is true: data completeness < 70%, the 90% confidence interval width for the top class > 50 percentage points, or the input is detected out-of-distribution When Lifecycle ROI analysis is executed Then the verdict is forced to Monitor regardless of argmax probabilities And the UI displays a Guardrail badge with reason codes {Low Data, Wide CI, OOD} and a link to remediation steps And class probabilities and the confidence interval are still shown but flagged as Low Confidence And an audit event is logged with guardrail=true and the triggering reason codes
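The guardrail can be sketched as a thin wrapper over the argmax; the thresholds below come straight from this criterion, while the function signature is an assumption:

    def verdict(probs: dict, data_completeness: float,
                ci_width: float, out_of_distribution: bool):
        """Argmax verdict unless an uncertainty trigger forces Monitor."""
        reasons = []
        if data_completeness < 0.70:
            reasons.append("Low Data")
        if ci_width > 50.0:            # 90% CI width, percentage points
            reasons.append("Wide CI")
        if out_of_distribution:
            reasons.append("OOD")
        if reasons:
            return "Monitor", reasons  # guardrail overrides argmax
        return max(probs, key=probs.get), reasons

    print(verdict({"Repair": 0.62, "Replace": 0.28, "Monitor": 0.10},
                  data_completeness=0.95, ci_width=18.0,
                  out_of_distribution=False))  # -> ('Repair', [])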
Alerts on Verdict Change or Confidence Degradation
Given an asset with a previously stored verdict snapshot When the new verdict differs from the last snapshot or the top-class probability decreases by >=15 percentage points or the 90% confidence interval width increases by >=20 percentage points since the last snapshot Then an alert is generated within 5 minutes and delivered via all enabled channels {in-app, email, SMS} And the alert payload contains asset identifier, previous verdict and top probability, new verdict and top probability, confidence intervals, and top 3 contributing factors And duplicate alerts for the same condition are suppressed for 24 hours per asset And all alerts are recorded in the audit log with delivery status per channel
Versioned Model Metadata Logging
Given any verdict computation in any environment When the computation completes Then a metadata record is written containing model_name, model_version, training_data_hash, feature_schema_version, hyperparam_hash, explainability_method_version, uncertainty_method_version, threshold_config_version, code_commit_sha, input_snapshot_id, run_id, and timestamp And the metadata is queryable by asset_id and run_id within 2 seconds for 99% of requests And metadata records are immutable and retained for >=24 months And attempts to compute without a registered model_version are rejected with HTTP 409 and no verdict is stored
Reproducibility of Verdicts for Audit
Given a prior verdict’s run_id and input_snapshot_id When the audit replay job is executed using the same model_version and threshold_config_version Then the recomputed verdict matches the original verdict And class probabilities match within ±0.5 percentage points and factor contributions within ±5% relative error And >=99% of a nightly sample of 100 random assets meet the above tolerances; failures are flagged with incident severity and linked metadata
Lifecycle ROI Dashboard & Report Export
"As a fleet manager, I want a dashboard and exportable report of ROI results so that I can review decisions quickly and share them with owners and accounting."
Description

Deliver a FleetPulse UI module with per-asset and fleet-level views: break-even month, projected savings, verdict, confidence range, key drivers, and assumption snapshots. Provide filters by vehicle class, age, utilization, and health indicators. Enable sharing via PDF/CSV exports and a read-only share link; expose a secure API endpoint for downstream systems (accounting, procurement). Ensure responsive design and role-based access controls consistent with FleetPulse standards.

Acceptance Criteria
Per-Asset ROI View Completeness
Given I open the Lifecycle ROI module for a specific asset with available data When the per-asset view loads Then the UI displays: break-even month (MMM YYYY), projected savings in org currency, verdict (Repair|Replace|Monitor), confidence range (low–high %), top 3 key drivers with % impact, assumption snapshot (name, version, timestamp), and last data refresh timestamp And all numeric values are formatted per org locale settings (currency, date, numbers) And the verdict matches the configured business rule for keep vs replace based on projected TCO delta And loading shows skeletons until data resolves; on failure, a retry control with an actionable error message is shown
Fleet-Level ROI Aggregation and Consistency
Given I navigate to the Fleet-Level view with filters applied When the fleet view renders Then it shows: total projected savings (sum across filtered assets following their current verdicts), counts of assets by verdict, median break-even month for assets with defined break-even, confidence band for projected savings (P10–P90), and top 5 aggregated key drivers ranked by contribution And each displayed value equals the aggregation of the currently filtered per-asset data within 1% or rounding tolerance And a data freshness timestamp is shown and matches the freshest underlying asset data
Filtering by Class, Age, Utilization, and Health
Given a fleet with diverse vehicle classes, age bands, utilization tiers, and health indicators When I multi-select any combination of filters and apply them Then both per-asset and fleet-level views update to reflect the selection and show the filtered asset count And filter chips reflect selections, support remove per-chip, and Clear All resets results to the unfiltered state And the filter state is encoded in the URL query string and is restored on reload and propagated to exports and share links And for up to 5,000 assets, P95 time from Apply to completed render is ≤ 2.0 seconds And an informative empty state appears when zero assets match
PDF and CSV Export Fidelity
Given I have a current view (per-asset or fleet) with filters applied When I export to PDF Then the PDF contains the visible view content, filters summary, assumption snapshot details, charts/tables, generated timestamp, page numbers, and FleetPulse branding; supports A4 and Letter; file size ≤ 10 MB for ≤ 200 assets; completes within 15 seconds And when I export to CSV Then the CSV includes a header row and columns: asset_id, vin, vehicle_class, age_months, utilization_rate, health_indicator, break_even_month, projected_savings_amount, projected_savings_currency, verdict, confidence_low_pct, confidence_high_pct, top_driver_1, top_driver_2, top_driver_3, snapshot_id, snapshot_version, snapshot_name, generated_at_utc, filter_summary And all exports respect RBAC (only assets within user scope) and exactly match on-screen values from the same refresh window
Read-Only Share Link Security and Controls
Given I have permission to share When I generate a read-only share link for the current scope Then the system creates a tokenized, non-guessable URL with ≥ 128-bit entropy and a default expiry of 14 days, configurable between 1–90 days And recipients accessing the link can view only the shared scope and filters; all edit, export, and recalculation controls are hidden/disabled And the link can be revoked; after revocation or expiry, access returns HTTP 404 and is removed within the UI And all access via the link is logged (issuer, first/last access, IP, user agent) and rate-limited; excessive requests return HTTP 429
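Python's secrets module is one way to satisfy the entropy floor: token_urlsafe(32) draws 32 random bytes, i.e., 256 bits, comfortably above the >=128-bit requirement. A minimal sketch; the URL shape is hypothetical:

    import secrets
    from datetime import datetime, timedelta, timezone

    def mint_share_link(expiry_days: int = 14) -> tuple[str, datetime]:
        """Tokenized read-only link with a configurable 1-90 day expiry."""
        if not 1 <= expiry_days <= 90:
            raise ValueError("expiry must be between 1 and 90 days")
        token = secrets.token_urlsafe(32)  # 256 bits of entropy
        expires = datetime.now(timezone.utc) + timedelta(days=expiry_days)
        return f"https://app.example.com/share/{token}", expires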
Secure API Endpoint for ROI Data
Given a client with OAuth 2.0 client-credentials and scope fleetpulse.roi:read When it calls GET /api/v1/roi with filter parameters (class, age_months, utilization_tier, health_indicator, asset_id) and pagination (page or cursor, per_page) Then the API returns HTTP 200 with JSON items including: asset_id, vin, vehicle_class, age_months, utilization_rate, health_indicator, break_even_month, projected_savings_amount, projected_savings_currency, verdict, confidence_low_pct, confidence_high_pct, key_drivers[1..3], snapshot_id, snapshot_version, snapshot_name, generated_at_utc And responses include pagination metadata, ETag for caching, and adhere to rate limits (≥ 60 req/min per client); 429 includes Retry-After And error conditions return appropriate codes with structured errors: 400/422 (validation), 401/403 (authz), 404 (not found), 500 (server) And P95 response time is < 800 ms for up to 1,000 items per page; an OpenAPI spec is published and matches behavior
RBAC and Responsive UI Compliance
Given roles Owner/Admin, Manager, Technician, and Viewer are configured When users access the Lifecycle ROI module Then permissions are enforced: View (all roles within assigned scope), Export/Share (Owner/Admin and Manager only), API (service accounts with roi:read), Technician limited to assigned assets; unauthorized actions are hidden or return 403 And the UI is responsive with no horizontal scroll at common breakpoints (mobile 320–480, tablet 481–1024, desktop ≥ 1025); charts and tables adapt and maintain readability; tap targets ≥ 44 px; keyboard navigation works for interactive elements; contrast meets WCAG AA And performance budgets are met: P95 initial load ≤ 2.5 s on desktop fast 3G; primary interactions (tab switch, filter apply) have ≤ 100 ms input latency

Residual Pulse

Fetches live market comps and buy‑bid signals for your exact VIN, mileage, and condition to estimate net resale proceeds and the optimal sell window. Alerts you before value cliffs (e.g., mileage bands) so you capture maximum equity instead of holding past peak.

Requirements

VIN-Matched Market Data Ingestion
"As a fleet manager, I want Residual Pulse to ingest live VIN-matched market and buy-bid data so that valuations reflect current market conditions and buyer appetite."
Description

Implement a resilient data pipeline that continuously ingests and normalizes market comps and buy-bid signals mapped to exact VINs, trims, factory options, mileage, and regional factors. Sources include retail listings, wholesale auction feeds, dealer networks, and pricing indices under proper licenses. The pipeline must deduplicate, reconcile conflicting records, and standardize attributes (condition grades, asking vs. transacted, days-on-market) with latency targets (<15 minutes) and provider failover. It integrates with FleetPulse’s vehicle registry, OBD mileage, and service history to anchor the subject vehicle, caches recent comps, enforces rate limits, monitors data quality (coverage, freshness, anomalies), and exposes a versioned internal data contract for downstream models and UI. Security, usage auditing, and compliance with provider terms are mandatory.

Acceptance Criteria
E2E Ingestion Latency and Provider Failover
Given a new market record is published by any licensed provider, when it is fetched by the ingestion service, then the normalized record is available to downstream consumers within 15 minutes at p95 and within 30 minutes at p99.
Given the primary provider returns 5xx or times out for 5 consecutive minutes, when health checks fail, then the pipeline routes requests to the configured secondary provider within 2 minutes and continues ingestion with no single VIN freshness gap exceeding 10 minutes.
Given a provider outage is resolved, when connectivity is restored, then the pipeline backfills the missing window and reaches parity with the provider's latest data within 2 hours, with deduplication preventing duplicates.
Given failover occurs, when audit logs are reviewed, then an incident entry exists containing provider, start/end timestamps, affected VIN count, and actions taken.
Cross-Source Deduplication, Standardization, and Reconciliation
Given multiple source records for the same VIN in the same region with identical trim/options sets and mileage delta <= 100 miles, when records are ingested, then they are merged into a single canonical listing with source_ids preserved.
Given conflicting attributes across sources, when reconciliation runs, then the canonical value is selected by descending trust_score, then most recent transacted_at, then lower price, and reconciliation_strategy is recorded (see the sketch after this criterion).
Given provider-specific condition grades, when normalized, then all grades are mapped to an internal 0.0–5.0 scale with mapping tables versioned and applied deterministically.
Given asking and transacted prices, when both are present, then they are stored in separate fields and price_type is correctly labeled; days_on_market is computed as now - first_seen_at.
Given a weekly sample of 500 merged clusters, when evaluated, then the estimated duplicate false-merge rate is < 1% and the missed-merge rate is < 3%.
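The reconciliation precedence collapses into a single sort key: descending trust_score, then most recent transacted_at, then lower price. A minimal sketch with assumed field names:

    def canonical_record(candidates: list[dict]) -> dict:
        """Record with the best precedence key supplies the canonical value."""
        return min(candidates, key=lambda r: (-r["trust_score"],
                                              -r["transacted_at"],  # epoch secs
                                              r["price"]))

    records = [
        {"trust_score": 0.9, "transacted_at": 1_700_000_000, "price": 18500},
        {"trust_score": 0.9, "transacted_at": 1_700_500_000, "price": 18900},
    ]
    print(canonical_record(records))  # newer transaction wins at equal trust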
Vehicle Anchoring with FleetPulse Registry, OBD Mileage, and Service History
Given a FleetPulse-registered VIN, when comps are ingested, then each canonical record includes anchored_vehicle_id, last_known_mileage_at, mileage_source, and region_code matching the vehicle.
Given the vehicle's OBD mileage exceeds the comp's recorded mileage by > 500 miles or the OBD reading is > 14 days newer, when anchoring, then the comp is flagged mileage_stale=true and excluded from exact-mileage-band comps for that vehicle.
Given OBD telemetry is unavailable for > 7 days, when anchoring, then the system falls back to the last recorded mileage and sets mileage_source=fallback without blocking ingestion.
Given service history indicates no branded/salvage title, when a comp with a branded/salvage flag is ingested, then it is excluded from like-for-like comps for that vehicle but retained in the broader dataset with a reason code.
Comps Caching and Invalidation
Given a request for comps for a VIN+region, when the same key is requested again within 30 minutes without newer source data, then the response is served from cache within 200 ms at p95.
Given a new transacted price or listing update for VIN+region arrives, when normalization completes, then the corresponding cache entry is invalidated within 60 seconds and the next request reflects the update.
Given steady-state traffic at 100 RPS on the comps endpoint with a realistic request mix, when measured over 15 minutes, then cache hit rate is >= 60% with no stale data served beyond TTL.
Given the cache service fails, when requests arrive, then the system gracefully degrades to direct fetch with increased latency but no data loss, and emits an alert.
Rate Limiting, Security, and Usage Auditing Compliance
Given provider A's license caps at 10 RPS and 100k/day, when load tests run, then the pipeline maintains <= 9 RPS sustained and <= 95k requests/day with exponential backoff on 429/5xx (sketched below) and zero provider-imposed bans.
Given secrets for provider access, when stored, then they reside in a managed secret store, are never logged in plaintext, and are rotated at least every 90 days with rotation events auditable.
Given data in transit and at rest, when inspected, then TLS 1.2+ is enforced to providers and internal services, and all persisted normalized data is encrypted with AES-256 or equivalent.
Given any access to normalized comps, when audit logs are queried, then entries include subject, action, resource_id, timestamp, source_ip, and are immutable, tamper-evident, and retained for >= 13 months.
Given provider TOS constraints on data reuse, when export attempts are made by downstream services, then policy checks prevent prohibited redistribution and log denials.
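A sketch of the backoff behavior; fetch() is a hypothetical provider call returning a status code and body, not a real client:

    import random
    import time

    def fetch_with_backoff(fetch, max_attempts: int = 5):
        """Retry on 429/5xx with exponential backoff plus jitter."""
        for attempt in range(max_attempts):
            status, body = fetch()
            if status not in (429, 500, 502, 503):
                return body
            # backoff doubles per attempt (1s, 2s, 4s, ...) capped at 30s
            delay = min(2 ** attempt, 30) + random.uniform(0, 0.5)
            time.sleep(delay)
        raise RuntimeError("provider unavailable after retries")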
Versioned Internal Data Contract for Downstream Consumers
Given schema version v1.x, when a breaking change is proposed, then a new major version (v2.0) is published with side-by-side availability for >= 90 days and a documented migration guide; no unannounced breaking changes are deployed.
Given minor additive changes, when released, then they are backward compatible and validated by contract tests for both UI and model pipelines before deploy.
Given the contract, when fetched from the registry, then machine-readable schemas (JSON Schema and Parquet) and examples are available, with field definitions for VIN, trim, options, region, mileage, condition_score, price_ask, price_transacted, dom, source_ids, canonical_id, reconciliation_strategy, timestamps.
Given downstream staging environments, when nightly integration tests run, then all consumers pass against the current and next minor version without schema errors.
Data Quality Monitoring and Alerting
Given continuous ingestion, when monitoring runs, then freshness p95 < 15 minutes per provider and VIN coverage >= 95% for registered vehicles in supported regions are reported on dashboards.
Given price outliers beyond 5 median absolute deviations for a segment (VIN/trim/region), when detected, then the records are quarantined, alerts are sent to on-call within 5 minutes, and records are unquarantined only via an approval workflow.
Given provider degradation, when health checks fail thresholds, then an alert is emitted with provider name, error rate, and impacted VIN count; auto-suppression prevents alert storms during planned maintenance windows.
Given weekly data quality audits of 1,000 random records, when executed, then the anomaly rate (invalid fields, missing critical attributes) is <= 0.5% and all anomalies have a ticket created.
Condition & Mileage Normalization Model
"As a fleet manager, I want valuations adjusted for my vehicle’s exact condition, options, region, and mileage so that resale estimates are accurate and actionable."
Description

Build a valuation adjustment engine that normalizes external comps and bids to the subject vehicle’s exact mileage, condition, options, region, and seasonality. Combine inputs from inspections, photos, OBD fault history, maintenance records, and known equipment to derive a condition score with explainable factors. Apply mileage curve adjustments, option premiums/discounts, and regional demand multipliers to generate a comparable set and a fair-market value with confidence intervals. Provide interpretable outputs (top factors, comparable vehicles used), fallback heuristics when data is sparse, and a feedback loop that learns from realized sale prices. Integrates with a feature store, supports periodic retraining, and logs lineage for auditability.

Acceptance Criteria
Condition Score Computation from Multi-Source Inputs
Given a subject VIN with inspection results, photo metadata, OBD-II fault codes, maintenance records, and options in the canonical schema When the engine computes the condition score Then it returns condition_score in [0,100] and sub_scores {powertrain, exterior, interior, tires_brakes} that sum to condition_score within ±0.5 And identical inputs produce identical scores (max absolute delta ≤ 0.1) And missing sources are flagged with reason_codes and default priors are applied without failing the request And condition_score and derived features are persisted to the feature store with VIN, event_timestamp, and feature_view versions
Mileage Curve Normalization to Subject Mileage
Given a subject vehicle with mileage M_s and a set of external comps each with mileage M_i and price P_i When normalizing comps to M_s Then the engine applies a segment/model/year-specific mileage curve f such that adjusted_price_i = P_i + f(M_s) - f(M_i) And the adjustment is monotonic in mileage: for a fixed comp, higher M_s yields non-increasing adjusted_price_i And boundary handling caps effective mileage below 5,000 and above 300,000 miles using the curve’s defined limits And the output includes curve_version and per-comp mileage_adjustment values And unit test fixtures produce adjusted prices within ±0.5% of reference values
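A toy version of the normalization rule, with an invented piecewise-linear value curve f standing in for the fitted, versioned segment/model/year curve; the knot values are placeholders:

    from bisect import bisect_right

    KNOTS = [(0, 0.0), (50_000, -4_000.0), (100_000, -9_000.0),
             (150_000, -15_000.0), (300_000, -26_000.0)]

    def f(mileage: float) -> float:
        """Interpolated value offset at a mileage, clamped to curve limits."""
        mileage = min(max(mileage, 5_000), 300_000)  # boundary handling
        xs = [x for x, _ in KNOTS]
        i = min(bisect_right(xs, mileage), len(KNOTS) - 1)
        (x0, y0), (x1, y1) = KNOTS[i - 1], KNOTS[i]
        return y0 + (y1 - y0) * (mileage - x0) / (x1 - x0)

    def adjust_comp(price: float, comp_miles: float, subject_miles: float):
        """adjusted_price_i = P_i + f(M_s) - f(M_i), per the criterion above."""
        return price + f(subject_miles) - f(comp_miles)

    print(adjust_comp(21_000, 80_000, 120_000))  # subject has more miles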
Option, Region, and Seasonality Adjustments
Given a subject options list decoded to equipment codes, a region R, and a valuation date D When applying contextual adjustments Then option premiums/discounts are applied using the latest option_pricing table (additive or multiplicative per config), with each adjustment reported per option code And a regional demand multiplier for R and a seasonal index for month/quarter of D are applied and reported And conflicting or redundant options (e.g., packages) are de-duplicated via option hierarchy rules And unknown options default to zero adjustment and are logged for curation And toggling R or D changes the final value exactly by the published multiplier/index
Comparable Set Selection, Weighting, and Fallbacks
Given a subject vehicle defined by VIN-decoded year/make/model/trim, region, and mileage When selecting comparable vehicles Then at least 8 qualified comps (including ≥3 live bids if available) are returned or the fallback strategy is invoked And similarity weighting considers trim match, drivetrain, mileage delta, age delta, and region with documented weights that sum to 1.0 And outliers with influence above the configured threshold or residual beyond 2.5 standard deviations are excluded And if comps < 8 after exclusion, the search radius and year range expand stepwise up to configured limits; otherwise fall back to segment priors And the response includes the selection parameters used, number of comps, fallback_reason if any, and a reproducible list of comp IDs
Explainable Outputs: Top Factors and Comps Traceability
Given a completed valuation run When producing outputs Then the response includes top_factors (max 10) each with {name, type, direction, absolute_impact_currency} sorted by absolute impact And includes the list of comparable vehicles used with {id, source, similarity_weight, raw_price, adjustments_breakdown, adjusted_price} And all monetary fields are in the configured currency with 2 decimal places and clearly labeled And the schema validates against the versioned API contract and contains model_version and data_timestamp
Confidence Interval Estimation and Coverage Calibration
Given a valuation result with point_estimate V When computing uncertainty Then the response includes ci_68 and ci_95 bands such that ci_95 width ≥ ci_68 width and both contain V And empirical backtest over the last 1,000 valuations shows coverage of 68%±5% and 95%±5% respectively And the response includes uncertainty_method_version and inputs used for uncertainty (e.g., comp dispersion, model variance) And if sparse data triggers fallback, uncertainty bands widen according to the configured inflation factor and the reason is logged
Feature Store Integration, Retraining, Feedback Loop, and Lineage Audit Logs
Given the platform feature store and training pipeline When a weekly retraining job executes Then it reads features by feature_view versions, trains the normalization model, logs model_version, dataset_hash, code_commit, and hyperparameters, and writes artifacts to the registry And each online scoring call logs lineage_id linking request payload, feature versions, model_version, and comp IDs, retained for at least 365 days And realized sale prices ingested post-sale are linked by VIN and valuation_id; a feedback job updates the label store, computes error metrics (MAPE, calibration), and raises drift alerts if thresholds are breached And the next retraining incorporates feedback data and produces a report showing equal or improved MAPE by ≥2% absolute unless blocked by drift policy; otherwise rollback to prior model_version
Net Proceeds Calculator
"As an owner-operator, I want to see net proceeds by channel after fees, liens, taxes, and reconditioning so that I can choose the most profitable disposal path."
Description

Deliver a calculator that estimates channel-specific net resale proceeds (retail, wholesale, dealer buy, auction) by accounting for liens, taxes, title/registration fees, marketplace and auction fees, transport, and recommended reconditioning. Pull default assumptions from organization settings and location tax tables, suggest reconditioning based on fault codes and maintenance history, and allow user overrides with saved templates. Output includes expected time-to-cash, price sensitivity, and scenario comparisons with export to CSV/PDF. Persist assumptions per vehicle for audit, support multiple currencies, and integrate with FleetPulse cost tracking to reconcile projected vs. realized proceeds after sale.

Acceptance Criteria
Channel-Specific Net Proceeds Computation
- For a given VIN, mileage, and condition, the calculator displays four channels: Retail, Wholesale, Dealer Buy, Auction.
- Net proceeds per channel = Gross sale price estimate − (Outstanding lien payoff + applicable sales taxes + title/registration fees + marketplace/auction fees + transport/shipping + recommended reconditioning + other channel-specific fees), as sketched below.
- Tax calculation sources the vehicle’s location tax table and displays jurisdiction and rate; tax is applied according to jurisdiction rules configured in the tax table.
- Monetary values show currency code/symbol and two-decimal precision (or the currency’s minor unit); rounding is half-up and consistent across UI and exports.
- Negative net proceeds are supported and clearly indicated.
- Recalculation occurs immediately upon input change and updates all per-channel outputs in the same view.
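A minimal arithmetic sketch of the per-channel formula; the fee values are invented placeholders, and in practice fees come from org settings and the location tax table:

    def net_proceeds(gross: float, lien: float, tax_rate: float,
                     title_fees: float, channel_fees: float,
                     transport: float, recon: float) -> float:
        """Gross sale price minus all deductions for one channel."""
        taxes = gross * tax_rate
        return gross - (lien + taxes + title_fees + channel_fees
                        + transport + recon)

    # Example: wholesale channel for a hypothetical van
    print(round(net_proceeds(gross=14_500, lien=3_200, tax_rate=0.0,
                             title_fees=85, channel_fees=350,
                             transport=200, recon=900), 2))  # 9765.0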
Defaults, Tax Tables, Overrides, and Templates
- On initial load, default assumptions (fees, transport, recon, time-to-cash) are populated from org settings and the vehicle’s location tax table based on garage ZIP and selected channel.
- Users can override any assumption field; outputs update immediately upon change.
- Users can save the current set of overrides as a named template; applying a template reproduces saved values exactly.
- Templates are scoped to the organization and can be set as default per channel; a reset control restores system defaults.
- Changing the vehicle location refreshes tax rates/fees from the tax table and recalculates outputs accordingly.
- Overridden fields are visually indicated to distinguish them from defaults.
Reconditioning Suggestions from Fault Codes and History
- The system generates a list of recommended reconditioning items using current fault codes, inspection notes, and maintenance history, each with an estimated cost range and source reference.
- Users can accept or dismiss each suggested item; only accepted items contribute to the reconditioning total used in net proceeds.
- A toggle allows inclusion of an org default reconditioning template; overlapping items are not double-counted.
- Users may edit cost estimates for accepted items within allowed bounds; updates recalculate net proceeds immediately.
- The recon summary shows itemized totals and the aggregate amount included in each channel’s calculation.
Outputs: Time-to-Cash, Price Sensitivity, Scenario Comparison, Export
- Each channel displays an expected time-to-cash (days) derived from defaults; users can override per scenario and channel.
- Price sensitivity shows at least 5 price points within ±10% of the chosen list price, with predicted time-to-sell and resulting net proceeds per channel.
- Users can create and label at least 3 scenarios; the comparison view shows net proceeds, time-to-cash, and key assumptions side-by-side.
- Export to CSV and PDF includes the per-channel net proceeds breakdown, time-to-cash, price sensitivity table, and scenario comparison; exported figures match on-screen values within rounding rules.
- Exports include metadata: VIN, mileage, condition grade, location, currency, assumption template, calculation timestamp, and user ID.
Per-Vehicle Assumption Persistence and Audit Trail
- Every calculation session saves a versioned record per vehicle, including all inputs, assumptions, outputs, currency, user, and timestamp.
- Users can view a chronological history of versions and open any version in read-only mode.
- Editing assumptions creates a new version; prior versions remain immutable.
- The audit log records field changes (old/new values), template applications, overrides, exports generated, and reconciliation events.
- Audit records are retained for at least 24 months and can be exported.
Projected vs. Realized Proceeds Reconciliation
- When a sale record is captured in FleetPulse cost tracking, the system links it to the most recent calculator version within the previous 60 days, with manual link/unlink controls.
- The reconciliation report computes variance per component (sale price, lien, taxes, fees, transport, recon) and the overall net proceeds delta.
- Variances exceeding an org-defined threshold (amount or %) are flagged for review.
- Reconciliation adds a realized outcome to the vehicle’s history without altering the original projected version.
Multi-Currency Support and Localization
- Currency can be set at org or per-vehicle level; all calculations and displays use the selected currency.
- FX conversions use the configured rate source; the displayed rate timestamp is no older than 24 hours.
- Taxes are computed per the vehicle’s location tax table and converted to the display currency; converted totals reconcile within ±0.01 of the display currency due to rounding.
- UI and exports apply correct currency symbols/codes and locale-specific number formatting; rounding follows the currency’s minor unit (e.g., 0 decimals for JPY).
Optimal Sell Window & Value Cliff Alerts
"As a fleet manager, I want proactive alerts ahead of value cliffs and an optimal sell window forecast so that I can time sales to maximize equity and minimize downtime."
Description

Create a forecasting service that projects value over time and mileage, identifying upcoming value cliffs (e.g., 100k/150k mileage bands, warranty expirations, model-year transitions, seasonal dips/spikes). Use live OBD mileage rates and planned routes to estimate when thresholds will be crossed and compute an optimal sell window maximizing expected net proceeds. Provide configurable alert lead times (date and mileage deltas), smart snooze, and multi-channel notifications (in-app, email, push). Surface fleet-level rollups of at-risk vehicles and integrate with FleetPulse’s scheduling and tasking to initiate disposition workflows.

Acceptance Criteria
Vehicle-Level Value Cliff Forecasting Using Live Mileage
Given a vehicle with VIN, current odometer, and 30 days of OBD mileage readings And planned routes with estimated miles and dates are present When the forecasting service runs Then the system shall list the next value cliffs (mileage bands, warranty expiration dates, model-year transitions, and seasonal dip/spike windows) for that vehicle And for each cliff provide: cliff type, threshold value, predicted crossing date, and predicted odometer at crossing And the prediction shall incorporate both trailing 30-day average daily mileage and miles from planned routes And the forecast shall refresh within 30 minutes of a new OBD mileage reading or route update
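The crossing estimate blends the trailing 30-day daily-mileage average with dated route miles. A day-by-day sketch with invented inputs:

    from datetime import date, timedelta

    def predicted_crossing(current_odo: float, threshold: float,
                           daily_avg: float, planned: list[tuple[date, float]],
                           today: date) -> date:
        """Walk forward day by day, adding route miles on their dates."""
        odo, d = current_odo, today
        route_miles = dict(planned)
        while odo < threshold:
            d += timedelta(days=1)
            odo += daily_avg + route_miles.get(d, 0.0)
        return d

    # Example: 100k mileage band, 45 mi/day average, one planned 600-mile route
    print(predicted_crossing(
        current_odo=98_400, threshold=100_000, daily_avg=45.0,
        planned=[(date(2025, 7, 10), 600.0)], today=date(2025, 7, 1)))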
Optimal Sell Window Estimation and Net Proceeds Output
Given live market comps and buy-bid signals are available for the vehicle’s VIN, mileage, and condition And configurable fees and estimated reconditioning costs are set at the account or vehicle level When the forecasting service computes resale projections Then it shall output an optimal sell window (start date/mileage and end date/mileage) that maximizes expected net proceeds And include the expected net proceeds value for the optimal window and the baseline (holding) scenario And expose the output via API and in-app UI for the vehicle detail view
Configurable Alert Lead Times by Date and Mileage
Given a user has set alert lead times in days before a cliff and miles before a cliff When a predicted cliff crossing approaches within the configured thresholds Then the system shall generate an alert at the earlier of the two conditions (date or mileage) And allow per-vehicle overrides that take precedence over fleet defaults And shall not trigger duplicate alerts for the same cliff and threshold within a 24-hour window
Smart Snooze and Re-notification Logic
Given a user snoozes a value-cliff alert for a chosen duration or until a chosen mileage When the predicted crossing shifts earlier by more than 10% of the remaining snooze duration or 250 miles (whichever is smaller) Then the system shall resurface the alert immediately with the updated prediction And when a vehicle is marked disposed or a disposition task is completed, alerts for that cliff shall auto-cancel And all snooze and re-notification events shall be logged in the vehicle’s alert history
Multi-Channel Notifications Delivery and Content
Given user notification preferences are enabled for in-app, email, and push When a value-cliff or optimal-sell-window alert is triggered Then the system shall deliver the alert to all enabled channels within 2 minutes And each notification shall include: vehicle identifier, cliff type, predicted crossing date and mileage, optimal sell window summary, and a call-to-action link to start a disposition workflow And delivery status (sent, delivered, failed) shall be recorded per channel with timestamps
Fleet-Level At-Risk Rollup and Disposition Workflow Initiation
Given a fleet with multiple vehicles and active forecasts When viewing the Residual Pulse fleet dashboard Then vehicles with cliffs predicted within selectable horizons (30/60/90 days and 3k/5k/10k miles) shall be listed with sorting and filtering And bulk-selecting vehicles and choosing “Initiate Disposition” shall create scheduling tasks for each vehicle with pre-filled predicted cliff date/mileage and optimal sell window And created tasks shall appear in FleetPulse scheduling with assigned owner, due date at least 7 days before the cliff, and links back to each vehicle
Bid & Offer Aggregation
"As a small fleet manager, I want live, comparable buy-bids I can accept or counter in-app so that I can execute dispositions quickly when the price is right."
Description

Integrate partner marketplaces, dealers, and auction platforms to surface live, comparable buy-bid signals for each vehicle, with standardized offer schemas (price, fees, expiration, pickup terms). Support authenticated partner connections, webhook updates for bid refresh/expiry, and in-app actions to Accept/Counter/Reject with required disclosures. Enforce offer freshness SLAs, display provenance and confidence, and protect customer identity with consent-driven data sharing. Maintain a full audit trail, rate limiting, error handling, and sandbox endpoints for partner certification. Connect accepted offers to downstream logistics and accounting workflows in FleetPulse.

Acceptance Criteria
Standardized Offer Schema Mapping
Given partner offers are received via API in heterogeneous formats When the ingestion service processes an offer Then the offer is normalized to the standard schema with required fields: offerId, partnerId, vehicle.vin, vehicle.mileage, vehicle.conditionGrade, price.amount, price.currency (ISO 4217), fees.itemized[].type and .amount, expirationAt (ISO 8601 UTC), pickup.terms, pickup.window.start and .end (ISO 8601), transportIncluded, paymentTerms, provenance.source, confidence.score (0.0–1.0), and terms.disclosures[] And numeric amounts are validated as >= 0 with two-decimal precision and currency codes are valid ISO 4217 And expirationAt must be in the future at time of ingestion And netProceeds is computed consistently as price.amount minus sum(fees.itemized[].amount) And offers missing any required field or failing validation are rejected with error code OFFER_SCHEMA_INVALID and details indicating the offending fields
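
A condensed TypeScript sketch of that normalized schema and the netProceeds rule; only a subset of the required fields is shown, and any validation beyond what the criterion states is an assumption:

```typescript
interface FeeLine { type: string; amount: number; }

interface NormalizedOffer {
  offerId: string;
  partnerId: string;
  vehicle: { vin: string; mileage: number; conditionGrade: string };
  price: { amount: number; currency: string };  // ISO 4217 code
  fees: { itemized: FeeLine[] };
  expirationAt: string;                         // ISO 8601 UTC
  provenance: { source: string };
  confidence: { score: number };                // 0.0–1.0
}

// Reject invalid offers with OFFER_SCHEMA_INVALID naming the offending
// fields, then compute netProceeds = price minus the sum of itemized fees.
function validateAndPrice(offer: NormalizedOffer): number {
  const bad: string[] = [];
  if (offer.price.amount < 0) bad.push("price.amount");
  const exp = Date.parse(offer.expirationAt);
  if (!(exp > Date.now())) bad.push("expirationAt"); // must be in the future
  if (offer.confidence.score < 0 || offer.confidence.score > 1) bad.push("confidence.score");
  if (offer.fees.itemized.some((f) => f.amount < 0)) bad.push("fees.itemized");
  if (bad.length > 0) throw new Error(`OFFER_SCHEMA_INVALID: ${bad.join(", ")}`);
  const totalFees = offer.fees.itemized.reduce((sum, f) => sum + f.amount, 0);
  return offer.price.amount - totalFees;
}
```
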
Authenticated Partner Connection and Consent Controls
Given a fleet admin initiates connection to a partner marketplace When OAuth2 authorization completes with requested scopes Then access and refresh tokens are stored encrypted, least-privilege scopes are enforced, and connection status becomes Connected And a consent record is created capturing allowed data elements (VIN, mileage, location, photos, owner PII) with timestamp and version And until consent explicitly allows PII and precise location, outbound partner payloads redact customer identity and geolocation and use an anonymized contact proxy; VIN is shared only if consented, otherwise masked (e.g., last 5 only) When consent is revoked by the admin Then all partner tokens are invalidated within 60 seconds, subsequent partner API calls return 401, and the connection status becomes Revoked with an audit entry
Webhook Bid Refresh/Expiry and Idempotency
Given the system exposes a signed webhook endpoint for offer events When a partner sends offer.created, offer.updated, or offer.expired with eventId and signature Then the signature is verified and the request is accepted only if the timestamp skew is <= 5 minutes; otherwise respond 401 And events are deduplicated by eventId within a 24-hour window and processing is idempotent And on offer.updated that changes price or expirationAt, the system updates the record, recalculates netProceeds, and pushes UI updates within 2 seconds And when an offer expires (by event or TTL), its status becomes Expired, all in-app actions disable within 1 second, and the webhook is acknowledged with HTTP 2xx within 500 ms after durable persistence And inbound requests exceeding the partner’s rate limit receive 429 with a Retry-After header and are not processed
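
A minimal sketch of those webhook checks. The HMAC-SHA256 scheme over timestamp plus body is an assumption (the criterion requires a signature and a skew check but does not fix the scheme), and the in-memory map stands in for durable dedup storage:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const seenEventIds = new Map<string, number>(); // eventId -> receivedAt (ms)

// Verify signature and timestamp skew, then deduplicate by eventId.
function verifyWebhook(
  rawBody: string,
  headers: { signature: string; timestamp: string; eventId: string },
  secret: string,
): "ok" | "unauthorized" | "duplicate" {
  // Reject if the timestamp skew exceeds 5 minutes (respond 401 upstream).
  const skewMs = Math.abs(Date.now() - Number(headers.timestamp));
  if (!Number.isFinite(skewMs) || skewMs > 5 * 60 * 1000) return "unauthorized";

  const expected = createHmac("sha256", secret)
    .update(`${headers.timestamp}.${rawBody}`)
    .digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(headers.signature);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return "unauthorized";

  // Deduplicate by eventId within a 24-hour window; processing stays idempotent.
  const windowStart = Date.now() - 24 * 60 * 60 * 1000;
  const prior = seenEventIds.get(headers.eventId);
  if (prior !== undefined && prior > windowStart) return "duplicate";
  seenEventIds.set(headers.eventId, Date.now());
  return "ok";
}
```
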
In-App Offer Actions with Disclosures and Audit Log
Given a fresh (non-expired, non-stale) offer is visible to an authorized user When the user selects Accept Then a disclosures modal is displayed and acceptance requires checking all mandatory disclosures; upon confirmation, the offer status changes to Accepted and an immutable audit record captures userId, role, timestamp, ip, previousStatus, newStatus, and a content hash of the offer snapshot When the user selects Counter Then the user must supply a counter price and/or terms within partner-configured bounds; validation errors are shown inline; on submit the status becomes Countered and the partner is notified via API with the structured counter payload When the user selects Reject Then a rejection reason (from a predefined list) is required; status becomes Rejected and the partner is notified; an audit record is created And actions are blocked with an explanatory message on expired or stale offers or when disclosures have not been acknowledged
Offer Freshness SLA, Provenance, and Confidence Display
Given each partner has a configured freshness SLA in minutes When an offer’s age exceeds its SLA while not expired Then the offer is labeled Stale, Accept is disabled, and a Request Refresh action is available that triggers a partner refresh callback and logs an audit entry And each offer displays provenance (source name and type) and a confidence score between 0.0 and 1.0 with method; values outside the range are rejected at ingestion with error OFFER_CONFIDENCE_INVALID And if a refresh is not received within 5 minutes after a refresh request, the offer is auto-hidden from the default list while remaining accessible in history
Partner Sandbox Certification
Given a new partner registers for sandbox access When they execute the certification test suite Then they must successfully complete: OAuth2 authorization, webhook signature verification, idempotent event delivery, schema validation for offers, standardized error responses, and compliance with runtime limits (<= 600 requests/min, <= 60 requests/sec burst) And passing criteria are 100% of 50 required tests passing with p95 API latency < 800 ms and no more than 1 transient retry per test group And on pass, certification status becomes Approved and production credentials are issued automatically; on fail, a detailed report with reproducible examples is generated
Accepted Offer Handoff to Logistics and Accounting
Given an offer transitions to Accepted When the acceptance is persisted Then a Logistics Order is created with pickup window, pickup location, transport terms, and a proxy contact; and an Accounting Transaction is created with gross amount, itemized fees, estimated net proceeds, currency, payment terms, and partner payee profile And downstream systems receive logistics.order.created and accounting.transaction.created webhooks; failures are retried with exponential backoff until acknowledged or a max retry threshold is reached And subsequent updates from logistics (e.g., pickup rescheduled) and accounting (e.g., payment Posted) are reflected on the offer timeline within 2 seconds and captured in the audit trail
Residual Pulse Dashboard & API
"As an operations lead, I want a dashboard and API exposing valuations, alerts, and recommendations across my fleet so that I can prioritize vehicles and integrate decisions into my workflows."
Description

Provide a unified dashboard and secure API exposing per-vehicle valuations, confidence, sell window, value-cliff alerts, live bids, and net proceeds scenarios. Embed widgets in the vehicle detail and fleet overview pages with filters, sorting by expected equity uplift, and bulk recommendations. Support role-based access control, organization-level defaults, localization (currency, tax rules, units), and export. The API must be versioned with OAuth scopes, pagination, and usage quotas, enabling external workflow integration (e.g., listing creation, ERP). Include performance SLAs, observability, and graceful degradation when upstream data is delayed.

Acceptance Criteria
Vehicle Detail and Fleet Overview Widgets Render Core Residual Metrics
Given a permitted user views a vehicle detail page with VIN, mileage, and condition available When the Residual Pulse widget loads Then it displays valuation (localized currency), confidence (0–100%), sell window (start/end), value‑cliff alert (if within 1,000 miles or 14 days), live bids (count and highest bid), and net proceeds scenarios (low/base/high) Given a fleet overview page with 3–100 vehicles When the Residual Pulse columns render Then each row shows expected equity uplift and value‑cliff indicator where applicable Given normal operating conditions When widgets load Then p95 render time is <= 2.0s and p99 <= 3.5s for fleets up to 100 vehicles Given market data freshness SLA of 24 hours When last update age > 24 hours Then the UI shows a Stale badge with last‑updated timestamp and interaction remains enabled
Fleet Overview Recommendations: Sorting by Uplift, Filters, and Bulk Actions
Given vehicles with computed expected equity uplift and confidence When sorting by Expected Equity Uplift (desc) Then rows are ordered by uplift descending with ties broken by higher confidence, and the sort is stable across pagination Given filters for sell window (next N days), mileage bands, confidence >= threshold, location, and make/model When filters are applied Then only matching vehicles are returned, result counts update, and clearing filters restores the full set Given a user selects up to 100 vehicles When Bulk Recommend is executed Then recommendations complete within 10 seconds, a summary shows counts (recommended/deferred/insufficient data), and an audit entry records user, timestamp, and criteria used
Role-Based Access Control and OAuth Scope Enforcement
Given role = Owner or Manager When accessing Residual Pulse UI modules and API endpoints Then access is granted Given role = Technician When accessing Residual Pulse UI modules or endpoints Then UI modules are hidden and API responses return 403 with error code forbidden Given an OAuth token without residual:read scope When calling GET /api/residual/v1/valuations Then response is 403 with error code insufficient_scope and required scopes listed Given a token with residual:read but not residual:write When calling POST /api/residual/v1/recommendations Then response is 403 Given a user from Org A When requesting vehicle data for Org B Then response is 404 and no cross‑org identifiers are leaked
Localization of Currency, Tax Rules, and Units
Given org defaults: locale en-US, currency USD, units miles, tax rate 8.25% When viewing net proceeds Then values are in USD with $ symbol, miles displayed, tax applied per rule, and rounding to 2 decimals Given a user preference override to de-DE, EUR, kilometers When viewing the same vehicle Then values are in €, kilometers displayed, numeric formatting follows de-DE, and currency conversion accuracy is within ±0.01 of computed exchange rate Given an export action When exporting CSV/JSON Then values use the active currency, units, and tax rules; CSV uses UTF‑8 with dot decimal; JSON numeric fields are unformatted numbers
Public API v1: Versioning, Pagination, and Quotas
Given Accept: application/vnd.fleetpulse.residual.v1+json When calling GET /api/residual/v1/valuations?limit=50 Then status is 200 with up to 50 items, next_cursor when more data exists, and a schema including vin, mileage, condition_grade, valuation.amount/currency, confidence, sell_window.start/end, value_cliff.alert/miles_to_cliff/days_to_cliff, live_bids[], net_proceeds.scenarios[low|base|high] Given no explicit version header When calling the endpoint Then v1 is served and Content-Type indicates v1 Given repeated pagination using next_cursor until exhaustion When consuming all pages Then no item is duplicated or missing Given requests exceed 10 req/sec or 1,000 req/day per org When limits are exceeded Then responses are 429 with X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers
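
A client-side sketch of consuming that endpoint with cursor pagination and 429 backoff. The `cursor` query parameter name and seconds-based X-RateLimit-Reset semantics are assumptions; the response envelope (`items`, `next_cursor`) follows the criterion:

```typescript
async function fetchAllValuations(baseUrl: string, token: string): Promise<unknown[]> {
  const items: unknown[] = [];
  let cursor: string | undefined;
  while (true) {
    const url = new URL(`${baseUrl}/api/residual/v1/valuations`);
    url.searchParams.set("limit", "50");
    if (cursor) url.searchParams.set("cursor", cursor);
    const res = await fetch(url, {
      headers: {
        Accept: "application/vnd.fleetpulse.residual.v1+json",
        Authorization: `Bearer ${token}`,
      },
    });
    if (res.status === 429) {
      // Back off per the rate-limit headers rather than retrying immediately.
      const resetSec = Number(res.headers.get("X-RateLimit-Reset") ?? "1");
      await new Promise((r) => setTimeout(r, resetSec * 1000));
      continue; // retry the same page after the window resets
    }
    const page = await res.json();
    items.push(...page.items);
    if (!page.next_cursor) break; // pagination exhausted, no duplicates/gaps
    cursor = page.next_cursor;
  }
  return items;
}
```
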
Export of Residual Pulse Data to CSV and JSON
Given a filtered and sorted fleet overview When exporting to CSV or JSON for selected vehicles or all results up to 5,000 rows Then the export reflects current filters and sort, includes required fields (valuation, confidence, sell window, value‑cliff, live bids, net proceeds), and contains a CSV header row Given an export job is initiated When processing completes Then a downloadable link is available within 30 seconds for <= 5,000 rows, the link expires after 24 hours, permissions are enforced, and an audit record is created
Graceful Degradation and Observability Under Upstream Delays
Given upstream market data latency > 60 minutes When rendering UI or serving API Then last known data is served with freshness_age metadata and a visible Data Delayed banner; interaction remains non‑blocking Given upstream outage > 15 minutes When requests are made Then a circuit breaker serves cached data and suppresses upstream calls for 5 minutes before retrying with exponential backoff; error rate remains < 1% with p95 latency within SLA Given normal operation When observing monitoring Then metrics exist for p50/p95 latency, error rate, cache hit ratio, freshness age, and quota denials; each request is traced and logged with request_id and org_id Given published SLAs (UI p95 <= 2.0s for 3–100 vehicles; API p95 <= 300ms cached and <= 800ms cache‑miss; availability 99.9% monthly) When measured over a rolling 30‑day window Then SLOs meet or exceed targets or paging alerts are triggered

Reliability Index

Scores each vehicle’s likelihood of repeat failures by combining DTC recurrence, component age, environment, and prior fix outcomes. Flags emerging money pits early and explains the top risk drivers, helping you avoid throwing good money after bad.

Requirements

Unified Telemetry & Maintenance Data Normalization
"As a fleet manager, I want FleetPulse to unify my diagnostics, service records, and operating context so that the Reliability Index reflects a complete and accurate vehicle history."
Description

Ingest and normalize all inputs required for the Reliability Index, including OBD-II DTC occurrences with timestamps and mileage/engine hours, maintenance and repair records (parts replaced, costs, warranty status), component age and service life, vehicle usage intensity (idle %, load, trip length), and environmental context (temperature bands, humidity, road-salt region, terrain). Map all signals to vehicles and components using a canonical component taxonomy. De-duplicate and time-bucket recurring DTCs, standardize units (miles, hours, Celsius/Fahrenheit), and compute derived features such as days since last repair, DTC recurrence over rolling windows, part age in miles/hours, and trailing repair cost-to-vehicle value ratios. Enforce data quality checks, handle missing data with explicit defaults, and support near real-time updates (under 5 minutes) when new diagnostic or service events arrive. Provide a versioned schema to ensure downstream scoring and explanations consume consistent, reliable data.

Acceptance Criteria
DTC Ingestion and Vehicle/Component Mapping
Given an inbound OBD-II payload containing VIN, one or more DTC codes, event timestamp with timezone, and mileage or engine hours When the payload is processed Then a normalized record is created per vehicle+dtc_code with fields [vehicle_id, dtc_code, occurred_at_utc, mileage_mi, engine_hours_hr, source_event_id, raw_payload_ref] And occurred_at is converted to UTC with millisecond precision retained And vehicle_id is resolved from VIN; if no match, the event is quarantined to unknown_vehicle with retry flag set And dtc_code is mapped to a canonical component_id via the taxonomy; if no mapping, component_id='unknown' and mapping_status='unmapped' And processing result is recorded with status in ['accepted','quarantined'] and processing_duration_ms <= 1500 at p95
Maintenance/Repair Record Normalization and Warranty Mapping
Given a repair order containing VIN, service_date, parts list with part_number and description, line and total costs, warranty status, and mileage or engine hours When the record is processed Then each part line is mapped to a canonical component_id; unmapped lines are flagged mapping_status='unmapped' And a component_installation event is emitted for replaced parts with start_mileage_mi and/or start_engine_hours_hr captured And service_date is normalized to UTC date, and warranty_status is one of ['in_warranty','expired','unknown'] And costs are recorded in USD cents with sum(line_cost_cents) = total_cost_cents And the record links to vehicle_id via VIN or is quarantined to unknown_vehicle with retry
Recurring DTC De-duplication and Time Bucketing
Given multiple occurrences of the same dtc_code for the same vehicle within dedup_window_minutes (default 30) When processed Then only one normalized occurrence is emitted for that window with recurrence_count equal to the number of occurrences suppressed And the normalized occurrence carries the earliest occurred_at_utc among duplicates and references suppressed source_event_ids And occurrences are aggregated into fixed buckets of bucket_interval_minutes (default 5); counts per vehicle+dtc_code per bucket equal the number of deduped occurrences within that bucket And dedup/bucket parameters are externally configurable and captured on each record [dedup_window_minutes, bucket_interval_minutes]
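
A sketch of that dedup-and-bucket rule with the stated defaults: grouping is per vehicle+dtc_code, the earliest occurrence in each window is kept, suppressed event IDs are carried along, and the parameters are captured on each record. Persistence and configuration plumbing are assumed:

```typescript
interface DtcEvent {
  vehicleId: string;
  dtcCode: string;
  occurredAtUtc: number;      // epoch ms
  sourceEventId: string;
}

function dedupAndBucket(
  events: DtcEvent[],
  dedupWindowMinutes = 30,    // default per the criterion
  bucketIntervalMinutes = 5,  // default per the criterion
) {
  const windowMs = dedupWindowMinutes * 60_000;
  const bucketMs = bucketIntervalMinutes * 60_000;

  // Group by vehicle+dtc_code.
  const byKey = new Map<string, DtcEvent[]>();
  for (const e of events) {
    const key = `${e.vehicleId}|${e.dtcCode}`;
    const group = byKey.get(key);
    if (group) group.push(e);
    else byKey.set(key, [e]);
  }

  const emit = (earliest: DtcEvent, suppressed: string[]) => ({
    ...earliest,                                   // earliest occurrence kept
    recurrenceCount: suppressed.length,            // occurrences suppressed
    suppressedSourceEventIds: suppressed,
    bucketStartUtc: Math.floor(earliest.occurredAtUtc / bucketMs) * bucketMs,
    dedupWindowMinutes,                            // parameters captured on record
    bucketIntervalMinutes,
  });

  const out: Array<ReturnType<typeof emit>> = [];
  for (const group of byKey.values()) {
    group.sort((a, b) => a.occurredAtUtc - b.occurredAtUtc);
    let earliest = group[0];
    let suppressed: string[] = [];
    for (const e of group.slice(1)) {
      if (e.occurredAtUtc - earliest.occurredAtUtc <= windowMs) {
        suppressed.push(e.sourceEventId);          // duplicate within window
      } else {
        out.push(emit(earliest, suppressed));      // close window, start new one
        earliest = e;
        suppressed = [];
      }
    }
    out.push(emit(earliest, suppressed));
  }
  return out;
}
```
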
Units Standardization and Versioned Schema Compliance
Given inputs containing distance in kilometers or miles, temperature in Fahrenheit or Celsius, and time in local timezones When normalized Then distances are stored as miles (mi), engine runtime as hours (hr), and temperature as Celsius (C), using precise conversions And every persisted normalized and derived record validates against JSON Schema 'fleetpulse.reliability.normalized.v1' and includes schema_version in semver format And records with a major version mismatch are rejected and quarantined with error_code='SCHEMA_VERSION_MISMATCH' And a sample of 1,000 records validates with 0 schema errors
Derived Feature Computation for Reliability Index Inputs
Given new diagnostic or service events for a vehicle/component When normalization completes Then derived fields are updated: days_since_last_repair, dtc_recurrence_7d, dtc_recurrence_30d, dtc_recurrence_90d, part_age_miles, part_age_hours, trailing_12mo_repair_cost_to_value_ratio And part_age_miles = current_mileage_mi - last_install_mileage_mi (if both present) and part_age_hours = current_engine_hours_hr - last_install_engine_hours_hr (if both present) And trailing_12mo_repair_cost_to_value_ratio = sum(repair_cost_cents last 365 days) / vehicle_estimated_value_cents; if denominator missing, set ratio_default = -1 and missing_reason='vehicle_value_missing' And derived fields become queryable within 5 minutes p95 of the triggering event's ingestion And unit fields and value ranges are validated against the schema with 0 validation errors in test fixtures
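
The derived-feature arithmetic above is simple enough to show directly; a sketch of two of the fields with the explicit −1 defaults and missing reasons from the criterion (the storage and query layers are assumed):

```typescript
// Field names follow the criterion; only a subset of derived fields is shown.
interface VehicleSnapshot {
  currentMileageMi: number;
  lastInstallMileageMi?: number;          // absent when no install history
  repairCostCentsLast365d: number;
  vehicleEstimatedValueCents?: number;    // absent when no value source resolves
}

function deriveFeatures(v: VehicleSnapshot) {
  // part_age_miles = current mileage minus last install mileage (if both present).
  const partAgeMiles =
    v.lastInstallMileageMi !== undefined
      ? v.currentMileageMi - v.lastInstallMileageMi
      : -1;                               // missing_reason='no_install_history'

  // ratio = trailing 12-month repair spend / vehicle value; -1 when value missing.
  const costToValueRatio = v.vehicleEstimatedValueCents
    ? v.repairCostCentsLast365d / v.vehicleEstimatedValueCents
    : -1;                                 // missing_reason='vehicle_value_missing'

  return {
    part_age_miles: partAgeMiles,
    trailing_12mo_repair_cost_to_value_ratio: costToValueRatio,
  };
}
```
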
Data Quality Validation and Missing-Data Defaults
Given any normalized record When validation runs Then required fields per record type are present at a completeness rate >= 98% over a 24-hour window; violations are quarantined with explicit error_code And optional fields missing are populated with explicit defaults plus missing_reason codes per field (e.g., component_age_miles_default=-1, missing_reason='no_install_history') And environmental context is derived or defaulted: temperature_band, humidity_band, road_salt_region, terrain_class; if source data is missing, set value='unknown' and missing_reason accordingly And a daily automated data-quality report is produced showing completeness, quarantine counts, and top error codes, and is stored to the ops dashboard
Near Real-Time Processing and Idempotent Updates
Given a stream of new diagnostic or service events When processed under normal operating load Then end-to-end latency from ingestion to availability in the normalized store and derived features is < 5 minutes at p95 and < 10 minutes at p99 over a rolling 24-hour window And processing is idempotent: re-submission of the same source_event_id does not create duplicate normalized or derived records And per-vehicle ordering is preserved: events with earlier occurred_at_utc are not applied after later ones; if out-of-order arrival happens, final state reflects correct temporal order And operational alerts fire if p95 latency breaches 5 minutes for 3 consecutive 5-minute intervals
Reliability Score Engine (0–100) with Component Sub-scores
"As an operator, I want a single 0–100 reliability score and component sub-scores so that I can compare vehicles at a glance and focus on the riskiest systems first."
Description

Compute a calibrated 0–100 Reliability Index per vehicle with optional component-level sub-scores by combining weighted factors: DTC recurrence frequency and recency with exponential decay; DTC severity; component age versus expected life adjusted for environment and duty cycle; prior fix outcomes and number of repeat failures post-repair; repair cost trajectory relative to vehicle value and TCO; and usage intensity. Produce a score, confidence band, and model version for every calculation. Support event-driven recalculation on new DTCs, repairs, or mileage updates with sub-second scoring latency per vehicle. Define minimum data thresholds and graceful fallbacks when information is sparse. Provide documented formulas/weights with configuration hooks for future model iterations and maintain a changelog to ensure traceability.
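
To make the decay factor concrete, here is a sketch of a half-life-weighted DTC recurrence term, assuming severity weights in [0, 1] and a configurable half-life H in days; how this term blends with the other weighted inputs is left to the documented formula:

```typescript
interface DtcOccurrence {
  occurredAt: Date;
  severityWeight: number; // e.g., 0.2 minor .. 1.0 critical (assumed mapping)
}

function decayWeightedRecurrence(
  occurrences: DtcOccurrence[],
  halfLifeDays: number,     // configured half-life H
  asOf: Date = new Date(),
): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  let score = 0;
  for (const o of occurrences) {
    const ageDays = (asOf.getTime() - o.occurredAt.getTime()) / msPerDay;
    // Each occurrence contributes severity * 0.5^(age / halfLife): a DTC from
    // one half-life ago counts half as much as one from today.
    score += o.severityWeight * Math.pow(0.5, ageDays / halfLifeDays);
  }
  return score;
}
```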

Acceptance Criteria
Reliability Score Output Completeness
Given valid input data for a vehicle, When the score engine runs, Then it returns reliability_score ∈ [0, 100] as a numeric value with max 2 decimal places. And it returns a confidence_band with lower_bound and upper_bound ∈ [0, 100] where lower_bound ≤ reliability_score ≤ upper_bound. And it returns a non-empty model_version in semantic versioning format (MAJOR.MINOR.PATCH). And with identical inputs, repeated runs return identical reliability_score, confidence_band, and model_version.
Component Sub-scores and Aggregation with Explanations
Given sub_scores are requested, When the engine runs, Then it returns sub_scores for applicable components with each sub_score ∈ [0, 100]. And each component includes top_risk_drivers listing at most 3 drivers ranked by contribution. And the overall reliability_score equals the documented weighted aggregation of sub_scores plus non-component factors within ±0.1 tolerance. And components lacking sufficient data are either omitted from sub_scores or marked with data_sparsity=true and a widened component confidence_band; aggregation uses documented fallback priors only.
Event-Driven Recalculation on New DTC, Repair, or Mileage
Given a new DTC, repair, or mileage update for a vehicle, When the event is received, Then a recalculation is completed within 500 ms at p95 and 900 ms at p99 per-vehicle from event receipt to score persistence. And duplicate events (same event_id) within 60 seconds do not trigger additional recalculations and produce a single changelog entry. And recalculation is scoped to the affected vehicle and does not block scoring for other vehicles.
Correct Application of Weights and Exponential Decay
Given historical DTCs with timestamps and severities, When scoring, Then recurrence frequency is weighted by an exponential decay using the configured half-life H, giving more weight to recent DTCs. And DTC severity weights are applied according to the documented mapping and combined with recurrence using the documented formula. And component age is normalized against expected life and adjusted by environment and duty cycle per documentation. And prior fix outcomes and post-repair repeats adjust risk in the direction and magnitude defined in the documentation. And usage intensity is included as a factor per the documented formula. And unit tests validate outputs for a set of reference fixtures, each matching the expected score within ±0.5.
Minimum Data Thresholds and Graceful Fallbacks
Given inputs that meet the documented sparse-data thresholds, When scoring, Then the engine uses fallback priors and returns low_data=true with a confidence_band width greater than or equal to the configured minimum for sparse data. And if no usable domain signals are present, Then the engine returns a prior-based score with very_low_confidence=true and a confidence_band width greater than or equal to the configured minimum for empty data, and includes an explanation indicating fallback was used. And thresholds and fallback parameters are externally configurable and changes are reflected without code redeploy.
Repair Cost Trajectory Relative to Vehicle Value and TCO
Given repair cost history, vehicle value, and TCO inputs, When scoring, Then the repair_cost_factor reflects the trajectory of repair spend relative to value and TCO as defined in the documented formula. And an increasing repair spend trend raises the risk contribution; a decreasing trend lowers it; zero or flat spend yields baseline contribution. And the repair_cost_factor is normalized and bounded per configuration to avoid dominating the overall score.
Traceability, Configuration Hooks, and Changelog
When any formula or weight is changed via configuration, Then a new model_version is generated and used for all subsequent calculations. And every change produces a changelog entry with timestamp, actor, description, and the before-and-after values or references. And given a past calculation's inputs and model_version, Then the system can reproduce the same score within ±0.1 by loading the versioned configuration. And documentation of formulas, weights, factor definitions, and thresholds is published and accessible to authorized users.
Risk Driver Explanations & Evidence Links
"As a fleet manager, I want clear explanations of what is driving a high risk score so that I can decide whether to repair, monitor, or retire a vehicle."
Description

Surface the top contributing factors that drive each vehicle’s Reliability Index and quantify each factor’s impact. Present plain-language explanations with evidence, such as: “P0420 occurred 4 times in 30 days after catalyst replacement (+18),” “Brake rotor age 85k miles above route norm (+12),” or “Average ambient below 20°F increases cold-start strain (+6).” Link each explanation to underlying events (DTC instances, repair orders, costs, mileage snapshots). Provide API fields for contributors, weights, and referenced event IDs. Ensure explanations remain available historically alongside the score at that point in time, enabling audits and stakeholder transparency.

Acceptance Criteria
Display Top Risk Drivers with Quantified Impacts
Given a vehicle has a Reliability Index computed for date T and at least one contributor When the user opens the Reliability Index details for that vehicle at date T Then the UI shows the top 5 contributors (or all if fewer) sorted by absolute impact descending And each contributor displays a plain-language explanation and a signed integer impact in parentheses (e.g., +18 or -6) And the sum of listed impacts equals the displayed Reliability Index score within ±1 And no contributor without sufficient evidence is displayed
Evidence Links to Underlying Events
Given a contributor references events (DTCs, repair orders, costs, mileage, environmental snapshots) When the user expands the contributor row Then a list of evidence items is shown with event type, event ID, timestamp, and a link to the item And clicking a link navigates to the event detail successfully (HTTP 200) And the number of evidence items matches the counts stated in the explanation (e.g., "4 times" => 4 DTC instances) And all referenced event IDs exist and are not soft-deleted
API Fields for Contributors, Weights, and Event References
Given an API consumer calls GET /vehicles/{vehicleId}/reliability-index?asOf=ISODate When the response is returned Then it includes contributors[] with fields: id (UUID), type (enum: DTC_RECURRENCE|COMPONENT_AGE|ENVIRONMENT|FIX_OUTCOME|OTHER), code (nullable string), description (string), impact (int), weight (float), evidence[] with eventId (string), eventType (enum), occurredAt (ISO 8601) And contributors are ordered by absolute impact descending And all fields validate against the published OpenAPI schema And no personally identifiable information is present in the payload
Historical Explanations Snapshot
Given a vehicle had a Reliability Index snapshot on date T0 When the user requests the Reliability Index and contributors as of T0 (via UI or API) Then the explanations, impacts, weights, and evidence IDs reflect the state at T0 regardless of data added or changed after T0 And snapshotCreatedAt is at or before T0 and remains stable across subsequent requests And snapshots are retained and retrievable for at least 24 months from T0 And the snapshot includes a calculationVersion identifier
Plain-Language Explanation Formatting
Given a contributor is generated When the explanation is rendered Then it follows the format "<phenomenon> <quantifier/context> <time window or baseline> (<signed impact>)" without internal system jargon And numeric values include units (miles, days, °F) and are rounded appropriately (counts to 0 decimals; miles/°F to nearest whole unit) And DTC codes are accompanied by a short human-readable label when available And the explanation meets a readability target of Flesch–Kincaid grade level ≤ 10 And locale formatting rules are applied to numbers
Edge Cases and Data Quality Rules
Given incomplete or conflicting input data When contributors are computed Then contributors requiring unavailable data are omitted or labeled "insufficient evidence (0)" And no duplicate contributor type is emitted for the same underlying factor And an individual contributor's absolute impact is capped at 40 and the total number of contributors does not exceed 10 And ties in impact are broken by recency (newer evidence first) And environment-related contributors are only emitted if telemetry coverage in the last 30 days is ≥ 80%
Money Pit Detection Rules & Alerting
"As an owner-operator, I want to be alerted when a vehicle is becoming a money pit so that I can avoid throwing good money after bad."
Description

Detect emerging money pits using rule thresholds and trends: high and rising Reliability Index, repeated failures after two or more repairs, and trailing six-month repair spend exceeding a configurable percentage of vehicle value. Generate actionable alerts with a concise summary, risk score, key drivers, and recommended next steps (deeper diagnosis, second opinion, or disposal). Support in-app, email, and SMS channels with deduplication, rate limiting, snooze, and user-scope configuration of thresholds and channels. Log alert history and resolution state to enable learning loops and downstream reporting.
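
A sketch of two of these rule evaluations, using the H/D/T parameters named in the acceptance criteria that follow; alert creation, dedup against active alerts, and the repeat-failure rule are omitted:

```typescript
interface MoneyPitInputs {
  riToday: number;                    // Reliability Index today
  riMinLastPDays: number;             // min(RI over last P days)
  repairSpend6moCents: number;        // closed repair orders, parts + labor
  vehicleValueCents: number | null;   // null when no value source resolves
}

interface MoneyPitConfig {
  riHighThreshold: number;            // H
  riRiseDelta: number;                // D
  spendToValuePercent: number;        // T
}

function evaluateMoneyPitRules(v: MoneyPitInputs, cfg: MoneyPitConfig): string[] {
  const fired: string[] = [];
  // High & Rising RI: RI_today >= H and (RI_today - RI_min_last_P) >= D.
  if (
    v.riToday >= cfg.riHighThreshold &&
    v.riToday - v.riMinLastPDays >= cfg.riRiseDelta
  ) {
    fired.push("High & Rising RI");
  }
  if (v.vehicleValueCents === null) {
    // Per the criteria: skip and log a skipped evaluation event with
    // reason = "missing_vehicle_value" (logging omitted in this sketch).
  } else if (
    (v.repairSpend6moCents / v.vehicleValueCents) * 100 >= cfg.spendToValuePercent
  ) {
    fired.push("High Repair Spend vs Value");
  }
  return fired;
}
```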

Acceptance Criteria
Trigger — High & Rising Reliability Index
Given account_config.ri.high_threshold = H, account_config.ri.rise_delta = D, and account_config.ri.rise_window_days = P And a vehicle’s Reliability Index today (RI_today) >= H And (RI_today − min(RI over last P days)) >= D When the Money Pit evaluation job runs Then exactly one new alert with rule = "High & Rising RI" is created for the vehicle if no active alert of this rule exists And the alert state is "open" And the alert captures observed values {RI_today, RI_min_last_P, H, D, P} as metadata
Trigger — Repeat Failures After 2+ Repairs
Given account_config.repeat_failures.window_days = W And a vehicle has >= 2 closed repair orders addressing the same DTC/component within the last W days And the same DTC recurs (active or stored) within W days of the second repair’s close date When the Money Pit evaluation job runs Then exactly one new alert with rule = "Repeat Failures ≥ 2 Repairs" is created for the vehicle if no active alert of this rule exists And the alert captures observed values {dtc, component, repair_order_ids, recurrence_timestamp, W} as metadata
Trigger — 6-Month Repair Spend Exceeds Vehicle Value Threshold
Given trailing_6_months_repair_spend S = sum(parts + labor) from closed work orders type = "Repair" (exclude preventive maintenance) And vehicle_value V = latest non-null value according to account_config.value_source precedence And account_config.spend_to_value_percent = T When (S / V) * 100 >= T Then exactly one new alert with rule = "High Repair Spend vs Value" is created for the vehicle if no active alert of this rule exists And if V is null, no alert is created and a skipped evaluation event with reason = "missing_vehicle_value" is logged And the alert captures observed values {S, V, T} as metadata
Alert Content — Summary, Risk Score, Drivers, Next Steps
Given a Money Pit alert is created by any rule When the alert is persisted Then the alert payload contains: title, vehicle_id, rule_id, state, created_at, risk_score (0..100), summary (≤ 280 chars), top_risk_drivers (≥ 3 items with label and contribution%), recommended_next_steps (≥ 1 from {deeper_diagnosis, second_opinion, disposal}), and links {vehicle_profile_url, diagnostics_url} And the explanation includes triggering rule parameters and observed values used in the decision And in-app view displays all fields; email and SMS bodies include summary, risk_score, top driver, and a link; SMS body length ≤ 160 characters
Delivery, Configuration & Deduplication — In-App, Email, SMS with Rate Limiting
Given account-level default thresholds and user-level channel preferences exist And an alert enters state = "open" When notification dispatch runs Then in-app notifications are created for all users whose permission scope includes the vehicle And email/SMS are sent only to users who have those channels enabled And each user receives at most one notification per channel per alert (per-user-per-channel deduplication) And per-vehicle rate limiting enforces max account_config.alerts.max_per_rule_per_24h and max_per_vehicle_per_24h across Money Pit rules And suppressed deliveries due to deduplication or rate limiting are logged with reason When an admin updates thresholds/channels in Settings Then inputs are validated (ranges, types, required) and persisted And new evaluations use updated thresholds within ≤ 15 minutes of change
Snooze, Acknowledge, and Resolve Controls
Given a user with permission to manage alerts When the user snoozes an alert for 7, 14, 30 days, or a custom duration within configured bounds Then the alert remains in state = "open" with snooze_until set, and no notifications are sent for that alert until snooze expiry And evaluations during snooze do not create new alerts for the same vehicle+rule When the user acknowledges an alert Then state transitions to "acknowledged" with timestamp and actor recorded When the user resolves an alert with resolution_reason in {repaired, decommissioned, false_positive, other} and optional note Then state transitions to "resolved" and a cool-off period of account_config.alerts.cool_off_days prevents re-triggering of the same rule for the vehicle until expiry
Logging, Audit, and Reporting Enablement
Given any alert lifecycle or delivery event occurs (created, deduplicated, rate_limited, snoozed, unsnoozed, delivered, bounce, acknowledged, resolved) When the event is processed Then an immutable audit record is written with {alert_id, vehicle_id, rule_id, event_type, timestamp_utc, actor_id (if applicable), channel (if applicable), outcome, metadata} And alert history is queryable and exportable (CSV) by time range, vehicle, rule, channel, and resolution_reason And per-recipient delivery outcomes are recorded (sent, opened if available, failed) And audit records are retained for ≥ 24 months
Reliability Trend Visualization & Event Timeline
"As a small fleet manager, I want to see how a vehicle’s reliability score changes after repairs so that I can validate whether fixes worked and track improvements over time."
Description

Provide visualizations for per-vehicle and fleet-level Reliability Index trends with overlays for key events: DTC occurrences, repairs, part replacements, and costs. Support daily/weekly/monthly aggregation, change-point annotations, and pre/post-repair comparison windows to measure fix effectiveness. Include filters by route, region, vehicle type, and component. Enable export to PDF/CSV and ensure mobile-responsive layouts for field use. Persist view configurations and integrate with the existing vehicle profile page for fast drill-down.

Acceptance Criteria
Per-Vehicle Reliability Trend with Event Overlays & Change-Points
Given a vehicle with telemetry and service history for the selected date range, When the user opens the Reliability tab on the vehicle profile, Then a Reliability Index line chart renders for the last 90 days by default using daily aggregation within 2 seconds (p95). Given events exist in the range, When the chart loads, Then DTC, repair, part replacement, and cost overlays appear as distinct markers with a legend, and the total markers equal the backend event count for the range. Given a user hovers or taps a marker, When the tooltip opens, Then it shows event type, timestamp (local timezone), component, and cost (if any) matching backend values within ±$0.01. Given change-points exist, When the “Show change-points” toggle is ON, Then annotations render at correct timestamps with delta values; When the toggle is OFF, Then no change-point markers are visible. Given a 365-day range is selected, When the chart renders, Then p95 tooltip open latency is < 150ms.
Aggregation Controls: Daily, Weekly, Monthly
Given daily granularity is the default, When the user switches to weekly or monthly, Then the number of data points equals the number of complete periods within the selected date range. Given a granularity is selected, When values are computed, Then the Reliability Index per period equals the arithmetic mean of the daily values in that period rounded to 2 decimals. Given event overlays are enabled, When granularity changes, Then event counts per period equal the sum of events in that period and marker clustering aligns to period boundaries. Given the selected range contains no data, When the chart loads, Then an empty state message “No data in selected range” is displayed and controls remain enabled.
Fleet Trend with Filters and Drill-Down to Vehicle
Given the fleet Reliability view is open with no filters, When the chart loads, Then the fleet Reliability Index trend displays for the default date range using weekly aggregation and includes a total vehicle count. Given filters for route, region, vehicle type, and component, When the user applies one or more filters, Then the trend, event overlays, and counts update to match backend-filtered results and the displayed vehicle count equals the filter result. Given a data point or vehicle is selected in the fleet view, When the user clicks “Drill down,” Then navigation to the vehicle profile Reliability tab occurs within 1.5 seconds (p95) and the date range, granularity, and relevant filters are preserved. Given filters are active, When the user clears all filters, Then the view reverts to the unfiltered fleet trend and vehicle count updates accordingly.
Pre/Post-Repair Comparison Window & Fix Effectiveness
Given a repair event marker is selected, When the user enables comparison and chooses window lengths (e.g., 14 days pre and 14 days post), Then the chart highlights both windows and displays average Reliability Index per window, absolute delta, and percent change. Given either window has fewer than 3 data points, When the user requests comparison, Then the UI shows “Insufficient data” and disables delta metrics. Given multiple repair events exist, When the user navigates between events, Then comparison metrics update within 300ms and reflect the newly selected event. Given the user changes window lengths, When metrics recompute, Then values match backend computations to 2 decimal places.
Export Current View to PDF/CSV
Given a current view (vehicle or fleet) with a selected date range, granularity, and filters, When the user exports to PDF, Then the file contains the chart, visible overlays, legend, applied filters, date range, granularity, and any change-point annotations exactly as displayed. Given the same view, When the user exports to CSV, Then the file contains one row per time bucket with columns: bucket_start_utc (ISO 8601), reliability_index, dtc_count, repairs_count, parts_replaced_count, total_cost, and the number of rows equals the number of buckets in view. Given an export is initiated, When processing completes, Then the file is delivered within 5 seconds (p95) and the filename matches FleetPulse_Reliability_<scope>_<YYYYMMDD-YYYYMMDD>_<granularity>_<timestamp>. Given filters are applied, When exporting, Then both PDF and CSV reflect the same filters and date range as the on-screen view.
Mobile-Responsive Visualization and Controls
Given a device width between 375px and 768px, When the view loads, Then chart, legend, and filters stack vertically, horizontal scrolling is enabled for the timeline as needed, and all interactive targets are at least 44px in touch area. Given the device rotates, When switching between portrait and landscape, Then the layout reflows without losing selected filters, date range, or granularity. Given a mid-tier mobile device on 4G, When loading the last 90 days view, Then First Contentful Paint ≤ 2.5s and Time to Interactive ≤ 3.5s (p95). Given a marker is tapped on mobile, When the tooltip opens, Then it is fully visible within the viewport and dismissible with a second tap or outside tap.
Persisted View Configuration Across Sessions
Given a signed-in user, When they change date range, granularity, filters, or change-point toggle, Then the configuration is saved per user and scope (fleet vs vehicle) within 500ms of the change. Given the same user returns within 30 days, When the Reliability view loads, Then the last saved configuration is restored, including date range, granularity, filters, and toggles, across devices. Given the user selects “Reset to defaults,” When confirmed, Then defaults are applied immediately and persisted for subsequent visits. Given a saved configuration includes routes, regions, or vehicles no longer accessible, When loading, Then unavailable filters are ignored and a non-blocking notice informs the user.
Maintenance Scheduling Integration & Recommendations
"As a dispatcher, I want the Reliability Index to influence maintenance scheduling so that high-risk vehicles are prioritized and downtime is minimized."
Description

Integrate the Reliability Index into maintenance workflows by prioritizing service queues, inserting inspection tasks when a score exceeds a threshold, and suggesting component-focused diagnostics based on risk drivers. Surface scores and explanations directly in work orders and upcoming service reminders. Provide bulk actions to schedule inspections for the top-risk vehicles and include the score in exports and APIs consumed by external CMMS/accounting systems. Offer configurable policies per fleet to align with budgets and uptime goals.

Acceptance Criteria
Service Queue Prioritization by Reliability Index
Given a user with permission "View Maintenance" opens the Service Queue for fleet F When the page loads Then vehicles are sorted by Reliability Index score in descending order by default Given vehicles without a Reliability Index score exist When the list is displayed Then those vehicles appear after all scored vehicles and show "N/A" for score Given the user selects an alternate sort (e.g., Due Date) When the user returns to the Service Queue Then the last chosen sort persists for that user in fleet F Given the user applies the "High Risk" filter When scores are present Then only vehicles with score >= policy.threshold_high are shown Given pagination is enabled When navigating between pages Then the sort and filter remain applied and item counts remain accurate
Automatic Inspection Task Insertion on Score Threshold Breach
Given fleet policy P defines an inspection insertion threshold T for the vehicle class When a vehicle’s Reliability Index score crosses from below T to >= T Then the system creates an Inspection task using template P.template within 5 minutes of the score update Given an open Inspection task already exists for the same vehicle and risk window When the score crosses T again Then no duplicate task is created Given a task is created When viewing the task Then it includes a score snapshot timestamp, top 3 risk drivers with contribution percentages, due date = now + P.due_in_days, and assignee = P.assignee (or default assignee if null) Given the score drops below T before task completion When policy P.auto_cancel_on_drop is true Then the task auto-cancels with reason "Score dropped below threshold"; otherwise the task remains open Given any task is created by threshold breach When auditing Then the task record stores the applied policy version ID
Component-Focused Diagnostic Suggestions Based on Risk Drivers
Given risk drivers identify components with contribution >= P.min_component_contribution When a work order or inspection task is generated Then an ordered checklist of diagnostic steps mapped to those components is attached, with at least 1 step per component and at most P.max_steps total Given a component has no mapping in the knowledge base When generating suggestions Then a generic diagnostic for the corresponding system is added and flagged as "generic" Given a user accepts, reorders, or removes suggested steps When the work order is saved Then the selection and order are persisted and auditable with user ID and timestamp
Surface Scores and Explanations in Work Orders and Upcoming Reminders
Given a work order WO is created or opened for a vehicle When the WO detail view loads Then the UI shows the current Reliability Index score, the score snapshot at WO creation time, the top 3 risk drivers with percent contribution, DTC recurrence count, and a link to Reliability Index details Given a work order is exported or printed When the PDF is generated Then the score snapshot and top risk drivers are included in the PDF output Given Upcoming Service Reminders are displayed for the next P.reminder_horizon_days When the list loads Then each reminder shows a score badge and risk band; reminders with score >= policy.threshold_high are highlighted as "Attention" Given a user lacks permission "View Reliability" When accessing work orders or reminders Then the score and driver details are hidden and replaced with "Restricted"
Bulk Schedule Inspections for Top-Risk Vehicles
Given a user selects "Bulk schedule inspections" with filter "Top X by score" and provides a scheduling window W When the user confirms the action Then the system creates at most one inspection task per selected vehicle, scheduled within window W and respecting daily capacity P.daily_capacity Given the bulk operation is retried with identical parameters When executed Then no duplicate tasks are created and the same operation_id is returned (idempotent) Given the bulk operation completes When results are displayed Then the user sees counts of tasks created, skipped (with reason), and failed, and can download a CSV of the results Given capacity or conflicts prevent scheduling within W When scheduling Then affected tasks are queued to the next available slot per policy and labeled "Queued (capacity)"
Reliability Score Included in Exports and APIs
Given a user exports Vehicles or Work Orders to CSV/Excel When the file is generated Then it includes columns: reliability_score_current, reliability_score_band, top_risk_drivers_json (array of {component, contribution}), score_snapshot_at_event (ISO 8601), policy_id Given an authenticated API client calls GET /api/v1/vehicles or /api/v1/work_orders with include=reliability When the response is returned Then it includes reliability.score, reliability.band, reliability.top_risk_drivers, reliability.snapshot_at (ISO 8601), reliability.policy.id and p99 latency <= 800ms for up to 1000 vehicles Given a fleet has Reliability integration disabled by policy When exporting or calling the API Then the fields are present with null values and reliability.enabled=false
Fleet Policy Configuration and Enforcement
Given a fleet admin opens Reliability Policy settings When saving a policy Then they can configure threshold_high, threshold_medium, due_in_days, auto_cancel_on_drop, min_component_contribution, max_steps, daily_capacity, reminder_horizon_days, monthly_budget, max_downtime_hours and the policy is validated and versioned Given a policy is updated and activated When creating tasks, highlighting reminders, or enforcing capacity/budget Then the latest active policy version is applied and its version ID is recorded on affected records Given monthly budget P.monthly_budget is reached When auto-insertion or bulk scheduling attempts to create additional tasks Then excess tasks are deferred to the next available window per policy and marked "Queued (budget cap)" with an estimated date
Model Validation, Backtesting & Ongoing Monitoring
"As a product owner, I want the Reliability Index to be validated and monitored so that we can trust its recommendations and continuously improve accuracy."
Description

Establish an evaluation pipeline that defines outcome labels (e.g., repeat failure of the same component within 90 days or more than N repair events in a period) and measures precision, recall, AUROC, calibration, and cost savings versus baseline maintenance practices. Run backtests on historical data, produce release notes with performance deltas per model version, and set guardrails for deployment (minimum calibration slope, maximum false-positive rate). Monitor live performance for drift, data gaps, and latency SLO breaches with alerts. Capture user feedback tags (e.g., “helpful,” “false alarm”) to inform retraining and threshold adjustments.

Acceptance Criteria
Outcome Labels and Evaluation Window Definition
Given historical repair, DTC, and telemetry data, When labels are generated, Then a "repeat-failure-90d" label is assigned if the same component fails again within 90 days of a repair completion date, using only data available as of the original prediction time. And Given event counts per vehicle, When computing high-repair-frequency labels, Then a label is assigned if repair events exceed N within a rolling P-day window (default N=3, P=180) with no lookahead leakage. And Then label prevalence, positive/negative counts, and class balance per cohort are logged and versioned with a dataset snapshot hash. And Then unit tests assert zero label leakage by shifting prediction time forward/backward and verifying invariant labels.
Historical Backtest Metrics and Cohort Reporting
Given a locked historical dataset and a fixed forecast horizon, When backtests are executed via time-based splits, Then the pipeline computes and stores precision, recall, AUROC, AUPRC, Brier score, calibration slope/intercept, and reliability curves overall and per cohort (make, model, mileage band, climate region). And Then results include confidence intervals via bootstrapping and are reproducible with a run ID, random seed, data snapshot hash, and model version. And Then artifacts (JSON metrics, plots) are persisted to the registry and object storage at model_version/date paths and are linkable from the CI run summary.
Cost-Savings Versus Baseline Calculation
Given a defined baseline policy (time/mileage-based maintenance) and a cost model (inspection cost, false-alarm cost, avoided failure cost, downtime cost), When backtest predictions are thresholded at the selected operating point, Then the pipeline computes net cost savings per vehicle and portfolio versus baseline with 95% confidence intervals. And Then the report includes methodology, parameter values, and a threshold sensitivity analysis; any changes to assumptions are parameterized and audited. And Then pass criteria are met if estimated net savings >= 8% overall and non-negative in at least 80% of defined cohorts.
Release Notes with Versioned Performance Deltas
Given a new candidate model, When compared against the currently deployed model on the same locked dataset, Then release notes are auto-generated summarizing metric deltas (precision, recall, AUROC, calibration slope, net cost savings) overall and by cohort, highlighting any regression > 3% absolute. And Then notes include training data window, feature set version, hyperparameters, git commit, and data snapshot hash, plus known limitations and risk drivers. And Then release cannot be marked Ready without recorded sign-offs from Product and Data Science in the notes.
Deployment Guardrails and Blocking Logic
Given evaluation outputs for a candidate model, When guardrails are checked, Then deployment is automatically blocked if calibration slope is outside [0.9, 1.1] or false-positive rate exceeds 15% at the chosen operating point in any top-5 volume cohort. And Then the blocking reason is recorded with timestamps and a link to artifacts, and an alert is sent to the Reliability Index channel. And Then manual override requires justification text, a JIRA ticket reference, and approvals from Engineering and Product; overrides are logged and reported in the next release notes.
Live Monitoring, Drift & SLO Alerts with Feedback Capture
Given the model is deployed, When monitoring jobs run on schedule, Then data availability, null rates, and schema checks must pass; PSI >= 0.2 or KS p-value <= 0.01 for key features or score distribution triggers an alert. And Then weekly online estimates of precision/recall and monthly calibration checks are computed against delayed labels; drops > 5% absolute trigger a retraining proposal ticket. And Then 95th percentile scoring latency <= 300 ms and batch scoring completion <= 15 minutes; any SLO breach emits an alert within 5 minutes with incident severity. And Then in-app user feedback tags ("helpful", "false alarm", "missed") are stored with alert and vehicle IDs within 2 minutes, surfaced on monitoring dashboards, and incorporated into quarterly threshold reviews with documented decisions.
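
The PSI drift check above reduces to a short calculation; a sketch assuming 10 equal-width bins (the binning strategy is not specified here) with an epsilon guard for empty bins:

```typescript
// Population Stability Index between a reference (expected) and a live
// (actual) sample of scores or feature values.
function psi(expected: number[], actual: number[], bins = 10): number {
  const min = Math.min(...expected, ...actual);
  const max = Math.max(...expected, ...actual);
  const width = (max - min) / bins || 1;          // guard degenerate range
  const freq = (xs: number[]): number[] => {
    const counts = new Array(bins).fill(0);
    for (const x of xs) {
      const i = Math.min(bins - 1, Math.floor((x - min) / width));
      counts[i]++;
    }
    // Small epsilon keeps ln() finite for empty bins.
    return counts.map((c) => Math.max(c / xs.length, 1e-6));
  };
  const e = freq(expected);
  const a = freq(actual);
  let total = 0;
  for (let i = 0; i < bins; i++) total += (a[i] - e[i]) * Math.log(a[i] / e[i]);
  return total; // alert when PSI >= 0.2 per the monitoring criteria
}
```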

Downtime Meter

Converts expected shop time and parts lead times into true cost per day using your utilization and revenue assumptions. Shows the cost of waiting vs acting now, so schedulers can minimize lost days and owners see the full economic impact behind every decision.

Requirements

Assumptions & Revenue Configurator
"As a fleet owner, I want to set and manage my fleet’s utilization and revenue assumptions by asset or template so that the Downtime Meter computes realistic, comparable cost-of-downtime values."
Description

Provide an organization- and asset-level configuration surface to define utilization and revenue assumptions that drive downtime cost calculations. Supports defaults at org, templates by vehicle class, and per-asset overrides for metrics such as revenue per day, average daily utilization (hours or miles), load factor, variable operating cost per day, driver cost, rental/substitute vehicle cost, and margin assumptions. Pulls suggested defaults from FleetPulse telematics (e.g., last 30–90 day utilization, engine hours, trips) and allows manual entry with validation, units, and currency handling. Stores versions and effective dates, exposes values to the calculation engine via API, and ensures assumptions are consistently applied across scenario comparisons.
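A minimal sketch of the resolution order this implies (per-asset override, then class template, then org default, each gated by effective date); types and helper names are illustrative, not the shipped schema:

```typescript
// Types and helper names are illustrative assumptions, not the shipped schema.
interface AssumptionVersion {
  values: Partial<Record<string, number>>; // e.g., { revenuePerDay: 450 }
  effectiveDate: Date;
}

type Source = "override" | "template" | "default";

// Latest version whose effective date is on or before the calculation date.
function effectiveVersion(
  versions: AssumptionVersion[],
  onDate: Date
): AssumptionVersion | undefined {
  return versions
    .filter((v) => v.effectiveDate <= onDate)
    .sort((a, b) => b.effectiveDate.getTime() - a.effectiveDate.getTime())[0];
}

// Field-level fallback: per-asset override, then class template, then org
// default; a field missing at one layer resolves from the next.
function resolveField(
  field: string,
  onDate: Date,
  layers: { source: Source; versions: AssumptionVersion[] }[]
): { value: number; source: Source } | undefined {
  for (const layer of layers) {
    const value = effectiveVersion(layer.versions, onDate)?.values[field];
    if (value !== undefined) return { value, source: layer.source };
  }
  return undefined; // surfaced as NO_EFFECTIVE_VERSION by the API
}
```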

Acceptance Criteria
Set Organization-Level Defaults and Propagation
Given an org admin opens the Assumptions Configurator, When they set revenue/day, utilization mode (hours or miles), load factor, variable operating cost/day, driver cost/day, rental/substitute cost/day, and margin %, Then the system saves them as org defaults with an effective date and timestamp. - Given newly created assets or existing assets without overrides, When org defaults have an effective date <= today, Then those assets inherit the defaults for calculations. - Given a future-dated default version exists, When today < effective date, Then calculations use the current version; and when today >= effective date, the new version applies automatically without user action. - Given multiple currencies, When the admin selects an org currency, Then all monetary fields display and persist in that currency with 2-decimal precision and currency code. - Given org defaults change, When a user opens Downtime Meter, Then cost calculations reflect the effective defaults for the selected calculation date(s).
Create and Apply Vehicle Class Templates
Given an admin creates a template for a vehicle class, When they enter assumptions and save with an effective date, Then the template is stored and available for assets tagged with that class. - Given an asset is tagged to a class and has no per-asset override, When a class template exists, Then the asset inherits template values; otherwise it inherits org defaults. - Given both class template and org defaults exist, When a field is not set in the template, Then the org default for that field is used. - Given a template update is saved with a future effective date, When the effective date is reached, Then assets using the template automatically use the new version and the change is logged with author and timestamp.
Per-Asset Overrides With Versioning and Effective Dates
Given a fleet manager opens an asset’s assumptions, When they override any field and save with an effective date, Then a new version is created recording author, timestamp, fields changed, and effective date. - Given overlapping effective date ranges for an asset, When saving an override, Then the system prevents save and displays an inline error indicating the conflict. - Given backdated overrides exist, When calculations run for historical dates, Then the version effective on each calculation date is used. - Given a user selects "Revert to template/default" on an asset, When saved, Then the asset resolves values from the class template, or org defaults if no template applies. - Given a scenario comparing two dates, When the calculation engine requests assumptions, Then the API returns values from the versions effective on each date respectively.
Telematics-Driven Suggested Defaults
Given the system has 30–90 days of telematics data, When the admin opens the configurator, Then suggested utilization (hours/day or miles/day), load factor estimate, and revenue hints are displayed with the lookback window used. - Given the user clicks Apply on a suggestion, When saved, Then the field is populated and marked with source = telematics and the lookback period. - Given less than 14 days of data or no data, When opening the configurator, Then no suggestion is shown and a "No data" message appears for the affected fields. - Given an asset reports both engine hours and miles, When the user selects the utilization mode, Then the suggestion adapts to the chosen mode and unit.
Validation, Units, and Currency Handling
Given a user enters monetary values with more than 2 decimals, When focus leaves the field, Then the value is rounded half up to 2 decimals and stored with the org currency code. - Given utilization mode is hours/day, When a value outside 0–24 is entered, Then the field is rejected with an inline validation message; for miles/day, negative values are rejected and values above the org-configured max trigger an error. - Given a margin percentage field, When a value < 0% or > 100% is entered, Then validation fails with a clear message. - Given required fields are missing, When the user attempts to save, Then save is blocked and an error list highlights each missing field. - Given the user switches utilization mode (hours ↔ miles), When prompted, Then no automatic conversion occurs and the user must confirm or re-enter values; previous values persist until confirmed.
Expose Effective Assumptions via API to Downtime Meter
Given a calculation request with assetId and calculationDate, When the Downtime Meter calls the Assumptions API, Then the API returns the effective values for that asset on that date with source flags (override/template/default) and versionId in ≤ 200 ms P95. - Given a scenario comparison with multiple assets/dates, When the API is called in batch, Then it accepts up to 100 items per request and returns per-item results consistently. - Given an unknown assetId, When requested, Then the API returns HTTP 404 with error code ASSET_NOT_FOUND; given no version covers the date, Then HTTP 422 with error code NO_EFFECTIVE_VERSION is returned. - Given monetary values are returned, When the API responds, Then each amount includes a currency code and numeric amount with no implicit cross-currency conversion.
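As a reference, one hypothetical response shape consistent with the criteria above; any field names beyond versionId, source flags, and currency-coded amounts are assumptions:

```typescript
// Hypothetical response shape for the Assumptions API; field names beyond
// those named in the criteria are assumptions.
type AssumptionSource = "override" | "template" | "default";

interface EffectiveAssumptionsResponse {
  assetId: string;
  calculationDate: string; // ISO 8601 date
  versionId: string;
  values: {
    revenuePerDay: { amount: number; currencyCode: string };
    utilizationMode: "hours" | "miles";
    // ...remaining configured fields
  };
  sources: Record<string, AssumptionSource>; // per-field provenance flags
}

// Error contract from the criteria:
//   404 { errorCode: "ASSET_NOT_FOUND" }
//   422 { errorCode: "NO_EFFECTIVE_VERSION" }
```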
Consistency Across Scenario Comparisons
Given a scheduler compares "Wait" vs "Act Now" in Downtime Meter, When assumptions are pulled, Then each scenario uses the same effective version per asset/date and changes after page load do not alter the current comparison until refresh. - Given a user duplicates a scenario, When duplication occurs, Then the assumptions snapshot (including versionIds and timestamps) is copied to the new scenario. - Given multiple assets are selected, When calculations run, Then each asset’s effective assumptions are used and a per-field source indicator is available for review. - Given future-dated defaults/templates exist, When comparing a future date range, Then the comparison uses the versions effective on those future dates where applicable.
Lead Time & Shop Time Modeling
"As a scheduler, I want accurate lead-time and shop-time estimates for each repair scenario so that I can see the true number of lost days and choose the least disruptive plan."
Description

Capture and compute expected downtime duration by combining parts lead times, shop queue/appointment windows, business-day calendars, and repair task durations. Supports inputs from maintenance templates or work orders, manual overrides, and vendor calendars with weekends/holidays. Factors transport to shop, diagnostic time, and multi-step jobs (e.g., inspect → order part → install). Provides heuristics from historical work orders to suggest durations when unknown. Updates estimates dynamically as vendor ETAs change and exposes a normalized duration (in business and calendar days) to the calculator and UI.
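A simplified business-day accounting sketch under the conventions used here (a single Mon-Fri daily window and an 8-hour fleet business day; timezone handling and per-vendor calendars are omitted):

```typescript
// WorkCalendar is an illustrative simplification (one daily window, Mon-Fri);
// real vendor/shop calendars would vary by day and timezone.
interface WorkCalendar {
  openHour: number;  // e.g., 9
  closeHour: number; // e.g., 17
  isHoliday: (d: Date) => boolean;
}

function isWorkingDay(d: Date, cal: WorkCalendar): boolean {
  const dow = d.getDay(); // 0 = Sunday
  return dow >= 1 && dow <= 5 && !cal.isHoliday(d);
}

// Working hours elapsed between two instants, clipped to calendar windows.
function workingHoursBetween(start: Date, end: Date, cal: WorkCalendar): number {
  let hours = 0;
  const day = new Date(start);
  day.setHours(0, 0, 0, 0);
  for (; day < end; day.setDate(day.getDate() + 1)) {
    if (!isWorkingDay(day, cal)) continue;
    const open = new Date(day);
    open.setHours(cal.openHour, 0, 0, 0);
    const close = new Date(day);
    close.setHours(cal.closeHour, 0, 0, 0);
    const from = start > open ? start : open;
    const to = end < close ? end : close;
    if (to > from) hours += (to.getTime() - from.getTime()) / 3_600_000;
  }
  return hours;
}

// Normalized business days = working hours / fleet business-day length,
// e.g., 22 working hours / 8 h = 2.75 business days as in the example below.
const businessDays = (hours: number, fleetDayHours = 8) =>
  Math.round((hours / fleetDayHours) * 100) / 100;
```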

Acceptance Criteria
Compute Downtime from Parts, Shop Queue, Calendars, and Task Durations
- Given a work order created 2025-01-06 09:00 local, vendor lead time 2 business days (vendor calendar Mon–Fri 09:00–17:00), shop slot 2025-01-07 13:00–17:00 (shop calendar Mon–Fri 09:00–17:00), and repair duration 6 shop-hours, when downtime is computed, then earliest_start_at = 2025-01-08 09:00, completion_at = 2025-01-08 15:00, calendar_days = 2.25, business_days = 2.75 (assuming fleet business day = 8h). - Given non-working days defined in calendars, when computing business_days, then non-working hours are excluded and calculations respect shop hours for active repair and vendor hours for lead times. - Given the out_of_service_at timestamp is provided, when the vehicle is marked out of service at 2025-01-06 09:00, then downtime start uses that timestamp; otherwise it uses work order creation time.
Input Sources and Overrides Integration (Templates, WOs, Vendor Calendars)
- Given a maintenance template with default repair duration 4h and lead time 1 business day, when a new work order is created from this template, then the fields prepopulate with those defaults. - Given the same work order has a manual override setting lead time to 3 business days, when downtime is computed, then the override value is used and the template default is ignored. - Given a vendor calendar with a holiday on 2025-01-08 (Wednesday), when a 2-business-day lead time starts 2025-01-06 09:00, then part_arrival_at = 2025-01-09 09:00 (holiday skipped). - Given both work order inputs and template defaults, when fields conflict, then the precedence is: manual override > work order field > template default.
Multi-Step Job Phasing with Transport and Diagnostics
- Given a job with steps: transport_to_shop=4h, diagnostics=2h, order_part (triggers part lead time 3 vendor business days), install=3h, and shop hours Mon–Fri 09:00–17:00, vendor hours Mon–Fri 09:00–17:00, when out_of_service_at=2025-01-06 08:00 and the system computes downtime, then order_part lead time starts at diagnostics completion (2025-01-06 14:00), part_arrival_at=2025-01-09 09:00, install runs 3h on 2025-01-09 09:00–12:00, and completion_at=2025-01-09 12:00. - Given a phase has zero duration (e.g., transport_to_shop=0), when computing, then the phase is skipped without adding delay. - Given phases with prerequisites, when any prerequisite is not met (e.g., part not yet arrived), then repair work does not begin and waiting time accumulates toward downtime.
Heuristic Suggestions from Historical Work Orders
- Given a new work order with unknown repair duration for make=Ford, model=Transit, fault_code=P0420, when heuristics are requested, then the system suggests the 50th percentile duration from the last 20 matching work orders within the past 12 months, rounded to the nearest 0.5 hour, and labels the field as Suggested with source=historical. - Given fewer than 5 matching records in the last 12 months, when suggesting, then the system falls back to class-level (same system/failure category) median; if still fewer than 5, falls back to global default configured value. - Given a user accepts a suggested value, when they save the work order, then the chosen value populates the field, the suggestion banner is removed, and an audit log entry records {field, value, source, confidence_percent}.
Dynamic ETA Updates and Normalized Duration Exposure
- Given a vendor updates part ETA via webhook at 2025-01-07 16:00 pushing arrival by +1 business day, when the event is received, then downtime is recomputed and the earliest_start_at/completion_at/calendar_days/business_days fields update within 60 seconds. - Given a manual ETA edit by a user, when saved, then the same recomputation occurs and the old estimate is retained in a change history with {prior, new, changed_at, actor}. - Given the calculator/API is queried, when estimates exist, then it returns normalized_duration: {calendar_days:number, business_days:number, start_at:ISO8601, completion_at:ISO8601, updated_at:ISO8601} with days rounded to 0.1 and times in fleet timezone.
Critical Path Across Multiple Parts and Vendors
- Given a job requires Part A (vendor A lead time 2 business days) and Part B (vendor B lead time 4 business days) and both are required, when computing earliest_start_at, then it equals the later of the two arrival times (Part B) respecting each vendor’s business-day calendar. - Given Part B is split into two shipments where the second contains the required component arriving 1 business day after the first, when computing, then arrival time uses the latest required shipment, not the first. - Given the shop queue wait (next slot in 3 calendar days) overlaps with parts waiting (4 business days), when computing earliest_start_at, then overlapping waits are not double-counted; earliest_start_at equals the later of slot start and all-parts-available. - Given an in-stock part, when computing, then its arrival_at equals out_of_service_at (no added wait). - Given alternative supplier selection that reduces lead time, when changed, then recomputation reflects the new critical path immediately upon save.
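The non-double-counting rule above reduces to a max over waits, sketched here with illustrative names and dates:

```typescript
// Repair cannot start before the later of the shop slot and the arrival of
// every required part, so parts and queue waits overlap rather than add.
function earliestStartAt(shopSlotStart: Date, partArrivals: Date[]): Date {
  const allPartsAvailable = partArrivals.reduce(
    (latest, t) => (t > latest ? t : latest),
    new Date(0) // in-stock parts contribute no wait
  );
  return shopSlotStart > allPartsAvailable ? shopSlotStart : allPartsAvailable;
}

// Example: Part A in 2 business days, Part B (critical path) in 4,
// next shop slot in 3 calendar days.
const start = earliestStartAt(new Date("2025-01-09T09:00:00"), [
  new Date("2025-01-08T09:00:00"),
  new Date("2025-01-10T09:00:00"),
]);
// start is 2025-01-10T09:00, the later of slot start and all-parts-available.
```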
True Cost Calculator Engine
"As a fleet owner, I want an explainable calculation of the cost of acting now versus waiting so that I can make data-driven maintenance decisions."
Description

Compute cost-per-day and total economic impact for alternative actions (repair now vs defer N days), combining utilization/revenue assumptions, modeled downtime duration, and risk of escalation from active fault codes/inspections. Includes optional line items for rental replacement, SLA penalties, lost load revenue, and cascading failures. Provides explainable outputs with itemized components and sensitivity bands. Exposes a deterministic API endpoint and library used by UI and automations, with idempotent requests, currency normalization, and unit tests for accuracy.
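A sketch of the deterministic per-scenario math; names are illustrative rather than the engine's API, but the worked totals in the criteria below follow from these formulas:

```typescript
// Illustrative scenario inputs; risk adjustment and FX normalization omitted.
interface ScenarioInputs {
  revenuePerDay: number; // in target currency
  utilization: number;   // 0..1
  shopTimeDays: number;
  deferDays?: number;    // 0 or absent for "repair_now"
  rentalReplacementPerDay?: number;
  slaPenaltyPerDay?: number;
  lostLoadRevenueFlat?: number;
}

function scenarioCost(s: ScenarioInputs) {
  const downDays = s.shopTimeDays + (s.deferDays ?? 0);
  const costPerDay = s.revenuePerDay * s.utilization; // opportunity cost/day
  const itemization = {
    opportunityCost: costPerDay * downDays,
    rentalReplacement: (s.rentalReplacementPerDay ?? 0) * downDays,
    slaPenalty: (s.slaPenaltyPerDay ?? 0) * downDays,
    lostLoadRevenue: s.lostLoadRevenueFlat ?? 0,
  };
  const totalCost = Object.values(itemization).reduce((a, b) => a + b, 0);
  return { costPerDay, itemization, totalCost };
}

// Check against the baseline criterion: revenuePerDay=1000, utilization=0.6,
// shopTimeDays=2 gives costPerDay=600 and totalCost=1200; adding deferDays=7
// gives 9 down days and totalCost=5400, a delta of 4200.
```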

Acceptance Criteria
Baseline Cost-per-Day and Total Impact Calculation
Given revenuePerDay = 1000 USD, utilization = 0.6, shopTimeDays = 2, partsLeadTimeDays = 0, deferDays = 7, and no optional line items or risk When the engine computes scenarios "repair_now" and "defer_7d" Then repair_now.costPerDay = 600 USD/day And repair_now.totalCost = 1200 USD And defer_7d.costPerDay = 600 USD/day And defer_7d.totalCost = 5400 USD And delta.totalImpact = 4200 USD And the response includes scenarioIds ["repair_now","defer_7d"]
Optional Line Items Inclusion/Exclusion
Given revenuePerDay = 1000 USD, utilization = 0.6, shopTimeDays = 2, deferDays = 7 And rentalReplacementPerDay = 150 USD, slaPenaltyPerDay = 300 USD, lostLoadRevenueFlat = 4000 USD When the engine computes scenarios "repair_now" and "defer_7d" with optional items enabled Then repair_now.itemization = { opportunityCost: 1200, rentalReplacement: 300, slaPenalty: 600, lostLoadRevenue: 4000 } And repair_now.totalCost = 6100 USD And defer_7d.itemization = { opportunityCost: 5400, rentalReplacement: 1350, slaPenalty: 2700, lostLoadRevenue: 4000 } And defer_7d.totalCost = 13450 USD And omitting any optional item from the request removes it entirely from itemization and totalCost
Explainable Itemization and Sensitivity Bands
Given revenuePerDay = 1000 USD, utilization = 0.6, shopTimeDays = 2, sensitivity.bandPercent = 20% When the engine computes scenario "repair_now" Then itemization includes component "opportunityCost" with formula "revenuePerDay * utilization" and amountPerDay = 600 USD And sensitivityBands.costPerDay = { min: 480, expected: 600, max: 720 } And sensitivityBands.totalCost = { min: 960, expected: 1200, max: 1440 } And response includes calculationTrace entries that sum exactly to totalCost
Currency Normalization and Labeling
Given targetCurrency = USD and fxRates = { EURUSD: 1.10 } And revenuePerDay = 900 EUR, utilization = 0.5, shopTimeDays = 2, slaPenaltyPerDay = 100 USD When the engine computes scenario "repair_now" Then normalized costPerDay(opportunityCost) = 495 USD/day And normalized slaPenaltyPerDay = 100 USD/day And repair_now.totalCost = 1190 USD And all monetary fields in the response are labeled currencyCode = "USD"
Deterministic Idempotent API and Library Parity
Given a POST /true-cost-calculator request with payload P and idempotencyKey = "abc-123" When the request is executed twice Then the JSON response bodies are byte-for-byte identical and have the same contentHash And the second call is served from the idempotency cache without re-computation And calling the library compute(P) returns a result structurally and numerically identical to the API response (same contentHash)
Risk-Adjusted Impact from Fault Severity
Given revenuePerDay = 1000 USD, utilization = 0.6, shopTimeDays = 2, deferDays = 7 And an activeFault with escalationThresholdDays = 5, escalationProbability = 0.3, escalationAdditionalDowntimeDays = 3, escalationAdditionalRepairCost = 2000 USD When the engine computes scenarios "repair_now" and "defer_7d" Then repair_now.risk.itemization = 0 USD and repair_now.totalCost remains unaffected And defer_7d.baseTotalCost (without risk) = 5400 USD And defer_7d.risk.itemization = 0.3 * (3 * 600 + 2000) = 1140 USD And defer_7d.totalCost = 6540 USD And the risk component is listed as a separate item in itemization
Scenario Comparison UI
"As a dispatcher, I want to compare the cost and downtime of several repair timing options so that I can pick the plan that minimizes lost days and revenue impact."
Description

Deliver an interactive interface that compares multiple scheduling options (e.g., repair now, defer 3/7/14 days, align with next PM) with visualizations of total cost, cost per day, and days lost. Displays break-even points, key drivers (lead time, revenue, risk), and confidence ranges. Allows users to adjust assumptions inline, select a preferred scenario, and proceed to create or update a work order. Supports desktop and mobile, accessibility standards, export to PDF, and persistent deep links for sharing.

Acceptance Criteria
Multi-Scenario Comparison Rendering
Given a vehicle with available telematics and default assumptions When the user opens the Downtime Meter Scenario Comparison UI Then the UI displays at least five scenarios: "Repair Now", "Defer 3 Days", "Defer 7 Days", "Defer 14 Days", and "Align with Next PM" And each scenario shows Total Cost (currency), Cost per Day (currency/day), Days Lost (integer days), and a Confidence Range (min–max) And the scenario with the lowest Total Cost is visually highlighted and labeled "Lowest Cost" And all values are computed using the current assumptions and latest vehicle data And initial render completes within 2 seconds on desktop (broadband) and 3 seconds on mobile (3G)
Break-Even and Drivers Visualization
Given at least two scenarios are displayed When the user focuses, hovers, or taps the Break-Even indicator Then the UI reveals the break-even point in days where deferral cost equals repairing now with accuracy of ±1 day And a drivers visualization displays contributions for Lead Time, Revenue Loss, Risk Cost, and Parts Cost that sum to the scenario’s Total Cost within ±$1 And confidence ranges are visible as bands or error bars with numeric min/mean/max values And all tooltips/annotations are reachable via keyboard and dismissible via ESC
Inline Assumptions Editing with Live Recalculation
Given the Scenario Comparison is open When the user edits Utilization %, Revenue per Day, Parts Lead Time (days), or Risk Probability inline Then all scenario metrics recalculate and update on-screen within 1 second on desktop and 2 seconds on mobile And a recalculation timestamp updates within 1 second of completion And invalid inputs (e.g., negative numbers, Utilization > 100%) show inline errors and block save until corrected And the user can Reset to Defaults with one action that restores baseline assumptions And saved assumptions persist for the vehicle and user tenant and are restored on reload
Scenario Selection and Work Order Creation/Update
Given multiple scenarios are displayed When the user selects a preferred scenario and clicks "Proceed to Work Order" Then a Work Order draft is created (201) or an existing Work Order is updated (200) with scenario name, scheduled date, estimated duration, parts lead time, and notes prefilled from the scenario And the UI shows a success confirmation within 1 second and navigates to the Work Order detail view And returning to the comparison view shows the selected scenario marked as Chosen And in case of API failure, an error message is shown, no duplicate Work Orders are created, and the user can retry
Responsive and Accessible Visualization (Desktop and Mobile)
Given the app is viewed between 320 px and 1440 px viewport widths When the Scenario Comparison UI loads Then scenario cards and charts reflow without horizontal scrolling and maintain readable labels And all tap targets are at least 44x44 px on touch devices And color contrast is ≥ 4.5:1 for text/icons, focus is visible, and all interactive elements are keyboard navigable And charts expose ARIA labels that announce Total Cost, Cost/Day, Days Lost, and Break-Even for the focused scenario And text can scale to 200% without loss of content or functionality
Export to PDF with Fidelity and Performance
Given a configured set of up to 8 scenarios is visible When the user selects Export to PDF Then the generated PDF includes title, vehicle identifier, timestamp, assumptions, each scenario’s Total Cost, Cost/Day, Days Lost, Break-Even, drivers breakdown, and confidence ranges And charts render without truncation and with at least 300 DPI effective resolution And the filename follows fleetpulse_{vehicleId}_downtime_{yyyymmddHHMM}.pdf And the PDF matches on-screen numeric values (within rounding rules) and is under 5 MB And generation completes within 5 seconds server-side and downloads successfully
Persistent Deep Links with Access Control
Given the user clicks Copy Share Link When a recipient with valid authentication opens the URL Then the app restores the exact scenarios, assumptions, selected scenario, and active tab/scroll state as shared And access is restricted to the same tenant; users outside the tenant receive a 403 and no scenario data is rendered And links remain valid for at least 90 days unless revoked, and regenerating a link after changes points to the updated state without invalidating prior links And the deep link works on desktop and mobile, preserving responsive layout
Work Order Auto-Link & Prefill
"As a maintenance manager, I want downtime calculations to be created and kept in sync with my work orders so that I don’t have to re-enter data and can act with current information."
Description

Automatically generate and attach a Downtime Meter analysis to new or existing work orders triggered by inspections, DTCs, or maintenance reminders. Prefill job types, parts, and vendor options to seed lead-time and duration estimates. Keep the analysis in sync as appointment dates, vendor ETAs, or task scope changes, and record the chosen scenario on the work order for later review.

Acceptance Criteria
Auto-Link on Inspection-Triggered Work Order
Given an inspection with at least one failed item mapped to a job type and a vehicle VIN, and organization settings for revenue per day and utilization exist When a user creates a new work order from that inspection Then a single Downtime Meter analysis is auto-generated within 3 seconds and attached to the work order And the analysis is prefilled with job types from the failed items, default task durations from job templates, and vendor options from the preferred vendors list And parts from the mapped job templates are listed with current stock levels and lead-time estimates from vendor catalogs or zero if in stock And cost-per-day is computed using org defaults and vehicle utilization, and the baseline scenario is labeled "Act Now" And a linkage event is written to the audit log with inspection ID, work order ID, and user ID
Auto-Link on DTC-Triggered Work Order (New or Existing)
Given an active DTC for a vehicle mapped to one or more job types When a work order is created for that vehicle within 24 hours of the DTC Then the system attaches a single Downtime Meter analysis to the work order within 3 seconds And if an open work order already exists for the same vehicle and DTC family within 24 hours, the analysis is attached to that work order instead of creating a new one And the analysis pre-fills job types, tasks, parts, and vendor options based on DTC-to-repair mappings And duplicate analyses are not created upon repeated DTC messages for the same fault and work order And audit logs include DTC code, source device ID, and linkage outcome
Auto-Link on Maintenance Reminder Work Order
Given an active maintenance reminder with a mapped service package and preferred vendors When a work order is generated from the reminder Then a Downtime Meter analysis is attached within 3 seconds with the service package tasks and parts prefilled And vendor options include at least three candidates ranked by ETA (preferred vendors first) with their current quoted lead times And in-stock parts at any owned location are reflected with zero lead time and the stocking location specified And the analysis shows two default scenarios: "Next Available" (soonest vendor and parts) and "Delay One Week" And computed downtime cost deltas between the two scenarios are displayed in currency with two decimals
Real-Time Sync on Appointment and ETA Changes
Given an existing work order with an attached Downtime Meter analysis When the appointment date, promised date, or any vendor ETA is changed Then the analysis recalculates projected downtime cost and updates scenario timelines within 5 seconds of save And a new version is recorded with previous and new dates, ETAs, and cost values And the work order detail shows the latest analysis values without page refresh And a change event is written to the audit log with user ID and changed fields
Scope Change Updates to Analysis
Given an attached Downtime Meter analysis When tasks or parts are added to or removed from the work order Then the analysis updates the total duration, parts lead-time critical path, and cost per scenario within 5 seconds And if a newly added task has no template duration, a default duration per job type is applied and flagged "Assumed" And if a required part has no vendor lead time, the system requests quotes asynchronously and uses a configurable placeholder lead time until quotes return And all changes are reflected in the version history with user, timestamp, and delta
Chosen Scenario Capture and Persistence
Given multiple scenarios exist in the Downtime Meter analysis When a user selects a scenario as "Chosen" and saves the work order Then the chosen scenario ID, label, computed total cost, and optional rationale note are stored on the work order And the chosen scenario remains immutable as a snapshot even if later recalculations occur, while new recalculated values appear under "Current" And the chosen scenario is included in work order exports, API responses, and reports And permission rules restrict selection to users with Scheduler or Owner roles; attempts by others are denied and logged
Threshold Alerts & Actionable Recommendations
"As a scheduler, I want to be alerted when waiting becomes more expensive than acting now so that I can reschedule before we incur unnecessary losses."
Description

Notify owners and schedulers when the cost of waiting exceeds configurable thresholds or when a break-even point is reached. Provide actionable recommendations such as earliest cost-optimal slots, nearest vendors with shorter lead times, or aligning with existing PMs to minimize total downtime. Deliver alerts via in-app, email, and push, with quiet hours and per-asset preferences.
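A sketch of the threshold trigger together with the material-change deduplication rule spelled out in the criteria below; names are assumptions:

```typescript
// Minimal trigger check for type=ThresholdExceeded alerts.
interface LastAlert {
  costPerDay: number;
  createdAt: Date;
}

function shouldCreateThresholdAlert(
  costPerDay: number,
  threshold: number,
  last: LastAlert | undefined,
  now: Date
): boolean {
  if (costPerDay <= threshold) return false; // threshold not exceeded
  if (!last) return true;                    // first breach always alerts
  const withinDay = now.getTime() - last.createdAt.getTime() < 24 * 3_600_000;
  const materialChange =
    Math.abs(costPerDay - last.costPerDay) / last.costPerDay >= 0.1;
  return !withinDay || materialChange; // suppress noisy repeats
}
```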

Acceptance Criteria
Cost Threshold Exceeded Alert Trigger
Given a user-configured cost-of-waiting threshold for an asset and a computed Cost of Waiting per Day that exceeds that threshold When the Downtime Meter recalculates costs for that asset Then create a new alert with type=ThresholdExceeded including assetId, costPerDay, thresholdValue, calculationTimestamp, and linkToRecommendations And deliver the alert to all enabled channels per recipient within 60 seconds of calculation And do not create a new alert if the cost remains above threshold without a material change (>=10%) within the last 24 hours And do not create an alert if costPerDay is less than or equal to the threshold
Break-Even Point Alert Trigger
Given an asset with utilization and revenue assumptions, repair duration, and parts lead time estimates When the cumulative cost of waiting reaches or surpasses the cost break-even point between repairing now and delaying Then create an alert with type=BreakEvenReached including assetId, breakEvenDateTime, nowOptionCost, delayOptionCost, delta, and linkToRecommendations And deliver the alert within 5 minutes of the break-even calculation And do not re-alert for the same asset and issue for 24 hours unless the breakEvenDateTime shifts by at least 12 hours
Multi-Channel Alert Delivery and Deduplication
Given a single alert event generated for an asset When sending notifications across in-app, email, and push Then use a single alertId across all channels and ensure the title, key metrics (e.g., costPerDay or breakEvenDelta), and CTA deep link are consistent And show the in-app alert within 10 seconds and dispatch email and push within 60 seconds And do not send more than one notification per channel per alertId And if a channel fails to send (non-2xx response or timeout >10s), retry up to 3 times with exponential backoff and log the failure without impacting other channels
Quiet Hours and Deferred Delivery
Given organization or user-level quiet hours configured with a specific timezone When an alert is generated during quiet hours for a recipient Then suppress email and push for that recipient and queue them for delivery at the end of quiet hours with the original calculationTimestamp preserved and a "Queued during quiet hours" label And allow the in-app alert to be visible only when the user opens the app (no push or email during quiet hours) And send alerts immediately when generated outside quiet hours And apply quiet hours based on the recipient's local timezone
Per-Asset Notification Preferences
Given per-asset notification channel preferences defined for a recipient (in-app, email, push) When an alert is generated for that asset Then deliver notifications only via the channels enabled for that asset for that recipient; if no per-asset preference exists, use the organization default And if all channels are disabled for that asset for that recipient, record the alert and mark as not-delivered-by-preference with no outbound notifications And apply any changes to preferences to alerts generated after the change, with an audit record of who changed what and when
Actionable Scheduling Recommendations (Cost-Optimal Slots and PM Alignment)
Given an alert requiring scheduling When generating recommendations Then compute and display at least three ranked time slots within the next 14 days that minimize total downtime cost (shop time + parts lead time + utilization impact) And for each slot, show startDateTime, expectedDownHours, estimatedTotalDowntimeCost, rationale, and an alignmentWithPM flag And if a PM window exists within 10 days, include at least one recommendation aligned to the PM when it reduces total downtime cost; otherwise display "No PM alignment advantage" And selecting a recommended slot opens the scheduler pre-filled with assetId, selected slot, and (if chosen) vendor, and saving creates a work order linked to the alertId
Vendor Suggestions with Lead Times and Proximity
Given an alert with a required service category and location When suggesting vendors Then list up to five vendors ranked by lowest estimated total downtime cost, showing for each: vendorId, capability match, distance, earliestAvailableDate, partsLeadTime, estimatedShopTime, and a book/contact CTA And if fewer than three vendors are found within 50 miles, expand the radius to 100 then 200 miles until at least three vendors are found or none remain, and indicate the final search radius used And selecting a vendor updates the recommended time slots and recalculates estimated total downtime cost accordingly
Assumptions Audit Trail & Snapshots
"As an owner-operator, I want a record of the assumptions behind each decision so that I can justify costs and improve future estimates."
Description

Maintain a versioned audit trail of all assumptions and inputs used in each calculation, storing an immutable snapshot alongside the chosen scenario and work order. Log who changed what and when, support rollback to prior versions, and enable export for cost analysis and stakeholder reporting. Ensure read permissions align with roles while limiting edits to authorized users.
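One way to make the content hash in the criteria below verifiable is to canonicalize before hashing, sketched here with Node's crypto module and the assumption that snapshots are JSON-serializable:

```typescript
import { createHash } from "node:crypto";

// Deterministic serialization: object keys sorted recursively so logically
// equal snapshots hash identically. Assumes JSON-serializable content.
function canonicalize(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
  if (value !== null && typeof value === "object") {
    const entries = Object.entries(value as Record<string, unknown>)
      .sort(([a], [b]) => a.localeCompare(b))
      .map(([k, v]) => `${JSON.stringify(k)}:${canonicalize(v)}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

function contentHash(snapshot: unknown): string {
  return createHash("sha256").update(canonicalize(snapshot)).digest("hex");
}

// On retrieval, recompute and compare against the stored hash to verify the
// snapshot has not been mutated since commit.
```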

Acceptance Criteria
Auto Snapshot on Calculation Commit
Given a user has entered Downtime Meter assumptions for a vehicle and scenario When the user saves the calculation or confirms a decision Then the system creates a new immutable snapshot with a unique version ID for that calculation context And the snapshot stores all input assumptions, calculated outputs, vehicle ID, scenario label, optional work order ID, user ID, and an ISO 8601 UTC timestamp And the snapshot becomes visible in the audit trail within 2 seconds of save And subsequent edits create a new version without modifying the prior snapshot
Change Log with Field-Level Diff
Given a prior snapshot exists When any tracked assumption value is changed and saved Then a new version is created (version number increments by 1) and the previous version remains unchanged And the audit log entry records user ID, role, timestamp, affected fields, previous values, new values, and save source (UI or API) And an API endpoint and UI view display the field-level diff between the two versions And failed validations or canceled saves do not create a new version or audit entry
Authorized Rollback to Prior Version
Given a user with the 'rollback_snapshots' permission selects a prior version When the user confirms rollback Then the system creates a new version that clones the selected version’s assumptions, tags it as 'rolled back from <version>', and increments the version number And the system recalculates Downtime Meter outputs based on the cloned assumptions And the audit log records the rollback action with user ID, timestamp, source version, and resulting version And any linked work order requires explicit confirmation to update to the rolled-back version; if declined, the work order remains linked to its current committed snapshot
Export Snapshots and Audit Trail
Given a user with 'export_snapshots' permission selects filters (date range, vehicles, scenarios, work orders) and format (CSV or JSON) When the export is requested Then the exported file includes one row/object per version with fields: version ID, vehicle ID, scenario, work order ID (if any), user ID, timestamp (UTC ISO 8601), changed fields, prior values, new values (if applicable), full assumptions, calculated outputs, and content hash And the export honors filters and returns only matching versions And the download starts within 5 seconds for up to 10,000 versions and completes without server errors And exported timestamps are in UTC and numerics use a dot as decimal separator
Role-Based Access Controls for Read and Edit
Given role permissions are configured for 'read_snapshots' and 'edit_assumptions' When a user without 'read_snapshots' access attempts to view a snapshot Then the system returns HTTP 403 and no snapshot data is disclosed When a user without 'edit_assumptions' access attempts to modify assumptions Then the system returns HTTP 403 and no new version is created And users with 'read_snapshots' can only view snapshots within their assigned vehicle/work order scope And all access denials and edits are logged with user ID, role, timestamp, and reason
Immutable Snapshot Enforcement
Given an existing snapshot version ID When any attempt is made to update or delete the snapshot via UI, API, or direct write Then the operation is rejected with HTTP 409 (conflict) or 403 (forbidden), and the snapshot remains unchanged And the snapshot content hash verifiably matches the stored hash on retrieval And the only allowed mutation path is creating a new version; this path is enforced by application logic and database constraints
Work Order and Scenario Linkage
Given a Downtime Meter scenario is selected to create or update a work order When the work order is created or linked Then the chosen snapshot is marked as 'committed' and linked to the work order ID And the work order details view and API include the committed snapshot reference and metadata And later assumption changes create new versions that do not alter the committed snapshot And multiple scenario alternatives (e.g., 'Act Now' vs 'Wait') are stored as separate snapshots and remain traceable to the same work order for comparison

Scenario Sandbox

Interactive what‑if modeling for repair cost, replacement price, financing/lease terms, utilization, and fuel prices. Outputs payback period, 3–5 year cost delta, and cash‑flow curve—plus a shareable summary—to align owners, ops, and finance on the best path.

Requirements

Unified Scenario Input Builder
"As a small fleet manager, I want to quickly enter and adjust assumptions for repair, replacement, financing, utilization, and fuel so that I can model scenarios without spreadsheets and see the impact immediately."
Description

Interactive module to capture all what-if parameters for a single vehicle or a selected fleet subset. Supports repair cost assumptions (baseline scheduled maintenance, unscheduled repair allowances), replacement price (purchase price, taxes, incentives), financing/lease terms (APR/MF, term, down payment, residual/balloon, fees, mileage caps), utilization (miles or engine hours with optional seasonality), and fuel inputs (current price, projected trend, regional adjustments). Includes residual value and depreciation assumptions, per-vehicle overrides, and fleet-level aggregation. Provides unit and currency controls, input validation with guardrails, default templates by vehicle class, inline guidance/tooltips, and persistent drafts with autosave. Cleanly integrates with FleetPulse data models and feeds the modeling engine via a normalized schema.

Acceptance Criteria
Financial Modeling Engine (Payback & Cash-Flow)
"As an owner-operator, I want the sandbox to calculate payback, 3–5 year cost deltas, and monthly cash flow so that I can choose the most cost-effective path."
Description

Deterministic calculation engine that ingests scenario inputs and FleetPulse baselines to produce monthly cash-flow curves, cumulative payback period, and 3-year and 5-year total cost deltas for keep/repair vs. replace/lease options. Models maintenance schedules, predicted repairs from OBD-II alerts, fuel spend from utilization and MPG, financing/lease amortization, taxes/fees, depreciation, and residual value. Supports per-vehicle and fleet rollups, multiple option stacks (A/B/C), and exposes a versioned API for UI charts and exports. Implements deterministic formulas with unit tests, currency/locale handling, and performance targets to return results under 500 ms for typical fleets (≤100 vehicles).
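A sketch of the payback and cost-delta core under the convention that a replacement's upfront outlay lands in its first month; array names are illustrative:

```typescript
// keep[m] and replace[m] are projected total costs for month m (0-indexed);
// an upfront purchase cost or down payment sits in replace[0].
function paybackMonth(keep: number[], replace: number[]): number | undefined {
  let cumulativeSavings = 0;
  for (let m = 0; m < Math.min(keep.length, replace.length); m++) {
    cumulativeSavings += keep[m] - replace[m];
    if (cumulativeSavings >= 0) return m + 1; // 1-indexed payback month
  }
  return undefined; // no payback within the modeled horizon
}

// N-year cost delta: positive means replacing is cheaper over the horizon.
function costDelta(keep: number[], replace: number[], years: number): number {
  let delta = 0;
  for (let m = 0; m < years * 12 && m < keep.length && m < replace.length; m++) {
    delta += keep[m] - replace[m];
  }
  return delta;
}
```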

Acceptance Criteria
Telematics & Maintenance Prefill
"As an operations lead, I want the model to prefill assumptions from our actual telemetry and maintenance history so that I spend less time on data entry and trust the baseline."
Description

Automatic prefill of scenario assumptions from FleetPulse telemetry and records: utilization (miles/hours), average MPG, recent fuel prices, maintenance costs by category, upcoming service items, and current fault/anomaly signals. Applies data quality checks, shows data freshness, and falls back to class-based defaults when data is sparse. Prefilled fields remain fully editable with provenance indicators so users know which values came from live data vs. manual entry. Honors RBAC and data access scopes, and caches per-vehicle baselines for fast load.

Acceptance Criteria
Sensitivity Analysis & Monte Carlo
"As a finance stakeholder, I want to run sensitivity analysis on uncertain inputs so that I understand the risk and confidence around the decision."
Description

Capability to assign ranges/distributions to key uncertain inputs (fuel price, utilization, repair cost, residual value) and run Monte Carlo simulations (1k–10k iterations) to produce confidence intervals for payback and cost deltas. Provides tornado charts for one-way sensitivities and percentile bands over the cash-flow curve. Includes presets for best/likely/worst cases and supports scenario risk summaries (e.g., probability payback < 24 months). Executes efficiently via web workers or backend jobs with progress feedback and cancellability. Results are cacheable and exportable.
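A sketch of the risk-summary computation; triangular sampling and the helper names are assumptions, and the real engine may use other distributions:

```typescript
// Triangular draw over (min, mode, max); assumes min <= mode <= max.
interface Range { min: number; mode: number; max: number }

function sampleTriangular({ min, mode, max }: Range): number {
  const u = Math.random();
  const f = (mode - min) / (max - min);
  return u < f
    ? min + Math.sqrt(u * (max - min) * (mode - min))
    : max - Math.sqrt((1 - u) * (max - min) * (max - mode));
}

// Risk summary such as "probability payback < 24 months": run the
// deterministic model once per random draw and count qualifying outcomes.
function probabilityPaybackUnder(
  months: number,
  iterations: number,
  simulateOnce: () => number | undefined // payback month for one draw
): number {
  let hits = 0;
  for (let i = 0; i < iterations; i++) {
    const payback = simulateOnce();
    if (payback !== undefined && payback < months) hits += 1;
  }
  return hits / iterations;
}

// e.g., probabilityPaybackUnder(24, 10_000, () =>
//   paybackMonth(drawKeepCosts(), drawReplaceCosts())); // hypothetical draws
```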

Acceptance Criteria
Scenario Comparison & Versioning
"As a small fleet manager, I want to create, save, and compare multiple scenarios so that I can present tradeoffs and choose the best option."
Description

Save, label, and clone scenarios with immutable version snapshots. Enable side-by-side comparison of baseline vs. options (A/B/C) with metric deltas and overlaid cash-flow charts. Allow setting a recommended option, adding notes, and toggling per-vehicle vs. fleet views. Provide diff views of assumptions between versions and a one-click revert. Store scenarios as structured JSON with metadata (author, timestamp) and ensure compatibility as the model evolves through schema versioning.

Acceptance Criteria
Shareable Summary & Export
"As an owner, I want to share a clean scenario summary with my finance partner so that we can align quickly without giving full app access."
Description

Generate a concise, branded summary that includes key assumptions, payback period, 3–5 year cost deltas, and a cash-flow curve. Share via secure link with view-only permissions, expiration, and optional passcode, or export to PDF suitable for email/board packets. Ensure numerical consistency with the current scenario state, include disclaimers/notes, and optimize layout for mobile and desktop. Track share events and prevent access to underlying editable data unless explicitly granted.

Acceptance Criteria
Assumption Audit & Change Log
"As a fleet controller, I want a history of assumption changes with reasons so that decisions are transparent and auditable."
Description

Comprehensive history of assumption edits and data imports with timestamp, user, and before/after values. Supports inline comments and reason codes, plus quick links to revert to prior snapshots. Exposes a readable audit trail to align owners, operations, and finance and to support post-decision reviews. Integrates with RBAC to control who can edit or comment, and stores events in an append-only log for integrity.

Acceptance Criteria

Spec Match

For vehicles marked Replace, recommends right‑sized successors based on duty cycle, payload, route mix, PTO usage, and idle patterns. Estimates MPG and maintenance savings vs current units and provides a checklist to brief vendors or procurement.

Requirements

Duty Cycle Profiling Engine
"As a fleet manager, I want FleetPulse to automatically summarize each vehicle’s duty cycle, payload, route mix, PTO usage, and idle patterns so that I can base replacement specs on real‑world usage without manual analysis."
Description

Continuously aggregate and normalize OBD‑II and GPS data to produce per‑vehicle duty profiles capturing trip segmentation, payload proxies (e.g., axle load or acceleration patterns when available), route mix (urban/suburban/highway via speed and stop density), PTO engagement duration, and idle ratio over a rolling 90‑day window. Apply de‑noising and outlier handling, compute standardized metrics, and refresh profiles at least daily. Provide APIs and a data model that integrates with existing FleetPulse telematics ingestion and maintenance modules. Deliver per‑vehicle summaries and fleet benchmarks to power downstream matching and ROI calculations.
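For the route-mix metric, a sketch of classification by speed and stop density; the cutoffs are illustrative assumptions, not tuned values:

```typescript
// Segment features derived from GPS/OBD-II; cutoffs are assumptions.
interface TripSegment {
  avgSpeedMph: number;
  stopsPerMile: number;
  miles: number;
}

type RouteClass = "urban" | "suburban" | "highway";

function classifySegment(s: TripSegment): RouteClass {
  if (s.avgSpeedMph >= 50 && s.stopsPerMile < 0.2) return "highway";
  if (s.avgSpeedMph <= 25 || s.stopsPerMile >= 2) return "urban";
  return "suburban";
}

// Route mix over a rolling window: share of segment-miles per class.
function routeMix(segments: TripSegment[]) {
  const miles: Record<RouteClass, number> = { urban: 0, suburban: 0, highway: 0 };
  let total = 0;
  for (const s of segments) {
    miles[classifySegment(s)] += s.miles;
    total += s.miles;
  }
  const pct = (m: number) => (total > 0 ? m / total : 0);
  return { urban: pct(miles.urban), suburban: pct(miles.suburban), highway: pct(miles.highway) };
}
```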

Acceptance Criteria
Successor Specification Matching
"As a procurement lead, I want the system to recommend right‑sized replacement vehicles that meet our workload and constraints so that I can quickly shortlist viable options."
Description

Recommend right‑sized replacement configurations by mapping duty profiles to candidate vehicles across OEM catalogs. Enforce fit criteria such as GVWR/class, body/upfit compatibility, wheelbase, powertrain type, PTO capability, tow ratings, braking systems, and range needs. Incorporate operating context (terrain, climate, emissions zones) and depot infrastructure (e.g., charging availability). Rank top matches with a transparent fit score, tunable weighting of factors, and graceful fallbacks for incomplete data. Maintain an up‑to‑date spec catalog with pricing bands and required metadata.
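A minimal fit-score sketch of this flow: hard constraints filter candidates first, then a tunable weighted sum ranks the survivors, with missing factors skipped as a graceful fallback. Factor names and weights are illustrative:

```typescript
// Illustrative candidate shape; factors are pre-normalized to 0..1 (1 = best fit).
interface Candidate {
  gvwrLbs: number;
  ptoCapable: boolean;
  factors: Record<string, number>;
}

function fitScore(
  c: Candidate,
  needs: { minGvwrLbs: number; ptoRequired: boolean },
  weights: Record<string, number>
): number | null {
  // Hard constraints exclude a candidate outright.
  if (c.gvwrLbs < needs.minGvwrLbs) return null;
  if (needs.ptoRequired && !c.ptoCapable) return null;
  let score = 0;
  let totalWeight = 0;
  for (const [factor, w] of Object.entries(weights)) {
    const v = c.factors[factor];
    if (v === undefined) continue; // graceful fallback for incomplete data
    score += w * v;
    totalWeight += w;
  }
  return totalWeight > 0 ? score / totalWeight : null;
}

// e.g., fitScore(candidate, { minGvwrLbs: 10000, ptoRequired: true },
//   { payloadFit: 0.3, rangeFit: 0.3, mpgGain: 0.2, priceBand: 0.2 });
```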

Acceptance Criteria
Savings and TCO Estimation
"As an owner‑operator, I want to see estimated fuel and maintenance savings for each recommendation so that I can justify the purchase with clear ROI."
Description

Compute duty‑cycle‑adjusted projections for fuel/energy consumption, maintenance costs, and downtime reduction for each recommended successor versus the current unit. Use historical FleetPulse maintenance records, local fuel/energy prices, warranty coverage, labor rates, and mileage assumptions to produce annual and multi‑year (e.g., 5‑year) ROI scenarios with sensitivity ranges. Support ICE, hybrid, and BEV models, including tariff schedules and charging efficiency. Present side‑by‑side comparisons and exportable summaries for decision support.

Acceptance Criteria
Vendor Brief Checklist & Export
"As a buyer, I want an editable checklist I can send to vendors so that I can get apples‑to‑apples quotes quickly."
Description

Generate an editable, vendor‑ready checklist per recommendation that includes required specs (GVWR, payload, body/upfit, wheelbase, PTO, tow, braking, range), preferred options, compliance notes, duty profile highlights, delivery timelines, and evaluation criteria. Allow users to customize fields, attach notes, and export via PDF/CSV/email with organization branding and reference IDs. Link each checklist to its underlying recommendation and assumptions for traceability.

Acceptance Criteria
Policy and Compliance Constraints
"As a fleet admin, I want recommendations to comply with our policies and regulations so that we avoid noncompliant purchases."
Description

Provide an admin interface to encode procurement policies and regulatory constraints, including approved OEMs/upfitters, powertrain preferences, emissions targets (e.g., CARB, city restrictions), mandatory safety features, budget caps, and lease vs. purchase rules. Apply these constraints during matching and TCO calculations, clearly flagging conflicts, suggested alternatives, and required approvals. Support regional policy sets and audit logging of overrides.

Acceptance Criteria
Explainability and Audit Trail
"As a CFO, I want transparent rationale and an audit trail for recommendations so that I can defend procurement decisions."
Description

Expose a detailed rationale for each recommendation, including input metrics from the duty profile, constraint checks, factor weightings, catalog version, and data timestamps. Persist versioned snapshots of inputs, algorithm parameters, and outputs to enable reproducibility and governance. Provide exportable rationale reports and retain logs for at least 24 months with searchable access for reviews and audits.

Acceptance Criteria

Bridge Plan

If replacement is planned but not immediate, proposes a minimal keep‑alive strategy: essential fixes only, inspection cadence, and risk-of-failure watchpoints. Avoids sinking further money into an outgoing vehicle while protecting uptime and compliance until the handoff or sale.

Requirements

Bridge Plan Eligibility & Triggering
"As a fleet manager, I want the system to flag vehicles that fit a short-term keep-alive profile so that I can quickly place them on a Bridge Plan and avoid over-investing before replacement."
Description

Implements configurable rules to identify vehicles that qualify for a Bridge Plan and prompts managers to initiate it. Criteria include target replacement date, odometer/engine hours thresholds, asset age, fault severity trends, and rising cost-per-mile indicators. Supports one-click conversion of an asset into Bridge state with inherited settings from vehicle-class templates. Provides APIs and UI to set eligibility rules, preview impacted vehicles, and bulk-apply plans. Integrates with FleetPulse asset lifecycle states to ensure consistent reporting and with notifications to surface timely prompts.

Acceptance Criteria
Minimal Keep‑Alive Fix Selector
"As a maintenance lead, I want an auto-generated list of only essential fixes for a soon-to-be-replaced vehicle so that I can control spend while maintaining safety and uptime."
Description

Delivers a decision engine that classifies issues into essential (safety/compliance/uptime-critical) versus deferrable, producing a minimal repair worklist for the Bridge horizon. Consumes OBD-II DTCs, inspection defects, and maintenance backlog, and outputs an ordered list with estimated cost, risk impact, and suggested shop tasks. Integrates with preferred vendors and work-order creation while automatically excluding non-essential repairs unless explicitly approved. Supports per-vehicle-class templates and rule overrides, with audit logs for decisions.

Acceptance Criteria
Inspection Cadence Scheduler
"As a driver supervisor, I want an automated inspection cadence with mobile checklists for Bridge vehicles so that compliance is maintained and emerging issues are caught early."
Description

Creates a risk-adjusted inspection schedule for Bridge assets, factoring in usage intensity, regulatory requirements (e.g., DOT pre/post-trip), and component health. Automatically issues reminders, assigns mobile checklists, and escalates missed inspections. Syncs with calendars, supports offline mobile capture, and feeds results back into the Keep-Alive Fix Selector and risk scoring. Includes templated checklists per asset class and configurable cadence windows for different Bridge durations.

Acceptance Criteria
Component Risk Watchpoints & Alerts
"As an owner-operator, I want targeted alerts on critical components during the Bridge period so that I can prevent unexpected breakdowns without over-servicing the vehicle."
Description

Defines and monitors watchpoints for engine, battery, and brakes tailored to Bridge scenarios, with thresholds and trend detection tuned for near-term reliability. Uses telematics streams and historical data to estimate failure risk within the Bridge horizon and surfaces prioritized alerts with recommended actions. Provides per-asset watchpoint dashboards, noise suppression for non-actionable events, and configurable alert routes (SMS, email, in-app). Integrates with anomaly detection and feeds directly into the essential fix list.

Acceptance Criteria
Spend Cap & Approval Guardrails
"As a finance-conscious fleet manager, I want spend caps and approval gates on Bridge vehicles so that we avoid sunk costs beyond the minimal keep-alive strategy."
Description

Enforces per-asset Bridge budgets with configurable spend caps, real-time tally of approved and pending work, and soft/hard stops when approaching limits. Requires approvals for exceptions, logs rationales, and blocks non-essential repairs by default. Generates weekly spend-versus-cap reports and notifies stakeholders of variance. Integrates with purchasing, vendor quotes, and accounting exports to ensure accurate cost tracking throughout the Bridge period.

Acceptance Criteria
Handoff & Exit Packet
"As a fleet operations lead, I want a ready-to-share exit packet at the end of the Bridge period so that sale or handoff is smooth and compliant with minimal administrative effort."
Description

Produces a comprehensive exit packet when a Bridge Plan ends, including consolidated service history, inspection logs, fault trends, remaining warranties, and unresolved advisories, formatted for sale or internal handoff. Supports device deactivation or reassignment workflows, data archival, and proof-of-compliance exports. Provides exit criteria checks (e.g., no open critical defects) and a single-click packet export to PDF/CSV and partner marketplaces.

Acceptance Criteria

VendorMatch

Chooses the best shop or mobile tech for each fault based on specialization, past fix times, warranty handling, rates, and proximity—then predicts downtime and estimated cost. Prioritizes your preferred vendors, offers top alternatives, and enables one‑click booking so you spend less time calling around and more time keeping vehicles earning.

Requirements

Vendor Scoring & Ranking Engine
"As a fleet manager, I want VendorMatch to rank vendors based on fit, performance, price, warranty handling, and proximity so that I can select the best option quickly with predictable downtime and cost."
Description

Create a unified vendor-scoring service that consolidates each shop/technician’s capabilities (OEM certs, categories like engine/brake/electrical), mobile vs. in‑shop service, pricing/rate cards, warranty processing capability, historical fix times, first‑time‑fix rate, quality issues, geography/coverage, and responsiveness. Combine these attributes with the detected fault context (vehicle, location, severity, service category) to compute an explainable composite score and produce a ranked list. Support configurable weighting (global defaults and per‑account overrides) and hard constraints (e.g., exclude blacklisted vendors, require mobile service on roadside events). Expose the ranking rationale to the user, handle sparse data with fallbacks, and degrade gracefully when no ideal matches exist by offering best alternatives with reasons.
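A sketch of that composite score: hard constraints first, then weighted attribute contributions retained per factor so the ranking rationale can be shown to the user. Attribute names, the neutral prior, and weights are assumptions:

```typescript
// Illustrative vendor profile; attribute scores are pre-normalized to 0..1.
interface VendorProfile {
  blacklisted: boolean;
  mobileService: boolean;
  scores: Record<string, number>; // e.g., fixTime, firstTimeFix, price, proximity
}

function rankVendor(
  v: VendorProfile,
  weights: Record<string, number>,
  requireMobile: boolean // e.g., roadside events
): { score: number; rationale: Record<string, number> } | null {
  if (v.blacklisted) return null;                 // hard constraint
  if (requireMobile && !v.mobileService) return null;
  const rationale: Record<string, number> = {};
  let score = 0;
  for (const [attr, w] of Object.entries(weights)) {
    const contribution = w * (v.scores[attr] ?? 0.5); // neutral prior for sparse data
    rationale[attr] = contribution;                   // kept for explainability
    score += contribution;
  }
  return { score, rationale };
}
```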

Acceptance Criteria
Fault-to-Service Mapping
"As a maintenance planner, I want faults automatically translated into clear service categories and requirements so that VendorMatch recommends vendors who can actually perform the work safely and on time."
Description

Map OBD‑II/DTC codes, sensor anomalies, and inspection findings to normalized service categories and required capabilities (e.g., brake caliper replacement, alternator diagnostic, ABS module). Tag severity/safety implications, recommend shop vs. mobile, and set baseline SLA targets. Use rules plus learnings from resolved cases to improve mappings over time. Provide manual classification and override when codes are unknown or ambiguous. Feed the mapped category and constraints directly into the scoring engine and booking flow to ensure accurate vendor selection and job scoping.

Acceptance Criteria
Downtime & Cost Prediction
"As an owner‑operator, I want reliable downtime and cost estimates for each recommended vendor so that I can choose the option that minimizes lost earnings and surprise expenses."
Description

Predict estimated downtime and total repair cost for each candidate vendor using historical fix durations by fault/vendor, parts lead times, vendor availability/queue, travel time, and vehicle utilization constraints. Output P50/P90 time and cost ranges with confidence, highlight drivers of uncertainty (e.g., parts backorder), and update predictions as new data arrives (confirmation, ETA, diagnostics). Surface predictions alongside rankings and feed them into routing rules and alerts to minimize unplanned downtime and budget variance.
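The P50/P90 ranges can come from empirical percentiles over historical fix durations for the fault/vendor pair, sketched here with linear interpolation as an assumed convention:

```typescript
// Empirical percentile with linear interpolation between order statistics.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = (sorted.length - 1) * p;
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
}

// e.g., historical shop-hours for one fault at one vendor:
const hours = [3, 4, 4.5, 5, 6, 8, 12];
const p50 = percentile(hours, 0.5); // 5
const p90 = percentile(hours, 0.9); // 9.6
```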

Acceptance Criteria
One‑Click Booking & Dispatch
"As a dispatcher, I want to book the chosen vendor in one click and get a confirmed time/ETA so that I spend less time coordinating and get vehicles back on the road faster."
Description

Enable frictionless booking of the selected vendor via API, email, or SMS gateways, packaging fault details, VIN, location, photos, required capabilities, and warranty info. Support appointment confirmation, live ETA for mobile techs, shop drop‑off scheduling, reschedule/cancel, and escalation if vendors do not respond within SLA. Sync events to FleetPulse calendars and vehicle timelines, and write back job IDs for traceability. Include permission controls, audit logs, and a fallback to manual contact details when integrations are unavailable.

Acceptance Criteria
Warranty‑Aware Routing
"As a fleet owner, I want VendorMatch to factor in warranty coverage and claim handling so that I avoid paying for repairs that should be covered and choose vendors who can process claims smoothly."
Description

Ingest and manage warranty rules per vehicle/part (OEM, extended, vendor), detect coverage for the identified fault, and prioritize vendors capable of handling claims and OEM procedures. Show expected covered vs. out‑of‑pocket amounts and required documentation. Attach warranty data to the booking package and track claim status. Adjust rankings and predicted cost accordingly to reduce unnecessary spend and ensure compliance with warranty conditions.

Acceptance Criteria
Preferred Vendors & Routing Rules
"As a fleet admin, I want to encode our preferred vendors and routing policies so that recommendations align with our contracts, safety standards, and operational practices."
Description

Allow accounts to configure preferred vendors, blacklists, service geofences, maximum travel distance, and weighting tweaks (e.g., prefer existing vendors unless performance falls below threshold). Enforce routing policies (safety critical → nearest qualified; roadside → mobile first) while still presenting top alternatives with clear trade‑offs. Support A/B testing of weighting schemes, per‑vehicle exceptions, and temporary overrides during outages or peak demand.

Acceptance Criteria
Live Vendor Data Integration
"As a service coordinator, I want up‑to‑date vendor availability and rates so that VendorMatch recommendations reflect real options I can book right now without surprises."
Description

Integrate with vendor networks/APIs and internal directories to retrieve live availability, lead times, service areas, rate updates, and holiday hours. Provide self‑serve vendor onboarding and verification, data freshness SLAs, retry/backoff for failures, and monitoring. Normalize heterogeneous data into the vendor profile store used by the scoring engine and booking flow. When live feeds are unavailable, use cached data with staleness indicators and prompt users to confirm details during booking.

Acceptance Criteria

RO Builder

Auto-translates DTCs and inspection defects into a vendor-ready repair order with complaint/cause/correction lines, photos, notes, and likely tests and parts. Vendors get exactly what they need up front, cutting phone tag, accelerating diagnosis, and improving first-pass fix rates.

Requirements

DTC-to-3C Translation
"As a fleet manager, I want DTCs and inspection defects automatically converted into clear complaint, cause, and correction lines so that vendors receive an actionable RO without me rewriting technical codes."
Description

Automatically converts OBD-II DTCs and inspection defects into standardized complaint/cause/correction lines, enriching each line with severity, downtime risk, and system classification (engine, battery, brake), and consolidating duplicates. Pulls code metadata (description, occurrences, active/history), freeze-frame data, and recent inspection notes to produce clear vendor-ready narrative text. Includes a reusable template library and business rules to ensure consistent tone and structure, supports multiple issues per RO, and allows user overrides with change tracking.

Acceptance Criteria
Test-and-Parts Suggestions
"As a service coordinator, I want suggested tests and parts prefilled based on codes and vehicle context so that the vendor can start diagnosis faster and arrive with the right parts."
Description

Generates likely diagnostic tests and parts kits based on DTCs, vehicle make/model/year, mileage, service history, and common fix patterns. Provides confidence scores, alternative paths, and recommended labor ops, with configurable rules per fleet and vendor. Prefills line items with quantities, OEM/aftermarket options, and estimated labor hours, while surfacing dependencies (e.g., gaskets, fluids) and safety checks. Supports user approval and edits, and logs overrides to improve future recommendations.

Acceptance Criteria
Media Capture & Annotation
"As a driver or technician, I want to attach annotated photos and notes to specific RO lines so that the vendor can see the issue clearly and verify the symptom."
Description

Enables capture and attachment of photos, short videos, and audio notes from web and mobile, with automatic timestamp, location, and vehicle linkage. Supports inline annotation (arrows, highlights), compression, and validation of required media per defect type. Allows associating media to specific RO lines, preserving EXIF metadata and chain-of-custody for warranty and compliance. Stores securely with access controls and generates vendor-friendly thumbnails and full-resolution downloads.

Acceptance Criteria
Vendor-Ready Export & Dispatch
"As a fleet manager, I want to send standardized ROs with all details to my preferred vendor in their required format so that I avoid back-and-forth and speed up acceptance."
Description

Produces complete, standardized repair orders in vendor-ready formats (PDF, structured email, and JSON via API) including 3C lines, tests/parts, media links, vehicle identifiers (VIN, plate), odometer, contact details, authorization limits, and requested SLA. Supports per-vendor templates, time zone normalization, and localization. Dispatches via email, portal, or API with delivery tracking, retries, and error handling, and records a shareable link for vendors who do not accept attachments.
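
The structured JSON variant might look roughly like this; every field name and value below is a placeholder mirroring the contents listed above, not a published schema.

```python
import json

repair_order = {
    "ro_id": "RO-10421",
    "vehicle": {"vin": "VIN-PLACEHOLDER", "plate": "ABC-1234", "odometer_mi": 88412},
    "lines": [{
        "complaint": "Battery warning light; hard starts in the morning.",
        "cause": "DTC P0562 (system voltage low); resting voltage 11.9 V.",
        "correction": "Test charging system; replace battery if load test fails.",
        "suggested_parts": [{"sku": "BAT-48H6", "qty": 1}],
        "media": ["https://example.invalid/ro/10421/photo-1"],
    }],
    "authorization_limit_usd": 450.00,
    "requested_sla_hours": 24,
    "contact": {"name": "Dispatch", "phone": "+15555550100"},
}
payload = json.dumps(repair_order, indent=2)  # dispatched via API or attached to email
```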

Acceptance Criteria
Cost Estimator & Approvals
"As an owner-operator, I want an estimated cost with automated approval rules so that minor repairs proceed immediately while larger ones are reviewed."
Description

Calculates preliminary parts and labor costs using fleet rate cards, vendor-specific labor rates, taxes, shop fees, and configurable markups. Flags warranty coverage and recalls when applicable, and estimates total cost per line and RO-level subtotals. Enforces approval rules with thresholds, auto-approves below limits, and routes higher estimates for review with audit trail. Displays variance between estimate and historical averages to highlight outliers.

Acceptance Criteria
Vehicle Context Enrichment
"As a vendor technician, I want each RO to include VIN, mileage, freeze-frame data, and recent service history so that I can triage quickly and avoid redundant checks."
Description

Augments each RO with real-time vehicle context: VIN decode, odometer from telematics, last service actions, active/inactive faults, freeze-frame snapshots, and recent inspection findings. Highlights related symptoms (e.g., misfire with fuel trim anomalies) and environmental conditions at fault time. Surfaces warranty status and parts compatibility to reduce misorders, and ensures data freshness with timestamped sources and fallbacks when sensors are offline.

Acceptance Criteria

SlotSync

Two‑way calendar sync with preferred shops shows real‑time availability by bay and service type. Hold a slot, confirm in one tap, and get automatic rebooking suggestions if delays or tow‑ins occur. Drivers receive directions and reminders, while geofenced check‑in notifies the shop as the vehicle arrives.

Requirements

Shop Calendar Connectors
"As a fleet manager, I want to link preferred shops’ calendars to FleetPulse so that I can see and book real-time availability without phone calls or email back-and-forth."
Description

Implement secure, two-way integrations with preferred shops’ calendars and scheduling systems to read and write availability, appointments, and acknowledgments. Support OAuth and token-based auth, ICS feeds, and API-based connectors with scoped permissions. Normalize shop hours, time zones, bay counts, blackout dates, and service catalogs. Provide an admin UI to link/unlink shops, test connectivity, and set sync frequency. Ensure idempotent writes, rate-limit handling, retries with backoff, and audit logs for all sync operations. Fallback to email-based booking for shops without APIs, while keeping a consistent internal model for appointments.

Acceptance Criteria
Bay & Service Availability Modeling
"As a dispatcher, I want to view bay-level availability by service type so that I can schedule the earliest compatible slot for each vehicle and job."
Description

Model and surface real-time availability by bay and service type, including estimated durations and required resources. Map FleetPulse service codes to shop service catalogs, showing earliest viable slots that match vehicle, service type, and bay capability (e.g., heavy-duty lift). Respect shop constraints such as lunch breaks, technician shifts, and parts lead times. Present availability with time-zone awareness and conflict detection, caching results for performance while ensuring freshness via incremental syncs.

Acceptance Criteria
Slot Hold & Conflict Resolution
"As a fleet manager, I want to place a temporary hold on a slot so that I don’t lose availability while confirming with my driver or supervisor."
Description

Enable temporary slot holds with a configurable TTL to prevent double-booking while approvals occur. Create a distributed lock across FleetPulse and the shop’s system to reserve capacity, with automatic expiration and cleanup. Provide real-time conflict detection and alternative-slot suggestions if a collision occurs. Expose hold status in UI, send hold notifications to shops where supported, and maintain an audit trail of holds, expirations, confirmations, and cancellations.
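
In spirit, a hold is a TTL lock keyed by slot. A single-process sketch is below; production would use a distributed store (e.g., Redis SET NX PX) spanning both FleetPulse and the shop system.

```python
import time
import uuid

class SlotHolds:
    """In-memory sketch of TTL-based slot holds (single process only)."""

    def __init__(self, ttl_seconds: int = 600):
        self.ttl = ttl_seconds
        self._holds = {}  # slot_id -> (hold_id, expires_at)

    def hold(self, slot_id: str):
        now = time.monotonic()
        live = self._holds.get(slot_id)
        if live and live[1] > now:
            return None  # collision: caller should surface alternative slots
        hold_id = str(uuid.uuid4())
        self._holds[slot_id] = (hold_id, now + self.ttl)  # expires automatically
        return hold_id

    def confirm(self, slot_id: str, hold_id: str) -> bool:
        live = self._holds.get(slot_id)
        # Only the live, matching hold can be converted into a booking.
        return bool(live and live[0] == hold_id and live[1] > time.monotonic())
```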

Acceptance Criteria
One-Tap Confirmation & Acknowledgment
"As a dispatcher, I want to confirm a held slot in one tap so that the booking is finalized quickly and accurately across systems."
Description

Allow users to confirm a held slot in one tap from web or mobile, finalizing the booking across both FleetPulse and the shop system. Send structured appointment details (vehicle, VIN, service, bay, ETA, notes) and require shop acknowledgment where supported; handle fallback workflows if acknowledgment is unavailable. Update calendars bidirectionally, issue driver and dispatcher confirmations, and support reschedule/cancel with policy-aware prompts and ICS invites. Ensure idempotency of booking operations and clear error recovery paths.

Acceptance Criteria
Delay & Tow-In Rebooking Engine
"As a dispatcher, I want automatic rebooking suggestions when delays or tow-ins occur so that vehicles are serviced promptly with minimal downtime."
Description

Continuously monitor vehicle telemetry, GPS, and shop response SLAs to detect late departures, traffic delays, breakdowns, and tow-in events. Generate rebooking suggestions that consider shop capacity, bay specialization, distance, and urgency, and allow auto-propose or auto-rebook based on configurable business rules. Notify all stakeholders with clear options (keep, shift, change shop), and synchronize changes across calendars. Provide an explanation trail for suggestions and outcomes for operational analytics.

Acceptance Criteria
Driver Directions & Smart Reminders
"As a driver, I want clear directions and timely reminders so that I arrive on time at the correct location and bay."
Description

Deliver turn-by-turn directions, parking/bay instructions, and time-aware reminders to drivers via in-app notifications, SMS, and email as configured. Include deep links to the appointment, support multiple mapping providers, and handle time-zone changes. Allow configurable reminder cadence (e.g., day-before, 2 hours, 30 minutes) with escalation if the driver hasn’t acknowledged. Provide a simple ‘On My Way’ action that updates ETA and notifies the shop and dispatcher.

Acceptance Criteria
Geofenced Check-In & Arrival Alerts
"As a service writer at the shop, I want automatic arrival notifications so that I can prepare the bay and parts before the vehicle pulls in."
Description

Implement geofenced detection around shop locations to trigger automatic check-in on arrival with low battery impact. Update appointment status to ‘Arrived’, notify the shop in real time, and surface the event in FleetPulse. Provide fallbacks for poor GPS conditions and a manual check-in option. Ensure user-consented location permissions, configurable geofence radius, platform-specific optimizations (iOS/Android), and safeguards against false positives (speed thresholds, dwell time).
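
The dwell-plus-speed safeguard reduces to: stay inside the fence, below a speed threshold, for long enough. A sketch with assumed defaults (150 m radius, 2-minute dwell, 8 km/h ceiling):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def should_check_in(fixes, shop, radius_m=150, dwell_s=120, max_speed_kph=8):
    """fixes: chronological (timestamp_s, lat, lon, speed_kph) samples.
    Require sustained low-speed dwell inside the fence so a drive-by or a
    GPS blip does not trigger a false 'Arrived'."""
    inside_since = None
    for ts, lat, lon, speed in fixes:
        inside = (haversine_m(lat, lon, shop["lat"], shop["lon"]) <= radius_m
                  and speed <= max_speed_kph)
        if not inside:
            inside_since = None
            continue
        inside_since = inside_since if inside_since is not None else ts
        if ts - inside_since >= dwell_s:
            return True
    return False
```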

Acceptance Criteria

Parts Hold

Checks OEM/dealer/distributor inventories for likely parts based on the fault, then reserves stock with the chosen vendor and attaches a PO. If parts are back‑ordered, it recommends an alternate vendor with stock or shifts the appointment automatically—reducing dwell time and missed promises.

Requirements

Fault-to-Parts Recommendation Engine
"As a fleet manager, I want the system to suggest the most likely parts from a fault code so that I can order the right parts quickly and reduce diagnostic and sourcing time."
Description

Translate incoming OBD-II fault codes, VIN, mileage, and recent service history into a prioritized list of likely replacement parts with confidence scores, OEM part numbers, and aftermarket equivalents. Normalize parts data (brand, SKU, description, core charges) and quantities, and group suggestions into a proposed “parts basket” per vehicle/work order. Expose a deterministic rules layer (initial) with a pluggable model for continuous refinement, support supersessions and cross-references, and allow technicians to override selections. Surface fitment validation against year/make/model/engine, and persist recommendations to the vehicle and work order record for traceability and cost analytics.

Acceptance Criteria
Vendor Inventory & Pricing Connectors
"As a service coordinator, I want to see live availability and pricing across my preferred vendors so that I can choose the fastest and most cost-effective source."
Description

Integrate with OEM, dealer, and distributor systems to query real-time stock, pricing, lead times, and locations for candidate parts. Provide a normalized inventory API and connector framework (OAuth/API key, retries, timeout <5s, circuit breakers, and 10-minute cache) with fields for on-hand, on-order, ETA, price, discounts, taxes, shipping options, and store proximity to vehicle/garage. Support multiple vendors per part, multi-currency readiness, and SLA-based fallbacks when vendors are offline. Log telemetry, errors, and response times for reliability and vendor performance reporting.

Acceptance Criteria
One-click Reserve & PO Generation
"As an owner-operator, I want to reserve required parts and create a PO in one step so that I can lock inventory and keep my repair on schedule."
Description

Enable users to reserve selected parts with a single action, automatically generating a purchase order per vendor with all required details: PO number, vendor account, ship-to/pickup location, item lines (SKU, description, qty, unit price, core charges), taxes, shipping method, requested delivery date, and references to vehicle, VIN, and work order. Ensure idempotency to prevent duplicates, support EDI/API submission or vendor-confirmation email with PDF attachment, and display hold expirations and cancellation flows. On confirmation, attach the PO and reservation status to the work order and push expected part arrival to maintenance scheduling and cost tracking.
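
Idempotency can hinge on a key derived from the business identity of the request, so a retry returns the existing PO instead of reserving twice. A toy sketch (the dict stands in for a database table with a unique constraint):

```python
import hashlib

_submitted: dict[str, str] = {}  # idempotency_key -> po_number

def reserve_and_create_po(work_order_id: str, vendor_id: str, lines: list[dict]) -> str:
    """Same work order + vendor + items => same key => same PO on retry."""
    fingerprint = f"{work_order_id}|{vendor_id}|{sorted(l['sku'] for l in lines)}"
    key = hashlib.sha256(fingerprint.encode()).hexdigest()
    if key in _submitted:
        return _submitted[key]           # replay: no duplicate reservation
    po_number = f"PO-{key[:8].upper()}"  # real systems use a sequence, not a hash
    # ... submit via EDI/API or vendor-confirmation email with PDF here ...
    _submitted[key] = po_number
    return po_number
```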

Acceptance Criteria
Backorder Routing & Substitution
"As a fleet manager, I want automatic alternatives when parts are back-ordered so that repairs aren’t delayed and vehicles return to service faster."
Description

Detect backorders and insufficient quantities, then automatically recommend alternates: nearby vendor locations, equivalent part numbers, or aftermarket brands that meet OEM specifications. Present trade-offs (ETA, price, distance) and, when policy allows, automatically route the reservation to the best alternative or split the order across vendors. Record substitution rationale, enforce vendor and brand preferences, and update the work order and PO(s) accordingly. If no viable option exists, capture interest lists and notify when stock returns.

Acceptance Criteria
Appointment Auto-Reschedule
"As a dispatcher, I want appointments to shift automatically when part ETAs change so that I can keep commitments realistic and minimize vehicle dwell time."
Description

Recalculate and update maintenance appointments based on confirmed part ETAs and delivery methods, preventing bookings before parts arrival. Respect technician capacity, bay availability, and vehicle downtime constraints, propose the earliest feasible slot, and notify stakeholders (driver, technician, vendor if pickup) of changes. Write back schedule updates to the work order and calendar, and maintain a change log for missed-promise analysis.

Acceptance Criteria
Approvals & Spend Controls
"As a fleet administrator, I want approval rules on parts reservations and POs so that we control spend and enforce vendor policies without slowing routine purchases."
Description

Implement role-based controls and spend thresholds to govern reservations and POs: auto-approve under configurable limits, require manager approval over limits or for brand/vendor exceptions, and block purchases from non-whitelisted vendors. Provide an approval queue with SLA reminders, full audit trail (who, what, when), and reasons for overrides. Integrate with fleet budgets and cost categories to prevent overages and surface projected vs actual part costs on the work order.

Acceptance Criteria
Parts Hold Lifecycle Tracking & Alerts
"As a shop lead, I want real-time status and alerts on parts holds and POs so that I can act on delays and keep repairs moving."
Description

Track the end-to-end status of each reservation and PO (Reserved, Confirmed, Backordered, Split, Shipped, Delivered, Received, Canceled) with timestamps, expected vs actual ETAs, and exceptions. Ingest vendor webhooks or polling updates, reconcile partial shipments, and prompt receiving in the work order to close the loop and update repair-cost tracking. Send proactive alerts for hold expirations, delayed shipments, or price changes via in-app, email, and push, and surface a dashboard of at-risk repairs.
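
The listed statuses suggest a small state machine; the allowed edges below are an assumption about reasonable transitions, not a confirmed workflow.

```python
# Assumed legal transitions for a parts hold/PO; terminal states have no exits.
TRANSITIONS = {
    "Reserved":    {"Confirmed", "Canceled"},
    "Confirmed":   {"Backordered", "Split", "Shipped", "Canceled"},
    "Backordered": {"Confirmed", "Split", "Canceled"},
    "Split":       {"Shipped", "Canceled"},
    "Shipped":     {"Delivered"},
    "Delivered":   {"Received"},
    "Received":    set(),
    "Canceled":    set(),
}

def advance(current: str, nxt: str) -> str:
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    # A timestamped event would also be appended to the hold's history here.
    return nxt
```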

Acceptance Criteria

Route Bundler

Clusters jobs by location, time windows, and shop capacity to consolidate visits or build efficient mobile‑tech routes. Minimizes deadhead miles and repeat trips while aligning with driver dispatch windows, keeping more vehicles in service with fewer interruptions.

Requirements

Constraint-Aware Route Optimization Engine
"As a fleet manager, I want the system to automatically bundle jobs into efficient routes that respect time and capacity limits so that we reduce travel waste and keep more vehicles in service."
Description

Build a scalable optimization core that formulates the Route Bundler as a vehicle routing problem with time windows and capacity (VRPTW). The engine must cluster service jobs by geospatial proximity while honoring hard constraints (shop bay counts, technician slot limits, customer time windows, service durations, driver dispatch windows) and penalizing soft constraints (preferred shops, technician skill matching, bundling across adjacent days). It should minimize deadhead miles and repeat trips using a tunable cost function (distance, drive time, tolls, labor, out-of-service impact) and output consolidated bundles or mobile-tech routes with ordered stops, ETAs, and appointment windows. Provide deterministic results with seeded tie-breaking, support incremental re-optimization, and expose the solver via an internal service API with SLAs for typical fleet sizes (3–100 vehicles, 10–300 jobs/day).
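
The production engine is a VRPTW solver; purely to convey the clustering objective, here is a toy nearest-neighbor bundler on planar coordinates that ignores time windows, capacity classes, and bay constraints entirely.

```python
from math import hypot

def greedy_bundle(jobs, start, max_stops):
    """Toy deadhead-minimizing heuristic: repeatedly visit the nearest
    unvisited job until the bundle is full. jobs: {job_id: (x, y)};
    start: (x, y). Illustrates the intuition only; it is not the solver."""
    remaining, route, pos = dict(jobs), [], start
    while remaining and len(route) < max_stops:
        nearest = min(remaining, key=lambda j: hypot(remaining[j][0] - pos[0],
                                                     remaining[j][1] - pos[1]))
        route.append(nearest)
        pos = remaining.pop(nearest)
    return route
```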

Acceptance Criteria
Shop and Technician Capacity Sync
"As a service coordinator, I want shop and technician availability reflected in bundling so that proposed routes only include slots we can actually fulfill."
Description

Implement two-way synchronization with shop calendars and mobile technician schedules to surface real-time capacity into the bundling process. Pull bay counts, appointment holds, blackout dates, technician shifts, skill tags, and average service durations by job type; push tentative and confirmed bundles back as calendar holds. Detect conflicts (double-booked bays, tech skill mismatches) and propose alternates. Support both in-house shops and partner vendors, with API and CSV connectors and per-shop business hours, holidays, and lead-time rules.

Acceptance Criteria
Dispatch Window Alignment and Driver Assignment
"As a dispatcher, I want bundles aligned to driver shifts and territories so that assignments are realistic and do not disrupt operations."
Description

Ingest driver dispatch windows, start locations, vehicle types, and allowable service interruptions to align bundles with driver availability. Assign routes to drivers based on proximity, shift length, certifications, and vehicle compatibility while respecting work-hour limits and preferred territories. Provide fallback rules (unassigned pool, split bundles) and guardrails to avoid assigning vehicles currently flagged as critical or unavailable. Output driver-ready route manifests compatible with FleetPulse dispatch.

Acceptance Criteria
Map and Timeline Visualization with Manual Overrides
"As a planner, I want to see and adjust proposed bundles on a map and timeline so that I can fine-tune routes while understanding the impact."
Description

Deliver a planning UI that visualizes bundles and routes on a map and timeline (Gantt), showing stop sequences, ETAs, travel times, and capacity usage. Enable drag-and-drop reordering, reassignment between routes, locking of stops or time windows, and quick splitting/merging of bundles. Surface constraint violations and cost deltas in real time as the planner makes changes, with undo/redo and an audit trail of manual overrides. Support clustering heatmaps, traffic overlays, and shop capacity indicators.

Acceptance Criteria
Event-Driven Re-optimization and What-if Simulation
"As an operations lead, I want automatic and on-demand re-optimization so that plans adapt quickly to changes without starting from scratch."
Description

Introduce event listeners and triggers to re-optimize when key data changes (new fault alerts, cancellations, delays, weather/traffic updates, shop capacity shifts). Provide fast incremental recalculation that preserves locked decisions. Add a sandbox mode to simulate scenarios (e.g., add a mobile tech, close a bay, change time windows) and compare outcomes on KPIs before committing. Allow scheduling of nightly batch bundling and mid-day touch-ups.

Acceptance Criteria
Notifications and Confirmations Workflow
"As a coordinator, I want automatic notifications and confirmations for bundled appointments so that everyone stays aligned and capacity is secured."
Description

Create a communication workflow that sends proposed bundles and routes to stakeholders (shops, mobile technicians, drivers) via email, SMS, and in-app notifications. Include appointment confirmations, reschedule requests, and fallback options if a slot is declined. Generate iCal attachments, provide deep links to route manifests, and capture confirmations to lock capacity in the optimizer. Support localization, notification throttling, and delivery status tracking.

Acceptance Criteria
Optimization Analytics and Savings Attribution
"As an owner-operator, I want clear metrics on miles saved and uptime improvements so that I can justify the value of Route Bundler."
Description

Provide dashboards and exports that quantify Route Bundler impact, including deadhead miles avoided, trips reduced, on-time service rate, shop utilization, technician productivity, and vehicle downtime reduction. Attribute savings to specific bundles versus baseline heuristics, track exceptions and manual overrides, and surface SLA adherence. Offer per-fleet, per-shop, and per-route breakdowns with date filters and data retention aligned to FleetPulse policies.

Acceptance Criteria

Approval Guard

Enforces pre‑set spend caps and approval paths per vehicle or job. Vendors see the authorized limit up front; any overage triggers instant digital approval with documented estimates and change orders—preventing surprise bills and preserving a clean audit trail.

Requirements

Spend Cap Profiles per Vehicle/Job
"As a fleet manager, I want to set and manage spend caps per vehicle and job so that repair costs stay within policy without manual tracking."
Description

Provide configurable spend caps that can be assigned at the vehicle, job, or work order level, including options for hard stops vs. soft warnings, tax/fees inclusion rules, effective dates, reset cadence (per job, per month, per odometer interval), and currency. Caps must support inheritance from fleet-wide policies with local overrides, and expose remaining budget calculations in real time as estimates and change orders are submitted. Integrates with FleetPulse work orders and maintenance schedules to pre-populate caps when tasks are created.

Acceptance Criteria
Vendor Authorization Display & Work Token
"As a vendor service advisor, I want to see the authorized limit and scope up front so that I can quote and perform work without risking unpaid overages."
Description

Expose the authorized spend limit and scope to vendors via a secure, expiring link or QR code, showing current cap, remaining amount, and covered job details. Generate a signed Work Authorization Token that vendors reference on estimates and invoices; enforce caps in the vendor portal and via API callbacks. Reflect updates instantly when approvals change, and clearly communicate out-of-scope items. Include vendor acknowledgment logging upon viewing the limit.
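
One way to make the token self-verifying is an HMAC over the vendor-facing claims; the claim names, TTL, and key handling below are assumptions for the sketch.

```python
import base64, hashlib, hmac, json, time

SECRET = b"rotate-me"  # illustrative only; real keys belong in a KMS

def issue_work_token(work_order_id: str, cap_cents: int, ttl_s: int = 86400) -> str:
    """Signed, expiring Work Authorization Token: claims + HMAC signature."""
    claims = {"wo": work_order_id, "cap_cents": cap_cents,
              "exp": int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_work_token(token: str):
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or foreign token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None  # None when expired
```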

Acceptance Criteria
Overage Approval Workflow
"As an approver, I want instant, context-rich overage requests with one-tap actions so that work can proceed without delays while maintaining control of spend."
Description

Trigger instant approval requests when a submitted estimate or change order exceeds the cap or adds out-of-scope items. Route to designated approvers using tiered paths, approval thresholds, and backup delegates. Support one-tap approve/deny with notes via mobile, email, and SMS, with SLA timers, reminders, and auto-escalation. Allow emergency overrides with required justification and optional temporary cap increases. Persist decisions back to the work order and update vendor-facing limits in real time.

Acceptance Criteria
Digital Estimates & Change Orders
"As a vendor, I want to submit and revise detailed digital estimates and change orders so that approvals are accurate and fast without back-and-forth phone calls."
Description

Enable vendors to submit structured, line-item estimates with parts, labor, shop fees, taxes, and discounts, including attachments (photos, diagnostics, VIN scans) and notes. Support versioning, side-by-side diffs, and tracked approvals per revision. Allow creation of change orders tied to the original estimate with impact to remaining cap automatically calculated. Validate totals against rate cards and policy rules before routing for approval.

Acceptance Criteria
Immutable Audit Trail & Exports
"As an owner-operator, I want a complete, exportable audit trail of approvals and changes so that I can resolve disputes and pass audits with confidence."
Description

Record an immutable, time-stamped log of all key events: cap creation/changes, vendor views, estimate submissions, approvals/denials, escalations, overrides, and token usage. Capture actor identity, role, method (mobile/email/API), and device/IP metadata. Provide searchable timeline views per vehicle, job, and vendor. Support legally admissible e-signatures (ESIGN/UETA compliant) and one-click exports to PDF/CSV with configurable retention policies.
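
Immutability is often approximated with a hash chain: each entry commits to its predecessor, so any retroactive edit breaks verification from that point on. A minimal sketch, with storage, signing, and e-signature compliance omitted:

```python
import hashlib, json, time

class AuditLog:
    """Append-only, hash-chained event log."""

    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis marker

    def append(self, event: dict) -> dict:
        record = {"ts": time.time(), "prev": self._prev, "event": event}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self._prev = record["hash"]
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        prev = "0" * 64
        for r in self.entries:
            body = {k: r[k] for k in ("ts", "prev", "event")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if r["prev"] != prev or r["hash"] != expected:
                return False  # chain broken: a record was altered or removed
            prev = r["hash"]
        return True
```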

Acceptance Criteria
Notifications & Escalations Engine
"As a fleet manager, I want timely, configurable alerts and automatic escalations so that approvals aren’t a bottleneck and downtime is minimized."
Description

Deliver configurable notifications for cap thresholds, estimate submissions, pending approvals, SLA breaches, and vendor acknowledgments across in-app, email, SMS, and push. Support quiet hours, time zone awareness, batching/digests, and fallback recipients when approvers are unavailable. Provide per-user preferences and per-policy defaults, with delivery tracking and retry logic for reliability.

Acceptance Criteria
Policy Engine & Auto-Approvals
"As a fleet manager, I want to encode approval policies with auto-approvals for routine work so that we reduce delays while keeping exceptions under tight control."
Description

Offer a rule-based policy engine to auto-approve low-risk work under defined thresholds (e.g., tires under $200, PM services) and to flag exceptions. Validate vendor quotes against rate cards, parts markups, warranty coverage, and blacklisted vendors. Integrate with FleetPulse OBD-II diagnostics and maintenance schedules to pre-authorize routine tasks and propose caps automatically. Log every policy decision for transparency and tuning.

Acceptance Criteria

Smart Auto-Pause

Set-and-forget rules that pause billing per vehicle based on season dates, storage geofences, no-ignition streaks, or low utilization thresholds. Bulk-select or one-tap pause/resume while protecting exceptions (e.g., compliance-critical units). Cuts admin work and ensures you never pay for parked assets.

Requirements

Pause Rules Engine
"As a fleet manager, I want to define automatic pause rules based on season dates, geofences, ignition streaks, and utilization so that billing pauses happen reliably without manual work."
Description

Implements a configurable rules engine that evaluates per-vehicle conditions to automatically pause or resume billing based on season date windows, storage geofences with dwell thresholds, no‑ignition streaks, and low utilization targets over configurable lookback periods. Supports rule scoping (vehicle, group, tag), condition combinators (AND/OR), priority and conflict resolution, hysteresis to prevent flapping, time zone awareness at the asset level, effective dates, and blackout calendars. Integrates with telematics data services and the billing subsystem, executes on a scheduled cadence with on‑demand evaluation, and exposes metrics and health checks for observability.
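
To make hysteresis concrete, a single-rule sketch: the pause threshold and the resume threshold deliberately differ so a vehicle near the boundary does not flap. All thresholds and signal names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class VehicleSignals:
    days_since_ignition: int
    utilization_pct: float      # over the rule's lookback window
    in_storage_geofence: bool

def evaluate_pause(sig: VehicleSignals, currently_paused: bool) -> bool:
    """Returns the desired paused state for one illustrative AND/OR rule."""
    should_pause = (sig.days_since_ignition >= 14
                    or (sig.in_storage_geofence and sig.utilization_pct < 5.0))
    if currently_paused:
        # Hysteresis: require clearly renewed activity before resuming, so a
        # single ping near the threshold cannot toggle billing back and forth.
        resumed = sig.days_since_ignition == 0 and sig.utilization_pct >= 10.0
        return not resumed
    return should_pause
```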

Acceptance Criteria
Utilization & Ignition Detection
"As an operations analyst, I want accurate detection of no‑ignition streaks and low utilization so that pauses only occur when assets are truly idle."
Description

Calculates accurate per-vehicle utilization and ignition state signals required by pause rules, including rolling no‑ignition streaks, trip counts, distance, and engine hours over configurable windows. Applies smoothing and outlier handling for GPS/OBD dropouts, distinguishes heartbeat from true ignition events, handles device offline scenarios with grace periods, and normalizes metrics by vehicle type. Maintains cached state for fast rule evaluation and exposes confidence scores for each signal.

Acceptance Criteria
Storage Geofence Triggering
"As a yard manager, I want vehicles to auto‑pause when parked in designated storage geofences for a set dwell time so that stored units stop incurring charges."
Description

Enables designation of storage geofences and triggers auto‑pause when a vehicle dwells within a storage zone beyond a configurable threshold while remaining inactive. Accounts for GPS drift via tolerance buffers, supports entry/exit and dwell‑time logic, and optionally requires ignition‑off confirmation to avoid false positives. Provides per‑geofence defaults, inherits rule scopes, and auto‑resumes upon exit or verified utilization recovery.

Acceptance Criteria
Exception Safeguards & Compliance Lock
"As a compliance lead, I want to mark certain vehicles as non‑pausable and require approvals for overrides so that regulatory‑critical assets remain billed and monitored."
Description

Provides a protection mechanism to mark vehicles as non‑pausable (e.g., compliance‑critical units) and enforce that bulk actions and rules respect these locks. Includes role‑based permissions, override workflows with reason capture and optional approval, time‑bound exemptions, and visual indicators in UI. Blocks rule execution on protected assets while logging attempted actions for auditability.

Acceptance Criteria
Bulk Pause/Resume & Quick Actions
"As a fleet admin, I want to bulk‑select vehicles and pause or resume them with one tap while seeing protected exceptions so that I can manage billing states quickly and safely."
Description

Delivers multi‑select workflows to pause or resume many vehicles at once with a single tap, including filters (tags, status, last ignition, utilization) and saved selections. Presents pre‑action impact summaries (vehicles affected, protected exceptions, projected credits), includes confirmation and progress feedback, supports undo within a short window, and guarantees idempotent, rate‑limited operations resilient to partial failures.

Acceptance Criteria
Billing Proration & Sync
"As a finance manager, I want paused vehicles to sync to billing with proper proration and reconciliation so that invoices are accurate and we never overcharge or undercredit."
Description

Integrates pause/resume state changes with billing to apply per‑vehicle proration and credits accurately across plans and currencies. Ensures idempotent updates, handles mid‑cycle plan changes, and supports sandbox testing. Produces reconciliation reports, exposes back‑billing guardrails (e.g., backdating limited without approval), and recovers from transient failures with retry and dead‑letter handling.

Acceptance Criteria
Notifications, Webhooks & Audit Trail
"As a fleet owner, I want notifications and an audit trail for all auto‑pause actions so that I have visibility, can contest mistakes, and integrate events into my systems."
Description

Generates real‑time and digest notifications for rule‑driven and manual pause/resume events, with per‑role preferences and channels (in‑app, email, webhook). Emits structured webhooks for external integrations, provides upcoming auto‑resume alerts, and maintains an immutable audit log capturing who/what/when/why, rule IDs, inputs, and outcomes. Supports export and retention policies.

Acceptance Criteria

Pause Ledger

A transparent, per‑VIN timeline showing pause start/stop, prorated credits, and projected vs actual savings on each invoice. Includes reason codes, approver, and notes for clean audits, plus CSV/PDF export for finance and owner updates.

Requirements

Per-VIN Pause Timeline View
"As a fleet manager, I want a per-VIN timeline of pause events with credits and savings so that I can quickly understand and explain billing adjustments."
Description

Deliver a chronological, per-VIN timeline that visualizes pause start/stop events, the computed duration of each pause, associated plan rates, prorated credit amounts, and the running total for the selected period. The view must display reason codes, approver identity, and notes inline, support filtering by date range, status, reason code, and approver, and offer deep links to the related invoice(s). Ensure consistent timezone handling, responsive layout for mobile/tablet, accessibility compliance (keyboard navigation, screen-reader labels), and performant rendering for vehicles with high event counts. Provide empty, error, and loading states, and guard against duplicate or overlapping events with clear messaging.

Acceptance Criteria
Prorated Credit Calculation Engine
"As a billing analyst, I want prorated credits to be calculated automatically and accurately so that invoices reflect the exact paused usage."
Description

Implement a rules-driven service that calculates pause credits precisely based on plan rate, billing frequency, and actual pause duration, supporting per-minute granularity, daylight-saving transitions, leap years, and partial-period proration. The engine must prevent double-crediting by resolving overlapping or backdated pauses, enforce currency rounding rules, respect account-specific tax treatment, and remain multi-currency ready. Expose idempotent APIs for create/update/cancel of pause events with deterministic recalculation and write immutable, traceable entries into the ledger. Include comprehensive unit and property-based tests and observability (metrics, logs) for financial accuracy.
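
Per-minute proration over the actual minutes in the billing period keeps DST transitions and leap years correct by construction, with currency rounding applied exactly once at the end. A minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

def pause_credit(period_rate: Decimal, paused_minutes: int,
                 minutes_in_period: int) -> Decimal:
    """Credit = rate * paused_minutes / minutes_in_period, rounded once."""
    fraction = Decimal(paused_minutes) / Decimal(minutes_in_period)
    return (period_rate * fraction).quantize(Decimal("0.01"),
                                             rounding=ROUND_HALF_UP)

# e.g. a $120.00 plan paused 10 of 30 days (14,400 of 43,200 minutes):
# pause_credit(Decimal("120.00"), 14_400, 43_200) -> Decimal('40.00')
```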

Acceptance Criteria
Projected vs Actual Savings Computation
"As an owner-operator, I want to compare projected versus actual pause savings on my invoice so that I can verify impact and plan cash flow."
Description

Generate a projection at pause creation that estimates savings for the remainder of the current billing period and update to actuals upon pause end or invoice finalization. Display both values and their delta on the ledger and invoice, with clear labels and tooltips describing calculation methods. Recompute projections when a pause is edited or extended, snapshot values at invoice generation, and maintain historical snapshots for prior invoices. Handle pauses spanning multiple billing periods by apportioning projections and actuals per period. Provide safeguards and alerts when projections materially deviate from actuals beyond a configurable threshold.

Acceptance Criteria
Reason Codes, Approver, and Notes Audit Trail
"As a compliance auditor, I want each pause to include a reason code, approver, and notes with an immutable trail so that I can validate controls and approvals."
Description

Require a standardized reason code (from a configurable taxonomy), approver identity, and free-text notes for every pause start/stop event. Persist an immutable audit trail that records who did what and when, including edits, approvals, and cancellations, with versioning and diffs for changed fields. Ensure tamper-evidence, time synchronization, and retention policies aligned with finance audit requirements. Expose read APIs and export support for auditors and finance, with privacy controls for PII in notes. Surface audit details contextually in the timeline and on invoices to support clean, end-to-end audits.

Acceptance Criteria
Finance Exports (CSV/PDF)
"As a finance user, I want to export the pause ledger to CSV and PDF so that I can share and reconcile data with accounting."
Description

Provide export of the pause ledger per VIN or across a selected vehicle set and date range as CSV and branded PDF. Include all relevant fields (VIN, event timestamps, duration, plan rate, prorated credit, projected vs actual savings, reason code, approver, notes, invoice references, currency) with configurable column order and GL-friendly headers. Support large datasets via asynchronous export jobs with progress notifications and downloadable links, consistent number/date/currency formatting, page totals and summaries in PDF, and secure access governed by RBAC. Validate files for spreadsheet compatibility and ensure deterministic file naming for reconciliation.

Acceptance Criteria
Invoice Integration and Line-Item Sync
"As a finance manager, I want pause credits to appear as clear line items on invoices so that billing is transparent and reconcilable."
Description

Integrate the pause ledger with invoicing to generate clear, per-VIN line items that show pause period, prorated credit, and projected vs actual savings, along with reason code and notes references. Ensure idempotent invoice generation and safe re-runs, lock ledger values once an invoice is finalized, and apply adjustments in the next invoice when late changes occur. Handle pauses that span multiple billing periods with correct apportionment, apply tax rules correctly, and expose these line items to accounting exports and APIs. Include validation to prevent invoice creation when ledger inconsistencies are detected and provide actionable error messages.

Acceptance Criteria
RBAC and Approval Workflow for Pauses
"As an account admin, I want role-based controls and approvals for pauses so that credit issuance is governed and auditable."
Description

Enforce role-based permissions for creating, approving, editing, and ending pauses, with configurable policies at the account level. Support optional multi-step approvals triggered by configurable thresholds (e.g., credit amount, pause duration, vehicle class) and notify approvers via in-app and email alerts. Provide SLA timers, escalation paths, and automatic cancellation or re-request if approvals time out. Record all approval decisions in the audit trail and reflect approval status in the timeline and invoice annotations. Ensure usability with clear state indicators (Requested, Approved, Rejected, Active, Ended) and guardrails to prevent unauthorized credits.

Acceptance Criteria

Reminder Hibernation

Keeps maintenance schedules intact while suppressing non‑essential reminders during layup. Auto-shifts due dates by parked days and tags items as “Deferred—Parked,” so drivers aren’t nagged and nothing gets lost when the vehicle returns.

Requirements

Layup Mode Controls
"As a fleet manager, I want to place one or more vehicles into hibernation for a date range so that non-essential maintenance reminders pause without losing schedule integrity."
Description

Provide per-vehicle and bulk controls to place assets into hibernation for a defined date range or indefinitely, with optional reasons (e.g., seasonal layup, accident repair). Integrates with OBD-II telemetry to suggest hibernation when inactivity thresholds are met or when a vehicle remains within a designated yard geofence. Persist hibernation state on the vehicle profile, disable non-essential inspection prompts, and expose a clear UI banner indicating parked status. Ensure idempotent transitions (active → hibernating → active), permission checks for who can set/clear hibernation, and safeguards against accidental activation (confirmation + undo).

Acceptance Criteria
Rule-Based Reminder Suppression
"As a fleet manager, I want to suppress only non-essential reminders while keeping safety-critical alerts active so that parked vehicles don’t trigger noise but compliance is preserved."
Description

Implement a configurable rules engine that suppresses non-essential maintenance reminders when a vehicle is hibernating while continuing to surface safety-critical alerts (e.g., brake system faults, battery thermal warnings, theft/tamper). Provide default FleetPulse policies with fleet-level and vehicle-level overrides to classify reminders by severity, regulatory impact, and asset state. Support time windows (full suppression, digest to managers, or critical-only) and maintain compatibility with existing reminder generation logic. Include simulation mode to preview which reminders will be suppressed under current rules before activation.

Acceptance Criteria
Auto-Shift Due Dates & Meters
"As a maintenance planner, I want due dates to automatically move forward by the layup duration and meter-based tasks to pause so that schedules stay accurate without manual edits."
Description

Automatically shift time-based maintenance due dates forward by the number of days a vehicle remains in hibernation and pause meter-based schedules (miles, engine hours) when OBD-II indicates no usage. Preserve the original cadence and next-due offsets so recurring schedules remain aligned post-resume. Handle edge cases such as sporadic OBD pings (require minimum movement thresholds), maximum allowable deferral caps, and regulatory constraints that cannot be postponed. Provide rounding rules (nearest day) and show before/after due dates in the UI and API for transparency.
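
The core date arithmetic is small; a sketch with an assumed deferral cap and a non-deferrable flag for regulatory items:

```python
from datetime import date, timedelta

MAX_DEFERRAL_DAYS = 90  # assumed cap; actual limits are policy-configurable

def shifted_due_date(original_due: date, parked_days: int,
                     deferrable: bool = True) -> date:
    """Push a time-based due date forward by the hibernation length, capped.
    Regulatory items (deferrable=False) keep their original date."""
    if not deferrable:
        return original_due
    return original_due + timedelta(days=min(parked_days, MAX_DEFERRAL_DAYS))

# e.g. an oil change due 2025-03-01 on a vehicle parked 45 days moves to
# 2025-04-15, while a DOT inspection with deferrable=False does not move.
```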

Acceptance Criteria
Deferred—Parked Tagging & Audit Trail
"As a compliance officer, I want a clear tag and audit log for items deferred due to parking so that I can demonstrate why reminders were paused during downtime."
Description

Tag all reminders suppressed during hibernation with a visible "Deferred—Parked" label, capturing reason, parked interval, actor (user or system), and rule that caused suppression. Display tags in vehicle timelines, maintenance queues, and work order candidates. Produce an immutable audit log with timestamps and state changes for compliance reviews and export via CSV/API. Ensure tags follow the reminder when hibernation ends, retaining provenance for future inspections and audits.

Acceptance Criteria
Auto-Resume & Catch-Up Orchestration
"As a fleet manager, I want reminders to resume automatically with a catch-up plan when a vehicle returns to service so that nothing is lost and drivers aren’t overwhelmed."
Description

On hibernation end (manual or automatic on movement/geofence exit), lift suppression, recalculate due dates, and generate a catch-up plan that sequences deferred tasks within configurable grace windows. Avoid overwhelming drivers by batching notifications and prioritizing items by criticality and proximity to due. Provide a pre-resume preview to managers and an option to soft-resume (resume schedules but keep driver notifications digest-only for N days). Escalate any items that exceeded maximum deferral limits during layup.

Acceptance Criteria
Minimal-Nag Notifications
"As a driver, I want to stop receiving non-essential maintenance pings while my truck is parked so that I’m not distracted by noise."
Description

While in hibernation, suppress driver-facing non-essential notifications and surface only critical alerts, replacing routine pings with a single weekly manager digest summarizing deferred items and any compliance exceptions. Add a vehicle-level "Parked" banner and a quiet-mode icon in mobile apps. Support configurable quiet hours, per-role notification policies, and overrides for specific reminder types. Ensure notification templates reflect the hibernation state and provide one-tap access to resume or adjust rules.

Acceptance Criteria
Hibernation Reporting & KPIs
"As an operations analyst, I want reports on deferred maintenance and downtime impact so that I can quantify savings and risks from hibernation."
Description

Deliver dashboards and exports showing parked days by vehicle, count and type of deferred reminders, compliance exceptions, catch-up completion rates, and estimated cost avoidance from reduced unplanned maintenance. Include filters by fleet, vehicle group, date range, and reminder category. Provide API endpoints and scheduled email reports to share insights with stakeholders and support budgeting and seasonal planning.

Acceptance Criteria

Wake-Up Checklist

Guided reactivation that surfaces the first‑day essentials—battery/voltage check, PSI verification, quick DTC scan, and resurfaced deferred tasks. One tap books services or road tests so vehicles re-enter duty safely and without surprise downtime.

Requirements

Step-by-Step Wake-Up Flow
"As a fleet manager, I want a guided wake-up checklist that leads drivers through the first-day essentials so that vehicles return to service safely and consistently with a complete audit trail."
Description

A guided, linear checklist that orchestrates vehicle reactivation with clear steps for battery/voltage check, tire PSI verification, OBD-II quick scan, resurfaced deferred tasks, and optional road test. Each step shows pass/fail status, contextual guidance, required inputs, and allows photo/notes attachments and skip-with-reason where permitted by role. Progress auto-saves and can be resumed on mobile or web. The flow pulls vehicle specifics (make/model, tire specs, service thresholds) from FleetPulse records, enforces prerequisites (e.g., all deferred tasks must be dispositioned before completion), and writes all outcomes to the vehicle’s maintenance history with timestamps and user identity. Integrates with notifications to alert managers on failed steps and with scheduling to branch into service booking when issues are detected.

Acceptance Criteria
Instant OBD-II Quick Scan
"As a driver, I want an instant DTC scan at wake-up so that I can catch critical issues before putting the vehicle back into service."
Description

Initiates a rapid diagnostic scan via the vehicle’s connected OBD-II device upon checklist start or ignition-on, retrieving stored and pending DTCs, MIL status, and freeze-frame data. Maps codes to severity and component (engine, battery/charging, brakes), highlights critical anomalies, and provides plain-language guidance and links to recommended actions. Results are logged to the vehicle record, compared against prior scans, and used to trigger one-tap service booking when thresholds are met. Supports intermittent connectivity by caching results locally and syncing when online; gracefully handles vehicles without active telematics by prompting for a manual scan entry.

Acceptance Criteria
Battery & Voltage Health Check
"As a technician, I want an automated battery and charging check so that I can identify weak batteries or charging issues before they cause roadside failures."
Description

Reads battery voltage, cranking voltage dip, and alternator charging levels from available PIDs or telematics metrics, evaluating against configurable thresholds for 12V/24V systems and historical baselines. Produces an easy health indicator with detailed metrics, flags low resting voltage or weak charging, and recommends actions (charge, replace, schedule test). Stores measurements with ambient temperature and engine state for context and trends. If data is unavailable, prompts a guided manual test entry with photo capture of multimeter reading.

Acceptance Criteria
Tire PSI Verification
"As a driver, I want a quick way to confirm tire pressures so that I can prevent unsafe handling and premature tire wear on the first day back."
Description

Verifies tire pressures using live TPMS data when available, validating against vehicle-specific target PSI per axle and load configuration. Calculates variance, highlights under/over-inflation, and provides recommended adjustment ranges. If TPMS is unavailable, supports quick manual entry per tire with optional photo evidence and a simplified layout for common configurations (e.g., 4x2, 6x4). Results are recorded to maintenance history and can block checklist completion if deviations exceed safety thresholds unless overridden with justification by authorized roles.

Acceptance Criteria
Deferred Task Resurfacing
"As a fleet manager, I want deferred tasks to resurface during wake-up so that nothing critical is overlooked when reactivating vehicles."
Description

Aggregates all deferred defects, failed inspection items, and open work orders tied to the vehicle and presents them during wake-up for explicit disposition: schedule, complete, or justify deferment. Sorts by safety impact and due date, shows parts availability and last notes, and prevents checklist completion until high-severity items are scheduled or resolved. Updates task statuses, creates follow-up reminders, and writes decisions and rationale to the vehicle’s audit log to ensure compliance and accountability.

Acceptance Criteria
One-Tap Service Booking
"As a dispatcher, I want to book service from the checklist with one tap so that I can minimize downtime when issues are found."
Description

Enables immediate scheduling of shop or mobile service from any failed or borderline checklist step with pre-populated vehicle details, detected issues, DTCs, and recommended job codes. Integrates with preferred vendor directories and in-house calendars for availability, supports time-slot selection, and auto-creates a work order in FleetPulse. Sends confirmations and updates to the driver and manager, and links the booking to the wake-up session and maintenance history for end-to-end traceability.

Acceptance Criteria
Road Test Session Logging
"As a service lead, I want to log a short road test with acceptance criteria so that I can verify the vehicle performs normally before returning it to duty."
Description

Provides an optional, templated road test after core checks pass, including a predefined route and duration, pre/post quick scans, and real-time telemetry capture (speed, RPM, temps, DTC changes). Defines acceptance criteria (no new DTCs, stable temps/voltage, no abnormal vibrations reported) and records driver feedback, notes, and media. Marks the checklist as passed only when acceptance criteria are met or an authorized override is recorded, and logs all results to the vehicle’s history.

Acceptance Criteria

Device Sleep Guard

Puts OBD devices into a low‑chatter state to protect batteries and data plans while parked. Sends a heartbeat to confirm health and flags movement or voltage drops—catching theft risks and dead batteries before they derail reactivation.

Requirements

Intelligent Sleep Activation
"As a fleet manager, I want devices to automatically enter a low‑chatter state after parking so that batteries and data plans are preserved without manual intervention."
Description

Automatically transitions OBD devices into a low‑chatter, ultra‑low current state when ignition is off and no bus activity is detected for a configurable idle timeout. Supports per‑fleet and per‑vehicle thresholds, detection of recent diagnostics activity to defer sleep, and a safe ramp‑down sequence that stops polling and unsubscribes from high‑chatter PIDs without waking vehicle ECUs. Persistently stores sleep state across power cycles and resumes normal telemetry upon valid wake conditions. Integrates with FleetPulse’s vehicle profile to apply make/model‑specific sleep strategies and with the notification center to surface sleep state changes. Expected outcome is materially reduced parasitic draw and cellular data usage while parked without degrading data fidelity when vehicles are active.

Acceptance Criteria
Low‑Bandwidth Heartbeat Telemetry
"As a fleet manager, I want a lightweight heartbeat from parked vehicles so that I can verify device health without draining power or data."
Description

During sleep, emits a periodic, size‑constrained heartbeat containing device health and minimal context (timestamp, last known location source, battery voltage at OBD, device temperature, firmware version, uptime, and sleep policy ID). Uses adaptive cadence (e.g., 30–120 minutes) with backoff when voltage is low and catch‑up when connectivity resumes. Supports UDP/MQTT QoS‑aware delivery with retry caps, time‑windowed transmission to avoid peak charges, and payload compression. Heartbeats are ingested by FleetPulse to update vehicle status, drive health badges, and confirm device availability without waking CAN buses. Provides safeguards for clock drift and deduplication on the backend.
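
Adaptive cadence can be as simple as stepping the interval by voltage band; the 12V thresholds below are illustrative, not calibrated values.

```python
def heartbeat_interval_min(battery_voltage: float, base_min: int = 30,
                           max_min: int = 120) -> int:
    """Back off the heartbeat as the battery sags to conserve charge."""
    if battery_voltage >= 12.4:   # healthy resting voltage (assumed)
        return base_min
    if battery_voltage >= 12.0:   # getting weak: halve the chatter
        return base_min * 2
    return max_min                # at risk: minimum cadence only
```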

Acceptance Criteria
Movement and Voltage Drop Alerts
"As an owner‑operator, I want to be alerted if a parked vehicle moves or the battery is dropping so that I can prevent theft or a dead start."
Description

Monitors accelerometer, GNSS displacement, and OBD voltage trends while in sleep to detect towing, unauthorized movement, or at‑risk batteries. Implements configurable thresholds (e.g., displacement radius, motion sensitivity, voltage drop rate per hour) with suppression windows and environment‑aware sensitivity (e.g., high‑wind false‑positive mitigation). On trigger, issues low‑latency alerts via FleetPulse notifications, SMS, email, and webhook; links to live location and last voltage reading; and optionally auto‑creates a maintenance task for battery check. Supports escalation policies, recurring alert damping, and per‑vehicle arming/disarming. Integrates with geofences to classify events as expected moves vs potential theft.

Acceptance Criteria
Wake‑on‑Event and Scheduled Wake
"As a fleet manager, I want devices to wake automatically for important events or maintenance windows so that data capture resumes and updates apply without site visits."
Description

Defines clear wake pathways from sleep: ignition ON, validated motion pattern, remote wake command, and scheduled maintenance sync windows. On wake, the device re‑establishes full telemetry, re‑subscribes to PIDs, and performs a lightweight health handshake with the backend. Supports wake throttling to prevent flapping, do‑not‑wake quiet hours, and a failsafe auto‑exit from sleep after N missed heartbeats. Allows planned wake windows (e.g., nightly 03:00) for firmware update checks and inspection syncs, then auto‑returns to sleep. Integrates with FleetPulse OTA update service, inspections module, and live map presence to ensure data continuity.

Acceptance Criteria
Power Draw and CAN Quiet Mode
"As a maintenance lead, I want Sleep Guard to minimize current draw and avoid waking the vehicle’s networks so that parked vehicles don’t suffer parasitic drain or ECU faults."
Description

Guarantees sleep current draw stays below a target threshold (e.g., ≤2 mA device draw, configurable by hardware) and enforces CAN/LIN/K‑Line quiet behaviors per vehicle network to avoid keeping ECUs awake. Implements vendor‑specific bus‑silent techniques, delayed line release, and PID polling halt. Includes a compatibility matrix for makes/models to select safe modes, with automatic fallback to standard telemetry if sleep would risk ECU faults. Provides lab and field validation hooks to record bus wake events and alerts when Quiet Mode cannot be honored. Integrates with FleetPulse diagnostics to flag vehicles requiring alternate adapters or settings.

Acceptance Criteria
Remote Policy Controls and Overrides
"As a fleet admin, I want to centrally set and override Sleep Guard policies so that I can tailor behavior to each vehicle and operating context."
Description

Adds fleet‑ and vehicle‑level policy management in FleetPulse for sleep idle timeout, heartbeat cadence ranges, alert thresholds, escalation channels, geofence arming, and scheduled wake windows. Supports role‑based access, change audit logs, and bulk apply with previews. Provides API endpoints and UI controls for one‑click remote sleep, remote wake, and temporary bypass (with automatic expiry) for service visits or diagnostics. Includes safety interlocks (e.g., MFA for disabling movement alerts) and policy versioning with rollback. Ensures policies propagate to devices with confirmation receipts and drift detection.
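
Drift detection against confirmation receipts could be as simple as comparing applied policy versions; the types below are illustrative, not FleetPulse's schema:

```typescript
// Sketch of policy drift detection from device confirmation receipts.
interface SleepPolicy {
  version: number;
  idleTimeoutMin: number;
  heartbeatRange: [number, number]; // min/max cadence in minutes
}

interface DeviceReceipt { deviceId: string; appliedVersion: number; }

function detectDrift(target: SleepPolicy, receipts: DeviceReceipt[]): string[] {
  // Devices whose confirmed version lags the fleet target have drifted.
  return receipts
    .filter((r) => r.appliedVersion !== target.version)
    .map((r) => r.deviceId);
}
```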

Acceptance Criteria
Sleep Metrics and Savings Reporting
"As a business owner, I want reports showing battery and data savings from Sleep Guard so that I can quantify ROI and tune settings."
Description

Calculates per‑vehicle and fleet rollups for sleep time, estimated data saved, estimated battery mAh saved, prevented dead‑start incidents, and alert outcomes. Surfaces anomalies (e.g., devices that rarely sleep, frequent wake flaps, excessive heartbeats) and recommends tuning actions. Provides dashboards, exportable CSV, and API access; supports time filters and cohort comparisons. Integrates with FleetPulse maintenance and cost modules to correlate battery replacements and roadside events with Sleep Guard adoption for ROI tracking.
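
A back‑of‑envelope version of the battery‑savings estimate, assuming example current draws (the 200 mA active figure is illustrative; the 2 mA sleep figure echoes the Power Draw requirement):

```typescript
// Estimated battery capacity preserved by sleeping instead of staying awake.
function estimatedBatterySavedMah(
  sleepHours: number,
  activeDrawMa = 200, // assumed draw had the device stayed awake
  sleepDrawMa = 2     // target sleep draw from the Power Draw requirement
): number {
  return sleepHours * (activeDrawMa - sleepDrawMa);
}

// Example: 600 parked hours saves roughly 118,800 mAh.
const savedMah = estimatedBatterySavedMah(600); // => 118800
```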

Acceptance Criteria

Parked Map

A live map and list of parked units with storage location, layup start date, and planned return window. Color-coded durations help plan staggered reactivations; bulk actions and calendar previews streamline bringing vehicles back online.

Requirements

Parked State Determination & Live Map/List Rendering
"As a fleet manager, I want FleetPulse to automatically identify and display all currently parked vehicles on a live map and list so that I can quickly understand where idle assets are and plan their reactivation."
Description

Automatically determine and continuously update which vehicles are "parked" by combining OBD-II ignition state, GPS speed (0 mph), last movement timestamp, and presence within a configured storage geofence. Surface these units on a synchronized live map and list: cluster markers at scale, provide responsive performance for fleets up to 100 vehicles, and auto-refresh at configurable intervals without disrupting user context (preserving selection, filters, and scroll position). The list view mirrors the map selection and supports sort by days parked, return window, and location. Leverages FleetPulse’s existing telematics ingestion pipeline and mapping SDK; includes graceful fallbacks when signals are missing (e.g., treat as parked only if stationary in geofence for N minutes). Ensures consistent unit status across Parked Map, Maintenance Scheduling, and Alerts modules.
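
A minimal sketch of the parked‑state rule, assuming these signal names and a 15‑minute default dwell (both illustrative choices):

```typescript
// Parked determination combining ignition, speed, dwell time, and geofence,
// with a graceful fallback when ignition or speed signals are missing.
interface VehicleSignals {
  ignitionOn: boolean | null; // null when the signal is unavailable
  speedMph: number | null;
  minutesStationary: number;
  inStorageGeofence: boolean;
}

function isParked(s: VehicleSignals, dwellMinutes = 15): boolean {
  if (s.ignitionOn === null || s.speedMph === null) {
    // Missing signals: require a stationary dwell inside a storage geofence.
    return s.inStorageGeofence && s.minutesStationary >= dwellMinutes;
  }
  return !s.ignitionOn && s.speedMph === 0 && s.minutesStationary >= dwellMinutes;
}
```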

Acceptance Criteria
Storage Location & Geofence Management
"As an operations coordinator, I want to define and maintain our storage locations with accurate geofences so that parked vehicles are correctly recognized and organized by site."
Description

Provide CRUD management for storage locations (yards/depots) with polygon and radius geofences, address metadata, timezone, and optional capacity notes. Allow assigning default layup locations per vehicle or group and support bulk assignment from the Parked Map list. Integrate reverse geocoding for quick entry and validation, and show geofence overlays on the map. Persist locations in FleetPulse’s shared directory service so Maintenance Scheduling and Alerts can reference the same canonical locations. Include role-based permissions to control who can add/edit locations.
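
For the radius variant of geofences, containment reduces to a great‑circle distance check; this is standard haversine math, shown as a sketch (polygon fences would need a point‑in‑polygon test instead):

```typescript
// Radius-geofence containment via the haversine formula.
function withinRadiusFence(
  lat: number, lon: number,
  fenceLat: number, fenceLon: number,
  radiusMeters: number
): boolean {
  const R = 6_371_000; // mean Earth radius in meters
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(fenceLat - lat);
  const dLon = toRad(fenceLon - lon);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat)) * Math.cos(toRad(fenceLat)) * Math.sin(dLon / 2) ** 2;
  const distanceMeters = 2 * R * Math.asin(Math.sqrt(a));
  return distanceMeters <= radiusMeters;
}
```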

Acceptance Criteria
Layup Timeline & Color-Coded Duration Buckets
"As a maintenance planner, I want parked units color-coded by how long they’ve been idle and how close they are to return so that I can prioritize which vehicles to reactivate first."
Description

Track layup start date/time for each parked unit and a planned return window (date range). Visually encode duration parked and return proximity using configurable color buckets (e.g., 0–7 days, 8–30 days, >30 days; approaching, due, overdue). Apply color consistently to map markers and list rows with a legend and accessible contrast that meets WCAG AA. Tooltips and list columns show layup start, total days parked, planned return window, and notes. Handle unknown dates with a neutral state and allow backfilling layup start based on last movement. Thresholds and color schemes are tenant-configurable and reused by Alerts and Reports.
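
Bucket assignment for the example thresholds could look like this; the colors and bounds mirror the e.g. values above and would be tenant‑configurable in practice:

```typescript
// Duration buckets as a sorted list of upper bounds; the last bucket is open-ended.
interface DurationBucket { maxDays: number; color: string; }

const defaultBuckets: DurationBucket[] = [
  { maxDays: 7, color: 'green' },      // 0–7 days
  { maxDays: 30, color: 'amber' },     // 8–30 days
  { maxDays: Infinity, color: 'red' }, // >30 days
];

function bucketFor(daysParked: number, buckets = defaultBuckets): DurationBucket {
  // Unknown layup dates should map to a neutral state before reaching here.
  return buckets.find((b) => daysParked <= b.maxDays)!;
}
```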

Acceptance Criteria
Bulk Reactivation Actions
"As a fleet manager, I want to select several parked vehicles and schedule their reactivation and required checks in one step so that I can bring assets back online efficiently."
Description

Enable multi-select actions from the Parked Map list and map lasso to schedule return dates, create pre-reactivation inspections, and open work orders for required services (battery checks, brake inspections, fluid top-offs). Allow assigning responsible users/shops, target dates, and notes, and automatically update each unit’s status from parked to scheduled. Provide confirmation, undo, and conflict checks (e.g., double-booked unit). Expose corresponding API endpoints for integrations. Respect RBAC and log all bulk actions for audit.
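
The bulk endpoint's request shape and a pre‑commit conflict check might look like the following; the field names are assumptions, since the document leaves the API contract open:

```typescript
// Hypothetical bulk reactivation payload plus a double-booking check.
interface BulkReactivationRequest {
  vehicleIds: string[];
  returnDate: string;    // ISO 8601 date
  inspections: string[]; // e.g., ['battery_check', 'brake_inspection']
  assigneeId?: string;
  notes?: string;
}

function findDoubleBooked(
  req: BulkReactivationRequest,
  alreadyBooked: Set<string> // unit IDs with work scheduled on returnDate
): string[] {
  // Conflicting units are surfaced for confirmation before committing.
  return req.vehicleIds.filter((id) => alreadyBooked.has(id));
}
```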

Acceptance Criteria
Calendar Preview & Capacity-Aware Scheduling
"As a service manager, I want a calendar view of upcoming reactivations with conflict checks so that I can stagger work within our capacity and avoid bottlenecks."
Description

Offer a calendar preview that visualizes planned return windows and scheduled reactivations created from the Parked Map. Support day/week/month views, drag-and-drop adjustments, and automatic conflict detection against shop capacity, technician availability, and bay limits sourced from Maintenance Scheduling. Provide warnings and suggestions (e.g., stagger to next available slot) and support exporting/syncing via ICS or Google/Microsoft calendar. Maintain timezone accuracy, link calendar items back to the originating parked units, and keep both views synchronized.
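
Capacity‑aware conflict detection for a single day can reduce to a simple comparison; bay limits and technician counts would come from Maintenance Scheduling, and this sketch ignores job durations:

```typescript
// A day conflicts when scheduled reactivations exceed bays or technicians.
function exceedsCapacity(
  scheduledCount: number,
  bayLimit: number,
  techniciansAvailable: number
): boolean {
  return scheduledCount > Math.min(bayLimit, techniciansAvailable);
}

// Example: 6 reactivations against 4 bays and 5 technicians => conflict.
const conflict = exceedsCapacity(6, 4, 5); // => true
```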

Acceptance Criteria
Advanced Filters, Search, and Saved Views
"As a dispatcher, I want to quickly filter and save views of parked vehicles by site and return status so that I can monitor the subsets relevant to my region."
Description

Deliver powerful filtering and search across parked units by location, vehicle group, duration buckets, asset type, anomaly flags (engine/battery/brake), and planned return status (approaching/due/overdue). Provide quick search by unit ID, VIN, or license plate. Allow users to save, name, and share views with team members; persist last-used filters per user. Ensure low-latency interactions with server-side pagination and incremental loading. Include mobile-responsive layouts and keyboard navigation for accessibility.
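
A saved view could serialize to something like the shape below, with server‑side pagination baked in; all field names are illustrative:

```typescript
// Hypothetical saved-view model for the Parked Map filters.
interface ParkedMapView {
  name: string;
  locationIds: string[];
  durationBuckets: string[]; // e.g., ['0-7', '8-30', '>30']
  anomalyFlags: ('engine' | 'battery' | 'brake')[];
  returnStatus?: 'approaching' | 'due' | 'overdue';
  searchTerm?: string;       // unit ID, VIN, or license plate
  page: number;              // server-side pagination for low latency
  pageSize: number;
}
```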

Acceptance Criteria
SLA Alerts for Overdue Return Windows
"As a fleet owner, I want to be notified when parked vehicles miss their planned return windows so that I can intervene before reactivation impacts operations."
Description

Generate proactive notifications when a parked unit is approaching its planned return window or becomes overdue, with configurable lead times and quiet hours. Support in-app, email, and SMS channels with digest options to reduce noise. Alerts deep-link to the Parked Map selection and offer one-click actions to reschedule or create work orders. Provide escalation rules to notify supervisors if items remain unaddressed beyond a threshold. All alerts are logged and deduplicated across the Alerts module.
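
The approaching/overdue evaluation with a configurable lead time and simple deduplication could be sketched as follows; the status names follow the language above, while the dedup key scheme is an assumption:

```typescript
// Return-window status plus one-alert-per-transition deduplication.
type ReturnStatus = 'ok' | 'approaching' | 'overdue';

function returnStatus(returnBy: Date, now: Date, leadHours: number): ReturnStatus {
  const msLeft = returnBy.getTime() - now.getTime();
  if (msLeft < 0) return 'overdue';
  return msLeft <= leadHours * 3_600_000 ? 'approaching' : 'ok';
}

function shouldNotify(
  alertKey: string, // e.g., `${unitId}:${status}` — hypothetical key scheme
  status: ReturnStatus,
  sent: Set<string>
): boolean {
  if (status === 'ok' || sent.has(alertKey)) return false;
  sent.add(alertKey); // dedupe: at most one alert per unit per status
  return true;
}
```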

Acceptance Criteria

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Warranty Whisperer

Match VIN, DTCs, mileage to warranty and recall databases; auto-generate claim packets with dates and service history to recover costs fast.

Idle Nudge Coach

Detect excessive idling and low tire pressure in real time; send driver-friendly nudges and manager summaries that cut fuel burn and keep engines healthy.

Bay Balancer

Auto-group services by parts, technician skill, and due dates; sequence jobs to minimize vehicle idle time and verify post-repair alerts clear.

Audit Armor

One-click, time-stamped inspection-to-repair dossiers with photos, signatures, and defect closure trail; export PDF bundles that satisfy auditors and calm insurers.

Replace-or-Repair Radar

Track cost-per-mile, repeat spend, and downtime; flag vehicles crossing economic thresholds with clear replace-now vs repair recommendations and projected savings.

Fault-to-Fix Booker

Convert DTCs and inspection defects into shop appointments; share codes, photos, and parts notes with preferred vendors and auto-bundle jobs by location to reduce trips.

Parked Plan Pause

Pause billing per vehicle during seasonal downtime; keep data and reminders intact while cutting costs with one click.
