Advocacy campaign management

RallyKit

Target. Act. Measure. Win.

RallyKit is a lightweight advocacy campaign manager for small nonprofit directors and scrappy grassroots organizers. Ditch spreadsheets and bloated CRMs for one real-time dashboard that auto-matches supporters to legislators, generates district-specific scripts based on bill status, launches one-tap action pages, and tracks every action live. It sets up in 45 minutes, boosts completed calls and emails, and delivers audit-ready proof.


RallyKit

Product Details

Explore this AI-generated product idea in detail. Each aspect has been thoughtfully created to inspire your next venture.

Vision & Mission

Vision
Empower small nonprofits worldwide to win lasting policy change by turning everyday supporters into sustained, measurable civic action.
Long Term Goal
By 2029, enable 20,000 campaigns to generate 1 million verified actions monthly, increase repeat advocates by 25%, and equip 80% of organizations with audit-ready outcome reports proving policy impact.
Impact
For small advocacy nonprofit directors and organizers, pilots show RallyKit drives 42% more completed calls and 31% more emails per campaign, cuts setup from 3 days to 45 minutes, converts 18% more supporters into repeat actors, and delivers audit-ready reports in under five minutes—without juggling spreadsheets or apps.

Problem & Solution

Problem Statement
Small advocacy nonprofit directors and grassroots organizers juggle spreadsheets and disconnected apps to run call, email, and petition drives, losing conversions and proof of impact. Existing CRMs are costly, complex, and omit legislator matching, district scripts, and real-time action tracking.
Solution Overview
RallyKit replaces spreadsheet chaos with a single, real-time dashboard that unifies contacts, campaigns, and results. Built-in legislator matching auto-generates district-specific call and email scripts, while one-click action pages track every call, email, and signature as it happens, without CRM complexity or app switching.

Details & Audience

Description
RallyKit is a lightweight advocacy campaign manager that turns supporters into action with targeted emails, calls, and petitions. Built for small nonprofit directors and grassroots organizers who need simple, data-driven outreach. It replaces spreadsheet chaos and bloated CRMs by unifying contacts, actions, and results in one real-time dashboard, cutting setup from 3 days to 45 minutes and proving impact fast. A built-in legislator matcher auto-generates district-specific call and email scripts based on bill status.
Target Audience
Small advocacy nonprofit directors and scrappy grassroots organizers (ages 25–45) who juggle multiple campaigns and crave simple, data-driven outreach.
Inspiration
In a fluorescent gym, a volunteer coordinator fumbled between a laptop, clipboards, and three apps while sixty supporters clutched their phones in a wavering line. Wi‑Fi stalled, scripts didn’t match districts, numbers were buried; people drifted away. Driving home with a cold coffee and a racing pulse, I sketched one screen: auto-match districts, instant scripts, one-tap actions, a live counter proving every call. That sketch became RallyKit.

User Personas

Detailed profiles of the target users who would benefit most from this product.


Coalition Convener Casey

- 35–44, urban hub, works statewide coalitions.
- Title: Coalition manager at advocacy umbrella org.
- Coordinates 8–20 partner nonprofits per campaign.
- Budget-constrained, relies on shared tooling grants.
- Tech stack: Slack, Google Workspace, Airtable, Zoom.

Background

Former field organizer who scaled a 15-org coalition during a pivotal bill fight. Burned by attribution disputes and messy spreadsheets, they standardized tooling to keep partners aligned. Now they codify shared scripts and reporting from day one.

Needs & Pain Points

Needs

1. Partner-level attribution and deduped reporting
2. Shared scripts with role-based permissions
3. Quick UTM tagging and link variants

Pain Points

1. Attribution fights stall coalition momentum
2. Conflicting scripts confuse volunteers
3. Spreadsheets break cross-org tracking

Psychographics

- Obsessed with clarity, credit, and coordination
- Values transparency over turf and ego
- Motivated by collective wins, measurable impact
- Pragmatic operator, allergic to tool sprawl

Channels

1. Slack – coalition threads
2. Gmail – partner updates
3. Zoom – weekly standups
4. Google Sheets – shared trackers
5. X – policy monitoring


Volunteer Wrangler Val

- 26–36, regional base, constant travel between sites.
- Title: Volunteer coordinator at grassroots nonprofit.
- Oversees 30–200 volunteers across campaigns.
- Mobile-first; Android phone, portable battery packs.
- Tools: WhatsApp, Twilio SMS, EveryAction/VAN.

Background

Started as a weekend phonebank lead and grew into managing multi-county volunteer teams. After too many drop-offs from clunky tools, they standardized simple, mobile-first flows. Now they coach new captains on fast conversion habits.

Needs & Pain Points

Needs

1. Mobile-first one-tap action links
2. QR codes and kiosk mode on-site
3. Simple volunteer onboarding flows

Pain Points

1. Volunteers drop off during clunky forms
2. Spotty wifi kills event momentum
3. Tool logins confuse first-timers

Psychographics

- Energized by people power and momentum
- Prefers simple checklists over long manuals
- Motivated by visible completions, real-time buzz
- Calm under chaos, fixates on follow-through

Channels

1. WhatsApp – volunteer chats
2. Twilio SMS – shift reminders
3. Instagram – story blasts
4. Gmail – weekly briefings
5. Facebook Groups – community updates


Rural Reach Riley

- 34–52, rural districts, long driving routes.
- Title: Regional organizer for statewide nonprofit.
- Serves 10–25 sparsely populated counties.
- Connectivity: low-bandwidth, intermittent service.
- Tools: Hustle, Facebook Groups, Gmail, YouTube.

Background

Former PTA leader turned district organizer, known at county fairs and co-ops. After city-built tools failed offline, they curated SMS-first workflows. They anchor trust through local stories and names.

Needs & Pain Points

Needs

1. SMS-first action pages and short links
2. Low-data pages with large buttons
3. Offline capture syncing later

Pain Points

1. Pages time out on spotty service
2. Scripts ignore local realities
3. Long forms scare off neighbors

Psychographics

- Values neighbor trust and local credibility
- Chooses practical over polished every time
- Motivated by being heard by distant capitols
- Patient, relationship-first communicator

Channels

1. Hustle – P2P SMS
2. Facebook Groups – local chatter
3. YouTube – short explainers
4. Gmail – county summaries
5. X – legislative updates


Campus Catalyst Camila

- 19–23, public university undergraduate.
- Title: Campus chapter lead, part-time barista.
- Lives on-campus; acts nights and weekends.
- Organizes 3–5 clubs in coalition.
- Tools: Instagram, GroupMe, TikTok, Gmail.

Background

Sparked by a tuition protest, they mastered QR-driven mobilization and link-in-bio funnels. Admin delays taught them to favor tools that don’t need IT. They thrive on rapid-fire sprints around key votes.

Needs & Pain Points

Needs

1. QR codes tied to student groups
2. One-tap pages optimized for Stories
3. Instant counts to motivate turnout

Pain Points

1. Link-in-bio clutter hurts conversions
2. Campus wifi throttles heavy pages
3. Admin approval delays stall actions

Psychographics

- Thrives on rapid, visible wins
- Loves DIY design and shareable content
- Skeptical of bureaucracy, seeks autonomy
- Peer influence drives adoption

Channels

1. Instagram – Stories, Reels
2. GroupMe – chapter coordination
3. TikTok – quick explainers
4. Gmail – listserv blasts
5. Discord – committee chat


Accessibility Advocate Alex

- 27–41, midsize nonprofit in metro area.
- Title: Accessibility and inclusion lead.
- Oversees content in 2–5 languages.
- Coordinates with legal and comms teams.
- Tools: Gmail, Slack, Google Docs, YouTube.

Background

Former interpreter and disability rights organizer who standardized alt-text and plain language across campaigns. After mistranslations derailed a bill push, they built glossary and review workflows. They advocate for accessibility budgets as table stakes.

Needs & Pain Points

Needs

1. Multilingual scripts and toggleable pages
2. WCAG-compliant templates and audits
3. TTY-friendly call options and captions

Pain Points

1. Tools break screen readers
2. Auto-translation mangles legislative nuance
3. Audio-only calls exclude supporters

Psychographics

- Inclusion is a baseline, not a bonus
- Pedantic about clarity and readability
- Motivated by removing participation barriers
- Data-driven yet human-first decision-maker

Channels

1. Gmail – content workflows
2. Slack – cross-team coordination
3. Google Docs – translation drafts
4. YouTube – captioned explainers
5. WhatsApp – multilingual groups


Event Energizer Evan

- 29–45, city-based events producer.
- Title: Events manager at advocacy nonprofit.
- Runs 5–20 civic events per quarter.
- Manages AV vendors and volunteers.
- Tools: Eventbrite, Instagram, Gmail, Twilio SMS.

Background

Shifted from music festivals to civic rallies, mastering crowd flow and stage pacing. After venue wifi meltdowns, they built redundancy with hotspots and offline capture. They chase conversions-per-minute like a sport.

Needs & Pain Points

Needs

1. Kiosk mode with offline queueing
2. Big-screen live counters and leaderboards
3. Fast QR generation with UTM bundles

Pain Points

1. Venue wifi collapses under load
2. Tiny forms bottleneck crowd flow
3. Hard to attribute from-stage scans

Psychographics

- Obsessed with throughput and crowd energy
- Measures success by conversions per minute
- Plans meticulously, improvises gracefully
- Champions eye-catching, simple CTAs

Channels

1. Instagram – event promotion
2. Gmail – attendee confirmations
3. Twilio SMS – day-of alerts
4. X – live updates
5. Eventbrite – ticketing hub


Systems Sync Sam

- 30–40, operations/tech at small–mid nonprofit.
- Title: Data/Operations manager, CRM admin.
- Maintains NationBuilder/EveryAction/Action Network.
- Comfortable with SQL and APIs.
- Tools: Slack, GitHub, Postman, BigQuery.

Background

Self-taught SQL tinkerer who inherited a spaghetti stack. After losing actions to silent webhook failures, they now demand idempotency and logs. Their mandate: one source of truth, zero manual CSVs.

Needs & Pain Points

Needs

1. Robust APIs and stable webhooks
2. Custom field mapping and validation
3. Event-level logs with retries

Pain Points

1. Silent failures lose actions
2. Inflexible schemas break syncs
3. Manual CSVs waste hours

Psychographics

- Trusts systems, not anecdotes
- Perfectionist about schemas and deduping
- Motivated by automation and reliability
- Prefers documentation over demos

Channels

1. Slack – #data channels
2. GitHub – issues
3. RallyKit Docs – API reference
4. Gmail – release notes
5. Stack Overflow – quick fixes

Product Features

Key capabilities that make this product valuable to its target users.

Confidence Lights

Tri-color verification scoring (Green/Amber/Red) with clear reason codes. Auto-approve high-confidence constituents, prompt quick fixes for medium confidence, and quarantine risky submissions. Reduces supporter friction while keeping organizers focused on edge cases and audit-proof review.

Requirements

Real-time Confidence Scoring Engine
"As an organizer, I want an automated confidence score that evaluates each supporter submission in real time so that my team can triage workload and protect outreach quality without slowing legitimate actions."
Description

Compute a confidence score for each action submission in under 150ms using multi-signal validation (address geocoding and district match, email/phone format and deliverability checks, IP-to-location proximity, duplicate detection, prior action history, and device/session velocity). Map scores to Green/Amber/Red thresholds with deterministic, versioned logic and attach machine-readable reason codes. Execute synchronously within RallyKit’s one-tap action flow with graceful degradation when external validators are unavailable. Expose scoring results via internal service API and event payloads for the dashboard, analytics, and downstream routing. Ensure idempotency and consistent outcomes across retries and distributed nodes.
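
To make the scoring contract concrete, here is a minimal sketch of the threshold mapping and graceful degradation described above; the signal names, weights, and cutoff values are illustrative assumptions, not RallyKit’s production configuration.

```typescript
// Illustrative sketch only: signal names, weights, and thresholds are assumed.
type Signal = { code: string; outcome: "pass" | "fail" | "warn" | "waived"; weight: number };
type ScoreResult = {
  score: number;                      // 0-100
  color: "GREEN" | "AMBER" | "RED";
  reasonCodes: string[];              // ordered, machine-readable
  version: string;                    // scoring-config version for determinism
};

const CONFIG_VERSION = "v1";          // hypothetical config version

function scoreSubmission(signals: Signal[]): ScoreResult {
  // Waived (unavailable) signals contribute zero weight, per graceful degradation.
  const usable = signals.filter(s => s.outcome !== "waived");
  const total = usable.reduce((sum, s) => sum + s.weight, 0) || 1;
  const passed = usable
    .filter(s => s.outcome === "pass")
    .reduce((sum, s) => sum + s.weight, 0);
  const score = Math.round((passed / total) * 100);

  // Deterministic threshold mapping (example cutoffs).
  const color = score >= 80 ? "GREEN" : score >= 50 ? "AMBER" : "RED";

  return {
    score,
    color,
    reasonCodes: signals.map(s => `${s.code}:${s.outcome}`),
    version: CONFIG_VERSION,
  };
}

// Example: address and district checks pass, email deliverability check unavailable.
console.log(scoreSubmission([
  { code: "ADDRESS_GEOCODE", outcome: "pass", weight: 3 },
  { code: "DISTRICT_MATCH", outcome: "pass", weight: 3 },
  { code: "EMAIL_DELIVERABILITY", outcome: "waived", weight: 2 },
  { code: "DUPLICATE_DETECTION", outcome: "warn", weight: 2 },
]));
```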

Acceptance Criteria
P95 Latency <=150ms for Synchronous Scoring Call
Given external validators are healthy And synthetic submissions at 50 RPS sustained for 5 minutes When the scoring engine is invoked synchronously per submission Then the 95th percentile end-to-end latency at the service boundary is <= 150 ms And the 99th percentile latency is <= 250 ms And the success rate is 100% with no timeouts
Deterministic Versioned Logic and Reason Codes
Given a fixed scoring-config version V and identical normalized inputs When the engine evaluates the submission on any node or retry Then the numeric score, color tier, and ordered reason_codes are identical byte-for-byte And the response includes version=V And changing any scoring rule produces version=V+1 and a different version value in the response
Multi-signal Validation Coverage
Given a submission with address, email, phone, IP, device/session, and actor history When the engine scores the submission Then reason_codes include entries for ADDRESS_GEOCODE, DISTRICT_MATCH, EMAIL_SYNTAX, EMAIL_DELIVERABILITY, PHONE_FORMAT, PHONE_DELIVERABILITY, IP_PROXIMITY, DUPLICATE_DETECTION, PRIOR_HISTORY, DEVICE_SESSION_VELOCITY And each entry includes code, outcome in {pass, fail, warn}, and details payload And missing or unavailable signals are included with outcome=waived and a cause code
Graceful Degradation on External Validator Failure
Given the address geocoder times out and the email deliverability service is degraded When the engine scores a submission Then a score and color tier are returned without error And reason_codes include GEOCODER_TIMEOUT and EMAIL_DELIVERABILITY_DEGRADED with outcome=waived And degraded signals contribute zero weight to the score And the 95th percentile latency remains <= 200 ms
API and Event Payload Contract
Given an internal API request POST /score is made with valid inputs and idempotency_key When the engine responds and emits an event Then the API response and event payload each include submission_id, score (0-100), color in {GREEN, AMBER, RED}, reason_codes[], version, evaluated_at (ISO-8601), idempotency_key, request_id And both payloads are JSON conforming to schema version v1.0 and share identical values for common fields And an event is published to topic rallykit.confidence.score within 100 ms of the API response
Idempotency and Duplicate Suppression
Given the same normalized inputs and idempotency_key=K are sent three times within 24 hours to different nodes When the engine processes the requests Then only one event is emitted for K And every response is identical for score, color, reason_codes, version, and evaluated_at And responses include idempotent=true and first_seen_at timestamp And requests sent without an idempotency_key still produce consistent outputs but emit distinct events labeled duplicate=true when a duplicate is detected
Color-Mapped Routing Outcomes
Given a submission scores GREEN When the one-tap action flow continues Then the submission is auto-approved without additional prompts Given a submission scores AMBER When the one-tap action flow continues Then the user is shown up to two corrective prompts derived from reason_codes and may resubmit Given a submission scores RED When the one-tap action flow continues Then the submission is quarantined, not sent to legislators, and an audit record is persisted with reason_codes and reviewer_queue=true
Reason Codes and Explanations
"As a supporter, I want clear, plain-language reasons and suggested fixes when my submission needs attention so that I can quickly correct issues and complete my action."
Description

Provide standardized, hierarchical reason codes for each score outcome, coupled with human-readable explanations and actionable remediation tips. Render concise supporter-facing messages inline on action pages for Amber results (e.g., missing apartment number, ZIP+4 mismatch) and organizer-focused detail in the dashboard. Localize explanations (i18n-ready) and store codes in submission records for analytics, exports, and audits. Ensure consistent mappings from validator signals to reason codes and de-duplicate overlapping messages to reduce friction.
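
A minimal sketch of a reason-code registry and the deterministic signal-to-code mapping is shown below; the specific entries, i18n keys, and collapse rules are assumptions, reusing example codes (such as ADDRESS.MISMATCH.ZIP4) from the acceptance criteria.

```typescript
// Illustrative only: registry entries, i18n keys, and the signal mapping are assumed.
type ReasonEntry = {
  code: string;             // hierarchical, dot-separated UPPERCASE segments
  titleKey: string;         // i18n keys; no hard-coded strings
  remediationKey: string;
  severity: "green" | "amber" | "red";
};

const REGISTRY: Record<string, ReasonEntry> = {
  "ADDRESS.MISMATCH.ZIP4": {
    code: "ADDRESS.MISMATCH.ZIP4",
    titleKey: "reason.address.mismatch.zip4.title",
    remediationKey: "reason.address.mismatch.zip4.tip",
    severity: "amber",
  },
  "ADDRESS.MISSING.UNIT": {
    code: "ADDRESS.MISSING.UNIT",
    titleKey: "reason.address.missing.unit.title",
    remediationKey: "reason.address.missing.unit.tip",
    severity: "amber",
  },
};

// Deterministic validator-signal -> canonical-code mapping; overlapping signals
// collapse to a single canonical code so only one inline message is shown.
const SIGNAL_MAP: Record<string, string> = {
  ZIP4_MISMATCH: "ADDRESS.MISMATCH.ZIP4",
  ZIP5_MISMATCH: "ADDRESS.MISMATCH.ZIP4",
  UNIT_MISSING: "ADDRESS.MISSING.UNIT",
};

function toReasonCodes(signals: string[]): string[] {
  const codes = signals
    .map(s => SIGNAL_MAP[s])
    .filter((c): c is string => Boolean(c) && c in REGISTRY);
  return Array.from(new Set(codes)); // suppress duplicate messages
}

console.log(toReasonCodes(["ZIP4_MISMATCH", "ZIP5_MISMATCH", "UNIT_MISSING"]));
// -> ["ADDRESS.MISMATCH.ZIP4", "ADDRESS.MISSING.UNIT"]
```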

Acceptance Criteria
Standardized Reason Code Taxonomy Applied to All Confidence Outcomes
Rule: Every scored submission (Green, Amber, Red) includes at least one reason code from the approved registry. Rule: Reason code format is hierarchical with 2–3 UPPERCASE segments separated by dots (e.g., ADDRESS.MISMATCH.ZIP4). Rule: Only codes present in the registry may be attached; attempts to attach unknown codes are rejected and logged. Rule: Each attached reason code has a defined title, explanation, and remediation tip in the registry. Rule: No submission contains duplicate reason codes.
Supporter Sees Concise Amber Inline Message with Remediation Tip
Given a supporter submission scores Amber due to a missing apartment number, When the action page renders, Then exactly one concise inline message (<=140 characters) is shown near the address field with a remediation tip and without exposing raw codes. And when the supporter adds the apartment number and resubmits, Then the message disappears and the score re-evaluates. Rule: Only Amber results display supporter-facing inline messages; Green shows none, Red uses quarantine flow (no inline remediation here).
Organizer Dashboard Shows Detailed Reason Codes and Explanations
Given an organizer opens a submission in the dashboard, When viewing the Confidence Lights panel, Then the score color and the full list of unique reason codes are displayed with human-readable explanations and remediation suggestions. And each reason code item includes: code, title, explanation, remediation tip, and originating validator signal reference. Rule: Reason codes are shown in a stable order (by severity Red>Amber>Green, then alphabetical within severity).
Reason Codes Persist in Submission Records for Analytics and Exports
Given a submission is created or updated, When it is stored, Then the record includes: score outcome, array of unique reason codes, created_at timestamp, and registry_version. And when exporting CSV and JSON, Then exports include score outcome and the complete set of reason codes (CSV as semicolon-separated; JSON as an array) for each submission. Rule: Reloading a stored submission returns the same set of reason codes that were saved at the time of submission (immutable snapshot for audit).
i18n Localization for Explanations and Tips
Given the supporter browser sends Accept-Language=es-ES, When an Amber message is rendered on the action page, Then the explanation and remediation tip are displayed in Spanish using i18n keys with fallback to English if a translation is missing. Given an organizer’s workspace locale is fr-FR, When viewing the dashboard details, Then explanations and remediation tips render in French using the workspace locale. Rule: No hard-coded strings are used for explanations or tips; all are retrieved via i18n keys.
Deterministic Mapping from Validator Signals to Reason Codes
Rule: Each validator signal maps to exactly one canonical reason code in a versioned mapping table; multiple signals may map to the same canonical code. Given the validator emits ZIP4_MISMATCH, When mapping is applied in any environment, Then the resulting reason code is ADDRESS.MISMATCH.ZIP4. Rule: Mapping is pure and deterministic (same inputs produce the same codes) and is covered by unit tests for all registered signals.
De-duplication of Overlapping Reason Messages
Given multiple validator signals indicate the same underlying issue (e.g., ZIP5_MISMATCH and ZIP4_MISMATCH), When reason codes are generated, Then only one canonical reason code is attached (e.g., ADDRESS.MISMATCH.ZIP) and duplicates are suppressed. And on the supporter action page, Then only one inline message is shown for that issue. Rule: A documented priority rule selects the most specific applicable canonical code when overlaps occur.
Auto-Approval and Flow Control
"As an organizer, I want high-confidence submissions automatically sent and risky ones quarantined so that we maximize completed actions while minimizing spam and misrouted messages."
Description

Automatically approve Green submissions and route them to the correct legislators with district-specific scripts, updating live tracking without additional clicks. For Amber, trigger inline correction prompts with prefilled suggestions and re-validate on submit; preserve form state and progress to avoid abandonment. Quarantine Red submissions into a secure review queue, prevent outbound calls/emails until approved, and surface non-blocking confirmation to supporters. Provide configurable fallbacks (e.g., allow-through with flag during peak actions) and ensure transactional integrity so partial sends cannot occur.
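
The sketch below illustrates the Green/Amber/Red flow control and the optional allow-through fallback; the types and function names are assumed for illustration only.

```typescript
// Illustrative flow control; shapes and names are assumptions, not RallyKit's API.
type Color = "GREEN" | "AMBER" | "RED";
type Submission = { id: string; color: Color; reasonCodes: string[] };

type Outcome =
  | { kind: "auto_approved"; flagged?: boolean }    // routed with district scripts, no extra clicks
  | { kind: "prompt_fixes"; prompts: string[] }     // inline corrections, form state preserved
  | { kind: "quarantined"; reviewQueue: true };     // outbound blocked until reviewed

function routeSubmission(s: Submission, fallbackAllowAmber = false): Outcome {
  switch (s.color) {
    case "GREEN":
      return { kind: "auto_approved" };
    case "AMBER":
      // Configurable fallback: allow-through with a "Fallback-Approved" flag during peak actions.
      if (fallbackAllowAmber) return { kind: "auto_approved", flagged: true };
      return { kind: "prompt_fixes", prompts: s.reasonCodes.slice(0, 2) }; // at most two prompts
    case "RED":
      return { kind: "quarantined", reviewQueue: true };
  }
}

console.log(routeSubmission({ id: "sub_1", color: "AMBER", reasonCodes: ["ADDRESS.MISSING.UNIT"] }));
```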

Acceptance Criteria
Green Auto-Approval and Routing
Given a supporter submission with Confidence=Green and a valid address-to-district match When the supporter taps Submit Then the system auto-approves the submission without additional user actions And routes the action to all applicable legislators for the supporter’s district and chamber(s) And generates district-specific scripts that reflect each legislator’s chamber and the bill’s current status And records the applied reason codes with the action And emits a tracking event that appears on the live dashboard within 2 seconds And marks the action’s delivery status per legislator as progressing from "Queued" to "Sent" with no additional clicks by the supporter
Amber Inline Correction and Re-Validation
Given a supporter submission with Confidence=Amber and at least one fixable field When the supporter taps Submit Then the system displays inline correction prompts adjacent to the relevant fields with prefilled suggested values And preserves all previously entered form data and progress (no fields reset, no page reload) And allows the supporter to accept a suggestion or edit manually When the supporter resubmits after corrections Then the system re-validates in real time And if Confidence becomes Green, auto-approves and continues the standard send flow And if Confidence remains Amber, re-prompts with updated suggestions without losing state And at no point are outbound calls/emails sent until approval occurs
Red Quarantine and Non-Blocking Confirmation
Given a supporter submission with Confidence=Red When the supporter taps Submit Then the system quarantines the action in a secure review queue And prevents any outbound calls/emails from being initiated for that action And shows the supporter a non-blocking confirmation indicating "Received — pending verification" without preventing further navigation And assigns a review ID and records reason codes And limits visibility and approval controls to authorized reviewer roles only And displays the quarantined item in the reviewer queue within 5 seconds And logs an audit entry for quarantine with timestamp and reason codes
Configurable Fallback Allow-Through with Flag
Given an admin with proper permissions has enabled the "Allow-through with flag" fallback for Amber confidence When an Amber submission is submitted during the fallback window Then the system auto-approves the action and proceeds with routing and district-specific script generation And flags the action as "Fallback-Approved" in tracking and audit logs with the enabling user, time, and configuration scope And adds a non-blocking notice on the reviewer dashboard to prioritize post hoc review of flagged actions And ensures the fallback auto-expires at the configured end time and can be disabled immediately And Red submissions remain quarantined unless the fallback explicitly includes Red in its configuration
Transactional Integrity and Idempotency
Given an approved action must be delivered to N legislators When the system prepares and attempts delivery Then the system stages all deliveries and commits them atomically so that either all N deliveries are sent or none are sent And if any pre-send validation fails, no deliveries are sent and the action is marked "Not Sent" with a retry scheduled And each action is assigned a unique idempotency key to prevent duplicate deliveries on retries or double-submits And the live dashboard does not show "Sent" until all N deliveries have succeeded And supporters never see a success confirmation if any delivery failed
Reason Codes and Flow Mapping Compliance
Given the confidence engine produces a tri-color score and reason codes When a submission is scored Green, Amber, or Red Then the reason codes are persisted with the action, visible to reviewers, and included in exports And flow control adheres to mapping: Green auto-approve; Amber prompt corrections and re-validate; Red quarantine with outbound blocked And supporter-facing surfaces never display raw PII within reason codes And threshold edge cases are covered by automated tests to verify correct color assignment and flow path And if the mapping configuration is missing or invalid, the system fails safe to Red and alerts admins
Review Queue and Audit Trail
"As a campaign lead, I want a centralized review queue with evidence snapshots and immutable logs so that I can efficiently resolve edge cases and provide audit-ready proof of decisions."
Description

Deliver an organizer dashboard queue for Red and unresolved Amber cases with filters, search, bulk approve/reject, and assignment. Present an evidence snapshot per submission (submitted data, validator results, geocoding map, history, and reason codes) and require disposition reasons on manual decisions. Maintain an immutable audit log capturing timestamps, actor, decision, and configuration version used at scoring time; support CSV/JSON export and API retrieval. Notify reviewers via in-app and email when queue volume or aging breaches SLA thresholds.
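
An append-only audit log could look roughly like the sketch below; the field names follow the description, while the class shape and validation rule are assumptions.

```typescript
// Illustrative append-only audit log; exact fields and storage are assumed.
type AuditEntry = {
  timestamp: string;          // ISO 8601, UTC
  actor: string;
  caseId: string;
  action: "approve" | "reject" | "assign" | "quarantine";
  dispositionReason?: string; // required for manual approve/reject decisions
  scoringConfigVersion: string;
};

class AuditLog {
  private entries: AuditEntry[] = [];

  append(entry: Omit<AuditEntry, "timestamp">): AuditEntry {
    // Manual decisions must carry a disposition reason.
    if ((entry.action === "approve" || entry.action === "reject") && !entry.dispositionReason) {
      throw new Error("Disposition reason is required for manual decisions");
    }
    const record: AuditEntry = { ...entry, timestamp: new Date().toISOString() };
    this.entries.push(record);       // append-only; no update or delete methods exist
    return record;
  }

  exportJson(): string {
    return JSON.stringify(this.entries, null, 2); // CSV export would be analogous
  }
}

const log = new AuditLog();
log.append({
  actor: "reviewer@example.org",
  caseId: "case_42",
  action: "approve",
  dispositionReason: "ADDRESS.VERIFIED.MANUAL",
  scoringConfigVersion: "v1",
});
console.log(log.exportJson());
```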

Acceptance Criteria
Queue Shows Only Red and Unresolved Amber Cases
Given cases are scored Green, Amber, or Red When an organizer opens the Review Queue Then only Red and Amber cases with status Unresolved are listed and Green or Resolved Amber are excluded Given a case’s status changes from Unresolved to Resolved (auto or manual) When the queue refreshes Then the case is removed from the queue Given a new Red or Amber (Unresolved) case is created When scoring completes Then it appears in the queue and the queue count updates
Organizer Filters and Searches the Review Queue
Given the Review Queue is open When the organizer applies filters for color (Red/Amber), reason code, assigned/unassigned, assignee, date range, state/district, and aging threshold Then the results include only cases matching all selected filters Given a search term for supporter name, email, phone, address, or submission ID When the organizer executes the search Then matching cases are returned and non-matching cases are excluded Given filters or search are active When the organizer clears them Then the queue returns to the unfiltered state
Case Assignment and Ownership
Given an unassigned case in the Review Queue When an organizer assigns it to a reviewer Then the assignee is recorded and displayed on the case row and detail Given a case already has an assignee When it is reassigned Then the new assignee replaces the previous one and the change is captured in the audit log Given a reviewer opens My Queue When filters for Assigned to Me are applied Then only cases assigned to that reviewer are shown
Bulk Approve/Reject With Required Disposition Reasons
Given one or more cases are selected When the organizer chooses Approve or Reject Then a disposition reason code is required before the action can be submitted Given a disposition reason is provided When the bulk action is confirmed Then each affected case is updated, removed from the queue if resolved, and an audit log entry is created per case with actor, timestamp, decision, and reason Given some cases fail during a bulk action When the action completes Then successes are committed, failures are reported with per-case errors, and no successful updates are rolled back Given a single-case manual decision When Approve or Reject is submitted without a reason Then the action is blocked and a validation message is shown
Evidence Snapshot Completeness and Accuracy
Given a case is opened in detail view When the evidence snapshot loads Then it displays submitted data, validator results, geocoding map with matched district, history timeline, and reason codes used for the score Given the evidence snapshot is displayed When any element is unavailable (e.g., validator outage) Then a clear indicator shows the missing element and the last attempted retrieval time Given the evidence snapshot is displayed When timestamps are shown Then they use ISO 8601 format and the organization’s configured timezone
Immutable Audit Log and Data Access (Export/API)
Given a manual decision or assignment change occurs When the action is saved Then an audit log entry is appended capturing timestamp (UTC), actor, action type, decision, disposition reason (if any), and the configuration version used at scoring time Given existing audit entries When a user attempts to edit or delete an entry Then the system prevents modification and records the attempt as a separate audit event Given the organizer exports cases via CSV or JSON with filters applied When the export completes Then the file contains the selected fields plus the complete audit log per case or a link/reference to retrieve it, and only cases matching the filters are included Given an authenticated client calls the Audit API When filtering by date range, actor, decision, reason code, configuration version, or case ID Then the API returns paginated results that match the filters and include all audit fields
SLA Breach Notifications In-App and Email
Given SLA thresholds for queue volume and case aging are configured When the volume exceeds the volume threshold or any case age exceeds the aging threshold Then an in-app notification banner is displayed to reviewers with current counts and oldest case age Given the same SLA breach persists within a configured cooldown window When additional checks run Then duplicate email alerts are not sent more frequently than the cooldown allows Given an SLA breach is detected When notifications are sent by email Then recipients are the organization’s designated reviewers and include deep links that open the queue filtered to the breached condition Given conditions return below thresholds When the next check runs Then the in-app banner is cleared
Configurable Risk Policies
"As an admin, I want to configure thresholds, rules, and allow/deny lists per campaign so that Confidence Lights aligns with our risk tolerance and legal constraints."
Description

Provide admin controls to tune thresholds and rules at org and campaign levels, including required signals, duplicate submission limits, geo-restrictions, and allow/deny lists for domains, IP ranges, and partner sources. Support versioned, auditable policy changes with draft, preview, and scheduled publish, plus a dry-run mode that simulates impact on recent traffic. Offer backtesting on historical submissions to estimate Green/Amber/Red distribution and predicted completion uplift before changes go live. Expose policy as code via JSON schema for import/export across environments.
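
As a sketch of the policy-as-code idea, the typed policy object below shows one plausible shape for thresholds, duplicate limits, geo rules, and allow/deny lists; the field names and example values are assumptions rather than RallyKit’s published JSON schema.

```typescript
// Illustrative policy-as-code shape; all field names and values are assumed.
type RiskPolicy = {
  version: string;
  scope: { org: string; campaign?: string };    // campaign overrides take precedence
  thresholds: { green: number; amber: number }; // score >= green -> GREEN, >= amber -> AMBER, else RED
  requiredSignals: string[];
  duplicateSubmissionLimit: number;             // per supporter per campaign, rolling 24h
  geoRestrictions: { allowedRegions: string[] };
  allowList: { partnerSources: string[] };
  denyList: { emailDomains: string[]; ipRanges: string[] };
  status: "draft" | "scheduled" | "published";
};

const draftPolicy: RiskPolicy = {
  version: "2.1.0",
  scope: { org: "org_123", campaign: "camp_456" },
  thresholds: { green: 80, amber: 50 },
  requiredSignals: ["ADDRESS_GEOCODE", "DISTRICT_MATCH"],
  duplicateSubmissionLimit: 3,
  geoRestrictions: { allowedRegions: ["US"] },
  allowList: { partnerSources: ["partner_coalition_a"] },
  denyList: { emailDomains: ["mailinator.com"], ipRanges: ["203.0.113.0/24"] },
  status: "draft",
};

// Minimal structural validation before a draft can be previewed or scheduled for publish.
function validatePolicy(p: RiskPolicy): string[] {
  const errors: string[] = [];
  if (p.thresholds.green <= p.thresholds.amber) errors.push("green threshold must exceed amber");
  if (p.duplicateSubmissionLimit < 1) errors.push("duplicateSubmissionLimit must be >= 1");
  return errors;
}

console.log(validatePolicy(draftPolicy)); // [] when the draft is structurally valid
```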

Acceptance Criteria
Org-Level Threshold Tuning with Versioned Policies
Given I am an org admin with edit permissions When I create a new policy draft with tri-color score thresholds and required signals Then the draft validates against the JSON schema without errors And the draft is assigned an incremented semantic version number And no live traffic behavior changes until the draft is published And an audit log entry records author, timestamp, change diff, and a required reason up to 500 characters And overlapping or gap thresholds are rejected with explicit validation errors
Campaign-Level Overrides and Inheritance
Given a campaign inherits the org default policy version X When I set campaign overrides for duplicate_submission_limit and geo_restrictions Then the effective policy for that campaign reflects the overrides while all other rules inherit from the org policy And the UI and API surface both source (inherited vs overridden) and effective values And removing an override reverts the campaign to the org value immediately without residual effects And conflicts are resolved by campaign override taking precedence over org default for that campaign only
Draft Preview and Scheduled Publish
Given a valid policy draft exists When I click Preview Then the system computes the effective outcomes under the draft without impacting production decisions When I schedule the draft for publish at an exact UTC timestamp Then the draft becomes the live policy at that timestamp And the audit log records scheduler identity, scheduled time, actual publish time, and resulting version And attempting to schedule with invalid time windows or missing approvals is blocked with clear errors
Dry-Run Simulation on Recent Traffic
Given a policy draft and at least 1,000 submissions in the last 7 days are available When I run a dry-run simulation Then up to 50,000 recent submissions are rescored within 2 minutes And the result returns counts and percentages for Green, Amber, and Red; top 5 reason codes per color; and a list of impacted rules And no submission state, queues, or user-facing outcomes are altered by the simulation And the simulation result is timestamped and can be exported as CSV
Backtesting with Predicted Completion Uplift
Given historical submissions with known completion outcomes over selectable windows (7, 30, 90 days) When I execute a backtest comparing the draft to the current live policy Then the system reports Green/Amber/Red distributions and a predicted completion uplift with the calculation method displayed And confidence intervals and sample size are shown And results are stored and retrievable for at least 30 days and exportable as CSV And backtests with insufficient sample sizes are prevented with clear guidance
Allow/Deny Lists, Geo Rules, and Duplicate Limits Enforcement
Given allow list entries for partner_source and deny list entries for email domains and IP ranges are configured When a submission matches a deny list rule Then it is quarantined as Red with reason codes referencing the exact matching rule And submissions from allowed partners bypass duplicate limits but must still satisfy required signals And duplicate_submission_limit=N is enforced per supporter per campaign within a rolling 24-hour window And geo_restrictions reject non-permitted regions and return Amber with a fix prompt when location is ambiguous
Policy as Code: JSON Schema Import/Export Across Environments
Given I export the current policy as JSON Then the export validates against the published JSON schema version and includes version, checksum, and environment metadata When I import a policy JSON with unknown fields or invalid types Then the import is rejected with field-level error messages and no draft is created When I import a valid policy JSON from staging into production Then a new draft is created with identical values, a link to the source environment, and a matching checksum And imports are fully audited with author, timestamp, and diff
Accuracy Metrics and Monitoring
"As a program manager, I want performance metrics and alerts for Confidence Lights so that I can optimize settings and demonstrate impact to stakeholders."
Description

Instrument key performance indicators including Green/Amber/Red rates, Amber fix completion rate, false-positive/negative rates inferred from reviewer outcomes, average time-to-approve, completion uplift, and abandonment at prompts. Visualize trends in the RallyKit dashboard with campaign and timeframe filters and provide exportable reports. Add anomaly detection and alerts for spikes in Red or validator failures, and support A/B testing of thresholds with statistically sound reporting. Stream events to the analytics pipeline and provide an API endpoint for BI tools.
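
A hedged sketch of how a BI client might call the metrics endpoint is shown below; the /metrics path and query parameters come from the description, while the base URL, auth flow, and response shape are assumptions.

```typescript
// Illustrative BI client; endpoint host, token handling, and response fields are assumed.
type MetricsResponse = {
  series: { timestamp: string; campaign_id: string; kpi_name: string; value: number }[];
  metadata: { timezone: string; data_version: string };
  totals: Record<string, number>;
};

async function fetchKpis(token: string): Promise<MetricsResponse> {
  const params = new URLSearchParams({
    campaign_id: "camp_456",
    start: "2025-01-01",
    end: "2025-01-31",
    kpi: "amber_fix_completion_rate",
    granularity: "daily",
  });
  const res = await fetch(`https://api.rallykit.example/metrics?${params}`, {
    headers: { Authorization: `Bearer ${token}` }, // OAuth2 bearer credentials, per the description
  });
  if (!res.ok) throw new Error(`Metrics request failed: ${res.status}`);
  return (await res.json()) as MetricsResponse;
}

fetchKpis("<access-token>").then(r => console.log(r.totals)).catch(console.error);
```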

Acceptance Criteria
KPI Instrumentation: Confidence Lights and Reviewer-derived Accuracy Metrics
Given active campaigns with Confidence Lights enabled and reviewer outcomes captured When daily metrics are computed for a selected timeframe Then Green/Amber/Red rates are calculated as count(label)/count(classified) per day and sum to 100% ±0.1% And Amber fix completion rate = count(Amber sessions that submit after successful fix)/count(Amber prompts) and is displayed per day And False-positive rate = % of auto-approved later overturned by reviewers; False-negative rate = % of quarantined later approved by reviewers And Average time-to-approve is reported separately for auto-approvals and manual reviews (p50/p95) in minutes And Completion uplift is computed versus a selectable baseline or control (relative % and absolute) with 95% CI when N ≥ 500 per period And Abandonment at prompts is reported as % drop-off at Amber fix prompt and at quarantine notice And Metrics are available within 5 minutes of event ingestion (p95) and reflect timezone selected by user
Dashboard Trends with Campaign and Timeframe Filters
Given a user selects one or more campaigns and a timeframe (Last 7/30/90 days or custom range) When the dashboard loads trends for key KPIs Then line or bar trends render for each KPI with auto granularity (hourly ≤7 days, daily ≤90 days, weekly >90 days) And filters apply consistently across widgets and persist on refresh and share links And empty or low-sample periods (N<30) are visually indicated and excluded from averages by default with a toggle to include And cross-campaign views show both aggregated and per-campaign series with clear legends And initial render completes in ≤2.0s for 12 months of data and interactions complete in ≤1.0s (p95)
Exportable Reports for KPIs
Given dashboard filters for campaign(s), timeframe, and granularity When the user exports KPIs as CSV or JSON Then the export matches on-screen filters and includes columns: timestamp, campaign_id, kpi_name, value, numerator, denominator, granularity, timezone, data_version And row counts in export match charted data points ±0.5% And files up to 1 million rows generate within 30 seconds or provide an async job with email link on completion And exports include a data dictionary link and metric definitions version And numeric fields use dot decimal, timestamps are ISO 8601 with timezone offset
Anomaly Detection and Alerts for Red Spikes and Validator Failures
Given baseline windows of 14 days per campaign and KPI When Red rate or validator failure rate deviates by z-score ≥3 for two consecutive intervals with N≥100 per interval Then an anomaly is created with severity Amber (single interval) or Red (two+ intervals) and appears in the dashboard within 2 minutes And alerts are sent via configured channels (email and Slack) including campaign, KPI, current value, baseline mean, z-score, sample size, and deep link And alert suppression prevents duplicate notifications for the same KPI/campaign within a 30-minute window unless severity escalates And resolved status auto-clears when metric returns within 1 SD of baseline for three consecutive intervals
A/B Testing of Confidence Thresholds with Statistical Reporting
Given two threshold configurations (A and B) and random assignment at supporter level with 50/50 target When the experiment runs Then sample ratio mismatch is monitored and alerts at |A%−B%|>5% after N≥1000 And the report shows per-variant KPIs (completion rate, G/A/R rates, false-positive/negative) with 95% CIs and p-values (two-sided, alpha=0.05) And minimum sample size guardrails (e.g., N≥500 per arm) are enforced before declaring significance And stopping rules are pre-registered and results labeled “Exploratory” if violated And the variant applied is logged on each action event for auditability
Real-time Event Streaming to Analytics Pipeline
Given the analytics pipeline endpoint is configured When supporter classification and action events occur Then events are emitted within 60 seconds (p95) with at-least-once delivery and idempotency keys for de-duplication And a versioned schema includes event_time, campaign_id, supporter_id_hash, classification_before, classification_after, reviewer_outcome, action_submitted, prompt_type, latency_ms And transient failures retry with exponential backoff up to 24 hours and dead-letter to a monitored queue And PII is excluded or hashed per data policy, and schema changes are backward compatible with migration notes
BI Metrics API Endpoint
Given authenticated BI clients with OAuth2 credentials When they request GET /metrics?campaign_id=…&start=…&end=…&kpi=…&granularity=… Then the API returns 200 with JSON: series, metadata (timezone, data_version), and totals, respecting filters And invalid params return 400 with actionable error codes; unauthenticated returns 401; unauthorized campaign access returns 403 And responses paginate for large result sets and enforce 95th percentile latency ≤800 ms under 50 RPS with rate limiting And an OpenAPI spec is published and kept in sync with the deployed API
Privacy and Compliance Safeguards
"As a data steward, I want PII-minimizing processing with strong security and retention controls so that we comply with privacy laws and donor expectations."
Description

Minimize PII processed for scoring by hashing emails and phones for duplicate detection, truncating IP addresses where policy requires, and limiting retention via configurable TTLs. Present consent language that reflects scoring purposes on action pages, and provide data subject tools for export and deletion of related scoring records. Enforce role-based access controls so only authorized staff can view evidence snapshots, with encryption in transit and at rest. Maintain a processing register and configuration-aware logs to support audits and DPIA reviews.
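
The helpers below sketch the hashing and IP-truncation behavior described above, a minimal example assuming a per-tenant salt and IPv4-only masking; salt storage, IPv6 handling, and retention jobs are out of scope here.

```typescript
import { createHash } from "node:crypto";

// Illustrative PII-minimization helpers; salt handling and mask width are assumptions.
function hashIdentifier(value: string, tenantSalt: string): string {
  // Normalize, then hash so duplicate detection compares digests, never raw PII.
  return createHash("sha256")
    .update(`${tenantSalt}:${value.trim().toLowerCase()}`)
    .digest("hex");
}

function truncateIpv4(ip: string): string {
  // Mask to /24 where policy requires it, e.g. "203.0.113.42" -> "203.0.113.0".
  const parts = ip.split(".");
  return parts.length === 4 ? `${parts[0]}.${parts[1]}.${parts[2]}.0` : ip;
}

console.log(hashIdentifier("Supporter@Example.org", "tenant-123-salt"));
console.log(truncateIpv4("203.0.113.42"));
```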

Acceptance Criteria
Hashed Identifiers for Duplicate Detection
Given a supporter submits an email and/or phone on an action page When the scoring pipeline persists the submission Then only SHA-256 hashes of email and phone with a tenant-scoped static salt are stored in any persistent datastore And no raw email/phone values exist in databases, queues, or logs after processing completes Given two submissions with identical emails/phones When duplicate detection runs Then duplicates are detected by comparing the hashed values and linked within 200 ms at p95 Given application and infrastructure logs for a submission When searched by regex for email/phone patterns Then zero matches of raw PII are returned
Policy-Based IP Address Truncation
Given truncation policy is enabled for region = EU When a submission from an EU IP is stored Then IPv4 addresses are masked to /24 and IPv6 to /48 before persistence And full IPs are never written to storage or logs Given truncation policy is disabled for region = US When a submission from a US IP is stored Then the configured setting is honored (no truncation) and the decision is recorded in the audit log with policy key and version Given data at rest is inspected When scoring records are queried Then only truncated IPs are present for regions requiring truncation
Configurable Retention TTL Enforcement
Given tenant T sets scoring_record_ttl_hours = 72 and evidence_snapshot_ttl_days = 14 When time since record creation exceeds the TTL Then the record is hard-deleted within 30 minutes and removed from indices and caches And an audit log entry is written with tenant, data_class, count, and job_run_id Given a deleted record ID When retrieval is attempted via API Then the API returns 404 Not Found and no data is returned Given a backup snapshot When a restore is performed Then restored datasets exclude records past TTL at the time of restoration
Consent Language for Scoring Purposes on Action Pages
Given an action page with scoring enabled and locale = en-US When the page renders for a first-time visitor Then consent text explicitly stating scoring, deduplication, and retention purposes is displayed with a link to the privacy notice And submission is blocked until the user actively consents (checkbox unticked by default) Given a submission with consent When stored Then the consent version, timestamp, locale, and policy key are recorded with the scoring record Given a user declines consent When submit is pressed Then the action is not processed and a non-PII rejection event is logged
Data Subject Export and Deletion Self-Service
Given a verified data subject request with email = foo@example.org When an export is requested Then the system returns all scoring-related records linked via the hashed email in machine-readable JSON within 7 days And includes configuration snapshots (policy keys, TTLs) and evidence metadata but excludes secrets and internal salts Given a verified deletion request for the same subject When deletion executes Then all scoring records and evidence snapshots linked to the subject are purged from primary stores, search indices, and backups per the backup purge policy within 30 days And a confirmation receipt with counts by data class is issued Given an export request after deletion completes When processed Then the system returns an empty dataset and notes prior deletion in the audit log
Secured Access and Storage of Evidence Snapshots
Given roles {Admin, Compliance, Organizer} When accessing evidence snapshots Then only Admin and Compliance can view or download; Organizer receives 403 Forbidden Given any access to an evidence snapshot When logged Then logs include user_id, role, tenant_id, purpose_of_access, record_id, and timestamp, and are immutable Given data in transit When connections are established Then TLS 1.2+ with HSTS is enforced and weak ciphers are disabled Given data at rest for databases, object storage, and backups When inspected Then encryption using KMS-managed keys is enabled with rotation <= 90 days and no plaintext copies exist
Processing Register and Configuration-Aware Audit Logging
Given processing activities for scoring When the system operates Then a processing register is maintained per tenant listing purposes, data categories, legal basis, recipients, retention TTLs, and DPO contact Given any scoring decision executed When an audit log entry is written Then the log includes the active config versions (hashes), policy flags (e.g., IP truncation), consent version, and evaluation timestamps Given an auditor requests a DPIA evidence bundle When generated Then the bundle includes the processing register export, a 30-day log slice with config hashes, and verification that controls (hashing, RBAC, TTL) were active

Address Autocorrect

Instant USPS/NCOA-backed address cleanup and geocoding that suggests the most likely correct address when typos or unit numbers are missing. One-tap fix via SMS or page prompt recalculates districts in real time, boosting match rates and completed actions.

Requirements

USPS/NCOA Address Validation & Standardization
"As a campaign organizer, I want supporter addresses validated and standardized so that deliveries succeed and district matching is accurate."
Description

Integrate USPS CASS/NCOA services to validate and standardize supporter-entered addresses in real time and via scheduled batch. Normalize fields (street line, city, state, ZIP+4, DPV), correct common typos, append missing ZIP+4, and flag deliverability. Generate a suggested canonical address with confidence score and reason codes; never auto-overwrite without user confirmation. Provide a single RallyKit service endpoint other modules can call, with idempotent requests, result caching, and clear error codes for downstream handling.
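
The sketch below shows how another RallyKit module might call the validation endpoint; the /address/validate path and X-Idempotency-Key header are taken from the acceptance criteria that follow, while the base URL, payload shape, and error handling are assumptions.

```typescript
// Illustrative client for the address-validation service; shapes are assumed.
type ValidateRequest = { street: string; city: string; state: string; zip: string };
type ValidateResponse = {
  standardized: { street: string; city: string; state: string; zip5: string; zip4?: string };
  dpv: "Y" | "N" | "S" | "D";
  deliverable: boolean;          // true only when DPV = Y
  confidence: number;            // 0-100
  reason_codes: string[];        // e.g. FIXED_TYPO, APPENDED_ZIP4, UNIT_MISSING
  correlation_id: string;
};

async function validateAddress(input: ValidateRequest, idempotencyKey: string): Promise<ValidateResponse> {
  const res = await fetch("https://api.rallykit.example/address/validate", {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Idempotency-Key": idempotencyKey },
    body: JSON.stringify(input),
  });
  if (!res.ok) throw new Error(`Validation failed: ${res.status}`); // maps to error codes such as UPSTREAM_UNAVAILABLE
  return (await res.json()) as ValidateResponse;
}

// Note the intentional street-name typo the service would be expected to correct.
validateAddress(
  { street: "123 Main Stret", city: "Springfield", state: "IL", zip: "62701" },
  "sub_789-attempt-1",
).then(r => console.log(r.standardized, r.reason_codes)).catch(console.error);
```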

Acceptance Criteria
Real-Time CASS Validation on Action Page Submission
Given a supporter submits a US street address via a RallyKit action page form When the client calls the RallyKit /address/validate service with the entered address components (street, city, state, ZIP) in real time Then the service must call USPS CASS and return a standardized address with fields: primary_number, predirectional, street_name, suffix, postdirectional, unit_type, unit_number, city, state, ZIP5, ZIP4 And the response must include DPV code in {Y, N, S, D} and a boolean deliverable flag (true only when DPV=Y) And common typos (e.g., street suffix, transposed numbers) are corrected in the standardized result And response time must be <= 800 ms p95 for real-time requests at 50 RPS And the original user-entered address is preserved unchanged until explicit user confirmation
User-Confirmed Autocorrect Suggestion with Confidence and Reason Codes
Given the validation service produces a suggested canonical address different from the user input When the UI presents a one-tap "Use Suggested Address" prompt on the page or via SMS deep link Then the suggestion must include: canonical formatted address, confidence score (0–100), and reason_codes (e.g., FIXED_TYPO, NORMALIZED_SUFFIX, APPENDED_ZIP4, UNIT_MISSING, MULTIPLE_CANDIDATES, NCOA_MOVE) And the system must never overwrite the stored address without explicit user confirmation And upon user confirmation, the stored address is updated to the canonical form atomically And an immutable audit log record is created capturing original address, suggested address, confidence, reason_codes, user_id, timestamp, and channel (web/SMS) And if the user declines or ignores the suggestion, the stored address remains unchanged
Immediate District Recalculation After Address Confirmation
Given a supporter confirms a suggested canonical address When the address update is persisted Then the system must geocode the address and recalculate legislative districts in real time And updated district and legislator mappings must be reflected in the session and downstream modules within 2 seconds end-to-end And an event "districts.updated" is emitted with supporter_id, old_districts, new_districts, and address_id for analytics and audit And any pending action targeting must use the recalculated districts for dispatch
NCOA Move Detection and New Address Suggestion
Given NCOA indicates a change-of-address for the supporter’s input When the validation service processes the address (real-time or batch) Then the response must include new_address, move_type (INDIVIDUAL|FAMILY|BUSINESS), move_date (YYYY-MM), and ncoa_match_confidence (0–100) And if a forwarding address is available and DPV=Y for the new address, deliverable=true only for the new address And the system must present a suggestion to switch to the NCOA new_address and require explicit user/admin confirmation before updating records And if NCOA returns Moved Left No Address or no forwarding address, the record is flagged NON_DELIVERABLE with appropriate reason_codes and no overwrite occurs
RallyKit Address-Validation Service: Idempotency, Caching, and Error Codes
Given a caller module submits a validation request with an X-Idempotency-Key and identical payload When duplicate requests are received within the cache TTL Then the service must return the same result with idempotency_hit=true and must not trigger additional upstream USPS/NCOA calls And results for identical normalized payloads are cached for at least 7 days with cache_status reported as MISS|HIT And the service must return machine-readable error.code in {INVALID_INPUT, MULTI_MATCH, NON_DELIVERABLE, TIMEOUT, UPSTREAM_UNAVAILABLE, RATE_LIMITED} with HTTP status mapping and human-readable error.message And every response includes a correlation_id for downstream logging and traceability
Scheduled Batch Address Cleanup and Reporting
Given a nightly batch job is configured for address validation and NCOA processing When the batch runs Then only addresses not validated in the last 30 days are queued, deduplicated by supporter_id+address_hash, and processed via the same service endpoint And the batch must achieve a throughput of >= 10,000 addresses/hour with retry-once for transient errors and a dead-letter queue for persistent failures And a summary report is generated with totals: processed, standardized, appended_ZIP4, DPV_Y, DPV_N, NCOA_moves, non_deliverable, errors by error.code And individual record results include confidence, reason_codes, and timestamps, available for export and audit
Real-time Geocoding & Legislative District Recalculation
"As a supporter, I want my corrected address to update my district instantly so that I contact the right legislators without extra steps."
Description

Geocode corrected or newly entered addresses and immediately recalculate federal, state, and local legislative districts using maintained boundary datasets. Trigger recomputation on every accepted correction and propagate updated district mappings to active action pages and supporter records without page reload. Expose precision/confidence to the UI and apply graceful degradation rules when geocode accuracy is insufficient for district assignment.
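
One plausible shape for the districts-updated event and a consumer of it is sketched below; the event name appears in the acceptance criteria, but the field types and handler are assumptions.

```typescript
// Illustrative "districts.updated" event shape; field types are assumed.
type DistrictsUpdated = {
  event: "districts.updated";
  supporter_id: string;
  address_id: string;
  old_districts: { chamber: string; district: string }[];
  new_districts: { chamber: string; district: string }[];
  boundaryDatasetVersion: string;
  precision: "rooftop" | "parcel" | "interpolated" | "zip_centroid";
  confidence: number; // 0-100; below ~80 a chamber may be left Unresolved
};

function onDistrictsUpdated(evt: DistrictsUpdated): void {
  // Push the new targeting to any open action pages without a reload,
  // and refresh the supporter record so all surfaces stay consistent.
  for (const d of evt.new_districts) {
    console.log(
      `Supporter ${evt.supporter_id}: ${d.chamber} -> ${d.district} (dataset ${evt.boundaryDatasetVersion})`,
    );
  }
}

onDistrictsUpdated({
  event: "districts.updated",
  supporter_id: "sup_001",
  address_id: "addr_9",
  old_districts: [{ chamber: "us_house", district: "IL-13" }],
  new_districts: [{ chamber: "us_house", district: "IL-15" }],
  boundaryDatasetVersion: "2024.2",
  precision: "rooftop",
  confidence: 96,
});
```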

Acceptance Criteria
Accepted Correction Recomputes Districts in Real Time
Given a supporter’s address correction is accepted via SMS or page prompt When the correction event is received Then the system geocodes the corrected address and recalculates federal, state upper, state lower, and local districts using the current boundary dataset version And updates the supporter record’s geo fields and district mappings atomically And pushes updated district mappings to any open action pages for that supporter without a page reload within 2 seconds p95 And no stale districts are displayed after update
Initial Address Entry Geocodes and Assigns Districts
Given a new address is submitted on an action page When geocoding succeeds with accuracy sufficient for district assignment Then districts for all chambers resolvable by the geocode are assigned And the action page reflects the assigned districts (targets/scripts) without page reload within 2 seconds p95 And the supporter record is created/updated with the same district mappings
Expose Geocode Precision and Confidence in UI
Given a geocode result exists for the displayed address When rendering the supporter panel and action page Then show a precision label (e.g., rooftop, parcel, interpolated, ZIP centroid) and a numeric confidence score (0–100) And surface the same precision and confidence via the public API fields precision and confidence And the values update in real time after any accepted correction
Graceful Degradation When Accuracy Is Insufficient
Given the geocode precision/confidence is insufficient to uniquely determine a district for a chamber When attempting to assign districts Then mark that chamber’s district as Unresolved and do not route actions requiring that district And prompt the user to add missing unit or additional address details And still assign any districts that are unambiguous (e.g., statewide/at‑large) based on the geocode And criteria for insufficiency is confidence < 80 or multiple candidate districts intersect the geocode point for that chamber
Realtime Propagation Consistency Across UI and API
Given districts have been recalculated for a supporter When the dashboard, an open action page session, and the public API fetch the supporter Then all surfaces show identical district IDs and boundary dataset version And updates are visible within 2 seconds p95 of recalculation And no intermediate state presents mixed old/new districts
Boundary Dataset Versioning Applied
Given a maintained boundary dataset has a current version Vn When recalculating districts for any address Then the system uses Vn for all polygon lookups And records boundaryDatasetVersion alongside assigned districts on the supporter And after switching to version Vn+1, subsequent recalculations use Vn+1 while existing records retain their recorded version until recalculated
Intelligent Unit Number Inference
"As a supporter, I want helpful suggestions for my apartment or unit so that my message is matched and mail is deliverable."
Description

Detect multi-unit addresses and infer likely missing secondary designators (Apt, Unit, Suite) using USPS data, building directories, and RallyKit historical records. When a unit is missing or ambiguous, present ranked suggestions with confidence and allow quick selection or manual entry. Enforce DPV rules, prevent fabrication, and require user confirmation before committing any inferred unit.
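
The ranking logic below is a small sketch of how candidate units might be ordered and filtered; the source-priority order and the 30% display threshold follow the acceptance criteria, while everything else is assumed.

```typescript
// Illustrative ranking of inferred unit candidates; shapes and helper names are assumed.
type UnitSource = "USPS" | "BuildingDirectory" | "RallyKit";
type UnitCandidate = { unit: string; source: UnitSource; confidence: number }; // 0-100

const SOURCE_PRIORITY: Record<UnitSource, number> = { USPS: 3, BuildingDirectory: 2, RallyKit: 1 };

function rankUnitSuggestions(candidates: UnitCandidate[]): UnitCandidate[] {
  const ranked = [...candidates].sort(
    (a, b) => b.confidence - a.confidence || SOURCE_PRIORITY[b.source] - SOURCE_PRIORITY[a.source],
  );
  // Hide all suggestions when the best candidate is below the display threshold.
  if (ranked.length === 0 || ranked[0].confidence < 30) return [];
  return ranked.slice(0, 5); // show at most five; never commit a unit without user confirmation
}

console.log(rankUnitSuggestions([
  { unit: "Apt 3B", source: "USPS", confidence: 72 },
  { unit: "Apt 3B", source: "RallyKit", confidence: 72 },
  { unit: "Unit 12", source: "BuildingDirectory", confidence: 41 },
]));
```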

Acceptance Criteria
Suggest Units for Known Multi-Unit Address with Missing Secondary
Given a submitted domestic US address whose primary street address is DPV-confirmed and identified as multi-unit and no secondary designator is present When the address is processed on web or API submit Then the system queries USPS, building directories, and RallyKit historical records for candidate unit designators And ranks up to 5 candidates by confidence; ties are broken with source priority (USPS > Building Directory > RallyKit) And displays each candidate as a unit label (e.g., Apt 3B) with a source tag and a confidence percentage rounded to 0 decimals And excludes any candidate whose secondary fails USPS DPV secondary confirmation for the primary And hides all suggestions if the highest confidence is below 30% And returns the suggestions payload within 800 ms at the 95th percentile
User Confirms Suggested Unit or Enters Manual Unit
Given unit suggestions are available for the address When the user selects a suggestion Then a clear confirmation action is required before persisting And upon confirmation, the unit is saved with status "user confirmed" and the suggestion ID, source, and confidence are stored And USPS DPV secondary confirmation status is stored as "confirmed" When the user opts for manual entry instead Then the input is restricted to 1–10 characters matching ^[A-Za-z0-9\-\/# ]+$ And on submit, the unit must pass USPS DPV secondary confirmation for the primary; otherwise a blocking error is shown and nothing is persisted And no unit (suggested or manual) is persisted without explicit user confirmation
Real-Time District Recalculation After Unit Confirmation
Given an address missing a unit has just had a unit confirmed or a valid manual unit entered When the unit is committed Then the system geocodes the full address including the unit And updates congressional and state legislative districts in the user session and the supporter record And refreshes any action page targeting and scripts to match the new districts before the user can proceed And emits an audit event "district_recalculated" with old and new district IDs And completes geocoding and updates within 1200 ms at the 95th percentile And if geocoding fails, the unit commit is rolled back, a clear error is shown, and no action proceeds with an un-geocoded unit
SMS One-Tap Fix Flow for Missing Unit
Given a supporter has a valid mobile number and an address identified as multi-unit with a missing secondary When RallyKit triggers the SMS one-tap fix Then an SMS is sent within 30 seconds containing a single-use, TLS-protected link that expires after 24 hours and encodes only a nonce (no PII) And the hosted page loads the same top suggestions (respecting confidence thresholds) and preselects the top candidate When the supporter taps Confirm Then the unit is persisted, USPS DPV-validated, districts are recalculated, and an audit record is created within 2.5 seconds at the 95th percentile from tap And delivery, click, and confirmation events are logged with timestamps and channel
No Suggestions for Single-Unit or Undeliverable Primary
Given a submitted address whose primary is not USPS DPV-confirmed or the building is identified as single-unit When the address is processed Then no unit inference is attempted and no unit prompt is shown And the user is routed to the standard address correction flow if the primary is undeliverable And the system proceeds without a unit if the building is single-unit
Auditability and Anti-Fabrication Controls
Given any suggested or manually entered unit is committed When the record is saved Then an immutable audit log entry is created with: supporter ID/hash, primary address ID, normalized full address, selected unit, USPS DPV secondary status/codes, data sources used with weights, confidence score, actor and channel, timestamps (received, suggested, confirmed), and geocoding result And the admin audit UI/API can retrieve the entry by supporter ID and date range And server-side validation rejects any attempt (UI or API) to persist a unit that is not USPS DPV secondary-confirmed for the given primary, returning HTTP 422 with error code "dpv_secondary_unconfirmed" And the client cannot bypass server validation; all failed attempts are logged as security events
One-Tap Correction Prompts (SMS & In-Page)
"As a supporter on mobile, I want a one-tap fix for my address so that I can complete the action quickly without retyping."
Description

Surface address correction suggestions via secure SMS deep link and in-page banner/modal on action pages. Show the top suggestion with one-tap Accept and an Edit option; upon acceptance, apply the correction, re-geocode, and refresh district matches inline. Provide accessible, brandable UI components, localized copy, and event tracking for accept/decline metrics. Use signed time-bound links in SMS for security.
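
One way to implement the signed, time-bound links is an HMAC over a nonce-plus-expiry pair that is verified server-side before the landing page loads. The sketch below assumes that approach and a hypothetical CORRECTION_LINK_SECRET; it is not RallyKit's actual token format:

```typescript
// Minimal sketch of a signed, time-bound correction link (nonce only, no PII).
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

const SECRET = process.env.CORRECTION_LINK_SECRET ?? "dev-only-secret";

function signCorrectionLink(baseUrl: string, ttlSeconds = 15 * 60): string {
  const nonce = randomBytes(16).toString("base64url");      // lookup key, no PII
  const exp = Math.floor(Date.now() / 1000) + ttlSeconds;
  const payload = `${nonce}.${exp}`;
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `${baseUrl}/fix?t=${payload}.${sig}`;
}

function verifyCorrectionToken(token: string): { nonce: string } | null {
  const [nonce, expStr, sig] = token.split(".");
  if (!nonce || !expStr || !sig) return null;                // malformed
  const expected = createHmac("sha256", SECRET)
    .update(`${nonce}.${expStr}`)
    .digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null; // tampered
  if (Number(expStr) < Math.floor(Date.now() / 1000)) return null;  // expired
  return { nonce };   // look up supporter/action by nonce; reject consumed nonces
}
```

Single-use behavior would come from marking the nonce consumed on first successful verification, which is a storage concern outside this sketch.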

Acceptance Criteria
SMS Deep Link One-Tap Accept Applies Correction and Recalculates Districts
Given a supporter received an SMS with a correction deep link for an address they previously entered with errors And the link includes a signed token valid for 15 minutes and bound to that supporter and action id When the supporter taps the link within the validity window Then the action page opens with the top USPS/NCOA-backed corrected address preselected And the UI presents Accept and Edit options And when the supporter taps Accept Then the system applies the corrected address, re-geocodes it, and refreshes district matches inline without a full page reload And 95% of re-geocode-and-refresh operations complete within 2 seconds measured from tap to updated districts And the applied address replaces the prior entry in the session state
In-Page Banner/Modal Shows Top Suggestion With Accept and Edit
Given an action page where a supporter enters an address with typos or a missing unit And a top suggestion is available from USPS/NCOA and geocoding When the suggestion is ready Then display a non-blocking banner or modal containing the suggested address, Accept, and Edit controls And the component meets WCAG 2.1 AA (proper focus management, full keyboard access, ARIA live announcement, 4.5:1 contrast, visible focus) And when Accept is activated (click or keyboard) Then the corrected address is applied, re-geocoded, and district matches refresh inline within 2 seconds P95 And when Decline/Dismiss is chosen Then no changes are applied and no further prompts appear for this session unless the address input changes again
Edit Flow for Manual Address Correction and Unit Selection
Given a suggestion is presented via SMS landing or in-page component When the supporter selects Edit Then an address editor opens with the suggested address prefilled and fields for street, city, state, ZIP, and unit/apartment And if authoritative data indicates multiple units, the unit field offers a selectable list and also allows manual entry And on Save with a valid address, the system applies the entered address, re-geocodes, and refreshes district matches inline within 2.5 seconds P95 And on Cancel, the user returns to the action page with no address changes applied And validation errors are shown inline with accessible messaging and do not clear user input
Localization and Brandable UI for Prompts Across Channels
Given the organization has configured brand colors, typography, and logo, and i18n resources for supported locales When prompts render in SMS landing and in-page contexts for a user with locale X Then all visible strings (headers, buttons, error messages) are sourced from the i18n dictionary for locale X with zero missing keys in integration tests And right-to-left locales render layout and icons mirrored appropriately And the component respects theme tokens for colors and fonts while maintaining minimum 4.5:1 contrast and 44x44 touch targets And fallback to the default locale occurs if a string is missing, with a logged warning for each missing key And code scanning finds no hardcoded English strings in the prompt components
Event Tracking for Accept/Decline and Suggestion Exposure
Given a suggestion is shown, accepted, edited, or declined via SMS or in-page When the user performs one of these actions Then emit analytics events: suggestion_shown, correction_accepted, correction_declined, correction_edited with properties {source, action_id, suggestion_id, original_address_hash, corrected_address_hash, time_to_action_ms, geocode_status} And events exclude raw PII (only salted hashes of addresses and no phone numbers or emails) And events are deduplicated within a 10-minute window per {session_id, suggestion_id, event_type} And 99% of events are delivered to the analytics sink within 5 minutes, with offline retry for up to 24 hours And QA can filter events by source=SMS and source=InPage in the analytics dashboard to verify counts
Expired or Tampered SMS Link Fails Securely Without Applying Changes
Given a supporter taps an SMS deep link after the token has expired or the signature is invalid When the app validates the token Then show an error state explaining the link expired or is invalid and offer a Request New Link call-to-action And do not apply or display any corrected address from the link And return HTTP 401/403 for token validation API calls and log a security_audit event with token_invalid=true and no PII And attempts to reuse a consumed token are rejected with the same error state
Data Sync & Write-Back to Supporter Profiles
"As an organizer, I want corrected addresses to update supporter profiles automatically so that future actions and outreach use accurate data."
Description

When a correction is accepted, write the standardized address back to the supporter profile and any pending action contexts, de-duplicating records and preserving the original input for audit. Implement versioning with timestamps and source attribution (USPS/NCOA, user confirmed), and emit events/webhooks so external integrations (CRMs, email tools) receive the update.
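
A rough shape for the versioned address record and the supporter.address.updated webhook might look like the following; field names mirror the criteria below, while the types and the signing helper are illustrative assumptions:

```typescript
// Sketch of the versioned address write-back and signed webhook payload.
import { createHmac } from "node:crypto";

interface AddressVersion {
  versionId: string;
  version: number;                       // strictly monotonic per supporter
  address1: string;
  address2?: string;
  city: string;
  state: string;
  postalCode: string;
  country: string;
  lat: number;
  lon: number;
  precision: string;                     // e.g. "rooftop"
  sourceAttribution: "USPS" | "NCOA" | "User Confirmed";
  createdAt: string;                     // ISO 8601 UTC
}

interface AddressUpdatedEvent {
  event: "supporter.address.updated";
  eventId: string;
  supporterId: string;
  occurredAt: string;
  idempotencyKey: string;
  oldAddressVersionId: string | null;
  newAddressVersionId: string;
  sourceAttribution: AddressVersion["sourceAttribution"];
  affectedPendingActionIds: string[];
}

// HMAC-SHA256 over the body, placed in the X-RallyKit-Signature header so
// subscribers can verify the payload before trusting it.
function signWebhookBody(body: AddressUpdatedEvent, secret: string): string {
  return createHmac("sha256", secret).update(JSON.stringify(body)).digest("hex");
}
```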

Acceptance Criteria
Write-Back on Corrected Address Acceptance
Given a supporter profile with an existing address and a proposed standardized address When the supporter accepts the correction via SMS or page prompt Then the standardized address fields (address1, address2, city, state, postalCode, country) replace the current profile address without loss of other profile data And the geocoded coordinates (lat, lon) and geocoding precision are stored with the address And the address normalization sourceAttribution is recorded as USPS, NCOA, or User Confirmed as applicable And the update is persisted within 2 seconds at p95 and returns a success response
Pending Action Context Update & District Recalculation
Given a supporter has one or more pending action contexts that depend on district targeting When the supporter’s corrected address is written back to the profile Then legislative districts (federal and state) are recalculated from the new geocode and stored And all pending action contexts are updated to reference the new districts and targets before the next render or send And any dynamic, district-specific scripts are regenerated using the updated bill status and targets And previously completed actions remain unchanged And the recalculation and context updates complete within 5 seconds at p95
Audit Trail Preservation of Original Input
Given a supporter submitted an address that is later corrected When the standardized address is written back Then the original rawAddress input, entryChannel (SMS or Web), and enteredAt timestamp are preserved unmodified in an audit record linked to the supporter And the audit record references the resulting standardized address versionId And all audit records are exportable and queryable by time range and supporterId And audit data access is restricted to authorized roles and is retained for at least 24 months
Versioning with Timestamps and Source Attribution
Given a supporter’s address is updated to a materially different standardized value When the write-back occurs Then a new address version is created with an incremented version number, ISO 8601 UTC timestamp, and sourceAttribution (USPS, NCOA, User Confirmed) And the supporter profile’s currentAddress pointer references the newest version And previous address versions remain retrievable in chronological order And no new version is created if the normalized address is identical to the current version (idempotent write)
De-duplication of Supporter Records on Address Update
Given multiple supporter profiles exist that match on normalized email and phone and the updated address fingerprint When the corrected address is written back Then potential duplicates are detected and a merge is executed per dedup rules And one canonical supporterId remains with all actions, tags, and subscriptions preserved And audit trails from merged profiles are retained and linked to the canonical profile And a merge event is recorded and emitted without exposing PII in logs And if auto-merge confidence is below threshold, a review task is created instead of merging
Event/Webhook Emission to External Integrations
Given webhooks are configured for address updates When a supporter’s address version is created or the canonical profile changes due to merge Then an event supporter.address.updated is emitted with payload including supporterId, eventId, occurredAt, idempotencyKey, oldAddressVersionId, newAddressVersionId, sourceAttribution, and affectedPendingActionIds And the webhook is delivered within 10 seconds at p95 with HMAC-SHA256 signature in X-RallyKit-Signature And failed deliveries are retried with exponential backoff at least 6 times and deduplicated by idempotencyKey And a 2xx response from the subscriber marks the event as delivered; non-2xx responses retry
Idempotency and Concurrency Controls
Given multiple acceptance confirmations or API calls occur nearly simultaneously for the same supporter When processing the address correction Then only one write-back produces a new version and subsequent identical requests return the prior result using the same idempotencyKey And concurrent updates use ETag/If-Match to prevent lost updates; conflicts return HTTP 409 with guidance to retry And internal processing ensures version numbers are strictly monotonic per supporter And at sustained load of 100 updates/second, 95% of write-backs complete within 2 seconds and events enqueue within 3 seconds
Audit Logging & Evidence Export
"As a nonprofit director, I want verifiable logs of address corrections so that I can demonstrate compliance and campaign integrity to funders and auditors."
Description

Record every correction attempt and outcome, including original input, suggested canonical address, confidence, DPV/NCOA reason codes, user confirmation method (SMS/Page), timestamps, and user/session identifiers. Provide export and API access for audit-ready proofs linked to campaign actions, with filters by campaign, date range, and outcome.
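
For reference, the log record the criteria below describe could be modeled roughly as follows; enum values and optionality are assumptions, not a published schema:

```typescript
// Illustrative shape of one address-correction audit record.
type CorrectionOutcome =
  | "accepted"
  | "auto_applied"
  | "rejected_by_user"
  | "no_change"
  | "invalid_address"
  | "service_error"
  | "timeout";

interface AddressCorrectionLog {
  attempt_id: string;                      // globally unique, immutable
  campaign_id: string;
  action_id?: string;
  original_input: string;
  suggested_canonical_address?: string;
  confidence_score: number;                // 0–1
  dpv_code: string | null;                 // USPS DPV code, stored verbatim
  ncoa_code: string | null;                // NCOA code, stored verbatim
  outcome: CorrectionOutcome;
  user_confirmation_method: "SMS" | "Page" | "None";
  user_id?: string;                        // when authenticated
  session_id?: string;                     // otherwise
  district_before?: string;
  district_after?: string;
  district_change?: boolean;
  created_at: string;                      // ISO-8601 UTC, millisecond precision
  updated_at: string;
}
```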

Acceptance Criteria
Log Data Completeness on Address Correction Attempt
Given any address correction is attempted (auto-suggest, user-initiated, or system retry) When the attempt completes (success, rejection, or failure) Then a log record is persisted containing: attempt_id, campaign_id, action_id (if present), original_input, suggested_canonical_address, confidence_score (0–1), dpv_code, ncoa_code, outcome, user_confirmation_method (SMS|Page|None), user_id (if authenticated) or session_id, created_at and updated_at timestamps (ISO-8601 UTC) And the record is retrievable via export and API within 2 seconds of write
Timestamp, Identity, and Uniqueness Integrity
Given a new audit log record is created Then created_at and updated_at are stored in ISO-8601 UTC with millisecond precision And attempt_id is globally unique and immutable And within a session, records sort deterministically by created_at ascending with no duplicate attempt_ids
Outcome and Reason Code Normalization
Given any correction attempt response When the result is recorded Then outcome is one of [accepted, auto_applied, rejected_by_user, no_change, invalid_address, service_error, timeout] And dpv_code and ncoa_code, when present, conform to USPS/NCOA code sets and are stored verbatim And if outcome is service_error or timeout, provider_error_code and reason_message are captured And if no DPV/NCOA code is returned, the fields are stored as null
User Confirmation Channel Capture (SMS vs Page)
Given a user confirms or rejects a suggested address via SMS link or page prompt When the action is taken Then user_confirmation_method is recorded as SMS or Page And channel_reference_id (e.g., sms_message_id or page_session_id) is stored And if the correction is auto-applied without user input, user_confirmation_method is recorded as None
Filtered Evidence Export (CSV and JSON)
Given a user selects campaign(s), date range (UTC), and outcome filters When Export is requested Then a downloadable file is produced in CSV and JSON formats with identical field sets and record counts And only records matching the filters are included; CSV includes a header row; all text is UTF-8 And exports of up to 100,000 records complete within 60 seconds; larger exports are chunked without data loss And if no records match, an empty file with headers is produced
Audit API Retrieval with Filtering and Pagination
Given an authenticated API client with audit.read scope When it requests GET /audit-logs with campaign_id, start_date, end_date, outcome, page, and limit Then the API returns 200 with JSON records matching the filters, sorted by created_at desc, and pagination metadata (next_cursor or page/limit and total_count) And returns 400 for invalid filters, 401/403 for missing/insufficient auth And P95 response time is <= 500 ms for pages up to 1000 records
Linkage to Campaign Actions and District Recalculation Evidence
Given a supporter action triggers address correction and potential district recalculation When logging the attempt Then the record includes campaign_id, action_id, supporter_id (or anonymous_session_id), and district_before and district_after if known And district_change is stored as true when districts differ, else false And the linked log is retrievable by action_id via export and API
Privacy, Consent, and Data Retention Controls
"As an organizer, I want privacy-safe correction flows with clear consent so that our outreach complies with laws and supporter expectations."
Description

Obtain and record explicit consent before updating addresses via SMS or web prompt. Allow organizers to configure retention windows for raw inputs and correction logs, automatically redact PII in exports where required, and display clear notices to supporters. Ensure compliance with USPS/NCOA licensing terms, TCPA for SMS, and applicable privacy laws; provide settings to disable autocorrect for sensitive campaigns.
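
The export-time redaction rule (mask all but the last two phone digits, hide the email local part, drop street lines, keep city/state/ZIP5) might be sketched like this; field names are assumptions:

```typescript
// Minimal sketch of PII redaction applied to one export row.
interface ExportRow {
  phone?: string;
  email?: string;
  address1?: string;
  address2?: string;
  city?: string;
  state?: string;
  zip5?: string;
}

function redactRow(row: ExportRow): ExportRow {
  return {
    ...row,
    phone: row.phone ? `*******${row.phone.slice(-2)}` : undefined,
    email: row.email ? `***@${row.email.split("@")[1] ?? ""}` : undefined,
    address1: row.address1 ? "***" : undefined,
    address2: row.address2 ? "***" : undefined,
    // city, state, and ZIP5 are retained per policy
  };
}
```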

Acceptance Criteria
SMS Consent Before Address Update
Given a supporter has an address with low confidence or missing unit and a valid SMS-capable number on file And no prior autocorrect consent exists for the current address instance When RallyKit sends a consent request SMS including organization name, purpose of correction, brief data-use notice, privacy policy link, and instructions to reply YES to confirm or NO to decline Then a reply of YES within 30 days records consent with supporter ID, phone, message ID, timestamp (UTC ISO-8601), language, method=SMS, and a hash of the disclosure text shown And the USPS/NCOA-standardized address is applied and legislative districts are recalculated within 5 seconds of receipt And the before/after address, correction source, confidence score, and operator (system) are written to an immutable audit log And a reply of NO or no reply keeps the address unchanged and suppresses further prompts for 30 days And the SMS includes HELP/STOP; a STOP reply immediately halts the flow, confirms opt-out, and marks the phone as SMS opt-out
Web Prompt Consent Before Address Update
Given a supporter is on an action page and the entered address is detected as likely incorrect or incomplete When a modal prompt displays a standardized address suggestion with clear disclosure text and a link to the privacy policy Then selecting Accept applies the corrected address, recalculates districts within 2 seconds, and records consent with method=Web, consent_text_version, IP, user agent, and timestamp (UTC ISO-8601) And selecting Decline leaves the original address unchanged, records a decline event, and suppresses re-prompting for the session and for 30 days for that address instance And no address updates occur unless Accept is explicitly selected And the modal is WCAG 2.1 AA compliant (focus trapped, keyboard navigable, ARIA labels present)
Configurable Data Retention Windows and Automated Purge
Given an organizer with Admin role opens Privacy and Retention settings When they set Raw Address Input Retention to X days and Correction Log PII Retention to Y days and click Save Then the settings are validated (X and Y within allowed bounds) and persisted, and the admin audit log records who changed them, when, and the old->new values And a daily purge job removes raw address inputs older than X days and irreversibly redacts PII fields in correction logs older than Y days (name, phone, email, and street lines replaced with [REDACTED] and a salted one-way hash) And a Test Purge button displays the count of items that would be affected without making changes And each purge run produces a timestamped report with item counts deleted/redacted and success/failure status
PII Redaction in Exports and Reports
Given an organizer generates a Supporters or Actions export When Redact PII is enabled (explicitly or enforced by campaign policy) Then phone, email, and street address fields are masked (only last 2 digits of phone visible; email local part replaced with ***; street lines replaced with ***), while city, state, and ZIP5 are retained And correction logs in the export exclude raw inputs and include only standardized address tokens and jurisdiction codes (FIPS, district IDs) And a pseudonymous export identifier is provided per supporter/action to allow linkage without exposing PII And export metadata includes redaction status, requesting user, timestamp (UTC ISO-8601), and legal basis note And if Redact PII is disabled and not required, a warning modal explains compliance implications and requires Admin confirmation before proceeding And API exports honor the same redaction flag and unit tests verify no PII appears when redaction is on
USPS/NCOA Licensing Compliance Safeguards
Given Address Autocorrect uses USPS/NCOA data sources When address suggestions are shown or applied Then only USPS-standardized address elements permitted by license are stored and displayed (no exposure of NCOA proprietary move data or non-permitted codes) And suggestion UI includes required attribution text/link as specified by USPS/NCOA And exports do not include USPS/NCOA-only fields beyond permitted standardized address components And a compliance report can be generated showing counts of lookups, fields stored, and retention policy settings for the period And if the vendor signals a terms change or quota breach, autocorrect operations are halted and admins are prompted to review/accept terms before resumption
TCPA SMS Compliance and Opt-Out Handling
Given a supporter does not have SMS opt-in recorded When the system attempts to send a consent request SMS for address correction Then the message is not sent, a web prompt is used if available, and the attempt is logged with reason=No SMS opt-in Given a supporter has SMS opt-in recorded When a consent request SMS is sent Then the message includes organization name, purpose of the message, HELP and STOP instructions, message/data rates disclosure, and that it is a one-time consent confirmation And messages respect quiet hours configured for the campaign timezone And a STOP reply immediately suppresses further SMS, sends a confirmation, and updates the supporter record to SMS opt-out And non-YES/NO replies trigger at most one clarification message and then the flow ends And all delivery, reply, and suppression events are logged with timestamps and message IDs
Campaign-Level Autocorrect Disable for Sensitive Campaigns
Given an organizer toggles Disable Address Autocorrect in a campaign marked sensitive When supporters submit addresses via action pages under that campaign Then no external address correction lookups are performed, no SMS prompts are sent, and no automatic updates are applied And the UI displays a tooltip or banner indicating autocorrect is disabled by the organizer And logs and exports reflect autocorrect_status=disabled and zero correction attempts for the campaign And the setting inherits to child action pages unless explicitly overridden by an Admin; all changes are captured in the admin audit log

Redistricting Watch

Always-on boundary monitoring that revalidates past supporters when lines shift. Notifies campaigns of affected records, updates targets automatically, and tags actions with the district version used—keeping outreach accurate through mid-cycle changes.

Requirements

Boundary Change Ingestion
"As a data manager, I want RallyKit to automatically ingest official redistricting changes from trusted sources so that our records stay aligned without manual uploads."
Description

Continuously ingest updated district boundary data from authoritative sources (state GIS portals, Census TIGER/Line, local election boards) via scheduled polling and webhooks. Support common geospatial formats (GeoJSON, Shapefile, CSV with WKT) and normalize into a canonical, versioned schema with effective and publication dates. Validate topology, handle partial-county splits, at-large and multi-member districts, and municipal/state/federal layers. Deduplicate and sign updates, store provenance metadata, and expose a dashboard view of new map versions awaiting approval.
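
The scheduled-polling half of this requirement is essentially a conditional GET loop. A sketch assuming Node 18+'s global fetch, with placeholder endpoint and storage handling:

```typescript
// Sketch of one poll cycle: single conditional request, 304 means no new
// version, 429 honors Retry-After, 200 yields a payload to checksum and store.
interface PollState {
  etag?: string;
  lastModified?: string;
}

async function pollBoundarySource(url: string, state: PollState) {
  const res = await fetch(url, {
    headers: {
      ...(state.etag ? { "If-None-Match": state.etag } : {}),
      ...(state.lastModified ? { "If-Modified-Since": state.lastModified } : {}),
    },
  });

  if (res.status === 304) return { changed: false, state };          // nothing new
  if (res.status === 429) {
    const retryAfter = Number(res.headers.get("Retry-After") ?? "60");
    return { changed: false, state, backoffSeconds: retryAfter };    // back off
  }
  if (!res.ok) throw new Error(`poll failed: ${res.status}`);

  const payload = Buffer.from(await res.arrayBuffer());              // checksum + store as draft version
  return {
    changed: true,
    payload,
    state: {
      etag: res.headers.get("ETag") ?? undefined,
      lastModified: res.headers.get("Last-Modified") ?? undefined,
    },
  };
}
```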

Acceptance Criteria
Scheduled Polling of State GIS Portal Updates
Given a configured state GIS portal endpoint with a stored last_successful_poll timestamp When the scheduler runs at the configured interval Then the system issues a single request using If-Modified-Since or ETag and backs off per HTTP 429 Retry-After And if the response is 304 Not Modified, no new version is created and the poll is logged as successful And if the response is 200 with newer data, the payload is downloaded, checksummed, stored, and a draft map version is created And the poll job completes or fails within 5 minutes per endpoint with up to 3 retries using exponential backoff And all outcomes (success, not-modified, failure) are recorded with timestamps and response codes
Webhook Ingestion With Signature Verification
Given a registered webhook endpoint with a configured HMAC secret or public key When the provider POSTs a notification with signature headers Then the signature is validated; invalid signatures return 401 and are not enqueued And valid deliveries return 202 within 2 seconds and are enqueued for async processing And duplicate deliveries (same event_id) are ignored without creating a new version And end-to-end processing completes within 10 minutes for 95% of events, with failures retried up to 3 times And provenance fields (source, event_id, received_at, signature_alg) are stored with the version
Multi-Format Geospatial Parsing and Normalization
Given an input in GeoJSON, a zipped ESRI Shapefile (.shp/.shx/.dbf/.prj), or CSV with WKT and EPSG code When the dataset is ingested Then the system auto-detects format and CRS and reprojects to EPSG:4326 And required fields are mapped to the canonical schema (jurisdiction_level, layer, district_id, district_name/number, chamber, effective_date, publication_date, source_id, geometry, version_id) And record counts match exactly between source and normalized output (zero records lost); mismatches block creation with a descriptive error And missing effective_date or publication_date triggers requires_review status on the draft version with a visible warning And parsing errors identify file/row and field names and do not create a draft version
Topology and Geometry Validation Across Layers
Given a normalized dataset for a single jurisdiction and layer-period When validation runs Then all geometries are valid (no self-intersections, rings closed, area > 0) and multi-part geometries are supported And single-member layers have no overlapping districts beyond a tolerance of 1e-9 degrees and cover the jurisdiction boundary within 2% gap/overage And multi-member and at-large districts are flagged as such and either share geometry with a member_seats attribute or have separate member features linked by a group_id And partial-county splits include county identifiers for each piece and preserve attribution And a validation report (pass/fail counts and issues list) is attached; any failure blocks approval
Versioning, Deduplication, and Cryptographic Signing
Given a newly ingested normalized dataset When it is compared to the latest approved version for the same jurisdiction and layer Then identical content (attributes + geometries after normalization) yields no new version and logs a duplicate event And differing content creates a new monotonically increasing version number with status Awaiting Approval and recorded effective/publication dates And the system computes a SHA-256 content hash and creates a digital signature with the platform private key; the signature and key fingerprint are stored And a diff summary (added/removed/changed district counts) is generated and matches the geometric delta And only one pending draft per jurisdiction/layer exists; concurrent ingestions are queued
Provenance Metadata Capture and Auditability
Given any successful ingestion event (poll or webhook) When the draft version is created Then the system stores immutable provenance: source name, URL, access method, fetch_time, received_bytes, license, contact, event_id/ETag/Last-Modified, CRS, parser version And the provenance is viewable in the UI and retrievable via API as JSON And an audit log records actor, timestamp, action (ingest, validate, approve, reject), and affected version_id And missing mandatory provenance fields set the draft to requires_review and block approval
Dashboard View of Map Versions Awaiting Approval
Given a new draft version exists When the dashboard is opened by an authorized user Then the draft appears under Awaiting Approval within 30 seconds of creation And each row shows jurisdiction, layer, version number, effective/publication dates, source, diff summary, validation status, and a link to provenance And filters by jurisdiction_level and layer, and search by source, return results within 500 ms for up to 5,000 drafts And opening a draft shows a geometry preview and approve/reject actions; approving sets status Approved and rejecting archives the draft with a reason And all approval/rejection actions capture user identity and are written to the audit log
Supporter Revalidation Engine
"As an organizer, I want supporters to be revalidated against new boundaries as soon as maps change so that we don’t contact the wrong legislators."
Description

Recompute each supporter’s district assignment when a new boundary version is approved using rooftop geocoding and point-in-polygon matching with fallbacks (parcel, ZIP+4, centroid) and confidence scoring. Queue revalidation jobs in near real time with tenant isolation and rate limiting, flag ambiguous or low-confidence addresses for review, and re-run automatically when supporter addresses change. Persist mappings by district version, maintain idempotency, and surface revalidation progress and exceptions in the RallyKit dashboard and APIs.
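
The geocoding fallback chain and review flagging can be sketched as an ordered list of geocoders tried in sequence, with anything under the confidence threshold routed to review rather than auto-updated; geocoder implementations here are placeholders:

```typescript
// Sketch of the rooftop -> parcel -> ZIP+4 -> centroid fallback chain.
interface GeocodeResult {
  lat: number;
  lon: number;
  method: "rooftop" | "parcel" | "zip4_centroid" | "address_centroid";
  confidence: number;                 // 0..1
}

type Geocoder = (address: string) => Promise<GeocodeResult | null>;

async function geocodeWithFallbacks(
  address: string,
  geocoders: Geocoder[],              // ordered: rooftop first, centroid last
  reviewThreshold = 0.7               // configurable default per the criteria below
): Promise<{ result: GeocodeResult | null; needsReview: boolean }> {
  for (const geocode of geocoders) {
    const result = await geocode(address);
    if (result) {
      return { result, needsReview: result.confidence < reviewThreshold };
    }
  }
  return { result: null, needsReview: true };   // NO_GEOCODE -> review queue
}
```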

Acceptance Criteria
New Boundary Version Triggers Revalidation Jobs
Given a new boundary version is approved and published for a jurisdiction And the tenant has at least one supporter with an address in that jurisdiction When the boundary version ingestion completes Then a revalidation job is enqueued for the tenant within the configured SLA (default 60s) And the job payload includes tenant_id, boundary_version_id, idempotency_key, and jurisdiction And re-posting the same boundary version does not create a duplicate job (idempotent) And supporters outside the jurisdiction are excluded from the job scope
Geocoding with Fallbacks and Confidence Scoring
Given a supporter address requires geocoding When revalidation runs Then rooftop geocoding is attempted first And if unavailable, parcel centroid is used; else ZIP+4 centroid; else address centroid as final fallback And the persisted result includes latitude, longitude, geocoding_method, and confidence_score [0..1] And addresses with confidence_score below the configured threshold (default 0.70) are flagged for review and not auto-updated
Point-in-Polygon Assignment and Versioned Persistence
Given a geocoded point and a boundary_version_id When matching districts Then districts are assigned using point-in-polygon against the specified boundary_version_id And the mapping is persisted with supporter_id, district_ids, boundary_version_id, effective_at, and geocoding_method And prior mappings remain intact (no destructive overwrite), enabling historical queries by boundary_version_id And re-running the same inputs produces the same mapping (idempotent)
Automatic Revalidation on Supporter Address Change
Given a supporter’s address is created or updated via UI, API, or import When the change is saved Then a revalidation job is enqueued within the configured SLA (default 30s) And the job uses the latest approved boundary versions for all applicable jurisdictions And previous address-to-district mappings are archived with an ended_at timestamp And an audit log entry records who/what changed, when, and the revalidation job_id
Tenant Isolation and Rate Limiting of Revalidation Queue
Given multiple tenants and revalidation jobs are active When jobs are queued and executed Then data access is scoped to tenant_id in all read/write operations And per-tenant concurrency and external provider rate limits respect configured ceilings (e.g., max_concurrency_per_tenant, geocoder_qps) And 429/5xx responses trigger exponential backoff and retry up to the configured max_retries, after which jobs are dead-lettered And dead-lettered jobs are visible with error details and are not retried automatically without manual action
Dashboard and API Visibility of Progress and Exceptions
Given at least one revalidation job exists When a user views the RallyKit dashboard or calls the jobs API Then progress shows total, processed, succeeded, flagged_for_review, failed, start_time, and estimated_time_remaining And the UI auto-refreshes progress at the configured interval (default 5s) without full page reload And exceptions are listed with supporter_id, reason_code, message, and export capability (CSV) And the API returns the same fields with pagination and filters by tenant_id, boundary_version_id, and status
Ambiguous or Low-Confidence Address Flagging for Review
Given a revalidation yields multiple candidate districts or low geocoding confidence When the result cannot meet the confidence threshold or uniqueness constraints Then the supporter is marked Needs Review with reason_code in {LOW_CONFIDENCE, MULTIPLE_MATCHES, NO_GEOCODE} And no district mapping update is applied until resolved And the Review queue allows manual override or re-run after address correction, recording actor, timestamp, and chosen resolution And resolving an item triggers revalidation of dependent mappings and clears the Needs Review flag
Auto-Target Realignment
"As a campaign director, I want supporter targets to update automatically when districts shift so that outreach remains accurate without recreating campaigns."
Description

Automatically update targeted legislators for active campaigns and one-tap action pages when a supporter’s district changes, applying campaign-specific rules (e.g., freeze targets after send, exclude certain chambers). Recompute recipient lists, update dynamic scripts tied to bill status and district, and provide a preflight simulation with diffs before applying changes. Support bulk approvals, rollback on failure, and API/webhook signals to downstream integrations to keep outreach accurate without recreating campaigns.
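
A preflight diff is a non-mutating comparison of current versus recomputed targets per supporter and campaign; a minimal sketch with assumed shapes:

```typescript
// Sketch of one preflight diff row; nothing is written until approval.
interface TargetDiff {
  supporterId: string;
  campaignId: string;
  oldTargets: string[];
  newTargets: string[];
  changed: boolean;
  scriptChange: boolean;
}

function buildPreflightDiff(
  supporterId: string,
  campaignId: string,
  oldTargets: string[],
  newTargets: string[],
  scriptChange: boolean
): TargetDiff {
  const a = [...oldTargets].sort();
  const b = [...newTargets].sort();
  const same = a.length === b.length && a.every((id, i) => id === b[i]);
  return { supporterId, campaignId, oldTargets, newTargets, changed: !same, scriptChange };
}
```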

Acceptance Criteria
Real-Time Auto-Target Realignment on District Change
Given a supporter’s district has changed and Redistricting Watch ingests the change And the supporter is associated with at least one active campaign or one-tap action page When Auto-Target Realignment runs Then the supporter’s target legislator list is recomputed per the latest district within 5 minutes of ingest And active campaigns and one-tap pages reflect the new recipients without recreating the campaign or altering share URLs And dynamic scripts tied to bill status and district are regenerated to match the new district context And a change log entry is recorded containing supporter_id, campaign_id, previous_targets[], new_targets[], previous_script_id, new_script_id, district_version_before, district_version_after, timestamp
Campaign-Specific Target Rules Respected During Realignment
Given a campaign has rules configured (e.g., freeze_targets_after_send=true, exclude_chambers=[Senate]) When Auto-Target Realignment computes new targets Then for any supporter who has already completed a send within that campaign, the previously sent action’s recipients remain unchanged And excluded chambers are never included in the recomputed target list And if rule evaluation yields an empty target list, the supporter is flagged with reason=NO_VALID_TARGETS and no update is applied And all rule evaluations are persisted per supporter/campaign in the audit log with rule_name, rule_value, outcome
Preflight Simulation with Diffs and Bulk Approval
Given redistricting affects N supporters across M campaigns When a preflight is generated Then the UI and API present per-campaign and per-supporter diffs including old_targets[], new_targets[], script_change=true|false, count_delta, and a summary totals row And no production data is mutated until an approval action is received And an authorized user can approve all changes, approve by campaign, or approve by supporter in bulk (>= 1000 records in a single operation) within the UI or API And rejected items are excluded from apply and remain visible with status=REJECTED until dismissed
Transactional Apply with Automatic Rollback on Failure
Given an approved change set for a campaign When Auto-Target Realignment applies changes Then updates to targets, scripts, and tags are performed atomically per campaign And if any step fails or downstream notifications exhaust retries, all changes for that campaign are rolled back to the pre-apply state And the system emits an error event with correlation_id and marks the apply job as FAILED with a human-readable reason And a retry can be safely re-attempted without creating duplicate updates
Webhook and API Notifications to Downstream Integrations
Given a change set is successfully applied When notifications are dispatched Then a webhook event named targets.realigned is sent for each affected campaign with payload fields campaign_id, supporter_ids[], previous_targets_by_supporter, new_targets_by_supporter, district_version, scripts_changed, applied_at And requests are signed (HMAC-SHA256) and delivered within 60 seconds of apply completion And failed deliveries are retried up to 3 times with exponential backoff and moved to a dead-letter queue with operator alerting And an API endpoint exposes an idempotent GET by correlation_id returning the final status and payload checksum
District Version Tagging and Historical Integrity
Given district boundaries have changed When supporters take actions after realignment Then each new action record is tagged with district_version_used matching the version used for target computation And historical action records retain their original district_version_used and targets unchanged And exports and reports include district_version_used and can be filtered by version
One-Tap Action Pages Reflect Realignment Without URL Change
Given a one-tap action page has a share URL and cached target/script content When Auto-Target Realignment is applied for affected supporters Then the page URL remains unchanged And CDN/UI caches for targets and scripts are invalidated and refreshed within 2 minutes And the displayed recipients and scripts match the recomputed values for the supporter’s new district
District Version Tagging
"As a compliance officer, I want each action labeled with the district version used so that audits can verify the correct jurisdiction at the time of contact."
Description

Tag all actions, events, exports, and API/webhook payloads with the district version ID, effective date, and source to provide audit-ready proof of the jurisdiction basis at time of contact. Surface tags in UI detail views and reporting filters, and ensure tags persist through data pipelines and external CRMs. Enable comparison reports across versions to quantify impact and maintain historical integrity.
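
The tag itself is a small, additive object carried on every action, export row, and payload. A sketch of the shape implied by the API criteria below; the event wrapper is an assumption:

```typescript
// Sketch of the district version tag and how it rides on a webhook payload.
interface DistrictVersion {
  id: string;                                    // e.g. "DV2"
  effectiveDate: string;                         // YYYY-MM-DD
  source: "official" | "vendor" | "manual";
}

interface ActionCreatedEvent {
  event: "action.created";
  actionId: string;
  campaignId: string;
  supporterId: string;
  occurredAt: string;                            // ISO-8601 UTC
  districtVersion: DistrictVersion;              // read-only, additive field
}
```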

Acceptance Criteria
Action Creation Tagging
Given district version DV1 is effective on 2025-06-01 and DV2 is effective on 2025-07-01 When a supporter completes an action at 2025-06-30T23:59:59Z Then the action record contains district_version_id=DV1, district_version_effective_date=2025-06-01, and district_version_source ∈ {official, vendor, manual} And the three fields are non-null and immutable thereafter When a supporter completes an action at 2025-07-01T00:00:00Z Then the action record contains district_version_id=DV2 and district_version_effective_date=2025-07-01 And attempts to update any district version tag on the action return a validation error and do not change stored values Then 100% of newly created actions in a batch of 100 have all three fields populated (0 nulls) And the action audit log records the applied district_version_id and timestamp
Event RSVP and Check-in Tagging
Given an event generates legislator targets or scripts at RSVP/check-in time When a supporter RSVPs or is checked in Then the RSVP/check-in record is saved with district_version_id, district_version_effective_date, and district_version_source matching the version used to derive targets/scripts at that moment And these fields are read-only after creation When RSVPs span multiple district versions due to a mid-cycle change Then the event shows a Mixed tag at the event level and each RSVP/check-in row displays its specific version values Then 100% of new RSVP/check-in records created during a test window include non-null district version fields
UI Detail Surfacing and Reporting Filters
Given an existing action, RSVP/check-in, export job, and API delivery log When a user with Analyst or higher role opens each detail view Then district_version_id, district_version_effective_date, and district_version_source are displayed, copy-to-clipboard enabled, and not editable When a user opens Reporting and applies the District Version filter to select DV1 Then only rows tagged DV1 are returned and totals update accordingly When the user multi-selects DV1 and DV2 Then results include both versions and the version facet counts reflect the same totals And clearing the filter returns all versions Then the District Version filter is available on Actions, Events, and Outcomes reports
Exports Include Per-row Version Tags and Manifest
Given a CSV export of actions across multiple district versions When the export completes Then each row includes columns: district_version_id, district_version_effective_date (ISO-8601 date), district_version_source, all non-null And the export package contains a manifest.json listing distinct district_version_id values, their effective_date, source, row counts per version, and export timestamp When an export is limited to a single district version via filter Then the manifest lists exactly that version and the per-row columns match it And downstream ingestion of the CSV validates the presence of these columns; missing columns cause the export job to be marked Fail
API and Webhook Payload Tagging
Given outbound webhooks for action.created and rsvp.created When a payload is delivered Then the JSON includes districtVersion: { id: <string>, effectiveDate: <YYYY-MM-DD>, source: <official|vendor|manual> } with all fields present And the public API GET /actions and GET /events/{id}/rsvps return the same districtVersion object for each record When API docs are generated Then the new fields are documented as read-only and present in responses without breaking existing clients (additive change) And contract tests validate that 100% of sampled webhook deliveries include districtVersion with non-null values
Data Pipeline and External CRM Persistence
Given connectors sync actions and RSVPs to external CRMs and the data warehouse When a sync runs Then the destination schemas contain district_version_id, district_version_effective_date, and district_version_source as non-nullable fields And sampled records in each destination retain the exact values from RallyKit (string-equal for id/source and date-equal for effectiveDate) When a transformation job aggregates actions by district version Then the row counts per version in the warehouse equal the counts in RallyKit reporting for the same time range and filters And integration tests for two connectors (e.g., Generic REST, CSV SFTP) confirm pass-through of all three fields
Cross-Version Comparison Reporting
Given actions exist under versions DV1 and DV2 within the same campaign and districts When a user opens the Version Comparison report and selects DV1 vs DV2 for 2025-06-01 to 2025-08-01 Then the report displays totals by district and campaign for each version, absolute delta, and percent change And clicking any metric drills down to the exact constituent rows tagged with that version When the report is exported Then the CSV includes columns for district_version_id and version_comparison metrics and row counts reconcile with the on-screen totals And queries complete within 5 seconds for datasets up to 500k rows
Change Impact Notifications
"As a nonprofit director, I want proactive alerts when boundary changes affect our supporters so that we can review impacts and act quickly."
Description

Notify campaign owners when redistricting updates affect their records with in-app alerts and configurable email/Slack digests summarizing impacted supporters, campaigns, and urgency. Provide drill-down lists, CSV export, and links to simulate or approve auto-target realignment. Allow threshold settings (e.g., notify only if >1% of active supporters change) and quiet hours, and localize messaging for multi-tenant organizations.
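
The urgency and threshold rules spelled out in the criteria below reduce to a couple of comparisons; a sketch with illustrative function names:

```typescript
// Sketch of urgency classification and the org-level notify threshold.
type Urgency = "High" | "Medium" | "Low";

function classifyUrgency(impacted: number, activeSupporters: number): Urgency {
  const pct = activeSupporters > 0 ? (impacted / activeSupporters) * 100 : 0;
  if (pct >= 10 || impacted >= 500) return "High";
  if (pct >= 1 || impacted >= 50) return "Medium";
  return "Low";
}

function shouldNotify(
  impacted: number,
  activeSupporters: number,
  threshold: { type: "Percent" | "Absolute"; value: number }
): boolean {
  return threshold.type === "Percent"
    ? activeSupporters > 0 && (impacted / activeSupporters) * 100 >= threshold.value
    : impacted >= threshold.value;
}
```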

Acceptance Criteria
Boundary Change Triggers In-App Alert
Given an organization has active campaigns and supporters with stored district assignments And new district boundaries are imported at time T When any supporter’s assigned district changes and the percent of affected active supporters in any campaign is >= the organization’s alert threshold Then an in-app alert is created within 5 minutes of import completion And the alert appears in the Redistricting Watch panel for that organization with: total impacted supporter count, affected campaign names (max 10 plus “+N more”), percent impacted per campaign, urgency label, import timestamp, and a “View Impact” link And the urgency label is assigned as: High if any campaign has >=10% or >=500 supporters impacted; Medium if >=1% or >=50; Low otherwise And duplicate alerts for the same import are not created
Role-Based Notification Delivery
Given an organization with roles (Org Admin, Campaign Owner, Member) When an alert is generated for affected campaigns A and B Then only Org Admins and owners of campaigns A or B receive the alert in-app and via configured digests And recipients without access to a campaign do not see its metrics or supporter details in the alert or drill-down And removing a user’s role revokes access to the alert and drill-down within 1 minute
Configurable Email and Slack Digests
Given an organization has enabled email and/or Slack digests with frequency F ∈ {Immediate, Hourly, Daily} When alerts are generated Then a digest is sent per frequency setting: immediately within 5 minutes, hourly at the next hour, or daily at 08:00 org local time And each digest includes: total impacted supporters, affected campaigns with percent impacted, urgency label per alert, links to View Impact and Simulate, and a unique digest ID And if Slack delivery fails, a fallback email is sent within 10 minutes and the failure is logged
Drill-Down View and CSV Export
Given a user with access clicks “View Impact” on an alert When the drill-down loads Then it lists all impacted supporters with columns: Supporter ID, Name, Email, Old District, New District, Affected Campaign(s), Change Detected At, Processed Status And supports filters by campaign, district change type, and date range And clicking “Export CSV” generates a CSV with the visible filters applied and the same columns, delivered for download within 60 seconds, with row count matching the on-screen total
Simulate and Approve Auto-Target Realignment
Given an alert with impacted campaigns When the user clicks “Simulate Realignment” Then a simulation report shows per campaign: current target legislators, proposed target legislators, number of supporters moved, and net change in outreach targets And clicking “Approve” applies the changes atomically per campaign, updates campaign targets, revalidates supporter–legislator mappings, and records an audit log entry with user, timestamp, counts changed, and alert ID And the system prevents double-apply by rejecting duplicate approvals for the same alert with an idempotency message
Threshold and Quiet Hours Settings
Given an organization config with alert threshold type ∈ {Percent, Absolute} and value V, quiet hours window [Qstart, Qend], and organization time zone TZ When an import occurs that meets or exceeds the threshold Then alerts respect quiet hours: in-app alerts are created immediately but are marked “Queued” and external digests are held until Qend in TZ And when outside quiet hours, digests are sent per frequency And changing the threshold takes effect for subsequent imports within 5 minutes and is logged
Localization per Tenant
Given an organization language preference L ∈ {en, es, fr, ...} When an alert and digests are generated for that organization Then all user-facing text in the in-app alert, drill-down headers, and email/Slack digests are localized to L, with dates/times formatted per locale And if a translation key is missing for L, the message falls back to English and a missing-key event is logged
Audit Log & Rollback Controls
"As a program lead, I want a clear audit trail and the ability to roll back erroneous map updates so that we can remedy mistakes and satisfy funder compliance."
Description

Maintain an immutable audit log capturing map source, checksum, approver, applied time, and per-record changes with before/after district assignments. Provide one-click rollback to a prior district version per tenant or per campaign with automated revalidation and target reapplication, conflict detection, and safety checks. Offer exportable logs and retention policies to meet funder and regulatory compliance requirements.
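
The tamper-evidence requirement is commonly met with an append-only hash chain, where each record digests its own content plus the previous record's hash; a sketch under that assumption:

```typescript
// Sketch of an append-only, hash-chained audit log with a daily verify pass.
import { createHash } from "node:crypto";

interface AuditRecord {
  eventId: string;
  payload: unknown;        // map source, checksum, approver, per-record changes, ...
  prevHash: string;        // hash_chain value of the preceding record
  hash: string;            // digest over payload + prevHash
}

function appendAuditRecord(chain: AuditRecord[], eventId: string, payload: unknown): AuditRecord {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const hash = createHash("sha256")
    .update(prevHash + JSON.stringify({ eventId, payload }))
    .digest("hex");
  const record = { eventId, payload, prevHash, hash };
  chain.push(record);                      // append-only; never update or delete
  return record;
}

function verifyChain(chain: AuditRecord[]): boolean {
  return chain.every((rec, i) => {
    const prevHash = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const expected = createHash("sha256")
      .update(prevHash + JSON.stringify({ eventId: rec.eventId, payload: rec.payload }))
      .digest("hex");
    return rec.prevHash === prevHash && rec.hash === expected;
  });
}
```

Any edit to a stored record changes its digest, so the daily integrity check fails at the first broken link and can raise the critical alert described below.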

Acceptance Criteria
Audit event captured for district map application
Given an Approver applies a new district map version at tenant or campaign scope When the apply action is confirmed Then an immutable audit event is written with: event_id, tenant_id, scope, map_version_id, map_source, map_checksum (SHA-256), approver_user_id, applied_at (UTC), reason, total_records_evaluated, total_records_changed And per-record change entries are created for each evaluated supporter with: supporter_id, prior_district, new_district, prior_target_ids, new_target_ids, address_hash, change_type And the audit event and its per-record entries are queryable within 5 seconds of apply completion And duplicate map_version_id + scope + tenant_id combinations are rejected with a validation error before any data is written
Immutability enforcement and tamper-evidence
Given any user or service attempts to update or delete an audit event or its per-record entries via UI or API When an update or delete request is sent Then the system returns 403 for updates and 405 for deletes and no records are modified And audit records are stored append-only and include a hash_chain value derived from the previous record and current digest And a daily integrity check validates the hash chain and emits a critical alert if any mismatch is detected
One-click rollback per tenant with automated revalidation
Given a tenant admin selects a prior district map version and clicks Rollback When the admin confirms the safety prompt Then the system creates a new current map version referencing the selected prior version and records a rollback audit event with initiator_user_id and target_version_id And all supporter records in the tenant are revalidated against the selected version and district assignments and target_ids are recomputed And active campaigns automatically receive updated target assignments based on the selected version And a preview step shows counts of records to be changed, conflicts detected, and estimated duration before confirmation And the rollback completes within 30 minutes for up to 1,000,000 supporter records or provides resumable progress with no data loss on retry And actions created after rollback are tagged with the new current district_version_id
Per-campaign rollback with targeted reapplication
Given a campaign admin initiates rollback for a specific campaign to version X When the rollback is confirmed Then only that campaign’s targeting state and action district_version_tag are switched to version X; tenant default remains unchanged And supporter master district assignments remain unchanged; campaign-level target selections are recomputed using version X And historical actions retain their original district_version_tag; new actions for that campaign use version X And the system updates campaign metadata with campaign_district_version_id and records an audit event
Conflict detection and safety checks during rollback
Given a rollback is initiated at tenant or campaign scope When conflicts exist (e.g., supporter address changed after selected version, compliance lock active, pending bulk import, provider mismatch) Then the system produces a conflict report with counts and sample IDs, requires explicit typed confirmation to proceed, and isolates or skips conflicted records per policy And rollback is blocked if conflicts exceed an admin-configured threshold or if compliance locks are active, returning a descriptive error And a dry-run mode generates the same report without persisting changes And all conflicts and resolutions are recorded in the rollback audit event
Exportable audit logs and retention policies
Given an auditor requests an export of audit logs with filters (date range, tenant_id, campaign_id, map_version_id, event_type) When the export is requested via UI or API Then the system produces downloadable CSV and JSONL within 2 minutes for up to 1,000,000 events with pagination for larger datasets And exports include event metadata and per-record change fields as specified And an export audit event is recorded with requester_user_id, filters, format, row_count, started_at, completed_at And retention policies are configurable per tenant (default 7 years); purge jobs remove expired per-record entries while honoring legal holds And records under legal hold are excluded from purge and labeled with hold_id in metadata

Privacy Proof Token

Generates a signed, time-stamped attestation that a supporter is district-verified without exposing their full address. Drops into ProofLock-style receipts for auditors and partners, proving legitimacy while minimizing PII spread.

Requirements

Signed District Attestation Token
"As a nonprofit organizer, I want a signed proof that a supporter is district-verified without sharing their full address so that I can satisfy auditors and partners while protecting supporter privacy."
Description

Generate a cryptographically signed, time-stamped token that attests a supporter’s legislative district eligibility without exposing their full address. The token uses an asymmetric signature scheme (e.g., JWS/PASETO) and contains a minimal claim set: district identifiers (state, chamber, district), verification method and source, issuance time, expiration, nonce, issuing org and campaign IDs, and a privacy-preserving address fingerprint derived via salted hashing. It is produced immediately after RallyKit’s address-to-district verification flow and attached to the supporter action record. Tokens are retrievable for receipts and exports and are verifiable offline or via a public endpoint, enabling auditors and partners to confirm legitimacy while minimizing PII spread.
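
As a concrete illustration, the token could be issued as a compact JWS. The sketch below signs with RS256 via Node's crypto module (RallyKit could equally use ES256 or PASETO), mirrors the claim names above, and treats the salted fingerprint and key handling as simplified assumptions; recent Node (15.7+) is assumed for base64url encoding:

```typescript
// Sketch: issue a compact JWS attestation with the minimal claim set above.
import { createHash, createSign, randomBytes } from "node:crypto";

const b64url = (s: string) => Buffer.from(s).toString("base64url");

function addressFingerprint(normalizedAddress: string, salt: string): string {
  // Salted one-way hash; the salt never appears in the token.
  return createHash("sha256").update(salt + normalizedAddress).digest("base64url");
}

function issueAttestation(opts: {
  privateKeyPem: string;
  kid: string;                  // key identifier for rotation-aware verification
  state: string;
  chamber: string;
  district: string;
  verificationMethod: string;   // e.g. "rooftop_geocode" (illustrative)
  verificationSource: string;   // e.g. "usps_dpv" (illustrative)
  orgId: string;
  campaignId: string;
  normalizedAddress: string;
  salt: string;
  ttlSeconds?: number;
}): string {
  const now = Math.floor(Date.now() / 1000);
  const header = { alg: "RS256", typ: "JWT", kid: opts.kid };
  const claims = {
    state: opts.state,
    chamber: opts.chamber,
    district: opts.district,
    verification_method: opts.verificationMethod,
    verification_source: opts.verificationSource,
    iat: now,
    exp: now + (opts.ttlSeconds ?? 60 * 60 * 24 * 30),
    nonce: randomBytes(16).toString("base64url"),     // >=128 bits of entropy
    org_id: opts.orgId,
    campaign_id: opts.campaignId,
    address_fingerprint: addressFingerprint(opts.normalizedAddress, opts.salt),
  };
  const signingInput = `${b64url(JSON.stringify(header))}.${b64url(JSON.stringify(claims))}`;
  const signature = createSign("RSA-SHA256")
    .update(signingInput)
    .sign(opts.privateKeyPem)
    .toString("base64url");
  return `${signingInput}.${signature}`;
}
```

Verifiers only need the published public key matching the kid in the header, which is what makes offline auditor verification possible.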

Acceptance Criteria
Immediate Token Issuance and Attachment
Given a supporter’s address is successfully verified to a legislative district When the verification flow completes Then a signed attestation token is generated within 2 seconds And the token is attached to the supporter’s action record And a retrievable reference to the token is stored with the action When address verification fails or is skipped Then no token is generated or attached
Minimal Claim Set and PII Minimization
Given an attestation token is generated Then the payload contains exactly: state, chamber, district, verification_method, verification_source, iat, exp, nonce, org_id, campaign_id, address_fingerprint And the token contains no full street address, name, email, or phone And address_fingerprint is a salted cryptographic hash with >=128-bit output And the salt is not present in the token And with the same normalized address and salt, the fingerprint value is identical across token generations And with a different salt, the fingerprint value is different
Offline Verification by Auditor
Given an auditor possesses a token and the issuer’s published public keys When the auditor verifies the token offline Then the signature validates using an asymmetric scheme with the appropriate public key And a key identifier (kid or equivalent) is available to select the correct public key And iat is not in the future by more than 5 minutes and exp is in the future And all required claims are present and correctly typed And state/chamber/district values conform to the supported code sets And verification_method and verification_source are from allowed enumerations
Public Verification Endpoint
Given a caller submits a token to the public verification endpoint When the token is valid Then the endpoint responds 200 with a JSON body including valid:true, issuer, state, chamber, district, iat, exp And the response includes no PII and does not reveal the address_fingerprint salt When the token is invalid, expired, or malformed Then the endpoint responds 200 with valid:false and a machine-readable reason in {signature, expired, malformed, issuer_unknown, claim_missing}
Expiration and Time Claims Enforcement
Given a token is generated Then it includes iat and exp as UNIX timestamps (seconds) And exp is greater than iat by at least 60 seconds Given current_time > exp When the token is verified offline or via the public endpoint Then verification fails with reason "expired"
Retrieval for Receipts and Exports
Given a supporter action with a generated token When a receipt for that action is created Then the token string and a verification URL are embedded in the receipt data When an authorized user requests an actions export Then the export includes a dedicated field containing the token for each eligible action And an authorized API endpoint returns the token by action_id for org users
Nonce Uniqueness and Collision Resistance
Given tokens are generated for an organization When 1,000,000 tokens are issued Then no nonce value is reused within that organization And each nonce has at least 128 bits of entropy and is base64url-safe And if a token is reissued for the same action, the nonce and iat are new and unique
KMS-Based Key Management and Rotation
"As a security admin, I want automated, auditable key management and rotation so that token integrity remains high and compromise risk stays low."
Description

Store private signing keys in a managed KMS/HSM with role-based access, audit trails, and envelope encryption. Embed key IDs in tokens to support seamless rotation. Implement automated rotation schedules, emergency rollover procedures, and dual-operator approval for key changes. Ensure services consuming tokens fetch current public keys via a JWKS or equivalent discovery endpoint. Integrates with RallyKit’s deployment pipeline to keep key material out of code and config, and logs all key operations for security audits.
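
One way to keep private key material out of application code while embedding a kid for rotation is to sign the JWS payload through the KMS API itself. The sketch below assumes AWS KMS via boto3 with an RS256 signing key; the key identifier and helper name are illustrative.

```python
import base64
import json

import boto3  # assumption: AWS KMS/HSM as the managed key backend

kms = boto3.client("kms")


def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def kms_sign_jws(payload: dict, key_id: str, kid: str) -> str:
    # Private key material never leaves the HSM; the signer role only needs kms:Sign.
    header = {"alg": "RS256", "typ": "JWT", "kid": kid}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    signature = kms.sign(
        KeyId=key_id,                                    # e.g. an alias promoted during rotation
        Message=signing_input.encode(),
        MessageType="RAW",
        SigningAlgorithm="RSASSA_PKCS1_V1_5_SHA_256",    # matches the RS256 header above
    )["Signature"]
    return f"{signing_input}.{b64url(signature)}"
```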

Acceptance Criteria
KMS/HSM Custody and RBAC Enforcement
Given a production environment, When the signing key is created, Then its KeyUsage is SIGN_VERIFY and its Origin is a managed KMS/HSM with key material non-exportable. Given IAM RBAC is configured, When any principal outside the signer service role calls kms:Sign on the signing key, Then the request is denied with AccessDenied and logged with principal identity and source IP. Given export of private key material is forbidden, When kms:ExportKeyMaterial or equivalent is attempted, Then the operation is blocked by policy and the attempt is logged. Given envelope encryption is required for any persisted sensitive metadata, When the service stores key-related metadata, Then it is encrypted using KMS data keys and decryptable only by the service role.
JWKS Discovery and kid-Based Verification
Given a token is issued, When the Privacy Proof Token is signed, Then the JWS header contains kid matching the active KMS key/alias and alg matching the key spec. Given the JWKS endpoint is requested over HTTPS, When a client fetches /.well-known/jwks.json, Then it returns valid RFC 7517/7518 JSON with all active (current and grace-period) public keys, correct kid values, and Cache-Control max-age ≤ 300s with ETag. Given a verifier encounters an unknown kid, When verification first fails due to cache miss, Then the verifier refreshes JWKS and verification succeeds without manual intervention. Given a rotation has occurred, When tokens signed before and after rotation are verified, Then both validate successfully against the corresponding keys present in JWKS.
Automated Rotation Schedule Without Downtime
Given a 30-day rotation policy, When the last rotation age reaches 30 days, Then an automated workflow creates a new key version, updates aliases, and promotes the new kid without manual steps. Given rotation promotion occurs, When new tokens are issued, Then issuance switches to the new key within 60 seconds and token issuance error rate during the window remains ≤ 0.1%. Given rotation completes, When verifiers process tokens during the rotation window, Then verification success rate is ≥ 99.9%. Given rotation completed, When audit logs are reviewed, Then they show key creation, alias updates, and policy changes with correlated change/request IDs.
Emergency Rollover and Key Compromise Response
Given a suspected key compromise, When an authorized operator triggers emergency rollover, Then the compromised key is disabled and a new key is created and promoted within 5 minutes. Given JWKS distribution, When rollover completes, Then JWKS publishes the new key and removes or marks the old key as disabled within 60 seconds, and verifiers reject tokens signed by the disabled key thereafter. Given issuance protection, When the old key is disabled, Then all new token issuance attempts using the old key fail and all services issue tokens only with the new kid. Given incident auditability, When the incident concludes, Then logs contain the disable action, new key creation, JWKS publication events, and on-call acknowledgements linked to the incident ID.
Dual-Operator Approval for Key Changes
Given governance requirements, When any key change (create, rotate schedule change, disable, delete, alias change) is proposed, Then execution requires approvals from two distinct operators with MFA prior to applying changes. Given enforcement, When a single operator attempts to enact a protected key change without a second approver, Then the pipeline blocks the change and no KMS state is altered. Given completion, When an approved change is executed, Then the audit record includes both approver identities, timestamps, and the exact diff of key policy or alias changes.
CI/CD Guardrails and No Key Material in Code or Config
Given repository protections, When code is scanned pre-commit and in CI, Then any PEM/PKCS private key material or secret patterns trigger a hard failure and block merge. Given infrastructure as code, When environments are provisioned, Then resources reference KMS key ARNs/aliases and no private key files are written to disks or images. Given runtime configuration, When services start, Then they obtain key identifiers from configuration and perform signing via KMS APIs without embedding private key material in environment variables or files. Given policy-as-code, When a pull request introduces exceptions to key handling policies, Then the pipeline fails unless an approved exception is attached with dual-operator approval.
Comprehensive Audit Logging and Retention
Given CloudTrail and logging are enabled, When kms:Sign, kms:CreateKey, kms:DisableKey, kms:ScheduleKeyDeletion, or key policy changes occur, Then logs include requester, key ARN, result, reason, and correlation/change IDs. Given log shipping to SIEM, When logs are emitted, Then they arrive in the centralized SIEM within 2 minutes with at-least-once delivery. Given compliance retention, When auditors query historical logs, Then ≥ 400 days of immutable records are available for all key operations and JWKS publication events. Given anomaly detection, When signing volume exceeds a defined threshold or deviates from baseline, Then an alert is generated and escalated to on-call within 5 minutes.
Verification API and SDKs
"As an external auditor, I want an easy way to verify tokens without accessing supporter data so that I can confirm campaign legitimacy efficiently and securely."
Description

Provide a public verification endpoint that validates token signatures, checks expiration and revocation, and returns a pass/fail result with reason codes without returning or requiring PII. Offer lightweight SDKs (e.g., Node and Python) and documentation for partners and auditors, plus instructions for offline verification using embedded public keys/JWKS. Apply rate limiting and service monitoring to ensure reliability. Integrates with RallyKit’s partner export and ProofLock receipt links to enable one-click verification.
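
As an illustration of the SDK contract, a Python verifier could fetch public keys from the published JWKS and map failures onto the reason codes used by the endpoint criteria below. This sketch assumes PyJWT and omits the revocation and rate-limit checks; the exception-to-reason-code mapping is an assumption.

```python
import jwt  # assumption: PyJWT with the cryptography extra installed
from jwt import PyJWKClient

JWKS_URL = "https://verify.rallykit.org/.well-known/jwks.json"  # illustrative assembly of URLs named below


def verify(token: str) -> dict:
    # Returns the pass/fail shape used by the endpoint criteria below; no PII is read or sent.
    try:
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        jwt.decode(token, signing_key.key, algorithms=["RS256", "EdDSA"])
        return {"result": "pass", "reason_code": "OK"}
    except jwt.ExpiredSignatureError:
        return {"result": "fail", "reason_code": "EXPIRED"}
    except jwt.InvalidSignatureError:
        return {"result": "fail", "reason_code": "BAD_SIGNATURE"}
    except jwt.DecodeError:
        return {"result": "fail", "reason_code": "MALFORMED"}
    except Exception:
        return {"result": "fail", "reason_code": "UNTRUSTED_KEY"}  # e.g. kid not present in the JWKS
```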

Acceptance Criteria
Public Verification Endpoint: Signature, Expiry, No-PII Response
Given a valid Privacy Proof Token signed by RallyKit and not expired or revoked When a client sends POST /verify with body { token: "<jwt>" } and no other fields Then respond 200 with { result: "pass", reason_code: "OK" } and the response contains no PII fields And the signature is validated against the active public key Given any request including address, name, or email fields When POST /verify is called Then respond 400 with { result: "fail", reason_code: "PII_NOT_ALLOWED" } Given a token with an invalid signature When POST /verify is called Then respond 200 with { result: "fail", reason_code: "BAD_SIGNATURE" } Given a token with an exp in the past When POST /verify is called Then respond 200 with { result: "fail", reason_code: "EXPIRED" } Given a valid token When GET /verify?token=<jwt> is called Then response semantics match POST and the response contains no PII fields
Token Revocation Check and Reason Codes
Given a token ID present in the revocation list When /verify is called Then respond 200 with { result: "fail", reason_code: "REVOKED" } Given a token revoked within the last 60 seconds When /verify is called Then the revocation is effective within 60 seconds end-to-end Given a revoked token is retried When /verify is called Then an audit log entry is recorded with token_id, reason_code, and timestamp, and no PII is logged
Reliability Controls: Rate Limiting and Monitoring
Given an unauthenticated client exceeds 60 requests per minute per IP to /verify When additional requests arrive within the same minute Then respond 429 with a Retry-After header and { result: "fail", reason_code: "RATE_LIMITED" } Given a partner API key provided via Authorization header When requests remain within 600 requests per minute per key Then responses are not rate limited Given the service error rate exceeds 1% for 5 consecutive minutes When this condition occurs Then an alert is sent to on-call and the incident is recorded with timestamp and metric snapshot Given normal production traffic over 30 days When service performance is measured Then p95 latency for /verify is <= 300 ms and uptime is >= 99.9% as reported by monitoring
SDKs: Node.js and Python Verification Clients and Docs
Given the Node SDK package "rallykit-verifier" on npm and Python package "rallykit_verifier" on PyPI When developers call verify(token) with a valid token Then the function returns { result: "pass", reason_code: "OK" } within 300 ms Given an invalid or expired token When verify(token) is called Then the function returns { result: "fail", reason_code: "<appropriate code>" } and the SDK never requests PII Given developers follow the README Quick Start for each SDK When they run the example commands Then verification succeeds end-to-end without additional configuration beyond providing a token Given the SDK CI workflows run When coverage is calculated Then line coverage is >= 85% and tests pass on latest LTS Node and Python versions
Offline Verification via Embedded Public Keys and JWKS Rotation
Given access to the published JWKS at /.well-known/jwks.json When downloading keys Then the JWKS contains at least one active key with kid and alg matching issued tokens Given a token with header kid matching the current JWKS When a verifier uses the provided offline script after caching the JWKS Then signature and exp are validated and the result is "pass" for a valid token without any network calls Given key rotation occurs When a new key is published and used to sign tokens Then the previous key remains available in JWKS for a minimum overlap of 24 hours to allow offline validation of previously issued tokens Given a token signed with a retired key after the overlap window When verified offline or online Then the result is "fail" with reason_code "UNTRUSTED_KEY"
ProofLock Receipt One-Click Verification
Given a ProofLock-style receipt containing a verification link of the form https://verify.rallykit.org/verify?token=<jwt> When an auditor clicks the link Then a verification page loads over HTTPS, invokes /verify, and displays Pass/Fail and reason_code without exposing PII Given the token is invalid, expired, or revoked When the link is used Then the page displays Fail with the correct reason_code while the HTTP response for the page load remains 200 Given the verification page is inspected for network calls When requests and storage are reviewed Then the token is not persisted, logged, or sent to third-party domains, and a strict Referrer-Policy prevents token leakage
Partner Export Integration with One-Click Verification
Given a partner export is generated in RallyKit for an advocacy action When a partner downloads the export Then each row includes a verification_url containing a signed token and no additional PII beyond the export's defined schema Given a partner opens the verification_url within the token validity window When accessed Then it resolves to a "pass" or "fail" result consistent with /verify, including reason_code Given a token present in a prior export is revoked When a new export is generated Then the corresponding verification_url yields { result: "fail", reason_code: "REVOKED" }
Consent Capture and Audit Logging
"As a supporter, I want to understand and consent to how my verification is proven so that my privacy preferences are respected."
Description

Present clear, localized consent language on action pages before generating a Privacy Proof Token, with an explicit opt-in and a link to data practices. Record consent events with timestamp, action ID, user agent, and a privacy-preserving address fingerprint. Store consent logs separately from campaign data with strict retention controls and exportability for audits. Integrates with RallyKit’s action builder so organizers can enable/require consent per campaign and display consent status in receipts and reports.
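
For concreteness, a consent log record with the minimal fields described above might be assembled as follows; the field names mirror the logging criteria below, while the function signature and status value are hypothetical.

```python
from datetime import datetime, timezone
from typing import Optional


def build_consent_record(action_id: str, campaign_id: str, org_id: str, user_agent: str,
                         address_fingerprint: str, consent_locale: str,
                         consent_version_id: str, token_id: Optional[str]) -> dict:
    # Stored separately from campaign data; never contains raw address, name, email, or phone.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action_id": action_id,
        "campaign_id": campaign_id,
        "org_id": org_id,
        "user_agent": user_agent,
        "address_fingerprint": address_fingerprint,   # one-way salted hash, 64-char hex
        "consent_locale": consent_locale,
        "consent_version_id": consent_version_id,
        "consent_status": "consented",
        "token_id": token_id,
    }
```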

Acceptance Criteria
Consent Gate Blocks Token Generation
Given an action page with consent set to Required in Action Builder, When a supporter attempts to submit without checking the consent box, Then submission is blocked, an inline error is shown adjacent to the consent control, and no Privacy Proof Token is generated. Given an action page with consent set to Required, When a supporter checks the consent box and submits, Then a Privacy Proof Token is generated and the action is recorded successfully. Given an action page with consent set to Optional, When a supporter submits without checking consent, Then the action and token generation proceed and consent_status is recorded as "not_collected".
Explicit Opt-In UI and Accessibility
Given the action page loads, Then the consent checkbox is unchecked by default and cannot be auto-checked by client scripts. Given a keyboard-only user, When tabbing through the form, Then the consent control is focusable, the label is programmatically associated, and the error message is announced by screen readers upon validation failure. Given form validation triggers, Then the error message references the consent control via aria-describedby and prevents submission until consent is checked (when required).
Localized Consent Copy and Data Practices Link
Given the page locale is set (e.g., en, es, fr), When the action page renders, Then the consent text and data practices link are displayed in that locale with a recorded consent_locale field. Given a locale without a translation, When the page renders, Then English is used as a fallback and the consent_locale is recorded as the fallback value. Given the consent text is updated by admins, When supporters consent, Then the stored consent record includes a consent_version_id matching the displayed copy version.
Consent Event Logging and PII Minimization
Given a supporter provides required consent, When the submission succeeds, Then a consent log record is created containing: timestamp (ISO8601 UTC), action_id, campaign_id, user_agent, address_fingerprint, consent_locale, consent_version_id, org_id, and token_id (if generated). Given a consent log record is stored, Then no raw address fields (e.g., street, city, state, postal_code) are present anywhere in the consent log storage. Given an address is provided, When the fingerprint is generated, Then address_fingerprint is a one-way hashed value rendered as a 64-character lowercase hex string and is consistent for the same normalized input within the same environment.
Segregated Storage and Retention Controls
Given the system is configured, Then consent logs are stored in a logically separate datastore/collection/table from campaign analytics and action payloads. Given role-based access control, When a non-admin campaign editor views data, Then they cannot access raw consent logs but can see consent_status aggregates; only Org Admins and Compliance Auditors can view/export raw logs. Given an org-level retention period (in days) is set, When the period elapses for a record, Then that consent log is purged within 24 hours and the deletion is irreversible and audited.
Audit Export of Consent Logs
Given an Org Admin or Compliance Auditor requests an export with filters (date range, campaign, action_id), When the export is generated for ≤100k records, Then a CSV is delivered within 2 minutes containing headers and fields: timestamp, action_id, campaign_id, consent_status, consent_locale, consent_version_id, user_agent, address_fingerprint, token_id. Given an export is generated, Then the file contains no raw address data and passes a schema validation for required columns. Given an export is initiated, Then an export activity log is recorded with requester id, timestamp, filters, and row_count, and the downloadable link expires automatically after a defined short interval.
Action Builder Integration and Consent Status Displays
Given a campaign editor configures a campaign, When they toggle "Require Consent Before Token" in Action Builder, Then the setting is saved, reflected in preview, and enforced on the live action page. Given an action completes, When a receipt (ProofLock-style) is generated, Then it displays consent_status (consented/not_collected/declined) and consent_at timestamp; if consent is required and missing, no token is present and the receipt indicates consent-blocked. Given organizers view reports, When the actions table loads, Then columns for consent_status and consent_at are present and filterable, and the values match the underlying consent logs.
Receipt and Export Integration
"As a campaign director, I want tokens embedded in receipts and exports so that partners and auditors can verify actions without extra coordination."
Description

Embed the Privacy Proof Token into ProofLock-style receipts and partner exports as a compact string, QR code, and verification link. Display a 'District Verified' badge with issue and expiry times, and surface verification result statuses in the campaign dashboard. Ensure tokens are included in CSV/JSON exports with associated action IDs and campaign context. Integrates into RallyKit’s receipt templating, ensuring minimal layout impact and mobile-friendly rendering.
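
A sketch of how the token block could be handed to receipt templating: build a context object exposing the template variables named in the criteria below, with a fallback when generation failed or the feature flag is off. The dictionary shape and helper name are illustrative.

```python
from typing import Optional


def proof_token_context(token_record: Optional[dict], feature_enabled: bool = True) -> dict:
    # token_record is assumed to carry the compact token plus ISO-8601 issue/expiry times.
    if not feature_enabled:
        return {}
    if token_record is None:
        return {"proof_token": {"status": "unavailable"}}
    return {
        "proof_token": {
            "string": token_record["token"],
            "verify_url": "https://verify.rallykit.org/verify?token=" + token_record["token"],
            "qr_svg": token_record.get("qr_svg"),       # pre-rendered QR encoding the same URL
            "issue_time": token_record["issue_time"],
            "expiry_time": token_record["expiry_time"],
            "status": token_record["status"],           # valid | expired | invalid
        }
    }
```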

Acceptance Criteria
Receipt Embeds Token, Badge, QR, and Verification Link
- Given a district-verified supporter completes an action When the receipt is generated Then the receipt includes the Privacy Proof Token as a compact string not exceeding 256 characters
- Given the receipt is generated When rendered Then a QR code encoding the verification link is displayed and scannable with a size of at least 128x128 px on mobile and 192x192 px on desktop
- Given the receipt is generated When rendered Then a 'District Verified' badge is shown with issue_time and expiry_time in ISO 8601 (UTC) adjacent to the token
- Given the receipt includes the verification link When clicked Then it opens the verification page in a new tab
- Given a standard ProofLock-style receipt template When the token block is rendered Then the overall receipt height increases by no more than 15% compared to the same receipt without the token block
- Given the receipt is viewed on a screen width <= 480px When rendered Then there is no horizontal scrolling and text does not overflow the viewport
CSV and JSON Exports Include Token and Campaign Context
- Given a partner CSV export is generated When downloaded Then each action row contains token_string, token_issue_time, token_expiry_time, token_status, action_id, campaign_id, and campaign_slug columns
- Given a partner JSON export is generated When downloaded Then each action object contains token.string, token.issue_time, token.expiry_time, token.status, action.id, campaign.id, and campaign.slug fields
- Given exports are generated When reviewed Then no full street address or apartment/suite fields are present in the token or adjacent fields
- Given tokens exist for actions When exporting Then the token_string values are non-null for >= 99% of actions created after feature enablement
- Given a token is expired at export time When exported Then token_status equals "expired"
Dashboard Displays Verification Statuses and Filters
- Given an action with a valid token exists When viewed in the campaign dashboard Then the action row shows a 'District Verified' status badge labeled "Valid"
- Given an action with an expired or invalid token exists When viewed in the dashboard Then the action row shows "Expired" or "Invalid" respectively
- Given the dashboard filters When the user selects a verification status filter Then only actions with the selected status are displayed
- Given a new action is completed When the dashboard is open Then the verification status appears or updates within 5 seconds
- Given a token status badge is clicked When opened Then a side panel displays token_issue_time, token_expiry_time, and a link to the verification page
Verification Link and QR Code Validate Token Without PII
- Given a valid token verification link is requested When the page loads Then the status displayed is "Valid" and includes issue_time, expiry_time, district identifier, and no full address
- Given an expired token link is requested When the page loads Then the status displayed is "Expired" and includes expiry_time and re-verify instructions
- Given a token with a bad signature is requested When the page loads Then the status displayed is "Invalid"
- Given the verification endpoint receives a token When processed Then the response time is <= 800 ms p95 over the last 1000 requests
- Given the QR code is scanned with a common smartphone camera When opened Then it resolves to the verification link without intermediate errors
Receipt Templating Variables and Fallback Behavior
- Given RallyKit receipt templating is used When rendering a receipt Then template variables {{proof_token.string}}, {{proof_token.verify_url}}, {{proof_token.qr_svg}}, {{proof_token.issue_time}}, {{proof_token.expiry_time}}, and {{proof_token.status}} are available
- Given token generation fails for an action When the receipt is generated Then the receipt shows a 'Verification Unavailable' badge and omits the token string and QR code
- Given token generation fails for an action When exporting Then token_status equals "unavailable" and token fields are null in CSV/JSON
- Given feature flag "privacy_proof_token" is disabled When rendering receipts or exports Then no token fields or badges are present
Mobile Rendering, Layout Impact, and Accessibility
- Given the receipt is viewed on a device with width <= 480px When rendered Then the token string wraps without breaking words and there is no horizontal scroll
- Given a screen reader user opens the receipt When navigating Then the 'District Verified' badge announces role "status" and text "District Verified, issued {date}, expires {date}"
- Given high contrast mode is enabled When viewing the receipt Then the badge and token meet WCAG 2.1 AA contrast ratio >= 4.5:1
- Given keyboard-only navigation When tabbing through the receipt Then the verification link and QR code are reachable and visible focus states are present
- Given the QR image fails to load When the receipt is displayed Then a text fallback "Scan not available—use verification link" is shown
Expiry and Revocation Controls
"As a compliance officer, I want configurable expiration and revocation so that invalid proofs cannot be used downstream."
Description

Enforce short-lived tokens with configurable TTL per organization or campaign and support immediate revocation for compromised keys, mis-verification, or policy changes. Maintain an online revocation list and embed kid/nonce to enable targeted invalidation. Ensure the Verification API checks revocation status and emits reason codes. Integrates with key rotation workflows and admin UI to trigger revocations and view status.
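
Two behaviors above lend themselves to a short sketch: TTL resolution (the shorter of org and campaign TTLs wins) and the revocation check against the online list. The constant, function names, and default TTL are illustrative assumptions.

```python
from typing import Optional

DEFAULT_TTL_MINUTES = 60  # illustrative default when neither org nor campaign sets a TTL


def effective_ttl_minutes(org_ttl: Optional[int], campaign_ttl: Optional[int]) -> int:
    # When both org and campaign TTLs are set, the shorter effective TTL is applied.
    configured = [ttl for ttl in (org_ttl, campaign_ttl) if ttl is not None]
    return min(configured) if configured else DEFAULT_TTL_MINUTES


def revocation_reason(header: dict, claims: dict, revocation_entries: list) -> Optional[str]:
    # Entries follow the online revocation list shape: {"type": "kid"|"nonce", "value": ..., "reason": ...}.
    for entry in revocation_entries:
        if entry["type"] == "kid" and entry["value"] == header.get("kid"):
            return "revoked_compromised_key"
        if entry["type"] == "nonce" and entry["value"] == claims.get("nonce"):
            return entry["reason"]
    return None  # not revoked; fall through to expiry and signature checks
```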

Acceptance Criteria
Organization-Level TTL Enforcement
Given Organization "OrgA" has token_TTL_minutes=15 When a Privacy Proof Token is issued at 2025-08-24T12:00:00Z Then the token includes iat=2025-08-24T12:00:00Z and exp=2025-08-24T12:15:00Z And the Verification API validates the token as valid at t=12:14:59Z (HTTP 200, reason_code=valid) And the Verification API rejects the token at t=12:15:01Z (HTTP 401, reason_code=token_expired) And tokens issued for OrgA do not exceed the configured TTL
Campaign-Specific TTL Override
Given Organization "OrgA" has token_TTL_minutes=15 and Campaign "HB123" has token_TTL_minutes=5 When a Privacy Proof Token is issued for Campaign "HB123" at time t0 Then the token's exp equals t0 + 5 minutes And tokens for other campaigns under OrgA retain exp = t0 + 15 minutes And when both org and campaign TTLs are set, the shorter effective TTL is applied
Immediate Revocation via Admin UI
Given a valid token with nonce=N and kid=K When an admin user revokes nonce=N in the RallyKit Admin UI with reason=mis_verification Then the token appears as revoked in the system within 60 seconds with reason=mis_verification and revoked_at timestamp And the Verification API rejects validations for nonce=N (HTTP 401, reason_code=revoked_mis_verification) And the Admin UI displays status=Revoked, revoked_at, actor, and reason for nonce=N And when an admin revokes kid=K with reason=compromised_key, all tokens with kid=K are rejected (HTTP 401, reason_code=revoked_compromised_key)
Online Revocation List Publication
Given an unauthenticated GET to the Online Revocation List endpoint When the service is healthy Then the response is HTTP 200 with Content-Type=application/json and fields: updated_at (ISO8601), entries[] And each entry contains type in ["kid","nonce"], value (string), reason in ["revoked_compromised_key","revoked_mis_verification","revoked_policy_change"], and revoked_at (ISO8601) And the response includes ETag and Cache-Control: max-age=30 And the payload contains no PII (no name, email, phone, or address fields) And when a new revocation is created, updated_at changes and the new entry is visible within 60 seconds And when the client supplies If-None-Match with the current ETag, the server returns 304 Not Modified
Verification API Revocation Check with Reason Codes
Given a token is presented for verification When the token is expired Then the API responds HTTP 401 with body reason_code=token_expired And when the token's nonce is on the revocation list, the API responds HTTP 401 with reason_code in [revoked_mis_verification, revoked_policy_change] And when the token's kid is revoked, the API responds HTTP 401 with reason_code=revoked_compromised_key And when the token references an unknown kid, the API responds HTTP 401 with reason_code=unknown_kid And when the token is valid and not revoked, the API responds HTTP 200 with reason_code=valid
kid/nonce Embedding and Targeted Invalidation
Given a new Privacy Proof Token is generated Then the token header contains kid=<active_key_id> And the token claims contain nonce=<base64url-encoded 128-bit random value> unique per token And no two nonces collide across at least 1,000,000 issued tokens in testing And given a nonce-specific revocation exists, only the token with that nonce is rejected while other tokens with the same kid remain valid And given a kid-wide revocation exists, all tokens signed with that kid are rejected even if unexpired
Key Rotation Workflow Integration
Given Organization "OrgA" rotates signing keys via the Admin UI When a new keypair is created and promoted to active with kid=K2 Then newly issued tokens use kid=K2 And the Verification API validates tokens signed with both old kid=K1 and new kid=K2 during the overlap period And if the admin chooses "Revoke old key K1", the revocation is published within 60 seconds and tokens with kid=K1 are rejected (HTTP 401, reason_code=revoked_compromised_key) And the Admin UI displays the active key, previous keys, and any key-level revocation status
Privacy Safeguards and Data Minimization
"As a data privacy lead, I want strong minimization and redaction built into the token workflow so that RallyKit reduces PII exposure by default."
Description

Design the token claim schema to exclude full addresses, names, emails, and phone numbers. Derive any address evidence via salted and peppered hashing, with the pepper held in KMS, and process addresses in memory with no persistent storage beyond what is strictly required for campaign action records. Redact PII from logs, restrict access via RBAC and purpose-based controls, and enforce retention limits with automated deletion. Provide a DPIA-style checklist and unit tests to prevent accidental PII leakage across services.
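
A sketch of the salted-and-peppered derivation, assuming AWS KMS HMAC keys (GenerateMac via boto3) as one way to keep the pepper out of application memory; the key ID parameter and helper name are illustrative.

```python
import hashlib
import secrets

import boto3  # assumption: an AWS KMS HMAC key holds the pepper

kms = boto3.client("kms")


def derive_address_proof(normalized_address: str, pepper_key_id: str) -> dict:
    # Per-record random salt, then an HMAC computed inside KMS so the pepper never enters app memory.
    salt = secrets.token_bytes(16)
    salted = hashlib.sha256(salt + normalized_address.encode()).digest()
    mac = kms.generate_mac(
        KeyId=pepper_key_id,
        Message=salted,
        MacAlgorithm="HMAC_SHA_256",
    )["Mac"]
    return {"salt": salt.hex(), "proof": mac.hex()}  # 256-bit proof; not reversible to the address
```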

Acceptance Criteria
Token Schema Excludes Direct PII
Given the token schema definition, When validating a sample token against the JSON Schema, Then the token does not contain fields for full address, name, email, or phone. Given an attempt to include any restricted fields (street, city, state, zip/postcode, name, email, phone), When serializing the token, Then serialization fails with a schema validation error and the token is not issued. Given the allowed fields list ["ver","iss","iat","exp","sub","district","verification_level","proof","sig"], When a token is generated, Then only these fields are present and no free-text fields are allowed.
Address Proof Derivation Uses Salted and Peppered Hashing in KMS
Given a supporter address input, When deriving proof, Then a per-record random salt is generated and the pepper is applied via a KMS HMAC operation without exposing pepper material to application memory. Given the same address input, When deriving proof twice, Then outputs differ due to unique salts. Given different addresses, When deriving proof across 1,000,000 samples, Then zero collisions are observed and the proof length is at least 256 bits. Given only the token contents and public keys, When attempting to reconstruct the original address, Then reconstruction is infeasible and unit tests assert no reversible mapping exists. Given KMS is unavailable, When derivation is attempted, Then issuance fails closed and no plaintext address is cached or logged.
Ephemeral Address Processing With No Persistent Storage
Given address intake, When token generation executes, Then the full address exists only in process memory, swap is disabled, and no writes occur to databases, object storage, queues, or caches. Given debug logging enabled, When token generation runs, Then no PII is logged at any level and redaction middleware masks inputs before log sinks. Given a database and storage audit, When scanning all tables/buckets post-run with PII detectors, Then zero rows/objects match full address, email, phone, or name in non-permitted locations. Given process completion or failure, When memory sanitization runs, Then in-memory address buffers are zeroed and discarded within 500 ms.
Logs and Metrics Contain No PII
Given a synthetic PII-rich input, When executing end-to-end through all services, Then application logs, access logs, traces, and metrics contain no raw address, email, phone, or name. Given structured logging, When examining emitted fields, Then only redacted placeholders appear for sensitive values and no high-cardinality labels include PII. Given log sinks, When sampling 1,000 recent entries during a load test, Then 0 entries contain PII per detector rules and manual spot-check. Given an error or exception, When a stack trace is generated, Then request bodies are not included and headers with PII are filtered.
RBAC and Purpose-Based Access Enforcement
Given an actor without the "privacy-token:issue" role, When calling the issuance endpoint, Then a 403 is returned and no side effects occur. Given a service with the "privacy-token:verify" role and purpose="auditing", When verifying a token, Then only verification outcome and minimal metadata are returned and no PII or raw proofs are exposed. Given a human admin with broad rights, When attempting to access underlying address inputs, Then access is denied unless a break-glass policy is invoked and all such events are logged with a ticket reference. Given access logs, When reviewing the past 30 days, Then 100% of successful access events include a non-empty purpose tag and actor identity.
Retention Enforcement and Auto-Deletion
Given retention_ttl set to 30 days (configurable), When a token claim exceeds ttl, Then all associated records and proofs are automatically deleted from primary storage and backups per policy. Given a created token claim, When running the deletion job in dry-run mode, Then a report lists items scheduled for deletion with counts matching query results with 0% variance. Given legal hold enabled on a campaign, When retention job runs, Then items tagged with hold are skipped and counters reflect hold status. Given S3/object storage and database backups, When TTL expires, Then expired artifacts are removed from object versions and backup retention aligns with policy, evidenced by logs and deletion metrics.
DPIA Checklist and PII Leakage Unit Tests
Given the DPIA checklist template, When changes touch schemas, logs, or data flows, Then a completed checklist is attached to the PR and passes review gates. Given CI execution, When unit and contract tests run, Then tests fail if any event, schema, or log includes fields labeled as PII or matches PII regex detectors. Given service-to-service messages, When contract tests validate payloads, Then payloads exclude direct PII and include only minimal tokens and proofs. Given a new dependency is added, When dependency scanning runs, Then it reports no known data exfiltration risks or the build fails.

Astroturf Shield

Behavioral and velocity checks flag suspicious bursts (shared IPs, repeat patterns, disposable emails) and auto-rate-limit or hold for review. Pairs with person-level dedupe to mute manufactured volume and protect campaign credibility.

Requirements

Real-time Behavioral Anomaly Detection
"As a campaign director, I want suspicious actions automatically detected and scored in real time so that fabricated volume is caught before it skews results."
Description

Implements a streaming rules engine that evaluates every advocacy action in real time against campaign-specific baselines and global heuristics to identify manufactured activity. Calculates a suspicion score using velocity (actions per IP/device/email), repeat pattern similarity, cross-campaign clustering, and timing irregularities, then tags each action with reason codes. Integrates with RallyKit’s event pipeline to annotate action records, trigger mitigations, and feed metrics dashboards. Delivers immediate detection without adding friction for legitimate supporters while reducing false positives via campaign-tunable thresholds.
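
For concreteness, a deterministic score composition over triggered heuristics might look like the sketch below; the weights are illustrative placeholders rather than tuned values, and the reason-code names follow the criteria that come after.

```python
HEURISTIC_WEIGHTS = {
    "velocity_ip_burst": 35,
    "pattern_similarity_high": 25,
    "timing_uniform_spacing": 20,
    "cross_campaign_cluster": 20,
}


def score_action(triggered: dict) -> dict:
    # triggered maps reason codes to feature values, e.g. {"velocity_ip_burst": {"actions_per_min": 42}}.
    score = sum(HEURISTIC_WEIGHTS.get(code, 0) for code in triggered)
    return {
        "suspicionScore": min(score, 100),  # capped at 100, deterministic for identical inputs
        "reasonCodes": [
            {"code": code, "features": features, "weight": HEURISTIC_WEIGHTS.get(code, 0)}
            for code, features in triggered.items()
        ],
    }
```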

Acceptance Criteria
Real-time Evaluation and Annotation Latency
Given a new advocacy action event enters the streaming pipeline When the anomaly detection engine evaluates the event Then 95th percentile evaluation latency is <= 50 ms and 99th percentile <= 150 ms over a rolling 10k-event window per campaign And the action record is annotated with suspicionScore (0-100), reasonCodes (array), mitigation (none/rate_limited/held_for_review), and evaluationTimestamp (ISO-8601) And if the engine is temporarily unavailable, the action is marked detection_status="deferred" and retried, with 99.9% of deferred actions evaluated within 60 seconds and <0.1% remaining deferred beyond 5 minutes And actions with suspicionScore below the configured mitigation thresholds proceed without added user friction (no extra prompts or blocks)
Suspicion Score Composition and Reason Codes
Given a set of test actions that individually exhibit velocity spikes, repeat-pattern similarity, timing irregularities, and cross-campaign clustering When the engine processes these actions Then the computed suspicionScore reflects contributions from all triggered heuristics and is capped within 0-100 And reasonCodes include machine-readable entries for each triggered heuristic (e.g., velocity_ip_burst, pattern_similarity_high, timing_uniform_spacing, cross_campaign_cluster) with attached feature values and weights And actions with no triggered heuristics receive suspicionScore <= 10 and an empty reasonCodes array And the same input produces deterministic scores (std dev = 0 over 3 identical replays)
Campaign-Tunable Thresholds and Dry-Run Simulation
Given a campaign admin updates anomaly thresholds via API or settings (e.g., ipBurstPerMin, patternSimilarityThreshold, timingUniformityIndex, crossCampaignClusterSize, rateLimitThreshold, holdThreshold) When the admin saves the configuration Then validation enforces allowed ranges and dependencies and rejects invalid configs with actionable errors And accepted changes take effect within 60 seconds and are audit-logged with actor, diff, timestamp And a dry-run simulation on the last 24 hours of the campaign’s events returns projected flag rates by reason and overall, without impacting live mitigations And a rollback to the previous configuration is possible within 1 click/API call and reverts behavior within 60 seconds
Mitigation Triggers: Rate-Limit and Hold for Review
Given mitigation thresholds are configured for a campaign When an action’s suspicionScore >= rateLimitThreshold but < holdThreshold Then the engine applies rate limiting at the appropriate entity level (IP/device/email) and marks the action mitigation="rate_limited" and excludes it from outbound deliveries and topline counts And when an action’s suspicionScore >= holdThreshold Then the engine marks the action mitigation="held_for_review", routes it to the review queue, excludes it from deliveries and counts, and emits a mitigation_applied event with reason And allowlisted entities bypass mitigations regardless of score, with allowlist hits recorded in reasonCodes And p95 client-facing action submission time increases by no more than 20 ms due to mitigation logic
Cross-Campaign Correlation and Clustering
Given streaming events across multiple campaigns within a 24-hour window When an entity (hashed email, device fingerprint, or IP) appears across >= N distinct campaigns within T minutes (defaults: N=3, T=30) Then the engine attaches reasonCodes entry cross_campaign_cluster with clusterSize and timeWindow and increases suspicionScore by the configured weight And cluster detection operates in-stream with added p95 latency <= 50 ms and p99 <= 150 ms And entities limited to a single campaign within the window do not trigger cross_campaign_cluster
Metrics, Auditability, and Data Export
Given the detection engine is operating on live traffic When a user opens the Astroturf Shield dashboard or queries the reporting API Then they can retrieve per-campaign metrics for the last 1h/24h/7d including: total actions, flagged rate, counts by reason code, mitigation counts, and evaluation latency percentiles And 99.9% of actions have a non-null suspicionScore and evaluationTimestamp within 5 minutes of receipt And all configuration changes and mitigation applications are audit-logged and filterable by campaign, actor, and time range And held_for_review items are exportable via API/CSV with fields: actionId, suspicionScore, reasonCodes, mitigation, timestamps, and entity hashes
IP and Device Fingerprinting with Privacy Safeguards
"As a compliance-conscious organizer, I want reliable origin signals with privacy protection so that I can identify coordination without collecting more personal data than necessary."
Description

Captures network and device signals (IP, ASN, region, user agent, cookie/device hash) to spot coordinated bursts and repeated submissions from the same origin while preserving user privacy. Hashes and salts device identifiers, redacts IPs to configurable precision, and enforces retention limits. Flags VPN/proxy ranges and shared IP anomalies, feeding the anomaly score and dedupe layer. Provides allow/deny lists and consent-aware handling to comply with privacy policies and regulations.
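
A minimal sketch of two of the safeguards above, prefix-level IP redaction and salted device hashing, using only the Python standard library; the chosen signal fields and prefix defaults are illustrative.

```python
import hashlib
import hmac
import ipaddress


def redact_ip(ip: str, v4_prefix: int = 24, v6_prefix: int = 48) -> str:
    # Persist only the network prefix; the full IP stays in transient memory.
    addr = ipaddress.ip_address(ip)
    prefix = v4_prefix if addr.version == 4 else v6_prefix
    return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))


def device_hash(user_agent: str, platform: str, timezone_name: str, salt: bytes) -> str:
    # HMAC-SHA256 over coarse device signals; plaintext signals are not persisted.
    signal = "|".join((user_agent, platform, timezone_name)).encode()
    return hmac.new(salt, signal, hashlib.sha256).hexdigest()
```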

Acceptance Criteria
Configurable IP Redaction and Storage
Given an admin sets IPv4 redaction to /24 and IPv6 redaction to /48 When a supporter submits an action Then only the redacted IP prefixes (/24, /48) are persisted and the full IP is never stored beyond transient memory (<5 seconds) Given the redaction config is changed When new submissions arrive Then stored IP prefixes reflect the new mask and prior records remain unchanged and queryable by their original prefixes Given application logs and data exports are generated When reviewed Then no full IP addresses appear in logs, exports, or analytics datasets
Salted Device Hashing for Dedupe
Given device signals (e.g., user agent, platform, timezone) and an environment-specific secret salt When a device identifier is computed Then it uses HMAC-SHA256 with the salt, is non-reversible, and the plaintext signals are not persisted Given salts are rotated every 90 days When rotation occurs Then new events use the new salt, legacy hashes remain queryable for 180 days for rolling dedupe, and salts are stored in KMS with restricted access (2 approvers) Given a collision test across 100,000 randomly sampled inputs When executed Then the observed hash collision rate is 0
Data Retention Limits and Purge
Given retention is configured to 90 days When fingerprint records exceed 90 days of age Then all fingerprint fields (IP prefixes, device hash, UA) are purged within 24 hours and the purge is auditable with timestamp and count Given an admin executes a manual purge for a campaign or device hash When the purge completes Then the selected records are deleted within 1 hour and no longer appear in reporting or APIs Given a verified data-subject deletion request When processed Then associated fingerprint records are deleted within 30 days and a deletion receipt is available to administrators
VPN/Proxy and ASN Flagging with Burst Detection
Given the threat intel list of VPN/proxy ranges is updated daily When a submission originates from a listed range Then the record is flagged vpn_proxy=true with the source list name and version Given more than 50 submissions from the same /24 occur within 10 minutes and 80% share an identical user agent When evaluated Then shared_ip_anomaly=true is set and the anomaly score is incremented by the configured points Given an ASN allowlist is configured When an IP belongs to an allowlisted residential ASN Then the anomaly score is reduced by the configured points
Consent-Aware Fingerprinting and Regional Compliance
Given a visitor has not consented to non-essential tracking When they perform an action Then only essential fraud-prevention signals are collected, the device hash uses a session-ephemeral salt, and no cookies or localStorage are written Given a visitor withdraws consent When processed Then subsequent actions use consent-minimized collection and prior non-essential identifiers are deleted within 30 days Given a visitor is detected from a GDPR region and consent is not granted When data is stored Then the processing purpose is logged as legitimate_interest_fraud_prevention and the record is excluded from marketing exports
Allow/Deny Lists with Auto Hold and Bypass
Given an admin adds an IP prefix or ASN to the deny list with action=hold When a submission matches the deny rule Then the submission is held for review, the user sees a neutral queue message, and staff receive a notification within 2 minutes Given an admin adds a device hash or IP to the allow list When a submission matches the allow rule Then rate limits and anomaly holds are bypassed while the event remains fully logged and auditable Given a deny rule TTL of 7 days is set When the TTL expires Then the rule automatically deactivates and no longer applies to new submissions
Signal Integration to Anomaly Scoring and Dedupe Layer
Given a submission is flagged vpn_proxy=true and the same device hash submits more than 3 actions within 15 minutes When the anomaly score is computed Then the score increases by the configured weights and if it exceeds threshold T, the submission is auto rate-limited or held per campaign settings Given the same device hash is used with multiple distinct emails within 24 hours When dedupe runs Then duplicate actions are muted from aggregate volume metrics and retained in the audit log with dedupe_reason=repeat_device Given an admin exports the anomaly report for a date range When the export is generated Then each record includes redacted IP prefix, ASN, device hash, flags, anomaly score, and final disposition (allowed/held/blocked)
Disposable Email and Domain Reputation Filter
"As a digital organizer, I want disposable and low-reputation emails automatically flagged so that automated signups don’t inflate supporter counts or trigger junk outbound traffic."
Description

Validates supporter emails using syntax checks, DNS/MX lookups, and a continuously updated list of disposable/temporary domains to reduce low-quality or automated submissions. Assesses sender domain reputation and applies campaign-level allowlists/denylists. Emits structured reason codes, adjusts suspicion scores, and optionally suppresses delivery of messages triggered by flagged addresses while preserving a reviewable record.
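
An illustrative sketch of the validation pipeline, assuming dnspython for MX lookups and a deliberately coarse syntax check; the full flow described below also falls back to A records, consults domain reputation, and applies list overrides, all omitted here.

```python
import re

import dns.exception
import dns.resolver  # assumption: dnspython for MX lookups

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # coarse stand-in for full RFC 5322 validation


def validate_email(email: str, disposable_domains: set) -> list:
    # Returns structured reason codes shaped like the criteria below; deltas are illustrative.
    if not EMAIL_RE.match(email):
        return [{"code": "E_SYNTAX", "severity": "error", "suspicion_delta": 0}]
    reasons = []
    domain = email.rsplit("@", 1)[1].lower()
    if domain in disposable_domains:
        reasons.append({"code": "E_DISPOSABLE_DOMAIN", "severity": "error", "suspicion_delta": 50})
    try:
        dns.resolver.resolve(domain, "MX", lifetime=0.3)  # 300 ms budget per the resilience criteria
    except dns.exception.Timeout:
        reasons.append({"code": "W_DNS_TIMEOUT", "severity": "warn", "suspicion_delta": 10})
    except Exception:
        reasons.append({"code": "E_NO_MX", "severity": "error", "suspicion_delta": 25})
    return reasons
```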

Acceptance Criteria
Syntax and MX Validation on Email Submission
Given a supporter submits an email address on an action page When the email fails RFC 5322 syntax validation Then the submission is rejected with HTTP 422 and reason_code="E_SYNTAX" and no downstream delivery is attempted And an audit record is stored with status="rejected" and timestamp Given a syntactically valid email address When DNS lookup for the domain returns no MX and no A record Then reason_code="E_NO_MX" is emitted and suspicion_delta=25 is applied And delivery is suppressed if suppression_enabled=true, otherwise deliver with tag="suspect" Given a syntactically valid email address with at least one MX record When validation completes Then email_check="passed" and end-to-end validation latency <= 500ms at p95
Disposable/Temporary Domain Detection
Given a submission from a domain listed in the disposable domain list updated within the last 24h When validation runs Then reason_code="E_DISPOSABLE_DOMAIN" is emitted and suspicion_delta=50 is applied And delivery is suppressed if suppression_enabled=true, otherwise delivered with tag="suspect" Given a submission from a domain not present in the disposable domain list When validation runs Then no reason_code="E_DISPOSABLE_DOMAIN" is emitted Given the disposable domain list provider is unreachable When validation runs Then the last cached list (age <= 24h) is used And if cache age > 24h then reason_code="W_LIST_STALE" is emitted and no suppression occurs solely due to staleness
Domain Reputation Assessment and Thresholding
Given a domain reputation score on a 0–100 scale is retrievable for the sender domain When score < 20 Then reason_code="E_DOMAIN_REP_LOW" is emitted and suspicion_delta=40 is applied And delivery is suppressed if suppression_enabled=true Given a domain reputation score on a 0–100 scale is retrievable for the sender domain When 20 <= score < 50 Then reason_code="W_DOMAIN_REP_WARN" is emitted and suspicion_delta=20 is applied And delivery proceeds with tag="watch" Given domain reputation cannot be retrieved due to provider error When validation runs Then reason_code="W_REP_UNAVAILABLE" is emitted and suspicion_delta=10 is applied And no suppression occurs solely due to unavailability
Campaign-level Allowlist and Denylist Overrides
Given the exact email is on the campaign allowlist When validation runs Then the submission is delivered regardless of other checks And all detected reason_codes are recorded with severity downgraded to "info" Given the exact email is on the campaign denylist When validation runs Then the submission is suppressed with reason_code="E_EMAIL_DENYLIST" And this override applies over domain-level rules Given the domain is on the campaign allowlist and the email is not on the exact email denylist When validation runs Then the submission is delivered regardless of other detection results Given the domain is on the campaign denylist and the exact email is not on the campaign allowlist When validation runs Then the submission is suppressed with reason_code="E_DOMAIN_DENYLIST"
Structured Reason Codes and Audit Record Emission
Given any email validation outcome When a submission is processed Then an audit record is written containing submission_id, campaign_id, email, domain, reason_codes[], suspicion_score_total, decision (delivered|suppressed|queued_for_review), timestamps, and validator_version And each item in reason_codes[] contains code, message, severity (info|warn|error), suspicion_delta, check_source, and evidence payload And audit records are retrievable by campaign_id and date range and exportable as NDJSON with stable field names
Optional Suppression and Review Queue
Given suppression_enabled=true for the campaign When total suspicion_score >= 60 after all checks Then decision="suppressed" and the submission is placed into the review queue with all reason_codes attached Given suppression_enabled=false for the campaign When total suspicion_score >= 60 after all checks Then decision="delivered" and tag="suspect" is attached while preserving all reason_codes Given a reviewer approves a queued submission When the review action is recorded Then the system delivers the original message exactly once using an idempotency key and updates the audit record decision to "delivered"
DNS/MX Resilience, Timeouts, and Performance
Given DNS/MX lookups are performed during validation When any single resolver call exceeds 300ms Then the call is aborted, reason_code="W_DNS_TIMEOUT" is emitted, and suspicion_delta=10 is applied Given transient DNS SERVFAIL is returned When validation runs Then one retry is attempted before falling back to cached results Given the system is under 100 requests per second per instance When validation runs Then end-to-end validation latency is <= 300ms at p50 and <= 800ms at p95
Adaptive Rate Limiting and Auto-Hold Workflow
"As a campaign owner, I want suspicious bursts automatically throttled or held so that bad traffic is contained without blocking legitimate participation."
Description

Applies dynamic, context-aware rate limits to IPs, devices, and identities when suspicion thresholds are met, throttling or temporarily holding actions before they are sent to targets. Supports per-campaign policies, cooldown windows, progressive penalties, and auto-release conditions. Surfaces clear supporter-facing messages, triggers alerts on spikes, and records all mitigations on the action timeline. Integrates with delivery services to ensure held actions are not dispatched until approved or released.
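
A minimal in-memory sketch of the throttle-or-hold decision using a sliding window per hashed identity; a production enforcer would use a shared store and the progressive penalty ladder described below, and the thresholds shown are illustrative.

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """In-memory sketch only; real enforcement would share state (e.g. Redis) across instances."""

    def __init__(self, max_actions: int, window_seconds: int):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events = defaultdict(deque)

    def decide(self, identity_hash: str, suspicion_score: int, hold_threshold: int = 85) -> str:
        if suspicion_score >= hold_threshold:
            return "held"            # never dispatched until approved or auto-released
        now = time.monotonic()
        recent = self.events[identity_hash]
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_actions:
            return "throttled"       # respond 429 with Retry-After
        recent.append(now)
        return "allowed"
```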

Acceptance Criteria
Rate Limit Trigger on IP Velocity Spike
Given a campaign policy with ip_actions_per_60s_threshold=20 and suspicion_threshold=70 And 25 actions from the same IP occur within a 60-second window When the 21st through 30th actions are submitted Then the system throttles that IP to a maximum of 1 action every 10 seconds for 10 minutes And returns HTTP 429 with Retry-After=10 on throttled requests And tags the affected actions with mitigation="throttle", reason="ip_velocity", and a rule_id And records a mitigation event with timestamp, ip_hash, action_count_window, and policy_version
Auto-Hold Prevents Dispatch Until Review or Release
Given policy hold_threshold=85 and integrated delivery services (email, voice) And an action is submitted with suspicion_score >= 85 When the action is received by the platform Then the action state is set to "held" within 200 ms And no outbound requests are made to delivery services for that action (0 dispatch attempts) And the action appears in the review queue with SLA=30 minutes And the supporter receives HTTP 202 Accepted and a message_key="action_held" And upon manual approval or automatic release, the action is dispatched exactly once and state transitions to "sent"
Per-Campaign Policy Configuration and Enforcement
Given Campaign A with ip_threshold=20/60s and suspicion_threshold=70 And Campaign B with ip_threshold=10/60s and suspicion_threshold=60 When identical traffic patterns occur on both campaigns Then each campaign enforces its own configured thresholds without cross-campaign interference And if a policy key is missing, the platform applies the global default policy And a policy change takes effect within 60 seconds and is versioned with policy_version incremented
Progressive Penalties, Cooldown Windows, and Auto-Release
Given a penalty ladder: 1st trip in 15 minutes -> throttle 1/30s for 10 minutes; 2nd trip in same 15 minutes -> hold for 15 minutes; 3rd trip in 60 minutes -> block for 24 hours And a cooldown reset after 48 hours of clean activity When the same identity (composite of ip_hash + device_hash + email_hash) triggers successive trips within the defined windows Then the appropriate penalty tier is applied to that identity And penalties decay per the cooldown schedule And held actions auto-release when (cooldown elapses) OR (suspicion_score < 60) OR (identity verified), whichever occurs first And auto-released actions are dispatched within 60 seconds of eligibility with no duplicates
Supporter-Facing Messaging for Throttled or Held Actions
Given a supporter’s action is throttled, held, or blocked When the response is returned or the UI renders the result Then the supporter sees a clear message with reason_category in {throttle, hold, block} and a next_allowed_at timestamp or countdown where applicable And no sensitive detection details or PII are revealed And messages are localized for en and es with fallback to en And responses use HTTP 429 for throttle, 202 for hold, and 403 for block And UI messaging meets WCAG 2.1 AA contrast and uses aria-live="polite" for throttle notifications
Real-Time Spike Alerting to Campaign Admins
Given an anomaly rule: spike if >200 actions in 60 seconds AND suspicion_rate >= 25% When a campaign meets the spike condition Then an alert is sent to the campaign’s configured channels (email, Slack) within 60 seconds And the alert includes campaign_id, time_window, total_actions, suspicion_rate, top_3_ip_hashes, top_3_domains, rule_hits, and a link to the review queue And alerts are deduplicated to at most one per 5 minutes per campaign and severity And an incident record is created with severity computed from volume and suspicion_rate and supports ack and resolve timestamps And admins can mute spike alerts for 30 minutes per campaign
Mitigation Audit Trail on Action Timeline and Export
Given any mitigation (throttle, hold, block) is applied to an action When the timeline is inspected or exported Then a timeline entry exists within 2 seconds containing action_id, UTC timestamp (ISO-8601), mitigation_type, rule_id, suspicion_score, ip_hash, device_hash, identity_scope, policy_version, and actor (auto/manual) And timeline entries are immutable via API and UI And entries are visible in the action detail view and exportable as CSV or JSON with filters by date, campaign_id, mitigation_type, and rule_id And PII is masked; only hashed identifiers are stored/shown And exports of up to 100k rows complete within 60 seconds and data is retained for at least 24 months
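To make the velocity-throttle and mitigation-event criteria above concrete, here is a minimal Python sketch of a per-IP sliding-window check. The in-memory stores, the `submit_action` helper, and the exact event shape are illustrative assumptions, not RallyKit's actual internals.

```python
import time
from collections import defaultdict, deque

# Illustrative policy values mirroring the criteria above (assumed shape).
POLICY = {"ip_actions_per_60s_threshold": 20, "throttle_interval_s": 10,
          "throttle_duration_s": 600, "policy_version": 3}

_windows = defaultdict(deque)    # ip_hash -> timestamps of recent submissions
_throttles = {}                  # ip_hash -> {"until": ..., "last_allowed": ...}

def submit_action(ip_hash, now=None):
    """Return an HTTP-style decision (202 accept, 429 throttle) for one submission."""
    now = time.time() if now is None else now
    window = _windows[ip_hash]
    window.append(now)
    while window and now - window[0] > 60:          # keep only the last 60 seconds
        window.popleft()

    throttle = _throttles.get(ip_hash)
    if throttle and now < throttle["until"]:
        if now - throttle["last_allowed"] < POLICY["throttle_interval_s"]:
            return {"status": 429, "retry_after": POLICY["throttle_interval_s"]}
        throttle["last_allowed"] = now              # at most 1 action per 10 seconds
        return {"status": 202}
    if len(window) > POLICY["ip_actions_per_60s_threshold"]:
        _throttles[ip_hash] = {"until": now + POLICY["throttle_duration_s"],
                               "last_allowed": now}
        # Tag the action and emit a mitigation event, per the criteria above.
        return {"status": 429, "retry_after": POLICY["throttle_interval_s"],
                "mitigation": "throttle", "reason": "ip_velocity",
                "rule_id": "ip_velocity_60s",       # assumed rule identifier
                "mitigation_event": {"timestamp": now, "ip_hash": ip_hash,
                                     "action_count_window": len(window),
                                     "policy_version": POLICY["policy_version"]}}
    return {"status": 202}

# The 21st submission inside one minute trips the throttle.
decisions = [submit_action("ip_abc", now=float(i)) for i in range(25)]
assert decisions[19]["status"] == 202 and decisions[20]["status"] == 429
```

In production the window and throttle state would live in a shared store (for example Redis) so every application node sees the same counts.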
Moderator Review Console with Evidence Trail
"As a moderation lead, I want an evidence-rich review console so that I can quickly approve real actions and reject manufactured ones with an auditable record."
Description

Provides a dashboard queue for reviewing held or high-risk actions with full context: suspicion score, reason codes, timelines, origin signals, and similarity clusters. Enables bulk approve/reject, comment threads, and re-queueing for delivery. Enforces role-based access, captures immutable decision logs with timestamped actor records, and supports filters, search, and exports to CSV/JSON. Integrates with notifications to inform moderators of spikes needing attention.
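A minimal sketch of how this role model could be enforced. The capability matrix uses the role names from the acceptance criteria below, but treating roles as cumulative and the `authorize` helper itself are assumptions.

```python
from dataclasses import dataclass

# Capability matrix for the console roles named in the criteria below (assumed cumulative).
CAPABILITIES = {
    "Viewer":    {"read"},
    "Moderator": {"read", "hold", "review", "comment"},
    "Approver":  {"read", "hold", "review", "comment", "approve", "reject", "requeue"},
    "Admin":     {"read", "hold", "review", "comment", "approve", "reject", "requeue",
                  "export", "configure"},
}

@dataclass
class AccessDecision:
    allowed: bool
    status_code: int   # 200 when allowed, 403 otherwise

def authorize(role: str, capability: str) -> AccessDecision:
    """Return an allow/deny decision; a deny maps to HTTP 403 with no state change."""
    allowed = capability in CAPABILITIES.get(role, set())
    return AccessDecision(allowed=allowed, status_code=200 if allowed else 403)

# Example: a Moderator attempting a bulk approve is rejected with 403.
assert authorize("Moderator", "approve").status_code == 403
assert authorize("Approver", "approve").allowed
```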

Acceptance Criteria
Spike Notification Trigger and Moderator Alerting
- Given Astroturf Shield detects a suspicious spike exceeding a configured threshold (e.g., >100 held actions in 5 minutes or >60% shared-IP rate), When the threshold is crossed, Then an alert is sent to configured channels (email and Slack) within 60 seconds including spike window, total held count, rate, top 3 origin signals, and a deep link pre-filtering the queue. - Given an alert for a spike signature was sent, When the same signature reoccurs within the cooldown window (10 minutes), Then no duplicate alerts are sent and a single consolidated alert is updated with the latest counts. - Given a moderator with Console access clicks the deep link, When the console loads, Then the queue is filtered to the spike’s signature and displays the matching count within ±1 of the alert summary. - Given a user without Console access receives the alert link, When they attempt to open it, Then access is denied (HTTP 403) and no queue data is exposed. - Given alerts are enabled, When no spikes are present, Then no alerts are sent and alert volume is 0.
Evidence Panel Displays Suspicion Context and Similarity Clusters
- Given a held action is opened in the review console, When the evidence panel renders, Then it displays: suspicion score (0–100), reason codes with descriptions, event timeline with UTC timestamps, and origin signals (IP, ASN, geo, UA/device fingerprint, email domain/disposable flag) within 2 seconds. - Given the action belongs to a similarity cluster, When viewing the evidence panel, Then the panel shows cluster_id, cluster_size, similarity metric summary, and 3 exemplar actions with links. - Given the moderator clicks “View Raw Signals,” When requested, Then a JSON blob of raw signals is returned and downloadable without PII redaction loss for authorized roles. - Given the evidence fails to load, When a recoverable error occurs, Then a user-facing error message is shown and a retry control is available; no blank or partial fields are silently shown. - Given a moderator opens the comments tab, When adding a comment or reply, Then the comment is appended with actor, timestamp (UTC), and action_id; comments are not editable and are visible to all console users with read access.
Bulk Approve/Reject and Re-Queue for Delivery with Evidence Trail
- Given a user with Approver role selects 200 held actions, When they choose Approve and supply a reason code, Then exactly 200 actions transition to Approved and are re-queued for delivery; processing completes in ≤5 seconds and a confirmation summary lists counts by outcome. - Given approved actions are re-queued, When delivery triggers, Then each action is delivered at most once and downstream dedupe prevents duplicate constituent contact events. - Given mixed selection includes actions from multiple clusters, When bulk decision is applied, Then per-item results are recorded; failures (if any) are reported inline without blocking other items. - Given a user with Moderator role (no Approver permission) attempts bulk approve/reject, When they submit, Then the operation is blocked with an authorization error and no state changes occur. - Given a comment is added with a bulk decision, When the decision is committed, Then the comment is attached to each affected action’s thread with a reference to the bulk operation id.
Role-Based Access Control Enforcement
- Given defined roles (Viewer, Moderator, Approver, Admin), When a user signs in, Then the console restricts capabilities: Viewer=read-only; Moderator=hold/review/comment; Approver=approve/reject/re-queue; Admin=exports/config. - Given a user’s role lacks a capability, When they attempt the corresponding action via UI or API, Then the system returns HTTP 403 and no decision or data change is persisted. - Given a user’s role changes mid-session, When they next perform a privileged action, Then the new role permissions are enforced without requiring a full sign-out (max 60s cache). - Given an audit probe, When accessing an action outside permitted scope, Then no sensitive fields (raw signals, PII) are returned and the access denial is logged with actor_id and timestamp.
Immutable Decision Logging with Timestamps and Actor Records
- Given any decision (approve, reject, re-queue) is made on an action, When the operation completes, Then an immutable decision log entry is created capturing actor_id, actor_role, action_id, decision, reason_code, prior_state, new_state, suspicion_score, timestamp (ISO 8601 UTC), and request_id. - Given an existing decision log entry, When an admin attempts to edit or delete it via UI or API, Then the system denies the operation and records a non-destructive audit event noting the attempted modification. - Given multiple decisions occur on the same action, When logs are viewed, Then entries are ordered by timestamp and monotonically increasing sequence number with no gaps for that action. - Given an audit export is generated, When the export completes, Then all decision log entries for the filtered set are included and checksums match the server-side hash for integrity verification.
Filters, Search, and Advanced Sorting
- Given the review queue is loaded, When filters are applied (status, suspicion score range, reason codes, origin signal types, cluster size range, date/time range), Then the result set updates within 2 seconds and the active filter chips reflect selections. - Given a moderator enters a search term, When searching by email, IP, domain, device fingerprint, or action_id, Then matching actions are returned with the term highlighted; no false positives outside the specified fields. - Given the moderator sorts by suspicion score descending, When paging through results, Then the sort order remains stable and consistent across pages with multi-select state preserved. - Given filters are cleared, When the reset control is used, Then all filters and search terms are removed and the default sort (most recent held) is restored.
CSV/JSON Export of Queue and Decision Logs
- Given a filtered queue view of up to 25,000 actions, When the user exports to CSV, Then a file is generated within 30 seconds containing one row per action with selected fields, using UTF-8 encoding and ISO 8601 UTC timestamps. - Given the same filtered view, When exporting to JSON, Then a newline-delimited JSON (NDJSON) file is generated with action records including embedded evidence summary and decision logs. - Given large field values or special characters, When exported, Then CSV values are properly quoted/escaped and JSON is valid per RFC 8259 with no truncation. - Given an export is requested, When the file is ready, Then a secure download link is provided and remains valid for 24 hours; access requires the same role permissions as the console view. - Given an export completes, When record counts are compared, Then the export record count equals the on-screen filtered count within the same time window.
Person-level Dedupe Integration
"As a data analyst, I want person-level dedupe to factor in fraud signals so that performance metrics reflect unique supporters rather than repeated attempts."
Description

Extends existing person-level deduplication to incorporate Astroturf signals, merging identities across channels (email, phone, device) and suppressing repeat counts from the same individual within configurable windows. Distinguishes between unique constituent outreach versus repeated spam, keeping topline metrics accurate. Offers campaign-level controls for what qualifies as a unique action and emits dedupe reason codes for analytics and exports.
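One way to picture the cross-channel merge is a resolver that maps hashed identifiers to a canonicalPersonId, as in this sketch. The normalization rules, the `IdentityResolver` class, and its in-memory key map are assumptions.

```python
import hashlib
import re
import uuid

class IdentityResolver:
    """Resolve events to a canonicalPersonId via hashed email, phone, or device keys."""

    def __init__(self, device_merging_enabled: bool = True):
        self.device_merging_enabled = device_merging_enabled
        self._key_to_person: dict[str, str] = {}

    @staticmethod
    def _hash(kind: str, value: str) -> str:
        return kind + ":" + hashlib.sha256(value.encode()).hexdigest()

    def _keys(self, email=None, phone=None, device=None) -> list[str]:
        keys = []
        if email:
            keys.append(self._hash("email", email.strip().lower()))
        if phone:
            keys.append(self._hash("phone", "+" + re.sub(r"\D", "", phone)))  # rough E.164
        if device and self.device_merging_enabled:
            keys.append(self._hash("device", device))
        return keys

    def resolve(self, email=None, phone=None, device=None) -> str:
        keys = self._keys(email, phone, device)
        # Reuse an existing person if any key matches; otherwise mint a new id.
        person = next((self._key_to_person[k] for k in keys if k in self._key_to_person),
                      None) or str(uuid.uuid4())
        for k in keys:                      # link every presented key to that person
            self._key_to_person[k] = person
        return person

# Same normalized email, different phone/device -> one canonicalPersonId.
r = IdentityResolver()
a = r.resolve(email="Sam@example.org", phone="+1 555 0100", device="dev-1")
b = r.resolve(email="sam@example.org", device="dev-2")
assert a == b
```

Name-only or ZIP-only matches never merge here because they are not keys; a production resolver would also need to merge two previously distinct persons when a bridging event links them.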

Acceptance Criteria
Cross-Channel Identity Merge
Given two action events share the same normalized email address but different phone/device, When dedupe runs, Then both events resolve to the same canonicalPersonId and are counted as one unique within the configured window. Given two action events share the same normalized E.164 phone number across different emails, When dedupe runs, Then both events resolve to the same canonicalPersonId and are counted as one unique within the configured window. Given two action events have no email or phone but share the same device fingerprint, When dedupe runs and device-based merging is enabled, Then both events resolve to the same canonicalPersonId and are counted as one unique; When disabled, Then they remain separate. Given two action events with only matching name or ZIP/postal code, When dedupe runs, Then they are not merged.
Configurable Dedupe Window Enforcement
Given a campaign sets dedupeWindow=24h, When the same person submits multiple actions within 24 hours, Then the unique action count increases by 1; When an action occurs at 24h+1s, Then the unique count increases by 1 again. Given dedupeWindow=0 (disabled), When the same person submits multiple actions, Then each action increments the unique action count. Given dedupeWindow=15m, When actions occur at t0 and t0+14m59s, Then the unique count increments once; When at t0+15m+1s, Then it increments again. Given any suppression due to the window, Then the suppressed actions are present in raw event logs with dedupeApplied=true and windowSeconds populated with the configured value.
Campaign-Level Unique Action Definition
Given uniqueBy=bill, When the same person takes actions on two different bills within the window, Then the unique count increases by 2; When taking two actions on the same bill, Then the unique count increases by 1. Given uniqueBy=channel, When the same person emails and calls for the same bill within the window, Then the unique count increases by 2; When two emails are sent, Then the unique count increases by 1. Given uniqueBy=actionType, When the same person completes actions of types "petition" and "call" within the window, Then the unique count increases by 2. Given uniqueBy=campaign (default), When the same person takes any number of actions within the window, Then the unique count increases by 1. Given the campaign updates uniqueBy settings, When new actions arrive after the change, Then the new rules apply within 60 seconds and historical counts are not retroactively altered.
Dedupe Reason Codes in Analytics and Exports
Given an action is merged based on email, Then the event has reasonCode="MERGED_EMAIL" and includes canonicalPersonId and dedupeGroupId in analytics and CSV/JSON exports. Given suppression due to within-window, Then reasonCode="WITHIN_WINDOW" with windowSeconds populated. Given suppression due to astroturf signal threshold, Then reasonCode="ASTROTURF_SIGNAL" with signalType and signalScore populated. Given an action qualifies as unique, Then reasonCode="UNIQUE" and dedupeApplied=false. Given analytics dashboards, When filtering by reasonCode, Then counts and charts update accordingly and match export totals for the same time range.
Astroturf Signal–Informed Suppression
Given IP rate threshold is set to 40 actions/5m, When 50 actions from 30 disposable emails originate from the same IP in 5 minutes, Then actions beyond the threshold are held for review and do not increment unique counts; affected events have reasonCode="ASTROTURF_SIGNAL" and reviewStatus="HELD". Given same-person repeat pattern via the same device with randomized emails within the configured window, When astroturfScore exceeds the threshold, Then subsequent events from that person are suppressed from unique counts and flagged with reasonCode="ASTROTURF_SIGNAL". Given a reviewer releases held events, When released, Then unique counts are recomputed under dedupe rules and reasonCode is updated to "UNIQUE" or "WITHIN_WINDOW" accordingly. Given astroturf features are disabled for a campaign, Then dedupe uses standard person-level rules only and no events carry the "ASTROTURF_SIGNAL" reason code.
Real-Time Dedupe for Live Metrics
Given a live action event is received, When dedupe executes, Then the campaign dashboard unique counts update within 1 second for p95 and within 3 seconds for p100 under a sustained load of 100 events/second. Given the same event is replayed or retried up to 3 times, When dedupe processes it, Then unique counts and person records remain idempotent (no duplicate increments). Given concurrent actions for 10,000 distinct persons, When dedupe processes them in parallel, Then accuracy is ≥ 99.99% (≤ 1 duplicate per 10,000) and no cross-person merges occur.
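The uniqueBy and dedupe-window rules above reduce to a key-plus-window check, sketched below. The `UniqueCounter` class and its in-memory state are illustrative; field and reason-code names follow the criteria.

```python
import time

REASON_UNIQUE = "UNIQUE"
REASON_WITHIN_WINDOW = "WITHIN_WINDOW"

class UniqueCounter:
    """Count unique actions per campaign under a uniqueBy rule and dedupe window."""

    def __init__(self, unique_by: str = "campaign", window_seconds: int = 24 * 3600):
        self.unique_by = unique_by          # campaign | bill | channel | actionType
        self.window_seconds = window_seconds
        self._last_seen: dict[tuple, float] = {}
        self.unique_count = 0

    def _key(self, person_id: str, event: dict) -> tuple:
        extra = {"campaign": None, "bill": event.get("bill_id"),
                 "channel": event.get("channel"),
                 "actionType": event.get("action_type")}[self.unique_by]
        return (person_id, extra)

    def record(self, person_id: str, event: dict, now=None) -> dict:
        now = time.time() if now is None else now
        key = self._key(person_id, event)
        last = self._last_seen.get(key)
        # window_seconds == 0 disables dedupe: every action counts as unique.
        if self.window_seconds and last is not None and now - last < self.window_seconds:
            return {"reasonCode": REASON_WITHIN_WINDOW, "dedupeApplied": True,
                    "windowSeconds": self.window_seconds}
        self._last_seen[key] = now
        self.unique_count += 1
        return {"reasonCode": REASON_UNIQUE, "dedupeApplied": False}

# uniqueBy=bill: two different bills from the same person count twice.
c = UniqueCounter(unique_by="bill")
c.record("p1", {"bill_id": "HB-101", "channel": "email", "action_type": "email"}, now=0)
c.record("p1", {"bill_id": "SB-202", "channel": "email", "action_type": "email"}, now=1)
assert c.unique_count == 2
```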
Audit Logs and Fraud Impact Reporting
"As an executive director, I want transparent reports and audit logs so that I can prove campaign integrity to partners, funders, and the media."
Description

Generates immutable audit logs of detections, mitigations, and reviewer decisions, and provides reporting on flagged rates, suppression impact, root causes, and time-based trends. Supports drill-down by campaign, channel, geography, and reason code, with exportable CSV/JSON and scheduled email/Slack summaries. Exposes API endpoints for BI tools and enables shareable, read-only report links for external stakeholders to validate campaign credibility.
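A minimal sketch of the append-only, hash-chained log this requirement describes. Canonicalizing entries as sorted-key JSON before hashing is an assumed scheme, and a real store would persist entries rather than keep them in memory.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

class AuditLog:
    """Append-only audit log where each entry hashes the previous entry's hash."""

    def __init__(self):
        self.entries: list[dict] = []

    def append(self, event_type: str, **fields) -> dict:
        previous_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"id": str(uuid.uuid4()), "event_type": event_type,
                 "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
                 **fields, "previous_hash": previous_hash}
        # Canonicalize (sorted keys, no whitespace) before hashing -- an assumed scheme.
        payload = json.dumps(entry, sort_keys=True, separators=(",", ":"))
        entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and check the chain links; True only if intact."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            payload = json.dumps(body, sort_keys=True, separators=(",", ":"))
            if entry["previous_hash"] != prev or \
               entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("mitigation", campaign_id="c-42", reason_code="ip_velocity")
log.append("reviewer_decision", campaign_id="c-42", reason_code="approved")
assert log.verify()
```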

Acceptance Criteria
Immutable Audit Log Chain Creation
- Given a detection, mitigation, or reviewer decision occurs, when the event is recorded, then an append-only audit entry is written with fields: id (UUIDv4), event_type, timestamp (UTC ISO8601 ms), actor_type, actor_id (nullable), campaign_id, channel, reason_code, rule_id (nullable), subject_ref, prior_state, new_state, metadata (JSON), previous_hash, hash. - Given an entry is written, when recomputing SHA-256 over canonicalized payload + previous_hash, then the stored hash matches and previous_hash links to the most recent prior entry. - Given the daily integrity job runs, when it verifies the last 1,000,000 entries, then it reports 100% integrity or flags the first mismatch with severity=critical and creates an alert. - Given any API or admin attempt to update or delete an audit entry, when executed, then the operation is rejected with HTTP 403 and the entry remains unchanged.
Fraud Impact Metrics and Trends
- Given a date range and interval (hour/day/week), when generating the report, then it returns totals: total_actions, flagged_actions, suppressed_actions, reviewer_overrides, flagged_rate=(flagged_actions/total_actions), suppression_impact=(suppressed_actions/total_actions) with values rounded to 2 decimals. - Given the report query, when grouping by interval, then the time series contains one data point per interval with zero-filled gaps. - Given root cause breakdown is requested, when grouping by reason_code, then results include count and percent_of_flagged sorted by count desc. - Given comparison mode is enabled, when comparing current vs previous period, then deltas are returned as absolute and percent change for each metric. - Given filters are applied, when the report is regenerated, then all metrics and series reflect only the filtered dataset.
Drill-Down by Campaign/Channel/Geography/Reason Code
- Given filters campaign_id (multi), channel in [call,email,form], geography (state,district), and reason_code (multi), when applied in any combination, then results include only matching records and the total_count reflects the filtered set. - Given a filter combination yields zero records, when the report is generated, then it returns an empty dataset with headers only for exports. - Given a dataset up to 1,000,000 records after filtering, when computing summary metrics, then the response completes within 2.5 seconds p95.
CSV/JSON Exports and Asynchronous Large Exports
- Given current filters, when exporting CSV, then the file is RFC 4180-compliant UTF-8 with header row and includes only filtered rows and columns: id,event_type,timestamp,actor_type,actor_id,campaign_id,channel,reason_code,rule_id,subject_ref,prior_state,new_state,metadata,previous_hash,hash. - Given current filters, when exporting JSON, then the response is application/json with an array of objects using the same schema and timestamps in UTC ISO8601 ms. - Given an export exceeds 250,000 rows, when requested, then an asynchronous job is created and a downloadable link is delivered within 15 minutes; progress is visible until completion. - Given identical filters and time range, when the export is rerun within 24 hours, then the row count matches and a SHA-256 of sorted rows is identical.
Scheduled Email and Slack Summaries
- Given a schedule is created for Mondays 09:00 America/New_York, when the time occurs, then email and Slack summaries are delivered containing past 7-day metrics: total_actions, flagged_rate, suppression_impact, top_5_reason_codes, top_5_geographies, review_queue_size, and a link to the full report. - Given a delivery failure (SMTP 5xx or Slack 429/5xx), when sending, then the system retries up to 3 times with exponential backoff and logs success/failure in audit with reason_code=notification_failure. - Given the schedule is edited or canceled, when saved, then changes take effect before the next scheduled run and a confirmation notification is sent.
Read-Only Shareable Report Links
- Given a user generates a shareable link for a filtered report, when they set an expiry between 1 hour and 90 days, then a tokenized URL is created scoped to those filters with read-only permissions. - Given an external user opens the link, when loading the report, then no create/update/delete actions are available and only viewing and drill-down within the scoped filters is permitted. - Given the link is expired or revoked, when accessed, then the response is HTTP 403 and no data is revealed. - Given a share link is viewed, when the view completes, then a view event is logged with timestamp, ip, user_agent, and share_link_id.
BI API Endpoints for Reporting
- Given an API key with scope reports:read, when GET /api/v1/reports/audit-logs is called with valid filters and time range, then 200 is returned with data, pagination cursor (next_cursor), and X-Total-Count header. - Given requests exceed 60/min per API key, when additional requests arrive, then 429 is returned with a Retry-After header. - Given the client sends Accept: application/vnd.rallykit.reports.v1+json, when calling any reports endpoint, then v1 schema is returned; otherwise latest stable is returned. - Given If-None-Match is provided with a current ETag, when data has not changed, then 304 is returned with empty body. - Given a typical report query scanning ≤10,000 rows, when executed under normal load, then p95 response time is ≤800 ms.
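As a rough illustration of the reporting math above (flagged_rate, suppression_impact, zero-filled intervals), the helper below aggregates an event list into a time series; the event shape and the bucketing helper itself are assumptions.

```python
from datetime import datetime, timedelta, timezone

def fraud_impact_series(events, start, end, interval=timedelta(hours=1)):
    """Aggregate flagged/suppressed counts per interval with zero-filled gaps.

    `events` is an iterable of dicts with keys: timestamp (datetime), flagged (bool),
    suppressed (bool) -- an assumed minimal shape.
    """
    buckets = {}
    t = start
    while t < end:                                   # pre-create empty buckets
        buckets[t] = {"total_actions": 0, "flagged_actions": 0, "suppressed_actions": 0}
        t += interval
    for e in events:
        if not (start <= e["timestamp"] < end):
            continue
        offset = int((e["timestamp"] - start) / interval)
        b = buckets[start + offset * interval]
        b["total_actions"] += 1
        b["flagged_actions"] += e.get("flagged", False)
        b["suppressed_actions"] += e.get("suppressed", False)
    series = []
    for t, b in sorted(buckets.items()):
        total = b["total_actions"]
        series.append({"interval_start": t.isoformat(),
                       "flagged_rate": round(b["flagged_actions"] / total, 2) if total else 0.0,
                       "suppression_impact": round(b["suppressed_actions"] / total, 2) if total else 0.0,
                       **b})
    return series

start = datetime(2025, 1, 1, tzinfo=timezone.utc)
events = [{"timestamp": start + timedelta(minutes=10), "flagged": True, "suppressed": False}]
print(fraud_impact_series(events, start, start + timedelta(hours=3)))
```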

Trust Pass

A consented, time-limited ‘verified once’ pass linked to phone/email that carries across RallyKit action pages. Returning supporters skip re-checks for a set window, cutting friction at peak moments while preserving compliance.

Requirements

One-Time Consent Capture
"As a supporter, I want to explicitly consent to a short reuse of my verification so that I can act faster while knowing how my data is used."
Description

Implement an explicit, opt-in consent step that explains the Trust Pass purpose, reuse window, and covered channels (phone/email). Capture timestamp, consent language version, locale, and supporter identifier linkage. Support accept/decline with accessible, localized copy; if declined, fall back to the standard verification flow. Persist consent metadata server-side for the configured retention period and surface a lightweight indicator of active consent on action pages. Integrate with existing RallyKit supporter records without duplicating PII, and ensure consent is required before any reuse of verification occurs.
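A sketch of the consent metadata that would be persisted on acceptance. The `ConsentRecord` dataclass, the salting scheme, and the field layout are illustrative, with hashed identifiers standing in for raw phone/email as the requirement demands.

```python
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timedelta, timezone
from typing import Optional

def _hash_identifier(value: str, tenant_salt: str) -> str:
    # Only the salted hash is stored; raw phone/email never leaves the verification flow.
    return hashlib.sha256((tenant_salt + value.strip().lower()).encode()).hexdigest()

@dataclass
class ConsentRecord:
    supporter_id: str
    hashed_email: Optional[str]
    hashed_phone: Optional[str]
    consent_language_version: str
    locale: str                       # BCP 47, e.g. "es-MX"
    reuse_window_seconds: int
    consented_at: str = field(default_factory=lambda:
                              datetime.now(timezone.utc).isoformat())
    pass_expires_at: str = ""

    def __post_init__(self):
        if not self.pass_expires_at:
            expires = (datetime.fromisoformat(self.consented_at)
                       + timedelta(seconds=self.reuse_window_seconds))
            self.pass_expires_at = expires.isoformat()

record = ConsentRecord(
    supporter_id="sup_123",
    hashed_email=_hash_identifier("casey@example.org", tenant_salt="org-salt"),
    hashed_phone=None,
    consent_language_version="consent-v4",
    locale="en-US",
    reuse_window_seconds=12 * 3600,
)
print(asdict(record))   # persisted server-side, encrypted at rest in practice
```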

Acceptance Criteria
Consent Prompt and Acceptance on First Verification
Given a supporter without an active Trust Pass reaches the verification step on a RallyKit action page, When the consent step is presented, Then the UI clearly states the Trust Pass purpose, the configured reuse window duration, and that it covers phone and email, and provides Accept and Decline actions. Given the supporter chooses Accept, When the verification completes, Then a Trust Pass is created and associated to the supporter’s verified identifier(s), and the current action proceeds without any additional verification in that session. Given acceptance is recorded, Then the UI displays a confirmation and the "Trust Pass active" indicator with the pass expiry time.
Decline Consent and Fallback to Standard Verification
Given a supporter selects Decline on the consent step, When the flow continues, Then the system performs the standard verification process and does not create a Trust Pass. Given consent was declined, Then no active Trust Pass indicator is shown on the action page, and no reuse occurs for later actions unless consent is subsequently accepted. Given a declined supporter completes the action, Then any subsequent action prompts the consent step again.
Consent Metadata Capture and Secure Persistence
Given a supporter accepts consent, Then the system records: consent timestamp (ISO 8601 UTC), consent language version ID, locale (BCP 47), supporter linkage (supporter ID and hashed phone/email), configured reuse window value, and computed pass expiry timestamp. Given metadata is recorded, Then it is stored server-side in encrypted-at-rest storage and retrievable by authorized systems via supporter ID for audit. Given the supporter already exists in RallyKit, Then the consent record links to the existing supporter record without creating a duplicate or storing additional copies of raw PII.
Trust Pass Reuse Within Configured Window
Given a supporter has an active Trust Pass, When they initiate another RallyKit action within the configured reuse window using the same identifier(s), Then verification is skipped and the "Trust Pass active" indicator is displayed. Given multiple actions occur within the window, Then each action is attributed to the same consent record ID in logs for audit. Given both phone and email are covered channels, Then reuse applies to both within the active window.
Trust Pass Expiry and Retention Enforcement
Given the current time is after the pass expiry timestamp, When the supporter initiates an action, Then reuse is blocked and the consent step is presented again before any verification is reused. Given the configured retention period elapses since consent timestamp, Then the consent metadata is purged from active storage and is no longer returned by audit queries, while aggregate metrics remain unaffected. Given a pass has expired or metadata is purged, Then no active Trust Pass indicator is shown on action pages.
Localized and Accessible Consent Experience
Given the browser/user locale is supported, Then consent copy is displayed in that locale; otherwise the default locale is used, with all text sourced from the localized content set matching the consent language version ID. Given keyboard-only navigation, Then all consent elements are reachable in logical order with visible focus and operable via keyboard (no traps). Given a screen reader is used, Then the consent modal has appropriate ARIA roles and labels, and Accept/Decline controls are announced clearly and unambiguously. Given the consent UI is rendered, Then Accept and Decline controls have equal visual prominence and meet WCAG 2.1 AA contrast (>= 4.5:1 for text, >= 3:1 for large text/icons). Given the reuse window is configured, Then the displayed copy includes the correct dynamic duration value.
Active Consent Indicator on Action Pages
Given a supporter has an active Trust Pass, When an action page loads, Then a lightweight, non-intrusive indicator is shown stating Trust Pass is active and the remaining time until expiry. Given no active Trust Pass exists, Then the indicator is not displayed. Given assistive technologies are used, Then the indicator is announced with an accessible name and is not focus-trapping or blocking primary actions. Given the indicator is displayed, Then it contains no sensitive PII and no control that alters consent; it is informational only.
Verified Contact Linking (OTP)
"As a supporter, I want to verify my phone or email once so that RallyKit can recognize me across action pages without repeated codes."
Description

Enable one-time verification of phone and/or email via OTP with intelligent retries, rate limiting, and deliverability checks. Link verified channels to a single supporter profile and mark them as eligible for Trust Pass issuance when consented. Provide channel fallback (e.g., switch from SMS to email) and handle edge cases (carrier filtering, spam folders). Integrate with existing RallyKit comms services and ensure verification outcomes are recorded for analytics and compliance.
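The core OTP lifecycle (issue, hash, expire, verify once, lock after repeated failures) can be sketched as follows. The in-memory store and thresholds are assumptions, and provider sends, resend backoff, and deliverability handling are deliberately omitted.

```python
import hashlib
import hmac
import secrets
import time

OTP_TTL_SECONDS = 600          # 10-minute code lifetime, per the criteria below
MAX_ATTEMPTS = 5               # lock the channel after repeated bad submissions

_pending = {}                  # (supporter_id, channel) -> otp state (in-memory sketch)

def issue_otp(supporter_id: str, channel: str) -> str:
    code = f"{secrets.randbelow(10**6):06d}"                 # 6-digit numeric OTP
    _pending[(supporter_id, channel)] = {
        "code_hash": hashlib.sha256(code.encode()).hexdigest(),
        "expires_at": time.time() + OTP_TTL_SECONDS,
        "attempts": 0,
    }
    return code   # handed to the SMS/email provider, never stored in plaintext

def verify_otp(supporter_id: str, channel: str, submitted: str) -> dict:
    state = _pending.get((supporter_id, channel))
    if state is None:
        return {"ok": False, "reason": "no_pending_code"}
    if time.time() > state["expires_at"]:
        _pending.pop((supporter_id, channel))
        return {"ok": False, "reason": "expired"}
    state["attempts"] += 1
    if state["attempts"] > MAX_ATTEMPTS:
        return {"ok": False, "reason": "locked", "status": 429}
    submitted_hash = hashlib.sha256(submitted.encode()).hexdigest()
    if hmac.compare_digest(submitted_hash, state["code_hash"]):   # constant-time compare
        _pending.pop((supporter_id, channel))                     # single-use: invalidate
        return {"ok": True, "verified_at": time.time()}
    return {"ok": False, "reason": "invalid_code"}

code = issue_otp("sup_123", "sms")
assert verify_otp("sup_123", "sms", code)["ok"]
assert verify_otp("sup_123", "sms", code)["reason"] == "no_pending_code"
```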

Acceptance Criteria
SMS OTP Verification and Profile Linking
- Given a supporter enters a valid mobile number not yet verified and requests verification, When they tap "Send code", Then the system issues a single-use 6-digit numeric OTP, stores a hashed version with TTL 10 minutes, and sends via the configured SMS provider within 3 seconds. - Given the OTP is valid and not expired, When the supporter submits the correct code, Then the phone is marked verified=true with verified_at timestamp, linked to the current supporter profile, and the OTP is immediately invalidated. - Given the OTP is expired or already consumed, When the supporter submits it, Then the system rejects it with a clear error and offers a resend option. - Given the phone is already verified on this profile, When the supporter requests a new OTP, Then the system informs "Already verified" and does not send a new code unless the supporter selects "Reverify". - Given the phone is already verified on a different supporter profile, When a new verification is attempted, Then the system blocks linking, presents an account recovery option, and records an audit event; no duplicate link is created.
Email OTP Verification and Profile Linking with Deliverability Checks
- Given a supporter enters an email address not yet verified and requests verification, When they tap "Send code", Then a single-use 6-digit OTP is emailed via the configured provider and a provider message_id is recorded. - Given the email provider webhook reports a hard bounce or invalid address, When delivery fails, Then the system marks the attempt undeliverable, does not retry email for that address during the session, and prompts for an alternate channel (SMS) or corrected email. - Given the supporter receives the OTP and submits the correct code within 10 minutes, Then the email is marked verified=true with verified_at timestamp, linked to the current supporter profile, and the OTP is invalidated. - Given the email is already verified on a different supporter profile, When a new verification is attempted, Then the system blocks linking, presents an account recovery option, and records an audit event; no duplicate link is created.
Automatic Channel Fallback from SMS to Email
- Given an SMS OTP send attempt returns provider status filtered/blocked or no delivery receipt within 60 seconds, When fallback is enabled for the organization, Then the system prompts the supporter to switch to email and pre-fills their email if available. - Given the supporter accepts fallback, When they confirm, Then the system cancels pending SMS retries, generates a new OTP, invalidates the SMS OTP, and sends via email. - Given the supporter declines fallback, When they choose "Try SMS again", Then the system respects rate limits and displays the next-allowed resend time. - Given email fallback is not available (no email provided), When SMS is filtered, Then the UI requests an email entry and prevents further SMS sends until the user updates the channel or the suppression window ends.
Intelligent Retries and Rate Limiting
- Given a supporter requests OTP resends on a single channel, When resends are triggered, Then allow at most 3 resend attempts per 15 minutes per supporter per channel with exponential backoff (0s, 30s, 60s) and display time-to-next resend. - Given repeated invalid OTP submissions, When 5 invalid attempts occur within 15 minutes for a channel, Then verification for that channel is locked for 15 minutes and returns HTTP 429 with a human-friendly message. - Given abusive patterns, When more than 10 OTP sends are requested from the same IP across any supporters within 10 minutes, Then subsequent requests from that IP are throttled for 20 minutes and logged for analytics. - Given a provider returns a retryable error (e.g., 5xx/timeout), When sending an OTP, Then the system retries up to 2 times with jittered backoff without exceeding resend limits and records the error codes.
Handling Carrier Filtering and Spam Folder Cases
- Given the SMS provider callback indicates carrier filtering or blocked, When processing delivery status, Then the system tags the attempt as filtered, suppresses further SMS sends to that number for 24 hours, and recommends email fallback in the UI. - Given an email complaint (spam report) webhook is received for the verification message, When processing, Then the system halts further email OTP sends to that address, records a compliance event, and requires an alternate channel to proceed. - Given a soft bounce or temporary email failure is detected, When processing, Then the system schedules one automatic retry within 5 minutes, respecting resend limits, and updates delivery status accordingly.
Consent Capture and Trust Pass Eligibility Flag
- Given a supporter has successfully verified at least one channel, When they explicitly check the consent box for Trust Pass, Then the system records consent with timestamp, policy version, and scope, and sets trust_pass_eligible=true with an expiry equal to the organization-configured TTL. - Given trust_pass_eligible is set, When the expiry time is reached, Then the eligibility flag automatically reverts to false without user action and an expiry event is recorded. - Given a supporter withdraws consent, When they uncheck or request removal, Then trust_pass_eligible becomes false immediately and a revocation event is recorded; existing channel verification statuses remain intact.
Analytics and Compliance Recording
- Given any OTP lifecycle event (requested, sent, delivered, bounced, verified_success, verified_fail, fallback_selected, locked_out), When it occurs, Then an immutable event record is stored with supporter_id, channel, timestamp (UTC), provider message_id, IP, user agent hash, and outcome code. - Given verification outcomes, When viewed in the dashboard analytics, Then admins can filter by date range, channel, provider, outcome, and export CSV with the fields above; plaintext OTP values are never included. - Given data security requirements, When storing OTP-related data, Then only hashed codes are stored, sensitive data is encrypted at rest, and access is restricted to authorized roles; attempted unauthorized access is logged.
Trust Pass Token Issuance & Storage
"As a supporter, I want my verified status stored securely on my device for a limited time so that I can take actions without re-verifying."
Description

After successful verification and consent, issue a signed, time-limited Trust Pass bound to the supporter and device/browser. Encode expiry, scopes, and risk markers; prefer HTTP-only, Secure, SameSite cookies for storage with server-side validation and revocation lists. Minimize PII by storing hashed identifiers and rotate signing keys per policy. Support multiple devices with independent passes and refresh on re-verify within policy. Ensure resilience to token theft and replay via device binding and short-lived TTLs.
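One plausible realization of the pass is a compact HMAC-signed token (JWT-style) carrying the claims listed above; the key table, helper names, and claim layout below are assumptions, not RallyKit's actual token format.

```python
import base64
import hashlib
import hmac
import json
import time
import uuid

SIGNING_KEYS = {"k1": b"rotate-me-regularly"}   # kid -> secret (illustrative only)
ACTIVE_KID = "k1"

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_trust_pass(supporter_hash: str, device_id: str,
                     scopes: list[str], risk_markers: list[str],
                     ttl_seconds: int = 1800) -> str:
    """Issue a signed, time-limited pass; the caller stores it in an HttpOnly cookie."""
    now = int(time.time())
    claims = {"supporter_hash": supporter_hash, "device_id": device_id,
              "scopes": scopes, "risk_markers": risk_markers,
              "iat": now, "exp": now + ttl_seconds, "jti": str(uuid.uuid4())}
    header = {"alg": "HS256", "kid": ACTIVE_KID}
    signing_input = _b64(json.dumps(header).encode()) + "." + _b64(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEYS[ACTIVE_KID], signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64(sig)

token = issue_trust_pass("a1b2c3", device_id="dev-7f",
                         scopes=["action:skip_verify"], risk_markers=[])
print(token[:40], "...")
```

The returned string would then be set in an HttpOnly, Secure, SameSite cookie per the storage criteria, with the kid header enabling key rotation.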

Acceptance Criteria
Issue Signed, Time-Limited Trust Pass After Consent
- Given a supporter completes verification of phone or email and explicitly consents to Trust Pass issuance When the server issues a Trust Pass Then the token is signed with the current active signing key and includes claims: supporter_hash, device_id, scopes, risk_markers, iat, exp - And exp - iat <= the policy-configured TTL (default 30 minutes, max 24 hours) - And the token is bound to the current device/browser via a device_id unique per device - And issuance is logged with non-PII telemetry: supporter_hash prefix, device_id, exp, scopes, risk_markers - And no raw PII is embedded in the token payload
Secure Cookie Storage Configuration
- Given a Trust Pass is issued When it is stored client-side Then it is set as an HttpOnly, Secure cookie with SameSite=Lax and Path scoped to the action pages - And Domain is scoped to the minimal required host (no top-level wildcard) - And the cookie is not written to localStorage or sessionStorage - And in non-TLS environments, issuance and storage are blocked with a 426 or 400 response - And cookie size is <= 4KB and total cookies per domain stay within browser limits
Server-Side Validation and Revocation Enforcement
- Given a request presents a Trust Pass cookie When the server validates the token Then it verifies signature using the current or previous valid keys, ensures exp is in the future, confirms required scopes, and checks device binding - And if the token's jti is present on the revocation list, the request is rejected with 401 and the cookie is cleared - And revocations propagate to all application nodes within 60 seconds - And validation failures emit security telemetry without PII - And successful validations record last_seen_at per device_id
PII Minimization and Hashed Identifiers
- Given a supporter is verified by phone or email When identifiers are persisted or encoded in a Trust Pass Then only salted+peppered hashes (e.g., Argon2id) of phone/email are stored server-side and included as supporter_hash in token claims - And no raw phone numbers or emails appear in tokens, cookies, logs, or analytics - And salts are per-tenant and peppers reside in a separate secrets store - And a redaction test demonstrates that exporting token payloads cannot reconstruct original identifiers
Signing Key Rotation and Backward Verification
- Given signing key rotation occurs per policy When a new key becomes active Then new tokens are signed with the new key and include a kid header - And tokens signed by the previous key remain verifiable until their natural expiry - And the previous key can be revoked; upon revocation, affected tokens fail validation immediately - And key metadata (e.g., JWK set) is refreshed by app nodes at least every 5 minutes
Multi-Device Independent Passes and Refresh on Re-Verify
- Given the same supporter uses multiple devices or browsers When each device completes verification and consent Then each device receives an independent Trust Pass with a unique device_id and jti - And re-verifying within policy on a device refreshes that device's pass (new exp) without invalidating passes on other devices - And re-verify does not extend validity beyond the policy maximum rolling window (e.g., 24 hours) - And device-specific revocation only revokes the targeted device_id unless a global revoke is requested
Device Binding and Replay Protection
- Given a Trust Pass-bearing request is received When server compares the token's device_id to the bound device cookie value Then a mismatch results in 401 and token revocation for that jti - And each token includes a unique jti and is tracked for replay; reuse of the same token from a different IP/UA within TTL is rejected and flagged - And action endpoints require a per-request nonce tied to the device cookie; missing or stale nonce yields 403 - And default TTL is 30 minutes with ability to configure shorter durations for high-risk campaigns
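The validation side of the issuance sketch above, checking signature, expiry, revocation, and device binding as these criteria require; the revocation set and return shape are assumptions.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEYS = {"k1": b"rotate-me-regularly"}   # same illustrative key table as above
REVOKED_JTIS: set[str] = set()                  # server-side revocation list

def _b64decode(part: str) -> bytes:
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def validate_trust_pass(token: str, bound_device_id: str) -> dict:
    """Verify signature, expiry, revocation, and device binding; deny maps to HTTP 401."""
    try:
        header_b64, claims_b64, sig_b64 = token.split(".")
        header = json.loads(_b64decode(header_b64))
        claims = json.loads(_b64decode(claims_b64))
    except ValueError:
        return {"valid": False, "status": 401, "reason": "malformed"}

    key = SIGNING_KEYS.get(header.get("kid", ""))
    if key is None:
        return {"valid": False, "status": 401, "reason": "unknown_key"}
    expected = hmac.new(key, f"{header_b64}.{claims_b64}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64decode(sig_b64)):
        return {"valid": False, "status": 401, "reason": "bad_signature"}
    if time.time() >= claims["exp"]:
        return {"valid": False, "status": 401, "reason": "expired"}
    if claims["jti"] in REVOKED_JTIS:
        return {"valid": False, "status": 401, "reason": "revoked"}
    if claims["device_id"] != bound_device_id:
        REVOKED_JTIS.add(claims["jti"])          # device mismatch revokes the pass
        return {"valid": False, "status": 401, "reason": "device_mismatch"}
    return {"valid": True, "claims": claims}
```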
Cross-Page Auto Recognition & Skip Flow
"As a supporter, I want RallyKit to recognize me automatically on any action page so that I can complete calls and emails with minimal friction."
Description

On load of any RallyKit action page, detect a valid Trust Pass and bypass re-verification, pre-filling known fields and advancing users directly to one-tap action. Display a clear, dismissible indicator (e.g., “Trusted for X hours”) with a link to revoke. If the pass is missing, expired, or invalid, gracefully fall back to the standard flow. Ensure compatibility with district-specific script generation and live action tracking, and instrument events for conversion and latency metrics.
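A small sketch of the page-load decision and the "Trusted for X hours/minutes" label described here; the function names and the returned page-mode shape are assumptions.

```python
from datetime import datetime, timedelta, timezone

def trust_indicator_label(expires_at, now):
    """Return the 'Trusted for ...' label, or None when the pass is expired or absent.

    Minutes below one hour, hours at or above it, both rounded down, matching the
    indicator criteria below.
    """
    remaining = (expires_at - now).total_seconds() if expires_at else 0
    if remaining <= 0:
        return None
    if remaining < 3600:
        return f"Trusted for {int(remaining // 60)} minutes"
    return f"Trusted for {int(remaining // 3600)} hours"

def page_mode(pass_valid, expires_at, now=None):
    """Decide between one-tap bypass and the standard flow on action-page load."""
    now = now or datetime.now(timezone.utc)
    label = trust_indicator_label(expires_at, now) if pass_valid else None
    if label is None:
        return {"mode": "standard_verification", "indicator": None}
    return {"mode": "one_tap", "indicator": label, "prefill_profile": True}

t0 = datetime(2025, 6, 1, 12, 0, tzinfo=timezone.utc)
assert page_mode(True, t0 + timedelta(minutes=45), now=t0)["indicator"] == "Trusted for 45 minutes"
assert page_mode(False, None, now=t0)["mode"] == "standard_verification"
```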

Acceptance Criteria
Valid Trust Pass Bypass to One‑Tap Action
Given a supporter has a valid Trust Pass linked to their phone/email with at least 1 minute remaining And the supporter navigates to any RallyKit action page that supports Trust Pass When the page finishes initial load Then the standard verification UI does not render And known profile fields (name, email, phone, ZIP) are pre-filled And the interface advances to the one‑tap action state with the primary action enabled And no more than one navigation is required by the supporter to initiate the action
Trusted Indicator and Dismiss/Revoke Controls
Given a valid Trust Pass is recognized on an action page When the page displays the trust status Then a banner or inline indicator shows “Trusted for X hours/minutes” reflecting remaining TTL (minutes if <60, hours if ≥60, rounded down) And the indicator has a close control that hides it for the current session without affecting the pass And the indicator includes a Revoke link/button that is keyboard accessible and visible to screen readers
Fallback on Missing/Expired/Invalid Trust Pass
Given no Trust Pass is present, or the pass is expired, or signature/issuer validation fails When the supporter loads an action page Then the standard verification flow renders as the first step And no profile fields are pre-filled from Trust Pass And no trust indicator is displayed And the user can successfully complete verification and proceed
Pre‑Fill and District Script Compatibility
Given a valid Trust Pass provides name, contact, and district identifiers When the action page initializes one‑tap mode Then contact fields are pre‑filled exactly with the latest values bound to the supporter’s pass And district-specific targets are resolved before the script is shown And the generated script matches the supporter’s current district and bill status And the one‑tap action uses the resolved targets without requiring additional input
Immediate Revocation Behavior
Given a valid Trust Pass is recognized and the indicator is visible When the supporter clicks the Revoke control and confirms Then the local Trust Pass token is cleared And the server revocation endpoint is called and returns success And the page immediately returns to the standard verification flow And subsequent page loads in the same browser use the standard flow until a new pass is created
Cross‑Page Recognition Within Time Window
Given a Trust Pass was issued at T0 with a configured TTL When the supporter opens multiple different RallyKit action pages in the same browser within the TTL Then each page recognizes the pass without re‑verification and enters one‑tap mode And opening a new tab or refreshing preserves the bypass behavior while TTL > 0 And after TTL expiry, the next page load follows the standard verification flow
Instrumentation for Conversion and Latency
Given Trust Pass detection and one‑tap flow are enabled When an action page loads and detects a pass Then events trust_pass_detected and trust_pass_bypass are emitted with properties {page_id, org_id, supporter_hash, pass_remaining_seconds, detection_ms, time_to_ready_ms} And when a pass is missing/invalid, event trust_pass_fallback is emitted with {page_id, org_id, reason} And when revoke is used, event trust_pass_revoked is emitted with {page_id, org_id} And one‑tap action start and submit events include pass_present=true and correlate to live action tracking records
Pass Management & Revocation Controls
"As a supporter, I want to revoke or opt out of the Trust Pass at any time so that I stay in control of my privacy and security."
Description

Provide supporter-facing controls to view status and immediately revoke the Trust Pass on any action page, plus organizer/admin tools to revoke passes by supporter or campaign. Support automatic expiry at configurable TTL, forced re-verify on suspicious activity, and global opt-out flags that prevent future issuance. Propagate revocations across devices where feasible and confirm actions with clear messaging. Maintain a smooth fallback to re-verification after revocation.

Acceptance Criteria
Supporter Self‑Revokes Trust Pass from Action Page
Given a supporter with an active Trust Pass lands on any RallyKit action page with a visible “Manage Trust Pass” control When the supporter selects “Revoke” and confirms in a modal Then the Trust Pass linked to the supporter’s phone/email is invalidated server-side within 1 second And Trust Pass client storage (cookies/local storage) on that device is cleared immediately And the page transitions the supporter into the re‑verification flow within 2 seconds without losing prefilled action context (legislator, script, bill status) And a success message “Trust Pass revoked” is displayed with a link to learn more And an audit log entry is recorded with supporter ID (hashed), contact type, timestamp (UTC), action page ID, device fingerprint hash, and IP country code
Organizer Revokes Pass by Supporter (All Campaigns)
Given an organizer with Revocation permission selects a supporter in the dashboard by phone or email When the organizer clicks “Revoke Trust Pass” and confirms Then all active Trust Passes for that phone/email across all campaigns are invalidated within 5 seconds And the organizer sees a confirmation toast with count of revoked passes And the supporter’s next visit to any RallyKit action page requires re‑verification And an audit log entry is created including initiator user ID, supporter identifier, revocation scope=global, and reason=manual And API responds 200 with a machine‑readable result {revoked:true, count:N}
Admin Bulk Revokes Passes by Campaign
Given an admin opens a campaign’s Security settings and chooses “Revoke all Trust Passes for this campaign” When the admin confirms the bulk action Then all passes issued under that campaign are queued for revocation immediately and invalidated within 2 minutes for up to 10,000 passes (or processed in batches of 5,000/min thereafter) And supporters visiting any page of that campaign are forced to re‑verify on next request And the UI displays a progress indicator with total, completed, and failures, refreshing at least every 5 seconds until completion And a downloadable CSV report of revoked pass IDs and supporter identifiers (hashed) is available after completion And an audit entry records campaign ID, counts, duration, and initiating admin
Automatic Pass Expiry at Configurable TTL
Given a workspace‑level default TTL (15 minutes–30 days) and optional campaign override is configured When a Trust Pass reaches its TTL since issuance or last successful re‑verification (according to configured policy) Then the pass is automatically marked expired server‑side without user action And any action page request with an expired pass redirects the supporter to re‑verification with a message “Trust Pass expired — please verify again” And changing the TTL affects newly issued passes immediately; existing passes honor their original expiry unless an admin selects “Apply to existing” And TTL values outside the allowed range are rejected with validation error
Forced Re‑Verification on Suspicious Activity
Given the risk engine flags a supporter’s Trust Pass with reason in {device change burst, geo velocity anomaly, IP reputation, action rate threshold} When the supporter initiates an action page request Then the pass state changes to suspended and the supporter is required to re‑verify before continuing And the page displays a neutral message “Please re‑verify to continue” without exposing sensitive risk details And upon successful re‑verification, a new Trust Pass is issued and the action continues; upon failure or timeout, access to one‑tap actions is blocked And an audit log captures the trigger type, risk score, and correlation IDs
Global Opt‑Out Prevents Future Pass Issuance
Given a supporter submits a Do‑Not‑Issue Trust Pass request or an admin sets a global opt‑out flag for that identifier (phone/email) When any workflow attempts to issue a Trust Pass for that identifier Then issuance is blocked and the system returns {issued:false, reason:opt_out} And any existing Trust Passes for that identifier are revoked within 5 seconds And action pages allow the supporter to proceed only via standard verification each time, with a clear message “Trust Pass disabled by preference” And opt‑out can be reversed only by explicit supporter consent recorded with timestamp and channel
Cross‑Device Revocation Propagation with Clear Messaging
Given a Trust Pass is revoked or expires on one device When the same supporter opens or interacts with an action page on another device or tab Then the server rejects the pass on the next request and the client prompts re‑verification within 30 seconds of the revocation event And if a live session is open, the UI displays a non‑blocking banner “Your Trust Pass is no longer valid — please re‑verify” and disables one‑tap actions until completion And all SDK/event endpoints reflect pass_status=revoked/expired consistently within 30 seconds And an audit log correlates the revocation event with subsequent cross‑device invalidations
Compliance & Audit Logging
"As a nonprofit director, I want audit-ready records of consent and verification reuse so that I can demonstrate compliance to funders and regulators."
Description

Record immutable, audit-ready events for consent capture, verification outcomes, pass issuance/refresh, usage to skip checks (with page identifiers), revocation, and expiry. Store timestamps, policy/version references, and minimal hashed identifiers to protect privacy. Provide exportable reports (CSV/PDF) and an evidence bundle for audits, with configurable retention aligned to organizational policy and regulations (e.g., GDPR/CCPA considerations). Include automated alerts for anomalies (unusual reuse rates, failed verifications).
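The anomaly alerting described here (threshold T over window W with cooldown C) reduces to a rolling count per tenant, sketched below with assumed class and field names.

```python
import time
from collections import defaultdict, deque

class AnomalyAlerter:
    """Raise an alert when failures exceed a threshold in a window, with a cooldown."""

    def __init__(self, threshold=25, window_seconds=300, cooldown_seconds=900):
        self.threshold = threshold                # T: failed verifications tolerated
        self.window_seconds = window_seconds      # W: rolling window
        self.cooldown_seconds = cooldown_seconds  # C: suppress duplicate alerts
        self._events = defaultdict(deque)         # tenant_id -> failure timestamps
        self._last_alert = {}                     # tenant_id -> last alert time

    def record_failure(self, tenant_id, now=None):
        now = time.time() if now is None else now
        window = self._events[tenant_id]
        window.append(now)
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) <= self.threshold:
            return None
        last = self._last_alert.get(tenant_id)
        if last is not None and now - last < self.cooldown_seconds:
            return None                           # still in cooldown: no duplicate alert
        self._last_alert[tenant_id] = now
        return {"type": "failed_verification_spike", "tenant_id": tenant_id,
                "count": len(window), "window_seconds": self.window_seconds,
                "created_at": now}                # handed to the email/webhook notifier

alerter = AnomalyAlerter(threshold=3, window_seconds=60, cooldown_seconds=600)
alerts = [alerter.record_failure("org_1", now=float(t)) for t in (0, 5, 10, 15, 20)]
assert sum(a is not None for a in alerts) == 1    # one alert, duplicates suppressed
```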

Acceptance Criteria
Immutable Audit Log Ledger
- Given any Trust Pass–related event occurs, when it is recorded, then the system writes an append-only entry to the audit store. - Each entry includes: event_id (UUIDv4), tenant_id, event_type, created_at (UTC ISO 8601, ms precision), schema_version, previous_hash, entry_hash. - Any attempt to update or delete an existing entry is rejected and a security event is emitted. - A hash chain across entries validates without gaps for any queried time window. - System time is normalized to UTC; clock skew >2s is corrected and the correction noted in the entry metadata.
Consent Capture Event Recording & Privacy Safeguards
- Given a supporter submits consent, when the consent is captured, then an audit event includes: consent_type, method, policy_reference (version/URL), consent_text_hash (SHA-256), locale, action_page_id, campaign_id, created_at. - The supporter identifier is stored only as hashed_identifier_type and hashed_identifier_value (SHA-256 with per-tenant salt; no raw phone/email stored). - Source IP is stored in privacy-preserving form (last octet zeroed or hashed); user_agent is hashed. - Given a supporter revokes consent, when the revocation is submitted, then a corresponding consent_revoked event is recorded with optional reason and timestamp. - All consent-related events validate against the same schema_version and are retrievable by date range and campaign_id.
Verification Outcomes Logging & Anomaly Detection
- Given a verification attempt (email or SMS) occurs, when it completes, then an audit event includes: channel, method, outcome (pass/fail), failure_reason_code (if any), provider_reference_id, latency_ms, retry_count, rate_limit_applied (bool), created_at, hashed_identifier. - Sensitive materials (OTP/token values) are never persisted; only non-secret metadata is stored. - Given failed verifications exceed a configurable threshold T within window W for a tenant, when the threshold is crossed, then an anomaly_alert event is created and notifications are sent to configured channels (email and/or webhook) within 60 seconds. - Given a Trust Pass is reused more than threshold R across distinct action pages or devices within window W, when the threshold is crossed, then an anomaly_alert event with type "unusual_reuse_rate" is emitted and notifications are sent. - All alerts include counts, window, sample event_ids, and suppress duplicate alerts for the same condition within a configurable cooldown C.
Trust Pass Lifecycle: Issuance, Refresh, Revocation, Expiry
- Given a Trust Pass is issued after successful verification, when issuance occurs, then an audit event includes: pass_id, window_start, window_end, linked_verification_event_id, policy_version, issuer (system), created_at. - Given a pass is refreshed/extended, when the refresh occurs, then an event records prior_pass_id, new_window_end, trigger (e.g., action_page_id), policy_version, created_at. - Given an admin or automated rule revokes a pass, when revocation occurs, then an event includes: pass_id, actor_id/type, reason_code, effective_at, created_at. - Given a pass naturally expires, when window_end is reached, then an expiry event is auto-recorded with pass_id and created_at. - All lifecycle events are linked by pass_id and are queryable by range and pass_id.
Skip-Check Usage Logging with Page and Campaign Identifiers
- Given a supporter with a valid Trust Pass lands on an action page, when verification is skipped, then a pass_skip_verification event includes: pass_id, decision="skip", action_page_id, campaign_id, action_id (if applicable), policy_version, reuse_count_before, reuse_count_after, created_at. - Given the Trust Pass is invalid or expired, when verification is required, then a pass_decision event records decision="verify" and reason_code (e.g., expired, revoked). - Each skip event validates that window_start <= created_at <= window_end; otherwise the event is rejected and a security event is logged. - Skip events are emitted at most once per action_id per pass_id to prevent double counting and include idempotency_key to ensure exactly-once semantics.
Exportable Reports (CSV/PDF) and Evidence Bundle
- Given an admin requests an audit export for a tenant and date range, when generated, then the system provides: CSV of events (documented schema), PDF summary (by event_type, anomalies, counts), and manifest.json (row counts and SHA-256 of each file). - The evidence bundle includes a detached signature file for the manifest using the platform signing key; signature verification succeeds with the published public key. - The export includes a hash-chain verification report proving integrity across the range; verification passes with zero chain breaks. - Exports up to 100k events complete within 60 seconds and are available via a time-limited, access-controlled URL; all PII remains hashed. - Exported CSV columns include: event_id, tenant_id, event_type, created_at, schema_version, and event-specific fields (e.g., pass_id, action_page_id, policy_reference).
Retention, Purge, and Legal Hold Configuration Enforcement
- Given a tenant configures a retention period (90–1095 days), when the scheduled purge runs, then events older than the period are deleted and a purge_summary event (job_id, range, counts) is recorded. - Given a legal hold is applied by tag or case_id, when purge runs, then events matching the hold are retained; hold application/removal is fully audited. - Retention configuration changes are logged with old_value, new_value, actor_id, created_at. - Exports for ranges overlapping purged data exclude deleted events and manifest counts are consistent with the remaining dataset. - No raw identifiers are ever stored; hashed identifiers use per-tenant salts managed by KMS; salts are not exportable; configuration is documented in the export summary.
Organizer Controls & Analytics
"As an organizer, I want to configure the Trust Pass window and see its impact on conversions so that I can balance speed and compliance."
Description

Add dashboard settings to enable/disable Trust Pass per campaign, configure the reuse window (e.g., 2–24 hours), select allowed channels, and edit consent copy by locale. Provide analytics on adoption rate, conversion lift, time-to-action, re-verify rate, and revocations, with segmenting by campaign and device type. Support A/B testing of TTL and consent copy, and expose pass status via API/webhooks for external tooling. Include inline guidance and defaults aligned to best-practice compliance.
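For the A/B reporting, the lift and confidence-interval math called out in the criteria below can be computed with a two-proportion z-test; the helper here is a self-contained sketch with an assumed return shape.

```python
import math

def conversion_lift_report(conv_a, n_a, conv_b, n_b):
    """Compare variant (a) vs control (b) conversion rates with a two-proportion z-test.

    Returns the lift, z statistic, two-sided p-value, and a 95% CI for the difference.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    diff = p_a - p_b
    # Pooled standard error for the hypothesis test.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = diff / se_pool if se_pool else 0.0
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    # Unpooled standard error for the confidence interval on the difference.
    se_diff = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)
    underpowered = min(n_a, n_b) < 500        # sample-size rule from the criteria below
    return {"lift": diff, "z": z, "p_value": p_value, "ci_95": ci,
            "underpowered": underpowered}

print(conversion_lift_report(conv_a=180, n_a=1000, conv_b=150, n_b=1000))
```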

Acceptance Criteria
Campaign-level Trust Pass Toggle
Given an organizer with Manage Campaigns permission views a campaign’s Trust Pass settings When they set Enable Trust Pass = Off and save Then action pages under that campaign require full verification and do not accept an existing Trust Pass within 60 seconds of save And the saved state persists and is reflected on subsequent page loads And an audit log entry captures actor, timestamp, previous value, and new value Given the same organizer sets Enable Trust Pass = On and saves When a returning supporter with a valid Trust Pass within TTL visits any action page in the campaign Then the supporter bypasses re-checks and can take action without re-verifying within 60 seconds of save propagation And analytics attribute actions to the “Trust Pass enabled” period, splitting metrics pre/post toggle
Reuse Window (TTL) Configuration
Given default TTL is 12 hours with inline guidance recommending 2–12 hours for compliance When the organizer enters a TTL outside 2–24 hours or a non-integer value Then the save action is blocked with an inline validation message specifying the acceptable range When the organizer updates TTL to a valid integer hour value and saves Then newly issued Trust Passes adopt the new TTL immediately after save And existing passes retain their originally issued expiry time And the settings panel displays the exact expiry time example in the organizer’s local timezone When TTL is set above 12 hours Then a non-blocking compliance warning is displayed and logged in the audit trail
Allowed Channels Enforcement
Given checkboxes for allowed channels [Phone Calls, Emails] are shown with both selected by default When the organizer deselects Emails and saves Then Trust Pass reuse is disabled for email actions only, and returning supporters are prompted to re-verify for email while phone calls continue to honor the pass And this enforcement takes effect across all action pages in the campaign within 60 seconds And analytics record re-verify events with reason = channel_restricted When channels are re-enabled and saved Then the change is logged and reuse resumes for those channels
Locale-specific Consent Copy Management
Given a locale selector (e.g., en-US, es-ES) with best-practice default consent templates When the organizer edits consent copy for a locale and attempts to save without required tokens {org_name}, {ttl_hours}, {privacy_link} Then save is blocked and inline errors identify missing tokens When the organizer saves valid consent copy for a locale Then a preview renders with sample values and the copy propagates to matching-locale action pages within 60 seconds And the system falls back to the default template for locales without custom copy And an audit log records locale, diffs, actor, and timestamp
Analytics: Adoption, Conversion Lift, Time-to-Action, Re-verify, Revocations
Given the analytics dashboard with filters for Date Range (UTC), Campaign, and Device Type (Mobile/Desktop) When the organizer applies filters and refreshes Then the following metrics compute and display within 5 seconds:
- Adoption Rate = unique supporters who consented to Trust Pass / unique eligible supporters on action pages in range
- Conversion Lift = conversion_rate(Trust Pass traffic) − conversion_rate(control) where control is active A/B control or matched disable period; if no control, display N/A
- Median Time-to-Action (seconds) for Trust Pass vs non-Trust Pass cohorts
- Re-verify Rate = re-verify events among returning supporters within TTL / returning supporters within TTL
- Revocations = count and rate per 1,000 passes issued
And results can be exported to CSV and match API values within ±0.1% And charts/tables annotate any configuration changes (enable/disable, TTL changes) with timestamps
A/B Testing: TTL and Consent Copy
Given an experiment creation form targeting a campaign When the organizer defines variants with TTL values in 2–24 hour integers and/or consent copy variants by locale And assigns traffic splits that sum to 100% in 1% increments Then the experiment can be Started, Paused, or Stopped, and changes take effect within 60 seconds And supporters are consistently assigned to a variant (sticky) for 30 days or until experiment end Then the experiment report displays per-variant metrics: conversion rate, adoption rate, median time-to-action, sample size, and lift vs control And 95% confidence intervals for conversion rates using a two-proportion z-test are shown when sample size ≥ 500 per arm; otherwise status shows Underpowered and lift callouts are suppressed And experiment assignments and outcomes are included in CSV export
API/Webhooks: Trust Pass Status Exposure
Given an OAuth2 client with scope trust_pass.read When the client calls GET /v1/trust-pass/status?campaign_id={id}&identifier_hash={sha256_salted} Then respond 200 with JSON fields: status [active|expired|revoked|none], ttl_expires_at (ISO-8601 UTC), consent_version, allowed_channels, device_type, audit_id And do not return PII; identifier_hash uses platform salt and normalized identifier And apply rate limit 600 requests/min per client; on 429 include Retry-After header Given a webhook subscription with a shared secret When events occur (pass.issued, pass.reused, pass.expired, pass.revoked) Then deliver POST within 5 seconds with event_id, type, occurred_at (ISO-8601 UTC), campaign_id, identifier_hash, status, ttl_expires_at, allowed_channels And sign with HMAC-SHA256 in X-RallyKit-Signature including a timestamp; retry up to 8 times with exponential backoff on non-2xx; ensure idempotency via event_id
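
The webhook contract above calls for HMAC-SHA256 signing via X-RallyKit-Signature with a timestamp, plus idempotency by event_id. A minimal sketch of one way a sender could sign a delivery and a consumer could verify and deduplicate it; the exact header layout and tolerance window are assumptions, not a confirmed RallyKit format:

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-webhook-secret"  # placeholder; real secrets come from tenant config


def sign(payload: bytes, ts: int) -> str:
    # Assumed scheme: sign "<timestamp>.<body>" so the timestamp cannot be swapped out.
    mac = hmac.new(SECRET, f"{ts}.".encode() + payload, hashlib.sha256).hexdigest()
    return f"t={ts},v1={mac}"


def verify(payload: bytes, header: str, tolerance_s: int = 300) -> bool:
    parts = dict(p.split("=", 1) for p in header.split(","))
    ts = int(parts["t"])
    if abs(time.time() - ts) > tolerance_s:  # reject stale or replayed deliveries
        return False
    expected = hmac.new(SECRET, f"{ts}.".encode() + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, parts["v1"])


seen_event_ids: set[str] = set()  # consumer-side idempotency keyed on event_id

event = {"event_id": "evt_123", "type": "pass.issued", "occurred_at": "2025-01-01T00:00:00Z"}
body = json.dumps(event, separators=(",", ":")).encode()
header = sign(body, int(time.time()))
if verify(body, header) and event["event_id"] not in seen_event_ids:
    seen_event_ids.add(event["event_id"])
    # ...process the event...
```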

Fallback Verify

Multi-channel verification alternatives—email magic links, IVR phone codes, and WhatsApp—plus multilingual prompts and accessibility cues. Ensures constituent checks succeed even with SMS blocks, low data, or assistive tech needs.

Requirements

Email Magic Link Verification
"As a grassroots supporter with unreliable SMS, I want to verify via a magic link sent to my email so that I can complete my advocacy action without delays."
Description

Implements passwordless email verification by sending time-limited, single-use magic links to supporters, enabling reliable identity confirmation when SMS is blocked or unavailable. Includes secure tokenization, link expiration (e.g., 10 minutes), device and IP binding checks, and replay protection. Deep-links return users to the exact RallyKit action page to complete calls/emails with minimal friction, including a low-bandwidth HTML fallback. Integrates with RallyKit’s real-time dashboard to log send, open, and verify events for audit readiness. Admins can configure sender domain, copy, TTL, resend cooldowns, and per-campaign overrides, all within the existing 45-minute setup flow.
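
A minimal sketch of the token lifecycle this implies: a time-limited, single-use token stored only as an HMAC hash, with replay protection and a same-/24 source-IP plus user-agent check. The storage, key management, and helper names are illustrative placeholders, not the actual implementation:

```python
import hashlib
import hmac
import ipaddress
import secrets
import time

TOKEN_KEY = b"per-tenant-hmac-key"  # placeholder; managed via KMS in practice
TTL_SECONDS = 10 * 60               # default 10-minute expiry

# token_hash -> {"expires_at": ..., "used": ..., "ip": ..., "ua": ...}
issued: dict[str, dict] = {}


def token_hash(token: str) -> str:
    # Tokens are never stored or logged in plaintext; only the HMAC is kept.
    return hmac.new(TOKEN_KEY, token.encode(), hashlib.sha256).hexdigest()


def issue_link(ip: str, user_agent: str) -> str:
    token = secrets.token_urlsafe(32)
    issued[token_hash(token)] = {
        "expires_at": time.time() + TTL_SECONDS,
        "used": False,
        "ip": ip,
        "ua": user_agent,
    }
    return f"https://example.org/verify?token={token}"  # deep-link state appended elsewhere


def redeem(token: str, ip: str, user_agent: str) -> str:
    rec = issued.get(token_hash(token))
    if rec is None or time.time() > rec["expires_at"]:
        return "link_expired"
    if rec["used"]:
        return "link_already_used"  # replay protection
    same_net = ipaddress.ip_address(ip) in ipaddress.ip_network(rec["ip"] + "/24", strict=False)
    if not same_net or user_agent != rec["ua"]:  # IPv4 /24 binding as specified
        return "device_ip_mismatch"
    rec["used"] = True
    return "verify_success"
```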

Acceptance Criteria
Email Magic Link Send and Resend Cooldown
Given a supporter requests email verification after an SMS failure or opt-out When the request is submitted Then the email service acknowledges a magic link send within 5 seconds And a send event is logged with supporter_id, campaign_id, message_id, and UTC timestamp And the email includes both HTML and plain-text parts and a tracking pixel And a resend cooldown (default 60 seconds) is enforced; requests during cooldown are blocked with a visible message and no additional email is sent And a throttled event is logged for each blocked resend attempt
Magic Link Expiration and Single-Use Replay Protection
Given a magic link is issued with a TTL (default 10 minutes) When the link is clicked for the first time within TTL Then the token signature is validated, not previously used, and verification succeeds And a verify_success event is logged When the same link is clicked again Then verification is rejected with link_already_used and no state changes occur And a verify_replay_blocked event is logged When the link is clicked after TTL Then verification is rejected with link_expired and a resend option is presented And a verify_expired event is logged
Device and IP Binding Checks
Given a verification request originated from device fingerprint F and IP address I When the magic link is clicked Then verification only succeeds if the user agent fingerprint matches F and the source IP is within the same /24 as I And mismatches result in device_ip_mismatch rejection and a resend option And all outcomes are logged with observed masked IP and user agent
Deep-Link Return to Original Action Page
Given a supporter starts verification from a RallyKit action page with state (campaign_id, district_id, language, query params, UI step) When the magic link is clicked and verification succeeds Then the server issues a 302 redirect to the exact original action URL with state restored And the action sheet is focused and ready to complete in one tap/click And on verification failure, the user is redirected to a retry page with a working resend control
Low-Bandwidth HTML Fallback Verification
Given the client blocks images, CSS, and JavaScript or has <100 kbps bandwidth When the magic link page loads Then a minimal HTML page (<20 KB) renders with all required text and controls And verification completes via a standard POST without client-side JavaScript And email open is inferred on link click if the tracking pixel cannot load And page structure meets WCAG 2.1 AA semantics for labels, focus order, and contrast
Admin Config: Sender Domain, Copy, TTL, Cooldowns, Overrides
Given an admin is in the 45-minute setup flow When configuring Email Magic Link settings Then the sender domain can be selected and DNS (SPF/DKIM/DMARC) status is validated and displayed And subject/body templates accept variables {{supporter_name}}, {{campaign_name}}, {{link}} with live preview And TTL is configurable between 1–60 minutes (default 10) And resend cooldown is configurable between 30–300 seconds (default 60) And per-campaign overrides can be set and take precedence over organization defaults And a test email can be sent and corresponding events appear in the dashboard
Real-Time Audit Logging and Export
Given the RallyKit dashboard is open for a campaign When magic link emails are sent, opened, or verified Then send, open, verify_success, verify_expired, verify_replay_blocked, and device_ip_mismatch events appear within 2 seconds with UTC timestamp, supporter_id, campaign_id, message_id, request_id, masked IP, and user agent And tokens are never logged in plaintext and are stored hashed with HMAC And logs are filterable by campaign and exportable to CSV from the UI
IVR Phone Code Verification
"As a constituent using a landline, I want to receive a verification code by phone call so that I can prove my identity even without internet or SMS."
Description

Provides a voice-call verification path that reads a one-time code via text-to-speech to the supporter, supporting landlines and low-data scenarios. Users request a call, receive an automated IVR that announces a 6–8 digit code, then enter it on the RallyKit page to confirm identity. Supports DTMF input fallback, multiple language voices, rate limiting, and configurable retry/timeout logic. Captures call outcome events (busy, no-answer, voicemail detected) into RallyKit’s live activity feed and audit logs. Admins can set call window hours, code length, maximum attempts, and TTS voice/language per campaign.
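
A minimal sketch of the per-campaign IVR settings and call-window check described above; the field names, defaults, and timezone handling are assumptions beyond what the description specifies:

```python
from dataclasses import dataclass
from datetime import datetime
from zoneinfo import ZoneInfo


@dataclass
class IvrSettings:
    """Per-campaign IVR verification settings (illustrative names and defaults)."""
    code_length: int = 6                  # 6-8 digits per the requirement
    code_ttl_seconds: int = 300
    max_attempts: int = 3
    ring_timeout_seconds: int = 30
    max_retries: int = 2                  # for busy / no-answer / voicemail outcomes
    retry_backoff_seconds: int = 120
    tts_language: str = "en-US"           # English and Spanish voices at minimum
    call_window: tuple[str, str] = ("09:00", "20:00")   # campaign-local time
    campaign_timezone: str = "America/Chicago"

    def __post_init__(self) -> None:
        if not 6 <= self.code_length <= 8:
            raise ValueError("code_length must be 6-8 digits")
        if self.max_attempts < 1 or self.max_retries < 0:
            raise ValueError("attempt and retry counts are out of range")


def within_call_window(s: IvrSettings, now: datetime | None = None) -> bool:
    """Enforce call window hours in the campaign timezone before dialing out."""
    tz = ZoneInfo(s.campaign_timezone)
    local = (now or datetime.now(tz)).astimezone(tz)
    start, end = s.call_window
    return start <= local.strftime("%H:%M") <= end


settings = IvrSettings(code_length=7, tts_language="es-ES")
print(within_call_window(settings))
```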

Acceptance Criteria
Landline IVR Code Delivery and Web Entry
Given a supporter selects "Verify by phone call" on a RallyKit verification page for a campaign with IVR enabled When the request is submitted Then the system places an outbound call within the configured dial-out SLA And the TTS voice announces a one-time numeric code of the campaign-configured length (6–8 digits) twice And the code is valid only for the configured TTL When the supporter enters the exact code on the RallyKit page before expiry Then the supporter is marked Verified for the campaign and the UI advances to the next step When the supporter enters an incorrect or expired code Then verification is rejected, the remaining attempts count decrements, and a clear error message is shown
DTMF Fallback Verification via IVR
Given the supporter answers the IVR call When they select the keypad-entry option as prompted Then the IVR collects the one-time code via DTMF with the configured inter-digit timeout When the entered code matches an unexpired code for the session Then the system marks the supporter Verified and the IVR confirms success before ending the call When the entered code is invalid or expired Then the IVR informs the supporter, decrements remaining attempts, and reprompts until the configured maximum attempts is reached Then, upon exhausting attempts, the IVR ends the call and the page displays an appropriate error
Language and TTS Voice Selection per Campaign
Given a campaign has a configured TTS language and voice When an IVR call is placed for that campaign Then all prompts and the code digits are spoken using the configured language/voice And at minimum English and Spanish voices are available for selection And if the configured language is unavailable, the system falls back to English and logs the fallback
Rate Limiting on Call Requests
Given a supporter has requested IVR verification for a campaign When additional call requests exceed the campaign-configured rate limit per time window Then no additional calls are placed, the API responds with a rate-limit error, and the UI displays a retry-after message And a rate-limited event is logged with supporter, campaign, and window metadata When the cooldown window elapses Then the supporter can request a new call successfully
Retry and Timeout Behavior with Answering Outcomes
Given an outbound IVR call attempt is initiated When the call is busy or not answered within the configured ring timeout Then the system retries up to the campaign-configured maximum retry attempts with the configured backoff When voicemail is detected before a human greeting Then the IVR does not read the code, the attempt is marked voicemail_detected, and the retry policy is applied When a human answers Then the call proceeds to code delivery as normal
Call Outcome Events in Live Activity Feed and Audit Logs
Given any IVR verification call attempt occurs When the attempt reaches an outcome (answered, busy, no_answer, voicemail_detected, verify_success, verify_failure) Then an event is appended to the campaign’s live activity feed and audit logs within the configured event latency And each event includes timestamp, outcome, attempt number, language/voice, and call identifier And logs are immutable and queryable by campaign and supporter
Call Window Hours Enforcement
Given a campaign has configured call window hours When a supporter requests an IVR call outside the defined window (based on the campaign timezone) Then the system does not place a call and informs the supporter of the next available window (or schedules per configuration) When the window opens and a call was scheduled Then the system places the call automatically and logs that it was delayed due to window enforcement
WhatsApp Verification Flow
"As a mobile-first supporter, I want to confirm my identity through WhatsApp so that I can verify quickly using the app I already rely on."
Description

Enables verification over WhatsApp Business by sending a templated message with either a one-time code or a tap-to-verify quick reply. Handles opt-in capture, region-specific template approvals, and localized content. Supports graceful fallback to email or IVR if WhatsApp delivery fails or if the user does not respond within a configurable timeout. All message sends, deliveries, reads, and verifications sync to RallyKit’s real-time dashboard and audit logs. Admin controls include template selection, locale routing, timeout thresholds, and per-country enablement to comply with platform policies.
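
A minimal sketch of the pre-send gate this implies: confirm opt-in, find an approved template for the recipient's country and locale, and otherwise hand off to the configured fallback order. The data shapes and function names are illustrative:

```python
from dataclasses import dataclass


@dataclass
class WhatsAppTemplate:
    template_id: str
    country: str   # ISO 3166-1 alpha-2, e.g., "ES"
    locale: str    # e.g., "es-ES"
    approved: bool


def pick_template(templates: list[WhatsAppTemplate], country: str, locale: str):
    for t in templates:
        if t.approved and t.country == country and t.locale == locale:
            return t
    return None


def route_whatsapp(has_opt_in: bool, country: str, locale: str,
                   templates: list[WhatsAppTemplate], fallback_order: list[str]) -> dict:
    """Decide whether WhatsApp can be used, or which fallback channel to start."""
    if not has_opt_in:
        return {"action": "block", "reason": "no_consent"}
    template = pick_template(templates, country, locale)
    if template is None:
        next_channel = fallback_order[0] if fallback_order else None
        return {"action": "fallback", "reason": "no_approved_template", "channel": next_channel}
    return {"action": "send", "template_id": template.template_id}


templates = [WhatsAppTemplate("verify_otp_es", "ES", "es-ES", approved=True)]
print(route_whatsapp(True, "ES", "es-ES", templates, ["email", "ivr"]))
```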

Acceptance Criteria
Opt-In Capture and Consent Enforcement
Given a supporter has not provided WhatsApp opt-in and has a valid phone number on file When they initiate verification via RallyKit and choose WhatsApp Then the system presents localized opt-in text and capture UI, and records consent (timestamp, channel=WhatsApp, source=RallyKit, locale, policy_version) before any message send And no WhatsApp message is sent until consent is stored and retrievable in audit logs Given a supporter has a valid, unexpired opt-in and has not opted out When verification is initiated Then the system uses the existing opt-in without re-prompting and proceeds to send Given the supporter replies STOP/UNSUBSCRIBE via WhatsApp at any time When RallyKit receives the inbound message Then the system marks opt-in as revoked, blocks further sends, and logs the change with reason=opt_out
Region and Locale Template Approval Enforcement
Given an admin has enabled WhatsApp verification for a recipient's country and selected templates per locale When sending a verification message to that recipient Then the system selects a template approved for the recipient's country and locale, and validates that all required placeholders are provided And if no approved template exists for the country-locale pair, the send is blocked, reason is logged, and the configured fallback channel is initiated And the dashboard event shows template_id, country, locale, approval_status=verified
Localized Content and Accessibility Cues
Given a recipient's locale can be resolved from user profile, campaign settings, or phone country mapping When a WhatsApp verification message is sent Then the message body, button labels, and consent text are rendered in the resolved locale And if the locale is unsupported, English (en) is used and the fallback is recorded in logs And the OTP (if used) is formatted with spaced digits (e.g., '1 2 3 4 5 6') and includes a plain-text instruction for screen-reader users (e.g., 'Reply with the six digits') And quick reply options (if used) include numeric prefixes (e.g., '1. Verify me') for accessibility
OTP or Quick Reply Verification over WhatsApp
Given admin configuration method=OTP When a WhatsApp verification is initiated Then a unique 6-digit code is generated, bound to the session, and sent via an approved template; TTL equals the configured timeout; max attempts <= configured limit And entering the correct code within TTL marks the user as verified and prevents code reuse; incorrect attempts increment a counter and return an error message; exceeding max attempts locks verification for the configured cooldown and logs reason Given admin configuration method=QuickReply When the recipient taps the 'Verify' quick reply within TTL Then RallyKit validates the payload against the session, marks the user as verified, and prevents replay of the same payload
Fallback to Email or IVR on Failure or Timeout
Given a WhatsApp verification message is sent When delivery status is 'failed' or not 'delivered' within the configured delivery SLA Then RallyKit triggers the next available fallback channel according to admin order (email magic link, IVR code), and logs fallback_reason and start_time Given a WhatsApp message is delivered but no response is received within the configured response timeout When the timeout elapses Then RallyKit triggers the next fallback channel and records the path taken And if no fallback channels are available or contact data is missing, RallyKit displays an actionable error to the user and logs verification_status=failed, reason=no_available_fallback
Real-Time Dashboard and Audit Logging
Given WhatsApp verification events occur (send, delivery, read, reply, success, failure, fallback) When viewing the RallyKit real-time dashboard Then each event appears within 5 seconds of occurrence with fields: user_id, channel=WhatsApp, event_type, template_id, locale, country, message_id, timestamp, status And aggregate counters for 'Sent', 'Delivered', 'Read', 'Verified', 'Failed', and 'Fallbacks' update accordingly And the audit log contains an immutable trail linking user, admin configuration snapshot, and all event transitions
Admin Controls for Templates, Locales, Timeouts, and Country Enablement
Given an admin opens RallyKit's verification settings When they configure WhatsApp templates per locale and country enablement Then the UI validates that a country cannot be enabled unless at least one approved template exists for that country and default locale And the admin can set delivery SLA and response timeout in minutes within allowed ranges, and save changes And saved changes are applied within 1 minute, versioned, and visible in audit logs; invalid configurations are blocked with specific validation messages
Auto-Fallback Routing and Retry Logic
"As an organizer, I want verification to automatically switch to the next best channel when one fails so that supporters can complete actions without getting stuck."
Description

Centralized logic that detects delivery or engagement failures (e.g., SMS blocked, undelivered WhatsApp, no email open) and automatically offers or triggers alternative verification channels in a single, unified flow. Maintains a single verification session across channels, preventing duplicate tokens and ensuring the first successful verification completes the action. Configurable channel priority, retry intervals, and maximum attempts at org or campaign level, with clear, progressive UX prompts to switch methods. All transitions and outcomes are tracked in real time for auditability and conversion analytics.
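
A minimal sketch of the routing loop described here: one verification session walks the effective channel priority, skips disabled channels, honors per-channel max attempts, and stops at the first success. The channel callback stands in for real delivery and engagement signals:

```python
from typing import Callable


def run_verification(session_id: str,
                     priority: list[str],
                     disabled: set[str],
                     max_attempts: dict[str, int],
                     attempt_channel: Callable[[str, str, int], str]) -> dict:
    """Walk channels in priority order until one verifies or all are exhausted.

    attempt_channel(session_id, channel, attempt_no) returns
    "verified", "failed", or "no_engagement" (stand-in for real signals).
    """
    timeline = []
    for channel in priority:
        if channel in disabled:
            continue
        for attempt in range(1, max_attempts.get(channel, 1) + 1):
            outcome = attempt_channel(session_id, channel, attempt)
            timeline.append({"channel": channel, "attempt": attempt, "outcome": outcome})
            if outcome == "verified":
                # First success wins: other tokens for this session are invalidated upstream.
                return {"status": "verified", "timeline": timeline}
            if outcome == "no_engagement":
                break  # move to the next channel rather than retrying this one
    return {"status": "exhausted", "timeline": timeline}


def fake_attempt(session_id: str, channel: str, attempt: int) -> str:
    return "verified" if channel == "email" else "failed"


print(run_verification("sess-1", ["sms", "email", "ivr"], {"whatsapp"},
                       {"sms": 2, "email": 1, "ivr": 2}, fake_attempt))
```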

Acceptance Criteria
SMS Failure Auto-Fallback to Email Magic Link
Given a verification initiated via SMS for a supporter with a valid email on file and campaign channel priority [SMS, Email, IVR, WhatsApp] When the SMS delivery status returns blocked or undelivered within 20 seconds or no delivery receipt is received within the configured deliveryTimeout Then the system automatically offers and sends an email magic link within 5 seconds, maintains the same verification session ID, and records transition reason "sms_failed" And the UI displays a prompt to switch to email with a clear call-to-action in the user’s locale And no new verification token is created outside the session; the email token is bound to the existing session And if the email link is clicked and verified, the session state changes to "verified" and any pending SMS retries are canceled
Email No-Open Retry to IVR
Given verification via email magic link with a valid phone number and channel priority [Email, IVR, SMS, WhatsApp] When the email is delivered but no open or click is detected within the configured noOpenTimeout Then the system triggers an IVR call with a one-time code bound to the same session and logs transition reason "email_no_engagement" And the UI displays an option "Call me now" and a fallback notice in the user’s locale And on successful IVR code entry, the session is marked "verified" and any later email link click returns "already verified" without duplicating the action
WhatsApp Undelivered Auto-Fallback to IVR
Given verification via WhatsApp with a registered phone and IVR enabled When the WhatsApp API returns undelivered or failed within the configured deliveryTimeout Then the system places an IVR call within 10 seconds using the same session and logs attemptNumber incremented and reason "whatsapp_failed" And WhatsApp is not retried more than maxAttempts for that channel during this session And if IVR fails, the next channel is attempted according to priority without exceeding per-channel maxAttempts
Single Verification Session and First Success Wins
Given multiple channels have been initiated for the same supporter within a verification flow When any channel completes verification successfully Then the session state is set to "verified", a single action completion event is emitted, and all other in-flight channel attempts are canceled or made invalid And any subsequent attempt to verify using other channel tokens within the session validity window returns "already verified" without side effects And no more than one audit event of type "action_completed" exists for the session, and there are no duplicate tokens active
Configurable Channel Priority and Disabled Channel Handling
Given org-level default priority exists and a campaign may override it, and certain channels may be disabled at either level When a verification flow requires a fallback Then the sequence of attempted channels strictly follows the effective priority with disabled channels skipped And updates to campaign priority are reflected within 60 seconds and used for new sessions And the observed attempt order in logs matches the effective configuration for the session
Retry Intervals and Max Attempts Enforcement
Given per-channel settings for retryInterval and maxAttempts are configured When a channel attempt fails transiently Then the system schedules retries no sooner than retryInterval and no more than maxAttempts for that channel And upon reaching maxAttempts for a channel, the system proceeds to the next channel or ends with status "exhausted" if none remain And the UI communicates retry or exhaustion states with clear, progressive prompts
Real-Time Audit Logging and Analytics
Given a verification session with fallbacks and retries When any transition occurs (attempt, success, failure, cancel) Then an audit event is recorded within 2 seconds containing sessionId, userId (or anonymous hash), channel, attemptNumber, timestamp (ISO-8601 UTC), reasonCode, and outcome And analytics surfaces a per-session timeline and aggregate conversion/fallback metrics, including channel success rates and average attempts And an export endpoint provides these events in NDJSON or CSV as a complete, ordered record for the session
Multilingual Prompts and Localization
"As a bilingual supporter, I want verification instructions in my preferred language so that I can understand each step and confidently complete the process."
Description

Delivers fully localized verification prompts and system messages across email, IVR, and WhatsApp, supporting right-to-left scripts and regional formats. Auto-detects locale from browser, device, or phone metadata with user override, and falls back gracefully when translations are missing. Includes translation keys for all verification steps, error states, and accessibility cues, with per-organization copy overrides in the RallyKit dashboard. Ensures consistent tone and compliance with platform rules (e.g., WhatsApp templates) while minimizing setup time via prebuilt translations for common languages used by nonprofits.
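
A minimal sketch of the two lookups this describes: locale selection by precedence (user override, then browser/device, then phone metadata, then organization default, then en-US) and per-key copy resolution down the fallback chain. The dictionary shapes are illustrative:

```python
def resolve_locale(user_override, accept_language, phone_locale, org_default) -> str:
    """Precedence: override > browser/device > phone metadata > org default > en-US."""
    for candidate in (user_override, accept_language, phone_locale, org_default):
        if candidate:
            return candidate
    return "en-US"


def resolve_copy(key: str, locale: str, org_default_locale: str,
                 org_overrides: dict, stock: dict) -> str:
    """Fallback chain: org override (locale) > org override (org default) >
    stock (locale) > stock (org default) > stock en-US. Raw keys never surface."""
    for table, loc in ((org_overrides, locale), (org_overrides, org_default_locale),
                       (stock, locale), (stock, org_default_locale), (stock, "en-US")):
        text = table.get(loc, {}).get(key)
        if text:
            return text
    # An i18n_missing_key event would be logged upstream; the last resort keeps the UX intact.
    return stock["en-US"][key]


stock = {"en-US": {"verify.code_instructions": "Enter the 6-digit code we sent you."},
         "es-ES": {"verify.code_instructions": "Introduce el código de 6 dígitos."}}
locale = resolve_locale(None, "es-ES", None, "en-US")
print(resolve_copy("verify.code_instructions", locale, "en-US", {}, stock))
```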

Acceptance Criteria
Auto-Detect Locale with User Override on Verification Start
Given a constituent begins verification via email, IVR, or WhatsApp When RallyKit determines locale using available signals Then it selects the locale according to precedence: user override > browser/device Accept-Language > phone country/WhatsApp profile > organization default > platform default (en-US) And the initial prompt is rendered in the selected locale for the active channel And a language switcher is available within 2 taps/clicks/keypresses and labeled in-language (e.g., Español) And switching language immediately updates subsequent prompts and is persisted for the session and verification record And the final chosen locale is logged with channel and organization ID for audit and analytics
Org-Level Copy Overrides in Dashboard for Verification Prompts
Given an organization admin edits a translation key (e.g., verify.code_instructions) for a specific locale in the RallyKit dashboard When they save and publish the change Then the override is applied to all new verification flows across email, IVR, and WhatsApp for that organization within 60 seconds And placeholder validation prevents saving if required tokens (e.g., {code}, {expires_in}) are missing or malformed And a preview for each channel displays the localized message exactly as it will render (including placeholders resolved with sample values) And an audit log entry records who changed what, previous value, new value, timestamp, and affected locales And the admin can roll back to a previous version in one action, restoring prior content within 60 seconds
Graceful Fallback When Translation Key Is Missing
Given a verification message is being rendered for a selected locale When the required translation key is missing for that locale Then the system falls back in this order: organization override for selected locale > organization override for org default locale > RallyKit stock translation for selected locale > RallyKit stock translation for org default locale > platform default (en-US) And end users never see raw keys or placeholder identifiers And an internal event i18n_missing_key is logged with organization ID, channel, key, active locale, and chosen fallback locale And a daily report aggregates missing keys per organization if counts exceed a configurable threshold
Right-to-Left Scripts and Regional Formats Across Channels
Given the active locale uses a right-to-left script (e.g., ar, he) When rendering email and any on-page prompts linked from verification Then directionality is set via dir="rtl" and Unicode isolation to ensure correct punctuation mirroring and word order And numerals follow the organization’s setting for the locale (Arabic-Indic or Latin digits) and verification codes are grouped for readability (e.g., 123 456) And dates, times, and numbers use CLDR formats for the active locale and the appropriate timezone And for WhatsApp (plain text) the message order and bracket characters are correctly mirrored And for IVR the TTS voice matches the locale and reads verification digits individually with culturally appropriate pacing
WhatsApp Locale Template Compliance
Given verification is initiated over WhatsApp When sending a localized verification message Then RallyKit uses only Meta-approved templates for that specific locale and template family And all required placeholders are populated and validated before send And if a locale-specific template is unavailable, the send is blocked unless a fallback default-locale template is pre-approved and policy-permitted, in which case that template is used And the outbound message logs include template name, locale, template ID, and WhatsApp message ID for audit And any compliance failure surfaces an actionable error to the admin without exposing internal codes to the end user
IVR Multilingual Delivery and Accessibility
Given a constituent opts to receive the verification code via phone (IVR) When the active locale is determined Then the IVR plays prompts in that language using a matching TTS voice And the verification code is spoken as single digits, repeated twice, with 0.7–1.0 second gaps between digits And callers can press 1 to repeat, 0 to change language, or 9 to switch to an alternative channel (SMS/email/WhatsApp) where permitted And DTMF inputs are recognized with >99% accuracy and timeouts trigger a clear, localized retry prompt And call recordings and prompt selections are captured in the audit log with timestamps and locale
Prebuilt Translation Packs and Setup Time
Given a new organization enables localization for Fallback Verify When the admin opens the setup wizard Then at least 10 prebuilt language packs are available (including en, es, fr, pt, zh, ar, hi, fil, vi, sw) covering all verification steps, error states, and accessibility cues with ≥95% key coverage per pack And the wizard recommends languages based on organization region and supporter demographics (if provided) And the admin can enable three languages, preview channel-specific prompts, and publish in ≤45 minutes under standard network conditions And any pack with <100% coverage clearly lists missing keys and applies the defined fallback chain without end-user exposure of raw keys
Accessible Verification UX (WCAG 2.2 AA)
"As a supporter using assistive technology, I want an accessible verification flow so that I can complete actions independently without barriers."
Description

Implements accessible verification interfaces across web and voice, meeting WCAG 2.2 AA: keyboard-only navigation, focus management, ARIA labels, high-contrast themes, adjustable text size, and clear error messaging. Provides screen-reader-optimized flows, audio speed controls for IVR, and alternatives to visual CAPTCHAs. Includes low-bandwidth modes with reduced imagery and semantic HTML to ensure assistive tech compatibility. Accessibility checks are added to CI with automated linting and manual QA scripts to ensure ongoing compliance across channels and languages.

Acceptance Criteria
Keyboard-Only Verification: Web Magic Link and Code Entry
Given a user navigates the verification pages using only a keyboard When they move focus through inputs, buttons, links, modals, and toasts Then all interactive elements are reachable with Tab and Shift+Tab in a logical order without traps And a visible focus indicator is present on every interactive element with at least 3:1 contrast against adjacent colors And Enter or Space activates the focused control consistently And closing any modal or toast returns focus to the invoking control And a Skip to main content link is the first focusable element on page load and moves focus correctly
Screen Reader Semantics, Live Regions, and Error Handling
Given a user runs a screen reader with the site set to their selected language When the verification form loads and when state changes or errors occur Then each input has a programmatic label via label or aria-labelledby, and helper text via aria-describedby And the page and form elements expose correct lang attributes; switching language updates labels and hints accordingly And status updates (code sent, verified) are announced once via aria-live=polite without stealing focus And validation errors are announced via aria-live=assertive and are associated to their fields; the first invalid field receives focus And error text states the problem and remedy without relying on color alone and provides a resend code action
High Contrast and Adjustable Text Size Persist Across Verification
Given a user enables High Contrast theme and/or increases text size up to 200% or applies recommended text spacing When they use all verification screens and components Then normal text has contrast ratio >= 4.5:1 and large text >= 3:1; interactive controls meet >= 3:1 against adjacent colors And no content or functionality is lost at 200% zoom or with text spacing (1.5 line height, 0.12em letter, 0.16em word, 2em paragraph) And interactive targets meet a minimum size of 24 by 24 CSS pixels or provide equivalent spacing per WCAG 2.2 AA And user-selected visual settings persist across verification steps and subsequent sessions for at least 30 days
IVR Verification: Speed Control, Repetition, and DTMF Input
Given a caller uses the IVR verification flow When prompts play and the one-time code is read Then the caller can adjust playback speed via DTMF with at least three speeds (e.g., slower, normal, faster) And the caller can repeat the last prompt and have the code read at least twice on demand And DTMF input accepts each digit with clear confirmation tones and a timeout of at least 5 seconds between digits And the caller can switch languages during the call and hear prompts in the selected language without restarting And speech recognition is not required to complete verification
Human Verification Without Visual CAPTCHA
Given automated traffic risk is elevated during verification When additional human verification is required Then the user is offered at least one non-visual alternative (email magic link or IVR one-time code) without any image- or puzzle-based CAPTCHA And the alternative is fully operable by keyboard and screen readers and works in low-bandwidth mode And the selection is localized to the user’s language and persists if they retry within the same session And verification outcomes are logged without collecting biometric or behavioral profiling data
Low-Bandwidth Mode: Reduced Imagery and Semantic HTML
Given a user enables Low Bandwidth mode or a slow connection is detected When loading and using verification pages Then total transfer size per verification page is <= 200KB excluding cached fonts, with images and nonessential scripts deferred or removed And all core interactions (enter contact, send code, enter code, submit) remain functional with semantic HTML and minimal scripting And nonessential animations are disabled when prefers-reduced-data or prefers-reduced-motion is set And pages load in under 3 seconds at 400 kbps down and 150 ms RTT in controlled tests
Accessibility CI Gates and Manual QA for Verification Flows
Given the codebase is built and tests are executed in CI When automated accessibility checks run on verification UIs (web) and transcripts/prompts (IVR) Then axe-core or equivalent reports zero critical and zero serious violations on supported pages at desktop and mobile viewports And linting enforces semantic HTML, ARIA validity, color contrast, and focus rules with the build failing on violations And manual QA scripts covering keyboard-only, screen reader, high-contrast, zoom 200%, and IVR speed controls are executed each release with results stored and regressions blocked
Verification Audit Trail and Webhooks
"As a nonprofit director, I want a verifiable history of each supporter’s verification attempts so that I can provide audit-ready proof and improve conversion over time."
Description

Creates an immutable, exportable log of verification attempts and outcomes across channels, including timestamps, channel, status, and minimal PII needed for compliance. Exposes webhooks for verification events (initiated, delivered, verified, failed) to sync with external CRMs or data warehouses and supports CSV exports for board or funder reporting. Data retention, encryption at rest, and role-based access controls align with RallyKit’s audit-ready promise while honoring opt-in/consent requirements for email, voice, and WhatsApp.
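
A minimal sketch of the append-only event shape and masked CSV export this describes; the field names follow the acceptance criteria below, while the hashing salt, storage layer, and sequencing logic are placeholders:

```python
import csv
import hashlib
import io
import uuid
from datetime import datetime, timezone

SALT = b"per-org-salt"       # placeholder; real salts are managed per tenant
_audit_log: list[dict] = []  # stand-in for append-only storage


def supporter_ref(contact: str) -> str:
    """Hash contact identifiers so no plaintext email/phone is ever stored."""
    return hashlib.sha256(SALT + contact.strip().lower().encode()).hexdigest()


def append_event(attempt_id: str, org_id: str, campaign_id: str, channel: str,
                 event_type: str, contact: str, status_code: str = "ok",
                 failure_reason: str = "") -> dict:
    event = {
        "event_id": str(uuid.uuid4()),
        "attempt_id": attempt_id,
        "org_id": org_id,
        "campaign_id": campaign_id,
        "channel": channel,
        "event_type": event_type,  # initiated | delivered | verified | failed
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "sequence_number": sum(1 for e in _audit_log if e["attempt_id"] == attempt_id) + 1,
        "status_code": status_code,
        "failure_reason": failure_reason,
        "supporter_ref_hashed": supporter_ref(contact),
    }
    _audit_log.append(event)  # append-only; updates and deletes are rejected elsewhere
    return event


def export_csv(events: list[dict]) -> str:
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(events[0].keys()))
    writer.writeheader()
    writer.writerows(events)  # csv module quoting covers embedded commas/newlines
    return out.getvalue()


append_event("att-1", "org-1", "camp-1", "email", "initiated", "casey@example.org")
print(export_csv(_audit_log))
```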

Acceptance Criteria
Immutable Verification Event Logging Across Channels
Given a supporter performs verification via Email Magic Link, IVR Phone Code, or WhatsApp When a verification event occurs (initiated, delivered, verified, failed) Then an immutable log record is appended containing: event_id (UUID), attempt_id, org_id, campaign_id, channel, event_type, ISO 8601 timestamp with timezone, sequence_number, status_code, failure_reason (if any), consent_reference_id (if any), and supporter_ref (hashed) And contact identifiers are never stored in plaintext; stored values are hashed or masked (last-4 where applicable) And retrieval of records for an attempt returns events ordered by sequence_number and timestamp ascending And attempts to update or delete existing log records via API or UI are rejected with 403 and the denial is itself recorded in the audit log
Webhook Delivery for Verification Events
Given an organization configures a webhook endpoint with a shared secret When verification events (initiated, delivered, verified, failed) are produced Then a POST is sent within 10 seconds of event creation with JSON payload including event_id, attempt_id, org_id, campaign_id, channel, event_type, timestamp, sequence_number, status_code, failure_reason (if any), and consent_reference_id (if any) And headers include X-RallyKit-Event, X-RallyKit-Signature (HMAC-SHA256), X-RallyKit-Delivery-Timestamp, and X-RallyKit-Idempotency-Key And the signature validates against the shared secret for the exact payload delivered And deliveries for the same attempt_id are emitted in order of sequence_number And on non-2xx response or 5s timeout, the delivery is retried with exponential backoff up to 6 attempts over 60 minutes; a final failure is recorded and visible in delivery logs And repeated deliveries for the same event_id use the same idempotency key to enable downstream deduplication
CSV Export for Board/Funder Reporting
Given a user with Export permission selects a date range and optional filters (channel, campaign, event_type, status) When they request a Verification Audit CSV Then a CSV is generated with a header row and fields: event_id, attempt_id, org_id, campaign_id, channel, event_type, timestamp_utc, sequence_number, status_code, failure_reason, consent_reference_id, supporter_ref_hashed And the CSV is UTF-8 encoded, RFC 4180 compliant, uses a comma delimiter, and quotes fields with embedded commas/newlines And contact details are excluded or masked; no plaintext emails/phone numbers are present And for up to 100,000 rows the file is available to download within 60 seconds; larger exports run asynchronously and notify upon completion And the download link is a pre-signed HTTPS URL expiring in 60 minutes or less; accessing after expiry returns 403 And the number of rows exactly matches the filtered result count
Role-Based Access Control for Audit Trail and Webhook Secrets
Given organization roles Admin, Compliance Analyst, and Organizer When accessing the Verification Audit Trail UI or API Then Admin and Compliance Analyst can view records; Organizer receives 403 And only Admin and Compliance Analyst can generate CSV exports; Organizer receives 403 And only Admin can create, rotate, or delete webhook endpoints and shared secrets; other roles receive 403 And all access attempts and configuration changes are themselves recorded in the audit log with actor_id, action, target, and timestamp
Consent Enforcement by Channel
Given a supporter without explicit opt-in for Email, Voice, or WhatsApp When a verification is initiated for a non-consented channel Then no outbound message or call is sent And an audit event is appended with event_type=failed and failure_reason=no_consent And no webhook with event_type=delivered is emitted for that channel Given a supporter with valid opt-in for a channel When a verification is initiated on that channel Then events are produced normally (initiated, delivered, verified/failed) and include consent_reference_id
Data Retention and Legal Hold
Given an organization-level retention policy is set in days (default 365) When an audit record exceeds the retention period and is not under legal hold Then it is purged from primary storage and indexes and no longer appears in queries or exports And a purge summary entry is recorded including org_id, time_window, count_purged, and actor/system reference Given an Admin places an attempt_id or campaign under legal hold with a reason When the scheduled purge runs Then records under legal hold are retained and marked hold_active=true until the hold is removed by Admin And webhook redelivery or export of purged records is not possible
Encryption and Transport Security for Audit Data
Given webhook endpoints are configured When a user attempts to save a non-HTTPS endpoint (http or ws) Then validation fails with an error and the endpoint is not saved When delivering webhooks Then TLS 1.2+ is required; endpoints with invalid certificates cause delivery to fail with reason=tls_error and follow the retry policy When generating CSV exports Then the download is provided only via a pre-signed HTTPS URL that expires in 60 minutes or less; access after expiry returns 403 And an automated configuration check endpoint reports encryption_at_rest=true for audit storage and export storage; if encryption is disabled, the system status is Degraded and exports are blocked until encryption is enabled

Ring Templates

Prebuilt permission tiers tailored for coalitions (e.g., Owner, Publisher, Script Editor, Data Viewer, Attribution-Only, Billing Partner). Apply in one click to a campaign or partner, then fine-tune per need. Cuts setup time from hours to minutes, reduces permission mistakes, and keeps multi-org teams consistent as they scale.

Requirements

Default Ring Template Library
"As a coalition owner, I want to choose from prebuilt permission tiers so that I can onboard partners quickly and consistently without designing roles from scratch."
Description

Seed RallyKit with a curated set of coalition-ready permission tiers (Owner, Publisher, Script Editor, Data Viewer, Attribution-Only, Billing Partner). Each tier maps to explicit capabilities (e.g., create/edit scripts, publish action pages, view/export supporter PII, manage billing, view attribution-only metrics, access audit logs). Provide one-click selection from a library with clear, human-readable summaries and detailed capability matrices. Templates are localized, accessible, and tenant-scoped, with sensible least-privilege defaults to reduce setup time and mistakes. Integrates with existing user, partner, and campaign entities so templates can be referenced anywhere permissions are required.
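
A minimal sketch of the seeded tiers expressed as a machine-checkable capability matrix; the capability keys mirror the list above, while the exact grants per tier are illustrative least-privilege defaults rather than the shipped definitions:

```python
CAPABILITIES = [
    "create_edit_scripts",
    "publish_action_pages",
    "view_export_supporter_pii",
    "manage_billing",
    "view_attribution_only_metrics",
    "access_audit_logs",
]

# Illustrative least-privilege defaults for the six seeded tiers.
RING_TEMPLATES: dict[str, set[str]] = {
    "Owner": set(CAPABILITIES),
    "Publisher": {"create_edit_scripts", "publish_action_pages"},
    "Script Editor": {"create_edit_scripts"},
    "Data Viewer": {"view_attribution_only_metrics"},
    "Attribution-Only": {"view_attribution_only_metrics"},
    "Billing Partner": {"manage_billing"},
}


def allows(template: str, capability: str) -> bool:
    return capability in RING_TEMPLATES.get(template, set())


assert allows("Publisher", "publish_action_pages")
assert not allows("Script Editor", "publish_action_pages")      # matches the least-privilege criteria
assert not allows("Billing Partner", "view_export_supporter_pii")
```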

Acceptance Criteria
Seeded Default Templates Visible for New Tenant
Given a newly created tenant and an Owner user is logged in When the user opens Settings > Permissions > Ring Templates Then the page loads within 1.5 seconds (p95) for tenants with <=100 templates And exactly six default templates are listed with names: Owner, Publisher, Script Editor, Data Viewer, Attribution-Only, Billing Partner And each template shows a human-readable summary and a capability matrix with the following capabilities: create/edit scripts; publish action pages; view/export supporter PII; manage billing; view attribution-only metrics; access audit logs And no duplicate template names are present
One-Click Apply Template to Campaign or Partner
Given Campaign X and Partner Y exist and the current user has Owner role When the user selects the "Publisher" template and clicks Apply to Campaign X Then the assignment completes within 2 seconds and a success confirmation is displayed And users granted access to Campaign X inherit Publisher capabilities immediately And an audit log entry records who applied which template to which entity with timestamp When the user selects the "Billing Partner" template and clicks Apply to Partner Y Then Partner Y receives Billing Partner capabilities and cannot access supporter PII or scripts
Least-Privilege Defaults Enforced by Role
Given a user with Script Editor on Campaign X When they attempt to publish an action page Then access is denied with an explanatory message and HTTP 403 is logged Given a user with Data Viewer on Campaign X When they attempt to view or export supporter PII Then access is denied and they can view aggregate metrics only Given a user with Attribution-Only on Campaign X When they open analytics Then only attribution metrics are shown and supporter-level records are hidden Given a user with Billing Partner at tenant scope When they open Billing settings Then access is granted and attempts to access scripts, publish actions, or supporter PII are denied Given a user with Publisher on Campaign X When they publish an action page Then access is granted and attempts to access audit logs or billing are denied Given a user with Owner at tenant scope When they perform any capability listed Then access is granted
Tenant-Scoped Template Isolation
Given tenants A and B exist and a user belongs to tenant A When the user queries the template library API Then only templates from tenant A are returned When the user attempts to reference a template_id belonging to tenant B Then the operation fails with 404 or 403 and no data leakage occurs When applying a template in tenant A Then it cannot be assigned to entities in tenant B
Localization of Templates and Capability Labels
Given the application locale is set to es-ES When the user opens the Ring Templates library Then all visible template names, summaries, capability labels, action buttons, and messages appear in Spanish with no missing keys or fallback tokens Given a translation key is missing When the library renders Then a human-readable English fallback is shown and the missing key is logged for remediation
Accessibility Compliance of Ring Templates UI
Given a keyboard-only user When navigating the library and applying a template Then all interactive elements are reachable in logical tab order and show visible focus states Given a screen reader user When reading a template card Then role name, summary, and capability matrix are announced with meaningful labels and relationships Then color contrast of text and interactive elements meets WCAG 2.1 AA And automated accessibility scan (axe-core or equivalent) reports zero serious or critical violations on the library and apply flows
Capability Matrix Accuracy and Override Indicators
Given the default library is displayed Then the capability matrix for each template shows Allowed/Not Allowed for exactly these capabilities: create/edit scripts; publish action pages; view/export supporter PII; manage billing; view attribution-only metrics; access audit logs Given a template is applied to Campaign X and specific capabilities are fine-tuned When viewing permissions for Campaign X Then overridden capabilities are visually marked as "Overridden" with a link to view the base template, and unaffected capabilities inherit from the template
Scoped Apply & Inheritance
"As a campaign admin, I want to apply a ring template to a specific campaign with a few exceptions so that I grant only what’s needed without manual per-permission toggling."
Description

Enable one-click application of a ring template at multiple scopes (workspace/coalition, partner org, campaign, action page). Support hierarchical inheritance where lower scopes inherit from higher ones, with the ability to add safe, explicit exceptions. Provide a real-time effective-permissions preview before applying, show impacted users, and offer atomic apply with rollback on failure. Integrates with RallyKit’s campaign and action page models to ensure permissions propagate immediately to script editing, publishing, call/email action tracking, and data export areas.
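
A minimal sketch of effective-permission resolution across the scope hierarchy (workspace, partner org, campaign, action page), where the most specific explicit setting wins and everything else inherits; allow/deny conflict rules are sketched separately under the permission matrix requirement. The storage shape here is an assumption:

```python
SCOPE_CHAIN = ["action_page", "campaign", "partner", "workspace"]  # most to least specific

# (scope_type, scope_id, capability) -> explicit True/False; absent means inherit
explicit: dict[tuple[str, str, str], bool] = {
    ("workspace", "ws-1", "publish_action_pages"): True,
    ("campaign", "camp-7", "publish_action_pages"): False,  # safe exception at campaign scope
}


def effective(capability: str, scope_ids: dict[str, str], default: bool = False) -> bool:
    """Walk from the most specific scope upward; the first explicit setting wins."""
    for scope_type in SCOPE_CHAIN:
        scope_id = scope_ids.get(scope_type)
        if scope_id is None:
            continue
        setting = explicit.get((scope_type, scope_id, capability))
        if setting is not None:
            return setting
    return default  # least privilege when nothing is set anywhere


# camp-7 carries an exception; its sibling camp-8 still inherits the workspace grant.
print(effective("publish_action_pages", {"campaign": "camp-7", "workspace": "ws-1"}))  # False
print(effective("publish_action_pages", {"campaign": "camp-8", "workspace": "ws-1"}))  # True
```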

Acceptance Criteria
Workspace Apply With Inheritance Preview Parity
Given a Ring Template with defined role permissions and a workspace containing partner orgs, campaigns, and action pages When an admin selects "Apply template" at workspace scope and opens the effective-permissions preview Then the preview computes and displays inherited permissions for all child scopes within 5 seconds And the preview lists counts of impacted partners, campaigns, action pages, and users And the preview shows per-role gain/loss deltas for users And after confirming apply, the final effective permissions across all scopes match the preview snapshot (zero diffs)
Atomic Apply and Rollback on Failure
Given at least one child scope will fail to update (e.g., downstream API error) When the admin confirms apply Then no permission changes are persisted at any scope And the system rolls back any partial writes automatically And an error banner displays a human-readable reason and correlation ID And an audit log entry records the attempt, failure point, and rollback summary And the admin can retry idempotently with the same outcome if the failing condition persists
Impacted Users List and Deltas
Given users are mapped to roles across inherited scopes When the admin opens the "Impacted users" view in preview Then the list includes every user whose effective permissions would change with columns: user, scope(s), from role(s), to role(s), change type (gain/reduce/revoke) And the totals of gains, reductions, and revocations equal the header summary counts And the list supports filtering by scope and export to CSV And if notifications are toggled on, users receive a single consolidated notice within 10 minutes post-apply
Safe Explicit Exceptions at Lower Scopes
Given a template defines maximum allowed permissions per role When an admin adds an exception at a campaign or action page Then the system allows only changes that do not exceed the template’s maximum privilege for that role And attempts to escalate beyond the maximum are blocked with a clear validation message And exceptions apply only to the chosen scope and do not affect siblings And removing the exception restores the inherited baseline within 10 seconds
Immediate Permission Propagation to Functional Areas
Given a successful apply at workspace or partner scope When affected users attempt actions in script editing, publishing, call/email action tracking, and data export Then new permissions take effect within 60 seconds of apply completion And authorization checks permit allowed actions and deny disallowed actions consistently across all four areas And event logs record the first enforcement timestamp for at least one user per area
Selective Scope Apply and Dry-Run
Given a workspace with multiple partners, campaigns, and action pages When an admin selects a subset of partners and specific campaigns/action pages and chooses "Dry-run" Then the preview and impacted counts include only the selected scopes And dry-run completes within 5 seconds for up to 5,000 child objects And confirming apply affects only the selected scopes and writes an audit entry summarizing scope coverage
Concurrency Control and Stale Preview Handling
Given two admins open previews for the same workspace within a 10-minute window When Admin A applies Template X and Admin B attempts to apply Template Y based on a now-stale preview Then Admin B’s apply is blocked with a "stale preview" message and a prompt to refresh And the system uses optimistic concurrency with version tokens to prevent overwrite without refresh And the audit log records both attempts with outcomes and version tokens
Granular Permission Matrix & Resource Scoping
"As a security-conscious director, I want granular, scoped permissions so that partners can work on their campaigns without exposing unnecessary supporter data."
Description

Define fine-grained capabilities across RallyKit resources: scripts (create/edit/publish), action pages (create/edit/publish/archive), call/email action tracking (view live activity, export), supporter data (view masked vs. full PII, export controls), legislators/district data (view-only), billing (view/approve), attribution (credit view-only), and audit logs. Allow scoping by resource (organization, coalition, campaign, action page), with time-bound access and conditional PII masking rules. Provide allow/deny semantics and conflict resolution that favors least privilege. Ensure matrix is machine-readable for API enforcement and human-readable for admins.
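
A minimal sketch of the least-privilege conflict resolution described here: gather every rule matching the subject's capability and scopes, let any deny win, and return a machine-readable trace. The rule and trace shapes are assumptions consistent with the criteria below:

```python
from dataclasses import dataclass


@dataclass
class Rule:
    effect: str        # "allow" or "deny"
    capability: str    # e.g., "scripts.publish"
    scope: str         # e.g., "coalition:co-1" or "campaign:C555"
    source: str        # "template:T" or "manual"


def evaluate(rules: list[Rule], capability: str, scopes: list[str]) -> dict:
    """Deny overrides allow; no matching rule means deny (least privilege)."""
    matched = [r for r in rules if r.capability == capability and r.scope in scopes]
    decision, reason = "deny", "no_matching_rule"
    if any(r.effect == "deny" for r in matched):
        reason = "deny_precedence"
    elif any(r.effect == "allow" for r in matched):
        decision, reason = "allow", "allow_matched"
    return {"decision": decision, "reason": reason,
            "trace": [vars(r) for r in matched]}  # machine-readable trace for the API


rules = [Rule("allow", "scripts.publish", "coalition:co-1", "template:T"),
         Rule("deny", "scripts.publish", "campaign:C555", "manual")]
print(evaluate(rules, "scripts.publish", ["coalition:co-1", "campaign:C555"]))  # deny
print(evaluate(rules, "scripts.publish", ["coalition:co-1", "campaign:C1"]))    # allow
```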

Acceptance Criteria
Apply Ring Template With Partner Overrides
Given a coalition, partner P, and ring template T with defined allows and denies When an admin applies T to P at coalition scope Then P’s effective permissions for all campaigns in the coalition match T’s definition with denies overriding allows Given campaign C1 under the coalition and an override deny action_pages.publish scoped to C1 When the admin saves the override Then P cannot publish action pages in C1 (UI publish controls hidden or disabled, API returns 403 PERM_DENIED) and retains publish rights per T in other campaigns Given audit logging is enabled When T is applied and overrides are saved Then audit entries record subject, scope, change delta (added and removed privileges), actor, timestamp, and reason
Resource Scoping Enforcement Across Levels
Given user U has allow scripts.edit scoped to campaign C123 and no other grants When U edits a script within campaign C123 Then the request succeeds (200) and an audit record is written Given U attempts to edit a script in campaign C999 or at organization scope When the request is evaluated Then the request is denied (403 PERM_SCOPE_MISMATCH) and the UI shows insufficient permissions Given U has allow action_pages.publish scoped to action_page AP1 only When U publishes AP1 Then the request succeeds (200) And when U attempts to publish AP2 Then the request is denied (403) Given legislators and districts data are view-only resources When U attempts POST, PUT, PATCH, or DELETE on legislators or districts endpoints Then the request is rejected (405 or 403) regardless of grants and the UI has no write controls Given U has billing.view but not billing.approve When U opens billing requests Then approve controls are disabled and POST to billing approve returns 403 Given U has attribution.view_only When U views attribution credit data Then data is visible but any attempt to modify attribution is denied (403)
Time-Bound Access and Auto-Expiry
Given a grant G with start 2025-09-01T00:00:00Z and end 2025-09-30T23:59:59Z When current time is before start Then access is denied Given current time is within the window When U performs an action covered by G Then access is allowed (200) Given current time is after end When U performs the action Then access is denied within 60 seconds even if a session token is cached When G expires Then an audit entry of type grant_expired is written with subject, scope, privilege, start, end, and expired_at When an admin extends the end time of G Then effective permissions update within 60 seconds and actions are allowed only within the renewed window
Conditional PII Masking Rules
Given user U has supporter_data.view_masked and does not have supporter_data.view_full When U views the supporters list Then name, email, and phone are redacted per masking policy, export controls are disabled, and GET exports supporters returns 403 PERM_DENIED Given U is granted supporter_data.export scoped to campaign C123 with condition full_pii true When U exports supporters for C123 Then the file contains unmasked PII as permitted And when U exports for any other campaign Then the request is denied (403) Given conditional masking is active for attribution_only true When U views attribution credit fields Then attribution metadata is visible while PII fields remain masked and direct PII field access returns masked values
Allow Deny Conflict Resolution (Least Privilege)
Given U has allow scripts.publish at coalition scope via template T and deny scripts.publish at campaign C555 via manual override When U attempts to publish in campaign C555 Then the request is denied (403) and an audit entry includes reason deny_precedence Given U has allow supporter_data.view_full at campaign C1 and deny supporter_data.view_full at coalition scope When U views supporters in campaign C1 Then PII remains masked because deny takes precedence over allow Given multiple templates and manual grants apply to the same action When policy evaluation runs Then any matching deny results in an overall deny and the decision response includes a trace of all matched rules and their precedence
Machine-Readable Policy and API Enforcement
Given an admin requests GET /api/v1/permissions with subject U and scope campaign C123 When the response is returned Then it is JSON including version, evaluated_at, subject, scope, effective privileges, grants with source (template or manual), conditions, and precedence order Given U calls a protected endpoint without the required effective privilege When the request is processed Then API returns 403 with error code and missing privilege identifier; if the privilege is present then API returns 200 and includes X-Permissions header echoing the evaluated privilege Given a template policy is updated When evaluations occur after the update Then API enforces the new policy within 60 seconds and an audit entry policy_version_changed is recorded
Admin UI Human-Readable Matrix
Given an admin opens the Permissions view for partner P When the matrix renders Then it displays resources (scripts, action pages, tracking, supporters, legislators, billing, attribution, audit logs) by capabilities (create, edit, publish, archive, view_live, export, view_masked, view_full, approve, view_only) with clear allow and deny indicators When the admin filters by scope campaign C123 and subject P Then only relevant rows are shown and a pending change diff is displayed when toggling capabilities; saving writes changes and shows a success confirmation When the admin exports the matrix Then CSV and JSON exports match the machine-readable policy and Impersonate View mode previews the UI exactly as P would see it with hidden or disabled controls for denied actions
Safe Overrides & Guardrails
"As a coalition owner, I want to make limited permission exceptions safely so that I can handle edge cases without compromising security or compliance."
Description

Permit per-user or per-partner overrides to template permissions with embedded guardrails: elevation to Owner/Billing requires dual approval, destructive actions require explicit confirmation, and PII access can be gated behind purpose justification and auto-expiring time windows. Provide validations and warnings when overrides conflict with higher-level policies, and a single-click reset to template defaults. Integrate with RallyKit’s compliance features to support audit-ready proof and data minimization.

Acceptance Criteria
Dual Approval for Owner/Billing Elevation
Given a permission elevation to Owner or Billing is requested for a user or partner within a coalition scope When the request is submitted Then the requester cannot approve their own request And two distinct approvers with Owner ring on the same scope must approve And any explicit rejection immediately closes the request with no permission change And the request expires after 24 hours if not fully approved and is auto-rejected And on second approval, the elevation takes effect and appears in the subject’s permissions within 5 seconds And notifications are sent to requester and approvers on submission, approval, rejection, and expiry And an audit record stores requester, approvers, decision timestamps, reason text, IP addresses, and scope identifiers
Explicit Confirmation for Destructive Actions
Given a user with sufficient permission initiates a destructive action (delete campaign, delete partner, permanently purge PII, remove ring template, or bulk revoke permissions) When the user attempts to execute the action Then a confirmation dialog displays the action name, item count, irreversible effects, and affected teams And the Confirm button is disabled for 3 seconds and requires typing DELETE to enable And the action does not execute unless the exact confirmation phrase is entered and the user confirms And non-PII destructive actions provide a 15-minute undo; PII purges have no undo And the system logs actor, confirmation phrase entry time, UI version, and action metadata
PII Access Justification and Time-Boxed Window
Given a user attempts to view or export masked PII fields When the user requests access Then the system requires selecting a predefined purpose category, entering a justification of at least 15 characters, and choosing a duration up to the policy maximum (e.g., 60 minutes) And if org policy PII_Approval_Required is enabled, the request remains pending until approved by a Data Owner; otherwise it is auto-granted And during the active grant, only PII fields aligned to the selected purpose are unmasked; all others remain masked And access auto-revokes at expiry or on manual revoke, immediately re-masking fields And each view or export is logged with grant ID, purpose, fields accessed, record counts, timestamps, and actor And any exported file embeds the grant ID and purpose as a watermark
Override Conflict Validation and Warnings
Given an admin creates a per-user or per-partner override that conflicts with higher-level policies or the active ring template When the override is saved Then the system validates against organization policies and template rules And hard-conflict overrides are blocked with an error listing the violated policy keys and links to documentation And soft-conflict overrides display warnings and require typed acknowledgment with a reason before proceeding And a conflict badge appears on the affected profile until resolved or reset to defaults And conflicts are included in compliance reports with status (blocked, warned, acknowledged)
Single-Click Reset to Template Defaults
Given an Owner views a subject with custom permission overrides When the user clicks Reset to Template Defaults and confirms Then the system shows a diff preview of permissions to remove and add And on confirmation, all overrides are removed and permissions revert to the assigned ring template within 5 seconds And if the reset would interrupt an in-progress destructive action or an active PII grant, the reset is blocked with guidance to complete or revoke first And the reset is logged with before/after permission snapshots, actor, and timestamps And the UI reflects the template state without page refresh
Compliance Export for Guardrailed Actions
Given a compliance user requests an export of guardrailed events for a specified date range When the export is generated Then the export includes elevation approvals, destructive confirmations, PII grants/views/exports, override changes, conflict validations, and resets And each record contains actor ID, subject ID, scope, event type, policy/version IDs, request/approval/execution timestamps, IP, user agent, reason text, and a SHA-256 hash for tamper evidence And exports are available in CSV and JSON and complete within 30 seconds for up to 100000 records And a checksum is displayed and stored to verify file integrity And the export action itself is logged with requester, filters, format, and completion status
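The per-record SHA-256 tamper-evidence hash could be computed over a canonical serialization of each exported record, for example (a minimal sketch; field names are illustrative):

    import hashlib, json

    def record_hash(record: dict) -> str:
        # Canonical JSON (sorted keys, fixed separators) so the same record
        # always yields the same tamper-evidence hash.
        canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    def verify_record(record_with_hash: dict) -> bool:
        record = dict(record_with_hash)
        claimed = record.pop("sha256")
        return record_hash(record) == claimed

    event = {"actor_id": "u_42", "subject_id": "partner_7", "scope": "campaign:C123",
             "event_type": "pii_export", "executed_at": "2025-09-12T14:03:22Z"}
    event["sha256"] = record_hash(event)
    assert verify_record(event)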
Audit Trail & Permission Diff History
"As a compliance officer, I want a detailed history of permission changes so that I can demonstrate proper access control during audits."
Description

Capture a complete, immutable audit log for template changes and assignments, including who made the change, when, scope affected, and a before/after diff of capabilities. Expose searchable logs in the UI with filters by user, campaign, partner, and date. Provide export (CSV/JSON) and an API endpoint to deliver audit-ready proof to funders and compliance reviewers. Trigger webhooks on permission changes for downstream systems.

Acceptance Criteria
Log Template Definition Changes with Before/After Diff
Given a user with permission edits a Ring Template’s capabilities When they add, remove, or change any capability or scope Then an audit record is created within 3 seconds containing: eventType=template.updated, templateId, templateName, actorUserId, actorEmail, actorOrgId, timestamp (UTC ISO8601), beforeCapabilities, afterCapabilities, and a machine-readable diff And the record is visible in the Audit Log UI and retrievable via the API And no duplicate audit record is created for the same change
Audit Record on Template Assignment to Campaign or Partner
Given a Ring Template is applied to or removed from a campaign or partner via UI or API When an assignment is created, updated, or removed Then an audit record is created containing: eventType in {assignment.created, assignment.updated, assignment.removed}, assignmentId, templateId, templateName, scopeType (campaign|partner), scopeId, scopeName, actorUserId, actorEmail, timestamp (UTC ISO8601), beforePermissions (if applicable), afterPermissions And the record includes sourceChannel (UI|API) and requestId for traceability And the record is visible in the Audit Log UI and retrievable via the API
Search and Filter Audit Logs in UI
Given a user opens the Audit Log UI When they apply filters by user (email or ID), campaign, partner, and date range Then the results show only matching records, sorted by newest first, with pagination controls And clearing filters restores the unfiltered list And a search box supports free-text across templateName, actorEmail, and scopeName And each row displays: timestamp, actor, eventType, scope, and a link to view before/after diff
Export Audit Logs to CSV and JSON
Given audit log results are visible with filters applied When the user exports to CSV Then the downloaded file includes exactly the filtered records and a header with fields: id, timestamp, eventType, actorUserId, actorEmail, templateId, templateName, scopeType, scopeId, scopeName, sourceChannel And timestamps are in UTC ISO8601 and text is UTF-8 encoded When the user exports to JSON Then the downloaded file contains the same records with camelCase keys and correct data types for numbers/booleans And exports of large datasets complete without truncation and match the count shown in the UI
Audit Log API Endpoint with Pagination and Auth
Given a client presents a valid API token with scope audit.read When it calls GET /v1/audit-logs with any combination of userId, campaignId, partnerId, from, to, q, page, and pageSize Then the response is 200 with JSON { data:[], page, pageSize, total, nextPageToken } And results are ordered by timestamp desc and are consistent within a pagination window using nextPageToken And each item includes fields matching the UI plus before/after payloads when present And invalid params return 400 with error details; missing/invalid token returns 401; insufficient scope returns 403 And rate limiting headers are included
Permission Change Webhooks with Delivery Guarantees
Given a permission template is updated or assigned/unassigned When the event occurs Then an HTTPS POST is sent to each configured webhook endpoint with JSON { id, type, createdAt, actor, resource, before, after } And the request includes an HMAC-SHA256 signature header computed with the shared secret And a 2xx response marks delivery success; non-2xx triggers retries up to 6 times with exponential backoff over 30 minutes And deliveries are logged with status, response code, and next retry time And events can be replayed by ID via API and are idempotent using the event id
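A receiver-side sketch of the HMAC-SHA256 signature check and id-based idempotent handling described above; the payload contents and secret handling are assumptions, not a documented RallyKit contract.

    import hashlib, hmac, json

    def sign(payload: bytes, secret: bytes) -> str:
        return hmac.new(secret, payload, hashlib.sha256).hexdigest()

    def verify_webhook(payload: bytes, signature_header: str, secret: bytes) -> bool:
        # Constant-time comparison to avoid leaking signature prefixes.
        return hmac.compare_digest(sign(payload, secret), signature_header)

    secret = b"shared-webhook-secret"
    event = {"id": "evt_123", "type": "assignment.updated",
             "createdAt": "2025-09-12T14:03:22Z"}
    body = json.dumps(event).encode("utf-8")
    assert verify_webhook(body, sign(body, secret), secret)

    # Idempotent consumption keyed by event id, so retries and replays are safe.
    processed = set()
    def handle(evt):
        if evt["id"] in processed:
            return
        processed.add(evt["id"])
        # ... apply the permission change downstream ...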
Immutability and Tamper Evidence of Audit Records
Given an existing audit record When any user attempts to edit or delete it via UI or API Then the operation is rejected and no change occurs (405/409) And no edit/delete controls are available in the UI for audit records And each record contains a server-generated integrity signature over its contents And a system integrity check reports Pass when records are unaltered and flags any tampering
Bulk Assignment & Import/Export
"As a project manager, I want to assign templates to many users and campaigns at once so that I can set up a coalition quickly without repetitive manual work."
Description

Support bulk assignment of ring templates to users, partners, and campaigns via the UI and CSV import. Provide dry-run validation with error reporting, idempotent operations, and progress feedback for large batches. Allow exporting template definitions and assignments for backup and replication across coalitions. Provide API endpoints for programmatic assignment and synchronization with external directories.

Acceptance Criteria
UI Bulk Apply With Preview and Progress
Given I am an Owner of a coalition with permission to manage ring templates And I select 2 ring templates (Publisher, Data Viewer) And I select 25 users, 3 partners, and 4 campaigns in the bulk assignment UI When I click Preview (Dry Run) Then the system validates all selections without persisting changes And shows a summary of would-change counts by entity type and template And flags conflicts, missing entities, or unauthorized operations with row-level messages And indicates zero changes for entities already matching the target state (no-op) When I confirm Apply Then a background job starts with a visible progress bar and percent complete And real-time counts of processed, succeeded, failed, and skipped are shown And I can download a CSV of failures with error_code and error_message And upon completion a summary report is accessible from the job history for at least 30 days
CSV Dry-Run Validation With Row-Level Errors
Given a CSV is uploaded with headers: operation,entity_type,entity_id,template_slug,scope_id And operation ∈ {assign, remove} And entity_type ∈ {user, partner, campaign} When I select Dry Run and submit the file Then the system validates 100% of rows and produces a downloadable results CSV And each row includes status ∈ {ok, no-op, error} and error_code and error_message when applicable And no database changes occur during Dry Run And invalid template_slug, unknown entity_id, malformed CSV, or unauthorized operation produce specific error_codes And exact-duplicate rows (same operation, entity_type, entity_id, template_slug, scope_id) are identified and marked duplicate without blocking other rows
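A minimal dry-run validator consistent with the header and enumerations above might look like the following sketch; the known-template lookup and error codes are illustrative.

    import csv, io

    VALID_OPS = {"assign", "remove"}
    VALID_TYPES = {"user", "partner", "campaign"}
    KNOWN_TEMPLATE_SLUGS = {"publisher", "data-viewer"}   # assumed lookup against stored templates

    def dry_run(csv_text: str):
        results, seen = [], set()
        for row in csv.DictReader(io.StringIO(csv_text)):
            key = tuple(row.get(k, "") for k in
                        ("operation", "entity_type", "entity_id", "template_slug", "scope_id"))
            if key in seen:
                results.append({**row, "status": "duplicate"})
                continue
            seen.add(key)
            if row.get("operation") not in VALID_OPS:
                results.append({**row, "status": "error", "error_code": "invalid_operation"})
            elif row.get("entity_type") not in VALID_TYPES:
                results.append({**row, "status": "error", "error_code": "invalid_entity_type"})
            elif row.get("template_slug") not in KNOWN_TEMPLATE_SLUGS:
                results.append({**row, "status": "error", "error_code": "invalid_template_slug"})
            else:
                results.append({**row, "status": "ok"})   # "no-op" if state already matches
        return results

    sample = ("operation,entity_type,entity_id,template_slug,scope_id\n"
              "assign,user,u1,publisher,c1\n"
              "assign,user,u1,publisher,c1\n"
              "assign,team,t1,publisher,c1\n")
    assert [r["status"] for r in dry_run(sample)] == ["ok", "duplicate", "error"]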
Idempotent Bulk Assignment Re-Run
Given a previously completed job J applied 5,000 assignments successfully When I re-run the exact same CSV with operation=assign and Dry Run=false Then 100% of rows are treated as no-op and no changes are persisted And the job result shows succeeded=0, failed=0, skipped=0, no_op=100% When I submit the same payload to the API with an identical Idempotency-Key within 24 hours Then the API returns 200 with the original job reference and does not execute a new job
Export Templates and Assignments for Backup/Replication
Given I have Owner permissions When I export Ring Template Definitions Then I receive downloadable JSON and CSV files with deterministic ordering And each template record includes slug, version, permissions list, created_at, updated_at When I export Ring Template Assignments Then I receive a CSV with columns: entity_type,entity_id,template_slug,scope_id,assigned_at And the export reflects a consistent snapshot at request time (no partial state) And all timestamps are ISO 8601 UTC And export download links remain valid for at least 24 hours
API Batch Assignments With Async Progress and Cancellation
Given I have a valid API token with scope ring.assign When I POST /api/v1/ring-assignments/batch with up to 10,000 rows and dry_run=false Then the API responds 202 Accepted with job_id and a progress URL And the progress endpoint returns percent_complete, processed, succeeded, failed, skipped, no_op, and eta_seconds And I can cancel a running job via DELETE /api/v1/ring-assignments/jobs/{job_id} before completion and remaining rows are not processed And Idempotency-Key is supported; duplicate submissions within 24 hours return the original job And rate limits return 429 with Retry-After and no partial execution occurs
External Directory Sync by external_id
Given assignment rows include external identifiers (e.g., user_external_id, partner_external_id, campaign_external_id) And the API request specifies resolve_by=external_id When I submit the batch Then the system maps external_ids to internal entities for all supported types And rows with unknown external_ids are marked error with error_code=not_found_external_id And successfully mapped rows are applied or reported as no-op according to current state And the job report includes a mapping summary: resolved_count, unresolved_count by entity type
Round-Trip Import From Export Across Coalitions
Given I export templates and assignments from Coalition A at time T When I import the templates into Coalition B Then all templates are created or updated to match slug and version, preserving permissions When I import the assignments file into Coalition B using resolve_by=slug for templates and entity_id or external_id as provided Then assignments in Coalition B mirror the source without duplicates And rows referencing missing entities are reported with specific error codes (e.g., missing_entity, missing_template) And the final report includes parity metrics showing the percentage of assignments applied vs. source
Template Versioning & Staged Rollout
"As a system administrator, I want to version and roll out template updates safely so that I can improve policies without disrupting active campaigns."
Description

Introduce template versioning with draft, review, and publish states. Allow comparing versions side-by-side, deprecating old versions, and migrating assignments with staged rollout (percentage- or group-based) and instant rollback. Notify impacted users of capability changes and log all migrations in the audit trail. This ensures controlled, low-risk evolution of permission policies as coalitions scale.

Acceptance Criteria
Workflow State Transitions for Template Versions
Given a template version is in Draft and the acting user has Script Editor or higher permissions When the user requests a transition to Review Then the version status changes to Review and the actor/timestamp are recorded in the audit trail And users without Script Editor or higher receive a 403 error on transition attempts And a visible status chip updates in the UI within 2 seconds Given a template version is in Review and the acting user has Publisher or Owner permissions When the user requests a transition to Published Then the version becomes Published and is immutable to edits (only new versions can be created) And direct Draft -> Published transitions are blocked with an explanatory message Given a template version is Published and the acting user is an Owner When the user marks it Deprecated Then the status changes to Deprecated and all create/assign APIs reject it for new assignments with 409 Conflict
Side-by-Side Version Diff Visibility
Given two template versions (source and target) are selected for comparison When the compare view loads Then the system renders a side-by-side diff within 2 seconds for templates up to 50 tiers and 500 permission toggles And added/removed/modified tiers and permissions are highlighted with counts of each change type And unchanged sections can be collapsed/expanded And the visible diff matches an exported JSON diff artifact byte-for-byte for the same versions
Deprecated Version Behavior for Assignments
Given a template version is marked Deprecated by an Owner When a user attempts to create a new assignment to that version via UI or API Then the action is blocked and returns a 409 Conflict with a deprecation message and suggested target version And existing assignments remain active and unaffected And the Deprecated badge is displayed wherever the version appears in selection lists And the assignment creation UI omits Deprecated versions by default with a toggle to "Show Deprecated"
Staged Rollout Controls (Percentage and Group)
Given a source version with N existing assignments and a target Published version When a user configures a percentage rollout of P% and launches Then floor(N * P/100) assignments are migrated to the target version within 10 minutes for N <= 10,000 And the selection is deterministic within the launch (stable hashing) and uniformly distributed across partners And a dry-run preview shows counts by partner/campaign before launch Given partner groups A and B are selected for a group-based rollout with an immediate schedule When the rollout starts Then only assignments within groups A and B are migrated and all others remain on the source version And an exclusion list of assignments is honored and reported in the summary And live progress displays migrated, pending, failed counts and estimated time remaining And cancelling a rollout halts further migrations without reverting already-migrated assignments
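The deterministic, stable-hash cohort selection referenced above can be sketched as follows (illustrative; a real rollout would also stratify by partner to keep the distribution uniform across partners):

    import hashlib, math

    def rollout_cohort(assignment_ids, launch_id: str, percent: float):
        """Pick floor(N * percent / 100) assignments deterministically for one launch."""
        def bucket(assignment_id):
            digest = hashlib.sha256(f"{launch_id}:{assignment_id}".encode()).hexdigest()
            return int(digest[:8], 16)        # stable pseudo-random ordering per launch

        count = math.floor(len(assignment_ids) * percent / 100)
        return sorted(assignment_ids, key=bucket)[:count]

    ids = [f"assignment-{i}" for i in range(1000)]
    cohort = rollout_cohort(ids, "rollout-7", 25)
    assert len(cohort) == 250
    assert cohort == rollout_cohort(ids, "rollout-7", 25)   # same launch -> same cohort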
Instant Rollback of a Rollout
Given a rollout R has migrated assignments from a source version to a target version When an Owner triggers Rollback for R Then 100% of assignments migrated by R revert back to the source version within 60 seconds for up to 10,000 assignments And a rollback summary shows total reverted, not-applicable, and failed counts And the system prevents new migrations under R after rollback starts And a new audit record links the rollback to R via correlation ID
Impacted User Notifications on Capability Changes
Given a publish, rollout, or deprecation will change effective capabilities for users in affected organizations When the change is applied Then impacted users receive an in-app notification within 1 minute and an email within 5 minutes (respecting notification preferences) And the notification includes template name, from->to version IDs, summary counts of capability changes, effective time, and links to the diff and help article And Owners/Publishers also receive a digest summarizing affected campaigns/partners and next steps
Audit Trail for Versioning and Migrations
Given any publish, deprecate, staged rollout, assignment migration, or rollback event occurs When the event completes (success or failure) Then an immutable audit record is written within 10 seconds containing: event type, actor, timestamp, template ID, fromVersionID, toVersionID, cohort size, migrated count, failed count, rollout method (percentage/group), schedule time, start time, end time, correlation ID, and error details (if any) And the record is visible in the Audit UI within 60 seconds and exportable as CSV/JSON And tamper attempts are blocked (no update/delete API for audit entries) returning 405 Method Not Allowed

Approval Ladders

Configurable, multi-step approval gates for scripts, targets, and page publishes. Set who must review, add SLAs and escalations, and approve from email or mobile. Speeds safe releases, prevents off-message posts, and gives clear accountability without slowing day-of-action momentum.

Requirements

Visual Approval Flow Builder
"As a campaign director, I want to configure conditional multi-step approvals so that sensitive assets are reviewed by the right people without slowing day-of-action launches."
Description

Provide a drag-and-drop builder to define multi-step approval ladders for scripts, target lists, and action page publishes. Support serial and parallel steps, conditional branching by asset type, bill status, campaign risk level, and environment (draft vs live). Include reusable templates, step-level rules (required/all-of/any-of), validation to prevent unreachable states, and a preview of the path an item will take. Integrate natively with RallyKit objects so each draft is automatically bound to its configured ladder, with versioning of ladder definitions and change impact warnings.

Acceptance Criteria
Drag-and-Drop Builder for Serial and Parallel Steps
Given I am creating a new approval ladder in the Visual Approval Flow Builder When I drag three approval steps onto the canvas and connect them in a serial sequence Then the UI numbers the steps 1, 2, 3 and the connections persist after saving and reloading Given I branch Step 2 into two parallel sub-steps (2A and 2B) that rejoin before Step 3 When I validate the ladder Then the downstream Step 3 cannot start until both 2A and 2B are marked complete Given Step 2A assigns approvers Alice and Bob with rule "all-of" When only Alice approves Then Step 2A remains pending until Bob also approves Given Step 1 assigns approvers Chris and Dana with rule "any-of" When Chris approves Then Step 1 is marked complete Given Step 3 assigns approver Evan with rule "required" When any user other than Evan attempts to approve Step 3 Then the approval is rejected with a message indicating Evan is required
Conditional Branching by Asset, Bill Status, Risk, and Environment
Given a ladder contains conditional branches defined by assetType (Script, TargetList, ActionPage), billStatus (Introduced, Committee, Floor, Passed), campaignRisk (Low, Medium, High), and environment (Draft, Live) When I evaluate an ActionPage publish with billStatus=Floor, campaignRisk=High, environment=Live Then the engine selects the branch whose predicates match all these attributes and the preview highlights that path Given a Script draft with billStatus=Introduced, campaignRisk=Low, environment=Draft When I evaluate the ladder Then the path taken matches the Script/Draft/Low predicates and skips non-matching branches And branches whose predicates are not fully satisfied are not eligible for selection
Validation Blocks Unreachable or Cyclic Flows
Given a ladder includes a step with no incoming connection and it is not the designated start When I click Validate Then an error "Unreachable step detected" is shown and Publish is disabled Given I create a cycle (Step A → Step B → Step A) When I click Validate Then an error "Cycle detected" is shown and Save/Publish are blocked until the cycle is removed Given conditional branches are configured such that some possible attribute combinations have no terminal (end) path When I click Validate Then an error lists each uncovered combination and Publish is disabled until at least one terminal path exists for each
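Both validations above reduce to standard graph checks over the ladder definition. A sketch, assuming the ladder is represented as an adjacency map of step IDs:

    # Detect unreachable steps (not reachable from the start) and cycles in an
    # approval-ladder graph given as {step: [next_steps]}.
    def validate_ladder(graph, start):
        errors = []

        # Reachability: breadth-first search from the designated start step.
        reachable, frontier = {start}, [start]
        while frontier:
            node = frontier.pop()
            for nxt in graph.get(node, []):
                if nxt not in reachable:
                    reachable.add(nxt)
                    frontier.append(nxt)
        for step in graph:
            if step not in reachable:
                errors.append(f"Unreachable step detected: {step}")

        # Cycle detection: depth-first search with an "in progress" set.
        visiting, done = set(), set()
        def dfs(node):
            if node in done:
                return False
            if node in visiting:
                return True
            visiting.add(node)
            cyclic = any(dfs(nxt) for nxt in graph.get(node, []))
            visiting.discard(node)
            done.add(node)
            return cyclic
        if any(dfs(step) for step in graph):
            errors.append("Cycle detected")

        return errors

    ladder = {"A": ["B"], "B": ["A"], "C": []}     # C is unreachable; A <-> B is a cycle
    print(validate_ladder(ladder, "A"))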
Path Preview Simulation for Draft Item
Given a Script draft bound to a ladder with attributes assetType=Script, billStatus=Committee, campaignRisk=Medium, environment=Draft When I click Preview Path and select this draft Then the builder highlights each step in execution order, including parallel groupings, and displays the approver rule for each step And the preview lists the matched branch predicates at each decision point And the preview is read-only and does not modify the draft or ladder
Automatic Binding of New Drafts to Configured Ladder Version
Given ladder "Standard Script Approvals" v2 is published and set as the active ladder for assetType=Script When a user creates a new Script draft Then the draft’s Approval Ladder field is automatically set to "Standard Script Approvals v2" and Preview Path evaluates against v2 Given "Standard Script Approvals" v3 is later published When viewing the previously created Script draft Then it remains bound to v2 until explicitly re-bound by a user with permission And a non-blocking notice indicates "A newer ladder version exists (v3)" with a link to view differences
Versioning and Change Impact Warnings on Edit
Given a published ladder v2 has 12 active drafts bound to it When an editor modifies the ladder and clicks Publish New Version Then v3 is created, v2 remains immutable, and an impact summary shows "12 drafts bound to v2; 0 drafts to v3" And a change summary (steps added/removed, rules changed, conditions changed) is displayed before confirmation Given an edit would remove a step such that some bound drafts would have no valid terminal path When the editor attempts to publish Then publishing is blocked with a warning listing the affected drafts and the missing paths
Reusable Templates: Create, Apply, and Customize
Given I have configured a ladder in the builder When I click Save as Template and name it "High-Risk Live Gate" Then the template appears in the Templates library with the same steps, rules, and conditions When I create a new ladder from the "High-Risk Live Gate" template Then the canvas initializes identically to the template and can be edited And edits to the new ladder do not alter the saved template
Role-Based Approver Assignment & Delegation
"As an organization admin, I want to assign approvers by role with fallback and delegation so that approvals continue even when primary reviewers are unavailable."
Description

Allow approver assignment by role (e.g., Communications, Policy, Legal), team, or ownership (campaign/bill/target set), with support for groups (any-one vs all-must-approve), alternates, and out-of-office delegation with effective dates. Enforce conflict-of-interest rules (no self-approval), per-asset overrides with audit, and automatic resolver when a role is unfilled. Sync with RallyKit user directory/permissions and surface clear step owners on each item.

Acceptance Criteria
Role/Team/Ownership-Based Approver Resolution
Given an approval step configured by role, team, or ownership mapping When the approval ladder is saved or the asset enters the step Then approvers are resolved from the RallyKit user directory using current mappings And inactive or deprovisioned users are excluded from assignment And each step displays the resolved approver name(s) with a reason tag (role/team/ownership) on the asset and ladder views And resolution completes within 3 seconds for ladders with up to 10 steps and 100 candidate users And an audit event "approver_resolution" is recorded with step ID, mapping source, resolved user IDs, and timestamp
Group Approval Modes: Any-One vs All-Must
Given a step with group mode "Any-One" and N assigned approvers When any one approver records an approval Then the step status becomes Approved And remaining approvers are notified that their action is no longer required And duplicate approvals from the same user are prevented Given a step with group mode "All-Must" and N assigned approvers When approvals are recorded by all assigned approvers Then the step status becomes Approved And the UI displays progress as X of N approved in real time When any approver requests changes Then the step status becomes Changes Requested and no further approvals are accepted until resubmission
Alternates and Out-of-Office Delegation with Effective Dates
Given a primary approver with a configured alternate When the alternate approves Then it satisfies the primary approver’s slot for that step and is recorded as "approved by alternate" Given a primary approver with an out-of-office delegation configured with start and end dates When the asset enters the step during the effective window Then approval requests and reminders route to the delegate instead of the primary And the primary is blocked from approving with a message indicating delegation is active And approvals recorded by the delegate are audited as "acted as delegate for [primary]" When the OOO window ends Then routing reverts to the primary within 60 seconds and new requests go to the primary And circular delegation chains are prevented at configuration time
Conflict-of-Interest Enforcement (No Self-Approval)
Given a user who created or last edited the asset or who is the owner of the campaign/bill/target set is mapped as an approver for that asset When they attempt to approve Then the action is blocked with a clear message citing the conflict-of-interest policy And an audit event "coi_block" is recorded with actor, asset, step, and reason When a conflict-of-interest is detected during approver resolution Then the system excludes the conflicted user and attempts resolution via alternates or the unfilled-role resolver And if no eligible approver can be resolved, the step is marked Needs Approver and a notification is sent to workspace admins and the ladder owner
Automatic Resolver for Unfilled Roles
Given a step whose role/team/ownership mapping resolves to no eligible users (after exclusions and COI) When the step is entered or the ladder is saved Then the system assigns the step to the configured step-level fallback group And if no step-level fallback is set, it assigns to the ladder-level fallback group And if no ladder-level fallback is set, it assigns to the workspace Approvers group And if no fallback exists, the step is flagged Needs Approver and the ladder owner is notified And an audit event "resolver_fallback_applied" is recorded with the path taken When directory data changes and a previously unfilled role becomes filled Then the step re-resolves within 60 seconds to the newly eligible user(s) and pending notifications are updated
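The fallback order above is a simple cascade; a sketch (names illustrative):

    def resolve_step_approvers(mapped, step_fallback, ladder_fallback, workspace_group):
        """Walk the fallback chain in order and report which source was used."""
        chain = [("mapping", mapped),
                 ("step_fallback", step_fallback),
                 ("ladder_fallback", ladder_fallback),
                 ("workspace_approvers", workspace_group)]
        for source, candidates in chain:
            if candidates:                     # skip empty levels (after exclusions/COI)
                return list(candidates), source
        return [], "needs_approver"            # flag the step and notify the ladder owner

    approvers, source = resolve_step_approvers([], [], ["legal-leads"], ["workspace-approvers"])
    assert (approvers, source) == (["legal-leads"], "ladder_fallback")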
Per-Asset Approver Overrides with Full Audit
Given an editor with Manage Approvals permission on an asset When they override the approver list for a step on that asset and provide a reason Then the override applies only to that asset and is effective immediately And the change is logged with before/after approver lists, actor, timestamp, and reason And overrides that would introduce a conflict-of-interest are blocked with an error When the editor selects "Revert to default mapping" Then the step returns to role/team/ownership-based resolution and the reversion is audited
Directory Sync and Permission Validation on Assignment
Given a user is mapped as an approver for a step When the user lacks permission to approve that asset Then the assignment is prevented and a warning is shown to the ladder owner to correct mappings And another eligible approver is selected via alternates or resolver if available Given a mapped approver is deactivated or removed from the role/team When the directory change occurs Then the step re-resolves within 60 seconds to exclude the user and select eligible replacements When an approver accesses an approval link Then they must authenticate and authorization is verified; unauthorized users receive Access Denied and no state change occurs And each re-resolution or validation event is written to the audit log
SLA Timers, Reminders, and Escalations
"As a communications lead, I want SLA timers with automated reminders and escalations so that time-sensitive campaigns are not blocked waiting for approval."
Description

Enable step-level SLAs (e.g., 2h, 8h, 24h) with business-hour calendars, quiet hours, and weekend rules. Send time-based reminders and pre-breach alerts via email, SMS, and Slack, and auto-escalate to alternates or supervisors on breach with optional auto-reassignment. Display live countdowns and breach indicators on item cards and the campaign dashboard, and log all notifications for auditability.

Acceptance Criteria
Step-Level SLA with Business Hours, Quiet Hours, and Weekend Skip
Given a campaign timezone "America/New_York" and a step SLA of 2h using business hours Mon–Fri 09:00–18:00, quiet hours 18:00–09:00, and weekends excluded When an approval request is created on Friday at 17:00 local time Then the computed due time is Monday at 10:00 local time And the due time is displayed identically on the item card and campaign dashboard And the SLA timer excludes quiet hours and weekend intervals from the countdown
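A simplified due-time calculation matching the Friday 17:00 example above (naive local datetimes, business hours only; quiet-hour and timezone handling are omitted for brevity):

    from datetime import datetime, timedelta

    BUSINESS_START, BUSINESS_END = 9, 18        # Mon-Fri 09:00-18:00 local time

    def next_business_moment(dt):
        """Advance dt to the next instant inside business hours."""
        while True:
            if dt.weekday() >= 5:               # Sat/Sun: skip toward Monday 09:00
                dt = (dt + timedelta(days=1)).replace(hour=BUSINESS_START, minute=0,
                                                      second=0, microsecond=0)
            elif dt.hour < BUSINESS_START:
                dt = dt.replace(hour=BUSINESS_START, minute=0, second=0, microsecond=0)
            elif dt.hour >= BUSINESS_END:
                dt = (dt + timedelta(days=1)).replace(hour=BUSINESS_START, minute=0,
                                                      second=0, microsecond=0)
            else:
                return dt

    def sla_due(created, sla: timedelta):
        remaining, cursor = sla, next_business_moment(created)
        while remaining > timedelta(0):
            end_of_day = cursor.replace(hour=BUSINESS_END, minute=0, second=0, microsecond=0)
            available = end_of_day - cursor
            if available >= remaining:
                return cursor + remaining
            remaining -= available
            cursor = next_business_moment(end_of_day)
        return cursor

    # Friday 17:00 local + 2 business hours -> Monday 10:00, as in the criterion above.
    assert sla_due(datetime(2025, 9, 12, 17, 0), timedelta(hours=2)) == datetime(2025, 9, 15, 10, 0)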
Pre-Breach Alerts and Reminders Across Channels with Quiet-Hour Deferral
Given a step with SLA 8h using business hours Mon–Fri 09:00–18:00 and quiet hours 20:00–08:00, with a pre-breach alert at T-30m and reminders every 60m until breach When the computed due time is next business day 08:15 local time Then the pre-breach alert scheduled for 07:45 is deferred to 08:00 local time And reminders are sent at 08:00 via Email, SMS, and Slack to the assigned approver And no reminders are sent before 08:00 due to quiet hours And each sent notification records a delivery attempt and outcome
Breach Escalation with Auto-Reassignment to Alternate
Given a step assigned to Approver A with SLA 2h, escalation chain Alternate B then Supervisor S, and Auto-Reassign on Breach = true When the step breaches at its due time Then the assignee changes to Alternate B immediately And breach notifications are sent to B via Email, SMS, and Slack And Supervisor S receives an escalation notification And the item card and dashboard show "Escalated" with assignee B within 1 minute
Notification and Escalation Audit Log Completeness
Given notification channels Email, SMS, and Slack are configured and escalations are enabled When pre-breach alerts, reminders, and breach escalations are triggered for a step Then an audit log entry is recorded for each notification and escalation with: item ID, step ID, recipient, channel, template ID, trigger type (pre-breach|reminder|breach|escalation), scheduledAt, sentAt, delivery status (delivered|failed|suppressed), and suppression reason if applicable And audit entries are immutable and queryable by time range, item, step, recipient, and channel
Live Countdown and Breach Indicators Display
Given an active approval step with remaining SLA time When viewing the item card and the campaign dashboard Then a countdown timer shows time remaining in hh:mm, updating at least every 60 seconds And the timer turns amber at T-30m and red at breach And at breach the timer switches to "Overdue +Hh:Mm" format and a red breach badge is shown And a tooltip reveals the due timestamp, timezone, and calendar basis (business hours)
Weekend Rules Toggle Includes vs Skips Weekends
Given a step with SLA 24h and two configurations: (A) Weekend Rule = Skip, (B) Weekend Rule = Include When a request is created on Friday at 12:00 local time Then in configuration A the due time is Monday at 12:00 local time And in configuration B the due time is Saturday at 12:00 local time And both due times are consistent across UI and API responses
Cancellation of Timers and Pending Notifications on Step Completion or Withdraw
Given an approval step with active SLA timers and scheduled reminders When the step is approved or rejected, or the item is withdrawn before the due time Then all pending reminders and escalation jobs are cancelled within 60 seconds And no further notifications are sent after cancellation And audit logs record the cancellations with timestamps and reasons
Inline Email/Mobile Approval Actions
"As an approver on the go, I want to take approval actions from my phone or email with full context so that I can keep campaigns moving between meetings."
Description

Allow approvers to approve, reject, or request changes directly from email and mobile with secure, time-limited links and optional 2FA for high-risk steps. Include in-context previews: redlined script diffs, target list summaries by district, and an action page preview with bill status tags. Capture comments and change requests inline, support quick-reply keywords, and persist all actions to the audit log. Optimize flows for iOS/Android and low-bandwidth usage to support field organizers.

Acceptance Criteria
Approve From Email With Time-Limited Link
Given an approver receives an approval email containing a secure, signed link bound to their identity and the approval step When they click the link within the configured TTL (default 24h, configurable 1–72h) Then the approval screen loads with context and action buttons enabled Given the secure link has already been used or the TTL has expired When any user attempts to open it Then access is denied with an "expired or used" message and a flow to request a new link Given the approver lacks permission for the step When they open the link Then access is denied and an audit entry is recorded with reason "insufficient permissions" Given a valid approval/rejection/change-request is submitted from the linked page When processing completes Then the step state updates accordingly and subscribed stakeholders receive notifications
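A minimal sketch of a time-limited, identity-bound link token using an HMAC signature and an expiry claim; the token format and the single-use bookkeeping are assumptions, not RallyKit's actual scheme.

    import base64, hashlib, hmac, json, time

    SECRET = b"server-side-secret"              # assumption: held in a secrets manager

    def make_link_token(approver_id, step_id, ttl_seconds=24 * 3600):
        payload = {"approver": approver_id, "step": step_id,
                   "exp": int(time.time()) + ttl_seconds}
        body = base64.urlsafe_b64encode(json.dumps(payload).encode())
        sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        return f"{body.decode()}.{sig}"

    def check_link_token(token, used_tokens):
        body, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sig):
            return "invalid"
        payload = json.loads(base64.urlsafe_b64decode(body))
        if payload["exp"] < time.time():
            return "expired"
        if token in used_tokens:
            return "used"
        used_tokens.add(token)                  # single-use: consumed on first open
        return "ok"

    used = set()
    token = make_link_token("approver_17", "step_42", ttl_seconds=3600)
    assert check_link_token(token, used) == "ok"
    assert check_link_token(token, used) == "used"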
High-Risk Approval Requires 2FA
Given an approval step is flagged as high-risk When the approver opens the secure link Then a 2FA challenge (TOTP/SMS/email OTP) is required before any action can be taken Given the approver enters a valid code within 60 seconds and ≤3 attempts When verification succeeds Then the approval actions are enabled for that session Given 3 failed attempts or code timeout occurs When additional attempts are made Then the session is locked, the link becomes unusable, and a new link must be requested Given 2FA is enforced When the approver completes or fails verification Then the audit log records factor type (not the code), result, timestamp, and approver ID
Mobile & Low-Bandwidth Approval Flow
Given an approver uses iOS 15+ or Android 10+ on a 3G/400 kbps connection When the approval screen loads Then first contentful render occurs in ≤2.5s and total transferred payload is ≤300 KB Given JavaScript is blocked or fails When the page loads Then an HTML-only fallback renders with Approve/Reject/Request Changes controls and submission works Given a 320–414 px viewport When the approver interacts with controls Then tap targets are ≥44x44 px and the UI meets WCAG 2.1 AA color contrast and focus order Given network drops during submit When the user takes an action Then the action queues with user feedback and retries up to 3 times without creating duplicate actions
Redlined Script Diff Preview In-Context
Given a script differs from the last approved version When the approval email and mobile page are generated Then additions are indicated with + green highlight and deletions with − red highlight in a redline diff Given the diff exceeds 2,000 characters in email When rendering the email Then the diff collapses after 2,000 characters with a "View full diff" link to the mobile web view Given only whitespace/punctuation changes exist When generating the summary Then the summary states "no substantive content changes" with an option to view full diff Given the approver selects "View full diff" When the page opens Then a line-by-line diff displays with counts of lines added and removed
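The line-by-line redline and add/remove counts above can be derived with Python's standard difflib, as in this sketch (rendering as green/red highlights is left to the email and mobile templates):

    import difflib

    def redline(old: str, new: str):
        """Return (added_lines, removed_lines, marked_diff) for a simple redline view."""
        diff = list(difflib.unified_diff(old.splitlines(), new.splitlines(), lineterm=""))
        added = [l[1:] for l in diff if l.startswith("+") and not l.startswith("+++")]
        removed = [l[1:] for l in diff if l.startswith("-") and not l.startswith("---")]
        return added, removed, diff

    old = "Please call Rep. Smith.\nAsk them to support HB 123."
    new = "Please call Rep. Smith today.\nAsk them to oppose HB 123."
    added, removed, marked = redline(old, new)
    print(f"{len(added)} lines added, {len(removed)} lines removed")
    # difflib.HtmlDiff().make_table(old.splitlines(), new.splitlines()) gives a
    # side-by-side HTML view if one is needed for the full-diff web page.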
Target List Summary by District in Approval
Given a target list is associated with the approval When the email/mobile approval view renders Then it displays total targets and a per-district breakdown (e.g., CA-12: 3, CA-14: 1) up to 10 rows with a "View all" link Given the target list changed since the last approved version When rendering the summary Then added and removed counts by district are highlighted Given the approver selects "View all" When the list opens Then a mobile-optimized list shows target name, office, and district with search and pagination for lists >100
Action Page Preview With Bill Status Tags
Given an action page draft exists for the approval When the email/mobile approval view renders Then a lightweight preview shows the headline, primary CTA, and current bill status tag (e.g., "HB 123 — In Committee") Given the upstream bill status changes within 15 minutes When the approver opens the preview page Then the status tag reflects the latest status and shows a "Last updated" timestamp Given the approver taps the preview When the staging URL opens Then it requires the same secure token (non-indexed) and does not expose the page publicly
Inline Comments, Change Requests, and Quick-Reply Keywords Persisted and Audited
Given an approver selects Request Changes When they submit the form Then a non-empty comment ≥5 characters is required and stored with markdown/plaintext preserved Given the approver replies to the email thread containing the approval token When the system receives the email/SMS Then it parses and applies keywords: APPROVE, REJECT, CHANGE: <text>; invalid keywords are rejected with a reply explaining valid options Given a quick-reply action is processed When duplicates arrive within 5 minutes Then only the first is applied and subsequent duplicates are logged and ignored Given any action (approve/reject/request changes) is taken from email or mobile When it is recorded Then the audit log persists step ID, approver ID, action, timestamp, source (email/mobile), IP, device fingerprint (if available), and message ID as an immutable entry
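Quick-reply keyword handling could be as simple as matching the first line of the reply body, as in this sketch (stripping quoted thread text and deduplication are omitted):

    import re

    def parse_quick_reply(body: str):
        """Parse APPROVE / REJECT / CHANGE: <text> from a reply body (sketch)."""
        first_line = body.strip().splitlines()[0].strip() if body.strip() else ""
        if re.fullmatch(r"APPROVE", first_line, re.IGNORECASE):
            return ("approve", None)
        if re.fullmatch(r"REJECT", first_line, re.IGNORECASE):
            return ("reject", None)
        match = re.match(r"CHANGE:\s*(.+)", first_line, re.IGNORECASE)
        if match:
            return ("request_changes", match.group(1).strip())
        return ("invalid", None)   # triggers the bounce reply listing valid options

    assert parse_quick_reply("approve") == ("approve", None)
    assert parse_quick_reply("CHANGE: soften the second paragraph")[0] == "request_changes"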
Publish Gate Enforcement & Safe Rollback
"As a campaign owner, I want publishing automatically blocked until approvals are complete and a one-click rollback option so that only approved content goes live and mistakes can be reversed quickly."
Description

Gate publishing so that scripts, target lists, and one-tap action pages cannot go live until all required approvals in the configured ladder are complete. Lock approved content against changes that would bypass review and trigger re-approval on material edits. Support scheduled publishes after final approval, preflight checks, and one-click rollback to the last approved version if issues arise. Show clear status banners and blockers in the publishing workflow.

Acceptance Criteria
Block Publish Until All Approvals Complete
Given an item (script, target list, or action page) has a configured approval ladder with required approvers When any required approval is pending or has been rejected Then attempts to Publish or Schedule return 409 PUBLISH_BLOCKED and the UI shows a blocking banner listing pending approvers and steps And the Publish/Schedule controls are disabled until all required approvals are recorded as Approved And when the final required approver submits an approval, the item’s Gate Status changes to "Approved to Publish" within 5 seconds and publishing controls become enabled
Material Edit Triggers Re-Approval and Re-Gate
Given content has an "Approved to Publish" gate status When a material field (script text, talking points, call/email templates, dynamic logic rules, target list membership, geo/district filters, action page visibility, CTA buttons/links, bill status selection, merge tag definitions) is modified Then the gate status flips to "Re-Approval Required", all prior approvals are invalidated, and Publish/Schedule controls are disabled And approvers in the configured ladder are notified via email and mobile of re-approval required And when all required re-approvals are recorded, the gate status returns to "Approved to Publish" And when only non-material fields (internal notes, internal labels) are modified, no re-approval is triggered and approvals remain valid
Scheduled Publish After Final Approval
Given an item is not fully approved When a user attempts to schedule a publish Then the request is blocked with 409 PUBLISH_BLOCKED and an onscreen reason "Awaiting approvals" Given an item is Approved to Publish and has passed preflight When a user schedules for a future date/time with a specified timezone Then the schedule is saved, shown in the status banner, and no re-approval is required for schedule changes And scheduling in the past is rejected with 400 INVALID_SCHEDULE And at the scheduled time, the system publishes the exact approved version, provided no subsequent material edits exist; otherwise the publish is skipped, a notification is sent, and the schedule remains pending
Preflight Checks Gate Publish and Schedule
Given an item is Approved to Publish When a user clicks Publish now or saves a schedule Then preflight checks run and must pass 100% before the action is accepted And failing checks list includes missing required fields (title, script copy per bill status), empty target list, invalid/broken links, unresolved merge tags, orphaned targets, and permission mismatches And if any check fails, the action is blocked with 422 PREFLIGHT_FAILED and the UI lists each failing check with a link to fix And for scheduled publishes, preflight re-runs at execution time; if it fails then, the publish is skipped and owners are alerted
One-Click Rollback to Last Approved Version
Given a live item has at least one prior approved version When an authorized user clicks Rollback Then the live item reverts to the last approved version within 30 seconds and a rollback banner is shown And the reverted version’s approvals remain intact; the superseded version is preserved as a draft requiring re-approval to go live again And all associated assets (scripts, target lists, page settings) are restored to that version, caches are purged, and tracking continues across the version change And an audit entry "Rollback" with actor, timestamp, from-version, and to-version is recorded and notifications are sent to owners and approvers
Status Banners and Blockers in Publishing Workflow
Given any item in the publishing workflow When a user views the Publish screen Then a status banner displays current gate state, pending approvers with SLA timers, schedule info, and any blockers (preflight failures, re-approval required) And the banner updates within 5 seconds of any approval action or material edit And clicking View details shows the full approval ladder, who approved/when, and escalation status And when publishing is blocked, the Publish/Schedule controls are disabled with a tooltip explaining the reason
Audit Trail for Approvals, Edits, Publishes, and Rollbacks
Given any approval, edit (material or non-material), publish, schedule, preflight result, or rollback occurs When the event is committed Then an immutable audit log entry is written with actor, timestamp (UTC), item type, version ID, action, before/after diff of material fields, and outcome And authorized users can view the audit trail in-app and export CSV for a specified date range And the audit trail shows a contiguous chain proving the published version followed final approval with no intervening material edits And attempts to delete or alter audit entries are blocked and return 403 FORBIDDEN
Draft Version Control & Redline Diff
"As an approver, I want clear version history and redlined diffs so that I can review exactly what changed and respond quickly."
Description

Maintain draft and approved states with per-change version history for scripts and target lists. Provide side-by-side and inline redline diffs highlighting additions, removals, and changed language, plus change summaries for district targeting. Support comment threads, request-changes loops, restore-to-previous, and auto-linking to the bill status script generator so dynamic content changes are visible and reviewable.

Acceptance Criteria
Draft and Approved States with Per-Change Version History
Given a user edits a script or target list and saves as Draft When the draft is saved Then a new version is created with unique ID, author, timestamp, change summary, and status 'Draft' Given at least one Approved version exists and a new Draft is created When viewing the public/live content Then the last Approved version remains live until the Draft is approved Given multiple sequential edits to the same item When viewing version history Then versions are ordered newest-first and each entry shows status, author, timestamp, and a link to Diff Given a Draft is approved via the configured Approval Ladder When approval is recorded Then its status updates to 'Approved' and it replaces the previously live version
Side-by-Side and Inline Redline Diffs for Scripts
Given any two script versions are selected When the user opens Diff Then both Side-by-Side and Inline redline modes are available and switchable without reload Given the diff is rendered Then additions are highlighted in green with '+', deletions in red with '-', and modified text appears as deletion+addition at the smallest changed span Given the user toggles 'Ignore whitespace changes' When recalculating diff Then purely whitespace-only changes are hidden Given a script up to 10,000 characters in each version When opening diff Then initial render completes in <= 2 seconds on a standard broadband connection
Redline Diffs and Summaries for Target Lists by District
Given two target list versions are selected When viewing Diff Summary Then totals for Added, Removed, and Unchanged targets are displayed Given the district summary is expanded When viewing a district Then all added/removed targets for that district are listed with name, chamber, and district code Given the user exports the change summary When clicking Export CSV Then a CSV downloads containing one row per change with columns: action (added/removed), target_id, name, chamber, district Given lists up to 2,000 targets per version When generating diff Then results return in <= 2 seconds
Comment Threads and Request-Changes Loop
Given a Draft version is under review When a reviewer leaves an inline comment on a specific line or target Then a comment thread is created with author, timestamp, and resolve/unresolve controls Given unresolved comment threads exist When a reviewer selects 'Request Changes' Then the Draft status updates to 'Changes Requested', the approval step remains pending, and the author is notified via email and in-app Given all threads are resolved and the author resubmits When a reviewer opens the Draft Then the status displays 'In Review' and the history logs the transition from 'Changes Requested' to 'In Review' Given a comment is edited When the author updates their comment Then an edit history is retained and visible with timestamps
Restore to Previous Version Creates New Draft
Given a prior version vK of a script or target list exists When an editor selects Restore on vK Then a new Draft version vN is created with content identical to vK and a system-generated summary 'Restored from vK', leaving the live version unchanged Given the restored Draft is approved When approval is completed Then it becomes the live version and the history shows a link back to vK Given a restore action occurs When viewing the audit log Then the action records user, timestamp, source version ID, and target new version ID
Auto-Linking to Bill Status Script Generator Changes
Given a script contains dynamic placeholders driven by the bill status generator When the underlying bill status changes and alters generated output Then a new Draft version is auto-created, tagged 'Auto-generated', and the Diff highlights only the affected generated sections Given an auto-generated Draft exists When a reviewer views the Draft Then a 'View source rule' link is available for each changed placeholder that opens the corresponding generator rule/details Given an auto-generated Draft is pending review and a new auto-update occurs When the system detects a subsequent generator change Then a separate Draft version is created and the in-review Draft is not overwritten; reviewers are notified
Audit Trail, Reporting & Compliance Export
"As a nonprofit director, I want exportable, audit-ready approval records so that we can demonstrate governance to boards, funders, and regulators."
Description

Record an immutable audit trail of every approval event: timestamps, actor identity, step, decision, comments, device/IP, and notifications sent. Provide filters by campaign, bill, asset type, approver, and date range, with export to CSV/PDF and a signed hash to detect tampering. Generate audit-ready reports that tie approvals to the exact versions published and the actions taken by supporters for end-to-end accountability.

Acceptance Criteria
Immutable Audit Log and Tamper Detection
Given any approval event occurs (approve, reject, request changes, revert), When the event is recorded, Then the audit record is persisted as write-once and cannot be updated or deleted through any UI or API. Given an authenticated user attempts to modify or delete an existing audit record via UI or API, When the request is made, Then the system returns 403 Forbidden and no changes are written. Given an audit export is generated, When the CSV or PDF file is downloaded, Then a companion signature (e.g., .sig) and checksum (.sha256) are provided and the export manifest includes the checksum and signature algorithm. Given the exported file and its signature, When verification is performed using the RallyKit public key published at /compliance/public-key, Then signature verification succeeds without warnings. Given any byte in the exported file is changed after download, When verification is run again, Then signature verification fails and reports a tampering error.
Complete Approval Event Metadata Capture
Given an approval event is recorded for any asset (script, targets, page publish), When the record is created, Then it includes: ISO 8601 UTC timestamp, actor user ID and full name, actor role, approval step ID and name, decision (approve/reject/request-changes/revert), free-text comments, asset type and asset ID, campaign ID, bill ID if linked, request channel (web/email/mobile/API), device user agent, IP address (IPv4/IPv6), notification IDs associated to the step, and a correlation ID. Given the approver acts from an email or mobile deep link, When the event is logged, Then channel reflects the source (email-web/email-mobile) and the device/IP values reflect the client used. Given an optional field is not applicable (e.g., no bill linked), When the record is stored, Then the field is present and null rather than omitted. Given comments are empty, When the record is stored, Then comments is an empty string and passes CSV/PDF export without formatting errors.
Audit Trail Filtering by Campaign, Bill, Asset Type, Approver, and Date Range
Given audit records exist across multiple campaigns, bills, asset types, and approvers, When a user applies a single filter (campaign OR bill OR asset type OR approver OR date range), Then only matching records are returned. Given multiple filters are applied simultaneously, When results are computed, Then only records satisfying all filters (AND logic) are returned. Given a date range is selected with start and end in UTC, When results are computed, Then records with timestamps within the inclusive range [start, end] are returned. Given filters yield no matching records, When results are displayed, Then the UI shows 0 results and the export actions produce empty CSV/PDF files with headers only. Given results are displayed, When the user clears all filters, Then the full unfiltered list is shown again.
CSV and PDF Export with Applied Filters and Signatures
Given a user with permission views filtered audit results, When Export CSV is clicked, Then a CSV downloads containing only the currently filtered records in the same sort order with a header row and UTF-8 encoding. Given the same filter set, When Export PDF is clicked, Then a paginated PDF generates containing the same dataset and a cover section listing applied filters, record count, exporter identity, and generation timestamp (UTC). Given either export is generated, When the files are created, Then filenames include dataset type (audit), primary filter key(s), date range, and UTC timestamp, and the files include a footer with record count and generation timestamp. Given an export exceeds 50,000 records, When the export is initiated, Then the system streams the file and completes without truncation or timeouts, and the record count matches the on-screen count. Given CSV fields contain commas, quotes, newlines, or non-ASCII characters, When exported, Then CSV is RFC 4180-compliant with proper quoting and UTF-8, and the PDF faithfully renders characters. Given an export is generated, When downloaded, Then a checksum and signed hash (signature) are provided and successfully verify the file contents.
Approval-to-Published Version and Supporter Action Linkage Report
Given an asset version is approved and subsequently published, When the accountability report is generated for a date range, Then it lists each published version with its approval decision(s), approver identities, approval timestamps, publisher identity, publish timestamp, and a content checksum that matches the published snapshot. Given supporters take actions (calls, emails) while a specific version is live, When the report is generated, Then those actions are attributed exclusively to that version with counts per action type and a list of action IDs and timestamps. Given a version is superseded by a newer version, When the report is generated, Then actions are partitioned by version live windows with no overlap or double counting. Given a version ID or action ID is clicked in the report, When navigated, Then the system opens the corresponding version snapshot/diff or action detail view. Given the report view is filtered (e.g., by campaign or bill), When exported to CSV/PDF, Then the export contains the same linkage data and filter context on the cover/manifest.
Notification Events Logged for Approvals and Escalations
Given SLA-based escalations trigger notifications for a pending approval step, When notifications are sent (email/SMS/push), Then each notification is recorded with UTC timestamp, notification type, channel, recipient identity and address, related approval step ID, message template ID, delivery provider message ID, and delivery status (queued/sent/delivered/bounced/failed). Given an approver completes an approval action via a notification link, When the approval is recorded, Then the notification event is linked to the approval record via a shared correlation ID. Given a notification delivery fails, When viewing the audit trail filtered by notification status=failed, Then the failed notification appears with provider error details. Given notifications are sent as part of a step, When exporting the audit, Then notification records associated with approval events are included and referenceable by their IDs.

Scoped Sharing

Granular asset sharing by page, script section, district subset, or channel. Timebox access, restrict by domain/IP, and choose rights (edit, publish, view-only). Enables partners to collaborate where needed while protecting sensitive targets, lists, and language outside their remit.

Requirements

Permission Scopes & Inheritance Model
"As a campaign owner, I want to define precise scopes and rights for each collaborator so that partners can work only where needed without risking exposure of sensitive assets."
Description

Design and implement a fine-grained permission model that scopes access by asset type (page, script section, district subset, channel) and supports rights levels (view-only, edit, publish). Define hierarchical scope boundaries and inheritance rules with explicit conflict resolution (most-restrictive wins) and sensible defaults (deny by default). Integrate with RallyKit’s existing roles and organizations to support internal users and external partners under a single, multi-tenant authorization layer. Include performant, cacheable permission checks at request time and background re-evaluation when scope definitions change. Outcome: partners can collaborate only within explicitly granted slices of content and geography without exposing sensitive targets, lists, or language outside their remit.
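As a rough illustration of the deny-by-default and most-restrictive-wins rules described above, the sketch below computes an effective right from overlapping grants; the grant shape, scope labels, and right names are assumptions rather than RallyKit's actual schema.

```python
# Minimal sketch: deny by default, most-restrictive right wins across overlapping grants.
RIGHT_ORDER = {"view-only": 1, "edit": 2, "publish": 3}  # assumed ordering

def effective_right(grants: list[dict], scope_chain: list[str]) -> str | None:
    """grants: e.g. [{"scope": "page:P", "right": "edit"}, ...]
    scope_chain: scopes that apply to the request, child first,
    e.g. ["section:S", "page:P"] for a script section under Page P."""
    applicable = [g["right"] for g in grants if g["scope"] in scope_chain]
    if not applicable:
        return None  # deny by default: no grant means no access (403)
    # Most-restrictive wins when inherited and explicit grants disagree.
    return min(applicable, key=lambda r: RIGHT_ORDER[r])
```

For example, an edit grant at the page level plus a view-only grant on one script section resolves to view-only for that section, matching the conflict-resolution criteria below.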

Acceptance Criteria
Default Deny and Most-Restrictive Conflict Resolution
Given a user has no applicable grants for an asset When they attempt to access the asset Then the system responds 403 Forbidden and returns no sensitive metadata beyond asset ID Given a user has overlapping grants with different rights for the same asset (e.g., view-only and edit) When effective rights are computed Then the most-restrictive right applies (view-only) Given a user has edit at page-level and view-only at a specific script section under that page When they access that script section Then effective rights are view-only for that section and edit remains for other sections under the page Given a user has publish on content but only edit on a required channel When they attempt to publish to that channel Then the operation is blocked because effective rights are the most-restrictive across all applicable scopes
Asset-Type Scoped Access (Page, Script Section, District Subset, Channel)
Given an external partner is granted edit on Page P only When they attempt to access script sections not under Page P, district subsets not linked to Page P, or any channels Then access is denied (403) for those assets while Page P is editable Given a partner is granted publish on Channel C and edit on Page P When they publish content from Page P Then publish is permitted only to Channel C and blocked for other channels Given a partner is granted view-only on District Subset D for Page P When they view action results and scripts Then only data and generated language for districts in D are visible; other districts are hidden and aggregate counts reflect only D
Timeboxed Access with Domain and IP Restrictions
Given a share with a start and end time window When a request occurs before the start time Then access is denied (403) Given the same share When a request occurs within the window Then access is evaluated against other rules and allowed only if all other rules pass Given the same share When the end time has passed Then access is denied (403) no later than 60 seconds after expiry (accounting for cache TTL) Given a share restricted to email domains {example.org, partner.io} When a user with a verified email at a non-allowed domain authenticates Then access is denied (403) Given a share restricted to IP CIDR ranges When the request IP is outside all allowed ranges Then access is denied (403) Given both domain and IP restrictions are configured When a request is evaluated Then both conditions must pass (logical AND) for access to be allowed
Hierarchical Inheritance Across Scopes
Given a user has edit rights at Page P When they access script sections and district subsets under Page P Then rights default to edit unless a more-restrictive grant exists on a child scope Given a user has view-only at Page P and publish at Script Section S under P When they access S Then effective rights on S are view-only (most-restrictive wins) Given an operation spans multiple scopes (e.g., publish requires content-scope publish and channel-scope publish) When effective rights are computed Then the effective right equals the most-restrictive across all applicable scopes, and the operation is blocked if any required scope is below the needed level
Cacheable, Low-Latency Permission Checks at Request Time
Given permission decisions are served from a warmed in-memory cache When evaluating access Then P95 decision latency is ≤ 5 ms and P99 is ≤ 20 ms per check under nominal load, with zero functional errors Given a cold cache miss When evaluating access Then P95 decision latency is ≤ 25 ms and the result matches the authoritative policy store Given repeated identical requests within the cache TTL When evaluating access Then cache hit ratio is ≥ 90% and decisions remain correct even as unrelated grants change
Background Re-evaluation on Scope Changes
Given a grant (add/update/remove) or scope definition change occurs When the change is committed Then effective permissions are recomputed and relevant caches are invalidated or updated within 60 seconds Given users have active sessions during a scope change When they make the next request after re-evaluation completes Then their effective rights reflect the change and operations exceeding updated rights are denied (403) Given the re-evaluation job encounters a transient failure When retries execute Then the system retries with backoff and achieves eventual consistency within 5 minutes, with failures logged for audit
Multi-Tenant Isolation and Role Integration
Given a user belongs to Org A and has no shares into Org B When they search, list, or directly request assets from Org B Then no assets from Org B are discoverable or accessible (404/403) Given a partner from Org B is granted view-only to Page P in Org A When they navigate RallyKit Then only Page P and its allowed sub-scopes per grant are visible; no other Org A assets are visible Given a user role lacks publish capability but an explicit share grant specifies publish When they try to publish Then the operation is denied because effective rights are the intersection of role capabilities and explicit grants Given a user is removed from an org or a share is revoked When they make subsequent requests Then access is denied (403) no later than 60 seconds after revocation and any cached permissions are purged
Secure Share Links & Access Controls
"As a nonprofit director, I want to create secure, expiring share links restricted by partner domain and IP so that only intended collaborators can access our scoped assets."
Description

Provide a mechanism to generate scoped share links and invitations that encapsulate scope and rights, with timeboxed access windows (start/end), single-click revocation, and optional one-time-use tokens. Enforce domain allowlists and IP allow/deny rules at session creation and on every request. Use signed, expiring tokens and short-lived sessions; support SSO or email-verified magic links for partner identity. Include rate limiting, CSRF protection, and bot mitigation. Integrate link-level attribution into RallyKit’s live action tracking so actions taken via a share are auditable to the partner and scope. Outcome: external collaboration is secure, traceable, and easy to enable/disable without platform-wide risk.
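A minimal sketch of what a signed, expiring share token could look like, using an HMAC over the scope payload. The field names (link_id, scopes, rights, nbf, exp) and the encoding are illustrative assumptions, not RallyKit's actual token format.

```python
# Sketch of minting and checking a signed, expiring share token (assumed format).
import base64, hashlib, hmac, json, time

SECRET = b"server-side-signing-key"  # placeholder; kept server-side only

def mint_share_token(link_id: str, scopes: list[str], rights: str,
                     start_ts: int, end_ts: int) -> str:
    payload = {"link_id": link_id, "scopes": scopes, "rights": rights,
               "nbf": start_ts, "exp": end_ts}
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).digest()
    return (body + b"." + base64.urlsafe_b64encode(sig)).decode()

def check_share_token(token: str) -> dict | None:
    body, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.urlsafe_b64decode(sig), expected):
        return None                      # tampered or forged token
    payload = json.loads(base64.urlsafe_b64decode(body))
    now = time.time()
    if now < payload["nbf"] or now > payload["exp"]:
        return None                      # E_LINK_NOT_ACTIVE / E_LINK_EXPIRED
    return payload                       # still subject to domain/IP and revocation checks
```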

Acceptance Criteria
Generate Scoped Share Link with Rights and Timebox
Given an org admin selects specific assets (page, script section, district subset, channel), rights (view-only, edit, publish), and start/end times When Generate Link is clicked Then a share URL is created with a signed, expiring token whose payload exactly matches selected scopes/rights and whose expiry is at or before the configured end time And the link metadata includes link_id, optional partner_id, scopes, rights, and channel; metadata is stored and visible in the dashboard And the link is inactive before start time (requests receive 403 E_LINK_NOT_ACTIVE) and active only within the window And rights are enforced: view-only cannot modify or publish; edit allows edits within scope only; publish allows publishing within scope only; out-of-scope actions return 403 E_FORBIDDEN with no side effects And an audit log entry is recorded with actor, timestamp, scopes, rights, start/end, and one_time_use flag And actions initiated via the link are attributed with link_id, partner_id, scope, and channel; they appear in live tracking within 5 seconds and in data exports with those fields
Partner Identity via SSO or Email Magic Link
Given an invitation requires SSO When the partner authenticates via a supported IdP (SAML/OIDC) Then RallyKit verifies email from IdP assertion, creates/associates a partner identity, and establishes a session tied to the share link Given an invitation requires a magic link When the partner submits a matching email Then a time-limited magic link is sent; when clicked, email is verified and a session is created And if the invitation restricts to specific email domains, only addresses matching the allowlist are accepted; non-matching are rejected with 403 E_DOMAIN_BLOCK and a user-facing message And all identity events (invited, challenged, verified, failed) are logged and viewable by the org admin
Domain Allowlists and IP Allow/Deny Enforcement
Given a valid share link When a session is created Then the verified email domain must be in the link’s allowlist (if present) and the source IP must be in allowlist and not in denylist; otherwise return 403 with E_DOMAIN_BLOCK or E_IP_DENY And on every subsequent request for that session, the same domain/IP checks are re-evaluated; if the client IP moves to a denied range, the session is invalidated with 401 E_SESSION_REVOKED And allow/deny rules support exact domains and CIDR ranges; domain checks are case-insensitive And every decision is recorded with link_id, session_id, ip, outcome, and reason code
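The domain and IP rules above combine with logical AND. A small sketch of that evaluation, assuming exact (non-wildcard) domain strings and CIDR ranges; revocation, rate limits, and session state are out of scope here.

```python
# Sketch of the per-request domain allowlist and IP allow/deny evaluation (logical AND).
import ipaddress

def session_allowed(email: str, client_ip: str,
                    allowed_domains: list[str],
                    ip_allow: list[str], ip_deny: list[str]) -> bool:
    domain = email.rsplit("@", 1)[-1].lower()          # case-insensitive domain check
    if allowed_domains and domain not in {d.lower() for d in allowed_domains}:
        return False                                   # E_DOMAIN_BLOCK
    ip = ipaddress.ip_address(client_ip)
    if any(ip in ipaddress.ip_network(cidr) for cidr in ip_deny):
        return False                                   # E_IP_DENY (denylist wins)
    if ip_allow and not any(ip in ipaddress.ip_network(cidr) for cidr in ip_allow):
        return False                                   # E_IP_DENY (outside all allowed ranges)
    return True
```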
Timeboxed Access and Single-Click Revocation
Given a share link with start and end times When current time is before start Then access attempts return 403 E_LINK_NOT_ACTIVE When current time is after end Then access attempts return 403 E_LINK_EXPIRED Given the owner clicks Revoke on the link Then all new requests using the link are rejected immediately with 410 E_LINK_REVOKED And all active sessions created via that link are invalidated within 60 seconds And revocation is idempotent and recorded in the audit log with actor and timestamp And re-enabling access requires generating a new tokenized URL; previously issued URLs remain unusable
Optional One-Time-Use Tokens
Given a link is created with one_time_use=true When the first authenticated request creates a session Then the token is atomically marked spent and cannot create additional sessions And all subsequent attempts using the same URL return 410 E_TOKEN_SPENT within 1 second, even under concurrent requests And analytics report exactly one successful activation for the link_id
Short-Lived Sessions, Rotation, and CSRF Protection
Given a session is created from a share link Then the session has a max idle timeout of 30 minutes and an absolute lifetime of 12 hours; after either, requests receive 401 E_SESSION_EXPIRED And session cookies are HttpOnly, Secure, SameSite=Lax (or Strict where compatible) and rotated every 15 minutes; rotation preserves user state And all state-changing requests require a valid CSRF token tied to the session; missing/invalid CSRF returns 403 E_CSRF with no side effects And session refresh revalidates domain/IP rules; if checks fail, the session is terminated
Rate Limiting and Bot Mitigation on Shared Endpoints
Given unauthenticated or link-based endpoints When requests exceed the configured threshold (default 100/min per IP and 500/day per link_id) Then return 429 E_RATE_LIMIT with a Retry-After header And suspicious patterns (e.g., high-velocity attempts, headless fingerprints) trigger a mitigation challenge; failures return 403 E_BOT_DETECTED; successes proceed without data loss And trusted IP ranges configured by the org are exempt up to a higher ceiling; all exemptions and throttles are logged and visible in admin metrics And rate limiting and bot mitigations are counted per IP, per link_id, and per partner_id
Scoped Collaboration UI
"As a campaign manager, I want a guided UI to choose what to share and with what permissions so that I can set up partner access quickly and correctly."
Description

Build an administrative UI that lets owners select assets to share (page, script section, district subset, channel), choose rights (view-only, edit, publish), set time bounds, and configure domain/IP restrictions. Provide a preview-as-partner mode that shows exactly what a collaborator will see and be able to do. Surface scope conflicts, inherited restrictions, and affected content, with clear summaries before sending an invite. Generate a share summary page with link management (copy, pause, revoke) and real-time indicators of active sessions. Outcome: fast, low-friction setup of precise collaboration without misconfiguration.

Acceptance Criteria
Select Assets and Scope Summary
Given I am on the New Share setup, When I select assets by type (Page, Script Section, District Subset, Channel), Then each selected item appears in a scope summary with counts by type and the Continue button is enabled only after at least one asset is selected. Given I deselect an asset, When I view the scope summary, Then the item is removed and counts update immediately (<=500 ms UI update). Given I select a District Subset, When I enter district identifiers, Then the UI validates format (e.g., CC-XX, CC-HD-###), flags invalid entries inline, displays valid/invalid counts, and blocks progression until all entries are valid. Given a Page is selected, When I open Channel options, Then only channels applicable to the selected Page are available; inapplicable channels are hidden or disabled with a tooltip reason.
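A hypothetical validator for the district identifier formats referenced above (the CC-XX and CC-HD-### shapes); the exact patterns RallyKit accepts are an assumption, so treat these regexes as placeholders.

```python
# Hypothetical inline validation of district identifiers during New Share setup.
import re

DISTRICT_PATTERNS = [
    re.compile(r"^[A-Z]{2}-[A-Z]{2}$"),      # CC-XX style, e.g. a state/chamber pair
    re.compile(r"^[A-Z]{2}-HD-\d{3}$"),      # CC-HD-### style, e.g. a house district
]

def split_district_entries(entries: list[str]) -> tuple[list[str], list[str]]:
    """Return (valid, invalid) so the UI can show counts and flag bad entries inline."""
    valid, invalid = [], []
    for raw in entries:
        value = raw.strip().upper()
        (valid if any(p.match(value) for p in DISTRICT_PATTERNS) else invalid).append(raw)
    return valid, invalid
```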
Rights Assignment Matrix
Given rights options View, Edit, Publish, When I select Publish, Then Edit and View are auto-selected and cannot be deselected while Publish remains selected. Given no rights are selected, When I attempt to continue, Then I see an inline error "Select at least one right" and cannot proceed. Given I choose View-only, When I preview or save, Then the permissions summary explicitly lists permitted actions (read-only) and disallowed actions (edit, publish) and these are enforced in preview. Given I change rights, When I navigate between steps, Then the selection persists and is reflected in the final summary.
Time-Bound Access Controls
Given I set a Start and End time, When I save the share, Then End must be after Start and both are saved in the org’s default timezone with the timezone clearly displayed. Given End is in the past or End is not after Start, When I attempt to save, Then I receive a blocking validation message describing the issue and cannot proceed until corrected. Given a share has expired, When a collaborator opens the link, Then access is denied with an "Access expired" message and a 403 response, and the event is logged. Given I extend the End time, When sessions are active, Then access is restored/maintained within 60 seconds for those sessions without requiring a new link.
Domain and IP Restriction Enforcement
Given I add allowed email domains, When I input values, Then only valid domain formats (example.org, *.example.org) are accepted, duplicates are deduplicated, and invalid formats are rejected with inline errors. Given I add allowed IP rules, When I input IPv4/IPv6 or CIDR ranges, Then valid entries are accepted and invalid ones are blocked with inline errors; overlapping ranges are merged or deduplicated in the summary. Given restrictions are configured, When a collaborator accesses the link, Then access is granted only if the request matches at least one allowed domain (for invited email) and one allowed IP rule; otherwise access is blocked with a clear reason and the attempt is audit-logged. Given I am in preview, When I test restrictions, Then the UI displays the detected IP and a sample partner email domain to verify allowlist logic without granting real access.
Preview-as-Partner Fidelity
Given I click "Preview as partner", When rights and scope are configured, Then the preview renders exactly the scoped assets and channels with UI controls that reflect granted rights (View-only hides edit/publish controls; Edit shows editing controls; Publish exposes publish actions). Given preview mode, When I attempt an action that would modify data, Then a non-destructive simulation is used and a banner states "Preview — changes are not saved". Given restrictions (timebox, domain/IP) are set, When in preview, Then the banner displays active scope, rights, restrictions, and expiry, and these match the share summary values. Given owner-only assets outside the configured scope exist, When in preview, Then they are not visible or discoverable via navigation or search.
Conflict, Inheritance, and Impact Summary
Given selected assets have inherited restrictions or conflicts (e.g., page-level blocks or channel unavailability), When I configure scope and rights, Then conflicts are detected and listed with type, affected items, and severity (Blocker vs Warning). Given unresolved Blockers exist, When I attempt to send an invite, Then the action is disabled and I am directed to resolve or remove the conflicting items. Given I view the Impact Summary, When items are listed, Then counts by type (pages, sections, districts, channels) are shown with expandable details; resolving a conflict updates counts and removes the notice in real time (<=1 s UI update).
Share Summary, Invite, and Link Management
Given configuration is valid, When I create the share, Then a summary page is generated showing scope, rights, time bounds, domain/IP restrictions, and recipients exactly as configured. Given a share link exists, When I click Copy, Then the URL is copied to clipboard, a success toast appears, and the copy event is audit-logged. Given an active share, When I click Pause, Then collaborator access is suspended within 30 seconds, the link state shows Paused, and a Resume control is available; resuming restores access within 30 seconds. Given an active or paused share, When I click Revoke, Then the link is immediately invalidated and cannot be resumed; all active sessions terminate and the Active Sessions indicator drops to zero within 5 seconds. Given the summary page, When sessions are active, Then an Active Sessions indicator updates in near real time (<=5 s latency) and a detail drawer lists session partner identifier, IP, and start time without exposing additional PII.
Review & Publish Workflow Controls
"As a content lead, I want scoped review and publish controls so that partner edits can move quickly while preserving our standards and boundaries."
Description

Introduce a scoped review workflow where partners with edit rights submit changes for owner review, and partners with publish rights can publish only within their granted scope. Provide versioning, diffs by script section and channel, and rollback per scope. Trigger notifications to owners on submissions and to partners on approvals or requests for changes. Ensure publishing updates live action pages and scripts only for in-scope districts/channels and never leaks hidden targets or language. Outcome: safe, auditable collaboration that increases speed to publish while maintaining message control.
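The per-section, per-channel diffs could be produced with a standard line diff. A sketch follows, under the assumption that script content is keyed by (section ID, channel) pairs; the real data model may differ.

```python
# Sketch of grouped line-level diffs for a Change Request (assumed keying by section/channel).
import difflib

def grouped_diffs(before: dict[tuple[str, str], str],
                  after: dict[tuple[str, str], str]) -> dict[tuple[str, str], list[str]]:
    """Return unified-diff lines for every (section_id, channel) pair that changed."""
    diffs = {}
    for key in sorted(set(before) | set(after)):
        old = before.get(key, "").splitlines()
        new = after.get(key, "").splitlines()
        delta = list(difflib.unified_diff(old, new, lineterm="",
                                          fromfile=f"{key} approved",
                                          tofile=f"{key} proposed"))
        if delta:
            diffs[key] = delta      # expandable per-section/per-channel groups in the UI
    return diffs
```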

Acceptance Criteria
Edit-Right Partner Submits Change for Owner Review
Given a partner has edit rights within a defined scope and is within the access window When they save and submit proposed edits Then the system creates a pending Change Request with unique ID, captures author, timestamp, scope, and before/after diffs by affected script sections and channels, with status "Pending Owner Review" Given a partner with edit rights attempts to publish directly When the publish action is initiated Then the system blocks the action and displays an error indicating owner review is required, and no changes are published Given domain/IP restrictions are configured When an edit-right partner outside allowed domain/IP attempts submission Then the system returns 403 and no Change Request record is created
Publish-Right Partner Publishes Within Granted Scope
Given a partner has publish rights limited to specific sections, districts, and channels When they publish an approved Change Request Then only in-scope assets are updated and out-of-scope assets remain unchanged Given a publish includes out-of-scope targets or language When publish is triggered Then the system rejects the publish, lists the out-of-scope items, and performs no partial updates Given a successful publish When action pages for in-scope districts are loaded Then the updated scripts render only for the channels and districts within scope, and do not appear elsewhere
Versioning and Diffs by Section and Channel
Given any workflow event occurs (submit, approve, request changes, publish, rollback) When the event is processed Then a new immutable version snapshot is created tagged with actor, event type, scope, and timestamp Given two versions are selected for comparison When viewing diffs Then the system displays per-section and per-channel differences with line-level additions and deletions and counts of affected districts Given a Change Request spans multiple sections and channels When viewing the Change Request Then diffs are grouped by section and channel with expandable items and summary counts
Rollback Per Scope
Given a previously published version exists within a scope When an owner selects rollback for that scope Then only in-scope assets revert to the selected version while out-of-scope assets remain unchanged Given a rollback is executed When completion occurs Then a new version is recorded with event type "Rollback" and the audit log links it to the original publish version Given a rollback is initiated When validation runs Then the system presents a preview of per-section and per-channel changes that will be reverted and requires explicit confirmation
Workflow Notifications and Alerts
Given a partner submits a Change Request When submission succeeds Then owners receive in-app and (if configured) email notifications containing change ID, scope summary, and a deep link to the diffs Given an owner approves or requests changes When the decision is recorded Then the submitting partner receives a notification with outcome, reviewer comments, and next steps Given a publish completes or fails When the event occurs Then owners and relevant publish-right partners receive success/failure notifications including environment, scope, and error details if any
Live Update Without Leakage
Given an approved in-scope change is published When a supporter visits an action page for an in-scope district Then the updated script content is visible within 60 seconds and cache is invalidated only for that scope Given the same change When a supporter visits an out-of-scope district or channel Then the updated content is not visible and hidden targets or language remain inaccessible via UI and API Given CDN and API caching layers are present When publish occurs Then cache keys are purged for affected scopes only and no logs indicate responses containing hidden content
Auditability and Export
Given workflow events occur (submit, approve, request changes, publish, rollback) When viewing the audit log Then each event shows actor, role, IP/domain, scope, before/after checksums, timestamps, and outcome Given audit evidence is requested When exporting Then the system generates a time-bounded CSV/JSON export of events and versions filtered by scope with a cryptographic checksum of the file Given data retention rules are configured When an event exceeds retention Then content details are redacted while summary metadata and hashes are preserved for traceability
Audit Logging & Evidence Reports
"As an operations lead, I want detailed audit logs and exports tied to scopes so that I can prove compliance and investigate issues quickly."
Description

Capture immutable, timestamped records for all share lifecycle events (create, modify, revoke), access attempts (granted/denied with reasons like IP/domain mismatch or expired), and content actions (view, edit, publish) tied to user, partner, scope, and asset. Provide searchable logs, filters by partner and timeframe, and exportable reports (CSV/PDF) suitable for audits and funder reporting. Integrate alerts/webhooks for anomalous activity (e.g., repeated denied access, out-of-scope edit attempts). Outcome: audit-ready proof of who accessed and changed what, when, and under which authorization.

Acceptance Criteria
Immutable Share Lifecycle Logging
Given a user or partner creates, modifies, or revokes a share When the action is committed Then an audit entry is appended with fields: event_type ∈ {share_created, share_modified, share_revoked}, actor_user_id, actor_partner_id (nullable), asset_id, scope_id, previous_values (for modified), new_values, reason (optional), occurred_at (UTC ISO 8601), request_id, immutable_id, sequence_number Given any API/UI attempt to update or delete an existing audit entry When the request is executed Then the system responds 403 Forbidden and appends a tamper_attempt event; the original entry remains unchanged Given 10,000+ lifecycle events exist When retrieving an audit entry by immutable_id Then the entry is returned correctly with P95 latency ≤ 300 ms and exact field integrity
Access Attempt Logging With Reasons
Given a share link is accessed When authorization is evaluated Then an audit entry is appended with event_type=access_attempt, outcome ∈ {granted, denied}, denial_reason ∈ {expired, ip_mismatch, domain_mismatch, scope_violation, rate_limited, invalid_token} (nullable if granted), requester_ip, user_agent, geo_country (if available), actor_user_id (nullable), partner_id (nullable), asset_id, scope_id, occurred_at (UTC ISO 8601), request_id Given an access attempt is denied due to IP/domain mismatch or expiration When viewing the audit entry Then denial_reason reflects the correct cause and the evaluated policy identifiers are recorded Given 5 denied access_attempt events from the same requester_ip within 10 minutes When the 6th denied attempt occurs within the window Then the system flags anomalous_activity for alerting and webhook dispatch Given normal traffic conditions When logging access_attempt Then the logging overhead adds P95 ≤ 20 ms to request processing
Content Action Audit Trail
Given a user or partner views, edits, or publishes a scoped asset When the action completes Then an audit entry is appended with event_type ∈ {content_view, content_edit, content_publish}, actor_user_id, actor_partner_id (nullable), asset_id, scope_id, change_summary (for edit/publish), occurred_at (UTC ISO 8601), request_id Given an edit or publish changes structured fields When the audit entry is stored Then previous_values and new_values capture field-level diffs for changed fields only Given content_view events can be high-volume When 50,000 content_view events occur in an hour Then write throughput is sustained without event loss and P95 write latency remains ≤ 50 ms
Search and Filter by Partner and Timeframe
Given audit logs exist across multiple partners and dates When a user filters by partner_id and a date range [start, end] Then only matching entries are returned and the range is inclusive of boundaries Given a user searches by keyword When a query is submitted for actor email, asset name, or request_id Then matching entries are returned with those fields highlighted in results Given a large result set (≥50,000 entries) When the user paginates with page_size ∈ [25, 100] Then consistent ordering by occurred_at desc is maintained and P95 query latency ≤ 2 s per page
Exportable Audit Evidence (CSV/PDF)
Given a filtered audit result set ≤ 100,000 entries When the user requests CSV export Then a CSV is generated with canonical columns [immutable_id, event_type, outcome/denial_reason (if applicable), actor_user_id, actor_partner_id, asset_id, scope_id, occurred_at (UTC ISO 8601), request_id, requester_ip, user_agent] and is available for download within 60 s; files >50 MB are zipped automatically Given a filtered audit result set ≤ 5,000 entries When the user requests PDF export Then a paginated PDF is generated within 90 s containing a summary (counts by event_type and denial_reason), timeframe, partner filter, and sample entries with page numbers and report metadata (generated_at UTC, filter criteria) Given an export is created When a download link is issued Then the link is signed, single-use, and expires in 24 hours
Anomalous Activity Alerts and Webhooks
Given ≥ 6 denied access_attempt events from the same requester_ip within 10 minutes or any out_of_scope edit attempt When the threshold condition is met Then an alert is created in-app within 10 s and a webhook payload is queued with type=anomalous_activity and aggregated metrics (count, window, denial_reason) Given a webhook is delivered When the receiver validates X-Signature (HMAC-SHA256 over payload with shared secret) Then the signature matches and delivery status is recorded as delivered; on failure, the system retries up to 3 times with exponential backoff (10 s, 60 s, 300 s) Given alert fatigue controls are enabled When identical anomalous conditions recur Then duplicate alerts/webhooks are suppressed for 30 minutes while counts continue to aggregate in the dashboard
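For reference, the receiver-side X-Signature check described above could look like the sketch below. The header name and HMAC-SHA256 scheme come from the criteria; the hex encoding of the digest is an assumption.

```python
# Sketch of webhook signature validation on the receiving system (hex digest assumed).
import hashlib, hmac

def webhook_signature_valid(raw_body: bytes, x_signature: str, shared_secret: bytes) -> bool:
    """Compare the X-Signature header to an HMAC-SHA256 over the raw payload."""
    expected = hmac.new(shared_secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, x_signature)   # constant-time comparison
```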
District & Channel Scope Enforcement
"As a regional organizer, I want district- and channel-limited access that adapts to changing bill status and mappings so that partners only affect their intended jurisdictions and mediums."
Description

Enforce district and channel boundaries at data and UI layers so collaborators can target only the allowed geographies and outreach channels. Bind scoped shares to RallyKit’s legislator auto-matching, ensuring that changes in district mappings or bill status dynamically update the visible and editable targets within scope. Redact out-of-scope targets and language from all partner views, exports, and APIs. Provide graceful handling for boundary changes (e.g., redistricting) with notifications when a share’s effective coverage shifts. Outcome: collaboration remains tightly aligned to intended districts and channels even as data changes in real time.
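A minimal sketch of the scope filter applied on top of legislator auto-matching: only targets in the share's districts pass through, and scripts for disallowed channels are stripped before anything reaches partner views, exports, or APIs. The record shapes are assumptions about RallyKit's data model.

```python
# Sketch: redact out-of-scope districts and channels from auto-matched targets.
def scope_targets(matched: list[dict], districts: set[str], channels: set[str]) -> list[dict]:
    """Drop out-of-scope legislators and strip scripts for disallowed channels."""
    scoped = []
    for target in matched:
        if target["district"] not in districts:
            continue                                    # out-of-scope targets are excluded entirely
        scripts = {ch: txt for ch, txt in target["scripts"].items() if ch in channels}
        scoped.append({**target, "scripts": scripts})   # only allowed channels survive
    return scoped
```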

Acceptance Criteria
UI enforcement of district scope for collaborator views
Given a collaborator has a scoped share limited to districts D1 and D2 When they open the Targets list or detail views Then only legislators mapped to D1 and D2 are visible and count totals exclude out-of-scope targets Given out-of-scope target IDs exist in the database When the collaborator navigates directly via URL or ID to an out-of-scope target Then the API returns 403 FORBIDDEN and the UI displays a redacted state with no target metadata Given the collaborator performs search, filter, or sorting operations When queries are executed Then results never include out-of-scope targets and performance remains under 500ms p95 for 10k in-scope targets Given the collaborator has view-only rights When opening an in-scope target Then edit controls are disabled and no mutations are permitted Given the collaborator has edit rights When opening an in-scope target Then edit controls are enabled and changes persist successfully with 200 OK
Channel-scope enforcement across UI, scripts, exports, and API
Given a scoped share allows channels {Call, Email} and disallows {Tweet, Letter} When composing or viewing scripts Then only Call and Email script sections are visible/editable and Tweet/Letter sections are hidden and excluded from payloads Given an API token bound to the scoped share When attempting to POST an action to a disallowed channel Then the API returns 403 FORBIDDEN with error code CHANNEL_OUT_OF_SCOPE and no action is created Given a public action page rendered via the share When the page loads Then only buttons for allowed channels are displayed and disabled/hidden states match configuration across desktop and mobile Given the collaborator generates an export When the file is created Then columns and rows for disallowed channels are omitted and row counts match in-app channel-filtered totals
Auto-matching binds visible/editable targets to scoped share
Given a supporter address auto-matches to legislators entirely outside the scoped districts When the collaborator shares or previews the action page Then zero targets are returned and the page displays a message "No targets in your area for this action" with HTTP 200 and no errors Given a supporter address matches a mix of in-scope and out-of-scope legislators When the action page renders Then only in-scope targets are displayed, selectable, and counted; out-of-scope targets are excluded from the DOM and API payloads Given bill status changes affect targeting rules When status transitions occur Then the set of visible/editable in-scope targets updates within 60 seconds without manual refresh and any open sessions reflect the new set on next server call Given server-side caches exist When scope or bill status changes Then stale out-of-scope targets do not appear and cache TTL is <=60s for targeting responses
Redistricting and boundary change handling with notifications
Given district mappings are updated by a data sync When a scoped share’s effective coverage gains or loses any target Then a notification is sent to share owners within 5 minutes including before/after in-scope target counts and a diff by district Given a scoped share loses 100% of targets due to remap When a collaborator visits the share Then the UI shows a "No current coverage" banner, publishing is disabled, and guidance to review scope is displayed Given newly mapped targets fall into scope after redistricting When the mapping is active Then targets become visible automatically without reconfiguration and are included in exports and APIs Given scheduled exports or caches exist When coverage shifts Then previous exports are marked stale and regenerated on next request; API ETags change within 10 minutes to reflect new scope
Redaction across partner views, webhooks, reports, and audits
Given a collaborator generates an export or report When the data set is produced Then only in-scope targets, actions, and script text are included; out-of-scope fields are redacted (empty) and aggregate metrics match in-app scoped totals Given webhooks are configured for action events When events are emitted Then only in-scope action events are delivered; out-of-scope events are suppressed and not retried Given audit logging is enabled When any out-of-scope access is attempted via UI or API Then an audit record is written with timestamp, share ID, actor ID, resource type/ID, and outcome 403 FORBIDDEN
Rights enforcement within scope (view-only, edit, publish)
Given a collaborator has view-only rights within the scoped share When accessing in-scope assets Then save/publish actions are disabled and mutation endpoints return 403 FORBIDDEN Given a collaborator has edit rights but not publish rights When editing in-scope script copy Then edits save successfully (200 OK) but publish controls are disabled and publish endpoints return 403 FORBIDDEN Given a collaborator has publish rights When publishing an in-scope asset Then publish succeeds (201/200) and the system logs the share ID and rights level; scope cannot be expanded and attempts to do so return 403 FORBIDDEN

Export Guardrails

Policy-backed export controls with tiers: No Export, Aggregates Only, or Row-Level with PII redaction. Watermarked downloads, scoped API/webhook tokens, rate limits, and just-in-time access requests. Keeps data promises, satisfies compliance, and stops uncontrolled list spread.

Requirements

Tiered Export Policy Enforcement
"As a nonprofit director, I want to set granular export modes per campaign and role so that my team accesses only the minimum data needed while honoring our data-sharing promises."
Description

Implement policy-backed export modes—No Export, Aggregates Only, and Row-Level with PII Redaction—configurable per organization, campaign, and role. Enforce policies consistently across the UI, CSV/Excel/PDF downloads, APIs, and webhooks via a centralized authorization and policy engine at the service layer. Provide sensible defaults, explicit fallbacks (deny by default on missing policy), and clear in-product messaging that explains the allowed export scope before an action is taken. Integrate with existing permissions and campaign settings so administrators can quickly apply policies to new campaigns and templates. Deliver consistent, testable behavior with unit and integration tests that verify policy decisions for each surface area.

Acceptance Criteria
No Export Blocks All Surfaces
Given an effective export policy of No Export for the user and campaign When the user views any export control in the UI Then all export controls are hidden or disabled and a message states exports are not allowed by policy Given an effective export policy of No Export When a CSV, Excel, or PDF download endpoint is requested Then the service responds 403 with error_code=POLICY_DENY_EXPORT and no file is generated Given an effective export policy of No Export When an API export endpoint is called with a valid token Then the service responds 403 with error_code=POLICY_DENY_EXPORT and returns no rows Given an effective export policy of No Export When a webhook would deliver exportable data Then no payload is delivered and an audit entry records reason=policy_deny_export Given the policy engine is unreachable When any export is attempted Then the request is denied (fail closed) with 503 and error_code=POLICY_EVALUATION_UNAVAILABLE
Aggregates Only Enforcement and Schema Guarding
Given an effective export policy of Aggregates Only When the user opens the export dialog Then only aggregate export options are shown and row-level options are not available Given an effective export policy of Aggregates Only When a CSV, Excel, or PDF is exported Then the file contains only aggregated metrics by permitted dimensions and contains no PII or unique row identifiers; the row count equals the number of groups Given an effective export policy of Aggregates Only When calling any row-level API export endpoint Then the service responds 403 with error_code=POLICY_AGGREGATES_ONLY; aggregate endpoints succeed and include no PII fields Given an effective export policy of Aggregates Only When configuring webhooks Then only aggregate webhook types can be created; row-level webhook subscriptions are rejected with 403 Given an effective export policy of Aggregates Only When validating output schemas Then fields first_name, last_name, email, phone, address_line, city, state, zip, ip_address, and device_id are absent from all outputs
Row-Level With PII Redaction Applied Consistently
Given an effective export policy of Row-Level with PII Redaction When exporting via UI (CSV/Excel/PDF), API, or webhook Then all PII fields (first_name, last_name, email, phone, address_line, city, state, zip, ip_address, device_id) are redacted using the sentinel value "REDACTED" and no unredacted PII appears Given an effective export policy of Row-Level with PII Redaction When the user previews the export scope Then a notice enumerates the redacted fields and shows example redaction before confirmation Given an effective export policy of Row-Level with PII Redaction When requesting explicit PII columns Then the response includes the columns with redacted values and metadata indicates redaction=true for those fields Given an effective export policy of Row-Level with PII Redaction When automated leak checks scan outputs Then zero matches are found for PII regex patterns across all surfaces
Effective Policy Resolution and Deny-by-Default Fallback
Given policies defined at organization, campaign, and role levels When computing the effective export mode for a user and campaign Then the most restrictive applicable mode is enforced with ordering: No Export > Aggregates Only > Row-Level with PII Redaction Given conflicting policies across levels When evaluating the decision Then test cases confirm the most-restrictive-wins behavior for all combinations Given no explicit policy applies at any level When an export is attempted Then the request is denied by default with 403 and error_code=POLICY_NOT_DEFINED and the UI explains that no export policy is set Given the policy engine returns an error or empty decision When an export is attempted Then the system fails closed and denies the request with 503 and error_code=POLICY_EVALUATION_UNAVAILABLE
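A compact sketch of this resolution order: the most restrictive mode across the organization, campaign, and role levels wins, and the absence of any policy denies by default. The mode labels mirror the tiers above; everything else is illustrative.

```python
# Sketch of effective export-policy resolution (most restrictive wins, deny by default).
RESTRICTIVENESS = {
    "no_export": 3,
    "aggregates_only": 2,
    "row_level_redacted": 1,
}

def effective_export_mode(org: str | None, campaign: str | None, role: str | None) -> str:
    levels = [m for m in (org, campaign, role) if m is not None]
    if not levels:
        return "no_export"  # POLICY_NOT_DEFINED: deny by default
    return max(levels, key=lambda m: RESTRICTIVENESS[m])
```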
Pre-Action Scope Messaging and Controls
Given any user initiates an export action When the export dialog is presented Then it displays the effective policy name, allowed data scope (None, Aggregates Only, Row-Level with PII Redaction), and the policy source (org/campaign/role) before confirmation Given an export is disallowed by policy When the user attempts to confirm Then the confirm action is disabled or results in 403 with error_code reflecting the policy, and the message remains visible Given an API client queries the export scope endpoint When provided with user, campaign, and token context Then the response includes a machine-readable policy_scope object detailing allowed mode and redactions
Admin Configuration and Inheritance for New Campaigns and Templates
Given an administrator with Manage Export Policies permission When setting the organization default export mode Then the selection is saved and becomes the default for newly created campaigns and templates Given a new campaign is created from a template When the template has an explicit export policy Then the campaign inherits that policy unless explicitly overridden during creation Given role-specific overrides are configured for a campaign When users with those roles attempt exports Then enforcement reflects the override and the settings UI shows the effective mode per role Given a campaign-level policy is removed When the effective policy is recalculated Then the system reverts to the most restrictive applicable higher-level policy; if none exists, exports are denied by default
Cross-Surface Consistency and Automated Test Coverage
Given the continuous integration pipeline runs When policy enforcement tests execute Then there are automated tests covering 3 policy modes x 4 surfaces (UI, CSV/Excel/PDF downloads, API, webhooks) and policy resolution cases, and all tests pass Given sample datasets containing synthetic PII When end-to-end integration tests run Then expected fields are included/excluded or redacted per mode, and any attempt to bypass policy via query parameters or alternate formats is denied and logged Given regression tests are scheduled When a policy or schema changes Then tests detecting PII leakage fail the build and prevent deployment
PII Redaction & Field-Level Masking
"As a data steward, I want personally identifiable fields automatically redacted in row-level exports so that we can share operational data without exposing sensitive supporter information."
Description

Provide a redaction engine that automatically masks or removes personally identifiable information in row-level exports based on a configurable field catalog (e.g., name, email, phone, street address, donor notes). Support irreversible hashing for identifiers, partial masking for contact data, and complete omission for highly sensitive fields. Annotate exported files with metadata that indicates which fields were redacted and why, and expose a preview so users can see the resulting schema before exporting. Maintain a central configuration with versioning and environment-specific defaults, and integrate with the existing data model for supporters, actions, and legislators. Ensure redaction executes server-side at export time to prevent client bypasses and is covered by policy-driven tests.

Acceptance Criteria
Row-Level Export: Identifier Hashing
Given a field catalog marks supporter_id and email as "hash" and an environment-specific salt is configured When a user initiates a row-level supporter export Then supporter_id and email values in the export are replaced by SHA-256 hashes computed with the environment salt and no plaintext values appear And identical inputs hash to identical outputs within the same environment and export job, but differ across environments with different salts And null or empty inputs remain null (no hash of empty string is emitted) And column names remain unchanged; hashed status is indicated in the export metadata
Contact Data Masking and Omission Rules
Given the catalog sets email to "mask_partial", phone to "mask_partial", and street_address and donor_notes to "omit" When a user exports row-level supporter data Then email is rendered as first-character + "*****" + "@" + domain (e.g., j*****@example.org), preserving the domain exactly And phone shows only the last 4 digits with other digits replaced by "X" while preserving formatting (e.g., (XXX) XXX-1234 or XXX-XXX-1234) And street_address and donor_notes columns do not appear in the export headers or any row data And fields not designated for redaction remain unchanged And masking is applied uniformly across all rows regardless of client parameters
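The hashing rule from the previous criterion and the masking shapes above can be illustrated as follows. The salt handling and helper names are assumptions; the output formats (j*****@example.org, (XXX) XXX-1234) follow the criteria.

```python
# Sketch of identifier hashing and partial masking per the redaction catalog (assumed helpers).
import hashlib

def hash_identifier(value: str | None, env_salt: str) -> str | None:
    if not value:
        return None                                # null/empty stays null; never hash("")
    return hashlib.sha256((env_salt + value).encode()).hexdigest()

def mask_email(email: str) -> str:
    local, domain = email.split("@", 1)
    return f"{local[0]}*****@{domain}"             # first character kept, domain preserved exactly

def mask_phone(phone: str) -> str:
    total = sum(c.isdigit() for c in phone)
    seen, out = 0, []
    for c in phone:
        if c.isdigit():
            seen += 1
            out.append(c if seen > total - 4 else "X")   # keep only the last 4 digits
        else:
            out.append(c)                                # preserve formatting characters
    return "".join(out)
```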
Export Metadata: Redaction Annotations
Given an export job is executed When the export file is generated Then a companion JSON manifest named <data_filename>.redaction.json is produced alongside the data file And the manifest contains export_id, requesting_user_id, environment, UTC timestamp, and redaction_config_version And the manifest lists each field with the applied action (none | mask_partial | hash | omit) and the governing policy/rule identifier (reason) And the manifest summarizes counts per action type (e.g., fields_masked, fields_hashed, fields_omitted) And the manifest is downloadable by the same user and included in API responses for the job
Pre-Export Schema Preview
Given a user selects a redaction config version and target dataset When the user opens the export preview Then the preview displays the exact columns that will be included, with per-field badges indicating none/mask_partial/hash/omit And sample data is rendered server-side for at least 5 records with the redactions applied exactly as in the final export And toggling the config version or environment updates the preview within 2 seconds And the final export uses the same pinned config version shown in the preview
Server-Side Enforcement at Export Time
Given a client attempts to bypass redaction by altering request parameters or front-end code When the export endpoint processes the request Then all redactions are applied on the server based solely on the active policy and config version, ignoring any client hints to relax redaction And no unredacted PII is present in any response payload or user-accessible job artifact And API requests for disallowed fields are rejected with 403 or the fields are stripped per policy, and the event is audit logged with export_id and config version And security tests that simulate client tampering still yield redacted outputs
Central Configuration: Versioning and Environment Defaults
Given a central redaction catalog supports versioning and environment defaults When a privileged user saves a change to redaction rules Then a new immutable config version is created with a change log entry, and prior versions remain selectable And in-flight exports continue using their pinned version; subsequent exports use the selected or environment-default version And each environment (dev, staging, prod) can define defaults that apply automatically when starting an export And overrides of the default version require appropriate permission and are audit logged And CI generates and executes policy-driven tests from the active config; any failing test blocks deployment of that config
Data Model Integration: Supporters, Actions, Legislators
Given exports may target supporters, actions, or legislators datasets (including denormalized joins) When exporting any of these datasets Then redaction rules are applied to each entity's fields per the catalog, including joined supporter PII within action exports And legislator non-PII remains unredacted while any PII present in related records follows policy And computed fields derived from redacted sources inherit the strictest rule (e.g., if any component is omit, the computed field is omitted) And JSON/NDJSON exports have nested fields redacted consistently with their flat counterparts And cross-entity identifiers that are hashed produce the same hash for the same source value within the same environment
Watermarked Downloads
"As a campaign lead, I want every exported file to be watermarked with who downloaded it and when so that off-platform sharing can be traced and discouraged."
Description

Stamp all downloadable files with a persistent watermark that includes organization name, campaign identifier, exporting user, timestamp, and policy scope. For CSV/Excel, inject a header watermark and a unique export watermark ID column; for PDFs, apply an unobtrusive visual overlay; for ZIP bundles, include a manifest with watermark metadata. Make watermark IDs traceable to the audit trail for later attribution and deterrence. Present watermarked previews and communicate to users that downloads carry identifiable marks to discourage improper redistribution. Ensure watermarking is applied in streaming pipelines to avoid large memory footprints and covers all export file types supported by RallyKit.

Acceptance Criteria
CSV/XLSX exports contain header watermark and export_watermark_id column
Given a user with export permission initiates a CSV export for a specific campaign When the export completes Then the file's first line contains watermark metadata including org_name, campaign_identifier, exporting_user_id, timestamp_utc (ISO-8601), policy_scope, and watermark_id And the column header row follows immediately after the metadata line And an additional column named export_watermark_id is appended as the last column And every data row's export_watermark_id value equals the file's watermark_id Given a user initiates an XLSX export for the same campaign When the export completes Then the workbook contains a hidden worksheet named _watermark with the same metadata keys and values And the main worksheet includes a last column named export_watermark_id with the same watermark_id for all rows Given any CSV/XLSX export When validated Then the watermark_id matches a UUIDv4 format
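A streaming sketch of this CSV layout (metadata line, header row, appended export_watermark_id column), yielding one chunk per row so the full dataset is never held in memory. The exact metadata-line format is an assumption; only the column and ID requirements come from the criteria.

```python
# Sketch of streaming CSV watermarking: metadata line, header, then rows carrying the watermark ID.
import csv, io, uuid

def watermarked_csv_rows(header, rows, org, campaign, user_id, scope, ts_utc):
    watermark_id = str(uuid.uuid4())     # UUIDv4, also recorded in the audit trail
    yield (f"# org={org} campaign={campaign} exported_by={user_id} "
           f"timestamp={ts_utc} policy_scope={scope} watermark_id={watermark_id}\r\n")

    buf = io.StringIO()
    writer = csv.writer(buf)

    def flush() -> str:
        chunk = buf.getvalue()
        buf.seek(0)
        buf.truncate(0)
        return chunk

    writer.writerow(header + ["export_watermark_id"])
    yield flush()
    for row in rows:                     # rows is any iterable; never fully buffered
        writer.writerow(list(row) + [watermark_id])
        yield flush()
```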
PDF exports display an unobtrusive, persistent watermark overlay
Given a user downloads a PDF export When the PDF is opened Then each page displays a diagonal overlay containing org_name, campaign_identifier, exporting_user_id, timestamp_utc, policy_scope, and watermark_id And the overlay opacity is between 8% and 15% And the overlay is embedded in the page content stream (not in a removable optional content group/layer) And body text remains legible and machine-extractable Given the PDF is printed or rasterized When inspected Then the watermark overlay remains visible
ZIP bundle exports include a root manifest with watermark metadata and file hashes
Given a user downloads a ZIP bundle export When the archive is opened Then a root-level file named watermark_manifest.json exists And the manifest contains org_name, campaign_identifier, exporting_user_id, timestamp_utc (ISO-8601), policy_scope, watermark_id, generator_version And the manifest includes an array of entries with relative_path and sha256 for each file in the bundle And all listed sha256 hashes validate against the included files And the watermark_id in the manifest is identical to any per-file watermark_id found within contained CSV/XLSX/PDF files
Watermark IDs are traceable via audit trail for attribution
Given a completed export with watermark_id X When an admin queries the audit API with watermark_id X Then the API returns a single export event including exporting_user_id, org_id, campaign_identifier, policy_scope, created_at, file_type, and source_ip And the returned watermark_id equals X Given watermark_id X When searched in the admin UI Then the matching export event appears with a direct link to details And the event is created within 5 seconds of export completion
Streaming watermarking prevents large memory footprints on big exports
Given a CSV export of at least 1 GB or 5 million rows on staging data When executed under instrumentation Then peak server memory attributable to the export process does not exceed 300 MB And the export stream writes in chunks no larger than 1 MB And no temporary file larger than 2x the chunk size is created Given any supported export type (CSV, XLSX, PDF, ZIP) When generated Then the file is produced via streaming without loading the entire payload into memory And the watermark is applied during the stream before any bytes are sent to the client
Pre-download preview and user messaging communicate identifiable watermarking
Given a user initiates a download from the UI When the confirmation modal opens Then the modal displays a preview showing how the watermark will appear for the selected file type And a notice states: "This download is watermarked and traceable to your account" And the modal shows the policy_scope and timestamp_utc that will be embedded And the user must confirm before the download begins Given the preview for CSV/XLSX When rendered Then it shows the export_watermark_id column and sample watermark metadata Given the preview for PDF When rendered Then it shows the first page with the overlay
Watermarking is enforced across all export endpoints and failures block downloads
Given any export request via UI, API, or webhook When the file is generated Then the resulting file contains a watermark per its type specification And the watermark_id is present and valid Given watermarking fails for any reason When the export pipeline detects the failure Then the download is aborted and no unwatermarked bytes are delivered And the user receives an error stating the export could not be completed due to watermarking And an audit event is recorded with outcome=failed and reason
Scoped API & Webhook Tokens
"As an integration engineer, I want API tokens that are scoped to aggregate-only or redacted data so that external systems never ingest disallowed supporter details."
Description

Introduce least-privilege, time-bound tokens for export-related APIs and webhook deliveries with scopes aligned to export policy tiers (e.g., aggregates.read, rows.read.redacted). Support token expiration, rotation, explicit revocation, IP allowlists, and HMAC-signed webhook payloads. Expose an admin UI to issue, restrict, and audit tokens, and surface scope errors with actionable messages. Ensure tokens cannot escalate beyond the configured campaign/organization policy and that aggregate-only scopes return pre-aggregated datasets. Provide SDK examples and documentation to make integration straightforward for third-party tools while keeping data exposure within policy limits.
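For the HMAC-signed webhook payloads, receiver-side verification might look like the sketch below. It assumes the headers described in the acceptance criteria (a Unix-epoch X-RallyKit-Timestamp and a hex-encoded HMAC-SHA256 of the raw body in X-RallyKit-Signature) plus an assumed five-minute freshness window; the timestamp encoding and window size are illustrative choices, not confirmed product behavior.

```python
import hashlib
import hmac
import time

REPLAY_WINDOW_SECONDS = 300  # assumed freshness window; tune to the real spec

def verify_webhook(raw_body: bytes, timestamp: str, signature: str, secret: str) -> bool:
    """Verify an HMAC-SHA256 signature over the raw body and reject stale deliveries."""
    if abs(time.time() - int(timestamp)) > REPLAY_WINDOW_SECONDS:
        return False  # stale or future-dated timestamp -> treat as a replay
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking how many characters matched.
    return hmac.compare_digest(expected, signature)
```

Replay protection on duplicate X-RallyKit-Id values would sit on top of this, backed by whatever deduplication store the receiver already runs.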

Acceptance Criteria
Scope Enforcement and Policy Ceiling
Given an organization export policy of "Aggregates Only" and a request to create a token with scope rows.read.redacted, When the admin submits the create-token form or API call, Then the request is rejected with HTTP 400 and error code scope.policy_violation, includes suggested_scopes ["aggregates.read"], and a docs_url. Given an organization export policy that permits Row-Level (Redacted) and a token with scope rows.read.redacted, When the token is used on GET /exports/rows, Then the response excludes PII fields (email, phone, street_address) and includes only organization- and campaign-scoped data. Given a token with scope aggregates.read, When it is used against any /exports/rows* endpoint, Then the API returns HTTP 403 with error code scope.missing and required_scopes ["rows.read.redacted"]. Given any token, When it is used to access a campaign outside its assigned campaign_ids, Then the API returns HTTP 403 with error code scope.campaign_violation and docs_url. Given any request that fails due to scope, When the API returns an error, Then the response includes a human-readable message with the missing scope(s) and a link to remediation steps.
Aggregate-Only Scope Returns Pre-Aggregated Datasets
Given a token with scope aggregates.read, When it requests GET /exports/aggregates with filters (campaign_id, date_range), Then the JSON response returns only pre-aggregated metrics (totals, by_legislator, by_district) and no row-level identifiers or PII. Given a token with scope aggregates.read, When it requests CSV export for aggregates, Then the CSV columns are limited to aggregate fields and match the JSON totals for the same filters. Given the same filters applied in the RallyKit dashboard, When comparing totals to the API aggregate response, Then the values match for the same time window and campaign. Given a token with scope aggregates.read, When it attempts to request row-level endpoints or fields, Then the API denies the request with HTTP 403 and error code scope.missing.
Token Lifecycle: Expiration, Rotation, Revocation
Given a token created with an expires_at timestamp, When the current time is before expires_at, Then requests authorized by its scopes succeed with HTTP 200/2xx. Given the same token, When the current time is after expires_at, Then requests fail with HTTP 401 and error code token.expired, include a rotation_hint and docs_url. Given an organization-configured maximum TTL, When a create-token request exceeds that TTL, Then the API returns HTTP 400 with error code token.ttl_exceeds_policy and includes the allowed maximum. Given an active token, When an admin performs a Rotate action, Then a new token value is issued and the prior token value is immediately invalid for all endpoints (HTTP 401 token.rotated). Given an active token, When an admin performs a Revoke action, Then subsequent API calls using that token return HTTP 401 token.revoked and associated audit logs record who, when, and why. Given any token, When it is used successfully, Then last_used_at and last_ip are updated and visible in the audit trail.
Token IP Allowlists
Given a token configured with an IP allowlist of CIDR ranges, When a request originates from an IP within those ranges, Then the API authorizes based on scope and returns 2xx as applicable. Given the same token, When a request originates from an IP outside those ranges, Then the API returns HTTP 403 with error code ip.not_allowed and includes the observed origin_ip. Given a token allowlist, When IPv4 or IPv6 CIDR blocks are provided, Then both formats are accepted and validated; invalid CIDR inputs return HTTP 400 with error code ip.invalid_cidr. Given a token without an IP allowlist configured, When requests are made from any IP, Then no IP-based restriction is applied (authorization continues to rely on scope and policy).
Webhook HMAC Signatures and Replay Protection
Given a webhook destination configured for a campaign, When RallyKit sends a webhook event authorized by the token's scopes, Then the request includes headers X-RallyKit-Id, X-RallyKit-Timestamp, and X-RallyKit-Signature containing an HMAC-SHA256 signature over the raw body using the destination's shared secret. Given the receiver computes the signature with the shared secret and compares it to X-RallyKit-Signature, When the signatures match and the timestamp is within the allowable window, Then the event is accepted (HTTP 2xx) and marked verified. Given an invalid signature, When the receiver validates the request, Then it rejects with HTTP 401 signature.invalid and does not process the payload. Given a replayed delivery (duplicate X-RallyKit-Id or stale X-RallyKit-Timestamp), When the receiver validates the request, Then it rejects with HTTP 401 signature.replay_detected. Given official verification helpers, When used with sample payloads from the docs, Then verification passes for valid examples and fails for tampered or stale examples.
Admin UI for Token Issuance, Restriction, and Audit
Given an organization admin, When they open Admin > API & Webhooks, Then they can create a token by selecting scopes (constrained by org/campaign export policy), setting expires_at or TTL, and optionally configuring IP allowlists. Given token creation is successful, When the UI displays the token, Then the full token value is shown once with copy action, and thereafter only a masked prefix is shown in lists. Given the token list view, When loaded, Then it shows columns: name, scopes, campaign scope, expires_at, status (Active/Expired/Revoked), last_used_at, last_ip, created_by, created_at. Given an existing token, When the admin clicks Revoke or Rotate and confirms, Then the token status updates immediately and subsequent API calls reflect the change. Given any token lifecycle or scope change, When the admin views Audit Log, Then entries include actor, action, timestamp, token_id, prior/new values, and IP. Given a non-admin user, When they attempt to access the token management UI or APIs, Then access is denied with HTTP 403 and a message indicating required role.
SDKs and Documentation for Safe Integration
Given the public docs site, When a developer follows the Quickstart, Then they can create a scoped token, call the aggregates endpoint, and verify a webhook signature end-to-end within 30 minutes. Given official SDKs or examples for Node.js and Python, When executed against the sandbox, Then sample scripts successfully: (1) call /exports/aggregates with aggregates.read, (2) call /exports/rows with rows.read.redacted, (3) handle HTTP 401/403 with actionable messages, and (4) verify webhook HMAC. Given the error catalog documentation, When referencing an API error, Then each error includes http_status, error_code, human_message, remediation, and docs_url. Given the docs repository CI, When link checks and code sample tests run, Then all reference links return HTTP 200 and all code samples compile/run without errors. Given the scope matrix page, When viewing, Then it clearly maps export policy tiers to allowable scopes and includes copy-paste examples for each scope.
Adaptive Rate Limits & Exfiltration Guard
"As a security officer, I want rate limits and anomaly detection on exports so that suspicious high-volume downloads are throttled or blocked before data leaks occur."
Description

Apply layered rate limits and anomaly detection to export endpoints and download workflows, including per-user, per-organization, per-campaign, and per-token thresholds with burst and rolling windows. Implement daily row-count caps, file-size limits, and circuit breakers that temporarily block or challenge suspicious activity. Surface informative throttle messages and notify admins on threshold breaches. Integrate with existing monitoring to log metrics such as rows exported per minute and export failures, and tune defaults for small nonprofits. Provide configuration hooks for stricter regimes when required by funders or policy commitments.
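As a rough illustration of the layered limits, the sketch below keeps a rolling row-count window plus a short burst window per key (user, token, org, or campaign). It is a single-process, in-memory sketch only; the class name and defaults simply mirror the figures in the acceptance criteria, and a real deployment would back the counters with a shared store.

```python
import time
from collections import defaultdict, deque

class RollingRowLimiter:
    """Per-key rolling and burst row limits (sketch; not the shipped implementation)."""

    def __init__(self, limit: int, window_s: int, burst_limit: int, burst_window_s: int):
        self.limit, self.window_s = limit, window_s
        self.burst_limit, self.burst_window_s = burst_limit, burst_window_s
        self._events: dict[str, deque] = defaultdict(deque)  # key -> (timestamp, rows)

    def allow(self, key: str, rows: int) -> bool:
        now = time.monotonic()
        events = self._events[key]
        while events and now - events[0][0] > self.window_s:
            events.popleft()  # discard events that fell out of the rolling window
        rolling = sum(r for _, r in events) + rows
        burst = sum(r for t, r in events if now - t <= self.burst_window_s) + rows
        if rolling > self.limit or burst > self.burst_limit:
            return False  # caller responds 429 with Retry-After and an error_code
        events.append((now, rows))
        return True

# Per-user defaults from the acceptance criteria: 1,000 rows/5 min, 200 rows/10 s burst.
per_user = RollingRowLimiter(limit=1000, window_s=300, burst_limit=200, burst_window_s=10)
```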

Acceptance Criteria
Per-User and Per-Token Rolling and Burst Limits on Export API
Given per-user limits of 1,000 rows per 5 minutes with a burst of 200 rows per 10 seconds and per-token limits of 2,000 rows per 5 minutes When a user using token T exceeds the 10-second burst threshold Then the request is rejected with HTTP 429, a Retry-After header indicating seconds until the burst window resets, and a JSON body containing error_code="RATE_LIMIT_BURST", scope="user", window_seconds=10, limit=200, remaining=0 And when subsequent requests stay under burst but exceed the rolling 5-minute limit for either the user or the token Then the request is rejected with HTTP 429 and a JSON body containing error_code="RATE_LIMIT_ROLLING", scope set to the first limit breached ("user" or "token"), window_seconds=300, limit matching the configured cap, and remaining=0 And when requests remain within both burst and rolling windows Then exports succeed (HTTP 200) and no throttling error is returned
Org- and Campaign-Scoped Daily Caps and Aggregation
Given an organization daily cap of 20,000 rows and a campaign daily cap of 10,000 rows across all users and tokens When multiple users within the same org export data for the same campaign and the cumulative successful exports for that campaign reach 10,000 rows within the current 24-hour window Then any further export attempts for that campaign by any user in the org are rejected with HTTP 429 and JSON body error_code="CAMPAIGN_LIMIT_DAILY_CAP", scope="campaign", limit=10000, remaining=0, and reset_at set to the campaign window reset timestamp And when cumulative exports across all campaigns in the org reach 20,000 rows within the current 24-hour window Then any further export attempts for that org are rejected with HTTP 429 and JSON body error_code="ORG_LIMIT_DAILY_CAP", scope="org", limit=20000, remaining=0, and reset_at set to the org window reset timestamp And successful exports below these caps are counted accurately and atomically to prevent races and double counting
File Size Limits on Exports and Download Links
Given a maximum export file size of 100 MB per generated file and a download rate limit of 5 downloads per minute per token When an export job estimate indicates the resulting file would exceed 100 MB Then the job is aborted prior to file generation and the API responds with HTTP 413 and JSON body error_code="FILE_SIZE_LIMIT", limit_mb=100, suggested_action including guidance to narrow filters or choose aggregate exports And when a generated file is requested more than 5 times within a minute by the same token or user Then the download endpoint responds with HTTP 429 and JSON body error_code="DOWNLOAD_RATE_LIMIT", scope="token", window_seconds=60, limit=5, remaining=0 And when requests stay under both limits Then the export is generated and downloadable without size/rate limit errors
Anomaly Detection Circuit Breaker with Temporary Block/Challenge
Given anomaly thresholds defined as: rows_exported_per_minute > 3x the caller’s 7-day p95 for 2 consecutive minutes OR > 50,000 rows attempted within 10 minutes OR more than 5 distinct token uses by the same user within 5 minutes When any threshold condition is met for a user, token, campaign, or org scope Then a circuit breaker is activated for the smallest offending scope for 15 minutes and subsequent export or download requests receive HTTP 429 with JSON body error_code="CIRCUIT_BREAKER", scope set to the blocked scope, cooldown_seconds=900, and challenge_required=true And an optional step-up challenge (e.g., re-authentication or admin approval) allows early unblock upon successful completion, which is recorded in the audit log And after the cooldown expires without further anomalies, the circuit breaker automatically resets and exports proceed
Admin Notifications and Audit Log on Threshold Breach
Given admin notification is enabled for the organization When any rate limit, daily cap, file-size limit, or circuit breaker is triggered Then all org admins receive an in-app alert and email within 60 seconds containing: timestamp, scope (user/token/campaign/org), identifiers (org_id, campaign_id, user_id, token_id), limit type, configured thresholds, current counts, request_id, and recommended next steps And an immutable audit log entry is created with the same fields and outcome (blocked/challenged/unblocked) and is viewable in the admin console with search by request_id And notifications are deduplicated to at most one per distinct condition per 5 minutes to avoid alert fatigue
Monitoring and Metrics Emission to Existing Observability
Given the platform monitoring pipeline is available When exports and downloads occur Then the system emits metrics within 60 seconds including: rows_exported_total, rows_exported_per_minute, throttles_total by reason (burst, rolling, daily_cap, file_size, circuit_breaker, download_rate), export_failures_total by reason, and export_duration_seconds histogram And each metric is labeled with scope (user/token/campaign/org), org_id, campaign_id, endpoint (export_api/download), and status (success/throttled/failed), without exposing PII beyond stable IDs And structured logs are written for every throttle/deny event with request_id, scope, reason, thresholds, counts, and retry_after/cooldown where applicable
Configurable Policy Hooks and Small Nonprofit Defaults
Given no custom configuration is provided When the system is installed for a new small nonprofit tenant Then the following defaults are active and documented: per-user 1,000 rows/5 minutes with 200 rows/10-second burst, per-token 2,000 rows/5 minutes, per-campaign 10,000 rows/day, per-org 20,000 rows/day, max file size 100 MB, download rate 5 per minute per token And when an authorized operator applies a stricter policy via configuration (environment variables or management API) that reduces any limit by a specified percentage or absolute value Then the new limits take effect within 60 seconds without service restart, are reflected in throttle responses (limit, window, reset), and are validated to prevent nonsensical values (e.g., negative limits, windows < 1 second) And a change event is logged with actor, old_value, new_value, scope, and timestamp
Just-in-Time Export Access Requests
"As a volunteer coordinator, I want to request temporary access to row-level exports with redaction so that I can fulfill a time-sensitive task without permanent elevated permissions."
Description

Offer an in-product workflow for requesting temporary elevation from No Export or Aggregates Only to a higher tier with PII redaction. Collect justification, scope (campaign, fields, time window), and approver, then automatically grant and revoke access based on time-bound policies. Notify approvers via email/Slack, record decisions with reasons, and display pending/approved requests in an admin queue. Integrate with exports so that users encountering a blocked action can initiate a request inline without losing context. Ensure all temporary grants are captured in the audit log and are discoverable for compliance reporting.
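One way to picture the time-bound grant is as a small record checked at export time. A minimal sketch; the field names and the dataclass shape are assumptions for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TemporaryGrant:
    """Illustrative shape of an approved JIT elevation; field names are assumptions."""
    campaign_ids: set[str]
    allowed_fields: set[str]
    starts_at: datetime
    expires_at: datetime
    status: str = "approved"

def grant_permits(grant: TemporaryGrant, campaign_id: str, requested_fields: set[str]) -> bool:
    """Allow an export only inside the approved scope and time window."""
    now = datetime.now(timezone.utc)
    return (
        grant.status == "approved"
        and grant.starts_at <= now < grant.expires_at
        and campaign_id in grant.campaign_ids
        and requested_fields <= grant.allowed_fields
    )
```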

Acceptance Criteria
Inline Blocked Export Triggers JIT Access Request
Given a user with "No Export" or "Aggregates Only" permission attempts an export from a campaign page When the system blocks the export Then a JIT Access Request modal opens prefilled with the campaign, export type, and requested fields And the form requires: justification (minimum 20 characters), scope selection (campaign(s) and field set), time window (start/end within the next 24 hours and duration <= 24 hours), and approver And the Submit button remains disabled until all required fields are valid And on submission, the request is created, the user sees a success confirmation within 2 seconds, and remains in the original export context And the request appears in the Admin Queue with status "Pending" within 5 seconds And no PII values are revealed in the modal beyond field names
Approver Notifications and Escalation
Given a JIT Access Request is created and assigned an approver When the request is saved Then an email and a Slack message are delivered to the approver within 60 seconds And the messages include requester name, organization, justification, scope (campaign, fields), time window, and deep links to Approve or Deny And delivery outcomes (success/failure) are logged And if no action is taken within 2 hours, a reminder is sent and the request escalates to the backup approver list per policy And if Slack delivery fails, email is still sent
Approval Grants Scoped, Redacted, and Time-Bound Export
Given the approver approves a request When the approval is submitted Then the requester’s export permission elevates within 60 seconds only for the specified campaigns and fields And all row-level exports performed under the grant have PII fields redacted per policy And downloads are watermarked and API/webhook tokens are scoped to the grant And rate limits defined by policy are enforced during the grant And the grant auto-expires at the specified end time (drift <= 60 seconds) and is revoked across web and API sessions And attempts outside the scope or after expiry are blocked with a descriptive error
Denial Requires Reason and Notifies Requester
Given the approver denies a request When the denial action is submitted Then a non-empty denial reason of at least 10 characters is required And the requester receives an in-app notification and email within 60 seconds And no permission changes are applied And the request status updates to "Denied" with timestamp and reason recorded in the audit log
Admin Queue Visibility and Decision Workflow
Given an admin opens the JIT Access Requests queue When the queue loads Then the admin can filter by status (Pending, Approved, Denied, Expired), requester, campaign, date range, and approver, and search by justification text And each row shows requester, scope, time window, status, and decision reason if applicable And selecting a request reveals full details and Approve/Deny controls And queue updates reflect new requests and decisions within 5 seconds without a full page refresh
Audit Log and Compliance Reporting for Temporary Grants
Given any lifecycle event for a JIT request or any export executed under a temporary grant When the event occurs Then an immutable audit entry is written with: request ID, requester ID, approver ID, timestamps (created/approved/denied/expired), scope (campaigns, fields), justification, decision, reason, export artifact IDs, actor IP/user agent, and notification delivery statuses And audit entries are queryable by date range, user, campaign, and request ID And a compliance report lists all active and historical temporary grants over a selected period and can be exported as a watermarked CSV
Tamper-evident Export Audit Trail
"As a compliance manager, I want a complete, tamper-evident record of every export attempt and decision so that I can produce audit-ready proof and detect policy violations."
Description

Create an immutable, append-only audit log that records every export attempt and outcome, including actor, source surface (UI/API/webhook), policy evaluation, scope granted, file metadata, watermark ID, row counts, and cryptographic checksums. Store logs in WORM-capable storage with retention policies aligned to compliance needs and provide an auditor-facing UI to search, filter, and export receipts. Emit real-time events for SIEM ingestion and generate scheduled compliance summaries per campaign and organization. Include integrity verification routines and access controls to ensure logs themselves are protected and admissible as audit-ready proof.
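The receipt fields and checksums in the criteria below suggest something like the following when an export completes. HMAC-SHA256 stands in here for whatever signing scheme the platform actually adopts (the spec implies a platform key and a detached signature), and the canonical-JSON encoding is an assumption made so the hash is reproducible.

```python
import hashlib
import hmac
import json

def build_receipt(record: dict, signing_key: bytes) -> dict:
    """Attach a checksum and a detached-style signature to an export audit record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    receipt = dict(record)
    receipt["receipt_checksum_sha256"] = hashlib.sha256(canonical).hexdigest()
    # Stand-in signature; the real platform key and algorithm are product decisions.
    receipt["signature"] = hmac.new(signing_key, canonical, hashlib.sha256).hexdigest()
    return receipt
```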

Acceptance Criteria
Append-only WORM Storage and Retention Enforcement
Given any export attempt is initiated from UI, API, or webhook When the attempt completes with success or failure Then an audit record is appended to WORM storage within 2 seconds of outcome And the record is immutable (no modify/delete) until retention expiry enforced by storage lock And the per-tenant retention policy (configurable 2–7 years, default 3 years) is stored with the record and enforced And any attempt to alter or delete a record before expiry is rejected and generates a security alert within 60 seconds And storage lock status for a sampled record can be verified via provider API and returns "compliance mode: locked"
Complete Export Receipt Fields and Cryptographic Proof
Given an export attempt occurs from any surface When the export receipt is generated Then the receipt includes: actor_id, actor_role, auth_method, source_surface in {UI, API, webhook}, request_id, timestamp (UTC ISO8601), policy_decision in {No Export, Aggregates Only, Row-Level Redacted}, policy_version, scope_granted, redaction_rules_applied, file_metadata {format, mime_type, byte_size}, watermark_id, row_counts {requested, redacted, delivered}, payload_checksum_sha256, receipt_checksum_sha256, outcome in {success, failure}, failure_reason_code (nullable) And the receipt is signed with the platform signing key producing a detached .sig file And signature verification with the published public key succeeds for the stored receipt
Auditor Console: Search, Filter, and Export Receipts
Given a user with Auditor role and MFA is authenticated When they query receipts by date range (up to 365 days), org, campaign, actor, surface, policy_decision, outcome, watermark_id, request_id Then results return within 2 seconds for queries spanning up to 100,000 records with server-side pagination of 250 per page And filters can be combined and saved; a saved filter returns identical results when re-run And export produces a ZIP within 60 seconds for up to 100,000 receipts containing: receipts.jsonl, signatures/, manifest.json with bundle_sha256 And all auditor UI actions (search, view, export) are themselves recorded in the audit trail
Real-time SIEM Event Emission and Delivery Guarantees
Given a SIEM destination is configured and reachable When any export attempt is processed Then a normalized event is emitted within 3 seconds with at-least-once delivery and idempotency key=request_id And the event includes all receipt metadata fields except PII values and file payload content And failed deliveries are retried with exponential backoff for up to 24 hours; after exhaustion the event is placed on a DLQ and a high-priority alert is generated within 2 minutes And the event schema includes schema_version and changes maintain backward compatibility for one minor version
Scheduled Compliance Summaries per Campaign and Organization
Given scheduled reporting is enabled for an organization When the daily job runs at 02:00 UTC and the monthly job runs at 03:00 UTC on the 1st Then summaries per organization and per campaign are generated with counts by policy_decision, surface, outcome, actors involved, and row totals And anomalies are flagged when volume exceeds 3x the 7-day moving average or failure rate > 5% And each summary is checksummed (sha256), signed, stored in WORM, and delivered via secure link to designated recipients And job completion (success/failure) is logged and a real-time event is emitted; completion occurs within 10 minutes of schedule
Integrity Verification and Tamper-Evidence Attestation
Given a compliance admin triggers verification or the nightly schedule at 01:00 UTC runs When the verification job executes Then it re-computes sha256 for a random 5% sample (min 1,000; max 50,000) of receipts and validates their signatures And it verifies monotonic sequence IDs without gaps for the evaluated window; any gap is recorded as a critical finding And it produces a signed attestation report containing: sample_size, pass_count, fail_count, gap_count, and timestamps And any fail_count > 0 or gap_count > 0 triggers a high-severity alert and temporarily disables new exports until acknowledged by a Compliance Admin And the attestation report is stored in WORM and visible in the auditor console
Restricted Access Controls and PII Handling for Audit Logs
Given organization role mappings and MFA policies are enforced When a user attempts to access audit logs via UI or API Then only users with roles in {Auditor, Compliance Admin, Org Owner} and active MFA can view or export receipts And PII fields are redacted by default; viewing unredacted PII requires the View PII permission plus a JIT approval token valid for 60 minutes tied to a ticket_id And every access returns 403 for unauthorized users and emits a security event within 60 seconds And exported auditor bundles are watermarked with recipient identity and access timestamp; the watermark_id appears in the audit trail entry for the export

Attribution Locks

Immutable partner tags and credit-split rules enforced at publish. Dispute workflow allows proposed adjustments with dual confirmation and a traceable outcome. Eliminates post-hoc credit fights, keeps dashboards honest, and preserves trust with coalition partners and funders.

Requirements

Immutable Partner Tagging
"As a campaign director, I want to lock partner tags at publish so that coalition members trust that attribution won’t change after we go live."
Description

Allow campaign creators to assign coalition partner organizations and roles (e.g., lead, co-sponsor) to a campaign/action prior to publish, then lock those tags at the moment of publish. The locked partner set is stored in an append-only record with timestamp and actor, and becomes read-only across UI, API, and reporting. Tags propagate to action pages, attribution pipelines, and exports to ensure consistent, audit-ready attribution across RallyKit. Validation prevents duplicates and unknown partners; permissions restrict pre-publish edits to authorized users. Post-publish changes are disallowed and must route through the dispute workflow, preserving historical integrity and trust among partners and funders.

Acceptance Criteria
Assign Partners and Roles Pre-Publish
- Given a draft campaign/action, When an authorized user opens Partner Tagging and selects existing partner orgs from the directory, Then the user can assign a role from the allowed set (e.g., lead, co-sponsor) per partner and save successfully.
- And When the same partner org is added again, Then an inline error "Partner already added" is shown and the duplicate is not saved.
Validate Known Partners and Valid Roles
- Given a draft campaign/action, When a user attempts to add a partner not found in the org directory, Then an error "Unknown partner" prevents save.
- And When a user selects a role not in the allowed role list, Then an error "Invalid role" prevents save.
Lock Partner Tags on Publish
- Given a draft with at least one partner tag, When the campaign/action is published, Then the system writes a locked PartnerSet record capturing partner org IDs and roles, actor ID, and UTC timestamp.
- And Then the partner tagging controls for that campaign/action become read-only across UI and API.
Append-Only Audit Record Integrity
- Given a published campaign/action, When the PartnerSet audit trail is queried via UI or API, Then the locked PartnerSet record is returned and cannot be altered.
- And When any write operation targets an existing PartnerSet record, Then the operation is rejected and no mutation occurs; new PartnerSet records cannot be created except via the dispute workflow.
Propagate Locked Tags to Pages, Pipelines, and Exports
- Given a published campaign/action, When an action page is rendered, Then the locked partners and roles are displayed consistently.
- And When a supporter completes an action, Then the emitted attribution event includes partner org IDs and roles from the locked set.
- And When a data export is generated, Then partner org IDs and roles appear in dedicated columns and exactly match the locked set.
Block and Route Post-Publish Changes to Dispute Workflow
- Given a published campaign/action, When a user attempts to add/edit/remove partner tags in the UI, Then controls are disabled and an option to "Start Attribution Dispute" is presented.
- And When an API client attempts POST/PATCH/DELETE to the partners endpoint for a published campaign/action, Then a 403 Forbidden is returned with error code "immutable_partner_tags" and no changes are applied.
Enforce Pre-Publish Edit Permissions
- Given a draft campaign/action, When a user without Manage Partners permission opens Partner Tagging, Then the section is hidden or read-only and they cannot save changes.
- And When a user with Manage Partners permission performs add/edit/remove before publish, Then the changes save successfully and are visible to authorized collaborators on the draft.
Credit Split Rules Engine
"As a coalition lead, I want to set fair credit percentages before launch so that reporting consistently reflects each partner’s contribution."
Description

Enable defining percentage-based credit splits among tagged partners prior to publish, with validation that totals equal 100% and configurable minimum increments. The selected split rule is locked at publish and applied deterministically to all resulting supporter actions (calls, emails, sign-ups) for attribution. Support reusable templates, per-campaign overrides, and default org policies. Expose the locked rule across dashboards, exports, and APIs, and ensure the rule is enforced in real time during action ingestion to keep numbers consistent with what was agreed pre-launch.
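A minimal sketch of the pre-publish validation, assuming splits arrive as (partner_id, percentage) pairs and that the org policy supplies the minimum increment; the error strings simply echo the acceptance criteria, and the function name is illustrative.

```python
from decimal import Decimal

def validate_split(splits: list[tuple[str, Decimal]],
                   min_increment: Decimal = Decimal("0.5")) -> list[str]:
    """Return validation errors for a draft credit split; an empty list means valid."""
    errors = []
    partner_ids = [pid for pid, _ in splits]
    if len(partner_ids) != len(set(partner_ids)):
        errors.append("Duplicate partner not allowed")
    if sum(pct for _, pct in splits) != Decimal("100"):
        errors.append("Total must equal 100%")
    if any(pct < 0 or pct % min_increment != 0 for _, pct in splits):
        errors.append(f"Each split must be in {min_increment}% increments")
    return errors

# Example from the criteria: 49.5 / 30.0 / 20.5 passes under a 0.5% increment policy.
assert validate_split([("org_a", Decimal("49.5")), ("org_b", Decimal("30.0")),
                       ("org_c", Decimal("20.5"))]) == []
```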

Acceptance Criteria
Validate total equals 100% and enforce minimum increments
Given the org minimum increment policy is 0.5% and three partners are selected When the user enters 49.5%, 30.0%, and 20.5% and clicks Save Then the rule saves successfully and the computed total equals 100.0% Given the org minimum increment policy is 0.5% When the user enters splits that sum to 99.5% or 100.5% and clicks Save Then validation fails with an error: "Total must equal 100%" Given the org minimum increment policy is 0.5% When any entered split is not a multiple of 0.5% and the user clicks Save Then validation fails with an error: "Each split must be in 0.5% increments" Given the user assigns the same partner more than once When they click Save Then validation fails with an error: "Duplicate partner not allowed"
Apply org minimum increment policy and non-retroactivity
Given the org minimum increment policy is 1% When a user creates or edits a draft credit split rule Then only values that are multiples of 1% are accepted Given a campaign was published with a locked rule under a 1% policy When the org updates the minimum increment policy to 0.25% Then the locked rule on the published campaign remains unchanged Given a draft campaign exists created under a 1% policy When the org updates the policy to 0.25% and the user next saves the draft rule Then the 0.25% policy is enforced on that save
Lock rule at publish and prevent edits
Given a draft campaign with a valid credit split rule When a user with publish permission publishes the campaign Then the selected rule is locked and stored as an immutable snapshot with rule_id, version, published_at, and publisher_id Given a campaign is published with a locked rule When a user attempts to change partner selections or percentages via UI Then inputs are disabled and an immutable state is indicated Given a campaign is published with a locked rule When an API client attempts to modify the credit split or template Then the request is rejected with HTTP 403 and error code CREDIT_RULE_LOCKED Given a campaign is published with a locked rule When the underlying template is later edited or deleted Then the campaign’s locked rule remains unchanged
Deterministic real-time application and idempotency
Given a published campaign with a locked credit split of 20%, 30%, 50% When a call action is ingested Then partner credit records are written as 0.2, 0.3, 0.5 and their sum equals 1.0 in the same transaction as the action Given a published campaign with a locked rule When email and sign-up actions are ingested Then the same credit split is applied per action and each action’s partner credits sum to 1.0 Given an action event is delivered more than once (duplicate message with same action_id) When ingestion processes the duplicate Then no additional partner credit records are created (idempotent) Given multiple actions for the same campaign are ingested concurrently When processing completes Then each action has exactly one set of partner credit records and aggregate partner totals equal the number of actions ingested
Templates, defaults, and per-campaign overrides
Given a credit split template named "Coalition A 40/60" exists and is set as the org default When a new campaign is created Then the campaign’s draft credit split is auto-populated from the default template Given multiple templates exist When a user selects a different template for a draft campaign Then the draft split updates to that template’s values Given a draft campaign with a template-applied split When the user manually edits percentages before publish and saves Then the edits are validated and stored on the draft, overriding the template for that campaign only Given a campaign is published using a template-derived split When the template is later edited Then the published campaign’s locked split remains unchanged Given a campaign is published When a user attempts to change its template or percentages Then the system blocks the change and indicates the rule is locked
Expose locked rule across dashboards, exports, and APIs
Given a campaign is published with a locked rule When viewing the campaign dashboard Then a Locked Credit Split panel displays each partner name and percentage exactly as in the locked snapshot Given a campaign is published with a locked rule When exporting action-level CSV Then each row includes credit_rule_id, credit_rule_version, and a locked_credit_rule_json field reflecting the locked snapshot Given a campaign is published with a locked rule When calling the API GET /campaigns/{id}/credit-split Then the response returns the locked snapshot (partners and percentages) with rule_id and published_at and values match the UI and export Given partner credit is applied per action When viewing aggregate dashboards or exporting aggregates Then the sum of partner credits across partners equals the total number of actions for the selected scope
Publish-Time Lock Enforcement
"As an org admin, I want the system to enforce locks at publish so that no one can alter attribution later without creating a visible audit trail."
Description

At the moment of publish, generate a lock artifact that captures the partner set and credit split rule, write it to an append-only audit store with a unique lock ID and hash, and flip the campaign to a read-only state for those fields. Display a visible “Attribution Locked” badge in the UI, include the lock ID on receipts, and expose it via API and exports. Any subsequent configuration change requires creating a new version and republishing, ensuring a clear lineage between versions and preventing silent post-hoc edits.
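A sketch of how the lock artifact might be assembled, assuming canonical JSON (sorted keys, no whitespace) as the hashing input and a UUID for the lock ID; the real ID format and canonicalization scheme are unspecified in the requirement.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_lock_artifact(partner_set: list[dict], credit_split: dict,
                        publisher_id: str, campaign_version: int) -> dict:
    """Snapshot the attribution config at publish and derive lockId and lockHash."""
    payload = {
        "partnerSet": partner_set,
        "creditSplit": credit_split,
        "publisherId": publisher_id,
        "campaignVersion": campaign_version,
        "publishedAt": datetime.now(timezone.utc).isoformat(),
    }
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return {
        "lockId": uuid.uuid4().hex,       # unique lock ID written to the audit store
        "lockHash": hashlib.sha256(canonical).hexdigest(),
        "payload": payload,               # stored append-only alongside the hash
    }
```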

Acceptance Criteria
Publish Generates Immutable Lock Artifact
Given a draft campaign has a configured partner set and credit split and passes validation When the user publishes the campaign Then the system creates a lock artifact capturing partner set, credit split rule, publisher user ID, campaign version, and publishedAt timestamp And assigns a unique lockId And computes and stores a SHA-256 hash of the artifact payload And writes the artifact to the append-only audit store And persists lockId and lockHash on the campaign version record And returns a publish success response containing lockId Given a duplicate publish request for the same version is received with the same idempotency key When the request is processed Then no additional audit record is created and the original lockId is returned
Post-Publish Read-Only Enforcement for Attribution Fields
Given a campaign version is published When a user opens the campaign in the UI Then the partner set and credit split fields are disabled and labeled "Attribution Locked" Given a campaign version is published When a client attempts to PATCH partnerSet or creditSplit via API Then the API responds 409 Conflict with errorCode "ATTRIBUTION_LOCKED" and no changes are persisted Given a background job or internal service attempts to modify partnerSet or creditSplit on a published version When the write is validated server-side Then the operation is rejected and an immutable-state violation is logged
"Attribution Locked" Badge Displays with Lock Metadata
Given a published campaign version is viewed in the campaign UI When the details page renders Then an "Attribution Locked" badge is visible And the badge displays the lockId (at least 8-character prefix) and the publishedAt timestamp And the badge links to a modal or detail view showing full lockId and lockHash And the badge is not displayed for draft versions
Lock ID Included on Supporter Receipts
Given a supporter completes an action for a published campaign version When the receipt (email and in-app confirmation) is generated Then the receipt includes the campaign version number and lockId in the metadata section And the lockId is included in the emitted activity event payload Given the campaign is republished creating a new version with a new lockId When a supporter completes an action on the new version Then the receipt contains the new version's lockId and not the previous one
Lock Metadata Exposed via API and Export
Given an API client requests GET /campaigns/{campaignId}/versions/{versionId} When the response is returned Then it includes fields: lockId, lockHash, publishedAt, and lockedFields = ["partnerSet","creditSplit"] Given a data export of campaign activities is generated When the CSV is downloaded Then it includes columns lock_id, lock_hash, and campaign_version with non-empty values for rows associated with published versions
Versioning Required for Attribution Changes Post-Publish
Given a campaign version is published When a user attempts to modify the partner set or credit split Then the system requires creating a new draft version and prevents edits to the published version Given a new draft version is created from a published version When the new version is published Then a new lock artifact and lockId are created And the new version stores previousVersionId linking to the prior version And both versions remain accessible and independently immutable
Append-Only Audit Store and Hash Integrity
Given a lock artifact is retrieved from the audit store by lockId When the system recomputes the SHA-256 hash of the stored artifact payload Then the recomputed hash matches the stored lockHash Given an operation attempts to update or delete an existing lock artifact record When the audit store processes the request Then the operation is rejected with errorCode "APPEND_ONLY" and no changes occur And the attempted mutation is recorded in the security audit log
Dispute & Adjustment Workflow
"As a partner manager, I want a transparent dispute process with dual confirmation so that credit disagreements are resolved fairly without rewriting history."
Description

Provide a structured, time-bound workflow for partners to propose adjustments after publish without altering the original lock. A disputing partner submits a proposal with scope (date range, campaigns, channels), requested reallocation, and rationale; counterparties are notified and must provide dual confirmation (initiator plus campaign owner or designated arbiter) for approval. On approval, create a signed adjustment entry that offsets credits prospectively or for a specified historical window, without modifying the original locked data. Track statuses (Open, Under Review, Approved, Rejected, Expired), keep all communication and decisions in the audit trail, and reflect outcomes in reporting with clear indicators.

Acceptance Criteria
Submit Dispute Proposal with Scoped Parameters
Given a published attribution lock and an authorized partner, When the partner opens the Dispute and Adjustment form, Then the form requires the scope fields: date range, campaigns list, and channels list. Given the form is completed, When the partner enters the requested reallocation, Then the request defines the from and to parties and percentages that sum to 100 percent of the impacted share, with no negative values. Given the form is completed, When the partner provides a rationale, Then the rationale is required and must be at least 50 characters. Given all validations pass, When the partner submits, Then a dispute record is created with status Open, a unique ID, a timestamp, the initiator identity, and an immutable hash of the submitted payload, and the original lock remains unchanged. Given any validation fails, When the partner submits, Then no record is created and field-level error messages are displayed.
Counterparty Notification and Dual Confirmation
Given a dispute is created with status Open, When creation succeeds, Then notifications are sent to the campaign owner or designated arbiter and to counterparties via in-app alert and email within 5 minutes and include a deep link to the dispute. Given a dispute is Open, When any non-initiator posts a comment or records a decision, Then the status changes to Under Review. Given a dispute requires dual confirmation, When the initiator confirms and the campaign owner or designated arbiter approves, Then the status changes to Approved and both confirmations are recorded with timestamps. Given a dispute is Open or Under Review, When the campaign owner or designated arbiter rejects, Then the status changes to Rejected and a rejection rationale is required. Given a dispute is Open or Under Review, When the configured SLA elapses without dual confirmation, Then the status changes to Expired.
Approved Adjustment Entry Is Signed and Immutable
Given a dispute becomes Approved, When the system creates the adjustment, Then an immutable adjustment entry is recorded with a unique ID, a reference to the original lock ID, signers, timestamps, scope, reallocation details, and a digital signature. Given an adjustment entry exists, When users view the original lock, Then the locked data remains unchanged and is presented with a clear indicator that an adjustment exists. Given an adjustment entry exists, When any user attempts to edit or delete it, Then the system prevents modification and allows only a new reversing or superseding entry with separate authorization, preserving full history.
Apply Adjustments Prospectively or for Historical Window
Given an Approved adjustment with a historical date range, When reporting is generated, Then only actions within the specified date range, campaigns, and channels are offset according to the reallocation, and outputs show original values, adjustment delta, and net values. Given an Approved adjustment with a prospective effective date, When new attributions occur on or after the effective date, Then the updated credit split rules are applied only to new attributions and existing locked data remains unchanged. Given a new dispute targets a scope that overlaps an Open or Under Review dispute, When the initiator attempts submission, Then the system blocks submission and identifies the conflicting dispute IDs.
Status Lifecycle and Audit Trail
Given the system manages dispute statuses, When options are displayed, Then the available statuses include exactly Open, Under Review, Approved, Rejected, and Expired. Given any status transition occurs, When the transition is saved, Then the audit trail records the previous status, new status, actor, timestamp, and rationale (when provided) in a read-only log. Given comments or messages are posted in the dispute thread, When they are submitted, Then each entry is appended to the audit trail with actor, timestamp, and content, and cannot be edited.
Reporting and API Reflection of Outcomes
Given an adjustment exists, When dashboards and exports are generated, Then adjusted entities display an indicator and the reports include columns for original values, adjustment delta, and net values. Given an adjustment exists, When users drill into a metric, Then a link to the dispute, along with the adjustment IDs, scope, and decision details, is available from the indicator. Given reporting data is requested via API, When the response is returned, Then it includes original values, net values, and the list of applied adjustment IDs with scope, time window, and signers.
Tamper-Evident Audit Ledger
"As a funder, I want verifiable receipts for attribution decisions so that I can trust reported outcomes during audits."
Description

Maintain a tamper-evident, append-only ledger of publish locks, disputes, approvals, and adjustments, recording timestamps, actors, cryptographic hashes, and references to affected campaigns and actions. Generate verification receipts that partners and funders can use to validate that reported totals match ledger entries. Provide read-scoped access in the UI, CSV/JSON exports, and APIs while honoring data privacy constraints. Integrate with RallyKit’s existing audit infrastructure to support audits and compliance reviews without bespoke data pulls.
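Tamper evidence in a ledger like this usually comes from chaining each record to its predecessor. A minimal sketch consistent with the prev_hash and record_hash fields in the criteria below; the canonical-JSON hashing and in-memory list are assumptions for illustration.

```python
import hashlib
import json

def _record_hash(record: dict) -> str:
    body = {k: v for k, v in record.items() if k != "record_hash"}
    return hashlib.sha256(
        json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    ).hexdigest()

def append_entry(ledger: list[dict], event: dict) -> dict:
    """Append an event, linking it to the previous entry via prev_hash."""
    record = dict(event)
    record["prev_hash"] = ledger[-1]["record_hash"] if ledger else None
    record["record_hash"] = _record_hash(record)
    ledger.append(record)
    return record

def verify_chain(ledger: list[dict]) -> bool:
    """Recompute every hash and link; any edit, insertion, or deletion breaks the chain."""
    prev = None
    for record in ledger:
        if record["prev_hash"] != prev or record["record_hash"] != _record_hash(record):
            return False
        prev = record["record_hash"]
    return True
```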

Acceptance Criteria
Append-Only Ledger for Attribution Events
Given an attribution lock is published for campaign C by actor A at time T When the publish is committed Then a ledger record is appended with fields: id, event_type=publish_lock, timestamp=ISO8601 UTC ms, actor_id, actor_role, campaign_id, affected_action_ids[], credit_split_snapshot, prev_hash, record_hash=SHA-256(canonical_json) And no existing ledger record can be updated or deleted via any API or data path; attempts return 409 and are logged with alert=security And for dispute_opened, dispute_confirmed, dispute_rejected, approval_granted, adjustment_applied events When each transition occurs Then each is captured as a separate append-only ledger record linked by prior_event_id and prev_hash
Cryptographic Tamper Evidence and Integrity Checks
Given the ledger has N>=1 records When the hourly integrity job recomputes the hash chain and a Merkle root over the last 24h and the full chain Then the recomputed head_hash and merkle_root equal stored values and the job result status=pass is recorded And any mismatch immediately raises a Sev-High alert to Ops, marks ledger_writes_suspended=true, and shows an admin UI banner Given an exported JSON of records and the daily signed checkpoint When an external verifier recomputes hashes and verifies the signature against the published public key Then the computed root equals the checkpoint and the signature is valid
Verification Receipts for Reported Totals
Given an authenticated partner with scope to campaign C requests a verification receipt for date range [D1,D2] When Generate Receipt is invoked Then a receipt is produced within 10 seconds containing: campaign_id, date_range, totals_by_action_type, included_entry_ids_count, head_hash, merkle_root, generated_at, nonce, signature And the Validate Receipt API returns status=valid when supplied the receipt and recomputed totals match ledger-derived totals; otherwise returns status=invalid with reason And dashboard totals for C in [D1,D2] equal receipt totals with zero variance
Scoped Read Access and Privacy Controls
Given a user with role=Partner Viewer scoped to campaigns S When they view the Audit Ledger UI or call the ledger API Then only entries with campaign_id in S are visible and PII fields are redacted or hashed per policy (e.g., actor_name->actor_role, email/phone masked) And direct requests to entries outside scope return 403 and are logged with user_id, entry_id, timestamp And CSV/JSON exports and API responses enforce the same scope and redactions And a user with role=Org Auditor may view PII only when consent=true and within retention window; otherwise PII is suppressed
CSV/JSON Exports and API Pagination/Filters
Given a user requests an export with filters campaign_id=C, date_range=[D1,D2], event_types=E, format in {CSV, JSON} When the export runs Then the system streams results within 60 seconds for up to 1,000,000 rows and includes a header with schema_version, parameters, head_hash, file_hash=SHA-256(payload) And the API listing endpoint supports cursor-based pagination (page_size<=5000), stable order by timestamp asc then id asc, with no gaps or duplicates across pages And repeating the same export with identical parameters produces an identical file_hash And rate limiting of 60 requests/min per token is enforced with 429 and Retry-After on exceedance And a fields parameter allows inclusion/exclusion of optional PII columns subject to role-based policy
Audit Infrastructure Integration
Given the nightly audit sync job runs at 02:00 UTC When it extracts ledger entries since the last watermark and loads them to the audit warehouse Then the job completes in <15 minutes, advances the watermark, and emits a success metric and log entry And the loaded schema aligns with the existing audit schema with zero manual mapping steps And compliance queries for campaign C and [D1,D2] return in <30 seconds and match application totals within 0.5% And PII suppression and retention windows in the warehouse match privacy rules (expired PII absent)
Dispute Workflow Traceability and Dual Confirmation
Given a dispute on credit split for campaign C is opened by Partner A When Partner A submits a proposed adjustment with rationale Then a dispute_opened ledger record is appended with dispute_id, proposed_delta, rationale, timestamp, actor_id, prev_hash When Partner B confirms or rejects within an SLA of 5 business days Then a dispute_confirmed or dispute_rejected record is appended with decision, timestamp, actor_id, prev_hash, and SLA_breached flag when applicable When a confirmed adjustment is applied Then an adjustment_applied record is appended with effective_from, new_credit_split summing to 100%, and linkage to prior events; reports after effective_from reflect the new split while historical reports remain unchanged
Attribution Reporting & Exports
"As a program officer, I want dashboards and exports that reflect locked splits and resolved disputes so that my team can report accurate, defensible numbers."
Description

Update dashboards and exports to compute partner credit using the locked split rules plus any approved adjustment entries, with real-time reconciliation that always sums to 100% for each action. Surface lock IDs, dispute statuses, and adjustment deltas, and provide filters by campaign, partner, date range, and channel. Ensure caches and aggregates are rebuilt when adjustments are approved, and include clear visual indicators when figures have been adjusted. Deliver export files suitable for grant reporting with partner-level totals, action counts, and references back to lock and adjustment IDs for traceability.
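The "always sums to 100%" reconciliation implies a largest-remainder style rounding when shares are shown at two decimals. A sketch, assuming the exact shares (locked split plus approved adjustments) already sum to 100 and that ties break by partner ID, which is an assumption rather than specified behavior.

```python
from decimal import Decimal, ROUND_FLOOR

def display_shares(shares: dict[str, Decimal]) -> dict[str, Decimal]:
    """Round partner shares to 2 decimals so the displayed values sum to 100.00.

    Assumes the exact shares already sum to 100. Floor each share to 0.01, then
    hand the leftover hundredths to the partners with the largest dropped
    fractions (largest-remainder method); ties break by partner id here.
    """
    step = Decimal("0.01")
    floored = {p: s.quantize(step, rounding=ROUND_FLOOR) for p, s in shares.items()}
    leftover = int((Decimal("100.00") - sum(floored.values())) / step)
    by_remainder = sorted(shares, key=lambda p: (floored[p] - shares[p], p))
    for p in by_remainder[:leftover]:
        floored[p] += step
    return floored

# Three equal partners display as 33.34 / 33.33 / 33.33 and still sum to 100.00.
thirds = {p: Decimal(100) / 3 for p in ("a", "b", "c")}
assert sum(display_shares(thirds).values()) == Decimal("100.00")
```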

Acceptance Criteria
Real-Time Credit Reconciliation per Action
- Given a completed action with a Lock ID and locked partner split rules, When credit is computed, Then partner shares equal locked splits plus the sum of all Approved adjustments for that action only.
- Then per-action partner shares are stored at ≥4-decimal precision and displayed at 2 decimals using banker's rounding.
- Then displayed partner shares sum to exactly 100.00% by applying a deterministic balance-to-largest-fraction rule when rounding.
- And no partner share is negative or exceeds 100%.
- And if there are no Approved adjustments, displayed shares equal the locked splits.
Dashboard Traceability: Locks, Disputes, Adjustments
- Given any dashboard view at partner or action level with active filters, When rows are rendered, Then each row displays Lock ID, Dispute Status (None|Pending|Approved|Rejected), and Adjustment Delta per partner (net percentage points at 2 decimals).
- Then Lock ID and Adjustment ID values are clickable and open detail views for the underlying records.
- Then displayed Lock IDs, Dispute Statuses, and Adjustment Deltas match backend records for the selected range.
- Then if no adjustments exist for the row, Adjustment Delta shows 0.00 and adjusted_flag=false.
Filter Combinations by Campaign, Partner, Date Range, Channel
- Given Campaign, Partner, Date Range (org timezone), and Channel filters, When any combination is applied, Then results include only actions satisfying all selected filters (logical AND).
- Then the default Date Range is the last 30 days; boundaries are inclusive of start 00:00 and end 23:59:59 in org timezone.
- Then applied filters persist in the URL/state and are passed to exports so exported data matches on-screen data.
- Then clearing filters restores the unfiltered view and totals.
Auto-Rebuild on Adjustment Approval
- Given an adjustment changes status to Approved, When the approval is saved, Then caches and aggregates for affected actions, partners, campaigns, channels, and org totals are invalidated and rebuilt.
- Then dashboards and exports reflect the new credits within 5 minutes of approval.
- Then the rebuild is idempotent, retried up to 3 times on failure, and an audit log entry is recorded linking job_id and adjustment_id.
- Then no stale totals remain after rebuild; a manual Refresh triggers a consistency check against the rebuilt aggregates.
Grant-Ready Export Files with Traceability
- Given any dashboard view with applied filters, When the user exports, Then CSV and JSON exports are available for download.
- Then each export includes per partner per campaign totals: partner_id, partner_name, campaign_id, campaign_name, channel, total_actions, credited_actions_count, credited_share_percent (2 decimals), adjusted_flag, lock_ids[], adjustment_ids[], generated_at (UTC ISO8601), and filters_applied metadata.
- Then totals in the export match the dashboard exactly for the same filters; the sum of credited_share_percent across partners for a campaign/date range equals 100.00% where applicable.
- Then CSV has a header row and UTF-8 encoding; JSON has a metadata block and a data array; filenames include org, campaign (if any), and a timestamp.
- Then exporting an empty result produces a valid file with headers/metadata and zero data rows.
Adjusted Figures Visual Indicators
- Given any metric affected by at least one Approved adjustment, When it is displayed, Then an Adjusted indicator is shown and hovering reveals net delta and the list of adjustment_ids applied.
- Then partner-level tables show per-partner delta (± percentage points) alongside the credited share.
- Then exports include adjusted_flag=true and delta columns where applicable; if no adjustments apply, adjusted_flag=false and delta shows 0.00.
Pending or Rejected Adjustments Excluded from Reporting
- Given adjustments in Pending or Rejected status, When computing dashboard and export credits, Then these adjustments do not affect any totals or percentages.
- Then a dispute_status filter allows users to list items by None|Pending|Approved|Rejected without altering credit computation logic.
- Then when an adjustment transitions from Pending to Approved, its effects appear only after the rebuild cycle defined in the auto-rebuild criteria, and the transition is audit-logged with user_id and timestamps.

Draft Sandboxes

Safe forks of pages or scripts for partners to propose edits without touching live assets. Side-by-side diffs, inline comments, and test links roll into the Approval Ladder. Encourages contribution and rapid iteration while preventing accidental or unauthorized changes.

Requirements

Sandbox Fork & Access Control
"As a campaign admin, I want to fork a safe draft of a live action page and invite a partner editor so that they can propose changes without touching the live page."
Description

Enable users to create draft sandboxes by forking any live action page or script without affecting production. Drafts inherit the source asset’s metadata (bill, district targeting, tags) and remain isolated from live mutations until merge. Provide role-based access (owner, editor, commenter, viewer) with partner-scoped permissions, invite links, and optional NDA/acknowledgment gates. Support multiple concurrent drafts per asset, draft naming/notes, and provenance (source version, creator, timestamp). All draft activity is captured in the audit trail as non-production events, ensuring compliance while encouraging collaboration.
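The four sandbox roles could map to allowed actions roughly as in the sketch below; the action names are illustrative, not RallyKit's actual permission vocabulary.

```python
# Illustrative role -> action map for draft sandboxes (deny by default).
SANDBOX_PERMISSIONS: dict[str, set[str]] = {
    "viewer": {"view"},
    "commenter": {"view", "comment", "resolve_comment"},
    "editor": {"view", "comment", "resolve_comment", "edit_content", "edit_metadata"},
    "owner": {"view", "comment", "resolve_comment", "edit_content", "edit_metadata",
              "rename", "delete", "manage_roles", "create_invite", "request_merge"},
}

def can(role: str, action: str) -> bool:
    """Return True if the sandbox role permits the action; unknown roles get nothing."""
    return action in SANDBOX_PERMISSIONS.get(role, set())

assert can("commenter", "comment") and not can("commenter", "edit_content")
```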

Acceptance Criteria
Draft Fork Does Not Affect Live Asset
Given a published live action page or script exists at version V When a user with Owner or Editor role clicks "Create Draft Sandbox" on the asset Then a new draft is created with a unique draftId and initial status "Draft" And the live asset remains unchanged at version V and continues serving production traffic without interruption And the draft is not publicly accessible and is flagged environment = "draft" And the draft content equals the live content at version V at the time of fork
Draft Inherits Metadata From Source
Given a source asset with metadata bill, districtTargeting, and tags When the asset is forked into a draft Then the draft's bill, districtTargeting, and tags equal the source values at fork time And subsequent edits to the source asset do not modify the draft's metadata And the draft shows provenance fields: sourceAssetId, sourceVersionId, forkedBy, forkedAt (UTC)
Multiple Concurrent Drafts With Naming and Notes
Given an asset has existing drafts When additional drafts are created Then the system allows at least 5 concurrent drafts for the same asset And each draft name must be unique within the asset; duplicate names are rejected with validation error And a notes field (up to 2000 characters) can be added and edited on the draft And the drafts list displays name, creator, createdAt, and sourceVersionId
Role-Based Access Enforcement
Given roles Owner, Editor, Commenter, Viewer are assigned for a draft (including partner-scoped assignments) When a user with each role attempts actions on the draft Then Owner can create/rename/edit/delete the draft; manage roles; generate invite links; request merge And Editor can edit draft content and metadata; add comments; cannot delete draft or manage roles And Commenter can add and resolve inline comments; cannot edit content or metadata And Viewer can view draft content and comments only; no write actions And unauthorized actions return 403 and are logged to audit
Partner-Scoped Invite Links With NDA Gate
Given an Owner generates an invite link scoped to Partner A and role = Editor with NDA required When a recipient opens the link Then the user must authenticate and be verified as a member of Partner A before access is granted And the NDA text is displayed and must be accepted before accessing the draft And acceptance is recorded with userId, draftId, timestamp, and NDA version And if verification fails or NDA is declined, access is denied and the attempt is logged
Draft Is Isolated From Live Mutations Until Merge
Given a draft was forked from source version V0 When the live asset is updated to V1 or later Then the draft remains based on V0 and its content and metadata do not change unless edited within the draft And merge to production requires an explicit merge action by an Owner and cannot occur implicitly And the merge preview compares the draft to the latest live version at merge time
Audit Trail For Non-Production Draft Activity
Given audit logging is enabled When any draft event occurs (create, edit, comment, role change, invite created/accepted, access denied) Then an audit record is written with eventType, draftId, assetId, actorId, timestamp, environment = "non-production" And audit entries are queryable by draftId and exportable And deleting a draft does not delete its audit records
Side-by-Side Diff Viewer
"As a reviewer, I want to see exactly what changed between the draft and the live asset so that I can quickly assess risk and scope of the edits."
Description

Provide a fast, accessible diff UI that renders live vs. draft side-by-side with syntax highlighting for scripts and rich-text/block-level diffs for pages. Show field-level changes (copy, form fields, CTAs, targeting rules), media diffs (images, links), and bill-status token differences. Include unified view toggle, whitespace/formatting ignore options, change filtering (copy-only, logic-only), and export/share of diffs for external review. Optimized for large assets with virtualized rendering and responsive layout for mobile reviewers.
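
As a rough illustration of the block-level diff described above, the following TypeScript sketch classifies blocks as added, removed, or modified by id; Block and BlockChange are hypothetical names, and a real implementation would layer token-aware and field-level comparisons on top.

    // Hypothetical block model; values are compared as plain strings here.
    interface Block { id: string; type: "heading" | "paragraph" | "image" | "cta" | "form"; value: string }
    type BlockChange =
      | { kind: "added"; block: Block }
      | { kind: "removed"; block: Block }
      | { kind: "modified"; before: Block; after: Block };

    function diffBlocks(live: Block[], draft: Block[]): BlockChange[] {
      const liveById = new Map(live.map(b => [b.id, b] as const));
      const draftById = new Map(draft.map(b => [b.id, b] as const));
      const changes: BlockChange[] = [];
      for (const b of draft) {
        const prev = liveById.get(b.id);
        if (!prev) changes.push({ kind: "added", block: b });
        else if (prev.value !== b.value || prev.type !== b.type)
          changes.push({ kind: "modified", before: prev, after: b });
      }
      for (const b of live) if (!draftById.has(b.id)) changes.push({ kind: "removed", block: b });
      return changes;
    }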

Acceptance Criteria
Script Diff with Syntax Highlighting and Bill-Status Tokens
Given a draft script and its live counterpart are available When the diff viewer is opened Then both versions render side-by-side with line numbers and syntax highlighting applied to code and templating tokens Given bill-status tokens are present in either version When tokens differ by name, parameters, or placement Then token differences are highlighted distinctly from plain text and listed in a Token changes summary with counts Given only tokens changed and no surrounding text changed When the copy-only filter is active Then no diffs are shown Given only tokens changed and no surrounding text changed When the logic-only filter is active Then the token diffs are shown
Page Rich-Text and Media Block Diff
Given a draft page and its live version include block-level content (headings, paragraphs, lists, images, embeds, CTA blocks) When the diff viewer is opened Then changes are computed at block level and each added, removed, and modified block is labeled with its type Given an image differs between versions When viewing the diff Then old and new image thumbnails, filenames, dimensions, and alt-text are displayed side-by-side and link target changes are shown Given a hyperlink URL inside text changed When viewing the diff Then the URL change is highlighted, and the anchor text is highlighted only if it also changed
Field-Level Changes for Forms, CTAs, and Targeting Rules
Given a page includes a form When fields are added, removed, or edited Then the diff lists each changed field with label, type, required flag, validation pattern, and default value old vs new Given CTAs are present When a CTA's label, destination, or visibility rules change Then the change is shown with old vs new values and a change badge Given targeting rules are configured When any rule condition is added, removed, or modified Then the diff displays rule-level changes with readable expressions for old vs new
Unified View, Ignore Whitespace, and Change Filtering
Given the side-by-side view is open When the user toggles Unified Then the layout switches to unified within 200 ms and preserves the current scroll position within 100 px Given Ignore whitespace/formatting is enabled When only whitespace or markup formatting differs Then those hunks are suppressed from the view and change counts update accordingly Given change filters exist When copy-only is selected Then only textual and media content diffs are shown Given change filters exist When logic-only is selected Then only form fields, CTAs, targeting rules, and token diffs are shown Given a user sets view and filter toggles When the viewer is reopened for the same draft in the same browser session Then the previous selections are restored
Export and Share Diff for External Review
Given any diff state with filters applied When Export is clicked Then a downloadable PDF and a self-contained HTML file are generated that preserve highlighting, inline media previews, and the current filters, with generation completing within 10 seconds for assets up to 3 MB Given Share external is clicked When confirmed Then the system creates a read-only, tokenized URL that reflects the current filters and expires in 7 days by default, and the creator can revoke the link at any time Given an external reviewer opens the share link When viewing on desktop or mobile Then the diff renders without requiring login and without any edit controls
Performance and Virtualized Rendering for Large Assets
Given a large script or page (up to 25,000 lines or 3 MB of content) When opening the diff viewer on a modern desktop browser Then first contentful paint occurs within 2.5 seconds and interactive within 3.5 seconds Given continuous scrolling through a large diff When measuring rendering Then virtualization ensures only visible regions are rendered and average scroll frame rate is at least 45 FPS with no frame longer than 50 ms over a 10-second scroll Given the diff is open for a large asset When monitored Then the tab's memory usage attributable to the diff remains under 300 MB
Accessibility and Mobile Responsiveness
Given the diff viewer is used with keyboard only When navigating Then all interactive controls and change hunks are reachable in a logical tab order and operable via keyboard with visible focus indicators Given a screen reader is in use When reading the diff Then additions and deletions are announced with roles and labels (e.g., added line, removed block), summary counts are exposed to assistive tech, and semantics meet WCAG 2.2 AA Given color perception limitations When viewing additions/deletions Then differences are conveyed by both color and icons/text patterns and color contrast for all text and indicators meets WCAG 2.2 AA Given a mobile device with viewport width ≤ 414 px When opening the diff Then the layout adapts to a stacked view with readable text, tap targets ≥ 44 px, and no horizontal scrolling
Inline Comments & Mentions
"As a partner editor, I want to leave inline comments and mention the campaign owner so that we can discuss proposed edits in context before requesting approval."
Description

Allow threaded, inline comments anchored to specific blocks or lines of script with resolve/reopen states. Support @mentions (users, teams, partners), attachments (screenshots, docs), and suggested edits for small copy tweaks. Enforce comment permissions by role, provide per-thread subscriptions, and send notifications via in-app, email, and optional Slack. Persist comment history in the audit log, with timestamps and actor identity for accountability.
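
A small sketch of the thread shape and mention parsing this implies; the CommentThread fields and the @kind:slug token syntax are assumptions for illustration, not RallyKit's actual format.

    // Hypothetical shapes for an anchored, threaded comment.
    interface CommentThread {
      threadId: string;
      draftId: string;
      anchor: { blockId: string; line?: number };   // block- or line-level anchor
      state: "open" | "resolved" | "orphaned";
      comments: { commentId: string; authorId: string; body: string; postedAt: string }[];
      subscribers: Set<string>;                     // mentioned users are auto-subscribed
    }

    // Extract mentions written as @user:slug, @team:slug, or @partner:slug.
    function extractMentions(body: string): { kind: string; slug: string }[] {
      const pattern = /@(user|team|partner):([a-z0-9-]+)/g;
      return [...body.matchAll(pattern)].map(m => ({ kind: m[1], slug: m[2] }));
    }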

Acceptance Criteria
Inline Threaded Comments Anchored to Script Block/Line
- Given a Draft Sandbox script with identifiable blocks or line numbers, when a user selects a block or line and posts a comment, then a new thread is created anchored to that location and is visible in the diff sidebar. - Given an existing thread, when another user replies, then the reply is nested under the thread and ordered by timestamp ascending. - Given an open thread, when a user with resolve permission clicks Resolve, then the thread state becomes Resolved, collapses by default, and a resolution event is recorded with timestamp and actor; when Reopen is clicked, the state returns to Open and a reopen event is recorded. - Given the underlying text changes in a new draft version, when the diff can map the change, then the thread remains anchored to the corresponding text; when the mapping fails, then the thread is marked Orphaned with a link to the last mapped version.
@Mentions Notify Users, Teams, and Partners
- Given a comment composer, when the user types "@" and selects a user, team, or partner, then the mention is inserted and resolved to the entity ID. - Given a posted comment containing mentions, when the comment is saved, then in-app notifications are sent immediately to all mentioned entities, email notifications are sent within 60 seconds, and Slack notifications are sent within 60 seconds if Slack integration is enabled and the recipient has Slack configured. - Given permission constraints, when a user attempts to mention an entity without comment-view access to the draft, then the entity is not shown in mention search and posting fails with an authorization error if forced via API. - Given a mention, when the comment is posted, then each mentioned user is auto-subscribed to the thread.
Comment Attachments: Screenshots and Docs
- Given a comment composer, when a user with comment-post permission attaches files via drag-and-drop or file picker, then files of type PNG, JPG, GIF, PDF, or DOCX up to 25 MB per file are accepted after malware scan and checksum verification. - Given an attached image or PDF, when the comment is posted, then a thumbnail or inline viewer is rendered; for other file types, a filename, size, and download link are shown. - Given an upload error, when the attachment fails to store, then the comment is not posted, the user sees an error message, and no partial attachments are saved. - Given posted attachments, when a user downloads an attachment, then the download is authorized and recorded with timestamp and actor.
Suggested Edits for Copy Tweaks
- Given a comment composer anchored to a text block, when the user selects "Suggest Edit" and proposes a change of up to 300 characters within that block, then a diff highlighting insertions and deletions is displayed inline. - Given a suggested edit, when a user with Edit Draft permission clicks Apply, then the draft content is updated, the thread is marked Resolved with status Applied, and an event is recorded; when a user clicks Reject, then the thread records status Rejected and remains Open unless explicitly resolved. - Given a suggestion that attempts to change more than 300 characters or structural elements, when the user submits, then the submission is blocked with a message stating the limit and no change is saved. - Given an apply or reject action, when it completes, then all thread subscribers and the suggester receive notifications via their subscribed channels.
Per-Thread Subscriptions and Notification Channels
- Given any visible thread, when a user clicks Subscribe or Unsubscribe, then their subscription state is updated immediately and reflected in the UI without page reload. - Given channel preferences for a thread, when a user toggles in-app, email, or Slack for that thread, then the setting is saved and honored for subsequent events. - Given multiple events in the same thread, when events occur within a 5-minute window, then notifications are batched per user and channel into a single message. - Given a user unsubscribes from a thread, when new events occur, then the user receives no notifications for that thread on the channels they disabled.
Role-Based Comment Permissions and Visibility
- Given system roles Admin, Org Member, Partner, and External Reviewer with defined permissions (comment-view, comment-post, resolve, apply-suggestion, mention, attach, moderate), when a user opens a Draft Sandbox, then only actions permitted to their role are enabled in the UI. - Given server-side enforcement, when a user attempts an unauthorized action via UI or API, then the request is denied with HTTP 403 and the attempt is recorded. - Given an Internal thread designation, when a thread is marked Internal, then only Admins and Org Members can view it; Partners and External Reviewers cannot see it in UI or API responses. - Given a moderated or deleted comment, when the moderation occurs, then the content is hidden from non-admin viewers while an audit placeholder remains visible to Admins.
Audit Log for Comment History and Accountability
- Given any comment or thread action (create, reply, edit, resolve, reopen, mention, subscription change, attachment upload, attachment download, suggest apply, suggest reject, delete, moderate), when the action completes, then an immutable audit entry is written with ISO 8601 UTC timestamp, actor ID, actor role, thread ID, action type, target IDs, and before/after values where applicable. - Given an admin filters audit logs by thread ID or date range, when they request export, then a CSV file is generated and downloaded containing matching entries. - Given audit data stored in UTC, when displayed in the UI, then timestamps are converted to the viewer’s timezone while retaining UTC in storage. - Given any thread view, when a user clicks View Audit Trail, then a panel lists only audit entries related to that thread in reverse chronological order.
Test Links & Isolated Previews
"As a field organizer, I want a test link I can share with partners so that they can walk through the full action flow without generating real actions or skewing metrics."
Description

Generate shareable preview URLs for each draft that render in a production-like sandbox with safe sample or permissioned real data. Previews are clearly labeled as test, do not trigger real calls/emails, and do not affect live metrics. Allow simulation of supporter address/district matching and bill status to validate dynamic scripts. Provide optional password protection, expiration, and QR codes for field testing. Track preview events separately for quality checks and report them as draft-only telemetry.
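
A minimal sketch of preview-link issuance under these controls, assuming Node's crypto module; PreviewLink and createPreviewLink are illustrative names, and using the draft id as the password salt is a simplification for brevity.

    import { randomBytes, scryptSync } from "node:crypto";

    interface PreviewLink {
      draftId: string;
      token: string;            // unguessable path segment for the share URL
      passwordHash?: string;    // present only when the creator requires a password
      expiresAt?: string;       // UTC ISO 8601; requests after this time are rejected
      revoked: boolean;
    }

    function createPreviewLink(draftId: string, opts: { password?: string; ttlHours?: number } = {}): PreviewLink {
      return {
        draftId,
        token: randomBytes(16).toString("hex"),   // 128 bits of entropy
        passwordHash: opts.password
          ? scryptSync(opts.password, draftId, 32).toString("hex")  // draftId as salt, for brevity
          : undefined,
        expiresAt: opts.ttlHours
          ? new Date(Date.now() + opts.ttlHours * 3_600_000).toISOString()
          : undefined,
        revoked: false,
      };
    }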

Acceptance Criteria
Generate Draft Preview Link
Given a draft page or script exists and the user has edit rights When the user clicks "Generate Test Link" Then a unique preview URL is created and associated with that draft version And the URL renders the draft in a sandbox mirroring production styles, routes, and feature flags And the default data source is safe sample data unless permissioned real data is explicitly enabled And the link is copyable and can be opened in an incognito session without editing rights
Test Labels and Action Suppression
Given a draft preview URL When a user opens it Then a persistent, visible "TEST PREVIEW — NO LIVE ACTIONS" banner is displayed on all screens And call/email/action CTAs are routed to stub endpoints and cannot reach real targets And no events are written to live production metrics or CRM/webhooks And a "This is a test" confirmation is shown before any stubbed action is simulated
Simulate Address, District, and Bill Status
Given a user has the preview open When they open the Simulation panel and enter a supporter address Then the system resolves the address to the expected district and legislators using the sandbox mapping And dynamic scripts render with correct placeholders (legislator names, districts, offices) for that context When the user changes the bill status in the simulation (e.g., Introduced, In Committee, Floor Vote) Then the call/email script updates immediately to the appropriate variant for the selected status
Permissioned Real Data in Previews
Given the organization has enabled permissioned real data for previews and the viewer has the required role When the viewer toggles Real Data mode in the preview Then only scoped records with explicit preview consent are accessible And personally identifiable fields are masked except those necessary to validate script rendering And all outbound actions remain stubbed; no real calls/emails are sent And an audit log entry is created with user, timestamp, draft ID, and data fields accessed
Password, Expiration, and Revocation Controls
Given a preview link exists When the creator sets a password Then subsequent access requires that password and locks out after 5 consecutive failures for 15 minutes When the creator sets an expiration timestamp Then access after expiry returns a "Preview expired" message and HTTP 410 Gone When the creator revokes the link Then the link becomes invalid within 60 seconds and returns HTTP 403 Forbidden
QR Code Generation and Field Testing
Given a preview link exists When the user selects "Generate QR" Then QR codes are generated in PNG and SVG, downloadable, and embeddable And scanning the QR with iOS or Android opens the preview URL in the device browser And any password or expiration settings are enforced when accessed via QR
Isolated Preview Telemetry and Reporting
Given any preview page view or simulated action occurs Then an event is recorded in a separate "draft-preview" telemetry stream including draft ID, version, actor type, and timestamp And these events are excluded from live campaign dashboards and metrics by default And a Draft-only telemetry report exposes counts for views, simulations, and stubbed CTAs And exports can be filtered by preview-only events without mixing with production data
Approval Ladder Integration
"As a campaign owner, I want draft changes to flow through our existing approval ladder so that nothing merges to live without the right reviewers signing off."
Description

Integrate drafts with RallyKit’s Approval Ladder: Draft → Ready for Review → Changes Requested → Approved → Merged. Allow assignment of required approvers per organization/partner, enforce gate checks (content policy, legal, accessibility), and block merges until all checks pass. Provide status badges, due dates, and notifications for state changes. Capture all transitions, approver identities, and rationales in an audit-ready record. Support admin overrides with mandatory justification and automatic reviewer notification.
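
The ladder is a small state machine; the sketch below encodes the allowed transitions listed in the acceptance criteria as a lookup table (LadderState and canTransition are illustrative names).

    type LadderState = "Draft" | "Ready for Review" | "Changes Requested" | "Approved" | "Merged";

    const ALLOWED_TRANSITIONS: Record<LadderState, LadderState[]> = {
      "Draft": ["Ready for Review"],
      "Ready for Review": ["Approved", "Changes Requested"],
      "Changes Requested": ["Ready for Review"],
      "Approved": ["Merged"],
      "Merged": [],
    };

    function canTransition(from: LadderState, to: LadderState): boolean {
      return ALLOWED_TRANSITIONS[from].includes(to);
    }

Gate checks and required-approver rules then sit in front of the Approved→Merged edge; any transition not in the table maps to the HTTP 400 rejection described below.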

Acceptance Criteria
Enforce Approval Ladder State Transitions
Given a draft sandbox exists When a user attempts a state change Then only these transitions are permitted: Draft→Ready for Review; Ready for Review→Approved; Ready for Review→Changes Requested; Changes Requested→Ready for Review; Approved→Merged And any other attempted transition is rejected via API with HTTP 400 and an error message listing allowed transitions And the UI disables buttons/links for disallowed transitions And the system prompts for an optional rationale on Ready for Review, Approved, and Changes Requested and stores it with the transition
Required Approvers Blocking Merge
Given required approvers are configured for the draft's organization or partner When the draft enters Ready for Review Then the system assigns and displays those approvers as Required And only assigned approvers can perform the Approve action; others see it disabled and API returns HTTP 403 And the Merge action remains disabled until all Required approvers have approved And any API attempt to merge before all required approvals exist returns HTTP 409 with reason "pending_required_approvals" and a list of missing approvers
Gate Checks Enforcement (Content, Legal, Accessibility)
Given the draft is in Ready for Review or later When automated gate checks (content policy, legal, accessibility) execute Then each check's pass/fail status is displayed as a badge with timestamp And attempting to Approve while any gate check is failing disables the Approve action and shows failing checks with remediation links And any API attempt to Approve or Merge while any gate check is failing returns HTTP 409 with reason "gate_checks_failed" and identifiers of failing checks And checks re-run automatically on new commits/edits, updating badges within 60 seconds
Status Badges, Due Dates, and Notifications on State Changes
Given any state change occurs (Draft, Ready for Review, Changes Requested, Approved, Merged) When the transition is saved Then the draft displays a status badge reflecting the new state within 2 seconds And a due date is present and visible on the draft; if not preconfigured, the submitter must provide one before submitting Ready for Review, otherwise a validation error is shown and API returns HTTP 400 And notifications are sent to the author, assigned approvers, and watchers within 60 seconds via in-app and email, including the new state, due date, and action links And items past due date display an Overdue indicator until the state advances
Audit Trail for Transitions and Approvals
Given any state transition, approval, or change request occurs When the action is confirmed Then an immutable audit record is written capturing: timestamp (UTC ISO-8601), actor ID and role, from_state, to_state, rationale text (may be empty), approver IDs (if applicable), gate check results snapshot, and draft version hash And audit records are viewable in-product to admins and exportable as CSV/JSON via API And edits or deletions to audit records are disallowed; attempts return HTTP 403
Admin Override With Mandatory Justification and Notifications
Given a user with Admin role opens a draft that has failing gate checks or missing required approvals When the Admin attempts to set the state to Approved or Merged via Override Then the system requires a non-empty justification; if empty, the action is blocked and API returns HTTP 400 And upon successful override, the action proceeds despite failing checks or missing approvals, the audit record flags override=true and stores the justification And all previously required approvers and the author are notified within 60 seconds with the override details
Merge Blocked Until All Conditions Met
Given a draft exists When a user attempts to Merge Then Merge is permitted only if current state is Approved, all Required approvers have approved, and all gate checks are passing (unless an Admin override is used) And if any condition is unmet, the Merge button is disabled and API returns HTTP 409 with a machine-readable list of unmet conditions And upon successful merge, the system records the merge in the audit log, updates the live asset, links the merged commit to the draft, and marks the draft as Merged
Safe Merge & Conflict Resolution
"As an approver, I want a safe, guided merge with clear conflict resolution so that approved changes can go live without breaking our existing campaigns."
Description

Deliver a merge engine that applies approved changes with zero downtime, creating a new live version and preserving rollback points. Detect conflicts against intervening live edits and present a guided resolution UI with block-level and field-level choices. Use semantic merge rules for scripts (tokens, conditionals) and content blocks to minimize false conflicts. Validate merges with preflight checks (links, tokens, targeting rules) and offer a dry-run preview of resulting live content. Record merge outcomes and allow one-click rollback to prior versions.
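
A minimal three-way, block-level conflict check under the stated model (base = fork point, live = current production, draft = proposed); a block conflicts only when both sides changed it in different ways. Names are illustrative, and the real engine adds the semantic rules for tokens and conditionals.

    interface VersionedBlocks { [blockId: string]: string }

    function detectConflicts(base: VersionedBlocks, live: VersionedBlocks, draft: VersionedBlocks): string[] {
      const conflicts: string[] = [];
      const ids = new Set([...Object.keys(base), ...Object.keys(live), ...Object.keys(draft)]);
      for (const id of ids) {
        const liveChanged = live[id] !== base[id];
        const draftChanged = draft[id] !== base[id];
        // Non-overlapping changes auto-merge; only divergent edits to the same block conflict.
        if (liveChanged && draftChanged && live[id] !== draft[id]) conflicts.push(id);
      }
      return conflicts;
    }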

Acceptance Criteria
Zero-Downtime Merge and Versioning
Given an approved draft sandbox and a current live version Vn When a maintainer initiates Merge Then the system creates a new immutable live version Vn+1 atomically with cutover < 250 ms And all requests during cutover are served by either Vn or Vn+1 with no increase in 5xx above baseline And Vn is preserved as a rollback point with associated diff metadata and version ID
Conflict Detection Against Intervening Live Edits
Given a draft based on base version Vb and live has advanced to Vb+k When pre-merge analysis runs Then non-overlapping changes auto-merge And overlapping changes are flagged at block- and field-level with exact locations And a conflicts list is generated with counts by content type And the merge action is blocked until all conflicts are resolved or explicitly deferred (if policy allows)
Guided Resolution UI with Block- and Field-Level Choices
Given conflicts are detected for a draft When the user opens the resolution UI Then side-by-side diffs and inline comments are displayed for each conflict And per-conflict actions include Keep Live, Keep Draft, or Manual Edit where permitted And a real-time preview updates after each choice And the system prevents completion until all conflicts have a chosen resolution And an audit trail of all selections is recorded
Semantic Merge Rules for Scripts
Given changes to scripts containing tokens, variables, and conditionals When the semantic merge runs Then whitespace-only or formatting-only differences do not produce conflicts And changes scoped to different tokens do not conflict And reordering of independent clauses preserves logical equivalence without raising conflicts And the merged script compiles and passes static validation with zero errors
Preflight Validation of Links, Tokens, and Targeting Rules
Given a draft ready to merge When preflight validation runs Then all internal links resolve to valid targets And external URLs return 2xx/3xx or are on an allowlist And all referenced tokens exist and are in scope And targeting rules parse and evaluate without errors And any critical failure blocks merge with a clear message And a summary report displays counts of checks, warnings, and errors
Dry-Run Preview of Resulting Live Content
Given preflight passes or only warnings remain When the user requests a dry-run preview Then the system generates a snapshot of the resultant live version including pages and scripts And provides a shareable, time-limited test link And the preview renders with production configuration but produces no production side effects And the preview content hash matches the hash of the version that will be published on merge
Audit Logging and One-Click Rollback
Given a merge completes When viewing version history Then an entry shows who merged, when, source draft ID, base version, diff summary, and preflight results And clicking Rollback on any prior version promotes it to live within 5 seconds with zero downtime And the rollback event is logged with operator, timestamp, and target version

Time Anchor

Pins the ledger’s daily root hash to an independent timestamp authority, producing an anchor certificate that proves when entries existed. Auditors can independently verify anchors, so backdating or silent rewrites are provably detectable, strengthening trust with funders.

Requirements

Daily Ledger Root Hash Snapshot
"As an org admin, I want a daily immutable snapshot of our ledger so that I can prove our activities existed as of each day without relying on internal claims."
Description

Compute and freeze a daily Merkle tree root hash over the RallyKit action ledger at a configurable day boundary (default 00:00 UTC). Ensure deterministic ordering and canonical serialization of entries so the root is reproducible across environments. Persist snapshot metadata (root hash, digest algorithm, canonicalization version, entry count, time window) in append-only storage to serve as the anchor input. Expose an internal API for downstream anchoring and inclusion proof generation.
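
A simplified sketch of the deterministic root: canonicalize each entry, hash the leaves, and fold pairs into a Merkle root. The canonicalization here (recursive key sorting) is a stand-in for the versioned scheme referenced above, a production tree would typically add leaf/node domain separation, and the empty-tree hash shown is an assumption.

    import { createHash } from "node:crypto";

    const sha256 = (data: string | Buffer) => createHash("sha256").update(data).digest();

    // Deterministic serialization: object keys sorted at every level.
    function canonicalize(value: unknown): string {
      if (value === null || typeof value !== "object") return JSON.stringify(value) ?? "null";
      if (Array.isArray(value)) return `[${value.map(canonicalize).join(",")}]`;
      const obj = value as Record<string, unknown>;
      return `{${Object.keys(obj).sort().map(k => `${JSON.stringify(k)}:${canonicalize(obj[k])}`).join(",")}}`;
    }

    function merkleRoot(entries: Record<string, unknown>[]): string {
      if (entries.length === 0) return sha256("").toString("hex"); // assumed canonical empty-tree hash
      let level = entries.map(e => sha256(canonicalize(e)));
      while (level.length > 1) {
        const next: Buffer[] = [];
        for (let i = 0; i < level.length; i += 2) {
          const right = level[i + 1] ?? level[i];            // duplicate the last node on odd levels
          next.push(sha256(Buffer.concat([level[i], right])));
        }
        level = next;
      }
      return level[0].toString("hex");
    }

Because ordering and serialization are fixed, two environments hashing the same window produce the same root, which is what makes the snapshot reproducible.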

Acceptance Criteria
Default 00:00 UTC Snapshot Creation
Given the day-boundary configuration is set to 00:00 UTC and the ledger contains entries within the previous UTC day When the boundary time 00:00:00 UTC is reached Then the system computes and freezes a daily Merkle root for the exact time window [00:00:00, 24:00:00) UTC of the previous day within 5 minutes And a single append-only metadata record is persisted with fields: rootHash, digestAlgorithm, canonicalizationVersion, entryCount, timeWindow.start, timeWindow.end And the persisted metadata references the correct window and entry count for that day
Configurable Day-Boundary Change
Given the day-boundary is reconfigured to 06:00 UTC effective at the next cutover When the next boundary after the change occurs Then the resulting snapshot covers exactly the window [previous 06:00 UTC, current 06:00 UTC) And no overlap or gap exists between the last snapshot under the old boundary and the first snapshot under the new boundary And no duplicate snapshot is produced for any window finalized prior to the configuration change And the persisted metadata reflects the new timeWindow.start and timeWindow.end values
Deterministic Ordering and Canonical Serialization
Given two independent environments (A and B) with identical ledger entries and identical configuration for a given window but ingested in different orders and serialized with varying key orders/whitespace When each environment computes the daily root with the same digestAlgorithm and canonicalizationVersion Then both environments produce an identical rootHash and entryCount for that window And recomputing the root in either environment without new entries produces the same rootHash again And any non-canonical field ordering or whitespace differences do not change the computed root
Append-Only Snapshot Metadata Immutability
Given a snapshot metadata record exists for a finalized time window When any process attempts to update or delete that record Then the operation is rejected and no changes are written And attempting to insert a second record for the same timeWindow is rejected by a uniqueness constraint and no duplicate is created And all rejected mutation attempts are logged for audit purposes
Empty-Day Snapshot Handling
Given a time window with zero ledger entries When the boundary is reached Then a snapshot is still computed and a metadata record is persisted with entryCount = 0 And the rootHash corresponds to the canonical empty-tree hash under the specified digestAlgorithm and canonicalizationVersion And recomputing the snapshot for the same empty window yields the identical rootHash
Internal API: Snapshot Metadata for Anchoring
Given a finalized snapshot exists for a time window When the internal API is called to retrieve the snapshot metadata by that window identifier Then the API responds 200 with fields: rootHash, digestAlgorithm, canonicalizationVersion, entryCount, timeWindow.start, timeWindow.end And requests for a non-existent window return 404 Not Found And the returned rootHash matches the value stored in append-only metadata
Internal API: Inclusion Proof Generation
Given a finalized snapshot exists for a time window and a specific ledger entry within that window When the internal API is called to generate an inclusion proof for that entry and window Then the API responds 200 with proof components sufficient to recompute the root (leafHash, ordered sibling hashes, positional flags, rootHash, digestAlgorithm, canonicalizationVersion) And a verifier using the returned proof can recompute a root that equals the snapshot's rootHash And requests for an entry outside the specified window or unknown entryId return 404 or 422 with no proof payload
Pluggable Timestamp Authority Anchoring
"As a compliance officer, I want our daily root hash anchored to independent authorities so that any backdating or silent tampering is provably impossible."
Description

Integrate with at least two independent timestamp authorities via a pluggable adapter (e.g., RFC 3161 TSA and OpenTimestamps-compatible service). After each daily snapshot, submit the root hash to all configured providers, capture receipts (timestamps, signatures, proofs), and store them immutably. Implement retries with backoff, provider health checks, idempotency, and timeouts; anchoring must not block the main app. All communications over TLS, with provider credentials securely managed.
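
The adapter pattern can be sketched as one interface per provider plus a shared retry helper; the interface name and the jittered-backoff parameters below simply mirror the retry rules in the acceptance criteria and are not a prescribed implementation.

    interface TimestampAuthority {
      readonly providerId: string;
      anchor(rootHashHex: string): Promise<{ receipt: Uint8Array; anchoredAt: string }>;
    }

    async function anchorWithRetry(
      tsa: TimestampAuthority,
      rootHashHex: string,
      maxAttempts = 6,
      baseDelayMs = 5_000,   // first retry after ~5s
      capMs = 300_000        // backoff capped at 5 minutes
    ): Promise<{ receipt: Uint8Array; anchoredAt: string }> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return await tsa.anchor(rootHashHex);
        } catch (err) {
          if (attempt === maxAttempts) throw err;
          // Exponential backoff with full jitter.
          const delay = Math.random() * Math.min(capMs, baseDelayMs * 2 ** (attempt - 1));
          await new Promise<void>(resolve => setTimeout(resolve, delay));
        }
      }
      throw new Error("unreachable");
    }

Each provider (an RFC 3161 client, an OpenTimestamps client) implements anchor() behind the same interface, so the background worker that runs after a snapshot does not care which authorities are configured.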

Acceptance Criteria
Daily Snapshot Auto-Anchoring to Multiple TSAs
Given a daily ledger snapshot completes with root hash H at time T When the anchoring service is triggered Then jobs are enqueued within 60 seconds to submit H to all configured providers (minimum: 1 RFC 3161 TSA and 1 OpenTimestamps service) And both submissions are executed in parallel without blocking the main app thread And for each provider a response is received or timed out within the configured request timeout (default 30s, configurable 5–120s)
Receipt Capture and Immutable Storage
Given a provider returns a valid receipt for H Then the system stores an immutable anchor record including: provider_id, request_hash (H), request_ts, response_ts, signature_or_proof_blob, verification_status, and record_version And the storage layer enforces append-only semantics; update/delete operations are rejected and create a new linked version instead And anchor records are encrypted at rest and readable only by authorized roles
Idempotent Anchoring and Duplicate Trigger Handling
Given the same snapshot root hash H is triggered for anchoring multiple times (manual or automated) When a successful anchor already exists for provider P Then no new submission is sent to P and no duplicate anchor record is created And when the prior attempt for P failed or timed out, the system retries and records a single success without duplicates
Resilient Retry, Backoff, Health Checks, and Circuit Breaking
Rule: On transient failure (network error, 5xx, timeout), retry up to 6 attempts with exponential backoff starting at 5s, capped at 5m, with jitter Rule: After 3 consecutive failures for a provider, mark provider unhealthy for 10m and skip new submissions while continuing scheduled health checks Rule: Health checks run every 1m; provider returns to healthy after 2 consecutive successes Rule: All state changes and failures emit structured logs and alert events with provider_id and error details
Non-Blocking Operations and Performance Isolation
Given anchoring jobs are running or providers are slow/unreachable Then core RallyKit API p95 latency under standard load does not degrade by more than 5% versus baseline And anchoring executes in background workers with bounded concurrency; main request threads never await provider responses And per-provider request timeouts are enforced and hung connections are aborted
Secure Communications and Credential Management
Rule: All provider requests enforce TLS 1.2+ with hostname verification and full certificate chain validation; invalid chains are rejected Rule: Provider credentials are stored in a secrets manager, never logged, scoped to worker roles, and rotatable without downtime Rule: Access and rotation events are audit-logged with actor, timestamp, and change summary
Auditor-Visible Anchor Certificate and Independent Verification
Given an auditor downloads the anchor certificate for date D When they verify the RFC 3161 receipts Then the signature validates against the TSA cert chain and the message imprint equals the snapshot root hash H And when they verify the OpenTimestamps proofs Then the proofs verify against the calendar and link to H And both verifications return pass and the certificate references the immutable anchor record ID
Anchor Certificate Generation & Storage
"As a grants auditor, I want a downloadable daily anchor certificate so that I can verify when records existed without needing internal system access."
Description

Generate an anchor certificate artifact per day that bundles: snapshot metadata, root hash, digest algorithm, canonicalization version, TSA receipts/proofs, anchoring time(s), provider metadata, and RallyKit build/version. Store certificates in versioned, append-only storage with retention controls; expose a read-only download in the dashboard and via API. Provide a stable, shareable link with optional access control (public link token or signed URL) and ensure the certificate is self-describing for offline verification.
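
An illustrative TypeScript shape for the certificate, mirroring the field list in the acceptance criteria below; the exact property names and nesting are assumptions.

    interface AnchorCertificate {
      schema_version: string;
      snapshot_metadata: { entry_count: number; time_window_start: string; time_window_end: string };
      root_hash: string;                                // hex Merkle root of the day's ledger
      digest_algorithm: string;                         // e.g. "SHA-256"
      canonicalization_version: string;
      anchoring_times: Record<string, string>;          // provider_id -> UTC ISO 8601
      provider_metadata: { provider_id: string; kind: "rfc3161" | "opentimestamps" }[];
      tsa_receipts_or_proofs: Record<string, string>;   // provider_id -> base64 receipt or proof
      rallykit_build: string;
      rallykit_version: string;
    }

Keeping algorithm identifiers, canonicalization version, and the raw receipts inside the file is what makes the certificate self-describing for offline verification.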

Acceptance Criteria
Daily Certificate Contains Required Fields
Given a completed ledger snapshot for UTC day D When the anchor process runs for day D Then a certificate file for day D is generated And the certificate includes fields: snapshot_metadata, root_hash, digest_algorithm, canonicalization_version, anchoring_times, provider_metadata, tsa_receipts_or_proofs, rallykit_build, rallykit_version, schema_version And digest_algorithm and canonicalization_version match the configured values And root_hash equals the computed Merkle root of the day D snapshot And the certificate is encoded in canonical JSON per canonicalization_version and passes schema validation
Append-Only Versioned Storage With Retention Controls
Given the certificate store contains entries for day D When writing a new certificate or re-anchoring day D Then the system appends a new immutable version with a monotonically increasing version number And direct overwrites are rejected with HTTP 409 (or equivalent) And read operations never mutate existing versions And each version is addressable via a versioned URL and listed with created_at timestamps And retention_policy(days=N) prevents delete before expiry, returning HTTP 403 if attempted And after expiry, deletion removes only expired versions while preserving audit logs
Dashboard and API Read-Only Download
Given a user with read access opens the Anchors page When they download the certificate for day D Then the download succeeds with HTTP 200, Content-Type application/json, and Content-Disposition filename "anchor-D.json" And the file content matches the latest version for day D unless a specific version is selected And unauthenticated users receive HTTP 401; unauthorized users receive HTTP 403; non-existent day returns HTTP 404 And the API endpoint GET /v1/anchors/{YYYY-MM-DD}[?version=N] returns the same content with ETag and Last-Modified headers
Stable Shareable Link With Access Control
Given an admin generates a shareable link for day D When option "public token" is selected Then the URL contains a random token with at least 128 bits of entropy and does not require authentication And revoking the link invalidates the token within 60 seconds and subsequent requests return HTTP 410 When option "signed URL" is selected with expiry T Then the link is valid until T and requests after T return HTTP 403 And the shareable link remains stable across storage re-versioning and serves the latest version unless pinned to a specific version
Self-Describing Certificate Supports Offline Verification
Given only the certificate file and trusted TSA root certificates When a verifier inspects the certificate offline Then the certificate contains algorithm identifiers, canonicalization_version, provider name, TSA receipt bytes, TSA signer certificate chain, and root_hash And the verifier can validate the TSA receipt signature over the root_hash without network access And field names and formats are sufficient to reconstruct the canonical byte string that was timestamped
Independent Timestamp Verification and Tamper Detection
Given a certificate for day D and an independently exported ledger snapshot for day D When an auditor recomputes the Merkle root from the export and verifies the TSA receipt using TSA trust roots Then the recomputed root equals the certificate root_hash and the TSA signature validates And any modification to a single ledger entry causes the recomputed root to differ from root_hash, failing verification And the receipt time matches the certificate anchoring_times for the provider and cannot be altered without breaking signature validation
Anchor Job Scheduling, Retry, and Multi-TSA Support
Given the daily anchor job schedule is configured for 00:15 UTC When the day D snapshot closes at 00:00 UTC Then the anchor job starts by 00:15 UTC and completes within 5 minutes under normal conditions And on transient TSA failure, the job retries up to 5 times with exponential backoff and jitter And on persistent failure, the system records status "Unanchored", emits an alert, and exposes a retry action And when multiple TSAs are configured, receipts from all providers are obtained and included, with distinct anchoring_times per provider And re-running the job for the same inputs produces an identical certificate payload except for storage version metadata
Public Verification Tools
"As an external auditor, I want a public verification tool so that I can independently validate certificates and proofs with minimal friction."
Description

Provide an auditor-friendly verification page and a lightweight CLI script that validate anchor certificates and inclusion proofs without requiring RallyKit credentials. The verifier should re-hash provided entries or snapshots, check TSA receipts against provider public keys, and report pass/fail with detailed diagnostics. Publish step-by-step instructions and machine-readable schemas to enable third-party tools to verify independently.
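
The CLI contract in the criteria below reduces to a machine-readable report plus fixed exit codes; a sketch, with illustrative check and field names:

    interface CheckResult { name: string; result: "PASS" | "FAIL"; error_code?: string; message?: string }
    interface VerifyReport { status: "PASS" | "FAIL"; checks: CheckResult[]; anchor_root?: string; tsa_time?: string }

    function emitReportAndExit(report: VerifyReport) {
      process.stdout.write(JSON.stringify(report) + "\n");   // --json style: JSON only, no extra text
      process.exit(report.status === "PASS" ? 0 : 2);        // 0 = all checks pass, 2 = any check failed
    }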

Acceptance Criteria
Web Verifier Accepts Anchor Certificate Without Credentials
Given a public user with no RallyKit account When they open the verification URL Then the page loads without any authentication prompts Given a valid anchor certificate JSON and optional inclusion proof or snapshot files When the user uploads files and clicks Verify Then inputs are validated against the published schemas (current versions displayed) and any validation errors include JSON Pointer paths Given inputs up to 5 MB total When verification runs Then overall Pass/Fail is displayed within 5 seconds and a downloadable JSON report is provided
TSA Receipt Signature Checked Against Provider Public Keys
Given an anchor certificate containing a TSA receipt and provider_id When verification runs Then the TSA signature is validated against the matching provider public key from the pinned trust store and the certificate chain is time-valid Given a computed or provided anchor root hash When binding is checked Then the TSA receipt is confirmed to cover exactly that root hash and the receipt time is extracted and shown Given any signature or binding failure When verification completes Then status is Fail and diagnostics include error_code=signature_invalid|untrusted_provider|mismatched_root and the failing check name
Merkle Re-Hash and Inclusion Proof Validation
Given a set of ledger entries (canonical JSON or NDJSON) and a published hashing/canonicalization spec When verification runs Then leaf hashes are recomputed deterministically and listed Given an inclusion proof path and node ordering rules When the proof is applied Then the Merkle root recomputed from leaves and proof equals the root in the anchor certificate Given any mismatch When verification completes Then status is Fail and diagnostics include the first failing step index, expected_root, actual_root, and the leaf ids not proven
CLI Verifier Deterministic Output and Exit Codes
Given the CLI is invoked with paths to anchor certificate and optional entries/proof When all checks pass Then the process exits with code 0 and emits a JSON report to stdout containing fields: status=PASS, checks[], anchor_root, tsa_time, provider_id, versions Given any check fails When the CLI finishes Then it exits with code 2 and the JSON report includes failed checks with error_code and message; no stack traces are printed unless -v is supplied Given the --json flag When used Then only machine-readable JSON is printed with no extra text; help and version are available via -h/--version
Detailed Diagnostics on Failure
Given any verification failure When the report is generated Then diagnostics list each check with name, result, error_code, expected vs actual values, algorithm identifiers (e.g., hash, signature), and references to schema/proof locations Given a time-anchor/root mismatch When reported Then both root hashes and the differing step are included Given a schema validation error When reported Then JSON Pointer to the offending field and a human-readable message are included
Published Docs and Schemas Enable Independent Verification
Given a third party without RallyKit credentials When they visit the documentation URL Then they can access step-by-step guides for the web verifier and CLI, and download JSON Schemas and sample artifacts without login Given the published schemas When fetched via stable URLs Then they include semantic versioning and change logs Given the sample artifacts and walkthrough When executed exactly as documented Then the verifier produces a PASS with an output JSON that matches the expected sample hash and fields byte-for-byte
Per-Entry Inclusion Proofs
"As a campaign manager, I want per-entry inclusion proofs so that I can demonstrate a specific action was included in the anchored snapshot for a given day."
Description

Enable per-entry Merkle inclusion proofs linking any ledger entry to its anchored daily root. In the UI and API, allow users to fetch a proof bundle containing the entry, canonicalization details, the Merkle path, and the referenced anchor certificate. Display verification status in the entry detail view and support bulk export for a range of entries.
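
Verifying an inclusion proof is a short fold: hash the canonicalized entry, combine it with each sibling in the merklePath in the stated order, and compare the result to the anchored root. The sketch below assumes SHA-256 and simple concatenation; the exact node-combination rule must match whatever the proof generator uses.

    import { createHash } from "node:crypto";

    interface ProofStep { hash: string; position: "left" | "right" }

    function verifyInclusion(leafHashHex: string, merklePath: ProofStep[], rootHashHex: string): boolean {
      let current = Buffer.from(leafHashHex, "hex");
      for (const step of merklePath) {
        const sibling = Buffer.from(step.hash, "hex");
        const pair = step.position === "left"
          ? Buffer.concat([sibling, current])   // sibling sits to the left of the running hash
          : Buffer.concat([current, sibling]);
        current = createHash("sha256").update(pair).digest();
      }
      return current.toString("hex") === rootHashHex.toLowerCase();
    }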

Acceptance Criteria
API: Fetch Single Entry Inclusion Proof
Given an authenticated user with read access and an existing anchored ledger entry ID When they GET /api/v1/ledger/entries/{entryId}/proof Then the response is 200 application/json within p95 ≤ 500 ms and includes: entry (original JSON), proofVersion (semver), hashAlgorithm = "SHA-256", canonicalization.scheme = "RFC8785", canonicalization.version = "1.0", merklePath[] with {hash, position in ["left","right"]}, rootHash (64-hex), anchorCertificate.type = "RFC3161", anchorCertificate.base64, anchorTimestamp (ISO 8601 Z), verificationStatus = "VERIFIED" And recomputing entryHash = SHA-256(RFC8785-canonicalized entry) and applying merklePath yields rootHash And the RFC3161 token messageImprint equals rootHash and validates against the configured TSA trust store And ETag header is present for cache validation Given an unknown entryId When requesting the proof Then the response is 404 with code = "ENTRY_NOT_FOUND" Given a known entry whose daily root is not yet anchored When requesting the proof Then the response is 409 with code = "NOT_ANCHORED" Given a user without permission When requesting the proof Then the response is 403 with code = "FORBIDDEN"
UI: Entry Detail Verification Status Badge
Given a user opens the entry detail view for an anchored entry When the page renders Then a "Verified" badge appears within 2 seconds and displays: anchor date/time (UTC), short root hash (first/last 6), and TSA name And copy actions for "Copy root hash" and "Copy proof JSON" place correct values on the clipboard Given an entry whose daily root is not anchored When the page renders Then the status shows "Pending anchor" with the next scheduled anchor time Given a proof verified as invalid When the page renders Then the status shows "Invalid proof" with a tooltip containing a concise reason code And the UI labels copy actions as "Unverified" And an accessible aria-label reflects "Verification: Verified|Pending|Invalid" And the status control meets WCAG AA contrast and is keyboard operable
Proof Bundle Structure and Deterministic Canonicalization
Given a proof bundle is retrieved When canonicalizing the entry using JSON Canonicalization Scheme (RFC 8785) Then the resulting bytes are identical across Node.js, Python, and Go reference tests and produce the same SHA-256 entryHash And the bundle includes: proofVersion (semver), hashAlgorithm = "SHA-256", canonicalization {scheme: "RFC8785", version: "1.0"}, merklePath[] each with position in ["left","right"] and 64-hex hash, rootHash 64-hex And bundle size is ≤ 512 KB including anchorCertificate Given the anchor certificate is omitted due to size constraints When retrieving the bundle Then anchorCertificateUrl is present, returns 200 on HEAD with Content-Length and X-Content-SHA256 headers, and GET yields the DER token Given unknown fields are present in the bundle When verifying Then verification succeeds and unknown fields are ignored while preserved in exports
Bulk Export Proofs for Range
Given an org admin selects a UTC date range and optional filters (tags, campaign, status) totaling ≤ 50,000 entries When they POST /api/v1/ledger/proofs/export with an idempotency key Then the API returns 202 with jobId and begins processing within 5 seconds And GET /api/v1/ledger/proofs/export/{jobId} yields states: QUEUED, RUNNING, COMPLETE, FAILED And when COMPLETE, a streamed ZIP downloads without exhausting server memory And the ZIP contains: manifest.json (timeRange, filters, counts, generatedAt, sha256 for each file), and per-entry JSON files named {entryId}.json And anchored entries have verificationStatus = "VERIFIED"; unanchored entries have verificationStatus = "NOT_ANCHORED" and no anchorCertificate And the response includes X-Content-SHA256 for the ZIP And rate limiting enforces ≤ 1 concurrent export per org and returns 429 otherwise
Verification Failure Handling and Messaging
Given a proof bundle where merkle recomputation does not match rootHash When verification runs Then verificationStatus = "INVALID" and errorCode = "MISMATCH_ROOT_HASH" are returned in the API and displayed in the UI Given a bundle where the canonicalized entry hash does not match the recorded entry hash When verification runs Then verificationStatus = "INVALID" and errorCode = "MISMATCH_ENTRY_HASH" Given an RFC3161 token signed by an untrusted or revoked TSA certificate When verification runs Then verificationStatus = "INVALID" and errorCode in ["UNTRUSTED_TSA_CHAIN","INVALID_TSA_TOKEN"] Given an unsupported canonicalization version When verification runs Then verificationStatus = "INVALID" and errorCode = "UNSUPPORTED_CANON_VERSION" And all invalid verifications emit an audit log event proof_verification_failed with entryId, errorCode, rootHash, and tsaSerial
Auditor Independent Verification via RFC3161 Token
Given an auditor extracts the RFC3161 token from a proof bundle When validating with a standard RFC3161 verifier and the configured trust store Then the token is valid, the messageImprint algorithm is SHA-256, and the imprint equals rootHash in the bundle And the TSA certificate chain validates to a trusted root and revocation checking (OCSP/CRL) succeeds or is recorded as not required per policy And the token genTime falls within the UTC day of the mapped root And the bundle exposes tsaPolicyOid matching the TSA policy OID And the auditor can reproduce the rootHash by recomputing the Merkle tree from the included merklePath and entryHash
Day Boundary Mapping and Anchoring Latency
Given entries are timestamped in UTC When the daily anchoring job runs at 00:10:00Z Then all entries created from 00:00:00Z to 23:59:59Z are included exactly once in that day's Merkle tree And proofs for entries created before the cutoff become available within 10 minutes after the anchor time with p95 ≤ 10 minutes And entries created after the cutoff are not included in the previous day's root and their proof endpoint returns 409 with code = "NOT_ANCHORED" until the next anchor And once a daily root is anchored, attempting to regenerate a differing rootHash for the same day is rejected and logged as an audit event
Anchoring Job Monitoring & Alerts
"As an operations admin, I want monitoring and alerts for the anchoring pipeline so that I can quickly remediate issues and maintain auditor trust."
Description

Schedule the daily snapshot and anchoring jobs, emit metrics, and alert on anomalies (job failures, delayed anchors, provider errors, or integrity mismatches). Provide visibility via a dashboard widget showing last successful anchor, coverage over the past 90 days, and any gaps. Send notifications via email/webhook/Slack on failure and auto-retry with exponential backoff.
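
The retry ladder in the alerting criteria (1m, 2m, 4m, 8m, 16m, then stop) can live in one small helper shared by the job scheduler and webhook delivery; this is a sketch, and the attempt counting is an assumption about how the maximum of five attempts is interpreted.

    // Delay before retry N (1-based); null means give up and escalate to critical.
    function retryDelayMs(retryNumber: number): number | null {
      if (retryNumber < 1 || retryNumber > 5) return null;
      return 60_000 * 2 ** (retryNumber - 1);   // 1m, 2m, 4m, 8m, 16m
    }

    // e.g. retryDelayMs(3) === 240_000 (4 minutes); retryDelayMs(6) === null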

Acceptance Criteria
Daily Schedule and Execution Order
Given an organization has configured a daily schedule time in its timezone and anchoring is enabled When the scheduled time arrives Then the snapshot job starts within ±60 seconds of the configured local time (respecting DST: on spring-forward days it runs at the next valid minute; on fall-back days it runs once) And upon snapshot success, the anchoring job starts within 30 seconds And if the snapshot fails, the anchoring job does not start And absent failures, no more than one snapshot and one anchor run occur per calendar day per organization
Metrics Emission and Scrape
Given snapshot and anchoring jobs execute When the metrics endpoint "/metrics" is scraped Then the following metric series are exposed with labels org_id and job_type: job_scheduled_total, job_started_total, job_completed_total, job_failed_total, job_retry_total, provider_error_total, integrity_mismatch_total, anchor_delay_seconds And each series includes valid TYPE and HELP descriptors And values increment exactly once per corresponding event And anchor_delay_seconds measures time from ledger day end to anchor certificate issuance
Failure Alerts and Exponential Backoff
Given a snapshot or anchoring job attempt fails for any reason When the failure is recorded Then notifications are sent to all configured channels (email, Slack, webhook) within 60 seconds And each payload includes org_id, job_id, job_type, attempt_number, failure_reason, severity, timestamp (ISO 8601), correlation_id, and last_successful_anchor_at And duplicate alerts for the same incident are de-duplicated using a stable incident_key And retries follow exponential backoff with delays 1m, 2m, 4m, 8m, 16m (maximum 5 attempts total) And webhook deliveries are retried on non-2xx responses with the same backoff policy; Slack/email are considered delivered only upon 2xx/250 acceptance And after the final failed attempt, the incident severity escalates to critical and no further retries occur
Delayed Anchor SLA Alert
Given the latest ledger day is not yet anchored When anchor_delay_seconds exceeds the configured threshold (default 6 hours per organization) Then a Delayed Anchor warning alert is emitted to all configured channels with expected_by, current_delay, org_id, and ledger_day And the alert auto-clears when the anchor certificate is stored And the effective threshold value is included in the alert payload
Provider Error Handling and Failover
Given the primary timestamp authority is unavailable or returns an error When an anchoring attempt is made Then the event is logged and provider_error_total is incremented with the provider label And if a secondary provider is configured, failover occurs within 30 seconds and anchoring is attempted with the secondary And if failover succeeds, a warning alert is sent; if all providers fail after retries, a critical alert is sent And no success status is recorded until an anchor certificate is stored and verifiable
Integrity Mismatch Detection
Given the computed daily root hash differs from the ledger's persisted root hash When pre-anchor validation is performed Then the anchoring attempt is aborted and marked failed with reason "integrity_mismatch" And a critical alert is sent with org_id, ledger_day, expected_root, computed_root, and correlation_id And no retries are attempted until a human resolves the mismatch and closes the incident
Dashboard Widget: Last Anchor and 90-Day Coverage
Given an authenticated Admin or Auditor views the dashboard When the Time Anchor widget loads Then it displays last_successful_anchor_at (ISO 8601 with timezone), last_ledger_day_anchored, and a status badge (green if >=100% coverage last 30 days, amber if >=95%, red otherwise) And it shows coverage for the past 90 days, including anchored_days, missing_days, and gap ranges with reasons (failed, delayed, pending) And it provides a link to the latest anchor certificate and a Verify action that opens the independent timestamp authority verification endpoint And the widget loads within 2 seconds at p95 and handles the empty state (no anchors yet) with explanatory text
Gap Detection and Late-Anchor Handling
"As a compliance manager, I want missed days to be explicitly recorded and late-anchored so that there is no ambiguity or appearance of backdating."
Description

Detect missed snapshot or anchoring windows and record explicit gap entries. Support catch-up anchoring that anchors the historical root with the current trusted timestamp while labeling the certificate as late and linking to the gap record. Prevent generation of artifacts that appear antedated; surface gaps prominently in the UI and reports to preserve trust with funders.
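As a rough sketch of the gap record this describes, the shape below follows the field names in the acceptance criteria; hashing the entry as SHA-256 over its canonical JSON is an assumption about the ledger's chaining scheme.

```typescript
import { createHash, randomUUID } from "node:crypto";

// Sketch of the 'gap' ledger entry; field names follow the acceptance criteria,
// the hashing scheme is an assumption.
interface GapEntry {
  gap_id: string;
  type: "gap";
  subtype: "snapshot" | "anchor";
  window_start: string; // ISO 8601 UTC
  window_end: string;
  detected_at: string;
  created_by: "system";
  reason: "missed_snapshot" | "missed_anchor";
  status: "open" | "closed";
  entry_hash: string;
}

function createGapEntry(
  subtype: "snapshot" | "anchor",
  windowStart: string,
  windowEnd: string
): GapEntry {
  const base = {
    gap_id: randomUUID(),
    type: "gap" as const,
    subtype,
    window_start: windowStart,
    window_end: windowEnd,
    detected_at: new Date().toISOString(),
    created_by: "system" as const,
    reason: subtype === "snapshot" ? ("missed_snapshot" as const) : ("missed_anchor" as const),
    status: "open" as const,
  };
  // Hash the canonical JSON of the entry body so the ledger can chain it.
  const entry_hash = createHash("sha256").update(JSON.stringify(base)).digest("hex");
  return { ...base, entry_hash };
}
```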

Acceptance Criteria
Record Gap Entry When Snapshot Window Missed
Given a daily snapshot schedule ending at 23:59:59 UTC And no successful snapshot was committed for the window [00:00:00, 23:59:59] When the gap detector runs at T = window_end + 5 minutes Then an immutable ledger entry of type 'gap' with subtype 'snapshot' is created And it includes fields: gap_id (UUIDv4), window_start, window_end, detected_at=T, created_by='system', reason='missed_snapshot', status='open', entry_hash And the new ledger root including the gap entry is computed and persisted within 30 seconds And an audit event 'gap.created' is emitted with gap_id within 5 seconds And the dashboard displays a red 'Snapshot gap detected' banner within 60 seconds referencing gap_id
Record Gap Entry When Anchoring Window Missed
Given a daily anchoring cutoff of 00:30:00 UTC for the prior day's root And no valid TSA anchor was obtained by the cutoff for root_hash=R When the gap detector runs at cutoff + 5 minutes Then a ledger entry of type 'gap' with subtype 'anchor' is created And it includes fields: gap_id, root_hash=R, window_start, window_end, anchor_due_by, detected_at, reason='missed_anchor', status='open', entry_hash And the updated ledger root including the gap entry is persisted within 30 seconds And an audit event 'gap.created' with subtype 'anchor' is emitted within 5 seconds And the UI 'Anchors' view shows a 'Missed anchor window' row for R within 60 seconds
Catch-Up Anchoring Produces Late Anchor Certificate
Given a historical root_hash=R has an open gap record with subtype 'anchor' When a catch-up anchoring job is triggered at time Tc Then an anchor certificate is issued with fields: certificate_id, root_hash=R, anchor_timestamp=Tc, label='Late', gap_id, tsa_token, tsa_serial And anchor_timestamp is within ±1 second of Tc and strictly greater than window_end And the TSA token validates against the independent TSA chain And the gap record transitions to status 'closed' with resolution='late_anchored' and links certificate_id And APIs and UI display a 'Late' badge wherever the certificate appears And the certificate is attached to the ledger root R page and the gap detail page
Prevent Antedated Artifacts Across APIs and Reports
Given any API or export that returns anchor metadata When the data includes a late anchor for root_hash=R Then the field 'anchored_at' equals the TSA timestamp and is never earlier than 'original_period_end' And the field 'original_period_end' is present and reflects the historical window end And no field presents the TSA timestamp as the original_period_end And attempts to override 'anchored_at' via admin API are rejected with HTTP 422 and error code 'ANCHOR_IMMUTABLE' And report PDFs/CSVs include columns: Late=true, gap_id, original_period_end, anchored_at And sorting and grouping use 'anchored_at' for late anchors

UI Surfaces Gaps and Late Anchors Prominently
Given the dashboard is loaded by a user with audit permissions When at least one open gap exists Then a top-level alert banner shows the count of open gaps and links to the Gaps view And the ledger timeline renders gap rows with a warning icon, tooltip, and click-through to gap detail And the Anchor Certificates view displays a 'Late' badge for late certificates and links to the associated gap And the Compliance export includes a 'Gaps' section listing window_start, window_end, type, detected_at, status And all gap and late badges meet WCAG AA contrast and have aria-labels for screen readers
Auditor Self-Verification of Late Anchor with Gap Link
Given an auditor opens the public verification URL for certificate_id=C When the auditor downloads the verification bundle Then it contains: anchor_certificate.json, merkle_proof.json, tsa_token.tsr, README.txt, and gap.json And verifying the TSA token via the documented RFC 3161 command succeeds against the TSA chain And verifying the merkle proof reproduces root_hash=R matching the certificate And gap.json includes gap_id, subtype, window_start, window_end, and a public URL to the gap record And the verification page shows status 'Verified (Late Anchor)' within 10 seconds
Notifications on Gap Detection and Catch-Up Events
Given organization notification settings are at defaults When a gap entry (snapshot or anchor) is created Then owners and auditors receive email and in-app notifications within 2 minutes containing gap_id, type, window, and remediation steps And when a catch-up anchor succeeds, the same recipients receive a 'Late Anchor issued' notification with certificate_id and gap_id within 2 minutes And when a catch-up anchor fails after 3 retries, a 'Catch-up failed' notification is sent and the gap remains open And notifications are deduplicated to max 1 per event type per gap_id and are recorded in the audit log

VerifyLink

Every receipt carries a short URL and QR code that opens a no-PII verification page. Auditors and partners can scan the QR code or paste the short URL to validate a receipt’s signatures and chain position against the live ledger—no special software, instant confidence.

Requirements

Per-Receipt Short URL & QR Embedding
"As a small nonprofit director, I want every receipt to include a scannable QR code and short link so that auditors and partners can quickly verify the receipt without needing my team’s help."
Description

Generate a unique, non-guessable short URL and corresponding QR code for every action receipt at the time of issuance. The short URL encodes a signed token that references the receipt’s immutable record in RallyKit’s append-only ledger without exposing PII. The QR code (with configurable error-correction level and size) is embedded into downloadable/printable receipts and included in confirmation emails/SMS. Links must be durable, globally unique, and resolvable via a high-availability shortener with HTTPS, HSTS, and automatic redirect to the VerifyLink page. Provide SDK/utility methods for services that issue receipts, and graceful fallbacks (human-readable short URL) when QR rendering isn’t supported.
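A minimal sketch of slug and token generation under these constraints. The base URL, key handling, and token layout are illustrative assumptions rather than a published RallyKit format; only the entropy and URL-safety requirements come from the criteria below.

```typescript
import { createHmac, randomBytes } from "node:crypto";

// Sketch: 128-bit CSPRNG slug plus an HMAC-signed token that references only
// the receipt's hash, never PII. Key management is out of scope here.
function generateShortLink(receiptIdHash: string, signingKey: Buffer, baseUrl = "https://example.org/v") {
  const slug = randomBytes(16).toString("base64url"); // 22 URL-safe chars, 128 bits of entropy
  const payload = `${slug}.${receiptIdHash}`;
  const signature = createHmac("sha256", signingKey).update(payload).digest("base64url");
  return {
    short_url: `${baseUrl}/${slug}`,
    token: `${payload}.${signature}`, // stored server-side and resolved by the shortener
  };
}
```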

Acceptance Criteria
Unique, Non-Guessable Short URL at Receipt Issuance
Given a new action receipt is created When the system generates the short URL Then the URL slug is produced by a CSPRNG with at least 128 bits of entropy And the slug is URL-safe (A–Z, a–z, 0–9, -, _) and 22–32 characters long And the slug is globally unique across all existing receipts And generating 100,000 slugs in a test run yields 0 collisions Given the same receipt’s link is requested again within 24 hours When the link is generated Then the identical previously issued short URL is returned (idempotent)
Signed, PII-Free Token and Tamper Detection
Given a valid short URL token When the verification service validates it Then the signature validates against the platform public key And the token references the immutable receipt record (by receipt_id hash and/or ledger position) And neither the token nor the verification response contains PII (name, email, phone, address) Given any character of the token is altered When the verification endpoint is called Then the request is rejected with HTTP 400 Invalid Signature And no receipt metadata is returned
HTTPS, HSTS, and Redirect via High-Availability Shortener
Given a short URL is requested over HTTPS When the shortener processes the request Then it returns a 301/302 redirect to the VerifyLink page And p95 redirect latency in staging is <= 600 ms And TLS protocol version is >= 1.2 And the response includes Strict-Transport-Security with max-age >= 15552000, includeSubDomains, preload Given the short URL is requested over HTTP When the request is received Then it redirects to the HTTPS URL before any content is served Given the shortener is degraded or backend is unavailable When a redirect cannot be fulfilled Then the service returns HTTP 503 with a Retry-After header and logs the incident
Configurable QR Code Generation
Given a receipt is issued without QR overrides When the QR code is generated Then the default size is 256x256 px with a 4-module quiet zone And the default error-correction level is Q Given qr_size is set between 128 and 1024 and ec_level is one of {L, M, Q, H} When overrides are provided via API/SDK Then the QR is generated using the provided parameters Given 100 sample receipts When their QR codes are decoded by ZXing Then 100% decode successfully and the content exactly matches the HTTPS short URL
Embedding in Receipts, Emails, and SMS
Given a downloadable/printable PDF receipt is generated When rendered Then it contains the QR image at >= 25 mm on paper at 300 DPI And the human-readable short URL printed directly beneath it Given a confirmation email is sent When opened in a standards-compliant email client Then it displays the QR image inline And includes the clickable short URL in text And provides alt text describing the QR Given a confirmation SMS is sent When received Then it includes the clickable short URL
SDK/Utility Methods for Receipt Issuers
Given a service integrates the SDK When calling generateReceiptLink(receipt_id_hash, options) Then it returns { short_url, token, qr_png_base64 } within 300 ms p95 in staging And validates inputs and returns 400-equivalent errors for invalid parameters And supports overrides for QR size and error-correction level Given the SDK is installed When its test suite runs Then all unit tests pass and coverage for link/QR generation paths is >= 90% Given the SDK quickstart guide When a developer follows the steps Then they can generate a short URL and QR in <= 10 lines of code
Graceful Fallback When QR Rendering Is Unavailable
Given an email client that blocks images When the confirmation email is opened Then the human-readable short URL is visible And a text fallback notice is shown where the QR would appear Given the QR generation service fails When a receipt is issued Then the system still embeds the human-readable short URL And logs the QR failure with a correlation ID And retries QR generation asynchronously without delaying receipt delivery
Public No-PII Verification Page
"As an auditor, I want to open a verification page that instantly confirms a receipt’s authenticity without exposing personal data so that I can complete reviews quickly and safely."
Description

Deliver a mobile-first, public web page that accepts a scanned QR code or pasted short URL, resolves the token, retrieves the receipt’s proof bundle, and displays a clear human-readable validation outcome (Valid, Invalid, Revoked, Expired, Not Found). The page must never display or log PII; it surfaces only minimal metadata necessary for trust (receipt ID suffix, issuance timestamp, campaign/bill identifier, and chain position fingerprint). It explains validation checks at a glance and provides a link to RallyKit’s verification documentation. Implement fast-load (<2.5s at 4G), WCAG 2.1 AA accessibility, localization support, and responsive design. No login or special software required.
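The outcome logic can be summarized as the precedence rule in the acceptance criteria (Revoked > Invalid > Expired, with Not Found applying only when no bundle resolves at all). The sketch below assumes a hypothetical ProofBundle shape returned by the verification API.

```typescript
// Illustrative outcome mapping; ProofBundle field names are assumptions.
type Outcome = "Valid" | "Invalid" | "Revoked" | "Expired" | "Not Found";

interface ProofBundle {
  signatureValid: boolean;
  chainPositionValid: boolean;
  revoked: boolean;
  expiresAt?: string; // ISO 8601, present only for test/sample receipts
}

function verificationOutcome(bundle: ProofBundle | null): Outcome {
  if (!bundle) return "Not Found"; // token did not resolve to any receipt
  if (bundle.revoked) return "Revoked";
  if (!bundle.signatureValid || !bundle.chainPositionValid) return "Invalid";
  if (bundle.expiresAt && Date.parse(bundle.expiresAt) < Date.now()) return "Expired";
  return "Valid";
}
```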

Acceptance Criteria
QR and Short URL Token Resolution
Given a valid receipt token embedded in a RallyKit short URL or QR code, when a user opens the verification page by scanning or pasting the link, then the client resolves the token and retrieves the proof bundle over HTTPS without requiring login. Given a valid token, when resolution occurs, then no more than 2 network requests are made to the verification API and a 200 response with the proof bundle is received. Given a malformed or unknown token, when resolution is attempted, then the server responds 404 and the page shows the Not Found outcome without revealing token internals. Given any token, when the page is accessed in a private/incognito window, then verification still completes without cookies or special software. Given a network failure, when the user retries, then the page re-attempts resolution and surfaces a non-PII error message with a retry control.
Validation Outcomes Rendering
Given a proof bundle that passes signature and chain-position checks and is neither revoked nor expired, when displayed, then the page shows the status label "Valid" exactly and a success icon with aria-label "Status: Valid". Given a bundle that fails signature or chain-position checks, when displayed, then the page shows "Invalid" exactly and an error icon with aria-label "Status: Invalid". Given a bundle marked as revoked, when displayed, then the page shows "Revoked" exactly with an appropriate warning icon and aria-label "Status: Revoked". Given a bundle with an expiration timestamp in the past, when displayed, then the page shows "Expired" exactly with an aria-label "Status: Expired". Given a token that cannot be resolved, when displayed, then the page shows "Not Found" exactly with an aria-label "Status: Not Found". Given multiple failing conditions, when rendering status, then precedence is Revoked > Invalid > Expired > Not Found. Given any status, when displayed, then a concise checklist of validation checks (Signature, Chain position, Revocation, Expiration) with pass/fail indicators is visible along with a persistent link labeled "Learn about verification" that opens the configured verification documentation URL in a new tab.
No PII Display or Logging
Given any verification flow, when the page renders, then the DOM, visible text, URLs/query params, local/session storage, cookies, and analytics payloads contain no personally identifiable information (PII) from the receipt or user. Given server-side access and application logs at INFO level, when verification requests are processed, then no PII fields are logged and tokens are truncated to the last 6 characters. Given client console logs in production, when the page runs, then no PII is logged and debug logging is disabled. Given synthetic receipts seeded with unique PII markers during tests, when verification completes, then zero occurrences of those markers exist in client logs, server logs, network payloads, or rendered UI.
Minimal Metadata Only
Given a successful verification, when metadata is displayed, then only the following are shown: receipt ID suffix (last 6 characters), issuance timestamp, campaign/bill identifier, and chain position fingerprint (e.g., hash suffix) — nothing else. Given additional fields present in the proof bundle, when rendering, then they are not displayed, not embedded in data-* attributes, and not exposed via page source. Given the page URL is copied or shared, when inspected, then it contains only the short URL/token and no appended metadata or PII-bearing parameters.
Performance and Fast Load on 4G
Given a cold load on a mid-tier mobile device under simulated 4G (≈400 ms RTT, 1.6 Mbps), when loading the verification page, then Largest Contentful Paint ≤ 1.8 s and Time to Interactive ≤ 2.5 s at p95 across 20 runs. Given the initial render, when measured, then the total transfer size of critical resources is ≤ 300 KB gzipped and total JavaScript is ≤ 150 KB gzipped. Given repeat visits, when measured, then LCP ≤ 1.0 s at p95 due to effective caching and resource hints. Given API latency within SLA (p95 ≤ 800 ms), when fetching the proof bundle, then the verification status renders above the fold without user scroll.
WCAG 2.1 AA Accessible Verification
Given keyboard-only navigation, when traversing the page, then all interactive elements are reachable in logical order with visible focus indicators and no keyboard traps. Given screen readers (NVDA, JAWS, VoiceOver, TalkBack), when the status is determined, then an ARIA live region announces the verification outcome and its brief explanation. Given color contrast testing, when evaluated, then text contrast ratios are ≥ 4.5:1 and essential non-text UI elements are ≥ 3:1. Given a 320 px viewport and up to 200% zoom, when reflow occurs, then content remains usable without horizontal scrolling and without loss of information. Given images and icons, when audited, then each has appropriate alt text or aria-hidden as applicable, and controls have accessible names. Given error or Not Found states, when presented, then programmatic error identification and guidance are provided to assistive technologies.
Localization and Responsive Mobile-First
Given the Accept-Language header set to a supported locale (e.g., en, es), when the page loads, then all UI strings and timestamps are localized accordingly; for unsupported locales the page falls back to English. Given localization coverage tests, when executed, then 100% of user-visible strings are sourced from translation files with no hardcoded strings detected. Given viewport widths of 320, 375, 768, 1024, and 1440 pixels in portrait and landscape, when rendered, then no horizontal scrollbar appears and primary actions remain visible without scrolling. Given touch devices, when interacting, then all tappable controls have a minimum hit area of 44×44 px and sufficient spacing to prevent accidental activation. Given prefers-color-scheme settings, when detected, then the page applies light/dark themes that maintain all required contrast ratios.
Ledger Signature & Chain Validation
"As a compliance officer, I want cryptographic validation of a receipt’s signature and chain position so that I can trust the verification result without manual cross-checks."
Description

Implement a verification service that validates the receipt’s digital signature, verifies the integrity of its proof bundle, and confirms the receipt’s position within the live append-only ledger. The service recomputes hashes, checks signature validity against the current public keys, and verifies the chain linkage/anchor at the reported height or checkpoint. It returns a normalized result with reason codes and machine-readable fields for the UI. Handle reorgs or checkpoint updates, provide deterministic outcomes, and expose an internal API used by the VerifyLink page. Include observability (metrics, tracing) without capturing PII.
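A sketch of the normalized result and the hash-recomputation step, assuming the signature and chain checks are performed upstream and passed in as inputs; the catch-all "UNVERIFIED" reason code is an assumption, while the other codes mirror the acceptance criteria.

```typescript
import { createHash, randomUUID } from "node:crypto";

interface VerifyResult {
  version: string;
  requestId: string;
  determinedAt: string;
  valid: boolean;
  signatureValid: boolean;
  proofValid: boolean;
  chainStatus: "confirmed" | "pending" | "orphaned" | "unverified";
  anchorHeight: number | null;
  checkpointId: string | null;
  reasonCodes: string[];
}

function buildResult(
  content: Buffer,
  claimedHash: string,
  signatureValid: boolean,
  chainStatus: "confirmed" | "pending" | "orphaned" | "unverified",
  anchorHeight: number | null,
  checkpointId: string | null
): VerifyResult {
  const recomputed = createHash("sha256").update(content).digest("hex");
  // Per the criteria, a failed signature also reports proofValid = false.
  const proofValid = signatureValid && recomputed === claimedHash;
  const valid = proofValid && chainStatus === "confirmed";
  const reasonCodes: string[] = [];
  if (!signatureValid) reasonCodes.push("INVALID_SIGNATURE");
  if (signatureValid && recomputed !== claimedHash) reasonCodes.push("PROOF_MISMATCH");
  if (chainStatus === "orphaned") reasonCodes.push("CHAIN_REORG_DETECTED");
  if (valid) reasonCodes.push("OK");
  if (reasonCodes.length === 0) reasonCodes.push("UNVERIFIED"); // assumed catch-all
  return {
    version: "1",
    requestId: randomUUID(),
    determinedAt: new Date().toISOString(),
    valid,
    signatureValid,
    proofValid,
    chainStatus,
    anchorHeight: chainStatus === "confirmed" ? anchorHeight : null,
    checkpointId,
    reasonCodes,
  };
}
```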

Acceptance Criteria
Happy Path: Valid Receipt Verified Against Live Ledger
Given a receipt with a valid digital signature over content C and a proof bundle P anchored at ledger height H present in the current checkpoint When the internal verification endpoint is invoked with the receipt token or payload Then the service recomputes hash(C) and verifies the signature against the current public key set; result.signatureValid = true And it validates P and the chain linkage to anchor height H; result.proofValid = true; result.chainStatus = "confirmed"; result.anchorHeight = H And it returns HTTP 200 with normalized JSON: {version, requestId, determinedAt, valid=true, signatureValid=true, proofValid=true, chainStatus, anchorHeight=H, checkpointId, reasonCodes=["OK"]} And the same input within the same ledger state yields an identical response (excluding requestId and determinedAt)
Invalid Signature: Receipt Rejected with Precise Reason Codes
Given a receipt where the provided signature does not verify under any current valid public key When the internal verification endpoint processes the request Then result.signatureValid = false; result.valid = false And result.proofValid = false; result.chainStatus = "unverified"; result.anchorHeight = null And reasonCodes includes "INVALID_SIGNATURE" and excludes "OK" And the service returns HTTP 200 with the normalized response and no receipt payload echoed back
Proof Integrity: Tampered Proof Bundle Detected
Given a receipt whose proof bundle P has been altered such that the recomputed content hash or merkle/path root does not match the claimed anchor When the internal verification endpoint validates the submission Then result.signatureValid may be true, but result.proofValid = false; result.valid = false And result.chainStatus = "unverified"; result.anchorHeight = null And reasonCodes includes "PROOF_MISMATCH" (and excludes "OK") And HTTP 200 is returned with the normalized response containing reasonCodes, without any PII
Chain Position Under Reorg: Deterministic Reclassification
Given a receipt previously confirmed at height H where the current checkpoint no longer includes the anchor due to a chain reorg When the internal verification endpoint is called after the reorg Then result.chainStatus = "orphaned"; result.valid = false; result.proofValid may be true; result.anchorHeight = null And reasonCodes includes "CHAIN_REORG_DETECTED" And for identical input under an unchanged checkpoint, the response is deterministic and identical (excluding requestId and determinedAt) And when a subsequent checkpoint includes a new anchor at height H', a new verification returns chainStatus = "confirmed", anchorHeight = H', and reasonCodes includes "RECONFIRMED"
Key Rotation and Revocation Handling
Given a trust store with key K1 valid during [t1_from, t1_to] and key K0 revoked at t0_revoke, and a receipt with keyId and signedAt timestamp t When the internal verification endpoint validates the signature Then if keyId = K1 and t ∈ [t1_from, t1_to], result.signatureValid = true and reasonCodes includes "KEY_TRUSTED" And if keyId = K0 and t ≥ t0_revoke, result.signatureValid = false and reasonCodes includes "KEY_REVOKED" And if keyId is not found in the trust store, result.signatureValid = false and reasonCodes includes "KEY_NOT_FOUND" And in all failure cases, result.valid = false
Internal API Contract: Normalized, Machine-Readable Response Without PII
Given any request to the internal verification endpoint When the service returns a response Then the JSON schema includes: version:string, requestId:string, determinedAt:ISO-8601 string, valid:boolean, signatureValid:boolean, proofValid:boolean, chainStatus ∈ {"confirmed","pending","orphaned","unverified"}, anchorHeight:integer|null, checkpointId:string|null, reasonCodes:array<string> (non-empty) And reasonCodes ordering is deterministic for identical inputs and ledger state And the payload contains no supporter PII and does not echo raw receipt content; only opaque identifiers and hashes are returned And the endpoint returns application/json; charset=utf-8
Observability: Metrics and Tracing Emitted Without Sensitive Data
Given the verification service is handling requests When metrics are scraped and traces are collected Then counters exist: verify_requests_total labeled by outcome (valid/invalid), failure_reason (primary reason code or "none"), and http_status, with bounded label cardinality And a histogram verify_latency_ms is emitted with p95 latency ≤ 800ms under nominal load, and spans include traceId with parent-child relationships across the verification pipeline And logs and traces contain no PII nor raw receipt payloads; only opaque identifiers and hashes And failures (timeouts, provider errors) emit error counters and reasonCodes like "LEDGER_UNAVAILABLE" without leaking sensitive data
Revocation & Expiry Handling
"As a program manager, I want to revoke compromised or test receipts so that external verifications don’t misrepresent our records."
Description

Support revoking individual receipts and setting optional expirations for test/sample receipts. Maintain a signed revocation list and ensure VerifyLink reflects revocation/expiry instantly with human-readable reasons. Preserve tamper-evident history and do not delete ledger entries; instead mark state transitions. Provide admin UI hooks and API endpoints for revocation actions with appropriate authorization and audit trails (no PII).
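One possible shape for the signed revocation list, assuming Ed25519 signing over the canonical JSON of the versioned entries; the key type, canonicalization, and publish flow are assumptions, while the monotonic version and no-PII entries come from the criteria below.

```typescript
import { sign, KeyObject } from "node:crypto";

interface RevocationEntry {
  receipt_id: string; // opaque identifier, no PII
  revoked_at: string; // ISO 8601 UTC
  reason_code: string; // from the allowed reason-code list
}

interface RevocationList {
  version: number; // monotonic: incremented on every publish
  entries: RevocationEntry[];
  signature: string; // base64url Ed25519 signature over version + entries
}

// Appends new revocations and re-signs the list; entries are never removed.
function publishRevocationList(
  prev: RevocationList,
  added: RevocationEntry[],
  signingKey: KeyObject
): RevocationList {
  const next = { version: prev.version + 1, entries: [...prev.entries, ...added] };
  const payload = Buffer.from(JSON.stringify(next));
  const signature = sign(null, payload, signingKey).toString("base64url");
  return { ...next, signature };
}
```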

Acceptance Criteria
VerifyLink shows revoked status immediately after admin revokes a receipt
Given an existing valid receipt and an authorized admin revokes it with a required reason via UI or API When the VerifyLink page is opened via its short URL or QR code Then within 2 seconds the page shows Status=Revoked, displays the human-readable reason, and shows the revocation timestamp And the receipt’s signature and chain position still validate against the live ledger And the HTTP response is 200 with Cache-Control: max-age<=5 and an ETag that changes on state transition And the revocation appears on the public signed revocation list and the page links to that entry
Expiry handling for test/sample receipts on VerifyLink
Given a receipt flagged as test/sample with an explicit expiry timestamp When VerifyLink is opened before the expiry timestamp Then the page shows Status=Valid and displays “Expires <timestamp>” When the same link is opened after the expiry timestamp Then the page shows Status=Expired with the reason “Test/sample receipt expired” and the expiration timestamp And the expiration is recorded as a state transition on the ledger (append-only) and the original receipt entry remains And attempts to set an expiry on a non-test/sample receipt return HTTP 400 with error code INVALID_EXPIRY_TARGET and no state change
Public signed revocation list integrity and accessibility
Given any receipt is revoked When the revocation list endpoint is queried Then within 2 seconds it returns a new version that includes the receipt ID, revocation timestamp, and reason code, contains no PII, and is signed by the current ledger signing key And the signature validates against the published public key and the list has a monotonic version number And the endpoint supports pagination and returns 200 within 500 ms for lists under 10,000 entries
Tamper-evident history preserved with no deletions
Given a receipt undergoes a revoke or expiry state change When the ledger is queried for that receipt’s history Then a new state transition entry is present with timestamp, chain position, and hash linking, while the original issuance entry remains unchanged And any attempt to delete or hard-remove a receipt or ledger entry via API returns 405/Forbidden and is logged And a read-only audit endpoint exposes the ordered state transitions and their hashes with no PII
Admin UI and API revocation with authorization and audit trail (no PII)
Given an authenticated user with permission receipts.revoke submits a revocation with receipt ID and a reason code When the request is confirmed (UI) or includes an Idempotency-Key (API) Then the system appends a revocation state transition, requires a human-readable reason derived from an allowed reason code list (no free-text), and returns 200 with revocation ID And an audit record is created with actor identifier, timestamp, IP, receipt ID, reason code, and outcome, containing no PII about supporters And requests by users without receipts.revoke return 403 and are audited
Concurrency and idempotency for revocation actions
Given multiple concurrent revocation requests target the same receipt with the same Idempotency-Key When the requests are processed Then exactly one revocation state transition is appended to the ledger and all callers receive the same revocation ID and already_revoked=false When subsequent revocation requests occur after the receipt is already revoked (with any Idempotency-Key) Then the API returns 200 with already_revoked=true and no new ledger entries are created And VerifyLink consistently shows Status=Revoked for that receipt
Abuse Protection & Availability
"As a security-conscious admin, I want the verification service to stay fast and available even under load or attacks so that auditors always get reliable results."
Description

Protect the VerifyLink endpoint from abuse while maintaining frictionless access. Add rate limiting per IP/ASN, burst tolerance, and soft challenges (e.g., proof-of-work or CAPTCHA) once thresholds are exceeded. Serve static assets via CDN, enable geo-redundant hosting, and implement health checks and circuit breakers for the ledger backend. Ensure >99.95% availability and graceful degradation with informative error states. Log only operational telemetry free of PII.
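A sketch of the per-IP sliding-window check using the thresholds in the criteria below (60 requests/minute normal, 30 additional burst, hard throttle beyond 90). The in-memory Map is only for illustration; a shared store such as Redis would back this in production.

```typescript
// Illustrative sliding-window limiter; not a production rate limiter.
type Decision = "allow" | "allow_burst" | "throttle";

const WINDOW_MS = 60_000;
const hits = new Map<string, number[]>(); // ip -> request timestamps within the window

function checkRate(ip: string, now = Date.now()): Decision {
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(ip, recent);
  if (recent.length <= 60) return "allow";
  if (recent.length <= 90) return "allow_burst";
  return "throttle"; // caller responds 429 with Retry-After until the window resets
}
```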

Acceptance Criteria
IP and ASN Rate Limiting with Burst Tolerance
- Given a client IP has made <= 60 requests in a rolling 60-second window, When it requests the VerifyLink endpoint, Then responses are 200 with no throttling.
- Given the same IP sends up to 30 additional burst requests within the same 60-second window, When processed, Then >= 95% return 200 with p95 latency <= 500 ms and no challenge.
- Given the IP exceeds 90 requests within 60 seconds, When further requests arrive, Then responses are 429 with a Retry-After header between 5 and 30 seconds until the window resets.
- Given an ASN accumulates > 5,000 requests in a rolling 60-minute window, When subsequent requests from the ASN arrive, Then a soft challenge is required before proceeding.
- Given configuration changes to thresholds via environment variables, When updated, Then new limits take effect within 5 minutes without redeploy.
Soft Challenge Activation and Success Flow
- Given a soft challenge is required for an IP or ASN, When the user completes the challenge successfully within 60 seconds, Then the IP is whitelisted from further challenges for 30 minutes unless limits are exceeded again.
- Given the soft challenge is presented, When accessed via assistive technology, Then an accessible alternative (e.g., audio CAPTCHA) is available and passes WCAG 2.1 AA.
- When the challenge is presented, Then the HTTP status is 200 with the challenge page; When the challenge is failed, Then the response is 403 with a retry option; When solved, Then the subsequent verification request returns 200.
- The challenge page and assets load with p95 total load time <= 1,000 ms globally.
- No request parameters (including receipt short codes) are rendered or logged by the challenge component.
CDN Static Asset Delivery and Global Performance
- Given static assets for VerifyLink are requested from global regions, When fetched via CDN, Then p95 TTFB <= 200 ms and p95 total load time <= 1,000 ms.
- Assets are served with Cache-Control: public, max-age=600, stale-while-revalidate=60 and ETag set.
- When the origin is unavailable, Then the CDN serves stale cached assets for up to 10 minutes and returns 200, and a banner indicates degraded mode.
- TLS 1.2+ enforced, HSTS enabled, and HTTP->HTTPS redirects occur in < 100 ms.
Geo-Redundant Hosting and Health-Checked Failover
- VerifyLink origin is deployed in at least two regions with health checks every 10 seconds; three consecutive failures mark a region unhealthy.
- When a region is marked unhealthy, Then traffic is shifted to a healthy region within 60 seconds, with no more than 1% 5xx during transition.
- DNS or edge load balancer uses health-based routing with TTL <= 30 seconds.
- Configuration and static content are replicated across regions with RPO = 0.
Ledger Circuit Breaker and Graceful Degradation
- Given 5 ledger read timeouts or 5xx errors occur within 30 seconds, When the threshold is exceeded, Then the circuit breaker opens for 60 seconds and no ledger calls are attempted.
- While the breaker is open, Then the VerifyLink endpoint returns 503 with Retry-After: 60 and an informative, non-PII message; p95 response time remains <= 300 ms.
- After 60 seconds, When a half-open probe succeeds, Then normal operation resumes; When it fails, Then the breaker re-opens for another 60 seconds.
- All error responses include an x-correlation-id header and display a short trace ID on the page.
Availability SLO and Monitoring 99.95%
- Over a calendar month, synthetic monitors from at least 3 regions every 60 seconds show availability >= 99.95% for the VerifyLink endpoint; downtime is counted only for 5xx responses or timeouts > 5 seconds.
- Total monthly downtime <= 21 minutes 54 seconds.
- Alerts fire within 2 minutes when the 30-minute burn rate projects an SLO breach.
- Public status page reflects incidents within 15 minutes.
Operational Telemetry Without PII
- Application and edge logs contain only allowlisted fields: timestamp, anonymized client IP (hashed or /24), ASN, user agent, route template, response code, latency, challenge state, rate-limit decision, correlation ID.
- No receipt short codes, query strings, or request bodies are logged; verification inputs are redacted or hashed client-side before any logging.
- Automated log scans during load tests detect zero PII matches (emails, phone numbers, names) and zero receipt identifiers.
- Telemetry retention <= 30 days with secure access controls and audit logs.
Custom Domain & Branding Controls
"As a partner organization, I want VerifyLink to use our domain and branding so that recipients and auditors immediately trust the verification page."
Description

Allow organizations to use a custom verified domain or subdomain for short links and the verification page, and to apply light branding (logo, colors, footer text) to increase trust. Provide DNS- and HTTP-based domain verification and automatic TLS via ACME, falling back to RallyKit’s default domain if the custom domain is misconfigured. Ensure branding elements do not reveal PII and do not alter the verification semantics.
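A sketch of the two ownership checks, assuming a TXT record containing the token and a well-known HTTP path; the path name and token format are illustrative assumptions, and production code would also need timeouts, retries, and audit logging. (fetch assumes Node 18+.)

```typescript
import { resolveTxt } from "node:dns/promises";

// Returns which verification method succeeded, or null if neither did.
async function verifyDomain(domain: string, token: string): Promise<"dns" | "http" | null> {
  try {
    const records = await resolveTxt(domain); // string[][] of TXT record chunks
    if (records.some((chunks) => chunks.join("") === token)) return "dns";
  } catch {
    // no usable TXT record; fall through to the HTTP method
  }
  try {
    // Hypothetical well-known path; served over plain HTTP before TLS is issued.
    const res = await fetch(`http://${domain}/.well-known/rallykit-verification.txt`);
    if (res.ok && (await res.text()).trim() === token) return "http";
  } catch {
    // neither method succeeded
  }
  return null;
}
```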

Acceptance Criteria
Domain Verification via DNS or HTTP
Given an organization enters a custom domain or subdomain to use for short links and the verification page And the system generates a unique verification token When the organization completes DNS-based verification by publishing the token in a TXT record or completes HTTP-based verification by hosting the token at a well-known path And the user clicks "Verify Now" Then the system validates the token and marks the domain as Verified within 60 seconds And only Verified domains can be set to Active And failed verification displays the failing method and record/path expected And domain ownership is exclusive: attempts by other organizations to verify the same domain are rejected with a clear error And all verification and status changes are recorded in the audit log with timestamp and actor
Automatic TLS via ACME
Given a domain is Verified When the organization sets the domain to Active Then RallyKit obtains a valid TLS certificate via ACME within 5 minutes and serves HTTPS on the custom domain And only TLS 1.2+ is accepted; insecure protocols/ciphers are disabled And certificates auto-renew at least 30 days before expiry without operator intervention And if issuance or renewal fails, the system does not serve an invalid certificate and records a critical alert and audit log entry And HTTP requests to the custom domain are redirected to HTTPS with 301
Fallback to Default Domain on Misconfiguration or Outage
Given a custom domain is Active And continuous health checks monitor DNS target, TLS validity, and HTTP reachability When health checks fail for 2 consecutive intervals (5 minutes total) or the domain becomes Unverified/Expired Then newly generated receipts and action pages use the RallyKit default domain for short links and verification URLs And requests reaching our infrastructure on the custom domain receive an HTTP 302 redirect to the equivalent default-domain URL And a banner warning is shown in the admin console until health is restored And all fallback activations/resolutions are recorded in the audit log
Branding Controls Without PII Exposure
Given an organization uploads a logo and sets brand colors and footer text for the verification page When a user opens a verification page for any receipt Then the page applies the logo, colors, and footer without including supporter PII (e.g., name, email, phone) in the DOM, URLs, or network requests And branding assets are served from static endpoints with cache-busted URLs that contain no receipt or user identifiers And footer text is limited to 140 characters and disallows HTML/script; images are PNG or SVG without scripts and <= 200 KB And the page meets WCAG 2.1 AA color contrast (>= 4.5:1 for text) and includes alt text for the logo
Verification Semantics Unchanged by Branding
Given a receipt code is verified on the default domain without branding When the same receipt is verified on a branded custom domain Then the signature validation result, chain position, and canonical verification JSON payload are identical (byte-for-byte SHA-256 hash match) And only presentation-layer elements (CSS, logo, color, footer) differ And HTTP status codes and cache headers for verification responses are unchanged And no additional third-party scripts or trackers are loaded due to branding
Custom Domain Applied to Short Links and Verification Pages
Given a custom domain is Active When a new receipt is issued Then the generated short link and QR code use the custom domain and resolve to the verification page within 500 ms server processing time And previously issued receipts continue to resolve via their original domain, with equivalence of verification results across domains And if the custom domain is later deactivated, existing short links on that domain respond with HTTP 302 to the default-domain verification URL And short link slugs remain unchanged when switching domains to preserve audit trails

Trace Diffs

Human-readable, side-by-side diffs for any action’s history show exactly what changed, who approved it, and why (reason codes). Technical hashes are collapsed into clear narratives you can export, cite in reports, and share during reviews without confusion.

Requirements

Unified Action History Model
"As a campaign director, I want all changes captured in a consistent, tamper-evident history so that I can reliably review what happened across any RallyKit asset."
Description

Capture every change to RallyKit entities (actions, scripts, targets, pages, automations) as immutable, append-only events normalized to a single schema (entity type/id, version, timestamp, actor/role, change set, approval info, reason code, comment, origin, correlation id). Persist a write-once event log with integrity verification, while surfacing a simple “Integrity: verified” indicator in the UI instead of raw hashes. Backfill existing records, map legacy fields, and expose internal APIs to query histories by entity, campaign, or timeframe. This foundation enables consistent diffs, approvals, exports, and audit-ready proof across the product.
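The normalized schema above might look like the following in TypeScript; field names follow the description and acceptance criteria, while the exact change_set and approval_info shapes are assumptions.

```typescript
// Sketch of the unified, append-only history event.
interface HistoryEvent {
  entity_type: "action" | "script" | "target" | "page" | "automation";
  entity_id: string;
  version: number;              // increments by 1 per event
  timestamp: string;            // server-side UTC, ISO 8601
  actor_id: string;
  actor_role: string;
  change_set: Record<string, { before: unknown; after: unknown }>; // changed fields only
  approval_info?: { approver_id: string; approved_at: string };    // present when applicable
  reason_code?: string;         // required for public-facing or approval-status changes
  comment?: string;
  origin: "ui" | "api" | "automation";
  correlation_id: string;       // UUIDv4
}
```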

Acceptance Criteria
Capture and Normalize Events Across Entities
Given a create, update, or approval occurs on any RallyKit entity (actions, scripts, targets, pages, automations) When the operation is committed Then exactly one append-only event is persisted to the unified history with fields: entity_type, entity_id, version (incremented by 1), timestamp (server-side UTC ISO-8601), actor_id, actor_role, change_set (JSON diff of changed fields only), approval_info (present when applicable), reason_code (required when public-facing content or approval status changes), comment (optional), origin (ui|api|automation), correlation_id (UUIDv4) And the event is retrievable by internal APIs within 1 second of commit And change_set contains no secrets, tokens, or PII marked as sensitive by data classification And attempts to omit a required reason_code or approval_info are rejected with a 4xx validation error
Integrity Indicator and Verification
Given an entity’s history is loaded in the UI When integrity verification over the write-once event log succeeds for all events in the history Then the UI displays “Integrity: verified” and no raw hashes are shown anywhere in the UI And the verification completes in ≤500 ms for histories of ≤500 events Given any event in the history fails integrity verification during the same check When the history is loaded Then the UI displays “Integrity: unverifiable” and the failing event ids are available via internal diagnostics API And the UI still hides raw hashes
Append-Only Immutability Enforcement
Given a persisted history event exists When any client (including admins and batch jobs) attempts to update or delete that event via storage or internal APIs Then the operation is rejected and logged with a 403/operation_not_permitted error code And no changes are made to the original event record Given a correction is needed When the system records a correction Then it is captured as a new append-only event referencing the prior version via correlation_id without altering prior events
Legacy Backfill and Field Mapping
Given a production dataset with pre-existing entities prior to the unified history release When the backfill job is executed Then 100% of existing entities (actions, scripts, targets, pages, automations) receive a baseline v1 history event And legacy fields are mapped as follows: updated_by→actor_id/actor_role, updated_at→timestamp, change_summary→comment, source→origin And the backfill is idempotent (re-running does not create duplicates) verified by unchanged event counts per entity_id And a completion report shows total entities processed, events written, and 0 mapping errors And any unmappable records are logged with entity_id and reason and do not block the job from completing
History Query APIs by Entity, Campaign, and Timeframe
Given internal consumers request history When calling GET /internal/history?entity_type={t}&entity_id={id} Then results include all events for the entity with fields: entity_type, entity_id, version, timestamp, actor_id, actor_role, change_set, approval_info, reason_code, comment, origin, correlation_id And events are ordered by version ascending by default and support sort=desc And pagination parameters (limit, cursor) return stable, non-duplicated pages Given a campaign context When calling GET /internal/history/search?campaign_id={cid}&from={ts1}&to={ts2} Then results include all events for entities in the campaign between ts1 and ts2 inclusive And the p95 response time is ≤300 ms for result sets ≤1,000 events
Diff and Export Foundation Consistency
Given two versions of the same entity are requested for comparison When calling GET /internal/history/{entity_type}/{entity_id}/diff?from_version={v1}&to_version={v2} Then the API returns a human-readable diff that enumerates changed fields with before/after values and includes actor_role, approver (if any), and reason_code And the diff format is identical across entity types Given an export is requested When calling POST /internal/history/export with filters (entity, campaign, timeframe, versions) Then the export returns CSV and JSON options containing normalized fields without cryptographic hashes And exports are reproducible: identical inputs yield byte-identical outputs
Human-Readable Side-by-Side Diffs
"As a grassroots organizer, I want a clear side-by-side view of what changed so that I can understand edits quickly without reading raw data or code-like diffs."
Description

Generate field-level diffs for text, rich text/Markdown, JSON, and structured fields with word- and sentence-aware highlighting. Provide side-by-side and inline views, with controls to collapse unchanged sections, toggle whitespace/case sensitivity, and switch between granular or summary modes. Support attachments and link changes with clear labels. Ensure fast rendering via server-side diff computation, pagination, and virtualized lists. Meet accessibility standards (keyboard navigation, ARIA roles, sufficient contrast) and embed the viewer wherever history appears in RallyKit.
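A minimal word-level diff sketch (longest-common-subsequence over whitespace-split tokens). The real viewer also handles sentence grouping, Markdown, JSON, and whitespace/case toggles, which this deliberately omits.

```typescript
type Op = { kind: "equal" | "insert" | "delete"; word: string };

function wordDiff(before: string, after: string): Op[] {
  const a = before.split(/\s+/).filter(Boolean);
  const b = after.split(/\s+/).filter(Boolean);
  // lcs[i][j] = length of the longest common subsequence of a[i..] and b[j..]
  const lcs = Array.from({ length: a.length + 1 }, () => new Array<number>(b.length + 1).fill(0));
  for (let i = a.length - 1; i >= 0; i--) {
    for (let j = b.length - 1; j >= 0; j--) {
      lcs[i][j] = a[i] === b[j] ? lcs[i + 1][j + 1] + 1 : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }
  // Walk the table to emit equal/delete/insert operations in order.
  const ops: Op[] = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    if (a[i] === b[j]) { ops.push({ kind: "equal", word: a[i] }); i++; j++; }
    else if (lcs[i + 1][j] >= lcs[i][j + 1]) { ops.push({ kind: "delete", word: a[i] }); i++; }
    else { ops.push({ kind: "insert", word: b[j] }); j++; }
  }
  while (i < a.length) ops.push({ kind: "delete", word: a[i++] });
  while (j < b.length) ops.push({ kind: "insert", word: b[j++] });
  return ops;
}
```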

Acceptance Criteria
Side-by-Side Diff for Rich Text/Markdown Fields
Given an action has two versions of a Markdown field with headings, emphasis, lists, and links, When the user opens the side-by-side diff, Then the left pane renders the previous version and the right pane renders the current version with formatting preserved in both panes. Given words are added, edited, and deleted within the same sentence, When shown in side-by-side mode, Then added words are highlighted as insertions and deleted words as deletions with word-level granularity and sentence-aware grouping that does not break inline formatting. Given contiguous unchanged paragraphs exceed five lines, When the user enables Collapse Unchanged, Then those blocks render as a single collapsed row labeled with the exact count of unchanged lines and an Expand control that reveals the content without page reload. Given the side-by-side diff is visible, When the user scrolls either pane, Then the opposite pane scrolls in sync to maintain aligned context.
JSON/Structured Fields Diff with Granular vs Summary and Ignore Toggles
Given a field contains nested JSON objects and arrays, When Granular mode is selected, Then changes are displayed at the property-path level (e.g., user.address.city) with before and after values and array index changes indicated. Given the same JSON input, When Summary mode is selected, Then only top-level keys with changes are listed with counts of added, removed, and modified child fields and an Expand control reveals granular details on demand. Given two versions differ only by whitespace in string values, When Ignore Whitespace is enabled, Then those values are not marked as changed. Given two versions differ only by letter casing in text values, When Ignore Case is enabled, Then those values are not marked as changed.
Toggle Between Inline and Side-by-Side Views
Given a diff is available for any supported field type, When the user selects Inline view, Then the viewer renders a single-column diff with in-line insertion and deletion highlights. Given the viewer is in Inline view, When the user switches to Side-by-Side view, Then the layout updates without a full page reload and preserves current scroll position and expanded/collapsed sections. Given a user has set a view preference, When navigating to a different field's diff within the same session, Then the previously selected view mode persists.
Attachments and Link Changes Representation
Given an attachment was added, removed, or replaced between versions, When viewing the diff, Then a labeled row "Attachment" displays the filename, file size, and a state badge of Added, Removed, or Changed. Given a link URL changed inside a rich text field, When viewing the diff, Then the before and after URLs are displayed as sanitized, clickable labels and the link text change is highlighted separately from the URL change. Given multiple attachments changed, When viewing the diff, Then each attachment change is shown on its own row in deterministic filename order.
Performance and Scalability of Diff Rendering
Given a history contains 1000 actions with up to 10 diffable fields each, When loading the history page, Then the first page of 50 records renders in under 2 seconds. Given the user scrolls through the history list, When new rows enter the viewport, Then virtualization keeps average frame render time under 16ms and browser memory usage under 300MB. Given a server-side diff request for a single field up to 100k characters, When processed under normal load, Then the API returns the diff payload in under 500ms at p90. Given pagination controls are used, When navigating to next or previous pages, Then only the requested page is fetched and rendered and total and current page indices are correct.
Accessibility: Keyboard Navigation and ARIA Compliance
Given the diff viewer has focus, When navigating using only the keyboard, Then all controls (view toggles, expand/collapse, pagination, ignore whitespace/case) are reachable in a logical tab order and operable with Enter and Space. Given a screen reader is active, When reading the diff viewer, Then ARIA roles, names, and states announce each change with its type (insertion, deletion, modification), position, and the count of changes. Given highlights and UI elements are displayed, When measured against WCAG 2.2 AA, Then text contrast is at least 4.5:1, non-text UI components and focus indicators are at least 3:1, and focus is visually apparent.
Embeddable Diff Viewer Across RallyKit History Surfaces
Given a history appears in an Action Detail page, a Campaign Timeline, and an Audit Report modal, When the diff viewer is embedded in each surface, Then it renders without layout overflow or content shifts within the host container. Given a host passes a record ID and field key, When the diff component mounts, Then it fetches and renders the corresponding diff without requiring additional host wiring beyond documented inputs. Given RallyKit supports light and dark themes, When the diff viewer renders, Then it inherits host theme tokens and maintains required contrast and legibility.
Approval Attribution & Reason Codes
"As a program manager, I want every approved change to show who approved it and why so that I can enforce policy and explain decisions during reviews."
Description

Require approver identity, role, and a reason code (with optional comment) for any publish-state or policy-scoped change. Display approval badges and reason summaries at the top of each diff and in timelines. Provide a manageable reason-code taxonomy per organization, with analytics on usage. Integrate with SSO/2FA for strong identity, and log delegation when approvals are performed on behalf of others. Block finalization if a required reason code is missing, ensuring every diff shows who approved it and why.
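A small sketch of the finalization guard implied above; the input shape and error messages are illustrative, not a defined RallyKit contract.

```typescript
// Returns validation errors; an empty array means finalization may proceed.
interface ApprovalInput {
  approver_id?: string;
  approver_role?: string;
  reason_code?: string;   // key from the org's active taxonomy
  comment?: string;       // optional, <= 500 chars
  delegate_of?: string;   // principal user id when approving on behalf of someone
}

function validateApproval(input: ApprovalInput, activeReasonCodes: Set<string>): string[] {
  const errors: string[] = [];
  if (!input.approver_id || !input.approver_role) errors.push("Approver identity and role are required");
  if (!input.reason_code) errors.push("Reason code is required");
  else if (!activeReasonCodes.has(input.reason_code)) errors.push("Reason code is not in the active taxonomy");
  if (input.comment && input.comment.length > 500) errors.push("Comment exceeds 500 characters");
  return errors;
}
```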

Acceptance Criteria
Publish-State Approval Requires Identity, Role, and Reason Code
- Given a draft action is being moved to a publish-state (Publish/Activate/Go Live), When an approver submits the change, Then a reason code from the organization's active taxonomy is required and must be selected before submission.
- Given no reason code is selected, When the approver attempts to finalize, Then the action is not published, an error "Reason code is required" is shown, and no approval record is created.
- Given a reason code is selected, When approval is submitted, Then the approval record persists approver full name, unique user ID, approver role at time of approval, UTC timestamp, reason code ID and label, and the optional comment (max 500 chars).
- Given the approval is saved, Then the publish-state transition is committed exactly once and linked to the approval record via a unique approval ID.
- Given the reason code is later deactivated, Then historical approvals continue to display the original reason code label and are not altered.
Policy-Scoped Change Approval Requires Identity, Role, and Reason Code
- Given a change is marked policy-scoped, When an approver submits the change, Then a reason code from the organization's active taxonomy is required and must be selected before submission.
- Given no reason code is selected, When the approver attempts to finalize via UI or API, Then the request is rejected with HTTP 400 and message "Reason code is required" and no approval record is created.
- Given a valid reason code is selected and an optional comment up to 500 chars is provided, When approval is submitted, Then the system persists approver identity, role at time of approval, UTC timestamp, reason code ID and label, and the optional comment.
- Given the change is approved, Then the policy-scoped update is committed only after the approval record is stored and linked to the change.
Approval Badges and Reason Summaries Display in Diff and Timelines
- Given an action with an approval is viewed in Trace Diffs, When the diff loads, Then the top-of-diff banner displays an approval badge showing "Approved by {Name} ({Role}) on {ISO 8601 UTC timestamp}" and "Reason: {Reason Code Label}" and includes the optional comment if present.
- Given the action has a timeline, When the timeline is rendered, Then an entry mirrors the same approval details (name, role, timestamp, reason code, optional comment).
- Given the user exports or prints the diff, When the export is generated, Then the approval badge details are included in the exported artifact.
- Given multiple approvals exist, When the diff is viewed, Then the most recent approval appears in the top banner and all approvals appear in the timeline in reverse-chronological order.
Organization Reason-Code Taxonomy Management
- Given an Org Admin opens Reason Codes settings, When they create a new reason code, Then they can set a unique key (<=32 chars, alphanumeric/underscore), label (<=80 chars), and description (<=240 chars) and save it active.
- Given a reason code exists, When the admin edits its label or description, Then changes are saved and audited without altering historical approvals.
- Given a reason code is in use, When the admin attempts to delete it, Then deletion is blocked with "Cannot delete a code with usage"; deactivation is allowed and prevents future selection while preserving history.
- Given multiple codes exist, When the admin reorders or toggles active/inactive, Then the approval UI reflects the new order and only active codes are selectable.
Reason Code Usage Analytics and Export
- Given approvals exist in the org, When a user opens Analytics > Reason Codes and selects a date range, Then the dashboard displays counts of approvals by reason code and by approver role, with totals matching the sum of underlying approval records for that range.
- Given a filter is applied (date range, approver role, action type), When the view refreshes, Then the counts update accordingly and zero counts are displayed as 0 (no blanks).
- Given the user exports the analytics, When Export CSV is clicked, Then a CSV is downloaded containing org_id, action_id, approval_id, approver_user_id, approver_role, reason_code_key, reason_code_label, comment, timestamp (UTC), and delegate_user_id (if any).
SSO and 2FA Enforcement for Approvals
- Given the organization has SSO/2FA configured, When a user attempts to approve, Then the user must be authenticated via the org's SSO and have completed a 2FA challenge in the current session; otherwise the approval is blocked and the user is prompted to complete authentication.
- Given an approval is completed, When the audit record is stored, Then it includes the IdP user identifier, 2FA method (e.g., TOTP/WebAuthn) and verification timestamp, along with the approver identity and role at time of approval.
- Given SSO is disabled for the org, When a local account without 2FA attempts to approve, Then the approval is blocked with "2FA required for approvals".
Delegated Approval Capture and Audit
- Given a delegate is approving on behalf of a principal, When the approval form is submitted, Then the delegate must select the principal and provide a justification comment (min 10 chars) and the system verifies an active delegation authorization covering the action scope.
- Given the authorization is valid, When the approval is recorded, Then the audit record stores delegate user ID, principal user ID, delegation ID, justification comment, and all standard approval fields (identity, role, timestamp, reason code).
- Given a delegated approval exists, When the diff and timeline are viewed, Then they display "Approved by {Delegate Name} on behalf of {Principal Name}" along with reason code and comment.
- Given analytics are viewed, When delegated approvals are included, Then a "Delegated" flag is available for filtering and is exported in CSV.
Exportable Narrative Reports
"As a nonprofit director, I want to export clear change narratives and citations so that I can include them in audit packets and board reports without extra formatting work."
Description

Transform diffs into readable narratives ("On 2025-08-12, Jordan Lee changed call script intro from X to Y due to Message Alignment—approved by A. Singh.") and export them to PDF, CSV, and DOCX with organization branding, timestamps, version IDs, and approval details. Provide secure, expiring share links and permission-aware access for reviewers. Include watermarks and page headers/footers suitable for audits and board packets. Generate citation permalinks that resolve to a specific version snapshot so exports can be verified later.
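A sketch of the narrative template quoted above; localization, richer value formatting, and reason-code label lookup are left out, and the NarrativeEvent field names are assumptions.

```typescript
interface NarrativeEvent {
  approvedAt: string;   // ISO 8601 approval timestamp
  actor: string;
  field: string;
  oldValue: string;
  newValue: string;
  reasonCode: string;   // human-readable reason code label
  approver: string;
}

// Renders the documented sentence pattern; en-CA formatting yields YYYY-MM-DD.
function toNarrative(e: NarrativeEvent, timeZone = "UTC"): string {
  const localDate = new Date(e.approvedAt).toLocaleDateString("en-CA", { timeZone });
  return `On ${localDate}, ${e.actor} changed ${e.field} from ${e.oldValue} to ${e.newValue} ` +
         `due to ${e.reasonCode}—approved by ${e.approver}.`;
}
```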

Acceptance Criteria
Generate Human-Readable Narrative From Diffs
Given an action history event with field name, old value, new value, actor, reason code, approver, approval timestamp When a user generates a narrative report for that action Then the narrative sentence includes local date, actor full name, field name, old value, new value, reason code text, and approver full name And the sentence follows the pattern: On {local_date}, {actor} changed {field_name} from {old_value} to {new_value} due to {reason_code}—approved by {approver} And technical hashes and internal IDs are excluded from the narrative body and only appear in a metadata section And narratives are ordered by approval timestamp ascending And multi-line values preserve line breaks in the narrative
Export Narrative Report to PDF, CSV, and DOCX With Branding
Given a set of generated narratives for one or more actions When the user exports as PDF Then the PDF includes organization branding (logo and name) in the header, generated timestamp with timezone, report title, version snapshot ID, approval details, and a watermark on every page And the PDF paginates correctly with no truncated content and is downloadable and viewable in standard PDF readers When the user exports as CSV Then the CSV contains columns: Timestamp, Actor, Field, Old Value, New Value, Reason Code, Approver, Version ID, Action ID And the CSV conforms to RFC 4180 (UTF-8, quoted fields when needed, CRLF line endings) When the user exports as DOCX Then the DOCX uses organization styles (heading, body), includes the same header/footer elements as the PDF, and opens in current MS Word and Google Docs without formatting loss And all exported files follow the naming pattern: {OrgSlug}_TraceDiffs_{YYYYMMDD_HHMMssZ}.{ext}
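A hedged sketch of the export file naming and an RFC 4180-compatible CSV writer using Python's standard csv module; the column list mirrors the criteria above, while the function names are illustrative.

```python
import csv
import io
from datetime import datetime, timezone

CSV_COLUMNS = ["Timestamp", "Actor", "Field", "Old Value", "New Value",
               "Reason Code", "Approver", "Version ID", "Action ID"]

def export_filename(org_slug: str, ext: str, now: datetime | None = None) -> str:
    """Builds the {OrgSlug}_TraceDiffs_{YYYYMMDD_HHMMssZ}.{ext} name from the criteria."""
    now = now or datetime.now(timezone.utc)
    return f"{org_slug}_TraceDiffs_{now:%Y%m%d_%H%M%S}Z.{ext}"

def rows_to_csv(rows: list[dict]) -> bytes:
    """UTF-8 output with CRLF line endings; the csv module quotes fields when needed (RFC 4180)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=CSV_COLUMNS, lineterminator="\r\n")
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue().encode("utf-8")
```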
Create Secure Expiring Share Links
Given a completed export When a permitted user creates a share link with an expiration of 72 hours Then the system generates an unguessable token (>=128 bits entropy) and a URL containing only that token And the link resolves to the exact export for any holder with required permissions until expiration And after expiration (with up to 5 minutes clock skew tolerance) requests return 410 Gone And the export owner or org admin can revoke the link, after which requests return 410 Gone within 60 seconds And all accesses via the share link are logged with timestamp, IP, and user or anonymous marker
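One way the share-link token and expiry rules could be implemented: token_urlsafe(32) yields 256 bits of entropy, comfortably above the 128-bit floor, and the status codes follow the criteria above. The helper names and function shape are assumptions.

```python
import secrets
from datetime import datetime, timedelta, timezone

CLOCK_SKEW = timedelta(minutes=5)  # tolerance stated in the criteria

def new_share_token() -> str:
    # 32 random bytes = 256 bits of entropy; the URL carries only this token
    return secrets.token_urlsafe(32)

def share_link_status(expires_at: datetime, revoked: bool,
                      now: datetime | None = None) -> int:
    """HTTP status a share-link request would receive (permission checks omitted)."""
    now = now or datetime.now(timezone.utc)
    if revoked or now > expires_at + CLOCK_SKEW:
        return 410  # Gone: expired or revoked
    return 200
```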
Permission-Aware Access Enforcement
Given organization roles with Exporter and Reviewer permissions When a user without Exporter permission attempts to generate an export Then the request is denied with 403 Forbidden and no file is produced When a share link is created for reviewers only Then users with Reviewer (or higher) in the same org can access; users outside the org or without the role receive 403 Forbidden pre-expiration And if SSO is required by the org, the link enforces SSO login before access And revoking a user's org access immediately prevents further access via previously issued links
Audit-Ready Watermarks and Page Headers/Footers
Given organization-configured header, footer, and watermark settings When exporting to PDF and DOCX Then each page header contains organization name, report title Trace Diffs Narrative, generated timestamp with timezone, and snapshot/version ID And each page footer contains page X of Y and the export filename And a diagonal watermark appears on every page at 20–30% opacity with the configured text (e.g., Confidential or Board Packet) And long narrative lines wrap without overlapping headers/footers, and tables auto-repeat headers across pages
Citation Permalink to Immutable Version Snapshot
Given an export generated from a specific version snapshot ID When the system creates a citation permalink Then the permalink resolves to a read-only snapshot whose content matches the export exactly And the permalink page displays the version snapshot ID and SHA-256 checksum of the export And subsequent changes to the underlying action history do not alter the permalink content And the permalink returns 404 Not Found if the snapshot ID does not exist and 405 Method Not Allowed for mutation attempts And the export embeds the permalink URL so reviewers can verify later
Timestamp and Version ID Consistency
Given an organization default timezone setting When narratives and exports are generated Then all timestamps display in ISO 8601 with explicit offset and the organization timezone label And events are sorted by change approval timestamp ascending throughout narrative, CSV rows, and DOCX/PDF tables And each event includes a version ID and the top-level export includes a snapshot/version ID that matches the permalink And if the org has no timezone configured, UTC is used and indicated
Redaction & Role-Based Visibility
"As a compliance officer, I want sensitive details redacted in diffs by default so that we can share histories safely without exposing PII."
Description

Mask PII and sensitive fields in diffs based on role policies (e.g., donor emails, phone numbers, internal notes), with visible placeholders and tooltips indicating redaction rules. Ensure redactions propagate to exports and share links. Provide a request-and-grant workflow to temporarily unmask specific elements with a logged justification, and record redaction actions in the audit trail. Offer admin controls to define field-level policies and organizational defaults.

Acceptance Criteria
Role-Based Redaction in Trace Diff View
Given an Organizer role with policies masking donor.email (full), donor.phone (partial last 4 only), and hiding internal_notes And a Trace Diff that contains those fields across before/after states When the user opens the Trace Diff Then masked fields display standardized placeholders (e.g., "[redacted: email]", "***-***-1234", "[redacted: internal_notes]") in both panes And each placeholder shows a tooltip containing the policy name, reason code, and last-updated timestamp And diff highlighting does not reveal any underlying characters of redacted content And copy-to-clipboard, select-all, and context menu actions return the placeholder value, not the raw content And no "show raw" controls are visible for fields the role is not permitted to unmask
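A small sketch of the two mask strategies referenced above (full placeholder and partial last-four). The placeholder formats come from the criteria; the function names are illustrative.

```python
import re

def mask_full(kind: str) -> str:
    """Full redaction placeholder, e.g. '[redacted: email]' or '[redacted: internal_notes]'."""
    return f"[redacted: {kind}]"

def mask_phone_last4(phone: str) -> str:
    """Partial mask that keeps only the last four digits, e.g. '***-***-1234'."""
    digits = re.sub(r"\D", "", phone)
    return f"***-***-{digits[-4:]}" if len(digits) >= 4 else mask_full("phone")
```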
Redactions Persist in Exports and Share Links
Given a user with role-based redaction policies applied in the Trace Diff When the user exports the diff to PDF and CSV Then the exported files display the same placeholders and tooltips (where supported) as the on-screen view And no raw values are embedded in the file (including metadata, alt text, hidden layers, or CSS) And text selection in the PDF yields placeholder text only When the user creates a share link Then the share view enforces the same redaction rules for viewers based on the link’s assigned role or permissions And if a specific element is unmasked via a valid grant, only that element appears unmasked in the export/share for the grant’s active window; all other redactions remain And after grant expiration, subsequent exports and share-link views revert to redacted for that element
Temporary Unmask Request-and-Grant Workflow
Given a redacted element in a Trace Diff When a user clicks "Request access" Then a modal requires a justification (minimum 20 characters) and selects a predefined reason code And submitting logs the request with requester, element path, action ID, timestamp, and justification When an approver with permission reviews the request Then they can approve with a duration (default 60 minutes) and optional scope note, or deny with a reason And upon approval, only the requested element(s) in the specified action diff are unmasked for the requester within the set duration And the UI shows an "Unmasked until <time>" indicator with a revoke control for the approver And upon denial or expiry, the element reverts to redacted without page reload
Audit Trail for Redaction and Unmask Events
Given redaction policies and the unmask workflow are enabled When any of the following occur: redaction applied, access requested, request approved/denied, grant activated/expired/revoked, export generated, share link viewed Then an audit entry is recorded with: actor, event type, element path, action ID, policy/version, timestamp (UTC), justification/reason (if applicable), grant duration, and target (export/share ID) And audit entries are immutable, sequentially ordered, and filterable by event type and action ID And exporting the audit log redacts PII consistently while preserving event context And audit entries are accessible to Admins and Read-Only Auditors per role policy
Admin Policy Configuration for Field-Level Redaction
Given an Admin opens the Redaction Policies settings When they create or edit a policy Then they can define field selectors (JSON path and regex), roles permitted to view raw values, mask strategy (full, partial with pattern), placeholder text, tooltip message, and reason code And they can set propagation rules (apply to exports, apply to share links) and default unmask grant duration And a Preview pane renders the effect on a sample Trace Diff before saving And saving creates a new policy version and applies it to subsequent views within 10 seconds And all policy changes are written to the audit trail with diff of policy changes
Consistent Redaction Across Diff, Search, and Copy
Given a Trace Diff where a redacted field changed between versions When viewing side-by-side diffs Then both before and after panes render placeholders consistently, and change markers do not reveal masked characters When using in-page search or app-level search Then queries cannot match underlying masked content and only match placeholder text When using copy, export-selected, or print-to-PDF Then only placeholder values are output, and no hidden layers contain raw content And any attempt to bypass redaction via DOM inspection or CSS toggles does not reveal raw values in the client
Cross-Action Change Lineage
"As a campaign strategist, I want to see how a change in one place affected related assets so that I can explain downstream impacts and prevent regressions."
Description

Link related events across assets to show causal chains (e.g., script update → action page text update → outreach surge). Use correlation IDs, batch IDs, and request context to infer relationships. Provide a timeline and dependency graph within Trace Diffs to traverse upstream and downstream impacts, with per-node summaries and quick jump into detailed diffs. Enable campaigns to understand how a single edit propagated through RallyKit and influenced supporter actions.

Acceptance Criteria
End-to-End Causal Chain from Script Edit to Outreach Surge
Given a script edit event with correlationId and requestContext referencing an action page And subsequent action page text update and supporter outreach events occur within 24 hours When Trace Diffs loads the action’s history Then the timeline displays the events in chronological order with second-level timestamps, responsible user, approval record, and reason code And the dependency graph links the script edit → page text update → outreach events as a single causal chain And no unrelated events without matching identifiers or context are included in the chain And the chain can be expanded/collapsed to show or hide intermediate steps
Upstream/Downstream Traversal in Dependency Graph
Given the dependency graph is rendered for a selected action When a user selects any node Then controls to traverse Upstream and Downstream are enabled And clicking Upstream or Downstream re-centers the graph on the adjacent node and highlights the traversed path And pan/zoom/traverse interactions remain responsive (p95 action-to-render < 300ms) for graphs up to 200 nodes and 400 edges And initial graph render completes in < 2s (p95) for the same size
Per-Node Summary Cards in Graph and Timeline
Given any node in the graph or event in the timeline When the user hovers or focuses the item Then a summary card shows: asset type, asset name/title, event timestamp, user/approver, reason code, and a concise before/after summary of key fields (max 160 chars) And the summary card shows counts of downstream impacted supporter actions (calls, emails) and unique supporters where applicable And clicking the summary card’s Copy Link copies a deep link to this node/event with current filters
Quick Jump to Detailed Diffs from Lineage
Given a node or timeline event is selected When the user clicks View Diff Then the side-by-side human-readable diff for that exact state transition opens within Trace Diffs in one click And the diff view displays the associated approval and reason code And using Back returns to the prior lineage view with preserved zoom, selection, and filters And the deep link includes the selected node/event and returns the same state when opened in a new session
Correlation via IDs and Request Context
Given events share the same correlationId or batchId When lineage is computed Then edges are created deterministically between those events Given events lack correlationId/batchId but share requestContext (actionPageId and sessionId or ipHash) within a 10-minute window When lineage is computed Then edges are inferred and labeled as Inferred with a visible Why link showing the matching fields and time delta And edges always display the rationale (IDs matched or context fields) on hover And false links are suppressed by requiring at least two matching context fields or a unique identifier And on a reference test dataset with ground truth, precision is ≥ 98% and recall is ≥ 95% for inferred links
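A simplified sketch of the edge-building rule described above: exact identifier matches create direct edges, while inferred edges require at least two shared context fields inside the 10-minute window. Field names follow the criteria; the function shape is an assumption.

```python
from datetime import timedelta

INFERENCE_WINDOW = timedelta(minutes=10)
CONTEXT_FIELDS = ("actionPageId", "sessionId", "ipHash")

def lineage_edge(a: dict, b: dict) -> str | None:
    """Return 'direct', 'inferred', or None for a candidate edge between two events.
    Events are dicts with a 'timestamp' datetime plus optional identifiers."""
    if a.get("correlationId") and a.get("correlationId") == b.get("correlationId"):
        return "direct"
    if a.get("batchId") and a.get("batchId") == b.get("batchId"):
        return "direct"
    shared = [f for f in CONTEXT_FIELDS if a.get(f) is not None and a.get(f) == b.get(f)]
    within_window = abs(a["timestamp"] - b["timestamp"]) <= INFERENCE_WINDOW
    if len(shared) >= 2 and within_window:
        return "inferred"  # surfaced with a "Why" link listing the matched fields
    return None
```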
Timeline Filtering and Time Window Controls
Given the lineage timeline is displayed When the user applies filters by asset type, user/approver, reason code, campaign, and time range Then the timeline and graph update within 200ms (p95) and reflect only matching events And counts and summaries refresh to reflect the filtered scope And timestamps are shown in the campaign’s default timezone with option to switch to UTC And collapsing/expanding grouped events (e.g., batch updates) preserves selection and does not alter ordering
Exportable Narrative and Audit Artifacts
Given a lineage chain is visible When the user selects Export and chooses PDF or JSON Then the export contains: ordered timeline with timestamps, users, approvals, reason codes; dependency graph nodes and edges with IDs; and impact metrics (delta calls/emails) for post-change windows of 2h and 24h And exports exclude supporter PII (emails, phone numbers) while including aggregate counts And exports complete in < 5s (p95) for timelines up to 1,000 events and graphs up to 500 nodes And a SHA-256 checksum is provided for each export to support audit verification
Diff Search & Saved Filters
"As an operations lead, I want to filter and save views of diffs by approver, reason code, and date so that I can run recurring reviews without rebuilding queries each time."
Description

Offer fast search and filtering across histories by user, role, entity type, campaign, date range, reason code, approval status, and keywords within changes. Provide saved filters and shared views for recurring audits and weekly reviews. Support export of filtered results and deep links that preserve the active query and selected diff. Optimize with server-side indexing and pagination for large histories.

Acceptance Criteria
Combined Field and Keyword Search
Given a history dataset with varied users, roles, entity types, campaigns, reason codes, approval statuses, and change texts When the user applies filters: User=Jane Doe, Role=Reviewer, Entity Type=Action, Campaign=SB123, Date Range=2025-01-01..2025-01-31, Reason Code=Compliance, Approval Status=Approved, Keyword=budget Then the results contain only records that match all selected filters and include the keyword "budget" in the diff narrative or changed fields And results outside the date range or not matching any selected field are excluded And keyword matching is case-insensitive and diacritic-insensitive And the UI displays the total results count and active filter chips
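Case-insensitive, diacritic-insensitive keyword matching could be implemented with Unicode normalization, roughly as sketched below (standard library only; helper names are illustrative).

```python
import unicodedata

def normalize(text: str) -> str:
    """Lowercase and strip combining marks so 'Señor' matches 'senor'."""
    decomposed = unicodedata.normalize("NFKD", text)
    return "".join(c for c in decomposed if not unicodedata.combining(c)).casefold()

def keyword_matches(keyword: str, haystack: str) -> bool:
    return normalize(keyword) in normalize(haystack)
```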
Save and Share Filters as Views
Given a composed filter set When the user clicks Save as View, names it "Weekly Compliance Review", and selects Visibility=Team Then the view is created and appears under Saved Views for all team members within 10 seconds And opening the saved view restores the exact filters and sort order And only the owner (or users with edit permission) can modify the saved view; others can duplicate but not overwrite And selecting Visibility=Private makes the view visible only to the creator
Export Filtered Diff Results
Given an active filtered result set of N records (N>=1) When the user clicks Export and selects CSV Then a CSV downloads containing only the filtered rows and the columns: Diff ID, Entity Type, Campaign ID, Campaign Name, Action ID, DateTime (UTC ISO-8601), User Name, User ID, Role, Reason Code, Approval Status, Change Summary, Deep Link URL And the export preserves the current sort order And if N>10,000, the export includes the first 10,000 rows and shows a message indicating the limit And the export initiates within 3 seconds for N<=10,000
Deep Link Restores Query and Selection
Given an active filter set and a selected diff row When the user copies a deep link and opens it in a new browser session Then the application restores the filters, result page, and sort order, and opens the selected diff detail panel And if the selected diff is deleted or inaccessible, filters are restored and a "Diff unavailable" notice is shown without exposing restricted details And the deep link remains valid for at least 180 days unless the diff is deleted or permissions change
Server-Side Indexing and Performance at Scale
Given a dataset of 1,000,000 diff records and server-side indexes on queryable fields When applying any combination of up to 6 filters and a keyword search Then the first page (50 rows) returns in <=2.0 seconds at p95 and <=4.0 seconds at p99 And subsequent page navigations return in <=1.5 seconds at p95 And API responses are sorted by created_at DESC with diff_id as a deterministic tiebreaker And requests exceeding 8 seconds return a retriable error and a user-facing timeout message without partial data corruption
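One conventional way to meet the deterministic ordering and pagination requirement is keyset pagination over an indexed (created_at, diff_id) pair. The sketch below uses PostgreSQL-style row comparison with an assumed table and column layout.

```python
# Keyset pagination sketch: deterministic ORDER BY created_at DESC, diff_id DESC.
# The first page omits the row-comparison predicate; later pages pass the last row seen.
PAGE_QUERY = """
SELECT diff_id, entity_type, campaign_id, created_at, change_summary
FROM diffs
WHERE org_id = %(org_id)s
  AND (created_at, diff_id) < (%(last_created_at)s, %(last_diff_id)s)
ORDER BY created_at DESC, diff_id DESC
LIMIT %(page_size)s;
"""
```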
Pagination and Result Navigation
Given a result set exceeding one page When the user changes page size to 25, 50, or 100 Then the table updates accordingly and the selection persists across sessions for that user And the UI shows Total results and Page X of Y as computed by the server And keyboard navigation (Arrow Up/Down to move, Enter to open detail) works and is announced by screen readers And an empty state shows "No results match current filters" with a Reset Filters action
Permission-Scoped Results and Shared Views
Given user A has access to campaigns C1 and C2 and user B only to C2 When A saves a Team view filtering on C1 and C2 Then B opening the view sees only results from C2 and an indicator that some results are hidden due to permissions And users never see counts, metadata, or row stubs for entities they lack access to And deep links to restricted diffs show an "Insufficient permissions" message and a request-access link without leaking sensitive metadata

Key Rotation

Automatic, scheduled rotation of signing keys with rollover proofs so old and new receipts remain verifiable forever. Reduces security risk while preserving continuity, complete with alerts, custody logs, and a simple ‘last rotated’ indicator for compliance.

Requirements

Automated Key Rotation Scheduler
"As an org admin, I want signing keys to rotate automatically on a defined schedule so that I reduce compromise risk without manual effort or service disruption."
Description

Implements per-tenant, time-based rotation of signing key pairs with configurable cadence (e.g., every 30/60/90 days) and maintenance window. Generates new keys in a secure KMS/HSM where available, performs an atomic cutover, updates key identifiers (kid), and ensures zero signing downtime. Enforces environment isolation (prod/staging), stores private material only in approved custody, and validates post-rotation health before committing. Provides policy controls (minimum/maximum rotation interval) and API endpoints to create, read, update, and delete rotation policies.
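A simplified sketch of how the scheduler might defer a due rotation into the next maintenance window (UTC, window assumed not to span midnight; max-interval enforcement and policy versioning omitted).

```python
from datetime import datetime, time, timedelta

def next_rotation_at(last_rotated: datetime, cadence_days: int,
                     window_start: time, window_end: time) -> datetime:
    """Earliest eligible instant: the due time itself if it falls inside the daily
    maintenance window, otherwise the start of the next window."""
    due = last_rotated + timedelta(days=cadence_days)
    if window_start <= due.time() < window_end:
        return due
    candidate = due.replace(hour=window_start.hour, minute=window_start.minute,
                            second=0, microsecond=0)
    if candidate <= due:
        candidate += timedelta(days=1)  # today's window already passed
    return candidate
```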

Acceptance Criteria
Prod Tenant 60-Day Rotation During Maintenance Window
Given a prod tenant with a rotation policy of 60 days and a maintenance window of 02:00–03:00 UTC and active signing traffic ≥100 RPS When the next eligible maintenance window begins Then the scheduler initiates rotation within 60 seconds of window start And generates a new keypair in the tenant’s configured KMS/HSM And performs an atomic cutover that updates kid across all signer instances within ≤1 second And no signing request returns non-2xx or invalid-signature responses during the cutover period And p95 signing latency increases by <10% during the 5-minute interval surrounding cutover And the rotation timestamp, new kid, and policy version are persisted and auditable
KMS/HSM Key Generation and Custody Enforcement
Given a tenant policy that specifies an approved custody provider (KMS/HSM) and key algorithm/size When a scheduled rotation executes Then the keypair is generated inside the specified KMS/HSM with private key non-exportable And the private key is never written to application storage, logs, or memory dumps And a custody log is recorded with tenant id, environment, kms reference, key attributes, and initiator (scheduler) And attempts to use an unapproved custody provider are blocked with a 403 and no keys are created And if the custody provider is unavailable, the rotation aborts safely with no partial artifacts and emits a retryable failure event
Atomic Cutover and kid Consistency Under Load
Given a fleet of signer instances with shared configuration and a current active kid When the new key is ready during rotation Then the system performs an atomic configuration update so that all new signatures use the new kid within ≤1 second And no instance signs with the old key more than 1 second after cutover And verification of artifacts signed before cutover remains successful using the previously advertised public key until their normal expiry And metrics confirm zero failed verifications attributable to kid mismatch after cutover
Environment Isolation (Prod vs Staging)
Given separate prod and staging environments with distinct rotation policies and custody scopes When a rotation is executed in staging Then no prod keys, policies, schedules, or metrics are read or modified And generated keys are tagged with the staging environment and cannot be referenced from prod And kid namespaces are environment-scoped to prevent collisions And cross-environment API calls are rejected with 403/404 based on tenancy and environment scoping
Rotation Policy CRUD with Validation and Constraints
Given API endpoints to create, read, update, and delete per-tenant rotation policies When a client creates or updates a policy with interval outside allowed min/max or invalid window format Then the API responds 400 with field-level error details and no policy change occurs When a valid policy is created or updated Then the API responds 201/200 with the persisted policy, version/ETag, and computed next_rotation_at And GET returns the same policy and next_rotation_at deterministically And DELETE disables scheduling, cancels pending jobs, and returns 204; subsequent GET shows policy absent/disabled And all changes are captured in an audit log with actor, before/after, and timestamp
Post-Rotation Health Validation and Automatic Rollback
Given post-rotation health checks (self-sign and verify, dependency availability, error budget) When any health check fails within a 2-minute guard period after cutover Then the system atomically rolls back to the prior key within ≤1 second And emits an alert and marks rotation status as failed with error details And no private material from the failed key remains referenced by signers after rollback And the next scheduled attempt follows standard backoff without violating min interval policy
Maintenance Window Adherence and Deferral
Given a tenant with min/max rotation intervals and a defined maintenance window When the next rotation becomes due outside the window Then the scheduler defers execution until the next eligible window without breaching the max interval When the window is missed due to system unavailability Then the scheduler executes within the first 60 seconds of the next available window and records the miss reason And rotations never start outside the window unless an authenticated manual override flag is present (and recorded) And the computed next_rotation_at respects both the configured cadence and the min/max policy bounds
Rollover Proof Chain & Immutable Ledger
"As an auditor, I want verifiable linkage between previous and current keys so that I can trust historical receipts even after multiple rotations."
Description

Generates a cryptographic rollover proof at each rotation that cross-signs the old and new public keys, includes timestamps and hash commitments, and stores it in an append-only, tamper-evident ledger with durable retention. Exposes the proof chain via API and embeds proof references in the JWKS metadata so historical action receipts remain verifiable indefinitely. Supports export of proofs for offline audit and third-party verification.

Acceptance Criteria
Cross-Signed Rollover Proof Generation on Key Rotation
Given an active signing key with kid=K_old and a scheduled rotation at time T When the system activates a new key with kid=K_new Then it generates a rollover proof P containing: old_kid=K_old, new_kid=K_new, fingerprints of both public keys, issued_at=T (RFC 3339 UTC), proof_id (UUIDv4), alg, and hash_commitment of the prior ledger head And P includes two signatures: Sign_old(hash(new_public_key || metadata)) and Sign_new(hash(old_public_key || metadata)) And Verify(old_public_key, Sign_old, payload)=true and Verify(new_public_key, Sign_new, payload)=true And P is created within 5 seconds of K_new activation and no records are signed with K_new before P exists And attempts to regenerate P for the same rotation are rejected with 409 and no duplicate proof_ids are created
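A minimal sketch of the cross-signed proof structure described above, using Ed25519 keys from the widely used cryptography package. Exact field names, digest layout, and signing payloads are assumptions chosen to match the criteria, not a normative format.

```python
# pip install cryptography
import hashlib
import json
import uuid
from datetime import datetime, timezone
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def raw_public_bytes(key: Ed25519PrivateKey) -> bytes:
    return key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

def make_rollover_proof(old_key: Ed25519PrivateKey, old_kid: str,
                        new_key: Ed25519PrivateKey, new_kid: str,
                        prev_ledger_head: str) -> dict:
    """Each key signs a digest covering the other key's public material plus shared metadata."""
    old_pub, new_pub = raw_public_bytes(old_key), raw_public_bytes(new_key)
    metadata = {
        "proof_id": str(uuid.uuid4()),
        "old_kid": old_kid,
        "new_kid": new_kid,
        "old_fingerprint": hashlib.sha256(old_pub).hexdigest(),
        "new_fingerprint": hashlib.sha256(new_pub).hexdigest(),
        "issued_at": datetime.now(timezone.utc).isoformat(),
        "alg": "Ed25519",
        "hash_commitment": prev_ledger_head,  # hash of the prior ledger head
    }
    meta_bytes = json.dumps(metadata, sort_keys=True).encode()
    return {
        **metadata,
        "sig_old": old_key.sign(hashlib.sha256(new_pub + meta_bytes).digest()).hex(),
        "sig_new": new_key.sign(hashlib.sha256(old_pub + meta_bytes).digest()).hex(),
    }
```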
Append-Only Tamper-Evident Ledger Entry with Durable Retention
Given a rollover proof P at rotation index N When persisting to the ledger Then an entry L_N is appended with fields: index=N, proof_id, timestamp, payload_hash, prev_hash=H(L_{N-1}), and ledger_signature And recomputing prev_hash from genesis to N matches stored values exactly And attempts to modify, delete, or reorder any prior entry are blocked (HTTP 403) and recorded in audit logs And attempting to rewrite L_N returns 409 and leaves the storage unchanged (append-only semantics enforced) And entries are retained for at least the configured retention period (default >= 7 years) and survive restarts and cross-region failover
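The append-only, tamper-evident property can be illustrated with a simple hash chain: each entry commits to its predecessor's hash, so any modification or reordering breaks verification. A minimal in-memory sketch (real storage, WORM retention, and ledger signatures omitted):

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(ledger: list[dict], proof: dict) -> dict:
    """Append L_N with prev_hash = H(L_{N-1}); earlier entries are never rewritten."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else GENESIS
    payload_hash = hashlib.sha256(json.dumps(proof, sort_keys=True).encode()).hexdigest()
    entry = {"index": len(ledger), "proof_id": proof["proof_id"],
             "payload_hash": payload_hash, "prev_hash": prev_hash}
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

def verify_chain(ledger: list[dict]) -> bool:
    """Recompute the chain from genesis; returns False on any tampering or reordering."""
    prev = GENESIS
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True
```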
Proof Chain Retrieval via Public API
Given an API client with scope=read:proofs When requesting GET /v1/keys/proofs?from=K_old&to=K_new Then the API responds 200 with an ordered, contiguous list of proofs linking K_old..K_new inclusive And each item includes: proof_id, timestamp, old_kid, new_kid, payload_hash, prev_hash, and verification_urls And the response includes ETag and supports conditional GET with If-None-Match And integrity validation over the returned list recomputes the terminal hash matching GET /v1/keys/proofs/head And pagination via limit and cursor preserves chain continuity; invalid ranges or gaps return 400
JWKS Metadata Embeds Proof References
Given a fetch of /.well-known/jwks.json When keys are returned Then each key contains kid, kty, use, alg plus custom fields: x-proof-last (proof hash or id), x-proof-chain (URL), x-last-rotated (RFC 3339 UTC), and x-prev-kid when applicable And the JWKS remains RFC 7517 compliant and valid JSON; unknown fields use x- prefix And Cache-Control is set per configuration; updates rotate kid and atomically update x-last-rotated and x-proof-last And a verifier can follow x-proof-chain to retrieve the proof linking the current and previous key to establish trust for historical receipts
Historical Receipt Verification Against Proof Chain
Given an action receipt R signed at time T0 using kid=K_old and a subsequent rotation to kid=K_curr with intervening proofs When a verifier validates R after rotation Then it resolves K_old either via JWKS plus x-prev-kid or via proof chain traversal from K_curr back to K_old And it verifies R's signature successfully using the resolved historical public key And tampered or missing proof links cause verification to fail with a distinct error code (e.g., PROOF_CHAIN_BROKEN) and do not authenticate R And verification can be performed offline when supplied with an exported proof bundle covering K_old..K_curr
Offline Export for Third-Party Audit
Given a request POST /v1/keys/proofs/export with a range (e.g., from=K_a&to=K_b) or all When the export completes Then a single artifact is produced containing: ordered proofs, ledger headers, a chain manifest (rolling hash or Merkle root), and a detached signature by an export key And the response includes a SHA-256 checksum and size in bytes; downloading the artifact reproduces the checksum And exporting the same range twice yields byte-identical artifacts And an offline verification procedure validates the detached signature and reconstructs the chain end-to-end without network access And optional server-side encryption with a customer-provided key is supported for remote destinations; permission errors return 403 with no partial writes
Grace Window and Backward Compatibility
"As a developer integrating RallyKit webhooks, I want a rotation grace window so that my systems continue verifying signatures while caches and keys propagate."
Description

Provides a configurable grace window during which both the retiring and newly issued keys are recognized for verification to prevent breakage across distributed verifiers and cached public keys. New receipts are signed with the new key while including a reference to the rollover proof; verification libraries and webhook signatures accept either key during the overlap. After expiry, the old key is marked retired but remains resolvable for historical verification.

Acceptance Criteria
Overlap Verification During Grace Window
Given a key rotation completes at T0 with a configured grace window W And public keys K_old and K_new are published When a verifier validates a receipt signed by K_old during [T0, T0+W) Then verification succeeds and returns valid=true, key_id=K_old, key_status=retiring When a verifier validates a receipt signed by K_new during [T0, T0+W) Then verification succeeds and returns valid=true, key_id=K_new, key_status=active
New Receipts Include Rollover Proof Reference
Given a rotation has completed at T0 When RallyKit generates any new receipt after T0 Then the receipt is signed with K_new and includes headers kid=K_new and rollover_proof_id present And resolving rollover_proof_id returns a proof chaining K_old -> K_new with fields rotation_time=T0 and grace_expires_at=T0+W And attempts to generate a new receipt with K_old after T0 are blocked and logged with event=attempted_use_of_retired_key, outcome=blocked
JWKS Key Serving and Key Status Metadata
Given a rotation at T0 with grace window W When fetching the JWKS during [T0, T0+W) Then the JWKS includes both K_old and K_new with key_status values retiring and active, and includes grace_expires_at And cache-control for JWKS responses during grace has max-age <= 300 seconds When fetching the JWKS after T0+W Then the JWKS lists only K_new; K_old is excluded from the list but remains directly resolvable by kid and returns key_status=retired with HTTP 200
Webhook Signature Acceptance During Grace Window
Given a rotation at T0 with grace window W And outbound webhooks are signed with K_new and include a signature header containing kid When recipients verify webhooks during [T0, T0+W) using the provided library Then webhooks signed with K_new verify successfully And redeliveries of webhooks signed before T0 with K_old also verify successfully during [T0, T0+W) because K_old remains resolvable And after T0+W, verifications of K_old-signed webhooks created before T0 still succeed but report key_status=retired; new webhooks are never signed with K_old
Grace Window Configuration and Enforcement
Given an admin configures grace_window=W in settings Then W is validated to be within [0 minutes, 30 days] and persisted And if unset, default W=72 hours When a rotation is initiated at T0 Then grace_expires_at equals T0 + W And changes to W after T0 do not alter the active rotation and apply only to subsequent rotations And setting W=0 ends overlap immediately; JWKS lists only K_new while K_old remains directly resolvable as retired
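A compact sketch of the key-status classification implied by the grace-window rules above; the function shape is illustrative, with the 72-hour default and 0–30 day bound taken from the criteria.

```python
from datetime import datetime, timedelta, timezone

DEFAULT_GRACE = timedelta(hours=72)   # default W from the criteria
MAX_GRACE = timedelta(days=30)        # upper bound on W

def key_status(kid: str, active_kid: str, retiring_kid: str | None,
               rotated_at: datetime, grace: timedelta = DEFAULT_GRACE,
               now: datetime | None = None) -> str:
    """Classify a key during and after the rotation grace window [T0, T0+W)."""
    now = now or datetime.now(timezone.utc)
    if kid == active_kid:
        return "active"
    if kid == retiring_kid:
        return "retiring" if now < rotated_at + grace else "retired"
    return "unknown"
```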
Post-Expiry Historical Verification
Given grace_expires_at has passed for the rotation at T0 When verifying a pre-rotation receipt signed by K_old Then verification succeeds and returns valid=true, key_status=retired, reason=historical_verification When verifying any receipt created after T0 that presents K_old Then verification fails with error=key_retired or error=invalid_signature and includes kid=K_old And the old public key and rollover proof remain accessible by identifier with HTTP 200 for audit purposes
Rotation Alerts and Anomaly Handling
"As an operations lead, I want timely alerts and clear failure handling so that I can respond quickly and maintain trust in signed receipts."
Description

Delivers proactive notifications for upcoming rotations, successful cutovers, failures, and unusual activity (e.g., repeated rotation failures, unexpected key access). Supports in-app alerts, email, and webhook events with structured payloads. On failure, triggers automatic rollback/retry with exponential backoff and flags the tenant in a protected state until remediation. All events are correlated with rotation IDs for traceability.

Acceptance Criteria
Upcoming Rotation Alerts (7-Day and 24-Hour)
Given a tenant with a rotation scheduled at <scheduled_at> in the tenant’s timezone When the timestamp is 7 days before <scheduled_at> Then in-app, email, and webhook alerts of type 'rotation.upcoming' are delivered within 60 seconds containing rotation_id, tenant_id, scheduled_at, environment, and severity='info' And duplicate alerts for the same window are suppressed per channel When the timestamp is 24 hours before <scheduled_at> Then alerts repeat with the same payload fields and a distinct notification_id and window='24h' And webhook alerts are HMAC-signed and all events are correlated by rotation_id in audit/custody logs
Successful Cutover Notifications and Traceability
Given an active rotation executing for rotation_id When cutover completes successfully and post-cutover verification checks pass Then alerts of type 'rotation.success' are delivered via in-app, email, and webhook within 60 seconds containing rotation_id, old_key_id, new_key_id, cutover_at, attempt_count, and rollover_proof_checksum And an audit/custody log entry is recorded linking rotation_id to actor='system' and outcome='success' And the in-app status for the rotation shows 'Successful' with timestamp and is filterable by rotation_id
Rotation Failure Alerts and Tenant Protection Flag
Given a rotation attempt fails for rotation_id When failure is detected Then alerts of type 'rotation.failure' are delivered within 60 seconds via all channels containing rotation_id, error_code, error_message, attempt_count, next_retry_at, and protected_state=true And the tenant is marked protected_state=true and a 'Protected' banner is shown in-app And an audit/custody log entry is recorded with outcome='failure' and correlates to rotation_id
Automatic Rollback and Exponential Retry
Given a rotation step fails When failure is detected Then the system rolls back to the last valid signing key within 30 seconds and resumes signing operations without interruption And a retry is scheduled using exponential backoff with jitter: 1m, 2m, 4m, 8m, 16m (max 5 attempts) And each retry attempt emits 'rotation.retry_scheduled' and 'rotation.retry_attempt' events with rotation_id, attempt_count, and scheduled_for And after max attempts are exhausted, rotation status becomes 'failed' and no further automatic retries occur And protected_state remains true until an admin explicitly clears it; the clear action is logged with rotation_id
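The retry schedule above (1m, 2m, 4m, 8m, 16m, five attempts) maps to a standard exponential backoff with jitter; a minimal sketch, with the jitter ratio as an assumption:

```python
import random

BASE_SECONDS = 60   # first retry after roughly one minute
MAX_ATTEMPTS = 5    # 1m, 2m, 4m, 8m, 16m per the criteria

def retry_delay(attempt: int, jitter_ratio: float = 0.1) -> float | None:
    """Seconds to wait before retry `attempt` (1-based); None once attempts are exhausted."""
    if attempt > MAX_ATTEMPTS:
        return None
    delay = BASE_SECONDS * (2 ** (attempt - 1))
    return delay * random.uniform(1 - jitter_ratio, 1 + jitter_ratio)
```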
Anomaly Detection: Repeated Failures and Unexpected Key Access
Given the system observes either ≥3 rotation failures within a rolling 24 hours for the same tenant, or an access event to signing keys by a non-authorized principal or outside approved conditions When either condition occurs Then an 'anomaly.detected' alert with severity='high' is delivered via in-app, email, and webhook within 60 seconds containing anomaly_type in {'repeated_failures','unexpected_key_access'}, rotation_id (if applicable), tenant_id, occurred_at, evidence, and recommended_action And the anomaly is highlighted in-app and is filterable by severity='high' And the event is correlated in audit logs by rotation_id (if applicable) and anomaly_id
Webhook Events: Structured Payload, Signing, and Delivery Guarantees
Given a webhook subscription is configured When any rotation.* or anomaly.* event is emitted Then a POST is sent within 60 seconds to the configured endpoint with JSON including id, event_type, rotation_id, tenant_id, occurred_at (ISO8601), payload, idempotency_key, and version='v1' And the request is signed with HMAC-SHA256 using the tenant's webhook secret and includes header 'X-RallyKit-Signature: t={timestamp}, v1={hex_digest}' And deliveries use at-least-once semantics with exponential backoff for non-2xx/timeout up to 24 hours and a maximum of 12 attempts, preserving the same idempotency_key And the system records delivery outcomes (status_code, attempt, latency_ms) and exposes them in the audit log filterable by rotation_id
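A sketch of producing and verifying the 'X-RallyKit-Signature: t={timestamp}, v1={hex_digest}' header named above. The header format comes from the criteria; signing the timestamp-prefixed body and the five-minute tolerance are assumptions a recipient library would need to agree on.

```python
import hashlib
import hmac
import time

def sign_webhook(secret: str, body: bytes, timestamp: int | None = None) -> str:
    """Return the X-RallyKit-Signature header value in the 't=..., v1=...' form."""
    t = timestamp or int(time.time())
    digest = hmac.new(secret.encode(), f"{t}.".encode() + body, hashlib.sha256).hexdigest()
    return f"t={t}, v1={digest}"

def verify_webhook(secret: str, body: bytes, header: str, tolerance: int = 300) -> bool:
    """Recipient side: reject stale timestamps, then compare digests in constant time."""
    parts = dict(p.strip().split("=", 1) for p in header.split(","))
    t = int(parts["t"])
    if abs(time.time() - t) > tolerance:
        return False
    expected = hmac.new(secret.encode(), f"{t}.".encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, parts["v1"])
```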
Protected State Behavior and Remediation Workflow
Given a tenant enters protected_state=true due to rotation failure or anomaly When protected_state=true Then manual and scheduled rotations are blocked; API responds 423 Locked with error_code='TENANT_PROTECTED' and includes rotation_id And signing operations continue using the last known good key; no receipts become unverifiable And entering and exiting protected_state emit 'tenant.protected.enter' and 'tenant.protected.exit' events including rotation_id and reason And clearing protected_state requires an admin action via UI or API; the action is logged with actor, timestamp, and rotation_id
Key Custody Logs and RBAC Controls
"As a compliance officer, I want detailed custody logs with access controls so that I can demonstrate proper key management during audits."
Description

Maintains an immutable audit trail of all key lifecycle events—generation, activation, rotation, retirement, access attempts, and configuration changes—capturing actor, timestamp, tenant, IP/device, and outcome. Enforces least-privilege roles for viewing logs, managing policies, and initiating manual rotations. Supports optional two-person approval for sensitive actions and export of custody logs to SIEM or CSV with WORM retention.

Acceptance Criteria
Immutable Custody Log for Key Lifecycle Events
- Given any key lifecycle action (generation, activation, rotation, retirement, access attempt, configuration change), When the action completes or fails, Then a custody log entry is appended within 5 seconds capturing: event_type, actor_id, actor_role, timestamp_utc (ISO 8601), tenant_id, source_ip, device_fingerprint (if available), outcome (success|failure), error_code (on failure), key_identifier (non-sensitive), and correlation_id.
- And Then the entry is stored on WORM media with a tamper-evident hash chain (prev_hash, entry_hash) and cannot be modified or deleted via any API for the configured retention period.
- And Then any attempt to alter or delete an entry returns 403 and creates a SecurityEvent entry with details of the attempt.
Least-Privilege RBAC for Logs and Key Operations
- Rule: Roles and allowed actions:
  - Viewer → view/search custody logs; view 'last rotated' indicator; view rollover proofs metadata only.
  - RotationOperator → initiate manual rotation (subject to approvals), view execution status.
  - PolicyAdmin → manage retention policies, alert channels, two-person approval settings, export policies.
  - OrgAdmin → assign/revoke roles; cannot bypass approvals.
- Given a user with a role, When they attempt an action outside their allowed set, Then the system returns 403, no side effects occur, and the attempt is logged.
- Given least-privilege principle, When roles are evaluated, Then no role grants more access than listed and all permissions are covered by automated tests (positive and negative).
Two-Person Approval Workflow for Sensitive Actions
- Given two-person approval is enabled for a tenant, When a user requests a sensitive action (manual rotation, retention policy change, alert policy change), Then the request moves to PendingApproval and requires approval from a second eligible user not equal to the requester.
- And Then self-approval and approval by users in the same active session are blocked and logged.
- And Then approval or rejection must occur within a configurable TTL (default 24 hours); on expiry, the request auto-expires, no changes are applied, and all state changes are logged.
- And Then upon approval, the action executes and the custody log records requester_id, approver_id, timestamps, action details, and outcome.
Custody Log Export to SIEM and CSV
- Given a user with Export permission (PolicyAdmin or OrgAdmin), When they request an export, Then filters include date range (UTC), event_type, outcome, actor_id, tenant_id (fixed), key_identifier, and correlation_id.
- And Then CSV export generates a UTF-8 CSV with header row, schema_version, SHA-256 checksum, and optional PGP signature; completion within 2 minutes for up to 1,000,000 rows, with chunking beyond that size.
- And Then SIEM export supports syslog over TLS (RFC 5425) and HTTPS webhook with JSON Lines; deliveries retry with exponential backoff up to 24 hours; all delivery attempts are logged.
- And Then export operations are appended to the custody log with requester, destination, filters, record_count, checksum/signature, and outcome.
Alerting on Sensitive Events and Anomalies
- Given alert channels are configured, When sensitive events occur (failed access attempt, manual rotation initiated/completed/failed, policy change proposed/approved/rejected/expired), Then alerts are dispatched within 60 seconds to configured channels (email, Slack, webhook) and include: tenant_id, event_type, actor_id, timestamp_utc, severity, and reference_id.
- And Then alert deliveries are RBAC-scoped so only authorized recipients receive notifications; delivery failures are retried and logged with failure reason.
- And Then rate limiting and deduplication prevent more than one identical alert per 5-minute window per channel; changes to alert configuration are logged.
Last Rotated Indicator and Proof Visibility
- Given a successful key rotation, When the rotation completes, Then the 'last_rotated' timestamp and key_identifier update in the UI and API within 10 seconds.
- And Then Viewer+ roles can see the indicator on the compliance dashboard; RotationOperator and PolicyAdmin can access rollover proof artifacts sufficient to verify old/new receipt chains without exposing private key material.
- And Then the indicator reflects the most recent rotation event under load; discrepancies exceeding 10 seconds are flagged and alerted.
Tenant-Isolated Access and Exports
- Given a multi-tenant environment, When a user queries or exports custody logs, Then results include only entries where tenant_id equals the user’s tenant; cross-tenant access returns 403 and is logged without leaking data.
- And Then exported files and SIEM streams are segregated by tenant and cannot be configured to deliver data across tenants.
- And Then attempts to override tenant_id via request parameters are blocked by validation and WAF rules and are logged as security events.
Compliance Indicator and Policy Dashboard
"As an org admin, I want a clear ‘last rotated’ indicator and policy dashboard so that I can verify compliance at a glance and adjust settings when needed."
Description

Adds a simple, prominent UI indicator displaying the last rotated timestamp, next scheduled rotation, and current policy compliance status per tenant. Includes a dashboard to review rotation history, configure policies, and trigger a manual rotation with appropriate confirmations and access checks. Provides read-only API endpoints for external compliance reporting.

Acceptance Criteria
Tenant Compliance Indicator Displays Last/Next Rotation and Status
Given a tenant with a configured rotation policy and at least one rotation event When a user with Viewer or higher role opens the Key Rotation area Then the indicator shows Last Rotated in the tenant’s timezone with a tooltip showing ISO 8601 UTC And the indicator shows Next Scheduled computed from policy, timezone, and any blackout windows And the status reads Compliant, Due Soon, or Overdue based on policy thresholds and current time And if no rotation has ever occurred, Last Rotated shows “Never” and status shows Non‑compliant And the indicator refreshes within 10 seconds after any rotation completes without full page reload And status color and icon meet WCAG AA contrast and include accessible labels And read‑only users can view but not modify any values
Rotation History and Custody Log Dashboard
Given a tenant with recorded rotation events When a user opens the Rotation History tab Then a table lists events sorted by most recent with columns: Timestamp (localized), Actor, Method (Auto/Manual), Result (Success/Fail), Key ID (short), Proofs/Custody Log link And the user can filter by date range, method, and result and see filtered results within 1 second for up to 1,000 events And pagination or infinite scroll is available for histories larger than 100 events per page And clicking Proofs/Custody Log opens a detail view with rollover proofs and custody entries And the user can export the current filtered view to CSV and JSON and the file contains exactly the visible columns And if no events exist, an empty state explains that no rotations are recorded yet
Policy Configuration for Scheduled Rotations
Given a user with Admin role opens the Policies tab When the user sets rotation frequency (7–365 days), alert threshold (1–30 days before due), grace period (0–30 days), and timezone Then invalid values are blocked with inline validation messages and the Save button remains disabled until valid And saving persists changes, recalculates Next Scheduled immediately, and updates the compliance indicator And changes are recorded in the audit log with before/after values and actor And read‑only roles cannot edit policies and see disabled inputs with a tooltip explaining required permissions And if the new policy shortens the schedule causing immediate Due Soon or Overdue, a confirmation dialog is required before saving
Manual Key Rotation with Access Checks and Confirmations
Given a user with Security Admin role who has recently re‑authenticated within 5 minutes and has 2FA enabled When the user clicks Rotate Now Then a confirmation modal appears describing impact, expected downtime (if any), and requiring the user to type ROTATE to proceed And upon confirmation a rotation job is queued and the UI shows In Progress status within 5 seconds And while a rotation is running the Rotate Now control is disabled and a tooltip indicates another rotation is in progress And on success within 2 minutes the indicator updates Last Rotated and Next Scheduled, and a Manual rotation event with actor is added to history with custody log entries and rollover proof link And unauthorized users or missing re‑auth receive 403/reauth prompts and no job is created And if the rotation fails, the user sees an error state with a retry option, the history records a Failed event, and status becomes Overdue if applicable And manual rotation respects configured blackout windows unless an Admin checks an explicit Override blackout checkbox in the modal
Read‑Only Compliance Reporting API
Given an API key scoped to read:compliance for a tenant When calling GET /api/v1/tenants/{tenantId}/rotation/indicator Then the API returns 200 with JSON including lastRotatedAt, nextScheduledAt, status, and policy summary with timestamps in ISO 8601 UTC And GET /history supports pagination (page, perPage) and filters (from, to, method, result) and returns stable IDs and links to proofs And GET /policies returns effective policy values and metadata without secrets And requests without valid scope or wrong tenant receive 401/403; unknown tenant returns 404; rate limit is enforced at 60 requests/minute with 429 responses And responses include ETag and support If-None-Match returning 304 when unchanged And all endpoints are read‑only; attempts to POST/PUT/PATCH/DELETE return 405
Overdue and Failure Alerts with UI Banner and Notifications
Given a tenant with alert threshold configured When a rotation becomes Due Soon or Overdue or a rotation fails Then an in‑app banner appears on the Key Rotation area within 1 minute showing the condition and next steps And an email notification and configured webhook are sent within 5 minutes with tenant, status, and links; deliveries are retried with exponential backoff for up to 24 hours on transient failures And alerts are deduplicated so the same condition does not notify more than once per 24 hours unless the state changes And acknowledging the banner dismisses it for that user and records an acknowledgment entry; the banner clears automatically when status returns to Compliant And all alerts and acknowledgments are logged in the custody/audit log
Multi‑Tenant Data Isolation and Scoping
Given a user or API key scoped to Tenant A When attempting to access the dashboard or API endpoints for Tenant B Then the system returns 403 (or 404 where appropriate) and no Tenant B data is leaked And all UI lists, exports, and API responses only include records for the current tenant context And API keys and session tokens are tenant‑scoped and cannot be used across tenants And cross‑tenant access attempts are logged with tenant, actor, and outcome
Emergency Rotation and Key Revocation Workflow
"As a security responder, I want a one-click emergency key rotation and revocation process so that I can contain threats and restore trust quickly."
Description

Provides an immediate, high-priority rotation path for suspected compromise: revokes the current key, issues a new key pair, updates JWKS, and broadcasts critical alerts. Temporarily reduces or disables grace window as configured, forces all new receipts to use the replacement key, and publishes an incident-tagged rollover proof. Includes post-incident tasks to annotate custody logs and export an incident report.

Acceptance Criteria
Admin-initiated emergency rotation commits new key and revokes old key
Given an authorized org owner with Emergency Key Rotation permission and a suspected compromise, When they trigger Emergency Rotate Now via UI or POST /keys/rotate?mode=emergency, Then the current signing key is immediately marked revoked at T0, And a new key pair with a unique kid is generated and stored in HSM/KMS, And the JWKS endpoint is updated to publish the new key and remove the old key from the active set within 60 seconds, And all receipts issued after T0 are signed with the new kid, And no new signatures are produced by the revoked key after T0.
Revoked-key verification grace window enforcement
Given emergency_grace_window_minutes = G where 0 ≤ G ≤ 5 and a rotation starts at T0, When a verification request uses a signature from the revoked key, Then verification endpoints accept it only until T0 + G minutes and reject it after that time with error code KEY_REVOKED, And during the grace window a deprecation warning is logged and emitted to metrics, And configuration changes to G take effect immediately for subsequent rotations.
Incident-tagged rollover proof publication
Given an emergency rotation produces incident_id = I, When rotation completes, Then a rollover proof is published at a stable URL (e.g., /.well-known/key-rollovers/I.json) and linked from /jwks.json within 60 seconds, And the proof contains old_kid, new_kid, incident_id, reason="suspected_compromise", issuer, T0 timestamp, and cryptographic evidence binding old→new, And verifiers can validate receipts signed by either key using the proof, And the proof is immutable (content-addressed or versioned) and served without an expiration header.
Critical alert broadcast on emergency rotation
Given alert channels (email, Slack, webhook) are configured for the org, When an emergency rotation is initiated at T0, Then alerts are delivered to all configured channels within 120 seconds containing org_id, incident_id, actor, old_kid, new_kid, T0, configured grace window, and console deep-link, And each delivery attempt is retried up to 3 times with exponential backoff on failure, And failures after retries are logged and a secondary on-call alert is raised, And alert delivery outcomes are recorded per channel for auditing.
Custody logs and Last Rotated indicator update
Given emergency rotation completes, When audit records are generated, Then custody logs include append-only entries for key revocation and new key issuance with actor, timestamp, IP/device fingerprint, old_kid, new_kid, incident_id, and outcome, And entries appear in the UI and export APIs within 60 seconds, And the Last Rotated indicator displays T0 and an "Emergency" badge in settings and compliance views.
Post-incident report export
Given an incident_id = I exists for an emergency rotation, When an org owner requests Export Incident Report for I, Then a report is generated within 10 minutes in JSON and PDF containing timeline (T0..Tn), actors, alerts sent (with outcomes), JWKS before/after diff, rollover proof URI and hash, grace window settings, verification rejection counts, and remediation steps taken, And the report is downloadable for 30 days and stored in secure object storage, And the export event is captured in custody logs with requester and timestamp.
Idempotent emergency rotation requests
Given multiple concurrent Emergency Rotate Now requests for the same org within a 60-second window, When the requests are processed, Then only one rotation occurs and all callers receive the same incident_id and new_kid, And no duplicate keys are issued and the old key is revoked exactly once, And JWKS and rollover proof endpoints reach consistency within 60 seconds of T0.
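One way to satisfy the idempotency requirement is to collapse concurrent requests per org onto a single in-flight rotation record; a minimal in-process sketch (a real deployment would use a shared store or advisory lock rather than module state):

```python
import threading
import time
import uuid

_lock = threading.Lock()
_recent: dict[str, dict] = {}  # org_id -> {"incident_id", "new_kid", "at"}

def emergency_rotate(org_id: str, window_seconds: int = 60) -> dict:
    """Every caller inside the window receives the same incident_id and new_kid."""
    with _lock:
        now = time.monotonic()
        existing = _recent.get(org_id)
        if existing and now - existing["at"] < window_seconds:
            return existing
        result = {"incident_id": str(uuid.uuid4()),
                  "new_kid": str(uuid.uuid4()), "at": now}
        _recent[org_id] = result
        # revoke the old key and issue the new pair here, exactly once
        return result
```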

Dispute Seals

Flag any entry or batch as disputed with one click, attach evidence, and invite co-signs from partners. The ledger records the dispute, actions taken, and final resolution with timestamps—no edits to history, just an immutable, transparent trail.

Requirements

One-click Dispute Flagging
"As a campaign data steward, I want to flag questionable entries or batches as disputed with one click so that I can pause reliance on them while initiating review without corrupting history."
Description

Enable users to flag any individual ledger entry or an entire batch as disputed with a single action from the dashboard or detail view. The dispute seal captures a mandatory reason code and optional notes, visually marks affected records across lists and detail views, and prevents destructive edits to sealed fields while allowing non-destructive annotations. The flagging action is idempotent, supports bulk selection, and can be reversed only by authorized roles with a required justification, all recorded in the immutable ledger. Filters and search facets expose dispute state for rapid triage and reporting, and the feature integrates with existing action pages and analytics without altering historical metrics.

Acceptance Criteria
One-click dispute on single ledger entry from dashboard list
Given a permitted user is viewing the dashboard list of ledger entries When the user triggers "Flag as Disputed" on a single entry Then the system prompts for a mandatory Reason Code and optional Notes with the Save action disabled until a Reason Code is selected And upon submit, the entry is sealed and a visible "Disputed" indicator appears on both list and detail views for that entry And destructive edits to all sealed fields are disabled on that entry while non-destructive notes remain enabled And an immutable ledger event "dispute.opened" is appended capturing entry ID, user ID, timestamp, reason code, and notes And repeating the action on the same entry does not create duplicate seals or duplicate ledger events
Batch-level dispute from batch detail view
Given a permitted user is viewing a batch detail view When the user triggers "Flag Batch as Disputed" and selects a mandatory Reason Code (optional Notes) Then all entries in the batch that are not already disputed are sealed and visually marked as "Disputed" across list and detail views And already-disputed entries remain unchanged and are reported in a summary result And the system displays a completion summary including counts of newly sealed, already disputed, and failed items And immutable ledger events are appended per affected entry and one batch-level event records batch ID, user ID, timestamp, reason code, and notes And destructive edits to sealed fields for all affected entries are disabled while non-destructive notes remain enabled
Idempotency and bulk selection behavior
Given a permitted user multi-selects N ledger entries from any list view (including across pages) When the user triggers "Flag as Disputed" with a chosen Reason Code Then only entries not already disputed are newly sealed and counted as affected And re-submitting the same operation (e.g., retry after error) does not create duplicate seals or duplicate ledger events for any entry And the result summary reports total selected, already disputed, newly sealed, and any errors And each newly sealed entry has exactly one new "dispute.opened" event appended
Edit locks and allowed annotations on disputed records
Given an entry is in a disputed state When any user attempts to edit a sealed field on that entry via UI or API Then the action is blocked, the field is read-only in the UI, and an explanatory message indicates the record is disputed And users can add new non-destructive notes on the disputed entry And added notes are stored as append-only entries without modifying sealed fields and are recorded with user ID and timestamp And the ledger history shows no edits to prior events—only appended entries
Authorized reversal with justification and full audit trail
Given an entry is disputed and a user with permission "Dispute:Unseal" is viewing it When the user selects "Remove Dispute" and provides a mandatory justification Then the disputed indicator is removed and sealed fields become editable again And an immutable ledger event "dispute.closed" is appended with entry ID, user ID, timestamp, and justification And if a user lacking the required permission attempts reversal, the action is denied and an authorization-denied event is recorded without changing the dispute state
Filtering, search facets, and export by dispute state
Given the records list and search UI When the user filters by Dispute State = Disputed (and optionally Reason Code) Then only matching records are returned and total counts reflect the filtered set And exports include columns for Dispute State, Reason Code, Dispute Opened At, and Dispute Closed At when applicable And the dispute state is available as a facet for rapid triage in all list views
Analytics and action pages unaffected by dispute flagging
Given analytics dashboards and action pages referencing these records When entries or batches are flagged as disputed Then historical metrics and totals remain unchanged in analytics And action pages continue to function without changes to user-facing content or counts And internal analytics can segment by Dispute State without altering underlying historical figures
Evidence Attachment & Chain-of-Custody
"As a compliance officer, I want to attach verifiable evidence to a dispute so that the review can rely on documented, tamper-evident proof."
Description

Allow upload and linking of supporting evidence (files, URLs, and structured notes) to a dispute, capturing checksums, uploader identity, timestamps, and source metadata to establish chain-of-custody. Evidence is append-only, virus-scanned, size-validated, encrypted at rest, and referenced via immutable content hashes; redaction is supported via additive overlays without removing originals. The UI supports drag-and-drop, previews for common file types, and evidence categorization, while the API accepts multipart submissions. All additions are logged in the ledger, ensuring traceability and reproducibility during audits.
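A minimal sketch of the chain-of-custody capture described above, assuming Python and purely illustrative names; virus scanning, size validation, encryption at rest, and object storage are omitted so the example can focus on the content hash and the append-only record.

```python
import hashlib
from datetime import datetime, timezone

def add_evidence(dispute_events: list, file_bytes: bytes, uploader_id: str,
                 source_metadata: dict) -> dict:
    """Append an evidence record referenced by its SHA-256 content hash.

    dispute_events is treated as append-only; originals are never rewritten,
    and a redaction would be stored as a separate overlay artifact linked back
    to this content hash rather than modifying the original bytes.
    """
    content_hash = hashlib.sha256(file_bytes).hexdigest()
    record = {
        "type": "evidence.added",
        "content_sha256": content_hash,
        "size_bytes": len(file_bytes),
        "uploader_id": uploader_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source_metadata,
    }
    dispute_events.append(record)  # append-only: no edits to prior entries
    return record
```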

Acceptance Criteria
Drag-and-Drop File Evidence Upload (UI)
Given an authenticated user with permission to manage disputes And an open dispute record exists When the user drags and drops a supported file type under the configured max size into the Evidence area Then the system validates file type and size before upload and shows a blocking error for invalid files with the reason And the file is virus-scanned; infected files are rejected with no bytes persisted and a clear error message And on success the system computes a SHA-256 checksum and content-addressable hash, captures uploader identity, timestamp, and source metadata, and stores the file encrypted at rest And an immutable ledger entry is written capturing action type, content hash, checksum, uploader, timestamp, and source metadata And the UI displays a preview for common file types or a download link, and prompts for category selection
Link URL Evidence With Source Metadata
Given an authenticated user on a dispute When the user submits a URL as evidence with optional title and notes Then the system validates URL format and reachability and records final resolved URL, HTTP status, content-type, and retrieval timestamp as source metadata And a content-addressable reference is generated for the canonicalized URL and associated to the dispute And an immutable ledger entry is written with content reference, uploader identity, timestamp, and URL metadata And the evidence appears in the list with a clickable link and assigned category
Structured Notes Evidence Capture
Given an authenticated user on a dispute When the user adds a structured note with fields (title, category, body, source, date observed) Then inputs are validated (required fields present, lengths within limits) and stored append-only And the note's serialized content receives a content hash and checksum and is stored encrypted at rest And an immutable ledger entry is written with the note's hash, uploader identity, timestamp, and schema version And the note appears immediately and is filterable by category
Append-Only Enforcement and Ledger Logging
Given a dispute with existing evidence items When any user attempts to edit or delete an existing evidence item Then the system blocks destructive changes and instructs the user to add a new item or a redaction overlay instead And the blocked attempt is recorded in the ledger with actor identity, timestamp, and reason And any new addition is recorded as a separate immutable ledger entry, preserving full history
Redaction via Additive Overlay
Given a stored evidence document or image When a user applies redaction overlays to specific regions or fields and saves Then the system creates a separate overlay artifact with its own content hash and checksum, linked to the original evidence, without altering original bytes And the ledger records the overlay addition with uploader identity, timestamp, linkage to the original, and redaction reason And previews render the original with overlay applied; downloads allow "with redactions" or "original" selections, each verifying against stored checksums
Content Hashing, Encryption, and Audit Reproducibility
Given any new evidence (file, URL reference, or structured note) is added When the system persists the evidence Then the evidence is referenced by an immutable content hash with a recorded checksum and stored encrypted at rest And integrity is verified on retrieval; checksum mismatches block access and are logged in the ledger with severity And when an auditor requests an export by dispute ID or list of content hashes, the system produces a bundle where each item's bytes verify against stored hashes and associated ledger entries trace uploader identity and timestamps end-to-end
API Multipart Evidence Submission
Given a client with a valid API token and a dispute ID When the client POSTs multipart/form-data containing one or more evidence files and metadata fields Then each part is validated for size and type, scanned for viruses, hashed (checksum and content hash), stored encrypted at rest, and associated to the dispute with uploader identity derived from the token and a precise timestamp And the API responds 201 Created with a list of created evidence IDs and their content hashes And an immutable ledger entry is written per item with hash, uploader identity, timestamp, and submitted source metadata
Partner Co-sign Invitations
"As a coalition lead, I want to invite partner orgs to co-sign a dispute so that we can present a unified, credible challenge backed by multiple stakeholders."
Description

Provide a secure workflow to invite external partner organizations to review a dispute and co-sign its assertion. Invitations use expiring, role-scoped tokens tied to organization identity, support SSO or email-based verification, and allow partners to add comments and their own evidence. The system tracks acceptance, declines, and pending status, shows co-sign counts and identities, sends reminders before expiration, and allows revocation by the dispute owner. All partner actions are append-only and recorded with timestamps to maintain transparency.
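A possible shape for the expiring, role-scoped invite token, sketched in Python with stdlib HMAC signing; in practice the signing secret would be org-scoped and KMS-managed, and the used-token set would live in a shared store. All names here are illustrative.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"org-scoped-signing-secret"  # illustrative; a real system would use a managed key

def issue_invite(dispute_id: str, inviting_org: str, invited_org: str,
                 role_scope: str, ttl_seconds: int = 7 * 24 * 3600) -> str:
    """Mint a signed token bound to dispute, orgs, role scope, and expiry."""
    payload = {
        "dispute_id": dispute_id,
        "inviting_org": inviting_org,
        "invited_org": invited_org,
        "scope": role_scope,  # e.g. "view,comment,evidence,co-sign"
        "exp": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def redeem_invite(token: str, used_tokens: set) -> dict:
    """Validate signature, expiry, and single-use before granting scoped access."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid signature")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > payload["exp"]:
        raise PermissionError("invitation expired")
    if token in used_tokens:
        raise PermissionError("token already used")  # single-use enforcement
    used_tokens.add(token)
    return payload
```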

Acceptance Criteria
Invite Partner with Expiring, Role-Scoped Token
Given a dispute owner selects Invite Partner and specifies the invited organization and role scope When the invitation is sent Then the system generates a single-use token bound to disputeId, invitingOrgId, invitedOrgId, and role scope with an expiry timestamp Given the invitation is created When the invite link is delivered Then the invitee can only perform actions permitted by the role scope (view dispute summary, add comments/evidence, co-sign or decline) and cannot modify dispute data or invite others Given the invite link is opened before expiry When the page loads Then the invite context (dispute title, invited org, role, expiry) is shown and access is granted Given the invite link is opened after expiry or after revocation When the page loads Then access is denied and the event is logged without exposing dispute content Given the token has been used to accept or decline When it is used again Then the action is rejected and the attempt is logged
Partner Co-sign via SSO Verification
Given a pending invitation When the invitee selects SSO and authenticates with an identity mapped to the invited organization Then the system records verificationMethod=SSO, binds the session to the invited org, and grants token-scoped access Given a verified invitee When they submit a co-sign with optional comment and evidence Then the system appends a ledger entry with action=co-sign, partnerOrgId, userId/hash, timestamp, verificationMethod, and evidence references Given a co-sign is recorded When the dispute detail is refreshed Then the co-sign count increments by 1 and the partner organization identity appears in the co-sign list Given the same partner attempts to co-sign again When they submit Then the system blocks the action and shows that the organization has already co-signed
Partner Decline via Email-Based Verification
Given a pending invitation When the invitee selects Email verification, provides an email within the invited org’s verified domains, and clicks the magic link before expiry Then the system verifies organization identity and grants token-scoped access Given a verified invitee When they submit a decline with optional comment Then the system appends a ledger entry with action=decline, partnerOrgId, timestamp, and sets invitationStatus=Declined Given a decline is recorded When the dispute detail is refreshed Then the co-sign count does not increase and the partner organization appears in the Declined list with timestamp Given the provided email domain does not match a verified domain for the invited org When the magic link is used Then access is denied and no action can be recorded
Partner Adds Comments and Evidence Without Co-signing
Given a verified invitee with a pending decision When they add a comment and upload evidence without choosing co-sign or decline Then the system appends a ledger entry with action=comment-evidence, partnerOrgId, timestamp, and keeps invitationStatus=Pending Given a comment/evidence submission is saved When the dispute detail is viewed by the owner Then the new comment and evidence references are visible and attributed to the partner organization Given a verified invitee attempts to edit or delete prior submissions When they act Then the system prevents modification (append-only) and logs the attempt Given a verified invitee attempts an action outside their role scope (e.g., editing dispute fields, inviting others) When they act Then the system denies the action and logs the attempt
Invitation Reminder Before Expiration
Given an invitation remains Pending and has 24 hours until expiry When the reminder job runs Then the system sends a reminder to the invitee and appends a ledger entry with action=reminder-sent and timestamp Given an invitation is Accepted or Declined When the reminder job runs Then no reminder is sent and no reminder entry is created Given an invitation is created with less than 24 hours until expiry When the reminder schedule is evaluated Then a single reminder is sent 1 hour before expiry (or immediately if less than 1 hour remains) and recorded in the ledger
Owner Revokes Invitation
Given a pending invitation When the dispute owner revokes the invitation Then the token is immediately invalidated, any active partner session is terminated, and invitationStatus=Revoked Given a revocation is performed When the ledger is updated Then an entry is appended with action=revoked, actor=owner, and timestamp Given a revoked invitation When the invite link is accessed Then access is denied and the attempt is logged without exposing dispute content
Append-Only Ledger and Co-sign Counts/Identities Display
Given any partner-related event occurs (invite-created, verification, comment, evidence upload, co-sign, decline, reminder-sent, revoked, expired) When it is processed Then an immutable ledger entry is appended with ISO 8601 timestamp, actor identity (inviting org or partner org), action type, and relevant metadata; no entries can be edited or deleted Given the dispute detail is viewed When partner invitation data is rendered Then the UI displays total co-sign count and a list of partner organizations grouped by status (Pending, Co-signed, Declined, Revoked, Expired) with latest action timestamps Given an attempt is made to modify or delete ledger history via UI or API When the operation is executed Then the system rejects the request and logs the tamper attempt
Immutable Dispute Ledger & Timestamps
"As an auditor, I want a tamper-evident timeline of all dispute events so that I can verify what happened, when, and by whom without relying on mutable records."
Description

Implement an append-only event log that records every dispute-related action—creation, evidence additions, co-signs, status changes, reversals, and final resolution—with RFC 3339 timestamps, actor identity, IP, and cryptographic hash chaining to make tampering evident. No historical records are edited or deleted; new events supersede prior state. The ledger is queryable by dispute ID and entry/batch ID, is visible in a timeline UI, and supports export for audits while preserving verification hashes to validate integrity outside the system.
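The hash chaining can be illustrated with a short Python sketch, assuming SHA-256 over a canonical JSON serialization of each event; the field names mirror the description (prevHash, eventHash), but the serialization choice is an assumption, not a mandated format.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_event(chain: list, event: dict) -> dict:
    """Append a dispute event with an RFC 3339 timestamp and hash chaining."""
    prev_hash = chain[-1]["eventHash"] if chain else None
    entry = dict(event)
    entry["timestamp"] = datetime.now(timezone.utc).isoformat()  # RFC 3339
    entry["prevHash"] = prev_hash
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    entry["eventHash"] = hashlib.sha256(canonical.encode()).hexdigest()
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited payload breaks the chain."""
    prev_hash = None
    for entry in chain:
        if entry["prevHash"] != prev_hash:
            return False
        payload = {k: v for k, v in entry.items() if k != "eventHash"}
        canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
        if hashlib.sha256(canonical.encode()).hexdigest() != entry["eventHash"]:
            return False
        prev_hash = entry["eventHash"]
    return True
```

The same recomputation works outside the system on an audit export, which is what makes tampering with any exported payload detectable.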

Acceptance Criteria
Append-Only Record of Dispute Creation
Given an authenticated user with permission and an existing entry or batch When the user flags the target as disputed Then the system appends a DISPUTE_CREATED event containing disputeId, targetType (entry|batch), targetId, actorId, actorIp, and an RFC 3339 timestamp And the event includes prevHash (null/absent only if this is the first event for the dispute) and eventHash derived from the canonicalized event payload And no prior events are edited or deleted And querying the ledger by disputeId returns this event as the latest for that dispute in timestamp order And recomputing the hash chain from the first event through this event validates each link (each event’s prevHash equals the previous event’s eventHash)
Evidence Attachment Event with Content Hash
Given an existing dispute When a user uploads evidence and attaches it to the dispute Then an EVIDENCE_ADDED event is appended containing evidenceId, mimeType, sizeBytes (>0), storageUri, contentSha256, actorId, actorIp, and an RFC 3339 timestamp And the event includes prevHash and eventHash consistent with the chain And the SHA-256 of the stored evidence content equals contentSha256 And no historical events are modified or removed And querying by disputeId shows the new event in chronological order after prior events
Partner Co-Sign Invitation and Acceptance Captured
Given an existing dispute and a partner contact When a user sends a co-sign invitation Then a COSIGN_INVITED event is appended containing inviteId, partnerIdentity, actorId, actorIp, RFC 3339 timestamp, prevHash, and eventHash And no prior events are altered When the partner accepts the invitation Then a COSIGN_ACCEPTED event is appended containing inviteId, partnerIdentity, signerId (or email), signerIp, RFC 3339 timestamp, prevHash, and eventHash And recomputing the chain from the first to latest event validates all links
Status Changes and Final Resolution as Events Only
Given a dispute in Open status When a user updates the status to UnderReview Then a STATUS_CHANGED event is appended containing fromStatus=Open, toStatus=UnderReview, reasonCode (if provided), actorId, actorIp, RFC 3339 timestamp, prevHash, and eventHash And no prior status events are editable When the dispute is resolved Then a RESOLUTION_ADDED event is appended containing resolution (e.g., Resolved/Rejected), resolutionNote (optional), actorId, actorIp, RFC 3339 timestamp, prevHash, and eventHash And the current status derived from the latest status/resolution event matches the UI/API reported status
Reversal via Compensating Event (No Deletion)
Given a previously appended dispute-related event that was made in error When an authorized user initiates a reversal with a reason Then a REVERSAL event is appended referencing reversedEventId, including reason, actorId, actorIp, RFC 3339 timestamp, prevHash, and eventHash And the referenced event remains in the ledger unchanged And derived state marks the referenced event as superseded by the reversal And the hash chain remains valid when recomputed end-to-end
Query, Timeline UI, and Audit Export with Verifiable Hashes
Given a disputeId or a target entry/batch ID When the ledger is queried via API Then the response returns all related events ordered by timestamp ascending, each with actorId, actorIp, RFC 3339 timestamp, prevHash, and eventHash And the timeline UI renders the same ordered events with their timestamps and actors visible When an audit export is requested for the same dispute Then the export file contains all events and their hashes sufficient to recompute and validate the chain externally And altering any exported event payload causes external recomputation to fail (prevHash != prior eventHash), demonstrating tamper evidence
Resolution Workflow & Outcomes
"As a review lead, I want a structured resolution workflow so that disputes are closed consistently with documented outcomes and accountability."
Description

Provide a guided resolution process with discrete states (Disputed, Under Review, Resolved—Upheld, Resolved—Dismissed, Withdrawn) and assignable reviewers. Require resolution notes, outcome codes, and optional corrective actions when closing a dispute, and automatically notify stakeholders of the outcome. Resolutions do not alter original records; they append outcome events and visually update seals to reflect final status. Allow controlled reopen with justification, preserving the full history. SLA timers and dashboards highlight aging disputes for timely handling.
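One way to express the workflow is a transition table plus a guard on reopen justification, sketched below in Python; the state names come from the description, and the 20-character minimum mirrors the acceptance criteria that follow. Ledger writes and notifications are omitted.

```python
# Illustrative transition table for the resolution workflow.
ALLOWED_TRANSITIONS = {
    "Disputed": {"Under Review"},
    "Under Review": {"Resolved—Upheld", "Resolved—Dismissed", "Withdrawn"},
    "Resolved—Upheld": {"Under Review"},      # controlled reopen
    "Resolved—Dismissed": {"Under Review"},   # controlled reopen
    "Withdrawn": {"Under Review"},            # controlled reopen
}

def transition(current: str, target: str, justification: str = "") -> str:
    """Validate a state change; reopening a final state requires a justification."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"transition {current} -> {target} not allowed")
    reopening = current.startswith("Resolved") or current == "Withdrawn"
    if reopening and len(justification) < 20:
        raise ValueError("reopen requires a justification of at least 20 characters")
    # a real implementation would append a ledger event and notify stakeholders here
    return target
```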

Acceptance Criteria
Start Review and Assign Reviewer
- Given a dispute is in Disputed state and the user has Dispute Manager or Reviewer role
- When the user clicks "Start Review" and selects an assignee
- Then the dispute state changes to Under Review and the assignee is recorded
- And the transition is timestamped and added to the immutable ledger
- And the review SLA timer starts
- And a notification is sent to the assignee
- And the action is blocked if no assignee is selected
Close Dispute with Outcome and Required Metadata
- Given a dispute is in Under Review and the user is the assigned reviewer or a Dispute Manager
- When the user clicks "Close Dispute", selects one outcome (Upheld, Dismissed, Withdrawn), selects an outcome code, and enters resolution notes of at least 20 characters (corrective actions optional)
- Then an outcome event is appended to the ledger with outcome, outcome code, notes, any corrective actions, actor, and timestamp
- And the dispute state updates to the corresponding final state
- And the original disputed record remains unchanged
- And the seal visual updates to the final state
- And stakeholder notifications of the outcome are sent
- And the review SLA timer stops
Controlled Reopen with Justification
- Given a dispute is in a final state (Resolved—Upheld, Resolved—Dismissed, or Withdrawn) and the user has Dispute Manager role
- When the user clicks "Reopen" and enters a justification of at least 20 characters
- Then a reopen event with justification, actor, and timestamp is appended to the ledger
- And the state changes to Under Review
- And prior resolution events remain intact and visible in history
- And stakeholder notifications of the reopen are sent
- And the review SLA timer restarts (previous elapsed time preserved in history)
Immutable Ledger Event Trail
- Given any dispute with one or more events
- When viewing the dispute history
- Then all events are listed in chronological order with type, actor, timestamp, and event data
- And API/UI prohibit editing or deleting existing events; only new events can be appended
- And attempts to modify historical events are rejected and logged
- And the original disputed record content is read-only and unchanged by resolution actions
SLA Timers and Aging Dashboard
- Given disputes exist in various states
- When viewing the SLA dashboard
- Then each dispute shows time since creation and time in current state
- And disputes breaching configured thresholds are highlighted and sortable
- And filters for status, reviewer, and age buckets (e.g., 0–24h, 24–72h, >72h) are available
- And counts of open disputes by age bucket and reviewer are displayed
Stakeholder Outcome Notifications
- Given an outcome event is saved for a dispute
- When notifications are dispatched
- Then subscribed stakeholders receive a notification containing dispute ID, final state, outcome code, and link to the ledger record
- And delivery attempts are logged with success/failure status
- And users’ notification preferences (channel and opt-out) are respected
Visual Seals Reflect Final Status Across UI
- Given a dispute has a current state
- When viewing lists, detail pages, and action pages
- Then the dispute seal shows the correct label and color/icon for the current state (Disputed, Under Review, Resolved—Upheld, Resolved—Dismissed, Withdrawn)
- And a tooltip/accessibility label describes the state
- And visual updates propagate within 30 seconds of state change across all relevant views
Audit Views & Exports
"As a grant compliance manager, I want clear audit views and exportable dispute packages so that I can satisfy oversight requests quickly with complete, verifiable records."
Description

Offer dedicated dispute views with filters (state, age, owner, partner co-sign count, reason code, campaign) and sortable lists, plus per-record timelines that compile all related events and evidence. Provide exports in CSV and JSON with optional zipped audit packs containing evidence files, co-sign attestations, and a signed ledger snapshot with verification hashes. Include shareable, access-controlled permalinks for auditors and regulators that respect data retention and privacy settings.

Acceptance Criteria
Dispute View: Filter and Sort
Given at least 500 disputed records across multiple states, owners, co-sign counts, reason codes, and campaigns exist When the user applies filters: state="Disputed", age_days between 0 and 30, owner=me, cosign_count >= 2, reason_code="DATA_MISMATCH", campaign="AB-123" Then the list displays only records that satisfy all filters and the total count equals the number of displayed records And results render within 2 seconds at P95 for 50k-record datasets And clearing any single filter updates the result set accordingly And sorting by age_days ascending toggles to descending on second click and persists across pagination And sorting by any visible column updates the order without altering the applied filters And an explicit "No results" state is shown when zero records match
Per-Record Timeline Integrity
Given a disputed record with events: flagged, evidence_uploaded (2 files), cosign_invited (2 partners), cosign_added (1 partner), comment, resolved When the timeline is viewed Then events are displayed in reverse chronological order and include event type, actor, and ISO 8601 UTC timestamp with millisecond precision And each evidence item shows filename, size, mime_type, and a SHA-256 hash and can be downloaded if the viewer has permission And the ledger transaction id and verification hash are shown for each mutating event And edits are not permitted; any correction appears as a new event with a link to the superseded event And the computed cosign_count matches the number of distinct cosign_added events
CSV Export from Filtered Disputes
Given a filtered dispute view with N results and the user selects "Export CSV" When the export completes Then a UTF-8 RFC4180 CSV is downloaded with a header row and N data rows And columns include: dispute_id, record_id, state, owner_id, reason_code, campaign_id, created_at, age_days, cosign_count, status, latest_event_at, verification_hash, permalink And date-times are ISO 8601 UTC; numbers use dot decimal; text is quoted when needed; line endings are LF And only records matching the current filters are included, in the same sort order as the view And if N=0 the CSV contains only the header row And P95 generation time is under 5 seconds for up to 100k rows
JSON Export with Optional Timelines
Given a filtered dispute view and the user selects "Export JSON" When the export completes Then a JSON file is downloaded containing an array of dispute objects with properties: dispute_id, record_id, state, owner_id, reason_code, campaign_id, created_at, age_days, cosign_count, status, latest_event_at, verification_hash, permalink And when include_timeline=true is selected, each object also includes timeline_events[] with event_type, actor_id, occurred_at (ISO 8601 UTC), ledger_txn_id, verification_hash, and evidence[] items (filename, size_bytes, mime_type, sha256) And the JSON validates against the published JSON Schema versioned in the API docs And only records matching the current filters are included, in the same sort order as the view And P95 generation time is under 5 seconds for up to 100k objects
Audit Pack ZIP Composition and Verification
Given the user requests an Audit Pack with options include_evidence=true, include_cosign_attestations=true, include_ledger_snapshot=true When the pack is generated Then a ZIP is delivered containing: export.csv, export.json, evidence/ files, attestations/ files, ledger_snapshot.json, manifest.json, and checksums.txt And manifest.json lists every file path with SHA-256 checksum, byte size, and last_modified; checksums.txt provides the same in a standard format And ledger_snapshot.json is signed; a detached signature file ledger_snapshot.sig and the public key location are included for verification And all file checksums in the manifest match the files in the ZIP upon verification And missing or redacted evidence appears as placeholder entries noted in manifest.json with reason codes
Shareable Audited Permalinks and Access Control
Given a user generates a shareable permalink for a specific dispute view or record with scope=view and expiry=14 days When an auditor with the link accesses it before expiry Then access is granted without edit capabilities, all content reflects the same filters and sort, and an access log entry is recorded And if the link is revoked or expired, subsequent access returns HTTP 410 Gone with an explanatory message And permissions enforce data visibility identical to the owner's configured privacy settings; restricted fields are redacted And downloads via the permalink are limited to exports and audit packs; write operations are blocked
Privacy and Retention Enforcement in Views & Exports
Given organizational privacy settings mark fields A, B as restricted and a retention policy removes evidence files older than 12 months When viewing disputes, exporting CSV/JSON, or generating an Audit Pack Then restricted fields are redacted or omitted consistently across the UI, exports, permalinks, and manifests; redactions include a reason code and policy reference And evidence older than the retention threshold is not included; timeline shows a redacted placeholder with removal timestamp And hash verifications remain valid for remaining files and events; counts and totals reflect only retained items
Role-based Permissions & Access Controls
"As an organization admin, I want granular permissions over dispute actions so that sensitive processes remain secure and compliant across teams and partners."
Description

Enforce fine-grained permissions for creating disputes, viewing sealed data, adding evidence, inviting co-signers, changing states, and resolving disputes. Access controls respect organizational boundaries, campaign scopes, and least-privilege principles, with sensitive evidence gated behind additional consent screens. All permission checks are logged, and configuration supports default roles plus custom policies to meet different nonprofit governance models.
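A default-deny permission check with per-decision audit logging might look like the Python sketch below; the policy fields (org, campaign, subjects, actions) are assumptions chosen to illustrate scoping and least privilege, not a prescribed schema.

```python
from datetime import datetime, timezone

def check_permission(audit_log: list, policies: list, subject: str, action: str,
                     resource: str, scope: dict) -> bool:
    """Default-deny: allow only if an active, in-scope policy matches; log every decision."""
    decision, policy_ref = "Deny", None
    for policy in policies:
        if (policy.get("active")
                and action in policy.get("actions", [])
                and policy.get("org") == scope.get("org")
                and policy.get("campaign") in (None, scope.get("campaign"))
                and subject in policy.get("subjects", [])):
            decision, policy_ref = "Allow", policy.get("id")
            break
    audit_log.append({
        "subject": subject, "action": action, "resource": resource,
        "scope": scope, "decision": decision, "policy": policy_ref,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return decision == "Allow"
```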

Acceptance Criteria
Creator creates a dispute within their campaign
Given user U has permission dispute.create within organization O and campaign C and resource R belongs to O and C and no active dispute exists for R When U submits "Create Dispute" for R Then a dispute D is created with status "Open", immutable ledger event "DISPUTE_CREATED" is recorded with U and timestamps, and the response includes D.id and status "Open" And a permission-check audit log is written with subject U, action dispute.create, resource R, scope {O,C}, decision Allow, and policy reference Given user U lacks dispute.create or R is out of scope When U submits "Create Dispute" for R Then the request is rejected with 403, no dispute is created, and an audit log records decision Deny with reason
Viewer accesses sealed dispute data with consent gate
Given user U has permission dispute.view.sealed within organization O and campaign C for dispute D and D contains evidence E marked sensitive When U first attempts to view E in the current session Then a consent screen is displayed describing sensitivity and data-use terms, U must explicitly confirm to proceed, and upon confirmation E is displayed And an audit log records consent with timestamp, user, dispute D, and evidence E, and consent persists for the current session and is cleared on logout or after inactivity timeout as configured Given U lacks dispute.view.sealed or is out of scope When U attempts to view E Then the content remains masked, the request is rejected with 403, and an audit log records decision Deny
Contributor adds evidence to an open dispute
Given user U has permission dispute.evidence.add within organization O and campaign C for dispute D in status "Open" or "Under Review" When U uploads evidence E and optionally marks E as sensitive Then E is attached to D, immutable ledger event "EVIDENCE_ADDED" is recorded with U and timestamps, the response confirms E.id, and a permission-check audit log is written with decision Allow and policy reference And if E is marked sensitive, future views of E require the consent gate Given U lacks dispute.evidence.add or D is out of scope When U attempts to upload E Then the request is rejected with 403, no evidence is attached, and an audit log records decision Deny
Owner invites co-signers with boundary enforcement
Given user U has permission dispute.cosign.invite within organization O and campaign C for dispute D and partner organization P is authorized for campaign C per configuration When U sends invitations to users V1..Vn in O or P Then invitations are created, recipients receive invite tokens, immutable ledger event "COSIGN_INVITED" is recorded, the response lists invitees, and a permission-check audit log is written with decision Allow and policy reference Given an invite target belongs to a non-authorized organization or outside campaign scope When U attempts to invite that target Then the request is rejected with 403 for those targets, no invites are issued to them, and an audit log records decision Deny with reason "boundary violation"
Moderator changes dispute state using allowed transitions
Given user U has permission dispute.state.change within organization O and campaign C for dispute D and workflow policy allows transition from current_state S to target_state T When U submits a state change from S to T Then D.state updates to T, immutable ledger event "STATE_CHANGED" records S->T, U, and timestamps, the response reflects T, and a permission-check audit log is written with decision Allow and policy reference Given the transition S->T is not allowed by policy or U lacks permission When U attempts the state change Then the request is rejected with 403, D.state remains S, and an audit log records decision Deny with reason
Resolver finalizes and seals a dispute
Given user U has permission dispute.resolve within organization O and campaign C for dispute D and any required co-sign approvals per policy have been collected When U marks D as Resolved and applies the Sealed flag Then D.state becomes "Resolved", sealed=true, sealed_at timestamp is set, sealed_by=U, immutable ledger event "DISPUTE_RESOLVED" is recorded, and subsequent access to sealed content requires dispute.view.sealed permission and the consent gate for sensitive items And a permission-check audit log is written with decision Allow and policy reference Given approvals are incomplete or U lacks dispute.resolve When U attempts to resolve and seal Then the request is rejected with 403, D remains unsealed, and an audit log records decision Deny with reason
Admin configures custom RBAC policy overriding defaults
Given user U has permission rbac.policy.manage for organization O When U creates a custom policy that restricts action dispute.evidence.add to role "Reviewer" within campaign C and assigns it to campaign C Then users without role "Reviewer" in C are denied dispute.evidence.add, users with the role are allowed, and decisions reference the custom policy id And default roles continue to apply to other campaigns not bound to the custom policy, and all permission checks (allow and deny) are logged with subject, action, resource, scope, decision, and policy reference Given the custom policy is deactivated or misconfigured When a permission decision is evaluated Then the engine falls back to default-deny, returns 403, and logs decision Deny with reason "no matching allow"

GrantPack

Create a signed, tamper-evident export bundle for a campaign period or grant: a manifest, deduped counts, receipt set, and easy verification instructions. Delivers audit-ready proof in minutes and eliminates report back-and-forth with funders.

Requirements

Period Scope Lock & Data Snapshot
"As a campaign director, I want to define and lock the exact campaign period and scope so that my export reflects a reproducible, dispute-proof snapshot for grant reporting."
Description

Enable selection of a campaign period or grant-defined scope with precise time boundaries and timezone handling, then generate an immutable data snapshot identified by a unique snapshot ID. The snapshot freezes all included actions (calls, emails, form submissions) across RallyKit modules, ensuring reproducibility and dispute-proof reporting. Provide inclusion/exclusion filters (channels, dispositions, tags, bills, districts) and preview counts prior to locking. Apply data completeness checks (e.g., missing districts, unmatched geocodes, stale bill status) and block snapshot finalization until issues are acknowledged or resolved. Persist snapshot metadata in the manifest, including query parameters, data pipeline version, and environment, so regenerated exports can hash-match the original. Outcome: a consistent, auditable basis for all subsequent calculations and exports.
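The timezone and boundary handling can be sketched as follows (Python 3.9+, using zoneinfo); the example assumes naive local datetime strings and omits the explicit treatment of nonexistent or ambiguous local times during DST transitions that the criteria below require.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def period_bounds_utc(start_local: str, end_local: str, tz_id: str) -> tuple:
    """Normalize local period boundaries to UTC for a half-open [start, end) query.

    tz_id is an IANA timezone ID (e.g. "America/Chicago").
    """
    tz = ZoneInfo(tz_id)
    start = datetime.fromisoformat(start_local).replace(tzinfo=tz)
    end = datetime.fromisoformat(end_local).replace(tzinfo=tz)
    return start.astimezone(ZoneInfo("UTC")), end.astimezone(ZoneInfo("UTC"))

def in_scope(action_ts_utc: datetime, start_utc: datetime, end_utc: datetime) -> bool:
    # inclusive of start, exclusive of end: [start, end)
    return start_utc <= action_ts_utc < end_utc
```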

Acceptance Criteria
Accurate Period Selection With Timezone and DST Handling
Given an organization timezone set via IANA TZ database ID When a user selects a local Start and End datetime for the period Then the system displays normalized ISO-8601 with offset for both local and UTC boundaries And the query interval is inclusive of Start and exclusive of End [Start, End) And actions exactly at the Start timestamp are included; actions exactly at the End timestamp are excluded And during DST transitions, invalid local times cannot be selected; ambiguous times require explicit offset selection; resulting UTC bounds are correct And the chosen timezone ID is persisted in the snapshot manifest
Filterable Scope With Preview Counts Prior to Lock
Given inclusion and exclusion filters for channels, dispositions, tags, bills, and districts When the user applies filters and requests a preview Then totals and per-channel counts reflect only the filtered actions And within a filter type, multiple selections are OR; across types, filters are AND; explicit exclusions override inclusions And both deduped supporter count and raw action count are displayed And preview recomputes within 2 seconds for datasets up to 100k actions (p95) And locking is disabled if the preview is older than 5 minutes; a refresh is required And final locked counts must match the last refreshed preview
Snapshot Finalization Generates Immutable Snapshot ID and Freezes Data
Given the preview is current and no unacknowledged blocking checks remain When the user finalizes the snapshot Then a unique snapshot ID is generated and visible to the user And the snapshot content (included action IDs and derived counts) becomes immutable And subsequent edits to underlying actions/metadata do not alter the snapshot’s stored results or exported outputs And only non-functional fields (title/notes) are editable post-finalization; attempts to modify scope/content are blocked with clear errors
Blocking Completeness Checks With Acknowledge/Resolve Workflow
Given data quality checks run for the selected scope (e.g., missing districts, unmatched geocodes, stale bill status > 24h) When issues are detected Then each issue type shows counts and drill-down details And the Finalize action is disabled while unacknowledged blockers exist And resolving issues updates counts upon preview refresh And acknowledging an issue requires reason text, user, and timestamp and records the acknowledgement in the manifest And after all blockers are resolved or acknowledged, Finalize becomes enabled
Manifest Metadata Persistence and Hash-Match Regeneration
Given a finalized snapshot When an export bundle is generated Then the manifest includes: snapshotId, createdAt (ISO-8601 with offset), environment, dataPipelineVersion, org/campaign/grant IDs, query parameters (timezone, local and UTC bounds), all filters, issue acknowledgements, and source dataset versions And file ordering and serialization for manifest and receipts are deterministic And regenerating the export for the same snapshot ID yields identical file-level SHA-256 hashes and an identical bundle hash
Concurrency and Permission Controls for Snapshot Creation
Given role-based permissions are enforced When a user without Snapshot.Create permission attempts to create or finalize a snapshot Then the action is denied and audited And only one in-progress snapshot per campaign/grant scope can exist; concurrent attempts receive a lock notice that includes lock owner and timestamp And finalize requests are idempotent using a client-supplied idempotency key; duplicate submissions do not create duplicates
Performance and Stability at Nonprofit-Scale Data Volumes
Given datasets up to 200k actions within the selected scope When generating preview and finalizing the snapshot Then preview completes within 2 seconds and finalization completes within 90 seconds at the 95th percentile And the background job retries at most once with exponential backoff and surfaces actionable errors on failure And the UI shows job progress (queued, running, completed, failed) and remains usable without cancelling the job when navigating away
Configurable Deduplicated Counts Engine
"As a grants manager, I want configurable deduplicated counts so that I can report unique supporter impact according to each funder’s rules."
Description

Produce deduplicated impact counts aligned to funder rules, with configurable dedupe strategies (by unique supporter, by district, by legislator, by bill, by channel, daily-unique, action-type-specific). Generate both summary totals and breakdowns (campaign, bill, district, legislator, channel) and annotate each metric with the dedupe rules used. Detect anomalies (e.g., unusually high repeats from a single device or address) and flag them in the manifest. Persist the exact dedupe configuration and data dictionary in the manifest for transparency and reproducibility. Integrate with RallyKit’s legislator matching and bill status modules to ensure counts reflect district-specific actions and current bill states. Outcome: clear, funder-aligned metrics that withstand audit scrutiny.
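A configurable dedupe engine can be reduced to a strategy-to-key-function mapping, as in the Python sketch below; the strategy names and record fields are illustrative, and each reported metric carries the strategy annotation the description calls for.

```python
# Each key function maps an action record to the identity that should count
# once under the chosen funder rule; record field names are assumptions.
DEDUPE_STRATEGIES = {
    "unique_supporter": lambda a: (a["supporter_id"],),
    "by_district":      lambda a: (a["supporter_id"], a["district"]),
    "by_legislator":    lambda a: (a["supporter_id"], a["legislator_id"]),
    "by_bill":          lambda a: (a["supporter_id"], a["bill_id"]),
    "by_channel":       lambda a: (a["supporter_id"], a["channel"]),
    "daily_unique":     lambda a: (a["supporter_id"], a["timestamp"][:10]),  # YYYY-MM-DD
    "action_type":      lambda a: (a["supporter_id"], a["action_type"]),
}

def deduped_count(actions: list, strategy: str) -> dict:
    """Count unique identities under the selected strategy and annotate the result."""
    key_fn = DEDUPE_STRATEGIES[strategy]
    seen = {key_fn(a) for a in actions}
    return {
        "strategy": strategy,          # dedupe-rule annotation stored with the metric
        "raw_actions": len(actions),
        "deduped_count": len(seen),
    }
```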

Acceptance Criteria
Apply Funder-Specific Dedupe Rules to Produce Summary Totals
Given a grant reporting period and a funder profile specifying a dedupe strategy (e.g., unique supporter by email+phone; daily-unique by channel; action-type-specific) When the engine runs for that period Then it outputs summary totals deduplicated per the selected strategy And Then each total includes an annotation of the dedupe rule name, rule parameters, and a stable Rule ID And Then re-running with the same dataset and configuration produces identical totals and annotations And Then switching to a different dedupe strategy changes totals and updates annotations accordingly
Generate Dedupe-Aware Breakdowns by Campaign, Bill, District, Legislator, and Channel
Given a dataset with multiple campaigns, bills, districts, legislators, and channels When the engine runs with a specified dedupe strategy Then it produces breakdown counts for each dimension (campaign, bill, district, legislator, channel) And Then the sum of each breakdown equals the corresponding summary total for that dimension scope with zero discrepancy And Then each breakdown metric is annotated with the dedupe rule used And Then counts reflect merged/deduped supporters consistently across dimensions
Persist Exact Dedupe Configuration and Data Dictionary in Manifest
Given a completed count run When the manifest is generated Then it includes the exact dedupe configuration JSON (match fields, windows, grouping keys, exclusions, version) And Then the manifest contains a data dictionary defining each metric, its calculation method, and unit And Then the manifest stores a configuration checksum and dataset snapshot hash to enable reproducibility verification And Then the manifest records the engine version and timestamp of the run
Detect and Flag Anomalies in Repeated Actions
Given anomaly thresholds are set (defaults provided) When unusually high repeats from a single device fingerprint, IP address, email, phone, or postal address are detected within a rolling window Then the engine flags anomalies with type, threshold breached, affected identifiers, and counts And Then anomalies do not suppress counting but are included in an Anomalies section of the manifest And Then thresholds and detectors used are recorded in the manifest And Then anomalies are available as a machine-readable list for audit
Integrate Legislator Matching and Bill Status Into Counts
Given actions with supporter location and timestamps When the engine counts district-specific actions Then it uses the legislator matching module effective at the action timestamp to attribute to the correct district and legislator And Then actions tied to bill scripts are attributed to the correct bill and chamber reflecting the bill status at the action timestamp And Then actions with missing or ambiguous matches are excluded from matched counts and reported separately with reasons in the manifest And Then breakdowns by district and legislator reconcile with the matching module outputs
Export Deduplicated Counts and Manifest via GrantPack
Given a grant period selection When the GrantPack export is generated Then it includes the deduplicated summary totals, all breakdowns, dedupe annotations, anomaly flags, and the persisted configuration and data dictionary And Then the export bundle includes clear verification instructions to reproduce counts using the stored configuration and dataset snapshot And Then the manifest is signed and tamper-evident, and any modification invalidates the signature And Then the bundle passes automated verification on re-import, yielding identical counts
Receipt Set Assembly with PII Redaction
"As a nonprofit director, I want a redacted receipt set that proves each action occurred without exposing supporter PII so that I can satisfy audits and privacy obligations."
Description

Compile a verifiable receipt set for all in-scope actions, capturing timestamps, action type, disposition, legislator or target, district, script version, and transport identifiers (e.g., message IDs, call session IDs). Automatically apply configurable redaction profiles that remove or mask PII (names, emails, phone numbers, addresses) while preserving stable pseudonymous IDs and per-record salted hashes. Output options include paginated PDFs for human review and CSV/JSONL for programmatic validation, with file-level and record-level checksums. Provide sampling modes (e.g., 100% or funder-requested N% sample) and maintain referential integrity between receipts and reported counts. Store an encrypted internal lookup that maps pseudonymous IDs to originals for authorized internal audits only. Outcome: evidence that proves actions occurred without exposing supporter data.
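A minimal sketch of the pseudonymization and per-record hashing, in Python with stdlib primitives; the HMAC key, field choices, and salt length are assumptions, and the encrypted internal lookup and redaction profiles are out of scope here.

```python
import hashlib
import hmac
import json
import secrets

ORG_PSEUDONYM_KEY = b"per-organization secret"  # illustrative; would live in a KMS

def supporter_pseudo_id(email: str) -> str:
    """Stable pseudonymous ID: same supporter -> same ID, without exposing the email."""
    return hmac.new(ORG_PSEUDONYM_KEY, email.lower().encode(), hashlib.sha256).hexdigest()[:16]

def build_receipt(redacted_payload: dict) -> dict:
    """Attach a per-record salted hash so each receipt can be re-verified later."""
    salt = secrets.token_hex(16)
    canonical = json.dumps(redacted_payload, sort_keys=True)
    receipt_hash = hashlib.sha256((canonical + salt).encode()).hexdigest()
    return {**redacted_payload, "salt": salt, "receipt_hash": receipt_hash}

def verify_receipt(receipt: dict) -> bool:
    """Recompute the salted hash from the stored salt and redacted fields."""
    payload = {k: v for k, v in receipt.items() if k not in ("salt", "receipt_hash")}
    canonical = json.dumps(payload, sort_keys=True)
    expected = hashlib.sha256((canonical + receipt["salt"]).encode()).hexdigest()
    return expected == receipt["receipt_hash"]
```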

Acceptance Criteria
All Required Receipt Fields Captured
Given an in-scope action record When the receipt set is assembled Then the receipt includes: receipt_id, supporter_pseudo_id, action_timestamp (ISO 8601 UTC), action_type ∈ {call, email, tweet, letter, meeting, sms}, disposition ∈ {completed, failed, abandoned, queued, bounced}, target_type, target_identifier, district, script_version, transport_identifier And Then 100% of receipts pass schema validation with zero nulls in required fields And Then any record with invalid or missing required fields causes the build to fail and produces an error report listing record identifiers and reasons
Configurable PII Redaction Profiles Applied
Given a selected redaction profile (strict, standard, or custom) When exporting receipts Then PII fields (name, email, phone, postal_address) are removed or masked per profile rules And Then no PII appears in any export artifact (PDF, CSV, JSONL, manifest) as verified by automated pattern scan; allowlist exceptions count = 0 And Then supporter_pseudo_id and transport identifiers remain present and unchanged And Then the applied redaction profile id and version are recorded in the manifest
Stable Pseudonymous IDs and Per-Record Hashes
Given multiple receipts for the same supporter within the campaign period When receipts are generated across all output formats Then supporter_pseudo_id is identical across those receipts and files (PDF, CSV, JSONL) And Then each receipt includes receipt_hash computed as SHA-256(redacted_payload + unique per-record salt) And Then recomputing receipt_hash using stored salt matches for 100% of receipts And Then no two receipts share the same receipt_hash within the export
Multi-format Outputs with Checksums
Given an export build request When the export completes Then outputs include paginated PDF, CSV, JSONL, and manifest.json And Then each output file has a SHA-256 checksum in the manifest and verifies successfully And Then each receipt includes receipt_id and receipt_hash to enable cross-file joins And Then PDF shows page X of Y and displays receipt_id on each receipt page And Then CSV and JSONL object counts are equal and match manifest.total_receipts
Sampling Modes and Referential Integrity
Given a sampling mode of 100% or N% with a provided random seed When building the receipt set Then sample size equals round_half_up(N% of eligible receipts) and sampled receipt_ids are reproducible with the same seed And Then reported counts (by action_type, disposition, target chamber, district) equal aggregates from the corresponding receipt set with tolerance = 0 And Then every reported count bucket references the exact set of receipt_ids that produced it, with zero orphans and zero duplicates And Then the manifest flags sampling with percent, seed, and population size
Encrypted Internal Lookup and Access Controls
Given an internal lookup mapping supporter_pseudo_id to original PII When generating the export Then the lookup is excluded from all export artifacts and stored encrypted at rest (AES-256-GCM via KMS) And Then access requires Audit.Lookup role and MFA, and all accesses are logged with timestamp, actor, purpose, and affected IDs And Then retrieval for a test set of supporter_pseudo_ids returns original PII with 100% accuracy And Then unauthorized access attempts are denied and logged; zero lookup records appear in external outputs
Cryptographic Signing & Tamper-Evident Bundle
"As a funder reviewer, I want a signed, tamper-evident export so that I can trust the report hasn’t been altered."
Description

Package the manifest, deduped counts, receipt set, and checksums into a single archive (e.g., ZIP) and make it tamper-evident using strong cryptography. Compute SHA-256 checksums for every file and a bundle-level hash. Sign the bundle and checksums with an organization-scoped key (e.g., Ed25519/RSA) managed via KMS, recording key fingerprint, algorithm, and signing timestamp in the manifest. Prevent modifications post-signing by validating checksum concordance on download and at verification time. Store the bundle hash in RallyKit for future cross-checks and expose signature status in the UI. Outcome: a trustable artifact that signals any alteration instantly.
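The checksum-and-sign flow might look like the Python sketch below, which assumes the third-party cryptography package for Ed25519; in production the private key would never leave the KMS, so the local sign call would be replaced by a KMS signing request. The file layout and manifest field names are illustrative.

```python
import hashlib
import json
from pathlib import Path
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_file(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sign_bundle(bundle_dir: str, private_key: Ed25519PrivateKey) -> dict:
    """Compute per-file SHA-256 checksums, a bundle-level hash, and a signature."""
    files = sorted(p for p in Path(bundle_dir).rglob("*") if p.is_file())
    checksums = {str(p.relative_to(bundle_dir)): sha256_file(p) for p in files}
    # bundle-level hash over a deterministic serialization of the checksum map
    canonical = json.dumps(checksums, sort_keys=True).encode()
    bundle_hash = hashlib.sha256(canonical).hexdigest()
    signature = private_key.sign(canonical)  # in production this goes to the KMS
    return {
        "algorithm": "Ed25519",
        "file_checksums": checksums,
        "bundle_sha256": bundle_hash,
        "signature_hex": signature.hex(),
    }
```

Sorting the file list and serializing the checksum map with sorted keys keeps the bundle hash deterministic, which is what lets a regenerated export hash-match the original.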

Acceptance Criteria
Generate and Sign Complete GrantPack Bundle
Given an authorized organization admin selects a campaign period and initiates GrantPack export When the system builds the archive Then the archive contains exactly the manifest, deduped counts, receipt set, file checksums, and signature artifacts And SHA-256 checksums are computed for every included file And a bundle-level SHA-256 hash is computed and stored with the bundle record in RallyKit And the archive and checksum manifest are signed using the organization-scoped KMS-managed key And the system returns a downloadable, signed bundle with status "Signed"
Manifest Cryptographic Metadata Recorded
Given a bundle has been generated and signed When the manifest is written Then the manifest records: organization identifier, signing algorithm (Ed25519 or RSA-2048+), signing key fingerprint, signing timestamp in ISO 8601 UTC, bundle-level SHA-256, and a map of per-file SHA-256 checksums And the manifest is included inside the bundle And the recorded fields are non-empty, well-formed, and consistent with computed values
Integrity Validation on Download
Given a user requests to download a previously signed bundle When the server computes the current bundle SHA-256 and compares it to the stored bundle hash Then if the hashes match, the download proceeds and the bundle is marked "Signature: Valid" And if the hashes do not match, the download is blocked, the UI displays "Signature: Invalid (bundle hash mismatch)", and an audit event is recorded
Tamper Detection During Verification
Given a verifier follows the included verification instructions with the organization's public key When any file within the bundle is modified, added, or removed after signing Then checksum validation fails for the changed file(s) and overall signature verification reports failure And when the bundle is unmodified, checksum validation succeeds for all files and the signature verifies against the recorded key fingerprint and algorithm
Signature Status Exposed in UI
Given the bundle list and detail views are opened When the system loads bundle metadata Then each bundle displays signature status as one of: Valid, Invalid, or Unknown, along with last verification timestamp And a "Verify Now" action triggers server-side signature and checksum verification and updates the displayed status and timestamp accordingly
KMS Key Scope and Rotation Enforcement
Given an organization with a KMS-managed signing key and historical rotated keys When a new bundle is signed Then the active org-scoped key is used, the private key never leaves KMS, and the key fingerprint is recorded And verification of historical bundles succeeds using the corresponding historical public keys And attempts to sign a bundle with a key not scoped to the organization are rejected with an error and logged
Verification Kit & Clear Instructions
"As a funder program officer, I want simple, clear verification steps so that I can independently validate the export without technical assistance."
Description

Generate human-readable verification instructions (HTML/PDF) with step-by-step guidance and copy-paste commands for Windows, macOS, and Linux to verify checksums and signatures offline. Include a lightweight verifier (CLI script or hosted page) that validates bundle hash, file checksums, and signature against the embedded public key. Provide a nontechnical summary page for funders, a FAQ for common issues (mismatched hashes, clock skew), and a QR code or short link to a hosted verification page prefilled with the bundle fingerprint. Outcome: funders can independently validate authenticity without technical assistance from the nonprofit.
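A lightweight verifier might look like the Python sketch below; it mirrors the signing sketch in the previous section, so the manifest layout here is an assumption carried over from that sketch rather than a defined format, and the PASS/FAIL messages echo the CLI behavior described in the criteria.

```python
import hashlib
import json
from pathlib import Path
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_bundle(bundle_dir: str, manifest: dict, public_key_bytes: bytes) -> str:
    """Check per-file checksums, the bundle fingerprint, and the Ed25519 signature."""
    for rel_path, expected in manifest["file_checksums"].items():
        actual = hashlib.sha256((Path(bundle_dir) / rel_path).read_bytes()).hexdigest()
        if actual != expected:
            return f"Verification: FAIL (checksum mismatch: {rel_path})"
    canonical = json.dumps(manifest["file_checksums"], sort_keys=True).encode()
    if hashlib.sha256(canonical).hexdigest() != manifest["bundle_sha256"]:
        return "Verification: FAIL (bundle fingerprint mismatch)"
    try:
        Ed25519PublicKey.from_public_bytes(public_key_bytes).verify(
            bytes.fromhex(manifest["signature_hex"]), canonical)
    except InvalidSignature:
        return "Verification: FAIL (signature invalid)"
    return "Verification: PASS"
```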

Acceptance Criteria
Offline Verification Instructions (HTML/PDF) with OS-Specific Commands
Given a generated GrantPack bundle When the Verification Kit is created Then Instructions.html and Instructions.pdf are placed at the bundle root And the documents contain step-by-step numbered guidance to verify the bundle hash, per-file checksums, and the signature And each step includes copy‑paste commands for Windows (PowerShell/certutil), macOS (shasum/gpg), and Linux (sha256sum/gpg) And all commands execute without internet access using the embedded public key and local files And the expected bundle fingerprint (SHA-256) is displayed and matches the manifest value
Lightweight Verifier (CLI/Hosted) Validates Hashes and Signature
Given the bundle When the user runs verify.sh (macOS/Linux) or verify.ps1 (Windows) with no arguments Then the script verifies the bundle hash, each file checksum, and the signature against the embedded public key And on total success it prints "Verification: PASS" and exits with code 0; on any failure it prints "Verification: FAIL" with the failing check(s) and exits with non-zero When the user opens the hosted verification page via the provided link Then the page displays the prefilled bundle fingerprint and allows drag-and-drop or file picker to verify locally in-browser; no file contents are transmitted to a server And both CLI and hosted flows produce identical pass/fail outcomes for the same bundle
Tamper-Evident Behavior and Clear Error Reporting
Given any file in the bundle is modified, deleted, or added after packaging When running the verifier CLI or hosted flow Then verification fails and identifies the specific file(s) with mismatched checksums And if the signature does not match the embedded public key Then verification fails with "Signature invalid" And if the manifest hash differs from the displayed fingerprint Then verification fails with "Bundle fingerprint mismatch" And the instructions and FAQ direct the user to stop and contact the nonprofit when a failure occurs
Nontechnical Summary Page for Funders
Given the Verification Kit is generated When opening Summary.html (and Summary.pdf) Then the content explains what the bundle is, what is being verified, and the meaning of outcomes in plain language at Flesch-Kincaid grade level <= 8 And the page includes estimated time to verify (<= 5 minutes), the QR code and short link, and a support contact And the page contains no command-line snippets and passes WCAG 2.1 AA color contrast
FAQ Covers Common Verification Issues with Remediation
Given the Verification Kit When opening FAQ.html Then entries exist for: mismatched hashes, invalid signature, clock skew/time warnings, missing permissions, Windows path/PowerShell execution policy, and unsupported archive viewer And each entry contains: symptoms, likely causes, OS-specific remediation steps, and expected result after remediation And following the "clock skew/time warnings" steps (including correcting system time) allows signature verification to succeed using the same bundle
QR Code and Short Link to Prefilled Hosted Verification Page
Given the Verification Kit When scanning the included QR code with the default camera app on iOS and Android Then it opens a short link that resolves to the hosted verification page with the bundle fingerprint prefilled And the short link length is <= 40 characters and uses HTTPS And the QR image is provided as a PNG and embedded in the PDF at >= 300 DPI for print legibility And the prefilled page prominently shows the fingerprint and provides a one-step drag-and-drop verification control, plus an offline CLI fallback link
One-Click Export & Secure Delivery
"As an organizer, I want to generate and share the export in minutes so that I can meet grant deadlines without back-and-forth."
Description

Offer a streamlined flow to create the export in one click once scope is set: show progress, allow resumable generation, and surface any blockers with guided fixes. Store generated artifacts with retention policies and generate expiring, access-controlled download links with optional password protection and separate passcode delivery. Support direct email to funders and portal-based sharing, with access logging, link revocation, and reissue. Allow deterministic regeneration using the snapshot ID, producing identical hashes when inputs are unchanged. Integrate with RallyKit org roles and sharing settings, and record delivery events in the audit trail. Outcome: fast, secure distribution that eliminates back-and-forth.
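
One way to implement expiring, tamper-evident download links is to sign the artifact ID and expiry with an HMAC, as in the sketch below; the secret name, token format, and artifact ID are illustrative assumptions, not RallyKit's actual scheme.

```python
# Sketch of expiring download-link tokens: the artifact ID and expiry are signed
# with HMAC-SHA256 so the link cannot be altered or reused past its expiry.
# SECRET_KEY and the token layout are illustrative only.
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-server-side-secret"

def make_link_token(artifact_id: str, ttl_days: int) -> str:
    expires_at = int(time.time()) + ttl_days * 86400
    payload = f"{artifact_id}:{expires_at}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def check_link_token(token: str) -> bool:
    artifact_id, expires_at, sig = token.rsplit(":", 2)
    payload = f"{artifact_id}:{expires_at}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                           # tampered token: treat as 410 Gone upstream
    return int(expires_at) > time.time()       # expired tokens also fail

# Example: a 30-day link for one bundle
token = make_link_token("grantpack-2025-001", ttl_days=30)
assert check_link_token(token)
```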

Acceptance Criteria
One-Click Export with Progress & Resume
Given export scope is finalized and the user has Export:GrantPack permission When the user clicks "Generate GrantPack" Then a server-side job starts and the UI displays current step, percent complete, and ETA, updating at least every 5 seconds And if the client disconnects or page closes, the job continues server-side and the user can reattach to view progress upon return And if the job is interrupted mid-step, a "Resume generation" action restarts from the last completed step without duplicating completed work And upon completion, the UI shows success with a snapshot ID and content hash
Blockers Surfaced with Guided Fixes
Given preflight validation runs before generation When required inputs are missing or invalid (e.g., scope not locked, invalid funder contact, incomplete receipts) Then generation is blocked and a checklist of blockers is shown with severity, description, and direct Fix links to the exact settings/forms And the user can click Recheck to re-run validations without losing prior progress And no bundle can be downloaded or shared until all blockers are resolved and validations pass
Signed Bundle, Manifest, Storage & Retention
Given a successful generation Then the bundle contains manifest.json, deduped_counts.csv, receipts.zip, and verification_instructions.txt And the bundle is cryptographically signed and includes a detached signature file referenced in the manifest And following the included instructions verifies the signature successfully against the generated bundle And the artifacts are stored immutably with an applied retention policy visible in the UI and configurable per export within org limits And modification of stored artifacts is disallowed; only new versions via regeneration are permitted And deletion occurs automatically at retention expiry with the event recorded in the audit trail
Expiring Access-Controlled Download Link with Passcode Split
Given a generated bundle When creating a download link Then the user can set expiry (1–90 days), maximum downloads, and recipient restriction (allowlist by email and/or require authenticated portal login) And the user can enable passcode protection with passcode delivered via a separate channel from the link And accessing the link requires satisfying recipient restriction and, if enabled, entering the passcode And after expiry or max downloads, the link returns 410 Gone without disclosing artifact details And each access is logged with timestamp, IP, user agent, and recipient identity (if known)
Direct Email & Portal Sharing with RBAC and Access Logging
Given the user has Share:GrantPack permission and org sharing allows external emailing When the user selects funder contacts and clicks "Send Secure Link" Then emails are sent using the org-approved template including the expiring secure link (no passcode included if split delivery is enabled) And the user can enable portal sharing to grant access to selected funder accounts; access respects their account roles And users without sufficient permissions cannot create links, send emails, or enable portal sharing; UI disables controls and API returns 403 And all sends, portal grants, views, and downloads are recorded in the audit trail with actor, action, target, timestamp, and outcome
Link Revocation and Reissue
Given an active download link When an authorized user clicks Revoke Then the link becomes unusable within 5 seconds and subsequent requests return 410 Gone And the user can issue a new link (reissue) without regenerating the bundle; the new link has a new token and version number And audit trail entries record revocation and reissue with the actor, timestamp, and optional reason
Deterministic Regeneration by Snapshot ID
Given a snapshot ID and unchanged inputs When the user invokes Regenerate Then the system produces a byte-identical bundle with the same content hash and valid signature as the original Given any input changed since the snapshot When the user invokes Regenerate Then the system produces a bundle with a different hash, and the manifest includes a diff summary of changed inputs and timestamps And the snapshot ID, content hash (e.g., SHA-256), and signature fingerprint are displayed and copyable

Swarm Builder

One-click batch creation of hundreds of unique, team-tagged QR action pages from a single template. Pre-fills UTM and partner attribution, sets district filters, quotas, and event time windows, and outputs a tidy roster for quick edits. Eliminates copy‑paste errors and gets day‑of actions launch-ready in minutes.

Requirements

One-Click Batch Page Generation
"As a campaign organizer, I want to generate hundreds of unique action pages from one template with a single click so that my team can launch day-of actions without manual setup."
Description

Enable creation of hundreds of unique action pages from a single template in one operation. Each page inherits template content (including dynamic scripts by bill status), legislator auto-matching, compliance text, and branding, while receiving unique slugs, short URLs, and downloadable QR assets. Support naming patterns, team tags, and automatic manifest generation for traceability. Ensure atomic, transactional creation with progress feedback and the ability to cancel. Target performance: generate 500 pages in under 60 seconds. Integrate with the real-time dashboard so generated pages are immediately trackable and manageable under existing campaign constructs, respecting role-based permissions.
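
A minimal sketch of the slug-generation step follows, assuming a {seq:N} token grammar inferred from the acceptance criteria below; collision handling skips to the next available sequence number, as those criteria require.

```python
# Sketch: expand a naming pattern such as "EVENT-{seq:3}" into unique slugs,
# skipping any sequence numbers already taken by earlier batches.
# The token grammar ({seq:3} = zero-padded sequence) is an assumption.
import re

def generate_slugs(pattern: str, count: int, existing: set[str]) -> list[str]:
    match = re.search(r"\{seq:(\d+)\}", pattern)
    if not match:
        raise ValueError("pattern must contain a {seq:N} token")
    width = int(match.group(1))
    slugs, seq = [], 1
    while len(slugs) < count:
        candidate = pattern.replace(match.group(0), str(seq).zfill(width))
        if candidate not in existing:          # collision: use the next available number
            slugs.append(candidate)
            existing.add(candidate)
        seq += 1
    return slugs

# EVENT-003 already exists, so the batch records the next available sequence instead
print(generate_slugs("EVENT-{seq:3}", 5, existing={"EVENT-003"}))
# ['EVENT-001', 'EVENT-002', 'EVENT-004', 'EVENT-005', 'EVENT-006']
```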

Acceptance Criteria
Atomic Batch Creation and Cancel
- Given a valid template and a request to generate N pages, When the batch job starts, Then the system performs creation transactionally so that either all N pages and their artifacts exist or none exist.
- Given a user clicks Cancel during an in-progress batch, When the system acknowledges cancellation, Then all already-created pages, slugs, short URLs, and QR assets are rolled back and no orphan records remain.
- Given any creation step fails for any page (e.g., unresolved slug collision, short URL provider error, QR generation error), When the failure occurs, Then the batch status is set to Failed, no partial artifacts remain, and a user-visible error report identifies the failing step and item index.
- Given the batch is running, When the UI displays progress, Then it shows percent complete, created count, remaining count, and estimated time to completion, updating at least once per second.
Performance: 500 Pages Under 60 Seconds
- Given a production-like environment and a standard template, When generating 500 pages, Then total batch duration is <= 60 seconds at median and <= 75 seconds at p95 across 3 consecutive runs.
- Given the batch completes, Then the manifest is generated and available for download within 3 seconds of completion.
- Given the batch runs, Then progress updates are emitted at least once per second and no update gap exceeds 2 seconds.
Template Inheritance and Legislator Auto-Matching
- Given a template with dynamic scripts by bill status, branding, compliance text, and legislator auto-matching enabled, When pages are generated, Then each page inherits these settings with no modifications.
- Given a sample of 10 generated pages across different districts, When test addresses are entered, Then legislator auto-matching returns targets for in-district addresses and blocks out-of-district addresses per filter rules.
- Given the bill status changes in the system, When a generated page is loaded, Then the dynamic script rendered matches the expected script for the current bill status.
Unique Slugs, Short URLs, and QR Assets
- Given a naming pattern "[EVENT]-{seq:3}", When 25 pages are generated, Then slugs are unique and follow the pattern EVENT-001 through EVENT-025 without gaps unless collisions exist from prior pages, in which case the next available sequence is used and recorded in the manifest.
- Given short URLs are created for each page, When resolving each short URL, Then each returns HTTP 301/302 to the correct long URL and no two pages share the same short URL.
- Given QR assets are generated, When downloading the PNG and SVG for a page, Then the encoded URL matches the page’s short URL including UTM parameters and the files are retrievable within 1 second with integrity hash matching the manifest.
Pre-Filled Attribution, Filters, Quotas, and Time Windows
- Given UTM and partner attribution defaults are provided, When pages are generated, Then each page's long and short URLs include the specified utm_source, utm_medium, utm_campaign, and partner fields as query parameters.
- Given district filters are set, When an address outside the specified district is submitted, Then the page displays an ineligible message and prevents action submission; in-district addresses proceed normally.
- Given a per-page quota of Q actions, When Q actions are completed, Then the page stops accepting new actions and displays a quota reached message and the dashboard reflects the locked status within 5 seconds.
- Given an event time window [start, end] with timezone TZ, When the current time is before start, Then the page shows a countdown and disables action; when after end, the page shows closed messaging and disables action; all times are evaluated in TZ.
Manifest and Dashboard Integration with Permissions
- Given the batch completes successfully, When the manifest is generated, Then CSV and JSON files include columns: page_id, slug, short_url, qr_png_url, qr_svg_url, team_tags, naming_pattern, utm_source, utm_medium, utm_campaign, partner, district_filter, quota, window_start, window_end, created_by, created_at, batch_id, template_id, and downloadable links return HTTP 200.
- Given the dashboard is open, When the batch completes, Then all generated pages appear under the campaign's Swarm Builder section grouped by batch_id, are searchable by slug and team tag, and live action metrics start streaming within 5 seconds of first action.
- Given role-based permissions, When a user without Create permissions attempts batch generation, Then access is denied with HTTP 403 and an audit log entry is created; users with Admin or Editor can generate and manage batches; Viewers can only download manifests and view pages.
Naming Patterns, Team Tags, and Traceability
- Given a naming pattern using tokens {seq}, {YYYYMMDD}, {TEAM}, When pages are generated with team tag "FieldTeam", Then slugs and display names render tokens correctly (e.g., Rally-20250824-FieldTeam-001) and the team tag is applied to each page.
- Given the dashboard filter is set to team tag "FieldTeam", When browsing pages, Then only pages with that tag are listed.
- Given an individual generated page is viewed in admin, When inspecting metadata, Then it displays template_id, batch_id, naming pattern used, creator, created_at, and a link back to the batch summary.
Auto UTM and Partner Attribution Prefill
"As a partnerships lead, I want UTMs and partner attribution auto-applied to each page so that downstream analytics and payouts are accurate without hand-editing."
Description

Automatically apply standardized UTM parameters and partner attribution to each generated page and corresponding short URL based on selected campaign and team/partner mapping. Support campaign-level defaults with per-page overrides via the roster, enforce consistent naming schemes, and append partner identifiers for reconciliation. Persist attribution in the analytics data layer, exports, and API responses, and ensure compatibility with link shorteners and QR codes. Provide validation on required UTM fields and prevent conflicting or malformed parameter combinations.
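
The precedence and URL-building logic could look like the sketch below (per-page override over team/partner mapping over campaign default); the field names mirror this requirement, but the code is only an illustration.

```python
# Sketch of attribution precedence (per-page override > team/partner mapping >
# campaign default) and of appending the resolved parameters to a page URL.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def resolve_attribution(campaign_defaults: dict, team_mapping: dict, page_overrides: dict) -> dict:
    resolved = {}
    resolved.update(campaign_defaults)   # lowest precedence
    resolved.update(team_mapping)        # e.g., {"partner_id": "partner-42"}
    resolved.update(page_overrides)      # highest precedence
    return resolved

def apply_attribution(url: str, attribution: dict) -> str:
    scheme, netloc, path, query, fragment = urlsplit(url)
    params = dict(parse_qsl(query))
    params.update(attribution)           # one value per key, so no duplicate partner_id
    return urlunsplit((scheme, netloc, path, urlencode(params), fragment))

final = apply_attribution(
    "https://act.example.org/p/rally-001",
    resolve_attribution(
        {"utm_source": "rallykit", "utm_medium": "qr", "utm_campaign": "sb123"},
        {"partner_id": "partner-42"},
        {"utm_medium": "poster"},        # per-page override wins over the default
    ),
)
print(final)
```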

Acceptance Criteria
Campaign Defaults Applied to Batch-Generated Pages
Given a campaign with configured UTM defaults and naming scheme rules and a team-to-partner mapping When I create a Swarm using that campaign and generate 100 action pages with short URLs Then each page’s long URL and its short URL destination include utm_source, utm_medium, utm_campaign populated from the campaign defaults and partner_id from the team/partner mapping And all values pass the campaign’s naming scheme validation And no page is missing any required UTM field And the applied values are visible in the roster columns for each page
Per-Page Overrides via Roster Persist and Take Effect
Given batch-generated pages with default UTM and partner values When I edit a page’s UTM/partner fields in the roster and save Then the page’s long URL and short URL destination update to reflect the overrides And an "Overridden" indicator is shown for the edited fields And subsequent bulk regenerations do not overwrite the overrides And undoing the override reverts the page to the current campaign defaults
Validation of Required and Non-Conflicting UTM Parameters
Given UTM validation rules requiring utm_campaign, utm_source, utm_medium and prohibiting duplicate or conflicting parameters When I attempt to save a page whose parameters are missing, malformed, duplicated, or conflict with each other (e.g., both partner and partner_id) Then the save is blocked And field-level errors specify which parameters must be corrected and why And the roster shows an error state for the affected rows until fixed And once corrected, save succeeds without silently mutating values
Partner Attribution Precedence and Deduplication
Given campaign-level defaults, a team/partner mapping, and optional per-page overrides When attribution values differ across these sources Then precedence is applied as: per-page override > team/partner mapping > campaign default And only one partner_id parameter is present in the final URL And the resolved partner attribution is stored with the page record
Attribution Persisted to Analytics, Exports, and API
Given a generated page with final resolved UTM and partner attribution When a supporter loads the page and completes an action Then the analytics data layer includes utm_campaign, utm_source, utm_medium, utm_content, utm_term, and partner_id with the resolved values And CSV exports include columns for each UTM field and partner_id populated with the same values And the public and admin APIs return these fields for the page and action records And values in analytics, exports, and APIs match the roster values for that page
Shortener and QR Code Compatibility with Attribution
Given attribution parameters applied to a page When I generate a short URL and QR code for the page Then the short URL resolves via 301/302 to the canonical landing URL with the full query string intact and unaltered And scanning the QR code on iOS and Android opens the same final URL with all attribution parameters preserved And parameter encoding uses RFC 3986-compliant percent-encoding And no parameter truncation occurs for URLs up to 2,048 characters
Batch Performance and Integrity at Scale
Given a request to generate 250 pages with attribution applied When I run Swarm Builder Then all pages and short URLs are created with validated UTM and partner parameters within 120 seconds And the completion summary reports the count of successes and any failed rows with reasons And no more than 0.5% of pages fail due to transient shortener errors, with automatic retries attempted at least twice
District Targeting and Quota Controls
"As a field director, I want to set district filters and quotas per page so that actions target the right legislators and avoid flooding any single office."
Description

Allow per-page configuration of district filters to target supporters by legislative district, chamber, or geofenced area, integrating with RallyKit’s legislator auto-matching and district-specific scripts. Provide quotas to cap actions per district, team, or page, with real-time counters and fallback routing when limits are reached (pause, redirect to a general action, or reassign). Expose controls in the roster for quick review and bulk updates. Ensure quotas and filters are enforced at action time and reflected in live dashboards and exports.
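
Quota enforcement at action time hinges on an atomic check-and-increment so concurrent completions cannot overshoot a cap; the sketch below illustrates the idea with an in-process lock, standing in for the database or cache primitive a real deployment would use.

```python
# Sketch of quota enforcement at action time: an atomic check-and-increment so
# two concurrent completions cannot push a district past its cap (the "98 -> 100,
# no overshoot" scenario). A threading.Lock stands in for a row lock or atomic counter.
import threading

class QuotaCounter:
    def __init__(self, quota: int):
        self.quota = quota
        self.completed = 0
        self._lock = threading.Lock()

    def try_complete(self) -> str:
        """Return 'completed' if under quota, else 'fallback' so the caller reroutes."""
        with self._lock:
            if self.completed >= self.quota:
                return "fallback"
            self.completed += 1
            return "completed"

district_ca12 = QuotaCounter(quota=100)
district_ca12.completed = 98                      # simulate the current count
results = [district_ca12.try_complete() for _ in range(3)]
print(results, district_ca12.completed)           # ['completed', 'completed', 'fallback'] 100
```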

Acceptance Criteria
Batch Creation Applies Filters and Quotas to Pages
Given a Swarm Builder template configured with chamber=Senate, district filters=[CA-12, CA-24], a geofence polygon, quotas: district=75, team=500, page=1500, fallback=redirect:https://general.example/action, event window=2025-09-01T10:00:00Z–2025-09-01T18:00:00Z, and UTM/partner attribution When the organizer generates 200 QR action pages for teams Alpha and Beta Then each page is created with the configured filters, quotas, fallback, window, and UTM/partner attribution And each page is tagged with its team and appears in the roster within 5 seconds of generation And roster columns show: Chamber, District Filter, Geofence, District Quota, Team Quota, Page Quota, Fallback Mode/Target, Event Start/End, UTM Source, Partner And any missing or invalid configuration prevents generation and surfaces an inline error per page with a descriptive message
Action-Time Enforcement With Auto-Matching and Scripts
Given a supporter opens a QR action page whose filters are chamber=House, allowed districts=[NY-10, NY-12], and a geofence polygon, with district-specific scripts enabled When the supporter provides location (GPS within 5 seconds or validated address) Then the system auto-matches the supporter to a legislator and selects the district-specific script for the current bill status And the action proceeds only if the matched legislator is within the allowed chamber and district or the supporter is inside the geofence And if not allowed, the system executes the page’s fallback mode (pause message, redirect preserving UTM/partner params, or reassignment) without presenting the disallowed script
District Quota Enforcement and Real-Time Counters
Given district quota=100 for CA-12 and current completed=98 When two supporters complete the action concurrently from CA-12 Then both completions are processed atomically and the counter reaches 100 with no overshoot And any subsequent CA-12 attempts during the event window trigger the configured fallback And district counters in the roster and dashboard update within 5 seconds of completion And exports for the time range show completed=100, blocked_by_quota>=1, rerouted>=0, with timestamps and page/team identifiers
Team and Page Quotas With Fallback Routing
Given team Alpha quota=300 and page X quota=50 with fallback=redirect:https://general.example/action When the 50th completion on page X is recorded Then page X is immediately placed in quota-reached state and all new attempts on page X execute the fallback And when team Alpha’s 300th completion is recorded across pages, subsequent attempts from Alpha execute the team-level fallback And fallbacks preserve UTM and partner attribution and append reason=quota and source_id parameters And each fallback event is logged with timestamp, quota type (page|team), remaining counts, and target
Bulk Review and Update From Roster
Given the roster displays pages with editable columns for filters and quotas and the user has edit permissions When the user selects 50 pages and bulk-edits chamber=Senate, district quota from 75 to 100, and fallback=pause Then all 50 pages are updated within 10 seconds and show updated values And rows with invalid combinations (e.g., chamber=Senate with House-only districts) fail validation, are not saved, and display error badges with reasons And a bulk-change summary shows counts of updated, skipped, and failed rows and provides a downloadable CSV of failures And an audit log entry is created with the change set, actor, timestamp, and affected page IDs
Live Dashboard and Export Reflect Enforcement
Given actions are occurring across multiple districts, teams, and pages with various quota states When the organizer opens the live dashboard Then totals by district, team, and page include counts for completed, in_progress, blocked_by_filter, blocked_by_quota, and rerouted, updated within 5 seconds And drill-downs show the top fallback targets and reasons And when the organizer exports CSV for a date range, each row includes district_id, team, page_id, chamber, completion_status, fallback_reason, fallback_target, timestamps, and UTM/partner fields
Time Windows, Updates, and Edge-Case Integrity
Given an event window from 10:00 to 18:00 local time with timezone specified and quotas configured When a supporter attempts an action outside the window Then the action is blocked and the configured fallback is executed with reason=time_window And quota counts reset at the start of a new window without affecting historical totals And mid-campaign edits to filters/quotas apply to new attempts within 2 seconds and do not retroactively change completed actions And under 500 concurrent requests, no district, team, or page registers more completions than its configured quota
Event Time Windows and Auto Activation
"As an event captain, I want pages to auto-activate and expire in specific time windows so that QR codes only accept actions during live events."
Description

Enable start and end time windows per generated page with timezone awareness to automatically publish and unpublish pages. Provide optional grace periods and conflict detection for overlapping windows. Support importing schedules via ICS/CSV and showing clear status badges (Scheduled, Live, Paused, Ended) in the dashboard. On expiry, QR links should present a friendly message or redirect according to campaign rules. Integrate with notifications to alert owners before activation/expiration.
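
A sketch of how a page's status badge could be derived from its window, timezone, and grace periods, using only the Python standard library; the status names follow this requirement, while the function itself is an assumption.

```python
# Sketch: derive a page's status from its time window, timezone, and optional
# grace periods (zoneinfo, Python 3.9+).
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def page_status(start: datetime, end: datetime, tz: str,
                start_grace_min: int = 0, end_grace_min: int = 0,
                now: datetime | None = None) -> str:
    zone = ZoneInfo(tz)
    now = (now or datetime.now(zone)).astimezone(zone)
    start, end = start.astimezone(zone), end.astimezone(zone)
    if now < start - timedelta(minutes=start_grace_min):
        return "Scheduled"
    if now < start:
        return "Scheduled (starting soon)"     # visible but not yet actionable
    if now <= end + timedelta(minutes=end_grace_min):
        return "Live"                          # end grace keeps the page actionable briefly
    return "Ended"

start = datetime(2025, 9, 1, 10, 0, tzinfo=ZoneInfo("America/Chicago"))
end = datetime(2025, 9, 1, 18, 0, tzinfo=ZoneInfo("America/Chicago"))
check = datetime(2025, 9, 1, 12, 0, tzinfo=ZoneInfo("America/Chicago"))
print(page_status(start, end, "America/Chicago", 15, 10, now=check))   # Live
```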

Acceptance Criteria
Timezone-Localized Auto Publish and Unpublish
Given a generated action page with start_time and end_time configured in a specific timezone (TZ) And the page is currently not Live When the current time in TZ reaches start_time Then the page status changes to Live within 60 seconds And the page becomes publicly accessible and actionable And the dashboard shows a Live badge with TZ indicated When the current time in TZ reaches end_time Then the page status changes to Ended within 60 seconds And the page is not actionable by the public And activation and expiration are recorded in the audit log with both TZ and UTC timestamps
Grace Period Handling on Activation and Expiration
Given a page with optional start_grace_minutes (G1) and end_grace_minutes (G2) When a visitor opens the page during the interval [start_time - G1, start_time) Then the page renders a Starting soon state, actions are disabled, and the status badge remains Scheduled When a visitor opens the page during the interval (end_time, end_time + G2] Then the page remains actionable, displays a Closing soon banner, and the status badge remains Live When G1 or G2 are not set Then the default is 0 minutes and no grace behavior is applied
Overlapping Window Conflict Detection Across Batch
Given two or more pages in the same campaign with overlapping time windows and the same team_tag and target filter When the roster is generated, edited, or a schedule is imported Then overlapping pages are flagged with a Conflict badge and listed with overlapping intervals And Save/Publish is blocked if Block on conflicts is enabled for the campaign, otherwise a Warning is displayed And resolving the overlap (by adjusting times or tags) clears the Conflict badge without requiring a page reload
Schedule Import via ICS and CSV with Timezone Mapping
Given an ICS file containing events with TZID or a CSV with start_time, end_time, and timezone columns When the file is imported into Swarm Builder Then pages are created/updated with correct start/end times normalized to UTC and displayed in their source timezone And missing timezone values default to the campaign timezone and are noted in the import report And duplicate rows (matched by external_id or title+start_time) update existing pages instead of creating new ones And invalid rows are skipped with row-level error messages and counts summarized at the end of the import And the generated roster lists all imported pages with Scheduled status
Dashboard Status Badges and Real-Time Transitions
Given a batch of generated pages visible in the dashboard When time progresses or manual actions occur Then each page shows one of Scheduled, Live, Paused, Ended badges reflecting its real-time state And badge transitions occur within 60 seconds of qualifying events (start, end, pause, resume) And applying Pause immediately makes the page not publicly actionable and overrides time-based activation until Resume And hovering or tapping the badge reveals start/end times with timezone and any grace windows And the list can be filtered by each status value
Post-Expiry QR Behavior and Campaign Rules Redirects
Given a page whose end_time (plus any end grace) has passed When a visitor scans the QR or opens the short link Then the visitor sees either a friendly Ended message or is redirected per campaign rules, as configured on the page And redirects preserve UTM parameters and append source_qr_id for auditing And the response is a 302/307 (for redirect) or 200 (for message) and never a 404 And an audit event is recorded with outcome (message or redirect) and destination URL if redirected
Pre-Activation and Pre-Expiration Notifications
Given a page with start_time and end_time and an assigned owner When the time is 15 minutes before start_time or 10 minutes before end_time (configurable per campaign) Then the owner receives a notification via email and in-app with page name, local times, and status about to change And notifications are suppressed if the page is Paused at the trigger time And rescheduling the page updates the pending notification times without duplication And a notification log entry is stored with sent status, channel, and timestamp
Roster Builder with Inline Bulk Editing
"As an operations manager, I want a roster grid to quickly review and bulk-edit generated pages so that I can fix issues and standardize settings before publishing."
Description

Produce a tidy roster of all generated pages with inline editing for key attributes (name, team tag, partner, UTM params, district filters, quotas, time windows, status, URLs, QR assets). Support bulk edits, search, sort, and filters, with validation warnings surfaced inline. Provide a diff review before publishing changes, plus CSV/Google Sheets export and import to round-trip edits. Respect role-based access control and include quick actions to copy short links, download QR packs, and print-ready sheets.
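
The Overwrite vs Merge semantics for bulk edits can be summarized in a short sketch: dictionary-valued fields (UTM params) merge by key, list-valued fields (district filters) union, and Overwrite replaces wholesale. The field names are illustrative.

```python
# Sketch of Overwrite vs Merge semantics for bulk edits on multivalue fields.
def apply_bulk_edit(page: dict, changes: dict, mode: str = "merge") -> dict:
    updated = dict(page)
    for field, new_value in changes.items():
        if mode == "overwrite":
            updated[field] = new_value
        elif isinstance(new_value, dict):                      # e.g., UTM params merge by key
            updated[field] = {**page.get(field, {}), **new_value}
        elif isinstance(new_value, list):                      # e.g., district filters union
            updated[field] = sorted(set(page.get(field, [])) | set(new_value))
        else:
            updated[field] = new_value
    return updated

page = {"team_tag": "Alpha", "utm": {"utm_source": "rallykit"}, "districts": ["CA-12"]}
print(apply_bulk_edit(page, {"utm": {"utm_medium": "qr"}, "districts": ["CA-24"]}, mode="merge"))
# {'team_tag': 'Alpha', 'utm': {'utm_source': 'rallykit', 'utm_medium': 'qr'}, 'districts': ['CA-12', 'CA-24']}
```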

Acceptance Criteria
Inline Edit of Key Attributes in Roster
Given the Swarm Builder roster is loaded and the user has Editor or higher permissions When the user clicks an editable cell (name, team tag, partner, UTM params, district filters, quotas, time windows, status) Then the cell enters edit mode with appropriate control type (text, select, multiselect, number, date/time) And validation runs on field change and on blur And valid changes are saved to a local draft state within 200 ms without page reload And invalid changes display an inline warning icon with message And dependent fields (URLs, QR assets) are recalculated in draft when inputs affecting them change And the row displays an "Edited" indicator until published And keyboard shortcuts work: Enter saves, Esc cancels, Tab moves to next editable cell And cancel reverts the field to its original value
Bulk Edit Multiple Pages at Scale
Given the user selects 200 rows in the roster When the user opens Bulk Edit and specifies changes to one or more fields (e.g., add UTM param, set team tag, adjust quotas, set time window, modify district filters using Overwrite or Merge mode) Then a preview shows the number of rows affected per field before applying And applying the bulk edit completes within 5 seconds for 200 rows And each row updated is marked Draft Edited and changes are reversible via Undo within 5 minutes or until publish And rows with field-level conflicts or validation errors are skipped with per-row error messages; successful rows are not rolled back And bulk edit respects Overwrite vs Merge semantics for multivalue fields (UTM params merge by key, district filters union or replace as selected) And all changes remain in draft until explicitly published
Inline Validation and Publish Blocking
Given a user has edited fields in one or more rows When a field value violates a rule (name unique per team, partner exists, UTM keys non-empty, quotas >= 0 integers, time window start < end and within event window, district codes valid for target jurisdiction, status in allowed set, URLs valid and use https) Then an inline error is shown at the cell with a tooltip message and the row shows a red error badge And critical errors block Publish and Export, while warnings allow proceed only after explicit acknowledgement in the review step And a validation summary bar displays total errors and warnings with quick navigation to first issue And fixing the value clears the error immediately
Diff Review and Controlled Publish
Given there are draft edits in the roster When the user clicks Review & Publish Then a diff view lists only changed rows and fields with before/after values, with filters by field and team And the user can approve all, approve per-row, or revert specific field changes inline And clicking Publish applies approved changes, records an audit log entry (user, timestamp, counts of rows/fields changed), and updates row status to Published And a CSV of the applied change set is downloadable from the confirmation dialog And publish completes within 10 seconds for up to 1,000 changed rows
Search, Sort, and Filter Usability
Given a roster of up to 5,000 pages When the user searches by keyword across name, team tag, partner, UTM values, district, and status; applies multi-select filters; and sorts by any visible column Then results update within 300 ms after typing stops for 250 ms And multiple filters are combinable and persistent for the session; Clear All resets to default And sorting is stable and remembers the last sort until changed And an empty state message appears when no rows match with a one-click Clear Filters action
CSV and Google Sheets Round-Trip Edits
Given the user has Editor or higher permissions When the user exports the current roster view Then a CSV file and a Google Sheets link are generated containing visible columns plus a required immutable Row ID, preserving filters and sort And when the user imports a CSV or a Google Sheet, a Dry Run validates headers, types, and values, mapping rows by Row ID And the import report shows counts of created (if allowed), updated, skipped, and errors with per-row messages And on Confirm, valid changes are applied to draft only; no publish occurs And the system supports importing up to 10,000 rows, completing within 60 seconds, with progress feedback And date/time fields accept ISO 8601 and locale-formatted values; all stored in UTC
Permissions and Quick Actions
Given RBAC roles are configured: Viewer (read-only), Editor (edit/import/export), Publisher (publish), Admin (manage roles and settings) When a Viewer accesses the roster Then all edit, bulk edit, import, and publish controls are disabled; Quick Actions are limited to Copy Short Link And when an Editor accesses the roster, edit, bulk edit, import/export, and quick actions are enabled but Publish is hidden And when a Publisher or Admin accesses the roster, Review & Publish is enabled And per-row Quick Actions provide Copy Short Link (copied to clipboard with confirmation in <200 ms), Download QR Pack (ZIP of PNG and SVG for selected rows generated within 10 seconds for 500 items), and Print-Ready Sheets (PDF generated to spec within 10 seconds for 200 items) And all quick action downloads use consistent filenames that include the team tag and date And all privileged actions are recorded in an audit log with user, timestamp, and scope
Validation and Error Guardrails
"As a QA lead, I want preflight checks and safe rollback for the batch so that we prevent bad links, overlaps, and compliance violations."
Description

Provide comprehensive preflight validation for the batch, including required fields, unique slug enforcement, UTM schema checks, time window overlaps, invalid district codes, and conflicting quotas. Offer a dry-run mode that produces a detailed report before any write operations. Ensure transactional batch creation with automatic rollback on failure, versioned changes, and an undo window for recent operations. Implement idempotent re-runs keyed by batch ID to safely retry without duplications, and generate actionable error messaging for rapid correction.
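
A sketch of the dry-run preflight report, returning structured issues (error_code, json_path, message) without performing writes; the required-field list and UTM pattern are taken from the acceptance criteria below, while the report shape is an assumption.

```python
# Sketch of a preflight/dry-run validator that returns a structured report
# without writing anything; an empty list means validation passed.
import re

REQUIRED = ["template_id", "batch_name", "team_tag", "utm_source", "utm_medium", "utm_campaign"]
UTM_PATTERN = re.compile(r"^[a-z0-9_-]{2,64}$")

def preflight(batch_request: dict) -> list[dict]:
    issues = []
    for field in REQUIRED:
        if not batch_request.get(field):
            issues.append({"error_code": "required", "json_path": f"$.{field}",
                           "message": f"{field} is required"})
    for field in ("utm_source", "utm_medium", "utm_campaign"):
        value = batch_request.get(field)
        if value and not UTM_PATTERN.match(value):
            issues.append({"error_code": "invalid_utm", "json_path": f"$.{field}",
                           "offending_value": value,
                           "message": "must match ^[a-z0-9_-]{2,64}$"})
    return issues

report = preflight({"template_id": "tpl-1", "batch_name": "Sept swarm",
                    "team_tag": "Alpha", "utm_source": "RallyKit!",   # invalid: uppercase/punctuation
                    "utm_medium": "qr", "utm_campaign": "sb123"})
print(report)
```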

Acceptance Criteria
Preflight Required Fields and UTM Schema Validation
Given a batch request for Swarm Builder When preflight validation runs in dry-run or create mode Then the validation report lists each missing required field (template_id, batch_name, team_tag, utm_source, utm_medium, utm_campaign) with error_code "required", json_path, and message, and no write operations occur Given all required fields are present When preflight validation runs Then there are zero errors of type "required" Given UTM parameters are provided When preflight validation runs Then utm_source, utm_medium, and utm_campaign match the pattern ^[a-z0-9_-]{2,64}$, and optional utm_term and utm_content match the same if present; any violation is reported per field with error_code "invalid_utm", json_path, offending_value, and suggested_fix
Unique Slug Enforcement Across Batch and System
Given a batch proposes generated page slugs When preflight validation runs Then all slugs must be unique within the batch and not collide with existing slugs in the workspace; any conflict is reported with error_code "duplicate_slug", slug, colliding_resource_id (if any), and json_path Given any duplicate slug exists (in-batch or with existing data) When attempting to create the batch Then the batch is rejected atomically with no pages created and the validation report includes each duplicate entry
Time Window Overlap Detection and Range Integrity
Given multiple pages in the batch targeting the same district (or district set) and team_tag When preflight validation runs Then time windows for those pages must not overlap (start_a < end_b AND end_a > start_b is a violation); any overlap is reported with error_code "time_window_overlap", the conflicting page identifiers, and their time ranges Given any page has start_time >= end_time When validation runs Then an error with error_code "invalid_time_range" is reported with json_path and offending values
District Code Validation Against Workspace Schema
Given pages specify district filters When preflight validation runs Then each district code must match the workspace’s configured jurisdiction schema (e.g., pattern(s) for House/Senate districts); any nonconforming code is reported with error_code "invalid_district_code", json_path, offending_value, and expected_pattern reference Given all district codes conform When validation runs Then there are zero errors of type "invalid_district_code"
Quota Consistency and Conflict Checks
Given per-page quotas and any applicable workspace or campaign caps When preflight validation runs Then each page quota must be a non-negative integer, any min/max constraints must be consistent (min <= max), and the sum of quotas for the same district/team must not exceed the configured cap; violations are reported with error_code "quota_invalid" or "quota_conflict" including json_path, offending_value(s), and cap_reference Given a quota conflict exists When attempting to create the batch Then the batch is rejected atomically with no pages created and the validation report details each conflict
Dry-Run Mode Produces Detailed Report and Performs No Writes
Given a batch request with dry_run=true When the operation executes Then no database writes occur (no pages, no slugs, no audit entries except a read-only validation event), and the response includes a validation_report with counts by severity (errors, warnings, passes), per-item issues with error_code, json_path, message, offending_value, and suggested_fix, and a create_plan preview (predicted page_count and slugs) Given dry_run=true and issues are fixed When dry-run is re-executed Then the validation_report shows zero errors and the create_plan remains consistent across runs
Transactional Creation, Idempotent Re-run, Versioning, and Undo Window
Given a valid batch create request (dry_run=false) When any write within the batch fails Then the entire operation is rolled back with zero pages created, zero partial side-effects, and a single audit_log entry recording the failure and rollback Given a batch create request includes batch_id as an idempotency key When the same batch_id is re-submitted within 24 hours Then no duplicate pages are created; if the original succeeded, the original result is returned; if it rolled back, the operation proceeds to create as if first run Given a successful batch creation When the user invokes Undo within 15 minutes Then all created resources are reverted (soft-deleted or restored to pre-state), a new audit_log entry is recorded, and affected entities receive incremented version numbers with before/after snapshots Given any page or batch record is created or modified When inspecting metadata Then version numbers start at 1 for new records, increment on change, and include actor, timestamp, and diff in the version history
Live Tracking and Audit Readiness
"As a program director, I want live tracking and audit-ready exports for the swarm so that I can report outcomes to funders and verify partner performance."
Description

Stream real-time metrics for each generated page, aggregated by team and partner, into the existing dashboard, including actions initiated/completed, conversion rates, and quota status. Maintain an immutable audit log recording who generated, edited, and deployed each page with timestamps and change details. Provide exportable proof packages containing the generation manifest, roster, QR assets, UTM matrix, and outcome summaries, along with webhook/API endpoints for external reporting. Ensure privacy compliance and data retention alignment with organizational policies.
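
The append-only audit log with a verifiable hash chain could be structured as in the sketch below, where each entry's hash covers its content plus the previous entry's hash; the field names follow the criteria below, but the chaining code is illustrative rather than a confirmed design.

```python
# Sketch of hash-chained audit entries: tampering with any stored entry breaks the chain.
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log: list[dict], actor_id: str, action_type: str, page_id: str, delta: dict) -> dict:
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "actor_id": actor_id,
        "action_type": action_type,
        "page_id": page_id,
        "delta": delta,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def chain_is_intact(log: list[dict]) -> bool:
    for i, entry in enumerate(log):
        expected_prev = log[i - 1]["entry_hash"] if i else "0" * 64
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != expected_prev or entry["entry_hash"] != recomputed:
            return False
    return True

log: list[dict] = []
append_entry(log, "user-7", "page.deploy", "page-001", {"status": ["draft", "live"]})
print(chain_is_intact(log))   # True; editing any stored entry flips this to False
```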

Acceptance Criteria
Dashboard Real-Time Metrics per Page and Aggregations
Given a batch of 500 pages generated via Swarm Builder with team and partner tags and quotas set When supporters initiate and complete actions on any of these pages Then each page's metrics (initiated, completed, conversion rate to two decimals, quota remaining/status) appear on the dashboard within 5 seconds of event receipt And team-level and partner-level aggregates equal the sum of their associated pages And conversion rate is computed as completed/initiated rounded to two decimals, with 0% when initiated=0 And quota status flips to "Reached" when completed >= quota and to "Warning" when completed >= 80% of quota
Immutable Audit Log for Page Lifecycle Events
Given a user with permission generates, edits, and deploys pages from a Swarm batch When each action is performed Then an append-only audit entry is created containing batch_id, page_id, actor_id, actor_role, action_type, timestamp (UTC ISO8601), and field-level before/after deltas (with PII fields redacted) And audit entries cannot be updated or deleted via UI or API; attempts return 403 and no change occurs And the audit log sequence includes a verifiable hash chain (prev_hash, entry_hash) to attest integrity
Exportable Proof Package for Swarm Batch
Given an authorized admin requests a proof package for a batch of up to 1,000 pages When the request is submitted via UI or API Then a single ZIP is generated within 60 seconds containing: generation_manifest.json, roster.csv, qr_assets/(one SVG and PNG per page), utm_matrix.csv, outcome_summary.csv and outcome_summary.json And the package includes checksums.txt with SHA-256 hashes for all files and a signed metadata.json including created_at (UTC) and batch_id And outcome totals in the package match the dashboard snapshot at the time of export
Webhook and API Reporting for External Systems
Given a webhook endpoint is configured with a shared secret and status=active When actions are initiated or completed or a page is deployed Then a JSON payload is POSTed within 5 seconds including event_id (UUID), event_type, occurred_at (UTC), batch_id, page_id, team, partner, initiated_count_delta, completed_count_delta, and quota_status And the request includes an HMAC-SHA256 signature header and an idempotency key; duplicate deliveries are suppressed by the receiver using event_id And non-2xx responses are retried with exponential backoff up to 6 attempts over 10 minutes And the API provides endpoints to fetch the same data with pagination and filters by date range, team, partner, and batch_id; 429 responses include Retry-After
Privacy Compliance and Data Retention Enforcement
Given an organization retention policy in days is configured (e.g., 365) and PII export toggle is off by default When the retention period elapses for stored raw action events Then raw PII (name, email, phone) is irreversibly deleted or anonymized, while audit log retains only non-PII references (actor_id, page_id, timestamps) And proof package exports exclude PII fields unless explicitly enabled by an Admin for the export; when disabled, exports contain only aggregated counts and attribution parameters And purges execute using UTC day boundaries and are logged with audit entries
Role-Based Access to Metrics, Audit, and Exports
Given roles Admin, Organizer, and ReadOnly are assigned to users When accessing features Then Admin can view all metrics, audit logs, and download proof packages And Organizer can view metrics and download proof packages for batches they created; cannot view system-wide audit logs And ReadOnly can view aggregated metrics only; audit and exports access is denied with 403 and logged
Data Consistency Between Dashboard and Exports
Given a dashboard snapshot is taken at export time T for a specific batch When the proof package is generated Then outcome_summary totals and conversion rates exactly equal the snapshot values (tolerance = 0) And the export includes window_start and window_end timestamps indicating the event window included; events arriving after T are excluded from the package and appear only in later exports

Print Packs

Auto-generate print-ready kits per code—posters, table tents, badges, and stickers—with branded QR, matching short link, and bold station ID. Includes common sizes, crop marks, and optional district-specific micro-scripts. Cuts design time to zero and standardizes on-site materials.

Requirements

Pack Creation Wizard
"As a small nonprofit director, I want to generate a complete print pack from a single code so that I can launch an event without hiring a designer."
Description

A guided flow to auto-generate a complete, print-ready kit for a campaign code. The wizard pulls campaign metadata from RallyKit (name, action page URL, brand kit), lets users choose items (posters, table tents, badges, stickers) and common sizes, and configures options such as crop marks, bleed, and district micro-scripts. It previews each item in real time with the code’s branded QR and matching short link, assigns a station ID, and persists settings as a reusable preset. The flow validates required inputs, handles fallbacks for missing brand assets, and saves the pack to the campaign’s asset library for download or re-generation.

Acceptance Criteria
Auto-Populate Campaign Metadata with Fallbacks
Given a valid campaign code exists in RallyKit with name, action page URL, and brand kit When the Pack Creation Wizard is opened for that code Then the campaign name, action page URL, and brand assets auto-populate within 2 seconds Given the campaign brand kit is missing a logo When the wizard loads Then a system default logo placeholder is applied, a non-blocking warning is shown, and the user can proceed Given the brand kit lacks color definitions When the wizard loads Then default colors that meet WCAG AA contrast are applied and recorded as fallbacks in the audit log Given the campaign code is invalid or not found When the wizard is opened Then an error message explains the issue and the user cannot proceed to the next step Given metadata successfully loads When the user proceeds to the next step Then the source of each populated field (campaign vs fallback) is stored in the generation audit record
Item and Size Selection Validation
Given the wizard is on the Item Selection step When the user attempts to continue without selecting any items Then a validation error appears and the Next button remains disabled Given the user selects at least one item type When the user clicks Next Then the selection is saved and restored if the user navigates back Given an item type is selected When the user opens size options Then only supported common sizes for that item are displayed and selectable Given an item size is selected When an incompatible option is chosen (e.g., micro-script on a size that does not support it) Then a clear validation message explains the incompatibility and prevents continuation
Print Options: Crop Marks and Bleed Correctness in PDFs
Given the user enables crop marks When the pack is generated Then each PDF includes crop marks per print standard and marks are outside the trim area Given the user enables 0.125 inch bleed When the pack is generated Then the PDF page size increases accordingly and artwork extends to the bleed edge Given the user disables bleed and crop marks When the pack is generated Then the PDFs have no bleed area and no crop marks Given print options are configured When the user previews any item Then the preview reflects the crop/bleed settings accurately within 300 ms of a change
Real-Time Preview with Branded QR and Matching Short Link
Given the wizard has loaded campaign data When the user changes any option that affects design (item, size, colors, micro-scripts toggle) Then the item preview updates within 300 ms Given the campaign has a short link When the preview renders Then the printed short link text exactly matches the QR code payload Given the QR code in the preview is scanned with a standard scanner When scanned Then it resolves to the campaign action page URL within 2 redirects and under 1 second (excluding external network variability) Given brand assets include a logo and colors When the preview renders Then the logo and color palette are applied per brand kit or fallbacks
District Micro-Scripts Configuration and Placement
Given the user enables district-specific micro-scripts When an item type that supports micro-scripts is selected Then the micro-script appears in the designated area using the correct district context Given the campaign code cannot be mapped to a district When micro-scripts are enabled Then a generic script is used with a non-blocking warning indicating district context is unavailable Given micro-scripts are disabled When the pack is generated Then no micro-script content appears on any item Given micro-scripts are enabled When the user previews multiple districts Then each preview displays the correct district-specific text and text does not overflow, truncate, or clip
Station ID Assignment Across Items
Given a pack is being generated When the station ID is assigned automatically Then a unique station ID for the campaign is generated and displayed consistently on every item Given the user overrides the station ID When saving the pack Then the overridden ID is validated against allowed format (A–Z, 0–9, 3–8 chars) and applied across all items Given station ID is rendered When checked for accessibility Then the contrast ratio against its background is at least 4.5:1 and the font weight is bold Given multiple packs are generated for the same campaign on the same day When station IDs are assigned Then each pack’s station ID remains unique
Preset Save/Load and Asset Library Save & Download
Given the user completes the wizard configuration When they save as a preset with a unique name Then the preset stores selected items, sizes, print options, micro-script settings, and station ID mode Given a saved preset exists When the user loads it Then all stored settings are reapplied and the previews match the saved configuration Given a pack is generated When it is saved to the campaign’s asset library Then individual item PDFs and a ZIP bundle are created with consistent naming: {campaignCode}_{itemType}_{size}_{stationID}.pdf and visible thumbnails Given an asset exists in the library When the user clicks Download Then the ZIP begins downloading within 2 seconds and the file contents match the latest generated outputs Given a pack is re-generated from a preset without changes When compared to the prior version Then page count, dimensions, QR payloads, short links, station ID, and micro-script content are identical
Branded QR and Vanity Short Link
"As a grassroots organizer, I want branded QR codes and short links on every item so that supporters can take action instantly from their phones."
Description

Automatic generation of a high-contrast, brand-styled QR code and a vanity short link for each campaign code. The short link resolves to the action page and supports per-item and per-station tracking parameters. QR is exported as vector (SVG/PDF) with configurable error correction, quiet zone, and dark/light contrast for print legibility, with an alternative inverted variant for dark backgrounds. The component embeds the QR and short link into each template, enforces brand domain rules, and writes UTM codes so completed scans attribute to Print Packs in RallyKit analytics.
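
For the QR itself, a sketch using the third-party qrcode package (an assumption; RallyKit's rendering stack is unspecified) shows how the error correction level and quiet zone map to generator settings and how vector SVG output is produced.

```python
# Sketch of vector QR generation with configurable error correction and quiet zone,
# assuming the `qrcode` package (pip install qrcode). The border value is the quiet
# zone in modules; ERROR_CORRECT_M is the default level named in the criteria.
import qrcode
import qrcode.image.svg

def branded_qr_svg(short_url: str, ecc=qrcode.constants.ERROR_CORRECT_M,
                   quiet_zone_modules: int = 4, out_path: str = "qr.svg") -> str:
    qr = qrcode.QRCode(error_correction=ecc, border=quiet_zone_modules)
    qr.add_data(short_url)        # the QR encodes the vanity short link
    qr.make(fit=True)
    img = qr.make_image(image_factory=qrcode.image.svg.SvgPathImage)  # vector output
    img.save(out_path)
    return out_path

branded_qr_svg("https://go.example.org/rally24", quiet_zone_modules=4)
# Brand-colored and inverted (light-on-dark) variants would be produced the same
# way, with fill and background styling applied downstream.
```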

Acceptance Criteria
Auto-generate branded QR and vanity short link on code creation
Given an organization has a verified brand domain and creates a new campaign code When Print Packs are generated for that code Then the system creates a unique vanity short link under the brand domain that resolves to the campaign action page And the short link slug conforms to allowed characters [a-z0-9-] And the QR content encodes the vanity short link URL (without per-item/per-station parameters) And the generated QR and short link are stored with the campaign code and displayed in the Print Packs UI
Per-item and per-station tracking appended on template instantiation
Given a Print Pack template item is instantiated for a specific station When the short link and QR are embedded for that item Then the embedded URL includes item and station tracking parameters as distinct query keys And the effective destination remains the same campaign action page And removing the parameters resolves to the same action page And each scan or click of the embedded code records item and station identifiers in the visit event context
Vector QR exports with configurable ECC and quiet zone
Given a campaign code with a generated QR When the user exports assets Then SVG and PDF files are produced containing vector paths only for the QR modules And the user can select error correction level from L, M, Q, H with default M And the quiet zone is configurable in whole-module units with default 4 modules And the exported files reflect the selected error correction and quiet zone settings And the exported QR passes validation by at least two reference scanners at a printed size of 25 mm
High-contrast QR with alternative inverted variant for dark backgrounds
Given brand color styling is applied to the QR When the QR is generated Then the dark-to-light color contrast ratio for modules vs background is at least 7:1 And an inverted variant (light modules on dark background) is also generated And the UI indicates which variant to use on light versus dark backgrounds And both variants pass a scan test at 25 mm printed size on representative light and dark backgrounds
Embed QR and short link into all Print Pack templates with station ID
Given templates for posters, table tents, badges, and stickers When assets are generated for a campaign code and station Then each template contains the QR sized to its designated placeholder without clipping or rasterization And the vanity short link text is visible in its designated text area and is truncated with ellipsis only if it exceeds the allotted width And the station ID label renders adjacent to the QR per template specification And the QR remains vector in exported PDFs (not flattened to bitmap)
Enforce brand domain rules for vanity short links
Given the organization has a verified brand domain configured When a vanity short link is generated Then the link uses the verified brand domain and fails generation if the domain is unverified or missing And the system blocks publishing of Print Packs until a valid brand domain is configured And short link slugs are unique across the brand domain; collisions prompt an auto-suggested alternative and allow manual edit within validation rules
UTM attribution for Print Packs scans and clicks in RallyKit analytics
Given a QR is scanned or the vanity short link is clicked from a Print Pack item When the redirect completes to the action page Then RallyKit analytics record the visit with utm_source=print_packs and utm_medium set to qr or shortlink accordingly And utm_campaign is set to the campaign code And utm_content (or equivalent) includes item and station identifiers And analytics dashboards attribute these visits to Print Packs as the source
District Micro-script Injection
"As a campaign lead, I want district-specific micro-scripts on materials so that supporters see the most relevant language for their legislators."
Description

Optional district-specific micro-scripts placed on each item, driven by RallyKit’s legislative matching and bill status engine. When enabled, the system selects the correct script variant (support, oppose, update) per targeted district or defaults to a generic script when the district is unknown. Text is auto-fit to template constraints with character limits, line wrapping, and accessibility sizes for table tents and badges. Script data is snapshotted at generation time and stored with the pack manifest for auditability.
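
The variant-selection rule reduces to a small lookup with a generic fallback, sketched below; the script text, district IDs, and the UNKNOWN_STATUS guard are illustrative.

```python
# Sketch: pick the script variant matching the current bill status for the resolved
# district, falling back to the generic script when the district is unknown.
def select_micro_script(scripts: dict, district_id: str | None, bill_status: str) -> dict:
    if district_id is None or district_id not in scripts:
        return {"variant": "generic", "text": scripts["generic"],
                "fallback_reason": "UNKNOWN_DISTRICT"}
    variant = {"Support": "support", "Oppose": "oppose", "Update": "update"}.get(bill_status)
    if variant is None:
        # Illustrative guard: unrecognized statuses also fall back to the generic script.
        return {"variant": "generic", "text": scripts["generic"],
                "fallback_reason": "UNKNOWN_STATUS"}
    return {"variant": variant, "text": scripts[district_id][variant], "fallback_reason": None}

scripts = {
    "generic": "Call your legislator about SB-123 today.",
    "D-001": {
        "support": "Ask your senator to vote YES on SB-123.",
        "oppose": "Ask your senator to vote NO on SB-123.",
        "update": "Thank your senator and ask for a floor vote on SB-123.",
    },
}
print(select_micro_script(scripts, "D-001", "Support")["variant"])        # support
print(select_micro_script(scripts, None, "Support")["fallback_reason"])   # UNKNOWN_DISTRICT
```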

Acceptance Criteria
Micro-scripts Enabled Renders on All Items
Given a Print Pack is generated for code "RK-001" with target district "D-001" and micro-scripts are enabled When the system renders posters, table tents, badges, and stickers Then each item includes the district-specific micro-script text in its designated region And the text matches the selected script variant snapshot for district "D-001" And no item is missing the micro-script region
Micro-scripts Disabled Suppresses Script Regions
Given a Print Pack is generated with micro-scripts disabled in pack settings When the system renders all item types Then no micro-script text is rendered on any item And layout spacing is preserved without placeholder artifacts or overlaps And the pack manifest records microScriptsEnabled=false with no script content stored
Script Variant Selection by Bill Status
Given target district "D-001" and bill "SB-123" are associated to the pack And the bill status-to-variant mapping is Support->support, Oppose->oppose, Update->update When the pack is generated while the bill status is "Support" Then the selected micro-script variant is "support" for district "D-001" When the bill status is "Oppose" and the pack is generated Then the selected micro-script variant is "oppose" for district "D-001" When the bill status is "Update" and the pack is generated Then the selected micro-script variant is "update" for district "D-001"
Unknown District Falls Back to Generic Script
Given a Print Pack is generated and the target district cannot be resolved (unknown) When the system selects a micro-script Then the generic script variant is used for all items And the pack manifest logs fallback_reason="UNKNOWN_DISTRICT" and variant="generic" And no district-specific fields are populated in the snapshot
Auto-fit Text Meets Template Constraints and Accessibility
Given template constraints define a micro-script bounding box, character_limit, and min_accessible_font_size per item type When rendering micro-scripts for table tents and badges Then the computed font size is >= min_accessible_font_size for their templates And the final text length is <= character_limit after wrapping/truncation And the rendered text fully fits within the bounding box with no clipping or overflow And line breaks occur only at word boundaries or configured hyphenation rules When rendering for posters and stickers Then the final text length is <= character_limit and fits within the bounding box with no clipping
Snapshot Stored in Manifest and Remains Immutable
Given a pack is generated at time T1 When the pack manifest is retrieved Then it contains a script snapshot including district_id (or null), bill_id, bill_status_at_generation, script_variant, script_text, template_id, character_limit_applied, font_size_applied, and snapshot_timestamp=T1 And the snapshot has a content_hash that matches the rendered assets When the bill status changes after T1 and a new pack is generated at time T2 Then the T1 manifest and assets remain unchanged And the T2 manifest contains a new snapshot reflecting the updated status and content_hash
Print-ready PDF Export with Crop Marks and Bleed
"As an operations volunteer, I want print-ready PDFs with crop marks and bleed so that any print shop can produce accurate materials quickly."
Description

Print-ready exports for all items as CMYK PDFs with bleed and crop marks, aligned to common print standards. Each template defines trim size, bleed (e.g., 0.125 in), safe areas, and font embedding or outlining to prevent substitutions. Assets are preflight-checked for resolution and color space, and ICC profiles are applied based on brand settings. The export engine can output single-item PDFs and a consolidated ZIP, preserves vector QR codes, and includes a grayscale variant where appropriate to reduce print costs.
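
The resolution and color-space preflight can be expressed as a rule pass over placed-image metadata; a hedged sketch under assumed type and function names (no particular PDF library is implied):

```typescript
// Assumed shapes: PlacedImage metadata would come from whatever layout/PDF engine is in use.
interface PlacedImage {
  id: string;
  pixelWidth: number;      // native pixels
  pixelHeight: number;
  placedWidthIn: number;   // placed size on the page, in inches
  placedHeightIn: number;
  colorSpace: "CMYK" | "RGB" | "Gray" | "Spot";
}

interface PreflightIssue {
  elementId: string;
  rule: "MIN_RESOLUTION" | "COLOR_SPACE";
  detail: string;
}

function preflightImages(images: PlacedImage[], minPpi = 300): PreflightIssue[] {
  const issues: PreflightIssue[] = [];
  for (const img of images) {
    // Effective PPI = native pixels divided by placed physical size.
    const ppi = Math.min(
      img.pixelWidth / img.placedWidthIn,
      img.pixelHeight / img.placedHeightIn
    );
    if (ppi < minPpi) {
      issues.push({
        elementId: img.id,
        rule: "MIN_RESOLUTION",
        detail: `Effective resolution ${ppi.toFixed(0)} PPI is below ${minPpi} PPI at placed size`,
      });
    }
    if (img.colorSpace === "RGB" || img.colorSpace === "Spot") {
      issues.push({
        elementId: img.id,
        rule: "COLOR_SPACE",
        detail: `${img.colorSpace} object must be converted to CMYK via the brand ICC profile`,
      });
    }
  }
  return issues; // export proceeds only when this list is empty
}
```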

Acceptance Criteria
Export CMYK PDF with 0.125 in Bleed and Crop Marks
Given a template with a defined trim size and 0.125 in bleed on all sides When the user exports the item as a Print-ready PDF Then the PDF color space contains only CMYK objects And the PDF includes crop marks at 0.125 in offset from the trim, with 0.25 pt line weight, in Registration color And the PDF defines TrimBox and BleedBox with BleedBox = TrimBox + 0.125 in on all sides
Preflight: Image Resolution, Color Space Conversion, and Safe Area Checks
Given placed raster images and layout elements on an item When the user attempts a Print-ready export Then a preflight report lists any raster image below 300 PPI at placed size, with page and bounding box references And any RGB or Spot objects are auto-converted to CMYK using the selected brand ICC profile and noted in the report And export is blocked if any image is <300 PPI at placed size or if any text/logo extends outside the defined safe area; the report identifies each offending element And export succeeds only when the preflight report contains 0 errors
Apply Brand ICC Profile as Output Intent
Given the brand settings specify an ICC profile P for print When any item is exported as a Print-ready PDF Then all colors are converted using profile P into CMYK And the PDF embeds exactly one OutputIntent ICC profile whose name matches P And the PDF contains no DeviceRGB/CalRGB objects and no unconverted Spot colors
Preserve Vector QR Codes
Given the item includes a system-generated QR code When the user exports as a Print-ready PDF Then the QR code remains vector (path objects), with no raster image XObject used for the QR layer And a test print at 100% scale on a standard office printer produces a QR code scannable by at least two common scanning apps at 12–24 in distance
Fonts Embedded or Outlined with No Substitution
Given the template uses branded fonts When the user exports as a Print-ready PDF Then all fonts are subset-embedded or converted to outlines if embedding is restricted And the preflight report shows 0 font substitution or missing glyph warnings And the PDF contains no Type 3 fallback fonts
Export Single Items and Consolidated ZIP with Manifest
Given a Print Pack containing N items When the user exports a single item Then a CMYK PDF is generated named {campaignCode}_{itemType}_{trimWxH}_{variant}.pdf When the user selects Export All Then a ZIP is generated containing each item PDF (and any grayscale variants) plus a manifest file (CSV or JSON) And the manifest lists for each file: fileName, itemType, trimSize, bleed, safeArea, colorProfile, grayscale(true/false), pageCount, generatedAt (UTC ISO 8601)
Grayscale Variant Generation (K-only)
Given an item is marked grayscale-eligible and the user enables Include grayscale When the export runs Then a separate grayscale PDF is generated with filename suffix -grayscale And all page objects are DeviceGray/K-only (no C/M/Y channels present per preflight) And text and vector line art are rendered as 85–100% K (not rich black)
Template and Brand Kit Integration
"As a brand manager, I want templates to enforce our brand kit so that all printed materials remain consistent and accessible across events."
Description

Layout templates that inherit campaign brand kits (logo, colors, fonts) and lock compliance rules while allowing content slots for code, QR, short link, micro-script, and station ID. Templates cover common sizes for posters, table tents, badges, and stickers. A preview renderer shows true-to-size proportions, bleed, and crop marks. The system validates color contrast, enforces minimum font sizes, and supports fallback fonts. Templates are versioned so future brand updates do not alter previously generated packs.
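
The contrast validation referenced below follows the standard WCAG 2.1 relative-luminance formula; a sketch, with helper names assumed:

```typescript
// Standard WCAG 2.1 math; function names are illustrative.
function srgbToLinear(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * srgbToLinear(r) + 0.7152 * srgbToLinear(g) + 0.0722 * srgbToLinear(b);
}

function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// AA thresholds: 4.5:1 for normal text, 3:1 for large text (18 pt+, or 14 pt bold).
function passesAA(ratio: number, isLargeText: boolean): boolean {
  return ratio >= (isLargeText ? 3 : 4.5);
}
```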

Acceptance Criteria
Brand Kit Inheritance on Template Application
Given a campaign brand kit with logo, primary/secondary colors, and font families is selected When the user creates a new template for posters, table tents, badges, or stickers Then the template auto-applies the brand logo, colors, and font styles to all themed elements And the default text styles in all text fields use the brand fonts and sizes per template rules And the preview and export reflect the same brand styles And the template metadata records the applied brand kit ID
Locked Compliance Regions with Editable Content Slots
Given a template with locked compliance regions and defined content slots When the user attempts to move, resize, or delete locked elements (logo, margins, safe zones) Then the system prevents the change and shows a non-blocking notice When the user edits content slots for code, QR, short link, micro-script (optional), and station ID Then the system permits text entry within character limits and field validations And the QR is generated from the short link, encoded with error correction level M or higher and dark-on-light colors And the short link must match the campaign code pattern and resolve (HTTP 200) before export And the station ID is styled in bold per brand and remains within the designated area
Print Preview and Export with Bleed and Crop Marks
Given the user selects an artifact type and a common size When the preview opens Then the canvas shows true-to-size proportions with zoom indicator and rulers And bleed is shown at 0.125 in (3 mm) and crop marks at trim When exporting Then a print-ready PDF is generated with embedded crop marks and bleed And the exported trim size and bleed dimensions match the selected size And out-of-box sizes include: Posters (11x17 in, 18x24 in, A2), Table Tents (4x6 in tent, 8x6 in flat; 5x7 in tent, 10x7 in flat), Badges (3x4 in, 4x6 in), Stickers (3 in circle, 2x3.5 in rectangle)
Color Contrast Validation
Given text or QR foreground color is placed on a background color When the contrast ratio is below WCAG 2.1 AA thresholds (4.5:1 normal, 3:1 for 18 pt+ or 14 pt bold) Then the system shows a blocking error and disables export And the user may apply auto-adjust to the nearest accessible brand shade or switch to black/white And after adjustment, the contrast check passes and export is enabled And all checks are recorded in validation logs for the pack
Minimum Font Size Enforcement Per Artifact Type
Given the template type is Posters, Table Tents, Badges, or Stickers When text is styled below the minimum size for its role Then the system flags the field and blocks export until corrected or the text is truncated per rules And minimum sizes are: Posters (body 10 pt, header 24 pt, disclaimer 8 pt), Table Tents (body 9 pt, header 18 pt, disclaimer 7 pt), Badges (name 24 pt, subtext 8 pt), Stickers (body 8 pt) And fallback scaling does not reduce any text below these thresholds
Font Fallback and Layout Stability
Given the selected brand fonts are unavailable in the rendering environment When the system applies the configured fallback font stack Then text reflows within its bounding boxes and safe areas with <=2% overflow tolerance And no text crosses trim or bleed lines in preview or export And the preview indicates fallback is in use And if layout stability cannot be maintained, export is blocked with a descriptive error
Template Versioning and Immutable Past Packs
Given a pack was generated with Template v1 and Brand Kit v3 When the brand kit or template is updated to a newer version Then previously generated packs render identically when viewed or re-downloaded And re-generating the same pack uses the original snapshot of template and brand kit versions And the UI displays the template and brand kit versions used for each pack And rasterized comparisons at 300 DPI between old and re-generated exports have a max 1 px RMS difference per page And users can duplicate a pack to a new version explicitly, creating a new artifact linked to the latest versions
Station ID and Item Labeling
"As a field coordinator, I want bold station IDs on every piece so that setup and attribution are standardized across stations."
Description

Automatic generation and consistent placement of a bold station ID on every item in the pack to standardize on-site wayfinding and staff coordination. Station IDs follow a configurable scheme (e.g., S-01, S-02) and can be prefixed by event code. The ID is printed in a high-contrast block with an optional scannable micro-QR carrying station metadata for audit. The same ID is written into tracking parameters so scans and actions can be attributed to a station in RallyKit reports.
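
One plausible formatter for the configurable ID scheme, assuming a #-placeholder convention and the character and 20-character rules from the criteria below; the function name is illustrative:

```typescript
// Illustrative: "#" runs in the scheme are replaced by a zero-padded sequence number.
function formatStationId(scheme: string, sequence: number, prefix?: string): string {
  const hashes = scheme.match(/#+/)?.[0];
  if (!hashes) throw new Error(`Scheme "${scheme}" must contain at least one #`);
  const padded = String(sequence).padStart(hashes.length, "0");
  const id = (prefix ? `${prefix}-` : "") + scheme.replace(/#+/, padded);
  if (!/^[A-Z0-9_-]+$/.test(id)) {
    throw new Error(`Station ID "${id}" contains characters outside A-Z, 0-9, hyphen, underscore`);
  }
  if (id.length > 20) {
    throw new Error(`Station ID "${id}" exceeds the 20-character limit`);
  }
  return id;
}

// formatStationId("S-##", 1, "RALLY24") -> "RALLY24-S-01"
// formatStationId("STA-###", 7)         -> "STA-007"
```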

Acceptance Criteria
Consistent Station ID Placement Across All Pack Items
- Given a generated Print Pack for an event with items Poster, Table Tent, Badge, and Sticker, When the pack is rendered to PDF, Then each item displays the same station ID value in its template-defined location within ±2 mm. - Then the station ID appears inside the live/safe area with ≥3 mm clearance from trim and any bleed. - Then the station ID layer does not overlap the main QR, short link, or primary message areas defined by the template. - Then the station ID value exactly matches the configured ID for that station (case-sensitive, no extra whitespace). - Then the station ID is present on 100% of items in the pack; zero missing instances are allowed.
Configurable Station ID Scheme and Event Prefix
- Given configuration prefix "RALLY24" and scheme "S-##" with sequence 1..9, When generating station IDs, Then labels produced are "RALLY24-S-01" through "RALLY24-S-09". - When the prefix toggle is off, Then labels are "S-01" through "S-09" with no leading/trailing whitespace. - Then leading zeros are preserved to match the count of # in the scheme; changing scheme to "STA-###" yields "STA-001" etc. - Then all generated station IDs are unique within the event; any duplicate attempt is blocked with a clear validation error. - Then only A–Z (uppercase), digits, hyphen, and underscore are allowed in prefix and scheme; invalid characters are rejected with an error. - Then any resulting ID longer than 20 characters is rejected with a validation error indicating the limit and offending value.
High-Contrast Block and Typography Standards
- Given a station ID to render, When printed at final scale, Then the ID is displayed in a high-contrast block with text/background contrast ratio ≥ 7:1 (WCAG 2.1) verified by color values. - Then the station ID text uses bold weight ≥ 700 and uppercase letters. - Then minimum text height at final print scale is ≥ 3 mm on the smallest item template and scales proportionally on larger items. - Then the high-contrast block has ≥ 2 mm internal padding and a solid background (100% opacity), with no patterns behind the text. - Then the block width is ≥ 20 mm on posters/table tents and ≥ 12 mm on badges/stickers to ensure legibility.
Micro-QR Encoding and Scan Reliability
- Given micro-QR is enabled for the event, When items are rendered, Then a QR of side length ≥ 8 mm with error correction level ≥ M is placed adjacent to the station ID block per template. - Then the QR encodes a URL containing parameters rk_station, rk_event, rk_pack, rk_item_type, rk_ver, where rk_station equals the printed station ID and rk_event equals the event code. - Then 10 test scans per item type using common smartphone cameras decode successfully in ≥ 99% of attempts. - Then the decoded URL resolves with HTTP 200 and logs the station metadata server-side. - When micro-QR is disabled, Then no micro-QR is rendered and no empty placeholder remains.
Station Attribution in Tracking and Reports
- Given a user accesses a RallyKit action via an item’s main QR or short link, When they land on the action page, Then the URL includes a station parameter equal to the printed station ID. - Then the station parameter persists through redirects and form submission to completion. - Then the resulting action record in RallyKit includes the station field populated and appears in Reports > Stations within 5 minutes of completion. - Then station-level counts in the report exactly match a controlled test set of N completed actions initiated from that station’s materials. - When the station parameter is missing or malformed, Then the action is attributed to "UNKNOWN" and flagged in the audit export.
Print Readiness and Trim Safety for Station Elements
- Given the pack is exported as print-ready PDFs with crop marks enabled, When preflighted against PDF/X-4:2010, Then the file passes without errors. - Then the station ID block and micro-QR are fully within the live area with ≥ 3 mm distance from trim and bleed on all item templates. - Then the station ID text is vector (or outlined) and the QR image effective resolution is ≥ 600 dpi, with all raster elements ≥ 300 dpi. - Then the PDFs open without missing font or color space warnings in Acrobat/RIP; all colors used for the block/text are CMYK. - Then per-item PDF size is ≤ 5 MB while preserving the above quality constraints.
Batch Packaging and Delivery
"As a program manager, I want packs delivered as a single ZIP with a manifest so that I can share, archive, and audit materials without manual sorting."
Description

Batch packaging of all generated items into a structured ZIP with per-item folders and a manifest (JSON) detailing item type, size, file paths, short link, QR parameters, station ID, and script snapshot. The package is stored in the campaign’s asset library with a shareable link and optional expiration. Users can re-generate the pack on demand; each regeneration preserves version history and allows generation settings to be diffed between versions. Large files are streamed for download, and resumable uploads to cloud storage are supported.
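
One way to shape the manifest described above, sketched as TypeScript interfaces; field names mirror the criteria below, but the exact schema is an assumption:

```typescript
// Assumed schema sketch for manifest.json entries.
interface ManifestEntry {
  itemType: "poster" | "table-tent" | "badge" | "sticker";
  size: string;              // e.g. "11x17in"
  filePath: string;          // location inside the ZIP, e.g. "posters/RK-001_poster_11x17.pdf"
  shortLink: string;
  qr: { parameters: Record<string, string> };
  stationId: string;
  scriptSnapshot: { variant: string; contentHash: string; capturedAt: string } | null;
}

interface PrintPackManifest {
  packCode: string;
  version: number;          // incremented on each re-generation, prior versions immutable
  generatedAt: string;      // UTC ISO 8601
  checksumSha256: string;   // checksum of the ZIP for download verification
  entries: ManifestEntry[];
}
```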

Acceptance Criteria
ZIP Package Structure and Manifest Completeness
Given a campaign with a valid Print Pack code and enabled item types and sizes When the user triggers "Generate Print Pack" Then a single ZIP file is created containing top-level /manifest.json and per-item folders (posters, table-tents, badges, stickers) And each folder contains print-ready files with crop marks when selected and common sizes as configured And manifest.json lists every generated file entry with fields: itemType, size, filePath, shortLink, qr.parameters, stationId, scriptSnapshot And every filePath in manifest.json exists in the ZIP at the specified location And the shortLink and QR parameters in manifest.json match those embedded in the corresponding artwork And optional district-specific micro-scripts are included only when enabled and captured in scriptSnapshot
Asset Library Storage and Shareable Link with Expiration
Given packaging completes successfully When the system stores the package Then the ZIP is saved in the campaign’s asset library with a unique identifier And a shareable link is generated and returned in the response and in manifest.json metadata And if an expiration time is set, the link becomes inaccessible after expiration and returns HTTP 410 or equivalent And access to the shareable link respects campaign visibility/permissions when configured as non-public And users with appropriate permissions can revoke the link, rendering it immediately inaccessible
Streaming Download and Resume for Large Packages
Given a packaged ZIP larger than 100 MB When a user initiates a download Then the file is delivered via streaming/chunked transfer And if the connection is interrupted mid-download, a subsequent request with HTTP Range resumes from the last byte And the final downloaded file checksum (e.g., SHA-256) matches the checksum recorded in manifest.json And simultaneous downloads by multiple users do not block or corrupt the stream
Resumable Uploads to Cloud Storage
Given cloud storage is configured for the campaign When the system uploads the generated ZIP Then a resumable/multipart upload protocol is used (e.g., S3 multipart, GCS resumable) And if the network drops during upload, the next attempt resumes without restarting from byte 0 And upon completion, the storage ETag/versionId and URI are recorded and associated with the package And partial or failed uploads are cleaned up and do not appear as completed assets
On-Demand Regeneration Preserves Version History
Given a prior package version exists for the same Print Pack code When a user selects “Re-generate pack” Then a new, immutable version is created without overwriting prior versions And each version records timestamp, actor, generation settings, and checksum And users can view and download any prior version from the asset library
Settings Diff Between Versions
Given at least two package versions exist When a user opens the “Compare versions” view Then the UI presents differences in generation settings including: included item types, sizes, crop marks flag, district micro-scripts flag, QR parameters, short link domain/path, station IDs, and script source timestamp And unchanged settings are collapsed by default with the option to expand And the diff can be exported as JSON and attached to audit logs
Manifest Schema Validation and Content Consistency Checks
Given a manifest.json is produced When it is validated against the defined JSON Schema for Print Pack packages Then validation passes with no errors And automated checks confirm that for a sampled set of files the station ID and short link text extracted from PDFs match manifest values And the scriptSnapshot hash matches the script content used during generation And any validation failure marks the package status as failed and prevents shareable link generation

Dual Codes

Every QR is paired with a memorable short link and SMS keyword fallback so supporters without camera access or data can still take action. All paths attribute to the same station/team code, keeping totals accurate and inclusive in low-connectivity environments.

Requirements

Unified Code Creation Wizard
"As a field organizer, I want to generate a QR, short link, and SMS keyword in one step so that every supporter can take action regardless of device or connectivity."
Description

Provide a single creation flow that generates a QR code, a branded short link, and an SMS keyword from one action page setup. Ensure all three artifacts map to the same campaign object and station/team code for unified counting. Support vanity options (custom alias and keyword), preview of QR in multiple sizes, and downloadable asset formats (PNG/SVG). Enforce validation on character sets, reserved words, and uniqueness. Integrate into the existing RallyKit action page builder and publish process with clear success/error states. Apply tracking parameters consistently across all paths, ensure case-insensitive handling, and rate-limit generation to prevent abuse. Store metadata for analytics and audit, including creator, timestamp, and mappings.
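
A hedged sketch of the alias and keyword validation described above; the character sets, reserved-word handling, and lowercase normalization follow the criteria, while the function shapes and the contents of the reserved list are assumptions:

```typescript
// Assumed reserved list; the real list would include carrier commands and tenant-specific terms.
const RESERVED = new Set(["stop", "help", "start", "info", "unstop", "admin", "api"]);

interface ValidationResult {
  ok: boolean;
  normalized?: string;
  error?: "INVALID_CHARACTERS" | "RESERVED" | "NOT_UNIQUE";
}

function validateAlias(raw: string, existing: Set<string>): ValidationResult {
  const normalized = raw.trim().toLowerCase(); // case-insensitive matching
  if (!/^[a-z0-9-]+$/.test(normalized)) return { ok: false, error: "INVALID_CHARACTERS" };
  if (RESERVED.has(normalized)) return { ok: false, error: "RESERVED" };
  if (existing.has(normalized)) return { ok: false, error: "NOT_UNIQUE" };
  return { ok: true, normalized };
}

function validateSmsKeyword(raw: string, existing: Set<string>): ValidationResult {
  const normalized = raw.trim().toLowerCase();
  if (!/^[a-z0-9]+$/.test(normalized)) return { ok: false, error: "INVALID_CHARACTERS" };
  if (RESERVED.has(normalized)) return { ok: false, error: "RESERVED" };
  if (existing.has(normalized)) return { ok: false, error: "NOT_UNIQUE" };
  return { ok: true, normalized };
}
```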

Acceptance Criteria
Unified Artifact Generation and Mapping
Given an existing action page in the RallyKit builder and a selected station/team code When the user completes the Unified Code Creation Wizard and clicks Generate Then the system creates exactly three artifacts: one QR code, one branded short link, and one SMS keyword And all three artifacts map to the same campaign object ID and the selected station/team code And activating any artifact routes the supporter to the same action page destination And unified counts increment identically regardless of the path used (QR, short link, SMS) And a success state displays with previews and controls to copy/download artifacts And a metadata record is stored containing creator_id, created_at timestamp (UTC), campaign_id, station_code, artifact identifiers/values, and tracking template, auditable for later review
Vanity Alias and Keyword Selection with Validation
Given the user enters a custom short-link alias and SMS keyword in the wizard When the alias or keyword contains invalid characters Then an inline error explains the allowed character set (alias: letters, numbers, hyphens; keyword: letters and numbers only) And the Generate action is disabled until errors are resolved When the alias or keyword matches a reserved word Then the system rejects it with a "reserved" error and suggests alternatives When the alias or keyword is not unique (case-insensitive) within the workspace/provider Then the system rejects it with "already taken" and suggests up to 3 available alternatives When both values are valid Then previews update to show the vanity alias and keyword
QR Preview and Downloadable Assets
Given the wizard has generated artifacts When viewing the QR section Then preview thumbnails are available at small, medium, and large sizes And clicking Download PNG downloads a PNG file of the QR code And clicking Download SVG downloads an SVG file of the QR code And scanning any preview or downloaded QR resolves to the branded short link that includes the configured tracking parameters
Publish Flow Integration and Transactional Error Handling
Given an action page is ready to publish and the wizard is configured When the user clicks Publish Then the publish process generates/activates the QR, short link, and SMS keyword and publishes the page And a clear success message confirms activation for all three paths When any artifact creation fails Then the publish process surfaces a descriptive error message and a Retry action And no partial activation occurs; failed attempt leaves previously published state unchanged and hides any newly created partial artifacts from end users And an audit log entry records the error details and correlation ID
Consistent Tracking Parameters Across All Paths
Given tracking parameters are configured for the campaign When a supporter scans the QR code Then they are routed to the action page URL with the tracking parameters applied and a path indicator for "qr" When a supporter taps the branded short link Then they are routed to the action page URL with the same tracking parameters and a path indicator for "link" When a supporter texts the SMS keyword and receives the action link Then the returned link includes the same tracking parameters and a path indicator for "sms" And duplicate tracking parameters are not appended if already present, and values are URL-encoded
Case-Insensitive Handling and Uniqueness Enforcement
Given a custom alias or keyword is entered with mixed case When artifacts are generated Then the stored alias and keyword are normalized to lowercase for matching And lookups for short links and SMS keywords are case-insensitive And attempting to create an alias/keyword that differs only by case is rejected as not unique
Rate Limiting and Abuse Prevention
Given a configurable rate limit is set for code generation per organization and IP When a user exceeds the configured limit within the time window Then subsequent generation requests are rejected with HTTP 429 and a message indicating the retry-after time And no artifacts are created for rejected requests And rate limit response headers include limit, remaining, and reset values And an audit log records the throttling event with user, org, and IP
Unified Attribution and Deduplication Engine
"As a campaign director, I want all entry methods to roll up into one station/team total so that our reported totals remain accurate across channels and teams."
Description

Route all traffic from QR scans, short link visits, and SMS interactions to a single station/team code and campaign for accurate rollups. Maintain a shared attribution token across paths to tie actions to the originating station even if a supporter switches channels. Implement deduplication rules (e.g., same supporter within time window) and merge logic for multi-step flows. Expose real-time counters and breakdowns by path in the dashboard, and surface UTM/referrer data where available. Provide an exportable audit log mapping path->station->action with timestamps while adhering to data minimization policies.
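
The time-window deduplication rule might look like the sketch below, assuming supporterKey is a hashed phone/email or device identifier tied to the attribution token and the two-hour window is the configurable default:

```typescript
// Sketch only: storage and lookup would live in the attribution service.
interface CompletedAction {
  supporterKey: string;          // hashed identity, never raw PII
  campaignId: string;
  actionType: "call" | "email" | "petition";
  completedAt: Date;
}

const DEDUP_WINDOW_MS = 2 * 60 * 60 * 1000; // configurable, 2 hours by default

function isDuplicate(incoming: CompletedAction, priorActions: CompletedAction[]): boolean {
  return priorActions.some(
    (prior) =>
      prior.supporterKey === incoming.supporterKey &&
      prior.campaignId === incoming.campaignId &&
      prior.actionType === incoming.actionType &&
      incoming.completedAt.getTime() >= prior.completedAt.getTime() &&
      incoming.completedAt.getTime() - prior.completedAt.getTime() < DEDUP_WINDOW_MS
  );
}
// Duplicates are still recorded (deduplicated=true) but do not increment totals.
```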

Acceptance Criteria
Unified Station Attribution Across Paths
Given a campaign with station/team code "S" and an active dual-code set (QR, short link, SMS keyword) When a supporter completes an action (call, email, or form submit) via any single path Then the action is attributed to station/team code "S" and the correct campaign And the station/team total increments by 1 for that action type And the campaign total increments by 1 for that action type And the path breakdown increments under the originating path (QR, ShortLink, or SMS) And no duplicate increment is recorded for the same single completion
Attribution Token Persists Across Channels
Given a supporter is issued an attribution token "T" upon first entry on any path When the supporter continues or completes actions via a different path within 24 hours Then token "T" persists across paths and ties all actions to the original station/team code And dashboard unique-supporter counts treat token "T" as one supporter where applicable And station/team totals reflect merged actions without double counting
Time-Window Deduplication by Supporter
Given a configurable deduplication window of 2 hours And a supporter identity is derived via hashed phone, hashed email, or device identifier linked to token "T" When the same supporter completes the same action type for the same campaign within the window Then only the first completion increments station/team and campaign totals And subsequent completions within the window are recorded with deduplicated=true and do not increment totals And the audit log stores deduplication_reason and source_attempt_id for each deduplicated event
Merge Multi-Step Actions Into Session
Given a multi-step flow (e.g., call then email) is configured for a campaign When steps are completed across one or more paths within 24 hours under the same token "T" Then the steps are merged into a single supporter session "M" linked to token "T" And each step is recorded with step_order and completion_status And station/team totals reflect one unique supporter and separate action counts per step And the dashboard shows merged path lineage for the flow
Real-Time Dashboard Counters and Breakdown
Given the campaign dashboard is open on the counters view When 100 actions are completed across QR, ShortLink, and SMS paths Then station/team and campaign counters update within 3 seconds p95 of action completion And path breakdown charts/tables reflect the same counts within the same SLA And counters persist accurately across page refreshes And the live connection auto-reconnects within 10 seconds if dropped, resyncing missed events
Capture and Surface UTM/Referrer Data
Given a supporter arrives via a link with UTM parameters and a browser that provides a referrer When the supporter completes an action Then utm_source, utm_medium, utm_campaign, utm_term, and utm_content values are captured for the attribution record And the referrer domain is captured And values are visible in dashboard action detail and included in exports And if absent, fields are stored as "unknown" And only allowlisted query parameters are retained to meet data minimization policies
Exportable Audit Log with Data Minimization
Given an admin requests an audit log export for a date range and campaign When the export is generated Then the file contains one row per action attempt with fields: timestamp_iso, campaign_id, station_code, path, action_type, attribution_token, unique_supporter_id, deduplicated (boolean), merge_session_id, utm_source, utm_medium, utm_campaign, referrer_domain And PII (raw phone, email, name) is excluded And the export is available within 60 seconds for up to 100k rows And export counts reconcile with dashboard totals within 0.5% for the same filters And timestamps are ISO 8601 UTC with millisecond precision
SMS Fallback Flow with District Matching
"As a supporter without data, I want to text a keyword and receive tailored instructions so that I can contact my legislator immediately from my phone."
Description

Enable an SMS keyword flow that collects minimal info (ZIP+4 or full address as needed) to match supporters to their legislators and returns district-specific calling and email scripts based on live bill status. Support opt-in/consent, HELP/STOP compliance, language selection, and rate limiting. Provide phone numbers and scripts via SMS for low-data users, with optional call-through connect and email link shortcodes. Integrate with telephony/SMS provider, handle retries and error states (invalid address, timeout), and log actions to the same station/team code. Include templates managed in RallyKit so messaging stays synchronized with the web action page.
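
The compliance keywords can be routed ahead of the main flow; a sketch in which the handler shape and reply copy are assumptions:

```typescript
// Illustrative inbound router; provider webhooks and message templates are out of scope here.
type InboundResult =
  | { kind: "optOut" }                       // confirm opt-out, add to suppression list
  | { kind: "help" }                         // help text, contact info, STOP instructions
  | { kind: "consentRequest" }               // brand, purpose, frequency, carrier-rate disclaimer
  | { kind: "resumeFlow"; body: string };    // address capture, CALL, EMAIL, outcome tags, etc.

const STOP_WORDS = new Set(["stop", "unsubscribe", "quit", "cancel"]);

function routeInbound(body: string, isSubscribed: boolean): InboundResult {
  const text = body.trim().toLowerCase();
  if (STOP_WORDS.has(text)) return { kind: "optOut" };
  if (text === "help") return { kind: "help" };
  if (!isSubscribed) return { kind: "consentRequest" };
  return { kind: "resumeFlow", body: text };
}
```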

Acceptance Criteria
Keyword Opt-in and Compliance (HELP/STOP)
Given a new mobile number texts the campaign keyword, When the first inbound message is received, Then the system replies with brand identification, purpose, message frequency, and carrier rate disclaimer and requests explicit consent (reply Y/YES) before collecting any PII. Given the user replies YES or Y, When consent is received, Then the system records consent timestamp and source, subscribes the number, and proceeds to address capture. Given the user replies STOP/UNSUBSCRIBE/QUIT/CANCEL, When received at any time, Then the system confirms opt-out, ceases all messages, and adds the number to a suppression list across all station/team codes. Given the user texts HELP, When received at any time, Then the system replies with help instructions, contact information, and STOP instructions in the selected language. Given the user is already opted-in, When they text the keyword again, Then the system skips consent and resumes the flow.
Rate Limiting and Abuse Protection
Given any single MSISDN, When more than 3 initiation attempts occur within 60 seconds, Then subsequent initiations within that window are ignored and one rate-limit notice is sent. Given inbound messages from the same MSISDN exceed 20 in 10 minutes, When processing responses, Then the system throttles replies to at most 1 per 10 seconds and logs a rate-limit event. Given messages originate from known carrier test ranges or invalid number formats, When detected, Then the system drops the request and records a blocked event. Given a rate limit is triggered, When 5 minutes elapse without further messages, Then limits reset automatically.
Address Capture and District Matching
Given a subscribed user, When they provide a ZIP+4, Then the system matches their congressional and state legislative districts with ≥99% parser confidence and proceeds. Given a user provides only a 5-digit ZIP that maps to multiple districts, When ambiguity is detected, Then the system requests street address (and optional unit) and retries matching. Given the address is invalid or unresolvable after 3 attempts or 2 minutes total, Then the system sends a clear error, offers HELP, and exits the flow without logging an action. Given a successful match, When confirmation is sent, Then the system lists representative names and offices before sending scripts.
Live Bill Status and Script Personalization
Given a matched district and active campaign, When fetching content, Then scripts are generated from RallyKit templates using the current bill status at request time with staleness < 5 minutes. Given the user’s language preference, When scripts are generated, Then the localized template is used with placeholders (legislator name, bill number, committee, ask) correctly populated. Given templates change in RallyKit, When an update is published, Then subsequent SMS messages reflect updates within 2 minutes and are versioned for audit.
Script Delivery and Call-Through Connect
Given a successful match, When sending call instructions, Then the SMS includes each target’s name and a dialable number or tel: short link in priority order. Given the user replies CALL, When patch-through is supported, Then the system initiates a call to the user and connects to the first target; after hang-up the user can reply NEXT to proceed to the next target. Given the user cannot accept a patch call, When they reply NUMBERS, Then the system returns plain phone numbers with district-specific talking points via SMS. Given each call attempt, When completed or skipped, Then the system prompts for outcome tags (DONE, VM, BUSY, SKIP) and records the selection.
Email Shortcodes for Low-Data Users
Given a successful match, When the user requests EMAIL via SMS, Then the system replies with a mailto: link including subject and body or a short link to a lightweight page that pre-fills the email to all matched targets. Given the user lacks data access, When they reply EMAILS, Then the system returns target email addresses and a script ≤ 500 characters delivered in ≤ 2 concatenated SMS segments. Given a short link is provided, When clicked, Then it resolves to the email action in ≤ 1 second p95 and attributes to the same station/team code.
Error Handling, Retries, and Unified Logging
Given provider timeouts or API errors during geocoding or template fetch, When a request fails, Then the system retries up to 2 times with exponential backoff and informs the user if delays exceed 10 seconds. Given any terminal error (invalid address, opt-out, timeout), When the flow ends, Then the system sends a closing message with how to restart and logs the error category. Given any action (consent, match, CALL, EMAIL, outcome tag), When it occurs, Then the system logs it to the campaign’s station/team code with timestamp, MSISDN hashed per privacy policy, and event type; QR/short link/SMS paths de-duplicate to a single supporter session.
Keyword Reservation and Collision Management
"As a campaign admin, I want to confirm keyword availability and get alternatives so that I can launch quickly without conflicts or rerouting errors."
Description

Ensure SMS keywords are unique within the relevant scope (tenant or global) and provide real-time availability checks during setup. Offer smart suggestions when conflicts arise, allow admins to reserve keywords for a defined window, and handle expiration/release workflows. Validate length and character rules, block risky or restricted terms, and support admin overrides with logging. Display clear routing for each keyword to its campaign/station mapping to prevent misattribution.
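
A sketch of the reservation-window check implied above; the record shape and scope semantics are assumptions consistent with the criteria below:

```typescript
// Assumed reservation record; persistence and the release scheduler are omitted.
interface KeywordReservation {
  keyword: string;             // stored lowercase
  scope: "tenant" | "global";
  tenantId?: string;
  reservedBy: string;
  createdAt: Date;
  expiresAt: Date;             // 15 minutes to 7 days after createdAt (default 24 h)
}

function isBlockedBy(
  res: KeywordReservation,
  keyword: string,
  claimScope: "tenant" | "global",
  claimTenantId: string | undefined,
  now: Date
): boolean {
  if (res.keyword !== keyword.toLowerCase()) return false;
  if (now >= res.expiresAt) return false;                             // auto-released on expiry
  if (claimScope === "global" || res.scope === "global") return true; // global conflicts span all tenants
  return res.tenantId === claimTenantId;                              // tenant claims conflict only within the tenant
}
```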

Acceptance Criteria
Real-time Keyword Availability Check
- Given an admin is on the keyword setup form and has selected a scope (Tenant or Global), When they type at least 3 characters and pause for 300 ms, Then the system triggers a debounced availability check and returns a result within 800 ms (p95) displaying "Available" or "Unavailable" with scope context. - Given the network is slow or the check errors, When the request fails, Then the UI displays a retriable "Check failed" state, disables submission, and re-attempts on next input or manual retry. - Given a keyword is currently reserved by another admin, When availability is checked, Then the UI shows "Unavailable (Reserved)" with remaining time to expiry.
Scoped Uniqueness Enforcement and Concurrency Control
- Given scope=Tenant, When saving a keyword equal (case-insensitive) to any active or reserved keyword within the tenant, Then the save is rejected with 409 Conflict and an error message indicating the conflicting tenant record. - Given scope=Global, When saving a keyword equal to any active or reserved keyword in any tenant, Then the save is rejected with 409 Conflict and an error message indicating global conflict. - Given two admins submit the same keyword within the same scope concurrently, When both requests hit the API, Then only one succeeds and the other receives 409 Conflict; the datastore contains a single resulting record. - Given a keyword is deleted or released, When checking availability, Then it becomes Available within 5 seconds.
Conflict Handling With Smart Suggestions
- Given a keyword conflict is detected, When suggestions are requested, Then the system returns 5 available alternatives that satisfy validation rules and are available in the selected scope. - Suggestions must exclude blocked/restricted terms and any currently active or reserved keywords; each suggestion must differ by at least one character from the original. - When the admin clicks a suggestion, Then it autofills the field and passes availability without additional manual edits. - Suggestions include readable variants (e.g., year suffix, short campaign slug prefix, district code) and respect length/character rules.
Reservation Window Creation and Auto-Release Workflow
- Given appropriate permissions, When an admin reserves a keyword, Then they must specify a duration between 15 minutes and 7 days (default 24 hours); the system stores reserved_by, scope, created_at, and expires_at. - Given a reserved keyword reaches expires_at, When the scheduler runs (at least every 60 seconds), Then the reservation is released automatically and availability updates within 60 seconds. - Given an admin extends or cancels a reservation, Then the system updates the record, enforces uniqueness during the revised window, and writes an audit log entry. - While reserved, attempts by others to create/assign the keyword in the same scope are blocked with a message showing who reserved it and the remaining time.
Admin Override With Audit Logging
- Given a user with role SuperAdmin, When they override a restricted-term or uniqueness conflict, Then the UI requires an override reason (minimum 10 characters) and confirmation; on success, the system saves the keyword and writes an immutable audit entry capturing user, timestamp, IP, scope, prior state, and reason. - Given a keyword marked as Hard Block (e.g., carrier-reserved commands), When a SuperAdmin attempts override, Then the system denies the override with a "Hard-blocked keyword" error. - All overrides appear in keyword history and are included in the audit export endpoint.
Validation Rules for Length, Characters, and Restricted Terms
- When validating input, Then enforce: length 3–15 characters; allowed characters A–Z and 0–9 only; must start with a letter; case-insensitive; whitespace is trimmed; no punctuation, emojis, or spaces. - When the input matches any restricted list entry (e.g., obscene words, brand-protected, tenant blacklist), Then reject with a specific error message identifying the violated rule category. - When the input equals any carrier-reserved SMS command (STOP, UNSTOP, START, HELP, INFO), Then reject with a reserved-command error and help text. - Validation errors are shown inline next to the field and block submission until resolved.
Routing Visibility and Attribution Integrity
- Given an active or reserved keyword, When viewing its details, Then the UI displays mapping fields: scope, tenant, campaign, station/team code, paired short link/QR, status, and last-updated; all fields are read-only unless the user has edit rights. - Given a test inbound SMS using the keyword from a verified test number, When processed, Then it routes to the mapped campaign/station and appears on the live dashboard with the correct station/team code within 5 seconds (p95). - Given a mapping change is saved, When a subsequent message arrives, Then routing uses the new mapping immediately and an audit log records the change with before/after values. - The system blocks saving any mapping that creates ambiguous routing for the same scope+keyword and returns a clear error.
Branded Short Links with Custom Domains
"As a nonprofit director, I want short links that use our brand domain so that supporters trust the link and feel confident taking action."
Description

Support organization-owned custom domains for short links with automated DNS verification and TLS provisioning. Provide fallback to a RallyKit default domain. Generate human-readable slugs where possible while ensuring uniqueness and fast redirects (<100 ms p95). Propagate the branded short link to QR generation and SMS responses for consistency. Include link hygiene features (link status checks, spam/abuse safeguards) and analytics tagging aligned with attribution.
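
Slug generation per the rules below could look like the following sketch; the transliteration is a simple diacritic strip, and isTaken stands in for a per-domain uniqueness check:

```typescript
// Illustrative slug helpers; collision handling appends a short random suffix.
function slugify(title: string): string {
  return title
    .normalize("NFKD")                 // split diacritics so they can be stripped
    .replace(/[\u0300-\u036f]/g, "")   // drop combining marks
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")       // collapse non-alphanumerics to hyphens
    .replace(/^-+|-+$/g, "")           // trim leading/trailing hyphens
    .slice(0, 24);
}

function randomSuffix(length = 4): string {
  const alphabet = "abcdefghjkmnpqrstuvwxyz23456789"; // avoid ambiguous characters
  return Array.from({ length }, () => alphabet[Math.floor(Math.random() * alphabet.length)]).join("");
}

function uniqueSlug(title: string, isTaken: (slug: string) => boolean): string {
  let slug = slugify(title);
  if (slug.length < 4) slug = `${slug}-${randomSuffix()}`.replace(/^-+/, "");
  while (isTaken(slug)) {
    slug = `${slugify(title).slice(0, 19)}-${randomSuffix()}`; // stays within 24 characters
  }
  return slug;
}
```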

Acceptance Criteria
Custom Domain Onboarding and Auto TLS Provisioning
Given an org admin adds a custom domain or subdomain When the required DNS TXT (apex) or CNAME (subdomain) record is created correctly Then the system detects verification within 5 minutes and marks the domain Verified And a publicly trusted TLS certificate is automatically issued and bound within 10 minutes of verification And HTTP requests 301 to HTTPS and HTTPS serves a valid certificate for the domain And the certificate auto-renews at least 30 days before expiry without service interruption And the domain status (Pending, Verified, Unhealthy, Expiring) is visible via UI and API
Default Domain Fallback and Health-Based Switching
Given a branded domain is configured for an organization And active health checks run every 2 minutes for DNS resolution, TLS validity, and edge reachability When two consecutive health checks fail Then new short links, QR codes, and SMS responses automatically use the RallyKit default domain And the UI displays an Unhealthy banner and an alert is sent to org admins And when health checks pass twice consecutively, the domain is restored to Active for new assets And existing short links using the branded domain are left unchanged; clicks succeed if the domain resolves, otherwise a standard outage page is served
Human-Readable Slug Generation and Uniqueness
Given a new short link is created without a custom alias When the slug is generated Then the slug is 4–24 characters, lower-case, ASCII, hyphen-separated, and derived from the campaign title And reserved/sensitive words are excluded And uniqueness is guaranteed per domain under concurrent requests (no collisions) And on collision or reserved match, a 3–4 character random suffix is appended to achieve uniqueness And if a user supplies a custom alias that is in use or reserved, the API returns 409 with a clear reason And titles with non-ASCII characters are transliterated and diacritics removed And any slug containing whitespace or ambiguous characters is rejected with validation errors
Redirect Latency Performance p95 < 100 ms
Given a published short link on an active domain When 10,000 synthetic GET requests over 15 minutes are executed from at least 5 geographic regions to the nearest edge Then the time to first byte of the 3xx response has p95 ≤ 100 ms and p99 ≤ 200 ms And the response contains only required headers and no body exceeding 512 bytes And redirect serving is edge-terminated (no origin dependency) to meet latency targets And monthly redirect endpoint uptime is ≥ 99.95%
Propagation of Branded Link to QR and SMS Outputs
Given a campaign short link exists and the branded domain is Active When a user generates a QR code Then the QR encodes the branded short URL exactly and the downloaded filename includes the domain and slug And when an SMS keyword response includes the action link, the message body uses the branded short URL And when the branded domain is Unhealthy or Pending, both QR and SMS use the default domain without changing the underlying link ID or attribution And station/team codes and UTM tags are preserved identically across QR, short URL, and SMS channels
Link Hygiene: Target Status Checks and Abuse Safeguards
Given a destination URL is set for a short link When the link is created Then the system performs an HTTP HEAD/GET check within 2 seconds and records HTTP status, content type, and final resolved URL And a daily scheduled check marks the link Broken after two consecutive 4xx/5xx results and notifies org admins And creation is blocked for destinations flagged by threat intelligence or org blocklist, returning 422 with reason and an owner override option And click traffic is rate-limited per IP and per link (e.g., >50 clicks/min/IP returns 429) and common bot traffic is excluded from analytics
Analytics Tagging and Attribution Consistency
Given a short link is accessed via any channel (QR scan, typed short URL, SMS) When the redirect occurs Then analytics events include org_id, campaign_id, station_code, team_code, channel, domain_type, and short_link_id And existing UTM parameters in the destination are preserved; if absent, utm_source, utm_medium, and utm_campaign are appended aligned to channel and campaign And counts for all access paths roll up to the same station/team code totals And analytics export and API expose these fields without duplicate events for a single click id
Code Assets and Field Kit Exports
"As a field volunteer, I want ready-to-print sheets with our QR, short link, and SMS instructions so that I can set up stations quickly at events without design work."
Description

Automatically generate printable and digital assets that display the QR code, short link, and SMS keyword together with plain-language instructions and the station/team label. Provide multiple sizes (flyer, poster, sticker, table tent) with high-contrast, accessible designs and locale support. Allow batch export for multi-station events (CSV in, PDFs/PNGs out) and social-ready images for quick sharing. Tie exports to the campaign so assets update when codes change, with versioning for audit.
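
Batch CSV validation might be sketched as follows; the column names mirror the criteria below, while the error-report shape and size tokens are assumptions:

```typescript
// Assumed row shape after CSV parsing; sizes and social_presets are semicolon-separated lists.
interface BatchRow {
  campaign_id: string;
  station_code: string;
  team_label: string;
  locale: string;          // e.g. "es-ES"
  sizes: string;           // e.g. "flyer;poster"
  social_presets: string;  // e.g. "1080x1080;1080x1920"
}

interface RowError { row: number; reason: string }

const KNOWN_SIZES = new Set(["flyer", "poster", "sticker", "table-tent"]);

function validateRows(rows: BatchRow[]): RowError[] {
  const errors: RowError[] = [];
  if (rows.length < 1 || rows.length > 200) {
    errors.push({ row: 0, reason: "Batch must contain between 1 and 200 rows" });
  }
  rows.forEach((r, i) => {
    const rowNum = i + 1;
    if (!r.campaign_id || !r.station_code) {
      errors.push({ row: rowNum, reason: "Missing campaign_id or station_code" });
    }
    const sizes = r.sizes.split(";").map((s) => s.trim()).filter(Boolean);
    if (sizes.length === 0 || sizes.some((s) => !KNOWN_SIZES.has(s))) {
      errors.push({ row: rowNum, reason: `Unknown or missing sizes: "${r.sizes}"` });
    }
  });
  return errors; // invalid rows are rejected with a downloadable error report
}
```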

Acceptance Criteria
Single-station asset generation includes dual codes and instructions
Given a campaign with an active station/team code that has an action URL, short link, and SMS keyword mapped When a user generates a flyer asset for that station Then the rendered asset displays all of the following: QR code (encoding the action URL with station/team attribution params), the short link text, the SMS keyword with the configured short/long code, plain-language instructions from the approved template, and the visible station/team label And the station/team label matches exactly the label stored for the station code And the short link and SMS keyword correspond to the same station/team mapping as the QR target And the digital export includes both PDF and PNG variants for the selected size
Multi-size exports with accessibility compliance and locale support
Given a user selects sizes: flyer (US Letter/A4), poster (24x36 in / A1), sticker (3x3 in), and table tent (5x7 in) When assets are generated Then each file matches its target dimensions and includes required print bleeds/margins per size profile And body text is at least 12 pt (sticker may reduce to 9 pt) and fonts embed full glyph support for selected locale And color contrast between text and background meets WCAG 2.1 AA (>= 4.5:1 for normal text) And PDFs are tagged and pass PDF/UA validation; PNGs include an export manifest with equivalent alt text/description And when locale is changed (e.g., to es-ES or ar), all instructional text and labels are localized, phone numbers are formatted per locale, and RTL layouts are applied where applicable
Batch export from CSV to per-station PDFs/PNGs and social-ready images
Given a CSV containing columns: campaign_id, station_code, team_label, locale, sizes, social_presets And at least 1 and at most 200 rows are provided When the CSV is uploaded and validated Then rows with missing or invalid required columns are rejected with a downloadable error report referencing row numbers and reasons And for each valid row, the system generates PDFs and PNGs for the requested sizes and social-ready images for selected presets (e.g., 1080x1080, 1080x1920, 1200x675) with safe areas preserved And the output is returned as a single ZIP with per-station folders, consistent file naming (<campaign>-<station>-<size>-v<version>.<ext>) and a manifest.json summarizing outputs And the batch job status is visible with counts of succeeded, failed, and skipped rows
Campaign-linked assets auto-update with versioning and audit trail
Given assets have been generated for a campaign And later the campaign updates any of: action URL target, short link, SMS keyword, or station/team label When the update is saved Then a new asset version is generated for all affected stations within the campaign And prior versions remain accessible via versioned URLs and are marked as superseded And a non-versioned share link always points to the latest version And the audit log records: who made the change, what fields changed, old/new values, affected asset IDs, version numbers, timestamps, and checksums of generated files
QR scannability, encoding standards, and output quality
Given assets are generated for any supported size When inspecting the QR code payload and render Then the QR encodes the HTTPS action URL with station/team attribution and uses QR Model 2 with error correction level M or higher And the quiet zone is at least 4 modules and the minimum module size meets or exceeds 0.8 mm for print assets And print exports are at least 300 DPI; digital PNGs are at least 144 DPI And automated scanning across three reference viewport sizes (phone, small tablet, large tablet) succeeds at simulated distances appropriate to size And the short link is legible in the final output at a minimum x-height of 2.0 mm for print assets
Authorization and performance for exports
Given role-based access control is enabled for campaigns When a Viewer attempts to generate or download exports Then access is denied with an explanatory message and a logged event And when an Editor or Owner requests a single-station export, the job completes within 5 seconds at the 90th percentile And when an Editor or Owner requests a batch export of up to 200 stations, the job completes within 120 seconds at the 90th percentile and streams partial results for longer batches if configured And all export requests and downloads are logged with user, campaign, station scope, IP, timestamp, and result status
Attribution Health Monitoring and Alerts
"As an organizer, I want proactive alerts if any action path breaks so that we can fix issues before we lose supporter actions during events."
Description

Continuously test short link redirects, QR asset availability, SMS keyword routing, and station mappings with synthetic checks. Track latency and error rates, surface a status view per campaign, and send alerts via email/SMS/Slack when thresholds are breached or mappings drift. Provide one-click diagnostics to trace a path from entry to action with current configuration snapshots. Include auto-healing where safe (e.g., republishing broken redirects) and detailed logs for incident review.
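
A synthetic redirect check can run server-side with fetch in manual-redirect mode; a sketch in which the result shape and latency budget are assumptions:

```typescript
// Node 18+ fetch; redirect: "manual" returns the 3xx itself so the hop can be inspected.
interface CheckResult {
  ok: boolean;
  status: number;
  location?: string;
  latencyMs: number;
}

async function checkShortLink(
  shortUrl: string,
  expectedTarget: string,
  latencyBudgetMs = 1000
): Promise<CheckResult> {
  const started = Date.now();
  const res = await fetch(shortUrl, { redirect: "manual" });
  const latencyMs = Date.now() - started;
  const location = res.headers.get("location") ?? undefined;
  const isRedirect = res.status === 301 || res.status === 302;
  const ok = isRedirect && location === expectedTarget && latencyMs <= latencyBudgetMs;
  return { ok, status: res.status, location, latencyMs };
}
```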

Acceptance Criteria
Short Link Redirect Health Checks
- Given a campaign station with an active short link and configured target URL, When synthetic GET requests run every 1 minute from three geographic regions using a standard user-agent, Then the short link responds with 301/302 to the configured target, and the final action page responds 200 within the configured latency threshold. - Given campaign-level latency and error-rate thresholds are configured, When p95 redirect+load latency or 5-minute error rate exceeds the threshold, Then the short link path state is set to Degraded or Down accordingly and the event is recorded for alerting.
QR Asset Availability Monitoring
- Given a campaign's QR image asset is stored and published, When synthetic HEAD and GET requests run every 5 minutes, Then the asset returns 200 with correct Content-Type (image/png or image/svg+xml), an ETag/Last-Modified header, and the decoded QR payload matches the current short link URL for the same station. - Given the QR asset is rotated or republished, When the check runs after update, Then the QR decodes to the updated short link within 2 minutes, and the previous asset URL returns 410 or redirects per configuration without breaking attribution.
SMS Keyword Routing Checks
- Given a campaign SMS keyword mapped to a station via the provider sandbox/test number, When synthetic SMS messages containing the keyword are sent from at least two carrier networks, Then the inbound webhook is received, routing resolves to the correct station/team code, and an action link reply is sent within the configured SLA. - Given provider delivery errors occur, When two consecutive synthetic messages fail within a 10-minute window, Then the SMS path state is set to Degraded and an alert payload includes provider error codes and message SIDs.
Station Mapping Consistency & Drift Detection
- Given QR, short link, and SMS paths are configured for a campaign, When a reconciliation job runs every 5 minutes, Then each path resolves to the same station/team code and campaign ID; no orphaned or mismatched mappings exist. - Given mapping drift is detected (e.g., short link target points to a different campaign or station), When detected, Then the Status view shows Drift Detected with offending path details and a remediation suggestion, and an alert is generated.
Per-Campaign Status View with Latency/Error Metrics
- Given health check metrics are collected, When a user opens the campaign Status view, Then the dashboard shows state (Healthy/Degraded/Down) per path (QR, short link, SMS), p50/p95 latency, 5/15/60-minute error rates, last check time, and last incident summary. - Given a user adjusts the time window, When the time range is changed, Then charts and summaries update within 2 seconds and reflect only data from the selected interval.
Multi-Channel Alerting on Threshold Breach
- Given alert destinations are configured (email, SMS, Slack), When a threshold is breached or mapping drift is detected, Then a single alert notification is delivered to all enabled channels within 60 seconds, with deduplication during the suppression window. - Given an alert is acknowledged or the state returns to Healthy, When auto-heal succeeds or metrics recover below thresholds, Then a resolution notification is sent and the incident is auto-closed with final metrics attached.
One-Click Diagnostics, Safe Auto-Healing, and Incident Logs
- Given a failed health check exists, When a user clicks Run Diagnostics from the Status view, Then an end-to-end trace is produced including DNS resolution, redirect hops, HTTP response headers (redacted as needed), SMS provider events, resolved mappings, and a configuration snapshot, and it is displayed within 10 seconds. - Given auto-healing is enabled and the failure is a missing or corrupted short link redirect, When diagnostics confirm the target URL in config differs from the active short link, Then the system republishes the redirect to match config, retries the check, and, if successful, marks the incident as Resolved with an Auto-Healed tag. - Given diagnostics or auto-heal actions occur, When the incident timeline is viewed, Then immutable logs show actor, timestamp, action, old/new values, and outcomes; logs are exportable as JSON and CSV.

Auto-Reset Kiosk

Lock any device to a station page with auto-clear after submission or idle timeout, a privacy-safe confirmation loop, and single-tap restart. Buffers actions offline and syncs when back online. Speeds lines, protects PII in crowds, and keeps counts flowing even on shaky venue Wi‑Fi.

Requirements

Kiosk Lockdown & Admin Exit
"As a field organizer, I want to lock a tablet to a station page so that attendees can only complete the intended action without changing settings or leaving the flow."
Description

Locks the device to a designated RallyKit station page in full-screen, disabling navigation, external links, and browser UI elements to prevent session escape. Provides an admin-only exit path via PIN-protected overlay and/or secret URL gesture. Persists kiosk identity across resets to attribute actions to a physical station without exposing PII. Includes focus-loss detection with auto-restore to the station page, and documented OS-level kiosk instructions for iOS, Android, ChromeOS, and Windows. Integrates with RallyKit auth to fetch only station-scoped assets and scripts.
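
A minimal browser-side sketch of the lockdown behaviors (TypeScript) is below. The station URL, station ID, and audit endpoint are hypothetical placeholders, and OS-level kiosk settings remain the primary control, since a web page cannot intercept every system shortcut on its own.

```typescript
// Minimal lockdown sketch (TypeScript, browser). STATION_URL, the station ID,
// and the audit endpoint are hypothetical; OS kiosk settings do the heavy lifting.
const STATION_URL = "https://app.example.org/stations/abc123"; // placeholder

function recordEscapeAttempt(kind: string): void {
  // Audit payload carries no PII: station id, attempt type, and timestamp only.
  navigator.sendBeacon?.(
    "/api/kiosk/audit", // placeholder endpoint
    JSON.stringify({ stationId: "abc123", kind, at: new Date().toISOString() })
  );
}

// Best-effort blocking of keyboard escapes; browsers reserve some shortcuts,
// which is one reason OS-level kiosk mode is still required.
window.addEventListener("keydown", (e) => {
  const combo = (e.ctrlKey || e.metaKey) && ["l", "n", "t", "w"].includes(e.key.toLowerCase());
  if (combo || e.key === "F11") {
    e.preventDefault();
    recordEscapeAttempt(`key:${e.key}`);
  }
});

// Disable the context menu and pin history to the station page (back/forward trap).
window.addEventListener("contextmenu", (e) => e.preventDefault());
history.pushState(null, "", location.href);
window.addEventListener("popstate", () => {
  history.pushState(null, "", location.href);
  recordEscapeAttempt("back-forward");
});

// Focus-loss detection: when the page becomes visible again on any other URL
// (external intent, crash recovery), snap back to the station page.
document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "visible" && location.href !== STATION_URL) {
    recordEscapeAttempt("focus-loss");
    location.replace(STATION_URL);
  }
});
```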

Acceptance Criteria
Full-Screen Lockdown Prevents Navigation Escape
Given kiosk mode is enabled on a supported OS with documented kiosk settings applied When the station page loads Then it renders in full-screen and all browser UI (URL bar, tabs, menus) is hidden Given the station page is active When a user attempts back/forward, address bar entry, context menu, new tab/window, or keyboard shortcuts (Alt/Cmd+Tab, Ctrl/Cmd+L, F11, Ctrl/Cmd+N/T) Then no navigation occurs and the station URL remains unchanged Given the station page is active When an in-app link targets an external domain Then navigation is blocked and the user remains on the station page Given any escape attempt occurs When it is blocked Then an audit event is recorded with stationId, attempt type, timestamp, and no PII
Admin Exit via PIN-Protected Overlay
Given the station page is active When an authorized operator invokes the hidden admin control (triple-tap top-right within 2 seconds) Then a PIN entry overlay appears without exposing system or browser navigation Given the PIN overlay is shown When the correct 6-digit PIN is entered Then kiosk mode exits to the RallyKit Admin screen within 2 seconds and an audit event is recorded Given the PIN overlay is shown When 5 consecutive incorrect PINs are entered within 5 minutes Then the overlay locks for 60 seconds and a throttling event is recorded (no PII)
Secret Gesture Admin Exit Fallback
Given the device is offline or keyboard input is unavailable When an operator performs the secret gesture (5 rapid taps on the RallyKit logo followed by a 2-second press) Then the PIN entry overlay appears Given the station page is active When random taps or partial gesture patterns occur Then no overlay is shown and no UI state changes Given the PIN overlay is triggered via the secret gesture When a correct PIN is provided Then behavior matches the standard admin exit flow and an audit event is recorded
Kiosk Identity Persists Across Resets
Given a station is enrolled and assigned a kioskId (non-PII UUID) When the device restarts or the browser/app is relaunched Then the same kioskId is used for all action payloads and audit logs Given the device loses network and later reconnects When buffered actions are synced Then each action retains the original kioskId Given local storage is inspected When data at rest is reviewed Then no PII is stored; only kioskId, station token, and non-sensitive config Given an admin initiates kioskId rotation When rotation is confirmed in Admin Then subsequent actions use the new kioskId and the change is recorded in the audit log
Focus-Loss Detection and Auto-Restore
Given the station page is active When the app loses visibility, crashes, navigates to an unsupported URL, or triggers an external intent Then the app auto-restores to the designated station page within 2 seconds Given auto-restore occurs When a partially completed form is detected Then all entered PII is cleared and the form resets to a fresh state Given repeated focus loss occurs When 3 auto-restores happen within 30 seconds Then a blocking screen prompts for admin intervention and an audit event is recorded
Station-Scoped Asset Fetch via RallyKit Auth
Given a valid station token is configured When the station page requests assets or action scripts Then all API calls include stationId and are authorized via RallyKit Auth Given a request attempts to access assets for a different stationId When the request is made Then the API responds 403 and no data is returned Given a request is made without a valid token When the request is made Then the API responds 401 and the station page displays a non-PII error prompting admin re-authentication Given valid credentials are present When an action is performed Then no user PII is returned from the API and only station-scoped content is delivered
OS-Level Kiosk Setup Documentation Provided
Given a new organizer is setting up on iOS, Android, ChromeOS, or Windows When they follow the corresponding platform guide Then they can lock the device to the designated station page using required OS kiosk settings in under 45 minutes Given the documentation is accessed When opened Then it includes prerequisites, step-by-step instructions with screenshots, exact setting names, and troubleshooting steps for each OS Given an OS vendor changes kiosk-related settings in a new release When the change is identified Then the docs are updated within 10 business days and a changelog entry is published
Auto-Reset on Submission and Idle Timeout
"As a volunteer, I want the kiosk to reset itself after a submission or inactivity so that the next person starts fresh without my intervention."
Description

Automatically clears the form and returns to the station start screen after a successful action submission or after a configurable idle period. The idle timer pauses while a user is actively typing and resumes once input stops. Supports multi-step action flows and ensures state machines return to the entry screen deterministically. Resets clear validation messages, reset scroll position, dismiss the virtual keyboard, and restore the initial script/district context. Configuration is per-station with sensible defaults and remote override from the RallyKit dashboard.
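
A rough sketch of the pause-aware idle timer (TypeScript) follows; idleTimeoutSeconds, the 2-second typing-quiet window, and resetToStart() stand in for the per-station config value and the real reset routine.

```typescript
// Idle-reset sketch (TypeScript, browser). idleTimeoutSeconds and resetToStart()
// stand in for the per-station config value and the real reset routine.
function createIdleTimer(idleTimeoutSeconds: number, resetToStart: () => void) {
  let remainingMs = idleTimeoutSeconds * 1000;
  let lastTick = Date.now();
  let typing = false;
  let typingQuietTimer: ReturnType<typeof setTimeout> | undefined;

  // Keystrokes pause the countdown; 2 s of silence resumes it where it left off.
  const onInput = () => {
    typing = true;
    clearTimeout(typingQuietTimer);
    typingQuietTimer = setTimeout(() => { typing = false; }, 2000);
  };
  document.addEventListener("input", onInput, true);
  document.addEventListener("keydown", onInput, true);

  // Tick every 250 ms; elapsed time only counts while the user is not typing.
  const interval = setInterval(() => {
    const now = Date.now();
    if (!typing) remainingMs -= now - lastTick;
    lastTick = now;
    if (remainingMs <= 0) {
      restart();       // rearm for the next visitor
      resetToStart();  // clear the form and return to the station start screen
    }
  }, 250);

  function restart() {
    remainingMs = idleTimeoutSeconds * 1000;
    lastTick = Date.now();
  }

  return {
    restart, // the host app also calls this right after a successful submission
    dispose: () => {
      clearInterval(interval);
      document.removeEventListener("input", onInput, true);
      document.removeEventListener("keydown", onInput, true);
    },
  };
}
```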

Acceptance Criteria
Reset After Successful Submission
Given the kiosk station page is loaded and a user completes the action flow And the server responds with a successful submission acknowledgment When the success event is received by the client Then the form inputs, navigation state, and any stored user data are cleared within 1 second And the UI navigates to the station start screen within 1 second And no user-entered data remains in the DOM, memory, or URL And the idle timer is restarted from zero
Reset After Offline-Queued Submission
Given the device is offline at the time of submission And the submission is queued locally for later sync When the client marks the submission as queued Then the kiosk clears all user data and returns to the station start screen within 1 second And the queued item remains persisted for background sync And no user-entered data is visible after the reset And when connectivity is restored and sync completes, the kiosk remains on the start screen
Idle Timeout With Activity Pause
Given idleTimeoutSeconds is configured per station, defaulting to 60 seconds when unset When there is no user activity for idleTimeoutSeconds Then the kiosk resets and returns to the station start screen within 1 second When the user is actively typing (keystroke/input events occurring at least once every 2 seconds) Then the idle timer does not count down When typing stops Then the timer resumes from its remaining duration and reaches zero after the expected interval without further activity
Deterministic Reset From Any Multi-Step State
Given the user is on any step of a multi-step action flow (including with validation errors or pending requests) When a reset is triggered by successful submission or by idle timeout Then the state machine transitions to the StationStart state deterministically And all in-flight requests are canceled or ignored and do not alter post-reset state And the next user encounter begins at step 1 with initial context
Reset Clears UI State And Restores Initial Context
Given a reset is triggered Then all form inputs are emptied and validation/error messages are cleared And scroll position is restored to the top of the page And the virtual (soft) keyboard is dismissed And any open modals, toasts, or banners related to the prior session are closed And the initial script and district context configured for the station is restored
Per-Station Configuration, Defaults, And Remote Override
Given a station has no explicit idle timeout configured Then a default idleTimeoutSeconds of 60 is applied When a remote override for idleTimeoutSeconds is updated in the RallyKit dashboard for that station Then the kiosk applies the new value within 60 seconds or on next page reload, whichever is sooner And the change affects only the targeted station And invalid or out-of-range values are ignored and the previous value (or default) remains in effect
Privacy-Safe Confirmation Loop
"As a supporter in a public venue, I want a private, non-identifying confirmation screen so that my personal details aren’t visible to others."
Description

Displays a generic, non-identifying success confirmation after submission with optional countdown to auto-reset. The confirmation omits names, emails, phone numbers, addresses, and district details, while still communicating success and next steps. Supports large-text, high-contrast, and screen reader cues for accessibility, plus optional subtle haptic/audio chime in noisy venues. The screen includes a Restart button that immediately returns to the start view without revealing prior inputs. Content is localized and brandable without embedding PII.

Acceptance Criteria
Generic Non‑Identifying Confirmation Display
Given a supporter submits an action on the kiosk When the confirmation screen is displayed Then it presents a success message and next‑step guidance without displaying first name, last name, email, phone, street, city, state, ZIP/postal code, geolocation, district number, or legislator names And the confirmation UI contains no input fields or editable controls tied to prior data And inspecting the rendered DOM and outbound network calls for the confirmation view reveals no PII values And no Back or Edit affordance is available from the confirmation screen
Configurable Auto‑Reset Countdown
Given countdown_enabled = true and countdown_seconds = 7 When the confirmation screen appears Then a visible countdown decrements every second And at T=0 the kiosk returns to the start view within 500 ms And all prior inputs, in‑memory caches, and UI history are cleared Given countdown_enabled = false When the confirmation screen appears Then no countdown is shown And the confirmation persists until the user taps Restart or idle timeout triggers
Immediate Privacy‑Safe Restart
Given the confirmation screen is visible When the user taps Restart Then the start view loads within 500 ms And no prior inputs or selections are pre‑filled or visible And OS/app back navigation does not reveal prior inputs (back stack cleared) And Restart is operable via keyboard, switch control, and pointer input
Accessible Confirmation (Large Text, High Contrast, Screen Reader)
Given the confirmation screen is displayed When system text size is set to large or app text scale is 200% Then all confirmation text and controls remain readable without truncation, overlap, or loss of function And color contrast for text and essential UI meets WCAG 2.1 AA (≥ 4.5:1) And screen readers announce “Submission successful” and next steps within 1 second, with accessible names for Restart and countdown And keyboard focus lands on the confirmation heading, then can move to Restart without a keyboard trap
Optional Subtle Haptic/Audio Feedback
Given feedback_enabled = true When the confirmation screen appears Then a single chime plays (≤ 700 ms) honoring system volume and mute settings And a brief haptic (≤ 50 ms) triggers on supported devices And no additional sound/haptic plays on auto‑reset or on Restart Given feedback_enabled = false When the confirmation screen appears Then no sound or haptic feedback occurs
Localized and Brandable Confirmation Without PII
Given locale = es‑ES and organization branding is configured When the confirmation screen appears Then all user‑facing strings render in Spanish with correct punctuation and diacritics And brand logo/colors apply without reducing contrast below WCAG 2.1 AA And templates exclude PII placeholders such as {name}, {email}, {phone}, {address}, {district} And switching to a right‑to‑left locale renders mirrored layout correctly with no text overlap And language or branding changes take effect on next confirmation without app restart
Offline Submission Confirmation and Sync Notice
Given the device is offline when an action is submitted When the confirmation screen is displayed Then it shows a generic success with an offline sync notice that contains no PII And any queued count is shown as a number only (no message previews or identifiers) And the same privacy rules apply to UI, local cache, and logs When connectivity is restored before the countdown ends Then no PII is revealed and the configured countdown/reset behavior proceeds unchanged
Single-Tap Restart Control
"As a check-in volunteer, I want a one-tap reset control so that I can quickly prepare the kiosk for the next person if needed."
Description

Provides an always-available, kiosk-safe restart control that instantly resets the session and returns to the start screen. The control is large, high-contrast, and positioned to avoid accidental taps, with an optional 1–2 second press-and-hold to confirm. Works from any step, including error states and partial entries. Respects lockdown rules and does not require admin privileges. Emits analytics events for manual resets without logging PII to help optimize station staffing and UX.
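
One possible shape for the PII-free manual-reset analytics event (TypeScript) is sketched below; the field names follow the acceptance criteria, while the /api/analytics endpoint is a placeholder and delivery would normally go through the offline buffer described elsewhere in this feature.

```typescript
// Sketch of the PII-free "manual_reset" analytics event (TypeScript). Field names
// follow the acceptance criteria below; the endpoint is a placeholder.
interface ManualResetEvent {
  event: "manual_reset";
  eventId: string;               // UUIDv4, used for server-side deduplication
  timestamp: string;             // UTC ISO 8601
  kioskId: string;
  stationId: string;
  campaignId: string;
  screenId: string;
  durationSinceStartMs: number;
  holdToConfirmEnabled: boolean;
}

function buildManualResetEvent(
  ctx: Omit<ManualResetEvent, "event" | "eventId" | "timestamp">
): ManualResetEvent {
  return {
    event: "manual_reset",
    eventId: crypto.randomUUID(),        // UUIDv4 in modern browsers
    timestamp: new Date().toISOString(), // UTC
    ...ctx,                              // kiosk/station/campaign/screen context only, never form values
  };
}

// A direct send might look like this; in practice delivery would go through the
// same offline queue and retry logic used for supporter actions.
async function sendAnalytics(evt: ManualResetEvent): Promise<void> {
  await fetch("/api/analytics", {        // placeholder endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(evt),
    keepalive: true,                     // lets the request outlive the reset navigation
  });
}
```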

Acceptance Criteria
Restart from Any Step to Start Screen
- Given a user is on any step (form, modal, or review), when the restart control is activated, then the session state is cleared and the app navigates to the start screen within 500 ms on a mid-tier device. - Then all in-session inputs, modal states, and error flags are cleared; using browser back/forward does not reveal prior inputs or state. - And the reset occurs locally without requiring network availability.
Press-and-Hold Confirmation (Accidental Tap Prevention)
- When Hold-to-Confirm is enabled, the control requires a continuous press of 1.5 s (configurable 1.0–2.0 s); a visible progress indicator fills during the press; releasing before completion cancels with no reset. - When Hold-to-Confirm is disabled, a single tap triggers the reset immediately. - On successful activation, haptic/sound or visual confirmation is provided within 100 ms. - The control ignores drag/multi-touch gestures and does not move or scroll during hold.
Error-State Recovery via Restart
- Given the app is showing any error state (network failure notice, validation error screen, or caught exception screen), when the restart control is activated, then the app returns to the start screen and all error flags are cleared without new uncaught errors. - Post-reset, the start screen becomes interactive within 500 ms and no OS/browser prompts appear. - Any pending requests from the prior state are canceled or detached to prevent duplicate side effects.
Kiosk Lockdown Compliance (No Admin Needed)
- The restart control is operable in kiosk/lockdown mode without requiring admin credentials, system dialogs, or leaving fullscreen. - Activation does not navigate outside the allowed origin, open new tabs/windows, or expose browser chrome. - Interaction with the control does not trigger system gestures (back/home) or OS-level menus.
Manual Reset Analytics (PII-Safe)
- On successful manual reset, an analytics event "manual_reset" is emitted with fields: eventId (UUIDv4), timestamp (UTC ISO 8601), kioskId, stationId, campaignId, screenId, durationSinceStartMs, holdToConfirmEnabled; no PII or form values are included. - The event is queued offline and retried with exponential backoff until acknowledged (HTTP 2xx), with at-least-once delivery and server-side deduplication via eventId. - The event dispatch occurs within 2 s of connectivity restoration.
Offline Action Buffer Preservation
- Performing a manual reset does not delete or modify the offline actions queue; queued item count before vs. after reset is identical. - After connectivity is restored, all previously queued actions sync successfully regardless of intervening resets. - Manual reset clears only volatile in-session UI state and does not reinitialize or corrupt the persistent sync queue.
Visibility, Size, Contrast, Placement, and Accessibility
- The restart control is always visible on every screen, including modals and error states. - Minimum touch target is ≥48×48 dp (or ≥44×44 pt on iOS); contrast ratio against background is ≥7:1. - Positioned consistently in a safe area (e.g., top-right) with ≥16 px spacing from other interactive elements to reduce accidental taps. - The control is keyboard focusable and screen-reader labeled; activation via Enter/Space triggers the same reset behavior as touch/click.
Offline Buffering & Resilient Sync
"As an organizer at a venue with shaky Wi‑Fi, I want actions to be stored and synced automatically so that I don’t lose submissions or slow the line."
Description

Queues completed actions locally when connectivity is poor, encrypts them at rest, and syncs automatically when back online using exponential backoff and network awareness. Uses deterministic client-generated IDs to ensure idempotent server writes and prevent duplicates. Persists the queue in IndexedDB or native storage, with size limits and eviction rules. On reconnect, flushes in order, reconciles conflicts, updates the live RallyKit dashboard counts, and writes audit-safe timestamps and kiosk IDs without exposing PII. Shows a subtle offline banner that does not block flow.
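
A sketch of the idempotent sync path (TypeScript) is below, assuming a queue already persisted in IndexedDB; the /api/actions endpoint, payload fields, and retry ceiling are illustrative, while the backoff ladder and treat-409-as-success behavior mirror the acceptance criteria.

```typescript
// Sketch of the idempotent sync path, assuming a queue already persisted in
// IndexedDB. The field names and the /api/actions endpoint are illustrative.
interface QueuedAction {
  kioskId: string;
  actionType: string;
  targetIds: string[];
  payload: Record<string, unknown>;
  submittedAt: string; // UTC ISO 8601, captured at enqueue time
}

// Deterministic client ID: SHA-256 over a normalized projection of the action,
// so retries of the same submission always produce the same ID.
async function clientId(a: QueuedAction): Promise<string> {
  const normalized = JSON.stringify([a.kioskId, a.actionType, [...a.targetIds].sort(), a.submittedAt]);
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(normalized));
  return Array.from(new Uint8Array(digest)).map((b) => b.toString(16).padStart(2, "0")).join("");
}

// Exponential backoff with ±20% jitter: ~1s, 2s, 4s, 8s, 16s, 32s, capped at 60s.
function backoffMs(attempt: number): number {
  const base = Math.min(1000 * 2 ** attempt, 60_000);
  return base * (0.8 + Math.random() * 0.4);
}

async function syncOne(action: QueuedAction, attempt = 0): Promise<void> {
  const id = await clientId(action);
  let status: number | undefined;
  try {
    const res = await fetch("/api/actions", {
      method: "POST",
      headers: { "Content-Type": "application/json", "Idempotency-Key": id },
      body: JSON.stringify({ clientId: id, ...action }),
    });
    status = res.status;
  } catch {
    // Network error: leave status undefined and fall through to backoff.
  }
  // 2xx and 409 Duplicate both count as success: the record exists exactly once.
  if (status !== undefined && ((status >= 200 && status < 300) || status === 409)) return;
  // Other 4xx responses (except 429) will not succeed on retry.
  if (status !== undefined && status >= 400 && status < 500 && status !== 429) {
    throw new Error(`non-retryable response: ${status}`);
  }
  if (attempt >= 6) throw new Error("retries exhausted; leaving the item queued");
  await new Promise((r) => setTimeout(r, backoffMs(attempt)));
  return syncOne(action, attempt + 1);
}
```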

Acceptance Criteria
Queue and Encrypt Actions When Offline
Given the kiosk device is offline When a supporter submits an action Then the action is enqueued to persistent local storage And the stored payload is encrypted at rest using WebCrypto AES-GCM with a 256-bit key And no PII fields are written in plaintext to any storage or console logs And a non-blocking offline banner appears within 100 ms And the user receives a local success confirmation within 300 ms and can immediately start the next action
Exponential Backoff and Network-Aware Auto Sync
Given queued actions exist and the last sync attempt failed due to network errors When retries are scheduled Then the client retries with exponential backoff delays of approximately 1s, 2s, 4s, 8s, 16s, 32s up to a maximum of 60s with ±20% jitter And retries pause while the OS/browser reports offline and resume immediately upon an online event And HTTP 429/503 responses honor the Retry-After header if present (capped at 60s) And a successful sync of any item resets the backoff
Idempotent Sync with Deterministic Client IDs
Given deterministic client-generated IDs are derived from normalized payload fields and kiosk ID When the same action is retried or submitted multiple times due to connectivity issues Then the server stores exactly one record per unique client ID And duplicate submissions result in a 200 OK or 409 Duplicate that the client treats as success And the client marks the item as synced upon any success/duplicate response and does not re-enqueue it
Persistent Queue with Size Limits and Eviction Rules
Given actions are queued while offline When the kiosk app is reloaded or the device is power-cycled Then the queue is restored from IndexedDB or native storage with no loss of unsynced items And the system can persist at least 500 unsynced actions without data loss And total local storage used by the queue is capped at 20 MB per kiosk And when the cap is reached, only previously synced (cache) items are evicted; unsynced items are never evicted
Ordered Flush and Conflict Reconciliation on Reconnect
Given multiple queued actions with local submission timestamps t1 < t2 < t3 When the device regains connectivity Then the client flushes the queue strictly in FIFO order And server conflicts (e.g., 409 for existing client ID) are treated as successful reconciliations without resubmission And each accepted record includes audit-safe fields: client_local_submitted_at (UTC ISO 8601), server_received_at, and kiosk_id; no PII is transmitted beyond what is required for the action And after flush, the local queue contains zero items or only those that failed with non-retryable errors
Live Dashboard Count Updates Post-Sync
Given N queued actions are successfully synced to the server When each action is accepted by the server Then the RallyKit dashboard increments the corresponding counts within 5 seconds of acceptance And totals reflect the number of unique client IDs accepted (duplicates do not increment) And counts attributed to the kiosk match the kiosk_id sent during sync
Non-Blocking Offline Banner UX
Given the device transitions offline during kiosk use When offline is detected Then a subtle, non-modal banner is shown that does not overlap or disable the primary action controls And the banner meets WCAG 2.1 AA contrast and is announced once to screen readers And the banner auto-hides within 1 second of regaining connectivity And at no point does the banner add extra taps or steps to complete an action
PII Purge & Storage Hygiene
"As a compliance lead, I want the kiosk to purge any personal data after each attempt so that we meet privacy obligations and protect supporters."
Description

On any reset (submission, idle, or manual), fully purges PII from memory, DOM, URLs, form fields, browser history, autofill, and all web storage (localStorage, sessionStorage, IndexedDB) not required for offline queueing. Prevents PII in query strings, disables browser autofill where possible, sets no-store cache headers, and uses CSP to block unintended data exfiltration. Redacts PII from client logs and telemetry while retaining anonymized operational metrics. Provides automated cross-browser tests to verify no residual PII after reset.
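
A purge routine along these lines might look like the sketch below (TypeScript, browser). The storage-key and IndexedDB allowlists are assumptions; only the offline queue and non-PII kiosk config are meant to survive a reset.

```typescript
// Purge sketch (TypeScript, browser). The allowlists are assumptions: only the
// offline queue database and non-PII kiosk config survive a reset.
const STORAGE_KEY_ALLOWLIST = new Set(["kioskId", "stationToken", "stationConfig"]); // hypothetical keys
const IDB_ALLOWLIST = new Set(["rallykit-offline-queue"]);                           // hypothetical DB name

async function purgePII(): Promise<void> {
  // 1. Clear every form field so nothing user-entered remains in the DOM.
  document.querySelectorAll<HTMLFormElement>("form").forEach((f) => f.reset());
  document
    .querySelectorAll<HTMLInputElement | HTMLTextAreaElement>("input, textarea")
    .forEach((el) => { el.value = ""; });

  // 2. Strip query string/hash and collapse history onto the canonical kiosk URL.
  history.replaceState(null, "", location.pathname);

  // 3. Web storage: drop everything outside the allowlist.
  for (const store of [localStorage, sessionStorage]) {
    for (const key of Object.keys(store)) {
      if (!STORAGE_KEY_ALLOWLIST.has(key)) store.removeItem(key);
    }
  }

  // 4. IndexedDB: delete every database except the offline queue.
  if (typeof indexedDB.databases === "function") {
    const dbs = await indexedDB.databases();
    await Promise.all(
      dbs
        .filter((db) => db.name && !IDB_ALLOWLIST.has(db.name))
        .map(
          (db) =>
            new Promise((resolve) => {
              const req = indexedDB.deleteDatabase(db.name!);
              req.onsuccess = req.onerror = req.onblocked = () => resolve(undefined);
            })
        )
    );
  }
}
```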

Acceptance Criteria
All Reset Types Purge PII Across Runtime and Storage
Given a kiosk session where PII fields (name, email, phone, address) are filled and the token "PII_TEST_123" is injected into inputs, hidden fields, DOM text, URL hash, localStorage, sessionStorage, IndexedDB, and Cache Storage When a reset is triggered by successful submission Then within 1 second all visible fields are empty and document.documentElement.outerHTML contains no "PII_TEST_123" And window.location.search and window.location.hash are empty And localStorage and sessionStorage contain no keys or values including "PII_TEST_123" and no PII keys other than the offline-queue allowlist And all IndexedDB object stores contain no records with values including "PII_TEST_123" And Cache Storage contains no cached responses whose text includes "PII_TEST_123" When a reset is triggered by idle-timeout or by manual restart Then the same purge outcomes occur within 1 second for all of the above surfaces
No PII in URLs or Browser History
Given PII is entered on the kiosk start or form screen When navigating to any intermediate or confirmation screen Then location.search and location.hash are empty and contain no PII And no network request initiated by the client includes PII in its query string When the user triggers a reset by any method Then history.replaceState has removed PII-bearing history such that pressing Back returns to a clean start screen with empty fields And the current URL is the canonical kiosk URL without PII-bearing parameters
Autofill Prevented on Kiosk Inputs
Given the kiosk page is loaded in Chrome, Firefox, and Safari Then the form and all PII inputs (name, email, phone, address) have attributes: autocomplete="off", autocorrect="off", autocapitalize="off", spellcheck="false" And no input is auto-populated by the browser on initial load or after refresh When focusing each PII input before any user typing Then the value remains empty until simulated user input occurs And after submission and reset, reloading the page does not repopulate any prior values
Offline Queue Stores Only Required Data and Purges After Sync
Given the device is offline and a submission with PII is attempted Then the client stores a single queue record containing only allowed keys (e.g., actionType, targetIds, scriptVariantId, timestamp, payload) And the stored payload contains no cleartext name, email, phone, or address and does not match email/phone/address regexes And the token "PII_TEST_123" is not present in any client storage other than the queued payload (if strictly required) When connectivity is restored and sync completes Then queued records are transmitted and removed from client storage within 5 seconds And a subsequent inspection of localStorage, sessionStorage, IndexedDB, and Cache Storage finds no residual queued PII or "PII_TEST_123"
Security Headers and CSP Block Caching and Exfiltration
Given the kiosk app is requested over HTTPS Then all HTML and app route responses include headers: Cache-Control: no-store, no-cache, must-revalidate, max-age=0; Pragma: no-cache; Expires: 0 And responses include a Content-Security-Policy with at least: default-src 'self'; form-action 'self'; connect-src limited to first-party endpoints; object-src 'none'; base-uri 'none' And the CSP does not include wildcard * for form-action or connect-src and does not permit data exfiltration to non-allowlisted origins When a script, form, or fetch attempts to send PII to a non-allowlisted origin Then the browser blocks the request due to CSP and a CSP violation is recorded And Service Worker (if present) does not cache HTML responses that could contain PII
Logs and Telemetry Redact PII, Preserve Metrics
Given PII is entered and normal and error flows are executed (including a failed submission) Then console logs, client-side telemetry events, and analytics payloads contain no raw name, email, phone, or address, and no occurrences of "PII_TEST_123" And telemetry includes only anonymized operational metrics (counts, durations, statuses) and a pseudonymous kiosk identifier When exporting or viewing client logs for the session via supported tooling Then the output contains no PII and still includes sufficient anonymized metadata to audit action timing and outcomes
Automated Cross-Browser Tests Verify No Residual PII
Given automated tests run in CI on latest stable Chrome, Firefox, and Safari When tests seed PII including the token "PII_TEST_123", perform submissions, trigger idle-timeout, and manual resets Then assertions for DOM, URL/search/hash, localStorage, sessionStorage, IndexedDB, Cache Storage, and logs/telemetry find no residual PII after each reset And any attempt to include PII in query strings is detected and fails the suite And the CI job fails the build if any browser reports residual PII or header/CSP misconfiguration
Cross-Device Compatibility & Deployment Guide
"As a small nonprofit director, I want the kiosk to run on the devices we already have so that setup is fast and inexpensive."
Description

Ensures the kiosk runs reliably on iOS Safari, Android Chrome, ChromeOS, and modern desktop browsers, with responsive layouts for common tablet sizes and orientation lock where supported. Detects unsupported configurations (e.g., private mode storage limits) and surfaces clear remediation steps. Provides an in-app readiness check and a step-by-step deployment guide covering OS-level kiosk mode, screen timeout settings, and battery management. Includes a printable QR link to the station and a test script to validate reset, privacy, and offline sync behaviors.
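
The readiness check could be approximated as below (TypeScript, browser); the check names, the 10 MB quota threshold, and the detail wording are assumptions based on the criteria that follow.

```typescript
// Readiness-check sketch (TypeScript, browser). Check names, the 10 MB quota
// threshold, and the detail wording are assumptions based on the criteria below.
interface ReadinessResult { check: string; pass: boolean; detail?: string; }

async function runReadinessChecks(): Promise<ReadinessResult[]> {
  const results: ReadinessResult[] = [];

  // Storage writable? Private/incognito modes often block or severely limit it.
  try {
    localStorage.setItem("__rk_probe__", "1");
    localStorage.removeItem("__rk_probe__");
    results.push({ check: "storage-writable", pass: true });
  } catch {
    results.push({ check: "storage-writable", pass: false, detail: "private mode or storage blocked" });
  }

  // Quota: require roughly 10 MB of headroom for the offline queue.
  if (navigator.storage?.estimate) {
    const { quota = 0, usage = 0 } = await navigator.storage.estimate();
    const freeMB = (quota - usage) / (1024 * 1024);
    results.push({ check: "storage-quota", pass: freeMB >= 10, detail: `${freeMB.toFixed(1)} MB free` });
  }

  // Service worker support is needed for offline caching; cookies and network too.
  results.push({ check: "service-worker", pass: "serviceWorker" in navigator });
  results.push({ check: "cookies", pass: navigator.cookieEnabled });
  results.push({ check: "online", pass: navigator.onLine });

  return results;
}
```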

Acceptance Criteria
Cross-Device Browser Compatibility Baseline
Given the kiosk station page When accessed on iOS Safari 16+, Android Chrome 110+, ChromeOS Chrome 110+, and latest-1 desktop Chrome, Edge, Safari, and Firefox Then the start screen loads without uncaught JavaScript errors or blank screens And the start action, submission, auto-reset after submission, and idle-timeout reset complete successfully on each platform And no horizontal scrolling occurs at 100% zoom on 768x1024, 800x1280, and 1280x800 viewports And all interactive elements have a minimum tap target of 44x44 CSS pixels And a service worker registers without error on supported browsers
Unsupported Configuration Detection and Remediation
Given a device is in private/incognito mode or has cookies/storage blocked or <10MB available storage When the station page initializes Then an "Unsupported configuration" notice appears with the specific reason detected And a platform-specific remediation checklist with links to the deployment guide is displayed And analytics logs the detection with platform, reason code, and timestamp And the kiosk flow cannot proceed until the issue is resolved
In-App Readiness Check Pass/Fail
Given an operator opens the in-app readiness check When they run the check on a target device Then results display pass/fail for: supported OS/browser version, service worker registration, offline cache availability, storage quota ≥10MB, cookies/localStorage enabled, and network connectivity And each failed item includes a remediation link to the relevant deployment step And the operator can export or print a summary with device name, OS/browser versions, timestamp, and overall pass/fail status
Deployment Guide Coverage and Printability
Given an operator opens the deployment guide When they select a platform (iOS, Android, ChromeOS, Windows, macOS) Then step-by-step instructions for OS-level kiosk/pinning (e.g., Guided Access, App Pinning, ChromeOS Kiosk, Assigned Access/Single App Mode) are displayed And steps include configuring screen timeout/auto-lock to prevent sleep during operation And battery management recommendations are provided (keep powered, Low Power Mode guidance, brightness) And a preflight checklist with checkboxes is available and can be printed or saved as PDF
Printable Station QR Code and Short Link
Given a station is configured When the operator selects "Print QR" Then a QR code is generated encoding the HTTPS station URL with a station token And a short fallback link is displayed that resolves to the same station URL And the print layout fits A4 and US Letter with margins and includes the station name/ID And scanning the QR with iOS and Android native camera apps opens the station start screen
Kiosk Test Script for Reset, Privacy, and Offline Sync
Given the operator opens the kiosk test script When they follow the scripted steps on a test device Then after a successful submission, the form clears and returns to the start screen within the configured post-submit reset time (±1s) and no entered PII remains visible or in autofill suggestions And after the configured idle timeout (±1s) with no interaction, the screen auto-resets to the start screen without exposing PII And with the device offline (airplane mode), submissions queue locally, an offline indicator is shown, and no network error is shown to the end user And upon connectivity restoration, queued submissions sync within 30s, server counts reflect the synced actions, and duplicates are not created
Orientation Lock Handling and Responsive Layout
Given an operator selects a preferred orientation in station settings When the kiosk starts on a device that supports the Screen Orientation API and user gesture requirements are met Then the orientation locks to the selected mode and persists across navigation and resets And if orientation lock is unsupported or denied, a persistent instruction explains how to lock orientation manually And layouts render without truncation or overlap in both orientations at 768x1024, 800x1280, and 1280x800, with on-screen keyboards not obscuring required fields

Heatmap Router

Live map and per-code stream show volume, conversion, and wait indicators by station. Smart prompts recommend where to redeploy volunteers or open new stations, with one-tap SMS/Slack directions to team leads. Ensures people power is aimed at districts that need it now.

Requirements

Real-time Station Heatmap
"As a campaign director, I want a live map of station performance so that I can instantly spot hotspots and bottlenecks across districts."
Description

A live, auto-refreshing geospatial view that visualizes each action station (phone, text, or field hub) with key KPIs: current inbound volume, conversion rate, average wait time, queue length, and capacity utilization. Color-coded intensity and status badges enable instant hotspot detection. Includes pan/zoom, clustering, tooltips, drill-down to station detail, and district rollups. Filters by campaign/bill, legislative chamber, time window, channel, and tag. Updates via WebSocket server-push with graceful polling fallback and under-one-second perceived latency. Mobile-responsive and accessible. Integrates with RallyKit routing to attribute activity to districts and legislators.
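
A compact sketch of the push-with-fallback transport (TypeScript) follows; the WebSocket URL, snapshot endpoint, message shape, and 5-second intervals are assumptions.

```typescript
// Push-with-fallback sketch (TypeScript, browser). URLs, the message shape, and
// the 5-second polling/reconnect intervals are assumptions.
type StationMetrics = { stationId: string; volume: number; conversion: number; waitSec: number; utilization: number };

function connectHeatmap(onUpdate: (m: StationMetrics) => void): void {
  let pollTimer: ReturnType<typeof setInterval> | undefined;

  const startPolling = () => {
    if (pollTimer) return; // already in fallback mode
    pollTimer = setInterval(() => {
      fetch("/api/heatmap/stations") // placeholder snapshot endpoint
        .then((res) => res.json() as Promise<StationMetrics[]>)
        .then((rows) => rows.forEach(onUpdate))
        .catch(() => { /* keep last-known-good data on transient errors */ });
    }, 5000);
  };
  const stopPolling = () => {
    if (pollTimer) { clearInterval(pollTimer); pollTimer = undefined; }
  };

  const connect = () => {
    const ws = new WebSocket("wss://app.example.org/ws/heatmap"); // placeholder
    ws.onopen = stopPolling;                            // live push resumed: stop polling
    ws.onmessage = (e) => onUpdate(JSON.parse(e.data)); // apply the latest value per station
    ws.onclose = () => { startPolling(); setTimeout(connect, 5000); }; // fall back, then retry
    ws.onerror = () => ws.close();
  };
  connect();
}
```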

Acceptance Criteria
WebSocket Live Updates with Graceful Polling Fallback
- Given the heatmap is open on an active campaign, when new station metrics are emitted by the server, then affected markers and rollups update within 1 second of the server event timestamp at p95 and within 2 seconds at p99. - Given the WebSocket connection fails or drops, when the client detects the failure, then it switches to HTTP polling within 5 seconds, refreshing data at 5-second intervals until reconnection. - Given the client is in polling mode, when the WebSocket reconnects, then polling stops, the status indicator changes to "Live", and no duplicated updates or regressions are observed in metrics. - Given a burst of updates at 50 events/second for 10 seconds, when processing on the client occurs, then the UI remains responsive (no frozen interactions) and applies the most recent value per station within the latency targets.
KPI Visualization and Status Badges on Station Markers
- Given a station is visible on the map, when its marker is rendered, then the following KPIs are available in the marker tooltip: inbound volume (last 5 min), conversion rate (%), average wait time (sec), queue length (count), and capacity utilization (%), each with units and last-updated timestamp. - Given markers are color-coded by intensity, when capacity utilization is 0–60% then marker is green, 61–85% amber, and >85% red; color mapping is consistently applied across sessions. - Given wait time thresholds, when average wait time > 120s then a red "Over Capacity" badge is shown; when 61–120s then an amber "Backlog" badge is shown; when 0–60s and queue length = 0 then a gray "Idle" badge is shown. - Given conversion rate formatting, when displayed, then it shows one decimal place (e.g., 47.3%) and is never NaN; missing values display as "—".
Map Interactions: Pan/Zoom, Clustering, Tooltips, and Drill-down
- Given the user pans or zooms, when the map re-renders, then visible markers/clusters re-compute without losing data and maintain current filters. - Given marker density exceeds the clustering threshold, when zoomed out, then markers are clustered; clusters show the count of stations, summed inbound volume, summed queue length, and weighted averages for conversion and wait time (weighted by station inbound volume). - Given a cluster is clicked, when the action occurs, then the map zooms in to expand the cluster until individual stations are visible or the cluster splits. - Given a station marker is clicked, when the action occurs, then a station detail drawer opens with full KPIs, capacity settings, recent trend sparkline, and a link to the station detail page; pressing Back/Escape closes the drawer. - Given pointer hover or mobile tap, when requesting a tooltip, then the tooltip appears within 150 ms and is positioned without obscuring the marker.
Filtering by Campaign/Bill, Chamber, Time Window, Channel, and Tag
- Given filter controls are available, when the user selects campaign/bill (multi-select), chamber (House/Senate), time window (Last 5 min, 15 min, 1 h, Today, Custom range), channel (phone/text/field), and tags (multi-select), then the map, clusters, station details, and district rollups update to reflect the intersection of all selected filters within 1 second. - Given filters are applied, when the URL is inspected, then query parameters reflect the current filter state; reloading the page preserves applied filters. - Given a filter combination yields no results, when applied, then the map shows an informative zero-state and a one-click Clear Filters action. - Given filters are cleared, when Reset is used, then the view returns to default (all campaigns active, Last 15 min, all channels, all tags).
District and Legislator Attribution with Rollups
- Given RallyKit routing metadata is present, when events are ingested, then each station action is attributed to a district and legislator and exposed to the heatmap API. - Given the District Rollup toggle is enabled, when the map renders, then district polygons display with heat intensity reflecting aggregated capacity utilization and tooltips show aggregated inbound volume, conversion rate, average wait time, and top 3 contributing stations with percentages. - Given a district is clicked, when the action occurs, then a district detail drawer opens listing legislators with current action counts and conversion rates, and links to legislator profiles. - Given a test dataset for a known time window and filter set, when district totals are compared to the sum of underlying attributed station events, then the counts match exactly and percentages differ by no more than 0.1 percentage points due to rounding.
Mobile Responsiveness and Accessibility Compliance
- Given a mobile viewport (320–767 px width), when the heatmap loads, then filters are available via a collapsible panel, the station/district detail opens as a bottom sheet, and all touch targets are at least 44x44 px. - Given keyboard-only navigation, when tabbing through the interface, then markers/clusters/filters/tooltips are reachable in a logical order; Enter/Space activates, and Esc closes drawers; focus is visibly indicated. - Given a screen reader is active, when a marker or district receives focus, then it announces name/region, inbound volume, conversion rate, average wait time, capacity utilization, status badge text, and last-updated time via ARIA labels. - Given WCAG 2.1 AA requirements, when evaluated, then contrast for markers/badges meets ≥ 4.5:1, color is not the sole indicator of state (text/icon provided), and reduced motion preference disables non-essential animations.
Performance, Stability, and Error Handling Under Load
- Given 2,000 stations and an update rate of 10 events/second sustained for 60 seconds, when the heatmap runs on a mid-tier laptop, then the UI maintains ≥ 45 fps and browser memory stays ≤ 500 MB; on a mid-tier mobile device, ≥ 30 fps. - Given a cold start on broadband, when the default view loads, then initial render completes in < 2 seconds; on 3G Fast, < 5 seconds. - Given transient network/server errors, when data requests fail, then the UI shows a non-blocking toast, retries with exponential backoff up to 5 attempts, and continues displaying last-known-good data with a "Last update" timestamp. - Given a 15-minute soak test, when monitoring console logs, then no uncaught exceptions occur and all errors are captured with correlation IDs.
Per-Code Live Metrics Stream
"As a data lead, I want a live per-code feed of volume and conversion so that I can pinpoint where outreach is underperforming and act quickly."
Description

A continuously updating feed that aggregates performance by code (ZIP, area code, and legislative district code), showing volume, conversion, average wait, abandonment rate, and capacity usage. Supports sorting, searching, threshold highlighting, and time bucketing with a rolling 24-hour backfill. Normalizes and maps codes to districts and stations, handling partial/ambiguous codes. Provides deep links to select the same slice on the map and supports CSV/JSON export for quick sharing.
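
The per-code roll-up could be computed roughly as below (TypeScript); the bucket record shape is an assumption, while the formulas and divide-by-zero handling follow the acceptance criteria.

```typescript
// Per-code aggregation sketch (TypeScript). The bucket record shape is an
// assumption; the formulas and divide-by-zero handling follow the criteria below.
interface CodeBucket {
  initiated: number;              // actions started in the bucket
  completed: number;              // completed actions (conversions)
  abandoned: number;
  totalWaitSecCompleted: number;  // sum of wait times for completed calls
  activeConnections: number;
  availableCapacity: number;
}

function bucketMetrics(b: CodeBucket) {
  const safeDiv = (n: number, d: number) => (d === 0 ? 0 : n / d); // empty buckets report 0, not NaN
  return {
    volume: b.initiated,
    conversions: b.completed,
    conversion_rate: Number(safeDiv(b.completed, b.initiated).toFixed(2)),
    avg_wait_seconds: Number(safeDiv(b.totalWaitSecCompleted, b.completed).toFixed(1)),
    abandonment_rate: Number(safeDiv(b.abandoned, b.initiated).toFixed(2)),
    capacity_usage_percent: Math.round(safeDiv(b.activeConnections, b.availableCapacity) * 100),
  };
}
```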

Acceptance Criteria
Live Stream Freshness and Continuity
Given the Heatmap Router dashboard is open and new supporter actions are recorded When an action impacting a code is ingested Then the affected per-code row updates within 3 seconds without a full page reload and the stream "last updated" indicator changes Given no new actions occur for 60 seconds When the stream remains idle Then a heartbeat indicator updates at least every 15 seconds to confirm liveness Given a temporary network disconnect occurs When connectivity resumes Then the client auto-reconnects within 10 seconds and backfills missed updates in correct chronological order with no duplicate rows
Metric Aggregation Accuracy per Code
Given a code with a known dataset for the selected time bucket When metrics are displayed Then volume equals total initiated actions; conversions equals completed actions; conversion_rate = conversions/initiated rounded to 2 decimals; avg_wait_seconds = mean wait for completed calls; abandonment_rate = abandons/initiated rounded to 2 decimals; capacity_usage_percent = (active_connections/available_capacity)*100 rounded to 0 decimals Given a code with zero initiated actions in a bucket When metrics are displayed Then conversion_rate, abandonment_rate, and avg_wait_seconds show 0 with no divide-by-zero errors Given metric headers or tooltips are available When the user hovers Then each metric’s definition is shown to match the above formulas
Sorting and Search at Scale
Given at least 5,000 distinct codes are loaded When the user sorts by any metric column ascending or descending Then results return within 500 ms and the sort order is correct and stable Given a search query of at least 2 characters When the user enters a code fragment, district name, or station label Then the list filters to matching rows (prefix or substring match, case-insensitive) within 300 ms and highlights matched terms Given both sort and search are active When new live data arrives Then current filters persist and the visible set remains correctly sorted without visual jitter
Threshold Highlighting and Rule Evaluation
Given admin-defined thresholds exist for avg_wait_seconds, abandonment_rate, and capacity_usage_percent When a per-code metric breaches a threshold Then the row is highlighted with the configured style and a badge lists all breached rules Given a breached metric recovers below its threshold for two consecutive updates When the condition clears Then the highlight is removed within one refresh cycle Given multiple thresholds are configured When two or more are breached simultaneously Then all are displayed without conflict and a tooltip shows the metric, rule, and first breach timestamp
Time Bucketing with Rolling 24h Backfill
Given bucket sizes of 1m, 5m, 15m, and 60m are available When the user selects a bucket size Then all aggregates use that size and bucket boundaries align to UTC minute/hour marks Given the stream initializes When backfill loads Then the last 24 hours of buckets load within 3 seconds for up to 100k rows and are marked as historical Given a bucket with no events When metrics are rendered Then zero counts are shown and a no-data state is recorded without interpolation
Code Normalization and Ambiguity Handling
Given incoming codes include ZIP5, ZIP+4, ZIP3 prefix, area code (with/without +1), and legislative district codes with punctuation variants When normalization occurs Then each value is converted to a canonical code_type and code per the mapping table with 100% consistency Given a partial code (ZIP3 or area code) that maps to multiple districts or stations When aggregating metrics Then counts are attributed to all matching districts/stations with ambiguous_flag=true and, if available, a confidence value; affected rows are visually flagged as Ambiguous Given an unrecognized or malformed code When processing Then it is excluded from aggregates and surfaced in an Unmapped bucket with error details available via export
Deep Links and Export Consistency
Given a selected code row with active filters (time window, bucket size, thresholds on/off) When the user clicks View on Map Then the map opens with the same filters applied and the selected code highlighted via a shareable URL containing query parameters Given the current stream selection When the user exports CSV or JSON Then the file is generated within 5 seconds for up to 100k rows and includes: timestamp, code, code_type, district_id, station_id, volume, conversions, conversion_rate, avg_wait_seconds, abandonment_rate, capacity_usage_percent, bucket_start, bucket_end, ambiguous_flag Given exported time fields When the file is generated Then timestamps are ISO 8601 UTC; CSV is UTF-8 RFC 4180 with a header row; JSON is an array of objects with correct types; export respects all active filters and the current sort order
Smart Staffing Recommendations
"As an organizing lead, I want actionable recommendations on where to move volunteers so that we maximize conversions where they are needed most."
Description

A recommendation engine that monitors live KPIs and forecasts short-term demand to suggest actions like redeploying volunteers, opening pop-up stations, or pausing low-yield stations. Combines configurable rules with lightweight predictive models, includes cooldowns to avoid thrashing, and respects constraints such as station capacity, volunteer skills, schedules, and travel times. Presents recommendations with clear rationale, expected impact, and confidence. Supports one-tap accept/decline and records outcomes for continuous improvement.
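
A simplified rule-plus-cooldown evaluation (TypeScript) is sketched below; the thresholds echo the redeploy rule in the criteria, while the snapshot shape, sizing heuristic, and fixed confidence value are assumptions, and the constraint and override handling described here is omitted.

```typescript
// Rule-plus-cooldown sketch (TypeScript). Thresholds echo the redeploy rule in
// the criteria below; the snapshot shape, sizing heuristic, and fixed confidence
// value are assumptions, and constraint/override handling is omitted.
interface StationSnapshot {
  stationId: string;
  avgWaitSec: number;              // sustained over the evaluation window
  activeCallers: number;
  conversionDropPct: number;       // percent below the 1-hour baseline
  eligibleNearbyVolunteers: number;
}

const COOLDOWN_MS = 20 * 60 * 1000;
const lastActionAt = new Map<string, number>(); // stationId -> last accepted action time

function maybeRecommendRedeploy(s: StationSnapshot, now = Date.now()) {
  // Cooldown: skip stations acted on within the last 20 minutes.
  const last = lastActionAt.get(s.stationId);
  if (last !== undefined && now - last < COOLDOWN_MS) return null;

  const triggered =
    s.avgWaitSec > 120 &&
    s.activeCallers > 30 &&
    s.conversionDropPct >= 10 &&
    s.eligibleNearbyVolunteers >= 3;
  if (!triggered) return null;

  return {
    type: "redeploy_volunteers" as const,
    stationId: s.stationId,
    moveCount: Math.min(s.eligibleNearbyVolunteers, Math.ceil(s.activeCallers / 15)),
    rationale: `wait ${s.avgWaitSec}s, ${s.activeCallers} active, conversion -${s.conversionDropPct}%`,
    confidence: 0.7, // placeholder; a model would supply a calibrated score
  };
}

// Called when an organizer accepts the recommendation, starting the cooldown clock.
function recordAccepted(stationId: string, now = Date.now()): void {
  lastActionAt.set(stationId, now);
}
```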

Acceptance Criteria
Live Surge Redeployment Recommendation
Given station A’s average wait time > 120 seconds for 3 consecutive minutes AND active callers > 30 AND conversion rate over the last 5 minutes is ≥10% below the 1-hour baseline AND there are ≥3 eligible volunteers at nearby stations with utilization ≤50% and ETA ≤15 minutes When these conditions are detected Then the system emits a "Redeploy Volunteers" recommendation within 60 seconds specifying: source stations, count to move, target station, projected wait reduction to ≤60 seconds, projected conversion delta, and confidence score (0–1) And the recommendation appears in the Heatmap Router dashboard and enables one-tap Accept and Decline When Accept is tapped Then SMS and Slack messages are sent to the named team leads within 10 seconds, including directions, ETA, and volunteer list; and an action record is created with timestamp, actor ID, and payload When Decline is tapped Then the user must select a reason code; the decline and reason are logged; and the recommendation is not resurfaced for the same station for 10 minutes unless conditions worsen by ≥25%
Open Pop-up Station Based on Forecasted Demand
Given the 30-minute forecast for district D exceeds available staffed capacity by ≥20% (p95) AND at least one pre-approved pop-up location is available within 20 minutes AND ≥5 volunteers with required skill tags are available with ETA ≤20 minutes When the forecast breach persists for ≥2 consecutive refresh cycles (2 minutes) Then the system emits an "Open Pop-up Station" recommendation specifying: location, start time, required roles and counts, expected throughput per 15 minutes, projected conversion lift, and confidence score And on Accept, a new station is created in Active status with capacity derived from recommended roles, invitations are sent to matched volunteers, and directions are sent to assigned leads; all artifacts are logged And on Decline, a reason code is required and the recommendation is suppressed for 30 minutes for district D unless forecast error bands widen to ≥35%
Pause Low-Yield Station to Reallocate People Power
Given station S has conversion rate ≤3% over the past 30 minutes AND queue length < 3 for the past 10 minutes AND redeploying ≥2 volunteers from S to station T is projected to increase total conversions by ≥8% in the next hour When these conditions are detected Then the system emits a "Pause Station" recommendation for S with projected impact and confidence And on Accept, station S status changes to Paused, volunteers receive redeployment prompts, and new actions from S are prevented; on Decline, a reason is captured And the system suggests resume after 45 minutes or earlier if projected net benefit of pausing becomes ≤0
Cooldown Enforcement to Prevent Thrashing
Given any staffing action (Redeploy, Open, Pause) affecting station X was Accepted at time t When new conditions would otherwise trigger another action affecting station X within 20 minutes of t Then the recommendation engine suppresses the new action until t + 20 minutes Unless an override condition is met: average wait time > 300 seconds for 5 consecutive minutes OR forecasted demand exceeds capacity by ≥50% in the next 15 minutes And any override recommendation is labeled "Cooldown Override" in the UI and logs the override rationale
Constraint-Aware Assignment of Volunteers
Given volunteer profiles include skill tags, current location, schedule availability, and max travel time; and stations include capacity limits and required skills per role When generating any recommendation that moves or assigns volunteers Then the engine only proposes assignments that: do not exceed target station capacity; match all required skill tags; respect volunteer availability window; and estimate ETA ≤ the volunteer’s max travel time using current traffic And if constraints cannot be satisfied for ≥90% of required roles, no recommendation is emitted and a "Constraints Unsatisfied" note is logged
Recommendation Card Content, Confidence, and Outcome Learning
Given a recommendation is displayed in the Heatmap Router Then the card contains: title; affected stations/districts; rationale with metrics and time window; expected impact as numeric deltas (wait time, conversions); confidence score (0–1 with label Low/Med/High); prerequisites; and side effects And the card shows Accept and Decline buttons; Accept requires no additional input; Decline requires a reason code from a configurable list When Accept is tapped Then a snapshot of pre-action metrics is stored and a follow-up evaluation window of 60 minutes is scheduled When the evaluation window closes Then realized impact is computed and compared to predicted impact; the result, action outcome, and features are appended to the training dataset; and model performance metrics are updated in analytics
One-Tap SMS/Slack Dispatch
"As a field coordinator, I want to send directions to team leads with one tap so that redeployments happen fast without manual copy-paste."
Description

Integrated messaging that sends pre-filled, role-targeted instructions to team leads via SMS and Slack, including station location, directions, shift details, and deep links back to RallyKit. Supports reusable templates, variable substitution, preview/edit, and one-tap send from a recommendation card. Tracks delivery status, retries on failure, and falls back to alternate channels. Enforces role-based permissions and logs all messages for compliance.
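
Template rendering with strict variable validation might look like the sketch below (TypeScript); the {variable} syntax matches the SMS criteria, and the example message text and values are illustrative.

```typescript
// Template-substitution sketch (TypeScript). The {variable} syntax matches the
// SMS criteria below; the example message and variable values are illustrative.
type TemplateVars = Record<string, string | undefined>;

function renderTemplate(template: string, vars: TemplateVars): { body: string; missing: string[] } {
  const missing: string[] = [];
  const body = template.replace(/\{(\w+)\}/g, (_match, name: string) => {
    const value = vars[name];
    if (value === undefined || value === "") {
      missing.push(name);
      return `{${name}}`; // left in place so the preview shows what is unresolved
    }
    return value;
  });
  return { body, missing };
}

// Sending is blocked when `missing` is non-empty.
const { body, missing } = renderTemplate(
  "Head to {station_name} ({address}), shift {shift_start}-{shift_end}. Directions: {directions_link}",
  { station_name: "Ward 3 Hub", address: "12 Main St", shift_start: "4pm", shift_end: "7pm" }
);
if (missing.length > 0) {
  console.warn("Dispatch blocked; unresolved variables:", missing); // ["directions_link"]
} else {
  console.log(body);
}
```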

Acceptance Criteria
One-Tap Dispatch from Recommendation Card
Given I am an authorized dispatcher viewing a Heatmap Router recommendation card with a suggested station and role-targeted recipients resolved When I tap "Send SMS" or "Send Slack" Then the message sends without any additional confirmation steps And I see a success toast within 2 seconds And a delivery status chip appears on the card showing "Queued" then updates to "Delivered" or "Failed" Given any required variable is unresolved or no eligible recipients are found When I view the recommendation card Then the "Send" button is disabled with a reason tooltip specifying the missing items
Role-Targeted Recipient Selection (Team Leads)
Given the target role is Team Lead and the station has one or more Team Lead contacts with channel info When I initiate dispatch Then only users with the Team Lead role are selected as recipients And no users without the Team Lead role are included And per-recipient channel (SMS or Slack) is selected based on their available contact methods and org preferences Given no Team Lead contacts exist for the station When I attempt to dispatch Then I am prompted to add a contact or choose an alternate station And sending is blocked
SMS Template Variable Substitution and Directions
Given a reusable SMS template containing variables {station_name}, {address}, {shift_start}, {shift_end}, {directions_link}, {deeplink} When I preview the SMS Then all variables are resolved with current recommendation data And any unresolved variable prevents sending with a validation error listing the missing variables Given the station has latitude/longitude or a mailable address When the SMS is generated Then the directions_link resolves to a shortened URL that opens the device's default maps app with the correct destination Given the template specifies a RallyKit deep link When the SMS is generated Then the deeplink opens RallyKit to the referenced station/redeployment screen after authentication
Slack Dispatch with Deep Links
Given Slack is connected for the org and target recipients have mapped Slack user IDs When I tap "Send Slack" on the recommendation card Then a direct message is sent to each recipient with station name, address, shift details, a directions link, and a RallyKit deeplink And the message posts within 3 seconds of request acceptance And I see per-recipient post results (OK/Failed) in the UI Given a recipient lacks a mapped Slack user ID When dispatch runs Then that recipient is excluded from Slack and queued for SMS fallback if a phone number exists And the UI indicates the fallback choice
Delivery Tracking, Retry, and Fallback
Given an SMS is sent When delivery webhooks are received from the provider Then the per-recipient status updates to one of: Queued, Sent, Delivered, Failed, Undeliverable, or Unknown with timestamp Given an SMS send fails or remains in "Sent" without "Delivered" for more than 2 minutes When retries are configured Then the system retries up to 3 times with exponential backoff (15s, 45s, 90s) And on final failure automatically falls back to Slack if available, otherwise marks Failed Given a Slack API error (rate_limit or server_error) When dispatch occurs Then the system retries per Slack guidance with backoff and, after 3 attempts, falls back to SMS if a phone number exists Given any fallback is executed When viewing the message log Then the original failure, each retry, and the fallback path are recorded per recipient
Preview and Edit Before Send
Given I tap "Preview" When the preview opens Then I see the fully resolved message for each channel and recipient And I can edit the message body per recipient without altering the underlying template And edits are limited to the current dispatch only Given I make edits When I tap "Send" Then the edited content is sent And the audit log records the final message bodies and that edits occurred Given I cancel from preview When I return to the recommendation card Then no messages are sent and no audit entries are created
Role-Based Permission Enforcement and Compliance Logging
Given my user role lacks "Dispatch:Send" When I view a recommendation card Then "Send" controls are hidden or disabled And any API attempt to send returns 403 and is recorded as a blocked attempt Given my user role includes "Dispatch:Send" When I send a message Then the system records an immutable audit entry including sender, timestamp, recipients, channel, template ID and version, resolved variable values, message body, delivery outcomes, retries, fallback decisions, and source (card ID) Given an auditor with "Compliance:Read" role When they export logs for a date range Then they receive a CSV within 10 seconds containing the above fields and a checksum per row
Station Capacity & Status Management
"As a team lead, I want to update station capacity and status in real time so that the heatmap and recommendations reflect on-the-ground reality."
Description

An admin panel and quick actions to define and update station metadata: capacity (seats/lines), current staffing, operating hours, skills available, and live status (open, busy, paused). Allows rapid creation of temporary pop-up stations with address/QR and auto-synchronizes capacity changes to the heatmap, feeds, and recommendation logic. Includes API endpoints, validation, permission checks, and device/telephony health signals to derive effective capacity.
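
A sketch of deriving effective capacity and auto-status from health signals (TypeScript) follows; the signal shape and field names are assumptions, while the 60-second auto-pause and 120-second restore windows mirror the acceptance criteria.

```typescript
// Effective-capacity sketch (TypeScript). The signal shape is an assumption; the
// 60-second auto-pause and 120-second restore windows mirror the criteria below.
interface StationConfig {
  lines: number;                        // configured capacity.lines
  status: "open" | "busy" | "paused";
  priorStatus: "open" | "busy";         // status to restore after an auto-pause
  manuallyPaused: boolean;              // operator pauses are never auto-reverted
}
interface HealthSignal {
  onlineLines: number;
  zeroSinceMs?: number;                 // epoch ms when onlineLines first hit 0
  recoveredSinceMs?: number;            // epoch ms when onlineLines returned to >= 1
}

function effectiveCapacity(cfg: StationConfig, health: HealthSignal): number {
  // Never advertise more capacity than the device/telephony layer reports online.
  return Math.min(cfg.lines, Math.max(0, health.onlineLines));
}

function nextStatus(cfg: StationConfig, health: HealthSignal, now = Date.now()): StationConfig["status"] {
  // All lines down for at least 60 s: auto-pause (alerting handled elsewhere).
  if (health.onlineLines === 0 && health.zeroSinceMs !== undefined && now - health.zeroSinceMs >= 60_000) {
    return "paused";
  }
  // Lines back for at least 120 s: restore the prior status unless manually paused.
  if (
    cfg.status === "paused" &&
    !cfg.manuallyPaused &&
    health.onlineLines >= 1 &&
    health.recoveredSinceMs !== undefined &&
    now - health.recoveredSinceMs >= 120_000
  ) {
    return cfg.priorStatus;
  }
  return cfg.status;
}
```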

Acceptance Criteria
Create Station with Core Metadata
Given an Org Admin with station:create permission When they submit a create request with name, capacity.lines, capacity.seats, staffing.current, operating_hours (weekly schedule), skills (allowed list), status in {open,busy,paused}, and optional address Then the API returns 201 Created with station_id and persisted fields And the new station appears in the admin list within 5 seconds And invalid inputs return 422 with field-level errors (non-negative integers, non-overlapping hours, enum validation, skills subset) And an audit event station.created is recorded with actor, timestamp, and before/after And a station.created event is emitted and consumed by heatmap/feeds/recommendations within 5 seconds
Quick Capacity and Status Update
Given an Org Admin with station:update permission in the admin panel or API When they change capacity.lines or status via a quick action Then the update persists and returns 200 OK with updated fields and version And heatmap tiles and per-code streams reflect the change within 5 seconds And recommendation prompts recalculate for impacted districts within 10 seconds And validation prevents capacity < 0 or status outside {open,busy,paused} And optimistic concurrency prevents overwrites; requests with stale version/ETag return 409 Conflict with no changes applied
Temporary Pop-Up Station with Address and QR
Given an Org Admin creates a station with temporary=true and a valid street address When the station is created Then a unique QR code (SVG and PNG) linking to the station check-in/action page is generated and accessible via UI and API And the address is geocoded to lat/lng with accuracy <= 100 meters or the request fails with 400 and reason=geocode_failed And the station appears on the heatmap within 5 seconds And if an end_time is provided, the station auto-changes status to paused at end_time and stops receiving routes And all generated QR links include a signed token that expires at end_time (or 30 days if none provided)
Effective Capacity Derived from Health Signals
Given device and telephony health signals provide online_lines and status per station When online_lines < configured capacity.lines Then effective_capacity is set to online_lines and exposed via API/UI And if online_lines == 0 for at least 60 seconds, the station auto-changes status to paused and Slack/SMS alerts are sent to station leads And when online_lines recover to >=1 for 120 seconds, the station auto-restores to its prior manual status unless manually paused And all auto-changes are audit-logged with reason=health_signal and include the raw metrics snapshot
Operating Hours Enforcement and Auto-Status
Given a station has operating_hours defined in the org timezone When current time is outside operating_hours Then new calls/emails are not routed to the station and action pages show "Closed" with next-open time And the station auto-status becomes paused outside hours and auto-restores to open at the next start time unless manually overridden And daylight saving time transitions are handled correctly in the defined timezone And manual overrides are honored until the next scheduled close or manual reset and are audit-logged with reason=manual_override
API Permissions, Validation, and Audit Logging
Given REST endpoints /stations (POST, GET), /stations/{id} (GET, PATCH), and /stations/{id}/quick-actions (POST) When a request lacks a valid OAuth2 token with appropriate scope (station:read or station:write) or the caller lacks Org Admin/Station Manager role for the target station Then the API responds 401 or 403 with no side effects And request payloads are validated against the published schema; unknown fields are rejected; PATCH requires If-Match with current version; otherwise 409 Conflict And write requests are rate-limited to 60/min per org; exceeding returns 429 with retry-after And every successful write emits an audit event with actor, source (UI/API), correlation_id, and before/after values
Threshold Alerts & Escalations
"As an operations manager, I want alerts when wait times or conversion dip so that I can react before performance degrades."
Description

Configurable alerting when KPIs breach defined thresholds (e.g., wait time > target, conversion below goal, abandonment spike). Sends notifications via in-app, SMS, Slack, and email with rate limiting, quiet hours, acknowledgements, snooze, and escalation policies/on-call rotations. Displays alert badges on the map and streams, links to recommended mitigations, and maintains an alert history for review.
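A minimal sketch of how quiet hours, the "Notify Anyway" override, and per-key rate limiting could gate outbound notifications; the AlertContext shape and field names are illustrative assumptions, and in-app badges are assumed to render regardless of this check.

```typescript
interface AlertContext {
  dedupKey: string;          // e.g. `${scope}:${metric}:${stationOrDistrictId}` (illustrative)
  now: Date;
  quietHoursActive: boolean;
  notifyAnyway: boolean;     // explicit user override, logged upstream
  lastNotifiedAt?: Date;     // last outbound notification for this dedup key
  rateLimitWindowMs: number; // the RL window from the criteria
}

// Decides whether SMS/Slack/email should fire for a confirmed breach.
function shouldNotify(ctx: AlertContext): boolean {
  if (ctx.notifyAnyway) return true;      // override wins, even during quiet hours
  if (ctx.quietHoursActive) return false; // suppress outbound until quiet hours end
  if (ctx.lastNotifiedAt) {
    const elapsed = ctx.now.getTime() - ctx.lastNotifiedAt.getTime();
    if (elapsed < ctx.rateLimitWindowMs) return false; // one notification per key per window
  }
  return true;
}
```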

Acceptance Criteria
Wait Time Breach Multi-Channel Alert
Given a station has wait_time_threshold T configured and quiet hours are not in effect And the station’s observed wait time exceeds T for 2 consecutive evaluation intervals When the breach is confirmed Then an alert with severity "High" and scope "Station" is created within 5 seconds of confirmation And the in-app alert panel displays the alert within 10 seconds And a badge appears on the heatmap and the station’s stream within 10 seconds And SMS, Slack, and email notifications are sent to the primary on-call within 15 seconds, each containing station, observed value, threshold, timestamp, and a link to recommended mitigations
Conversion Rate Drop Alert with Mitigations
Given a district has conversion_rate_target C and min_sample_size N configured And over the last N attempts the conversion rate is below C for 2 consecutive evaluation intervals And quiet hours are not in effect When the breach is confirmed Then an alert with severity "Medium" and scope "District" is created within 5 seconds of confirmation And a badge appears on the heatmap and district stream within 10 seconds And the alert details include at least one recommended mitigation with a one-tap link to initiate SMS/Slack directions to team leads And a single notification per channel (in-app, SMS, Slack, email) is sent to the on-call within 15 seconds
Abandonment Spike Detection and Notification
Given abandonment_absolute_threshold A, spike_delta D (%), baseline_window 15 minutes, and min_sample_size M are configured And the current abandonment rate >= A OR current abandonment rate exceeds the 15-minute rolling baseline by >= D, with at least M new attempts in the period And quiet hours are not in effect When the spike is detected and confirmed Then an alert with severity "High" and scope "Station or District" (as configured) is created within 5 seconds And badges render on the map and streams within 10 seconds And on-call receives in-app, SMS, Slack, and email notifications within 15 seconds including baseline, current rate, delta, and mitigation link
Quiet Hours and Rate Limiting Enforcement
Given quiet hours schedule is configured and currently active When any alert condition is confirmed during quiet hours Then the system creates the alert and displays in-app badges within 10 seconds And no SMS, Slack, or email notifications are sent while quiet hours remain active And if the condition persists when quiet hours end, exactly one notification per channel is sent within 30 seconds of quiet hours ending And given a rate_limit_window RL and per-alert deduplication key When an alert remains in breach Then no more than one notification per channel per deduplication key is sent within RL (in addition to acknowledgements/escalations) unless a user invokes "Notify Anyway" And when a user invokes "Notify Anyway" during quiet hours Then a single immediate notification per channel is sent and logged as an override
Acknowledgement, Snooze, and Escalation Workflow
Given an alert is assigned to an escalation policy with levels L1..Ln and ack_timeout AT When the initial recipient receives the alert Then if they acknowledge within AT, the alert state becomes "Acknowledged" and no escalation occurs And if not acknowledged within AT, the alert escalates to the next level and sends notifications per channel within 15 seconds And when a user snoozes the alert for S minutes, no notifications are sent during S and the badge shows a "Snoozed" indicator And if the condition persists when snooze expires, a single notification per channel is sent within 15 seconds and the state returns to "Active" And if the condition resolves at any time, the alert auto-closes, badges clear within 10 seconds, and no further notifications are sent
Alert Badges on Map and Streams
Given there is at least one active alert When the heatmap and per-code streams render Then affected stations/districts display an alert badge with severity color, metric name, and count of active alerts And hovering shows metric, threshold, observed value, and alert age And clicking opens the alert details drawer with mitigation links and actions (Acknowledge, Snooze, Notify Anyway) And when the alert resolves, badges disappear from map and streams within 10 seconds
Alert History and Audit Trail
Given alerts are generated, acknowledged, snoozed, escalated, notified, and resolved over time When viewing Alert History Then each alert shows a complete timeline of lifecycle events with UTC timestamp, actor/channel, observed value, threshold, scope, recipients, delivery outcomes, and resolution reason And the list can be filtered by timeframe, metric, scope, severity, status, and actor And exporting CSV for up to 10,000 rows completes within 5 seconds and matches the on-screen filters And history retains records for at least 180 days and entries are immutable; any corrections are recorded as appended audit events with user, timestamp, and reason
Recommendation & Action Audit Trail
"As an executive director, I want an audit trail of recommendations and actions so that I can prove impact and learn what works."
Description

End-to-end event logging that captures metrics snapshots, recommendations generated, user decisions, dispatch messages, and subsequent performance outcomes with timestamps and actors. Provides exportable, immutable reports with retention/redaction controls for compliance and board reporting. Enables traceability from a recommendation to its observed impact, and provides dashboards for acceptance rates, time-to-action, and ROI of redeployments.

Acceptance Criteria
Log Generation on Recommendation Creation
Given Heatmap Router generates a new recommendation When the recommendation is produced Then an audit event is written with fields: rec_id (UUIDv4), campaign_id, district_codes[], source_snapshot_id, algorithm_version, recommended_action_type, target_station_ids[], confidence_score (0.00–1.00), generated_at (UTC ISO 8601), actor='system' And the audit event is immutable with content_sha256 and created_by='system'; any update attempt returns 403 and is logged as a separate security event And the event is queryable via API and UI within 2 seconds of generation for p95 of events
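One way to produce the content_sha256 above is to hash a canonical, key-sorted JSON serialization of the event. A minimal sketch; the interface mirrors the fields named in the criterion, but the serialization choice is an assumption.

```typescript
import { createHash } from "node:crypto";

// Field names follow the criterion above; the TypeScript shape is illustrative.
interface RecommendationAuditEvent {
  rec_id: string;
  campaign_id: string;
  district_codes: string[];
  source_snapshot_id: string;
  algorithm_version: string;
  recommended_action_type: string;
  target_station_ids: string[];
  confidence_score: number; // 0.00–1.00
  generated_at: string;     // UTC ISO 8601
  actor: "system";
}

// Hash over a canonical serialization (keys sorted) so any later mutation of the
// stored event no longer matches its recorded content hash.
function contentSha256(event: RecommendationAuditEvent): string {
  const canonical = JSON.stringify(event, Object.keys(event).sort());
  return createHash("sha256").update(canonical).digest("hex");
}
```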
Decision Capture and Linking
Given a user with role Organizer or Admin views a recommendation When they Accept, Reject, or Snooze it Then a decision event is recorded with fields: rec_id, decision_type in {'accept','reject','snooze'}, decision_reason (optional <= 256 chars), decided_by_user_id, decided_at (UTC ISO 8601), client_type, ip_hash And time_to_decision_seconds = decided_at - generated_at is computed and stored And the decision event links to the originating recommendation and is visible on the recommendation detail view within 5 seconds
Dispatch Message Audit with Redaction
Given a one-tap dispatch is sent via SMS or Slack from a recommendation When the user confirms the send Then a dispatch event is recorded with fields: rec_id, channel in {'SMS','Slack'}, message_template_id, message_content_sha256, recipient_count, recipient_ids_sha256[], sent_by_user_id, sent_at (UTC ISO 8601), delivery_summary {queued, sent, failed} And PII fields (phone, email, slack_user_id, message_body) are not stored in cleartext in the audit log; hashes are stored instead And exporting without Compliance role returns redacted dispatch data; Compliance role can include non-redacted content only with reason_code and justification; all such exports are logged
Outcome Tracking and ROI Attribution
Given a recommendation has been Accepted and a dispatch sent When post-action metrics are collected at T+15m, T+30m, and T+60m Then metrics snapshots are recorded with fields: rec_id, snapshot_at, volume, conversions, wait_indicator And uplift = (conversions_post - conversions_pre_baseline) is computed; baseline is the average conversions for the same station(s) in the 60 minutes prior And roi_per_volunteer = uplift / volunteers_redeployed is computed when volunteers_redeployed > 0; otherwise set to null And the recommendation detail view shows linked outcome snapshots and computed uplift/ROI
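The uplift and ROI definitions above reduce to a few lines; a minimal sketch with illustrative names.

```typescript
// Baseline is the average conversions for the same station(s) in the 60 minutes
// prior to the dispatch, per the criterion above.
function uplift(conversionsPost: number, conversionsPreBaseline: number): number {
  return conversionsPost - conversionsPreBaseline;
}

// roi_per_volunteer is only defined when volunteers were actually redeployed.
function roiPerVolunteer(upliftValue: number, volunteersRedeployed: number): number | null {
  return volunteersRedeployed > 0 ? upliftValue / volunteersRedeployed : null;
}
```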
Immutable Export with Retention and Redaction Controls
Given an Admin requests an audit export for a date range and campaign When the export is generated Then the system produces CSV and JSON files plus a manifest.json containing file SHA-256 hashes and record counts And the export includes all recommendation, decision, dispatch, and outcome events with their immutable content hashes; ordering is chronological by timestamp And default retention is 24 months; retention settings are configurable per workspace (12–60 months); redaction jobs purge or hash PII fields on schedule and record a redaction event with fields: redaction_job_id, scope, executed_at, records_affected And export generation for up to 1,000,000 events completes within 2 minutes for p95 of requests
Dashboard Metrics and Drill-Through Consistency
Given the Audit Analytics dashboard is loaded When filters for date range, campaign, and district are applied Then acceptance_rate = accepted / (accepted + rejected + expired) matches aggregation from raw events with absolute difference <= 0.1% And median_time_to_action equals the median(decided_at - generated_at) from decision events within 0.1 minutes And ROI widgets display roi_per_volunteer aggregated by recommendation; selecting a data point opens a drill-through list of underlying rec_ids and linked events
End-to-End Traceability and Search
Given a user searches by rec_id, station_id, or district code When they open a recommendation Then the UI displays a chronological chain: recommendation -> decision -> dispatch(es) -> outcome snapshots with timestamps and actors And each hop is clickable to a detailed event view; missing links display "no event recorded" markers And the API endpoint GET /audit/recommendations/{rec_id}/chain returns the same chain with identical counts and timestamps

Saturation Guard

Automatically paces outreach by district and office to avoid overwhelming a single lawmaker’s lines. When voicemail is likely full or quotas are met, outreach from selected station codes shifts to email or alternate targets and rebalances across swing areas, maximizing persuasion while preserving credibility.

Requirements

District Contact Load Monitoring
"As a campaign director, I want real-time detection of overloaded offices so that outreach auto-adjusts before we burn goodwill or waste supporter effort."
Description

Continuously capture and aggregate contact load signals at the target-office and district level (e.g., call answer rate, busy tones, voicemail full likelihood, average queue time, email/webform bounce or rate-limit responses) to determine saturation in real time. Normalize inputs across telephony and email providers, tag with station codes, and expose a single internal signal API for the pacing engine and dashboard. Trigger events when thresholds are crossed so campaigns automatically adjust before lines are overwhelmed, preserving relationships while maintaining throughput.
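A minimal sketch of the normalize-then-aggregate step described here, assuming an illustrative ContactSignal shape; provider-specific mapping, retention, and windowing infrastructure are out of scope.

```typescript
// Normalized signal roughly matching the fields named in the criteria below.
interface ContactSignal {
  provider: string;
  stationCode: string;
  targetOfficeId: string;
  districtId: string;
  eventType: "answer" | "busy" | "voicemail" | "queue_time" | "bounce" | "rate_limited";
  eventId: string;
  timestamp: number;    // epoch ms
  queueTimeMs?: number;
}

interface WindowMetrics {
  answerRate: number;
  busyRate: number;
  avgQueueTimeMs: number;
}

// Aggregates one 60-second tumbling window per target office, suppressing
// duplicate event_ids so nothing is double-counted.
function aggregateWindow(signals: ContactSignal[]): Map<string, WindowMetrics> {
  const seen = new Set<string>();
  const byOffice = new Map<string, ContactSignal[]>();
  for (const s of signals) {
    if (seen.has(s.eventId)) continue; // idempotent de-duplication
    seen.add(s.eventId);
    const bucket = byOffice.get(s.targetOfficeId) ?? [];
    bucket.push(s);
    byOffice.set(s.targetOfficeId, bucket);
  }

  const metrics = new Map<string, WindowMetrics>();
  for (const [officeId, events] of byOffice) {
    const calls = events.filter((e) => ["answer", "busy", "voicemail"].includes(e.eventType));
    const answered = calls.filter((e) => e.eventType === "answer").length;
    const busy = calls.filter((e) => e.eventType === "busy").length;
    const queueTimes = events.flatMap((e) => (e.queueTimeMs !== undefined ? [e.queueTimeMs] : []));
    metrics.set(officeId, {
      answerRate: calls.length ? answered / calls.length : 0,
      busyRate: calls.length ? busy / calls.length : 0,
      avgQueueTimeMs: queueTimes.length ? queueTimes.reduce((a, b) => a + b, 0) / queueTimes.length : 0,
    });
  }
  return metrics;
}
```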

Acceptance Criteria
Real-Time Telephony Signal Ingestion and Normalization
- Given registered telephony providers emit call events (answer, busy, voicemail, queue_time) via webhook or pull API When events are received Then 95th percentile end-to-end ingestion-to-normalization latency is <= 3s and 99th percentile <= 5s
- Given provider-specific payloads When normalization runs Then required fields {provider, station_code, target_office_id, district_id, event_type, event_id, timestamp, queue_time_ms?} are populated per mapping with 100% coverage for available source fields and invalid records are routed to a dead-letter queue with reason codes
- Given normalized events When aggregation executes Then rolling metrics per target_office_id and per district_id are computed in 60s tumbling windows for answer_rate, busy_rate, avg_queue_time_ms, voicemail_full_likelihood and stored with at least 24h retention
- Given duplicate event_ids When processing occurs Then duplicates are suppressed idempotently with no double-counting
- Given voicemail-related signals are present When likelihood scoring runs Then voicemail_full_likelihood is produced for >= 95% of applicable events and null otherwise
Email and Webform Bounce/Rate-Limit Detection and Aggregation
- Given email and webform providers return responses or webhooks indicating delivery outcomes When 4xx/5xx or provider-specific bounce signals are received Then outcomes are classified into {hard_bounce, soft_bounce, rate_limited, blocked} with deterministic mappings and a classification error rate < 0.5%
- Given classified responses When aggregation executes Then per target_office_id and per district_id rolling metrics are computed in 60s windows for bounce_rate and rate_limit_rate with p95 compute lag <= 3s
- Given a rate-limited response (HTTP 429 or provider-specific equivalent) When detected Then a normalized rate_limited signal is available to the internal API and event stream within 2s p95
Saturation Threshold Crossing Event Emission
- Given rolling metrics per target_office_id and district_id When busy_rate > 0.35 for 3 consecutive minutes OR voicemail_full_likelihood >= 0.80 for 2 consecutive minutes OR rate_limit_rate > 0.20 for 3 consecutive minutes Then a saturation.entered event is emitted once per unique key {target_office_id,district_id,dimension} including payload {current_metrics, thresholds, window_start, window_end, station_code_breakdown, correlation_id}
- Given a prior saturation.entered event for a key When metrics fall below hysteresis thresholds (busy_rate < 0.25 AND voicemail_full_likelihood < 0.60 AND rate_limit_rate < 0.10) for 3 consecutive minutes Then a saturation.recovered event is emitted with the same correlation_id
- Given intermittent metric fluctuation When events are generated Then duplicate events are suppressed within a 5-minute de-duplication window and all events are durably published with at-least-once delivery and p99 publish-to-subscriber latency <= 2s to pacing and dashboard consumers
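A minimal sketch of the enter/recover hysteresis in the thresholds above; persistence over consecutive minutes, de-duplication windows, and event publishing are left to the caller.

```typescript
interface SaturationMetrics {
  busyRate: number;
  voicemailFullLikelihood: number;
  rateLimitRate: number;
}

type SaturationState = "normal" | "saturated";

// Entry and recovery thresholds from the criterion above. The caller is expected
// to require the consecutive-minute persistence before invoking the transition.
function nextSaturationState(current: SaturationState, m: SaturationMetrics): SaturationState {
  const enter =
    m.busyRate > 0.35 || m.voicemailFullLikelihood >= 0.8 || m.rateLimitRate > 0.2;
  const recover =
    m.busyRate < 0.25 && m.voicemailFullLikelihood < 0.6 && m.rateLimitRate < 0.1;

  if (current === "normal" && enter) return "saturated";   // emit saturation.entered
  if (current === "saturated" && recover) return "normal"; // emit saturation.recovered
  return current;                                          // inside the hysteresis band: hold state
}
```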
Station Code Tagging and Multi-Code Attribution
- Given inbound actions include one or more station codes When events are normalized Then all signals and aggregates include station_code attribution with support for multi-valued codes preserved in breakdowns
- Given an unknown or missing station code When normalization runs Then station_code is set to "default" and the occurrence is logged via metrics without blocking processing
- Given consumers request station_code breakdowns When querying aggregates Then global totals equal the sum of per-station_code values with zero discrepancy
Internal Signal API Contract, Filtering, and Security
- Given an authenticated service with scope signals.read When it calls GET /internal/v1/signals/aggregates?target_office_id=...&granularity=60s&from=...&to=... Then the API returns 200 with schema {target_office_id,district_id,window_start,window_end,answer_rate,busy_rate,avg_queue_time_ms,voicemail_full_likelihood,bounce_rate,rate_limit_rate,station_code_breakdown,version} and p95 latency <= 400ms for <= 500 windows
- Given an unauthenticated or unauthorized caller When it calls any /internal/v1/signals endpoint Then the API returns 401/403 and no sensitive data is returned
- Given invalid parameters (missing required, out-of-range, malformed) When validation runs Then the API returns 400 with machine-readable error codes and no side effects
- Given schema evolution When a new optional field is added Then version is incremented and previous clients continue to function without changes
Outage Resilience, Staleness, and Backfill
- Given a provider outage or zero events from a source for >= 60s When monitoring detects the condition Then aggregates are marked stale=true for affected sources and no new saturation.entered events are produced solely due to staleness
- Given connectivity is restored When backfill executes Then missing windows are backfilled within 10 minutes, metrics recomputed without double-counting, and corrected windows are flagged backfill=true
- Given dead-letter volume exceeds 0.1% of events in any 5-minute window When alerting runs Then an operational alert is emitted and surfaced on the dashboard health panel within 2 minutes
Adaptive Channel Switching
"As an organizer, I want the system to switch from calls to email or alternate targets when lines are jammed so that supporters can still make an impact immediately."
Description

Automatically reroute supporters to the most effective available channel or target when call capacity is constrained or quotas are met. Update action pages and links on the fly to shift from calls to email/webform/text, or to alternate offices, while preserving district and bill context. Generate channel-appropriate instructions, dedupe repeat sends, and ensure all switches are transparent, reversible, and tracked for auditability without degrading the supporter experience.
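A minimal sketch of how a fallback channel could be selected when the primary is constrained; the ChannelState shape and rank field are illustrative assumptions.

```typescript
type ActionChannel = "call" | "email" | "webform" | "text";

interface ChannelState {
  channel: ActionChannel;
  rank: number;         // campaign-configured preference order (lower = preferred)
  eligible: boolean;    // e.g. the target has a webform, the supporter consented to SMS
  constrained: boolean; // quota met, voicemail full, rate limited, etc.
}

// Picks the highest-ranked eligible, unconstrained channel; falls back to the
// primary so the supporter is never left without an action.
function selectChannel(primary: ActionChannel, channels: ChannelState[]): ActionChannel {
  const candidates = channels
    .filter((c) => c.eligible && !c.constrained)
    .sort((a, b) => a.rank - b.rank);
  return candidates[0]?.channel ?? primary;
}
```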

Acceptance Criteria
Capacity Breach Switch for New Sessions
Given Office A’s call channel for Campaign X has reached the configured quota or exceeded the failure threshold (>= 3 consecutive busy signals or a voicemail-full event) within the last 60 seconds When a new supporter opens the action page for Campaign X Then the default channel changes from Call to the highest-ranked eligible fallback (Email, Webform, or Text) within 3 seconds And the action page shows channel-specific instructions and a district- and bill-aware script And the supporter’s target remains their own legislator unless policy specifies approved alternates And the switch is recorded with timestamp, trigger, old/new channel, target, campaign, and session ID
Mid-Session Non-Disruptive Switch Prompt
Given a supporter has an open Call action page for Office A for Campaign X And Office A is newly marked constrained per detection rules When the constraint is applied Then a banner appears within 10 seconds offering a switch to the selected fallback channel And accepting updates the instructions, script, and link in-place without page reload And declining lets the supporter continue calling with no blocking And the offered switch and the supporter’s choice are logged
Alternate Target Rebalancing Across Offices and Swing Areas
Given the primary target for a supporter is constrained When selecting an alternate target Then the system first prefers another office for the same legislator (e.g., district vs capitol) if eligible and under quota And if none, selects from the configured alternate target pool using weights to ensure no more than 30% of rerouted sessions hit a single alternate within a rolling 15-minute window And do-not-target lists and per-target daily caps are enforced And the selected alternate retains the supporter’s bill and district context in generated scripts
Cross-Channel and Cross-Session Deduplication
Given a supporter completes or attempts an action after a channel switch When the same supporter is routed to the same target and bill within the dedupe window (24 hours) Then duplicate sends are blocked across Email, Webform, and Text channels, and repeated calls are not re-prompted And the UI shows an Already completed state with options for an alternate target or share And dedupe keys include supporter_id (or hashed contact), target_id, bill_id, and channel group And organizer overrides can bypass dedupe for specific supporters with audit log entries
Channel-Appropriate Script and Instruction Generation
Given a switch from Call to Email, Webform, or Text for Campaign X When the fallback is presented Then the script includes correct salutation, subject (if applicable), and message body with all placeholders (legislator name, district, bill title/number, position, organizer signature) fully resolved with 0 unresolved tokens And webform-specific fields (address, phone, topic) are prefilled where technically possible or accompanied by exact entry instructions And SMS/Text variants fit within 160 characters per segment and follow configured link policies And all generated content passes content linting rules and supports English and Spanish where configured
Transparent, Reversible, and Auditable Switching
Given any automatic channel or target switch is executed When campaign staff open the admin audit view Then each switch is listed with timestamp, trigger condition, old/new channel, old/new target, supporter session ID (or anonymous hash), campaign ID, and decision reason And exports to CSV and JSON include the same fields and are available within 60 seconds of request And staff can toggle an override to force channel/target back to default, which takes effect for new sessions within 5 seconds and for open sessions on refresh And the supporter action page displays a notice explaining the switch with a learn-more link
Shortlink and Deep Link Continuity During Channel Switches
Given supporters access the campaign via a persistent action shortlink or QR code When a channel or target switch is activated Then all new visits via the same shortlink resolve to the updated channel/target without changing the URL within 5 seconds of the switch And existing UTM and referral parameters are preserved end-to-end And mobile deep links open the correct app or fallback web page for the new channel And CDN or cache invalidation occurs within 60 seconds to propagate updated routing
Quota and Rate Limit Controls
"As a campaign manager, I want to set and enforce daily and hourly contact caps by office so that no single lawmaker is overwhelmed and our outreach remains credible."
Description

Provide configurable per-office, per-district, and per-campaign caps and pacing windows (e.g., max calls/hour/day, quiet hours, per-supporter frequency) with time zone awareness. Enforce limits at assignment time to prevent oversaturation, and surface clear feedback to action pages when caps are reached. Support exception lists and temporary overrides for urgent moments while ensuring guardrails remain intact.
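A minimal sketch of an assignment-time cap check against a rolling hour and a local calendar day; the shapes are illustrative and time-zone resolution is assumed to happen upstream.

```typescript
interface OfficeCaps {
  maxPerHour: number; // rolling 60-minute window
  maxPerDay: number;  // calendar day in the office's local time zone
}

// `assignmentTimes` are epoch-ms timestamps of assignments already made to the office;
// `startOfLocalDay` is midnight in the office's local time zone, resolved by the caller.
function underCaps(
  assignmentTimes: number[],
  caps: OfficeCaps,
  now: number,
  startOfLocalDay: number,
): boolean {
  const hourAgo = now - 60 * 60 * 1000;
  const lastHour = assignmentTimes.filter((t) => t > hourAgo).length;
  const today = assignmentTimes.filter((t) => t >= startOfLocalDay).length;
  return lastHour < caps.maxPerHour && today < caps.maxPerDay;
}
```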

Acceptance Criteria
Enforce Per-Office Hourly and Daily Caps at Assignment
Given a campaign with Office A caps configured as max_calls_per_hour=50 and max_calls_per_day=200 And 50 call assignments to Office A have been created in the last rolling 60 minutes When an additional supporter starts an action that would assign a call to Office A Then the system must not assign Office A for that supporter And the assignment engine selects an alternate eligible target if available; otherwise, the call action is disabled and messaging is displayed And assignments to Office A automatically resume once the rolling hour drops below 50 And no more than 200 assignments are created to Office A between 00:00 and 23:59 in Office A’s local time
Enforce Per-District and Per-Campaign Pacing Windows
Given per-district caps are set to max_calls_per_hour=30 and max_calls_per_day=120 And a per-campaign pacing limit is set to max_calls_per_hour=300 using a rolling window When aggregate assignments in the last 60 minutes reach 300 across all districts Then the system throttles new assignments so the total does not exceed 300 within any rolling 60-minute window And no single district receives more than 30 assignments within any rolling 60-minute window or more than 120 assignments in a calendar day in the district’s local time And the dashboard displays real-time counts for used/remaining caps at district and campaign levels
Quiet Hours by Target Time Zone with Deferral Messaging
Given quiet_hours are configured as 20:00–08:00 for call actions And it is 21:30 in the target office’s local time zone When a supporter initiates a call action from any time zone Then the system must not assign a call to that office And the action page displays a non-blocking banner explaining calls are paused until 08:00 local time with the office’s current local time shown And the system offers a switch to send an email instead or schedules the call for the first minute after quiet hours with supporter confirmation And daylight saving time transitions are correctly honored based on the office’s time zone database
Per-Supporter Frequency Limits Across Channels
Given per-supporter limits are configured as max_calls_to_same_office_per_24h=1, max_total_actions_per_day=3, and cross_channel_cooldown_minutes=30 And the supporter completed a call to Office B 10 minutes ago When the supporter attempts another call to Office B within 24 hours Then the system blocks assignment to Office B and suggests an alternate office or email if permitted And if the supporter has already completed 3 actions today, all new action attempts are blocked with clear messaging until 00:00 in the supporter’s local time And if the supporter completed any action in the last 30 minutes, call and email actions enforce a 30-minute cooldown before another assignment is made
Exception Lists and Temporary Overrides with Guardrails
Given an admin with Override permissions creates a temporary override for District 12 raising max_calls_per_hour from 30 to 60 for 2 hours with a required reason And the campaign has a global hard_cap_per_hour=500 and per-supporter limits enabled When assignments are made during the override window Then assignments to District 12 may exceed the original district hourly cap up to 60 but never exceed the campaign hard cap of 500 And per-supporter frequency limits remain enforced and cannot be overridden And the override automatically expires at the configured end time, after which original caps resume without manual intervention And all override creations, edits, and expirations are logged with user, timestamp, scope, and reason
Action Page Feedback and Alternate Routing When Caps Reached
Given Office C has reached its hourly cap and email is enabled as an alternate channel When a supporter lands on the call action page that would target Office C Then the page loads successfully with HTTP 200 and displays a prominent message that calls are paused due to pacing limits And the primary call-to-action switches to “Send an email” with prefilled district-specific script And if alternate eligible targets exist, a “Try another office” option is presented with dynamic reassignment within 500 ms And no click leads to a dead-end; every path results in a valid action or clear explanation
Audit Log of Limit Enforcement Decisions
Given logging is enabled for Saturation Guard decisions And a supporter is prevented from calling Office D due to daily cap reached When viewing the campaign audit log Then an entry exists capturing supporter_id (hashed or pseudonymous), target office, decision (blocked), reason (daily cap), relevant thresholds and counters, timestamp, and the rule ID that triggered the decision And audit entries are immutable, queryable by time range, target, decision, and rule ID And export to CSV/JSON reproduces the same data fields without loss
Target Rebalancing Engine
"As a strategist, I want actions rebalanced to high-impact districts when others saturate so that overall persuasion and deliverability are maximized."
Description

Redistribute pending and new actions across comparable or high-impact targets when certain districts hit limits, using configurable weights (e.g., persuasion score, responsiveness, district priority, current volume). Respect geographic and policy constraints, avoid double-contact, and provide explainable decisions with reason codes. Operate in near real time and support simulation mode to preview effects before applying.

Acceptance Criteria
District Limit Triggered Rebalance
Given a district's primary target is marked saturated due to reaching the configured cap And there are pending or newly created actions addressing that district And at least one eligible alternate target exists per campaign configuration When the Target Rebalancing Engine runs Then 100% of those actions are reassigned away from the saturated target until the saturation window resets And the selection of alternates follows the current weighted ranking And the p95 decision latency from trigger to reassignment is 2 seconds or less And a summary record of counts moved per target is persisted for the event
Weighted Target Selection with Configurable Weights
Given campaign weights for persuasion_score, responsiveness, district_priority, and current_volume are configured And candidate targets have those attributes populated When the engine computes rankings Then each candidate is assigned a composite score using the configured weights and documented normalization And tie-breakers are applied in order: higher district_priority, lower current_volume, then ascending target_id for determinism And for a fixed test fixture, the resulting rank order and allocation proportions match the expected values to within ±1 action due to rounding And updating any single weight by +50% in config results in a weakly monotonic increase in share of assignments to candidates with higher values of that attribute over the same fixture
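A minimal sketch of the composite scoring and tie-breaker order described above; the weight and attribute names are illustrative, and attributes are assumed to already be normalized to comparable ranges before scoring.

```typescript
interface CandidateTarget {
  targetId: string;
  persuasionScore: number;  // assumed normalized to [0, 1]
  responsiveness: number;   // assumed normalized to [0, 1]
  districtPriority: number; // higher = more important
  currentVolume: number;    // assignments in the current window (normalized)
}

interface Weights {
  persuasion: number;
  responsiveness: number;
  districtPriority: number;
  currentVolume: number; // applied negatively: more existing volume lowers the score
}

// Composite score plus the deterministic tie-breaker order from the criterion:
// higher district_priority, lower current_volume, then ascending target_id.
function rankCandidates(candidates: CandidateTarget[], w: Weights): CandidateTarget[] {
  const score = (c: CandidateTarget) =>
    w.persuasion * c.persuasionScore +
    w.responsiveness * c.responsiveness +
    w.districtPriority * c.districtPriority -
    w.currentVolume * c.currentVolume;

  return [...candidates].sort(
    (a, b) =>
      score(b) - score(a) ||
      b.districtPriority - a.districtPriority ||
      a.currentVolume - b.currentVolume ||
      a.targetId.localeCompare(b.targetId),
  );
}
```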
Geographic and Policy Constraint Enforcement
Given campaign constraints define allowed geographies, policy areas, and excluded targets When selecting alternates Then no assignments are made to targets outside allowed geographies or policy areas or listed as excluded And every rejected candidate is logged with a constraint code indicating the violated rule And if no eligible targets remain, the action is paused with reason code NO_ELIGIBLE_TARGETS and is not assigned
Double-Contact Avoidance for Supporter–Target Pairs
Given a supporter has contacted a target within the campaign's cooldown window for this campaign When the engine would otherwise assign that same supporter–target pair Then a different eligible target is selected instead And if none exists, the action is paused with reason code NO_ELIGIBLE_NEW_TARGETS And assignments are idempotent under concurrency, enforced by a unique constraint on supporter_id+target_id+campaign_id; duplicate attempts return a handled conflict and do not create multiple assignments
Reason Codes and Decision Logging for Reassignments
Given any reassignment occurs When the decision is finalized Then a decision log entry is written containing action_id, previous_target_id, new_target_id, timestamp, trigger_type, applied_weights, top_5_candidate_scores, and reason_codes And 99.9% of decision logs are queryable within 5 seconds of the reassignment And an export endpoint returns CSV and JSON for a supplied campaign_id and time range with a checksum for integrity verification
Near Real-Time Performance Under Load
Given sustained input of 1,000 actions per minute and 10 saturated districts When the engine processes rebalancing for 10 continuous minutes Then p95 decision latency is <= 2 seconds and p99 is <= 5 seconds And end-to-end throughput is >= 1,000 actions per minute with a backlog growth of 0 And error rate (5xx or failed assignments) is < 0.5% And if latency exceeds the p99 threshold for 2 minutes, the system triggers an alert and pauses new reassignments while preserving existing assignments
Simulation Mode Preview and No-Impact Guarantee
Given simulation mode is run with a specified config snapshot and an input set of 10,000 pending/new actions When the simulation completes Then it returns predicted allocations per target, percent of actions changed, and expected saturation relief time by district And no production assignments, logs, or counters are persisted or modified And a diff report of current vs simulated distributions with aggregated reason codes is available for download And repeating the simulation with the same inputs produces identical results
Script Variant Generator by Channel and Status
"As a volunteer, I want clear, channel-tailored instructions so that I can complete my action quickly and confidently even when the system changes routes."
Description

Produce concise, channel-optimized scripts that adapt to bill status, district specifics, and saturation context (e.g., voicemail vs. live staff vs. email/webform). Maintain consistent core messaging while adjusting length, tone, and required fields per channel. Integrate with RallyKit’s existing script engine, support localization, and include compliance disclaimers where required. Update variants dynamically when the system switches channels or targets.
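A minimal sketch of placeholder resolution with an unresolved-token check, in line with the "no unresolved placeholders" requirements in the criteria below; the {placeholder} syntax is illustrative and not necessarily the script engine's actual template format.

```typescript
// Resolves {placeholder} tokens in a template and reports any that could not be
// filled, so a variant with unresolved tokens can be blocked before it goes out.
function renderScript(
  template: string,
  values: Record<string, string>,
): { body: string; unresolved: string[] } {
  const unresolved: string[] = [];
  const body = template.replace(/\{(\w+)\}/g, (token, key: string) => {
    const value = values[key];
    if (value === undefined || value === "") {
      unresolved.push(key);
      return token; // leave the token in place so validation can surface it
    }
    return value;
  });
  return { body, unresolved };
}

// Example (hypothetical template and values): a send is allowed only when
// `unresolved` comes back empty.
const { unresolved } = renderScript(
  "Hi, I'm a constituent in {district}. Please support {bill_id}.",
  { district: "HD-42", bill_id: "SB 101" },
);
```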

Acceptance Criteria
Dynamic Channel or Target Switch Updates Script
Given an active campaign page is open and Saturation Guard flags a channel or target change for the supporter’s district When the channel switches (e.g., Phone -> Email/Webform) or the target legislator is rebalanced Then a new, channel- and target-optimized script variant is generated within 500 ms, the action page updates within 2 seconds without user reload, the variant contains the configured core message key phrase(s), correct target name/title/district, and any channel-required fields, and the previous variant is archived with a new version ID
Phone: Live Staff vs Voicemail Script Variants
Given call_type=live When generating a phone script Then the live variant is <= 60 words, includes greeting, self-identification, core ask, and one concise question for staff, and preserves the configured core message key phrase(s) and target details Given call_type=voicemail When generating a phone script Then the voicemail variant is <= 110 words, includes greeting, identity, core ask, bill ID, and callback detail (name + ZIP), contains no staff question, and preserves the configured core message key phrase(s) and target details
Bill Status Change Triggers Variant Refresh
Given the bill status changes (e.g., In Committee -> Floor Vote Today) When the status change event is received by RallyKit Then all active script variants are regenerated using templates mapped to the new status within 2 minutes, open action pages render the updated content within 5 seconds, no unresolved placeholders remain, and the variant version increments and is recorded for audit
District Personalization with Safe Data Handling
Given a supporter address resolves to Legislator L in District N When generating any channel script Then the script includes L’s name and title, District N identifier, and the bill ID; if no district datapoint is available, a generic district line is used; and no PII beyond supporter name and ZIP appears in the script text
Localization with Fallback per Channel
Given supporter locale is set to a supported language (e.g., es-US) When generating a script for any channel Then 100% of translatable strings render in the selected language and all placeholders render correctly; if a translation key is missing, only that key falls back to en-US while the rest remains localized; diacritics are preserved
Compliance Disclaimers by Jurisdiction and Channel
Given jurisdiction J requires a compliance disclaimer and the channel is Email or Webform When generating the script Then the configured disclaimer appears as the final paragraph exactly matching the compliance text for J Given jurisdiction J requires a spoken disclaimer and the channel is Phone When generating the script Then a single-sentence oral disclaimer line appears after the greeting Given jurisdiction J has no disclaimer requirement When generating the script Then no disclaimer text is included
Script Engine Integration and Performance
Given RallyKit Script Engine v2 is installed and variant templates are stored in the engine When rendering 10,000 script variants across Phone (live/voicemail) and Email/Webform channels under load Then the 99th-percentile render time is <= 300 ms, overall generation error rate is < 0.1%, and v1 campaign configurations render via adapters without template changes
Live Audit Trail and Reporting
"As a nonprofit director, I want an auditable record of why and when Saturation Guard changed channels so that I can reassure partners and funders and resolve complaints quickly."
Description

Record every pacing decision, switch, quota event, and rebalancing action with timestamps, user/segment identifiers, target details, channel, and reason codes. Provide real-time dashboards and exportable reports (CSV/JSON) to demonstrate compliance and impact to stakeholders. Include retention policies, privacy safeguards, and filters for investigating anomalies or complaints.

Acceptance Criteria
Audit Log Completeness for Pacing Decisions
Given Saturation Guard makes a pacing-related decision, When the event is recorded, Then the log entry includes event_id, event_type, timestamp (UTC ISO-8601 ms), campaign_id, segment_id or user_id, target_ids (office_id, legislator_id, district), channel_before, channel_after, reason_code, decision_score, request_id, and actor (system or user). Given a pacing event occurs, When persistence is attempted, Then 99.9% of log entries are durable within 2 seconds and acknowledged by the datastore. Given a required field is missing or invalid, When validation runs, Then the event is rejected, the error is logged with a reason, and a metric counter is incremented.
Real-Time Dashboard Visibility and Filtering
Given new audit events are generated, When viewing the Live Audit dashboard, Then events appear within 5 seconds of creation with correct field values. Given any combination of filters (date range, event_type, campaign_id, district, office_id, legislator_id, channel, reason_code, segment_id, user_id), When applied, Then only matching events are returned within 2 seconds for up to 10,000 results. Given no events match filters, When applied, Then the UI displays zero results without errors and offers to clear filters.
Export Audit Trail (CSV and JSON)
Given a filtered set of events, When exporting to CSV, Then the file contains a header row and all selected records with RFC 4180-compliant escaping, UTC timestamps, and column order matching schema v1. Given a filtered set of events, When exporting to JSON, Then the output is newline-delimited JSON objects conforming to schema v1 with correct data types. Given up to 1,000,000 matching events, When export is requested, Then the system streams the file and completes within 2 minutes or provides a resumable download link with progress updates. Given an export completes, When verification runs, Then a checksum is provided and matches the downloaded file.
Retention and Privacy Safeguards
Given the default retention policy is 180 days, When no override is set, Then audit events older than 180 days are purged daily and a purge summary event is logged. Given an admin sets retention between 30 and 730 days or places a legal hold on a campaign, When the next purge runs, Then events respect the configured retention or hold. Given fields containing personal data (e.g., phone, email), When stored, Then PII is hashed or tokenized and not exportable in raw form unless a privileged export with justification is approved and logged. Given a data subject deletion request is approved, When executed, Then identifiers are irreversibly removed or anonymized across audit logs within 7 days and a compliance record is logged.
Anomaly and Complaint Investigation
Given a complaint ticket with a timestamp and target office_id, When searching the audit, Then the investigator can filter by ticket reference, time window, office_id, and campaign_id to return related events. Given a selected event, When opening details, Then the UI shows the full decision chain (preceding and subsequent related events) with rule version and reason_code history. Given an investigation filter is applied, When exporting, Then only the filtered subset is exported with the same schema.
Tamper-Evident and Immutable Logging
Given an audit event is written, When stored, Then it is appended-only and linked via a hash chain (prev_hash, event_hash) to make tampering detectable. Given an admin attempts to edit or delete an event, When action is executed, Then the system denies modification and records an access event; redactions are performed via superseding events only. Given daily integrity verification runs, When discrepancies are detected, Then the job fails the check, raises an alert, and blocks new exports until resolved.
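A minimal sketch of the prev_hash/event_hash chain and its verification pass; the event payload is assumed to be a canonical, stable serialization.

```typescript
import { createHash } from "node:crypto";

interface ChainedAuditEvent {
  payload: string;   // canonical serialization of the event body
  prevHash: string;  // event_hash of the previous entry ("" for the first)
  eventHash: string; // hash over prevHash + payload
}

const sha256 = (input: string) => createHash("sha256").update(input).digest("hex");

// Append-only write: each entry commits to the entry before it.
function appendEvent(chain: ChainedAuditEvent[], payload: string): ChainedAuditEvent[] {
  const prevHash = chain.length ? chain[chain.length - 1].eventHash : "";
  return [...chain, { payload, prevHash, eventHash: sha256(prevHash + payload) }];
}

// Daily integrity verification: recompute every link and fail on any mismatch.
function verifyChain(chain: ChainedAuditEvent[]): boolean {
  let prevHash = "";
  for (const entry of chain) {
    if (entry.prevHash !== prevHash || entry.eventHash !== sha256(prevHash + entry.payload)) {
      return false; // tampering or corruption detected
    }
    prevHash = entry.eventHash;
  }
  return true;
}
```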
Reason Codes and Channel Shift Accuracy
Given a channel shift, quota, or voicemail-full event occurs, When logged, Then reason_code is present, valid against the catalog, and includes reason_detail where applicable. Given rule mappings are updated, When the next event is processed, Then reason_code_version is recorded and visible in dashboard and exports. Given test fixtures simulating quota reached, voicemail full detected, or swing-area rebalancing, When processed, Then the assigned reason_codes match the configured rules 100% of the time.
Admin Controls and Safeguards
"As an admin, I want configurable thresholds, overrides, and alerts so that I can manage edge cases and protect relationships without halting the campaign."
Description

Deliver an admin UI to configure thresholds, caps, weights, quiet hours, and exclusion lists; pause specific targets; perform manual overrides; and run dry-run simulations. Provide role-based access, change history, and alerting via email/Slack when thresholds are crossed or anomalies appear. Include safe rollback to default routing if providers fail or signals degrade.

Acceptance Criteria
Configure thresholds, caps, and weights
- Given a user with role "Campaign Admin" is authenticated, When they open Saturation Guard > Settings, Then they can set numeric caps per target office for calls per hour and per day with min=0, max=10000, integers only, and see inline validation errors.
- Given valid values are entered, When Save is clicked, Then settings persist to the database and are retrievable via API within 2 seconds.
- Given settings are saved, When the routing engine runs, Then new caps and weights take effect within 2 minutes and are applied only to new actions.
- Given any field is changed, When saved, Then an audit record is created capturing user, timestamp (UTC), field name, before/after values, and optional reason, and is viewable in Change History.
- Given weight sliders (0–100%) are adjusted, When saved, Then total active-channel weight normalizes to 100% and an error is shown if any single weight is <0 or >100.
- Given voicemail-full detection threshold (N failures within M minutes) is set, When N failed call outcomes occur within M minutes to a target, Then that target’s phone channel is marked saturated for at least M minutes.
Quiet hours enforcement by timezone
- Given org timezone preferences are configured, When an admin sets quiet hours per timezone and day of week, Then call routing is disabled during those windows and actions shift to the next allowed channel per weights within 2 minutes.
- Given quiet hours are active, When a supporter opens a one-tap call page, Then the call option is disabled with a message showing the next available time and alternate action buttons.
- Given an emergency override is enabled by an Org Owner with MFA, When saved with an expiry up to 2 hours, Then quiet hours are bypassed until expiry and this is logged.
- Given calls are suppressed by quiet hours, Then each suppressed action logs suppression_reason=QuietHours and the alternate channel used.
Exclusion lists and target pause management
- Given an admin uploads or pastes a CSV of target IDs/district codes (max 5000 rows), When validated, Then excluded targets are saved and removed from routing within 60 seconds.
- Given a target is excluded or paused, When routing evaluates destinations, Then no calls/emails are sent to that target and distribution rebalances according to weights.
- Given a paused target, When viewed in Target Manager, Then status shows Paused with actor, timestamp, and optional note; Unpause restores routing within 60 seconds.
- Given invalid IDs are present in upload, When processed, Then those rows are reported with line numbers and reasons; no partial apply occurs unless admin confirms "Apply valid rows only".
- Given auto-pause-if-capped is enabled, When a target’s cap is reached, Then the system auto-pauses that target until the cap window resets and notifies admins.
Manual routing overrides with TTL
- Given an admin defines a temporary override (e.g., route 30% of District X to alternate office or force email channel), When saved with TTL between 5 and 1440 minutes, Then the override applies within 60 seconds and auto-expires at TTL.
- Given multiple rules exist, When conflicts occur, Then exclusion/paused targets take precedence over overrides; overrides take precedence over weights.
- Given an override is active, When Change History is viewed, Then an entry shows scope, percentage, channel changes, creator, reason, start/end times.
- Given an override is cancelled, When Cancel is clicked, Then routing reverts to prior configuration within 60 seconds and the cancellation is logged.
Dry-run simulations without live outreach
- Given an admin opens Simulate, When they input traffic profile (actions/hour), target mix, current settings snapshot time, and duration, Then the system returns a simulation plan within 15 seconds.
- Given a simulation runs, Then no live calls/emails/SMS are triggered; only simulated metrics are produced.
- Given simulation completes, When viewing results, Then the UI shows projected distribution by district/office/channel, cap hits, saturation events, quiet-hour suppressions, and alerts that would have fired.
- Given results exist, When Export is clicked, Then a CSV and JSON download is provided with reproducible run ID and input parameters.
- Given "replay last 24h" is selected, When run at 1x or 10x, Then outputs are deterministic given the same seed and settings snapshot.
Role-based access control for admin UI and APIs
- Define roles: Viewer (read-only), Campaign Admin (configure settings, simulations, overrides), Org Owner (manage roles, emergency overrides, provider failover settings).
- Given a user’s role, When accessing restricted endpoints or UI actions, Then permissions are enforced and unauthorized requests return HTTP 403 with no side effects.
- Given a role change by an Org Owner, When saved, Then the change takes effect on the next request, is logged in Change History, and the affected user is notified by email.
- Given SSO is enabled, When user authenticates, Then roles map from IdP group claims; mismatches are rejected with HTTP 403 and an audit entry is recorded.
- Given a Viewer role, When attempting to Save settings, Then Save controls are disabled in UI and blocked by API.
Safe rollback and alerting on provider degradation
- Given provider health monitors run every 30 seconds, When error rate > 5% for 5 consecutive minutes or p95 latency > 3s, Then the system fails over to default routing and the email channel within 60 seconds.
- Given failover occurs, Then no supporter action is lost; pending actions are queued and rerouted within a 2-minute SLA.
- Given rollback is active, When providers are healthy for 10 consecutive minutes, Then normal routing is restored gradually over 5 minutes to avoid spikes.
- Given thresholds are crossed or anomalies detected (e.g., traffic spike > 3x baseline), Then email and Slack alerts are sent within 2 minutes and include the metric, affected targets, start time, and a deep link to the dashboard.
- Given any failover, restore, or threshold-crossing event, Then an audit log entry is created with a correlation ID linking to system metrics.

Goal Rings

Set per-station and per-team targets with live progress rings, ETAs, and milestone chimes. Project a big-board view to fuel friendly competition and celebrate wins. Keeps volunteers motivated and gives organizers an instant read on whether goals will be hit before the deadline.

Requirements

Target Setup & Hierarchy
"As an organizer, I want to set team and station goals with clear milestones so that everyone knows targets and progress is measured consistently."
Description

Enable organizers to create, edit, and manage goals at multiple levels (campaign, team, station) with inheritance and overrides. Supports goal types (e.g., calls, emails, signups), absolute targets or rate-based targets, start/end times, and configurable milestone thresholds (e.g., 25/50/75/100%). Integrates with RallyKit’s campaign model and station/team assignments to ensure actions are attributed correctly. Includes validation (e.g., end time after start, numeric targets), conflict resolution when station and team goals differ, and retroactive recalculation when goals are updated. Provides secure API endpoints and UI forms to configure goals, preview ring behavior, and save drafts before publishing to live displays.
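A minimal sketch of the station/team/campaign resolution order implied by the inheritance and override rules here; the Goal shape is illustrative.

```typescript
interface Goal {
  type: "calls" | "emails" | "signups";
  target: number;       // absolute target, or the derived total for rate-based goals
  thresholds: number[]; // milestone percentages, e.g. [25, 50, 75, 100]
  startsAt: string;     // campaign-time-zone timestamps
  endsAt: string;
}

interface GoalScope {
  campaignGoal?: Goal;
  teamGoal?: Goal;    // optional team-level override
  stationGoal?: Goal; // optional station-level override
}

// Effective goal for a station ring: station override wins, then team, then campaign.
function effectiveStationGoal(scope: GoalScope): Goal | undefined {
  return scope.stationGoal ?? scope.teamGoal ?? scope.campaignGoal;
}

// A team ring ignores station overrides and falls back from team to campaign.
function effectiveTeamGoal(scope: GoalScope): Goal | undefined {
  return scope.teamGoal ?? scope.campaignGoal;
}
```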

Acceptance Criteria
Create Campaign, Team, and Station Goals with Inheritance
Given an organizer with Manage Goals permission, When they create a campaign-level goal with type, target, thresholds, and time window, Then the goal saves and is available to descendant teams and stations for inheritance. Given a team with no explicit team goal, When the team ring loads, Then it inherits the campaign goal values and labels as read-only inherited fields in the UI. Given a station under a team with no explicit station goal, When the station ring loads, Then it inherits the nearest ancestor goal (team if present, else campaign). Given a lower-level override is saved (team or station), When the respective ring loads, Then it uses the override and the UI marks overridden fields accordingly. Given an ancestor goal is updated, When descendants have not overridden those fields, Then subsequent reads reflect the updated inherited values and live displays update within 5 seconds.
Goal Type and Target Modes (Absolute vs Rate-Based)
Given the organizer selects a goal type from {calls, emails, signups}, When saving, Then only the selected type is counted toward progress for that goal. Given an absolute target is entered (e.g., 500), When actions are recorded, Then ring progress = completed_actions_of_type / 500 and ETA is computed from current rate if the time window is active. Given a rate-based target is entered (e.g., 30 calls/hour) with a defined start/end time, When saving, Then the system displays and stores a derived total target = rate * duration and uses it for ring progress. Given the time window or rate is edited on a rate-based goal, When saved, Then the derived total target and ETAs recalculate before confirmation and after save on live displays within 5 seconds. Given a rate-based target is configured without a valid time window, When saving, Then the save is blocked with a validation error indicating start and end times are required.
Validation Rules for Goal Configuration
Given a start time and end time, When end <= start, Then the UI prevents save and the API returns 400 with code INVALID_TIME_RANGE. Given target values for absolute mode, When target <= 0 or non-numeric, Then the UI marks the field invalid and the API returns 400 with code INVALID_TARGET. Given milestone thresholds are configured, When thresholds are not in ascending order or outside [0,100], Then save is blocked with inline error and API returns 400 with code INVALID_THRESHOLDS. Given no thresholds are provided, When saving, Then defaults of 25/50/75/100 are applied. Given the campaign time zone is set, When saving start/end times, Then times are persisted and displayed in the campaign time zone consistently across UI and API.
Conflict Resolution Between Team and Station Goals
Given both team-level and station-level goals exist for the same type and time window, When viewing a station ring, Then the station ring uses the station goal and not the team goal. Given a team has its own goal, When viewing the team ring, Then the team ring uses the team goal regardless of station overrides. Given a team has no goal but the campaign does, When viewing the team ring, Then it uses the campaign goal via inheritance. Given conflicting goals exist, When viewing the Settings UI for the station, Then a notice indicates the effective goal source (station override vs team/campaign) and how conflicts are resolved. Given conflicts are changed (e.g., remove station override), When saved, Then the effective goal updates immediately on next fetch and live displays reflect the change within 5 seconds.
Retroactive Recalculation on Goal Update
Given an existing goal with recorded actions, When the target, thresholds, or time window is updated and saved, Then progress percentages and milestone states are recalculated using all historical actions within the new time window. Given the time window is shortened, When recalculation runs, Then actions outside the new window are excluded from progress and aggregates. Given thresholds are changed, When recalculation completes, Then milestone completion statuses reflect the new thresholds without duplicating historical notifications in the UI. Given recalculation completes, When the Big Board and stations reload or receive updates, Then all rings reflect recalculated progress and ETAs within 5 seconds of save.
Draft, Preview, and Publish Flow with Secure API
Given an organizer creates or edits a goal, When they choose Save as Draft, Then no live displays change and the draft is versioned. Given a draft exists, When the organizer clicks Preview, Then a non-destructive preview ring renders using current data (and simulated milestones) without affecting production metrics. Given a draft exists, When the organizer publishes it and confirms, Then the published version becomes live immediately and supersedes any prior live version for that scope. Given an unauthenticated or unauthorized client calls the goal APIs, When attempting create/update/publish/delete, Then the API responds 401 (unauthenticated) or 403 (forbidden) and no changes are persisted. Given a goal is edited, When the organizer cancels, Then no draft or live change is saved.
Accurate Attribution of Actions to Goals
Given actions are performed by supporters, When actions are logged, Then they are attributed to the correct station via session or assignment metadata, roll up to the assigned team, and to the campaign. Given a station is reassigned from Team A to Team B mid-campaign, When viewing team progress, Then actions prior to reassignment remain with Team A, and subsequent actions count toward Team B; campaign totals remain unchanged. Given actions occur without a station context, When aggregating, Then they count toward the campaign goal but not toward any team or station goal. Given attribution mappings change, When a goal is recalculated, Then historical actions are counted according to the mapping in effect at the time of the action and within the goal’s active window.
Live Progress Rings & Data Pipeline
"As a volunteer lead, I want live progress rings that update instantly so that I can coach stations in real time."
Description

Render real-time progress rings that reflect current completion toward each goal using RallyKit’s action event stream. Subscribe via WebSockets/SSE to ingest actions, deduplicate events, and update ring visuals and counters with sub-second latency. Support concurrent rings per station and team, show total completed, percent to goal, and remaining. Implement client-side smoothing for bursty updates, offline/poor-network tolerance with queued deltas, and graceful degradation to periodic polling. Persist periodic snapshots for recovery, and ensure low CPU/GPU usage so big-board projections run smoothly on commodity hardware.
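
As a rough illustration of the dedup-and-update path described above (not the actual RallyKit pipeline), a client-side store might track seen event IDs and recompute the displayed numbers on each unique event; the `RingEvent` shape and field names here are assumptions:

```typescript
interface RingEvent {
  eventId: string;        // unique per action event (assumed field)
  goalId: string;
  serverTimestamp: number;
}

interface RingState {
  goal: number;           // target for this ring
  completed: number;
  percent: number;        // 0–100, one decimal place
  remaining: number;
}

class RingStore {
  private seen = new Set<string>();             // dedup by eventId
  private rings = new Map<string, RingState>();

  register(goalId: string, goal: number): void {
    this.rings.set(goalId, { goal, completed: 0, percent: 0, remaining: goal });
  }

  // Apply one event from the WebSocket/SSE stream; duplicates are ignored.
  apply(event: RingEvent): RingState | undefined {
    if (this.seen.has(event.eventId)) return this.rings.get(event.goalId);
    this.seen.add(event.eventId);

    const ring = this.rings.get(event.goalId);
    if (!ring) return undefined;

    ring.completed += 1;
    ring.percent = Math.min(100, Math.round((ring.completed / ring.goal) * 1000) / 10);
    ring.remaining = Math.max(ring.goal - ring.completed, 0);
    return ring;
  }
}
```
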

Acceptance Criteria
Sub-Second Live Updates via Event Stream
Given the client is subscribed via WebSocket or SSE and receives a unique action event with serverTimestamp and goal context, When the event is received, Then the corresponding ring and counters update within 500 ms at p95 and 800 ms at p99, and the numeric total, percent to goal, and remaining reflect the new state. Given a goal of G and completed C, When C changes, Then percent = round((C/G)*100, 1) clamped to [0,100] and remaining = max(G - C, 0), and the ring arc matches the displayed percent within ±0.5%.
Concurrent Rings per Station and Team
Given up to 50 rings across stations and teams are displayed simultaneously, When events for one specific goal arrive, Then only that ring updates and no other ring's totals change. Given both station-level and team-level goals are shown on the same big-board, When updates occur, Then each ring displays total completed (integer), percent to goal (N.N%), and remaining (integer), and values are internally consistent (completed + remaining = goal) at all times.
Idempotent Deduplication and Ordering Tolerance
Given duplicate action events with identical eventId and goalId arrive within a 24-hour window, When processing events, Then counts increase at most once per unique eventId. Given action events arrive out of order with up to 5 minutes of clock skew, When processing the same event set, Then final totals equal the totals produced with perfectly ordered delivery. Given late or replayed events older than the last applied offset, When received, Then they are ignored without regressing totals or causing negative remaining.
Offline and Poor-Network Tolerance with Queued Deltas
Given the client loses connectivity for up to 10 minutes, When connectivity is restored, Then the UI reconciles to correct totals within 3 seconds without double-counting by requesting deltas since the last acknowledged offset. Given intermittent packet loss up to 20% and jitter, When the stream reconnects, Then missed updates are backfilled and counters become eventually consistent within 5 seconds p95. Given a render backlog during burst delivery, When the UI catches up, Then queued deltas are applied in order and counters increase monotonically without visual regression.
Graceful Fallback to Polling and Auto-Resume Streaming
Given the event stream fails and remains unavailable for 5 seconds despite 3 retry attempts, When the failure is detected, Then the client switches to HTTP polling at 3-second intervals. Given the stream becomes available, When a successful handshake occurs, Then polling stops within one polling interval and streaming resumes without gaps or double-counts by using the last acknowledged offset. Given polling mode is active, When the backend state changes, Then the UI reflects changes within 4 seconds median and 7 seconds p95.
Periodic Snapshots and Fast Recovery
Given normal operation, When 100 new events are processed or 10 seconds elapse (whichever occurs first), Then a snapshot is persisted containing goalId(s), totals, lastProcessedOffset, and timestamp. Given an app reload or crash, When the client relaunches, Then rings restore from the last snapshot within 1 second and backfill from the server using lastProcessedOffset to reach real-time within 3 seconds p95. Given a missing or corrupted snapshot, When detected at startup, Then the client performs a full sync to accurate state within 5 seconds p95 and records a recoverable error.
Low Resource Usage and Smooth Animation on Big-Board
Given a big-board displaying 50 rings at 1080p on commodity hardware (2-core CPU with integrated GPU, 8 GB RAM), When running continuously for 2 hours, Then average app CPU usage is <25%, GPU <35%, working set <300 MB, with no memory growth >50 MB over the period. Given bursty event rates up to 200 events/second across all goals, When rendering, Then updates are coalesced to ≤10 visual updates/second per ring while numeric counters remain accurate. Given ring progress animations, When updates occur, Then animation durations are smoothed between 100–300 ms and frame rate remains ≥50 FPS at p95.
ETA & Goal Forecasting
"As a campaign director, I want ETAs and hit/miss predictions so that I can adjust staffing before the deadline."
Description

Calculate rolling-velocity ETAs and hit/miss predictions for each goal based on recent completion rates and remaining time. Display time-to-goal, likelihood bands (e.g., on track, at risk, off track) using color states, and update continuously as new actions arrive. Allow configurable lookback windows (e.g., last 5–15 minutes) and smoothing to reduce whiplash. Respect campaign deadlines and pauses, exclude test data, and surface clear messaging when velocity is insufficient or data is stale. Expose forecast data to the ring UI and big-board for immediate decision-making.
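
A simplified version of the rolling-velocity calculation described above: count qualifying actions in the lookback window, optionally smooth against the previous velocity to reduce whiplash, and project the remaining work forward. The smoothing factor and field names are illustrative assumptions, not the shipped algorithm:

```typescript
interface ForecastInput {
  actionTimestamps: number[];  // epoch ms of qualifying actions (test data excluded)
  remaining: number;           // actions still needed to hit the goal
  lookbackMinutes: number;     // configurable 5–15, default 10
  previousVelocity?: number;   // last computed actions/minute, used for smoothing
  now?: number;
}

interface Forecast {
  velocityPerMinute: number;
  etaTimestamp: number | null; // null when velocity is zero or data is insufficient/stale
}

function forecast(input: ForecastInput, smoothingAlpha = 0.3): Forecast {
  const now = input.now ?? Date.now();
  const windowMs = input.lookbackMinutes * 60_000;
  const recent = input.actionTimestamps.filter((t) => now - t <= windowMs);

  // Raw velocity over the lookback window, then exponentially smoothed
  // against the previous value to damp bursty arrival patterns.
  const raw = recent.length / input.lookbackMinutes;
  const velocity =
    input.previousVelocity === undefined
      ? raw
      : smoothingAlpha * raw + (1 - smoothingAlpha) * input.previousVelocity;

  if (recent.length < 3 || velocity <= 0) {
    return { velocityPerMinute: velocity, etaTimestamp: null };
  }
  const minutesToGoal = input.remaining / velocity;
  return { velocityPerMinute: velocity, etaTimestamp: now + minutesToGoal * 60_000 };
}
```
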

Acceptance Criteria
Recompute ETA on New Actions
Given a goal (team or station scoped) with a configured lookback window L minutes When a new qualifying supporter action is recorded Then the system recalculates rolling velocity using only actions within the last L minutes And updates time_to_goal and predicted_completion_timestamp within 2 seconds of the action timestamp And persists updated_at as an ISO-8601 timestamp And when no actions arrive, the forecast refreshes at least every 10 seconds
Likelihood Bands and Color States
Given a goal with a future deadline and a computed ETA When ETA <= deadline - max(5 minutes, 10% of the remaining time until deadline) Then likelihood_band = "on_track" and color_state = "green" When ETA > deadline OR ETA cannot be computed due to zero velocity Then likelihood_band = "off_track" and color_state = "red" When ETA <= deadline and is not "on_track" Then likelihood_band = "at_risk" and color_state = "amber" And the band and color are identical in ring UI and big-board displays
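
Read literally, the banding rule above can be expressed as a small classifier; a sketch using the stated thresholds (the 5-minute / 10% buffer comes straight from the criteria above):

```typescript
type Band = "on_track" | "at_risk" | "off_track";

function likelihoodBand(
  etaMs: number | null,   // predicted completion time (epoch ms), null if uncomputable
  deadlineMs: number,
  nowMs: number
): { band: Band; color: "green" | "amber" | "red" } {
  if (etaMs === null || etaMs > deadlineMs) {
    return { band: "off_track", color: "red" };
  }
  const remaining = deadlineMs - nowMs;
  const buffer = Math.max(5 * 60_000, 0.1 * remaining); // max(5 min, 10% of remaining time)
  if (etaMs <= deadlineMs - buffer) {
    return { band: "on_track", color: "green" };
  }
  return { band: "at_risk", color: "amber" };
}
```
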
Configurable Lookback Window and Smoothing
Given lookback_window is configurable per goal between 5 and 15 minutes inclusive (default 10) When an admin updates lookback_window to any value in range Then subsequent forecasts use the new value within 15 seconds of save Given smoothing is enabled When action arrival rate varies by less than ±5% over 10 minutes in a controlled test feed Then the mean absolute minute-to-minute change in ETA with smoothing enabled is at least 30% lower than with smoothing disabled over the same period
Deadline-Aware Forecasting and Pauses
Given a goal with an upcoming deadline When current_time >= deadline and goal progress < 100% Then ETA = null, time_to_goal = null, likelihood_band = "off_track", and a "Deadline passed" message is surfaced in both UIs Given a campaign or goal is set to Paused When the pause takes effect Then forecasts stop updating, and the UI surfaces "Paused" messaging And upon resume, the system recomputes forecasts within 5 seconds, excluding the paused interval from velocity calculations
Excluding Test Data from Forecasts
Given actions flagged as test=true or originating from designated test stations/users When these actions are ingested Then they are excluded from rolling velocity, ETA, time_to_goal, and likelihood calculations And injecting 100 test actions over 10 minutes results in no change to ETA, time_to_goal, or likelihood_band
Forecast Data in Ring UI and Big-Board
Given a forecast payload is exposed per goal (team or station) via API Then the payload includes: goal_id, eta_timestamp, time_to_goal_seconds, likelihood_band, color_state, rolling_velocity_per_minute, lookback_window_minutes, smoothing_enabled, updated_at And the ring UI and big-board consume the same payload And any change in these fields is rendered in both UIs within 2 seconds of update And color and numeric values are consistent across both UIs for the same goal_id
Insufficient or Stale Data Messaging
Given a lookback_window L When fewer than 3 qualifying actions exist in the last L minutes OR the last qualifying action is older than L minutes Then ETA and time_to_goal are null in the API, a message "Insufficient or stale data" is surfaced in ring UI and big-board, and the previous likelihood_band value is retained And when new qualifying actions meet the threshold, the message is removed and forecasts appear within 5 seconds
Milestone Chimes & Alerts
"As a field captain, I want milestone chimes when we cross thresholds so that the room stays energized without me watching the screen."
Description

Trigger configurable audio/visual cues when milestones are reached at station and team levels (e.g., 25/50/75/100%, custom thresholds). Provide a library of short, preloaded sounds, per-device volume controls, and a do-not-disturb mode. Guarantee exactly-once firing per milestone with debouncing and replay protection across reconnects. Offer optional notifications to external channels (e.g., Slack webhook) without exposing PII. Ensure accessibility-friendly cues (visual flash with sufficient contrast and captions) and allow organizers to toggle chimes per station/team and per session.
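
One way to honor the exactly-once and no-PII constraints described above is to key each milestone on (campaign, scope, threshold) and attach an idempotency key to the outbound notification. The field names and payload shape below are illustrative assumptions, not the production service:

```typescript
interface MilestoneEvent {
  idempotencyKey: string;   // e.g., `${campaignId}:${scopeId}:${threshold}`
  campaignId: string;
  scopeId: string;          // team or station ID (no supporter data)
  threshold: number;        // 25, 50, 75, 100, or a custom value
  reachedAt: string;        // ISO 8601 timestamp
}

const fired = new Set<string>(); // replay protection across reconnects (persist in practice)

// Returns the event to emit, or null if this milestone already fired.
function crossMilestone(
  campaignId: string,
  scopeId: string,
  threshold: number,
  percent: number
): MilestoneEvent | null {
  if (percent < threshold) return null;
  const key = `${campaignId}:${scopeId}:${threshold}`;
  if (fired.has(key)) return null; // debounce duplicates and re-crossings
  fired.add(key);
  return {
    idempotencyKey: key,
    campaignId,
    scopeId,
    threshold,
    reachedAt: new Date().toISOString(),
  };
}

// Outbound webhook body: IDs, percentage, timestamp, and an idempotency key only; no PII.
function webhookBody(e: MilestoneEvent): string {
  return JSON.stringify({
    text: `Milestone reached: ${e.threshold}% (campaign ${e.campaignId}, scope ${e.scopeId}) at ${e.reachedAt}`,
    idempotency_key: e.idempotencyKey,
  });
}
```
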

Acceptance Criteria
Exactly-Once Station Milestone Chime
Given a station has chimes enabled and a 50% milestone configured, When incoming progress events cause the station to cross 50% any number of times due to rapid updates, corrections, or duplicate events, Then exactly one audio cue and one visual cue are emitted on that station. Given multiple clients are connected to the same station view, When the 50% milestone is reached, Then the cue plays once across all clients and a single milestone event is logged with a unique ID. Given the station disconnects and reconnects shortly after emitting the cue, When the client resumes, Then the milestone cue is not replayed. Given progress dips below 50% after the cue, When it later crosses 50% again, Then no additional cue is emitted unless the milestone is explicitly reset by an organizer.
Team Milestones with Custom Thresholds
Given a team has default milestones at 25/50/75/100% and a custom threshold at 60%, When team progress crosses each threshold for the first time, Then a cue is emitted once per threshold. Given a custom threshold is removed before being reached, When progress crosses the removed percentage, Then no cue is emitted for that threshold. Given thresholds are edited, When changes are saved, Then they apply only to subsequent progress and do not retro-trigger cues for past milestones. Given station-level thresholds differ from team-level thresholds, When a station crosses its own threshold, Then station-level cues fire independently of team-level cues.
Per-Device Volume and Do-Not-Disturb
Given a device is viewing a station or team dashboard, When the user sets chime volume to X% (0–100), Then subsequent audio cues on that device play at X% without affecting other devices. Given Do-Not-Disturb is enabled on a device, When any milestone is reached, Then no audio plays on that device while visual cues remain active. Given the app is reloaded on the same device, When settings are persisted, Then the last-set volume and DND state are restored. Given chimes are disabled for the station/team by an organizer, When a milestone is reached, Then no audio plays on any device regardless of individual volume or DND.
Accessibility-Compliant Visual Cues
Given a milestone cue is emitted, When the visual flash displays, Then the contrast ratio between foreground and background is at least 4.5:1. Given the visual cue appears, When assistive technology is active, Then a caption with station/team name, milestone percentage, and timestamp is exposed via an ARIA live region within 1 second. Given the visual effect animates, When measured, Then flashing does not exceed 3 times per second and total animation duration is under 1.5 seconds. Given OS/browser reduced-motion is enabled, When the milestone occurs, Then a non-flashing, reduced-motion alternative is shown.
Slack Webhook Notifications Without PII
Given a Slack webhook is configured for a team, When a milestone is reached, Then a POST is sent within 5 seconds over HTTPS containing only campaign ID, team/station ID, milestone percentage, and ISO 8601 timestamp (no PII). Given internal delivery guarantees exactly-once per milestone, When retries cause duplicate attempts, Then the webhook payload includes an idempotency key and downstream receivers can deduplicate to avoid duplicate messages. Given the webhook endpoint returns 5xx, When retries are attempted, Then up to 3 exponential backoff retries occur without replaying local audio/visual cues and without adding PII. Given webhooks are disabled, When milestones occur, Then no outbound requests are made.
Organizer Chime Toggles per Station, Team, and Session
Given an organizer toggles chimes off for a specific station, When milestones occur for that station, Then no audio or visual cues are emitted for that station on any device until toggled back on. Given an organizer toggles chimes off for a team, When team-level milestones occur, Then team-level cues are suppressed without affecting station-level cues unless those are also toggled off. Given a session-level mute is activated, When the session ends or the organizer signs out, Then chime settings revert to the previously saved defaults. Given chime toggles are changed, When another organizer opens settings, Then the current on/off state is immediately reflected for the relevant station/team.
Preloaded Sound Library and Selection
Given the chime sound library is available, When an organizer opens sound settings, Then at least 8 distinct preloaded chime sounds (0.2–2.0 seconds each) are listed with preview controls. Given a sound is previewed, When the preview is played, Then it plays at the device's set volume without emitting a milestone event. Given a default sound is selected for a station or team, When settings are saved, Then subsequent cues use the selected sound and fall back to a default if the asset fails to load. Given custom audio uploads are not supported, When a user attempts to upload a file, Then the UI blocks the action and instructs the user to select from the library.
Big-Board Projection Mode
"As an organizer, I want a big-board projection with read-only share links so that I can display progress safely on any screen."
Description

Provide a full-screen, read-only display that aggregates rings for selected teams and stations with large typography, high contrast, and dark/light modes for venue lighting. Support auto-rotation between groups, pinning favorites, and on-screen ETAs and goal statuses. Include remote-safe sharing via expiring, tokenized URLs with no PII, automatic reconnect, and a low-bandwidth mode for spotty Wi-Fi. Add keyboard controls and a simple touch remote (prev/next/refresh), and ensure 60fps performance on common streaming sticks and older laptops.
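
The expiring, tokenized share link described above can be validated with nothing more than an opaque token, a TTL, and a revocation check. This sketch assumes a hypothetical in-memory token store and URL shape rather than RallyKit's actual backend:

```typescript
interface ShareToken {
  token: string;     // opaque; no team names, emails, or station IDs appear in the URL
  boardId: string;   // resolved server-side, never exposed in the link
  expiresAt: number; // epoch ms; default TTL 24h
  revoked: boolean;
}

const tokens = new Map<string, ShareToken>(); // stand-in for a real datastore

function createShare(boardId: string, ttlHours = 24): string {
  const token = crypto.randomUUID(); // opaque, unguessable identifier
  tokens.set(token, {
    token,
    boardId,
    expiresAt: Date.now() + ttlHours * 3_600_000,
    revoked: false,
  });
  return `https://example.rallykit.app/board?token=${token}`; // hypothetical URL shape
}

// Read-only resolution: expired or revoked tokens yield a 403-style denial.
function resolveShare(token: string): { status: 200; boardId: string } | { status: 403 } {
  const record = tokens.get(token);
  if (!record || record.revoked || Date.now() > record.expiresAt) {
    return { status: 403 };
  }
  return { status: 200, boardId: record.boardId };
}

function revokeShare(token: string): void {
  const record = tokens.get(token);
  if (record) record.revoked = true; // prior URLs become invalid on the next request
}
```
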

Acceptance Criteria
Full-Screen Big-Board Display Rendering
- Given Big-Board mode is launched, when Full Screen is toggled, then the display occupies 100% of the viewport without browser chrome on latest Chrome, Firefox, Safari, and Edge. - Given the read-only display, when users click/tap on any ring or control, then no state-changing action occurs and the interaction is ignored (no modals, no navigation) and the cursor auto-hides after 5 seconds of inactivity. - Given dark and light modes, when the mode is switched, then the theme applies within 300ms and maintains a minimum contrast ratio of 7:1 for primary numerals/labels and 4.5:1 for secondary text. - Given selected teams/stations are configured, when the board loads, then only rings for those selections render and their totals match the dashboard API exactly (0 variance) within 1s of an update in normal mode and within 5s in low-bandwidth mode. - Given 1080p output, when viewed at default scale, then primary ring numerals render at ≥120px and scale responsively without truncation at 720p–2160p.
Group Auto-Rotation and Favorites Pinning
- Given ≥2 groups are selected, when auto-rotation is enabled, then the view advances to the next group at the configured interval (default 10s; adjustable 5–60s) and loops. - Given one or more groups are pinned, when rotation runs, then pinned groups appear every cycle with 2x dwell time relative to unpinned groups. - Given manual input (Prev/Next), when invoked, then the view changes within 200ms and the rotation timer resets. - Given only one group is selected or auto-rotation is disabled, when the board runs, then no automatic cycling occurs. - Given a reconnect event occurs mid-cycle, when the connection is restored, then rotation resumes at the correct next index without skipping pinned groups.
On-Screen ETAs and Goal Statuses
- Given live action data, when computing ETA, then the system uses a moving average of the last 10 minutes of velocity; if <3 events in that window, then display “Insufficient data” instead of an ETA. - Given a goal exists, when rendering each ring, then show one of: On Track (ETA ≤ deadline), At Risk (ETA within 10% beyond deadline), Behind (ETA >10% beyond), or Goal Reached (current ≥ goal) with corresponding icon/color legend. - Given new actions are logged, when the velocity changes, then ETAs and statuses update on-screen within 10s in normal mode and 30s in low-bandwidth mode. - Given time formatting, when ETAs are shown, then they display in organizer’s timezone and round to the nearest minute with a ±1 minute tolerance across refreshes. - Given a goal changes, when the board receives the update, then status recalculates and reflects the new goal within 5s.
Remote-Safe Tokenized Sharing (No PII)
- Given a share is created, when a URL is generated, then it contains only an opaque token (no team names, emails, station IDs, or other PII in path/query) and uses HTTPS. - Given the token TTL (default 24h) elapses, when the URL is visited, then access is denied with a 403 and a safe error screen; no data is rendered. - Given a share is revoked, when revocation is saved, then the prior URL becomes invalid within 5s and returns 403. - Given a tokenized URL is used, when API requests are made, then only read-only endpoints are accessible and any write attempt returns 401/403. - Given telemetry is captured, when a shared board is viewed, then logs contain only token and coarse device metadata (no IP stored beyond aggregate, no PII fields present in payloads).
Automatic Reconnect and Low-Bandwidth Mode
- Given network loss, when connectivity drops, then the board shows a small “Reconnecting…” badge within 5s, retains last-known data with a “Stale” indicator, and retries with exponential backoff (max 30s) and jitter. - Given connectivity is restored, when a retry succeeds, then live updates resume within 3s and the “Stale” indicator clears. - Given constrained conditions (avg downstream <150 kbps or >10% packet loss for 30s), when detected, then low-bandwidth mode auto-enables: animations disabled, update cadence ≤1 per 5s, assets compressed, and only deltas fetched. - Given conditions improve (throughput ≥300 kbps and packet loss <5%) for 2 minutes, when monitored, then the board automatically returns to normal mode. - Given a query flag lb=1 is present, when loading, then low-bandwidth mode is forced regardless of network heuristics.
Keyboard and Touch Remote Controls
- Given Big-Board is focused, when Left/Right Arrow keys are pressed, then the board navigates Prev/Next group within 150ms. - Given rotation is running, when Space or P is pressed, then rotation toggles pause/resume and the state is indicated on-screen. - Given R is pressed, when handling input, then data refreshes immediately and the timestamp updates. - Given a touch device, when the 3-button overlay is revealed, then single taps on Prev/Next/Refresh perform the same actions with <200ms latency; buttons are at least 44x44pt. - Given any control is actuated repeatedly, when inputs occur within 250ms, then actions are debounced to a single navigation/refresh.
60fps Performance on Low-End Devices
- Given a 1080p display on a streaming stick or older laptop (≥1GB RAM, dual-core ≥1.3GHz), when auto-rotation and standard animations run for 10 continuous minutes, then average frame rate ≥60fps with ≤2% dropped frames. - Given first load, when assets are cached, then time-to-first-render ≤2s on warm start and ≤5s on cold start. - Given ongoing operation, when monitored, then memory usage stays ≤200MB and sustained CPU ≤60% on the stated devices. - Given a group transition animation, when it plays, then per-frame render time stays ≤16ms and no input event experiences >100ms latency. - Given low-bandwidth mode, when enabled, then frame rate remains ≥60fps with animations disabled and no visual tearing/jank.
Leaderboard & Friendly Competition
"As a team lead, I want leaderboards that reward both speed and consistency so that competition is motivating and fair."
Description

Introduce leaderboards that rank teams and stations by progress percentage, absolute completions, and current velocity, with fair tie-breakers and normalization for differently sized teams. Display badges, streaks, and celebratory animations when goals are hit, with controls to limit visual noise on the big-board. Allow opt-in participation and filters to exclude training/test stations. Reset standings per campaign and preserve historical snapshots for post-event debriefs. Integrate with existing permissions so only authorized roles can modify leaderboard settings.
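
As one way to picture the normalization and tie-breaking described above (normalized velocity first, then earliest timestamp, then name), a progress-percentage comparator might look like the following; the entry shape is hypothetical:

```typescript
interface LeaderboardEntry {
  name: string;
  goal: number;
  completions: number;            // within the active campaign window
  completionsLast10Min: number;
  activeParticipantsLast10Min: number;
  reachedCurrentPctAt: number;    // epoch ms when the current progress % was first reached
}

function progressPct(e: LeaderboardEntry): number {
  return Math.min(100, (e.completions / e.goal) * 100);
}

// Velocity normalized by team size; zero active participants reports 0
// rather than dividing by zero.
function normalizedVelocity(e: LeaderboardEntry): number {
  return e.activeParticipantsLast10Min === 0
    ? 0
    : e.completionsLast10Min / e.activeParticipantsLast10Min;
}

// Sort for the Progress % leaderboard: highest percentage first, then tie-breakers.
function byProgress(a: LeaderboardEntry, b: LeaderboardEntry): number {
  return (
    progressPct(b) - progressPct(a) ||
    normalizedVelocity(b) - normalizedVelocity(a) ||
    a.reachedCurrentPctAt - b.reachedCurrentPctAt || // earlier timestamp wins
    a.name.localeCompare(b.name)
  );
}

// Usage: entries.sort(byProgress)
```
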

Acceptance Criteria
Rank Teams and Stations by Progress Percentage
Given a campaign with teams and stations each having defined goals And live completions recorded against those goals When the leaderboard is set to sort by Progress % Then entries are ordered by highest progress percentage to lowest And ties are broken by higher current normalized velocity, then earlier timestamp reaching the tied percentage, then alphabetical name And progress percentage is computed as total completions divided by goal, capped at 100% And entries with no goal are excluded from the Progress % leaderboard And the sort order remains stable when all tie-breakers are equal
Absolute Completions Leaderboard with Fair Tie-breakers
Given teams and stations with varying completion counts When the leaderboard is set to sort by Absolute Completions Then entries are ordered by highest total completions to lowest And ties are broken by higher current normalized velocity, then earlier timestamp of last completion, then alphabetical name And totals include only actions within the active campaign window And the sort order remains stable when all tie-breakers are equal
Velocity Ranking with Team-Size Normalization
Given the system computes current velocity as completions in the last 10 minutes divided by the count of active participants in the last 10 minutes When the leaderboard is set to sort by Velocity Then entries are ordered by highest normalized velocity to lowest And ties are broken by higher Progress %, then higher Absolute Completions, then alphabetical name And entries with zero active participants in the last 10 minutes have velocity reported as 0 And the velocity window and participant activity threshold are configurable at the campaign level
Opt-in Participation and Exclusion Filters
Given some teams and stations have opted in to appear on the leaderboard and others have not And some stations are flagged as Training/Test When the leaderboard is displayed with default filters Then only opted-in, non-Training/Test entries appear And underlying action counts still accrue for all entries regardless of opt-in or Training/Test status When a user applies the filter to include Training/Test entries Then flagged entries appear and are clearly labeled And selected filters persist for the user across sessions within the same campaign
Badges, Streaks, and Celebrations with Noise Controls
Given milestone thresholds at 25%, 50%, 75%, and 100% of goal per entry When an entry crosses a threshold for the first time in a campaign Then the corresponding badge is awarded and recorded once And a celebratory animation and chime are triggered only if Big-Board Noise Level permits it Given a streak is defined as maintaining a top-3 rank for 10 consecutive minutes When an entry achieves a streak Then a streak badge is displayed and logged When Big-Board Noise Level is set to Quiet Then milestone animations are suppressed and chimes are muted, but badges still update When set to Minimal Then only a short confetti at 100% and a single chime for new #1 are shown, with a cooldown of 60 seconds between celebrations
Per-Campaign Reset and Historical Snapshots
Given a new campaign is started or standings are manually reset by an authorized user When the reset occurs Then all leaderboard metrics (Progress %, Absolute Completions, Velocity, badges, streaks) return to zero or baseline for that campaign And historical data from prior campaigns remains unchanged When a campaign ends Then a read-only snapshot is stored with final ranks, metrics, badges, and timestamps And the snapshot can be retrieved and exported without being altered by subsequent actions
Permissions for Leaderboard Settings
Given existing roles include Org Admin and Campaign Lead as authorized roles for leaderboard settings When an authorized user attempts to modify leaderboard settings (metrics shown, noise level, opt-in status, filters default) Then the change succeeds and is audit-logged with user, timestamp, and diff When an unauthorized user attempts the same via UI or API Then the attempt is blocked, UI controls are disabled or hidden, and API returns 403 with no side effects And view-only access to the big-board is allowed for users with Viewer permissions

Status Sync

Continuously pulls official bill status changes (introduced, referred, hearing scheduled, floor vote, conference, signed/vetoed) and updates every linked action page and script in real time. Eliminates manual checks, prevents outdated asks, and keeps your campaign accurate across channels without extra work.

Requirements

Canonical Bill Status Integrations
"As a campaign manager, I want RallyKit to reliably ingest bill status from official sources so that my campaign updates are accurate without manual checks."
Description

Connect to official legislative data sources (e.g., state capitols and Congress) via APIs, webhooks, or approved feeds to continuously ingest bill status events. Normalize provider-specific statuses to a RallyKit canonical status model (introduced, referred, hearing scheduled, floor vote, conference, signed, vetoed), resolve bill identifiers across sources, and deduplicate events. Enforce rate limits, handle authentication and key rotation, and provide per-connector health checks. Emit a consistent event payload with timestamps and source provenance for downstream processing.
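
To make the normalization step concrete, here is a minimal mapping from provider-specific labels to the canonical model, plus the kind of provenance-carrying payload the description calls for. The provider strings and field list are illustrative, not the production schema:

```typescript
type CanonicalStatus =
  | "introduced" | "referred" | "hearing_scheduled"
  | "floor_vote" | "conference" | "signed" | "vetoed";

// Case- and whitespace-insensitive lookup of provider labels (illustrative entries).
const STATUS_MAP: Record<string, CanonicalStatus> = {
  "introduced": "introduced",
  "referred to committee": "referred",
  "hearing scheduled": "hearing_scheduled",
  "passed floor vote": "floor_vote",
  "in conference": "conference",
  "signed by governor": "signed",
  "vetoed": "vetoed",
};

function normalizeStatus(sourceStatus: string): CanonicalStatus | null {
  const key = sourceStatus.trim().toLowerCase().replace(/\s+/g, " ");
  return STATUS_MAP[key] ?? null; // null -> route to the dead-letter queue ("unmapped_status")
}

// Emitted event with timestamps and source provenance (a subset of the full field list).
interface StatusEvent {
  eventId: string;
  billId: string;
  canonicalStatus: CanonicalStatus;
  sourceStatus: string;
  provider: string;
  occurredAt: string;  // RFC 3339 UTC
  receivedAt: string;
  dedupKey: string;    // e.g., `${billId}:${canonicalStatus}:${occurredAt}`
}
```
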

Acceptance Criteria
Establish Authenticated Connections to Official Sources
Given a provider requiring an API key stored in a secrets manager When the connector initializes Then it authenticates successfully without logging the secret Given an OAuth provider with expiring access tokens When the token is within 5 minutes of expiry Then the connector refreshes the token proactively without dropping requests Given an API key rotation in the secrets manager When rotation occurs Then the connector begins using the new key within 60 seconds and stops using the old key Given an authentication failure from the provider When retries are attempted Then exponential backoff with jitter is applied up to 5 attempts and the circuit opens for 60 seconds after the final failure Given a transient network outage When connectivity is restored Then the connector resumes ingestion from the last durable checkpoint with no data loss
Multi-Source Status Normalization to Canonical Model
Given fixture events from multiple providers for statuses introduced, referred, hearing scheduled, floor vote, conference, signed, vetoed When ingested Then each source status maps to exactly one canonical status Given an unknown or unmapped source status When ingested Then the event is routed to a dead-letter queue with reason "unmapped_status" and is not emitted downstream Given a source event contains both status and substatus When normalized Then the configured precedence rule is applied and recorded as mapping_version in the event metadata Given variations in case, spacing, or language-specific labels When normalized Then mapping is case-insensitive and whitespace-insensitive and produces the same canonical status Given a mapping table update is deployed When new events arrive Then the new mapping_version appears in emitted events without connector restart
Bill Identifier Resolution Across Sources
Given two providers reference the same bill with different identifiers When ingested Then a single internal bill_id is assigned and both source records link to it Given potential ambiguous matches across bills When resolving Then deterministic tie-breakers use jurisdiction > official source priority > recency and the chosen linkage is logged with rationale Given a new legislative session begins When resolving identifiers Then the session dimension is required and prevents cross-session collisions Given an event is emitted downstream When inspected Then it includes jurisdiction, session, chamber, bill_number, official_source_id (if available), and normalized bill_id Given the alias table is updated with a new crosswalk When updated Then subsequent events reflect the new linkage within 5 minutes
Event Deduplication Within and Across Providers
Given the same status event is received multiple times from one provider When processed Then only one downstream event is emitted per bill_id+canonical_status+occurred_at Given two providers send the same status for the same bill within a 2-minute timestamp skew When processed Then a single event is emitted using the earliest occurred_at and the dedup_key reflects both sources Given a webhook retry delivers the same source_event_id When processed Then the operation is idempotent and no duplicate events are emitted Given a provider issues a correction with a new occurred_at for a previously emitted status When processed Then a new event with action="correction" is emitted and the prior event is marked superseded in metadata
Provider Rate Limiting and Backoff Compliance
Given the provider returns HTTP 429 with a Retry-After header When received Then the connector waits the specified duration before retrying and sends no requests during the wait Given a provider publishes a hard limit of 5 QPS When running for 5 minutes Then observed request rate stays ≤ 5 QPS with zero 429 responses Given repeated transient failures When retrying Then exponential backoff with jitter is applied and maximum retry time per batch does not exceed 15 minutes before re-queuing Given ingestion backlog exceeds 1000 events When throttling Then polling is reduced while webhook processing is prioritized until backlog drops below 200
Per-Connector Health Checks and Telemetry
Given a GET /health/connectors/{id} request When the connector is operating normally Then the response status is up and includes last_success_at, last_error_at=null, queue_lag_seconds ≤ 30 Given authentication failures exceed a threshold of 3 in 5 minutes When queried Then the health status is degraded and last_error contains the latest auth error code and message Given no successful fetch for more than 5 minutes When queried Then the health status is down and the aggregate health endpoint reflects down for this connector Given a transition to down occurs When detected Then a notification is sent to the monitoring channel including connector_id and reason within 60 seconds Given the connector restarts When observed Then uptime resets and restart_count increments by 1 in the health payload
Emit Consistent Event Payload With Provenance
Given a normalized status event is emitted When inspected Then the payload contains event_id, bill_id, jurisdiction, session, chamber, bill_number, canonical_status, source_status, provider, occurred_at, received_at, emitted_at, source_event_id, dedup_key, mapping_version, provenance.connector_id, provenance.source_url, and signature Given event timestamps When validated Then emitted_at ≥ received_at ≥ occurred_at and all timestamps are RFC3339 UTC Given the schema registry enforces v1.0 for status events When publishing Then events conform to the schema and incompatible changes are rejected Given a burst of 1000 events in 60 seconds When processed Then the 95th percentile latency from received_at to emitted_at is ≤ 60 seconds with zero schema validation failures Given a consumer replays from an earlier offset When replayed Then the same event_id and dedup_key are observed for previously emitted events
Real-Time Sync Engine
"As an organizer, I want status changes to update my campaign assets immediately so that supporters always see the latest ask."
Description

Event-driven processing service that detects new or changed bill status events and propagates updates to all linked entities (action pages, scripts, automations) within 60 seconds. Supports idempotent processing, deduplicated replays, exponential backoff retries, and dead-letter queues. Guarantees at-least-once delivery with ordering per bill, and exposes metrics for lag, throughput, and failure rates.
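
A toy version of the processing loop described above: events are de-duplicated by ID, applied serially per bill (in arrival order; reordering out-of-order deliveries is left out here), retried with exponential backoff on failure, and dead-lettered once the retry budget is exhausted. The shapes and retry numbers mirror the criteria in this section, but the structure is a sketch, not the production engine:

```typescript
interface SyncEvent {
  eventId: string;
  billId: string;
  status: string;
}

type Apply = (event: SyncEvent) => Promise<void>; // updates linked pages/scripts/automations

const processed = new Set<string>();      // idempotency ledger (persist in practice)
const deadLetter: SyncEvent[] = [];
const perBillQueues = new Map<string, Promise<void>>(); // serializes work per bill

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function applyWithRetry(event: SyncEvent, apply: Apply): Promise<void> {
  if (processed.has(event.eventId)) return; // duplicate delivery: no side effects
  const baseDelayMs = 1000;
  for (let attempt = 0; attempt <= 5; attempt++) {
    try {
      await apply(event);
      processed.add(event.eventId);
      return;
    } catch {
      if (attempt === 5) {
        deadLetter.push(event);             // exhausted retries -> dead-letter queue
        return;
      }
      const jitter = 0.8 + Math.random() * 0.4;         // roughly ±20% jitter
      await sleep(baseDelayMs * 2 ** attempt * jitter); // ~1s, 2s, 4s, 8s, 16s
    }
  }
}

// Enqueue preserves ordering per bill while letting different bills run concurrently.
function enqueue(event: SyncEvent, apply: Apply): Promise<void> {
  const prior = perBillQueues.get(event.billId) ?? Promise.resolve();
  const next = prior.then(() => applyWithRetry(event, apply));
  perBillQueues.set(event.billId, next);
  return next;
}
```
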

Acceptance Criteria
Status Change Propagates to Linked Entities Within 60 Seconds
Given a tracked bill (bill_id=B1) with at least one linked action page, script, and automation, and the sync engine is running When an official status change event E updates B1 from "referred" to "hearing scheduled" and E is ingested at time t_ingest Then all linked action pages, scripts, and automations reflect "hearing scheduled" within 60 seconds of t_ingest And no linked entity displays the previous status after t_ingest + 60 seconds And a processing record for E is persisted with status=success and timestamps for ingestion and completion
Idempotent Processing and Deduplicated Replays
Given a previously processed event E with event_id=UUID1 for bill B1 When E is redelivered N times (N>=1) or a semantically duplicate event with identical (bill_id, status, version) arrives Then the engine performs no duplicate side effects (no additional writes, no extra notifications, no version increments) And the processing ledger records a single completed entry for UUID1 And metrics count redeliveries without increasing successful update count
Per-Bill Ordering with At-Least-Once Delivery
Given two status events E1 then E2 for the same bill B1 where E1.status="introduced" and E2.status="referred" And E2 arrives before E1 When the engine processes these events Then E1 is applied before E2 for bill B1 And for different bills (e.g., B2, B3) events may process concurrently And if any event is delivered more than once, the update is still applied at least once without breaking order for B1
Exponential Backoff Retries and Dead-Letter Handling
Given retry configuration base_delay=1s, backoff_factor=2, max_retries=5, jitter enabled And an update to a linked entity fails with a transient error When the engine retries Then subsequent retry delays approximate 1s, 2s, 4s, 8s, 16s with jitter up to ±20% And upon a permanent error or after 5 failed attempts, the event is moved to a dead-letter queue with bill_id, event_id, last_error, attempt_count And dead-lettered events are excluded from the primary processing stream until manually handled And metrics expose retry_attempts_total and dead_letter_total reflecting these outcomes
Operational Metrics Exposed for Lag, Throughput, and Failures
Given the engine processes 100 status events with 98 successes and 2 failures over a 60-second window When the metrics endpoint is scraped Then metrics include processing_lag_seconds (gauge) per bill stream, events_throughput_per_minute (rate), and processing_failures_total (counter) And reported values reflect the run (lag <= 60s for all processed events, throughput ≈ 100/min, failures_total increases by 2)
Isolation: Only Entities Linked to the Changed Bill Are Updated
Given two bills B1 and B2 with distinct linked entities When a status change event is processed for B1 Then only entities linked to B1 are updated And entities linked to B2 remain unchanged and unmodified in audit logs
Crash/Restart Recovery Without Data Loss
Given the engine ingests an event E for bill B1 and crashes before acknowledging completion When the engine restarts Then it resumes from the last committed offset and reprocesses E And the outcome reflects at-least-once semantics with idempotency preventing duplicate side effects And per-bill ordering is preserved across the crash for subsequent events
Status-Aware Script Templating
"As a field director, I want scripts to adapt to each bill’s current status so that supporters get precise, timely talking points."
Description

Versioned templating system that renders district-specific call and email scripts based on canonical bill status, chamber, and the supporter’s legislator mapping. Supports conditional blocks (e.g., if hearing scheduled then include date/location), variable placeholders (bill number, sponsor, committee, vote time), and localization. Provides preview per status and automatic re-render when status changes, with safe fallback text if required data is missing.
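
A stripped-down rendering pass in the spirit of the description above: substitute placeholders, include the conditional hearing block only when the bill is in that status, and fall back to safe text when a required value or legislator mapping is missing. Placeholder names match the criteria below; everything else is an assumption:

```typescript
interface BillContext {
  bill_number: string;
  sponsor?: string;
  hearing?: { date: string; time: string; committee: string; location: string };
}

// Replace {placeholder} tokens; any value missing at render time gets a safe
// fallback so no unresolved {...} tokens ever reach the rendered script.
function fill(template: string, values: Record<string, string | undefined>): string {
  return template.replace(/\{([^}]+)\}/g, (_, key: string) => values[key] ?? "(details pending)");
}

function renderCallScript(status: string, ctx: BillContext, legislatorName?: string): string {
  const intro = fill(
    "Hi, I'm a constituent calling about {bill_number}, sponsored by {sponsor}.",
    { bill_number: ctx.bill_number, sponsor: ctx.sponsor }
  );
  let hearingBlock = "";
  if (status === "hearing_scheduled") {
    hearingBlock = ctx.hearing
      ? ` There is a hearing on ${ctx.hearing.date} at ${ctx.hearing.time} before the ${ctx.hearing.committee} at ${ctx.hearing.location}.`
      : " A hearing is scheduled; details are pending."; // safe fallback per the criteria below
  }
  const target = legislatorName ?? "your legislator"; // generic fallback when the mapping is missing
  return `${intro} Please ask ${target} to support the bill.${hearingBlock}`;
}
```
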

Acceptance Criteria
Auto Re-render on Bill Status Change
Given the canonical bill status changes from any state to a new state via Status Sync When the templating engine receives the status update event Then all linked call and email scripts for affected campaigns are re-rendered within 60 seconds And updated content is served on the next page load or API fetch for each action page And district-specific legislator mapping is applied during render for each supporter context And the previously published script version is retained in version history with a timestamped identifier
Conditional Hearing Details Block
Given the bill status is "Hearing Scheduled" and canonical data includes date, time, committee, and location When a call or email script is rendered Then a hearing details section is included with all available fields populated And if any of date, time, committee, or location is missing, a safe fallback sentence "A hearing is scheduled—details pending" is displayed instead of blank values And no unresolved placeholders (e.g., {hearing_date}) appear in the output And date/time values are formatted according to the campaign locale
Chamber- and Legislator-Aware Addressing
Given a supporter is mapped to a Representative and a Senator for their district When rendering a House-targeted action script Then the script uses House-specific terminology and addresses the mapped Representative by name and title And when rendering a Senate-targeted action script, the script uses Senate-specific terminology and addresses the mapped Senator by name and title And when an action targets both chambers, separate scripts are generated for each mapped legislator with correct addressing And if a legislator mapping is missing, the script uses a generic "your legislator" fallback and excludes unmapped targets
Localization Selection and Fallback
Given a campaign provides templates in English (en) and Spanish (es) and a supporter language preference is es When rendering call and email scripts Then the Spanish template variant is selected and rendered And variable content (e.g., dates, chamber titles) is localized for es And if a translation key is missing in es, that segment falls back to en while preserving the rest in es And if no supported language matches the supporter, the campaign default language is used
Per-Status Preview and Versioning in Editor
Given an admin user opens the template editor for a campaign When the admin selects a specific bill status (e.g., "Floor Vote") and clicks Preview with sample legislator and district Then the preview displays compiled call and email scripts with all placeholders resolved using sample data And when the admin saves changes, a new template version is created and labeled with an incremented version identifier and timestamp And the currently published version remains unchanged until the admin explicitly publishes the new version And the admin can toggle preview across statuses and versions without affecting live content
Placeholder Resolution and Safe Fallbacks
Given a template contains placeholders {bill_number}, {sponsor}, {committee}, {vote_time} and optional placeholders When rendering for any supporter context Then zero unresolved placeholders remain in the output (regex check matches none of /{[^}]+}/) And required variables missing at render time are replaced by configured fallback text blocks And if no fallback is configured, a standard safe default script is rendered instead of a partial or broken output And p95 render time per script is <= 200 ms under a templating service load of 100 RPS
Action Page Auto-Update & Cache Invalidation
"As a digital lead, I want action pages to reflect the latest status without manual edits so that conversion stays high and asks aren’t outdated."
Description

Automatically refresh action page content, metadata, and embeds when a linked bill's status changes. Invalidate CDN and in-app caches within 60 seconds, update social sharing metadata, and avoid interrupting users mid-action by applying updates on the next step or reload. Display a subtle “Updated just now” banner to build trust.
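
Conceptually, the 60-second invalidation described above is a purge keyed by the bill's surrogate keys with bounded retries; the `purgeByKey` callback and key scheme below are placeholders for whatever CDN API and tagging convention are actually in use:

```typescript
type PurgeFn = (surrogateKey: string) => Promise<void>; // wraps the real CDN purge call

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Purge every page tagged with the bill's surrogate key, retrying up to 3 times
// with exponential backoff before surfacing the failure for logging and alerting.
async function invalidateForBill(
  billId: string,
  pageIds: string[],
  purgeByKey: PurgeFn
): Promise<{ purged: string[]; failed: string[] }> {
  const purged: string[] = [];
  const failed: string[] = [];
  for (const pageId of pageIds) {
    const key = `bill:${billId}:page:${pageId}`; // hypothetical surrogate-key scheme
    let done = false;
    for (let attempt = 0; attempt < 4 && !done; attempt++) {
      try {
        await purgeByKey(key);
        done = true;
      } catch {
        if (attempt < 3) await sleep(500 * 2 ** attempt); // 0.5s, 1s, 2s between retries
      }
    }
    (done ? purged : failed).push(pageId);
  }
  return { purged, failed }; // failures are logged and alerted per the criteria below
}
```
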

Acceptance Criteria
Real-time Content and Script Refresh on Bill Status Change
Given a bill is linked to one or more action pages and scripts When the bill status changes via Status Sync (e.g., Hearing Scheduled → Floor Vote) Then within 60 seconds, all linked action pages render content (title/ask), district-specific scripts, and status-dependent CTAs based on the new status on first render or reload And subsequent GET requests return updated ETag/Last-Modified values reflecting the new content And the pre-update content is no longer served from any cache after 60 seconds And all pages linked to the bill (100% of known links) reflect the new status in a single propagation cycle
CDN and In‑App Cache Invalidation Within 60 Seconds
Given content for affected pages is cached at the CDN and application layers When a bill status update event is processed Then CDN objects tagged with the bill/page surrogate keys are purged and in‑app caches invalidated within 60 seconds of event time And purge/invalidation operations return 2xx responses and are recorded with IDs And on first post-purge request, the CDN shows a miss and fetches fresh content automatically And failed purge attempts are retried up to 3 times with exponential backoff, with final failure logged and alerted
Non‑Disruptive Updates During Active User Sessions
Given a supporter is actively completing an action (typing in a form or reading a call script) When a linked bill status update occurs Then the current page must not reload or swap visible content mid-interaction And updates are applied only on the next step transition (e.g., submit → thank-you) or on manual reload And no typed form data is lost; form fields retain values if the user navigates to the next step within the same session And call scripts shown during an ongoing call remain unchanged until the user advances or reloads And analytics show zero forced reload events attributable to status updates
Social Sharing Metadata Refresh
Given pages expose Open Graph and Twitter Card metadata When the bill status changes Then og:title, og:description, og:image (if status-dependent), and twitter:* equivalents are updated to match the new status within 60 seconds And the page returns updated ETag/Last-Modified for the HTML head And fetching the URL with Facebook Sharing Debugger or X Card Validator within 60 seconds returns the updated metadata And newly created link previews after the update display the refreshed text/image
Embeds and External Placements Update
Given an action page is embedded on external sites via iframe or JS snippet When the linked bill status changes Then the embed renders updated content and scripts on the next load without requiring host-site code changes And versioned, content-hashed asset URLs are emitted so that host/CDN caches are bypassed automatically And the embed endpoint supports ETag changes; a GET within 60 seconds returns a new ETag/Last-Modified And no mixed-version assets (CSS/JS) are served after 60 seconds
“Updated Just Now” Banner Behavior
Given a page load occurs after a bill status update is applied When the user views the page within 60 seconds of update completion Then a subtle, accessible banner displaying the text “Updated just now” is shown without causing cumulative layout shift > 0.05 And after 60 seconds the banner updates to a relative timestamp (e.g., “Updated 1 minute ago”) or auto-hides within 10 minutes And the banner is not displayed to users mid-action; it appears only on the next step or reload And the banner is dismissible via keyboard and screen reader accessible
Audit Logging and Admin Visibility of Updates
Given a bill status update triggers page/script refreshes and cache invalidations When the update cycle completes or fails Then an audit log entry is recorded with bill ID, previous/new status, affected page IDs/count, purge IDs, start/end timestamps, and total duration And the event appears in the admin dashboard within 60 seconds of completion And failures record error details and notify the on-call channel And audit entries are exportable (CSV/JSON) with UTC timestamps
Status Change Audit Trail
"As a compliance officer, I want a detailed audit trail of status changes so that I can prove what supporters saw and when."
Description

Maintain an immutable, queryable log of all ingested bill status events and resulting content updates. Record timestamp, source, previous and new canonical status, affected assets, and processing outcome. Provide a UI timeline, filters, CSV export, and one-click rollback to the previous script version when appropriate, with reason capture.
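
The audit record described above is essentially an append-only event with a fixed field set. A minimal sketch of that shape and an append-only store follows; the field names are borrowed from the criteria below, while the store itself is hypothetical:

```typescript
type ProcessingOutcome = "success" | "skipped" | "failed";

interface AuditEvent {
  event_id: string;
  timestamp_utc: string; // ISO 8601
  bill_id: string;
  source: string;
  previous_status: string;
  new_status: string;
  processing_outcome: ProcessingOutcome;
  affected_assets: Array<{
    asset_id: string;
    asset_type: "action_page" | "script";
    version_before: number;
    version_after: number | null;
  }>;
  correlation_id: string;
}

// Append-only: records can be added and read, never updated or deleted.
class AuditLog {
  private events: AuditEvent[] = [];

  append(event: AuditEvent): void {
    this.events.push(Object.freeze({ ...event }));
  }

  queryByBill(billId: string): AuditEvent[] {
    return this.events
      .filter((e) => e.bill_id === billId)
      .sort((a, b) => b.timestamp_utc.localeCompare(a.timestamp_utc)); // newest first
  }
}
```
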

Acceptance Criteria
Log Entry Completeness on Status Ingestion
Given a new official bill status event is ingested for a tracked bill with a known previous canonical status and a new canonical status When the Status Sync processor handles the event Then exactly one audit event record is created with a unique event_id And the record includes non-null fields: timestamp_utc (ISO 8601), bill_id, jurisdiction, source, previous_status (from canonical enum), new_status (from canonical enum), processing_outcome in {success, skipped, failed} And the record includes affected_assets as an array of {asset_id, asset_type in {action_page, script}, version_before, version_after (nullable)} for all impacted assets And the record includes correlation_id linking ingestion to content updates and provider_event_id when available And the record is queryable by bill_id and timestamp within 1 second of creation
Immutability and Append-Only Enforcement
Given an existing audit event record When any user or process attempts to update or delete the record via UI or API Then the operation is rejected with HTTP 403 and message "Audit events are immutable" And no changes are made to the stored record or indexes And an additional audit event of type "immutable_write_attempt" is appended capturing actor_id, attempted_operation, target_event_id, and timestamp And only creation (append) of new events is permitted; bulk deletion endpoints are not exposed
Timeline UI with Filters
Given at least 5,000 audit events exist across multiple bills and sources When the user opens the Audit Trail page Then events are displayed in reverse-chronological order with visible fields: localized timestamp, bill_id, previous_status→new_status, source, processing_outcome, and affected_assets_count And the user can filter by date range, bill_id, source, processing_outcome, status transition (e.g., introduced→referred), and asset_type And applying any single filter returns updated results within 2 seconds for up to 10,000 events; combined filters within 3 seconds And clearing filters resets the view to the default timeline And expanding an event reveals details including the list of affected assets with version_before and version_after
CSV Export of Filtered Audit Events
Given the user has applied any set of filters to the Audit Trail When the user clicks "Export CSV" Then a CSV is generated containing exactly the filtered result set with headers: event_id, timestamp_utc, bill_id, jurisdiction, previous_status, new_status, source, processing_outcome, affected_asset_ids, affected_asset_types, version_before, version_after, correlation_id, error_code, error_message, actor_id, rollback_of_event_id And files with ≤10,000 rows download synchronously within 10 seconds; larger exports up to 100,000 rows run asynchronously and complete within 5 minutes with a downloadable link/notification And the CSV is UTF-8 encoded, comma-delimited, RFC 4180 compliant, using ISO 8601 UTC timestamps And generating an export is itself logged as an audit event with actor_id and row_count
One-Click Rollback to Previous Script Version with Reason
Given an audit event that updated one or more script assets is selected When the user clicks "Rollback" and enters a free-text reason of at least 10 characters and confirms Then the system validates eligibility per asset: a previous version exists and no newer edits have been applied since the selected event And if any assets are ineligible, the UI lists them with specific reasons and offers partial rollback of eligible assets And on success, the previous version is republished for each eligible asset within 60 seconds And a new audit event of type "rollback" is appended capturing actor_id, original_event_id, reason, affected_assets with outcome per asset, and timestamp And the timeline refreshes to display the rollback event and current script versions
Query Performance and Stable Pagination
Given 50,000+ audit events exist When the user paginates through the Audit Trail with a page size of 50 Then each page loads within 2 seconds with stable sort by timestamp_utc desc, event_id as tiebreaker And cursor-based pagination returns no duplicates or gaps even as new events are appended concurrently And changing filters or sort resets to the first page and preserves the selected page size
Transition Alerts & Subscriptions
"As a campaign owner, I want alerts when my bill hits pivotal stages so that I can coordinate rapid response actions."
Description

Configurable notifications for key status transitions (e.g., hearing scheduled, floor vote) via email, Slack, and in-app. Allow per-campaign subscriptions, quiet hours, digests versus instant alerts, and severity thresholds. Include deep links to affected pages and scripts.
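
The delivery decision below boils down to two checks: does the transition clear the subscriber's severity threshold, and do quiet hours hold it until the window ends? A sketch with an assumed severity ordering and a simplified hour-based quiet window:

```typescript
type Severity = "normal" | "important" | "critical";
const SEVERITY_RANK: Record<Severity, number> = { normal: 0, important: 1, critical: 2 };

interface Subscription {
  threshold: Severity;
  quietStartHour: number; // e.g., 21 for 21:00 in the subscriber's timezone
  quietEndHour: number;   // e.g., 8 for 08:00
}

function clearsThreshold(eventSeverity: Severity, sub: Subscription): boolean {
  return SEVERITY_RANK[eventSeverity] >= SEVERITY_RANK[sub.threshold];
}

// True when the given local hour falls inside quiet hours (handles overnight windows).
function inQuietHours(localHour: number, sub: Subscription): boolean {
  const { quietStartHour: start, quietEndHour: end } = sub;
  return start < end
    ? localHour >= start && localHour < end
    : localHour >= start || localHour < end; // window wraps past midnight
}

// "deliver" sends immediately; "hold" queues until quiet hours end; "skip" drops the alert.
function decide(
  eventSeverity: Severity,
  localHour: number,
  sub: Subscription
): "deliver" | "hold" | "skip" {
  if (!clearsThreshold(eventSeverity, sub)) return "skip";
  return inQuietHours(localHour, sub) ? "hold" : "deliver";
}
```
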

Acceptance Criteria
Per-Campaign Subscription Management
Given a user has access to Campaign A and Campaign B, When they open Notification Settings, Then they can independently toggle Transition Alerts on or off per campaign. Given the user disables alerts for Campaign A and enables alerts for Campaign B, When a Hearing Scheduled update is detected for both campaigns, Then the user receives alerts only for Campaign B. Given a user removes their subscription from Campaign A, When subsequent status transitions occur in Campaign A, Then the user receives no alerts for Campaign A across all selected channels.
Configurable Transition Types per Campaign
Given an admin edits Campaign A alert settings, When they select only Hearing Scheduled and Floor Vote as key transitions, Then only those transitions generate alerts for Campaign A. Given Referred is deselected for Campaign A, When a bill in Campaign A is referred, Then no alert is generated for that event. Given multiple bills are tracked in a campaign, When any selected transition occurs for any tracked bill, Then an alert is generated per bill event.
Instant Alerts Timing and Content
Given Instant mode is enabled, When Status Sync detects a selected transition, Then an alert is delivered within 60 seconds of detection. Then the alert content includes Campaign Name, Bill Identifier, Transition Type, Transition Timestamp, and deep links to the affected action pages and scripts. Then the email subject is formatted as "[Campaign] [Bill ID] [Transition]" and Slack/in-app titles mirror this format.
Digest Mode Scheduling and Content
Given a user sets Digest mode to Daily at 18:00 in their profile timezone, When selected transitions occur throughout the day, Then exactly one digest is delivered at 18:00 containing all events since the previous digest time. Then the digest groups events by campaign and bill, includes counts, transition details, and deep links for each event. Given no events occurred since the last digest, When the digest time passes, Then no digest is sent.
Quiet Hours and Timezone Handling
Given a user sets quiet hours from 21:00 to 08:00 in their profile timezone, When selected transitions occur during that window and Instant mode is enabled, Then alerts are held and delivered at 08:00 unless Digest mode supersedes delivery time. Given the user has no timezone set, When quiet hours are applied, Then the campaign default timezone is used. Given a digest is scheduled for 07:30 during quiet hours, When events exist, Then the digest is delayed to 08:00 and contains all events up to that time.
Severity Threshold Filtering
Given severity levels are mapped for transitions (e.g., Hearing Scheduled=Important, Floor Vote=Critical, Referred=Normal), When a user sets their threshold to Important, Then only events with severity Important or higher are delivered. Given a user's threshold is Critical, When a Hearing Scheduled (Important) event occurs, Then no alert is sent; When a Floor Vote (Critical) occurs, Then an alert is sent. Then both Instant and Digest modes respect the user's severity threshold.
Multi-Channel Delivery and Deep Linking
Given a user selects Email and Slack channels and disables In-App, When a selected transition occurs, Then alerts are delivered via Email and Slack only. Then the Slack message contains a clickable deep link that opens the affected action page in RallyKit; if the user is not authenticated, Then after login they are redirected to that deep-linked resource. Then the email body contains deep links to both the affected action page and the corresponding script editor or viewer.
Manual Override & Sync Freeze
"As an admin, I want the ability to temporarily freeze or override status-driven updates so that I can handle edge cases without confusing supporters."
Description

Allow authorized users to pause automatic updates for a campaign or a specific bill, override the mapped canonical status, and set an expiry or manual resume. Display an override banner in the UI, log the action in the audit trail, and prevent scheduled updates until un-frozen.
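
The guard below sketches how a status-sync worker might honor freeze scope, expiry, and a manual override before applying an inbound update. The Freeze and InboundUpdate shapes are assumptions for illustration only.

```typescript
// Sketch of the check a status-sync worker could run before altering content.

interface Freeze {
  scope: "campaign" | "bill";
  campaignId: string;
  billId?: string;       // present only for bill-scope freezes
  expiresAt?: Date;      // undefined = frozen until manual resume
  manualStatus?: string; // optional override displayed while frozen
}

interface InboundUpdate {
  campaignId: string;
  billId: string;
  canonicalStatus: string;
  receivedAt: Date;
}

type SyncOutcome =
  | { action: "apply"; status: string }
  | { action: "skip"; reason: "frozen"; effectiveStatus?: string };

function applyOrSkip(update: InboundUpdate, freezes: Freeze[]): SyncOutcome {
  const active = freezes.find(
    (f) =>
      f.campaignId === update.campaignId &&
      (f.scope === "campaign" || f.billId === update.billId) &&
      (!f.expiresAt || f.expiresAt > update.receivedAt),
  );
  if (active) {
    // Log-and-skip: the inbound status is recorded but nothing is altered;
    // any manual override stays visible until resume or expiry.
    return { action: "skip", reason: "frozen", effectiveStatus: active.manualStatus };
  }
  return { action: "apply", status: update.canonicalStatus };
}

// A bill-scope freeze suspends only that bill; sibling bills keep syncing.
const freezes: Freeze[] = [
  { scope: "bill", campaignId: "c1", billId: "HB-12", manualStatus: "In Committee" },
];
console.log(applyOrSkip({ campaignId: "c1", billId: "HB-12", canonicalStatus: "Floor Vote", receivedAt: new Date() }, freezes));
console.log(applyOrSkip({ campaignId: "c1", billId: "HB-99", canonicalStatus: "Floor Vote", receivedAt: new Date() }, freezes));
```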

Acceptance Criteria
Campaign-Level Sync Freeze
Given an authorized user with Sync Manager permission is on the campaign settings page When they enable "Freeze Status Sync" for the campaign and confirm Then status sync jobs for all bills linked to the campaign are suspended until resume or expiry And a persistent "Sync frozen (campaign scope)" banner is shown on the campaign dashboard and all linked action pages and script editors, displaying actor, timestamp, and expiry if set And inbound status updates (webhooks/polls) are logged as skipped with reason "frozen", and no affected content is altered And an audit trail entry is created with fields: userId, campaignId, action=freeze, scope=campaign, timestamp, expiry, reason (optional), outcome=success
Bill-Level Sync Freeze
Given an authorized user is on a bill's status panel within a campaign When they enable "Freeze Status Sync" for that bill and confirm Then only that bill's status sync jobs are suspended; other bills in the campaign continue syncing And a "Sync frozen (bill scope)" banner appears on that bill's action pages and script editors with actor, timestamp, and expiry if set And scheduled status updates for that bill are skipped and logged with reason "frozen" And an audit trail entry is created with fields: userId, campaignId, billId, action=freeze, scope=bill, timestamp, expiry, reason (optional), outcome=success
Manual Status Override Propagation
Given a campaign or bill is in a frozen state When the user selects a manual status value from the allowed statuses and saves the override Then the selected manual status is displayed with a "Manual override" badge in the UI for the frozen scope And all linked scripts and action page copy re-render to reflect the manual status on the next publish or within 60 seconds, whichever comes first And background status sync does not overwrite the manual status while the freeze remains active
Expiry and Auto-Resume
Given a freeze is active with an expiry datetime in the future When the current time reaches the expiry Then the system automatically resumes status sync for the frozen scope without user intervention And override banners are removed within 60 seconds And an audit trail entry is created with action=auto_resume, scope, timestamp, outcome=success And the next scheduled sync fetches the current canonical status and updates scripts and action pages accordingly And previously skipped updates are not retroactively applied
Manual Resume (Unfreeze) Flow
Given a freeze is active When an authorized user clicks "Resume now" and confirms Then status sync resumes immediately for the frozen scope And any manual status override attached to the freeze is cleared and the canonical status is applied on the next sync cycle And override banners are removed within 60 seconds And an audit trail entry is created with action=manual_resume, scope, actor, timestamp, outcome=success
Authorization and Access Control
Given a user without Sync Manager or Campaign Admin permission attempts to freeze or set a manual override via UI or API When the request is submitted Then the system returns 403 Forbidden (API) or disables/hides controls with an explanatory message (UI) And no freeze or override state is changed And the attempt is recorded in the audit trail with action=freeze_attempt or override_attempt, outcome=denied, actor, scope, timestamp

Target Shift

Automatically recalibrates who supporters contact as the bill moves—committee members and chairs at hearing stage, floor leaders on vote days, conference conferees later, and the governor at signature time. Ensures each action hits the highest‑leverage target for that stage while preserving district‑accurate matching.

Requirements

Automated Stage Detection
"As a campaign manager, I want the platform to automatically detect a bill’s current stage so that targets update without manual intervention and stay aligned with the legislative calendar."
Description

Implement ingestion and normalization of legislative bill metadata to detect and track the bill’s current stage across jurisdictions (e.g., introduced, committee assigned, hearing scheduled, floor calendar, floor vote, conference, governor desk). Map source-specific statuses to a canonical lifecycle, with configurable mappings per state. Monitor via polling and/or webhooks, debounce noisy updates, and emit a single authoritative stage state for each tracked bill. On stage transition, trigger recalculation of campaign targets and scripts. Support multi-chamber bills, joint committees, special sessions, and manual override with audit-tagged provenance. Provide latency SLA (e.g., under 5 minutes) and health monitoring with alerts on data feed failures.
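
A small sketch of two pieces described above: per-state mapping of source statuses to the canonical lifecycle (with an unmapped-status flag) and a debounce window that suppresses duplicate events. The mapping table, stage names, and 120-second window are illustrative placeholders.

```typescript
// Sketch: canonical stage mapping per state plus debounce of identical events.

type CanonicalStage =
  | "introduced" | "committee assigned" | "hearing scheduled"
  | "floor calendar" | "floor vote" | "conference" | "governor desk" | "unknown";

const stateMappings: Record<string, Record<string, CanonicalStage>> = {
  "US-CA": { "Assigned to Judiciary": "committee assigned", "Set for Hearing": "hearing scheduled" },
  "US-NY": { "Ref'd to Jud.": "committee assigned" },
};

function toCanonical(state: string, sourceStatus: string): CanonicalStage {
  const mapped = stateMappings[state]?.[sourceStatus];
  if (!mapped) {
    // In production this would raise a mapping-gap alert rather than just log.
    console.warn(`mapping gap: ${state} / "${sourceStatus}"`);
    return "unknown";
  }
  return mapped;
}

// Debounce: suppress identical (bill, stage) events inside a window so that
// duplicate webhooks or polls persist only one transition.
const DEBOUNCE_MS = 120_000;
const lastSeen = new Map<string, { stage: CanonicalStage; at: number }>();

function shouldEmit(billId: string, stage: CanonicalStage, now = Date.now()): boolean {
  const prev = lastSeen.get(billId);
  lastSeen.set(billId, { stage, at: now });
  return !(prev && prev.stage === stage && now - prev.at < DEBOUNCE_MS);
}

console.log(toCanonical("US-NY", "Ref'd to Jud.")); // committee assigned
console.log(shouldEmit("HB-12", "hearing scheduled")); // true (first event)
console.log(shouldEmit("HB-12", "hearing scheduled")); // false (debounced duplicate)
```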

Acceptance Criteria
Canonical Stage Mapping (Per-State Config)
- Given a bill in State A with source status "Assigned to Judiciary", When ingested, Then it maps to canonical "committee assigned". - Given a bill in State B with source status "Ref'd to Jud.", When ingested, Then it maps to canonical "committee assigned" using State B mapping. - Given a state-level mapping change is deployed, When the next ingestion occurs, Then the mapping version is recorded in provenance and applied without code change. - Given an unmapped status appears, When ingested, Then the system flags "Unknown" and raises a mapping gap alert within 2 minutes.
Dual Ingestion: Webhooks and Polling with Debounce
- Given a webhook event indicating "hearing scheduled", When received, Then the canonical stage updates within 5 minutes P95. - Given polling detects a status change after a missed webhook, When fetched, Then the same canonical stage is applied. - Given multiple identical webhook events arrive within a configured debounce window of 120 seconds, When processed, Then only one stage transition is persisted and one event emitted. - Given out-of-order events with different source timestamps, When processed, Then the event with the later source timestamp determines the canonical stage.
Authoritative Stage State and Idempotent Events
- Given any point in time, When the bill's stage is queried via API, Then exactly one canonical stage value is returned. - Given a stage transition occurs, When posted to the event bus, Then exactly one "stage.changed" event is emitted per unique transition ID. - Given a replay of the same transition ID, When processed, Then no additional event is emitted and no duplicate persistence occurs. - Given a stage change, When persisted, Then the record includes stage value, effective timestamp, source(s), and provenance (webhook|polling|override).
Stage Transition Triggers Target/Script Recalculation
- Given a transition from "committee assigned" to "hearing scheduled", When persisted, Then target recalculation updates campaign targets to the relevant committee within 2 minutes P95. - Given a transition to "floor vote", When completed, Then scripts are regenerated using stage-specific templates for all districts and published without breaking district matching. - Given no stage change (e.g., duplicate events), When processed, Then no recalculation or script regeneration is triggered.
Multi-Chamber and Special Cases (Joint Committees, Special Sessions)
- Given House stage "hearing scheduled" and Senate stage "introduced", When aggregated, Then the canonical stage reflects the highest-progress stage per documented precedence and includes chamber context in provenance. - Given a joint committee hearing is scheduled, When ingested, Then the canonical stage is "hearing scheduled" with committee type = "joint" recorded. - Given a special session bill, When ingested, Then the session field is set to "special" and stage mapping uses the special session status namespace. - Given both chambers advance on the same day, When processed, Then only one aggregate stage transition is emitted with the later effective timestamp.
Manual Override with Audit-Provenance and Revert
- Given an admin with override permission, When they set the stage to "conference", Then the canonical stage updates immediately and provenance.type = "manual_override" with user ID, reason, and timestamp are stored. - Given a manual override is active, When an automatic update arrives, Then the system applies the configured policy (suppress or allow if newer) and logs the decision with compared timestamps. - Given a revert action is requested, When executed, Then the stage returns to the last automatic state, the override is archived, and an "override.reverted" audit event is emitted.
Latency SLA and Health Monitoring/Alerts
- Given a source status change occurs at time T, When processed, Then the canonical stage reflects the change by T+5 minutes for at least 95% of changes in a rolling 7-day window. - Given data feed failures (HTTP 5xx for 3 consecutive polls or webhook delivery failures), When detected, Then a critical alert is sent to on-call within 2 minutes and surfaced on the health dashboard. - Given the ingestion service is healthy, When observed, Then the dashboard shows green status, last successful fetch time, last event time, and P95/P99 stage update latency; SLO burn alerts trigger when burn rate exceeds 2x in 1 hour.
Stage-Based Target Rules Engine
"As a campaign admin, I want to configure who gets contacted at each bill stage so that supporter actions always reach the most influential decision-makers."
Description

Provide a configurable rules engine that determines who supporters should contact for each bill stage. Allow admins to define target sets such as committee members/chairs during hearings, floor leaders/whips on vote days, conference conferees during reconciliation, and the governor at signature time. Support conditions by chamber, party, sponsor/co-sponsor status, committee membership, and geographic relevance. Enable priority ordering, weighting, and deduplication across overlapping roles. Include jurisdiction-ready defaults with the ability to customize per campaign. Output a clean, ordered list of targets and contact channels for each supporter, ready for action page rendering and script generation.
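
The deduplication and ordering behavior called out above (one entry per person, highest weight retained across overlapping roles, priority order with an alphabetical tiebreaker) could look roughly like the sketch below. The RoleHit and Target shapes are assumptions about what the rules engine might emit.

```typescript
// Sketch: flatten stage-rule hits into one ordered, deduplicated target list.

interface RoleHit {
  personId: string;
  fullName: string;
  role: string;         // e.g. "chair", "committee member", "majority leader"
  priorityRank: number; // 1 = highest
  weight: number;
}

interface Target {
  personId: string;
  fullName: string;
  roles: string[];
  priorityRank: number;
  weight: number;
}

function dedupeAndOrder(hits: RoleHit[]): Target[] {
  const byPerson = new Map<string, Target>();
  for (const h of hits) {
    const existing = byPerson.get(h.personId);
    if (!existing) {
      byPerson.set(h.personId, {
        personId: h.personId, fullName: h.fullName, roles: [h.role],
        priorityRank: h.priorityRank, weight: h.weight,
      });
    } else {
      existing.roles.push(h.role);
      existing.weight = Math.max(existing.weight, h.weight);               // keep highest weight
      existing.priorityRank = Math.min(existing.priorityRank, h.priorityRank); // keep best rank
    }
  }
  return [...byPerson.values()].sort(
    (a, b) => a.priorityRank - b.priorityRank || a.fullName.localeCompare(b.fullName),
  );
}

// A chair who is also a committee member appears once, at chair priority.
console.log(dedupeAndOrder([
  { personId: "p1", fullName: "Rivera", role: "chair", priorityRank: 1, weight: 10 },
  { personId: "p1", fullName: "Rivera", role: "committee member", priorityRank: 2, weight: 5 },
  { personId: "p2", fullName: "Okafor", role: "committee member", priorityRank: 2, weight: 5 },
]));
```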

Acceptance Criteria
Hearing Stage: Committee Members and Chairs Targeting
Given a campaign rule for stage hearing that targets committee members plus chair and vice-chair in the chamber of referral And the bill is scheduled for a hearing in Committee A of Chamber Y And the supporter resides in District D within the bill's jurisdiction When the rules engine evaluates targets for the supporter at stage hearing Then the output includes all members of Committee A in Chamber Y with available contact channels And the chair and vice-chair are present with higher priority than other members And targets outside the bill's jurisdiction are excluded And results are ordered by priority_rank ascending (1 is highest) And no person appears more than once if holding multiple roles, retaining the highest weight And each target includes at least one contact channel (phone or email)
Floor Vote Day: Chamber Leadership Prioritization
Given a campaign rule for stage floor_vote that targets chamber leadership (majority leader, minority leader, whips) and optionally the supporter’s district legislator And the bill is scheduled for a floor vote in Chamber Y on date T And the supporter resides in District D represented by Legislator L in Chamber Y When the rules engine evaluates targets for the supporter at stage floor_vote Then the output includes the majority leader, minority leader, and whip(s) of Chamber Y And if include_district_legislator is enabled, Legislator L is included with a lower priority than leadership And ordering follows priority_rank ascending so leaders appear before Legislator L And any overlapping roles are deduplicated, retaining the highest role weight And valid contact channels are present for each returned target
Conference Committee: Conferees Deduplication Across Roles
Given a campaign rule for stage conference that targets appointed conferees for Bill B And some conferees also hold other roles such as chair or bill sponsor When the rules engine evaluates targets at stage conference Then the output contains each conferee exactly once And for conferees with multiple roles, the assigned weight equals the highest configured weight among their roles And targets are ordered by priority_rank ascending with ties broken alphabetically by last name And only officially appointed conferees are included And contact channels are merged across roles without duplicate entries
Governor Signature Stage: Executive Outreach
Given a campaign rule for stage signature that targets the governor (required) and optionally the governor’s legislative director if contact is available And the bill has been enrolled and presented to the governor When the rules engine evaluates targets at stage signature Then the output includes exactly one governor contact for the jurisdiction with phone and email if available And includes the legislative director only if configured and contact is available And no legislative or committee targets are included And the total number of returned targets equals 1 or 2 accordingly
Conditional Filters by Chamber, Party, and Sponsorship
Given a rule for stage hearing with filters: chamber = Senate, party = Majority, and exclude sponsors/co-sponsors = true And the bill is assigned to Senate Committee S And some committee members are bill co-sponsors and some are in the minority party When the rules engine evaluates targets for the supporter at stage hearing Then only majority party, non-sponsor Senate members of Committee S are included And zero House members are included And the number of returned targets equals the count of committee members matching all configured filters
Jurisdiction Defaults and Campaign Overrides
Given jurisdiction State X has default target rules for all stages And Campaign C overrides the floor_vote stage to enable include_district_legislator and adjust weights When the rules engine evaluates targets for Campaign C at stage floor_vote Then the engine applies Campaign C overrides instead of the jurisdiction defaults And the returned target ordering and weights reflect Campaign C configuration And an audit log entry is created noting source = Campaign C, affected stage = floor_vote, and a timestamp And removing the override reverts behavior to jurisdiction defaults on the next evaluation
Output Data Contract for Action Pages
Given any stage evaluation completes successfully When the rules engine returns the target list Then each target includes fields: unique_person_id, full_name, role_list, chamber, district_or_state, party, priority_rank, weight, and contact_channels And contact_channels contains standardized items with type in [phone, email, twitter, facebook] and non-empty values, with at least one channel present And the overall response includes fields: stage, campaign_id, supporter_id, generated_at (ISO 8601), and targets array And targets are ordered by priority_rank ascending with stable order for equal weights And the response payload size does not exceed 200 KB when returning up to 100 targets
District-Accurate Target Resolution
"As a supporter, I want to be matched to the correct officials for my address so that my calls and emails are legitimate and effective."
Description

Preserve district-accurate matching as targets shift by resolving each supporter’s address to the correct jurisdiction and districts using validation (e.g., CASS/USPS) and boundary lookups. Compute the applicable target set per supporter by intersecting their districts with the stage-specific rules (e.g., their committee member if applicable, otherwise leaders). Handle edge cases like multi-member or at-large districts, redistricting changes, and vacant seats. Cache lookups with TTL, revalidate on stage changes, and prevent misrouting. Record the resolution logic used for each action for traceability and QA.
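
One way to realize the caching behavior described above is to key entries by normalized address plus boundary dataset version, so an address edit or a dataset update simply produces a new key. The shapes, the lookup callback, and the six-hour TTL default are assumptions, not the production configuration.

```typescript
// Sketch: TTL cache for district resolution, keyed by address + boundary version.

interface Districts {
  congressional: string;
  stateUpper: string;
  stateLower: string;
}

interface CacheEntry { value: Districts; storedAt: number }

const TTL_MS = 6 * 60 * 60 * 1000; // illustrative default: 6 hours
const cache = new Map<string, CacheEntry>();

function cacheKey(normalizedAddress: string, boundaryVersion: string): string {
  return `${boundaryVersion}::${normalizedAddress.toUpperCase()}`;
}

async function resolveDistricts(
  normalizedAddress: string,
  boundaryVersion: string,
  lookup: (addr: string) => Promise<Districts>, // cold path: geocode + point-in-polygon
  now = Date.now(),
): Promise<{ districts: Districts; cacheHit: boolean }> {
  const key = cacheKey(normalizedAddress, boundaryVersion);
  const hit = cache.get(key);
  if (hit && now - hit.storedAt < TTL_MS) {
    return { districts: hit.value, cacheHit: true };
  }
  const fresh = await lookup(normalizedAddress);
  cache.set(key, { value: fresh, storedAt: now });
  return { districts: fresh, cacheHit: false };
}

// Editing an address or bumping the boundary version changes the key,
// so the old entry is never consulted again.
resolveDistricts("123 MAIN ST, SPRINGFIELD IL 62701", "tiger-2024",
  async () => ({ congressional: "IL-13", stateUpper: "SD-48", stateLower: "HD-96" }),
).then((r) => console.log(r.cacheHit, r.districts));
```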

Acceptance Criteria
USPS/CASS Address Validation and Normalization
Given a mixed dataset of 500 known-valid and 200 known-invalid US addresses across all states/DC, When submitted for validation, Then ≥98% of valid addresses are confirmed deliverable by CASS/USPS and normalized to USPS format with ZIP+4 when available. Given the same dataset, When validation completes, Then 0 invalid addresses are marked deliverable and each invalid/ambiguous address returns a specific error code and recommended correction. Given any valid, mailable address, When geocoded, Then rooftop/parcel precision is achieved for ≥95% and remaining results are centroid-level flagged with accuracy=low; no target resolution proceeds for accuracy=low without explicit allow-low-accuracy flag. Given an address with a PO Box or military/DPO/APO, When processed, Then normalization is correct and district resolution is skipped or flagged per configured policy with a clear user-facing message.
District Boundary Resolution Across All Edge Cases
Given a normalized, geocoded address, When resolving districts, Then correct Congressional, State Upper, and State Lower districts are returned with dataset provider and version identifiers. Given multi-member or at-large districts, When resolving representatives, Then the full applicable member set is returned with chamber and role metadata. Given addresses near boundaries or in split precincts, When tie-break rules are applied, Then the resolution is deterministic, recorded, and matches a 1,000-address gold set with ≥99.5% accuracy and 0 critical misroutes. Given cached vs cold lookups, When measuring latency, Then P95 latency ≤200 ms for cache hits and ≤1.5 s for cold lookups under 100 RPS load. Given any boundary dataset update, When the effective-on date differs from action timestamp, Then the boundary version chosen corresponds to the action timestamp and is recorded.
Stage-Specific Target Intersection and Prioritization
Given Stage=Committee Hearing, When a supporter’s district includes a committee member, Then include their member(s) and the committee chair; When not, Then include the committee chair and designated ranking/vice chair only. Given Stage=Floor Vote, When the bill is on the supporter’s chamber, Then include their district legislator(s) in that chamber plus configured floor leader(s); When the bill is on the opposite chamber, Then include that chamber’s configured floor leader(s) only. Given Stage=Conference, When the supporter’s delegation has assigned conferees, Then include those conferees; Otherwise include conference chair(s). Given Stage=Governor Signature, When targets are computed, Then include the Governor and optionally the supporter’s district legislators only if campaign config allows. Then all targets are deduplicated, capped by per-action contact limits, and every selected target either represents the supporter’s district(s) or is an allowed stage-specific exception (leaders/chairs), with reasons recorded.
Handling Vacancies, Redistricting, and Ambiguity
Given a vacant seat in any applicable district, When computing targets, Then route using the configured fallback hierarchy (e.g., committee chair or chamber leadership), and record vacancy-triggered fallback in the trace. Given redistricting with effective dates, When an action timestamp precedes the effective date, Then pre-redistricting boundaries are used; When on/after the effective date, Then post-redistricting boundaries are used; the chosen boundary version is stored. Given an ambiguous address (missing unit, multiple parcel matches, low geocode accuracy), When resolution confidence < configured threshold, Then the action is blocked from sending, the user is prompted for clarification, and no targets are returned. Given known ambiguity test cases, When executed, Then 100% are blocked with actionable messages and 0 proceed to misrouted targets.
Caching Strategy and Revalidation on Stage Change
Given a successful district resolution, When caching, Then entries are keyed by normalized address and boundary dataset/version with a configurable TTL (documented default present) and cache metadata stored. Given a campaign stage change, When detected, Then target sets are re-computed for subsequent actions even if district cache entries are valid, and no pre-change target sets are served 5 seconds or more after the change. Given an address edit by a supporter, When saved, Then prior cache entries are invalidated and a fresh resolution is performed. Given load with repeat addresses, When measured, Then cache hit rate ≥80% and correctness of results is identical between cache hits and cold lookups.
Action-Level Traceability and Audit Logging
Given any supporter action, When targets are resolved, Then an immutable log captures: input and normalized address, geocode with accuracy, data sources and versions, districts, stage, rules version, candidate targets with inclusion/exclusion reasons, final targets, cache indicators, and timestamps. Given an action ID, When requested via UI or API, Then the full resolution log is retrievable and exportable as JSON and CSV within two interactions/API calls. Given the stored log, When the resolution is re-run in a deterministic test harness, Then the same targets are produced (byte-for-byte equality) in 100% of a 50-action QA sample. Given data retention policy, When evaluated, Then logs are retained ≥24 months and PII is minimized/redacted except fields required to reproduce resolution, meeting internal compliance checks.
Real-Time Action Page Target Refresh
"As an organizer, I want action pages to update targets instantly when the bill moves so that supporters never contact outdated officials."
Description

Ensure action pages automatically display the current, stage-appropriate targets and scripts without republishing. When stages change or rules are updated, push updates to the client via real-time channels and/or perform CDN cache purges; enable client-side rehydration so preloaded pages refresh targets, contact methods, and labels in place. Provide visual cues indicating an update occurred, while preserving UTM parameters, attribution, and partially completed forms. Maintain versioned scripts per stage and guarantee zero-downtime transitions with graceful rollbacks.
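
The in-place refresh described above can be thought of as a pure state merge: pushed targets and script versions replace the stale ones, while supporter-entered values, manual script edits, and UTM attribution are carried over untouched. The PageState and TargetUpdate shapes below are illustrative, not the real client model.

```typescript
// Sketch: apply a pushed target/script update without touching supporter input.

interface PageState {
  targets: { id: string; name: string; channel: string }[];
  scriptVersionId: string;
  scriptText: string;
  scriptEdited: boolean;        // true once the supporter edits the script
  form: Record<string, string>; // name, email, address, etc.
  utm: Record<string, string>;  // preserved verbatim across refreshes
}

interface TargetUpdate {
  updateId: string;
  targets: { id: string; name: string; channel: string }[];
  scriptVersionId: string;
  scriptText: string;
}

function applyUpdate(state: PageState, update: TargetUpdate): PageState {
  return {
    ...state,
    targets: update.targets,
    scriptVersionId: update.scriptVersionId,
    // Auto-generated scripts follow the new stage; manual edits are kept.
    scriptText: state.scriptEdited ? state.scriptText : update.scriptText,
    // form and utm are carried over untouched by the spread above.
  };
}

const before: PageState = {
  targets: [{ id: "t1", name: "Committee Chair", channel: "phone" }],
  scriptVersionId: "v7",
  scriptText: "Please support HB 12 in committee.",
  scriptEdited: false,
  form: { name: "Ada", email: "ada@example.org" },
  utm: { utm_source: "newsletter" },
};
const after = applyUpdate(before, {
  updateId: "u42",
  targets: [{ id: "t9", name: "Majority Leader", channel: "phone" }],
  scriptVersionId: "v8",
  scriptText: "HB 12 is up for a floor vote today.",
});
console.log(after.targets[0].name, after.form.name, after.utm.utm_source);
```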

Acceptance Criteria
Live Stage Change Push Updates
Given an action page for a live campaign is open in the browser When the campaign stage changes in the backend Then the client receives a real-time update within 3 seconds (p95) via WebSocket/SSE and applies it without a full page reload And the targets list updates to the stage-appropriate roles while preserving district-accurate matching for the visitor’s validated address/geo And contact methods, target labels, and the recommended script update in place with no duplicate targets displayed And a non-blocking visual cue appears within 1 second indicating “Targets updated,” is dismissible, and is announced via an aria-live=polite region And all open tabs for the same campaign session reflect the update within 5 seconds (p95) And the client logs update_id and applied_at timestamp for diagnostics
Client-Side Rehydration of Preloaded Pages
Given a user opens a pre-rendered action page that is stale relative to the current stage When the client hydrates and fetches the latest campaign configuration Then stale targets, contact methods, labels, and recommended script are replaced with the current versions within 2 seconds of hydration (p95) And no field validation or render errors occur during replacement (no console errors of type error) And a visual cue indicates content was refreshed And CDN cache purge is executed on stage change and the next uncached request returns fresh HTML/config within 60 seconds (p95) or is served via stale-while-revalidate with a max staleness of 60 seconds And the client falls back to polling (<=15s interval) if real-time transport is unavailable
Preservation of UTM and Attribution on Refresh
Given a visitor lands on an action page with UTM parameters and an attribution token in the URL When a real-time update or hydration-driven refresh occurs Then the URL query parameters remain intact and are not stripped And the submission payload includes the original utm_source, utm_medium, utm_campaign, utm_content, utm_term, and attribution_id unchanged And internal client navigation and toasts do not trigger a history state that drops query params And analytics events emitted before and after the update contain the same UTM and attribution values And automated tests validate at least 5 UTM permutations including long values and URL-encoded characters
Preservation of Partially Completed Forms
Given a visitor has entered form fields (name, email, address), completed address verification, and optionally edited the script text When a target/script update is applied in place Then all entered form values, checkbox states, and validation statuses remain unchanged And the caret position is preserved for the actively focused input or textarea And if the visitor is using the auto-generated script, it updates to the new version; if the visitor has manually edited the script, their edits are preserved and not overwritten And the selected district and matched targets remain consistent unless the verified address changes And no duplicate form submissions are triggered by the update
Zero-Downtime Target Transition at Scale
Given a scheduled stage flip occurs during peak traffic When the update is rolled out across services and clients Then no 5xx responses are returned to clients for 5 minutes around the transition window And any submission received during propagation is routed according to the server-side effective stage at submission time, even if the client view is stale And server-side validation prevents dispatch to obsolete targets once the new stage is active And p95 submission latency increases by no more than 200 ms during the transition And monitoring emits success/failure metrics and alerts on threshold breaches
Graceful Rollback to Previous Target/Script Version
Given a faulty configuration or unexpected behavior is detected after a stage change When an operator triggers a rollback to the last known-good version Then clients receive and apply the rollback update within 3 seconds (p95) without a full page reload And targets, labels, and recommended script revert while preserving user-entered form data and UTM parameters And a visual cue indicates the content has been reverted And audit logs record rollback_id, actor, reason, previous_version_id, and restored_version_id And duplicate outreach caused by conflicting versions is prevented server-side for the same session
Audit-Ready Versioning and Target Traceability
Given any action submission after an update When the submission is recorded Then the record includes stage, target_id list, script_version_id, ruleset_hash, update_id, district_match inputs, UTM parameters, attribution_id, and submitted_at timestamp And the script content for the recorded script_version_id is retrievable for audit purposes And the record is queryable in reporting within 1 minute of submission And all required fields are non-null and pass schema validation in 100% of submissions (excluding intentionally anonymous fields)
Target Shift Audit Log and Reporting
"As a compliance officer, I want a detailed audit trail of target shifts so that I can prove outreach aligned with the bill’s status at every step."
Description

Create an immutable, timestamped audit log capturing each detected stage, rules evaluation, and resulting target list changes per campaign and per action session. Store provenance (data feed vs manual override), actor, previous vs new targets, and affected supporter counts. Expose this in the dashboard with filters (date range, stage, jurisdiction) and export options (CSV/JSON). Provide reports that correlate conversion rates and outcomes by stage and target type to demonstrate effectiveness and deliver audit-ready proof for stakeholders and funders.
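
The tamper-evidence scheme in the criteria below (each record stores hash = SHA-256(canonical_payload + prev_hash)) can be sketched with Node's crypto module. Canonicalization via sorted-key JSON and the trimmed payload fields are assumptions for illustration.

```typescript
// Sketch: append-only audit chain with per-record hashes linked to the previous hash.

import { createHash } from "node:crypto";

interface AuditRecord {
  logId: string;
  campaignId: string;
  previousStage: string;
  newStage: string;
  createdAt: string; // ISO 8601 UTC
  prevHash: string;  // hash of the previous record in the campaign chain
  hash: string;      // SHA-256(canonical_payload + prevHash)
}

function canonicalPayload(r: Omit<AuditRecord, "hash">): string {
  const { prevHash, ...payload } = r;
  // Stable key order keeps the hash reproducible across writers.
  return JSON.stringify(payload, Object.keys(payload).sort());
}

function appendRecord(chain: AuditRecord[], entry: Omit<AuditRecord, "hash" | "prevHash">): AuditRecord {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const partial = { ...entry, prevHash };
  const hash = createHash("sha256").update(canonicalPayload(partial) + prevHash).digest("hex");
  const record = { ...partial, hash };
  chain.push(record);
  return record;
}

function verifyChain(chain: AuditRecord[]): boolean {
  return chain.every((r, i) => {
    const expectedPrev = i === 0 ? "GENESIS" : chain[i - 1].hash;
    const { hash, ...rest } = r;
    const recomputed = createHash("sha256").update(canonicalPayload(rest) + expectedPrev).digest("hex");
    return r.prevHash === expectedPrev && hash === recomputed;
  });
}

const chain: AuditRecord[] = [];
appendRecord(chain, { logId: "1", campaignId: "c1", previousStage: "Committee", newStage: "Floor", createdAt: "2025-03-01T12:00:00Z" });
appendRecord(chain, { logId: "2", campaignId: "c1", previousStage: "Floor", newStage: "Conference", createdAt: "2025-03-09T15:30:00Z" });
console.log(verifyChain(chain)); // true; editing any stored record breaks verification
```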

Acceptance Criteria
Append audit log on stage change (data feed vs manual override)
Given a campaign with active action sessions at stage "Committee" And Target Shift rules version is recorded When the system ingests a legislative data feed indicating a stage change to "Floor" Then exactly one audit log record is appended with fields: log_id (UUIDv4), campaign_id, action_session_id (nullable), previous_stage, new_stage, rules_version, provenance = "data_feed", actor_type = "system", actor_id, previous_targets (array of target_ids), new_targets (array of target_ids), affected_supporter_count (integer >= 1), created_at (ISO 8601 UTC), request_id, reason_code = "stage_change", hash, prev_hash And the set difference between previous_targets and new_targets reflects the recalculated targets And the record is queryable in the dashboard within 5 seconds of creation When a user with role "Organizer" performs a manual override of targets Then a new audit log record is appended with provenance = "manual_override", actor_type = "user", and correct previous_targets and new_targets
Immutability and tamper evidence of audit log
Given any existing audit log record When an API or UI attempts to update or delete the record Then the operation is rejected with an error (HTTP 409 or 405) and no stored record is altered And only a new correction entry can be added with field correction_of referencing the original log_id And each record includes hash = SHA-256(canonical_payload + prev_hash) and prev_hash of the previous record in the campaign chain And verifying the hash chain for any campaign succeeds from first to latest record
Rules evaluation trace and reproducibility
Given a stage change triggers Target Shift recalculation When rules are evaluated Then the resulting audit log includes ruleset_id, rules_version, rules_fired (ordered list of rule IDs), decision_path, and parameters used (jurisdiction, chamber, bill_id, stage, committee_id if applicable) And replaying the rules engine with the logged parameters produces the same new_targets deterministically And the log stores evaluation_duration_ms and engine_request_id
Dashboard filters and sorting
Given the Audit Log dashboard is open When the user applies filters: date range (UTC), stage in ["Committee","Floor","Conference","Governor"], and jurisdiction in ["US-CA","US-NY"] simultaneously Then only matching records are returned And results are sorted by created_at descending by default and can be toggled to ascending And pagination supports page sizes 25, 50, 100 with accurate total count and stable ordering across pages And visible columns include created_at, campaign_id, action_session_id, previous_stage → new_stage, provenance, actor (type and id), affected_supporter_count And the first page of filtered results loads in under 2 seconds for up to 100k matching records
CSV and JSON export of filtered audit logs
Given a set of filters is applied in the Audit Log dashboard When the user exports to CSV Then the file downloads as UTF-8 with a header row and includes visible columns plus log_id, hash, prev_hash, request_id And all timestamps are ISO 8601 UTC And the exported rows exactly match the filtered result set and ordering And exports succeed for up to 1,000,000 rows without truncation When the user exports to JSON Then the payload is an array of objects with the same fields and values as the CSV export and is delivered via a downloadable file or streaming endpoint
Effectiveness report by stage and target type
Given completed action data exists for campaigns When the user opens the Effectiveness report and selects a date range Then the report displays, per stage and target_type (Committee Member, Chair, Floor Leader, Conferee, Governor): action_sessions_initiated, actions_completed, conversion_rate = actions_completed / action_sessions_initiated (percent with 2 decimals), and outcomes breakdown (calls_completed, emails_sent) And filters for jurisdiction and campaign narrow the aggregates correctly And attribution of target_type derives from audit log entries used for those sessions And the report can be exported to CSV and JSON with the same metrics and filters applied
Per-action session traceability
Given an action_session_id is selected When a user views its audit trail Then the UI shows a chronological sequence of all audit log entries for that session with created_at, previous_stage, new_stage, previous_targets, new_targets, actor_type, actor_id, and provenance And at least one evaluation log exists per stage transition, even when targets did not change (new_targets equals previous_targets) And clicking a target_id resolves to the target’s current name and title via lookup
Fallbacks and Exceptions Handling
"As a campaign manager, I want intelligent fallbacks when data is incomplete so that supporters can still take meaningful action without errors."
Description

Define resilient fallbacks when data is missing or ambiguous to avoid dead ends. Examples: if committee rosters are unavailable, default to chair and majority/minority leaders; if address resolution fails, prompt for ZIP+4 or route to universal targets; if conferees are not published, target conference leadership. Provide admin-configurable policies, alerting for unresolved gaps, retry strategies, and jurisdiction-specific rules for special sessions and joint committees. Guarantee that every action renders at least one valid target and clearly indicates when a fallback was applied.
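
A minimal sketch of walking an admin-configured fallback chain until some provider yields at least one valid target, while recording which step was used so the UI can show the fallback badge and the action record can store the chain. Provider names and shapes are illustrative.

```typescript
// Sketch: resolve targets through an ordered fallback chain.

interface Target { id: string; name: string; role: string }

type Provider = () => Promise<Target[]>;

interface ResolvedTargets {
  targets: Target[];
  fallbackApplied: boolean;
  fallbackChain: string[]; // providers tried, in order
  selectedStep: string;
}

async function resolveWithFallbacks(chain: [string, Provider][]): Promise<ResolvedTargets> {
  const tried: string[] = [];
  for (const [name, provider] of chain) {
    tried.push(name);
    try {
      const targets = await provider();
      if (targets.length > 0) {
        return { targets, fallbackApplied: tried.length > 1, fallbackChain: tried, selectedStep: name };
      }
    } catch {
      // Treat provider errors like empty results and continue down the chain.
    }
  }
  throw new Error("No valid target available; block submission and alert admins");
}

// Hearing stage: the roster API is down, so the chain falls through to chair + leaders.
resolveWithFallbacks([
  ["committee_roster", async () => { throw new Error("roster API 503"); }],
  ["chair_and_leaders", async () => [{ id: "p3", name: "Chair Nguyen", role: "chair" }]],
  ["universal_targets", async () => [{ id: "p9", name: "Gov. Lee", role: "governor" }]],
]).then((r) => console.log(r.selectedStep, r.fallbackApplied)); // chair_and_leaders true
```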

Acceptance Criteria
Committee roster data unavailable at hearing stage
Given a campaign is at the committee hearing stage and the committee roster API returns empty or error When a supporter opens the action page Then the system selects the committee chair and chamber majority and minority leaders for the correct chamber as targets And all selected targets belong to the correct jurisdiction and current session And the page renders without errors and allows the action to be sent
Address resolution fails for supporter
Given a supporter enters an address and initial geocoding fails When the supporter attempts to proceed Then the system prompts for ZIP+4 with inline validation And if ZIP+4 resolves the address, district matching is applied and stage-appropriate targets are selected And if resolution still fails after two attempts, the system routes to the configured universal targets for the stage And the action renders at least one valid target and can be submitted
Conference conferees not published
Given the bill is in conference and the conferee roster endpoint is unavailable or returns no members When a supporter opens the action page Then the system targets conference leadership (chair and vice chair, where applicable) and majority/minority leaders of both chambers per policy And duplicate individuals appearing in multiple roles are de-duplicated in the target list And all selected targets are from the correct legislature and session
Jurisdiction-specific rules for special sessions and joint committees
Given the campaign is marked as operating during a special session or the bill is assigned to a joint committee When the system determines targets Then special-session leadership mappings are used instead of regular session mappings for that jurisdiction And for joint committees, targets are selected from both chambers according to the configured priority order And no targets outside the special session or joint body are selected
Admin-configurable fallback policy and priority order
Given an admin opens Target Shift settings for a campaign When the admin sets the fallback policy order (e.g., Committee Chair -> Chamber Leaders -> Governor) and saves Then the configuration is persisted and versioned with timestamp, admin user, jurisdiction, and campaign ID And subsequent target selections for that campaign follow the saved order without deployment And an audit log entry records the before/after policy values
Alerting and retry strategy for unresolved data gaps
Given an external data fetch (e.g., committee roster or conferees) fails When the system retries the fetch Then it performs up to 3 retries with exponential backoff over a total of 10 minutes And upon final failure, it marks the data source as Degraded, applies the configured fallback policy, and proceeds And an alert is sent to project admins within 5 minutes containing source, failure reason, incident ID, and applied fallback And the incident is logged with correlation ID and retry outcomes
Guaranteed valid target and fallback disclosure
Given any stage and any data availability state When a supporter opens the action page Then at least one valid target is displayed within 2 seconds or an actionable retry message is shown And if a fallback was applied, a visible "Fallback applied" badge with a short reason is displayed to the supporter And the action record stores FallbackApplied=true, FallbackReasonCode, FallbackChain, and SelectedTarget IDs for audit And no action can be submitted without at least one valid target

Script Autodraft

Generates fresh, stage‑aware call and email scripts with dynamic fields (bill number, committee name, hearing date/time, vote window). Offers quick-approve suggestions tuned to urgency and channel, so your team ships accurate, persuasive language in minutes instead of hours.

Requirements

Stage-Aware Template Engine
"As a campaign director, I want scripts to automatically adapt to the bill’s stage and communication channel so that our outreach is accurate, timely, and persuasive without manual rewriting."
Description

A rules-driven template engine that selects and assembles call and email scripts based on the bill’s current stage (e.g., introduced, in committee, floor vote, conference, signed/veto). It applies stage-specific frameworks, inserts required dynamic fields, and enforces channel-appropriate structure (concise call script vs. formatted email subject and body). It integrates with RallyKit’s campaign settings and legislator matching so scripts are district-specific, ensures safe fallbacks when data is missing, and outputs publish-ready text that can be consumed by action pages and outreach tools.
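
The selection-plus-fallback behavior could be reduced to the sketch below: pick the stage framework, fall back to a General Advocacy template on an unrecognized stage, and substitute safe defaults so no unresolved token reaches a supporter. The template strings, field names, and warning codes are illustrative placeholders.

```typescript
// Sketch: stage-aware template selection with safe dynamic-field fallbacks.

const templates: Record<string, string> = {
  introduced: "Please co-sponsor {{billNumber}}.",
  in_committee: "Urge {{committeeName}} to advance {{billNumber}} at the {{hearingDateTime}} hearing.",
  floor_vote: "Vote YES on {{billNumber}} during {{voteWindow}}.",
  general_advocacy: "Please support {{billNumber}}.",
};

const fallbacks: Record<string, string> = {
  committeeName: "the committee",
  hearingDateTime: "an upcoming hearing",
  voteWindow: "as soon as possible",
};

function renderScript(stage: string, fields: Record<string, string | undefined>) {
  const warnings: string[] = [];
  const templateKey = templates[stage] ? stage : "general_advocacy";
  if (templateKey !== stage) warnings.push("StageFallback");

  const text = templates[templateKey]
    .replace(/\{\{(\w+)\}\}/g, (_, key: string) => {
      if (fields[key]) return fields[key] as string;
      warnings.push(`MissingField:${key}`);
      return fallbacks[key] ?? "";
    })
    .replace(/ {2,}/g, " ")
    .trim();

  return { text, selectedTemplateId: templateKey, warnings };
}

console.log(renderScript("in_committee", { billNumber: "HB 12", committeeName: "Judiciary" }));
// hearingDateTime falls back to "an upcoming hearing" with a MissingField warning
console.log(renderScript("postponed_indefinitely", { billNumber: "HB 12" }));
// unrecognized stage: General Advocacy template with a StageFallback warning
```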

Acceptance Criteria
Stage-driven template selection
Given a bill with stage = Introduced, When generating call and email scripts, Then the engine selects the Introduced framework and produces a co-sponsorship ask. Given a bill with stage = In Committee, When generating scripts, Then the engine selects the Committee framework and references the committee workflow. Given a bill with stage = Floor Vote, When generating scripts, Then the engine selects the Floor Vote framework and includes a clear vote ask (YES/NO) tied to the vote window. Given a bill with stage = Conference, When generating scripts, Then the engine selects the Conference framework and frames outreach to conferees. Given a bill with stage = Signed or Veto, When generating scripts, Then the engine selects the Executive Action framework and produces thank/urge-to-sign or veto messaging accordingly. Given a bill with an unrecognized stage, When generating scripts, Then the engine uses the General Advocacy fallback template and records a StageFallback warning in metadata. Then the output metadata includes stage, selectedTemplateId, and frameworkName for auditability.
Dynamic fields population and safe fallbacks
Given billNumber, committeeName, hearingDateTime, and voteWindow are available, When generating scripts, Then all four appear in the appropriate places with correct formatting. Then hearingDateTime is formatted as 'MMM D, YYYY h:mm a z' and localized per campaign settings timezone. Given hearingDateTime is missing, When generating scripts, Then the text substitutes 'an upcoming hearing' and no placeholder tokens remain. Given voteWindow is missing, When generating scripts, Then the text substitutes 'as soon as possible' and no placeholder tokens remain. Given committeeName is missing, When generating scripts, Then the text substitutes 'the committee' and no placeholder tokens remain. Then no unresolved placeholders remain (no occurrences of {{...}} or [[...]]), and no 'null'/'undefined' strings exist. Then consecutive whitespace is normalized to single spaces except intentional line breaks.
Channel-specific structure enforcement
Given channel = Call, When generating, Then the script length is ≤ 120 words, contains a 1-sentence intro, a single explicit ask, and a brief closing, and contains no HTML/markdown. Given channel = Email, When generating, Then the output includes a subject (≤ 90 characters) containing the billNumber and stage label, and a body with: greeting ('Dear {Title} {LastName},'), 1–3 short paragraphs, a clear ask, and a sign-off block. Then email body preserves paragraph breaks using '\n' line separators and contains no unresolved tokens or HTML tags unless campaign settings explicitly allow rich formatting. Then each channel’s variant is validated against its schema (Call: {script}; Email: {subject, body}).
District-specific personalization via legislator matching
Given a supporter’s address maps to a specific legislator, When generating scripts, Then the legislator’s correct title (Senator/Representative) and last name are inserted in greeting/opening. Given the bill is in the committee stage and the matched legislator serves on the referenced committee, When generating scripts, Then the copy explicitly references that committee service. Given legislator matching fails, When generating scripts, Then the copy uses a neutral addressee ('your legislator') and generation completes without error. Then no legislator is referenced from the wrong chamber for the current stage’s target. Then metadata.targets includes the legislatorId(s) used or 'generic' on fallback.
Campaign settings and urgency influence
Given campaign settings specify allowedChannels, tone, and urgency, When generating scripts, Then only allowedChannels are produced, language reflects the tone, and urgency-specific phrasing is applied. Given urgency = High, When generating email, Then the subject is prefixed with '[Urgent]' and the body includes a time-sensitive call-to-action tied to voteWindow/hearingDateTime when available. Given urgency = Normal, When generating email, Then no urgency tag is added to the subject and language avoids alarm terms. Given campaign talkingPoints are provided, When generating scripts, Then at least one talking point is included verbatim or paraphrased within the body per channel constraints. Then all settings applied are listed in metadata.appliedSettings.
Publish-ready output contract and performance
Given a generation request, When successful, Then the engine returns UTF-8 text and a structured payload containing: stage, selectedTemplateId, channel, and for Email {subject, body} or for Call {script}, plus metadata {dynamicFieldsUsed, missingFields, appliedSettings, warnings[]}. Then no field exceeds 10,000 characters, newline characters are '\n', and trailing whitespace is trimmed. Then the 95th percentile generation latency is ≤ 1,000 ms and the 99th percentile is ≤ 2,000 ms under nominal load (≤ 10 dynamic fields, ≤ 2 channels). Given any non-critical data is missing, When generating, Then a fallback is used and a non-fatal warning is emitted; the API responds with HTTP 200 and warnings[] populated. Given a critical internal error occurs, When generating, Then the API responds with HTTP 500 and no partial script text is returned.
Dynamic Field Resolver & Validation
"As an organizer, I want dynamic fields auto-filled and validated so that scripts are always accurate and ready to ship."
Description

A service that fetches, formats, and validates dynamic fields required by scripts, including bill number, committee name, hearing date and time, and vote window. It normalizes time zones to the campaign’s locale, applies human-friendly formatting, and provides resilient fallbacks and warnings when inputs are missing or stale. It integrates with configured legislative data sources, caches results with defined TTLs, exposes validation errors to the UI, and guarantees consistent field availability to the template engine and action pages.
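
A sketch of the locale formatting and missing-field fallbacks described in the criteria below, using only standard Intl APIs; the display field names mirror the snake_case keys in the criteria but are illustrative, and exact output strings depend on the runtime's locale data (e.g. "CDT" rather than the spec's "CT" label without a small abbreviation map).

```typescript
// Sketch: localize a UTC hearing time and backfill display-safe defaults.

function formatHearing(hearingUtc: string, timeZone: string) {
  const date = new Date(hearingUtc);
  const hearingDateDisplay = new Intl.DateTimeFormat("en-US", {
    weekday: "short", month: "short", day: "numeric", year: "numeric", timeZone,
  }).format(date); // e.g. "Sat, Sep 6, 2025"
  const hearingTimeDisplay = new Intl.DateTimeFormat("en-US", {
    hour: "numeric", minute: "2-digit", hour12: true, timeZone, timeZoneName: "short",
  }).format(date); // e.g. "2:00 PM CDT"
  return { hearingDateDisplay, hearingTimeDisplay };
}

// Missing data still yields display-safe values plus structured warnings,
// so templates never render an empty field.
function resolveDisplayFields(input: { committeeName?: string; voteWindow?: string }) {
  const warnings: { code: string; field: string }[] = [];
  const committeeNameDisplay = input.committeeName ?? "committee information pending";
  if (!input.committeeName) warnings.push({ code: "MISSING_FIELD", field: "committee_name" });
  const voteWindowDisplay = input.voteWindow ?? "vote timing to be announced";
  if (!input.voteWindow) warnings.push({ code: "MISSING_FIELD", field: "vote_window" });
  return { committeeNameDisplay, voteWindowDisplay, warnings };
}

console.log(formatHearing("2025-09-06T19:00:00Z", "America/Chicago"));
console.log(resolveDisplayFields({ committeeName: "Judiciary" }));
```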

Acceptance Criteria
Primary Data Fetch & Field Mapping
Given a campaign configured with a legislative data source and a known bill identifier When the Dynamic Field Resolver runs with a cold cache Then it retrieves bill number, committee name, hearing date/time, and vote window from the primary source And it maps values to the internal schema keys: bill_number, committee_name, hearing_datetime_utc, vote_window_start_utc, vote_window_end_utc And it returns all required fields as non-empty values with ISO-8601 for UTC date/times And it includes source metadata (source_name, fetched_at_utc) And it completes the operation without errors
Time Zone Normalization & Human-Friendly Formatting
Given a campaign locale set to America/Chicago and a hearing_datetime_utc of 2025-09-06T19:00:00Z When the resolver formats display fields Then hearing_date_display equals "Sat, Sep 6, 2025" and hearing_time_display equals "2:00 PM CT" And vote_window_display shows localized start and end (e.g., "Sep 6, 2:00–4:00 PM CT") And daylight saving rules are applied correctly for the locale And all display fields use abbreviated weekday, month name, and 12-hour time with AM/PM and timezone abbreviation
Cache TTL & Invalidation
Given a cache TTL of 300 seconds and a successful retrieval at T0 When a second request for the same campaign/bill occurs at T0+120 seconds Then the resolver returns cached values and marks cache_hit=true When another request occurs at T0+301 seconds Then the resolver refetches from the source, updates the cache, and marks cache_hit=false And when a manual invalidate command is issued for the key Then the next request performs a refetch regardless of remaining TTL
Fallbacks on Missing or Stale Data
Given the primary source omits committee name and vote window When the resolver runs Then committee_name_display equals "committee information pending" and vote_window_display equals "vote timing to be announced" And the resolver sets warnings with codes MISSING_FIELD for committee_name and vote_window And fields include stale=false and completeness=partial Given a last_updated_utc older than the configured staleness threshold When the resolver runs Then it sets warnings with code STALE_DATA and stale=true while still returning the best-known values
Validation Errors Surface to UI
Given an invalid bill identifier format is provided When the resolver validates inputs before fetch Then it returns no field values and a validation_errors array with at least: code=INVALID_INPUT, field=bill_id, severity=error, message present Given the upstream API returns a 404 for the bill When the resolver runs Then it returns validation_errors containing code=NOT_FOUND, field=bill_id, severity=error, message present And the HTTP/API response to the UI includes a 4xx status and the structured errors
Consistent Field Availability to Templates
Given any combination of missing, stale, or unavailable upstream data When the template engine requests dynamic fields Then the resolver returns a fields object that always includes the keys: bill_number, committee_name_display, hearing_date_display, hearing_time_display, vote_window_display And no template render throws or fails due to missing keys And placeholder text conforms to the approved defaults list and is never empty And all string outputs are trimmed and free of leading/trailing whitespace
Urgency and Channel Tuning
"As a communications lead, I want quick-approve suggestions tuned to urgency and channel so that our messages fit the moment and medium."
Description

Configurable tone and length presets that tailor language to urgency levels (e.g., low, medium, high) and channel (call vs. email). The system adjusts calls-to-action, subject line intensity, and script length while adhering to organizational messaging and compliance guardrails. It provides previews for each combination, allows default presets per campaign, and exposes a simple control in the editor to switch urgency without rewriting content.

Acceptance Criteria
Apply urgency and channel presets to call scripts
Given a campaign has defined tone, CTA count, and length limits for Low, Medium, and High call presets When the editor channel is Call and an urgency preset is selected Then the generated call script matches the selected preset’s limits for word count (within preset range), CTA count (exact match), and tone phrases (from preset’s approved list) And switching to another urgency updates only preset-controlled segments and preserves user-edited custom sections unchanged
Adjust email subject intensity and body length by urgency
Given email presets define subject intensity keywords, body length ranges, and CTA placement rules for Low, Medium, and High When the editor channel is Email and an urgency preset is selected Then the email subject contains the required intensity pattern for the selected preset (e.g., at least one preset keyword for Medium/High, none for Low) and is ≤ 65 characters And the body word count falls within the preset-defined range and includes a clear CTA within the first paragraph if Medium/High And switching urgency updates preset-controlled text while preserving any user-edited custom sections
Preview all urgency–channel combinations before publish
Given Call and Email channels are enabled and three urgency levels (Low, Medium, High) are available When the user opens Previews for the script Then six previews (Call×3, Email×3) are rendered with channel and urgency labels, subject line (for Email), and estimated read/speak time And each preview accurately applies the corresponding preset limits and shows all dynamic fields as placeholders or resolved samples And all previews render without error within 2 seconds each on a typical broadband connection
Campaign default presets set and inherited
Given a campaign default is configured for each channel (e.g., Call=High, Email=Medium) When a new script is created within that campaign Then the script initializes with the campaign’s default urgency preset per channel And changing the campaign default only affects scripts created after the change and does not modify existing scripts And authorized users can override the default preset at the script level without altering the campaign default
Editor urgency toggle is simple and non-destructive
Given a script with user-edited custom sections is open in the editor When the user changes the urgency via a single toggle/control Then the update completes in under 500 ms and the cursor focus remains in place And only preset-controlled segments (tone markers, CTA phrasing, length) are updated; user-edited custom sections remain unchanged byte-for-byte And no validation errors are introduced by the change
Messaging and compliance guardrails enforced across presets
Given organizational messaging rules and compliance guardrails are configured When a script is generated or its urgency/channel is changed Then required disclaimers and attribution are present for all presets and channels And no disallowed phrases (as defined in guardrails) appear in the output; violations are blocked with actionable error messages And the script cannot be saved or published until guardrail validation reports zero critical violations
Quick Approve & Edit Workflow
"As a small team member, I want to approve and publish scripts with minimal clicks so that we can move fast during critical windows."
Description

An inline editor and review flow that surfaces suggested scripts, supports rapid edits, and enables one-click approval and publish to action pages. It includes role-based permissions, approver attribution, timestamps, keyboard shortcuts, and bulk approval for multi-target campaigns. It shows change diffs between suggestions and current live copy, prevents accidental overwrites with optimistic locking, and records approvals for downstream reporting.
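
The optimistic-locking behavior could be as simple as comparing the base version at save or approve time, as sketched below. The in-memory store and names are placeholders; in production this would typically be a conditional update guarded by the stored version.

```typescript
// Sketch: optimistic locking via a version check at save/approve time.

interface ScriptRecord { targetId: string; version: number; text: string }

class ConflictError extends Error {
  constructor(public currentVersion: number) {
    super(`Stale base version; current version is ${currentVersion}`);
  }
}

const store = new Map<string, ScriptRecord>();
store.set("t1", { targetId: "t1", version: 3, text: "live copy" });

function saveScript(targetId: string, baseVersion: number, text: string): ScriptRecord {
  const current = store.get(targetId);
  if (!current) throw new Error("unknown target");
  if (current.version !== baseVersion) throw new ConflictError(current.version);
  const next = { targetId, version: current.version + 1, text };
  store.set(targetId, next);
  return next;
}

// User A saves on top of v3 and wins; User B, still on v3, gets a conflict
// and must refresh or reapply their edits on top of v4.
console.log(saveScript("t1", 3, "User A edit").version); // 4
try {
  saveScript("t1", 3, "User B edit");
} catch (e) {
  if (e instanceof ConflictError) console.log("conflict, current version:", e.currentVersion);
}
```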

Acceptance Criteria
Inline Editor Shows Suggested Script and Diff
Given I have Approver or Editor role and a campaign with a suggested script exists When I open the Quick Approve & Edit panel for a target Then I see the suggested script populated in an inline editor And I see a diff against the current live copy And dynamic fields (bill number, committee name, hearing date/time, vote window) render preview values from campaign context And I can edit the suggested text and save a draft without publishing And the system records a draft version number on save
One-Click Approve Publishes to Action Pages
Given I have Approver role and an edited or untouched suggestion is open When I trigger Approve & Publish (button or Ctrl/Cmd+Enter) Then the live copy for the target and channel updates to the approved text with a new version number And all linked action pages reflect the new copy within 30 seconds And I receive a success confirmation with version ID and UTC timestamp And the previous live version remains accessible in version history
Role-Based Permissions and Attribution
Given a user without Approver role opens the panel Then the Approve & Publish control is disabled with a tooltip indicating the required role Given a user with Approver role approves a script Then the system records approver ID, name, role, and UTC timestamp, and displays them in the approval banner and audit log And the editor who last saved the draft is attributed on the version And unauthorized users cannot publish via API or UI (request is rejected with 403)
Optimistic Locking Prevents Overwrites
Given User A and User B open the same target script at version v When User A saves a draft or approves, creating version v+1 And User B attempts to save or approve based on version v Then the system blocks User B’s action with a conflict message showing the current version and a diff of User B’s changes versus v+1 And User B can refresh to latest or apply their changes on top and retry And no overwrite of version v+1 occurs
Bulk Approval for Multi-Target Campaigns
Given a campaign with multiple targets and per-target suggestions When I select a subset of targets (e.g., 5 of 7) and choose Bulk Approve & Publish Then only the selected targets are approved and published with new version numbers And unselected targets remain unchanged And the operation returns a summary with counts of successes and failures And failures include per-target error messages and a Retry Failed option And successful approvals update linked action pages within 30 seconds
Keyboard Shortcuts Enable Rapid Review
Given focus is in the Quick Approve & Edit panel Then Ctrl/Cmd+S saves a draft without publishing And Ctrl/Cmd+Enter approves and publishes when the user has Approver role And D toggles the diff view; J and K navigate to next and previous suggestion And a visible shortcut help overlay lists available shortcuts And shortcuts do not fire when focus is inside fields capturing the same keystrokes unless modifiers are used
Approval Records Available for Reporting
Given an approval occurs Then the system writes an immutable approval record with: campaign ID, target ID, channel, script version, previous version, approver ID, approver role, approved UTC timestamp, diff checksum/hash, source suggestion ID, and publish result And the record is queryable via reporting UI and API within 60 seconds And admins can export approval records to CSV including the above fields And deletion of a campaign does not delete approval records; they are retained per data retention policy
Real-Time Regeneration & Notifications
"As a campaign director, I want scripts to regenerate when the bill status changes so that our call and email language stays up to date without constant monitoring."
Description

Automated regeneration of scripts when bill status or key fields change, triggered by webhooks or scheduled polling. The system queues regeneration jobs, respects rate limits, and updates drafts while preserving the current live version until approval. It notifies campaign owners via in-app alerts and email, shows a concise diff of what changed, and offers one-click approve-and-republish to keep language synchronized with the legislative timeline.
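
A small sketch of the retry behavior the criteria below describe: treat HTTP 429 and 5xx responses as retryable, back off exponentially, and mark the job Failed after three retries. The job callback, delays, and status strings are illustrative assumptions.

```typescript
// Sketch: run a regeneration job with exponential backoff on retryable errors.

type JobResult = { ok: true } | { ok: false; status: number };

async function runWithBackoff(
  job: () => Promise<JobResult>,
  maxRetries = 3,
  baseDelayMs = 2_000,
): Promise<"Succeeded" | "Failed"> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const result = await job();
    if (result.ok) return "Succeeded";
    const retryable = result.status === 429 || result.status >= 500;
    if (!retryable || attempt === maxRetries) return "Failed";
    const delay = baseDelayMs * 2 ** attempt; // 2s, 4s, 8s, ...
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return "Failed";
}

// Simulated provider: rate-limited twice, then succeeds on the third call.
let calls = 0;
runWithBackoff(async () => (++calls < 3 ? { ok: false, status: 429 } : { ok: true }))
  .then((state) => console.log(state, "after", calls, "attempts")); // Succeeded after 3 attempts
```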

Acceptance Criteria
Auto-Regenerate on Webhook
Given a valid webhook payload indicates a change to bill status or key fields for a tracked bill with active Script Autodraft When the webhook is received by the system Then exactly one regeneration job per affected script is enqueued within 5 seconds And the job uses the latest bill data and templates to render a new Draft And the Draft includes updated dynamic fields (bill number, committee name, hearing date/time, vote window) And the job completes within 60 seconds for the 95th percentile And the Live version remains unchanged
Scheduled Polling Change Detection
Given polling is configured to run every 5 minutes and prior field snapshots are stored When no changes are detected in bill status or key fields during a poll Then no regeneration jobs are enqueued When changes are detected in bill status or any key field Then exactly one job per affected script is enqueued within the same poll cycle And duplicate jobs for the same script are prevented within a 60-second deduplication window And the resulting Draft reflects the latest detected values
Queueing and Rate Limit Compliance
Given the outbound provider rate limit is configured to 10 API calls per rolling 60 seconds for testing When 100 scripts require regeneration concurrently Then the system processes jobs via a queue without exceeding the configured limit in any rolling 60-second window And jobs expose states Queued, Running, Succeeded, Failed for monitoring And average queue wait time does not exceed 2 minutes under this load And upon receiving HTTP 429, the job retries with exponential backoff up to 3 attempts before marking Failed
Preserve Live Version Until Approval
Given a campaign has a currently published Live script When a regeneration job completes Then only the Draft version is updated with new content And the Live version identifier remains unchanged And supporter-facing pages continue to render the existing Live version And a diff between the new Draft and current Live is available for review
Notifications with Concise Diff
Given a regeneration job completes successfully for a campaign When the Draft is updated Then an in-app alert is created for all campaign owners within 10 seconds including a summary of changed fields and a link to review And an email notification is delivered to campaign owners within 2 minutes containing the same summary and link And the diff displays field-level changes (bill status, committee name, hearing date/time, vote window) and line-level text changes limited to 15 lines And no duplicate notifications are sent for the same job
One-Click Approve and Republish
Given a user with publish permission is viewing the Draft vs Live diff When the user selects Approve and Republish Then the Draft is promoted atomically to the new Live version with a new version ID And supporter-facing pages reflect the new Live content within 30 seconds And an audit log captures user, timestamp, prior version, new version, and change summary And the prior Live version is retained and can be restored
Failure Handling and Recovery
Given a regeneration job encounters a non-2xx response or timeout When all retry attempts are exhausted Then the job is marked Failed and the Live version remains unchanged And an in-app alert labeled Failure is generated with error code/message and a Retry action And an email notification is sent within 5 minutes containing the failure summary When a user clicks Retry Then the job is re-enqueued once and follows the standard processing and notification flow
Localization & Personalization
"As a grassroots organizer, I want scripts that reference the correct legislator and reflect local context and language so that supporters feel confident taking action."
Description

Support for district-specific and multilingual scripts with merge fields for legislator name, title, district, and local references. It localizes dates and times, provides Spanish as an initial secondary language with extensible language packs, and offers side-by-side previews for translators and reviewers. It ensures safe fallbacks when translations are missing, validates merge fields at publish time, and personalizes scripts using supporter location while protecting privacy.
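
A minimal sketch of merge-field rendering with locale fallback, assuming hypothetical phrase packs and token names; missing phrases fall back to en-US and unresolved tokens are left in place so publish-time validation can block them:

```python
import re

# Assumed phrase packs; real language packs would be loaded from configuration.
PHRASES = {
    "en-US": {
        "greeting": "Dear {{legislator.title}} {{legislator.name}},",
        "ask": "Please support the bill before the {{committee.name}} committee.",
    },
    "es-US": {
        "greeting": "Estimado/a {{legislator.title}} {{legislator.name}}:",  # "ask" missing on purpose
    },
}

TOKEN = re.compile(r"\{\{([a-z_.]+)\}\}")

def render(key: str, locale: str, fields: dict) -> tuple[str, bool]:
    """Return (rendered_text, used_fallback); missing phrases fall back to en-US,
    and unresolved merge fields stay as {{...}} for publish-time validation to catch."""
    pack = PHRASES.get(locale, {})
    used_fallback = key not in pack
    template = pack.get(key, PHRASES["en-US"][key])
    text = TOKEN.sub(lambda m: str(fields.get(m.group(1), m.group(0))), template)
    return text, used_fallback

fields = {"legislator.title": "Rep.", "legislator.name": "Ana Díaz", "committee.name": "Energy"}
print(render("greeting", "es-US", fields))  # Spanish phrase, no fallback
print(render("ask", "es-US", fields))       # falls back to en-US and is flagged for translators
```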

Acceptance Criteria
District-Specific Merge Field Resolution
Given a supporter’s location resolves to a single legislative district and chamber When Script Autodraft generates a draft for that supporter’s action Then {{legislator.name}}, {{legislator.title}}, and {{legislator.district}} render with values from the authoritative directory for that district And any configured {{local.reference}} tokens render from the district’s content library And the rendered script contains no unresolved {{...}} placeholders And the same input produces identical output values on repeat generation And if the district cannot be resolved, the script falls back to a neutral salutation without legislator PII and logs a non-blocking warning
Spanish Script Generation with Localized Dates/Times
Given language = Spanish (es-US) for a selected campaign And the draft includes dynamic date/time fields for hearings or vote windows with a specified event timezone When Script Autodraft generates call and email scripts Then all static copy is in Spanish and dynamic date/time strings are formatted per es-US locale conventions with correct timezone conversion And day and month names appear in Spanish And punctuation and capitalization follow Spanish locale rules for dates/times (e.g., 3:30 p. m.) And channel-specific subject/opening lines are generated in Spanish And no English strings appear unless explicitly marked as domain terms in the language pack
Safe Fallbacks for Missing Translations
Given the selected target language has missing keys for one or more phrases When a draft is generated or previewed Then only the missing segments fall back to the default language (en-US) while preserving all merge field values And the UI flags each fallback segment with a non-destructive indicator for translators And a machine-readable report lists missing keys by locale, channel, and template section And generation does not error or omit required content And publishing remains allowed unless critical system keys are missing; in such cases the publish action is blocked with a clear error
Publish-Time Merge Field Validation
Given a campaign owner attempts to publish scripts across all enabled locales and channels When publish validation runs Then publishing is blocked if any merge fields remain unresolved (e.g., {{legislator.name}}, {{hearing.time}}) And the error report enumerates each unresolved field with locale, channel, and template path And resolving the listed issues results in a successful publish without additional hidden errors And the validation result is recorded with timestamp, user, and locales checked for auditability
Translator Side-by-Side Preview and Review
Given a translator opens the side-by-side preview for a script When they select a source language (en-US) and a target language (e.g., es-US) Then the interface displays aligned segments with synchronized highlighting of merge fields that are non-editable And toggling “Show sample values” injects live district-specific sample data without persisting it And the translator can approve or request changes per segment, with status tracked and exportable And keyboard navigation and copy protections prevent accidental alteration of merge tokens
Extensible Language Packs Enablement
Given an admin uploads or enables a new language pack (e.g., fr-CA) containing phrase keys and locale settings When the pack passes schema validation Then the new locale appears in language selectors for Script Autodraft within 60 seconds And generation uses the pack’s phrases and locale rules without code changes And missing keys in the new pack safely fall back to the default language with reporting And the pack can be disabled to immediately remove it from selection without affecting existing published content
Privacy-Preserving Location-Based Personalization
Given a supporter provides location data (address or ZIP+4) for personalization When district resolution runs Then only the derived district and chamber identifiers are stored with the action; raw address/coordinates are discarded after personalization completes And no raw location PII is written to analytics, logs, or webhooks And generated scripts include district-specific personalization without exposing supporter PII And a privacy audit log records that location data was processed ephemerally with retention = session scope
Audit Trail & Evidence Export
"As a nonprofit director, I want an audit-ready history of what scripts were used when so that we can satisfy grant reporting and compliance reviews."
Description

Immutable versioning and storage of every generated, edited, approved, and published script with linked metadata (bill stage, data sources, approver, timestamps). It enables export to CSV and PDF, provides version IDs that can be referenced in reports, and links script versions to action outcomes for audit-ready proof. Retention policies and access controls protect sensitive information while satisfying compliance and grant reporting requirements.
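
A minimal sketch of how an immutable version record and its integrity check could look; field names mirror the criteria below, while storage, access control, and export are out of scope here:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def create_version_record(script_body: str, action_type: str, actor_user_id: str, bill_metadata: dict) -> dict:
    """Build an append-only version record; prior records are never mutated."""
    return {
        "version_id": str(uuid.uuid4()),                      # UUIDv4 identifier
        "sha256_hash": hashlib.sha256(script_body.encode("utf-8")).hexdigest(),
        "action_type": action_type,                           # generated | edited | approved | published
        "actor_user_id": actor_user_id,
        "created_at_utc": datetime.now(timezone.utc).isoformat(),
        **bill_metadata,                                      # bill_number, bill_stage, committee_name, ...
    }

def verify(record: dict, stored_body: str) -> bool:
    """Integrity check: recompute the hash over the stored content and compare."""
    return hashlib.sha256(stored_body.encode("utf-8")).hexdigest() == record["sha256_hash"]

rec = create_version_record("Call your senator today...", "published", "user-42",
                            {"bill_number": "HB 123", "bill_stage": "In Committee"})
print(json.dumps(rec, indent=2))
print(verify(rec, "Call your senator today..."))  # True
```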

Acceptance Criteria
Immutable Versioning and Metadata Capture
Given a user generates, edits, approves, or publishes a script in Script Autodraft When the change is saved Then a new immutable version record is created without modifying any prior version And the record includes version_id (UUIDv4), sha256_hash of exact stored content, action_type, actor_user_id, actor_role, and created_at_utc timestamp And the record links bill_number, bill_stage, committee_name, hearing_datetime_utc, vote_window_start_utc, vote_window_end_utc, and data_source_uris And attempting to update a prior version returns HTTP 409 and is audit-logged with actor_user_id and ip_address And GET /versions/{version_id} returns the exact content and metadata whose sha256_hash matches the stored value
Linking Script Versions to Action Outcomes
Given an action page renders a specific script version When a supporter completes a call or email action Then the action record persists script_version_id, channel (call|email), completed_at_utc, and outcome_status And aggregate metrics by script_version_id are updated within 60 seconds of action completion And GET /versions/{version_id}/summary returns actions_total, actions_calls, actions_emails, success_rate, and last_action_at_utc And exported metrics match the live dashboard within ±0.5% for the same filters
CSV Evidence Export
Given a user with export permissions selects filters (date range, bill number/stage, approver, channel) When they request Export CSV Then a UTF-8 CSV with a header row is generated within 60 seconds for up to 5,000 versions and 250,000 actions And the CSV contains one row per version with columns: version_id, action_type, script_body, bill_number, bill_stage, committee_name, hearing_datetime_utc, vote_window_start_utc, vote_window_end_utc, data_source_uris, approver_user_id, approver_name, created_at_utc, published_at_utc, actions_total, actions_calls, actions_emails And values are escaped per RFC 4180, timestamps are UTC ISO-8601, and empty fields are blank (not "null") And the file is downloadable immediately and also available via a pre-signed URL that expires within 24 hours And an audit log entry records exporter_user_id, applied filters, row_count, and file_checksum_sha256
PDF Evidence Packet Export
Given a user with export permissions selects one or more version_ids and a date range When they request Export PDF Then the system generates a PDF/A-2b compliant packet with a cover page (org, campaign, filters, generated_at_utc) And for each version, the packet includes script content, full metadata, approver name/role, approved_at_utc (if available), version_id, sha256_hash, and a QR code that resolves to the read-only version URL And an action summary per version is included with totals by channel and a daily time series chart And the packet is watermarked "Audit Copy," page-numbered (Page X of Y), and includes a manifest page listing all version_ids and hashes And generation completes within 90 seconds for up to 100 versions and 50 pages with total file size ≤ 50 MB
Retention Policy Enforcement and Access Controls
Given organization-level retention settings are configured for evidence and action PII When a record reaches end-of-retention Then a nightly job purges or anonymizes PII as configured while retaining immutable version records and required aggregates And each purge produces an audit log with job_id, scope, counts affected, and reason, and a purge report CSV is available to Compliance Admins And only Org Admins and Compliance Admins can view or export evidence; other roles receive HTTP 403 and the attempt is audit-logged with actor and ip_address And data access is scoped to the user's organization; cross-organization access is blocked and logged And legal holds can be placed on specific version_ids to suspend purges until released
Integrity Verification and Hash Chain
Given an auditor requests integrity verification for one or more version_ids When the system recomputes sha256_hash values over stored content Then each recomputed hash must match the stored hash; mismatches are flagged, return integrity: fail, and emit an alert event And a manifest.json (version_id, sha256_hash, created_at_utc) is embedded in exports and can be validated via API/CLI against live data And verification of 1,000 version records completes within 30 seconds at the 95th percentile

Redline Brief

Produces a plain‑language, color‑coded digest of what changed since your last update—status, amendments, sponsors, vote counts—and highlights exactly which script lines should be edited. Gives organizers a fast briefing they can trust and act on immediately.

Requirements

Legislative Update Ingestion & Versioning
"As an organizer, I want RallyKit to automatically capture the latest bill changes and keep prior versions so that I can see exactly what changed without manual tracking."
Description

Implement reliable connectors to official legislative data sources across jurisdictions and sessions, normalizing fields such as status, actions timeline, sponsors and cosponsors, amendment texts, committee and floor votes, and effective dates. Persist timestamped snapshots for each tracked bill and campaign linkage, with debounced polling/webhooks, rate limiting, retries, and deduplication to ensure freshness without noise. Store versions and metadata in a versioned datastore keyed by bill/session so that downstream services can compute precise diffs. Integrate with RallyKit campaign records to automatically associate updates to the correct campaigns and enable real-time dashboard updates.
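
A minimal sketch of snapshot-on-change detection using a content hash; order-normalizing sponsor arrays before hashing keeps source ordering noise from creating spurious versions (field names, string sponsor IDs, and the placeholder version_id are assumptions):

```python
import hashlib
import json
from datetime import datetime, timezone

def normalize_for_hash(doc: dict) -> dict:
    """Order-normalize arrays that sources emit in unstable order before hashing."""
    out = dict(doc)
    for key in ("sponsors", "cosponsors"):
        if isinstance(out.get(key), list):
            out[key] = sorted(out[key])
    return out

def content_hash(doc: dict) -> str:
    return hashlib.sha256(json.dumps(normalize_for_hash(doc), sort_keys=True).encode()).hexdigest()

def maybe_snapshot(latest: dict | None, normalized: dict) -> dict | None:
    """Create a new immutable snapshot only when material content changed; otherwise return None."""
    new_hash = content_hash(normalized)
    if latest and latest["content_hash"] == new_hash:
        return None  # identical payload: idempotent, no duplicate snapshot
    return {
        "bill_key": normalized["bill_key"],
        "version_id": new_hash[:12],  # placeholder; a real datastore would issue its own IDs
        "previous_version_id": latest["version_id"] if latest else None,
        "content_hash": new_hash,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "document": normalized,
    }
```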

Acceptance Criteria
Ingest and Normalize Multi‑Jurisdiction Legislative Data
Given at least one official source per configured jurisdiction and session When an ingestion run is triggered by schedule or webhook Then records for all tracked bills are fetched without transport or parse errors And each record is normalized to the internal schema with fields: bill_key(jurisdiction, session, bill_number), status, actions[], sponsors[], cosponsors[], amendments[], committees[], votes[], effective_date, last_action_at, source_url, source_updated_at And invalid or missing fields are set to null with reason codes and the record still validates the schema And records that fail schema validation are quarantined and visible in an admin review queue within 1 minute And normalized documents include a stable unique bill_key and ISO‑8601 UTC timestamps
Versioned Snapshot Persistence by Bill/Session
Given a tracked bill with an existing latest snapshot When a newly normalized record differs from the latest snapshot beyond whitespace/order‑only changes Then an immutable snapshot is created with version_id, bill_key, created_at, changed_fields[], previous_version_id, and content_hash And no snapshot is created when no material fields changed And snapshots for a bill_key are retrievable in descending created_at order And snapshots are retained for at least 24 months And concurrent ingestions for the same bill_key produce a single latest snapshot via optimistic concurrency or locking
Debounced Triggers, Rate Limits, Retries, and Deduplication
Given multiple webhook notifications for the same bill_key within 60 seconds When ingestion is scheduled Then only one fetch occurs due to a per‑bill debounce window of 60 seconds Given configured rate_limit_per_provider When fetching Then the system does not exceed 90% of the configured limit and backs off on 429 responses Given a transient 5xx from a provider When fetching Then retries occur with exponential backoff up to 3 attempts over 5 minutes before surfacing a failure Given identical payloads or matching content_hashes When evaluated for snapshot creation Then no duplicate snapshots are created (idempotency) And 95th‑percentile end‑to‑end latency from provider change detection to stored snapshot is ≤ 5 minutes
Amendment and Vote Data Fidelity
Given an amendment update for a tracked bill When ingested Then the amendment is stored with amendment_id, title, sponsor(s), status, adopted flag, sequence, and either text or authoritative link, and is versioned on change Given committee or floor roll‑call data When ingested Then votes are stored with chamber, date, motion, yea, nay, present, absent counts and per‑legislator positions when provided And when both per‑legislator and totals are provided, totals equal the sum of individual positions; otherwise aggregate_only=true is set
Automatic Campaign Association and Real‑Time Dashboard Update
Given one or more active campaigns linked to a bill_key When a new snapshot is created Then the snapshot is associated to all linked campaigns within 5 seconds And the dashboard for each campaign displays an “Updated” indicator and shows latest status and last_action_at within 10 seconds And an event “legislation.updated” is published with bill_key, version_id, and campaign_id(s) And archived campaigns are excluded from association and not included in the event payload
Diff Readiness API for Redline Brief
Given two version_ids for the same bill_key When the Redline Brief service requests a diff Then the API returns canonical normalized documents for both versions and an RFC‑6902 JSON Patch representing the changes And sponsor/cosponsor arrays are order‑normalized to ensure stable diffs And the response includes change_classes for status, amendments, votes, and sponsors And p95 latency for adjacent version diffs is ≤ 300 ms under a load of 50 RPS
Observability, Audit Trail, and Data Freshness SLA
Given any stored snapshot When requested via admin or API Then the system returns audit metadata including source_url, fetch_time, raw_payload_hash, transform_version, and operator_id if manually edited And all timestamps are ISO‑8601 UTC And service metrics are exposed: ingestion_success_rate, snapshot_create_rate, dedupe_rate, update_latency_p95 per provider And alerts fire within 2 minutes when ingestion_success_rate drops below 99% over 15 minutes or update_latency_p95 exceeds 5 minutes
Redline Diff Engine
"As a campaign director, I want a clear, color-coded summary of what changed since my last brief so that I can grasp updates in seconds."
Description

Create a field-aware differencing service that compares the latest bill snapshot to the last briefed version and classifies changes into categories (status update, amendment added/removed, sponsor change, vote count/result change). Produce structured diff output (JSON) with severity levels and machine-readable tags used to drive consistent color-coding in the UI and exports. Include rules per jurisdiction to ignore trivial edits (e.g., formatting) and highlight meaningful changes. Expose an API to fetch diffs by bill or campaign, enabling the real-time dashboard to render color-coded updates and power follow-on features such as notifications and script mapping.
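
A minimal sketch of one classification rule (status changes), assuming a per-jurisdiction stage map supplied by configuration; the category, severity, and tag names follow the criteria below:

```python
# Stage ordering per jurisdiction would come from configuration; this map is an assumption.
STAGE_ORDER = {"Introduced": 1, "In Committee": 2, "Passed House": 3, "Passed Senate": 4, "Signed": 5}

def classify_status_change(prev: str, curr: str) -> dict | None:
    """Return one structured diff item for a status change, or None when nothing changed."""
    if prev == curr:
        return None
    crosses_stage = STAGE_ORDER.get(prev) != STAGE_ORDER.get(curr)
    return {
        "category": "status_update",
        "field_path": "status",
        "previous_value": prev,
        "current_value": curr,
        "severity": "high" if crosses_stage else "low",
        "tags": ["status_changed", "stage_transition" if crosses_stage else "non_substantive_stage"],
    }

print(classify_status_change("In Committee", "Passed House"))  # high-severity stage transition
print(classify_status_change("Scheduled", "Rescheduled"))      # low severity, non-substantive
```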

Acceptance Criteria
Classify Bill Status Update
Given a last briefed bill snapshot with status = "In Committee" and a latest snapshot with status = "Passed House" When the diff engine compares the two snapshots Then the diff output contains exactly one item with category = "status_update", field_path = "status", previous_value = "In Committee", current_value = "Passed House", severity = "high", and tags includes ["status_changed", "stage_transition"] And no other change items are emitted for status Given a status change that does not cross a stage boundary per configuration (e.g., "Scheduled" to "Rescheduled") When the diff engine runs Then the emitted status_update item has severity = "low" and tags includes ["status_changed", "non_substantive_stage"]
Detect Amendment Added and Removed
Given the last briefed snapshot contains amendments ["A1"] and the latest snapshot contains amendments ["A1", "A2"] When the diff engine runs Then the diff output contains one item with category = "amendment_added", amendment_id = "A2", severity = "medium", and tags includes ["amendment_added"] Given the last briefed snapshot contains amendments ["A1", "A2"] and the latest snapshot contains amendments ["A1"] When the diff engine runs Then the diff output contains one item with category = "amendment_removed", amendment_id = "A2", severity = "medium", and tags includes ["amendment_removed"] Given an amendment where only formatting changes occur within its text and the jurisdiction rules mark such edits as trivial When the diff engine runs Then no amendment_* change item is emitted for that amendment
Sponsor Roster Change Detection
Given last briefed sponsors = { prime: "S1", cosponsors: ["S2"] } and latest sponsors = { prime: "S3", cosponsors: ["S2", "S4"] } When the diff engine runs Then the diff contains one item category = "sponsor_prime_changed" with previous_value = "S1", current_value = "S3", severity = "high", tags includes ["sponsor_changed", "prime_changed"] And the diff contains one item category = "sponsor_added" with sponsor_id = "S4", severity = "low", tags includes ["sponsor_added"] Given a sponsor is removed from cosponsors between snapshots When the diff engine runs Then the diff contains one item category = "sponsor_removed" with sponsor_id set to the removed sponsor, severity = "low", tags includes ["sponsor_removed"]
Vote Count and Result Change Detection
Given last briefed vote summary = { aye: 48, nay: 50, abstain: 2, result: "Failed" } and latest vote summary = { aye: 51, nay: 49, abstain: 0, result: "Passed" } When the diff engine runs Then the diff contains one item category = "vote_result_changed" with previous_value = "Failed", current_value = "Passed", severity = "high", tags includes ["vote_changed", "result_flip"] And the diff contains one item category = "vote_counts_changed" with previous_value = { aye: 48, nay: 50, abstain: 2 }, current_value = { aye: 51, nay: 49, abstain: 0 }, severity = "medium", tags includes ["vote_changed", "tally_updated"] Given vote tallies change but the result remains the same When the diff engine runs Then only one item category = "vote_counts_changed" is emitted and no vote_result_changed item is emitted
Jurisdiction-Specific Ignore Rules
Given jurisdiction = "CA" and the only differences between snapshots are whitespace and punctuation in bill_text When the diff engine runs Then no diff items are emitted for bill_text changes Given jurisdiction = "CA" and the bill_title changes from "Clean Air Act Update" to "Clean Air Act Amendments" When the diff engine runs Then exactly one item category = "title_changed" is emitted with severity = "medium" and tags includes ["title_changed"] Given the ignore rules configuration for jurisdiction = "CA" is updated to treat committee name casing changes as trivial When the diff engine runs on snapshots differing only by committee name casing Then no diff item is emitted for committee name
Structured Diff JSON and Script Mapping Hints
Given two snapshots with at least one detected change When the diff engine runs Then the response is valid JSON with top-level fields: bill_id, base_version_id, head_version_id, jurisdiction_code, generated_at (ISO 8601), and items (array) And each items[i] contains: id (stable), category in ["status_update","amendment_added","amendment_removed","sponsor_added","sponsor_removed","sponsor_prime_changed","vote_counts_changed","vote_result_changed","title_changed"], severity in ["low","medium","high"], field_path, previous_value, current_value, tags (array of strings) And for items with category in ["status_update","vote_result_changed","amendment_added","amendment_removed"] the item includes script_hints (array of strings) mapping to script template keys (e.g., "script:status_line", "script:amendment_block") And tags are consistent with category and severity (e.g., include category and feature tags such as "status_changed", "vote_changed", "amendment_changed") And the JSON validates against the published schema version = "v1" present in the payload
Diff API Fetch by Bill or Campaign
Given a bill with 120 stored diffs When GET /api/v1/diffs?bill_id={bill_id}&page_size=50 Then the API responds 200 within 500 ms (p95) and returns 50 items with a next_cursor token Given a valid next_cursor from the previous response When GET /api/v1/diffs?cursor={next_cursor} Then the API responds 200 and returns the next page of items without duplicates Given a campaign linked to three bills with diffs When GET /api/v1/diffs?campaign_id={campaign_id} Then the API responds 200 and returns a combined, time-ordered (desc by generated_at) list of diffs across all linked bills Given a since parameter (ISO 8601 timestamp) or since_version_id When GET /api/v1/diffs?bill_id={bill_id}&since=2025-08-01T00:00:00Z Then only diffs generated after the given point are returned Given an invalid query (missing bill_id or campaign_id and no cursor) When a request is made Then the API responds 400 with a machine-readable error code Given an If-None-Match header with a current ETag for the requested page When the request is made Then the API responds 304 with empty body
Plain‑Language Digest Generator
"As a busy organizer, I want a plain-language brief I can trust so that I don’t need to parse legislative jargon."
Description

Transform structured diffs into concise, non-legalese summaries that explain what changed and why it matters, using deterministic templates with jurisdiction-aware phrasing and fallbacks. Support concise bullets and compact paragraphs with hyperlinks to source documents, while preserving accuracy via guardrails that restrict claims to verifiable data. Provide configuration for tone (neutral/advocacy-ready) and length, and integrate with the Brief view in RallyKit so organizers can review, edit, and publish the digest to action pages and updates without leaving the dashboard.
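
A minimal sketch of deterministic, template-driven bullet rendering; the templates and priority order are illustrative, and unknown change categories are skipped rather than paraphrased, which is the guardrail behavior described above:

```python
# Deterministic rendering: identical diff items and templates produce identical output.
# Template phrasing here is illustrative, not RallyKit's shipped copy.
TEMPLATES = {
    "status_update": "The bill moved from {previous_value} to {current_value}.",
    "amendment_added": "A new amendment ({amendment_id}) was filed.",
    "sponsor_added": "{sponsor_id} signed on as a cosponsor.",
}
PRIORITY = ["status_update", "amendment_added", "sponsor_added"]

def render_bullets(diff_items: list[dict]) -> list[str]:
    """One bullet per change, ordered by fixed priority and then by stable item id."""
    known = [i for i in diff_items if i["category"] in TEMPLATES]  # guardrail: never paraphrase unknowns
    ordered = sorted(known, key=lambda i: (PRIORITY.index(i["category"]), i["id"]))
    return ["• " + TEMPLATES[i["category"]].format(**i) for i in ordered]

items = [
    {"id": "2", "category": "amendment_added", "amendment_id": "A2"},
    {"id": "1", "category": "status_update", "previous_value": "In Committee", "current_value": "Passed House"},
]
print("\n".join(render_bullets(items)))
```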

Acceptance Criteria
Neutral Bullet Digest from State Bill Diff
Given a structured diff containing status, amendment, sponsor, and vote count changes for a state bill and a recognized jurisdiction When the user selects tone = Neutral and format = Bullets and clicks Generate in the Brief view Then the system produces a bullet digest where each change category yields at least one bullet And each bullet uses plain-language phrasing from the jurisdiction’s template set with no legalese terms (e.g., no section/clause citations) unless present in the diff And each bullet is tagged with its semantic change type (status|amendment|sponsor|vote) for downstream color-coding And the digest includes a brief “what this means” clause only when a jurisdiction rule exists for that change type; otherwise the clause is omitted And generation completes within 2 seconds at P95 for diffs with up to 25 discrete changes
Jurisdiction-Aware Phrasing with Safe Fallbacks
Given the jurisdiction for the bill is recognized and a template set is available When generating the digest Then terms reflect jurisdiction metadata (e.g., Assembly vs House; committee vs standing committee) as defined in the template set And if a required rule or term is missing, the generator uses a generic fallback phrase and suppresses the specialized clause without error And a non-blocking warning is attached to the digest metadata identifying each fallback that occurred And the final digest contains no unresolved placeholders or template tokens
Accuracy Guardrails and Source Linking
Given a structured diff and linked source artifacts (bill page, amendment text, vote record) are provided When generating the digest Then every factual statement maps to explicit fields in the diff or to deterministic rule text; no speculative or inferential language is included And all numbers, dates, chamber names, and sponsor names exactly match the input data And each bullet or paragraph includes at least one hyperlink to a primary source or official record for that change category And if a required source URL is missing for any change category, generation fails with a descriptive error and no digest is published
Tone and Length Configuration
Given tone is set to Neutral or Advocacy-ready and length is set to Short, Medium, or Long When generating the digest Then Neutral tone contains no directives or calls-to-action and uses neutral verbs (e.g., "changed", "moved") And Advocacy-ready tone uses supporter-facing phrasing and may include one urgency clause drawn only from approved templates And Short length is <= 500 characters, Medium is 501–900 characters, and Long is 901–1500 characters (excluding URLs) And the generator prioritizes inclusion of status and amendment facts first, then sponsors, then votes, while meeting the selected length band
Bullets vs Compact Paragraphs Formatting
Given the output format is set to Bullets or Paragraph When generating the digest Then Bullets mode outputs a list where each bullet covers exactly one change category and retains clickable hyperlinks And Paragraph mode outputs a single compact paragraph of <= 7 sentences covering all change categories in priority order And formatting renders correctly in the Brief view and on action pages without broken links or markdown artifacts
Deterministic Output for Identical Input
Given identical input data, settings, and template version When the digest is generated multiple times Then the output text and metadata are byte-identical across runs And changing any single input field or setting results in a different output hash And the generator records the input hash and template version in the digest metadata
Brief View Edit-Approve-Publish with Audit Trail
Given a generated digest is presented in the Brief view When a user with Publish permission edits, saves, and publishes the digest Then the edited content is versioned and saved as a draft prior to publish And publishing updates the linked action pages and supporter update modules without leaving the dashboard And an audit record is created capturing timestamp, user ID, input diff hash, output hash, destinations updated, and publish result And the UI reflects Published status and provides links to the updated destinations
Script Impact Mapping & Highlights
"As a script writer, I want RallyKit to pinpoint which lines of my call and email scripts need edits so that I can update actions quickly and accurately."
Description

Analyze diffs against existing RallyKit call and email scripts to detect references to status, sponsors, vote outcomes, or key bill elements, then highlight specific lines and placeholders that should be updated. Present in-editor highlights with color-coded badges, inline suggestions, and one-click actions to accept updates or mark as reviewed. Track last-reviewed timestamps per script segment to prevent repeat prompts, and log applied changes for auditability. This integrates with one-tap action pages so updates propagate instantly to supporter-facing content once published.
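
A minimal sketch of mapping diff change types onto script lines; the reference patterns below are assumptions standing in for the real matcher over placeholders and bill references:

```python
import re

# Map diff change types to the script references they invalidate; patterns are illustrative.
REFERENCE_PATTERNS = {
    "status_update": re.compile(r"\{\{bill\.status\}\}|in committee|passed the house", re.IGNORECASE),
    "sponsor_prime_changed": re.compile(r"\{\{sponsor\.name\}\}", re.IGNORECASE),
    "vote_result_changed": re.compile(r"\{\{vote\.result\}\}|\bvote\b", re.IGNORECASE),
}

def lines_to_highlight(script_text: str, diff_items: list[dict]) -> list[dict]:
    """Return (line, change_type) pairs for every script line touched by a flagged change."""
    hits = []
    changed_types = {item["category"] for item in diff_items}
    for line_no, line in enumerate(script_text.splitlines(), start=1):
        for change_type in changed_types:
            pattern = REFERENCE_PATTERNS.get(change_type)
            if pattern and pattern.search(line):
                hits.append({"line": line_no, "change_type": change_type})
    return hits

script = "Hi, I'm a constituent.\nThe bill is in committee right now.\nPlease vote yes."
print(lines_to_highlight(script, [{"category": "status_update"}]))  # flags line 2 only
```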

Acceptance Criteria
Inline Highlighting of Script Lines Requiring Updates
Given a script contains references to bill status, sponsors, vote outcomes, or key bill elements and a Redline diff flags changes to any of these When the editor opens the script with the diff context Then every affected line and placeholder token is highlighted inline and all unaffected content remains unhighlighted And multiple occurrences of a changed reference across the script are all highlighted And initial highlights render within 300 ms for desktop and 1000 ms for mid-range mobile on scripts up to 5,000 words
Color-Coded Badges for Change Types
Given a highlighted segment relates to a status change When it is rendered Then it shows a badge labeled "Status" using design token badge.status and an accessible tooltip describing the change Given a highlighted segment relates to sponsor changes When it is rendered Then it shows a badge labeled "Sponsors" using design token badge.sponsors and an accessible tooltip describing the change Given a highlighted segment relates to vote outcomes When it is rendered Then it shows a badge labeled "Vote" using design token badge.vote and an accessible tooltip describing the change Given a highlighted segment relates to a key bill element change When it is rendered Then it shows a badge labeled "Element" using design token badge.element and an accessible tooltip describing the change And all badges meet WCAG AA contrast (>=4.5:1) and include non-color cues (icon or text)
Inline Suggestions with One-Click Accept or Mark Reviewed
Given a highlighted segment is focused or expanded When the suggestion panel opens Then it displays a proposed replacement derived from the diff, preserving placeholders and variable syntax Given the user clicks Accept update When the action completes Then only the targeted segment text is replaced, the highlight is cleared, an audit log entry is created, and the segment state is set to updated Given the user clicks Mark reviewed When the action completes Then no text changes occur, the segment’s last_reviewed timestamp is stored, the highlight style is downgraded to reviewed, and further prompts for this segment are suppressed And Accept update and Mark reviewed are operable via mouse and keyboard (tab + enter/space) and are announced by screen readers
Repeat Prompt Suppression via Last-Reviewed Timestamps
Given a segment was marked reviewed at T1 against diff version V1 When the editor is reopened with the same diff V1 still current Then that segment is not prompted again Given a new diff version V2 updates the same reference with updated_at > last_reviewed When the editor loads with V2 Then the segment is prompted again And last_reviewed is stored per script segment at the workspace level and persists across sessions
Audit Logging of Applied Script Changes
Given the user accepts an update When the change is applied Then an append-only audit entry is recorded with fields: script_id, segment_id, diff_id, change_type, old_text, new_text, user_id, timestamp, action=accept Given the user marks a segment reviewed When the action is recorded Then an audit entry is recorded with fields: script_id, segment_id, diff_id, change_type, user_id, timestamp, action=review And audit logs are filterable by date range and exportable to CSV and JSON And audit entries are immutable and retained for at least 24 months
Publish and Propagate to One-Tap Action Pages
Given accepted updates exist in the editor When the user clicks Publish Then the supporter-facing one-tap call and email action pages reflect the updated script within 5 seconds and stale caches are invalidated And the system records a last_published timestamp and shows a success state Given propagation fails When the timeout threshold (10 seconds) is exceeded or an error occurs Then the publish is rolled back, an error message is shown, and no supporter-facing content changes
Source Transparency & Audit Trail
"As a nonprofit director, I want traceable sources and timestamps for each change so that I can defend our actions in audits and to stakeholders."
Description

Display verifiable provenance for every change, including data source, retrieval timestamp, version IDs, and direct links to official legislative pages and archived snapshots. Maintain an immutable record of published briefs and script updates, with reviewer identity and time, and provide export options (PDF/CSV/JSON) suitable for audits and grant reporting. Surface provenance badges within the brief so organizers can quickly confirm accuracy and share audit-ready proof with stakeholders.
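
A minimal sketch of an append-only, hash-chained audit log in which each entry commits to the previous entry's hash, so any retroactive edit fails verification (field names are illustrative):

```python
import hashlib
import json
from datetime import datetime, timezone

def append_audit_entry(chain: list[dict], entry: dict) -> dict:
    """Append-only log: each entry embeds the previous entry's hash before being hashed itself."""
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {**entry,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "previous_hash": previous_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash in order; a single edited entry makes verification fail."""
    previous_hash = "0" * 64
    for entry in chain:
        expected = {k: v for k, v in entry.items() if k != "hash"}
        if entry["previous_hash"] != previous_hash:
            return False
        if hashlib.sha256(json.dumps(expected, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        previous_hash = entry["hash"]
    return True

log: list[dict] = []
append_audit_entry(log, {"entity": "brief-7", "action": "publish", "reviewer": "user-42"})
append_audit_entry(log, {"entity": "script-3", "action": "update", "reviewer": "user-42"})
print(verify_chain(log))  # True; tampering with any entry flips this to False
```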

Acceptance Criteria
Provenance badges within Redline Brief
Given a published Redline Brief is viewed When the brief loads Then each changed section displays a provenance badge containing source name, retrieval timestamp (ISO 8601, UTC), and version ID And clicking a badge opens the source details panel And badges meet WCAG AA contrast and are visible on desktop and mobile And badges render within 300 ms after main content load
Source details panel shows verifiable metadata
Given a user clicks a provenance badge When the source details panel opens Then it displays: canonical data source name, official source URL, retrieval timestamp (ISO 8601, UTC), internal version ID, bill version label (e.g., Introduced/Amended), parser version, and SHA-256 content hash And timestamps differ from backend retrieval record by no more than 60 seconds And each field has a Copy action that places the exact value on the clipboard with a success toast And if live source is unavailable, a clear status is shown with a link to the last successful archived snapshot
Links to official pages and archived snapshots
Given the source details panel is open When the user selects "Official Page" Then the app opens the official legislative URL in a new tab and receives HTTP 200–399 within 5 seconds And when the user selects "Archived Snapshot" Then the app opens an immutable snapshot URL whose timestamp is within ±60 seconds of the retrieval timestamp And the snapshot content hash matches the stored SHA-256 value And if any link check fails, a non-blocking warning is displayed and the failure is recorded in the export metadata
Immutable audit log for briefs and script updates
Given a brief or script is published or updated When the action is saved Then an append-only audit entry is recorded with unique ID, entity type, entity ID, action (publish/update), reviewer user ID, reviewer display name, timestamp (ISO 8601, UTC), previous version ID, new version ID, change summary, and a hash signature And audit entries cannot be modified via UI or API; mutation attempts return 403 and are logged And a hash-chain integrity check across the last 10,000 entries returns Pass And audit entries are retrievable by entity and date range with pagination, responding within 2 seconds for up to 10,000 records
Export audit-ready reports (PDF/CSV/JSON)
Given an organizer requests an export from the brief or audit log When a format (PDF/CSV/JSON) and date range are selected Then the generated file includes provenance fields, official and snapshot links, audit entries, and a file-level SHA-256 checksum And PDF contains a human-readable summary plus a QR code linking to a verification endpoint And CSV and JSON conform to the documented schema version with correct headers/keys and data types And exports up to 10,000 records are available within 60 seconds and retained for 7 days And downloaded files verify successfully against the provided checksum
Version-to-version diff traceability in Redline Brief
Given a new Redline Brief is generated from updated legislative data When compared to the previous version Then the digest shows color-coded diffs for status, amendments, sponsors, and vote counts, each annotated with source version IDs And each suggested script edit includes a reason code and a link to the specific underlying change And hover/tap reveals before/after values with retrieval timestamps for both versions And an API endpoint returns the diff payload with stable IDs and mapping between script edits and source changes And automated tests confirm 100% mapping coverage for supported change types
Alerting & Delivery Channels
"As an organizer, I want to be notified immediately when meaningful changes happen so that I can mobilize supporters without delay."
Description

Deliver Redline Brief updates through configurable channels (email, SMS, Slack) with noise controls that trigger only on material changes and support quiet hours. Include a compact digest with top changes and a deep link to the editor highlighting impacted script lines. Provide per-campaign notification preferences and team-level routing to ensure the right people see urgent updates. Integrate with RallyKit’s real-time dashboard to synchronize read/unread state and allow one-tap publish flows from the notification.
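
A minimal sketch of the noise-control and quiet-hours gate, assuming a default 21:00–07:00 window and a final-vote override; real campaigns would configure both:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

QUIET_START = time(21, 0)          # assumed default quiet window, campaign-local
QUIET_END = time(7, 0)
OVERRIDE_EVENTS = {"final_vote"}   # event types allowed to bypass quiet hours

def in_quiet_hours(now_utc: datetime, campaign_tz: str) -> bool:
    """Evaluate quiet hours in the campaign's timezone, not the recipient's device timezone."""
    local = now_utc.astimezone(ZoneInfo(campaign_tz)).time()
    return local >= QUIET_START or local < QUIET_END  # window crosses midnight

def delivery_decision(event_type: str, is_material: bool, now_utc: datetime, campaign_tz: str) -> str:
    """Return 'send', 'queue', or 'drop' for one update."""
    if not is_material:
        return "drop"                                  # noise control: material changes only
    if in_quiet_hours(now_utc, campaign_tz) and event_type not in OVERRIDE_EVENTS:
        return "queue"                                 # folded into the consolidated digest at window end
    return "send"

now = datetime(2025, 8, 1, 3, 15, tzinfo=ZoneInfo("UTC"))  # 23:15 the prior evening in New York
print(delivery_decision("amendment_filed", True, now, "America/New_York"))  # queue
print(delivery_decision("final_vote", True, now, "America/New_York"))       # send (override)
```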

Acceptance Criteria
Trigger Only on Material Changes
Given noise controls are set to Material Only for a campaign, When a monitored bill update contains only non-material edits (typographical/formatting/metadata), Then no notification is sent on any channel. Given a bill status changes between workflow stages (e.g., Committee → Floor), When the update is ingested, Then a notification is sent per campaign/user preferences. Given a new amendment is filed or an existing amendment is updated, When the update is ingested, Then a notification is sent and the digest lists the amendment identifier(s). Given sponsor list changes by ≥1 sponsor added or removed, When the update is ingested, Then a notification is sent and the digest shows counts and names of changed sponsors. Given a vote event is recorded or total yeas/nays delta ≥3 since last brief, When the update is ingested, Then a notification is sent and the digest includes the latest vote counts. Given a campaign-specific threshold overrides defaults (e.g., vote delta threshold set to 5), When updates occur below the threshold, Then no notification is sent.
Quiet Hours With Scheduled Delivery
Given quiet hours are configured as 21:00–07:00 in the campaign’s timezone, When a material update occurs at 22:15 local time, Then no immediate notification is sent and a queued digest is scheduled for 07:00. Given multiple material updates occur during quiet hours, When quiet hours end, Then a single consolidated digest is delivered summarizing all material changes since the last send. Given a user enables an override for Final Vote events, When a final passage/failure occurs during quiet hours, Then an immediate notification is delivered despite quiet hours. Given a user disables SMS during quiet hours but allows email, When a material update occurs at 23:00, Then only email is queued for delivery at 07:00 and no SMS is sent. Given the campaign timezone is America/New_York, When quiet hours are evaluated, Then suppression and scheduling respect that timezone regardless of the recipient’s device timezone.
Per-Campaign User Preferences and Channel Delivery
Given a user sets per-campaign preferences (Email=On, SMS=Off, Slack=On), When a material change occurs in that campaign, Then notifications are sent only via Email and Slack for that user. Given no explicit user preferences exist for a campaign, When a material change occurs, Then org-level defaults are applied for that user. Given Slack routing is configured to channel #leg-updates with a valid webhook, When a notification is sent, Then a message posts to #leg-updates containing the compact digest and the deep link. Given an email notification is sent, Then the subject includes "[Redline Brief] <Campaign Name> — Change Summary" and the body contains the top changes and the deep link; SPF/DKIM/DMARC pass for the sending domain. Given SMS is enabled and a notification is sent, Then the SMS contains a 1–2 line digest and a shortened deep link; messages exceeding 160 characters are properly concatenated. Given any channel delivery fails (HTTP 5xx, SMTP hard bounce, SMS carrier error), When retry policy executes, Then up to 3 retries occur within 10 minutes and failures are logged for audit.
Team-Level Routing and Escalation
Given a campaign routes urgent updates to the "Legislative Lead" team, When a status changes to Floor or a Final Vote event occurs, Then all team members with at least one enabled channel receive the notification. Given a recipient has muted the campaign, When team routing executes, Then that recipient is excluded from delivery. Given escalation to the "Directors" team is enabled, When primary recipients do not acknowledge (open/click) within 10 minutes, Then the notification is escalated to the Directors team. Given a team member joins or leaves the team, When routing runs, Then membership changes are reflected within 5 minutes based on the current roster.
Compact Digest Content and Deep Link Behavior
Given a notification is generated, Then the digest lists up to the top 5 changes prioritized as: status, amendments, sponsors, votes; if more exist, a "+N more" indicator is included. Given any channel deep link is opened by an authenticated editor, When the editor loads, Then impacted script lines are auto-highlighted and scrolled into view. Given a deep link is opened by a non-authenticated user, When authentication completes, Then the user is redirected to the editor with the original highlight context restored. Given the script changed again after the notification was sent, When the deep link is opened, Then highlights reflect the diff relative to the brief referenced by the notification timestamp. Given color-coding standards for Redline Brief, When the digest renders in Email/Slack, Then additions, removals, and neutral updates use the defined colors with sufficient contrast (WCAG AA).
Read/Unread Synchronization With Dashboard
Given a user opens an email (open pixel) or clicks an SMS/Slack link for an update, When the event is received, Then the update is marked Read for that user in the dashboard within 10 seconds. Given a user manually marks an update as Read in the dashboard, When a duplicate notification exists in any channel, Then no further reminders are sent for that update to that user. Given multiple read events arrive from different channels, When they are processed, Then the operation is idempotent and the earliest timestamp is stored as the read time. Given transient network failure during event reporting, When retry succeeds within 5 minutes, Then the read state is updated accordingly and no duplicate notifications are triggered.
One-Tap Publish From Notification
Given a notification includes a "Publish Script Updates" action and the user has Publisher role and valid session, When the action is tapped, Then the script updates publish within 10 seconds and an in-channel confirmation is shown. Given the user lacks Publisher role, When the action is tapped, Then publish is blocked and the user is directed to the editor with an insufficient permissions notice. Given 2FA is required and not satisfied, When the action is tapped, Then the user is prompted to complete 2FA before publish proceeds. Given a publish is initiated from any channel, Then an audit log records actor, timestamp, channel, and before/after script version identifiers and links to the published change.
Accessible Color Coding & Legend
"As a color-blind user, I want the redline colors and icons to be distinguishable so that I can understand changes without confusion."
Description

Adopt a WCAG AA-compliant color palette and redundant iconography/patterns for change categories to ensure readability for color-blind and low-vision users across web and print. Provide an in-product legend, tooltips, and a print-friendly mode that preserves meaning without color. Allow admins to select from approved palettes to match brand while maintaining accessibility guarantees. Apply the same semantics consistently in the brief view, script editor highlights, and exported PDFs.
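
A minimal sketch of the WCAG 2.x contrast check that palette validation could run against every foreground/background pair; hex inputs are assumed and the thresholds follow the criteria below:

```python
def relative_luminance(hex_color: str) -> float:
    """Relative luminance per WCAG 2.x (sRGB linearization)."""
    def channel(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """(L1 + 0.05) / (L2 + 0.05), with the lighter color as L1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg: str, bg: str, large_text: bool = False) -> bool:
    """AA thresholds: 4.5:1 for normal text, 3:1 for large text and non-text UI."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(round(contrast_ratio("#767676", "#ffffff"), 2))  # ≈ 4.54, passes AA for normal text
print(passes_aa("#767676", "#ffffff"))
```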

Acceptance Criteria
WCAG AA Contrast Compliance
Given the selected accessible palette is applied across brief view, script editor highlights, and exported PDFs, When contrast ratios are measured per WCAG 2.2 AA for all text, icons/graphics, and interactive components, Then normal text has contrast ≥ 4.5:1, large text (≥18pt regular or ≥14pt bold) has contrast ≥ 3:1, non-text UI components/graphics have contrast ≥ 3:1, focus indicators have contrast ≥ 3:1 against adjacent colors, and links are visually distinguished from surrounding text by at least one non-color cue (e.g., underline or icon), And automated accessibility checks report zero contrast violations in app screens and sample PDFs, And colors used in PDFs map 1:1 to the app palette tokens with no unmapped colors.
Redundant Icons and Patterns for Change Categories
Given the change categories {Status Changed, Amendment Added, Sponsor Update, Vote Count Changed, Script Lines to Edit, No Change}, When any category indicator is rendered in the brief view, script editor highlights, or exported PDFs, Then the indicator conveys meaning using at least two non-color cues (unique icon and distinct hatch/pattern) plus a text label, and color is not the sole indicator, And each icon has an accessible name/aria-label matching the category, each label is visible by default, and each pattern remains discernible at 200% zoom and in monochrome/grayscale print, And each indicator meets a minimum 44x44 px touch target on web and does not obscure adjacent text in PDF (minimum 8 px padding).
In-Product Legend and Contextual Tooltips
Given a user is viewing the Redline Brief or Script Editor, When they activate the Legend control, Then a modal or side panel opens within 300 ms listing every change category with its icon, pattern swatch, color swatch, and plain-language description, and it is fully keyboard operable (Tab/Shift+Tab traversal, Esc to close) and announced correctly by screen readers (role, name, focus management), And when a user hovers or focuses any category chip/icon anywhere in the UI, Then a tooltip appears within 300 ms showing the category name and description, dismisses on Esc/blur, remains visible on focus, and exposes the same text via aria-describedby for screen readers, And when printing or exporting to PDF, Then the legend is included on the first or last page with icon/pattern and text so meaning is preserved without color.
Print-Friendly Mode Preserves Meaning Without Color
Given a user initiates browser Print or Export to PDF from the Redline Brief, When the output is rendered in grayscale or monochrome (including via CSS print styles or PDF export settings), Then all change categories remain distinguishable by icon/pattern/label without reliance on color, with no loss of meaning, And text over any patterned background maintains contrast ≥ 4.5:1, pattern line thickness ≥ 1 px at 300 DPI, fonts ≥ 11 pt, and 0.5 in page margins are respected, And an automated test that applies a 100% grayscale filter to the PDF confirms that each category’s indicator remains unique and legible, and a checklist-based manual review confirms no color-only semantics.
Admin Selection of Approved Accessible Palettes
Given an org admin opens Settings > Appearance > Palette, When the palette list loads, Then at least three WCAG AA–validated palettes are presented with names and live previews and no freeform color inputs are available, And when the admin selects a palette and saves, Then the selection is persisted to the organization, an audit log entry is recorded (user id, timestamp, palette id), and the application applies the palette across brief view, script editor, and PDFs within 2 seconds of save, And if a stored palette id is invalid or missing, Then the system falls back to the default accessible palette and records the fallback event.
Semantic Consistency Across Brief, Editor, and PDFs
Given a single mapping table defines icons, patterns, colors, and labels for all change categories, When rendering category indicators in the brief view, script editor highlights, and exported PDFs, Then the same mapping is applied in all contexts with identical icon/pattern/label and palette token usage for a given category, And automated snapshot/tests compare tokens across the three contexts for every category and pass at 100%, And no context inverts or repurposes a color/icon/pattern for a different meaning, and updates to the mapping propagate to all contexts in the same release.

Hearing Guard

Detects scheduled hearings and auto‑plans pre‑hearing ramps, respectful quiet windows during proceedings, and post‑hearing follow‑ups. Protects relationships with lawmakers while timing outreach for maximum impact and higher completion rates.

Requirements

Hearing Detection Engine
"As a campaign director, I want hearings auto-detected and linked to our tracked bills so that I can plan outreach without manual monitoring and avoid missing critical timelines."
Description

Continuously ingest and normalize legislative hearing schedules from multiple sources (official calendar APIs, ICS feeds, email alerts, and scraped pages) and map them to tracked bills, committees, and sponsors in RallyKit. Deduplicate events, resolve conflicts, handle timezone conversions, and automatically create a hearing timeline with key phases (pre‑hearing ramp, quiet window, post‑hearing). Provide reliability features including retry/backoff, change detection for reschedules/cancellations, and alerting on ingestion failures. Expose a clean event object to downstream modules (planning, throttling, messaging) and update linked action pages and scripts as bill status changes.
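
A minimal sketch of the timezone conversion and phase derivation; the 48h/24h offsets follow the defaults in the criteria below, and the 2h fallback duration for hearings with no published end time is an assumption:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

RAMP_BEFORE = timedelta(hours=48)       # pre-hearing ramp default
POST_AFTER = timedelta(hours=24)        # post-hearing window default
DEFAULT_DURATION = timedelta(hours=2)   # assumed duration when no end time is published

def build_timeline(start_local: datetime, local_tz: str, end_local: datetime | None = None) -> dict:
    """Convert the jurisdiction's local times to UTC and derive the three hearing phases."""
    tz = ZoneInfo(local_tz)
    start_utc = start_local.replace(tzinfo=tz).astimezone(timezone.utc)
    end_utc = (end_local.replace(tzinfo=tz).astimezone(timezone.utc)
               if end_local else start_utc + DEFAULT_DURATION)
    return {
        "pre_hearing_ramp": (start_utc - RAMP_BEFORE, start_utc),
        "quiet_window": (start_utc, end_utc),        # outreach suppressed while the hearing runs
        "post_hearing": (end_utc, end_utc + POST_AFTER),
    }

timeline = build_timeline(datetime(2025, 3, 9, 9, 30), "America/Chicago")  # spans a DST change
for phase, (begin, end) in timeline.items():
    print(phase, begin.isoformat(), "to", end.isoformat())
```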

Acceptance Criteria
Multi-source hearing ingestion and normalization
Given configured sources (official calendar API, ICS feed, email alert, scraped page) for a jurisdiction When a new or updated hearing is published in any source Then the engine ingests it and normalizes it to the standard schema with required fields: id, external_ids, source_type, jurisdiction, chamber, committee, bill_ids, sponsors, start_at_utc, end_at_utc (nullable), location, agenda (nullable), status (scheduled|rescheduled|cancelled), source_url, last_seen_at_utc And the normalized event is available to downstream consumers within 10 minutes of the source update And ingestion success rate is >= 99% over a rolling 24-hour window
Event mapping to tracked bills, committees, and sponsors
Given a normalized event that references bills, committees, and sponsors by name/identifier When entity mapping runs Then the event is linked to RallyKit-tracked bills/committees/sponsors with precision >= 98% and recall >= 95% on the curated test set And ambiguous or unmapped references are flagged with reason codes and surfaced for review within the dashboard within 10 minutes
Deduplication and conflict resolution across sources
Given multiple ingested events that represent the same hearing from different sources When the deduplication process runs Then a single canonical event is produced using source precedence (Official API > ICS > Scraped > Email) and field-level freshness timestamps And duplicates are merged within 60 seconds of detection, with provenance retained for all merged fields And conflicting field values are resolved per precedence; lower-precedence values are stored as alternatives for audit
Timezone normalization and hearing timeline creation
Given a hearing time provided in a jurisdiction's local timezone, including during DST transitions When the event is normalized Then start_at_utc and end_at_utc are computed accurately (±1 minute) and local display time reflects official timezone rules And a hearing timeline is auto-created with phases: pre_hearing_ramp (default 48h before start), quiet_window (start to end), post_hearing (24h after end or 2h default if end missing) And timelines crossing midnight or DST boundaries have no gaps or overlaps
Reliability: retries, backoff, and failure alerting
Given transient fetch failures (HTTP 5xx, timeouts, connection errors) for any source When ingestion attempts occur Then the engine retries up to 3 times with exponential backoff (1s, 2s, 4s) and records structured error codes and messages And if a source remains failed for > 15 minutes, an alert is sent to the configured channel (email/webhook) including source id, error summary, and impact And 95th percentile end-to-end ingestion latency is <= 10 minutes and daily availability is >= 99.5%
Change detection for reschedules and cancellations
Given a previously scheduled canonical event When any source indicates a reschedule (date/time change) or cancellation Then the canonical event status/times are updated within 5 minutes, the hearing timeline phases are recalculated, and an audit log captures old vs new values with provenance And downstream planning, throttling, and messaging modules receive a change notification within 5 minutes And outreach is suppressed during the quiet_window and retimed/resumed per the new schedule
Expose clean event object and trigger downstream updates
Given a canonical hearing event exists When downstream modules request events via internal API or subscribe to the event stream Then they receive an event object conforming to schema version v1 with documented field names/types, stable identifiers, and idempotent update semantics And when a linked bill status changes, associated action pages and scripts are updated within 10 minutes and an update event with correlation IDs is emitted And schema changes are versioned; backward-compatible additions do not break existing consumers; breaking changes require version bump and passing migration tests
Pre‑Hearing Ramp Planner
"As a grassroots organizer, I want an automatic pre-hearing action plan so that supporters get the right prompts at the right times without me building a timeline by hand."
Description

Auto-generate a backward plan from the hearing start time, defining a sequenced cadence of actions (email/SMS/one‑tap pages/call tasks) targeted to relevant supporters in affected districts. Use RallyKit’s bill‑status script generator to produce district‑specific talking points and action copy, and schedule sends with adaptive throttling and frequency caps. Segment by supporter engagement level and legislator position (sponsor, undecided, opposed) to maximize impact. Produce a previewable schedule, required assets, and readiness checks (audience size, deliverability, consent) before activation.
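For illustration, a small Python sketch of backward planning from the hearing start time; the default offsets and channels are assumptions, not requirements:

```python
from datetime import datetime, timedelta

# Illustrative defaults only; real ramps are configurable per campaign.
DEFAULT_STEPS = [
    (timedelta(hours=72), "email"),
    (timedelta(hours=48), "sms"),
    (timedelta(hours=24), "one_tap_page"),
    (timedelta(hours=4), "call_task"),
]


def backward_plan(hearing_start: datetime, now: datetime, steps=DEFAULT_STEPS) -> list[dict]:
    """Work backward from the hearing start (T0); steps whose send time has
    already passed are marked skipped, so a condensed ramp falls out naturally
    when T0 is near."""
    plan = []
    for offset, channel in steps:
        send_at = hearing_start - offset
        plan.append({
            "channel": channel,
            "offset": f"T-{int(offset.total_seconds() // 3600)}h",
            "send_at": send_at,
            "status": "scheduled" if send_at > now else "skipped due to time constraint",
        })
    return plan
```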

Acceptance Criteria
Backward Plan Generation from Hearing Start
Given a hearing with start time T0 and an active bill selection When the user generates a pre-hearing ramp Then the system produces at least 4 scheduled steps before T0 across at least 2 channels And each step displays a relative offset (e.g., T-72h) and an absolute timestamp in the org’s timezone And no step is scheduled at or after T0 And if T0 − now < 24h, the system generates a condensed ramp with at least 2 steps and marks omitted steps as "skipped due to time constraint" And each step lists channel, target segment, estimated audience size, and required assets
District Targeting and Audience Qualification
Given supporter records with validated legislative districts and consent flags, and affected districts derived from the hearing body When the audience is built for the ramp Then only supporters in affected districts with required consent are included And supporters missing district mapping or consent are excluded with counts by exclusion reason And the eligible audience total equals the sum of segment counts within ±1% And if eligible audience < configurable minimum (default 50), the Readiness: Audience Size check = Fail
Bill-Status Script and District-Specific Copy
Given the current bill status and legislator positions per district When generating action copy for each ramp step Then email subjects/bodies, SMS messages, call scripts, and one-tap page copy are produced via the bill-status script generator with district and legislator tokens resolved And no template token remains unresolved in any asset And SMS content per step is ≤ 320 characters And call target lists include the correct legislators for each district and step type And all links are UTM-tagged with campaign and step identifiers
Segmentation by Engagement Level and Legislator Position
Given supporter engagement levels [High, Medium, Low] and legislator positions [Sponsor, Undecided, Opposed] When the planner builds segments and assigns cadence and content Then each segment’s ask is tailored: Sponsor = thank-you, Undecided = support ask, Opposed = persuasion ask And High receives up to 3 touches in the ramp, Medium up to 2, Low up to 1 by default And constituents of Sponsor legislators are excluded from persuasion call tasks And segment counts are reported and sum to the total eligible audience
Adaptive Throttling and Frequency Caps
Given throttle settings email=300/min, sms=120/min and a per-supporter frequency cap of 3 touches/7 days When the ramp is activated Then the measured dispatch rate per channel does not exceed its throttle by >5% in any rolling 1-minute window And no supporter is scheduled beyond the frequency cap; excess touches are suppressed and counted And if provider rate-limits or bounce rate >5% in the last 5 minutes, the system reduces the send rate by ≥25% within 2 minutes and reschedules remaining sends within pre-hearing windows And no rescheduled send occurs at or after T0
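A minimal sketch of the per-supporter frequency-cap check (default 3 touches per rolling 7 days, per the criterion above); how touch history is stored is an assumption:

```python
from datetime import datetime, timedelta


def within_frequency_cap(touch_history: list[datetime], now: datetime,
                         cap: int = 3, window_days: int = 7) -> bool:
    """Return True if sending one more touch now stays within the rolling cap;
    excess touches would be suppressed and counted by the caller."""
    window_start = now - timedelta(days=window_days)
    recent = sum(1 for t in touch_history if t >= window_start)
    return recent + 1 <= cap
```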
Preview and Readiness Checks Gate
Given a generated ramp When the user opens the Preview Then a timeline shows each step with timestamp, channel, segment, estimated audience, and content preview And readiness checks display statuses for Audience Size, Deliverability (verified sending domain and SMS number), Consent Coverage, and Call Target Mapping And the Activate control is disabled until all readiness checks = Pass And on activation, an activation summary (version, counts, scheduled timestamps) is logged
Auto-Replan on Hearing Time Change
Given a scheduled ramp and a change to the hearing start time T0 When the new T0 is saved Then all unsent steps shift to maintain their relative offsets to the new T0 And already-sent steps remain unchanged and are labeled Sent And the user is notified and must acknowledge the updated plan before re-activation And a new preview version is created and the prior version is archived
Quiet Window Enforcement
"As a policy director, I want communications to lawmakers paused during hearings so that we protect relationships and avoid disrupting proceedings."
Description

Automatically enforce respectful quiet windows from a configurable period before gavel-in through adjournment, pausing or queueing outbound calls, emails, and texts to lawmakers and committee offices involved in the hearing. Respect local office hours and per‑office preferences, block high‑pressure actions, and surface an in‑app banner to supporters indicating a quiet window is in effect with an option to pledge actions that will be sent after the window lifts. Provide real‑time controls to pause/resume, exemptions for non‑contact updates, and logs proving suppression for audit purposes.
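A hedged sketch of the routing decision at send time: contacts targeting an office inside an active quiet window are queued rather than delivered. The data shapes and function are illustrative only:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class QuietWindow:
    office_id: str
    start_utc: datetime            # gavel-in minus the configured buffer
    end_utc: Optional[datetime]    # None until an adjournment signal arrives


def route_action(action: dict, windows: list[QuietWindow], now: datetime) -> str:
    """Decide whether an outbound contact is sent now or queued. A window with
    no adjournment yet is treated as still active, matching the criterion that
    enforcement continues until adjournment is received."""
    for w in windows:
        if (action["office_id"] == w.office_id
                and now >= w.start_utc
                and (w.end_utc is None or now < w.end_utc)):
            return "queued"  # durable queue, reason = QuietWindow, FIFO per office+channel
    return "send"
```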

Acceptance Criteria
Quiet Window Timing by Hearing Schedule
Given a detected hearing with gavel-in time T, adjournment A, and a configured pre-gavel buffer B minutes When current time enters [T - B, A) Then the system marks the quiet window as Active for all affected committees/offices within 60 seconds and records window_start = T - B And while Active, the quiet window remains in effect until an adjournment signal for the hearing is received And upon receiving adjournment, the quiet window transitions to Inactive within 60 seconds and records window_end = actual adjournment timestamp
Automatic Pausing and Queuing of Outbound Actions
Given the quiet window is Active for an office targeted by a campaign action (call, email, text) When a supporter or automation attempts to send an action to that office Then the action is not delivered and is stored in a durable queue with attributes: action_id, campaign_id, office_id, channel, created_at, reason = QuietWindow, priority, and FIFO order And no deliveries to that office occur during the quiet window (0 messages sent) And after the quiet window becomes Inactive, queued actions start dispatching within 120 seconds, preserving FIFO per office+channel and not exceeding the configured per-office rate limit And each dispatched action outcome (sent, failed, deferred) is recorded with a delivery timestamp and correlation id
Local Office Hours and Per-Office Preferences
Given an action is queued due to a quiet window and targets office O with local timezone Z and published office hours When the quiet window becomes Inactive Then if current time in Z is outside O's office hours (or violates per-office do-not-contact preferences), the action remains queued with reason = OfficeHours and next_attempt set to the next allowed time per O's schedule And if current time in Z is within allowed hours and channels, the action becomes eligible for dispatch subject to rate limits And timezone resolution uses the office's stored timezone and accounts for DST correctly
Blocking High-Pressure Actions During Quiet Window
Given action types flagged as HighPressure (e.g., call storms, rapid dialer bursts) are configured for a campaign When the quiet window is Active for a targeted office Then initiating HighPressure actions via UI is disabled with an inline message indicating a quiet window is in effect And initiating HighPressure actions via API returns HTTP 409 with error_code = QUIET_WINDOW_ACTIVE and includes window_end ETA when available And each blocked attempt is logged with actor, timestamp, action_type, office_id, and reason = QuietWindow
Supporter Banner and Pledge-Later Flow
Given a supporter opens an action page that targets one or more offices under an Active quiet window When the page renders Then a banner appears within 2 seconds indicating a quiet window is in effect, shows a countdown to window_end, and offers a Pledge to Act button And selecting Pledge creates a pending action record linked to the supporter with status = Pledged, reason = QuietWindow, and sends a confirmation message if contact info is available And upon quiet window end (and within office hours), pledged actions auto-dispatch within 10 minutes under normal rate limits, and the supporter receives a notification of completion if notifications are enabled And pledged actions remain viewable and cancelable by the supporter until dispatch
Organizer Real-Time Controls and Exemptions
Given an organizer with permissions opens the Hearing Guard controls for a campaign When the organizer toggles Pause or Resume for quiet window enforcement globally or for a specific hearing/office Then the change takes effect across delivery services within 15 seconds and is reflected in the UI state badges And the organizer can add exemptions that allow non-contact updates (e.g., in-app updates, internal alerts) to proceed during quiet windows while still suppressing external contact to offices And all overrides and exemptions are audit-logged with user_id, scope, previous_state, new_state, reason, and timestamp
Audit Logs for Suppression and Release
Given actions are suppressed or queued due to quiet windows When viewing the Quiet Window Audit Log Then each record includes: action_id, supporter_id (if available), campaign_id, office_id, office_timezone, channel, action_type, suppression_reason, rule_id, created_at, released_at (if applicable), dispatcher_id (system), delivery_status, and error (if any) And logs are filterable by date range, campaign, office, channel, and reason, with result counts matching underlying data within ±1% And logs are exportable as CSV and JSON up to 100,000 rows per export, delivered within 30 seconds for typical load And log records are immutable and retained for at least 24 months
Post‑Hearing Follow‑up Orchestrator
"As a campaign manager, I want outcome‑based follow‑ups to launch automatically so that we can sustain momentum and recognize allies without manual triage."
Description

Trigger tailored follow‑ups based on detected hearing outcomes (advanced, amended, failed, delayed) and known vote behavior. Generate appropriate scripts (thank‑yous, persuasion, regroup) using RallyKit’s templates, schedule sends within configurable windows, and retarget segments (e.g., those who acted pre‑hearing vs. not). Include smart retries, next‑step recommendations, and automatic updates to one‑tap pages. Capture performance metrics and feed learnings back into future ramp defaults.
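For illustration, a small mapping from detected outcome and vote behavior to a follow-up script family, following the mapping spelled out in the acceptance criteria below; the fallback branch is an assumption:

```python
from typing import Optional


def follow_up_type(outcome: str, legislator_vote: Optional[str]) -> str:
    """Map a hearing outcome and a legislator's vote to a follow-up family:
    advanced/yes -> thank-you; amended or failed/no or abstain -> persuasion
    or regroup; delayed -> hold with next-step recommendation."""
    if outcome == "delayed":
        return "hold_with_next_step"
    if outcome == "advanced" and legislator_vote == "yes":
        return "thank_you"
    if outcome in ("amended", "failed") or legislator_vote in ("no", "abstain"):
        return "persuasion_or_regroup"
    return "regroup"  # assumed fallback for combinations the spec does not enumerate
```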

Acceptance Criteria
Outcome-Based Follow‑up Triggering
Given a hearing record is finalized with an outcome in {advanced, amended, failed, delayed} and vote behavior per legislator is available When RallyKit ingests the updated hearing record Then a single follow-up plan is created within 15 minutes And supporters are mapped to follow-up type by outcome and their legislator’s vote: advanced/yes => thank-you; amended or failed/no or abstain => persuasion or regroup; delayed => hold with next-step recommendation And no duplicate plans are created for the same hearing ID And if the outcome is corrected within 60 minutes, pending unsent messages are re-planned to match the latest outcome and prior plan entries are canceled
Dynamic Script Generation by Outcome and Vote Behavior
Given RallyKit templates for thank-you, persuasion, and regroup are available And supporter context includes district, legislator name, bill ID, bill title, and outcome When a follow-up plan is created Then call, email, and SMS scripts are generated per segment with all placeholders resolved And tone and language match the template guidance for the detected outcome and vote behavior And SMS body <= 320 characters, email subject <= 70 characters, call script read time <= 60 seconds And a preview is available per channel and per segment before scheduling And scripts are stored with version IDs and linked to the plan for audit
Configurable Send Windows and Quiet Periods
Given org-configured post-hearing send window (start delay and end window) and quiet hours are set per time zone When a follow-up plan is scheduled Then initial sends are queued no earlier than the configured start delay and no later than the window end And all sends respect org quiet hours and avoid the hearing’s quiet period And scheduling is time zone aware for each supporter’s locale And sends outside allowed windows are not dispatched and are flagged
Segment Retargeting Based on Pre‑Hearing Activity
Given segments are defined as A = supporters who acted pre-hearing and B = supporters who did not When scheduling post-hearing follow-ups Then Segment A receives thank-you plus secondary ask content; Segment B receives primary persuasion or regroup content And supporters who already completed the post-hearing action are excluded from subsequent sends And a frequency cap (default 1 message per 24 hours per channel) is enforced And deduplication across channels prevents more than one initial touch per supporter for the plan
Smart Retries and Failure Handling
Given retry policy is configured with max retries, backoff strategy, conversion window, and channel fallbacks When an initial send results in no completed action within the conversion window or a delivery failure occurs Then a retry is scheduled per policy using backoff, up to the max retries And channel fallback is used if the previous channel bounced/failed or underperformed And retries stop immediately upon action completion or opt-out And all retries and outcomes are logged with timestamps and reasons
Automatic One‑Tap Page Updates by Outcome
Given RallyKit one-tap action pages exist for the campaign When a follow-up plan is created after a hearing Then associated one-tap pages update within 10 minutes to reflect the outcome and follow-up type (targets, script, CTAs) And existing short links continue to resolve to the updated page without 404s or redirect loops And changes are recorded in the audit log with plan ID and editor = system And uptime for the page update process is >= 99.9% over a rolling 30 days
Metrics Capture and Learning Feedback
Given performance metrics tracking is enabled When post-hearing follow-ups are dispatched Then deliveries, opens, clicks, and completed actions are captured per channel, segment, legislator, and outcome with <5-minute latency to the dashboard And a post-hearing report is generated within 24 hours summarizing conversion by segment and outcome And next-ramp defaults (send time, retry count, channel mix) are updated automatically based on observed lift, with a change record noting data source and timestamp And organizations can opt out of automatic defaults updates at the campaign level
Relationship Safeguard Rules
"As an advocacy lead, I want built‑in safeguards that cap and throttle outreach so that we don’t damage legislative relationships or trigger complaints."
Description

Implement configurable safeguards to prevent over‑contacting lawmakers and staff: per‑office daily/weekly caps, rolling frequency limits, channel‑specific throttles, and exclusion lists (e.g., sponsors on vote day). Detect and flag risk conditions (too many calls in a short span, duplicate messages from a single supporter cohort) and automatically adjust sends. Integrate with supporter dedupe, consent, and geography matching to ensure contacts are district‑relevant and compliant. Surface violations and applied limits in reports.
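A minimal in-memory sketch of a rolling per-office limit (the defaults of 10 sends per 15 minutes come from the criteria below); a production version would share state across workers:

```python
from collections import deque
from datetime import datetime, timedelta


class RollingOfficeLimiter:
    """Sliding-window limiter per office, applied cumulatively across campaigns
    and channels. In-memory only; the rule, not the storage, is the point."""

    def __init__(self, max_sends: int = 10, window: timedelta = timedelta(minutes=15)):
        self.max_sends = max_sends
        self.window = window
        self._events: dict[str, deque] = {}

    def try_send(self, office_id: str, now: datetime) -> bool:
        events = self._events.setdefault(office_id, deque())
        while events and events[0] <= now - self.window:
            events.popleft()                 # drop events outside the window
        if len(events) >= self.max_sends:
            return False                     # caller defers and sets earliest_send_at
        events.append(now)
        return True
```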

Acceptance Criteria
Per-Office Daily and Weekly Caps Enforcement
- Given an office with configured caps: daily=30 and weekly=120 across all channels, When outreach events for the office reach 30 within the current UTC day, Then the system blocks additional sends to that office until 00:00 UTC next day and records a cap_exceeded event including office_id, cap_type=daily, and blocked_count.
- Given the weekly cap is 120, When cumulative sends in the Monday–Sunday week equal 120, Then further sends are deferred to the following Monday 00:00 UTC, queued with earliest_send_at set, and counted as deferred (not dropped).
- Given deferred sends due to caps, When the cap window resets, Then the system auto-releases queued items FIFO within applicable throttles and logs a cap_released event with released_count.
Rolling Frequency Limit (Sliding Window)
- Given a rolling limit configured as max 10 sends per office within any 15-minute window across all channels, When 10 sends occur between time t and t+15m, Then the 11th and subsequent attempts in that window are delayed until the oldest event falls outside the window.
- Given delayed sends awaiting window availability, When the window moves, Then the system recalculates available slots at least every 60s and schedules the next earliest_send_at with randomized jitter of 0–60s to avoid re-bursts.
- Given concurrent sends from multiple campaigns, When enforcing the rolling limit, Then the limit is applied cumulatively across campaigns for that office.
Channel-Specific Throttles per Office
- Given per-office per-channel limits phone=5/min, email=20/min, sms=10/min, When sends are scheduled, Then the system ensures each channel’s rate for the office does not exceed its limit irrespective of overall rolling limits.
- Given mixed-channel traffic, When overall rolling limits permit sends, Then channel-specific throttles still cap per channel and excess items are queued per channel with next_send_at computed and visible in queue metrics.
- Given queued items due to channel throttles, When capacity is available, Then items are dispatched FIFO per channel and dispatch logs include channel, office_id, and throttle window timestamps.
Exclusion Lists and Conditional Blocks
- Given a rule excluding bill sponsors and co-sponsors on vote day from 00:00–23:59 local capitol time, When a supporter action targets an excluded legislator in that window, Then the system blocks the send, sets suppress_reason=excluded_office, and suggests an alternative eligible target if available.
- Given an exclusion list by office_id/person_id is active, When any campaign attempts to include that target, Then the send is suppressed and an audit record is written including rule_id, user_id (if initiated by user), timestamp, and campaign_id.
- Given an authorized admin override with a mandatory reason, When the override is applied for a specific send, Then that send proceeds once and the report records override=true and the provided reason, linked to the audit record.
Risk Detection: Burst and Duplicate Content Mitigation
- Given thresholds burst_threshold=25 events/5m per office and duplicate_threshold=40% identical content from the same cohort within 30m, When either threshold is met, Then the system sets office_send_state=paused for 30m, applies randomized delays of 1–5m on resume, and switches to an alternate message template if configured.
- Given a pause is triggered, When notifying stakeholders, Then an in-app and email alert is sent to campaign admins within 60s including threshold_type, observed_value, office_id, cohort_id (if applicable), and recommended actions.
- Given the pause interval elapses and metrics are below thresholds, When resuming, Then the system auto-resumes sends and logs mitigation_applied with before_rate, after_rate, and duration_paused.
Supporter Dedupe, Consent, and District Relevance Gate
- Given supporters may have duplicate contact methods, When scheduling outreach, Then the system deduplicates by normalized phone and email so each supporter contributes at most one outreach per office per 24h and records dedupe_count.
- Given channel-specific consent is required, When a supporter lacks opt-in or has revoked consent for a channel, Then that channel is not used and the attempt is suppressed with suppress_reason=consent_missing and channel recorded.
- Given geography matching is enabled, When a supporter is not a constituent of the office’s district at confidence >=0.90, Then the contact is suppressed unless campaign policy allows out-of-district outreach; policy decisions are logged as policy=out_of_district_allowed/denied.
- Given bicameral districts, When matching representatives, Then the system routes to the correct office(s) and suppresses non-relevant offices with suppress_reason=not_in_district.
Reporting of Safeguards, Violations, and Applied Limits
- Given any suppression, deferral, throttle, pause, or override occurs, When viewing the safeguards report, Then each event row includes timestamp (UTC), campaign_id, office_id, channel, rule_id, rule_type, configured_limit, observed_value, action_taken, affected_count, and user_id (if override).
- Given filters by date range, campaign, office, channel, and rule_type, When filters are applied, Then the report returns results within 2s for up to 30 days of data (<=100k rows) and aggregates (totals by rule_type) reflect the filtered set.
- Given a CSV export is requested for the filtered set, When generation completes, Then the file is available within 60s for <=100k rows and column totals match on-screen aggregates within 0.1%.
Hearing Timeline Console & Overrides
"As a nonprofit director, I want a clear timeline with the ability to tweak or pause automations so that I remain in control and can answer board or partner questions with audit‑ready proof."
Description

Provide a single timeline view showing detected hearings with their pre‑hearing ramp, quiet window, and post‑hearing activities. Allow admins to preview content, adjust timings, add/remove steps, set manual quiet windows, and override or cancel automations. Show readiness checks, estimated reach, and predicted send volumes. Include role‑based permissions, full change history, and exportable, time‑stamped audit logs demonstrating what was sent, suppressed, or queued and why.

Acceptance Criteria
Timeline Rendering of Hearing Phases
Given a hearing with a known start and end time and associated bill(s) When an admin opens the Hearing Timeline Console for that hearing Then the timeline displays three distinct phases: pre-hearing ramp, quiet window aligned to the hearing, and post-hearing follow-up And each phase shows start/end timestamps in the legislature’s time zone and relative offsets (e.g., T-48h) And each step within phases displays its type, scheduled send window, and current readiness status indicator
Admin Preview and Content Validation
Given a timeline step that will send supporter communications When the admin clicks Preview for that step and selects a district and bill status Then the system renders the exact message content (subject, body, call script) with district- and bill-status-specific merges And any missing merges or assets show a specific readiness error with field-level highlights And readiness status changes to Ready only when all required fields, target lists, and sending credentials are valid
Adjust Timings and Steps
Given an existing hearing timeline with default steps When the admin adjusts a step’s start or end time by drag-and-drop or numeric input Then the new time is saved, validated against phase boundaries, and conflicting overlaps are flagged with an actionable error And predicted send volumes and estimated reach recalculate within 3 seconds and display updated totals When the admin adds a new step or removes an existing step Then the timeline updates immediately and readiness checks re-run for the modified plan
Manual Quiet Window and Overrides
Given an admin needs a custom quiet period beyond the detected hearing time When the admin sets a manual quiet window on the timeline Then all sends scheduled within that window are automatically suppressed and marked as Suppressed-QuietWindow with reason stored When the admin applies an override to pause or cancel automation for a step or the entire hearing plan Then no new sends are queued, pending sends are canceled where possible, and each action is logged with user, timestamp, scope, and reason
Role-Based Permissions Enforcement
Given roles of Admin, Editor, and Viewer are assigned to users When a Viewer accesses the console Then they can view timelines and logs but cannot modify steps, timings, or overrides, and blocked actions show a permissions error When an Editor accesses the console Then they can edit steps and timings but cannot export audit logs or delete history, and restricted actions are blocked with audit trail entries When an Admin accesses the console Then they can perform all actions including overrides, cancellations, and exports
Change History and Audit Log Export
Given any change (create/update/delete step, timing change, override, suppression) occurs When the change is saved Then a history entry is recorded with user, role, timestamp (UTC), affected entity, before/after values, and reason/comment When the admin exports the audit log for a selected date range and hearing Then the system generates a downloadable CSV and JSON with time-stamped records of sent, suppressed, and queued actions including rationale, message variant, target counts, and correlation IDs
Hearing Updates, Reschedules, and Quiet-Window Respect
Given a hearing is rescheduled or canceled by the legislature feed When the system receives the update Then the timeline recalculates phase boundaries, shifts steps accordingly, and notifies Admins of the change And no sends occur during the recalculated quiet window, with any attempts marked as Suppressed-QuietWindow and included in logs And predicted send volumes and estimated reach update to reflect new timings within 3 seconds

Source Stamp

Auto‑adds verified citations and timestamps to scripts and receipts, including bill page URLs and docket links. Builds instant credibility with partners, leadership, and auditors, reducing back‑and‑forth and delivering audit‑ready proof of accuracy.

Requirements

Verified Source Aggregation
"As a campaign director, I want RallyKit to automatically collect and validate official bill and docket links so that my scripts cite authoritative sources without manual research."
Description

Aggregate and validate authoritative source links for each campaign’s targeted legislation, including bill pages, dockets, committee reports, and amendment histories across federal, state, and municipal jurisdictions. Implement jurisdiction-specific adapters to locate canonical URLs, perform HTTP health checks, canonicalize and de-duplicate links, and record provenance (publisher, retrieval method, and last_verified_at). Persist normalized source records tied to bill/version/status, detect content changes, and surface verification freshness to dependent services. Provide retries, error categorization, and observability to ensure reliability and audit-ready accuracy.
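For illustration, a hedged sketch of link canonicalization and content hashing for de-duplication; the parameter whitelist and normalization rules shown are assumptions, since real adapters are jurisdiction-specific:

```python
import hashlib
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

ALLOWED_PARAMS = {"bill", "session", "id"}  # hypothetical whitelist


def canonicalize(url: str) -> str:
    """Normalize a source URL: force https, lowercase the host, and drop
    query parameters outside the whitelist so equivalent links compare equal."""
    parts = urlsplit(url)
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query) if k in ALLOWED_PARAMS])
    return urlunsplit(("https", parts.netloc.lower(), parts.path, query, ""))


def content_hash(body: bytes) -> str:
    """SHA-256 of fetched content, used to merge distinct URLs that resolve
    to identical documents."""
    return hashlib.sha256(body).hexdigest()
```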

Acceptance Criteria
Canonical URL Resolution per Jurisdiction
- Given a federal bill identifier "HR 123 (118th)" When the aggregation job runs Then it resolves a single canonical bill page URL on "congress.gov" and zero non-canonical alternates
- Given a California state bill "SB 42 (2023-2024)" When the adapter runs Then it returns the canonical URL on "leginfo.legislature.ca.gov" within 2 seconds
- Given a New York City council item "Int 0001-2024" When the adapter runs Then it returns the canonical URL on "council.nyc.gov/legislation" or "legistar.council.nyc.gov" per configuration
- Given primary registry downtime simulated by HTTP 503 When resolving Then the adapter falls back to the configured mirror and records fallback_used=true in provenance
- Given an unknown jurisdiction code When resolving Then the job fails gracefully with error_category="unsupported_jurisdiction" and no records persisted
HTTP Health Checks with Retries and Error Categorization
- Given a list of 10 source URLs When health checks execute Then each URL receives HEAD (or GET where HEAD is unsupported) and results are stored with http_status, response_time_ms, and checked_at ISO8601
- Given transient network timeout on first attempt When retry policy applies Then a maximum of 3 retries with exponential backoff (200ms, 400ms, 800ms) are attempted before marking error_category="timeout"
- Given a permanent 404 response When categorized Then error_category="not_found" and retries are not attempted
- Given a 301/302 redirect chain up to 5 hops When followed Then the final destination URL is persisted as resolved_url and chain length recorded
- Given TLS/SSL errors When encountered Then error_category="tls_error" is recorded and the URL is excluded from the canonical set
Link Canonicalization and De-duplication
- Given two URLs differing only by tracking parameters When canonicalized Then query parameters not in the whitelist are removed and URLs compare equal
- Given http and https variants of the same host When normalized Then https is enforced and host casing is lowercased
- Given multiple distinct URLs pointing to identical content When processed Then a content_hash (SHA-256) is computed and duplicates are merged to a single source record with merged_sources>=2
- Given a batch containing 100 mixed URLs with duplicates When persisted Then the resulting unique records count is reduced according to normalization rules and a dedupe_report is stored
- Given normalization rules update When applied Then existing records are re-canonicalized and a migration report records changed keys and impact
Provenance Recording and last_verified_at
- Given a newly persisted source When saved Then provenance includes publisher, publisher_url, retrieval_method in ["api","scrape","manual"], adapter_name, adapter_version, operator_id nullable, and last_verified_at in UTC ISO8601
- Given any verification run When completed Then last_verified_at is updated only on successful 2xx health check and previous value is retained otherwise
- Given a manual correction submitted by an admin When applied Then a provenance entry is appended with change_reason="manual_override" and prior values preserved in history
- Given a fallback to mirror registry When used Then provenance.fallback_used=true and fallback_target is recorded
- Given schema validation When executed Then missing required provenance fields cause the write to fail and are logged with error_category="schema_validation"
Normalized Source Persistence by Bill/Version/Status
- Given bill metadata (jurisdiction, bill_id, session, version_id, status) and resolved sources When persisted Then each source record is stored with foreign keys to bill/version/status and passes uniqueness constraint on (bill_id, version_id, normalized_url)
- Given an upsert of an existing source When executed Then immutable fields remain unchanged, mutable fields (last_verified_at, http_status) are updated, and an audit trail entry is created
- Given a transaction with 10 sources When committed Then either all records are saved or none if any write fails, and the job reports transaction_status
- Given referential integrity violation (missing version_id) When attempted Then the write is rejected with error_category="integrity_error"
- Given a request to fetch sources for a bill/version When queried Then results include all canonicalized sources with pagination and are returned within 500ms for up to 200 records
Content Change Detection and Version Update
- Given a previously stored source with content_hash H1 When re-verified and new hash H2!=H1 Then a content_change event is emitted with old/new hashes and detected_at timestamp
- Given detected substantive content change When evaluated Then the associated bill version is marked stale_for_sources=true and dependent caches are invalidated
- Given detected change limited to dynamic noise (e.g., timestamp-only diffs per configured ignore selectors) When filtered Then no content_change event is emitted and last_verified_at is updated
- Given two consecutive runs with no change When processed Then no new events are emitted and verification_freshness_age increases accordingly
- Given an amendment history page change When detected Then the change is linked to amendment_id if present and a new normalized source version is created
Observability and Freshness Surfacing to Dependent Services
- Given the verification service running in production When observed Then metrics emit success_rate, retry_rate, median_latency_ms, and error_counts per jurisdiction to the monitoring system with 1-minute granularity
- Given SLOs of success_rate>=99% and median_latency_ms<=800 When breached for 5 consecutive minutes Then an alert is fired and recorded with alert_id and recovery status
- Given a dependent service requests sources via API When responded Then each source includes freshness_age_hours, freshness_level in ["fresh","warn","stale"] based on thresholds (<=24, >24<=72, >72), and last_verified_at
- Given tracing is enabled When a job runs Then a trace_id is propagated across adapter, health check, persistence, and API response logs
- Given log sampling configured at 10% for success and 100% for errors When executed Then structured logs include request_id, bill_id, jurisdiction, adapter_name, and error_category where applicable
Script Citation Auto-Inject
"As an organizer, I want citations to be auto-inserted into every generated script based on bill status so that supporters and partners can instantly trust the messaging."
Description

Embed verified citations directly into generated scripts based on bill status and district context, using template tokens and formatting rules that produce human-readable labels and clickable links. Ensure accessibility (clear labels, keyboard focus), mobile-friendly output, and copy/paste safety. Block or warn on insertion if sources are stale or unverified, and fall back to a safe message when required. Integrate with the existing script engine so citations appear consistently in canvasser views, action pages, and exports without additional configuration.
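A minimal sketch of citation rendering with the stale/unverified gating described here; the source record shape and plain-text format are assumptions, while the 24-hour staleness threshold and fallback message mirror the criteria below:

```python
from datetime import datetime, timedelta, timezone

FALLBACK = "Sources currently unavailable — verification required."


def citation_block(source: dict, now: datetime, stale_after_hours: int = 24) -> str:
    """Render a plain-text citation line: block unverified sources with the
    safe fallback message, and flag verified-but-stale ones with a badge."""
    if not source.get("verified"):
        return FALLBACK
    verified_at: datetime = source["verified_at"]
    badge = "Stale" if now - verified_at > timedelta(hours=stale_after_hours) else "Verified"
    return f'{source["label"]}: {source["url"]} ({badge} {verified_at.isoformat()})'


# Usage with a hypothetical source record:
print(citation_block(
    {"verified": True,
     "verified_at": datetime(2025, 8, 24, 15, 4, 5, tzinfo=timezone.utc),
     "label": "Bill page",
     "url": "https://example.gov/bills/hb123"},
    now=datetime.now(timezone.utc),
))
```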

Acceptance Criteria
Auto‑Inject Verified Citations by Bill Status and District
Given a script is generated for a supporter matched to a district and a bill with known status When the script engine renders the script Then a citation block is appended without user intervention in the designated template location And the block includes a human‑readable “Bill page” label linked to the official bill page URL And the block includes a human‑readable “Docket” label linked to the official docket URL And the links correspond to the current bill status (e.g., committee vs. floor page) as provided by bill metadata And the citation block displays a “Verified” badge with an ISO‑8601 verifiedAt timestamp And the citation content is identical for the same script across repeated renders of the same inputs
Template Tokens Render Human‑Readable Labels and Clickable Links
Given a script template contains tokens for bill number, bill URL, docket URL, and verifiedAt timestamp When the engine renders in HTML contexts (canvasser view, action page) Then tokens resolve to labeled anchor links with visible text “Bill page: <bill number>” and “Docket: <docket id>” And no raw tokens (e.g., {{...}}) appear in the output When the engine renders in plain‑text contexts (copy, CSV export) Then tokens resolve to labeled lines with full URLs (no HTML) and ISO‑8601 timestamps And labels are in English sentence case and free of placeholder text And all URLs are absolute (https) and pass basic validation (RFC 3986)
Accessibility and Keyboard Navigation for Citation Block
Given the citation block is present in an HTML view When a user navigates via keyboard only Then the citation region is reachable in logical tab order after the script body And each link has a visible focus indicator with at least 3:1 contrast against adjacent colors And activating a focused link with Enter/Space follows the link And link text is descriptive without relying on surrounding context (e.g., includes bill number) And the citation region has role="region" and aria‑label="Sources" (or equivalent) with no Axe “link‑name” or “color‑contrast” violations And the component meets WCAG 2.1 AA for color contrast and focus visibility
Mobile‑Friendly Citation Rendering on Action Pages and Canvasser View
Given a mobile viewport width of 320–375px When the script with citations is rendered Then there is no horizontal scrolling introduced by the citation block And all links wrap gracefully without clipping or overlap And interactive tap targets for links are at least 44px by 44px And base font size for the citation block is ≥14px And the block respects system dark mode without reducing contrast below WCAG 2.1 AA
Copy/Paste‑Safe Citations in Common Destinations
Given a user copies the rendered script (including citations) from the UI When pasting into Gmail, Outlook, Google Docs, and a plain‑text editor Then labels and URLs remain intact and readable And no zero‑width or non‑printing characters are introduced into pasted content And URLs are unbroken (no unexpected line breaks inside the URL) And parentheses, ampersands, and query characters are correctly encoded and preserved And for plain‑text paste, full https URLs are included after labels on their own lines
Stale or Unverified Source Handling with Safe Fallback
Given citation sources include a verification status and verifiedAt timestamp When a source is unverified or the URL is unreachable (HTTP 4xx/5xx) Then the engine blocks citation injection and inserts a safe fallback message: “Sources currently unavailable — verification required.” And a non‑blocking warning is displayed to editors indicating which source failed When a source is verified but stale (verifiedAt older than 24 hours) Then the engine injects citations with a visible “Stale” badge and shows a warning to editors And all such events are logged with bill id, source type, and reason (unverified, unreachable, stale)
Consistent Citation Injection Across Views and Exports Without Configuration
Given a campaign uses the standard script engine with no additional per‑campaign settings When rendering the script in canvasser view, action page, and export (CSV) Then the citation block appears in the same relative position across HTML views And the CSV export includes columns for Bill page URL, Docket URL, Verification Status, and VerifiedAt (ISO‑8601) And values in all outputs are consistent for the same script inputs And disabling the Source Stamp feature flag (if globally off) removes the citation block from all outputs consistently
Receipt Proof Payload
"As a compliance lead, I want supporter receipts to include the exact sources and timestamps used at action time so that I can pass audits with verifiable proof."
Description

Augment supporter action receipts and audit logs with a structured "source stamp" payload containing the exact citations used, their last_verified_at timestamps, the bill status snapshot, and a content hash that ties the script to its sources at action time. Include actor, environment, and campaign identifiers to support end-to-end traceability. Expose this payload in on-screen confirmations, emails/SMS, CSV exports, and API/webhook deliveries to partner CRMs, ensuring audit-ready, tamper-evident proof of accuracy.
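For illustration, a sketch of assembling the source stamp payload; the field names follow the acceptance criteria below, while the helper itself is hypothetical:

```python
def build_source_stamp(citations: list[dict], bill_status: dict, content_hash: str,
                       actor_id: str, environment: str, campaign_id: str) -> dict:
    """Assemble the structured source stamp attached to receipts, exports,
    and webhook deliveries."""
    return {
        "citations": [
            {"url": c["url"], "last_verified_at": c["last_verified_at"]} for c in citations
        ],
        "bill_status": {
            "status_code": bill_status["status_code"],
            "fetched_at": bill_status["fetched_at"],
        },
        "content_hash": content_hash,           # 64-char lowercase hex SHA-256
        "actor_id": actor_id,
        "environment": environment,             # "production" | "staging" | "development"
        "campaign_id": campaign_id,
    }
```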

Acceptance Criteria
On-Screen Confirmation Shows Source Stamp
Given a supporter completes an action via RallyKit When the on-screen confirmation is displayed Then the confirmation includes a Source Stamp section containing:
- citations list with one or more HTTPS URLs
- last_verified_at for each citation in ISO 8601 UTC (e.g., 2025-08-24T15:04:05Z)
- bill_status snapshot with status_code and fetched_at (ISO 8601 UTC)
- content_hash as a 64-character lowercase hex SHA-256 string
- actor_id, environment, and campaign_id values
And all displayed values match the stored receipt payload for that action
Email and SMS Receipts Embed Source Stamp
Given a supporter opts to receive a receipt via email or SMS When the receipt is delivered Then the email body contains a Source Stamp block including citations (HTTPS URLs), last_verified_at (ISO 8601 UTC), bill_status (status_code, fetched_at), content_hash (64-char hex), and actor_id/environment/campaign_id And the SMS contains a secure HTTPS link to a hosted receipt page that displays the full Source Stamp with the same values And the values in both channels are identical to the on-screen confirmation for the same action
CSV Export Includes Source Stamp Fields
Given an admin exports actions as a CSV from RallyKit When the CSV file is generated Then each row includes the following columns populated from the Source Stamp:
- source_stamp.citations_urls (semicolon-separated HTTPS URLs)
- source_stamp.citations_last_verified_at (semicolon-separated ISO 8601 UTC timestamps aligned by order with citations_urls)
- source_stamp.bill_status_code
- source_stamp.bill_status_fetched_at (ISO 8601 UTC)
- source_stamp.content_hash
- actor_id
- environment
- campaign_id
And column values exactly match the stored payload for each corresponding action
API/Webhook Delivers Source Stamp with Tamper-Evident Signature
Given a partner CRM webhook subscription is configured When RallyKit emits an action webhook Then the JSON body includes source_stamp with fields: citations (array of objects with {url, last_verified_at}), bill_status ({status_code, fetched_at}), content_hash, actor_id, environment, campaign_id And the request includes an X-Signature header containing an HMAC-SHA256 of the raw request body using the integration's shared secret And recomputing the HMAC over the received body with the shared secret reproduces the X-Signature value And the body validates against the published OpenAPI schema for the webhook
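A minimal partner-side sketch of verifying the tamper-evident signature described in this criterion; hex encoding of the HMAC is an assumption to confirm against the published webhook schema:

```python
import hashlib
import hmac


def verify_webhook_signature(raw_body: bytes, received_signature: str, shared_secret: bytes) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare it to the
    X-Signature header in constant time."""
    expected = hmac.new(shared_secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_signature)
```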
Deterministic Content Hash Binds Script to Sources
Given a script body, the set of citation URLs with their last_verified_at values, and the bill_status snapshot at action time When content_hash is computed using SHA-256 over a canonical JSON that includes {script_body_normalized, citations_urls_sorted, citations_last_verified_at, bill_status} Then the same inputs produce identical content_hash across on-screen, email/SMS, CSV, and webhook outputs And any change to the script body, any citation URL or last_verified_at, or the bill_status snapshot produces a different content_hash And the hash is emitted as a 64-character lowercase hexadecimal string
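A hedged sketch of the deterministic content hash over a canonical JSON document; the exact canonicalization choices (sorted keys, compact separators, citations sorted by URL) are assumptions consistent with the criterion above:

```python
import hashlib
import json


def compute_content_hash(script_body_normalized: str, citations: list[dict], bill_status: dict) -> str:
    """Hash a canonical JSON document so identical inputs always yield the
    same 64-character lowercase hex digest."""
    ordered = sorted(citations, key=lambda c: c["url"])
    canonical = json.dumps({
        "script_body_normalized": script_body_normalized,
        "citations_urls_sorted": [c["url"] for c in ordered],
        "citations_last_verified_at": [c["last_verified_at"] for c in ordered],
        "bill_status": bill_status,
    }, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```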
Traceability Identifiers Consistent Across Channels
Given an action is recorded for a supporter within a campaign and environment When the Source Stamp is viewed via on-screen confirmation, email/SMS, CSV, or webhook Then actor_id, environment, and campaign_id are present in each channel And the values are identical across all channels for the same action And environment is one of production, staging, or development (lowercase) And actor_id and campaign_id conform to UUIDv4 format And no personally identifiable information beyond these identifiers is included in the Source Stamp
Audit Log Stores Immutable Source Stamp Snapshot
Given an action completes and an audit log entry is created When the Source Stamp snapshot is persisted to the audit log Then the stored snapshot is immutable (write-once) and cannot be altered by subsequent edits to scripts, citations, or campaign settings And retrieving the audit log entry via admin UI or API returns the same content_hash and field values that were captured at action time
Timezone-Accurate Timestamps
"As a program manager, I want timestamps to be accurate to timezone and immutable so that comparisons across campaigns and audits remain consistent."
Description

Record and display all source and action timestamps in immutable ISO 8601 with timezone awareness (stored in UTC, rendered in the organization’s preferred timezone). Define and capture key events—source_verified_at, script_generated_at, and action_submitted_at—to preserve causal ordering and enable reliable comparisons across campaigns and jurisdictions. Ensure clock synchronization, handle daylight saving transitions, and prevent modification after issuance while allowing append-only corrections with explicit reason codes.

Acceptance Criteria
UTC Storage and Organization-Time Rendering
Given an organization with preferred timezone "America/New_York" And all timestamping nodes are synchronized within <= 500 ms of an NTP reference When source_verified_at, script_generated_at, and action_submitted_at are captured at instant t Then the database stores each timestamp as an ISO 8601 UTC string with a trailing 'Z' equal to t And the API/UI render each timestamp in ISO 8601 including the numeric offset and IANA zone ID for the org timezone And each rendered local time equals t converted to the org timezone with the correct DST offset for t And both stored and rendered strings pass ISO 8601 validation
Key Event Fields Recorded Once Per Action
Given a new action lifecycle for a supporter and a campaign script When the source is verified, the script is generated, and the supporter submits the action Then source_verified_at, script_generated_at, and action_submitted_at are each populated exactly once and are non-null And subsequent attempts to overwrite any of these fields are rejected with HTTP 409/422 and no persisted change And each timestamp appears on the supporter receipt, the script/source stamp, and in CSV/API exports for that action
Causal Ordering Enforcement Across Events
Given an action record that may already contain one or more event timestamps When persisting a new event timestamp for that action Then the system enforces source_verified_at <= script_generated_at <= action_submitted_at for that action And any write that would violate this ordering is rejected atomically with HTTP 409 and error code ORDERING_VIOLATION And a database constraint or equivalent guard prevents out-of-order persistence under concurrent writes
Daylight Saving Transition Accuracy
Given an organization with preferred timezone "America/Los_Angeles" When rendering stored UTC instant 2025-03-09T09:59:59Z Then the display shows 2025-03-09T01:59:59-08:00 America/Los_Angeles When rendering stored UTC instant 2025-03-09T10:00:00Z Then the display shows 2025-03-09T03:00:00-07:00 America/Los_Angeles When rendering stored UTC instant 2025-11-02T08:59:59Z Then the display shows 2025-11-02T01:59:59-07:00 America/Los_Angeles When rendering stored UTC instant 2025-11-02T09:00:00Z Then the display shows 2025-11-02T01:00:00-08:00 America/Los_Angeles And sorting by UTC preserves chronological order across the transitions
Clock Synchronization and Drift Handling
Given all production timestamping nodes are NTP-synchronized to a trusted time source Then measured clock drift per node is <= 500 ms for 99.9% of minutes, and an alert is emitted if drift > 500 ms persists for > 60 seconds When a client submits an action with any client-side time data Then action_submitted_at is stamped using server time only, ignoring client-provided times And operational metrics expose current drift and last sync status via monitoring endpoints
Immutability and Append-Only Corrections
Given a receipt or record has been issued containing any of the three timestamps Then direct updates to those persisted timestamp fields via API or admin UI are disallowed with HTTP 403 and no data change When a correction is required Then an append-only correction record can be created with fields {event_name, old_value_utc, corrected_value_utc, reason_code, corrected_by, corrected_at_utc} And reason_code must be one of [DATA_SOURCE_ERROR, CLOCK_DRIFT_CORRECTION, OTHER] or the write is rejected with HTTP 422 And exports and receipts show the original value plus the most recent correction metadata without altering the original stored timestamp
Source Cache & Refresh Policy
"As a technical admin, I want sources to be cached and refreshed automatically with clear policies so that pages stay current without rate-limiting or stale data."
Description

Introduce a caching layer for verified sources with configurable TTLs by jurisdiction and source type, background refresh jobs, and webhook/queue triggers when bill status changes. Implement exponential backoff, rate limiting, and conditional requests (ETag/If-Modified-Since) to reduce load while keeping data fresh. Surface freshness indicators and failure states to dependent components, and provide manual refresh controls with safe concurrency to avoid stale citations in scripts and receipts.
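For illustration, a minimal sketch of TTL-gated refresh with conditional requests; the cache-entry shape is assumed, and the backoff and rate limiting from the criteria below would wrap this:

```python
import urllib.request
from datetime import datetime, timedelta, timezone
from urllib.error import HTTPError


def refresh_source(entry: dict, ttl: timedelta) -> dict:
    """Refresh a cached source only when its TTL has lapsed, sending
    If-None-Match / If-Modified-Since so an unchanged document costs a 304."""
    now = datetime.now(timezone.utc)
    if now < entry["expires_at"]:
        return entry                                 # still fresh, serve from cache
    req = urllib.request.Request(entry["url"])
    if entry.get("etag"):
        req.add_header("If-None-Match", entry["etag"])
    if entry.get("last_modified"):
        req.add_header("If-Modified-Since", entry["last_modified"])
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            entry.update(payload=resp.read(),
                         etag=resp.headers.get("ETag"),
                         last_modified=resp.headers.get("Last-Modified"))
    except HTTPError as exc:
        if exc.code != 304:                          # 304 = unchanged; revalidate only
            raise
    entry["last_validated_at"] = now
    entry["expires_at"] = now + ttl
    return entry
```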

Acceptance Criteria
TTL by Jurisdiction and Source Type
Given a default TTL of 24h and an override TTL of 6h for jurisdiction US-CA and sourceType billPage And a verified source for US-CA billPage is cached at T0 When the same source is requested at T0+5h Then the cached value is returned with no blocking network call and freshnessStatus = "fresh" When the same source is requested at T0+6h+1m Then a refresh is initiated before serving, the cache is updated if changed, and the new expiry = now + 6h When a source for US-CA with sourceType docketLink is requested without an override Then the default 24h TTL is applied When the override TTL for US-CA billPage is changed to 4h at T0+2h Then the new TTL is applied on the next cache write without invalidating the current cache entry
Conditional Requests with ETag/If-Modified-Since
Given a cached source with ETag "abc" and Last-Modified = D1 When a refresh is performed Then the request includes headers If-None-Match: "abc" and If-Modified-Since: D1 When the server responds 304 Not Modified Then the payload remains unchanged, lastValidatedAt is set to now, and expiresAt = lastValidatedAt + TTL When the server responds 200 OK with a new ETag "def" and updated content Then the cache is atomically updated with the new payload and ETag, lastFetchedAt and lastValidatedAt = now, and the previous version is not served to dependents
Exponential Backoff and Rate Limiting on Refresh Failures
Given a refresh attempt receives 429 or any 5xx response When subsequent retries are scheduled Then exponential backoff is applied with delays of 1m, 2m, 4m, 8m, up to a max delay of 60m, with maxAttempts = 5 before marking freshnessStatus = "failed" And nextRetryAt and errorCount are updated on each failure And a per-host rate limit of 60 requests/minute is enforced; excess refreshes are queued without exceeding the limit When a retry succeeds Then freshnessStatus transitions to "fresh", errorCount resets to 0, and nextRetryAt is cleared
Webhook/Queue Trigger on Bill Status Change
Given a bill.status_changed event for billId X is received via webhook or internal queue When the event is processed Then all cached sources linked to billId X are refreshed immediately using conditional requests, regardless of current TTL And duplicate events for X within a 5-minute window are de-duplicated to a single refresh job And processing is idempotent; replays do not cause duplicate network calls or exceed rate limits And the system emits a refresh outcome event (success/failure) with correlationId for downstream consumers
Freshness Indicators and Failure States Surfaced to Dependents
Given a cached source with TTL = 24h and refreshLeadTime = 15m When now = lastValidatedAt + 23h 50m Then the API returns freshnessStatus = "near_expiry" When now > expiresAt Then the API returns freshnessStatus = "stale" And the source metadata includes lastFetchedAt, lastValidatedAt, ttlSeconds, expiresAt, ageSeconds, freshnessStatus in {fresh, near_expiry, stale, failed}, and failureReason (if any) And dependent components can retrieve and render "Verified as of <timestamp>" using lastValidatedAt
Manual Refresh with Safe Concurrency and Audit
Given an admin triggers a manual refresh for source Y via UI/API When the request is received Then a refresh job is enqueued and a 202 response with jobId is returned within 500ms And if a refresh for Y is already running, subsequent manual requests within that window return 409 Conflict with code "refresh_in_progress" and do not start a new job And concurrent manual/background triggers for Y acquire a distributed lock so only one network fetch occurs And an audit record is written with actorId, sourceId, requestedAt, startedAt, finishedAt, outcome, and error (if any) When the job completes successfully Then dependent components see the updated content on their next read
Stale-Data Guardrails for Scripts and Receipts
Given allowStaleCitations = false (default) and a source is stale (now > expiresAt) When the script/receipt generator requests the source Then the generator must not render the citation and returns an error code "source_stale" or displays a placeholder "citation unavailable—refresh required" And a metric and log entry are emitted including sourceId and ageSeconds When allowStaleCitations = true and a source is stale Then the generator may render the last-known-good citation but must include an "Out of date" indicator and the exact "Verified as of" timestamp When a successful refresh occurs Then subsequent generations use the updated citation without the stale indicator
Admin Review & Override
"As a campaign owner, I want a review and override panel for citations so that I can correct edge cases and maintain accuracy under deadlines."
Description

Provide an admin panel to preview, approve, or disable auto-added citations per campaign, and to add vetted custom sources with explicit verification state and rationale. Enforce role-based permissions, show diff/lineage for changes, and maintain an append-only audit trail for all overrides. Include guardrails that prevent publishing scripts with unverified overrides unless a policy-based exception is acknowledged, ensuring speed under deadline without sacrificing auditability.

Acceptance Criteria
Preview and Approve Auto-Added Citations
Given I am a user with role "Campaign Admin" and a campaign with auto-added citations exists When I open the campaign's Source Stamp panel and select "Preview citations" Then I see a list of citations with source title, bill page URL, docket link, capture timestamp, and verification state for each citation And each citation has actions "Approve" and "Disable" When I approve a citation Then its state updates to "Approved" and it is included in script and receipts preview within 10 seconds When I disable a citation Then its state updates to "Disabled" and it is excluded from script and receipts preview within 10 seconds
Add Vetted Custom Source With Verification
Given I am a user with role "Campaign Admin" or "Compliance" When I add a custom source with URL, title, description, verification state (Verified/Unverified), and rationale Then the form enforces required fields: URL, verification state, rationale And URL is validated as reachable (HTTP 200/301/302) before save And the custom source saves with a capture timestamp and actor And unverified entries are visually flagged in the list And verified entries are included in script and receipts preview; unverified entries are excluded from publish until exception is acknowledged
Role-Based Permissions Enforcement
Rule: Only users with roles "Campaign Admin" or "Compliance" can approve/disable citations or add custom sources
Rule: Users with role "Read-Only Auditor" can view citations, verification states, diffs, and audit trail but cannot modify (controls disabled)
Rule: Users without these roles are blocked with 403 and cannot view rationale content or override controls
Rule: All permission denials are logged with actor, timestamp, and requested action
Diff and Lineage View for Overrides
Given a citation or custom source has at least one override or state change When I open "View history" for that item Then I see a chronological timeline of versions with previous value, new value, change type, actor (user id and role), timestamp (UTC), and rationale And a side-by-side diff highlights added/removed text in title and rationale fields And I can filter versions by actor, change type, and date range And I can revert by creating a new override that restores a prior version (no in-place edits)
Append-Only Audit Trail Integrity
Rule: All approvals, disables, custom source additions/edits, and policy exception acknowledgements create immutable audit records with UUID, timestamp (UTC), actor, action type, before/after values, campaign_id, and rationale Rule: Audit records cannot be edited or deleted via UI or API; attempts return 403 and are themselves logged Rule: Audit log export provides JSONL with record count and SHA-256 checksum; the exported record count exactly matches the count shown in the UI Rule: Audit records are ordered strictly by timestamp and UUID and are append-only (no gaps or overwrites)
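A brief sketch of how the JSONL export with record count and SHA-256 checksum could look, assuming a Node.js runtime and a hypothetical `AuditRecord` shape; the ordering mirrors the timestamp-then-UUID rule above.

```typescript
import { createHash } from "node:crypto";

interface AuditRecord {
  uuid: string;
  timestamp: string; // UTC ISO 8601
  actor: string;
  actionType: string;
  campaignId: string;
  before: unknown;
  after: unknown;
  rationale?: string;
}

// Serialize append-only audit records as JSONL and compute the export manifest.
function exportAuditLog(records: AuditRecord[]): {
  jsonl: string;
  recordCount: number;
  sha256: string;
} {
  // Stable ordering: timestamp first, UUID as a deterministic tiebreaker.
  const ordered = [...records].sort(
    (a, b) => a.timestamp.localeCompare(b.timestamp) || a.uuid.localeCompare(b.uuid)
  );
  const jsonl = ordered.map((r) => JSON.stringify(r)).join("\n");
  const sha256 = createHash("sha256").update(jsonl, "utf8").digest("hex");
  return { jsonl, recordCount: ordered.length, sha256 };
}
```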
Publish Guardrails for Unverified Overrides
Given the campaign has at least one unverified custom source or unverified override When a user attempts to publish scripts or action pages Then the publish is blocked and a modal lists the unverified items And the modal requires acknowledgement of a policy-based exception with reason, policy reference, and expiration if policy mandates And if the configured policy requires secondary approval, a second user with role "Compliance" must approve before publish succeeds And upon successful exception, publish proceeds and an audit record links the exception to the release id
Per-Campaign Scoping of Overrides
Given I approve/disable a citation or add a custom source in Campaign A When I view Campaign B Then no changes from Campaign A are visible in Campaign B And API endpoints require campaign_id and enforce isolation; cross-campaign mutation attempts return 403 And all audit records include campaign_id and display only in the matching campaign's admin view

Tone Tuner

Lets you set the voice for each stage—Inform, Urge, or Escalate—and automatically adapts suggested scripts to the selected tone and reading level. Keeps multi‑org coalitions aligned on message while maintaining accessibility and inclusivity.

Requirements

Stage-based Tone Profiles
"As a campaign director, I want to define tone profiles for each stage so that our messaging stays consistent and strategic across all actions."
Description

Enable creation and management of tone profiles for each campaign stage (Inform, Urge, Escalate). Each profile includes configurable attributes (formality, urgency, sentiment, assertiveness), inclusive language guidelines, target reading level ranges, and channel-specific nuances (phone, email, social). Provide default presets aligned with RallyKit best practices and allow per-campaign and per-coalition overrides. Include versioning with change history, role-based access controls, real-time previews, and validation warnings when settings conflict with accessibility or coalition guidelines. Integrate with existing campaign setup so profiles attach to actions and auto-apply during script generation.

Acceptance Criteria
Create and Configure Stage Tone Profiles
Given I have Editor permissions for a campaign or coalition When I create or edit a tone profile for the Inform, Urge, or Escalate stage Then I can set values for formality, urgency, sentiment, and assertiveness from predefined scale options And I can define inclusive language guidelines, a target reading-level range, and channel-specific nuances for phone, email, and social And required fields must be completed before Save is enabled And out-of-scope values are rejected with inline error messages When I save the profile Then the profile is persisted and immediately available in campaign setup
Default Presets and Overrides Precedence
Given RallyKit best-practice presets exist for each stage When I start a new campaign Then default stage profiles are pre-populated from presets When a coalition-level override is configured Then member campaigns inherit coalition values in place of platform defaults When a campaign-level override is configured Then campaign values take precedence over coalition or platform defaults for that campaign only When an override is cleared Then the setting reverts to the next applicable source And each field shows its source label: Default, Coalition, or Campaign
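The source-precedence behavior above (Campaign over Coalition over Default) can be expressed as a small resolver; this TypeScript sketch and its `resolveSetting` helper are illustrative only.

```typescript
type SettingSource = "Campaign" | "Coalition" | "Default";

interface ResolvedSetting<T> {
  value: T;
  source: SettingSource;
}

// Campaign overrides win, then coalition overrides, then platform defaults.
function resolveSetting<T>(
  campaignValue: T | undefined,
  coalitionValue: T | undefined,
  defaultValue: T
): ResolvedSetting<T> {
  if (campaignValue !== undefined) return { value: campaignValue, source: "Campaign" };
  if (coalitionValue !== undefined) return { value: coalitionValue, source: "Coalition" };
  return { value: defaultValue, source: "Default" };
}

// Example: clearing the campaign override (undefined) reverts to the coalition value.
const urgency = resolveSetting<number>(undefined, 4, 3); // { value: 4, source: "Coalition" }
```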
Versioning and Change History
Given a stage tone profile exists When I save changes to the profile Then a new immutable version is created with version number, timestamp, editor, and required change summary When I view history Then I can compare any two versions and see field-level diffs When I revert to a prior version Then a new version is created that matches the reverted values and becomes the active version And the version applied to any generated script is recorded and viewable for audit
Role-Based Access Control and Audit
Given roles exist: Viewer, Editor, Admin When a Viewer accesses tone profiles Then they can read but cannot create, edit, or revert profiles When an Editor accesses tone profiles within their scope (campaign or coalition) Then they can create, edit, and revert profiles within that scope only When an unauthorized user attempts a restricted action Then the action is blocked with a permission error and no changes are saved And all create, edit, revert, and permission-denied events are logged with user, action, scope, and timestamp
Real-time Preview and Validation Warnings
Given a user is editing a stage tone profile When they adjust tone attributes, reading-level range, inclusive guidelines, or channel nuances Then the multi-channel preview updates in real time to reflect the selected tone and channel formatting And the preview displays estimated reading level and flags any content outside the configured range And non-inclusive or restricted phrases per guidelines are highlighted with suggested alternatives When conflicts exist with coalition accessibility or messaging guidelines Then non-blocking warnings are shown, and saving is only blocked if enforcement is enabled for that guideline
Auto-apply Profiles to Actions and Script Generation
Given actions are assigned to stages and channels within a campaign When scripts are generated for those actions Then the active profile for the action’s stage and channel is applied, including tone attributes, reading level, inclusivity rules, and channel nuances And the applied profile’s version and source (Default, Coalition, Campaign) are recorded with the script for audit When the stage of an action changes due to bill status updates Then regenerated scripts use the new stage’s active profile When a profile is updated Then newly generated scripts use the new active version, while previously sent scripts remain unchanged
Adaptive Script Generation
"As an organizer, I want scripts to auto-adapt to tone and reading level so that I can launch action pages quickly without manual rewriting."
Description

Generate channel-ready scripts that automatically adapt to the selected stage tone and target reading level while incorporating district, legislator, and bill status data. The engine rewrites sentence structure, vocabulary, and calls-to-action to match tone parameters without changing core policy intent. Support placeholders for personalization, channel variants, and optional length constraints. Preserve manual edits across regenerations with diff-aware updates and offer seeded, deterministic outputs for coalition review. Integrate with the existing auto-match-to-legislators pipeline and cache generated variants for performance.

Acceptance Criteria
Stage Tone and Reading Level Adaptation
Given a base policy summary derived from bill metadata and coalition-approved copy And a selected stage in {Inform, Urge, Escalate} and target_reading_level L When a script is generated for any channel Then the measured reading level (Flesch-Kincaid Grade) is within L ± 0.5 And a tone classifier labels the script as the selected stage with probability ≥ 0.80 And the CTA verb set matches the configured verb list for the selected stage And semantic similarity between the script and the base policy summary is ≥ 0.90 (cosine), indicating no policy intent drift
District, Legislator, and Bill Status Data Merge
Given supporter address resolves to district D and legislator set {L_i} and bill B with status S When a script is generated Then placeholders {district}, {legislator_name}, {chamber_title}, {bill_id}, {bill_status_phrase} are fully resolved with accurate values for each L_i And honorifics and chamber-specific titles are correct per legislator And status S maps to the correct tense/phrase per the status mapping table And when multiple legislators are matched, one correctly personalized variant is produced per legislator And the script contains 0 unreplaced or unknown placeholders
Channel Variants and Length Constraints
Given channel ∈ {phone, voicemail, email, SMS, social} and optional length_limit provided per channel When a script is generated Then the script conforms to channel structure standards (e.g., greeting, bill mention, clear ask, sign-off where applicable) And if length_limit is provided, the script does not exceed length_limit for that channel And if length_limit is not provided, defaults are enforced: SMS ≤ 320 chars, social ≤ 280 chars, voicemail ≤ 30s (~75 words), email subject ≤ 70 chars And channel-appropriate CTAs are used (phone=call request, email=send message, SMS=tap-to-call/link, social=share/post) And all required placeholders for the channel are resolved or a blocking validation error is returned with a list of missing fields
Deterministic Seeded Output for Coalition Review
Given identical inputs (bill, status, tone, reading level, channel, placeholders) and a fixed seed S When scripts are generated multiple times Then outputs are byte-identical across runs and environments for the same S And changing the seed to S' yields a measurably different output (≥ 5% token difference) And the seed value and generation parameters are recorded in metadata for audit and reproduction And a "lock seed" option prevents accidental reseeding during regeneration
Diff-Aware Regeneration Preserving Manual Edits
Given a previously generated script with user-edited spans marked When regenerating after changes to tone, reading level, or bill status Then user-edited spans are preserved verbatim with 0 modifications And non-edited spans update to reflect new parameters And a side-by-side diff is presented highlighting only non-edited changes And if a required data update intersects an edited span, the system flags a merge conflict and prompts the user to accept, reject, or reconcile suggested changes And an undo action restores the prior version in one step
Auto-Match Integration and Caching Performance
Given a supporter address is submitted and auto-match returns legislator(s) When the action page loads scripts Then scripts are generated using matched legislator data without manual selection And for cached variants, p95 render latency ≤ 300 ms; for cache misses, p95 ≤ 2,000 ms under 50 rps load And cache keys include bill_id, bill_status, stage, reading_level, channel, legislator_id, jurisdiction, seed, and length constraints And cache invalidates within 5 seconds of changes to tone settings, reading level, bill status, or coalition seed lock And only one generation occurs per unique cache key under concurrency (request coalescing), with all waiting requests receiving the same result
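One plausible shape for the cache key and request-coalescing behavior described above; the `ScriptVariantKey` type and `getScript` helper are hypothetical, and the durable cache lookup itself is omitted.

```typescript
// Any change to a key component (tone, reading level, bill status, seed, ...)
// produces a new key, which is how stale variants fall out of use.
interface ScriptVariantKey {
  billId: string;
  billStatus: string;
  stage: "Inform" | "Urge" | "Escalate";
  readingLevel: number;
  channel: string;
  legislatorId: string;
  jurisdiction: string;
  seed: number;
  lengthLimit?: number;
}

function cacheKey(k: ScriptVariantKey): string {
  return [
    k.billId, k.billStatus, k.stage, k.readingLevel, k.channel,
    k.legislatorId, k.jurisdiction, k.seed, k.lengthLimit ?? "default",
  ].join("|");
}

// Request coalescing: concurrent misses for the same key share one generation.
// (A lookup against the persistent cache would happen before this step.)
const inFlight = new Map<string, Promise<string>>();

async function getScript(
  k: ScriptVariantKey,
  generate: () => Promise<string>
): Promise<string> {
  const key = cacheKey(k);
  const pending = inFlight.get(key);
  if (pending) return pending; // waiting requests reuse the same promise
  const p = generate().finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```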
Coalition Profiles and Inclusivity Guardrails
Given a campaign with an active coalition tone profile and inclusive-language rules When scripts are generated across stages and channels Then the coalition tone profile is applied consistently (same style and CTA verb sets per stage) And the inclusive-language lint reports 0 critical and ≤ 2 minor issues; otherwise generation fails with actionable errors And reading level targets meet accessibility requirements (Inform ≤ Grade 8 by default unless overridden) And personalization placeholders avoid protected-class assumptions and honor user-provided pronouns if available
Reading Level Guardrails
"As a communications lead, I want readability and inclusivity guardrails so that our scripts remain accessible and welcoming to diverse supporters."
Description

Provide real-time readability scoring (e.g., Flesch–Kincaid, SMOG) and enforce target grade bands defined in tone profiles. Offer automatic rephrasing suggestions to meet targets while preserving meaning, flag jargon, idioms, and exclusionary or polarizing phrases, and suggest inclusive alternatives. Validate ADA and screen-reader friendliness (plain language, link text clarity) and require alt-text where applicable. Allow configurable exceptions with justification logged for audit. Surface warnings and fixes inline in the editor and block publishing if mandatory thresholds are not met.
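For reference, a rough sketch of the two readability formulas named here (Flesch-Kincaid Grade and SMOG); a production scorer would need a real syllable counter, so the vowel-group heuristic below is only indicative.

```typescript
// Crude syllable estimate: count contiguous vowel groups, minimum one per word.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function readabilityScores(text: string): { fleschKincaidGrade: number; smog: number } {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  const polysyllables = words.filter((w) => countSyllables(w) >= 3).length;

  // Flesch-Kincaid Grade Level and SMOG, using the standard published constants.
  const fleschKincaidGrade =
    0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
  const smog = 1.043 * Math.sqrt(polysyllables * (30 / sentences)) + 3.1291;

  return { fleschKincaidGrade, smog };
}
```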

Acceptance Criteria
Real-time Readability Scoring and Target Band Enforcement
Given a tone profile with a target grade band is active When the user types or pastes content up to 1500 words Then Flesch–Kincaid Grade Level and SMOG scores update in-line within 400 ms at the 95th percentile and display pass/fail against the target band And If either metric exceeds the upper bound of the band, the content is marked "Out of band" and Publish is disabled And If both metrics are within the band, the content is marked "Within band" and Publish is enabled (assuming no other blockers) And When the user switches the stage (Inform, Urge, Escalate), the target band updates immediately and the content is re-evaluated
Automatic Rephrasing Suggestions to Meet Targets
Given content is out of the target grade band When the user opens readability suggestions Then the system proposes at least one rephrasing per flagged sentence that reduces grade level by ≥1 on at least one metric (Flesch–Kincaid or SMOG) And Quick-Apply replaces text in-place and re-scores within 400 ms And Suggestions preserve named entities (e.g., bill numbers, legislator names, organization names) and hyperlinks unchanged And The user can accept or reject each suggestion individually and undo within one action
Jargon, Idiom, and Polarizing Language Detection with Inclusive Alternatives
Given organization-wide and campaign-specific lexicons for jargon, idioms, and exclusionary/polarizing phrases are configured When content contains any listed term or its inflection Then the term is highlighted inline with category and severity And At least one inclusive or plain-language alternative is suggested per occurrence And The user can mark an occurrence as "Ignore for this document"; ignored items no longer block publishing but remain visible as Ignored And Admins can add, edit, or disable lexicon entries, with changes applying on save
ADA and Screen-Reader Friendliness Validation
Given the editor contains text, links, and images When accessibility validation runs automatically during editing Then sentences >25 words or passive-voice sentences are flagged as Warnings And Non-descriptive link text (e.g., "click here", "read more") without an aria-label or title providing context is flagged as an Error And Images without alt-text are flagged as Errors and block publishing And All Errors must be resolved or excepted before Publish is enabled
Inline Warnings, Fixes, and Publish Blocking
Given one or more validation issues exist When the user views the editor Then each issue appears inline and in a side panel with severity (Error blocks, Warning advisory) And Clicking an issue scrolls to the location and shows rationale plus a Fix action when available And Publish is disabled while any Error exists and enabled when no Errors exist (Warnings may remain) And Resolving an issue updates counts and state within 400 ms
Configurable Exceptions with Audit Logging
Given the user has the Exception Approver permission When they grant an exception for one or more failing checks Then a justification text (minimum 20 characters) is required And The system logs user, timestamp, criteria waived, affected content snapshot, and justification in an immutable audit log And Publishing is enabled for the document while the exception is active And The exception status is visible in the editor and included in exportable compliance reports
Coalition Message Governance
"As a coalition coordinator, I want an approval workflow for tone and scripts so that partner organizations stay aligned before publishing."
Description

Implement a multi-organization approval workflow for tone profiles and generated scripts. Support roles (Owner, Editor, Reviewer), suggest-and-approve editing with inline comments, required consensus rules (e.g., majority or all sign-off), and time-boxed review windows with reminders. Provide version diffs, shareable previews, and audit-ready logs of approvals, overrides with justification, and publication timestamps. Ensure permissions respect coalition agreements and restrict unilateral changes to shared tone settings.

Acceptance Criteria
Role-based Permissions for Shared Tone Settings
Given a coalition workspace with roles Owner, Editor, Reviewer and a shared tone profile, When a user attempts to modify live shared tone settings, Then direct edits are blocked and only proposal creation by Owners/Editors is allowed. Given coalition agreements restrict unilateral changes, When any user attempts to publish changes without required approvals or to alter governance settings without authorization, Then the action is denied with a permission error and the attempt is logged. Given per-organization access rules are configured, When an Editor from Org A creates a proposal, Then only permitted coalition org members (by role) can view/review it; users outside the coalition receive a 403 and no metadata leakage occurs. Given a user’s role is revoked during an active review, When they next access the proposal, Then access is immediately removed and any prior vote from that user is marked Revoked and excluded from consensus tallies.
Coalition Tone Profile Proposal Submission and Routing
Given an Owner or Editor finalizes edits in proposal mode, When they click Submit for Approval, Then the system creates a Proposal ID, captures a read-only snapshot with a content hash, sets status Pending Review, and displays the consensus rule applied. Given the proposal touches multiple stages (Inform, Urge, Escalate), When submitted, Then the snapshot includes all affected stage scripts and tone settings with stage-level metadata. Given required reviewers are defined per org and role, When the proposal is submitted, Then in-app and email notifications are sent to all required reviewers within 1 minute with links to the preview and their decision panel. Given shareable previews are enabled, When the proposer generates a preview link, Then a tokenized read-only URL is created that expires in 7 days, shows a Pending Approval banner, and hides all edit controls. Given a draft is saved but not submitted, When users navigate away, Then no notifications are sent and no review window timer starts.
Inline Suggestions and Comment Threads
Given a reviewer opens the proposal preview, When they select text in any script, Then they can add an inline suggestion or comment anchored to that exact range with their org and role labels displayed. Given an inline suggestion exists, When the proposer accepts or rejects it, Then the change is applied to the proposal draft, the decision is recorded, and the thread status updates to Resolved or Unresolved accordingly. Given @mentions are used in a comment, When a user is mentioned, Then the mentioned user receives an in-app and email alert within 1 minute linking to the exact comment. Given content moderation rules, When an Owner deletes a comment, Then a tombstone remains showing who deleted it, when, and the reason, preserving audit visibility.
Consensus Rule Enforcement (Majority/All)
Given consensus rule = Majority and 5 required reviewers are assigned, When at least 3 approvals are recorded and fewer than 3 rejections exist, Then the proposal status becomes Approved and publishing is enabled. Given consensus rule = All and 4 required reviewers are assigned, When any one required reviewer rejects, Then the proposal status becomes Changes Requested and publishing remains disabled until resubmission. Given abstentions are permitted, When a reviewer selects Abstain and provides a reason, Then their decision is logged and excluded from approval/rejection counts. Given an Owner updates the coalition consensus rule configuration, When proposals are already in review, Then the new rule applies only to proposals created after the change; existing proposals retain their original rule and this is displayed on their header.
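A compact sketch of the Majority/All consensus evaluation described above; the exclusion of abstained and revoked votes follows the criteria, but `evaluateConsensus` and its types are illustrative.

```typescript
type Decision = "Approve" | "Reject" | "Abstain" | "Revoked";
type ConsensusRule = "Majority" | "All";
type ProposalStatus = "Pending Review" | "Approved" | "Changes Requested";

// Abstentions and revoked votes are excluded from approval/rejection counts.
function evaluateConsensus(
  rule: ConsensusRule,
  requiredReviewerCount: number,
  decisions: Decision[]
): ProposalStatus {
  const approvals = decisions.filter((d) => d === "Approve").length;
  const rejections = decisions.filter((d) => d === "Reject").length;

  if (rule === "All") {
    if (rejections > 0) return "Changes Requested";
    return approvals === requiredReviewerCount ? "Approved" : "Pending Review";
  }
  // Majority: more than half of the required reviewers must approve.
  const needed = Math.floor(requiredReviewerCount / 2) + 1;
  if (approvals >= needed && rejections < needed) return "Approved";
  if (rejections >= needed) return "Changes Requested";
  return "Pending Review";
}

// Example from the criterion: Majority with 5 required reviewers needs 3 approvals.
```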
Time-boxed Review Window and Reminders
Given a 72-hour review window is configured, When a proposal is submitted, Then a countdown timer starts, the window start timestamp (UTC) is logged, and the deadline is displayed in the proposal header. Given a proposal is Pending Review, When 48 hours and 4 hours remain, Then automated reminders are sent to outstanding required reviewers and the proposer. Given the review window expires without meeting the consensus rule, When the timer reaches 0, Then the proposal status becomes Expired, publishing remains disabled, and the proposer may extend the window once by up to 24 hours or withdraw the proposal. Given an Owner elects to override after expiry, When they click Override Approve and enter a justification of at least 20 characters, Then the proposal is marked Approved (Override), changes are published, and all coalition members are notified immediately.
Version Diff Visualization and Export
Given a proposal exists, When a reviewer opens View Diff, Then a side-by-side redline displays changes to tone settings and all stage scripts (Inform, Urge, Escalate) with additions highlighted in green and deletions in red, plus a summary list of changed sections. Given accessibility requirements, When the diff is displayed, Then non-color cues (icons, underlines) and aria-labels are present and color contrast meets WCAG AA. Given a user requests export, When Export Diff is clicked, Then a PDF and CSV bundle is generated within 30 seconds containing proposal metadata (ID, content hash, submitter, timestamps) and the redline content.
Audit Logging of Approvals, Overrides, and Publication
Given any governance action occurs (submit, comment, approve, reject, abstain, override, publish), When the action is performed, Then an immutable audit entry records actor ID, org, role, action type, target proposal ID, UTC timestamp, source IP, before/after version hashes, and any justification text. Given an Owner opens the audit log, When they filter by date range, action type, org, or user, Then the system returns matching entries within 5 seconds and allows CSV export of the results. Given a proposal is published (standard or override), When publication completes, Then the publication timestamp is recorded in the proposal history and audit log, and a digest email with the approval trail is sent to all coalition Owners.
Tone Performance Analytics
"As a campaign strategist, I want analytics on tone and reading level effectiveness so that I can optimize our messaging for higher completions."
Description

Track and report action outcomes segmented by stage tone and reading level, including conversion rates, abandonment, time-to-complete, call duration where available, and downstream outcomes (e.g., legislator responses). Enable A/B and multivariate tests between tone presets and reading levels, with statistically sound winner determination. Visualize trends by district, audience segment, channel, and time, and generate recommendations (e.g., escalate tone in low-response districts). Provide privacy-safe aggregation, opt-out controls, and CSV/API export to support external reporting.

Acceptance Criteria
Segmented Outcome Metrics by Tone Stage and Reading Level
Given the analytics dashboard is loaded with at least 10,000 recorded actions When the user applies filters for Stage (Inform/Urge/Escalate), Tone Preset, Reading Level bands (e.g., ≤6, 7–9, 10–12, >12), Channel, Audience Segment, and District Then the dashboard displays, for the selected cohort(s): conversion rate (%), abandonment rate (%), average time-to-complete (seconds), average call duration (seconds, when telephony data is present), and counts of downstream outcomes (e.g., legislator responded yes/no/unknown) And results reflect the selected time range and timezone and match exported/API values within ±0.1 percentage points for rates and ±0.5 seconds for time metrics And queries up to 1,000,000 actions and 50 filter combinations complete in ≤5s at p95 And missing call duration is shown as "N/A" and does not affect averages
A/B and Multivariate Test Setup and Allocation
Given an organizer creates a new experiment selecting 2–6 tone presets and up to 3 reading levels When the test is launched with equal or custom weight allocation Then traffic is randomized and assigned per audience segment and channel without bias, with assignment stickiness for 7 days per supporter And a sample size calculator is provided using baseline conversion and minimum detectable effect to recommend per-variant n for 80% power at α = 0.05 And the system prevents launch if any variant’s minimum required sample size is < 100 or allocation would yield < 100 per variant in the planned window And experiment metadata (test_id, variant labels, start time, allocation, hypotheses) is stored and appears in analytics and exports
Statistically Sound Winner Determination
Given an experiment has reached the calculated minimum sample size per variant for 80% power at α = 0.05 When the analysis runs on the primary metric (conversion rate) and secondary metrics (abandonment, time-to-complete) Then the winner is declared only if the two-sided proportion test shows p ≤ 0.05 and the 95% CI for uplift has a lower bound ≥ 0 And multiple comparisons across variants are corrected using Holm–Bonferroni And time-based metrics use nonparametric bootstrap CIs and are reported as supportive, not decisive And if criteria are not met, the experiment is marked Inconclusive with guidance to extend or adjust MDE And exports include per-variant effect sizes, p-values, CIs, sample sizes, and analysis timestamp
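To illustrate the multiple-comparison step only (not the proportion test or confidence intervals), here is a sketch of Holm-Bonferroni applied to a list of per-variant p-values; the function name and return shape are assumptions.

```typescript
// Step-down Holm-Bonferroni: sort p-values ascending, require p_(i) <= alpha / (m - i)
// at each step; once a test fails, all remaining (larger) p-values fail as well.
function holmBonferroni(pValues: number[], alpha = 0.05): boolean[] {
  const m = pValues.length;
  const order = pValues
    .map((p, idx) => ({ p, idx }))
    .sort((a, b) => a.p - b.p);

  const rejected = new Array<boolean>(m).fill(false);
  for (let i = 0; i < m; i++) {
    const threshold = alpha / (m - i);
    if (order[i].p <= threshold) {
      rejected[order[i].idx] = true; // comparison stays significant after correction
    } else {
      break; // first non-significant p-value stops the step-down
    }
  }
  return rejected;
}

// Example: holmBonferroni([0.01, 0.04, 0.03]) -> [true, false, false]
// (only the smallest p-value survives the correction at alpha = 0.05).
```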
Trend Visualization by District, Audience Segment, Channel, and Time
Given the user selects a time range and granularity (hour/day/week) When viewing trends pivoted by tone stage and reading level Then the system renders time-series charts for conversion and abandonment, bar charts by channel, and a choropleth map by district with accessible color contrast (WCAG AA) And tooltips disclose metric definitions and current filters; legends are toggleable to show/hide series And timezone can be changed and all charts re-compute accordingly And p95 chart render time is ≤3s for datasets up to 2 years and 5M actions And screenshots/PNGs and data CSV for each chart can be downloaded with the active filters embedded in the file metadata
Privacy-Safe Aggregation and Opt-Out Controls
Given privacy controls are enabled for the workspace When any cohort has < 20 actions or < 5 completions in the selected range Then the system suppresses row-level metrics for that cohort and aggregates to the next higher level (e.g., from district to state) or displays "Insufficient data" And no PII is stored or exposed in analytics, exports, or API (only hashed supporter IDs used for deduplication) And supporter opt-outs are respected within 15 minutes of opt-out, removing their events from all aggregates and re-computing affected metrics And an audit log records who changed privacy settings, what changed, and when And all suppression events are indicated with an icon and tooltip stating the applied thresholds
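A minimal sketch of the small-cell suppression rule, assuming hypothetical `CohortMetrics` and `CohortCell` types; roll-up to the next geographic level would happen in the aggregation query and is not shown.

```typescript
interface CohortMetrics {
  actions: number;
  completions: number;
  conversionRate: number;
}

const MIN_ACTIONS = 20;
const MIN_COMPLETIONS = 5;

type CohortCell =
  | { kind: "metrics"; metrics: CohortMetrics }
  | { kind: "suppressed"; reason: "Insufficient data" };

// Cohorts below either threshold are never reported at row level.
function cohortCell(m: CohortMetrics): CohortCell {
  if (m.actions < MIN_ACTIONS || m.completions < MIN_COMPLETIONS) {
    return { kind: "suppressed", reason: "Insufficient data" };
  }
  return { kind: "metrics", metrics: m };
}
```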
CSV and API Export for External Reporting
Given a filtered analytics view or experiment is selected When the user requests a CSV export or calls the analytics API Then the export includes a header with filter context, timezone, schema version, and generation timestamp And each row contains: time_bucket_start, stage, tone_preset, reading_level_band, channel, audience_segment, district, impressions, actions_started, actions_completed, conversion_rate, abandonment_rate, avg_time_to_complete_sec, avg_call_duration_sec, downstream_outcomes_{category}, test_id, variant_label And CSV generation completes within 60s for up to 10M rows via streaming; larger exports are chunked and notified via email/webhook And the API supports identical filters, cursor pagination, and 120 req/min rate limit, returning results consistent with the dashboard within ±0.1% And access is restricted to users with Export permission; unauthorized requests are denied with 403 and audited
Automated Performance Recommendations
Given districts or segments underperform relative to campaign median by ≥20% (relative) on conversion for 3 consecutive days and sample size ≥ 200 actions When recommendations are generated daily at 02:00 local timezone Then the system produces actionable suggestions (e.g., escalate tone from Inform to Urge, adjust reading level band) including rationale, affected cohorts, estimated uplift range with 95% CI, and a confidence score (0–1) And organizers can apply with one click, schedule, or dismiss; dismissed recommendations for the same cohort/tactic are not resurfaced for 30 days And applied recommendations are logged and evaluated with a pre/post analysis surfaced after 7 days And recommendations never override active experiments without explicit user confirmation
Multilingual Tone Localization
"As a statewide organizer, I want tone-consistent scripts in multiple languages so that non-English-speaking supporters can participate equally and effectively."
Description

Support multilingual tone profiles and script generation starting with Spanish, maintaining equivalent tone attributes (formality, urgency, sentiment) across languages. Provide locale-specific salutations, idioms, and accessibility guidelines, with region-aware legislator titles. Include translation memory, glossary management for policy terms, and a human-in-the-loop review mode for community validators. Detect target language from audience segments and allow per-language overrides while keeping coalition alignment and audit history intact. Ensure right-to-left and diacritic support where applicable.

Acceptance Criteria
Spanish Tone Parity across Stages
Given an English tone profile for Inform, Urge, and Escalate with target formality, urgency, sentiment, and reading level And a campaign with Spanish (es-US) enabled When the system generates Spanish scripts for all three stages Then each Spanish script’s measured formality, urgency, and sentiment are within ±1 on a 5-point scale of the configured targets And the Spanish reading level meets thresholds (INFLESZ ≥ 55 for Inform, ≥ 50 for Urge, ≥ 45 for Escalate) And the configured address mode (tú/usted/ustedes) is consistently applied across script sections And the output contains 0 occurrences of untranslated source strings or placeholders
Locale Salutations and Legislator Titles (es-US)
Given a fixture of 20 US legislators across federal and state chambers with known/unknown gender metadata And es-US locale mappings for titles and salutations and UseIdioms=true When Spanish scripts are generated Then 100% of salutations use the correct localized title per chamber and jurisdiction (e.g., Senador/a, Representante, Asambleísta) And neutral or inclusive forms are applied when gender is unknown per configuration And at least one locale-curated idiom is included in the body without violating readability thresholds And names and titles render with correct diacritics in web, email, and PDF previews (0 replacement characters) And greetings follow Spanish punctuation conventions
Locale Accessibility and Reading Level (es-US)
Given es-US accessibility guidelines with a banned-terms list and readability targets per stage When Spanish scripts are generated for Inform, Urge, and Escalate with Accessibility Mode enabled Then each script meets the readability target (INFLESZ ≥ 55 Inform, ≥ 50 Urge, ≥ 45 Escalate) And average sentence length is ≤ 20 words And there are 0 occurrences of terms from the banned-terms list And numbers 1–10 are spelled out if the configuration requires it And links and phone numbers are formatted per es-US locale conventions
Translation Memory and Glossary Enforcement (es-US)
Given a translation memory (TM) with 100 pre-approved Spanish segments and a locked glossary of 30 policy terms for es-US And an English source containing 20 TM-repeated segments and 10 glossary terms When Spanish scripts are generated Then ≥ 95% of repeated segments are exact-match reused from TM and non-matches are flagged as New And 100% of glossary terms use the locked translations exactly And a generation report lists reused, fuzzy, and new segments with segment IDs And approved edits update TM and are time-stamped with editor identity
Community Validator Review and Audit (es-US)
Given community validators assigned to es-US with an approval threshold N = 2 When a new Spanish script is generated Then its status is Needs Review and it cannot be published until ≥ 2 validator approvals are recorded And validators can propose inline edits with comments and change diffs are captured And upon Publish, the audit log records who approved, what changed, timestamps, and rationale And the final approved content updates TM/glossary per configured rules
Language Detection, Overrides, and Coalition Alignment
Given an audience dataset of 1,000 contacts with ground-truth language labels (en, es) And per-language override settings at campaign, segment, and contact levels When the system assigns target language and generates scripts Then automatic detection achieves precision ≥ 0.95 and recall ≥ 0.95 for Spanish on the test set And explicit overrides at contact, segment, or campaign level take precedence over detection in that order And all detections and overrides are recorded in the audit trail with actor, scope, old→new values, and timestamps And updates to coalition master tone profiles propagate to all locales within 60 seconds while preserving per-language overrides
RTL and Diacritic Rendering Support
Given the ar locale feature flag is enabled alongside es-US When scripts are generated and previewed for Arabic (ar) and Spanish (es-US) in web, email, and PDF Then Arabic content renders right-to-left with mirrored punctuation and right-aligned layout in previews And Spanish and Arabic diacritics display and export without loss (0 � characters) and survive copy/paste intact And outbound emails include UTF-8 encoding and the correct Content-Language header per locale And all input fields accept and store combining marks without truncation

Smart Split Rules

Define flexible allocation formulas—percentage, fixed, tiered, or hybrid—by partner tag, campaign, channel, or date window. Add floors, ceilings, and rounding, then set fallbacks if a partner lacks budget. RallyKit auto-applies rules to subscription and usage items, recalculates when attribution changes, and issues clean, auditable adjustments without spreadsheets.

Requirements

Rule Builder & Scope Targeting
"As an ops admin, I want to create and manage flexible split rules by partner, campaign, channel, and time so that allocations reflect our agreements without spreadsheets."
Description

Provide a visual and API-driven rule builder that lets admins define allocation formulas (percentage, fixed, tiered, hybrid) scoped by partner tag, campaign, channel, and date windows. Include floors, ceilings, and currency-aware rounding policies with configurable precedence and conflict resolution. Support rule versioning, draft/publish workflow, validation (e.g., totals sum to 100%, no overlapping scopes unless prioritized), and rollback. Integrate with RallyKit’s settings and permission model so only authorized roles can create, edit, and publish rules.

Acceptance Criteria
Create Percentage Rule With Scoped Targeting and Validation
Given an admin with Rule Builder permission is in the visual Rule Builder When they configure a Percentage allocation rule scoped to partnerTag="Allied", campaign="SB-42", channel="Email", and dateWindow="2025-09-01".."2025-10-31" And they set partner splits that total 100% with floors and ceilings defined for each partner Then the UI displays a running total that must equal 100% before Save Draft is enabled And attempting to Save Draft when totals ≠ 100% shows field-level errors and prevents saving And a successfully saved Draft persists scope, floors, ceilings, and rounding policy exactly as entered
API Rule Creation With Draft/Publish and Permissions
Given a service account with rules.write permission and an approver with rules.publish permission When the service account POSTs a rule to /api/rules with state="Draft" Then the API returns 201 with the rule in Draft and records creator, timestamps, and version "v1" When a user without rules.publish attempts to publish the Draft Then the API returns 403 and the state remains Draft When an approver publishes the Draft Then the state becomes Published, the version increments to "v2", and the published_by and published_at fields are recorded
Conflict Resolution by Precedence for Overlapping Scopes
Given two rules with overlapping scopes apply to the same transaction and have explicit precedence values When both rules would otherwise match Then only the rule with the higher precedence (lower precedence number) is applied and the other is skipped And if precedence is equal, the system resolves deterministically by earliest publish timestamp And the evaluation log records which rule won and why And the UI blocks publishing if overlapping scopes lack either precedence or a deterministic tiebreaker
Currency-Aware Rounding With Reconciliation
Given organization currency policies are set to USD=round to 2 decimals (bankers rounding) and JPY=0 decimals (round half up) When an allocation is computed for USD 1,000.00 across partners using floors/ceilings and for JPY 100,000 across the same partners Then each partner amount is rounded per its currency policy And a reconciliation step adjusts the smallest-impact partner so rounded amounts sum exactly to the original amount in each currency And the maximum rounding delta per partner does not exceed one unit of the currency’s least denomination
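One way to read the reconciliation step above, sketched under stated assumptions: shares are rounded per currency minor units and the residual is applied to the partner where it has the smallest relative impact (taken here as the largest share). Production code would work in integer minor units and honor the per-currency rounding mode (bankers vs. half up), which this sketch glosses over.

```typescript
const MINOR_UNIT_DIGITS: Record<string, number> = { USD: 2, JPY: 0 };

function roundTo(amount: number, digits: number): number {
  const factor = 10 ** digits;
  return Math.round(amount * factor) / factor;
}

// Round each partner share, then push the residual onto the largest share so
// the rounded amounts sum exactly to the original total.
function reconcileRounding(
  total: number,
  rawShares: Map<string, number>, // partnerId -> unrounded share
  currency: string
): Map<string, number> {
  const digits = MINOR_UNIT_DIGITS[currency] ?? 2;
  const rounded = new Map<string, number>();
  for (const [partner, share] of rawShares) rounded.set(partner, roundTo(share, digits));

  const sum = [...rounded.values()].reduce((a, b) => a + b, 0);
  const residual = roundTo(total - sum, digits);
  if (residual !== 0 && rounded.size > 0) {
    const [target] = [...rounded.entries()].sort((a, b) => b[1] - a[1])[0];
    rounded.set(target, roundTo(rounded.get(target)! + residual, digits));
  }
  return rounded;
}
```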
Rule Versioning, Audit Trail, and Rollback
Given a Published rule at version v2 exists When an editor clones it to Draft v3, makes changes, and publishes Then v3 becomes the active Published version and v2 remains read-only in history And the audit log captures actor, timestamp, and diff of changed fields When an authorized admin initiates a rollback to v2 Then v2 becomes the active Published version, v3 is archived, and subsequent evaluations use v2 And the audit trail records the rollback event with actor and reason
Budget Fallbacks, Floors, and Ceilings Enforcement
Given a hybrid rule with floors and ceilings references a partner whose budget is exhausted When the rule is evaluated for an allocation Then the configured fallback is applied (skip partner or reallocate proportionally) without violating other partners’ floors/ceilings And no negative allocations are produced And the evaluation log notes the fallback path taken and the affected partners
Publish-Time Validation Blocks Invalid Rules
Given a Draft rule contains any of: percentage totals not equal to 100%, overlapping date windows without precedence, invalid rounding policy for currency, end date before start date, or missing required scope fields When a user attempts to Publish Then the Publish action is blocked and field-level errors enumerate each failing validation And all errors must be resolved before Publish becomes available And Save Draft remains available to persist corrections
Allocation Engine for Subscriptions & Usage
"As a finance manager, I want RallyKit to automatically calculate partner allocations for every subscription and usage item so that payouts and internal reporting are accurate and consistent."
Description

Implement a deterministic, idempotent allocation engine that auto-applies Smart Split Rules to both subscription and usage line items at calculation time. Handle order of operations, proration, multi-currency rounding, floors/ceilings, and hybrid/tiered computations at scale. Ensure performance with batch processing and streaming triggers, and expose calculation results with trace IDs linking to the originating action, invoice, partner, and rule version. Provide configuration for synchronous (inline) and asynchronous (queued) execution paths.

Acceptance Criteria
Idempotent Allocation Recalculation for Identical Inputs
Given an invoice with subscription and usage line items, a specific rule version, and an idempotency key K, When the allocation engine is executed multiple times sequentially and concurrently with the same inputs and key K, Then the persisted allocation records are identical across runs, no duplicate adjustments are created, and totals per line item and per partner match exactly Given the same request with idempotency key K, When a retry occurs after a completed run, Then the API returns 200 with a replay indicator and zero new writes Given the same request without an idempotency key, When executed, Then the API rejects the write with 400 IdempotencyKeyRequired
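A simplified sketch of the idempotent write path: results are keyed by the caller-supplied idempotency key, replays return the stored result, and a missing key is rejected. The in-memory map stands in for a durable store, and a real implementation would need an atomic insert to be safe under concurrency.

```typescript
interface AllocationResult {
  allocations: Array<{ partnerId: string; amount: number }>;
  replayed: boolean;
}

const resultStore = new Map<string, AllocationResult>(); // stand-in for a durable store

async function allocateIdempotently(
  idempotencyKey: string | undefined,
  compute: () => Promise<Array<{ partnerId: string; amount: number }>>
): Promise<AllocationResult> {
  if (!idempotencyKey) {
    throw new Error("IdempotencyKeyRequired"); // maps to HTTP 400
  }
  const existing = resultStore.get(idempotencyKey);
  if (existing) {
    return { ...existing, replayed: true }; // HTTP 200 with a replay indicator, zero new writes
  }
  const allocations = await compute();
  const result = { allocations, replayed: false };
  resultStore.set(idempotencyKey, result);
  return result;
}
```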
Rule Conflict Resolution, Versioning, and Fallbacks
Given multiple Smart Split Rules match a line item, When selecting a rule, Then the engine applies the rule with the highest specificity in this priority order: partner_tag+campaign+channel+date_window > partner_tag+campaign+channel > partner_tag+campaign > partner_tag > campaign+channel+date_window > campaign+channel > campaign > channel > date_window > global_default, and records applied_rule_version_id Given two rules at the same specificity, When both are effective, Then the engine chooses the rule with the latest effective_at; if tied, the highest semantic version wins Given no rule matches, When calculating, Then the engine applies the global_default rule; if none exists, the amount is placed into an Unallocated bucket with reason NoMatchingRule Given a partner’s available budget is less than the computed share and fallback_strategy=cap_and_redistribute, When allocating, Then the partner is capped at available budget and the shortfall is redistributed proportionally among remaining eligible partners while honoring floors/ceilings Given fallback_strategy=redirect_to_fallback_partner(F), When a partner lacks budget, Then 100% of the shortfall is allocated to partner F; if F also lacks budget, the configured strategy is applied iteratively until resolved or marked Unallocated with reason BudgetShortfall
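The specificity ladder above lends itself to an explicit ordered list; in this sketch the `SPECIFICITY_ORDER` array, `MatchedRule` type, and tie-break helpers are illustrative, while the ordering and tie-break rules follow the criterion.

```typescript
// Lower index = more specific scope; the first match in this order wins.
const SPECIFICITY_ORDER = [
  "partner_tag+campaign+channel+date_window",
  "partner_tag+campaign+channel",
  "partner_tag+campaign",
  "partner_tag",
  "campaign+channel+date_window",
  "campaign+channel",
  "campaign",
  "channel",
  "date_window",
  "global_default",
] as const;

interface MatchedRule {
  ruleVersionId: string;
  scopeSignature: (typeof SPECIFICITY_ORDER)[number];
  effectiveAt: string;     // ISO timestamp
  semanticVersion: string; // e.g. "2.3.1"
}

function compareSemver(a: string, b: string): number {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < 3; i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) - (pb[i] ?? 0);
  }
  return 0;
}

function selectRule(matches: MatchedRule[]): MatchedRule | undefined {
  return [...matches].sort((a, b) => {
    const bySpecificity =
      SPECIFICITY_ORDER.indexOf(a.scopeSignature) - SPECIFICITY_ORDER.indexOf(b.scopeSignature);
    if (bySpecificity !== 0) return bySpecificity;
    // Same specificity: latest effective_at wins, then highest semantic version.
    const byEffective = b.effectiveAt.localeCompare(a.effectiveAt);
    if (byEffective !== 0) return byEffective;
    return compareSemver(b.semanticVersion, a.semanticVersion);
  })[0];
}
```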
Computation Correctness for Floors, Ceilings, Tiered and Hybrid Rules with Multi-Currency Rounding
Given a hybrid rule with fixed and percentage components, When allocating, Then the fixed amounts are applied first and the percentage is computed over the remaining allocable base Given a tiered usage rule with defined thresholds and rates, When usage spans multiple tiers, Then the engine computes allocations per tier segment and sums the tier results to the partner totals Given partner-level floors and ceilings, When raw allocations are computed, Then floors are enforced by topping up eligible partners and ceilings by capping overages; any infeasible floor is marked with reason FloorNotMet and remaining funds are redistributed per rule weights without violating other ceilings Given multi-currency partners with settlement currencies, When final partner amounts are computed, Then amounts are converted using the effective FX rate at calculation_time and rounded to currency minor units; the sum of rounded partner amounts equals the rounded allocable base by distributing residuals via Largest Remainder method with deterministic tie-break (ascending partner_id)
Subscription and Usage Proration Across Partial Periods and Date Windows
Given a subscription line item active for a partial billing period, When proration_basis=Actual/Actual at second precision, Then the prorated base equals original_amount * (billable_seconds_in_period / total_seconds_in_period) Given a mid-cycle plan change generating multiple partial lines, When calculating, Then each partial line is prorated independently using its active window prior to rule allocation Given a billing period overlapping multiple rule date windows, When calculating, Then the prorated base is split by overlap proportion per window and each segment is allocated under the rule effective in that window; the sum of segment allocations equals the original line’s prorated base after rounding adjustments Given usage-based line items, When allocating, Then usage is not prorated by time; allocations use actual measured usage for the period, and tier thresholds are evaluated on total period usage
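A short sketch of the Actual/Actual proration formula above; the function signature is hypothetical and rounding is assumed to happen later in the pipeline.

```typescript
// Prorated base = original amount scaled by the share of the billing period
// during which the line was active, clamped to the period boundaries.
function prorate(
  originalAmount: number,
  activeFrom: Date,
  activeTo: Date,
  periodStart: Date,
  periodEnd: Date
): number {
  const totalSeconds = (periodEnd.getTime() - periodStart.getTime()) / 1000;
  const from = Math.max(activeFrom.getTime(), periodStart.getTime());
  const to = Math.min(activeTo.getTime(), periodEnd.getTime());
  const billableSeconds = Math.max(0, (to - from) / 1000);
  return originalAmount * (billableSeconds / totalSeconds);
}

// Example: a $30.00 charge active for 10 of 30 days prorates to $10.00 before rounding.
```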
Execution Path Selection, Queue Processing, and Performance SLOs
Given tenant configuration execution_mode=sync, When calculating allocations for <=100 line items and <=20 partners, Then the API returns 200 with results inline and p95 latency <=400 ms Given execution_mode=async or request async=true, When posting a batch of <=10,000 line items and <=100 partners, Then the API returns 202 with job_id and the job completes with p95 time-to-complete <=2 minutes Given a streaming trigger (e.g., action captured or invoice finalized), When fired, Then an async job is enqueued within 1 second and completes with p95 <=30 seconds for batches <=500 line items Given transient failures during async processing, When retries occur, Then retries are idempotent using the same idempotency key, employ exponential backoff, attempt up to 5 times, and poison-queue on final failure with reason codes Given operational load, When processing batches, Then throughput is >=5,000 line items per minute with <0.1% retry rate and zero lost or duplicated allocations
Traceability and Audit Linkage with Trace IDs and Exposure
Given any allocation calculation, When records are persisted, Then each allocation includes trace_id, source_invoice_id, originating_action_id (if applicable), partner_id, applied_rule_version_id, and idempotency_key Given a trace_id, When calling GET /traces/{trace_id}, Then the API returns the full lineage linking originating action, invoice, applied rule version, allocation rows, and any adjustments Given an auditor export request, When calling Export Allocations for a date range, Then the file includes all traceability fields and amounts that reconcile to invoices and partner statements Given a retry of the same calculation, When executed, Then the trace_id remains stable across attempts and the lineage view shows a single successful calculation with retry metadata
Retroactive Recalculation on Attribution Changes
"As a data steward, I want allocations to update automatically when attribution or rule versions change so that our books and partner reports stay correct without manual fixes."
Description

Automatically detect attribution changes (e.g., partner tag edits, channel reassignment, corrected campaign) and re-evaluate impacted records within the relevant date windows and rule versions. Generate delta-only adjustments that are auditable and idempotent to prevent double counting. Provide controls for backfill range selection, throttling, and conflict handling when multiple changes occur. Surface a change log and reconciliation status within the dashboard.

Acceptance Criteria
Detect and Scope Impacted Records on Attribution Change
Given an attribution change is saved (partner tag edit, channel reassignment, or campaign correction) When the system processes the change event Then only records whose attribution falls within the change’s effective date window and applicable rule versions are queued for recalculation And the recalculation job contains exactly the impacted record IDs with a recorded count And an entry is added to the change log capturing actor, change type, affected dimensions, effective date window, rule version(s), and total queued And no records outside the date window or rule versions are included
Delta-Only Recalculation and Idempotent Adjustments
Given a recalculation job executes for impacted records When new allocations are computed under current rules Then one adjustment per record/partner/line-item is generated equal to (New Allocation − Previously Posted Allocation), honoring floors, ceilings, rounding, and fallback rules And re-running the same job (or replaying the same change) produces zero additional adjustments (verified by idempotency key/hash) And the sum of original postings plus adjustments equals the recomputed target allocation within rounding tolerance And each adjustment is assigned a unique, traceable ID linked to the source change and record
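Sketch of delta-only adjustment generation under the stated idempotency rule: one adjustment per record/partner equal to the difference between the recomputed and previously posted amounts, skipped when its idempotency hash has already been written. Types and hashing inputs are illustrative.

```typescript
import { createHash } from "node:crypto";

interface Adjustment {
  recordId: string;
  partnerId: string;
  delta: number;
  sourceChangeId: string;
  idempotencyHash: string;
}

// previous/recomputed are keyed by `${recordId}:${partnerId}`.
function buildAdjustments(
  sourceChangeId: string,
  previous: Map<string, number>,
  recomputed: Map<string, number>,
  alreadyPosted: Set<string> // idempotency hashes of adjustments already written
): Adjustment[] {
  const out: Adjustment[] = [];
  for (const [key, newAmount] of recomputed) {
    const [recordId, partnerId] = key.split(":");
    const delta = newAmount - (previous.get(key) ?? 0);
    if (delta === 0) continue; // nothing to adjust
    const idempotencyHash = createHash("sha256")
      .update(`${sourceChangeId}|${key}|${delta.toFixed(2)}`)
      .digest("hex");
    if (alreadyPosted.has(idempotencyHash)) continue; // replay produces zero new adjustments
    out.push({ recordId, partnerId, delta, sourceChangeId, idempotencyHash });
  }
  return out;
}
```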
Backfill Range Selection and Throttling Controls
Given an admin selects a backfill start and end date and configures batch size and rate limits When the recalculation job runs Then only records within the selected date range are processed And processing occurs in batches that do not exceed the configured batch size and rate limit And the admin can pause, resume, or cancel the job without creating duplicate adjustments And progress (% complete), remaining count, and ETA are displayed and updated in the dashboard
Conflict Handling for Overlapping Changes
Given multiple attribution changes overlap on the same records When recalculation determines applicable changes Then the system applies precedence by effective date/time and rule version (later overrides earlier); ties break deterministically by change ID And only the winning change affects adjustments; superseded changes are marked as superseded in the log And no double counting occurs: at most one adjustment set is posted per record per run
Audit Log and Reconciliation Visibility
Given recalculation completes for a change or backfill job When a user opens the dashboard Then a chronological change log shows who/what/when, before vs after attribution, rule version(s), affected record count, and job status And a reconciliation view lists each impacted record with previous allocation, recomputed allocation, delta adjustment amount, processing status (Queued, Processed, Failed, Deferred), and correlation/adjustment IDs And users can filter by date range, partner, campaign, channel, status and export the reconciliation to CSV
Subscription and Usage Items Recalculation Consistency
Given impacted data contains both subscription charges and usage-based items When recalculation runs Then subscription adjustments are prorated to the billing period boundaries and usage adjustments reflect rated quantities within the usage window And both item types apply the same floors, ceilings, rounding, and fallback budget rules And partner totals across item types reconcile to the recomputed allocations
Fallback and Budget Exhaustion Behavior
Given a partner implicated by recalculation lacks sufficient budget per split rules When adjustments are computed Then the system applies the configured fallback destination or marks the record Deferred with a retry schedule, without creating negative balances unless explicitly allowed And budget consumption/restoration is updated and reflected in reconciliation totals And all fallback actions are logged with reason codes and linked to the originating change
Budget Fallbacks & Balance Enforcement
"As a program lead, I want configurable fallbacks when a partner’s budget is exhausted so that actions are still attributed fairly and we avoid negative balances."
Description

Enforce partner budget caps and implement configurable fallback behaviors when funds are insufficient: reallocate proportionally to remaining partners, defer to a holding account, or apply a house allocation. Support partial fulfillment, soft/hard caps, and concurrency-safe balance checks during high-volume events. Provide per-partner or per-campaign fallback hierarchies, notifications on exhaustion, and dashboards to monitor remaining balances and applied fallbacks.

Acceptance Criteria
Hard Cap Enforcement with Proportional Reallocation
Given a campaign with Smart Split Rule weights A:50%, B:30%, C:20% and a rounding rule of 2 decimals And partner A has a hard cap of $10,000.00 with current spend $10,000.00 And partners B and C are eligible with ceilings not exceeded When an allocatable amount of $1,000.00 is processed Then partner A receives $0.00 And the full $1,000.00 is reallocated to B and C proportionally to active weights after excluding A (B:60%=$600.00, C:40%=$400.00) And no partner’s post-allocation spend exceeds any cap, floor, or ceiling And an auditable record is written identifying excluded partners, fallback type “Proportional,” and computed amounts And the same result is produced for an equivalent subscription item and usage item
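The proportional reallocation in this scenario reduces to renormalizing the remaining partners' weights; the sketch below reproduces the A/B/C example (50/30/20 becomes B 60%, C 40% once A is excluded) and leaves rounding reconciliation to a later step.

```typescript
interface PartnerWeight {
  partnerId: string;
  weight: number;        // configured split weight
  remainingBudget: number;
}

// Exclude partners with no remaining budget, renormalize the rest, and split
// the full amount among them; rounding/reconciliation is applied afterwards.
function reallocate(amount: number, partners: PartnerWeight[]): Map<string, number> {
  const eligible = partners.filter((p) => p.remainingBudget > 0);
  const totalWeight = eligible.reduce((sum, p) => sum + p.weight, 0);
  const result = new Map<string, number>();
  for (const p of partners) {
    const share = eligible.includes(p) ? (amount * p.weight) / totalWeight : 0;
    result.set(p.partnerId, share);
  }
  return result;
}

// Example: amount 1000 with A (weight 50, budget 0), B (30), C (20) yields A: 0, B: 600, C: 400.
```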
Partial Fulfillment Under Hard Caps with Exhaustion Notification
Given partners B and C with hard caps and remaining budgets B:$150.00 and C:$50.00 And a single allocatable amount of $300.00 targeting B and C under the campaign’s split rule And the campaign fallback is “Holding Account” When the allocation is processed Then B receives $150.00 and C receives $50.00 and both reach their hard caps And the remaining $100.00 is allocated to the Holding Account and marked “DeferredPendingBudget” And exhaustion notifications are sent to the owners of B and C within 2 minutes including remaining balance ($0.00), cap, and time of exhaustion And the allocation is marked “Partially Fulfilled” with an auditable link to the fallback entry
Soft Cap Warning Allows Overage
Given partner D has a soft cap of $5,000.00 with current spend $4,900.00 and overage is allowed And an allocatable amount of $300.00 is processed for D When the allocation is processed Then $300.00 is allocated to D and spend becomes $5,200.00 And a “Soft Cap Exceeded” event with overage $200.00 is logged and visible on the partner activity timeline And a notification is delivered to the configured channels within 2 minutes containing cap, current spend, overage, and recommended actions
Holding Account Fallback When All Partners Exhausted
Given a campaign where all eligible partners are at hard cap or lack budget And the campaign fallback is configured to “Holding Account” When an allocatable amount of $750.00 is processed Then 100% of $750.00 is allocated to the Holding Account And the transaction is labeled “DeferredPendingBudget” with the originating campaign, intended partners, and reason “AllExhausted” And a retry job is queued to reattempt allocation upon budget replenishment And the dashboard reflects the Holding Account entry and reason within 60 seconds
House Allocation Fallback for Single-Partner Campaign
Given a campaign with a single partner E with a hard cap reached And the fallback hierarchy is House -> Holding When an allocatable amount of $200.00 is processed Then $200.00 is allocated to the House account without exceeding any cap And an auditable entry is recorded with fallback type “House,” amount $200.00, and source campaign And if the House account is disabled, the amount is allocated to the Holding Account instead
Concurrency-Safe Balance Checks During High-Volume Events
Given a campaign with partner H having a hard cap of $10,000.00 and current spend $9,000.00 And 1,000 allocation events of $15.00 each occur concurrently with unique idempotency keys When the system processes all events Then partner H’s final spend does not exceed the $10,000.00 hard cap And no allocation is applied more than once per idempotency key And any retries due to contention are logged with attempt counts and succeed without manual intervention
Per-Partner and Per-Campaign Fallback Hierarchy Resolution
Given partner F has a fallback hierarchy Proportional -> House -> Holding and the campaign default hierarchy is Holding -> House And partner G has no partner-level override When allocations for F and G encounter insufficient funds Then F’s fallback sequence is applied in order, skipping ineligible steps, until an allocation succeeds And G inherits and follows the campaign default hierarchy in order, skipping ineligible steps, until an allocation succeeds And audit logs show the evaluated hierarchy, skipped steps with reasons, and the final applied fallback for each partner
Auditable Ledger & Explanations
"As a compliance officer, I want a clear audit trail for every allocation and adjustment so that I can verify calculations and satisfy audits quickly."
Description

Record every allocation and adjustment in an immutable ledger capturing inputs (amounts, attribution, currency), the applied rule/formula and version, rounding behavior, floors/ceilings, and outputs per partner. Generate human-readable explanations that show step-by-step calculations and reasons for any adjustments. Provide filters, export (CSV/JSON), and API access, with traceability to actions, invoices, partners, and recalculation events to deliver audit-ready proof.

Acceptance Criteria
Ledger Entry Completeness and Traceability Fields
Given an allocation or adjustment is produced by Smart Split Rules When a ledger entry is recorded Then the entry includes: entry_id (UUID), created_at (UTC ISO 8601), entry_type (allocation|adjustment|recalculation), source_id, currency (ISO 4217), gross_amount, attribution inputs (partner_id, campaign_id, channel_id, date_window, item_type), rule_id, rule_version, formula_type, rounding_mode, rounding_precision, floor_value, floor_applied (bool), ceiling_value, ceiling_applied (bool), outputs per partner [partner_id, amount_before_rounding, amount_after_rounding], actor_id, checksum, previous_entry_ref (nullable), action_id (nullable), invoice_id (nullable), recalculation_event_id (nullable) And the checksum verifies the entry contents on read And required fields are validated against the published schema and non-null where required
Immutability and Append-Only Behavior
Given any existing ledger entry When an update or delete is attempted via UI, API, or background process Then the operation is rejected with 409 Conflict and no fields of the original entry change And corrections occur only via a new compensating entry (entry_type=adjustment) that references the original entry_id in previous_entry_ref
Recalculation Delta Adjustments and Reconciliation
Given attribution for a recorded action changes When recalculation is triggered Then new ledger entries are appended with entry_type=recalculation representing only the deltas per partner relative to the prior state and referencing the original entries And the sum of historical entries plus deltas equals the current expected allocation within currency precision And affected partner totals and invoice previews reflect new totals within 1 minute
Human-Readable Explanations Coverage and Accuracy
Given any ledger entry is viewed When the user opens the Explanation Then the explanation shows, in order: starting amount and currency; rule and version; formula with substituted values; tier evaluations; floors/ceilings with thresholds and flags; rounding mode and precision with intermediate and final amounts; adjustment reason (if any); referenced entry_ids And numeric values are formatted per currency minor units and locale while preserving exact stored values And computed amounts in the explanation exactly match the stored outputs per partner And explanations are retrievable via API (expand=explanation) and included in CSV export as a text field
Filterable Ledger View with Performance SLAs
Given the ledger UI is open When filters by date range, partner tag, partner, campaign, channel, currency, entry_type, rule_version, and amount range are applied singly or in combination Then results and totals update correctly within 2 seconds P95 for datasets up to 100k entries And applied filters persist in the URL, are retained on reload, and are shareable And clearing filters resets the view to all entries
Export to CSV and JSON with Schema Integrity
Given a filtered ledger result set When the user exports CSV Then the file contains a header row with stable column names, one row per partner-output record, UTF-8 encoding, comma delimiter with RFC 4180 quoting, and UTC timestamps And numeric fields use dot decimal, currency codes are ISO 4217, and row count equals the number of partner-output records When the user exports JSON Then the file conforms to the documented schema with correct types, preserves UUIDs, includes explanations when requested, and respects active filters and sort And exports larger than 100 MB are automatically chunked with deterministic filenames and a manifest of parts
API Access for Ledger with Pagination and Security
Given an authenticated client with appropriate scopes When it requests GET /ledger with filters, sort, page parameters (size<=1000) and cursor or page number Then the API responds 200 with results, total_count, pagination links/cursors, stable sort order, and P95 latency <= 500 ms for up to 100k entries And endpoints enforce OAuth2 scopes (ledger:read, ledger:export), apply a 60 requests/min rate limit per token, and return 401/403 for insufficient credentials And each item includes required fields and optional explanation when expand=explanation is provided And responses support ETag with If-None-Match returning 304 when unchanged
Simulation & What‑If Preview
"As an admin, I want to simulate rule changes and see their impact before going live so that I can catch errors and set stakeholder expectations."
Description

Offer a sandbox to preview rule outcomes against sample or historical data before publishing changes. Display per-partner allocations, edge case handling (floors, ceilings, rounding), potential budget exhaustion, and deltas versus currently active rules. Support shareable previews, scenario labeling, and zero side-effects (no production writes) to de-risk deployments and align stakeholders.
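One way to picture the delta comparison described here is as two passes over the same dataset, one per ruleset, followed by a per-partner diff. The sketch below is illustrative only and assumes a pluggable apply_rules function that returns per-partner amounts for one item; it is not the product's actual preview engine.

```python
# Minimal sketch of a per-partner delta comparison between two rulesets.
from collections import defaultdict
from decimal import Decimal

def preview_deltas(items, active_rules, draft_rules, apply_rules):
    """Return {partner_id: {active, draft, delta_abs, delta_pct}} for a dataset."""
    totals = {"active": defaultdict(Decimal), "draft": defaultdict(Decimal)}
    for item in items:
        for label, rules in (("active", active_rules), ("draft", draft_rules)):
            # apply_rules is a stand-in for the allocation engine in read-only mode.
            for partner_id, amount in apply_rules(rules, item).items():
                totals[label][partner_id] += amount
    partners = set(totals["active"]) | set(totals["draft"])
    return {
        p: {
            "active": totals["active"][p],
            "draft": totals["draft"][p],
            "delta_abs": totals["draft"][p] - totals["active"][p],
            "delta_pct": (
                (totals["draft"][p] - totals["active"][p]) / totals["active"][p] * 100
                if totals["active"][p] else None
            ),
        }
        for p in sorted(partners)
    }
```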

Acceptance Criteria
Sandbox Preview Shows Per-Partner Allocations and Edge-Case Annotations
Given a draft Smart Split Ruleset and a selected dataset When the user runs a preview Then per-partner allocations are displayed for both subscription and usage items And each allocation row clearly indicates any applied floor, ceiling, rounding mode, and fallback with reason codes And totals by partner, item type, and overall equal the total eligible amount within a rounding tolerance of ≤ $0.01 per item And a calculation trail is available per row showing inputs and applied rules
Delta Comparison Against Active Rules
Given currently active rules and a sandbox draft ruleset When the user generates a preview Then the UI shows delta columns (absolute $ and % change) per partner and at totals level And the sum of partner-level deltas equals 0 for revenue-constrained distributions, excluding rounding residuals And any rounding residuals are surfaced as a separate line not exceeding $0.01 per item And users can toggle between viewing absolute and percentage deltas
Zero Side-Effects: No Production Writes During Preview
Given the system is in preview mode When a user creates, runs, edits, or shares a scenario Then no production ledger entries, invoices, payouts, partner balances, or budget records are created or modified And no payout or adjustment jobs are enqueued And audit logs record the action as "preview" with scenario ID, user, timestamp, and dataset fingerprint
Shareable, Labeled Scenarios With Access Controls and Expiration
Given a user labels a scenario and chooses to share it When a share link is generated Then a unique, read-only URL with revocable token and optional expiration (date/time) is created And only authenticated org members with view permission or holders of the token (if link-sharing is enabled) can access the preview And viewers cannot publish or edit rules unless they have edit permission; the UI indicates read-only mode And the scenario label, creator, created date, and expiration are displayed in the preview header And the owner can revoke the link, after which subsequent accesses are denied
What-If Parameter Tweaks Recompute Results Within SLA
Given a preview is open with up to 2,000 partners and 100,000 line items selected When the user modifies any rule parameter (e.g., percentage, floor/ceiling, rounding, fallback) Then recalculation completes and the UI updates results within 5 seconds at the 95th percentile And a progress indicator is shown during recompute and cleared upon completion with a refreshed timestamp And results reflect the new parameters without requiring a full page reload
Data Set Selection and Filters (Historical or Sample)
Given the user selects a historical date window or configures a sample dataset (size, distributions) When the preview runs Then only records within the selected date window or sample configuration are used for calculations And applied filters by campaign, channel, and partner tag are honored and displayed as active filters And the preview header shows record counts by item type and total eligible amount for the selected dataset And the scenario metadata stores dataset type, date range, filters, and a dataset fingerprint for reproducibility
Budget Exhaustion and Fallback Visualization
Given rules include partner budgets with floors/ceilings and defined fallbacks When the preview runs Then partners projected to exhaust budget in the selected window are flagged with warnings and estimated exhaustion points And allocations after exhaustion follow the configured fallback path, with each reassignment annotated by reason and target partner And exhausted partners receive no further allocation beyond their ceiling or remaining budget And overall totals remain balanced after applying fallbacks and ceilings
Public API & Webhooks for Splits
"As a developer, I want programmatic access and event notifications for splits so that I can integrate RallyKit with our finance and reporting systems."
Description

Expose secure endpoints to create, read, update, and archive Smart Split Rules; retrieve allocations and ledger entries; and initiate recalculations. Publish webhooks for allocation_created, allocation_adjusted, budget_exhausted, and rule_published events. Implement OAuth2/API keys, RBAC, rate limits, pagination, filtering by campaign/partner/date, and schema versioning to support integrations with finance, data warehouses, and partner portals.

Acceptance Criteria
OAuth2/API Key Auth and RBAC Enforcement
- All API endpoints accept only HTTPS; any HTTP request is rejected with 426 Upgrade Required.
- Requests with invalid or expired OAuth2 access tokens or API keys return 401 with a WWW-Authenticate header.
- Scope-based RBAC enforced: tokens with read-only scope cannot perform create/update/archive or recalculation operations (return 403 with error code access_denied).
- Endpoint access is restricted by role (e.g., Finance, Admin) as configured; unauthorized roles receive 403.
- Every response includes X-Request-Id and Date headers; request and decision are recorded in the audit log with actor, org, endpoint, method, and outcome.
- Sensitive secrets (tokens, API keys) are never returned in responses or logs; last4 and created_at are the only exposed attributes for keys.
- Rate-limited requests return 429 with Retry-After and X-RateLimit-* headers (see rate limiting criteria).
CRUD Endpoints for Smart Split Rules
- POST /v1/split-rules creates a rule supporting formula types: percentage, fixed, tiered, and hybrid; accepts floors, ceilings, rounding mode, fallback handling, partner tags, campaign/channel scoping, and date windows; valid payload returns 201 with rule_id.
- Invalid rule definitions (e.g., percentages not totaling 100 when required, overlapping date windows, unknown partner tags) return 422 with field-level errors.
- GET /v1/split-rules/{id} returns the rule, including status (draft/published/archived), version, ETag, and audit fields (created_at/by, updated_at/by).
- PATCH /v1/split-rules/{id} allows partial updates to mutable fields; immutable fields (id, created_at, creator) are rejected with 409 or 422; If-Match required, outdated ETag returns 412 Precondition Failed.
- Archiving a rule via POST /v1/split-rules/{id}/archive sets archived=true, archived_at, returns 200, and prevents future allocations from using the rule; GET excludes archived by default unless include_archived=true.
- Idempotency-Key header is honored on POST and PATCH; duplicate requests return 201/200 with the original resource id and no side effects.
- Publishing a rule toggles published=true and emits a rule_published webhook with the rule_id and version.
Allocation and Ledger Retrieval with Filtering and Pagination
- GET /v1/allocations and /v1/ledger return 200 with JSON arrays and metadata; default sort is created_at desc and stable within a page.
- Filtering supported via query params: campaign_id, partner_tag, channel, rule_id, created_at_gte, created_at_lte; combining filters narrows results (AND semantics).
- Pagination uses limit (1–250, default 50) and cursor; responses include next_cursor when more results exist; providing next_cursor returns the next page.
- Responses include schema_version (v1) and X-Request-Id; empty sets return 200 with an empty array and no error.
- Each allocation item includes references: allocation_id, rule_id, campaign_id, partner_id or tag, amount, currency, rounding_applied, source (subscription/usage), and attribution_snapshot_id.
- Ledger entries expose entry_id, allocation_id, type (allocation|adjustment|reversal), delta_amount, currency, reason_code, created_at, and correlation_id.
- Requests with invalid filters or cursors return 400 with error code invalid_parameter and details per field.
Recalculation Trigger and Job Status
- POST /v1/recalculations accepts a payload specifying one or more scopes (e.g., campaign_id, partner_id, allocation_ids, date window) and returns 202 with job_id and status=pending.
- Only tokens with write scope and appropriate role can initiate recalculation; otherwise return 403 access_denied.
- Idempotency-Key is required; repeating the same request with the same key returns 202 with the original job_id without creating a new job.
- GET /v1/recalculations/{job_id} returns 200 with status (pending|running|completed|failed), counts of examined/adjusted allocations, adjustments ledger summary, started_at, finished_at, and links to affected resources.
- Upon completion, allocation_adjusted webhooks are emitted for each changed allocation; no webhooks are sent when no changes occur.
- Recalculations produce auditable ledger adjustments referencing original entries; no net new allocation is created without a corresponding ledger entry.
- Failed jobs expose error code and message; partial work is compensated with reversals to maintain ledger integrity.
Webhooks Delivery and Reliability
- Event types allocation_created, allocation_adjusted, budget_exhausted, and rule_published are emitted with payload including event_id, type, occurred_at (ISO8601 UTC), schema_version, org_id, correlation_id, and resource body.
- Deliveries are signed using HMAC-SHA256 with a per-endpoint secret; the header includes timestamp and signature; requests older than a configurable tolerance are rejected by verifiers (a consumer-side verification sketch follows this list).
- Delivery is at-least-once; duplicate deliveries carry the same event_id and can be de-duplicated by consumers.
- Ordering is guaranteed per resource id; cross-resource ordering is not guaranteed.
- Retry policy: non-2xx responses are retried with exponential backoff up to 12 attempts across 48 hours; 2xx stops retries; 410 marks the endpoint disabled; 429 is treated as retryable.
- Webhook management supports test pings and secret rotation without delivery interruption; signatures change only after rotation activation.
- Delivery attempts and statuses are recorded with response code, duration, and last_error for auditability.
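On the consumer side, verification of a signed delivery might look like the sketch below. The header names, the unix-epoch timestamp, and the "timestamp.body" signing scheme are assumptions; the authoritative details belong in the published webhook documentation.

```python
# Minimal consumer-side verification sketch for signed webhook deliveries.
import hashlib, hmac, time

def verify_webhook(secret: str, body: bytes, timestamp: str, signature: str,
                   tolerance_seconds: int = 300) -> bool:
    # Reject stale deliveries (possible replays) outside the tolerance window.
    if abs(time.time() - int(timestamp)) > tolerance_seconds:
        return False
    # Recompute the signature over "timestamp.body" with the endpoint secret.
    expected = hmac.new(
        secret.encode(), f"{timestamp}.".encode() + body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)
```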
Rate Limiting, Versioning, and Error Model
- Per-organization and per-API key rate limits are enforced; exceeding limits returns 429 with Retry-After and headers X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset.
- All endpoints are versioned under /v1; responses include schema_version=v1; requests with an unsupported version return 406 with code version_unsupported.
- Deprecations are communicated via Deprecation and Sunset headers at least 90 days before removal; requests to sunset versions return Warning headers.
- Error responses are JSON with fields: code, message, details (object keyed by field), and correlation_id; content-type is application/json; no stack traces are exposed.
- Responses use standard HTTP status codes: 200 (read), 201 (create), 202 (accepted async), 204 (no content), and 400/401/403/404/409/412/422/429/500 for errors as appropriate.
- OpenAPI schema is published and kept in sync; every deployed change to v1 that adds fields is backwards compatible; breaking changes require a new version.

Grant Buckets

Create time‑boxed funding pools that draw down first, then roll over to partners when exhausted. Assign buckets to campaigns or specific line items (e.g., SMS, lookups) with start/end dates, caps, and reporting labels. Prevent overspend, prove grant utilization instantly, and keep funder-backed activity flowing without manual tracking.

Requirements

Bucket Lifecycle Management
"As a nonprofit director, I want to create and manage time‑boxed funding buckets with caps and dates so that I can align spend to grant terms without manual spreadsheets."
Description

Provide UI and APIs to create, edit, activate, pause, and archive time‑boxed funding buckets with required fields: name, funding source label, start/end dates, hard spend cap, applicable scope (campaign-level and/or specific line items such as SMS, lookups, calling minutes, email sends), and reporting tags. Enforce validation (non-overlapping dates per scope when needed, currency consistency, required labels), support cloning templates, and display real-time remaining balance. Integrate with RallyKit campaigns so buckets can be attached at campaign creation or retroactively. Persist immutable audit metadata (creator, timestamps, changes) to meet grant compliance and enable downstream reporting.
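To make the shape of the configuration concrete, here is an illustrative request body for the POST /buckets call referenced in the criteria below. The field names mirror the required fields above, but the exact schema is an assumption.

```python
# Illustrative request body for creating a time-boxed bucket (values invented).
create_bucket_request = {
    "name": "Q3 Text-Banking Grant",
    "funding_source_label": "Civic Futures Fund 2025",
    "start_date": "2025-07-01",
    "end_date": "2025-09-30",
    "hard_spend_cap": "5000.00",
    "currency": "USD",
    "scope": {
        "campaign_ids": ["camp_481"],        # campaign-level scope
        "line_items": ["sms", "lookups"],    # and/or specific line items
    },
    "reporting_tags": ["grant:civic-futures", "fy2025"],
}
```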

Acceptance Criteria
UI and API Bucket Creation With Required Fields
Given an authenticated org admin using the UI When they submit Create Bucket with any required field missing (name, funding source label, start date, end date, hard spend cap, applicable scope, reporting tags) Then submission is blocked and inline errors identify each missing field Given valid inputs (start date <= end date, cap > 0 numeric) When they submit via UI Then the bucket is created and visible in the Buckets list within 3 seconds And GET /buckets/{id} returns the persisted values exactly as entered Given a well-formed API request to POST /buckets with all required fields When the request is made with valid auth Then the API responds 201 with the new bucket id and body reflecting persisted values Given a malformed API request (missing required fields or invalid types) When POST /buckets is called Then the API responds 400 with field-level errors Given start date after end date or cap <= 0 When creating via UI or API Then the operation is rejected with validation errors and no record is created
Bucket Editing And Validation Including Non-Overlap
Given an existing bucket When a user edits mutable fields (name, funding source label, start/end dates, hard spend cap, applicable scope, reporting tags) via UI or PATCH /buckets/{id} Then changes persist and are retrievable via GET within 3 seconds Given a proposed change that creates a date range overlapping another Active or Paused bucket with the same scope When saving Then the save is blocked with an error identifying the conflicting bucket(s) and overlapping dates Given invalid edits (end before start, cap <= 0) When saving Then the save is blocked with clear validation messages Given attempts to modify immutable fields (id, creator, created_at) When calling PATCH Then the API returns 400 and the fields remain unchanged
State Transitions: Activate, Pause, Archive Respect Spend Allocation
Given an existing bucket When an authorized user triggers Activate, Pause, or Archive via UI or POST /buckets/{id}/state Then the bucket's state changes accordingly and is reflected in GET within 3 seconds And the audit log records actor, timestamp, from_state, to_state, and reason (if provided) Given a Paused or Archived bucket When spend allocation runs for eligible actions Then no new spend is deducted from that bucket Given an Archived bucket When attempting to assign it to a campaign or line item Then the operation is blocked with "Bucket is archived"
Clone Bucket From Template
Given an existing bucket selected for cloning When the user invokes Clone via UI or POST /buckets/{id}/clone Then a new bucket is created copying name, funding source label, start/end dates, hard spend cap, applicable scope, and reporting tags And the cloned bucket has a unique id, zero current spend, and fresh audit metadata And the system requires the name to be unique before save; duplicate names are rejected with a clear error
Real-Time Remaining Balance Display And Overspend Prevention
Given a bucket with cap C and total booked spend S When a new spend event of size X is successfully allocated to the bucket Then remaining_balance displayed in UI and returned by GET /buckets/{id} updates to C - (S + X) within 5 seconds And remaining_balance is never negative Given concurrent spend events that would cause S to exceed C When allocation runs Then the system prevents overspend such that total booked spend never exceeds cap And any rejected spend attempts return an error indicating cap exceeded
Campaign Integration: Attach At Creation And Retroactively With Scope And Currency Checks
Given a user creating a campaign When they select one or more applicable funding buckets Then the campaign is created with those associations persisted and visible on both the campaign and bucket detail views Given an existing campaign When attaching or detaching a bucket via UI or API Then associations update successfully and are reflected via GET within 3 seconds And the audit log records actor, timestamp, and change details Given a bucket whose currency differs from the campaign or line item currency When attempting to attach Then the system blocks the association with a "Currency mismatch" error Given an Archived bucket When attempting to attach Then the system blocks the association with "Bucket is archived"
Immutable Audit Trail For Grant Compliance
Given any create, update, state change, attach/detach, or clone action on a bucket When the action completes Then an audit entry is appended capturing actor id, timestamp, action type, changed fields (old -> new), and request origin (UI/API) Given the audit log When retrieved via GET /buckets/{id}/audit Then entries are read-only, ordered newest-first, and cannot be modified or deleted by any role Given attempts to alter audit metadata via API or UI When requests are made Then the system denies the operation and records the attempt in the audit log
Spend Routing & Drawdown Engine
"As an operations manager, I want spend to automatically route to the right grant bucket and roll over when it runs out so that activity continues without interruption and accounting stays accurate."
Description

Implement a deterministic, concurrency-safe allocation engine that routes eligible costs to the correct bucket at the moment of spend. Respect scope rules (campaign vs. line item), bucket priority within a campaign, and date windows. Draw down from the active bucket until exhausted, then automatically roll over to the next configured destination (partner bucket or default org funds). Ensure idempotency across retries, lock contention handling for high-throughput actions, and accurate partial allocations when a single action incurs multiple line-item costs. Provide configuration for fallback behavior when no eligible bucket is available.
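The core drawdown order, take what the active bucket can fund, roll the remainder through the configured chain, then fall back to default org funds, can be sketched as follows. This is a simplified in-memory illustration under assumed names; the real engine must also handle idempotency, locking, and persistence as described above.

```python
# Minimal sketch of priority-ordered drawdown with rollover and fallback.
from dataclasses import dataclass

@dataclass
class Bucket:
    bucket_id: str
    remaining_cents: int

def route_spend(cost_cents: int, chain: list[Bucket],
                fallback_id: str = "org_default"):
    """Return (bucket_id, amount_cents) splits covering cost_cents in priority order."""
    splits, remaining = [], cost_cents
    for bucket in chain:                      # priority order: active bucket first
        if remaining == 0:
            break
        take = min(bucket.remaining_cents, remaining)
        if take > 0:
            splits.append((bucket.bucket_id, take))
            bucket.remaining_cents -= take    # bucket is exhausted when this hits 0
            remaining -= take
    if remaining > 0:                         # no eligible bucket left
        splits.append((fallback_id, remaining))
    return splits
```

For example, route_spend(750, [Bucket("primary", 500), Bucket("partner_a", 100)]) would yield [("primary", 500), ("partner_a", 100), ("org_default", 150)], with both ledger entries for a split expected to commit atomically per the criteria below.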

Acceptance Criteria
Deterministic Allocation by Scope, Priority, and Date Window
Given an action with campaign and line‑item costs within an active date window When the allocation engine processes the action Then the cost is routed to the highest‑priority eligible bucket matching scope (campaign vs. line item) and active dates And repeated processing with identical inputs produces identical bucket IDs, amounts, and ledger entries (deterministic) And the allocation record includes bucket_id, scope_type, priority_rank, allocation_amount, action_id, and remaining_balance after allocation
Automatic Rollover on Bucket Exhaustion
Given an active bucket with remaining_balance < incoming cost amount When an allocation is attempted Then the engine allocates the remaining_balance from the active bucket and routes the remainder to the next configured destination in order (partner bucket(s) then default org funds) And both ledger entries are committed atomically; either both succeed or neither does And once remaining_balance reaches 0, the bucket is marked exhausted and is skipped for subsequent allocations
Idempotent Processing and Retry Safety
Given the same action event (same action_id and cost_version) is delivered multiple times or retried When processed concurrently or sequentially Then only one set of allocation ledger entries is created and subsequent attempts return the existing allocation reference And a unique idempotency key composed of action_id + line_item_id + cost_version prevents duplicates at the datastore level And the API responds 200 with the existing allocation for duplicate submissions
Concurrency and Lock Contention Under High Throughput
Given 500 concurrent actions target the same campaign/bucket When processed under load Then the bucket balance never goes below zero and no overspend occurs And p95 end‑to‑end allocation latency ≤ 200 ms and error rate from lock timeouts < 0.5% And automatic retry with exponential backoff (max 3 attempts, jitter 10–50 ms) resolves transient lock conflicts And operational metrics expose lock_wait_ms and allocation_latency_ms with per‑bucket tags
Partial Allocations for Multi‑Line‑Item Costs
Given a single action incurs multiple line‑item costs (e.g., SMS, lookup, email) When processed Then each line item is evaluated independently against scope rules and may route to different eligible buckets And if a line item spans multiple buckets due to exhaustion, the cost is split across buckets with banker's rounding to 2 decimals and the sum of splits equals the original line‑item cost And allocation records and reports show one entry per split with correct bucket, line_item, and reporting_label
Fallback Behavior When No Eligible Bucket Is Available
Given no bucket matches scope or active date window for a cost When processing the allocation Then the engine applies the configured fallback policy: route to default org funds or reject with a policy error And the fallback is configurable at campaign and org levels and takes effect immediately without deployment And if policy is "queue until eligible", costs are queued and auto‑allocated once a matching bucket becomes active; otherwise, costs routed to default are not retroactively moved unless an explicit reallocation job is executed
Auditability and Reporting Integrity
Given any completed allocation When an audit is requested Then the system returns an immutable trail including input action, evaluated rules (scope match, priority order, date checks), idempotency key, rollover steps (if any), and resulting ledger entries with timestamps And a dry‑run of the allocation using the same inputs produces the same routing decision without writing And exported reports reconcile to ledger totals by bucket, campaign, line_item, and reporting_label within ±0.01 of currency units
Overspend Guardrails
"As a finance lead, I want hard controls that stop charges when a grant cap is hit so that we never exceed funder-authorized budgets."
Description

Add pre-commit budget checks that prevent overspend beyond a bucket’s hard cap. Support configurable behaviors: block action, throttle volume, or reroute to fallback funds. Provide clear user-facing and admin notifications when blocks occur, and log each prevented or rerouted charge with reason codes for audits. Include rate-aware checks for burst traffic and ensure enforcement at both campaign and line-item levels.
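A minimal sketch of the pre-commit pattern described here, assuming a reserve-then-commit flow guarded by a lock. Class and method names are invented for illustration and omit persistence, reserve expiry, throttling, and reason codes.

```python
# Reserve funds before the external vendor call, then commit or release.
import threading

class BucketGuard:
    def __init__(self, cap_cents: int, committed_cents: int = 0):
        self.cap = cap_cents
        self.committed = committed_cents
        self.reserved = 0
        self._lock = threading.Lock()

    def try_reserve(self, amount_cents: int) -> bool:
        with self._lock:
            # Count outstanding reserves so bursts cannot overshoot the cap.
            if self.committed + self.reserved + amount_cents > self.cap:
                return False              # caller blocks, throttles, or reroutes
            self.reserved += amount_cents
            return True

    def commit(self, amount_cents: int) -> None:
        with self._lock:
            self.reserved -= amount_cents
            self.committed += amount_cents  # charge succeeded at the vendor

    def release(self, amount_cents: int) -> None:
        with self._lock:
            self.reserved -= amount_cents   # vendor call failed or timed out
```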

Acceptance Criteria
Hard Cap Block on Line-Item Spend
Given a line-item funding bucket with hard cap $1,000.00 and current committed $995.00 And the bucket behavior is configured to Block when cap would be exceeded When a $10.00 charge is pre-committed for that line item Then the charge is rejected before any external vendor call And the committed amount remains $995.00 And the response includes application error code LINE_ITEM_CAP_BLOCK And no partial charge is processed
Throttle Behavior Near Cap
Given a line-item bucket with remaining funds $6.00, unit cost $0.10/action, and throttle mode enabled at 60 actions/min When 200 actions are submitted within 60 seconds Then no more than 60 actions are accepted within that 60-second window And accepted actions never exceed the remaining funds And subsequent requests within the window receive application error code FUNDING_THROTTLED with a Retry-After value And when remaining funds reach $0.00, further requests are blocked with LINE_ITEM_CAP_BLOCK
Reroute to Fallback Bucket on Exhaustion
Given a primary bucket for a campaign line item has $0.50 remaining and reroute behavior is enabled with fallback order [Bucket B, Bucket C] And unit cost is $0.10/action When 20 actions are attempted Then the first 5 actions charge the primary bucket and deplete it to $0.00 And the next 15 actions are automatically charged to Bucket B in order And if Bucket B becomes insufficient mid-burst, remaining actions are charged to Bucket C And end users experience no error during reroute And each accepted action records and exposes the actual bucketId used for billing
Dual-Level Enforcement: Campaign vs Line-Item Caps
Given a campaign bucket hard cap $500.00 and a line-item bucket hard cap $200.00 for SMS And current committed are $499.00 on the campaign bucket and $150.00 on the line-item bucket When a $2.00 SMS charge is pre-committed Then the charge is blocked due to campaign cap And the response reason code is CAMPAIGN_CAP_BLOCK And no funds are deducted from either bucket Given the campaign committed is $100.00 and the line-item committed is $199.50 When a $2.00 SMS charge is pre-committed Then the charge is blocked due to line-item cap And the response reason code is LINE_ITEM_CAP_BLOCK
Burst Traffic Rate-Aware Pre-Commit Checks
Given a bucket with remaining $10.00 and unit cost $0.50/action And the system uses a pre-commit reserve that holds funds up to 30 seconds during processing When 100 concurrent action requests arrive within 1 second Then at most 20 actions are accepted and reserved And the remaining 80 are rejected with LINE_ITEM_CAP_BLOCK or CAMPAIGN_CAP_BLOCK as applicable And the final committed total does not exceed $10.00 And retried requests with the same requestId within the reserve window do not change the outcome (idempotent)
User and Admin Notifications on Funding Blocks
Given a public action is blocked due to any funding cap When a user attempts the action Then the user sees a clear message within 1 second: "This action is currently paused due to funding limits. Please try again later." And the UI remains responsive (no crash or spinner beyond 3 seconds) And within 60 seconds an admin notification is sent to all configured channels containing: timestamp, campaignId, lineItemId, bucketId, attempted amount, remaining funds, reasonCode, environment And notifications are de-duplicated to at most 1 per reasonCode per bucket per 5-minute window
Audit Logging for Prevented and Rerouted Charges
Rule: For every prevented charge (blocked or throttled), write an immutable log entry with fields: eventId, timestamp (UTC), requestId, campaignId, lineItemId, originalBucketId, outcome in {BLOCKED, THROTTLED}, amount, reasonCode in {CAMPAIGN_CAP_BLOCK, LINE_ITEM_CAP_BLOCK, THROTTLE_LIMIT}, policyVersion
Rule: For every rerouted charge, write an immutable log entry with fields: eventId, timestamp (UTC), requestId, originalBucketId, fallbackBucketId, outcome=REROUTED, amount, reasonCode=FALLBACK_REROUTE, policyVersion
Rule: Logs are queryable by date range, campaignId, lineItemId, reasonCode, and exportable as CSV for up to 100k rows within 60 seconds
Rule: Log entries are tamper-evident (no updates/deletes via UI) and retained for at least 24 months
Grant Utilization Reporting & Export
"As a development officer, I want live utilization reports and audit-ready exports so that I can demonstrate grant impact to funders on demand."
Description

Deliver real-time dashboards and exports showing consumed vs. remaining amounts by bucket, date range, campaign, and line item. Include drill-down to action-level entries with timestamps, costs, and allocation outcomes. Provide one-click CSV export and a read-only API endpoint for funders, with optional share links scoped to specific buckets. Surface key KPIs (burn rate, projected exhaustion date, blocked/rerouted events) to prove utilization instantly.
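As an illustration of the reporting surface, a summary object for one bucket might look like the following. Field names follow the criteria below; the values are invented but internally consistent, and they are reused in the KPI worked example later in this section.

```python
# Illustrative summary object from the read-only funder endpoint (values invented).
utilization_summary = {
    "bucket_id": "bkt_204",
    "cap": 5000.00,
    "consumed": 3600.00,
    "remaining": 1400.00,
    "percent_utilized": 72.0,
    "start_date": "2025-07-01",
    "end_date": "2025-09-30",
    "burn_rate_per_day": 80.00,               # consumed / active days to date
    "projected_exhaustion_date": "2025-08-31",
    "blocked_count": 3,
    "rerouted_count": 11,
}
```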

Acceptance Criteria
Dashboard Utilization by Bucket and Dimensions
Given at least one active grant bucket with a cap, start/end dates, and recorded allocations When a user opens the Grant Utilization dashboard and selects a bucket, date range, and optional filters (campaign, line item) Then the dashboard shows for each selected bucket: cap, consumed, remaining, and percent utilized And consumed equals the sum of allocations matching filters within the selected date range, rounded to 2 decimals And remaining equals max(cap − consumed, 0), rounded to 2 decimals And percent utilized equals (consumed ÷ cap) × 100 with one decimal place; if cap = 0 then percent utilized = 0% And date range filtering is inclusive of start 00:00 to end 23:59:59.999 in the organization’s timezone And p95 load time for the aggregates is ≤ 1500 ms for up to 50 buckets with filters applied
Drill-down to Action-level Allocation Details
Given a user clicks a bucket’s “View details” from the utilization dashboard When the details view loads Then it lists action-level rows with columns: action_id, timestamp (ISO 8601 UTC and org-local display), campaign, line_item, unit_cost, amount_charged_to_bucket, allocation_outcome (charged|rerouted|blocked), reroute_target_id (if any), reason_code (if blocked) And rows are filterable by campaign, line_item, allocation_outcome, and a secondary date range And the sum of amount_charged_to_bucket in the details view matches the dashboard’s consumed value for the same filters within $0.01 And pagination supports page sizes up to 100; sorting by timestamp desc by default And each row links to its source event for audit verification
One-click CSV Export Respects Filters and Reconciles Totals
Given a user is viewing either the summary dashboard or the action-level details with selected filters When the user clicks “Export CSV” Then a CSV downloads that includes only data within the current filters and access scope And summary exports contain one row per bucket with columns: bucket_id, bucket_name, cap, consumed, remaining, percent_utilized, start_date, end_date, burn_rate_per_day, projected_exhaustion_date, blocked_count, rerouted_count And detail exports contain one row per action with columns: bucket_id, bucket_name, action_id, timestamp_utc, timestamp_local, campaign_id, campaign_name, line_item, unit_cost, amount_charged_to_bucket, allocation_outcome, reroute_target_id, reason_code And numeric values use period as decimal separator; timestamps are ISO 8601 UTC; text is UTF-8 And exports up to 100,000 rows complete within 10 seconds p95 via streaming And the sum of amount_charged_to_bucket in the CSV reconciles to the on-screen consumed value within $0.01 for identical filters
Read-only Funder API Endpoint with Scoped Access
Given a funder API token scoped to specific bucket IDs When a client issues GET /api/v1/grants/utilization with parameters (bucket_id[], date_range, level=summary|detail, campaign_id[], line_item[]) Then the API returns 200 with JSON matching the published schema; only data for scoped buckets is included And non-GET methods return 405; requests for out-of-scope buckets return 403 And summary objects include: bucket_id, cap, consumed, remaining, percent_utilized, start_date, end_date, burn_rate_per_day, projected_exhaustion_date, blocked_count, rerouted_count And detail objects include: action_id, timestamp_utc, campaign_id, campaign_name, line_item, unit_cost, amount_charged_to_bucket, allocation_outcome, reroute_target_id, reason_code, bucket_id And pagination is cursor-based with limit ≤ 1000; responses include next_cursor when more data exists; ordering by timestamp desc for detail And rate limit is enforced at 60 requests/min per token; excess returns 429 And p95 latency ≤ 800 ms for summary (≤ 50 buckets) and ≤ 1200 ms for detail (≤ 1000 rows) And all numeric fields validate as numbers; timestamps validate as ISO 8601 UTC
KPI Surface: Burn Rate, Projection, Blocked/Rerouted
Given a selected bucket and date range within its active window When KPIs render on the dashboard Then burn_rate_per_day equals (total_consumed_in_range ÷ number_of_active_days_in_range), where active days are calendar days the bucket is active within the selected range And projected_exhaustion_date equals today + (remaining ÷ burn_rate_per_day) if burn_rate_per_day > 0; otherwise null And blocked_count equals the number of actions in range with allocation_outcome = blocked; rerouted_count equals those with allocation_outcome = rerouted And currency values show 2 decimal places; percents show 1 decimal; projected date is in org timezone And KPIs update within 60 seconds of new allocations or status changes affecting the bucket And if bucket end_date has passed, projected_exhaustion_date displays “Ended” and no future projection is shown
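Continuing the invented numbers from the summary sketch above, a short worked example of the burn-rate and projection formulas:

```python
# Worked example of the KPI formulas (illustrative values only).
from datetime import date, timedelta

cap, consumed_in_range = 5000.00, 3600.00
active_days_in_range = 45                      # 2025-07-01 through 2025-08-14
remaining = max(cap - consumed_in_range, 0)    # 1400.00

burn_rate_per_day = consumed_in_range / active_days_in_range   # 80.00 per day
projected_exhaustion = (
    date(2025, 8, 14) + timedelta(days=remaining / burn_rate_per_day)
    if burn_rate_per_day > 0 else None
)   # 1400 / 80 = 17.5 days -> 2025-08-31 (date arithmetic drops the fractional day)
```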
Share Links for Specific Buckets (Read-only, PII-safe)
Given an admin generates a share link scoped to one or more bucket IDs with optional date range and visibility level (summary|detail) When a viewer accesses the share URL Then they see only the scoped buckets and date range in read-only mode; no create/update/delete actions are available And supporter PII is excluded; supporter identifiers are hashed per-link; email/phone are not present in any view or export And share links can be set to expire (default 30 days) and can be revoked; expired links return 410, revoked links return 403 And all access via share links is logged with timestamp, actor (link id), and IP And exports and API calls initiated from a share link inherit the same scope and PII rules
Real-time Update Latency and Data Integrity
Given new actions are ingested and allocated to buckets When viewing dashboard, details, export, or API for the affected buckets Then the new actions appear in detail views within 60 seconds, and dashboard totals and KPIs reflect them within 60 seconds And exports and API responses include the new actions within 60 seconds under identical filters And duplicate action_ids are ignored for allocation and reporting; no duplicate rows are produced And consumed totals across UI, export, and API reconcile within $0.01 for identical filters and time windows And rerouted actions show amount_charged_to_bucket = 0 and a non-null reroute_target_id; blocked actions show amount_charged_to_bucket = 0 and a non-null reason_code
Partner Rollover & Notifications
"As a coalition organizer, I want funds to roll over to partner accounts automatically with notifications so that funded activity keeps running across organizations without manual coordination."
Description

Enable bucket configurations to specify one or more partner recipients for rollover when exhausted. Support cross-tenant attribution with secure permission checks, acceptance workflow for partners, and auditable transfer records (who, when, from/to, amount). Notify relevant stakeholders via email/in-app alerts on nearing cap, exhaustion, and rollover events. Display partner rollover state within campaign and bucket views.

Acceptance Criteria
Partner Selection Permission Checks for Rollover
Given a Bucket Admin configures rollover partners for a bucket across tenants When opening the partner selector Then only tenants with an active cross-tenant link and RolloverRecipient permission are displayed And Save is disabled if no eligible partner is selected And an unauthorized user attempting to modify partners receives a 403 and no changes are persisted And selection changes are versioned with actor and timestamp
Partner Acceptance Workflow
Given a partner is added to a bucket's rollover list When the configuration is saved Then an invitation is sent to the partner organization's designated admins via email and in-app And the partner can Accept or Decline within the app And status shows Pending until accepted; Declined removes the partner from the active list And unaccepted partners are skipped by rollover execution And acceptance actions capture actor, timestamp, and IP for audit
Auto Rollover on Bucket Exhaustion
Given active campaign spend draws down a primary bucket with configured rollover partners And the primary bucket balance reaches 0 during a charge When an additional eligible charge is processed Then the charge is funded from the first Accepted partner in priority order with remaining balance And if that partner's available cap is insufficient, the remainder rolls to the next partner until fully covered or no partners remain And if no partner can cover the charge, the action is blocked with a user-facing error and no partial spend is recorded And allocations occur atomically with an idempotency key per charge
Auditable Rollover Transfer Records
Given any rollover allocation or transfer between tenants occurs Then a transfer record is created with fields: transfer_id, source_tenant_id, source_bucket_id, destination_tenant_id, destination_bucket_id (if applicable), amount, currency, initiated_by (system|user_id), triggering_event_id, timestamp (UTC), checksum, correlation_id And records are immutable, queryable, and exportable (CSV/JSON) and filterable by date range, campaign, and line-item And every read/write of the transfer record is captured in an audit log with actor and IP And transfer records pass referential integrity checks against tenant and bucket tables
Threshold and Event Notifications
Given a bucket with notifications enabled When usage crosses 80% of cap (configurable threshold) Then email and in-app alerts are sent to bucket owners and designated partner contacts within 1 minute And upon 100% exhaustion and upon each rollover allocation, alerts include amount, source bucket, destination partner, and links to records And notifications are deduplicated per event and rate-limited to max 1 per 5 minutes per bucket And delivery outcomes (delivered, bounced, read) are recorded and visible to admins
Rollover State Display in Campaign and Bucket Views
Given a user with View Budget permission opens a campaign or bucket view Then the UI displays current balance, near-cap indicator, configured partners with acceptance status, priority order, next eligible partner, and last rollover summary (amount, time) And cross-tenant partner names are masked unless the user has CrossTenantView permission And the view loads under 300 ms from cache and under 1000 ms on cold load And data updates in real time on new rollover events without page reload
Cross-Tenant Attribution of Rolled-Over Spend
Given spend is allocated to a partner via rollover Then the source tenant's ledger shows the action funded by Partner with partner_tenant_id and bucket_id attribution fields And the partner tenant's ledger shows a corresponding debit with a matching correlation_id And both ledgers reconcile to the transfer records within 0.01 of the currency unit for rounding And access controls prevent either tenant from viewing the other's PII or campaign content beyond funding metadata
Per-Action Funding Attribution
"As a campaign strategist, I want each action to show which bucket funded it so that I can analyze performance and optimize spend by source."
Description

Tag every costed action (e.g., SMS send, lookup, call, email) with the resolved bucket ID, allocation amount, and reason codes. Expose attribution in the campaign activity feed, action detail pages, and exports. Ensure compatibility with existing RallyKit analytics so conversion metrics can be filtered by funding source, enabling comparison of funded vs. unfunded performance.
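For example, an action costing 12 cents that exhausts the primary bucket after 10 cents might carry two attribution records like these (IDs and values invented; the reason codes follow the criteria below):

```python
# Illustrative attribution tags for one SMS send costing 12 cents.
attribution_records = [
    {
        "action_id": "act_9001",
        "bucket_id": "bkt_primary",
        "allocation_amount_cents": 10,
        "reason_code": "partial_primary_rollover",
    },
    {
        "action_id": "act_9001",
        "bucket_id": "bkt_partner_a",
        "allocation_amount_cents": 2,
        "reason_code": "resolved_rollover",
    },
]
# The per-action attributions must sum to the full action cost.
assert sum(r["allocation_amount_cents"] for r in attribution_records) == 12
```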

Acceptance Criteria
Automatic Bucket Resolution on Costed Action
Given a costed action (SMS, lookup, call, email) is created for a campaign with an assigned primary grant bucket that is active (within start/end dates) and has remaining balance >= action_cost_cents When the action is persisted Then the action is tagged with attribution records array of length 1 containing {bucket_id = primary_bucket.id, allocation_amount_cents = action_cost_cents, reason_code = "resolved_primary"} And the bucket remaining balance decreases by allocation_amount_cents atomically And the resulting bucket balance is never negative And the attribution record includes created_at (ISO 8601), resolver = "system", and action_id matching the action
Rollover Attribution on Bucket Exhaustion
Given a campaign has a rollover chain [primary -> partner A -> partner B] and an action_cost_cents that exceeds the current bucket remaining balance When the action is persisted Then the system allocates in chain order, creating one attribution record per contributing bucket, each with {bucket_id, allocation_amount_cents > 0, reason_code in ["partial_primary_rollover","resolved_rollover"]} And sum(allocation_amount_cents across all attribution records) == min(action_cost_cents, sum(remaining_balances across chain)) And each contributing bucket balance is decremented by its allocation atomically with no overspend And if chain funds < action_cost_cents, an additional attribution record is created with {bucket_id = null, allocation_amount_cents = action_cost_cents - chain_funds, reason_code = "unfunded_remainder"} And all attribution records for the action share the same action_id and created_at within the same transaction
Unfunded or Ineligible Bucket Handling
Given a costed action occurs and no eligible bucket is found due to any of: no bucket assigned to campaign/line-item, bucket out of date range, or cap reached with no rollover When the action is persisted Then the action is tagged with a single attribution record {bucket_id = null, allocation_amount_cents = action_cost_cents, reason_code in ["no_eligible_bucket","out_of_window","cap_reached"]} And no bucket balances are modified And the action is classified as Unfunded in analytics and exports
Attribution Visibility in Activity Feed and Action Detail
Given an action has one or more attribution records When the campaign activity feed and the action detail page are viewed Then each action entry displays attribution summary within 2 seconds of action creation, including for each record: bucket_id (or Unfunded), reporting_label, allocation_amount formatted in currency, and reason_code And the action detail view lists the full attribution array with bucket names, IDs, allocation_amount_cents, reason_code, and created_at And totals on the detail page show sum(allocation_amounts) == action_cost
Exports Include Funding Attribution Fields
Given an admin exports campaign activity When the CSV export is generated Then the file contains one row per action-attribution pair with columns: action_id, action_type, campaign_id, bucket_id (nullable), bucket_reporting_label, allocation_amount_cents (integer), reason_code, action_timestamp (ISO 8601), user_id/contact_id (if applicable) And actions with multiple attributions produce multiple rows sharing the same action_id And numeric and timestamp formats match specification; no empty reason_code values And totals by bucket in the export match bucket ledger decrements for the same date range
Analytics Filter by Funding Source
Given the analytics dashboard supports filtering by funding source (bucket_id and reporting_label) and by Unfunded When a user applies a filter for a specific bucket_id or reporting_label Then only actions that have at least one attribution record matching the filter are included in counts and conversion rates And when the Unfunded filter is applied, only actions with bucket_id = null attribution (covering full cost) are included And All = Funded ∪ Unfunded, where Funded includes actions with any funded attribution (set semantics, no double counting of actions) And conversion metrics (e.g., completion rate) recompute using the filtered action set and match underlying export-derived calculations within 0.1%

Credit Caps

Set per-partner caps by month or campaign with live meters, early‑warning alerts, and soft‑stop options. Overflow automatically routes to a designated fallback payer or pauses only the billable components tied to that partner, protecting coalitions from surprise bills while keeping essential actions online.

Requirements

Partner Cap Configuration
"As a coalition admin, I want to set monthly and campaign-level credit caps per partner so that I can control shared costs and prevent overages without taking down critical actions."
Description

Provide UI and API to define per-partner credit caps by month and/or by campaign, including cap amount, currency, effective dates, and scope (all campaigns vs. selected). Allow configuration of soft-stop behavior (pause only billable components tied to that partner) or overflow routing (designate one or more fallback payers with priority order). Support configurable threshold levels for alerts, default templates, and validation to prevent overlapping or conflicting caps. Persist configurations with versioning and change history. Integrate with RallyKit’s billing attribution and action orchestration layers so caps govern cost accrual in real time across one-tap pages and automated outreach flows.
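An illustrative POST body for defining such a cap, with field names taken from the API criteria below; the exact endpoint schema is an assumption.

```python
# Illustrative request body for creating a partner cap (values invented).
create_cap_request = {
    "partner_id": "org_partner_17",
    "cap_type": "monthly",                     # monthly | campaign
    "amount": "750.00",
    "currency": "USD",
    "effective_start": "2025-09-01",
    "effective_end": None,                     # open-ended
    "scope": {"campaign_ids": ["camp_481", "camp_502"]},   # or "all"
    "behavior": "overflow",                    # soft_stop | overflow
    "fallback_payers": [
        {"payer_id": "org_lead", "priority": 1},
        {"payer_id": "org_partner_22", "priority": 2},
    ],
    "alert_thresholds": [50, 80, 100],
}
```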

Acceptance Criteria
UI: Create Monthly and Campaign-Specific Partner Cap
Given an admin user with Manage Billing permissions When they create a partner cap with amount, currency, frequency (monthly or per-campaign), scope (all campaigns or selected campaigns), effective start/end dates, and behavior (soft-stop or overflow) Then the form validates required fields, positive numeric amount, ISO 4217 currency, and date logic (start <= end or open-ended) And prevents saving if another active/scheduled cap for the same partner and overlapping scope/period would conflict And on Save the cap is persisted as version 1 and appears in the partner’s cap list within 3 seconds And selected campaigns, behavior, thresholds, and default alert templates persist exactly as chosen
API: Define and Manage Partner Caps with Validation
Given a client with caps:write scope and a valid API key When it POSTs a cap with partner_id, cap_type (monthly|campaign), amount, currency, effective_start/end, scope (all|campaign_ids[]), behavior (soft_stop|overflow), optional fallback_payers [payer_id, priority], and thresholds Then the API returns 201 with the created cap, id, version=1, and canonicalized values; invalid payloads return 400 with field-level errors And attempts to create overlapping or conflicting caps for the same partner/scope/period return 409 Conflict And GET /partner-caps?partner_id=… returns active, scheduled, and historical caps with pagination and filters And updates require If-Match/ETag; mismatched ETag returns 412 Precondition Failed; successful updates create version+1 and append to change history
Enforcement: Soft-Stop Pauses Only Partner-Billable Components
Given an active soft-stop cap for Partner A governing selected campaigns And Partner A is the payer for billable components in one-tap pages and automated outreach flows When accrued costs for the governed scope reach 100% of the cap Then within 60 seconds the system pauses only Partner A billable components while non-billable steps and other partners’ components continue And no further costs are attributed to Partner A beyond the cap; affected actions show "Temporarily paused by budget cap" for Partner A components And components resume automatically at cap reset or upon an approved cap increase
Overflow: Route Over-Cap Costs to Fallback Payers by Priority
Given an active cap for Partner A configured with overflow and a fallback priority list [Partner B, Partner C] When Partner A reaches or exceeds the cap Then subsequent billable costs are attributed to the highest-priority fallback with available budget within 60 seconds of threshold crossing And if no fallback is eligible, only the billable components tied to Partner A are paused; non-billable steps continue And each routing decision records partner_id, fallback_partner_id, priority, timestamp, action_id, amount, and reason in immutable audit logs And alerts notify the capped partner and the activated fallback payer on first activation and when fallback changes
Monitoring: Live Meters and Threshold Alerts
Given a partner cap is active When costs accrue from one-tap pages or automated outreach flows Then the dashboard shows a live meter with used, remaining, percent, currency, and reset date updating within 5 seconds of new accruals And alerts fire at configurable thresholds (default 50%, 80%, 100%) via email and webhook using the selected templates And each threshold triggers once per period until reset; duplicate alerts are suppressed; all alerts are logged with delivery status And a Test Alert action sends previews without affecting meters or counters
Governance: Versioning and Change History for Caps
Given an existing partner cap When an admin edits attributes or schedules changes Then a new immutable version is created (version increments) with editor identity, timestamp, summary, and JSON diff; prior versions remain unchanged And future-dated versions take effect at their start time automatically; active versions switch with no downtime and no double-billing And the full change history is retrievable via UI and API and supports revert, which creates a new version after validation (no conflicts/overlaps) And deleting a cap marks it inactive and preserves history; cost governance follows the currently active version only
Real-Time Cap Meters
"As a campaign director, I want live meters showing each partner’s cap usage so that I can make decisions in real time during spikes and adjust tactics before we hit limits."
Description

Display live meters for each partner’s cap utilization at coalition, campaign, and partner detail levels. Ingest billable usage events from the billing engine and attribute them to partners/components within ≤5 seconds end-to-end. Provide percent-to-cap, remaining balance, projected hit time (simple trend), and breakdown by billable component. Ensure accuracy via idempotent event processing, late event handling, and backfill reconciliation. Expose meters in the dashboard widgets and via read-only API for embedding and data warehouse sync.

Acceptance Criteria
Coalition Dashboard Live Cap Meter Updates
Given a partner with an active cap is displayed on the coalition dashboard When a billable usage event for that partner is emitted by the billing engine Then the coalition-level meter updates used, percent_to_cap, remaining_balance, projected_hit_time, and per-component breakdown within ≤5 seconds end-to-end And last_updated is set in UTC within 1 second of system time And the change in used equals the event’s billable quantity mapped to the correct component And projected_hit_time is present; under a constant event rate sustained for ≥10 minutes, it deviates ≤2 minutes from the analytically expected depletion time
Campaign Meter Isolation and Accuracy
Given a selected campaign view shows Partner A’s cap meter When events for multiple campaigns arrive out of order within a 30-second window Then only events attributed to the selected campaign affect this meter And percent_to_cap equals round((used/cap)*100, 1) within ±0.1% And all eligible events are reflected within ≤5 seconds of receipt
Partner Detail Component Breakdown Consistency
Given Partner A has multiple configured billable components When events for each component are ingested Then the breakdown lists each configured component with its used amount and share And the sum of component used equals the meter’s total used within ±0.01 credits And unknown components are quarantined and excluded from totals pending reconciliation
Idempotent Event Processing
Given an event with a unique event_id is retried by the billing engine When the same event_id arrives up to 3 times Then meters reflect a single increment only And audit logs record one processed instance and duplicates ignored And processing is atomic with no partially applied increments visible
Late Event Handling Within Cap Window
Given an event whose occurred_at is within the current cap window but arrives after newer events When the event is ingested Then its impact is applied exactly once to totals and breakdown without double counting And last_updated reflects the recomputation time And projected_hit_time is recalculated within ≤5 seconds if the projection changes
Backfill Reconciliation Correctness
Given a backfill reconciliation is executed for Partner A over a specified date range When authoritative usage totals are applied Then dashboard meters recompute to match the authoritative ledger within ±0.01 credits And percent_to_cap, remaining_balance, projected_hit_time, and component breakdown reflect the reconciled state And an audit entry records the reconciliation scope and outcome
Read-only Meters API Completeness and Freshness
Given an authorized GET request to the meters API at any level (coalition, campaign, partner) When the request is executed Then the response is 200 and includes: scope identifiers, level, cap_window, used, cap, percent_to_cap, remaining_balance, projected_hit_time, components[], last_updated And POST/PUT/PATCH/DELETE return 405 (method not allowed) And data freshness is ≤5 seconds behind the billing engine at p95, with ETag support and since/pagination parameters for warehouse sync
Threshold Alerts & Notifications
"As a finance lead, I want proactive alerts before and at cap thresholds so that I can reallocate budget or add a fallback payer without disrupting live actions."
Description

Enable configurable alert thresholds (e.g., 50%, 75%, 90%, 100%, custom) per cap with recipient lists and channels (email, Slack, webhook). Provide early-warning nudges based on projected run rate. Include alert deduplication, quiet hours, and escalation rules (e.g., notify finance at 90%, all owners at 100%). Surface alert history in the partner timeline. Deliver webhook payloads with partner, campaign, cap ID, utilization, and recommended actions for automation.

Acceptance Criteria
Cap-Level Multi-Threshold & Recipient/Channel Configuration
- Rule: A user can add thresholds at 50%, 75%, 90%, 100%, and one or more custom percentages per cap.
- Rule: For each threshold, a user can assign one or more recipient lists and channels (email, Slack, webhook).
- Rule: Saving fails if any enabled threshold lacks at least one recipient or at least one channel.
- Rule: Configuration persists and is effective immediately for subsequent alert evaluations.
- Rule: Per-threshold toggles allow enabling/disabling without deleting definitions.
- Rule: All edits are audit-logged with user and timestamp.
Threshold Crossing Notifications Delivery
- Given cap utilization increases from below to at or above a defined threshold, notifications are sent to all configured recipients via each selected channel within 60 seconds.
- Rule: No notification is sent for decreases or when utilization remains below the threshold.
- Rule: Message content includes partner name, campaign name, capId, threshold percent, used, limit, utilization percent, and link to manage cap.
- Rule: One alert instance is generated per cap+threshold crossing and per recipient+channel, subject to deduplication rules.
- Rule: Timestamps in human-facing messages reflect the account timezone and include UTC offset.
- Rule: Repeated increases that do not drop below the threshold do not trigger additional alerts.
Projected Run-Rate Early Warning
- Rule: Each cap supports enabling Early Warning with a configurable window W (in hours).
- Given the projected ETA to reach the cap based on the trailing 24-hour run rate is <= W, send an Early Warning alert.
- Rule: Early Warning recipients and channels are configurable per cap; default to those defined for the 75% threshold if not set.
- Rule: Early Warning content includes current utilization, run-rate basis window, projected ETA (timestamp), and recommended actions.
- Rule: At most one Early Warning is sent per cap per 12 hours per recipient+channel, subject to deduplication and quiet hours.
- Rule: No Early Warning is sent when the projected ETA is unknown or greater than W.
Deduplication & Quiet Hours
- Rule: Suppress duplicate alerts for the same cap+eventType (threshold crossing or early warning)+threshold+recipient+channel within a configurable suppression window S (default 60 minutes).
- Rule: Quiet Hours can be configured per account with start/end times and timezone; during Quiet Hours, human-channel alerts (email, Slack) are deferred.
- Rule: Alerts at 100% threshold bypass Quiet Hours and are delivered immediately.
- Rule: Deferred alerts are delivered when Quiet Hours end; multiple deferred alerts of the same type for the same cap are collapsed into a single summary.
- Rule: Webhook deliveries are not subject to Quiet Hours but do honor deduplication.
- Rule: Admins can override to send an alert immediately, ignoring Quiet Hours.
Escalation Rules & Acknowledgment
- Rule: At 90% utilization, Finance recipients are included in the notification delivery list in addition to any threshold-assigned recipients.
- Rule: At 100% utilization, all Owners of the partner and campaign are notified across configured channels.
- Rule: If a 100% alert is not acknowledged within 30 minutes, it is resent every 30 minutes up to N times (configurable).
- Rule: Acknowledgment via email link, Slack action, or webhook API stops further escalations for that cap event.
- Rule: Escalations respect Quiet Hours except for 100% alerts, which bypass them; webhooks are always sent immediately.
- Rule: Escalation attempts and acknowledgments are recorded with attempt count and timestamps.
Alert History in Partner Timeline
- Rule: Each alert event (threshold crossing, early warning, escalation, acknowledgment) creates an entry in the partner timeline.
- Rule: Timeline entry includes timestamp (UTC), capId, cap scope (month or campaign), campaignId, eventType, threshold percent or early warning, utilization percent, used/limit, recipients count by channel, delivery status, and link to alert details.
- Rule: Timeline supports filtering by eventType, capId, campaignId, threshold, channel, and date range.
- Rule: Timeline shows the most recent 200 alert events and supports pagination to older items.
- Rule: Timeline data is read-only and tamper-evident.
Webhook Payload & Delivery Semantics
- Rule: For each alert event, a JSON webhook is POSTed to each configured endpoint with fields: eventId, eventType, occurredAt, partnerId, partnerName, campaignId, campaignName, capId, capScope, limit, used, utilizationPct, thresholdPct (if applicable), etaToCap (if applicable), recommendedActions[], manageCapUrl.
- Rule: Each webhook request includes an HMAC-SHA256 signature in header X-RallyKit-Signature computed with the endpoint’s shared secret (see the signing sketch after this list).
- Rule: A 2xx response within 10 seconds marks delivery success; otherwise retries use exponential backoff for up to 12 hours with jitter.
- Rule: Webhooks are idempotent; the same eventId may be retried without creating duplicates downstream.
- Rule: Delivery results (success, failure, retries) are recorded and visible in alert history.
- Rule: A test send function allows firing a sample webhook and displays the last response and signature verification status.
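A minimal signing and verification sketch for the X-RallyKit-Signature header above, assuming the signature is a hex-encoded HMAC-SHA256 of the raw JSON body computed with the endpoint's shared secret; the exact encoding is an assumption, not confirmed by this spec.

import hashlib
import hmac
import json

def sign_payload(secret: bytes, body: bytes) -> str:
    # Hex-encoded HMAC-SHA256 of the raw request body (encoding is assumed).
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_signature(secret: bytes, body: bytes, header_value: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign_payload(secret, body), header_value)

event = {"eventId": "evt_123", "eventType": "threshold_crossed", "utilizationPct": 90}
body = json.dumps(event, separators=(",", ":")).encode()
secret = b"shared-endpoint-secret"          # placeholder secret
signature = sign_payload(secret, body)      # sent as X-RallyKit-Signature
assert verify_signature(secret, body, signature)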
Soft-Stop & Fallback Routing Engine
"As an operations manager, I want cost overflow to automatically route to a backup payer or gracefully pause only the billable pieces so that essential advocacy actions remain online even when a partner hits their cap."
Description

Implement a rules-driven engine that, upon cap breach, either pauses only the billable components mapped to the capped partner (soft-stop) or routes incremental costs to a designated fallback payer according to a priority list. Ensure atomic decisioning at the point of charge with <250 ms added latency and idempotent retries. If routing fails (e.g., fallback has no capacity), degrade gracefully by applying soft-stop while keeping non-billable and other-partner-funded essentials online. Log every decision with trace IDs for audit and provide reversible overrides by admins.
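A simplified sketch of the charge-time decision described above (proceed, route to a fallback payer by priority, or soft-stop only the partner-funded billable components); the data shapes and capacity lookup are illustrative assumptions, not the production engine.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Component:
    component_id: str
    funder_partner_id: str
    billable: bool

@dataclass
class Decision:
    decision_type: str                      # proceed | fallback | soft_stop
    payer_id: Optional[str] = None
    reason_code: Optional[str] = None
    paused_component_ids: List[str] = field(default_factory=list)

def decide(partner_id: str, components: List[Component], cap_breached: bool,
           fallbacks: List[str], has_capacity: Callable[[str], bool]) -> Decision:
    """Route one charge: proceed normally, bill a fallback payer, or soft-stop
    only the billable components funded by the capped partner."""
    partner_billable = [c for c in components
                        if c.billable and c.funder_partner_id == partner_id]
    if not cap_breached or not partner_billable:
        return Decision("proceed", payer_id=partner_id)
    for fallback_id in fallbacks:           # priority order F1..Fn
        if has_capacity(fallback_id):
            return Decision("fallback", payer_id=fallback_id,
                            reason_code="fallback_priority")
    return Decision("soft_stop", reason_code="soft_stop_no_fallback",
                    paused_component_ids=[c.component_id for c in partner_billable])

comps = [Component("sms_send", "partner_a", True),
         Component("action_page", "partner_b", False)]
print(decide("partner_a", comps, cap_breached=True,
             fallbacks=["fund_x"], has_capacity=lambda payer: False))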

Acceptance Criteria
Atomic Charge-Time Decisioning (<250 ms)
Given a charge event reaches the routing engine When a decision (proceed, fallback, or soft-stop) is evaluated Then the added decision latency is <= 250 ms at p95 and <= 400 ms at p99, measured from engine intake to decision emission And the decision is computed atomically and persisted before any downstream billing call is attempted And the decision response includes a monotonic decision_id used by downstream systems
Soft-Stop: Pause Only Partner-Funded Billable Components
Given partner P has reached its cap for scope S (e.g., month or campaign) And an action contains components funded by P, components funded by other partners, and non-billable components When the action is processed Then only the billable components funded by P are paused and not charged And components funded by other partners continue normally And non-billable components continue normally And the decision includes reason_code=soft_stop_cap_breached and the list of paused component_ids And no charges are posted to P for the paused components
Fallback Routing by Priority with Capacity Checks
Given partner P has breached its cap for scope S And a fallback payer priority list [F1, F2, ... Fn] exists And F1 has available capacity and is enabled When the charge event is processed Then the incremental cost is routed to F1 and charged to F1 And if F1 lacks capacity or is disabled, the next available Fi is selected And the decision includes payer_id set to the selected fallback and rule_path=fallback_priority And P is not charged for the routed cost
Graceful Degradation When No Fallback Is Available
Given partner P has breached its cap for scope S And no fallback payer in the priority list has capacity or routing fails definitively When the charge event is processed Then the engine applies a soft-stop to the partner-funded billable components And keeps non-billable and other-partner-funded components online And no partial or duplicate charges are attempted to P or any fallback And the decision includes reason_code=soft_stop_no_fallback and affected component_ids
Idempotent Retries and Exactly-Once Charging
Given a charge request is identified by idempotency_key K and trace_id T When transient errors occur and the request is retried with the same K within the idempotency window Then exactly one decision is committed and returned for all retries And exactly one downstream charge is posted (or none if soft-stopped) And subsequent retries return the original decision and reference the original charge_id if applicable And duplicate or out-of-order deliveries with the same K do not create additional charges or decisions
Comprehensive Decision Logging with Trace IDs
Given any routing decision is produced When the decision is persisted Then an audit log entry is available within 2 seconds containing: trace_id (UUID), idempotency_key, timestamp (UTC), partner_id, scope, cap_state_before, cap_state_after, decision_type (proceed|fallback|soft_stop), payer_id (if any), rule_path, component_ids, amount_intended, amount_charged, latency_ms, and override_id (if applied) And 100% of decision responses include a trace_id that correlates to exactly one audit log entry And audit entries are immutable and exportable for compliance review
Admin Reversible Overrides of Routing Decisions
Given an authorized admin creates an override specifying scope (partner, campaign, component), effect (force_fallback_to=F, force_soft_stop, lift_soft_stop_up_to=$X), and time window When applicable charge events occur during the override window Then the engine applies the override deterministically and records override_id in the decision and audit log And when the override is revoked or expires, subsequent decisions revert to rules-driven behavior And a dry-run validation shows the projected impact before activation And reverting an override is immediate and does not retroactively alter past audit entries
Billable Component Tagging & Attribution
"As a campaign builder, I want to tag which parts of an action are billable to each partner so that caps, routing, and invoices reflect true cost ownership without manual reconciliation."
Description

Provide a schema and UI to tag action components (e.g., call dialing minutes, email sends, SMS, data enrichment) as billable and attribute each component’s costs to a specific partner and campaign. Support default attribution rules and per-campaign overrides. Ensure that tagging drives metering accuracy, soft-stop scoping, and invoice line-itemization. Validate configurations to prevent orphaned costs and ensure every billable event resolves to exactly one payer or a routed fallback.
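A sketch of single-payer resolution using the precedence spelled out in the acceptance criteria below (campaign override > partner default > global default > fallback); the rule-store shape is an illustrative assumption.

def resolve_payer(event, rules):
    """Resolve exactly one payer for a billable event, or flag it for quarantine.

    rules is an assumed dict with keys:
      campaign_overrides[(campaign_id, component_type)] -> payer_id
      partner_defaults[(partner_id, component_type)]    -> payer_id
      global_defaults[component_type]                   -> payer_id
      fallback_payer                                    -> payer_id or None
    """
    key_campaign = (event["campaign_id"], event["component_type"])
    key_partner = (event["partner_id"], event["component_type"])
    for source, payer in (
        ("campaign_override", rules["campaign_overrides"].get(key_campaign)),
        ("partner_default", rules["partner_defaults"].get(key_partner)),
        ("global_default", rules["global_defaults"].get(event["component_type"])),
        ("fallback", rules.get("fallback_payer")),
    ):
        if payer is not None:
            return {"payer_partner_id": payer, "rule_source": source}
    return {"payer_partner_id": None, "rule_source": "quarantined"}

rules = {"campaign_overrides": {("camp_1", "email_send"): "partner_b"},
         "partner_defaults": {},
         "global_defaults": {"email_send": "partner_a"},
         "fallback_payer": None}
print(resolve_payer({"campaign_id": "camp_1", "partner_id": "partner_a",
                     "component_type": "email_send"}, rules))
# -> the campaign override wins: payer_partner_id = "partner_b"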

Acceptance Criteria
Single Payer Resolution for Billable Events
- Given a billable event with component_type, campaign_id, and published attribution rules, When the event is processed, Then the system assigns exactly one payer_partner_id (direct or fallback) and persists it on the event record.
- Given rules that would map an event to more than one payer, When an admin attempts to publish the ruleset, Then the publish is blocked with a specific ambiguity error and no changes take effect.
- Given an event that matches no direct payer and no configured fallback, When the event is processed, Then it is quarantined, excluded from meters/invoicing, and an alert is sent to org admins.
Attribution Precedence: Defaults vs Campaign Overrides
- Given a global default payer for component_type=X and a campaign-level override for campaign=C to payer=B, When an X event occurs in C, Then payer_partner_id=B.
- Given a partner-level default exists and no campaign override, When an event occurs, Then payer resolves to the partner-level default.
- Given global and partner defaults exist and no campaign override, When an event occurs, Then precedence is campaign override > partner default > global default > fallback, and the decision is written to an audit log with rule identifiers.
Real-Time Metering Accuracy and Idempotency
- Given valid tagged events stream in, When meters aggregate by partner, campaign, and component, Then meters reflect new units within 60 seconds of event ingestion.
- Given duplicate events share the same idempotency_key, When processed, Then only one counts toward meters and invoicing.
- Given 5 minutes have elapsed since ingestion, When comparing meters to raw accepted events, Then totals match within 0.1% or a reconciliation alert is raised.
Soft-Stop Scoping by Component and Partner
- Given Partner A’s monthly cap for component=SMS in Campaign C is reached with soft-stop enabled, When additional SMS events are triggered for Partner A in C, Then those SMS events are paused and labeled soft-stopped while other components and partners remain unaffected.
- Given overflow routing is configured to Fallback F, When the cap is reached, Then subsequent SMS events for Partner A in C are billed to F and continue without interruption.
- Given no overflow routing is configured, When the cap is reached, Then only the scoped billable components are paused and non-billable actions remain available.
Invoice Line-Itemization and Reconciliation
- Given a billing period is closed, When an invoice is generated for a partner, Then it contains line items per campaign and component_type with unit_count, unit_price, and subtotal, and the invoice total equals the metered totals for the period (difference = 0).
- Given a specific line item, When the user requests details, Then the system provides a downloadable CSV of event_ids supporting that line item.
- Given currency settings are USD, When calculating subtotals, Then rounding follows ISO 4217 two-decimal rules and totals reconcile to the cent.
Rule Authoring UI and Validation
- Given an admin creates or edits tagging/attribution rules, When saving, Then the UI and server validate required fields (component_type, billable_flag, unit_of_measure, unit_price, payer_partner_id or fallback_payer_id, scope) and block save with field-level errors for missing or conflicting inputs.
- Given overlapping rules that would cause ambiguity, When attempting to publish, Then the system prevents publish and highlights conflicting rules with guidance to resolve.
- Given a ruleset is published, When viewing history, Then a version number, author, and effective timestamp are recorded and visible.
Historical Integrity, Locking, and Reprocessing
- Given a rules change after events were invoiced, When viewing prior invoices, Then historical payer attribution remains unchanged and periods marked Locked cannot be reprocessed.
- Given events in an open period require re-attribution due to a rules fix, When reprocessing is initiated, Then meters and pending invoices restate to reflect the current ruleset and a restatement log is captured.
- Given reprocessing completes, When comparing pre- and post-restatement metrics, Then the system provides a downloadable diff report by partner, campaign, and component.
Audit Trail & Exportable Reports
"As a compliance officer, I want detailed, exportable logs of cap changes and routing decisions so that I can demonstrate stewardship of funds and satisfy audits without manual data pulls."
Description

Record immutable logs for cap creations/edits, threshold crossings, routing decisions, soft-stops, overrides, and meter adjustments. Include timestamp, actor, before/after values, and related entities (partner, campaign, component). Provide dashboard views and export to CSV/JSON, plus API endpoints for retrieval by date range and entity. Generate an audit-ready summary per billing period showing caps set, utilization, overflows, and payers of record to satisfy compliance and grant reporting needs.
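A minimal sketch of an append-only, tamper-evident log using the entry_id, prev_hash, and hash fields that appear in the acceptance criteria below; the hashing scheme (SHA-256 over the canonical JSON of each entry chained to the previous hash) is an assumption.

import hashlib, json, uuid
from datetime import datetime, timezone

def append_audit_entry(log, event_type, actor_id, before, after, **related):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "entry_id": str(uuid.uuid4()),
        "event_type": event_type,
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="milliseconds"),
        "actor_id": actor_id,
        "before_values": before,
        "after_values": after,
        **related,
        "prev_hash": prev_hash,
    }
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    entry["hash"] = hashlib.sha256((canonical + prev_hash).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log):
    # Recompute each hash; editing any earlier entry breaks every later hash.
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
        expected = hashlib.sha256((canonical + prev).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_audit_entry(log, "cap_created", "user_42", before={}, after={"limit": 500},
                   partner_id="partner_a")
append_audit_entry(log, "cap_edited", "user_42", before={"limit": 500},
                   after={"limit": 750}, partner_id="partner_a")
print(verify_chain(log))   # True; altering any field in log[0] makes this False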

Acceptance Criteria
Immutable Log for Cap Creation and Edit Events
Given a user with permission creates or edits a credit cap for a partner/campaign/component When the change is saved Then an immutable audit log entry is appended containing: event_type (cap_created|cap_edited), timestamp (UTC ISO-8601, millisecond precision), actor_id and actor_type (user|system), partner_id, campaign_id (nullable), component_id (nullable), before_values (object), after_values (object), reason (nullable), request_id, entry_id, prev_hash, hash And the log entry cannot be updated or deleted via any UI or API (attempts return 405) And the entry appears in the audit dashboard within 5 seconds of save
Threshold Crossings and Early-Warning Alerts Audited
Given a cap meter crosses a configured threshold (defaults: 50, 75, 90, 100 percent) in an upward direction When the threshold is crossed Then an audit entry is recorded with: event_type=threshold_crossed, cap_id, period (month|campaign), previous_percent, new_percent, threshold_percent, timestamp, partner_id, campaign_id, component_id (nullable), alert_channels, recipients_count, correlation_id And only one entry is recorded per threshold per cap per billing period And the entry appears in the dashboard within 5 seconds
Overflow Routing Decisions and Soft-Stop Actions Logged
Given a cap is exhausted or a soft-stop rule is active When an action would exceed the cap Then an audit entry is recorded with: event_type in (overflow_routed, soft_stop_paused), decision (route_to_fallback|pause_components), fallback_payer_id (nullable), affected_components[], affected_action_count, partner_id, campaign_id, timestamp, correlation_id And if an authorized user overrides a soft-stop to proceed Then an audit entry with event_type=override_applied is recorded including: actor_id, reason (required), scope (single_action|time_window), expiration (nullable), correlation_id And all entries are immutable and link to the triggering threshold/cap event via correlation_id
Meter Adjustments and Overrides Audit Details
Given a manual meter adjustment is submitted by an authorized user When the adjustment is applied Then an audit entry is appended with: event_type=meter_adjusted, cap_id, old_value, delta, new_value, unit=credits, justification (min 10 chars), actor_id, timestamp, partner_id, campaign_id, component_id (nullable), source_reference (nullable), entry_id, prev_hash, hash And the entry is immutable and visible on the dashboard within 5 seconds And both increases and decreases in utilization are recorded
Audit Dashboard Views, Filtering, and Access
Given a user with role in (Admin, Finance, Compliance) When they open the Audit Trail dashboard Then they can filter by: date range (UTC), event_type, partner_id, campaign_id, component_id, actor_id, and search by request_id or correlation_id And results are sorted by timestamp desc by default, support column visibility toggles, and paginate at 50 rows/page And the first page loads within 2 seconds for datasets up to 10,000 entries, and timestamps display in selected timezone (UTC or local) And users without required role receive 403 and no data is leaked
CSV/JSON Exports and API Retrieval by Date and Entity
Given audit entries match a selected filter When the user exports to CSV or JSON Then the file includes all fields (including entry_id, hash, prev_hash), UTF-8 encoding, RFC 4180-compliant CSV with header row, and row count matches the filtered total And exports up to 250,000 rows complete within 60 seconds and stream for larger datasets without timeouts And the API endpoint GET /api/audit-logs supports filters: start, end (ISO-8601), partner_id, campaign_id, component_id, event_type, actor_id, page, page_size (max 1000); returns 200 with data[], pagination metadata, and ETag; unauthorized requests return 401 and forbidden access returns 403 And results are ordered by timestamp desc and are consistent across CSV/JSON/API for the same filters
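A small client-side sketch of pulling the filtered audit log via GET /api/audit-logs with the page and page_size parameters listed above; the host, auth header, the data[] response field, and the end-of-results check are placeholders or assumptions.

import requests

BASE_URL = "https://example.rallykit.test"          # placeholder host
HEADERS = {"Authorization": "Bearer <api-token>"}   # placeholder credential

def fetch_audit_logs(start, end, partner_id=None, page_size=1000):
    """Pull all matching audit entries, following page-based pagination."""
    page, entries = 1, []
    while True:
        resp = requests.get(
            f"{BASE_URL}/api/audit-logs",
            headers=HEADERS,
            params={"start": start, "end": end, "partner_id": partner_id,
                    "page": page, "page_size": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()["data"]          # assumed response field
        entries.extend(data)
        if len(data) < page_size:           # assumed end-of-results signal
            break
        page += 1
    return entries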
Billing-Period Audit-Ready Summary Report
Given a billing period closes for a partner or campaign When the summary is generated at period close or requested via API Then the report includes: list of caps set (by entity) with values, total utilization per cap, threshold crossings, overflows count and credits, soft-stops count, routing decisions taken, and payer(s) of record with amounts; with period_start, period_end, generated_at And totals reconcile to underlying audit logs within 0.1% variance And the summary is viewable in the dashboard, exportable to CSV/JSON, and retrievable via GET /api/billing-summaries?start&end&entity with pagination
Role-Based Access & Change Controls
"As an account owner, I want controlled, auditable changes to caps and routing so that no single user can accidentally or improperly alter spending limits."
Description

Enforce permissions so only authorized roles can create or modify caps, routing rules, and component tagging. Support dual-approval for high-impact changes (e.g., raising a cap or changing fallback payer) and optional reason codes. Provide preview/diff of changes and effective-dating to schedule updates at period start. Notify stakeholders of approved changes and lock configurations after period close to preserve financial integrity.

Acceptance Criteria
RBAC enforcement on cap, routing, and component tagging
Given a user without edit permissions (e.g., Viewer role), When they attempt to create or modify a cap, routing rule, or component tag via UI or API, Then the UI action is disabled/blocked and the API returns 403 with no persistence. Given a user with the authorized role (e.g., Cap Editor within org/partner scope), When they create or modify permitted entities, Then the API returns 2xx, changes persist, and are visible to users with read access. Given any cross-tenant identifier, When a user attempts access outside their tenant, Then the system returns 404/403 and no data is leaked. Then every allowed or denied attempt creates an immutable audit log entry including user ID, role, timestamp (UTC), entity type/ID, fields changed (before/after), request origin, and outcome; and audit entries are view-only and cannot be edited or deleted via UI/API.
Dual‑approval for high‑impact changes with reason codes
Given a proposed cap increase or change to the fallback payer, When the requester submits the change, Then the change enters Pending Approval and does not take effect until approved by a different user with Approver role. Given the requester also has Approver role, When they attempt to approve their own request, Then the system blocks self‑approval and records the attempt. Given org setting "Require reason code" is enabled, When submitting a high‑impact change, Then a reason code from the configured list (with optional free‑text note) is mandatory; otherwise it is optional. When an approver approves, Then the request is marked Approved with approver identity and timestamp; when rejected, Then it is marked Rejected with a mandatory rejection reason and no changes are applied. Then the full approval chain (requester, approver(s), timestamps, comments) is recorded in the audit log and linked to the change set ID.
Change preview and diff before approval
Given a pending change, When an approver opens it, Then a preview shows current vs proposed values for cap amount, cap period, routing target(s), fallback payer, component tags, and effective date. Then the UI highlights only changed fields and displays impacted partner(s)/campaign(s) and period(s). Then a unique change set ID is shown and matches the ID stored in the audit log. Then the Approve action remains disabled until the approver has viewed the diff section (recorded as an event in the audit log). Given no field changes are detected, When a change is submitted, Then the system rejects the submission as a no-op with a clear error.
Effective‑dated scheduling at period start
Given a user selects "Next period start" for a partner/campaign, When scheduling a change, Then the effective date/time auto‑populates to the next billing period start using the org’s default timezone and cannot be set earlier. Given a custom effective date/time is entered, When validating, Then the date/time must be in the future and not within a closed period; otherwise the submission is rejected with a validation error. Given a change is Approved and effective‑dated in the future, When the effective date/time arrives, Then the system applies the change automatically and records the application event in the audit log. When an effective‑dated change is already Approved, Then its effective date cannot be edited; the user must cancel and resubmit a new change.
Stakeholder notifications on approved changes
Given a change is approved, When the approval is recorded, Then notifications are sent within 60 seconds to configured stakeholders (email and Slack/webhook) including change set ID, requester, approver, reason code (if provided), summary of changes, effective date, and links to view and audit. When delivery to a channel fails, Then the system retries with exponential backoff for up to 24 hours and records success/failure in notification logs. Given multiple recipients are configured, When notifications are sent, Then duplicates are suppressed per recipient per change set. Then notification preferences can be scoped per partner/campaign and are honored in delivery.
Period close locks and immutability
Given a billing period is marked Closed for a partner/campaign, When any user attempts to create or modify caps, routing, or component tags effective within that closed period, Then the UI shows a locked state and the API returns 422 with no changes persisted. Given a pending change targets a date within a now‑closed period, When the period is closed, Then the system automatically cancels the pending change and notifies the requester with the reason. When viewing historical configurations for a closed period, Then records are read‑only and match the applied audit log entries without mutation. Then any attempted edits to closed periods are recorded in the audit log as denied due to period lock.

Dunning Guard

Automate late‑payer safeguards with configurable net terms, reminder cadences, pay‑by‑link, and ACH/card retries. Quarantine delinquent partner charges, reallocate after grace periods, and freeze new spend tied to that partner only—without disrupting the rest of the campaign. Every step is logged for transparent, audit‑ready history.

Requirements

Net Terms & Grace Configuration
"As a finance admin, I want to set net terms and grace periods per partner so that safeguards trigger consistently with our policies and agreements."
Description

Provide organization- and partner-level configuration for invoice net terms (e.g., Net 15/30/45) and grace periods, including defaults, overrides, and effective-dating. Enforce validation (e.g., grace ≤ net term), surface settings in admin UI, and expose them via API for automation. Changes must propagate to billing, reminder cadences, retry windows, and quarantine/freeze rules so downstream safeguards trigger precisely when a partner becomes late. Include migration of existing partners to sensible defaults and guardrails to prevent accidental policy changes on active invoices.

Acceptance Criteria
Org Default Net Terms and Grace in Admin UI
Given I am an organization admin in the Admin UI Billing Settings When I set Net Terms to one of the allowed values (e.g., 15, 30, 45, or a custom value within 1–90) and Grace Period to a value within 0–30 days Then the settings save successfully with an effective start date of today and are displayed consistently across the Admin UI And the saved defaults are applied to new invoices created after save time unless a partner override exists
Partner-Level Overrides with Effective Dating
Given a partner has no override and the organization has defaults When I create a partner-level override for Net Terms and/or Grace with an effective date T Then invoices with issue date >= T use the partner override and invoices with issue date < T use the previously effective settings And when I clear the partner override, future invoices revert to using the organization defaults
Validation and Guardrails on Settings Changes
Given I attempt to save a Grace Period greater than the Net Terms Then the save is blocked and I see a validation error explaining grace must be less than or equal to net terms Given I attempt to set Net Terms outside 1–90 days or Grace outside 0–30 days Then the save is blocked with a validation error indicating allowed ranges Given there are active (issued, unpaid) invoices When I attempt to change net/grace with an immediate effective date Then the system blocks the change or requires a future effective date so active invoices are not altered Given I attempt to backdate a new effective period prior to today Then the save is blocked to prevent retroactive policy changes
Propagation to Billing, Due Dates, Reminders, and Retries
Given an invoice is issued on date I with applicable Net Terms N and Grace G Then the due date is I + N days and the delinquent date is due date + G days at 00:00 in the organization time zone Given settings change to new values N2/G2 with an effective date before the invoice due date Then the invoice’s due and delinquent dates recalculate to reflect N2/G2 and the reminder cadence re-schedules remaining reminders accordingly; already-sent reminders are not altered Given payment retries are enabled for failed charges on the invoice Then the retry window and schedule align to the recalculated due/delinquent dates based on the effective settings
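A worked sketch of the date arithmetic above: due date = issue date + net terms, and delinquent date = due date + grace at 00:00 in the organization time zone; the zoneinfo-based handling is illustrative.

from datetime import date, datetime, timedelta
from zoneinfo import ZoneInfo

def billing_dates(issue_date: date, net_days: int, grace_days: int, org_tz: str):
    due_date = issue_date + timedelta(days=net_days)
    delinquent_date = due_date + timedelta(days=grace_days)
    # Delinquency takes effect at 00:00 on the delinquent date in the org time zone.
    delinquent_at = datetime.combine(delinquent_date, datetime.min.time(),
                                     tzinfo=ZoneInfo(org_tz))
    return due_date, delinquent_at

# Net 30 with a 7-day grace period, issued 2025-03-01.
print(billing_dates(date(2025, 3, 1), 30, 7, "America/New_York"))
# -> due 2025-03-31, delinquent at 2025-04-07 00:00 Eastern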
Quarantine and Freeze Triggers and Scope Isolation
Given an invoice reaches its delinquent date and has an outstanding balance Then the partner is marked delinquent, new spend tied to that partner is frozen, and further charges for that partner are quarantined And no other partners or campaigns are affected by this state change Given a full payment posts before the delinquent date Then no quarantine or freeze is triggered Given a partial payment posts after the delinquent date leaving a balance Then quarantine/freeze remain in effect until the balance is cleared or an authorized override is applied
API Exposure with Validation and Concurrency Controls
Given I call GET /v1/billing/settings Then I receive the organization default net terms and grace, effective periods, and any partner-level overrides Given I call PUT/PATCH to update organization or partner settings with a correct version/etag Then the update succeeds and returns the new effective period metadata Given I call PUT/PATCH with a stale or missing version/etag while a concurrent update occurred Then the request is rejected with 409 Conflict Given I submit invalid values (e.g., grace > net or out of allowed ranges) Then the API responds 422 Unprocessable Entity with field-level errors
Migration of Existing Partners to Sensible Defaults Without Retroactive Changes
Given migration runs on an organization with existing partners and no explicit settings Then the organization default is initialized to Net 30 and Grace 7 days, and partners inherit these defaults Given a partner has pre-existing legacy terms information Then migration creates a partner override reflecting those terms with an effective date equal to the migration date Given invoices were issued before migration Then their due and delinquent dates remain unchanged and no reminder/retry schedules are retroactively recalculated
Reminder Cadence Engine
"As an accounts receivable specialist, I want automated, configurable reminders so that I spend less time chasing payments and partners have clear, timely prompts to pay."
Description

Automate multi-step reminder sequences for upcoming and overdue invoices with configurable timing (e.g., T-7, Due, D+3, D+7), channels (email at MVP; webhooks for Slack/ops tools), and templates with dynamic merge fields (amount due, due date, pay-by-link, invoice line items). Support pause/resume on payment, skip if balance zeroed, throttling to avoid spam, and localization. Log delivery and engagement events and expose status in a per-invoice timeline. Admin UI to define org-wide defaults and partner-specific overrides; API to trigger ad-hoc nudges.
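A scheduling sketch for the offset cadence above (T-7, Due, D+3, D+7), assuming offsets are day deltas relative to the due date, a configurable send hour, and that steps already in the past are marked skipped; the names are illustrative.

from datetime import date, datetime, time, timedelta
from zoneinfo import ZoneInfo

DEFAULT_CADENCE = [("T-7", -7), ("Due", 0), ("D+3", 3), ("D+7", 7)]

def schedule_reminders(due_date: date, org_tz: str, send_hour: int = 9,
                       cadence=DEFAULT_CADENCE, now=None):
    tz = ZoneInfo(org_tz)
    now = now or datetime.now(tz)
    steps = []
    for label, offset_days in cadence:
        send_at = datetime.combine(due_date + timedelta(days=offset_days),
                                   time(hour=send_hour), tzinfo=tz)
        status = "Skipped (In Past)" if send_at <= now else "Pending"
        steps.append({"step": label, "send_at": send_at, "status": status})
    return steps

for s in schedule_reminders(date(2025, 6, 30), "America/Chicago"):
    print(s["step"], s["send_at"].isoformat(), s["status"])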

Acceptance Criteria
Cadence Scheduling by Due Date and Updates
Given an invoice with a due_date and an org default cadence [T-7, Due, D+3, D+7] in the configured timezone, When the invoice is created or the due_date is updated, Then reminder steps are scheduled at the computed datetimes with assigned channel and template. Given a cadence step’s scheduled time is already in the past at scheduling time, When scheduling occurs, Then that step is marked Skipped (In Past) and is not sent. Given the due_date changes after some steps have been sent, When rescheduling runs, Then only future pending steps are re-computed to the new offsets, prior sent steps remain unchanged, and no duplicate pending steps are created. Given the scheduler runs repeatedly, When it evaluates the same invoice, Then the resulting schedule is idempotent and contains no duplicate pending steps.
Admin UI: Defaults and Partner Overrides with Precedence
Given an admin user with permissions, When they create or edit the org-wide cadence (steps with offset, channel, and template), Then the system validates required fields, prevents invalid offsets, saves the cadence, and shows a preview for a sample due_date in the configured timezone. Given a partner-specific override cadence is saved, When scheduling invoices for that partner, Then the override is applied instead of the org default; when the override is disabled, scheduling reverts to the org default. Given both an org default and a partner override exist, When calculating the effective cadence, Then precedence is partner override > org default and any unspecified properties in the override inherit from the org default. Given cadence configuration changes are saved, When auditing changes, Then an audit log entry exists capturing actor, timestamp, before/after values, and scope (org or partner).
Template Rendering and Localization
Given an email template contains merge fields {amount_due}, {due_date}, {pay_by_link}, and {invoice_line_items}, When a reminder is generated, Then all merge fields are resolved from the invoice context and the subject/body render without unresolved tokens. Given a partner locale is configured, When rendering the reminder, Then currency and date formats reflect the locale, translated copy for that locale is used, and fallback is the org default locale, else en-US. Given a pay-by-link is required, When rendering, Then a unique, secure pay-by-link URL for the invoice is included in the message. Given any required merge field cannot be resolved, When rendering, Then the reminder is not sent, status is RenderFailed, and an error with details is logged to the invoice timeline.
Payment State Suppression and Zero-Balance Skip
Given an invoice has a payment in_progress, When a reminder step becomes due, Then the send is paused and the step is marked Paused (Payment In-Progress) with a retry scheduled for after the payment resolves. Given a payment posts successfully and the invoice balance equals zero, When future reminder steps exist, Then all pending steps are cancelled and marked Cancelled (Paid) and no further sends occur. Given a partial payment posts and a non-zero balance remains, When the next step becomes due, Then the reminder sends and reflects the updated amount_due and any revised due_date.
Throttling and Send Suppression Windows
Given a configurable throttle window of 24 hours per invoice per channel, When a send is attempted inside the window after a prior send for the same invoice and channel, Then the attempt is skipped and marked Skipped (Throttled) on the invoice timeline. Given organization-wide throttle settings and partner-specific overrides, When both exist, Then partner overrides take precedence for that partner’s invoices. Given an ad-hoc nudge is requested via API and would violate the throttle window, When the request is processed, Then the nudge is rejected with HTTP 429 and no send is performed and the event is logged as Skipped (Throttled).
Outbound Delivery and API Ad-hoc Nudges
Given an email reminder step is due, When dispatching, Then the system records Sent with provider message ID; if a bounce or complaint webhook is received, Then the status updates to Bounced or Complaint accordingly. Given a webhook reminder step is due, When dispatching to the configured endpoint, Then a JSON payload including invoiceId, partnerId, stepId, templateId, amountDue, dueDate, payByLink, and locale is POSTed with a verifiable signature; any 2xx response marks Delivered, 5xx responses trigger exponential backoff retries up to 3 attempts, and 4xx responses do not retry and are marked Failed. Given an authenticated client, When calling the ad-hoc nudge API with invoiceId, channel, templateId, and an idempotency key, Then exactly one nudge is created and sent subject to skip/throttle rules and a 2xx response returns a nudge identifier; repeated calls with the same idempotency key within 24 hours do not create duplicates and return the original result.
Event Logging and Per-Invoice Timeline
Given any reminder lifecycle change occurs (Scheduled, Sent, Delivered, Bounced, Opened, Clicked, Skipped, Paused, Cancelled, Failed), When viewing the invoice timeline, Then an event is present with timestamp (UTC), channel, step identifier, recipient, templateId, status, reason/error code, and metadata. Given an email recipient opens or clicks the reminder, When the tracking pixel or tracked link is invoked, Then an Opened or Clicked event is recorded and appears on the invoice timeline within 2 minutes. Given multiple events exist for the same invoice, When viewing the timeline, Then events are displayed in reverse chronological order and include access to raw payloads for audit.
Pay‑by‑Link Checkout
"As a partner contact, I want a one-tap, secure link to pay my invoice so that I can resolve balances quickly without logging into a portal."
Description

Generate secure, single-invoice payment links that prefill partner, invoice, and amount due, supporting card and ACH based on partner permissions. Links include expiration, one-time tokenization, and SCA/3DS where applicable. The checkout flow returns real-time success/failure webhooks, updates invoice status, and feeds retry/quarantine logic. Provide branded templates, mobile-first design, and a lightweight receipt flow. Admins can regenerate or revoke links, and links are embedded in reminders and the dashboard.
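A sketch of an expiring, signed, single-invoice link, assuming an HMAC-signed token that carries invoice ID, partner ID, amount due, and expiry; the URL shape, secret handling, and single-use consumption (which would be enforced server-side) are assumptions.

import base64, hashlib, hmac, json, time

SECRET = b"link-signing-secret"             # placeholder secret

def make_pay_link(invoice_id, partner_id, amount_due, ttl_seconds=7 * 24 * 3600):
    claims = {"invoice_id": invoice_id, "partner_id": partner_id,
              "amount_due": amount_due, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"https://pay.example.test/checkout?token={payload}.{sig}"   # placeholder host

def parse_token(token):
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("tampered token")   # signature check fails -> reject (403)
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("expired token")
    return claims    # single-use consumption would be recorded server-side

link = make_pay_link("inv_1042", "partner_a", 125.00)
print(parse_token(link.split("token=")[1]))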

Acceptance Criteria
Generate single-invoice pay-by-link with prefilled partner, invoice, and amount
Given an admin selects an open invoice and partner with defined payment-method permissions When the admin clicks “Generate Pay‑by‑Link” Then the system creates a signed URL containing a single‑use token tied to the invoice ID, partner ID, and amount due with a configurable expiration TTL And the checkout renders only the payment methods permitted for that partner (card and/or ACH) And the link creation event is audit‑logged with actor, timestamp, invoice ID, partner ID, permitted methods, and TTL (no sensitive payment data persisted) And the link preview uses the org’s selected branded template
Enforce link expiration, single-use, and tamper protection
Given a valid, unexpired pay‑by‑link token When a payer opens the URL Then the token is marked active for the session and is consumed on successful payment or on explicit cancel Given a consumed or expired token When the URL is revisited or submitted Then access is blocked with an “expired or used” state and the attempt is audit‑logged Given any modification to signed parameters (invoice ID, amount, partner ID) When the URL is requested Then signature verification fails and the request is rejected with HTTP 403 and an audit log entry
Checkout supports card and ACH per permissions with SCA/3DS as required
Given card is permitted for the partner and regional rules require SCA/3DS When the payer attempts a card payment Then a 3DS challenge is initiated and must be successfully completed before authorization proceeds Given ACH is permitted for the partner When the payer selects ACH and provides bank details Then the flow presents required ACH authorization text and submits a debit for the invoice amount Given only one payment method is permitted When the checkout loads Then only that method is visible/selectable, and disallowed methods are not rendered
Real-time webhook delivery and invoice state transitions
Given a payment attempt succeeds When the processor confirms success Then a payment.succeeded webhook is delivered within 5 seconds including invoice_id, partner_id, amount, currency, transaction_id, and idempotency_key And the invoice status transitions to Paid with paid_at timestamp set Given a payment attempt fails When the processor returns a failure Then a payment.failed webhook is delivered within 5 seconds including standardized error code and message And the invoice status remains Due or transitions to Failed with failure_reason set And webhook handling is idempotent (duplicate events do not create duplicate updates)
Drive retry cadence, quarantine, and spend freeze from outcomes
Given a payment.failed webhook is processed for an invoice in Dunning Guard scope When the event is acknowledged Then a retry schedule is created per the org’s reminder cadence And the partner’s delinquent charges are quarantined and new spend tied to that partner is frozen only for that partner Given a later payment.succeeded for the same invoice When the event is processed Then quarantine is lifted and the partner‑specific spend freeze is removed And all state changes are audit‑logged with actor=system and source=webhook
Admin revokes or regenerates pay-by-links and downstream references update
Given an active pay‑by‑link exists When an admin clicks Revoke Then the token is immediately invalidated, future accesses show a revoked state, and an audit entry records the action Given an active pay‑by‑link exists When an admin clicks Regenerate Then the previous token is invalidated and a new URL with a new token and expiration is issued And the latest link is propagated to reminder templates and the dashboard within 2 minutes And all actions capture actor, timestamp, old_link_id, new_link_id in the audit log
Branded, mobile-first checkout and lightweight receipt flow
Given an org brand (logo, colors, reply‑to email) is configured When the checkout and receipt render Then the brand is applied consistently and passes WCAG 2.1 AA contrast and keyboard navigation checks And on a 3G network and 360–414px width device, the checkout first meaningful paint occurs within 2.5s and layout remains responsive with no horizontal scroll Given a successful payment When the receipt is generated Then a confirmation page and email are produced containing org name, partner name, invoice number, amount, currency, transaction ID, last4 (if card) or bank descriptor (if ACH), and timestamp And the downloadable receipt PDF (if requested) is under 100KB and includes an audit reference ID
Adaptive Payment Retry
"As a finance admin, I want smart, compliant payment retries so that recoverable failures are resolved automatically without harming partner experience."
Description

Automatically retry failed ACH/card payments with policy-driven schedules and caps, adapting to failure reason codes (e.g., insufficient funds vs. hard declines). Implement exponential backoff, banking-day awareness, and cutoffs tied to grace periods. Respect network and gateway rules, de-duplicate concurrent attempts, and surface next-attempt ETA to staff. Capture reason codes, update invoice timeline, and notify stakeholders selectively. Allow per-partner opt-outs and method preferences. Ensure PCI-compliant handling via tokenized instruments.
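A scheduling sketch that combines reason-code gating, exponential backoff, a banking-day shift for ACH, and the grace-period cutoff described above; the reason-code sets, backoff parameters, and holiday calendar are illustrative assumptions.

from datetime import datetime, timedelta

RETRYABLE = {"insufficient_funds", "network_timeout"}    # assumed soft failures
HARD_DECLINES = {"stolen_card", "account_closed"}        # assumed do-not-retry codes

def next_banking_day(dt, holidays=()):
    while dt.weekday() >= 5 or dt.date() in holidays:    # Saturday/Sunday or holiday
        dt += timedelta(days=1)
    return dt

def next_retry(reason_code, attempt, last_attempt_at, grace_period_end,
               method="ach", base_hours=6, factor=2, max_attempts=4, holidays=()):
    """Return (next_attempt_time, None) or (None, stop_reason)."""
    if reason_code in HARD_DECLINES or reason_code not in RETRYABLE:
        return None, "do_not_retry"
    if attempt >= max_attempts:
        return None, "max_attempts_reached"
    delay = timedelta(hours=base_hours * (factor ** attempt))
    candidate = last_attempt_at + delay
    if method == "ach":
        candidate = next_banking_day(candidate, holidays)
    if candidate > grace_period_end:
        return None, "grace_cutoff"
    return candidate, None

print(next_retry("insufficient_funds", attempt=1,
                 last_attempt_at=datetime(2025, 5, 2, 18, 0),   # a Friday evening
                 grace_period_end=datetime(2025, 5, 10),
                 method="ach"))
# -> shifted past the weekend to Monday 2025-05-05 06:00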

Acceptance Criteria
Reason‑Code Adaptive Retry Scheduling and Caps
Given a payment failure with a recoverable reason code (e.g., insufficient_funds or network_timeout), when the retry engine evaluates the invoice, then it schedules the next attempt using the policy designated for that reason code and increments the attempt count within the configured max_attempts per instrument per invoice. Given a payment failure with a hard-decline/do-not-retry reason code, when the retry engine evaluates the invoice, then no further retries are scheduled, the invoice is marked final-failure with the normalized reason, and the schedule is cleared. Given multiple failures of mixed reasons, when the next attempt is scheduled, then the most recent normalized reason code determines the policy used and the policy version is recorded on the attempt. Given max_attempts has been reached for the instrument, when an additional failure occurs, then no further attempts are scheduled and the stop reason is set to max_attempts_reached.
Exponential Backoff with Banking‑Day Awareness and Grace Cutoffs
Given a soft failure that is eligible for retry, when generating the next attempt schedule, then the interval increases exponentially using the configured base and factor and does not exceed max_backoff. Given an ACH payment and the calculated next attempt lands on a non-banking day or bank holiday per configured calendar, when scheduling, then the attempt is moved to the next banking day within business hours per policy. Given the organization-defined grace_period_end for the invoice occurs before the next computed attempt time, when scheduling, then no further attempts are created and the stop reason is grace_cutoff. Given exponential backoff would produce an attempt after grace_period_end, when recomputing the schedule, then earlier intervals are compressed only if allowed by policy; otherwise retries stop at the last allowed attempt before the cutoff.
Network/Gateway Rule Compliance Enforcement
Given a card network or gateway response that includes a do_not_retry flag or an ineligible code per compliance rules, when evaluating eligibility, then the system suppresses further retries and records compliance_suppressed=true on the attempt. Given network-specific retry spacing or daily caps (e.g., max 4 retries per instrument per 24h), when scheduling retries, then the system enforces these limits across all workers and nodes and defers or cancels attempts that would violate them. Given a gateway response contains a retry_after duration, when scheduling the next attempt, then the system honors retry_after if it is stricter than internal policy. Given a request rate would exceed gateway rate limits, when dispatching, then the dispatcher queues/delays the attempt and logs rate_limit_deferral with the planned dispatch time.
Concurrent Attempt De‑duplication and Idempotency
Given multiple scheduler workers pick the same invoice+instrument for retry within a short window, when dispatching, then at most one attempt transitions to in_progress and the others exit with status de_duplicated without contacting the gateway. Given a retry request is sent to the gateway, when using idempotency, then a deterministic idempotency key (invoice_id+instrument_token+attempt_seq) is attached and duplicate submissions return the original gateway transaction and are stored as a single attempt in the timeline. Given an attempt is in_progress, when another trigger tries to start a retry for the same invoice+instrument, then the system rejects the start with conflict and no charge is created. Given two retries are scheduled at the same timestamp due to race conditions, when reconciling the queue, then one is rescheduled according to policy or cancelled to preserve sequence order.
Operational Visibility: Invoice Timeline, Reason Codes, Next‑Attempt ETA, and Selective Notifications
Given any retry attempt outcome, when recording results, then the raw gateway payload, provider reason code, normalized reason, and human-readable label are stored and appended to the invoice timeline with an ISO8601 timestamp and actor=system. Given a next attempt is scheduled or rescheduled, when viewing the invoice or partner record, then staff see a Next Attempt ETA with timezone that updates within 5 seconds of any schedule change; if no further attempts remain, ETA displays None with the stop reason. Given notification policy settings, when intermediate retries are scheduled or fail, then staff emails/slacks are suppressed unless notify_on_intermediate=true; on final failure, a single summary notification is sent to finance and optionally to the partner per policy with a pay-by-link that expires per link_ttl. Given partner notifications are enabled for insufficient_funds on ACH, when sending, then messages are delivered only on business days between the configured local hours window and duplicate notifications are suppressed within a 24h dedupe period.
Per‑Partner Opt‑Outs and Payment Method Preferences
Given a partner has retry_opt_out=true, when a payment fails, then no retries are scheduled, the invoice is flagged manual_collection_required, and the override source is recorded in the audit log and displayed in the UI. Given a partner has preferred_methods=[ACH] and fallback_to_card=false, when an ACH failure with an ineligible reason (e.g., account_closed) occurs, then retries stop and the stop reason is preference_block; no card attempts are created. Given a partner has preferred_methods priority (e.g., [ACH, CARD]) and fallback_to_card=true, when ACH becomes temporarily ineligible (e.g., insufficient_funds cooling-off), then the scheduler switches to CARD according to card policy and records the method switch in the attempt metadata. Given both global policy and partner overrides exist, when computing the schedule, then partner overrides take precedence and the effective policy version and source are persisted on each attempt.
PCI‑Compliant Tokenized Instrument Handling
Given a retry is executed, when building the gateway request, then only tokenized instrument identifiers are used; no raw PAN or bank account numbers are stored, transmitted, or logged by RallyKit, and logs display only masked last4 and brand. Given users view payment details in the UI, when rendering instrument data, then sensitive authentication data is never displayed; access is role-restricted and audited, and hosted fields/iFrames are used for any data entry. Given audit exports are generated, when compiling retry histories, then no sensitive authentication data is included; only masked identifiers, reason codes, timestamps, and gateway transaction IDs are present. Given an attempt to input raw card or bank data through RallyKit, when validating client forms, then the input is blocked and users are redirected to the tokenization provider flow.
Partner Spend Quarantine
"As a campaign operations manager, I want delinquent partner charges quarantined so that unpaid activity is contained without disrupting the rest of the campaign."
Description

When an invoice becomes delinquent beyond grace, automatically quarantine new charges attributable to that partner into a suspense bucket rather than posting to billable spend. Flag quarantined items in real time across RallyKit modules (e.g., calling credits, messaging) without halting unrelated campaign operations. Provide reconciliation views, alerts, and rules to release or reassign quarantined charges upon payment or after escalation. Support reallocation after grace expiration per policy (e.g., move to org suspense account) with full reversibility and audit trail.
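A sketch of the charge-routing step that diverts a delinquent partner's new charges into a suspense bucket while other partners post normally; the bucket name follows the Suspense:Partner-{partner_id} pattern in the acceptance criteria below, and the ledger shape is illustrative.

from datetime import datetime, timezone

def route_charge(charge, delinquent_partners, ledger):
    """Post a charge to billable spend, or quarantine it if the partner is delinquent."""
    partner_id = charge["partner_id"]
    if partner_id in delinquent_partners:
        bucket = f"Suspense:Partner-{partner_id}"
        entry = {**charge,
                 "status": "Quarantined",
                 "quarantine_reason": "DelinquentBeyondGrace",
                 "bucket": bucket,
                 "quarantined_at": datetime.now(timezone.utc).isoformat()}
        ledger.setdefault(bucket, []).append(entry)
        return entry
    ledger.setdefault("Billable", []).append({**charge, "status": "Posted"})
    return ledger["Billable"][-1]

ledger = {}
route_charge({"partner_id": "partner_a", "amount": 12.50, "module": "messaging"},
             delinquent_partners={"partner_a"}, ledger=ledger)
route_charge({"partner_id": "partner_b", "amount": 3.00, "module": "calling"},
             delinquent_partners={"partner_a"}, ledger=ledger)
print(list(ledger.keys()))   # ['Suspense:Partner-partner_a', 'Billable']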

Acceptance Criteria
Auto-Quarantine on Grace Expiry
Given a partner invoice is delinquent and the configured grace period has ended When any new charge attributable to that partner is generated from any module Then the charge is recorded in Suspense bucket "Suspense:Partner-{partner_id}" with status=Quarantined, quarantine_reason=DelinquentBeyondGrace, and is excluded from billable spend and invoicing And the quarantine operation completes within 2 seconds of charge creation and is idempotent for duplicate events And the Suspense entry includes partner_id, campaign_id, module, original_charge_id, amount, currency, created_at, quarantine_reason, and source_user/integration identifiers And billable spend ledger remains unchanged while suspense ledger increases by the exact amount
Scoped Spend Freeze for Delinquent Partner
Given partner A is delinquent beyond grace and partner B is in good standing in the same campaign When simultaneous new charges are attempted for partner A and partner B Then partner A's charges are quarantined (status=Quarantined) and return error code QUARANTINED_PARTNER to initiating APIs/UI, while partner B's charges post normally with no degradation in completion time beyond 5% of baseline And no charges for partner A are posted to billable spend, even under race conditions or retries And the UI surfaces a non-blocking inline message for partner A actions: "Spend quarantined due to delinquency—view details"
Real-Time Cross-Module Quarantine Flagging
Given a partner is in a quarantined state When a user views Calling Credits, Messaging, and Action Pages modules Then quarantined items are visually flagged with a "Quarantined" badge and tooltip explaining the reason and next steps And module lists and export APIs include a quarantined=true boolean and quarantine_reason fields for affected items And the quarantine flag is visible within 5 seconds of state change across all modules and via API
Suspense Reconciliation and Rule-Based Release
Given there are quarantined items in the Suspense bucket When a finance user opens the Reconciliation view Then they can filter by partner, module, date range, amount range, status, and reason; sort by created_at and amount; and view running totals by partner and module And they can select items and perform actions: Release to Billable, Reassign to Org Suspense, or Mark for Escalation, with confirmation prompts summarizing ledger impact And configurable rules exist: on payment posted -> auto-release matching items within 10 seconds; on days_past_due >= policy_threshold -> auto-reassign to Org Suspense And every action adjusts ledgers accurately, updates item status, and emits an audit log entry and webhooks And exports (CSV) reflect current statuses and include audit reference IDs
Alerts and Notifications for Delinquency and Quarantine
Given a partner crosses grace into delinquency When quarantine is first activated Then RallyKit sends immediate alerts to configured channels (email, Slack/webhook) containing partner name, amount delinquent, pay-by-link, and link to reconciliation And alerts are de-duplicated per partner per 24h window, with a daily digest if quarantine remains active And alerts auto-resolve once payment posts and all items are released; otherwise escalate to "High" severity after policy_threshold days
Reversibility and Audit Trail Integrity
Given any quarantine-related action (quarantine, release, reassign, reversal) occurs When an auditor reviews the Audit Log Then entries are immutable, chronologically ordered, and include action_type, actor (user/service), timestamp (UTC), before_state, after_state, amounts, affected_ledger_ids, reason_code, and correlation_id And a reversal action restores the prior ledger state and references the original action via correlation_id, with no net discrepancy across ledgers And the audit log is exportable by date range and partner, and is retained per data retention policy with tamper-evident checksums
Selective Partner Spend Freeze
"As a nonprofit director, I want the system to freeze new spend for only the delinquent partner so that the rest of our advocacy work continues uninterrupted."
Description

Enable policy controls to freeze only new spend tied to a specific delinquent partner once thresholds are hit, while allowing all other partners and campaign functions to proceed. Provide granular scopes (module-level toggles), automatic and manual triggers, and in-app messaging that explains the freeze to internal users. Include override roles, escalation workflows, and clear unfreeze conditions (e.g., payment posted, override approved). Surface freeze status in partner profile, invoice view, and APIs to prevent accidental new allocations.

Acceptance Criteria
Automatic Freeze on Delinquency Threshold Reached
Given a partner’s account meets the configured delinquency thresholds (e.g., past_due_days >= configured_days OR past_due_balance >= configured_amount) When the dunning evaluator runs (at least every 5 minutes) or an invoice status changes to Past Due Then partner.freeze_status = "Frozen" is set with freeze_reason in ["past_due_days_exceeded","balance_over_limit"] and freeze_scopes = policy.default_scopes And an audit log entry is created with actor="system", timestamp, thresholds breached, selected scopes, and correlation to triggering invoice(s) And only new spend tied to this partner is blocked across the selected scopes; existing pre-scheduled spend created before the freeze continues unchanged And other partners’ spend and campaign functions remain unaffected And an in-app banner and inline notices appear within 1 minute on partner profile and allocation screens, explaining reason, thresholds, timestamp, and next steps
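To make the trigger concrete, here is a minimal sketch of the threshold evaluation described above. The type and field names (DunningPolicy, PartnerAccount, FreezeDecision) are illustrative, not RallyKit's actual schema.

interface DunningPolicy {
  configuredDays: number;
  configuredAmount: number;
  defaultScopes: string[];
}

interface PartnerAccount {
  partnerId: string;
  pastDueDays: number;
  pastDueBalance: number;
}

interface FreezeDecision {
  freeze: boolean;
  freezeReason?: "past_due_days_exceeded" | "balance_over_limit";
  freezeScopes?: string[];
}

// Evaluated by the dunning job (at least every 5 minutes) or on an invoice
// status change; a true result leads to partner.freeze_status = "Frozen".
function evaluateDelinquency(account: PartnerAccount, policy: DunningPolicy): FreezeDecision {
  if (account.pastDueDays >= policy.configuredDays) {
    return { freeze: true, freezeReason: "past_due_days_exceeded", freezeScopes: policy.defaultScopes };
  }
  if (account.pastDueBalance >= policy.configuredAmount) {
    return { freeze: true, freezeReason: "balance_over_limit", freezeScopes: policy.defaultScopes };
  }
  return { freeze: false };
}
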
Manual Partner Spend Freeze by Authorized Role
Given a user with role in ["Finance Admin","Dunning Manager"] opens a partner record and selects "Freeze spend" And selects freeze_scopes and reason_code = "manual_admin_freeze" and optionally enters justification text When the user confirms Then partner.freeze_status = "Frozen" is applied immediately with the selected scopes And an audit log records actor, justification, scopes, timestamp And notifications are sent to watchers and the partner owner within 1 minute And UI and API blocking behavior matches automatic freeze rules
Module-Level Freeze Scope Controls
Given module-level scopes are available ["AdBuys","SMS","EmailCredits","ListRentals","WorkOrders","Custom"] When a freeze is applied with a subset of scopes Then creation of new spend in the selected scopes is blocked And creation of new spend in non-selected scopes succeeds And partner profile displays scope badges indicating which modules are frozen And GET /partners/{id} returns freeze_scopes array matching the selection
Block New Allocations via UI and API While Frozen
Given partner.freeze_status = "Frozen" When a user or integration attempts to create or schedule new spend for that partner via UI or API Then the operation is rejected atomically with HTTP 409 and error_code = "freeze_active", including partner_id, scopes, and unfreeze_conditions in the response And no partial allocations are created; all-or-nothing enforcement is guaranteed And the UI shows a non-dismissable inline error and a "View dunning details" link
Override and Escalation Workflow for Temporary Unfreeze
Given a user with role "Finance Director" creates an override request for a frozen partner And specifies scopes, a time-bound window (duration <= 72h), and optional spend_cap When a distinct approver with role in ["Finance Director","CFO"] approves (requester cannot self-approve) Then partner.freeze_status = "TemporarilyUnfrozen" is applied with the specified scopes, cap, and expiry And notifications are sent to stakeholders; audit log records requester, approver, limits, and expiry And when cap is reached or expiry passes, the system automatically re-applies the prior freeze within 1 minute
Automatic Unfreeze on Payment or Condition Resolution
Given a frozen partner remits payment and delinquency falls below configured thresholds When the payment settles and ledger updates Then partner.freeze_status = "Unfrozen" is applied automatically within 5 minutes with unfreeze_reason = "payment_posted" And in-app banners are updated to reflect resolution and next steps And an audit entry records payment reference, resolution thresholds, and timestamp And if a manual override expires while delinquency still exists, the system re-freezes per policy without user action
Freeze Status Visibility in UI and APIs
Given users view partner profile, invoice view, allocation composer, and reporting screens When partner.freeze_status is in ["Frozen","TemporarilyUnfrozen"] Then each view displays a consistent status chip, reason, scopes, and next-step guidance And the invoice view links the triggering invoice(s) and shows amounts and days past due And APIs expose fields: freeze_status, freeze_reason, freeze_scopes, unfreeze_conditions, override_window, and audit references And list endpoints support filters (e.g., freeze_status=in:Frozen,TemporarilyUnfrozen) to power reporting
Audit‑Ready Dunning Log
"As a compliance officer, I need a complete, exportable history of all dunning actions so that we can prove adherence to policy and resolve disputes quickly."
Description

Record an immutable, searchable timeline of all dunning-related events: configuration changes, reminders sent, pay-by-link generations and clicks, payment attempts and outcomes, retries, quarantines, freezes, reallocations, overrides, and releases. Each entry includes timestamp, actor/system, trigger, payload snapshot, and result. Provide filters by partner, invoice, date range, and event type, plus export to CSV/JSON and webhook streaming. Ensure retention policies, redact sensitive PAN/ACH data, and support evidentiary needs for audits and partner disputes.

Acceptance Criteria
Immutable Dunning Event Logging
Given any dunning-related action occurs (configuration_changed, reminder_sent, pay_by_link_generated, pay_by_link_clicked, payment_attempted, payment_succeeded, payment_failed, retry_scheduled, retry_executed, partner_quarantined, spend_frozen, funds_reallocated, override_applied, quarantine_released) When the system processes the action Then a log entry is appended containing: id, ISO-8601 UTC timestamp, actor (userId or system), trigger (manual/API/scheduled/webhook), eventType, partnerId, invoiceId (if applicable), payloadSnapshot (JSON), and result (status code and message) And the entry cannot be updated via any API or UI and direct data-layer mutations are blocked/denied And delete operations on log entries are blocked/denied and return a 405 or equivalent And any correction is recorded as a new override_applied event referencing the prior entry id And concurrent actions produce distinct entries with unique ids and strictly monotonic createdAt ordering
Search and Filter the Dunning Log
Given the log contains at least 100,000 entries across multiple partners and invoices When a user filters by partnerId, invoiceId, date range (start/end, timezone-aware), and one or more eventType values Then the results include only entries matching all provided filters And results are sorted by timestamp descending by default and can be toggled to ascending And the first page (up to 200 records) returns in ≤2 seconds under typical load And pagination uses a stable cursor that returns consistent results across pages for the same filter set And requesting a filter set with no matches returns zero results without error
Export Filtered Log to CSV and JSON
Given a filtered result set of up to 100,000 entries When the user selects Export CSV Then a UTF-8 RFC 4180-compliant CSV with a header row is generated containing all non-redacted fields and downloads or is made available via a pre-signed URL within 60 seconds And the exported row count equals the filtered result count And sensitive fields remain masked/redacted in the export And an export_created event is logged with filter summary, format, rowCount, and checksum When the user selects Export JSON Then a newline-delimited JSON (NDJSON) file with the same fields and redactions is generated with the same performance, count accuracy, and logging guarantees
Realtime Webhook Streaming of Dunning Events
Given a webhook subscription is configured with target URL, secret, and selected event types When new matching dunning events are logged Then the system POSTs the event to the subscriber within 5 seconds of creation And each delivery includes event id, timestamp, type, partnerId, invoiceId (if applicable), payloadSnapshot (redacted), and result And requests are signed with HMAC-SHA256 using the shared secret and include a timestamp to prevent replay attacks And a 2xx response from the subscriber marks the delivery successful; non-2xx triggers retries with exponential backoff up to 12 attempts over 1 hour And deliveries are idempotent using the event id and include a unique delivery id; duplicates must be safely ignored by consumers And event ordering is preserved per partnerId and per invoiceId
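The signing and replay-protection requirements above could be sketched as follows; binding the timestamp into the signed payload and the 5-minute tolerance are assumptions rather than documented RallyKit conventions.

import { createHmac, timingSafeEqual } from "node:crypto";

// Sign the delivery body together with a unix-seconds timestamp so a replayed
// body with an old timestamp fails verification on the consumer side.
function signDelivery(secret: string, timestamp: string, body: string): string {
  return createHmac("sha256", secret).update(`${timestamp}.${body}`).digest("hex");
}

function verifyDelivery(secret: string, timestamp: string, body: string, signature: string, toleranceSeconds = 300): boolean {
  const ageSeconds = Math.abs(Date.now() / 1000 - Number(timestamp));
  if (!Number.isFinite(ageSeconds) || ageSeconds > toleranceSeconds) return false; // outside the replay window
  const expected = Buffer.from(signDelivery(secret, timestamp, body), "hex");
  const received = Buffer.from(signature, "hex");
  return expected.length === received.length && timingSafeEqual(expected, received);
}
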
Sensitive Data Redaction in Log, Exports, and Webhooks
Given an event payload includes payment card or bank account details When the payloadSnapshot is persisted to the log Then PAN values are never stored in full and are masked to last4 only, CVV/CVC are never stored, and ACH account/routing numbers are masked to last4 And tokens or instrument ids are stored/displayed in masked form only And only an allowlisted subset of fields is retained in payloadSnapshot; all others are dropped And the same redaction rules apply to search results, exports, and webhook payloads And automated scans over a sample of 10,000 recent entries detect zero occurrences of unmasked PAN/CVV/ACH numbers
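A minimal sketch of the allowlist-plus-masking behavior described above; the field names, the allowlist contents, and the masked-key mapping are examples, not the production redaction policy.

const ALLOWED_FIELDS = new Set(["eventType", "partnerId", "invoiceId", "amount", "currency"]);
const MASK_FIELDS: Record<string, string> = {
  pan: "cardLast4",
  achAccountNumber: "achAccountLast4",
  achRoutingNumber: "achRoutingLast4",
};
const DROP_FIELDS = new Set(["cvv", "cvc"]); // never stored in any form

function maskToLast4(value: string): string {
  const digits = value.replace(/\D/g, "");
  return digits.length >= 4 ? `****${digits.slice(-4)}` : "****";
}

// Applied before persisting payloadSnapshot, and again on search results,
// exports, and webhook payloads so all surfaces share one redaction rule.
function redactPayload(payload: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    if (DROP_FIELDS.has(key)) continue;
    if (key in MASK_FIELDS) out[MASK_FIELDS[key]] = maskToLast4(String(value));
    else if (ALLOWED_FIELDS.has(key)) out[key] = value; // everything else is dropped (allowlist semantics)
  }
  return out;
}
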
Retention Policy Enforcement and Legal Hold
Given an organization-level retention policy N days is configured and a legal hold list exists for specified partners or invoices When a log entry exceeds N days and is not under legal hold Then it is purged within 24 hours by a scheduled job And a non-sensitive purge_tombstone event is appended recording the purged entry id (hashed), purge reason, and timestamp And purged entries no longer appear in search, export, or webhook outputs And entries under legal hold are excluded from purge until the hold is removed And purge operations are tracked with metrics (count, duration, failures) and failures are retried with alerting
Audit/Dispute Evidence Package Generation
Given an auditor or admin requests an evidence package for a specific partner, invoice, and/or date range When Generate Evidence is initiated Then a ZIP bundle is produced within 2 minutes for up to 100,000 entries containing: a manifest with scope and filter summary, ordered CSV and NDJSON of the log entries (redacted), and relevant configuration snapshots referenced by those entries And the bundle includes a SHA-256 checksum and creation timestamp And access to the bundle is provided via a pre-signed URL that expires in 7 days and is restricted to authorized roles And an evidence_bundle_generated event is logged with scope, counts, checksum, and requester id

Invoice Composer

Generate branded, itemized invoices per organization and campaign with rate cards, unit counts, taxes/VAT, PO fields, and notes. Include VerifyLink-backed, tamper‑evident receipt bundles for each line or batch. Export PDF/CSV, schedule sends, and sync references to your accounting system, cutting reconciliation time to minutes.

Requirements

Org-Branded Invoice Templates
"As a nonprofit director, I want our invoices to reflect our brand and legal details so that recipients trust them and finance teams can process them without extra verification."
Description

Enable per-organization invoice templates with logos, color palette, typography, header/footer blocks, legal entities, bank/remittance details, and locale-aware formatting. Provide a WYSIWYG template editor with live preview, tokenized fields (organization, campaign, billing period, VerifyLink), and multi-language support. Allow default templates at the org level with campaign-level overrides. Ensure generated PDFs meet accessibility standards (tagged PDFs, selectable text) and include precise page layout controls (margins, page breaks) for consistent rendering across devices and printers.

Acceptance Criteria
Org-Level Branding and Identity in Templates
Given an org admin uploads a logo (SVG or PNG ≤ 2 MB) and sets a header placement and width in mm, When the template is saved and a PDF is generated, Then the logo renders on every page header at the configured width ±2 mm with no clipping or distortion. Given the org defines a color palette (primary, secondary, accent in hex), When applied to headings, table borders, and link text in the template, Then the generated PDF uses those colors exactly (hex match) and links remain visually distinct and clickable. Given the org selects typography (font family from the allowed list and sizes for H1/H2/body), When a PDF is generated, Then the chosen fonts are embedded and used for all corresponding text, and no font substitution warnings appear in Acrobat Preflight. Given header and footer blocks are configured with rich text and placeholders, When previewing and generating a multi-page invoice, Then the header and footer render on every page, do not overlap body content, and maintain a minimum 10 mm content-safe area. Given legal entity name and bank/remittance details are provided, When the template is rendered, Then these details appear in the footer as selectable text (not a raster image) and copy/paste preserves exact characters.
WYSIWYG Editor with Live Preview
Given an org admin edits template content (text, styles, blocks), When a change is made, Then the live preview updates within 1 second and reflects the exact formatting. Given unsaved changes exist, When the user attempts to navigate away or close the editor, Then a confirmation dialog appears and leaving without saving discards changes. Given the editor could produce invalid markup, When content is saved, Then it is sanitized to valid, safe HTML/CSS and no rendering errors occur in preview or PDF generation. Given blocks are reordered via drag-and-drop, When the new order is saved, Then the preview and subsequent PDFs reflect the new order. Given a template exceeds one page, When previewing page boundaries, Then page break guides display accurately relative to current margin settings.
Tokenized Fields and Data Binding
Given the user inserts tokens {organization.name}, {campaign.name}, {billing_period.start}, {billing_period.end}, and {verifylink}, When previewing or generating an invoice for a specific campaign and billing period, Then each token is replaced with the correct value from that context. Given a token is unknown or misspelled, When saving the template, Then validation fails with an inline error listing the unknown tokens and the template is not saved until corrected. Given a required token has missing source data (e.g., billing period not set), When generating a PDF, Then generation is blocked with a clear error describing the missing data and how to resolve it. Given a {verifylink} token is present, When a PDF is generated, Then the link is clickable, unique per invoice, cryptographically signed (tamper-evident), and altering any character in the URL causes verification to fail server-side (HTTP 4xx).
Multi-language and Locale-aware Formatting
Given the org language is set to Spanish (es-ES) and currency to EUR, When rendering an invoice dated 2025-03-07 totaling 1234.5, Then the date displays as 07/03/2025 and the amount as 1.234,50 € with Spanish labels for static text. Given the template includes translatable labels, When a translation is missing, Then the default language label is used and the missing key is reported in logs and highlighted in the editor. Given the org switches to English (en-US) and USD, When rendering the same invoice, Then the date displays as 03/07/2025 and the amount as $1,234.50 with English labels. Given number, date, and currency formats are locale-driven, When previewing and generating PDFs, Then formatting in preview matches output exactly for the selected locale.
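The locale rules above map directly onto the built-in Intl APIs, as in this sketch; exact symbol placement and spacing can vary slightly with the runtime's ICU data, so the expected strings in the comments are illustrative.

function formatAmount(amount: number, locale: string, currency: string): string {
  return new Intl.NumberFormat(locale, { style: "currency", currency }).format(amount);
}

function formatDate(isoDate: string, locale: string): string {
  return new Intl.DateTimeFormat(locale, {
    day: "2-digit", month: "2-digit", year: "numeric", timeZone: "UTC",
  }).format(new Date(`${isoDate}T00:00:00Z`));
}

console.log(formatAmount(1234.5, "es-ES", "EUR"), formatDate("2025-03-07", "es-ES")); // "1.234,50 €" "07/03/2025"
console.log(formatAmount(1234.5, "en-US", "USD"), formatDate("2025-03-07", "en-US")); // "$1,234.50" "03/07/2025"
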
Default Templates and Campaign Overrides
Given a default org template is set, When a new campaign is created, Then the campaign inherits the default template without user action. Given a campaign sets a template override, When generating invoices for that campaign, Then the override template is used; when the override is removed, Then subsequent invoices revert to the org default. Given role-based access control, When a non-admin attempts to set the org default template, Then the action is denied with HTTP 403; campaign owners may set campaign-level overrides only. Given template versioning is enabled, When the org updates the default template, Then previously generated PDFs remain unchanged, and new invoices use the latest published version.
PDF Accessibility and Selectable Text
Given a PDF is generated from any template, When validated with a PDF/UA checker (e.g., PAC 2024), Then it passes with zero critical errors and includes a correct reading order, tagged headings, table headers, and link annotations. Given the template includes a logo image, When the PDF is generated, Then the logo has alternative text set from the org name. Given text content is rendered, When viewed in a PDF reader, Then all text is selectable and copyable (no full-page rasterization), and fonts are embedded. Given brand colors are applied, When evaluated for contrast, Then body text and headings meet WCAG 2.1 AA contrast thresholds against their backgrounds.
Page Layout Controls and Consistent Rendering
Given page margins are set (e.g., top/bottom/left/right in mm), When a PDF is generated, Then measured printable content margins are within ±2 mm of configured values on every page. Given a manual page break is inserted in the template, When rendering, Then content after the break starts at the top of the next page and no content is truncated. Given a table section is marked "keep rows together", When a row would split across pages, Then the entire row moves to the next page without overlap. Given the same invoice PDF is opened in Acrobat Reader, macOS Preview, and Chrome, Then the page count is identical and no overlapping or clipped elements are observed in any viewer. Given the page size is set to A4 or Letter, When printing from common PDF viewers, Then the print dialog defaults to 100% scale with correct page size and no auto-fit cropping.
Rate Cards and Itemized Line Items
"As a campaign manager, I want to pull itemized lines from our rate card and enter unit counts so that invoices are consistent and quick to assemble."
Description

Provide rate card management per organization and campaign with item catalogs, unit types (hours, actions, contacts, seats), tiered pricing, effective date ranges, and currency. Allow creating invoices by selecting rate card items, entering quantities, and optionally overriding prices with audit notes. Support discounts (percent/fixed), per-line and invoice-level, and show computed subtotals. Permit attaching supporting evidence to each line and tagging lines to campaigns and service periods. Validate quantity and pricing inputs and prevent negative or inconsistent totals.

Acceptance Criteria
Rate Card Item Selection With Effective Dates and Currency Lock
Given an organization has a USD rate card with item "Outbound Calls" priced $1.25/unit effective 2025-01-01 to 2025-12-31 When a user creates an invoice with service period 2025-08-01 to 2025-08-31 and adds "Outbound Calls" Then the system auto-selects the USD rate card and applies $1.25 as unit price Given the service period falls outside any item's effective date When the user attempts to add the item Then the item cannot be added and the user sees "No active price for selected period" Given an invoice started with currency USD When the user tries to add an item from an EUR rate card Then the add is blocked and the user is prompted to choose a matching-currency rate card
Tiered Pricing Calculation (Graduated Tiers)
Given a rate card item "Emails Sent" with graduated tiers: 0–999 @ $0.05, 1,000–4,999 @ $0.045, 5,000+ @ $0.04 When quantity 6,200 is entered Then the line extended amount equals (1,000*0.05 + 4,000*0.045 + 1,200*0.04) = $279.00 and the applied tier breakdown is displayed Given the same item When quantity 800 is entered Then the extended amount equals 800*0.05 = $40.00 Given tier definitions change effective 2025-09-01 When service period is 2025-09-10 to 2025-09-20 Then the new tiers are used for calculation
Quantity and Unit Type Validation
Given unit type "hours" When the user enters a quantity with up to 2 decimal places between 0.01 and 10,000 Then the value is accepted; otherwise it is rejected with an inline error Given unit type "actions" or "contacts" or "seats" When the user enters a non-integer or a negative value Then the input is rejected and save is blocked Given any unit type When quantity equals 0 Then the input is rejected with "Quantity must be greater than zero" Given any line When the computed extended amount requires rounding Then round-half-up to 2 decimals in invoice currency is applied
Line Price Override With Audit Trail
Given a line added from a rate card When a user overrides the unit price Then the system requires an audit note (min 10 characters) and records user, timestamp, original unit price, new unit price, and note Given an override is saved When the line is displayed or exported Then both original and overridden unit price are shown along with the audit note Given a line with an override When the user removes the override Then the unit price reverts to the applicable rate card price and a reversal entry is logged
Discounts: Per-Line and Invoice-Level With Validation
Given a line item with unit price $100 and quantity 3 (extended $300) When a percent discount of 10% is applied to the line Then the line discount equals $30.00 and the line total equals $270.00 Given the same line When a fixed discount of $305 is entered Then the input is rejected with "Discount exceeds line amount" Given an invoice subtotal of $1,000 before invoice-level discounts When a 7.5% invoice-level discount is applied Then the invoice discount equals $75.00 and no line total becomes negative Given both line-level and invoice-level discounts When totals are computed Then line discounts are applied first, followed by invoice-level discount proportionally across lines, and the final invoice total is non-negative
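The ordering rule above (line discounts first, then the invoice-level discount allocated proportionally across lines, with no line or invoice total going negative) can be sketched as follows; the rounding strategy and field names are assumptions.

interface Line { extended: number; lineDiscount: number }

function round2(n: number): number { return Math.round(n * 100) / 100; }

function applyDiscounts(lines: Line[], invoicePercent: number) {
  const lineTotals = lines.map(l => round2(Math.max(0, l.extended - l.lineDiscount)));
  const subtotal = round2(lineTotals.reduce((a, b) => a + b, 0));
  const invoiceDiscount = round2((subtotal * invoicePercent) / 100);
  // Allocate proportionally; the last line absorbs any rounding remainder so
  // allocations sum exactly to the invoice-level discount.
  let allocated = 0;
  const allocations = lineTotals.map((t, i) => {
    if (i === lineTotals.length - 1) return round2(invoiceDiscount - allocated);
    const share = subtotal === 0 ? 0 : round2((t / subtotal) * invoiceDiscount);
    allocated = round2(allocated + share);
    return share;
  });
  const grandTotal = Math.max(0, round2(subtotal - invoiceDiscount));
  return { subtotal, invoiceDiscount, allocations, grandTotal };
}

// Example from the criteria: $1,000 subtotal and a 7.5% invoice-level discount -> $75.00 discount.
console.log(applyDiscounts([{ extended: 300, lineDiscount: 30 }, { extended: 730, lineDiscount: 0 }], 7.5));
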
Computed Subtotals and Totals Display
Given an invoice with one or more lines When quantities, overrides, and discounts are entered Then the UI shows per-line: unit price, quantity, line discount, line subtotal, and line total; and per-invoice: subtotal before discounts, total line discounts, invoice-level discount, and grand total, all in the invoice currency Given an invoice When any input changes affecting amounts Then all dependent subtotals and totals recalculate within 200 ms and match export values Given an invoice When exported to PDF and CSV Then all shown amounts and breakdowns match on-screen values including currency symbol and 2-decimal precision
Line Attachments and Campaign/Service Period Tagging
Given a line item When a user uploads up to 5 attachments (PDF, PNG, JPG, CSV, DOCX) totaling up to 25 MB per line Then the files are saved, virus-scanned, and retrievable; exports include attachment metadata (filename, size, uploaded by, uploaded at) Given a line item When the user tags a campaign from the same organization and sets a service period (start and end dates) Then the tags are saved and shown in the invoice and exports Given a line item When the service period is outside the invoice date range Then the system allows it but flags with a non-blocking warning "Service period outside invoice dates"
Regional Tax/VAT Calculation Engine
"As a finance coordinator, I want taxes calculated correctly for each client’s jurisdiction so that we stay compliant and avoid rework."
Description

Implement configurable tax profiles per organization and destination, supporting VAT/GST/sales tax, tax IDs/registration numbers, exemptions, reverse charge, and zero-rated supplies. Allow per-line taxability and multiple tax components. Calculate taxes based on org and client jurisdictions with correct rounding rules and include a detailed tax breakdown on PDFs/CSVs. Validate tax ID formats where applicable and include required statutory notes on invoices. Maintain historical tax rates for accurate back-billing and auditing.

Acceptance Criteria
Tax Profile Selection by Origin/Destination and Date
Given an organization with multiple active tax profiles mapped by destination jurisdiction and validity date ranges And an invoice with seller origin, buyer destination, and invoice date When the invoice is calculated Then the engine selects the highest-priority tax profile whose jurisdiction matches the destination (or origin where the profile requires) and whose validity range contains the invoice date And the selected profile ID and version are recorded on the invoice metadata And all tax computations use the rates and rules from that profile And currency rounding follows the configured jurisdictional rule for that profile
Per-Line Taxability and Multiple Tax Components
Given a rate card with items flagged as taxable or exempt and a tax profile defining multiple tax components such as state, county, and city When an invoice is calculated with one or more line items and quantities Then taxable lines have each applicable tax component applied at the configured rate And exempt or non-taxable lines show zero tax for all components And per-line tax amounts and component breakdowns are calculated before invoice-level summaries And the invoice tax total equals the sum of per-line component amounts within a tolerance of ±0.01 in the invoice currency
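A sketch of the per-line, multi-component computation described above; component names, rates, and the rounding helper are illustrative, and jurisdiction-specific rounding rules would replace round2 in practice.

interface TaxComponent { name: string; rate: number } // rate as a decimal, e.g. 0.0625
interface InvoiceLine { amount: number; taxable: boolean }

function round2(n: number): number { return Math.round(n * 100) / 100; }

function computeLineTaxes(line: InvoiceLine, components: TaxComponent[]) {
  const breakdown = components.map(c => ({
    name: c.name,
    rate: c.rate,
    amount: line.taxable ? round2(line.amount * c.rate) : 0, // exempt lines show zero for every component
  }));
  return { breakdown, lineTax: round2(breakdown.reduce((s, b) => s + b.amount, 0)) };
}

// Invoice tax total is the sum of per-line component amounts.
function invoiceTaxTotal(lines: InvoiceLine[], components: TaxComponent[]): number {
  return round2(lines.reduce((s, l) => s + computeLineTaxes(l, components).lineTax, 0));
}
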
Reverse Charge and Zero-Rated Supplies
Given a VAT/GST tax profile that supports reverse charge and zero-rated supplies And the buyer’s jurisdiction and tax ID indicate eligibility for reverse charge, or the line item is flagged zero-rated with a reason code When the invoice is calculated Then VAT/GST is not charged on the eligible lines And the invoice displays the required statutory note for the jurisdiction including the reverse-charge statement or the zero-rate reason And taxable lines not meeting these conditions are taxed normally
Tax ID Format Validation
Given the organization and client provide tax IDs appropriate to their jurisdictions When a user enters or updates a tax ID on the organization, customer, or invoice Then the system validates the format and checksum rules for supported jurisdictions And rejects invalid IDs with a field-level error explaining the expected format And accepts valid IDs regardless of case or common separators And stores normalized IDs for downstream use on invoices and exports
Customer and Line-Level Tax Exemptions
Given a customer has an active tax exemption record for a jurisdiction with a certificate or authorization ID and validity dates And specific line items may be exempt by category When the invoice is calculated for that customer and jurisdiction Then eligible lines are set to zero tax and marked as tax-exempt with the certificate or authorization ID And ineligible lines remain taxable And the invoice includes the required exemption note for the jurisdiction
Historical Tax Rates and Back-Billing
Given a tax profile with versioned rates and effective date ranges stored immutably And an invoice is created or edited with an invoice date in the past When taxes are calculated Then the engine applies the rates and rules effective on the invoice date, not today’s rates And the applied profile version, component rates, and calculation timestamps are recorded for audit And recalculating the same invoice without changing the invoice date yields identical tax results
Detailed Tax Breakdown on PDF and CSV Exports
Given an invoice with one or more taxable lines and multiple tax components When the user exports the invoice to PDF or CSV Then each line in the export shows per-component tax name, rate, and amount, plus the line’s total tax And the export shows invoice-level tax subtotals per component and a grand tax total And displayed amounts match internal calculations and on-screen values to two decimal places in the invoice currency And required statutory notes and tax IDs appear in the designated sections
PO, Notes, and Custom Reference Fields
"As an accounts receivable clerk, I want PO and reference fields on invoices so that clients can approve and pay without delays."
Description

Capture purchase order numbers, client reference codes, internal cost centers, and freeform notes at both invoice and line-item levels. Allow admins to define required/optional custom fields per organization and mark which fields must appear on exports/PDFs. Enable filtering and searching by these references across the invoice list. Enforce presence and format of required fields before sending or exporting.

Acceptance Criteria
Admin configures invoice and line-item custom reference fields
Given I am an organization admin on Settings > Invoice Fields When I create custom fields with scope Invoice and Line Item including label, key, data type (text/number), required flag, and optional regex format Then the fields are saved for this organization and listed with their scope, required, and format metadata And When I toggle “Show on PDF/Export” for any field Then the toggle state is persisted and will control visibility in exports
Invoice editor shows and persists PO, client refs, cost center, and notes
Given custom reference fields are configured for the organization When I create or edit an invoice in Invoice Composer Then invoice-level fields (e.g., PO Number, Client Ref, Cost Center, Notes) are visible in the Invoice Details section And line-item fields (including Notes) are visible per line item row And notes accept multi-line input up to 2000 characters and preserve newlines in saved data And when I save the draft and reopen it, all entered values persist at their respective scopes
Required and format validations block send and export
Given at least one invoice-level and one line-item field are configured as required, with a regex format on PO Number And an invoice is missing required values or has values that violate the regex When I attempt to Send the invoice or Export PDF/CSV Then the action is blocked and an error summary lists each missing/invalid field with scope and, for line items, the row number And the offending fields are highlighted inline with specific messages (e.g., “PO Number must match ^[A-Z0-9-]{6,}$”) And once all errors are corrected, the Send/Export action completes successfully
Search and filter invoices by PO and custom references
Given there are invoices with various PO Numbers, Client Refs, and Cost Centers When I use the invoice list search bar with a keyword Then results include invoices where any reference field contains the keyword (case-insensitive, substring match) And When I apply field-specific filters (e.g., PO Number equals X, Cost Center contains Y) Then only invoices matching all active filters are shown And clearing the search and filters returns the full list
PDF and CSV exports include selected reference fields only
Given “Show on PDF/Export” is enabled for PO Number and Client Ref and disabled for Notes When I export an invoice to PDF Then the PDF displays the enabled invoice-level fields in the header and the enabled line-item fields per row And disabled fields do not appear anywhere on the PDF When I export invoices to CSV Then columns exist for all enabled invoice-level and line-item fields with their configured labels as headers And disabled fields are not present as columns And empty values appear as empty cells in CSV and as “—” in PDF
Scheduled send/export preflight validation for required references
Given a user schedules an invoice Send and an Export for a future time And the invoice has required reference fields configured When the schedule is created Then the system performs a preflight check and blocks scheduling if required fields are missing or invalid, showing specific errors And When the scheduled time arrives Then the system re-validates and executes only if all required fields are present and valid; otherwise it fails the run and notifies the user with the error summary
VerifyLink Tamper-Evident Receipt Bundles
"As a grant auditor, I want a verifiable link for each invoiced activity so that I can confirm delivery without requesting raw logs."
Description

Generate tamper-evident receipt bundles per invoice line or per invoice batch that summarize underlying RallyKit actions (e.g., calls, emails) with cryptographic hashes, timestamps, and counts. Embed a VerifyLink URL and optional QR code in the invoice PDF and email that opens a verification page showing line-to-proof mapping and integrity status. Detect any data tampering and surface a clear validation state. Store immutable proofs and provide an API/CSV export for auditors. Log verification events for audit trails.

Acceptance Criteria
Generate Tamper‑Evident Bundle per Invoice Line
Given an invoice with line items linked to RallyKit actions When the invoice is finalized Then the system generates a receipt bundle per line item containing: SHA-256 digest of the manifest, ISO 8601 UTC createdAt timestamp, total actionCount, and lineItemId And the manifest lists every actionId included with its individual SHA-256 digest And the bundle JSON validates against schema version v1.0 And the bundle receives a unique bundleId and is persisted with its top-level SHA-256 hash stored on the invoice record And any subsequent modification attempt creates a new bundleId; the original remains unchanged
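The bundle construction above can be sketched as hashing each action, building a manifest, and hashing the manifest for the top-level digest; persistence and VerifyLink URL signing are out of scope here, and the field names simply mirror the criteria.

import { createHash, randomUUID } from "node:crypto";

const sha256 = (data: string): string => createHash("sha256").update(data).digest("hex");

interface ActionRecord { actionId: string; payload: string }

function buildLineBundle(lineItemId: string, actions: ActionRecord[]) {
  const manifest = {
    lineItemId,
    createdAt: new Date().toISOString(), // ISO 8601 UTC
    actionCount: actions.length,
    actions: actions.map(a => ({ actionId: a.actionId, digest: sha256(a.payload) })),
  };
  const manifestJson = JSON.stringify(manifest);
  return {
    bundleId: randomUUID(),
    schemaVersion: "v1.0",
    manifest,
    manifestHash: sha256(manifestJson), // top-level digest persisted with the invoice record
  };
}

// Verification later re-serializes the stored manifest and recomputes this
// digest; any altered action payload or count yields a different hash.
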
Generate Tamper‑Evident Bundle per Invoice Batch
Given an invoice with multiple line items and batch bundling selected When the invoice is finalized Then the system generates one batch bundle scoped to the invoice containing: SHA-256 digest of the batch manifest, ISO 8601 UTC createdAt, total actionCount across all included lines, and invoiceId And the batch manifest enumerates each lineItemId with its actionIds and individual SHA-256 digests And the batch actionCount equals the sum of the per-line action counts And the batch bundle passes schema validation v1.0 and is persisted with a unique bundleId
Embed VerifyLink URL and QR in PDF and Email
Given an invoice PDF and email are generated When bundles (line or batch) exist Then each relevant section includes a human-readable VerifyLink URL (HTTPS) containing the bundleId And a QR code is rendered that encodes the same URL and scans successfully on iOS and Android default camera apps And clicking/tapping the URL or scanning the QR opens the verification page in the default browser And the PDF hyperlink target is accurate and not broken for all included bundles And the email link uses absolute HTTPS and passes email client click tests (Gmail, Outlook web/desktop)
Verification Page Integrity Check and Status Display
Given a user opens a VerifyLink for a bundle When the page loads Then the system recomputes the manifest and item hashes from stored immutable proofs and compares to the stored top-level hash And if all hashes match and counts align, the status displays as Valid (green) with lastVerifiedAt timestamp (ISO 8601 UTC) And if any mismatch occurs, the status displays as Invalid (red) with a clear description of the first failing element (e.g., actionId, expected vs actual hash) And if some underlying records are missing but others verify, the status displays as Partial (amber) with counts of verified vs expected And the page shows line-to-proof mapping (invoiceId, lineItemId or batch, actionCounts) and the bundle schema version And server responds within 2 seconds p95 for bundles up to 10,000 actions
Immutable Storage of Proofs
Given proofs and manifests are written to storage When a write is committed Then the content-addressed object key equals the SHA-256 of the content And subsequent writes of different content generate different keys; existing objects are not mutated And any attempt to overwrite an existing key is rejected and logged And retention policy is configured for a minimum of 7 years And a GET by key returns byte-identical content for integrity re-check
Auditor API and CSV Export
Given an authenticated auditor with read scope requests exports When requesting API JSON for a bundle or invoice Then the response includes bundleId, scope(line|batch), invoiceId, lineItemId (nullable), manifestHash, actionCount, createdAt, schemaVersion, status (Valid|Invalid|Partial|Unverified), and proofObjectKey And when requesting CSV export for an invoice, the file includes one row per bundle with the same fields and a downloadable URL to the manifest And API supports filtering by invoiceId and createdAt range and paginates with limit/nextCursor And CSV generation completes within 60 seconds for invoices up to 100 bundles And all endpoints require HTTPS and token-based authentication; unauthorized requests return 401
Verification Event Logging and Audit Trail
Given any verification occurs via VerifyLink or API When the integrity check completes Then an audit log entry is recorded with timestamp (ISO 8601 UTC), bundleId, verifierType (web|api), result (Valid|Invalid|Partial), requestIp, userAgent hash, and referer (if web) And repeated verifications aggregate counts per day while preserving raw entries And audit logs are queryable by bundleId and date range and exportable to CSV And tamper detection events trigger a high-severity log with diff details of mismatched hashes And log entries are immutable once written
PDF/CSV Export and Scheduled Sending
"As an operations lead, I want to schedule and send invoices automatically so that billing goes out on time with minimal manual work."
Description

Support one-click export to PDF and CSV with consistent file naming, draft watermarks, and inclusion of all metadata (PO, notes, custom fields, tax breakdown, VerifyLink). Provide email delivery with per-organization templates, multiple recipients (To/CC/BCC), and attachment options (PDF and CSV). Allow scheduling initial sends and automated reminders with timezone awareness, and track delivery (bounces, opens, link clicks) with resend history. Preserve sent artifacts for audit.

Acceptance Criteria
PDF Export Content and Draft Watermark
Given an invoice in Draft status with PO, notes, custom fields, tax breakdown, and VerifyLink references, When the user clicks Export > PDF, Then the generated PDF contains all invoice fields including PO, notes, each custom field, per-line items with unit rate and unit count, tax/VAT breakdown by rate, totals, and VerifyLink URLs/QR codes for each applicable line or batch. Given an invoice in Draft status, When exporting to PDF, Then a semi-transparent DRAFT watermark appears on every page; And When the invoice status is Final/Sent/Paid, Then no watermark is present. Given a multi-page invoice, When exported to PDF, Then page numbers and repeated header/footer with organization branding appear on all pages. Given a localized currency, When exported to PDF, Then currency symbol/code and number formatting match the invoice settings.
CSV Export Content and Structure
Given an invoice with header metadata and line items, When the user clicks Export > CSV, Then the file is RFC 4180–compliant, UTF-8 encoded, and includes a single header row. Then the CSV includes columns: invoice_id, invoice_number, org_id, org_name, campaign_id, campaign_name, status, issue_date, due_date, currency, po_number, notes, custom_fields.* (one column per key), line_number, item_name, unit_rate, unit_count, line_subtotal, tax_rate, tax_amount, line_total, verifylink_url, totals_subtotal, totals_tax, totals_total. Given any field values containing commas, quotes, or newlines, When exported, Then they are properly quoted and escaped per RFC 4180. Given numeric fields, When exported, Then unit_rate, tax_rate, tax_amount, and totals use a dot as decimal separator with two fractional digits. Given no custom fields exist on the invoice, When exported, Then no custom_fields.* columns are present.
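The escaping and numeric-formatting rules above reduce to a small helper like this sketch (the column list is trimmed for brevity; values are made up):

function csvField(value: string | number | null): string {
  const s = value === null ? "" : String(value);
  // Quote fields containing commas, quotes, or newlines; double embedded quotes (RFC 4180).
  return /[",\r\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
}

function csvRow(values: (string | number | null)[]): string {
  return values.map(csvField).join(",");
}

const money = (n: number): string => n.toFixed(2); // dot decimal separator, two fractional digits

const header = csvRow(["invoice_number", "po_number", "notes", "unit_rate", "line_total"]);
const row = csvRow(["INV-2025-014", "PO-88321", 'Notes with "quotes", commas\nand a newline', money(1.25), money(312.5)]);
console.log([header, row].join("\r\n")); // RFC 4180 rows are separated by CRLF
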
Consistent File Naming Scheme
Given any export of an invoice, When generating a file name, Then it matches the pattern ^[a-z0-9-]+_inv-[A-Z0-9-]+_[0-9]{8}T[0-9]{6}Z_(draft|final)\.(pdf|csv)$ and includes orgSlug, invoiceNumber, UTC timestamp, status token, and correct extension. Given a Draft invoice, When exported, Then the status token is "draft"; Given a Final/Sent/Paid invoice, Then the status token is "final". Given multiple exports within the same minute, When naming files, Then timestamps include seconds to avoid collisions. Given files attached to scheduled emails, When sent, Then attachment file names follow the same naming convention as on-demand exports.
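A file-name builder that satisfies the quoted pattern might look like this; the org slug and invoice number in the example are made up.

function exportFileName(orgSlug: string, invoiceNumber: string, isDraft: boolean, ext: "pdf" | "csv", now = new Date()): string {
  // "2025-08-07T14:22:33.123Z" -> "20250807T142233Z" (UTC, second precision avoids same-minute collisions)
  const ts = now.toISOString().replace(/[-:]/g, "").slice(0, 15) + "Z";
  const status = isDraft ? "draft" : "final";
  return `${orgSlug}_inv-${invoiceNumber}_${ts}_${status}.${ext}`;
}

// e.g. "rallykit-demo_inv-INV-2025-014_20250807T142233Z_draft.pdf", which matches
// ^[a-z0-9-]+_inv-[A-Z0-9-]+_[0-9]{8}T[0-9]{6}Z_(draft|final)\.(pdf|csv)$
console.log(exportFileName("rallykit-demo", "INV-2025-014", true, "pdf"));
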
Email Delivery With Templates, Recipients, and Attachments
Given an organization with a selected email template containing merge tags, When composing a send for an invoice, Then the default subject and body populate from the organization template and all merge tags resolve using invoice and organization data; If any merge tag cannot resolve, sending is blocked with a validation error identifying the missing field. Given recipient addresses entered in To, CC, and BCC, When sending, Then addresses are deduplicated across fields, validated to RFC 5322 format, and any invalid address blocks send with a field-level error. Given attachment options, When the user selects PDF and/or CSV, Then the chosen files are generated from the current invoice state and attached; At least one attachment is required, with PDF preselected by default. Given a successful send, When completed, Then the UI displays a confirmation including provider message ID, recipients grouped by To/CC/BCC, and the attachment list.
Scheduling Initial Send and Automated Reminders With Timezone Awareness
Given a user schedules the initial send for a specific local date/time and timezone, When saved, Then the system stores and displays both the local time and the computed UTC time; When the scheduled time occurs, Then the email is sent at the intended local wall time accounting for DST. Given reminder settings of a defined cadence (e.g., every N days) and a maximum count, When the invoice remains Unpaid, Then reminders are sent at the configured cadence up to the max; When the invoice is marked Paid or Voided before the next reminder, Then no further reminders are sent. Given a scheduled time in the past, When saving, Then the user must choose to send immediately or reschedule; If send immediately is chosen, Then the email is dispatched within 2 minutes. Given an existing schedule, When the timezone is changed, Then all future occurrences are recalculated to occur at the same local wall time in the new timezone.
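Honoring "same local wall time across DST" with only built-in APIs can be done with a fixed-point conversion like the sketch below; a production scheduler would more likely use a timezone library, so treat this as illustrative.

// Resolve a wall-clock time in an IANA timezone to the UTC instant it denotes.
function zonedTimeToUtc(y: number, mo: number, d: number, h: number, mi: number, timeZone: string): Date {
  const dtf = new Intl.DateTimeFormat("en-US", {
    timeZone, hour12: false,
    year: "numeric", month: "2-digit", day: "2-digit",
    hour: "2-digit", minute: "2-digit", second: "2-digit",
  });
  const wallTimeInZone = (utcMillis: number): number => {
    const p = Object.fromEntries(dtf.formatToParts(new Date(utcMillis)).map(x => [x.type, x.value] as [string, string]));
    return Date.UTC(+p.year, +p.month - 1, +p.day, (+p.hour) % 24, +p.minute, +p.second);
  };
  const target = Date.UTC(y, mo - 1, d, h, mi, 0);
  let guess = target;
  for (let i = 0; i < 2; i++) guess = target - (wallTimeInZone(guess) - guess); // converges for real offsets
  return new Date(guess);
}

// US DST ends 2025-11-02; both sends still fire at 09:00 local wall time in New York.
console.log(zonedTimeToUtc(2025, 11, 1, 9, 0, "America/New_York").toISOString()); // 2025-11-01T13:00:00.000Z (EDT)
console.log(zonedTimeToUtc(2025, 11, 3, 9, 0, "America/New_York").toISOString()); // 2025-11-03T14:00:00.000Z (EST)
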
Delivery Tracking and Resend History
Given an email send, When delivery events occur, Then the system records per-recipient events for delivered, bounced (with SMTP code and provider reason), opened (via tracking pixel), and link_clicked (via tracked redirect) with UTC timestamps. Given a bounced recipient, When a user clicks Resend, Then a new send is created without overwriting prior events, allowing the address to be edited, and the resend is linked to the original in history. Given any sent invoice, When viewing the invoice communication log, Then the user sees a timeline of sends with message ID, sender, recipients by To/CC/BCC, subject, attachments, and aggregated event counts (delivered, opens, clicks, bounces). Given tracking is disabled by a recipient's mail client, When no open is recorded, Then delivery still reflects delivered if confirmed by the provider and no open event is inferred.
Audit Preservation of Sent Artifacts
Given any executed send with attachments, When the send completes, Then the system stores an immutable audit bundle containing the exact rendered subject and HTML/plaintext body, recipients grouped by To/CC/BCC, attachments (PDF/CSV), SHA-256 hash for each artifact, provider message ID, and the delivery event log. Given an audit bundle, When later downloaded, Then each artifact's computed hash matches the stored hash and all VerifyLink references in attached documents remain present. Given access to audit artifacts, When a user with read permissions views them, Then they can view or download but cannot edit or delete; All access is logged with user, timestamp, and action. Given templates or organization settings change after sending, When viewing an older audit bundle, Then the bundle shows the original rendered content as sent, unchanged.
Accounting Sync and Reconciliation Mapping
"As a bookkeeper, I want invoices and references to sync to our accounting system so that reconciliation takes minutes instead of hours."
Description

Integrate with accounting systems (e.g., QuickBooks Online, Xero) via OAuth to push invoices, customers, line items, and tax codes, and to pull payment status updates. Provide mapping of RallyKit rate card items to accounting products/services and of tax profiles to tax codes. Store external IDs for round-trip sync, handle duplicates/conflicts, and support dry-run validation. Sync payments and status changes (paid/partial/void) back to RallyKit and present a reconciliation view that matches POs, totals, and dates to cut reconciliation time to minutes.

Acceptance Criteria
OAuth Connection and Tenant Selection to Accounting System
Given an org admin selects QuickBooks Online or Xero and clicks Connect When the OAuth flow completes and the admin selects a company/tenant Then RallyKit stores the access and refresh tokens encrypted and persists the tenant ID And the connection status is shown as Connected with the provider and tenant name And a connection test endpoint returns 200 within 2 seconds And when tokens expire, RallyKit refreshes tokens automatically before the next API call And if the app is revoked at the provider, the next sync attempt marks the connection Disconnected and surfaces a clear, actionable error within 5 seconds of detection
Rate Card and Tax Profile Mapping Validation
Given RallyKit rate card items and tax profiles exist and provider products/services and tax codes are fetched When an admin opens the Accounting Mapping screen Then each RallyKit rate card item must be mapped to exactly one provider product/service And each RallyKit tax profile must be mapped to exactly one active provider tax code And saving is blocked if any required mappings are missing, with a visible count of missing mappings And unmapped items are clearly flagged and filterable And a Preview Mapping action validates that all mappings resolve to active provider records and returns Ready if no issues, otherwise lists blocking errors and warnings
Idempotent Invoice Push with External IDs
Given an invoice in RallyKit with a mapped customer, mapped rate card items, and mapped tax profiles When the user clicks Sync Now to push to the connected accounting system Then RallyKit upserts the Customer and stores the external customer ID on the RallyKit customer record And RallyKit creates or updates the Invoice with line items and tax codes and stores the external invoice ID on the RallyKit invoice And an idempotency key is used so re-running the push within 24 hours does not create duplicates; it updates the same external invoice if data changed And if the provider rejects due to a duplicate invoice number, RallyKit offers configured resolution options (e.g., prefix number, overwrite, cancel) and records the chosen action in an audit log And the sum of line items and taxes in the provider matches the RallyKit invoice total within 0.01 of the invoice currency And the sync result shows counts of created/updated customers, invoices, and line items
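An idempotent-upsert sketch along the lines described above, using a deterministic key derived from the invoice id and payload; the provider client is a stub, not the QuickBooks or Xero SDK, and the in-memory maps stand in for persisted external-ID and idempotency tables.

import { createHash } from "node:crypto";

interface PushResult { externalInvoiceId: string; created: boolean }

const externalIdByInvoice = new Map<string, string>();      // RallyKit invoice id -> provider invoice id
const seenIdempotencyKeys = new Map<string, PushResult>();  // replay cache

function idempotencyKey(invoiceId: string, payload: unknown): string {
  return createHash("sha256").update(invoiceId + JSON.stringify(payload)).digest("hex");
}

async function pushInvoice(
  invoiceId: string,
  payload: unknown,
  provider: { create(p: unknown): Promise<string>; update(id: string, p: unknown): Promise<void> },
): Promise<PushResult> {
  const key = idempotencyKey(invoiceId, payload);
  const cached = seenIdempotencyKeys.get(key);
  if (cached) return cached; // identical re-run: no duplicate created

  const existingId = externalIdByInvoice.get(invoiceId);
  let result: PushResult;
  if (existingId) {
    await provider.update(existingId, payload); // data changed: update the same external invoice
    result = { externalInvoiceId: existingId, created: false };
  } else {
    const newId = await provider.create(payload);
    externalIdByInvoice.set(invoiceId, newId); // stored external ID enables round-trip sync
    result = { externalInvoiceId: newId, created: true };
  }
  seenIdempotencyKeys.set(key, result);
  return result;
}
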
Payment Status Sync to RallyKit
Given external payments or voids are recorded in the connected accounting system for previously synced invoices When a scheduled sync runs every 15 minutes or a provider webhook is received Then RallyKit updates each affected invoice status to Paid, Partial, or Void to match the provider And partial payments accumulate until the total paid equals the invoice total, at which point status becomes Paid And payment amount, date, and external reference are stored on the RallyKit invoice payment records And a timeline entry is created per change with source Accounting, timestamp, and the delta applied And voided invoices are locked from further pushes unless a user with permissions overrides with a forced resync
Reconciliation View Matching POs, Totals, and Dates
Given invoices have been synced and payment statuses pulled back When a user opens the Reconciliation view for a selectable date range Then each row shows RallyKit vs Accounting values for PO/reference, totals, dates, and status And rows are auto-matched when PO/reference matches and totals match within 0.01 and invoice dates are within ±3 days And unmatched rows are flagged with a categorical reason (PO mismatch, total variance, date variance, status variance, missing in provider, missing in RallyKit) And the user can filter to Unmatched, open a row to see discrepancies, edit mappings, and trigger a resync without leaving the view And the view exports a CSV/PDF including a summary of matched vs unmatched counts and variance totals And on a baseline dataset of 100 invoices, the median time from opening the view to exporting a resolved report (≥95% matched) is ≤ 3 minutes
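The auto-match rule above (PO equality, totals within 0.01, dates within ±3 days) is small enough to sketch directly; the mismatch categories shown are a subset of those listed in the criteria.

interface InvoiceSnapshot { poNumber: string; total: number; invoiceDate: string } // ISO date

type MatchResult =
  | { matched: true }
  | { matched: false; reason: "PO mismatch" | "total variance" | "date variance" };

function autoMatch(local: InvoiceSnapshot, remote: InvoiceSnapshot): MatchResult {
  if (local.poNumber !== remote.poNumber) return { matched: false, reason: "PO mismatch" };
  if (Math.abs(local.total - remote.total) > 0.01) return { matched: false, reason: "total variance" };
  const dayMs = 24 * 60 * 60 * 1000;
  const diffDays = Math.abs(Date.parse(local.invoiceDate) - Date.parse(remote.invoiceDate)) / dayMs;
  if (diffDays > 3) return { matched: false, reason: "date variance" };
  return { matched: true };
}
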
Dry-Run Sync Validation and Reporting
Given required mappings are saved and a connection is active When a user runs a Dry-Run for a selected date range or campaign Then no records are created or updated in the provider; all operations run in validation/preview mode only And the report lists counts of customers, invoices, line items to be created/updated, with line-level references and any blocking errors or warnings And each error clearly references the RallyKit record and the provider entity/type that would be affected And the dry-run completes within 60 seconds for up to 500 invoices and surfaces a progress indicator And the user can download the dry-run report as CSV and access a shareable link in RallyKit audit logs

Partner Wallets

Offer prepaid balances with auto top‑ups, threshold alerts, and refund/credit handling. Assign wallets to partners or coalitions, set allowed spend categories, and enable shared wallets across campaigns. Small partners gain predictability, while organizers prevent dunning interruptions during peak moments.

Requirements

Wallet Creation & Partner Assignment
"As an org admin, I want to create and assign prepaid wallets to partners or coalitions so that they can fund campaign actions predictably without per-campaign billing setup."
Description

Enable admins to create prepaid wallets with defined currency, starting balance, and ownership, then assign them to individual partners or coalitions. Wallets can be linked as the default funding source on selected campaigns and scoped by org-level permissions. Provide CRUD management, visibility settings, and real-time balance display in the RallyKit dashboard. Integrates with campaign action execution to validate available funds before spending. Ensures small partners can onboard quickly and launch without per-campaign billing setup.

Acceptance Criteria
Admin creates prepaid wallet with ownership assignment
Given I am an org admin with Wallets:Create permission When I open the Create Wallet form Then I can enter Wallet Name (required), Currency (required from supported list), Starting Balance (>= 0), Ownership Type (Partner or Coalition), and select an Owner (required) Given all inputs are valid When I submit the form Then a wallet is created with a unique ID, selected currency, starting balance set as current balance, selected owner, creator ID, and an audit log entry is recorded Given the Wallet Name duplicates an existing wallet for the same owner When I submit the form Then I see "Name must be unique per owner" and the wallet is not created Given Starting Balance is negative or non-numeric When I attempt to submit Then inline validation prevents submission and displays an error message Given the wallet is successfully created When I view the Wallets dashboard Then the new wallet appears under the owner's scope within 2 seconds
Assign wallet as default funding source on a campaign
Given I have Manage Campaigns permission and Use Wallet permission on Wallet W When I open Campaign C settings Then I can select Wallet W as the Default Funding Source Given Wallet W and Campaign C are within the same authorized org or coalition scope When I save Campaign C settings Then Campaign C stores Wallet W as its default funding source and displays W's currency and current balance Given Wallet W is already the default on Campaign C When I change the default to Wallet W2 Then W is unlinked, W2 is linked, and the change is recorded in the audit log Given I lack Use Wallet permission on Wallet W When I attempt to link W to Campaign C Then the UI disables the option and the API returns 403 Forbidden Given Campaign C has a default wallet When an action under Campaign C is executed Then the system uses the default wallet for pre-spend validation
Pre-spend validation and atomic deduction on action execution
Given Campaign C has default Wallet W and the action cost is X When a supporter initiates an action Then the system checks that W.balance >= X before processing Given W.balance >= X When the action is processed Then X is reserved atomically, the action completes, a transaction record is created with amount X and a reference to the action, and W.balance decreases by X immediately Given N concurrent actions where N*X > current W.balance When they execute Then at most floor(W.balance / X) actions succeed, the remainder fail with "Insufficient wallet funds", and W.balance never drops below 0 Given an amount X was reserved but downstream processing fails When the action is rolled back Then the reserved amount is released and W.balance is restored within 5 seconds, and a voided transaction record is logged Given W.balance < X When a supporter attempts the action Then the action is blocked, the API returns a non-2xx error, and the user sees "Insufficient wallet funds" with no deduction applied
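One way to satisfy the "never below zero, even under concurrency" rule above is a single conditional UPDATE inside a transaction, sketched here assuming a PostgreSQL-backed ledger and the node-postgres client; the table and column names are illustrative.

import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the standard PG* environment variables

async function reserveSpend(walletId: string, actionId: string, amount: number): Promise<boolean> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // Conditional decrement: the row is only updated when funds suffice, so the
    // balance cannot go negative even when many actions race.
    const updated = await client.query(
      "UPDATE wallets SET balance = balance - $2 WHERE id = $1 AND balance >= $2",
      [walletId, amount],
    );
    if (updated.rowCount === 0) {
      await client.query("ROLLBACK");
      return false; // insufficient wallet funds; nothing was deducted
    }
    await client.query(
      "INSERT INTO wallet_transactions (wallet_id, action_id, amount, kind) VALUES ($1, $2, $3, 'reservation')",
      [walletId, actionId, amount],
    );
    await client.query("COMMIT");
    return true;
  } catch (err) {
    await client.query("ROLLBACK"); // failed downstream processing releases the reservation
    throw err;
  } finally {
    client.release();
  }
}
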
Org-level permissions and visibility scoping
Given a user with Org Admin role When accessing Wallets Then they can create, read, update, and archive wallets owned by their org and coalition wallets they administer Given a user with Campaign Manager role and Use Wallet access to Wallet W When configuring a campaign they manage Then they can link W as the default funding source but cannot edit W's properties Given a user without access to Wallet W When listing wallets in the UI or via API Then W is not returned in listings and direct access returns 403 Forbidden Given a coalition-owned wallet W shared with Member Org M When a user from M views wallets Then they can view W and its balance but cannot view wallets not shared with M Given Use Wallet access is revoked from a user When they refresh the UI or call the API Then linking options for W disappear immediately and subsequent calls receive 403 Forbidden
Wallet updates, archiving, and audit logging
Given Wallet W When I edit Wallet Name or Visibility settings and save Then changes persist, are reflected in the dashboard within 2 seconds, and an audit log entry captures the before/after values Given Wallet W has no transactions When I change W.currency and save Then the change is accepted and audit logged Given Wallet W has one or more transactions When I attempt to change W.currency Then the change is blocked with a clear error "Currency cannot be changed after transactions exist" Given Wallet W has zero balance and is not set as default on any active campaign When I archive W Then W moves to Archived state, is removed from selectable lists, and an audit entry records the action Given Wallet W has a non-zero balance or is linked as default on an active campaign When I attempt to archive W Then the action is blocked with an error explaining the constraint
Real-time wallet balance display in dashboard
Given Wallet W exists When I open the Wallets dashboard Then W displays current balance with correct currency formatting, owner, and visibility status Given a transaction deducts X from W When I keep the dashboard open Then W's displayed balance updates within 2 seconds without manual refresh Given high-frequency transactions (>= 10 per minute) on W When I observe the dashboard for 1 minute Then the displayed balance never lags actual balance by more than 2 seconds and equals the sum of starting balance minus posted transactions Given the live updates connection is interrupted When the connection is restored Then the dashboard re-syncs the balance within 5 seconds and indicates that it has reconnected
Auto Top-up Rules Engine
"As a finance admin, I want wallet balances to auto top-up when they drop below a threshold so that campaigns don’t halt during peak supporter activity."
Description

Provide configurable auto top-up rules per wallet: low-balance threshold, top-up amount, funding method, daily/weekly caps, and retry/backoff on payment failures. Support pause/resume, test charges, and configurable grace behavior to prevent dunning interruptions during spikes. On threshold breach, initiate a tokenized payment, update balance atomically, and emit events for notifications and reporting. Integrates with existing billing providers and respects org permissions and audit logging.
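
A minimal sketch of the trigger path, assuming an in-memory idempotency set and a `charge` callable standing in for the tokenized billing-provider call; in production the retries would run as scheduled jobs rather than blocking sleeps, and the idempotency keys would live in durable storage.

```python
# Threshold-breach trigger with idempotency and a capped retry/backoff schedule.
import time
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TopUpRule:
    low_balance_threshold: int                       # cents
    top_up_amount: int                               # cents
    backoff_seconds: tuple = (60, 300, 900)          # retry schedule for transient failures
    _seen_keys: set = field(default_factory=set)     # stand-in for a durable idempotency table

    def maybe_top_up(self, wallet_id: str, balance: int, breach_id: str,
                     charge: Callable[[str, int, str], bool]) -> bool:
        if balance > self.low_balance_threshold:
            return False
        idempotency_key = f"{wallet_id}:{breach_id}"  # same breach => at most one top-up
        if idempotency_key in self._seen_keys:
            return False
        self._seen_keys.add(idempotency_key)
        for delay in (0,) + tuple(self.backoff_seconds):
            if delay:
                time.sleep(delay)                     # illustrative; real retries are scheduled, not slept
            if charge(wallet_id, self.top_up_amount, idempotency_key):
                return True
        return False

rule = TopUpRule(low_balance_threshold=500, top_up_amount=2_000)
print(rule.maybe_top_up("w1", balance=450, breach_id="2025-06-01T10:00",
                        charge=lambda wallet, amount, key: True))   # True: exactly one charge initiated
```
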

Acceptance Criteria
Threshold Breach Triggers Auto Top-up
Given a wallet with low_balance_threshold and top_up_amount configured and a valid tokenized funding_method When the wallet balance falls to or below the threshold due to spend Then initiate a tokenized payment for the exact top_up_amount using the configured funding_method within 10 seconds And apply the charge capture and balance update atomically (either both succeed or neither) And enforce idempotency so concurrent threshold detections result in at most one top-up And the resulting wallet balance increases by exactly the top_up_amount
Daily and Weekly Caps Enforcement
Given daily_cap and weekly_cap values are configured for the wallet When an auto top-up would cause total auto top-up amount to exceed the remaining daily or weekly cap Then do not initiate a payment And record the attempt as blocked_by_cap with remaining cap values And emit a cap_exceeded event to the notifications/reporting bus And the wallet balance remains unchanged And the next auto top-up may only occur after the relevant cap window resets
Payment Failure Retry with Configurable Backoff
Given a retry_policy with max_attempts and backoff intervals is configured for the wallet When a top-up attempt fails with a transient error per provider classification Then schedule retries according to the configured backoff intervals until max_attempts is reached And stop retrying immediately on permanent (non-retryable) errors And each attempt uses a unique attempt_id and the same idempotency_key to prevent duplicate charges And emit attempt_started, attempt_failed, retry_scheduled, and final_failed or succeeded events for observability And no duplicate charges are created across attempts
Pause and Resume Auto Top-ups
Given auto top-ups are paused for the wallet When the balance falls below the threshold Then do not initiate any auto top-up And record a suppressed_by_pause entry and emit a paused_state_breach event And when auto top-ups are resumed Then subsequent threshold breaches trigger top-ups per the current rules And no retroactive top-ups are executed for breaches that occurred during the paused period
Funding Method Selection and Test Charge Verification
Given a wallet rule with a selected funding_method backed by an approved billing provider When enabling auto top-ups or changing the funding_method Then perform a provider-supported non-settling test authorization to verify the token And mark the funding_method as verified on success and verification_failed on failure And block auto top-ups until verification succeeds And ensure all top-ups use the configured provider token (no raw PAN stored) And allow funding_method selection only from providers enabled for the org
Grace Behavior During Traffic Spikes
Given grace_behavior is enabled with a configured grace_limit and in_flight_window When a threshold breach occurs while an auto top-up is in progress due to a traffic spike Then allow eligible actions to proceed without dunning interruptions up to the grace_limit or until the in_flight_window elapses And reconcile the wallet by applying the pending top-up to cover in-flight actions upon success And if the top-up ultimately fails, enforce the configured post-failure behavior (queue, deny, or soft-negative up to limit) and emit events And end users do not encounter payment/dunning errors during the grace window
Audit Logging, Permissions, and Event Emission
Given org permissions and audit logging are required When a user creates, updates, pauses, resumes, or deletes auto top-up rules, or a top-up attempt occurs Then enforce that only users with billing.manage permission in the org can mutate rules (others receive 403 and no change) And write an immutable audit log entry for each action and attempt including timestamp, actor_id, org_id, wallet_id, rule_id, attempt_id, previous_values, new_values, result, and provider_response_code where applicable And emit structured events (threshold_breached, topup_attempted, topup_succeeded, topup_failed, cap_blocked, paused, resumed) within 1 second of occurrence with idempotency_key and correlation_id And make audit records and events available to reporting within 60 seconds of write
Spend Category Controls
"As a coalition lead, I want to constrain each wallet to specific spend categories and caps so that funds are used only for approved actions."
Description

Allow admins to define allowed spend categories (e.g., calls, emails, texts, data enrichment) and optional per-category caps for each wallet. During action execution, enforce category checks and block or warn when limits are exceeded. Provide overrides with approver workflow and detailed error messages back to campaign tools. Surface category-level usage charts to show burn rate and remaining capacity per category.
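
One way the enforcement check could look; the `CategoryPolicy` shape and warn-mode behaviour are assumptions, though the error codes mirror those used in the acceptance criteria below.

```python
# Category allow-list and per-category cap enforcement with block or warn policies.
from dataclasses import dataclass, field

@dataclass
class CategoryPolicy:
    allowed: set[str]
    caps: dict[str, float] = field(default_factory=dict)    # category -> cap for the period
    spend: dict[str, float] = field(default_factory=dict)   # category -> spend so far
    warn_on_exceed: bool = False

    def authorize(self, category: str, cost: float) -> dict:
        if category not in self.allowed:
            return {"ok": False, "code": "category_not_allowed", "category": category}
        cap = self.caps.get(category)
        current = self.spend.get(category, 0.0)
        if cap is not None and current + cost > cap:
            detail = {"current_spend": current, "attempted_spend": cost,
                      "cap": cap, "available": max(cap - current, 0.0)}
            if not self.warn_on_exceed:
                return {"ok": False, "code": "cap_exceeded", **detail}
            self.spend[category] = current + cost            # warn mode: authorize, surface the overage
            return {"ok": True, "code": "cap_exceeded_warning",
                    "overage": current + cost - cap, **detail}
        self.spend[category] = current + cost
        return {"ok": True}

policy = CategoryPolicy(allowed={"Calls", "Emails"}, caps={"Calls": 500.0}, spend={"Calls": 480.0})
print(policy.authorize("Calls", 30.0))   # blocked: cap_exceeded, available 20.0
print(policy.authorize("Texts", 5.0))    # blocked: category_not_allowed
```
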

Acceptance Criteria
Enforce Allowed Category at Action Execution
Given a wallet with allowed spend categories ["Calls", "Emails"] And a campaign action is initiated in category "Texts" with a non-zero cost When the action requests wallet authorization Then the system denies the authorization And returns error code "category_not_allowed" And includes wallet_id, partner_id, campaign_id, category, cost, and timestamp in the audit log And no funds are reserved or deducted
Enforce Per-Category Hard Cap (Block on Exceed)
Given a wallet with a "Calls" cap of 500.00 for the current budget period And recorded "Calls" spend this period is 480.00 When a new "Calls" action attempts to spend 30.00 Then the system blocks the action And returns error code "cap_exceeded" with current_spend=480.00, attempted_spend=30.00, cap=500.00, available=20.00 And no funds are deducted And a "cap_exceeded" event is emitted with correlation_id
Warn on Cap Exceed When Policy Is Warning
Given a wallet with category "Emails" cap of 100.00 and enforcement policy "warn_on_exceed" And recorded "Emails" spend this period is 95.00 When a new "Emails" action attempts to spend 10.00 Then the system authorizes the action And returns warning code "cap_exceeded_warning" with pre_spend=95.00, attempted_spend=10.00, cap=100.00, post_spend=105.00, overage=5.00 And a notification is sent to wallet admins within 2 minutes And the overage is reflected in category usage metrics
Override Workflow for Exceeded Category Cap
Given an action is blocked with error code "cap_exceeded" for category "Texts" And the actor has permission to request an override When the actor submits an override request with justification and desired amount Then an approver sees the request in the Approvals queue within 2 minutes And upon approval, the system issues an override_token scoped to wallet_id, category, and a max_override_amount, expiring in 15 minutes And when the campaign tool retries the action with a valid override_token within the token window Then the system authorizes the action up to the approved amount and records override_id and approver_id in the transaction log And upon rejection, the action remains blocked and the rejection reason is stored and returned
Structured Error Response to Campaign Tools
Given an action is blocked due to category policy or cap When the API responds to the campaign tool Then the response includes a JSON error body with fields: code, message, category, wallet_id, correlation_id, current_spend, cap, attempted_spend, available, resolution_hint And code is one of ["category_not_allowed", "cap_exceeded", "policy_violation"] And correlation_id matches the audit/event logs And the API response is returned within 500 ms at p95 under normal load
Category-Level Usage Charts Show Burn and Remaining
Given a wallet with category-level spend and optional caps When a user views the wallet analytics for the current period Then each active category displays total_spend, cap (if any), remaining_capacity, and burn_rate (avg daily spend over the last 7 days) And a forecasted_depletion_date is shown when cap exists and burn_rate > 0 And metrics reflect new actions within 2 minutes of completion And users can filter charts by time range (this period, last 7 days, custom)
Shared Wallets Aggregate Category Enforcement Across Campaigns
Given a wallet shared by multiple campaigns with a category "Calls" cap of 500.00 And Campaign A has spent 450.00 in "Calls" this period When Campaign B attempts a "Calls" action costing 60.00 Then the system aggregates category spend across all campaigns And blocks the action with error code "cap_exceeded" and current_spend=450.00, attempted_spend=60.00, cap=500.00, available=50.00 And category usage charts display aggregated spend across campaigns
Shared Wallets & Campaign Allocation
"As a campaign director, I want to share a wallet across several campaigns with configurable allocations so that we can centrally manage budgets while protecting critical efforts."
Description

Enable a single wallet to fund multiple campaigns with configurable allocations (fixed amounts or percentage splits) and optional hard caps per campaign. Apply concurrency-safe deductions with priority rules when simultaneous actions occur. Provide fallback behavior when a campaign allocation is exhausted (e.g., stop, use unallocated pool, or escalate for approval). Display per-campaign consumption and remaining allocation in real time.
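
A sketch of the deduction path for a shared wallet, assuming a single in-process lock in place of the row-level locking or serializable transaction a real backend would use; priority ordering of racing charges is assumed to happen in the queue that feeds this call, and the allocation/fallback names are illustrative.

```python
# Concurrency-safe shared-wallet deductions with per-campaign allocations and an
# unallocated-pool fallback; the caller applies stop/escalate behaviour on exhaustion.
import threading
from dataclasses import dataclass, field

@dataclass
class SharedWallet:
    allocations: dict[str, float]                               # campaign_id -> remaining allocation
    unallocated: float = 0.0
    use_unallocated: set[str] = field(default_factory=set)      # campaigns allowed to overflow
    _lock: threading.Lock = field(default_factory=threading.Lock, repr=False)

    def charge(self, campaign_id: str, amount: float) -> str:
        with self._lock:                                        # serializes concurrent deductions
            remaining = self.allocations.get(campaign_id, 0.0)
            if remaining >= amount:
                self.allocations[campaign_id] = remaining - amount
                return "ok"
            shortfall = amount - remaining
            if campaign_id in self.use_unallocated and self.unallocated >= shortfall:
                self.allocations[campaign_id] = 0.0
                self.unallocated -= shortfall
                return "ok_used_unallocated"
            return "allocation_exhausted"                       # caller blocks or escalates

wallet = SharedWallet(allocations={"A": 200.0, "B": 300.0}, unallocated=500.0, use_unallocated={"B"})
print(wallet.charge("A", 200.0))   # "ok"
print(wallet.charge("A", 10.0))    # "allocation_exhausted" (Stop fallback -> caller blocks the action)
print(wallet.charge("B", 310.0))   # "ok_used_unallocated" ($300 allocation + $10 from the pool)
```
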

Acceptance Criteria
Percentage Allocations Deduct From Correct Buckets
Given a shared wallet with balance $1,000 and percentage allocations: Campaign A = 50%, Campaign B = 30%, Campaign C = 20%, with no hard caps and no fallback When Campaign B incurs a $90 charge and Campaign C incurs a $50 charge Then the wallet balance decreases to $860 And Campaign A remaining allocation remains $500 And Campaign B remaining allocation decreases from $300 to $210 And Campaign C remaining allocation decreases from $200 to $150 And the UI shows A: $0 spent / $500 remaining, B: $90 spent / $210 remaining, C: $50 spent / $150 remaining within 2 seconds of each charge posting
Fixed Amount Allocations With Hard Caps (Stop Fallback)
Given a shared wallet with balance $1,000 and fixed allocations: Campaign A = $200 (Hard Cap: Stop), Campaign B = $300 (Hard Cap: Stop), Unallocated = $500 When Campaign A incurs a $200 charge and subsequently attempts an additional $10 charge Then the $200 charge succeeds and A remaining allocation becomes $0 And the additional $10 charge is blocked with error "allocation exhausted" And the wallet balance becomes $800 And no funds are deducted from Unallocated or Campaign B And an audit log entry is created for the blocked charge with reason=AllocationExhausted and a correlation ID
Allocation Exhausted Uses Unallocated Pool
Given a shared wallet with balance $500 and fixed allocations: Campaign A = $200 (Soft Cap, Fallback: Use Unallocated), Campaign B = $200 (Soft Cap, Fallback: Use Unallocated), Unallocated = $100 When Campaign A incurs charges totaling $210 within 1 minute Then $200 is deducted from Campaign A's allocation and $10 is deducted from Unallocated And Unallocated remaining becomes $90 And Campaign A shows $210 spent / $0 allocation remaining with a "used unallocated pool" flag And an audit log records a fallback event linking both deductions with a single operation ID
Allocation Exhausted Escalates For Approval
Given a shared wallet with balance $100 and fixed allocation: Campaign A = $100 (Fallback: Escalate for Approval) and no Unallocated balance When Campaign A attempts a $20 charge after having spent $90 (remaining $10) Then the $20 charge is moved to PendingApproval with over-allocation amount = $10 And no funds are deducted and the action is paused And an approval request is sent to the designated approver within 60 seconds containing campaign, amount, shortfall, and reason And when the approver approves within 5 minutes and the wallet has ≥ $20 balance Then the $20 charge completes, the wallet balance decreases by $20, and the pending item is resolved And all state transitions (Requested, PendingApproval, Approved, Completed/Failed) are recorded in the audit log
Concurrency-Safe Deductions Respect Priority Rules
Given a shared wallet with balance $100 and two campaigns using the wallet: Campaign A (priority=1) and Campaign B (priority=2), both with Fallback: Stop And both campaigns submit a $60 charge concurrently (within the same millisecond) with unique operation IDs When the system processes the deductions Then exactly one charge succeeds per the priority rule (Campaign A) and deducts $60, leaving wallet balance $40 And the other charge fails with "insufficient funds" and no partial deduction occurs And no double-spend occurs; the final wallet balance is exactly $40 And the audit log shows a single wallet lock acquisition and ordered outcomes for both operation IDs
Real-Time Per-Campaign Consumption & Remaining Display
Given a shared wallet funding Campaigns A, B, and C and the wallet dashboard is open on the Consumption view When charges post to A ($25), then to B ($10), and a blocked charge for C occurs due to a cap Then the per-campaign Spent and Remaining widgets update within 2 seconds of each event And fallback usage badges and blocked-status tags render within 2 seconds And all displayed amounts match the ledger to the cent And if the websocket disconnects, the client retries and shows a Last Updated timestamp not older than 10 seconds
Threshold Alerts & Notifications
"As a partner admin, I want proactive alerts about wallet levels and top-up issues so that I can intervene before actions are interrupted."
Description

Offer configurable alerts for low balance, top-up success/failure, payment retries, and category cap nearing/exceeded. Support email and Slack notifications, daily digests, and escalation rules when repeated failures occur. Include deep links to add funds, edit rules, or pause a wallet. Respect role-based recipients and notification preferences at the partner or coalition level.
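
For illustration, an alert rule might be configured along these lines; every field name here is an assumption about the configuration surface, not RallyKit's schema.

```python
# One possible shape for a wallet alert rule: event type, threshold, channels,
# role-based recipients, deduplication window, and an escalation trigger.
from dataclasses import dataclass

@dataclass
class AlertRule:
    wallet_id: str
    event: str                                   # "low_balance", "topup_failed", "cap_nearing", ...
    threshold: float | None = None               # e.g. balance floor or percent of cap
    channels: tuple[str, ...] = ("email", "slack")
    recipients_role: str = "wallet_admin"        # resolved by role at send time, not raw emails
    dedupe_minutes: int = 15                     # suppress repeat alerts for the same crossing
    escalate_after_failures: int | None = 3      # escalate when this many consecutive failures occur

low_balance = AlertRule(wallet_id="w_123", event="low_balance", threshold=50.0)
print(low_balance)
```
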

Acceptance Criteria
Low Balance Alert Trigger and Delivery
Given a partner or coalition wallet with a configured low-balance threshold (amount or percentage) When the available balance falls to or below the threshold due to spend or fees Then send both Email and Slack alerts to authorized recipients within 60 seconds And the alert includes wallet name/ID, current balance, threshold value and type, timestamp (UTC and local), and deep links to Add Funds, Edit Threshold Rules, and Pause Wallet And notifications respect partner/coalition-level notification settings and per-user preferences; only users with roles authorized for Wallet Alerts receive the message And duplicate alerts for the same wallet/threshold crossing are suppressed for 15 minutes; a new alert is sent only after the balance recovers above threshold and crosses again And for shared wallets, the alert lists all associated campaigns by name
Auto Top-Up Success Notification
Given auto top-up is enabled for a wallet When a top-up transaction succeeds Then send a success notification via Email and Slack within 60 seconds to authorized recipients And the notification includes wallet name/ID, top-up amount, funding source nickname and last4, transaction ID, resulting balance, and deep links to Wallet Activity and Edit Top-Up Rules And delivery channels honor each recipient’s notification preferences (Email and/or Slack) And the event is recorded in the audit log with a link included in the notification
Auto Top-Up Failure, Retries, and Escalation
Given auto top-up is enabled for a wallet When a top-up payment attempt fails Then send a failure alert immediately via configured channels with the error code category (without sensitive PAN data) and deep links to Update Payment Method, Add Funds, and Pause Wallet And schedule up to 3 automated retries at 15 minutes, 1 hour, and 6 hours after the initial failure; each retry result triggers a success or failure notification And if 3 consecutive failures occur within 8 hours or the balance remains at/below the low-balance threshold after the final retry Then escalate by notifying coalition admins and partner owners and posting to the configured escalation Slack channel And escalation notifications include deep links to Update Payment Method and Add Funds And once a subsequent top-up succeeds, mark the incident resolved and stop further failure notifications for that incident
Category Cap Nearing/Exceeded Alerts
Given a wallet has category spend caps configured (e.g., Voice Calls, SMS, Email) When spend in a category reaches the nearing threshold (configurable percentage; default 80%) Then send a nearing-cap alert with current spend, cap amount, percent used, affected category, and a deep link to Edit Category Caps And when spend in a category exceeds the cap Then send an exceeded-cap alert with the same details and deep links to Reallocate Budget and Pause Wallet And alerts are sent once per threshold crossing per category and deduplicated for 30 minutes And for shared wallets, include a breakdown of top contributing campaigns by spend in that category And recipients and channels honor role-based access and preferences
Daily Digest Delivery
Given daily digests are enabled at the partner or coalition level When the configured digest time occurs Then send a consolidated Email and Slack summary for the last 24 hours including counts and lists of low-balance alerts, top-up successes/failures/retries, and category cap nearing/exceeded events And each digest item includes severity, timestamp, wallet name/ID, and deep links to Add Funds, Edit Rules, or View Activity as appropriate And only recipients who opted into digests at their scope receive the digest, with channels honoring per-user preferences And the digest includes a link to a dashboard view filtered to the organization’s wallets and timeframe
Role-Based Recipients and Notification Preferences Enforcement
Given any wallet notification is generated When recipients are resolved Then only users with roles authorized for that notification type at the wallet’s scope (partner or coalition) are selected And per-user channel preferences (Email, Slack) and frequency settings (immediate vs digest-only) are applied And changes to roles or preferences take effect within 5 minutes and are reflected in subsequent notifications And no notifications are sent to deactivated users; bounces are logged and surfaced to admins
Refunds, Credits & Adjustments Handling
"As a finance controller, I want refunds and credits to flow back into the originating wallet with clear reason codes so that our accounting remains accurate and auditable."
Description

Automatically credit wallets for failed or reversed actions and provide manual adjustments with reason codes, notes, and permissions. Support refund-to-funding-source when applicable or credit-to-wallet policies configurable per org. Ensure all changes are reflected in balances, allocations, and reports, with idempotent operations and complete audit trails.
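
A sketch of the idempotency contract described above, with an in-memory map standing in for a durable idempotency table: the same key with the same payload replays the original result, while key reuse with a different payload is rejected.

```python
# Idempotent credit posting keyed on an idempotency_key; payloads are hashed so
# reuse with a different body can be detected and refused.
import hashlib
import json

class RefundLedger:
    def __init__(self):
        self._by_key: dict[str, tuple[str, str]] = {}   # key -> (payload_hash, transaction_id)
        self._entries: list[dict] = []

    def post_credit(self, idempotency_key: str, payload: dict) -> dict:
        payload_hash = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if idempotency_key in self._by_key:
            seen_hash, txn_id = self._by_key[idempotency_key]
            if seen_hash != payload_hash:
                return {"status": 409, "error": "idempotency_key reused with a different payload"}
            return {"status": 200, "transaction_id": txn_id}    # replay: same result, no new mutation
        txn_id = f"txn_{len(self._entries) + 1}"
        self._entries.append({"id": txn_id, **payload})
        self._by_key[idempotency_key] = (payload_hash, txn_id)
        return {"status": 201, "transaction_id": txn_id}

ledger = RefundLedger()
payload = {"wallet_id": "w1", "amount": 25.0, "reason_code": "action_failed"}
print(ledger.post_credit("key-1", payload))            # 201: new credit posted
print(ledger.post_credit("key-1", payload))            # 200: replay returns the same transaction_id
print(ledger.post_credit("key-1", {"amount": 99.0}))   # 409: same key, different payload
```
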

Acceptance Criteria
Auto-Credit on Failed Action
Given an action incurred a wallet charge C and the action is later marked failed or reversed When the failure event is processed Then a credit transaction of amount C is posted to the same wallet and linked to the original charge And the wallet available_balance increases by C And allocation and campaign spend metrics tied to the action are reversed accordingly And the ledger and reports reflect the credit within 60 seconds
Org Refund Policy Enforcement
Given an organization has a refund policy of either credit_to_wallet or refund_to_funding_source and a charge C is reversed When the reversal is processed Then if policy is credit_to_wallet, the wallet is credited C and no external refund is issued And if policy is refund_to_funding_source and the funding source supports refunds, an external refund of C is issued and the wallet balance is unchanged And if an external refund attempt fails and fallback_to_wallet is enabled, a wallet credit of C is posted and the failure is logged And all transactions record policy_applied and external_ref_id (if any) in the ledger
Manual Adjustment with Reason Codes and Permissions
Given a user initiates a manual wallet adjustment with amount A, reason_code R, and note N When the user submits the adjustment Then the request is rejected if the user lacks wallet.adjust permission or R is not in the allowed set or A equals 0 And if approved, a ledger entry is created with type credit if A > 0 else debit, capturing actor, timestamp, reason_code, and note And the wallet balance and allocation rollups update accordingly within 60 seconds
Idempotent Operations via Idempotency Key
Given a request to create a refund/credit/debit includes idempotency_key K and payload P When the same K and P are submitted more than once within 24 hours Then exactly one ledger mutation occurs and subsequent responses return the same transaction_id and balance snapshot And if K is reused with a different payload, the request is rejected with HTTP 409 and no balance change
Allocation Restoration for Shared Wallet Charges
Given a shared wallet charge T was allocated across campaigns/partners per recorded allocation map M When T is reversed or credited Then the credit is allocated back according to M And campaign/partner allocation reports and budgets are recalculated within 60 seconds
Audit Trail Completeness and Immutability
Given any refund, credit, or manual adjustment occurs When the audit log is queried for the affected wallet Then the entry contains transaction_id, type, amount, currency, before_balance, after_balance, actor, actor_id, reason_code, note, source_event_id, policy_applied, external_ref_id (if any), idempotency_key, and created_at And audit entries are immutable; any correction is a compensating entry And audit exports are available via API and CSV with record counts matching ledger entries for the time range
Balance and Reporting Consistency
Given a refund, credit, or adjustment is posted When viewing the wallet API, dashboard, and partner statements Then all surfaces display the same available_balance, pending/reserved, and allocated totals within 60 seconds And threshold alerts are re-evaluated after the update And no action processing is halted due to transient discrepancies during this window
Real-time Ledger, Reporting & Export
"As a compliance officer, I want a real-time, exportable ledger of wallet transactions so that I can deliver audit-ready proof to stakeholders."
Description

Maintain an immutable, timestamped ledger of all wallet transactions (top-ups, spends, refunds, adjustments) with attribution to campaign, category, user/system actor, and funding source. Provide filters, rollups, and dashboards for spend by partner, campaign, category, and time period. Offer CSV export and API endpoints to deliver audit-ready evidence to funders and stakeholders.
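
An append-only ledger can be sketched as follows; field names echo the acceptance criteria below where possible, and the in-memory list is a stand-in for durable storage.

```python
# Append-only ledger: entries are never mutated, corrections reference the original
# via related_transaction_id, and each wallet gets a monotonically increasing sequence number.
import uuid
from datetime import datetime, timezone

class Ledger:
    def __init__(self):
        self._entries: list[dict] = []
        self._seq: dict[str, int] = {}

    def append(self, wallet_id: str, type_: str, amount: float, *,
               campaign_id: str | None = None, related_transaction_id: str | None = None) -> dict:
        self._seq[wallet_id] = self._seq.get(wallet_id, 0) + 1
        entry = {
            "transaction_id": str(uuid.uuid4()),
            "wallet_id": wallet_id,
            "sequence_number": self._seq[wallet_id],
            "type": type_,                                  # top_up | spend | refund | adjustment
            "amount": amount,
            "campaign_id": campaign_id,
            "related_transaction_id": related_transaction_id,
            "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        }
        self._entries.append(entry)
        return entry
    # Deliberately no update or delete: a correction is a new entry that points at
    # the original through related_transaction_id.

ledger = Ledger()
spend = ledger.append("w1", "spend", 2.50, campaign_id="c1")
ledger.append("w1", "refund", 2.50, related_transaction_id=spend["transaction_id"])
```
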

Acceptance Criteria
Append-Only Immutable Ledger
Given an existing ledger entry, when a user attempts to edit or delete it via UI or API, then the request is rejected with 409 Conflict and no changes are made to the entry. Given a correction is needed, when an authorized user submits an adjustment entry referencing the original transaction, then a new ledger entry is created with related_transaction_id and both entries remain unchanged thereafter. Given any ledger write, when the entry is stored, then it is assigned a globally unique transaction_id and a monotonically increasing per-wallet sequence_number. Given a rejected mutation attempt, when audit logs are reviewed, then an audit event exists with actor_id, timestamp_utc, and reason immutable_ledger_violation.
Real-Time Attribution and Timestamping for All Transaction Types
Given a top-up, spend, refund, or adjustment occurs, when the transaction completes, then a ledger entry is persisted with non-null values for wallet_id, partner_id, type ∈ {top_up, spend, refund, adjustment}, amount (> 0.00), currency (ISO-4217), timestamp_utc (ISO 8601), actor_type ∈ {user, system}, and funding_source_id (required for top_up and refund). Given a campaign and category are applicable, when the transaction is a spend or refund, then campaign_id and category are populated; for top-ups, campaign_id may be null. Given the entry is written, when queried within 2 seconds (P95) via UI list or API, then the entry is visible and the wallet balance reflects the change. Given concurrent spends on the same wallet, when processed, then the ledger shows both entries with unique sequence_numbers and the final balance equals initial_balance − sum(spend) + sum(refund/adjustment) within $0.01.
Multi-Dimensional Filtering and Drilldown
Given filters for partner, wallet, campaign, category, actor_type, funding_source_id, and date range (absolute and relative like Last 7 Days and MTD), when applied to the ledger list, then only matching entries are returned. Given sorting by timestamp_utc asc or desc, when selected, then results are returned in the requested order and include a stable cursor for pagination. Given a dataset of up to 1,000,000 entries, when filters are applied, then the server responds within 1.5 seconds at P95 and returns total_count and page_count. Given a rollup tile is clicked, when navigating to drilldown, then the ledger view opens with the originating filters pre-applied and the same result set as the tile query.
Accurate Rollups by Partner, Campaign, Category, and Time Period
Given a time grain of day, week, or month and a selected timezone, when generating rollups, then totals are computed in the organization’s configured timezone and include zero-value periods for gaps. Given rollups by campaign within a partner, when totals are compared, then the sum of campaign totals equals the partner total within $0.01. Given a period filter, when comparing beginning and ending wallet balances, then beginning_balance + top_ups − spends + refunds + adjustments = ending_balance within $0.01. Given category-level rollups, when categories are constrained by wallet allowed categories, then no spend appears outside the allowed set.
Live Dashboard Latency and Freshness Indicators
Given new ledger activity, when viewing dashboards, then KPIs and charts refresh automatically at least every 15 seconds and reflect new data within 5 seconds P95 of ledger write. Given a dashboard is rendered, when inspected, then a Last updated timestamp is visible and shows a value within 15 seconds of current time during active updates. Given a dashboard element is clicked, when drilling down, then the user is taken to the ledger list with identical filters and time range.
CSV Export for Audit-Ready Evidence
Given a set of filters, when exporting to CSV, then the file includes only matching entries and the following columns with headers: transaction_id,wallet_id,partner_id,wallet_name,campaign_id,campaign_name,category,type,amount,currency,timestamp_utc,org_local_timestamp,actor_type,actor_id,funding_source_id,related_transaction_id,sequence_number,balance_after,metadata_json,request_id. Given field values contain commas or quotes, when exported, then the CSV is RFC 4180 compliant, UTF-8 encoded without BOM, with fields properly quoted and escaped. Given up to 2,000,000 rows match the export, when requested, then the system performs an asynchronous export, notifies the requester, and makes the file available within 15 minutes; for 100,000 rows or fewer, a synchronous download starts within 10 seconds. Given a completed export, when downloaded, then a manifest.json is available with row_count, generated_at, applied_filters, and a sha256 checksum that matches the CSV file.
Audit API Endpoints with Parity to UI and CSV
Given an authenticated user with appropriate permissions, when calling GET /api/ledger with filters and pagination, then the response returns only authorized data for that organization or partner and includes items, next_cursor, total_count, and request_id. Given the same filters are used across UI, CSV export, and API, when results are compared on a sample of 100 transactions, then all three sources return identical records and ordering. Given an OpenAPI definition is requested, when retrieved, then endpoints for ledger and rollups are documented with field types, enums, pagination, rate limits, and example responses. Given an API request is repeated with the same parameters, when executed within 5 minutes, then ETag or If-None-Match returns 304 Not Modified if data is unchanged; otherwise 200 with updated data.

Split Simulator

Model projected usage and see who pays what before launch. Test scenarios, caps, and grant exhaustion dates, then share a link for approvals. The simulator flags risk (late‑payer exposure, cap breaches) and outputs a sign‑off snapshot so coalitions align on cost expectations up front.

Requirements

Scenario Builder
"As a campaign organizer, I want to quickly assemble multiple cost split scenarios with editable assumptions so that my coalition can compare options and choose one to approve."
Description

Provide an interactive builder to create and manage multiple simulation scenarios for coalition campaigns. Users can define coalition members, anticipated supporter actions per channel (calls, emails, texts), timeframes, pricing tiers, per‑member caps, shared grant pools, and assumption notes. Supports naming, cloning, and version labels; autosaves drafts and marks “ready for approval” states. Parameters update in real time with instant recalculation. Integrates with RallyKit org roster, campaign settings, and price books to prefill defaults. Produces a scenario summary and detailed line items sized for export and approval.

Acceptance Criteria
Create Scenario with Prefilled Defaults
Given a user with access to a RallyKit org that has roster, campaign settings, and price books configured When the user clicks "New Scenario" in Scenario Builder Then a new scenario is created with coalition members prefilled from the org roster And default pricing tiers and rates prefilled from the active price book And the campaign timeframe prefilled from the selected campaign settings And all prefilled fields remain user-editable And a unique scenario name placeholder is generated using the pattern "[Campaign Name] Scenario [YYYY-MM-DD HH:mm]"
Real-Time Recalculation on Parameter Change
Given an open draft scenario with baseline inputs for calls, emails, and texts When the user updates any parameter (e.g., per-member caps, shared grant pool, pricing tier, supporter volumes, timeframe) Then projected usage, per-member costs, grant drawdown, and cap utilization recalculate within 500 ms of the change And updated values appear in both the scenario summary and detailed line items without a page reload And invalid inputs are highlighted and excluded from totals until corrected
Autosave Draft and Restore
Given a draft scenario with unsaved changes When the user pauses input for 2 seconds or navigates within the builder Then changes are autosaved and the last-saved timestamp updates And on reload or return, the draft restores exactly as last saved, including field values, selected tabs, and filters And no data loss occurs on browser refresh or transient network loss up to 30 seconds And autosave failures display a non-blocking warning and retry up to 3 times with exponential backoff
Clone Scenario with Versioning
Given an existing scenario labeled v1.0 When the user selects "Clone" Then a new scenario is created with identical parameters and assumption notes as the source And the version label increments by 0.1 by default (e.g., v1.1) and remains editable And the new scenario name appends "— Copy" until renamed And metadata shows a link to the source scenario And an audit record captures user, timestamp, and source scenario ID
Apply Caps and Shared Grant Pool
Given coalition members with configured per-member caps and a shared grant pool balance When supporter action volumes and pricing tiers are defined Then costs allocate to members up to their caps, with overage shown as unpaid exposure And shared grant funds apply according to configured rules (e.g., first-come or proportional), showing remaining balance And any cap breach or grant exhaustion is flagged inline and summarized at scenario level And line items show quantities, rates, cap applied, grant applied, member pay, and exposure totals per member
Ready for Approval Gate
Given a complete scenario with required fields (name, version label, timeframe, coalition members, pricing tiers, caps/grant settings, assumption notes) When the user clicks "Mark Ready for Approval" Then validation runs and must pass with zero errors And the scenario status changes to Ready for Approval and financial parameters become read-only And a shareable view-only link is generated And authorized users can revert the scenario to Draft
Export Summary and Detailed Line Items
Given a scenario in Draft or Ready for Approval When the user selects "Export" Then CSV and XLSX files generate within 3 seconds And the summary export includes totals by member and channel, caps applied, grant applied, exposure, and grand totals And the detailed export includes member, channel, timeframe bucket, quantity, rate, cost, cap applied, grant applied, and notes columns And exported totals match on-screen totals within rounding rules, and filenames include scenario name, version, and timestamp
Cost Allocation Engine
"As a finance lead, I want accurate per‑member cost projections based on our rules so that each organization knows what they will owe under different usage patterns."
Description

A deterministic calculation engine that models projected usage and allocates costs per member according to configurable rules: usage weights, minimum commitments, tiered rates, per‑channel pricing, caps and overages, shared grant offsets, and time‑based proration. Generates monthly and total projections, per‑member ledgers, and a consolidated coalition summary. Handles rounding, taxes/fees, and currency. Exposes a transparent formula view and validation checks. Integrates with RallyKit billing exports and dashboards to ensure consistency between simulation and live tracking.
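
A worked sketch of the core allocation rule from the first criterion below: a member's monthly charge is the greater of the minimum commitment and the tiered cost of their weighted usage. The tier table, weight, and minimum here are invented example numbers, not real pricing.

```python
# Tiered per-channel pricing applied to usage-weighted volume, floored by a minimum commitment.

def tiered_cost(units: float, tiers: list[tuple[float, float]]) -> float:
    """tiers: list of (units_in_tier, rate); the final tier should be large enough to absorb the rest."""
    cost, remaining = 0.0, units
    for tier_units, rate in tiers:
        take = min(remaining, tier_units)
        cost += take * rate
        remaining -= take
        if remaining <= 0:
            break
    return cost

call_tiers = [(1000, 0.10), (float("inf"), 0.07)]   # first 1,000 calls at $0.10, the rest at $0.07
projected_calls = 4000
member_weight = 0.25                                # this member drives 25% of projected usage
minimum_commitment = 50.00

weighted_usage = projected_calls * member_weight                     # 1,000 calls
allocated = max(minimum_commitment, tiered_cost(weighted_usage, call_tiers))
print(round(allocated, 2))   # 100.0 -> the tiered cost ($100) exceeds the $50 minimum
```
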

Acceptance Criteria
Weighted Usage, Minimums, and Tiered Rates per Channel
Given a coalition configuration with usage_weights per member, minimum_commitments, per-channel tier_rate tables, and projected usage by channel for a target month When the Cost Allocation Engine runs the monthly projection Then each member's allocated_cost equals the greater of (minimum_commitment) and (sum over channels of weighted_usage_in_tier * tier_rate), with deterministic rule evaluation order And repeating the run with identical inputs produces identical outputs and a stable run_id checksum And the coalition_total equals the sum of all member allocated_costs within 0.01 of the currency unit
Caps, Overages, and Shared Grant Offsets
Given a shared grant_pool with amount and allocation_rule, member monthly_caps, and overage_rate definitions When projected allocations exceed caps or grant_pool balance Then grant offsets are applied per allocation_rule before cap enforcement And member charges are clipped at monthly_caps and any excess recorded as overage at overage_rate And the engine outputs grant_exhaustion_month and per-member cap_breach indicators And coalition_total equals (post-grant + overages) within 0.01 of the currency unit
Time-Based Proration for Mid-Cycle Joins and Exits
Given members with activation and deactivation dates within the projection month and proration_method = daily When the engine computes minimums, caps, grants, and usage allocations Then each of those components is prorated by active_days / total_days in the month And usage outside active periods is excluded from allocations And per-member ledgers display proration_factor and active_day counts used in calculations
Rounding, Taxes/Fees, and Currency Handling
Given a configuration with currency, tax_rate rules, per-transaction/platform fees, and optional FX rate snapshot When the engine computes line items and totals Then rounding uses round-half-up to 2 decimals at line_item and total levels, with rounding_adjustment entries recorded And taxes/fees are applied after grants and caps, per jurisdiction rules, and included in member totals And if FX conversion is configured, conversions use the provided FX snapshot id and the coalition_total reconciles within 0.01 of the currency unit
Per-Member Ledgers and Consolidated Coalition Summary
Given a completed projection run When outputs are generated Then each member ledger includes line items for: channel usage by tier, minimum adjustments, grant offsets, cap adjustments, overages, taxes/fees, proration, and rounding adjustments, each with identifiers and timestamps And the consolidated coalition summary aggregates each category and equals the sum of all member ledgers per category within 0.01 And CSV and JSON exports validate against the published schema and contain one row/object per member with a matching run_id
Transparent Formula View and Validation Checks
Given any member ledger entry When the user opens the formula view Then the engine displays the resolved equation with variable values that reproduce the ledger amount within 0.01 And validation checks pass for: totals reconciliation, non-negative post-cap charges unless credit_memo, and configuration sanity (e.g., cap >= minimum or explicit override) And invalid configurations block the run with machine-readable error codes and human-readable messages; valid runs emit validation_passed = true
Consistency with RallyKit Billing Exports and Dashboards
Given a simulation configured to mirror live billing settings for a historical month with available exports When results are compared to RallyKit billing exports and dashboard metrics for the same period Then per-member totals and coalition total match within 0.01 and line-item categories map 1:1 to export fields And any mismatch generates a diff report listing members, fields, expected vs actual values, and percentage deltas
Cap & Grant Modeling
"As a grants manager, I want to model how caps and grant funds impact each member’s costs over time so that we avoid surprises when funds run out."
Description

Configure hard and soft spending caps per member (monthly, campaign‑level) and define shared grant pools with start/end dates, exhaustion order, and rollover rules. Simulate grant depletion dates and cap pacing, including spillover behavior when caps are reached (stop, reallocate, or overage rates). Recalculate as assumptions change and visualize timelines. Integrates with RallyKit grant records to import balances and with the allocation engine to offset charges.
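
A minimal pacing sketch: walking a flat daily spend projection against a grant balance and a monthly cap yields the projected depletion and cap-breach dates. All figures are examples, and the flat-spend assumption is mine; the real simulator would consume the projected usage curve.

```python
# Project grant depletion and first monthly cap breach from a daily spend estimate.
from datetime import date, timedelta

def project(start: date, daily_spend: float, grant_balance: float, monthly_cap: float,
            horizon_days: int = 3 * 365):
    grant_depleted_on = cap_breached_on = None
    month_spend = 0.0
    for offset in range(horizon_days):
        day = start + timedelta(days=offset)
        if day.day == 1 and offset > 0:
            month_spend = 0.0                       # monthly caps reset each calendar month
        month_spend += daily_spend
        grant_balance -= daily_spend
        if grant_balance <= 0 and grant_depleted_on is None:
            grant_depleted_on = day
        if month_spend >= monthly_cap and cap_breached_on is None:
            cap_breached_on = day
        if grant_depleted_on and cap_breached_on:
            break
    return grant_depleted_on, cap_breached_on

# Prints the projected grant-depletion date and the first monthly cap-breach date.
print(project(date(2025, 1, 1), daily_spend=400.0, grant_balance=50_000.0, monthly_cap=7_500.0))
```
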

Acceptance Criteria
Per-Member Hard and Soft Cap Configuration
Given a coalition with member "Org A" and campaign "HB123" When the user sets a monthly soft cap of $5,000 and a monthly hard cap of $7,500 for Org A And sets a campaign-level hard cap of $20,000 for Org A on HB123 Then the simulator saves the values and displays them normalized to currency with two decimals And prevents saving if any hard cap is less than its corresponding soft cap (showing a validation error) And applies the monthly caps per calendar month and the campaign-level cap across all months in the simulation And flags a "Soft Cap Warning" when projected monthly spend exceeds the soft cap but is <= the hard cap And disallows projected spend beyond the hard cap unless a spillover behavior is configured
Grant Pool Definition and Exhaustion Order
Given a grant pool "Equity Fund" with start date 2025-01-01 and end date 2025-06-30 and an initial balance of $50,000 When the user sets an exhaustion order [Equity Fund -> General Fund] And selects "FIFO by start date" as the tiebreaker for overlapping pools Then the simulator validates required fields (name, start date, end date, initial balance, exhaustion order) And prevents overlapping date ranges with identical priority without a tiebreaker (showing an error) And persists the exhaustion order and tiebreaker And marks the pool as inactive outside its date window during simulation
Grant Rollover and Expiration Rules
Given grant pool "Equity Fund" has a remaining $1,200 at its end date When rollover is set to "Carry Over to next pool period" Then the amount rolls into the next defined period or successor pool per mapping and increases the opening balance by $1,200 When rollover is set to "Expire" Then the $1,200 is removed from available funds and logged as expired And the simulator surfaces an "Expiration" or "Rollover" event in the timeline with the exact amount and date
Depletion Date and Cap Pacing Simulation Visualization
Given daily projected action volume produces an expected spend curve for campaign HB123 When the user runs the simulation for 6 months Then the simulator computes for each member and grant: first soft-cap breach date, hard-cap hit date, and grant depletion date And displays a timeline with markers labeled with exact dates for each computed event And provides tooltips showing remaining balance and pace at each marker And the computed dates update within 1 second after changes to assumptions
Spillover Behavior on Cap Reach: Stop, Reallocate, Overage
Given Org A reaches its monthly hard cap on 2025-03-18 with $400 remaining projected spend in March When spillover behavior is set to "Stop" Then the simulator truncates projected spend at the cap and shows $400 as "Unserved" When spillover behavior is set to "Reallocate" Then the simulator reallocates the $400 to the next eligible member or grant per exhaustion/priority rules and logs the transfer When spillover behavior is set to "Overage" Then the simulator applies the configured overage rate (e.g., 120% of base) to the $400 and classifies it as overage spend And all three behaviors produce a reconciliation where total projected actions remain constant across scenarios
Recalculation and Change Propagation
Given the user edits a cap value, grant balance, or a grant's start/end dates When the change is saved Then all derived metrics (depletion dates, warnings, spillovers, offsets) recalculate and the timeline updates immediately And the simulator records a versioned snapshot with timestamp, user, and changed fields And the user can undo the last change and see metrics and timeline revert to the prior snapshot
Grant Balance Import and Allocation Engine Offsets
Given RallyKit contains grant records for "Equity Fund" ($50,000) and "General Fund" ($25,000) When the user clicks "Import from Grants" Then the simulator fetches balances, maps them to defined grant pools by ID, and prompts for manual mapping on any mismatch And displays the last sync timestamp and user And uses the allocation engine to apply offsets so projected charges first consume grant balances per exhaustion order, then apply member caps, then apply overage per spillover rules And shows a reconciliation where Base Projected Charges = Grant Offsets + Member Charges + Overage (within $0.01 rounding) And if the import fails, the simulator shows a non-blocking error and allows manual entry of balances
Risk Scoring & Alerts
"As a coalition director, I want the simulator to flag financial and operational risks with clear explanations so that I can adjust assumptions or policies before launch."
Description

Provide real‑time risk analysis that evaluates each scenario against configurable thresholds, including late‑payer exposure (uncovered costs), cap breach likelihood, grant depletion risk, and budget variance. Present inline warnings, color‑coded badges, and explanatory tooltips, and include risk flags in exported snapshots. Allow admins to tune thresholds and assumptions (e.g., payer reliability scores). Integrates with the allocation and cap/grant models to surface root causes and recommended mitigations.
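
One plausible reading of the late-payer exposure score is each member's payable weighted by the chance they do not pay, mapped to the default badge thresholds listed in the criteria below; the exposure formula itself is an assumption, and the payables and reliability scores are example values.

```python
# Expected uncovered cost from late payers, mapped to Green/Amber/Red badge levels.

def late_payer_exposure(payables: dict[str, float], reliability: dict[str, float]) -> float:
    """Each member's payable weighted by (1 - reliability), i.e. the expected unpaid amount."""
    return sum(amount * (1.0 - reliability.get(member, 1.0))
               for member, amount in payables.items())

def badge(exposure: float, amber_at: float = 0.0, red_at: float = 5_000.0) -> str:
    if exposure >= red_at:
        return "red"
    if exposure > amber_at:
        return "amber"
    return "green"

exposure = late_payer_exposure({"Org A": 12_000.0, "Org B": 3_000.0},
                               {"Org A": 0.9, "Org B": 0.5})
print(badge(exposure))   # 12000*0.1 + 3000*0.5 = 2700 -> "amber"
```
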

Acceptance Criteria
Real-Time Risk Badges and Inline Warnings Update as Inputs Change
Given an open scenario in the Split Simulator with default thresholds, When the user changes any input affecting costs or funding (allocations, caps, grants, payer mix, reliability, budget), Then risk scores and corresponding inline warnings and color-coded badges update within 500ms without a page reload. Given the default thresholds (late-payer exposure: >$0 Amber, >=$5,000 Red; cap utilization: >=90% Amber, >=100% Red; grant depletion: depletion before campaign end Amber, before campaign midpoint Red; budget variance: >=5% Amber, >=10% Red), When the scenario metrics cross a threshold, Then the badge color matches the defined level and the warning text names the risk and metric value. Given no metrics meet any Amber/Red threshold, Then the UI displays a Green/Low badge and no warning banners.
Admin Configures Risk Thresholds and Payer Reliability
Given an authenticated admin, When they open Risk Settings, Then they can configure per-risk thresholds (currency, percent, and date rules) and set payer reliability scores between 0.0 and 1.0 inclusive. Given valid inputs, When the admin saves, Then settings persist org-wide, are versioned with timestamp, user, and change note, and immediately apply to subsequent calculations. Given invalid values (e.g., negative currency, percent > 100, reliability outside 0–1), When the admin attempts to save, Then the system blocks save and shows inline validation messages specifying the field and allowed range. Given scenario-level overrides are enabled, When an editor sets overrides, Then calculations for that scenario use the overrides and the UI shows a "Scenario Overrides" indicator with a link to view overridden values.
Explainability Tooltips and Root-Cause Panel
Given a visible risk badge or inline warning, When the user hovers or taps the info icon, Then a tooltip displays: risk name, current metric value, threshold(s), brief formula, and assumed inputs used. Given the user selects "View details" in the tooltip, When clicked, Then a right-side panel opens listing the top 3 root causes ranked by contribution percent, with one recommended mitigation per cause including estimated impact on the risk metric. Given the scenario changes, When recalculated, Then tooltip and details panel values refresh to match the current state within 500ms.
Exported Snapshot Includes Risk Flags and Explanations
Given the user exports a snapshot to PDF or CSV or generates a share link, Then the snapshot includes a Risk Summary table with one row per risk dimension showing status color, metric value, threshold, explanation, and timestamp. Given any risk is Amber or Red, When exported, Then the snapshot highlights it with the correct color and includes the top mitigation suggestion. Given no risks, When exported, Then the snapshot displays "No Material Risk" with Green status. Given settings at export time, When exported, Then the snapshot records the settings version ID and scenario hash, and shared links render the same immutable content later.
Integration with Allocation, Cap, and Grant Models
Given updated allocation and payer reliability inputs, When risk is recalculated, Then late-payer exposure equals projected payable amounts adjusted by reliability minus expected covered costs, displayed as a currency with two decimals. Given a cap value for a payer or overall, When projected usage exceeds the cap, Then a Cap Breach risk flag appears showing exceedance amount and projected breach date derived from usage ramp assumptions. Given a grant with balance and end date, When projected drawdown depletes before the campaign end date, Then a Grant Depletion risk appears with the projected depletion date and days before end date. Given a scenario budget, When projected total differs, Then Budget Variance displays absolute difference and percent of budget.
Risk Notification and Approval Workflow Alignment
Given one or more risks are Amber or Red, When the user clicks Request Sign-Off, Then the sign-off dialog lists each risk with status, requires checkbox acknowledgment per risk, and allows optional mitigation notes. Given a stakeholder opens the sign-off link, When they approve, Then the system records their name, email, timestamp, and the risk states and thresholds at the moment of approval. Given the scenario changes after approval and any risk state differs, Then subsequent exports and share pages display a "Post-approval change detected" banner and block reuse of the old approval.
Shareable Approval Link & Snapshot
"As a coalition approver, I want a secure link and a sign‑off snapshot that capture exactly what we agreed to so that everyone has a clear record before we launch."
Description

Generate a read‑only, shareable link for a selected scenario with access controls (org‑restricted, invite list, optional passcode). Render a concise approval view: scenario assumptions, per‑member cost table, grant/cap timelines, and risk flags. Create an immutable, timestamped sign‑off snapshot (PDF and hashed JSON) that captures approver name, signature, and comments. Store snapshots in RallyKit’s audit log and attach them to the campaign record for downstream reference.
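
A sketch of the hashed-JSON sign-off snapshot: canonicalize the payload, store its SHA-256, and verify later by recomputing the hash; the field names are illustrative rather than RallyKit's actual snapshot schema.

```python
# Immutable, verifiable sign-off snapshot: any change to the payload breaks the stored hash.
import hashlib
import json
from datetime import datetime, timezone

def make_snapshot(scenario: dict, approver: str, comments: str) -> dict:
    payload = {
        "scenario": scenario,
        "approver": approver,
        "comments": comments,
        "signed_at": datetime.now(timezone.utc).isoformat(),
    }
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return {"payload": payload, "sha256": hashlib.sha256(canonical.encode()).hexdigest()}

def verify_snapshot(snapshot: dict) -> bool:
    canonical = json.dumps(snapshot["payload"], sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest() == snapshot["sha256"]

snap = make_snapshot({"scenario_id": "s1", "version": "v1.0"}, "Casey", "Approved for launch")
print(verify_snapshot(snap))   # True; flipping any field in snap["payload"] makes this False
```
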

Acceptance Criteria
Read-Only Shareable Link Generation for a Selected Simulator Scenario
Given a user selects a simulator scenario, When they click "Generate Approval Link", Then a unique, read-only URL is created and displayed. Given the approval link URL, When it is opened by any recipient with valid access, Then all scenario inputs are non-editable and a visible "Read-only" indicator is shown. Given the approval link is generated, Then the URL contains an unguessable token with at least 32 bytes of entropy. Given the approval link is generated, When the source scenario is later edited, Then the approval view continues to render the pre-edit state associated with the link. Given the approval link is opened, Then page load time for the approval view is under 2 seconds at p95 for scenarios up to 10k rows in the cost table.
Access Control Modes: Org-Restricted, Invite List, Optional Passcode
Given org-restricted mode is selected, When a recipient opens the link, Then access requires authentication in the scenario owner’s organization and non-members receive HTTP 403 without revealing scenario details. Given invite-list mode is selected with specified emails, When a recipient opens the link, Then access is granted only to identities matching those emails and others receive HTTP 403. Given a passcode is enabled, When any recipient opens the link, Then the passcode must be provided before content is revealed; five consecutive failures trigger a 15-minute lockout. Given access settings are updated by the owner, When a recipient next opens the link, Then the new settings apply immediately. Given an access denial occurs, Then no PII or scenario metadata is returned in body or headers beyond a generic error.
Approval View Content Completeness and Accuracy
Given the approval view is loaded, Then it displays scenario name, version identifier, author, and a list of assumptions with labels and units. Given the approval view is loaded, Then it displays a per-member cost table with member name, share basis, projected volume, unit rate, total cost, and allocation method; the grand total equals the sum of rows within ±0.01. Given the approval view is loaded, Then it displays grant and cap timelines with start/end dates, remaining balances, and projected exhaustion dates derived from scenario inputs. Given the approval view is loaded, Then it displays risk flags for late-payer exposure and cap breaches with severity levels and explanatory text. Given the approval view is loaded, Then currency, number, and date formats match the project locale settings.
Sign-Off Snapshot Creation and Integrity (PDF + Hashed JSON)
Given an approver has access to the approval view, When they submit name, signature, and optional comments via "Sign Off", Then a snapshot is created with an immutable UTC ISO-8601 timestamp (Z). Then a PDF is generated that mirrors the approval view and includes approver name, signature, comments, and timestamp. Then a JSON payload of the snapshot content is generated; a SHA-256 hash of the JSON is computed and stored. Then the snapshot records the exact scenario version ID and link token used for sign-off. Given the stored JSON and hash, When the system recomputes the hash, Then it matches the stored value; if not, verification fails and is logged. Given the underlying scenario is later modified, Then the snapshot content remains unchanged and verifiable by hash.
Storage in Audit Log and Campaign Record Attachment
Given a snapshot is created, Then an audit log entry is written capturing creator user ID, approver identity, timestamp, scenario ID, link ID, and signer IP. Then the snapshot files (PDF, JSON, hash) are stored read-only with retention of at least 7 years. Then the snapshot is attached to the associated campaign record and visible in its Approvals list within 5 seconds of sign-off. Given a user with audit-view permission searches by campaign or scenario, When they view audit entries, Then the snapshot entry is retrievable and previewable. Given a user without sufficient permission attempts to delete or modify a snapshot or audit entry, Then the action is blocked and an audit event is recorded.
Share Link Lifecycle: Expiration, Revocation, and Analytics
Given a link is generated, Then a default expiration of 30 days is set unless overridden by the owner to a value between 1 and 90 days. Given the link is expired, When a recipient opens it, Then access is denied with an "Expired" status and no content is revealed. Given the owner revokes the link, When recipients attempt to open it, Then access is denied with a "Link revoked" status. Given a link is opened or a sign-off occurs, Then an event is recorded with timestamp, user identity (if available), and user agent; aggregate counts are visible to the owner. Given more than 20 passcode failures occur within 10 minutes, Then the link is auto-suspended and the owner is notified via email within 1 minute.
Audit Trail & Version Control
"As a compliance officer, I want an auditable history of changes and approvals so that we can defend our cost split decisions during reviews."
Description

Maintain complete version history for each scenario, including parameter diffs, editors, timestamps, and approval state changes. Support labels (e.g., v1, Finance‑Approved), rollback to prior versions, and inline comments with mentions. Provide a printable audit report that aligns with RallyKit’s audit‑ready proof standard. Integrates with the shareable snapshot and approval flow to ensure traceability from draft to sign‑off.
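
A field-level diff between two scenario versions could be computed roughly as follows; the parameter names are invented for illustration.

```python
# Field-level parameter diff: report keys added, removed, and changed (with old/new values).

def diff_versions(old: dict, new: dict) -> dict:
    added   = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: {"old": old[k], "new": new[k]}
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return {"added": added, "removed": removed, "changed": changed}

v1 = {"cap_org_a": 5000, "grant_pool": 50_000, "channel": "calls"}
v2 = {"cap_org_a": 7500, "grant_pool": 50_000, "timeframe_days": 90}
# Reports timeframe_days added, channel removed, and cap_org_a changed from 5000 to 7500.
print(diff_versions(v1, v2))
```
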

Acceptance Criteria
Version History Recording and Diffing
Given a user edits scenario parameters and clicks Save, When changes differ from the latest saved version, Then the system creates a new immutable version with a unique ID, editor identity, and precise timestamp. Given a new version is created, When viewing the version history, Then a field-level diff shows added, removed, and changed parameters with old and new values for each field. Given no parameter changes since the last save, When the user attempts to save, Then the system prevents version creation and displays a no-changes-detected message. Given a version is opened, When the user selects View Diff, Then the diff is rendered within 2 seconds for up to 200 changed fields and can be exported as JSON.
Approval State Transitions Auditability
Given an approval workflow is enabled, When a scenario transitions among Draft, Pending Approval, Approved, or Rejected, Then each transition logs actor, prior state, new state, timestamp, and an optional comment. Given a non-approver attempts to approve or reject, When the action is submitted, Then the action is blocked, the attempt is logged, and the user sees an insufficient-permissions message. Given an approval is revoked, When the revocation occurs, Then a reversal event is recorded linking to the original approval event. Given a version has an approval decision, When viewing audit history, Then the approval entry clearly references the exact version ID and any associated snapshot ID.
Rollback to Prior Version
Given a user with Edit permission selects a prior version, When they click Rollback and confirm, Then a new version is created that exactly matches the selected version’s parameters and metadata (except timestamps and editor), and the history records the rollback source and target IDs. Given a rollback is performed, When viewing approvals on the newly created version, Then approval state resets to Draft and no prior approvals are carried forward. Given a rollback is performed, When viewing history, Then existing historical versions and their approvals remain unchanged, and the rollback appears as an append-only event. Given labels existed on the source version, When rollback completes, Then no labels are copied except an automatic system label noting “Rolled back from <version ID>”.
Labels and Version Naming
Given a saved version, When a user with Edit permission applies labels, Then the system allows multiple unique labels per version, supports free-text labels up to 30 characters, and records label add/remove events with actor and timestamp. Given reserved labels map to approval states, When a user attempts to manually apply a reserved approval label, Then the action is blocked and guidance is shown to use the approval flow. Given the version list is viewed, When filtering by label, Then only versions containing the selected label(s) are returned. Given a version is exported to audit report, When labels exist, Then all labels appear exactly as stored in the report.
Inline Comments with Mentions
Given a version or specific parameter is selected, When a user posts a comment with @mentions, Then the comment stores author, timestamp, target context (version or field), and each mention, and triggers an in-app notification to mentioned users with a deep link. Given a comment is edited, When the edit is saved, Then an edit history entry is recorded with editor, timestamp, and diff of text changes; the original text remains viewable in history. Given a comment is deleted by its author or an admin, When deletion is confirmed, Then the comment is soft-deleted (hidden from default view), retained in audit history with a tombstone marker, and included in the printable audit report. Given a user without access to the scenario is mentioned, When the comment is submitted, Then submission is blocked and the user is informed the mention target lacks access.
Printable Audit Report Generation (Audit‑Ready Proof)
Given a version or a range of versions is selected, When Generate Audit Report is clicked, Then a PDF is produced containing organization, campaign, scenario name, version IDs, editors, timestamps, labels, full approval history, parameter diffs, and comment log entries. Given a report is generated, When the file is produced, Then the document includes a unique checksum/hash, page numbers, and a generated-on timestamp (UTC and org-local) and renders within 5 seconds for up to 25 versions. Given the same inputs and data state, When generating the report again, Then the checksum/hash remains identical, ensuring reproducibility. Given the report is viewed, When links are followed, Then deep links navigate to the exact version, diff view, approval event, or comment referenced.
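One way to satisfy the reproducibility criterion is to compute the checksum over a canonical serialization of the report's inputs rather than over the rendered PDF bytes, which typically embed a generation timestamp. The sketch below assumes that approach; the shape of `report_inputs` is illustrative.

```python
import hashlib
import json


def report_checksum(report_inputs: dict) -> str:
    """Deterministic checksum over the data that feeds the audit report.

    Hashing a canonical JSON serialization (sorted keys, fixed separators) of
    versions, diffs, approvals, labels, and comments yields the same hash for
    the same inputs and data state, even if the PDF bytes differ run to run."""
    canonical = json.dumps(report_inputs, sort_keys=True, separators=(",", ":"),
                           ensure_ascii=False, default=str)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```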
Traceability with Shareable Snapshot and Approval Flow
Given a shareable snapshot is created for approval, When the snapshot link is generated, Then it embeds the immutable version ID and content hash and is marked read-only. Given a scenario changes after a snapshot is created, When the snapshot is viewed, Then the snapshot still points to the original version and displays a banner indicating newer versions exist. Given an approval is granted from a snapshot, When viewing audit history, Then the approval event references the snapshot ID and the exact version ID, and the snapshot displays the approver, decision, and timestamp. Given a version has an approved snapshot, When attempting to delete that version, Then deletion is blocked or requires admin override with mandatory justification, and the outcome is logged.
Data Import & Export
"As an operations manager, I want to import roster and baseline data and export final projections in standard formats so that setup is fast and downstream teams can act."
Description

Enable CSV and API import of coalition member rosters, historical usage baselines, and pricing defaults to seed scenarios. Provide field mapping, validation, and error resolution. Export scenario results (per‑member ledgers, assumptions, risk summary) to CSV, PDF, and RallyKit billing integrations. Support one‑click attach to the campaign kickoff checklist and notify stakeholders upon export.

Acceptance Criteria
CSV Member Roster Import with Field Mapping and Validation
- Given a valid CSV file (≤20 MB) containing member roster fields (name, email, external_id, district, org_name), When the user uploads the file and maps CSV columns to required fields via the mapping UI, Then the system validates that all required fields are mapped and shows a preview of the first 100 rows with detected issues highlighted.
- Given the user starts the import, When validation runs, Then rows with errors are flagged with line number, field, and reason; the user can download an error CSV and re-upload a corrected file without re-mapping via a saved mapping profile.
- Given duplicate external_id or email within the file or compared to existing members, When duplicates are detected, Then the user can choose a rule (skip, update, or create new) and the system applies it consistently, reporting counts per outcome.
- Given a file with 10,000 rows and ≤5% errors, When importing, Then successful rows are committed and errored rows are skipped; the import completes in ≤2 minutes and a summary (imported, updated, skipped, errored) is displayed.
- Given a completed import, When reviewing the roster, Then all imported members are searchable and an audit log entry records who imported, when, file checksum, and mapping profile used.
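A sketch of the per-row validation that would feed the error CSV described above, assuming a flat `mapping` of required field to CSV column header; the email regex and helper name are illustrative.

```python
import csv
import io
import re

REQUIRED_FIELDS = ["name", "email", "external_id", "district", "org_name"]
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def validate_roster_csv(file_text: str, mapping: dict) -> tuple:
    """Validate a mapped roster CSV.

    Returns (valid_rows, errors); each error carries the line number, field,
    and reason so it can be written to the downloadable error CSV."""
    valid, errors = [], []
    reader = csv.DictReader(io.StringIO(file_text))
    for line_no, row in enumerate(reader, start=2):  # line 1 is the header
        row_errors, record = [], {}
        for name in REQUIRED_FIELDS:
            column = mapping.get(name)
            value = (row.get(column) or "").strip() if column else ""
            if not value:
                row_errors.append({"line": line_no, "field": name, "reason": "missing value"})
            record[name] = value
        if record.get("email") and not EMAIL_RE.match(record["email"]):
            row_errors.append({"line": line_no, "field": "email", "reason": "invalid email format"})
        if row_errors:
            errors.extend(row_errors)
        else:
            valid.append(record)
    return valid, errors
```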
API Import for Rosters, Baselines, and Pricing Defaults
- Given an authenticated API client with scope split_simulator.import, When it POSTs to /api/v1/simulator/import with JSON arrays for members, usage_baselines, and pricing_defaults, Then the API responds 202 with job_id and a Location to poll.
- Given a job_id, When the client GETs /api/v1/simulator/import/{job_id}, Then status transitions queued→validating→processing→completed|failed with progress percent and counts per entity type.
- Given validation failures, When the job completes, Then the result includes an errors[] list with object_type, index, field, code, and message; the API returns 207 Multi-Status for partial success.
- Given an Idempotency-Key header is provided, When the same payload is replayed within 24 hours, Then no duplicate records are created and the prior job result is returned.
- Given payload size ≤5 MB and rate limit 60 requests/min, When limits are exceeded, Then the API returns 413 or 429 with Retry-After; request/response logs omit PII values.
Scenario Seeding from Imported Data
- Given successful imports exist for a coalition, When a user creates a new Split Simulator scenario and selects that coalition, Then the scenario pre-populates members from the roster, per-member historical usage baselines, and default pricing by member org.
- Given pricing defaults exist and a member lacks an override, When the scenario seeds, Then coalition-level defaults apply; member-specific overrides take precedence; missing values are flagged with inline warnings.
- Given seeded values, When the user opens Assumptions, Then totals (projected actions, cost caps, grant allocations) reconcile to the sum of per-member values within ±0.5% tolerance.
- Given audit requirements, When viewing scenario metadata, Then it displays source import job_ids, timestamps, and mapping profile names for each seeded data class.
Export Scenario Results to CSV and PDF
- Given a scenario with at least one member, When the user clicks Export → Files (CSV, PDF), Then the system generates: per-member_ledger.csv, assumptions.csv, risk_summary.csv, and scenario_snapshot.pdf.
- Given export runs, When files are generated for up to 5,000 members, Then totals reconcile across CSVs and PDF (difference ≤$0.01 per member due to rounding), and generation completes in ≤60 seconds.
- Given downloaded files, When opened, Then each CSV includes specified columns with headers; numbers use period decimal separators; dates are ISO 8601 UTC; the PDF includes a cover page with scenario name, id, created_at, and a sign-off block.
- Given a versioned scenario, When exporting multiple times, Then filenames include scenario_id, version, and UTC timestamp; prior exports remain accessible in Export History.
Export to RallyKit Billing Integrations
- Given RallyKit Billing is connected with field mapping configured, When the user selects Export → Billing, Then the system validates required mappings and shows a dry-run summary of records to be sent.
- Given the user confirms, When the export executes, Then ledger entries are transmitted via API in batches of 500 with exponential backoff retries for transient 5xx up to 3 attempts.
- Given idempotency is enabled, When re-exporting the same scenario version, Then duplicates are not created in Billing (dedup by scenario_version and entry_id).
- Given failures occur, When some records fail, Then the export is marked Partial Fail, a retry file is created, and the user is notified with failure reasons; success/failure counts are displayed.
- Given a successful export, When completed, Then a confirmation with external batch_id and a link to the Billing system is stored in Export History and audit logs.
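A sketch of the batching and retry behavior described above; `send_batch` stands in for the billing API call, and the per-batch idempotency key is a simplification of the per-entry dedup (scenario_version plus entry_id) the criteria require.

```python
import time

BATCH_SIZE = 500
MAX_ATTEMPTS = 3  # transient 5xx retried up to 3 attempts total


def export_ledger(entries, send_batch, scenario_version: str):
    """Send ledger entries to Billing in batches of 500 with backoff.

    `send_batch(batch, idempotency_key)` should raise on transient 5xx errors.
    Returns (sent_keys, failed_keys) so a Partial Fail retry file and the
    success/failure counts can be produced."""
    sent, failed = [], []
    for i in range(0, len(entries), BATCH_SIZE):
        batch = entries[i:i + BATCH_SIZE]
        key = f"{scenario_version}:{i // BATCH_SIZE}"  # real dedup is per entry
        for attempt in range(MAX_ATTEMPTS):
            try:
                send_batch(batch, idempotency_key=key)
                sent.append(key)
                break
            except Exception:
                if attempt == MAX_ATTEMPTS - 1:
                    failed.append(key)
                else:
                    time.sleep(2 ** attempt)  # exponential backoff: 1s, then 2s
    return sent, failed
```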
One-Click Attach to Campaign Kickoff Checklist
- Given a scenario export bundle exists, When the user clicks Attach to Kickoff Checklist and selects a campaign, Then the latest export bundle is attached to the campaign’s checklist as a read-only artifact.
- Given the attachment is created, When collaborators view the checklist, Then they see the artifact with name, size, checksum, scenario_id/version, and created_by; permissions respect campaign visibility settings.
- Given the scenario is updated later, When a new export is attached, Then the prior attachment is retained and labeled superseded; the checklist shows the most recent as Current.
Stakeholder Notifications on Export Completion
- Given stakeholders are configured (emails and optional Slack webhook), When an export completes (Files or Billing), Then notifications are sent within 60 seconds including scenario name, version, export type, summary counts, and links.
- Given attachments exceed 10 MB, When sending the email, Then the system includes secure download links instead of attachments; links expire in 14 days.
- Given a delivery failure, When an email bounces or Slack returns a non-2xx, Then the system retries up to 3 times over 10 minutes and logs the failure; the user sees a banner with details.
- Given unsubscribed stakeholders, When export notifications are sent, Then unsubscribed recipients are skipped and listed in the notification log.

Magic Join Link

One-tap SMS deep link that verifies the volunteer’s phone, pre-fills their profile, and starts district detection instantly—no passwords or app installs. Volunteers enroll in under 30 seconds, with clear consent capture and a smooth handoff to their first task, reducing drop-off at the door.

Requirements

Secure Expiring Join Links
"As a campaign admin, I want to generate secure, single-use SMS join links so that volunteers can enroll in one tap without downloads or account creation."
Description

Generate signed, single-use SMS deep links that open a mobile web flow without passwords or app installs. Links include a short TTL, replay protection, revocation, and device-aware routing (iOS/Android/desktop) with graceful fallbacks to a responsive web join page. Support custom branded domains, UTM and referral parameters, multi-tenant campaign scoping, localized landing copy, and QR/short-link variants. Store link issuance and consumption events for attribution and security auditing.

Acceptance Criteria
Signed, Time-Limited Link Integrity
Given a join link is created with TTL=15 minutes and a server-side signature v1 When the link is requested within 15 minutes of issuance Then the signature validates against the active key and the join flow loads with HTTP 200 When the same link is requested 1 second after TTL expiry Then the request is rejected with HTTP 410 Gone and no session is created or modified When any query parameter in the link is altered (e.g., tenantId, campaignId, utm_source) Then signature validation fails and the server returns HTTP 401 Unauthorized and logs a denied event with reason "signature_invalid"
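A minimal sketch of HMAC-SHA256 signing and verification for the signed, time-limited links above; the parameter names (`exp`, `sv`, `sig`), the key-store shape, and the returned status strings are assumptions, and a production signing key would come from a secrets manager rather than a constant.

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode, parse_qs

SECRET_KEYS = {"v1": b"replace-with-managed-secret"}  # active signing keys by version


def sign_join_link(params: dict, ttl_seconds: int = 900, key_version: str = "v1") -> str:
    """Produce a signed query string carrying an expiry and signature version."""
    payload = dict(params, exp=int(time.time()) + ttl_seconds, sv=key_version)
    canonical = urlencode(sorted(payload.items()))
    sig = hmac.new(SECRET_KEYS[key_version], canonical.encode(), hashlib.sha256).hexdigest()
    return canonical + "&sig=" + sig


def verify_join_link(query: str) -> str:
    """Return 'ok', 'expired', 'signature_invalid', or 'key_retired'.

    Altering any parameter changes the canonical string, so the signature
    check fails (the 401 case above); a valid signature past its TTL maps to
    the 410 case."""
    fields = {k: v[0] for k, v in parse_qs(query).items()}
    sig = fields.pop("sig", "")
    key = SECRET_KEYS.get(fields.get("sv", ""))
    if key is None:
        return "key_retired"
    canonical = urlencode(sorted(fields.items()))
    expected = hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return "signature_invalid"
    if time.time() > int(fields["exp"]):
        return "expired"
    return "ok"
```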
Single-Use and Replay Protection
Given a valid unused join link with token T When the link is first consumed successfully Then token T is atomically marked consumed with consumed_at and consumer_fingerprint persisted When the same URL is requested again from a different device or fingerprint Then the request is rejected with HTTP 409 Conflict and no additional enrollment/session is created When the same URL is retried within 60 seconds from the original device due to network retry Then the server returns HTTP 200 and resumes the existing session without creating duplicate records
Revocation Controls
Given an admin user revokes a specific join link before consumption When the revoked link is requested Then the server returns HTTP 410 Gone and logs a denied event with reason "revoked" Given an admin user revokes all outstanding links for Campaign C When any non-consumed link for Campaign C is requested Then the server returns HTTP 410 Gone and the link cannot be consumed subsequently Given signing keys are rotated and prior keys are retired When a link signed with a retired key is requested Then the server returns HTTP 401 Unauthorized and logs "key_retired"
Device-Aware Routing and Localized Fallback
Given the user opens the link on iOS Safari or Android Chrome When user agent detection runs Then the mobile-optimized join flow loads and renders within the same domain over HTTPS with HTTP 200 Given the user opens the link on a desktop browser When the link is requested Then a responsive web join page loads with equivalent functionality and clear guidance, HTTP 200 When device/UA detection fails or is unknown Then the system falls back to the responsive web join page, preserving all parameters When the Accept-Language header contains a supported locale (e.g., es-ES) Then the landing copy renders in that locale; otherwise defaults to en-US
Custom Branded Domains and Multi-Tenant Scoping
Given Tenant T has verified branded domain join.example.org via DNS and TLS When links are generated for Tenant T Then all links use https://join.example.org/... and present a valid TLS certificate (TLS 1.2+), HSTS enabled When a link scoped to Tenant T and Campaign C is requested under Tenant U or Campaign D Then the server returns HTTP 403 Forbidden and no cross-tenant data is exposed When the branded domain is misconfigured or certificate invalid Then the system falls back to the RallyKit default domain over HTTPS and logs a warning event
Attribution Metadata Preservation (UTM/Referral and Variants)
Given a link includes utm_source, utm_medium, utm_campaign, and referralCode parameters When the user flows through any redirects (short-link expansion or QR code variant) Then all parameters are preserved exactly and are available to the client and server When the user completes enrollment Then the stored enrollment record includes the UTM and referral values and they appear in attribution reports and exports When parameters are missing or malformed Then defaults are applied without causing errors, and the event is logged with reason "params_invalid"
Audit Trail for Issuance and Consumption Events
Given a link is issued Then an issuance event is stored with fields: link_id, created_at (UTC), tenant_id, campaign_id, creator_user_id, ttl_seconds, domain, signature_version, param_hash Given a link request occurs Then a consumption event is stored with fields: link_id, attempted_at (UTC), outcome (success|expired|revoked|signature_invalid|key_retired|cross_tenant|replay), device_type, user_agent, ip_hash, geo_country, http_status When a link is successfully consumed Then consumed_at (UTC) and consumer_fingerprint are recorded and immutable And all events are retained for at least 24 months and are exportable via API and CSV
One-Tap Phone Verification
"As a volunteer, I want my phone number to be verified automatically when I tap the link so that I can join quickly without typing codes."
Description

Automatically verify phone ownership when the magic link is opened on the device that received the SMS, establishing a verified session without manual code entry. Provide resilient fallbacks: OTP via SMS and voice call, resend with backoff, and device handoff support if the link is opened on another device. Enforce rate limiting, fraud/threat detection, and VOIP handling rules. Persist verification status and telemetry for troubleshooting while meeting the sub-30-second enrollment target.

Acceptance Criteria
One-Tap Verification on Same Device
Given a valid magic join link is delivered via SMS and opened on the same device/browser that received it When the user taps the link within its validity window Then the system verifies phone ownership without manual code entry And creates a verified session bound to that phone number and current device fingerprint And marks the phone verification status = verified with a server-side timestamp And prevents reuse of the link after first successful verification And displays confirmation of verified status within 2 seconds for p90 traffic
OTP Fallback via SMS and Voice
Given auto-verification is unavailable or the link is opened on a different device When the user requests an SMS OTP Then a 6-digit numeric OTP is generated, delivered via SMS, and expires in 5 minutes And only the latest OTP is valid; prior OTPs are invalidated And up to 3 consecutive incorrect OTP entries lock further attempts for 5 minutes And upon correct entry, verification status is set to verified and a session is established on the current device And when the user opts for Voice Call, the same OTP is delivered via text-to-speech within 30 seconds with retry support
Resend with Backoff and Delivery Constraints
Given the user has requested an OTP and indicates it was not received When the user taps Resend Then the first resend is allowed after 30 seconds, doubling wait times (30s, 60s, 120s, 240s), up to 4 resends per verification session And the UI displays the countdown to next resend and disables the button until eligible And provider delivery status is checked; after two consecutive undelivered SMS attempts, the UI defaults to offering Voice Call And if a resend is delivered, all previous OTPs are invalidated automatically
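A sketch of the doubling resend schedule above; `next_resend_allowed_at` is a hypothetical helper and assumes each wait is measured from the most recent send.

```python
RESEND_WAITS_SECONDS = [30, 60, 120, 240]  # 30s, then doubling; at most 4 resends per session


def next_resend_allowed_at(last_send_ts: float, resend_count: int):
    """Earliest timestamp at which the next resend may be requested.

    `last_send_ts` is when the most recent OTP was sent; `resend_count` is how
    many resends have already occurred in this verification session. Returns
    None once the 4-resend limit is exhausted; otherwise the UI shows a
    countdown of (returned value - now) and keeps the button disabled."""
    if resend_count >= len(RESEND_WAITS_SECONDS):
        return None
    return last_send_ts + RESEND_WAITS_SECONDS[resend_count]
```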
Cross-Device Handoff Support
Given the magic link is opened on a device different from the one that received the SMS When the user completes OTP verification Then a verified session is created on the device in use for subsequent enrollment steps And the original one-tap link token is invalidated to prevent reuse And the user proceeds to the next step without additional login requirements
Rate Limiting and Fraud/VOIP Controls
Given verification attempts originate from a phone number, IP address, or device fingerprint When limits are exceeded (more than 5 verification attempts per phone per 15 minutes, 20 per IP per hour, or 10 per device per hour) Then further attempts are blocked for 1 hour and a generic error is shown And VOIP numbers are detected; one-tap auto-verify is disallowed and OTP is required And high-risk signals (e.g., disposable numbers, known bad IP ranges) trigger a hard block or additional CAPTCHA challenge And all blocks and challenges are recorded with reason codes
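A sketch of the sliding-window limits above using an in-memory store; production enforcement would need a shared store (for example Redis), and the key types and helper name are illustrative.

```python
import time
from collections import defaultdict, deque

# (limit, window_seconds) per key type, matching the thresholds above
LIMITS = {"phone": (5, 15 * 60), "ip": (20, 60 * 60), "device": (10, 60 * 60)}

_attempts = defaultdict(deque)  # in-memory only; use a shared store in production


def allow_attempt(key_type: str, key: str, now=None) -> bool:
    """Sliding-window check: True if another verification attempt is allowed."""
    now = time.time() if now is None else now
    limit, window = LIMITS[key_type]
    history = _attempts[(key_type, key)]
    while history and history[0] <= now - window:
        history.popleft()              # drop attempts outside the window
    if len(history) >= limit:
        return False                   # caller blocks for 1 hour and shows a generic error
    history.append(now)
    return True
```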
Telemetry, Audit Logs, and Persisted Status
Given any verification flow (one-tap, SMS, voice) is executed When the flow completes or fails Then the system persists: phone hash, method, timestamps (issued, delivered, verified), attempt counts, device fingerprint hash, IP hash, provider codes, outcome, and correlation ID And OTP values are never logged or stored after verification And an audit record can be retrieved by correlation ID showing step timeline within 2 seconds And verification status persists in durable storage and is reloaded on session restore within 2 seconds
Sub-30-Second Enrollment Performance
Given a volunteer on a typical US 4G connection (>=2 Mbps, p95 RTT <200 ms) When they use one-tap verification on the same device Then time from link tap to verified session is <= 5 seconds at p95 And when fallback OTP is used, time from requesting OTP to verified session is <= 30 seconds at p90 for successful deliveries And the system emits performance metrics for these SLAs and alerts if thresholds are exceeded for 5 consecutive minutes
Explicit Consent & Audit Trail
"As a volunteer, I want to understand and control what I’m consenting to so that I can opt in confidently and change my preferences later."
Description

Present clear, localized consent language for SMS, calls, and email with explicit checkboxes and an easily accessible privacy notice. Capture immutable consent records (timestamp, phone, consent scopes, IP, user agent, locale, campaign) and store them in an append-only audit log. Support STOP/HELP keywords, opt-out management, region-specific compliance (e.g., TCPA/GDPR/CCPA), and exportable, audit-ready reports. Sync consent states to volunteer profiles and downstream messaging services.

Acceptance Criteria
Localized Consent Presentation on Magic Join Link
Given a volunteer opens a Magic Join Link with a detectable locale (Accept-Language header or locale query parameter) When the enrollment screen renders Then SMS, call, and email consent language is displayed in the detected locale And a privacy notice link is visible and opens the localized notice in a new tab And all consent checkboxes are unchecked by default and individually labeled And the Continue/Join action remains disabled until required checkboxes are selected And the content version ID and locale code used are captured with the submission
Separate, Explicit Checkboxes for SMS, Calls, Email
Given the enrollment form is displayed When the volunteer interacts with consent checkboxes Then each channel (SMS, calls, email) has its own independent checkbox and description And no checkbox is pre-selected And submission is blocked with an inline error if a required scope is missing And the submitted consent scopes match exactly the final state of the checkboxes at submit time
Immutable Consent Record With Required Fields
Given a volunteer submits the enrollment form When consent is recorded Then the system stores a new immutable record including: ISO-8601 UTC timestamp, E.164-normalized phone, consent scopes by channel, IP address, user agent, locale, campaign ID, privacy notice URL/version, content version ID, join link/session ID, and actor (self/admin) And the record is append-only (no update/delete API available); any subsequent change creates a new record linked by subject ID And attempts to mutate an existing record are rejected and logged with reason
Consent State Sync to Profiles and Messaging Providers
Given consent is created or changed When the operation completes Then the volunteer profile reflects the current consent state per channel within 5 seconds And downstream messaging providers are updated to the same state within 60 seconds And sync failures are retried with exponential backoff and surfaced in an admin error queue And no outbound messages are sent on a channel while its consent is pending or revoked
STOP/HELP Keyword Handling and Opt-Out Management
Given an inbound SMS from a verified volunteer number contains STOP, UNSUBSCRIBE, CANCEL, END, or QUIT (any case, with or without punctuation) When the message is received Then SMS consent is set to revoked immediately and an opt-out confirmation is sent And an immutable opt-out event is recorded with timestamp, keyword, and channel And downstream messaging providers are updated within 60 seconds And a HELP message triggers a help reply without changing consent
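A sketch of the keyword handling above, interpreting "contains" as whole-word, case-insensitive matching so trailing punctuation does not block an opt-out; the function name is hypothetical.

```python
import re

STOP_KEYWORDS = {"STOP", "UNSUBSCRIBE", "CANCEL", "END", "QUIT"}


def classify_inbound_sms(body: str) -> str:
    """Classify an inbound SMS as 'stop', 'help', or 'other'."""
    tokens = set(re.findall(r"[A-Za-z]+", (body or "").upper()))
    if tokens & STOP_KEYWORDS:
        return "stop"   # revoke SMS consent immediately, confirm opt-out, record the event
    if "HELP" in tokens:
        return "help"   # send the help reply without changing consent
    return "other"
```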
Region-Specific Compliance Application (TCPA/GDPR/CCPA)
Given a volunteer's region is inferred from E.164 country code, self-declared location, or IP geolocation When consent UI is rendered and consent is stored Then region-appropriate disclosures are shown (e.g., GDPR legal basis and DPO contact in EU, CCPA notice at collection for CA, TCPA SMS consent language in US) And the method of region determination is stored in the consent record And if the region is ambiguous, the strictest consent requirements are applied and recorded And consent text versions can be configured per region and are referenced in the record
Audit-Ready Consent Export Reports
Given an admin with export permission requests a consent audit for a date range, campaign, or subject When the export is generated Then a CSV and JSON file is produced including all consent and opt-out events with required fields and current state per channel And the export includes headers, schema version, generating user, generated timestamp, and a SHA-256 checksum And the export is available for secure download within 5 minutes and is logged in the audit trail
Instant District Detection
"As a volunteer, I want my legislative district detected automatically so that my actions are targeted to the right representatives without extra steps."
Description

Kick off district detection immediately upon link open by requesting precise or approximate location permission with privacy-first copy; gracefully fall back to ZIP+4 or street address entry. Resolve districts and legislators via civic data providers with real-time matching, caching, and retry logic. Handle edge cases (new districts, PO Boxes, campus housing) and allow manual correction. Persist district to the profile for use in targeting, scripts, and action routing.
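A sketch of the detection cascade described above (precise location, then ZIP+4, then street address, then manual review); the `resolve_*` callables stand in for civic data provider lookups, and the 0.9 confidence cutoff mirrors the acceptance criteria below.

```python
def detect_district(location=None, zip_plus_four=None, street_address=None,
                    resolve_by_point=None, resolve_by_zip=None, resolve_by_address=None):
    """Try each signal in order of precision and return (districts, source).

    Each resolver is expected to return a list of candidates shaped like
    {"districts": [...], "confidence": 0.95}. Returns (None, 'needs_input')
    when the caller should prompt for the next fallback step."""
    if location and resolve_by_point:
        candidates = resolve_by_point(location)
        if candidates and candidates[0]["confidence"] >= 0.9:
            return candidates[0]["districts"], "geolocation"
    if zip_plus_four and resolve_by_zip:
        candidates = resolve_by_zip(zip_plus_four)
        if len(candidates) == 1:
            return candidates[0]["districts"], "zip4"
    if street_address and resolve_by_address:
        candidates = resolve_by_address(street_address)
        if candidates:
            return candidates[0]["districts"], "address"
    return None, "needs_input"
```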

Acceptance Criteria
Precise Location Permission - Instant District Match
Given a volunteer opens a valid Magic Join Link for the first time When the join page loads Then a geolocation permission prompt appears within 1 second with on-screen copy stating: "We use your location only to detect your district. We don't store your exact location. You can enter an address instead." When the volunteer allows precise location Then state lower chamber, state upper chamber, and US House districts are resolved via the configured provider within 3 seconds And the matched district names and current legislators are displayed And district and legislator IDs are persisted to the volunteer profile And the first task starts with a district-specific script
Approximate Location Only - ZIP+4 Assisted Resolution
Given the volunteer allows approximate location only When the provider returns multiple candidate districts or confidence < 0.9 Then the UI prompts for ZIP+4 with format validation (#####-#### or #########) When a valid ZIP+4 is submitted Then the correct districts are resolved within 5 seconds And the result is displayed and persisted to the profile And if ambiguity remains after ZIP+4, the UI escalates to street address entry
Location Permission Denied - Address Entry Fallback
Given the volunteer denies location permission Then the enrollment is not blocked and an address entry form is shown immediately And PO Box addresses are rejected with the message: "PO Boxes can't determine districts. Please enter a street address." And address inputs support autocomplete and USPS validation When a valid street address is submitted Then districts are resolved within 5 seconds And the matched districts and legislators are displayed And the district IDs are persisted and the volunteer proceeds to the first task
New or Unmapped District - Provisional Match with Retry
Given the provider response is unknown/unmapped due to recent boundary changes Then the system uses the most recent available boundary set and marks the match as Provisional (non-blocking) And a background retry is scheduled every 6 hours for up to 48 hours When a retry succeeds Then the profile is updated to the definitive districts, changes are audited, and the volunteer is notified on next session If all retries fail Then the profile remains Provisional and an admin alert is created
Campus Housing or Shared Address - Manual Correction Flow
Given the address geocodes near a district boundary or a multi-unit building is detected Then the UI offers a "Review district" option When the volunteer chooses to review Then a map and search allow pin adjustment or address search to select the correct district And the selected district is validated against the provider When confirmed Then the manual override (source=manual) is saved to the profile and used for all subsequent actions without re-prompt
Caching and Reopen Behavior - No Re-Prompt and Fast Load
Given the volunteer profile already has district IDs set within the last 30 days When the volunteer reopens a Magic Join Link or starts a new action Then no geolocation permission prompt is shown And the cached district is used to proceed within 500 ms And a silent background refresh checks for boundary updates If a discrepancy is detected Then the user is prompted to confirm the update before the profile is changed
Privacy-First Permission Prompt Content and Accessibility
Given any permission or fallback screen in the detection flow Then copy must include purpose (detect district), scope (no exact location stored), and an alternative path (enter address instead) And a link to the Privacy Policy is visible and opens in a new tab And a "Skip and enter address" action is always visible without scrolling And all elements meet WCAG 2.1 AA (keyboard navigable, focus visible, screen-reader labels present) And the flow is usable at 320px viewport width without horizontal scrolling
Seamless First Task Handoff
"As a volunteer, I want to be taken straight to my first action after joining so that I can start contributing immediately with minimal friction."
Description

Upon successful verification and consent, route volunteers directly to their best next action page with prefilled profile and district context. Support dynamic task selection by campaign rules, A/B tests, and inventory (e.g., calls vs. emails). Ensure a two-tap maximum to start the first action, with progress tracking, latency budgets, and offline-safe UI fallbacks if district detection lags. Record handoff and completion events for conversion measurement.

Acceptance Criteria
Direct Handoff After Verification and Consent
Given a volunteer completes phone verification and explicit consent via Magic Join Link When the verification success event is emitted Then the app redirects the volunteer directly to the selected action page within 300 ms and without any intermediate screens, password prompts, or profile forms
Dynamic Task Selection by Rules, A/B Tests, and Inventory
Given campaign rule set R, A/B test allocation T, and live inventory I And a volunteer V with attributes A qualifies for the campaign When V is routed after verification Then the assigned first task matches rule set R, respects T bucket assignment (sticky for 24 hours based on V.id hash), and only uses channels with available inventory in I And the assignment decision is logged with rule_version, test_bucket, chosen_task_id, and inventory_snapshot_id
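A sketch of a sticky test-bucket assignment derived from a hash of the volunteer ID, as referenced above; the bucket labels and the salt format are illustrative.

```python
import hashlib


def ab_bucket(volunteer_id: str, test_name: str, buckets=("A", "B")) -> str:
    """Deterministically assign a volunteer to an A/B bucket.

    Hashing volunteer_id together with the test name gives a stable
    assignment, so the same volunteer lands in the same bucket on every
    routing decision; the 24-hour stickiness above follows from this
    determinism plus a fixed allocation during that window."""
    digest = hashlib.sha256(f"{test_name}:{volunteer_id}".encode()).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]
```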
Two-Tap Maximum to Start First Action
Given the volunteer lands on the action page via handoff When they initiate the first task (call or email) Then the action can be started in no more than 2 taps from page load, with all required fields prefilled and no mandatory scrolling or text entry And the UI records taps_to_action_start <= 2 in telemetry
District Context and Profile Prefill on Action Page
Given the volunteer’s phone has been verified and profile fields were parsed from the join link When the action page renders Then the visible page shows the volunteer’s district (house/senate as applicable), targeted legislator list, and a bill-status-specific script for that district And profile fields (name, phone, zip/postal code) are prefilled and editable, with zero required empty fields
Latency Budgets and Fallback When District Detection Lags
Given handoff has begun and the action page is loading on a 4G connection When the page becomes interactive Then time_to_interactive <= 1500 ms and first_action_cta_render <= 1000 ms And if district detection takes > 2000 ms, a generic script and “Detecting district…” banner are shown, then seamlessly swap to district-specific content without page reload or loss of user input
Event Logging for Handoff and Completion with Idempotency
Given analytics is enabled When a volunteer is routed and performs the first task Then events handoff_started, handoff_routed, action_started, and action_completed are emitted with fields {event_id, volunteer_id, campaign_id, task_id, variant, district_id, timestamp_ms} And events are delivered at-least-once with de-duplication by event_id and 99% delivered within 60 seconds And the funnel from handoff_routed to action_completed is visible in the dashboard within 5 minutes
Offline-Safe Queueing and Resume for First Task
Given the device is offline during or after handoff When the volunteer opens the action page and attempts the first task Then an offline-safe UI is shown within 500 ms, events are queued locally, and actions that can run offline (e.g., opening dialer or email composer) proceed And upon reconnect within 30 minutes, queued events sync in order and the volunteer resumes the same task state or is routed to the next best task without duplication
Admin Link Builder & Branding
"As a campaign admin, I want to configure and preview branded join links and flows so that the enrollment experience matches our campaign and compliance needs."
Description

Provide an admin UI to configure link parameters (expiration, scopes, campaign, first-task mapping, UTM tags), consent text, and landing copy. Enable brand controls (logo, colors, sender name), preview flows for iOS/Android/desktop, and test-mode links. Support bulk generation via CSV and an API, role-based access control, and validation to prevent misconfiguration. Offer share-ready SMS text snippets that comply with carrier policies.

Acceptance Criteria
Link Parameter Configuration & Validation
Given I am an Admin on the Link Builder When I set expiration within the UI’s allowed range, select scopes from the allowed set, choose an existing campaign, map an existing first task, add UTM tags using allowed keys and URL-safe values, and click Save Then the configuration is saved successfully and the generated link preview reflects these parameters And attempting to use an out-of-range expiration, unknown scope, non-existent campaign/task, or disallowed/malformed UTM key/value blocks save and shows a specific inline error per field And the saved link expires at the configured UTC timestamp
Consent Text & Landing Copy Configuration
Given I am on Consent & Landing Copy settings for a Magic Join Link When I enter consent text and landing copy using allowed formatting and all required merge fields for the selected scopes, then click Preview and Save Then the content is sanitized (no executable code), versioned with editor and timestamp, and stored And the live preview renders exactly the saved content And save is blocked with explicit errors if required disclosures or merge fields are missing
Brand Controls & Accessibility
Given I have Brand settings access When I upload a logo in an allowed format/size, set primary/secondary colors via valid hex codes, and enter a sender name within allowed length/characters, then Save Then the preview updates immediately to show the logo, colors, and sender name And color contrast against body and button text meets WCAG AA or save is blocked with guidance on adjustments And invalid assets or values (unsupported file type/size, bad hex, overly long sender name) are rejected with field-level errors
Cross-Device Preview of Join Flow
Given a saved configuration When I open Preview and switch device to iOS, Android, and Desktop Then each preview renders the correct OS-specific handoff and CTA labels and fallbacks (mobile deep-link/SMS handoff, desktop QR/copy link) And all copy/branding reflect the saved configuration And a Preview Mode watermark is visible And interactions in preview do not emit analytics or send real messages
Test-Mode Links & Analytics Exclusion
Given Test Mode is toggled on When I generate and open a test link Then the link includes a visible test marker in the UI and a test parameter in the URL/metadata And any actions triggered by the link are labeled as test and excluded by default from production dashboards and exports And bulk send and publish actions require explicit confirmation or are disabled for test links
Bulk Generation via CSV and API with RBAC
Given I have a role permitted to bulk generate links When I upload a CSV containing the required columns and valid data and start generation Then the system validates each row, produces links for valid rows, and returns a downloadable results file with per-row status, link URL, and error messages for failures And duplicate protection prevents regenerating links for identical idempotency keys within a set window And when using the API with valid authentication and payload, the endpoint returns per-record results with appropriate HTTP status codes, enforces rate limits, and surfaces validation errors in a structured schema And users without permission receive a 403 and no links are generated
Share-Ready SMS Snippet Compliance
Given a saved configuration When I generate the share-ready SMS snippet Then the snippet includes the configured sender/brand name, a compliant opt-out/HELP line, and the Magic Join link on an approved domain, and is at or under the system’s character limit And merge fields are resolved and no placeholder tokens remain And the compliance checker shows Pass before copy/download is enabled; otherwise, specific reasons are shown and copy/download is blocked
Real-time Enrollment Analytics & Webhooks
"As a campaign manager, I want real-time analytics and webhooks for the join flow so that I can track conversions, attribute outreach, and react quickly to issues."
Description

Expose a live funnel from link sent → clicked → verified → consented → district detected → first task started/completed, with per-link, per-channel, and per-campaign attribution. Provide exports, CSV downloads, and webhooks to CRMs, data warehouses, and Slack for alerts. Include deliverability and error insights (carrier blocks, OTP failures, location denial), retention settings, and PII-safe aggregation for reporting.

Acceptance Criteria
Real-Time Enrollment Funnel Rendering
Given an organization has active Magic Join Links today When a recipient progresses through stages (link_sent, link_clicked, phone_verified, consent_captured, district_detected, first_task_started, first_task_completed) Then the dashboard updates each stage count within 3 seconds of event ingestion And stage transitions are enforced in order; out-of-order events are reordered by event_time and do not create duplicates And cumulative and per-stage conversion rates are displayed with two-decimal precision and correct numerators/denominators And a single volunteer contributes at most one count per stage per campaign (idempotency enforced by volunteer_anonymous_id + campaign_id) And late-arriving events up to 24 hours post-occurrence reconcile prior counts without double-counting And all timestamps render in the org’s timezone while exports/webhooks carry event_time_utc
Attribution by Link, Channel, and Campaign
Given a Magic Join Link has link_id, channel, and campaign_id When a volunteer enters the funnel from that link within a 30-day attribution window Then all subsequent stages attribute to the most recent qualifying touch (last-touch) per campaign And if multiple links are clicked before phone verification, the most recent link_id and channel take precedence And direct entries without link attribution are labeled as Direct/Unknown And the dashboard and exports support filtering by link_id, channel, campaign_id, and show attributed conversion rates And attribution metadata is persisted and included in CSV/export and webhook payloads
Deliverability and Error Insights
Given an SMS send attempt via the Magic Join Link channel When delivery receipts or errors are returned by carriers/providers Then messages are categorized into delivered, failed_carrier_block, filtered_as_spam, unreachable_number, rejected_throttle, and unknown with provider code mapping And OTP outcomes are tracked as otp_sent, otp_verified, otp_failed_timeout, otp_failed_mismatch And location stage outcomes are tracked as district_detected, location_denied_user, location_denied_os, geocode_failed And the dashboard displays send rate, delivery rate, click-through rate, OTP success rate, and district detection rate with numerators/denominators and date filters And when a failure rate exceeds a 15-minute rolling baseline by >3 standard deviations with at least 100 attempts, an alert banner is shown and an optional Slack alert is emitted if configured
Outbound Webhooks to CRMs, Warehouses, and Slack
Given an admin configures a webhook destination (CRM, Warehouse, or Slack) with a verified endpoint and signing secret When any funnel event occurs (sent, clicked, verified, consented, district_detected, first_task_started, first_task_completed, deliverability_error) Then a POST is sent within 5 seconds containing a versioned JSON payload, HMAC-SHA256 signature header, and timestamp And 2xx responses are treated as success; 429 and 5xx responses are retried up to 5 times with exponential backoff (10s, 30s, 90s, 270s, 810s) and jitter; other 4xx are not retried And failed deliveries after retries are recorded in a dead-letter queue with replay capability from the UI And Slack destinations format human-readable alerts and include a deep link to the event in the dashboard while redacting PII And per-destination rate limits are enforced to avoid exceeding provider limits, with backpressure visible in admin logs
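A sketch of the signature and retry behavior above; the header names and payload framing are assumptions (the criteria fix only the HMAC-SHA256 algorithm and the 10s/30s/90s/270s/810s schedule), and `post` stands in for the HTTP client.

```python
import hashlib
import hmac
import json
import random
import time

RETRY_DELAYS = [10, 30, 90, 270, 810]   # seconds, per the schedule above


def sign_payload(secret: bytes, body: bytes, timestamp: int) -> str:
    """HMAC-SHA256 over the timestamp and body, sent alongside a timestamp header."""
    return hmac.new(secret, f"{timestamp}.".encode() + body, hashlib.sha256).hexdigest()


def deliver(event: dict, post, secret: bytes) -> str:
    """Attempt delivery, retrying 429/5xx with exponential backoff and jitter.

    `post(body, headers)` returns an HTTP status code. Other 4xx responses are
    not retried; exhausted retries should land the event in the dead-letter
    queue for replay from the UI."""
    body = json.dumps(event, sort_keys=True).encode()
    for attempt in range(len(RETRY_DELAYS) + 1):
        ts = int(time.time())
        headers = {"X-RallyKit-Timestamp": str(ts),
                   "X-RallyKit-Signature": sign_payload(secret, body, ts)}
        status = post(body, headers)
        if 200 <= status < 300:
            return "delivered"
        if status != 429 and status < 500:
            return "failed_no_retry"
        if attempt < len(RETRY_DELAYS):
            time.sleep(RETRY_DELAYS[attempt] + random.uniform(0, 5))  # jitter
    return "dead_letter"
```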
On-Demand and Scheduled CSV Exports
Given a user selects a date range, campaign(s), channel(s), and attribution filters in the analytics view When they request an export Then a CSV is generated with headers including event_time_utc, org_tz, volunteer_anonymous_id, stage, link_id, channel, campaign_id, reason_code, device_type, locale, geo_precision, and excludes PII And exports larger than 100 MB are chunked and gzipped; a secure download URL is provided and expires after 24 hours And scheduled exports can be configured hourly/daily to S3 or GCS with customer-managed bucket paths and IAM roles, or emailed as a signed link And export jobs include success/failure status, row counts, and checksum; failures are retried up to 3 times And filter criteria are persisted with the export job for auditability
PII-Safe Aggregation and Privacy Controls
Given aggregate reporting views are loaded When a cohort or slice contains fewer than 10 unique volunteers in the selected period Then counts are suppressed (shown as <10) and excluded from derived rate calculations to preserve privacy And dashboards and exports use volunteer_anonymous_id; phone numbers, names, and exact addresses are never displayed or exported And all webhook payloads include only anonymized identifiers unless the destination is explicitly marked PII-allowed (default off) and shows a warning And a data dictionary documents all fields and privacy status; fields marked sensitive cannot be enabled for export without admin approval And access to raw event views is restricted by role; access attempts are audited
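A sketch of the small-cell suppression rule above; the threshold constant simply restates the criterion, and the helper name is illustrative.

```python
SUPPRESSION_THRESHOLD = 10  # cohorts under 10 unique volunteers are suppressed


def render_count(unique_volunteers: int):
    """Return (display_value, exclude_from_rates) for an aggregate cell."""
    if unique_volunteers < SUPPRESSION_THRESHOLD:
        return "<10", True   # suppressed and excluded from derived rate calculations
    return str(unique_volunteers), False
```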
Data Retention Settings and Purge Operations
Given an admin sets raw_event_retention_days and aggregate_retention_days in org settings When the retention threshold is reached Then raw events older than the threshold are purged nightly; aggregates are retained per their own threshold And purge operations are atomic, logged with counts and time windows, and exposed in an audit log with admin identity And legal_hold flags at the campaign level pause purge for those campaigns until cleared And users requesting exports that include soon-to-be-purged data receive a warning with the scheduled purge time And webhooks are not emitted for purge actions; downstream systems rely on their own retention policies

Team Autopilot

Automatically assigns new volunteers to the best-fit team using district, event code, capacity, language, and skill preferences. Sends an instant welcome with the right chat link, shift info, and first assignment so volunteers know exactly where to go and leaders spend zero time triaging.

Requirements

Volunteer Profile Intake
"As a new volunteer, I want to share my district, languages, skills, and availability so that I am matched to the right team without extra back-and-forth."
Description

Capture and persist structured volunteer attributes required for automated assignment, including district, language(s), skills, availability window(s), preferred communication channel, and consent flags. Provide validated intake via RallyKit forms, API, and manual admin entry with idempotent upsert and sensible defaults. Normalize data (e.g., language ISO codes, district IDs), deduplicate by contact identifiers, and map profiles to existing RallyKit contacts. Expose a read API for the matching engine and ensure PII is secured and access-controlled.

Acceptance Criteria
Validated Web Form Intake with Normalization and Defaults
- Given a volunteer completes the RallyKit intake form, When they submit, Then the system validates required fields (at least one contact identifier: email or phone; district or mappable address; consent flags for data processing and outreach) and displays inline errors for any missing/invalid inputs.
- Given language selections, When the form is submitted, Then languages are stored as ISO 639-1 codes and any non-supported codes are rejected with an error.
- Given a district entered as text or derived from address, When saved, Then it is normalized to a canonical district ID (e.g., state-chamber-district) and persisted.
- Given availability windows entered in local time, When saved, Then they are validated for format and non-overlap, converted to UTC, and the source timezone is recorded.
- Given preferred communication channel is not selected, When a mobile number is present, Then the default preferred_channel is SMS; otherwise the default is Email.
- Given the submission passes validation, When saved, Then a profile_id is created, created_at/updated_at timestamps are set, and the record is retrievable in Admin and via the read API within 2 seconds.
Idempotent Profile Upsert via Public API
- Given an authorized client sends PUT /v1/volunteers/{external_id} with a valid payload, When the external_id is new, Then the API creates a profile (201 Created) and returns profile_id and normalized fields.
- Given the same PUT request is retried with identical payload, When processed, Then no duplicate profile is created, the same profile_id is returned, and the response is 200 OK.
- Given POST /v1/volunteers with an Idempotency-Key header, When the same key and body are replayed within 24 hours, Then the API returns the original result without creating duplicates.
- Given partial updates via PATCH /v1/volunteers/{id}, When fields are omitted, Then omitted fields remain unchanged and provided fields are validated and normalized.
- Given invalid schema or field values, When the request is processed, Then the API returns 400 with field-specific errors; 401 for missing/invalid auth; 403 for insufficient scope.
- Given successful upsert, When the response is returned, Then it includes updated_at, an ETag/version, and all normalized codes (district_id, languages ISO codes) with p95 latency ≤ 300 ms for payloads ≤ 10 KB.
Admin Console Manual Entry with Validation and Audit
- Given a user with role Org Admin or Volunteer Manager opens Create Volunteer in Admin, When they save a new profile, Then the same validations and normalizations as the public form are enforced.
- Given defaults are applicable, When a field is left blank (e.g., preferred_channel), Then sensible defaults are applied consistent with product rules.
- Given any create or update in Admin, When saved, Then an audit log entry is recorded with actor, timestamp, operation, and before/after field values for PII-safe diffs.
- Given a user without the required role attempts a create or update, When the action is attempted, Then access is denied with an explanatory message and no changes are persisted.
- Given an existing profile is edited, When changes are saved, Then updated_at increments and changes are visible in the read API within 2 seconds.
Deduplication and Contact Mapping on Intake
- Given a new intake (form, API, or Admin) includes an email and/or phone matching an existing RallyKit contact, When saved, Then the profile is mapped to that contact_id and no new contact is created.
- Given an intake matches an existing volunteer profile by any unique key (normalized email, E.164 phone, external_id), When upserted, Then the existing profile is updated rather than creating a duplicate.
- Given overlapping attribute sets, When merging, Then union-able fields (languages, skills, availability windows) are merged without duplicates; incoming non-null values overwrite nulls; explicit district_id values take precedence over derived ones.
- Given conflicting high-risk identity signals (e.g., same phone but different legal name and explicit do-not-contact flag differences), When detected, Then the system returns 409 Conflict (for API) or blocks save (for UI) and records a review-required audit event.
- Given a merge occurs, When the operation completes, Then the response includes the preserved profile_id and a list of merged fields; an audit log captures source and merge rationale.
Read API for Matching Engine with Filtering and Pagination
- Given an authorized matching client, When it calls GET /v1/volunteers?updated_since=ISO8601&district_id=...&language=...&limit=..., Then the API returns only profiles matching filters with normalized fields, paginated via a stable cursor, default limit 100 and max 1000.
- Given large result sets, When pagination is used, Then next/prev cursors are provided and results are consistently ordered by updated_at,profile_id.
- Given field minimization, When the client does not request PII scope, Then the API returns only non-PII attributes needed for matching and redacts contact info.
- Given successful responses, When headers are returned, Then each page includes ETag and Last-Modified; conditional requests with If-None-Match or If-Modified-Since return 304 when appropriate.
- Given authentication/authorization, When a token is missing or lacks scope matching.read, Then the API returns 401/403 respectively.
- Given performance SLOs, When requesting 100 records, Then p95 latency is ≤ 300 ms and error rate ≤ 0.1% over a rolling 5-minute window.
PII Security and Access Control for Volunteer Profiles
- Given PII fields (name, email, phone, address, consent flags), When stored, Then they are encrypted at rest with KMS-managed keys and transmitted only over TLS 1.2+ with HSTS enabled.
- Given role-based access control, When a user or client without PII-read permission accesses profiles, Then PII fields are redacted and access is logged; authorized roles/clients can view per least-privilege scopes.
- Given consent flags, When outreach consent is false, Then preferred contact channels are excluded from read API responses for matching and a do_not_contact flag is exposed.
- Given system logging, When requests/responses are logged, Then PII is excluded or masked; secrets and tokens are never logged.
- Given any profile read or write, When it occurs, Then an immutable audit event is recorded with actor/client_id, purpose, and timestamp.
Event Code Routing
"As an organizer launching a pop-up drive, I want signups tied to an event code so that volunteers are routed to teams working that specific effort."
Description

Ingest and validate event codes from sign-up links, QR codes, and landing pages to scope candidates for assignment to the correct campaign or initiative. Map each event code to eligible teams, geography, and time bounds; support code expiration and fallbacks (e.g., route to at-large team). Pass event code context to the matching engine and welcome messaging. Provide admin UI and API to create, edit, and report on event codes and their routing outcomes.

Acceptance Criteria
Ingest and validate event codes from URLs, QR scans, and landing pages
Given a signup URL includes an event_code parameter, When the page loads and the user submits the form, Then the system validates presence, allowed pattern, and active existence of the code, And attaches the validated code to the user's session. Given a QR code deep link includes an event_code, When opened on mobile and the user continues signup, Then the same validation rules apply, And invalid codes are rejected with a user-visible error and an analytics event invalid_code recorded with reason. Given a landing page includes a free-form event code field, When the user enters a code and submits, Then whitespace is trimmed, matching is case-insensitive, And unknown or malformed codes are rejected with a clear error and guidance to try again or proceed without a code. Given any invalid or missing code, When the user proceeds, Then no code context is attached to the session, And the attempt is logged without blocking signup.
Route candidates using mapped teams, geofence, and time window
Given a validated event code exists, When a candidate is evaluated for assignment, Then the eligible teams list is restricted to the teams mapped to that code. Given the event code has a geographic scope, When the candidate provides location or district, Then only teams whose coverage intersects the code's geofence remain eligible. Given the event code has start and end times, When current time is evaluated, Then only teams are considered if the time is within the code's active window (inclusive of boundaries). Given the filters eliminate all mapped teams, When routing is computed, Then the system marks fallback_needed = true with reason no_eligible_team.
Handle expired or invalid codes with at-large fallback
Given an event code is expired or not yet active, When a candidate attempts signup, Then the system does not route to mapped teams, And sets fallback_needed = true with reason expired_or_inactive. Given fallback_needed = true, When routing proceeds, Then the candidate is assigned to the configured at-large (or default fallback) team within the same campaign, And the assignment is flagged as fallback with the captured reason. Given no fallback team is configured, When fallback would be applied, Then no team is assigned, And an alert is logged for administrators with code, campaign, and reason. Given a fallback assignment occurs, When outcomes are reported, Then the event is counted under fallback with the correct reason in reporting.
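A sketch of the routing filter described in the two criteria above: restrict to mapped teams, intersect with the code's geography, check the time window inclusively, and signal fallback with a reason when nothing remains. The data shapes are illustrative.

```python
from datetime import datetime


def route_candidate(event_code: dict, candidate_districts: set, now: datetime):
    """Return (eligible_team_ids, fallback_needed, reason) for a validated code.

    `event_code` is assumed to carry mapped team records, an optional set of
    covered districts per team, and optional start/end datetimes."""
    start, end = event_code.get("start"), event_code.get("end")
    if (start and now < start) or (end and now > end):
        return [], True, "expired_or_inactive"

    eligible = []
    for team in event_code["teams"]:
        coverage = team.get("districts")            # None means no geographic limit
        if coverage is None or coverage & candidate_districts:
            eligible.append(team["id"])

    if not eligible:
        return [], True, "no_eligible_team"
    return eligible, False, None
```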
Pass event code context to matching engine and welcome messaging
Given a validated code or a fallback resolution, When invoking the matching engine, Then the payload includes eventCode, campaignId, eligibleTeams (post-filter), geofence, timeWindow, isFallback, and fallbackReason. Given an assignment is returned, When generating the welcome message, Then the message builder receives the same context and populates campaign name, team chat link, next shift info, and first assignment instructions based on the final assigned team. Given a fallback assignment, When generating the welcome message, Then fallback-specific copy is used and no placeholders remain unresolved. Given an assignment and message are produced, When auditing the transaction, Then the audit record contains the event code, context fields, final team, message template id, and timestamps.
Admin UI to create and edit event codes with audit trail
Given a user with Manage Event Codes permission, When creating a new event code, Then the UI requires code (unique), campaign/initiative, eligible teams, optional geofence, optional start/end time, optional fallback team, and description. Given a code value is entered, When validated, Then it must be unique, conform to allowed characters, and not exceed the max length; start time must be before end time if both are provided. Given all fields are valid, When the admin clicks Save, Then the code is created, a preview of routing scope is available, and version 1 is stored; an audit log entry records who, when, and the initial values. Given an existing code is edited, When changes are saved, Then a new version is created with required change reason, diffs are captured in the audit log, and prior versions are read-only with an option to restore. Given a code is deactivated, When status is set to inactive, Then new signups no longer route via this code while existing sessions keep their previously attached context.
Event code management API for create, read, update, and list
Given a valid API client with appropriate scope, When POST /event-codes is called with a complete payload, Then the API returns 201 with the created resource and schema-valid fields, And the code is immediately usable. Given a duplicate code value is submitted, When POST /event-codes is called, Then the API returns 409 conflict with a machine-readable error. Given an existing code, When GET /event-codes/{code} is called, Then the API returns 200 with full mapping details including teams, geofence, timeWindow, fallback team, status, and current version. Given an update request, When PATCH /event-codes/{code} includes valid changes, Then the API returns 200 with the updated resource and increments the version; invalid fields return 422 with details. Given abusive or unauthorized access, When requests lack valid auth, exceed rate limits, or access forbidden campaigns, Then the API returns 401/403/429 respectively, and no state changes occur; all endpoints are documented in OpenAPI and conform to schema validation.
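As a sketch of the create-path validation only (not the actual API implementation), the payload shape and error mapping below mirror the criteria above; the field names and allowed-code pattern are assumptions.

```typescript
// Illustrative validation for POST /event-codes; schema and error payloads are assumptions.
interface EventCodePayload {
  code: string;
  campaignId: string;
  eligibleTeamIds: string[];
  timeWindow?: { start?: string; end?: string }; // ISO 8601
  fallbackTeamId?: string;
  description?: string;
}

const CODE_PATTERN = /^[A-Za-z0-9_-]{3,32}$/; // assumed allowed characters and max length

function validateCreate(payload: EventCodePayload, existingCodes: Set<string>) {
  // Duplicate code values map to 409 Conflict with a machine-readable error.
  if (existingCodes.has(payload.code.toLowerCase())) {
    return { status: 409 as const, error: { code: "duplicate_code" } };
  }

  // Field-level problems map to 422 with per-field details.
  const errors: Record<string, string> = {};
  if (!CODE_PATTERN.test(payload.code)) errors.code = "invalid_format";
  if (payload.eligibleTeamIds.length === 0) errors.eligibleTeamIds = "at_least_one_team_required";
  const { start, end } = payload.timeWindow ?? {};
  if (start && end && new Date(start) >= new Date(end)) errors.timeWindow = "start_must_precede_end";

  return Object.keys(errors).length > 0
    ? { status: 422 as const, errors }
    : { status: 201 as const }; // caller persists the resource at version 1
}
```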
Reporting on routing outcomes and code performance
Given event codes are in use, When viewing the Event Codes report for a selected date range, Then each code shows total signups, valid code hits, expired/inactive hits, routed assignments, fallback assignments by reason, team distribution, and conversion to first assignment completion. Given filters are applied, When filtering by campaign, team, and source (link/QR/landing), Then results update and totals remain internally consistent. Given a user exports the report, When clicking Export CSV, Then a CSV downloads containing the on-screen aggregates and per-code rows; an optional detail export contains timestamped routing events with candidate and assignment ids. Given any metric is displayed, When totals are computed, Then counts and percentages are accurate (routed + fallback + invalid = total code hits) and match underlying event logs. Given the user drills into a code, When they open recent events, Then the last 100 events show timestamp, code, candidate id, routing outcome, assigned team, and fallback reason (if any), suitable for audit.
Smart Team Matching Engine
"As an operations lead, I want new volunteers automatically assigned to the best-fit team so that leaders spend zero time triaging."
Description

Implement a deterministic, explainable scoring and rules engine that selects the best-fit team per new volunteer using weighted criteria: district proximity, language match, skill fit, availability overlap, event code eligibility, and current team capacity. Support hard constraints (must-match language, capacity ceilings) and tunable weights per organization. Provide a low-latency service (<2s p95) with retry and graceful handling of incomplete data, and expose an API/webhook for upstream intake and downstream messaging triggers.
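A minimal sketch of weighted scoring with the deterministic tie-break chain described in the acceptance criteria below; the criterion names, normalization, and candidate shape are assumptions.

```typescript
// Illustrative scoring sketch; field names and normalization are assumptions.
interface CandidateTeam {
  teamId: string;
  normalized: Record<string, number>; // each criterion pre-scaled to 0..1
  capacityRemaining: number;
  skillMatchCount: number;
  districtDistanceKm: number;
}

function pickTeam(candidates: CandidateTeam[], weights: Record<string, number>) {
  const score = (c: CandidateTeam) =>
    Object.entries(weights).reduce((sum, [k, w]) => sum + w * (c.normalized[k] ?? 0), 0);

  if (candidates.length === 0) return { decision: "no_match" as const };

  // Higher score wins; ties resolve by capacity remaining, skill matches,
  // district proximity, then lexicographically lowest team_id, so identical
  // inputs always return the same team.
  const ranked = [...candidates].sort(
    (a, b) =>
      score(b) - score(a) ||
      b.capacityRemaining - a.capacityRemaining ||
      b.skillMatchCount - a.skillMatchCount ||
      a.districtDistanceKm - b.districtDistanceKm ||
      a.teamId.localeCompare(b.teamId)
  );

  return { decision: "assigned" as const, teamId: ranked[0].teamId, finalScore: score(ranked[0]) };
}
```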

Acceptance Criteria
Deterministic, Explainable Match Decision
Given an identical volunteer payload and org weighting config When the engine is invoked multiple times Then the same team_id and final_score are returned for every invocation Given two or more teams have identical final weighted scores When tie-breaking is required Then selection follows this deterministic order: higher capacity remaining, higher skill match count, closer district proximity, lowest team_id lexicographically, and yields the same team_id across runs Given a successful match decision When the response is returned Then it includes explanation with per-criterion raw values, normalized scores, applied weights, rule hits/misses, final weighted score, tie-breaker path, decision_id, and config_version
Hard Constraints: Language and Capacity Enforcement
Given org config marks language as must_match When a team does not include the volunteer's required language Then that team is excluded and reason_codes include language_mismatch Given a team at or above its capacity_ceiling When matching occurs without admin_override=true on the request Then that team is excluded and reason_codes include capacity_full Given no teams satisfy all hard constraints When matching completes Then the engine returns HTTP 200 with decision=no_match and aggregates reason_codes for the blocking constraints
Tunable Weights per Organization
Given an org submits a weights configuration with non-negative values and at least one weight > 0 When the configuration is saved Then a new config_version is created and becomes effective for new matches within 60 seconds Given an org has no custom weights When matching runs Then default system weights are applied and the response includes config_version=default Given an invalid weights payload (e.g., negative numbers, non-numeric) When save is attempted Then the API returns 422 with field-level validation errors and the previous config remains active Given a match occurs When explanation is generated Then it references the active config_version and exact weight values used
Latency, Resiliency, and Incomplete Data Handling
Given 10,000 production requests over a rolling 24-hour window When measuring POST /v1/match end-to-end latency Then p95 < 2000ms and p99 < 3500ms with 0 unhandled timeouts Given transient 5xx errors from a dependent read service When fetching team or org data Then the engine retries up to 2 times with exponential backoff starting at 100ms and returns 503 only after retries are exhausted, including retry_count in the response Given non-hard attributes are missing (e.g., availability, skills) When matching runs Then the engine proceeds using available attributes, sets explanation.data_gaps accordingly, and returns a valid decision Given hard-constraint data is missing (e.g., language unknown) When matching runs Then the engine returns decision=needs_more_data with required_fields listed and does not return 5xx
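The retry behavior above could look like the following sketch, assuming a generic async dependency call; the 2-retry limit and 100 ms base delay come from the criterion, everything else is illustrative.

```typescript
// Illustrative retry wrapper; error shape and status handling are assumptions.
async function withRetries<T>(
  call: () => Promise<T>,
  maxRetries = 2,    // retry up to 2 times per the criterion
  baseDelayMs = 100  // exponential backoff starting at 100 ms
): Promise<{ value: T; retryCount: number }> {
  let retryCount = 0;
  for (;;) {
    try {
      return { value: await call(), retryCount };
    } catch (err) {
      if (retryCount >= maxRetries) {
        // Only after retries are exhausted does the engine surface a 503.
        throw Object.assign(new Error("dependency_unavailable"), { status: 503, retryCount });
      }
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** retryCount));
      retryCount += 1;
    }
  }
}
```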
API and Webhook Integration for Intake and Messaging Triggers
Given an authenticated POST /v1/match with a schema-valid payload When invoked Then the API returns 200 with {decision, team_id|null, final_score, explanation, decision_id, config_version, created_at} Given header X-Idempotency-Key is provided When duplicate requests with the same body are received within 24 hours Then the same decision_id and body are returned with idempotency_hit=true Given an org has a webhook URL configured When a decision is produced in {assigned, no_match, needs_more_data} Then an HMAC-SHA256 signed webhook is delivered within 2 seconds and retried with exponential backoff for up to 24 hours on non-2xx responses Given a request fails schema validation When /v1/match is called Then the API returns 400 with machine-readable error details Given missing or invalid credentials When /v1/match is called Then the API returns 401 or 403 accordingly
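A minimal sketch of HMAC-SHA256 webhook signing and verification consistent with the criterion above; the hex encoding and timing-safe comparison are conventional choices, not a documented RallyKit contract.

```typescript
// Illustrative webhook signing; header name and encoding are assumptions.
import { createHmac, timingSafeEqual } from "node:crypto";

function signWebhook(body: string, secret: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

function verifyWebhook(body: string, receivedSignature: string, secret: string): boolean {
  const expected = Buffer.from(signWebhook(body, secret), "hex");
  const received = Buffer.from(receivedSignature, "hex");
  // timingSafeEqual avoids leaking signature content through timing differences.
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```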
Event Code Eligibility and Availability Overlap Matching
Given a volunteer request includes event_code When matching runs Then only teams whose eligible_event_codes include the code are considered; otherwise decision=no_match with reason_codes including event_code_ineligible Given volunteer availability windows and team shift windows in differing IANA time zones When overlap is computed Then daylight saving rules are respected and availability_match=true is set when overlap_minutes >= 30 and explanation includes overlap_minutes Given no event_code is provided When matching runs Then event code eligibility is not applied and no scoring penalty is assessed
Audit Record and Observability
Given any decision is produced When persistence occurs Then an immutable audit record is stored with hashed volunteer_id, input snapshot, weights, rule outcomes, final score, decision, tie-breaker path, actor, and timestamp, with retention >= 1 year Given the service is operating When metrics are emitted Then request_count, decision_count by type, latency (p50/p95/p99), retry_count, match_rate, and constraint_block_rate are published with SLO alerts on p95 latency and 5xx rate Given an auditor requests an export for a date range When the export API is called Then a CSV or JSON file is generated within 5 minutes with PII redactions per org policy and includes decision_id, timestamps, reason_codes, and scoring details
Capacity & Overflow Management
"As a team lead, I want assignments to respect my team’s capacity so that we neither overload nor leave volunteers idle."
Description

Track real-time team capacity with configurable limits and buffers. Enforce capacity during matching, place excess volunteers on a waitlist, or spill to predefined sibling/at-large teams based on rules. Allow dynamic capacity updates from team leads, auto-rebalancing when capacity opens, and notify affected volunteers and leads. Prevent rapid over-assignment via throttles and provide visibility into remaining capacity across teams.
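To illustrate the soft-cap/overflow decision (soft cap = capacity − buffer), here is a sketch with assumed field names and overflow modes matching the criteria below.

```typescript
// Illustrative overflow decision; TeamState fields and modes are assumptions.
interface TeamState {
  id: string;
  capacity: number;            // hard capacity
  buffer: number;              // soft cap = capacity - buffer
  assignedCount: number;
  overflowMode: "waitlist" | "spill";
  siblingPriority?: string[];  // spill order
  fallbackTeamId?: string;     // at-large team
}

type Placement =
  | { kind: "assign"; teamId: string }
  | { kind: "waitlist"; teamId: string }
  | { kind: "spill"; candidates: string[] };

function placeVolunteer(team: TeamState): Placement {
  const softCap = team.capacity - team.buffer;
  if (team.assignedCount < softCap) return { kind: "assign", teamId: team.id };
  if (team.overflowMode === "waitlist") return { kind: "waitlist", teamId: team.id };

  // Spill: try siblings in priority order, then the at-large team, else waitlist.
  const candidates = [
    ...(team.siblingPriority ?? []),
    ...(team.fallbackTeamId ? [team.fallbackTeamId] : []),
  ];
  return candidates.length > 0 ? { kind: "spill", candidates } : { kind: "waitlist", teamId: team.id };
}
```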

Acceptance Criteria
Enforce Capacity and Buffer During Autopilot Matching
Given Team T has capacity=20 and buffer=3 and an overflow mode configured And 16 volunteers are currently assigned to Team T When 5 additional volunteers are matched to Team T by Team Autopilot within 60 seconds Then only 1 volunteer is assigned before reaching the soft cap of 17 (capacity - buffer) And the remaining 4 volunteers are handled by the configured overflow mode (waitlist or spill), without exceeding the hard capacity of 20 And the assignment count for Team T updates in real time (<=2 seconds) for subsequent matches
Waitlist Placement When Team Is Full
Given Team T has overflow=waitlist, capacity=10, and buffer=2 (soft cap=8) And 8 volunteers are already assigned to Team T When a new eligible volunteer is matched to Team T Then the volunteer is added to Team T with status=Waitlisted within 2 seconds And the volunteer receives a notification with their waitlist position and next steps via configured channels And an audit log records time, team_id, volunteer_id, reason="capacity reached (soft cap)", actor="system"
Spillover to Sibling or At-Large Teams
Given Team A has overflow=spill with sibling priority [Team B, Team C] and fallback at-large team ATL And Team A has reached its soft cap while Team B has 3 remaining soft slots, Team C has 0, and ATL has capacity When 2 volunteers are matched for Team A Then both volunteers are assigned to Team B And if all siblings lack soft capacity, volunteers are assigned to ATL; if ATL unavailable, they are waitlisted under Team A And audit logs capture source_team=A, destination_team, rule_applied="spillover", and capacity snapshots
Dynamic Capacity Update by Team Lead
Given User U is a Team Lead for Team T with permission=manage_capacity And Team T has capacity=15 and buffer=3 When U updates capacity to 22 and buffer to 2 and saves the change Then the new limits are applied to matching within 5 seconds And the dashboard reflects capacity=22 and buffer=2 after refresh And a versioned audit record stores old_values, new_values, user_id, timestamp, and reason And unauthorized users receive HTTP 403 when attempting to change capacity or buffer
Auto-Rebalancing from Waitlist When Capacity Opens
Given Team T has 5 volunteers waitlisted in FIFO order and capacity increases by 3 (or assignments drop by 3) When auto-rebalancing runs Then the first 3 waitlisted volunteers are promoted to assigned within 30 seconds And any volunteer already assigned elsewhere is skipped and the next in order is promoted And promoted volunteers and the Team Lead are notified of the changes via configured channels And rebalancing never exceeds hard capacity and respects throttle settings
Assignment Throttling to Prevent Over-Assignment
Given throttle=10 assignments per team per 60 seconds with burst=5 for Team T And Team T receives 30 eligible matches within 60 seconds When Team Autopilot processes these matches Then no more than 10 assignments are committed for Team T in any rolling 60-second window And excess matches are queued and processed in subsequent windows subject to current capacity And hard capacity is never exceeded during or after processing And a metric events.throttle.hit is emitted with team_id and counts for monitoring
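A rolling-window throttle like the one described above could be sketched as follows; the in-memory map stands in for whatever shared store the real service would use.

```typescript
// Illustrative per-team throttle: at most `limit` commits in any rolling window.
class AssignmentThrottle {
  private commitTimes = new Map<string, number[]>(); // teamId -> commit timestamps (ms)

  constructor(private limit = 10, private windowMs = 60_000) {}

  tryCommit(teamId: string, now = Date.now()): boolean {
    // Drop timestamps outside the rolling window.
    const recent = (this.commitTimes.get(teamId) ?? []).filter((t) => now - t < this.windowMs);
    if (recent.length >= this.limit) {
      this.commitTimes.set(teamId, recent);
      return false; // caller queues the match for a later window
    }
    recent.push(now);
    this.commitTimes.set(teamId, recent);
    return true;
  }
}
```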
Capacity Visibility Across Teams Dashboard
Given the Capacity dashboard is opened with filters org=XYZ and district=OH-03 When data loads Then each team row displays: assigned_count, hard_capacity, soft_cap (capacity - buffer), buffer, remaining_soft_capacity, remaining_hard_capacity, waitlist_count, and last_updated timestamp And totals across visible teams are shown in the header And data auto-refreshes at least every 10 seconds and matches API values within a 1% tolerance And CSV export includes the same fields plus generated_at timestamp
Instant Welcome Messaging
"As a first-time volunteer, I want an instant, clear welcome with where to go and what to do so that I can start contributing right away."
Description

Trigger an immediate, localized welcome message upon assignment via the volunteer’s preferred channel (SMS, email, WhatsApp) with the team-specific chat link, next shift info, and first task. Use templating with language localization, merge fields (team name, leader, schedule), deep links to Slack/Discord/WhatsApp, and link tracking. Integrate with messaging providers, respect consent and quiet hours, implement retries and fallbacks, and surface delivery/bounce status to leaders.
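As one example of the quiet-hours rule (21:00–08:00 local, configurable), here is a sketch that resolves the volunteer's local hour from an IANA timezone; the function name and defaults are assumptions.

```typescript
// Illustrative quiet-hours check; defaults mirror the criteria below and are configurable per org.
function inQuietHours(now: Date, timeZone: string, startHour = 21, endHour = 8): boolean {
  // Resolve the hour (0-23) in the volunteer's local timezone.
  const hour = Number(
    new Intl.DateTimeFormat("en-US", { timeZone, hour: "2-digit", hourCycle: "h23" }).format(now)
  );
  // The window wraps midnight: quiet if at/after startHour or before endHour.
  return hour >= startHour || hour < endHour;
}

// A message triggered during quiet hours would be queued and released at endHour local time.
```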

Acceptance Criteria
Immediate Welcome via Preferred Channel
Given a volunteer is newly assigned to a team via Team Autopilot and has a valid preferred channel with documented consent When the assignment event is received Then a localized welcome message is dispatched on the preferred channel within 5 seconds of assignment creation And the message includes: team_name, team_leader_name, team_chat_link (deep link), next_shift_datetime (in volunteer local timezone), and first_task_summary And the send operation returns a provider message ID and timestamp stored against the assignment record And the system prevents duplicate sends for the same assignment via an idempotency key for 15 minutes
Localization and Template Merge Accuracy
Given a volunteer has a language/locale and channel preference When the welcome template is rendered Then the template selected matches the volunteer language; if unavailable, fall back to English And all merge fields (team_name, team_leader_name, next_shift_datetime, first_task_summary, team_chat_link, org_name, event_code) render with non-empty values and no unresolved placeholders And date/time formats match the volunteer locale (e.g., en-US: M/D/YYYY h:mm a; es-MX: DD/MM/YYYY HH:mm) And channel-specific length limits are enforced (SMS ≤ 1600 chars, WhatsApp ≤ 4096 chars, Email subject ≤ 78 chars), with safe truncation rules applied without breaking URLs And all URLs include UTM and a unique click_tracking_id parameter
Consent and Quiet Hours Enforcement
Given a volunteer’s consent status and local timezone are known When an assignment occurs Then messages are only sent on channels where explicit opt-in/consent is recorded for that channel And if no consent exists for the preferred channel but exists for others, the system does not send on the non-consented channel and marks reason=ConsentMissing And quiet hours are enforced between 21:00 and 08:00 in the volunteer’s local timezone (configurable per org) And if the assignment occurs during quiet hours, the message is queued and automatically sent at the start of the next allowed window And all suppressions (consent, quiet hours) are logged with reason, timestamp, and channel for audit
Retry and Channel Fallback Policy
Given a transient failure occurs on send (HTTP 5xx, provider timeouts, or rate-limit) When the system attempts delivery Then it retries up to 3 times with exponential backoff (30s, 2m, 10m) And if final retry fails, it attempts fallback to the next allowed channel based on volunteer channel preference order and org defaults And fallback only occurs on channels with valid consent and outside quiet hours And once any channel confirms delivery, all other pending retries/fallbacks are cancelled And idempotency ensures at most one welcome message is delivered per assignment
Deep Links and Link Tracking
Given the team has a chat space in Slack, Discord, or WhatsApp with a configured deep link When the welcome message is sent Then the chat link resolves to the correct destination for the respective app (Slack/Discord deep link or WhatsApp invite) and includes a web fallback URL And each link in the message contains a unique click_tracking_id tied to the assignment And the first-click event is recorded within 60 seconds of the click with channel, timestamp, and volunteer ID And leaders can see per-link click counts and last-click time on the team dashboard
Delivery and Bounce Status to Leaders
Given messages are dispatched through providers supporting webhooks/callbacks When delivery/bounce/blocked events are received Then the volunteer’s message status updates to one of {Queued, Sent, Delivered, Bounced, Blocked, Suppressed, Failed, Retried} within 60 seconds And provider metadata (message_id, error_code, error_message) is stored and surfaced in the leader dashboard And leaders can filter by status and export a CSV including volunteer, channel, timestamps, status, and provider metadata And aggregate delivery rate and bounce rate are visible per team and per channel for the last 24h and 7d
Provider Integration and Configuration
Given an org configures messaging providers When credentials are saved for Twilio (SMS/WhatsApp) and SendGrid (Email) Then the system validates credentials with a live health check and stores them encrypted And per-environment (staging/production) configuration is supported with safe test modes that prevent real sends And inbound provider webhooks are authenticated (HMAC/Signature) and processed idempotently And if a provider is degraded (health check fails), sends are paused for that channel and surfaced as provider_outage in status until recovery
Admin Overrides & Rule Configuration
"As an admin, I want to adjust matching rules and override assignments so that the system reflects real-world shifts."
Description

Provide an admin interface to configure matching weights, hard constraints, eligible teams per event code, capacity limits, and default fallbacks. Enable authorized staff to manually reassign volunteers, pause assignments to a team, and batch-update rules. Require audit notes on overrides, enforce role-based access control, and preview the impact of rule changes before applying them.

Acceptance Criteria
Rules Configuration: Matching Weights & Hard Constraints
Given I have Admin:AutopilotRules permission When I set weights for district, event code, capacity, language, and skill to integer values between 0 and 100, optionally mark one or more attributes as Hard Constraints, and click Save Then input validation enforces allowed ranges and requires either at least one weight > 0 or at least one Hard Constraint enabled And a new versioned ruleset is created with version id, timestamp, and user, and becomes active immediately And the ruleset change appears in history with a diff of changed fields And auto-assignments created after the save timestamp use the updated weights and constraints
Eligible Teams by Event Code & Default Fallbacks
Given rules define eligible teams for event code EV123 and a default fallback team Fallback-A When a new volunteer with event code EV123 is evaluated by Autopilot Then only the defined eligible teams are considered for assignment And if no eligible team is assignable due to capacity or pause, the volunteer is assigned to Fallback-A And if Fallback-A is unavailable (paused or at capacity), the volunteer is marked Unassigned with reason "No eligible team" and placed in the Unassigned queue And an audit entry records event code, evaluated teams, selection path (eligible → fallback → unassigned), user/system actor, and timestamp
Capacity Limits and Team Pause Enforcement
Given Team T1 has capacity 50 with 50 currently assigned When a new volunteer would be auto-assigned to T1 Then T1 is excluded from consideration because its capacity is reached Given Team T2 is set to Paused When auto-assignment evaluates eligible teams Then T2 is excluded from placement until unpaused And existing assignments on T2 remain unchanged
Manual Reassignment with Required Audit Notes
Given I have Admin:OverrideAssignments permission When I open a volunteer record and choose Reassign to Team T3 Then the system requires an audit note and blocks submission until a non-empty note is provided And if Respect capacity is selected and T3 is full or paused, reassignment is blocked with a clear error stating the reason And if Override capacity is selected, reassignment proceeds regardless of capacity/pause And the audit log records from-team, to-team, user, timestamp, note text, and whether capacity/pause was overridden
Batch-Update Rules with Preview & Apply
Given I select multiple event codes and teams and propose changes to weights, hard constraints, eligible teams, and fallbacks When I click Preview Then a preview shows a diff of proposed changes and a projected impact including counts of currently unassigned volunteers affected and volunteers whose assignment target would change if applied now And no changes are persisted while in Preview When I click Apply Then all changes are applied atomically as a single new versioned ruleset with id, timestamp, and user, or none are applied if any change fails, with an error surfaced to the user And the preview snapshot and apply outcome are written to the audit trail
Role-Based Access Control for Rules and Overrides
Given role-based permissions are defined for Admin:AutopilotRules and Admin:OverrideAssignments When a user without Admin:AutopilotRules attempts to view or edit rule configuration Then access is denied with a 403-equivalent response and the attempt is logged to the audit trail When a TeamLead without Admin:OverrideAssignments attempts to reassign a volunteer outside their team or override capacity/pause Then the action is blocked with a permissions error and logged And users with the requisite permissions can successfully perform the corresponding actions
Assignment Audit & Metrics
"As a program director, I want transparent logs and metrics on assignments so that I can prove outcomes and improve performance."
Description

Record and expose a full audit trail for each assignment, including inputs (profile, event code), rule evaluations, score breakdowns, chosen team, and notification outcomes with timestamps and actors. Provide dashboards and exports for metrics such as assignment time, success rate, capacity utilization, language coverage, welcome message delivery/click-through, and waitlist aging. Support filtering by campaign, event code, team, and time range to meet audit and optimization needs.

Acceptance Criteria
Persist full assignment audit trail
Given a volunteer is assigned by Team Autopilot When the assignment workflow completes Then the system writes an audit record within 2 seconds containing:
- assignment_id (UUID), campaign_id, event_code, volunteer_id, selected_team_id/name
- profile snapshot at decision time (district, languages, skills)
- evaluated rules with inputs and pass/fail results
- score breakdown per candidate team and final rank
- selection rationale (top score or tie-break rule id)
- step timestamps in ISO 8601 UTC (evaluation_start, evaluation_end, selection_time, notification_queued/sent/delivered/failed)
- actor for each step (system or user_id)
And notification outcomes per channel include provider message_id and error codes when applicable
Retrieve and protect audit records (append-only)
Given an assignment_id When a user with Audit Viewer role requests the audit via UI or API Then the full audit record is returned with all fields and a 200 response within 1 second for records under 10KB And when any client attempts to update or delete an existing audit entry Then the system prevents mutation and creates an immutable amendment event linked to the assignment_id with actor and timestamp And all audit entries include created_at and are verifiably unchanged post-write (e.g., checksum/hash present)
Assignment metrics dashboard completeness and correctness
Given a selected time range and filters When viewing the Metrics dashboard Then the following are displayed and computed as defined:
- Assignment time: p50 and p95 from intake_received_at to selection_time (overall and per team)
- Success rate: assigned_count / total_intake_count with failure reasons distribution
- Capacity utilization: assigned_shifts / configured_capacity per team
- Language coverage: percent of assignments matching volunteer preferred language; unmatched count
- Welcome delivery: delivery_rate = delivered/sent; CTR = unique_clicks/unique_delivered
- Waitlist aging: p50, p95, and counts by 0–24h, 24–72h, >72h since waitlist_entered_at
And metric values match a direct query of the underlying event store within 0.5% or 1 record, whichever is greater
Cross-cutting filters (campaign, event code, team, time range)
Given any combination of campaign(s), event code(s), team(s), and time range When filters are applied Then all widgets, tables, and exports reflect the selection consistently And filters support multi-select and inclusive logic; time range supports absolute and relative presets (Last 24h, 7d, 30d, Custom) And filtered results render within 2 seconds for datasets up to 1M records; API supports equivalent query parameters with server-side pagination
Audit and metrics exports
Given a filtered view When the user clicks Export Then the system generates downloadable CSV and NDJSON files containing:
- For audit exports: one row/document per assignment with all audit fields and timestamps
- For metrics exports: aggregates per day and per team plus overall totals with definitions in headers/metadata
And exports include filter context, time zone, schema_version, and are delivered within 2 minutes for up to 1M rows And exported totals match on-screen totals for the same filters
Notification delivery tracking and click-through attribution
Given a welcome notification is sent When delivery webhooks are received Then the assignment’s audit is updated with per-channel status transitions (queued, sent, delivered, failed) including provider message_id and error codes And links in welcome messages include assignment_id and campaign_id identifiers When a supporter clicks Then a click event is recorded and attributed to the assignment and volunteer And CTR on the dashboard equals unique_clicks / unique_delivered for the selected filters and matches raw click logs within 1 record

MicroBrief 60

A 60-second, swipeable training that previews the script, safety notes, and do/don’t basics, ending with a quick confirm. Volunteers start confident and on-message, while organizers get consistent, compliant onboarding—even in fast-moving, day‑of environments.

Requirements

60-Second Swipe Carousel
"As a first-time volunteer, I want a short, swipeable training I can finish in under a minute so that I feel confident and can start making calls or sending emails right away."
Description

Deliver a mobile-first, swipeable training flow capped at 60 seconds that sequences 4–6 concise cards covering overview, script preview, safety notes, and do/don’t basics. Include a progress indicator, optional timer, and clear primary actions that do not auto-advance to ensure comprehension. Support low-bandwidth environments with lightweight assets and prefetching, resume-on-return for interrupted sessions, and event instrumentation on each card view. Integrate with RallyKit session context so the flow loads per-campaign content instantly and respects campaign-level settings (mandatory vs. optional).

Acceptance Criteria
Mobile swipe training flow with required cards and controls
Given a mobile device in portrait and an active RallyKit campaign session When the user opens MicroBrief 60 Then the carousel renders between 4 and 6 cards, including (at minimum) Overview, Script Preview, Safety Notes, and Do/Don’t Basics in that order And a visible progress indicator displays current index and total (e.g., 1/5) and updates correctly on each navigation And primary actions Next and Back are present on all non-final cards; the final card shows Confirm/Finish And swipe left/right gestures and Next/Back taps navigate exactly one card per action; first/last card boundaries prevent further navigation And under no circumstance does the carousel auto-advance a card (including timer expiry or media load) And pressing Confirm marks the training complete and returns control to the host flow
Optional timer displays but never auto-advances
Given the campaign setting Show Timer is enabled When the user starts MicroBrief 60 Then a countdown timer set to 60 seconds is visible across all cards and remains synchronized on navigation And the default timer value is 60 seconds and cannot be configured above 60 seconds by campaign settings And when the timer reaches zero, the user remains on the current card; no auto-advance occurs and controls remain enabled And when Show Timer is disabled, no timer is rendered And timer state (remaining seconds) persists on resume within the same session
Mandatory vs optional training gating behavior
Given campaign Training Requirement is Mandatory When a user attempts to access the campaign action page Then MicroBrief 60 launches before the action page, and no Skip control is available And the action page remains inaccessible until the user presses Confirm on the final card And upon completion, the user is routed to the intended action page Given campaign Training Requirement is Optional When the user views any card Then a Skip Training control is visible and leads to the action page And a skip event is recorded with skipped=true and completed=false
Low-bandwidth performance and prefetching
Given a device on throttled Regular 3G (750 kbps, 300 ms RTT) When the user opens MicroBrief 60 Then the first card becomes interactive within 2.0 seconds And initial transfer for UI + first card content is <= 150 KB gzipped And per-card media payloads are <= 50 KB and lazy-loaded; the next card’s assets are prefetched on idle And swiping to a prefetched card yields <= 100 ms input-to-frame latency And if media fails to load, text-only fallbacks render with no cumulative layout shift > 0.1 And the flow remains fully usable without video; images are optional enhancements
Resume-on-return for interrupted sessions
Given a user viewed at least one card When they leave the flow and return within the same RallyKit session context within 24 hours Then the carousel reopens to the last viewed card (or the next unconfirmed card) with the correct progress indicator And if the timer is enabled, remaining time persists from last known value And if the campaign content version has changed since last view, the flow restarts at card 1 and displays a brief content updated notice
Per-campaign content via session context
Given a valid RallyKit session with campaignId X When MicroBrief 60 is opened Then the cards render campaign X’s content without additional user input And the Script Preview card contains the script variant corresponding to campaign X’s current bill status And campaign-level flags (Mandatory/Optional, Show Timer) are applied immediately And switching to a different campaign in session context updates the content source on the next open And if required content is missing for any required card type, the flow blocks start and surfaces a descriptive error for organizers
Card view and completion analytics instrumentation
Given analytics is enabled When a card is visible for >= 200 ms Then emit a microbrief_card_view event once per distinct view with properties: campaignId, sessionId, userId or deviceId, cardId, cardIndex, isMandatory, timerEnabled, timestamp And when the user navigates away from a card, emit microbrief_card_dwell with dwellMs and cardIndex And upon Confirm, emit microbrief_complete with completed=true, totalDwellMs, totalCards, skipped=false; upon Skip (optional mode), emit completed=false, skipped=true And analytics events are queued offline, retried up to 3 times with exponential backoff, and deduplicated by (sessionId, cardId, viewInstanceId)
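A sketch of the offline analytics queue with deduplication on (sessionId, cardId, viewInstanceId); the event shape and in-memory storage are illustrative stand-ins for persisted client storage.

```typescript
// Illustrative offline queue for card-view analytics; shapes and storage are assumptions.
interface CardViewEvent {
  name: "microbrief_card_view";
  sessionId: string;
  cardId: string;
  viewInstanceId: string;
  payload: Record<string, unknown>; // campaignId, cardIndex, timestamp, ...
}

const seenKeys = new Set<string>();
const pending: CardViewEvent[] = [];

function enqueueCardView(event: CardViewEvent): void {
  // Dedup on (sessionId, cardId, viewInstanceId) per the criterion above.
  const key = `${event.sessionId}:${event.cardId}:${event.viewInstanceId}`;
  if (seenKeys.has(key)) return;
  seenKeys.add(key);
  pending.push(event); // flushed later; delivery retries up to 3 times with backoff
}
```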
Context-Aware Script Preview
"As a volunteer, I want to preview the exact script tailored to my legislator and current bill status so that I know exactly what to say when I take action."
Description

Render the correct district- and bill-status–specific script inside the MicroBrief by pulling the same targeting and templating logic used by RallyKit action pages. Merge user and legislator tokens (e.g., name, district, bill identifier, stance) and show the exact wording the volunteer will use for call and email actions. Handle fallbacks for unknown matches, stale bill status, or missing data with safe, neutral copy. Cache the selected variant for the session and expose the variant ID for downstream analytics and handoff.
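A sketch of the token merge with a neutral fallback, per the description above; the {{token}} syntax, fallback copy, and FALLBACK_VARIANT_ID are illustrative assumptions.

```typescript
// Illustrative token interpolation; template syntax and fallback values are assumptions.
const FALLBACK_VARIANT_ID = "fallback_neutral";
const NEUTRAL_FALLBACK_TEXT =
  "Dear Legislator, I'm a constituent asking you to consider this issue carefully. Thank you.";

function renderScript(
  template: string,
  tokens: Record<string, string | undefined>,
  variantId: string
) {
  const missing: string[] = [];
  const text = template.replace(/\{\{(\w+)\}\}/g, (_match, name: string) => {
    const value = tokens[name];
    if (!value) missing.push(name);
    return value ?? "";
  });

  // Any unresolved token forces the neutral, status-free fallback script.
  return missing.length === 0
    ? { text, variantId }
    : { text: NEUTRAL_FALLBACK_TEXT, variantId: FALLBACK_VARIANT_ID, missing };
}
```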

Acceptance Criteria
Correct Variant Rendering Matches Action Page Logic
Given a volunteer with a resolvable address and a campaign with an active bill status When the MicroBrief loads the script preview Then the rendered text exactly matches the output of the RallyKit action page templating API for the same inputs And the selected variant key equals the variant key returned by the templating API And district-specific phrasing corresponds to the targeted legislator's chamber and district
Token Merge Without Placeholders
Given available tokens for user (first_name, city), legislator (title, last_name, district), and bill (identifier, short_title, stance) When the script preview is rendered Then all tokens are interpolated with the correct values and casing rules defined by the template And no raw placeholders like {{...}} remain in the UI And punctuation and spacing around merged tokens follow the template rules (no double spaces or dangling commas)
Call and Email Script Consistency
Given the campaign supports both call and email actions When the volunteer switches between Call and Email in the MicroBrief Then each channel displays the correct channel-specific script variant And each channel exposes a distinct channel-scoped variant ID And email preview includes a subject line that matches the action page output and preserves line breaks And call preview preserves pause markers or emphasis formatting defined by the template
Safe Fallbacks for Unknown Targeting or Stale Status
Given the system cannot resolve a legislator, bill status, or required tokens within the allowed fetch window or data is missing When the MicroBrief renders the script preview Then a neutral, compliant fallback script is shown with generic salutation (e.g., "Dear Legislator") and no personal names, districts, or status-specific wording And no contradictory or outdated status language is shown And the variant ID is set to a designated fallback code And the UI remains non-blocking and allows the volunteer to proceed
Session-Level Variant Caching
Given a script variant has been selected for the volunteer When the volunteer navigates across MicroBrief steps or reopens the MicroBrief within the same session Then the same variant and text are shown without re-querying the templating API And the cached variant persists per channel (call/email) and does not cross channels And the cache clears on session end (e.g., browser tab close or explicit session reset)
Variant ID Exposed for Analytics and Handoff
Given a script variant is selected When the MicroBrief displays the preview Then the variant ID is stored in session storage under rk.microbrief.variantId.<channel> And an analytics event microbrief_script_selected is emitted with campaign_id, channel, bill_id, legislator_id (if available), and variant_id And on confirm, the handoff payload to downstream actions includes the same variant_id And in fallback conditions, variant_id equals the designated fallback code
Performance and Reliability of Script Preview
Given normal network conditions (RTT <= 150 ms) When loading the MicroBrief script step on a cold cache Then the preview renders within 1500 ms from step load And with a warm cache, the preview renders within 300 ms And if the templating API does not respond within 1200 ms, the fallback script renders within the next 300 ms And all errors are captured once via telemetry without exposing stack traces or raw placeholders to the volunteer
Compliance Guardrails & Safety Notes
"As an organizer, I want built-in safety and compliance guardrails so that every volunteer receives consistent, approved guidance even when we’re moving fast."
Description

Provide a dedicated module that surfaces mandatory safety notes, legal disclaimers, and do/don’t guidance within the MicroBrief. Enforce inclusion of organizer-defined required content and automatically flag or block disallowed phrases using a configurable ruleset. Localize or vary guidance by state or campaign policy, and record the displayed policy version for each viewer. Ensure content is accessible, concise, and consistent across all volunteers to maintain compliant, on-message outreach in rapid, day-of mobilizations.

Acceptance Criteria
Enforce Required Safety Notes in MicroBrief
Given a MicroBrief contains organizer-defined required slides, When a volunteer starts the brief, Then those slides are presented and cannot be skipped or fast-forwarded. Given a volunteer attempts to complete without viewing all required slides, When they tap Confirm, Then completion is blocked and missing slides are highlighted. Given all required slides have been viewed, When the volunteer completes the brief, Then the system records completion with required_content_viewed=true for the session.
Block Disallowed Phrases via Ruleset
Given a ruleset with disallowed phrases/regex is active, When an organizer saves or publishes MicroBrief content or scripts, Then any disallowed matches are flagged inline and publishing is blocked. Given one or more flags exist, When a compliance approver reviews and approves an override with a reason, Then publishing is permitted and an audit entry records rule IDs, approver, reason, and timestamp. Given content is clean, When the organizer publishes, Then an audit entry records ruleset_version, scan_pass=true, and zero violations.
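The publish-time scan could look like this sketch; the rule shape, field names, and example rule are assumptions rather than the actual ruleset format.

```typescript
// Illustrative disallowed-phrase scan run at save/publish time.
interface PhraseRule { id: string; pattern: RegExp; message: string }

function scanContent(text: string, rules: PhraseRule[], rulesetVersion: string) {
  const violations = rules
    .filter((r) => r.pattern.test(text))
    .map((r) => ({ ruleId: r.id, message: r.message }));

  return {
    rulesetVersion,
    scanPass: violations.length === 0,
    violations, // publishing is blocked while this is non-empty, unless an approver overrides
  };
}

// Example rule (hypothetical): { id: "no-guarantees", pattern: /\bguaranteed?\b/i, message: "Avoid guarantee claims" }
```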
State-Specific Guidance Selection
Given a campaign has state-specific guidance for CA and TX and a default policy, When a volunteer provides a ZIP mapped to CA, Then the CA guidance variant is displayed. Given a volunteer's state cannot be determined, When the brief loads, Then the default guidance variant is displayed. Given a volunteer changes their state selection, When they navigate to the next slide, Then the guidance updates to the selected state's variant within 1 second.
Policy Version Recording per Viewer
Given policy variant X version 3 is displayed, When the volunteer completes the MicroBrief, Then the system logs viewer_id/session_id, campaign_id, state, policy_variant_id=X, version=3, and completion_timestamp. Given an admin exports attestations for a date range, When the export is generated, Then each row includes viewer/session identifier, campaign_id, state, policy_variant_id, policy_version, and completion status (completed/skipped/blocked).
Accessibility and Conciseness Compliance
Given the MicroBrief slides are rendered, When automated a11y checks run, Then text contrast is >= 4.5:1, all interactive elements have visible focus indicators and ARIA labels, and the flow is fully navigable via keyboard and screen readers read slide titles and disclaimers. Given the assembled brief content, When estimated read time is computed at 200 wpm, Then total estimated duration <= 60 seconds, number of slides <= 7, and per-slide body text <= 300 characters. Given any audio/video is present, When the brief is played, Then closed captions are available and enabled by default.
Completion Gate with Confirm
Given all required content has been viewed, When the volunteer taps "I understand and will comply", Then a confirm record is stored with timestamp and user/session ID. Given the confirm is not provided, When the volunteer attempts to proceed to the action page, Then navigation is blocked and a prompt instructs completion of the MicroBrief. Given the confirm is recorded, When the action page loads, Then a "Trained" status badge is displayed and the session state reflects microbrief_confirm=true.
Content Consistency and Versioning
Given two volunteers start the same campaign and state brief within a 10-minute window, When they view safety notes, Then both see the same policy version number and identical content. Given an organizer publishes a policy update, When a new volunteer starts after the publish time, Then the new version is served and the version number increments by 1. Given a volunteer started before the update, When they resume, Then their session remains pinned to the prior version and their completion log reflects that version.
Quick Confirm & Comprehension Check
"As an organizer, I want a quick confirmation that volunteers understood the basics so that I can ensure quality and compliance before actions go live."
Description

Conclude the MicroBrief with a lightweight confirmation step (single-tap confirm or 1–2 question micro-check) that verifies the volunteer understands the script and safety basics before proceeding. Require completion to unlock the action page when the campaign marks onboarding as mandatory. Store timestamp, campaign ID, MicroBrief version, and result for audit readiness. Provide accessible controls, instant feedback on incorrect answers, and a friction-minimized retry path.

Acceptance Criteria
Mandatory Onboarding Gate to Action Page
Given campaign onboarding enforcement is set to Mandatory, When a volunteer who has not completed the MicroBrief confirm or micro-check for this campaign and MicroBrief version attempts to open the action page, Then the action page is blocked and a MicroBrief 60 prompt is displayed. Given enforcement is Optional or Off, When a volunteer who has not completed the step attempts to open the action page, Then the action page opens and the MicroBrief 60 entry point remains available. Given enforcement is Mandatory, When the volunteer completes the required step, Then the action page unlocks within 2 seconds and remains unlocked for this campaign and MicroBrief version for at least 24 hours or until a new version is published.
Single-Tap Confirm Completion
Given campaign completion mode is Confirm-only, When the volunteer taps the single confirmation control after viewing the MicroBrief, Then the system records result=confirmed and completion=pass. Given the confirmation is recorded, When the success state is displayed, Then it appears within 500 ms and the action page unlocks per gating rules. Given the volunteer re-enters the MicroBrief after confirming, When they reach the end screen, Then the UI indicates completion is already recorded and does not require a second confirmation.
1–2 Question Micro-Check with Instant Feedback and Retry
Given campaign completion mode is Micro-check and the micro-check contains 1–2 questions covering script and safety basics, When the volunteer submits answers, Then evaluation occurs instantly and feedback is displayed within 500 ms. Given any answers are incorrect, When feedback is shown, Then the UI identifies which questions were missed and provides a single-tap Try Again that restarts the micro-check without reloading or additional navigation. Given the volunteer retries, When the micro-check restarts, Then question or answer order is randomized and the volunteer can resubmit without additional steps or rate limits. Given all answers are correct, When submitted, Then result=passed is recorded and the action page unlocks per gating rules. Given enforcement is Optional and the volunteer fails, When they exit the micro-check, Then result=failed is recorded and the action page remains accessible.
Audit-Ready Result Logging
Given a completion event (confirm or micro-check), When the result is determined, Then the system persists a record with fields: ISO8601 UTC timestamp, campaignId, microBriefVersion, result in {confirmed, passed, failed}. Given the record is persisted, When an organizer views audit logs, Then the record is retrievable and filterable by campaignId and microBriefVersion within 5 seconds of sync. Given the device is offline at completion time, When connectivity is restored, Then the record auto-syncs within 10 seconds without user action and preserves the original timestamp.
Accessible Completion Controls and Messaging
Given a volunteer using a screen reader, When the MicroBrief completion screen loads, Then all actionable controls (confirm, submit, retry) expose names, roles, and states and success/error messages are announced via ARIA live regions. Given keyboard-only navigation, When moving through the completion UI, Then all controls are reachable in logical order, operable with Enter/Space, and display a visible focus indicator. Given touch input, When interacting with controls, Then touch targets are at least 44x44 px and text contrast meets WCAG 2.1 AA (≥ 4.5:1) and non-text contrast (≥ 3:1). Given validation errors, When they occur, Then an inline message describes what to fix in plain language and is programmatically associated to the offending control.
Offline and Network Interruption Resilience
Given the device is offline, When the volunteer completes confirm or passes the micro-check, Then the action page unlocks locally per gating rules and a queued sync record is created. Given connectivity returns, When the app is in the foreground, Then the queued completion record syncs within 10 seconds and the UI remains unlocked. Given sync fails after 3 attempts, When retries back off, Then a non-blocking banner indicates "Sync pending" and retries continue with exponential backoff until successful.
One-Tap Action Handoff
"As a volunteer, I want to start my call or email immediately after confirming so that I can act while the guidance is fresh."
Description

Seamlessly transition users from the MicroBrief to the correct one-tap action page (call or email) with deep-link parameters that carry the selected script variant, confirmation state, and session identifiers. Prefill any necessary context so the action page loads instantly and reflects the briefed content. Emit a handoff event and handle edge cases such as back-navigation, incomplete confirmations, or device app switching. Preserve analytics continuity to attribute outcomes to MicroBrief completion.
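A sketch of building the handoff deep link with the context parameters listed in the criteria below; the route shape and parameter names are assumptions.

```typescript
// Illustrative deep-link builder; URLSearchParams handles encoding of all values.
function buildHandoffUrl(base: string, ctx: {
  sessionId: string;
  campaignId: string;
  scriptVariantId: string;
  targetIds: string[];
  actionType: "call" | "email";
  microbriefConfirmed: boolean;
}): string {
  const url = new URL(`${base}/action/${ctx.actionType}`);
  url.searchParams.set("session_id", ctx.sessionId);
  url.searchParams.set("campaign_id", ctx.campaignId);
  url.searchParams.set("script_variant_id", ctx.scriptVariantId);
  url.searchParams.set("target_ids", ctx.targetIds.join(","));
  url.searchParams.set("microbrief_confirmed", String(ctx.microbriefConfirmed));
  return url.toString();
}
```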

Acceptance Criteria
Confirmed MicroBrief to Correct Action Page Deep-Link
Given a user completes MicroBrief 60 and taps "Start Action" When the selected action type is call Then the call action page opens via deep link within 1 second and displays the exact script variant previewed in the MicroBrief Given a user completes MicroBrief 60 and taps "Start Action" When the selected action type is email Then the email action page opens via deep link within 1 second and pre-fills subject and body matching the previewed script variant Given the MicroBrief confirmation state is true When the action page loads Then it reflects the confirmed state and suppresses any redundant onboarding prompts Given session_id, campaign_id, script_variant_id, and target_ids are present in deep-link params When decoded by the action page Then the values are applied accurately (URL-decoded, type-validated) and recipients match the MicroBrief’s mapped targets Given the deep link contains unknown or extra params When processed Then they are ignored without throwing user-visible errors
Analytics Continuity Attribution from MicroBrief to Action
Given a unique session_id is created at MicroBrief start When the handoff occurs Then the same session_id is present on action_started and action_completed events Given a microbrief_id and script_variant_id exist When action events are recorded Then they include attribution fields {microbrief_id, script_variant_id, action_type} and join to the MicroBrief completion with 0 mismatches in a QA dataset of 100 flows Given a cross-domain or app-to-web handoff and third-party cookies are unavailable When continuity is evaluated Then attribution loss is ≤2% using first-party storage and URL parameters Given network retries occur When action_completed is delivered Then no duplicate conversions are stored due to an idempotency_key
Handoff Event Emission and Delivery
Given a user taps "Start Action" When the deep link is initiated Then an event handoff_initiated is emitted before navigation with fields {event_id, session_id, user_hash, microbrief_id, script_variant_id, action_type, timestamp, device, referrer} Given poor connectivity When the event cannot be delivered within 500ms Then it is queued and retried with exponential backoff up to 3 times and delivered within 60 seconds after connectivity is restored Given duplicate taps within 3 seconds When events are processed Then only one handoff_initiated is recorded via deduplication on event_id Given privacy constraints When the payload is inspected Then no plaintext PII is present (only hashed or tokenized identifiers)
Guardrail for Incomplete Confirmation
Given a user has not completed the MicroBrief confirm step When they tap "Start Action" Then navigation is blocked and a confirm prompt is shown with a clear CTA to confirm Given the user completes the confirm prompt When they re-tap "Start Action" Then navigation proceeds and analytics logs microbrief_confirmed=true Given strict confirm is disabled by admin configuration When a user taps "Start Action" without confirm Then navigation proceeds, microbrief_confirmed=false is sent, and the action page shows a non-blocking warning banner
Back Navigation and Re-entry Behavior
Given the user arrived on the action page via handoff When they use OS or in-app back Then they return to MicroBrief at the confirmation-complete step with prior selections and the same script variant preselected Given the user re-initiates the handoff within 10 minutes When they tap "Start Action" again Then a re_handoff event is emitted with attempt incremented and the action page state is not duplicated (no duplicate recipients or script prefill) Given analytics instrumentation When the user navigates back from the action page Then no additional microbrief_completed events are emitted
App Switching, Fallbacks, and Performance
Given the deep link targets a native dialer or email client When the device lacks a compatible app Then a responsive web fallback action page opens within 1.5 seconds Given the user switches apps during handoff and returns within 10 minutes When the app resumes Then session_id, script_variant_id, confirmation state, and target_ids persist without requiring re-briefing Given standard 4G conditions (≈400ms RTT, 1.6 Mbps) When the action page loads Then first meaningful paint ≤ 1.0s at p95 and total blocking time < 150ms Given a deep-link parameter fails validation When the action page loads Then a safe default script variant is applied, an error is logged (with no PII), and the user can proceed without blocking
Organizer MicroBrief Composer
"As a campaign director, I want to quickly assemble and update a 60-second training from templates so that I can keep volunteers aligned as circumstances change."
Description

Offer an admin UI for organizers to author and manage MicroBriefs using modular cards (overview, script preview, safety notes, do/don’t, confirm). Provide templates, drag-and-drop ordering, inline previews for mobile/desktop, and scheduling. Allow campaign-level settings for mandatory completion, A/B variants, localization, and policy attachments. Version each MicroBrief with change history and publish states (draft, scheduled, live) to coordinate day-of updates without downtime.

Acceptance Criteria
Compose MicroBrief with modular cards and drag-and-drop ordering
Given I am an organizer with admin access on a campaign When I create or edit a MicroBrief Then I can add any of the five card types (overview, script preview, safety notes, do/don’t, confirm) And I can reorder cards via drag-and-drop And the new order persists after save and page reload And I can remove a card and undo the removal within 10 seconds And validation prevents saving if zero cards are present And validation prevents publishing if the confirm card is missing when mandatory completion is enabled
Real-time inline previews for mobile and desktop
Given I am editing a MicroBrief When I toggle the preview device between Mobile and Desktop Then the preview renders the current unsaved changes within 500 ms of the last keystroke And the preview uses the same components as the live runtime, reflecting exact card order and content And no unpublished content is exposed to public endpoints And preview toggles do not modify saved content or publish state
Template gallery application and customization
Given I open the template gallery When I apply a template to a new MicroBrief Then the MicroBrief is pre-populated with all required cards and placeholder content from the template And I can edit any field without breaking the template’s structure And I can reset to template defaults with a single action and confirmation And switching templates presents a diff summary and requires confirmation before overwriting existing content And saved MicroBrief metadata records the originating template
Versioned publish workflow: draft, scheduled, live, and rollback without downtime
Given a MicroBrief in Draft When I schedule it with a specific timezone and go-live time Then a Scheduled version is created with the selected timestamp and author metadata And at the scheduled time the Live version updates atomically with zero downtime and no 5xx responses And live update latency is under 3 seconds end-to-end And I can promote Draft to Live immediately via Publish Now with the same atomic guarantees And I can revert to any prior version and re-publish it And all state transitions (Draft, Scheduled, Live, Reverted) are recorded with actor, timestamp, and reason
Mandatory completion gating on action pages
Given the campaign setting "MicroBrief mandatory" is ON When a supporter opens an action page for that campaign Then action controls (call/email buttons) remain disabled until the supporter completes the MicroBrief and taps Confirm And completion is stored as a signed token scoped to campaign and supporter/session and expires after 24 hours And if the setting is OFF the supporter can proceed without completing the MicroBrief And organizer override can bypass gating for specific audiences via query param or role And start, completion, bypass, and expiry events are logged with timestamps for audit
A/B variants configuration and analytics
Given two or more MicroBrief variants exist for a campaign When I set traffic splits that sum to 100% Then supporters are deterministically assigned to a variant based on user ID or fingerprint and the assignment persists for 7 days And I can pause a variant and its traffic is redistributed proportionally without downtime And the analytics view shows, per variant: impressions, completions, completion rate, median time-to-complete, and downstream action conversion And I can export variant metrics as CSV for a selected date range
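Deterministic assignment from a stable identifier could be sketched as below; the SHA-256 bucketing is a common approach and an assumption here, not RallyKit's documented scheme.

```typescript
// Illustrative deterministic A/B bucketing from a user ID or fingerprint.
import { createHash } from "node:crypto";

function assignVariant(
  userIdOrFingerprint: string,
  splits: { variantId: string; percent: number }[] // must sum to 100
): string {
  // Hash to a stable bucket in [0, 100) so the same user always lands in the same variant.
  const digest = createHash("sha256").update(userIdOrFingerprint).digest();
  const bucket = digest.readUInt32BE(0) % 100;

  let cumulative = 0;
  for (const split of splits) {
    cumulative += split.percent;
    if (bucket < cumulative) return split.variantId;
  }
  return splits[splits.length - 1].variantId;
}
```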
Localization and policy attachments
Given locales are enabled for a campaign When I add localized content for any card and attach locale-specific policy files/links Then supporters see the MicroBrief in their selected/detected locale with per-field fallback to the default locale for missing translations And previews allow switching locales and reflect the localized content immediately And publish/version operations apply consistently across locales with shared version numbers and locale-specific content snapshots And all attached policy files are downloadable, virus-scanned, and tracked per locale in the audit log
Real-Time Completion Analytics & Audit
"As an organizer, I want real-time visibility into who completed the brief and how it impacts actions so that I can optimize training and prove compliance."
Description

Track MicroBrief engagement and outcomes in real time, including starts, per-card views, time to complete, confirmation results, and drop-off points. Present a campaign dashboard with completion rates, action conversion lift, and variant comparisons, and provide exportable, timestamped logs linking MicroBrief version to subsequent actions for audit readiness. Respect privacy by minimizing PII, honoring consent settings, and applying role-based access controls to reports.

Acceptance Criteria
Real-Time MicroBrief Engagement Stream
Given a volunteer starts a MicroBrief session, When the first card is displayed, Then a Start event is recorded with session_id, microbrief_version_id, and event_ts in ISO 8601 with millisecond precision. Given a volunteer swipes through cards, When a card is active for ≥100ms, Then a CardView event is recorded with card_id, position_index, and dwell_ms. Given a volunteer exits before confirmation, When they close the flow or are inactive for 60s, Then an Exit event is recorded with last_card_id and exit_reason (manual|timeout). Given a volunteer taps confirm, When the confirmation screen is submitted, Then a Confirm event is recorded with confirm_result (pass|fail) and time_to_complete_ms computed as Confirm_ts − Start_ts. Given intermittent network conditions, When duplicate client_event_id values are received, Then only one record per client_event_id is stored per session_id. Given events are ingested, When the analytics dashboard is open, Then p95 latency from event_ts to dashboard display is ≤3s and p99 ≤10s.
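The dedup rule (one record per client_event_id per session) implies idempotent ingestion keyed on the pair (session_id, client_event_id). A minimal sketch under that assumption; the in-memory Map stands in for whatever event store RallyKit uses, and the field names simply mirror the criteria.

```typescript
interface MicroBriefEvent {
  sessionId: string;
  clientEventId: string;       // generated client-side so retries carry the same id
  microbriefVersionId: string;
  eventType: "Start" | "CardView" | "Exit" | "Confirm";
  eventTs: string;             // ISO 8601 with millisecond precision, e.g. new Date().toISOString()
  payload?: Record<string, unknown>; // card_id, dwell_ms, confirm_result, etc.
}

// In-memory stand-in for the analytics event store.
const eventStore = new Map<string, MicroBriefEvent>();

// Store at most one record per (session_id, client_event_id); duplicate
// deliveries caused by flaky networks are acknowledged but not re-inserted.
function ingest(event: MicroBriefEvent): { stored: boolean } {
  const key = `${event.sessionId}:${event.clientEventId}`;
  if (eventStore.has(key)) return { stored: false };
  eventStore.set(key, event);
  return { stored: true };
}
```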
Completion Rate Dashboard Calculation
Given a campaign and date range are selected, When the dashboard loads, Then it displays Starts, Completes (confirm_result=pass), Completion Rate=Completes/Starts, and median Time to Complete. Given filters (campaign_id, microbrief_version_id, channel, date_range), When filters are changed, Then metrics recalculate within 2s and show last_updated_ts. Given an export is generated for the same filters, When metrics are recomputed from the export, Then dashboard values match within 0.5% absolute or 5 records (whichever is smaller). Given there are zero Starts in the period, When the dashboard renders, Then Completion Rate shows N/A and no division-by-zero errors occur.
Action Conversion Lift Attribution
Given actions (call|email) are performed in RallyKit, When an action occurs within 2 hours after a MicroBrief Start, Then the action is attributed to that session if and only if the session has not been attributed previously. Given two cohorts (Completers: confirm_result=pass; Non-completers: started but no pass), When conversion is calculated, Then conversion rate per cohort and absolute/relative lift are displayed and cohorts exclude duplicated supporters within the window. Given each cohort size ≥100, When lift is displayed, Then a 95% confidence interval is shown; otherwise the UI displays "insufficient data". Given an export is generated, When conversion and lift are recomputed from the export, Then results match dashboard within 1% absolute or 10 records (whichever is smaller).
Variant Comparison Reporting (A/B)
Given multiple MicroBrief versions are active, When the Variant Comparison view is opened, Then per-version metrics (Starts, Completion Rate, median Time to Complete, Drop-off Rate) are shown side-by-side for the selected filters. Given per-version Starts ≥100, When statistical significance is calculated, Then a two-proportion z-test (alpha=0.05) is used for Completion Rate and the leading version is flagged as "Likely better" if p<0.05. Given per-version Starts <100, When the view loads, Then significance indicators are disabled and a sample-size hint is shown. Given filters are applied, When results refresh, Then values update within 2s and match export-derived recomputation within 1% absolute.
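The "Likely better" flag rests on a standard two-proportion z-test on completion rates at alpha = 0.05. A sketch of that test; the normal-CDF approximation and the helper names are illustrative.

```typescript
// Two-proportion z-test for the completion rates of two variants.
// Returns the z statistic and a two-sided p-value via a normal-CDF approximation.
function twoProportionZTest(
  completesA: number, startsA: number,
  completesB: number, startsB: number,
): { z: number; pValue: number } {
  const pA = completesA / startsA;
  const pB = completesB / startsB;
  const pooled = (completesA + completesB) / (startsA + startsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / startsA + 1 / startsB));
  const z = (pA - pB) / se;
  // Abramowitz-Stegun style approximation of the standard normal CDF.
  const cdf = (x: number): number => {
    const t = 1 / (1 + 0.2316419 * Math.abs(x));
    const d = 0.3989423 * Math.exp(-x * x / 2);
    const p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
    return x >= 0 ? 1 - p : p;
  };
  const pValue = 2 * (1 - cdf(Math.abs(z)));
  return { z, pValue };
}

// Per the criteria: only flag significance when each variant has >= 100 starts.
function likelyBetter(a: { completes: number; starts: number }, b: { completes: number; starts: number }): boolean {
  if (a.starts < 100 || b.starts < 100) return false;
  const { pValue } = twoProportionZTest(a.completes, a.starts, b.completes, b.starts);
  return pValue < 0.05;
}
```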
Drop‑Off Funnel and Card-Level Analytics
Given the MicroBrief card sequence is defined, When viewing the funnel, Then the view-through rate and exit rate at each card position are displayed and the highest drop-off step is highlighted. Given dwell times are calculated, When per-card stats render, Then p50 and p90 dwell_ms are displayed for each card. Given multiple versions exist, When viewing analytics, Then all funnel and dwell metrics are segmented by microbrief_version_id and position_index. Given an export is generated, When per-card counts are compared, Then on-screen counts match export within 5 records per card.
Audit-Ready Exportable Event and Action Logs
Given a user with Admin or Analyst role requests an export for a date range ≤31 days, When the request is submitted, Then a CSV is generated within 60s with a SHA-256 checksum and a download link that expires in 24h. Given the export is generated, When inspecting columns, Then it includes session_id, microbrief_version_id, event_type, event_ts (ISO 8601 UTC Z), card_id, position_index, dwell_ms, confirm_result, time_to_complete_ms, action_id, action_type, action_ts, attributed (true|false), consent_status. Given the dashboard shows totals for the selected filters, When export row counts are compared, Then counts match exactly for Starts, Confirms, Actions, and Attributed Actions. Given audit requirements, When PII fields are evaluated, Then no raw phone/email/name are present and supporter_id is a salted hash stable within tenant for 90 days.
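Two details above invite a sketch: the SHA-256 checksum published with the CSV and the salted supporter_id hash that stays stable within a tenant for 90 days. A minimal Node sketch; deriving the salt from the tenant and a 90-day window index is an assumption, not documented RallyKit behavior.

```typescript
import { createHash, createHmac } from "node:crypto";

// Salted supporter hash: stable within a tenant for a 90-day window, so the same
// supporter can be followed across exports without exposing raw PII.
// Deriving the salt from tenant + window index is an illustrative choice.
function supporterHash(tenantSecret: string, tenantId: string, supporterId: string, now = new Date()): string {
  const windowIndex = Math.floor(now.getTime() / (90 * 24 * 60 * 60 * 1000));
  const salt = createHmac("sha256", tenantSecret).update(`${tenantId}:${windowIndex}`).digest();
  return createHmac("sha256", salt).update(supporterId).digest("hex");
}

// SHA-256 checksum published alongside the CSV so auditors can verify the
// download was not truncated or altered.
function csvChecksum(csvContent: string): string {
  return createHash("sha256").update(csvContent, "utf8").digest("hex");
}
```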
Privacy, Consent, and Role-Based Access Controls
Given a supporter has not consented to analytics, When they start a MicroBrief, Then only strictly necessary events are stored with an anonymized supporter_id and the session is excluded from attribution and variant analyses. Given a user with Organizer role accesses analytics, When the dashboard loads, Then aggregate metrics are visible but raw exports and per-session logs are inaccessible (403). Given an unauthorized API token calls analytics endpoints, When access is attempted, Then a 403 is returned and the attempt is recorded in an access log with actor_id and event_ts. Given data retention policies, When events exceed 180 days and are not under legal hold, Then they are purged from analytics stores and excluded from exports.

Ready Check

Lightning-fast device and comprehension check: confirms audio, connectivity, and script understanding, then recommends the best role (calls, texts, canvass, or data). Prevents stalls at the first action and routes volunteers to tasks they can complete successfully right away.

Requirements

Audio Device Test & Troubleshooter
"As a first-time volunteer, I want to quickly test my microphone and audio so that I know I can complete calls without technical issues."
Description

Performs automatic microphone, speaker, and permission diagnostics for call tasks, including loopback recording, playback, input level meter, and echo/noise cancellation checks. Guides users through browser and OS permission fixes with step-by-step prompts and detects common misconfigurations (muted mic, wrong input, Bluetooth conflicts). Supports desktop and mobile browsers, persists preferred devices, and retries gracefully. Results are scored (pass, warn, fail) and fed into Ready Check’s role recommendation. Logs outcomes to RallyKit’s real-time dashboard and flags users for non-call roles if audio readiness is insufficient.

Acceptance Criteria
Mic Permission Request and Verification
Given the user starts Ready Check for a call task on a supported browser When the microphone permission state is prompt or denied Then the app requests microphone access via getUserMedia with audio constraints and displays browser/OS-specific steps to grant access And the final permission state (granted, denied, blocked) is captured within 5s on desktop or 8s on mobile from user action And an event is emitted with permission_state and latency_ms And if denied or blocked after two attempts, audioReadiness is set to fail and the user is routed to non-call roles And desktop (Chrome 110+, Firefox 110+, Edge 110+, Safari 16+) and mobile (iOS Safari 16+, Chrome Android 110+) are supported; unsupported versions show an incompatibility message and mark warn with alternative role guidance
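The permission step (request via getUserMedia, capture the final state and latency) can be sketched with standard browser APIs. A browser-side sketch; the outcome shape and the denied-versus-blocked heuristic are illustrative.

```typescript
type PermissionOutcome = { permissionState: "granted" | "denied" | "blocked"; latencyMs: number };

// Request mic access and report the outcome plus how long the user took,
// mirroring the permission_state / latency_ms event described above.
async function checkMicPermission(): Promise<PermissionOutcome> {
  const start = performance.now();
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    stream.getTracks().forEach(track => track.stop()); // release the device immediately
    return { permissionState: "granted", latencyMs: performance.now() - start };
  } catch (err) {
    // NotAllowedError covers both a one-off denial and a browser-level block;
    // telling them apart reliably varies by browser, so this mapping is a heuristic.
    const name = (err as DOMException).name;
    const state = name === "NotAllowedError" ? "denied" : "blocked";
    return { permissionState: state, latencyMs: performance.now() - start };
  }
}
```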
Loopback Recording and Playback Verification
Given microphone permission is granted and input/output devices are selected When the user taps Test Mic Then a 3-second sample is recorded at ≥16 kHz, 16-bit mono And the level meter updates at ≥10 Hz during recording And the sample plays back to the selected output with start latency ≤300 ms after a user gesture And the user can confirm audibility via a single Heard it / Didn’t hear control And if playback fails twice (e.g., autoplay restrictions), instructions are shown and the result is marked warn
Input Level Meter and Clipping Detection
Given an active microphone stream When the user speaks at normal volume within 30 cm of the mic Then the input level meter reflects amplitude within 200 ms and updates at ≥10 Hz And a peak indicator flags clipping when signal ≥ -1 dBFS for ≥50 ms And a Too quiet warning appears if speech RMS < -45 dBFS across a 2-second window, with guidance to adjust mic or position And last measured RMS and peak values are stored in-session for scoring
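The clipping and "too quiet" thresholds assume per-frame peak and RMS levels expressed in dBFS. A browser-side sketch using an AnalyserNode; the 10 Hz update rate and dBFS cutoffs come from the criteria, the rest is illustrative.

```typescript
// Compute RMS and peak levels in dBFS from the live mic stream so the UI can
// flag clipping (peak >= -1 dBFS) and "too quiet" speech (RMS < -45 dBFS).
function createLevelMeter(stream: MediaStream, onLevel: (rmsDb: number, peakDb: number) => void): () => void {
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const buffer = new Float32Array(analyser.fftSize);
  const timer = setInterval(() => {
    analyser.getFloatTimeDomainData(buffer);
    let sumSquares = 0;
    let peak = 0;
    for (const sample of buffer) {
      sumSquares += sample * sample;
      peak = Math.max(peak, Math.abs(sample));
    }
    const rms = Math.sqrt(sumSquares / buffer.length);
    const toDb = (v: number) => (v > 0 ? 20 * Math.log10(v) : -Infinity);
    onLevel(toDb(rms), toDb(peak));
  }, 100); // 10 Hz update rate, matching the meter requirement

  // Cleanup callback: stop sampling and release the audio context.
  return () => { clearInterval(timer); void ctx.close(); };
}
```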
Device Selection and Persistence Across Sessions
Given multiple audio devices are available When the user selects a specific microphone and speaker Then the selections are applied immediately to all audio tests And selections persist for 30 days per user and browser and are restored on next Ready Check session And on device re-enumeration, the app restores by deviceId; if unavailable, it falls back to default, surfaces a warning, and prompts re-selection And device labels are logged (no serial numbers or raw hardware IDs) with the selection change event
Echo and Noise Cancellation Check
Given loopback playback and recording are available When the recorded sample is analyzed Then idle background noise floor is computed; if > -50 dBFS, mark warn with guidance And echo is estimated by correlating playback and recording; if echo return loss < 15 dB, mark warn and recommend echo cancellation or headphones And if echoCancellation is supported, it is enabled and verified via getSettings(); if unsupported, mark N/A without failing And results display pass/warn indicators for noise and echo separately
Misconfiguration Detection and Guided Fixes with Graceful Retry
Given any sub-test fails or returns warn When a known misconfiguration is detected (muted hardware, wrong input, exclusive app using mic, Bluetooth disconnected/low battery, OS-level mute) Then browser/OS-specific, step-by-step fix instructions are shown with deep links where available And after each step, the sub-test auto-retries up to 3 times with backoff (0s, 2s, 5s) And on successful resolution, the flow resumes; on persistent failure, audioReadiness is set to fail and non-call roles are recommended And the user can switch roles (text/canvass) without losing Ready Check progress
Scoring, Dashboard Logging, and Role Recommendation
Given outcomes from permission, loopback, level, echo/noise, and device checks When computing audio readiness Then an overall score is produced: pass (all critical checks pass), warn (non-critical issues), fail (any critical check fails) And the score and per-check details are posted to RallyKit’s real-time dashboard within 2 seconds of completion And Ready Check uses the score to recommend roles: pass → call, warn → call with tips, fail → text/canvass/data prioritized And a Retry audio test option is available before finalizing the recommendation And logs include timestamp, anonymized user ID, browser/OS, and device labels; no PII or persistent hardware IDs are stored
Network Health & Dialer Readiness
"As a remote volunteer with variable internet, I want Ready Check to verify my connection and dialer compatibility so that I’m routed to a task that will work reliably on my device."
Description

Measures bandwidth, latency, jitter, and packet loss, performs STUN/TURN reachability tests, and checks firewall/NAT compatibility for WebRTC-based calling to ensure stable call quality. Detects captive portals and low-power modes on mobile that may disrupt calls. Classifies connectivity (excellent/good/limited/unsupported) and recommends calls, texts, canvass, or data tasks accordingly. Integrates with RallyKit’s telephony provider configuration to validate region-specific endpoints and records metrics to the organizer dashboard for triage and capacity planning.

Acceptance Criteria
Initial Ready Check before first call session
Given a volunteer opens Ready Check from a call action When diagnostics run Then the system measures downstream bandwidth, upstream bandwidth, RTT latency, jitter, and packet loss and completes within 10 seconds And displays the five metrics and a Calls Ready status And if Calls Ready is false, the Start Calling action is replaced with a single-tap recommended non-call task (texts/canvass/data); if true, Start Calling remains enabled
STUN/TURN and firewall/NAT reachability
Given Ready Check executes reachability tests When STUN over UDP 3478 to at least two servers fails Then the system attempts TURN over UDP 3478, TCP 3478, and TLS 443 in order and attempts media relay allocation And Calls Ready passes this criterion if any STUN or TURN path is established and allocation succeeds within 3 seconds And if no STUN/TURN path is reachable or NAT is Symmetric with no TURN available, Calls Ready = false And the detected NAT type is recorded with the results
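A basic STUN reachability probe can be run with a bare RTCPeerConnection by watching for server-reflexive ICE candidates; TURN checks follow the same pattern with credentialed server entries and relay candidates. A minimal sketch; the timeout and server URLs are placeholders.

```typescript
// Probe STUN reachability: gather ICE candidates against the given servers and
// resolve true if a server-reflexive ("srflx") candidate appears within the budget.
function probeStun(stunUrls: string[], timeoutMs = 3000): Promise<boolean> {
  return new Promise(resolve => {
    const pc = new RTCPeerConnection({ iceServers: stunUrls.map(url => ({ urls: url })) });
    let timer: number | undefined;
    const finish = (ok: boolean) => { clearTimeout(timer); pc.close(); resolve(ok); };
    timer = window.setTimeout(() => finish(false), timeoutMs);

    pc.onicecandidate = event => {
      // A srflx candidate means a STUN server answered and a NAT mapping was obtained.
      if (event.candidate?.candidate.includes(" typ srflx")) finish(true);
    };

    // A throwaway data channel forces ICE gathering without requesting any media.
    pc.createDataChannel("probe");
    pc.createOffer().then(offer => pc.setLocalDescription(offer)).catch(() => finish(false));
  });
}

// TURN over UDP 3478, TCP 3478, and TLS 443 would reuse the same pattern with
// credentialed iceServers entries and a check for "typ relay" candidates.
```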
Region-specific telephony endpoint validation
Given an organizer-selected telephony region is configured When Ready Check runs Then DNS resolution and TLS handshake to that region’s signaling/media endpoints succeed and median RTT <= 200 ms And if the selected region fails or median RTT > 200 ms, fallback regions allowed by the organizer are tested and the region with median RTT <= 250 ms is selected And if no allowed region is reachable with median RTT <= 250 ms, Calls Ready = false and a non-call role is recommended And the chosen region and RTT are stored with the test results
Captive portal and mobile low-power detection
Given Ready Check runs on any device When an HTTP/HTTPS captive portal probe returns a redirect/blocked response Then Captive Portal Detected is shown, Calls Ready = false, and actions to Open Network Login and Retry are provided Given a mobile device When the OS/browser reports data saver or low-power mode active Then a warning is shown and the recommendation is downgraded away from calls unless the user explicitly overrides And upon clearing the captive portal or disabling data saver and rerunning, status and recommendations update within 5 seconds
Connectivity classification and role recommendation
Given measured latency, jitter, packet loss, upstream bandwidth, and WebRTC reachability When classification runs Then connectivity is classified as:
- Excellent: latency <= 100 ms, jitter <= 20 ms, packet loss <= 0.5%, upstream >= 0.5 Mbps, WebRTC path available
- Good: latency <= 150 ms, jitter <= 30 ms, packet loss <= 1.5%, upstream >= 0.3 Mbps, WebRTC path available
- Limited: latency <= 300 ms, jitter <= 50 ms, packet loss <= 3%, upstream >= 0.15 Mbps, only TURN/TCP 443 or degraded flag
- Unsupported: worse than Limited or no WebRTC path
And the recommended role is: Calls for Excellent/Good; Texts for Limited; Canvass/Data for Unsupported
And classification and recommendation are consistent across two consecutive runs when metric variance is within 5%
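The tiers above translate directly into a pure classification function. A sketch using the cutoffs copied from the criteria; the metric field names are illustrative.

```typescript
type Classification = "excellent" | "good" | "limited" | "unsupported";

interface NetworkMetrics {
  latencyMs: number;
  jitterMs: number;
  packetLossPct: number;
  upstreamMbps: number;
  webrtcPathAvailable: boolean;   // any STUN/TURN path established
  turnTcp443Only: boolean;        // degraded-path flag
}

// Apply the tier cutoffs from strictest to loosest; the first tier whose
// thresholds are all met wins.
function classifyConnectivity(m: NetworkMetrics): Classification {
  if (m.webrtcPathAvailable && m.latencyMs <= 100 && m.jitterMs <= 20 && m.packetLossPct <= 0.5 && m.upstreamMbps >= 0.5) {
    return "excellent";
  }
  if (m.webrtcPathAvailable && m.latencyMs <= 150 && m.jitterMs <= 30 && m.packetLossPct <= 1.5 && m.upstreamMbps >= 0.3) {
    return "good";
  }
  if ((m.webrtcPathAvailable || m.turnTcp443Only) && m.latencyMs <= 300 && m.jitterMs <= 50 && m.packetLossPct <= 3 && m.upstreamMbps >= 0.15) {
    return "limited";
  }
  return "unsupported";
}

// Recommended role per classification: Calls for excellent/good, Texts for
// limited, Canvass/Data for unsupported.
function recommendRole(c: Classification): "calls" | "texts" | "canvass_or_data" {
  return c === "excellent" || c === "good" ? "calls" : c === "limited" ? "texts" : "canvass_or_data";
}
```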
Organizer dashboard metrics logging and triage
Given Ready Check completes When results are sent to the backend Then the system records timestamp, volunteer/session ID, device and network type, latency, jitter, packet loss, bandwidth, NAT type, STUN/TURN reachability, captive portal/power saver flags, selected region, classification, and recommendation And the organizer dashboard displays the new record within 15 seconds of test completion And aggregates show counts by classification and recommendation for the past 24 hours and support export of the recorded fields And failed submissions are retried with exponential backoff for up to 5 minutes and the UI reflects telemetry send status
Script Comprehension Micro-Quiz
"As a volunteer new to the issue, I want a quick comprehension check of the script so that I’m confident about what to say to a legislator."
Description

Displays the district- and bill-status-specific script and runs a 30–60 second comprehension check using two quick items (multiple choice plus a confidence or paraphrase prompt). Adapts reading level, offers translations, and supports screen readers. Provides instant feedback, highlights key talking points, and captures self-reported comfort level. Scores understanding and confidence, then sends results to the recommendation engine. Stores aggregate comprehension insights for organizers to refine scripts without adding setup overhead.

Acceptance Criteria
Correct Script Rendering by District and Bill Status
Given a supporter has district and current bill status available When the micro-quiz loads Then the script variant mapped to that district and bill status is displayed with 100% of placeholders resolved (legislator names, bill identifiers, district references) And the displayed script version ID is logged with timestamp and session ID And the script content is loaded within 2 seconds at the 90th percentile on a 4G connection
Time-Bound Two-Item Micro-Quiz Flow
Given the script is visible When the micro-quiz begins Then exactly two items are presented: one multiple-choice comprehension question and one confidence slider or paraphrase prompt, per configuration And a visible timer enforces a 30–60 second window with a soft reminder at 45 seconds and auto-submit at 60 seconds And users cannot continue without answering both items unless the 60-second auto-submit triggers And the multiple-choice item has a single correct answer and records the selection timestamp And the second prompt is either a 0–100 confidence slider in 10-point increments or a paraphrase textbox requiring at least 15 words with a live character count
Instant Feedback and Key Talking Point Highlighting
Given the user submits an answer to the multiple-choice item When feedback is shown Then correctness is indicated immediately and the correct answer is displayed if the choice was incorrect And the relevant key talking point within the script is highlighted (minimum 3 tagged key phrases available per script) And feedback renders within 300 ms from submission on median devices And highlight styling meets contrast ratio of at least 4.5:1 and is preserved in high-contrast mode
Understanding and Confidence Scoring with Recommendation Dispatch
Given both quiz items are completed or auto-submitted When scoring is executed Then an Understanding Score (0–100) is computed including the multiple-choice outcome and, if provided, paraphrase evaluation per rubric, or falls back to MCQ-only when paraphrase is not used And a Confidence Score (0–100) is captured from the slider or from a required self-reported confidence question when paraphrase is used And the payload {sessionId, userId (if available), language, readingLevel, understandingScore, confidenceScore, timeToComplete, scriptVersionId} is sent to the Recommendation Engine API within 1 second of scoring And on non-2xx API responses the system retries up to 3 times with exponential backoff and logs failures with correlation ID And the recommendation response (decisionId and recommendedRole) is stored with the session and made available to the Ready Check flow
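The dispatch-and-retry behavior (post within 1 second, up to 3 retries with exponential backoff on non-2xx, correlation ID in the logs) is a standard retry loop. A sketch assuming a hypothetical /api/recommendation endpoint and a fetch-based client; the endpoint, header name, and payload field names are placeholders echoing the criteria.

```typescript
interface QuizResultPayload {
  sessionId: string;
  userId?: string;
  language: string;
  readingLevel: string;
  understandingScore: number;   // 0-100
  confidenceScore: number;      // 0-100
  timeToComplete: number;       // ms
  scriptVersionId: string;
}

// Post the scoring payload, retrying up to 3 times with exponential backoff on
// non-2xx responses or network errors. Endpoint and header names are hypothetical.
async function dispatchToRecommendationEngine(
  payload: QuizResultPayload,
  correlationId: string,
): Promise<{ decisionId: string; recommendedRole: string }> {
  const delaysMs = [0, 1000, 2000, 4000]; // first attempt immediate, then backoff
  let lastError: unknown;
  for (const delay of delaysMs) {
    if (delay > 0) await new Promise(r => setTimeout(r, delay));
    try {
      const res = await fetch("/api/recommendation", {
        method: "POST",
        headers: { "Content-Type": "application/json", "X-Correlation-Id": correlationId },
        body: JSON.stringify(payload),
      });
      if (res.ok) return res.json();
      lastError = new Error(`Recommendation API returned ${res.status}`);
    } catch (err) {
      lastError = err;
    }
  }
  console.error("Recommendation dispatch failed", { correlationId, lastError });
  throw lastError;
}
```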
Accessibility, Reading Level Adaptation, and Translations
Given a user selects a target reading level When the script renders Then the displayed script’s readability score is within ±1 grade level of the target using the configured readability metric And increasing browser/system text size to 200% preserves layout and functionality without loss of content And all interactive controls are operable via keyboard only, with visible focus states, correct ARIA roles, and a logical focus order And screen readers announce all labels, questions, options, feedback, and highlight context in a meaningful order And language switching localizes all UI strings and script content for at least English and Spanish, including right-to-left support where applicable
Aggregate Comprehension Insights for Organizers Without Setup Overhead
Given 20 or more quiz completions exist for a script variant When an organizer opens the insights dashboard Then aggregate metrics are displayed: completion rate, average understanding score, average confidence score, top misunderstood talking points, distribution by language and reading level, and average time to complete And no individual responses or raw paraphrase text are exposed; only anonymized aggregates and keyword frequencies are shown And insights refresh at least every 15 minutes and reflect data up to the last successful aggregation job And organizers are not required to configure additional fields; insights appear automatically per script version
Role Fit Scoring & Recommendation
"As an organizer, I want the system to recommend the best role based on readiness and skills so that volunteers start productive tasks immediately."
Description

Combines device tests, network health, script comprehension, user preferences, time available, and prior performance to generate an explainable score for calls, texts, canvass, and data tasks. Uses a transparent rules engine with configurable thresholds and safe defaults so organizers can launch in minutes. Presents the top recommendation with a brief rationale and alternative options, and supports organizer overrides. Writes the chosen role and rationale to RallyKit’s live dashboard and triggers assignment creation via existing matching and scripting services.

Acceptance Criteria
Compute Multi-Factor Role Scores
Given a volunteer has completed device audio test, network health check, script comprehension quiz, and has stored preferences, time available, and prior performance metrics And the active ruleset and thresholds are loaded When the Role Fit Scoring runs Then the system produces four numeric scores (0–100) for calls, texts, canvass, and data And each score is derived using the configured weights and thresholds from the active ruleset And missing inputs are replaced with documented safe defaults from the ruleset and flagged in the score metadata And the scoring response includes a timestamp and ruleset version id And the 95th percentile scoring latency is ≤300 ms under nominal load And the scores are persisted in the session for subsequent presentation and auditing
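A transparent rules engine with configurable weights and safe defaults can be read as a weighted-sum scorer that also reports per-factor contributions, which later feeds the explainable rationale. A minimal sketch; the factor names, weights, and the 0.5 default are illustrative, not RallyKit's shipped ruleset.

```typescript
type Role = "calls" | "texts" | "canvass" | "data";

interface FactorInputs {
  audioReadiness?: number;       // 0-1, from device test
  networkHealth?: number;        // 0-1, from Ready Check
  scriptComprehension?: number;  // 0-1, from micro-quiz
  preferenceFit?: number;        // 0-1, stated preference match
  timeAvailableFit?: number;     // 0-1
  priorPerformance?: number;     // 0-1
}

interface RoleScore {
  role: Role;
  score: number; // 0-100
  contributions: Record<string, number>;
  defaultsUsed: string[];
}

// Illustrative per-role weights; a real ruleset would be versioned and configurable.
const WEIGHTS: Record<Role, Record<keyof FactorInputs, number>> = {
  calls:   { audioReadiness: 0.3, networkHealth: 0.25, scriptComprehension: 0.2, preferenceFit: 0.1, timeAvailableFit: 0.05, priorPerformance: 0.1 },
  texts:   { audioReadiness: 0,   networkHealth: 0.2,  scriptComprehension: 0.3, preferenceFit: 0.2, timeAvailableFit: 0.1,  priorPerformance: 0.2 },
  canvass: { audioReadiness: 0,   networkHealth: 0,    scriptComprehension: 0.3, preferenceFit: 0.3, timeAvailableFit: 0.2,  priorPerformance: 0.2 },
  data:    { audioReadiness: 0,   networkHealth: 0.1,  scriptComprehension: 0.1, preferenceFit: 0.3, timeAvailableFit: 0.3,  priorPerformance: 0.2 },
};

const SAFE_DEFAULT = 0.5; // documented fallback when an input is missing

// Weighted sum over available factors; missing inputs fall back to the safe
// default and are flagged so the rationale can note which defaults were used.
function scoreRole(role: Role, inputs: FactorInputs): RoleScore {
  const contributions: Record<string, number> = {};
  const defaultsUsed: string[] = [];
  let score = 0;
  for (const [factor, weight] of Object.entries(WEIGHTS[role]) as [keyof FactorInputs, number][]) {
    let value = inputs[factor];
    if (value === undefined) { value = SAFE_DEFAULT; defaultsUsed.push(factor); }
    const contribution = weight * value * 100;
    contributions[factor] = contribution;
    score += contribution;
  }
  return { role, score: Math.round(score), contributions, defaultsUsed };
}
```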
Display Explainable Recommendation Rationale
Given role scores have been computed for a volunteer When the system presents the recommendation Then it displays the top role, its score, and a brief rationale summarizing the top contributing factors with their relative impact (at least three factors when available) And it displays up to three alternative roles with their scores in descending order And the rationale text is readable (≤280 characters) and references concrete inputs (e.g., "Strong network"; "Prefers texting"; "Low call completion last week") And the displayed rationale factors match the underlying contribution data from the rules engine And the presentation is accessible (focusable elements, readable by screen readers)
Configurable Rules Engine With Safe Defaults
Given no organizer configuration is provided When scoring is executed Then the system uses a published default ruleset with safe thresholds and weights And the default ruleset is versioned and visible in admin settings Given an organizer updates thresholds or weights within allowed ranges When the changes are saved Then the new ruleset version is activated within 60 seconds and tagged with editor, timestamp, and change summary And scoring invocations thereafter reference the new version id And if validation fails, the save is rejected with actionable error messages and no partial activation occurs
Top Recommendation With Alternatives and Tie-Breaking
Given four role scores are available When determining the top recommendation Then the system selects the highest scoring role that meets the minimum viability threshold defined in the ruleset And in the event of a tie (within the configured tie band), the system breaks ties using priority order: device/network readiness > time available fit > stated preference > recent performance trend And the selection process is deterministic for the same inputs And the UI presents at least two alternative roles (when available) with scores and clear labels And roles below the minimum viability threshold are not recommended as top choice
Organizer Override With Audit Trail
Given a recommendation has been presented to a volunteer and visible to an organizer When the organizer selects an override role and submits with an optional note Then the system records the override, including organizer id, timestamp, original recommendation, selected role, and note And the volunteer’s final role changes immediately to the override selection And the rationale displayed to the volunteer updates to indicate "Organizer override" along with the note (if provided) And an audit log entry is available in the dashboard history and via API And organizers can revert to auto-recommendation in a single action, which is also logged
Dashboard Writeback and Assignment Trigger
Given a final role has been determined (auto or override) When the final role is committed Then the system writes the chosen role and rationale to the RallyKit live dashboard within 1 second p95 And it triggers assignment creation via the existing matching and scripting services using the volunteer id, role, and correlation id And it awaits confirmation and marks the assignment status accordingly (Created, Pending, Failed) And operations are idempotent using the correlation id to prevent duplicate assignments on retries And any failures are surfaced in the dashboard with actionable error messages
Resilience and Fallback Behavior
Given one or more inputs (device test, network check, quiz results, or prior performance) are unavailable or time out When scoring is attempted Then the system substitutes safe defaults as per the ruleset and annotates the rationale with which defaults were used And if the top call role is disqualified due to failing audio/network checks, the system recommends the best viable role among texts, canvass, or data with an explanation And if assignment creation fails after three retry attempts with exponential backoff, the system marks the status as Pending Assignment and notifies the organizer And the user-facing flow remains responsive and does not block the volunteer from starting the recommended task
One-Tap Role Handoff
"As a volunteer, I want to accept a recommended task and start with one tap so that I don’t lose momentum."
Description

Provides immediate, single-tap transition from recommendation to the correct task experience (dialer, texting, canvass, or data cleanup) by deep-linking with secure tokens that carry script, district targeting, and user context. Pre-checks authentication and permissions, resumes sessions if needed, and gracefully falls back to an alternate role if an endpoint is unavailable. Tracks acceptance, drop-offs, and time-to-first-action to ensure Ready Check eliminates first-action stalls.

Acceptance Criteria
Single-Tap Handoff to Dialer After Ready Check
Given a user has completed Ready Check and received a Calls recommendation with script S, district D, and context C When the user taps “Start Calls” Then the dialer loads within 2 seconds on a 4G-or-better connection And script S and district targeting D are preloaded without additional input And user context C (language, accessibility) is applied And no extra confirmation screens are displayed before the dialer And a handoff_accepted event with correlation ID is recorded
Pre-Check Authentication and Permissions
Given a user is not authenticated or lacks required permissions for the recommended role When the user taps the one-tap handoff Then an authentication check completes within 3 seconds And if not authenticated, the user is prompted to sign in and upon success is auto-continued to the role with token data preserved And if a permission is missing (e.g., dialer access), a single permissions screen is shown and upon grant the user is auto-continued And if authentication/permission fails, the user is returned to Ready Check with an actionable error message and no loss of session
Resume Prior Task Session Within 10 Seconds
Given the user has an incomplete session for the recommended role less than 12 hours old When the user taps the one-tap handoff Then the previous session resumes within 10 seconds, restoring queue position, script variant, and progress state And a non-blocking toast confirms “Resumed prior session” And if the prior session is older than 12 hours or cannot be restored, a fresh session starts with the same script and targeting And a session_resume event is logged with resume=true/false and reason
Fallback to Alternate Role on Endpoint Outage
Given the primary role endpoint health check fails (>=3s timeout, 5xx, or circuit open) When the user taps the one-tap handoff Then the system routes within 2 seconds to the highest-ranked alternate role from Ready Check And an info banner explains the fallback (e.g., “Calls unavailable—starting Texts”) And district targeting and script variant are mapped appropriately to the alternate role And a fallback event is logged with reason code and affected endpoint And the user is never left on a dead-end screen
Secure Deep-Link Token Integrity and Expiration
Given a handoff deep-link with token T When T is validated Then T must be signed, scoped to the user, audience-checked, and expire within 15 minutes of issuance And T is single-use; after first successful handoff, reuse is rejected And tampered, expired, or audience-mismatched tokens are blocked with a non-technical error and return to Ready Check And security_audit events capture user ID (if available), IP, reason, and token_id hash
Telemetry for Acceptance, Drop-off, and Time-to-First-Action
Given any handoff flow When the user interacts with the one-tap handoff Then events are emitted: handoff_accepted (t0), task_ui_loaded (t1), first_action_completed (t2) And time_to_first_action = t2 - t0 is computed and stored with correlation ID And drop-offs are captured and labeled with stage (pre-load, post-load, pre-action) And 99% of telemetry is delivered within 60 seconds with retry/backoff for up to 24 hours And metrics are visible in the dashboard within 5 minutes of occurrence
Cross-Device Deep-Link Compatibility and Graceful Degradation
Given iOS, Android, and desktop browsers When the user taps the one-tap handoff Then the native app opens if installed; otherwise a responsive web module opens with identical context And if the OS blocks the deep link, a fallback sheet offers “Open in browser,” “Copy link,” and QR code within 5 seconds And UTM/campaign parameters and correlation ID are preserved across redirects And multiple tabs/windows are prevented for a single handoff And basic accessibility is met: actionable element is labeled and screen readers announce the destination role
Readiness Audit Log & Consent
"As a nonprofit director responsible for compliance, I want an audit trail of readiness checks and consent so that I can demonstrate due diligence while protecting supporter data."
Description

Captures and stores a tamper-evident record of all readiness checks, scores, recommendations, user choices, and overrides with timestamps and minimal necessary personal data. Presents a concise consent notice explaining tests performed and data usage, with links to policy and the option to opt out of non-essential tracking. Supports configurable retention, export to CSV for audits, and organizer-level access controls. Integrates with RallyKit’s audit-ready reporting to demonstrate due diligence and protect supporter privacy.

Acceptance Criteria
Consent Notice Prior to Readiness Check
Given a first-time supporter initiates Ready Check When the readiness flow is loaded Then a consent modal is displayed before any tests run And the modal summarizes tests to be performed (audio, connectivity, comprehension), data captured, and purposes And the modal includes links to Privacy Policy and Data Retention Policy And the supporter must provide explicit consent via an unchecked checkbox and Continue button And the modal offers "Run essential-only" and "Decline and exit" options And no non-essential tracking or storage begins until full consent is given And the system records consent_version, consent_choice (full|essential|declined), and consent_timestamp (ISO 8601 UTC) And the system logs consent_presented and consent_outcome events with event_id and session_id
Minimal Data Collection and Opt-Out Enforcement
Given a supporter selects essential-only or opts out of non-essential tracking When Ready Check executes Then only the minimal fields are persisted: pseudonymous_user_id, session_id, campaign_id, consent_choice, consent_version, check_types, check_results (pass/fail + score), recommended_role, user_selected_role, override_reason (optional), timestamps (start|end|duration_ms) And IP addresses are stored only in truncated form (/24 for IPv4, /48 for IPv6) and user agent is stored as family only (e.g., "Chrome") And no email, phone, exact IP, device model, or third-party identifiers are stored And analytics/marketing beacons are disabled for the session And each record is tagged data_scope = "essential" And Ready Check outcomes and routing remain functional
Tamper-Evident Audit Log Record with Hash Chain
Given any readiness check completes or a recommendation is overridden When the audit log entry is written Then the entry includes event_id (ULID), event_type, campaign_id, pseudonymous_user_id, session_id, check_types, check_results, score_overall, recommended_role, user_selected_role, override_reason (optional), consent_choice, consent_version, timestamps (start|end|duration_ms), and server_timestamp (ISO 8601 UTC) And the entry includes previous_hash and record_hash = SHA-256(HMAC(secret, canonical_json || previous_hash)) And direct updates to existing entries are rejected with 405; corrections are appended as new events referencing original event_id And an integrity verification endpoint can validate any sequence of 1–1,000 records and returns the first failing index if a mismatch occurs And all event timestamps are generated server-side to prevent client clock skew
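The hash chain above (record_hash = SHA-256(HMAC(secret, canonical_json || previous_hash)), append-only writes, server-side timestamps) can be sketched with Node's crypto module. The canonicalization shown is a naive recursive key sort; a production canonical form would need stricter rules, and the verification helper returns the first failing index as the criteria describe.

```typescript
import { createHash, createHmac } from "node:crypto";

interface AuditEntry {
  eventId: string;           // ULID in the real spec
  payload: Record<string, unknown>;
  previousHash: string;
  recordHash: string;
  serverTimestamp: string;   // ISO 8601 UTC, generated server-side
}

// Naive canonical JSON: recursive key sort so the same content always hashes the same.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) return `[${value.map(canonicalJson).join(",")}]`;
  if (value && typeof value === "object") {
    const entries = Object.keys(value as object).sort()
      .map(k => `${JSON.stringify(k)}:${canonicalJson((value as Record<string, unknown>)[k])}`);
    return `{${entries.join(",")}}`;
  }
  return JSON.stringify(value);
}

// record_hash = SHA-256(HMAC(secret, canonical_json || previous_hash))
function computeRecordHash(secret: string, canonical: string, previousHash: string): string {
  const mac = createHmac("sha256", secret).update(canonical + previousHash).digest();
  return createHash("sha256").update(mac).digest("hex");
}

// Append-only log: each entry commits to the one before it, so any tampering
// breaks verification from that index onward.
function appendEntry(log: AuditEntry[], secret: string, eventId: string, payload: Record<string, unknown>): AuditEntry {
  const previousHash = log.length ? log[log.length - 1].recordHash : "GENESIS";
  const serverTimestamp = new Date().toISOString();
  const canonical = canonicalJson({ eventId, ...payload, serverTimestamp });
  const entry: AuditEntry = { eventId, payload, previousHash, recordHash: computeRecordHash(secret, canonical, previousHash), serverTimestamp };
  log.push(entry);
  return entry;
}

// Recompute every hash; return the first failing index, or -1 if the chain is intact.
function verifyChain(log: AuditEntry[], secret: string): number {
  let previousHash = "GENESIS";
  for (let i = 0; i < log.length; i++) {
    const e = log[i];
    const canonical = canonicalJson({ eventId: e.eventId, ...e.payload, serverTimestamp: e.serverTimestamp });
    if (e.previousHash !== previousHash || e.recordHash !== computeRecordHash(secret, canonical, previousHash)) return i;
    previousHash = e.recordHash;
  }
  return -1;
}
```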
Organizer-Level RBAC and Data Scoping
Given roles Admin, Organizer, and Viewer exist with defined permissions When a user with Organizer role scoped to Campaign A accesses the audit log Then only records for Campaign A are visible and exportable And Admins can view and export all records within the organization; Viewers have read-only access without export capability And permission checks are enforced on list, detail, verify-integrity, and export endpoints And unauthorized access attempts return 403 with no data in the response body and are logged as security events And only permitted fields (no raw personal identifiers) are returned per role
Configurable Retention and Automated Purge
Given an organization sets readiness log retention to 90 days and legal_hold = false When any record exceeds 90 days since server_timestamp Then a nightly job purges it from the primary datastore within 24 hours And purged records are excluded from UI queries and exports within 24 hours And a signed purge manifest is generated with counts by day and the manifest_hash, available to Admins And if legal_hold = true, no deletions occur and an in-app banner indicates the hold state And changing retention to 30 days applies at the next scheduled purge, including to existing records that exceed 30 days
CSV Export and Audit-Ready Reporting Integration
Given an Admin requests a CSV export with date range, campaign filter, and consent filter When up to 100,000 matching records exist Then the export completes within 30 seconds and provides a signed URL that expires in 24 hours And the CSV uses UTF-8, LF line endings, quoted fields as needed, and includes a header row with columns: event_id, server_timestamp, campaign_id, pseudonymous_user_id, session_id, consent_choice, consent_version, test_audio, test_connectivity, test_comprehension, score_overall, recommended_role, user_selected_role, override_reason, data_scope, previous_hash, record_hash And all timestamps are in ISO 8601 UTC And export results respect RBAC and data_scope (essential-only sessions exclude non-essential fields) And the audit-ready report can ingest the CSV and verify a random 1% sample via the integrity endpoint with 100% pass rate

FastPass Badge

Generates a temporary volunteer badge as a QR and SMS code that encodes verified district, team, and permissions. Scan to check in at kiosks and launch one‑tap action pages without retyping PII; badges auto-expire for privacy and can be reissued instantly if a device changes.

Requirements

Time-Bound Signed Badge Tokens
"As a field organizer, I want volunteer badges that are time‑limited and tamper‑proof so that check‑ins and actions remain secure and private."
Description

Generate QR and SMS codes as cryptographically signed, time‑bound tokens that encode supporter district, team, and permission scopes. Tokens include issued_at, expires_at, and nonce; are signed with rotating keys; optionally encrypted to prevent data leakage in QR; and are validated by kiosks and action pages using cached public keys. Expired or tampered tokens are rejected, and configurable TTLs balance convenience and privacy. The implementation supports key rotation, revocation lists, and replay protection to ensure badges cannot be forged or reused improperly.

Acceptance Criteria
Signed Token Issuance with TTL and Nonce
Given an organizer issues a FastPass Badge to a verified supporter with district, team, and permission scopes configured When the badge is generated Then the token contains claims: issued_at and expires_at in ISO 8601 UTC with a 'Z' suffix And expires_at - issued_at equals the org-configured TTL (±1s tolerance) And the token contains a cryptographically secure nonce (jti) with at least 96 bits of entropy encoded as base64url And the token header includes kid referencing the active signing key and alg is one of the platform-approved algorithms And the QR code and SMS deep link are produced containing the token or a token reference identifier And the issuance event is logged with a hash of the token identifier; no PII is written to logs
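The claims listed above (ISO 8601 UTC issued_at/expires_at, a jti with at least 96 bits of entropy, a kid header naming the active key) line up with a JWT-style compact token. A minimal HMAC-signed sketch using Node's crypto; a production badge would more likely sign with an asymmetric key published via JWKS, and the names here simply mirror the criteria.

```typescript
import { createHmac, randomBytes } from "node:crypto";

interface BadgeClaims {
  iss: string;
  sub: string;          // pseudonymous supporter id, not PII
  district: string;
  team: string;
  scopes: string[];
  iat: string;          // ISO 8601 UTC with Z suffix
  exp: string;
  jti: string;          // >= 96 bits of entropy, base64url
}

const b64url = (input: string): string => Buffer.from(input).toString("base64url");

// Issue a compact signed token: header.payload.signature, with kid naming the
// active signing key and an org-configured TTL in seconds.
function issueBadgeToken(
  signingKey: Buffer,
  kid: string,
  claims: Omit<BadgeClaims, "iat" | "exp" | "jti">,
  ttlSeconds: number,
): string {
  const now = new Date();
  const full: BadgeClaims = {
    ...claims,
    iat: now.toISOString(),
    exp: new Date(now.getTime() + ttlSeconds * 1000).toISOString(),
    jti: randomBytes(16).toString("base64url"), // 128 bits of entropy
  };
  const header = b64url(JSON.stringify({ alg: "HS256", kid }));
  const payload = b64url(JSON.stringify(full));
  const signature = createHmac("sha256", signingKey).update(`${header}.${payload}`).digest("base64url");
  return `${header}.${payload}.${signature}`;
}
```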
Kiosk Validation Using Cached Public Keys
Given a kiosk has a cached JWKS last refreshed within the configured interval and a defined clock skew tolerance When a FastPass token is scanned Then the kiosk selects the public key by kid and verifies the token signature successfully And validation fails with INVALID_SIGNATURE if verification fails or kid is not found And validation fails with EXPIRED_TOKEN if current time > expires_at + allowed skew And on success, the kiosk creates a session and records outcome with token hash, kiosk_id, and timestamp And average validation time on a warm cache is within the performance budget for kiosks
Expiry Enforcement and Configurable TTL
Given an organization TTL configuration exists When the TTL is updated Then newly issued tokens use the new TTL and previously issued tokens retain their original expires_at And the platform enforces TTL within the allowed min/max range; attempts to set outside that range are rejected with 422 UNPROCESSABLE_ENTITY And tokens with expires_at in the past are rejected with EXPIRED_TOKEN and a reissue link or instruction is returned And validation honors the configured clock skew tolerance
Key Rotation and Revocation Handling
Given a new signing key is promoted and the previous key is scheduled for retirement When kiosks refresh JWKS within the configured refresh interval Then tokens signed by the new key validate and tokens signed by the previous key continue to validate until the retirement time And when a key or specific token id (jti) appears on the revocation list, validation fails with REVOKED_KEY within one refresh interval And if the kiosk is offline, it validates using the last successfully fetched JWKS until the cache TTL; after TTL expiry, validations are refused with KEYSET_STALE And all keyset fetches and revocation decisions are auditable with timestamps
Encrypted QR Payload to Prevent Data Leakage
Given the organization has enabled encrypted QR payloads When a FastPass QR is generated Then the QR encodes a JWE that reveals no district, team, or permission values when scanned with a generic QR reader And kiosks and action pages possessing the decryption key can decrypt and validate the token end-to-end And when encryption is disabled, the QR contains only an opaque token or reference id; no PII or scope values are present in cleartext And SMS delivery remains functional in both modes using a deep link that resolves to secure validation
Replay Protection Across Kiosks and Action Pages
Given a token nonce (jti) is first seen and accepted at kiosk A at time T When the identical token is presented again within the configured replay_window at kiosk B (B ≠ A) Then validation is rejected with REPLAY_DETECTED and no action is performed And when the identical token is presented again at kiosk A after replay_window and before expires_at, it is accepted And all replay decisions are logged with token hash, kiosk_id, and event time
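Replay protection keys on the token nonce plus the kiosk that first accepted it, evaluated against the configured replay window. A minimal in-memory sketch; a real deployment would need a shared store so kiosks see each other's accepts, and expiry checking is assumed to happen elsewhere in validation.

```typescript
interface FirstSeen { kioskId: string; acceptedAt: number }

// In-memory stand-in; production would use a shared store visible to all kiosks.
const seenTokens = new Map<string, FirstSeen>();

type ReplayDecision = "ACCEPTED" | "REPLAY_DETECTED";

// Accept the first presentation of a jti. Re-presentation at a *different* kiosk
// inside the replay window is rejected; outside the window (but before expiry,
// which is checked elsewhere) the token is accepted again.
function checkReplay(jti: string, kioskId: string, replayWindowMs: number, now = Date.now()): ReplayDecision {
  const prior = seenTokens.get(jti);
  if (!prior) {
    seenTokens.set(jti, { kioskId, acceptedAt: now });
    return "ACCEPTED";
  }
  if (now - prior.acceptedAt <= replayWindowMs) {
    return prior.kioskId === kioskId ? "ACCEPTED" : "REPLAY_DETECTED";
  }
  // Outside the replay window: treat as a fresh accept and restart the window.
  seenTokens.set(jti, { kioskId, acceptedAt: now });
  return "ACCEPTED";
}
```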
Scope and District/Team Integrity Enforcement
Given a valid token encoding district, team, and permission scopes When the token is used to access an action page Then only features authorized by the encoded scopes are enabled; requests requiring missing scopes are rejected with SCOPE_VIOLATION And the district value routes the supporter to district-specific scripts and legislator targets And any tampering of district, team, or scopes results in signature verification failure and INVALID_SIGNATURE And attempted elevation of scope without an updated valid signature is denied and logged
Instant Badge Issuance & Reissue
"As a volunteer, I want to receive my badge instantly and reissue it if I change devices so that I can keep participating without delays."
Description

Provide a self‑service flow that verifies supporter identity and district, then issues a FastPass badge via SMS deep link and downloadable QR. The flow binds the badge to the current device when supported, allows instant reissue on device change, enforces rate limits and captcha on requests, and supports step‑up verification for elevated permission scopes. Delivery channels include SMS with email fallback, with localized copy and clear UX. Issuance events are logged and revocable by admins without requiring user PII to be re‑entered.

Acceptance Criteria
Self-Service First-Time Badge Issuance (SMS Deep Link + QR)
- Given a supporter completes identity verification using phone or email, When their district is verified via address lookup or geo-confirmation, Then the system issues a FastPass token, sends an SMS deep link within 30 seconds (p95), and displays a downloadable QR on the confirmation screen.
- And the QR and deep link encode only a short-lived opaque token (no PII) that maps server-side to district, team, and permission scopes.
- And the token has a server-enforced expiry; attempts after expiry return an unauthorized response and show a “Badge expired — reissue” prompt.
- And the QR is scannable by supported kiosks and standard mobile cameras, successfully resolving to the one-tap action page.
Device Binding and One‑Tap Launch
- Given the supporter opens the SMS deep link on a supported device/browser, When the badge is first activated, Then the token is bound to that device and subsequent uses from the same device do not require re-verification until expiry or revocation.
- And if the environment does not support device binding, Then issuance completes without binding and the user is informed that cross-device use may require re-verification.
- And launching from the badge opens the correct one-tap action page pre-filled with district, team, and granted permissions, without requiring PII entry.
- And attempts to use a device-bound token from a different device are redirected to the reissue flow.
Instant Reissue on Device Change
- Given a supporter selects “Lost/changed device? Reissue badge”, When they re-verify using their registered phone or email and confirm district, Then all previous tokens are revoked and a new token is issued; the SMS deep link is delivered within 30 seconds (p95) and a new QR is displayed.
- And any attempt to use a revoked token returns an unauthorized response with a non-PII message and a link to the reissue flow.
- And reissue is completed without requiring re-entry of stored PII beyond the verification factors (phone/email and district confirmation).
Rate Limiting and CAPTCHA Enforcement
- Given issuance/reissue requests from the same phone/email or IP exceed default thresholds (per-identity: 5/hour; per-IP: 20/hour, both org-configurable), Then the system responds with 429 Too Many Requests and a user-friendly retry-after message.
- And after 2 failed verification attempts within 10 minutes or when anomalous traffic is detected, Then a CAPTCHA challenge is required; failure blocks issuance and is logged.
- And all rate-limit and CAPTCHA events are recorded in audit logs with timestamp and hashed identifiers, with no raw PII stored.
Step-Up Verification for Elevated Permission Scopes
- Given a supporter requests elevated scopes (e.g., kiosk check-in, team lead), When step-up verification is triggered, Then a second factor is required via the alternate channel (email if primary was SMS, or SMS if primary was email) before issuance.
- And if step-up verification fails or times out after 5 minutes, Then issuance for elevated scopes is denied and the supporter is offered base-scope issuance only if allowed by org policy.
- And the verification method, result, and granted scopes are recorded in the audit log.
Delivery Fallback and Localization
- Given SMS delivery fails due to carrier rejection or remains undelivered after 60 seconds (p95), Then an email fallback is sent with the same deep link and instructions; the confirmation screen still provides a downloadable QR.
- And all user-facing copy in the flow and messages is localized to the user's selected or detected locale; at minimum English and Spanish are available; no hard-coded strings remain in the UI or templates.
- And all outbound messages include org name, support contact, and opt-out instructions; links are HTTPS and contain no PII.
Admin Revocation and Audit Logging Without PII Re-entry
- Given a badge is issued or reissued, Then an immutable audit record is created with timestamp, hashed supporter ID, district, team, granted scopes, device-binding status, delivery channel, locale, and request IP.
- And an admin can revoke an active badge from the admin console, immediately invalidating the token; optional user notifications are sent per org policy.
- And a supporter whose badge was revoked can start the reissue flow without re-entering previously stored PII and must complete verification before a new badge is issued.
Kiosk Scan and Check-In
"As a check‑in volunteer, I want to scan badges quickly at a kiosk with offline resilience so that lines move fast and attendance is accurate."
Description

Deliver a kiosk scanning experience that reads QR badges via camera, accepts manual short codes as fallback, validates token signatures and expiry, and checks the volunteer in to the event and station. The kiosk records location, station ID, timestamp, and operator, queues events offline with automatic retry, and displays clear success or error states. It integrates with RallyKit attendance and action tracking, enforces permission scopes at the kiosk, and throttles repeated or suspicious scans.

Acceptance Criteria
Successful QR Badge Scan Check-In
Given the kiosk is authenticated to an event with a configured stationId, location, and operatorId And camera access is enabled and the device is online When a valid FastPass QR badge is presented and decoded And the token signature verifies and the token is not expired Then the kiosk displays a success state with volunteer first name or initials, team, and station within 2 seconds And a single check-in record is created with eventId, stationId, operatorId, location code/geohash, UTC ISO-8601 timestamp, badgeId hash, and tokenId And the record is transmitted to the server and acknowledged with HTTP 2xx And the volunteer is marked present for the event at this station
Manual Short Code Fallback Check-In
Given the kiosk is configured to accept manual short codes And the network is online When the operator enters a valid short code and submits Then the kiosk resolves the token via secure request (HTTPS) And validates signature and expiry And displays the same success state as QR within 3 seconds And creates and transmits a check-in record matching the QR flow And masks PII on screen beyond initials/team And after 3 invalid short-code attempts within 60 seconds, the kiosk enforces a 60-second cooldown and logs the attempts with operatorId
Token Signature and Expiry Validation
Given a QR or short code is provided with an invalid signature or expired token When it is scanned or submitted Then the kiosk shows a clear error: Invalid badge or Expired badge And does not display any PII And creates no attendance record And logs a rejected-scan event with reason, operatorId, stationId, location, and timestamp And provides an action to Request new badge or Use short code as applicable
Offline Queueing with Automatic Retry
Given the kiosk is offline or the server is unreachable When a valid QR badge is scanned or a valid short code is submitted Then the kiosk displays a Queued success state within 2 seconds And stores an encrypted, idempotent queue item locally with eventId, stationId, operatorId, location, timestamp, badgeId hash, and tokenId And prevents duplicate queued entries for the same idempotency key within 5 minutes And retries delivery with exponential backoff (starting 2s, max 60s) And upon connectivity, transmits queued items and marks them Synced within 5 seconds of successful post And if an item remains unsent after 24 hours, the kiosk surfaces an operator alert and retains the item for export
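The offline behavior (idempotency-keyed local queue, exponential backoff starting at 2 s and capped at 60 s) follows a common client-queue pattern. A minimal sketch with an in-memory array standing in for the encrypted local store; the sender callback and field names are illustrative.

```typescript
interface CheckInRecord {
  idempotencyKey: string;   // e.g. derived from badgeId hash + stationId + a 5-minute bucket
  eventId: string;
  stationId: string;
  operatorId: string;
  badgeIdHash: string;
  timestamp: string;        // ISO 8601 UTC
}

// In-memory stand-in for the encrypted local queue.
const queue: CheckInRecord[] = [];

// Prevent duplicate queued entries for the same idempotency key.
function enqueueCheckIn(record: CheckInRecord): void {
  if (!queue.some(r => r.idempotencyKey === record.idempotencyKey)) queue.push(record);
}

// Flush with exponential backoff: 2s, 4s, 8s, ... capped at 60s, until the queue
// drains or the caller gives up (e.g. surfacing an operator alert at 24 hours).
async function flushQueue(postCheckIn: (r: CheckInRecord) => Promise<boolean>): Promise<void> {
  let delayMs = 2000;
  while (queue.length > 0) {
    const record = queue[0];
    const ok = await postCheckIn(record).catch(() => false);
    if (ok) {
      queue.shift();          // mark as synced
      delayMs = 2000;         // reset backoff after a success
    } else {
      await new Promise(r => setTimeout(r, delayMs));
      delayMs = Math.min(delayMs * 2, 60000);
    }
  }
}
```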
Permission Scope Enforcement at Kiosk
Given the kiosk station requires specific permission scopes (e.g., phonebank, registration) And a badge token includes declared scopes When the badge is scanned Then if scopes include the station requirement, check-in proceeds as success And if scopes do not include the requirement, the kiosk displays Insufficient permissions, no check-in is recorded, and the attempt is logged And restricted one-tap action pages cannot be launched from this kiosk session without required scopes
Throttling Repeated or Suspicious Scans
Given repeated scan activity occurs at a kiosk When the same badge is scanned more than once within 60 seconds at the same station Then the kiosk displays Already checked in, creates no new record, and logs a duplicate-scan event And when more than 5 failed scans (invalid/expired/denied) occur within 30 seconds, the kiosk applies a 60-second throttle on further scans, displays a throttling message, and logs a security alert with deviceId and operatorId And throttling automatically clears after the cooldown and does not block legitimate successful scans thereafter
Attendance and Action Tracking Integration
Given a successful check-in is completed (live or after queued sync) When the server processes the event Then the attendee appears in the RallyKit event attendance list within 5 seconds of receipt And the check-in is visible in audit logs with operatorId, stationId, location, timestamp, and outcome And an ActionContext is attached so that one-tap action pages launched from the kiosk within 30 minutes prefill district, team, and permissions And idempotency prevents duplicate attendance entries for the same badge at the same station within 5 minutes
PII-less One-Tap Action Launch
"As a supporter, I want one‑tap access to the correct district action without retyping my info so that I can complete more actions in less time."
Description

Enable action pages to accept a validated badge token and launch the correct district‑specific scripts and legislator targets without retyping personal information. The action session is ephemeral, pre‑fills only non‑PII context derived from token claims, and routes to the appropriate call or email flow based on current bill status. The system prevents data persistence on shared kiosks, supports rapid multi‑action flows, and records outcomes for analytics and audit while minimizing data exposure.

Acceptance Criteria
Validate FastPass Token and Establish Ephemeral Session
Given a FastPass badge token is presented, When the token is scanned or clicked, Then the signature and algorithm are verified against the current RallyKit signing keys and invalid tokens are rejected with error code TOKEN_INVALID. Given a valid token with exp and iat claims, When validation occurs, Then tokens that are expired or not yet valid beyond a 60-second clock-skew leeway are rejected with error code TOKEN_EXPIRED. Given a valid token, When a session is created, Then the session TTL is 15 minutes from last activity and the session is destroyed on TTL expiry. Given a valid token, When the session is created, Then only district_id, team_id, permission_set, and token_id claims are stored in the in-memory session and no PII fields are persisted to localStorage, sessionStorage, IndexedDB, or non-HttpOnly cookies. Given a token_id is on the revocation list, When the token is presented, Then access is denied within 500 ms and an audit log entry is written with reason TOKEN_REVOKED and no PII.
District Targeting and Script Generation From Token Claims
Given a valid session containing district_id and current bill status, When the action screen loads, Then the correct legislator targets for district_id are resolved and displayed within 1 second. Given a valid session containing district_id and current bill status, When the action screen loads, Then the script variant corresponding to the current bill status is displayed with the correct district-specific merge fields populated from non-PII claims. Given the bill status changes during an active session, When the user advances to the next step, Then the flow updates to the new status within 5 seconds and surfaces the corresponding script and targets. Given an unrecognized district_id claim, When resolution is attempted, Then the user is routed to a district lookup fallback without any PII prompts and an anomaly is logged for investigation.
One‑Tap Launch on Shared Kiosk Without PII Persistence
Given a kiosk device marked as shared, When a FastPass QR is scanned, Then the action page opens directly to the relevant call or email intro without requiring or pre-filling PII fields. Given a shared kiosk session, When the user ends the action or 60 seconds of inactivity elapse, Then all session cookies are cleared, Cache-Control headers are no-store, no-cache, must-revalidate, and back navigation returns to a neutral lock screen. Given a shared kiosk, When storage is inspected after session end, Then localStorage, sessionStorage, and IndexedDB contain zero RallyKit keys and no PII is present in the browser disk cache. Given two different users scan badges sequentially, When the second user starts within 10 seconds of the first user's logout, Then no data from the first session is accessible and the second session starts cleanly.
Rapid Multi‑Action Flow Using Non‑PII Context
Given a completed action in a valid session, When the user taps Next Action within 10 minutes, Then the next action launches without rescanning and reuses only district_id, team_id, and permission_set from session context. Given a valid session chaining actions, When each new action loads, Then time to interactive is under 700 ms on median kiosk hardware and PII fields remain empty across all steps. Given a session chains multiple actions, When the fifth action completes, Then the user is prompted to rescan for additional actions and the session cannot initiate a sixth action without a new scan.
Dynamic Routing Based on Real‑Time Bill Status
Given bill status is Call, When a valid session starts, Then the user is routed to the call flow and sees the call script variant for their district. Given bill status is Email, When a valid session starts, Then the user is routed to the email flow and sees the email script variant for their district. Given bill status is No Action, When a valid session starts, Then the user sees a neutral message and no action flow is initiated. Given randomized test inputs across districts and statuses, When 1,000 combinations are executed, Then the incorrect routing rate is less than 0.1%.
Outcome Recording With Minimal Data Exposure
Given an action completes, When an outcome is recorded, Then the event includes action_type, timestamp, district_id, legislator_ids, team_id, token_hash (salted SHA-256), and outcome status, and excludes name, email, phone, and address. Given an action completes, When the event is published, Then it appears in the analytics stream within 2 seconds and in the audit export within 5 minutes with the exact fields specified. Given duplicate submissions for the same token_hash and action_type within 30 seconds, When events are processed, Then they are deduplicated to a single outcome record with an attempts count incremented. Given data retention policy is applied, When 12 months have elapsed since event creation, Then the event is purged from hot storage and only aggregate non-identifying metrics remain.
Badge Expiry, Reissue, and Device Change Handling
Given a token with an exp claim has expired, When it is presented, Then the user is shown a "Badge expired — rescan" prompt and no session is created. Given an organizer reissues a badge, When the new token is presented, Then the old token_id is marked revoked and any active sessions using it are terminated within 5 seconds. Given a token contains a non-PII binding_id that differs from the current device binding (shared kiosk hardening), When the token is presented, Then the prior binding is invalidated and a new session is required without exposing PII.
Admin Badge Controls and Policy
"As a campaign director, I want to configure badge policies and revoke badges on demand so that security and compliance requirements are met."
Description

Provide an admin console for configuring badge TTLs, permission templates, and team mappings; viewing active and expired badges; and revoking badges individually or in bulk. Include search and filters, webhook notifications for issuance and revocation, and CSV export of badge events. Admins can set kiosk policies such as offline limits and allowed scopes, require step‑up verification for sensitive permissions, and override or block reissue when risk is detected.

Acceptance Criteria
Badge TTL Configuration and Enforcement
Given an admin sets badge TTL to X (minutes/hours/days) and saves, When a badge is issued at T0, Then the badge expires at T0+X and cannot be used after that timestamp. Given the TTL setting is updated, When a new badge is issued, Then the new badge inherits the new TTL and existing badges retain their original expiry unless "Apply retroactively" is checked. Given "Apply retroactively" is checked and confirmed, When saved, Then all active badges recalculate expiry based on the new TTL and an audit entry records before/after values and actor. Given an expired badge is scanned at a kiosk, When validation occurs, Then access is denied with reason "Expired" and the attempt is logged with timestamp and kiosk ID.
Permission Templates and Team Mappings Management
Given an admin creates a permission template with scopes and team mappings and saves, When saved, Then the template appears in the list with version number, creator, and timestamp. Given a badge is issued using a template, When the badge detail is viewed, Then scopes and team mapping match the template and are read-only on the badge record. Given the template is edited, When changes are saved, Then a new template version is created and existing badges are not modified until reissued. Given a template mapping references a non-existent team, When saving, Then validation fails with a clear error and no changes persist. Given a template is archived, When issuing a badge, Then the archived template is not selectable.
Badge Directory Search, Filters, and Details
Given an admin opens the badge directory, When searching by email, phone, name, badge ID, team, scope, status, or date range, Then matching results return within 2 seconds for up to 10,000 records with pagination. Given filters (status, template, team, TTL, last activity) are applied, When Save View is clicked, Then the view is saved and can be set as default for the admin. Given an admin opens a badge detail, When viewing, Then the panel shows issuer, issue time, expiry, status, template, scopes, team, last kiosk seen, reissue history, and webhook delivery statuses. Given no results match, When search executes, Then an empty state appears showing active filters and a Clear Filters action.
Revoke Badges Individually and in Bulk
Given an active badge is selected, When Revoke is confirmed, Then status changes to Revoked immediately, QR/SMS codes are invalidated, and subsequent scans are denied with reason "Revoked". Given multiple badges are selected (up to 10,000), When Bulk Revoke is confirmed, Then a progress indicator shows completion and a summary lists successes and failures with reasons. Given any revocation occurs, When completed, Then an audit log entry is written and revocation webhooks are fired for each affected badge. Given a bulk revoke is in progress, When the admin navigates away, Then the job continues server-side and completion is shown in Notifications.
Webhook Notifications for Issuance and Revocation
Given a webhook endpoint with secret is configured for events badge.issued and badge.revoked, When a badge is issued or revoked, Then a POST is sent within 5 seconds containing event_id, type, created_at, badge_id, actor_id, template_id, status, and an HMAC-SHA256 signature header. Given the endpoint responds non-2xx, When delivery fails, Then retries occur with exponential backoff for up to 24 hours and redelivery stops after a 2xx response. Given an admin clicks Send Test, When executed, Then a test event is sent and the delivery result is logged and visible in the UI. Given an endpoint is paused or returns 410 Gone, When events occur, Then deliveries are skipped and a warning appears in webhook settings.
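For partners consuming these webhooks, a receiver-side verification sketch might look like the following; the header name and hex encoding are assumptions rather than a documented contract:

```typescript
// Receiver-side verification sketch for badge.issued / badge.revoked webhooks.
// The header name (e.g. X-RallyKit-Signature) and hex encoding are illustrative assumptions.
import { createHmac, timingSafeEqual } from "node:crypto";

function verifyWebhookSignature(
  rawBody: string,          // exact bytes of the POST body, before JSON parsing
  signatureHeader: string,  // value of the signature header as received
  secret: string            // shared secret configured with the endpoint
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHeader, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```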
CSV Export of Badge Events
Given an admin selects a date range and event types (issued, revoked, expired, scan_denied), When Export CSV is requested, Then a background job generates a CSV and the file is available within 2 minutes for ≤100,000 events. Given the export completes, When the file is downloaded, Then timestamps are UTC ISO-8601, IDs are strings, and PII fields are redacted per policy. Given an export link is generated, When 7 days pass, Then the link expires and the file is deleted; an audit log entry records the export request and completion. Given export size exceeds 1,000,000 events, When requested, Then the UI prompts to narrow filters or request a split export.
Kiosk Policy and Sensitive Permission Controls
Given an admin sets the kiosk offline limit to N minutes and the allowed scopes list, When a kiosk is offline, Then check-ins are permitted for up to N minutes using cached validation and denied thereafter until reconnect. Given a badge action requests a sensitive scope marked step-up, When initiated, Then the user must complete step-up verification (SMS OTP or admin PIN) within 2 minutes or the action is denied and logged. Given a reissue request originates from a new device or exceeds velocity thresholds per risk policy, When automatic reissue is attempted, Then the system blocks reissue and requires explicit admin override with a reason, or denies it if hard-block is enabled. Given any policy change is saved, When kiosks poll or a check-in occurs, Then the updated policy is applied within 60 seconds and the applied policy version is recorded in the audit log.
Badge Scan Audit Logging
"As a compliance officer, I want detailed, privacy‑preserving logs of badge events so that we can prove activity without exposing PII."
Description

Implement privacy‑preserving, immutable logging for badge lifecycle and scan events, capturing hashed badge IDs, profile references, timestamps, kiosk or station identifiers, IP or device metadata, and action outcomes. Logs are queryable in real time for live dashboards and exportable for compliance audits, with retention policies and redaction routines to minimize PII exposure. Integrity is maintained via append‑only storage and signature chaining aligned with RallyKit’s audit‑ready proof standard.
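To make the signature chaining concrete, here is a minimal sketch under stated assumptions (record shape, an HMAC-based service signature, and verification starting from the genesis record):

```typescript
// Minimal hash-chain sketch: each record carries its predecessor's hash, its own
// content hash, and a service signature, so tampering breaks downstream verification.
// Field names and the HMAC-based signature are illustrative assumptions.
import { createHash, createHmac } from "node:crypto";

interface ChainedRecord {
  payload: string;     // JSON of the privacy-minimized event
  prev_hash: string;   // content_hash of the previous record ("GENESIS" for the first)
  content_hash: string;
  signature: string;
}

function appendRecord(prev: ChainedRecord | null, payload: string, serviceKey: string): ChainedRecord {
  const prev_hash = prev ? prev.content_hash : "GENESIS";
  const content_hash = createHash("sha256").update(prev_hash + payload).digest("hex");
  const signature = createHmac("sha256", serviceKey).update(content_hash).digest("hex");
  return { payload, prev_hash, content_hash, signature };
}

// Verifies a chain that starts at the genesis record.
function verifyChain(records: ChainedRecord[], serviceKey: string): boolean {
  return records.every((r, i) => {
    const expectedPrev = i === 0 ? "GENESIS" : records[i - 1].content_hash;
    const hash = createHash("sha256").update(expectedPrev + r.payload).digest("hex");
    const sig = createHmac("sha256", serviceKey).update(hash).digest("hex");
    return r.prev_hash === expectedPrev && r.content_hash === hash && r.signature === sig;
  });
}
```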

Acceptance Criteria
Immutable Append-Only Chain
Given the audit log store is initialized with a genesis record, When any badge lifecycle or scan event is written, Then the system appends a new immutable record that cannot be updated or deleted via any supported interface, And the record includes the previous record’s hash, its own content hash, and a service signature to form a verifiable chain, And a chain verification job successfully validates any contiguous range of records. When a record is tampered with at rest, Then the next scheduled verification detects the break and emits a critical alert within 60 seconds and marks the affected chain segment as invalid, And write attempts that would break chain invariants are rejected with an error and no partial writes occur.
Pseudonymous IDs and PII Minimization
Given a badge-related event is to be logged When persisting the event Then store only a salted hash of the badge ID and an internal profile UUID; do not store raw badge ID or direct PII (name, email, phone) And store IP metadata as truncated CIDR (IPv4 /24, IPv6 /48) and device metadata without stable unique device identifiers And schema validation rejects any payload containing disallowed PII fields with a 400 error and reason And encryption at rest is enabled for the log store and its backups And a weekly automated scan finds zero occurrences of disallowed PII fields in sampled records (sample size >= 100k or full dataset if smaller)
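A small sketch of the pseudonymization step described above (salted hashing of the badge ID and truncation of IP metadata); helper names are illustrative and the IPv6 path assumes an expanded address:

```typescript
// Pseudonymization sketch: hashed badge ID plus truncated network metadata.
// Helper names and the salt source are assumptions for illustration.
import { createHash } from "node:crypto";

function hashBadgeId(badgeId: string, salt: string): string {
  return createHash("sha256").update(salt + badgeId).digest("hex");
}

// Truncate IPv4 to a /24 and IPv6 to a /48 so no full client address is stored.
// Assumes an expanded (non-abbreviated) IPv6 address for simplicity.
function truncateIp(ip: string): string {
  if (ip.includes(":")) {
    const groups = ip.split(":");
    return `${groups.slice(0, 3).join(":")}::/48`; // keep the first three 16-bit groups
  }
  const octets = ip.split(".");
  return `${octets.slice(0, 3).join(".")}.0/24`;
}
```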
Complete Event Coverage and Fields
Given badge lifecycle events (issue, reissue, rotate, expire, revoke) and scan events (check-in, action launch, action completion, failure) When these events occur in production Then an audit log record is written for each with fields: event_type, hashed_badge_id, profile_ref, timestamp_utc (ISO-8601), kiosk_id or station_id, team_id, district_code, outcome, and request_context (truncated IP/device) And 99.9% of emitted events over any rolling 24 hours are successfully persisted with no loss And retries produce no duplicates; duplicate submissions are detected and coalesced idempotently And event timestamps are monotonic per source within 500 ms tolerance; out-of-order arrivals are ordered by source sequence on read
Real-Time Queryability for Dashboards
Given continuous event ingestion at up to 50 events per second When querying the live dashboard for the last 15 minutes filtered by kiosk_id, event_type, team_id, or outcome Then new events are visible within 2 seconds of write at the 95th percentile and within 5 seconds at the 99th percentile And queries returning up to 5,000 rows complete in under 1 second at p95 and under 3 seconds at p99 And results exclude raw PII, returning only hashed IDs and references And pagination and deterministic sorting by timestamp desc with event_id tie-breaker are supported
Compliance Export with Integrity Proof
Given an auditor with Compliance Export permission requests an export for a date range and optional filters (team_id, district, event_type) When the export is generated Then the system produces CSV and JSONL files with a defined schema and field dictionary And includes a manifest with total record count, SHA-256 checksums per file, and a chain proof containing first and last record hashes signed by the service And a verification utility or endpoint validates file checksums and chain continuity with zero errors And exports up to 100,000 records complete within 2 minutes 95% of the time And all export requests and downloads are themselves logged in the audit log
Retention and Redaction Enforcement
Given a configured retention policy of 18 months and an allowlist of fields to keep When records exceed the retention window and are not on legal hold Then a scheduled job redacts or purges disallowed fields while preserving chain integrity via tombstone or redaction markers And the job emits a summary log with counts of redacted and retained records and any errors And subsequent queries return redacted placeholders for affected fields and never return deleted values And attempts to access records beyond retention via export or API return a 404 or 410 And a legal hold flag prevents redaction for specified scopes until released, with all holds logged
Accessible Scanning & Fallbacks
"As an attendee with accessibility needs or a basic phone, I want alternative check‑in and action flows so that I’m not blocked by technology."
Description

Ensure the badge and kiosk experiences meet accessibility and reliability standards by supporting high‑contrast and large‑text modes, screenreader labels, keyboard‑only navigation, and multi‑language content. Provide low‑light camera guidance, manual short‑code entry, printable badges, and SMS links that open action pages directly. Handle camera permission denials gracefully and offer alternate flows without blocking check‑in or action launch.

Acceptance Criteria
High-Contrast and Large-Text Modes Toggle
- Given any badge or kiosk screen, When High Contrast is toggled on, Then all text and interactive elements meet a minimum 4.5:1 contrast ratio and no information is conveyed by color alone. - Given any badge or kiosk screen, When Large Text is enabled and the user sets OS/app text size to 200%, Then content reflows without overlap/truncation, horizontal scrolling is not required at 320px width, and all tap targets are at least 44x44 px. - Given High Contrast or Large Text mode is enabled, When the user navigates between Badge, Scanner, and Action pages, Then the mode persists until explicitly turned off and a persistent control to toggle it is available within 2 taps/keystrokes. - Given a badge QR code is displayed, When High Contrast is enabled, Then the QR maintains sufficient quiet zone and contrast to remain scannable with a success rate of ≥95% within 2 seconds on mid‑range devices.
Screen Reader Accessible Labels and Announcements
- Given a screen reader is active, When the Scanner screen loads, Then focus moves to the instruction heading which is announced with level and purpose; the camera preview is marked aria-hidden and scanning status updates via aria-live="polite". - Given any actionable control, When focused, Then its accessible name, role, and state are announced and are not duplicated or empty. - Given an error or success occurs (e.g., code invalid or check-in complete), When it appears, Then it is announced via role="alert" within 500 ms and keyboard focus moves to the message or the next logical control. - Given content includes action scripts or bill info, When read by a screen reader, Then the reading order matches the visual order and headings follow a correct hierarchy without skipped levels.
Keyboard-Only Navigation and Focus Order
- Given only a keyboard is used, When navigating the kiosk or web app, Then a visible focus indicator is present on every focusable element and meets at least 3:1 contrast with 2 px minimum thickness. - Given the user starts at the check-in screen, When using Tab/Shift+Tab/Enter/Space/Escape, Then they can complete check-in and launch an action without mouse input. - Given a modal dialog opens, When navigating with keyboard, Then focus is trapped within the modal, Escape closes it, and focus returns to the triggering control. - Given the Scanner screen, When navigating with keyboard, Then "Use manual code" and "SMS link" alternatives are reachable within 5 Tab presses and actionable via Enter/Space.
Multi-Language Content Selection and Persistence
- Given a language selector, When the user chooses a supported language (e.g., EN, ES), Then all UI labels, guidance, errors, and dynamic content render in the chosen language within 500 ms. - Given a deep link includes a locale parameter or a kiosk default locale is configured, When the app loads, Then the corresponding language is selected automatically; if unsupported, EN is used with no crash or blocker. - Given the language is changed mid-flow, Then user inputs persist and the step does not reset; the choice persists for the session and 30 days in local storage unless cleared. - Given a screen reader is active, When language switches, Then the document lang attribute and any inline alternate-language spans are updated to ensure correct pronunciation and formatting.
Camera Permission Denied Graceful Fallback
- Given the app requests camera access, When the user denies or the device lacks a camera, Then the scan step presents non-blocking alternatives: "Enter Short Code", "Receive SMS Link", and "Retry Camera" without preventing progress. - Given the user selects "Retry Camera", When permission is granted, Then the scanner activates without page reload or loss of state. - Given a persistent deny is detected, Then a "How to enable camera" help link opens in a new tab with browser-specific instructions. - Given a permission denial event occurs, Then an anonymized analytics record is created with device type and browser for reliability reporting.
Reliable Scanning in Varying Conditions (Low-Light and Printed Badges)
- Given ambient light is below 50 lux or three consecutive scan failures occur within 10 seconds, Then contextual low‑light guidance is shown and a flashlight toggle appears on supported devices within 200 ms. - Given the device supports torch control, When the flashlight is toggled, Then the torch state changes within 200 ms and its state is announced to assistive technologies. - Given a printed FastPass badge at 300 DPI with QR size ≥ 25 mm and a 4‑module quiet zone, When scanned from 10–50 cm under 50–500 lux, Then the scan succeeds ≥ 95% within 2 seconds on mid‑range devices. - Given repeated scan failures, Then a prominent control to switch to manual short‑code entry is shown and actionable without leaving the screen.
Non-Scanning Fallbacks: Manual Short-Code and SMS Deep-Link
- Given the user selects "Enter Short Code", When entering a 6–8 character alphanumeric code, Then client-side format validation prevents invalid submission with inline errors, and server validation returns success/failure within 700 ms on a 4G connection. - Given a valid short code is submitted, Then the user is checked in and taken to the correct one‑tap action page pre‑filled with district, team, and permissions without additional PII entry. - Given the user requests an SMS link, When a valid phone number is provided or on file, Then an SMS arrives within 30 seconds; tapping the link opens directly to the action page without re‑authentication. - Given SMS composition is unavailable (e.g., desktop kiosk), Then a short URL and QR code for the action page are displayed with 4.5:1 contrast and are scannable with ≥ 95% success within 2 seconds.

Lite Mode

Low-data onboarding and training with compressed assets, SMS fallbacks, and offline caching of the MicroBrief and scripts. Volunteers in rural or crowded venues can enroll and act even on shaky networks, with actions syncing as soon as a connection returns.

Requirements

Adaptive Low-Data Asset Delivery
"As a volunteer on a limited data plan, I want pages to load quickly with minimal data so that I can enroll and take actions even when my connection is slow or expensive."
Description

Deliver a text-first Lite Mode experience that minimizes bandwidth by aggressively compressing and conditionally serving assets based on detected network quality. Implement adaptive image/video compression (WebP/AVIF, bitrate ladder), system fonts, deferred/lazy loading, and removal of non-essential third-party scripts. Enforce payload budgets (<=250KB first interactive load on 3G; <=100KB per subsequent view) with build-time and runtime guards. Provide a text-only rendering path for MicroBriefs, scripts, and action pages that degrades gracefully without images. Integrate with the campaign content pipeline so organizers upload once and the system generates both standard and lite variants automatically. Include client-side cache-busting and integrity checks to ensure safe reuse of assets across flaky sessions.
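As a sketch of the runtime side of this behavior (not a prescribed implementation), the render-path decision and a simple payload-budget guard might look like this; property access is feature-detected because the Network Information API is not universally available:

```typescript
// Browser-side sketch: choose a rendering path from the Network Information API and
// the Save-Data hint. Thresholds mirror the budgets and triggers described above.
type RenderPath = "standard" | "lite" | "text-only";

function chooseRenderPath(): RenderPath {
  const conn = (navigator as any).connection; // not available in every browser
  if (!conn) return "standard"; // no signal: let the payload-budget guard decide later
  if (conn.saveData || conn.effectiveType === "slow-2g" || conn.effectiveType === "2g") {
    return "text-only";
  }
  if (conn.effectiveType === "3g") return "lite";
  return "standard";
}

// Runtime budget guard: defer a request if it would push the view past its budget
// (e.g. 250 KB for the first interactive load on 3G, 100 KB per subsequent view).
function withinBudget(spentBytes: number, nextBytes: number, budgetBytes: number): boolean {
  return spentBytes + nextBytes <= budgetBytes;
}
```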

Acceptance Criteria
3G First Interactive Payload Budget
Given Lite Mode is enabled via user setting or Save-Data and effectiveType is "3g" When the user loads the first Lite entry page (MicroBrief) Then the total compressed transfer size before the first interactive element is ready is <= 250KB And no non-essential third-party scripts are requested before first interactive And the build pipeline fails if the total size of critical Lite entry resources (brotli/gzip) exceeds 250KB And a runtime guard logs a budget violation and defers non-critical requests if projected size would exceed 250KB
Subsequent View Payload Budget
Given Lite Mode is active and the initial Lite bundle is cached When the user navigates to any subsequent Lite view within the same campaign Then additional network transfer attributable to that view before its first interactive element is ready is <= 100KB And prefetching is disabled on slow-2g/2g/3g/Save-Data, and any eager media is deferred And a runtime guard blocks non-critical requests that would exceed the 100KB budget and emits a budget violation metric
Adaptive Media Variant Selection
Given image and video assets have multiple renditions (AVIF/WebP with JPEG/PNG fallback; video bitrate ladder) When Lite Mode detects effectiveType in {"slow-2g","2g","3g"} or Save-Data is true Then images are served as AVIF or WebP if supported, else JPEG/PNG fallback, selecting the smallest variant that matches the rendered size And videos do not auto-play or auto-load on "slow-2g","2g" or Save-Data; only a poster is shown until explicit user action And on "3g" a video loads the lowest bitrate rendition only after explicit tap And the server/CDN uses Client Hints (DPR, Width, Save-Data) or the client uses srcset/sizes to choose the variant
Text-Only Rendering Path for Core Pages
Given Lite Mode text-only is triggered by effectiveType in {"slow-2g","2g"}, Save-Data true, or user choice When the user opens MicroBriefs, call/email scripts, or action pages Then the page renders with text and essential CTA elements only, with zero network requests for images or video And no broken media placeholders are displayed and layout remains readable And all core actions (copy script, tap-to-call/email, submit) are fully functional end-to-end And accessibility is preserved (focus order intact and CTAs have accessible names)
Content Pipeline Auto-Generation of Lite Variants
Given an organizer uploads campaign content (copy, images, optional videos) once When the campaign is saved or published Then the system generates Lite variants automatically: compressed images (AVIF/WebP + fallback), video bitrate ladder renditions, and text-only extracts for MicroBriefs/scripts/action pages And each variant is linked in a manifest associated with the content record for delivery-time selection And uploads without alt text prompt for alt or block publish with a warning until resolved And processing completes within 5 minutes for campaigns with <= 200 assets, with failures surfaced as retriable tasks and logged
Cache Busting and Integrity Across Flaky Sessions
Given static assets are fingerprinted (content-hash in URLs) and shipped with integrity metadata (SRI or client checksums) When a returning user on an unstable connection resumes Lite Mode Then cached assets with matching fingerprints are reused without re-download And any fingerprint mismatch triggers a safe re-fetch and replaces the stale asset And a new deploy invalidates stale caches within 5 minutes via an updated manifest/service worker version And any integrity verification failure blocks execution of the affected asset, falls back to text-only where possible, and logs an error event
Minimal Critical Path: Deferred Assets, System Fonts, No Non-Essential Third-Party Scripts
Given Lite Mode is active When loading any Lite page Then only the system font stack is used pre-interactive (no webfont requests occur) And all images and videos are lazy-loaded and do not initiate network requests before entering the viewport And analytics/marketing scripts not on the Lite allowlist do not load; no network requests to blocked third-party domains occur during Lite sessions And no preload or preconnect hints target excluded third-party origins
Offline MicroBrief & Script Caching
"As a field volunteer in a dead zone, I want the MicroBrief and scripts available offline so that I can keep training and acting without waiting for signal."
Description

Preload and cache the current campaign MicroBrief, district-specific scripts, and essential UI in a Service Worker-managed offline cache so volunteers can read training content and perform actions without connectivity. Version caches by campaign, bill status, and locale; apply TTLs and semantic versioning to invalidate stale content. Display freshness indicators and auto-refresh when a connection returns. Ensure all cached content is text-first, with optional low-fidelity media placeholders. Support multilingual content and right-to-left scripts. Provide graceful fallback when a new version is required but unavailable offline.
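A minimal Service Worker sketch of the versioned offline cache; the cache-name scheme follows the namespacing described here, while the asset URLs and the TypeScript webworker typings are assumptions:

```typescript
// Service worker sketch: pre-cache the MicroBrief, district scripts, and UI shell
// under a namespace keyed by campaign, bill status, locale, and semantic version.
// Asset URLs are illustrative; requires the "webworker" TS lib for these types.
const sw = self as unknown as ServiceWorkerGlobalScope;

const CACHE_NAME = "campaign-123|hb-42|en-US|1.4.0"; // campaignId|billStatus|locale|version
const PRECACHE_URLS = ["/lite/shell.html", "/lite/microbrief.json", "/lite/scripts/district-12.json"];

sw.addEventListener("install", (event) => {
  event.waitUntil(caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS)));
});

sw.addEventListener("activate", (event) => {
  // Remove caches from superseded versions only after the new worker takes over (the swap).
  event.waitUntil(
    caches.keys().then((keys) =>
      Promise.all(keys.filter((k) => k !== CACHE_NAME).map((k) => caches.delete(k)))
    )
  );
});

sw.addEventListener("fetch", (event) => {
  // Cache-first so MicroBrief and scripts render offline; fall back to the network.
  event.respondWith(caches.match(event.request).then((hit) => hit ?? fetch(event.request)));
});
```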

Acceptance Criteria
Offline Access to MicroBrief and District Scripts
- Given a first-time load with connectivity, When the service worker installs, Then it caches the current campaign MicroBrief, the user's selected district-specific scripts, and the essential UI shell. - Given the device is offline, When the volunteer opens the MicroBrief or an action page for a cached district, Then the content is rendered from the service worker cache with no network requests. - Given the device is offline, When the volunteer requests a district script that is not cached, Then the app displays "Not available offline" and offers to select a cached district or retry when online. - Given a cache exists, When the app is in offline mode, Then navigation to the MicroBrief and action page succeeds without errors.
Versioned Cache by Campaign, Bill Status, and Locale
- Given cached entries, When inspected in Cache Storage, Then keys are namespaced by campaignId, billStatus, locale, and semantic version (e.g., campaign-123|hb-42|en-US|1.4.0). - Given a new semantic version is available online, When the service worker activates, Then it downloads the new version into a new cache namespace without deleting the previous version until the swap completes. - Given versioned caches exist, When switching to the newer version, Then the swap is atomic and all content within the session uses one version consistently.
TTL and Semantic Version Staleness Handling
- Given a TTL is set for cached content, When the TTL expires while offline, Then the UI shows a "Stale" badge with the last-updated timestamp and the content remains readable. - Given connectivity is restored after TTL expiry, When the app regains network, Then the service worker refreshes stale entries in the background and updates the UI to "Updated" when complete. - Given a semantic version difference of major (e.g., 1.x to 2.0.0), When detected, Then the previous version is marked "Update required" and is not used for actions that declare a minVersion higher than cached.
Freshness Indicator and Auto-Refresh on Reconnect
- Given the app is offline, When viewing cached content, Then a persistent offline indicator and "Last synced: <timestamp>" are visible. - Given the app regains connectivity, When newer content is available, Then the content auto-refreshes in place within the current view and surfaces a non-blocking "Content updated" notification. - Given auto-refresh occurs, When the update completes, Then the service worker purges the superseded cache version per retention policy.
Text-First Content and Low-Fidelity Placeholders
- Given low-data mode or offline state, When rendering MicroBrief and scripts, Then all text content loads without blocking on media requests. - Given media assets are present, When offline or low-data mode is enabled, Then images/videos are replaced by low-fidelity placeholders with reserved dimensions to avoid layout shift. - Given connectivity is available, When the user opts to load media, Then full-fidelity media is fetched on demand; otherwise placeholders remain without auto-loading.
Multilingual and RTL Rendering Offline
- Given locale selection is set to an RTL language (e.g., ar), When offline, Then MicroBrief and scripts render in the selected language from cache with dir=rtl applied to the layout and correct text alignment. - Given locale selection is set to an LTR language (e.g., en-US, es), When offline, Then content renders in that language from cache with dir=ltr. - Given a string is missing for the selected locale, When offline, Then the system falls back to the default locale for that string and indicates the fallback non-intrusively.
Graceful Fallback for Required New Version Unavailable Offline
- Given an action requires script version N and the cached version is < N, When the device is offline, Then the app blocks starting that action, explains that an update is required, and provides a "Retry when online" option. - Given the device comes back online after the above, When the required version is fetched, Then the action becomes available automatically and the user is notified. - Given the above scenario, When offline, Then reading the cached MicroBrief remains available even if actions are blocked.
Deferred Actions Queue & Sync
"As a volunteer with spotty service, I want my actions to be saved and synced later so that my efforts still count and organizers can see accurate records."
Description

Enable actions (calls, emails, sign-ups) to be executed offline by recording them in a local, encrypted queue that syncs automatically when connectivity returns. Use client-generated, idempotent action IDs, local timestamps, and minimal PII to ensure deduplication and reliable replay. Implement background sync with exponential backoff, conflict detection, and server acknowledgments that include canonical timestamps for auditability. Provide user feedback for queued vs. synced states, and ensure organizer dashboards reflect eventual consistency with clear badges and retry errors. All stored data should be encrypted at rest using WebCrypto with secure key handling.
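A sketch of capturing one action into the encrypted queue with WebCrypto AES-GCM and a client-generated idempotency key; the record fields follow this description, while key storage and persistence details are assumptions:

```typescript
// Offline queue sketch: encrypt a queued action with a non-extractable AES-GCM key.
// Where the key and ciphertext are persisted (e.g. IndexedDB) is an assumption.
interface QueuedAction {
  client_action_id: string;   // idempotency key, e.g. crypto.randomUUID()
  action_type: "call" | "email" | "signup";
  local_timestamp: string;    // UTC ISO 8601
  campaign_id: string;
  target_id: string;
}

async function createQueueKey(): Promise<CryptoKey> {
  // Non-extractable: the key material never leaves the device.
  return crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, ["encrypt", "decrypt"]);
}

async function encryptAction(
  action: QueuedAction,
  key: CryptoKey
): Promise<{ iv: Uint8Array; ciphertext: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per record
  const plaintext = new TextEncoder().encode(JSON.stringify(action));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return { iv, ciphertext }; // persist both alongside the client_action_id
}
```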

Acceptance Criteria
Offline Action Capture & Encrypted Local Queue
Given the device is offline when a user completes a call, email, or sign-up action When the action is saved locally Then it is written to a local queue encrypted at rest via WebCrypto using a non-extractable device-specific key And the queued record includes only: client_action_id, action_type, local_timestamp (UTC ISO 8601), campaign_id, target_id, script_version_id, and the minimal PII required to replay And no plaintext PII is readable on disk; direct storage inspection reveals only ciphertext And the write operation returns success or failure deterministically; on failure the user is informed and the action is not marked complete
Client-Generated Idempotent IDs & Deduplication
Given a new offline action is created When client_action_id is generated Then it is a ULID or UUIDv4 and persisted with the record When the same client_action_id is replayed multiple times Then the server returns 200 with duplicate=true for non-first occurrences and no additional audit records are created When the same client_action_id arrives with a mismatched payload checksum Then the server responds 409 Conflict and the client marks the item error=payload_mismatch and halts retries And across 100k generated actions in test, no ID collisions occur
Background Sync & Exponential Backoff
Given there are queued actions and the last sync attempt failed due to connectivity When scheduling the next retry Then exponential backoff is applied starting at 1s, doubling each attempt up to a max of 15m with ±20% jitter When connectivity changes from offline to online Then a sync is triggered within 5 seconds or on next foreground if background sync is unsupported And actions are sent FIFO; 5xx responses are retried; 4xx responses mark the item as terminal error with reason And an item is marked failed after 10 unsuccessful attempts and remains retriable only via explicit user action
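The retry schedule above could be computed with a helper like this sketch (1 s base, doubling per attempt, 15-minute cap, ±20% jitter):

```typescript
// Backoff sketch matching the schedule in the criteria above. Jitter is applied
// after the cap, per the wording "up to a max of 15m with ±20% jitter".
function nextRetryDelayMs(attempt: number): number {
  const baseMs = 1_000;
  const capMs = 15 * 60 * 1_000;
  const exponential = Math.min(baseMs * 2 ** attempt, capMs);
  const jitter = 1 + (Math.random() * 0.4 - 0.2); // uniform in [0.8, 1.2]
  return Math.round(exponential * jitter);
}
```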
Server Acknowledgment & Canonical Timestamping
Given a queued action is accepted by the server When the server responds Then the acknowledgment includes server_action_id, canonical_timestamp (UTC ISO 8601), and duplicate flag When the acknowledgment is received Then the client marks the item synced, updates it with canonical_timestamp, and removes any sensitive PII from local storage If the server processed the action but the client missed the ack due to a drop Then on replay the server returns duplicate=true with the original server_action_id and canonical_timestamp
User Feedback for Queued vs Synced States
Given an action is queued offline When viewing the action confirmation UI Then a visible "Queued for sync" status and badge are displayed and are screen-reader accessible with accurate ARIA labels When the action is successfully synced Then the UI updates to "Sent" within 2 seconds of acknowledgment receipt and the queued badge is removed If a retryable error occurs Then the UI shows "Retrying" with last-attempt time; if a terminal error occurs, it shows "Failed" with a human-readable reason and a "Try Again" control
Organizer Dashboard Eventual Consistency & Badging
Given offline actions later sync to the server When acknowledgments are issued Then the dashboard reflects the new actions within 10 seconds via live update or next refresh And actions arriving more than 10 minutes after their local_timestamp are tagged "Late Sync" in UI and exports And duplicate replays from the same client_action_id do not increment totals more than once across widgets and exports And terminal errors appear in an Errors view with error_code, retry_count, and last_attempt timestamp
Secure Key Handling & Data Minimization Post-Sync
Given encryption keys are created on first use When generating and storing keys Then keys are created via WebCrypto as non-extractable, remain on-device, and are never transmitted When an item reaches status=synced Then any stored PII needed only for replay is purged within 5 seconds, retaining only non-PII metadata If the local keystore is cleared Then previously queued ciphertext is unrecoverable and the app prompts re-enrollment before capturing new actions When a key rotation is triggered by version upgrade Then existing queued items remain decryptable or are re-encrypted successfully; otherwise they are marked failed without exposing plaintext
SMS Action Fallbacks
"As a volunteer with no data connection, I want to receive scripts and targets via SMS and confirm completion so that I can still participate and get credit for my actions."
Description

Provide SMS-based enrollment and action flows when the web experience cannot be reliably loaded. Support opt-in via keyword, SMS delivery of district-matched call scripts and targets, and simple reply-based confirmations (e.g., DONE) to log completion. Map campaign actions to concise SMS templates with dynamic merges (name, district, bill status) and ensure compliance with carrier policies (opt-in/out, HELP/STOP) and privacy constraints (avoid transmitting sensitive PII). Integrate with supported SMS gateways (e.g., Twilio) and record SMS events for analytics and audit logs. Offer locale-aware templates and automatic link shorteners for any required deep links.
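A gateway-agnostic sketch of classifying inbound replies per the keyword flows in this description; the routing around it (gateway webhook validation, template selection) is out of scope and the handler names are assumptions:

```typescript
// Inbound SMS classification sketch. Reply sets come from the flows described above;
// treating YES/START/UNSTOP as opt-in is a simplification for illustration.
type InboundIntent =
  | "opt_in" | "opt_out" | "help"
  | "done" | "called" | "emailed" | "skip"
  | "unknown";

const OPT_OUT_WORDS = new Set(["STOP", "STOPALL", "UNSUBSCRIBE", "CANCEL", "END", "QUIT"]);
const COMPLETION_WORDS = new Set(["DONE", "CALLED", "EMAILED"]);

function classifyInboundSms(body: string, campaignKeyword: string): InboundIntent {
  const word = body.trim().toUpperCase();
  if (word === campaignKeyword.toUpperCase() || word === "YES" || word === "START" || word === "UNSTOP") {
    return "opt_in";
  }
  if (OPT_OUT_WORDS.has(word)) return "opt_out";
  if (word === "HELP") return "help";
  if (COMPLETION_WORDS.has(word)) return word.toLowerCase() as InboundIntent;
  if (word === "SKIP") return "skip";
  return "unknown"; // reply with guidance listing DONE, CALLED, EMAILED, SKIP, HELP
}
```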

Acceptance Criteria
Keyword Opt-In and HELP/STOP Compliance
Given an unregistered user texts the campaign keyword to the designated number, When the message is received, Then the system replies with a single consent message including program name, message frequency, HELP/STOP instructions, and a link to Terms/Privacy, and requests a YES to confirm. Given the user replies YES within 30 minutes, When the confirmation is processed, Then the user is marked opted-in with timestamp, campaign ID, and gateway message SID recorded, and a welcome message is sent. Given a user texts HELP at any time, When the message is received, Then a help response with support contact and STOP instructions is sent within 5 seconds and the event is logged. Given a user texts STOP, STOPALL, UNSUBSCRIBE, CANCEL, END, or QUIT, When the message is processed, Then opt-in status is set to false, a confirmation of opt-out is sent, and no further campaign messages are sent until re-opt-in. Given an opted-out user texts START or UNSTOP, When processed, Then the consent message is resent and the user is only re-subscribed upon YES. Given locale is set to es, When sending consent/help/stop confirmation, Then Spanish templates are used; otherwise English.
District Matching via Minimal PII over SMS
Given an opted-in user needs district matching, When the user replies with a ZIP+4 or full address via SMS, Then the system resolves the legislative district and target officials and stores only district and target IDs; raw address text is not stored beyond the matching transaction. Given the address is invalid or ambiguous, When processed, Then the user is prompted up to 2 clarification attempts and, after failure, is provided a fallback hotline number or web link; no PII is persisted. Given a district is resolved, When confirmation is needed, Then the user receives a short summary (district code and official names) and may reply CONFIRM to proceed or EDIT to retry.
District-Specific Script and Target Delivery
Given a live campaign action and a resolved district, When preparing the call action via SMS, Then the message includes target name, office phone, bill identifier, stance, and a script no longer than 320 characters with dynamic merges for first_name (or Friend), district, and bill_status. Given the script exceeds 320 characters, When sending, Then it is split into at most 2 messages labeled 1/2 and 2/2. Given locale is available for the action, When sending, Then the template is selected by user locale; default is English. Given an email action exists, When SMS fallback is used, Then the user receives a concise summary plus a short link to the prefilled email form; if the user replies EMAIL, Then the subject and body are sent via SMS within 2 messages max.
Reply-Based Completion Logging (DONE/CALLED/EMAILED)
Given a user receives an action prompt, When the user replies DONE, CALLED, or EMAILED within 2 hours, Then the system records a completion event with campaign ID, action ID, timestamp, source=sms, and gateway message SID. Given duplicate completion replies arrive within 10 minutes, When processed, Then only the first is counted; subsequent duplicates are acknowledged but ignored for analytics. Given the user replies SKIP, When processed, Then the action is marked skipped with reason=sms and the next available action is offered. Given an unrecognized keyword is received, When processed, Then the user receives guidance listing valid replies (DONE, CALLED, EMAILED, SKIP, HELP).
Branded Link Shortening and Deep-Link Tracking
Given an outbound SMS contains a URL, When sending, Then the system replaces it with a branded short link unique per recipient and action, appends UTM parameters, and records a click-tracking token. Given a recipient clicks the short link, When the redirect occurs, Then the click event is logged with timestamp and user locale; deep link resolves to the intended destination. Given link resolution fails, When detected, Then the user receives an SMS fallback with instructions to reply with a keyword to proceed without the link. Given a campaign ends, When retention window of 90 days elapses, Then short-link tokens expire and cannot be resolved.
SMS Gateway Integration and Audit-Ready Event Logging
Given Twilio credentials are configured, When inbound and outbound messages occur, Then the system validates webhook signatures, rejects invalid requests, and logs security failures. Given outbound SMS is queued, When sending, Then delivery status transitions (queued, sent, delivered, failed) and error codes are stored; transient failures are retried up to 3 times with exponential backoff; messages with error 30007 are not retried. Given analytics are requested, When queried, Then per-campaign metrics show counts and rates for sent, delivered, failed, opt-ins, opt-outs, replies, and per-message cost estimates. Given audit logs are exported, When generated, Then logs include message direction, timestamp, campaign and action IDs, anonymized user identifier (phone hash), template ID, and gateway SID.
Quiet Hours, Rate Limits, and Throughput Controls
Given a user's timezone is known or inferred from area code, When scheduling messages, Then non-urgent messages are not sent between 9:00 PM and 8:00 AM local time; urgent sends require an override flag and are audit-logged. Given a campaign sends multiple SMS to a user, When enforcing limits, Then no more than 6 campaign messages (excluding HELP/STOP and confirmations) are sent per user in any rolling 7-day window and at most 1 per hour. Given a batch broadcast to N users, When enqueuing, Then the system sustains at least 20 messages per second with 95th percentile enqueue-to-gateway-send latency under 2 seconds and applies back-pressure when gateway rate limits are reached. Given delivery outcomes are received, When aggregated daily, Then at least 95% of outbound messages have a terminal status (delivered or failed) within 10 minutes and 99% of events are recorded without loss.
Low-Data Onboarding & Micro Training
"As a first-time volunteer on a crowded venue network, I want a quick, data-light signup and micro training so that I can start helping immediately."
Description

Create a phone-number–first, low-field onboarding with OTP via SMS and a compact, text-only MicroBrief that renders under tight size budgets. Allow enrollment to complete offline with deferred verification and sync on reconnect. Present micro-lessons (e.g., key points, do/don’t, 1-step practice) in <500 characters with optional ultra-light audio/text alternatives for accessibility. Persist progress locally and reconcile when online. Ensure WCAG-compliant contrast, large tap targets, and support for languages selected by the campaign.

Acceptance Criteria
Phone-Number-First Onboarding with SMS OTP (Low-Data)
Given a new volunteer accessing Lite Mode on a 2G-equivalent connection (≤128 kbps) When they open the onboarding flow Then the initial page payload is ≤80 KB and becomes interactive within 3 seconds on a Moto G5–class device And only two input fields are required: phone number and OTP And when a valid phone number is submitted, an OTP is sent via SMS within 30 seconds in ≥95% of attempts And the OTP can be resent after 30 seconds, up to 3 times per session, with a daily limit of ≤5 OTPs per phone number And enrollment completes without email or additional profile fields after successful OTP verification
Deferred Offline Enrollment and Sync on Reconnect
Given the device is offline or experiencing >20% packet loss When the user enters a phone number, selects a language, and accepts terms Then the enrollment state is saved locally as Pending Verification with timestamp And the user can access the MicroBrief and micro-lessons offline And on reconnect, the app auto-initiates verification by sending an OTP SMS and prompts for code entry And locally stored progress (completed lessons, settings) syncs within 10 seconds of reconnect And sync is idempotent; no duplicate records are created And if verification fails 3 times, progress remains local and the user can retry later
MicroBrief and Scripts Size Budget with Offline Caching
Given Lite Mode is enabled When the MicroBrief is first loaded online Then the textual MicroBrief payload is ≤25 KB including metadata And each district-specific script variant is ≤15 KB And MicroBrief and scripts are cached for offline use within 1 second of load And subsequent loads render from cache within 1 second with zero network And cached content is invalidated only on version change or after a 24-hour TTL
Micro-Lessons ≤500 Characters with Ultra-Light Accessibility
Given a micro-lesson in Lite Mode When displayed to the user Then the body text length is ≤500 characters And an optional audio alternative is ≤100 KB, mono, ≤10 seconds, with a synchronized text transcript And media controls provide play, pause, and stop; no autoplay occurs And lesson completion is recorded locally immediately and synced on reconnect And screen readers announce lesson title and control labels with accessible names
WCAG AA Compliance and Large Tap Targets in Lite Mode
Given any Lite Mode onboarding or training screen When evaluated against WCAG 2.1 AA Then all text/background pairs meet a contrast ratio ≥4.5:1 And interactive elements have a hit area ≥44x44 dp with ≥8 dp spacing And focus indicators are visible and all actionable elements are reachable via keyboard/switch input And images/icons have accessible names or are marked decorative as appropriate And up to 200% zoom preserves functionality without horizontal scrolling on a 320 px viewport
Language Selection and Localization for Campaign-Enabled Languages
Given a campaign has enabled multiple languages (e.g., English, Spanish, Arabic) When a volunteer selects a language during onboarding or it is preselected via invite Then UI labels, MicroBrief, scripts, micro-lessons, and OTP SMS are rendered in the selected language And right-to-left rendering is applied for RTL languages And missing translations fall back to the campaign default with a single-language indicator And the language can be switched later and persists offline across restarts And caches are maintained per language without cross-language contamination
Local Progress Persistence and Idempotent Reconciliation
Given a volunteer completes micro-lessons and a one-step practice while offline When connectivity is restored Then all queued events are transmitted within 10 seconds And the server responds with correlation IDs and the client marks local items as synced And duplicate events are not created if retries occur And if the server remains unreachable, retries use exponential backoff for up to 24 hours without losing local state And the user’s visible progress remains consistent across app restarts
Connectivity Detection & Smart Mode Switching
"As a volunteer moving between coverage zones, I want the app to switch to a lite experience automatically so that I don’t have to troubleshoot or restart my actions."
Description

Continuously detect connectivity and network quality (online/offline events, Network Information API where available) to automatically enable Lite Mode and SMS fallbacks when conditions deteriorate. Provide clear UI banners for mode changes, a manual override, and persistent per-device preference. Throttle telemetry to respect low-data constraints and log transitions for diagnostics. Ensure all core flows have defined degraded behaviors so users can continue without error loops.
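A simplified sketch of the detection logic (thresholds mirror the acceptance criteria below; the stability-window bookkeeping passed in as goodSinceMs is an assumption):

```typescript
// Mode-switch sketch: event-driven checks of navigator.onLine plus Network Information
// API readings, with a 30-second stability window before reverting to Standard Mode.
type Mode = "standard" | "lite";

function evaluateMode(current: Mode, goodSinceMs: number | null, now: number): Mode {
  const conn = (navigator as any).connection; // feature-detected; not always available
  const poor =
    !navigator.onLine ||
    (conn &&
      (conn.saveData ||
        ["slow-2g", "2g"].includes(conn.effectiveType) ||
        (typeof conn.downlink === "number" && conn.downlink < 0.5) ||
        (typeof conn.rtt === "number" && conn.rtt > 800)));

  if (poor) return "lite";
  // Only revert to Standard after 30 seconds of sustained good readings (anti-flapping).
  if (current === "lite" && goodSinceMs !== null && now - goodSinceMs >= 30_000) return "standard";
  return current;
}
```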

Acceptance Criteria
Auto-enable Lite Mode on poor connectivity or offline
Given the app is in Standard Mode and navigator.onLine becomes false, When the connectivity monitor detects the change (event-driven), Then the app switches to Lite Mode within 1 second and enables low-data features. Given the app is in Standard Mode and the Network Information API reports effectiveType in ["slow-2g","2g"] OR downlink < 0.5 Mbps OR rtt > 800 ms OR saveData = true, When the connectivity monitor evaluates state (on change or every 2 seconds), Then the app switches to Lite Mode within 1 second. Given the Network Information API is unavailable, When two consecutive health pings either fail or have TTFB > 3 seconds, Then the app switches to Lite Mode within 1 second. Then heavy assets (videos/high-res images) and background prefetch are disabled, offline caching for MicroBrief and scripts is enabled, and SMS fallback is made available where supported.
Smart reversion to Standard Mode on sustained good network
Given the app is in Lite Mode and manual override is not active and navigator.onLine = true, And the Network Information API reports effectiveType in ["4g","3g"] AND downlink >= 1.5 Mbps AND rtt <= 300 ms continuously for 30 seconds, When the stability window elapses, Then the app switches to Standard Mode within 1 second. Given the Network Information API is unavailable, When health pings have TTFB < 800 ms for a continuous 30 seconds, Then the app switches to Standard Mode within 1 second. Then a new downgrade to Lite Mode will only occur after a fresh detection of poor connectivity lasting at least 3 seconds, preventing mode flapping.
Manual override persists per device and suppresses auto-switching
Given a user enables Manual Mode and selects Standard Mode, When connectivity deteriorates, Then the app remains in Standard Mode and does not auto-switch. Given Manual Mode is enabled on a device, When the app is reloaded or the user returns in a new session on the same device, Then the selected mode persists from device-local storage. Given a user enables Manual Mode and selects Lite Mode, When connectivity improves, Then the app remains in Lite Mode and does not auto-switch. Given a user selects Reset to Auto, When the next connectivity evaluation occurs, Then automatic switching resumes according to detection rules.
Mode change UI banner is timely, clear, and accessible
Given a mode change occurs (Standard -> Lite or Lite -> Standard), When the transition is committed, Then a top-of-app banner appears within 1 second showing the new mode and the primary reason ("Offline", "Poor connection", "Data Saver on", or "Network recovered"). Then the banner includes a clearly labeled control to open connectivity settings for manual override. Then the banner meets WCAG 2.1 AA (contrast >= 4.5:1), uses role="status" with aria-live="polite", is keyboard focusable, and is dismissible via keyboard and pointer. Then the banner text is localized according to the current app language. Then the banner is not re-shown unless a new mode change occurs.
SMS fallback triggers and actions reconcile after reconnect
Given a user starts an action (call/email) while offline OR after 2 failed API attempts across 10 seconds due to network errors, When SMS capability is available on the device, Then an SMS fallback button is displayed. When the user taps the SMS fallback, Then the SMS composer opens with a prefilled message containing action type, bill ID/status, target district/legislator code, and a signed token. Then the intended action is enqueued locally with a unique clientActionId and shown as "Pending sync". When connectivity is restored, Then the queued action is synced within 10 seconds, deduplicated by clientActionId, and marked as completed once confirmed by the server. Then no duplicate completions are recorded for the same clientActionId.
Telemetry throttles under Lite Mode and low-data conditions
Given Lite Mode is active OR navigator.connection.saveData = true, When emitting telemetry, Then non-critical events are suppressed and critical events are batched and sent at most once per 60 seconds. Then total telemetry bandwidth is limited to <= 5 KB per minute while these conditions hold. Then offline telemetry is queued up to 200 events; upon exceeding the cap, the oldest events are dropped (LRU). Then telemetry payloads are compressed and exclude PII. Given the app returns to Standard Mode, When 10 seconds have passed, Then normal telemetry rates resume.
Core flows degrade gracefully without error loops
Given connectivity is lost or degraded mid-flow, When the user enrolls, Then enrollment data is stored locally, marked as pending, and a clear offline confirmation replaces spinner retries. Given connectivity is lost or degraded, When loading the MicroBrief and scripts, Then cached versions are served with a "May be stale" badge; network fetch retries are capped at 3 with exponential backoff (max 8 seconds), after which retries stop. Given legislator lookup fails due to network, When the user proceeds, Then the app offers ZIP/address entry and uses cached district mapping if available, avoiding map embeds and allowing progress without a hard error. Given one-tap actions are initiated offline, When the user proceeds, Then SMS fallback is presented and non-SMS intents are queued for later sync with clear status. Then audit events for all flows are queued offline and synced within 15 seconds of reconnect, deduplicated by clientActionId; no more than one error toast is shown per flow per 5 minutes; zero uncaught errors occur during these degraded paths.
Lite Mode Controls & Analytics
"As an organizer, I want to enable and monitor Lite Mode performance so that I can ensure volunteers in low-connectivity areas can act and that I have reliable proof of impact."
Description

Give organizers per-campaign controls to enable Lite Mode, set content size budgets, and preview a lite rendering. Provide dashboards that report offline session rates, queued vs. synced actions, median sync delay, SMS fallback usage, and content bundle sizes. Trigger preflight warnings when assets exceed budgets and offer optimization suggestions. Allow export of Lite Mode metrics and synced action logs for audit-ready proof.
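Two of the reported values lend themselves to small helpers; this sketch shows a median sync delay and an export checksum computation, with field handling assumed rather than specified:

```typescript
// Dashboard/export helpers sketch: median sync delay (seconds) and a SHA-256
// checksum for an exported file. Field names and units are assumptions.
import { createHash } from "node:crypto";

function medianSyncDelaySeconds(delays: number[]): number {
  if (delays.length === 0) return 0;
  const sorted = [...delays].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function exportChecksum(fileContents: Buffer): string {
  return createHash("sha256").update(fileContents).digest("hex");
}
```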

Acceptance Criteria
Per-Campaign Lite Mode Toggle
Given I am an Organizer on Campaign X When I open Campaign Settings and toggle Lite Mode ON Then the Lite Mode state is saved for Campaign X and is effective within 5 seconds on all campaign action pages And action pages for Campaign X render with low-data assets and display a Lite indicator badge Given Lite Mode is ON for Campaign X When I toggle it OFF and save Then Lite Mode is disabled, the indicator is removed, and subsequent visits use standard assets Rule: Default Lite Mode state for new campaigns is OFF Rule: Lite Mode state persists across sessions and cache clears
Content Size Budget & Preflight Warnings
Given I set a content size budget (in KB) for Campaign X When I attempt to save or publish campaign assets Then a preflight computes total bundle size and compares it to the budget And if total <= budget, a Within Budget status is shown with total size (KB) And if total > budget, a warning banner appears showing overage (KB) and top 5 largest assets And at least 3 optimization suggestions are displayed with estimated size savings And publishing remains allowed only after explicit confirm when over budget Rule: A soft warning is shown when total size exceeds 90% of budget
Lite Rendering Preview Accuracy
Given Campaign X has assets and Lite Mode is configurable When I click Preview Lite Rendering Then a preview loads within 3 seconds using low-data assets And it displays estimated bundle size (KB) and estimated first-load time on 2G and 3G And the actual downloaded bundle size during preview is within 5% of the estimate And the preview does not alter live campaign content
Lite Mode Analytics Dashboard Metrics
Given data exists for Campaign X within a selected date range When I open the Lite Mode dashboard Then I can filter by campaign and date range (Last 24h, 7d, 30d, custom) And the dashboard displays: offline session rate (%), queued actions count, synced actions count, median sync delay (seconds), SMS fallback usage count and rate, and average/95th percentile content bundle size (KB) And metrics and charts update within 5 seconds of filter changes And metric definitions are available via tooltips And all timestamps are displayed in UTC
Offline Queue and Sync Telemetry
Given a supporter device is offline during an action for Campaign X When the supporter submits the action Then the action is queued locally with a created_at UTC timestamp and unique idempotency key And no server record is created while offline When connectivity is restored Then the queued action syncs within 60 seconds and is marked synced with synced_at UTC timestamp And sync_delay_seconds = synced_at - created_at is recorded and appears in analytics within 5 minutes And duplicate submissions are prevented via the idempotency key
Export Lite Mode Metrics and Action Logs
Given I am an Organizer on Campaign X When I export Lite Mode metrics for a date range Then I can choose CSV or JSON Lines format And the export includes fields: campaign_id, campaign_name, session_id, user_hash (if available), action_type, channel, offline_flag, queued_at_utc, synced_at_utc, sync_delay_seconds, bundle_size_kb, sms_fallback_used, device_type, app_version And numeric fields are numbers, timestamps are ISO 8601 UTC, sizes are in KB And the file is ready within 30 seconds and available via signed URL for 24 hours And a SHA-256 checksum is provided for integrity verification
Permissions and Audit Trail for Lite Mode Controls
Given I am an Organizer or Admin When I change Lite Mode settings, adjust budgets, preview lite, or perform an export Then the action succeeds and an audit log entry is recorded with actor_id, action, campaign_id, timestamp_utc, before_value, after_value Given I am a Viewer or unauthorized user When I attempt any of these actions Then the action is blocked with a 403 error and an audit entry is recorded for the attempt Rule: Audit logs are filterable by actor, campaign, action, and date range and exportable in CSV for the last 90 days

Product Ideas

Innovative concepts that could enhance this product's value proposition.

Constituent Checklight

Verify supporters are real constituents using address match, district lookup, and SMS code. Flag mismatches and dedupe by person to silence astroturf claims.

Coalition Role Rings

Create tiered roles and approval gates for coalitions, from script editors to data-only partners. Share pages safely while preserving attribution and restricting exports.

Idea

ProofLock Ledger

Append-only action log with cryptographic signatures and change history. Export tamper-evident receipts per action for auditors and funders.

Idea

QR Swarm Pages

Generate hundreds of unique QR-coded action pages in one click, each with station/team attribution. Watch live counts per code to route volunteers instantly.

Idea

Bill Pulse Scripts

Auto-sync scripts and targets to official bill status feeds. Highlight changes, suggest fresh language, and schedule quiet-hour sends around hearings.

Idea

Partner Bill Split

Split subscription and usage costs across partner tags automatically. Generate itemized invoices per organization and campaign, with credit caps and late-payer safeguards.

Idea

Volunteer Snap Onboarding

Let volunteers self-enroll via SMS link, auto-detect district, and auto-assign teams. Show a 60-second training page before first canvass action.

Idea

Press Coverage

Imagined press coverage for this groundbreaking product concept.

RallyKit Launches Real-Time Advocacy Command Center for Small Nonprofits and Grassroots Teams

Imagined Press Article

San Francisco, CA — RallyKit today announced the general availability of its lightweight advocacy campaign manager built for small nonprofit directors and scrappy grassroots organizers. Designed to replace spreadsheets and bulky CRMs with one real-time dashboard, RallyKit auto-matches supporters to their legislators, generates district-specific scripts by bill status, launches one-tap action pages, and tracks every call and email live—delivering audit-ready proof from day one.

RallyKit was created for lean teams who need speed, accuracy, and accountability without enterprise bloat. In just 45 minutes, organizations can set up a campaign, publish their first one-tap action page, and begin watching completions rise in real time. As bills move, RallyKit updates targets and scripts automatically, so supporters always reach the right office with the right message at the right moment.

“Small teams shouldn’t have to choose between moving fast and staying credible,” said Jordan Reyes, co-founder and CEO of RallyKit. “RallyKit brings the essentials into one place—district-accurate matching, auto-updated scripts, live tracking, and proof you can hand to a board or auditor—so grassroots leaders can focus their energy where it matters.”

At the center of RallyKit is a real-time campaign dashboard that shows where actions are surging, which districts need attention, and how many calls and emails are completing minute by minute. Supporters are matched to their legislators instantly, and script content adjusts as bill status changes—eliminating stale asks and manual rework that slow momentum.

Early users say RallyKit unlocks a new level of clarity and speed. “I stopped juggling eight tabs and started leading,” said Alisha Morgan, Executive Director at River Commons Action. “With RallyKit, we launched one-tap pages in an afternoon, watched completed calls roll in live, and redirected volunteers based on what the map showed us—all with audit-ready proof for our funders.”

Key capabilities available at launch include:

- Auto-match and verify: Address Autocorrect and confidence scoring boost match rates while minimizing supporter friction.
- Script intelligence: Status-aware Script Autodraft suggests language tuned to urgency and tone, with Redline Briefs that highlight what changed since the last update.
- One-tap action pages: Mobile-first pages let supporters complete calls or emails in seconds, with Dual Codes (QR + short link/SMS keyword) to maximize access.
- Live tracking: District-level heat maps and progress rings show completions as they happen, helping teams redeploy volunteers in real time.
- Audit-ready proof: A tamper-evident ProofLock ledger with VerifyLink receipts and Privacy Proof Tokens satisfies compliance while protecting PII.

“Accuracy builds trust, and trust drives volume,” added Priya Narayanan, Head of Product at RallyKit. “RallyKit’s live status sync, target shifts by stage, and source-stamped scripts mean organizers never have to wonder if their supporters are reaching the right office with credible, current information.”

For coalition work, RallyKit includes granular permissioning and attribution that travels with each page, partner tag, and station code. Immutable credit-split rules, approval ladders, and scoped sharing keep multi-organization teams aligned, fast, and fair—so collaboration lifts results rather than creating bottlenecks or disputes.

RallyKit’s on-the-ground toolkit turns crowds into completed actions. Swarm Builder creates hundreds of QR-coded action pages in one click, Print Packs auto-generate branded materials for stations, Auto-Reset Kiosk keeps lines moving at events, and Heatmap Router guides volunteers to the districts that need them most. Saturation Guard protects credibility by pacing outreach when lines to a single office are overwhelmed, rebalancing toward high-leverage targets.

Compliance and accountability are built in, not bolted on. Time Anchor pins the campaign ledger to an independent timestamp authority, Trace Diffs show who changed what and why, Key Rotation preserves verifiability over time, and GrantPack outputs a signed, tamper-evident report bundle that funders can verify independently with a simple link or QR code.

RallyKit is available starting today for advocacy teams in the United States. The platform supports multilingual scripts, accessibility features, low-data modes, and multiple verification channels to ensure inclusivity and reach across urban, suburban, and rural contexts. New customers can get started with a guided 45-minute setup and prebuilt flows tailored to common campaign patterns.

About RallyKit

RallyKit is a lightweight advocacy campaign manager for small nonprofits and grassroots organizers. It replaces spreadsheets and bloated CRMs with one real-time dashboard that auto-matches supporters to legislators, generates district-specific scripts by bill status, launches one-tap action pages, tracks every action live, and delivers audit-ready proof.

Media contact

- Press: press@rallykit.org
- Phone: +1 (415) 555-0137
- Web: https://www.rallykit.org/press

Quotes and customer references are available upon request. Demos can be scheduled at the link above. Organizations interested in coalition features or on-site event support can contact RallyKit’s partnerships team for tailored onboarding.

RallyKit Unveils ProofLock Ledger to Deliver Tamper‑Evident, Auditor‑Ready Receipts for Every Action

Imagined Press Article

San Francisco, CA — RallyKit today introduced ProofLock, a trust-and-compliance layer that turns every supporter action into a verifiable, tamper‑evident receipt. Built for nonprofit operations leaders, coalitions, and funders who require clear, defensible records without compromising privacy, ProofLock anchors an append‑only action log to an independent timestamp authority and provides instant verification via QR code or short link.

With ProofLock, organizers no longer need to assemble audit binders or export raw PII to prove impact. Each action is recorded with cryptographic signatures, district verification status, and a human‑readable change history. Auditors and partners can confirm receipt integrity in seconds—no special software required.

“Accountability should be simple, not scary,” said Maya Ortega, CTO and co‑founder of RallyKit. “ProofLock gives lean teams the same level of verifiability once reserved for large enterprises: immutable logs, clear diffs, and privacy‑preserving receipts that anyone in a coalition or a funder’s office can validate with confidence.”

ProofLock weaves together four pillars of trust:

- Time Anchor: The daily root hash of the action ledger is pinned to an independent timestamp authority, making backdating or silent rewrites provably impossible.
- VerifyLink: Each receipt includes a short URL and QR code that opens a no‑PII verification page, so reviewers can validate signatures and chain position against the live ledger.
- Trace Diffs: Human‑readable side‑by‑side diffs show what changed, who approved it, and why—compressing technical hashes into clear narratives that fit grant reports and coalition reviews.
- Key Rotation: Automatic rotation with rollover proofs reduces security risk while keeping all historical receipts verifiable forever.

To protect constituents and uphold data promises, ProofLock integrates privacy by design. Privacy Proof Tokens attest that a supporter was district‑verified without exposing their full address, while Export Guardrails enforce policy-backed controls like Aggregates Only or Row-Level with redaction. Watermarked downloads, scoped API tokens, and just‑in‑time access requests reduce the risk of uncontrolled list spread across coalitions.

Operations teams gain additional safeguards with Dispute Seals, which allow staff to flag any entry or batch as disputed, attach evidence, invite co‑signs, and record resolution—without altering history. For grant periods, GrantPack creates a signed bundle of deduped counts, receipt sets, and verification instructions that funders can validate independently, cutting back‑and‑forth and accelerating disbursements.

“Proof is the new performance,” said Denise Tran, Operations Director at Open Rivers Alliance. “With ProofLock, I can hand our board and funders a link that verifies itself. We keep PII tight, resolve questions fast, and spend more time organizing instead of assembling spreadsheets.”

ProofLock integrates natively with RallyKit’s real-time campaign dashboard. As actions complete, receipts are generated instantly and can be surfaced in coalition views with immutable attribution. Approval Ladders and Ring Templates further ensure the right people review scripts, targets, and page publishes before they go live, while Draft Sandboxes enable safe collaboration without touching production.

The compliance layer supports multi-language campaigns, low-data workflows, and accessibility needs. Fallback Verify options—email magic links, IVR phone codes, and WhatsApp—ensure that verification succeeds even when SMS is blocked or internet connectivity is inconsistent, particularly critical for rural reach and busy venues.

“Trust grows when every stakeholder can see and verify the same source of truth,” added Ortega. “Whether you are a small nonprofit proving outcomes to a first-time funder or a coalition coordinating dozens of partners, ProofLock gives you audit-ready confidence at the speed of your campaign.”

Availability

ProofLock is available today to all RallyKit customers in the United States. Existing campaigns receive VerifyLink receipts immediately; advanced features such as GrantPack and Dispute Seals can be enabled per campaign in Settings. RallyKit provides implementation guidance to align Export Guardrails with your organization’s policies.

About RallyKit

RallyKit is a lightweight advocacy campaign manager that replaces spreadsheets and bloated CRMs with one real-time dashboard. It auto-matches supporters to legislators, generates district-specific scripts by bill status, launches one-tap action pages, tracks every action live, and delivers audit-ready proof.

Media contact

- Press: press@rallykit.org
- Phone: +1 (415) 555-0137
- Web: https://www.rallykit.org/press

For demos, pricing, and security documentation, contact the RallyKit team or visit the press page to request a technical briefing.

RallyKit Debuts Bill Pulse Automation to Keep Scripts and Targets Accurate in Real Time

Imagined Press Article

San Francisco, CA — RallyKit today launched Bill Pulse Automation, a suite of tools that keeps advocacy campaigns aligned with fast-moving legislative calendars—without late-night copy‑paste or risky guesswork. By continuously syncing official bill status, automatically shifting targets as a bill advances, and drafting stage‑appropriate scripts with verified citations, RallyKit ensures every supporter reaches the right lawmaker with the right message at the right time.

“Nothing undermines credibility like an outdated ask,” said Priya Narayanan, Head of Product at RallyKit. “Bill Pulse Automation takes the manual drudgery out of staying current, so small teams can mobilize faster, reduce errors, and convert more calls and emails when it counts.”

The suite brings together six coordinated capabilities:

- Status Sync: Continuous monitoring of official bill activity—introductions, referrals, hearings, floor votes, conference, and signature—updates linked pages and scripts in real time.
- Target Shift: As status changes, RallyKit automatically redirects outreach to the highest-leverage offices, from committee chairs and members to floor leaders, conferees, and the governor.
- Script Autodraft: Fresh call and email language is generated for each stage with dynamic fields (bill number, committee, hearing date/time, vote windows), ready for quick review and approval.
- Redline Brief: A plain-language digest shows exactly what changed since the last update—amendments, sponsors, vote counts—and highlights which script lines to edit, compressing deliberation to minutes.
- Hearing Guard: Scheduled hearings trigger pre-hearing ramps, respectful quiet windows during proceedings, and post-hearing follow-ups to preserve relationships while maximizing impact.
- Source Stamp: Every script and receipt is annotated with verified URLs and timestamps for instant credibility with partners, leadership, and auditors.

For coalition communications, Tone Tuner keeps partners aligned on voice. Organizers can set a stage-specific tone—Inform, Urge, or Escalate—and RallyKit adapts the suggested script language and reading level accordingly to maintain consistency across email, SMS, and social.

“Because our coalition spans rural and urban districts, accuracy is non-negotiable,” said Marcus Lee, Policy Director at Forward Together Network. “RallyKit’s real-time status sync and Target Shift helped us reassure skeptical lawmakers and volunteers alike that we were always hitting the right inboxes and phone lines.”

Bill Pulse Automation is deeply integrated with RallyKit’s verification and inclusion features. Address Autocorrect and Confidence Lights improve district matches while minimizing friction. Fallback Verify offers alternative channels—email magic links, IVR codes, WhatsApp—and multilingual prompts so campaigns remain accessible across connectivity levels and assistive technologies. Dual Codes pair every QR with a short link and SMS keyword to keep actions flowing when cameras or data are unavailable.

Organizers also benefit from live operational awareness. Heatmap Router shows action flow by district and recommends where to redeploy volunteers. Saturation Guard paces calls and emails by office to avoid swamping a single line, automatically shifting to email or alternate targets if voicemail is full or quotas are met. Goal Rings provide visible progress, ETAs, and milestone chimes to keep teams motivated and on schedule for key votes.

“Bill Pulse Automation is about sharper timing and fewer mistakes,” added Narayanan. “When the feed flips to ‘hearing scheduled,’ your pages and scripts reflect it immediately. When the floor vote is announced, Tone Tuner and Script Autodraft make it easy to escalate respectfully—and to de‑escalate after the vote with thanks or next‑step guidance.”

Availability

Bill Pulse Automation is available today for U.S.-based campaigns in RallyKit. Existing customers can enable Status Sync and Target Shift at the campaign level and opt into Hearing Guard calendars. Tone Tuner and Script Autodraft work with Approval Ladders to ensure the right reviewers can approve changes from email or mobile before publish.

About RallyKit

RallyKit is a lightweight advocacy campaign manager for small nonprofits and grassroots organizers. It replaces spreadsheets and bulky CRMs with one real-time dashboard that auto-matches supporters to legislators, generates district-specific scripts by bill status, launches one-tap action pages, tracks every action live, and delivers audit-ready proof.

Media contact

- Press: press@rallykit.org
- Phone: +1 (415) 555-0137
- Web: https://www.rallykit.org/press

Demo requests and technical briefings are available on request. Coalition leaders can contact RallyKit’s partnerships team for guided setup and role-based templates aligned to existing approval processes.

RallyKit Rolls Out QR Swarm Toolkit to Turn Crowds into Completed Calls and Emails

Imagined Press Article

Atlanta, GA — RallyKit today announced its QR Swarm Toolkit, a field-tested system that converts event energy into district-accurate, completed actions in minutes. Built for on-the-ground mobilization at rallies, town halls, concerts, canvasses, and campuses, the toolkit combines one-click page generation, print-ready materials, kiosk safety, and live routing to help volunteers act quickly and organizers keep pressure balanced across districts.

“Events create powerful momentum, but conversion often stalls at the table,” said Fiona Alvarez, VP of Field Experience at RallyKit. “The QR Swarm Toolkit removes friction for supporters and chaos for organizers—so a great crowd becomes great outcomes, measured and verified in real time.”

The QR Swarm Toolkit includes:

- Swarm Builder: Generate hundreds of unique, team-tagged QR action pages from a single template in one click, prefilled with UTM and attribution rules.
- Print Packs: Auto-generate posters, table tents, badges, and stickers with branded QR codes, matching short links, and bold station IDs—ready for standard printers.
- Dual Codes: Pair every QR with a memorable short link and SMS keyword fallback to include supporters without camera access or mobile data.
- Auto-Reset Kiosk: Lock any device to a station page with auto-clear after submission or idle timeout; buffers actions offline and syncs once reconnected.
- Heatmap Router: Watch volume, conversion rates, and wait indicators by station; route volunteers by SMS or Slack to districts that need them most.
- Saturation Guard: Pace outreach by office to avoid overwhelming a single lawmaker’s line; shift to email or alternate targets when voicemail is full or quotas are met.
- Goal Rings: Set station and team targets with live progress rings, ETAs, and milestone chimes; project a big board to fuel friendly competition.

Field leaders can launch in under an hour. Magic Join Link verifies a volunteer’s phone, pre-fills their profile, and starts district detection without passwords. Team Autopilot assigns them to the best-fit team based on district, role, capacity, language, and skills. MicroBrief 60 gives a swipeable, 60-second orientation to scripts, safety notes, and do/don’t basics, followed by a quick confirm. Ready Check validates audio, connectivity, and comprehension before the first task.

“On Saturday we spun up 60 QR stations in 20 minutes,” said Evan Carter, an event producer and early RallyKit user. “Auto-Reset Kiosk kept lines moving, Lite Mode worked where the venue Wi‑Fi didn’t, and the big board turned volunteers into finishers. We left with proof we could hand straight to our grant officer on Monday.”

Inclusivity is a core design principle. Lite Mode compresses assets and caches scripts for low-data environments, while Fallback Verify channels—email magic links, IVR, WhatsApp—ensure constituent checks succeed even when SMS is blocked. Accessibility features support screen readers and multilingual scripts so more supporters can participate without friction.

Organizers retain granular control across coalitions. Ring Templates apply permission tiers in one click; Scoped Sharing limits which partners can edit, publish, or view assets by page, district subset, or channel; and Approval Ladders set clear review gates with SLAs and escalations so day‑of speed doesn’t sacrifice safety. Immutable Attribution Locks enforce credit-split rules at publish, with dispute workflows that protect trust among partners.

Every action generates audit-ready proof through RallyKit’s ProofLock ledger. VerifyLink receipts and Privacy Proof Tokens allow funders and coalition leads to validate outcomes without exposing PII. Trace Diffs and Time Anchor make backdating or silent changes provably impossible, while GrantPack assembles a signed bundle of results by event or grant window.

“Crowds are precious—don’t waste them,” added Alvarez. “With QR Swarm, an organizer can redirect ten volunteers to an underperforming district in seconds and know, on the spot, that completed calls and emails are actually landing where they count.”

Availability

The QR Swarm Toolkit is available today for RallyKit customers in the United States. Print Pack assets include common sizes with crop marks and optional district micro‑scripts. Organizers can request an on-site activation checklist and a remote rehearsal session via the RallyKit press page.

About RallyKit

RallyKit is a lightweight advocacy campaign manager for small nonprofits and grassroots organizers. It replaces spreadsheets and bulky CRMs with one real-time dashboard that auto-matches supporters to legislators, generates district-specific scripts by bill status, launches one-tap action pages, tracks every action live, and delivers audit-ready proof.

Media contact

- Press: press@rallykit.org
- Phone: +1 (415) 555-0137
- Web: https://www.rallykit.org/press

For coalition playbooks, station code templates, and sample big-board layouts, contact the partnerships team via the press page.

RallyKit Introduces Coalition Finance Controls to Bring Clarity and Confidence to Shared Advocacy Budgets

Imagined Press Article

Washington, DC — RallyKit today introduced Coalition Finance Controls, a transparent billing and safeguards system that helps multi‑organization campaigns plan, allocate, and reconcile shared advocacy costs without spreadsheets or surprises. Built for coalitions that mix grant-backed activity with partner contributions, the system translates complex attribution into clean invoices and auditable histories while protecting campaigns from late-payer risk.

“Coalitions win when everyone sees the same numbers and no one gets blindsided,” said Jordan Reyes, co‑founder and CEO of RallyKit. “Coalition Finance Controls align expectations upfront, automate the hard parts of cost sharing, and preserve momentum when dollars get complicated.”

The system is anchored by Smart Split Rules, flexible allocation formulas—percentage, fixed, tiered, or hybrid—applied by partner tag, campaign, channel, or date window. Organizers can add floors, ceilings, rounding rules, and fallbacks if a partner lacks budget. RallyKit auto‑applies these rules to subscription and usage items, recalculates when attribution changes, and issues clean, auditable adjustments—no offline math required.

To manage funding that arrives in bursts, Grant Buckets create time‑boxed pools that draw down first and then roll over to partners when exhausted. Buckets can be earmarked for specific line items like SMS or lookups, with start/end dates, caps, and clear labels for reporting. Credit Caps set per-partner limits by month or campaign with live meters, early‑warning alerts, and soft‑stop options; overflow can route to a fallback payer or pause only the billable components tied to that partner.

Dunning Guard automates late‑payer safeguards with configurable net terms, reminder cadences, pay‑by‑link, and ACH/card retries. RallyKit quarantines delinquent charges tied to a specific partner, reallocates after grace periods, and freezes new spend for that partner only—without disrupting the rest of the campaign. Every step is logged to ProofLock for a transparent, tamper‑evident record.

“Before RallyKit, we lost hours reconciling costs and calming nerves,” said Carla Jimenez, Managing Director at Statewide Voices Coalition. “Now we model splits before launch, set caps everyone agrees on, and send invoices with receipts funders can verify themselves. It’s professional, predictable, and fair.”

On the back end, Invoice Composer generates branded, itemized invoices per organization and campaign with rate cards, unit counts, taxes/VAT, purchase order fields, and notes. Each line or batch can include a VerifyLink-backed receipt bundle so recipients validate outcomes independently. Partner Wallets add prepaid balances with auto top‑ups, threshold alerts, and refund/credit handling, offering predictability to small partners while preventing dunning interruptions during peak moments.

To align expectations before a campaign starts, Split Simulator models projected usage across channels and dates, showing who pays what under different scenarios. The simulator flags risk—late‑payer exposure, cap breaches, grant exhaustion—and outputs a shareable sign‑off snapshot so coalitions document consensus in advance.

Coalition Finance Controls work hand in hand with RallyKit’s collaboration guardrails. Ring Templates apply permission tiers tailored for coalitions, while Approval Ladders add stage gates for scripts, targets, and page publishes. Scoped Sharing restricts what assets each partner can view or edit. Attribution Locks enforce immutable credit-split rules at publish, with a dispute workflow requiring dual confirmation and preserving a complete, tamper‑evident trail.

“Money dynamics can strain even the strongest coalitions,” said Reyes. “By pairing clear, pre‑agreed rules with verifiable receipts and automated safeguards, RallyKit helps partners focus on outcomes—completed calls and emails—rather than spreadsheets and surprises.”

Availability

Coalition Finance Controls are available today for RallyKit customers in the United States. Organizers can import existing split rules, set up Grant Buckets and Credit Caps in minutes, and generate their first invoices from past activity with a guided wizard. RallyKit’s team offers implementation sessions to align policies and reporting needs with Export Guardrails and Privacy Proof Tokens.

About RallyKit

RallyKit is a lightweight advocacy campaign manager for small nonprofits and grassroots organizers. It replaces spreadsheets and bloated CRMs with one real-time dashboard that auto-matches supporters to legislators, generates district-specific scripts by bill status, launches one-tap action pages, tracks every action live, and delivers audit-ready proof.

Media contact

- Press: press@rallykit.org
- Phone: +1 (415) 555-0137
- Web: https://www.rallykit.org/press

Coalitions can request a finance briefing and sample invoice pack via the press page. RallyKit also provides a checklist to map existing policies to in‑platform controls and a calculator template to compare pre‑ and post‑automation reconciliation time.
