Threat Model
1. Threat Categories
ATOMx addresses ten categories of threat. The first six are the primary attack surfaces; the last four are systemic threats that span the platform.
| # | Category | Primary Concern |
|---|---|---|
| 1 | Identity threats | Spoofing, impersonation, fabricated credentials |
| 2 | Replay threats | Reuse of valid material outside its intended window |
| 3 | Disclosure threats | Public exposure of pilot, operator, or station data |
| 4 | Time threats | Manipulation of clocks to extend or backdate validity |
| 5 | Key-abuse threats | Misuse of legitimately-issued credentials |
| 6 | Sensor threats | Honest signing of falsified sensor data (GPS spoofing, jamming) |
| 7 | SoC and supply-chain threats | Compromise below the TPM line of sight |
| 8 | Availability threats | DoS, RF jamming, partition isolation |
| 9 | OEM and platform-insider threats | Malicious or compromised actors with legitimate authority |
| 10 | Operational and regulatory-evasion threats | Cooperative actors gaming the system |
1.1 Framework alignment
The taxonomy is informally aligned to STRIDE for traditional security threats (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege) and to LINDDUN for privacy-specific threats (linkability, identifiability, non-repudiation-as-a-privacy-cost, detectability, disclosure of information, content unawareness, policy/consent violation). MITRE ATT&CK is consulted for tactic/technique mapping where it applies; ATOMx does not adopt ATT&CK as the primary structure because the airspace-trust threat surface is not well covered by enterprise/ICS matrices alone.
2. Identity Threats
| Threat | Description | Mitigation |
|---|---|---|
| Pilot identity spoofing | False identity claimed at authorization | login.gov identity proofing + IACRA FTN verification; PIV / CAC where strong non-repudiation is required (Pilot Identity §4) |
| Drone identity spoofing | False aircraft claims valid identity | TPM attestation challenge at onboarding; dual-chain cross-reference of EK and Drone Assembly Certificate (Aircraft Hardware Identity §2) |
| Firmware / bootloader tampering | Compromised software running on a genuine drone | PCR Quote at onboarding compared against baseline (Aircraft Hardware Identity §4) |
| Hardware key extraction | Private key removed from the secure element | TPM design — keys never leave the chip; physical-tampering response wipes the chip |
| Stolen field device | Managed verifier device is compromised | Device credential revocation; local secure storage; short-lived person-bound credentials (Onboarding Authorities §2) |
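The attestation challenge in the table above can be sketched from the verifier's side. This is an illustrative reduction, not the TPM 2.0 wire format: a real `TPM2_Quote` carries additional structure (qualified signer, clock info), and `Quote`, `verify_attestation`, and the injected `verify_sig` callback are hypothetical names. The sketch shows the three checks the verifier must make: freshness, PCR integrity, and signature authenticity.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Quote:
    nonce: bytes        # echo of the verifier's fresh challenge
    pcr_digest: bytes   # digest over the quoted PCR values
    signature: bytes    # produced by the TPM attestation key

def verify_attestation(quote: Quote,
                       expected_nonce: bytes,
                       baseline_pcr_digest: bytes,
                       verify_sig: Callable[[bytes, bytes], bool]) -> bool:
    # 1. Freshness: the quote must echo the nonce issued for this challenge,
    #    so a recorded quote from an earlier session cannot be replayed.
    if quote.nonce != expected_nonce:
        return False
    # 2. Integrity: the PCR digest must match the golden baseline on file
    #    for this aircraft model and firmware version.
    if quote.pcr_digest != baseline_pcr_digest:
        return False
    # 3. Authenticity: the signature must verify under the attestation key
    #    enrolled at onboarding (signature verification is injected here).
    return verify_sig(quote.nonce + quote.pcr_digest, quote.signature)
```

Because signature verification is injected, the same logic applies whether the deployment uses an EK-derived attestation key or a separately enrolled one.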
3. Replay Threats
| Threat | Description | Mitigation |
|---|---|---|
| Cross-session replay | Telemetry packet reused on a different flight | flight_auth_id in every packet differs per session (Flight Lifecycle §3) |
| Intra-session replay / reordering | Old packet replayed within the same session | Strictly increasing seq number; server tracks the last seq seen per session |
| Token replay | Old Public Aircraft Token reused | Short validity, nonce, replay detection, duplicate token detection |
| Capsule replay | Capsule used outside its validity window | Each capsule carries a unique nonce; field devices remember recently-seen nonces and reject duplicates locally; ATOMx detects them globally during reconciliation |
| Endorsement transplant | Endorsement from one flight applied to another | Authority signature covers the base token fields, binding the endorsement to a specific request (Authorization Package §3) |
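The first two rows and the capsule-nonce row reduce to two small pieces of receiver-side state. A minimal sketch, assuming dict-shaped packets with `flight_auth_id` and `seq` fields and string capsule nonces (field names here are hypothetical):

```python
from collections import OrderedDict

class ReplayGuard:
    """Per-session replay checks: session binding plus strictly increasing seq."""
    def __init__(self, flight_auth_id: str):
        self.flight_auth_id = flight_auth_id
        self.last_seq = -1
    def accept(self, packet: dict) -> bool:
        if packet["flight_auth_id"] != self.flight_auth_id:
            return False                   # cross-session replay
        if packet["seq"] <= self.last_seq:
            return False                   # intra-session replay or reorder
        self.last_seq = packet["seq"]
        return True

class NonceCache:
    """Bounded recently-seen-nonce store for local capsule replay rejection."""
    def __init__(self, capacity: int = 10_000):
        self.capacity = capacity
        self.seen = OrderedDict()          # insertion-ordered for FIFO eviction
    def accept(self, nonce: str) -> bool:
        if nonce in self.seen:
            return False                   # duplicate capsule nonce
        self.seen[nonce] = None
        if len(self.seen) > self.capacity:
            self.seen.popitem(last=False)  # evict the oldest entry
        return True
```

The bounded cache is why the table pairs local rejection with global reconciliation: a field device can only remember recent nonces, so ATOMx catches duplicates that fall outside any single device's window.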
4. Disclosure Threats
| Threat | Description | Mitigation |
|---|---|---|
| Public tracking | Public uses aircraft ID to track aircraft across missions | Authorization-bound rotating Public Aircraft Token (Authorization Package §5) |
| Pilot doxxing | Public receives pilot / operator or control-station location | Protected fields envelope-encrypted; not in public broadcast or COP (Authorization Package §6) |
| Unauthorized disclosure | Protected fields accessed without valid need | Disclosure Policy Engine; field-level release; logging; purpose codes |
| Federation leak | Cross-agency federation exposes more than intended | Slack-Connect model: opt-in selective sharing per peer (Onboarding Authorities §4) |
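Field-level release with purpose codes, as in the unauthorized-disclosure row, can be sketched as a lookup from (role, purpose) to an allow-set of fields, with every release logged. The roles, purpose codes, and field names below are hypothetical placeholders, not ATOMx's actual policy vocabulary:

```python
# Hypothetical policy table: (requester role, purpose code) -> releasable fields.
POLICY = {
    ("law_enforcement", "active_incident"): {"pilot_name", "operator_contact"},
    ("faa_inspector", "compliance_check"): {"operator_contact"},
}

def release_fields(record: dict, role: str, purpose: str, audit_log: list) -> dict:
    """Return only the fields the policy allows, and log the disclosure."""
    allowed = POLICY.get((role, purpose), set())
    released = {k: v for k, v in record.items() if k in allowed}
    audit_log.append({
        "role": role,
        "purpose": purpose,
        "fields": sorted(released),   # log which fields left the envelope
    })
    return released
```

An unknown (role, purpose) pair falls through to the empty set, so the default is deny; the log entry is written even for empty releases, which is what makes after-action review of purpose-code patterns possible.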
5. Time Threats
The general principle: time is a credential, and the system does not trust any single time source. Aircraft and field devices each independently track their last trusted ATOMx time anchor in tamper-resistant hardware, and degrade gracefully when their confidence in current time is low. Full treatment in Disconnected Operations §5.4.
| Threat | Mitigation |
|---|---|
| Aircraft clock rolled forward / back to reuse an expired or future capsule | Secure element stores a monotonic counter plus the last signed time anchor received from ATOMx; capsule validity is checked against max(SE-time-anchor, GNSS-time) |
| GNSS time spoofing | Aircraft cross-checks GNSS time against secure-element monotonic counter; sudden jumps flag the aircraft into a “time-degraded” state which is broadcast |
| Verifier device clock manipulation | Field device’s secure enclave records its own monotonic time; verification logic uses enclave time, not OS clock |
| Long offline windows where every clock has drifted | Capsule validity windows for time-bounded and blanket authorizations sized with explicit slack; “time-degraded but otherwise valid” reported as a distinct state |
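The first two rows combine into one check: effective time is the maximum of the secure-element anchor and GNSS time, and a large disagreement between the two yields the distinct time-degraded state. A minimal sketch, with `Capsule`, `capsule_state`, and the `max_drift` parameter as illustrative names; real drift detection would also consult the monotonic counter's tick rate:

```python
from dataclasses import dataclass

@dataclass
class Capsule:
    not_before: int   # start of validity window, seconds since epoch
    not_after: int    # end of validity window, seconds since epoch

def capsule_state(capsule: Capsule, se_anchor: int,
                  gnss_time: int, max_drift: int) -> str:
    # Effective time never runs earlier than the last signed ATOMx anchor,
    # so rolling GNSS backward cannot resurrect an expired capsule.
    now = max(se_anchor, gnss_time)
    # A large GNSS-vs-anchor disagreement suggests spoofing or long drift;
    # the capsule may still verify, but it is reported as time-degraded.
    time_degraded = abs(gnss_time - se_anchor) > max_drift
    if not (capsule.not_before <= now <= capsule.not_after):
        return "invalid"
    return "valid-time-degraded" if time_degraded else "valid"
```

Note that rolling GNSS far forward is equally fruitless: `max()` pushes effective time past `not_after` and the capsule simply expires early, which is the safe failure direction.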
6. Key-Abuse Threats
| Threat | Description | Mitigation |
|---|---|---|
| Insider misuse | Authorized user accesses sensitive fields improperly | Disclosure logging, purpose codes, after-action review |
| Offline key abuse | Field credentials decrypt outside scope | Person / device / geography / purpose constraints; reconciliation flags out-of-scope decryption |
| Authority key compromise | ATOMx-held authority signing key misused | HSM custody; signing-event audit log; key rotation; revocation of issued endorsements as needed (Authorization Package §4) |
| Break-glass abuse | Field verifier overuses break-glass to bypass scope | Break-glass disclosures flagged in reconciliation; clusters trigger after-action review (Disconnected Operations §6.5) |
7. Sensor Threats — The Residual Risk
GPS spoofing is the residual risk that cryptography alone cannot solve. A drone with a hardware-bound key and attested firmware will honestly sign whatever its GPS sensor reports. Full mitigation requires independent position corroboration — cellular triangulation, ground radar, or ADS-B — which is an infrastructure problem, not a cryptography problem.
ATOMx surfaces this as a telemetry-confidence axis in the Authorized Operational State, not as a binary. A flight whose reported position diverges from corroborating sources keeps its trust level (the cryptographic chain is intact) but its confidence is downgraded and a security alert is raised.
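The separation of trust level from confidence can be sketched directly. Positions here are points in a local metric frame and the threshold is a hypothetical tuning parameter; the point of the sketch is that divergence never touches the `trust` field, only `confidence` and the alert:

```python
import math

def assess_position(reported: tuple, corroborated: tuple,
                    threshold_m: float) -> dict:
    """Compare signed telemetry against an independent corroborating source.

    reported / corroborated: (x, y) coordinates in a local metric frame.
    """
    divergence = math.dist(reported, corroborated)
    if divergence > threshold_m:
        # Cryptographic chain intact: trust level is unchanged,
        # but confidence drops and a security alert is raised.
        return {"trust": "verified", "confidence": "degraded", "alert": True}
    return {"trust": "verified", "confidence": "nominal", "alert": False}
```

Keeping `trust` constant is deliberate: a spoofed GPS feed is honestly signed, so revoking trust would misdiagnose the failure and mask the actual sensor-layer problem.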
8. SoC and Supply-Chain Threats
PCR attestation proves the integrity of the software stack but not the SoC itself. Hardware backdoors in silicon, or compromised radio chips, are invisible to TPM attestation. These are addressed through:
| Threat | Mitigation |
|---|---|
| Hardware backdoor in SoC, radio, or peripheral chip | Platform certificates issued by the OEM at assembly, asserting the bill of materials; NDAA / Section 889 compliance for US government customers; Blue UAS / Trusted Component listing where applicable |
| Counterfeit TPM chip with valid-looking EK | Cross-reference of EK Certificate against TPM manufacturer’s published serial registry; periodic spot-attestation and PCR profile drift detection |
| Pre-fabrication supply-chain compromise (malicious silicon insertion) | Trusted-foundry programs; formal hardware verification; incoming-inspection regime — out of scope for ATOMx software, in scope for OEM eligibility review (Onboarding Manufacturers §2) |
| Drone capture / physical forensics post-landing | Tamper-response circuits in TPM; on-detection key zeroization; encrypted at-rest storage of flight logs |
| Decapping / side-channel key extraction | TPM physical-tamper resistance ratings (FIPS 140-3 Level 3+); short-lived session keys derived under EK so a long-term extraction yields limited blast radius |
This is the gap to surface to certification reviewers: TPM attestation is a strong claim about software state and a weaker claim about hardware composition. See Aircraft Hardware Identity §5.
9. Availability Threats
Cryptographic guarantees are worthless if the platform is unreachable when the trust answer is needed.
| Threat | Mitigation |
|---|---|
| Volumetric DoS against the Authorization Service | Edge rate-limiting; per-operator quotas; horizontal scaling of stateless services; circuit-breakers on dependent services |
| DoS against the Disclosure Policy Engine during incidents | Pre-warmed capacity for known event windows (NSSEs, large airshows); priority queuing for authority requests; degraded-mode read replicas |
| RF jamming of C2, telemetry, or Remote ID broadcast | Aircraft enters time-degraded state and broadcasts the flag; ATOMx falls back to corroborating sources; authority verifiers downgrade confidence |
| Wholesale GNSS jamming over a region | Aircraft cross-checks GNSS time against secure-element monotonic counter; operations may be mass-curtailed by authority via amendment; fixed reference receivers used for ground-truth where deployed |
| Network partition isolating an operator from ATOMx | Pre-cleared offline capsule covers planned operations; spontaneous-flight gap noted as open question (Origins §9) |
| Audit-ledger storage exhaustion | Tiered retention with cryptographic anchors; aged events archived to immutable cold storage with retained hash chain |
10. OEM and Platform-Insider Threats
Threats from actors with legitimate authority but malicious intent. These are the hardest threats to defend against; mitigation relies on procedural controls layered over technical ones.
| Threat | Mitigation |
|---|---|
| Malicious OEM signs a Drone Assembly Certificate for a backdoored unit | OEM eligibility review with re-certification cadence; batch-attestation reconciliation against expected production volume; field-evidence anomaly detection against the signed batch records |
| OEM HSM compromise (private key extraction) | Per-OEM CA segregation — compromise of one OEM does not affect others; rapid revocation of the OEM intermediate; recovery requires re-signing all in-field aircraft (significant operational cost, hence the OEM HSM is a hard gate) |
| Compromised auditor / certification body falsely passes an OEM | Multi-auditor diversity; ATOMx-side independent technical-conformance verification; whistleblower channel; no auditor’s pass alone unlocks production minting |
| ATOMx insider with HSM access misuses Root or intermediate keys | Dual-control / m-of-n quorum on Root ceremonies; signing-event audit log replicated to a tamper-evident ledger; HSM keys never leave the appliance; rotational privilege; periodic external audit |
| Audit-ledger tampering by an ATOMx insider (delete or alter events) | Append-only ledger with hash-chained entries; off-site replication to FedRAMP-aligned cold storage; periodic third-party audit; signed merkle anchors published to authority partners |
| Authority key compromise | ATOMx-held authority signing key segregated per authority; HSM custody; rapid revocation of issued endorsements; cross-authority detection if signing volume anomalously spikes |
| Social engineering of field verifiers (impersonation, coerced break-glass) | Three-factor binding (person + device + purpose); break-glass requires explicit confirmation and free-text justification; clusters trigger after-action review; manager-approval requirement for sensitive purpose codes |
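The hash-chained, append-only ledger from the audit-ledger-tampering row can be sketched as follows. This is a minimal in-memory illustration, not the production design (which adds off-site replication and published Merkle anchors); the class name and genesis value are hypothetical:

```python
import hashlib
import json

class HashChainLedger:
    """Append-only event log where each digest covers the previous head."""
    GENESIS = b"\x00" * 32

    def __init__(self):
        self.entries = []          # list of (payload, digest) pairs
        self.head = self.GENESIS   # running chain head

    def append(self, event: dict) -> str:
        # Canonical serialization so the digest is reproducible on replay.
        payload = json.dumps(event, sort_keys=True).encode()
        digest = hashlib.sha256(self.head + payload).digest()
        self.entries.append((payload, digest))
        self.head = digest
        return digest.hex()

    def verify(self) -> bool:
        """Recompute the chain from genesis; any edit or deletion breaks it."""
        h = self.GENESIS
        for payload, digest in self.entries:
            h = hashlib.sha256(h + payload).digest()
            if h != digest:
                return False
        return True
```

Because every digest binds the previous head, an insider cannot alter or drop an entry without invalidating every later digest, and publishing the head (the signed Merkle anchor in the table) to authority partners makes silent truncation detectable as well.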
11. Operational and Regulatory-Evasion Threats
Cooperative-on-paper actors gaming the system to obtain authorizations they should not have, or to operate outside their authorization.
| Threat | Mitigation |
|---|---|
| Operator splits one large operation into many small flights to stay under thresholds | Cumulative-effect detection; per-operator analytic baselines; authority discretion to require aggregate filing |
| False purpose code claimed at field-verification disclosure | Purpose-code use is logged; pattern detection across an authority; FOIA / discovery exposure as a practical deterrent |
| Operator declares lower-risk mission class than actual | Conformance evaluation against telemetry catches the divergence in flight; corroborating sources flag anomaly |
| Authority over-issues endorsements (“rubber-stamp”) | Signing-volume metrics per endorser visible to that authority’s leadership; sampled audit of endorsements against operation type |
| Linkability across rotating Public Aircraft Tokens via traffic analysis | Rotation policy varies token cadence and entropy; common waypoints / departure points may still leak — flagged as a residual privacy risk (§13) |
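The cumulative-effect detection in the first row can be sketched as a windowed aggregation per operator. The flight schema, area metric, and threshold below are illustrative assumptions; a production detector would baseline per operator rather than use one global threshold:

```python
from collections import defaultdict

def flag_split_operations(flights: list, area_threshold_km2: float,
                          window_hours: int = 24) -> list:
    """Flag operators whose many small flights exceed an aggregate threshold.

    flights: dicts with "operator", "area_km2", and "start_hour" keys
    (start_hour = hours since a fixed epoch, a simplified time axis).
    """
    totals = defaultdict(float)
    for f in flights:
        # Bucket flights by operator and time window, then sum coverage.
        bucket = (f["operator"], f["start_hour"] // window_hours)
        totals[bucket] += f["area_km2"]
    # Any operator whose windowed aggregate crosses the threshold is flagged.
    return sorted({op for (op, _), area in totals.items()
                   if area > area_threshold_km2})
```

Each individual flight in a flagged window may be under every per-flight limit; the signal is the aggregate, which is exactly what threshold-splitting is designed to hide.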
12. Disclosure Lifecycle Threats
Beyond the disclosure threats already in §4, there are lifecycle threats specific to the aftermath of a legitimate disclosure.
| Threat | Mitigation |
|---|---|
| Authorized recipient redistributes disclosed PII | Disclosure receipt embeds usage terms; legal terms in authority MOU; downstream sharing logged where possible |
| Disclosed identity persists in authority systems beyond purpose | Disclosure receipts carry retention tags; periodic purge audits |
| FOIA / discovery request for disclosure logs | Log structure designed to support redaction without breaking integrity; policy reviewed by authority counsel before it is set |
13. Residual Risks (Accepted)
These risks have no full mitigation today and are accepted with monitoring. Each is documented to set reviewer expectations and to drive future work.
| Risk | Why Accepted | Compensating Control |
|---|---|---|
| GPS spoofing with corroborating-sensor compromise | Cryptography signs honest sensor readings; sensor-fusion is an infrastructure problem | Telemetry-confidence axis; cross-corroboration with ADS-B / radar where available |
| Truly disconnected first flight (no prior connectivity) | Pre-cleared capsule model assumes one online window before flight | Surface as “Not Publicly Verifiable”; reconciliation later. See Origins §9 |
| SoC-level silicon backdoor | Out of scope for software cryptographic mitigation | NDAA / supply-chain attestation; trusted-foundry programs |
| Token-rotation linkability via traffic analysis | Operational metadata (waypoints, schedules) leaks even with rotated tokens | Stricter rotation for protected missions; future research on geometry generalization |
| Wholesale GNSS jamming | Adversary capability outside ATOMx control | Time-degraded state propagation; fixed reference receivers in critical airspace |
14. Cross-References
- The trust-model framing of authorization vs. identity vs. security alert: Trust Model §4
- The trust-model security model with original 10-threat table: Trust Model §6 (this page is the authoritative consolidated reference; Trust Model §6 is a summary)
- The connected-flight attack-vector table: Flight Lifecycle §4
- The disconnected-operations time-tampering deep dive: Disconnected Operations §5.4
- Reconciliation as the catch-all detection layer: Disconnected Operations §7
- Decisions surfaced but not formalized as threats: Origins and Decisions §9
Implementation Readiness — Open Questions
Each entry below identifies a decision not yet locked. Items marked (ADR) should be formalized as an Architecture Decision Record before implementation begins. Items marked (blocking) must be resolved before the relevant feature can be built.
| # | Question | Owner | ADR? | Blocking? |
|---|---|---|---|---|
| 1 | Residual-risk register: who signs off on each accepted risk (§13), and what is the re-review cadence? Need named accepted-by signatures (CTO, CISO, product lead) and an annual review schedule. | Security + Product | Yes — “Residual Risk Governance & Acceptance Process” | Yes |
| 2 | DoS-protection capacity targets: what RPS / concurrent-verifier volumes must the public verification surface sustain before degradation, and what is the auto-throttle policy? | Engineering | Yes — “Verification Surface Capacity & Rate-Limit Policy” | Yes |
| 3 | Drone-capture forensics: what is the legal framework for chain-of-custody, evidence handling, and cross-jurisdiction handoff once a captured aircraft yields data? | Legal + Security | No (policy doc) | No |
| 4 | Malicious-OEM detection thresholds: what statistical / behavioral signals trigger an OEM-attestation review, and at what confidence level do we revoke a manufacturer trust anchor? | Security + Engineering | Yes — “OEM Trust Anchor Revocation Criteria” | Yes |
| 5 | Audit-ledger tampering detection: transparency log (Trillian / Sigstore-style), Merkle-witness mesh, or notarized blockchain anchor? Decision drives the entire integrity story for §10. | Engineering | Yes — “Audit Ledger Integrity Mechanism” | Yes |
| 6 | m-of-n quorum for ATOMx insider control: what are the n and m values, who holds the shares (role separation), and what is the key-ceremony / rotation procedure? | Security | Yes — “Insider-Threat Quorum Topology” | Yes |
| 7 | Social-engineering training program for field verifiers (LEO, FAA, public): owner, curriculum, refresh cadence, and effectiveness measurement. | Product + External (authority partners) | No | No |
| 8 | Traffic-analysis privacy mitigation (§13 residual): is mixnet-style padding, k-anonymous geometry generalization, or differential-privacy geometry-bucketing the path forward? | Engineering (research) | Yes — “Token-Rotation Linkability Mitigation Strategy” | No |
| 9 | Formal threat-modeling tool selection: OWASP Threat Dragon, Microsoft TMT, or IriusRisk for ongoing diagram-driven modeling. Affects tooling spend and onboarding. | Security | Yes — “Threat Modeling Tool & Workflow” | No |
| 10 | Pen-test cadence + vulnerability disclosure / bug bounty program: internal red team only, third-party annual, continuous bug bounty (HackerOne / Bugcrowd)? Who triages? | Security + External | Yes — “Security Testing & Disclosure Program” | No |
| 11 | Security incident response runbook ownership: who owns the on-call rotation, what is the SLA for authority-impacting incidents, and where do runbooks live? | Security + Engineering | No (runbook) | Yes |
| 12 | Regulatory-evasion detection: ML-anomaly model, deterministic heuristic rules, or hybrid? Drives data pipeline, label set, and false-positive tolerance. | Engineering + Product | Yes — “Regulatory-Evasion Detection Approach” | No |