From Evidence to Authorization

User Manual & ATO Walkthrough

An Inalab Group Product  ·  www.inalabgroup.com  ·  Version 0.1.0  ·  April 2026

Table of Contents

  1. Introduction — What is iLAB SecureX?
  2. The Story — Meet the Team
  3. Chapter 1: Register Your System
  4. Chapter 2: The System Dashboard
  5. Chapter 3: Collect Evidence
  6. Chapter 4: Review Evidence Gaps
  7. Chapter 5: The Control Catalog
  8. Chapter 6: Generate AI Narratives
  9. Chapter 7: Review & Approve Narratives
  10. Chapter 8: Export the ATO Package
  11. Chapter 9: Manage Compliance Profiles
  12. Chapter 10: Evidence Adapters
  13. Chapter 11: AI Agents
  14. Chapter 12: POA&M Management
  15. Chapter 13: Continuous Monitoring & cATO
  16. Chapter 14: Threat Intelligence
  17. Chapter 15: AI-Powered ATO Automation
  18. Reference: All Features & Capabilities
Introduction

What is iLAB SecureX?

The platform that turns months of manual ATO work into weeks of automated, evidence-driven compliance.

iLAB SecureX is an automated Authority to Operate (ATO) platform built for DoD and federal government systems. It replaces the traditional manual process — where ISSOs spend 6 to 18 months writing control narratives by hand, taking screenshots of configurations, and formatting 500-page documents — with a live, evidence-driven workflow.

The platform does three things that no manual process can:

  1. Collects evidence automatically — 6 pluggable adapters connect directly to your AWS infrastructure, source code repositories, vulnerability scanners, SBOM generators, STIG compliance tools, and test frameworks. Evidence is collected via API, not screenshots.
  2. Generates narratives with AI — Amazon Bedrock reads your actual evidence and writes control implementation narratives that reference your real IAM roles, VPC configurations, and scan results. Not boilerplate — evidence-specific.
  3. Exports submission-ready packages — One click produces OSCAL JSON, Word documents, PDFs, or eMASS CSV files ready for your assessor.

Core Principle: AI Drafts, Humans Decide

Every AI-generated narrative is saved as a draft. The ISSO reviews, edits if needed, and approves before anything goes into the ATO package. The platform accelerates the work — humans own the decisions.

Who Uses iLAB SecureX?

Role | What They Do in the Platform
ISSO (Information System Security Officer) | Primary user. Registers systems, collects evidence, reviews AI-generated narratives, approves controls, manages POA&Ms, exports ATO packages.
ISSM (Information System Security Manager) | Oversight. Reviews compliance dashboards across systems, approves narrative batches, monitors ATO readiness posture.
Developer | Triggers evidence collection from CI/CD pipelines, views control status, provides technical context for narratives.
Assessor | Reviews evidence-backed narratives with integrity hashes, exports OSCAL for machine-readable assessment.
Authorizing Official (AO) | Views ATO readiness dashboards, reviews final packages, makes authorization decisions.

Supported Compliance Frameworks

Framework | Controls | Use Case
NIST 800-53 Rev 5 (IL5) | 325 | DoD systems on AWS GovCloud
NIST 800-53 Rev 5 (IL4) | 370 | DoD systems on AWS GovCloud
FedRAMP High | 421 | Cloud service providers
CMMC Level 2 | 110 | Defense contractors handling CUI
NIST 800-171 Rev 2 | 110 | Non-federal systems with CUI
Custom profiles | Any | Agency-specific or international frameworks
The Story

Meet the Team

Throughout this manual, we follow a fictional team as they use iLAB SecureX to get an ATO for their cloud-native application.

🏢 The Scenario

Organization: Defense Systems Group (DSG), a DoD program office
Application: Task Manager Sample App (TMSA) — a cloud-native web application deployed on AWS
Goal: Obtain an Authority to Operate at NIST 800-53 Moderate baseline
Timeline: 3 weeks (compared to the traditional 6–12 months)

👥 The Team

Sarah Chen, ISSO — Owns the ATO package. She'll register the system, collect evidence, review narratives, and export the final package.
Marcus Williams, Developer — Built the application. He'll provide the source code repository URL and answer technical questions about the architecture.
Dr. Priya Patel, ISSM — Oversees all systems in the program. She'll review the compliance dashboard and approve the final package.
James Rodriguez, Assessor — External reviewer from the assessment team. He'll evaluate the evidence and narratives.

Let's follow Sarah as she takes TMSA from zero to ATO-ready using iLAB SecureX.

Chapter 1

Register Your System

Sarah's first step: tell the platform about the system she needs to authorize.

📖 Story

Sarah logs into iLAB SecureX and sees the Systems page — the starting point for all ATO work. She needs to register TMSA as a new system so the platform knows what it's working with.

The Systems page is the home screen of iLAB SecureX. It shows all authorization boundaries (systems) managed by the platform. Each system has its own controls, evidence, narratives, and export packages.

Systems list page showing all registered systems
Figure 1.1: The Systems page — Sarah's starting point. She can see any existing systems and create new ones.

Creating a New System

Click "Create system" in the top right to open the system registration form.

Create New System form with all fields
Figure 1.2: The Create System form. Sarah fills in the system details that will appear in the SSP.

Form Fields

Field | Description | Sarah's Entry
System Name | The official name of the information system as it appears in the SSP and ATO package. | Task Manager Sample App
Acronym | A short identifier (3–6 characters). Used in export filenames and references. | TMSA
Description | A brief description of what the system does and its purpose. | Sample task manager application for ATO demo
Application Type | The architectural pattern. Options include: cloud-native-web, serverless, container-platform, data-pipeline, ai-ml-platform, and more. | cloud-native-web
Deployment Environment | Where the system is deployed. Options: aws-govcloud, aws-commercial, azure-government, on-prem-datacenter, air-gapped, etc. | aws-commercial
Impact Level | The FIPS 199 categorization. Options: LOW, MODERATE, HIGH, IL2, IL4, IL5, IL6. | MODERATE

After clicking "Create system", the platform:

  1. Creates the system record with a unique UUID
  2. Loads the control catalog from the assigned compliance profile
  3. Applies the inheritance model (marking CSP-inherited controls)
  4. Redirects Sarah to the System Dashboard
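
Conceptually, the record created in step 1 can be sketched as a simple data structure. This is an illustrative sketch only — the field names and shape are assumptions, not the platform's actual schema — but it shows how the form fields and the platform-assigned UUID fit together:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class SystemRecord:
    """Illustrative sketch of a registered system (field names are assumed)."""
    name: str
    acronym: str
    description: str
    application_type: str
    deployment_environment: str
    impact_level: str
    # Unique UUID assigned by the platform at creation time
    system_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# Sarah's entries from the Create System form
tmsa = SystemRecord(
    name="Task Manager Sample App",
    acronym="TMSA",
    description="Sample task manager application for ATO demo",
    application_type="cloud-native-web",
    deployment_environment="aws-commercial",
    impact_level="MODERATE",
)
```

The acronym ("TMSA") is what later shows up in export filenames, while the UUID is the stable internal identifier.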

💡 Contextual Help

Throughout the application, you'll see small icons next to labels and column headers. Click or hover over these to see contextual help explaining what each field means and how it's used in the ATO process.

Chapter 2

The System Dashboard

Sarah's command center — a single view of where the system stands in the ATO process.

📖 Story

After creating TMSA, Sarah lands on the System Dashboard. This is her home base for the entire ATO process. She can see at a glance how much work is done and what's left.

System Dashboard showing ATO readiness, controls, evidence health, and timeline
Figure 2.1: The System Dashboard for TMSA. Shows 14% ATO readiness, 141 controls, 170 evidence artifacts, and the ATO Journey Timeline.

Dashboard Sections

Executive Summary

The top card shows the key metrics at a glance: ATO readiness percentage, total controls, and the number of collected evidence artifacts.

POA&M Summary

Shows total POA&M items, overdue count, in-progress count, completed count, and a remediation progress bar. High-priority items and upcoming due dates are highlighted.

ATO Journey Timeline

A visual 5-phase timeline showing where you are in the ATO process:

  1. Evidence Collection — Collect compliance artifacts from infrastructure
  2. Narrative Generation — AI generates control implementation narratives
  3. ISSO Review & Approval — Human review and approval of narratives
  4. Security Assessment — Independent assessment by 3PAO or government assessor
  5. ATO Authorization — Final authorization decision by the AO

Evidence Adapters

Shows the status of all 6 evidence adapters — which are connected, how many artifacts each has collected, and when the last collection ran. Click "Configure" to set up any adapter.

Quick Actions

One-click shortcuts to the most common workflows: Collect Evidence, Generate Narratives, Export Package, and Run Gap Assessment.

Control Status Pie Chart

A donut chart showing the breakdown of control implementation status: Implemented (green), Partially Implemented (orange), Planned (blue), Inherited (purple), and Not Applicable (grey).

💡 Header Action Buttons

The three buttons in the page header — Collect Evidence, Generate Narratives, and Export Package — are always visible on the dashboard. They're the primary workflow actions Sarah will use throughout the ATO process.

Chapter 3

Collect Evidence

The foundation of every ATO — real, current evidence from your actual infrastructure.

📖 Story

Sarah clicks "Collect Evidence" from the dashboard. Marcus, the developer, gives her the CodeCommit repository URL for the TMSA application. She selects all 6 adapters and kicks off the collection. In under 2 minutes, the platform collects 170+ evidence artifacts — each with a SHA-256 integrity hash and mapped to specific NIST 800-53 controls.

The Evidence Library page is where you collect, view, and manage all compliance evidence for a system. Evidence is the raw data that proves your system implements security controls.

Evidence Library page with collection panel, adapter checkboxes, and evidence table
Figure 3.1: The Evidence Library. Top: collection panel with adapter selection. Bottom: all 170 collected evidence artifacts with types, sources, control mappings, timestamps, and integrity hashes.

Step-by-Step: Collecting Evidence

Step 1: Select Adapters

Check the adapters you want to run. Each adapter collects different types of evidence:

Adapter | What It Collects | Controls Mapped
Source Code Analyzer | Authentication patterns, RBAC implementations, audit trail code, input validation, encryption usage, dependency manifests, Docker configurations | AC-3, AU-2, IA-2, SC-13, SI-10, SA-11, CM-7
AWS Infrastructure | IAM policies & roles, CloudTrail configuration, KMS key inventory, VPC & security groups, Security Hub findings, CloudWatch log groups | AC-2, AC-3, AC-6, AU-2, AU-6, SC-7, SC-12, SC-28
Trivy Vulnerability Scan | Container vulnerabilities (CVEs), infrastructure misconfigurations, exposed secrets | RA-5, SI-2, CM-6
SBOM (Syft + Grype) | Software Bill of Materials in SPDX and CycloneDX formats, vulnerability correlation | SA-11, SR-4, CM-8
Test Results | Unit/integration test execution, pass/fail counts, coverage metrics | SA-11, SI-6
OpenSCAP STIG Scan | STIG compliance scans via SSM on EC2 instances, XCCDF evaluation results | CM-6, SI-2, RA-5
Step 2: Enter Repository URL

If you selected Source Code Analyzer, Trivy, SBOM, or Test Results, enter the Git repository URL. The platform supports CodeCommit HTTPS URLs and any Git HTTPS URL.

Example: https://git-codecommit.us-east-2.amazonaws.com/v1/repos/task-manager-app

Step 3: Enter AWS Region

If you selected AWS Infrastructure or OpenSCAP, enter the AWS region where your infrastructure is deployed (e.g., us-east-2).

Step 4: Click "Run Evidence Collection"

The platform runs two types of collection:

  • Synchronous (inline) — Source Code Analyzer and AWS Infrastructure run immediately. Results appear in seconds.
  • Asynchronous (background) — Trivy, SBOM, Test Results, and OpenSCAP run as a container-based Lambda function in the background. A banner shows progress, and the table auto-refreshes every 15 seconds.

Understanding Evidence Artifacts

Each collected artifact has an evidence type, the source adapter that produced it, mappings to specific controls, a collection timestamp, and a SHA-256 integrity hash.
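
The SHA-256 integrity hash is what lets an assessor verify that evidence has not been altered since collection, because the same bytes always produce the same hash. The platform's exact canonicalization isn't documented here, but the principle can be sketched in a few lines:

```python
import hashlib
import json

def integrity_hash(artifact_payload: dict) -> str:
    """Compute a SHA-256 hash over a canonical JSON rendering of the artifact."""
    # sort_keys gives a stable byte representation, so the same payload
    # always produces the same 64-character hex digest
    canonical = json.dumps(artifact_payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

artifact = {"type": "iam_policy_snapshot", "source": "aws"}
digest = integrity_hash(artifact)
# Changing even one byte of the evidence produces a completely different digest
```

Re-hashing the stored artifact and comparing against the recorded digest is all it takes to confirm integrity.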

Collection Run History

Below the collection panel, a Collection Run History table shows all previous collection runs with their status, adapters used, artifact counts, and timestamps. This provides an audit trail of when evidence was collected.

⚠️ Evidence Freshness

Assessors want to see recent evidence. The dashboard tracks "Fresh Evidence" (collected within 30 days) and "Stale Evidence" (older than 30 days). Re-collect evidence before submission to ensure everything is current.

Chapter 4

Review Evidence Gaps

What's missing? The gap analysis tells you exactly which controls need more evidence.

📖 Story

After collecting evidence, Sarah clicks "View Gaps" to see if any controls are missing required evidence. The gap analysis compares her collected evidence against the profile's requirements and shows exactly what's missing.

Evidence Gap Analysis page showing no gaps detected
Figure 4.1: Evidence Gap Analysis. In this case, all required evidence is present — no gaps detected. When gaps exist, the table shows each control ID, family, severity, and the specific missing evidence types.

For each gap, the page shows the control ID, control family, severity, and the specific evidence types that are still missing.

When gaps are found, the remediation path is clear: go back to the Evidence page, select the appropriate adapter, and collect the missing evidence type.
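
Under the hood, a check of this kind is essentially a set difference between required and collected evidence types. A simplified sketch — the control IDs and evidence types are examples from this manual, not the platform's internal logic:

```python
def find_gaps(required: dict[str, set[str]], collected: set[str]) -> dict[str, set[str]]:
    """Return, per control, the required evidence types not yet collected."""
    return {
        control: missing
        for control, needed in required.items()
        if (missing := needed - collected)  # keep only controls with a gap
    }

required = {
    "AC-2": {"iam_policy_snapshot", "authentication_config"},
    "RA-5": {"vulnerability_scan"},
}
collected = {"iam_policy_snapshot", "vulnerability_scan"}
gaps = find_gaps(required, collected)
# AC-2 still needs authentication_config; RA-5 is fully covered and not reported
```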

✅ Sarah's Result

With all 6 adapters run against the TMSA repository and AWS account, Sarah has zero evidence gaps. All required evidence types are present. She's ready to move to narrative generation.

Chapter 5

The Control Catalog

141 controls, searchable and filterable — the backbone of your ATO package.

📖 Story

Sarah navigates to the Control Catalog to see all 141 NIST 800-53 controls required by her compliance profile. She can search by control ID or title, filter by family, and click into any control to see its implementation status, narrative, and linked evidence.

Control Catalog showing 141 controls with search and family filter
Figure 5.1: The Control Catalog. 141 controls with search, family filter, implementation status, responsibility, and narrative status columns.

Understanding Control Status

Status | Meaning | Color
Implemented | Control is fully implemented with sufficient evidence | Green
Partially Implemented | Some aspects are implemented but gaps remain | Orange
Planned | Control implementation is planned but not yet in place | Blue
Inherited | Control is inherited from the Cloud Service Provider (e.g., AWS) | Info
Not Applicable | Control is excluded by the compliance profile | Grey

Responsibility Types

Narrative Status

Click any Control ID link to open the Control Detail page.

Chapter 6

Generate AI Narratives

The AI reads your evidence and writes control implementation narratives — specific to your system, not boilerplate.

📖 Story

With 170 evidence artifacts collected, Sarah navigates to the Narrative Review page. She selects "Amazon Nova Pro" as the AI model and clicks "Generate All Narratives." The platform sends each control's evidence to Amazon Bedrock, which generates a narrative specific to TMSA's actual configurations. In about 5–10 minutes, 121 narratives are generated.

Narrative Review page with AI model selector, progress summary, and controls table
Figure 6.1: The Narrative Review page. Shows 121 of 141 narratives generated, 5 approved, 116 pending review. The AI model selector and bulk action buttons are in the header.

Step-by-Step: Generating Narratives

Step 1: Select an AI Model

Choose the model that best fits your needs:

Model | Quality | Speed | Notes
Amazon Nova Pro | High | Moderate | Recommended for production use
Amazon Nova Lite | Good | Fast | Good for drafts and iteration
Amazon Nova Micro | Basic | Fastest | Quick previews
Claude Sonnet 4.5 | Highest | Slow | Requires AWS Marketplace subscription
Claude Sonnet 4 | Very High | Moderate | Requires AWS Marketplace subscription
Claude 3.5 Haiku | Good | Fast | Requires AWS Marketplace subscription
Step 2: Click "Generate All Narratives"

The platform iterates through every control that has linked evidence and sends the evidence data to the AI model. Controls without evidence are skipped. A progress banner shows the generation status, and the page auto-refreshes every 10 seconds.
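
The selection rule — generate only for controls that have linked evidence — amounts to a simple filter. A hypothetical sketch (the actual Bedrock call is elided; function and field names here are assumptions):

```python
def controls_to_generate(controls: list[dict]) -> list[str]:
    """Pick controls eligible for narrative generation: those with linked evidence."""
    return [c["id"] for c in controls if c["evidence_count"] > 0]

controls = [
    {"id": "AC-2", "evidence_count": 12},
    {"id": "PE-3", "evidence_count": 0},  # no adapter evidence -> skipped
    {"id": "AU-2", "evidence_count": 8},
]
eligible = controls_to_generate(controls)
# Only AC-2 and AU-2 are sent to the AI model; PE-3 is skipped
```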

Step 3: Monitor Progress

The progress summary at the top updates in real time:

  • Total Controls — Total number of controls in the profile
  • Narratives Generated — How many have been written by the AI
  • Approved — How many the ISSO has approved
  • Pending Review — How many are waiting for human review

The Narrative Approval Progress bar shows the percentage of controls with approved narratives.
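
These progress numbers are simple counts over narrative status. A sketch using the figures from Sarah's run (121 generated = 5 approved + 116 pending); the status values "draft" and "approved" are assumptions based on the workflow described in this manual:

```python
def progress_summary(statuses: list[str], total_controls: int) -> dict[str, int]:
    """Tally narrative statuses into the dashboard's progress metrics."""
    generated = sum(1 for s in statuses if s in ("draft", "approved"))
    approved = statuses.count("approved")
    return {
        "total_controls": total_controls,
        "generated": generated,
        "approved": approved,
        "pending_review": generated - approved,
    }

statuses = ["approved"] * 5 + ["draft"] * 116
summary = progress_summary(statuses, total_controls=141)
# → {'total_controls': 141, 'generated': 121, 'approved': 5, 'pending_review': 116}
```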

What Makes These Narratives Different

Unlike template-based tools that produce generic boilerplate, iLAB SecureX narratives reference your actual data: real IAM role names, VPC and security group configurations, vulnerability scan results, and test outcomes drawn from the collected evidence.

💡 Re-generating Narratives

You can re-generate narratives at any time — for example, after collecting new evidence or switching to a different AI model. Previously generated narratives will be overwritten with fresh ones.

Chapter 7

Review & Approve Narratives

AI drafts, humans decide — the ISSO reviews every narrative before it enters the ATO package.

📖 Story

Sarah clicks into AC-2 (Account Management) to review the AI-generated narrative. She reads through it, verifies the evidence citations are accurate, and clicks "Approve." For controls where the narrative needs adjustment, she clicks "Edit" to refine the text before approving.

Control Detail page showing AC-2 with narrative, evidence, and approve button
Figure 7.1: The Control Detail page for AC-2 — Account Management. Shows the AI-generated narrative (draft status), linked evidence artifacts with integrity hashes, and the Generate/Edit/Approve buttons.

The Control Detail Page

Each control has a dedicated detail page with three sections: key metrics, the implementation narrative, and linked evidence.

Key Metrics (top bar)

Implementation Narrative

The main section shows the AI-generated narrative text. Three action buttons are available: Generate, Edit, and Approve.

Linked Evidence

A table showing all evidence artifacts mapped to this control, including each artifact's evidence type, source adapter, collection timestamp, and integrity hash.

Bulk Approval

Back on the Narrative Review page, the "Approve All Drafts" button lets the ISSO bulk-approve all draft narratives at once. The button shows the count of pending drafts (e.g., "Approve All Drafts (116)").

⚠️ Review Before Bulk Approving

While bulk approval is convenient, the ISSO should review at least the high-impact controls individually before approving. The AI is good but not perfect — human judgment is essential for controls like AC-2, AU-2, SC-7, and IA-2.

✅ Sarah's Progress

Sarah reviews the top 20 high-impact controls individually, editing 3 narratives where she wants more specific language. She then bulk-approves the remaining 98 draft narratives. Total time: about 2 hours of review work — compared to weeks of manual writing.

Chapter 8

Export the ATO Package

One click produces submission-ready documents in the format your assessor needs.

📖 Story

With all narratives approved, Sarah navigates to the Export Center. She generates an OSCAL JSON file for machine-readable assessment, a Word document for the traditional SSP review, and an eMASS CSV for import into the DoD's enterprise system. Each export takes a few seconds and produces a downloadable file.

Export Center with format selection tiles and export history
Figure 8.1: The Export Center. Four export formats available, with a history of previously generated exports showing filenames, formats, dates, and content counts.

Available Export Formats

Format | Description | Use Case
OSCAL JSON | NIST Open Security Controls Assessment Language — machine-readable SSP | Automated assessment tools, GRC platforms, interoperability
Word Document (.docx) | Formatted SSP with title page, system description, controls organized by family, and POA&M section | Traditional assessor review, AO signature
PDF | Print-ready document with automatic page breaks and formatting | Distribution, archival, email
eMASS CSV | Import format for DoD's Enterprise Mission Assurance Support Service | Direct import into eMASS

Generating an Export

  1. Select the desired format by clicking its tile
  2. Click "Generate Export"
  3. A success alert appears with the filename and a download link
  4. The export appears in the Export History table below

Each export includes:

💡 Download Links

Export download links are presigned S3 URLs valid for 24 hours. After that, you can regenerate the export from the Export Center.

Chapter 9

Manage Compliance Profiles

Compliance frameworks are configuration, not code — add new frameworks by creating a YAML profile.

📖 Story

Dr. Patel, the ISSM, wants to see what compliance frameworks are available. She navigates to the Profiles page and sees 5 built-in profiles. She also wants to create a custom profile for an agency-specific baseline — the Profile Builder wizard walks her through it in 8 steps.

Profiles page showing 5 built-in compliance profiles
Figure 9.1: The Profiles page. 5 built-in profiles covering IL5, IL4, FedRAMP High, CMMC Level 2, and NIST 800-171. Custom profiles can be created with the Profile Builder.

Built-in Profiles

Profile | Framework | Baseline | Controls
IL5 Cloud-Native (AWS) | NIST-800-53-Rev5 | High | 325
IL4 Cloud-Native (AWS GovCloud) | NIST-800-53-Rev5 | High | 370
FedRAMP High (AWS) | FedRAMP | High | 421
CMMC Level 2 | CMMC | Level 2 | 110
NIST 800-171 Rev 2 | NIST-800-171 | Standard | 110

The Profile Builder (8-Step Wizard)

Click "Create profile" to open the Profile Builder — an 8-step wizard that guides you through creating a custom compliance profile:

  1. Basic Information — Profile ID, name, version, framework, baseline, description, target application types, target environments
  2. Catalog — Control catalog source, baseline, exclusions (controls to skip), and additions (extra controls)
  3. Inheritance (optional) — CSP provider name, SSP reference, and inherited control definitions with status (fully inherited, shared, customer responsible)
  4. Evidence Mappings — Map controls to required evidence types, specify which adapter collects each type, and write AI narrative prompts per control
  5. Narrative Templates (optional) — Handlebars templates for formatting AI-generated narratives
  6. Export Configuration — Enable/disable OSCAL, Word, PDF, and eMASS export formats
  7. Evidence Collection — Collection schedule (cron expression), on-deployment collection toggle, retention period
  8. Review & Save — Summary of all settings with validation

💡 No Code Required

Adding a new compliance framework requires zero code changes. The entire framework definition — controls, inheritance, evidence mappings, AI prompts, export settings — is captured in the profile configuration.
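
As an illustration of what such a profile configuration might look like — this is a hypothetical sketch that follows the wizard's eight steps, not the platform's actual schema; every key name below is an assumption — a custom YAML profile could be expressed roughly as:

```yaml
# Hypothetical profile sketch -- key names are illustrative, not the real schema
id: agency-custom-moderate
name: Agency Custom Moderate Baseline
version: 1.0.0
framework: NIST-800-53-Rev5
baseline: MODERATE

catalog:
  source: nist-800-53-rev5
  exclusions: [PE-3, PE-6]        # physical controls handled outside this system
  additions: [AC-2(13)]

inheritance:
  provider: AWS
  controls:
    - id: PE-2
      status: fully-inherited

evidence_mappings:
  AC-2:
    required_evidence: [iam_policy_snapshot, authentication_config]
    adapters: [aws, source-code]

exports:
  oscal: true
  word: true
  pdf: true
  emass: false

collection:
  schedule: "0 0 * * *"           # cron expression: daily at midnight
  on_deployment: true
  retention_days: 365
```

The point is the shape, not the key names: controls, inheritance, evidence mappings, exports, and collection schedule are all declarative configuration, so no code changes are needed to add a framework.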

Chapter 10

Evidence Adapters

Pluggable evidence sources — 6 built-in, unlimited custom.

📖 Story

Marcus, the developer, wants to understand what evidence adapters are available and how they work. He navigates to the Adapters page to see all 6 built-in adapters. He also wants to add a custom PostgreSQL adapter for their database — the "Create adapter" button lets him register it with a JSON Schema for connection settings.

Adapters page showing 6 built-in evidence adapters
Figure 10.1: The Adapters page. 6 built-in adapters: AWS Evidence, Source Code Analyzer, Trivy, SBOM, OpenSCAP, and Test Results. Custom adapters can be created with the "Create adapter" button.

Built-in Adapters

Adapter | ID | Evidence Types
AWS Evidence Adapter | aws | iam_policy_snapshot, cloudtrail_config, kms_key_inventory, vpc_configuration, security_hub_findings, cloudwatch_log_groups
Source Code Analyzer | source-code | authentication_config, rbac_configuration, audit_trail_config, input_validation, encryption_config, dependency_manifest, docker_configuration
Trivy Vulnerability Scanner | trivy | vulnerability_scan, misconfiguration_scan, secret_scan
SBOM Generator (Syft + Grype) | sbom | sbom_spdx, sbom_cyclonedx, vulnerability_correlation
OpenSCAP Compliance Scanner | openscap | stig_scan_results, cis_benchmark_results
Test Results Collector | test-results | test_run_results, test_coverage

Adapter Detail Page

Click any adapter name to open its detail page, which shows:

Creating a Custom Adapter

Click "Create adapter" to register a new evidence source. You provide an adapter ID and display name, the evidence types it produces, and a JSON Schema for its connection settings.
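
For example, Marcus's PostgreSQL adapter might be registered with a payload like the following. This is a hypothetical sketch — the adapter ID, field names, and evidence types are all illustrative, not the platform's real registration format — but it shows how a JSON Schema can describe the adapter's connection settings:

```python
# Hypothetical registration payload for a custom adapter (illustrative only)
postgres_adapter = {
    "id": "postgres",
    "name": "PostgreSQL Evidence Adapter",
    "evidence_types": ["db_role_grants", "db_audit_settings"],
    # JSON Schema describing the adapter's connection settings
    "connection_schema": {
        "type": "object",
        "required": ["host", "port", "database"],
        "properties": {
            "host": {"type": "string"},
            "port": {"type": "integer", "default": 5432},
            "database": {"type": "string"},
        },
    },
}
```

The platform can then render a configuration form from the schema and validate operator input before the adapter runs.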

Chapter 11

AI Agents

Pluggable AI reasoning — different agents for different compliance frameworks and tasks.

📖 Story

Sarah wants to run a CMMC assessment against TMSA's evidence. She navigates to the AI Agents page, clicks into the CMMC Level 2 Assessor agent, selects the "cmmc_practice_assessment" capability, chooses the Access Control (AC) domain, and invokes the agent. The AI produces a detailed practice-by-practice assessment using C3PAO language. Sarah reviews the output and clicks "Approve."

AI Agents page showing 4 agents
Figure 11.1: The AI Agents page. 3 built-in agents (Narrative Generator, Gap Analysis, Risk Assessment) plus the custom CMMC Level 2 Assessor.

Built-in Agents

Agent | Capabilities | Description
Narrative Generator | narrative_generation | Generates SSP control implementation narratives from evidence. The primary agent used in the ATO workflow.
Gap Analysis Agent | gap_analysis, evidence_analysis | Identifies missing evidence and recommends specific collection actions.
Risk Assessment Agent | risk_assessment, compliance_scoring | Scores risk posture with quantified metrics and prioritized recommendations.
CMMC Level 2 Assessor | cmmc_narrative_generation, cmmc_practice_assessment, cmmc_gap_analysis | Custom agent that generates CMMC-specific assessments using C3PAO language.

Threat Intelligence Agents

Three new agents transform compliance findings into threat-informed intelligence using MITRE ATT&CK, CISA KEV, and EPSS data via a Bedrock Knowledge Base.

Agent | Capabilities | Description
Threat Exposure Mapper | threat_exposure_mapping, evidence_analysis | Maps STIG/CIS findings and control gaps to MITRE ATT&CK techniques. Identifies which adversary groups (APT28, APT29, APT41, Lazarus, etc.) can exploit open findings and traces potential attack paths through the environment.
Adversary Prioritization Agent | adversary_prioritization, risk_assessment | Prioritizes remediation based on adversary relevance, CISA KEV status, EPSS scores, and attack path analysis. Produces a threat-informed priority stack instead of flat CAT I/II/III severity ranking.
Threat Landscape Monitor | threat_landscape_monitoring, evidence_analysis | Continuously monitors external threat feeds (CVE, CISA KEV, EPSS, ATT&CK updates) and re-prioritizes the remediation queue when the threat landscape changes. Generates alerts when new threats map to open findings.

💡 Threat Intelligence Agent Pipeline

The three threat intelligence agents work as a pipeline: Threat Exposure Mapper maps findings to ATT&CK techniques → Adversary Prioritization Agent ranks remediation by adversary relevance → Threat Landscape Monitor re-prioritizes when the threat landscape changes. Each agent can also be invoked independently.

🔍 Bedrock Knowledge Base

The threat intelligence agents are backed by a Bedrock Knowledge Base containing MITRE ATT&CK data, NIST 800-53 to ATT&CK mappings, CISA KEV catalog, EPSS scores, and curated adversary profiles for DoD/diplomatic sector threat groups. The KB uses a managed vector store — no OpenSearch or Aurora required. Cost is pay-per-query only.

Agent Detail Page

Agent Detail page showing Overview tab with configuration and capabilities
Figure 11.2: Agent Detail page — Overview tab showing configuration (ID, version, provider, source, approval requirement) and capabilities.

Each agent has three tabs:

Overview Tab

Shows the agent's configuration: ID, version, provider (bedrock), source (built-in or custom), whether it requires human approval, and its capabilities.

Invoke Agent Tab

Agent Invoke tab with capability selector, system selector, and instructions
Figure 11.3: The Invoke Agent tab. Select a capability, optionally select a target system, add instructions, and click Invoke.

To invoke an agent:

  1. Select a Capability — Choose which capability to run (e.g., narrative_generation, gap_analysis)
  2. Select a Target System (optional) — The system whose evidence the agent will analyze
  3. CMMC Domain Filter (CMMC agents only) — Scope the assessment to a specific domain (AC, AU, CM, IA, SC, etc.) for thorough per-practice analysis
  4. Additional Instructions (optional) — Free-text instructions for the agent
  5. Click "Invoke"

The agent's output is rendered as formatted HTML with headers, bullet points, and evidence citations. If the agent requires human approval, Approve and Reject buttons appear below the output.

History Tab

Shows all previous invocations for this agent with: invocation ID, capability, status, approval status, who triggered it, date, and action buttons. Click any invocation to view its full output in a modal.

Creating a Custom Agent

Click "Create agent" on the Agents list page. Provide an agent ID, version, provider, its capabilities, and whether its output requires human approval.

Chapter 12

POA&M Management

Track known weaknesses with remediation plans, milestones, and deadlines.

📖 Story

During her review, Sarah identifies 3 controls where the evidence is insufficient for a "MET" determination. She creates POA&M items for each, documenting the weakness, severity, remediation plan, responsible party, and target completion date. The POA&M Summary on the dashboard tracks her progress.

Plan of Action & Milestones (POA&M) items track known security weaknesses that need remediation. They're a required part of every ATO package.

Creating a POA&M Item

Navigate to the POA&M page from the System Dashboard (click "View All POA&Ms"). Each POA&M item includes the weakness description, severity, remediation plan, responsible party, and target completion date.

POA&M Summary (Dashboard)

The POA&M Summary card on the System Dashboard shows total POA&M items, overdue count, in-progress count, completed count, and a remediation progress bar.

⚠️ Overdue POA&Ms

Overdue POA&M items are flagged in red on the dashboard. Assessors pay close attention to overdue items — they indicate the organization isn't meeting its own remediation commitments.
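
Overdue detection is a date comparison against each item's target completion date — a minimal sketch, assuming a simple open/completed status field:

```python
from datetime import date

def overdue_items(poams: list[dict], today: date) -> list[str]:
    """Return IDs of open POA&M items whose target completion date has passed."""
    return [
        p["id"]
        for p in poams
        if p["status"] != "completed" and p["target_date"] < today
    ]

poams = [
    {"id": "POAM-1", "status": "in-progress", "target_date": date(2026, 3, 1)},
    {"id": "POAM-2", "status": "completed",   "target_date": date(2026, 3, 1)},
    {"id": "POAM-3", "status": "in-progress", "target_date": date(2026, 6, 1)},
]
flagged = overdue_items(poams, today=date(2026, 4, 15))
# Only POAM-1 is flagged: still open and past its target date
```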

Chapter 13

Continuous Monitoring & cATO

ATO isn't a one-time event — it's a continuous process. iLAB SecureX is built for cATO.

📖 Story

Three weeks after starting, Sarah has a complete ATO package: 141 controls with approved narratives, 170 evidence artifacts with integrity hashes, and export packages in OSCAL, Word, and eMASS formats. But the work doesn't stop at authorization. Dr. Patel sets up scheduled evidence collection to maintain continuous compliance posture.

Traditional ATO is a point-in-time snapshot that goes stale immediately. iLAB SecureX is designed for Continuous Authority to Operate (cATO):

The cATO Workflow

  1. Initial ATO package is submitted and authorized
  2. Evidence adapters run on schedule (e.g., daily at midnight)
  3. Dashboard shows real-time compliance posture to the ISSM
  4. When drift is detected, the ISSO is notified and creates POA&M items
  5. The AO has a live view of compliance posture — no waiting for the next reauthorization cycle
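
The drift detection in step 4 can be sketched as comparing the integrity hash of freshly collected evidence against the hash recorded at authorization time. This is a simplified illustration of the idea, not the platform's actual algorithm:

```python
import hashlib
import json

def config_hash(config: dict) -> str:
    """Stable SHA-256 hash of a configuration snapshot."""
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

baseline = {"sg-123": {"ingress": [{"port": 443, "cidr": "10.0.0.0/8"}]}}
current  = {"sg-123": {"ingress": [{"port": 443, "cidr": "0.0.0.0/0"}]}}  # widened!

drift_detected = config_hash(current) != config_hash(baseline)
# A changed security group produces a different hash, signalling drift to the ISSO
```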

✅ Sarah's ATO Timeline

Week 1: System registered, evidence collected (170 artifacts), AI narratives generated (121 controls)
Week 2: ISSO review and approval of narratives, POA&M items created for 3 gaps
Week 3: Final package exported (OSCAL + Word + eMASS), submitted for assessment
Total: 3 weeks — compared to the traditional 6–12 months of manual work.

Chapter 14

Threat Intelligence

Transform compliance findings into threat-informed defense — know which adversaries can exploit your open findings.

📖 Story

Sarah has 47 open CAT I STIG findings across 12 systems. The traditional approach: prioritize by severity category and work through them. But Dr. Patel asks a harder question: "Which of these findings actually matter given the adversaries targeting us?"

Sarah invokes the Threat Exposure Mapper agent. It maps her 47 findings to MITRE ATT&CK techniques and identifies that 8 of them enable techniques actively used by APT29 (Russia/SVR) — a Tier 1 threat to their diplomatic infrastructure. 3 of those 8 are on perimeter-exposed systems.

She then runs the Adversary Prioritization Agent, which produces a threat-informed remediation plan: fix those 3 perimeter findings first (blocks APT29's initial access), then the remaining 5 (blocks lateral movement), then the other 39 in EPSS-score order. The flat CAT I list becomes a strategic defense plan.

The Gap: Compliance Data as Threat Intelligence

Every STIG assessment generates thousands of findings, but those findings are treated as compliance checkboxes — not as threat intelligence. The gap: nobody connects "this STIG finding is open" to "this is the specific adversary technique that exploits it."

iLAB SecureX bridges that gap with three threat intelligence agents that transform compliance data into adversary-informed, prioritized intelligence.

The Three-Agent Pipeline

1. Threat Exposure Mapper

Takes STIG/CIS findings and control gaps from the existing compliance pipeline and maps them to MITRE ATT&CK techniques using published NIST 800-53 → ATT&CK mappings.

Input: Open findings and control gaps (already generated by existing agents)

Output: For each finding — the ATT&CK techniques it enables, which adversary groups use those techniques, and potential attack paths through the environment.

Key insight: No new data collection required. It transforms data the platform already produces.
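The mapper's core transformation can be sketched as a simple lookup chain. The control IDs, technique IDs, and group lists below are a tiny illustrative subset, not the published NIST 800-53 → ATT&CK mappings themselves:

```python
# Minimal sketch of the Threat Exposure Mapper's lookup chain:
# control gap -> ATT&CK techniques enabled -> adversary groups using them.
# All mappings below are illustrative stand-ins for the published data.

CONTROL_TO_TECHNIQUES = {
    "AC-2": ["T1078"],   # Valid Accounts
    "SI-2": ["T1190"],   # Exploit Public-Facing Application
    "SC-7": ["T1133"],   # External Remote Services
}

TECHNIQUE_TO_GROUPS = {
    "T1078": ["APT29", "APT28"],
    "T1190": ["APT41", "Volt Typhoon"],
    "T1133": ["APT29"],
}

def map_finding(control_id: str) -> dict:
    """Return the techniques a gap in this control enables and the
    adversary groups known to use those techniques."""
    techniques = CONTROL_TO_TECHNIQUES.get(control_id, [])
    groups = sorted({g for t in techniques
                       for g in TECHNIQUE_TO_GROUPS.get(t, [])})
    return {"control": control_id, "techniques": techniques, "groups": groups}

print(map_finding("SC-7"))
# {'control': 'SC-7', 'techniques': ['T1133'], 'groups': ['APT29']}
```

A gap in a control the platform already tracks is all the input needed — which is why no new data collection is required.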

2. Adversary Prioritization Agent

Takes the ATT&CK mappings from the Threat Exposure Mapper and ranks remediation by adversary relevance — not just severity category.

Prioritization factors:

  - Adversary relevance: is the enabled technique used by a Tier 1 or Tier 2 priority threat group?
  - Exploitability: EPSS probability and CISA KEV status
  - Exposure: position in the attack path, such as perimeter-exposed systems
Output: A threat-informed remediation priority stack with immediate/short-term/medium-term/long-term action items and risk reduction projections.
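The ranking logic can be sketched as a multi-key sort. The field names, tie-break order, and sample findings here are assumptions for illustration, not the shipped scoring algorithm:

```python
# Hypothetical ranking sketch for the Adversary Prioritization Agent:
# order findings by adversary tier, then perimeter exposure, then EPSS.
# Field names and the tie-break order are illustrative assumptions.

findings = [
    {"id": "V-1001", "tier": 2, "perimeter": False, "epss": 0.42},
    {"id": "V-1002", "tier": 1, "perimeter": True,  "epss": 0.08},
    {"id": "V-1003", "tier": 1, "perimeter": False, "epss": 0.91},
]

# Lower tier number = higher-priority adversary; perimeter exposure breaks
# ties before EPSS, matching the "fix perimeter findings first" plan.
ranked = sorted(findings,
                key=lambda f: (f["tier"], not f["perimeter"], -f["epss"]))
print([f["id"] for f in ranked])   # ['V-1002', 'V-1003', 'V-1001']
```

Note how the Tier 1 perimeter finding outranks a higher-EPSS finding that is not perimeter-exposed — severity alone would have ordered them differently.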

3. Threat Landscape Monitor

Continuously watches external threat feeds and re-prioritizes when the landscape changes.

Triggers:

Output: Real-time re-prioritization alerts with severity, affected findings, and recommended actions.

Recommended schedule: Every 6 hours for continuous monitoring.
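One kind of landscape change can illustrate the monitor's re-prioritization logic: a CVE behind an open finding appearing in the CISA KEV catalog. The data shapes and field names below are hypothetical, not the platform's schema:

```python
# Illustrative re-prioritization trigger for the Threat Landscape Monitor.
# Assumed trigger: a CVE tied to an open finding lands in the CISA KEV
# catalog. Data structures here are hypothetical.

open_findings = {"V-2201": {"cve": "CVE-2024-0001", "priority": "medium"}}
kev_additions = {"CVE-2024-0001"}   # KEV entries new since the last cycle

alerts = []
for fid, f in open_findings.items():
    if f["cve"] in kev_additions and f["priority"] != "immediate":
        f["priority"] = "immediate"   # escalate the affected finding
        alerts.append({"finding": fid, "severity": "high",
                       "action": "remediate now: CVE added to CISA KEV"})

print(alerts)
```

Run on a schedule (the recommended 6-hour cycle), this kind of check is what produces the real-time alerts described above.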

Bedrock Knowledge Base

The threat intelligence agents are backed by a Bedrock Knowledge Base containing:

The KB uses Bedrock's managed vector store — no OpenSearch or Aurora required. Cost is pay-per-query only, essentially zero when idle.

Default Adversary Profile: DoD/Diplomatic Sector

| Tier | Group | Attribution | Primary Targets |
|------|-------|-------------|-----------------|
| Tier 1 | APT29 / Cozy Bear | Russia/SVR | Government agencies, diplomatic missions, think tanks |
| Tier 1 | APT28 / Fancy Bear | Russia/GRU | Government, military, defense, diplomatic organizations |
| Tier 1 | APT41 / Winnti | China/MSS | Government, healthcare, telecom, technology |
| Tier 1 | Lazarus Group | North Korea/RGB | Government, defense, financial, cryptocurrency |
| Tier 1 | Charming Kitten / APT35 | Iran/IRGC | Government, diplomatic, academic, human rights |
| Tier 2 | Volt Typhoon | China | US critical infrastructure, government networks |
| Tier 2 | Sandworm | Russia/GRU Unit 74455 | Government, energy, critical infrastructure |
| Tier 2 | Kimsuky | North Korea | Government, think tanks, Korean peninsula policy |

Adversary profiles are configurable per deployment. Organizations can define their own priority threat groups based on their sector and mission.

Invoking Threat Intelligence Agents

Navigate to AI Agents → select the threat intelligence agent → Invoke Agent tab:

  1. Select the capability (e.g., threat_exposure_mapping)
  2. Select the target system
  3. Optionally add instructions (e.g., "Focus on perimeter-exposed systems")
  4. Click "Invoke"

Or via API:

POST /api/v1/agents/threat-exposure-mapper/invoke
{
  "capability": "threat_exposure_mapping",
  "systemId": "your-system-uuid",
  "prompt": "Map all open findings to ATT&CK techniques"
}
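The same request can be built in Python with the bearer-token authentication described in the API reference. The host is a placeholder, and the request is constructed here but not sent:

```python
# Build (but do not send) the invoke request with bearer-token auth.
# The host and token are placeholders; substitute your deployment's values.
import json
import urllib.request

payload = {
    "capability": "threat_exposure_mapping",
    "systemId": "your-system-uuid",
    "prompt": "Map all open findings to ATT&CK techniques",
}

req = urllib.request.Request(
    "https://<your-api-gateway>/api/v1/agents/threat-exposure-mapper/invoke",
    data=json.dumps(payload).encode(),
    headers={"Authorization": "Bearer <jwt>",
             "Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a real host and token
```

An `X-API-Key` header can be substituted for the `Authorization` header if you authenticate with an API key instead of a JWT.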

The Before & After

| | Before (Compliance Only) | After (Threat-Informed) |
|---|---|---|
| Prioritization | CAT I → CAT II → CAT III | By adversary relevance, EPSS, KEV, attack path |
| Question answered | "Are we compliant?" | "Are we defended against the adversaries targeting us?" |
| Remediation order | Flat severity list | Strategic defense plan with risk reduction projections |
| When landscape changes | Wait for next assessment cycle | Automatic re-prioritization with alerts |

💡 Future: GraphRAG

The current Knowledge Base uses vector search for retrieval. The production roadmap includes GraphRAG (Neptune Analytics) for relationship-aware retrieval — enabling multi-hop queries like "which open findings enable attack paths from initial access to data exfiltration for APT29?" across the full ATT&CK graph.

Chapter 15

AI-Powered ATO Automation

Six new AI agents that automate the most time-consuming parts of the ATO process.

Inheritance Resolution

Navigate to your system dashboard and click "Resolve Inheritance". The agent analyzes your deployment environment and determines which controls are inherited from your Cloud Service Provider.

For AWS GovCloud systems, this typically marks 50–70 controls as inherited or shared, jumping your ATO readiness from ~14% to ~50% in one step.

Review the results table, override any decisions you disagree with, then click "Accept All" or "Apply Selected".
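The readiness jump is simple arithmetic, assuming readiness is the share of baseline controls that are compliant, inherited, or shared. The counts below are illustrative, not from a real baseline:

```python
# Back-of-the-envelope readiness math for inheritance resolution.
# Assumption: readiness = (satisfied + inherited/shared) / total controls.
# Counts are illustrative.

total = 150
satisfied_before = 21    # ~14% before inheritance resolution
inherited = 54           # controls the agent marks inherited or shared

readiness_before = satisfied_before / total
readiness_after = (satisfied_before + inherited) / total
print(f"{readiness_before:.0%} -> {readiness_after:.0%}")   # 14% -> 50%
```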

POA&M Generator

After running gap analysis, invoke the POA&M Generator agent. It creates properly formatted POA&M items for every evidence gap, complete with weakness descriptions, severity justifications, remediation plans, and milestones.
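A generated item carries the fields named above. This sketch shows one possible shape; the field names and values are illustrative, not the platform's exact schema:

```python
# Illustrative shape of one generated POA&M item (field names assumed).
poam_item = {
    "control": "AC-2",
    "weakness_description": "No evidence of automated account review",
    "severity": "Moderate",
    "severity_justification": "Gap limits detection of stale accounts "
                              "but is mitigated by MFA on all logins",
    "remediation_plan": "Enable automated access review in IAM",
    "milestones": [
        {"description": "Pilot automated review", "due": "2026-06-01"},
        {"description": "Enforce org-wide", "due": "2026-08-01"},
    ],
}
print(sorted(poam_item))
```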

Package Validator

Before exporting your ATO package, run the Package Validator. It checks that every control has a narrative or POA&M, evidence is fresh, narratives reference real evidence, and framework requirements are met.
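Two of those checks can be sketched against a simplified control record; the real validator also verifies that narratives reference real evidence and that framework requirements are met. The record shape and the 90-day freshness window are assumptions:

```python
# Sketch of the Package Validator's completeness and freshness checks.
# Control record shape and the 90-day freshness window are assumptions.
from datetime import date, timedelta

controls = [
    {"id": "AC-2", "narrative": "Access managed via IAM roles", "poam": None,
     "evidence_date": date.today() - timedelta(days=10)},
    {"id": "SC-7", "narrative": None, "poam": None,
     "evidence_date": date.today() - timedelta(days=400)},
]

MAX_AGE = timedelta(days=90)
issues = []
for c in controls:
    if not (c["narrative"] or c["poam"]):      # every control needs one
        issues.append(f'{c["id"]}: no narrative or POA&M')
    if date.today() - c["evidence_date"] > MAX_AGE:
        issues.append(f'{c["id"]}: evidence stale')

print(issues)   # ['SC-7: no narrative or POA&M', 'SC-7: evidence stale']
```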

Assessor Simulator

The Assessor Simulator predicts what a real 3PAO assessor would flag during review. It identifies weak narratives, missing evidence, and controls that don't match their implementation claims. Fix these issues before the real assessment.

Compliance Trend Analyst

The Compliance Trend Analyst shows how your compliance posture has changed over time, predicts when you'll be ATO-ready based on current velocity, and recommends focus areas for fastest improvement.

Full ATO Orchestrator

The Full ATO Orchestrator runs the complete pipeline in one step: inheritance resolution, gap analysis, POA&M generation, package validation, and assessor simulation. It produces a comprehensive readiness report showing exactly where you stand and what to do next.
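The orchestration amounts to running the individual agents in sequence and collecting their results. `invoke_agent` below is a hypothetical callable standing in for an agent-invocation call:

```python
# Sequential sketch of the Full ATO Orchestrator's pipeline.
# invoke_agent is a hypothetical helper (agent_name, system_id) -> result.
PIPELINE = [
    "inheritance-resolver",
    "gap-analyzer",
    "poam-generator",
    "package-validator",
    "assessor-simulator",
]

def run_full_ato(system_id, invoke_agent):
    """Run each agent in order and collect results into one report."""
    report = {}
    for agent in PIPELINE:
        report[agent] = invoke_agent(agent, system_id)
    return report

# e.g. with a stub: run_full_ato("your-system-uuid", lambda a, s: "ok")
```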

The ATO Readiness Process

| Step | Action | Impact |
|------|--------|--------|
| 1 | Resolve Inheritance | 14% → ~50% (inherited controls counted as compliant) |
| 2 | Collect Evidence | Evidence mapped to remaining customer controls |
| 3 | Generate Narratives | ~50% → ~80% (AI writes implementation statements) |
| 4 | Review & Approve | ISSO approves narratives (1–2 hours) |
| 5 | Generate POA&Ms | Remaining gaps tracked with remediation plans |
| 6 | Validate Package | Verify completeness before submission |
| 7 | Simulate Assessment | Catch issues before the real assessor does |
| 8 | Export & Submit | OSCAL, Word, PDF, SAR ready for assessor |

Reference

All Features & Capabilities

A complete reference of every feature in iLAB SecureX.

Navigation

The sidebar navigation provides access to all major sections:

System-Level Pages

Within each system, the following pages are available:

API Access

Every feature in iLAB SecureX is accessible via REST API. The dashboard is a consumer of the API — not a separate system. Third-party tools, CI/CD pipelines, and GRC platforms can integrate directly.

API Base URL: https://<your-api-gateway>/v1/api/v1/

Authentication: Bearer token (JWT) or API key (X-API-Key header)

Keyboard Shortcuts & Tips