ATLAS Rationalization Platform

FAA Phase 2 - Portfolio Analysis & 6R Disposition
Secured by Amazon Cognito | AWS GovCloud


6R Disposition Distribution

Composite Score Distribution

Application Portfolio

AWS-Powered Rationalization Pipeline

End-to-end data pipeline. All processing runs in GovCloud (us-gov-west-1) with cross-partition Bedrock access. Click any service below for details.

Reference Architecture - ATLAS Rationalization Platform

Click any service node for live status. All resource IDs shown are real deployed resources.
AWS GovCloud (us-gov-west-1), account 092359260389:
  • Data pipeline: Amazon S3 (5 Excel source files, atlas-rationalization-*) → AWS Glue (ETL + scoring, atlas-etl-pipeline) → DynamoDB (scored app data, 4 tables, 160 items)
  • Request flow: user browser → S3 website (frontend SPA) → Cognito auth (JWT) → API Gateway REST API → Lambda ×3 (api-handler | bedrock-proxy | data-loader), which reads DynamoDB and pulls cross-partition credentials (bearer token) from Secrets Manager
  • CloudFormation (CDK IaC) manages all resources

Cross-partition boundary to AWS Commercial (us-east-1), account 519602868646:
  • RAG pipeline: Bedrock Knowledge Base (GSMGPSPMYV) retrieves chunks via OpenSearch Serverless vector search over 8 S3 KB docs, indexed with Titan Embeddings Text v2 (1024 dimensions); Bedrock Claude generates the grounded answer, which returns to the user
  • IAM user atlas-bedrock-svc scopes the commercial-side access

Flow legend: request flow | data read (Lambda → DynamoDB) | cross-partition call (GovCloud → Commercial) | response (answer returns to user)
1. Data Ingestion

Amazon S3

5 FAA-provided Excel data extracts uploaded to S3 bucket (atlas-rationalization-092359260389/raw/):
CMDB Extract (101 CIs) | Cloud Export (17 resources) | Business Context (19 apps) | Security/Compliance (20 apps) | Interface Inventory (42 interfaces)

Why S3?

FedRAMP-authorized object storage in GovCloud. Provides durable, encrypted-at-rest storage for source data. Serves as the single source of truth — all downstream processing reads from these immutable source files.

2. Data Processing & Scoring Engine

AWS Lambda | Amazon DynamoDB

atlas-data-loader Lambda (Python 3.12) parses all 5 Excel files, merges records across sources by Application ID, and computes 4-dimension scores using deterministic rule-based logic:

  • Mission Value (30%): Business criticality, user count, system-of-record status, regulatory driver, workaround availability
  • Technical Fit (25%): Language/framework currency, hosting model, support status, DB modernity
  • Cost Efficiency (20%): Cloud spend, DB complexity, server count, interface complexity
  • Risk Posture (25%): ATO status/expiration, POA&M items, continuous monitoring, critical interfaces

6R dispositions (Retain/Rehost/Replatform/Refactor/Replace/Retire) assigned via decision tree logic. Results stored in 4 DynamoDB tables with GSIs for fast querying.
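As a rough illustration, the weighted scoring and disposition assignment described above might look like the sketch below. The weights match the stated methodology, but the decision-tree thresholds and branches are simplified assumptions, not the deployed rules:

```python
# Illustrative sketch only: weights are from the methodology above, but the
# decision-tree thresholds are invented for demonstration.
WEIGHTS = {"mission_value": 0.30, "technical_fit": 0.25,
           "cost_efficiency": 0.20, "risk_posture": 0.25}

def composite_score(scores):
    """Weighted sum of the four 0-100 dimension scores."""
    return round(sum(scores[dim] * w for dim, w in WEIGHTS.items()), 1)

def assign_6r(scores):
    """Toy decision tree; the real logic also inspects ATO status,
    hosting model, and other source fields."""
    c = composite_score(scores)
    if scores["mission_value"] < 30:
        return "Retire"
    if scores["technical_fit"] < 40:
        return "Replace" if c < 50 else "Refactor"
    if c >= 75:
        return "Retain"
    return "Rehost" if scores["cost_efficiency"] >= 60 else "Replatform"

app = {"mission_value": 80, "technical_fit": 60,
       "cost_efficiency": 40, "risk_posture": 70}
print(composite_score(app), assign_6r(app))  # 64.5 Replatform
```

Because the rules are deterministic, the same inputs always produce the same score and disposition, which is what makes the "tool-calculated, not AI" disclosure verifiable.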

Why Lambda + DynamoDB?

Lambda provides serverless compute — no infrastructure to manage, scales to zero when idle. DynamoDB offers single-digit-ms reads with pay-per-request billing. GSIs on sourceAppId, targetAppId, appKey, and ciType enable fast lookups for dependency analysis. Scoring is deterministic (not AI-generated) — satisfying the disclosure requirement.
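A paginated query against one of these GSIs could look like the following sketch. The index name is an assumption inferred from the attribute list above, and `table` stands in for a boto3 DynamoDB `Table` resource (or any object exposing the same `query` interface):

```python
def interfaces_for_app(table, app_id):
    """Return every interface whose sourceAppId matches app_id, following
    DynamoDB pagination (LastEvaluatedKey) until the result set is exhausted."""
    items = []
    kwargs = {
        "IndexName": "sourceAppId-index",  # assumed GSI name
        "KeyConditionExpression": "sourceAppId = :a",
        "ExpressionAttributeValues": {":a": app_id},
    }
    while True:
        resp = table.query(**kwargs)
        items.extend(resp["Items"])
        if "LastEvaluatedKey" not in resp:
            return items
        kwargs["ExclusiveStartKey"] = resp["LastEvaluatedKey"]
```

With boto3 this would be called as `interfaces_for_app(boto3.resource("dynamodb", region_name="us-gov-west-1").Table("atlas-interfaces"), "APP-001")`, where the table name is hypothetical.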

3. Knowledge Base & Vector Indexing

Amazon Bedrock Knowledge Base | Amazon OpenSearch Serverless | Amazon Titan Embeddings

All 5 data extracts + scoring methodology converted to structured markdown documents and indexed into a Bedrock Knowledge Base. OpenSearch Serverless vector collection stores embeddings generated by Amazon Titan Embed Text v2. Enables semantic retrieval — the AI can find relevant data points even when questions don't use exact field names.

Why RAG (Retrieval-Augmented Generation)?

Instead of sending the entire dataset in every prompt, RAG retrieves only the most relevant records for each query. This enables: (1) More focused answers citing specific data points, (2) Scalability to the full 200+ app portfolio, (3) Traceability — every AI response shows which source documents were retrieved.
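In a RAG call, the Lambda would first invoke `retrieve` on the `bedrock-agent-runtime` client against the knowledge base, then fold the returned chunks into the prompt. A sketch of that folding step, written against the documented shape of the `retrieve` response (the `[source: …]` formatting is an assumption; the field names follow the Bedrock API):

```python
def build_context(retrieve_response, max_chunks=5):
    """Flatten a Bedrock Knowledge Base retrieve() response into a prompt
    context block, keeping each chunk's S3 source URI for citation."""
    blocks = []
    for result in retrieve_response.get("retrievalResults", [])[:max_chunks]:
        text = result["content"]["text"]
        uri = (result.get("location", {})
                     .get("s3Location", {})
                     .get("uri", "unknown source"))
        blocks.append(f"[source: {uri}]\n{text}")
    return "\n\n".join(blocks)
```

Keeping the source URI alongside each chunk is what lets every AI response show which documents were retrieved.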

4. AI-Powered Analysis (Cross-Partition)

Amazon Bedrock | Claude Opus | AWS Secrets Manager

atlas-bedrock-proxy Lambda performs cross-partition Bedrock calls from GovCloud to commercial AWS using service-specific credentials with bearer token authentication. Credentials stored in Secrets Manager and rotated automatically.

Agentic RAG workflow: Query → Retrieve relevant documents from KB → Augment prompt with retrieved context + full portfolio scores → Generate analysis with Claude Opus → Cache response in DynamoDB (TTL: 24h)

Why Cross-Partition?

Amazon Bedrock with Anthropic Claude is not available in the GovCloud partition. The cross-partition pattern uses service-specific credentials (bearer token auth) to securely call Bedrock in commercial us-east-1 from a GovCloud Lambda. All credentials are encrypted via Secrets Manager. Only the bedrock:InvokeModel and bedrock:Converse actions are permitted, keeping the blast radius minimal.
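A minimal sketch of the proxy's core path, with the Secrets Manager and Bedrock clients injected so the cross-partition wiring is explicit. The secret name is a placeholder, and the model ID is left as a parameter rather than guessed:

```python
import os

def invoke_claude(prompt, secrets_client, bedrock_client, model_id):
    """secrets_client: a Secrets Manager client in us-gov-west-1;
    bedrock_client: a bedrock-runtime client in commercial us-east-1."""
    token = secrets_client.get_secret_value(
        SecretId="atlas/bedrock-bearer-token")["SecretString"]  # assumed name
    # Bedrock API-key (bearer token) auth reads this variable; in a real
    # Lambda it must be set before the bedrock-runtime client is constructed.
    os.environ["AWS_BEARER_TOKEN_BEDROCK"] = token
    resp = bedrock_client.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]
```

The response text would then be cached in DynamoDB with a 24-hour TTL attribute before being returned to the caller, as described above.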

5. API Layer

Amazon API Gateway | AWS Lambda

atlas-api-handler Lambda serves RESTful endpoints for the frontend:
GET /api/applications | GET /api/interfaces | GET /api/scores/summary | POST /api/analyze | GET /api/deliverables

Data endpoints serve directly from DynamoDB (sub-100ms). Analysis endpoint routes to Bedrock proxy with RAG context injection.
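A client-side call to one of the data endpoints could look like this stdlib-only sketch; the API base URL is a placeholder, and the JWT would come from a prior Cognito login (not shown):

```python
import json
import urllib.request

def build_request(api_base, path, id_token):
    """Attach the Cognito-issued JWT as a Bearer token."""
    return urllib.request.Request(
        f"{api_base}{path}",
        headers={"Authorization": f"Bearer {id_token}"},
    )

def get_scores_summary(api_base, id_token, opener=urllib.request.urlopen):
    """GET /api/scores/summary and decode the JSON body."""
    req = build_request(api_base, "/api/scores/summary", id_token)
    with opener(req) as resp:
        return json.loads(resp.read())
```

The `opener` parameter is only there to make the sketch testable without a network call; in practice the default `urlopen` is used.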

Why API Gateway + Lambda?

HTTP API (v2) provides low-latency request routing with built-in CORS. Lambda functions scale automatically and cost nothing when idle. Separation of data queries (fast, DynamoDB-backed) from AI analysis (Bedrock-backed) ensures the demo UI is responsive.

6. Deliverable Generation

AWS Lambda | Amazon S3

7 deliverables generated programmatically from the scored data:

  1. Inventory & Scoring Workbook — Consolidated inventory with 4-dimension scoring and methodology
  2. Dependency Diagrams — 20×20 interface matrix, data flow by domain, modernization sequencing
  3. 6R Disposition List — Disposition with data-traced rationale + 6 consolidation opportunities
  4. Outcome Matrix — Per-app cost/performance/resilience/security outcomes + portfolio KPIs
  5. Risk Register — 12 scored risks with mitigation strategies and heatmap
  6. Cutover & Rollback Approach — 4-wave cutover with rollback triggers and T&E requirements
  7. Schedule with Assumptions — 36-activity master schedule (18 months) + 14 stated assumptions

Data Traceability

Every recommendation in every deliverable cites specific data points: App IDs, CI IDs, Interface IDs, ATO dates, FISMA levels, and POA&M counts. During the 45-minute Q&A, evaluators can trace any recommendation back to the source data.

7. Presentation & Authentication

Amazon S3 | Amazon Cognito

Static web application hosted on S3 with Cognito user pool authentication. Dashboard, scoring matrix, interactive dependency graph, AI chat interface, and deliverable downloads — all served from GovCloud.

Security

Cognito provides user authentication with email-based login. All API calls authenticated. S3 website hosting in GovCloud with HTTPS. No data leaves the GovCloud boundary except cross-partition Bedrock calls (encrypted, bearer-token-authenticated).

Architecture Summary

10 AWS services used: S3, Lambda, DynamoDB, API Gateway, Cognito, Secrets Manager, Bedrock, OpenSearch Serverless, Titan Embeddings, CloudFormation (CDK)
2 AWS partitions: GovCloud (compute, data, frontend) + Commercial (Bedrock AI, Knowledge Base)
7 deliverables generated: all tool-calculated with data traceability back to FAA-provided source files

Scoring Matrix - All Applications Ranked by Composite Score

Weights: Mission Value 30% | Technical Fit 25% | Cost Efficiency 20% | Risk Posture 25%. All scores tool-calculated (deterministic rules).

Rank | App | Name | Mission Value | Technical Fit | Cost Efficiency | Risk Posture | Composite | Disposition

Application Dependency Graph with CSP Boundaries

What this shows: Every application and every data connection between them, grouped by where they run (AWS, Azure, On-Prem, SaaS). Use this to understand which apps depend on each other before planning any migration.

How to read it: Each box is an app. Lines are data flows (APIs, shared databases, batch files). Thicker/redder lines = more critical connections. Dashed lines cross cloud boundaries. Click any app to see its details.

Legend: Hosting: AWS | Azure | SaaS | Hybrid | On-Prem. Criticality: High | Critical | Medium | Low.

AI-Powered Portfolio Analysis (RAG)

How it works: Your question is processed through an agentic RAG pipeline:
1. Query sent to Bedrock Knowledge Base which retrieves relevant records from the indexed portfolio data
2. Retrieved context + scored portfolio data augmented into prompt
3. Claude Opus generates analysis with mandatory data citations
4. Scoring and 6R dispositions shown are tool-calculated (deterministic, not AI)

Generated Deliverables

All 7 deliverables generated from FAA-provided data using the AWS pipeline above.


Upload Source Data

Upload updated Excel files to replace source data. After uploading, run the Glue ETL pipeline to reprocess scores and dispositions.

CMDB Extract
cmdb_extract.xlsx - 101 configuration items
Cloud Export
cloud_export.xlsx - Cloud infrastructure resources
Business Context
business_context.xlsx - Capabilities and user data
Security & Compliance
security_compliance.xlsx - FISMA, ATO, POA&M data
Interface Inventory
interface_inventory.xlsx - 42 application interfaces

ETL Pipeline (AWS Glue)

Run the Glue ETL job to reprocess uploaded data, recompute scores and 6R dispositions, and reload DynamoDB tables.

Pipeline steps:
1. Read 5 Excel files from S3 (raw/ prefix)
2. Parse and merge records by Application ID
3. Compute 4-dimension scores (Mission Value, Technical Fit, Cost, Risk)
4. Assign 6R dispositions via decision tree
5. Load results to DynamoDB (4 tables)
Engine: AWS Glue 4.0 (Spark), 2x G.1X workers
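The parse-and-merge step (2) can be sketched in plain Python; in the deployed job this runs on Glue/Spark, and the field names here are illustrative:

```python
def merge_by_app_id(*sources):
    """Merge records from multiple extracts keyed on 'app_id'.
    Earlier sources win when a field appears in more than one extract."""
    merged = {}
    for source in sources:
        for record in source:
            row = merged.setdefault(record["app_id"], {})
            for field, value in record.items():
                row.setdefault(field, value)
    return merged

# Hypothetical rows standing in for parsed cmdb_extract / business_context data.
cmdb = [{"app_id": "APP-001", "os": "RHEL 8"}]
biz = [{"app_id": "APP-001", "criticality": "High"},
       {"app_id": "APP-002", "criticality": "Low"}]
portfolio = merge_by_app_id(cmdb, biz)
```

The merged rows then feed the scoring and disposition steps before being loaded into the four DynamoDB tables.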

Recent Runs

No runs yet

Import AWS Migration Evaluator Results

Upload the ME Data Export XLSX to enrich ATLAS with EC2 right-sizing, pricing models, Graviton eligibility, and RDS recommendations. Each app detail will show ME cost analysis alongside ATLAS scores.

ME Data Export File
File: "DOT - FAA ... - AWS Migration Data Export.xlsx" from console.tsologic.com
What this does: Uploads the ME export to S3, parses all 3 sheets (Shared Tenancy, RDS, Graviton), maps server-level recommendations back to your 20 apps via CMDB CI names, and stores per-app cost summaries in DynamoDB. App detail modals will then show ME data alongside ATLAS scores.