Amazon's AI Full-Stack Breakdown 2026: Deep Pockets, Custom Chips, Global Infrastructure — So Why Is It the Least Visible Player in the AI Era?


zhuermu · · 22 min read
aws, agenticAI, bedrock, agentcore

Three Questions That Keep Bugging Me

Over the past two years, every time I open tech news, the headlines belong to the same names: OpenAI dropped another model, Google Gemini topped another benchmark, Anthropic’s Claude 4 series sent developers into a frenzy, Microsoft Copilot landed another enterprise deal…

But what about Amazon?

As one of the “Magnificent Seven,” the world’s largest cloud computing company, and a behemoth pulling in over $50 billion in annual profit — Amazon’s presence in the AI era is wildly disproportionate to its scale.

That leaves me with three nagging questions:

First, has Amazon fallen behind in the AI era? Google has Gemini, sparking global buzz with every update. Microsoft has its exclusive OpenAI partnership (though Amazon recently joined OpenAI’s $50 billion funding round). Every Anthropic Claude 4 release is a developer celebration. But Amazon in AI? Rarely makes headlines.

Second, why does every move Google makes become a global talking point, while Amazon is nearly silent? At re:Invent 2025, AWS announced custom 3nm chips, nearly a hundred models, multiple agents… These are substantial releases by any measure, yet outside the AWS technical community, the discussion was virtually zero.

Third — and this puzzles me most — Amazon has the cash, the chips, and the world’s largest cloud infrastructure. Why can’t it produce a breakout foundation model? In China, beyond the tech giants, even startups like DeepSeek, Moonshot AI, and MiniMax have trained impressively capable models. Amazon invests orders of magnitude more, yet Amazon Nova is practically invisible. What’s missing?

With these three questions in mind, I spent considerable time studying AWS’s current AI full-stack architecture. Let’s first look at what Amazon has actually built, then circle back to answer these questions.


The Big Picture: A Six-Layer Architecture

Before diving into details, let’s establish a mental model. AWS’s AI service ecosystem can be abstracted into six layers:

AWS AI Full-Stack Architecture 2026

This layering isn’t AWS’s official taxonomy — it’s a logical framework to aid understanding. In practice, enterprises can enter at any layer. Most start at Layers 3–4; only a few ever touch the chip and training infrastructure at the bottom.

Let’s unpack each layer from the bottom up.


Layer 1: AI Chips & Compute Infrastructure

Core question: AI compute is absurdly expensive. What can be done?

Training a large model can require thousands of GPUs, and inference costs scale linearly with user volume. NVIDIA GPUs are powerful but scarce and pricey. AWS's strategy: build custom chips and trade scale for cost.

The Nova Forge SDK supports CPT (continued pre-training), SFT (supervised fine-tuning), DPO (direct preference optimization), and RFT (reinforcement fine-tuning), with both LoRA and full-parameter training.

Fine-Tuning on Bedrock

If you don’t need Nova Forge’s heavy-duty approach, Bedrock itself supports model fine-tuning. The reinforcement fine-tuning (RFT) feature launched in December 2025 even provides an OpenAI-compatible API — if you were using OpenAI’s fine-tuning API, migrating to Bedrock is nearly zero-cost.

In one line: S3 Vectors crushes vector storage costs to the floor; Nova Forge lets enterprises train from frontier model checkpoints for the first time — these two are the standout innovations at this layer.


Layer 3: Foundation Model Platform — Amazon Bedrock

Core question: There are too many models. How do you choose and use them?

Bedrock is the hub of AWS’s AI strategy. It’s not a model — it’s a model platform, providing unified API access to over 100 serverless models from multiple providers.

Model Ecosystem

As of early 2026, model providers on Bedrock include:

Third-party models:

  • Anthropic Claude (Claude Opus, Sonnet, Haiku)
  • Meta Llama (Llama 4 series — Scout / Maverick)
  • Mistral (Mistral Large 3, Ministral 3 series)
  • DeepSeek
  • Google Gemma 3
  • OpenAI (GPT series open models)
  • MiniMax
  • Qwen
  • NVIDIA models
  • Moonshot

Amazon Nova 2 (first-party models):

  • Nova 2 Lite: Fast inference model, 1M token context window, built-in code interpreter, web grounding, remote MCP tool support. Great for everyday tasks, exceptional value ($0.035/$0.14 per 1M tokens)
  • Nova 2 Pro: Complex reasoning model for tasks requiring deep thinking
  • Nova 2 Sonic: Voice model for speech interaction scenarios
  • Nova Premier: Most capable model, 1M token context, excels at complex reasoning and multi-step planning; also the best “teacher model” for model distillation
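To make "exceptional value" concrete, here is a quick cost sketch. It assumes the quoted $0.035/$0.14 pair is per-million input and output token pricing respectively (the article lists the two figures without labels, so that pairing is an assumption):

```python
def nova_lite_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate a Nova 2 Lite call's cost from the article's quoted prices.

    Assumes $0.035 per 1M input tokens and $0.14 per 1M output tokens
    (the input/output pairing is an assumption, not confirmed pricing).
    """
    return input_tokens / 1_000_000 * 0.035 + output_tokens / 1_000_000 * 0.14

# A typical chat turn: 2,000 tokens in, 500 tokens out.
print(f"${nova_lite_cost_usd(2_000, 500):.6f}")  # → $0.000140
```

At those rates, even a million such calls stays in the low hundreds of dollars, which is the whole point of the "good enough, cheap" positioning.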

Why Bedrock Instead of Calling APIs Directly?

You might ask: can’t I just call Anthropic’s or OpenAI’s API directly? Of course you can. But Bedrock’s value lies in:

  1. Unified interface: Switching models requires changing one parameter, not rewriting code
  2. Enterprise-grade security: Data stays within AWS, with VPC, IAM, encryption, and compliance support
  3. AWS ecosystem integration: Knowledge Bases, Agents, Guardrails — all work out of the box
  4. Cost optimization: Pay-as-you-go, batch inference, provisioned throughput, and other billing models
  5. Model distillation: Use Nova Premier as a teacher model to distill smaller, faster custom models
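Point 1 can be sketched with Bedrock's unified Converse request shape. The helper function is my own illustration (not part of the SDK), and the model IDs are placeholders, not real identifiers:

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build a request for Bedrock's unified Converse API.

    Switching providers means changing model_id only -- the message shape,
    inference config, and response handling stay identical.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# Same code path, two providers -- only the ID differs (placeholder IDs):
claude_req = build_converse_request("anthropic.claude-example", "Summarize last month's sales.")
nova_req = build_converse_request("amazon.nova-lite-example", "Summarize last month's sales.")

# Actually sending it requires AWS credentials and a real model ID:
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# reply = client.converse(**claude_req)
# print(reply["output"]["message"]["content"][0]["text"])
```

The design choice worth noting: because the request body is provider-agnostic, an A/B test across models is a one-line config change rather than an integration project.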

In one line: Bedrock is AWS’s “model supermarket” — 100+ models to choose from, unified API access, enterprise-grade security.


Layer 4: Agent Frameworks & Runtime

Core question: You have models. How do you build AI applications that actually do things?

A model alone is just a “brain.” To solve real business problems, you need memory, tool calling, safety guardrails, knowledge retrieval, and more. This layer is the infrastructure for building agents — not the agents themselves, but the tools and frameworks for creating them.

Amazon Bedrock Agents: Giving Models the Ability to Act

Amazon Bedrock Agents transform foundation models from question-answerers into autonomous task executors. They leverage FM reasoning to decompose user requests into multiple steps, call APIs and data sources, and complete complex tasks end-to-end.

Example: A user says “Pull last month’s sales data, generate a report, and send it to the team.” The Agent automatically decomposes this into: query database → generate report → call email API to send. No human intervention required.
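Conceptually, the loop behind that example looks like the sketch below. This is a hand-rolled illustration of decompose-then-execute, not Bedrock's actual implementation, and all three tool functions are hypothetical stand-ins:

```python
# Toy "plan then execute" loop. In real Bedrock Agents, the foundation
# model produces the plan; here it is hardcoded to show the shape.

def query_database(period: str) -> list[dict]:
    """Hypothetical data-source tool."""
    return [{"region": "EMEA", "revenue": 120_000}, {"region": "APAC", "revenue": 95_000}]

def generate_report(rows: list[dict]) -> str:
    """Hypothetical report tool."""
    total = sum(r["revenue"] for r in rows)
    return f"Sales report: {len(rows)} regions, total revenue ${total:,}"

def send_email(to: str, body: str) -> str:
    """Hypothetical email tool (no real delivery)."""
    return f"sent to {to}: {body}"

# The plan a model might produce for the request above:
plan = [
    ("query_database", {"period": "last_month"}),
    ("generate_report", None),                     # consumes the previous step's output
    ("send_email", {"to": "team@example.com"}),
]

result = None
for tool_name, args in plan:
    if tool_name == "query_database":
        result = query_database(args["period"])
    elif tool_name == "generate_report":
        result = generate_report(result)
    elif tool_name == "send_email":
        result = send_email(args["to"], result)

print(result)  # → sent to team@example.com: Sales report: 2 regions, total revenue $215,000
```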

Strands Agents SDK: AWS’s Open-Source Agent SDK

In May 2025, AWS open-sourced the Strands Agents SDK, supporting Python and TypeScript. It follows a model-driven design philosophy — instead of hardcoding task flows, the LLM decides how to use tools and plan steps.

A few lines of code create an Agent, with support for complex multi-Agent orchestration (v1.0 shipped July 2025). AWS internal services like Amazon Q Developer and AWS Glue already use it. Strands and AgentCore are complementary: Strands is the development framework, AgentCore is the production runtime.
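The model-driven idea can be sketched roughly as follows. The commented lines follow the shape of Strands' published quickstart, but treat the exact API and package name as assumptions; the tool itself is my own toy example in plain Python:

```python
# pip install strands-agents   (package name assumed)
# Running the agent needs AWS credentials, so those lines stay commented.
# from strands import Agent, tool

# @tool
def word_count(text: str) -> int:
    """A trivial tool; in Strands, the LLM -- not your code -- decides when to call it."""
    return len(text.split())

# agent = Agent(tools=[word_count])   # model-driven: no hardcoded task flow
# agent("How many words are in 'hello agentic world'?")

print(word_count("hello agentic world"))  # → 3
```

The contrast with traditional frameworks: there is no state machine or pipeline definition here; the planning lives in the model.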

Amazon Nova Act SDK: Browser Automation Agent Framework

Amazon Nova Act is a Python SDK + dedicated model combo that enables developers to build browser automation agents. It’s not a turnkey product — it’s a framework for writing agents that autonomously operate browsers to complete UI workflows (filling forms, navigating, extracting information, etc.).

Nova Act’s differentiator is vertical integration: the model, orchestrator, tools, and SDK are trained together rather than stitched together. This achieves ~90% reliability in browser automation scenarios — significantly higher than general-purpose agent frameworks. GA in December 2025, with IDE plugin support and AgentCore Browser compatibility.

Amazon Bedrock AgentCore: The “Operating System” for Agents

If Bedrock Agents is about “building an agent,” AgentCore is about “operating an agent.” Launched in 2025, it’s a new agentic platform that addresses the full spectrum of engineering challenges from development to production:

Memory System:

  • Short-term memory: In-session context management
  • Long-term memory: Cross-session user preferences and knowledge accumulation
  • Episodic memory: Records the agent’s reasoning process, actions, and outcomes, enabling it to learn from experience. This isn’t just “remembering conversations” — it’s remembering “how I solved this problem”
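What an episodic memory entry records can be sketched as a simple structure. This is my own illustration of the concept, not AgentCore's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One solved task: not just the conversation, but how it was solved."""
    goal: str
    steps: list[str] = field(default_factory=list)  # reasoning and actions taken
    outcome: str = ""
    succeeded: bool = False

memory: list[Episode] = []
memory.append(Episode(
    goal="refund order #1234",
    steps=["looked up order", "checked refund policy", "issued refund via API"],
    outcome="refund of $42 issued",
    succeeded=True,
))

# Facing a new refund task, the agent retrieves similar successful episodes
# and can reuse the step sequence instead of reasoning from scratch:
similar = [e for e in memory if e.succeeded and "refund" in e.goal]
print(len(similar))  # → 1
```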

Security & Control:

  • Gateway: Unified entry point for agent access to tools and data
  • Cedar Policy: Fine-grained permission control using the Cedar policy language. Example: this agent can read customer data but not modify it; it can issue refunds but not exceeding $100
  • Quality evaluation: Automated assessment of agent output quality

Code Interpreter: Lets agents execute code in a secure sandbox for data analysis, visualization, and similar tasks.

Amazon Bedrock Knowledge Bases: Fully Managed RAG

RAG (Retrieval-Augmented Generation) is one of the most practical AI application patterns today — letting models answer questions based on your private data rather than relying solely on training-time knowledge.

Bedrock Knowledge Bases fully manages the entire RAG pipeline:

  • Drop your documents into S3
  • Automatic chunking, vector embedding generation, and indexing
  • Automatic retrieval of relevant content at query time, injected into the model’s context
  • Every answer includes source citations

Supports multiple vector storage backends (S3 Vectors, OpenSearch, Aurora pgvector, etc.) and structured data queries.
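The pipeline Knowledge Bases automates can be sketched end to end. The bag-of-words "embedding" below is a toy stand-in for a real embedding model, so only the pipeline shape, not the retrieval quality, is representative:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: word counts. Real pipelines use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. "Documents dropped into S3", already chunked:
chunks = [
    "Refunds are processed within 5 business days.",
    "Shipping to EMEA takes 7 to 10 days.",
    "Premium support is available 24/7 for enterprise plans.",
]
index = [(c, embed(c)) for c in chunks]          # 2. chunk, embed, index

# 3. At query time, retrieve the most relevant chunk...
query = "how long do refunds take"
best = max(index, key=lambda item: cosine(embed(query), item[1]))[0]

# 4. ...and inject it into the model's context, with the source cited:
prompt = f'Answer using this source: "{best}"\n\nQuestion: {query}'
print(best)  # → Refunds are processed within 5 business days.
```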

Amazon Bedrock Guardrails: Six Lines of Defense for AI Safety

Deploying AI in production means safety is non-negotiable. Amazon Bedrock Guardrails provides six safety policies:

  1. Content filtering: Filters harmful content (text and images), including prompt injection attack protection
  2. Topic classification: Restricts the model to specific topics, preventing off-topic responses
  3. Sensitive information protection: Automatic PII (personally identifiable information) detection and redaction
  4. Contextual grounding checks: Verifies that model responses are grounded in provided context, reducing hallucinations
  5. Automated reasoning checks: Uses logical reasoning to verify consistency of model outputs
  6. Prompt attack protection: Detects and blocks prompt injection attacks

These policies can be configured independently or combined. For highly regulated industries like finance and healthcare, this is a prerequisite for deploying AI applications in production.
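To make policy 3 concrete, here is a toy PII redaction pass. The regexes only sketch the idea; Bedrock Guardrails uses managed detectors, not patterns like these:

```python
import re

# Illustrative patterns -- deliberately simple, far from production-grade detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders, like a guardrail's mask mode."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-867-5309, SSN 123-45-6789."))
# → Contact [EMAIL] or [PHONE], SSN [SSN].
```

A managed guardrail does this on both the user's input and the model's output, so sensitive values never round-trip through logs or responses.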

In one line: This layer is the agent-building toolbox — Strands Agents and Nova Act SDK are development frameworks, AgentCore is the production runtime, Knowledge Bases provide memory, and Guardrails provide safety rails.


Layer 5: Ready-to-Use AI Services

Core question: Don’t want to deal with models and frameworks — just want to use AI capabilities directly?

Absolutely. This layer consists of AWS’s long-standing pre-trained AI services. Call an API and go — no machine learning expertise required, no agent assembly needed.

Computer Vision — Amazon Rekognition: Image and video analysis supporting facial recognition, object detection, content moderation, and text recognition. Used in security surveillance, identity verification, and content moderation.

Document Intelligence — Amazon Textract: More than OCR. It understands document layouts, automatically extracts key-value pairs from tables and forms, and handles handwriting. A productivity powerhouse for insurance claims, financial statements, and contract review.

Speech Intelligence — Amazon Transcribe / Amazon Polly: Transcribe converts speech to text (real-time transcription, multi-language, speaker identification); Polly converts text to speech. Together they form a complete voice AI assistant pipeline.

Healthcare — AWS HealthScribe: Automatically transcribes doctor-patient conversations and generates structured clinical notes, with source attribution on every line for physician review. HIPAA compliant. A textbook example of AI landing in a vertical industry.

In one line: These services are the “plug-and-play” AI layer — no model expertise needed, no framework assembly required. Just call an API for vision, speech, document, and other AI capabilities.


Layer 6: Agent Products

Core question: What do AI agents actually look like as products?

Layer 4 provides the tools for building agents. Layer 6 is where the finished agent products live. Every service here is essentially an AI Agent that can autonomously complete specific tasks.

Kiro: A Spec-Driven AI IDE

Kiro is the AI-native IDE AWS launched in July 2025, built on the VS Code core. Its key differentiator from other AI coding tools (Cursor, GitHub Copilot) is Spec-Driven Development.

Most AI coding tools work like this: you give a prompt, it generates code. Kiro works differently — you give a prompt, and it first generates a requirements document, design plan, and task list, then implements step by step following this structured spec.

This sounds like an extra step, but in real projects, this approach produces noticeably higher-quality code because the AI “thinks it through” before writing.

Core features:

  • Spec workflow: prompt → requirements → design → task list → step-by-step implementation
  • Agent Hooks: Automatically trigger AI actions based on IDE events (file saves, code commits, etc.)
  • Custom Sub-agents: Create specialized AI assistants for specific tasks
  • Steering files: Provide project context and coding standards to the AI via Markdown files

Kiro CLI is Kiro’s command-line counterpart, extending AI capabilities to the terminal. You can manage AWS resources, MSK clusters, and DevOps tasks using natural language, with MCP protocol support for extending the toolset. The January 2026 update added Web Fetch permission controls, custom Agent shortcuts, and enhanced diff views. IDE + CLI — two tracks covering the developer’s complete workflow.

Kiro has been iterating continuously through 2026, evolving from its initial Preview into a fairly mature development tool.

AWS DevOps Agent: An Autonomous SRE Teammate

Previewed in December 2025, GA in March 2026. DevOps Agent is an autonomous operations agent positioned as an “always-on SRE teammate.”

What it does:

  • Incident response: Automatic event classification, alert correlation, root cause identification, and remediation recommendations
  • Reduced MTTR: Dramatically shortens mean time to recovery through automated investigation and remediation
  • Cross-environment support: Not just AWS — supports multi-cloud and hybrid environments
  • Collaboration integration: Pushes investigation results and remediation suggestions through Slack, ServiceNow, PagerDuty, and other channels
  • Kubernetes-native: Deep EKS integration, understands K8s-level issues
  • Cross-cloud collaboration: Can even coordinate with Azure SRE Agent for cross-cloud incident investigation

This isn’t a simple alert aggregation tool. It genuinely “investigates” problems — examining logs, analyzing metrics, tracing call chains, then delivering well-reasoned root cause analysis.

AWS Security Agent: Autonomous Security Testing

Also announced at re:Invent 2025 (Preview), Security Agent is an AI-driven autonomous security analyst. It works like a human penetration tester:

  1. Attack surface mapping: Analyzes application documentation and source code to identify potential attack surfaces
  2. Vulnerability discovery: Attempts to exploit vulnerabilities using real attack payloads and attack chains
  3. Verification: Generates reproducible attack paths proving vulnerabilities actually exist (not false positives)
  4. Remediation guidance: Provides developer-friendly fix recommendations

It uses a multi-Agent architecture: one Agent maps the attack surface, another analyzes business logic vulnerabilities, and a third verifies findings and prioritizes them. This division of labor handles far more complex security issues than traditional scanning tools.

Key value: Full-lifecycle security validation from design to deployment. Not waiting until code is in production to run security scans, but continuously validating throughout development.

Amazon Quick Suite: Enterprise AI Workspace

Amazon Quick Suite went GA in October 2025 as an enterprise-grade AI workspace. It evolved from Amazon QuickSight but has far outgrown the BI tool category. It’s an agentic workspace integrating research, analysis, and automation into a single interface.

Five core capabilities:

  • Quick Index: Connect and contextualize enterprise data, supporting 40+ data sources
  • Quick Research: Deep research capabilities, simultaneously searching internal and external data sources
  • Quick Flows: Simple automation workflows
  • Quick Automate: Multi-step agentic automation that autonomously completes complex tasks
  • Quick Sight: BI analytics and data visualization (the original QuickSight capabilities)

Quick Suite’s positioning: enabling everyone in the enterprise (not just technical staff) to interact with data through natural language, gain insights, and take action.

In one line: Kiro/CLI is the developer’s AI teammate, DevOps Agent is the ops team’s AI teammate, Security Agent is the security team’s AI teammate, Quick Suite is the business user’s AI teammate — AI permeating every role in enterprise operations.


Six Layers Unpacked, But Most Enterprises Won’t Use All Six

Let’s be honest — most enterprises will never touch the majority of these services.

Trainium3 and HyperPod are for companies like Anthropic and Stability AI, not typical enterprises. Nova Forge targets large institutions with unique data assets that need domain-specific models — finance, healthcare, legal.

Most enterprises’ real starting point is Layer 3: Bedrock.

Call an API, pick a model, plug in your business logic. That’s the actual starting point for 90% of enterprise AI projects, and where you’ll see results fastest.

Then work your way down as problems arise:

  • Model answers are inaccurate, private data isn’t connected → Add Knowledge Bases for RAG
  • Need to automate multi-step tasks → Use Bedrock Agents
  • In production but worried about safety and compliance → Configure Guardrails
  • Scale grows, cost pressure mounts → Consider Inferentia2 or Nova models to replace pricier third-party models
  • Have unique data and want a custom model → Then look at Nova Forge or Bedrock fine-tuning

This isn’t about “choosing a path” — it’s about solving problems as they arise. AWS’s system essentially provides the right tool at every stage. You don’t need to plan which layers to use upfront; you just need to know what each layer solves.


Now, Back to the Three Questions

The six-layer architecture is laid out. AWS has been busy — custom chips, model platforms, agent frameworks, developer tools, covering the full stack from silicon to applications. But none of this sidesteps the three questions from the beginning.

Question 1: Has Amazon Fallen Behind in the AI Era?

Behind in the “AI narrative.” Not behind in the “AI business.”

On the narrative side: who owned the AI headlines in 2024–2025? OpenAI dropped new models every few months — GPT-4o, o1, o3 — each one a global trending topic. Google’s Gemini went from chasing to leading, igniting communities worldwide with every update. Every Anthropic Claude 4 release was a developer celebration; Claude Code alone changed how a lot of people write software. Microsoft Copilot became nearly synonymous with “AI for work.” Even NVIDIA and Apple have clear AI narratives — one is the “AI picks-and-shovels seller,” the other is the “on-device AI definer.”

Amazon? When was the last time an Amazon AI announcement genuinely excited you? Nova model launch? Honestly, most people can’t even name how many Nova versions exist. re:Invent 2025? The conference shipped a lot (exactly what we unpacked above), but outside the AWS technical community, the discussion didn’t come close to a single OpenAI product event.

The reason isn’t that AWS hasn’t done anything — it’s that Amazon builds infrastructure, and in the AI era, attention belongs to models and products. Google ships Gemini, users go try it immediately. OpenAI ships ChatGPT, the whole world uses it. But AWS ships Trainium3 chips, the AgentCore platform, S3 Vectors… ordinary users and even most developers never directly touch these. They’re the iceberg below the waterline — supporting everything above, but generating no user-perceptible “wow” moment.

On the business side: Bedrock call volume grew over 400% in 2025 (AWS official data). Fortune 500 companies don’t hand their core business data to ChatGPT — they need VPC isolation, IAM access controls, compliance auditing, and SLA guarantees. AWS has been building these for nearly 20 years. The moat is deep.

Narrative and business are two different things. But there’s a hidden fuse connecting them — today’s developer mindset becomes enterprise purchasing decisions three to five years from now.

Question 2: Why Does Amazon Have So Little Voice in the AI Era?

Here’s an honest admission: Amazon did stumble and go through a painful period in the AI era.

The most glaring example is Alexa. Amazon invested over $20 billion in voice assistants, shipped over 500 million Alexa devices, and built the world’s most widely deployed voice assistant. Then ChatGPT arrived, and Alexa looked clumsy next to “real intelligence.” The 2023 Alexa team layoffs were Amazon’s most painful AI lesson — massive investment, massive user base, global deployment, yet nearly all of it was wiped out by the step-change of large language models.

Then there’s AWS’s early AI service strategy. Before 2023, AWS’s AI product line felt like “everything exists, nothing excels” — SageMaker was powerful but had a steep learning curve, Bedrock launched with limited models and a rough experience, CodeWhisperer (the predecessor to Q Developer) was no match for GitHub Copilot. AWS’s instinct was to “turn everything into a service and throw it in the console,” but the AI era demands “works out of the box” experiences, not service catalogs.

After the pain, AWS has been adjusting. From 2024–2025, a few clear pivots are visible:

  • Abandoning “do everything in-house”: $8 billion into Anthropic, joining OpenAI’s $50 billion round — using capital to secure model ecosystem positioning rather than grinding on first-party models
  • From “service pile-up” to “systematic thinking”: AgentCore’s launch signals AWS starting to think holistically about the full agent lifecycle, not just shipping scattered Agent features
  • Kiro’s arrival: This is the first time AWS has built a product genuinely focused on developer experience, not just backend APIs. Whether the Spec-Driven approach wins in the market remains to be seen, but it at least shows AWS recognizes the importance of “developer mindshare”
  • Nova’s repositioning: Looking at Nova 2’s product strategy, AWS has stopped trying to compete head-on with GPT-5 and is instead pursuing “good enough, cheap, deeply integrated” — like a supermarket’s house brand

The direction of these adjustments is right. But adjustments take time, and competitors won’t wait.

Question 3: Deep Pockets and Custom Chips — Why No Breakout Foundation Model?

This is the question most worth digging into. Amazon doesn’t lack money, doesn’t lack compute (custom Trainium + world’s largest cloud infrastructure), doesn’t lack data (world’s largest e-commerce platform). Yet Nova models are mediocre on major benchmarks and have virtually no user mindshare.

Meanwhile in China, DeepSeek, Moonshot AI, MiniMax, and Zhipu AI operate at a fraction of Amazon’s resource level yet trained models that can go toe-to-toe with GPT-4-class products.

What’s the gap? I’ve thought about this a lot. I think it comes down to three things.

Wrong DNA

Google’s AI grew from fundamental research — the Transformer architecture was born at Google, DeepMind has Nobel laureate Demis Hassabis at the helm. Meta’s FAIR is one of the world’s strongest AI research labs, with Yann LeCun’s team producing remarkable research output. OpenAI exists entirely to build models.

Amazon’s DNA is retail, logistics, and operational efficiency. Its culture emphasizes “working backwards from the customer,” “data-driven decisions,” and “measuring everything with metrics.” This culture is unstoppable for products with clear customer requirements, but breakthroughs in foundation models come from basic research without clear objectives — you don’t know if Scaling Laws will keep holding, don’t know where the next architectural breakthrough will come from, don’t even know if the billions you spent training will produce something better than the competition. That uncertainty is fundamentally at odds with Amazon’s culture.

Wrong Talent Gravity

Building a top-tier foundation model requires a small group of the world’s sharpest AI researchers. Money alone can’t solve this.

Top AI researchers choose jobs based on: can I do the most cutting-edge research? Are my colleagues as sharp as I am? This creates a positive feedback loop — great talent attracts greater talent. OpenAI, Google DeepMind, and Anthropic have all built this gravity field. Amazon has strong AI engineering teams, but for “the next breakthrough model,” it has never been a top researcher’s first choice — because AI is just one of Amazon’s many businesses.

Why did China’s foundation model startups succeed? Precisely because they created focused gravity fields. DeepSeek’s parent company High-Flyer Quant accumulated GPU clusters and algorithmic talent from quantitative trading. Moonshot AI’s Yang Zhilin came from Tsinghua NLP and attracted some of China’s best AI researchers. These companies are small and focused — everyone is doing one thing, pushing the model to its limits.

Wrong Strategic Choice — Or Rather, Amazon’s Choice Was Never to Build the Best Model

Look at Bedrock’s positioning — a “model supermarket” with access to nearly 100 models. This is not something a company that believes it has the best model would do. If you believe your model is the strongest, you have users come directly to you (like OpenAI and Google). You only build an aggregation platform when you’ve concluded that models aren’t your core advantage.

AWS’s strategy is crystal clear: be the infrastructure provider for the AI era, not AI itself.

Custom Trainium drives down compute costs to retain customers. Investing in Anthropic and OpenAI ensures the best models are available on the platform. Launching Nova — rather than competing with GPT-5, it’s more accurate to say it gives Bedrock a house brand option. Like Costco’s Kirkland: not the best product, just a “good enough and cheap” choice.


A Final Thought: A Question Without an Answer

After unpacking six layers of tech stack and answering three hard questions, I find myself facing a bigger question with no clear answer:

Where does AI’s value ultimately settle?

In the AI gold rush, there are three roles: the gold miners (OpenAI, Anthropic, DeepSeek), the water sellers (NVIDIA), and the town builders (AWS). History tells us that in a gold rush, the ones who ultimately profit are usually the water sellers and town builders. Amazon is betting exactly on this — models will commoditize, differentiation will narrow, and competition will ultimately return to the infrastructure and platform layer.

If that bet is right, AWS’s six-layer full-stack architecture is the winner’s blueprint.

But what if AI’s value ultimately concentrates in the model layer? If users complete everything directly through ChatGPT or Gemini, no longer needing to build their own applications or cloud infrastructure, then the “arms dealer” role gets marginalized. This isn’t far-fetched; Google is already moving in that direction.

Amazon took some wrong turns along the way — the $20 billion Alexa lesson, the early AI service sprawl, the mediocre first-party models. But it’s also adjusting — heavy capital bets on the model ecosystem, systematic agent platforms, even starting to take developer experience seriously.

This company’s greatest trait is patience. How did Jeff Bezos put it — “your margin is my opportunity.” Amazon is accustomed to quietly building infrastructure in places others overlook, then waiting for time to deliver the verdict.

Only this time, the competition isn’t retailers. It’s some of the world’s sharpest AI researchers. And the pace of AI iteration may not afford anyone much patience.

How will it end? Honestly, I don’t know. But that’s exactly what makes this era so fascinating.


Data in this article is current as of April 2026. Primary sources include AWS official documentation, re:Invent 2025 announcements, and the AWS official blog. Opinions expressed are the author’s personal views and do not represent any company’s position.