AWS re:Invent 2025: Key Highlights and Announcements

  1. Next leap forward in GenAI capabilities

NEW Frontier Agents – Autonomous AI agents that work for hours or days without intervention

  • AWS unveiled three frontier agents—a new class of AI agents
  1. Kiro Autonomous Agent: An organization's virtual developer, navigating code repositories, triaging bugs, and improving test coverage
  2. AWS Security Agent: A security consultant that understands an organization's infrastructure, providing actionable security recommendations directly within developers' workflows to embed security earlier in the development process
  3. AWS DevOps Agent: An always-on-call operations team, rapidly detecting incidents, finding their root causes, and helping to resolve them while reducing operational toil

UPDATE Amazon Bedrock – Expanded Model Selection – 18 new models providing greater choice and flexibility for organizations

  • Includes new models available first in Amazon Bedrock from Mistral AI: Mistral Large 3 (Mistral's most advanced open-weight model) and Ministral 3 (a compact, general-purpose, multimodal model)
  • Additional popular models include Google’s Gemma 3, MiniMax’s M2, NVIDIA’s Nemotron, and OpenAI’s GPT OSS Safeguard
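With more models in the catalog, switching between them in application code is largely a matter of changing the model ID passed to the Bedrock Converse API. A minimal sketch in Python; the model ID shown is an illustrative placeholder, not a confirmed identifier for any of the newly announced models:

```python
# Sketch: building a request for Bedrock's Converse API so the model
# can be swapped by changing a single identifier. The model ID below is
# a hypothetical placeholder; check the Bedrock console for real IDs.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

# With AWS credentials configured, the request would be sent like this:
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)

request = build_converse_request("mistral.mistral-large-3-v1:0", "Summarize this doc")
```

Because the request shape is the same across Converse-compatible models, trying a newly added model against an existing workload is a one-line change.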

NEW Amazon Nova 2 Model Family – Four new Nova models that think deeper and run faster and cheaper at production scale; Nova Forge trains models on an organization's own data to produce customized models; and Nova Act automates browser-based tasks using natural language, without the need to code prescriptive scripts

  • NEW Amazon Nova 2 models (Lite, Pro, Sonic, and Omni) deliver industry-leading price-performance across reasoning, multimodal processing, conversational AI, code generation, and agentic tasks 
  • NEW Nova Forge pioneers “open training,” giving organizations access to pre-trained model checkpoints and the ability to blend proprietary data with Amazon Nova-curated datasets
  • NEW Nova Act achieves 90% reliability for browser-based UI automation workflows, powered by a custom Nova 2 Lite model that significantly speeds up browser tasks

UPDATE Amazon Bedrock AgentCore – New components to help enterprises accelerate GenAI workloads from prototype to production securely, and at scale

  • Supports any framework (e.g., CrewAI, LangGraph, LlamaIndex, Strands Agents) or model
  • Downloaded more than two million times in just five months since its preview launch
  • NEW AgentCore Policy (Preview) allows teams to set clear boundaries for agent actions using natural language, giving them fine-grained control without having to write formal policy code
  • NEW AgentCore Evaluations (Preview) simplifies monitoring with 13 pre-built evaluators that check quality dimensions such as correctness and safety, removing months of data science work and eliminating the need to build custom evaluators
  • UPDATE AgentCore Memory (Generally Available) introduces episodic functionality that helps agents learn from past experiences and adapt as needed, reducing processing time and eliminating the need for extensive custom instructions 
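The idea behind policy-gated agents, checking each proposed action against explicit boundaries before it runs, can be illustrated with a toy rule evaluator. This is not the AgentCore Policy API (which accepts boundaries written in natural language); it is a deliberately simplified sketch of the underlying pattern:

```python
# Toy illustration of policy-gated agent actions (NOT the AgentCore
# Policy service): every action an agent proposes is checked against a
# small allow/deny rule set, with deny as the default.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action_prefix: str   # e.g. "db.read" matches "db.read.users"
    allow: bool

def is_permitted(action: str, rules: list[Rule]) -> bool:
    """Return the decision of the first matching rule; deny by default."""
    for rule in rules:
        if action.startswith(rule.action_prefix):
            return rule.allow
    return False

# Hypothetical boundaries: agents may read the database but never write
# to it or send email on their own.
rules = [
    Rule("db.read", True),
    Rule("db.write", False),
    Rule("email.send", False),
]
```

Deny-by-default with explicit allows is the conservative choice here: an agent gaining a new capability stays blocked until someone deliberately opens it up.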

NEW Kiro Powers – Gives AI coding agents immediate expertise in specific technologies and tools, without manual configuration

  • Developers can now give Kiro agents instant expertise in specialized tools and workflows in a single click from sources such as Datadog, Dynatrace, Figma, Neon, Netlify, Postman, Stripe, Supabase, and AWS 
  • Developers can also create and share their own Powers with the community

UPDATE AWS Transform – Enhanced Modernization Capabilities – Accelerates enterprise modernization of legacy infrastructure and applications by using GenAI, reducing costs and improving security

  • New agentic AI capabilities modernize legacy code and applications up to five times faster
  • Now additionally handles full-stack Windows modernization across .NET apps, SQL Server, and user interface frameworks 
  • Eliminates up to 70% of maintenance and licensing costs
  • AWS Transform has saved customers more than 1 million hours of manual effort and analyzed 1.8 billion lines of code

  2. Next-generation infrastructure and silicon

NEW AWS Graviton5 Processors – AWS's most powerful and efficient custom chip for cloud workloads, delivering up to 25% better performance than the previous generation

  • New Graviton5-based Amazon EC2 M9g instances deliver up to 25% higher performance than previous generation, with 192 cores per chip and five times larger cache
  • 98% of AWS's top 1,000 EC2 customers are already benefiting from Graviton's price-performance advantages
  • For the third year in a row, more than half of new CPU capacity added to AWS is powered by Graviton, demonstrating the widespread adoption and trust in this technology across diverse industries

UPDATE Trainium3 UltraServers Now Available – Built for organizations to train and deploy AI models up to 4.4 times faster at half the cost

  • Amazon EC2 Trainium3 UltraServers are powered by AWS's first 3-nanometer AI chip and pack up to 144 Trainium3 chips into a single integrated system
  • Delivers up to 4.4 times more compute performance and four times greater energy efficiency than Trainium2 UltraServers 
  • Customers achieve three times higher throughput per chip and four times faster response times

NEW AWS AI Factories – Dedicated AWS infrastructure that transforms organizations’ data centers into high-performance AI environments 

  • Provides enterprises and government organizations with dedicated AWS AI infrastructure deployed in their own data centers
  • Combines NVIDIA GPUs, Trainium chips, AWS networking, and AI services like Amazon Bedrock and SageMaker AI 
  • Lets organizations leverage existing data center space and power while meeting data sovereignty and regulatory requirements

  3. Data as the backbone of AI innovation

UPDATE Amazon S3 – Enhanced Capabilities – S3 Vectors cuts AI storage costs by up to 90%; S3 Batch Operations runs jobs at a scale of 20 billion objects up to ten times faster; and S3 Tables now support automatic cost optimization to drive increased savings for organizations

  • UPDATE Amazon S3 Vectors now Generally Available: Scales up to two billion vectors per index (40 times the preview capacity), supports up to 20 trillion vectors per bucket, delivers two to three times faster performance for frequent queries, and reduces costs by up to 90%
  • Maximum S3 object size increased tenfold, from 5 TB to 50 TB, enabling storage of massive data files as single objects
  • UPDATE S3 Batch Operations now run up to ten times faster for large jobs 
  • UPDATE S3 Tables now support the Intelligent-Tiering storage class, with up to 80% storage cost savings, and automatic replication across AWS Regions and accounts
  • Intelligent-Tiering brings the same automatic cost optimization that has saved S3 customers more than US$6 billion by automatically optimizing table data across three access tiers (Frequent Access, Infrequent Access, and Archive Instant Access) based on access patterns—delivering up to 80% storage cost savings without performance impact or operational overhead
  • Automatic replication enables teams to query local data for faster performance, while maintaining consistency across Regions and accounts. Customers can now automatically replicate tables, eliminating manual updates and complex syncing—simplifying compliance and backup management while keeping complete table structures intact and ready to use
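The larger object ceiling has a practical implication for multipart uploads. Assuming the long-standing 10,000-part limit per multipart upload still applies, a 50 TB object needs parts of at least 5 GB, which the arithmetic below sketches:

```python
# Sketch: picking a multipart part size for very large S3 objects,
# assuming the existing 10,000-part limit per multipart upload.

MAX_PARTS = 10_000  # S3 multipart upload part-count limit

def min_part_size(object_size_bytes: int, max_parts: int = MAX_PARTS) -> int:
    """Smallest uniform part size (bytes) that fits the object in max_parts parts."""
    return -(-object_size_bytes // max_parts)  # ceiling division

TB = 10 ** 12

# At the old 5 TB ceiling, 500 MB parts sufficed; at 50 TB, parts must
# reach 5 GB to stay within 10,000 parts.
fifty_tb_part = min_part_size(50 * TB)

# In practice boto3 handles part sizing through TransferConfig, e.g.:
#   from boto3.s3.transfer import TransferConfig
#   config = TransferConfig(multipart_chunksize=fifty_tb_part)
#   boto3.client("s3").upload_file("big.bin", "bucket", "key", Config=config)
```

The takeaway is that tooling with hard-coded small part sizes may need updating before it can write objects anywhere near the new ceiling.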
