Amazon launches S3 Files to bridge gap between object storage and AI agent workflows
AWS introduces S3 Files, allowing AI agents to access cloud data as if it were local files without migration or duplication.

Amazon Web Services has launched S3 Files, a new service that enables AI agents to access data stored in Amazon S3 buckets as if it were part of their local file system. The service addresses a technical challenge that has complicated AI agent workflows involving enterprise data stored in cloud object storage.
AI agents typically operate through file system tools that navigate directories and read file paths, while enterprise data often resides in object storage systems like Amazon S3, which serve data through API calls rather than file paths. This mismatch has required organizations to maintain separate file system layers alongside S3, creating data duplication and synchronization challenges.
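To make the mismatch concrete, the following is a toy sketch, not AWS code: an agent's file tools read by path, while object storage is addressed by bucket and key through an API client. The `FakeObjectStore` class is a stand-in for illustration only and does not reproduce the real S3 API.

```python
import tempfile
from pathlib import Path

# An agent's file-system tool: reads by path.
def read_local(path: str) -> str:
    return Path(path).read_text()

# A toy stand-in for an object store: data is addressed by
# (bucket, key) through API calls, never by file path.
class FakeObjectStore:
    def __init__(self):
        self._objects = {}

    def put_object(self, bucket: str, key: str, body: str) -> None:
        self._objects[(bucket, key)] = body

    def get_object(self, bucket: str, key: str) -> str:
        return self._objects[(bucket, key)]

# The same document, reachable only through the API...
store = FakeObjectStore()
store.put_object("reports", "q3/summary.txt", "Q3 revenue up 12%")

# ...so to let a path-based agent read it, a local copy is made.
with tempfile.TemporaryDirectory() as tmp:
    local_copy = Path(tmp) / "summary.txt"
    local_copy.write_text(store.get_object("reports", "q3/summary.txt"))
    print(read_local(str(local_copy)))
# The duplicated copy must now be kept in sync with the object in
# the store -- the duplication problem the article describes.
```

The duplicated local copy in the last step is exactly the separate file system layer that S3 Files is meant to eliminate.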
S3 Files uses Amazon's Elastic File System (EFS) technology to mount S3 buckets directly into an agent's local environment with a single command. The data remains in S3 without requiring migration, while both file system and S3 object APIs remain accessible simultaneously. Andy Warfield, VP and distinguished engineer at AWS, said the company developed the solution after its own engineering teams encountered workflow disruptions when using AI tools like Kiro and Claude Code with S3-stored data.
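AWS has not published the exact command in this announcement, so the following is a hypothetical sketch of what the single-command mount workflow might look like. The `mount-s3files` command name, its arguments, and the paths are illustrative assumptions, not the actual S3 Files CLI; only the final `aws s3api get-object` call is an existing AWS CLI command.

```shell
# Hypothetical: mount an S3 bucket into the agent's local environment.
# Command name and arguments are illustrative, not the real S3 Files CLI.
mount-s3files my-data-bucket /mnt/my-data-bucket

# After mounting, the agent's ordinary file tools operate on S3 data:
ls /mnt/my-data-bucket/
cat /mnt/my-data-bucket/reports/q3/summary.txt

# ...while the S3 object API still addresses the same data directly:
aws s3api get-object --bucket my-data-bucket \
    --key reports/q3/summary.txt /tmp/summary.txt
```

This reflects the article's claim that the data stays in S3 and both access paths remain live at once, rather than one replacing the other.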
The service differs from previous FUSE-based solutions that attempted to make object stores appear as file systems. According to AWS, S3 Files provides full native file system semantics rather than workarounds that either compromised object API functionality or restricted file operations. The architecture allows multiple agents to access the same mounted bucket simultaneously, with AWS claiming support for thousands of concurrent connections and aggregate read throughput of multiple terabytes per second.
Industry analysts view S3 Files as addressing fundamental infrastructure challenges for AI workloads. Gartner analyst Jeff Vogel noted that the service eliminates data shuffling between object and file storage, while IDC analyst Dave McCarthy described it as removing friction between large-scale data lakes and autonomous AI systems. The service is now available in most AWS regions.