Want to save up to 95% on cloud storage costs without the hassle of manual management? Amazon S3 Intelligent-Tiering automates your data storage, shifting data to the most cost-effective tier based on access patterns. Since 2018, businesses have saved over £4 billion globally by using this service. Here's why it works:
- Automatic tiering: Data moves between tiers (Frequent, Infrequent, and Archive) based on usage, with no retrieval fees.
- Cost savings: Cut costs by up to 40% for infrequently accessed data, up to 68% in the Archive Instant Access tier, and up to 95% for rarely accessed data.
- No manual effort: Avoid complex lifecycle rules - just set it up and let automation handle the rest.
- Performance maintained: Millisecond access for frequently used data, even in cost-saving tiers.
Perfect for unpredictable data patterns, S3 Intelligent-Tiering is ideal for UK businesses managing data lakes, analytics, or backups. With minimal setup and a small monitoring fee, it’s a practical way to cut storage costs while maintaining accessibility and compliance.
What Is Amazon S3 Intelligent-Tiering and How Does It Work?
What Is Amazon S3 Intelligent-Tiering?
Amazon S3 Intelligent-Tiering is a storage class that automatically adjusts storage costs by moving data to the most cost-effective access tier based on how often it’s accessed [3]. Unlike traditional methods that rely on manual adjustments and complex policies, Intelligent-Tiering continuously monitors data usage and shifts objects to the appropriate tier without impacting performance or incurring retrieval fees [3]. Let’s dive into how this automated system ensures cost efficiency.
This service is particularly useful for data with unpredictable or shifting access patterns [3]. Instead of requiring businesses to predict how their data will be used - an often impossible task - Intelligent-Tiering adapts as usage patterns change. It’s suitable for a wide range of scenarios, from data lakes and analytics to user-generated content and emerging applications [1].
What sets Intelligent-Tiering apart is its object-level automation. Each individual object is assessed and moved to the most cost-effective tier, rather than applying blanket rules to entire datasets [4].
How S3 Intelligent-Tiering Works
S3 Intelligent-Tiering uses continuous monitoring and automatic tier transitions to manage data efficiently. It tracks how often each object is accessed and moves data between tiers based on predefined timeframes [1].
The system includes three default access tiers that function automatically:
Frequent Access tier: This is where all new objects start. It offers millisecond access times and is billed at the same rate as S3 Standard [5]. Data stays in this tier as long as it’s accessed regularly.
Infrequent Access tier: Objects that haven’t been accessed for 30 consecutive days are moved here, offering up to 40% savings [3]. It’s billed at the same rate as S3 Standard-IA, and if an object is accessed later, it automatically returns to the Frequent Access tier [1].
Archive Instant Access tier: Data untouched for 90 days is moved to this tier, providing up to 68% savings [3]. Despite the cost reduction, this tier maintains the same low-latency performance as S3 Standard [3].
For even greater savings, you can enable two optional archive tiers:
- Archive Access tier: Stores data that hasn't been accessed for at least 90 consecutive days (the threshold is configurable) and is billed at the same rate as S3 Glacier Flexible Retrieval [5].
- Deep Archive Access tier: Handles data not accessed for 180 consecutive days (if both archive tiers are enabled), offering up to 95% savings for rarely accessed data [3]. This tier matches the performance of S3 Glacier Deep Archive [1].
A key advantage is that there are no additional charges for moving objects between tiers within Intelligent-Tiering, and no retrieval fees apply, unlike other S3 storage classes [1]. However, objects smaller than 128KB are always charged at Frequent Access tier rates.
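If you do decide to switch on the optional archive tiers, this is configured per bucket. The boto3 sketch below is one possible way to do it; the bucket name, configuration ID, and archive/ prefix are placeholders, and the day thresholds shown are the minimum values the API accepts:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and configuration name - replace with your own values.
bucket = "example-data-lake-bucket"

s3.put_bucket_intelligent_tiering_configuration(
    Bucket=bucket,
    Id="archive-cold-data",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-data",
        # Only objects under this (hypothetical) prefix opt in to the archive tiers.
        "Filter": {"Prefix": "archive/"},
        "Status": "Enabled",
        "Tierings": [
            # Objects untouched for 90+ days move to Archive Access...
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            # ...and after 180+ days of no access to Deep Archive Access.
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```

Objects outside the filtered prefix continue to move only between the three default tiers, which keeps retrieval instant for the rest of the bucket.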
With these automated features, organisations in the UK can achieve consistent savings while simplifying their storage management. This cost-conscious system is particularly helpful for businesses dealing with fluctuating data access patterns.
Key Benefits for UK Businesses
S3 Intelligent-Tiering offers more than just cost savings; it provides strategic advantages that align with the needs of UK businesses prioritising efficiency and compliance.
Real-world examples highlight its impact: GRAIL reported 40% savings per gigabyte after shifting most of their data to Intelligent-Tiering [1], and Zalando achieved 37% annual storage cost savings [1].
Simplified management is a standout benefit. By removing the need for complex lifecycle policies and constant monitoring, IT teams can focus on higher-value tasks. As D'Arcy Rail-Ip, VP Technology at CineSend, explains:
Using S3 Intelligent-Tiering allowed us to use a 'set-it-and-forget-it' model for stored media content. Confident that frequently and infrequently accessed files are in their correct storage class and that costs are being kept to an efficient minimum, my team is able to focus on our mandate: deliver secure video content across the globe with cutting-edge technology.[1]
This simplicity is especially important for UK businesses, where IT teams often juggle multiple priorities with limited resources.
The service maintains S3 Standard performance across the Frequent Access, Infrequent Access, and Archive Instant Access tiers [3], ensuring applications function seamlessly regardless of data location.
Compliance and data retention are also key strengths. Intelligent-Tiering provides the same 99.999999999% durability across all tiers [3], safeguarding data integrity for compliance purposes. Its automated tiering supports long-term retention strategies without manual effort, making it ideal for businesses navigating GDPR or financial regulations.
For UK companies managing unpredictable access patterns, Intelligent-Tiering proves invaluable. Whether dealing with seasonal spikes in e-commerce or fluctuating analytical demands in financial services, this service eliminates the need for guesswork. Jerzy Grzywinski, Director of Software Engineering at Capital One, notes:
Because the storage usage patterns vary widely across our top buckets, there was no clear-cut rule we could safely apply without taking on some operational overhead. The S3 Intelligent-Tiering storage class delivered automatic storage savings based on the changing access patterns of our data without impact on performance.[1]
Though the service charges a small monthly monitoring and automation fee [1], the savings from automatic optimisation typically outweigh the cost, offering a practical and efficient solution for businesses across the UK.
Setting Up S3 Intelligent-Tiering
You can activate S3 Intelligent-Tiering when uploading new data, by using lifecycle policies to transition existing objects, or by making it the standard storage class for all new uploads to a bucket.
How to Enable S3 Intelligent-Tiering
There are three main ways to enable S3 Intelligent-Tiering: through the AWS Management Console, AWS CLI, or Amazon S3 API [6]. For new uploads, you can select the Intelligent-Tiering storage class during the upload process. In the AWS Management Console, simply choose S3 Intelligent-Tiering from the available storage class options [7]. For programmatic uploads, include the storage class in the x-amz-storage-class request header [6].
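As a rough sketch of the programmatic route, the boto3 calls below set that header at upload time. The bucket name, object key, and file name are placeholders rather than anything from the article:

```python
import boto3

s3 = boto3.client("s3")

# Small payloads: put_object with the Intelligent-Tiering storage class.
s3.put_object(
    Bucket="example-bucket",          # placeholder bucket name
    Key="reports/2024/summary.csv",   # placeholder object key
    Body=b"example,data\n",
    StorageClass="INTELLIGENT_TIERING",
)

# Larger files: upload_file sets the same storage class via ExtraArgs.
s3.upload_file(
    Filename="summary.csv",
    Bucket="example-bucket",
    Key="reports/2024/summary.csv",
    ExtraArgs={"StorageClass": "INTELLIGENT_TIERING"},
)
```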
If you're working with existing objects, you can transition them to Intelligent-Tiering either individually or in bulk. For large-scale conversions, lifecycle policies offer an automated solution, while recursive copy operations via the AWS CLI can handle more immediate needs [8]. For example, a lifecycle policy targeting objects with a key prefix like documents/ could automatically transition matching files at midnight UTC after their creation [6].
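If you prefer an immediate, one-off conversion rather than waiting for a lifecycle run, a copy-in-place achieves the same result as the CLI's recursive copy. The sketch below is illustrative only; the bucket name is a placeholder and the documents/ prefix simply reuses the example above:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"   # placeholder
prefix = "documents/"       # prefix from the lifecycle example above

# Rewrite each object onto itself with the new storage class - the in-place
# equivalent of `aws s3 cp --recursive --storage-class INTELLIGENT_TIERING`.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        # copy_object handles objects up to 5 GB; larger objects need a
        # multipart copy or S3 Batch Operations instead.
        s3.copy_object(
            Bucket=bucket,
            Key=obj["Key"],
            CopySource={"Bucket": bucket, "Key": obj["Key"]},
            StorageClass="INTELLIGENT_TIERING",
            MetadataDirective="COPY",
        )
```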
Initially, you might want to exclude Glacier-based archive tiers to simplify retrieval processes [8]. As your requirements evolve, you can later enable the Archive Access or Deep Archive Access tiers for further cost optimisation [6].
AWS highlights the benefits of this automation:
S3 Intelligent-Tiering delivers automatic cost savings by moving data on a granular object level between access tiers when access patterns change.[7]
Setting Bucket Policies and Default Storage Classes
Configuring bucket policies and default storage settings ensures Intelligent-Tiering works smoothly across your storage environment. Data can move to Intelligent-Tiering in two main ways: direct uploads using PUT operations or transitions from other storage classes like S3 Standard through lifecycle configurations [6].
By making Intelligent-Tiering the default choice for new uploads to a bucket - for example, through your upload tooling or a bucket-wide lifecycle rule - you can automate cost management without requiring manual adjustments [8]. Lifecycle configurations provide additional control, allowing you to specify when and how objects transition to Intelligent-Tiering. These can be applied at the bucket level or for specific prefixes, giving you flexibility to manage different types of data [6]. You can even decide which objects qualify for archive access tiers by using shared prefixes or object tags [6].
For more complex scenarios, lifecycle configurations can be written in JSON when using the AWS CLI. This approach allows for precise control over transition rules, making it ideal for organisations with structured data storage. For instance, you could apply Intelligent-Tiering to the logs/ prefix while keeping frequently accessed application data in S3 Standard.
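As an illustration of that approach, the boto3 call below applies such a rule; the dictionary has the same shape as the JSON document you would pass to the AWS CLI. The bucket name and rule ID are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical rule: move everything under logs/ into Intelligent-Tiering,
# while objects outside that prefix stay in their current storage class.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-to-intelligent-tiering",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    # Days=0 transitions objects at the next daily lifecycle
                    # run (dated midnight UTC) after they are created.
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ],
    },
)
```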
S3 Intelligent-Tiering integrates seamlessly with other AWS features such as S3 Inventory, S3 Replication, S3 Storage Lens, server-side encryption, S3 Object Lock, and AWS PrivateLink [9]. Once you’ve configured your bucket policies, it’s important to verify the setup to ensure Intelligent-Tiering functions as expected.
Checking Your Configuration
To confirm that S3 Intelligent-Tiering is working effectively, regular monitoring and validation are key. This ensures the setup delivers the intended cost savings and performance benefits.
Several tools can help you keep track of your configuration. S3 Inventory is particularly useful, providing a detailed list of all your objects and their metadata, including their current Intelligent-Tiering access tier [9]. Event notifications can alert you in real-time when objects transition between tiers, such as moving to Archive Access or Deep Archive Access [2]. Additionally, the AWS CLI allows you to check configurations and object statuses directly, ensuring your policies are applied correctly [6].
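To make that verification concrete, here is a small boto3 sketch. It assumes the configuration ID from the earlier example and a placeholder object key:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"  # placeholder

# Confirm the archive-tier configuration is in place and enabled.
config = s3.get_bucket_intelligent_tiering_configuration(
    Bucket=bucket, Id="archive-cold-data"
)
print(config["IntelligentTieringConfiguration"]["Status"])

# Spot-check a single object: StorageClass confirms Intelligent-Tiering, and
# ArchiveStatus only appears once the object has moved into an archive tier.
head = s3.head_object(Bucket=bucket, Key="archive/2019/report.pdf")
print(head.get("StorageClass"), head.get("ArchiveStatus"))

# The Frequent / Infrequent / Archive Instant Access split is not exposed here;
# use the Intelligent-Tiering access tier column in an S3 Inventory report for that.
```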
Real-world examples underline the benefits of proper configuration. AppsFlyer’s CTO and Co-Founder, Reshef Mann, shared their experience:
S3 Intelligent-Tiering allows us to make better use and be more cost efficient whenever we have to go to historical data and make changes on top of it.[1]
To maintain cost efficiency, consider setting up dashboards to monitor storage costs, reviewing S3 Inventory reports monthly, and creating alerts for unexpected tier transitions. These practices help businesses in the UK balance cost savings with data accessibility, ensuring their storage strategy aligns with their operational needs.
Regular reviews of these settings confirm that cost-saving goals are being met and that the configuration continues to support business objectives effectively.
Cost Efficiency Best Practices
Getting the most out of S3 Intelligent-Tiering takes careful planning and regular fine-tuning. By focusing on the right strategies, UK businesses can maximise savings and see a strong return on their cloud storage investment.
Choosing the Right Workloads and Data Types
S3 Intelligent-Tiering is ideal for datasets with unpredictable access patterns [1]. This makes it a great choice for UK businesses managing diverse data where traditional storage class predictions may not work well.
Examples of suitable workloads include:
- Data lakes
- Analytics workloads
- Media archives
- Enterprise document storage
- Backup and disaster recovery systems
- Compliance archives
These types of data often have access patterns that are hard to predict, making them perfect candidates for Intelligent-Tiering. Many companies have reported significant cost reductions - ranging from 30% to 60% - by identifying and optimising such workloads [1].
Take Illumina, for example. Al Maynard, their Director of Software Engineering, shared:
After just 3 months of using S3 Intelligent-Tiering, Illumina began to see significant monthly cost savings. For every 1 TB of data, the company saves 60 percent on storage costs.[1]
Another example is Zalando, which saved 37% annually by automatically moving objects that were not accessed within 30 days to the infrequent-access tier [1].
For UK businesses, starting with datasets like backup files, logs, and archived content can lead to immediate savings with minimal changes to operations. Once these workloads are identified, continuous monitoring ensures the savings remain consistent.
Monitoring and Adjusting Settings
Ongoing monitoring is crucial to ensure that S3 Intelligent-Tiering continues to deliver savings. Tools like Amazon S3 Storage Lens, S3 Inventory, AWS CloudTrail, and AWS Config provide the insights needed to make timely adjustments [12].
Keep an eye on storage costs and tier transitions. Many organisations start seeing savings within a month of moving their data to Intelligent-Tiering. For instance, companies have achieved monthly savings of about 30% without any performance impact or extra data analysis requirements [11].
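One lightweight way to watch per-tier growth is through the daily CloudWatch storage metrics that S3 publishes for each Intelligent-Tiering tier. The sketch below, with a placeholder bucket name, prints the most recent size of each tier:

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
bucket = "example-bucket"  # placeholder

# StorageType dimension values covering the five Intelligent-Tiering tiers.
tiers = {
    "Frequent Access": "IntelligentTieringFAStorage",
    "Infrequent Access": "IntelligentTieringIAStorage",
    "Archive Instant Access": "IntelligentTieringAIAStorage",
    "Archive Access": "IntelligentTieringAAStorage",
    "Deep Archive Access": "IntelligentTieringDAAStorage",
}

now = datetime.datetime.utcnow()
for name, storage_type in tiers.items():
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": bucket},
            {"Name": "StorageType", "Value": storage_type},
        ],
        StartTime=now - datetime.timedelta(days=2),
        EndTime=now,
        Period=86400,  # BucketSizeBytes is reported once per day
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    latest = max(points, key=lambda p: p["Timestamp"])["Average"] if points else 0
    print(f"{name}: {latest / 1024**3:.1f} GiB")
```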
When using archive tiers, such as Archive Access and Deep Archive Access, it’s important to enable them selectively based on how quickly you might need to retrieve data. These tiers can reduce costs by up to 95% for rarely accessed objects [1].
Chris S. Borys, Team Manager of Cloud Storage Services at Shutterstock, highlighted their success:
The savings we realized from using S3 Intelligent-Tiering, up to 60% in some buckets, allowed us to further reinvest in our storage infrastructure and replicate our storage environment to a second AWS Region.[1]
To maintain cost efficiency, review monthly storage trends, watch for unexpected tier changes, and track retrieval patterns. Setting up alerts for unusual activity and regularly reviewing S3 Inventory reports can help ensure your configurations stay aligned with your business needs. For larger deployments, automation can simplify monitoring and adjustments.
Automating Intelligent-Tiering at Scale
For businesses managing large-scale deployments, automation is key to keeping operations efficient. Automating transition rules using Python scripts can eliminate the need for manual configurations across multiple buckets [11].
S3 Lifecycle policies are a cornerstone of this automation. They enable automated transitions from S3 Standard to Intelligent-Tiering, offering granular control that can be applied at the bucket level or to specific prefixes [11]. For new buckets, Lambda functions can automatically apply lifecycle configurations when buckets are created [14].
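As one possible shape for that pattern, the sketch below shows a Lambda handler that applies a bucket-wide Intelligent-Tiering lifecycle rule to newly created buckets. It assumes an EventBridge rule forwarding CloudTrail CreateBucket events, and all names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Bucket-wide rule: send new objects straight into Intelligent-Tiering.
LIFECYCLE = {
    "Rules": [
        {
            "ID": "default-intelligent-tiering",
            "Filter": {"Prefix": ""},  # whole bucket
            "Status": "Enabled",
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }
    ]
}


def handler(event, context):
    # Assumes an EventBridge rule that forwards CloudTrail CreateBucket events;
    # the bucket name lives in the request parameters of that event.
    bucket = event["detail"]["requestParameters"]["bucketName"]
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=LIFECYCLE
    )
    return {"bucket": bucket, "lifecycle": "applied"}
```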
S3 Batch Operations is another tool that simplifies large-scale data management. It allows businesses to manage billions of objects with a single API request, cutting down months of manual work to just hours [13]. Some organisations have reported over 80% savings by automating storage management through batch operations [13].
For UK businesses with complex cloud setups, integrating automation into DevOps workflows is essential. This might include:
- Automated policy deployment
- Monitoring dashboards
- Cost alerting systems
Using resource tags to automate storage class assignments and implementing AWS Config rules can also ensure compliance across multiple accounts [10][12].
For organisations handling petabytes of data across different regions, lifecycle policies that automatically transition data based on age, access patterns, or business rules can reduce manual effort while keeping costs in check as data volumes grow.
Hokstad Consulting has been instrumental in helping UK businesses implement these automation strategies. Their expertise in cloud cost engineering and custom automation development has enabled organisations to significantly reduce storage costs while improving operational efficiency.
S3 Storage Class Comparison
When it comes to cost-efficient cloud storage, comparing Amazon S3 storage classes can help UK businesses decide where S3 Intelligent-Tiering fits into their strategy. A clear understanding of how Intelligent-Tiering stacks up against other options ensures businesses can optimise their storage costs while meeting their performance needs.
Storage Class Comparison Table
Here’s a breakdown of the main differences between S3 storage classes, focusing on cost, performance, and use cases:
| Storage Class | Cost per GB/month | First Byte Latency | Minimum Storage Duration | Retrieval Charges | Best Use Cases |
| --- | --- | --- | --- | --- | --- |
| S3 Standard | From £0.018 | Milliseconds | None | None | Frequently accessed data, websites, content distribution |
| S3 Intelligent-Tiering | £0.0008–£0.018 (varies by tier) | Milliseconds | None | None | Unpredictable access patterns, data lakes |
| S3 Standard-IA | From £0.010 | Milliseconds | 30 days | Per GB retrieved | Infrequently accessed data needing quick retrieval |
| S3 One Zone-IA | From £0.008 | Milliseconds | 30 days | Per GB retrieved | Non-critical, reproducible data |
| S3 Glacier Instant Retrieval | From £0.003 | Milliseconds | 90 days | Per GB retrieved | Archive data with instant access needs |
| S3 Glacier Flexible Retrieval | From £0.003 | Minutes to hours | 90 days | Per GB retrieved | Archive data accessed 1–2 times per year |
| S3 Glacier Deep Archive | From £0.0008 | Hours | 180 days | Per GB retrieved | Long-term retention for rarely accessed data |
Pricing reflects UK conversions and may vary by AWS region or exchange rates.
S3 Intelligent-Tiering stands out because it does not charge for data retrieval, making it ideal for datasets with unpredictable access patterns. However, objects smaller than 128KB are always charged at the Frequent Access tier rate, alongside a small monitoring fee of £0.002 per 1,000 objects per month [3]. These nuances are critical for businesses aiming to balance cost and performance.
When to Use Intelligent-Tiering
S3 Intelligent-Tiering shines in scenarios where predicting access patterns is tricky. For UK businesses, this storage class offers savings without the need for constant manual intervention.
Unpredictable access patterns are where Intelligent-Tiering proves its worth. It automatically adjusts to changing usage patterns, making it a smart choice for datasets where traditional storage classes might fall short.
One key advantage is the lack of retrieval fees, which sets it apart from Glacier storage classes. For businesses dealing with archive data, unexpected retrieval costs can quickly add up, but Intelligent-Tiering eliminates this concern.
Scalability is another strength. For instance, Joyn expanded its storage by three times while maintaining the same total cost of ownership (TCO) using S3 Intelligent-Tiering [1]. This makes it a great fit for organisations experiencing rapid data growth.
Savings across tiers also provide a compelling reason to choose Intelligent-Tiering. The Infrequent Access tier can reduce costs by up to 40% compared to S3 Standard, while the Archive Instant Access tier offers up to 68% savings. For rarely accessed data, the optional Deep Archive Access tier can deliver savings of up to 95% [3].
Beyond cost, operational efficiency is another benefit. AppsFlyer, for example, reduced its storage costs by 18% per GB after transitioning to Intelligent-Tiering [1]. Similarly, SimilarWeb saved 20% while improving data accessibility for its employees [1].
For UK organisations, Intelligent-Tiering is particularly effective for objects larger than 128KB, datasets with unpredictable access patterns, and teams looking to avoid the hassle of managing lifecycle policies manually. While the monitoring fee may seem like an additional cost, it's often offset by the automatic savings, especially for larger datasets.
Hokstad Consulting has supported many UK businesses in identifying the right use cases for S3 Intelligent-Tiering, helping them cut costs and maintain efficiency in their cloud storage solutions.
Conclusion
Amazon S3 Intelligent-Tiering presents a smart solution for managing cloud storage costs, especially for UK businesses. By automatically shifting data between storage tiers based on access patterns, it removes the need for manual intervention and guesswork - traditionally time-consuming tasks for DevOps teams. Since its launch in 2018, this service has helped customers collectively save over £4 billion in storage costs compared to S3 Standard [1]. These figures underline its efficiency and practical benefits.
The success stories are hard to ignore. For instance, Stripe has reduced its monthly storage costs by approximately 30% without any impact on performance, while GRAIL reported saving 40% per gigabyte after migrating most of their data to Intelligent-Tiering [1]. These examples demonstrate the reliability and value of automated cost management in real-world scenarios.
For UK organisations dealing with unpredictable data access patterns, Intelligent-Tiering offers distinct advantages. Its lack of retrieval fees and automated tier transitions allow businesses to prioritise innovation over day-to-day storage concerns. With careful planning and testing, these benefits can be fully realised.
Beyond cost savings, Intelligent-Tiering enhances operational flexibility. The ability to cut costs by up to 95% on rarely accessed data through the Deep Archive Access tier, while still providing millisecond access for frequently used files, delivers a level of versatility that traditional storage classes simply can't offer [1]. This automation also reduces the workload for engineering teams, freeing them to focus on critical business projects.
To implement Intelligent-Tiering successfully, organisations need to evaluate access patterns, ensure consistent object tagging, and regularly review lifecycle policies. It’s also important to account for factors like the monitoring fee and the 128KB minimum object size when calculating costs.
Hokstad Consulting has a proven track record of helping UK businesses adopt S3 Intelligent-Tiering as part of broader cloud cost engineering strategies. By integrating this storage class with DevOps practices and Infrastructure as Code, businesses can achieve long-term cost efficiency and operational improvements.
The evidence speaks for itself: S3 Intelligent-Tiering offers measurable results for businesses ready to embrace automated storage management. Whether managing data lakes, backups, or application data with variable access patterns, this storage class provides a clear path to optimising costs without compromising performance or availability.
FAQs
How does S3 Intelligent-Tiering decide the best storage tier for my data?
Amazon S3 Intelligent-Tiering: Smarter Storage Management
Amazon S3 Intelligent-Tiering takes the guesswork out of managing your data storage. By continuously analysing how often your data is accessed, it automatically shifts objects between storage tiers - like frequent or infrequent access - without requiring you to lift a finger.
The result? You’re only charged for the storage tier that aligns with how your data is actually used. This not only helps you save on costs but also ensures your data remains accessible and performs as needed.
How can S3 Intelligent-Tiering help UK businesses manage storage costs for data with unpredictable access patterns?
S3 Intelligent-Tiering: A Cost-Saving Solution for UK Businesses
S3 Intelligent-Tiering offers a smart way for UK businesses to cut storage costs by automatically shifting data between tiers based on how often it's accessed. This means that data you use frequently stays easily accessible, while less-used data is stored in a more budget-friendly way.
One of the standout benefits is its ability to eliminate manual data management. By automating this process, businesses can reduce operational workloads and potentially lower storage expenses by up to 40%. This is especially useful for organisations dealing with fluctuating or unpredictable data access patterns, as it blends cost savings with hassle-free automation.
How can I manage my S3 Intelligent-Tiering settings to maximise cost savings and performance?
Maximising the Benefits of Amazon S3 Intelligent-Tiering
To make the most of Amazon S3 Intelligent-Tiering, it's important to keep an eye on your storage usage and access patterns. Tools like AWS CloudWatch and S3 Storage Lens can provide valuable insights into how your data is accessed and how it transitions between storage tiers. These insights can help you spot areas where you could reduce costs.
You can also set up lifecycle policies to automate the movement of data, ensuring that objects are stored in the most cost-efficient tier based on how often they're accessed. Regularly reviewing your cost and performance metrics is key to refining these policies. If you notice changes in data access patterns, you may need to make manual adjustments to ensure your setup stays aligned with your needs.
By combining proactive monitoring with the right tools, you can keep your S3 Intelligent-Tiering setup running efficiently while keeping costs under control.