The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing low-cost archive storage in the cloud. According to AWS, the S3 Glacier storage classes offer virtually unlimited scalability, are designed for 99.999999999% (11 nines) of data durability, and provide fast access to archive data at low cost.
Amazon S3
Score 8.6 out of 10
Amazon S3 is a cloud-based object storage service from Amazon Web Services. Its key features are storage management and monitoring, access management and security, data querying, and data transfer.
Glacier is convenient with systems already on AWS and cheaper than S3 for data that needs to be accessed infrequently. A great tool for any team with legacy systems or data to archive.
The other alternatives for us would involve moving objects out of S3 to some other object storage service, which would generate a lot of network traffic, or keeping the objects on more expensive storage.
It is significantly cheaper than other services; however, that is because it is actually a slightly different service. The other services we've tried allow live reading/writing of data as needed, whereas Glacier is a "cold storage" service. So essentially your choice ends up …
S3 is the most mature simple storage service on the web. It has direct competitors from Google and Azure, as well as a bunch of other competitors that focus on different aspects. For example, Backblaze specializes in file backups, and while S3 can also be used for that, Backbla…
We are using other AWS products, and AWS products integrate seamlessly with each other. This was the most important reason to select S3 over its competitors such as Google Data cloud or Fx Data Cloud. So far, we have not faced any issues such as losing our data or any …
Prior to using S3, we were hosting all of our assets from the asset pipeline in our Ruby on Rails application. For a small company this approach was fine, but as the assets doubled and tripled, it was no longer the way to go. S3 will help you scale regardless of company …
Amazon S3 is where you want to default to if you want to store a large amount of data. Unlike the structured data you can store in Amazon RDS or DynamoDB, data on S3 can be stored in any format you want. And the data retention policy can be really useful if you use S3 …
If your organization has a lot of archival data that needs to be backed up for safekeeping, where it won't be touched except in a dire emergency, Amazon Glacier is perfect. In our case, we had a client that generates many TB of video and photo data at annual events and wanted to retain ALL of it, pre- and post-edit, for potential use in a future museum. Using the Snowball device, we were able to move hundreds of TB of existing media data, previously housed on multiple Thunderbolt drives, external RAIDs, etc., to Amazon Glacier in an organized manner. Then we were able to set up CloudBerry Backup on their production computers to continually back up any new media that they generated during their annual events.
Amazon S3 is a great service to safely back up your data, where redundancy is guaranteed and the cost is fair. We use Amazon S3 for data that we back up and hope we never need to access, but in the case of a catastrophic failure, or even a small slip of the finger with the delete command, we know our data and our client's data is safely backed up by Amazon S3. Transferring data into Amazon S3 is free, but transferring data out has an associated, albeit low, cost per GB. This needs to be kept in mind if you plan on transferring out a lot of data frequently. There may be other cost-effective options, although Amazon S3 prices are really low per GB. Transferring 150TB would cost approximately $50 per month.
Fantastic developer API, including AWS command-line and library utilities (see the sketch after this list).
Strong integration with the AWS ecosystem, especially with regards to access permissions.
It's astoundingly stable: you can trust it'll stay online and available from anywhere in the world.
Its static website hosting feature is a hidden gem: it provides perhaps the cheapest, most stable, highest-performing static web hosting available in PaaS.
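To give a flavour of the developer API praised above, here is a minimal sketch using the boto3 Python library; the bucket and object names are hypothetical, and credentials are assumed to come from your usual AWS configuration:

```python
import boto3

s3 = boto3.client("s3")

# Upload a local file; bucket and key names are placeholders.
s3.upload_file(
    "backup-2024-01.tar.gz",
    "example-backup-bucket",
    "backups/backup-2024-01.tar.gz",
)

# Generate a time-limited download link without making the object public.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-backup-bucket", "Key": "backups/backup-2024-01.tar.gz"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)
```

The same operations are available from the AWS CLI (for example, aws s3 cp and aws s3 presign), which is part of what makes the API pleasant to work with.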
Web console can be very confusing and challenging to use, especially for new users
Bucket policies are very flexible, but the composability of the security rules can be very confusing to get right, often leaving buckets with security rules other than the ones you believe are in effect.
It is tricky to get it all set up correctly, with policies and IAM settings to get right. There is also a lot of lifecycle configuration you can do to move data to cold/Glacier storage. It should not be confused with a OneDrive or SharePoint replacement; they each have their own place in our environment, and S3 is used more by the IT team and accessed by our PHP applications. It is not necessarily used by an average everyday user for storing their pictures or documents, etc.
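As a rough illustration of that lifecycle configuration, the sketch below (boto3, with a hypothetical bucket name and prefix) transitions objects under a prefix to the Glacier storage class after 90 days:

```python
import boto3

s3 = boto3.client("s3")

# Apply a lifecycle rule: move objects under logs/ to Glacier after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```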
AWS has always been quick to resolve any support ticket raised. S3 is no exception. We have only ever used it once, to get a clarification regarding the costs involved when data is transferred between S3 and other AWS services or the public internet. We got a response from the AWS support team within a day.
Since the rest of our infrastructure is in Amazon AWS, coding for sending data to Glacier just makes sense. The others are great as well, for their specific needs and uses, but having *another* piece of third-party software to manage, be billed for, and learn/utilize can be costly in money and time.
Overall, we found that Amazon S3 provided a lot of backend features Google Cloud Storage (GCS) simply couldn't compare to. GCS was way more expensive and really did not live up to its price. In terms of setup, Google Cloud Storage may have Amazon S3 beat; however, as it is more of a pseudo-advanced version of Google Drive, that was not a hard feat to achieve. Overall, evaluating GCS in comparison to S3 was an utter disappointment.
We seldom need to access our data in Glacier, so it costs a fraction of what S3 does, even compared with the Infrequent Access storage class; for the rare retrieval, see the sketch after this list.
Transitioning data to Glacier is managed by AWS. We don't need our engineers to build or maintain log pipelines.
Configuring lifecycle policies for S3 and Glacier is simple; it takes our engineers very little time, and there is little risk of errant configuration.
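For the rare occasion when archived data is needed again, retrieval is a restore request rather than a plain GET. A minimal boto3 sketch, with hypothetical bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to stage an archived object back into a readable copy for 7 days.
s3.restore_object(
    Bucket="example-archive-bucket",   # hypothetical names
    Key="logs/2019/app.log.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Standard"},  # Expedited, Standard, or Bulk
    },
)

# Retrieval is asynchronous; poll head_object until the restore completes.
resp = s3.head_object(Bucket="example-archive-bucket", Key="logs/2019/app.log.gz")
print(resp.get("Restore"))  # e.g. 'ongoing-request="true"' while still in progress
```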
It practically eliminated some really heavy storage servers from our premises and reduced maintenance costs.
The excellent durability and reliability ensure a return on the money you invest.
Objects that are no longer active, or have gone stale, need to be removed; otherwise they keep adding cost each billing cycle. If you are handling a really big infrastructure, this can sometimes create quite a huge bill for preserving unnecessary objects/documents.
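One way to keep that from happening is a periodic cleanup of stale objects. The sketch below is a rough illustration in boto3, assuming a hypothetical bucket, prefix, and one-year cutoff; a lifecycle expiration rule could achieve the same result without any code:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
bucket = "example-archive-bucket"  # hypothetical bucket name
cutoff = datetime.now(timezone.utc) - timedelta(days=365)

# Walk the bucket and delete objects that have not been modified in a year.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="temp/"):
    for obj in page.get("Contents", []):
        if obj["LastModified"] < cutoff:
            s3.delete_object(Bucket=bucket, Key=obj["Key"])
            print("deleted", obj["Key"])
```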