Aurora takes away the DB maintenance overheads, and delivers great ROI
Updated March 14, 2023

Piyush Goel | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User

Overall Satisfaction with Amazon Aurora

We use Aurora as the de facto DBaaS product for hosting our relational databases, primarily MySQL. We have over 100 MySQL clusters (Master, Slave) across the microservices and the 5 global regions where Capillary's SaaS products are hosted. Prior to Aurora, all our databases were self-managed and hosted on EC2 instances with EBS volumes (provisioned IOPS) for storage. As our system scaled, managing the databases became a full-time job, and configuration management, upgrades, and regular maintenance started eating into the team's bandwidth. Furthermore, the chances of human error, and the consequent outages, grew with the number of MySQL set-ups. To address these concerns, in early 2020 we migrated all our MySQL clusters from EC2 to Aurora. The Aurora service hosts over 400 TB of data, and the Aurora instances vary from 4-core/32 GB RAM to 32-core/256 GB RAM configs. The storage layer varies anywhere from 200 GB to 30 TB. In a nutshell, all relational, OLTP use-cases for the roughly 700M end-consumers touched by Capillary's platform are served out of Aurora.
  • Auto-expansion of the disks. The administrators don't have to worry about disk sizes anymore.
  • Default configuration sets are designed for the majority of the OLTP use-cases. As a developer, I don't have to worry about tuning the MySQL configurations anymore.
  • Better Performance than MySQL hosted on EC2 instances. The Aurora architecture allows faster replication as well.
  • Access to slow query and error logs is a little cumbersome. Perhaps stream them to AWS Elasticsearch and provide search out of the box (even if it means additional cost).
  • Upgrading to higher versions of MySQL is a problem.
  • Failovers to a replica, although not needed often, could be made more seamless.
  • Well-defined Configuration Sets that take care of most workload requirements. No manual configuration is needed.
  • Auto expansion of the disks that makes scaling easier.
  • Better read/write performance as compared to self-hosted database instances.
  • Improved performance leading to better product experience.
  • As configurations are templated, fewer human errors, and higher stability.
  • Increased cost of the overall infra, but the performance and stability guarantees compensate for the higher cost.
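One partial workaround for the log-access pain point above is exporting Aurora MySQL's slow-query and error logs to CloudWatch Logs, where they can be searched or forwarded on. A minimal boto3 sketch (the cluster name and helper are illustrative, not part of our tooling):

```python
def slow_query_export_params(cluster_id, log_types=("slowquery", "error")):
    """Build the parameters for rds.modify_db_cluster() that enable
    streaming the given Aurora MySQL log types to CloudWatch Logs."""
    return {
        "DBClusterIdentifier": cluster_id,
        "CloudwatchLogsExportConfiguration": {"EnableLogTypes": list(log_types)},
    }

# Live usage (requires AWS credentials):
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_cluster(**slow_query_export_params("my-aurora-cluster"))
```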
Aurora vs RDS: Better replication and performance in Aurora as compared to RDS, with almost zero replication lag in most cases, which is a big improvement over RDS. Scaling and maintenance are easier, and the overall ROI is higher with Aurora.

Aurora vs Percona: Aurora comes well integrated with the AWS ecosystem, so it is easier to fit into your overall infrastructure if you are already on AWS.

MongoDB, Redis, Amazon Elastic Kubernetes Service (EKS)
Well Suited: If you have to manage 10 or more MySQL clusters in your environment, it is better to use Aurora and configure it via a Terraform provider. You don't have to worry about the scalability of your databases; it scales beautifully, with tons of features that make the scaling process easier. If you don't have a dedicated infrastructure team, use the managed service and let your developers focus on product development.

Less Appropriate: It can be a bit pricey; if you are operating under a tight budget, this may not be the right tool, as RDS is slightly cheaper than Aurora. Configurations and documentation can be confusing at times, but if you have access to AWS Solution Architects, it gets easier.

Using Amazon Aurora

150 - Aurora is our primary database for all data entities requiring relational semantics and ACID properties. It is used across all the engineering groups and services, which covers about 150 engineers across development, QA, and DevOps. It is also used by our Data team for ad-hoc analytics, reporting, and requests from the business teams.
4 - Our SRE/DevOps group of about 4 people manages about 80 Aurora clusters (1 Master, multiple Slaves). They are adept at systems provisioning, configuration, basic database administration, and Infrastructure as Code technologies like Terraform, and have expert programming skills in Python. We use Boto extensively for automating our infra-management tasks. That said, for a small Aurora set-up, no special skills are needed beyond a basic understanding of database administration.
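As an illustration of the kind of Boto-based automation mentioned above, here is a hypothetical helper (not our actual tooling) that splits one cluster entry from boto3's rds.describe_db_clusters() response into the writer and its readers:

```python
def split_writer_readers(cluster):
    """Given one entry of describe_db_clusters()["DBClusters"],
    return (writer_id, [reader_ids]) using the IsClusterWriter flag."""
    writer, readers = None, []
    for member in cluster.get("DBClusterMembers", []):
        if member.get("IsClusterWriter"):
            writer = member["DBInstanceIdentifier"]
        else:
            readers.append(member["DBInstanceIdentifier"])
    return writer, sorted(readers)

# Live usage (requires AWS credentials):
# import boto3
# for c in boto3.client("rds").describe_db_clusters()["DBClusters"]:
#     print(c["DBClusterIdentifier"], split_writer_readers(c))
```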
  • Primary datastore for entities requiring relational semantics and ACID properties.
  • Automated back-ups and point-in-time recovery capabilities.
  • Ability to auto-scale the readers (slaves) as the read query load increases.
  • We are also evaluating serverless Aurora to handle the bursty traffic.
  • Completely automating the provisioning of Aurora behind Terraform. We don't access the AWS console at all.
  • Differential back-ups and master-slave redundancies depending on criticality of the service and keeping costs in control.
  • Effective utilisation of the Key Management Service for encryption and decryption.
  • Use more Aurora instances as part of our Data Lake strategy, coupling it with S3 and Redshift.
  • Evaluate machine learning use-cases with Amazon SageMaker and Comprehend, as they have native integration with Aurora.
  • Aurora has helped us scale our data workloads by 10X in the last 3 years without needing to grow the DBA team.
  • It provides reliable performance and uptime guarantees. We have instances varying from 2 cores/8 GB RAM to 32 cores/256 GB RAM with heavily predictable workloads.
  • Manageable costs - the ROI on performance and costs is great!
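The reader auto-scaling mentioned earlier is driven by AWS Application Auto Scaling against the rds:cluster:ReadReplicaCount dimension. A minimal sketch of the registration parameters (the cluster name, capacities, and helper function are illustrative assumptions, not our production values):

```python
def reader_autoscaling_params(cluster_id, min_readers=1, max_readers=5):
    """Build the parameters for application-autoscaling's
    register_scalable_target(), which lets Aurora add or remove
    reader (replica) instances as read load changes."""
    return {
        "ServiceNamespace": "rds",
        "ResourceId": f"cluster:{cluster_id}",
        "ScalableDimension": "rds:cluster:ReadReplicaCount",
        "MinCapacity": min_readers,
        "MaxCapacity": max_readers,
    }

# Live usage (requires AWS credentials):
# import boto3
# boto3.client("application-autoscaling").register_scalable_target(
#     **reader_autoscaling_params("my-aurora-cluster"))
```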