TrustRadius
Get a cache server on steroids with Redis and get rid of those Memcached instances
Leonel Quinteros
Updated September 16, 2019


Score 9 out of 10
Vetted Review
Verified User

Overall Satisfaction with Redis

We use Redis as a Cache DB in a microservices environment to store auth tokens, temporary data and sync flags to coordinate processes that are handled by multiple parties asynchronously.
The main problem it solves for us is the need for a high-performance cache that also provides data persistence, so we can restart instances and deploy new ones without losing data in the middle. This is very important for us because of the problem we're tackling: in the case of auth tokens, we don't want to make all users log in again after we restart an instance because the memory got cleared. The same applies to the sync flags that our processes depend on to complete.
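As a rough sketch of those two use cases (assuming the redis-py client; key names and TTL values here are made up for illustration), it could look roughly like this:

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Auth token cached with a TTL; RDB/AOF persistence keeps it across restarts.
    r.setex("auth:token:user:42", 3600, "opaque-session-token")

    # Sync flag written only if absent, so concurrent workers can agree on
    # which one completed a given step of an async process.
    first_to_finish = r.set("sync:order:1001:payment-confirmed", "1", nx=True)
    if first_to_finish:
        print("this worker recorded the flag")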
  • High performance. Redis is FAST, really fast.
  • Data persistence. Having this feature was the main reason we chose Redis over Memcached.
  • Clustering. Distributing data between multiple instances is easy to do with Redis.
  • Data types. It isn't common for cache servers to support native data types, but Redis covers many areas of this use case (see the sketch after these lists).
  • The set of data types isn't extensive and can fall short for some needs.
  • Single-threaded. Redis doesn't support multi-threading, so it won't benefit from multi-core CPUs. Instead, you need to deploy several single-core instances to scale horizontally. While this is a design decision, it may be a downside on some infrastructures.
  • Lack of UI. The absence of a visual management UI can be a downer for some users.
  • Implementing Redis for the first time in a project was super easy and it didn't add any noticeable cost to development or release processes.
  • Replacing Memcached use cases with Redis was also almost entirely transparent during implementation.
  • Having a high-performance/high-availability software solution for free and open source is a great option in this market.
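To illustrate the native data types mentioned above (hashes, lists, sets), a minimal redis-py sketch with made-up key names could look like this:

    import redis

    r = redis.Redis(decode_responses=True)

    # Hash: structured temporary data under a single key.
    r.hset("session:42", mapping={"user": "leonel", "role": "admin"})

    # List: a simple FIFO work queue.
    r.rpush("queue:emails", "welcome:42")
    job = r.lpop("queue:emails")

    # Set: track unique members, e.g. instances that reported in.
    r.sadd("instances:alive", "pod-a", "pod-b")
    print(r.smembers("instances:alive"))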
Being able to deploy different instances of Redis to cover caching, messaging, pub/sub, syncs, and temp storage is helpful as we don't need expertise in many different solutions for all these cases. By just deploying Redis and tweaking each instance for its use case, we get more value from our initial investment (which is only in manpower, because Redis is free and open source), and we can focus more on our business and less on infrastructure/implementation details.
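For the messaging/pub-sub case, a minimal sketch (redis-py again, with a hypothetical channel name) might be:

    import redis

    r = redis.Redis(decode_responses=True)

    pubsub = r.pubsub()
    pubsub.subscribe("deploys")

    # Some other process or instance publishes an event:
    r.publish("deploys", "service-x:v1.2.3")

    # Consume the next real message (skipping the subscribe confirmation).
    for message in pubsub.listen():
        if message["type"] == "message":
            print("received:", message["data"])
            break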
Yes - We replaced some instances of Memcached with Redis.
The main driver for that replacement was the ability of Redis to quickly add data persistence to the in-memory cache functionality. This is super helpful in a microservices environment where all instances can be restarted and redeployed and you don't want to lose data on each deploy. With Memcached this is impossible to achieve, while with Redis it's pretty straightforward, without losing performance or availability.
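Persistence is normally configured in redis.conf (the appendonly and save directives), though it can also be inspected or toggled at runtime; the snippet below is only a sketch, not our actual configuration:

    import redis

    r = redis.Redis(decode_responses=True)

    r.config_set("appendonly", "yes")   # AOF: log every write to disk
    print(r.config_get("appendonly"))   # {'appendonly': 'yes'}
    print(r.config_get("save"))         # RDB snapshot schedule, if any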
  • Price
  • Product Features
  • Product Usability
  • Product Reputation
  • Prior Experience with the Product
  • Third-party Reviews
The single most important factor in implementing Redis was its features, especially data persistence, replication/clustering, and data types. These are the things that almost all other in-memory cache systems lack. There are NoSQL DB systems and other solutions that cover these features, but they're way too heavy for our use case. Redis sits in a very well defined middle ground between these other approaches.
We've deployed Redis into a Kubernetes cluster by just using their Docker images.
Deploying this way saves a good amount of time, given that you just need to write the Pod configuration and point it at the pre-built Docker image. After that, it's just a matter of deploying your Kubernetes environment and you're good to go.
From there, using an SDK for your programming language is mostly transparent for any developer, and then you have your application integrated with Redis in no time.
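As a sketch of what that SDK integration looks like from another pod in the same cluster (the Service name "redis" and the "default" namespace are assumptions; use whatever your Service manifest defines):

    import redis

    r = redis.Redis(host="redis.default.svc.cluster.local", port=6379,
                    decode_responses=True)

    r.set("healthcheck", "ok", ex=30)
    print(r.get("healthcheck"))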
Whenever you don't need a document DB, you can't go wrong choosing Redis over MongoDB.
Google Cloud Pub/Sub may have solved one use case, but we'd still have to deploy Redis instances for other use cases, and adding another tech stack would only add complexity to our infrastructure management.
The main competitor for Redis in the cache server space is Memcached, but it falls short on some features like data persistence and data types.
Redis is great for any cache service that needs data persistence. If you need a super-fast cache, you can always use it as a pure in-memory cache (without persistence) to improve performance and still get all the benefits of the service.
It's usually compared to Memcached, and in terms of performance I think they're very similar; for some critical applications, Memcached may be a better option. But the rich feature set of Redis positions it more competitively across many applications.

Redis Feature Ratings

Performance: 9
Availability: 10
Concurrency: 6
Security: 5
Scalability: 8
Data model flexibility: 7
Deployment model flexibility: 9