Red Hat Ceph Storage



What is Red Hat Ceph Storage?

Red Hat Ceph Storage is a software-defined storage option.


Product Demos

Ceph Storage (Quincy) || Setup Ceph Admin Node || Perform Ceph Administration tasks


Ceph Storage [Quincy] || Setup Ceph Client Node || Connect Ceph Cluster and run Ceph commands


Using Open Data Hub for MLOps Demo


Red Hat Ceph Storage 5: Insert new disk


Product Details


Red Hat Ceph Storage Technical Details

Operating Systems: Unspecified
Mobile Application: No



Reviews and Ratings



Showing reviews 1-6 of 6
mustafa mahmoud | TrustRadius Reviewer
Score 8 out of 10
Vetted Review
Verified User
One of the main advantages of Red Hat Ceph Storage is its self-healing capability and its ability to provide object, block, and file storage in a single platform. This allows organizations to easily manage and access their data, regardless of the type of data or the specific use case. Additionally, the software is designed to be highly scalable, which means that organizations can easily add more storage capacity as their data grows.
  • Self-healing
  • Redundancy
  • Scalable and cost-effective storage solution
  • Provides object, block, and file storage in a single platform
  • Limited integration with other tools: While Red Hat Ceph Storage can be integrated with other Red Hat products, it may not integrate seamlessly with other tools that organizations are already using.
  • Lack of built-in data compression and deduplication: Red Hat Ceph Storage does not have built-in data compression and deduplication capabilities. This can lead to increased storage costs for organizations that are storing large amounts of data.
  • Limited monitoring
  • Limited to 32 storage nodes on Proxmox
Large-scale data storage: Red Hat Ceph Storage is designed to be highly scalable and can handle large amounts of data. It's well suited for organizations that need to store and manage large amounts of data, such as backups, images, videos, and other multimedia content.

Cloud-based deployments: Red Hat Ceph Storage can provide object storage services for cloud-based applications such as SaaS and PaaS offerings. It is well suited for organizations that are looking to build their own cloud storage infrastructure or to use it as a storage backend for their cloud-based applications.

High-performance computing: Red Hat Ceph Storage can be used to provide storage for high-performance computing (HPC) applications, such as scientific simulations and other compute-intensive workloads. It's well suited for organizations that need to store large volumes of data for such workloads.
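The scalability this review describes comes from Ceph's hash-based placement: an object name is hashed to a placement group (PG), and the PG is mapped to a set of OSDs, so no central lookup table grows with the data. The sketch below is an illustrative stand-in only; real Ceph uses a Jenkins hash and the CRUSH algorithm, not SHA-1 or the round-robin walk used here, and `place_object` is a hypothetical name.

```python
import hashlib

def place_object(obj_name, pg_num, osds, replicas=3):
    """Toy model: object name -> placement group -> ordered replica OSDs.

    Illustrative stand-in for Ceph's rjenkins hash + CRUSH, not the
    real algorithm.
    """
    # Hash the object name to pick a placement group deterministically.
    h = int(hashlib.sha1(obj_name.encode()).hexdigest(), 16)
    pg = h % pg_num
    # Derive `replicas` distinct OSDs from the PG id (round-robin walk).
    start = pg % len(osds)
    return [osds[(start + i) % len(osds)] for i in range(replicas)]

osds = [f"osd.{i}" for i in range(10)]
print(place_object("backup-2023-01.tar", pg_num=128, osds=osds))
```

Because placement is computed rather than looked up, any client can locate an object's replicas independently, which is why adding nodes scales capacity without a metadata bottleneck.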
Asad Khan | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Red Hat Ceph Storage is an excellent solution for managing the data of your data center. I use it to manage the data of telco applications called VNFs, and it provides automatic software-defined storage management for your cloud rack without a dependency on third-party (3PP) storage vendors like EMC and NetApp. The replication, resilience, and recovery mechanisms are just awesome. Red Hat Ceph Storage gives you the flexibility to define separate storage pools (fast or slow pools) for separate applications, and these pools can be defined on top of supported storage devices like SSD or NVMe. Once you set up Red Hat Ceph Storage, you don't have to worry about data replication and recovery except in the case of a hardware fault.
  • Data replication
  • Data recovery (in case of an HDD fault)
  • Ease of maintenance via the Ceph CLI
  • GUI-based maintenance should be developed
  • Unable to detect storage latencies
  • VM-to-disk mapping should be visible, so as to save critical application data in case of HDD failures
Well suited for large-scale private data centers and almost all places where mission-critical data is handled, such as:
  • Enterprise
  • Telcos
  • Healthcare
  • Banking
  • IT, etc.
This is because Ceph provides data management capabilities without third-party storage vendors.

Less appropriate for POCs & lab environments because of initial setup complexities
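The fast/slow pool separation this reviewer describes can be pictured with a small toy model: each pool is pinned to a device class, and writes to a pool land only on that class of device. Everything below (`Cluster`, `Pool`, the pool names) is hypothetical and illustrative; in real Ceph this is done with CRUSH rules and device classes, not Python objects.

```python
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    device_class: str              # e.g. "ssd" (fast) or "hdd" (slow)
    objects: dict = field(default_factory=dict)

class Cluster:
    """Toy model of per-application pools pinned to device classes."""
    def __init__(self):
        self.pools = {}

    def create_pool(self, name, device_class):
        self.pools[name] = Pool(name, device_class)

    def put(self, pool_name, key, data):
        # A write only ever lands on the devices backing the chosen pool.
        self.pools[pool_name].objects[key] = data

cluster = Cluster()
cluster.create_pool("vnf-fast", device_class="ssd")   # latency-sensitive apps
cluster.create_pool("vnf-bulk", device_class="hdd")   # bulk/archival data
cluster.put("vnf-fast", "session-state", b"hot data")
```

The design point is that the application chooses a pool, and the pool (not the application) determines which hardware serves the I/O.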

Score 8 out of 10
Vetted Review
Verified User
We use Red Hat Ceph Storage to store large binary objects of unstructured data. We ended up in a situation where storing large objects in a relational database wasn't cost-effective to scale, so we changed our approach: structured data is stored in a relational database, while binary objects such as photos, videos, and documents are stored in Red Hat Ceph Storage.
  • Cost effective storage
  • Partitioning data in separate buckets
  • Ability to store large individual objects
  • Authorization on object level could be improved
  • Helper libraries to access Red Hat Ceph Storage from various languages could be improved
  • Ability to attach structured metadata to stored objects could be improved
Red Hat Ceph Storage is a good solution if you need to store large amounts of unstructured data and need to protect it with authorization. If you need to search for specific data in Red Hat Ceph Storage, you often need to combine it with a relational database or a search index like Elasticsearch. It's hard to justify Red Hat Ceph Storage usage if you only need to handle a limited amount of data.
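The split this review describes (structured rows in a relational database, binary blobs in the object store) can be sketched as follows. The in-memory dict stands in for the object store; in production those would be calls to an S3-compatible endpoint such as the Ceph Object Gateway. The `store_photo` helper and the `photos` schema are made up for illustration.

```python
import hashlib
import sqlite3

# Stand-in for the object store; real code would call an S3-compatible API.
object_store = {}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, name TEXT, blob_key TEXT)")

def store_photo(name, payload):
    # Content-address the blob, put it in the object store, and keep only
    # the small structured record (name + key) in the relational database.
    key = hashlib.sha256(payload).hexdigest()
    object_store[key] = payload
    db.execute("INSERT INTO photos (name, blob_key) VALUES (?, ?)", (name, key))
    return key

key = store_photo("cat.jpg", b"\xff\xd8fake-jpeg-bytes")
row = db.execute("SELECT name FROM photos WHERE blob_key = ?", (key,)).fetchone()
print(row[0])   # cat.jpg
```

Searches and joins run against the small relational side; the object store only ever sees get/put by key, which is the access pattern it scales best at.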
Gerald Wilson | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Our company depends on this platform to store our data with modern advanced tools. My team depends on the information stored in this product to get analytics. Monitoring our operations is easy from reliable insights that keep our organization running. Shrinking storage volumes to store different scaled-down units is easy. Most of the data that is left unattended is currently in a safe environment and can be attended to later when the demand arises.
  • Breaking down storage units into manageable clusters.
  • Enhancing data protection.
  • The massive storage platform has little room for improvement.
  • No performance failures observed.
I believe in storage packages offered by this product and totally recommend it. The installation process of this tool is easy and customization to different clusters is flexible. The cost of purchasing depends on the purpose of the consumer and no further additional maintenance cost after purchasing. It is suitable for storing large volumes of data with powerful integrated data protection tools.
Valentin Höbel | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
I used Red Hat Ceph Storage by deploying it within a larger PoC setup for a customer. The customer required a storage solution for storing VM disks and user data (files etc.).

Red Hat Ceph Storage was also used as a storage backend for Red Hat Enterprise Virtualization (RHEV).

Since this was a PoC project, this solution was only used for a specific time.
  • Very scalable solution
  • Providing very fast storage
  • Very good integration with KVM, libvirt and OpenStack through Cinder
  • Deployment of Ceph cluster through the Management Console might fail in some cases; better error reporting would be a good improvement there.
  • The Management Console should provide more options for configuring the Ceph cluster in detail.
  • There should be a mechanism for distributing ceph.conf to all nodes.
Red Hat Ceph Storage is very well suited for providing fast and scalable object storage and storage for virtualization hosts. One of the main advantages is that Ceph allows horizontal scaling by adding more and more nodes within hours. A scenario where using Ceph is less appropriate is when one needs a distributed, POSIX-compliant filesystem. While CephFS is considered production-ready, there are better solutions in many cases.
Colby Shores | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
We were planning on using Ceph storage at one point as a replacement for our NetApp. We had the equipment available on hand to make it work, but in the end our experimentation showed it wasn't quite the fit we were looking for. We were looking for a highly resilient storage medium to hold our production data and, eventually, the VMs themselves.
  • Highly resilient: almost every time we attempted to destroy the cluster, it was able to recover from the failure. It only struggled when the nodes were down to about 30% (3 replicas on 10 nodes).
  • The cache tiering feature of Ceph is especially nice. We attached solid-state disks and assigned them as the cache tier. Our sio benchmarks beat our NetApp (benchmarked years ago, with no traffic and clean disks) by a very wide margin.
  • Ceph effectively allows the admin to control the entire stack from top to bottom instead of being tied to any one storage vendor. The cluster can be decentralized and replicated across data centers if necessary; although we didn't try that feature ourselves, it gave us some ideas for a disaster recovery solution. We really liked the idea that since we control the hardware and the software, we have infinite upgradability with off-the-shelf parts, which is exactly what it was built for.
  • Ceph was very difficult to set up when we used it. One had to be very careful in how they assigned their CRUSH maps and cache tiering to get it to work right; otherwise performance would be impacted and data would not be distributed evenly. From the 0.96 version I ran, it really is intended for massive data centers in the petabytes. Beyond that, the command-line arguments for ceph-deploy and ceph are very involved. I would strongly recommend this as a backend for OpenStack with a dedicated Linux-savvy storage engineer. Red Hat also said they are working to turn Calamari into a full-featured front end to manage OSD nodes, which should make this much easier to manage in the future.
  • It should not be run from within VMs, since it is not optimized for a VM kernel. This advice is coming directly from Red Hat. Unfortunately this means that smaller use cases are out of the question, since it literally requires 10 physical machines, each with their own OS, to become individual OSD nodes.
  • I believe this is an issue with the OSDs and not the monitors which ran fine for us in a virtual machine environment.
  • We were looking at using this as an NFS work-alike and in our experiments encountered a couple of issues. The MDS server struggled to mount the CephFS file system on more than a few systems without seizing up. This isn't a huge concern when it is used as a backend for OpenStack; however, using this as shared storage for production data on a web cluster proved problematic for us. We also would have liked to have NFS access to the Ceph monitors so we could attach this to VMware in order to store our VMDKs, since VMware does not support mounting CephFS. When we spoke with VMware about 7 months ago, they said NFS support is in the pipeline, which will address all of these concerns.
It is absolutely, hands down, the best storage solution for OpenStack. I would even argue it is the only solution if a company is operating at petabyte scale and needs resiliency. The storage solution allows any organization to scale their environment using commodity hardware from top to bottom. It has a battle-tested track record; it is even being used as the data storage backend for the Large Hadron Collider at CERN.
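The cache tiering behavior praised in this review (a small fast tier in front of a large slow tier, with hot objects promoted on access) can be modeled with a simple LRU cache. This is a toy illustration of the general promote-and-evict idea, not Ceph's actual cache-tier implementation, and `CacheTier` is a made-up name.

```python
from collections import OrderedDict

class CacheTier:
    """Toy model of cache tiering: reads check the small fast tier first;
    misses are promoted from the slow backing tier, evicting the least
    recently used object when the cache is full."""
    def __init__(self, backing, capacity):
        self.backing = backing            # slow tier (e.g. HDD pool)
        self.capacity = capacity          # size of the fast tier
        self.cache = OrderedDict()        # fast tier (e.g. SSD pool)

    def read(self, key):
        if key in self.cache:             # cache hit: refresh recency
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.backing[key]         # miss: fetch from the slow tier
        self.cache[key] = value           # promote into the fast tier
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the LRU object
        return value

hdd = {f"obj{i}": f"data{i}" for i in range(5)}
tier = CacheTier(hdd, capacity=2)
tier.read("obj0"); tier.read("obj1"); tier.read("obj2")
print(list(tier.cache))   # ['obj1', 'obj2'] — obj0 was evicted
```

This is also why the reviewer's SSD cache tier beat their filer on benchmarks with clean disks: the hot working set fits entirely in the fast tier, so the slow tier is rarely touched.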