TrustRadius

Red Hat Ceph Storage Reviews

Red Hat Ceph Storage
5 Ratings
Score 8.3 out of 10
Reviews (1-2 of 2)
Valentin Höbel
October 25, 2017

Review: "Red Hat Ceph Storage is a great storage solution, but the management console could be better"

Score 9 out of 10
Vetted Review
Reseller
Review Source
I used Red Hat Ceph Storage by deploying it within a larger PoC setup for a customer. The customer required a storage solution for storing VM disks and user data (files etc.).

Red Hat Ceph Storage was also used as a storage backend for Red Hat Enterprise Virtualization (RHEV).

Since this was a PoC project, the solution was only used for a limited time.
  • Very scalable solution
  • Provides very fast storage
  • Very good integration with KVM, libvirt, and OpenStack through Cinder (see the sketch after this list)
  • Deployment of the Ceph cluster through the Management Console might fail in some cases; better error reporting would be a good improvement there.
  • The Management Console should provide more options for configuring the Ceph cluster in detail.
  • There should be a mechanism for distributing ceph.conf to all nodes.
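In a setup like this, VM disks typically live in the cluster as RBD images that Cinder hands to the hypervisors. Below is a minimal sketch of creating such an image with the python-rados and python-rbd bindings; the pool name 'vms' and the image name are placeholders, and it assumes a reachable cluster with a local /etc/ceph/ceph.conf and client keyring.

    import rados
    import rbd

    # Connect using the local ceph.conf and the default client keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # 'vms' is a placeholder pool name; in an OpenStack deployment, Cinder
        # manages images in whatever pool its RBD backend is pointed at.
        ioctx = cluster.open_ioctx('vms')
        try:
            # Create a 20 GiB RBD image to back a VM disk.
            rbd.RBD().create(ioctx, 'vm-disk-demo', 20 * 1024 ** 3)
            image = rbd.Image(ioctx, 'vm-disk-demo')
            try:
                print(f'created image of {image.size()} bytes')
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

libvirt can attach such images to guests over the rbd protocol, and Cinder's rbd volume driver performs the equivalent calls automatically once it is configured against the pool.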
Red Hat Ceph Storage is very well suited for providing fast, scalable object storage and storage for virtualization hosts. One of the main advantages is that Ceph allows horizontal scaling by adding more and more nodes within hours. A scenario where Ceph is less appropriate is when one needs a distributed, POSIX-compliant filesystem. While CephFS is considered production-ready, there are often better solutions for that use case.
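The scale-out behaviour is easy to observe from any client: the raw capacity reported by the cluster simply grows as OSD nodes are added. A minimal sketch using the python-rados binding, again assuming a local /etc/ceph/ceph.conf and keyring:

    import rados

    # Report raw cluster capacity; the totals grow as OSD nodes are added.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()  # keys: kb, kb_used, kb_avail, num_objects
        total_tib = stats['kb'] / 1024.0 ** 3
        used_tib = stats['kb_used'] / 1024.0 ** 3
        num_objects = stats['num_objects']
        print(f'raw capacity: {total_tib:.1f} TiB, used: {used_tib:.1f} TiB, objects: {num_objects}')
    finally:
        cluster.shutdown()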
Read Valentin Höbel's full review
Colby Shores
September 09, 2016

"Red Hat Ceph Storage is the most cost effective and resilient storage solution when operating at petabyte scale!"

Score 9 out of 10
Vetted Review
Verified User
Review Source
We were planning on using Ceph storage at one point as a replacement for our NetApp. We had the equipment on hand to make it work, but in the end our experimentation showed it wasn't quite the fit we were looking for. We were looking for a highly resilient storage medium to hold our production data and, eventually, the VMs themselves.
  • Highly resilient: almost every time we attempted to destroy the cluster, it was able to recover from the failure. It only struggled when the nodes were down to about 30% (3 replicas on 10 nodes).
  • The cache tiering feature of Ceph is especially nice. We attached solid-state disks and assigned them as the cache tier. Our sio benchmarks beat our NetApp (which we had benchmarked years earlier with no traffic and clean disks) by a very wide margin.
  • Ceph effectively allows the admin to control the entire stack from top to bottom instead of being tied to any one storage vendor. The cluster can be decentralized and replicated across data centers if necessary; although we didn't try that feature ourselves, it gave us some ideas for a disaster recovery solution. We really liked the idea that since we control the hardware and the software, we have effectively unlimited upgradability with off-the-shelf parts, which is exactly what it was built for.
  • Ceph was very difficult to set up when we used it. One had to be very careful in assigning CRUSH maps and cache tiering to get it to work right; otherwise performance suffered and data was not distributed evenly (see the sketch after this list). Based on the 0.96 version I ran, it really is intended for massive data centers in the petabyte range. Beyond that, the command-line arguments for ceph-deploy and ceph are very involved. I would strongly recommend this as a back end for OpenStack, with a dedicated Linux-savvy storage engineer. Red Hat also said they are working to turn Calamari into a full-featured front end for managing OSD nodes, which should make this much easier to manage in the future.
  • It should not be run on VMs, since it is not optimized for a virtualized kernel; this advice came directly from Red Hat. Unfortunately, this means that smaller use cases are out of the question, since it requires 10 physical machines, each with its own OS, to serve as individual OSD nodes.
  • I believe this is an issue with the OSDs and not the monitors, which ran fine for us in a virtual machine environment.
  • We were looking at using this as an NFS work-alike and encountered a couple of issues in our experiments. The MDS server struggled to mount the CephFS file system on more than a few systems without seizing up. This isn't a huge concern when it is used as a back end for OpenStack; however, using it as shared storage for production data on a web cluster proved problematic for us. We also would have liked NFS access to the Ceph monitors so we could attach this to VMware in order to store our VMDKs, since VMware does not support mounting CephFS. When we spoke with VMware about 7 months ago, they said NFS support is in the pipeline, which will address all of these concerns.
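For context on the cache-tiering setup described above, the steps boil down to creating a backing pool and an SSD-backed cache pool and wiring them together. Here is a minimal sketch that wraps the standard ceph CLI from Python; the pool names are placeholders, a CRUSH rule pinning the cache pool to the SSD OSDs is assumed to already exist (omitted here), and the host needs an admin keyring.

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; assumes /etc/ceph/ceph.conf and
        # an admin keyring are present on this host.
        subprocess.run(('ceph',) + args, check=True)

    # Placeholder pools: a slow backing pool and an SSD-backed cache pool
    # (each created with 128 placement groups for illustration).
    ceph('osd', 'pool', 'create', 'cold-storage', '128')
    ceph('osd', 'pool', 'create', 'hot-cache', '128')

    # Attach the cache pool in front of the backing pool in writeback mode
    # and route client I/O through it.
    ceph('osd', 'tier', 'add', 'cold-storage', 'hot-cache')
    ceph('osd', 'tier', 'cache-mode', 'hot-cache', 'writeback')
    ceph('osd', 'tier', 'set-overlay', 'cold-storage', 'hot-cache')

    # Hit-set tracking is required for the cache tier to make promotion
    # and eviction decisions.
    ceph('osd', 'pool', 'set', 'hot-cache', 'hit_set_type', 'bloom')

Getting the CRUSH rule and the eviction thresholds (target_max_bytes, cache_target_dirty_ratio, and so on) right is exactly the fiddly part the review describes.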
It is absolutely, hands down, the best storage solution for OpenStack. I would even argue it is the only solution if a company is operating at petabyte scale and needs resiliency. It allows any organization to scale their environment using commodity hardware from top to bottom. It has a battle-tested track record and is even being used as the data storage back end for the Large Hadron Collider at CERN.
Read Colby Shores's full review


About Red Hat Ceph Storage

Red Hat Ceph Storage is a software-defined storage option.

Red Hat Ceph Storage Technical Details

Operating Systems: Unspecified
Mobile Application: No