Red Hat Ceph Storage

Score 9.1 out of 10

Overview

Recent Reviews

Simplified data storage platform

Score 10 out of 10, December 09, 2021
Our company depends on this platform to store our data with modern advanced tools. My team depends on the information stored in this …



Pricing

N/A (Unavailable)

What is Red Hat Ceph Storage?

Red Hat Ceph Storage is a software-defined storage solution based on the open source Ceph project, providing unified object, block, and file storage.

Entry-level setup fee?

  • No setup fee

Offerings

  • Free Trial
  • Free/Freemium Version
  • Premium Consulting / Integration Services


Alternatives Pricing

What is Azure Blob Storage?

Microsoft's Blob Storage system on Azure is designed to make unstructured data available to customers anywhere through REST-based object storage.

Features Scorecard

No scorecards have been submitted for this product yet.

Product Details


Red Hat Ceph Storage Technical Details

Operating Systems: Unspecified
Mobile Application: No


Reviews and Ratings (11)

Reviews

(1-3 of 3)
Valentin Höbel | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Reseller
Review Source
  • Very scalable solution
  • Providing very fast storage
  • Very good integration with KVM, libvirt and OpenStack through Cinder
  • Deployment of a Ceph cluster through the Management Console might fail in some cases; better error reporting would be a good improvement there.
  • The Management Console should provide more options for configuring the Ceph cluster in detail.
  • There should be a mechanism for distributing ceph.conf to all nodes.
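
For the ceph.conf distribution gap the reviewer mentions, ceph-deploy (the deployment tool of that Ceph era) can push the admin host's config file to every node. A minimal sketch, assuming passwordless SSH from the admin host and hypothetical hostnames node1 through node3:

```shell
# Push the local ceph.conf to each cluster node (hostnames are placeholders).
# --overwrite-conf replaces any stale copy already present on the node.
ceph-deploy --overwrite-conf config push node1 node2 node3
```

This only copies the file; any daemons that read changed settings at startup still need to be restarted on each node.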
Colby Shores | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Review Source
  • Highly resilient: almost every time we attempted to destroy the cluster, it was able to recover from the failure. It only struggled when the nodes were down to about 30% (3 replicas on 10 nodes).
  • The cache tiering feature of Ceph is especially nice. We attached solid-state disks and assigned them as the cache tier. Our sio benchmarks beat our NetApp (which we had benchmarked years earlier, with no traffic and clean disks) by a very wide margin.
  • Ceph effectively allows the admin to control the entire stack from top to bottom instead of being tied to any one storage vendor. The cluster can be decentralized and replicated across data centers if necessary; although we didn't try that feature ourselves, it gave us some ideas for a disaster recovery solution. We really liked the idea that, since we control the hardware and the software, we have infinite upgradability with off-the-shelf parts, which is exactly what Ceph was built for.
  • Ceph was very difficult to set up when we used it. One had to be very careful in assigning CRUSH maps and cache tiering to get it to work right; otherwise performance would be impacted and data would not be distributed evenly. Judging from the 0.96 version I ran, it really is intended for massive data centers in the petabyte range. Beyond that, the command-line arguments for ceph-deploy and ceph are very involved. I would strongly recommend this as a back end for OpenStack, with a dedicated Linux-savvy storage engineer. Red Hat also said they are working to turn Calamari into a full-featured front end for managing OSD nodes, which should make this much easier to manage in the future.
  • It should not be run on VMs, since it is not optimized for a VM kernel. This advice came directly from Red Hat. Unfortunately, this means that smaller use cases are out of the question, since it literally requires 10 physical machines, each with its own OS, to serve as individual OSD nodes.
  • I believe this is an issue with the OSDs and not the monitors, which ran fine for us in a virtual machine environment.
  • We were looking at using this as an NFS work-alike and encountered a couple of issues in our experiments. The MDS server struggled to mount the CephFS file system on more than a few systems without seizing up. This isn't a huge concern when Ceph is used as a back end for OpenStack, but using it as shared storage for production data on a web cluster proved problematic for us. We also would have liked NFS access to the Ceph cluster so we could attach it to VMware to store our VMDKs, since VMware does not support mounting CephFS. When we spoke with VMware about 7 months ago, they said NFS support was in the pipeline, which would address all of these concerns.
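
The cache tiering setup the reviewer describes maps to a handful of documented ceph CLI calls of that release era. A sketch, with pool names ssd-cache and cold-data as placeholder assumptions:

```shell
# Layer an SSD-backed pool as a writeback cache tier in front of the
# slower backing pool (both pool names are placeholders and must exist).
ceph osd tier add cold-data ssd-cache
ceph osd tier cache-mode ssd-cache writeback
ceph osd tier set-overlay cold-data ssd-cache
# A hit-set is required so Ceph can track object access for eviction.
ceph osd pool set ssd-cache hit_set_type bloom
```

Note that mis-sizing the cache pool or skipping the hit-set configuration leads to exactly the uneven performance the reviewer warns about, so target sizes and eviction thresholds deserve careful tuning.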