IBM® Storage Ceph® is a software-defined storage platform that consolidates block, file and object storage to help organizations eliminate data silos and deliver a cloud-like experience while retaining the cost benefits and data sovereignty advantages of on-premises IT.
Red Hat Gluster Storage
Score 6.0 out of 10
Red Hat Gluster Storage is a software-defined storage option; Red Hat acquired Gluster in 2011.
VSAN (Virtual SAN) and Ceph are both software-defined storage solutions, but they have some key differences in terms of their architecture and capabilities. VSAN is a software-defined storage solution that is built into the VMware vSphere hypervisor. It allows organizations to …
Our data centers use simpler hardware, and Red Hat Ceph Storage is simpler to use for moderate-sized data centers with simple hardware. Also, GlusterFS is more suitable for very large amounts of data (zettabytes) with large file sizes, which is not our requirement. It is easy to make …
Large scale data storage: Red Hat Ceph Storage is designed to be highly scalable and can handle large amounts of data. It's well suited for organizations that need to store and manage large amounts of data, such as backups, images, videos, and other types of multimedia content.
Cloud-based deployments: Red Hat Ceph Storage can provide object storage services for cloud-based applications such as SaaS and PaaS offerings. It is well suited for organizations that are looking to build their own cloud storage infrastructure or to use it as a storage backend for their cloud-based applications.
High-performance computing: Red Hat Ceph Storage can be used to provide storage for high-performance computing (HPC) applications, such as scientific simulations and other types of compute-intensive workloads. It's well suited for organizations that need to store
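The cloud/object-storage use case above is easiest to picture through Ceph's S3-compatible RADOS Gateway (RGW). The sketch below is illustrative only: the endpoint URL, credentials, bucket name, and file name are placeholders, and it assumes an RGW user has already been created with radosgw-admin.

```python
import boto3  # AWS SDK for Python; also works against Ceph's S3-compatible RADOS Gateway

# Placeholders: swap in your own RGW endpoint and the keys radosgw-admin issued.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create a bucket in the Ceph cluster and upload a backup artifact to it.
s3.create_bucket(Bucket="backups")
s3.upload_file("db-dump.tar.gz", "backups", "2024-01-01/db-dump.tar.gz")

# List what the cluster is now holding for us.
for obj in s3.list_objects_v2(Bucket="backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because RGW speaks the S3 protocol, existing SaaS/PaaS code written for S3 can usually be pointed at a Ceph backend by changing only the endpoint and credentials.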
GFS is well suited for DevOps-type environments where organizations prefer to invest in servers and DAS (direct-attached storage) versus purchasing storage solutions/appliances. GFS allows organizations to scale their storage capacity at a fraction of the price using DAS HDDs, versus committing to purchase licenses and hardware from a dedicated storage manufacturer (e.g., NetApp, Dell/EMC, HP, etc.).
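To make the scale-out-on-DAS point concrete: once a Gluster volume is mounted on a client, applications treat it like any other POSIX filesystem and need no Gluster-specific code, while Gluster spreads the data across the bricks behind it. The mount point and file names below are hypothetical, and the volume is assumed to be mounted already (e.g. with `mount -t glusterfs <server>:/<volume> /mnt/gfs`).

```python
from pathlib import Path

# Hypothetical client-side mount point of an existing Gluster volume.
mount = Path("/mnt/gfs")

# Ordinary file operations; Gluster handles distribution/replication underneath.
report = mount / "reports" / "q3.txt"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("quarterly storage report\n")
print(report.read_text())
```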
Highly resilient; almost every time we attempted to destroy the cluster it was able to recover from the failure. It only struggled when the nodes were down to about 30% (3 replicas on 10 nodes).
The cache tiering feature of Ceph is especially nice. We attached solid-state disks and assigned them as the cache tier (see the configuration sketch below). Our sio benchmarks beat our NetApp, which we had benchmarked years ago (no traffic, clean disks), by a very wide margin.
Ceph effectively allows the admin to control the entire stack from top to bottom instead of being tied to any one storage vendor. The cluster can be decentralized and replicated across data centers if necessary; although we didn't try that feature ourselves, it gave us some ideas for a disaster recovery solution. We really liked the idea that since we control the hardware and the software, we have infinite upgradability with off-the-shelf parts, which is exactly what it was built for.
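The cache-tier setup mentioned above is normally done with the ceph CLI (`ceph osd tier add`, `ceph osd tier cache-mode`, `ceph osd tier set-overlay`). Below is a minimal Python sketch using the librados monitor-command interface instead; the pool names are placeholders and the JSON parameter names are assumptions that mirror the CLI arguments, so treat it as illustrative rather than authoritative.

```python
import json
import rados  # python3-rados bindings, shipped with Ceph

# Assumptions: /etc/ceph/ceph.conf points at a reachable cluster, and the
# pools "cold-pool" (HDD-backed) and "hot-pool" (SSD-backed) already exist.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()

def mon(cmd):
    """Send one monitor command; cmd is a dict mirroring the ceph CLI arguments."""
    ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
    if ret != 0:
        raise RuntimeError(err)
    return out

# Equivalent to: ceph osd tier add cold-pool hot-pool
mon({"prefix": "osd tier add", "pool": "cold-pool", "tierpool": "hot-pool"})
# Equivalent to: ceph osd tier cache-mode hot-pool writeback
mon({"prefix": "osd tier cache-mode", "pool": "hot-pool", "mode": "writeback"})
# Equivalent to: ceph osd tier set-overlay cold-pool hot-pool
mon({"prefix": "osd tier set-overlay", "pool": "cold-pool", "overlaypool": "hot-pool"})

cluster.shutdown()
```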
Scales; bricks can be easily added to increase storage capacity
Performs; I/O is spread across multiple spindles (HDDs), thereby increasing read and write performance
Integrates well with RHEL/CentOS 7; if your organization is using RHEL 7, Gluster (GFS) integrates extremely well with that baseline, especially since it's come under the Red Hat portfolio of tools.
Documentation; the readthedocs site shows that the Gluster project's documentation isn't always kept up to date. Many of the guides are for previous versions of the product and can be cumbersome to follow at times.
Self-healing; our use of GFS required the administrator to trigger a heal operation manually whenever bricks were added/removed from the pool. Autonomous self-healing whenever a brick is added/removed from the pool would be a great feature to incorporate.
Performance metrics are scarce; our team received feedback that online RDBMS transactions did not perform well on distributed file systems (such as GFS); however, this could not be substantiated via any online research or white papers.
MongoDB offers better search ability compared to Red Hat Ceph Storage, but it's more optimized for a large number of objects, while Red Hat Ceph Storage is preferred if you need to store binary data or large individual objects. To get acceptable search functionality you really need to combine Red Hat Ceph Storage with another database where the search metadata related to the Red Hat Ceph Storage objects is stored.
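One way to picture the object-store-plus-metadata-database pattern described above: keep the large binaries in Ceph (here via the S3-compatible RADOS Gateway and boto3, as in the earlier sketch) and keep the searchable metadata in a separate database (SQLite here, purely for illustration). The endpoint, credentials, bucket, keys, and tags are all hypothetical, and the bucket is assumed to already exist.

```python
import sqlite3
import boto3

# Hypothetical RGW endpoint and credentials, as in the earlier sketch.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# A small relational index holding only the searchable metadata.
db = sqlite3.connect("object_index.db")
db.execute("CREATE TABLE IF NOT EXISTS objects (key TEXT PRIMARY KEY, tag TEXT)")

# The large binary payload goes to Ceph; only the metadata goes to the database.
s3.put_object(Bucket="media", Key="cat-video.mp4", Body=b"\x00" * 1024)  # stand-in payload
db.execute("INSERT OR REPLACE INTO objects VALUES (?, ?)", ("cat-video.mp4", "video"))
db.commit()

# Searching happens in the metadata database; the objects themselves stay in Ceph.
for (key,) in db.execute("SELECT key FROM objects WHERE tag = ?", ("video",)):
    print("found in Ceph:", key)
```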
Gluster is a lot lower cost than the storage industry leaders. However, NetApp's and Dell/EMC's product documentation is (IMHO) more mature and hardened against usage in operational scenarios and environments. Using Gluster avoids "vendor lock-in" in the sense of not having to purchase dedicated hardware and licenses to run it. That said, should an organization choose to pay for Gluster support, they would be paying licensing costs to Red Hat instead of NetApp, Dell, EMC, HP, or VMware. It could be assumed, however, that an organization wanting to use Gluster is already a Linux shop and potentially already paying Red Hat or Canonical (Ubuntu) for product support, so the use of GFS would be a nominal cost adder from a maintenance/training perspective.