Large-scale data storage: Red Hat Ceph Storage is designed to be highly scalable and can handle large amounts of data. It's well suited for organizations that need to store and manage large volumes of data, such as backups, images, videos, and other multimedia content.

Cloud-based deployments: Red Hat Ceph Storage can provide object storage services for cloud-based applications such as SaaS and PaaS offerings. It's well suited for organizations looking to build their own cloud storage infrastructure or to use it as a storage backend for their cloud-based applications.

High-performance computing: Red Hat Ceph Storage can provide storage for high-performance computing (HPC) applications, such as scientific simulations and other compute-intensive workloads.
For a large, robust, well-secured, and stable storage system, TrueNAS is very well suited. Virtual machine support is great. Shared filesystems (SMB, NFS, iSCSI, WebDAV, AFP) are very well implemented. Time Machine support is fantastic, and security is very granular. Do not try to use it as a replacement for VMware, though (no migration, etc.).
Highly resilient: almost every time we attempted to destroy the cluster, it was able to recover from the failure. It only struggled when about 30% of the nodes were down (3 replicas on 10 nodes).

The cache tiering feature of Ceph is especially nice. We attached solid-state disks and assigned them as the cache tier. Our sio benchmarks beat our NetApp (which we benchmarked years ago with no traffic and clean disks) by a very wide margin.

Ceph effectively allows the admin to control the entire stack from top to bottom instead of being tied to any one storage vendor. The cluster can be decentralized and replicated across data centers if necessary; although we didn't try that feature ourselves, it gave us some ideas for a disaster recovery solution. We really liked that since we control both the hardware and the software, we have effectively unlimited upgradability with off-the-shelf parts, which is exactly what Ceph was built for.
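The SSD cache tier described above can be sketched with Ceph's CLI. The pool names (`ssd-cache`, `hdd-data`) are hypothetical, and note that cache tiering has been deprecated in recent Ceph releases, so check the documentation for your version first:

```shell
# Attach an SSD-backed pool as a cache tier in front of an HDD-backed pool.
ceph osd tier add hdd-data ssd-cache            # make ssd-cache a tier of hdd-data
ceph osd tier cache-mode ssd-cache writeback    # absorb writes in the cache tier
ceph osd tier set-overlay hdd-data ssd-cache    # route client I/O through the cache

# Basic sizing knobs so the tier knows when to flush and evict.
ceph osd pool set ssd-cache hit_set_type bloom
ceph osd pool set ssd-cache target_max_bytes 1099511627776   # ~1 TiB cache
ceph osd pool set ssd-cache cache_target_dirty_ratio 0.4
ceph osd pool set ssd-cache cache_target_full_ratio 0.8
```

These are cluster configuration commands and assume the two pools already exist with appropriate CRUSH rules mapping them to SSD and HDD devices.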
iSCSI datastores for virtualization. NFS stores for Unix storage or backups over the network. Very fast performance, sometimes outclassing SSD arrays even over NFS. The ZFS filesystem has given us much greater flexibility. Using their newer servers we could, in theory, scale to any height of required storage.
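The ZFS flexibility mentioned above comes largely from per-dataset properties. A rough sketch from the shell (the pool name `tank` and dataset names are hypothetical; on TrueNAS you would normally do this through the UI or API rather than directly):

```shell
# Create a backup dataset with compression and a quota, then export it over NFS.
zfs create tank/backups
zfs set compression=lz4 tank/backups
zfs set quota=2T tank/backups
zfs set sharenfs=on tank/backups     # NFS export (platform support varies)

# A zvol (block device) suitable for backing an iSCSI extent for a datastore:
zfs create -V 500G tank/vmstore
```

Each dataset inherits pool defaults but can override them individually, which is what makes carving out differently-tuned shares so convenient.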
GUI-based maintenance should be developed. Unable to detect storage latencies. VM-to-disk mapping should be visible, so that critical application data can be protected in case of HDD failures.
A more graphical interface for admin features like plugins and jails; lists work well, but a tiled approach would be better. Allow bulk upload/download/update of groups or user accounts for SMB shares. Some scripting-language template feature to create/configure/change/delete storage pools, datasets, or shares.
The software has been amazing. It has saved me a lot of headache in the past few years. Also, it's nice knowing that if any of our current Synology devices were to die, I could have an iSCSI system up and running very shortly. I didn't give a 10 score because I find their support to be rather slow and pedantic. They test many things when the answer is right in front of them. The compute system (not storage) we purchased from them came with PCIe Gen4 NVMe drives. They didn't work, but rather than believe me about the specs in the motherboard manual saying the onboard slot was PCIe 3 ONLY, they shipped me two replacements until I showed them that an old PCIe 3 device worked just fine. The part that rather frustrated me was that the machine was claimed to have been tested and burnt in. How can that be true if the server won't even boot into the BIOS?
The software is fairly straightforward, and if you mess up the network interfaces you can log in locally at the console and fix any issues you may have had with VLANs, etc., denying you network access. There were a few annoying issues when setting up multiple network interface cards. Rather than keeping one interface set up with DHCP, adding a second one on a new network disables the first, which makes it impossible to log in again. However, if you wait, it will revert. I learned afterwards that you need to set up the new network cards, then go back and set up the first one again, and THEN test/apply. After that it was pretty good. The summary of the devices is very nice too. You get an accurate snapshot of how well your system is doing as soon as you log in.
Support was responsive about opening cases. However, I found solutions to simple problems took far too long. When we had a bad power supply and another unit with the exact same firmware version, they should have sent replacements for both. We had to file another case for the other PSU, which started dying the same week. They also had to do a lot of troubleshooting before replacing the fans that were not behaving as they should. I'm not a home user; I know when certain things are failing, and the silly hoops they made me jump through were frustrating. However, once we finally got the problem identified, parts were shipped out via advance replacement, which was nice.
The implementation went well after we got the boot drive working properly. The device was set up exactly as I asked with respect to the hardware, except for the boot drive. The reason I chose 9 instead of 10 was that the boot drive set us back about a week waiting for the part to arrive. I ended up using a personal drive to show them that they were wrong to send us the Gen4 drives.
MongoDB offers better search capability than Red Hat Ceph Storage, but it's more optimized for a large number of objects, while Red Hat Ceph Storage is preferred if you need to store binary data or large individual objects. To get acceptable search functionality you really need to pair Red Hat Ceph Storage with another database in which the search metadata for the Ceph objects is stored.
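The pattern described above — binary payloads in Ceph, searchable metadata in a separate database — can be sketched in Python. This is an illustrative in-memory stand-in: the `ObjectStore` and `MetadataIndex` classes are hypothetical placeholders for a Ceph RGW/S3 bucket and a real database such as MongoDB.

```python
import hashlib

class ObjectStore:
    """Stand-in for a Ceph object store (e.g. an RGW/S3 bucket)."""
    def __init__(self):
        self._blobs = {}

    def put(self, key: str, data: bytes) -> str:
        self._blobs[key] = data
        return hashlib.sha256(data).hexdigest()   # etag-like checksum

    def get(self, key: str) -> bytes:
        return self._blobs[key]

class MetadataIndex:
    """Stand-in for a searchable metadata database (e.g. MongoDB)."""
    def __init__(self):
        self._docs = []

    def index(self, doc: dict) -> None:
        self._docs.append(doc)

    def search(self, **criteria) -> list:
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in criteria.items())]

def store_asset(store, index, key, data, **metadata):
    """Write the blob to the object store; index only its metadata."""
    checksum = store.put(key, data)
    index.index({"key": key, "checksum": checksum, **metadata})

# Usage: search the metadata index, then fetch matching blobs by key.
store, index = ObjectStore(), MetadataIndex()
store_asset(store, index, "video/001", b"\x00" * 1024, kind="video", camera="A")
store_asset(store, index, "video/002", b"\x01" * 2048, kind="video", camera="B")
hits = index.search(camera="A")
payloads = [store.get(h["key"]) for h in hits]
```

The design point is that the object store never has to answer queries: all searches hit the small metadata index, and only the resulting keys are fetched from bulk storage.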
Having a better, trusted filesystem to build upon makes a huge difference. I want to know that if something I've written is read, it was the thing I wrote. And if it can't be read, I want to know that soon and know how to repair it.
Ceph allows my customer to scale out very fast. Ceph allows distributing storage objects across multiple server rooms. Ceph is fault-tolerant, meaning the customer can lose a server room and still be able to access the storage.
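The room-level fault tolerance described above is configured through CRUSH. A sketch with Ceph's CLI, assuming the CRUSH hierarchy already defines `room` buckets (the rule and pool names here are hypothetical):

```shell
# Create a replication rule whose failure domain is 'room', so each
# replica lands in a different server room.
ceph osd crush rule create-replicated replicated_rooms default room

# Apply it to a pool and keep three replicas, one per room.
ceph osd pool set mypool crush_rule replicated_rooms
ceph osd pool set mypool size 3
```

With this placement, losing an entire room still leaves two intact replicas of every object, which is what keeps the storage accessible.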
Using a TrueNAS integrated solution has reduced support overhead compared to using custom hardware. Being cheaper than all-flash storage arrays, this unit allows a good balance of speed through its use of SSD-based caching drives. The reliability of the hardware/software integration means I spend less time troubleshooting and more time doing business. Coming from a custom-built solution, it is apparent that iXsystems has done some extensive testing.