GFS is well suited for DevOps-type environments where organizations prefer to invest in servers and DAS (direct-attached storage) rather than purchasing storage solutions/appliances. GFS allows organizations to scale their storage capacity at a fraction of the price using DAS HDDs, versus committing to licenses and hardware from a dedicated storage manufacturer (e.g., NetApp, Dell/EMC, HP).
For a large, robust, well-secured, and stable storage system, TrueNAS is very well suited. Virtual machine support is great. Shared filesystems (SMB, NFS, iSCSI, WebDAV, AFP) are very well implemented. Time Machine support is fantastic, and security is very granular. Do not try to use it as a replacement for VMware... (no migration, etc.)
Scales; bricks can be easily added to increase storage capacity (see the example after this list).
Performs; I/O is spread across multiple spindles (HDDs), thereby increasing read and write performance.
Integrates well with RHEL/CentOS 7; if your organization is using RHEL 7, Gluster (GFS) integrates extremely well with that baseline, especially since it's come under the Red Hat portfolio of tools.
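As a minimal sketch of how scaling out works (the volume name gv0 and the brick path are hypothetical examples, not from our deployment), adding capacity is typically a two-command operation on any node in the trusted pool:

    # Add a new brick (server:/path) to an existing Gluster volume
    gluster volume add-brick gv0 server3:/data/brick1/gv0
    # Redistribute existing files across the enlarged volume
    gluster volume rebalance gv0 start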
Documentation; the project's documentation on Read the Docs isn't always kept up to date. Many of the guides are written for previous versions of the product and can be cumbersome to follow at times.
Self-healing; our use of GFS required the administrator to trigger a heal operation manually whenever bricks were added to or removed from the pool (see the example after this list). Autonomous self-healing whenever a brick is added or removed would be a great feature to incorporate.
Performance metrics are scarce; our team received feedback that online RDBMS transactions did not perform well on distributed file systems (such as GFS); however, this could not be substantiated via any online research or white papers.
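For reference, the manual heal trigger mentioned above looks roughly like the following (again, the volume name gv0 is a hypothetical example); we ran it after any brick change on replicated volumes:

    # Trigger a full self-heal across all bricks of the volume
    gluster volume heal gv0 full
    # Report files that still need healing
    gluster volume heal gv0 info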
The software has been amazing. It has saved me a lot of headache in the past few years. Also, it's nice knowing that if any of our current Synology devices were to die, I can have an iSCSI system up and running very quickly. I didn't give a 10 score because I find their support to be rather slow and pedantic. They test many things when the answer is right in front of them. The compute system (not storage) we purchased from them came with PCIe Gen4 NVMe drives. They didn't work, but rather than believe me about the specs in the motherboard manual saying the onboard slots were PCIe 3 ONLY, they shipped me two replacements until I showed them that an old PCIe 3 device worked just fine. The part that rather frustrated me was that the machine was claimed to have been tested/burned in. How can that be true if the server won't even boot into the BIOS?
The software is fairly straightforward, and if you mess up the network interfaces you can log in locally at the console and fix any issues you may have had with VLANs, etc., denying you network access. There were a few annoying issues when setting up multiple network interface cards. Rather than keeping one interface set up with DHCP, adding a second one on a new network disables the first, which makes it impossible to log in again. However, if you wait, it will revert. I learned afterwards that you need to set up the new network cards, then go back and set up the first one again, and THEN test/apply. After that it was pretty good. The summary of the devices is very nice too. You get an accurate snapshot of how well your system is doing as soon as you log in.
The support was responsive in opening cases. However, I found that solutions to simple problems took far too long. When we had a bad power supply and we had another with the exact same firmware version, they should have sent replacements for both. We had to file another case for the other PSU, which started dying the same week. They also had to do a lot of troubleshooting to replace the fans that were not behaving as they should. I'm not a home user; I know when certain things are failing, and the silly hoops they make you jump through made it frustrating. However, once we finally got the problem identified, we had parts shipped out via advance replacement, which was nice.
The implementation went well after we got the boot drive working properly. The device was set up with the hardware exactly as I asked, except for the boot drive. The reason I chose 9 instead of 10 was that the boot drive set us back about a week waiting for the part to arrive. I ended up using a personal drive to show them that they were wrong in sending us the Gen4 drives.
Gluster is a lot lower cost than the storage industry leaders. However, NetApp's and Dell/EMC's product documentation is (IMHO) more mature and hardened for use in operational scenarios and environments. Using Gluster avoids vendor lock-in from the perspective of not having to purchase dedicated hardware and licenses to run it. That said, should an organization choose to pay for Gluster support, they would be paying licensing costs to Red Hat instead of NetApp, Dell/EMC, HP, or VMware. It could be assumed, however, that an organization wanting to use Gluster is already a Linux shop and potentially already paying Red Hat or Canonical (Ubuntu) for product support, so GFS would add only nominal cost from a maintenance/training perspective.
Having a better, trusted filesystem to build upon makes a huge difference. I want to know that if something I've written is read, it was the thing I wrote. And if it can't be read, I want to know that soon and know how to repair it.
Using a TrueNAS integrated solution has reduced support overhead compared to using custom hardware.
Being cheaper than all-flash storage arrays, this unit offers a good balance of cost and speed through its use of SSD-based caching drives.
The reliability of the hardware/software integration means I spend less time troubleshooting and more time doing business. Coming from a custom-built solution, it is apparent that iXsystems has done some extensive testing.