Cisco HyperFlex Systems is a hyper-converged infrastructure product based on technology from SpringPath, which Cisco acquired in September 2017. Cisco's modern HCI solution is Cisco Compute Hyperconverged with Nutanix.
StorPool
Score 8.0 out of 10
StorPool is block-storage software that builds a storage system out of standard hardware. It is installed on the servers and pools their local drives into a shared storage pool. Compared to traditional SANs, all-flash arrays, or other storage software, StorPool is faster, more reliable, and more scalable.
Smaller sites that would benefit from a cluster of 2-5 nodes. Not saying that it can't scale above that, but I find HyperFlex a great solution for those sites. A simple 3-node edge cluster can provide a huge amount of resources and redundancy. It's also really easy to scale the environment to meet growth requirements.
StorPool performs well at the block level (and that is what we use it for). It does not yet support a distributed filesystem or object storage; a filesystem layer needs to be built on top of it.
UCS Manager in HX truly helps us perform one-touch firmware upgrades. Scaling an HX cluster takes only a few minutes and is seamless thanks to service profiles.
HX does not hold you back by forcing a single datastore, unlike other HCI products. With HX, you can create multiple datastores and allocate them to the desired services. This helps logically separate the install base on HX and removes confusion for the admins, too.
We run high-IOPS workloads on HX, and we never felt latency issues, thanks to the Cisco backbone (you get the FI as a ToR switch and the option of 10G or 40G speeds).
With HX you truly enjoy single-window support from Cisco, including for the top-of-rack switch (the FI, in HX's case). With other HCI infrastructure, you have to rely on the network switch vendor for support and bring the HCI and switch vendors together to troubleshoot latency-related issues.
While we increased our footprint on HX, we didn't add additional administrators to support the landscape. This was possible because of the simplicity of managing HX clusters.
With HX we set up a stretched cluster between two nearby data centres. This is a unique proposition of HX (we have 2 nodes in each data centre), and data centre failover works absolutely seamlessly.
There is a problem with starting a cluster when no outside DNS and NTP services are available, so we need to work around this with additional storage or by hosting them on local storage. Many clusters have internal DNS/NTP services that are not reachable from outside, and these need to be hosted on the HX cluster itself.
There is no RBAC or user management on the CVMs, so it is difficult to avoid granting full permissions to people who are responsible only for shutting down and powering on the cluster.
Native snapshot support with IBM backup products.
The ability to run from snapshots other than only the latest one in all use cases.
More documentation is available now than when the product initially came out (which was an issue early on). Because it only supports UCS hardware, I think it does help with support issues. Nutanix has to support much more hardware. At the same time, you're dealing with the Cisco TAC, which can be mixed at times.
HyperFlex is built on top of Cisco UCS infrastructure, which allows us to manage other non-HX servers attached to the same UCS environment. This allows us to tie everything together via Intersight and see all of the servers in our data centers. Other platforms don't really have a comparable offering.
We made a very careful selection of our storage vendor and solution. After researching the newest technologies, our team decided to deploy a software-defined storage solution from StorPool.
The simplified management makes it easier to operate and prevents mistakes.
Guided installation using the installer VM means you don't have to configure every component by hand. This improves deployment speed and lowers the risk of configuration issues.
Performance increase of 40-90% compared to our previous compute/storage cluster.
We have not calculated precise ROI. We focused on getting the best solution at a reasonable price, based on market research. Initially, we didn’t need a lot of capacity, so we invested in servers and network, which could handle several times more capacity, but bought smaller drives to keep the investment low. We achieved a starting price of $3.2/GB usable and $1.4/GB logical. Later we expanded the capacity by adding more drives to the system. Currently, the system has a price of approximately $2.3/GB usable and $0.99/GB logical and a price of $0.09/IOPS.
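The per-unit prices quoted here are simple ratios of total system cost to capacity (or IOPS). A minimal sketch of that math, using hypothetical totals chosen only to illustrate the quoted ~$2.3/GB usable figure (the review does not disclose the actual totals):

```python
def price_per_gb(total_cost_usd: float, capacity_gb: float) -> float:
    """Cost of the storage system per gigabyte of capacity.

    The same ratio applies whether capacity_gb is usable capacity
    (raw minus redundancy overhead) or logical capacity (after
    data reduction), which is why the two $/GB figures differ.
    """
    return total_cost_usd / capacity_gb

# Hypothetical example: a $230,000 system with 100,000 GB usable
# capacity works out to $2.3/GB usable.
print(price_per_gb(230_000, 100_000))  # → 2.3
```

The gap between the usable and logical prices reflects data-reduction efficiency: the more logical data fits into the same usable capacity, the lower the $/GB logical figure falls relative to $/GB usable.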