OpenShift is Red Hat's cloud Platform as a Service (PaaS) offering: an application platform in the cloud where application developers and teams can build, test, deploy, and run their applications.
OpenShift can help modernize your applications in several ways. It offers a container platform that makes it easy to deploy, manage, and scale your applications. With features such as container orchestration, continuous integration, and deployment …
If budgets are stretched, Hyper-V is a very cost-effective solution. Any veteran MS Windows administrator will have little issue getting to grips with it. If you are familiar with VMware solutions, then you may find Hyper-V a little frustrating, as it does lack some of the functionality of those products; however, there is nothing that will prevent you from managing your virtual workloads and estate. Since rolling out Hyper-V 2019 we have had no real issues with it; ESXi seemed to have more issues and was less forgiving with hardware compatibility.
Red Hat OpenShift, despite its complexity and overhead, remains the most complete and enterprise-ready Kubernetes platform available. It excels in research projects like ours, where we need robust CI/CD, GPU scheduling, and tight integration with tools like Jupyter, OpenDataHub, and Qiskit. Its security, scalability, and operator ecosystem make it ideal for experimental and production-grade AI workloads. However, for simpler general hosting tasks, such as serving static websites or lightweight backend services, we find traditional VMs, Docker, or LXD more practical and resource-efficient. Red Hat OpenShift shines in complex, container-native workflows, but can be overkill for basic infrastructure needs.
Easy to use GUI - very easy for someone with sufficient Windows experience - not necessarily a system administrator.
Provisioning VMs with different OSes - we mostly rely on different flavors of Windows Server, but having a few *nix distributions was not that difficult.
Managing virtual networks - we usually have 1 or 2 VLANs for our business purposes, but we are happy with the outcomes.
We had a few microservices that dealt with notifications and alerts. We used OpenShift to deploy these microservices, which handle and deliver notifications using publish-subscribe models.
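As an illustrative sketch only (the review does not name the broker or the message format), a notification consumer in one of these microservices might look roughly like the following, assuming a Redis pub/sub channel; the "alerts" channel name and payload fields are hypothetical:

    # Hypothetical sketch of a notification consumer for such a microservice.
    # Assumes a Redis broker reachable via a REDIS_HOST env var injected by the
    # OpenShift deployment; the "alerts" channel and payload fields are made up.
    import json
    import os

    import redis

    r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)
    pubsub = r.pubsub()
    pubsub.subscribe("alerts")

    for message in pubsub.listen():
        if message["type"] != "message":
            continue  # skip subscribe confirmations
        event = json.loads(message["data"])
        # Deliver the notification (email, webhook, etc.) based on the event payload.
        print(f"delivering alert {event.get('id')} to {event.get('recipient')}")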
We had to expose an API to consumers via mTLS, which we implemented using server secret integration in OpenShift. We were then able to deploy the APIs on OpenShift with API security.
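For illustration only, a consumer calling such an mTLS-protected API might present its client certificate like this; the endpoint URL and file paths are placeholders, and in practice the certificates would be mounted from the OpenShift TLS secrets mentioned above:

    # Hypothetical mTLS client call; the endpoint and certificate paths are placeholders.
    # On OpenShift, the client cert/key and CA bundle would typically be mounted from
    # TLS secrets rather than shipped with the application code.
    import requests

    response = requests.get(
        "https://api.example.com/v1/orders",                   # placeholder endpoint
        cert=("/etc/tls/client.crt", "/etc/tls/client.key"),   # client certificate + key (mTLS)
        verify="/etc/tls/ca.crt",                              # CA bundle that signed the server cert
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())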
We integrated Splunk with OpenShift to view the logs of our applications and gain real-time insights into usage, as well as provide high availability.
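The review does not say which forwarding mechanism was used (OpenShift log forwarding is usually configured at the cluster level rather than in application code), but as a rough sketch, a service could also send events straight to a Splunk HTTP Event Collector; the host, token, and index below are placeholders:

    # Hypothetical direct post to a Splunk HTTP Event Collector (HEC).
    # The host, token, and index are placeholders; in a real OpenShift setup, log
    # forwarding would more likely be handled by the cluster logging stack.
    import requests

    SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
    SPLUNK_TOKEN = "REPLACE-WITH-HEC-TOKEN"

    payload = {
        "event": {"level": "INFO", "message": "order service started"},
        "sourcetype": "_json",
        "index": "openshift_apps",  # placeholder index
    }

    resp = requests.post(
        SPLUNK_HEC_URL,
        json=payload,
        headers={"Authorization": f"Splunk {SPLUNK_TOKEN}"},
        verify=True,   # point this at a CA bundle path if Splunk uses an internal CA
        timeout=5,
    )
    resp.raise_for_status()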
The only issue I have with Hyper-V is that I am unable to use Veeam on my Windows 2016 Server to back up my FreeBSD HAProxy VM.
There is some sort of checkpoint issue that I have been unable to figure out, but it works just fine on my Windows 2012 Servers. I do believe this is a Microsoft issue and not a Veeam issue though.
Another thing that could be useful that Hyper-V does not have would be some sort of GUI that shows the status of all the VMs on a given server, to help us manage them more easily and know what is going on. However, I do have Zabbix for this and it does a good job of monitoring all my servers.
OpenShift Virtualization has a little room for improvement. I'm coming to it as an RHV customer. There are some things that were in RHV that I would like to see in OpenShift Virtualization. I realize that they're chasing the VMware crowd, and that's fine, but we old RHV customers would like to see some of the things that were in RHV around VM migration and things of that nature make it into OpenShift Virtualization; I hope that is being planned.
Cheap and easy is the name of the game. It has great support, it doesn't require additional licenses, it works the same whether it is a cluster or stand-alone, and all the servers can be centrally managed from a System Center Virtual Machine Manager server, even when located at remote sites.
OpenShift is really easy to use through its management console. OpenShift offers great flexibility through many built-in functionalities, all gathered in the same place (it's a very convenient tool for learning DevOps techniques hands-on). OpenShift is an ideal integrated development/deployment platform for containers.
It is very easy to configure new virtual machines and manage them. But you have to use different interfaces to perform various tasks. Especially as soon as it comes to clustering, you have to use at least two different interfaces (Hyper-V Manager and Failover Cluster Manager) to perform all necessary tasks. The newly released Windows Admin Center is a step in the right direction toward getting all management tasks into one single interface.
The virtualization part takes some getting used to if you are coming from a more traditional hypervisor. Customization options are not intuitive to these users. The process should be clearer. Perhaps a guide to OpenShift Virtualization for users of RHV, VMware, etc. would ease this transition into the new platform.
In the past 2 years our Hyper-V servers have only had a handful of instances where the VMs on them were unreachable and the physical Hyper-V server had to be restarted. One time this was due to a RAM issue with the physical box and was resolved when we stopped using dynamic memory in Hyper-V. The other times were when updates were installed and the physical box was not restarted afterward.
Red Hat OpenShift is generally a reliable and available platform; it ensures high availability in most situations. In fact, for the product where we put OpenShift in a box, we ensure that availability is also provided at the node, network, and storage levels, so some of the factors that are outside of OpenShift's realm are also working in an HA manner.
Hyper-V itself works quickly and rarely gave performance issues, but in my opinion this can be attributed more to the physical server specifications than to the actual Hyper-V software, as Hyper-V technically just utilizes config files such as XML and a data drive file (VHD, VHDX, etc.) to perform its duties.
Overall, this platform is beneficial. The only downsides we have encountered have been with pods that occasionally hang. This results in resources being dedicated to dead or zombie pods. Over time, these wasted resources occasionally cause us issues, and we have had difficulty monitoring these pods. However, this issue does not overshadow the benefits we get from OpenShift.
Hyper-V is greatly supported by techs around the world. There are tons of forums, help websites and individuals ready to answer questions. I've never needed to contact Microsoft for help...because help is so easy to find out there. Do a search online for anything related to Hyper-V and you will certainly find an article with spelled out steps on how to do what you are looking to do.
Every time we need support, the whole Red Hat team moves forward looking to solve the problem. Sometimes this was not easy and required escalation to the product team, and we always got a response. Most of the minor issues were solved with the information from access.redhat.com.
We had in-person training from a third party, and while it was very in-depth, it was at a beginner's level; by the time we received the training we had advanced past this level, so it was monotonous and redundant at that point. It was good training, though, and would have provided a solid foundation for learning the rest of Hyper-V had I had it from the beginning.
I was not involved in the in-person training, so I cannot answer this question, but the team in my org worked directly with OpenShift and was able to get the in-person training done easily. I did not hear of any problems or complaints in this space, so I assume things went seamlessly without any issues.
The training was easy to read and find. There were good examples in the training, and material is plentiful if you use third-party resources as well. It is not perfect, as sometimes you may have a specific question and have to spend time learning, or in the rare case you get an error you might have to research that error code, which could have multiple causes.
We went through the training material on the RH website; I think it is very descriptive, and the hands-on lab sessions are very useful. It would be good to create more short-duration videos covering a single aspect of OpenShift; this will keep the interest up and also break the complexity down into reasonable chunks.
Initial configuration of Hyper-V is intuitive to anyone familiar with Windows and roles for basic items like single-server deployments, storage, and basic networking. The majority of the problems were with implementing advanced features like high availability and more complex networking. There is a lot of documentation on how to do it, but it is not seamless, even for experienced virtualization professionals.
VMware is the pioneer of virtualization, but when you compare it with Hyper-V, VMware lacks the flexibility of hardware customization and configuration options that Hyper-V has. Also, GPU virtualization is still not adequate on either platform. VMware has a better graphical interface and control options for virtual machines. Another advantage VMware has is that it does not need a dedicated OS with a GUI-based installation; it only needs small resources and can easily be installed on any host.
The Tanzu Platform seemed overly complicated, and the frequent changes to the portfolio as well as the messaging made us uneasy. We also decided it would not be wise to tie our application platform to a specific infrastructure provider, as Tanzu cannot be deployed on anything other than vSphere. SUSE Rancher seemed good overall, but ultimately felt closer to a DIY approach versus the comprehensive package that Red Hat OpenShift provides.
It's easy to understand what is being billed and what's included in each type of subscription. The same goes for support (Standard or Premium): you know exactly what to expect when you need to use it. The "core" unit approach to the subscription made it really simple to scale and carry workloads from one site to another.
Nothing is perfect, but Hyper-V does a great job of showing the necessary data to users to ensure that there are enough resources to perform essential functions. You can also select which fields show on the management console, which is helpful for a quick glance. There are notifications that can be set up, and if things go unnoticed and a Hyper-V server runs out of a resource, it will safely and quickly shut down the VMs it needs to in order to prevent hardware failure or unnecessary data loss.
This is a great platform for deploying container applications designed for multiple use cases. It is a reasonably scalable platform that can host multiple instances of applications, which can seamlessly handle node and pod failures if they are configured properly. A scalability best-practices guide would be very useful.
Hyper-V has provided for an extremely cost-effective virtual environment with disaster recovery. For the size of our business, it's all we need to ensure our desired level of continuity of services and protection against hardware failures.
Since we are a Windows shop, deploying Hyper-V means we don't have the added cost of a hypervisor, since it's included in the cost of the Windows Server license. It's all we needed to achieve our goal of running all our virtual machines on a single server with another, less expensive server on tap for replication and failover.
We wanted easy deployment and management with disaster recovery while having the ability to leverage our years of Windows SysAdmin experience. Hyper-V fit the bill.
All of the above. Red Hat OpenShift going into a developer-type setting can be stood up very quickly. There's a very short period to get developers onboarded to it, and they're able to become productive much faster than with a grow-your-own type solution.