Kubernetes is an open-source container cluster manager.
N/A
Red Hat OpenShift
Score 9.2 out of 10
N/A
OpenShift is Red Hat's cloud Platform as a Service (PaaS) offering: an application platform in the cloud where developers and teams can build, test, deploy, and run their applications.
$0.08 per hour
Vultr
Score 8.8 out of 10
N/A
Vultr is an independent cloud computing platform on a mission to provide businesses and developers around the world with unrivaled ease of use, price-to-performance, and global reach.
$2.50 per month
Pricing

Editions & Modules
Kubernetes: No answers on this topic
Red Hat OpenShift: No answers on this topic
Vultr:
  Block Storage: $1 per month
  Cloud Compute: $2.50 per month
  Object Storage: $5 per month
  Kubernetes Engine: $10 per month
  Load Balancers: $10 per month
  Managed Databases: $15 per month
  Optimized Cloud Compute: $28 per month
  Cloud GPU: $90 per month
  Bare Metal: $120 per month
Pricing Offerings
Free Trial: Kubernetes: No; Red Hat OpenShift: Yes; Vultr: No
Free/Freemium Version: Kubernetes: No; Red Hat OpenShift: Yes; Vultr: No
Premium Consulting/Integration Services: No for all three
Entry-level Setup Fee: No setup fee for all three
Additional Details (Vultr only): Pricing is based on specifications chosen in each product category. Bandwidth is also included up to a certain amount per month.
I used OpenShift v2 - which was pre-Kubernetes. (It now uses Kubernetes under the hood - but keeps it fairly hidden). Kubernetes was a ton more stable and easier to use. No more custom CLI to use in order to script together deployments. No more messy ‘push your entire code …
It stacks well against OpenShift. The only downside for OpenShift is the multiple operators and the custom logic implemented in the product, plus upgrades, which tend to take a bit longer due to the more complex implementation. Overall, these are similar products but with a …
A Kubernetes cluster is capable of managing multiple nodes on on-premises or cloud infrastructure. In Kubernetes, we can easily add new nodes whenever required, and we can easily update and roll back applications hosted on Kubernetes with the help of rolling and blue-green deployments. …
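The rolling-update and rollback workflow this reviewer describes can be sketched as a Deployment manifest built in plain Python; the app name, image tag, and replica count below are illustrative assumptions, not values from the review.

```python
# Sketch of an apps/v1 Deployment manifest (as a plain dict) whose strategy
# performs a rolling update: at most one extra pod is created (maxSurge) and
# at most one pod is unavailable (maxUnavailable) while new pods replace old.
def rolling_update_deployment(name, image, replicas=3):
    """Build a Deployment spec configured for rolling updates."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "strategy": {
                "type": "RollingUpdate",
                "rollingUpdate": {"maxSurge": 1, "maxUnavailable": 1},
            },
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = rolling_update_deployment("web", "example/web:1.2.0")
print(manifest["spec"]["strategy"]["type"])  # prints RollingUpdate
```

Rolling back such a Deployment is then a single command, `kubectl rollout undo deployment/web`, which re-applies the previous ReplicaSet's pod template.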
Verified User
Administrator
Chose Kubernetes
I didn't have too much experience or exposure to OpenShift but I do remember that in certain areas our organization found Kubernetes to be more useful and met our needs in comparison to OpenShift. Although I can't compare, I think it's easier to customize Kubernetes because of …
Comparing the two, open-source Kubernetes is quicker to set up by about 75%, less restrictive, and of course free, but it lacks Red Hat's security and support, and deploying features is much harder than with operators. For business purposes, OpenShift is just more …
We looked at a few other options like plain Kubernetes and some managed services, but Red Hat OpenShift stood out because it’s enterprise-ready out of the box. The built-in security, automation tools, and support made a big difference.
We explored a lot of services. In today's world everything is cloud, and on-premise solutions are not very strong; then we discovered Red Hat OpenShift, which is still very committed to maintaining on-premise solutions. We selected OpenShift, and since day one we have been very …
Red Hat OpenShift is the product my team has been using since I joined, so it is the only product in this area that I have used. That said, I have really no complaints and love implementing Red Hat OpenShift in my work to help be more efficient with my …
Great UI/UX and easy to use: even when you have no clue about command lines, you can still manage your apps. The public documentation is also great; if you search for anything, you can find it online. A great community and support system.
Red Hat OpenShift has a better security posture than EKS. I enjoy the console on Red Hat OpenShift more as well. I believe there is greater observability for Red Hat OpenShift.
Red Hat OpenShift can run on-prem and on Azure, meaning we can get support from Red Hat on both platforms.
K8s should be avoided when your application works well without being converted to a microservices-based architecture, fits correctly in a VM, needs little scaling, and has a fixed traffic pattern; in that case, the operational challenges and required technical expertise will add a lot to your OPEX. Also, if you think containers consume fewer resources than VMs, this is not true: as soon as you convert your application to a microservice-based architecture, a lot of components add up, pushing resource consumption even higher than VMs, so please beware. Kubernetes is a good choice when the application needs quick scaling, is already in a microservice-based architecture, has no fixed traffic pattern, and most of your employees already have the desired skills.
Red Hat OpenShift, despite its complexity and overhead, remains the most complete and enterprise-ready Kubernetes platform available. It excels in research projects like ours, where we need robust CI/CD, GPU scheduling, and tight integration with tools like Jupyter, OpenDataHub, and Qiskit. Its security, scalability, and operator ecosystem make it ideal for experimental and production-grade AI workloads. However, for simpler general hosting tasks, such as serving static websites or lightweight backend services, we find traditional VMs, Docker, or LXD more practical and resource-efficient. Red Hat OpenShift shines in complex, container-native workflows, but can be overkill for basic infrastructure needs.
I've been with Vultr over 5 years hosting multiple businesses and email related services. I never experienced a significant outage or data loss. Migration has always been successful as well. Support is top tier and IP reputation is clean. I like the choices of OS, ease of platform use and multiple hosting/ region options.
We had a few microservices that dealt with notifications and alerts. We used OpenShift to deploy these microservices, which handle and deliver notifications using publish-subscribe models.
We had to expose an API to consumers via MTLS, which was implemented using Server secret integration in OpenShift. We were then able to deploy the APIs on OpenShift with API security.
We integrated Splunk with OpenShift to view the logs of our applications and gain real-time insights into usage, as well as provide high availability.
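The MTLS point above hinges on a `kubernetes.io/tls` Secret holding the server certificate and key. A minimal sketch of building such a manifest follows; the secret name and PEM bytes are placeholders, not the reviewer's actual configuration.

```python
import base64

def tls_secret(name, cert_pem: bytes, key_pem: bytes) -> dict:
    """Build a kubernetes.io/tls Secret manifest. Per the Kubernetes API
    convention, values under `data` must be base64-encoded."""
    return {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "kubernetes.io/tls",
        "data": {
            "tls.crt": base64.b64encode(cert_pem).decode(),
            "tls.key": base64.b64encode(key_pem).decode(),
        },
    }

secret = tls_secret("api-mtls", b"<PEM cert>", b"<PEM key>")
print(secret["type"])  # prints kubernetes.io/tls
```

An OpenShift route or API gateway configuration can then reference a secret like this by name; the exact wiring depends on the cluster setup.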
Local development: Kubernetes tends to be complicated and unnecessary in environments where all development is done locally.
The need for add-ons: Helm is almost required when running Kubernetes, which brings a whole new tool to manage and learn before a developer can really start to use Kubernetes effectively.
Finicky ConfigMap schemes: Kubernetes ConfigMaps often have environment-breaking hang-ups, and the fail-safes surrounding ConfigMaps are sadly lacking.
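One way to blunt the ConfigMap hang-ups this reviewer mentions is a pre-deployment validation step. A minimal sketch follows; the required key names are illustrative assumptions, not keys from any real deployment.

```python
# Minimal pre-deployment check for a ConfigMap-style mapping: fail fast
# if required keys are missing or empty, instead of letting a pod crash
# at startup. Key names here are illustrative assumptions.
REQUIRED_KEYS = {"DATABASE_URL", "LOG_LEVEL", "FEATURE_FLAGS"}

def validate_configmap(data: dict) -> list:
    """Return a list of problems; an empty list means the map looks sane."""
    problems = []
    for key in sorted(REQUIRED_KEYS):
        value = data.get(key)
        if value is None:
            problems.append(f"missing key: {key}")
        elif not str(value).strip():
            problems.append(f"empty value for key: {key}")
    return problems

print(validate_configmap({"DATABASE_URL": "postgres://db", "LOG_LEVEL": ""}))
# prints ['missing key: FEATURE_FLAGS', 'empty value for key: LOG_LEVEL']
```

Running such a check in CI before `kubectl apply` turns a pod crash-loop at startup into an immediate, readable failure.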
I wouldn't necessarily say there is a visible technology transformation every day, but I can see a trend wherein Red Hat OpenShift is adopting all the new technology trends and helping its customers align with their priorities and with emerging technologies. There is scope for development; it is all in how organizations adopt it and how they deliver it to their customers. It's happening; it is a never-ending process.
At the moment, I don't have anything to call out. We are experiencing Red Hat OpenShift, and we can see that every day they're coming up with new features; as they do, we want to experience them more and more. We are looking for opportunities where this can be leveraged to help our users and partners.
Kubernetes is highly likely to be renewed, as the technologies that will be placed on top of it are long-term in our planning. There shouldn't be any last-minute changes in adoption, and I do not anticipate a sudden change of the core underlying technology. It is just that the slow process of technology adoption makes it hard to switch to something else.
This is the current strategy for the company: most products in the organisation are aligning to OpenShift and the various use cases it supports. Also, a lot of applications are being developed for AI use cases, and OpenShift AI provides the opportunity to host and leverage AI capabilities for these applications.
Just a great product with no bells and whistles, which is the advantage. We spend very little time learning and using Vultr and more time using the systems we have in Vultr to complete our tasks. Not having to worry about the IT overhead is huge and saves a great deal of time.
It is an eminently usable platform. However, its popularity is overshadowed by its complexity. To properly leverage the capabilities and possibilities of Kubernetes as a platform, you need to have excellent understanding of your use case, even better understanding of whether you even need Kubernetes, and if yes - be ready to invest in good engineering support for the platform itself
As I said before, observability is one of the weakest points of OpenShift, and that has a lot to do with usability. The Kibana console is not fully integrated with the OpenShift console, and you have to switch from tab to tab to use it. Same with Prometheus, Jaeger, and Grafana: it's a "simple" integration, but if you want to do complex queries or dashboards, you have to go to the specific console.
Easy to use and configure, and great bang for the buck. I needed an affordable solution to host in the cloud data from systems installed at our clients' sites, with the ability to drill down and change the configuration remotely. Vultr enabled us to do that in an efficient and affordable way.
Red Hat OpenShift is a generally reliable and available platform; it ensures high availability in most situations. In fact, in the product where we put OpenShift in a box, we ensure that availability also happens at the node, network, and storage levels, so some factors outside of OpenShift's realm are also working in an HA manner.
Overall, this platform is beneficial. The only downsides we have encountered have been pods that occasionally hang, which results in resources being dedicated to dead or zombie pods. Over time these wasted resources occasionally cause us issues, and we have had difficulty monitoring these pods. However, this issue does not overshadow the benefits we get from OpenShift.
Their customer support team is good and quick to respond. On a couple of occasions, they have helped us solve issues that we were finding a tad difficult to comprehend. On a rare occasion the response was a bit slow, but maybe that was because of the festival season. Overall, a good experience on this front.
Vultr makes it easy to contact technical support, and the techs are very competent. On a number of occasions, though, they have bounced the responsibility back to me when they could have saved us all time and heartache by simply implementing the solution directly.
I was not involved in the in-person training, so I cannot answer this question, but the team in my org worked directly with OpenShift and was able to get the in-person training done easily. I did not hear of any problems or complaints in this space, so I assume things happened seamlessly without any issue.
We went through the training material on the Red Hat website; I think it's very descriptive, and the hands-on lab sessions are very useful. It would be good to create more short-duration videos, each covering a single aspect of OpenShift; this will keep up interest and also break the complexity down into reasonable chunks.
Vultr implementation seemed based on open-source tools and basic cloud principles - some things were more complicated to do compared with more developed cloud providers, but on the other hand it was more extensible by open-source tools.
Kubernetes provides most of the features required of any orchestration tool or framework. After understanding all of K8s's modules and features, it was the best fit for us compared with the others out there.
The Tanzu Platform seemed overly complicated, and the frequent changes to the portfolio as well as the messaging made us uneasy. We also decided it would not be wise to tie our application platform to a specific infrastructure provider, as Tanzu cannot be deployed on anything other than vSphere. SUSE Rancher seemed good overall, but ultimately felt closer to a DIY approach versus the comprehensive package that Red Hat OpenShift provides.
Linode is a more old-school offering: its pricing model and infrastructure rely on classic virtual machines. What we like about Vultr is that they offer the same at the front, but in the back the machines are much more flexible and can be tailor-made to our needs, which of course also impacts the cost of running the infrastructure.
It's easy to understand what is being billed and what's included in each type of subscription. The same goes for support (Standard or Premium): you know exactly what to expect when you need to use it. The "core" unit approach to the subscription made it really simple to scale and to carry workloads from one site to another.
This is a great platform for deploying containerized applications designed for multiple use cases. It is a reasonably scalable platform that can host multiple instances of applications and can seamlessly handle node and pod failures if they are configured properly. A scalability best-practices guide would be very useful.
That is a complicated question, and not one that's easy for me to answer. There are a lot of factors that go into all of this that we just don't have an easy way of measuring. While implementing Red Hat OpenShift, we've tried to start measuring some of that, but we don't have a baseline to go on, so it's hard to say. What I can tell you is that general experience with the platform has been extremely positive from the development aspect. Teams have been very, very happy with the speed at which they're able to do things. The way it works in one environment is exactly the way it works in the next, because we don't have configuration drift, and that has had a very positive impact. But we didn't have a baseline to start with, so I can't talk about getting there faster or anything like that.