TrustRadius

Kubernetes

Overview

What is Kubernetes?

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

Recent Reviews

TrustRadius Insights

Telcos have found Kubernetes to be a valuable tool for deploying and managing their legacy telco applications. By converting these …

Kubernetes Review

10 out of 10
April 07, 2022
Currently we are using Kubernetes in our project to orchestrate the containers. We are using it for our banking client where some point of …


Pricing

N/A (Unavailable)


Entry-level setup fee?

  • No setup fee

Offerings

  • Free Trial
  • Free/Freemium Version
  • Premium Consulting/Integration Services


Alternatives Pricing

What is Vultr?

Vultr is an independent cloud computing platform on a mission to provide businesses and developers around the world with unrivaled ease of use, price-to-performance, and global reach.


Product Demos

  • Kubernetes Beginner Tutorial 8 | Step by Step Play with Kubernetes (K8s) Demo (YouTube)
  • Demo: Intro to Rancher container management (YouTube)
  • [ Kube 68 ] Kubernetes RBAC Demo | Creating Users and Roles (YouTube)
  • Kubernetes for the Absolute Beginners - Setup Kubernetes - kubeadm (YouTube)
  • Kubernetes Deployment Tutorial - yaml explained + Demo (YouTube)

Product Details


Kubernetes Technical Details

Operating Systems: Unspecified
Mobile Application: No


Reviews and Ratings (164)

Community Insights

TrustRadius Insights are summaries of user sentiment data from TrustRadius reviews and, when necessary, 3rd-party data sources.

Telcos have found Kubernetes to be a valuable tool for deploying and managing their legacy telco applications. By converting these applications into Kubernetes objects, telcos have been able to improve uptime and scalability. The simplicity and speed of Kubernetes make it ideal for managing microservices, enabling easy deployment, service discovery, configuration management, autoscaling, and fault tolerance. This has been particularly useful for organizations like LinkedIn, which has used Kubernetes as an experimental product for building and managing Machine Learning pipelines and accessing GPU clusters.

Additionally, Kubernetes is widely adopted as a PaaS solution throughout organizations, solving the problem of immutable infrastructure and providing a low learning curve for users. It offers scalability and reliability, making it suitable for managing developer and customer environments at both departmental and organizational levels.

Moreover, Kubernetes excels in orchestration across diverse hardware infrastructures, including data centers and multiple cloud providers. It effectively manages containerized applications consisting of hundreds of containers deployed on physical machines, virtual machines, or cloud machines. This addresses resource allocation and scheduling challenges by creating and tearing down containers based on resource demand.

Furthermore, Kubernetes serves as a powerful tool for containerizing on-premises servers for seamless deployment to the cloud. Its versatility and standard deployment through Helm have made it the preferred microservice container orchestration platform for deploying web-based applications. Overall, Kubernetes offers a wide range of use cases that enhance the deployment, management, and scalability of various applications in different environments.

Flexibility in Customization: Many reviewers have praised Kubernetes for its flexibility in choosing networking, storage, monitoring, and other solutions, allowing them to customize their workload according to their needs. This feature has been appreciated by a significant number of users.

Seamless Upgrades: Users have mentioned that Kubernetes provides the ability to upgrade applications to a new version without any downtime, making it seamless and efficient. Several reviewers have highlighted this as a valuable feature of the platform.

High Portability: The high level of portability offered by Kubernetes has been positively acknowledged by many users. They appreciate being able to move their applications to different environments easily.

Complex Application Design: Several users have found designing applications on Kubernetes to be complex and time-consuming, especially when manually writing YAML manifests and validating them for errors.

Steep Learning Curve: Many reviewers have mentioned that learning Kubernetes is slow going because of the large number of objects and new concepts involved. They suggest adding GUI-based operations to help with tasks like finding latency points or identifying resource-consuming pods.

Difficulty in Troubleshooting and Documentation: Users have encountered challenges in understanding and troubleshooting Kubernetes, particularly for beginners. Some users have also found it difficult to find relevant information as the documentation is scattered. They suggest better documentation and versioning for easier access to relevant information.
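For readers unfamiliar with the manifests that reviewers above describe as complex to write and validate by hand, the sketch below shows a minimal Kubernetes Deployment in YAML. The names, labels, and image are illustrative placeholders, not taken from any review; a production manifest would typically add probes, resource limits, and other fields, which is where much of the complexity comes from.

```yaml
# Minimal Deployment manifest (illustrative placeholder names and image).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3                      # desired number of identical pods
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web           # must match the selector above
    spec:
      containers:
        - name: web
          image: example/web:1.0   # placeholder container image
          ports:
            - containerPort: 8080
```

Even a sketch this small has to keep selectors and labels in sync by hand, which is the kind of manual validation reviewers say they would rather delegate to tooling.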

Based on user reviews, users commonly recommend the following for Kubernetes:

Consider using Kubernetes for companies with a large microservice environment. Users believe that Kubernetes is helpful for managing complex applications and recommend it specifically for organizations with a significant number of microservices.

Acquire a basic understanding and knowledge of Kubernetes before using it. Users suggest that having some familiarity with Kubernetes before implementation is beneficial in order to fully utilize its features and capabilities.

Utilize specialized support and platforms like Rancher when deploying Kubernetes. Users recommend seeking assistance from specialized companies that provide support for Kubernetes, as well as using platforms like Rancher in conjunction with Kubernetes.

Overall, users emphasize the importance of evaluating specific requirements and capabilities before choosing Kubernetes as the container management solution, acquiring knowledge beforehand, and leveraging external support to enhance the deployment experience.

Reviews

(1-16 of 16)
Score 9 out of 10
Vetted Review
Verified User
We use Kubernetes in a big bare-metal cluster, and we are here at the Red Hat Summit to discuss migrating directly to OpenShift to create a hybrid infrastructure that can help us deliver more resilient services to our clients. We have actually hit a lot of problems with the limits of k8s, especially with the Calico networking middleware: when our transaction volume is high, Calico becomes unstable and resets connections. We also have an issue with middleware limits; we run 160 pods per worker on 6 large hardware workers, and we cannot use that hardware fully because of those limits. That is the reason we are leaving k8s and working on a migration to OpenShift on AWS.
  • Multiple deployments on the same infrastructure
  • Scalability of the solution
  • Fast deployments automated with Ansible
  • Better handling of hardware limits is needed: even when the hardware is very powerful, we cannot use it fully because of the limits explained above
I think Kubernetes is well suited to multi-deployment setups where you don't need full access to the hypervisor; for other cases it is better to use a virtualization scheme. I think Kubernetes is still a little green for moving all traditional infrastructure, but for microservice-oriented infrastructure it is better than Swarm, because you can find better resources online and better community support, and it works great within its limits.
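The per-worker pod ceiling this reviewer describes is commonly governed by the kubelet's maxPods setting (110 by default). As a hedged illustration only, a KubeletConfiguration fragment like the one below is one way such a limit is raised; whether that would ease the Calico and middleware problems described above is a separate question.

```yaml
# KubeletConfiguration fragment (sketch): raises the per-node pod limit.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 200    # default is 110; the node's pod CIDR must be sized to match
```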
Asad Khan | TrustRadius Reviewer
Score 8 out of 10
Vetted Review
Verified User
Incentivized
I deploy and manage telco workloads on top of Kubernetes. These are called CNFs (Containerized Network Functions): legacy telco applications converted into K8s objects and connected via a networking and storage solution of your choice, managed by K8s. Like every other industry, telcos are converting all their legacy applications sitting on proprietary hardware and boxes to COTS hardware and software architectures adapted to cloud technologies. Kubernetes helps us manage the CNFs efficiently and gives better uptime compared to a VM-based architecture. I see more scope for flexibility and easy scaling in K8s than in any other technology.
  • Makes sure that the workload remains UP & running by maintaining the desired state.
  • Gives a lot of flexibility in choosing the networking, storage, monitoring, etc solutions of your choice.
  • The biggest advantage is upgrading an application to a new version without any downtime (a rolling-update sketch follows this review).
  • Portability of the code is possible to a great extent.
  • Flexibility gives birth to complexity, so designing an application on K8s is also complex.
  • Writing YAML manifests manually and then validating them for errors is a pain; it should be addressed with a solution that writes the YAML and Helm charts in the background while the user designs the application in a GUI-based sketch, as OpenStack tooling does.
  • The overall approach to operations should shift from CLI-based to GUI-based for ease of use.
  • Because of the large number of objects and new concepts, the learning curve is steep, i.e. progress is slow.
  • Adding GUI-based operations, such as finding the exact point causing latency or showing the pod consuming the most CPU/RAM, would be of great help.
K8s should be avoided if your application works well without being converted to a microservices-based architecture, fits correctly in a VM, needs little scaling, and has a fixed traffic pattern; in that case it is better to keep away from Kubernetes, otherwise the operational challenges and required technical expertise will add a lot to the OPEX. Also, if you assume containers consume fewer resources than VMs, that is not necessarily true: as soon as you convert your application to a microservices-based architecture, a lot of components are added, and resource consumption can climb even higher than with VMs, so beware.

Kubernetes is a good choice when the application needs quick scaling, is already in a microservices-based architecture, has no fixed traffic pattern, and most of the team already has the desired skills.
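The zero-downtime upgrade called out in the pros above is normally achieved with a Deployment's rolling-update strategy. The sketch below is a minimal illustration with assumed names, image, and probe path, not the reviewer's actual CNF configuration.

```yaml
# Rolling update with no capacity loss (illustrative values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cnf-app                    # placeholder name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0            # never drop below the desired replica count
      maxSurge: 1                  # add at most one extra pod during the rollout
  selector:
    matchLabels:
      app: cnf-app
  template:
    metadata:
      labels:
        app: cnf-app
    spec:
      containers:
        - name: app
          image: registry.example.com/cnf-app:2.0   # new version rolled out pod by pod
          readinessProbe:          # gates traffic until each new pod reports ready
            httpGet:
              path: /healthz       # assumed health endpoint
              port: 8080
```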
Score 7 out of 10
Vetted Review
Verified User
Incentivized
We have multiple Kubernetes clusters that deploy mainly web-based applications in containers. Helm has become a widely used way to deploy applications onto Kubernetes, and many applications offer it as their preferred standard deployment method.
  • Deploy applications on multiple nodes.
  • Store application definitions in source control.
  • Abstract away the implementations of storage and networking.
  • Kubernetes is very high-maintenance compared to VM deployments in my opinion.
  • Some failure scenarios are hard to recover from.
  • High effort is needed for upgrading clusters and deployments to new versions of Kubernetes.
Kubernetes is well-suited for deploying stateless, web-based applications. We have had mixed results with deploying databases on Kubernetes, and suspect it has a lot to do with the characteristics of the underlying storage provider. Lastly, Kubernetes is not well-suited for non-HTTP workloads and those sensitive to certain IPs, e.g. SMTP gateways.
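Since Helm comes up here and elsewhere as the de facto packaging standard, the following is a minimal, hedged sketch of the two YAML files at the heart of a chart. The chart name, image, and values are placeholders, not an actual published chart.

```yaml
# Chart.yaml (sketch): identifies the chart and the application version it packages.
apiVersion: v2
name: example-webapp
description: Placeholder chart for a stateless web application
version: 0.1.0        # version of the chart itself
appVersion: "1.0.0"   # version of the packaged application
```

```yaml
# values.yaml (sketch): user-tunable defaults referenced by the chart's templates.
replicaCount: 3
image:
  repository: example/webapp
  tag: "1.0.0"
service:
  type: ClusterIP
  port: 80
```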
April 07, 2022

Kubernetes Review

Score 10 out of 10
Vetted Review
Verified User
Currently we are using Kubernetes in our project to orchestrate containers. We use it for our banking client, where user transaction volume increases at certain times as customers use the banking applications. Whenever the load increases, Kubernetes spins up new pods in the cluster through a ReplicaSet to handle the transaction load.
  • container orchestration
  • Horizontal pod scaling
  • load balancing
  • Routes help in exposing internal traffic
  • GUI interface
  • Monitoring tools like Prometheus and Grafana
Kubernetes is really needed where we expect user load to fluctuate. It handles this very well by spinning up new pods of the same application when load gets high and terminating pods when load drops, and it does all of this without any manual intervention; we just need to define an HPA. We can also easily run it on cloud platforms.
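The load-driven scaling this reviewer relies on maps onto a HorizontalPodAutoscaler. The sketch below uses an assumed Deployment name and CPU threshold; the reviewer's actual HPA definition is not shown in the review.

```yaml
# HorizontalPodAutoscaler (sketch): scales a Deployment on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: banking-api-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: banking-api              # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```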
Score 10 out of 10
Vetted Review
Verified User
Incentivized
At its foundation, Kubernetes manages containers, such as Docker containers or containers from other runtimes, and it helps you manage containerized applications made of hundreds of containers across different environments: physical machines, virtual machines, cloud machines, or even hybrid deployments. In our specific scenario, we installed a Kubernetes cluster on CentOS 8 with one master node and two worker nodes to orchestrate our existing application containers.
  • With high availability, the application has no downtime and is always accessible to users.
  • Users get a very fast response from the application, meaning high performance with scalability.
  • Backup and restore - disaster recovery.
  • The installation process of a k8s cluster on Linux machines such as CentOS requires an experienced person.
  • Kubernetes requires a lot of learning for beginners.
  • For small applications, k8s is overkill.
Among all the features k8s offers, automatic container scheduling onto worker nodes and self-healing containers are what I like most. On the other side, installing the k8s cluster on CentOS 8 was quite difficult for me, but never mind: it is working as we expected and it was a one-time effort. In my case, more than 7 application containers need to run and communicate with each other, so for us Kubernetes is an optimal solution.
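A one-master, two-worker cluster like the one described here is often bootstrapped with kubeadm (the review does not say which installer was used, so that is an assumption). Below is a hedged sketch of a kubeadm configuration for the first control-plane node; the version, endpoint, and pod subnet are placeholders, and a CNI plugin such as Calico or Flannel still has to be installed afterwards.

```yaml
# kubeadm ClusterConfiguration (sketch) for the first control-plane node.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.0               # assumed Kubernetes version
controlPlaneEndpoint: "10.0.0.10:6443"   # placeholder address of the master node
networking:
  podSubnet: "192.168.0.0/16"            # must match the CNI plugin's expected CIDR
```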
Score 9 out of 10
Vetted Review
Verified User
Incentivized
We have moved almost all of our stateless services to Kubernetes so that managing the large number of services is easy. Kubernetes helps us scale services up or down based on business needs. It helps us upgrade a set of clusters with zero to minimal downtime, depending on the application. We also moved our stateful database services (mostly NoSQL) to Kubernetes to manage everything in a single place and to keep costs down.
  • Scaling the application processes/pods up/down based on business needs.
  • Managing the pods from a single source.
  • Better security along with different layers of security.
  • Orchestrating the pods and the available resources in different machines.
  • Easier way to update multiple deployments.
  • Better way to manage backups.
Kubernetes as such makes our life easy in terms of deploying, orchestrating, and managing stateless and stateful services/pods from a single place along with security. We use k9s which makes it easier to manage Kubernetes because of the simple but effective GUI it provides. When it comes to database/stateful services we need to be more cautious when it comes to managing storage. Also, unless tested properly Kubernetes needs some more tweaking when it comes to hosting RDBMS databases.
Score 10 out of 10
Vetted Review
Verified User
Incentivized
Kubernetes was used in my organization by a specific department. The business problem it addressed was resource allocation and scheduling: creating and tearing down containers at will depending on resource demand. These resources provided API services to the front-end website.
  • Resource allocation and scheduling.
  • Managing container instances and run-files.
  • Allowing for infrastructure as code.
  • Usability and user friendliness.
  • There is no built-in front end, so anything providing a self-service model currently has to be built in-house.
  • It uses pretty new technologies so there is a relatively steep learning curve.
Any sort of stateless service that is under heavy utilization or demand is a great candidate for containers in general, and therefore for Kubernetes. Kubernetes should not be implemented just for a specific department or specific purpose; it is a general solution to a large problem and should be put to use accordingly.
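The resource allocation and scheduling problem mentioned above is expressed per container through requests and limits, which the scheduler uses when placing pods. A hedged sketch with placeholder names and values:

```yaml
# Container resource requests and limits (sketch; placeholder values).
apiVersion: v1
kind: Pod
metadata:
  name: api-backend
spec:
  containers:
    - name: api
      image: example/api:1.0   # placeholder image
      resources:
        requests:
          cpu: "250m"          # guaranteed share, used for scheduling decisions
          memory: "256Mi"
        limits:
          cpu: "1"             # hard ceiling enforced at runtime
          memory: "512Mi"
```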
Score 10 out of 10
Vetted Review
Verified User
Incentivized
I work at a company that is currently moving everything to the cloud. We want to leverage as many cloud-managed services as we can. Kubernetes came into view when we were thinking about containerizing our on-premises servers for fast and easy deployment to the cloud. We are still experimenting with Kubernetes, but based on what we have so far, it works perfectly for our department.
  • Kubernetes is a great tool for managing Docker images. It has great features for managing your containers.
  • It is supported almost in every cloud platform. AWS, GCP, Azure. We are mainly using GCP for our products and Kubernetes works great in it.
  • It is not hard to learn. Although you can learn and deploy your Kubernetes in a hard way, you do not have to.
  • I know Kubernetes was designed to be stateless, and there are a lot of good reasons for that, but making stateful workloads work can be hard at times.
  • Like any other cloud migration project, adopting Kubernetes can be a hard thing to bring to the team. I would not call that a con of Kubernetes itself, just a fact.
  • Kubernetes is very easy to deploy in the cloud but not easy for platforms other than AWS, GCP, Azure.
If your company is 1) looking into moving to cloud, 2) thinking of designing a CI/CD pipeline, and 3) comfortable with taking the time and effort to deploy clusters, then Kubernetes is definitely worth the resources. It will bring much more benefits with almost no tradeoffs. But for small size companies who have few servers, Kubernetes might not be the best choice.
Nitin Pasumarthy | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Incentivized
Kubernetes is currently used as an experimental product for building and managing Machine Learning (ML) pipelines at LinkedIn. It is currently used by very few teams to access GPU clusters. Kubernetes makes deploying training and monitoring workloads on clusters really simple with a robust CLI. It has a small learning curve, as it is mainly driven by config files.
  • Complex cluster management can be done with simple commands with strong authentication and authorization schemes
  • Exhaustive documentation and open community smoothens the learning process
  • As a user a few concepts like pod, deployment and service are sufficient to go a long way
  • We had several problems with its NFS, which is responsible for syncing the code across the cluster
  • On several instances the pods go into UNKNOWN state in which case restarting the entire node is the only solution
  • As a user of the existing setup given to me, I wasn't able to allocate only some CPU cores on a single host. It was either all or zero making cluster utilization sub-optimal
  1. Kubernetes is very easy to get started with and to set up.
  2. It has various deployment options, file systems, and service types, making it suitable for several use cases besides Machine Learning.
  3. It extends Docker's rich functionality, making the two a powerful combination.
  4. The rough edges in the file system, utilization, and resource management should be fixed before it is adopted as a company-wide standard.
  5. Its extensive Python client library makes it easy to build services on top of Kubernetes; however, the API is quite complex and its documentation is quite poor.
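For the GPU-cluster use case described above, containers request GPUs as extended resources. The sketch below is illustrative only: it assumes the NVIDIA device plugin is deployed on the nodes, and GPUs can only be requested in whole units, which is reminiscent of the all-or-nothing allocation issue listed in the cons.

```yaml
# Pod requesting one GPU (sketch); assumes the NVIDIA device plugin is installed.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: example/ml-trainer:latest   # placeholder training image
      resources:
        limits:
          nvidia.com/gpu: 1              # GPUs are requested in whole units only
```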
Score 9 out of 10
Vetted Review
Verified User
Incentivized
We use Kubernetes at a department level and across the whole organization. At a department level, we use Kubernetes to manage our developer environments. These environments are made up of 30 containers containing compilers, single sign-on managers, and various other linting tools. We selected Kubernetes to manage these containers because it's quick to deploy and immensely customizable. Across the organization, we use Kubernetes to manage our customer environments. These environments are made up of ~8 containers running various managed web services. Kubernetes was selected for this because it is open source, scalable, and reliable. This allows us to cost-effectively deploy a solution and be confident that it will perform as needed.
  • Cost-effectiveness, Kubernetes is free and open source.
  • Scalability, Kubernetes works regardless of how many pods it's managing; be it ten or a thousand.
  • Low overhead, Kubernetes adds very little performance cost per developer per machine. The benefits of having a managed system vastly outweigh the minor performance cost.
  • Large market share, Kubernetes is one of the top container orchestration tools used by developers today. This has been immensely helpful when finding new talent who are familiar with Kubernetes.
  • Local development, Kubernetes does tend to be a bit complicated and unnecessary in environments where all development is done locally.
  • The need for add-ons, Helm is almost required when running Kubernetes. This brings a whole new tool to manage and learn before a developer can really start to use Kubernetes effectively.
  • Finicky ConfigMap schemes: ConfigMaps often have environment-breaking hang-ups, and the fail-safes around them are sadly lacking (a ConfigMap sketch follows this review).
Kubernetes is well suited for environments where products are hosted on AWS or another managed server, and where multiple software products need to all work together. When working with a managed server Kubernetes gives us a single point that allows us to control the entire environment. This has proved to be immensely helpful when working on large systems because it keeps track of nodes at no extra cost.

Kubernetes is less suited to environments where all development is done locally; the cost of getting all the nodes running often outweighs the potential benefits when a developer can already access all the containers locally.
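The ConfigMap hang-ups mentioned in the cons refer to objects like the one sketched below, shown with a pod consuming it as environment variables. Names and keys are placeholders; the point is that a wrong name or a missing key only surfaces when the container starts, which is the kind of environment-breaking failure the reviewer describes.

```yaml
# ConfigMap plus a Pod consuming it (sketch; placeholder names and keys).
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  FEATURE_FLAG_X: "true"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0     # placeholder image
      envFrom:
        - configMapRef:
            name: app-config     # container cannot start if this name is wrong
```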
Score 10 out of 10
Vetted Review
Verified User
Incentivized
Kubernetes has become the microservice container orchestration platform of choice. All our software is deployed into Kubernetes, including public websites.
  • Scales extremely well
  • Handles proper rolling updates of microservices
  • Makes it super easy to establish a proper CI/CD pipeline
  • Makes it easy for developers to adopt and therefore use
  • Could improve user access. It currently uses RBAC, but depending on your implementation (i.e. Amazon EKS), setting up permissions and adding users who should only use the cluster is sometimes challenging (an RBAC sketch follows this review).
  • Security can always be improved.
  • Hooks for identity management; there are open source projects (i.e. Dex), and it would be nice to see these adopted into the mainline.
  • The UI Dashboard needs some major improvements.
Kubernetes is well suited for deployment of container based applications and microservices. Anything that doesn’t require a lot of disk space (i.e. a database) works really well with this system.
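The RBAC friction mentioned above involves Role and RoleBinding objects like the sketch below (on Amazon EKS there is an additional mapping of IAM identities to Kubernetes users, which is outside this sketch). The namespace, names, and user identity are placeholders.

```yaml
# Namespaced read-only access via RBAC (sketch; placeholder names).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-a
rules:
  - apiGroups: [""]              # "" is the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-a
subjects:
  - kind: User
    name: jane@example.com       # placeholder user identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```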
Score 9 out of 10
Vetted Review
Verified User
Incentivized
We as an organization have a very diverse hardware infrastructure: our own data centers plus multiple cloud providers. The technologies we use are also very diverse; we use VMs, containers, and serverless technologies. For container technology we use Docker and orchestrate it with Kubernetes. In most cases, each business unit has its own Kubernetes clusters for application hosting, categorized separately into preproduction and production environments.
  • Kubernetes can run anywhere, i.e. in in-house data centers as well as in the public cloud.
  • Very efficient management of containers and self-healing.
  • Out-of-the-box automated deployments and rollbacks, with support for many deployment strategies such as blue-green, rolling update, and recreate (a sketch follows this review).
  • Efficient secret and configuration management.
  • Understanding Kubernetes is a little hard and has a steep learning curve.
  • Kubernetes is complex; it has its own concepts such as pods, services, and deployments.
  • Debugging and troubleshooting in Kubernetes is quite hard and requires experience.
Kubernetes is a container-centric platform that can run in in-house data centers as well as in the public cloud. It is not only a platform for running Docker containers, but also a very efficient network and application orchestrator. It has very powerful, robust, and extensible APIs, and it is mostly declarative.
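Of the deployment strategies listed in the pros, rolling updates are built into Deployments, while blue-green is commonly approximated by running two labeled Deployments and switching a Service selector between them. A hedged sketch of the Service side of that switch, with placeholder names and labels:

```yaml
# Blue-green switch point (sketch): the Service selector picks the live version.
apiVersion: v1
kind: Service
metadata:
  name: storefront
spec:
  selector:
    app: storefront
    version: blue        # change to "green" to cut traffic over to the new Deployment
  ports:
    - port: 80
      targetPort: 8080
```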
October 05, 2017

Worth the Learning Curve

Adam Eivy | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Incentivized
Kubernetes has massively simplified and sped up the management of microservices deployed within my team. If we need to spin up a new service, even if it doesn't relate to the other services in the cluster, we can simply deploy the docker container to the cluster, complete with service discovery, configuration management, autoscale and fault tolerance. This is invaluable.
  • Fault tolerance - the things it does under the hood to handle failure is near magical.
  • Configuration management - the ease of managing configs and secrets in kubernetes makes it a snap for integrating services.
  • Service discovery - getting services to talk to each other with automated internal DNS and service-discovery makes shipping service dependencies easy.
  • Speed of error detection - many times, in attempting to fix a problem, I found that kubernetes just had a delay in handling an automated fix. By changing the system, I was playing a cat and mouse game with kubernetes' attempts to auto-fix the error.
  • Sensible logging - many of the logs are difficult to decipher and too verbose to be useful.
  • The learning curve is high - it took many months of working with Google, in which both I and Google Support Engineers learned a lot about how Kubernetes works. The learning curve is not for people looking for quick and easy out of the box.
If you are managing microservices, need service-discovery, autoscale and config management, kubernetes provides everything you need right out of the gate with simple YAML config files, allowing you to store your infrastructure as code within your repos. Kubernetes works best with non-homogenous loads, so putting multiple types of services into the cluster that utilize different components (memory, CPU, network) will provide better results than a single service that takes up one type of resource.
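The service discovery praised here comes from Service objects plus cluster DNS: each Service gets a stable in-cluster name that other pods can resolve. A hedged sketch with placeholder names; the DNS name in the comment assumes the default namespace and cluster domain.

```yaml
# Service exposing a set of pods (sketch); resolvable in-cluster as
# orders.default.svc.cluster.local (assuming the default namespace and domain).
apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  selector:
    app: orders          # routes to pods carrying this label
  ports:
    - port: 80           # port clients call
      targetPort: 8080   # port the container listens on
```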
April 14, 2017

Poor man's review

Manish Rajkarnikar | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Incentivized
  • Whole organization.
  • Used as a PaaS.
  • Used to deploy mostly stateless and cloud-ready apps.
  • Solves the problem of immutable infrastructure. No need for Chef, Puppet or Ansible.
  • Low learning curve for users.
  • Apps restart on failure, can auto-scale, and can burst into the cloud.
  • The infrastructure is cloud agnostic and works in an in-house datacenter too, which gives management leverage when negotiating with cloud providers.
  • Container orchestration
  • Application scale up and down
  • Good PaaS with fluentd, service discovery, secrets etc.
  • Huge community support
  • Free kubeconfig video, which is awesome
  • Quick releases (every quarter)
  • Extensive documentation. Design discussion and decisions are all documented.
  • Huge ecosystem and a lot of tools built around it. A lot of companies are behind it (Google, Microsoft, Coreos etc.). This project is not going anywhere.
  • Documentation could be better; there is no document versioning, content is scattered, and it's sometimes difficult to find the right information.
  • Installation could be easier: kubeadm is partly there but not fully HA; minikube is awesome but does not work for multi-node installations; other installers such as kops and kargo are Ansible-based and not fully immutable.
[Kubernetes is] suited for running Docker containers at scale. I would not use it for running things like Cassandra, Hadoop, or other stateful applications; although they have StatefulSets/PVC/PV, it's not there yet.
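The StatefulSet/PVC/PV machinery the reviewer considers not quite ready looks roughly like the sketch below. The names, image, and storage class are assumptions, and the headless Service it references has to be created separately.

```yaml
# StatefulSet with per-pod persistent volumes (sketch; placeholder names).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db           # headless Service, created separately
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
        - name: db
          image: example/db:1.0     # placeholder database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:             # one PersistentVolumeClaim per pod
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard  # assumed storage class
        resources:
          requests:
            storage: 10Gi
```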
Jake Luby | TrustRadius Reviewer
Score 9 out of 10
Vetted Review
Verified User
Incentivized
It is being used across many departments, with more being added every day. My team was one of the first to use Kubernetes (K8s) for our microservice deployments. It addresses the problems of HDHA applications, agile development, and rolling deployments with no downtime. We are also using its service discovery with Spring Boot Admin to provide node-level details for all nodes in the cluster.
  • Single-process microservice containerization that can be scaled up and down at a moment's notice.
  • Rolling deployments with zero downtime.
  • Artifactory/DockerHub integration for deploying from an artifact repo.
  • Spring Boot integration with ConfigMaps & secret management.
  • Ingress is HTTP only, so anything that is TCP-only must stay inside the cluster (an Ingress sketch follows this review).
  • Multi-process containers don't behave well.
  • Sizing constraints cause slow startups for Spring Boot apps.
  • Ingresses are slow to start up.
It is well suited for stateless microservices (single APIs that perform a single function, message consumers/producers, single session UIs, etc.). It is also great for teams that are deploying a lot since they are fast and rolling with no pods being down. It is not well suited for things that require a state or any kind of persistence layer in the app or cluster.
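The Ingress limitation noted above follows from Ingress being an HTTP(S) routing object; raw TCP traffic has to be exposed another way (for example through a LoadBalancer or NodePort Service). A hedged Ingress sketch with a placeholder host and backend; it also assumes an ingress controller is running in the cluster.

```yaml
# HTTP Ingress (sketch; placeholder host and backend). Requires an ingress
# controller to be running in the cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-ingress
spec:
  rules:
    - host: orders.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders      # existing Service to route to
                port:
                  number: 80
```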
David Long, SPA | TrustRadius Reviewer
Score 10 out of 10
Vetted Review
Verified User
Incentivized
For managing containers across a cluster of servers, I won't use anything but Kubernetes. It makes scheduling containers extremely easy. Bundling applications that we develop into Docker images has made deployment a really simple process for us. It's made it so that we don't have to think too much about the clash that comes of running multiple applications on the same set of hosts. It's also helped our engineers to write idempotent applications better because we scale up and down often.
  • Container Scheduling
  • Deployments
  • Extensibility
  • SSL Management
  • Cluster Installation
  • Ingress Management
Kubernetes handles web applications wrapped into containers really well. Essentially, if it's something that you can containerize, Kubernetes will run it well. You can allocate resources towards specific containers if you have some that need more resources than others. Putting a service in front of containers makes it easy to communicate between pods of containers or the outside world.