
Tutorial: The Basics of Kubernetes

With the help of containers, developers can concentrate fully on their apps while the operations team takes care of the infrastructure. Container orchestration is how you manage these container deployments across your organization.

Kubernetes is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

With Kubernetes, you can deploy and manage containerized, legacy, and cloud-native apps at scale, as well as apps being refactored into microservices, across environments, including private clouds and public clouds from major providers such as Amazon Web Services (AWS), Google Cloud, IBM Cloud, and Microsoft Azure.

Kubernetes architecture

With Kubernetes you get a platform on which you can schedule and run containers in clusters of physical or virtual machines. In the Kubernetes architecture, a cluster is divided into components that together maintain that cluster's defined state.

A Kubernetes cluster is a set of node machines for running containerized applications, and it consists of two parts: the control plane and the compute machines, or nodes. Each node is its own Linux® environment and can be either a physical or a virtual machine. Pods, which are made up of containers, run on the nodes.

The Kubernetes API is the front end of the Kubernetes control plane and controls the interaction of users with the Kubernetes cluster. The API server determines whether a request is valid and then processes it.

The API is the interface that is used to manage, create, and configure Kubernetes clusters. It allows users, external components and parts of your cluster to communicate with one another.

This short Kubernetes tutorial shows you how to create clusters and deploy applications.

Other components of Kubernetes clusters

Nodes: These machines perform the tasks assigned by the control plane.

Pods: A group of one or more containers deployed on a single node. A pod is the smallest and simplest Kubernetes object.
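As a minimal sketch, a pod can be declared in a YAML manifest like this; the name, label, and image below are placeholders, not taken from the article:

```yaml
# A single-container pod running an nginx web server.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello        # label used by services to select this pod
spec:
  containers:
  - name: hello
    image: nginx:1.25  # container image the pod runs
    ports:
    - containerPort: 80
```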

Services: A way of exposing an application running on a set of pods as a network service. This decouples work definitions from the pods.
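A simple service that exposes pods by label selector might look like the following sketch; the names and ports are illustrative:

```yaml
# A ClusterIP service that routes traffic to all pods labeled app=hello.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP      # internal-only virtual IP (the default)
  selector:
    app: hello         # matches the pods' label, not a specific pod name
  ports:
  - port: 80           # port the service listens on
    targetPort: 80     # port on the pods the traffic is forwarded to
```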

kubectl: The command line interface for managing your Kubernetes cluster. Learn basic kubectl and Helm commands.

kubelet: A small application in every node that communicates with the control plane. The kubelet ensures that the containers of each pod scheduled to its node are running.

When you're ready to get started with Kubernetes, you can use the open source Minikube tool to set up a local Kubernetes cluster and try out Kubernetes on a laptop.

This is how Kubernetes works

Kubernetes is based on the principle of a desired and an actual state. Kubernetes objects represent the state of a cluster and communicate to Kubernetes what the workload should look like.

Once an object has been created and defined, Kubernetes works continuously to ensure that it exists as specified.

Controllers take over the active management of the state of Kubernetes objects and make changes that move the cluster from the actual to the desired state.

Developers or system administrators indicate the desired state with the YAML or JSON files they submit to the Kubernetes API. With the help of a controller, Kubernetes compares the newly defined with the actual state of the cluster.

The desired state of the Kubernetes cluster defines which applications or other workloads should be running, which container images they should use, which resources should be made available to them, and other configuration details.
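As an illustration of such a desired state, a Deployment manifest submitted to the API might look like this; the app name, image, and resource figures are placeholder assumptions:

```yaml
# Desired state: three replicas of an nginx-based app,
# each guaranteed a small slice of CPU and memory.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3                 # how many pods should be running
  selector:
    matchLabels:
      app: hello
  template:                   # pod template the deployment stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25     # which container image to use
        resources:
          requests:
            cpu: 100m         # resources made available to the workload
            memory: 128Mi
```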

Configuration data and information about the state of the cluster live in etcd, a distributed key-value store. etcd is fault-tolerant and distributed, and serves as the ultimate source of truth for your cluster.

Kubernetes automatically manages your cluster so that it corresponds to the desired state. This is usually done by controllers sending messages to the API server and thus triggering changes. Some Kubernetes resources have built-in controllers.

Example of how Kubernetes manages the desired state: You deploy an application with the desired state “3”, where 3 means that three replicas of the application are to be executed.

If one of these containers crashes, the Kubernetes ReplicaSet detects that only two replicas are running, so another is added to meet the desired state.

ReplicaSets can be thought of as a kind of controller that ensures that a certain number of pods are running at a specific time.
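The three-replica example above could be expressed directly as a ReplicaSet, sketched below; in practice you would usually let a Deployment create and manage it, and the name and image here are illustrative:

```yaml
# A ReplicaSet that keeps exactly three matching pods running:
# if one crashes, the controller starts a replacement.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hello-rs
spec:
  replicas: 3          # the desired state "3"
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25
```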

Kubernetes deployments are the preferred way to manage ReplicaSets and provide declarative updates for pods so that you don't have to manage them yourself.

You can also use Kubernetes' autoscaling to scale your services based on demand. When you specify the desired state of an application or service, you can have a controller deploy additional pods when demand increases.

The desired state of your application could grow from the usual three replicas to ten at peak times.
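One way to express this is a HorizontalPodAutoscaler that keeps between three and ten replicas, sketched below; the target name and CPU threshold are assumptions for illustration:

```yaml
# Scale a deployment between 3 and 10 replicas based on CPU load.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa
spec:
  scaleTargetRef:              # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: hello-deployment
  minReplicas: 3               # the usual desired state
  maxReplicas: 10              # allowed growth at peak times
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80 # add pods above 80% average CPU
```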

Kubernetes deployments

A Kubernetes deployment is a resource object in Kubernetes that provides declarative updates for apps.

A deployment can be used to describe the lifecycle of an application, such as which images the app uses, how many pods it needs, and how to update them.

Updating containerized applications manually can be time-consuming and labor-intensive. A Kubernetes deployment automates this process and makes it repeatable.

Deployments are completely managed by the Kubernetes backend, and the entire update process is done on the server side, with no client interaction.

With the Kubernetes Deployment Object, you can:

  • Deploy ReplicaSet or Pods
  • Update Pods and ReplicaSets
  • Roll back to previous deployment versions
  • Scale deployments
  • Suspend or resume deployments
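As a sketch of such a declarative update policy, a Deployment can specify how its pods are replaced during a rollout; the names and values below are illustrative assumptions:

```yaml
# A rolling update that never reduces capacity below the desired count.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during an update
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25  # changing this image triggers a rollout
```

Operations such as rollback, suspend, and resume are then driven declaratively or via `kubectl rollout undo`, `kubectl rollout pause`, and `kubectl rollout resume`.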

Kubernetes patterns

Kubernetes patterns are design patterns for container-based applications and services.

Kubernetes can support developers in programming cloud-native apps and provides a library with APIs and tools for this purpose.

However, Kubernetes does not provide guidelines for developers and architects on how to use it to create a complete system that meets specific business needs and goals.

With the help of patterns, architectures can be reused. Instead of developing the entire architecture yourself, you can use existing Kubernetes patterns, which also ensure that all components work as expected.

Patterns are the tools a Kubernetes developer needs, and they show you how to build your system.

Kubernetes operators

A Kubernetes operator is a method of packaging, deploying, and managing a Kubernetes application. Kubernetes applications are deployed and managed on Kubernetes using the Kubernetes API and kubectl tooling.

A Kubernetes operator is an application-specific controller that extends the functionality of the Kubernetes API to create, configure and manage instances of complex applications for a Kubernetes user.

Learn how you can use the Operator SDK to build a Kubernetes operator in ten minutes.

General Kubernetes concepts for resources and controllers are used for this purpose. However, domain or application-specific knowledge is also required to automate the entire lifecycle of the managed software.
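An operator typically begins by registering a CustomResourceDefinition so the Kubernetes API can serve an application-specific type, which the operator's controller then reconciles. The sketch below is illustrative; the group and kind are hypothetical:

```yaml
# Extends the Kubernetes API with a new "MyDatabase" resource type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mydatabases.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: mydatabases
    singular: mydatabase
    kind: MyDatabase
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:           # app-specific field the operator acts on
                type: integer
```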

For more details on Kubernetes operators or reasons why they are so important, see: How to explain Kubernetes Operators in plain English

Operators allow you to write task automation code well beyond the basic automation features of Kubernetes. Teams using a DevOps or Site Reliability Engineering (SRE) approach can use operators to integrate SRE practices with Kubernetes.

Learn more about how Kubernetes Operators work, including practical examples, and how to build them using the Operator Framework and Software Development Kits.

Support of DevOps with Kubernetes

DevOps is based on the automation of routine tasks and the standardization of environments over the entire lifecycle of an app.

Containers enable a unified environment for development, deployment and automation and ensure the easy migration of apps between development, test and production environments.

A big advantage of implementing DevOps is the CI/CD (continuous integration/continuous deployment) pipeline. With CI/CD, you can deliver apps to your customers frequently and validate software quality without major manual intervention.

When the container lifecycle is managed with Kubernetes, using Kubernetes deployments and operators along with a DevOps approach, software development and IT operations can be more closely aligned to support a CI/CD pipeline.

Training for Kubernetes

Deploying Containerized Applications Tech Overview

This on-demand series of short lectures and in-depth presentations introduces Linux containers and container orchestration technology with Docker, Kubernetes, and Red Hat® OpenShift® Container Platform.

Red Hat OpenShift Administration

In this course, you will learn how to install and manage the Red Hat OpenShift Container Platform. Learn how to install, configure, and manage OpenShift clusters in this interactive, hands-on course. You will also implement sample applications to understand how developers use the platform.

Introduction to OpenShift Applications

In this introductory course, developers will learn how to use OpenShift to build, deploy, scale, and troubleshoot applications. As OpenShift and Kubernetes become more prevalent, today's developers need to know how to develop, build, and deploy applications using a containerized application platform.

Kubernetes for the company

Red Hat OpenShift is a Kubernetes platform for businesses. It integrates all of the additional technology components that make Kubernetes powerful and enterprise-ready, including registry, networking, telemetry, security, automation and services.

With OpenShift, developers can build new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration needed to turn a good idea into a new business quickly and easily.

You can try Red Hat OpenShift for 60 days free of charge to automate your container operations.