Kubernetes provides a set of primitives for running robust distributed applications. It takes care of automatic failover and scaling of your application, and provides deployment patterns and APIs that you can use to automate resource management and deploy new workloads.
One of the biggest challenges for developers is that they want to focus on the details of their code rather than on the infrastructure that code runs on. Serverless is one of the main architectural paradigms that addresses this challenge. There are several platforms you can use to run serverless applications, implemented as individual functions or run in containers; examples include AWS Lambda, AWS Fargate, and Azure Functions. These managed platforms have some disadvantages, such as:
- Vendor lock-in
- Limits on the size of application binaries/artifacts
- Cold start performance
You may be in a situation where you are only allowed to run applications in a private data center, or you may already be using Kubernetes and want the benefits of serverless. There are several open source platforms, such as Knative and OpenFaaS, that use Kubernetes to abstract the infrastructure away from the developer, allowing you to deploy and manage your applications using serverless patterns. Using any of these platforms eliminates the problems mentioned above.
This article will show you how to deploy and manage serverless applications using Knative and Kubernetes.
Serverless computing is a development model that lets you build and run applications without having to manage servers. It describes a model in which a cloud provider handles the heavy lifting of provisioning, maintaining, and scaling the server infrastructure, while developers can simply package and upload their code for deployment. Serverless applications can automatically grow or shrink as needed, with no additional configuration by the developer.
As described in a white paper from the CNCF Serverless Working Group, there are two main personas in serverless:
- Developer: writes code for the serverless platform and benefits from it, which gives them the perception that there are no servers and that their code is always running.
- Provider: runs the serverless platform for external or internal customers.
The provider needs to manage servers (or containers) and incurs some cost to run the platform even when it is idle. A self-hosted system can still be considered serverless: typically, one team acts as the provider and another as the developer.
There are several ways to run serverless applications in a Kubernetes environment. This can be done through managed serverless platforms such as IBM Cloud Code Engine and Google Cloud Run, or through open source alternatives that you can host yourself, such as OpenFaaS and Knative.
Introduction to Knative
Knative is a set of Kubernetes components that provide serverless functionality. It provides an event-driven platform that can be used to deploy and run applications and services that can automatically scale as needed, with out-of-the-box support for monitoring, automatic TLS certificate renewal, and more.
Knative is used by many companies. In fact, it powers the Google Cloud Run platform, IBM Cloud Code Engine, and Scaleway Serverless Functions.
Knative's basic deployment unit is a container that can receive incoming traffic. You give it a container image to run, and Knative takes care of all the other components needed to run and scale the application. Containerized applications are deployed and managed by one of Knative's core components, Knative Serving, which manages the deployment and rollout of stateless services along with their networking and autoscaling requirements.
The other main component of Knative is Knative Eventing. This component provides an abstract way to consume CloudEvents from internal and external sources without having to write additional code for each event source. This article focuses on Knative Serving, but a future article will cover how to use and configure Knative Eventing for different use cases.
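Under the hood, Knative Serving represents each application as a Service custom resource. As a rough sketch of what that looks like (the image, port, and env values here are illustrative; the kn commands used later in this article generate an equivalent resource for you), you could also deploy with kubectl and a manifest:

```shell
# Sketch only: a minimal Knative Service manifest applied with kubectl.
# kn creates an equivalent resource for you, so this step is optional.
kubectl apply -f - <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"
EOF
```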
To install Knative and deploy your application, you need a Kubernetes cluster and the following tools installed:
- Docker
- kubectl, the Kubernetes command-line tool
- kn, the CLI for managing Knative applications and settings
Install Docker
To install Docker, go to docs.docker.com/get-docker and download the appropriate binary for your operating system.
Install kubectl
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. Docker Desktop installs kubectl for you, so if you followed the previous section and installed Docker Desktop, kubectl should already be installed and you can skip this step. If you don't have kubectl installed, follow the instructions below.
If you are using Linux or macOS, you can install kubectl with Homebrew by running the command brew install kubectl. Make sure the version you installed is up to date by running the command kubectl version --client.
If you are using Windows, run the command curl -LO https://dl.k8s.io/release/v1.21.0/bin/windows/amd64/kubectl.exe to download kubectl, then add the binary to your PATH. Make sure the version you installed is up to date by running the command kubectl version --client. You should have version 1.20.x or 1.21.x, because in a later section you will create a cluster running Kubernetes 1.21.x.
Install the kn CLI
The kn CLI provides a quick and easy interface for creating Knative resources, such as services and event sources, without directly creating or modifying YAML files. kn also makes it easy to perform otherwise complex procedures such as autoscaling and traffic splitting.
To install kn on macOS or Linux, run the command brew install knative/client/kn.
To install kn on Windows, download a stable binary from https://mirror.openshift.com/pub/openshift-v4/clients/serverless/latest. Then add the binary to your system PATH.
Creating a Kubernetes Cluster
You need a Kubernetes cluster to run Knative. In this article, you'll work with a local Kubernetes cluster running on Docker. You must have Docker Desktop installed.
Create a cluster with Docker Desktop
Docker Desktop includes a separate Kubernetes server and client. This is a single-node cluster running in a Docker container on your local system and should only be used for local testing.
To enable Kubernetes support and install a standalone Kubernetes instance running as a Docker container, go to Settings > Kubernetes and then click Enable Kubernetes.
Click Apply & Restart to save the settings, and then click Install to confirm, as shown below.
This instantiates the images needed to run the Kubernetes server as containers.
The Kubernetes status is displayed in the Docker menu, and the kubectl context is set to docker-desktop, as shown in the following image.
Alternatively, create a cluster with kind
You can also create a cluster with kind, a tool for running local Kubernetes clusters using Docker container nodes. If you have kind installed, you can run the following command to create your kind cluster and set the kubectl context.
curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/01-kind.sh | sh
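Once the script completes, it can be worth confirming that the cluster exists and that kubectl points at it. This is an optional sanity check I'm adding here, not part of the original steps:

```shell
# Optional sanity checks after creating the kind cluster.
kind get clusters               # should list the cluster the script created
kubectl config current-context  # should show the kind cluster's context
kubectl get nodes               # the node(s) should report STATUS Ready
```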
Install Knative Serving
Knative Serving manages service deployments, patches, networking, and scaling. The Knative Serving component provides its service via an HTTP URL and has secure default settings for its configurations.
For kind users, follow these instructions to install Knative Serving:
- Run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-serving.sh | sh to install Knative Serving.
- When that finishes, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-kind/master/02-kourier.sh | sh to install and configure Kourier.
For Docker Desktop users, run the command curl -sL https://raw.githubusercontent.com/csantanapr/knative-docker-desktop/main/demo.sh | sh instead.
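Either way, before moving on you can confirm that the Serving control plane is up. This check is an addition of mine, and the exact pod names may differ between Knative versions:

```shell
# Optional: confirm the Knative Serving control plane is ready.
kubectl get pods -n knative-serving
# All pods (activator, autoscaler, controller, webhook, ...) should
# eventually reach STATUS Running. You can also block until they do:
kubectl wait --for=condition=Ready pods --all -n knative-serving --timeout=300s
```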
Deploy your first app
Next, you will deploy a simple "Hello World" application to learn how to deploy and configure an application on Knative. You can deploy an application with a YAML file and the kubectl command, or with the kn command and the appropriate options. For this article, I'll use the kn command. The sample container image you will use is hosted at gcr.io/knative-samples/helloworld-go.
To deploy an application, use the kn service create command, specifying the application name and the container image to use.
Run the following command to create a service called hello using the image gcr.io/knative-samples/helloworld-go.
kn service create hello \
  --image gcr.io/knative-samples/helloworld-go \
  --port 8080 \
  --revision-name=world
The command creates and starts a new service with the specified image and port.
The revision name is set with the --revision-name option. Knative uses revisions to keep track of all changes made to a service. Whenever a service is updated, a new revision is created and promoted to be the current version of the application. This feature allows you to revert to a previous version of the service if needed, and giving each revision a name makes it easy to identify.
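As a sketch of what such a rollback could look like (hypothetical follow-on commands, not part of this walkthrough), you can pin all traffic back to a named revision:

```shell
# Hypothetical rollback sketch: route 100% of traffic back to the
# revision named "hello-world" created above.
kn service update hello --traffic hello-world=100

# Inspect which revisions exist and which currently receive traffic.
kn revision list
```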
When the service is created and ready, the following output should be printed to the console.
Service 'hello' created to latest revision 'hello-world' is available at URL: http://hello.default.127.0.0.1.nip.io
Confirm that the app is running by running the command curl http://hello.default.127.0.0.1.nip.io. You should get the output Hello World! printed to the console.
Suppose you want to update the service; you can use the kn service update command to make changes to it. Each change creates a new revision, and all traffic is directed to the new revision once it has started and is healthy.
Update the TARGET environment variable by running the following command:
kn service update hello \
  --env TARGET=Coder \
  --revision-name=coder
You should get the following output when the command completes.
Service 'hello' updated to latest revision 'hello-coder' is available at URL: http://hello.default.127.0.0.1.nip.io
Run the curl command again and you should get Hello Coder! printed.
~ curl http://hello.default.127.0.0.1.nip.io
Hello Coder!
Traffic splitting
A Knative revision is similar to a tag in version control and is immutable. Each Knative revision is associated with a corresponding Kubernetes Deployment; this allows the application to be rolled back to any of its previous revisions. You can view the list of available revisions by running the command kn revision list. This prints a list of available revisions for each service, with information about how much traffic each revision receives, as shown in the image below. By default, each new revision receives 100% of the traffic when it is created.
In production, you might want to roll out applications using common deployment patterns such as canary or blue-green. You need more than one revision of a service to use these patterns. The hello service you deployed in the previous section already has two revisions, named hello-world and hello-coder, so you can split the traffic 50/50 between the two revisions with the following command:
kn service update hello \
  --traffic hello-world=50 \
  --traffic hello-coder=50
Run the curl http://hello.default.127.0.0.1.nip.io command a few times and you will see that you get Hello World! some of the time and Hello Coder! at other times.
Autoscaling
One of the benefits of serverless is the ability to scale up and down as needed. When no traffic is arriving, the application should scale down, and when traffic peaks, it should scale up to meet demand. Knative scales the pods of a Knative service based on incoming HTTP traffic. After a period of inactivity (60 seconds by default), Knative terminates all pods for that service; in other words, it scales to zero. This autoscaling capability is managed by the Knative Pod Autoscaler, which works alongside the Horizontal Pod Autoscaler built into Kubernetes.
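To watch scale-to-zero and scale-up happen yourself (an optional experiment I'm adding, not one of the article's steps), observe the service's pods in one terminal while generating traffic from another:

```shell
# Terminal 1: watch the hello service's pods appear and disappear.
kubectl get pods -l serving.knative.dev/service=hello -w

# Terminal 2: generate a burst of traffic so the autoscaler scales up.
for i in $(seq 1 50); do
  curl -s http://hello.default.127.0.0.1.nip.io > /dev/null &
done
wait
# After ~60 seconds without traffic, the pods terminate again (scale to zero).
```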
If you haven't accessed the hello service for more than a minute, its pods should already have been terminated. Running the command kubectl get pod -l serving.knative.dev/service=hello should show an empty result. To see autoscaling in action, open the service URL in your browser and verify that the pods start and respond to the request. You should get output similar to the following.
There you have the amazing Serverless autoscaling feature.
If you have an application that is badly affected by cold-start latency and you want at least one instance of it running at all times, run the command kn service update <SERVICE_NAME> --scale-min <VALUE>. For example, to keep at least one instance of the hello service running at all times, use the command kn service update hello --scale-min 1.
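You can also cap how far a service scales out. kn exposes this with the --scale-max flag, the counterpart of --scale-min (the values below are illustrative, not taken from the article):

```shell
# Keep at least 1 and at most 5 instances of the hello service running.
kn service update hello --scale-min 1 --scale-max 5
```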
Kubernetes has become a standard tool for managing container workloads. Trusted by many companies to build and scale cloud-native applications, it powers many of the products and services you use today. But while companies embrace Kubernetes and benefit from it, many developers are not interested in its low-level details; they want to focus on their code without worrying about the infrastructure it runs on.
Knative provides a set of tools and CLIs that developers can use to deploy their code and let Knative manage their application's infrastructure needs. In this article, you saw how to install the Knative Serving component and deploy services that run on it. You also learned how to deploy services and manage their configuration using the kn command-line interface. If you want to learn more about using the kn CLI, check out the free cheat sheet I made at cheatsheet.pmbanugo.me/knative-serving.
In a future article, I'll show you how to work with Knative Eventing and how your application can react to cloud events inside and outside your cluster.
In the meantime, you can get my book, How to build a serverless application platform on Kubernetes. You will learn how to build a platform to deploy and manage applications and web services using cloud-native technologies, covering serverless, Knative, Tekton, GitHub Apps, Cloud Native Buildpacks, and more!
Get your copy at books.pmbanugo.me/plataforma-de-aplicaciones-sin-servidor
Let's continue the conversation! Share your experiences and keep deepening your understanding of Kubernetes.