Posted on April 13, 2020
Originally posted on the Epsagon blog by Ran Ribenzaft, co-founder and CTO of Epsagon
This article will discuss a number of open source serverless frameworks and go into detail on OpenFaaS and Knative, introducing their architecture, core components, and basic installation steps. If you are interested in this topic and plan to develop serverless applications on open source platforms, this article will give you a better understanding of the available solutions.
In recent years, serverless architectures have grown rapidly in popularity. The main benefit of this technology is the ability to build and run applications without having to manage the infrastructure. In other words, by using a serverless architecture, developers no longer need to allocate resources, scale, and maintain servers to run applications or manage databases and storage systems. Your only responsibility is to write quality code.
There are many open source projects for creating serverless frameworks (Apache OpenWhisk, IronFunctions, Oracle's Fn, OpenFaaS, Kubeless, Knative, Project Riff, etc.). Also, since open source platforms provide access to IT innovations, many developers are interested in open source solutions.
OpenWhisk, Firecracker, and Oracle Fn
Before we dive into OpenFaaS and Knative, let's briefly describe these three platforms.
Apache OpenWhisk is an open cloud platform for serverless computing that uses cloud computing resources as services. Compared to other open source projects (Fission, Kubeless, IronFunctions), Apache OpenWhisk is characterized by a large codebase, high-quality features, and a large number of contributors. However, the platform's heavyweight tooling (CouchDB, Kafka, Nginx, Redis, and Zookeeper) creates difficulties for developers, and the platform is imperfect in terms of security.
Firecracker is a virtualization technology introduced by Amazon. It provides virtual machines with minimal overhead and enables the creation and management of isolated environments and services. Firecracker offers lightweight virtual machines, called microVMs, that use hardware-based virtualization for complete isolation while delivering the performance and flexibility of traditional containers. One drawback for developers is that everything built on this technology is written in Rust. Firecracker also uses a condensed software environment with a minimal set of components: to save memory, reduce boot time, and increase security, it ships a stripped-down Linux kernel from which everything superfluous has been removed, which in turn reduces device functionality and support. The project was developed at Amazon Web Services to improve the performance and efficiency of the AWS Lambda and AWS Fargate platforms.
Oracle Fn is an open, serverless platform that provides an additional layer of abstraction for cloud systems to enable Functions as a Service (FaaS). As with other open platforms, in Oracle Fn the developer implements the logic at the level of individual functions. Unlike existing commercial FaaS platforms, such as Amazon AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions, Oracle's solution is positioned as vendor-independent. The user can choose any cloud provider to run the Fn infrastructure, combine different cloud systems, or run the platform on their own hardware.
Kubeless is a framework that supports deploying serverless functions to your cluster and allows you to invoke your Python, Node.js, or Ruby code over HTTP or in response to events. Kubeless is built on core Kubernetes primitives such as deployments, services, and ConfigMaps. This keeps the Kubeless codebase small and also means that developers do not have to reimplement large chunks of scheduling logic that already exist inside the Kubernetes core.
Fission is an open source platform that provides a serverless architecture on top of Kubernetes. One of the advantages of Fission is that it takes care of most of the automatic resource scaling tasks in Kubernetes and frees you from manual resource management. The second advantage of Fission is that you are not tied to one provider and can freely switch from one provider to another as long as they support Kubernetes clusters (and any other specific needs your application may have).
Key benefits of using OpenFaaS and Knative
OpenFaaS and Knative are free and publicly available open source environments for building and hosting serverless functions. These platforms allow:
- Reduce idle resources.
- Process data quickly.
- Connect to other services.
- Balance load when processing a large number of requests.
However, despite the advantages of both platforms and serverless computing in general, developers must evaluate the application logic before beginning implementation. This means that you first have to break the logic into individual tasks and only then can you write any code.
For clarity, let's consider each of these open source serverless solutions separately.
How to create and implement serverless functions with OpenFaaS
The main goal of OpenFaaS is to simplify serverless functions with Docker containers, allowing you to run a complex and flexible infrastructure.
OpenFaaS Design and Architecture
The OpenFaaS architecture is based on a cloud-native standard and includes the following components: the API Gateway, the Function Watchdog, a container orchestrator (Kubernetes or Docker Swarm), Prometheus, and Docker. According to the architecture shown below, when a developer works with OpenFaaS, the process starts with installing Docker and ends with the API Gateway.
A path to where all functions are located is provided via the API gateway and cloud-native metrics are collected via Prometheus.
A Watchdog component is built into each function container to support a serverless application and provides a common interface between the user and the function.
One of the Watchdog's main tasks is to take an HTTP request accepted at the API Gateway and invoke the selected application.
With Prometheus, you can view how metrics change over time, compare them with one another, transform them, and view them in text or graphical form, all without leaving the main page of the web interface. Prometheus stores collected metrics in RAM and flushes them to disk when a certain size threshold is reached or after a certain period of time.
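To make the metrics flow concrete, a minimal Prometheus scrape configuration pointed at the gateway might look like the following. This is a sketch only: the job name and target address are assumptions for illustration, not the configuration that ships with OpenFaaS.

```yaml
# prometheus.yml (sketch) -- scrape the OpenFaaS gateway's metrics endpoint
scrape_configs:
  - job_name: gateway            # assumed job name
    scrape_interval: 5s
    static_configs:
      - targets: ['gateway:8080']  # assumed in-stack address of the API Gateway
```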
Docker Swarm and Kubernetes
Docker Swarm and Kubernetes are the supported orchestration engines. Components such as the API Gateway, the Function Watchdog, and a Prometheus instance run on top of these orchestrators. Kubernetes is recommended for production deployments, while Docker Swarm is better suited for local development.
In addition, all developed features, microservices, and products are stored in the Docker container, which serves as the main OpenFaaS platform for developers and system administrators to develop, deploy, and run serverless containerized applications.
Key points to install OpenFaaS on Docker
The OpenFaaS API Gateway relies on built-in functionality provided by your chosen Docker orchestrator. To do this, the API Gateway connects to the appropriate plugin for the selected orchestrator, records function metrics in Prometheus, and scales functions based on alerts received from Prometheus through the AlertManager.
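As an illustration of that alert-driven scaling, a Prometheus alerting rule of the kind OpenFaaS uses might look like the sketch below. The alert name, metric name, and threshold here are assumptions for illustration; the authoritative rules live in the OpenFaaS deployment manifests.

```yaml
# alert.rules (sketch) -- fire when a function's invocation rate spikes,
# so the gateway can scale replicas up via AlertManager
groups:
  - name: openfaas
    rules:
      - alert: APIHighInvocationRate          # assumed alert name
        expr: sum(rate(gateway_function_invocation_total{code="200"}[10s])) by (function_name) > 5
        labels:
          service: gateway
          severity: major
        annotations:
          description: High invocation rate on {{ $labels.function_name }}
```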
Suppose you are working on a computer running the Linux operating system and you want to write a simple function on one node of a Docker cluster using OpenFaaS. To do so, you would simply follow these steps:
- Install Docker CE 17.05 or later.
- Run docker:
$ docker run hello-world
- Initialize Docker Swarm:
$ docker swarm init
- Clone OpenFaaS from Github:
$ git clone https://github.com/openfaas/faas && cd faas && ./deploy_stack.sh
- Open the UI portal in a browser at http://127.0.0.1:8080.
Docker now works out of the box and does not need to be reinstalled when you write more functions.
Prepare the OpenFaaS CLI to create functions
To develop functions, you must install the latest version of the OpenFaaS CLI from the command line. With Homebrew, that would be $ brew install faas-cli. With curl, you would use $ curl -sL https://cli.get-faas.com/ | sudo sh.
Different programming languages with OpenFaaS
To create and implement a function with OpenFaaS using templates in the CLI, you can write a controller in almost any programming language. For example:
- Create a new function:
$ faas-cli new --lang <language> <function name>
- Clone the OpenFaaS repository and deploy the stack:
$ git clone https://github.com/openfaas/faas && cd faas && git checkout 0.6.5 && ./deploy_stack.sh
- Build the function:
$ faas-cli build -f <stack file>
- Deploy the function:
$ faas-cli deploy -f <stack file>
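For reference, the stack file passed to faas-cli typically looks like the following sketch. The function name, handler path, and image name here are placeholder assumptions; faas-cli new generates this file for you with your own values.

```yaml
# stack file (sketch), e.g. fib.yml -- names and image are illustrative
provider:
  name: openfaas                     # "faas" in some older CLI versions
  gateway: http://127.0.0.1:8080
functions:
  fib:
    lang: python3                    # template chosen with --lang
    handler: ./fib                   # folder containing the handler code
    image: myrepo/fib:latest         # assumed registry/image name
```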
Testing the function via the OpenFaaS user interface
You can quickly test the function in several ways through the OpenFaaS user interface, as shown below:
- Access the OpenFaaS user interface:
- Use curl:
$ curl -d "10" http://localhost:8080/function/fib
- Use the user interface
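To make the curl example above concrete, a handler backing the fib function could be written as follows. This is a sketch: the handle(req) entry point follows the OpenFaaS Python template convention, and the Fibonacci logic itself is purely illustrative.

```python
def handle(req):
    """Return the n-th Fibonacci number for the integer in the request body."""
    n = int(req.strip())      # request body arrives as a string, e.g. "10"
    a, b = 0, 1
    for _ in range(n):        # iterate so large n does not blow the stack
        a, b = b, a + b
    return str(a)             # OpenFaaS returns the string as the HTTP body
```

With this handler deployed, the earlier request `curl -d "10" http://localhost:8080/function/fib` would return 55.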
At first glance, everything seems to be quite simple. However, you have to deal with many nuances. This is especially the case when you need to work with Kubernetes, need a lot of resources, or need to add additional dependencies to the main FaaS code base.
There is an active OpenFaaS developer community on GitHub, where you can also find useful information.
Pros and cons of OpenFaaS
OpenFaaS simplifies the structure of the system. Troubleshooting becomes easier and adding new features to the system is much faster than with a monolithic application. In other words, OpenFaaS allows you to run code in any programming language anytime, anywhere.
However, there are disadvantages:
- Increased cold-start time for some programming languages.
- Container start time depends on the provider.
- Limited function lifetime, i.e., not every system can run serverless. (With OpenFaaS, compute containers cannot keep executable application code in memory for long. The platform creates and destroys them automatically, so holding state inside the container is not possible.)
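That statelessness constraint can be illustrated in plain Python: any module-level cache a handler builds up lives only as long as its container, so durable state must go to an external store. The sketch below is illustrative only; the in-process dictionary stands in for container memory, and the handler name follows the OpenFaaS Python template convention.

```python
# Illustrative only: module-level state survives across calls *within* one
# container instance, but is lost whenever the platform destroys the container.
_cache = {}

def handle(req):
    key = req.strip()
    if key in _cache:                 # hit only while this instance is alive
        return f"cached: {_cache[key]}"
    _cache[key] = key.upper()         # stand-in for an expensive computation
    return f"computed: {_cache[key]}"
```

In production, the cache would need to live in an external service (a database or key-value store) rather than in the container.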
Deploying and running serverless functions with Knative
With Knative, you can develop and deploy container-based serverless applications that can easily be moved between cloud providers. Knative is an open source platform that is just beginning to gain popularity but is currently of great interest to developers.
Knative Architecture and Components
The Knative architecture consists of the build, eventing, and serving components.
The Knative build component is responsible for building containers on the cluster from source code. It builds on and extends existing Kubernetes primitives.
The Knative eventing component is responsible for universal subscription, event delivery, and management, and for creating communication between loosely coupled architectural components. It also allows you to scale with the load on the server.
The main purpose of the serving component is to support the deployment of serverless applications and functions, automatic scaling down to zero, routing and network programming for Istio components, and point-in-time snapshots of deployed code and configurations. Knative uses Kubernetes as the orchestrator, while Istio handles request routing and advanced load balancing.
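As an illustration, a minimal manifest for the serving component might look like the following. This is a sketch based on the public knative-samples hello-world image; the service name and environment variable are assumptions for illustration.

```yaml
# Knative Service manifest (sketch) -- deploys an autoscaled, scale-to-zero service
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go              # assumed service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET         # variable read by the sample image
              value: "World"
```

Applying this with kubectl causes Knative serving to create a route and a revision, and to scale the revision's pods up and down with traffic.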
Example of the simplest functions using Knative
You can use several methods to create a serverless application in Knative. Your choice will depend on your existing skills and experience with various services, including Istio, Gloo, Ambassador, Google Kubernetes Engine, IBM Cloud Kubernetes Service, Microsoft Azure Kubernetes Service, Minikube, and Gardener.
Just select the installation file for each of the Knative components. Links to the main installation files for the three required components are below:
Each of these components is defined by a set of objects. More information on the syntax and installation of these components can be found on Knative's own development pages.
Knative pros and cons
Knative has a number of advantages. Like OpenFaaS, Knative allows you to create serverless environments using containers. This, in turn, lets you achieve an event-driven, on-premises architecture that is not constrained by public cloud services. Knative also allows you to automate the container build process, which enables automatic scaling: the capacity of serverless functions is driven by predefined thresholds and event-handling mechanisms.
Additionally, you can use Knative to build apps on-premises, in the cloud, or in a third-party data center. This means that you are not tied to any cloud provider. And because Knative is built on Kubernetes and Istio, it has a higher rate of adoption and greater adoption potential.
A big disadvantage of Knative is the need to manage the container infrastructure yourself. Simply put, Knative is not intended for end users. Because of this, however, more commercially managed Knative offerings are becoming available, such as Google Kubernetes Engine and Managed Knative for the IBM Cloud Kubernetes Service.
Despite the growing number of open source serverless platforms, OpenFaaS and Knative continue to gain popularity among developers. It is worth noting that these platforms cannot simply be compared head-to-head, as they are designed for different tasks.
Unlike OpenFaaS, Knative is not a complete serverless platform; it is better positioned as a platform for building, deploying, and managing serverless workloads. From a configuration and maintenance point of view, however, OpenFaaS is simpler. Unlike Knative, it does not require all components to be installed separately, nor does it require you to remove old configurations and functions for new development once the necessary components are already installed.
However, as mentioned above, one big disadvantage of OpenFaaS is that the container startup time is vendor dependent, while Knative is not tied to a single cloud solution provider. Based on the pros and cons of both, organizations can also choose to use Knative and OpenFaaS together to achieve different goals efficiently.