Knative Project as Inversion of Native for Serverless Applications


Recently Google, Pivotal, and other industry leaders announced a new open-source software project, Knative.

We have observed over the last few years that the words “cloud” and “native” have become closely associated. Software vendors are trying to answer the question: “What are cloud-based applications?” Businesses are trying to adopt “cloud-native” strategies and “cloud-native” culture, while the rest of the world is just trying to understand the meaning of “cloud-native.”

Is native just a new, trendy word for anything related to the cloud? This article looks at the value propositions of “nativeness” in Knative beyond the hype.

Definition of Native

Wikipedia defines native as follows: “In computing, software or data formats that are native to a system are those the system supports with minimal computational overhead or additional components. This word is used in terms such as native mode and native code.”

Existing Serverless Applications Are Native Cloud Functions

In recent years we have observed the continuously increasing popularity of functional programming. The functional paradigm makes it easier to reason about software, apply concurrency patterns, and build scalable processing and event-driven architectures.

Major cloud providers quickly adopted the idea of running functions in the cloud and started to offer platforms for short-lived (a.k.a. serverless) applications that shield clients from the details of the servers on which these applications are invoked. AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions are examples of these offerings. Numerous sources compare their pros, cons, similarities, and differences, including “Serverless Comparison Lambda vs. Azure vs. GCP vs. OpenWhisk” and “An Essential Guide to the Serverless Ecosystem.”
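
For illustration, a minimal AWS Lambda handler written in Java looks roughly like the sketch below. The class name and payload are hypothetical; the RequestHandler interface and Context type come from the AWS Lambda Java SDK.

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // Hypothetical order-pricing function deployed as an AWS Lambda.
    // RequestHandler and Context come from the AWS Lambda Java SDK.
    public class PriceOrderHandler implements RequestHandler<String, String> {

        @Override
        public String handleRequest(String orderId, Context context) {
            // Business logic lives here; Context exposes Lambda-specific
            // details such as the remaining execution time and request ID.
            return "priced:" + orderId;
        }
    }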

Serverless applications enable a usage-based billing model (no more paying for idle time), can scale in the cloud from zero to thousands of running instances and back within seconds, can automatically retry after failures, and can run on demand or in response to changes registered by cloud infrastructure components (schedulers, queues, topics, etc.). As a result, they have gained in popularity and have enabled numerous new ways to accomplish distributed batch, real-time, scheduled, stream, and event processing. These new methods are united under the umbrella name of serverless architectures.

There are multiple use cases for serverless applications. Some are outlined in the ITOps Times article “10 Use Cases for Serverless,” and that list is not even close to being exhaustive. We at Kin + Carta are creating unique serverless architectures for our clients to solve business problems at scale, within a short period of time, and with minimal costs and risks.

Cloud serverless applications (a.k.a., cloud functions) come with their own sets of problems. Peter Wayner states, “These apps are serverless in the same way that restaurants are kitchen-less. If what you want is on the menu and you like how the cook prepares it, sitting down in a restaurant is great. But if you want a different dish, if you want different spices, well, you’d better get your own kitchen.” To extend this metaphor further, you are also not able to get the same dish from multiple restaurants or ask for different kinds of plates.

Most of the problems stem from the fact that existing serverless applications are still hardwired (native) to their vendors’ systems. This has a number of serious consequences: 

  • Business serverless functionality is heavily intermixed with infrastructure concerns (even the term “serverless” refers to the infrastructure, not to the functionality). This creates additional complexity that negatively affects most phases of the systems development life cycle (SDLC): design, construction, testing, deployment, and support.
  • This additional complexity spills over to executive and business stakeholders, exposing them to low-level technical details and making them struggle in the dense forests of serverless choices.
  • A cloud function created for one cloud provider is not compatible with other cloud providers, not to mention with dedicated data centers or developers’ local machines.
  • AWS Lambda, Microsoft Azure Functions, and Google Cloud Functions often need to be written in incompatible programming languages and bundled into binaries with incompatible compile-time and run-time dependencies (see the sketch after this list).
  • Serverless application deployment mechanisms are also vendor specific.
  • Any vendor lock-in is a poor fit for function reuse and migration to hybrid clouds.
  • Cloud providers have different and often obscure ways (products) of combining multiple serverless functions into larger execution flows.
  • The concept of cloud events, when it exists, is abstracted in terms of vendor-specific resources and heavily intermixed with messaging semantics. 
  • Last but not least, none of these cloud functions supports the concept of streams.
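
To make the lock-in concrete, contrast the AWS handler sketched earlier with the shape Azure Functions expects from a Java function. The following is an abridged, hypothetical sketch of the Azure Functions Java programming model; the exact annotations, bindings, and defaults are defined by the vendor SDK.

    import com.microsoft.azure.functions.ExecutionContext;
    import com.microsoft.azure.functions.annotation.FunctionName;
    import com.microsoft.azure.functions.annotation.HttpTrigger;

    // The same business logic as the AWS sketch, rewrapped for Azure:
    // vendor annotations and context types from azure-functions-java-library,
    // none of which are compatible with the AWS handler interface.
    public class PriceOrderFunction {

        @FunctionName("priceOrder")
        public String run(
                @HttpTrigger(name = "req") String orderId,
                final ExecutionContext context) {
            return "priced:" + orderId;
        }
    }

Neither binary artifact can be deployed to the other provider as-is.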

There are multiple fragmented offerings that either partially solve some of these issues of nativeness or address all of them at the cost of adding more complexity.

Spring Cloud Function aims to use existing serverless functions and abstract away their heterogeneity at the source-code level by reusing the same common code bundled with different adapters for different vendors. However, it currently does not cover all major cloud providers, is limited to the Java programming language, carries a heavy run-time memory penalty, and is still not reusable at the binary level across cloud providers because the adapters are bundled into the binary artifact.
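
As a sketch, a typical Spring Cloud Function application exposes the business logic as a plain java.util.function.Function bean; the vendor adapter chosen at build time then turns it into an AWS Lambda, an Azure Function, and so on, which is exactly where the binary-level coupling described above comes from.

    import java.util.function.Function;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    // Business logic as a plain Function bean; no cloud-provider types here.
    // The vendor-specific adapter is added as a build-time dependency.
    @SpringBootApplication
    public class UppercaseApplication {

        @Bean
        public Function<String, String> uppercase() {
            return value -> value.toUpperCase();
        }

        public static void main(String[] args) {
            SpringApplication.run(UppercaseApplication.class, args);
        }
    }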

Apache OpenWhisk is a capable cloud platform in its own right that competes with the vendors’ offerings. It addresses most of the above-mentioned issues but enforces its own design paradigm, requires complex infrastructure, and is not actively supported by the major software companies.

Inversion of Native

It is not uncommon in software circles to see a breakthrough in problem-solving when developers address a known problem in a way that is completely opposite to previous attempts. These "Aha!" moments of solving problems using the opposite approach are often captured as eye-catching “inversion” design principles such as inversion of control and dependency inversion.

Is it possible to apply inversion to the nativeness of functions to conquer their limitations?

Designers of programming languages like Java and JavaScript did this by adopting new syntax elements that accept functions and lambdas as first-class language citizens. This removed the need for cumbersome coding constructs in functional programming and streamlined the construction of truly functional applications.
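
In Java, for example, the shift looked roughly like this:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class FirstClassFunctions {
        public static void main(String[] args) {
            List<String> names = new ArrayList<>(List.of("Kubernetes", "Istio", "Knative"));

            // Before lambdas: an anonymous class just to pass comparison behavior around.
            Collections.sort(names, new Comparator<String>() {
                @Override
                public int compare(String a, String b) {
                    return Integer.compare(a.length(), b.length());
                }
            });

            // With lambdas and method references (Java 8+): behavior as a first-class value.
            names.sort(Comparator.comparingInt(String::length));

            System.out.println(names);
        }
    }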

Knative does this for cloud functions. Instead of requiring serverless applications to be native to the cloud systems, Knative provides a cloud ecosystem that is adapted (native) to functions in the cloud. The ecosystem supports multiple SDLC phases, that is, design, build, deployment, and execution. It comprises pluggable components that are already supported by all major cloud providers. Moreover, the plug-in architecture of these components allows for replacement and customization.

Knative can address business, software design, delivery, infrastructure, and security concerns separately and in a vendor-independent way.

These concerns map onto the components of the major Knative features: build, eventing, and serving. These features allow containerization of serverless applications together with their infrastructure, and Kubernetes is used as the orchestration engine. Knative relies on Istio for network routing, security, and monitoring.


Design and Coding with Knative

Knative serving uses the sidecar cloud design pattern, in which the service hosts a function and plays the role of the application container. Containerization lets functions be bundled with language-specific run times that are configured at build time.

With these features, according to Pivotal’s director of technical marketing, Dan Baskette, “Functions are the next abstraction you need to care about. . . Developers can focus entirely on their function code to process an event.” Functions can be written in many programming languages without any additional dependencies or special run-time libraries. GitHub features Knative functions written in C#, Go, Java, Node.js, PHP, Python, Ruby, and Rust. Designers can chain multiple cloud functions into powerful business workflows by connecting the named endpoints exposed by serving routes.
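
With Knative, the function code itself can stay free of vendor SDKs; below is a minimal, hypothetical sketch of such a plain Java function that a Knative build can containerize and serve.

    import java.util.function.Function;

    // No cloud-provider imports: plain Java business logic.
    // The container built around it supplies the runtime, while Knative serving
    // handles routing, scaling, and revisions.
    public class PriceOrder implements Function<String, String> {

        @Override
        public String apply(String orderId) {
            return "priced:" + orderId;
        }
    }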

Routes support HTTP and gRPC endpoints; the latter enables distributed streaming between endpoints. Knative’s pluggable architecture lets open-source add-ons such as Project Riff invokers connect streaming endpoints with multilingual streaming functions. The functions need only take well-known stream primitives as parameters and return values, similar to the sketch below. This way, Knative provides a native streaming platform for serverless applications.
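
A streaming function then reduces to a transformation over stream primitives. The sketch below assumes Reactor’s Flux as the stream type, in the style of the riff Java invoker; the class and the word-counting logic are hypothetical.

    import java.util.function.Function;

    import reactor.core.publisher.Flux;

    // Streaming function: a stream of words in, a running count out.
    // The invoker is assumed to wire the gRPC streaming endpoint to these Flux parameters.
    public class WordCount implements Function<Flux<String>, Flux<Long>> {

        @Override
        public Flux<Long> apply(Flux<String> words) {
            return words.scan(0L, (count, word) -> count + 1);
        }
    }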

Events and event sourcing are staples of modern distributed software architectures. The Knative eventing feature enables invocation of loosely coupled cloud services and serverless applications in an asynchronous manner. Its powerful generic abstractions for cloud events and communication primitives, such as Knative events, channel, subscription, and bus, allow architects and developers to concentrate on designing asynchronous flows instead of wrestling with messaging infrastructures. Knative’s commitment to the standard CloudEvents nomenclature protects event-driven design investments from current and future proprietary event frameworks.
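
A CloudEvent itself is just a small, vendor-neutral envelope around event data. The sketch below is a simplified, hand-rolled model for illustration only; the official CloudEvents specification defines the exact context attributes, and their names vary between specification versions.

    // Simplified illustration of a CloudEvent envelope (not the official SDK).
    public class SimpleCloudEvent {
        public String type;    // what happened, e.g. "com.example.order.created"
        public String source;  // URI identifying the producer of the event
        public String id;      // unique per source
        public String time;    // timestamp of the occurrence
        public Object data;    // the domain payload itself
    }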


Building with Knative

Knative’s build feature enables on-cluster container builds from source code. Serverless applications are containerized with source-to-container templates that bring the benefits of immutable infrastructure and traceable builds. Build is a Knative custom resource managed with Kubernetes. This allows the extension of concepts such as “pipeline as code” and “infrastructure as code” to “pipeline as code in a container” and “infrastructure as code in a container.” The Knative build feature also allows plugging in custom build processes, e.g., Dockerfiles, Cloud Foundry buildpacks, and so on.


Running with Knative

This is where the inversion of native truly shines. Pods with containerized serverless applications and infrastructure adapter containers are automatically deployed, scaled, and managed with Kubernetes. Kubernetes makes it possible to run serverless flows in the same way across hybrid clouds spanning multiple cloud vendors, on premises, and on local development machines.

Istio helps cloud functions achieve enterprise-grade application quality on unreliable networks. It provides a uniform, configurable, declarative, policy-based approach and many operational features, such as network routing, retries on failures, circuit breaking, and security. Istio also enables monitoring as well as Zipkin tracing for services and serverless applications. In short, Knative brings the benefits of Istio to serverless applications.

Instead of a Conclusion


Does Knative magically make designing, building, and running serverless applications in the cloud easy? Not really. It makes some of the complexity of these processes transparent to application developers. However, transparency does not imply the absence of complexity. There are still tough design decisions and trade-offs to be made among cloud vendors, infrastructure components, build tools, run-time configurations, and so on.
With the inversion of native for serverless applications, Knative isolates these hard choices from business functionality and allows designers to address them individually in a systematic way.

We at Kin + Carta are helping our clients to grow their businesses by making these design decisions with them. We will be glad to help you grow your business, too.
For more information on these topics, read here:

“Serverless Architectures,” Martin Fowler
“Serverless in the cloud: AWS vs. Google Cloud vs. Microsoft Azure,” Peter Wayner
“Knative Enables Portable Serverless Platforms on Kubernetes, for Any Cloud,” Dan Baskette


