Pivotal + VMware: Transforming how more of the world builds software
Tech Insights
Five Minute Read

Serverless:
Accelerating innovation

Serverless technologies bring new, efficient ways to build, deploy, and consume applications in a cloud-native environment.


What is serverless?

Serverless computing is the first real cloud-native computing paradigm. Technologists are still settling on what “serverless” actually means. Serverless advocate Simon Wardley offers a comprehensive definition:

“[Serverless is] an event-driven, utility-based, stateless, code execution environment in which you write code and consume services.”

Rachel Stephens at RedMonk offers an even more concise definition:

“[Serverless means] managed services that scale to zero.”

Both descriptions highlight the most important aspect of the serverless paradigm—developers don’t manage anything but the application. From the developer’s perspective, there is no managing of infrastructure whatsoever. No provisioning. No patching. No capacity planning. No scaling. All that’s required is to bring your code or managed service; the serverless runtime takes care of everything else.

The following table summarizes the differences between serverless and the traditional (serverful) approach.


Serverless | Serverful
Program runs when event happens | Program runs continuously until stopped
State kept in storage (stateless) | State kept anywhere (stateful or stateless)
Max memory size ~3 GB | Max memory size > 10 TB
Maximum runtime in minutes | No limit on runtime
OS and machine selected by provider | OS and instance selected by user
Provider responsible for scaling | User responsible for scaling

Source: Cloud Programming Simplified: A Berkeley View on Serverless Computing


This is the serverless operational model. It’s common for serverless workloads to follow a particular set of architectural patterns as well. Developers who build serverless applications typically adhere to these principles:


Write code as functions.

Functions are small, single-purpose pieces of code that run dynamically. They’re usually called in response to an event trigger. (You can think of functions as the purest form of microservices.) Many serverless products deliver this functionality through a functions-as-a-service (FaaS) offering. To get started with FaaS, developers simply bring their function code and wire up event triggers.

Though “serverless” and “functions” are sometimes used interchangeably, they aren’t the same thing. Functions are frequently used as the compute layer for serverless workloads. But you can write serverless applications without using functions. And you can write functions that don’t run on serverless platforms.
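
To make the idea concrete, here is a minimal sketch of a function in plain Java: a small, single-purpose unit of code implementing java.util.function.Function. The class name and payload handling are illustrative; a FaaS platform (or a framework such as Spring Cloud Function) would typically invoke apply() for you in response to an event trigger.

    import java.util.function.Function;

    // A minimal, single-purpose function: uppercase an incoming text payload.
    // A FaaS runtime would wire this to an event trigger (an HTTP request, a
    // queue message, a file upload) and handle scaling and execution for you.
    public class UppercaseFunction implements Function<String, String> {

        @Override
        public String apply(String payload) {
            // Pure business logic only; no servers, routing, or scaling code.
            return payload == null ? "" : payload.toUpperCase();
        }

        // Local smoke test; on a serverless platform the runtime calls apply().
        public static void main(String[] args) {
            System.out.println(new UppercaseFunction().apply("hello, serverless"));
        }
    }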


Use event-driven architecture.

Event-driven architecture (EDA) refers to a system where code runs in response to events. An event is any change of state in the system, such as the receipt of a message, a completed file upload, or the insertion of a record into a database.

Functions are usually purpose-built to work with events and data streams. That makes them a perfect fit for EDA. In fact, FaaS solutions commonly include integrations with components like message brokers and data stores, so developers can trigger functions in response to events from those services.
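
As a hedged illustration of this pattern, the sketch below shows a function written against an event type. The FileUploadedEvent record and handler are hypothetical; in practice the event shape and the binding to a broker or stream come from the platform you use.

    import java.util.function.Consumer;

    // Hypothetical event describing a completed file upload.
    record FileUploadedEvent(String bucket, String key, long sizeBytes) {}

    // An event-driven handler: the platform invokes accept() whenever a matching
    // event arrives on the message broker or stream the function is bound to.
    public class ThumbnailOnUpload implements Consumer<FileUploadedEvent> {

        @Override
        public void accept(FileUploadedEvent event) {
            // React to the state change, e.g., generate a thumbnail for the file.
            System.out.printf("Processing %s/%s (%d bytes)%n",
                    event.bucket(), event.key(), event.sizeBytes());
        }

        // Local simulation of a single event delivery.
        public static void main(String[] args) {
            new ThumbnailOnUpload().accept(
                    new FileUploadedEvent("uploads", "photo.jpg", 204_800));
        }
    }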


Connect managed services together.

Since functions are stateless, all state and configuration live in backing services. Such services include databases, message queues, authentication providers, and routing services such as an API gateway. In a serverless architecture, these services are all managed. Just as developers don’t manage the infrastructure required to run their code, they don’t worry about the infrastructure required to support any related services either. With functions as the “code glue,” developers can connect managed services together, so there’s no need to think about infrastructure anywhere.
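
The sketch below illustrates the “code glue” idea under stated assumptions: UserDatabase and NotificationService are hypothetical stand-ins for managed services (in practice you would use a provider SDK or a platform-injected binding), and the function itself keeps no state.

    import java.util.function.Function;

    // Hypothetical stand-ins for managed services; a real system would use the
    // provider's SDK or a platform binding for the database and email service.
    interface UserDatabase { void saveUser(String email); }
    interface NotificationService { void sendWelcomeEmail(String email); }

    // Hypothetical incoming event: a new sign-up arriving via an API gateway.
    record SignupRequest(String email) {}

    // The function is stateless glue: it coordinates managed services in
    // response to an event, while all state lives in the backing services.
    public class SignupFunction implements Function<SignupRequest, String> {

        private final UserDatabase database;
        private final NotificationService notifications;

        public SignupFunction(UserDatabase database, NotificationService notifications) {
            this.database = database;
            this.notifications = notifications;
        }

        @Override
        public String apply(SignupRequest request) {
            database.saveUser(request.email());              // state in a backing service
            notifications.sendWelcomeEmail(request.email()); // side effect via a managed service
            return "registered: " + request.email();
        }

        // Local demonstration with in-memory stand-ins for the managed services.
        public static void main(String[] args) {
            SignupFunction fn = new SignupFunction(
                    email -> System.out.println("saved " + email),
                    email -> System.out.println("emailed " + email));
            System.out.println(fn.apply(new SignupRequest("dev@example.com")));
        }
    }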


Scale-to-zero.

Scale-to-zero is a key concept of serverless: compute resources are consumed only when they’re in use. The serverless runtime automatically scales a function out to handle increased load and scales it back to zero when the function is idle. When the next request arrives, the runtime spins the function back up. With public cloud serverless products, customers are billed only for the time the function is running.


Why serverless matters

As Joe Emison points out, serverless can help your business in many ways, depending on what you want to optimize for. Maybe you’re looking to reduce your overall compute costs. Perhaps you’re looking for faster time to market. When done right, a serverless architecture brings a number of benefits to help you optimize software development and maintainability.


Write less code.

The FaaS model seeks to allow developers to write as little code as possible. By writing narrowly scoped units of code, it’s easier to focus on writing business logic. Serverless platforms abstract the complexity of packaging, deployment, and event consumption. This way, developers don’t fiddle with complex integrations or boilerplate code.

There are many advantages to having less code. Lower complexity leads to fewer bugs. Your attack surface is smaller, improving code security. Less code also means less technical debt; your code is easier to maintain over time.


Pay for consumption—not allocation.

With scale-to-zero, you only use the compute you need, when you need it. In the public cloud, this means pay-per-use, which is more granular than the cloud’s usual allocation model. With virtual machines, you pay for an allocated amount of compute and memory that you may or may not consume. In a serverless model, if it’s not being used, you don’t pay for it. For particularly bursty workloads with unpredictable usage, this can be very cost effective.
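
As a rough, hedged illustration of consumption pricing versus allocation pricing, the sketch below compares a bursty workload under a made-up per-GB-second rate with a flat monthly VM price. Every number is a placeholder, not a real provider rate.

    // All rates and sizes below are hypothetical placeholders for illustration only.
    public class CostSketch {
        public static void main(String[] args) {
            double perGbSecond = 0.00002;           // hypothetical serverless rate ($/GB-second)
            double memoryGb = 0.5;                  // 512 MB function
            double invocationsPerMonth = 1_000_000;
            double secondsPerInvocation = 0.2;      // 200 ms average run time

            // Pay for consumption: billed only while the function runs.
            double serverlessCost =
                    perGbSecond * memoryGb * secondsPerInvocation * invocationsPerMonth;

            // Pay for allocation: a hypothetical always-on VM sized for peak load.
            double vmCost = 40.0;

            System.out.printf("serverless: $%.2f/month, vm: $%.2f/month%n",
                    serverlessCost, vmCost);
        }
    }

For a steady, always-busy workload the comparison can flip, which is why usage patterns matter when weighing the two models.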


Ship code faster.

You’re always looking for ways to get your application to market as soon as possible. Serverless technologies let you ship code faster than ever before. Developers leverage functions to stitch managed services together into a system. There’s less to build from scratch and no need to worry about wiring up custom components. Plus, there’s no configuring of routing, DNS, load balancing, or firewall rules; the serverless runtime handles all that heavy lifting, making deployments that much easier.


Manage only what you build.

Serverless workloads depend mostly on managed services. Teams are only responsible for what they've built—nothing more than their function code. Day 2 operational tasks look different from a more traditional server-based model. There's no dealing with low-level implementation details.


Focus on business outcomes.

In a serverless architecture, there's an emphasis on writing business logic instead of plumbing or packaging. This frees up developers to zero in on solving specific business problems. That means your organization can concentrate on these outcomes instead of managing technology. It’s a pure realization of the cloud-native model—the underlying infrastructure complexities are abstracted away.



What to keep in mind if you're considering serverless

Serverless is a powerful idea. It can enable many promising capabilities for your organization. It’s also a different way of thinking about software than you may be used to. Before going too far down the serverless path, consider the following:


Serverless is still a relatively immature space.

Though it’s growing in popularity, these are still the early days for serverless. The space is just emerging, new frameworks routinely pop up, and patterns constantly evolve. Some projects, like Knative, have begun to gain traction. But there’s more consolidation to come as the industry settles on standards. Especially for enterprises, it's not clear yet where the sure bet is for serverless. With so much uncertainty, it's important to be aware of the potential for change.


Your team may have to develop new skill sets.

Building applications to embrace serverless architecture is a fundamental change. As with any new technology, there’s a learning curve. Developers face new kinds of challenges when working with serverless. For some, event-driven patterns and asynchronous operations are new concepts to master. What’s more, teams must become familiar with the managed services they’re connecting. Serverless functions may be hard to monitor or debug with existing processes, and it can be tricky to test code locally. Developers need to adapt to a new set of tooling.

Though some organizations look to serverless as a path to the cloud, there’s no lift-and-shift option; more extensive application transformation efforts are required. A good rule of thumb: if you’re not ready for microservices, you’re not ready for serverless.


Not all workloads lend themselves to serverless—yet.

Despite its many advantages, serverless architecture is not fit for all workloads. There are still some limitations to what applications can run in a serverless model. In a paper from UC Berkeley, researchers point to specific constraints of existing serverless solutions. For example, the authors suggest that systems reliant on very large data sets don’t always work well with FaaS platforms.

Serverless technology will continue to expand the kinds of scenarios it supports. For now, serverless is yet another tool in your cloud-native toolbox. If your app doesn’t work well in a serverless context, consider other abstractions like an app platform or container-as-a-service offering.


Plan for performance.

If you care about performance, plan accordingly. The pay-per-use model may seem attractive, but it doesn’t come for free. For one thing, when a function has scaled to zero, it has to spin back up before it can handle the next event that triggers it. (The time this takes is called a “cold start.”)

The performance hit on the first request might be significant, depending on your code, chosen serverless runtime, and use case. Don’t forget to factor in startup times and network latency. Also, be aware of any additional charges associated with adjacent managed services. You may opt to receive or send data in batches to limit bandwidth charges or connection costs.
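
One hedged sketch of the batching advice above: instead of invoking the function once per record, the trigger delivers records in batches, so per-invocation overhead and downstream connection costs are amortized. The MetricEvent shape is hypothetical.

    import java.util.List;
    import java.util.function.Function;

    // Hypothetical record delivered from a data stream or queue.
    record MetricEvent(String source, double value) {}

    // Processing a batch per invocation amortizes startup overhead, connection
    // setup, and any per-request charges from adjacent managed services.
    public class MetricBatchFunction implements Function<List<MetricEvent>, Double> {

        @Override
        public Double apply(List<MetricEvent> batch) {
            // One downstream write per batch instead of one per event.
            return batch.stream().mapToDouble(MetricEvent::value).sum();
        }

        // Local simulation of a single batched delivery.
        public static void main(String[] args) {
            System.out.println(new MetricBatchFunction().apply(List.of(
                    new MetricEvent("sensor-a", 1.5),
                    new MetricEvent("sensor-b", 2.5))));
        }
    }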


If you do serverless wrong, it could cost you.

When adopting any new architecture, be realistic. Technology alone is rarely enough without the culture and practice to go with it. Don’t try to use serverless patterns to connect legacy data services together. If you aren’t using cloud-native service APIs, you could run into problems with persistent connections. Avoid long-running functions. Don’t write functions with too many dependencies. As with microservices, doing serverless wrong could be detrimental and end up costing you even more.


Choose your managed services carefully.

Serverless patterns encourage “custom research, not custom code,” says Joe Emison. You may spend more time researching than coding for serverless applications. That’s because developers rely on managed services rather than custom code. Developers often find themselves deciding on solutions for identity management, databases, messaging queues, API gateways, and mobile notifications.

The list of choices for managed services is ever growing. Pick the right ones for the job, and properly integrate them with your application. The wrong choice creates just the kind of technical debt you’re trying to avoid by using serverless in the first place!



Serverless at Pivotal

Pivotal has partnered with Google and other industry leaders to develop Knative, a set of building blocks that simplifies deploying and running functions atop Kubernetes. Pivotal has also introduced Project riff for running functions, with a particular focus on responding to event streams. Project riff and Knative are the technology behind Pivotal Function Service (PFS), currently an alpha preview release designed for evaluation deployments.

Plenty of Pivotal customers are already using the event-driven paradigms encouraged by serverless architecture. With Spring Cloud Data Flow, developers use Spring Boot and Spring Cloud Stream to build flexible data integrations and real-time data processing pipelines. Spring Cloud Function is also a great way to take advantage of Spring Boot features on serverless providers across clouds.
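
As a minimal sketch of the Spring Cloud Function programming model, the application below declares its business logic as a plain java.util.function.Function bean; the framework then adapts that bean to the chosen target, such as an HTTP endpoint via the web starter or a cloud provider’s FaaS adapter. The class and bean names are illustrative.

    import java.util.function.Function;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;

    // Business logic is just a Function bean; Spring Cloud Function adapts it to
    // the deployment target (web endpoint, messaging binder, or FaaS platform).
    @SpringBootApplication
    public class GreetingFunctionApplication {

        @Bean
        public Function<String, String> greet() {
            return name -> "Hello, " + name + "!";
        }

        public static void main(String[] args) {
            SpringApplication.run(GreetingFunctionApplication.class, args);
        }
    }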

Many of today’s technologies have taken serverless to the next level. But Pivotal has long been a proponent of its fundamental principles. Pivotal Application Service (PAS) simplifies the developer workflow and hides the underlying infrastructure. With PAS, developers just push code, and the platform takes care of the rest. It’s also easy to connect applications to managed services using the Open Service Broker API.



Contact us