Going Serverless

The context of this week's discussion is building an infrastructure to support the vast decentralized computing environment we are entering. Not only are we currently supporting hundreds of millions of mobile devices, but we will soon need to support billions of IoT devices.

If any of you have ever set up geo-diverse data centers, you can appreciate the complexity of supporting that environment. If you have ever had to run large-scale, consumer-facing web applications that needed to scale in response to unknown events, you can appreciate the complexity of architecting for that as well. I've had the opportunity to do both, using tools such as layer 7 load balancing (HAProxy), front-end caching (Varnish), virtualization (KVM & Open vSwitch), high availability (DRBD, Pacemaker), database replication (PostgreSQL), SAN replication (EqualLogic) and global DNS (DNSMadeEasy). While these tools exist to tackle these challenges, scaling becomes a finite problem if you host your own systems. You may be able to purchase enough equipment to handle a 10x load factor, but what if you are surprised by a 15x load factor?

The hundreds of millions of mobile devices and billions of IoT devices have to send their data and requests to something. Why not let someone who has proven capable of handling that scale do the scaling for you? Clearly the big cloud players are the vendors of choice.

Most people started with their own managed data center or co-located equipment at a vendor-managed data center. When AWS came along, many jumped at the opportunity to move their servers to the cloud. My experience is that this type of move isn't guaranteed to save any money; in fact, it can cost more. Mimicking your local data center in the cloud can simply be a box-checking exercise that lets you say you are in the cloud.

Cloud environments have made sense for startups that needed computing resources, since they eliminate all of the upfront capital expense.

For those of us needing to support the vast mobile/IoT ecosystem, skip the step of moving your data center to the cloud and go directly to serverless computing.

Have you ever thought we would see the day when no "servers" were required to run your organization? Well, that day has arrived. Google, Amazon and Microsoft have all created serverless solutions in their cloud offerings. For me, serverless computing is what the cloud was always meant to be.

The advantage of a serverless solution is that you get all of the benefits of a highly available cloud environment, including massive scaling, while only paying for the actual execution time of your code. The concept is not new; it dates back to time-sharing mainframe systems, where you paid only for the compute time you consumed. Now you can take advantage of the massive compute resources of the cloud providers.

So what is a serverless solution? We have to start with the progression of the available cloud resources.

Most people are familiar with renting servers from Google (Compute Engine), AWS (EC2), or Azure (VMs). These are self-managed servers that you pay for regardless of whether you are using them, and you can install virtually any software you want on these instances.

The next step up the ladder is the managed server environment. Google's is called App Engine, AWS uses Elastic Beanstalk, and Azure has App Service. In these environments, you only pay for the server resources actually consumed. The cloud provider manages the infrastructure (patching, security, monitoring), and you dictate how the resources scale up and down. There are still servers being used, but you can be more efficient with their use. You can use a variety of languages but are restricted in the server resources available to you.

The latest offering is serverless computing. Google's is called Google Functions, AWS's is called Lambda, and Azure's is called Azure Functions. Your code gets executed by the cloud provider and appears to have unlimited scalability, and you only pay when your code actually runs. The primary limitation right now is that Google Functions and AWS Lambda functions are written in NodeJS, while Azure Functions can be written in NodeJS or C#. Google Functions is in beta with no pricing available. AWS Lambda is free for the first 1M requests per month and costs $0.0000002 per request after that. Azure also provides 1M free requests per month, with very low per-request costs above 1M. Memory usage can also impact pricing if your NodeJS functions require additional memory.

If you are familiar with the JavaScript event loop, you can think of going serverless in those terms: some event triggers the function to execute. It can range from an HTTP request to a pub/sub message to a cloud storage event.

To set up a web-based serverless environment, you can marry these with other offerings. Google Functions can handle HTTP requests directly and integrates with frameworks such as HapiJS or Express. On the backend you'll want a scalable database like Google's recently released Spanner product. AWS Lambda can be paired with the AWS API Gateway to create a fully managed API environment, and on the backend you'd want to choose their managed RDS or DynamoDB offerings. Azure Functions follow the same path as Google Functions.

It is hard to believe that it is now possible to truly and completely offload the data center to someone else. We make money from the applications we create and sell. These new serverless solutions make it possible to focus on our core competencies while supporting the new world of mobile and IoT devices.