Serverless computing: a fad or the future?
Serverless computing has been in the news a lot lately. To some experts in the industry, it’s the next big thing. To others, it’s considered a fad.
Serverless computing—also known as Function as a Service (FaaS)—doesn’t actually mean there aren’t any servers; it just means that a developer doesn’t have to worry about managing a physical server on premises as a host for code. The code runs in a cloud-based service such as AWS Lambda or Iron.io.
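To make the model concrete: on a FaaS platform, the unit you deploy is typically just a handler function the platform invokes on demand. Here is a minimal sketch of an AWS Lambda-style handler in Python; the event payload shape is an illustrative assumption, not a real service contract.

```python
import json


def lambda_handler(event, context):
    """Entry point the platform invokes on demand; there is no
    long-running server process for the developer to manage.

    `event` carries the request payload; `context` carries runtime
    metadata supplied by the platform.
    """
    # Hypothetical payload field, purely for illustration.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


# Locally, you can exercise the handler the same way the platform would:
if __name__ == "__main__":
    print(lambda_handler({"name": "Skuid"}, None))
```

The developer uploads only this function; provisioning, patching, and scaling the machine it runs on are the platform's problem.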
We sat down with site reliability engineer Micah Hausler and internal solutions engineer Javid Igani to talk about the pros and cons of serverless computing and what the future looks like for the tech industry’s latest craze.
What are some other companies besides Amazon that specialize in serverless computing? And what’s all the buzz about?
Javid: There are multiple offerings, including IBM OpenWhisk, Serverless, Inc., Microsoft Azure Functions, and Google Cloud Functions.
Micah: There are actually some open source efforts, too, where it’s not really serverless—you still have to manage a server, but you can run a process on your server that emulates serverless computing for you. Even if you have your own physical servers, you’re not paying for cloud VMs. You could run a serverless architecture where your code gets run on demand.
Javid: That’s where you save a whole bunch of money, too, because you’re not just sitting there with a computer staying on, paying for the power bills.
What are some other good things about serverless computing?
Micah: One of the big benefits is a really significant reduction in how long it takes to develop something and get it out there, ready to serve. When you provision a server, there’s time to spin it up and do all of the configuration.
Javid: Security updates and that type of thing, right?
Micah: Yeah, maintaining all kinds of things, like security updates and operations work. If some of that can be traded away with serverless, then all you really have to do is have an Internet connection and be able to upload code. That’s another one of the big advantages of running certain services serverless.
Javid: I’d say the scalability is very interesting. If you have one user visiting a site and clicking on a search feature, you’re only running one instance of that search function. Now say you have 7,000 users hitting that search function at the same time. Instead of spending time spinning up new servers, even dynamically, to deal with that demand, you can just have the functions themselves spin up and then close when they’re done. You can have multiple concurrent processes running at the same time to handle demand, so it’s extremely scalable.
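The scaling behavior Javid describes can be mimicked locally with a thread pool: each "request" becomes its own short-lived invocation that spins up, runs, and finishes independently. A toy sketch, where the `search` function is a hypothetical stand-in for the search feature:

```python
from concurrent.futures import ThreadPoolExecutor


def search(query: str) -> str:
    # Hypothetical stand-in for a search function; on a FaaS platform,
    # each call would be an independent, short-lived invocation.
    return f"results for {query!r}"


def handle_burst(queries):
    # Many concurrent "invocations": each one runs and tears down
    # independently, with no long-lived server sitting between requests.
    with ThreadPoolExecutor(max_workers=32) as pool:
        return list(pool.map(search, queries))


# 7,000 simultaneous searchers, each served by its own invocation:
results = handle_burst(f"query-{i}" for i in range(7000))
```

A real platform would schedule those invocations across its own fleet; the point of the sketch is only that capacity tracks demand per call, not per provisioned machine.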
Micah: That scalability is well suited for irregular traffic patterns. For example, at night in the US, we have very low traffic. Traditional provisioned servers might be a really good fit because you can spin up servers during the day, and spin them down at night. But if you are running a global site that can see weird spikes at all times of day, serverless and the scalability of that can be a really good fit.
What about reduction of operations?
Micah: Speaking with my ops hat on, I think there’s a lot more that operations folks do than just manage servers; that’s a small part of it. Anytime you’re running software, there are a lot of needs that operations takes care of, and with serverless those needs shift to the developer. Tasks an operations person might look after, like error tracking, log management, and aggregation, still have to be done. And when you want to start looking at the performance of your application, even though you’re not technically managing a server, you’re still billed for CPU time.
Also, when you’re looking at creating efficiencies and scalability, you can save a lot of money with serverless, because you’re only billed for CPU time, which reduces the operational overhead. Although those responsibilities shift for operations people, there’s still going to be a lot of that type of work required in the background. Serverless is a great fit for some use cases. You may have plug-and-play swapping of functions to change out business logic, or you can even have multiple versions of a function running side by side for acceptance testing. Whenever you want, you can turn off unnecessary functions. You can keep your business running while you’re trying new features and updating your business logic.
Let’s talk about some of the cons of serverless computing.
Who would serverless computing be good for? Who would it not be good for?
Micah: Yeah, there are a lot of enterprise applications, especially, that require longer-running backend processes: processes that don’t necessarily have a web interface but are churning data. They are crunching numbers. A lot of enterprises have a large Hadoop system set up to aggregate data, whether it’s financial numbers or business data.
Javid: Distributed jobs.
Micah: Distributed jobs, all that kind of stuff. There’s a lot of investment in that, and completely switching paradigms doesn’t make sense there. There are also some high I/O (input/output) needs. They need fast disks. Since with serverless you don’t have access to a server, you can’t say, “I need this really fast SSD drive attached to this function.” It doesn’t work like that. The same goes for high network requirements. Say you have a lot of data you need to get in and out quickly. You can’t, at this point, provision a function to have a big one-gigabit Internet connection; providers don’t tell you exactly what your network capacity is. Those are things you have to weigh when you’re thinking about running serverless: what are my requirements? What kind of computing needs do I have? For example, Amazon specifically has a five-minute cap on how long a function can run.
Javid: That’s right.
Micah: Most of the time, they expect your 99th-percentile runtime to be a few seconds at most, maybe up to a minute, but there’s a hard cap of five minutes. If you need anything to run longer than that, serverless is not a great fit.
What about moving your infrastructure into the cloud?
Javid: If you’re hearing about serverless and thinking, oh, it sounds like a great way to save time and money, we need to go ahead and move our resources to the cloud, keep in mind that that is a very “shoot from the hip” approach to deciding whether to use serverless. You need an appropriate use case for it. Your current enterprise application probably doesn’t fit serverless, but it may, or some parts of it may benefit from it. That requires a lot of analysis and conversation before you would even look at moving to the cloud. If you’re already moving to the cloud, you definitely want to have those conversations. Because of the limitations inherent in serverless computing, many applications won’t benefit from it or may not even be possible to reimplement in a serverless architecture.
Micah: A lot of enterprises are interested in serverless computing as they move to the cloud. The big benefit to them is obviously financial: they’re moving from a capital expense of running servers and having people manage those servers to an operating expense, a monthly payment for running that server. Over time, that’s oftentimes significantly less. Moving to the cloud is oftentimes a lift, move, drop: it’s the same application running in both places. Once you’re in the cloud, new applications are where I would say serverless might be a good fit. Serverless is not a full replacement for existing applications.
Charlie: It would be a combination of servers and serverless?
Charlie: So, there would never be a scenario where an entire system went serverless?
Micah: For most companies that’s never going to be the case just because there are things that need to be run continuously.
Charlie: But if it’s just a few applications then that would best benefit enterprises?
Javid: Right, say you have a few services that are doing just a couple of small tasks, maybe periodically throughout the day, or maybe they’re on-demand services. If you can easily abstract it into a serverless paradigm, then those services would benefit from moving to a serverless architecture, but your other systems are most likely still going to stay on servers in the cloud.
Micah: It really does depend on the use case, I would say even at a per-project level. Talking about Skuid for a minute: at this point, we have very little that runs serverless, but in the instance where we use it, it happens to be a good fit. We’re using Amazon’s log service, which has a trigger that fires whenever our application generates a log.
Javid: Something to mention there is that not all programming languages are supported by serverless platforms. I think Amazon has three or four. They have Java, Node.js, and Python. Google, I think, has one of those. If you were going to move from one service to another, you may need to rewrite the functions as well.
What about technical debt?
Javid: Technical debt is not something unique to serverless. Say you have an enterprise application that might be 10 years old. Five years ago, the one person who knew how it worked and supported it left the company. Then, this year, you have to do a mandatory update and move the application to a new operating system, but now nobody knows what this application even does.
That’s something that already exists in the enterprise world. From the perspective of serverless, I might ask, “What is this function doing?” It could be very abstract; each function does one very specific thing. Maybe with services like AWS X-Ray now, you can see what the chain of events is, but it still may not be easy to see exactly what a function is doing and how it fits into the grand scheme. That’s just with a single function. If there’s a collection of functions that no one understands, that can really compound the problem.
Micah: This is an organizational problem. It can be made more acute because it’s so easy to just write a function and have it run. If someone doesn’t document that function properly and save the source code in a responsible place, like a version control system, it just quietly keeps running.
Say someone comes along and says, “Oh, this function gets called occasionally, but it hasn’t been called in two months. It doesn’t look like it’s being used,” and deletes it. Then you can imagine someone going to save or update a contact, and the contact details don’t actually get updated. That can be really bad.
Will server management ever become a thing of the past?
Micah: No. Stepping back from just serverless, I think that the traditional way servers have been managed has changed a lot and is going to continue to change. Things like Kubernetes are already changing how people think about running a server. It abstracts away, not quite to the point of Function-as-a-Service (FaaS), but a lot of the dirty work of running a server where you just interact with a Kubernetes API and say, “I want to run this process.” It picks a server for you, very similar to serverless computing, but it’s a little bit more fleshed out. You could, in some sense, call that serverless in that you’re not managing so much an individual server, and the files on it. But serverless, FaaS computing will not remove the need for servers.
Javid: I think about serverless as an attribute of the development process. As a developer, I don’t have to worry about what server it’s on or managing that server, so my process is serverless. It doesn’t mean that it’s not running on a server – at this point, we haven’t figured out a better method.
What do you see for the future of serverless computing? How do you think it’ll change?
Javid: I know a lot of companies are already using it for machine-learning applications and for some distributed processing for data science. I’m thinking robotics is probably going to play into it, where you have to offload a lot of the processing workload. That’ll help keep the logic off of the actual machine itself so that you can have smaller chips that are more energy efficient and machines that are still capable of benefitting from complex logic. You can just offload a request and receive the response whenever the cloud is done processing.
Micah: A lot of the cons we’ve talked about are where I think there’s a lot of opportunity for growth; some of those criticisms can definitely be addressed. One of the really interesting tie-ins, one of the ways it will really be used, will be with IoT, the Internet of Things. If you think about it, IoT fits the paradigm really well: you have a lot of independent clients accessing a service at unspecified intervals, doing very small, autonomous operations that concern only themselves. An example is a home thermostat or a light bulb; those don’t necessarily care about the thermostat or light bulb of your neighbor down the street or across the country. It’s more about the individual device. On your backend, you might be doing some larger aggregation, but in terms of checking in on a per-request basis, IoT has a lot of good tie-in with serverless.
Charlie: Okay, thanks for chatting, fellas.
Javid: Thank you.