The evolution of event-driven architectures, combined with the modern best practice of building web apps as decoupled microservices, has greatly contributed to the rapid adoption of serverless technology.
Although not strictly “serverless” (web servers still play a central role in the code execution model), serverless computing is crossing the chasm, fundamentally shifting how DevOps teams and software engineers design and architect web-based systems. Whilst the benefits are not automatic, going serverless enables cost-effective, easy-to-maintain ‘NoOps’ solutions, offering high scalability and performance with reduced overall complexity.
In 2006, Amazon Web Services (AWS) introduced the Elastic Compute Cloud (EC2), which provides customers with the ability to pay for computing resources on-demand, by the hour. The introduction of EC2 is considered to have ignited the cloud computing revolution, fundamentally disrupting the world of information technology and shaping the internet we know today.
EC2 offers what you might think of as ‘traditional’ web servers on a flexible and scalable model. When you make a request to one, by hitting a website or using a cloud-hosted application, the message is carried in much the same way as if you made a request to a server sitting in the corner of your own server room. Infrastructure teams don’t have to worry about requesting capital investment for new servers, or think so hard about disaster recovery, but they still need to get the sizing and setup of the servers right to meet the requirements of the applications being hosted.
With the arrival of serverless, cloud providers extend the on-demand utility model of running apps in the cloud even further. Rather than paying for resources by the hour, you pay only for what you use: you are charged based on the number of requests and the duration of each one.
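To make the pricing model concrete, here is a minimal sketch of request-and-duration billing. The rate figures below are illustrative assumptions for the example, not quoted prices from any provider; real serverless bills also factor in memory allocation and free-tier allowances.

```python
def serverless_cost(requests: int, avg_duration_s: float,
                    price_per_request: float = 0.0000002,
                    price_per_second: float = 0.0000166667) -> float:
    """Total cost = per-request charge + per-second duration charge.

    The default rates are hypothetical placeholders chosen to
    illustrate the billing shape, not actual cloud pricing.
    """
    request_charge = requests * price_per_request
    duration_charge = requests * avg_duration_s * price_per_second
    return request_charge + duration_charge

# One million requests averaging 200 ms each:
monthly_cost = serverless_cost(1_000_000, 0.2)
```

The key point the sketch captures: with zero traffic the bill is zero, and cost scales directly with usage rather than with provisioned capacity.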
On the surface, the benefits appear to be clear and obvious. However, as early adopters of cloud applications know too well (remember “cloud ready”?), applications need to be engineered and designed with particular principles, techniques, and consideration to fully leverage the advantages of going serverless.
For example, aspects of our ticketing application, such as SeatCurve and CultureCast, have been engineered in a particular way to leverage serverless when integrating with Tessitura in a cost-effective and scalable way. We’ve also launched a new version of our virtual waiting room product CrowdHandler to run serverless on AWS.
The trend towards serverless computing will continue so long as the commercial drivers are there, and cloud providers like AWS continue to release and improve their ecosystem of serverless tools. The idea of a server, or even an instance of a server, is becoming ever more ephemeral in the world of computing, moving closer to the conceptual ‘server’ that responds to a client’s request than to a metaphorical, or dare I say physical, box of metal at your local data center.
This article is taken from Made Next, a new newsletter from Made Media looking at emerging and future trends at the intersection of digital technology and the world of arts and culture.