[Serverless architecture #1] On the road to a serverless containerized CMS website
This article is the first in a series of four dedicated to the concepts of serverless and containerization, and how they can be applied to a typical web application involving:
- A Content Management System (CMS) using Strapi.
- A public facing front end application using Gatsby.
- A web analytics tool to measure the behavior of users on our website using Umami.
The serverless/containerization concepts implemented throughout these articles will use the above frameworks/tools, though the same principles can most likely be extended to others, given some time to adapt them.
When I develop new web applications I tend to put a strong emphasis on the ecological aspect. Is the code too heavy or not optimized enough? How much data transits to the client? Can I deploy it in a serverless fashion?
That last question is something I became particularly interested in within the last couple of years.
In my conception of the web, applications are not meant to be perpetually deployed, instead they should only be deployed when required, activated by a system of trigger events whenever possible or worth it.
In one sentence, here is what I would recommend in many contexts: deploy what you need, not what you can.
A typical website using a CMS will generally use at least one provisioned server. Sometimes the back office and the public front end run on the same server, sometimes they are deployed on two different instances. Sometimes we even replicate this setup for each environment.
73 million websites are built using a CMS. Out of these, I let you imagine how many run on a provisioned server or VM.
Despite all the progress made in IT when it comes to virtualization and containerization, a provisioned server still consumes energy just to remain available.
For many of these websites, we also know how little content is created/edited on a daily basis. Based on this, we can only assume that these instances have a very high idle time, and you can only imagine the absurd waste of energy and pollution they produce in comparison to their needs.
You have probably seen it before: a serverless architecture can bring a number of advantages, the first of them being no infrastructure management.
Of course, working with a serverless architecture can sometimes bring its share of challenges when it comes to speed and performance, caching, or the development experience, but that is far from always the case.
A concrete scenario
A typical CMS website generally gathers a few components such as:
- A front end which can be static or dynamic (via SSR)
- A CMS, which can be deployed along with the front end application or separately depending on the framework
- A web analytics tool to measure the audience and the behavior of our visitors.
Considering that we want to host everything ourselves and have full control over the data and the deployment, a typical scenario requires no fewer than two provisioned servers: one for the CMS and one for the web analytics tool. This also means that these two servers are always up and running, hence consuming energy constantly.
My goal will be to remove the need for these provisioned servers and only rely on containers and serverless architecture while keeping necessary availability and running workflows.
Description of the solution
Our cloud provider of choice for this demo will be Microsoft Azure. Some terms/products specific to Azure will be used throughout these articles.
NB: Equivalents most likely exist with other cloud providers.
A CMS does not need to be always up and running. It can and should be able to start and stop on demand whenever an editor needs to edit the content. This is precisely what we will implement.
An Azure Function triggered via HTTP request will allow an editor to create an instance of our CMS on demand (via Azure Container Instances). Within 1 to 2 minutes the instance is ready, and the editor can make as many changes to the content as needed. When the editor is done and the CMS instance has been idle for a while, it is disposed of automatically by that same Azure Function.
NB: For simplification purposes, our SQL database engine will be SQLite, hosted on an Azure File Share that Strapi will be able to query at any time.
Since our main goal is to reduce server provisioning, it is natural to go for a static site generator to build our web application. Among the different static site generators I know, the one I am most familiar with is Gatsby. It will integrate perfectly with Azure Static Web Apps and GitHub Actions for its deployment.
What would a CMS be without a WYSIWYG feature?
Because it is headless, Strapi does not come with such a feature. However, most headless CMSs provide hooks that we can use to trigger a hot reload of our front end, usually when working locally.
How can we then have the same feature for an editor working online?
Simply by instantiating a development instance of Gatsby on demand side by side with our CMS. That way, Strapi can trigger a refresh of our Gatsby development server whenever a change has occurred and editors can see changes in real time. Once the editor is satisfied with the changes, another hook can be triggered to deploy the changes to our production Gatsby web app.
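The bridge between the two is a small webhook relay: Strapi fires a webhook on content changes, and we forward it to the Gatsby development server's refresh endpoint (available when the server is started with `ENABLE_GATSBY_REFRESH_ENDPOINT=true`). A minimal sketch, where `GATSBY_PREVIEW_URL` and the handler names are hypothetical:

```python
import urllib.request

# Hypothetical preview host; in practice this is the address of the
# on-demand Gatsby container started alongside the CMS.
GATSBY_PREVIEW_URL = "http://preview.example.com:8000"

def make_refresh_request(base_url: str) -> urllib.request.Request:
    """Build the POST that asks a Gatsby dev server (started with
    ENABLE_GATSBY_REFRESH_ENDPOINT=true) to re-source its content."""
    return urllib.request.Request(base_url.rstrip("/") + "/__refresh", method="POST")

def on_strapi_change(event: dict) -> None:
    """Webhook handler: Strapi calls this on entry create/update; we relay
    it as a refresh request so the editor sees the change in the preview."""
    urllib.request.urlopen(make_refresh_request(GATSBY_PREVIEW_URL))
```

The same relay pattern works for the publish hook: instead of hitting the preview server, it would trigger the GitHub Actions workflow that rebuilds the production site.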
Just as our CMS is disposed of after being idle for a while, so is our development server.
Web Analytics tool
It is always nice to know what kind of traffic our website receives. This is the mission that web analytics tools fulfill.
But how can we accomplish that without a provisioned server or delegating that mission to a third party platform?
Our solution relies on the principle of delayed computing. We redirect all tracking requests to an Azure Function that in turn stores them in a queue within Azure Service Bus. On a daily basis (or at any chosen frequency), all messages in the queue are consumed by another Azure Function, whose mission is to instantiate a temporary instance of our web analytics tool (Umami in our case) and replay all messages to it so they can finally be accounted for.
Another Azure Function, triggered via HTTP request this time, also allows temporary access to our web analytics tool whenever needed, following the same model as our CMS.
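The core of the delayed-computing step is draining the queue in fixed-size batches so the replay function can forward each batch to the temporary Umami instance. Here is a minimal sketch of that draining logic with an in-memory queue standing in for Azure Service Bus; the function name is hypothetical and the actual HTTP replay to Umami is not shown.

```python
from collections import deque

def drain_in_batches(queue: deque, batch_size: int = 100):
    """Consume every queued tracking event, yielding fixed-size batches.
    The daily Azure Function would forward each batch to the temporary
    Umami instance's collect endpoint (replay step not shown)."""
    batch = []
    while queue:
        batch.append(queue.popleft())
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final partial batch
        yield batch
```

Batching keeps the replay resilient: if forwarding one batch fails, only that batch needs to be retried rather than the whole day's traffic.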
The components of this solution have already been tested together in a couple of projects. However, as the tech stack is very specific, I have split its implementation into 3 parts so you can pick whichever might be of interest for your own projects and make it your own.
All the pieces of the solution are detailed in 3 subsequent articles:
- Containerization of headless CMS Strapi and its on demand instantiation.
- Set up of Instant Preview feature with Gatsby and Strapi.
- Set up of Umami analytics in a serverless architecture by applying the principle of delayed computing.
Each article provides more in-depth detail on the technical solution, along with a dedicated GitHub repo if you are interested.
Since this kind of solution is more complex than a typical setup, prior knowledge of the different tools and frameworks is preferable but not mandatory. Prior knowledge of the Azure platform is highly recommended, though.
NB: Each repo comes with a Terraform script that will ease the setup, though some steps still require manual execution, such as deploying the Static Web App or the Azure Functions.
Feel free to leave a comment if you have any questions.