Modern applications place high demands on UI developers. Web applications require complex functionality, and the lion’s share of the work falls on UI developers:

  • building modern, easy-to-use interfaces
  • creating interactive elements and complex animations
  • managing complex application state
  • meta-programming: build scripts, transpilers, bundlers, linters, etc.
  • reading from REST, GraphQL, and other APIs
  • middle-tier programming: proxies, redirects, routing, middleware, auth, etc.

This list is scary on its own, but it gets really rough if your tech stack isn't optimized for simplicity. A complex infrastructure introduces hidden responsibilities that create risk, slowdowns, and frustration.

Depending on the infrastructure we choose, we may also accidentally add server configuration, release management, and other DevOps tasks to the UI developer's plate.

Software architecture has a direct impact on team productivity. Choose tools that avoid hidden complexity to help your teams accomplish more and feel less overloaded.

The murky middle tier – where front-end tasks get complicated

Let’s look at a task I’ve seen assigned to multiple front-end teams: create a simple REST API that combines data from a few services into a single request for the UI. If you just yelled at your computer, “But that’s not a front-end job!” – I agree! But who am I to let facts get in the way of a deadline?

APIs that exist only because the UI needs them fall under middle-tier programming. For example, if a UI combines data from multiple backend services and derives a few additional fields, the common approach is to add a proxy API so the UI isn’t making multiple API calls and running a bunch of business logic on the client side.
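As a rough illustration (the service names and fields here are entirely made up), the merge step such a proxy might perform could look like this – two upstream responses combined into one ready-to-render payload with a derived field:

```javascript
// Sketch of the merge step a proxy API might perform.
// In a real proxy, `movie` and `reviews` would come from two
// separate backend services; they're inlined here for illustration.
function buildMovieResponse(movie, reviews) {
  const averageScore =
    reviews.reduce((sum, review) => sum + review.score, 0) / reviews.length;

  return {
    ...movie,
    reviews,
    // derived field the backend services don't provide directly
    averageScore,
  };
}

const movie = { slug: 'booper', title: 'Booper' };
const reviews = [{ score: 3 }, { score: 5 }];

console.log(buildMovieResponse(movie, reviews));
// → { slug: 'booper', title: 'Booper', reviews: [...], averageScore: 4 }
```

With this in place, the client makes one request and renders the result, instead of calling two services and computing the average itself.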

There’s no clear line about which backend team should own an API like this. Getting another team to build it – and to prioritize updates to it in the future – can be a bureaucratic nightmare, so the front-end team ends up responsible for it.

This is a story that ends very differently depending on the architectural choices we make. Let’s look at two common approaches to handling this task:

  • Create the REST API by building an Express app on a Node server
  • Create the REST API with serverless functions

The Node + Express approach contains a surprising amount of hidden complexity and overhead. Serverless lets UI developers deploy and scale the API quickly so they can get back to their other UI tasks.

Solution 1: Build and deploy the API using Node and Express (and Docker and Kubernetes)

Earlier in my career, the standard practice was to stand up a REST API using Node and Express. On the surface, this looks relatively straightforward. We can create an entire REST API in a file called server.js:

const express = require('express');

const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();

// simple REST API to load movies by slug
const movies = require('./data.json');

app.get('/api/movies/:slug', (req, res) => {
  const { slug } = req.params;
  const movie = movies.find((m) => m.slug === slug);

  res.json(movie);
});

app.listen(PORT, HOST, () => {
  console.log(`app running on http://${HOST}:${PORT}`);
});
This code doesn’t stray too far from front-end JavaScript. There’s a decent amount of boilerplate here that could trip up a front-end developer who’s never seen Express before, but it’s manageable.

If we run node server.js, we can visit http://localhost:8080/api/movies/some-movie and see a JSON object with details about the movie with the slug some-movie (assuming you’ve defined one in data.json).
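For reference, data.json here is assumed to be a flat array of movie objects, each with at least a slug field – something like this made-up sample:

```json
[
  { "slug": "some-movie", "title": "Some Movie", "year": 2021 },
  { "slug": "booper", "title": "Booper", "year": 2020 }
]
```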

However, building the API is only the beginning. We need to deploy this API so that it can handle a reasonable amount of traffic without falling over. Suddenly things get much more complicated.

We still need several tools:

  • somewhere to deploy it (e.g. DigitalOcean, Google Cloud Platform, AWS)
  • a container to keep local development and production consistent (i.e. Docker)
  • a way to make sure the deployment stays up and can handle traffic spikes (i.e. Kubernetes)

At this point, we’re well outside front-end territory. I’ve done this kind of work before, but my solution was copy-pasted from a tutorial or a Stack Overflow answer.

The Docker config is at least somewhat understandable, but I have no idea whether it’s secure or optimized:

FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]

Next, we need to figure out how to deploy the Docker container to Kubernetes. Why? I’m not entirely sure, but that’s what the company’s backend teams use, so we should follow best practices.

This requires more configuration (all copy-pasted). We entrust our fate to Google and find someone else’s instructions for deploying a Docker container to Kubernetes.
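For a sense of what that copy-pasted config looks like, here’s a minimal sketch of a Kubernetes Deployment and Service for our container – the image name, replica count, and resource names are all placeholders:

```yaml
# Hypothetical minimal Deployment + Service for the movies API.
# Image name, replica count, and labels are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: movies-api
  template:
    metadata:
      labels:
        app: movies-api
    spec:
      containers:
        - name: movies-api
          image: registry.example.com/movies-api:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: movies-api
spec:
  selector:
    app: movies-api
  ports:
    - port: 80
      targetPort: 8080
```

Even this stripped-down version is another file format, another mental model, and another thing to keep in sync with the app.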

Our original task of “stand up a quick Node API” has ballooned into a set of tasks that don’t line up with our core competencies. The first time I got a task like this, I lost several days getting everything configured and waiting for feedback from the backend teams to make sure I wasn’t causing more problems than I was solving.

Some companies have a DevOps team to review this work and make sure it doesn’t do anything terrible. Others end up trusting the hivemind of Stack Overflow and hoping for the best.

With this approach, what starts as a manageable amount of Node code quickly sprawls into multiple layers of config spanning areas of expertise well beyond what we should expect a UI developer to know.

Solution 2: Build the same REST API with serverless functions

If we choose serverless functions instead, the story can be dramatically different. Serverless is an excellent companion to Jamstack web apps, giving UI developers the ability to handle middle-tier programming without the unnecessary complexity of figuring out how to deploy and scale a server.

There are multiple frameworks and platforms that make deploying serverless functions painless. My go-to solution is Netlify, since it enables automated continuous delivery of both the front end and serverless functions. In this example, we’ll use Netlify Functions to manage our serverless API.

Using Functions as a Service (a fancy way of describing platforms that handle the infrastructure and scaling for serverless functions) means we can focus only on the business logic and know that our middle-tier service can handle huge amounts of traffic without falling over. We don’t have to deal with Docker containers or Kubernetes or even the boilerplate of a Node server – it Just Works™, so we can ship the solution and move on to our next task.

First, we can configure our REST API in a serverless function at netlify/functions/movie-by-slug.js:

const movies = require('./data.json');

exports.handler = async (event) => {
  const slug = event.path.replace('/api/movies/', '');
  const movie = movies.find((m) => m.slug === slug);

  return {
    statusCode: 200,
    body: JSON.stringify(movie),
  };
};
To set up proper routing, we can create a netlify.toml file at the root of the project:

[[redirects]]
  from = "/api/movies/*"
  to = "/.netlify/functions/movie-by-slug"
  status = 200

This is significantly less configuration than the Node/Express approach required. What I prefer about this approach is that the configuration is stripped down to only what we care about: the specific paths our API should handle. The rest – build commands, ports, and so on – is handled for us with sensible defaults.

If we have the Netlify CLI installed, we can run this locally right away with the command ntl dev, which knows to look for serverless functions in the netlify/functions directory.

Visiting http://localhost:8888/api/movies/booper displays a JSON object containing details about the “booper” movie.

So far, this may not seem too different from the Node and Express setup. When we get to deployment, though, the difference is huge. Deploying this site to production requires just two steps:

  1. Commit the serverless function and netlify.toml to the repo and push it up to GitHub, Bitbucket, or GitLab
  2. Use the Netlify CLI to create a new site connected to the git repo: ntl init

That’s it! The API is now deployed and can scale on demand to millions of hits. Changes will be deployed automatically whenever they’re pushed to the repo’s main branch.

You can see this in action and check out the source code on GitHub.

Serverless functions open up a huge amount of potential for front-end developers

Serverless functions aren’t a replacement for all backends, but they’re an extremely powerful option for handling middle-tier development. Serverless lets us avoid the unintentional complexity that can cause organizational bottlenecks and severe efficiency problems.

Serverless functions let UI developers handle middle-tier programming tasks without the extra boilerplate and DevOps overhead that create risk and decrease productivity.

If our goal is to empower front-end teams to ship software quickly and with confidence, choosing serverless functions bakes productivity into the infrastructure. Since adopting this approach as my default starting point for Jamstack apps, I’ve been able to ship faster than ever, whether working alone, with other front-end developers, or cross-functionally with teams across a company.

