Strength Journal Arch 2

Sun Jan 15 2023

The Problem

When I first started coding Strength Journal, I made a deliberate decision to structure the application as a monolith. By “monolith”, I mean that all functionality was implemented by a single server application, rather than separating concerns across multiple services. The term “monolith” carries a lot of negative connotations these days, but this style of architecture has a few strong points that tend to get overlooked:

  • Deployment and release is brutally simple since there’s no need to orchestrate dependencies

  • Development is often faster (at least in the early stages), since adding new functionality doesn’t require coordinating changes across service boundaries 

  • Local development and pre-production environments are easier to spin up since the whole application can run as a single process

As I start to look forward to some new features I’m considering, I realize that I’m pushing up against one of the major downsides of monoliths: since there’s no process-level separation between domains, there is a tendency towards coupling between concerns that really ought to be separated. In my case, feature management, settings, and identity access management were devolving into a big ball of mud. This prompted the need to decompose my architecture into multiple services.

Also, keeping it 100, when you’re building a personal project, you often have a technology in mind and then look for a problem to solve retroactively. In my case, I’m trying to skill up in Azure, and a lot of the tooling tested in the AZ-204 is oriented towards containerized, service-oriented architectures. Furthermore, I was reading about reverse proxies recently, and a lightbulb went on for something that previously eluded me -- how a website spanning multiple applications could be exposed through a single domain. So naturally, I decided to decompose my application so that I could play around with both of these areas.

I’ve recently made the repository public on GitHub, so you can check out the code here:

https://github.com/joelj1995/strengthjournal

New Architecture

A picture is worth 1000 words. So instead of talking at you more, let me provide some chicken scratches to look at:

The broad idea is that Strength Journal is decomposed into four distinct services:

  • Landing Page -- ASP.NET MVC

  • Single Page Application -- Angular

  • Protected Endpoints for Journal Functionality -- ASP.NET WebAPI

  • Identity Access Management Endpoints -- ASP.NET WebAPI

Each service runs as a containerized process. I chose Azure Container Apps as my cloud orchestrator since it was fairly easy to get up and running, and learning Kubernetes is off the table for the time being. For local development, I use Docker Compose, which I was already familiar with. In the cloud deployment, the container environment is doubled up for production in order to support blue/green deployments. 

Nginx is the only service exposed with a public ingress. It parses the URL segments to decide which service to proxy requests to (though the SPA is served directly off the file system), acting as a gateway to each application. 
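
To make that routing concrete, here’s a minimal sketch of the kind of nginx config I mean. The upstream names, ports, and path prefixes are placeholders I’ve made up for illustration, not the exact ones in the repo:

    # Hypothetical gateway config: service names and path prefixes are illustrative only
    server {
        listen 80;

        # Journal API endpoints
        location /api/ {
            proxy_pass http://journal-api:8080;
        }

        # Identity access management endpoints
        location /auth/ {
            proxy_pass http://iam-api:8080;
        }

        # Landing page (ASP.NET MVC)
        location /home/ {
            proxy_pass http://landing:8080;
        }

        # Everything else falls through to the Angular SPA, served off the file system
        location / {
            root /usr/share/nginx/html;
            try_files $uri $uri/ /index.html;
        }
    }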

In the cloud deployment, Azure Application Gateway, a Layer 7 load balancer, sits between the browser and the Nginx ingress. It serves three purposes in this architecture: 1) it is the point of SSL termination; 2) it dynamically replaces the Host header, which, coupled with my wildcard cert, lets me bind the environment to any subdomain; 3) it’s the point at which I can choose whether to route to my blue or green backend pool, so that I can deploy without any downtime.

Refactoring Approach

The first refactoring stage involved actually decomposing my monolith into the services I had identified. I created new subdirectories for each service, adding new VS projects as needed. The rest was mostly a cut-and-paste effort to move code into the relevant services. I also added a “Core” .NET project for common components shared between my services. For the time being, this includes the entire data model, since I’m not keen on decomposing the database schema (yet).

Concurrent with this effort, I added Dockerfiles for each service. A docker-compose.yml file was created so I could spin up the full suite locally. With things working well for local development, the next step was to move this all to the cloud.
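
For reference, the shape of that compose file is roughly the sketch below. The service names, ports, and build contexts are hypothetical; the real docker-compose.yml in the repo differs in the details:

    # Sketch only: service names and build contexts are placeholders
    services:
      gateway:
        build: ./gateway        # custom nginx image that also bundles the built Angular SPA
        ports:
          - "8080:80"
        depends_on:
          - landing
          - journal-api
          - iam-api
      landing:
        build: ./StrengthJournal.Landing
      journal-api:
        build: ./StrengthJournal.Api
      iam-api:
        build: ./StrengthJournal.Iam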

Deployment Changes

My deployment strategy was a pretty run-of-the-mill container-based approach. A branch trigger on main builds each container image, tags it with the git revision hash, and pushes it to my image repository (Azure Container Registry). Each image is environment-agnostic because I’m (mostly?) sane, and I want to promote builds between environments.
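
In rough shell terms, the build stage boils down to something like the following. The registry and service names are placeholders, not a copy of my actual pipeline:

    # Build, tag with the git revision, and push one image per service (names are hypothetical)
    GIT_SHA=$(git rev-parse HEAD)
    ACR=strengthjournal.azurecr.io

    az acr login --name strengthjournal    # authenticate docker against the registry

    for svc in gateway landing journal-api iam-api; do
      docker build -t "$ACR/$svc:$GIT_SHA" "./$svc"
      docker push "$ACR/$svc:$GIT_SHA"
    done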

The deployment step handles both the container infrastructure and the code/image. A Bicep template defines the container environment, and the revision SHA gets passed down as a parameter so that I can target the latest image. The template is the unit that I deploy to the target Azure resource group. Deployment to Test gets triggered automatically on changes to main. A manual approval is needed to deploy to the blue or green production environment.
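
As a rough illustration of how the image tag flows into the template, here’s a trimmed-down Bicep sketch for one of the container apps. The resource names, port, and API version are my own placeholders rather than excerpts from the repo:

    // Sketch of one container app; names and API version are illustrative only
    param imageTag string
    param registry string = 'strengthjournal.azurecr.io'

    resource env 'Microsoft.App/managedEnvironments@2022-10-01' existing = {
      name: 'sj-env'
    }

    resource journalApi 'Microsoft.App/containerApps@2022-10-01' = {
      name: 'sj-journal-api'
      location: resourceGroup().location
      properties: {
        managedEnvironmentId: env.id
        configuration: {
          ingress: {
            external: false    // only the nginx gateway gets a public ingress
            targetPort: 8080
          }
        }
        template: {
          containers: [
            {
              name: 'journal-api'
              image: '${registry}/journal-api:${imageTag}'
            }
          ]
        }
      }
    }

The pipeline then runs az deployment group create against the target resource group, passing the revision SHA in as the imageTag parameter.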

Challenges and Learnings

The early refactoring steps were lengthy, but relatively straightforward. After a bit of toil, I was able to spin up a local development environment in the new architecture without any surprises. The fun started when I moved things into the cloud. In Azure, the containers were running in Azure Container Apps instead of Docker Compose. Despite parity between the configurations of the two orchestrators, nginx was returning a 426 (Upgrade Required) error from the public Azure Container Apps ingress. Apparently the Container Apps ingress only supports HTTP/1.1 or HTTP/2, while nginx proxies requests using HTTP/1.0 by default. To mitigate this, I reconfigured nginx to proxy calls using HTTP/1.1 so that it was compatible with the protocol supported by the ingress. A side effect was that the host header from the origin server was appearing in the response, rewriting the URL seen in the browser. I made another configuration change to suppress this header, which seemed to get things working. The distinction between HTTP/1.0 and HTTP/1.1 is still kind of new to me, so I hope I’m not doing anything really dumb here. Let me know if I am.
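
For anyone who hits the same 426, the protocol fix is a one-line nginx directive. The Host line below is one common way to keep the browser-facing host intact; it’s my illustration rather than a verbatim copy of my config:

    location /api/ {
        proxy_pass http://journal-api:8080;

        # Container Apps ingress only speaks HTTP/1.1 and HTTP/2; nginx proxies with HTTP/1.0 by default
        proxy_http_version 1.1;

        # Illustrative: forward the original Host so the upstream host doesn't leak back to the browser
        proxy_set_header Host $host;
    }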

Another challenge came after I transitioned the new environment to the www domain for go-live. Although the application worked fine from the temporary www2 subdomain on which I had smoke-tested the deployment, Angular service calls began failing in my browser after the switchover. I quickly realized that my browser had the old Angular code cached, which called against the old URL structure. I had missed a best practice in both my new and old architecture, which is to suppress caching of the SPA’s index page through the Cache-Control header (Angular has an alternative hash-based mechanism that handles the JS and CSS bundles). I reconfigured my server to add this header, but that wouldn’t help much for anyone who already had the page cached. To bust existing page caches, I added an alternative ‘/dashboard’ route segment to the SPA’s root page and updated references to point to that segment. Since I’d altered the route, browsers would not match it to the cached page.
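
The caching fix itself is small in nginx terms; the path below is an assumption about where the SPA build output lands:

    # Never cache the SPA entry point; the content-hashed JS/CSS bundles remain safely cacheable
    location = /index.html {
        root /usr/share/nginx/html;
        add_header Cache-Control "no-cache";
    }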

Wrapping Up

I haven’t quite settled on where I want to take Strength Journal next. The new architecture bridges nicely into some new features I’d like to add, like video uploads and social networking. But honestly, the biggest area that needs improvement is the UX. The site is a bit of an eyesore, and some of the user flows are quite clunky. I have some sketches drawn up for a new workout editor, and I’m tempted to rebuild the entire UI in PrimeNG. We’ll see where things go.