Public Learning

project-centric learning for becoming a software engineer

Week 11: A Recap

Crush, Kill, Deploy

I was almost ecstatic in last week’s recap about finally having a working (if ultra-barebones) version. But what good is it to have it running on localhost? None. So the first task this week was to actually deploy 200 OK. That was really a single bullet point on my todo list for last Monday: register domain and deploy. I wish it had been that easy.

Well, registering the domain was easy; I went with the most fitting TLD still available. Creating the necessary DNS records to point at my virtual server was a quick task. Everything afterwards was new to me, so I had a lot of reading to do. The appeal of relying entirely on Docker for deployment certainly became clear to me: instead of manually configuring my whole server environment, I would describe each of my services once (two Node/Express apps, a MongoDB database and the nginx reverse proxy) and run them anywhere. But while this sounds very straightforward, I started realizing that Docker is not as trivial as it might sometimes appear. Long story short: I ended my Docker experimentation relatively soon to focus on a simpler manual deployment. That meant installing and configuring all the services mentioned above directly in the shell of my cloud server.
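For the curious, the manual route boils down to a handful of installs. This is only a sketch of what such a setup looks like on a Debian/Ubuntu server; the exact package sources and versions are my assumptions, not taken from my actual notes:

```shell
# Hypothetical manual setup on a Debian/Ubuntu cloud server.
sudo apt-get update
sudo apt-get install -y nginx          # reverse proxy and SSL termination

# Node.js via the NodeSource repository (version choice is an assumption):
curl -fsSL https://deb.nodesource.com/setup_12.x | sudo -E bash -
sudo apt-get install -y nodejs         # runs both Express apps
sudo npm install -g pm2                # process manager for the Node apps

# MongoDB comes from its own apt repository; after following the official
# installation guide, run the data store as a daemon:
sudo systemctl enable --now mongod
```

Each of these pieces then needs its own configuration pass, which is exactly the part that Docker would have captured in a single reproducible description.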

There were a few gotchas I didn’t expect, especially around nginx in its role as the SSL termination point (so that it’s the only endpoint that needs an SSL certificate, proxying the internal requests over plain HTTP). Let’s Encrypt is an awesome free service for obtaining SSL/TLS certificates, but requesting a wildcard certificate that covers all possible subdomains threw a few error messages and had me switch to a manual authentication method: to pass the validation challenge from Let’s Encrypt’s certbot client, I had to create TXT records for the domain. This means I still have to find a way to automatically renew my certificates before they expire, something I didn’t get to work without invoking that manual DNS record verification again. Well, I have three months to solve that issue.
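The manual flow I ended up with looks roughly like this (with example.com standing in for the real domain):

```shell
# Request a certificate covering the apex domain and all subdomains via the
# DNS-01 challenge; certbot prints a token per domain that has to be placed
# in a TXT record named _acme-challenge.example.com before continuing.
sudo certbot certonly \
  --manual --preferred-challenges dns \
  -d example.com -d "*.example.com"
```

As I understand it, a plain `certbot renew` will re-trigger that interactive DNS step; automating it requires a certbot DNS plugin for the domain’s DNS provider (if one exists), which creates and removes the TXT records itself. That’s presumably the path to solving my renewal problem.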

Setting up nginx involved some prolonged research into how its configuration files need to be structured. It didn’t help that this was my first time using nginx. Eventually, I managed to configure everything so that nginx reverse-proxies all requests made to the domain itself (and the www subdomain) to the Node application handling the frontend configuration, and all requests made on a subdomain to the other application that serves the actual user-created APIs. And it terminates the SSL connection before that, so I don’t have to deal with certificates and the https server module in Node.js.
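In case it helps anyone attempting the same split, here is a minimal sketch of the two server blocks. The domain, certificate paths, and backend ports (3000/3001) are placeholders I made up, not the real configuration:

```nginx
# Apex domain and www: the frontend/configuration app.
server {
    listen 443 ssl;
    server_name example.com www.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:3000;   # plain HTTP inside the box
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# Any subdomain: the app serving the user-created APIs.
server {
    listen 443 ssl;
    server_name *.example.com;
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;        # the app reads the subdomain from Host
    }
}
```

Both blocks share the one wildcard certificate, which is why SSL termination only has to be solved once at the nginx layer.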

While the MongoDB data store is simply set up as a daemon process, managing the Node processes requires a bit more effort. pm2 is a process manager for Node; it lets me run both applications from a single command, with automatic restarts in case of a crash. I still need to figure out proper logging for that scenario, but as far as manual deploys go, 200 OK does not require too many steps to run an updated master branch. Still, there are a few kinks left to be ironed out and I might have another go at Docker in the future. I also took notes on every step of my setup and deployment process for future reference; it’s quite an involved sequence of manual steps, and I’ll be thankful for those notes if I ever have to do it again from scratch.

Coding In Times Of Corona

Once I had my first deployment online and working, I fixed a few bugs and started on authentication, because each of the next feature implementations requires being logged in (in order to tie APIs and users together). For now I decided to rely on GitHub’s OAuth integration, which spares me from handling account security myself (proper password storage, resets, etc.). I’m using Passport.js again to manage the authentication flow in Node.js, and just like the first time, it took me a while to fully grok how to wire the strategy up with my data store and routes.
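The core of that wiring is the strategy’s verify callback, which is where the GitHub profile gets tied to a local user record. A non-runnable sketch, assuming the `passport` and `passport-github2` packages and a hypothetical Mongoose `User` model (none of the identifiers below are from the actual codebase):

```javascript
const passport = require("passport");
const GitHubStrategy = require("passport-github2").Strategy;
const User = require("./models/user"); // hypothetical Mongoose model

passport.use(new GitHubStrategy(
  {
    clientID: process.env.GITHUB_CLIENT_ID,
    clientSecret: process.env.GITHUB_CLIENT_SECRET,
    callbackURL: "https://example.com/auth/github/callback",
  },
  // Verify callback: find or create the local user for this GitHub account.
  async (accessToken, refreshToken, profile, done) => {
    try {
      const user = await User.findOneAndUpdate(
        { githubId: profile.id },
        { githubId: profile.id, username: profile.username },
        { upsert: true, new: true }
      );
      done(null, user);
    } catch (err) {
      done(err);
    }
  }
));

// Sessions store only the user id; deserialization looks the record up again.
passport.serializeUser((user, done) => done(null, user.id));
passport.deserializeUser((id, done) => User.findById(id, done));
```

The routes are then just two `passport.authenticate("github")` handlers: one that redirects to GitHub, and the callback route that completes the login.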

High on my list of things to do next is implementing Server-sent events to enable live debugging of the request/response cycle for each API. This is the most challenging feature I intend to build, and it’s fun to dive into. It will probably add a Redis instance to my setup to quickly share requests between the two backend applications.

But in the second half of last week I got distracted a bit by the rapid developments regarding the global pandemic that is now affecting life in Germany. My girlfriend lamented the sudden closing of all public schools and the distinct lack of digital tools for continuing basic education during the students’ absence. It seems like the decision makers still uphold strong privacy regulations, essentially preventing teachers from even using file sharing services like Dropbox or Google Drive.
In an attempt to solve that problem, I spent my weekend designing and prototyping a small file sharing platform for my girlfriend’s school, figuring that I would be able to deploy a working platform in a couple of days. Alas, in the end the school decided to rely on good old confusing e-mail to distribute homework and other material. It’s a shame; building such a platform would have been pretty easy after all my work on 200 OK, and I would have loved to help out with the (so far) glacial pace of the education system’s digital transformation. But not even Corona will accelerate that.



Time spent this week: 32.5 hours (plus ~15 hours of writing parts of a file sharing platform. It was still fun, even though it will never see the light of day)