Ten Years of ISS

This post was written for the Lobsters Blog Carnival, the topic for which was "What have you made for yourself?". I couldn't think of a better topic than my decade-long labor of love, a piece of forum software called ISS. I made it for a small community I was/am a part of, so I guess I made it "for us", but it's one of the only substantial pieces of software I've written in which I had the first and last say on design and implementation, and it's the one I feel best represents my own sensibilities.

When I started working on ISS I had gone through a period of surveying other forum software that existed at the time. The community I had inherited had used vBulletin 3 for many years and, despite its flaws, it had been very reliable in that time (at least so far as I could tell) and it served its purpose well. By 2015 it had reached end of life and you couldn't buy a license for it anymore. I couldn't find anything, free or paid, that really hit all the points I was looking for (which was shocking to me, given web forums were already becoming old hat by this point), so I decided to make my own thing. After a weekend hacking together a proof of concept and proving I could bridge the auth system from what we had been using, I wrote down a list of design principles that I wanted to guide my new project.

A decade later I'm amazed to find these decisions, made by my early-20s self, still sound like good advice to me, and more importantly have proven useful touchstones in developing a piece of software over a long span of time. I know that my aesthetics and values aren't everyone's, and given that this piece of software has but a few active deployments after all this time, it may not look like success by the standards of the software industry today. But consider that I built this thing to serve my own needs first, and that it's something that could be developed, operated, and deeply understood by a single person over a long span of time. How many commercial systems can that be said of?

I figured it would be fun to look back at those design principles I set out, and reflect on how they guided development, what benefits I reaped from them, and what they cost. So without further ado, here's what my past self thought was important in forum software:

JavaScript should be optional

So the practical consideration here was that a non-trivial portion of the community I was seeking to give a home to was very security conscious; some have to use technologies like Tor Browser because of where they live, so anything that required client-side scripting to operate implied asking users to compromise their security posture to be part of the community. Some ancillary benefits are that obscure browsers are easy to support (Lynx has a nice experience without any special considerations), and scraping/scripting is trivial.

I won't lie, living up to this has been a real pain in the ass at times. Not because a purely no-JS solution is impossible, but because creating interactions that work without JS yet have enhanced variants when JS is available is kind of a lost art. There are a lot of places in the UI where a button is really a small form that works without JS, but when JS is available the form's behavior is overridden with something that saves a full page refresh.

I think one of the most appealing things about React-style frameworks is that all production of DOM is done in the same style; even if you're doing SSR, your "templating" is all in JavaScript. A real awkwardness creeps in if you're doing older-style server-rendered content and then need to modify it on the client. In simple cases it works fine, but as you move to more complex UIs you quickly realize that even something as simple as "format a timestamp into a human-readable string" requires two different implementations (plus user time zone handling) that have to be very carefully kept in sync.

I ended up cutting this the opposite way: when UI elements need to be updated without a full page refresh, the client calls out to the server, which renders fresh HTML that is then swapped into the DOM. With a little glue code this actually works quite nicely, and for what it's worth this is how people thought about it back when "AJAX" was still a fresh term. Admittedly, this strategy gets hairy if a user interaction requires updating multiple non-contiguous areas of the DOM, but with pretty minimal design concessions I've found it scales well.
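To make that concrete, here's a rough sketch of the server side of the pattern in Django (which ISS is built on). To be clear, this is my illustration rather than ISS's actual code: the Thread model, the template path, and the header check are all stand-ins.

    from django.shortcuts import get_object_or_404, redirect, render

    from forum.models import Thread  # hypothetical model, for illustration

    def subscribe_to_thread(request, thread_id):
        thread = get_object_or_404(Thread, pk=thread_id)
        thread.subscribers.add(request.user)

        if request.headers.get("X-Requested-With") == "XMLHttpRequest":
            # Enhanced path: return just the re-rendered fragment and let
            # the client-side glue swap it into the DOM in place.
            return render(request, "fragments/subscribe_button.html",
                          {"thread": thread})

        # No-JS path: the <form> was submitted normally, so do a classic
        # POST/redirect/GET back to the full thread page.
        return redirect("thread", thread_id=thread.pk)

The client-side glue is a single generic listener that intercepts submits on forms marked for enhancement, re-POSTs them with fetch (setting that header), and replaces the target element with the returned HTML. One listener serves every such button on the site.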

One really nice side effect here is that all the access control considerations that happen at API boundaries just kind of fade away. I don't have to worry about "who can see this user's email address" at an API; rendering a chunk of HTML for client-side insertion can have access to raw data straight out of the database, and as long as a field doesn't end up in the rendered HTML, it's never exposed to the client. Template logic can consult data that the client never sees at any point. If you've worked in an API-plus-thick-client world for a long time you might be so used to thinking about this that you don't even notice you're doing it anymore, but once you don't have to, it's a really pleasant paradigm to build in.
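A contrived little example of what I mean (the Poster model and its email_verified field are hypothetical here, not lifted from ISS):

    from django.template import Context, Template

    BADGE = Template(
        '<span class="author{% if poster.email_verified %} verified{% endif %}">'
        "{{ poster.username }}</span>"
    )

    def author_badge(poster):
        # The whole model instance, email address and all, is in scope for
        # the template's logic, but only the username and a CSS class ever
        # make it into the output, so that's all the client can see.
        return BADGE.render(Context({"poster": poster}))

In an API-first design, that same email_verified check would force decisions about whether the field belongs in a serializer, who is allowed to read it, and so on.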

One thing I have to own up to here is that for many years I used reCAPTCHA during registration for spam control. It was always a flag you could turn on or off, but I had to turn it on in my own instance, and this meant users would, at least once, have to turn on JavaScript. Eventually I did write my own primitive captcha system that worked without JavaScript, and so this promise was truly fulfilled, but I'm a little ashamed it took me as long as it did to live up to my own standards here.
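The shape of a no-JS captcha is simple enough to be worth sketching. This is a toy version of the idea, not what ISS actually ships (a real one wants a challenge harder than arithmetic), but it shows the mechanism needs no client-side scripting at all:

    import random

    from django import forms

    def make_challenge(session):
        # Generate a question and stash the expected answer server-side.
        a, b = random.randint(2, 9), random.randint(2, 9)
        session["captcha_answer"] = a + b
        return "What is %d plus %d?" % (a, b)

    class RegistrationForm(forms.Form):
        username = forms.CharField(max_length=32)
        captcha = forms.IntegerField(label="Answer the question above")

        def __init__(self, *args, session=None, **kwargs):
            super().__init__(*args, **kwargs)
            self._session = session

        def clean_captcha(self):
            answer = self.cleaned_data["captcha"]
            if answer != self._session.get("captcha_answer"):
                raise forms.ValidationError("Wrong answer, try again.")
            return answer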

UI shouldn't second-guess user agents

The idea here is more "don't get cute" than anything. If your link can't be middle-clicked to open in a new tab, it's broken. Web design has been in a nearly 20-year love affair with those goofy iOS-style toggle controls, but they're just wasteful checkboxes.

My guiding light here is that if I can't articulate a tangible way in which a more elaborate UI implementation makes a user's life better than the browser-native way (and "it looks nice" doesn't count), then I don't do it. I'd also like user-supplied stylesheets to generally be honored, as this is an important accessibility consideration I think we often forget about in the pursuit of making things look nice.

Honestly it's kind of bewildering that this isn't common sense in the industry: you get free accessibility (and having done a11y work at a major company, I can testify to the immense cost of making "rich" UIs accessible, a cost craigslist doesn't even have to think about), you save piles of developer time, and usually you end up with a better user experience anyway.

Should be at least somewhat usable on a potato using an EDGE connection

I want people using low-end devices on slow and unreliable connections to be able to use my thing. There is absolutely nothing about a web forum that should require a fast internet connection or a powerful device, and yet these seem to be soft requirements all too often on the web today. And while it's almost a cliché to mention these days, it's completely accurate that building for the low end helps everyone; even fancy Americans who buy a new iPhone every year will appreciate lower bandwidth consumption, lower power usage, and faster load times.

So what's involved in building for the low end? Given JS usage is going to be minimal to zero, most of the wins here are going to be on the network front: reducing the number of requests that have to be made, and the amount of data that needs to be transferred. There are some interesting tradeoffs to be made here, because optimizing some things will come at the expense of others.

Making no-JS a design priority is very liberating here. ISS loads a grand total of 32kB of gzipped JS for users who have JS turned on (which is quite low by modern standards, where multiple megabytes of JS is not unusual), but more importantly, if you build so that the JS enhances the experience without being strictly necessary, then that JS payload stops being in the critical path at all. You can defer it, get a usable UI on the screen fast, and let the script arrive later for enhancement (it'll still load quite snappily once cached). Contrast this with modern React SSR strategies that bend over backwards, with immense complexity, to get a static non-interactive state on the screen "quickly", and then wait on a JS payload before any user interaction can be processed. This isn't to take away from some of the genuinely impressive engineering involved in those systems, but if you're willing to embrace a low-JS diet you'll get more for less when it comes to load times.

Speaking of the critical path, loading the front page of an ISS instance involves exactly two requests to get a visually appealing, functional page: ~6kB of HTML and 22kB of CSS. Now that CSS file, that's pretty heavy, right? It's true that almost four times as much styling information as semantic content is a severe ratio, but this is an intentional tradeoff: that (admittedly relatively heavy) CSS file is 100% of the styling for the entire site, it's wholly static, and it can be cached indefinitely, so it only strictly needs to be requested by the client once. Since it's referenced from the HTML and its URL contains a hash of its content, there isn't even a need to revalidate it during navigation. The cost is that the first pageload of the site carries that extra data, and the first load is often the most important one, but given the nature of forums I figured this was a worthwhile exchange.
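Django will do the content-hashing part for you; something like the following is all it takes (I'm asserting the general mechanism here, not that ISS's settings look exactly like this; on Django 4.2+ the same storage is configured through the newer STORAGES setting instead):

    # settings.py: collectstatic rewrites e.g. main.css to
    # main.<content-hash>.css, and {% static "main.css" %} in templates
    # resolves to the hashed name automatically.
    STATICFILES_STORAGE = (
        "django.contrib.staticfiles.storage.ManifestStaticFilesStorage"
    )

    # Because the hash is in the URL, the web server can then send
    #   Cache-Control: public, max-age=31536000, immutable
    # and the browser never has to ask about that file again.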

This is one place where the mainstream has taken the opposite tradeoff. Modern bundlers usually aim to split the JS required for a page into 6-8 roughly equally sized files so the resources can be loaded in parallel. This has tangible benefits when you have a massive JS payload to deliver, but it creates a brutal experience on an unreliable connection: when requests fail stochastically, every additional request is another chance at a non-functioning page. If each request independently fails 5% of the time, a page needing eight resources fails about 1 - 0.95^8 ≈ 34% of the time, versus roughly 10% for two requests. So admittedly this is one case where you have to choose between prioritizing low-end and high-end users, but if you have a light JS payload (especially one that's not in the critical path) then the penalty for high-end users is negligible, and it makes a world of difference on the low end.

Performance and correctness over feature richness

Honestly this is probably just a sneaky way of saying "programmer reserves the right to change requirements on a whim". Working as a professional developer against a set of requirements you didn't write, you quickly find that trivial requirements delivering little or no value to the user often account for the majority of the work, complexity, and bugs in an implementation. If you have the privilege of working on something where you can let the implementation shape the requirements instead (within the bounds of still serving project goals, of course), you'll find it much easier to maintain a simple, correct design. Occasionally you'll also find that a clean implementation leads you to an objectively better UX than if you had sat down and defined behavior ahead of time.

Responsive pages

This originally came with the addendum "1-col for lyfe!". It's probably more obvious today, but years ago the idea of building the "desktop version" and the "mobile version" (and god forbid a "tablet version") of your UI still had sway; in fact I think we still haven't entirely gotten away from it. In ISS, users with small screens and touch screens get a first-class experience served off the exact same DOM structure as conventional desktop users; CSS media queries turned out to be all that was necessary.

This turned out to be a pretty easy policy to hold to, but I suspect only because ISS was never built against a preordained UI design. In my experience of commercial work, most desktop/mobile awkwardness is the product of a designer (or worse, two designers) sitting down with two different files and building the same UI two separate times, and then a programmer trying to serve those two masters. This is another place where allowing form to follow function, and implementation to inform design, gets you a lot of mileage. That isn't to say you never have to make concessions for small screens; there are absolutely different design considerations for touch interfaces. But in general my policy was to start with a desktop implementation, scale it down, and see what works and what doesn't. Sometimes the solution that was natural on mobile ended up working just as well or better on desktop; sometimes it was the other way around. Leaving yourself room to call an audible on the design case by case was really liberating, and it opened my eyes to how easy responsive design is when you have that flexibility versus attempting pixel-perfect replications of designer output.

Should be able to withstand the spamocalypse

This was the latecomer to the list, something I added about a year in. ISS uses Django. I could write at some length about the virtues of Django, but I'll restrain myself. One thing that's really amazing about Django is the automatic admin interface, generated from its ORM. It's not pretty, but it saves a really impressive amount of busy work, and it was the sole moderation tool in ISS for some time. After a while running a real-life community on ISS, though, it became obvious that the automatic admin interface wasn't going to scale gracefully to the sort of mass deletions and user management an open forum requires. I don't think there's much of general interest to say here; a lot of it involves the specifics of forum software and the economics of spam advertising campaigns. Anyone who's dealt with toxic traffic like DDoS attacks or AI crawlers will get the gist: you don't need something bulletproof, you just need to burn enough of the toxic party's resources while minimizing inconvenience to legitimate users.

I guess the more general principle here is: invest in management tooling. This has absolutely held true in every software project I've participated in. I don't know why, but the value of operational tooling seems to be very consistently underestimated; projects like "make a one-click button for that user management chore you do every couple of days" tend to pay for themselves very quickly and remain useful for years. I suspect it's because this kind of thing doesn't obviously serve the goals of most projects, but especially when you're both the developer and the operator of a thing, you realize a few time-savers like this free up enough hours to build substantial new user-facing features. So even if operational efficiency isn't a priority, the time you save can be applied to something you care more about.
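In Django terms, a lot of this tooling can be as cheap as a custom admin action. Here's a sketch of the kind of thing I mean, with a hypothetical Poster/Post pair of models standing in for a real forum's (the decorator needs Django 3.2+):

    from django.contrib import admin

    from forum.models import Post, Poster  # hypothetical models

    @admin.action(description="Ban selected posters and hide their posts")
    def ban_and_scrub(modeladmin, request, queryset):
        # Two bulk UPDATEs handle what would otherwise be an afternoon of
        # clicking through the stock change forms one record at a time.
        queryset.update(is_banned=True)
        Post.objects.filter(author__in=queryset).update(hidden=True)

    @admin.register(Poster)
    class PosterAdmin(admin.ModelAdmin):
        list_display = ("username", "is_banned", "date_joined")
        actions = [ban_and_scrub]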

Development using notepad is highly encouraged

This is mostly an inside joke. Legend has it that on one of the communities that preceded the one I now host, the forum software was so shitty it could only have been written in the venerable MS Notepad. Or maybe our admin at the time publicly advertised as much; I'm not sure, it's in the realm of myth at this point, but it's satisfying to pay homage to the past. And for what it's worth, although I'm a lifelong vim user, I did fire up the ole gaming PC and write several of the early commits to the project in good ole notepad.exe.

Maybe a bit of an asspull, but there is something to be said for building without heavy-duty tooling like LSP, automatic refactoring, etc. If you use simpler tools, it imposes some limits on the level of complexity you can tolerate, and complexity usually works against reliability. Honestly, though, this one is primarily a meme.

Conclusion

What should you take away from this? I honestly don't know. I do think these are worthy principles that have stood up for an impressively (and surprisingly!) long time, at least by the standards of web tech. But I'm not naïve enough to think that, even if this were a silver bullet, there'd be a snowball's chance of running most commercial software projects this way. And honestly, "I don't know anyone who turns off JS, screw those weirdos and save a week of dev time" is probably the better stance for most businesses to take. Maybe the meta point is that taking the time to sit down and think through your technical goals before jumping into a project can be really valuable. Some might look at it as over-planning, but the point isn't really to plan the implementation; it's to lay out your goals and constraints. There's still plenty of room to maneuver technically, but you want to know, in general terms, what properties a successful implementation has before you start trying to build one. If your response to an engineering challenge is to change your success criteria, then you're not really doing engineering, you're just settling for whatever's expedient.