Designing a system’s architecture is recognized as one of the most important parts of software development—hence, it’s usually done by the most experienced people on the team, such as architects and senior developers.
The architecture designer needs to address several crucial questions: What components will the system have, how will they be split, and how will they communicate with each other? What storage or web servers will be used? Will more supporting software, such as caches or queues, be needed?
Let’s take a look at how traditional design was done, what problems it had, and how solving those problems shaped what eventually became the modern design process.
The Design Process
The process has changed a lot in the past few decades, mainly because the requirements for software have increased, the minimum acceptable quality has risen, and the scale at which software operates is now different than it was 25 years ago.
Currently, systems can have tens of millions of active users, whereas a few decades ago, most software was written to be used by one user at a time, and there was no online connectivity.
Remember that the design should be the simplest possible version that meets the required standards. All the complexity in today’s software—such as having distributed software with databases consisting of hundreds of physical machines, all of which talk to each other—is done for one simple reason: The market demands it.
Adding things that aren’t required to meet the acceptance criteria is the biggest problem in software design.
Take a simple flower delivery service in a small town. Realistically, it is impossible to have millions of active weekly users—we are, after all, operating only in a small town with a few thousand people. It may be tempting to design the flower shop with a distributed database or add a caching layer, but it is overkill and likely out of the budget.
Now consider a flower delivery business with 150 physical shops across the state or country. That is 150 cities, some of them small, some of them big. The company is likely to want a centralized system (as long as all the shops operate under the same jurisdiction).
The audience may consist of a few million potential customers. For such a user base, it would make sense to add DDoS protection, caching layers, and a few servers behind a load balancer.
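To make the caching-layer idea concrete, here is a minimal sketch of an in-memory cache with per-entry expiry, the kind of component that might sit between web servers and a database. All names here are illustrative, not from the original article.

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry (illustrative sketch)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # lazily evict stale entries on read
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

# Example: cache a shop's catalog so repeated page views skip the database.
cache = TTLCache(ttl_seconds=300)
if cache.get("catalog:roses") is None:
    cache.set("catalog:roses", ["red", "white"])  # stand-in for a DB query
```

In a real deployment this role is usually played by a dedicated service such as Redis or Memcached; the point is only that the cache is a separate layer the design has to justify.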
There are other factors that will dictate the design, such as the expected service time. The one thing they have in common is this: the factors that actually shape the design correlate directly with the business.
So why do people sometimes introduce unnecessary complexity in architecture design? For one thing, every developer likes to use the latest trendy language, tool, or library. Another thing is that software can be sold as something it is not, as was the case with MongoDB. Lastly, we are human and we make mistakes. Regardless of how much experience you have, while designing a system, your priority should be meeting all the requirements with the least complexity.
In the past, design was reasonably simple, partially due to a small user base for most software, and partially due to a lack of many options. This did not make the job trivial, though.
Updates, on average, were significantly harder to make than they are today, so mistakes used to cost more. Furthermore, most mistakes were discovered very late in the development—development, not design—process due to the nature of how work was done back then.
The waterfall methodology was the king in those days. The essential thing about it is that a phase (like planning, analysis, design, development, quality assurance, etc.) must be 100 percent completed before the next one can begin.
However, this logic has since been proven flawed. The main issue with the waterfall methodology is its excruciatingly slow feedback. If you introduced a flaw in your design, you would usually discover it only when 10 to 15 percent of the allocated time was left. Even worse, the flaw would already have been implemented, so the time of both architects and developers would have been wasted.
Since the design process was very formal, many artifacts in the form of Unified Modeling Language diagrams would usually be produced explaining all of the entities, subsystems, and interfaces in a system.
In time, people noticed that shortening the feedback loop reduced total development time, which in turn reduced the cost of building software. So, quite apart from waterfall's other major issues, approaches that favor short feedback loops came to be preferred.
Nowadays, agile methodologies and process frameworks like scrum are preferred. Although proven to be no silver bullet, as there are plenty of issues to deal with, agile is an all-around improvement on the waterfall model. So how does this change the game for software design?
Software requirements are, unfortunately, never set in stone. They constantly change. The truth is that no one knows the full requirements of a system until it starts functioning. Having short feedback loops helps catch more of these mistakes early on and adapt the design and architecture to accommodate them better.
In a way, the first sprints are tracer bullets, a bare-bones implementation that will hopefully catch the major flaws in a design. It is highly unlikely that all mistakes will be caught, however.
This means that the design should be flexible enough to allow changes. The cost of flexibility is added complexity, and as mentioned above, the goal of a design is to meet all requirements with the least amount of complexity.
We are at a crossroads, and it usually isn’t clear which way to go. Knowing when to choose flexibility over simplicity is a judgment developed with experience; there is no rule that can be followed every time.
Besides the approach, the actual design has also changed over the decades. In the past, all of the development was done at once without much feedback while building it, and as a result, the software was usually built in one big piece. This architecture is called monolithic.
Nowadays, due to agile, everything is done in small chunks timewise—i.e., you have sprints, which are periods of one to six weeks in which you work on a specific thing, receive feedback, and when the feedback is applied, work on the next part begins.
This naturally leads to having the software broken down into small chunks, each of which does a specific thing. This is called a microservice-oriented approach.
The “marketing” for this approach will list a lot of benefits for developing software this way. However, the reason it is popular is that since the software is developed one chunk at a time (from one to six weeks), using an architecture that consists of a lot of relatively small, independent components feels natural.
In reality, managing and monitoring a lot of small services tends to be more cumbersome than a single application. This is not to say that microservice-oriented design is bad, but it is far from the silver bullet it is made out to be by a lot of people.
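The trade-off between the two styles can be sketched in a few lines. Below, the same hypothetical order flow is written first as direct function calls inside one process (the monolith) and then as separate components that only exchange serialized messages (simulating the network boundary between microservices). The names and flow are my own illustration, not from the article.

```python
import json

# Monolith: every step is a direct function call in one process.
def monolith_place_order(order):
    validated = {"items": order["items"], "ok": True}    # inventory check
    charged = {"total": sum(order["items"].values())}    # payment
    return {"validated": validated, "charged": charged}

# Microservice split: each step is its own deployable unit. Here the
# network boundary is simulated by passing JSON messages between them.
def inventory_service(message):
    order = json.loads(message)
    return json.dumps({"items": order["items"], "ok": True})

def payment_service(message):
    order = json.loads(message)
    return json.dumps({"total": sum(order["items"].values())})

def place_order_via_services(order):
    msg = json.dumps(order)
    validated = json.loads(inventory_service(msg))
    charged = json.loads(payment_service(msg))
    return {"validated": validated, "charged": charged}

order = {"items": {"roses": 30, "tulips": 15}}
assert monolith_place_order(order) == place_order_via_services(order)
```

The behavior is identical, but the second version adds serialization, deployment, and monitoring overhead for each component; that overhead is exactly what the split buys you independence with, and what makes it cumbersome at small scale.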
Trends in Design
Although the mobile platform and the web are two very different worlds, the trends in their design are somewhat similar. After the dot-com boom, Perl and PHP were used to build websites. There were no standards, no frameworks, no tools. It was way too early to talk about any sort of ecosystem. So, what happened next?
Everyone started building custom frameworks for everything. Needless to say, most of those frameworks were crap, and 99 percent of them were used exclusively by the people who created them. And yet some of them paved the way for the next generation of frameworks and the eventual ecosystems of technologies we have today.
Open source and collaboration bloomed, so much so that contributing became a staple way of showing how engaged you were. Nowadays, a lot of companies expect it.
Today, there are tons of different architectures, and depending on the specifics of the software, any architecture can work. One trend that I see, though, is using a lot of services instead of hosting your own. For example, nowadays you have services like a relational database as a service, emails, queues, key-value stores, caches, and so on.
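One design consequence of leaning on hosted services is worth making concrete: if the service sits behind a small interface of your own, you can swap providers (or move to self-hosting) without touching business logic. The sketch below assumes a hypothetical key-value store interface; the class and function names are illustrative only.

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """Narrow interface the rest of the system depends on."""

    @abstractmethod
    def get(self, key): ...

    @abstractmethod
    def set(self, key, value): ...

class InMemoryStore(KeyValueStore):
    """Local stand-in; a hosted service's client would implement
    the same two methods, so callers never know the difference."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

def remember_customer(store: KeyValueStore, customer_id, name):
    # Business logic depends only on the interface, not the provider.
    store.set(f"customer:{customer_id}", name)

store = InMemoryStore()
remember_customer(store, 42, "Alice")
```

The extra indirection is a small, deliberate dose of the flexibility-versus-simplicity trade-off discussed earlier: one thin interface in exchange for not being locked to a single provider.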
The many pluses and minuses of using services instead of hosting your own solutions is beyond the scope of this article, but it will likely be the topic of a future post.
At the beginning of the year, I was consulting for an educational app to be used in seminars for preschoolers. I presented several designs, and the one that was chosen used the most services.
In this particular case, the project's expected lifespan was two or three years. We chose services with strict service-level agreements covering data recovery, backups, and uptime. Over that lifespan, the services would cost several thousand dollars, which was cheaper and faster than building our own.
Relying on services would cost more in the long run, and more still if requirements changed, but since we only needed to roll out version 1.0 and call it a day, it seemed like (and turned out to be) a good way to go.
Design Is About Finding the Balance
The goal of design is to meet all business requirements with the least amount of complexity. Although the design process might change and evolve, its end goal remains the same.
Mistakes in the design phase are inevitable. Therefore, the feedback loop should be as short as possible in order to fix design and architecture issues without wasting the time of the rest of the team.
Changes in a design are inevitable, not only due to mistakes made by the architect, but because people aren’t exactly sure what they need. And so, with that in mind, a balance must be maintained between low complexity and flexibility.