
The Design Stamina Hypothesis

Nydex

Silicon-based Lifeform
Staff member
Moderator
Donator
Greetings fellas,

Quite a long time ago (nearly two decades), Martin Fowler came up with a cool name for a concept in software engineering that I very much tend to agree with: the Design Stamina Hypothesis. It's labelled a 'hypothesis' precisely because it can't reliably and objectively be proven true, nor can it be proven false.

In general, most developers, myself included, judge this kind of concept by intuition and feel, built from experience working on projects falling on both sides of the hypothesis. Fowler describes it as a comparison of the velocity of progress in a software project with good design versus one with no design at all. This graph, taken directly from his website, depicts the concept quite well:

[Graph: cumulative functionality over time, good design vs. no design, with the design payoff line marked]

It plots two projects - one with good design (red line), and one with no design (blue line) - and showcases the relationship between cumulative functionality and time in development. A project where no time or effort is spent initially on design naturally moves faster in the beginning. More features are being developed in that period of time compared to a project that has spent its first weeks (sometimes months if it's a bigger concept) in the design stage and no code has been written yet.

However, at a certain point (the design payoff line), the velocity at which new (quality) features are added diverges greatly. A project set off with good design in mind can follow that structure in an organized and time-efficient manner, resulting in a much larger number of stable features being developed. A project with no design at all, on the contrary, hits a major obstacle quite early in development: tech debt. That's a concept that deserves its own thread, which I will give it in the future.

As a result, designless projects quickly deteriorate and become harder to modify and maintain, drastically slowing the addition of new features. Without a dedicated portion of effort going into clearing tech debt, the problem compounds and gets worse with time, to the point where it halts development entirely and the whole team's effort has to go into refactoring and rewriting (both of which also deserve their own threads in the future).
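You can mimic Fowler's graph with a toy simulation. To be clear, everything below is an invented illustration, not Fowler's model: I simply assume each new feature in the no-design project costs a bit more effort than the last (accumulating tech debt), while the designed project pays a fixed upfront cost and then keeps its per-feature cost nearly flat.

```python
# Toy simulation of the design stamina hypothesis.
# The cost model is a made-up assumption for illustration only.

def cumulative_features(weeks, design_weeks, base_cost, debt_growth):
    """Return a list of cumulative feature counts, one entry per week.

    design_weeks: upfront weeks spent on design, delivering no features
    base_cost:    effort (in weeks) to build the first feature
    debt_growth:  extra effort added to every subsequent feature (tech debt)
    """
    features = []
    count = 0
    effort_banked = 0.0
    for week in range(weeks):
        if week < design_weeks:
            features.append(0)      # still designing, nothing shipped
            continue
        effort_banked += 1.0        # one week of development effort
        next_cost = base_cost + debt_growth * count
        while effort_banked >= next_cost:
            effort_banked -= next_cost
            count += 1              # feature finished
            next_cost = base_cost + debt_growth * count
        features.append(count)
    return features

# No design: starts immediately, but every feature costs 25% more effort.
no_design = cumulative_features(52, design_weeks=0, base_cost=1.0, debt_growth=0.25)
# Good design: six weeks upfront, then near-constant cost per feature.
good_design = cumulative_features(52, design_weeks=6, base_cost=1.0, debt_growth=0.02)
```

With these made-up numbers, the no-design project leads for the first few weeks, but the designed one overtakes it well before the year is out; that crossover is exactly what the design payoff line marks.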

To put the idea in the form of a metaphor, imagine you set out to sea with two ships: one having a rudder constructed and fitted in port, and one without, which can head out immediately. The rudderless ship might be carried by the wind for a while and make progress while the other ship is under construction, but sooner rather than later the need for a rudder becomes apparent; otherwise you crash, or head in a direction you have no business going. Then the crew scrambles to add a rudder on the go, only to find that the current structure of the ship doesn't allow adding a rudder without first modifying the hull (i.e. breaking existing features to fix issues in your code).

Meanwhile, the ship that set out to sea with a rudder already in place is steadily, methodically, and effectively making progress and soon overtakes the drifting ship that's undergoing major fixes out in the middle of nowhere. Its crew is not bothered by the lack of a rudder and can work on perfecting their control of the ship and adding more nice features like a brand new navigation system.


That's just a simple example of why, in my opinion, it's definitely worth delaying the actual coding and focusing on design first. Sure, the design might need to change and evolve as time goes on, but that's certainly better than having no design at all and dealing with disjointed pieces of software that don't work well together because they never followed a unified, 30,000ft view of the project's purpose.

Naturally, this needs to be a judgement call and varies from project to project. As Martin Fowler says on his website:
The hypothesis has a corollary, which comes from the design payoff line. If the functionality for your initial release is below the design payoff line, then it may be worth trading off design quality for speed; but if it's above the line then the trade-off is illusory. When your delivery is above the design payoff line neglecting design always makes you ship later. In technical debt terms it's like taking out a loan but not using the principal for so long that by the time you use it you've paid out more in interest payments.

The way I see it, for smaller projects where you work under exceptionally tight deadlines and barely have time to develop the features themselves (e.g. customer-facing POCs, university projects), it may be worth skipping an initial design phase. That, however, should be done only if you're confident you have enough quality talent on your team, and they know how to work together. There's danger even in that scenario, but sometimes, as we know, the industry pushes us and we have no choice.

What are your thoughts on this? When do you prioritize design and when do you tend to skip it? What are your ways of dealing with accumulating tech debt in projects that have no overarching design? Do you have any interesting stories that fit well into this conceptual framework?

Code on, fellas.

With love,
Nydex <3
 
Nice hypothesis. I think I've seen this in the field I worked and did research in. When you look at physical products, you see a similar trend: people are really eager to get to a physical product as fast as possible so they can start selling it, but those are often the kinds of products that don't perform well, let alone become successful.

In industry, and in my thinking, good design isn't something that necessarily takes a long time; it's something that happens with strategic speed. Moving fast mainly gets you something nice to show on a PowerPoint slide, or an exciting pitch. Yet it usually leads to the realization that certain things just don't work. By that time, you've already invested so much that you basically can't go back anymore, and that's where the sunk cost fallacy kicks in, of course.

In design research, this diagram is often used to explain the relationship between design freedom and cost over the course of a project. The point of "just enough design" is up for debate, but it shows the paradox really clearly.

[Diagram: design freedom decreasing and cost of change increasing over the course of a project]

A great example of not thinking hard enough at the beginning can be found in The Guardian's article about Juicero. For me, it's a case where the team rushed into creating a product too quickly, and the sunk cost grew to the point where it became impossible to question the logic behind it, ending up as one of the most well-known product failures in recent history.

Thanks for sharing
 
(Note: I don't yet have experience in complex projects, so I'm talking from a mix of theory, experience read from others, and my own experience with relatively simple projects)

I think the tradeoff is related not only to how tight deadlines are, but also to how well the problem is specified and understood. For a poorly understood project, there may be no way to design without exploratory programming. Given my own lack of experience, I'm very often faced with an incorrect understanding of the problem that leads me to a design that turns out to be wrong in important ways as soon as I start programming. From what I know, the same happens regularly to experienced programmers and teams on difficult or unexplored problems. So I think there can be a lot of value in exploratory programming that is done with no intention of using that code. There is of course the very real risk that the code will get used in the end, but that's a matter of discipline.

For very well understood problems, the need for exploratory programming is much lower. For example, compilers are an area that has been extensively researched and explored, so it makes a lot of sense to have a fairly complete design before even starting. Different architectural choices can be made with good knowledge of the impact they will have on other parts of the system. Also, even though compilers are very complex in some ways, in others they are relatively simple: they perform batch work, taking text as input and generating either binary objects or errors as output; they can be modeled as a stateless function. I think it's generally easier to design without exploratory programming when the problem can be modeled as relatively stateless, and much harder when it's highly stateful (such as anything that will be used interactively).
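The "stateless function" view can be sketched in a few lines. The toy "compiler" below is entirely invented for illustration; it just translates lines of `name=value` pairs into a dict, or reports errors. The point is that the whole contract fits in the signature, same input always yields the same output, and there's no hidden state, which is what makes up-front design tractable.

```python
# Sketch of a batch "compiler" modeled as a pure, stateless function:
# text in -> either a compiled result or a list of errors out.
from dataclasses import dataclass

@dataclass
class Ok:
    output: dict       # successfully "compiled" result

@dataclass
class Errors:
    messages: list     # diagnostics, one per bad input line

def compile_config(text: str):
    """Pure function: str -> Ok | Errors. No state survives between calls."""
    output, errors = {}, []
    for lineno, line in enumerate(text.splitlines(), start=1):
        line = line.strip()
        if not line:
            continue                       # skip blank lines
        if "=" not in line:
            errors.append(f"line {lineno}: expected 'name=value'")
            continue
        name, _, value = line.partition("=")
        output[name.strip()] = value.strip()
    return Errors(errors) if errors else Ok(output)
```

Contrast this with something interactive, where the result depends on everything the user has done so far; that accumulated state is much harder to pin down on paper before writing any code.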

I will share a recent experience. I had to complete a project with microcontrollers communicating over radio, without any experience in any of that. I had decided the specifics of the project myself, designed a protocol, designed the abstract interfaces for the code, etc. However, when I started programming, I quickly discovered many (previously) unknown unknowns, and in the end I had to redesign everything. My key error was sticking with the original design for too long, patching it as I learned more: the classic sunk cost fallacy. In the end, despite working frenetically, the deadline caught me without a working system (beyond the very basics). I think I should have done some exploratory programming first, and I would have come up with a much better design. Even though this was a consequence of my inexperience, I think in software there is always a point where everybody is inexperienced. My "inexperience threshold" is much, much lower than that of a seasoned senior programmer, but that doesn't mean the senior programmer doesn't have one.

In summary, I think that prototyping and exploratory programming can give you some of the speed advantages of less design without having to pay the long term price of a lack of design. In the cases where it pays off, it should end up growing faster in the cumulative functionality axis than full upfront design or full exploratory programming.
 
Interesting graph. Does the horizontal axis portray time?

In any case, it does make sense. The more a product grows, the more expensive and difficult it becomes to modify, and that's even if it starts with good design and solid practices. Without them, it's doomed to hit a wall relatively soon in its development.

And regarding Juicero, I don't even know how they raised the funding they did. I didn't follow them at all, but I suppose some very clever marketing was going on for them to raise as much money as they did for an essentially useless machine (one that was also falsely advertised as exerting four tons of pressing force, which is ridiculous to even consider given the size of the thing).

Silicon Valley millionaire investors have been known to put money into some pretty stupid ideas, but this is on another plane of stupid entirely, lol. In this case I wouldn't even blame the lack of design for this absolute flop; the product was doomed from the start because it's essentially a "solution" to a non-existent problem.
 

You may say you don't have much experience, but you sound very rational about it all, so you might be underestimating yourself. Those are all very fair points. That's why agile has the terms "spike" and "PoC": a spike is the process of investigating an issue and its potential solutions, and a PoC (proof of concept) is a small demo project that implements the chosen solution to figure out whether it can scale to the needs and architecture of the desired end product.

Both are valuable tools when approaching a complex project. But one thing you homed in on is the value of having experienced people on the team, preferably people that have worked together before and are used to the inner dynamics.

Prototyping can definitely give you an advantage, as long as you don't spend too long on it, because sunk cost fallacy will kick in there as well.

It's a lovely balance to learn to maintain. Some companies never manage to do it. Others get luckier with teams that work well together. And others yet learn through a series of mistakes.

There is no one right way, it's always a matter of choosing what works best for your particular case.

Thanks for sharing your experience, really valuable stuff. <3
 

