My long and rocky road to TDD
The internet is full of odes to TDD, each competing to list more benefits and offering basic examples, like how to test the Sum function of a Calc class.
Despite all of that, the entry barrier for TDD is very high, at least in strongly typed languages like C#, where the compiler and good code-completion tools can provide a certain level of illusory confidence that the software works.
I switched to TDD about six years ago. Here are the biggest struggles I had when I started, struggles that stayed with me for quite a long time.
The upside down #
One of the most advertised perks of TDD is the promise of "better interfaces", achieved through the "first you use, then implement" approach. Well, at least for me, it didn't work out as easily as it sounds.
The way I used to work before TDD was to start each feature with deep and detailed analysis. I'd try to compile a "big picture" in my head, typically with the help of pen and paper. I'd plan which components I'd reuse and which ones I'd need to create.
In the beginning, I didn't see how that contradicted "first you use, then implement", so I'd start each new feature with the "big picture" approach, which would typically play out in one of two scenarios:
- I'd start working on lower-level components (like a repository), responsibly writing tests first as the approach requires (pretending I was using them). When the lower level was done, I'd go one level higher and do the same, until I reached the top.
- I'd start implementing from the top, but still following the model I'd planned while imagining the "big picture".
Neither worked! The promised effect never came; it was awkward, clumsy, and slow. It cost me lots of time and patience, polishing and rewriting the tests again and again until I reached some reasonable level of simplicity and more or less clear semantics, and even then, in certain cases, changing anything was not an easy task.
Things changed a lot when I tried giving up the "big picture" approach and going the "dumb" way.
By that I mean starting at the entry point of a use case and not stepping out of it, defining dependency interfaces exactly the way I need them here and now, to make my test easy first. Only after the test is done do I implement the dependencies, or replace exact matches with existing components.
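Here's a minimal sketch of that style in C# (all names are hypothetical; the test uses xUnit). The point is that `IOrderRepository` is declared right where the use case needs it, shaped by the test, not lifted from a pre-planned data layer:

```csharp
using Xunit;

// The interface is declared here, at the point of use,
// with only what this use case needs right now.
public interface IOrderRepository
{
    Order? FindById(int id);
}

public record Order(int Id, bool IsPaid);

public class CancelOrderHandler
{
    private readonly IOrderRepository _orders;

    public CancelOrderHandler(IOrderRepository orders) => _orders = orders;

    // Only existing, unpaid orders may be cancelled.
    public bool Cancel(int orderId) =>
        _orders.FindById(orderId) is { IsPaid: false };
}

public class CancelOrderHandlerTests
{
    // A hand-rolled stub: no need to plan a real repository yet.
    private class PaidOrderStub : IOrderRepository
    {
        public Order? FindById(int id) => new Order(id, IsPaid: true);
    }

    [Fact]
    public void Cancel_refuses_paid_orders()
    {
        var handler = new CancelOrderHandler(new PaidOrderStub());

        Assert.False(handler.Cancel(42));
    }
}
```

Whether the stub later grows into a real repository, or gets replaced by an existing component that happens to match, is a decision deferred until after the test is green.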
Obviously, the "big picture" part is still necessary, but it should take place only after the feature is implemented in the simplest way possible! Only then is it time to revise the components and refactor the code: extract duplicated code into separate components, apply optimizations, and so on.
So the proper name for the approach I called "dumb" would be "iterative", or simply "agile". This approach embraces the natural evolution of code and leads to the "better interfaces" TDD promises, along with some other practical perks:
- As there is no need to change existing code, features get implemented faster and more safely.
- Design solutions that emerge after the fact are better in that they emphasize use cases and their specifics, whereas up-front design emphasizes the common parts and hides the specifics.
- Since the revision/refactoring can be done separately from the KISS version, the effort can be measured separately as well.
That’s not cool #
Technically the approach described above killed all the fun, or at least that’s how I felt at the beginning.
Before switching to the "iterative" process (the one mentioned above), which was kinda forced on me by TDD, one of the coolest parts of development was planning ahead: having lots of discussions with the dev team and the business about long and short term plans, especially on new projects (so-called greenfields), putting together all the knowledge, all the speculations, all the previous experience to design the perfect foundation… this time.
That felt like an extremely important and valuable thing to do. The only thing that might have felt even better was occasionally stating "ah, that's already done!", "that will work out of the box", or "that won't take much time, we thought it through back then (maybe even years ago)" while discussing new features, and obviously getting credit for that.
Well, all of that was gone! No thinking ahead, no perfect solutions, no more feeling of self-importance: not cool at all. What's more, sometimes it felt extremely "hacky", like knowing that I could do more, care more, but didn't… It felt very unprofessional.
On top of that, as planning ahead was gone, my very deep language knowledge (C# at that moment) had become almost useless, since there was nowhere left to apply it.
After a while, the pragmatism started paying off:
- Refactoring test-driven code is way "easier", or in other words, faster and safer.
- Compared with speculative planning, revising an existing architecture lets you define exact and very granular goals with quite precise estimates. This is also where deep language and framework knowledge kicks in, but this time to solve real problems.
- In addition to more accurate estimates and metrics, since there are fewer ups and downs, the estimates become more predictable for the business, which builds trust between the business and the developers.
In the end, with TDD I started spending much more time on application architecture than ever before, without facing any doubts from the business side.
My personal doubts went away as well: solving real problems in a short time feels, and is, way more professional than spending time solving speculative problems, no matter how realistic they might seem at the moment.
Am I doing it right? #
The deeper I got into TDD, the more granular my code became. More classes, smaller classes, tons of classes with one public function. All test-driven, therefore well tested, BUT will it all work together? Again and again I had no confidence in that and had to retest things manually, which added to the pile of doubts about the value of all the time invested in tests.
Until one day it hit me! I don't need to do it manually. Whatever I doubt, whatever I have to retest, I should write a test for. The fun part: sometimes that can be quite challenging to achieve.
For example, in a .NET Core MVC application, if I have doubts whether the app will start successfully, I can write a test for it. If I might have forgotten to register one of the pipeline components for the feature I was working on, I can write a test that checks that all controllers can be resolved.
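Here's a sketch of such a smoke test, assuming a reasonably recent ASP.NET Core app with the Microsoft.AspNetCore.Mvc.Testing package and xUnit; `Program` stands for the app's own entry point class:

```csharp
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.Extensions.DependencyInjection;
using Xunit;

public class SmokeTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public SmokeTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public void App_starts_and_all_controllers_resolve()
    {
        // Creating a client boots the whole pipeline in memory;
        // broken startup configuration would fail right here.
        using var client = _factory.CreateClient();

        // Find every controller in the app...
        var controllerTypes = typeof(Program).Assembly.GetTypes()
            .Where(t => typeof(ControllerBase).IsAssignableFrom(t) && !t.IsAbstract);

        // ...and try to build each one from the real container,
        // catching any forgotten service registration.
        using var scope = _factory.Services.CreateScope();
        foreach (var type in controllerTypes)
        {
            ActivatorUtilities.CreateInstance(scope.ServiceProvider, type);
        }
    }
}
```

The same trick catches missing registrations for anything the container builds, not just controllers.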
The same applies to working with the framework and other third-party components. Whenever there are doubts about how exactly something behaves, or uncertainty about its behavior in future versions, just test it.
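As an illustration (my example, using System.Text.Json as the third-party component), here's a small "learning test" that pins down a behavior the code might rely on; if a future version of the library changes it, this test fails first and shows exactly what moved:

```csharp
using System.Text.Json;
using Xunit;

public class SystemTextJsonLearningTests
{
    public class Dto
    {
        public string? Name { get; set; }
    }

    [Fact]
    public void Property_matching_is_case_sensitive_unless_asked_otherwise()
    {
        const string json = "{\"name\":\"tdd\"}";

        // Default options: lowercase "name" does not bind to "Name".
        var strict = JsonSerializer.Deserialize<Dto>(json);
        Assert.Null(strict!.Name);

        // With PropertyNameCaseInsensitive the same payload binds.
        var relaxed = JsonSerializer.Deserialize<Dto>(json,
            new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
        Assert.Equal("tdd", relaxed!.Name);
    }
}
```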
Some may argue that these are not unit tests and this is not TDD. I'm not going to debate semantics. The fact is: it works, and it's efficient. It lets you move on, confident that the code works. TDD doesn't prevent you from automating your manual tests. One constraint remains, though: for TDD to keep going, the tests have to be fast.
The takeaways #
- There is no perfect design that fits every project. TDD embraces the SOLID principles and makes refactoring cheap. Continuous refactoring is the way to keep the project architecture clean and aligned with the use cases and their specifics.
- Thinking ahead gives a very nice feeling of importance, but it comes at a huge price. It makes testing and maintenance complicated (i.e. costly), emphasizes the obviously common things, and hides the important specifics of an application's use cases.
- The main purpose of testing is confidence in working software. Whatever is doubted should be tested.