On what has been said
It's been interesting to read all the posts around DHH's recent writings on the death of TDD and the various responses. I picked up TDD about a year ago and am still working on getting the most out of it, so I read, and am influenced by, a lot from the people who responded to David's posts.
I find a theme in David's writings that troubles me. I have a computer science background and try to argue my convictions dispassionately, based on facts as much as possible. I find very few facts in David's presentation of the faults of TDD. It was the facts around TDD that convinced me it was worth learning in the first place: that it does lead to better products in the end.
Some of these things are apparent, both to David and to me. He finishes his "Slow database test fallacy" post by acknowledging that deep coupling in the code is bad and that test coverage is good. So far we agree. Generally, I have found that when someone doesn't agree with the tenets of TDD, it's often because they do not understand or agree with the underlying design principles, such as the SOLID principles, or don't believe that those are in fact traits of good design.
I want more facts. Show me where TDD hurts design. Give practical examples of the damage caused, just as proponents of TDD give examples of how it improves design. Or at least argue on a factual, case-by-case basis.
I'm firmly in the pragmatic camp; nothing about TDD is religion to me. I'm currently preparing for my master's thesis, which is on our cognitive limitations as humans and how they relate to developing complex computer systems. To summarise all the reports and experiments I have read: we suck at programming at a cognitive level.
Humans did not evolve to build complex, abstract systems in abstract, formalized representations of logic. We have so many things against us that it’s a miracle we got this far. TDD gives us a set of tools and principles that help decompose this complex task into sizes we can control and grasp in our brains.
We decompose to abstract away the parts of the problem that are not relevant right now. And this is not optional: even if you do not decompose the problem, your brain will happily ignore the parts it feels are not important, and those are not the same parts you would have chosen to ignore, trust me.
So I see TDD not as a religion but as a toolset, the same as object-oriented design or functional programming or the language I choose for solving a problem. It's all about the results, and I can happily admit I don't do 100% TDD, and I don't think anyone should. Heck, even Gary Bernhardt admits he does 50%-70% TDD, so why would I feel ashamed?
TDD is a toolset, and learning to use a set of tools naturally requires you to find out when they work and when they don't. As a beginner I try to do 100% TDD on larger projects that I know will live for years; that's where I'll get the biggest payoff for my slowdown. But I don't TDD shell scripts, prototypes or weekend projects. I don't have the time to do that yet, but I will eventually, as I speed up.
There have been studies into what effects TDD has. A summary of such studies was compiled in 2006 by Maria Siniaalto in her paper "Test-Driven Development: empirical body of evidence". In 2011 Tomaž Dogša and David Batič published "The effectiveness of test-driven development: an industrial case study", and this year, 2014, another meta-study from the University of Helsinki, "Effects of Test-Driven Development: A Comparative Analysis of Empirical Studies", was published by Simo Mäkinen and Jürgen Münch.
In all of these, the conclusion is a weak tendency towards increased quality and increased development time. And they were all conducted on people with mostly no previous TDD experience. Let's analyse this for a while: all the big names in TDD seem to agree that it takes years to reach your full potential with TDD (as with anything, it's those 10,000 hours to mastery again), and as a beginner I can attest to the initial increase in time. After all, you are learning a new way of working, new tools and a new way of thinking.
To me the astonishing thing is that you can measure a positive effect at all after giving someone only a few weeks to a few months with TDD. That the effect shows up so early speaks volumes about what TDD can do in the hands of someone who has had a few years of training in the technique.
If we agree that at least some of the things that get enforced by TDD, such as decoupling, modularization, composition, and smaller objects and methods, are good things, and if the science shows us that even beginners get some of these benefits right off the bat, why should we, the proponents of TDD, not stand proud and talk warmly about a set of tools and principles that we believe in and that gives proven results?
This is where that trouble with David's writings comes back to haunt me. Where are the facts? He might have very good reasons for his view that TDD is bad for design. I don't believe that, but I'm always prepared to listen to new facts and reevaluate reality in the light of them.
And then we get to the part about tests and time. I feel strongly that TDD is not about the tests in themselves; they are more of a very nice bonus. It is about the design that springs from writing tests first. That the tests become fast when you only test one class at a time is nice, and it lets you adopt a different workflow, such as the one Gary Bernhardt talks about: sub-second test runs that basically just sit in the corner of your eye and stop you the moment things go wrong.
That is a very powerful place to be, and if you have been there, you do not like to have it taken away. That might be the reason behind some of the focus on test times when it comes to Rails: a slow suite breaks that fluency of thought. And since you can get the speed back by decoupling from the framework, most choose to do so. This does not really reflect badly on Rails specifically; it reflects badly on any large system.
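As a minimal sketch of what that decoupling can look like (the class, names and numbers here are hypothetical, not taken from any of the posts discussed), a piece of logic extracted from a Rails model into a plain Ruby object can be tested without loading Rails or touching the database, which is what keeps the feedback loop sub-second:

```ruby
# Hypothetical example: a calculation pulled out of an ActiveRecord model
# into a plain Ruby object. The test requires only Minitest, not Rails or
# a database, so the whole run boots and finishes in a fraction of a second.
require "minitest/autorun"

class DiscountCalculator
  def initialize(rate)
    @rate = rate
  end

  # Returns the price after applying the discount rate,
  # rounded to two decimal places.
  def apply(price)
    (price * (1 - @rate)).round(2)
  end
end

class DiscountCalculatorTest < Minitest::Test
  def test_applies_discount_rate
    assert_equal 50.0, DiscountCalculator.new(0.5).apply(100.0)
  end
end
```

The point is not the arithmetic but the shape: because the object has no framework dependencies, the test file stands alone and can run on every keystroke.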
I hold some personal opinions about the early design choices in Rails: its tight coupling to the database, the mashing of helpers into a single namespace, and so on. Most of this is being improved on as you read this and has been improving for years. But I cannot help wondering whether these problems of coupling and dependencies would have existed at all if Rails had been built test-first from the start.
And it doesn't help David's case, in my mind, that when I look at Rails, which he famously designed, I see it suffering from some design problems that could have been solved using TDD, all the while he's talking about the design damage TDD incurs…
It smacks more than a little of arrogance, and that, together with the fact that his arguments are non-factual and based on emotion and opinion, makes it hard for me to take him seriously.
As Tom Stuart so poignantly puts it in his recent lightning talk from Scottish Ruby Conference (and I'm paraphrasing): "DHH is just one man. His experiences are important, but they are just the experiences of one man." I would add that DHH is young, in relative terms, and trying to duke it out with some serious heavyweights when it comes to experience with TDD.
I choose to accept the mostly dispassionate, fact-based arguments of Kent Beck, Corey Haines, Robert C. Martin, Gary Bernhardt and many more over the passionate, loud and mono-experience views of DHH.
What do you choose?
TDD is dead. Long live testing. - David's original post.
Monogamous TDD - Robert Martin's reply to David's post.
Test-induced design damage - David's second post.
Design-Damage - Robert Martin's reply to David's second post.
Slow database test fallacy - David's third post.
When TDD does not work - Robert Martin's reply to David's third post.
TDD, Straw Men, and Rhetoric - Gary Bernhardt's reply to David's post about the slow database test fallacy.
Speeding Up ActiveRecord Tests - Corey Haines's take on the problems with TDD David puts forward.
UnitTest - Martin Fowler's correction of history and dive into the origin of unit testing.
The DHH problem - Tom Stuart's to-the-point and very funny lightning talk from Scottish Ruby Conference 2014.