Sunday, November 29, 2015

Everybody is doing TDD, take two

In my previous post, Everybody is doing TDD, I tried to make a point by telling a story. But most people missed the point and argued about unrelated problems. I guess that was mostly my fault, so in this post I will attempt to explain the point directly.

My claim is this: sooner or later, every software development effort slips into the same kind of workflow:

  1. Define a test case.
  2. Change or extend implementation so that above test case passes.
  3. Execute the test case. If it passes go to 1. If it fails, go to 2.
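Automated unit testing is the most direct implementation of this loop. Here is a minimal sketch in Python; the `add` function and its expected behavior are invented purely for illustration:

```python
# Step 1: define a test case (the hypothetical spec: add(2, 3) must equal 5).
def test_add():
    assert add(2, 3) == 5

# Step 2: change or extend the implementation so the above test case passes.
def add(a, b):
    return a + b

# Step 3: execute the test case; a failure would send us back to step 2.
test_add()
print("test passed")
```

In a real project the same loop is usually driven by a test runner rather than called by hand, but the three steps are identical.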
This workflow is, so far, the only one we know of that produces software that fits the specification. The only difference is how the workflow is implemented. For example, as a software development process, the steps can be:
  1. Create a specification.
  2. Implement an application according to specification.
  3. Have testers make sure the implementation is according to specification. If this fails, return to step 2.
This process works as long as no corners are cut, which means defining the specification in great detail and having an army of testers so the whole specification can be validated. Much worse is the situation where the whole workflow is unconscious, as demonstrated by Josh in the previous post:
  1. The test case is kept in the programmer's memory as the steps to run the use case.
  2. Code is changed or added to fix the current use case.
  3. The steps as defined in 1 are run.
It should be obvious why this is so bad, but that is not the point of this post. The point is that no other workflow exists that gives a software developer the ability to create software that works according to the specification. Or at least I don't know of any.

That is why I pose a question, primarily to those who claim TDD doesn't work: does any other workflow exist that cannot be reduced to this one?

There are a few options, and none of them are good.

The first option is to implement the software correctly the first time. I believe this is just a dream, possible only for the simplest of use cases. If the software gets even a little more complex, it becomes impossible to create it without executing the test cases during development.

The second option came from the comments on the previous post: that a developer can "feel" when software is correct. While I agree a developer can limit the number of tests that need to be executed by using experience and intuition while looking at the code itself, there is still a huge possibility of bias. So while good experience makes it possible to come up with new test cases and minimize testing, it is not a full replacement.

So if we agree that this is the best workflow to follow, it raises a question: "Which implementation of this workflow is best?" But that is a question for another post (with an obvious conclusion).

Tuesday, November 24, 2015

Everybody is doing TDD

This statement will make some people angry, but I ask them to keep reading and realize it is actually true: everybody is doing TDD. It is just that some people are more competent at it than others. And if they say they are not doing it, then they are either lying or fully incompetent.

Let's illustrate this with a sample situation. The goal is to implement an application. The specification mentions multiple text boxes and buttons, along with many use cases in which sequences of inputs and button presses are given to produce desired outputs.

Let's see how Josh, a novice programmer right out of college, full of energy and passion, would tackle this problem:

  1. He creates the UI first, which seems obvious considering the whole spec is written around the UI.
  2. He goes through the use cases and, confident he fully understands them, starts implementing the code.
  3. As he nears completion, he figures he should try a use case to see if it works. And to his surprise, it doesn't.
  4. So he changes the code to fix that use case and repeats step 3.
  5. As he reaches what he believes to be the end of the implementation, he decides to re-run some previously completed use cases, and they all work. If any of them fails, he goes back to step 3.
  6. Once he commits the code, he doesn't make any further changes, because then he would have to go through all the use cases again, which would take him quite some time, considering they all require entering many values. And he doesn't have that time right now.
And now, let's see how Sarah, a self-styled software craftsman with many years of experience in software design, would do it:
  1. She realizes that the UI is just a detail and not important; instead, she creates a library that contains the logic of the calculations described by the spec.
  2. She then picks the simplest use case and writes code that simulates the behavior required by the spec: button clicks as method calls and text boxes as properties.
  3. She then implements the library code so that the simulation above succeeds.
  4. Satisfied with the result, she picks the next use case and goes back to step 2.
  5. As she works through the use cases, she often runs the code that simulates the behavior defined by the spec. And because this takes less than a second, she can do it after every small change, making sure she didn't break anything while implementing a new use case.
  6. As the last step of the implementation, she creates the UI, wires it up to the library she created, and runs the application once to make sure the wire-up is correct.
  7. Before she commits the code, she reviews the code from the point of view of someone seeing it for the first time, changing it so it is much easier to understand. She can do this because, after every change, the simulation code will tell her if she broke something.
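Sarah's simulation code might look like the following sketch. The `Calculator` class and its use case are invented for illustration: button presses become method calls, and the display text box becomes a property.

```python
class Calculator:
    """A tiny library holding the logic; the UI would wire its buttons to these methods."""

    def __init__(self):
        self.display = ""          # stands in for the display text box

    def press_digit(self, d):      # stands in for a digit button click
        self.display += str(d)

    def press_clear(self):         # stands in for the Clear button
        self.display = ""


# One use case from the spec, simulated in code instead of clicked by hand:
calc = Calculator()
calc.press_digit(4)
calc.press_digit(2)
assert calc.display == "42"
calc.press_clear()
assert calc.display == ""
print("use case passed")
```

Running this takes a fraction of a second, which is exactly what lets Sarah re-check every use case after every small change.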
We can all see what the output of the two developers will be. Josh's code will most probably be broken (i.e. not following the specification), hard to understand, and anyone inheriting his code will curse Josh's very existence. Sarah's code, on the other hand, will work according to the specification, be easy to understand, and anyone taking it over will know what the code is supposed to do even if the original specification is lost to time.

But that is not what this article is about. It should be obvious that the workflow in both cases is extremely similar. Even though Josh tried to implement the logic in one go, he soon drifted into a cycle of testing one use case and implementing the code so that the single use case works, just like Sarah. Except in Josh's case, he has to start the application and enter the values manually every time he wants to test a use case. So while Sarah spent some time writing more code, she spent less time overall than Josh, who enters the values manually every time. It then comes as no surprise that they both follow the same workflow:
  1. Define a test case.
  2. Implement code so the test case can pass.
  3. Ensure test case passes.
  4. If it passes, go to 1. If not, go to 2.
Incompetent developers simply define their test case as "enter values and click buttons" instead of automating it in code, making the "ensure the test case passes" step extremely laborious and time-consuming, and therefore hard to repeat reliably and often.

There are only two cases I can think of where this workflow is not used. Either the developer is able to write the whole implementation correctly the first time and execute the manual test cases after the fact; in this case, I call him a liar. Or the developer doesn't bother repeating the previously finished use cases, in which case the code will be broken, because those use cases will no longer work as specified. In that case, I call him fully incompetent.

Sunday, November 9, 2014

Open/Closed Principle

The Open/Closed Principle says that "software entities (classes, modules, etc.) should be open for extension, but closed for modification." While this doesn't make much sense at first, it is actually quite simple: to change the behavior of a system (or a class or a module), you should not change existing code (it is closed for modification), but you should add new code (it is open for extension).

This principle is greatly tied to the correct abstractions being present in the existing code. If the right abstraction is present, implementing it and plugging that implementation into the software allows for safer development, because there is only one new piece of behavior to test.
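As a sketch of what such an abstraction looks like in code (the report-exporter scenario is made up for illustration), the `Exporter` interface is the closed part, while new formats arrive as new code:

```python
import json
from abc import ABC, abstractmethod


class Exporter(ABC):
    """The abstraction: existing code depends only on this interface."""

    @abstractmethod
    def export(self, data: dict) -> str: ...


class CsvExporter(Exporter):
    def export(self, data):
        return "\n".join(f"{k},{v}" for k, v in data.items())


# Extension: new behavior is added as new code, not by editing CsvExporter.
class JsonExporter(Exporter):
    def export(self, data):
        return json.dumps(data)


def run_report(exporter: Exporter, data: dict) -> str:
    # Closed for modification: this function never changes when a format is added.
    return exporter.export(data)


print(run_report(CsvExporter(), {"total": 3}))   # total,3
```

Only `JsonExporter` itself needs testing when it is added; `run_report` and `CsvExporter` stay untouched.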

The problem with OCP is those very abstractions. It is impossible to predict which abstractions you are going to need in the future. Coming up with the correct abstractions is one of the most important jobs of a software developer. Developers need to use their whole experience and knowledge of both the software and the domain to create abstractions that can be expanded upon later, while not unnecessarily complicating the design.

I believe OCP is one of the most important principles, especially in environments where requirements and demands on the software change often. This is especially true in agile environments.

Monday, October 20, 2014


Single Responsibility Principle

The Single Responsibility Principle (SRP) says that "each context should have only one responsibility."
This is simple to understand, but too ambiguous to actually apply in practice. The major reason is the word "responsibility". In the original, Martin defines it as a "reason for change", but that doesn't make things clearer. The problem is that responsibilities exist at many different levels of abstraction and in many different contexts. A responsibility can range from "print a report", through "execute this query", down to "add those two numbers". If this principle were followed rigorously, the code would end up as many tiny classes or functions, each having an extremely simple and low-level responsibility.
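To make the ambiguity concrete, here is a hypothetical sketch: the same "print a report" behavior carved up at two different granularities, with nothing in SRP itself saying which level of "responsibility" is the right one.

```python
# Coarse-grained: one class, one business-level responsibility.
class ReportPrinter:
    def print_report(self, items):
        total = sum(items)  # the "calculate" responsibility
        return f"Report: {len(items)} items, total {total}"  # the "format" responsibility


# SRP taken rigorously: every low-level responsibility split into its own class.
class Totaler:
    def total(self, items):
        return sum(items)


class Formatter:
    def format(self, count, total):
        return f"Report: {count} items, total {total}"


class FragmentedReportPrinter:
    def __init__(self):
        self.totaler = Totaler()
        self.formatter = Formatter()

    def print_report(self, items):
        return self.formatter.format(len(items), self.totaler.total(items))


# Both produce identical output; SRP alone cannot tell you which design is better.
assert ReportPrinter().print_report([1, 2]) == FragmentedReportPrinter().print_report([1, 2])
```

The fragmented version has three classes coupled together where one used to suffice, which is exactly the trade-off discussed below.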

SRP stands in for two much more important concepts: cohesion and coupling. Another set of principles (GRASP) says that cohesion should be high while coupling should be low.

But those two goals work against each other, so the key is to keep them in equilibrium. Following SRP rigorously would result in low cohesion and high coupling, which is exactly what should be avoided.

One area where SRP is a good fit is business modeling. It is much easier to apply SRP to business responsibilities than to try to invent your own.

In light of this, I believe SRP mainly serves to remind us that we should always look at how cohesive and coupled our code is, and separate or combine classes or methods based on that.

Thursday, September 25, 2014

Not so SOLID principles

The SOLID principles were introduced around the year 2000 by Robert C. Martin, aka "Uncle Bob". They are principles that, if followed by a developer, should result in software that is maintainable and extensible. Since then, they have gained a lot of popularity and recognition; I see them mentioned on Programmers all the time. And I do think they are a good thing to know and use.

But, like all things, those principles are not without problems. One problem is that their descriptions are ambiguous. This leads to misrepresentation of what they really mean and how they should be used, which is why some of the principles have multiple definitions, as people tried to be more specific. It is also why I believe these principles are good guidelines, but should not be followed rigorously.

The following posts will go over each of the five principles; in each, I will try to explain possible problems with how they are understood and how I understand them.

The principles are:

  • Single responsibility principle
  • Open/Closed principle
  • Liskov Substitution principle
  • Interface segregation principle
  • Dependency inversion principle