I was writing a mock test yesterday which had some explicit expectations about the number of times a particular member would get called.
- When I ran the test outside the debugger, the test would fail as I expected it to (being test-first and all).
- When I ran the test inside the debugger, the test would fail for different reasons.
- When I ran the test inside the debugger AND stepped through the code line by line, it would fail in yet another location.
This all seemed too weird at the time. There was nothing different between the runs that should have caused this behaviour. The only clue I had was that when running within the debugger I was getting unexpected results, and when running independently of the debugger it worked as expected. This led to the conclusion that the problem was caused by debugging the test itself. I.e., the act of observing my test run was changing the test's execution. Also known as the observer effect.
The answer to the problem is far less exciting. I realised after a few minutes of tinkering that I had some of the mocked member fields hooked into my Watch window. So upon each step and execution, the Watch was re-evaluating my members, which in turn was eating up my predefined expectations!
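The same effect can be sketched in miniature with Python's `unittest.mock` (a stand-in for whatever mocking framework was actually in use): a `PropertyMock` records every read of the property, so anything that re-evaluates the member, like a debugger watch, silently inflates the call count and breaks an exact expectation.

```python
from unittest.mock import Mock, PropertyMock

# Attach a property mock so that every *read* of `value` counts as a call.
m = Mock()
prop = PropertyMock(return_value=42)
type(m).value = prop

# The code under test reads the property exactly once...
result = m.value

# ...but a Watch window that re-evaluates `m.value` on each debugger step
# adds extra reads behind your back. Simulate one such re-evaluation:
_ = m.value

# An expectation of exactly one call now fails:
print(prop.call_count)  # 2, not the expected 1
```

Here the hypothetical names (`m`, `value`) are illustrative only; the point is that evaluation itself is an observable, recorded event to the mock.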
It totally makes sense once you realise what's going on, but until that point the problem exhibits itself in a very weird way…
I’ve since discovered that this sort of bug (one that changes its behaviour while you probe it) is also known as a Heisenbug.