World of Warcraft: It’s entirely possible to drop one of your professions, take up enchanting, and be at lvl 275 enchanting (high enough to d/e Outland blues) in one weekend. Don’t let anyone tell you otherwise – I did it.

Only downside is that you’ll spend upward of 1200g. I’m sure I could have easily continued on to lvl 300 (high enough to d/e any game item), but I ran out of money 🙁

I was writing a mock test yesterday that had some explicit expectations about the number of times a particular member would be called.

  • When I ran the test outside the debugger, the test would fail as I expected it to (being test-first and all).
  • When I ran the test inside the debugger, the test would fail for different reasons.
  • When I ran the test inside the debugger AND stepped through the code line by line, it would fail in yet another different location.

This all seemed too weird at the time. There was nothing different in any case that should have caused this problem. The only clue I had was that when running within the debugger I was getting unexpected results, and when running independently of the debugger it worked as expected. This led to the conclusion that the problem was caused by debugging the test. That is: the act of observing my test run was changing the test execution itself. Also known as the observer effect.

The answer to the problem is far less exciting. I realised after a few minutes of tinkering that I had some of the mocked member fields hooked into my Watch window. So upon each step and execution, the Watch window was re-evaluating my members, which in turn was eating up my predefined expectations!
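For illustration, here’s roughly what such a setup looks like. This is a hypothetical sketch using the Rhino.Mocks API of the era; the actual types and methods in my test were different:

```csharp
// Hypothetical sketch (Rhino.Mocks-era API, illustrative types).
// The strict mock expects Read() to be called exactly twice. If
// reader.Read() is sitting in the Watch window, every debugger step
// re-evaluates it, silently consuming the expected calls before the
// code under test ever runs.
MockRepository mocks = new MockRepository();
IDataReader reader = mocks.CreateMock<IDataReader>();

Expect.Call(reader.Read()).Return(true).Repeat.Times(2);
mocks.ReplayAll();

RunCodeUnderTest(reader);   // hypothetical method being tested

mocks.VerifyAll();          // fails if Read() wasn't called exactly twice
```

With the Watch entry in place, `VerifyAll()` reports the expectation as already satisfied (or exceeded) at whatever point the debugger last evaluated it, which is why the failure location moved around.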

It makes total sense once you realise what’s going on, but until that point this problem exhibits itself in a very weird way…

I’ve since discovered that this sort of bug (one which changes itself while you probe it) is also known as a Heisenbug.

I’ve seen this twice now in about 30 minutes, and it’s bugging me.

One of the developers I work with is writing code like this:

string postingUrl = CmsHttpContext.Current.Posting.Url.ToLower(CultureInfo.InvariantCulture);

What bugs me about this code is that it shows no understanding of, or care for, defensive programming.

Q1: Why do you assume that CmsHttpContext.Current is safe? Sure, within ASP.NET the framework creates a new context for you upon each request, and (in this case) Microsoft CMS wraps the HttpContext and guarantees you a context. Under these known conditions, CmsHttpContext.Current is safe.

But that call has been chained onto Posting, which is what throws the exception. Why assume that you’ve hit a posting? Why assume that there would be a posting at all? This lack of thinking just demonstrates lazy programming, to me.
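A more defensive version of the same line might look like this. The guard style is my own sketch; `CmsHttpContext` and `Posting` are the MCMS API from the snippet above:

```csharp
// Defensive sketch: check each link in the chain before dereferencing it,
// instead of letting a NullReferenceException surface from deep inside
// a chained call. (Assumes using System.Globalization for CultureInfo.)
CmsHttpContext context = CmsHttpContext.Current;
if (context == null || context.Posting == null)
{
    // Not inside a posting - fail gracefully, log, or redirect as appropriate.
    return;
}

string postingUrl = context.Posting.Url.ToLower(CultureInfo.InvariantCulture);
```

The point isn’t the extra lines; it’s that the author has consciously decided what should happen when there is no posting, rather than leaving it to the runtime.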


The great SQL Performance saga continued today, taking a turn for the….ah…..different…


  • The servers are exhibiting high disk I/O activity.
  • So much disk activity that all queries are nearly brought to a grinding halt while the disks are thundering away.
  • The Query Plans for all I/O intense queries use index seeks – no scanning at all.

This leads me to conclude that the behaviour of the server is fine, and that the system is doing everything expected of it. Which leads me to ask the question: why is an index seek taking so long to run off the disk?

Well it turns out there is in fact a way to determine the level of fragmentation of SQL indexes.

I wrote a simple query based on this content to calculate the amount of fragmentation in the live server’s indexes, and needless to say I’m a bit shocked. I sincerely thought this was something the “qualified” DBA consultant on the other end was capable of doing; however, in his defence, he might have forgotten to run it and then moved on to the next high-paying job, leaving us to sweep up the shambles.

I’m no DBA…. Although if I’m correct, I’ll be justified in calling myself one, I’ll have a few more letters to add to the end of my job title, and hopefully a zero to throw on the end of my salary…. all for doing SFA.

FWIW, here is the query which revealed all the gory details:

select	i.object_id, i.index_id, s.avg_fragmentation_in_percent, s.avg_fragment_size_in_pages
from	sys.indexes i
JOIN	sys.dm_db_index_physical_stats(DB_ID('DATABASE_NAME_GOES_HERE'), null, null, null, null) s
	on i.object_id = s.object_id
	AND i.index_id = s.index_id
where	(
		s.avg_fragmentation_in_percent > 0
		OR s.avg_fragment_size_in_pages > 0
	)
	AND i.index_id <> 0
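Once the offending indexes are identified, they can be reorganised or rebuilt. A sketch (the index and table names are placeholders, and the thresholds are the commonly cited rules of thumb, not hard limits):

```sql
-- Reorganise for light fragmentation (roughly 5-30%):
ALTER INDEX IX_INDEX_NAME_GOES_HERE ON dbo.TABLE_NAME_GOES_HERE REORGANIZE;

-- Rebuild for heavy fragmentation (above roughly 30%):
ALTER INDEX IX_INDEX_NAME_GOES_HERE ON dbo.TABLE_NAME_GOES_HERE REBUILD;
```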

I’ve been wondering recently whether it’s important to take a purist line, or whether to be practical about the way something is developed or the tools we use… In almost all cases I agree fundamentally with the purist point of view, but I’m a man of practicality, meaning practicality wins out most times… So why then do I second-guess myself this time around, compared to any other time?

The situation I have is that I’m the lead maintainer of a legacy project. I can’t *really* call it legacy, because the client (who I know reads this blog, so I have to tip-toe around here ;)) invests money into updating their website and introducing new features. But as far as technologies go, ASP.NET 1.1 is widely considered legacy. The website was originally written with no unit testing… at all. In fact, I was lucky enough to introduce TDD, and unit testing in particular, to my company and at least forge a stream in that direction.

After working on the project for nearly 18 months, and in that time introducing hundreds of unit tests, I’ve come to realise just how much of the test code I’d written was boilerplate setup/teardown of database content for unit testing. Remember, this is a legacy project, so although IoC/DI are great concepts, breaking dependencies on a large project such as this won’t happen at the snap of a finger, so some heavyweight unit testing is unavoidable if I want to do unit testing at all (which I do).

The kind of setup/teardown code I’m talking about is pretty intense:

  1. SETUP – Insert 3-4 records, plus their associated records
  2. SETUP – Update foreign keys where applicable
  3. SETUP – Precondition assertion that the data was inserted successfully
  4. TEARDOWN – Delete inserted records
  5. TEARDOWN – Postcondition assertion that the data used for testing was removed (i.e. maintaining the atomicity of tests)
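The steps above might be sketched like this. This is a hypothetical NUnit-style fixture; the `TestData` helper and its methods are made-up names for illustration:

```csharp
// Hypothetical sketch of the setup/teardown boilerplate described above.
// TestData and its members are illustrative, not real project code.
[TestFixture]
public class CustomerRepositoryTests
{
    private int customerId;

    [SetUp]
    public void SetUp()
    {
        // Steps 1-2: insert the records (and associated rows), fixing up
        // foreign keys as it goes.
        customerId = TestData.InsertCustomerWithOrders(3);

        // Step 3: precondition - the data really is there.
        Assert.IsTrue(TestData.CustomerExists(customerId));
    }

    [TearDown]
    public void TearDown()
    {
        // Step 4: delete everything we inserted.
        TestData.DeleteCustomerWithOrders(customerId);

        // Step 5: postcondition - the database is clean again.
        Assert.IsFalse(TestData.CustomerExists(customerId));
    }
}
```

Multiply this by hundreds of tests and the maintenance cost becomes obvious.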

Yes, a lot of this common functionality is encapsulated into helper classes, but some unit tests are still turning out 30-40 lines long. The test code is becoming harder to maintain because there’s just so much of it, and although I’m not usually one to get anal about regions, I despise the thought of hiding code in a method with regions (here’s one thing the purist part of me won’t relent over :)). And worse yet, because of all the transactional overhead from inserting/rolling back and pre/post-checking, I’m sure our test-run times are suffering too (probably in the order of 10-20 seconds, but it’s the purist argument that they should be as fast as possible).

So my solution? To generate a specific database with the intention of it being in a known state for the test fixtures. Then, if I need to prepare the database for some tests, I only need to insert data into the testing database and that’s that. My test is no longer responsible for setting up a stack of test data and ensuring it’s perfectly clean afterward. I could hire a new junior developer tomorrow, and it wouldn’t have disastrous side effects if his unit tests didn’t sweep up after themselves (because they wouldn’t necessarily need to!)

We did this in my last job, and the approach worked pretty well from a practical standpoint; however, I always felt that it violated the purist argument that a unit test is:

  1. Atomic
  2. Responsible for ensuring it leaves the test environment in the pristine condition it was given
  3. Free of assumptions about the operating environment of the machine it is running on

The first two points I’ve talked to death. It’s the third one which I’ve either made up in my head to try and take away my sleep at night, or it’s really a sound piece of advice: that your test should be self-contained and not rely on anything being pre-configured within its environment in order to do whatever it needs to do. Which, quite frankly, is a right royal PITA.

So I’m not happy about it… I’ve concluded (again) that practicality will win out, and that having this database is not such a terrible thing, because the long-term benefits it provides, such as faster test execution time and (more importantly) cleaner code, far outweigh any unjustified idealism I may or may not hold.

I’ll do it, but I’ll argue with myself about it for a while to come yet…

I’ve long noticed the following errors come up at the end of our NUnit test run, but never had a chance to look into them in detail until now… They’ve never resulted in a broken build, and they clearly look like something happening outside the scope of our unit tests (even if it was caused by our tests).

Unhandled exceptions:
1) : System.InvalidOperationException: This action is invalid when the mock object is in verified state.
at Rhino.Mocks.Impl.VerifiedMockState.MethodCall(IInvocation invocation, MethodInfo method, Object[] args)
at Rhino.Mocks.MockRepository.MethodCall(IInvocation invocation, Object proxy, MethodInfo method, Object[] args)
at Rhino.Mocks.Impl.RhinoInterceptor.Intercept(IInvocation invocation, Object[] args)
at ProxyInterfaceSystemSystemObject_System_DataIDataReader_Rhino_Mocks_InterfacesIMockedObject_System_Runtime_SerializationISerializable.Close()
at ExternalDataComponent.Data.DataRecordEnumerator.Dispose()
at ExternalDataComponent.Data.DataRecordEnumerator.Finalize()

Any obvious names have been masked to protect the innocent/stupid.

The error message is quite obvious about what the problem is, and leaves little doubt as to where it’s originating from… but the difficulty in this problem is finding out what causes the exception in the first place.

After attaching the VS debugger to NUnit and turning on “Break On Exception” for System.InvalidOperationException, all I would get was the debugger breaking into a location which was not relevant to any executing code, with the entire call stack consisting of unmanaged code.

After a lot of thought, and with a little luck, I realised what’s happening: some of our tests are creating a stub object and passing it into the ExternalDataComponent during testing. The ExternalDataComponent assumes that it is safe to call Dispose(), and does so, which in turn calls Close() on our stub. The Rhino.Mocks framework then throws an exception when it intercepts the Close() call on the mocked interface, because the mock object “is in [the] verified state”.

My GoogleFu is in good form.

Unfortunately, we aren’t able to make changes to the ExternalDataComponent I’ve so eloquently disguised, as it’s a dependency library we’re using, so the “solution” is to reset all mocks to the original record state before calling Dispose (which the tests weren’t doing).
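In code, the fix looks roughly like this. A sketch using the Rhino.Mocks `MockRepository` API; the variable names are illustrative:

```csharp
// Sketch of the fix (Rhino.Mocks API; names are illustrative).
// After verification, move the mock back into record state so that a
// late Close() from the component's Dispose()/finalizer no longer
// trips the "verified state" check.
mocks.VerifyAll();
mocks.BackToRecord(reader);

component.Dispose();   // hypothetical - Close() on the mock is now harmless
```

It’s not pretty, but it keeps the finalizer-time call from surfacing as an unhandled exception at the end of the test run.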

The DateTime.Parse() method (and all its derivatives, I assume) is locale-sensitive, and will assume that the string you provide is either in the standard ISO date format or in the format for your current locale.

The buggy implementation I found in our system was calling DateTime.Parse() with a value of “23-07-2007”, which threw an exception citing that the string was not in the correct format. I dug around a bit with the code and tried different implementations, with different results. It was only after I provided it a US date value (which it did not baulk at) that I realised my system’s locale was set to English (US).

The code was wrong to assume the current system locale matched the expected input format in the first place, but this one tripped me up for a little while.
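The robust fix is to state the expected format explicitly. A minimal sketch using the standard DateTime.ParseExact overload:

```csharp
using System;
using System.Globalization;

class Program
{
    static void Main()
    {
        // Be explicit about the input format instead of relying on
        // the machine's current locale. "dd-MM-yyyy" matches the
        // "23-07-2007" value that tripped up DateTime.Parse().
        DateTime d = DateTime.ParseExact(
            "23-07-2007", "dd-MM-yyyy", CultureInfo.InvariantCulture);

        Console.WriteLine(d.Year);   // 2007
        Console.WriteLine(d.Month);  // 7
        Console.WriteLine(d.Day);    // 23
    }
}
```

This parses identically on an en-US box and an en-AU box, which is the whole point.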

This evening I wanted to get back to my old habits of creating silly spike projects which just test out a particular piece of functionality/feature.

Tonight I have mixed together some WPF forms and binding with some LINQ queries I was learning.

Part 1 – WPF Forms and Binding
What I wanted to learn from this was how to create my own ListBox with a custom-drawn ListItem, and bind it to a simple List<String>. The catch was that I wanted to do this without looking it up on the net, thereby forcing myself to learn the framework, and in particular the syntax around the Binding interpreter, because there’s no IntelliSense for any Binding properties. The resulting XAML is below:


Part 2 – LINQ Basics
Although I was watching the video, I went off and did my own thing for the most part, playing around with using LINQ to select a list of directories matching certain criteria. This sample demonstrated how to use an arbitrary method as part of the filter condition of a LINQ query, with a brief touch on ordering.
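Since the original sample isn’t shown, here is roughly the kind of query described. A sketch; the root path and the filter method are my own assumptions, not the original code:

```csharp
using System;
using System.IO;
using System.Linq;

class Program
{
    // Any ordinary method can take part in the filter condition
    // of a LINQ query.
    static bool IsInteresting(string path)
    {
        return !Path.GetFileName(path).StartsWith(".");
    }

    static void Main()
    {
        // Select directories matching a criterion, longest name first.
        var dirs = from dir in Directory.GetDirectories(@"C:\")
                   where IsInteresting(dir)
                   orderby dir.Length descending
                   select dir;

        foreach (string dir in dirs)
            Console.WriteLine(dir);
    }
}
```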


Part 3 – LINQ Objects

The idea here is to use LINQ to return an anonymous type which is defined inline in the LINQ query itself. (Ignore the bad naming convention here 😉 )
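A minimal sketch of the technique, since the original snippet isn’t shown (the file names are made-up sample data):

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var files = new[] { "readme.txt", "app.config", "program.cs" };

        // Project each item into an anonymous type defined inline
        // in the query itself - no named class required.
        var infos = from f in files
                    select new { Name = f, Extension = f.Split('.').Last() };

        foreach (var info in infos)
            Console.WriteLine(info.Name + " -> " + info.Extension);
        // readme.txt -> txt
        // app.config -> config
        // program.cs -> cs
    }
}
```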


Part 4 – LINQ Grouping
This exercise was to play around with grouping in LINQ queries and the ways you can select out of the grouped query result. I still haven’t got the knack of this one just yet…
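For the record, a minimal grouping sketch (the word list is made-up sample data, not the original spike):

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        var words = new[] { "apple", "avocado", "banana", "blueberry", "cherry" };

        // Group by first letter, then select out of each group 'g':
        // g.Key is the grouping key, g itself is the sequence of members.
        var groups = from w in words
                     group w by w[0] into g
                     orderby g.Key
                     select new { Letter = g.Key, Count = g.Count() };

        foreach (var g in groups)
            Console.WriteLine(g.Letter + ": " + g.Count);
        // a: 2
        // b: 2
        // c: 1
    }
}
```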


That just about wraps it up for tonight… not a bad way to spend my time offline.