As a number of you might know, I have been tutoring programming subjects at my old university for a number of years now, in both C# and VB.NET. The one common theme I’ve seen in these classes is that despite having completed a mandatory object-oriented programming subject, a large proportion of students just don’t get OO. Furthermore, most of them have no real idea of how to solve a programming problem other than by a very heavy, top-down method:

  1. Students are given a programming assignment to solve, and 5 weeks to complete it
  2. They look at the problem at a holistic level, just to try and understand it
  3. Start designing a UI and use all the wonderful draggy-droppy components on their forms
  4. Spend 2 weeks tweaking their UI colours and text box alignments.
  5. Spend 1 week hacking together some code so that their UI starts to interact
  6. Spend 1 week tweaking more UI elements
  7. Realise there’s only 1 week left and they are missing 30% of the functionality
  8. Panic, ask for extensions, submit the assignment late, or all of those together.

After seeing this pattern over and over, I’ve tried different ways to address the problem in my classes. The first thing I tried was to express my sentiments, verbally and in writing on the student forums, that students should focus less on their UI up front: get the UI to basic completeness first, implement all of the functionality, and only then come back to polish the UI. After all, I was seeing a great number of assignments come through with functionality either untested, totally broken or missing altogether. Sadly my words seemed to fall upon deaf ears, and the quality of assignments was not up to the standard I was expecting.

Tackling the problem differently, last year I thought I’d introduce the concepts of unit-testing and TDD, and told students to think less about up-front design and instead focus on testing individual components of their system independently, building up to a solution. At the time a lot of students seemed receptive to the idea of unit-testing and test-first development, but when the assignments came in it looked like they had fallen back into old habits, and the quality of assignments seemed (on the whole) not much different from previous semesters’. After talking to some of them, I suspect the problem was that it became too difficult for them to manage this new “style” of writing their programs while keeping on top of their other coursework. The quickest and dirtiest approach appeared to work, and the mentality that this was purely another assignment for the sake of passing uni seemed clearer than ever. I’m guilty of this attitude too; when I was in uni it was more of a concern for me to complete the assignment as quickly as possible so I could spend quality time on other activities. The problem with the unit-test/TDD approach was that it offers a student little in the way of a reward system. Unit-tested software takes longer to write than its direct-implementation counterpart. This pays off in the long run, but when your “long run” is only as long as a 6-month semester, why invest the extra time? It’s a classic ROI question. What I needed was a way to motivate students to improve the quality of their code, without the seemingly large overhead of automated tests.

This semester I’ve shaken things up once again, and have dropped the unit-testing/TDD mantra. Instead, I’m focusing *heavily* on problem decomposition, units of work, single responsibility and class separation. If unit-testing doesn’t drive students to think about their application functionality upfront, then hopefully teaching them a way to make a large problem seem easily palatable will motivate them to start thinking code-first instead of UI-first. The concept of having one class per file seemed quite foreign and unnecessary when I discussed it in our first class last week, which didn’t fill me with confidence. Despite that, I committed to persisting with this approach, and reinforced the importance of responsibility separation under the guise of making code easier to understand, but more importantly as a method for decomposing a problem and forming a solution. This, I think, has been the key, because receptiveness to the idea suddenly picked up when I demonstrated building a simple Guess The Number game using a number of small, discrete components. The students saw what at first appeared to be a task far too large for a 90-minute class, but by identifying single functions (like “ask the user to guess a number” and “determine if the user guessed too high”) and tackling them one by one, we produced a solution whose final composition was easy to follow. #win.
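To give a flavour of that decomposition (this is my own reconstruction for the post, not the exact class demo), the game reduces to a handful of one-job methods, with one composing method tying them together:

```csharp
using System;

// Each method has exactly one responsibility; PlayRound just composes them.
public static class GuessTheNumberGame
{
    public static bool IsTooHigh(int guess, int target) { return guess > target; }

    public static bool IsTooLow(int guess, int target) { return guess < target; }

    // A pure function deciding what to tell the user, easy to reason about in isolation.
    public static string DescribeGuess(int guess, int target)
    {
        if (IsTooHigh(guess, target)) return "Too high!";
        if (IsTooLow(guess, target)) return "Too low!";
        return "Correct!";
    }

    // The composition: loop, ask, describe, stop when correct.
    public static void PlayRound(int target)
    {
        while (true)
        {
            Console.Write("Guess a number: ");
            int guess = int.Parse(Console.ReadLine());
            string verdict = DescribeGuess(guess, target);
            Console.WriteLine(verdict);
            if (verdict == "Correct!") return;
        }
    }
}
```

The point for the students wasn’t the game itself; it was that none of the individual pieces is intimidating, and the composition reads like the problem statement.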

I’d also changed my presentation style and was more boisterous than normal (for anyone who knows me in person, this would be pretty intense). My theory was that if I captured their attention, it would provoke them to treat my class differently to any of their others. After all, if I’m putting the effort in, they would reciprocate, wouldn’t they? Well, I’m pleased to say that after 2 weeks (yes, it’s early days still) I’ve had more students stay for the full class than before and, most importantly, when I’ve run overtime nearly 70% of my students have stayed back (up to an hour, until 10pm at night) to ask questions and learn more.

It remains to be seen whether this approach makes a difference to their assignments, but I’ve certainly seen a rather large improvement in student participation. I’m attributing this to a combination of making the classes more entertaining and slicing the content so it’s easily digested. Historically this has been done pretty badly at university level. Hopefully this time I’ve hit on a winning formula.

This week, I hit an interesting problem which I don’t feel was solved in the best possible way. The problem was that we needed to filter a list of objects based on some known criteria. Testing the specification is pretty important, as there is a series of AND’ed negating conditions (e.g. IF this AND NOT that AND NOT the other, etc.), in total about 5 unique criteria for the one filter.

Ordinarily this kind of implementation would lend itself nicely to the Specification pattern, given that all the information required to determine whether the specification is satisfied exists on the object being passed in at the time of evaluation. In my case, however, I had 3 conditions requiring that the object being evaluated must not exist in 3 different lists. To give you an example: the object under evaluation is a model, and this filtering step is part of a much larger process involving running the model through a recursive algorithm. At each step of the algorithm, the model object could have:

  • Run through the algorithm successfully
  • Been aborted during execution of the algorithm
  • Not yet been run through the algorithm

These three states are tracked by keeping 3 lists, one for each criterion. The filter I was working on had to check that the model under evaluation was NOT in any of those three lists. I realise the wordiness of my explanation doesn’t really clear the air, so let’s look at some code (with the relevant types changed).

This is my model:

    public class User
    {
       public bool Enabled { get; set; }
       public string Name { get; set; }
       public UserType TypeOfUser { get; set; }
    }

The code which uses my filter looks something like this. This guy gets called recursively based on the result returned from here. In this case, I’m building the allValidUsersFilter.

	public class ReplacementUserFinder
	{
		// ...ctor, fields, etc.

		public User FindReplacementUser(User userToReplace, IList<User> allPossibleUsers, IList<User> usersStillToBeEvaluated, IList<User> usersAlreadyEvaluated, IList<User> usersAbortedDuringEvaluation)
		{
			var validUsers = _allValidUsersFilter.Filter(allPossibleUsers, usersStillToBeEvaluated, usersAlreadyEvaluated, usersAbortedDuringEvaluation);
			var replacementUser = _bestReplacementCandidateFinder.Find(validUsers, userToReplace);

			return replacementUser;
		}
	}

and here is the interface for the AllValidUsersFilter; its purpose is to whittle the list of all users down to a list of potential candidates:

        public IList<User> Filter(IList<User> allpossibleUsers, IList<User> usersStillToBeEvaluated, IList<User> usersAlreadyEvaluated, IList<User> usersAbortedDuringEvaluation)
        {
            return allpossibleUsers.Where(x => 
                _isUserEnabledSpecification.IsSatisfiedBy(x) &&
                _isOverseasUserSpecification.IsSatisfiedBy(x) &&
                    !_isUserStillToBeEvaluatedSpecification.IsSatisfiedBy(x, usersStillToBeEvaluated) &&
                    !_isUserAlreadyEvaluatedSpecification.IsSatisfiedBy(x, usersAlreadyEvaluated) &&
                    !_isUserAbortedDuringEvaluationSpecification.IsSatisfiedBy(x, usersAbortedDuringEvaluation)
            ).ToList();
        }

The specification instances here are ctor-injected into my filter instance so that I can use behavioural-style assertions to check that each specification is invoked correctly by the filter.

The IsUserEnabledSpecification and IsOverseasUserSpecification just use the well-known ISpecification interface pattern, but in order to evaluate the other three I had to create an IListSpecification, and it feels somehow unsatisfying because the only difference between the two interfaces is that I have to pass the list into the IsSatisfiedBy method.
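For clarity, this is roughly the shape of the two interfaces as I’m using them (names approximate, and I’ve genericised the containment check rather than showing all three near-identical specifications):

```csharp
using System.Collections.Generic;

// The well-known specification shape.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// The variant I was forced into: identical except for the extra list argument.
public interface IListSpecification<T>
{
    bool IsSatisfiedBy(T candidate, IList<T> list);
}

// All three list-based specifications boil down to a containment check like this.
public class IsInListSpecification<T> : IListSpecification<T>
{
    public bool IsSatisfiedBy(T candidate, IList<T> list)
    {
        return list.Contains(candidate);
    }
}
```

Seeing them side by side makes the smell obvious: two interfaces that exist only because of where the data they need happens to live.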

I’m not happy with this result; we went through several different options before settling on this one, purely so we could move forward and come back to address it later.

Hoping someone out there might have some suggestions… After writing this post I’ve come up with another idea which would probably be cleaner; I need to try it out.
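For what it’s worth, one direction along these lines (purely an untested sketch, names invented) would be to capture the list at construction time, collapsing everything back to the ordinary ISpecification shape:

```csharp
using System.Collections.Generic;

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// Close over the list in the ctor so the spec exposes the plain interface again.
public class IsInListSpecification<T> : ISpecification<T>
{
    private readonly IList<T> _list;

    public IsInListSpecification(IList<T> list)
    {
        _list = list;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return _list.Contains(candidate);
    }
}
```

The trade-off is that the lists change on every recursive call, so these specifications would need to be constructed per call rather than ctor-injected into the filter, which complicates the behavioural assertions I mentioned above. I’m not yet convinced it’s actually cleaner.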

I’ve needed this a few times in the past and figured I best note it down for my own reference.

If you have created a number of objects and you want to be able to compare them during debugging, you can assign an Object ID to each instance of the object in the watch window like this:
[Image: Assign Object ID to object instance]

Once assigned, you can evaluate that particular object instance either in the watch window or in the expression evaluator:
[Image: Evaluating an object using its ID]

(The above images have been pilfered from Jim Griesmer)

In my last article on the topic of database development, I covered performing database migrations using MigratorDotNet.

Next, I wanted to look at the mapping process to the object model. I’d already decided I was going to use NHibernate as my ORM, but the devil was in the detail of hooking NHibernate up to the database. NHibernate’s XML mapping syntax is pretty straightforward and incredibly powerful, but once again it’s another language I’m forced to deal with. I’m most efficient working in my primary language, so the idea of slowing down to deal with these mapping files by hand was really out of the question.

Initially I looked into ActiveRecord (admittedly not in a whole lot of detail) but wasn’t excited by the framework. ActiveRecord performs its mappings from objects to database through attributes, and that just seemed to me like an abuse of SoC. Should I want to change my ORM (unlikely, but I’m working within that constraint on this project), it would have to support the same attribute syntax. Having said that, there is a lot of momentum behind ActiveRecord, so I’m still reserving judgement on its applicability.

So, apart from ActiveRecord, I went in search of an alternative to the XML mapping and recalled reading about Fluent NHibernate. In a nutshell, Fluent NHibernate is a replacement for the XML mapping layer in NHibernate, defining the mappings in C# instead. Essentially the same benefits I got from using MigratorDotNet (type safety, compile-time checking, etc.) become available for defining my DB-to-OM mapping. Sweet!

A very quick spike with the project, and I immediately liked what I saw:

	public class NoteMap : ClassMap<Note>
	{
		public NoteMap()
		{
			Id(x => x.NoteId).GeneratedBy.Guid();
			Map(x => x.NoteTitle).WithLengthOf(64);
			Map(x => x.NoteData);
		}
	}

The equivalent XML mapping file (which I won’t discuss in detail in this post) would have been at least twice the size and, more importantly, not refactor-friendly.

Because it’s in C#, I was easily able to unit-test this mapping with a little assistance. In fact it was so successful that it helped me discover a bug in my database migration script!

	[TestFixture]
	public class NoteMapping_Test : BaseTestMappings
	{
		[Test]
		public void TestCanAddNote()
		{
			Note note = new Note
			            	{
			            		NoteTitle = "Title",
			            		NoteData = "Data"
			            	};
			Session.Save(note);

			Session.Flush();
			Session.Clear();
			Note fromDb = Session.Get<Note>(note.NoteId);
			Assert.AreNotSame(note, fromDb);
			Assert.AreEqual(note.NoteData, fromDb.NoteData);
			Assert.AreEqual(note.NoteTitle, fromDb.NoteTitle);
			Assert.AreEqual(note.NoteId, fromDb.NoteId);
		}
	}


	public class BaseTestMappings
	{
		protected SessionSource Source { get; set; }
		protected ISession Session { get; private set; }

		[SetUp]
		public void SetUp()
		{
			Source = new SessionSource(new TestModel());
			Session = Source.CreateSession();
			Source.BuildSchema(Session);
			CreateInitialData(Session);
			Session.Clear();
			Session.Transaction.Begin();
		}

		[TearDown]
		public void TearDown()
		{
			Session.Transaction.Rollback();
			Session.Close();
			Session.Dispose();
		}

		public class TestModel : PersistenceModel
		{
			public TestModel()
			{
				Assembly ass = typeof(NoteMap).Assembly;
				addMappingsFromAssembly(ass);
			}
		}
	}

What’s happening here is that Fluent NHibernate allows me to instantiate an NHibernate session just by creating a model. The model contains a list of all the mappings applicable to my application, and I pass that directly into the NHibernate session. Any of my tests which I want connected to a database will now have transaction management and a session for querying. I can use this in my application too with almost exactly the same code: instantiate a session and pass in the model.

It works very well, and I’ve successfully replaced the XML file with a type-safe C# mapping engine. The problem this poses, however, is that I now have two places where I define what my data structures look like: one in the MigratorDotNet framework, and the other in Fluent NHibernate to map the data model. This means that any change to the model involves no fewer than 3 changes: the POCO, the Fluent NHibernate mapping and the MigratorDotNet migration.

Next time, I want to discuss ways of reducing this friction and streamline the refactoring process.

(There are some websites which I wish to attribute for some of the code and ideas I’ve expressed here, but I have since lost the links. If you see anything that’s yours, please let me know so I can credit it appropriately.)

This post is the first of what I intend to be an open-ended series of posts about my current experience working on a pet development project for learning purposes.


Not long ago, I set myself the task of experimenting with some new and upcoming projects in the .NET development space. The original intention started off basically as me being interested in finding out what they were and how they were built.

More recently, however, I’ve given myself a pet project with the task of building it using TDD and, more importantly, trying out these different projects to work out a solution which I’m comfortable has zero friction (or very little). I’m not asking a lot: just proper SoC, and at the end I’d (theoretically) have a project which is a first-round example of TDD and proper design.

  • I want to provide multiple UIs for the core application (Web, Win, WPF) – I’ve seen a number of application architectures which employ practices to separate presentation from logic, but ALL of them either failed over time or never actually tried moving to a different presentation platform (at which point they too probably would have failed). For this reason, I wanted to build something which has been built in parallel with several UIs from day zero.
  • I want to be able to rip out the database and put a new one in – This is more academic than anything else. IMO there are a lot of unsubstantiated claims floating around the software world that proper abstraction will *easily* allow you to swap datastores on a whim. Well, I want to put that to the test.

So, in pursuit of these goals, I’ve started writing my simple note-taking application, and over a very disparate period of weeks I’ve done a lot of reading and discovering of tools for the ORM, migrations and the object model.

In terms of data migration, I was impressed with what MigratorDotNet was capable of. It basically allowed me to speak one programming language, and with a little NAnt scripting I was able to generate a database migration system.

using System;
using System.Data;
using Migrator.Framework;

namespace DBMigrations
{
	[Migration(1)]
	public class _0001_CreateNoteTable : Migration
	{
		public override void Up()
		{
			Database.AddTable("Note", new Column[]
              	{
              		new Column("NoteId", DbType.Guid, ColumnProperty.PrimaryKey | ColumnProperty.NotNull, "NEWID()"),
              		new Column("NoteTitle", DbType.String, 64, ColumnProperty.NotNull),
              		new Column("NoteData", DbType.String, 32768)
              	});
		}

		public override void Down()
		{
			Database.RemoveTable("Note");
		}
	}
}

It’s not that I strictly had a major problem with the thought behind the approach offered by MigratorDotNet, but there were two things I didn’t like about the system:

  • Database schema names are defined as strings. Typos aren’t picked up at compile time; they’re only picked up at unit-test time (if you’re diligent). You could work around this by using string constants to define the column and table names, but then you’d have to maintain that list too.
  • Changes to the database have to be run independently of the application. In my case I’d used the NAnt provider to perform the migration, but I intended to integrate it into the application so that it could self-upgrade without the need for an external component.

At the time I started the project this system seemed fine, and I was able to write tests for the migrations using a generic base class:

	[TestFixture]
	public class MigrationTester<T> where T : Migration, new()
	{
		public virtual void AssertMigrateUp(T migration) {}
		public virtual void AssertMigrateDown(T migration) { }

		[Test]
		public void Test_Migrate()
		{
			MockRepository repos = new MockRepository();
			ITransformationProvider mockDB = (ITransformationProvider)repos.Stub(typeof(ITransformationProvider));

			T migration = new T { Database = mockDB };


			Exception thrownException = null;
			try
			{
				migration.Database.BeginTransaction();
				migration.Up();
				AssertMigrateUp(migration);

				migration.Down();
				AssertMigrateDown(migration);
			}
			catch (Exception ex)
			{
				thrownException = ex;
			}
			finally
			{
				migration.Database.Rollback();
				if (thrownException != null)
				{
					throw new Exception(String.Format("Failed to migrate up and down for migration {0}", typeof(T).ToString()), thrownException);
				}
			}
		}
	}

	public class _0001_CreateNoteTableTest : MigrationTester<_0001_CreateNoteTable>
	{
		public override void AssertMigrateUp(_0001_CreateNoteTable migration)
		{
			Assert.IsTrue(migration.Database.TableExists("Note"));
			Assert.IsTrue(migration.Database.ColumnExists("Note", "NoteId"));
			Assert.IsTrue(migration.Database.ColumnExists("Note", "NoteTitle"));
			Assert.IsTrue(migration.Database.ColumnExists("Note", "NoteData"));
			
			// ...etc... //
		}
	}

So the first part was done – I had a system for migrating my database and a means for testing it.

This evening I wanted to wish someone a happy birthday. In my infinite geekdom, I figured writing a tiny app to do it would be a nice way to waste 20 minutes this evening. I started off writing in Notepad (because this really isn’t worth opening Visual Studio for), and compiled using csc just to make sure it worked in the end (oops, forgot a few semi-colons…).

using System;

public class BirthdayWisher
{
    public BirthdayWisher(string personName, DateTime birthDate)
    {
      this.PersonName = personName;
      this.BirthDate = birthDate;
    }
    public readonly DateTime BirthDate;
    public readonly string PersonName;

    const string BirthdayMessage = 
        "\r\nDear {0}! Wishing you a happy birthday for {1}. Congratulations on being {2} years old!";

    public void WishBirthday()
    {
      Console.WriteLine(String.Format(BirthdayMessage, PersonName, BirthDate.ToString("MMMM dd"), DateTime.Today.Year - BirthDate.Year));
    }
}

public class NaomiBirthday
{
  [STAThread]
  public static void Main(string[] args)
  {
    BirthdayWisher naomi = new BirthdayWisher("Naomi", new DateTime(1983, 11, 21));
    naomi.WishBirthday();
    
    Console.WriteLine("\r\nPress any key to end.");
    Console.ReadKey();
  }
}

So I sent her an IM message with that code and the following instructions to run it:
1. Copy the text above and save it to your desktop as a file called “birthday.cs”
2. Click “Start –> Run” and paste the following into the box and then press enter.

%windir%\system32\cmd.exe /c “%WINDIR%\Microsoft.NET\Framework\v2.0.50727\csc /out:”%USERPROFILE%\birthday.exe” “%USERPROFILE%\desktop\birthday.cs” > nul&&”%USERPROFILE%\birthday.exe”&& del “%USERPROFILE%\birthday.exe” ”

Yes, I am a massive geek, but now I can re-use the same code for someone else’s birthday. Yay for reuse! The hardest part about this whole thing wasn’t the code, but the command-prompt syntax required to chain commands properly and get it to execute as I’d expect. Even after spending maybe 30 minutes on it, I don’t think I got it right, and certainly it could be improved upon… I guess that leaves room for Happy Birthday v2 :)

It’s no secret that I’ve had a job on the side for a number of months now, tutoring programming at the local university. I won’t get into the gritty details about the experience (at least not yet), but I want to briefly talk about my experience catching out students who cheated on their assignments.

The subject is Visual Basic .NET. I’m not a great VB.NET programmer; in fact I’m not a VB.NET programmer at all, but I’m able to stay one week ahead of the students and generally have an answer to the questions they ask. If I don’t have an answer, I at least offer the consolation of providing a solution in C# :). The major assessment for the subject is to write a project (based on some given specs) in VB.NET and submit the working binaries plus all code in a compressed ZIP file, which we mark against a marking scheme.

When a project works, I don’t really need to delve into too much depth when assessing the solution. The fact that it works indicates the student has understood at least the majority of what was asked, and I’m satisfied.

However, when it doesn’t work, my triggers get fired. Why did they submit an assignment which doesn’t work? I attach a debugger and start looking at code. Any programmer will understand what I’m about to say: code is a lot like a face. You see it once, and in future, although you might have a problem figuring out WHERE you originally saw it, you’ll recognise the same code.

Where am I going with all this? Well, firstly, I found one assessment which didn’t work. Attached debugger, saw code, imprinted it in my brain. 3 DAYS LATER I opened another person’s assignment, and it didn’t work (hmmm, I’ve seen an error like that before…), so I attached a debugger and saw the code (hmmm… I know I’ve seen that code before…). Hopefully now the title of this post makes more sense. So yes, I did indeed find two students who cheated. They used the same code, and made obvious attempts at covering it up. Not terribly clever attempts, mind you. Which then led me to wonder how many others had done the same thing. And here’s where I talk about my QnD app to check for duplicates.

Some time back, I posted about some technologies I wanted to play around with. This was a good opportunity to start on that, along with some .NET Framework 3.5 features I’ve been meaning to try out.

The basic idea behind the utility is to search through a folder (and its subfolders) and find files which are identical (or similar). I want it to be modular so that I can write several different algorithms for determining similarity and swap them in and out easily. The first iteration I knocked together in a few hours, and sure enough I found another set of students who had copied their work. The first algorithm is just a straight file compare. Nothing fancy, just straight text comparison. If two students have the same code byte for byte, they would need the excuse of the century to explain how it is not plagiarism.
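The core of that straight-comparison pass is nothing more elaborate than grouping files by their exact contents. A simplified sketch (my own, not the attached code, which hashes contents rather than comparing pairwise for speed):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

public static class DuplicateFinder
{
    // Hash each file's bytes and group the paths whose contents hash identically.
    // Identical hashes of full file contents imply byte-for-byte identical files.
    public static IEnumerable<List<string>> FindExactDuplicates(IEnumerable<string> filePaths)
    {
        var groups = new Dictionary<string, List<string>>();
        using (var sha = SHA256.Create())
        {
            foreach (var path in filePaths)
            {
                string key = Convert.ToBase64String(sha.ComputeHash(File.ReadAllBytes(path)));
                if (!groups.ContainsKey(key)) groups[key] = new List<string>();
                groups[key].Add(path);
            }
        }
        // Only groups with more than one file are interesting.
        return groups.Values.Where(g => g.Count > 1);
    }
}
```

Because the comparison strategy lives behind a single method, swapping in a fuzzier similarity algorithm later is just a matter of substituting another implementation.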

Although I haven’t added Castle Windsor to it yet, I’ve at least written it in such a way that it will be easy to implement. Code is attached below in case you wish to go forth and conquer.

DuplicatesChecker Code

I was recently reading a post about writing non-threadsafe code, which talks about the main peril of multi-threading and one way you can work around it.

I’ve long been a believer that doing anything multi-threaded is fraught with danger, and that you have to tread incredibly carefully when doing so. I say this from experience. What I learnt from reading that post wasn’t in the content but in the comments, which mentioned the Interlocked class for performing simple, thread-safe increments and decrements.

So I decided to try it and see what the benefit really is, and I was surprised by the results! I ran my own profile against 3 scenarios:

  1. No thread safety (comparatively fast, though it gave incorrect results)
  2. Locking using the “lock” keyword (correct, but very slow, by a magnitude of nearly 10x)
  3. Locking using the Interlocked class (correct, and fast; faster than no thread safety in some test runs)

Clearly these results aren’t scientific, but they’re good enough to give relative indicators of performance. I’ve reproduced the code below.


using System;
using System.Diagnostics;
using System.Threading;
using NUnit.Framework;

namespace ThreadingExample
{
	public interface IThreadTest
	{
		int Value { get; }
		void Debit();
		void Credit();
	}

	public class NonThreadSafe : IThreadTest
	{
		public int Value { get; private set; }

		public void Debit()
		{
			Value--;
		}

		public void Credit()
		{
			Value++;
		}
	}

	public class ThreadSafe : IThreadTest
	{
		public int Value { get; private set; }

		object lockSentinel = new object();

		public void Debit()
		{
			lock (lockSentinel)
			{
				Value--;
			}
		}

		public void Credit()
		{
			lock (lockSentinel)
			{
				Value++;
			}
		}
	}

	public class ThreadSafeUsingInterlocking : IThreadTest
	{
		private int value;
		public int Value
		{
			get { return value; }
			private set { this.value = value; }
		}

		public void Debit()
		{
			Interlocked.Decrement(ref value);
		}

		public void Credit()
		{
			Interlocked.Increment(ref value);
		}
	}

	[TestFixture]
	public class TestClass
	{
		[Test]
		public void TestNonThreadSafe()
		{
			NonThreadSafe nts = new NonThreadSafe();

			ExecuteThreadedTest(nts);

			Assert.AreEqual(0, nts.Value);
		}

		[Test]
		public void TestThreadSafe()
		{
			ThreadSafe ts = new ThreadSafe();

			ExecuteThreadedTest(ts);

			Assert.AreEqual(0, ts.Value);
		}

		[Test]
		public void TestThreadSafeUsingInterlocking()
		{
			ThreadSafeUsingInterlocking tsui = new ThreadSafeUsingInterlocking();

			ExecuteThreadedTest(tsui);

			Assert.AreEqual(0, tsui.Value);
		}

		private void ExecuteThreadedTest(IThreadTest threadTest)
		{
			int maxIterations = 99999999;
			DateTime start = DateTime.Now;
			Thread t1 = new Thread(() =>
			{
				for (int i = 0; i < maxIterations; i++)
				{
					threadTest.Credit();
				}
			}
			);
			t1.Name = "t1";

			Thread t2 = new Thread(() =>
			{
				for (int i = 0; i < maxIterations; i++)
				{
					threadTest.Debit();
				}
			}
			);
			t2.Name = "t2";

			t1.Start();
			t2.Start();

			t1.Join();
			t2.Join();

			DateTime finish = DateTime.Now;
			Debug.WriteLine(String.Format("Took {0}ms to complete", (finish - start).TotalMilliseconds));
		}
	}
}

I’ve just read a blog post about why you should pass interfaces instead of concrete classes as arguments to your methods.

I normally try to think about the most appropriate usage of interfaces for my own classes, but what this post alerted me to was the necessity of using interfaces when working with framework classes:
e.g. IDictionary instead of Dictionary

I believe the reasons the author discusses are quite valid… It’s something I’m going to do more actively when writing code.
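A trivial sketch of the difference (my own example, not from the post): a method that demands the concrete Dictionary shuts out every other implementation, whereas one written against IDictionary accepts a Dictionary, a SortedDictionary, or a test double with equal ease.

```csharp
using System.Collections.Generic;

public static class FlagCounter
{
    // Programmed against the interface: any IDictionary implementation works here,
    // rather than coupling the caller to one concrete framework class.
    public static int CountEnabled(IDictionary<string, bool> flags)
    {
        int count = 0;
        foreach (KeyValuePair<string, bool> pair in flags)
        {
            if (pair.Value) count++;
        }
        return count;
    }
}
```

Had the parameter been declared as Dictionary<string, bool>, a caller holding a SortedDictionary would be forced to copy its data into a new Dictionary just to call the method.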