Resizing Raspbian image for QEmu

June 2nd, 2013 by Xerxes

I’m running Raspbian (Wheezy 2013-05-29) under QEmu just to make dev a bit quicker. The problem is that the Raspbian image is 2GB, which leaves only about 200MB free for apps/data.

To resize, start by growing the image file and giving it an extra 2GB (this is run on the host):

> qemu-img resize 2013-05-29-wheezy-armel.img +2G
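If you want to confirm that the image actually grew, qemu-img can report the new virtual size:

> qemu-img info 2013-05-29-wheezy-armel.img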

Now boot into Raspbian and issue the following to resize the partition (from within linux):

1. > sudo fdisk /dev/sda
2. Print the partition table ("p"). Take note of the starting block of the main partition
3. Delete the main partition ("d"). Should be partition 2.
4. Create (n)ew partition. (P)rimary. Position (2)
5. Start block should be the same as start block from the original partition
6. Size should be the full size of the image file (just press "enter")
7. Now write the partition table (w)
8. Reboot (sudo shutdown -r now). After the reboot:
9. > sudo resize2fs /dev/sda2
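
For reference, the whole exchange looks roughly like this (the exact prompts vary between fdisk versions, and the start sector is whatever "p" reported for partition 2):

> sudo fdisk /dev/sda
Command (m for help): p          <- note the start sector of /dev/sda2
Command (m for help): d
Partition number (1-4): 2
Command (m for help): n
Command action: p
Partition number (1-4): 2
First sector: <same start sector as the original partition>
Last sector: <press Enter to use the rest of the image>
Command (m for help): w
> sudo shutdown -r now
> sudo resize2fs /dev/sda2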

 

When that’s all done, run “df -h” and you should have plenty of space.


				

git-describe and the tale of the wrong commits

December 20th, 2010 by Xerxes

Like most things in life, our current software project is powered by Git, and historically we’ve been using git to generate the version numbers for our builds by sucking them out of the tags in our repository via git-describe.

$ git describe --tags --long --match v*
v0.3.0-0-g865eb5f

This works wonderfully well when you have a linear commit history, as the most recent tag is always the one you want.

Recently, however, we switched to using Vincent Driessen’s Git branching model, which opens up a serious hole in the simple versioning system. When you prepare a hotfix release for master, you tag the hotfix branch at the point the hotfix is being applied. This has the unfortunate side-effect of screwing up the way git describe determines the “nearest” tag.

I’ve created a sample repo demonstrating this problem. If you’re not keen on grabbing the repo, follow along with the screenshot below.

Basically what we see here is a hotfix being applied for release v0.1.0 while development continues on v0.2.0. The hotfix is merged back into develop (but we decided not to merge it into master, as the hotfix could wait until the next release).

Running the git-describe command above on develop *SHOULD* yield v0.2.0-4-gba68c2f, however it comes back as v0.1.1-4-gba68c2f, which leads to our builds being completely mis-versioned. (We just versioned 0.2.0 code as 0.1.1 – how shit are we?)

Okay, so why is git picking up my v0.1.1 tag instead of the v0.2.0 tag? Turns out it has a lot to do with how git-describe is documented to work:

If multiple tags were found during the walk then the tag which has the fewest commits different from the input committish will be selected and output. Here fewest commits different is defined as the number of commits which would be shown by git log tag..input will be the smallest number of commits possible.

Which is all well and good; however, the describe algorithm ended up traversing the merged branch down from develop and erroneously (for our purposes) finding v0.1.1, because it looked closer to HEAD than v0.2.0. (Well, specifically in this example case they’re the same number of commits away, but the tag reached through the merged branch seems to be more appealing to git.)

Digging around a bit more, I found that the git log command actually has an argument to make it follow only the first parent of merge commits: enter --first-parent.

Follow only the first parent commit upon seeing a merge commit. This option can give a better overview when viewing the evolution of a particular topic branch, because merges into a topic branch tend to be only about adjusting to updated upstream from time to time, and this option allows you to ignore the individual commits brought in to your history by such a merge.

…and when you use that to find the appropriate tag, it does!
$ git log --oneline --decorate=short --first-parent | grep '(tag:' | head -n1
b4aa13c (tag: v0.2.0) continued work on develop

So first-parent history search behaviour is what I want, but it’s not available on git-describe. Turns out, I’m not the only one who’s come across this… It’s a shame, really, because describe does everything else perfectly, except for the algorithm to find the closest tag.

Unfortunately there’s no clear work-around, or even an indication of when a --first-parent argument will be made available for git-describe, which meant I had to come up with this monstrous flake of rake script to get the build version identifier (formatting doesn’t do it justice):

def git_version_identifier
  tag_number = `git log --oneline --decorate=short --first-parent | grep '(tag:' | head -n1`

  version_number = /v(\d+)\.(\d+)\.(\d+)/.match(tag_number)

  `git describe --tags --long --match #{version_number}`.chomp
end

which (in a nutshell):

  1. Finds the appropriate tag number for the current branch as per the bash-fu you saw earlier
  2. Parses the previous output for the tag identifier (vx.x.x as per our convention)
  3. Fires THAT tag id into git describe to get it to generate the identifier properly, bypassing its search mechanism

Seems convoluted and I’m not really happy with the result. Hoping that someone out there knows something I don’t.
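
For what it’s worth, the whole dance can also be done without rake at all – a rough shell equivalent (assuming our vX.Y.Z tag convention) would be something like:

#!/bin/bash
# find the nearest first-parent tag, then feed it back into git describe
tag=$(git log --oneline --decorate=short --first-parent | grep '(tag:' | head -n1 |
      grep -oE 'v[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
git describe --tags --long --match "$tag"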

Scripting Google Chrome for OSX using AppleScript

September 1st, 2010 by Xerxes

When it comes to software tools, I like to spend my time bleeding on the edge, where possible. One of the downsides to this, however, is when you hit the carotid artery and bleed your tech heart all over the place. Having recently experienced this problem while using the dev channel build of Chrome, I’m fairly cautious of all the tabs I have open, and losing them was a (cue Elton John…) sad, sad situation, to say the least.

More recently I became the custodian of an older generation MacBook Pro as a challenge to see how much of my life could transport over to OSX from Windows. One of the things I wanted to make sure of was that (at the very least) I managed to keep a backup of all my open tabs, should the shit hit the fan and a dev build of Chrome for Mac bite the proverbial. The nice thing about Chrome is that when your normal life is replaced with a gLife, you can sync all your bookmarks with big brother and they’ll keep them around for you to access anywhere.

So after a few tweets and pointers in the right direction, I learned that the Mac has long had this thing called AppleScript – basically a language which can be used to automate any part of the operating system and the programs it hosts. After a lot of googling, reading help files and finding the incredibly useful Ukelele, I managed to scrounge together the following script (saved here in case I ever need it again):

tell application "Google Chrome" to activate
tell application "System Events"
	-- bookmark all tabs
	keystroke "d" using {command down, shift down}
	tell application process "Google Chrome"
		repeat until sheet 1 of window 1 exists
		end repeat
	end tell
	
	keystroke ((current date) as string)
	
	-- tab to bookmark folder list
	key code 48
	
	keystroke "Other"
	key code 124 -- retain focus on "Other..." folder
	key code 125 -- down arrow for "Tabs" subfolder
	
	keystroke return
end tell

It’s not a highly dynamic or robust script, but then again it doesn’t need to be – it’s running in a controlled environment and does the job of what I need (near) perfectly well.

I scheduled an iCal event to execute the script at 1am, and hey presto, I’m happy to be playing around on the edge once more.
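(If you’d rather not involve iCal, the same job can be kicked off from cron via osascript – just a sketch, assuming the script above is saved as ~/Scripts/backup-tabs.scpt and you’re logged in when it fires:)

0 1 * * * osascript ~/Scripts/backup-tabs.scpt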

Notice in a previous paragraph I said the script is near perfect. One of the limitations of AppleScript is that its higher-level functions require a library of commands to be built into any application you intend to script. Currently, the script library for Chrome is pretty average, so everything has to be done by simulating keystrokes. The only thing I can’t do (or at least couldn’t immediately see how to do) is prune old backups – the only way I’ve found to do this is to highlight the bookmark folder in the bookmarks bar and right-click to delete it.

Either way, I’m happy with the result, and it’s another language I can add to my arsenal.

BASH – [: too many arguments

August 23rd, 2010 by Xerxes

I’ve been writing a little bash script to wrap up the functionality in the JIRA CLI, and I’d noticed that sometimes my script was spitting out the following error:

sh.exe": [: too many arguments

The code in question was:

j () {
if [ -z $3 ]; then
echo jira --action progressIssue --issue "$1" --step "$2";
...
...

It turns out the problem was that the variable $3 in the second line was being substituted directly into the if statement without being quoted, so Bash treats any input in $3 containing spaces as multiple arguments (hence the “too many arguments” error). Duh.

The solution is to quote the $3 variable in the conditional so it’s treated as a single argument. Shouldn’t make this mistake again…
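
For reference, the fixed version of the conditional looks like this:

j () {
if [ -z "$3" ]; then
echo jira --action progressIssue --issue "$1" --step "$2";
...
...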

Setting up Time Machine on a LaCie Network Space, wireless-ly

August 22nd, 2010 by Xerxes

The marketing of the LaCie Network Space 2 claims that the device is Time Machine compatible, but when you try and dig for instructions on how to get it to work, the tubes become very, very empty.

If you’re connecting your Network Space via USB, I imagine the device works out of the box. i.e. you just connect it to the Mac and Time Machine will find the drive and use it. In my case, my Network Space is connected to a wireless router, and I wanted to use it as if it was a Time Capsule.

Ordinarily, when you try and set up a Time Capsule, Time Machine will search for the Time Capsule over Bonjour, but because the Network Space doesn’t identify itself as a Time Capsule, it doesn’t show up in the list. As it turns out, you can still do this, but you first need to tell the Mac to chill out and show Time Machine-capable devices which AREN’T a Time Capsule. In Terminal:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

Now, when you “Show Supported Devices” in Time Machine, the OpenShare share is discovered and you’re good to go.

Command Line Replacement for Gitk

August 10th, 2010 by Xerxes

It’s no surprise I love git. To date, the worst part about git, however, has been the lacklustre (downright shithouse) state of the pre-packaged GUI tools – git gui and gitk.

Well here’s a command-line replacement I’ve devised for gitk which does a reasonable job at visualising a project tree.

git log --oneline --decorate=full --graph --remotes

Sick.
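
If you use it a lot, it’s easy to bake into an alias (the name lg here is arbitrary):

git config --global alias.lg "log --oneline --decorate=full --graph --remotes"

After that it’s just git lg.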

Design of (Almost) Everyday Things: Impact Drill

August 7th, 2010 by Xerxes

Hopefully Don will forgive me for the creative licence I’ve taken in bastardising the title of his book, but it seemed quite fitting given my recent experience with my Ryobi impact drill.

Basically what we’re talking about here is a standard, off-the-shelf drill with two operating modes: Drill mode and Impact/Hammer mode.

Without getting into the detail of why you need two drill modes, the point remains that the drill itself contains a switch to flick between both modes.


[Photo: the drill’s mode selector switch, with the drill and hammer glyphs printed beneath it]

So the question is – which way do you need to flick the switch in order to engage hammer mode? Do you flick it right so that you can see the Hammer glyph, or do you flick it to the left so that the Hammer is covered up?

I asked a number of people this question, and interestingly NOBODY got the answer right. If you answered “flick it to the left so that the Hammer is covered up” then you got it WRONG. The switch actually needs to be flicked to the right. I can’t explain why it was done like this, and it seems completely counter-intuitive to me. There are much better ways of representing this modal change – the most obvious being if the glyphs themselves were shown on either side of the switch rather than at the bottom, with their positions swapped. The switch would then be used to “engage” a particular mode, and the ambiguity is removed by having to physically move the switch closer to the appropriate glyph.

So in the course of typing this blog-post, I thought about giving Ryobi a chance to respond and sent them a brief description of the problem and my proposed solution. Sadly I haven’t heard back from them. Shows how much they care about customer feedback in general. Ryobi fail.


Chrome App Extensions

July 29th, 2010 by Xerxes

In the last few weeks I happened to come across Chrome web-apps. This feature of Chrome 6 obsoletes the Create Application Shortcut feature and replaces it with a more capable (or at the very least better-thought-out) system for hosting a web-application within the browser. Just like Chrome Extensions, the web-applications are packaged in .CRX files and are installed using the Extension Manager (chrome://extensions).

At their heart, the web-apps can run in one of two modes:

  1. Server-less App – All of the content required for the web-app is self-contained within the package. All code and images are stored on the local machine, and there’s no network access required by the plugin. These plugins can be installed by anyone, from anywhere
  2. Hosted App – On the other hand, if you have a website/webapp already written, you can easily package up the relevant URLs, give the app a few hi-res icons and you’re good to go. The only catch here is that the only domains able to serve the extension are those registered in the app itself.

Upon finding out about this feature, I found the default apps which come with Chrome in the %LocalAppData%\Google\Chrome\Application\<version>\Resources\* folder. Here you’ll find apps for GMail, GDocs and GCal. To install them, just open up the Extension Manager and select Load Unpacked Extension. The new icons will be shown on your New Tab screen.

Recently, I’ve become a big fan of Evernote, and have been using it a fair amount both for taking notes and for sharing content with myself across several computers. I figured it was an opportune moment to create an Evernote web-app extension for Chrome. It was pretty easy too – just whack in the right URLs and create a nice transparent icon.
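
For a rough idea of what that involves, a hosted-app manifest.json of the era looked something like the following – the URLs and icon filename here are illustrative rather than the exact contents of my extension:

{
  "name": "Evernote",
  "version": "1.0",
  "app": {
    "urls": [ "http://www.evernote.com/" ],
    "launch": { "web_url": "http://www.evernote.com/Home.action" }
  },
  "icons": { "128": "icon_128.png" }
}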

Installing the plugins:

Firstly, you need to be running Chrome 6 (any build from the dev channel), so it’s immediately not suited to non-techs (at the time of writing). Secondly, you need to enable app mode in Chrome by running it with the --enable-apps command line option. The easiest way of doing this is to modify the shortcut you use to run Chrome (here’s a more detailed description of enabling apps in Chrome).

Now, as a developer, it was easy for me to create this extension and install it myself. Providing it to others, however, is not so simple. As mentioned previously, hosted app extensions can only be installed by downloading them from the website for which the app is written. This means that to use my Evernote plugin, it would ideally need to be hosted on www.evernote.com. There are other ways though: you can install the plugin by right-clicking the file, saving it to disk, and then dragging and dropping the extension into Chrome from where you saved it.

Below I’ve attached the extensions for both Evernote and a quick one I whipped up for Google Reader. I’m looking forward to creating more of these :)


Adding attachments of any type to WordPress

July 20th, 2010 by Xerxes

All content here is written in relation to WordPress 3.0 self-hosted

Amazingly, I failed to find a single page on the net describing this problem, or how to attach arbitrary file types to WordPress.

It seems that the main problem is that the Content Uploader is geared towards uploading media – GIF, PNG, JPG, PDF, DOC etc… But if you want to upload other files (eg: .ZIP files) then the uploader will error out with a very useless “File type does not meet security guidelines. Try another.”

The problem here is that WordPress looks at the extension of the file and blocks the content from being uploaded, but then provides no way to update the list of allowed extensions/MIME types.

Eventually I found a plugin (pjw-mime-config) which allows you to modify the registered MIME types supported by a WordPress installation. Simply install the plugin, visit the config section (Settings -> MIME Types) and add your extension/MIME type to the list.

JetBrains: Teach with pleasure!

May 8th, 2010 by Xerxes

Whenever I’m teaching my class, with my laptop connected to the projector and an example running on the screen, I make a conscious effort to NOT use ReSharper, because its refactorings do so much and the keystrokes are so fast that my students are unable to keep up with the content I’m presenting. Remember, in most cases these are students who don’t know how to create a property or a field or variable, so my “Extract Field” refactor just completely loses them. Just because I make a conscious effort not to, however, doesn’t mean I don’t subconsciously make the mistake every now and then. And almost every time I make the mistake I have to explain what just happened, what the tool was, and what it did for me. Most classes just nod their heads and placate me so that we don’t run overtime (again). Keep the loud, talky-talky man happy and we might all get out before 10pm…

Except this semester. For the first time, I had students come up after the class and ask me about ReSharper, and what it did. I gave them a 5-minute whirlwind tour of the tool, showing them the extract field/variable/method refactorings, the move file refactoring, live templates and the integrated unit-testing. It seemed like they were hooked. In the weeks following, I was inspecting code they were writing on their personal computers, and noticed the ReSharper menu in the VS toolbar. Seemed like a victory! One step closer to bridging the gap between academia and industry.

This got me thinking about ways I could further motivate my students to put more effort into this subject, and had an idea of getting in touch with JetBrains to see if they’d be interested in partnering by providing a licence for ReSharper to give away to the student who performs best in the subject. The idea would be for me to dedicate 10-15 minutes at the start of my lesson to actually explain the tool to the class, why it’s important for them to use this particular tool and give them the demo to blow their metaphorical pants off. This all started to brew about 2 months ago…

Well as of last night, the first part of my plan fell into place. After long discussions with JetBrains, I’m pleased to say they’re happy to award a personal edition licence of ReSharper for C# to the best student in the class, for each semester as an ongoing initiative! I really wasn’t expecting such a committed outcome from them, but thoroughly pleased with the result!

Now all I have to do is evangelise…I figure that the students who get the best marks are the ones who have put genuine care and effort into the project, and one of them deserves to be rewarded with a tool which makes them more productive and helps them enjoy what they do more.

Just wanted to shout-out here to anyone who reads this blog and hasn’t used ReSharper – you really are missing out.

The Right Person For The Job

April 6th, 2010 by Xerxes

I’ve been involved in the recruitment processes for many companies over the last few years, and I’ve seen time and time again the same common (simple) mistakes which often affect how a candidate is perceived.

First of all – your resume and cover sheet (or equivalent thereof) are of paramount importance. Their primary goal is to introduce us and start a (mental) conversation. When you apply for a job, you’re basically telling me “I exist! I have the skills you need, and I’m the kind of person you’d like to work with”. The first part seems quite trivial, but remember I DON’T KNOW YOU and up until about 5 seconds ago YOU DIDN’T EXIST. So you want to take every possible opportunity to help me get to know you better.

Don’t tell me things I already know. In particular I don’t want you to copy-paste content from our corporate website and spew it back to me. I’m reading your resume because I want to know about you. If you want to show your great research skills, find some other way of doing it… And even if you ARE going to copy-paste, at least change the content so it reads as if you wrote it from your perspective, instead of leaving it in the corporate marketing speak so commonly read (true story…). Humour me, please?

Me personally – I’m a stickler for grammar, spelling and punctuation. If you’re submitting a resume as a Word doc, I have to assume that you’re submitting it not only to me but to a number of other people too. If you apply for 20 jobs, each of these companies has 2 people reading your resume, and each person spends 5 minutes reading it, that’s over 3 hours of total reading time during which your document is analysed and scrutinised. In that time, someone is guaranteed to find spelling mistakes – and these reflect on your attention to detail. If you were writing a book that would take 3 hours to read, you’d (hopefully) put a bit of effort in to make sure that the book had minimal errors – so why would you not do the same for your 3 page resume? Which leads me nicely into the next point…

Keep your resume short. Don’t send me 50 pages. Don’t send me 10 pages. 3 pages is perfectly fine and 5 is pushing the limit. If you can’t succinctly condense your work experience into 5 pages, you’re not concentrating hard enough on telling me your best skills to suit the role we’re hiring for. I’m not interested in reading about a job you had 10 years ago where you made the best damn Cookie Man cookies in Central Plaza. Tell me about your work experience which is relevant to the job you’ve applied for.

But don’t limit yourself to just “work” experience. I’m interested in reading about your pet-projects, (software-related) things that you do outside hours, and maybe even some that aren’t so software-related. Please don’t write a 4-line paragraph extolling your abilities at feeding the ducks in your local park, though. Mentioning it as one of your interests gets me thinking about how you’re not just a 9-5 desk jockey, and anything more is just detracting from the goal at hand (see above).

Lying on your resume. I know a few people who quietly admit that they lie on their resume about their skills and abilities, arguing that they can always learn said technology/tool in time and no-one would ever know. Things start becoming unstuck when you don’t know, and you’re caught with your pants at your ankles struggling to explain how it is that you’re able to re-compile the Linux kernel using only a pocket knife and some paper-clips. Just don’t do it. This practice reflects badly on you when it backfires and if you do happen to “get away with it” during the hiring process, it will sooner or later reveal itself once we’re working together.

The Manager’s Guide To Technology. Here, I’m referring to the giant list of every single technology you think you’ve ever come in contact with, written up as a “skill”. XML isn’t a skill. Nor is Microsoft Word. I’ll pat you on the back for opening a Word document if you really want, but I won’t be using that as criteria for offering you a job. If you must, then please only list things that are genuine skills/practices: TDD, BDD, CI, whatever… If you need to explicitly mention software you’ve specialised in, then mention that separately. Whilst it might be an achievement on your part, reading hundreds of them is just noise on my part.

Be creative. Throw in a little design, or lay out your content differently to the standard Page template. It shows me you’re willing to give your resume some attention in order to grab mine. I like being grabbed (quote, end-quote). Besides, it’s an opportunity to show me your creativity. I wouldn’t recommend going crazy and being radical about your design – after all, I still need to read it and follow it. But all things considered, being different is an advantage – you’re more likely to be remembered that way.

They’re pretty much the main ones which come to mind. I reserve the right to change this list the next time I see something stupid.

Lessons from the classroom

March 16th, 2010 by Xerxes

As a number of you might know, I have been tutoring programming subjects at my old university for a number of years now – both C# and VB.NET – and the one common theme I’ve seen in these classes is that despite having completed a mandatory object-oriented programming subject, a large proportion of students just don’t get OO. Furthermore, most of them have no real idea of how to solve a programming problem other than with a very heavy, top-down method:

  1. Students are given a programming assignment to solve, and 5 weeks to complete it
  2. They look at the problem at a holistic level, just to try and understand it
  3. Start designing a UI and use all the wonderful draggy-droppy components on their forms
  4. Spend 2 weeks tweaking their UI colours and text box alignments.
  5. Spend 1 week hacking together some code so that their UI starts to interact
  6. Spend 1 week tweaking more UI elements
  7. Realise there’s only 1 week left and they are missing 30% of the functionality
  8. Panic, ask for extensions, submit the assignment late, or all of those together.

After seeing this pattern over and over, I’ve tried different ways to address the problem in my classes. The first thing I tried was to express (verbally, and in writing in the student forums) my sentiment that students should focus less on their UI up front: get to basic UI completeness first, implement all of the functionality, and only then come back to polish the UI. After all, I was seeing a great number of assignments come through with functionality either untested, totally broken or just altogether missing. Sadly my words seemed to fall upon deaf ears, and the quality of assignments was not up to the standard I was expecting.

Tackling this problem differently, last year I thought I’d introduce the concepts of unit-testing and TDD, and told them to think less about the upfront design and instead focus on testing individual components of their system independently, building up to a solution. At the time, it seemed like a lot of students were receptive to the idea of unit-testing and test-upfront, but when the assignments came in, it looked like they had just fallen back into old habits and the quality of assignments seemed (on the whole) not much different to previous semesters’. After talking to some of them, I suspect the problem was that it became too difficult for them to manage this new “style” of writing their programs and keep on top of their other coursework. The quickest-and-dirtiest approach appeared to win out, and the mentality that this was purely just another assignment for the sake of passing uni seemed clearer than ever. I’m guilty of this attitude too; when I was at uni it was more of a concern for me to complete the assignment as quickly as possible so I could spend quality time on other activities. The problem with the unit-test/TDD approach is that it offers little to a student as a reward system. Unit-tested software takes longer to write than its direct-implementation counterpart. This pays off in the long run, but when your “long run” is only as long as a 6 month semester, then why invest the extra time? It’s a classic ROI question. What I needed was a way to motivate students to improve the quality of their code, without the seemingly large overhead of the automated tests.

This semester I’ve shaken things up once again, and have dropped the unit-testing/TDD mantra. Instead, I’m focusing *heavily* on problem decomposition, units of work, single responsibility and class separation. If unit-testing doesn’t drive students to think about their application functionality up front, then hopefully teaching them a way to make a large problem seem easily palatable will motivate them to start thinking code-first instead of UI-first. Introducing the concept of having one class per file seemed quite foreign and unnecessary when I discussed it in our first class last week, which didn’t fill me with confidence. Despite that, I committed to persisting with this approach, and reinforced the importance of responsibility separation under the guise of making code easier to understand, but more importantly as a way of giving the students a method for decomposing a problem and forming a solution. This, I think, has been the key, because receptiveness to the idea suddenly picked up when I demonstrated building a simple Guess The Number game using a number of small, discrete components. They saw what at first appeared to be a task far too large for a 90 minute class, but by identifying single functions (like “ask the user to guess a number” and “determine if the user guessed too high”) and tackling them one-by-one we produced a solution, and its final composition was easy to follow. #win.

I’d also changed my presentation style and was more boisterous than normal (for anyone who knows me in person, this would be pretty intense). My theory was that if I captured their attention, it would provoke them to treat my class differently to any of their others. After all, if I’m putting the effort in, they would reciprocate, wouldn’t they? Well, I’m pleased to say that after 2 weeks (yes, it’s early days still) I’ve had more students hang around for the full class than before, and most importantly, when I’ve run overtime nearly 70% of my students are staying back (up to an hour – 10pm at night) to ask questions and learn more.

It still remains to be seen whether this approach makes a difference with their assignments, but I’ve certainly seen a rather large improvement in the participation of students. I’m attributing this to a combination of making the classes more entertaining and slicing the content so it’s easily digested – something that has historically been pretty bad at university level. Hopefully this time I’ve hit a winning formula.

Making Resolving Conflicts in Git Easier

March 10th, 2010 by Xerxes

One thing that bugs me when using Git, is that resolving merge conflicts isn’t a seamless process. It involves the fiddly task of opening files which conflicted, then resolving the conflict and then staging the changes.

The part which bugs me the most is having to either type the full filename in order to open it in an editor, OR use the mouse to copy the filename to the clipboard and then paste it onto the command line. Just not straightforward enough.

So I knocked up a quick bash script which makes use of git’s ability to create extensions for git commands. This script issues a rebase (I also have one for merge) and will fire off my preferred editor for editing files outside Visual Studio.

#!/bin/bash
# git-resolve-rebase

git rebase $1 
modified=`git status | grep 'unmerged' | uniq`

if [ -n "$modified" ]; then
	git status | grep 'unmerged' | awk '{print $3}' | uniq | xargs -n1 e
fi

(where e is the shell script that launches E-Text Editor)

To use this extension, all I need to do is:

$ (MyBranch)> git resolve-rebase master
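
The merge flavour I mentioned is essentially the same script with git merge swapped in – roughly:

#!/bin/bash
# git-resolve-merge

git merge $1
modified=`git status | grep 'unmerged' | uniq`

if [ -n "$modified" ]; then
	git status | grep 'unmerged' | awk '{print $3}' | uniq | xargs -n1 e
fi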

Unsatisfied By My Specification Pattern

February 7th, 2010 by Xerxes

This week I hit an interesting problem which I don’t feel was solved in the best possible way. The problem was that we needed to filter a list of objects based on some known criteria. Testing the specification is pretty important as there is a series of and’ed negating conditions (eg: IF this AND NOT that AND NOT the other etc…) – in total about 5 unique criteria for the one filter.

Ordinarily this kind of implementation would lend itself nicely to the Specification Pattern, given that all the information required to determine whether the specification is satisfied exists on the object being passed in at the time of evaluation. In my case, however, I had 3 conditions requiring that the object being evaluated must not exist in 3 different lists. To give you an example, the object under evaluation is a model, and this step of filtering is part of a much larger process involving running the model through a recursive algorithm. At each step of the algorithm, the model object could have:

  • Run through the algorithm successfully
  • Been aborted during execution of the algorithm
  • Been left still waiting to be run through the algorithm

These three states are tracked by keeping 3 lists – one for each criterion. The filter I was working on had to filter based on whether the model under evaluation was NOT in those three lists. I realise the wordiness of my explanation doesn’t really clear the air, so let’s look at some code (with the relevant types changed).

This is my model:

    public class User
    {
       public bool Enabled { get; set; }
       public string Name { get; set; }
       public UserType TypeOfUser { get; set; }
    }

The code which uses my filter looks something like this. This guy will be called recursively based on the result that gets returned from here. In this case, I’m building the allValidUsersFilter.

public class ReplacementUserFinder
{
        ...ctor...fields...etc

        public User FindReplacementUser(User userToReplace, IList<User> allPossibleUsers, IList<User> usersStillToBeEvaluated, IList<User> usersAlreadyEvaluated, IList<User> usersAbortedDuringEvaluation)
        {
            var validUsers = _allValidUsersFilter.Filter(allPossibleUsers, usersStillToBeEvaluated, usersAlreadyEvaluated, usersAbortedDuringEvaluation);
            var replacementUser = _bestReplacementCandidateFinder.Find(validUsers, userToReplace);

            return replacementUser;
        }
}

and the interface for the AllValidUsersFilter – its purpose is to filter the list of all users down to a list of potential candidates:

        public IList<User> Filter(IList<User> allpossibleUsers, IList<User> usersStillToBeEvaluated, IList<User> usersAlreadyEvaluated, IList<User> usersAbortedDuringEvaluation)
        {
            return allpossibleUsers.Where(x => 
                _isUserEnabledSpecification.IsSatisfiedBy(x) &&
                _isOverseasUserSpecification.IsSatisfiedBy(x) &&
                    !_isUserStillToBeEvaluatedSpecification.IsSatisfiedBy(x, usersStillToBeEvaluated) &&
                    !_isUserAlreadyEvaluatedSpecification.IsSatisfiedBy(x, usersAlreadyEvaluated) &&
                    !_isUserAbortedDuringEvaluationSpecification.IsSatisfiedBy(x, usersAbortedDuringEvaluation)
            ).ToList();
        }

The specification instances here are being ctor injected into my filter’s instance so that I can use a behavioural style assertion to check that the specification is invoked correctly by the filter.

The IsUserEnabledSpecification and IsOverseasUserSpecification just use the well-known ISpecification interface pattern, but in order to evaluate the other three, I had to create an IListSpecification, and it feels somehow unsatisfying because the only difference between them is that I have to pass the list into the IsSatisfiedBy methods.
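
To make that concrete, the shapes of the two interfaces end up looking roughly like this (a simplified sketch – the names and generics are my shorthand rather than the exact project code):

    public interface ISpecification<T>
    {
        bool IsSatisfiedBy(T candidate);
    }

    // identical, except the list being checked against has to come along for the ride
    public interface IListSpecification<T>
    {
        bool IsSatisfiedBy(T candidate, IList<T> list);
    }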

I’m not happy with this result, and we went through several different options before settling on this one purely just so we could move forward, and come back to address this later.

Hoping someone out there might have some suggestions… After writing this post, I’ve come up with another idea which would probably be cleaner… need to try it out.

Mount a VHD in Windows 7

January 19th, 2010 by Xerxes

Windows 7 (and possibly even Vista) has the ability to mount a VHD. The VHD could have been created from Virtual PC or Virtual Server, or it could be a System Image backup created by Windows Backup.

To mount the VHD, open the Computer Management console (Start -> “Computer Management”). Right-click the Disk Management option in the tree and select Attach VHD.

Awesome.
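
If you’d rather do it from the command line, diskpart can attach a VHD too (the path here is just an example):

> diskpart
DISKPART> select vdisk file="C:\Backups\backup.vhd"
DISKPART> attach vdisk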