I’ve been involved in the recruitment processes of many companies over the last few years, and I’ve seen the same simple mistakes time and time again, mistakes which often affect how a candidate is perceived.

First of all – your resume and cover letter (or equivalent thereof) are of paramount importance. Their primary goal is to introduce you and start a (mental) conversation. When you apply for a job, you’re basically telling me “I exist! I have the skills you need, and I’m the kind of person you’d like to work with”. The first part seems trivial, but remember: I DON’T KNOW YOU, and up until about 5 seconds ago YOU DIDN’T EXIST. So you want to take every possible opportunity to help me get to know you better.

Don’t tell me things I already know. In particular, I don’t want you to copy-paste content from our corporate website and spew it back at me. I’m reading your resume because I want to know about you; if you want to show off your great research skills, find some other way of doing it. And if you ARE going to copy-paste, at least change the content so it reads as if you wrote it from your own perspective, instead of leaving it in corporate marketing-speak (true story…). Humour me, please?

Personally, I’m a stickler for grammar, spelling and punctuation. If you’re submitting a resume as a Word doc, I have to assume you’re submitting it not only to me but to a number of other people too. If you apply for 20 jobs, each of those companies has 2 people reading your resume, and each person spends 5 minutes reading it, that’s over 3 hours of total reading time during which your document is analysed and scrutinised. In that time, somebody is guaranteed to find your spelling mistakes – and those reflect on your attention to detail. If you were writing a book that took 3 hours to read, you’d (hopefully) put in a bit of effort to make sure it had minimal errors – so why would you not do the same for your 3-page resume? Which leads me nicely into the next point…

Keep your resume short. Don’t send me 50 pages. Don’t send me 10 pages. 3 pages is perfectly fine, and 5 is pushing the limit. If you can’t succinctly condense your work experience into 5 pages, you’re not concentrating hard enough on presenting the skills that suit the role we’re hiring for. I’m not interested in reading about a job you had 10 years ago where you made the best damn Cookie Man cookies in Central Plaza. Tell me about the work experience which is relevant to the job you’ve applied for.

But don’t limit yourself to just “work” experience. I’m interested in reading about your pet projects, the (software-related) things you do outside hours, and maybe even some not-so-software-related ones. Just please don’t write a 4-line paragraph extolling your abilities at feeding the ducks in your local park. Mentioning it as one of your interests gets me thinking about how you’re not just a 9-5 desk jockey; anything more just detracts from the goal at hand (see above).

Lying on your resume. I know a few people who quietly admit to lying on their resumes about their skills and abilities, arguing that they can always learn said technology/tool in time and no-one will ever know. Things come unstuck when you don’t learn it, and you’re caught with your pants around your ankles, struggling to explain how it is that you’re able to re-compile the Linux kernel using only a pocket knife and some paper-clips. Just don’t do it. The practice reflects badly on you when it backfires, and even if you do happen to “get away with it” during the hiring process, it will reveal itself sooner or later once we’re working together.

The Manager’s Guide To Technology. Here I’m referring to the giant list of every single technology you think you’ve ever come in contact with, written up as a “skill”. XML isn’t a skill. Nor is Microsoft Word. I’ll pat you on the back for opening a Word document if you really want, but I won’t be using that as a criterion for offering you a job. If you must, then please only list things that are genuine skills/practices: TDD, BDD, CI, whatever. If you need to explicitly mention software you’ve specialised in, mention it separately. Whilst each item might be an achievement on your part, reading hundreds of them is just noise on mine.

Be creative. Throw in a little design, or lay out your content differently to the standard page template. It shows me you’re willing to give your resume some attention in order to grab mine. I like being “grabbed”. Besides, it’s an opportunity to show me your creativity. I wouldn’t recommend going crazy with a radical design – after all, I still need to be able to read and follow it. But all things considered, being different is an advantage – you’re more likely to be remembered that way.

Those are pretty much the main ones that come to mind. I reserve the right to add to this list the next time I see something stupid.

As a number of you might know, I have been tutoring programming subjects (both C# and VB.NET) at my old university for a number of years now. The one common theme I’ve seen in these classes is that despite having completed a mandatory object-oriented programming subject, a large proportion of students just don’t get OO. Furthermore, most of them have no real idea of how to solve a programming problem other than by a very heavy, top-down method:

  1. Students are given a programming assignment to solve, and 5 weeks to complete it
  2. They look at the problem at a holistic level, just to try and understand it
  3. Start designing a UI, using all the wonderful draggy-droppy components on their forms
  4. Spend 2 weeks tweaking their UI colours and text-box alignments
  5. Spend 1 week hacking together some code so that their UI starts to interact
  6. Spend 1 week tweaking more UI elements
  7. Realise there’s only 1 week left and they are missing 30% of the functionality
  8. Panic, ask for an extension, submit the assignment late, or all of the above

After seeing this pattern over and over, I’ve tried different ways to address the problem in my classes. The first thing I tried was to express (verbally, and in writing on the student forums) my sentiment that students should focus less on their UI upfront: get basic UI completeness first, implement all of the functionality, and only then come back to polish the UI. After all, I was seeing a great number of assignments come through with functionality that was untested, totally broken, or altogether missing. Sadly my words seemed to fall upon deaf ears, and the quality of assignments was not up to the standard I was expecting.

Tackling the problem differently, last year I thought I’d introduce the concepts of unit-testing and TDD, and told the students to think less about upfront design and instead focus on testing individual components of their system independently, building up to a solution. At the time a lot of students seemed receptive to the idea of unit-testing and test-first development, but when the assignments came in it looked like they had fallen back into old habits, and the quality of assignments was (on the whole) not much different from previous semesters’. After talking to some of them, I suspect the problem was that it became too difficult to manage this new “style” of writing their programs while keeping on top of their other coursework. Quick-and-dirty appeared to work, and the mentality that this was just another assignment for the sake of passing uni seemed clearer than ever. I’m guilty of that attitude too: when I was at uni, my main concern was completing the assignment as quickly as possible so I could spend quality time on other activities. The problem with the unit-test/TDD approach is that it offers a student little in the way of a reward system. Unit-tested software takes longer to write than its direct-implementation counterpart. That pays off in the long run, but when your “long run” is only as long as a 6-month semester, why invest the extra time? It’s a classic ROI question. What I needed was a way to motivate students to improve the quality of their code without the seemingly large overhead of automated tests.

This semester I’ve shaken things up once again and dropped the unit-testing/TDD mantra. Instead, I’m focusing *heavily* on problem decomposition, units of work, single responsibility and class separation. If unit-testing doesn’t drive students to think about their application’s functionality upfront, then hopefully teaching them a way to make a large problem palatable will motivate them to start thinking code-first instead of UI-first. The concept of having one class per file seemed quite foreign and unnecessary when I discussed it in our first class last week; that didn’t fill me with confidence. Despite that, I committed to persisting with the approach, and reinforced the importance of responsibility separation under the guise of making code easier to understand – but more importantly as a method for decomposing a problem and forming a solution. This, I think, has been the key: receptiveness to the idea suddenly picked up when I demonstrated building a simple Guess The Number game out of a number of small, discrete components. The students saw what at first appeared to be a task far too large for a 90-minute class, but by identifying single functions (like “ask the user to guess a number” and “determine if the user guessed too high”) and tackling them one by one, we produced a solution whose final composition was easy to follow. #win.
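To give a flavour of what that decomposition looked like, here is a minimal sketch reconstructed from memory – the class and method names are mine, not the exact code from class:

```csharp
using System;

// Each class owns exactly one responsibility, mirroring the
// "one small function at a time" decomposition from the demo.
public enum GuessResult { TooLow, TooHigh, Correct }

public class GuessEvaluator
{
    // "determine if the user guessed too high" (or too low, or correctly)
    public GuessResult Evaluate(int guess, int target)
    {
        if (guess < target) return GuessResult.TooLow;
        if (guess > target) return GuessResult.TooHigh;
        return GuessResult.Correct;
    }
}

public class GuessPrompter
{
    // "ask the user to guess a number"
    public int ReadGuess()
    {
        Console.Write("Guess a number: ");
        return int.Parse(Console.ReadLine());
    }
}

public class GuessTheNumberGame
{
    private readonly GuessPrompter _prompter = new GuessPrompter();
    private readonly GuessEvaluator _evaluator = new GuessEvaluator();

    // The game itself is just composition: prompt, evaluate, repeat.
    public void Run(int target)
    {
        GuessResult result;
        do
        {
            var guess = _prompter.ReadGuess();
            result = _evaluator.Evaluate(guess, target);
            Console.WriteLine(result);
        } while (result != GuessResult.Correct);
    }
}
```

The pay-off for the students is that each class is small enough to write (and reason about) in a few minutes, and the UI-free pieces can be exercised without ever dragging a control onto a form.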

I’d also changed my presentation style, and was more boisterous than normal (for anyone who knows me in person, this would be pretty intense). My theory was that if I captured their attention, it would provoke them to treat my class differently from any of their others. After all, if I’m putting the effort in, they would reciprocate, wouldn’t they? Well, I’m pleased to say that after 2 weeks (yes, it’s early days still) I’ve had more students stay for the full class than before, and most importantly, when I’ve run overtime, nearly 70% of my students are staying back (up to an hour – until 10pm at night) to ask questions and learn more.

It remains to be seen whether this approach makes a difference to their assignments, but I’ve certainly seen a rather large improvement in student participation. I’m attributing this to a combination of making the classes more entertaining and slicing the content so it’s easily digested – something that has historically been pretty poor at university level. Hopefully this time I’ve hit on a winning formula.

One thing that bugs me when using Git is that resolving merge conflicts isn’t a seamless process. It involves the fiddly task of opening the files which conflicted, resolving the conflicts, and then staging the changes.

The part which bugs me the most is having to either type the full filename to open it in an editor, OR use the mouse to copy the filename to the clipboard and paste it onto the command line. Just not straightforward enough.

So I knocked up a quick bash script which makes use of Git’s support for user-defined extension commands. This script issues a rebase (I also have one for merge) and fires off my preferred editor for editing files outside Visual Studio.

# git-resolve-rebase

git rebase $1
modified=`git status | grep 'unmerged' | uniq`

if [ -n "$modified" ]; then
	git status | grep 'unmerged' | awk '{print $3}' | uniq | xargs -n1 e
fi

(where e is the shell script that launches E-Text Editor)

To use this extension, all I need to do is:

$ (MyBranch)> git resolve-rebase master

This week I hit an interesting problem which I don’t feel was solved in the best possible way. We needed to filter a list of objects based on some known criteria. Testing the specification is pretty important, as there is a series of and’ed negating conditions (eg: IF this AND NOT that AND NOT the other, etc…) – in total, about 5 unique criteria for the one filter.

Ordinarily this kind of implementation would lend itself nicely to the Specification Pattern, given that all the information required to determine whether the specification is satisfied exists on the object being passed in at the time of evaluation. In my case, however, I had 3 conditions requiring that the object being evaluated must not exist in 3 different lists. To give you an example: the object under evaluation is a model, and this filtering step is part of a much larger process involving running the model through a recursive algorithm. At each step of the algorithm, the model object could have:

  • Run through the algorithm successfully
  • Been aborted during execution of the algorithm
  • Not yet been run through the algorithm

These three states are tracked by keeping 3 lists – one for each. The filter I was working on had to check whether the model under evaluation was NOT in those three lists. I realise the wordiness of this explanation doesn’t really clear the air, so let’s look at some code (with the relevant types changed).

This is my model:

    public class User
    {
       public bool Enabled { get; set; }
       public string Name { get; set; }
       public UserType TypeOfUser { get; set; }
    }

The code which uses my filter looks something like this. This guy will be called recursively based on the result that gets returned. In this case, I’m building the allValidUsersFilter.

public class ReplacementUserFinder
{
        public User FindReplacementUser(User userToReplace, IList<User> allPossibleUsers, IList<User> usersStillToBeEvaluated, IList<User> usersAlreadyEvaluated, IList<User> usersAbortedDuringEvaluation)
        {
            var validUsers = _allValidUsersFilter.Filter(allPossibleUsers, usersStillToBeEvaluated, usersAlreadyEvaluated, usersAbortedDuringEvaluation);
            var replacementUser = _bestReplacementCandidateFinder.Find(validUsers, userToReplace);

            return replacementUser;
        }
}

and the interface for the AllValidUsersFilter – its purpose is to filter the list of all possible users down to a list of potential candidates:

        public IList<User> Filter(IList<User> allPossibleUsers, IList<User> usersStillToBeEvaluated, IList<User> usersAlreadyEvaluated, IList<User> usersAbortedDuringEvaluation)
        {
            return allPossibleUsers.Where(x =>
                _isUserEnabledSpecification.IsSatisfiedBy(x) &&
                _isOverseasUserSpecification.IsSatisfiedBy(x) &&
                !_isUserStillToBeEvaluatedSpecification.IsSatisfiedBy(x, usersStillToBeEvaluated) &&
                !_isUserAlreadyEvaluatedSpecification.IsSatisfiedBy(x, usersAlreadyEvaluated) &&
                !_isUserAbortedDuringEvaluationSpecification.IsSatisfiedBy(x, usersAbortedDuringEvaluation)
            ).ToList();
        }

The specification instances here are ctor-injected into my filter’s instance so that I can use behavioural-style assertions to check that each specification is invoked correctly by the filter.

The IsUserEnabledSpecification and IsOverseasUserSpecification just use the well-known ISpecification interface pattern, but in order to evaluate the other three, I had to create an IListSpecification – and it feels somehow unsatisfying, because the only thing different between them is that I have to pass the list into the IsSatisfiedBy method.
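For the curious, the two interface shapes look roughly like this – the declarations below are my sketch of the pattern, not lifted from our codebase, and the example implementation name is hypothetical:

```csharp
using System.Collections.Generic;

// The well-known single-object specification: everything needed to
// answer the question lives on the candidate itself.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// The variant I had to introduce: same intent, but the answer also
// depends on membership of an externally-supplied list.
public interface IListSpecification<T>
{
    bool IsSatisfiedBy(T candidate, IList<T> list);
}

// eg: the "still to be evaluated" check reduces to list membership.
public class StillToBeEvaluatedSpecification<T> : IListSpecification<T>
{
    public bool IsSatisfiedBy(T candidate, IList<T> list)
    {
        return list.Contains(candidate);
    }
}
```

The asymmetry is exactly what grates: the three list-based specifications are all just `Contains` checks dressed up in different names, yet they can’t share the plain ISpecification shape.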

I’m not happy with this result; we went through several different options before settling on this one, purely so we could move forward and come back to address it later.

I’m hoping someone out there might have some suggestions… After writing this post, I’ve come up with another idea which would probably be cleaner – I need to try it out.

Windows 7 (and possibly even Vista) has the ability to mount a VHD. The VHD could have been created from Virtual PC or Virtual Server, or it could be a System Image backup created by Windows Backup.

To mount the VHD, open the Computer Management console (Start -> “Computer Management”). Right-click the Disk Management option in the tree and select Attach VHD.


Making any kind of noise is apparently all the rage now… ha-ha-ha… coo ooo ooo mmmm. Basically anything which doesn’t add up to “I’m hungry” or “I need my bum changed”.

Patience, Xerx…

In the last few OS rebuilds of my machine, I’ve preferred to relocate my user profile to a different partition and leave my C: as small and light as possible. The added benefit is that I can then back up an entire partition in a snap and be guaranteed I haven’t lost any major user data.

To do this (under Windows 7), follow the steps below. This assumes you have formatted your machine and have a clean install of Win7 with your user account (with admin access) created.

Enable the Administrator account

This account is disabled by default and you will need to enable it in order to move your profile around.

  1. Press Start, type “Computer Management” and run the first program
  2. Under Local Users and Groups\Users, you will see the Administrator account. Right-click the Administrator, select Properties. In the General tab, uncheck “Account is disabled”. Apply. Ok.
  3. If you don’t know the password for the Administrator account, I suggest right-clicking the Administrator and resetting the password
  4. Log out of your current user account and log in as the Administrator
Enabling the Administrator account



Copy The Profile To Its New Location

Logged in as Administrator, open Windows Explorer and navigate to C:\Users. You will see your account folder there. Move (don’t copy!) your account folder to the new location you want. In my case, I moved it to D:\home.

Update The Profile’s Home Folder Path

You now need to update the user account to tell it that the profile exists in a different location.

  1. Press Start, type “Computer Management” and run the first program
  2. Under Local Users and Groups\Users, you will see your user account. Right-click the account, select Properties. In the Profile tab, select “Local path”, and type in the new path of the profile (eg: D:\home\xerxes). Apply. Ok.
Profile Home Path



Update Registry To Find New Profile

I probably should have mentioned this before, but if you’re not comfortable modifying your Windows Registry, you probably shouldn’t attempt this. Moreover, you probably shouldn’t be reading my blog.

  1. Start -> Run, “Regedit”, Enter
  2. Find the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList. There will be a string value called ProfilesDirectory. Update the value to be your new profile home directory (eg: D:\home)
  3. Within the ProfileList key, there are a bunch of sub-keys, one for each user profile registered on the system. Scroll through them, look at the ProfileImagePath value in each, and find the one which mentions your username. Update the value of ProfileImagePath to be the new path to your user profile (eg: D:\home\xerxes)
Updating User Profile Location In Registry




You’re all done. Log out of the administrator account and try logging back into your own profile. If you did everything right, this should just work. If you want to disable the Administrator account again, you can do so after logging back into yours.

Recently, my programming pair and I adopted the Pomodoro time-management technique and applied it to our software dev tasks. We’ve been doing this for about a week and have found it incredibly helpful. Our work environment is fantastic, but sometimes it’s too easy to be distracted by work-related (or other) interruptions. This system allows us to focus on core development activities for a block of time, with a short break in between. The purpose is NOT to force us into being more productive for the sake of management or metrics – it’s to let us make the best use of the time we already have.

In a nutshell, the pomodoro (tomato in Italian) technique involves:

  1. Pick a task and work to complete it in 25 minutes
  2. Break for 5 minutes
  3. Repeat steps 1 & 2 three more times (4x25min blocks + 4x5min breaks in total)
  4. Have a 15 minute break
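Purely as an illustration of that cycle (this is my own sketch, not something we actually wrote – we just use the tomatoi.st timer), the loop looks something like:

```csharp
using System;

// A bare-bones sketch of one full pomodoro cycle. The notify callback
// stands in for whatever alarm or display you use; durations are
// parameters rather than hard-coded so the loop is easy to exercise.
public class PomodoroCycle
{
    public static void Run(int workMinutes, int shortBreakMinutes,
                           int longBreakMinutes, Action<string> notify)
    {
        for (var i = 1; i <= 4; i++)
        {
            notify($"Pomodoro {i}: work for {workMinutes} minutes");
            // a real timer would block here, eg:
            // Thread.Sleep(TimeSpan.FromMinutes(workMinutes));

            if (i < 4)
                notify($"Short break: {shortBreakMinutes} minutes");
            else
                notify($"Long break: {longBreakMinutes} minutes");
        }
    }
}
```

Seeing the cycle written down this way makes the rhythm obvious: four fixed-size work slots, each bounded by a break, with the long break closing the set.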

Although we’re still aiming to achieve 4 consecutive pomodoros, we’ve both noticed the increased focus and commitment to delivering on our development tasks. The 25-minute time block actually acts as an incentive to complete a problem up to a logically clean point before time runs out – which has in turn helped us break problems down into smaller tasks.

This system worked so well within the first few days that we even set up a box and screen right in the middle of our desks which (amongst other things) displays a big timer from the website http://tomatoi.st/. Any visitors who come to talk to us are kindly pointed to the big count-down and asked to come back at a better time. Whilst it might seem unprofessional, we’ve yet to encounter someone who felt it important enough to disturb us in the middle of a timed task. There’s something about two people working diligently at a desk with a count-down over their heads that stops others from wanting to disturb them.

What’s also nice about the tomatoi.st timer is that it keeps track of the last 7 actions (pomodoros or breaks). I whipped up a Yahoo pipe to convert the pomodoro list into an RSS feed, to keep at least a historical record of how we’ve been tracking and see whether or not we’re improving. This content could be fed into uladoo and tracked via Twitter or some other unnecessarily-Web-2.0 service.

No doubt we’ll refine this process the more we work with it, but it’s been quite beneficial thus far. I suspect a lot of other development teams would also benefit from this practice.


There are many stupid people in this world. This guy is another one of them:
Picture of Xerxes Battiwalla

Yes that’s right, yours truly really screwed up this time.

For future reference (and in a nutshell): git checkout is not the same as git reset, particularly when you throw in the --hard option. I know this now, and in fact I knew it when I accidentally blew away half my repository – that didn’t make the end result any more pleasing.

Here’s the gitk output of what my tree looked like before the wipe:

Full project history from gitk

I was intending to check out a previous revision of the repo to show how some code had evolved over time. Normally I would do this from the command line, but this time I (crazily) chose to use (the very limited) gitk, and blindly chose the “hard” option without thinking. What I was left with was a very lacking repository and a sorely bruised ego. Goodbye, repo history! It’s been nice knowing you…

Wiped Repository

Thanks to the ever-knowledgeable davesquared, there’s a way to recover your git repo once you bone yourself. If you escape the clutches of the gitk demon, the command-line interface gives you a better level of control (and content which is crawlable by search engines 😉 )

It turns out that until the repository is cleaned (git gc) and stale objects are pruned, the repo still has the necessary content stored – it’s just not visible. (If the following block looks terrible in your renderer, click through the link to read it properly.)

xerxes@laptop /d/source/dotnet/CodeKatas/.git/logs (GIT_DIR!)
$ cat HEAD
0000000000000000000000000000000000000000 f167fd4068e4b92134964e073f2e69a0cc8fced9 Xerxes  1250928395 +1000 commit (initial): Initial commit of binary search
f167fd4068e4b92134964e073f2e69a0cc8fced9 dd31afc8d793aaa952e032a77675b8e67f6b26bc Xerxes  1251007341 +1000 commit: Removed bin/obj from source control
dd31afc8d793aaa952e032a77675b8e67f6b26bc 0642a5e64a513f8949f7aa9e1d35a298fb713bfc Xerxes  1251007422 +1000 commit: removed .suo from source control
0642a5e64a513f8949f7aa9e1d35a298fb713bfc 0cf0929cfb9a31707b2ae937f0c6e73bd9e5bcb9 Xerxes  1251007479 +1000 commit: Implemented first binary tree implementation. it is shit
0cf0929cfb9a31707b2ae937f0c6e73bd9e5bcb9 6c27c54370bf2b79369dc5ef02ef7338bbf2865a Xerxes  1251010817 +1000 commit: Refactored first implementation to remove unnecessary elements
6c27c54370bf2b79369dc5ef02ef7338bbf2865a 937fcb1c36bf437e0df11847ee579ca60151144d Xerxes  1251021898 +1000 commit: removed resharper settings file from project
937fcb1c36bf437e0df11847ee579ca60151144d b12a95ea3c636615345559979d3fc0e93fecc0bc Xerxes  1251022036 +1000 commit: Rewritten first implementation of b-search to get practice.
b12a95ea3c636615345559979d3fc0e93fecc0bc f57503a6116822c9632e6b4dc6cb423640c6a152 Xerxes  1251028214 +1000 commit: 3rd implementation of binary search using shifted bounds.
f57503a6116822c9632e6b4dc6cb423640c6a152 be510f0f88baa38d8d2e645f568405d052f5bb14 Xerxes  1251028265 +1000 commit (amend): Another rewrite of binary search using shifted bounds.
be510f0f88baa38d8d2e645f568405d052f5bb14 2c0b21e7aea2dd27fc7281d1a20a887e2e1f3d0d Xerxes  1251090393 +1000 commit: Yet another re-write of the shifted boundary method.
2c0b21e7aea2dd27fc7281d1a20a887e2e1f3d0d d2a23e1cad8b36de1361700b89733a3b08401e2c Xerxes  1251253854 +1000 commit: Moved BinaryTree project to top-level folder
d2a23e1cad8b36de1361700b89733a3b08401e2c 4e8bd70336e89f9000f29553d53d98cb568bc809 Xerxes  1251253901 +1000 commit: Added nunit to the list of dependencies
4e8bd70336e89f9000f29553d53d98cb568bc809 551c754f27f10826dab43c50d8a3151b7e2740f5 Xerxes  1251253922 +1000 commit: Implemented FizzBuzz
551c754f27f10826dab43c50d8a3151b7e2740f5 11884f5935c85d4ec1497fe3a8eb39211d731fdc Xerxes  1251269688 +1000 commit (amend): Moved parameters onto FizzBuzz method and out of constructor
11884f5935c85d4ec1497fe3a8eb39211d731fdc cbda7a8c68c88900eb1c58ab7bf613bad26892ad Xerxes  1251269732 +1000 commit (amend): Implemented FizzBuzz solution
cbda7a8c68c88900eb1c58ab7bf613bad26892ad b40c951696a2d11854fce6a9fca1be9fc436e61b Xerxes  1251341643 +1000 commit: Another re-implementation of the shifted-boundaries method
b40c951696a2d11854fce6a9fca1be9fc436e61b b2c2a8b103aa43dac8e443cfc866fa35f9ffb048 Xerxes  1251341967 +1000 commit: Renamed BinarySearch to ShiftedBoundariesBinarySearch
b2c2a8b103aa43dac8e443cfc866fa35f9ffb048 30a224eb40c441df8a18b06eeb68af47a86bc37f Xerxes  1251344036 +1000 commit: Implemented RecursiveBinarySearch (badly). requires refactor
30a224eb40c441df8a18b06eeb68af47a86bc37f 0fbf8ace753f408b0e972b593e2b6a03dd2d0354 Xerxes  1251345463 +1000 commit: Created TreeNode Search
0fbf8ace753f408b0e972b593e2b6a03dd2d0354 0cf0929cfb9a31707b2ae937f0c6e73bd9e5bcb9 Xerxes  1251345505 +1000 0cf0929cfb9a31707b2ae937f0c6e73bd9e5bcb9: updating HEAD
0cf0929cfb9a31707b2ae937f0c6e73bd9e5bcb9 2c0b21e7aea2dd27fc7281d1a20a887e2e1f3d0d Xerxes  1251345635 +1000 checkout: moving from master to 2c0b2

xerxes@laptop /d/source/dotnet/CodeKatas/.git/logs (GIT_DIR!)

The beauty here is that the log has kept a full record of the SHAs for each commit in the repo (this is the same information that git reflog shows). NOW I’m able to reset my master back to the appropriate commit using its SHA-1 hash. So I checked out the master branch and issued:

xerxes@laptop /d/source/dotnet/CodeKatas (master)
$ git reset --hard 0fbf8ace753f408b0e972b593e2b6a03dd2d0354 

and that reset my master back to the right revision, thereby restoring my history, code and sanity.

NB: I took some of these screenshots while trying to reproduce things after the event, so they might look a little doctored. Despite this, the findings and end results are the same.