I’ve used a lot of SCMs in my time, and none of them has been as esoteric as Git. This post serves as a reminder of the different ways to “revert” changes to a Git repository.

Scenario:

We are in a directory with a local git repository. This repo contains 4 files and no sub-directories. Each file is in one of the 4 different states a file can be in for Git (not considering ignored files for the time being).

File Name      State
Unchanged.txt  File is unchanged in local directory
New.txt        File is new to the repo
Deleted.txt    File has been deleted from local directory
Modified.txt   File has been modified in the local directory

All files except New.txt are being tracked and none of these changes have been staged/committed (yet).
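
To make the states concrete, here’s roughly what git status reports at this point (a sketch – the exact wording varies between Git versions):

$> git status
# Changed but not updated:
#   deleted:    Deleted.txt
#   modified:   Modified.txt
#
# Untracked files:
#   New.txt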

Action: checkout

$> git checkout
  • This command has an implicit head of [HEAD]
  • This command has an implicit working file/directory of the CWD
  • This command will only affect tracked files
File Name      State
Unchanged.txt  no action
New.txt        no action (because it’s untracked)
Deleted.txt    no action (will not restore the file unless it is explicitly named in the command)
Modified.txt   no action (will not restore the file unless it is explicitly named in the command)

Action: checkout <file/path>

$> git checkout .
$> git checkout Deleted.txt
  • This command has an implicit head of [HEAD]
  • This command has an explicit working file/directory: the path “.” in the first command, and the file Deleted.txt in the second
  • This command will only affect tracked files
File Name      State
Unchanged.txt  no action
New.txt        no action (because it’s untracked)
Deleted.txt    File is restored to its state at the current HEAD
Modified.txt   File is reverted to its state at the current HEAD
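
For example, continuing the scenario (a sketch of the effect):

$> git checkout .
$> git status
# Untracked files:
#   New.txt

Deleted.txt is restored and Modified.txt loses its local edits, while the untracked New.txt is left alone.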

Action: reset

$> git reset .
  • This command has an implicit head of [HEAD]
  • This command has an explicit working directory of the CWD
  • This command will unstage staged changes
File Name      State
Unchanged.txt  no action (nothing was staged)
New.txt        no action (nothing was staged)
Deleted.txt    no action (nothing was staged)
Modified.txt   no action (nothing was staged)

Action: reset (with staged content)

$> git add .
$> git reset
  • This action assumes that all changes have been staged (via the git add above), so the repo is in the following state:
    File Name      State
    Unchanged.txt  File is unchanged in repo
    New.txt        File is staged for adding
    Deleted.txt    File is staged for deletion
    Modified.txt   File is staged with its modification
  • This command has an implicit head of [HEAD]
  • This command has an implicit working directory of the CWD
  • This command will unstage staged changes
File Name      State
Unchanged.txt  no action
New.txt        change is unstaged (file remains on disk, untracked again)
Deleted.txt    deletion is unstaged (file is still missing from disk)
Modified.txt   modification is unstaged (working copy keeps the changes)
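
A sketch of the round trip (git reset’s exact output varies between versions, so it’s omitted here):

$> git add .     # stage everything, including the deletion and the modification
$> git reset     # unstage it all again
$> git status    # reports the same unstaged state as before the add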

Action: clean

$> git clean -d -f
  • This command has an implicit head of [HEAD]
  • This command has an implicit working directory of the CWD
  • This command will remove files and directories which are untracked in the repo
File Name      State
Unchanged.txt  no action
New.txt        File is deleted (it’s untracked)
Deleted.txt    no action
Modified.txt   no action

So in summary, if you want to completely revert your working directory to a clean state (i.e. the equivalent of an svn revert), run:

$> git clean -fd
$> git checkout .
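
If you’re at all nervous about what clean will delete, it supports a dry run via -n (a.k.a. --dry-run), which reports what would be removed without touching anything:

$> git clean -nd
Would remove New.txt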

I think it’s safe for me to say “shea right – if I hate AOL Search as much as I despise AOL the ISP, this article will not be favourable to AOL in any way, shape or form”.

Let the games begin.

AOL Search fails to render properly in Google Chrome

Wow, off to a flying start here, boys… </sarcasm>

The second thing to peeve me off is that AOL Search doesn’t expose a search provider in its metadata, so I have to create one for myself. Fortunately, Chrome makes this pretty easy, but that’s not the point – you guys are providing a search service. Irrespective of how shit it may or may not be, FFS at least make it easy for me to *TRY* and use your product!

…all this, and I haven’t actually started using it yet. How ominous. I’m hoping that the little “powered by Google” actually means “we can’t do search anymore, and have given up. Here’s one which does it better”.

……………

One week (maybe a little more) has passed and well, let’s just say I’m not as disappointed as I thought I would be. Mainly because AOL Search really does seem to be effectively a wrapper around Google. As an example, I searched the hottest topic going around on the tubes at the moment (the effects of socialism on post-war Germany), and most of the results were the same, except that Google also listed a result from its Book Search service. Apart from little things like that, these two are inseparable. Even AOL’s image search is just a face-mask over Google Images.

The design of the page leaves a little to be desired however, as AOL shamelessly place advertisements on the top of the page in an attempt to drive click-throughs. My ad-busting eyeballs detect this easily so the placement of the ad isn’t so much the problem. The problem is that they have sneakily set the length of the HREF for each paid link to be the full width of the page, which means by clicking in what should be “blank space” you trigger the link and click-through the paid ad. Naughty, naughty.

All said and done, I couldn’t help but notice that, without realising it, I just commented to a colleague that I’m finishing up this post so that “I can go back to using Google”. I guess even subconsciously I find any experience outside Google’s to be less than engaging.

On a final note, Wolfram Alpha didn’t launch as soon as I was hoping it would, so there’ll be a week’s rest where I go back to Google, before trying out the new kid on the block on 18th May. Yes, I am aware of the broad misrepresentation of Alpha as a “Google killer”, but it would still be fun to try 🙂

In this, the 3rd installment of The (Not So) Great Search Engine Showdown, I reflect on my experience using Ask.com compared to Google.

I don’t have a great deal of time, so this post is going to be brief. I really only have one _serious_ gripe about Ask – that stupid fu#@$%g Answerbar at the top of the page every time you navigate to a search result. NO, ask.com! I wanted you to give me the search result, not a pain-in-the-ass waste of screen real estate. What also frustrated me about this “feature” was its sheer unpredictability. Most web results would display the Anusbar at the top, but others (like Wikipedia) would be displayed in full glory without being crippled.
The “Close Permanently” button was never hit with such gusto, I’m sure. To demonstrate just how much, I’ve prepared the following illustration:

How to close the Ask.com Answerbar

By way of quality of results, I actually found Ask to be better than I was expecting. Certainly I felt like I wasn’t missing Google, though on a few occasions I had to drop back just to be sure I wasn’t missing anything (turns out I wasn’t). Overall the web search results were as good as Yahoo’s, though one thing that irritated me was that Ask.com mixes the paid advertising results in with the organic search results. I’m sure they’ll claim that they’re putting the top-most organic result first and then allowing the rest of the results to be shown underneath the paid section, but we all know the truth. Money grubbers.

When it was originally launched as “Ask Jeeves”, the website’s search technology was based on doing some NLP against your search query, trying to return the best results based on the context of your question. A few years ago Jeeves was given the arse from his job, and the company took the arse to their search results, because (quite simply) their NLP wasn’t advanced enough to provide accurate results compared to Big Brother.

However, having played with Ask.com this week, I noticed they still have a Q&A section (which it claims is in beta) that allows you to phrase a question and let the NLP try to answer it for you. Not one to turn down a good opportunity to test NLP products (and to get a comparative feeling for the upcoming Wolfram Alpha test I’ll hopefully be performing), I Ask’ed the following question in the name of science:

Putting Ask.com's NLP to the Public Service Announcement test.

It’s heart-warming to see that even if you speak broken English like the second guy, you can still get valuable advice on the interwebs.

This week, I throw away all credibility as I try out AOL’s search. If using this website results in me getting another fking AOL starter CD, I’ll sht the roof.

As part of evaluating several libraries for specification testing in Ruby (MSpec, RSpec, Bacon), I wanted to benchmark the performance of the library against a simple suite of tests to see if one was particularly slower than any of the others. It wasn’t intended to be very scientific but to at least expose a slow framework, if any.

Each benchmark was performed by creating a suite of specifications based around Bacon’s whirlwind sample (consisting of 5 specs) and executing the suite 10,000 times. This benchmark was run 5 times in order to weed out any statistical anomalies. NB: For this analysis, I didn’t benchmark RSpec because it’s not terribly compatible with IronRuby just yet.

The results and code can be found below. In a nutshell, MSpec does perform slower than Bacon (averaging roughly 25.9 seconds versus 37.1 seconds across the five runs), but when you consider that over a 50,000-spec sample it was only about 11 seconds slower, the difference is negligible.

        Bacon    MSpec   (times in seconds)
Run 1   27.642   37.337
Run 2   25.598   37.755
Run 3   25.607   37.424
Run 4   25.317   36.439
Run 5   25.105   36.352
# This is the code for the spec test wrapper.
# To execute, save the file as "spec_runner.rb" and execute
#    ruby spec_runner.rb
#
#

ITERATIONS = 10000

require 'rubygems'
require 'stringio' # needed for StringIO below

# Swallow the specs' own output so it doesn't spam the console during the runs.
@old_stdout = $stdout
$stdout = StringIO.new

# Briefly restore the real stdout to report progress, then resume swallowing.
def milestone(n)
	$stdout = @old_stdout
	puts "Reached milestone: ##{n}"
	$stdout = StringIO.new
end

# Runs the given block ITERATIONS times and returns the elapsed seconds.
def time_it(&func)
	start_time = Time.now
	1.upto(ITERATIONS) do |n|
		func.call
		milestone(n) if n % 1000 == 0
	end
	end_time = Time.now
	end_time - start_time
end

bacon_time = time_it do 
	load 'whirlwind_bacon.rb' 
end

mspec_time = time_it do 
	load 'whirlwind_mspec.rb' 
end

$stdout = @old_stdout
puts "bacon time: #{bacon_time}"
puts "mspec time: #{mspec_time}"
# This is the Bacon test file. Save it as "whirlwind_bacon.rb"
#
#

require 'bacon'

describe 'A new array' do
	before do
		@ary = Array.new
	end

	it 'should be empty' do
		@ary.should.be.empty
		@ary << 1
		@ary.should.include 1
	end

	it 'should have zero size' do
		@ary.size.should.equal 0
		@ary.size.should.be.close 0.1, 0.5
	end

	it 'should raise on trying fetch any index' do
		lambda { @ary.fetch 0 }.
			should.raise(IndexError).
			message.should.match(/out of array/)
	end

	it 'should have an object identity' do
		@ary.should.not.be.same_as Array.new
	end

	palindrome = lambda { |obj| obj == obj.reverse }
	it 'should be a palindrome' do
		@ary.should.be.a palindrome
	end
end
# This is the MSpec test file. Save it as "whirlwind_mspec.rb"
#
#
require 'mspec'

describe 'A new array' do
	before do
		@ary = Array.new
	end

	it 'should be empty' do
		@ary.should be_empty
		@ary << 1
		@ary.should include(1)
	end

	it 'should have zero size' do
		@ary.size.should == 0
		@ary.size.should be_close(0.1, 0.5)
	end

	it 'should raise on trying fetch any index' do
		d = lambda { @ary.fetch 0 }
		d.should raise_error(IndexError, /out of array/)
	end

	it 'should have an object identity' do
		@ary.should_not equal(Array.new)
	end

	palindrome = lambda { |obj| obj == obj.reverse }
	it 'should be a palindrome' do
		(palindrome.call @ary).should be_true
	end
end

In the second installment in my series of evaluating search engines, I take a look at Yahoo’s search offering – specifically the locally-branded Yahoo7 search.

The first test – TICK. A Yahoo search on my name turns up very good results. My website first, and underneath that one of my blog posts, closely followed by Facebook and LinkedIn. If I wanted to stalk myself, this is clearly a good place to start.

A cute little feature is that my Facebook search result contains deep links to some Facebook features like “Send Message“ and “Poke“. Way to get in with the 2.0, Yahoo.

After that, it starts getting a bit weird, and the results lose a lot of meaning. Some old documentation I wrote at a previous job shows up on the first page, despite being excessively out of date and not updated for at least 3 years; I didn’t expect that content to fare at all.

In terms of visuals, the search results are very Google-esque…nay, identical. Yahoo results are minimalistic, with Web, Image, Video, News, Maps and More at the top of the screen and a link to the cached version located conveniently in a position which makes defending a case of plagiarism from Google infinitely hard. I guess the up-side is that people will hit Yahoo search results and feel like they’re in familiar territory.

Which I guess leads me into Yahoo’s foray into federated search, called Alpha. Yahoo claims that “Alpha is a new beta product from Yahoo!7 that introduces the concept of Federated Search. With Alpha, you can search across many different information sources all on one place”. Holy tuna, Batman! “Search across many information sources from one place”?…Sounds like a regular search engine to me. *bored* The quality of search results doesn’t appear to be any different to regular Yahoo, but the UI is very different. Kind of like Live Search (and we all remember how that went)…

<fast-forward one week>

I’ve been using Y7’s search for the week now and I have to admit, I was actually quite comfortable with the results it was giving. When evaluating MS Live Search, I was constantly living in this fear that I was missing quality search results, and would fall back to Google just to make sure I was getting the right information when I needed it. However with Yahoo, I felt confident enough with what it gave me not to feel like I was missing out on good results. I honestly feel like I could replace Google with Yahoo if I needed to (which I don’t).

The next engine to go under the knife – Ask.com. They don’t have any locally branded content, and I’ve just got a gut-feeling this will be a difficult week 😐

CruiseControl.NET is an automated build system ported from Java to the .NET framework. The current stable release of CCNET is v1.4.3. Unfortunately, this version of CCNET does not natively support using Git as a source control provider, so if you’re making the switch from (say) SVN or VSS, at the time of writing you will have a few bumps in the road ahead. NB: This post assumes you have a working copy of Git running on your machine.

To get Git working with CCNET, I found the excellent ccnet.git.plugin project on Github. This code is a plugin for CCNET which exposes basic functionality (and a little more) to allow CCNET to use Git as a source repository.

Firstly you need to download said source and compile the binaries. In case you’re super lazy, here’s one I prepared earlier – ccnet.git.plugin binary download

The plugin works by dropping it straight into your CCNET server’s folder with the other binaries. In most cases, this will be C:\Program Files\CruiseControl.NET\server. Make sure you restart CCNET.

The next thing is to configure your project to use git as the source control provider. The README has an excellent example of how to configure the project. My initial project block ended up looking something like this (renamed to protect the innocent):

  <project name="FittingApp.Project" queue="FittingApp.Project">
    <sourcecontrol type="git">
	<repository>git@bumblebee:FittingProject.git</repository>
	<timeout>30000</timeout>
	<executable>c:\program files\git\bin\git.exe</executable>
	<workingDirectory>C:\build\projects\FittingApp.Project</workingDirectory>
    </sourcecontrol>
	
    <triggers>
    </triggers>
	
    <tasks>
    </tasks>

    <publishers>
      <xmllogger />
      <statistics />
    </publishers>
  </project>

One important thing to note is that the README (at the time of writing) doesn’t mention the timeout element you can use in your configuration. The default value is quite high; I prefer to lower it. I found this property by perusing the tests.

Finally after all that, everything should be done and ready to rock, right? Turns out not so. One problem I stumbled into (and took a while to resolve) was the build timing out when it was doing a fetch. The CPU was idle and there was no traffic over the network. The process would timeout and the build would fail. The funny thing was that I could open a command-prompt console myself and fetch the remote repo no problem. But when being performed by CCNET, it would timeout during the fetch.

After digging further, it looked like the SSH authentication wasn’t working and that the auth process didn’t accept the default SSH credentials I created earlier. I suspect it was waiting for me to enter a password for the remote git account. Of course, there’s no interaction with this process, so eventually it times out. After a long back-and-forth with the problem, I got in touch with the author of the plugin, and he suggested checking that the HOME environment variable is set to %USERPROFILE%, otherwise Git wouldn’t be able to find the git config settings. This solved the problem, and the build started working sweet. (big props, Kevin – thanks :))
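
For reference, one way to set the variable permanently from a Windows command prompt (setx stores it for future sessions; the current console keeps its old environment):

C:\> setx HOME "%USERPROFILE%"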

With all that done, you should now be staring down the barrel of a CCNET installation successfully talking to Git. Hope this helps someone else out there.

In my (what I hope to complete) series of comparing Google to other search engines, the first engine I’m testing out is Microsoft Live Search.

I guess the first (obvious) thing to try is to search for my name (on a side note, I’m soooo tempted to use “google” as a verb, but that would be inappropriate when testing out the competition, no?). The first two results are correct, or at least relevant (i.e. my website), and the rest of the results are neither here nor there in terms of relevance – there really wasn’t a whole lot it could do with my name except find literal matches in page content.

One thing which surprises me is that the results returned vary greatly depending on how many results I show on each page. Setting the limit to the minimum of 10, I seem to get results all about me, most of them pretty relevant chronologically. However, if I switch it to 30 results per page, the niceties pretty much drop dead as the search results spew into variations of my Facebook profile in different culture sub-domains (ja-jp being the most relevant of about 7 others). FAIL.

Live Search (unlike Google) has a neat little equation solver (example), which would have been great about 10 years ago when I was actually doing calculus and solving quadratic equations. Relevant now? Probably not. I would expect Wolfram Alpha to drop a big steaming shizzle all over this feature given its company history. So maths equation solving – FAIL.

One problem I’m finding is that I’m just not used to the format of search results from Live Search. I find that if what I’m searching for is not essential to what I’m doing (or should be doing), it turns me off and I want to just leave the site without getting any results. This is a very bad UX, and it’s probably all in my perception of what “good” search results look like.
Must fight urge to judge a book by its cover.

……<fast forward a few days>……

And so it is I come to the end of the week, and in all seriousness it couldn’t come soon enough. I tried, I really did. Microsoft has a loooong way to go before they could even begin to think about claiming that their search engine is actually a competitor to Google, and not just another smoking pile of crap. You know things are in trouble when you need to create a short-cut to Google’s search because the Live results are just plain inadequate.

Suffice to say, I’m very disappointed with Live Search and don’t think it’s ready to be considered a contender for search king of the net. I’m glad to get my browser away from it and move on to something else.

Yahoo – stand up. You’re next.

I’m in a situation where I want to keep different settings for several Git repositories. My work’s Git repo and settings (like email address and private key) would be different to my GitHub email address and key.

After following the setup details on GitHub for how to set up a username and email for GitHub, and providing your SSH keys, I was left in an awkward situation where my global configuration was set up for GitHub, but I didn’t know how to configure my work repo to authenticate properly.

It turns out that Live Search does actually work for one scenario: I found another guide on GitHub explaining everything required to configure multiple Git accounts.

  1. What’s most important is knowing that unless you’re using the same public/private key pair, you will need to generate a new key for the server, and give it a filename different to the default id_rsa
    $ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/c/Documents and Settings/Xerxes/.ssh/id_rsa): /c/Documents and Settings/Xerxes/.ssh/id_rsa_github

    This file needs to be given a name different to the default id_rsa, ideally containing the name of the repo.

  2. Once the key is generated, you need to create a config file in your ~/.ssh/ directory. This file allows you to configure connection settings per repository, overriding the global values set earlier.
    Host github.com
      HostName github.com
      User git
      IdentityFile ~/.ssh/id_rsa_github
    

    Save that file.

  3. One final step in the mix is to configure the repo itself to use the correct email address when committing to the git repo. This is really only to ensure that the commit history has a valid email address associated to it. For instance, I don’t want my private email address being recorded in my work commit logs, and similarly I don’t want my work email address getting recorded in my GitHub commit logs.

    I’m sure there must be a way to do this from the console (a console sketch follows below), but the way I know to set the email address for a single repo is to use the git gui command: go to Edit -> Options and set it via the interface.

    git gui repo configuration
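
    For what it’s worth, the console equivalent is to set repo-local config values from inside the working copy (the values below are placeholders). Run without --global, these land in the repo’s own .git/config and override the global settings:

    $> git config user.email "xerxes@work.example.com"
    $> git config user.name "Xerxes"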

Now you should be right to issue any commands to GitHub and have it authenticate using the key. When you push back to the origin, it will now also use the repo settings and not the global settings.
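
To verify the right key is being offered, GitHub answers a test SSH connection; you should see something like the following (with your own username):

$> ssh -T git@github.com
Hi username! You've successfully authenticated, but GitHub does not provide shell access.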

EDIT: For some reason, I omitted the “.com” in the github.com host entry. Thanks to @davetchepak for the pickup.

Not long ago, I posted a big fat blat of information from my investigation in trying to get Ruby based spec testing integrated with .NET. In this post, I make some sense of all that content and (more importantly) drop a sample of taking advantage of this. (nb: This post is essentially a direct-rip of an internal document I created for this purpose)

Overview

The purpose of this page is to run through a process which will ultimately allow the reader (that’s you) to write Ruby-based specifications for your .NET code.

Why?

Why would you want to do this? The intended purpose for this practice is to gain the most benefit when doing BDD. Trying to do BDD in C# results in a lot of syntactical noise in the code, which distracts from the goal of having clear, readable specifications of how the intended function should behave. Additionally, any traditional C# BDD toolset requires the specifications to be statically compiled into a test binary in order to be executed. The advantage of using Ruby is that the scripted nature of the language allows physical (as well as logical) separation of specifications from code, opening up the possibility that specifications are written by non-technical folk. Furthermore, the Ruby syntax lends itself to building DSLs perfect for the purpose of allowing clean, almost human-readable code.

Scope of investigation

The investigation work preceding this post was set 3 goals:

  • Determine the viability of using Cucumber as an automated feature verification utility
  • Determine the viability of using RSpec as an automated specification verification utility
  • Determine the viability of using IronRuby as a conduit to allow Ruby specifications to execute against compiled C# code (applicability to any other CLR-supported language is then assumed).

Investigation results

The results of the investigation showed that:

  • RSpec is currently not supported on IronRuby due to a number of bugs in the IronRuby project. (Based upon a discussion with @jschementi and this article (toward the bottom))
  • Consequently, Cucumber is currently not supported on IronRuby, because it uses RSpec internally (explained on the Ruby forums)
  • The IronRuby team have worked to incorporate support for a leaner specification-testing tool, MSpec, which is very similar in syntax to RSpec but not as functionally complete.
  • MSpec will work with IronRuby to write Ruby based specifications to verify .NET compiled applications.

What tools are we using?

Based on the results of the investigation, the best way to approach this method of testing is to use the MSpec library to write specifications against C# code and execute them using IronRuby. In future, once IronRuby is more stable, we can look at migrating over to Cucumber for feature-style verification on top of MSpec/RSpec.

RubyGems

RubyGems is a package manager which allows you to download Ruby components and utilities (known as gems). The RubyGems package which comes with the one-click installer might be outdated by the time you download it, so the best thing to do here is to update RubyGems to the latest version:

gem update --system

In the event you’re behind a company firewall, or you need to use an HTTP proxy for whatever reason, you need to tell the gem command to use the proxy, as it doesn’t honour your default internet options. Substitute the server and port where appropriate and then run the update:

SET HTTP_PROXY=http://your.proxy.server:3128
gem update --system

You’re now minty fresh with the latest RubyGem package.

IronRuby (IR)

Go and download the latest release of the IronRuby project from ironruby.net. The current “official” pre-release is v0.3, and it doesn’t have an installer. To “install”, make sure you extract the contents to the location

c:\ironruby\

This is the standard installation location for IR. Once there, it’s recommended that you update your system PATH to include the path to IR’s bin folder.

Setting PATH Environment for IronRuby


Required Gems

We now get to the part where you need to install some of the gems required for specification testing. As mentioned at the start, RSpec and Cucumber aren’t 100% working with IR just yet; however, it’s worthwhile installing them anyway to test that things are working as expected.

gem install mspec
gem install cucumber
gem install win32console
# gem install rspec
# gem install hoe

The last 2 (commented out above) should automatically be installed when you install cucumber, as they’re dependencies, but if they aren’t, make sure you install them! If you really want to keep it lean, you can get away with just mspec and none of the others.

Now, the standard install of IR has its own repository of gems, which can be managed through IR’s igem utility. The reason we don’t use igem to install mspec is that mspec is a pretty special script which (basically) allows us to tell it which Ruby interpreter we wish to use for running tests (explanation, thanks to @jredville). The neato thing here is that we then don’t need to install mspec specifically for IR; we can repurpose MRI’s version.
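
To make the split concrete, compare the two gem repositories (a sketch, assuming the default install locations):

$> gem list     # MRI's gem repository – this is where mspec lives
$> igem list    # IronRuby's separate gem repository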

Testing it out

Now that all the major stuff is installed, let’s test it out by creating a simple app in C# and writing a specification in Ruby to verify its behaviour. Create a new folder to save the source files in.

using System;

namespace HelloWorld
{
    public class HelloClass
    {
        public string SayHello()
        {
            return "Hello from C#";
        }
    }
}

Here we have a class which returns a string when the method SayHello() is invoked.

require "mspec"
require "HelloWorld.dll"

describe "the hello dot net app" do 
	before do 
		@app = HelloWorld::HelloClass.new
	end

	it "should say hello from c#" do 
		@app.say_hello.to_s.should == "Hello from C#"
	end
end

This is our specification for the behaviour of the application.

To compile the C# class, open up a Visual Studio Command Prompt, CD to the source directory and type

csc /target:library /out:HelloWorld.dll HelloWorld.cs

…and now to run this puppy:

mspec -t c:\ironruby\bin\ir.exe sayhello_spec.rb
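
If everything is wired up correctly, you should see the spec pass with output along these lines (format approximate):

.

Finished in 0.123456 seconds

1 file, 1 example, 1 expectation, 0 failures, 0 errors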

Here, we are invoking the mspec Ruby script and passing two arguments. The first, -t c:\ironruby\bin\ir.exe, tells the script that we wish to execute the mspec specifications using a different Ruby interpreter to MRI – in this case, IronRuby. The second argument tells it which spec we’re running. When mspec runs, it finds the -t argument and hands off execution of the spec to another instance of mspec executing under IronRuby. This gives us the flexibility of being able to execute standard Ruby specs while also calling out to IronRuby for .NET interop when needed.

The observant among you might notice that the call to @app.say_hello has a to_s chained afterward. IronRuby will return a ClrString as the object type when the interop call returns a CLR string, and the CLR’s ClrString and Ruby’s String are not interchangeable. You need to call to_s on the ClrString to treat it like a Ruby string. This behaviour is at least explained, albeit I need to dig deeper to understand why they couldn’t have an implicit cast operator (or the dynamic-language equivalent thereof).

One thing that’s important to note is that although I’ve dropped a few source files here without too much explanation, you would actually build this up iteratively, using the same TDD style you’ve always been used to. In fact, this form of specification testing makes test-first easier to do.

Further work

This article gives a straightforward overview of how to begin testing C# code with Ruby but it doesn’t go all the way.

  • Ideally, we would like to use Cucumber for automated feature-acceptance verification. Unfortunately, the current build of IronRuby doesn’t work with Cucumber and RSpec, but there should be ways to get the current IR implementation to work with a few tweaks.
  • We need to define and configure a standard project skeleton so that you don’t have to download and extract IR by hand to get the system working. In a perfect situation, we could download only the source for the software without requiring any dependencies to be installed (including Ruby!).

I had a little time over the long weekend to reflect on things in the past, and one conversation which came to mind was a casual chat with an engineer at Yahoo7 I met at a party about a year ago. I can’t for the life of me remember his name, but I do remember our conversation.

Maybe it was bravado, maybe it was arrogance, and it certainly was alcohol-induced, but I asked him point blank: “You work at Yahoo7. Compared to Google, how do you personally find Y7’s search results?”. Not unlike me to put some fuel on the fire, I was kind of expecting him to defend his company, defend the search engine backing his company’s website, stomp his foot and slap me across the face with a glove………and he did (except for the glove).

The reason this moment stuck with me wasn’t because he launched into a tirade of fact vs fiction and MapReduce mumbo-jumbo, but because his answer was a brutally honest “Personally, I find the results on par and sometimes maybe a little less than Google’s, but the real test is in inviting you to try it”. The night went on, and I’m sure I stumbled into a taxi and got home safely, but I never really forgot his response.

Admittedly, I’ve been putting it off for a while, and on occasion I’ve considered doing it but always found an excuse to stay within the comfort zone that Google provides. Well, that changes this week, as I’ve finally decided to bite the bullet and drop Google for a few weeks while I try using a different search engine each week in my daily routine, to see how it feels and whether all search engines really are so close that Google’s superiority of results is just perception.

Having looked at some statistics of search engine market share, the candidates up for testing in this very un-scientific assessment are:

  • Microsoft Live Search
  • Yahoo7 Search
  • Ask.com
  • AOL Search
  • Wolfram Alpha (once it launches)

This at least helps me weed out most of the smaller players and the engines of eras long since forgotten.

Where appropriate, I’ve tried to use locally-branded variants of the websites, purely for my own benefit. I’ve thrown Wolfram Alpha into the list because it has generated enough interest in the blogosphere over the last 30 days to at least warrant a look once it’s released.

Starting this week, I’m going to try Live Search. It’s set as my default search engine in Chrome, and I’ll be consciously trying to use it over Google.

Wish me luck.