Testing Windows Phone 7 class libraries with MSTest

In this post, I explain how you can unit-test your Silverlight and Windows Phone 7 class libraries using MSTest and the built-in test system of VS2010, enabling you to right-click a test to run it.

I am a test driven developer, and I simply hate the testing harnesses available for the Silverlight (SL) and Windows Phone 7 (wp7) development environments. I believe Microsoft should make it a priority to let MSTest execute class libraries written for runtimes other than the desktop CLR, but that is a side issue.

Today I bring you my proposal for a workaround.


I wrote a hello world application for wp7, and downloaded Roger Peter’s cheat sheet for unit testing apps on this platform. It does work, but it looks highly impractical to me – mostly due to the lack of screen real estate, but also because it does not integrate with the Visual Studio 2010 test harness.


The key is in the link

The trick to getting your logic tested is to link the source files into a new, regular Windows class library project. Here’s the recipe:

  1. Hatch an idea for a wp7 (or SL) app with the potential to rule the world (don’t they all?)
  2. Create a solution for your application, and add a wp7 project
  3. Add another wp7 project for your tests
  4. Create a regular Windows class library and choose Add Existing Item. Pay close attention to the drop-down arrow on the Add button; use it to select “Add as Link”
  5. Do this for all the classes that you wish to test
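Under the hood, “Add as Link” simply records the path to the original file in the class library’s .csproj, together with a Link element. A hypothetical fragment (project and file names here are made up for illustration) looks roughly like this:

```xml
<!-- Fragment of the Windows class library's .csproj after "Add as Link".
     The Include path points at the original file in the wp7 project;
     the Link element is what makes it a link instead of a copy. -->
<ItemGroup>
  <Compile Include="..\MyWp7App\ScoreCalculator.cs">
    <Link>ScoreCalculator.cs</Link>
  </Compile>
</ItemGroup>
```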


Within your project, you should see the linked files marked with a small arrow, indicating the link:

It mostly works!

Having done this, you now have a regular Windows class library that you can use to unit-test the logic of your application with MSTest. As an added bonus, should you want to create a WPF application later, you can of course re-use code in this way for cross-platform work.

Changes made to a linked file are applied to the original file, so you can now drive the logic in your class files with unit tests using your unit testing framework of preference.
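As a sketch of what this buys you: an ordinary MSTest class in the Windows test project can now exercise the linked code directly. `ScoreCalculator` below is a made-up stand-in for whatever logic your wp7 app actually contains.

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class, imagined as linked in from the wp7 project:
public class ScoreCalculator
{
    public int Total { get; private set; }

    public void AddPoints( int points )
    {
        if( points < 0 )
            throw new ArgumentOutOfRangeException( "points" );

        Total += points;
    }
}

[TestClass]
public class ScoreCalculatorTests
{
    [TestMethod]
    public void AddPoints_TwoCalls_AccumulatesTotal( )
    {
        ScoreCalculator calculator = new ScoreCalculator( );

        calculator.AddPoints( 10 );
        calculator.AddPoints( 5 );

        Assert.AreEqual( 15, calculator.Total );
    }
}
```

Because the test project is a plain Windows one, tests like this show up in the Test View and can be run with a right-click, like any other MSTest test.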


Why it has a smell to it

This is still a somewhat smelly workaround, considering that you cannot unit-test wp7/SL-specific functionality this way. It only applies to code that compiles for both Windows and wp7/SL. Having said that – if you follow the SOLID principles, you should have no problem unit-testing most of your code; only the platform-specific parts and the GUI are left to the lacking test harness, and for those you can safely use the recommended harnesses, such as the one proposed by Roger Peter.

The perfect solution would, of course, be for MSTest to support these other runtimes through a silent simulator or something similar.

Ranting over change – an exercise in futility?

I’m an avid blog reader/commenter, and I have seen the rise of a wave of rants about Microsoft’s LightSwitch and Microsoft WebMatrix. These are products designed to make writing Windows applications and web pages even easier than it already is, making the process of creation accessible to more people than ever. Some express outright rage about this; others are concerned about their bread & butter.


["State of the Art" Amiga demo, winner of The Party 1992]

I’ve been around as a developer since the ZX81 (in excess of 26 years), flipping through an endless plethora of developer languages. In those days, developers learned assembler first, after a few months of punching in BASIC code from a dubious UK-based computer magazine.

Remember all those cool demos on the Spectrum/C64 and later Amiga scenes? 90% of them were done in pure assembler! Pascal was hot for a short while before yielding to C/C++.


I switched over to the Microsoft platform sometime during the MS-DOS / Windows 3.11 era (on an i386SX without a floating point processor!).
In my opinion, the moment you lose control over the CPU’s registers and/or its memory is the moment you have lost control.

MSIL/bytecode languages (Java, C#, VB…) and interpreted languages (Ruby, Python) are not, by that definition, "pure" languages. They take away control from the developer in order to prevent mistakes that tear down all of the surrounding applications (and often the OS). Actually, come to think of it, even assembler takes away some control, in that you can no longer address invalid registers or shift memory to non-existing locations without an error.

Time, money, and big feet

How often do I read how "messy" C++ is because you have to handle the memory yourself. The fact of the matter is that C++ requires a strong sense of discipline; if you understand the language, you can write applications that make the best Java and .NET apps look really, really Neanderthal in terms of just about anything (performance, memory footprint, program size…) – at the cost of time!

And let’s just say that when people started enrolling in developer classes in the ’90s, it wasn’t because they had a sudden geek awakening; they saw money in the software business and wanted to be a part of it. Today, they’re the vast majority of developers out there. Microsoft makes money on software licenses, so it is only natural that they write code for these masses.

But at the cost of performance and memory footprint, C#, Java and other such languages make our everyday easier; I can whip out a complete, working business application mockup in a day or two using modern tools (Silverlight/SketchFlow). I used to be employed at a large consulting business where the vast majority of solutions delivered were MS Access "applications".


So what makes it all good enough?
Solving the customer’s problem.


Conventional Purist Pattern Pride

In all our purism, dogmas, theorems and ideologies, the fundamental truth is: the customer doesn’t give a rat’s jewels about how you solve his problem! He looks at you as a huge expense that has to be made, nothing more. If you can satisfy his needs with a technical solution that is less expensive than the competition’s, then you’re more than likely to have a satisfied customer. Ayende has a great image in his rant on this – it really boils it down to the essence!
I am a purist myself, make no mistake about that. I do take pride in my software craftsmanship, but I’ve also seen so much “bad” software out there – and the customer is happy!
– At the end of the day, that’s really all that matters!

For long-running or high-risk software that requires quality, I more often than not see that it really just boils down to convention. Patterns tend to be tweaked to circumvent technical limitations or, even more commonly, user ignorance. Who does not have a “tweaked” MVP pattern, or a “somewhat modified version of” MVC… recognize yourself?

My opinion is that it is you, and not the software, that sets the standards. Just like a carpenter: if you do not take pride in your work, you simply cannot deliver quality software, regardless of how good your tools are. Granted, using a nail gun instead of a hammer, you can still produce cleaner-looking wallboards without the dents and bruises of 60 missed hammer hits, but if your nails are spread around shotgun-style, you know that wall ain’t gonna last long anyway.

MetaProcess, MetaDeliver, MetaWin:

Microsoft is making it easier and easier to shovel out software that requires less skill to develop, with products like LightSwitch. Is this bad?
I say: “No, that isn’t necessarily bad.”

ANY “good” software has undergone the following metaprocess:

  • Have a clear definition of the application’s domain (what does it do?)
  • Plan for re-use and upgradeability (modularity) where possible
  • Make the application as maintainable as possible (clean code, clear intentions, refactoring)
  • Cover your application’s functionality with tests (TDD, DDD, DDT)

Neither language nor technology has any impact on this metaprocess.


What is important is that the technology’s operator understands the technology (a question of syntax and experience). If it helps me deliver software at a lower cost without compromising my craftsmanship, then by all means, give it here!

In my view, WebMatrix and LightSwitch must also undergo the same metaprocess in order to become developer platforms that are usable for corporate offices.


Some references (links go directly to the articles):

InfoQ article

The Inquisitive Coder

Jason Zander’s WebLog

Ayende’s Blog

TDD: Using databinding to objects for the ultimate TDD experience

I figured I need to share with you how I normally go about designing a UI, and how I make that design as testable as possible.

To make things easy, I will use a login screen as an example, since it has few controls, and is relatively simple to follow.

Sample VS2008 solution can be downloaded here

The criteria

  • Create a login dialog with a username, password, OK and Cancel button
  • OK Button is only enabled when both a username and password are set
  • Should be fully unit-testable


I start by creating an empty solution with my typical folder structure.


The numbering is just something I add to the folders because I like to have the top-down view. Note that I have a clear separation between UserInterfaces and Presentation:

User Interface: Dumb dialog, form, or page containing bindable controls that the user interacts with

Presentation: Smart, data-bindable classes that represent a user interface’s logic and state.


Create a login dialog with a username, password, OK and Cancel button

Next, I’ll add the Windows Forms project containing my login dialog to the User Interfaces folder, and design our login box of choice:


I am only interested in the design at this stage. Aside from setting the UseSystemPasswordChar property on the password textbox, and naturally giving the controls some meaningful names, I do not bother looking at code here.


Preparing to code

The next bit is half the magic. I am going to create a class to represent my login dialog. By implementing the INotifyPropertyChanged interface (found in System.ComponentModel), I am telling this class that it can be databound to Windows Forms, WPF and Silverlight controls.

I begin by adding a Presentation class library to contain the login class, as well as a test project where I can put all the facts related to it:


The Solution Items folder that you see at the bottom contains the test list and testrunconfig files; it is autogenerated by Visual Studio the first time you add a test project to your solution.

  • Presentation is a regular class library
  • Presentation.UnitTests is a test project

Must be fully unit-testable

In Visual Studio, it’s hard to write tests for objects that do not exist: IntelliSense, which tries to help you as you go, actually turns into something you have to fight. Creating a skeleton Login class makes this process a lot easier.

Initially, it looks like this:
public class Login : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public string Username        { get; set; }
    public string Password        { get; set; }
    public bool   OkButtonEnabled { get { return false; } } // skeleton value for now
}


The PropertyChanged event is the mechanism used for databinding. More on that later.

Initially, I am interested in the following behavior from my login class:
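The screenshot of my initial test list has not survived, but judging from the criteria above, it would read something like the following. These names are my reconstruction, not the original list, written in the Member_Action_Expectation convention used throughout this post:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoginTests
{
    // Reconstructed test list – each test starts out inconclusive
    // and is fleshed out one at a time:
    [TestMethod] public void PropertyChanged_SetUsername_EventFires( )                 { Assert.Inconclusive( ); }
    [TestMethod] public void PropertyChanged_SetPassword_EventFires( )                 { Assert.Inconclusive( ); }
    [TestMethod] public void OkButtonEnabled_UsernameAndPasswordSet_ReturnsTrue( )     { Assert.Inconclusive( ); }
    [TestMethod] public void OkButtonEnabled_MissingUsernameOrPassword_ReturnsFalse( ) { Assert.Inconclusive( ); }
}
```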


Implementing these tests is fairly straightforward. When done, I can proceed to getting them to pass.

SIDENOTE: Unsure how to verify events in a test? Here is a smart way to do it:

[TestMethod]
public void PropertyChanged_SetPassword_EventFires( )
{
    // Prepare
    Login login         = new Login( );
    bool  eventWasFired = false;
    login.PropertyChanged += ( sender, e ) => eventWasFired = true;

    // Invoke
    login.Password = _testPassword;

    // Assert
    Assert.IsTrue( eventWasFired );
}


Back to our Login class: we want the properties to “announce” that they have been changed. This can be done like so:

public string Username
{
    get { return _userName; }
    set
    {
        if( _userName == value )
            return;

        //TODO: Validate username
        _userName = value;
        NotifyPropertyChanged( "Username" );
    }
}

private void NotifyPropertyChanged( string propertyName )
{
    if( PropertyChanged != null )
        PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
}

Basically: if a new value is actually set, I announce it through the event. There is no point announcing a value that never changed.

Finally, the OkButtonEnabled property simply checks the username and password:

public bool OkButtonEnabled
{
    get
    {
        if( string.IsNullOrEmpty( Username ) )
            return false;

        if( string.IsNullOrEmpty( Password ) )
            return false;

        return true;
    }
}

I’m a sucker for readability, can you tell? 🙂

After a very brief syntax check, I’m done with the Login class for now:



Binding it to the form

At this stage, I have a Login form with absolutely no code behind it, and a Login class that announces every property change; it is time to bind the two together. The process is simple:

  • Declare the login class as a data source
  • Bind properties from the login class to our form
  • Initialize the binding in the form’s constructor (in the code behind)
Declare the login class as a data source

In design mode, bring up the properties of the username textbox and find the DataBindings section. Since this is the first time we’re adding a data source, I can pre-select the Text property that I want to bind to, and then click the Add project data source link:


This brings up the following sequence (I’ll just run through the images, no comments should be necessary):









Having completed this process, you can now simply bind the TextBox.Text property to your bound class object with a simple drop-down:



The OK button requires a special binding, because we want to bind its true/false value to the Enabled property, so we open the Advanced data binding dialog:


Find the Enabled property, then simply bind it to OkButtonEnabled in our Login class:


Press OK to save your changes.

Initialize the binding in the form’s constructor

The final step in the binding process is to perform some initialization on the login form, so that we have an actual object in which to store the username and password values. This object can be passed to the form as a constructor argument, a property or a method, or it can be built into the form. For the sake of this blog entry, I’ll simply create it in the form’s constructor. Choose your login form, switch to code view, and add the following lines of code:

public partial class frmLogin : Form
{
    // Our notification object:
    private Login _loginObject;

    public frmLogin( )
    {
        InitializeComponent( );

        _loginObject = new Login( );

        // Associate the binding source with our notification object
        loginBindingSource.Add( _loginObject );
    }
}


That’s it.

When you run your application, you will see that the OK button does not enable itself until both username and password have values. What you may find odd is that you have to move focus from one textbox to another to see this happen. That is because the value from a textbox is only pushed to the object when the control loses focus. If you want a quicker, more live update, you can for example update the object on the KeyUp event.
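A lighter-weight alternative to handling KeyUp yourself (a sketch of mine, not part of the original sample): Windows Forms bindings have a DataSourceUpdateMode, and switching it to OnPropertyChanged makes a textbox push its value to the bound object on every change instead of on focus loss.

```csharp
using System.Windows.Forms;

// Sketch: flip every binding on a control so it pushes its value to the
// bound object on each change, rather than when the control loses focus.
public static class BindingHelper
{
    public static void MakeBindingsLive( Control control )
    {
        foreach( Binding binding in control.DataBindings )
            binding.DataSourceUpdateMode = DataSourceUpdateMode.OnPropertyChanged;
    }
}
```

In the form’s constructor, after the binding source has been populated, you would call BindingHelper.MakeBindingsLive( … ) for the username and password textboxes (whatever you named them).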


Databinding forms to class objects is simply a matter of implementing the INotifyPropertyChanged interface. You can only databind properties, but with a little imagination, the possibilities are many.

You also have the added benefit of being able to unit-test ALL of the behavior that goes on in your dialog without requiring manual intervention.

As a result, you can take presentation behavior classes with you from a Windows Forms project to a WPF or Silverlight project with very little effort; both tests and behavior are already coded, and all you have to do is bind the class to a different GUI. Rumor has it that Microsoft may bring INotifyPropertyChanged functionality to the ASP.NET platform as well, but at the time of writing this blog entry, it is not supported there.

Sample project can be downloaded here

Within the context of testing

This post is written for .NET, using C#, but the methods in use should be fairly easy to implement in most other languages.



I’ve often looked for a good way to write reusable testing scenarios that were clean to use, easy to set up and most importantly – easy to read six months afterwards.

Looking at one of Ayende’s blog posts, where he discusses scenario-driven tests, I found something lacking: whilst he has a great thing going with testing scenarios against a set context, he has limited himself to a very constrained situation, something I needed to expand to suit my needs. The issue is that having a context that represents “the system” is not enough – I want to be able to specify exactly the state my system is in when I execute a scenario, with the ambition of being able to re-use the various situations.


Situations and Settings

I wanted to be able to change contexts with ease, so that my test scenarios ultimately look something like this:

public class EventMonitorTests : TestBase
{
    [TestMethod]
    public void Given_When_Then( )
    {
        using( DdTestContext testContext = new DdTestContext( true ) )
        {
            // Prepare
            int initialCount = testContext.Clients.TotalResponseCount( );
            PrepareSituation< RegisterThreeClients >( testContext );

            // Invoke
            ExecuteScenario< RespondToAllClients >( testContext );

            int actualCount = testContext.Clients.TotalResponseCount( );

            // Assert
            actualCount.ShouldBeExactly( initialCount + 3 );
        }
    }
}

The idea is to introduce an interface ISituation as well as IScenario that are known to my test base-class. I can then invoke a Situation and Scenario in various combinations, so that I can test behaviors under different circumstances without having to repeat myself.

ISituation has a Prepare method, while IScenario has an Execute method, both accepting the testContext object:

public interface ISituation
{
    void Prepare( DdTestContext testContext );
}

public interface IScenario
{
    void Execute( DdTestContext context );
}

I gave it some thought as to whether I needed this separation, but concluded that a clean split between rigging a situation and executing a scenario helps the reader understand the tests.

Additionally, I needed to declare the methods in the base class that are responsible for preparing situations and executing scenarios:

public void PrepareSituation< T >( DdTestContext testContext ) where T : ISituation, new( )
{
    T situation = new T( );
    situation.Prepare( testContext );
}

public void ExecuteScenario< T >( DdTestContext testContext ) where T : IScenario, new( )
{
    T scenario = new T( );
    scenario.Execute( testContext );
}

The functions are limited to accepting only objects that honor the respective interfaces.
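As an illustration, the RegisterThreeClients situation used in the first example test could be as simple as the sketch below. The DdTestContext and ISituation here are reduced stand-ins for the article’s real types, just so the sketch compiles on its own, and the Clients list is a made-up detail:

```csharp
using System;
using System.Collections.Generic;

// Stand-ins for the article's real types, pared down for this sketch:
public interface ISituation
{
    void Prepare( DdTestContext testContext );
}

public class DdTestContext
{
    public List< string > Clients = new List< string >( );
}

// Hypothetical situation implementation: rig the context with three
// registered clients before a scenario executes against it.
public class RegisterThreeClients : ISituation
{
    public void Prepare( DdTestContext testContext )
    {
        testContext.Clients.Add( "alpha" );
        testContext.Clients.Add( "beta" );
        testContext.Clients.Add( "gamma" );
    }
}
```

The test then reads as `PrepareSituation< RegisterThreeClients >( testContext )`, with the rigging details hidden behind a descriptive class name.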

This takes care of the execution. For the sake of practicality, whenever I write a new batch of scenarios to test, I often end up putting most of the ISituation implementations in a single Situations.cs file, and gathering the IScenario implementations in a corresponding Scenarios.cs file. This is purely to keep the related aspects together when they are simple.



This depends largely on the complexity of the situations and scenarios. For rigging up more complex scenarios, I would normally use a sub-folder within my Solution Explorer, and name each file after its class name, as is normal.


The Context class

Initially, I wanted a nice, controlled way to manage transactions for my tests, as they often interact with a database. The DdTestContext class represents the system under test as a whole, and can thus contain some sort of transaction object configured for the database being tested.


public class DdTestContext : IDisposable
{
    private Transaction _transaction;

    public DdTestContext( bool useTransaction )
    {
        if( useTransaction )
        {
            _transaction = new Transaction( );
            _transaction.Begin( );
        }
    }

    public void Dispose( )
    {
        if( _transaction == null )
            return;

        // Roll the transaction back, leaving the database untouched
        _transaction.Rollback( );
    }
}

Thus, with this construct, I now have a reasonably stable way of executing my tests within a transactional context without too much effort:

[TestMethod]
public void Given_When_Then( )
{
    using( DdTestContext testContext = new DdTestContext( true ) )
    {
        // Prepare

        // Invoke

        // Assert
        Assert.Inconclusive( "Not yet implemented" );
    }
}

This instantly became a code snippet 🙂

Every test that executes within the using clause will initiate a transaction and roll it back again once the scope ends (or an unhandled exception occurs). My database is safe, for now.

Whenever I don’t need database support (unit tests), I can simply pass false as the constructor argument, and no transaction will be initiated for that particular context.

I dragged the using statement into each test, as opposed to initializing the test context in a pre/post method, in order to control the use of transactions individually for each test.


Still to come

  • How I expanded the testcontext with useful functions
  • Using extension methods extensively

Comments are very welcome 🙂