Testing Windows Phone 7 class libraries with MSTest

In this post, I explain how you can unit-test your Silverlight and Windows Phone 7 class libraries using MSTest and the built-in test system of VS2010, enabling you to right-click a test to run it.

I am a test-driven developer, and I simply hate the testing harnesses available for the Silverlight (SL) and Windows Phone 7 (wp7) development environments. I believe Microsoft should make it a priority to let MSTest execute class libraries written for runtimes other than the desktop CLR, but that is a side issue.

Today I bring you my proposal for a workaround.

Origin

I wrote a helloWorld application for wp7, and downloaded Roger Peter’s cheat sheet for unit testing apps on this platform. The approach does work, but it looks highly impractical in my view – mostly due to the lack of screen estate, but also because it does not integrate with the Visual Studio 2010 test harness.

 

The key is in the link

The trick to getting your logic tested is to link the source files into a new, regular Windows class library project. Here’s the recipe:

  1. Hatch an idea for a wp7 app (or SL) with the potential to rule the world (don’t they all?)
  2. Create a solution for your application, and add a wp7 Project
  3. Add another wp7 project for your tests
  4. Create a regular Windows class library project and choose Add Existing Item. Pay close attention to the drop-down arrow on the Add button; use it to select “Add as Link”
    image
  5. Do this for all the classes that you wish to test
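Under the hood, “Add as Link” produces a Compile element with a Link child in the class library’s .csproj. A fragment might look something like this (project and file names here are hypothetical):

```xml
<!-- Hypothetical fragment from the Windows class library's .csproj -->
<ItemGroup>
  <!-- Include points back at the original wp7 project's source file;
       the Link element controls where it appears in Solution Explorer -->
  <Compile Include="..\MyWp7App\Logic\ScoreCalculator.cs">
    <Link>Logic\ScoreCalculator.cs</Link>
  </Compile>
</ItemGroup>
```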

 

Within your project, you should see the linked files marked with a small arrow, indicating that they are links:
image

It mostly works!

Having done this, you now have a regular Windows class library that you can use to unit-test your application’s logic with MSTest. As an added bonus, should you want to create a WPF application later, you can of course re-use the code in the same way for cross-platform work.

Changes made to a linked file are applied to the original file, so you can now drive the logic in your class files with unit tests, using your unit-testing framework of preference.
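Once the linked library compiles, a plain MSTest project can exercise the shared logic just like any other Windows code. A minimal sketch (the ScoreCalculator class is a hypothetical stand-in for whatever logic you linked in):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class, linked in from the wp7 project via "Add as Link"
public class ScoreCalculator
{
    public int Add( int left, int right )
    {
        return left + right;
    }
}

[TestClass]
public class ScoreCalculatorTests
{
    [TestMethod]
    public void Add_TwoScores_ReturnsSum( )
    {
        ScoreCalculator calculator = new ScoreCalculator( );

        Assert.AreEqual( 5, calculator.Add( 2, 3 ) );
    }
}
```

From here, right-clicking the test in VS2010 runs it like any regular MSTest.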

image

Why it has a smell to it

This is still a somewhat smelly workaround: you cannot unit-test wp7/SL-specific functionality, because it only applies to code that compiles for both Windows and wp7/SL. Having said that – if you follow the SOLID principles, you should have no problem unit-testing most of your code; only the platform-specific parts and the GUI are left to the lacking test harness, and for those you can safely use the recommended harnesses, such as the one proposed by Roger Peter.

The perfect solution would, of course, be for MSTest to support these other runtimes through a silent simulator or something similar.

Ranting over change – an exercise in futility?

I’m an avid blog reader/commenter and have seen the rise of a wave of rants about Microsoft’s LightSwitch and WebMatrix. These are products designed to make writing Windows applications and web pages even easier than it already is, making the process of creation accessible to more people than ever. Some express outright rage about this; others are concerned about their bread & butter.

 

"State of the Art" Amiga demo, winner of The Party 1992

I’ve been around as a developer since the ZX81 (in excess of 26 years), flipping through an endless plethora of programming languages. In those days, developers learned assembler first, after a few months of punching in BASIC code from a dubious UK-based computer magazine.

Remember all those cool demos on the Spectrum/C64 and later Amiga scenes? 90% of ’em were done in pure assembler! Pascal was hot for a short while before yielding to C/C++.

 

I switched over to the Microsoft platform sometime during the MS-DOS / Windows 3.11 era
(on an i386SX without a floating-point coprocessor!).
In my opinion, the moment you lose control over the CPU’s registers and/or its memory is the moment you have lost control.

MSIL/bytecode languages (Java, C#, VB…) and interpreted languages (Ruby, Python) are not, by that definition, "pure" languages. They take away control from the developer in order to prevent mistakes that tear down all of the surrounding applications (and often, the OS). Actually, come to think of it, even assembler takes away some control, in that you can no longer address invalid registers or shift memory to non-existing locations without a compiler error.

Time, money, and big feet

How often do I read how "messy" C++ is because you have to handle the memory yourself. The fact of the matter is that C++ requires a strong sense of discipline; if you understand the language, you can write applications that make the best Java and .Net apps look really, really neanderthal in terms of anything (performance, memory footprint, program size…) – at the cost of time!

And let’s just say that when people started enrolling in developer classes in the ’90s, it wasn’t because they had a sudden geek awakening; they saw money in the software business and wanted to be a part of it. Today, they’re the vast majority of developers out there. Microsoft makes money on software licenses. It is only natural that they write code for these masses.

But at the cost of performance and memory footprint, C#, Java and other languages make our everyday easier; I can whip out a complete, working business application mockup in a day or two using modern tools (Silverlight/SketchFlow). I used to be employed at a large consulting business where the vast majority of the solutions delivered were MS Access "applications".

 

So what makes it all good enough?
Solving the customer’s problem.

 

Conventional Purist Pattern Pride

In all our purism, dogmas, theorems and ideologies, the fundamental truth is: the customer doesn’t give a rat’s jewels about how you solve his problem! He looks at you as a huge expense that has to be made, nothing more. If you can satisfy his needs with a technical solution that is less expensive than the competition’s, then you’re more than likely to have a satisfied customer. Ayende has a great image in his rant on this – it really boils it down to the essence!
I am a purist myself, make no mistake about that, and I do take pride in my software craftsmanship, but I’ve also seen so much “bad” software out there where the customer is happy!
– At the end of the day, that’s really all that matters!

For long-running or high-risk software that requires quality, I more often than not see that it really just boils down to convention. Patterns tend to be tweaked to circumvent technical limitations or, even more commonly, user ignorance. Who does not have a “tweaked” MVP pattern, or a “somewhat modified version of” MVC… recognize yourself?

My opinion is that it is you, and not the software, that sets the standards. Just like a carpenter: if you do not take pride in the work you do, you simply cannot deliver quality software, regardless of how good your tools are. Granted, using a nail gun instead of a hammer, you can still produce cleaner-looking wallboards without the dents and bruises of 60 missed hammer hits, but if your nails are spread around shotgun-style, you know that wall ain’t gonna last long anyway.

MetaProcess, MetaDeliver, MetaWin:

Microsoft is making it easier and easier to shovel out software that requires less skill to develop, with products like LightSwitch. Is this bad?
I say: “No, that isn’t necessarily bad.”

ANY “good” software has undergone the following metaprocess:

  • Have a clear definition of the application’s domain (what does it do?)
  • Plan for re-use and upgradeability (modularity) where possible
  • Make the application as maintainable as possible (clean code, clear intentions, refactoring)
  • Cover your application’s functionality with tests (TDD, DDD, DDT)

Neither language, nor technology have any impact on this metaprocess.


What is important is that the technology’s operator understands the technology (a question of syntax and experience). If it helps me deliver software at a lower cost without compromising my craftsmanship, then by all means, give it here!

In my view, WebMatrix and LightSwitch must also undergo this same metaprocess in order to become developer platforms that are usable in corporate settings.

 

Some references (Links go directly to the articles):

InfoQ article

The Inquisitive Coder

Jason Zanders WebLog

PCWorld.com

Ayende’s Blog

Finally caved – site now hosted by WordPress

It took me a while – 14 years to be exact – to abandon the concept of writing my own home page and instead just use something ready-made. You can still look up what is left of my most recent site by clicking on this here text, but I won’t make any promises about it staying there for long.

I am still hosting the site myself, though – the WordPress server is running locally against my MySQL database.

As you can tell, I was already using ASP.NET to display the contents of my WordPress blog on my homepage by nosing into WP’s database – but I found that solution ugly, and to be able to comment, you still had to navigate to the WP page and use the functionality there.

So, instead, I will add any relevant links here on this page. Presumably, the guestbook will be one of the first links that I redesign, and expect me to do a whole lot more with the photography bit. As it is now, it’s completely useless.

So, I hope you all like it. I found a theme that I find easy on the eyes, and comments should be a whole lot easier for you all to write now.

Oh, and by the way, Lego’s homepage is still where it was at …

Cheers!

Time to play with Lego!

I’ve always had this distinct notion that my house will not be without a pet. So when we lost Quita, we soon started to look around kennels to see if any of them had a fresh litter with puppies for sale. Some of you may recall that Quita came into our house  the moment we learned that Bella had cancer, they spent 3 wonderful months together before Bella finally yielded to her disease.

Not many days after, we found Lego, a Cavalier King Charles Spaniel, who was born on May the 20th, he was one of two males left in a litter of five. He is the firstborn, and has a distinct white-ish color on the tip of his two back paws.

Why the small breed, you say?

As some of you know, both Quita and Bella were Dobermans, and I have to admit I do love the breed, but our local county (Skedsmo kommune) introduced a “leash all year, everywhere” rule that basically makes life miserable for medium and large-sized dogs. Dogs need to run free, and to socialize with other dogs by playing with them, not being restricted to a quick whiff at the end of a tight leash. That only makes dogs more aggressive.

Rules like this have no meaning – there is no wildlife near my town that needs this type of protection. There are no farms with animals nearby that warrant this law. Do we need a law to protect people from dogs? If so, it is an epic declaration of fail for humanity!

 

Coming home

We picked Lego up yesterday, at 8 weeks of age. He was the last of his litter to leave home; all of his siblings left the house the day before. In a way, that was a good thing: Lego slept alone in his crib for a night in familiar surroundings. His mom had even started to reject him (as they do when the pups reach this age), so I feel we got him at the optimal time! First born, and last to leave.

BUT – I’m not going to write too much about Lego on this site.

While we waited for him, we created Lego’s own news page, so that you can follow his progress.

So point your browser to http://lego.digitaldias.com for news, updates and photos of Lego. That’s where the news will be – for convenience, it’s also a WordPress blog, so you can point your favourite newsreader to it and get the news when they are fresh!

Happy summer!

Pedro & Lego

My dear Quita is dead

Quita

My dearest dog Quita, or Strolls Querida, as was her full name, is dead.

We joked about how Quita was born in a bucket of fuel; she was always happy, crazy, but with a very short attention span.

Over the years, there had been a few minor incidents where she’d bark or even nibble at someone as a perfectly reasonable reaction to something sudden, such as someone accidentally stepping on her, but never a brawl, or anything major.

Last year, Quita began showing more hostility after a period of false pregnancy; she took a few rubber pets as her children and behaved aggressively (barking) toward every person and animal she laid eyes on. Our veterinarian recommended we remove the pets, and lo and behold, we saw a completely different dog – much calmer and more docile than she had been for the last 2-3 years!

Then we took a trip to Germany, and on the way back she bit a stranger in the leg. The bite itself is excusable – she was eating out of the back of the car when the person almost stumbled over her. It was only skin-deep, so she did hold back, but the episode was enough to put a deep scare into me and Bergfrid. A few days ago, she scared my neighbour as he walked by our entrance: she charged at him, barking, and missed him by only a meter when her leash stopped her. This is what sparked us to go to the veterinarian for a talk.

Quita in the hallway

The veterinarian knows Quita well, as he has been her exclusive vet ever since she came into our home. We have visited him at least twice a year for the last 5 years, so he knows her history well, and he was pretty firm in his conclusion: she’s too old to be of value to the police/military, and giving her away to some other family would just be handing the problems over to them. We were left with the choice of limiting her life further by restricting her movements even more (more time at home, tied up when we have visitors, etc.) or just ending it.

Knowing her inside out made the choice easy. Quita loved to run and socialize. To take that away from her would mean making her life miserable, so we decided to end her life there, before we got back home and started regretting it, potentially risking a catastrophe if some kid playing ball accidentally ran into our garden while Quita was there.

The veterinarian found a nice room for us and gave Quita an injection with a heavy muscle relaxant so that she would feel relaxed and drowsy. He then left so that we could say our goodbyes, and came back around half an hour later for the final, lethal injection. This put her out almost immediately; there were no cramps or anything. We stayed with her until she was getting cold.

P1010080

It is not possible for me to explain how much Quita meant to me. It is true what they say: “You cannot possibly grasp what a best-friend is unless you’ve had a dog”.

The attachment, joy, and companionship that Quita gave will be so, oh so missed!

It was the right decision to make, but  it’s a whole lot easier to say than to do. It was possibly the hardest decision that I’ve made,

-ever.

Close( ) – the MVVM chaos


pp_DSC1924 copy

I am an avid adopter of the Model-View-ViewModel pattern for designing applications. It is a sleek, very testable way to write software, but it has one major problem:

Because the ViewModel is unaware of its view, it follows that it is difficult to command a window to close itself.

I googled long and hard for solutions, but what I found was so complex and intricate that it would scare off any developer wanting to do some actual work.

What I give you here is my own compromise between the desire for simple yet testable software and the desire for a clean separation between a View and its ViewModel.

The BaseViewModel

Because every ViewModel implements INotifyPropertyChanged, it is generally a good idea in most software projects to write a base class that encapsulates this behaviour.

In my opinion, every ViewModel should also be able to request that its View disappear for some reason.

Thus, my implementation of a baseclass for ViewModels looks like this:

image

By providing both an ICommand and a Method that both invoke the RequestCloseEvent, I can choose whether a View should close by binding to a button, or as a consequence of some logic in the viewmodel.

The CloseCommand property simply invokes the RequestCloseEvent, nothing more.
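Since the screenshot may be hard to read, here is a sketch of what such a base class plausibly contains, based on the description above. The RelayCommand helper is my own assumption – any delegate-command implementation will do:

```csharp
using System;
using System.ComponentModel;
using System.Windows.Input;

// Minimal ICommand helper (an assumption; any relay/delegate command works)
public class RelayCommand : ICommand
{
    private readonly Action _execute;
    public RelayCommand( Action execute )      { _execute = execute; }
    public event EventHandler CanExecuteChanged;
    public bool CanExecute( object parameter ) { return true; }
    public void Execute( object parameter )    { _execute( ); }
}

public abstract class BaseViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Raised whenever this ViewModel wants its View to go away
    public event EventHandler RequestCloseEvent;

    // Bindable command version, for wiring directly to a button
    public ICommand CloseCommand
    {
        get { return new RelayCommand( RequestClose ); }
    }

    // Method version, for closing as a consequence of ViewModel logic
    public void RequestClose( )
    {
        if( RequestCloseEvent != null )
            RequestCloseEvent( this, EventArgs.Empty );
    }

    protected void NotifyPropertyChanged( string propertyName )
    {
        if( PropertyChanged != null )
            PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
    }
}
```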

DataContext Binding

The practical approach to binding a View to its ViewModel should not require more than a one-liner:

image

The ( Application.Current as App ).ServiceLocator is my IoC container; it has a public property for every ViewModel that I write. I expose the container as a public property in app.xaml.cs; this way, it can be reached from all Views.

The line above uses a simple extension method to do two things:

  • Register the ViewModel provided by the ServiceLocator as the DataContext for that view
  • Attach the RequestCloseEvent to the View’s Close() method

Here’s the code:

image

The idea is that the event always closes the dialog.
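Since the extension method only survives as a screenshot, here is my guess at its shape, assuming a WPF Window and the BaseViewModel base class described above (the method name AttachViewModel is a stand-in; the post’s actual name may differ):

```csharp
using System.Windows;

public static class ViewExtensions
{
    // Registers the ViewModel as the View's DataContext and wires the
    // RequestCloseEvent to the View's Close( ) method.
    public static void AttachViewModel( this Window view, BaseViewModel viewModel )
    {
        view.DataContext = viewModel;
        viewModel.RequestCloseEvent += ( sender, e ) => view.Close( );
    }
}
```

A View’s constructor could then call something like this.AttachViewModel( ( Application.Current as App ).ServiceLocator.SomeViewModel ); in a single line.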

You can then either bind a close button to the CloseCommand property of the BaseViewModel, or have your ViewModel fire the event by calling RequestClose( ) – or both.

 

image

Figure: Binding directly to the base class

 

 

image

Figure: Calling Close from a ViewModel method

So…how testable is this?

For all intents and purposes, I now have a loose enough coupling between my ViewModel and View to verify that the ViewModel is requesting a dialog to close:

image
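The test in the screenshot presumably amounts to something like the following sketch (the LoginViewModel name is hypothetical; any subclass of the BaseViewModel shown earlier would do):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoginViewModelTests
{
    // Hypothetical, empty ViewModel for the sake of the example
    private class LoginViewModel : BaseViewModel { }

    [TestMethod]
    public void CloseCommand_Execute_RaisesRequestCloseEvent( )
    {
        LoginViewModel viewModel = new LoginViewModel( );
        bool closeRequested      = false;
        viewModel.RequestCloseEvent += ( sender, e ) => closeRequested = true;

        viewModel.CloseCommand.Execute( null );

        Assert.IsTrue( closeRequested );
    }
}
```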

Conclusion

The method I’ve given you provides loose enough coupling to be testable, and keeps things simple. There is a single line of code to attach the View to its ViewModel, which I find to be an acceptable trade-off from pure separation.

The ViewModel also remains 100% compatible with the concept of test-driven design, and is simple enough to be used by teams of developers working on large software projects.

PS: Who else does M.C.Escher inspired photography? :)

TDD: Using databinding to objects for the ultimate TDD experience

I figured I should share with you how I normally go about designing a UI, and how I make that design as testable as possible.

To make things easy, I will use a login screen as an example, since it has few controls, and is relatively simple to follow.

Sample VS2008 solution can be downloaded here

The criteria

  • Create a login  dialog with a username, password, OK and Cancel button
  • OK Button is only enabled when both a username and password are set
  • Should be fully unit-testable

Implementation

I start by creating an empty solution with my typical folder structure.

image

The numbering is just something I add to the folders because I like to have the top-down view. Note that I have a clear separation between UserInterfaces and Presentation:

User Interface: Dumb dialog, form, or page containing bindable controls that the user interacts with

Presentation: Smart, data-bindable classes that represent a user interface’s logic and state.

 

Create a login  dialog with a username, password, OK and Cancel button

Next, I’ll add a Windows Forms project to the User Interfaces folder to contain my login dialog, and design our login box of choice:

image

I am only interested in the design at this stage. Aside from setting the UseSystemPasswordChar property on the password textbox, and naturally giving the controls some meaningful names, I do not bother looking at code here.

 

Preparing to code

The next bit is half the magic. I am going to create a class to represent my login dialog. By implementing the INotifyPropertyChanged interface (found in System.ComponentModel), I am telling this class that it can be databound to Windows Forms, WPF and Silverlight controls.

I begin by adding a Presentation class library to contain the login class, as well as a test project where I can put all the facts related to it:

image

The Solution Items folder that you see at the bottom contains the test list and testrunconfig files; it is autogenerated by Visual Studio the first time you add a test project to your solution.

  • Presentation is a regular class library
  • Presentation.UnitTests is a test project

Must be fully unit-testable

In Visual Studio, it’s hard to write tests for objects that do not exist; IntelliSense’s attempts to help you as you go actually turn into something you have to fight. Creating a skeleton Login class makes this process a lot easier.

Initially, it looks like this:
public class Login : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public string Username        { get; set; }
    public string Password        { get; set; }

    // Skeleton only – the real logic comes later
    public bool   OkButtonEnabled { get { return false; } }
}

 

The PropertyChanged event is the mechanism used for databinding. More on that later.

Initially, I am interested in the following behavior from my login class:

image

Implementing these tests is fairly straightforward. When done, I can proceed to getting them to pass.
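The list of facts in the screenshot is hard to make out here, so below is my guess at what that first batch could look like, written against the Login class above (test names and values are mine):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoginTests
{
    private const string _testUsername = "pedro";
    private const string _testPassword = "secret";

    [TestMethod]
    public void OkButtonEnabled_NothingSet_ReturnsFalse( )
    {
        Login login = new Login( );

        Assert.IsFalse( login.OkButtonEnabled );
    }

    [TestMethod]
    public void OkButtonEnabled_UsernameAndPasswordSet_ReturnsTrue( )
    {
        Login login = new Login( );

        login.Username = _testUsername;
        login.Password = _testPassword;

        Assert.IsTrue( login.OkButtonEnabled );
    }
}
```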

SIDENOTE: Unsure how to verify events in a test? Here is a smart way to do it:

[TestMethod]
public void PropertyChanged_SetPassword_EventFires( )
{
    // Prepare
    Login login            = new Login( );            
    bool eventWasFired    = false;
    login.PropertyChanged += (sender, e ) => eventWasFired = true;

    // Invoke
    login.Password = _testPassword;

    // Assert
    Assert.IsTrue( eventWasFired );
}

 

Back to our Login class: we want the properties to “announce” that they have been changed. This can be done like so:

private string _userName;

public string Username
{
    get { return _userName; }
    set
    {
        if( _userName == value )
            return;
        //TODO: Validate username
        _userName = value;
        NotifyPropertyChanged( "Username" );
    }
}

private void NotifyPropertyChanged( string propertyName )
{
    if( PropertyChanged != null )
        PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
}

Basically: if the value actually changes, I announce it through my event handler. There is no point in announcing a value that was never set.

Finally, the OkButtonEnabled property simply checks the username and password:

public bool OkButtonEnabled    
{
    get
    {
        if( string.IsNullOrEmpty( Username ) )
            return false;

        if( string.IsNullOrEmpty( Password ) )
            return false;

        return true;
    }
}

I’m a sucker for readability, can you tell? :)

After a very brief syntax check, I’m done with the Login class for now:

image

 

Binding it to the form

At this stage, I have a Login form with absolutely no code behind it, and a Login class that announces every property change. It is time to bind the two together. The process is simple:

  • Declare the login class as a data source
  • Bind properties from the login class to our form
  • Initialize the binding in the form’s constructor (in the code-behind)

Declare the login class as a data source

In design mode, bring up the properties of the username textbox and find the DataBindings section. Since this is the first time we’re creating a data source, I can pre-select the Text property that I want to bind, and then click the Add project data source link:

image

This brings up the following sequence (I’ll just run through the images, no comments should be necessary):

 

image

 

 

image

 

image

 

Having completed this process, you can now simply bind the TextBox.Text property to your bound class object with a simple drop-down:

image

 

The OK button requires a special binding, because we want to bind its true/false value to the Enabled property, so we open up the Advanced data binding dialog:

image

Find the Enabled property, then simply choose to bind it to OkButtonEnabled in our Login class:

image 

Press OK to save your changes.

Initialize the binding in the form’s constructor

The final step in the binding process is to perform some initialization on the login form, so that we have an actual object in which to store the values for username and password. This object can be passed to the form as a constructor argument, a property or a method, or be a built-in object. For the sake of this blog entry, I’ll simply keep it as a private member. Choose your login form, switch to code view, and add the following lines of code:

public partial class frmLogin : Form
{
    // Our notification object:
    private Login _loginObject;

    public frmLogin( )
    {
        InitializeComponent( );

        _loginObject = new Login( );

        // Associate the databind with our notificationObject
        loginBindingSource.Add( _loginObject );
    }
}

 

That’s it.

When you run your application, you will see that the OK button does not enable itself until both username and password have values. What you may find odd is that you have to move focus from one textbox to another in order to see this. That is because the value from a textbox is only pushed to the object when the control loses focus. If you want a quicker, more live update, you can, for example, update the object on each keystroke.
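One low-friction way to get that immediate update in Windows Forms is to tell the existing bindings to push on every property change instead of waiting for validation. A sketch, assuming the textboxes are named txtUsername and txtPassword (your control names will differ):

```csharp
// In the form's constructor, after InitializeComponent( ):
// push each keystroke to the bound object instead of waiting
// for the textbox to lose focus.
txtUsername.DataBindings[ "Text" ].DataSourceUpdateMode =
    DataSourceUpdateMode.OnPropertyChanged;
txtPassword.DataBindings[ "Text" ].DataSourceUpdateMode =
    DataSourceUpdateMode.OnPropertyChanged;
```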

Summary

Databinding forms to class objects is simply a matter of implementing the INotifyPropertyChanged interface. You can only databind properties, but with a little imagination, the possibilities are many.

You also get the added benefit of being able to unit-test ALL of the behavior that goes on in your dialog without requiring manual intervention.

As a result, you can take presentation behavior classes with you from a Windows Forms project to a WPF or Silverlight project with very little effort; both tests and behavior are already coded, and all you have to do is bind the class to a different GUI. Rumor has it that Microsoft may do something to bring INotifyPropertyChanged functionality to the ASP.NET platform as well, but at the time of writing this blog entry, it is not supported.

Sample project can be downloaded here

And then, cold came!

DSC_0054

Woke up this morning with the sense that my bed was unusually cozy… leaving it did not seem quite the thing to do.

I got myself one of those cheap inside/outside thermometers a few years ago, so I tippy-toed downstairs to have a quick glance at it, and sure enough, it said minus 16 degrees (that’s 3.2F for the yanks) outside. But worse: inside we had no more than just over 17C (62.6F). Who unplugged the global heater??

17 degrees inside – man, that’s not cool at all. It’s enough to make you shiver unless you pack a double set of everything: double socks, double sweaters, and speak twice as fast to have the friction warm you up a little.

And look at my car (you can click on the images, btw)!!

DSC_0058

The poor thing! We’re going to IKEA today to get another shelf for the living room, and this is what greets us!

So I had to go outside in my jammies, hook the car up to the engine heater, run back in again and defrost my family jewels. And while I waited for the thing to thaw, I snapped a few photos for you to see with my brand new Nikon D700 and 24-70mm :)

Happy holidays, all!

Within the context of testing

This post is written for .NET, using C#, but the methods in use should be fairly easy to implement in most other programming languages.

 

Background

I’ve often looked for a good way to write reusable testing scenarios that were clean to use, easy to set up and most importantly – easy to read six months afterwards.

Looking at one of Ayende’s blog posts, where he discusses scenario-driven tests, I found something lacking: whilst he has a great thing going for testing scenarios against a set context, he has limited himself to a very constrained situation, something I needed to expand to suit my needs. The issue is that having a context that represents “the system” is not enough – I want to be able to specify exactly the state my system is in when I execute a scenario, with the ambition of being able to re-use the various situations.

 

Situations and Settings

I wanted to be able to change contexts with ease, so that my test scenarios ultimately look something like:

[TestClass]
public class EventMonitorTests : TestBase
{
    [TestMethod]
    public void Given_When_Then( )
    {
        using( DdTestContext testContext = new DdTestContext( true ) )
        {
            // Prepare
            int initialCount = testContext.Clients.TotalResponseCount( );
            PrepareSituation< RegisterThreeClients >( testContext );

            // Invoke                
            ExecuteScenario< RespondToAllClients >( testContext );

            int actualCount = testContext.Clients.TotalResponseCount( );

            // Assert
            actualCount.ShouldBeExactly( initialCount + 3 );
        }
    }
}

 

The idea is to introduce an ISituation interface as well as an IScenario interface, both known to my test base class. I can then invoke a Situation and a Scenario in various combinations, so that I can test behaviors under different circumstances without having to repeat myself.

ISituation has a Prepare method, while an IScenario has an Execute method, both accepting the testContext object:

public interface ISituation
{
    void Prepare( DdTestContext testContext );
}

public interface IScenario
{
    void Execute( DdTestContext context );
}
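To make this concrete, the situation and scenario used in the example test might be implemented roughly like this. The Clients members used here (Register, All, Respond) are hypothetical placeholders for whatever your test context actually exposes:

```csharp
public class RegisterThreeClients : ISituation
{
    public void Prepare( DdTestContext testContext )
    {
        // Rig the system state: three known clients exist
        for( int i = 0; i < 3; i++ )
            testContext.Clients.Register( "client" + i );
    }
}

public class RespondToAllClients : IScenario
{
    public void Execute( DdTestContext context )
    {
        // The behavior under test: respond once to every client
        foreach( var client in context.Clients.All( ) )
            context.Clients.Respond( client );
    }
}
```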

I gave it some thought as to whether I needed this separation, but concluded that a clean separation between rigging a situation and executing a scenario helps the reader better understand the tests.

Additionally, I needed to declare the methods in the base class that are responsible for running situations and scenarios, PrepareSituation and ExecuteScenario:

public void PrepareSituation< T >( DdTestContext testContext ) where T : ISituation, new( )
{
    T situation = new T( );
    situation.Prepare( testContext );
}

public void ExecuteScenario< T >( DdTestContext testContext ) where T : IScenario, new( )
{
    T scenario = new T( );
    scenario.Execute( testContext );
}

Both methods are constrained to accept only types that implement the respective interfaces.

This takes care of the execution. For the sake of being practical, whenever I write a new batch of scenarios to test, I often end up putting most of the ISituation implementations in a single Situations.cs file, and gathering the IScenario implementations in a corresponding Scenarios.cs file. This is purely to keep the aspects together when they are simple.

 

image

This depends largely on the complexity of the situations and scenarios. For rigging up more complex scenarios, I would normally use a sub-folder in Solution Explorer and name each file after its class name, as is normal.

 

The Context class

Initially, I wanted a nice, controlled way to manage transactions for my tests, as they often interact with a database. The DdTestContext class represents the system under test as a whole, and can thus contain a transaction object configured for the database being tested.

 

public class DdTestContext : IDisposable
{
    private Transaction _transaction;

    public DdTestContext( bool useTransaction )
    {
        if( useTransaction )
        {
            _transaction = new Transaction( );
            _transaction.Begin( );
        }
    }

    public void Dispose( )
    {
        if( _transaction != null )
            _transaction.Rollback( );
    }
}

Thus, with this construct, I now have a reasonably stable way of executing my tests within a transactional context without too much effort:

[TestMethod]
public void Given_When_Then( )
{
    using( DdTestContext testContext = new DdTestContext( true ) )
    {
        // Prepare

        // Invoke

        // Assert
        Assert.Inconclusive( "Not yet implemented" );
    }
}

This instantly became a code snippet :)

Every test that executes within the using clause will initiate a transaction and roll it back again once the scope exits (or an unhandled exception occurs). My database is safe, for now.

Whenever I don’t need database support (unit tests), I can simply pass false as the constructor argument, and no transaction will be initiated for that particular context.

I put the using statement into each test, as opposed to initializing the test context in a pre/post method, in order to control the use of transactions individually for each test.

 

Still to come

  • How I expanded the testcontext with useful functions
  • Using extension methods extensively

Comments are very welcome :)