Getting the newest entry from Azure Table Storage

Sometimes, all you want is to quickly get to the last value of a sensor, or the freshest product in your Azure Table, without writing complex queries to achieve it. Trouble is, querying Azure Tables does not give you a Last() option, so we have to get sneaky!

Turns out, Azure Tables are sorted by their RowKeys (within each partition), which are indexed, so we’re in luck. The challenge is that you need to produce a string that is ever descending in value, so that the newest elements always land on top. Here’s a trick to do just that:

DateTime to the rescue!

The simple trick is to use the DateTime.MaxValue property, which gives us the highest possible DateTime value. Convert that to ticks to get a huge number, subtract DateTime.Now.Ticks from it, and what we end up with is a number that shrinks as time passes – perfect for a RowKey that is “ever descending”:

image

The string formatting just pads the value with leading zeroes to 19 digits, so that string comparison matches numeric comparison.
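In code, the trick from the screenshot can be sketched roughly like this (the method and variable names are my guesses; the original code is in the image above):

```csharp
using System;

// Sketch of the "ever-descending" RowKey trick: invert the tick count
// so that newer timestamps produce lexicographically *smaller* strings.
class RowKeyDemo
{
    public static string DescendingRowKey( DateTime timestamp )
    {
        // "d19" pads with leading zeroes to 19 digits, so that string
        // ordering matches numeric ordering.
        return ( DateTime.MaxValue.Ticks - timestamp.Ticks ).ToString( "d19" );
    }

    static void Main( )
    {
        string older = DescendingRowKey( new DateTime( 2011, 1, 1 ) );
        string newer = DescendingRowKey( new DateTime( 2011, 6, 1 ) );

        // The newer entry sorts first (smaller string), so a plain query
        // returns the freshest row at the top.
        Console.WriteLine( string.CompareOrdinal( newer, older ) < 0 ); // True
    }
}
```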

The RowKeys are now stored in ever-descending order. Here’s a snip of a table I’m storing some sensor values in (using Azure Storage Explorer):

image

On the reading side, you can now simply execute your query knowing that the order of the returned values will always have the first item as the last inserted:

image

 

Learn it, love it, live it!

  • Using cloud storage

    Here comes a blog post in Norwegian!

    I have made a short video showing an exercise I do with the participants of the monthly Microsoft Azure Camp. This is a monthly event of about 3 hours where developers can visit me and get some hands-on experience building a program that uses cloud storage.

    PartyImageUploader

    The program implements automatic uploading of party pictures to Microsoft Azure, so that they can later be displayed in, for example, a website made for the purpose. The exercise is quick to write, and works great as a code kata for those who want to sharpen their use of cloud services.

    YouTube Video

    For this purpose, I have made a 20-minute video in which I build the solution from scratch.

    Important:

    Choose 1080p or 720p to be able to read the text in this video!

    Best viewed in full screen at 1080p or 720p

    Feedback and suggestions for improvement are gladly received, either directly on YouTube, or here. :)

    Pedro

    Faster and Slower

    Growing concerns about the direction of Xaml-based applications

    Microsoft, what the hell do you think you are doing by diverging WPF, Silverlight, and Silverlight for WP7? None of these three target platforms has a solid foothold yet. By making different flavors of XAML available on different target platforms, you’re only doing one thing: pissing off developers. Stop doing that; this is really simple:

    WPF

    WPF should be the mother of all XAML-based apps and have every available technology open to it – including webcam support as in Silverlight, MEF, etc. WPF needs to be that “unlimited” target platform from which both Silverlight and WP7 pick their features.

    Silverlight


    Why oh why can I not use data triggers in SL? What is the reasoning behind it? I know MS has “shifted focus” for Silverlight. This, to my ears, is bull. Silverlight on iOS and Android would give developers a reason to use it. WPF alone cannot succeed as the only XAML-based platform, and SL makes sense for servicing the current craze of tablets and smartphones. Very few people will disagree when I say that MS-powered devices (tablets and phones) are lagging far, far behind. For SL to be a success, it needs to penetrate iOS and Android. End of story; the rest is just bull.

    Silverlight for WP7

    I accept that Silverlight for Windows Phone 7 will offer different capabilities from Silverlight as a XAP, but what I don’t get is why this version of Silverlight has to be a framework version behind the current release of Silverlight for the web. It makes no sense whatsoever to keep developers confused instead of holding back releases until the technology is ready on all platforms!

    Converge now!

    What Microsoft needs to do is hold back releases, so they can do a unified XAML platform upgrade targeting Windows, SL, and WP7 with the same developer options and syntax. No data trigger support in WP7 means don’t release it for Windows or SL either! This is FAR better for developers than the mess you’re giving us now! XAML as a developer platform needs a unified version number; we don’t want WPF for .NET 4.0, Silverlight 5.0 for the web, and a gutted SL 3.5 for WP7.

    So, where was I?

    You may have noticed that digitaldias was down for a week or two.

    I’ve been using an SHDSL line (Single-Pair High-speed Digital Subscriber Line) for the last 6 years, giving me a whopping 2Mbit in both directions!

    Recently, though, I’ve been on the lookout for higher download speeds, as iPads, laptops, and even the PS3 consume more and more information from the web. When I was offered the option of 20Mbit down, and 1Mbit up for much less moolah, I took it.

    My blogs will load at half speed (as if you care!), but then again, I don’t connect back to my office over VPN anymore, so I don’t have a good excuse to pay that much for decent outbound speed anymore.

    I still want higher upload speed, for using Skype in HD, but that will have to wait until the prices (and availability) fit.


    My ISP, Nextgentel, delivered fast and reasonably priced this time. For that, they get a nice, well-deserved kudos from me :)

    Hosting a Silverlight app in Azure

    A quick introduction to getting up and running with Microsoft Azure – a hands-on guide to creating a Silverlight application that uses a REST API to manage SQL data in the cloud.

    Who should read this:

    This article assumes that:

    • You know (and love!!) the SOLID programming principles
    • You know what WCF is and how to host and consume such services through IIS
    • You have some knowledge of the Entity Framework ORM
    • You want to get something out on Windows Azure, but you’re not quite sure how

     

    The concept

    I am writing an inventorizer application. The idea is to keep track of my movies and to know where in my house they are supposed to be. This way, I know where to put a stray movie, as well as check that all movies that are supposed to be in a specific shelf actually are there.

    Later, I will extend the application to access my movie list from mobile devices, so it’s going to require a REST api right from the start.

    Entities and storage

    To get started, I defined 3 basic entities for my application:

    Entity      Detail
    Location    Room / area in my home
    Storage     Shelf, drawer, box, etc. Exists inside a Location
    Movie       Stored inside a piece of Storage
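As plain C# domain objects, the three entities from the table might look something like this (the property names are my guesses, not the author's actual model):

```csharp
using System;

// Hypothetical domain objects matching the entity table above.
public class Location
{
    public Guid   Id   { get; set; }
    public string Name { get; set; }   // e.g. "Living room"
}

public class Storage
{
    public Guid   Id         { get; set; }
    public string Name       { get; set; }   // e.g. "Shelf A"
    public Guid   LocationId { get; set; }   // a Storage exists inside a Location
}

public class Movie
{
    public Guid   Id        { get; set; }
    public string Title     { get; set; }
    public Guid   StorageId { get; set; }    // a Movie is stored inside a Storage
}

class Demo
{
    static void Main( )
    {
        var shelf = new Storage { Id = Guid.NewGuid( ), Name = "Shelf A" };
        var movie = new Movie   { Id = Guid.NewGuid( ), Title = "Alien", StorageId = shelf.Id };
        Console.WriteLine( movie.StorageId == shelf.Id ); // True
    }
}
```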

     

    Using Entity Framework, I started by creating a model from a blank database:

    image

    I’ve explicitly given the entities the prefix “Db” in order to separate them from my C# domain objects. AutoMapper does the conversion for me – pretty straightforward. I keep my domain objects clean and clear of SQL Server, as should you.

    SQL Azure

    To work with SQL Azure, you need a valid Azure account, and you need to have created a database for the purpose. I won’t go into the details of the database creation process; basically, you follow a database creation wizard that does what you expect it to.

    Once created, you want to connect your Visual Studio Server Explorer to this newly created database. To do that, you first allow yourself through SQL Azure’s firewall, which is fairly simple: flip to the Firewall Settings tab and click the “Add Rule” button, which brings up this:

     image

    Complete the firewall rule by entering your IP address, then click OK and flip back to the Databases tab to get a connection string:

    image

    The connection string does not include your password; you’ll have to edit that in after you put it in your settings file. If you need help pushing your model to SQL Azure, just drop me a line and I’ll help you out.

    Setting up the REST service

    Setting up the REST service is a matter of:

    1. Defining your service interface
    2. Implementing the service in some class
    3. Setting up the service endpoint configuration in your service configuration file

    Important note:
    In order to implement REST and use WebGet and such, you need to include a reference to System.ServiceModel.Web. Make sure in your project properties that you’ve selected the full .NET Framework 4.0, and not the .NET Framework 4.0 Client Profile, as your target framework, or System.ServiceModel.Web won’t be available to reference.

    Defining the service interface

    Not much hassle here, the special consideration is the REST way of making the endpoints accessible:

    image

    Implementing the service

    Since we started with the EF model, implementing the service simply means creating a repository interface (for convenience) and then implementing it with the generated context class

    image

    Setting up the service endpoint configuration

    To roll out a successful REST service that serves both POX (plain old XML) and JSON data, I had to create two different binding configurations, even though they are identical in configuration.

    image
    Second, set up a couple of behaviors, differing only in the default response format:

    image
    Finally, set up the endpoints you need:
    image

    Since we are hosting this in Azure, we do not specify any addresses.
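A sketch of what that configuration might look like (the service and contract names are my invention, and the real config is in the screenshots above):

```xml
<!-- Hypothetical sketch of the bindings, behaviors and endpoints described above. -->
<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <binding name="poxBinding" />
      <binding name="jsonBinding" />  <!-- identical, but each endpoint needs its own -->
    </webHttpBinding>
  </bindings>
  <behaviors>
    <endpointBehaviors>
      <behavior name="poxBehavior">
        <webHttp defaultOutgoingResponseFormat="Xml" />
      </behavior>
      <behavior name="jsonBehavior">
        <webHttp defaultOutgoingResponseFormat="Json" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Inventory.InventoryService">
      <!-- only relative addresses: Azure supplies the base address -->
      <endpoint address="pox"  binding="webHttpBinding" bindingConfiguration="poxBinding"
                behaviorConfiguration="poxBehavior"  contract="Inventory.IInventoryService" />
      <endpoint address="json" binding="webHttpBinding" bindingConfiguration="jsonBinding"
                behaviorConfiguration="jsonBehavior" contract="Inventory.IInventoryService" />
    </service>
  </services>
</system.serviceModel>
```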

    Creating the client

    Now that both the database and the REST API are up and running, you only need to create a regular Silverlight client, point it at the service, and you’re in business. I actually created a SOAP endpoint in addition to the POX and JSON addresses, since I do not need to box data between .NET clients; thus my Silverlight client config has the following service reference:
    image
    Notice the relative address: since I’m hosting the Silverlight client from the same location as the service, I use a relative address to avoid cross-domain issues. This took me some time to figure out. I usually start out with a basicHttpBinding and then swap over to TCP/IP once everything is up and running.

    If you need more details on how to write a silverlight client, just drop me a message.

    Azure considerations

    So, having completed, tested, and debugged the project here on earth, it was time to deploy the package to Azure. There was one last thing to do, and that is to put a checkmark on your SQL Azure configuration screen in order to allow your services to connect to the database:

    image
    This is definitely another one of those “easy to forget, hard to figure out” things…

    Integration Testing

    I wanted to have a set of integration tests that directly referenced the SQL Azure database without destroying any data, so I opted for the transaction approach where you basically begin a transaction before each test, and then roll back all changes after running it. This led me to the following base class:

    image

    The base class basically implements the TestInitialize and TestCleanup methods to begin a transaction before each test, and roll it back (Dispose()) after each test has run. Any test that throws an exception will then automatically roll back the database.

    TIP:
    If you use TestInitialize or TestCleanup in a base class, your derived test class won’t be able to use those attributes. This is why I added the virtual Given() method, so that I can do my test setup there should I need to.
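A sketch of such a base class (the member names are my guesses; the real class is in the screenshot above, and in the real MSTest version the Setup/Cleanup methods carry the [TestInitialize]/[TestCleanup] attributes, omitted here so the sketch compiles without the MSTest assemblies):

```csharp
using System;
using System.Transactions;

// Sketch of a transactional test base class: begin a TransactionScope before
// each test, and dispose it without Complete() afterwards, which rolls back.
public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    public void Setup( )      // [TestInitialize] in the real class
    {
        _scope = new TransactionScope( );  // everything after this is rolled back
        Given( );
    }

    public void Cleanup( )    // [TestCleanup] in the real class
    {
        _scope.Dispose( );    // Dispose without Complete() == rollback
    }

    // Derived test classes override this for their own setup,
    // since they cannot re-use the [TestInitialize] attribute.
    protected virtual void Given( ) { }
}

class Demo : TransactionalTestBase
{
    static void Main( )
    {
        var demo = new Demo( );
        demo.Setup( );
        // ... exercise the repository here ...
        demo.Cleanup( );
        Console.WriteLine( "rolled back" );
    }
}
```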

    An example of use:
    image

    The test class above creates an instance of the class StorageRepositorySql; each test is then wrapped inside a transaction scope and rolled back, so as not to disturb my SQL Server data. If you want more details on the base class, just let me know.

    Running these tests is surprisingly fast; on my 2Mbit internet line, most of my tests run in less than 50ms each, which is pretty amazing considering the transactions, and that I’m in Norway while the Azure store probably is in Ireland!

    Conclusion

    Microsoft promises that “going Azure” should be pretty straightforward, and not much different from what you’re already used to. I tend to agree; it has been surprisingly easy to get something up there and running. Most of the challenges were in configuring the REST endpoints and figuring out how to allow the WCF services to access the SQL database, but other than that, the rest is straightforward.

    At the end of this article, I’ve prepared a short Silverlight application that simply lists the locations in my SQL Server. It should be available through the following URL:

    http://digitaldias.cloudapp.net

    However, since this is a work in progress, you may see something more advanced on this page as my application progresses, or something completely different, or perhaps nothing at all – I make no guarantees, other than that it should be there if this article isn’t too old :)

    P.

    Testing Windows Phone 7 class libraries with MSTest

    In this post, I explain how you can unit-test your Silverlight and Windows Phone 7 class libraries using MSTest and the built-in test system of VS2010, enabling you to right-click a test to run it.

    image

    I am a test-driven developer, and simply hate the testing harnesses available for Silverlight (SL) and Windows Phone 7 (WP7) development environments. I do believe that Microsoft should make it a priority to let MSTest execute class libraries written for other runtimes than the desktop CLR, but that is a side issue.

    Today I bring you my proposal for a workaround.

    Origin

    I wrote a helloWorld application for wp7, and downloaded Roger Peter’s cheat sheet for unit testing apps on this platform. The stuff does work, but looks highly impractical in my view – mostly due to the lack of screen estate, but also because it does not integrate with the Visual Studio 2010 test harness.

     

    The key is in the link

    The trick to getting your logic tested is to link the source files into a new, regular Windows class library project. Here’s the recipe:

    1. Hatch an idea for a wp7 app (or SL) with the potential to rule the world (don’t they all?)
    2. Create a solution for your application, and add a wp7 Project
    3. Add another wp7 project for your tests
    4. Create a regular Windows class library and choose Add Existing Item. Pay close attention to the drop-down arrow on the Add button; use it to select “Add as Link”
      image
    5. Do this for all the classes that you wish to test

     

    Within your project, you should see the linked files marked with a small arrow, showing that they are linked:
    image
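For the curious: “Add as Link” simply produces a Compile item with a Link child in the class library’s .csproj, something like this (the paths and file names here are hypothetical):

```xml
<!-- Hypothetical fragment from the Windows class library's .csproj -->
<ItemGroup>
  <Compile Include="..\MyWp7App\CalculatorLogic.cs">
    <Link>CalculatorLogic.cs</Link>
  </Compile>
</ItemGroup>
```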

    It mostly works!

    Having done this, you now have a regular Windows class library that you can use to unit test the logic of your application using MSTest. As an added bonus, should you want to create a WPF application later, you can of course re-use code this way for cross-platform work.

    Changes done to the linked file happen only in the original file, so you can now drive the logic in your class file with unit tests by using your unit testing framework of preference.

    image

    Why it has a smell to it

    This is still a somewhat smelly workaround; consider that you cannot unit-test WP7/SL-specific functionality. It only applies to code that compiles for both Windows and WP7/SL. Having said that, if you follow the SOLID principles, you should not have any problems unit-testing most of your code; it would only be the platform-specific parts and the GUI that you leave to the lacking test harness, and for those you can safely use the recommended test harnesses, such as the one proposed by Roger Peter.

    The perfect solution would, of course, be for MSTest to support these other runtimes through a silent simulator or something.

    Ranting over change – an exercise in futility?

    I’m an avid blog reader/commenter and have seen the rise of a wave of rants about Microsoft’s LightSwitch and Microsoft WebMatrix. These are products designed to make writing windows applications as well as web pages even easier than it already is, making the process of creation accessible to more people than ever. Some express direct rage about this, others are concerned about their bread & butter.

     

    "State of the Art" Amiga demo, winner of The Party 1992

    I’ve been around since the ZX81 (in excess of 26 years) as a developer, flipping through an endless plethora of programming languages.
    In those days, developers learned assembler first, after a few months of punching in BASIC code from a dubious UK-based computer magazine.

    Remember all those cool demos on the Spectrum/C64 and later Amiga scenes? 90% of ’em were done in pure assembler! Pascal was hot for a short while before yielding to C/C++.

     

    I switched over to the Microsoft platform sometime during the MSDOS / Windows 3.11 era.
    (i386SX without a floating point processor!)
    In my opinion, the moment you lose control over the CPU’s registers and/or its memory is the moment you lost control.

    MSIL/bytecode languages (Java, C#, VB…) and interpreted languages (Ruby, Python) are not, by that definition, "pure" languages. They take control away from the developer in order to prevent mistakes that tear down all of the surrounding applications (and often the OS). Actually, come to think of it, even assembler takes away some control, in that you can no longer address invalid registers or shift memory to non-existing locations without a compiler error.

    Time, money, and big feet

    How often do I read how "messy" C++ is because you have to handle the memory yourself. The fact of the matter is that C++ requires a strong sense of discipline; if you understand the language, then you can write applications that make the best Java and .NET apps look really, really Neanderthal in terms of anything (performance, memory footprint, program size…) – at the cost of time!

    And let’s just say that when people started enrolling in developer classes in the 90s, it wasn’t because they had a sudden geek awakening; they saw money in the software business, and wanted to be a part of it. Today, they’re the vast majority of developers out there. Microsoft makes money on software licenses; it is only natural that they write code for these masses.

    But at the cost of performance and memory footprint, C#, Java, and other languages make our everyday easier; I can whip out a complete, working business application mockup in a day or two using modern tools (Silverlight/SketchFlow). I used to be employed at a large consulting business where the vast majority of solutions delivered were MS Access "applications".

     

    So what makes it all good enough?
    Solving the customer’s problem.

     

    Conventional Purist Pattern Pride

    In all our purism, dogmas, theorems, and ideologies, the fundamental truth is: the customer doesn’t give a rat’s jewels about how you solve his problem! He looks at you as a huge expense that has to be made, nothing more. If you can satisfy his needs with a technical solution that is less expensive than the competition, then you’re more than likely to have a satisfied customer. Ayende has a great image in his rant on this – it really boils it down to the essence!
    I am a purist myself, make no mistake about that; I do take pride in my software craftsmanship, but I’ve also seen so much “bad” software out there, and the customer is happy!
    – At the end of the day, that’s really all that matters!

    For long-running or high-risk software that requires quality, I more often than not see that it really just boils down to convention. Patterns tend to be tweaked to circumvent technical limitations or, even more commonly, user ignorance. Who does not have a “tweaked” MVP pattern, or a “somewhat modified version of” MVC… recognize yourself?

    My opinion is that it is you, and not the software, that sets the standard. Just like a carpenter: if you do not take pride in the work you do, you simply cannot deliver quality software, regardless of how good your tools are. Granted, using a nailgun instead of a hammer, you can produce cleaner-looking wallboards without the dents and bruises of 60 missed hammer hits, but if your nails are spread around shotgun-style, you know that wall ain’t gonna last long anyway.

    MetaProcess, MetaDeliver, MetaWin:

    Microsoft is making it easier and easier to shovel out software that requires less skill to develop, with products like LightSwitch. Is this bad?
    I say: “No, that isn’t necessarily bad.”

    ANY “good” software has undergone the following metaprocess:

    • Have a clear definition of the application’s domain (what does it do?)
    • Plan for re-use and upgradeability (modularity) where possible
    • Make the application as maintainable as possible (clean code, clear intentions, refactor)
    • Cover your application’s functionality with tests (TDD, DDD, DDT)

    Neither language nor technology has any impact on this metaprocess.


    What is important is that the technology’s operator understands the technology (a question of syntax and experience). If it helps me deliver software at a lower cost without compromising my craftsmanship, then by all means, give it here!

    In my view, WebMatrix and LightSwitch must also undergo the same metaprocess in order to become developer platforms usable in corporate offices.

     

    Some references (Links go directly to the articles):

    InfoQ article

    The Inquisitive Coder

    Jason Zanders WebLog

    PCWorld.com

    Ayende’s Blog

    Close( ) – the MVVM chaos


    image

    I am an avid adopter of the Model-View-ViewModel pattern for designing applications. It is a sleek, very testable way to write software, but it has one major problem:

    Because the ViewModel is unaware of its view, it follows that it is difficult to command a window to close itself.

    I googled long and hard for solutions, but what I found was so complex and intricate that it would scare off any developer wanting to do some actual work.

    What I give you here is my own compromise between the desire for simple yet testable software and the desire for a clean separation between a View and its ViewModel.

    The BaseViewModel

    Because every ViewModel implements INotifyPropertyChanged, it is generally a good idea in most software projects to write a base class that encapsulates this behaviour.

    In my opinion, every viewModel should also be able to request that a view should disappear for some reason.

    Thus, my implementation of a baseclass for ViewModels looks like this:

    image

    By providing both an ICommand and a Method that both invoke the RequestCloseEvent, I can choose whether a View should close by binding to a button, or as a consequence of some logic in the viewmodel.

    The CloseCommand property simply invokes the RequestCloseEvent, nothing more.
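The base class in the screenshot can be sketched roughly like this (the member names are my guesses from the text, and the DelegateCommand helper is assumed; any RelayCommand-style implementation will do):

```csharp
using System;
using System.ComponentModel;
using System.Windows.Input;

// Sketch of a BaseViewModel that can both announce property changes
// and request that its View close.
public abstract class BaseViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;
    public event EventHandler RequestCloseEvent;

    // Bind a close button to this command...
    public ICommand CloseCommand
    {
        get { return new DelegateCommand( RequestClose ); }
    }

    // ...or call this from viewmodel logic.
    public void RequestClose( )
    {
        if( RequestCloseEvent != null )
            RequestCloseEvent( this, EventArgs.Empty );
    }

    protected void NotifyPropertyChanged( string propertyName )
    {
        if( PropertyChanged != null )
            PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
    }
}

// Minimal ICommand helper, assumed for this sketch.
public class DelegateCommand : ICommand
{
    private readonly Action _execute;
    public DelegateCommand( Action execute ) { _execute = execute; }
    public event EventHandler CanExecuteChanged { add { } remove { } }
    public bool CanExecute( object parameter ) { return true; }
    public void Execute( object parameter )    { _execute( ); }
}

class Demo : BaseViewModel
{
    static void Main( )
    {
        var vm      = new Demo( );
        bool closed = false;
        vm.RequestCloseEvent += ( s, e ) => closed = true;

        vm.CloseCommand.Execute( null );  // as if a bound button were clicked
        Console.WriteLine( closed );      // True
    }
}
```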

    DataContext Binding

    The practical approach to binding a View to its ViewModel should not require more than a one-liner:

    image

    The ( Application.Current as App ).ServiceLocator is my IoC container; it has a public property for every ViewModel that I write. I expose the container as a public property in app.xaml.cs so that it can be reached from all Views.

    The line above uses a simple extension method to do two things:

    • Register the ViewModel provided by the ServiceLocator as the DataContext for that View
    • Attach the RequestCloseEvent to the View’s Close() method

    Here’s the code:

    image

    The idea is that the event always closes the dialog.

    You can then either bind a close button to the CloseCommand property of the BaseViewModel, or you can have your ViewModel fire the event through calling RequestClose( ) – or both.

     

    image

    Figure: Binding directly to the base class

     

     

    image

    Figure: Calling Close from a ViewModel method

    So…how testable is this?

    For all intents and purposes, I now have a loose enough coupling between my ViewModel and View to verify that the ViewModel is requesting a dialog to close:

    image

    Conclusion

    The method I’ve given you provides loose enough coupling to be testable, and keeps things simple. There is a single line of code to attach the View to its ViewModel, which I find an acceptable tradeoff from pure separation.

    The ViewModel also remains 100% compatible with the concept of test-driven design, and is simple enough to use in teams of developers working on large software projects.

    PS: Who else does M.C.Escher inspired photography? :)

    TDD: Using databinding to objects for the ultimate TDD experience

    I figured I should share with you how I normally go about designing a UI, and how I make that design as testable as possible.

    To make things easy, I will use a login screen as an example, since it has few controls, and is relatively simple to follow.

    Sample VS2008 solution can be downloaded here

    The criteria

    • Create a login  dialog with a username, password, OK and Cancel button
    • OK Button is only enabled when both a username and password are set
    • Should be fully unit-testable

    Implementation

    I start by creating an empty solution with my typical folder structure.

    image

    The numbering is just something I add to the folders because I like to have the top-down view. Note that I have a clear separation between UserInterfaces and Presentation:

    User Interface: Dumb dialog, form, or page containing bindable controls that the user interacts with

    Presentation: Smart, data-bindable classes that represent a user interface’s logic and state.

     

    Create a login  dialog with a username, password, OK and Cancel button

    Next, I’ll add the windows forms project to the User Interfaces folder that contains my login dialog, and design our choice login box:

    image

    I am only interested in the design at this stage. Aside from setting the property UseSystemPasswordChar on the textbox for the password, and naturally giving the controls some meaningful names, I do not bother looking at code here.

     

    Preparing to code

    The next bit is half the magic. I am going to create a class to represent my login dialog. By implementing the INotifyPropertyChanged interface (found in System.ComponentModel), I am telling this class that it can be databound to Windows Forms, WPF, and Silverlight controls.

    I begin by adding a Presentation class library to contain the login class, as well as a test project where I can put all the facts related to it:

    image

    The Solution Items folder that you see at the bottom contains the test list and testrunconfig files; it is autogenerated by Visual Studio the first time you add a test project to your solution.

    • Presentation is a regular class library
    • Presentation.UnitTests is a test project

      Must be fully unit-testable

      In Visual Studio, it’s hard to write tests for objects that do not exist; IntelliSense’s attempts to help you as you go actually turn into something you have to fight. Creating a skeleton Login class makes this process a lot easier.

      Initially, it looks like this:
    public class Login : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        public string Username { get; set; }
        public string Password { get; set; }

        // Skeleton only; the real implementation comes further down.
        public bool OkButtonEnabled { get { return false; } }
    }

     

    The PropertyChanged event is the mechanism used for databinding. More on that later.

    Initially, I am interested in the following behavior from my login class:

    image

    Implementing these tests is fairly straightforward. When done, I can proceed to getting them to pass.

    SIDENOTE: Unsure how to verify events in a test? Here is a smart way to do it:

    [TestMethod]
    public void PropertyChanged_SetPassword_EventFires( )
    {
        // Prepare
        Login login        = new Login( );
        bool eventWasFired = false;
        login.PropertyChanged += ( sender, e ) => eventWasFired = true;

        // Invoke  (_testPassword is a string field on the test class)
        login.Password = _testPassword;

        // Assert
        Assert.IsTrue( eventWasFired );
    }

     

    Back to our Login class: we want the properties to “announce” that they have been changed.

    This can be done like so:

    private string _userName;

    public string Username
    {
        get { return _userName; }
        set
        {
            if( _userName == value )
                return;
            //TODO: Validate username
            _userName = value;
            NotifyPropertyChanged( "Username" );
        }
    }

    private void NotifyPropertyChanged( string propertyName )
    {
        if( PropertyChanged != null )
            PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
    }

    Basically: if I can set the value, I announce it with my event handler. There is no point announcing a value that never changed.

    Finally, the OkButtonEnabled property simply checks the username and password:

    public bool OkButtonEnabled    
    {
        get
        {
            if( string.IsNullOrEmpty( Username ) )
                return false;

            if( string.IsNullOrEmpty( Password ) )
                return false;

            return true;
        }
    }

    I’m a sucker for readability, can you tell? :)
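Assembled into one place, the snippets above give the full Login class; the small console driver at the bottom is mine, added just to exercise it:

```csharp
using System;
using System.ComponentModel;

// The Login class from the snippets above, assembled into one file.
public class Login : INotifyPropertyChanged
{
    private string _userName;
    private string _password;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Username
    {
        get { return _userName; }
        set
        {
            if( _userName == value )
                return;
            _userName = value;
            NotifyPropertyChanged( "Username" );
        }
    }

    public string Password
    {
        get { return _password; }
        set
        {
            if( _password == value )
                return;
            _password = value;
            NotifyPropertyChanged( "Password" );
        }
    }

    public bool OkButtonEnabled
    {
        get
        {
            if( string.IsNullOrEmpty( Username ) )
                return false;

            if( string.IsNullOrEmpty( Password ) )
                return false;

            return true;
        }
    }

    private void NotifyPropertyChanged( string propertyName )
    {
        if( PropertyChanged != null )
            PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
    }
}

class Driver
{
    static void Main( )
    {
        var login = new Login( );
        Console.WriteLine( login.OkButtonEnabled );  // False: nothing set yet

        login.Username = "pedro";
        login.Password = "secret";
        Console.WriteLine( login.OkButtonEnabled );  // True: both values present
    }
}
```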

    After a very brief syntax check, I’m done with the Login class for now:

    image

     

    Binding it to the form

    At this stage, I have a Login form with absolutely no code behind it, and a Login class that announces every property change. It is time to bind the two together. The process is simply:

    • Declare the login class as a data source
    • Bind properties from the login class to our form
    • Initialize the binding in the form’s constructor (in the code behind)
    Declare the login class as a data source

    In design mode, bring up the properties of the username textbox and find the data binding section. Since this is the first time we’re adding a data source, I can pre-select the Text property that I want to bind, and then click the Add project data source link:

    image

    This brings up the following sequence (I’ll just run through the images, no comments should be necessary):

     

    image

     

     

    image

     

    image

     

    Having completed this process, you can now simply bind the TextBox.Text property to your bound class object with a simple drop-down:

    image

     

    The OK button requires a special binding, because we want to bind its true/false value to the Enabled property, so we open up the Advanced data binding dialog:

    image

    Find the Enabled property, then simply choose to bind that to the OkButtonEnabled in our Login class:

    image 

    Press OK to save your changes.

    Initialize the binding in the form’s constructor

    The final step in the binding process is to perform some initialization on the login form, so that we have an actual object to store the values for username and password in. This object can be passed to the form as a constructor argument, a property, or a method, or be built in. For the sake of this blog entry, I simply new it up in the form’s constructor. Choose your login form, switch to code view, and add the following lines of code:

    public partial class frmLogin : Form
    {
        // Our notification object:
        private Login _loginObject;

        public frmLogin( )
        {
            InitializeComponent( );

            _loginObject = new Login( );

            // Associate the databind with our notificationObject
            loginBindingSource.Add( _loginObject );
        }
    }

     

    That’s it.

    When you run your application, you will see that the OK button does not enable itself until both username and password have values. What you may find odd is that you have to move focus from one textbox to another in order to see this: the value from a textbox is only pushed to the object when the control loses focus. If you want a quicker, more live update, you can, for example, push the value to the object on the KeyUp event.
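As an alternative to handling KeyUp yourself, a WinForms binding can be told to push the value on every change. This is a hedged sketch — txtUsername stands in for whatever your username TextBox is actually named:

```csharp
// In frmLogin's constructor, after the binding is set up:
// push the textbox value to the Login object on every keystroke,
// instead of waiting for the control to lose focus.
txtUsername.DataBindings[ "Text" ].DataSourceUpdateMode =
    DataSourceUpdateMode.OnPropertyChanged;
```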

    Summary

    Databinding forms to class objects is simply a matter of implementing the INotifyPropertyChanged interface. You can only databind properties, but with a little imagination, the possibilities are many.

    You also have the added benefit of being able to unit-test ALL of the behavior that goes on in your dialog without requiring manual intervention.

    As a result, you can take presentation behavior classes with you from a Windows Forms project to a WPF or Silverlight project with very little effort; both tests and behavior are already coded, and all you have to do is bind the class to a different GUI. Rumor has it that Microsoft may bring INotifyPropertyChanged functionality to the ASP.NET platform as well, but at the time of writing this blog entry, this is not supported.

    Sample project can be downloaded here

    Within the context of testing

    This post is written for .NET, using C#, but the methods in use should be fairly easy to implement in most other languages.

     

    Background

    I’ve often looked for a good way to write reusable testing scenarios that were clean to use, easy to set up and most importantly – easy to read six months afterwards.

    Looking at one of Ayende’s blog posts, where he discusses scenario-driven tests, I found something lacking: while he has a great thing going for testing scenarios against a set context, he has limited himself to a very constrained situation, something I needed to expand to suit my needs. The issue is that having a context that represents “the system” is not enough – I want to be able to specify exactly the state my system is in when I execute a scenario, with the ambition of being able to re-use the various situations.

     

    Situations and Settings

    I wanted to be able to change contexts with ease, so that my test scenarios ultimately look something like:

    [TestClass]
    public class EventMonitorTests : TestBase
    {
        [TestMethod]
        public void Given_When_Then( )
        {
            using( DdTestContext testContext = new DdTestContext( true ) )
            {
                // Prepare
                int initialCount = testContext.Clients.TotalResponseCount( );
                PrepareSituation< RegisterThreeClients >( testContext );
    
                // Invoke                
                ExecuteScenario< RespondToAllClients >( testContext );
    
                int actualCount = testContext.Clients.TotalResponseCount( );
    
                // Assert
                actualCount.ShouldBeExactly( initialCount + 3 );
            }
        }
    }

     

    The idea is to introduce an ISituation interface as well as an IScenario interface, both known to my test base class. I can then combine Situations and Scenarios in various ways, so that I can test behaviors under different circumstances without having to repeat myself.

    ISituation has a Prepare method, while IScenario has an Execute method, both accepting the testContext object:

    public interface ISituation
    {
        void Prepare( DdTestContext testContext );
    }
    
    public interface IScenario
    {
        void Execute( DdTestContext context );
    }

    I gave it some thought as to whether I needed this separation or not, but concluded that having a clean separation between rigging a situation and executing a scenario better helps the reader to understand the tests.

    Additionally, I needed to declare the methods within the base class that are responsible for executing PrepareSituation and ExecuteScenario:

    public void PrepareSituation< T >( DdTestContext testContext ) where T : ISituation, new( )
    {
        T situation = new T( );
        situation.Prepare( testContext );
    }
    
    public void ExecuteScenario< T >( DdTestContext testContext ) where T : IScenario, new( )
    {
        T scenario = new T( );
        scenario.Execute( testContext );
    }

    The functions are limited to accepting only objects that honor the respective interfaces.

    This takes care of the execution. For the sake of being practical, whenever I write a new batch of scenarios to test, I often end up writing most of the ISituation implementations in a single Situations.cs file, and gathering the IScenario implementations in a corresponding Scenarios.cs file. This is purely to keep the aspects together when they are simple.
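To make the pattern concrete, here is a hedged, self-contained sketch of what Situations.cs and Scenarios.cs entries might contain for the test shown earlier. The interfaces and context are redefined as simple stand-ins so the sketch compiles on its own; the real DdTestContext.Clients API is not shown in this post, so its shape here is an assumption:

```csharp
using System.Collections.Generic;

// Stand-ins for the real test infrastructure, just for this sketch:
public interface ISituation { void Prepare( TestContext ctx ); }
public interface IScenario  { void Execute( TestContext ctx ); }

public class TestContext
{
    public List<string> Clients { get; } = new List<string>( );
    public int ResponseCount { get; set; }
}

// A Situations.cs-style entry: rigs the state the test starts from.
public class RegisterThreeClients : ISituation
{
    public void Prepare( TestContext ctx )
    {
        for( int i = 0; i < 3; i++ )
            ctx.Clients.Add( "client-" + i );
    }
}

// A Scenarios.cs-style entry: the behavior under test.
public class RespondToAllClients : IScenario
{
    public void Execute( TestContext ctx )
    {
        ctx.ResponseCount += ctx.Clients.Count;
    }
}
```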

     

    image

    This depends largely on the complexity of the situations and scenarios. For rigging up more complex scenarios, I would normally use a sub-folder in my Solution Explorer, and name each file after its class name, as is normal.

     

    The Context class

    Initially, I wanted a nice, controlled way to manage transactions for my tests, as they often interact with a database. The class DdTestContext represents the system under test as a whole, and can thus contain some sort of transaction object that is configured for the database being tested.

     

    public class DdTestContext : IDisposable
    {
        private Transaction _transaction;
    
        public DdTestContext( bool useTransaction )
        {
            if( useTransaction )
            {
                _transaction = new Transaction( );
                _transaction.Begin( );
            }
        }
    
        public void Dispose( )
        {
            if( _transaction != null )
                _transaction.Rollback( );
        }
    }

    Thus, with this construct, I now have a somewhat stable way of executing my tests within a transactional context without too much effort:

    [TestMethod]
    public void Given_When_Then( )
    {
        using( DdTestContext testContext = new DdTestContext( true ) )
        {
            // Prepare
    
            // Invoke
    
            // Assert
            Assert.Inconclusive( "Not yet implemented" );
        }
    }

    This instantly became a code snippet :)

    Every test that executes within the using clause will initiate a transaction and roll it back again once the scope exits (including when unhandled exceptions occur). My database is safe, for now.

    Whenever I don’t need database support (unit tests), I can simply pass false as the constructor argument, and no transaction will be initiated for that particular context.

    I put the using statement into each test, as opposed to initializing the test context in a setup/teardown method, in order to control the use of transactions individually for each test.

     

    Still to come

    • How I expanded the test context with useful functions
    • Using extension methods extensively

    Comments are very welcome :)