ESP8266 running on batteries

If you haven’t yet heard about the ESP8266, it’s time for you to wake up and get serious! This little beauty is a seriously low-cost WiFi chip with a full TCP/IP stack and microcontroller, made by Espressif. It sports up to 16 GPIO pins, SPI, I2C, UART, and even a 10-bit ADC for good measure. And the price? Less than $2 on AliExpress! It is the size of a coin and supports the 802.11 b/g/n wireless protocols. What’s not to love about this thing?


The device isn’t particularly power-hungry to begin with, but it can draw as much as 200 mA when transmitting over WiFi, and generally around 75 mA just being awake.

If you plan on running it on batteries, you’ll soon find that they only last a few days before the thing dies!


Enter Deep Sleep Mode

Luckily, there is a very nice way to help with this. You see, the device has a deep-sleep mode (among other modes, see the full list here) that allows it to drop to as low as 60 µA, and it is really easy to use. Here are the relevant bits of code to make it happen (using an Arduino sketch, or Visual Studio with the VisualMicro extension):

extern "C" {
#include "user_interface.h"
}

void setup()
{
    Serial.begin(115200);
}

void loop()
{
    // Gather sensor data
    // Transmit sensor data over wifi somewhere

    Serial.println("Entering deep sleep mode");

    // ESP.deepSleep() takes its duration in microseconds
    const uint64_t fiveMinutes = 5ULL * 60 * 1000000;
    ESP.deepSleep(fiveMinutes);
}


At the top, we’re referencing a C library, which is why it is wrapped in the extern "C" scope. Fail to do this, and it won’t compile, due to C++ name mangling and other voodoo.

The ESP8266 enters a state where everything is shut off except the real-time clock in order to hit the 60 µA. When the clock fires again, the device essentially reboots, running setup() again before entering loop().

This does NOT work unless you wire GPIO pin 16 to the RESET pin on the device. The clock needs this connection in order to wake the chip up after the timer ends. So remember, there is a hardware change involved in getting this right!



Setting the ESP8266 in deep-sleep mode means that you can start building battery-driven solutions that last for months instead of days. At an average of 75 mA in normal mode, a 2450 mAh Eneloop would last around 32 hours or less, but with deep sleep, the same battery should last for months (depending on how hard you drive the WiFi and on sensor power consumption, of course). Also remember that the power converters from battery to 3.3 V aren’t perfect, giving you at best around 85% of the stated battery capacity. All in all, it’s a good and handy thing to know about!
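The arithmetic behind that claim can be sketched as two small helper functions. This is a back-of-the-envelope model only; the figures (10 seconds awake at 75 mA, 60 µA asleep, 85% converter efficiency) are the article’s ballpark numbers, not measurements:

```cpp
// Average current draw when alternating between an awake burst
// and a long stretch of deep sleep.
double averageCurrentMilliAmps(double awakeSeconds, double awakeMilliAmps,
                               double sleepSeconds, double sleepMilliAmps)
{
    return (awakeSeconds * awakeMilliAmps + sleepSeconds * sleepMilliAmps)
         / (awakeSeconds + sleepSeconds);
}

// Battery life in hours, derated for an imperfect 3.3 V converter.
double batteryLifeHours(double capacityMilliAmpHours,
                        double averageMilliAmps,
                        double converterEfficiency)
{
    return capacityMilliAmpHours * converterEfficiency / averageMilliAmps;
}
```

With 10 s awake at 75 mA followed by 5 minutes of deep sleep at 0.06 mA, the average lands around 2.5 mA, and a 2450 mAh cell at 85% converter efficiency gives on the order of 800+ hours, i.e. a month or more instead of a day and a half.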

Getting the newest entry from Azure Table Storage

Sometimes, all you want is to be able to quickly get to the last value of a sensor, or the freshest product in your Azure Table without having to do complex artsy queries to achieve that result. Trouble is, querying against Azure tables does not give you a Last() option, so we have to get sneaky!

Turns out, Azure Tables are ordered by their RowKeys (within a partition), which are indexed, so we’re in luck. The challenge is that you need to input a string that is ever descending in value, so that the newest elements are always fresh on top. Here’s a trick to do just that:

DateTime to the rescue!

The simple trick is to use the DateTime.MaxValue property, which gives us the highest possible DateTime value. Then we convert that to ticks in order to get a huge number. Subtract DateTime.Now.Ticks from it, and what we end up with is a number large enough to use as a RowKey that is “ever descending”:


The string formatting just zero-pads the value to 19 digits, so that string ordering matches numeric ordering.
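The original trick is a C# one-liner, but the arithmetic is language-neutral. Here is a sketch of it in C++ (the function name is mine; the constant is C#’s DateTime.MaxValue.Ticks, where one tick is 100 ns counted from 0001-01-01):

```cpp
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <string>

// C#'s DateTime.MaxValue.Ticks.
constexpr uint64_t kMaxTicks = 3155378975999999999ULL;

// Build an "ever descending" row key: later timestamps yield
// lexicographically smaller strings, so the newest row sorts first.
std::string descendingRowKey(uint64_t nowTicks)
{
    char buffer[32];
    // Zero-pad to 19 digits so string ordering matches numeric ordering.
    std::snprintf(buffer, sizeof(buffer), "%019" PRIu64, kMaxTicks - nowTicks);
    return buffer;
}
```

Feed it a larger (newer) tick count and you get a smaller string, which is exactly what makes the freshest row land at the top of the table.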

The RowKeys are now stored in an ever descending order, here’s a snip of a table I’m storing some sensor values in (using Azure Storage Explorer):


On the reading side, you can now simply execute your query knowing that the order of the returned values will always have the first item as the last inserted:



Learn it, love it, live it!

  • Using storage in the cloud

    Here comes a blog post in Norwegian!

    I have made a short video showing an exercise I do with the participants of the monthly Microsoft Azure Camp. This is a monthly event of about 3 hours where developers can visit me and get some hands-on experience building a program that uses cloud storage.


    The program implements automatic uploading of party pictures to Microsoft Azure, so that they can later be displayed on, for example, a website made for the purpose. The exercise is quick to write, and works great as a code kata for those who need to strengthen their use of cloud services.

    YouTube Video

    For this purpose, I have made a 20-minute video where I build the solution from scratch.


    Choose 1080p or 720p to be able to read the text in this video; it is best viewed in full screen.


    Feedback and suggestions for improvement are gladly received, either directly on YouTube, or here




    (Planning the project)

    Hi all, and sorry for the slow updates. My new job has kept me so busy that I have not found any time at all to work on my plant watering project. In addition, with the release of Visual Studio 2013, the Gadgeteer project that I had going no longer works, since the .NET Micro Framework and Gadgeteer APIs need to be upgraded for the latest version of Visual Studio! I also need to get a new mainboard for the project, as the one I have does not support WiFi!

    Visual Studio Online – The project planning tool

    Today, I want to tell you about using Visual Studio Online (VSO). That’s the new name for “Team Foundation Service”, which was introduced a while ago.

    VSO is free to use in projects up to five people, and if you are an MSDN subscriber, you do not even count towards those five. So, for all your hobby needs, you should need no more than this to get started on your next project. I will look at the agile planning part of VSO today, so this is not only for the .Net people!
    Let me repeat this:

    It does not matter what language you develop in, or even if you are a developer at all. VSO is a purebred project planning tool that will see your Agile project needs fulfilled!



    When you plan a project, you typically have some different areas and concepts that need to be defined. These areas, or features as they’re called in VSO (“epics” in Jira), allow you to group together a set of user stories for which you plan your work. My “Water My Plants” (WMP) project is split into the following epic parts:


    Using features, it’s easy to find a home for all the backlog items in the project. As you can see from the image above, I have 6 main features in my WMP project, each with their set of backlog items. For a more general LOB application, you could have features such as “invoicing” or “maps”. Some books describe epics as user stories that represent too much work for one single sprint.

    With the above list of features done, I can easily just change the view mode to “features to backlog items” in order to start planning the user stories. This view mode basically adds a huge plus sign to the left of the highlighted feature, and clicking it adds a new user story below it. You can see in the next image that I’m working on “Windows Azure” and setting up some work that needs to be done there:


    Backlog items are user stories, not tasks. In a large project, you would plan the features and backlog items together with the stakeholders of the project (such as the company owner, project lead, etc.), and once you’ve got a Product Backlog, you order it by priority so that the developer team can start planning their work.

    Get sprinting

    The sprint planning process is identical to backlog creation: assemble your developers and walk through the backlog items with the view “Backlog items to tasks” selected.


    As you can see, now you’re clicking the big plus sign on a backlog item in order to define which tasks need to be done to deliver it. The tasks are described and estimated. The smaller (in time/complexity) the task, the better, because smaller tasks are easier to estimate. Your team commits entire backlog items (with all their tasks) for a sprint – as many as they think they can manage in the allotted period.

    It is important to remember that your team commits entire backlog items to the sprint, not individual tasks! I can’t begin to tell you how many times I see teams trying to deliver individual tasks in sprints. This gives no value to the stakeholders, because as long as a task is missing from the backlog item, it cannot be delivered and tested.

    Once your team has planned enough tasks to last the duration of the sprint, they can now focus on the work getting done, and follow their progress on the burndown graph.


    What I like about VSO is the clean interface, and the tight integration with Visual Studio. Inside Visual Studio, I have a prioritized “Assigned to me” query that has been put on my team page:


    Clicking on this gives me the work items that are either bugs, or tasks that are to do or in progress. I can then easily associate each check-in with the task I was working on. The order of the tasks is, as you’d expect, the priority order in which I arrange the backlog items that the tasks belong to.

    Systems like these are the recipe for success in any modern software project. I honestly believe that VSO has no match, because of the tight integration with Visual Studio and MS Office
    (you can hook up Excel to this just as easily).

    So there you have it, this is how my plant watering project is managed on a larger scale.

    Read more about this under Application Lifecycle Management and Agile planning on the Visual Studio site

    – and let’s hope that the APIs for working with Gadgeteer come to Visual Studio 2013 soon!

    Merry Christmas!


    Faster and Slower

    Growing concerns about the direction of Xaml-based applications

    Microsoft, what the hell do you think you are doing by diverging WPF, Silverlight and Silverlight for WP7?? None of those 3 target platforms has any solid foothold as of yet. By making different flavors of XAML available on different target platforms, you’re only doing one thing: pissing off developers. Stop doing that; this is really simple:


    WPF should be the mother of all XAML-based apps and have every available technology in it – including webcam support as in Silverlight, MEF, etc. WPF needs to be that “unlimited” target platform from which both Silverlight and WP7 pick their features.



    Why oh why can I not use data triggers in SL? What is the reasoning for it? I know MS has “shifted focus” for Silverlight. This, in my ears, is bull. Silverlight on iOS and Android would give developers a reason to use it. WPF alone cannot succeed as the only XAML-based platform, and SL makes sense for servicing the current craze of tablets and smartphones. Very few people will disagree when I say that MS-powered devices (tablets and phones) are lagging far, far behind. For SL to be a success, it needs to penetrate iOS and Android. End of story. The rest is just bull.

    Silverlight for WP7

    I accept that Silverlight for Windows Phone 7 will offer different capabilities from Silverlight as a XAP, but what I don’t get is why this version of Silverlight has to be a framework behind the current release of Silverlight for the web?? It makes no sense whatsoever to keep developers at confusion station by not holding back releases until the technology is ready on all platforms!

    Converge now!

    What Microsoft needs to do is hold back releases, so they can do a unified XAML platform upgrade targeting Windows, SL and WP7 with the same developer options and syntax. No data trigger support for WP7? Then don’t release it for Windows or SL either! This is FAR better for developers than the mess you’re giving us now! XAML, as a developer platform, needs a unified version number; we don’t want WPF for .NET 4.0, Silverlight 5.0 for the web, and a gutted SL 3.5 for WP7.

    So, where was I?

    You may have noticed that digitaldias was down for a week or two.

    I’ve been using an SHDSL line (Single-Pair High-speed Digital Subscriber Line) for the last 6 years, giving me a whopping 2Mbit in both directions!

    Recently, though, I’ve been on the lookout for higher download speeds, as iPads, laptops, and even the PS3 consume more and more information from the web. When I was offered the option of 20Mbit down, and 1Mbit up for much less moolah, I took it.

    My blogs will load at half speed (as if you care!), but then again, I don’t connect back to my office over VPN anymore, so I don’t have any good excuse to pay that much for a decent upload speed anymore.

    I still want higher upload speed for using Skype in HD, but that will have to come when the prices (and availability) fit.


    My ISP, Nextgentel, delivered quickly and at a reasonable price this time. For that, they get a nice, well-deserved kudos from me 🙂

    Hosting a Silverlight app in Azure

    A quick introduction to how you can get up and running with Microsoft Azure – a hands-on guide to creating a Silverlight application that uses a REST API to manage SQL data in the cloud.

    Who should read this:

    This article assumes that:

    • You know (and love!!) the SOLID programming principles
    • You know what WCF is and how to host and consume such services through IIS
    • You have some knowledge of the Entity Framework ORM
    • You want to get something out on Windows Azure, but you’re not quite sure how


    The concept

    I am writing an inventorizer application. The idea is to keep track of my movies and to know where in my house they are supposed to be. This way, I know where to put a stray movie, as well as check that all movies that are supposed to be in a specific shelf actually are there.

    Later, I will extend the application to access my movie list from mobile devices, so it’s going to require a REST api right from the start.

    Entities and storage

    To get started, I defined 3 basic entities for my application:

    • Location – a room / area in my home
    • Storage – a shelf, drawer, box, etc.; exists inside a Location
    • Movie – stored inside a piece of Storage


    Using Entity Framework, I started by creating a model from a blank database:


    I’ve explicitly given the entities the prefix “Db” in order to separate these objects from my C# domain objects. AutoMapper does the conversion for me – pretty straightforward. I keep my domain objects clean and clear of SQL Server, as you should too.

    SQL Azure

    To work with SQL Azure, you need to have a valid Azure account and you also need to have created a database for the purpose. I won’t go into the details of the database creation process; basically, you follow a database creation wizard that does what you expect it to.

    Once it is created, you want to connect your Visual Studio Server Explorer to this newly created database. To do that, you first allow yourself through SQL Azure’s firewall, which is fairly simple: flip to the Firewall Settings tab and click the “Add Rule” button, which brings up this:


    Complete the firewall rule by setting your IP number then click OK and flip back to the databases tab to get a connection string:

    The connection string does not have your password in it; you’ll have to edit that in after you put it in your settings file. If you need help pushing your model to SQL Azure, just drop me a line, and I’ll help you out.

    Setting up the REST service

    Setting up the REST service is a matter of:

    1. Defining your service interface
    2. Implementing the service in some class
    3. Setting up the service endpoint configuration in your service configuration file

    Important note:
    In order to implement REST and use WebGet and such, you need to include a reference to System.ServiceModel.Web. Make sure in your project properties that you’ve selected the full .NET Framework 4.0, and not the .NET Framework 4.0 Client Profile, as your target framework, or System.ServiceModel.Web won’t be visible for you to reference.

    Defining the service interface

    Not much hassle here, the special consideration is the REST way of making the endpoints accessible:


    Implementing the service

    Since we started with the EF model, implementing the service simply means creating a repository interface (for convenience) and then implementing it with the generated context class.


    Setting up the service endpoint configuration

    To roll out a successful REST service that serves both POX (plain old XML) and JSON data, I actually had to create two different binding configurations, even though they are identical in content.

    Second, set up a couple of behaviors, differing only in the default response format:

    Finally, set up the endpoints you need:

    Since we are hosting this in Azure, we do not specify any addresses.

    Creating the client

    Now that both the database and the REST API are up and running, you only need to create a regular Silverlight client, point it to the service, and you’re in business. I actually created a SOAP endpoint in addition to the POX and JSON addresses, since I do not need to box data between .NET clients; thus my Silverlight client config has the following service reference:
    Notice the relative address: since I’m hosting the Silverlight client from the same location as the service, I use a relative address to avoid cross-domain issues. This took me some time to figure out. I usually start out with a basicHttpBinding and then swap over to TCP/IP once everything is up and OK.

    If you need more details on how to write a silverlight client, just drop me a message.

    Azure considerations

    So, having completed, tested, and debugged the project here on earth, it was time to deploy the package to Azure. There was one last remaining thing to do, and that is to put a checkmark on your SQL Azure configuration screen in order to allow your services to connect to the database:

    This is definitely another one of those “easy to forget, hard to figure out” things…

    Integration Testing

    I wanted to have a set of integration tests that directly referenced the SQL Azure database without destroying any data, so I opted for the transaction approach where you basically begin a transaction before each test, and then roll back all changes after running it. This led me to the following base class:


    The base class basically implements the TestInitialize and TestCleanup methods to begin a transaction before each test, and roll it back (Dispose()) after each test has run. Any test that throws an exception will then automatically roll back the database.

    If you use TestInitialize or TestCleanup in a base class, your derived test class won’t be able to use those attributes. This is why I added the virtual Given() method, so that I can do my test setup there should I need to.
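The article’s base class is C#/MSTest against a real SQL Azure connection, but the begin-before/roll-back-after pattern itself can be sketched language-neutrally. Here is a minimal C++ illustration with a fake in-memory “database” (all names here are mine, for illustration only):

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical stand-in for a database; the article uses SQL Azure.
struct FakeDatabase
{
    std::vector<std::string> rows;
};

// RAII guard: snapshot state on construction, roll back on destruction.
// This mirrors the TestInitialize/TestCleanup pair in the article's base
// class: the rollback runs even if the test body throws.
class TransactionScope
{
public:
    explicit TransactionScope(FakeDatabase& db)
        : db_(db), snapshot_(db.rows) {}
    ~TransactionScope() { db_.rows = snapshot_; }  // always roll back
private:
    FakeDatabase& db_;
    std::vector<std::string> snapshot_;
};

// Run a test body inside a transaction that is always rolled back.
void runRolledBack(FakeDatabase& db,
                   const std::function<void(FakeDatabase&)>& test)
{
    TransactionScope scope(db);
    test(db);
}
```

Whatever the test body inserts, the database looks untouched afterwards, which is exactly why the integration tests can hit live data safely.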

    An example of use:

    The test class above creates an instance of the class StorageRepositorySql, and the test that is run is then packaged inside a transaction scope and rolled back so as not to disturb my SQL Server data. If you want more details on the base class, just let me know.

    Running these tests is surprisingly fast; on my 2 Mbit internet line, most of my tests run in less than 50 ms each, which is pretty amazing considering the transactions, and that I’m in Norway while the Azure storage probably is in Ireland!


    Microsoft promises that “going Azure” should be pretty straightforward, and not much different from what you’re already used to. I tend to agree, it has been surprisingly easy to get something up there and running. Most of the challenges were actually in configuring the REST endpoints and figuring out how to allow the WCF services to access the SQL database, but other than that, the rest is straightforward.

    At the end of this article, I’ve prepared a short Silverlight application that simply lists the locations in my SQL Server. It should be available through the following URL:

    However, since this is work in progress, you may see a more advanced thing on this page as my application progresses, or something completely different, or perhaps nothing at all – I make no guarantees, other than that it should be there if this article isn’t too old 🙂


    Testing Windows Phone 7 class libraries with MSTest

    In this post, I explain how you can unit-test your Silverlight and Windows Phone 7 class libraries using MSTest and the built-in test system of VS2010, enabling you to right-click a test to run it.

    I am a test-driven developer, and simply hate the testing harnesses available for the Silverlight (SL) and Phone 7 (WP7) development environments. I do believe that Microsoft should make it a priority to allow MSTest to execute class libraries written for other runtimes than the CLR on Windows, but that is a side issue.

    Today I bring you my proposal for a workaround.


    I wrote a HelloWorld application for WP7, and downloaded Roger Peter’s cheat sheet for unit testing apps on this platform. The stuff does work, but looks highly impractical in my view – mostly due to the lack of screen estate, but also because it does not integrate with the Visual Studio 2010 test harness.


    The key is in the link

    The trick to getting your logic tested is to link the source files into a new, regular Windows class library project. Here’s the recipe:

    1. Hatch an idea for a wp7 app (or SL) with the potential to rule the world (don’t they all?)
    2. Create a solution for your application, and add a wp7 Project
    3. Add another wp7 project for your tests
    4. Create a regular Windows Class Library and choose to add existing item. Pay close attention to the drop-down arrow on the ADD button. Use this to select “Add as Link”
    5. Do this for all the classes that you wish to test


    Within your project, you should see the linked files marked with a small arrow, marking it as linked:

    It mostly works!

    Having done this, you now have a regular Windows class library that you can use to unit test the logic of your application using MSTest. As an added bonus, should you want to create a WPF application later, you can of course re-use code in this way for cross-platform work.

    Changes done to the linked file happen only in the original file, so you can now drive the logic in your class file with unit tests by using your unit testing framework of preference.


    Why it has a smell to it

    This is still a somewhat smelly workaround: consider that you cannot unit test WP7/SL-specific functionality. It is only a workaround that applies to code that compiles for both Windows and WP7/SL. Having said that, if you follow the SOLID principles, you should not have any problems unit testing most of your code; it would only be the platform-specific parts and the GUI that you leave to the lacking test harness, and for those you can safely use the recommended test harnesses, such as the one proposed by Roger Peter.

    The perfect solution would, of course, be to have MSTest support these other runtimes through a silent simulator or something.

    Ranting over change – an exercise in futility?

    I’m an avid blog reader/commenter and have seen the rise of a wave of rants about Microsoft’s LightSwitch and Microsoft WebMatrix. These are products designed to make writing windows applications as well as web pages even easier than it already is, making the process of creation accessible to more people than ever. Some express direct rage about this, others are concerned about their bread & butter.


    ("State of the Art" Amiga demo, winner of The Party 1992)

    I’ve been around since the ZX81 (in excess of 26 years) as a developer, flipping through an endless plethora of programming languages.
    In those days, developers learned assembler first, after a few months of punching in BASIC code from a dubious UK-based computer magazine.

    Remember all those cool demos on the Spectrum/C64 and later Amiga scenes? 90% of ’em were done in pure assembler! Pascal was hot for a short while before yielding to C/C++.


    I switched over to the Microsoft platform sometime during the MS-DOS / Windows 3.11 era
    (an i386SX without a floating point processor!).
    In my opinion, the moment you lose control over the CPU’s registers and/or its memory is the moment you have lost control.

    MSIL/bytecode languages (Java, C#, VB…) and interpreted languages (Ruby, Python) are not, by that definition, “pure” languages. They take away control from the developer in order to prevent the developer from making mistakes that tear down all of the surrounding applications (and often, the OS). Actually, come to think of it, even assembler takes away some control, in that you can no longer address invalid registers, or shift memory to non-existing locations, without a compiler error.

    Time, money, and big feet

    How often do I read how “messy” C++ is because you have to handle the memory yourself. The fact of the matter is that C++ requires a strong sense of discipline; if you understand the language, then you can write applications that make the best Java and .NET apps look really, really Neanderthal in terms of anything (performance, memory footprint, program size…) – at the cost of time!

    And let’s just say, when people started enrolling in developer classes in the 90’s, it wasn’t because of a sudden geek awakening; they saw money in the software business, and wanted to be a part of it. Today, they’re the vast majority of developers out there. Microsoft makes money on software licenses. It is only natural that they write code for these masses.

    But at the cost of performance and memory footprint, C#, Java and other languages make our everyday easier; I can whip out a complete, working business application mockup in a day or two using modern tools (Silverlight/SketchFlow). I used to be employed in a large consulting business where the vast majority of solutions delivered were MS Access “applications”.


    So what makes it all good enough?
    Solving the customer’s problem.


    Conventional Purist Pattern Pride

    In all our purism, dogmas, theorems and ideologies, the fundamental truth is: the customer doesn’t give a rat’s jewels about how you solve his problem! He looks at you as a huge expense that has to be made, nothing more. If you can satisfy his needs with a technical solution that is less expensive than the competition, then you’re more than likely to have a satisfied customer. Ayende has a great image in his rant on this – it really boils it down to the essence!
    I am a purist myself, make no mistake about that. I do take pride in my software craftsmanship, but I’ve also seen so much “bad” software out there, and the customer is happy!
    – At the end of the day, that’s really all that matters!

    For long-running, or high-risk, software that requires quality,
    I more often than not see that it really just boils down to convention. Patterns tend to be tweaked to circumvent technical limitations or, even more commonly, user ignorance. Who does not have a “tweaked” MVP pattern, or a “somewhat modified version of” MVC… recognize yourself?

    My opinion is that it is you, not the software, that sets the standards. Just like a carpenter: if you do not take pride in the work you do, you simply cannot deliver quality software, regardless of how good your tools are. Granted, using a nail gun instead of a hammer, you can produce cleaner-looking wallboards without the dents and bruises of 60 missed hammer hits, but if your nails are spread around shotgun-style, you know that wall ain’t gonna last long anyway.

    MetaProcess, MetaDeliver, MetaWin:

    Microsoft is making it easier and easier to shovel out software that requires less skill to develop, with products like LightSwitch. Is this bad?
    I say: “No, that isn’t necessarily bad.”

    ANY “good” software has undergone the following metaprocess:

    • Have a clear definition of the application’s domain (what does it do?)
    • Plan for re-use and upgradeability (modularity) where possible
    • Make the application as maintainable as possible (clean code, clear intentions, refactoring)
    • Cover your application’s functionality with tests (TDD, DDD, DDT)

    Neither language nor technology has any impact on this metaprocess.


    What is important is that the technology’s operator understands the technology (a question of syntax and experience). If it helps me deliver software at a lower cost without compromising my craftsmanship, then by all means, give it here!

    In my view, WebMatrix and LightSwitch must also undergo the same metaprocess in order to be developer platforms that are usable for corporate offices.


    Some references (Links go directly to the articles):

    InfoQ article

    The Inquisitive Coder

    Jason Zanders WebLog

    Ayende’s Blog

    Close( ) – the MVVM chaos



    I am an avid adopter of the Model-View-ViewModel pattern for designing applications. It is a sleek, very testable way to write software, but it has one major problem:

    Because the ViewModel is unaware of its view, it follows that it is difficult to command a window to close itself.

    I googled long and hard for solutions, but what I found was so complex and intricate that it would scare off any developer wanting to do some actual work.

    What I give you here is my own compromise between the desire for simple yet testable software and the desire for a clean separation between a View and its ViewModel.

    The BaseViewModel

    Because every ViewModel implements INotifyPropertyChanged, it is generally a good idea to write a base class that encapsulates this behaviour in most software projects.

    In my opinion, every viewModel should also be able to request that a view should disappear for some reason.

    Thus, my implementation of a baseclass for ViewModels looks like this:


    By providing both an ICommand and a method that both invoke the RequestCloseEvent, I can choose whether a View should close by binding to a button, or as a consequence of some logic in the ViewModel.

    The CloseCommand property simply invokes the RequestCloseEvent, nothing more.
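The original base class is C#/WPF, but the essence of the pattern fits in a few lines of any language. Here is a hedged C++ sketch (class and member names are mine, not the article’s code): the ViewModel never references its View; it only raises a “request close” event that the View subscribes to.

```cpp
#include <functional>

// Sketch of the BaseViewModel idea: the view model exposes a close-request
// event and a method to raise it, and knows nothing about any window.
class BaseViewModel
{
public:
    // The view hooks its own Close() here (the C# original uses an event
    // plus an ICommand bound to a button; one callback stands in for both).
    std::function<void()> RequestCloseEvent;

    // Callable from a bound command or from view-model logic.
    void RequestClose()
    {
        if (RequestCloseEvent)
            RequestCloseEvent();
    }
};

class LoginViewModel : public BaseViewModel
{
public:
    void OnLoginSucceeded()
    {
        // Some logic decided the dialog should go away.
        RequestClose();
    }
};
```

The View (or a test) attaches a handler to RequestCloseEvent, which is what keeps the ViewModel fully testable without any window in sight.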

    DataContext Binding

    The practical approach to binding a View to its ViewModel should not require more than a one-liner:


    The (Application.Current as App).ServiceLocator is my IoC container; it has a public property for every ViewModel that I write. I add the container as a public property in app.xaml.cs. This way, it can be reached from all views.

    The line above uses a simple extension method to do two things:

    • Register the ViewModel provided by the ServiceLocator as the DataContext for that view
    • Attach the RequestCloseEvent to the View’s Close() method

    Here’s the code:


    The idea is that the event always closes the dialog.

    You can then either bind a close button to the CloseCommand property of the BaseViewModel, or you can have your ViewModel fire the event through calling RequestClose( ) – or both.



    Figure: Binding directly to the base class




    Figure: Calling Close from a ViewModel method

    So…how testable is this?

    For all intents and purposes, I now have a loose enough coupling between my ViewModel and View to verify that the ViewModel is requesting a dialog to close:



    The method I’ve given you provides a loose enough coupling to be testable, and keeps things simple. There is one single line of code to attach the View to its ViewModel, which I find to be an acceptable tradeoff from a pure separation.

    The ViewModel also remains 100% compatible with the concept of test driven design, and is simple enough for teams of developers working on large software projects to use

    PS: Who else does M.C.Escher inspired photography? 🙂

    TDD: Using databinding to objects for the ultimate TDD experience

    I figured I should share with you how I normally go about designing UI, and how I make that design as testable as possible.

    To make things easy, I will use a login screen as an example, since it has few controls, and is relatively simple to follow.

    Sample VS2008 solution can be downloaded here

    The criteria

    • Create a login dialog with a username, password, OK and Cancel button
    • OK Button is only enabled when both a username and password are set
    • Should be fully unit-testable


    I start by creating an empty solution with my typical folder structure.


    The numbering is just something I add to the folders because I like to have the top-down view. Note that I have a clear separation between UserInterfaces and Presentation:

    User Interface: Dumb dialog, form, or page containing bindable controls that the user interacts with

    Presentation: Smart, data-bindable classes that represent a user interface’s logic and state.


    Create a login dialog with a username, password, OK and Cancel button

    Next, I’ll add a Windows Forms project to the User Interfaces folder to contain my login dialog, and design our login box of choice:


    I am only interested in the design at this stage. Aside from setting the UseSystemPasswordChar property on the password textbox, and naturally giving the controls some meaningful names, I do not bother looking at code here.


    Preparing to code

    The next bit is half the magic. I am going to create a class to represent my login dialog. By implementing the INotifyPropertyChanged interface (found in System.ComponentModel), I am telling this class that it can be databound to Windows Forms, WPF, and Silverlight controls.

    I begin by adding a Presentation class library to contain the login class, as well as a test project where I can put all the facts related to it:


    The Solution Items folder that you see at the bottom contains the test list and test run configuration files; it is autogenerated by Visual Studio the first time you add a test project to your solution.

    • Presentation is a regular class library
    • Presentation.UnitTests is a test project

      Must be fully unit-testable

      In Visual Studio, it’s hard to write tests for objects that do not exist: IntelliSense, which tries to help you as you go, actually turns into something you have to fight. Creating a skeleton Login class makes this process a lot easier.

      Initially, it looks like this:

    public class Login : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        public string Username { get; set; }
        public string Password { get; set; }
        public bool   OkButtonEnabled { get { return false; } }    // TODO: implement
    }

    The PropertyChanged event is the mechanism used for databinding. More on that later.

    Initially, I am interested in the following behavior from my login class:


    Implementing these tests is fairly straightforward. When done, I can proceed to getting them to pass.

    SIDENOTE: Unsure how to verify events in a test? Here is a smart way to do it:

    [TestMethod]
    public void PropertyChanged_SetPassword_EventFires( )
    {
        // Prepare
        Login login           = new Login( );
        bool  eventWasFired   = false;
        login.PropertyChanged += ( sender, e ) => eventWasFired = true;

        // Invoke
        login.Password = _testPassword;

        // Assert
        Assert.IsTrue( eventWasFired );
    }


    Back to our Login class: we want the properties to “announce” that they have been changed.

    This can be done like so:

    private string _userName;

    public string Username
    {
        get { return _userName; }
        set
        {
            if( _userName == value )
                return;

            //TODO: Validate username
            _userName = value;
            NotifyPropertyChanged( "Username" );
        }
    }

    private void NotifyPropertyChanged( string propertyName )
    {
        if( PropertyChanged != null )
            PropertyChanged( this, new PropertyChangedEventArgs( propertyName ) );
    }
    Basically: if the value actually changes, I announce it with my event handler. There is no point announcing a value that never changed.

    Finally, the OkButtonEnabled property simply checks the username and password:

    public bool OkButtonEnabled
    {
        get
        {
            if( string.IsNullOrEmpty( Username ) )
                return false;

            if( string.IsNullOrEmpty( Password ) )
                return false;

            return true;
        }
    }
    I’m a sucker for readability, can you tell? 🙂

    After a very brief syntax check, I’m done with the Login class for now:



    Binding it to the form

    At this stage, I have a Login form with absolutely no code behind it, and a Login class that announces every property change. It is time to bind the two together. The process is simple:

    • Declare the login class as a data source
    • Bind properties from the login class to our form
    • Initialize the binding in the form’s constructor (in the code behind)
    Declare the login class as a data source

    In design mode, bring up the properties of the username textbox and find the DataBindings section. Since this is the first time we’re adding a data source, I can pre-select the Text property that I want to bind, and then click the Add project data source link


    This brings up the following sequence (I’ll just run through the images, no comments should be necessary):









    Having completed this process, you can now simply bind the TextBox.Text property to your bound class object with a simple drop-down:



    The OK button requires a special binding, because we want to bind a true/false value to its Enabled property, so we open the Advanced data binding dialog:


    Find the Enabled property, then simply choose to bind that to the OkButtonEnabled in our Login class:


    Press OK to save your changes.

    Initialize the binding in the form’s constructor

    The final step in the binding process is to perform some initialization on the login form, so that we have an actual object in which to store the username and password values. This object can be passed to the form as a constructor argument, a property, or a method parameter, or be a built-in object. For the sake of this blog entry, I’ll simply create it directly in the form. Choose your login form, switch to code view, and add the following lines of code:

    public partial class frmLogin : Form
    {
        // Our notification object:
        private Login _loginObject;

        public frmLogin( )
        {
            InitializeComponent( );

            _loginObject = new Login( );

            // Associate the databinding with our notification object
            loginBindingSource.Add( _loginObject );
        }
    }


    That’s it.

    When you run your application, you will see that the OK button does not enable itself before both username and password have values. What you may find odd is that you have to move focus from one textbox to another in order to see this: the value from a textbox is only passed to the object when the control loses focus. If you want a quicker, more live update, you can, for example, update the object on the KeyUp event.
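    With Windows Forms this can also be done declaratively: the Binding class has a DataSourceUpdateMode property, and setting it to OnPropertyChanged pushes each change to the bound object immediately instead of waiting for focus to leave the control. The control name here is an assumption:

```csharp
// Push the textbox value to the bound Login object on every change,
// instead of waiting for the control to lose focus.
txtUsername.DataBindings[ "Text" ].DataSourceUpdateMode =
    DataSourceUpdateMode.OnPropertyChanged;
```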


    Databinding forms to class objects is simply a matter of implementing the INotifyPropertyChanged interface. You can only databind properties, but with a little imagination, the possibilities are many.

    You also have the added benefit of being able to unit-test ALL of the behavior that goes on in your dialog without requiring manual intervention.

    As a result, you can take presentation behavior classes with you from a Windows Forms project to a WPF or Silverlight project with very little effort; both tests and behavior are already coded, and all you have to do is bind the class to a different GUI. Rumor has it that Microsoft may bring INotifyPropertyChanged functionality to the platform as well, but at the time of writing this blog entry, this is not supported.

    Sample project can be downloaded here