Profectus Est!

(It is started!)

Project Water my Plants has begun. This blog post marks the arrival of the first bits of electronics, and gives a quick overview of what getting started with Gadgeteer felt like.

Image 1: The victim: a white stripe spider plant (Chlorophytum comosum ‘Vittatum’)

My box of Gadgeteer devices has arrived. I’ve found a suitable plant to experiment with, and all of the software required to get started is now installed in two places (on my desktop and on my laptop, for field experimentation). As you can see from the brown bits, this plant is already suffering and requires immediate attention of the highest technical quality to survive…

Overview of the project

The project has been set up using Visual Studio 2012 and will be managed by Team Foundation Service, which will also function as my source control provider.

Figure 1: Overview of project layout

The project structure follows a typical N-layered approach, as seen in the image above. Folder 10.10 (Application/Micro Framework) is where the electronics software is written – the remaining folders host the web page and web services. The Specifications folder contains screenshots and some scaffolding for running the project in a BDD harness using a combination of SpecFlow and CasperJs.

Getting started with Gadgeteer

Image 2: The Gadgeteer Moisture sensor half buried in the plant soil

Installing the .NET Gadgeteer project was straightforward, except for a couple of things:

  • You should install the entire 4.2 SDK first, and then upgrade to 4.3.
  • There is a Visual Studio 2012 compatible Core + Templates package for Gadgeteer that you need to install once you’re done with the 4.2 parts; otherwise, you won’t see the Gadgeteer application project types.


    Once installed, creating the project was as easy as adding a new project to the solution, choosing Gadgeteer Application and following the wizard:

    Gadgeteer Wizard
    Image 3: The first page of the wizard asks what type of mainboard you have

    The wizard starts by asking what type of mainboard you have, and then proceeds to create a diagram where the mainboard is visualized. From there, it is as simple as just dragging the components you want from the Toolbox onto the designer, and connecting them by clicking on the gates and drawing a line:

    Gadgeteer Designer in VS2012
    Image 4: Using the designer tool to drag the components and connect them to the board

    Auto-generated code

    Every time you save the designer, the auto-generated partial class Program is updated with the new components and port assignments. The generated code is clean and easy to understand – no clutter or cryptic names here. By default, the module names are the same as the module type.
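The generated partial class looks roughly like this (a sketch from memory – the module types, field names and socket numbers are illustrative, not copied from the actual generated file):

```csharp
// Sketch of what the designer emits; socket numbers mirror the lines
// drawn in the designer (assumed values shown here)
public partial class Program
{
    // One field per module dropped on the designer, named after its type
    private Gadgeteer.Modules.GHIElectronics.Moisture moisture;
    private Gadgeteer.Modules.GHIElectronics.LightSensor lightSensor;

    private void InitializeModules()
    {
        this.moisture    = new Gadgeteer.Modules.GHIElectronics.Moisture(4);
        this.lightSensor = new Gadgeteer.Modules.GHIElectronics.LightSensor(9);
    }
}
```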

    Image 5: Designer-generated code

    Measure My Plant

    The first batch of electronics to arrive was needed for my first application, MeasureMyPlant.exe – a device application that is going to do two things:

    1. Take readings of the environment that the plant is in (moisture, light, temperature)
    2. Transmit the data over WiFi to a cloud service in Azure


    Only part 1 was done at this stage, because I held off on the more expensive WiFi component until I had a cheap proof of concept going. It turns out I should’ve ordered everything at once – the electronics simply worked! Within 5 minutes of typing, I had the following program running on the Hydra board:
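The program amounted to little more than a timer polling the sensors. A sketch of it (the Gadgeteer read-method names are from memory and may differ slightly from the real module API):

```csharp
public partial class Program
{
    private GT.Timer _timer;

    // Runs once when the mainboard boots
    void ProgramStarted()
    {
        Debug.Print("MeasureMyPlant started");

        // Poll the sensors every 10 seconds; the moisture and lightSensor
        // fields come from the designer-generated partial class
        _timer = new GT.Timer(10000);
        _timer.Tick += timer =>
        {
            // ReadMoisture()/ReadLightSensorPercentage() are assumed names
            Debug.Print("Moisture: " + moisture.ReadMoisture());
            Debug.Print("Light:    " + lightSensor.ReadLightSensorPercentage());
        };
        _timer.Start();
    }
}
```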

    Image 6: My first running program on the Hydra board

    Time to order some more stuff!

    I will now proceed to order some of the more expensive modules. While I await their arrival, I will be working on the website. There may be a blog post or two about that soon…

    For now, a few more images of the protagonists of this blog post:

    Image 7: The modules used in this blog (click for larger image)

    Image 8: My Fez Hydra mainboard with all the connection ports (click for larger image)

    Geeky meetup with Novanet

    Image: The Novanet doorbell

    I had the pleasure of being invited to a late-night geek meet hosted by Novanet, a consulting company focusing on the Microsoft platform. I was invited by my future colleague at Microsoft, Arif Shafique, which I appreciated greatly.

    The first talk of the evening was held by Anders Nygaard, who gave an introduction to developing Windows modern UI apps using HTML5/JavaScript. He gave some great tips on how to get started, with clear, easy-to-follow examples along the way. You should follow his “One app each month” blog, where he has committed himself to releasing one new app every month on the Microsoft app store. If you ever get the chance to attend one of his talks, you’re in for a treat!

    The second speaker, Einar Engebretsen, gave a very good introduction to Domain-Driven Design. It smoothly transitioned into the open-sourced Bifrost Framework, which focuses on delivering business value through the use of well-established patterns such as CQRS and MVVM. Following this talk, I have decided to use this framework in my soon-to-be-announced super-secret “Water my Plants” Gadgeteer project, and at the same time see if I can mend some of the misinterpretations that I have built up around DDD.

    Additionally, Einar introduced us to another project he is working on: Forseti. Simply put, Forseti is a test runner for most JavaScript test harnesses and, of course, it’s built in .Net. We only saw a couple of examples, but they were enough to show that it is way faster than anything I’ve seen so far – standalone Jasmine, Grunt and other runners included.

    Einar is the kind of speaker who demonstrates by example – all coding is done live, and as usual accompanied by deep insight and inspiration. Einar is, in short, pure coding awesomeness!

    Once more: thank you, Novanet, for a great and geeky meetup!

    Structuremap: Singleton class with multiple interfaces

    In a recent project, I had the need to implement multiple interfaces in a single repository class:

    public class UsageStatisticsRepository : DbContext, IApiKeyCounter, IIpAddressCounter
    {
        // …
    }

    We use StructureMap as our IoC container, and attempting to configure the two interfaces as singletons gave us two instances (naturally), one for each interface.

    The solution was to use the not-so-obvious keyword Forward in order to re-use the singleton instance for another interface, like so:

    public class DigitalDiasRegistry : Registry
    {
        public DigitalDiasRegistry()
        {
            Forward<IApiKeyCounter, IIpAddressCounter>();
        }
    }


    With that, the problem was solved, and only one instance occurred. Since I fiddled for some time to figure this out, I thought sharing it might help someone else.
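For completeness, here is how the singleton registration and the forward fit together at the composition root (a sketch using the StructureMap 2.x API; the lambda-configured container stands in for however you bootstrap yours):

```csharp
// Composition root sketch: one singleton registration plus the forward,
// then a check that both interfaces resolve to the same object
var container = new Container(x =>
{
    x.For<IApiKeyCounter>().Singleton().Use<UsageStatisticsRepository>();
    x.Forward<IApiKeyCounter, IIpAddressCounter>();
});

var byApiKey = container.GetInstance<IApiKeyCounter>();
var byIp     = container.GetInstance<IIpAddressCounter>();
// byApiKey and byIp now refer to the same repository instance
```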


    Using Structuremap to resolve ViewModels

    One of the things I find myself doing over and over again when working on XAML-based applications is creating a ViewModelFactory based on Jeremy D. Miller’s StructureMap.

    All my ViewModel classes are based on a class BaseViewModel:

        public class BaseViewModel : INotifyPropertyChanged

    The fun, of course, begins when your viewmodels need to know about other viewmodels. Let’s say, for instance, that your main view contains a container control in which you would like to swap views. Using StructureMap, it is easy to inherit the Registry class in order to stitch together associations between views and models:

    public class ViewModelsRegistry : Registry
    {
        public ViewModelsRegistry()
        {
            Rig<MainWindowView, MainWindowViewModel>(Registered.MainWindow);
            Rig<TrioEventsView, TrioEventsViewModel>(Registered.TrioEvents);
            Rig<NewsContentView, NewsContentViewModel>(Registered.NewsContent);
        }
        private void Rig<TControl, TViewModel>(Registered name)
            where TControl : ContentControl where TViewModel : BaseViewModel
        {
            For<ContentControl>().Use<TControl>().Named(name.ToString());
            For<BaseViewModel>().Use<TViewModel>().Named(name.ToString());
        }
    }

    The Registered reference is an enum that I use to keep tabs on which views/viewmodels are implemented:

        public enum Registered
        {
            Unknown = 0,
            MainWindow,
            TrioEvents,
            NewsContent
        }

    As you can tell, the ViewModelsRegistry uses ContentControl as its container type. This makes it compatible with all derivatives, including Window and UserControl.
    Following the factory pattern, the result follows:

    public static class ViewModelFactory
    {
        public static void Initialize(Registry registry)
        {
            ObjectFactory.Configure(o => o.AddRegistry(registry));
        }

        public static ContentControl Get(Registered name)
        {
            var control = ObjectFactory.GetNamedInstance<ContentControl>(name.ToString());
            control.Loaded += (s, e) => control.DataContext = GetDataContextFor(name);
            return control;
        }

        public static BaseViewModel GetDataContextFor(Registered name)
        {
            return ObjectFactory.GetNamedInstance<BaseViewModel>(name.ToString());
        }
    }

    The ViewModelFactory basically leans on StructureMap, providing the method Get() that stitches together a view and its model. One example of such use:


    public void LoadEvents(object dummy) { SetViewModel(Registered.TrioEvents); }

    private void SetViewModel(Registered name)
    {
        if (!_loadedControls.ContainsKey(name))
            _loadedControls.Add(name, ViewModelFactory.Get(name));
        CurrentContent = _loadedControls[name];
    }

    In the above snippet, taken from the MainWindowViewModel, LoadEvents is bound to a button click on the main window. Once clicked, if I haven’t already loaded the control, I use the factory’s Get() method to create a UserControl with an initialized DataContext, before I finally set the ContentPresenter (CurrentContent) on the main window viewmodel.

    The factory can easily be expanded to swap out DataContexts or provide different views; this blog post is mainly about using StructureMap to make it happen.



    Running CasperJs scripts from a C# console app

    I recently came across the need for a console application that can take a CasperJs script (or folder) and run it with console output on. I wanted this tool in order to integrate it with Visual Studio’s External Tools, so that whenever I am editing a CasperJs script, I can hit my keyboard shortcut to verify that all is OK.

    Doing this in C# is mostly straightforward:

    • You receive the CasperJs file or folder to run as an argument to your console app
    • You inspect the environment variables to find:
      • Where Python is installed (by inspecting your PATH variable)
      • Where CasperJs is installed
    • You start a System.Diagnostics.Process and provide it with enough details to run the script
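The PATH inspection in the second step can be done with a few lines of stdlib code. A sketch (the helper name is mine, and python.exe stands in for whichever executable you are hunting):

```csharp
using System;
using System.IO;
using System.Linq;

public static class ToolLocator
{
    // Walk the PATH variable and return the full path of the first
    // directory that actually contains the requested file
    public static string FindOnPath(string fileName)
    {
        var pathValue = Environment.GetEnvironmentVariable("PATH") ?? "";
        return pathValue
            .Split(Path.PathSeparator)
            .Where(dir => !string.IsNullOrEmpty(dir))
            .Select(dir => Path.Combine(dir.Trim(), fileName))
            .FirstOrDefault(File.Exists);
    }
}
```

Calling `ToolLocator.FindOnPath("python.exe")` then yields the interpreter path to hand to the Process, or null if it is not installed.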


    Challenge: Output

    The tricky bit is grabbing the output and displaying it in your own app. Here’s my go at it:

    private static void ExecutePythonScript(DirectoryInfo workingDir, FileInfo pythonPath,
        string casperArguments)
    {
        var p = new Process();
        p.StartInfo.WorkingDirectory       = workingDir.FullName;
        p.StartInfo.FileName               = pythonPath.FullName;
        p.StartInfo.Arguments              = casperArguments;
        p.StartInfo.UseShellExecute        = false;
        p.StartInfo.CreateNoWindow         = true;
        p.StartInfo.RedirectStandardError  = true;
        p.StartInfo.RedirectStandardInput  = true;
        p.StartInfo.RedirectStandardOutput = true;
        p.ErrorDataReceived += (s, e) => {
            if (!string.IsNullOrEmpty(e.Data))
                Console.WriteLine("e> " + e.Data);
        };
        p.OutputDataReceived += (s, e) => {
            if (!string.IsNullOrEmpty(e.Data))
                Console.WriteLine("-> " + e.Data);
        };
        p.Start();
        p.BeginOutputReadLine(); // without these two calls, the Data events never fire
        p.BeginErrorReadLine();
        p.WaitForExit();
    }


    The tricky bit to figure out was that after you start the process (p.Start()), you need to call the asynchronous methods BeginOutputReadLine() and BeginErrorReadLine(); otherwise, the events will never be posted. Searching the internet for how to invoke processes did not turn up this little gem, hence this post.

    Happy coding!


    A good basis for unit testing

    Over time, one develops techniques and habits that “just work”. In this post, I would like to share a set of base classes that I’ve been using and refining over time. They make my unit test writing a little more efficient and easy on the eyes.

    The base of the base

    I often start a new software project by setting up some form of base unit test class with functionality that I know I will be using, such as embedded resource documents, various kinds of verifiers, test names, etc. I usually name this class “BaseUnitTest”. This blog post will not be focusing on this base of bases, but to give you an idea, this is roughly how it starts out:

    public class BaseUnitTest
    {
        private Assembly _testAssembly;

        public EmbeddedResourceManger ResourceManager { get; internal set; }
        public IoVerificationManager IoVerifier { get; internal set; }

        public BaseUnitTest()
        {
            _testAssembly   = Assembly.GetCallingAssembly();
            ResourceManager = new EmbeddedResourceManger(_testAssembly);
            IoVerifier      = new IoVerificationManager();
        }
    }

    I usually add methods to this class in the form of verifiers of different sorts. A handy base class for your unit tests is not the scope of this post, though…

    Wrapping AutoMocking and Instance creation with BasedOn<T>

    Now follows the real nugget: a base class for your unit tests that uses StructureMap’s AutoMocker to construct a nice, testable instance of the class under test.

    public abstract class BasedOn<TInstance> : BaseUnitTest where TInstance : class
    {
        public MoqAutoMocker<TInstance> AutoMocker { get; internal set; }
        public TInstance Instance { get; internal set; }

        [TestInitialize]
        public void RunBeforeEachTest()
        {
            AutoMocker = new MoqAutoMocker<TInstance>();
            OverrideMocks();
            Instance = AutoMocker.ClassUnderTest;
            ExecuteBeforeEachTest();
        }
        public Mock<TInterface> GetMockFor<TInterface>() where TInterface : class
        {
            return Mock.Get(AutoMocker.Get<TInterface>());
        }
        /// <summary>
        /// Use this from your OverrideMocks() method so that all replacements are made before
        /// the Instance is created
        /// </summary>
        public void Replace<TInterface>(TInterface with) where TInterface : class
        {
            AutoMocker.Inject(with);
        }
        public virtual void OverrideMocks()         { }
        public virtual void ExecuteBeforeEachTest() { }
    }

    How it works

    The BasedOn<T> class defines an AutoMocker for the class to be tested, as well as a member named ‘Instance’ that is used in the unit tests. The method RunBeforeEachTest() is attributed with [TestInitialize] so that it executes prior to each unit test.

    Note: Because this attribute is used in the base class, you cannot re-attribute any method in your derived unit test class. That is why there is a method named ExecuteBeforeEachTest() that you can override to accommodate your unit testing needs.

    If you need to prepare a special mock or stub for the class you are testing, you can override the method OverrideMocks() in your unit test class – this method is executed just before the Instance member is created. The method Replace<T>(T with) is meant for this very purpose.

    Example of use

    The class PersonManager is a business-layer service that handles Person objects. In the following example, we are setting up a unit test to ensure that person objects without surnames cannot be updated, using the BasedOn<T> base class:

    [TestClass]
    public class PersonManagerTests : BasedOn<PersonManager>
    {
        [TestMethod]
        public void UpdatePerson_PersonHasNoLastName_DoesNotInvokeRepositoryUpdate()
        {
            // Arrange
            var testPerson = new Person();
            // Act
            Instance.UpdatePerson(testPerson);
            // Assert
            var repoMock = GetMockFor<IPersonRepository>();
            repoMock.Verify(o => o.Update(testPerson), Times.Never());
        }
    }
    In the next example, I am preparing the repository to throw an exception, to verify that the exception text reaches the logger interface:

    [TestMethod]
    public void Create_RepositoryThrowsException_ErrorMessageIsLogged()
    {
        // Arrange
        var somePerson       = new Person();
        string exceptionText = "Something really bad just happened";
        var someException    = new Exception(exceptionText);
        var repoMock         = GetMockFor<IPersonRepository>();
        repoMock.Setup(o => o.Create(somePerson)).Throws(someException);
        // Act
        Instance.Create(somePerson);
        // Assert
        var loggerMock = GetMockFor<ILogger>();
        loggerMock.Verify(o => o.LogException(exceptionText), Times.Once());
    }

    The magic performed by AutoMocker combined with a relatively simple base class has helped me immensely in making my tests more readable – for myself and for others! At the time of this writing, AutoMocker supports Moq, Castle, and TypeMock.

    For those classes that you do not want auto-mocked, you can always bypass the BasedOn<T> class and inherit BaseUnitTest directly for access to the test helper methods and verifiers.


    Smart Reflection with dynamic type in .Net 4.0

    A colleague of mine at my current project at NRK showed me how to use the dynamic keyword in C# to perform dynamic casting from a base type to a derived type. I just had to share!

    The example starts with the following hierarchy of types:

    public class Person  { }
    public class Police  : Person { }
    public class Fireman : Person { }

    The heart of the matter

    Typically, you will have a controller/manager class that calls different workers based on the type of object at hand. In ASP.NET, for example, you may have a controller that returns a different rendering of a view based on the type of person to render. In its simplest form, the following three functions could form such a controller:

    public void Draw(Person p)
    {
        Console.WriteLine("Drawing a generic person");
    }
    public void Draw(Police police)
    {
        Console.WriteLine("Drawing the police");
    }
    public void Draw(Fireman fireman)
    {
        Console.WriteLine("Drawing a firefighter");
    }
    Easy peasy, you say, and proceed to write some test code:

    // Arrange
    var people = new List<Person>
    {
        new Person(),
        new Police(),
        new Fireman()
    };

    // Act
    foreach (Person p in people)
        Draw(p);

    Do you see the problem?

    …the above code will issue three calls to the function that draws the base Person type. The other two functions are never called. This is self-explanatory, since the foreach loop declares a Person reference. Using var has the same effect, because it is inferred from a list of Person objects. The compiler simply does not see the Police or Fireman objects.

    Now, if you change the foreach-loop to this:

    // Act
    foreach (dynamic p in people)
        Draw(p);

    (use of the dynamic keyword requires that you add a reference to Microsoft.CSharp and System.Core)

    The dynamic keyword is strongly typed, but the compiler is told that the type of the object p will be determined at runtime rather than at compile time. Thus each person object in the list becomes a strongly typed Police or Fireman at runtime, similar to dynamic_cast in C++ (damned near identical, if you ask me!).
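The whole experiment fits in one self-contained program (the Renderer class name is mine, and the methods return strings instead of writing to the console so the difference is easy to assert):

```csharp
using System;
using System.Collections.Generic;

public class Person { }
public class Police : Person { }
public class Fireman : Person { }

public static class Renderer
{
    public static string Draw(Person p)  { return "generic person"; }
    public static string Draw(Police p)  { return "police"; }
    public static string Draw(Fireman p) { return "firefighter"; }

    public static void Main()
    {
        var people = new List<Person> { new Person(), new Police(), new Fireman() };

        // Compile-time overload resolution: prints "generic person" three times
        foreach (Person p in people)
            Console.WriteLine(Draw(p));

        // Runtime overload resolution: prints "generic person", "police", "firefighter"
        foreach (dynamic p in people)
            Console.WriteLine(Draw(p));
    }
}
```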


    Any kind of run-time type checking will involve reflection in some form or another; however, Microsoft has gone a long way in ensuring that this is second-generation reflection, using a funkily named tri-state cache to keep things fast. Another concern, of course, is that you can always cast any type to dynamic without compile-time checking, so you basically want to ensure 100% unit test coverage, as well as ensure that the unit tests will catch type mismatches.


    Using the dynamic keyword will clean up the code considerably, at the cost of having no compile-time checking of any lines of code that involve the dynamic keyword – so make sure you write those unit tests. Additionally, because this is run-time checking, you should consider other options if you depend on faster code, but for most of your code the performance impact is not measurable.

    More on dynamic keyword here – Official documentation on MSDN

    Dissecting the C# 4.0 Dynamic programming – Good article about the inner workings of dynamic


    Growing concerns about the direction of Xaml-based applications

    Microsoft, what the hell do you think you are doing by diverging WPF, Silverlight and Silverlight for WP7? None of those three target platforms has any solid foothold as of yet. By making different flavors of XAML available on different target platforms, you’re only doing one thing: pissing off developers. Stop doing that; this is really simple:


    WPF should be the mother of all XAML-based apps and have every available technology in it – including webcam support as in Silverlight, MEF, etc. WPF needs to be that “unlimited” target platform from which both Silverlight and WP7 pick their features.



    Why oh why can I not use data triggers in SL? What is the reasoning for it? I know MS has “shifted focus” for Silverlight. This, in my ears, is bull. Silverlight on iOS and Android would give developers a reason to use it. WPF alone cannot succeed as the only XAML-based platform, and SL makes sense for servicing the current craze of tablets and smart phones. Very few people will disagree when I say that MS-powered devices (tablets and phones) are lagging far, far behind. For SL to be a success, it needs to penetrate iOS and Android. End of story; the rest is just bull.

    Silverlight for WP7

    I accept that Silverlight for Windows Phone 7 will offer different capabilities from Silverlight as a XAP, but what I don’t get is why this version of Silverlight has to be a framework behind the current release of Silverlight for the web. It makes no sense whatsoever to keep developers in confusion station instead of holding back releases until the technology is ready on all platforms!

    Converge now!

    What Microsoft needs to do is hold back releases, so they can do a unified XAML platform upgrade targeting Windows, SL and WP7 with the same developer options and syntax. No data trigger support for WP7? Then don’t release it for Windows or SL either! This is FAR better for developers than the mess you’re giving us now. XAML as a developer platform needs a unified version number; we don’t want WPF for .Net 4.0, Silverlight 5.0 for the web, and a gutted SL 3.5 for WP7.

    So, where was I?

    You may have noticed that digitaldias was down for a week or two.

    I’ve been using an SHDSL line (Single-Pair High-speed Digital Subscriber Line) for the last 6 years, giving me a whopping 2Mbit in both directions!

    Recently, though, I’ve been on the lookout for higher download speeds, as iPads, laptops, and even the PS3 consume more and more information from the web. When I was offered the option of 20Mbit down, and 1Mbit up for much less moolah, I took it.

    My blog will load at half speed (as if you care!), but then again, I don’t connect back to my office over VPN anymore, so I don’t have a good excuse to pay that much for a decent upstream anymore.

    I still want higher upload speed for using Skype in HD, but that will have to come when the prices (and availability) fit.


    My ISP, Nextgentel, delivered fast and reasonably priced this time. For that, they get a nice, well-deserved kudos from me.

    Hosting a Silverlight app in Azure

    A quick introduction to how you can get up and running with Microsoft Azure – a hands-on guide to creating a Silverlight application that uses a REST API to manage SQL data in the cloud.

    Who should read this:

    This article assumes that:

    • You know (and love!!) the SOLID programming principles
    • You know what WCF is and how to host and consume such services through IIS
    • You have some knowledge of the Entity Framework ORM
    • You want to get something out on Windows Azure, but you’re not quite sure how to


    The concept

    I am writing an inventorizer application. The idea is to keep track of my movies and to know where in my house they are supposed to be. This way, I know where to put a stray movie, as well as check that all movies that are supposed to be in a specific shelf actually are there.

    Later, I will extend the application to access my movie list from mobile devices, so it’s going to require a REST API right from the start.

    Entities and storage

    To get started, I defined 3 basic entities for my application:

    Entity      Detail
    Location    Room / area in my home
    Storage     Shelf, drawer, box, etc.; exists inside a Location
    Movie       Stored inside a piece of Storage


    Using Entity Framework, I started by creating a model from a blank database:


    I’ve explicitly given the entities the prefix “Db” in order to separate them from my C# domain objects. Automapper does the conversion for me – pretty straightforward. I keep my domain objects clean and clear of SQL Server, as you should too.
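The mapping setup amounts to a couple of lines per entity (a sketch using AutoMapper’s classic static API; DbMovie/Movie stand for the generated entity and the domain type):

```csharp
// One-time configuration, e.g. at application startup
Mapper.CreateMap<DbMovie, Movie>();
Mapper.CreateMap<Movie, DbMovie>();

// Converting a fetched entity on its way out of the data layer
Movie domainMovie = Mapper.Map<Movie>(dbMovie);
```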

    SQL Azure

    To work with SQL Azure, you need a valid Azure account, and you also need to have created a database for the purpose. I won’t go into the details of the database creation process; basically, you follow a database creation wizard that does what you expect it to.

    Once created, you want to connect your Visual Studio Server Explorer to this newly created database. To do that, you first allow yourself through SQL Azure’s firewall, which is fairly simple: flip to the Firewall Settings tab and click the “Add Rule” button, which brings up this:


    Complete the firewall rule by setting your IP number, then click OK and flip back to the Databases tab to get a connection string:

    The connection string does not have your password in it; you’ll have to edit that in after you put it in your settings file. If you need help pushing your model to SQL Azure, just drop me a line and I’ll help you out.

    Setting up the REST service

    Setting up the REST service is a matter of:

    1. Defining your service interface
    2. Implementing the service in some class
    3. Setting up the service endpoint configuration in your service configuration file

    Important note:
    In order to implement REST and use WebGet and friends, you need to include a reference to System.ServiceModel.Web. Make sure in your project properties that you’ve selected the full .Net Framework 4.0 – and not the .Net Framework 4.0 Client Profile – as your target framework, or System.ServiceModel.Web won’t be visible for you to reference.

    Defining the service interface

    Not much hassle here; the special consideration is the REST way of making the endpoints accessible:
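The interface ends up looking something like this (a sketch; the service name and UriTemplates are illustrative, not the actual ones):

```csharp
[ServiceContract]
public interface IInventoryService
{
    // GET /locations - all locations in the house
    [OperationContract]
    [WebGet(UriTemplate = "locations")]
    List<Location> GetLocations();

    // GET /locations/{id}/storage - storage units within one location
    [OperationContract]
    [WebGet(UriTemplate = "locations/{id}/storage")]
    List<Storage> GetStorageForLocation(string id);
}
```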


    Implementing the service

    Since we started with the EF model, implementing the service simply means creating a repository interface (for convenience) and then implementing it with the generated context class:
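In outline, that gives a service class like this (a sketch; IMovieRepository and the method names are assumptions for illustration):

```csharp
public class InventoryService : IInventoryService
{
    // IMovieRepository is a convenience wrapper around the generated EF context
    private readonly IMovieRepository _repository;

    public InventoryService(IMovieRepository repository)
    {
        _repository = repository;
    }

    public List<Location> GetLocations()
    {
        // DbLocation -> Location conversion is handled by Automapper
        return _repository.GetLocations()
                          .Select(dbLocation => Mapper.Map<Location>(dbLocation))
                          .ToList();
    }

    // GetStorageForLocation etc. follow the same pattern
}
```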


    Setting up the service endpoint configuration

    To roll out a successful REST service that serves both POX (plain old XML) and JSON data, I actually had to create two different binding configurations, even though they are identical.

    Second, set up a couple of behaviors, differing only in the default response format:

    Finally, set up the endpoints you need:

    Since we are hosting this in Azure, we do not specify any addresses.
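The resulting configuration looks roughly like this (a sketch; the behavior names, service name and contract name are illustrative):

```xml
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <!-- Identical except for the default response format -->
      <behavior name="poxBehavior">
        <webHttp defaultOutgoingResponseFormat="Xml" />
      </behavior>
      <behavior name="jsonBehavior">
        <webHttp defaultOutgoingResponseFormat="Json" />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Inventory.InventoryService">
      <!-- No explicit addresses; the Azure host supplies the base address -->
      <endpoint address="pox"  binding="webHttpBinding" behaviorConfiguration="poxBehavior"
                contract="Inventory.IInventoryService" />
      <endpoint address="json" binding="webHttpBinding" behaviorConfiguration="jsonBehavior"
                contract="Inventory.IInventoryService" />
      <endpoint address="soap" binding="basicHttpBinding"
                contract="Inventory.IInventoryService" />
    </service>
  </services>
</system.serviceModel>
```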

    Creating the client

    Now that both the database and the REST API are up and running, you only need to create a regular Silverlight client, point it to the service, and you’re in business. I actually created a SOAP endpoint in addition to the POX and JSON addresses, since I do not need to box data between .Net clients; thus my Silverlight client config has the following service reference:

    Notice the relative address: since I’m hosting the Silverlight client from the same location as the service, I use the relative address to avoid cross-domain issues. This took me some time to figure out. I usually start out with a basicHttpBinding and then swap over to TCP/IP once everything is up and OK.

    If you need more details on how to write a Silverlight client, just drop me a message.

    Azure considerations

    So, having completed, tested, and debugged the project here on earth, it was time to deploy the package to Azure. There was one last remaining thing to do, and that is to put a checkmark on your SQL Azure configuration screen in order to allow your services to connect to the database:

    This is definitely another one of those “easy to forget, hard to figure out” things…

    Integration Testing

    I wanted a set of integration tests that ran directly against the SQL Azure database without destroying any data, so I opted for the transaction approach, where you begin a transaction before each test and then roll back all changes after running it. This led me to the following base class:
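The base class is essentially a TransactionScope wrapped around each test. A sketch (reconstructed, since the original screenshot is gone; TransactionContext is an assumed name):

```csharp
[TestClass]
public abstract class TransactionContext
{
    private TransactionScope _transaction;

    [TestInitialize]
    public void RunBeforeEachTest()
    {
        // Everything the test does to the database happens inside this scope
        _transaction = new TransactionScope();
        Given();
    }

    [TestCleanup]
    public void RunAfterEachTest()
    {
        // Disposing without calling Complete() rolls every change back
        _transaction.Dispose();
    }

    // Override for per-test setup, since the attributes are taken by the base class
    protected virtual void Given() { }
}
```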


    The base class implements the TestInitialize and TestCleanup methods to begin a transaction before each test and roll it back (Dispose()) after each test has run. Any test that throws an exception will then automatically roll back the database.

    If you use TestInitialize or TestCleanup in a base class, your derived test class won’t be able to use those attributes. This is why I added the virtual Given() method, so that I can do my test setup there, should I need to.

    An example of use:
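A sketch of such a test (deriving from the transaction base class described above; TransactionContext is an assumed name for it, and the repository method is a guess):

```csharp
[TestClass]
public class StorageRepositorySqlTests : TransactionContext
{
    private StorageRepositorySql _repository;

    protected override void Given()
    {
        // Runs inside the transaction, before each test
        _repository = new StorageRepositorySql();
    }

    [TestMethod]
    public void GetAll_DatabaseContainsStorage_ReturnsNonEmptyList()
    {
        var result = _repository.GetAll();
        Assert.IsTrue(result.Any());
    }
}
```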

    The test class above creates an instance of the class StorageRepositorySql, and each test that runs is packaged inside a transaction scope and rolled back, so as not to disturb my SQL Server data. If you want more details on the base class, just let me know.

    Running these tests is surprisingly fast; on my 2Mbit internet line, most of my tests run in less than 50ms each, which is pretty amazing considering the transactions, and that I’m in Norway while the Azure store probably is in Ireland!


    Microsoft promises that “going Azure” should be pretty straightforward and not much different from what you’re already used to. I tend to agree – it has been surprisingly easy to get something up there and running. Most of the challenges were in configuring the REST endpoints and figuring out how to allow the WCF services to access the SQL database; other than that, the rest was straightforward.

    At the end of this article, I’ve prepared a short Silverlight application that simply lists the locations in my SQL Server. It should be available through the following URL:

    However, since this is work in progress, you may see something more advanced on this page as my application progresses, or something completely different, or perhaps nothing at all – I make no guarantees, other than that it should be there if this article isn’t too old.


    Watt’s a dog?

    – “Yes he is”

    My old blog server was sucking up roughly 700 watts. By buying a new, modern, cheap PC, I’ve reduced the cost of hosting my own blog to one tenth of what I was paying!

    How much was the old server costing me?

    700 watts at idle means 0.7 kWh every hour, 24 hours a day, 365 days a year = ~6,100 kWh per year! This was an age-old Compaq ProLiant server with two power supplies and a series of cooling fans – not only did it pollute my power bill, it also made a serious amount of noise!

    Given today’s power prices, that means roughly 3,000 NOK (368 EUR / 511 USD) per year just for having my blog available 24/7! Something had to be done, and fast!
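The arithmetic, for the skeptics (prices are the rough ones quoted above, around 0.5 NOK/kWh):

```csharp
using System;

public static class PowerBill
{
    // Continuous draw in watts -> kWh consumed over a full year
    public static double AnnualKwh(double watts)
    {
        return (watts / 1000.0) * 24 * 365;
    }

    public static void Main()
    {
        double oldServer = AnnualKwh(700); // 6132 kWh/year
        double newPc     = AnnualKwh(50);  // 438 kWh/year
        Console.WriteLine("Old server: {0:F0} kWh/year", oldServer);
        Console.WriteLine("New PC:     {0:F0} kWh/year", newPc);
    }
}
```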


    I started looking for a replacement by checking out some servers, but most of these use around 200W and were fairly expensive compared to desktop PCs (I wanted at least a dual core and 2GB RAM).

    I looked into a few Windows Home Servers, but frankly, I need IIS7 and don’t want to limit myself to what I can do on that platform. It is a point, however, that Home Server CAN host WordPress, and the HP MediaSmart is rumored to use around 80W, which is not bad at all!

    Cheap desktop PC to the rescue

    I landed on a cheap-ish Packard Bell iMedia PC (image below) that looked like a match for the job. Priced at roughly the same as a year’s worth of server power, I got a dual-core AMD processor with 3GB of RAM and some 300GB of drive space – plenty for hosting IIS7 and WordPress on MySQL, with no restrictions in case I want to deploy an ASP.NET application or two.

    Packard Bell imedia Desktop PC

    Once at home, I hooked it up to my watt meter and, lo and behold, the darned thing does not use more than 50W on the balanced setting (around 62W on the high-performance setting).

    50W is a number I can live with 24/7! I’ve got light bulbs that use more than that! New cost per year (in power):

    238 NOK (29 EUR / 40 USD)

    That is less than the monthly fee for using one of the cheaper web hotels out there!



    By investing roughly a year’s worth of my old web server’s electrical power, I was able to cut the power consumption of my blog to one tenth. Since I don’t have that much traffic, the hardware is more than able to respond to my needs, and I got rid of that old, noisy, huge box that was doing nothing but costing me money. Less electricity spent = more frogs to kiss somewhere…

    You’re actually reading this on the new hardware right now!