ESP8266 running on batteries

If you haven’t yet heard about the ESP8266, then it’s time for you to wake up and get serious! This little beauty is a seriously low-cost WiFi chip with a full TCP/IP stack and a microcontroller, made by Espressif. It sports 16 GPIO pins, SPI, I2C, UART, and even has a 10-bit ADC for good measure. And the price? Less than $2 on AliExpress! It is the size of a coin, and supports b/g/n wireless protocols. What’s not to love about this thing?

Image: Wemos D1 mini ESP8266 board

The device isn’t particularly power-hungry to begin with, but it can pull as much as 200mA when it’s transmitting over WiFi, and generally around 75mA just being awake.

If you plan on running it on batteries, you’ll soon find that they only last a few days before the thing dies!

 

Enter Deep Sleep Mode

Luckily, there is a very nice way to deal with this. You see, the device has a deep-sleep mode (among other sleep modes; see the full list here) that allows it to drop to as low as 60 microamps, and it is really easy to use. Here are the relevant bits of code to make it happen (using an Arduino sketch, or Visual Studio with the VisualMicro extension):

extern "C" {
#include "user_interface.h"
}

void setup()
{
    initSerial();
    initWifi();
    initSensors();
}

void loop()
{
    // Gather sensor data
    // Transmit sensor data over wifi somewhere

    Serial.println("Entering deep sleep mode");

    auto fiveMinutes = 5 * 60 * 1000000; // the sleep duration is given in microseconds

    system_deep_sleep(fiveMinutes);
}

At the top, we’re referencing a C library, which is why it is wrapped in the extern "C" block. Fail to do this, and the build breaks because the C++ compiler mangles the function names, among other voodoo.

The ESP8266 enters a state where everything is shut off except for the real-time clock, which is how it gets down to 60µA. When the RTC timer fires, the device essentially reboots, running setup() again before entering loop().

NOTE
This does NOT work unless you connect GPIO16 to the RESET pin on the device. The RTC needs this wire in order to reset the chip and wake it up when the timer ends. So remember, there is a hardware change involved in getting this right!

Diagram

Conclusion

Setting the ESP8266 in deep-sleep mode means that you can start to make battery-driven solutions that last for months instead of days. At an average of 75mA in normal mode, a 2450mAh Eneloop would last around 32 hours or less, but with deep sleep, the same battery should be able to last for months (depending on how hard you drive the WiFi and what your sensors consume, of course). Also remember that the power converters from battery to 3.3V aren’t perfect, giving you at best around 85% of the stated battery capacity, but all in all, it’s a good and handy thing to know about!
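A rough back-of-the-envelope using the numbers above (2450mAh battery, ~85% converter efficiency):

    Always awake: 2450 mAh × 0.85 / 75 mA   ≈ 28 hours
    Deep sleep:   2450 mAh × 0.85 / 0.06 mA ≈ 34 700 hours, in theory

In practice the periodic wake-ups and WiFi bursts dominate, so the real figure lands somewhere in between, but “months” is well within reach.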

Fun with English words

In this post, I explain how I made the program “SubWordsFinder” that takes a string, and finds all real English words that contain that string, then removes the string from the match, and sees if the remaining characters form a real English word or not.

It started with me stumbling across the following “truism” on Facebook the other day:

My first thought was “gee, these guys have too much time on their hands.” My second thought was: “Hmm, maybe I can do that as well?”

i.e. if I run SubWordsFinder tea, I expect the results to include:
“protease –> prose”

and SubWordsFinder ox should include
“toxic –> tic”

Finding all the words

So I started to search for “all English words download” on Bing and quickly found The English Open Word List (EOWL), which I then downloaded and extracted into a folder:
image
The ZIP file included all English words in both a CSV format as well as LF-separated entries in a series of text files. I chose to attack the latter, and leave the CSV format alone.

Loading the files

I opted to create a class, WordsFinder, that takes the path to the folder containing all the text files as its input:
image

Next, I created a LoadWords() method that loads all the files in parallel. I used a ConcurrentDictionary<string, IEnumerable<string>> in order to be able to work in parallel without having to worry about locking issues:
image
I did, however, need to lock the variable finalWordCount in order to avoid writing to it from two separate threads.
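The screenshot shows the real implementation; a minimal sketch of what LoadWords() might look like is below (the field names and the letter-keyed dictionary layout are my assumptions, based on the description in this post):

    using System.Collections.Concurrent;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Threading.Tasks;

    public class WordsFinder
    {
        private readonly string _folderPath;
        private readonly ConcurrentDictionary<string, IEnumerable<string>> _words
            = new ConcurrentDictionary<string, IEnumerable<string>>();

        private int _finalWordCount;
        private readonly object _countLock = new object();

        public WordsFinder(string folderPath)
        {
            _folderPath = folderPath;
        }

        public void LoadWords()
        {
            // The EOWL download has one text file per starting letter; load them all in parallel.
            Parallel.ForEach(Directory.GetFiles(_folderPath, "*.txt"), file =>
            {
                var words = File.ReadAllLines(file)
                                .Where(w => !string.IsNullOrWhiteSpace(w))
                                .ToList();
                if (words.Count == 0)
                    return;

                // Key the bucket on the first letter of the words in the file.
                var key = words[0].Substring(0, 1).ToLowerInvariant();
                _words[key] = words;

                // finalWordCount is a plain int, so writes to it need a lock.
                lock (_countLock)
                    _finalWordCount += words.Count;
            });
        }
    }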

Finding the SubString

The next challenge was to identify all words containing the substring I was after. This turned out to be really easy as well. Because I had split all the words into a dictionary where the first letter of the word is the key, and because they were all in a ConcurrentDictionary, I could parallelize the search to make it blazing fast:
image
Again, using Concurrent collections makes writing to them thread-safe, so a ConcurrentBag is where I stored all my matches. In my main program, I store all matches in the variable wordsContainingMatch.
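Continuing the sketch above, the search might look roughly like this (the method name is mine):

    public IEnumerable<string> FindWordsContaining(string substring)
    {
        var matches = new ConcurrentBag<string>();

        // Each letter-bucket of the dictionary is searched on its own thread.
        Parallel.ForEach(_words, bucket =>
        {
            foreach (var word in bucket.Value)
            {
                if (word.Contains(substring))
                    matches.Add(word);
            }
        });

        return matches;
    }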

Ok, so now I have a way of finding all words in the English dictionary that contain any substring. Next up is to remove that substring from each match:
image

The idea here is simply to create pairs consisting of the word containing the substring and the corresponding suggestion for a word without it. These aren’t yet verified as English words; they’re simply strings with that substring removed. The form of each pair is a string with a colon ‘:’ where the original match is on the left, and the suggested word without that match is on the right.
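Something along these lines, presumably (the colon format is from the post; the variable names wordsContainingMatch and substring are the ones mentioned in the text, the LINQ is mine):

    // "protease" with "tea" removed becomes the pair "protease:prose"
    var wordsWithMatchRemoved = wordsContainingMatch
        .Select(word => word + ":" + word.Replace(substring, string.Empty))
        .ToList();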

Last piece of the puzzle

With this new collection, wordsWithMatchRemoved, it was simply a matter of figuring out whether the right side of the colon is a real English word or not, by checking against our dictionary:
image
I extract the string to the right of the colon, and see if that string exists in my dictionary using the Contains method:
image

As long as the string exists as an English word, it is added to the collection actualWordsWithMatchRemoved. I know, I could probably come up with better names…
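Roughly like this, I would guess (a sketch; it assumes the letter-keyed word dictionary from the loader is available here as words):

    var actualWordsWithMatchRemoved = new List<string>();

    foreach (var pair in wordsWithMatchRemoved)
    {
        var suggestion = pair.Split(':')[1];
        if (suggestion.Length == 0)
            continue;

        // The pair only survives if the suggestion is itself a real English word.
        var bucket = suggestion.Substring(0, 1).ToLowerInvariant();
        if (words.TryGetValue(bucket, out var candidates) && candidates.Contains(suggestion))
            actualWordsWithMatchRemoved.Add(pair);
    }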

Running it

Putting this all together in a running console application, I added a few System.Diagnostics.Stopwatch instances to time the program. Here’s a run against the string PI:

image
Searching for lad gave me these:

image

So I could say: “If you take the lad away from the paladin, you’ll end up with pain.”

I know, weird humor. But a fun way to spend 50 minutes on a Saturday morning.

Contact me if you want the full source code; it’s so ugly that I’m not going to post it on GitHub.

 

Pedro

How is my plant doing, really?

I figured it’s time to post some pictures of my plant project, at the same time as I’m trying out Word 2016’s WordPress feature.

So, here’s Benjamin, or “Benny” as I’m calling him:

You can clearly see how healthy he looks! No wonder: Benny is now in control of his life! He does it all, from turning on his growth light to turning the humidifier on and off on dry days!

The soil moisture sensor keeps Benny happy. It lets him regulate the watering, and reports back to me when it’s time for a drink. Right now, the water pump is in the house and the relay that controls it is connected; however, I’m still searching for a transparent tube (aesthetics matter!)

The light was a bit tricky. See, turning the light on and off isn’t really a big deal, but getting the light sensor to understand where the light is coming from required some smart thinking. It’d turn on the light at 6am, and then turn it off again 5 minutes later, because the light sensor was telling Benny “Hey, there’s enough light here, turn off the light!” – Benny doesn’t know the difference between sunlight and artificial light. I ended up moving the sensor to the back of my electronics panel, so that it is now facing the window directly, and not at all hit by the light from the growth lamp. Success!

Above you see some of the hardcore electronics. It’s all .NET Gadgeteer from GHI Electronics, which allows me to skip soldering altogether. The modules that are connected together are:

  • Mainboard: the Hydra mainboard
  • Power relay (the red thing with the light on it)
  • Ethernet connection (because I’m dead cheap, and didn’t want to spend more on a Wi-Fi adapter)
  • A 16-character display
  • A soil moisture sensor (in the soil)
  • An air humidity and temperature sensor
  • Light sensor (on the back)
  • 3 power relays to regulate the water pump, the air humidifier, and the growth lamp
  • On the back, the triangular alien-looking thing is my Asus wireless access point

At the time I write this blog entry, I’ve gathered over 12 000 readings, at intervals of 5 minutes. Benny takes a new reading every single second, but averages it with the previous ones for up to five minutes before he submits the average of those 300 readings to an Azure Table. Table storage in Azure is dirt cheap; those 12 000 readings haven’t cost me anything so far (about 1–2 kroner, maybe). It is by far the cheapest data store I’ve ever used. I plan to use these readings for machine learning once I have a full year or so of data.
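In very rough terms, the loop looks something like this (a sketch only; the real code runs on the Gadgeteer mainboard, and ReadMoisture/SubmitToAzureTable are stand-ins for whatever Benny actually calls):

    // One reading per second; every 300 readings (5 minutes) the average goes to table storage.
    var sum = 0.0;
    var count = 0;

    while (true)
    {
        sum += ReadMoisture();
        count++;

        if (count == 300)
        {
            SubmitToAzureTable(sum / count);
            sum = 0;
            count = 0;
        }

        Thread.Sleep(1000);
    }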

So there you have it, some images and updates around Benny and me.

 

Oh, and if you’d like to see the live data, don’t hesitate to hop in to Benny’s webpage:

http://plants.digitaldias.com/Home/Values

Pedro

Light regulation is now active

DSC_5256
Apologies for the long absence of information regarding my plant watering project.

Today I hung a lamp above my plant in order to provide it with enough light as the Norwegian summer gives way to fall/winter.

The process required only a programmable relay (the RelayX1 from GHI Electronics), which is a simple on/off switch controllable from my Fez Hydra mainboard.

The intent, of course, is to preserve bulb life and energy by only providing extra light when the outside light from the window is insufficient. The plant, a Ficus benjamina, requires a solid 15 hours of light every day to be happy, and with this, I’ve ensured that as far as light goes, it will smile all year round!

DSC_5255

 

 

 

Image: The bulb looks nice on the cheap-o lamp housing that I got for it

 

DSC_5254
Image: The RelayX1 is the blue box in the middle of the picture that regulates the light on/off

Rules for turning the light on/off

The time frame the light bulb operates in is between 06:00 and 21:00 (15 hours). The following rules/pseudo code are then applied every second (a rough code sketch follows the list):

  1. Outside of the time frame (21:00 – 06:00)
    1. If the lamp is on, then turn it off immediately
    2. If the lamp is already off, then do nothing
  2. Inside the time frame (06:00 – 21:00)
    1. When the amount of lux is below our threshold and the lamp is off, only then do we turn it on
    2. If the lamp has been on for at least one hour, and there is now sufficient light coming in through the window, then turn it off, otherwise, keep it on.
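Here is that rough sketch in C# (the lamp relay, lux sensor, threshold and timestamp fields are placeholders, not the actual Gadgeteer code):

    void RegulateLight(DateTime now)
    {
        bool insideTimeFrame = now.Hour >= 6 && now.Hour < 21;

        if (!insideTimeFrame)
        {
            // Rule 1: outside 06:00-21:00 the lamp is always off.
            if (lampRelay.IsOn)
                lampRelay.TurnOff();
            return;
        }

        // Rule 2.1: too little light and the lamp is off -> turn it on and remember when.
        if (luxSensor.Lux < luxThreshold && !lampRelay.IsOn)
        {
            lampRelay.TurnOn();
            lampTurnedOnAt = now;
        }
        // Rule 2.2: lamp has been on for at least an hour and the window now gives enough light -> turn it off.
        else if (lampRelay.IsOn
                 && now - lampTurnedOnAt >= TimeSpan.FromHours(1)
                 && luxSensor.Lux >= luxThreshold)
        {
            lampRelay.TurnOff();
        }
    }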

Placement

DSC_5252
As advised in the light bulb documentation, I’ve placed the lamp around 0.5m from the top of the plant, as you can see in the image.

You can check out the values that my plant is emitting at the following url:

http://plants.digitaldias.com/home/values

The site should be semi-responsive!

Getting the newest entry from Azure Table Storage

Sometimes, all you want is to be able to quickly get to the last value of a sensor, or the freshest product in your Azure Table without having to do complex artsy queries to achieve that result. Trouble is, querying against Azure tables does not give you a Last() option, so we have to get sneaky!

Turns out, Azure Tables are ordered by their RowKeys, which are indexed, so we’re in luck. The challenge is that you need to input a string that is ever descending in value, so that the newest elements are always fresh on top. Here’s a trick to do just that:

DateTime to the rescue!

The simple trick is to use the DateTime.MaxValue property, which gives us the highest possible DateTime value. Then we convert that to ticks in order to get a huge number. Subtract the ticks of DateTime.Now from that, and what we end up with is a number large enough to use as a RowKey that is “ever descending”:

image

The string formatting is just there to pad the value to 19 digits.
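In essence, the screenshot boils down to something like this (a sketch; the entity type and its properties are made-up names):

    // A RowKey that shrinks as time passes, so newer rows always sort first.
    // "D19" pads the number to 19 digits so the keys also compare correctly as strings.
    string rowKey = (DateTime.MaxValue.Ticks - DateTime.Now.Ticks).ToString("D19");

    var entity = new SensorReadingEntity
    {
        PartitionKey = "sensor01",   // hypothetical partition key
        RowKey       = rowKey,
        Value        = reading
    };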

The RowKeys are now stored in ever-descending order; here’s a snip of a table I’m storing some sensor values in (using Azure Storage Explorer):

image

On the reading side, you can now simply execute your query, knowing that the first item returned is always the last one inserted:

image
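With the WindowsAzure.Storage table SDK of the time, fetching the newest entry can look roughly like this (a sketch; SensorReadingEntity is assumed to derive from TableEntity, and table is a CloudTable):

    // Because the RowKeys descend over time, the first row returned is the newest one.
    var query  = new TableQuery<SensorReadingEntity>().Take(1);
    var newest = table.ExecuteQuery(query).FirstOrDefault();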

 

Learn it, love it, live it!

    Temporary file download links in Azure Storage

    Let’s say that you have a project in which you want to make files available in Azure Storage for a limited amount of time. This is not only possible, it’s super simple! Here’s how you do it:

    First, make something that will connect you to your Azure Storage Account:

    image

    The image above shows my class AttachedFileRepository and its constructor, where we do the regular stuff of connecting to a CloudBlobClient, which is the object that represents that class’s state.
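    A minimal sketch of that constructor (using the WindowsAzure.Storage client library; the field and parameter names are my own):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    public class AttachedFileRepository
    {
        private readonly CloudBlobClient _blobClient;

        public AttachedFileRepository(string connectionString)
        {
            // Regular plumbing: parse the connection string and hold on to a blob client.
            var storageAccount = CloudStorageAccount.Parse(connectionString);
            _blobClient = storageAccount.CreateCloudBlobClient();
        }
    }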

     

    Next, provide a time-limited download URL in the method:

    image

    The magic, of course, happens in the method GetContainerSharedAccessUri:

    image

    Basically, what I’m doing there is creating a SharedAccessBlobPolicy object where I set the expiration time of the policy and limit access to the blob to read-only.

    Once the policy is created, I can create a “Shared Access Signature” using that policy, which will provide access to the blob even though it sits in a private container and the blob itself is marked as private.

    Appending the sharedAccessSignature to the blob URI is all it takes to open up that blob for download for the set time. Ingenious!
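    The GetContainerSharedAccessUri method presumably looks something along these lines (a sketch against the WindowsAzure.Storage API; the names and the 2-minute window are illustrative):

    private string GetContainerSharedAccessUri(string containerName, string blobName)
    {
        var container = _blobClient.GetContainerReference(containerName);
        var blob      = container.GetBlockBlobReference(blobName);

        // A read-only policy that expires two minutes from now.
        var policy = new SharedAccessBlobPolicy
        {
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(2),
            Permissions            = SharedAccessBlobPermissions.Read
        };

        // The signature is a query string; appending it to the blob URI gives the download link.
        var sharedAccessSignature = blob.GetSharedAccessSignature(policy);

        return blob.Uri + sharedAccessSignature;
    }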

    Uses

    If you need to protect your files, and grant access only to paying customers, then this will provide the customer with a time-bombed download URL. You could, for example, hand out a download link that is only valid for 2 minutes.

    Another scenario is providing download links only to authenticated users; for example, a class that wants access to some class-related files must be logged on to get the list of files. If they attempt to pass the download link on to someone else, that link will only be valid for the time that you’ve set.

    Bruk av lagring i skyene

    (Using cloud storage)

    Here comes a blog post in Norwegian!

    I have made a little video showing an exercise I do with the participants of the monthly Microsoft Azure Camp. This is a monthly event of about 3 hours where developers can come visit me and get some hands-on experience building a program that uses cloud storage.

    PartyImageUploader

    The program implements an automatic upload of party pictures to Microsoft Azure, so that they can later be displayed in, for example, a website made for the purpose. The exercise is quick to write, and works great as a code kata for those who need to strengthen their use of cloud services.
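    The core of an uploader like that is typically only a few lines against blob storage (a sketch, not the code from the video; the container name and paths are made up):

    // Upload one picture to a "partypictures" container.
    var account   = CloudStorageAccount.Parse(connectionString);
    var container = account.CreateCloudBlobClient().GetContainerReference("partypictures");
    container.CreateIfNotExists();

    var blob = container.GetBlockBlobReference(Path.GetFileName(imagePath));
    using (var stream = File.OpenRead(imagePath))
    {
        blob.UploadFromStream(stream);
    }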

    YouTube Video

    For the purpose, I have made a 20-minute video in which I build the solution from scratch.

    Important:

    Choose 1080p or 720p to be able to read the text in this video!

    Best viewed in full screen at 1080p or 720p

     

    Feedback and suggestions for improvement are gladly received, either directly on YouTube, or here


    Pedro

    Sapien amet…

    (Planning the project)

    Hi all, and sorry for the slow updates. My new job has kept me so busy that I have not found any time at all to work on my plant watering project. In addition, with the release of Visual Studio 2013, the Gadgeteer project that I had going no longer works, since the .NET Micro Framework and Gadgeteer APIs need to be upgraded for the latest version of Visual Studio! I also need to get a new mainboard for the project, as the one I have does not support WiFi!

    Visual Studio Online – The project planning tool

    Today, I want to tell you about using Visual Studio Online (VSO). That’s the new name for “Team Foundation Service”, which was introduced a while ago.

    VSO is free to use for projects of up to five people, and if you are an MSDN subscriber, you do not even count towards those five. So, for all your hobby needs, you should need no more than this to get started on your next project. I will look at the agile planning part of VSO today, so this is not only for the .NET people!
    Let me repeat this:

    It does not matter what language you develop in, or even whether you are a developer at all. VSO is a purebred project planning tool that will see your agile project needs fulfilled!

     

    Planning

    When you plan a project, you typically have some different areas and concepts that need to be defined. These areas, or features as they’re called in VSO (“epics” in Jira), allow you to group together a set of user stories for which you plan your work. My Water My Plants (WMP) project is split into the following epic parts:

    image

    Using features, it’s easy to find a home for all the backlog items in the project. As you can see from the image above, I have 6 main features in my WMP project, each with its own set of backlog items. For a more general LOB application, you could have features such as “invoicing” or “maps”. Some books describe epics as user stories that represent too much work for a single sprint.

    With the above list of features done, I can easily change the view mode to “features to backlog items” in order to start planning the user stories. This view mode basically adds a huge plus-sign to the left of the highlighted feature, and clicking it adds a new user story below it. You can see in the next image that I’m working on “Windows Azure” and setting up some work that needs to be done there:

    image

    Backlog items are user stories, not tasks. In a large project, you would plan the features and backlog items together with the stakeholders of the project (such as the company owner, project lead, etc.), and once you’ve got a Product Backlog, you order it by priority so that the developer team can start planning their work.

    Get sprinting

    The sprint planning process is identical to backlog creation: assemble your developers, and walk through the backlog items with the view “Backlog items to tasks” selected.

    image

    As you can see, now you’re clicking the big plus-sign on a backlog item in order to define what tasks need to be done in order to deliver it. The tasks are described and estimated. The smaller (in time/complexity) the task, the better, because smaller tasks are easier to estimate. Your team commits entire backlog items (with all their tasks) for a sprint – as many as they think they can manage in the allotted period.

    It is important to remember that your team commits entire backlog items to the sprint, and not individual tasks! I can’t begin to tell you how many times I see teams trying to deliver individual tasks in sprints. This gives no value to the stakeholders, because as long as a task is missing from the backlog item, it cannot be delivered and tested.

    Once your team has planned enough tasks to last the duration of the sprint, they can now focus on the work getting done, and follow their progress on the burndown graph.

    image

    What I like about VSO is the clean interface, and the tight integration with Visual Studio. Inside Visual Studio, I have a prioritized “Assigned to me” query that has been put on my team page:

    image

    Clicking on this gives me the work items that are either bugs, or tasks that are to do or in progress. I can then easily associate each check-in with the task that I was working on. The order of the tasks is, as you’d expect, the priority order of the backlog items that the tasks belong to.

    Systems like these are the recipe for success in any modern software project. I honestly believe that VSO has no match because of its tight integration with Visual Studio and MS Office (you can hook up Excel to this just as easily).

    So there you have it, this is how my plant watering project is managed on a larger scale.

    Read more about this under Application Lifecycle Management and Agile planning on the Visual Studio site

    – and let’s hope that the APIs for working with Gadgeteer come to Visual Studio 2013 soon!

    Merry Christmas!

    P.

    Repono historicae mensuras

    (Storage of historical measurements)

    Summary / TL;DR

    This is the third post in the WaterMyPlants project. The first post, which describes the project in some detail is here. The second, about collecting weather measurements is here.

    This time, I talk about how quickly one can set up a database using Windows Azure, and about how the interface used to store and retrieve data is a core domain component, while the implementation of that interface is a detail that belongs in the data layer.

    Status

    It has been a while since the last post; this is mostly due to a small setback in my project. The mainboard that I had started with (the Fez Hydra) turns out to be an open-source mainboard that does not support all the modules from GHI, among them the WiFi RS21 module, which is essential to my project – I need my plant to be able to connect to the web services in the cloud directly, and not via a computer in my home. Doing so would just be introducing another point of failure.

    DSC_8863
    Image 1: My desk at home. I run Synergy between all 3 desktops, so that I only have to use the center mouse & keyboard for 99% of all my inputs.

    Saving the measurements in SQL Azure

    No good will come out of this project unless I am able to store the data somewhere, so for the third part in this series, I wanted to expose my thinking around that.

    Creating the database

    09.06
    Setting up a database in Windows Azure is a matter of logging on to the Azure portal, then selecting SQL Databases. For this project, I went with all the defaults: New database –> SQL database –> Quick Create, and chose the database name WaterMyPlantsDb.

    One of the first things that you’ll want to do once the database has been created is to allow your home network access through the Windows Azure SQL Server firewall. By default, all external IP addresses are blocked until you’ve explicitly allowed access.

     

    After that, you’re nearly done; just grab the connection string by clicking “View SQL Database connection strings” on the database page, and you’re set to go. Notice that the connection string does not contain the password that you chose for the database.

     

     

     

    Specification first – Always!

    The act of storing to and reading from a database should never matter to the logic of your program; those are application boundaries that your main logic should not care about. In other words, if my logic requires a measurement to be stored, then that is what is going to happen – where and how is not the concern of our domain logic:

    09.06
    Image 3: Writing my specifications in Gherkin syntax makes it easy to automate. Notice the lack of any mention of specific technologies.

    I already have WeatherMeasurement as an entity in my project, so all I need is to “discover” my new interface and then implement it. The Gherkin above is tied to the following code:

    09.06 - bdd
    Image 4: Implementation of the specification leads to the discovery of the required objects and methods

    The interface IMeasurementStorage does not exist in my solution yet. Neither does the method Add() nor the property Newest; however, once I write these, the system will have an implementation of the contract that should work with the SQL database. Notice also how I extended WeatherMeasurement with a Note property. This was to differentiate two otherwise equal measurements, and to make test data unique by setting a random GUID as the note.

    Also note that I did not decorate the interface with methods that I do not need. The specification drove forth one property and one method, and those are the only items that I am interested in implementing. Doing anything more than that would be gold-plating the code; another malpractice often seen in various projects. Embrace YAGNI – That is solid advice.

    IWeatherMeasurement
    Image 5: The discovered interface is a vital part of the domain
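    Based on the description above, the discovered interface presumably looks something like this (a sketch; the exact signatures are assumptions):

    public interface IMeasurementStorage
    {
        WeatherMeasurement Newest { get; }

        void Add(WeatherMeasurement measurement);
    }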

    This interface is a core component of my domain; however, the implementation is highly SQL Server-specific and belongs in the data layer. Since this is the first class that uses SQL Server, a new pair of projects needs to be added to the solution:

    sqlAdded
    Image 6: We now have somewhere to put all our Sql-server IO classes

    The implementation from here on is strictly TDD. I specify and implement in micro-steps, following the same pattern as described in previous posts. I will be posting about the use of Entity Framework and the code-first approach in my next post.

    A story about the weather

    This is the second blog post of my project WaterMyPlants. You can find the first blog post here

    Introduction

    This time, I just wanted to share a little about the process of going through one task in TFS from the very beginning until the task was pegged “done” – all using BDD and TDD, of course – there is no other way!

    image
    Image 1: The task for Collecting weather data from api.yr.no is now resolved

    Since I am the only developer on this project, and also the project owner, the project architect, and the project code monkey, my scrum board tends to look like the teeth of a shark. My backlog is wafer-thin, and the user stories often come about AFTER the tasks have been described. Normally, you would have a well-defined backlog of user stories and, for each sprint, a set of tasks for every user story planned.

    User Story and Scenario

    In order to know where to start, I always and without exception define scenarios for the task at hand based on the user story. This time, the story was about collecting weather data from an external source – I do not own a weather station, so the next-best thing is to grab the weather from a trusted source:

    The user story


    @story62
    Feature: There is a component that can retrieve weather information from Yr.no
    
    	As a system developer
    	I can obtain information about the local weather from api.yr.no
    	So that I can find patterns between plant thirst and outside weather
     
    @task61 @performsIO
    Scenario: The component retrieves data from api.yr.no based on provided GPS location
    	Given that the system has implemented weather retrieval component for Yr.No
    	When I ask the system to retrieve weather data for my gps position
    	Then the system responds with some arbitrary weather data

    Code: User story 62 is about getting weather information from an external source

    The tag performsIO needs some explanation: what it does is basically tell SpecFlow to use a particular runtime configuration prior to running this scenario. I use StructureMap as my IoC container to keep things simple.

    [BeforeScenario("performsIO")]
    public void BeforeMockedEndpointsScenarios()
    {
    	ObjectFactory.Initialize(init => init.AddRegistry(new WebRuntimeRegistry()));	
    }
    
    I usually have one “real” registry that I provide the applications with, and one registry where all the IO-performing classes are replaced with mocks.
     

     

    Implementing the scenario

    All that was left to do was to write the code for each scenario step and “discover” my component’s methods:

    [Given(@"that the system has implemented weather retrieval component for Yr\.No")]
    public void GivenThatTheSystemHasImplementedWeatherRetrievalComponentForYr_No()
    {
       // No code required for this step
    }
    
    [When(@"I ask the system to retrieve weather data for my gps position")]
    public void WhenIAskTheSystemToRetrieveWeatherDataForMyGpsPosition()
    {
        var collector = ObjectFactory.GetInstance<IWeatherCollector>();            
        var measurement = collector.GetWeather(new GpsLocation());
    
        ScenarioContext.Current.Set(measurement);
    }
    
    [Then(@"the system responds with some arbitrary weather data")]
    public void ThenTheSystemRespondsWithSomeArbitraryWeatherData()
    {
        var weatherMeasurement = ScenarioContext.Current.Get<WeatherMeasurement>();
    
       // Inspection… 

    }


    I often write steps that do not require code. This is simply because I deem the scenario text to be a first-class citizen, and seeing as it promotes understandability of the system, I opt for verbosity.

    The GpsLocation class defaults to the longitude and latitude of my house, which is why it only requires instantiation.

    Lastly, the inspection step simply verifies that the values of the WeatherMeasurement look OK. I’ve hidden the implementation details behind the comment “// Inspection”.
     
     

    The Test Driven part

    BDD has now driven forth the need for an IWeatherCollector with the method GetWeather(GpsLocation). Additionally, it also specifies that the collector needs to return some form of WeatherMeasurement class.

    It is time to bring out the SOLID principles from the closet and do some test driven work. I will spare you the code details (do email me if you want them), but here are some of the points that I would like to make:

    image
    Image 2: Test explorer could use a “document order” sorting option

    As you can tell, I am using Roy Osherove’s naming convention for my unit tests. Since I see these as micro-specifications, I just love the way they work as fine-grained API documentation for the method I just described. Unfortunately, Visual Studio does not show me the tests in document order!

    This would give the most meaning to anyone looking at the unit-test explorer. A workaround is to place the cursor between two unit tests and hit the shortcut key Ctrl+M, O in VS2012 to collapse all methods:

    image
    Image 3: Collapsed view of my unit tests using Roy Osherove’s naming convention

    Reading the unit-tests in document-order is, in my view, the best way to understand a method.

    TDD is also about discovering contracts

    Following the SOLID principles of object-oriented design, it quickly became clear during the implementation of IWeatherCollector that the Single Responsibility Principle drove me to use the following contracts:

    image
    image
    Image 4: Requirements of the WeatherCollector class identified during TDD

    Why: The WeatherCollector class is no more than a service class that requests some XML from an external source and then translates that XML into a WeatherMeasurement object. The single responsibility of the collector is simply to take the result of a reader and pass it along to the proper converter if all looks OK. This could possibly lead to using the name WeatherRepository instead; however, Collector was a term I felt more at ease with. Readability > patterns!

    I won’t bother you with the details, but the implementations of the WebReader and Converter followed the same TDD approach – one after the other.
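    For reference, the contracts that the TDD session drove out probably look something along these lines (a sketch assembled from the names in the post; the exact signatures are my assumptions):

    using System.Xml.Linq;

    public interface IWebReader
    {
        // Fetches the raw xml from the external source (api.yr.no in this case).
        XElement Read(GpsLocation location);
    }

    public interface IConverter<TIn, TOut>
    {
        TOut Convert(TIn input);
    }

    public class WeatherCollector : IWeatherCollector
    {
        private readonly IWebReader _reader;
        private readonly IConverter<XElement, WeatherMeasurement> _converter;

        public WeatherCollector(IWebReader reader, IConverter<XElement, WeatherMeasurement> converter)
        {
            _reader    = reader;
            _converter = converter;
        }

        public WeatherMeasurement GetWeather(GpsLocation location)
        {
            // Single responsibility: fetch the raw xml, then hand it to the converter.
            var xml = _reader.Read(location);
            return _converter.Convert(xml);
        }
    }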

    Green tests lead to a green story!

    So, with all the specifications (unit tests) now passing, and no more System.NotImplementedException(), I ran SpecFlow to see if I had forgotten anything else:

    image
    Image 5: All done from SpecFlow. The user story was implemented as specified

    General approach: once you’re done with all the unit tests for a method, you should run the user story to see if you have forgotten anything else. In my case, I only had one scenario, but I found out twice that I had more to implement: first WebReader, and then IConverter<XElement, WeatherMeasurement>. If you forget one, the SpecFlow report shines in red, telling you that something is not yet implemented.
    – Do you still wonder why I love BDD?

    image 
    Image 6: Looking at the details of the scenario; the real output from api.yr.no

    Conclusion

    In this second post, I wanted to share with you how I use BDD and TDD to drive code:

    • Drag the task to “In progress” on your scrum board
    • Find or create the SpecFlow feature file for the user story that the task belongs to
    • Write one or more scenarios required to deliver that single task
    • Implement the scenario steps and drive forth the interfaces, their methods and the objects required by the task. One scenario at a time! Do not implement classes yet, just let the interfaces sit until you’re done writing the scenarios.
    • Once done with one scenario, proceed to implementing the interfaces and objects using TDD
      • Create the classes
      • Implement the methods using TDD
      • Discover additional objects and contracts, but do not implement them before you’re done with the method you’re working on.
    • Every time you’re done with the unit-tests for a method, run a complete SpecFlow report and see if you have more scenarios to implement
      • Implement the missing scenarios, one at a time, until the report is all green
    • On the scrum board, drag your task from “In progress” to done
    • Pick the next task

     

    Writing this blog post took me far longer than actually doing the steps above. Without the blog post, I would’ve been done within the hour.