WatiN to automate browsers and test sophisticated ASP.NET AJAX sites

WatiN is a great .NET library for writing automated browser-based tests that use a real browser to go to websites, perform actions and check the browser output. Combined with a unit test library like xUnit, you can use WatiN to perform automated regression tests on your websites and save many hours of manual testing every release. Moreover, WatiN can be used to stress-test the JavaScript on a page, since it can push the browser to perform operations repeatedly and measure how long the JavaScript takes to run. Thus you can test your JavaScript for performance and rendering speed and ensure the overall presentation stays fast and smooth for users.

Read this article for details:

http://www.codeproject.com/KB/aspnet/watinajaxtest.aspx

I have written some extension methods for WatiN to help facilitate AJAX-related tests, especially with ASP.NET UpdatePanel, jQuery and dealing with element positions. I will show you how to use these libraries to test sophisticated AJAX websites, like the one I have built – Dropthings, a widget-powered ASP.NET AJAX portal that uses ASP.NET UpdatePanel and jQuery to create a Web 2.0 presentation. You can simulate and test UpdatePanel updates, AJAX calls, UI updates and even drag & drop of widgets!
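
To give a flavor of what such a test looks like, here is a minimal sketch of a WatiN test driven by xUnit. The URL, element id and asserted text are hypothetical placeholders; use whatever your own page actually renders.

using WatiN.Core;
using Xunit;

public class DropthingsUITests
{
    [Fact]
    public void Adding_the_weather_widget_should_show_it_on_the_page()
    {
        // WatiN drives a real Internet Explorer window.
        // Note: WatiN's IE automation requires an STA thread, so configure your test runner accordingly.
        using (var browser = new IE("http://localhost/Dropthings/"))
        {
            // Hypothetical element id – use the id your widget gallery link renders
            browser.Link(Find.ById("WidgetGallery_AddWeatherWidget")).Click();

            // After the UpdatePanel refresh, the new widget's title should be on the page
            Assert.True(browser.ContainsText("Weather"));
        }
    }
}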

You can see the implementation of automated tests in my open source project codebase.


Tests written using WatiN replace the need for human-driven tests and thus shave significant time off your regular regression test suite. Moreover, they empower developers with a way to quickly run regression tests whenever they need to, without waiting for a human QA resource's availability. When you hook it up with a test framework like xUnit and integrate it with your continuous build, you can run UI tests automatically overnight after your nightly build, covering all the UI scenarios and generating reports without any manual intervention.

User story is worthless, Behavior is what we need

User Story is suitable for describing what a user needs, but not what the user does and how the system reacts to user actions within different contexts. It basically gives the product team a way to quantify their output and let their boss know that they are doing their job. As a developer, you can’t write code from user stories because you have no clue what the sequence of user actions and system reactions is, what the validations are, what APIs to call, and so on. As a QA, you can’t test the software from user stories because they do not capture the context, the sequence of events, or all the possible system reactions. User stories add little value to the dev lifecycle. They only help the product team understand how much work they eventually have to do, and they help the finance team get a view of how much money is involved. But to UI designers, solution designers and developers, they are nothing but blobs of highly imprecise statements that leave hundreds of questions to be answered. The absence of “Context” and “Cause and Effect”, and the imprecise way of saying “As a… I want… so that…”, leaves room for so many misinterpretations that there is no way a development team can produce software from user stories alone without spending significant time analysing them all over again. Software, and eventually the universe, is all about Cause and Effect. The Cause and Effect is not described in a user story.

Unlike user stories, the “Behavior” suggested by Behavior Driven Development (BDD) is a much better approach because the format of a behavior (Given context, When event, Then outcome), when used correctly, lets you think in terms of a sequence of events, where the context, event and outcome are captured for each and every action the user or the system performs, and thus works as a definite spec for designing the UI and the architecture. It follows the Cause and Effect model and can therefore explain how the world (or your software) works. It can be so precise that sometimes a behavior works as the guideline for a developer to write a single function! Not just developers, even the QA team can clearly capture what action they need to perform and how the system should respond. However, to get the real fruit out of behaviors, you need to write them properly, following the right format. So, let me give you some examples of how you can write good behaviors for UI, business layer, services and even functions, and thus eliminate the repeated requirement analysis that usually happens throughout a user-story driven development lifecycle.
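
To make the format concrete, here is one illustrative behavior of my own (not quoted from the article), written for a Dropthings-style page:

Given a logged-in user viewing the first page of their dashboard,
When the user clicks the Weather widget in the widget gallery,
Then the widget appears at the first row, first column of the current page, and the addition is persisted through an AJAX call without a full page refresh.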

Read more about why user stories suck, and how using behaviors throughout the development lifecycle can greatly reduce repeated requirement analysis effort and make the communication between the product, design, development and QA teams much more effective:

http://www.codeproject.com/KB/architecture/userstorysucks.aspx 

If you like it, vote for it!

Finally! Entity Framework working in fully disconnected N-tier web app

Entity Framework was supposed to solve the problems of Linq to SQL, which requires endless hacks to make it work in the n-tier world. Not only did Entity Framework solve none of the L2S problems, it made things even more difficult to use and hack for n-tier scenarios. It sits somewhere halfway between a fully disconnected ORM and a fully connected ORM like Linq to SQL. Some useful features of Linq to SQL are gone – like automatic deferred loading. If you try to do a simple select with join, insert, update or delete in a disconnected architecture, you will realize that not only do you need to make fundamental changes from the top layer to the very bottom layer, you also need endless hacks in basic CRUD operations. I will show you in this article how I have added custom CRUD functions on top of EF’s ObjectContext to make it finally work well in a fully disconnected N-tier web application (my open source Web 2.0 AJAX portal – Dropthings) and how I have produced a 100% unit testable, fully n-tier compliant data access layer following the repository pattern.

http://www.codeproject.com/KB/linq/ef.aspx
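
To give a feel for the kind of plumbing involved, here is a minimal sketch of a repository-style insert and disconnected update using the EF 3.5-era ObjectContext API. The Widget entity, the "Widgets" entity set and the "MyEntities" container name are hypothetical, and this is not necessarily the exact approach the article takes:

using System.Data;          // EntityKey
using System.Data.Objects;  // ObjectContext (EF 1.x / .NET 3.5)

public class WidgetRepository
{
    private readonly ObjectContext _context;

    public WidgetRepository(ObjectContext context)
    {
        _context = context;
    }

    // Insert is straightforward: add the new entity and save
    public void Insert(Widget widget)
    {
        _context.AddObject("Widgets", widget);
        _context.SaveChanges();
    }

    // Updating a detached entity takes extra plumbing: EF 1.x cannot simply
    // attach a modified object, so bring the current version into the context
    // first, then overlay the detached values onto it.
    public void Update(Widget detachedWidget)
    {
        _context.GetObjectByKey(new EntityKey("MyEntities.Widgets", "ID", detachedWidget.ID));
        _context.ApplyPropertyChanges("Widgets", detachedWidget);
        _context.SaveChanges();
    }
}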

In .NET 4.0, most of the problems are solved, but not all. So, you should read this article even if you are coding in .NET 4.0. Moreover, there’s enough insight here to help you troubleshoot EF related problems.

You might think, “Why bother using EF when Linq to SQL is doing well enough for me?” Linq to SQL is not going to get any more innovation from Microsoft. Entity Framework is the future of the persistence layer in the .NET framework. All the innovation is happening in the EF world only, which is frustrating. There’s a big jump in EF 4.0. So, you should plan to migrate your L2S projects to EF soon.

Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

For those who missed my presentation: I am trying to encourage my team to get into Behavior Driven Development (BDD). So, I made two quick video tutorials to show how BDD can be done from the early requirement collection stage to the late integration tests. The first explains breaking user stories into behaviors, then developers and test engineers taking the behavior specs and writing a WCF service and its unit tests in parallel, and eventually integrating the WCF service and running the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows how you can write a test once and run it as both a unit test and an integration test at the flip of a config setting.

Watch the screencast here:

The next video tutorial is about using BDD for automated UI tests. It shows how test engineers can take behaviors and write tests that exercise a prototype UI in isolation (just like a Service Contract) to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can run against it and ensure the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for the expected behaviors.

 

Hope you like it!

Rescue overdue offshore projects and convince management to use automated tests

I have published two articles on CodeProject recently. One is the story of an offshore project that was two months overdue: my friend who ran it was paying the team from his own pocket and drowning in an ever-increasing number of change requests, and the article describes how we brainstormed together to get out of that situation.

Tips and Tricks to rescue overdue projects

The next one is about convincing management to go for automated tests and give developers extra time per sprint, at the cost of reduced productivity for a couple of sprints. It’s hard to negotiate this even with dev leads, let alone managers. Whenever you tell them there are going to be fewer features and bug fixes delivered over the next 3 or 4 sprints because we want to automate the tests and reduce manual QA effort, everyone gets furious and kicks you out of the meeting. Especially in a startup, where every sprint is jam-packed with new features and priority bug fixes to satisfy various stakeholders, including the VCs, it’s very hard to communicate the benefits of automated tests across the board. Let me tell you the story of one of my startups where I had the pleasure of arguing this case and came out victorious.

How to convince developers and management to use automated test instead of manual test

If you like these, please vote for me!

ParallelWork: Feature rich multithreaded fluent task execution library for WPF

ParallelWork is a free, open source helper class that lets you run multiple pieces of work on parallel threads, get success, failure and progress updates on the WPF UI thread, wait for work to complete, abort all work (in case of shutdown), queue work to run after a certain time, and chain parallel work one after another. It’s more convenient than using .NET’s BackgroundWorker because you don’t have to declare one component per piece of work, nor do you need to declare event handlers to receive notifications and carry additional data through private variables. You can safely pass objects produced on a different thread to the success callback. Moreover, you can wait for work to complete before you do certain operations, and you can abort all parallel work while it is in flight. If you are building a highly responsive WPF UI where you have to carry out multiple jobs in parallel yet want full control over their completion and cancellation, then the ParallelWork library is the right solution for you.

I am using the ParallelWork library in my PlantUmlEditor project, which is a free, open source UML editor built on WPF. You can see some realistic uses of the ParallelWork library there. Moreover, the test project comes with 400 lines of Behavior Driven Development-flavored tests that confirm it really does what it says it does.

The source code of the library is part of the “Utilities” project in the PlantUmlEditor source code hosted at Google Code.

The library comes in two flavors: one is the ParallelWork static class, which has a collection of static methods you can call; the other is the Start class, a fluent wrapper over ParallelWork that makes for more readable and aesthetically pleasing code.

ParallelWork allows you to start work immediately on a separate thread, or queue work to start after some duration. You can start immediate work on a new thread using the following methods:

  • void StartNow(Action doWork, Action onComplete)
  • void StartNow(Action doWork, Action onComplete, Action<Exception> failed)

For example,

ParallelWork.StartNow(() =>
{
    workStartedAt = DateTime.Now;
    Thread.Sleep(howLongWorkTakes);
},
() =>
{
    workEndedAt = DateTime.Now; 
});

Or you can use the fluent way Start.Work:

Start.Work(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    })
    .OnComplete(() =>
    {
        workCompletedAt = DateTime.Now;
    })
    .Run();

Besides simple execution of work on a parallel thread, you can
have the parallel thread produce some object and then pass it to
the success callback by using these overloads:

  • void StartNow<T>(Func<T> doWork, Action<T> onComplete)
  • void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

For example,

ParallelWork.StartNow<Dictionary<string, string>>(
    () =>
    {
        test = new Dictionary<string,string>();
        test.Add("test", "test");

        return test;
    },
    (result) =>
    {
        Assert.True(result.ContainsKey("test"));
    });

Or, the fluent way:

Start<Dictionary<string, string>>.Work(() =>
    {
        test = new Dictionary<string, string>();
        test.Add("test", "test");

        return test;
    })
    .OnComplete((result) =>
    {
        Assert.True(result.ContainsKey("test"));
    })
    .Run();

You can also start work after some delay using these methods:

  • DispatcherTimer StartAfter(Action onComplete, TimeSpan duration)
  • DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

You can use this to perform a timed operation on the UI thread, as well as perform an operation on a separate thread after some time.

ParallelWork.StartAfter(
    () =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    },
    () =>
    {
        workCompletedAt = DateTime.Now;
    },
    waitDuration);

Or, the fluent way:

Start.Work(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    })
    .OnComplete(() =>
    {
        workCompletedAt = DateTime.Now;
    })
    .RunAfter(waitDuration);

There are several overloads of these functions that take an exception callback for handling exceptions or deliver progress updates from the background thread while work is in progress. For example, I use it in my PlantUmlEditor to perform a background update of the application.

// Check if there's a newer version of the app
Start<bool>.Work(() =>
{
    return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
})
.OnComplete((hasUpdate) =>
{
    if (hasUpdate)
    {
        if (MessageBox.Show(Window.GetWindow(me),
            "There's a newer version available. Do you want to download and install?",
            "New version available",
            MessageBoxButton.YesNo, MessageBoxImage.Information) == MessageBoxResult.Yes)
        {
            ParallelWork.StartNow(() =>
            {
                var tempPath = System.IO.Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    Settings.Default.SetupExeName);
                UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
            },
            () => { },
            (x) => MessageBox.Show(Window.GetWindow(me),
                "Download failed. When you run next time, it will try downloading again.",
                "Download failed", MessageBoxButton.OK, MessageBoxImage.Warning));
        }
    }
})
.OnException((x) => MessageBox.Show(Window.GetWindow(me), x.Message,
    "Download failed", MessageBoxButton.OK, MessageBoxImage.Exclamation));

The above code shows you how to get exception callbacks on the UI thread so that you can take the necessary actions in the UI. Moreover, it shows how you can chain two pieces of parallel work to happen one after another.

Sometimes you want to do some parallel work when the user does some activity on the UI. For example, you might want to save the file in an editor every 10 seconds while the user is typing. In such a case, you need to make sure you don’t start another parallel work every 10 seconds while one is already queued. You need to start a new work only when there’s no other background work going on. Here’s how you can do it:

private void ContentEditor_TextChanged(object sender, EventArgs e)
{
  if (!ParallelWork.IsAnyWorkRunning())
  {
     ParallelWork.StartAfter(SaveAndRefreshDiagram,
                                 TimeSpan.FromSeconds(10));
  }
}

If you want to shut down your application and make sure no parallel work is going on, then you can call the StopAll() method.

ParallelWork.StopAll();

If you want to wait for the parallel work to complete within a timeout period, you can call WaitForAllWork(TimeSpan timeout). It will block the current thread until all the parallel work completes or the timeout period elapses.

result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

The result is true if all parallel work completed. If it’s false, the timeout period elapsed before all the parallel work could complete.

For details on how this library is built and how it works, please read the following CodeProject article:

ParallelWork: Feature rich multithreaded fluent task
execution library for WPF

http://www.codeproject.com/KB/WPF/parallelwork.aspx

If you like the article, please vote for me.

Do Unit Test and Integration Test from same test code using Conditional Compilation

You usually write unit test and integration test code separately, using different technologies. For example, for unit tests you use a mocking framework like Moq to do the mocking. For integration tests you do not use any mocking, just some test classes that hit a service or facade to do end-to-end integration testing. However, sometimes you see that the integration and unit tests are more or less the same: they test the same class through its interface and perform the same tests against the same expectations. For example, if you think about a WCF service, you write a unit test for the ServiceContract using its interface, where you use a mocking framework to mock the interface of the WCF service. In the following example, I am using Moq to test the IPortalService interface, which is a ServiceContract for a WCF service. I am using xUnit and SubSpec to do BDD-style tests.

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
    var portalServiceMock = new Mock<IPortalService>();
    var portalService = portalServiceMock.Object;

    "Given a already populated widget gallery".Context(() =>
    {
        portalServiceMock.Setup(p => p.GetAllWidgetDefinitions())
            .Returns(new Widget[] { new Widget { ID = 1 }, new Widget { ID = 2 }})
            .Verifiable();
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and first 
column on the same page"
.Assert(() => { portalServiceMock.VerifyAll(); Assert.NotEqual(0, widgets.Length); Assert.NotEqual(0, widgets[0].ID); }); }

Now, when I want to do an end-to-end test to see if the service really works by connecting all the wires, I write a test like this:

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
    var portalService = new ManageCustomerPortalClient();

    "Given a already populated widget gallery".Context(() =>
    {
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and 
first column on the same page"
.Assert(() => { Assert.NotEqual(0, widgets.Length); Assert.NotEqual(0, widgets[0].ID); }); }

If you look at the difference, it’s very little. The mocking is gone. The same operation is called with the same parameters. The same asserts test against the same expectations. It’s an awful duplication of code.

Conditional compilation saves the day. You can write the unit test with conditional compilation directives so that, in the real environment, the mocking is gone and the real stuff runs. For example, the following code does both the unit test and the integration test for me. All I do is turn a conditional compilation symbol on or off.

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
#if MOCK
    var portalServiceMock = new Mock<IPortalService>();
    var portalService = portalServiceMock.Object;
#else
    var portalService = new ManageCustomerPortalClient();
#endif

    "Given a already populated widget gallery".Context(() =>
    {
#if MOCK
        portalServiceMock.Setup(p => p.GetAllWidgetDefinitions())
            .Returns(new Widget[] { new Widget { ID = 1 }, new Widget { ID = 2 }})
            .Verifiable();
#endif
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and 
first column on the same page"
.Assert(() => { #if MOCK portalServiceMock.VerifyAll(); #endif Assert.NotEqual(0, widgets.Length); Assert.NotEqual(0, widgets[0].ID); }); }

The code is now in unit test mode. When I run it, it performs the unit test using Moq. When I want to switch to integration test mode, all I do is remove the “MOCK” symbol from Project Properties -> Build -> Conditional compilation symbols.


[Screenshot: the MOCK symbol in Project Properties -> Build -> Conditional compilation symbols]
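
If you prefer not to touch project settings, a file-level directive at the top of the test file achieves the same switch for that one file (a minimal sketch):

// A file-level alternative: define the symbol at the top of the test file.
// Comment the next line out to run the same tests as integration tests.
#define MOCK

// ... using directives and the test class shown above, unchanged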

Hope this gives you ideas to save unit test and integration test
coding time.