Finally! Entity Framework working in fully disconnected N-tier web app

Entity Framework was supposed to solve the problems of Linq to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, it made things even harder to use and hack for n-tier scenarios. It sits somewhere halfway between a fully disconnected ORM and a fully connected ORM like Linq to SQL, and some useful Linq to SQL features are gone – like automatic deferred loading. If you try to do a simple select with join, insert, update or delete in a disconnected architecture, you will realize that you not only need to make fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. In this article I will show you how I added custom CRUD functions on top of EF’s ObjectContext to make it finally work well in a fully disconnected N-tier web application (my open source Web 2.0 AJAX portal – Dropthings) and how I produced a 100% unit testable, fully n-tier compliant data access layer following the repository pattern.

http://www.codeproject.com/KB/linq/ef.aspx

In .NET 4.0, most of the problems are solved, but not all. So, you should read this article even if you are coding in .NET 4.0. Moreover, there’s enough insight here to help you troubleshoot EF related problems.

You might think, “Why bother using EF when Linq to SQL is doing well enough for me?” Linq to SQL is not going to get any more innovation from Microsoft. Entity Framework is the future of the persistence layer in the .NET Framework. All the innovation is happening in the EF world only, which is frustrating. There’s a big jump in EF 4.0. So, you should plan to migrate your L2S projects to EF soon.
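To give a rough idea of what a disconnected, repository-style data access layer over EF’s ObjectContext can look like, here is a minimal sketch using the .NET 4.0 ObjectContext API. The class name, constructor parameters and method set are illustrative only, not the actual Dropthings implementation, which is covered in the article linked above.

using System.Data;
using System.Data.Objects;
using System.Data.Objects.DataClasses;

// Illustrative sketch only - not the actual Dropthings repository.
public class EntityRepository<T> where T : EntityObject
{
    private readonly ObjectContext _context;
    private readonly string _entitySetName;

    public EntityRepository(ObjectContext context, string entitySetName)
    {
        _context = context;
        _entitySetName = entitySetName;
    }

    public void Insert(T entity)
    {
        _context.AddObject(_entitySetName, entity);
        _context.SaveChanges();
    }

    // The entity arrives detached from a higher tier, so attach it first
    // and then mark it Modified so EF generates an UPDATE for it.
    public void Update(T detachedEntity)
    {
        _context.AttachTo(_entitySetName, detachedEntity);
        _context.ObjectStateManager.ChangeObjectState(detachedEntity, EntityState.Modified);
        _context.SaveChanges();
    }

    public void Delete(T detachedEntity)
    {
        _context.AttachTo(_entitySetName, detachedEntity);
        _context.DeleteObject(detachedEntity);
        _context.SaveChanges();
    }
}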

C# with keyword equivalent

There’s no with keyword in C# like there is in Visual Basic.
So you end up writing code like this:

this.StatusProgressBar.IsIndeterminate = false;
this.StatusProgressBar.Visibility = Visibility.Visible;
this.StatusProgressBar.Minimum = 0;
this.StatusProgressBar.Maximum = 100;
this.StatusProgressBar.Value = percentage;

Here’s a workaround:

this.StatusProgressBar.Use(p =>
{
  p.IsIndeterminate = false;
  p.Visibility = Visibility.Visible;
  p.Minimum = 0;
  p.Maximum = 100;
  p.Value = percentage;
});

It saves you from repeatedly typing the same class instance or control
name over and over again. It also makes the code more readable, since it
clearly says that you are working with a progress bar control
within the block. If you are setting properties of several controls
one after another, such code is easier to read
since you have a dedicated block for each control.
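For example, setting up two controls back to back reads like this (StatusTextBlock here is just an illustrative control name, not one from the original snippet):

this.StatusProgressBar.Use(p =>
{
    p.Minimum = 0;
    p.Maximum = 100;
    p.Value = percentage;
});

this.StatusTextBlock.Use(t =>
{
    t.Text = "Working...";
    t.Visibility = Visibility.Visible;
});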

It’s a very simple one-line extension method that does it (shown here inside the static class it needs to live in):

public static class ObjectExtensions
{
    // Runs the given delegate against the item, giving VB-style "with" scoping.
    public static void Use<T>(this T item, Action<T> work)
    {
        work(item);
    }
}

You could argue that you can just do this:

var p = this.StatusProgressBar;
p.IsIndeterminate = false;
p.Visibility = Visibility.Visible;
p.Minimum = 0;
p.Maximum = 100;
p.Value = percentage;

But it’s not elegant. You are introducing a variable “p” into the
local scope of the whole function, which goes against naming
conventions, and you can’t limit the scope of
“p” to just one part of the function.

Update: Previously I proposed a way to do this without a generic
extension method, which was not so clean. Andy T posted this cleaner
solution in the comments.

ParallelWork: Feature rich multithreaded fluent task execution library for WPF

ParallelWork is a free, open source helper class that
lets you run multiple pieces of work on parallel threads, get success,
failure and progress updates on the WPF UI thread, wait for work to
complete, abort all work (in case of shutdown), queue work to run
after a certain time, and chain parallel work one after another.
It’s more convenient than using .NET’s
BackgroundWorker because you don’t have to declare one
component per piece of work, nor do you need to declare event handlers to
receive notifications and carry additional data through private
variables. You can safely pass objects produced on a different
thread to the success callback. Moreover, you can wait for work to
complete before you do a certain operation, and you can abort all
parallel work while it is in flight. If you are building a highly
responsive WPF UI where you have to carry out multiple jobs in
parallel yet want full control over their completion
and cancellation, then the ParallelWork library is the right
solution for you.

I am using the ParallelWork library in my PlantUmlEditor
project, which is a free, open source UML editor built on WPF. You
can see some realistic use of the ParallelWork library
there. Moreover, the test project comes with 400 lines of Behavior
Driven Development flavored tests that confirm it really does
what it says it does.

The source code of the library is part of the
“Utilities” project in PlantUmlEditor
source code hosted at Google Code.

The library comes in two flavors: one is the ParallelWork
static class, which has a collection of static methods that you can
call; the other is the Start class, a fluent wrapper
over the ParallelWork class that makes the code more readable and
aesthetically pleasing.

ParallelWork allows you to start work immediately on a
separate thread, or you can queue work to start after some
duration. You can start immediate work on a new thread using the
following methods:

  • void StartNow(Action doWork, Action onComplete)
  • void StartNow(Action doWork, Action onComplete,
    Action failed)

For example,

ParallelWork.StartNow(() =>
{
    workStartedAt = DateTime.Now;
    Thread.Sleep(howLongWorkTakes);
},
() =>
{
    workEndedAt = DateTime.Now; 
});

Or you can use the fluent way Start.Work:

Start.Work(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    })
    .OnComplete(() =>
    {
        workCompletedAt = DateTime.Now;
    })
    .Run();

Besides simple execution of work on a parallel thread, you can
have the parallel thread produce some object and then pass it to
the success callback by using these overloads:

  • void StartNow<T>(Func<T> doWork, Action<T> onComplete)
  • void StartNow<T>(Func<T> doWork, Action<T> onComplete, Action<Exception> fail)

For example,

ParallelWork.StartNow<Dictionary<string, string>>(
    () =>
    {
        test = new Dictionary<string,string>();
        test.Add("test", "test");

        return test;
    },
    (result) =>
    {
        Assert.True(result.ContainsKey("test"));
    });

Or, the fluent way:

Start<Dictionary<string, string>>.Work(() =>
    {
        test = new Dictionary<string, string>();
        test.Add("test", "test");

        return test;
    })
    .OnComplete((result) =>
    {
        Assert.True(result.ContainsKey("test"));
    })
    .Run();

You can also start work after some time using these
methods:

  • DispatcherTimer StartAfter(Action onComplete, TimeSpan
    duration)
  • DispatcherTimer StartAfter(Action doWork, Action onComplete, TimeSpan duration)

You can use this to perform a timed operation on the UI
thread, as well as to perform some operation on a separate thread after
some time.

ParallelWork.StartAfter(
    () =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    },
    () =>
    {
        workCompletedAt = DateTime.Now;
    },
    waitDuration);

Or, the fluent way:

Start.Work(() =>
    {
        workStartedAt = DateTime.Now;
        Thread.Sleep(howLongWorkTakes);
    })
    .OnComplete(() =>
    {
        workCompletedAt = DateTime.Now;
    })
    .RunAfter(waitDuration);

There are several overloads of these functions that take an
exception callback for handling exceptions, or that report progress
from the background thread while work is in progress. For example, I
use it in my PlantUmlEditor to
perform a background update of the application.

// Check if there's a newer version of the app
Start<bool>.Work(() =>
{
    return UpdateChecker.HasUpdate(Settings.Default.DownloadUrl);
})
.OnComplete((hasUpdate) =>
{
    if (hasUpdate)
    {
        if (MessageBox.Show(Window.GetWindow(me),
            "There's a newer version available. Do you want to download and install?",
            "New version available",
            MessageBoxButton.YesNo,
            MessageBoxImage.Information) == MessageBoxResult.Yes)
        {
            ParallelWork.StartNow(() =>
            {
                var tempPath = System.IO.Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    Settings.Default.SetupExeName);
                UpdateChecker.DownloadLatestUpdate(Settings.Default.DownloadUrl, tempPath);
            },
            () => { },
            (x) =>
            {
                MessageBox.Show(Window.GetWindow(me),
                    "Download failed. When you run next time, it will try downloading again.",
                    "Download failed",
                    MessageBoxButton.OK,
                    MessageBoxImage.Warning);
            });
        }
    }
})
.OnException((x) =>
{
    MessageBox.Show(Window.GetWindow(me), x.Message, "Download failed",
        MessageBoxButton.OK, MessageBoxImage.Exclamation);
});

The above code shows you how to get exception callbacks on the
UI thread so that you can take the necessary actions in the UI.
Moreover, it shows how you can chain two pieces of parallel work to happen
one after another.

Sometimes you want to do some parallel work when the user does some
activity on the UI. For example, you might want to save the file in an
editor every 10 seconds while the user is typing. In such a case, you need
to make sure you don’t start another piece of parallel work every 10
seconds while one is already queued. You need to make sure you
start new work only when there’s no other background work
going on. Here’s how you can do it:

private void ContentEditor_TextChanged(object sender, EventArgs e)
{
  if (!ParallelWork.IsAnyWorkRunning())
  {
     ParallelWork.StartAfter(SaveAndRefreshDiagram,
                                 TimeSpan.FromSeconds(10));
  }
}

If you want to shut down your application and make sure
no parallel work is going on, then you can call the
StopAll() method.

ParallelWork.StopAll();

If you want to
wait for all parallel work to complete within a timeout, you can
call WaitForAllWork(TimeSpan timeout). It blocks the
current thread until all parallel work completes or the timeout
period elapses.

result = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));

The result is
true if all parallel work completed. If it’s false, the
timeout period elapsed before all the parallel work could complete.
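For example, on application shutdown you could combine the two calls (a small sketch assuming the behavior described above):

// Ask all in-flight parallel work to stop, then wait briefly for it to wind down.
ParallelWork.StopAll();
bool allDone = ParallelWork.WaitForAllWork(TimeSpan.FromSeconds(1));
if (!allDone)
{
    // Timeout elapsed; some background work did not finish in time.
}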

For details on how this library is built and how it works, please
read the following CodeProject article:

ParallelWork: Feature rich multithreaded fluent task
execution library for WPF

http://www.codeproject.com/KB/WPF/parallelwork.aspx

If you like the article, please vote for me.

Open Source WPF UML Design tool

PlantUmlEditor is my
new free, open source UML designer project built using WPF and .NET
3.5. If you have used plantuml before, you know
that you can quickly create sophisticated UML diagrams without
struggling with a designer. Especially if you use Visio to draw
UML diagrams (God forbid!), you will be in heaven. This is a super
fast way to get your diagrams up and ready for show. You
*write* UML diagrams in plain English, following a simple syntax,
and get the diagrams generated on-the-fly.
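For example, a small sequence diagram in PlantUML’s text syntax looks like this (an illustrative snippet of the syntax, not taken from the project):

@startuml
Alice -> Bob: Authentication Request
Bob --> Alice: Authentication Response
@enduml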

This editor really saves time designing UML diagrams. I have to
produce quick diagrams to convey ideas to Architects,
Designers and Developers every day. So, I use this tool to write
quick diagrams at the speed of coding, and the diagrams get
generated on the fly. Instead of writing a long mail explaining
some complex operation or business process in English, I can
quickly write it in the editor in almost plain English and get a
nice looking activity/sequence diagram generated instantly. Making
major changes is also as easy as doing search-replace and
copy-pasting blocks here and there. You don’t get such agility in
any conventional mouse-based UML designer.

PlantUML editor screencast

I have submitted a full CodeProject article to give you a detailed
walkthrough of how I built this. Please read the article and
vote for me if you like it.

PlantUML Editor: A fast and simple UML editor using WPF

http://www.codeproject.com/KB/smart/plantumleditor.aspx

You can download the project from here:

http://code.google.com/p/plantumleditor/

Do Unit Test and Integration Test from same test code using Conditional Compilation

You usually write unit test and integration test code separately,
using different technologies. For example, for unit tests, you use
a mocking framework like Moq to
do the mocking. For integration tests, you do not use any mocking,
just some test classes that hit a service or facade to do an
end-to-end integration test. However, sometimes you see that the
integration and unit tests are more or less the same: they test the same
class through its interface and perform the same assertions against the
same expectations. For example, if you think about a WCF service,
you write a unit test for the ServiceContract against the
interface, where you use a mocking framework to mock the
interface of the WCF service. In the following example,
I am using Moq to test the IPortalService interface, which is a
ServiceContract for a WCF service. I am using xUnit and
SubSpec to do BDD style tests.

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
    var portalServiceMock = new Mock<IPortalService>();
    var portalService = portalServiceMock.Object;

    "Given a already populated widget gallery".Context(() =>
    {
        portalServiceMock.Setup(p => p.GetAllWidgetDefinitions())
            .Returns(new Widget[] { new Widget { ID = 1 }, new Widget { ID = 2 }})
            .Verifiable();
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and first 
column on the same page"
.Assert(() => { portalServiceMock.VerifyAll(); Assert.NotEqual(0, widgets.Length); Assert.NotEqual(0, widgets[0].ID); }); }

Now, when I want to do an end-to-end test to see if the service
really works with all the wires connected, I write a test like
this:

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
    var portalService = new ManageCustomerPortalClient();

    "Given a already populated widget gallery".Context(() =>
    {
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and 
first column on the same page"
.Assert(() => { Assert.NotEqual(0, widgets.Length); Assert.NotEqual(0, widgets[0].ID); }); }

If you look at the difference, it’s very little. The
mocking is gone. The same operation is called using the same
parameters. The same asserts are done against the
same expectations. It’s an awful duplication of code.

Conditional compilation saves the day. You can write the unit
test using conditional compilation directives so that in the real
environment, the mocking is gone and the real stuff gets run.
For example, the following code does both the unit test and the
integration test for me. All I do is turn a conditional
compilation symbol on or off.

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
#if MOCK
    var portalServiceMock = new Mock<IPortalService>();
    var portalService = portalServiceMock.Object;
#else
    var portalService = new ManageCustomerPortalClient();
#endif

    "Given a already populated widget gallery".Context(() =>
    {
#if MOCK
        portalServiceMock.Setup(p => p.GetAllWidgetDefinitions())
            .Returns(new Widget[] { new Widget { ID = 1 }, new Widget { ID = 2 }})
            .Verifiable();
#endif
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and 
first column on the same page"
.Assert(() => { #if MOCK portalServiceMock.VerifyAll(); #endif Assert.NotEqual(0, widgets.Length); Assert.NotEqual(0, widgets[0].ID); }); }

The code is now in unit test mode. When I run it, it performs
a unit test using Moq. When I want to switch to integration test
mode, all I do is remove the “MOCK” symbol from Project
Properties->Build->Conditional compilation symbols.



Hope this gives you ideas to save unit test and integration test
coding time.

Is your computer running slow, battery running out quickly?

If your computer is running hot or the battery is running out quickly,
it is most likely because some application or process is consuming
high CPU or memory. If you keep applications running for a long
time, for example Outlook, they continue to grow in memory
consumption and do not free up memory efficiently. As a result,
your computer runs out of physical memory and other applications
run slower. Sometimes Outlook, the browser, image editing applications
or some other application starts taking full CPU as it gets into
some heavy internal processing, making your CPU hot and other
applications slower.

My new CPUAlert is an
application that monitors the CPU and memory consumption of
applications and alerts you if some application is consistently
taking high CPU or memory. It not only extends your CPU’s and
battery’s lifetime but also keeps your computer running smoothly
and lets your active applications run as fast as they can.

While it is running, if some process is consuming more than 200
MB of memory, it shows you an alert:



Here you can see my Outlook is taking 244 MB of physical
RAM.

You can either postpone the alert for 5 minutes (just press ESC),
ignore the process permanently so that you no longer receive
alerts for it, or close the process and reclaim its
memory.

The handy feature is “Restart”, which closes the
application and starts it again. This generally frees up the memory that
clogs up in the process.

The same alert comes up if some process consumes more than 30%
CPU for over 5 minutes.

You can configure all these settings, like the
tolerable limits for CPU and memory, how frequently to show alerts,
and how long to wait before closing an application, by right-clicking
the task bar icon and choosing Settings.



Source code of the project is available at:

http://code.google.com/p/cpualert/

The installer can also be downloaded from there.

Warning: The code is not in good shape. I was frustrated by
some process taking high CPU and memory, and I wrote this app within
hours to get the job done.

If you like the application, spread the word!

AspectF fluent way to put Aspects into your code for separation of concern

Aspects are common features that you write every now and then in
different parts of your project. They can be a specific way of
handling exceptions in your code, logging method calls,
timing the execution of methods, retrying some methods and so on. If
you are not doing it using an Aspect Oriented Programming
framework, then you are repeating a lot of similar code throughout
the project, which makes your code hard to maintain. For example,
say you have a business layer where methods need to be logged,
errors need to be handled in a certain way, execution needs to be
timed, database operations need to be retried and so on. So, you
write code like this:

public bool InsertCustomer(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    if (string.IsNullOrEmpty(firstName))
        throw new ApplicationException("first name cannot be empty");

    if (string.IsNullOrEmpty(lastName))
        throw new ApplicationException("last name cannot be empty");

    if (age < 0)
        throw new ApplicationException("Age must not be negative");

    if (null == attributes)
        throw new ApplicationException("Attributes must not be null");

    // Log customer inserts and time the execution
    Logger.Writer.WriteLine("Inserting customer data...");
    DateTime start = DateTime.Now;

    try
    {
        CustomerData data = new CustomerData();
        bool result = data.Insert(firstName, lastName, age, attributes);
        if (result == true)
        {
            Logger.Writer.Write("Successfully inserted customer data in "
                + (DateTime.Now-start).TotalSeconds + " seconds");
        }
        return result;
    }
    catch (Exception x)
    {
        // Try once more, may be it was a network blip or some temporary downtime
        try
        {
            CustomerData data = new CustomerData();
            bool result = data.Insert(firstName, lastName, age, attributes);
            if (result == true)
            {
                Logger.Writer.Write("Successfully inserted customer data in "
                    + (DateTime.Now-start).TotalSeconds + " seconds");
            }
            return result;
        }
        catch
        {
            // Failed on retry, safe to assume permanent failure.

            // Log the exceptions produced
            Exception current = x;
            int indent = 0;
            while (current != null)
            {
                string message = new string(Enumerable.Repeat('\t', indent).ToArray())
                    + current.Message;
                Debug.WriteLine(message);
                Logger.Writer.WriteLine(message);
                current = current.InnerException;
                indent++;
            }
            Debug.WriteLine(x.StackTrace);
            Logger.Writer.WriteLine(x.StackTrace);

            return false;
        }
    }

}

Here you see that the two lines of real code, which insert the
customer by calling a data class, are hardly visible amid all the
concerns (logging, retry, exception handling, timing)
you have to implement in the business layer. There’s validation,
error handling, caching, logging, timing, auditing, retrying,
dependency resolving and what not in business layers nowadays. The
more a project matures, the more concerns creep into your codebase.
So, you keep copying and pasting boilerplate code and write the
tiny amount of real stuff somewhere inside that boilerplate.
What’s worse, you have to do this for every business layer
method. Say you now want to add an UpdateCustomer method to
your business layer: you have to copy all the concerns again and
put the two lines of real code somewhere inside that
boilerplate.

Think of a scenario where you have to make a project-wide change
to how errors are handled. You have to go through all the hundreds
of business layer functions you wrote and change them one by one. Say
you need to change the way you time execution: you have to go
through hundreds of functions again and do the same.

Aspect Oriented Programming solves these challenges. When you
are doing AOP, you do it the cool way:

[EnsureNonNullParameters]
[Log]
[TimeExecution]
[RetryOnceOnFailure]
public void InsertCustomerTheCoolway(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    CustomerData data = new CustomerData();
    data.Insert(firstName, lastName, age, attributes);
}

Here you have
separated the common stuff like logging, timing, retrying and
validation, formally called ‘concerns’,
completely out of your real code. The method is nice and clean, and to
the point. All the concerns are taken out of the body of the
function and added to the function using attributes. Each
attribute here represents one aspect. For example, you can
add the logging aspect to any function just by adding the Log
attribute. Whatever AOP framework you use, the framework ensures
the aspects are weaved into the code either at build time or at
runtime.

There are AOP frameworks which allow you to weave the aspects
at compile time using post-build events and IL manipulation, e.g.
PostSharp; some do it at runtime using
DynamicProxy;
and some require your classes to inherit from
ContextBoundObject in order to
support aspects using C#’s built-in features. All of these have
some barrier to entry: you have to justify using an external
library, do enough performance testing to make sure the library
scales, and so on. What you need is a dead simple way to achieve
separation of concerns, not necessarily full-blown Aspect
Oriented Programming. Remember, the purpose here is separation of
concerns and keeping the code nice and clean.

So, let me show you a dead simple way to separate concerns,
writing standard C# code, with no attribute or IL manipulation black
magic, just simple calls to classes and delegates, yet achieving a nice
separation of concerns in a reusable and maintainable way. Best of
all, it’s light: just one small class.

public void InsertCustomerTheEasyWay(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    AspectF.Define
        .Log(Logger.Writer, "Inserting customer the easy way")
        .HowLong(Logger.Writer, "Starting customer insert",
            "Inserted customer in {1} seconds")
        .Retry()
        .Do(() =>
        {
            CustomerData data = new CustomerData();
            data.Insert(firstName, lastName, age, attributes);
        });
}

If you want to read the details of how it works and how it can save
you hundreds of hours of repetitive coding, read on:

AspectF: a fluent way to add Aspects for cleaner, maintainable code

If you like it, please vote for me!