Windows 7 64bit works!

Windows 7 64bit finally works! This is the first 64bit OS I
could really use in my daily activities. I tried Vista 64bit, but it
was unreliable. It would show a blue screen right when I was about to
make a presentation to the CEO. Until Microsoft released SP1, Vista
64bit was not usable at all. Then came the Windows 7 beta. I
immediately tried its 64bit version. It was even
worse than Vista. It would crash every now and then – waking
up from standby, sharing a LiveMeeting session, switching screens,
plugging in external USB drives and what not. So, I patiently
waited for the final version to come out before installing
it on all my laptops. Happy to say, the final version works
perfectly on my HP tx2000 Tablet PC, DELL Vostro 1500 and DELL Inspiron
1520. Once you do a full Windows Update and install some drivers
here and there, it all works perfectly. And let me say, Windows 7
is beautiful. I have found the joy of working on computers
again!

Working on a 64bit operating system is challenging. You
don’t always find the right printer driver. Your cool
external USB speakers won’t work – even if they are made
by Microsoft. And above all, there’s that C:\Windows\WinSxS
folder which keeps growing forever. By the time I was done with
Vista 64bit (roughly two years in business), my WinSxS folder was
a staggering 26 GB, eating up every bit of my C: partition. I had
no choice but to format and start over. It seems like this folder
keeps a copy of every single DLL version it ever sees. The more
Windows Updates I do, the larger it gets. Now on a fresh Windows
7 installation, after installing VS 2008, Office applications,
Windows Live applications and some handy tools, the WinSxS folder
is 5.62 GB. Let’s see how it grows over the year. A
useful piece of information for 64bit wannabes: make sure your C: partition
is at least 60 GB. I installed Windows 7 64bit just 3 days back and
it has already taken 31 GB of space.


image

Since I am doing a totally useless post, let me sprinkle some
productivity tips on it before you lose interest in reading my
blog.

I realized I do a lot of context switching. I get over 200 mails
per day, so I pretty much switch focus from Visual Studio or the browser
to Outlook once every minute, which is a big concentration killer.
So, I tried the above setup on my 25” screen and it works
great!

The left half of the screen is Visual Studio and the right half
shows Outlook and my todo list. As you see, I can see the
emails coming into Outlook without ever switching. The Visual
Studio pane width is just right for reading code without
horizontal scrolling. The bottom right of the screen shows
my todo list so that I am always doing the right task from my
todo list and not wandering around heedlessly. If I browse, I bring up
the browser on top of Visual Studio and keep the right half the
same, so that while browsing I am not missing important mails and I
still have an eye on my next actions.

I have been using Toodledo for a year. I love it! It has a great
iPhone app, which is the only reason I use Toodledo and not other
alternatives. The Ajax interface is slick, especially when you use
Google Chrome to make an application out of it on your desktop. You
can turn on keyboard shortcuts, and then Toodledo inside Google
Chrome’s application-like view becomes the best web based
todo list application out there. Whenever I file a task, I hit
‘n’, enter the task title, press tab, 1/2 for priority,
hit enter and I am done. How convenient! Especially when I read
mails and file actionable tasks at least 40 to 60 times per
day.

AspectF fluent way to put Aspects into your code for separation of concern

Aspects are common features that you write every now and then in
different parts of your project. It can be a specific way of
handling exceptions in your code, logging method calls, timing
method execution, retrying some methods and so on. If
you are not doing it using an Aspect Oriented Programming
framework, then you are repeating a lot of similar code throughout
the project, which makes your code hard to maintain. For example,
say you have a business layer where methods need to be logged,
errors need to be handled in a certain way, execution needs to be
timed, database operations need to be retried and so on. So, you
write code like this:

public bool InsertCustomer(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    if (string.IsNullOrEmpty(firstName))
        throw new ApplicationException("first name cannot be empty");

    if (string.IsNullOrEmpty(lastName))
        throw new ApplicationException("last name cannot be empty");

    if (age < 0)
        throw new ApplicationException("Age must be non-zero");

    if (null == attributes)
        throw new ApplicationException("Attributes must not be null");

    // Log customer inserts and time the execution
    Logger.Writer.WriteLine("Inserting customer data...");
    DateTime start = DateTime.Now;

    try
    {
        CustomerData data = new CustomerData();
        bool result = data.Insert(firstName, lastName, age, attributes);
        if (result == true)
        {
            Logger.Writer.Write("Successfully inserted customer data in "
                + (DateTime.Now-start).TotalSeconds + " seconds");
        }
        return result;
    }
    catch (Exception x)
    {
        // Try once more, may be it was a network blip or some temporary downtime
        try
        {
            CustomerData data = new CustomerData();
            bool result = data.Insert(firstName, lastName, age, attributes);
            if (result == true)
            {
                Logger.Writer.Write("Successfully inserted customer data in "
                    + (DateTime.Now-start).TotalSeconds + " seconds");
            }
            return result;
        }
        catch
        {
            // Failed on retry, safe to assume permanent failure.

            // Log the exceptions produced
            Exception current = x;
            int indent = 0;
            while (current != null)
            {
                string message = new string(Enumerable.Repeat('\t', indent).ToArray())
                    + current.Message;
                Debug.WriteLine(message);
                Logger.Writer.WriteLine(message);
                current = current.InnerException;
                indent++;
            }
            Debug.WriteLine(x.StackTrace);
            Logger.Writer.WriteLine(x.StackTrace);

            return false;
        }
    }

}

Here you see that the two lines of real code, which insert the
customer by calling a data access class, are hardly visible amid all the
concerns (logging, retry, exception handling, timing)
you have to implement in the business layer. There’s validation,
error handling, caching, logging, timing, auditing, retrying,
dependency resolving and what not in business layers nowadays. The
more a project matures, the more concerns get into your codebase.
So, you keep copying and pasting boilerplate code and write the
tiny amount of real stuff somewhere inside that boilerplate.
What’s worse, you have to do this for every business layer
method. Say you now want to add an UpdateCustomer method to
your business layer. You have to copy all the concerns again and
put the two lines of real code somewhere inside that
boilerplate.

Think of a scenario where you have to make a project-wide change
to how errors are handled. You have to go through the hundreds
of business layer functions you wrote and change them one by one. Say
you need to change the way you time execution. You have to go
through hundreds of functions again and do that.

Aspect Oriented Programming solves these challenges. When you
are doing AOP, you do it the cool way:

[EnsureNonNullParameters]
[Log]
[TimeExecution]
[RetryOnceOnFailure]
public void InsertCustomerTheCoolway(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    CustomerData data = new CustomerData();
    data.Insert(firstName, lastName, age, attributes);
}

Here you have separated the common stuff like logging, timing,
retrying and validation, which are formally called ‘concerns’,
completely out of your real code. The method is nice, clean and to
the point. All the concerns are taken out of the body of the
function and added to the function using Attributes. Each
Attribute here represents one Aspect. For example, you can
add the logging aspect to any function just by adding the Log
attribute. Whatever AOP framework you use, the framework ensures
the Aspects are weaved into the code either at build time or at
runtime.

There are AOP frameworks which allow you to weave the Aspects
at compile time by using post-build events and IL manipulation,
e.g. PostSharp; some do it at runtime using
DynamicProxy;
and some require your classes to inherit from
ContextBoundObject in order to
support Aspects
using C# built-in features. All of these have
some barrier to entry: you have to justify using an external
library, do enough performance testing to make sure the library
scales and so on. What you need is a dead simple way to achieve
“separation of concern”, not necessarily full blown Aspect
Oriented Programming. Remember, the purpose here is separation of
concern and keeping code nice and clean.

So, let me show you a dead simple way of separation of concern:
writing standard C# code, with no Attribute or IL manipulation black
magic, just simple calls to classes and delegates, yet achieving nice
separation of concern in a reusable and maintainable way. Best of
all, it’s light – just one small class.

public void InsertCustomerTheEasyWay(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    AspectF.Define
        .Log(Logger.Writer, "Inserting customer the easy way")
        .HowLong(Logger.Writer, "Starting customer insert",
            "Inserted customer in {1} seconds")
        .Retry()
        .Do(() =>
        {
            CustomerData data = new CustomerData();
            data.Insert(firstName, lastName, age, attributes);
        });
}

If you want to read the details of how it works and how it can save
you hundreds of hours of repetitive coding, read on:

AspectF fluent way to add Aspects for cleaner maintainable code

If you like it, please vote for me!

ASP.NET AJAX testing made easy using Visual Studio 2008 Web Test

Visual Studio 2008 comes with rich Web Testing support, but
it’s not rich enough to test highly dynamic AJAX websites
where the page content is generated dynamically from a database and
the same page output changes frequently based on some external
data source, e.g. an RSS feed. You can use the Web Test Record
feature to record some browser actions by running a real browser
and then play them back. But if the page that you are testing changes
every time you visit it, then your recorded tests no longer
work as expected. The problem with a recorded Web Test is that it
stores the generated ASP.NET Control IDs and Form field names inside
the test. If the page is no longer producing the same ASP.NET
Control IDs or the same Form fields, then the recorded test no longer
works. A very simple example: in a VS Web Test you can say
“click the button with ID
ctrl00_UpdatePanel003_SubmitButton002”, but you cannot
say “click the 2nd Submit button inside the third
UpdatePanel”. Another key limitation is that in Web Tests you
cannot address Controls using the server side Control ID like
“SubmitButton”. You always have to use the generated
Client ID, which is something weird like
“ctrl_00_SomeControl001_SubmitButton”. Moreover, if you
are making AJAX calls where a certain call returns some JSON or
updates some UpdatePanel, and then based on the server
returned response you want to make further AJAX calls or post the
refreshed UpdatePanel, then recorded tests don’t work
properly. You *do* have the option to write hand-coded tests
and write code to handle such scenarios, but it’s pretty
difficult to write hand-coded tests when you are using
UpdatePanels because you have to keep track of the page
viewstate, form hidden variables etc. across async postbacks.

So, I have built a library that makes it significantly easier to test
dynamic AJAX websites and UpdatePanel-rich web pages. There
are several ExtractionRules and ValidationRules
available in the library which make testing cookies, response
headers and JSON output, discovering all UpdatePanels in a page,
finding controls in the response body, and finding controls inside some
UpdatePanel all very easy.
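
To give a sense of the plumbing involved, here is a rough sketch of a custom extraction rule built on the Visual Studio Web Testing API. The rule name and the crude way it locates the panel are illustrative only, not one of the actual rules shipped in my library:

using Microsoft.VisualStudio.TestTools.WebTesting;

// Illustrative rule: pulls a fragment of the response around a named UpdatePanel
// and stores it in the web test context so later requests can assert on it.
public class ExtractUpdatePanelContent : ExtractionRule
{
    public string UpdatePanelId { get; set; }

    public override string RuleName
    {
        get { return "Extract UpdatePanel Content"; }
    }

    public override string RuleDescription
    {
        get { return "Extracts the rendered markup around an UpdatePanel"; }
    }

    public override void Extract(object sender, ExtractionEventArgs e)
    {
        string body = e.Response.BodyString;

        // For this sketch a simple substring search on the server side ID is enough;
        // a real rule would parse the response properly.
        int start = body.IndexOf(UpdatePanelId);
        if (start >= 0)
        {
            e.WebTest.Context[this.ContextParameterName] =
                body.Substring(start, System.Math.Min(500, body.Length - start));
            e.Success = true;
        }
        else
        {
            e.Success = false;
        }
    }
}

Extracted values land in the web test context, so subsequent requests in the same test can read them back by the context parameter name.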

First, let me give you an example of what can be tested using
this library. My open source project Dropthings produces a Web
2.0 Start Page where the page is composed of widgets.


image

Each widget is composed of two UpdatePanels. There’s
a header area in each widget which is one UpdatePanel, and
the body area is another UpdatePanel. Each widget is
rendered from the database using the unique ID of the widget row, which
is an INT IDENTITY. Every page has unique widgets, with unique
ASP.NET Control IDs. As a result, there’s no way you can
record a test and play it back, because none of the ASP.NET Control
IDs are ever the same for the same page on different visits. This is
where my library comes to the rescue.

See the web test I did:


image

This test simulates an anonymous user visit. When an anonymous user
visits Dropthings for the first time, two pages are created with
some default widgets. You can also add new widgets to the page, you
can drag & drop widgets, and you can delete a widget that you
don’t like.

This Web Test simulates these behaviors automatically:

  • Visits the homepage.
  • Shows the widget list, which is an UpdatePanel, and checks
    whether the UpdatePanel contains the BBC World widget.
  • Then it clicks on the “Edit” link of the “How
    to of the day” widget, which brings up some options
    dynamically inside an UpdatePanel. Then it tries to change
    the dropdown value inside the UpdatePanel to 10.
  • Adds a new widget from the Widget List. Ensures that the
    UpdatePanel postback successfully renders the new
    widget.
  • Deletes the newly added widget and ensures the widget is
    gone.
  • Logs the user out.

If you want to learn details about the project, read my
codeproject article:

http://www.codeproject.com/KB/aspnet/aspnetajaxtesting.aspx

Please vote if you find this useful.

Web 2.0 AJAX Portal using jQuery, ASP.NET 3.5, Silverlight, Linq to SQL, WF and Unity

Dropthings – my open source Web 2.0 Ajax Portal – has gone through
a technology overhaul. Previously it was built using ASP.NET AJAX, a little
bit of Workflow Foundation and Linq to SQL. Now Dropthings boasts a
full jQuery front-end combined with ASP.NET AJAX
UpdatePanel, a Silverlight widget, a full
Workflow Foundation implementation in the business
layer, 100% Linq to SQL Compiled Queries in the
data access layer, and Dependency Injection and Inversion of Control
(IoC) using Microsoft Enterprise Library 4.1 and
Unity. It also has an ASP.NET AJAX Web Test
framework that makes it really easy to write Web Tests that simulate
real user actions on AJAX web pages. This article will walk you
through the challenges in getting these new technologies to work in
an ASP.NET website and how performance, scalability, extensibility
and maintainability have significantly improved with the new
technologies. Dropthings has been licensed for commercial use by
prominent companies including BT Business, Intel, Microsoft IS,
the Denmark Government portal for Citizens, and startups like Limead and
many more. So, this is serious stuff! There’s a very cool
open source implementation of the Dropthings framework available at the
National University of Singapore portal.

Visit: http://dropthings.omaralzabir.com


Dropthings AJAX Portal

I have published a new article on this on CodeProject:

http://www.codeproject.com/KB/ajax/Web20Portal.aspx

Get the source code

Latest source code is hosted at Google code:

http://code.google.com/p/dropthings

There’s a CodePlex site for documentation and issue
tracking:

http://www.codeplex.com/dropthings

You will need Visual Studio 2008 Team Suite with Service Pack 1
and Silverlight 2 SDK in order to run all the projects. If you have
only Visual Studio 2008 Professional, then you will have to remove
the Dropthings.Test project.

New features introduced

Dropthings new release has the following features:

  • Template users – you can define a user whose pages
    and widgets are used as a template for new users. Whatever you put
    in that template user’s pages is copied for every
    new user. This is an easier way to define the default pages
    and widgets for new users. Similarly, you can do the same for
    registered users. The template users can be defined in the
    web.config.
  • Widget-to-Widget communication – widgets can send messages
    to each other. Widgets can subscribe to an Event Broker and
    exchange messages using a Pub-Sub pattern.
  • WidgetZone – you can create any number of zones in any
    shape on the page. You can have widgets laid out horizontally,
    you can have zones in different places on the page and so on. With
    this zone model, you are no longer limited to the Page-Column model
    where you could only have N vertical columns.
  • Role based widgets – widgets are now mapped to roles so
    that you can allow different users to see different widget lists
    using ManageWidgetPersmission.aspx.
  • Role based page setup – you can define the page setup for
    different roles. For example, Managers see different pages and widgets
    than Employees.
  • Widget maximize – you can maximize a widget to take the full
    screen. Handy for widgets with lots of content.
  • Free form resize – you can freely resize widgets
    vertically.
  • Silverlight Widgets – you can now make widgets in
    Silverlight!

Why the technology overhaul

Performance, Scalability, Maintainability and Extensibility
– four key reasons for the overhaul. Each new technology
solved one or more of these problems.

First, jQuery replaced the large amount of hand-coded JavaScript
that offered the client side drag & drop and other UI effects.
jQuery already has a rich set of libraries for drag & drop,
animations and event handling, plus a cross browser JavaScript
framework and so on. So, using jQuery means opening the
door to thousands of jQuery plugins being offered on Dropthings.
This made Dropthings highly extensible on the client side.
Moreover, jQuery is very light. Unlike the jumbo-sized AJAX Control
Toolkit framework and its heavy control extenders, jQuery is very lean.
So, the total JavaScript size decreased significantly, resulting in
improved page load time. In total, the jQuery framework, the basic AJAX
framework and all my own stuff come to 395KB – sweet! Performance is
key; it makes or breaks a product.

Secondly, Linq to SQL queries were replaced with Compiled
Queries. Dropthings did not survive a load test when regular lambda
expressions were used to query the database. I could only reach up to
12 Req/Sec using 20 concurrent users without burning up the web server
CPU on a Quad Core DELL server.
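
For context, a compiled query translates the LINQ expression tree to SQL once and reuses that plan-ready command on every call. A minimal sketch – the entity class name aspnet_User and the query itself are illustrative, not the actual Dropthings queries:

using System;
using System.Data.Linq;
using System.Linq;

public static class UserQueries
{
    // Compiled once per AppDomain; subsequent calls skip expression tree translation.
    public static readonly Func<DropthingsDataContext, string, IQueryable<aspnet_User>>
        GetUserByUserName = CompiledQuery.Compile(
            (DropthingsDataContext db, string userName) =>
                db.aspnet_Users.Where(u => u.LoweredUserName == userName));
}

// Usage: var user = UserQueries.GetUserByUserName(db, "someuser").FirstOrDefault();

The saving is purely on the repeated expression-tree-to-SQL translation cost, which is exactly what shows up as CPU burn under load.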

Thirdly, Workflow Foundation is used to build operations that
require multiple Data Access Classes to work together in a
single transaction. Instead of writing large functions with many
if…else conditions and for…loops, it’s better to
write them in a Workflow, because you can visually see the flow of
execution and you can reuse Activities among different Workflows.
Best of all, architects can design workflows and developers can
fill in the code inside Activities. So, I could design a complex
operation in a workflow without writing the real code inside the
Activities and then ask someone else to implement each Activity. It
is like handing over a design document to developers to implement
each unit module, only here everything is strongly typed and
verified by the compiler. If you strictly follow the Single Responsibility
Principle for your Activities, which is a smart way of saying one
Activity does only one and very simple task, you end up with a
highly reusable and maintainable business layer and very clean
code that’s easily extensible.
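
To illustrate what a single-responsibility Activity looks like in Workflow Foundation 3.5, here is a minimal sketch – the activity name and its property are made up for illustration and are not taken from Dropthings:

using System;
using System.Workflow.ComponentModel;

// Illustrative activity: does exactly one thing – logs that a user visited.
public class LogUserVisitActivity : Activity
{
    public static readonly DependencyProperty UserNameProperty =
        DependencyProperty.Register("UserName", typeof(string), typeof(LogUserVisitActivity));

    public string UserName
    {
        get { return (string)GetValue(UserNameProperty); }
        set { SetValue(UserNameProperty, value); }
    }

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        Console.WriteLine("User visited: " + UserName);
        return ActivityExecutionStatus.Closed;
    }
}

Because the activity exposes its input as a DependencyProperty, it can be dropped into any workflow and bound from the designer, which is what makes small single-purpose activities so reusable.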

Fourthly, the Unity
Dependency Injection (DI) framework is used to pave the path for
unit testing and dependency injection. It offers Inversion of
Control (IoC), which enables testing individual classes in
isolation. Moreover, it has a handy feature to control the lifetime of
objects. Instead of creating instances of commonly used classes
several times within the same request, you can make instances
thread level, which means only one instance is created per thread
and subsequent calls reuse the same instance. Is this going over
your head? No worries, continue reading, I will explain later
on.
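
As a rough sketch of the kind of registration involved – the interface and class names are placeholders, and I am assuming the per-thread lifetime manager that ships with the Unity version used here:

using Microsoft.Practices.Unity;

public interface IPageRepository { /* data access methods */ }
public class PageRepository : IPageRepository { }

public static class Bootstrap
{
    public static IUnityContainer BuildContainer()
    {
        IUnityContainer container = new UnityContainer();

        // One instance per thread: repeated Resolve calls on the same request
        // thread reuse the same repository instead of constructing a new one.
        container.RegisterType<IPageRepository, PageRepository>(
            new PerThreadLifetimeManager());

        return container;
    }
}

// Usage: var repo = container.Resolve<IPageRepository>();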

Fifthly, enabling an API for Silverlight widgets allows more
interactive widgets to be built using Silverlight. HTML and
JavaScript still have limitations on smooth graphics and
continuous transmission of data from the web server. Silverlight solves
all of these problems.

Read the article for details on how all these improvements were
done and how all these hot techs play together in a very useful
open source project for enterprises.

http://www.codeproject.com/KB/ajax/Web20Portal.aspx

Don’t forget to vote for me if you like it.

Memory Leak with delegates and workflow foundation

Recently, after load testing my open source project Dropthings, I
encountered a lot of memory leaks. I found that lots of Workflow
instances and Linq entities were left in memory and never
collected. Profiling the web application using .NET Memory Profiler showed the real picture:


image

It shows that instances of several types are being
created but not being removed. You see the “New” column
has a positive value, but the “Remove” column has 0. That
means new instances are being created, but not removed. Basically,
the way you do memory profiling is you take two snapshots. Say you
take one snapshot when you first visit your website. Then you do
some action on the website that results in allocation of objects.
Then you take another snapshot. When you compare both snapshots,
you can see how many instances of each class were created between
the two snapshots and how many were removed. If they are not
equal, then you have a leak. Generally in a web application many
objects are created on every page hit, and at the end of the request
all those objects are supposed to be released. If they are not
released, then we have a problem. (That’s not necessarily true
for desktop applications, because in a desktop application objects can
legitimately remain in memory until the app is closed.) But you should
know best from the code which objects were supposed to go out of scope
and get released.

For beginners, a leak means objects are being allocated but not
being freed because someone is holding a reference to them.
When objects leak, they remain in memory forever, until the process
(or app domain) is shut down. So, if you have a leaky website, it
keeps taking up memory until it runs out of memory on the web server
and crashes. So, a memory leak is bad – it prevents you from
running your product for long durations and requires frequent
restarts of the app pool.
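
A classic example of “someone holding a reference” is a long-lived event keeping short-lived subscribers alive. This tiny sketch (not from Dropthings) shows the shape of the problem:

using System;

public static class GlobalEvents
{
    // Static event: lives as long as the AppDomain.
    public static event EventHandler SomethingHappened;

    public static void Raise()
    {
        var handler = SomethingHappened;
        if (handler != null) handler(null, EventArgs.Empty);
    }
}

public class PerRequestObject
{
    private readonly byte[] _buffer = new byte[1024 * 1024]; // 1 MB of per-request state

    public PerRequestObject()
    {
        // Subscribing without ever unsubscribing: the static event now holds a
        // reference to this object, so it can never be garbage collected.
        GlobalEvents.SomethingHappened += (s, e) => Console.WriteLine(_buffer.Length);
    }
}

Every PerRequestObject created this way stays rooted by the static event, which is exactly the kind of growth a memory profiler exposes.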

So, the above screenshot shows that Workflow and Linq related classes
are not being removed, and are thus leaking. This means workflow
instances are not being released somewhere, and so all workflow
related objects remain. You can see the number is the same, 48,
for all workflow related objects. This is a good indication that
almost every workflow instance leaked, because a total of 48 workflows
were created and run. Moreover, it indicates we have a
leak at the top workflow instance level, not in some specific
Activity or somewhere deep in the code.

As the workflows use Linq objects, they held references to them,
and thus the Linq objects leaked as well. Sometimes you might
be looking for why A is leaking, but you actually end up finding
that B was holding a reference to A and B was leaking, and thus
A leaked as well. This is sometimes tricky to figure out and
you can spend a lot of time looking in the wrong direction.

Now let me show you the buggy code:

ManualWorkflowSchedulerService manualScheduler =
    workflowRuntime.GetService<ManualWorkflowSchedulerService>();

WorkflowInstance instance = workflowRuntime.CreateWorkflow(workflowType, properties);
instance.Start();

EventHandler<WorkflowCompletedEventArgs> completedHandler = null;
completedHandler = delegate(object o, WorkflowCompletedEventArgs e)
{
    if (e.WorkflowInstance.InstanceId == instance.InstanceId)    // 1. instance
    {
        workflowRuntime.WorkflowCompleted -= completedHandler;   // 2. terminatedHandler

        // copy the output parameters in the specified properties dictionary
        Dictionary<string, object>.Enumerator enumerator = e.OutputParameters.GetEnumerator();
        while (enumerator.MoveNext())
        {
            KeyValuePair<string, object> pair = enumerator.Current;
            if (properties.ContainsKey(pair.Key))
            {
                properties[pair.Key] = pair.Value;
            }
        }
    }
};

Exception x = null;
EventHandler<WorkflowTerminatedEventArgs> terminatedHandler = null;
terminatedHandler = delegate(object o, WorkflowTerminatedEventArgs e)
{
    if (e.WorkflowInstance.InstanceId == instance.InstanceId)    // 3. instance
    {
        workflowRuntime.WorkflowTerminated -= terminatedHandler; // 4. completedHandler

        Debug.WriteLine(e.Exception);
        x = e.Exception;
    }
};

workflowRuntime.WorkflowCompleted += completedHandler;
workflowRuntime.WorkflowTerminated += terminatedHandler;

manualScheduler.RunWorkflow(instance.InstanceId);

Can you spot the code where it leaks?

I have numbered the lines in comments where the leak is
happening. Here the delegate is acting like a closure,
and those who come from a JavaScript background know closures are
evil – they leak memory unless written very carefully. Here the
delegate keeps a reference to the
instance object. So, if the delegate
is somehow not released, the instance will remain in memory
forever and thus leak. Now, can you find a situation where the
delegate will not be released?

Say the workflow completed. It will fire the completedHandler. But the
completedHandler does not unsubscribe the
terminatedHandler. Thus the
terminatedHandler remains registered with the runtime and it also holds
a reference to the instance. So, we have a leaky
delegate leaking whatever it is holding onto outside
its scope. Here the only thing outside its scope is the
instance, which it accesses from the parent
function.

Since the workflow instance is not released, all the properties
that the workflow and all the activities inside it hold onto
remain in memory. Most of the workflows and activities expose
public properties which are Linq entities. Thus the Linq entities
remain in memory. Linq entities keep a reference to the
DataContext from which they were produced. Thus we have the
DataContext remaining in memory. Moreover, the
DataContext keeps references to many internal objects
and metadata caches, so they remain in memory as well.

So, the correct code is:

ManualWorkflowSchedulerService manualScheduler =
    workflowRuntime.GetService<ManualWorkflowSchedulerService>();

WorkflowInstance instance = workflowRuntime.CreateWorkflow(workflowType, properties);
instance.Start();

var instanceId = instance.InstanceId;

EventHandler<WorkflowCompletedEventArgs> completedHandler = null;
completedHandler = delegate(object o, WorkflowCompletedEventArgs e)
{
    if (e.WorkflowInstance.InstanceId == instanceId)   // 1. instanceId is a Guid
    {
        // copy the output parameters in the specified properties dictionary
        Dictionary<string, object>.Enumerator enumerator = e.OutputParameters.GetEnumerator();
        while (enumerator.MoveNext())
        {
            KeyValuePair<string, object> pair = enumerator.Current;
            if (properties.ContainsKey(pair.Key))
            {
                properties[pair.Key] = pair.Value;
            }
        }
    }
};

Exception x = null;
EventHandler<WorkflowTerminatedEventArgs> terminatedHandler = null;
terminatedHandler = delegate(object o, WorkflowTerminatedEventArgs e)
{
    if (e.WorkflowInstance.InstanceId == instanceId)   // 2. instanceId is a Guid
    {
        x = e.Exception;
        Debug.WriteLine(e.Exception);
    }
};

workflowRuntime.WorkflowCompleted += completedHandler;
workflowRuntime.WorkflowTerminated += terminatedHandler;

manualScheduler.RunWorkflow(instance.InstanceId);

// 3. Both delegates are now released
workflowRuntime.WorkflowTerminated -= terminatedHandler;
workflowRuntime.WorkflowCompleted -= completedHandler;

There are two changes – in both delegates, the
instanceId variable is used instead of the
instance. Since instanceId is a Guid,
which is a struct, not a class, there’s no
issue of referencing: structs are copied, not referenced, so they
don’t keep objects alive. Secondly, both delegates are
unsubscribed at the end of the workflow execution, thus releasing both
references.

In Dropthings, I am using the famous CallWorkflow Activity by John Flanders, which
is widely used to execute one Workflow from another synchronously.
There’s a CallWorkflowService class which is
responsible for synchronously executing another workflow, and it
has a similar memory leak problem. The original code of the service
is as follows:

public class CallWorkflowService : WorkflowRuntimeService
{
    #region Methods

    public void StartWorkflow(Type workflowType, Dictionary<string,object> inparms,
        Guid caller, IComparable qn)
    {
        WorkflowRuntime wr = this.Runtime;
        WorkflowInstance wi = wr.CreateWorkflow(workflowType, inparms);
        wi.Start();

        ManualWorkflowSchedulerService ss = wr.GetService<ManualWorkflowSchedulerService>();
        if (ss != null) ss.RunWorkflow(wi.InstanceId);

        EventHandler<WorkflowCompletedEventArgs> d = null;
        d = delegate(object o, WorkflowCompletedEventArgs e)
        {
            if (e.WorkflowInstance.InstanceId == wi.InstanceId)
            {
                wr.WorkflowCompleted -= d;
                WorkflowInstance c = wr.GetWorkflow(caller);
                c.EnqueueItem(qn, e.OutputParameters, null, null);
            }
        };

        EventHandler<WorkflowTerminatedEventArgs> te = null;
        te = delegate(object o, WorkflowTerminatedEventArgs e)
        {
            if (e.WorkflowInstance.InstanceId == wi.InstanceId)
            {
                wr.WorkflowTerminated -= te;
                WorkflowInstance c = wr.GetWorkflow(caller);
                c.EnqueueItem(qn, new Exception("Called Workflow Terminated", e.Exception), null, null);
            }
        };

        wr.WorkflowCompleted += d;
        wr.WorkflowTerminated += te;
    }

    #endregion Methods
}

As you see, it has the same problem of delegates holding a reference to
the workflow instance. Moreover, there’s some queue handling
there which requires the caller and qn
parameters passed to the StartWorkflow function. So,
it’s not a straightforward fix.

I rewrote the whole CallWorkflowService so
that it does not require two delegates to be created per workflow,
and took the delegates out of the method. Thus there’s no chance of a
closure holding references to unwanted objects. The result looks
like this:

public class CallWorkflowService : WorkflowRuntimeService
{
    #region Fields

    private EventHandler<WorkflowCompletedEventArgs> _CompletedHandler = null;
    private EventHandler<WorkflowTerminatedEventArgs> _TerminatedHandler = null;
    private Dictionary<Guid, WorkflowInfo> _WorkflowQueue =
        new Dictionary<Guid, WorkflowInfo>();

    #endregion Fields

    #region Methods

    public void StartWorkflow(Type workflowType, Dictionary<string,object> inparms,
        Guid caller, IComparable qn)
    {
        WorkflowRuntime wr = this.Runtime;
        WorkflowInstance wi = wr.CreateWorkflow(workflowType, inparms);
        wi.Start();

        var instanceId = wi.InstanceId;
        _WorkflowQueue[instanceId] = new WorkflowInfo { Caller = caller, qn = qn };

        ManualWorkflowSchedulerService ss = wr.GetService<ManualWorkflowSchedulerService>();
        if (ss != null) ss.RunWorkflow(wi.InstanceId);
    }

    protected override void OnStarted()
    {
        base.OnStarted();

        if (null == _CompletedHandler)
        {
            _CompletedHandler = delegate(object o, WorkflowCompletedEventArgs e)
            {
                var instanceId = e.WorkflowInstance.InstanceId;
                if (_WorkflowQueue.ContainsKey(instanceId))
                {
                    WorkflowInfo wf = _WorkflowQueue[instanceId];
                    WorkflowInstance c = this.Runtime.GetWorkflow(wf.Caller);
                    c.EnqueueItem(wf.qn, e.OutputParameters, null, null);
                    _WorkflowQueue.Remove(instanceId);
                }
            };
            this.Runtime.WorkflowCompleted += _CompletedHandler;
        }

        if (null == _TerminatedHandler)
        {
            _TerminatedHandler = delegate(object o, WorkflowTerminatedEventArgs e)
            {
                var instanceId = e.WorkflowInstance.InstanceId;
                if (_WorkflowQueue.ContainsKey(instanceId))
                {
                    WorkflowInfo wf = _WorkflowQueue[instanceId];
                    WorkflowInstance c = this.Runtime.GetWorkflow(wf.Caller);
                    c.EnqueueItem(wf.qn,
                        new Exception("Called Workflow Terminated", e.Exception),
                        null, null);
                    _WorkflowQueue.Remove(instanceId);
                }
            };
            this.Runtime.WorkflowTerminated += _TerminatedHandler;
        }
    }

    protected override void OnStopped()
    {
        _WorkflowQueue.Clear();
        base.OnStopped();
    }

    #endregion Methods

    #region Nested Types

    private struct WorkflowInfo
    {
        public Guid Caller;
        public IComparable qn;
    }

    #endregion Nested Types
}

After fixing the problem, another memory profiler run showed
the leak is gone:


image

As you see, the numbers vary, which means there’s no
consistent leak. Moreover, looking at the types that remain in
memory, they look more like metadata than instances of
classes. So, they are basically cached instances of metadata,
not instances allocated during workflow execution which were
supposed to be freed. So, we solved the memory leak!

Now you know how to write anonymous delegates without leaking
memory and how to run workflows without leaking them. The basic
principle is: if you are referencing some outside
object from an anonymous delegate, make sure that
object is not holding a reference back to the delegate in
some way, directly or via some child object of its
own, because then you have a circular reference. If possible, do
not access reference-type objects (e.g. instance) declared outside
the delegate from inside an anonymous delegate. Prefer value types
like int, DateTime or Guid (or immutable values like string), which
do not keep an object graph alive. So, instead of
referencing an object, declare a local variable (e.g.
instanceId) that copies the value of the property you need (e.g.
instance.InstanceId) from the object and then use
that local variable inside the anonymous delegate.
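
The same pattern applies to any event subscription, not just Workflow Foundation. A small standalone sketch of the “copy the value, not the object” rule:

using System;

public class Order
{
    public Guid Id = Guid.NewGuid();
    public byte[] LargeAttachment = new byte[1024 * 1024]; // keeps a lot of memory alive
}

public class OrderProcessor
{
    public event EventHandler OrderShipped;

    public void Subscribe(Order order)
    {
        // BAD: the delegate captures the whole Order object, so as long as the
        // handler stays subscribed, the 1 MB attachment cannot be collected.
        OrderShipped += (s, e) => Console.WriteLine("Shipped " + order.Id);
    }

    public void SubscribeSafely(Order order)
    {
        // BETTER: copy only the Guid; the Order object itself is not captured
        // and can be garbage collected as soon as the caller drops it.
        Guid orderId = order.Id;
        OrderShipped += (s, e) => Console.WriteLine("Shipped " + orderId);
    }
}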

Optimize ASP.NET Membership Stored Procedures for greater speed and scalability

Last year at Pageflakes,
when we were getting millions of hits per day, we were having query
timeouts due to lock timeout and transaction deadlock errors. These
locks were produced from the aspnet_Users and
aspnet_Membership tables. Since both of these tables
are very high read (almost every request causes a read on these
tables) and high write (every anonymous visit creates a row in
aspnet_Users), there were just way too many locks
created on these tables per second. SQL counters showed thousands
of locks per second being created. Moreover, we had queries that
would frequently select thousands of rows from these tables and
thus produce more locks for longer periods, forcing other queries
to time out and throw errors on the website.

If you have read my last blog post, you know why such locks
happen. Basically every table that grows to hold millions of
records and becomes popular goes through this trouble. It’s
just part of a scalability problem that is common to databases, but
we rarely take precautions against it in our early designs.

The solution is simple: you should use either WITH
(NOLOCK)
or SET TRANSACTION ISOLATION LEVEL READ
UNCOMMITTED
before SELECT queries. Either of these will do.
They tell SQL Server not to hold any lock on the table while it is
reading it. If some row is locked while the read is
happening, it will just ignore that row. When you are reading a
table a thousand times per second without these options, you are
issuing locks in many places around the table a thousand times per
second. That not only makes reads from the table slower, but so many
locks also prevent inserts, updates and deletes from happening in a
timely manner, and thus queries time out. If you have queries like
“show the currently online users from the last hour based on
the LastActivityDate field”, that is going to issue
such a wide lock that even other harmless SELECT queries will
time out. And did I tell you that there’s no index on
LastActivityDate on the aspnet_Users
table?

Now don’t blame yourself for not putting either of these
options into every stored proc and every dynamically generated
SQL statement from the very first day. ASP.NET developers made the same
mistake. You won’t see either of them used in any of the
stored procs used by ASP.NET Membership. For example, the following
stored proc gets called whenever you access the Profile
object:

ALTER PROCEDURE [dbo].[aspnet_Profile_GetProperties]
    @ApplicationName nvarchar(256),
    @UserName        nvarchar(256),
    @CurrentTimeUtc  datetime
AS
BEGIN
    DECLARE @ApplicationId uniqueidentifier
    SELECT  @ApplicationId = NULL
    SELECT  @ApplicationId = ApplicationId FROM dbo.aspnet_Applications
        WHERE LOWER(@ApplicationName) = LoweredApplicationName
    IF (@ApplicationId IS NULL)
        RETURN

    DECLARE @UserId uniqueidentifier
    DECLARE @LastActivityDate datetime
    SELECT  @UserId = NULL

    SELECT @UserId = UserId, @LastActivityDate = LastActivityDate
    FROM   dbo.aspnet_Users
    WHERE  ApplicationId = @ApplicationId AND LoweredUserName = LOWER(@UserName)

    IF (@UserId IS NULL)
        RETURN

    SELECT TOP 1 PropertyNames, PropertyValuesString, PropertyValuesBinary
    FROM   dbo.aspnet_Profile
    WHERE  UserId = @UserId

    IF (@@ROWCOUNT > 0)
    BEGIN
        UPDATE dbo.aspnet_Users
        SET    LastActivityDate = @CurrentTimeUtc
        WHERE  UserId = @UserId
    END
END

There are two SELECT operations that hold locks on two very high-read
tables: aspnet_Users and aspnet_Profile.
Moreover, there’s a nasty UPDATE statement. It tries to
update the LastActivityDate of a user whenever you
access the Profile object for the first time within an HTTP
request.

This stored proc alone is enough to bring your site down. It did
so to us because we were using the Profile Provider
everywhere. This stored proc was being called around 300 times/sec. We
were having nightmarishly slow performance on the website and many
lock timeouts and transaction deadlocks. So, we added the
transaction isolation level and we also modified the UPDATE
statement to only perform an update when the
LastActivityDate is over an hour old. This means the
same user’s LastActivityDate won’t be
updated if the user hits the site again within the same hour.

So, after the modifications, the stored proc looked like
this:

ALTER PROCEDURE [dbo].[aspnet_Profile_GetProperties]
    @ApplicationName nvarchar(256),
    @UserName        nvarchar(256),
    @CurrentTimeUtc  datetime
AS
BEGIN
    -- 1. Please no more locks during reads
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    DECLARE @ApplicationId uniqueidentifier
    --SELECT @ApplicationId = NULL
    --SELECT @ApplicationId = ApplicationId FROM dbo.aspnet_Applications
    --    WHERE LOWER(@ApplicationName) = LoweredApplicationName
    --IF (@ApplicationId IS NULL)
    --    RETURN

    -- 2. No more call to Application table. We have only one app dude!
    SET @ApplicationId = dbo.udfGetAppId()

    DECLARE @UserId uniqueidentifier
    DECLARE @LastActivityDate datetime
    SELECT  @UserId = NULL

    SELECT @UserId = UserId, @LastActivityDate = LastActivityDate
    FROM   dbo.aspnet_Users
    WHERE  ApplicationId = @ApplicationId AND LoweredUserName = LOWER(@UserName)

    IF (@UserId IS NULL)
        RETURN

    SELECT TOP 1 PropertyNames, PropertyValuesString, PropertyValuesBinary
    FROM   dbo.aspnet_Profile
    WHERE  UserId = @UserId

    IF (@@ROWCOUNT > 0)
    BEGIN
        -- 3. Do not update the same user within an hour
        IF DateDiff(n, @LastActivityDate, @CurrentTimeUtc) > 60
        BEGIN
            -- 4. Use ROWLOCK to lock only a row since we know this query
            -- is highly selective
            UPDATE dbo.aspnet_Users WITH (ROWLOCK)
            SET    LastActivityDate = @CurrentTimeUtc
            WHERE  UserId = @UserId
        END
    END
END

The changes I made are numbered and commented; no further explanation
should be needed. The only tricky thing here is that I have eliminated
the call to the Application table that was only there to get the
ApplicationID from the ApplicationName. Since
there’s only one application in a database (ever heard of
multiple applications storing their users separately in the same
database and the same table?), we don’t need to look up the
ApplicationID on every call to every Membership stored proc. We can
just get the ID once and hard-code it in a function.

CREATE FUNCTION dbo.udfGetAppId()
RETURNS uniqueidentifier
WITH EXECUTE AS CALLER
AS
BEGIN
    RETURN CONVERT(uniqueidentifier, 'fd639154-299a-4a9d-b273-69dc28eb6388')
END;

This UDF returns the ApplicationID that I hard-coded by
copying it from the Application table. Thus it eliminates
the need for querying the Application table.

Similarly, you should make the same changes in all the other stored
procedures that belong to the Membership Provider. All the stored procs
are missing proper locking hints, issue aggressive locks during updates
and update more frequently than practically needed. Most of them also try
to resolve the ApplicationID from the ApplicationName, which is unnecessary
when you have only one web application per database. Make these
changes and enjoy lock contention free super performance from the
Membership Provider!



Linq to SQL solve Transaction deadlock and Query timeout problem using uncommitted reads

When your database tables start accumulating thousands of rows
and many users start working on the same table concurrently, SELECT
queries on those tables start producing lock contention and
transaction deadlocks. This is a common problem in any high volume
website. As soon as several concurrent users hit your website,
resulting in SELECT queries on some large
table like aspnet_Users that is also being updated
very frequently, you end up with one of these errors:

Transaction (Process ID ##) was deadlocked on lock resources
with another process and has been chosen as the deadlock victim.
Rerun the transaction.

Or,

Timeout Expired. The Timeout Period Elapsed Prior To Completion
Of The Operation Or The Server Is Not Responding.

The solution to these problems is to use a proper index on
the table and to use transaction isolation level Read
Uncommitted
or WITH (NOLOCK) in your SELECT queries. So,
say you had a query like this:

SELECT * FROM aspnet_Users
WHERE ApplicationID = 'xxx' AND LoweredUserName = 'someuser'

Under high load you will end up with one of the above errors on this
query. There are two ways to solve this:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM aspnet_Users
WHERE ApplicationID = 'xxx' AND LoweredUserName = 'someuser'

Or use the WITH (NOLOCK) hint:

SELECT * FROM aspnet_Users WITH (NOLOCK)
WHERE ApplicationID = 'xxx' AND LoweredUserName = 'someuser'

The reason for the errors is that aspnet_Users is
a high read and high write table; during a read, parts of the table
are locked, and during a write it is locked as well. So, when the
locks from several queries overlap, especially
when there’s a query that is reading a large
number of rows and thus locking a large number of rows, some of the
queries either time out or deadlock.

Linq to SQL does not produce queries with the WITH
(NOLOCK)
option, nor does it use READ UNCOMMITTED. So, if
you are using Linq to SQL queries, you are going to end up with one
of these problems in production pretty soon when your site becomes
highly popular.

For example, here’s a very simple query:

using (var db = new DropthingsDataContext())
{
    var user = db.aspnet_Users.First();
    var pages = user.Pages.ToList();
}

DropthingsDataContext is a DataContext built from the Dropthings database.

When you attach SQL Profiler, you get this:

You see none of the queries have READ UNCOMMITTED or WITH (NOLOCK).

The fix is to do this:

using (var db = new DropthingsDataContext2())
{
    db.Connection.Open();
    db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");

    var user = db.aspnet_Users.First();
    var pages = user.Pages.ToList();
}

This results in the following profiler output:

As you see, both queries execute within the same connection and
the isolation level is set before the queries execute. So, both
queries enjoy the isolation level.

Now there’s a catch: the connection does not close. This
seems to be a bug in the DataContext – when it is disposed, it
does not dispose the connection it is holding onto.

In order to solve this, I have made a subclass of
DropthingsDataContext named DropthingsDataContext2
which overrides the Dispose method and closes the
connection.

class DropthingsDataContext2 : DropthingsDataContext, IDisposable
{
    public new void Dispose()
    {
        if (base.Connection != null)
            if (base.Connection.State != System.Data.ConnectionState.Closed)
            {
                base.Connection.Close();
                base.Connection.Dispose();
            }

        base.Dispose();
    }
}

This solved the connection problem.
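
If you use this pattern in many places, you could wrap the open-and-set-isolation-level steps in a small factory method so call sites stay clean. A sketch, assuming DropthingsDataContext2 is defined as above (the helper name is made up):

public static class DataContextFactory
{
    // Creates a context whose connection is already open with dirty reads enabled,
    // so every query issued on it runs under READ UNCOMMITTED.
    public static DropthingsDataContext2 CreateReadUncommitted()
    {
        var db = new DropthingsDataContext2();
        db.Connection.Open();
        db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
        return db;
    }
}

// Usage:
// using (var db = DataContextFactory.CreateReadUncommitted())
// {
//     var user = db.aspnet_Users.First();
// }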

There you have it: no more transaction deadlocks or lock
contention from Linq to SQL queries. But remember, this only
eliminates such problems when your database already has the right
indexes. If you do not have proper indexes, then you will end up
with lock contention and query timeouts anyway.

There’s one more catch: READ UNCOMMITTED will return rows
from transactions that have not completed yet. So, you might be
reading rows from transactions that will roll back. Since
that’s generally an exceptional scenario, you are more or
less safe with uncommitted reads, but not in financial applications
where transaction rollback is a common scenario. In such cases, go
for committed read or repeatable read.

There’s another way you can achieve the same result, which seems
to work, and that is using .NET Transactions. Here’s the code
snippet:

using (var transaction = new TransactionScope(
    TransactionScopeOption.RequiresNew,
    new TransactionOptions()
    {
        IsolationLevel = IsolationLevel.ReadUncommitted,
        Timeout = TimeSpan.FromSeconds(30)
    }))
{
    using (var db = new DropthingsDataContext())
    {
        var user = db.aspnet_Users.First();
        var pages = user.Pages.ToList();

        transaction.Complete();
    }
}

The profiler shows a transaction begin and end:

The downside is that it wraps your calls in a transaction, so you
are unnecessarily creating transactions even for SELECT operations.
When you do this a hundred times per second in a web application,
it’s a significant overhead.

Some really good examples of deadlocks are given in this
article:

http://www.code-magazine.com/article.aspx?quickid=0309101&page=2

I highly recommend it.

 

Strongly typed workflow input and output arguments

When you run a Workflow using Workflow Foundation, you
pass arguments to the workflow in the form of a
Dictionary<string, object>.
This means you lose the strong typing features of .NET languages.
You have to know what arguments the workflow expects by looking at
the Workflow’s public properties. Moreover, there’s no
way to make arguments required. You pass parameters, expect the
workflow to run, and if it throws an exception, you pass more
arguments and hope it works now. Similarly, if you are running the
workflow synchronously using
ManualWorkflowSchedulerService, you expect return arguments
from the Workflow immediately, but there again you have to rely on
Dictionary key and value pairs. No strong typing there
either.

In order to solve this, so that you can pass Workflow
arguments as strongly typed classes, you can establish a convention
that every Workflow has only two arguments, named
“Request” and “Response”, and none other. Whatever
needs to be passed to the Workflow must be passed via Request, and
whatever is expected out of it must come back via the Response
property. The types of these arguments can be workflow
specific; each can be any class with one or more properties. This
way, you can write code like this:


Running workflow with strongly typed argument

The advantages of this strongly typed approach are:

  • Compile time validation of input parameters passed to the workflow.
    No risk of passing an unexpected object in the Dictionary’s
    object-typed values.
  • Enforce required values by creating Request objects with a
    non-default constructor.
  • Establish a fixed contract for Workflow input and output via
    the strongly typed Request and Response classes or interfaces.
  • Validate input arguments for the Workflow directly on the
    Request class, without going through the overhead of running a
    workflow.

If we follow this approach, we create workflows with only two
DependencyProperties, one for Request and one for
Response. Here is an example from my open source project
Dropthings, which uses Workflow for the entire
business layer. Below you see the Workflow that executes when a new
user visits Dropthings.com; it creates a new user and sets up all the
pages and widgets for the user. It has only two dependency
properties – Request and Response.


image
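
The designer screenshot does not show the code behind it, so here is a rough sketch of what two such dependency properties typically look like on a WF 3.5 workflow class. The class name and the response type are illustrative placeholders, not the actual Dropthings source:

using System;
using System.Workflow.Activities;
using System.Workflow.ComponentModel;

// Illustrative placeholders for the Request/Response types described below.
public interface IUserVisitWorkflowRequest { Guid UserGuid { get; } }
public class UserVisitWorkflowResponse { }

public partial class UserVisitWorkflow : SequentialWorkflowActivity
{
    public static readonly DependencyProperty RequestProperty =
        DependencyProperty.Register("Request",
            typeof(IUserVisitWorkflowRequest), typeof(UserVisitWorkflow));

    public static readonly DependencyProperty ResponseProperty =
        DependencyProperty.Register("Response",
            typeof(UserVisitWorkflowResponse), typeof(UserVisitWorkflow));

    public IUserVisitWorkflowRequest Request
    {
        get { return (IUserVisitWorkflowRequest)GetValue(RequestProperty); }
        set { SetValue(RequestProperty, value); }
    }

    public UserVisitWorkflowResponse Response
    {
        get { return (UserVisitWorkflowResponse)GetValue(ResponseProperty); }
        set { SetValue(ResponseProperty, value); }
    }
}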

The Request parameter is of type
IUserVisitWorkflowRequest. So, you can pass any class that
implements the interface as the Request argument.


image

Here I have used fancy inheritance to create a Request object
hierarchy. You don’t need to do that. Just remember, you can
pass any class. You don’t even need to use an interface for the
Request parameter; it can be a class directly. I use all these
interfaces in order to facilitate Dependency Inversion.

Similarly, the Response object is also a class.


image

The Response returns quite a few properties. So, it’s kinda
handy to wrap them all in one property.

So, there you have it: strongly typed Workflow arguments. You
can attach properties of the Request object to any activity
directly from the designer:


image

There’s really no compromise to make in this approach.
Everything works as before.

In order to make workflow execution simpler, I use a helper
method like the following, which takes the Request and Response
objects and creates the Dictionary for me. This Dictionary always
contains one “Request” and one “Response”
entry.


image
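
In case the screenshot is hard to read, here is a sketch of what such a helper can look like, assuming the ManualWorkflowSchedulerService approach shown in the memory leak post above. The method and type names are illustrative, not the actual Dropthings helper:

using System;
using System.Collections.Generic;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

public static class WorkflowHelper
{
    // Runs a workflow synchronously with a strongly typed Request and returns
    // the strongly typed Response, hiding the Dictionary plumbing.
    public static TResponse Run<TWorkflow, TRequest, TResponse>(
        WorkflowRuntime runtime, TRequest request)
    {
        var properties = new Dictionary<string, object> { { "Request", request } };

        WorkflowInstance instance =
            runtime.CreateWorkflow(typeof(TWorkflow), properties);
        instance.Start();

        var scheduler = runtime.GetService<ManualWorkflowSchedulerService>();
        var instanceId = instance.InstanceId;

        object response = null;
        EventHandler<WorkflowCompletedEventArgs> completed = null;
        completed = (o, e) =>
        {
            // Copy only the Guid into the closure, per the leak-free pattern.
            if (e.WorkflowInstance.InstanceId == instanceId)
                response = e.OutputParameters["Response"];
        };

        runtime.WorkflowCompleted += completed;
        scheduler.RunWorkflow(instanceId);
        runtime.WorkflowCompleted -= completed;

        return (TResponse)response;
    }
}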

This way, I can run Workflow in strongly typed fashion:


image

Here I can specify the Request, Response and Workflow types using
strong typing. This way I get a strongly typed return object as well
as pass a strongly typed Request object. There’s no dictionary
building and no risky passing of string keys and object-typed values.
You can ignore the ObjectContainer.Resolve() stuff; that’s
just returning me an existing reference to the
WorkflowRuntime.

Hope you like this approach.

99.99% available ASP.NET and SQL Server SaaS Production Architecture

You have a hot ASP.NET + SQL Server product, growing by a thousand
users per day, and you have hit the limit of your own garage hosting
capability. Now that you have enough VC money in your pocket, you
are planning to go out and host on some real hosting facility,
maybe a colocation or managed hosting. So, you are thinking: how do you
design a physical architecture that will ensure the performance,
scalability, security and availability of your product? How can you
achieve four-nines (99.99%) availability? How do you securely let
your development team connect to production servers? How do you
choose the right hardware for the web and database servers? Should you
use a Storage Area Network (SAN) or just local disks on RAID? How do
you securely connect your office computers to the production
environment?

Here I will answer all these questions. Let me first show you a
diagram that I made for Pageflakes, where we ensured we get
four-nines availability. Since Pageflakes is a Level 3 SaaS,
it’s absolutely important that we build a high
performance, highly available product that can be used from
anywhere in the world 24/7, where end users get quick access to their
content with complete personalization and customization of content
and can share it with others and with the world. So, you can take
this production architecture as a very good candidate for a Level 3
SaaS:


Hosting_environment

Here’s a CodeProject article that explains all the
ideas:

99.99% available ASP.NET and SQL Server SaaS Production
Architecture

Hope you like it. Appreciate your vote.
