Do Unit Tests and Integration Tests from the Same Test Code Using Conditional Compilation

You usually write unit test and integration test code separately,
using different technologies. For unit tests, you use a mocking
framework like Moq to do the mocking. For integration tests, you do
not use any mocking, just test classes that hit a service or facade
to do an end-to-end test. However, sometimes the integration and
unit tests are more or less the same: they exercise the same class
through its interface and make the same assertions against the same
expectations. Take a WCF service, for example. You write a unit test
for the ServiceContract against its interface, using a mocking
framework to mock that interface. In the following example, I am
using Moq to test the IPortalService interface, which is the
ServiceContract of a WCF service, and xUnit with SubSpec to write
BDD-style tests.

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
    var portalServiceMock = new Mock<IPortalService>();
    var portalService = portalServiceMock.Object;

    "Given a already populated widget gallery".Context(() =>
    {
        portalServiceMock.Setup(p => p.GetAllWidgetDefinitions())
            .Returns(new Widget[] { new Widget { ID = 1 }, new Widget { ID = 2 }})
            .Verifiable();
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and first column on the same page"
        .Assert(() =>
        {
            portalServiceMock.VerifyAll();
            Assert.NotEqual(0, widgets.Length);
            Assert.NotEqual(0, widgets[0].ID);
        });
}

Now, when I want to do an end-to-end test to see whether the service
really works with all the wires connected, I write a test like this:

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
    var portalService = new ManageCustomerPortalClient();

    "Given a already populated widget gallery".Context(() =>
    {
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and first column on the same page"
        .Assert(() =>
        {
            Assert.NotEqual(0, widgets.Length);
            Assert.NotEqual(0, widgets[0].ID);
        });
}

If you look at the difference, it is very little. The mocking is
gone; the same operation is called with the same parameters, and the
same asserts verify the same expectations. It’s an awful duplication
of code.

Conditional compilation saves the day. You can write the unit test
with conditional compilation directives so that, against the real
environment, the mocking disappears and the real service gets
called. For example, the following code does both the unit test and
the integration test for me. All I do is turn a conditional
compilation symbol on or off.

[Specification]
public void GetAllWidgetDefinitions_should_return_all_widget_in_widget_gallery()
{
#if MOCK
    var portalServiceMock = new Mock<IPortalService>();
    var portalService = portalServiceMock.Object;
#else
    var portalService = new ManageCustomerPortalClient();
#endif

    "Given a already populated widget gallery".Context(() =>
    {
#if MOCK
        portalServiceMock.Setup(p => p.GetAllWidgetDefinitions())
            .Returns(new Widget[] { new Widget { ID = 1 }, new Widget { ID = 2 }})
            .Verifiable();
#endif
    });

    Widget[] widgets = default(Widget[]);
    "When a widget is added to one of the page".Do(() =>
    {
        widgets = portalService.GetAllWidgetDefinitions();
    });

    "It should create the widget on the first row and first column on the same page"
        .Assert(() =>
        {
#if MOCK
            portalServiceMock.VerifyAll();
#endif
            Assert.NotEqual(0, widgets.Length);
            Assert.NotEqual(0, widgets[0].ID);
        });
}

The code above is in unit test mode: when I run it, it performs the
unit test using Moq. To switch to integration test mode, all I do is
remove the MOCK symbol from Project Properties > Build > Conditional
compilation symbols.
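
As an alternative to the project-level symbol, you can define the
symbol at the top of the test file itself; commenting that single
line out flips the same tests into integration mode. A minimal
sketch:

// Comment out the next line to run these specifications as
// integration tests against the real WCF client instead of the Moq mocks.
#define MOCK

using Xunit;
using Moq;

// ... the [Specification] methods shown above go here unchanged;
// every mock-specific statement stays wrapped in #if MOCK ... #endif.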



Hope this gives you ideas to save unit test and integration test
coding time.

Is your computer running slow or your battery running out quickly?

If your computer is running hot or your battery is draining quickly,
it is most likely because some application or process is consuming a
lot of CPU or memory. If you keep an application like Outlook
running for a long time, its memory consumption keeps growing and it
does not free memory efficiently. As a result, your computer runs
out of physical memory and other applications slow down. Sometimes
Outlook, a browser, an image editor, or some other application
starts taking the full CPU while doing heavy internal processing,
making your CPU hot and slowing everything else down.

My new CPUAlert is an application that monitors the CPU and memory
consumption of running applications and alerts you if one of them is
consistently taking high CPU or memory. It not only extends your CPU
and battery lifetime but also keeps your computer running smoothly
and lets your active applications run as fast as they can.

While it is running, if some process is consuming more than 200 MB
of memory, it shows you an alert.



In my case, for example, Outlook was taking 244 MB of physical RAM
when the alert fired.

You can postpone the alert for 5 minutes (just press ESC), ignore
the process permanently so that you no longer receive alerts for it,
or close the process and reclaim the memory.

The handy feature is “Restart”, which closes the application and
starts it again. This generally frees up the memory that has
accumulated in the process.

The same alert appears if some process consumes more than 30% CPU
for over 5 minutes.
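
There is no magic in this kind of monitoring. The following is just
a minimal sketch of the idea (not the actual CPUAlert source),
polling the working set of every process with System.Diagnostics:

using System;
using System.Diagnostics;
using System.Threading;

class MemoryWatcherSketch
{
    static void Main()
    {
        const long memoryLimitBytes = 200L * 1024 * 1024; // 200 MB threshold

        while (true)
        {
            foreach (Process process in Process.GetProcesses())
            {
                try
                {
                    if (process.WorkingSet64 > memoryLimitBytes)
                    {
                        // CPUAlert pops an alert window here; the sketch just prints.
                        Console.WriteLine("{0} (PID {1}) is using {2:N0} MB of physical RAM",
                            process.ProcessName, process.Id,
                            process.WorkingSet64 / (1024 * 1024));
                    }
                }
                catch (InvalidOperationException)
                {
                    // The process may have exited between enumeration and inspection.
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(30)); // poll every 30 seconds
        }
    }
}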

You can configure all of these settings (the tolerable CPU and
memory limits, how frequently to show alerts, how long to wait
before closing an application, and so on) by right-clicking the
taskbar icon and choosing Settings.



Source code of the project is available at:

http://code.google.com/p/cpualert/

The installer can also be downloaded from there.

Warning: the code is not in good shape. I was frustrated by
processes taking high CPU and memory, and I wrote this app within
hours just to get the job done.

If you like the application, spread the word!

Fast Streaming Ajax Proxy with GET PUT POST DELETE

I have enhanced my streaming Ajax proxy with POST, PUT, and DELETE
support. Previously it supported only GET; now it supports all four
popular methods for complete REST support. Using this proxy, you can
call a REST API on an external domain directly from your website’s
JavaScript code. You can test the proxy from this link:


labs.omaralzabir.com/ajaxstreamingproxy/GetPutDeleteTest.aspx

The latest source code for the Ajax Proxy is available here:

http://code.google.com/p/fastajaxproxy/

You can find a detailed CodeProject article that explains how the
streaming, asynchronous aspect of this proxy works:

Fast, Scalable, Streaming AJAX Proxy – continuously deliver data from across domains

The test page linked above provides a UI where you can try out POST,
PUT, and DELETE through the proxy.



If you want to run the sample source code on your local IIS, make
sure you allow the POST, PUT, and DELETE verbs on the .ashx
extension in the IIS properties.



The sample project shows how you can use the proxy to call external
domains. You can hit any external URL directly and perform a POST or
DELETE from your JavaScript code:

var proxyUrl = "StreamingProxy.ashx";

function download(method, proxyUrl, contentUrl, isJson, bodyContent, completeCallback)
{
    var request = new Sys.Net.WebRequest();

    // The real HTTP verb travels to the proxy in the "m" query string parameter;
    // the request to the proxy itself only needs a body-carrying verb (POST) or GET.
    if (method == "POST" || method == "PUT")
        request.set_httpVerb("POST");
    else
        request.set_httpVerb("GET");

    var url = proxyUrl + "?m=" + method +
        (isJson ? "&t=" + escape("application/json") : "") +
        "&u=" + escape(contentUrl);
    request.set_url(url);

    if (bodyContent.length > 0)
    {
        request.set_body(bodyContent);
        request.get_headers()["Content-Length"] = bodyContent.length;
    }

    var startTime = new Date().getTime();

    request.add_completed(function(executor)
    {
        if (executor.get_responseAvailable())
        {
            var content = executor.get_responseData();
            var endTime = new Date().getTime();

            var statistics = "Duration: " + (endTime - startTime) + "ms" + '\n' +
                "Length: " + content.length + " bytes" + '\n' +
                "Status Code: " + executor.get_statusCode() + '\n' +
                "Status: [" + executor.get_statusText() + "]" + '\n';

            appendStat(statistics);
            get('resultContent').value = content;
            completeCallback();
        }
    });

    var executor = new Sys.Net.XMLHttpExecutor();
    request.set_executor(executor);
    executor.executeRequest();
}

I am using MS AJAX here, but you can perform the same test with
jQuery as well. All you need to do is hit the URL of
StreamingProxy.ashx, pass the actual target URL in the query string
parameter “u”, and pass the HTTP method in the query string
parameter “m”. That’s it!
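
The same query string convention works from any HTTP client, not
just browser script. As a rough illustration (this helper is
hypothetical, not part of the proxy source), here is how the proxy
URL could be composed and called from C#:

using System;
using System.Net;

static class StreamingProxyClientSketch
{
    // Hypothetical helper: builds the proxy URL from the "m" (method),
    // "t" (content type) and "u" (target URL) query string parameters
    // described above.
    public static string BuildProxyUrl(string proxyUrl, string method,
        string targetUrl, bool isJson)
    {
        return proxyUrl
            + "?m=" + Uri.EscapeDataString(method)
            + (isJson ? "&t=" + Uri.EscapeDataString("application/json") : "")
            + "&u=" + Uri.EscapeDataString(targetUrl);
    }

    public static string PostThroughProxy(string proxyUrl, string targetUrl, string body)
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            string url = BuildProxyUrl(proxyUrl, "POST", targetUrl, true);
            return client.UploadString(url, "POST", body);
        }
    }
}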

Windows 7 64bit, Outlook 2010 64bit, Conferencing Addin 64bit, Macbook Pro 64bit

I am a 64-bit freak. I got Windows 7, Outlook 2010, and the
Conferencing Add-in, all 64-bit versions, to work on a MacBook Pro.
If you are thinking about moving to 64-bit and hesitating over
whether something will break: GO AHEAD! MacBook Pro hardware and
Microsoft’s software are the best combination out there. You will
enjoy every moment you spend with your laptop. I have also tried
these combinations on an HP tablet PC, a Sony VAIO, a Dell Inspiron,
and a Dell Vostro. The HP works best; the others struggle with
driver issues.

I will give you positive and negative feedback on the apps I have
tried so far:

Outlook 2010 64 bit:



Here is my negative feedback. Outlook Product Manager, please read
this; I am a hardcore Outlook customer of yours.

  • All my Outlook COM add-ins are dead. Outlook 2010 64-bit does not
    support them; backward compatibility does not look good.
  • No significant improvement with Exchange 2007. The startup time
    has improved from about 5 seconds to 2 seconds, but that saving
    is not a big deal since I start Outlook once and it keeps running
    for days, until my PC is so messed up that I need a restart.
  • Office Communicator 2005 does not work.
  • The beta Office 2010 applications are CPU hungry. I see 30% to
    40% CPU most of the time.
  • It took me over 30 hours until Outlook 2010 started to perform
    well. All this time, it was indexing and indexing and indexing and
    burning CPU.
  • There’s nothing groundbreaking or productivity-enhancing in
    Outlook 2010 yet. After upgrading and using it for a couple of
    days, I don’t see anything attractive enough to justify the
    upgrade time for a busy professional. It’s also not an in-place
    upgrade at this stage: you have to uninstall all Office 2007 or
    earlier products, add-ins, and so on, and then install
    Outlook 2010.
  • Outlook keyboard shortcuts have changed, and I am having a hard
    time adjusting. My precious Alt+L for Reply to All is gone; now
    it’s Ctrl+Shift+R. Come on guys, when do you just Reply and not
    Reply to All? I barely remember ever using Reply only; it’s
    always Reply to All. Can’t you make an easier shortcut for this?
  • Keyboard focus sometimes gets lost in some weird place and my
    cursor navigation breaks. I have to click with the mouse to get
    back on track.
  • Quick Steps are kind of limited. For example, “Reply &
    Delete”: who would want to press Ctrl+Shift+1 to reply and
    delete? It’s more natural to press Ctrl+R to reply, send it, and
    hit Del. The available choices are limited as well. I was hoping
    I could chain multiple commands: open a new message window,
    select a specific account to send from, select a specific
    signature, and after the mail is sent, show the move dialog to
    move the conversation to a specific folder. Nope, it does not
    work this way. First of all, the limited set of commands does not
    even support this. Secondly, all the actions are performed
    instantly one after another without waiting for the first action
    to complete.
  • Quick Steps cannot be added to Quick Access Toolbar. Go
    figure!



Now the good things:

  • The overall Outlook experience is smooth. Opening a new mail,
    typing addresses, searching, moving messages, and viewing a
    folder in conversation view are all significantly faster, even
    with Exchange. It’s hard to say whether that’s due to the fully
    64-bit environment or to the fact that none of my COM add-ins are
    working.
  • Outlook exits. Finally! No previous version of Outlook would
    terminate its process when I exited; it stayed in memory forever
    unless I killed it from Task Manager. Now Outlook really closes,
    or at least kills itself, when I exit. Whenever I exit Outlook
    and start it again, I see it doing a data integrity check, which
    suggests it is not closing down properly but killing itself. I
    assume that’s bad and that my Outlook data is slowly getting
    messed up.
  • The conversation view is great!
  • The inline appointment viewer is a lifesaver. When I get a
    meeting invite, the email preview shows a small view of the
    calendar around the meeting time, so I can see whether I am busy
    or whether there’s a free slot before or after the meeting. This
    saves me a lot of time every day, as rescheduling a meeting is a
    tedious job in my company and it takes around 4 to 7 reschedule
    attempts to find a suitable slot in everyone’s diary for every
    darn meeting.
  • Quick Steps are more or less useful. I am getting used to
    pressing Ctrl+Shift+1 to “reply to all and delete” and
    Ctrl+Shift+2 to “reply to all and move to folder”. You just have
    to configure the Quick Steps to suit you. Previously I used the
    QuickFile add-in, which was a super useful tool, well worth its
    39.95 price.

Onenote 2010 64 bit

The UI is certainly much slicker. It really looks and feels like
a notebook now. Sketching performance is improved.

However, there is a big bug. I was sketching and suddenly my pointer
switched from pen to selection mode, and all pen options became
disabled. I tried exiting and coming back; nope, I can’t get back to
pen mode at all. I am using a Genius tablet, so it looks like
OneNote is friendly to Tablet PCs only. I hope Apple makes a tablet
MacBook Pro soon.

Word 2010 64 bit

I haven’t used it much. The ribbons are as confusing as before, and
the File menu is even more confusing now. There are no new shape
styles that make Word documents stand out from the rest, and no new
SmartArt worth mentioning. Overall: disappointing.

The print features are much improved!

Powerpoint 2010 64 bit

Sadly, I did not notice any significant new feature in PowerPoint.
The ribbon has been made more useful than before: there are
“Transitions” and “Animations” ribbon tabs which are handy and save
time when putting animations in slides. But that’s all I could find
in my limited trial, which is disappointing. I was expecting a
richer collection of shapes that look really cool and make
presentations look like Web 2.0 sites, plus a lot of new SmartArt,
but there is nothing.



Visio 2010 64 bit

The UML diagram designer is as crappy as ever. Come on, Microsoft,
look at the other UML designers and learn from them. Currently Visio
is my last choice for UML design, and it makes my work life unhappy
because my company forces me to use it. I use PlantUML wherever I can.

I don’t see any amazing new diagram types either. I was hoping the
Detailed Network Diagram stencil would be much improved, with
smooth, glossy servers, cool-looking router icons, and so on, but no
luck. The new ribbon interface is as confusing as in the other
Office applications.

Conclusion

So far I can see significant improvement in Outlook only. Other
apps do not have anything that stands out.

Unit Testing and Integration Testing in real projects

I have yet to find a proper sample of how to do realistic Test
Driven Development (TDD) and how to write proper unit tests for
complex business applications, one that gives you enough confidence
to stop doing manual tests. Generally the samples show you how to
test a Stack or a LinkedList, which is far simpler than testing a
typical N-tier application, especially if you are using Entity
Framework, LINQ to SQL, or some other ORM in the data access layer
and doing logging, validation, caching, and error handling in the
middle tier. There are many articles, blog posts, and video
tutorials on how to write unit tests, and I believe they are all
very good starting points. But all of these examples show you basic
tests, not good enough to let your QA team go. So let me try to show
you some realistic unit and integration test examples that should
help you write tests that give you confidence and help you gradually
move towards TDD.

I will show you tests done on my open source project Dropthings, which is a Web
2.0 AJAX portal built using jQuery, ASP.NET 3.5, Linq to SQL,
Dependency Injection using Unity, caching using Microsoft
Enterprise Library, Velocity and so on. Basically all the hot techs
you can grasp in one shot. The project is a typical N-tier
application where there’s a web layer, a business layer and a
data access layer. Writing unit tests, integration tests and load
tests for this project was challenging, and thus interesting to
share so that you can see how you can implement Unit Testing and
Integration Testing in a real project and gradually get into Test
Driven Development.



Read this CodeProject article of mine to learn how I did integration
tests and unit tests using a Behavior Driven Development approach:

Unit Testing and Integration Testing in business
applications

http://www.codeproject.com/KB/testing/realtesting.aspx

If you like it, please vote for me.

Simple way to cache objects and collections for greater performance and scalability

Caching frequently used data greatly increases the scalability of
your application, since you avoid repeated queries against the
database, the file system, or web services. When objects are cached,
they can be retrieved from the cache, which is a lot faster and more
scalable than loading them from a database, a file, or a web
service. However, implementing caching is tricky and monotonous when
you have to do it for many classes. Your data access layer
accumulates a whole lot of code that deals with caching objects and
collections, updating the cache when objects change or get deleted,
expiring collections when a contained object changes or gets
deleted, and so on. The more code you write, the more maintenance
overhead you add. Here I will show you how you can make caching a
lot easier using LINQ to SQL and my library AspectF. It’s a library
that helps you get rid of thousands of lines of repeated code in a
medium-sized project and eliminates plumbing code (logging, error
handling, retrying, etc.) completely.

Here’s an example of how caching significantly improves the
performance and scalability of applications. Dropthings, my open
source Web 2.0 AJAX portal, can serve only about 11 requests/sec
without caching, with 10 concurrent users on a dual-core 64-bit PC,
where data is loaded from the database as well as from external
sources. The average page response time is 1.44 sec.


Load Test Without Cache

After implementing caching, it became significantly faster, at
around 32 requests/sec. Page load time decreased significantly as
well, to only 0.41 sec. During the load test, CPU utilization was
around 60%.


Load Test with in memory cache

This clearly shows the significant difference caching can make to
your application. If you are suffering from poor page load
performance and high CPU or disk activity on your database and
application servers, then caching the top five most frequently used
objects in your application will solve that problem right away. It’s
a quick win that makes your application a lot faster without complex
re-engineering.

Common approaches to caching objects and
collections

Sometimes caching can be simple, for example caching a single object
that does not belong to a collection and does not have child
collections cached separately. In such cases, you write simple code
like this (a code sketch follows the list):

  • Is the object being requested already in cache?
    • Yes, then serve it from cache.
    • No, then load it from database and then cache it.
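
For the single-object case, that logic is only a few lines. Here is
a minimal sketch (the ICache interface and the cache key helpers are
the ones used later in this post; the _database call is
illustrative):

public Page GetPage(int pageId)
{
    ICache cache = Services.Get<ICache>();
    string cacheKey = CacheSetup.CacheKeys.PageId(pageId);

    // Is the object being requested already in cache? Then serve it from cache.
    var cachedPage = cache.Get(cacheKey) as Page;
    if (cachedPage != null)
        return cachedPage;

    // Not cached: load it from the database, cache it, then return it.
    Page page = _database.GetSingle<Page, int>(pageId);
    cache.Add(cacheKey, page);
    return page;
}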

On the other hand, when you are dealing with a cached collection
where each item in the collection is also cached separately, the
caching logic is not so simple. For example, say you have cached a
User collection, but each User object is also cached
separately because you need to load individual User objects
frequently. Then the caching logic gets more complicated:

  • Is the collection being requested already in cache?
    • Yes. Get the collection. For each object in the collection:
      • Is that object individually available in cache?
        • Yes, get the individual object from cache. Update it in the
          collection.
        • No, discard the whole collection from cache and fall
          through to the next step.
    • No. Load the collection from the source (e.g. database) and
      cache each item in the collection separately. Then cache the
      collection.

You might be wondering why we need to read each individual item from
the cache, and why we need to cache each item in the collection
separately when the whole collection is already cached. There are
two scenarios you need to address when you cache a collection and
the individual items in that collection are also cached separately:

  • An individual item has been updated and the updated item is in
    the cache, but the collection, which contains all those
    individual items, has not been refreshed. So if you get the
    collection from the cache and return it as is, you will get stale
    individual items inside that collection. This is why each item
    needs to be retrieved from the cache separately.
  • An item in the collection may have been force-expired in the
    cache, for example because something changed in the object or the
    object has been deleted, so you expired it in the cache so that
    the next retrieval comes from the database. If you load the
    collection from cache only, the collection will contain the stale
    object (a minimal invalidation sketch follows this list).
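
This is why an update or a delete has to invalidate both cache
entries: the individual item and every cached collection that
contains it. A minimal sketch of the idea (assuming the ICache
interface exposes a Remove method; the data access call and the
UserId property are illustrative):

public void UpdatePage(Page page)
{
    // Persist the change first (illustrative data access call).
    _database.Update(page);

    ICache cache = Services.Get<ICache>();

    // Expire the individual item so the next read gets the fresh copy...
    cache.Remove(CacheSetup.CacheKeys.PageId(page.ID));

    // ...and expire the collection that contains it, so it is rebuilt
    // instead of serving the stale member.
    cache.Remove(CacheSetup.CacheKeys.PagesOfUser(page.UserId));
}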

If you are doing it the conventional way, you will be writing a lot
of repeated code in your data access layer. For example, say you are
loading the collection of Page objects that belong to a user,
and you want to cache the user’s Page collection as well as
each individual Page object so that each Page can be
retrieved from the cache directly. Then you need to write code like
this:

public List<Page> GetPagesOfUserOldSchool(Guid userGuid)
{
    ICache cache = Services.Get<ICache>();
    bool isCacheStale = false;
    string cacheKey = CacheSetup.CacheKeys.PagesOfUser(userGuid);
    var cachedPages = cache.Get(cacheKey) as List<Page>;
    if (cachedPages != null)
    {
        var resultantPages = new List<Page>();
        // If any item in the collection is no longer in cache, invalidate the collection
        // and load again.
        foreach (Page cachedPage in cachedPages)
        {
            var individualPageInCache = cache.Get(CacheSetup.CacheKeys.PageId(cachedPage.ID)) as Page;
            if (null == individualPageInCache)
            {
                // Some item is missing in cache. So, the collection is stale.
                isCacheStale = true;
            }
            else
            {
                resultantPages.Add(individualPageInCache);
            }
        }

        cachedPages = resultantPages;
    }

    if (cachedPages == null || isCacheStale)
    {
        // Collection not cached, or some item in it is stale. Load the collection
        // from the database and then cache it along with each individual item.
        var pagesOfUser = _database.GetList<Page, Guid>(...);
        pagesOfUser.Each(page =>
        {
            page.Detach();
            cache.Add(CacheSetup.CacheKeys.PageId(page.ID), page);
        });
        cache.Add(cacheKey, pagesOfUser);
        return pagesOfUser;
    }
    else
    {
        return cachedPages;
    }
}

Imagine writing this kind of code over and over again for each
and every entity that you want to cache. This becomes a maintenance
nightmare as your project grows.

Here’s how you could do it using AspectF:

public List<Page> GetPagesOfUser(Guid userGuid)
{
    return AspectF.Define
        .CacheList<Page, List<Page>>(
            Services.Get<ICache>(),
            CacheSetup.CacheKeys.PagesOfUser(userGuid),
            page => CacheSetup.CacheKeys.PageId(page.ID))
        .Return<List<Page>>(() =>
            _database.GetList<Page, Guid>(...)
                .Select(p => p.Detach())
                .ToList());
}

Instead of 42 lines of code, you can do it in 5 lines!

Read my CodeProject article, Simple way to cache objects and
collections for greater performance and scalability, to learn about:

  • Caching LINQ to SQL entities
  • Handling update and delete scenarios
  • Expiring dependent objects and collections in the cache
  • Handling objects that are cached with multiple keys
  • Avoiding database query optimizations when you cache sets of
    data

Enjoy. Don’t forget to vote for me!

7 tips for loading JavaScript-rich Web 2.0-like sites significantly faster

Introduction

When you create a rich Ajax application, you use external JavaScript
frameworks as well as your own homemade code that drives your
application. The problem with well-known JavaScript frameworks is
that they offer a rich set of features which is not always needed in
its entirety. You may end up using only 30% of jQuery, but you still
download the full jQuery framework, so you are downloading 70%
unnecessary script. Similarly, you might have written your own
JavaScript that is not always used. There may be features that are
not used when the site loads for the first time, resulting in
unnecessary downloads during the initial load. Initial loading time
is crucial; it can make or break your website. We did some analysis
and found that for every 500 ms we added to initial loading, we lost
approximately 30% of our traffic: visitors who never wait for the
whole page to load and just close the browser or go away. So saving
initial loading time, even by a couple of hundred milliseconds, is
crucial for the survival of a startup, especially if it’s a rich
AJAX website.

You must have noticed Microsoft’s new tool Doloto
which helps solve the following problem:

Modern Web 2.0 applications, such as GMail, Live Maps, Facebook
and many others, use a combination of Dynamic HTML, JavaScript and
other Web browser technologies commonly referred as AJAX to push
page generation and content manipulation to the client web browser.
This improves the responsiveness of these network-bound
applications, but the shift of application execution from a
back-end server to the client also often dramatically increases the
amount of code that must first be downloaded to the browser. This
creates an unfortunate Catch-22: to create responsive distributed
Web 2.0 applications developers move code to the client, but for an
application to be responsive, the code must first be transferred
there, which takes time.

Microsoft Research looked at this problem and published a research
paper on it in 2008, where they showed how much improvement can be
achieved on initial loading if there is a way to split the
JavaScript frameworks into two parts: one primary part that is
absolutely essential for the initial rendering of the page, and one
auxiliary part that is not essential for the initial load and can be
downloaded later, or on demand when the user performs some action.
They looked at my earlier startup, Pageflakes, and reported:

2.2.2 Dynamic Loading: Pageflakes

A contrast to Bunny Hunt is the Pageflakes application, an
industrial-strength mashup page providing portal-like functionality.
While the download size for Pageflakes is over 1 MB, its initial
execution time appears to be quite fast. Examining network activity
reveals that Pageflakes downloads only a small stub of code with the
initial page, and loads the rest of its code dynamically in the
background. As illustrated by Pageflakes, developers today can use
dynamic code loading to improve their web application’s performance.
However, designing an application architecture that is amenable to
dynamic code loading requires careful consideration of JavaScript
language issues such as function closures, scoping, etc. Moreover,
an optimal decomposition of code into dynamically loaded components
often requires developers to set aside the semantic groupings of
code and instead primarily consider the execution order of
functions. Of course, evolving code and changing user workloads make
both of these issues a software maintenance nightmare.

Back in 2007, I was looking at ways to improve the initial load time
and reduce user dropout. The number of users who would not wait for
the page to load and would simply go away was growing day by day as
we introduced new and cool features. It was a surprise: we thought
new features would keep more users on our site, but the opposite
happened. Analysis concluded that the initial loading time was
driving away more users than the new features retained. So all our
hard work was essentially going down the drain, and we had to come
up with something groundbreaking to solve the problem. Of course, we
had already tried all the basic stuff (IIS compression, browser
caching, on-demand loading of JavaScript, CSS, and HTML when the
user does something, deferred JavaScript execution), but nothing
helped. The frameworks and our own hand-coded framework were just
too large. Then the idea struck me: what if we could load the
functions inside a class in two steps? The first step would load the
class with only the absolutely essential functions, and the second
step would inject more functions into the existing classes.

I published a CodeProject article which shows you 7 tricks to
significantly improve page load time, even if you use a large amount
of JavaScript on the page.

7 Tips for Loading JavaScript Rich Web 2.0-like Sites Significantly Faster

  1. Use Doloto
  2. Split a Class into Multiple JavaScript Files
  3. Stub the Functions Which Aren’t Called During Initial Load
  4. JavaScript Code in Text
  5. Break UI Loading into Multiple Stages
  6. Always Grow Content from Top to Bottom, Never Shrink or
    Jump
  7. Deliver Browser Specific Script from Server

If you like these tricks, please vote for me!

Windows 7 64bit works!

Windows 7 64-bit finally works! This is the first 64-bit OS I could
really use in my daily activities. I tried Vista 64-bit; it was
unreliable. It would show a blue screen right when I was about to
make a presentation to the CEO. Until Microsoft released SP1, Vista
64-bit was not usable at all. Then came the Windows 7 beta. I
immediately tried the 64-bit version, and it was even worse than
Vista. It would crash every now and then: waking up from standby,
trying to share in LiveMeeting, switching screens, plugging in
external USB drives, and what not. So I patiently waited for the
final version to come out before installing it on all my laptops.
Happy to say, the final version works perfectly on an HP tx2000
Tablet PC, a Dell Vostro 1500, and a Dell Inspiron 1520. Once you do
a full Windows Update and install some drivers here and there, it
all works perfectly. And let me say, Windows 7 is beautiful. I have
found the joy of working on computers again!

Working on a 64-bit operating system is challenging. You don’t
always find the right printer driver. Your cool external USB
speakers won’t work, even if they are made by Microsoft. And above
all, there’s that C:\Windows\WinSxS folder, which keeps growing
forever. By the time I was done with Vista 64-bit (approximately two
years in service), my WinSxS folder was a staggering 26 GB, eating
up every bit of my C: partition. I had no choice but to format and
start over. It seems this folder keeps a copy of every single DLL
version it ever sees; the more Windows Updates I do, the larger it
gets. Now, on a fresh Windows 7 installation, after installing VS
2008, the Office applications, the Windows Live applications, and
some handy tools, the WinSxS folder is 5.62 GB. Let’s see how it
grows over the year. A useful piece of information for 64-bit
wannabes: make sure your C: partition is at least 60 GB. I installed
Windows 7 64-bit just 3 days back and it has already taken 31 GB of
space.



Since I am writing a totally useless post anyway, let me sprinkle
some productivity tips on it before you lose interest in reading my
blog.

I realized I do a lot of context switching. I get over 200 mails per
day, so I switch focus from Visual Studio or the browser to Outlook
roughly once every minute, which is a big concentration killer. So I
tried a split-screen setup on my 25” screen, and it works great!

The left half of the screen is Visual Studio, and the right half
shows Outlook and my to-do list. This way, I can see emails coming
into Outlook without ever switching. The Visual Studio pane is just
wide enough to read code without horizontal scrolling. The bottom
right of the screen shows my to-do list, so I am always working on
the right task and not wandering around aimlessly. If I browse, I
bring the browser up on top of Visual Studio and keep the right half
the same, so that while browsing I am not missing important mails
and I still have an eye on my next actions.

I have been using Toodledo for a year. I love it! It has a great
iPhone app, which is the only reason I use Toodledo and not the
alternatives. The Ajax interface is slick, especially when you use
Google Chrome to make a desktop application out of it. You can turn
on keyboard shortcuts, and then Toodledo inside Google Chrome’s
application-like view becomes the best web-based to-do list
application out there. Whenever I file a task, I hit ‘n’, enter the
task title, press Tab, type 1 or 2 for priority, hit Enter, and I am
done. How convenient! Especially when I read mails and file
actionable tasks at least 40 to 60 times per day.

AspectF: a fluent way to put Aspects into your code for separation of concerns

Aspects are common features that you write every now and then in
different parts of your project. They can be a specific way of
handling exceptions in your code, logging method calls, timing the
execution of methods, retrying some methods, and so on. If you are
not doing this with an Aspect Oriented Programming framework, then
you are repeating a lot of similar code throughout the project,
which makes your code hard to maintain. For example, say you have a
business layer where methods need to be logged, errors need to be
handled in a certain way, execution needs to be timed, database
operations need to be retried, and so on. So you write code like
this:

public bool InsertCustomer(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    if (string.IsNullOrEmpty(firstName))
        throw new ApplicationException("first name cannot be empty");

    if (string.IsNullOrEmpty(lastName))
        throw new ApplicationException("last name cannot be empty");

    if (age < 0)
        throw new ApplicationException("Age must be non-zero");

    if (null == attributes)
        throw new ApplicationException("Attributes must not be null");

    // Log customer inserts and time the execution
    Logger.Writer.WriteLine("Inserting customer data...");
    DateTime start = DateTime.Now;

    try
    {
        CustomerData data = new CustomerData();
        bool result = data.Insert(firstName, lastName, age, attributes);
        if (result == true)
        {
            Logger.Writer.Write("Successfully inserted customer data in "
                + (DateTime.Now-start).TotalSeconds + " seconds");
        }
        return result;
    }
    catch (Exception x)
    {
        // Try once more; maybe it was a network blip or some temporary downtime
        try
        {
            CustomerData data = new CustomerData();
            bool result = data.Insert(firstName, lastName, age, attributes);
            if (result == true)
            {
                Logger.Writer.Write("Successfully inserted customer data in "
                    + (DateTime.Now-start).TotalSeconds + " seconds");
            }
            return result;
        }
        catch
        {
            // Failed on retry, safe to assume permanent failure.

            // Log the exceptions produced
            Exception current = x;
            int indent = 0;
            while (current != null)
            {
                string message = new string(Enumerable.Repeat('\t', indent).ToArray())
                    + current.Message;
                Debug.WriteLine(message);
                Logger.Writer.WriteLine(message);
                current = current.InnerException;
                indent++;
            }
            Debug.WriteLine(x.StackTrace);
            Logger.Writer.WriteLine(x.StackTrace);

            return false;
        }
    }

}

Here you can see that the two lines of real code, which insert the
Customer by calling a class, are hardly visible among all the
concerns (logging, retry, exception handling, timing) you have to
implement in the business layer. There’s validation, error handling,
caching, logging, timing, auditing, retrying, dependency resolution,
and what not in business layers nowadays. The more a project
matures, the more concerns creep into your codebase. So you keep
copying and pasting boilerplate code and write the tiny amount of
real stuff somewhere inside that boilerplate. What’s worse, you have
to do this for every business layer method. Say you now want to add
an UpdateCustomer method to your business layer: you have to copy
all the concerns again and put the two lines of real code somewhere
inside that boilerplate.

Think of a scenario where you have to make a project-wide change to
how errors are handled: you have to go through the hundreds of
business layer functions you wrote and change them one by one. Say
you need to change the way you time execution: again, you have to go
through hundreds of functions.

Aspect Oriented Programming solves these challenges. When you
are doing AOP, you do it the cool way:

[EnsureNonNullParameters]
[Log]
[TimeExecution]
[RetryOnceOnFailure]
public void InsertCustomerTheCoolway(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    CustomerData data = new CustomerData();
    data.Insert(firstName, lastName, age, attributes);
}

Here you have separated the common stuff like logging, timing,
retrying, and validation, formally called ‘concerns’, completely out
of your real code. The method is nice, clean, and to the point. All
the concerns are taken out of the body of the function and attached
to it using attributes. Each attribute here represents one aspect;
for example, you can add the logging aspect to any function just by
adding the Log attribute. Whatever AOP framework you use, the
framework ensures the aspects are weaved into the code either at
build time or at runtime.

There are AOP frameworks which let you weave the aspects at compile
time using post-build events and IL manipulation (e.g. PostSharp),
some that do it at runtime using DynamicProxy, and some that require
your classes to inherit from ContextBoundObject in order to support
aspects using C#’s built-in features. All of these have some barrier
to entry: you have to justify using an external library, do enough
performance testing to make sure the library scales, and so on. What
you need is a dead simple way to achieve “separation of concern”,
not necessarily full-blown Aspect Oriented Programming. Remember,
the purpose here is separation of concern and keeping the code nice
and clean.

So let me show you a dead simple way to separate concerns by writing
standard C# code: no attribute or IL-manipulation black magic, just
simple calls to classes and delegates, yet you still achieve nice
separation of concern in a reusable and maintainable way. Best of
all, it’s light: just one small class.

public void InsertCustomerTheEasyWay(string firstName, string lastName, int age,
    Dictionary<string, string> attributes)
{
    AspectF.Define
        .Log(Logger.Writer, "Inserting customer the easy way")
        .HowLong(Logger.Writer, "Starting customer insert",
            "Inserted customer in {1} seconds")
        .Retry()
        .Do(() =>
        {
            CustomerData data = new CustomerData();
            data.Insert(firstName, lastName, age, attributes);
        });
}

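Under the hood, the trick is nothing more than wrapping delegates
around delegates. The class below is not the actual AspectF source,
just a minimal sketch of how a Retry-style aspect can be composed
that way:

using System;

public class TinyAspect
{
    // The chain of aspects built so far; starts as a plain pass-through.
    private Func<Action, Action> chain = work => work;

    public static TinyAspect Define
    {
        get { return new TinyAspect(); }
    }

    // Each aspect wraps the previous chain in one more delegate.
    public TinyAspect Retry()
    {
        Func<Action, Action> previous = chain;
        chain = work => previous(() =>
        {
            try { work(); }
            catch { work(); } // retry once on failure
        });
        return this;
    }

    // Do runs the real work through all the wrapped aspects.
    public void Do(Action work)
    {
        chain(work)();
    }
}

With this in place, TinyAspect.Define.Retry().Do(() => data.Insert(...))
runs the insert and retries it once if it throws; every other aspect
(logging, timing, and so on) is just another wrapper in the same chain.
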
If you want to read the details about how it works and how it can
save you hundreds of hours of repetitive coding, read on:

AspectF fluent way to add Aspects for cleaner maintainable code

If you like it, please vote for me!