Website diagnostics page to diagnose your ASP.NET website

Whenever you change the web.config file or deploy your website to a new environment, you have to try out many features to confirm that the configuration changes or the environment are correct. Sometimes you have to run a smoke test on the website to confirm the site is running fine. Moreover, if some external database, web service or network connectivity is down, it takes time to nail down exactly where the problem is. Having a self-diagnostics page on your website, like the one you see on your printer, can help identify exactly where the problem is. Here’s how you can quickly create a simple self-diagnostics page in a single page without spending too much effort. This diagnostics page tests common configuration settings like the connection string, ASP.NET Membership configuration, SMTP settings, file paths and URLs in <appSettings>, and some application-specific settings to confirm the changes are all correct.

[Screenshot: the self-diagnostics page]
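
To give a flavor of what such a page does, here’s a minimal sketch of one check – class and method names are mine, not the article’s code – that loops over every connection string in web.config and verifies each one can actually be opened:

// Minimal sketch of one self-test (hypothetical names, not the article's code):
// verify every connection string in web.config can actually open.
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Text;

public static class DiagnosticsChecks
{
    public static string TestConnectionStrings()
    {
        var report = new StringBuilder();
        foreach (ConnectionStringSettings cs in ConfigurationManager.ConnectionStrings)
        {
            try
            {
                // Open() throws if the server, database or credentials are wrong
                using (var connection = new SqlConnection(cs.ConnectionString))
                    connection.Open();
                report.AppendLine(cs.Name + ": OK");
            }
            catch (Exception x)
            {
                report.AppendLine(cs.Name + ": FAILED - " + x.Message);
            }
        }
        return report.ToString();
    }
}

Wire a handful of checks like this (SMTP, Membership, the <appSettings> paths and URLs) into a single page and you get a printer-style self-test for your site.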

Read how to build such a diagnostics page:

http://www.codeproject.com/KB/aspnet/selfdiagnostics.aspx

Vote for me if you find this useful.

Munq is for web, Unity is for Enterprise

The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It’s a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it’s CPU hungry. In fact, it’s so CPU hungry that it’s impossible to make it work at Internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day, and I found unusually high CPU usage. Here’s why:

[Screenshot: CPU profile showing Unity’s Resolve<>() consuming the most CPU]

I did some CPU profiling on my open source project Dropthings and found that the most CPU is consumed by Unity’s Resolve<>(). There’s no funky use of Unity in the project – straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() is consuming a significant amount of CPU even after the site is warm and has been running for a while.

Then I tried Munq, which is a basic dependency injection container. It has everything you will usually need in a regular project, and it claims to be the fastest DI container out there. So, I converted all the Unity code in Dropthings to Munq, did a CPU profile, and voila!

[Screenshot: CPU profile after switching to Munq – no Munq calls near the top]

There’s no trace of any Munq call anywhere near the top of the profile. That shows Munq is a lot faster than Unity.
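
For context, here’s roughly what the conversion looks like. This is an illustrative sketch with my own type names, and the Munq API is written from memory of the early Munq releases, so treat the exact namespace and signatures as assumptions:

// Illustrative before/after of the conversion. Type names are mine;
// the Munq API shown is an assumption based on the early Munq.DI
// releases - check the Munq site for the exact signatures.
using Microsoft.Practices.Unity;

public interface IWidgetService { }
public class WidgetService : IWidgetService { }

public static class ContainerComparison
{
    public static IWidgetService ResolveWithUnity()
    {
        var unity = new UnityContainer();
        unity.RegisterType<IWidgetService, WidgetService>();
        return unity.Resolve<IWidgetService>(); // the hot path in the profile above
    }

    public static IWidgetService ResolveWithMunq()
    {
        var munq = new Munq.DI.Container();
        munq.Register<IWidgetService>(c => new WidgetService());
        return munq.Resolve<IWidgetService>();
    }
}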

New and Improved Dropthings – the Open Source Web 2.0 AJAX portal

I have made some significant improvements and bug fixes in the latest 2.5.4 release of Dropthings, my open source Web 2.0-style AJAX portal built on ASP.NET 3.5, Linq to SQL, Linq to Xml, Silverlight, Enterprise Library, Unity, Velocity, and more. All the cool technologies you want to see in action are there in Dropthings – a production quality open source project that powers critical portals around the world.

You can get the latest code from here:

http://code.google.com/p/dropthings/

I have uploaded some how-to video tutorials to help you get started with Dropthings easily and troubleshoot common problems.

Here’s a list of the new features that were added and stabilized in this new release:

  • Run Dropthings under a virtual directory.
  • Widget drag & drop, add/remove improvements and many bug fixes on some not-so-common use cases.
  • Velocity caching support. Dropthings can now run in web farm and/or web garden mode and use Velocity for the distributed cache. This gives you more scalability: you can deploy on a large web farm and run a heavy traffic website. I have done enough load testing to prove Velocity does make Dropthings scale. You can turn Velocity on/off from web.config. Just create a cache store named “Dropthings”, turn on the config and you are good to go (see the sketch after this list).
  • AspectF implementation to put sensitive operations under transaction, retry, logging and error handling. Error logging is more streamlined. There are two log files: one contains information and the other contains exceptions. They are both in the App_Data folder.
  • Rich set of xUnit tests in Behavior Driven Development style. Important operations like First Visit and Revisit are now covered by automated tests.
  • Addition of an “admin” role and “admin” user in the database, who can manage widgets and assign/revoke roles to widgets. You can set up the admin user and admin role in your existing database using the ASP.NET Configuration tool.
  • A new page, /Admin/ManageWidgets.aspx, which is a one-stop shop for managing widgets and permissions. Add widgets very conveniently. See the video tutorials on how to get a new widget coded and deployed in less than 5 minutes.
  • More web.config settings to allow customization of key behaviors of the project.
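
As mentioned in the Velocity bullet above, here’s a minimal sketch of what talking to that “Dropthings” cache store looks like. Velocity’s API moved around between CTPs, so treat the namespace and signatures below as assumptions and verify against your SDK:

// A minimal sketch of using a Velocity cache store named "Dropthings".
// Namespace and signatures are from the CTP-era API as I remember it -
// treat them as assumptions and check your Velocity SDK.
using Microsoft.Data.Caching;

public static class VelocityCacheSketch
{
    // DataCacheFactory is expensive; create it once and reuse it.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();

    public static void Demo()
    {
        // The named cache store that web.config points Dropthings at:
        DataCache cache = Factory.GetCache("Dropthings");

        cache.Put("page:42", "serialized page model"); // visible to every server in the farm
        string page = (string)cache.Get("page:42");
    }
}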

Enjoy the new version. To those who purchased the old version: I strongly recommend taking the time to upgrade. There’s no DB schema change, but there’s a good amount of code change. Check the code commit history for the details of the changes.

Fast Streaming Ajax Proxy with GET, PUT, POST, DELETE

I have enhanced my streaming Ajax Proxy with POST, PUT and DELETE support. Previously it supported only GET; now it supports all four popular methods for complete REST support. Using this proxy, you can call a REST API on an external domain directly from your website’s javascript code. You can test the proxy from this link:


labs.omaralzabir.com/ajaxstreamingproxy/GetPutDeleteTest.aspx

The latest source code for the Ajax Proxy is available here:

http://code.google.com/p/fastajaxproxy/

You can find a detailed CodeProject article that explains how the streaming, asynchronous aspect of this proxy works:

Fast, Scalable, Streaming AJAX Proxy – continuously deliver data from across domains

Here’s how the test UI looks, where you can test POST, PUT and DELETE:

[Screenshot: the test UI for POST, PUT and DELETE]

If you want to run the sample source code on your local IIS, make sure you allow the POST, PUT, and DELETE verbs on the .ashx extension from IIS properties:

[Screenshot: IIS configuration allowing POST, PUT and DELETE verbs on the .ashx extension]

The sample project shows how you can use the proxy to make calls to external domains. You can directly hit any external URL and perform a POST or DELETE from your javascript code:

var proxyUrl = "StreamingProxy.ashx";

function download(method, proxyUrl, contentUrl, isJson, bodyContent, completeCallback) {
    var request = new Sys.Net.WebRequest();

    // The proxy itself is always invoked with GET or POST; the real method
    // to use against the external URL travels in the "m" query string parameter.
    if (method == "POST" || method == "PUT")
        request.set_httpVerb("POST");
    else
        request.set_httpVerb("GET");

    var url = proxyUrl + "?m=" + method +
        (isJson ? "&t=" + escape("application/json") : "") +
        "&u=" + escape(contentUrl);
    request.set_url(url);

    if (bodyContent.length > 0) {
        request.set_body(bodyContent);
        request.get_headers()["Content-Length"] = bodyContent.length;
    }

    var startTime = new Date().getTime();

    request.add_completed(function(executor) {
        if (executor.get_responseAvailable()) {
            var content = executor.get_responseData();
            var endTime = new Date().getTime();
            var statistics = "Duration: " + (endTime - startTime) + "ms" + '\n' +
                "Length: " + content.length + " bytes" + '\n' +
                "Status Code: " + executor.get_statusCode() + '\n' +
                "Status: [" + executor.get_statusText() + "]" + '\n';
            appendStat(statistics);
            get('resultContent').value = content;
            completeCallback();
        }
    });

    var executor = new Sys.Net.XMLHttpExecutor();
    request.set_executor(executor);
    executor.executeRequest();
}

I am using MS AJAX here. You can use jQuery to perform the same test as well. All you need to do is hit the URL of StreamingProxy.ashx, pass the actual URL in query string parameter “u”, and pass the HTTP method in query string parameter “m”. That’s it!

7 tips for loading JavaScript rich Web 2.0-like sites significantly faster

Introduction

When you create a rich Ajax application, you use external JavaScript frameworks plus your own homemade code that drives your application. The problem with well known JavaScript frameworks is that they offer a rich set of features which are not always necessary in their entirety. You may end up using only 30% of jQuery, but you still download the full jQuery framework. So, you are downloading 70% unnecessary script. Similarly, you might have written your own javascript which is not always used. There might be features which are not used when the site loads for the first time, resulting in unnecessary download during initial load. Initial loading time is crucial – it can make or break your website. We did some analysis and found that for every 500ms we added to the initial load, we lost approximately 30% of traffic – users who never wait for the whole page to load and just close the browser or go away. So, saving initial loading time, even by a couple of hundred milliseconds, is crucial for the survival of a startup, especially if it’s a rich AJAX website.

You must have noticed Microsoft’s new tool Doloto, which helps solve the following problem:

Modern Web 2.0 applications, such as GMail, Live Maps, Facebook
and many others, use a combination of Dynamic HTML, JavaScript and
other Web browser technologies commonly referred as AJAX to push
page generation and content manipulation to the client web browser.
This improves the responsiveness of these network-bound
applications, but the shift of application execution from a
back-end server to the client also often dramatically increases the
amount of code that must first be downloaded to the browser. This
creates an unfortunate Catch-22: to create responsive distributed
Web 2.0 applications developers move code to the client, but for an
application to be responsive, the code must first be transferred
there, which takes time.

Microsoft Research looked at this problem and published this research paper in 2008, where they showed how much improvement can be achieved on initial loading if there was a way to split the JavaScript frameworks into two parts – one primary part which is absolutely essential for initial rendering of the page, and one auxiliary part which is not essential for the initial load and can be downloaded later or on demand when the user does some action. They looked at my earlier startup Pageflakes and reported:

2.2.2 Dynamic Loading: Pageflakes

A contrast to Bunny Hunt is the Pageflakes application, an industrial-strength mashup page providing portal-like functionality. While the download size for Pageflakes is over 1 MB, its initial execution time appears to be quite fast. Examining network activity reveals that Pageflakes downloads only a small stub of code with the initial page, and loads the rest of its code dynamically in the background. As illustrated by Pageflakes, developers today can use dynamic code loading to improve their web application’s performance. However, designing an application architecture that is amenable to dynamic code loading requires careful consideration of JavaScript language issues such as function closures, scoping, etc. Moreover, an optimal decomposition of code into dynamically loaded components often requires developers to set aside the semantic groupings of code and instead primarily consider the execution order of functions. Of course, evolving code and changing user workloads make both of these issues a software maintenance nightmare.

Back in 2007, I was looking at ways to improve the initial load time and reduce user dropout. The number of users who would not wait for the page to load and would go away was growing day by day as we introduced new and cool features. It was a surprise. We thought new features would keep more users on our site, but the opposite happened. Analysis concluded it was the initial loading time that caused more dropout than the new features retained users. So, all our hard work was essentially going down the drain, and we had to come up with something groundbreaking to solve the problem. Of course we had already tried all the basic stuff – IIS compression, browser caching, on-demand loading of JavaScript, CSS and HTML when the user does something, deferred JavaScript execution – but nothing helped. The frameworks and our own hand-coded framework were just too large. So, the idea struck me: what if we could load the functions inside a class in two steps? The first step loads the class with only the absolutely essential functions, and the second step injects more functions into the existing class.

I published a CodeProject article which shows you 7 tricks to significantly improve page load time even if you have a large amount of JavaScript used on the page:

7 Tips for Loading JavaScript Rich Web 2.0-like Sites Significantly Faster

  1. Use Doloto
  2. Split a Class into Multiple JavaScript Files
  3. Stub the Functions Which Aren’t Called During Initial Load
  4. JavaScript Code in Text
  5. Break UI Loading into Multiple Stages
  6. Always Grow Content from Top to Bottom, Never Shrink or Jump
  7. Deliver Browser Specific Script from Server

If you like these tricks, please vote for me!

ASP.NET AJAX testing made easy using Visual Studio 2008 Web Test

Visual Studio 2008 comes with rich Web Testing support, but it’s not rich enough to test highly dynamic AJAX websites where the page content is generated dynamically from a database and the same page output changes very frequently based on some external data source, e.g. an RSS feed. You can use the Web Test Record feature to record some browser actions by running a real browser and then play them back. But if the page that you are testing changes every time you visit it, then your recorded tests no longer work as expected. The problem with a recorded Web Test is that it stores the generated ASP.NET control IDs and form field names inside the test. If the page no longer produces the same ASP.NET control IDs or the same form fields, the recorded test no longer works. A very simple example: in a VS Web Test you can say “click the button with ID ctrl00_UpdatePanel003_SubmitButton002”, but you cannot say “click the 2nd Submit button inside the third UpdatePanel”. Another key limitation is that in Web Tests you cannot address controls using the server-side control ID like “SubmitButton”. You always have to use the generated client ID, which is something weird like “ctrl_00_SomeControl001_SubmitButton”.

Moreover, if you are making AJAX calls where a certain call returns some JSON or updates some UpdatePanel, and then based on the server’s response you want to make further AJAX calls or post back the refreshed UpdatePanel, recorded tests don’t work properly. You *do* have the option to write the tests hand-coded and handle such scenarios in code, but it’s pretty difficult to write hand-coded tests when you are using UpdatePanels, because you have to keep track of the page viewstate, hidden form variables, etc. across async postbacks. So, I have built a library that makes it significantly easier to test dynamic AJAX websites and UpdatePanel-rich web pages. There are several ExtractionRule and ValidationRule classes available in the library which make it very easy to test cookies, response headers and JSON output, discover all UpdatePanels in a page, find controls in the response body, and find controls inside an UpdatePanel.
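
To show the flavor of such a rule, here’s a sketch of a custom ExtractionRule – the Visual Studio base class and its Extract override are real, but the rule body is my illustration, not the library’s actual code:

// A sketch of a custom VS 2008 ExtractionRule (illustrative, not the
// library's code). It scans an async postback response for the ASP.NET
// partial-rendering "updatePanel|id|" markers and stores the first
// panel's client ID in the test context.
using Microsoft.VisualStudio.TestTools.WebTesting;
using System.Text.RegularExpressions;

public class FirstUpdatePanelExtractionRule : ExtractionRule
{
    public override void Extract(object sender, ExtractionEventArgs e)
    {
        Match match = Regex.Match(e.Response.BodyString, @"updatePanel\|([^|]+)\|");
        if (match.Success)
        {
            // Later requests can reference {{<ContextParameterName>}}.
            e.WebTest.Context.Add(this.ContextParameterName, match.Groups[1].Value);
            e.Success = true;
        }
        else
        {
            e.Success = false;
        }
    }
}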

First, let me give you an example of what can be tested using this library. My open source project Dropthings produces a Web 2.0 start page where the page is composed of widgets.

[Screenshot: a Dropthings start page composed of widgets]

Each widget is composed of two UpdatePanels. There’s a header area in each widget, which is one UpdatePanel, and the body area, which is another UpdatePanel. Each widget is rendered from the database using the unique ID of the widget row, which is an INT IDENTITY. Every page has unique widgets with unique ASP.NET control IDs. As a result, there’s no way you can record a test and play it back, because none of the ASP.NET control IDs are ever the same for the same page on different visits. This is where my library comes to the rescue.

See the web test I did:

[Screenshot: the Web Test run in Visual Studio]

This test simulates an anonymous user visit. When an anonymous user visits Dropthings for the first time, two pages are created with some default widgets. You can also add new widgets on the page, drag & drop widgets, and delete a widget that you don’t like.

This Web Test simulates these behaviors automatically:

  • Visit the homepage
  • Show the widget list which is an UpdatePanel. It checks
    if the UpdatePanel contains the BBC World widget.
  • Then it clicks on the “Edit” link of the “How
    to of the day” widget which brings up some options
    dynamically inside an UpdatePanel. Then it tries to change
    the Dropdown value inside the UpdatePanel to 10.
  • Adds a new widget from the Widget List. Ensures that the
    UpdatePanel postback successfully renders the new
    widget.
  • Deletes the newly added widget and ensures the widget is
    gone.
  • Logs user out.

If you want to learn details about the project, read my
codeproject article:

http://www.codeproject.com/KB/aspnet/aspnetajaxtesting.aspx

Please vote if you find this useful.

Web 2.0 AJAX Portal using jQuery, ASP.NET 3.5, Silverlight, Linq to SQL, WF and Unity

Dropthings – my open source Web 2.0 Ajax portal – has gone through a technology overhaul. Previously it was built using ASP.NET AJAX, a little bit of Workflow Foundation, and Linq to SQL. Now Dropthings boasts a full jQuery front-end combined with ASP.NET AJAX UpdatePanel, a Silverlight widget, a full Workflow Foundation implementation on the business layer, 100% Linq to SQL compiled queries on the data access layer, and Dependency Injection and Inversion of Control (IoC) using Microsoft Enterprise Library 4.1 and Unity. It also has an ASP.NET AJAX Web Test framework that makes it really easy to write Web Tests that simulate real user actions on AJAX web pages. This article will walk you through the challenges in getting these new technologies to work in an ASP.NET website and how performance, scalability, extensibility and maintainability have significantly improved with the new technologies. Dropthings has been licensed for commercial use by prominent companies including BT Business, Intel, Microsoft IS, the Denmark Government portal for citizens, and startups like Limead, and many more. So, this is serious stuff! There’s a very cool open source implementation of the Dropthings framework available at the National University of Singapore portal.

Visit: http://dropthings.omaralzabir.com


[Screenshot: Dropthings AJAX Portal]

I have published a new article on this on CodeProject:

http://www.codeproject.com/KB/ajax/Web20Portal.aspx

Get the source code

Latest source code is hosted at Google code:

http://code.google.com/p/dropthings

There’s a CodePlex site for documentation and issue
tracking:

http://www.codeplex.com/dropthings

You will need Visual Studio 2008 Team Suite with Service Pack 1
and Silverlight 2 SDK in order to run all the projects. If you have
only Visual Studio 2008 Professional, then you will have to remove
the Dropthings.Test project.

New features introduced

Dropthings new release has the following features:

  • Template users – you can define a user whose pages and widgets are used as a template for new users. Whatever you put in that template user’s pages is copied for every new user. This is an easier way to define the default pages and widgets for new users. Similarly, you can do the same for registered users. The template users can be defined in web.config.
  • Widget-to-Widget communication – widgets can send messages to each other. Widgets can subscribe to an Event Broker and exchange messages using a Pub-Sub pattern.
  • WidgetZone – you can create any number of zones in any shape on the page. You can have widgets laid out horizontally, you can have zones in different places on the page, and so on. With this zone model, you are no longer limited to the Page-Column model where you could only have N vertical columns.
  • Role based widgets – now widgets are mapped to roles so that you can allow different users to see different widget lists using ManageWidgetPermission.aspx.
  • Role based page setup – you can define the page setup for different roles. For example, managers see different pages and widgets than employees.
  • Widget maximize – you can maximize a widget to take the full screen. Handy for widgets with lots of content.
  • Free form resize – you can freely resize widgets vertically.
  • Silverlight Widgets – you can now make widgets in Silverlight!

Why the technology overhaul

Performance, Scalability, Maintainability and Extensibility – four key reasons for the overhaul. Each new technology solved one or more of these problems.

First, jQuery was used to replace the large amount of hand-coded JavaScript that offered the client-side drag & drop and other UI effects. jQuery already has a rich library for drag & drop, animations and event handling, and is a cross-browser javascript framework. So, using jQuery opens the door to thousands of jQuery plugins being offered on Dropthings. This made Dropthings highly extensible on the client side. Moreover, jQuery is very light. Unlike the jumbo-sized AJAX Control Toolkit framework and its heavy control extenders, jQuery is very lean. So, the total javascript size decreased significantly, resulting in improved page load time. In total, the jQuery framework, the basic AJAX framework and all my own scripts are 395KB – sweet! Performance is key; it makes or breaks a product.

Secondly, Linq to SQL queries were replaced with compiled queries. Dropthings did not survive a load test when regular lambda expressions were used to query the database: I could only reach up to 12 req/sec using 20 concurrent users without burning up the web server CPU on a quad-core DELL server.
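
To illustrate the difference (with hypothetical entity and context names, not the actual Dropthings model): a regular lambda query re-translates the expression tree to SQL on every execution, while CompiledQuery.Compile does the translation once and reuses it:

// Illustrative Linq to SQL compiled query. Entity/context names are
// hypothetical, not the actual Dropthings model.
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Pages")]
public class Page
{
    [Column(IsPrimaryKey = true)] public int ID;
    [Column] public Guid UserId;
    [Column] public string Title;
}

public class PortalDataContext : DataContext
{
    public PortalDataContext(string connectionString) : base(connectionString) { }
    public Table<Page> Pages { get { return GetTable<Page>(); } }
}

public static class PageQueries
{
    // Compiled once per AppDomain instead of translating the
    // expression tree to SQL on every call:
    public static readonly Func<PortalDataContext, Guid, IQueryable<Page>>
        PagesByUser = CompiledQuery.Compile(
            (PortalDataContext db, Guid userId) =>
                db.Pages.Where(p => p.UserId == userId));
}

// Usage: var pages = PageQueries.PagesByUser(db, someUserId).ToList();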

Thirdly, Workflow Foundation is used to build operations that require multiple data access classes to work together in a single transaction. Instead of writing large functions with many if…else conditions and for…loops, it’s better to write them as a workflow, because you can visually see the flow of execution and you can reuse activities among different workflows. Best of all, architects can design workflows and developers can fill in the code inside activities. So, I could design a complex operation in a workflow without writing the real code inside the activities, and then ask someone else to implement each activity. It is like handing over a design document to developers to implement each unit module, except that here everything is strongly typed and verified by the compiler. If you strictly follow the Single Responsibility Principle for your activities – which is a smart way of saying one activity does only one very simple task – you end up with a highly reusable and maintainable business layer and very clean code that’s easily extensible.
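
For the curious, this is roughly how a web app hosts a WF 3.5 workflow synchronously per request. The hosting API is the standard one; the workflow type and parameter names are hypothetical, not necessarily what Dropthings uses:

// Sketch of synchronous WF 3.5 hosting. ManualWorkflowSchedulerService
// runs the workflow on the calling thread, which is what a web app wants.
// FirstVisitWorkflow and "UserId" are hypothetical names.
using System;
using System.Collections.Generic;
using System.Workflow.Activities;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

public class FirstVisitWorkflow : SequentialWorkflowActivity
{
    public Guid UserId { get; set; } // bound from the parameter dictionary
}

public static class WorkflowHost
{
    public static void RunFirstVisit(Guid userId)
    {
        // In a real app you would create the runtime once per AppDomain.
        var runtime = new WorkflowRuntime();
        var scheduler = new ManualWorkflowSchedulerService();
        runtime.AddService(scheduler);
        runtime.StartRuntime();

        var parameters = new Dictionary<string, object> { { "UserId", userId } };
        WorkflowInstance instance =
            runtime.CreateWorkflow(typeof(FirstVisitWorkflow), parameters);
        instance.Start();

        // Blocks until the workflow completes on this thread.
        scheduler.RunWorkflow(instance.InstanceId);
    }
}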

Fourthly, the Unity Dependency Injection (DI) framework is used to pave the path for unit testing and dependency injection. It offers Inversion of Control (IoC), which enables testing individual classes in isolation. Moreover, it has a handy feature to control the lifetime of objects. Instead of creating an instance of a commonly used class several times within the same request, you can make instances thread level, which means only one instance is created per thread and subsequent calls reuse the same instance. Is this going over your head? No worries, continue reading, I will explain later on.
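
Here’s a minimal sketch of that lifetime control in Unity (type names are mine): PerThreadLifetimeManager makes the container hand back one instance per thread:

// Sketch of per-thread lifetime in Unity. IUserRepository/UserRepository
// are illustrative names, not Dropthings types.
using Microsoft.Practices.Unity;

public interface IUserRepository { }
public class UserRepository : IUserRepository { }

public static class ContainerSetup
{
    public static IUnityContainer Build()
    {
        var container = new UnityContainer();

        // One instance per thread; repeated Resolve calls on the same
        // thread reuse it instead of constructing a new object each time.
        container.RegisterType<IUserRepository, UserRepository>(
            new PerThreadLifetimeManager());

        return container;
    }
}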

Fifthly, exposing an API for Silverlight widgets allows more interactive widgets to be built using Silverlight. HTML and javascript still have limitations on smooth graphics and continuous transmission of data from the web server. Silverlight solves all of these problems.

Read the article for details on how all these improvements were
done and how all these hot techs play together in a very useful
open source project for enterprises.

http://www.codeproject.com/KB/ajax/Web20Portal.aspx

Don’t forget to vote for me if you like it.

Optimize ASP.NET Membership Stored Procedures for greater speed and scalability

Last year at Pageflakes, when we were getting millions of hits per day, we were having query timeouts due to lock timeouts and transaction deadlock errors. These locks were produced on the aspnet_Users and aspnet_Membership tables. Since both of these tables are very high read (almost every request causes a read on these tables) and high write (every anonymous visit creates a row in aspnet_Users), there were just way too many locks created on these tables per second. SQL counters showed thousands of locks per second being created. Moreover, we had queries that would select thousands of rows from these tables frequently, and thus held more locks for longer periods, forcing other queries to time out and throw errors on the website.

If you have read my last blog post, you know why such locks happen. Basically, every table that grows to hold millions of records and becomes popular goes through this trouble. It’s just part of the scalability problems common to databases, but we rarely take precautions against it in our early designs.

The solution is simple: you should have either WITH (NOLOCK) or SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED before SELECT queries. Either will do. They tell SQL Server not to hold any lock on the table while it is reading it. If some row is locked while the read is happening, the read won’t block on it; it will just read the row’s uncommitted version. When you are reading a table a thousand times per second without these options, you are issuing locks on many places around the table a thousand times per second. That not only makes reads from the table slower, but so many locks also prevent inserts, updates and deletes from happening in a timely manner, and thus queries time out. If you have queries like “show the currently online users from the last one hour based on the LastActivityDate field”, that is going to issue such a wide lock that even other harmless select queries will time out. And did I tell you that there’s no index on LastActivityDate on the aspnet_Users table?

Now don’t blame yourself for not putting either of these options in every stored proc and every dynamically generated SQL statement from the very first day. The ASP.NET developers made the same mistake: you won’t see either of these used in any of the stored procs used by ASP.NET Membership. For example, the following stored proc gets called whenever you access the Profile object:

ALTER PROCEDURE [dbo].[aspnet_Profile_GetProperties]
    @ApplicationName nvarchar(256),
    @UserName        nvarchar(256),
    @CurrentTimeUtc  datetime
AS
BEGIN
    DECLARE @ApplicationId uniqueidentifier
    SELECT @ApplicationId = NULL
    SELECT @ApplicationId = ApplicationId
    FROM dbo.aspnet_Applications
    WHERE LOWER(@ApplicationName) = LoweredApplicationName

    IF (@ApplicationId IS NULL)
        RETURN

    DECLARE @UserId uniqueidentifier
    DECLARE @LastActivityDate datetime
    SELECT @UserId = NULL

    SELECT @UserId = UserId, @LastActivityDate = LastActivityDate
    FROM dbo.aspnet_Users
    WHERE ApplicationId = @ApplicationId AND LoweredUserName = LOWER(@UserName)

    IF (@UserId IS NULL)
        RETURN

    SELECT TOP 1 PropertyNames, PropertyValuesString, PropertyValuesBinary
    FROM dbo.aspnet_Profile
    WHERE UserId = @UserId

    IF (@@ROWCOUNT > 0)
    BEGIN
        UPDATE dbo.aspnet_Users
        SET LastActivityDate = @CurrentTimeUtc
        WHERE UserId = @UserId
    END
END

There are two SELECT operations that hold locks on two very high read tables, aspnet_Users and aspnet_Profile. Moreover, there’s a nasty UPDATE statement: it tries to update the LastActivityDate of a user whenever you access the Profile object for the first time within an HTTP request.

This stored proc alone is enough to bring your site down. It did for us, because we use the Profile Provider everywhere. This stored proc was called around 300 times/sec. We were having nightmarishly slow performance on the website and many lock timeouts and transaction deadlocks. So, we added the transaction isolation level, and we also modified the UPDATE statement to only perform an update when the LastActivityDate is over an hour old. This means the same user’s LastActivityDate won’t be updated if the user hits the site again within the same hour.

So, after the modifications, the stored proc looked like
this:

ALTER PROCEDURE [dbo].[aspnet_Profile_GetProperties]
    @ApplicationName nvarchar(256),
    @UserName        nvarchar(256),
    @CurrentTimeUtc  datetime
AS
BEGIN
    -- 1. Please no more locks during reads
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    DECLARE @ApplicationId uniqueidentifier
    --SELECT @ApplicationId = NULL
    --SELECT @ApplicationId = ApplicationId
    --FROM dbo.aspnet_Applications
    --WHERE LOWER(@ApplicationName) = LoweredApplicationName
    --IF (@ApplicationId IS NULL)
    --    RETURN

    -- 2. No more call to Application table. We have only one app dude!
    SET @ApplicationId = dbo.udfGetAppId()

    DECLARE @UserId uniqueidentifier
    DECLARE @LastActivityDate datetime
    SELECT @UserId = NULL

    SELECT @UserId = UserId, @LastActivityDate = LastActivityDate
    FROM dbo.aspnet_Users
    WHERE ApplicationId = @ApplicationId AND LoweredUserName = LOWER(@UserName)

    IF (@UserId IS NULL)
        RETURN

    SELECT TOP 1 PropertyNames, PropertyValuesString, PropertyValuesBinary
    FROM dbo.aspnet_Profile
    WHERE UserId = @UserId

    IF (@@ROWCOUNT > 0)
    BEGIN
        -- 3. Do not update the same user within an hour
        IF DateDiff(n, @LastActivityDate, @CurrentTimeUtc) > 60
        BEGIN
            -- 4. Use ROWLOCK to lock only a row since we know this query
            --    is highly selective
            UPDATE dbo.aspnet_Users WITH (ROWLOCK)
            SET LastActivityDate = @CurrentTimeUtc
            WHERE UserId = @UserId
        END
    END
END

The changes I made are numbered and commented; no need for further explanation. The only tricky thing here is that I have eliminated the call to the Application table that was there just to get the ApplicationID from the ApplicationName. Since there’s only ever one application in a database (ever heard of multiple applications storing their users separately in the same database and the same tables?), we don’t need to look up the ApplicationID on every call to every Membership stored proc. We can just get the ID once and hard code it in a function.

CREATE FUNCTION dbo.udfGetAppId()
RETURNS uniqueidentifier
WITH EXECUTE AS CALLER
AS
BEGIN
    RETURN CONVERT(uniqueidentifier, 'fd639154-299a-4a9d-b273-69dc28eb6388')
END;

This UDF returns the ApplicationID that I hardcoded after copying it from the Application table. It eliminates the need to query the Application table.

Similarly, you should make these changes in all the other stored procedures that belong to the Membership Provider. All the stored procs are missing proper locking hints, issue aggressive locks during updates, and update more frequently than is practically needed. Most of them also try to resolve the ApplicationID from the ApplicationName, which is unnecessary when you have only one web application per database. Make these changes and enjoy lock contention free, super performance from the Membership Provider!



99.99% available ASP.NET and SQL Server SaaS Production Architecture

You have a hot ASP.NET + SQL Server product growing at a thousand users per day, and you have hit the limit of your own garage hosting capability. Now that you have enough VC money in your pocket, you are planning to go out and host at some real hosting facility, maybe a colocation or managed hosting. So, you are thinking: how do you design a physical architecture that will ensure the performance, scalability, security and availability of your product? How can you achieve four-nines (99.99%) availability? How do you securely let your development team connect to production servers? How do you choose the right hardware for web and database servers? Should you use a Storage Area Network (SAN) or just local disks on RAID? How do you securely connect your office computers to the production environment?

Here I will answer all these questions. Let me first show you a diagram that I made for Pageflakes, where we ensured we get four-nines availability. Since Pageflakes is a Level 3 SaaS, it’s absolutely important that we build a high performance, highly available product that can be used from anywhere in the world 24/7, where end users get quick access to their content with complete personalization and customization, and can share it with others and with the world. So, you can take this production architecture as a very good candidate for a Level 3 SaaS:


[Diagram: the Pageflakes hosting environment]

Here’s a CodeProject article that explains all the ideas:

99.99% available ASP.NET and SQL Server SaaS Production Architecture

Hope you like it. Appreciate your vote.

