Linq to SQL: Delete an entity using Primary Key only

Linq to Sql does not come with a function like .Delete(ID) that lets you
delete an entity using only its primary key. You have to first
fetch the object you want to delete, call .DeleteOnSubmit(obj) to queue
it for deletion, and then call DataContext.SubmitChanges() to
run the delete queries against the database. So, how do you delete objects
without fetching them from the database first and avoid that extra
roundtrip?


Delete an object without getting it - Linq to Sql
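
The original code screenshot is not reproduced here, so below is a minimal sketch of
what such a function can look like. The TableDef class, the GetTableDef() helper
(shown later) and the class name are illustrative assumptions, not the original code:

using System;
using System.Data.Linq;

public static partial class EntityHelper
{
    // Hypothetical container for the cached table metadata.
    public class TableDef
    {
        public string TableName;
        public string KeyName;
    }

    public static void DeleteByPK<TEntity, TPK>(TPK id, DataContext context)
    {
        // GetTableDef() (shown later) figures out the table name and
        // primary key column for the entity type via reflection.
        TableDef def = GetTableDef(typeof(TEntity));

        // Issue a parameterized DELETE; the entity is never loaded.
        context.ExecuteCommand(
            "DELETE FROM [" + def.TableName + "] WHERE [" + def.KeyName + "] = {0}",
            id);
    }
}

With a hypothetical Customer entity whose primary key is an int, the call would
look like DeleteByPK<Customer, int>(10, dataContext).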

You can call this function using DeleteByPK(10, dataContext);

The first generic type is the entity type and the second one is the type of the
primary key. If your object’s primary key is a Guid field, specify Guid instead
of int.

How it works:

  • It figures out the table name and the primary key field name
    from the entity
  • Then it uses the table name and primary key field name to build
    a DELETE query

Figuring out the table name and primary key field name is a bit
hard. There’s some reflection involved. The GetTableDef() function
returns the table name and primary key field name for an
entity.

Every Linq Entity class is decorated with a Table attribute that has the
table name:


Linq entity declaration

Then the primary key field is decorated with a Column attribute with
IsPrimaryKey = true.


Primary Key field has Column attribute with IsPrimaryKey = true

So, using reflection we can figure out the table name and the
primary key property and the field name.

Here’s the code that does it:


Using reflection find the Table attribute and the Column attribute
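
The screenshot of that code is missing, so here is a sketch of what GetTableDef()
can look like, assuming the TableDef class and the EntityHelper class from the
earlier snippet; the dictionary keeps reflection to one pass per entity type:

using System;
using System.Collections.Generic;
using System.Data.Linq.Mapping;
using System.Reflection;

public static partial class EntityHelper
{
    // Cached per entity type, so reflection runs only once per AppDomain.
    private static readonly Dictionary<Type, TableDef> _tableDefs =
        new Dictionary<Type, TableDef>();

    private static TableDef GetTableDef(Type entityType)
    {
        lock (_tableDefs)
        {
            TableDef cached;
            if (_tableDefs.TryGetValue(entityType, out cached))
                return cached;
        }

        // Table name comes from the [Table(Name = "...")] attribute on the class.
        TableAttribute tableAttr = (TableAttribute)Attribute.GetCustomAttribute(
            entityType, typeof(TableAttribute));

        // The primary key column is the property whose [Column] has IsPrimaryKey = true.
        string keyName = null;
        foreach (PropertyInfo property in entityType.GetProperties())
        {
            ColumnAttribute columnAttr = (ColumnAttribute)Attribute.GetCustomAttribute(
                property, typeof(ColumnAttribute));
            if (columnAttr != null && columnAttr.IsPrimaryKey)
            {
                keyName = columnAttr.Name ?? property.Name;
                break;
            }
        }

        TableDef def = new TableDef
        {
            TableName = tableAttr.Name ?? entityType.Name,
            KeyName = keyName
        };
        lock (_tableDefs) { _tableDefs[entityType] = def; }
        return def;
    }
}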

Before you scream “Reflection is SLOW!!!!”, note that the
definition is cached. So, reflection is used only once per
AppDomain per entity. Subsequent calls are just a dictionary lookup
away, which is as fast as it can get.

You can also delete a collection of objects without ever fetching
any of them. Use the following function to delete a whole bunch
of objects:


Delete a list of objects using Linq to SQL
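
Again the original code is a screenshot; here is a sketch of the idea, assuming the
same GetTableDef() helper and numeric primary keys (string or Guid keys would need
proper quoting or parameterization):

using System.Collections.Generic;
using System.Data.Linq;
using System.Linq;

public static void DeleteByPK<TEntity, TPK>(IEnumerable<TPK> ids, DataContext context)
{
    TableDef def = GetTableDef(typeof(TEntity));

    // Build a comma separated key list so one DELETE ... IN (...) statement
    // removes the whole batch in a single round trip.
    string idList = string.Join(",", ids.Select(id => id.ToString()).ToArray());
    if (idList.Length == 0) return;

    context.ExecuteCommand(
        "DELETE FROM [" + def.TableName + "] WHERE [" + def.KeyName + "] IN (" + idList + ")");
}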

The code is available here:

http://code.msdn.microsoft.com/DeleteEntitiesLinq



Solving common problems with Compiled Queries in Linq to Sql for high demand ASP.NET websites

If you are using Linq to SQL, instead of writing regular Linq
Queries, you should be using
Compiled Queries
. If you are building an ASP.NET web
application that’s going to get thousands of hits per hour,
the execution overhead of Linq queries is going to consume too much
CPU and make your site slow. There’s a runtime cost
associated with each and every Linq Query you write. The queries
are parsed and converted to a nice SQL Statement on *every* hit.
It’s not done at compile time because there’s no way to
figure out what you might be sending as the parameters in the
queries during runtime. So, if you have common Linq to Sql
statements like the following one throughout your growing web
application, you are soon going to have scalability nightmares:

var query = from widget in dc.Widgets
            where widget.ID == id && widget.PageID == pageId
            select widget;

var widget = query.SingleOrDefault();

There’s
a nice blog post by JD Conley
that shows how evil Linq to Sql
queries are:



You see how many times SqlVisitor.Visit is called to
convert a Linq Query to its SQL representation? The runtime cost to
convert a Linq query to its SQL Command representation is just way
too high.


Rico Mariani has a very informative performance comparison
of
regular Linq queries vs Compiled Linq queries performance:



Compiled Query wins in every case.

So, now you know about the benefits of compiled queries. If you
are building an ASP.NET web application that is going to get high
traffic and you have a lot of Linq to Sql queries throughout your
project, you have to go for compiled queries. Compiled Queries are
built for this specific scenario.

In this article, I will show you some steps to convert regular
Linq to Sql queries to their Compiled representation and how to
avoid the dreaded exception “Compiled queries across
DataContexts with different LoadOptions not
supported.”

Here are some step by step instructions on converting a Linq to
Sql query to its compiled form:

First we need to find out all the external decision factors in a
query. That mostly means the parameters in the WHERE clause. Say we are
trying to get a user from the aspnet_users table using
Username and Application ID:
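
The query itself was shown as an image; it can look roughly like this, assuming an
aspnet_Users entity with UserName and ApplicationId (Guid) columns and a
DataContext named dc:

var query = from u in dc.aspnet_Users
            where u.UserName == userName && u.ApplicationId == applicationID
            select u;

var user = query.SingleOrDefault();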

Here, we have two external decision factors: one is the
Username and the other is the Application ID. So, first think this
way: if you were to wrap this query in a function that just
returns the query as it is, what would you do? You would create a
function that takes the DataContext (named dc here) and
then two parameters named userName and applicationID, right?

So, be it. We create one function that returns just this
query:


Converting a Linq Query to a function
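
A sketch of that wrapper function, reusing the assumed aspnet_Users entity and a
hypothetical MyDataContext type:

private static IQueryable<aspnet_User> GetQuery(
    MyDataContext dc, string userName, Guid applicationID)
{
    return from u in dc.aspnet_Users
           where u.UserName == userName && u.ApplicationId == applicationID
           select u;
}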

Next step is to replace this function with a Func<> representation
that returns the query. This is the hard part. If you haven’t
dealt with Func<> and Lambda
expression before, then I suggest you read this
and
this
and then continue.

So, here’s the delegate representation of the above
function:


Creating a delegate out of Linq Query
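
A sketch of the delegate form, using the same assumed types:

private static readonly Func<MyDataContext, string, Guid, IQueryable<aspnet_User>>
    GetUserQuery = (dc, userName, applicationID) =>
        from u in dc.aspnet_Users
        where u.UserName == userName && u.ApplicationId == applicationID
        select u;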

A couple of things to note here. I have declared the delegate as
static readonly
because a compiled query is declared only once and reused by all
threads. If you don’t declare Compiled Queries as static,
then you don’t get the performance gain, because compiling
queries every time they are needed is even worse than regular Linq
queries.

Then there’s the complex Func<> thing. Basically, the
generic Func<> is declared to
have the three parameters from the GetQuery function and a return
type of IQueryable.
Here the parameter types are specified so that the delegate is
strongly typed. Func<> allows up to 4
parameters and 1 return type.

Next comes the real business, compiling the query. Now that we
have the query in delegate form, we can pass this to CompiledQuery.Compile function
which compiles the delegate and returns a handle to us. Instead of
directly assigning the lambda expression to the func, we will pass
the expression through the CompiledQuery.Compile
function.


Converting a Linq Query to Compiled Query
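
A sketch of the compiled form, with the same assumed types; CompiledQuery lives in
System.Data.Linq, and the generic arguments to Compile mirror the Func<>
declaration exactly:

private static readonly Func<MyDataContext, string, Guid, IQueryable<aspnet_User>>
    GetUserQuery =
        CompiledQuery.Compile<MyDataContext, string, Guid, IQueryable<aspnet_User>>(
            (dc, userName, applicationID) =>
                from u in dc.aspnet_Users
                where u.UserName == userName && u.ApplicationId == applicationID
                select u);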

Here’s where your head starts to spin. This is so hard to read
and maintain. Bear with me. I just wrapped the lambda expression on
the right side inside the CompiledQuery.Compile function.
Basically that’s the only change. Also, when you call
CompiledQuery.Compile<>,
the generic types must match and be in exactly the same order as
the Func<>
declaration.

Fortunately, calling a compiled query is as simple as calling a
function:


Running Compiled Query
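
For example, with the assumed GetUserQuery above:

using (MyDataContext dc = new MyDataContext())
{
    var user = GetUserQuery(dc, userName, applicationID).SingleOrDefault();
}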

There you have it, a lot faster Linq Query execution. The hard
work of converting all your queries into Compiled Query pays off
when you see the performance difference.

Now, there are some challenges to Compiled Queries. The most common
one is: what do you do when you have more than 4 parameters to
supply to a Compiled Query? You can’t declare a Func<> with more than 4
types. The solution is to use a struct to encapsulate all the
parameters. Here’s an example:


Using struct in compiled query as parameter
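
A sketch of the struct approach, with hypothetical field names; the compiled query
then takes only the DataContext and the struct, no matter how many values the
struct carries:

public struct WidgetQueryParams
{
    public Guid UserId;
    public int PageId;
    public int WidgetId;
    public bool IsLocked;
    public int ColumnNo;
}

private static readonly Func<MyDataContext, WidgetQueryParams, IQueryable<Widget>>
    GetWidgetQuery =
        CompiledQuery.Compile<MyDataContext, WidgetQueryParams, IQueryable<Widget>>(
            (dc, p) =>
                from widget in dc.Widgets
                where widget.ID == p.WidgetId && widget.PageID == p.PageId
                select widget);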

Calling the query is quite simple:


Calling compiled query with struct parameter
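
With the assumed struct above, the call looks like this:

var p = new WidgetQueryParams { PageId = pageId, WidgetId = id };
var widget = GetWidgetQuery(dc, p).SingleOrDefault();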

Now to the dreaded challenge of using LoadOptions with Compiled
Query. You will notice that the following code results in an
exception:


Using DataLoadOptions with Compiled Query
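
The screenshot is not reproduced; the failing pattern looks roughly like this,
assuming a Page entity with a Widgets association and a hypothetical compiled
query named GetPageQuery:

using (MyDataContext dc = new MyDataContext())
{
    DataLoadOptions options = new DataLoadOptions();
    options.LoadWith<Page>(p => p.Widgets);   // eager-load widgets with each page
    dc.LoadOptions = options;                 // a brand new instance on every call

    var page = GetPageQuery(dc, pageId).SingleOrDefault();
}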

The above DataLoadOptions code runs perfectly
when you use regular Linq queries. But it does not work with
compiled queries. When you run this code and the query is hit for the
second time, it throws an exception:

Compiled queries across DataContexts with different
LoadOptions not supported

A compiled query remembers the DataLoadOptions once it’s called.
It does not allow executing the same compiled query with a
different DataLoadOptions again. Although
you are creating the same DataLoadOptions with the same
LoadWith<>
calls, it still throws the exception because it remembers the exact
instance that was used when the compiled query was called for the
first time. Since the next call creates a new instance of DataLoadOptions, it does not
match and the exception is thrown. You can read details about the
problem in
this forum post
.

The solution is to use a static DataLoadOptions. You cannot
create a local DataLoadOptions instance and use it
in compiled queries. It needs to be static. Here’s how you
can do it:


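A sketch of the idea, reusing the assumed Page/Widgets example: a Func<DataLoadOptions>
delegate is declared and invoked on the same line, so the multi-statement setup still
produces a single static instance.

private static readonly DataLoadOptions PageLoadOptions =
    new Func<DataLoadOptions>(() =>
    {
        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<Page>(p => p.Widgets);
        return options;
    })();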

Basically the idea is to construct a static instance of DataLoadOptions using a static
function. As writing a function for every single DataLoadOptions combination is
painful, I created a static delegate here and executed it right on
the declaration line. This is an interesting way to declare a
variable that requires more than one statement to prepare it.

Using this option is very simple:


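With the assumed static field above:

using (MyDataContext dc = new MyDataContext())
{
    dc.LoadOptions = PageLoadOptions;   // the same instance on every call
    var page = GetPageQuery(dc, pageId).SingleOrDefault();
}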

Now you can use DataLoadOptions with compiled
queries.



Deploy ASP.NET MVC on IIS 6, solve 404, compression and performance problems

There are several problems with ASP.NET MVC application when
deployed on IIS 6.0:

  • Extensionless URLs give 404 unless some URL Rewrite module is
    used or wildcard mapping is enabled
  • IIS 6.0 built-in compression does not work for dynamic
    requests. As a result, ASP.NET pages are served uncompressed
    resulting in poor site load speed.
  • Mapping wildcard extension to ASP.NET introduces the following
    problems:

    • Slow performance as all static files get handled by ASP.NET and
      ASP.NET reads the file from file system on every call
    • Expires headers don’t work for static content as IIS does not
      serve them anymore. Learn about the benefits of expires headers from

      here
      . ASP.NET serves a fixed expires header that makes content
      expire in a day.
    • Cache-Control header does not produce max-age properly and thus
      caching does not work as expected. Learn about caching best
      practices from
      here
      .
  • After deploying on a domain as the root site, the homepage
    produces HTTP 404.

Problem 1: Visiting your website’s homepage gives 404 when
hosted on a domain

You have done the wildcard mapping, mapped the .mvc extension to
the ASP.NET ISAPI handler, written the route mapping for Default.aspx
or default.aspx (lowercase), but still when you visit your homepage
after deployment, you get:



You will find people banging their heads on the wall here:

The solution is to capture hits going to “/” and rewrite them to
Default.aspx:


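The screenshot is not reproduced here. One way to do the rewrite (a sketch, assuming
it lives in Global.asax; the original may have used a different hook) is:

// Global.asax.cs
protected void Application_BeginRequest(object sender, EventArgs e)
{
    // The site root has no extension, so IIS 6 / wildcard mapping leaves it
    // unhandled; rewrite it to Default.aspx which ASP.NET MVC routes handle.
    string path = HttpContext.Current.Request.AppRelativeCurrentExecutionFilePath;
    if (path == "~/")
        HttpContext.Current.RewritePath("~/Default.aspx", false);
}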

You can apply this approach to any URL that ASP.NET MVC should be
handling for you but isn’t. Just see the URL reported on
the 404 error page and then rewrite it to a proper URL.

Problem 2: IIS 6 compression is no longer working after
wildcard mapping

When you enable wildcard mapping, IIS 6 compression no longer
works for extensionless URL because IIS 6 does not see any
extension which is defined in IIS Metabase. You can learn about IIS
6 compression feature and how to configure it properly from

my earlier post
.

The solution is to use an HttpModule to do the compression for
dynamic requests.

Problem 3: ASP.NET ISAPI does not cache Static Files

When ASP.NET’s DefaultHttpHandler serves static files, it
does not cache the files in memory or in the ASP.NET cache. As a
result, every hit to a static file results in a file read. Below is
the decompiled code in DefaultHttpHandler when it handles a
static file. As you see here, it makes a file read on every hit and
it only sets the expiration to one day in the future. Moreover, it
generates an ETag for each file based on the file’s modified date.
For best caching efficiency, we need to get rid of that
ETag, produce an expiry date in the far future (like 30 days),
and produce a Cache-Control header which offers better control
over caching.



So, we need to write a custom static file handler that will
cache small files like images, Javascripts, CSS, HTML and so on in
ASP.NET cache and serve the files directly from cache instead of
hitting the disk. Here are the steps:

  • Install an HttpModule that installs a Compression Stream
    on Response.Filter so that anything written on Response gets
    compressed. This serves dynamic requests.
  • Replace ASP.NET’s DefaultHttpHandler that listens on *.*
    for static files.
  • Write our own Http Handler that will deliver compressed
    response for static resources like Javascript, CSS, and HTML.



Here’s the mapping in ASP.NET’s web.config for the
DefaultHttpHandler. You will have to replace this with your
own handler in order to serve static files compressed and
cached.

Solution 1: An Http Module to compress dynamic requests

First, you need to compress the responses that are served by
the MvcHandler or ASP.NET’s default Page Handler. The
following HttpCompressionModule hooks on the
Response.Filter and installs a GZipStream or
DeflateStream on it so that whatever is written on the
Response stream gets compressed.



These are formalities for a regular HttpModule. The real
hook is installed as below:



Here you see we ignore requests that are handled by ASP.NET’s
DefaultHttpHandler and our own StaticFileHandler that
you will see in the next section. After that, it checks whether the
request allows content to be compressed. The Accept-Encoding
header contains “gzip” or “deflate” or both when the browser supports
compressed content. So, when the browser supports compressed content, a
Response Filter is installed to compress the output.
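
Since the code is only shown as screenshots, here is a sketch of such a module;
the handler checks described above are reduced to a comment:

using System;
using System.IO.Compression;
using System.Web;

public class HttpCompressionModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        // The real module also skips requests that DefaultHttpHandler or the
        // StaticFileHandler will serve; that check is omitted in this sketch.
        string acceptEncoding = app.Request.Headers["Accept-Encoding"] ?? string.Empty;

        if (acceptEncoding.Contains("gzip"))
        {
            app.Response.Filter = new GZipStream(app.Response.Filter, CompressionMode.Compress);
            app.Response.AppendHeader("Content-Encoding", "gzip");
        }
        else if (acceptEncoding.Contains("deflate"))
        {
            app.Response.Filter = new DeflateStream(app.Response.Filter, CompressionMode.Compress);
            app.Response.AppendHeader("Content-Encoding", "deflate");
        }
    }

    public void Dispose() { }
}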

Solution 2: An Http Handler to compress and cache static file
requests

Here’s how the handler works:

  • Hooks on *.* so that all unhandled requests get served by the
    handler
  • Handles some specific files like js, css, html, graphics files.
    Anything else, it lets ASP.NET transmit it
  • The extensions it handles itself, it caches the file content so
    that subsequent requests are served from cache
  • It allows compression of some specific extensions like js, css,
    html. It does not compress graphics files or any other
    extension.

Let’s start with the handler code:



Here you will find the extensions the handler handles and the
extensions it compresses. You should only put text file types
in COMPRESS_FILE_TYPES.

Now start handling each request from
BeginProcessRequest.



Here you decide the compression mode based on the
Accept-Encoding header. If the browser does not support
compression, do not perform any compression. Then check if the file
being requested falls in one of the extensions that we support. If
not, let ASP.NET handle it. You will see soon how.



Calculate the cache key based on the compression mode and the
physical path of the file. This ensures that no matter what URL is
requested, we have one cache entry per physical file. The physical
file path won’t be different for the same file. The compression mode is
used in the cache key because we need to store a different copy of
the file’s content in the ASP.NET cache for each compression mode. So,
there will be one uncompressed version, a gzip compressed version
and a deflate compressed version.

Next, check if the file exists. If not, throw HTTP 404. Then
create a memory stream that will hold the bytes for the file or the
compressed content. Then read the file and write in the memory
stream either directly or via a GZip or Deflate stream. Then cache
the bytes in the memory stream and deliver to response. You will
see the ReadFileData and CacheAndDeliver functions
soon.



This function delivers content directly from ASP.NET cache. The
code is simple, read from cache and write to the response.

When the content is not available in cache, read the file bytes
and store in a memory stream either as it is or compressed based on
what compression mode you decided before:



Here bytes are read in chunks in order to avoid a large
memory allocation. You could read the whole file in one shot into
a byte array the size of the file, but I wanted to save memory
allocation. Do a performance test to figure out whether reading
in 8K chunks is the best approach for you.
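
A sketch of that chunked read, assuming a ReadFileData-style helper that writes into
whatever output stream was chosen (a plain MemoryStream or a GZip/Deflate stream
wrapped around one); the actual function in the download may differ:

using System.IO;

private const int CHUNK_SIZE = 8 * 1024;

private static void ReadFileData(string physicalPath, Stream outputStream)
{
    using (FileStream fs = new FileStream(physicalPath, FileMode.Open,
        FileAccess.Read, FileShare.Read))
    {
        byte[] buffer = new byte[CHUNK_SIZE];
        int bytesRead;

        // Copy chunk by chunk instead of allocating one file-sized array.
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
        }
    }
}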

Now you have the bytes to write to the response. Next step is to
cache it and then deliver it.



Now the two functions that you have seen several times and have
been wondering what they do. Here they are:



WriteResponse has no tricks, but
ProduceResponseHeader has much wisdom in it. First it turns
off response buffering so that ASP.NET does not store the written
bytes in any internal buffer. This saves some memory allocation.
Then it produces proper cache headers to cache the file in the browser
and proxy for 30 days, ensures proxies revalidate the file after the
expiry date, and also produces the Last-Modified date from
the file’s last write time in UTC.
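
A sketch of the header logic just described, with assumed parameters (System.Web and
System.IO imported); the actual ProduceResponseHeader in the download may differ:

private static void ProduceResponseHeader(HttpResponse response, string physicalPath,
    int contentLength, string contentEncoding)
{
    // Don't buffer the written bytes inside ASP.NET.
    response.Buffer = false;

    // Cache in browser and proxy for 30 days, emit a proper max-age,
    // and force proxies to revalidate after expiry. No ETag is produced.
    response.Cache.SetCacheability(HttpCacheability.Public);
    response.Cache.SetExpires(DateTime.Now.AddDays(30));
    response.Cache.SetMaxAge(TimeSpan.FromDays(30));
    response.Cache.SetRevalidation(HttpCacheRevalidation.ProxyCaches);

    // Last-Modified from the file's last write time in UTC.
    response.Cache.SetLastModified(File.GetLastWriteTimeUtc(physicalPath));

    response.AppendHeader("Content-Length", contentLength.ToString());
    if (!string.IsNullOrEmpty(contentEncoding))
        response.AppendHeader("Content-Encoding", contentEncoding);
}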

How to use it

Get the HttpCompressionModule and
StaticFileHandler from:

http://code.msdn.microsoft.com/fastmvc

Then install them in web.config. First you install the
StaticFileHandler by removing the existing mapping for
path=”*” and then you install the HttpCompressionModule.



That’s it! Enjoy a faster and more responsive ASP.NET MVC
website deployed on IIS 6.0.


Fast, Streaming AJAX proxy – continuously download from cross domain

Due to the browser’s prohibition on cross-domain
XMLHTTP calls, all AJAX websites must have a server-side proxy
to fetch content from external domains like Flickr or Digg. From
client-side javascript code, an XMLHTTP call goes to the server-side
proxy hosted on the same domain and then the proxy downloads
the content from the external server and sends it back to the browser.
In general, all AJAX websites on the Internet that are showing
content from external domains follow this proxy approach,
except some rare ones that use JSONP. Such a proxy gets a very
large number of hits when a lot of components on the website are
downloading content from external domains. So, it becomes a
scalability issue when the proxy starts getting millions of hits.
Moreover, the web page’s overall load performance largely depends on
the performance of the proxy as it delivers content to the page. In
this article, we will take a look at how we can take a conventional
AJAX Proxy and make it faster, asynchronous, continuously streaming
content and thus more scalable.

You can see such a proxy in action when you go to Pageflakes.com. You will see
flakes (widgets) loading many different kinds of content like weather feeds,
Flickr photos, YouTube videos, and RSS from many different external
domains. All these are done via a Content Proxy. The Content
Proxy served about 42.3 million URLs last month, which is
quite an engineering challenge for us to make it both fast and
scalable. Sometimes the Content Proxy serves megabytes of data, which
poses an even greater engineering challenge. As such a proxy gets a large
number of hits, if we can save on average 100ms from each call,
we can save 4.23 million seconds of
download/upload/processing time every month. That’s about 1175 man
hours wasted throughout the world by millions of people staring at
the browser waiting for content to download.

Such a content proxy takes an external server’s URL as a query
parameter. It downloads the content from the URL and then writes
the content as response back to browser.



Figure: Content Proxy working as a middleman between browser and
external domain

The above timeline shows how the request goes to the server and then
server makes a request to external server, downloads the response
and then transmits back to the browser. The response arrow from
proxy to browser is larger than the response arrow from external
server to proxy because generally proxy server’s hosting
environment has better download speed than the user’s Internet
connectivity.

Such a content proxy is also available in my open source Ajax
Web Portal Dropthings.com.
You can see from its
code
how such a proxy is implemented.

The following is a very simple synchronous, non-streaming,
blocking Proxy:

[WebMethod]
[ScriptMethod(UseHttpGet = true)]
public string GetString(string url)
{
    using (WebClient client = new WebClient())
    {
        string response = client.DownloadString(url);
        return response;
    }
}

Although it shows the general principle, it’s nowhere close
to a real proxy because:

  • It’s a synchronous proxy and thus not scalable. Every call to
    this web method causes the ASP.NET thread to wait until the call to
    the external URL completes.
  • It’s non-streaming. It first downloads the entire
    content on the server, storing it in a string, and then uploads
    that entire content to the browser. If you pass the MSDN feed URL, it will
    download that gigantic 220 KB RSS XML on the server and store it in
    a 220 KB long string (actually double the size, as .NET strings are
    all Unicode strings) and then write 220 KB to the ASP.NET Response
    buffer, consuming another 220 KB UTF8 byte array in memory. Then
    that 220 KB will be passed to IIS in chunks so that it can transmit
    it to the browser.
  • It does not produce proper response headers to cache the
    response on the browser. Nor does it deliver important headers like
    Content-Type from the source.
  • If the external URL is providing gzipped content, it decompresses
    the content into a string representation and thus wastes server
    memory.
  • It does not cache the content on the server. So, repeated calls
    to the same external URL within the same second or minute will
    download content from the external URL and thus waste bandwidth on
    your server.

So, we need an asynchronous streaming proxy that
transmits the content to the browser while it downloads from the
external domain server. So, it will download bytes from external
URL in small chunks and immediately transmit that to the browser.
As a result, the browser will see a continuous transmission of bytes
right after calling the web service. There will be no delay while
the content is fully downloaded on the server.

Before I show you the complex streaming proxy code, let’s take
an evolutionary approach. Let’s build a better Content Proxy than
the one shown above, one that is still synchronous and non-streaming but does
not have the other problems mentioned above. We will build an HTTP
Handler named RegularProxy.ashx which will take url
as a query parameter. It will also take cache as a query
parameter which it will use to produce proper response headers in
order to cache the content on the browser. Thus it will save the
browser from downloading the same content again and again.

<%@ WebHandler Language="C#" Class="RegularProxy" %>

using System;
using System.Web;
using System.Web.Caching;
using System.Net;
using ProxyHelpers;

public class RegularProxy : IHttpHandler {

    public void ProcessRequest (HttpContext context) {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        // We don't want to buffer because we want to save memory
        context.Response.Buffer = false;

        // Serve from cache if available
        if (context.Cache[url] != null)
        {
            context.Response.BinaryWrite(context.Cache[url] as byte[]);
            context.Response.Flush();
            return;
        }

        using (WebClient client = new WebClient())
        {
            if (!string.IsNullOrEmpty(contentType))
                client.Headers["Content-Type"] = contentType;

            client.Headers["Accept-Encoding"] = "gzip";
            client.Headers["Accept"] = "*/*";
            client.Headers["Accept-Language"] = "en-US";
            client.Headers["User-Agent"] =
                "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6";

            byte[] data = client.DownloadData(url);

            context.Cache.Insert(url, data, null,
                Cache.NoAbsoluteExpiration,
                TimeSpan.FromMinutes(cacheDuration),
                CacheItemPriority.Normal, null);

            if (!context.Response.IsClientConnected) return;

            // Deliver content type, encoding and length as received from the external URL
            context.Response.ContentType = client.ResponseHeaders["Content-Type"];
            string contentEncoding = client.ResponseHeaders["Content-Encoding"];
            string contentLength = client.ResponseHeaders["Content-Length"];

            if (!string.IsNullOrEmpty(contentEncoding))
                context.Response.AppendHeader("Content-Encoding", contentEncoding);
            if (!string.IsNullOrEmpty(contentLength))
                context.Response.AppendHeader("Content-Length", contentLength);

            if (cacheDuration > 0)
                HttpHelper.CacheResponse(context, cacheDuration);

            // Transmit the exact bytes downloaded
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable {
        get {
            return false;
        }
    }
}

There are several enhancements in this proxy:

  • It allows server side caching of content. Same URL requested by
    a different browser within a time period will not be downloaded on
    server again, instead it will be served from cache.
  • It generates proper response cache header so that the content
    can be cached on browser.
  • It does not decompress the downloaded content in memory. It
    keeps the original byte stream intact. This saves memory
    allocation.
  • It transmits the data in a non-buffered fashion, which means
    the ASP.NET Response object does not buffer the response and thus
    saves memory.

However, this is a blocking proxy. We need to make a streaming
asynchronous proxy for better performance. Here’s why:



Figure: Continuous streaming proxy

As you see, when data is transmitted from server to browser
while server downloads the content, the delay for server side
download is eliminated. So, if server takes 300ms to download
something from external source, and then 700ms to send it back to
browser, you can save up to 300ms Network Latency between server
and browser. The situation gets even better when the external
server that serves the content is slow and takes quite some time to
deliver the content. The slower external site is, the more saving
you get in this continuous streaming approach. This is
significantly faster than blocking approach when the external
server is in Asia or Australia and your server is in USA.

The approach for continuous proxy is:

  • Read bytes from external server in chunks of 8KB from a
    separate thread (Reader thread) so that it’s not blocked
  • Store the chunks in an in-memory Queue
  • Write the chunks to ASP.NET Response from that same queue
  • If the queue is finished, wait until more bytes are downloaded
    by the reader thread



The Pipe Stream needs to be thread safe and it needs to support
blocking reads. By blocking read I mean that if a thread tries to read
a chunk from it and the stream is empty, the thread is suspended
until another thread writes something to the stream. Once a
write happens, the reader thread is resumed and allowed to
read. I have taken the code of PipeStream from the CodeProject
article by James Kolpack
and extended it to make sure it’s high
performance, supports storing chunks of bytes instead of
single bytes, supports timeouts on waits, and so on.
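
The full PipeStream is in the linked article; the core blocking-read idea can be
sketched like this (a simplified, assumed version without the timeout support):

using System.Collections.Generic;
using System.Threading;

public class ChunkPipe
{
    private readonly Queue<byte[]> _chunks = new Queue<byte[]>();
    private bool _completed;

    // Producer side: the reader thread downloading from the external server.
    public void Write(byte[] chunk)
    {
        lock (_chunks)
        {
            _chunks.Enqueue(chunk);
            Monitor.Pulse(_chunks);       // wake a blocked reader
        }
    }

    public void Complete()
    {
        lock (_chunks)
        {
            _completed = true;
            Monitor.PulseAll(_chunks);
        }
    }

    // Consumer side: returns the next chunk, blocking while the pipe is empty;
    // returns null once the producer has completed and the pipe is drained.
    public byte[] Read()
    {
        lock (_chunks)
        {
            while (_chunks.Count == 0 && !_completed)
                Monitor.Wait(_chunks);

            return _chunks.Count > 0 ? _chunks.Dequeue() : null;
        }
    }
}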

I did some comparison between the Regular proxy (blocking,
synchronous, download all then deliver) and the Streaming Proxy
(continuous transmission from external server to browser). Both
proxies download the MSDN feed and deliver it to the browser. The
time taken here shows the total duration from the browser making the
request to the proxy to getting the entire response.



Figure: Time taken by Streaming Proxy vs Regular Proxy while
downloading MSDN feed

Not a very scientific graph, and response time varies with the link
speed between the browser and the proxy server and then from the proxy
server to the external server. But it shows that most of the time,
the Streaming Proxy outperformed the Regular proxy.



Figure: Test client to compare between Regular Proxy and Streaming
Proxy

You can also test both proxies’ response times by going to
http://labs.dropthings.com/AjaxStreamingProxy.
Put your URL and hit the Regular/Stream button and see the “Statistics”
text box for the total duration. You can turn on “Cache response”
and hit a URL from one browser. Then go to another browser and hit
the URL to see the response coming directly from the server cache. Also,
if you hit the URL again in the same browser, you will see the response
comes instantly without ever making a call to the server. That’s the
browser cache at work.

Learn more about Http Response caching from my blog post:

Making best use of cache for high performance website

A Visual Studio Web Test run inside a Load Test shows a better
picture:



Figure: Regular Proxy load test result shows Average
Requests/Sec 0.79
and Avg Response Time 2.5 sec



Figure: Streaming Proxy load test result shows Avg Req/Sec is
1.08
and Avg Response Time 1.8 sec.

From the above load test results, the Streaming Proxy delivers 26%
better Requests/Sec and 29% better Average Response Time. The
numbers may sound small, but at Pageflakes, 29% better response
time means 1.29 million seconds saved per month for all the
users on the website. So, we are effectively saving 353 man hours
per month which were wasted staring at the browser screen while it
downloads content.

Building the Streaming Proxy

The details of how the Streaming Proxy is built are quite long and
not suitable for a blog post. So, I have written a CodeProject
article:

Fast, Scalable,
Streaming AJAX Proxy – continuously deliver data from cross
domain

Please read the article and please vote for me if you find it
useful.

10 ASP.NET Performance and Scalability Secrets

ASP.NET 2.0 has many secrets which, when revealed, can give you a big
performance and scalability boost. For instance, there are hidden
bottlenecks in the Membership and Profile providers which can be solved
easily to make authentication and authorization faster.
Furthermore, ASP.NET Http pipeline can be tweaked to avoid
executing unnecessary code that gets hit on each and every request.
Not only that, ASP.NET Worker Process can be pushed to its limit to
squeeze out every drop of performance out of it. Page fragment
output caching on the browser (not on the server) can save a
significant amount of download time on repeated visits. On-demand
UI loading can give your site a fast and smooth feeling. Finally,
Content Delivery Networks (CDN) and proper use of HTTP Cache
headers can make your website screaming fast when implemented
properly. In this article, you will learn these techniques that can
give your ASP.NET application a big performance and scalability
boost and prepare it to perform well under 10 times to 100 times
more traffic.

http://www.codeproject.com/KB/aspnet/10ASPNetPerformance.aspx

In this article I have shown the following techniques:

  • ASP.NET Pipeline optimization
  • ASP.NET Process configuration optimization
  • Things you must do for ASP.NET before going live
  • Content Delivery Network
  • Caching AJAX calls on browser
  • Making best use of Browser Cache
  • On demand progressive UI loading for fast smooth
    experience
  • Optimize ASP.NET 2.0 Profile provider
  • How to query ASP.NET 2.0 Membership tables without bringing
    down the site
  • Prevent Denial of Service (DOS) attack

The above techniques can be implemented on any ASP.NET website,
especially those that use ASP.NET 2.0’s Membership and Profile
providers.

You can learn a lot more about performance and scalability
improvement of ASP.NET and ASP.NET AJAX websites from my book –
Building a
Web 2.0 portal using ASP.NET 3.5
.

My first book – Building a Web 2.0 Portal with ASP.NET 3.5

My first book “Building a Web 2.0 Portal with ASP.NET 3.5” from
O’Reilly is published and available in the stores. This book
explains in detail the architecture design, development, test,
deployment, performance and scalability challenges of my open
source web portal Dropthings.com. Dropthings is a prototype of a web
portal similar to iGoogle or Pageflakes. But this portal is developed using
recently released brand new technologies like ASP.NET 3.5, C# 3.0,
Linq to Sql, Linq to XML, and Windows Workflow Foundation. It makes
heavy use of ASP.NET AJAX 1.0. Throughout my career I have built
several state-of-the-art personal, educational, enterprise and mass consumer web
portals
. This book collects my experience in building all of
those portals.

O’Reilly Website:
http://www.oreilly.com/catalog/9780596510503/

Amazon:

http://www.amazon.com/Building-Web-2-0-Portal-ASP-NET/dp/0596510500

Disclaimer: This book does not show you how to build Pageflakes.
Dropthings is entirely different in terms of architecture,
implementation and the technologies involved.

You learn how to:

  • Implement a highly decoupled architecture following the popular
    n-tier, widget-based application model
  • Provide drag-and-drop functionality, and use ASP.NET 3.5 to
    build the server-side part of the web layer
  • Use LINQ to build the data access layer, and Windows Workflow
    Foundation to build the business layer as a collection of
    workflows
  • Build client-side widgets using JavaScript for faster
    performance and better caching
  • Get maximum performance out of the ASP.NET AJAX Framework for
    faster, more dynamic, and scalable sites
  • Build a custom web service call handler to overcome
    shortcomings in ASP.NET AJAX 1.0 for asynchronous, transactional,
    cache-friendly web services
  • Overcome JavaScript performance problems, and help the user
    interface load faster and be more responsive
  • Solve various scalability and security problems as your site
    grows from hundreds to millions of users
  • Deploy and run a high-volume production site while solving
    software, hardware, hosting, and Internet infrastructure
    problems

If you’re ready to build state-of-the art, high-volume web
applications that can withstand millions of hits per day, this book
has exactly what you need.

Linq to SQL: How to Attach object to a different data context

After upgrading to Visual Studio 2008 RTM, you will have trouble
updating Linq to SQL Classes which are read from one data context
and then updated into another data context. You will get this
exception during update:

System.NotSupportedException: An attempt has been made to
Attach or Add an entity that is not new, perhaps having been loaded
from another DataContext. This is not supported.

Here’s a typical example taken from a forum post
(http://forums.microsoft.com/msdn/ShowPost.aspx?postid=2524396&siteid=1):

public static void UpdateEmployee(Employee employee)
{
    using (HRDataContext dataContext = new HRDataContext())
    {
        // Get original employee
        Employee originalEmployee = dataContext.Employees.Single(
            e => e.EmployeeId == employee.EmployeeId);

        // Attach to datacontext
        dataContext.Employees.Attach(employee, originalEmployee);

        // Save changes
        dataContext.SubmitChanges();
    }
}

When you call the Attach function, you will get the exception
mentioned above.

Here’s a way to do this. First, create a partial class that adds
a Detach method to the Employee class. This method will
detach the object from its data context and detach associated
objects.

public partial class Employee
{
    public void Detach()
    {
        this.PropertyChanged = null;
        this.PropertyChanging = null;

        // Assuming there's a foreign key from Employee to Boss
        this.Boss = default(EntityRef<Boss>);

        // Similarly set child objects to default as well
        this.Subordinates = default(EntitySet<Subordinate>);
    }
}
Now during update, call Detach before attaching the object to a DataContext.

public static void UpdateEmployee(Employee employee)
{
    using (HRDataContext dataContext = new HRDataContext())
    {
        // Detach, then attach to the new datacontext
        employee.Detach();
        dataContext.Employees.Attach(employee);

        // Save changes
        dataContext.SubmitChanges();
    }
}
This'll work. It assumes the employee object already has its primary key populated.

You might see that during update it generates a bloated UPDATE
statement where each and every property appears in the WHERE
clause. In that case, set UpdateCheck to Never for
all properties of the Employee class in the Object Relational
Designer.

On-demand UI loading on AJAX websites

AJAX websites are all about loading as many features as possible
into the browser without any postback. If you look at
Start Pages like Pageflakes, it’s only one single
page that gives you all the features of the whole application with
zero postbacks. A quick and dirty approach for doing this is to
deliver every possible html snippet inside hidden divs during page
load and then make those divs visible when needed. But this makes
first-time loading way too long and the browser gives sluggish
performance as too much stuff is on the DOM. So, a better approach
is to load html snippets and the necessary javascript on-demand. In my
dropthings project, I have
shown an example of how this is done.


When you click the “help” link, it loads the content of the
help section dynamically. This html is not delivered as part of the
default.aspx that renders the first page. Thus the giant
html and graphics related to the help section have no effect on site
load performance. They are only loaded when the user clicks the “help”
link. Moreover, the content gets cached on the browser and thus loads only
once. When the user clicks the “help” link again, it’s served directly
from the browser cache, instead of being fetched from the origin server
again.

The principle is: make an XMLHTTP call to an .aspx page,
get the response html, put that response html inside a container
DIV, and make that DIV visible.

The ASP.NET AJAX framework has a Sys.Net.WebRequest class which you
can use to make regular HTTP calls. You can define http method,
URI, headers and the body of the call. It’s kind of a low
level function for making direct calls via XMLHTTP. Once you
construct a web request, you can execute it using
Sys.Net.XMLHttpExecutor.

function showHelp()
{
    var request = new Sys.Net.WebRequest();
    request.set_httpVerb("GET");
    request.set_url('help.aspx');
    request.add_completed( function( executor )
    {
        if (executor.get_responseAvailable())
        {
            var helpDiv = get('HelpDiv');
            var helpLink = get('HelpLink');

            var helpLinkBounds = Sys.UI.DomElement.getBounds(helpLink);

            helpDiv.style.top = (helpLinkBounds.y + helpLinkBounds.height) + "px";

            var content = executor.get_responseData();
            helpDiv.innerHTML = content;
            helpDiv.style.display = "block";
        }
    });

    var executor = new Sys.Net.XMLHttpExecutor();
    request.set_executor(executor);
    executor.executeRequest();
}

The example shows how the help section is loaded by hitting
help.aspx and injecting its response inside the
HelpDiv. The response can be cached by the output cache
directive set on help.aspx. So, the next time the user
clicks the link, the UI pops up immediately. The
help.aspx file has no <html> or <body> block, only the
content that is set inside the DIV.

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Help.aspx.cs" Inherits="Help" %>
<%@ OutputCache Location="ServerAndClient" Duration="604800" VaryByParam="none" %>
<div class="helpContent">
    <div id="lipsum">
        <p>
        Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Duis lorem
        eros, volutpat sit amet, venenatis vitae, condimentum at, dolor. Nunc
        porttitor eleifend tellus. Praesent vitae neque ut mi rutrum cursus.

Using this approach, you can break the UI into smaller
.aspx files. Although these .aspx files cannot have
JavaScript or stylesheet blocks, they can contain a large amount
of html that you need to show on the UI on-demand. Thus you can
keep the initial download to an absolute minimum, just for loading the
basic stuff. When the user explores new features on the site, load
those areas incrementally.

Safe COM: Managed Disposable Strongly Typed safe wrapper to late bound COM

There are several problems using COM from
.NET:

  • You cannot implement the Dispose pattern
    by utilizing the “using” block in order to safely dispose
    COM references.
  • You cannot guarantee COM references are finalized. There’s no
    way to implement “~Destructor()” for COM
    references.
  • COM reference is not released when a call to
    Marshal.ReleaseComObject is skipped due to an
    Exception.
  • When you “Add Reference…” to a COM library, the reference is
    version specific. So, if you add a reference to the Office 2003 COM
    library, it does not work properly when deployed to Office
    2000.
  • The only solution to version-independent COM is to use late-bound
    operations, but then you miss all the features of a strongly typed
    language.

Let’s solve all these problems. We want to use a Managed
strongly typed
approach to Late Bound COM operations and
also utilize the Dispose pattern on COM objects. The
solution proposed here
works for any COM object, whether it’s the Microsoft Office COM
libraries, IE & DHTML objects, or even your own COM objects.
You should use this approach whenever you are dealing with any type
of COM library.
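
As a flavor of the approach, here is a minimal sketch (not the CodeProject
implementation) of a disposable, late-bound wrapper; the real SafeCOMWrapper adds
the strongly typed layer on top of this idea:

using System;
using System.Reflection;
using System.Runtime.InteropServices;

public class ComObject : IDisposable
{
    private object _instance;

    public ComObject(string progId)
    {
        // Late bound creation keeps the code version independent,
        // e.g. new ComObject("Word.Application") works for any Office version.
        _instance = Activator.CreateInstance(Type.GetTypeFromProgID(progId));
    }

    public object Invoke(string methodName, params object[] args)
    {
        return _instance.GetType().InvokeMember(
            methodName, BindingFlags.InvokeMethod, null, _instance, args);
    }

    public object GetProperty(string propertyName)
    {
        return _instance.GetType().InvokeMember(
            propertyName, BindingFlags.GetProperty, null, _instance, null);
    }

    public void Dispose()
    {
        // The "using" block guarantees this runs, even when an exception
        // skips an explicit Marshal.ReleaseComObject call.
        if (_instance != null)
        {
            Marshal.ReleaseComObject(_instance);
            _instance = null;
        }
    }
}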

If you like this solution, please vote for me.
http://www.codeproject.com/csharp/safecomwrapper.asp