Create REST API using ASP.NET MVC that speaks both Json and plain Xml

UPDATE: There’s a newer article on this that shows how to create a truly RESTful API and website using the same ASP.NET MVC code.

www.codeproject.com/KB/aspnet/aspnet_mvc_restapi.aspx

ASP.NET MVC Controllers can directly return objects and collections, without rendering a view, which makes them quite appealing for creating REST-like APIs. The nice extensionless URLs provided by MVC make it handy to build REST services, which means you can create APIs with smart URLs like “something.com/API/User/GetUserList”

There are some challenges to solve in order to expose REST API:

  • Based on who is calling your API, you need to be able to speak both Json and plain old Xml (POX). If the call comes from an AJAX front-end, you need to return objects serialized as Json. If it’s coming from some other client, say a PHP website, you need to return plain Xml.
  • Similarly you need to be able to understand REST, Json and plain Xml calls. Someone can hit you using a REST URL, someone can post a Json payload, or someone can post an Xml payload.

I have created an ObjectResult class which takes an object and generates Xml or Json output automatically by looking at the Content-Type header of the HttpRequest. AJAX calls send Content-Type=application/json. So, it generates Json as the response in that case, but when Content-Type is something else, it does simple Xml serialization.

image

Here’s the ObjectResult that you can use from Controllers to return objects; it takes care of the proper serialization method. The above shows the Json serialization, which is quite simple. Xml serialization is a bit more complex though:

image

Things to note here:

  • You have to force UTF8 encoding. Otherwise it produces UTF16 output.
  • XML Declaration is skipped because that’s not quite necessary. Wastes bandwidth. If you need it, turn it on.
  • I have turned on indenting for better readability. You can turn it off to save bandwidth.
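
For reference, here is a minimal sketch of what such a result class can look like, put together from the description above. It is illustrative only; the actual ObjectResult in the screenshots and download may differ in names and details.

using System.Runtime.Serialization.Json;
using System.Text;
using System.Web.Mvc;
using System.Xml;
using System.Xml.Serialization;

// Illustrative sketch only; not the exact code shown in the screenshots.
public class ObjectResult : ActionResult
{
    private readonly object _data;

    public ObjectResult(object data)
    {
        _data = data;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var request = context.HttpContext.Request;
        var response = context.HttpContext.Response;

        // AJAX calls send Content-Type: application/json
        if ((request.ContentType ?? "").Contains("application/json"))
        {
            response.ContentType = "application/json";
            new DataContractJsonSerializer(_data.GetType())
                .WriteObject(response.OutputStream, _data);
        }
        else
        {
            // Anything else gets plain old Xml
            response.ContentType = "application/xml";
            var settings = new XmlWriterSettings
            {
                Encoding = Encoding.UTF8,     // force UTF8, otherwise UTF16 is produced
                OmitXmlDeclaration = true,    // skip the XML declaration to save bandwidth
                Indent = true                 // indenting for readability
            };
            using (XmlWriter writer = XmlWriter.Create(response.OutputStream, settings))
            {
                new XmlSerializer(_data.GetType()).Serialize(writer, _data);
            }
        }
    }
}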

Some of you might be boiling inside looking at my obscure coding style. I love this style! I am spoiled by jQuery. I wish there was a cQuery. I actually started writing one, but it never saw daylight, just like my hundred other open source attempts.

Now back to Object Serialization, we got the serialization done. Now you can return objects from Controller easily:

image

You can use the test web project to call these methods and see the result:

image

So far you have seen simple object and list serialization. A best practice is to return a common result object that has some status, a message and then the real payload. It’s handy when you only need to return an error but no object or list. I use a common Result object that has three properties – ErrorCode (0 by default means success), Message (a string) and Data, which is the real object.

image
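
In case the screenshot is hard to read, the wrapper described above boils down to roughly this shape (the types are my reading of the description, not the exact code):

// Rough shape of the common result wrapper described above (illustrative).
public class Result
{
    public int ErrorCode { get; set; }   // 0 by default, meaning success
    public string Message { get; set; }  // status or error message
    public object Data { get; set; }     // the real payload, null when there is none
}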

When you want to return only a result with error message, you can do this:

image

This produces a result like this:

image

No payload here. So, the return format is always consistent. Those who are consuming the service can write a common Xml or Json parsing code to consume both success and failure responses. Those who are building an API for their website, I humbly request you to return a consistent response for both success and failure. It makes our life so much easier.

So far we have only returned objects and lists. Now we need to accept Json and Xml payloads, delivered via HTTP POST. Sometimes your client might want to upload a collection of objects in one shot for batch processing. So, they can upload objects using either Json or Xml format. There’s no native support in ASP.NET MVC to automatically parse a posted Json or Xml payload and map it to Action parameters. So, I wrote a filter that does it.

image

This filter intercepts calls going to Action methods and checks whether the client has posted Xml or Json. Based on what has been posted, it uses DataContractJsonSerializer or a simple XmlSerializer to convert the payload to objects or collections.

You use this attribute on Action methods like this:

image

The attribute expects a parameter name where it stores the deserialized object/collection. It also expects a root type that it needs to pass to the deserializer. If you are expecting a single object, specify typeof(SingleObject). If you are expecting a list of objects, specify an array of that object, like typeof(SingleObject[]).
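
As a hypothetical example (the attribute’s property names and the User type here are illustrative, not necessarily what is in the download), an action accepting a batch of objects could be decorated like this:

// Hypothetical usage; property names (Param, RootType) and the User type are illustrative.
[ObjectFilter(Param = "users", RootType = typeof(User[]))]
public ActionResult BatchUpdateUsers(User[] users)
{
    // "users" arrives already deserialized from the posted Json or Xml payload
    return new ObjectResult(new Result { ErrorCode = 0, Message = "Received " + users.Length + " users" });
}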

You can test the project live at this URL:

http://labs.dropthings.com/MvcWebAPI

The code is also available at:

http://code.msdn.microsoft.com/MvcWebAPI

Enjoy!

————

Here’s an Eid gift for my believer brothers. Check out this amazing site www.quranexplorer.com/. You will get online recitation, translation – verse by verse. The recitation of Mishari Rashid is something you have to listen to to believe. Try these two recitations to see what I mean:

Sura 97 – Verse 1
Sura 114 – Verse 1

Press the “Play” icon at bottom left (hard to find).


HTTP handler to combine multiple files, cache and deliver compressed output for faster page load

It’s a good practice to use many small Javascript and CSS files instead of one large Javascript/CSS file for better code maintainability, but this is bad in terms of website performance. Although you should write your Javascript code in small files and break large CSS files into small chunks, when the browser requests those javascript and css files, it makes one Http request per file. Every Http request results in a network roundtrip from your browser to the server, and the delay in reaching the server and coming back to the browser is called latency. So, if you have four javascripts and three css files loaded by a page, you are wasting time in seven network roundtrips. Within the USA, average latency is around 70ms. So, you waste 7×70 = 490ms, about half a second of delay. Outside the USA, average latency is around 200ms. So, that means 1400ms of waiting. The browser cannot show the page properly until CSS and Javascripts are fully loaded. So, the more latency you have, the slower the page loads.

Here’s a graph that shows how each request latency adds up and
introduces significant delay in page loading:

You can reduce the wait time by using a CDN (see my previous blog post about using a CDN). However, a better solution is to deliver multiple files over one request using an HttpHandler that combines several files and delivers them as one output. So, instead of putting many <script> or <link> tags, you just put one <script> and one <link> tag, and point them to the HttpHandler. You tell the handler which files to combine and it delivers those files in one response. This saves the browser from making many requests and eliminates the latency.

Here you can see how much improvement you get if you can combine multiple javascripts and css into one.

In a typical web page, you will see many javascripts referenced:

<script type="text/javascript" src="/Content/JScript/jquery.js">
<script type="text/javascript" src="/Content/JScript/jDate.js">
<script type="text/javascript" src="/Content/JScript/jQuery.Core.js">
<script type="text/javascript" src="/Content/JScript/jQuery.Delegate.js">
<script type="text/javascript" src="/Content/JScript/jQuery.Validation.js">

Instead of these individual <script> tags, you can use only one <script> tag to serve the whole set of scripts using an Http Handler:

<script type="text/javascript" 
    src="HttpCombiner.ashx?s=jQueryScripts&t=text/javascript&v=1" >

The Http Handler reads the file names defined in a configuration, combines all those files and delivers them as one response. It delivers the response gzip compressed to save bandwidth. Moreover, it generates proper cache headers to cache the response in the browser cache, so that the browser does not request it again on future visits.
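
The real handler is in the CodeProject article below; purely as a sketch of the same idea, a stripped-down combining handler could look like this (the file list is hard-coded here instead of being read from configuration, and the names and the 30-day expiry are illustrative):

using System;
using System.IO.Compression;
using System.Web;

// Stripped-down sketch of a combining handler; the real HttpCombiner.ashx
// resolves the set name (the "s" parameter) to a list of files from configuration.
public class CombinerSketch : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Hard-coded here only for illustration
        string[] files = { "~/Content/JScript/jquery.js", "~/Content/JScript/jDate.js" };

        context.Response.ContentType = context.Request["t"] ?? "text/javascript";

        // Compress the combined output when the browser supports gzip
        string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";
        if (acceptEncoding.Contains("gzip"))
        {
            context.Response.Filter = new GZipStream(context.Response.Filter, CompressionMode.Compress);
            context.Response.AppendHeader("Content-Encoding", "gzip");
        }

        // Write all files one after another in a single response
        foreach (string file in files)
        {
            context.Response.WriteFile(context.Server.MapPath(file));
            context.Response.Write("\n");
        }

        // Cache in the browser so it is not requested again on future visits
        // (the expiry window here is arbitrary)
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.Now.AddDays(30));
    }

    public bool IsReusable { get { return true; } }
}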

You can find details about the HttpHandler from this
CodeProject article:

http://www.codeproject.com/KB/aspnet/HttpCombine.aspx

You can also get the latest code from this code site:

http://code.msdn.microsoft.com/HttpCombiner

That’s it! Make your website faster to load, get more users and
earn more revenue.

 

Loading static content in ASP.NET pages from different domain for faster parallel download

Generally we put the static content (images, css, js) of our website inside the same web project. Thus they get downloaded from the same domain, like www.dropthings.com. There are three problems with this approach:

  • They occupy connections on the same domain www.dropthings.com and thus other important calls like Web service calls do not get a chance to happen earlier, as the browser can only make two simultaneous connections per domain.
  • If you are using ASP.NET Forms Authentication, then you have that gigantic Forms Authentication cookie being sent with every single request on www.dropthings.com. This cookie gets sent for all images, CSS and JS files, which have no use for the cookie. Thus it wastes upload bandwidth and makes every request slower. Upload bandwidth is very limited for users compared to download bandwidth. Generally users with 1Mbps download speed have around 128kbps upload speed. So, adding another 100 bytes on the request for the unnecessary cookie results in delay in sending the request and thus increases your site load time and the site feels slow to respond.
  • It creates enormous IIS logs as it records the cookies for each static content request. Moreover, if you are using Google Analytics to track hits to your site, it issues four big cookies that get sent for each and every image, css and js file, resulting in slower requests and even larger IIS log entries.

Let’s see the first problem, the browser’s two connection limit. See what happens when content downloads using two HTTP requests in parallel:


image

This figure shows only two files are downloaded in parallel. All the hits are going to the same domain, e.g. www.dropthings.com. As you see, only two calls can execute at the same time. Moreover, due to the browser’s way of handling script tags, once a script is being downloaded, the browser does not download anything else until the script has downloaded and executed.

Now, if we can download the images from different domain, which
allows browser to open another two simultaneous connections, then
the page loads a lot faster:


image

You see, the total page downloads 40% faster. Here only the images are downloaded from a different domain, e.g. “s.dropthings.com”, thus the calls for the scripts, CSS and webservices still go to the main domain, e.g. www.dropthings.com.

The second problem with loading static content from the same domain is the gigantic forms authentication cookie, or any other cookie being registered on the main domain, e.g. the www subdomain. Here’s what a request to Pageflakes’ website looks like with the forms authentication cookie and Google Analytics cookies:


image

You see a lot of data being sent on the request header which is of no use for any static content. Thus it wastes bandwidth, makes the request reach the server slower and produces large IIS logs.

You can solve this problem by loading static content from a different domain, as we have done at Pageflakes by loading static content from flakepage.com. As the cookies are registered only on the www subdomain, the browser does not send the cookies to any other subdomain or domain. Thus requests going to other domains are smaller and thus faster.

Wouldn’t it be great if you could just plug something into your ASP.NET project and have all the graphics, CSS and javascript URLs automatically converted to a different domain URL, without having to do anything manually like going through all your ASP.NET pages and webcontrols and changing the urls by hand?

Here’s a nice HttpFilter that will do exactly that. You just configure in your web.config what prefix you want to add in front of your javascript, css and images, and the filter takes care of changing all the links for you when a page is being rendered.

First you add these keys in your web.config‘s <appSettings> block; they define the prefix to inject before the relative URL of your static content. You can define three different prefixes for images, javascripts and css:


image

So, you can download images from one domain, javascripts from another domain and css from yet another domain in order to increase parallel download. But beware, there’s the overhead of DNS lookup, which is significant. Ideally you should have at most three unique domains used in your entire page: the main domain and two other domains.

Then you register the Filter on Application_BeginRequest
so that it intercepts all aspx pages:


image
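
If the screenshot is not readable, the registration is roughly along these lines. The filter class name (StaticContentFilter) and the appSettings key names here are illustrative, not necessarily what the downloadable code uses:

// Sketch of registering the response filter in Global.asax; names are illustrative.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;

    // Only intercept page output, not the static files themselves
    if (context.Request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
    {
        context.Response.Filter = new StaticContentFilter(
            context.Response.Filter,
            ConfigurationManager.AppSettings["ImagePrefix"],
            ConfigurationManager.AppSettings["JavascriptPrefix"],
            ConfigurationManager.AppSettings["CssPrefix"]);
    }
}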

That’s it! You will see all the <img> tags’ src attributes, <script> tags’ src attributes and <link> tags’ href attributes are automatically prefixed with the prefix defined in web.config.

Here’s how the Filter works. First it intercepts the Write method and then searches through the buffer for any of those tags. If found, it checks the src or href attribute and then sees whether the URL is absolute or relative. If relative, it inserts the prefix first and then the relative value follows.

The principle is relatively simple, but the code is far more complex than it sounds. As you work with char[] in an HttpFilter, you need to work with char arrays only, no string. Moreover, there’s a very high performance requirement for such a filter because it processes each and every page’s output. So, the filter will be processing megabytes of data every second on a busy site. Thus it needs to be extremely fast: no string allocation, no string comparison, no Dictionary or ArrayList, no StringBuilder or MemoryStream. You need to forget all these .NET goodies, go back to good old Computer Science school days and work with arrays, bytes, chars and so on.

First, we run through the content array provided and see if any of the intended tags start there.


image

The idea is to find all the image, script and link tags, see what their src/href value is and inject the prefix if needed. The WritePrefixIf(…) function does the work of parsing the attribute. One cool thing to notice here is that there’s absolutely no string comparison. Everything is done on the char[] passed to the Write method.
image

This function checks if the src/href attribute is found and writes the prefix right after the double quote if the value of the attribute does not start with http://.

Basically that’s it. The only other interesting thing is the
FindAttributeValuePos. It checks if the specified attribute
exists and if it does, finds the position of the value in the
content array so that content can be flushed up to the value
position.


image

Two other small functions that are worth mentioning are the compare functions. As you can see, there’s absolutely no string comparison involved in this entire filter:


image
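
The screenshot shows the author’s actual functions; as a rough illustration only, this kind of allocation-free comparison boils down to something like the following:

// Illustrative only: compare a token against a position in the output buffer,
// char by char, without ever allocating a string.
private static bool SameChars(char[] content, int pos, char[] token)
{
    if (pos + token.Length > content.Length) return false;

    for (int i = 0; i < token.Length; i++)
    {
        // case-insensitive comparison of single chars
        if (char.ToLowerInvariant(content[pos + i]) != char.ToLowerInvariant(token[i]))
            return false;
    }
    return true;
}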

Now, the season finale: the remaining code in the Write function that solves several challenges, like unfinished tags in a buffer. It’s possible the Write method will pass you a buffer where a tag has just started but did not end, or you may get only part of a tag and nothing more. So, these scenarios need to be handled. The idea is to detect such unfinished tags and store them in a temporary buffer. When the next Write call happens, it combines the buffer and processes it.


image

That’s it for the filter’s code.

Download the code from here. It’s just one class.

You can use this filter in conjunction with the ScriptDeferFilter that I showed in a CodeProject article, which defers script loading until after the body and combines multiple script tags into one for faster download, better compression and thus significantly faster web page load performance.

In case you are wondering whether this is production
ready
, visit www.dropthings.com and you will see
static content downloads from s.dropthings.com using this
Filter.



Open Source ASP.NET 3.5 AJAX Portal – new and improved

Last week I released a new version of Dropthings, my open source
AJAX portal, that shows many fancy Web 2.0 features and showcases
extensive use of ASP.NET 3.5, Workflow Foundation, C# 3.0 new
language features, custom ASP.NET AJAX extenders, many performance
and scalability techniques. I have written
a book
on these topics as well.

The new version implements the following performance and
scalability improvement techniques:

Here’s how the new version looks:


Dropthings new version

Hope you like the new design and the performance and scalability techniques that can significantly boost your ASP.NET website’s quality. I highly recommend these techniques for ASP.NET websites. They are easy to implement and make a world of difference in speed and smoothness for ASP.NET websites.

I am thinking about making an ASP.NET MVC version of this portal
using jQuery. Do you think it will be a hot area to explore?



Deploy ASP.NET MVC on IIS 6, solve 404, compression and performance problems

There are several problems with ASP.NET MVC application when
deployed on IIS 6.0:

  • Extensionless URLs give 404 unless some URL Rewrite module is
    used or wildcard mapping is enabled
  • IIS 6.0 built-in compression does not work for dynamic
    requests. As a result, ASP.NET pages are served uncompressed
    resulting in poor site load speed.
  • Mapping wildcard extension to ASP.NET introduces the following
    problems:

    • Slow performance as all static files get handled by ASP.NET and
      ASP.NET reads the file from file system on every call
    • Expires headers don’t work for static content as IIS does not serve them anymore. Learn about the benefits of the expires header from here. ASP.NET serves a fixed expires header that makes content expire in a day.
    • Cache-Control header does not produce max-age properly and thus caching does not work as expected. Learn about caching best practices from here.
  • After deploying on a domain as the root site, the homepage
    produces HTTP 404.

Problem 1: Visiting your website’s homepage gives 404 when
hosted on a domain

You have done the wildcard mapping, mapped the .mvc extension to the ASP.NET ISAPI handler, written the route mapping for Default.aspx or default.aspx (lowercase), but still, when you visit your homepage after deployment, you get:


image

You will find people banging their heads on the wall here:

Solution is to capture hits going to “/” and then rewrite it to
Default.aspx:


image
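
If the screenshot does not load, the rewrite looks roughly like this in Global.asax (a sketch; the exact check you need may differ depending on your routing setup):

// Sketch: rewrite the root URL to Default.aspx so the configured MVC route can pick it up.
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpApplication app = (HttpApplication)sender;

    // IIS 6 hands "/" to ASP.NET, but no route matches the bare root
    if (app.Request.AppRelativeCurrentExecutionFilePath == "~/")
    {
        app.Context.RewritePath("~/Default.aspx", false);
    }
}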

You can apply this approach to any URL that ASP.NET MVC should handle but is not handling for you. Just see the URL reported on the 404 error page and then rewrite it to a proper URL.

Problem 2: IIS 6 compression is no longer working after
wildcard mapping

When you enable wildcard mapping, IIS 6 compression no longer works for extensionless URLs because IIS 6 does not see any extension that is defined in the IIS Metabase. You can learn about the IIS 6 compression feature and how to configure it properly from my earlier post.

Solution is to use an HttpModule to do the compression for
dynamic requests.

Problem 3: ASP.NET ISAPI does not cache Static Files

When ASP.NET’s DefaultHttpHandler serves static files, it does not cache the files in memory or in the ASP.NET cache. As a result, every hit to a static file results in a file read. Below is the decompiled code in DefaultHttpHandler when it handles a static file. As you see here, it makes a file read on every hit and it only sets the expiration to one day in the future. Moreover, it generates an ETag for each file based on the file’s modified date. For best caching efficiency, we need to get rid of that ETag, produce an expiry date in the far future (like 30 days), and produce a Cache-Control header which offers better control over caching.


image

So, we need to write a custom static file handler that will
cache small files like images, Javascripts, CSS, HTML and so on in
ASP.NET cache and serve the files directly from cache instead of
hitting the disk. Here are the steps:

  • Install an HttpModule that installs a Compression Stream
    on Response.Filter so that anything written on Response gets
    compressed. This serves dynamic requests.
  • Replace ASP.NET’s DefaultHttpHandler that listens on *.*
    for static files.
  • Write our own Http Handler that will deliver compressed
    response for static resources like Javascript, CSS, and HTML.


image

Here’s the mapping in ASP.NET’s web.config for the
DefaultHttpHandler. You will have to replace this with your
own handler in order to serve static files compressed and
cached.

Solution 1: An Http Module to compress dynamic requests

First, you need to compress the responses that are served by the MvcHandler or ASP.NET’s default Page handler. The following HttpCompressionModule hooks on the Response.Filter and installs a GZipStream or DeflateStream on it so that whatever is written to the Response stream gets compressed.


image

These are formalities for a regular HttpModule. The real
hook is installed as below:


image

Here you see we ignore requests that are handled by ASP.NET’s DefaultHttpHandler and our own StaticFileHandler that you will see in the next section. After that, it checks whether the request allows content to be compressed. The Accept-Encoding header contains “gzip” or “deflate” or both when the browser supports compressed content. So, when the browser supports compressed content, a Response Filter is installed to compress the output.
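
Putting those pieces together, a minimal module along these lines would do it. This is a sketch under my own assumptions (the event used and the names are mine, not necessarily what the downloadable HttpCompressionModule does):

using System;
using System.IO.Compression;
using System.Web;

// Minimal sketch of a compression module for dynamic requests.
public class CompressionModuleSketch : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // The handler is already chosen at this point, but has not executed yet
        app.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
    }

    private void OnPreRequestHandlerExecute(object sender, EventArgs e)
    {
        HttpApplication app = (HttpApplication)sender;

        // Skip requests served by DefaultHttpHandler (the real module also skips
        // its own static file handler, which does its own compression)
        if (app.Context.Handler is System.Web.DefaultHttpHandler) return;

        string acceptEncoding = app.Request.Headers["Accept-Encoding"];
        if (string.IsNullOrEmpty(acceptEncoding)) return;
        acceptEncoding = acceptEncoding.ToLowerInvariant();

        if (acceptEncoding.Contains("gzip"))
        {
            app.Response.Filter = new GZipStream(app.Response.Filter, CompressionMode.Compress);
            app.Response.AppendHeader("Content-Encoding", "gzip");
        }
        else if (acceptEncoding.Contains("deflate"))
        {
            app.Response.Filter = new DeflateStream(app.Response.Filter, CompressionMode.Compress);
            app.Response.AppendHeader("Content-Encoding", "deflate");
        }
    }

    public void Dispose() { }
}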

Solution 2: An Http Handler to compress and cache static file requests

Here’s how the handler works:

  • Hooks on *.* so that all unhandled requests get served by the
    handler
  • Handles some specific files like js, css, html, graphics files.
    Anything else, it lets ASP.NET transmit it
  • The extensions it handles itself, it caches the file content so
    that subsequent requests are served from cache
  • It allows compression of some specific extensions like js, css,
    html. It does not compress graphics files or any other
    extension.

Let’s start with the handler code:


image

Here you will find the extensions the handler handles and the extensions it compresses. You should only put text file types in COMPRESS_FILE_TYPES.

Now start handling each request from
BeginProcessRequest.


image

Here you decide the compression mode based on the Accept-Encoding header. If the browser does not support compression, do not perform any compression. Then check if the file being requested falls in one of the extensions that we support. If not, let ASP.NET handle it. You will see soon how.


image

Calculate the cache key based on the compression mode and the physical path of the file. This ensures that no matter what URL is requested, we have one cache entry per physical file. The physical file path won’t be different for the same file. The compression mode is used in the cache key because we need to store a different copy of the file’s content in the ASP.NET cache per compression mode. So, there will be one uncompressed version, a gzip compressed version and a deflate compressed version.
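
As a sketch, the key can be as simple as this (the method name is illustrative):

// One cache entry per physical file per compression mode,
// e.g. "gzip:c:\site\content\jquery.js"
private static string GetCacheKey(string physicalPath, string compressionMode)
{
    return compressionMode + ":" + physicalPath.ToLowerInvariant();
}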

Next, check if the file exists. If not, throw HTTP 404. Then create a memory stream that will hold the bytes for the file or the compressed content. Then read the file and write into the memory stream either directly or via a GZip or Deflate stream. Then cache the bytes in the memory stream and deliver them to the response. You will see the ReadFileData and CacheAndDeliver functions soon.


image

This function delivers content directly from ASP.NET cache. The
code is simple, read from cache and write to the response.

When the content is not available in cache, read the file bytes
and store in a memory stream either as it is or compressed based on
what compression mode you decided before:


image

Here bytes are read in chunks in order to avoid a large memory allocation. You could read the whole file in one shot and store it in a byte array the same size as the file. But I wanted to save memory allocation. Do a performance test to figure out whether reading in 8 KB chunks is the best approach for you.
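
A sketch of that chunked read, assuming a ReadFileData-style helper that returns the (optionally compressed) bytes in a MemoryStream; the signature is illustrative:

using System.IO;
using System.IO.Compression;

// Illustrative sketch: read the file in 8 KB chunks, optionally through a
// compression stream, and return the resulting bytes in a MemoryStream.
private static MemoryStream ReadFileData(string physicalPath, string compressionMode)
{
    MemoryStream memory = new MemoryStream();

    // Wrap the memory stream in a compression stream when needed ("true" leaves it open)
    Stream output = memory;
    if (compressionMode == "gzip")
        output = new GZipStream(memory, CompressionMode.Compress, true);
    else if (compressionMode == "deflate")
        output = new DeflateStream(memory, CompressionMode.Compress, true);

    using (FileStream file = File.OpenRead(physicalPath))
    {
        byte[] buffer = new byte[8 * 1024];   // 8 KB chunks to avoid one large allocation
        int read;
        while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            output.Write(buffer, 0, read);
    }

    if (output != memory)
        output.Close();   // flushes the remaining compressed bytes into the memory stream

    return memory;
}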

Now you have the bytes to write to the response. Next step is to
cache it and then deliver it.


image

Now the two functions that you have seen several times and have
been wondering what they do. Here they are:


image

WriteResponse has no tricks, but ProduceResponseHeader has much wisdom in it. First it turns off response buffering so that ASP.NET does not store the written bytes in any internal buffer. This saves some memory allocation. Then it produces proper cache headers to cache the file in browser and proxy for 30 days, ensures proxies revalidate the file after the expiry date, and produces the Last-Modified date from the file’s last write time in UTC.
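
The exact values are in the screenshot below; read from the description above, they amount to roughly this (a sketch, with an illustrative signature):

// Sketch of the headers described above; values follow the text, not the exact download.
private static void ProduceResponseHeader(HttpResponse response, int contentLength,
    string compressionMode, string physicalFilePath)
{
    // The bytes are already in memory, so no need for ASP.NET to buffer them again
    response.Buffer = false;

    if (compressionMode == "gzip" || compressionMode == "deflate")
        response.AppendHeader("Content-Encoding", compressionMode);
    response.AppendHeader("Content-Length", contentLength.ToString());

    // Cache in browser and proxy for 30 days; proxies must revalidate after expiry
    response.Cache.SetCacheability(HttpCacheability.Public);
    response.Cache.SetExpires(DateTime.Now.AddDays(30));
    response.Cache.SetMaxAge(TimeSpan.FromDays(30));
    response.Cache.AppendCacheExtension("must-revalidate, proxy-revalidate");

    // Last-Modified from the file's last write time (ASP.NET emits HTTP dates in GMT)
    response.Cache.SetLastModified(File.GetLastWriteTime(physicalFilePath));
}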

How to use it

Get the HttpCompressionModule and
StaticFileHandler from:

http://code.msdn.microsoft.com/fastmvc

Then install them in web.config. First you install the
StaticFileHandler by removing the existing mapping for
path=”*” and then you install the HttpCompressionModule.


image

That’s it! Enjoy a faster and more responsive ASP.NET MVC
website deployed on IIS 6.0.


ensure – Ensure relevant Javascript and HTML are loaded before using them

ensure allows you to load Javascript, HTML and CSS
on-demand, whenever they are needed. It saves you from writing a
gigantic Javascript framework up front so that you can ensure all
functions are available whenever they are needed. It also saves you
from delivering all possible html on your default page (e.g.
default.aspx) hoping that they might some day be needed on some
user action. Delivering Javascript, html fragments, CSS during
initial loading that is not immediately used on first view makes
initial loading slow. Moreover, browser operations get slower as there is a lot of stuff on the browser DOM to deal with. So, ensure saves you from delivering unnecessary javascript, html and CSS up front and instead loads them on-demand. Javascript, html and CSS loaded by ensure remain in the browser, and the next time ensure is called with the same Javascript, CSS or HTML, it does not reload them and thus saves repeated downloads.

Ensure supports jQuery, Microsoft ASP.NET AJAX and the Prototype framework. This means you can use it on any html, ASP.NET, PHP or JSP page that uses any of the above frameworks.

For example, you can use ensure to download
Javascript on demand:

ensure( { js: "Some.js" }, function()
{
SomeJS(); // The function SomeJS is available in Some.js only
});

The above code ensures Some.js is available before executing the code. If Some.js has already been loaded, it executes the function right away. Otherwise it downloads Some.js, waits until it is properly loaded and only then executes the function. Thus it saves you from delivering Some.js upfront when you only need it upon some user action.

Similarly you can wait for some HTML fragment to be available,
say a popup dialog box. There’s no need for you to deliver HTML for
all possible popup boxes that you will ever show to user on your
default web page. You can fetch the HTML whenever you need
them.

ensure( {html: "Popup.html"}, function()
{
// The element "Popup" is available only in Popup.html
document.getElementById("Popup").style.display = "";
});

The above code downloads the html from “Popup.html”, adds it into the body of the document and then fires the function. So, your code can safely use the UI element from that html.

You can mix and match Javascript, html and CSS altogether in one ensure call. For example:

ensure( { js: "popup.js", html: "popup.html", css: "popup.css" }, function()
{
PopupManager.show();
});

You can also specify multiple Javascripts, html or CSS files to
ensure all of them are made available before executing the
code:

ensure( { js: ["blockUI.js","popup.js"], html: ["popup.html", "blockUI.html"], css: ["blockUI.css", "popup.css"] }, function()
{
BlockUI.show();
PopupManager.show();
});

You might think you are going to end up writing a lot of ensure code all over your Javascript and end up with a larger Javascript file than before. In order to keep your javascript size down, you can define shorthands for commonly used files:

var JQUERY = { js: "jquery.js" };
var POPUP = { js: ["blockUI.js","popup.js"], html: ["popup.html", "blockUI.html"], css: ["blockUI.css", "popup.css"] };
...
...
ensure( JQUERY, POPUP, function() {
("DeleteConfirmPopupDIV").show();
});
...
...
ensure( POPUP, function()
{
("SaveConfirmationDIV").show();
);

While loading html, you can specify a container element where ensure can inject the loaded HTML. For example, you can say load HtmlSnippet.html and then inject the content inside a DIV named “exampleDiv”:

ensure( { html: ["popup.html", "blockUI.html"], parent: "exampleDiv"}, function(){});

You can also specify Javascript and CSS that will be loaded
along with the html.

How ensure works

The following CodeProject article explains in detail how ensure is built. Be prepared for a high dose of Javascript techniques:

http://www.codeproject.com/KB/ajax/ensure.aspx

If you find ensure useful, please vote for me.

Download Code

Download latest source code from CodePlex: www.codeplex.com/ensure



UFrame: goodness of UpdatePanel and IFRAME combined

UFrame combines
the goodness of UpdatePanel and IFRAME in a cross browser and
cross platform solution. It allows a DIV to behave like an
IFRAME loading
content from any page either static or dynamic. It can load pages
having both inline and external Javascript and CSS, just like an
IFRAME. But unlike IFRAME, it loads the content within the main
document and you can put any number of UFrames on your page without slowing down the browser. It supports ASP.NET postback nicely and you can have a DataGrid or any other complex ASP.NET control within a UFrame. UFrame works perfectly with ASP.NET MVC, making it a replacement for UpdatePanel. Best
of all, UFrame is
implemented 100% in Javascript making it a cross platform solution.
As a result, you can use UFrame on ASP.NET, PHP,
JSP
or any other platform.

<div class="UFrame" id="UFrame1" src="SomePage.aspx?ID=UFrame1" >
  <p>This should get replaced with content from Somepage.aspxp>
div>

The response from SomePage.aspx is rendered directly inside the UFrame. Here you see two UFrames are used to load the same SomePage.aspx as if they are loaded inside an IFRAME. Another UFrame is used to load AnotherPage.aspx, which shows photos from Flickr.


image

See it in action!

You can test UFrame from:

What is UFrame?

UFrame can load
and host a page (ASP.NET, PHP or regular html) inside a DIV. Unlike
IFRAME which loads the content inside a browser frame that has no
relation with the main document, UFrame loads the content within
the same document. Thus all the Javascripts, CSS on the main
document flows through the loaded content. It’s just like
UpdatePanel with
IFRAME’s src
attribute.

The above UFrames are declared like
this:

<div id="UFrame1" src="SomePage.aspx" >
    <p>This should get replaced with content from Somepage.aspxp>
div>

The features of UFrame are:

  • You can build regular ASP.NET/PHP/JSP/HTML pages and make them behave as if they are fully AJAX enabled! A simple regular postback will work as if it’s an UpdatePanel, or simple hyperlinks will behave as if content is being loaded using AJAX.
  • Load any URL inside a DIV. It can be a PHP, ASP.NET, JSP or
    regular HTML page.
  • Just like IFRAME, you can set src property of DIVs and they
    are converted to UFrames when UFrame library loads.
  • Unlike IFRAME, it loads the content within the main document.
    So, main document’s CSS and Javascripts are available to the loaded
    content.
  • It allows you to build parts of a page as multiple fully
    independent pages.
  • Each page is built as standalone page. You can build, test and
    debug each small page independently and put them together on the
    main page using UFrames.
  • It loads and executes both inline and external scripts from
    loaded page. You can also render different scripts during
    UFrame
    postback.
  • All external scripts are loaded before the body content is set.
    And all inline scripts are executed when both external scripts and
    body has been loaded. This way the inline scripts execute when the
    body content is already available.
  • It loads both inline and external CSS.
  • It handles duplicates nicely. It does not load the same
    external Javascript or CSS twice.

Download the code

You can download latest version of UFrame along with the VS 2005
and VS 2008 (MVC) example projects from CodePlex:

www.codeplex.com/uframe

Please go to the “Source Code” tab for the latest version. You
are invited to join the project and improve it or fix bugs.

Read the article about UFrame

I have published an article about UFrame at CodeProject:

http://www.codeproject.com/KB/aspnet/uframe.aspx

The article explains in details how the UFrame is built. Be
prepared for a big dose of Javascript code.

If you find UFrame or the article useful, please vote for me at
CodeProject.



Fast ASP.NET web page loading by downloading multiple javascripts in batch

A web page can load a lot faster and feel faster if the
javascripts on the page can be loaded after the visible content has
been loaded and multiple javascripts can be batched into one
download. Browsers download one external script at a time and
sometimes pause rendering while a script is being downloaded and
executed. This makes web pages load and render slow when there are
multiple javascripts on the page. For every javascript reference,
browser stops downloading and processing of any other content on
the page and some browsers (like IE6) pause rendering while it
processes the javascript. This gives a slow loading experience and
the web page kind of gets ‘stuck’ frequently. As a result, a web page can only load fast when there is a small number of external scripts on the page and the scripts are loaded after the visible content of the page has loaded.

Here’s an example, when you visit
http://dropthings.omaralzabir.com, you see a lot of Javascripts
downloading. Majority of these are from the ASP.NET AJAX framework
and the ASP.NET AJAX Control Toolkit project.


Andysnap_003

Figure: Many scripts downloaded on a typical ASP.NET AJAX page
having ASP.NET AJAX Control Toolkit

As you see, the browser gets stuck 15 times as it downloads and processes external scripts. This makes page loading “feel” slower. The actual loading time is also pretty bad because these 15 http requests waste 15×100ms = 1500ms on network latency inside the USA. Outside the USA, the latency is even higher. Asia gets about 270ms and Australia gets about 380ms latency from any server in the USA. So, users outside the USA waste 4 to 6 seconds on network latency where no data is being downloaded. This is unacceptable performance for any website.

You pay for such a high number of script downloads only because
you use two extenders from AJAX Control Toolkit and the
UpdatePanel of
ASP.NET AJAX.

If we can batch the multiple individual script calls into one call like Scripts.ashx, as shown in the picture below, and download several scripts together in one shot using an HTTP Handler, it saves us a lot of http connections which could be spent doing other valuable work, like downloading CSS for the page to show content properly or downloading images on the page that are visible to the user.


Andysnap_002

Figure: Download several javascripts over one connection and save
call and latency

The Scripts.ashx
handler can not only download multiple scripts in one shot, but
also has a very short URL form. For example:

/scripts.ashx?initial=a,b,c,d,e&/

Compared to conventional ASP.NET ScriptResource URLs like:

/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjNjt
bmek2jgmm3QETspZjKLvHue5em5kVYJGEuf4kofrcKNL9z6AiMhCe3SrJrcBel_c1
&t=633454272919375000

The benefits of downloading multiple Javascript over one http
call are:

  • Saves expensive network roundtrip latency where neither browser
    nor the origin server is doing anything, not even a single byte is
    being transmitted during the latency
  • Creates fewer “pause” moments for the browser. So, the browser can fluently render the content of the page and thus give the user a fast loading feel
  • Gives the browser more time and free http connections to download visible artifacts of the page and thus give the user a “something’s happening” feel
  • When IIS compression is enabled, the total size of individually compressed files is greater than that of multiple files compressed after they are combined. This is because each compressed byte stream has a compression header needed to decompress the content.
  • Reduces the size of the page html as there are only a handful of script tags. So, you can easily save hundreds of bytes from the page html. Especially when ASP.NET AJAX produces gigantic WebResource.axd and ScriptResource.axd URLs that have very large query parameters

The solution is to dynamically parse the response of a page before it is sent to the browser and find out what script references are being sent to the browser. I have built an http module which can parse the generated html of a page and find out what script blocks are being sent. It then parses those script blocks and finds the scripts that can be combined. Then it takes out those individual script tags from the response and adds one script tag that generates the combined response of multiple script tags.

For example, the homepage of Dropthings.com produces the
following script tags:

< script type="text/javascript">
...
//]]>

< script src="/Dropthings/WebResource.axd?d=_w65Lg0FVE-htJvl4_zmXw2&t=633403939286875000" 
type="text/javascript"> ... < script src="Widgets/FastFlickrWidget.js" type="text/javascript"> < script src="Widgets/FastRssWidget.js" type="text/javascript"> < script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdj
Njtbmek2jgmm3QETspZjKLvHue5em5kVYJGEuf4kofrcKNL9z6AiMhCe3SrJrcBel_c1
&t=633454272919375000"
type="text/javascript"> < script type="text/javascript"> // ... < script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjNjtbmek2j
gmm3QETspZjKLvHIbaYWwsewvr_eclXZRGNKzWlaVj44lDEdg9CT2tyH-Yo9jFoQij_XIWxZNETQkZ90
&t=633454272919375000"
type="text/javascript"> < script type="text/javascript"> ... < script type="text/javascript"> ... < script type="text/javascript" charset="utf-8"> ... < script src="Myframework.js" type="text/javascript"> < script type="text/javascript"> ... < script type="text/javascript">if( typeof Proxy == "undefined" ) Proxy = ProxyAsync; < script type="text/javascript"> ... < script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjN
jtbmek2jgmm3QETspZjKLvH-H5JQeA1OWzBaqnbKRQWwc2hxzZ5M8vtSrMhytbB-Oc1
&t=633454272919375000"
type="text/javascript"> < script src="/Dropthings/ScriptResource.axd?d=BXpG1T2rClCdn7QzWc-HrzQ2ECeqBhG6oiVakhRAk
RY6YSaFJsnzqttheoUJJXE4jMUal_1CAxRvbSZ_4_ikAw2
&t=633454540450468750"
type="text/javascript"> < script src="/Dropthings/ScriptResource.axd?d=BXpG1T2rClCdn7QzWc-HrzQ2ECeqBhG6oiVakhRA
kRYRhsy_ZxsfsH4NaPtFtpdDEJ8oZaV5wKE16ikC-hinpw2
&t=633454540450468750"
type="text/javascript"> < script src="/Dropthings/ScriptResource.axd?d=BXpG1T2rClCdn7QzWc-HrzQ2ECeqBhG6oiVakhRAk
RZbimFWogKpiYN4SVreNyf57osSvFc_f24oloxX4RTFfnfj5QsvJGQanl-pbbMbPf01
&t=633454540450468750"
type="text/javascript">
...

< script type="text/javascript"> ...

As you see, there are lots of large script tags, 15 of them in total. The solution I will show here combines the script links and replaces them with two script links that download 13 of the individual scripts. I have left two scripts out that are related to the ASP.NET AJAX Timer extender.


< script type="text/javascript"> ...


< script type="text/javascript" src="Scripts.ashx?initial=a,b,c,d,e,f&/dropthings/">
< script type="text/javascript"> ... < script type="text/javascript"> ... < script type="text/javascript"> ... < script type="text/javascript"> ... < script type="text/javascript">if( typeof Proxy == "undefined" ) Proxy = ProxyAsync; < script type="text/javascript"> ... < script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-..." type="text/javascript"> < script src="/Dropthings/ScriptResource.axd?d=BXpG1T2..." type="text/javascript">
< script type="text/javascript" src="Scripts.ashx?post=C,D,E,F,G,H,I,J&/dropthings/"> < script type="text/javascript"> ...

As you see, 13 of the script links have been combined into two script links. The URLs are also smaller than the majority of the script references.

There are two steps involved here:

  1. Find out all the script tags being emitted inside the generated response HTML and collect them in a buffer. Move them after the visible artifacts in the HTML, especially the <form> tag that contains the generated output of all ASP.NET controls on the page
  2. Parse the buffer and see which script references can be
    combined into one set. The sets are defined in a configuration
    file. Replace the individual script references with the combined
    set reference.

The whole solution is explained in this CodeProject article:

Fast ASP.NET web page loading by downloading multiple
javascripts after visible content and in batch
http://www.codeproject.com/KB/aspnet/fastload.aspx

You should be able to use this approach in any ASP.NET (even
better if AJAX) application and give your site a big performance
boost.

If you like the idea, please vote for me.


kick it on DotNetKicks.com

Fast, Streaming AJAX proxy – continuously download from cross domain

Due to the browser’s prohibition on cross-domain XMLHTTP calls, all AJAX websites must have a server-side proxy to fetch content from external domains like Flickr or Digg. From client-side javascript code, an XMLHTTP call goes to the server-side proxy hosted on the same domain, and then the proxy downloads the content from the external server and sends it back to the browser. In general, all AJAX websites on the Internet that are showing content from external domains follow this proxy approach, except some rare ones who are using JSONP. Such a proxy gets a very large number of hits when a lot of components on the website are downloading content from external domains. So, it becomes a scalability issue when the proxy starts getting millions of hits. Moreover, the web page’s overall load performance largely depends on the performance of the proxy as it delivers content to the page. In this article, we will take a look at how we can take a conventional AJAX Proxy and make it faster, asynchronous, continuously stream content and thus make it more scalable.

You can see such a proxy in action when you go to Pageflakes.com. You will see
flakes (widgets) loading many different content like weather feed,
flickr photo, youtube videos, RSS from many different external
domains. All these are done via a Content Proxy. Content
Proxy served about 42.3 million URLs last month which is
quite an engineering challenge for us to make it both fast and
scalable. Sometimes Content Proxy serves megabytes of data, which
poses an even greater engineering challenge. As such a proxy gets a large number of hits, if we can save on average 100ms from each call,
we can save 4.23 million seconds of
download/upload/processing time every month. That’s about 1175 man
hours wasted throughout the world by millions of people staring at
browser waiting for content to download.

Such a content proxy takes an external server’s URL as a query
parameter. It downloads the content from the URL and then writes
the content as response back to browser.


image

Figure: Content Proxy working as a middleman between browser and
external domain

The above timeline shows how the request goes to the server, then the server makes a request to the external server, downloads the response and then transmits it back to the browser. The response arrow from
proxy to browser is larger than the response arrow from external
server to proxy because generally proxy server’s hosting
environment has better download speed than the user’s Internet
connectivity.

Such a content proxy is also available in my open source Ajax
Web Portal Dropthings.com.
You can see from its
code
how such a proxy is implemented.

The following is a very simple synchronous, non-streaming,
blocking Proxy:

[WebMethod]
[ScriptMethod(UseHttpGet=true)]
public string GetString(string url)
{
    using (WebClient client = new WebClient())
    {
        string response = client.DownloadString(url);
        return response;
    }
}

Although it shows the general principle, it’s nowhere close to a real proxy because:

  • It’s a synchronous proxy and thus not scalable. Every call to this web method causes the ASP.NET thread to wait until the call to the external URL completes.
  • It’s non-streaming. It first downloads the entire content on the server, storing it in a string, and then uploads that entire content to the browser. If you pass the MSDN feed URL, it will download that gigantic 220 KB RSS XML on the server and store it in a 220 KB string (actually double the size, as .NET strings are all Unicode strings), then write 220 KB to the ASP.NET Response buffer, consuming another 220 KB UTF8 byte array in memory. Then that 220 KB is passed to IIS in chunks so that it can transmit it to the browser.
  • It does not produce proper response headers to cache the response on the server. Nor does it deliver important headers like Content-Type from the source.
  • If the external URL is providing gzipped content, it decompresses the content into a string representation and thus wastes server memory.
  • It does not cache the content on the server. So, repeated calls to the same external URL within the same second or minute will download the content from the external URL again and thus waste bandwidth on your server.

So, we need an asynchronous, streaming proxy that transmits the content to the browser while it downloads from the external domain server. It will download bytes from the external URL in small chunks and immediately transmit them to the browser. As a result, the browser will see a continuous transmission of bytes right after calling the web service. There will be no delay while the content is fully downloaded on the server.

Before I show you the complex streaming proxy code, let’s take an evolutionary approach. Let’s build a better Content Proxy than the one shown above, one that is still synchronous and non-streaming but does not have the other problems mentioned above. We will build an HTTP Handler named RegularProxy.ashx which will take url as a query parameter. It will also take cache as a query parameter, which it will use to produce proper response headers in order to cache the content on the browser. Thus it will save the browser from downloading the same content again and again.

<%@ WebHandler Language="C#" Class="RegularProxy" %>

using System;
using System.Web;
using System.Web.Caching;
using System.Net;
using ProxyHelpers;
public class RegularProxy : IHttpHandler {

public void ProcessRequest (HttpContext context) {
string url = context.Request["url"];
int cacheDuration = Convert.ToInt32(context.Request["cache"]?? "0");
string contentType = context.Request["type"];

// We don't want to buffer because we want to save memory
context.Response.Buffer = false;

// Serve from cache if available
if (context.Cache[url] != null)
{
context.Response.BinaryWrite(context.Cache[url] as byte[]);
context.Response.Flush();
return;
}
using (WebClient client = new WebClient())
{
if (!string.IsNullOrEmpty(contentType))
client.Headers["Content-Type"] = contentType;

client.Headers["Accept-Encoding"] = "gzip";
client.Headers["Accept"] = "*/*";
client.Headers["Accept-Language"] = "en-US";
client.Headers["User-Agent"] =
"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6";

byte[] data = client.DownloadData(url);

context.Cache.Insert(url, data, null,
Cache.NoAbsoluteExpiration,
TimeSpan.FromMinutes(cacheDuration),
CacheItemPriority.Normal, null);

if (!context.Response.IsClientConnected) return;


// Deliver content type, encoding and length as it is received from the external URL
context.Response.ContentType = client.ResponseHeaders["Content-Type"];
string contentEncoding = client.ResponseHeaders["Content-Encoding"];
string contentLength = client.ResponseHeaders["Content-Length"];

if (!string.IsNullOrEmpty(contentEncoding))
context.Response.AppendHeader("Content-Encoding", contentEncoding);
if (!string.IsNullOrEmpty(contentLength))
context.Response.AppendHeader("Content-Length", contentLength);

if (cacheDuration > 0)
HttpHelper.CacheResponse(context, cacheDuration);

// Transmit the exact bytes downloaded
context.Response.BinaryWrite(data);
}
}

public bool IsReusable {
get {
return false;
}
}

}

There are several enhancements in this proxy:

  • It allows server side caching of content. Same URL requested by
    a different browser within a time period will not be downloaded on
    server again, instead it will be served from cache.
  • It generates proper response cache header so that the content
    can be cached on browser.
  • It does not decompress the downloaded content in memory. It
    keeps the original byte stream intact. This saves memory
    allocation.
  • It transmits the data in non-buffered fashion, which means
    ASP.NET Response object does not buffer the response and thus saves
    memory

However, this is a blocking proxy. We need to make a streaming
asynchronous proxy for better performance. Here’s why:


image

Figure: Continuous streaming proxy

As you see, when data is transmitted from server to browser
while server downloads the content, the delay for server side
download is eliminated. So, if server takes 300ms to download
something from external source, and then 700ms to send it back to
browser, you can save up to 300ms Network Latency between server
and browser. The situation gets even better when the external
server that serves the content is slow and takes quite some time to
deliver the content. The slower the external site is, the more savings you get in this continuous streaming approach. This is significantly faster than the blocking approach when the external server is in Asia or Australia and your server is in the USA.

The approach for continuous proxy is:

  • Read bytes from external server in chunks of 8KB from a
    separate thread (Reader thread) so that it’s not blocked
  • Store the chunks in an in-memory Queue
  • Write the chunks to ASP.NET Response from that same queue
  • If the queue is finished, wait until more bytes are downloaded
    by the reader thread


image

The Pipe Stream needs to be thread safe and it needs to support blocking reads. By blocking read, I mean that if a thread tries to read a chunk from it and the stream is empty, the thread is suspended until another thread writes something on the stream. Once a write happens, the reader thread is resumed and allowed to read. I have taken the code of PipeStream from a CodeProject article by James Kolpack and extended it to make sure it’s high performance, supports chunks of bytes to be stored instead of single bytes, supports timeouts on waits and so on.
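
To make the blocking-read idea concrete, here is a bare-bones illustration. This is not the PipeStream used in the article (which also handles timeouts and exposes a proper Stream API); it only shows the core producer/consumer behavior described above.

using System.Collections.Generic;
using System.Threading;

// Minimal illustration: a thread-safe queue of byte chunks where Read blocks
// until the writer adds something or signals completion.
public class ChunkPipe
{
    private readonly Queue<byte[]> _chunks = new Queue<byte[]>();
    private bool _completed;

    // Called by the reader thread that downloads from the external server
    public void Write(byte[] chunk)
    {
        lock (_chunks)
        {
            _chunks.Enqueue(chunk);
            Monitor.Pulse(_chunks);   // wake up a waiting reader
        }
    }

    // Called by the reader thread when the external download is finished
    public void Complete()
    {
        lock (_chunks)
        {
            _completed = true;
            Monitor.PulseAll(_chunks);
        }
    }

    // Called by the thread writing to ASP.NET Response; returns null when done
    public byte[] Read()
    {
        lock (_chunks)
        {
            while (_chunks.Count == 0 && !_completed)
                Monitor.Wait(_chunks);    // the blocking read

            return _chunks.Count > 0 ? _chunks.Dequeue() : null;
        }
    }
}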

I did some comparison between the Regular proxy (blocking, synchronous, download-all-then-deliver) and the Streaming Proxy (continuous transmission from external server to browser). Both proxies download the MSDN feed and deliver it to the browser. The time taken here shows the total duration of the browser making the request to the proxy and then getting the entire response.


image

Figure: Time taken by Streaming Proxy vs Regular Proxy while
downloading MSDN feed

Not a very scientific graph, and response time varies with the link speed between the browser and the proxy server and then from the proxy server to the external server. But it shows that most of the time, the Streaming Proxy outperformed the Regular proxy.


image

Figure: Test client to compare between Regular Proxy and Streaming
Proxy

You can also test both proxies’ response times by going to http://labs.dropthings.com/AjaxStreamingProxy. Put in your URL, hit the Regular/Stream button and see the “Statistics” text box for the total duration. You can turn on “Cache response” and hit a URL from one browser. Then go to another browser and hit the URL to see the response coming from the server cache directly. Also, if you hit the URL again on the same browser, you will see the response comes instantly without ever making a call to the server. That’s the browser cache at work.

Learn more about Http Response caching from my blog post:

Making best use of cache for high performance website

A Visual Studio Web Test run inside a Load Test shows a better
picture:


image

Figure: Regular Proxy load test result shows Average
Requests/Sec 0.79
and Avg Response Time 2.5 sec


image

Figure: Streaming Proxy load test result shows Avg Req/Sec is
1.08
and Avg Response Time 1.8 sec.

From the above load test results, the Streaming Proxy delivers 26% better Requests/Sec and 29% better Average Response Time. The numbers may sound small, but at Pageflakes, 29% better response time means 1.29 million seconds saved per month for all the users on the website. So, we are effectively saving 353 man hours per month that were being wasted staring at the browser screen while it downloads content.

Building the Streaming Proxy

The details how the Streaming Proxy is built is quite long and
not suitable for a blog post. So, I have written a CodeProject
article:

Fast, Scalable,
Streaming AJAX Proxy – continuously deliver data from cross
domain

Please read the article and please vote for me if you find it useful.