UFrame: goodness of UpdatePanel and IFRAME combined

UFrame combines the goodness of UpdatePanel and IFRAME in a cross browser and
cross platform solution. It allows a DIV to behave like an IFRAME, loading
content from any page, either static or dynamic. It can load pages having both
inline and external Javascript and CSS, just like an IFRAME. But unlike IFRAME,
it loads the content within the main document, and you can put any number of
UFrames on your page without slowing down the browser. It supports ASP.NET
postback nicely, and you can have a DataGrid or any other complex ASP.NET
control within a UFrame. UFrame works perfectly with ASP.NET MVC, making it a
replacement for UpdatePanel. Best of all, UFrame is implemented 100% in
Javascript, making it a cross platform solution. As a result, you can use
UFrame on ASP.NET, PHP, JSP or any other platform.

<div class="UFrame" id="UFrame1" src="SomePage.aspx?ID=UFrame1" >
  <p>This should get replaced with content from SomePage.aspx</p>
</div>

The response from SomePage.aspx is rendered directly inside the UFrame. Here,
two UFrames are used to load the same SomePage.aspx as if they were loaded
inside IFRAMEs. Another UFrame is used to load AnotherPage.aspx, which shows
photos from Flickr.



See it in action!

You can test UFrame from:

What is UFrame?

UFrame can load and host a page (ASP.NET, PHP or regular HTML) inside a DIV.
Unlike IFRAME, which loads the content inside a browser frame that has no
relation with the main document, UFrame loads the content within the same
document. Thus all the Javascript and CSS of the main document flow through to
the loaded content. It's just like UpdatePanel with IFRAME's src attribute.

The above UFrames are declared like
this:

<div id="UFrame1" src="SomePage.aspx" >
    <p>This should get replaced with content from SomePage.aspx</p>
</div>

The features of UFrame are:

  • You can build regular ASP.NET/PHP/JSP/HTML pages and make them behave as
    if they are fully AJAX enabled! Simple regular postbacks will work as if
    they went through an UpdatePanel, and simple hyperlinks will behave as if
    content is being loaded via AJAX.
  • Load any URL inside a DIV. It can be a PHP, ASP.NET, JSP or regular HTML
    page.
  • Just like IFRAME, you can set the src property of DIVs, and they are
    converted to UFrames when the UFrame library loads.
  • Unlike IFRAME, it loads the content within the main document. So, the main
    document's CSS and Javascripts are available to the loaded content.
  • It allows you to build parts of a page as multiple fully independent
    pages.
  • Each page is built as a standalone page. You can build, test and debug
    each small page independently and put them together on the main page using
    UFrames.
  • It loads and executes both inline and external scripts from the loaded
    page. You can also render different scripts during UFrame postback.
  • All external scripts are loaded before the body content is set, and all
    inline scripts are executed when both the external scripts and the body
    have been loaded. This way, the inline scripts execute when the body
    content is already available.
  • It loads both inline and external CSS.
  • It handles duplicates nicely: it does not load the same external
    Javascript or CSS twice.

Download the code

You can download the latest version of UFrame along with the VS 2005
and VS 2008 (MVC) example projects from CodePlex:

www.codeplex.com/uframe

Please go to the “Source Code” tab for the latest version. You
are invited to join the project and improve it or fix bugs.

Read the article about UFrame

I have published an article about UFrame at CodeProject:

http://www.codeproject.com/KB/aspnet/uframe.aspx

The article explains in detail how UFrame is built. Be
prepared for a big dose of Javascript code.

If you find UFrame or the article useful, please vote for me at
CodeProject.



Fast ASP.NET web page loading by downloading multiple javascripts in batch

A web page can load a lot faster and feel faster if the javascripts on the
page are loaded after the visible content and multiple javascripts are batched
into one download. Browsers download one external script at a time and
sometimes pause rendering while a script is being downloaded and executed.
This makes web pages load and render slowly when there are multiple
javascripts on the page. For every javascript reference, the browser stops
downloading and processing any other content on the page, and some browsers
(like IE6) pause rendering while they process the javascript. This gives a
slow loading experience, and the web page kind of gets 'stuck' frequently. As
a result, a web page can only load fast when there is a small number of
external scripts on the page and the scripts are loaded after the visible
content of the page has loaded.

Here's an example: when you visit http://dropthings.omaralzabir.com, you see a
lot of Javascripts downloading. The majority of these are from the ASP.NET
AJAX framework and the ASP.NET AJAX Control Toolkit project.



Figure: Many scripts downloaded on a typical ASP.NET AJAX page
having ASP.NET AJAX Control Toolkit

As you see, the browser gets stuck 15 times as it downloads and processes
external scripts. This makes page loading "feel" slower. The actual loading
time is also pretty bad, because these 15 http requests waste 15*100ms =
1500ms on network latency inside the USA. Outside the USA, the latency is even
higher: Asia gets about 270ms and Australia about 380ms latency from any
server in the USA. So, users outside the USA waste 4 to 6 seconds on network
latency during which no data is being downloaded. This is unacceptable
performance for any website.

You pay for such a high number of script downloads only because you use two
extenders from the AJAX Control Toolkit and the UpdatePanel of ASP.NET AJAX.

If we can batch the multiple individual script calls into one call like
Scripts.ashx, as shown in the picture below, and download several scripts
together in one shot using an HTTP handler, we save a lot of http connections
which could be spent doing other valuable work, like downloading CSS for the
page to show content properly or downloading the images on the page that are
visible to the user.



Figure: Download several javascripts over one connection and save
calls and latency

The Scripts.ashx
handler can not only download multiple scripts in one shot, but
also has a very short URL form. For example:

/scripts.ashx?initial=a,b,c,d,e&/

Compared to conventional ASP.NET ScriptResource URLs like:

/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjNjt
bmek2jgmm3QETspZjKLvHue5em5kVYJGEuf4kofrcKNL9z6AiMhCe3SrJrcBel_c1
&t=633454272919375000
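
To make the batching idea concrete, here is a minimal sketch of what such a
combining handler could look like. Everything in it is illustrative: the
letter-to-file mapping is hypothetical, and the real handler described in this
post also deals with caching, compression and the trailing application-path
parameter.

using System.Collections.Generic;
using System.IO;
using System.Web;

public class ScriptBatchHandler : IHttpHandler
{
    // Hypothetical mapping from a short key to a script file on disk
    static readonly Dictionary<string, string> ScriptMap =
        new Dictionary<string, string>
        {
            { "a", "~/ScriptLibrary/MicrosoftAjax.js" },
            { "b", "~/ScriptLibrary/MicrosoftAjaxWebForms.js" }
            // ... one entry per combinable script
        };

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/javascript";

        // scripts.ashx?initial=a,b,c => emit the mapped files one after another
        string[] keys = (context.Request["initial"] ?? "").Split(',');
        foreach (string key in keys)
        {
            string virtualPath;
            if (ScriptMap.TryGetValue(key, out virtualPath))
            {
                context.Response.Write(
                    File.ReadAllText(context.Server.MapPath(virtualPath)));
                context.Response.Write(";\n"); // keep adjacent scripts from running together
            }
        }
    }

    public bool IsReusable { get { return true; } }
}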

The benefits of downloading multiple Javascripts over one http call are:

  • It saves expensive network roundtrip latency, during which neither the
    browser nor the origin server is doing anything and not a single byte is
    transmitted.
  • It creates fewer "pause" moments for the browser. So, the browser can
    fluently render the content of the page and thus give the user a fast
    loading feel.
  • It gives the browser more time and free http connections to download the
    visible artifacts of the page, and thus gives the user a "something's
    happening" feel.
  • When IIS compression is enabled, the total size of individually compressed
    files is greater than that of the same files compressed after being
    combined. This is because each compressed byte stream carries a
    compression header needed to decompress the content.
  • It reduces the size of the page html, as there are only a handful of
    script tags. So, you can easily save hundreds of bytes from the page html,
    especially when ASP.NET AJAX produces gigantic WebResource.axd and
    ScriptResource.axd URLs with very large query parameters.

The solution is to dynamically parse the response of a page before it is sent
to the browser and find out what script references are being sent. I have
built an http module which can parse the generated html of a page and find the
script blocks being sent. It then parses those script blocks, finds the
scripts that can be combined, takes those individual script tags out of the
response, and adds one script tag that serves the combined content of the
multiple script tags. A simplified sketch of this tag-swapping step follows.
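
Here is a hedged sketch of just the tag-swapping step. The real module works
on the response stream and respects the configured sets; the class name and
the single-combined-URL simplification here are mine, not the module's actual
code:

using System.Text.RegularExpressions;

public static class ScriptCombiner
{
    // Matches external script tags so they can be pulled out of the html
    static readonly Regex ScriptTag = new Regex(
        "<script[^>]*src=\"(?<src>[^\"]+)\"[^>]*>\\s*</script>",
        RegexOptions.IgnoreCase);

    public static string CombineScripts(string html, string combinedUrl)
    {
        int count = 0;
        // Remove each individual external script tag from the markup
        string stripped = ScriptTag.Replace(html, delegate(Match m) { count++; return ""; });

        if (count == 0) return html;

        // Emit one combined reference near the end of the body
        string combinedTag = "<script type=\"text/javascript\" src=\"" + combinedUrl + "\"></script>";
        return stripped.Replace("</body>", combinedTag + "</body>");
    }
}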

For example, the homepage of Dropthings.com produces the
following script tags:

<script type="text/javascript">
...
//]]>
</script>
<script src="/Dropthings/WebResource.axd?d=_w65Lg0FVE-htJvl4_zmXw2&t=633403939286875000" type="text/javascript"></script>
<script src="Widgets/FastFlickrWidget.js" type="text/javascript"></script>
<script src="Widgets/FastRssWidget.js" type="text/javascript"></script>
<script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjNjtbmek2jgmm3QETspZjKLvHue5em5kVYJGEuf4kofrcKNL9z6AiMhCe3SrJrcBel_c1&t=633454272919375000" type="text/javascript"></script>
<script type="text/javascript">// ...</script>
<script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjNjtbmek2jgmm3QETspZjKLvHIbaYWwsewvr_eclXZRGNKzWlaVj44lDEdg9CT2tyH-Yo9jFoQij_XIWxZNETQkZ90&t=633454272919375000" type="text/javascript"></script>
<script type="text/javascript"> ... </script>
<script type="text/javascript"> ... </script>
<script type="text/javascript" charset="utf-8"> ... </script>
<script src="Myframework.js" type="text/javascript"></script>
<script type="text/javascript"> ... </script>
<script type="text/javascript">if( typeof Proxy == "undefined" ) Proxy = ProxyAsync;</script>
<script type="text/javascript"> ... </script>
<script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-Ggi7-B0tkhjPDTmMmgb5FPLmciWEXQLdjNjtbmek2jgmm3QETspZjKLvH-H5JQeA1OWzBaqnbKRQWwc2hxzZ5M8vtSrMhytbB-Oc1&t=633454272919375000" type="text/javascript"></script>
<script src="/Dropthings/ScriptResource.axd?d=BXpG1T2rClCdn7QzWc-HrzQ2ECeqBhG6oiVakhRAkRY6YSaFJsnzqttheoUJJXE4jMUal_1CAxRvbSZ_4_ikAw2&t=633454540450468750" type="text/javascript"></script>
<script src="/Dropthings/ScriptResource.axd?d=BXpG1T2rClCdn7QzWc-HrzQ2ECeqBhG6oiVakhRAkRYRhsy_ZxsfsH4NaPtFtpdDEJ8oZaV5wKE16ikC-hinpw2&t=633454540450468750" type="text/javascript"></script>
<script src="/Dropthings/ScriptResource.axd?d=BXpG1T2rClCdn7QzWc-HrzQ2ECeqBhG6oiVakhRAkRZbimFWogKpiYN4SVreNyf57osSvFc_f24oloxX4RTFfnfj5QsvJGQanl-pbbMbPf01&t=633454540450468750" type="text/javascript"></script>
...
<script type="text/javascript"> ... </script>

As you see, there are lots of large script tags, 15 in total. The solution I
will show here combines the script links and replaces them with two script
links that download 13 of the individual scripts. I have left out two scripts
that are related to the ASP.NET AJAX Timer extender.


<script type="text/javascript"> ... </script>

<script type="text/javascript" src="Scripts.ashx?initial=a,b,c,d,e,f&/dropthings/"></script>
<script type="text/javascript"> ... </script>
<script type="text/javascript"> ... </script>
<script type="text/javascript"> ... </script>
<script type="text/javascript"> ... </script>
<script type="text/javascript">if( typeof Proxy == "undefined" ) Proxy = ProxyAsync;</script>
<script type="text/javascript"> ... </script>
<script src="/Dropthings/ScriptResource.axd?d=WzuUYZ-..." type="text/javascript"></script>
<script src="/Dropthings/ScriptResource.axd?d=BXpG1T2..." type="text/javascript"></script>
<script type="text/javascript" src="Scripts.ashx?post=C,D,E,F,G,H,I,J&/dropthings/"></script>
<script type="text/javascript"> ... </script>

As you see, 13 of the script links have been combined into two script links.
The URLs are also smaller than the majority of the script references.

There are two steps involved here:

  1. Find out all the script tags being emitted inside the generated response
    HTML and collect them in a buffer. Move them after the visible artifacts
    in the HTML, especially the <form> tag that contains the generated output
    of all ASP.NET controls on the page.
  2. Parse the buffer and see which script references can be combined into one
    set. The sets are defined in a configuration file (a hypothetical example
    is sketched below). Replace the individual script references with the
    combined set reference.
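
Since this post doesn't show the configuration format, the following is only a
hypothetical sketch of what such a set definition file could look like: each
short key maps to one combinable script, and a set groups the keys that may be
served together.

<!-- Hypothetical set definition; the element and attribute names are
     illustrative, not the module's actual format -->
<scriptSets>
  <scriptSet name="initial">
    <script key="a" src="~/ScriptLibrary/MicrosoftAjax.js" />
    <script key="b" src="~/ScriptLibrary/MicrosoftAjaxWebForms.js" />
    <!-- one entry per combinable script -->
  </scriptSet>
</scriptSets>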

The whole solution is explained in this CodeProject article:

Fast ASP.NET web page loading by downloading multiple
javascripts after visible content and in batch
http://www.codeproject.com/KB/aspnet/fastload.aspx

You should be able to use this approach in any ASP.NET application (even
better if it's an AJAX application) and give your site a big performance
boost.

If you like the idea, please vote for me.



Fast, Streaming AJAX proxy – continuously download from cross domain

Due to the browser's prohibition on cross domain XMLHTTP calls, all AJAX
websites must have a server side proxy to fetch content from external domains
like Flickr or Digg. From client side javascript code, an XMLHTTP call goes to
the server side proxy hosted on the same domain, and the proxy downloads the
content from the external server and sends it back to the browser. In general,
all AJAX websites on the Internet that show content from external domains
follow this proxy approach, except some rare ones which use JSONP. Such a
proxy gets a very large number of hits when a lot of components on the website
download content from external domains. So, it becomes a scalability issue
when the proxy starts getting millions of hits. Moreover, the web page's
overall load performance largely depends on the performance of the proxy, as
it delivers content to the page. In this article, we will take a look at how
we can take a conventional AJAX proxy and make it faster, asynchronous, and
able to continuously stream content, and thus make it more scalable.

You can see such a proxy in action when you go to Pageflakes.com. You will see
flakes (widgets) loading many different types of content, like weather feeds,
Flickr photos, YouTube videos, and RSS from many different external domains.
All of this is done via a Content Proxy. The Content Proxy served about 42.3
million URLs last month, which is quite an engineering challenge for us to
make both fast and scalable. Sometimes the Content Proxy serves megabytes of
data, which poses an even greater engineering challenge. As such a proxy gets
a large number of hits, if we can save on average 100ms from each call, we can
save 4.23 million seconds of download/upload/processing time every month.
That's about 1175 man hours wasted throughout the world by millions of people
staring at the browser waiting for content to download.

Such a content proxy takes an external server's URL as a query parameter,
downloads the content from that URL, and then writes the content back to the
browser as the response.



Figure: Content Proxy working as a middleman between browser and
external domain

The above timeline shows how the request goes to the server, then the server
makes a request to the external server, downloads the response and transmits
it back to the browser. The response arrow from proxy to browser is larger
than the response arrow from external server to proxy because generally the
proxy server's hosting environment has better download speed than the user's
Internet connectivity.

Such a content proxy is also available in my open source Ajax
Web Portal Dropthings.com.
You can see from its
code
how such a proxy is implemented.

The following is a very simple synchronous, non-streaming,
blocking proxy:

[WebMethod]
[ScriptMethod(UseHttpGet=true)]
public string GetString(string url)
{
    using (WebClient client = new WebClient())
    {
        string response = client.DownloadString(url);
        return response;
    }
}

Although it shows the general principle, it's nowhere close
to a real proxy because:

  • It's a synchronous proxy and thus not scalable. Every call to this web
    method causes the ASP.NET thread to wait until the call to the external
    URL completes.
  • It's non-streaming. It first downloads the entire content on the server,
    storing it in a string, and then uploads that entire content to the
    browser. If you pass the MSDN feed URL, it will download that gigantic
    220 KB RSS XML on the server and store it in a 220 KB string (actually
    double the size, as .NET strings are all Unicode strings), then write
    220 KB to the ASP.NET response buffer, consuming another 220 KB UTF8 byte
    array in memory. Then that 220 KB is passed to IIS in chunks so that IIS
    can transmit it to the browser.
  • It does not produce proper response headers to cache the response on the
    browser. Nor does it deliver important headers like Content-Type from the
    source.
  • If the external URL is providing gzipped content, it decompresses the
    content into a string representation and thus wastes server memory.
  • It does not cache the content on the server. So, repeated calls to the
    same external URL within the same second or minute will download the
    content from the external URL again and thus waste bandwidth on your
    server.

So, we need an asynchronous streaming proxy that transmits the content to the
browser while it downloads from the external domain server. It will download
bytes from the external URL in small chunks and immediately transmit them to
the browser. As a result, the browser will see a continuous transmission of
bytes right after calling the web service. There will be no delay while the
content is fully downloaded on the server.
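
To make that goal concrete before walking through the evolution, here is a
minimal sketch of the core streaming idea as a plain HTTP handler: read from
the external URL in 8 KB chunks and flush each chunk to the browser as soon as
it arrives. Note that this sketch is still synchronous (it blocks an ASP.NET
thread while reading); the real proxy adds asynchrony on top of this, as
described below.

using System.IO;
using System.Net;
using System.Web;

public class StreamingProxySketch : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request["url"];
        context.Response.Buffer = false; // do not accumulate the whole response in memory

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (Stream source = response.GetResponseStream())
        {
            byte[] chunk = new byte[8 * 1024];
            int read;
            // Each chunk goes out to the browser as soon as it is downloaded
            while ((read = source.Read(chunk, 0, chunk.Length)) > 0)
            {
                if (!context.Response.IsClientConnected) break;
                context.Response.OutputStream.Write(chunk, 0, read);
                context.Response.Flush();
            }
        }
    }

    public bool IsReusable { get { return true; } }
}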

Before I show you the complex streaming proxy code, let's take an evolutionary
approach. Let's build a better content proxy than the GetString web method
shown earlier: one that is still synchronous and non-streaming, but does not
have the other problems mentioned above. We will build an HTTP handler named
RegularProxy.ashx which takes url as a query parameter. It also takes cache as
a query parameter, which it uses to produce proper response headers in order
to cache the content on the browser. Thus it saves the browser from
downloading the same content again and again.

<%@ WebHandler Language="C#" Class="RegularProxy" %>

using System;
using System.Web;
using System.Web.Caching;
using System.Net;
using ProxyHelpers;

public class RegularProxy : IHttpHandler {

    public void ProcessRequest (HttpContext context) {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        // We don't want to buffer because we want to save memory
        context.Response.Buffer = false;

        // Serve from cache if available
        if (context.Cache[url] != null)
        {
            context.Response.BinaryWrite(context.Cache[url] as byte[]);
            context.Response.Flush();
            return;
        }

        using (WebClient client = new WebClient())
        {
            if (!string.IsNullOrEmpty(contentType))
                client.Headers["Content-Type"] = contentType;

            client.Headers["Accept-Encoding"] = "gzip";
            client.Headers["Accept"] = "*/*";
            client.Headers["Accept-Language"] = "en-US";
            client.Headers["User-Agent"] =
                "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6";

            byte[] data = client.DownloadData(url);

            context.Cache.Insert(url, data, null,
                Cache.NoAbsoluteExpiration,
                TimeSpan.FromMinutes(cacheDuration),
                CacheItemPriority.Normal, null);

            if (!context.Response.IsClientConnected) return;

            // Deliver content type, encoding and length as received from the external URL
            context.Response.ContentType = client.ResponseHeaders["Content-Type"];
            string contentEncoding = client.ResponseHeaders["Content-Encoding"];
            string contentLength = client.ResponseHeaders["Content-Length"];

            if (!string.IsNullOrEmpty(contentEncoding))
                context.Response.AppendHeader("Content-Encoding", contentEncoding);
            if (!string.IsNullOrEmpty(contentLength))
                context.Response.AppendHeader("Content-Length", contentLength);

            if (cacheDuration > 0)
                HttpHelper.CacheResponse(context, cacheDuration);

            // Transmit the exact bytes downloaded
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable {
        get { return false; }
    }
}
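
As an example of how a page would call this handler (the URL values here are
illustrative, not from this post):
/RegularProxy.ashx?url=http%3A%2F%2Fmsdn.microsoft.com%2Frss.xml&cache=10&type=text%2Fxml
downloads the feed once, keeps the raw bytes in the ASP.NET cache for 10
minutes, and serves repeated requests straight from that cache.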

This proxy has several enhancements:

  • It allows server side caching of content. The same URL requested by a
    different browser within a time period will not be downloaded on the
    server again; instead it will be served from the cache.
  • It generates proper response cache headers so that the content can be
    cached on the browser.
  • It does not decompress the downloaded content in memory. It keeps the
    original byte stream intact, which saves memory allocation.
  • It transmits the data in a non-buffered fashion, which means the ASP.NET
    Response object does not buffer the response and thus saves memory.

However, this is a blocking proxy. We need to make a streaming
asynchronous proxy for better performance. Here’s why:



Figure: Continuous streaming proxy

As you see, when data is transmitted from the server to the browser while the
server is still downloading the content, the delay of the server side download
is eliminated. So, if the server takes 300ms to download something from the
external source and then 700ms to send it back to the browser, you can save up
to 300ms of network latency between the server and the browser. The situation
gets even better when the external server that serves the content is slow and
takes quite some time to deliver the content. The slower the external site is,
the more you save with this continuous streaming approach. It is significantly
faster than the blocking approach when the external server is in Asia or
Australia and your server is in the USA.

The approach for the continuous proxy is:

  • Read bytes from the external server in chunks of 8KB from a separate
    thread (the reader thread) so that reading is not blocked
  • Store the chunks in an in-memory queue
  • Write the chunks to the ASP.NET Response from that same queue
  • If the queue is empty, wait until more bytes are downloaded by the reader
    thread



The pipe stream needs to be thread safe, and it needs to support blocking
reads. Blocking read means that if a thread tries to read a chunk from the
stream and the stream is empty, the thread is suspended until another thread
writes something to the stream. Once a write happens, the reader thread
resumes and is allowed to read. I have taken the PipeStream code from a
CodeProject article by James Kolpack and extended it to make sure it's high
performance, supports chunks of bytes to be stored instead of single bytes,
supports timeouts on waits, and so on.
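
For illustration only, here is a minimal sketch of such a blocking pipe,
assuming a simple Monitor-based wait/pulse scheme. The actual PipeStream used
here is a full Stream subclass with chunked storage and timeout support, so
treat this as the core idea rather than the real class:

using System.Collections.Generic;
using System.Threading;

// Minimal blocking pipe sketch: the writer enqueues chunks, the reader
// blocks until a chunk arrives or the writer signals completion.
public class BlockingPipe
{
    private readonly Queue<byte[]> chunks = new Queue<byte[]>();
    private bool completed;

    public void Write(byte[] chunk)
    {
        lock (chunks)
        {
            chunks.Enqueue(chunk);
            Monitor.Pulse(chunks); // wake up a blocked reader
        }
    }

    public void CompleteWriting()
    {
        lock (chunks)
        {
            completed = true;
            Monitor.PulseAll(chunks);
        }
    }

    // Blocks while the pipe is empty; returns null once the writer is done
    public byte[] Read()
    {
        lock (chunks)
        {
            while (chunks.Count == 0 && !completed)
                Monitor.Wait(chunks);
            return chunks.Count > 0 ? chunks.Dequeue() : null;
        }
    }
}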

I did some comparison between the regular proxy (blocking, synchronous,
download-all-then-deliver) and the streaming proxy (continuous transmission
from external server to browser). Both proxies download the MSDN feed and
deliver it to the browser. The time taken here shows the total duration from
the browser making the request to the proxy until it receives the entire
response.



Figure: Time taken by Streaming Proxy vs Regular Proxy while
downloading MSDN feed

It's not a very scientific graph, and the response time varies with the link
speed between the browser and the proxy server and from the proxy server to
the external server. But it shows that most of the time, the streaming proxy
outperformed the regular proxy.



Figure: Test client to compare between Regular Proxy and Streaming
Proxy

You can also test both proxies' response times by going to
http://labs.dropthings.com/AjaxStreamingProxy. Enter a URL, hit the
Regular/Stream button, and see the "Statistics" text box for the total
duration. You can turn on "Cache response" and hit a URL from one browser,
then go to another browser and hit the same URL to see the response coming
directly from the server cache. Also, if you hit the URL again in the same
browser, you will see the response come instantly without ever making a call
to the server. That's the browser cache at work.

Learn more about Http Response caching from my blog post:

Making best use of cache for high performance website

A Visual Studio Web Test run inside a Load Test shows a better
picture:


Figure: Regular Proxy load test result shows Avg Requests/Sec 0.79
and Avg Response Time 2.5 sec


Figure: Streaming Proxy load test result shows Avg Requests/Sec 1.08
and Avg Response Time 1.8 sec

From the above load test results, the streaming proxy delivers 26% better
requests/sec, and its average response time is 29% better. The numbers may
sound small, but at Pageflakes, a 29% better response time means 1.29 million
seconds saved per month for all the users on the website. So, we are
effectively saving 353 man hours per month that were being wasted staring at
the browser screen while it downloads content.

Building the Streaming Proxy

The details of how the streaming proxy is built are quite long and not
suitable for a blog post. So, I have written a CodeProject article:

Fast, Scalable,
Streaming AJAX Proxy – continuously deliver data from cross
domain

Please read the article, and please vote for me if you find it useful.

Reduce website download time by heavily compressing PNG and JPEG

PNG and JPEG are the two most popular formats for web graphics. JPEG is used
for photographs, screenshots and backgrounds, while PNG is used for all other
graphics needs, including cliparts, buttons, headers, footers, borders and so
on. As a result, these two types of graphics files usually make up 80% of the
total graphics used in a website. Of course, there's also GIF, which is very
popular; but as it supports only 256 colors, it is losing its popularity day
by day. PNG seems to be the all-round winner for all kinds of graphics needs.
As all browsers support PNG well enough and PNG supports alpha transparency,
it's surely the best format so far on the web for all-purpose graphics on
websites. So, if you can optimize all PNG and JPEG on your website and
compress them rigorously, you can easily shave several seconds of load time
off your website without doing any coding. Especially if your website is
graphics rich like Pageflakes, a 30% reduction in the total size of graphics
throughout the website is a big performance win.

Optimize all PNG on your website

PNG has a lot of scope for optimization. Generally, regular graphics tools
like Photoshop, Paintshop Pro and Paint.NET all generate PNG using moderate
compression. So, PNG can be compressed further by using advanced compression
tools. OptiPNG is such a tool: it can compress PNG and sometimes produce 50%
smaller output. At Pageflakes, we have around 380 PNGs which, when compressed
using OptiPNG, gave us a 40% reduction in total size. This is a big win for
us.

Here's what Wikipedia says about OptiPNG:

OptiPNG is an open source command line computer program that
reduces the size of PNG files. The compression is lossless, meaning
that the resulting image will have exactly the same appearance as
the source image.

The main purpose of OptiPNG is to reduce the size of the PNG
IDAT data stream by trying various filtering and compression
methods. It also performs automatic bit depth, color type and color
palette reduction where possible, and can correct some data
integrity errors in input files.

Here's a powershell script that you can run from the root folder of your
website. It will scan through all the PNG files in the web tree and run
OptiPNG on each file. This takes quite some time if you have hundreds of
files, so you should make it a part of your build script.

gci -include *.png -recurse | foreach
 { $fileName = $_.FullName; cmd /c "C:\soft\png\optipng.exe -o7 `"$fileName`"" }

Here I have stored optipng.exe in the C:\soft\png
folder.

OptiPNG gives very good compression, but there's even more scope for
compression. AdvanceCOMP is the ultimate in compression technology for PNG,
as it uses the mighty 7-Zip compression algorithm. It can squeeze down PNG
even further, after compression by OptiPNG in its maximum compression mode.
PNG files are compressed using the DEFLATE algorithm. DEFLATE has compression
levels 0 to 9, where 9 is the highest. AdvanceCOMP uses the 7-Zip DEFLATE
encoder, which extends the compression factor even more. During 7-Zip
compression, a much more detailed search of compression possibilities is
performed, at the expense of significant further processor time spent on
searching. Effectively, the 10-point scale used in gzip is extended to
include extra settings above 9, the previous maximum search level. There will
be no difference in decompression speed, regardless of the compressed size
achieved or the time taken to encode the data.

Here's a powershell script that you can run from the root folder of your
website. It will scan through all the PNG files in the web tree and run
AdvanceCOMP on each file. You need to run AdvanceCOMP after running OptiPNG.

gci -include *.png -recurse | foreach
 { $fileName = $_.FullName; cmd /c "C:\soft\png\advpng.exe
 --shrink-insane -z `"$fileName`"" }

I have collected both optipng and advpng in this
zip
file.

Optimize all JPEG on your website

Unfortunately, there's no tool as powerful as OptiPNG for JPEG that you can
run on all your JPEG files to compress them rigorously. A JPEG file is
compressed when it is saved; generally, all graphics applications provide an
option to select the quality ratio of the JPEG being saved. So, you have to
consciously make the best compression vs quality choice while saving the JPEG
file. However, the libjpeg project has a JPEG optimizer tool that does some
optimization on JPEG files. It has a jpegtran utility which does the
optimization, according to Wikipedia:

jpegtran – a utility for lossless transcoding between different
JPEG formats. The jpegtran command-line program is useful to
optimize the compression of a JPEG file, convert between
progressive and non-progressive JPEG formats, eliminate
non-standard application-specific data inserted by some image
programs, or to perform certain transformations on a file —
such as grayscaling, or rotating and flipping (within certain
limits) — all done “losslessly” (i.e. without decompressing
and recompressing the data, and so causing a reduction of image
quality due to generation loss).

However, when we ran jpegtran on all the JPEG files in Pageflakes, we were
able to reduce the total size of all JPEGs by about 20%. So, that was not too
bad.

Here’s how you run jpegtran to get all the jpeg files within
your website directory optimized:

gci -include *.jpg -recurse | foreach
 { $fileName = $_.FullName; $newFileName = $fileName + ".tmp";
cmd /c "C:\soft\jpeg\jpegtran.exe -optimize -outfile `"$newFileName`" `"$fileName`"";
copy $newFileName $fileName; del $newFileName; }

The libjpeg
binaries are uploaded here
for your convenience.

Warning: you have to run each of these powershell commands on a single line.
I have broken the commands into multiple lines for better readability.

Let’s save global bandwidth, go green.

Fast page loading by moving ASP.NET AJAX scripts after visible content

The ASP.NET ScriptManager control has a property, LoadScriptsBeforeUI, which,
when set to false, should load all AJAX framework scripts after the content
of the page. But it does not effectively push all scripts down after the
content. Some framework scripts, extender scripts and other scripts
registered by the Ajax Control Toolkit still load before the page content.
The following screenshot, taken from www.dropthings.com, shows several script
tags still added at the beginning of the <form> tag, which forces them to
download before the page content is loaded and displayed on the page. Script
tags pause rendering on several browsers, especially IE, until the scripts
download and execute. As a result, the site gives a slow loading impression,
as the user stares at a white screen for some time until the scripts before
the content have downloaded and executed completely. If the browser could
render the html before downloading any script, the user would see the page
content immediately after visiting the site instead of a white screen. This
would give the user the impression that the website is blazingly fast (just
like the Google homepage), because the user would ideally see the page
content, if it's not too large, immediately after hitting the URL.

Figure: Script blocks being delivered before the content

From the above screenshot you see there are some scripts from the ASP.NET
AJAX framework and some from the Ajax Control Toolkit that are added before
the content of the page. Until these scripts download, the browser doesn't
show anything on the UI, and thus you get a pause in rendering, giving the
user a slow load feeling. Each script reference to an external URL adds about
200ms average network roundtrip delay outside the USA while the browser
fetches the script. So, the user basically stares at a white screen for at
least 1.5 sec no matter how fast an internet connection he/she has.

These scripts are rendered at the beginning of the <form> tag because they
are registered using Page.ClientScript.RegisterClientScriptBlock. Inside the
Page class of System.Web, there's a method BeginFormRender which renders the
client script blocks immediately after the form tag:

internal void BeginFormRender(HtmlTextWriter writer, string formUniqueID)
{
    ...
        this.ClientScript.RenderHiddenFields(writer);
        this.RenderViewStateFields(writer);
        ...
        if (this.ClientSupportsJavaScript)
        {
            ...
            if (this._fRequirePostBackScript)
            {
                this.RenderPostBackScript(writer, formUniqueID);
            }
            if (this._fRequireWebFormsScript)
            {
                this.RenderWebFormsScript(writer);
            }
        }
        this.ClientScript.RenderClientScriptBlocks(writer);
}

Figure: Decompiled code from System.Web.Page class

Here you see that several script blocks, including scripts registered by
calling ClientScript.RegisterClientScriptBlock, are rendered right after the
form tag starts.

There's no easy workaround to override the BeginFormRender method and defer
rendering of these scripts. These rendering functions are buried inside
System.Web and none of them are overridable. So, the only solution seems to
be using a response filter to capture the html being written and suppress
rendering of the script blocks until the end of the body tag. When the
</body> tag is about to be rendered, we can safely assume the page content
has been successfully delivered, and all suppressed script blocks can then be
rendered at once.

In ASP.NET 2.0, you create a response filter, which is an implementation of
Stream. You can replace the default Response.Filter with your own stream, and
ASP.NET will then use your filter to write the final rendered HTML. When
Response.Write is called or the Page's Render method fires, the response is
written to the output stream via the filter. So, you can intercept every byte
that's going to be sent to the client (browser) and modify it the way you
like. Response filters can be used in a variety of ways to optimize page
output, like stripping off all whitespace, doing some formatting on the
generated content, manipulating the characters being sent to the browser,
and so on.

I have created a response filter which captures all characters being sent to
the browser. If it finds that script blocks are being rendered, instead of
rendering them to the Response.OutputStream, it extracts the script blocks
out of the buffer being written and renders the rest of the content. It
stores all script blocks, both inline and external, in a string buffer. When
it detects that the </body> tag is about to be written to the response, it
flushes all the captured script blocks from the string buffer.

public class ScriptDeferFilter : Stream
{
    Stream responseStream;
    long position;

    /// <summary>
    /// When this is true, script blocks are suppressed and captured for
    /// later rendering
    /// </summary>
    bool captureScripts;

    /// <summary>
    /// Holds all script blocks that are injected by the controls.
    /// The script blocks will be moved after the form tag renders.
    /// </summary>
    StringBuilder scriptBlocks;

    Encoding encoding;

    public ScriptDeferFilter(HttpResponse response)
    {
        this.encoding = response.Output.Encoding;
        this.responseStream = response.Filter;

        this.scriptBlocks = new StringBuilder(5000);
        // When this is on, script blocks are captured and not written to output
        this.captureScripts = true;
    }

Here's the beginning of the filter class. When it initializes, it takes the
original response filter. Then it overrides the Write method of the Stream so
that it can capture the buffers being written and do its own processing:

public override void Write(byte[] buffer, int offset, int count)
{
    // If we are not capturing script blocks anymore, just redirect to response stream
    if (!this.captureScripts)
    {
        this.responseStream.Write(buffer, offset, count);
        return;
    }

    /*
     * Script and HTML can be in one of the following combinations in the specified buffer:
     * .....<script ....>.....</script>.....
     * <script ....>.....</script>.....
     * <script ....>.....</script>
     * <script ....>.....
     * ....<script ....>.....
     * .....</script>.....
     * .....</script>
     * ..........
     * Here, "...." means html content between and outside script tags
     */

    char[] content = this.encoding.GetChars(buffer, offset, count);

    int scriptTagStart = 0;
    int lastScriptTagEnd = 0;
    bool scriptTagStarted = false;

    for (int pos = 0; pos < content.Length; pos++)
    {
        // See if a tag starts
        char c = content[pos];
        if (c == '<')
        {
            int tagStart = pos;
            // Check if it's a tag ending
            if (content[pos + 1] == '/')
            {
                pos += 2; // go past the </

                // See if a script tag is ending
                if (isScriptTag(content, pos))
                {
                    // Script tag just ended. Get the whole script
                    // and store it in the buffer
                    pos = pos + "script>".Length;
                    scriptBlocks.Append(content, scriptTagStart, pos - scriptTagStart);
                    scriptBlocks.Append(Environment.NewLine);
                    lastScriptTagEnd = pos;

                    scriptTagStarted = false;
                    continue;
                }
                else if (isBodyTag(content, pos))
                {
                    // The body tag has just ended. Time to render all the script
                    // blocks we have suppressed so far and stop capturing script blocks

                    if (this.scriptBlocks.Length > 0)
                    {
                        // Render all pending html output till now
                        this.WriteOutput(content, lastScriptTagEnd, tagStart - lastScriptTagEnd);

                        // Render the script blocks
                        byte[] scriptBytes = this.encoding.GetBytes(this.scriptBlocks.ToString());
                        this.responseStream.Write(scriptBytes, 0, scriptBytes.Length);

                        // Stop capturing script blocks
                        this.captureScripts = false;

                        // Write from the body tag start to the end of the input buffer and return
                        // from the function. We are done.
                        this.WriteOutput(content, tagStart, content.Length - tagStart);
                        return;
                    }
                }
                else
                {
                    // Some other tag's closing. Safely skip one character as the smallest
                    // html tag is one character, e.g. <b>. Just an optimization to save one loop
                    pos++;
                }
            }
            else
            {
                if (isScriptTag(content, pos + 1))
                {
                    // Script tag started. Record the position, as we will
                    // capture the whole script tag including its content
                    // and store it in an internal buffer.
                    scriptTagStart = pos;

                    // Write html content since the last script tag closing up to this script tag
                    this.WriteOutput(content, lastScriptTagEnd, scriptTagStart - lastScriptTagEnd);

                    // Skip the tag start to save some loops
                    pos += "<script".Length;

                    scriptTagStarted = true;
                }
                else
                {
                    // Some other tag started.
                    // Safely skip one character because the smallest tag is one character, e.g. <b>.
                    // Just an optimization to eliminate one loop
                    pos++;
                }
            }
        }
    }

    // If a script tag is partially sent to buffer, then the remaining content
    // is part of the last script block
    if (scriptTagStarted)
    {
        this.scriptBlocks.Append(content, scriptTagStart, content.Length - scriptTagStart);
    }
    else
    {
        // Render the characters since the last script tag ending
        this.WriteOutput(content, lastScriptTagEnd, content.Length - lastScriptTagEnd);
    }
}

There are several situations to consider here. The Write method is called
several times during the page render process because the generated HTML can
be quite big, so each call will contain partial HTML. It's possible that the
first Write call contains the start of a script block but no ending script
tag; the following Write call may or may not contain the ending tag. So, we
need to preserve state to make sure we don't overlook any script block. Each
Write call can also have several script blocks in the buffer, or no script
block and only page content.

The idea here is to go through each character and see if there's any starting
script tag. If there is, remember the start position of the script tag. If
the script end tag is found within the buffer, then extract the whole script
block out of the buffer and render the remaining html. If no ending tag is
found but a script tag did start within the buffer, then suppress output and
capture the remaining content in the script buffer, so that the next call to
Write can grab the rest of the script and extract it out of the output.

There are two other private functions that are basically helpers and do not
do anything interesting:

private void WriteOutput(char[] content, int pos, int length)
{
    if (length == 0) return;

    byte[] buffer = this.encoding.GetBytes(content, pos, length);
    this.responseStream.Write(buffer, 0, buffer.Length);
}

private bool isScriptTag(char[] content, int pos)
{
    if (pos + 5 < content.Length)
        return ((content[pos] == 's' || content[pos] == 'S')
            && (content[pos + 1] == 'c' || content[pos + 1] == 'C')
            && (content[pos + 2] == 'r' || content[pos + 2] == 'R')
            && (content[pos + 3] == 'i' || content[pos + 3] == 'I')
            && (content[pos + 4] == 'p' || content[pos + 4] == 'P')
            && (content[pos + 5] == 't' || content[pos + 5] == 'T'));
    else
        return false;
}

private bool isBodyTag(char[] content, int pos)
{
    if (pos + 3 < content.Length)
        return ((content[pos] == 'b' || content[pos] == 'B')
            && (content[pos + 1] == 'o' || content[pos + 1] == 'O')
            && (content[pos + 2] == 'd' || content[pos + 2] == 'D')
            && (content[pos + 3] == 'y' || content[pos + 3] == 'Y'));
    else
        return false;
}

The isScriptTag and isBodyTag functions may look weird. The reason for such
weird code is pure performance. Instead of doing fancy checks like taking a
part of the array out and doing a string comparison, this is the fastest way
of doing the check. The && operator is short-circuited: if any condition in
the chain fails, the rest are not evaluated at all. So, this is as fast as it
can get for checking certain characters.

There are some corner cases that are not handled here. For example, what if
the buffer contains a partial script tag declaration? Say the buffer ends
with ....<scr and that's it; the remaining characters did not arrive in this
buffer, and the next buffer starts with the rest of the tag, like
ipt src="..." >...... In such a case, the script tag won't be taken out. One
way to handle this would be to make sure you always have enough characters
left in the buffer to do a complete tag name check. If there aren't enough,
store the half-finished buffer somewhere, and on the next call to Write,
combine it with the new buffer and do the processing.
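
Here is a hedged sketch of that carry-over idea; the field and helper below
are hypothetical additions to ScriptDeferFilter, not part of the code shown
above:

// Hypothetical carry-over support: keep the unfinished tail of the previous
// buffer and prepend it to the next one before scanning.
private char[] pendingTail = new char[0];

private char[] CombineWithPendingTail(char[] content)
{
    if (pendingTail.Length == 0) return content;

    char[] combined = new char[pendingTail.Length + content.Length];
    Array.Copy(pendingTail, 0, combined, 0, pendingTail.Length);
    Array.Copy(content, 0, combined, pendingTail.Length, content.Length);
    pendingTail = new char[0]; // consumed
    return combined;
}

// At the end of Write, if fewer than "</script".Length characters remain
// after the last '<', store them in pendingTail instead of writing them out,
// so the next Write call can complete the tag name check.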

In order to install the filter, you need to hook it in the Global.asax
BeginRequest event, or some other event that fires before the response is
generated:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (Request.HttpMethod == "GET")
    {
        if (Request.AppRelativeCurrentExecutionFilePath.EndsWith(".aspx"))
        {
            Response.Filter = new ScriptDeferFilter(Response);
        }
    }
}

Here I am hooking the filter only for GET calls to .aspx pages. You can hook
it to POST calls as well; but asynchronous postbacks are regular POSTs, and I
do not want to make any change to the generated JSON or html fragments.
Another way is to hook the filter only when the ContentType is text/html.

When this filter is installed, www.dropthings.com defers all script loading
until after the </form> tag completes.

Figure: Script tags are moved after the </form> tag when the filter is used

You can grab the filter class from App_Code\ScriptDeferFilter.cs in the
Dropthings project. Go to the CodePlex site and download the latest code for
the latest filter.

HTML and IFRAME widget for Dropthings

I made two new widgets for Dropthings: one is an HTML widget that allows you
to put any HTML content inside a widget, and the other is an IFRAME widget
that allows you to host an IFRAME pointing to any URL. You can see examples
of these widgets at http://dropthings.omaralzabir.com

You can write any HTML and script in the HTML widget and build your own
widget at run time. You can put video, audio, pictures or even ActiveX
components right on the widget. An example of the HTML widget is the NatGeo
widget:



This is made with the HTML widget, where I have just added a widget snippet
that I took from ClearSpring.

The IFRAME widget is also very useful. It hosts an IFRAME pointing to the URL
you specify in the settings. Using this widget, you can include any widget
from widget providers like Labpixies, ClearSpring, Google Widgets and so on.
For instance, I have built three widgets from Labpixies (Stock, Sports and
Travelocity) using the IFRAME widget. The only thing you need to specify in
order to bring in these widgets is the URL, which you can find in the widget
snippet provided on the provider's website.



HTML Widget

The HTML widget basically collects an HTML snippet in a text box and then
renders the HTML output inside a Literal control. The UI consists of only a
text box, a button and a literal control:

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="HtmlWidget.ascx.cs" Inherits="Widgets_HtmlWidget" %>
<asp:Panel ID="SettingsPanel" runat="server" Visible="false">
HTML: <br />
<asp:TextBox ID="HtmltextBox" runat="server" Width="300" Height="200" MaxLength="2000" TextMode="MultiLine" />
<asp:Button ID="SaveSettings" runat="server" OnClick="SaveSettings_Clicked" Text="Save" />
</asp:Panel>
<asp:Literal ID="Output" runat="server" />

On the server side, there's basically the OnPreRender function that does the
HTML rendering. The other code snippets are the standard formalities for a
widget in Dropthings.

public partial class Widgets_HtmlWidget : System.Web.UI.UserControl, IWidget
{
    private IWidgetHost _Host;

    private XElement _State;
    private XElement State
    {
        get
        {
            string state = this._Host.GetState();
            // Assume an empty <state></state> element when no state is saved yet
            if (string.IsNullOrEmpty(state))
                state = "<state></state>";
            if (_State == null) _State = XElement.Parse(state);
            return _State;
        }
    }

    protected void Page_Load(object sender, EventArgs e)
    {
    }

    void IWidget.Init(IWidgetHost host)
    {
        this._Host = host;
    }

    void IWidget.ShowSettings()
    {
        this.HtmltextBox.Text = this.State.Value;
        SettingsPanel.Visible = true;
    }

    void IWidget.HideSettings()
    {
        SettingsPanel.Visible = false;
    }

    void IWidget.Minimized()
    {
    }

    void IWidget.Maximized()
    {
    }

    void IWidget.Closed()
    {
    }

    private void SaveState()
    {
        var xml = this.State.Xml();
        this._Host.SaveState(xml);
    }

    protected void SaveSettings_Clicked(object sender, EventArgs e)
    {
        this.State.RemoveAll();
        this.State.Add(new XCData(this.HtmltextBox.Text));

        this.SaveState();
    }

    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);

        this.Output.Text = (this.State.FirstNode as XCData ?? new XCData("")).Value;
    }
}

There you have it: an HTML widget that can take any HTML and render it on the
UI.

IFRAME Widget

Just like the HTML widget, the IFRAME widget has a simple URL
text box, and it renders an