Prevent ASP.NET cookies from being sent on every css, js, image request

ASP.NET generates some large cookies if you are using the ASP.NET membership provider. Especially if you are using the anonymous identification provider, a typical site will send the following cookies with every request when a user is logged in, whether the request is for a dynamic page or for a static resource:

.DBANON=w3kYczsH8Wvzs6MgryS4JYEF0N-8ZR6aLRSTU9KwVaGaydD6WwUHD7X9tN8vBgjgzKf3r3SJHusTYFjU85y
YfnunyCeuExcZs895JK9Fk1HS68ksGwm3QpxnRZvpDBAfJKEUKee2OTlND0gi43qwwtIPLeY1;
ASP.NET_SessionId=bmnbp155wilotk45gjhitoqg; DBAUTH12=2A848A8C200CB0E8E05C6EBA8059A0DBA228FC5F6EDD29401C249D2
37812344C15B3C5C57D6B776037FAA8F14017880E57BDC14A7963C58B0A0B30229
AF0123A6DF56601D814E75525E7DCA9AD4A0EF200832B39A1F35A5111092F0805B
0A8CD3D2FD5E3AB6176893D86AFBEB68F7EA42BE61E89537DEAA3279F3B576D0C
44BA00B9FA1D9DD3EE985F37B0A5A134ADC0EA9C548D

That's 517 bytes of worthless data being sent from the browser to your webserver with every css, js, and image request!

You might think 517 bytes is peanuts. Do the math:

  • An average page makes about 40 requests to the server. 40 x 517 bytes ≈ 20 KB per page view.
  • 1M page views = 20 GB of cookie data uploaded.
  • That's 20 GB of data uploaded to your server for just 1M page views, and it does not take millions of users to produce 1M page views. Around 100K+ users visiting your site every day can generate 1M page views a day.

Here’s how to prevent this:

  • Set up a new website and map a different subdomain to it. If your main site is www.yoursite.com, then map static.yoursite.com to the new site.
  • Manually change all the <link>, <script>, <img> tags and css url(…) references, prefixing each static resource with http://static.yoursite.com.
  • If you don’t want to do it manually, use this solution I have built before.
  • Add a Global.asax and in the Application_EndRequest event handler do this trick:
    HttpContext context = HttpContext.Current;
    if (context.Request.Url.ToString().StartsWith("http://static.yoursite.com"))
    {
      List<string> cookiesToClear = new List<string>();
      foreach (string cookieName in context.Request.Cookies)
      {
        HttpCookie cookie = context.Request.Cookies[cookieName];
        cookiesToClear.Add(cookie.Name);
      }

      foreach (string name in cookiesToClear)
      {
        HttpCookie cookie = new HttpCookie(name, string.Empty);
        cookie.Expires = DateTime.Today.AddYears(-1);

        context.Response.Cookies.Set(cookie);
      }
    }

    This code reads all the cookies received in the request and expires them so that the browser does not send those cookies again. If by any chance ASP.NET cookies get set on the static.yoursite.com domain, this code will take care of removing them.
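    For clarity, here is where that snippet lives in Global.asax, a minimal wiring sketch (Application_EndRequest is the standard ASP.NET pipeline event; the body is just the code shown above):

    // Global.asax.cs - wiring sketch only
    protected void Application_EndRequest(object sender, EventArgs e)
    {
      // ... the cookie-clearing snippet shown above goes here ...
    }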

Tweaking WCF to build highly scalable async REST API

At 9 AM, during the peak traffic hours for your business, you get an emergency call that the website you built is no more. It's not responding to any request. Some people can see some pages after waiting a long time, but most can't. So, you think it must be some slow query or the database might need some tuning. You do the regular checks, like looking at CPU and Disk on the database server. You find nothing wrong there. Then you suspect the webserver must be running slow. So, you check CPU and Disk on the webservers. No problem there either. Both web servers and database servers have very low CPU and Disk usage. Then you suspect it must be the network. So, you try a large file copy from webserver to database server and vice versa. Nope, the file copies perfectly fine; the network has no problem. You quickly check RAM usage on all servers, but RAM is fine too. As a last resort, you run diagnostics on the Load Balancer, Firewall, and Switches, but everything is in good shape. Yet your website is down. Looking at the performance counters on the webserver, you see a lot of requests getting queued, very high request execution time, and high request wait time.

[Screenshot: ASP.NET performance counters showing queued requests and high request execution and wait times]

So, you do an IIS restart. Your website comes back online for a couple of minutes and then goes down again. After several restarts you realize it's not an infrastructure issue. You have a scalability issue in your code. All those scalability problems you read about and dismissed as fairy tales that would never happen to you are now happening right in front of you. You realize you should have made your services async.
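For reference, "making your services async" in WCF 3.5 means switching operations to the Begin/End asynchronous pattern, so the request thread is released while the backend work is in flight. A minimal sketch (service, operation, and type names here are hypothetical, not taken from the article):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;

[DataContract]
public class Order
{
    [DataMember] public string Id { get; set; }
    [DataMember] public decimal Total { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    // AsyncPattern = true tells WCF to treat this Begin/End pair as a single
    // async operation; the request thread is freed while the work completes.
    [OperationContract(AsyncPattern = true)]
    [WebGet(UriTemplate = "orders/{id}")]
    IAsyncResult BeginGetOrder(string id, AsyncCallback callback, object state);

    Order EndGetOrder(IAsyncResult result);
}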

However, just converting your sync services to async does not solve the scalability problem. WCF has a bug due to which it cannot serve requests as fast as you would like: the thread pool it uses to handle async calls cannot start threads as fast as requests come in. It only adds a new thread to the pool every 500 ms. As a result, you get a slow ramp-up of threads:

[Chart: slow ramp-up of worker threads over time under load]
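The article walks through the actual fix inside WCF's async plumbing. As a general, hedged illustration of the underlying behavior: the .NET thread pool injects new threads above its configured minimum only at a throttled rate, so one common mitigation (not necessarily the fix described in the article) is to raise the minimums at application startup:

using System;
using System.Threading;

public static class ThreadPoolWarmup
{
    // Call this from Application_Start (or service host startup). The numbers
    // are illustrative only; tune them from your own load tests.
    public static void RaiseMinThreads()
    {
        int workerThreads, completionPortThreads;
        ThreadPool.GetMinThreads(out workerThreads, out completionPortThreads);
        ThreadPool.SetMinThreads(
            Math.Max(workerThreads, 100),
            Math.Max(completionPortThreads, 100));
    }
}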

Read my article to learn the details of how WCF handles async services and how to fix this bug so your async services are truly async and scale under heavy load.

http://www.codeproject.com/KB/webservices/fixwcf_for_restapi.aspx

Don’t forget to vote.

Automatic Javascript, CSS versioning to refresh browser cache

When you update javascript or css files that are already cached in users’ browsers, many users won’t get the updated files for some time because of caching at the browser or intermediate proxy(s). You need some way to force the browser and proxy(s) to download the latest files. There’s no way to do that reliably across all browsers and proxies from the webserver by manipulating cache headers, unless you change the file name or change the URL of the files by introducing a unique query string so that browsers/proxies treat them as new files. Most web developers use the query string approach, adding a version suffix to send the new file to the browser. For example:

<script src="someJs.js?v=1001" ></script>
<link href="someCss.css?v=2001" rel="stylesheet" />

In order to do this, developers have to go through all the html, aspx, ascx, and master pages, find every reference to static files that changed, and then increase the version number. If you forget to do this on some page, that page may break because the browser keeps using the old cached script. So, it requires a lot of regression-test effort to find out whether changing some css or js breaks anything anywhere in the website.

Another approach is to run a build script that scans all files and updates the references to javascript and css files in each and every page of the website. But this approach does not work on dynamic pages where javascript and css references are added at run-time, say using ScriptManager.

If you have no way to know what javascript and css will get added to the page at run-time, the only option is to analyze the page output at runtime and then change the javascript, css references on the fly.

Here’s an HttpFilter that can do that for you. This filter intercepts any ASPX hit and automatically appends the last modification date-time of javascript and css files inside the emitted html. It does so without holding the whole generated html in memory or doing any string operations, because that would cause high memory and CPU consumption on the webserver under high load. The code works with character buffers and the response stream directly so that it’s as fast as possible. I have done enough load testing to ensure that even if you hit an aspx page a million times per hour, it won’t add more than 50 ms to each page’s response time.
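To make the mechanics concrete, here is a bare-bones, hypothetical sketch of the Response.Filter pattern that such a filter builds on. This is not the actual StaticContentFilter, just the shape of a pass-through filter stream that inspects chunks as they are written:

using System;
using System.IO;

public class PassThroughFilter : Stream
{
    private readonly Stream _inner;

    public PassThroughFilter(Stream inner) { _inner = inner; }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // A real filter scans this chunk for src/href attributes, appends the
        // version suffix, and forwards the bytes without buffering the whole page.
        _inner.Write(buffer, offset, count);
    }

    public override void Flush() { _inner.Flush(); }

    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }
    public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
}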

First, you set the filter, called StaticContentFilter, in the Global.asax file’s Application_BeginRequest event handler:

Response.Filter = new Dropthings.Web.Util.StaticContentFilter(
    Response,
    relativePath =>
      {
        var physicalPath = Server.MapPath(relativePath);
        if (Context.Cache[physicalPath] == null)
        {
          var version = "?v=" +
            new System.IO.FileInfo(physicalPath).LastWriteTime
            .ToString("yyyyMMddhhmmss");
          Context.Cache.Add(physicalPath, version, null,
            DateTime.Now.AddMinutes(1), TimeSpan.Zero,
            CacheItemPriority.Normal, null);
          return version;
        }
        else
        {
          return Context.Cache[physicalPath] as string;
        }
      },
    "http://images.mydomain.com/",
    "http://scripts.mydomain.com/",
    "http://styles.mydomain.com/",
    baseUrl,
    applicationPath,
    folderPath);

The only tricky part here is the delegate, which is fired whenever the filter detects a script or css link and asks you to return the version string for that file. Whatever you return gets appended right after the original URL of the script or css. So, here the delegate produces the version as “?v=yyyyMMddhhmmss” using the file’s last modified date-time. It also caches the version per file to make sure it does not perform file I/O on each and every page view just to read the file’s last modified date-time.

For example, the following script and css references in the html:

<script type="text/javascript" src="scripts/jquery-1.4.1.min.js" ></script>
<script type="text/javascript" src="scripts/TestScript.js" ></script>
<link href="Styles/Stylesheet.css" rel="stylesheet" type="text/css" />

will get emitted as:

<script type="text/javascript" src="scripts/jquery-1.4.1.min.js?v=20100319021342" ></script>
<script type="text/javascript" src="scripts/TestScript.js?v=20110522074353" ></script>
<link href="Styles/Stylesheet.css?v=20110522074829" rel="stylesheet" type="text/css" />

As you can see, a query string is generated from each file’s last modified date-time. The good thing is you don’t have to worry about bumping a version number after changing a file; the filter uses the last modified date, which changes only when the file changes.

The HttpFilter shown here can not only append a version suffix, it can also prepend anything you want to image, css, and script URLs. You can use this feature to load images from one domain and scripts from another, benefit from browsers’ parallel loading, and improve page load performance. For example, the following tags can have any URL prepended to them:

<script src="some.js" ></script>
<link href="some.css" />
<img src="some.png" />

They can be emitted as:

<script src="http://javascripts.mydomain.com/some.js" ></script>
<link href="http://styles.mydomain.com/some.css" />
<img src="http://images.mydomain.com/some.png" />

Loading javascript, css, and images from different domains can significantly improve your page load time, since browsers load only two files from a single domain at a time. If you load javascript, css, and images from different subdomains and the page itself from the www subdomain, you can load 8 files in parallel instead of only 2.

Read here to learn how this works:

http://www.codeproject.com/KB/aspnet/autojscssversion.aspx

Appreciate your feedback.

WCF does not support compression out of the box, so fix it

WCF services and clients do not support HTTP compression out of the box in .NET 3.5, even if you turn on Dynamic Compression in IIS 6 or 7. It has been fixed in .NET 4, but those who are stuck with .NET 3.5 for the foreseeable future are out of luck. First of all, it’s IIS’s fault that it does not enable http compression for SOAP messages even when Dynamic Compression is turned on in IIS 7. Secondly, it’s WCF’s fault that it does not send the Accept-Encoding: gzip, deflate header in http requests to the server, which tells IIS that the client supports compression. Thirdly, it’s again WCF’s fault that even if you make IIS send back a compressed response, WCF can’t process it, since it does not know how to decompress it. So, you have to tweak IIS and the System.Net factories to make compression work for WCF services. Compression is key for performance since it can dramatically reduce the data transferred from server to client and thus give a significant performance improvement if you are exchanging medium to large payloads over a WAN or the internet.

There are two steps: first configure IIS, then configure System.Net. There’s no need to tweak anything in WCF itself, like using a message inspector to inject HTTP headers, as you will find people trying to do in various blog posts.

Configure IIS to support gzip on SOAP responses

After you have enabled Dynamic Compression on IIS 7 following the guide, you need to add the following block to the <dynamicTypes> section of the applicationHost.config file in the C:\Windows\System32\inetsrv\config folder. Be very careful about the spaces in the mimeType values; they need to be exactly the same as what you find in the response headers of SOAP responses generated by WCF services.

<add mimeType="application/soap+xml" enabled="true" />
<add mimeType="application/soap+xml; charset=utf-8" enabled="true" />
<add mimeType="application/soap+xml; charset=ISO-8895-1" enabled="true" />

After adding the block, the config file will look like this:

[Screenshot: applicationHost.config showing the <dynamicTypes> section with the added mimeType entries]

For IIS 6, you need to first enable dynamic compression and then allow the .svc extension so that IIS compresses responses from WCF services.

Next, you need to make WCF send the Accept-Encoding: gzip, deflate header as part of the request, and then support decompressing the compressed response.

Send proper request header in WCF requests

You need to override the default System.Net WebRequest creator so that it creates HttpWebRequest instances with automatic decompression turned on. First, you create a class like this:

using System;
using System.Net;
using System.Reflection;

public class CompressibleHttpRequestCreator : IWebRequestCreate
{
    public CompressibleHttpRequestCreator()
    {
    }

    WebRequest IWebRequestCreate.Create(Uri uri)
    {
        // HttpWebRequest has no public constructor, so create it via reflection.
        HttpWebRequest httpWebRequest =
            Activator.CreateInstance(typeof(HttpWebRequest),
                BindingFlags.CreateInstance | BindingFlags.Public |
                BindingFlags.NonPublic | BindingFlags.Instance,
                null, new object[] { uri, null }, null) as HttpWebRequest;

        if (httpWebRequest == null)
        {
            return null;
        }

        // This makes the request go out with "Accept-Encoding: gzip, deflate"
        // and transparently decompresses the compressed response.
        httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip |
            DecompressionMethods.Deflate;

        return httpWebRequest;
    }
}

Then, in the WCF client application’s app.config or web.config, you put this block inside <system.net>, which tells System.Net to use your factory instead of the default one:

<system.net>
  <webRequestModules>
    <remove prefix="http:"/>
    <add prefix="http:"
         type="WcfHttpCompressionEnabler.CompressibleHttpRequestCreator, WcfHttpCompressionEnabler, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  </webRequestModules>
</system.net>
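With the module registered, nothing changes in the calling code. A hypothetical generated proxy call like the one below now goes out with the Accept-Encoding: gzip, deflate header, and the compressed response is transparently decompressed (OrderServiceClient is an illustrative proxy name, not part of the sample project):

// OrderServiceClient is a hypothetical svcutil-generated proxy
var client = new OrderServiceClient();
var orders = client.GetPendingOrders();   // request/response now gzip-compressed over the wire
client.Close();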

That’s it.

I have uploaded a sample project which shows how all of this works.

Quick ways to boost performance and scalability of ASP.NET, WCF and Desktop Clients

There are some simple configuration changes you can make in machine.config and IIS to give your web applications a significant performance boost. These are simple, harmless changes, but they make a lot of difference in terms of scalability. By tweaking the system.net settings, you can increase the number of parallel calls that can be made from the services hosted on your servers as well as from desktop clients, and thus increase scalability. By changing the WCF throttling config, you can increase the number of simultaneous calls WCF will accept and thus make the most of your hardware. By changing the ASP.NET process model, you can increase the number of concurrent requests your website can serve. And finally, by turning on IIS caching and dynamic compression, you can dramatically increase page download speed in browsers and the overall responsiveness of your applications.
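To give a flavor of the settings involved, here is a hedged sketch of the relevant config sections. The element and attribute names are the standard .NET/WCF/ASP.NET ones, but the values are purely illustrative; take the recommended numbers from the article and your own load testing:

<!-- system.net: raise the outbound connection limit (client default is only 2 per host) -->
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>

<!-- WCF: raise the service throttling limits -->
<serviceBehaviors>
  <behavior name="HighThroughput">
    <serviceThrottling maxConcurrentCalls="100"
                       maxConcurrentSessions="100"
                       maxConcurrentInstances="100" />
  </behavior>
</serviceBehaviors>

<!-- ASP.NET process model in machine.config -->
<processModel autoConfig="false"
              maxWorkerThreads="100"
              maxIoThreads="100"
              minWorkerThreads="50" />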

Read the CodeProject article for more details.

http://www.codeproject.com/KB/webservices/quickwins.aspx


Please vote for me if you find the article useful.

Ten Caching Mistakes that Break your App

 

Caching frequently used objects that are expensive to fetch from the source makes an application perform faster under high load. It helps scale an application under concurrent requests. But some hard-to-notice mistakes can make an application suffer under high load instead of perform better, especially when you are using distributed caching, where a separate cache server or cache process stores the items. Moreover, code that works fine with an in-memory cache can fail when the cache is moved out-of-process. Here I will show you some common distributed caching mistakes that will help you make better decisions about when to cache and when not to.

Here are the top 10 mistakes I have seen:

  1. Relying on .NET’s default serializer.
  2. Storing large objects in a single cache item.
  3. Using the cache to share objects between threads.
  4. Assuming items will be in the cache immediately after storing them.
  5. Storing entire collections with nested objects.
  6. Storing parent-child objects together and also separately.
  7. Caching configuration settings.
  8. Caching live objects that have open handles to streams, files, the registry, or the network.
  9. Storing the same item using multiple keys.
  10. Not updating or deleting items in the cache after updating or deleting them in persistent storage.

Let’s see what they are and how to avoid them.
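To make one of these concrete, mistake #4 typically looks like the hedged sketch below: with an out-of-process cache, an item you just stored may not be readable a moment later (eviction, memory pressure, a remote node restart), so every read needs a null check and a fallback to the source. The sketch uses ASP.NET's HttpRuntime.Cache purely for illustration; the same rule applies to any distributed cache client, and Order is a hypothetical type:

using System;
using System.Web;
using System.Web.Caching;

public static class OrderCache
{
    // Wrong: storing and then assuming the item is guaranteed to be there.
    //   HttpRuntime.Cache.Insert(key, order);
    //   var cached = (Order)HttpRuntime.Cache[key];   // can be null!

    // Safer: treat the cache as a hint, never as the source of truth.
    public static Order GetOrder(string key, Func<Order> loadFromSource)
    {
        var cached = HttpRuntime.Cache[key] as Order;
        if (cached != null)
            return cached;

        var order = loadFromSource();    // fall back to the persistent store
        HttpRuntime.Cache.Insert(key, order, null,
            DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
        return order;
    }
}

public class Order { /* hypothetical entity */ }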

http://www.codeproject.com/KB/web-cache/cachingmistakes.aspx

Please vote if you find this useful.

Production Challenges of ASP.NET Website – recording of my talk

“It works on my PC”, the common line from developers. But in production, the CPU burns out, disks go crazy, your site stops responding every now and then, and you have to frequently recycle the application pool, or even restart Windows, to bring things back to normal for a while.

As your site grows from thousands to millions of users, delivering new features and bug fixes becomes an increasingly complex process where you have to ensure the performance and stability of the site is not compromised. Let me tell you some stories from the trenches about how I grew my hot startup Pageflakes (acquired by MySpace founder Brad Greenspan) from thousands to millions of visits within a year, surviving production software, hardware, process, and people challenges, and how you can avoid and solve them using my handy collection of articles and tools.

Watch the presentation I made in a LIDNUG session:

http://www.lidnug.org/Archives.aspx

Find the session with the title “Production Challenges of ASP.NET Website”.

Unfortunately they could not record the whole session. But whatever is there should be useful for you.

Building High Performance Queue in Database for storing Orders, Notifications, Tasks

We have queues everywhere. Most websites have queues for asynchronously sending notifications like email and SMS. E-commerce sites have queues for storing, processing, and dispatching orders. Factory assembly-line automation systems have queues for running tasks in parallel, in a certain order. The queue is a widely used data structure that sometimes has to be built in a database instead of using specialized queue technologies like MSMQ. Running a high performance and highly scalable queue on database technologies is a big challenge, and it’s hard to maintain once the queue starts handling millions of rows queued and dequeued per day. Let me show you some common design mistakes made in designing queue-like tables and how to get maximum performance and scalability from a queue implemented with plain database features.


Let’s first identify the challenges you have in such queue tables:

  • The table is both read from and written to. Thus queuing and dequeuing impact each other and cause lock contention, transaction deadlocks, IO timeouts, etc. under heavy load.
  • When multiple receivers try to read from the same queue, they randomly pick up duplicate items from the queue, resulting in duplicate processing. You need some kind of high performance row locking on the queue so that the same item never gets picked up by concurrent receivers.
  • The queue table needs to store rows in a certain order and read them in a certain order, which is an index design challenge. It’s not always first in, first out. Sometimes orders have higher priority and need to be processed regardless of when they were queued.
  • The queue table needs to store serialized objects in XML or binary form, which becomes a storage and index rebuild challenge. You can’t rebuild indexes on the queue table online because it contains text and/or binary fields. Thus the table keeps getting slower every day and eventually queries start timing out, until you take a downtime and rebuild the indexes.
  • During dequeue, a batch of rows is selected, updated, and then returned for processing. You have a “State” column that defines the state of the items, and during dequeue you select items of a certain state. Now, State has only a small set of values, e.g. PENDING, PROCESSING, PROCESSED, ARCHIVED. As a result, an index on the “State” column does not give you enough selectivity; there can be thousands of rows with the same state. Any dequeue operation therefore results in a clustered index scan that’s both CPU and IO intensive and produces lock contention.
  • During dequeue, you cannot just remove the rows from the table because that causes fragmentation in the table. Moreover, you need to retry orders/jobs/notifications N times in case they fail on the first attempt. This means rows are stored for a longer period, indexes keep growing, and dequeue gets slower day by day.
  • You have to archive processed items from the queue table to a different table or database in order to keep the main queue table small. That means moving a large number of rows of a particular status to another database. Such large data removal leaves the table highly fragmented, causing poor queue/dequeue performance.
  • You have a 24×7 business with no maintenance window where you can take a downtime and archive a large number of rows. This means you have to continuously archive rows without affecting production queue-dequeue traffic.

If you have implemented such a queue table, you might have suffered from one or more of the above challenges. Let me give you some tips on how to overcome these challenges and how to design and maintain a high performance queue table.
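To give a flavor of the kind of technique involved (a generic pattern, not necessarily the exact design from the article), the sketch below claims a batch of pending rows atomically using UPDATE ... OUTPUT with ROWLOCK and READPAST hints, so concurrent receivers skip rows already locked by someone else instead of blocking or double-picking. Table and column names are hypothetical:

using System.Collections.Generic;
using System.Data.SqlClient;

public static class QueueReceiver
{
    // Sketch: atomically claim up to batchSize PENDING rows and return them in one round trip.
    public static List<KeyValuePair<long, string>> Dequeue(string connectionString, int batchSize)
    {
        const string sql = @"
            UPDATE TOP (@BatchSize) dbo.OrderQueue WITH (ROWLOCK, READPAST)
            SET    State = 'PROCESSING'
            OUTPUT inserted.QueueId, inserted.Payload
            WHERE  State = 'PENDING';";

        var items = new List<KeyValuePair<long, string>>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@BatchSize", batchSize);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    items.Add(new KeyValuePair<long, string>(reader.GetInt64(0), reader.GetString(1)));
            }
        }
        return items;
    }
}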

Read the article for details:

http://www.codeproject.com/KB/database/fastqueue.aspx

Please vote if you find this useful.

Munq is for web, Unity is for Enterprise

The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It’s a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it’s CPU hungry. In fact, it’s so CPU hungry that it’s impossible to make it work at internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day, and I found unusually high CPU usage. Here’s why:

[Screenshot: CPU profile showing Unity’s Resolve<>() at the top of the CPU consumers]

I did some CPU profiling on my open source project Dropthings and found that the most CPU is consumed by Unity’s Resolve<>(). There’s no funky use of Unity in the project, just straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() consumes significant CPU even after the site is warm and has been running for a while.
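For context, the usage pattern in question is the most basic container workflow; nothing exotic (the interface and class below are hypothetical stand-ins, not the actual Dropthings types):

using Microsoft.Practices.Unity;

public interface IWidgetService { }
public class WidgetService : IWidgetService { }

public static class ContainerDemo
{
    public static void Run()
    {
        // Registration at startup...
        var container = new UnityContainer();
        container.RegisterType<IWidgetService, WidgetService>();

        // ...and resolution on every request/page. This Resolve<>() call is
        // what showed up at the top of the CPU profile.
        var service = container.Resolve<IWidgetService>();
    }
}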

Then I tried Munq, which is a basic dependency injection container. It has everything you would normally need in a regular project, and it claims to be the fastest DI container out there. So, I converted all the Unity code in Dropthings to Munq, ran the CPU profile again, and voilà!

[Screenshot: CPU profile after switching to Munq, with no Munq calls appearing among the top CPU consumers]

 

There’s no trace of any Munq calls anywhere in the profile, which shows Munq is a lot faster than Unity.