Powerful IIS/Apache Monitoring dashboard using ElasticSearch+Grafana

IIS and Apache do not come with any monitoring dashboard that shows you graphs of requests/sec, response times, slow URLs, failed requests and so on. You need external tools to visualize that. ElasticSearch and Grafana are two such tools that let you collect logs from web servers and then parse, filter, sort, analyze, and build beautiful dashboards out of them. ElasticSearch is a distributed JSON document store, just like a NoSQL database, and you can use it to store logs as JSON documents. You can then use Grafana to fetch those documents from ElasticSearch and build beautiful presentations. Both are free and open source.

Web Graph

Read the full details here; please don’t forget to rate:

http://www.codeproject.com/Articles/1094405/Powerful-IIS-Apache-Monitoring-dashboard-using

ElasticSearch is a very powerful product. It is a multi-purpose distributed JSON document store and also a powerful search engine. The most frequent use cases for ElasticSearch are creating searchable documents, implementing auto-completion features, and aggregating and analyzing logs. Grafana is a beautiful dashboard tool that takes ElasticSearch, among many others, as a data source. Combining these two, you can build sophisticated monitoring and reporting tools to get a holistic view of how your application is performing and where the issues are.

Enjoy!

Using custom font without slowing down page load

Custom fonts are widely used nowadays to give websites a trendy look, but at great expense to page load speed. Each custom font has to be downloaded, initialized and applied throughout the page before the browser can render anything. If you have a custom font in any CSS, anywhere on the page, most browsers (like IE) will not render anything (the page remains white) until the custom font is downloaded, initialized and applied. Custom fonts are usually over 300KB, so they significantly slow down page load. See this timeline of a website that uses a custom font:

image

The gray bar is the .eot font. The green vertical bar is when the browser renders the page. Until that green bar, the user is staring at a completely white page for 5.5 seconds. The .eot is the very last thing the browser downloads before it can render anything on the page. A 5.5-second page load time is unacceptable for most websites, because on average 40% of users abandon a site if it does not load within 3 seconds!

image

The above graph shows nearly an 80% change in bounce rate for pages that take over 4 seconds to load!

It is absolutely key that you keep the home page load time under 4 seconds. The browser cannot remain blank for more than 4 seconds; ideally the page should be fully loaded within 4 seconds if you want to encourage users to come back to your site.

Let’s learn a trick that downloads custom fonts behind the scenes, without preventing the browser from rendering anything, and then applies the font as soon as it is downloaded. Once you apply this trick, the browser will render the page before the custom font is downloaded:

image

As you see here, the browser has rendered the page at the 4th second, before it has downloaded the custom font file. The font takes a long time to download. During this whole time, the browser is free, content is rendered, and the user can scroll around. Once the font is downloaded, it is applied. The page freezes from 8.5 to 9.5 seconds while the browser initializes the font and applies it to the page.

Step 1: Create a CSS that loads the custom font

First, create a CSS file that loads the custom font and applies it to the HTML elements. Start with the IE version of the CSS:

@font-face{
  font-family:'Nikosh';
  src: url('nikosh2.eot'); /* here you go, IE */
}

p, li, input, select, div, span, em, i, b, u, strong {
    font-family: 'Nikosh', Arial, Helvetica, sans-serif;
}

IE only supports Embedded OpenType fonts (.eot). It does not support TTF, OTF or WOFF.

Other browsers do not support .eot. So, for them, you need to use some other open format. Say you are using OTF:

@font-face{
  font-family:'Nikosh';
  src: url('nikosh2.otf') format('opentype');
}

p, li, input, select, div, span, em, i, b, u, strong {
    font-family: 'Nikosh', Arial, Helvetica, sans-serif;
}

IE won’t understand this one, so you need two different CSS files. I know there’s the bullet-proof font face hack which is claimed to work in all browsers, but I have tried it and it does not work in Chrome. Chrome does not understand the url(://) hack.

Step 2: JavaScript to pre-cache the font file and load the CSS after the font is downloaded

The next step is to pre-cache the font file behind the scenes so that the browser is not frozen while the large custom font file downloads. You can use jQuery to download the font file:

// Note: jQuery.browser was removed in jQuery 1.9, so this assumes an older
// jQuery or the jQuery Migrate plugin.
var fontFile = jQuery.browser.msie ? '/wp-content/uploads/font/nikosh2.eot'
    : '/wp-content/uploads/font/nikosh2.otf';
var cssFile = jQuery.browser.msie ? '/wp-content/uploads/font/nikosh2ie.css'
    : '/wp-content/uploads/font/nikosh2nonie.css';

jQuery(document).ready(function () {
    // Download the font file in the background so it lands in the browser cache
    jQuery.ajax({
        url: fontFile,
        beforeSend: function (xhr) {
            xhr.overrideMimeType("application/octet-stream");
        }
    }).done(function () {
        // Font is now cached; inject the <link> tag that loads the CSS
        // referring to that same font file.
        jQuery("<link/>")
            .attr({ type: 'text/css', rel: 'stylesheet', href: cssFile })
            .appendTo(jQuery('head'));
    });
});

First it pre-caches the right font file based on the browser. Make sure you use the same font file names here as you defined in the browser-specific CSS files. Once the file is downloaded, the jQuery done callback fires and injects a <link> tag that fetches the CSS. The CSS refers to the exact same font file, from the exact same location, so the browser will not download it again; it has just downloaded the file and stored it in its local cache. Thus the CSS is applied immediately.

Remember, as soon as the <link> tag is added, the browser will freeze and go white until the font is initialized and applied. So you will see some freezing here. But since it freezes after the page has loaded completely, the user is at least looking at the page and thinks it has finished loading. So the user experience is better.

Step 3: Enable static content expiration

For this to work, you must have static content expiration enabled on the web server. If you don’t, the browser will not cache the font file and will download it again as soon as you add the <link> tag. To enable static content expiration in IIS, follow these steps:

  • Click on the website in IIS.
  • Click on the “HTTP Response Headers” icon.
  • Click on “Set Common Headers” from the right side menu.
  • Turn on “Expire Web Content” and set a far future date.

image

This will turn on browser caching for all static files, not just the font file.
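If you prefer to keep this setting with the application instead of clicking through IIS Manager, the same thing can be expressed in web.config on IIS 7 and later. Here is a minimal sketch; the expiry date is just an example value, pick your own far future date:

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Send a far future Expires header for every static file -->
      <clientCache cacheControlMode="UseExpires"
                   httpExpires="Sat, 01 Jan 2028 00:00:00 GMT" />
    </staticContent>
  </system.webServer>
</configuration>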

Step 4: Set MIME type for fonts

IIS 7 and 8 do not come with MIME settings for .otf, .ttf and other font extensions; only .eot is there. So you need to set the MIME type for these extensions, otherwise IIS won’t deliver the fonts and the browser will get a 404.3 error. (A web.config equivalent is sketched after the list.)

  • Click on the website in IIS.
  • Select the “MIME Types” icon.
  • Click Add…
  • Add the .otf and .ttf extensions.
  • Set the Content Type to: application/octet-stream
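If you would rather ship this setting with the application than configure each server by hand, a web.config sketch along these lines should do the same on IIS 7 and later; the extensions shown are just the ones used in this example:

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Let IIS serve font formats it does not know about by default.
           If an extension is already mapped at server level, add a
           <remove fileExtension="..." /> first to avoid a duplicate entry error. -->
      <mimeMap fileExtension=".otf" mimeType="application/octet-stream" />
      <mimeMap fileExtension=".ttf" mimeType="application/octet-stream" />
    </staticContent>
  </system.webServer>
</configuration>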

That should now enable custom fonts on your webserver.

WCF does not support compression out of the box, so fix it

WCF services and clients do not support HTTP compression out of the box in .NET 3.5, even if you turn on Dynamic Compression in IIS 6 or 7. It has been fixed in .NET 4, but those who are stuck with .NET 3.5 for the foreseeable future are out of luck. First of all, it’s IIS’s fault that it does not enable HTTP compression for SOAP messages even when Dynamic Compression is turned on in IIS 7. Secondly, it’s WCF’s fault that it does not send the Accept-Encoding: gzip, deflate header in HTTP requests to the server, which tells IIS that the client supports compression. Thirdly, it’s again WCF’s fault that even if you make IIS send back a compressed response, WCF can’t process it, since it does not know how to decompress it. So you have to tweak IIS and the System.Net factories to make compression work for WCF services. Compression is key for performance: it can dramatically reduce the data transferred from server to client and thus give a significant performance improvement if you are exchanging medium to large payloads over a WAN or the internet.

There are two steps: first configure IIS, then configure System.Net. There’s no need to tweak anything in WCF, like using some message interceptor to inject HTTP headers, as you find people trying to do here, here and here.

Configure IIS to support gzip on SOAP responses

After you have enabled Dynamic Compression on IIS 7 following the guide, you need to add the following block to the <dynamicTypes> section of the applicationHost.config file inside the C:\Windows\System32\inetsrv\config folder. Be very careful about the space in mimeType; it needs to be exactly the same as you find in the response header of SOAP responses generated by WCF services.

<add mimeType="application/soap+xml" enabled="true" />
<add mimeType="application/soap+xml; charset=utf-8" enabled="true" />
<add mimeType="application/soap+xml; charset=ISO-8895-1" enabled="true" />

After adding the block, the config file will look like this:

image

For IIS 6, you need to first enable dynamic compression and then add the .svc extension to the list of compressed extensions so that IIS compresses responses from WCF services.
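For reference, this is roughly what the relevant gzip entry in MetaBase.xml looks like once svc has been added. Treat it as an illustrative sketch rather than a drop-in block; your existing IIsCompressionScheme elements (one for gzip, one for deflate) carry more attributes than shown here, and the compression level is just an example value:

<IIsCompressionScheme Location="/LM/W3SVC/Filters/Compression/gzip"
    HcDoDynamicCompression="TRUE"
    HcDynamicCompressionLevel="9"
    HcScriptFileExtensions="asp
        dll
        exe
        svc">
</IIsCompressionScheme>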

Next you need to make WCF send the Accept-Encoding: gzip, deflate header as part of the request and then support decompressing the compressed response.

Send proper request header in WCF requests

You need to override the default System.Net WebRequest creator so that it creates HttpWebRequest objects with automatic decompression turned on. First, create a class like this:

using System;
using System.Net;
using System.Reflection;

namespace WcfHttpCompressionEnabler
{
    public class CompressibleHttpRequestCreator : IWebRequestCreate
    {
        public CompressibleHttpRequestCreator()
        {
        }

        WebRequest IWebRequestCreate.Create(Uri uri)
        {
            // HttpWebRequest cannot be instantiated directly, so create it via reflection
            HttpWebRequest httpWebRequest =
                Activator.CreateInstance(typeof(HttpWebRequest),
                    BindingFlags.CreateInstance | BindingFlags.Public |
                    BindingFlags.NonPublic | BindingFlags.Instance,
                    null, new object[] { uri, null }, null) as HttpWebRequest;

            if (httpWebRequest == null)
            {
                return null;
            }

            // This makes every request carry "Accept-Encoding: gzip, deflate"
            // and transparently decompress the compressed response.
            httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip |
                DecompressionMethods.Deflate;

            return httpWebRequest;
        }
    }
}

Then, in the WCF client application’s app.config or web.config, you need to put this block inside system.net, which tells System.Net to use your factory instead of the default one:

<system.net>
  <webRequestModules>
    <remove prefix="http:"/>
    <add prefix="http:"
         type="WcfHttpCompressionEnabler.CompressibleHttpRequestCreator, WcfHttpCompressionEnabler,
               Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  </webRequestModules>
</system.net>

That’s it.

I have uploaded a sample project which shows how all of this works.

Safely deploying changes to production servers

When you deploy incremental changes to a production server, which is running and live all the time, you sometimes see error messages like “Compiler Error Message: The Type ‘XXX’ exists in both…”. Sometimes you find the Application_Start event not firing although you shipped a new class, DLL or web.config. Sometimes you find static variables not getting initialized, and so on. So many weird things happen on web servers when you incrementally deploy changes to a server that has been up and running for several weeks. So I came up with a set of foolproof housekeeping steps that we always follow whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all data stored at the Application level is reinitialized.

First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of the production traffic, do your deployment and housekeeping tasks like restarting IIS, and then put it back. Then you do the same for the second server, and so on. This ensures there’s no outage for customers. If you can do it reasonably fast, hopefully customers won’t notice the discrepancy between servers running new code and servers still running old code. You should only do this when your changes aren’t drastic, for example when you aren’t delivering a completely revamped UI. In that case, some users hitting server1 with the latest UI would suddenly get a completely different experience and then, on the next page refresh, might hit server2 with the old code and get a totally different experience again. This works for incremental, non-dramatic changes only.

image

During deployment you should follow these steps:

  • Take server X out of load balancer so that it does not get any traffic.
  • Stop all your .NET windows services on the server.
  • Stop IIS.
  • Delete the Temporary ASP.NET Files folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
  • Deploy the changes.
  • Flush any distributed cache you have, for example Velocity or Memcached.
  • Start IIS.
  • Start your .NET windows services on the server.
  • Warm up all websites by hitting major URLs on the websites. You should have some automated script to do this. You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
  • Put server X back to load balancer so that it starts receiving traffic.

That’s it. It should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them at the desk of your deployment guys so that they never forget them under deployment pressure.

Doing all these steps manually is risky. Under deployment time pressure, your production guys can make mistakes and screw up a server for good. So I always prefer having a batch file that takes a server out and makes it ready for deploying code, and another batch file that puts the server back into load balancer traffic rotation after the deployment is done and the server is warmed up.

Generally load balancers are configured to hit some page on your website and keep the server alive if that page returns HTTP 200. If not, it assumes the server is dead and takes it out of rotation. For example, say you have an alive.txt file on your website which the load balancer keeps an eye on; if it’s gone, the server is taken out of rotation. In that case, you can create a batch file that takes the server out, waits for the in-flight requests to complete, and then stops IIS, deletes the Temporary ASP.NET Files and makes the server ready for deployment. Something like this:

serverout.bat
=====================
Ren alive.txt dead.txt
typeperf "ASP.NET Applications(__Total__)Requests Executing" -sc 30
iisreset /stop
rmdir /q /s "C:WINDOWSMicrosoft.NETFramework64v1.1.4322Temporary ASP.NET Files"
rmdir /q /s "C:WINDOWSMicrosoft.NETFramework64v2.0.50727Temporary ASP.NET Files"
md "C:WINDOWSMicrosoft.NETFramework64v1.1.4322Temporary ASP.NET Files"
md "C:WINDOWSMicrosoft.NETFramework64v2.0.50727Temporary ASP.NET Files"
xcacls "C:WINDOWSMicrosoft.NETFramework64v1.1.4322Temporary ASP.NET Files" /E /G MYMACHINEIIS_WPG:F /Q
xcacls "C:WINDOWSMicrosoft.NETFramework64v2.0.50727Temporary ASP.NET Files" /E /G MYMACHINEIIS_WPG:F /Q

Similarly you should have a batch file that starts IIS, warms up some pages, and then puts the server back into load balancer.

serverin.bat
============
SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
iisreset /start
"%TINYGET%" -srv:localhost -uri:http://localhost/ -status:200
ren dead.txt alive.txt
typeperf "\ASP.NET Applications(__Total__)\Requests Executing" -sc 30

Always try to automate this kind of admin chores. It’s difficult to do it right all the time manually under deployment pressure.

Keep website and webservices warm with zero coding

If you want to keep your websites or web services warm and save users from the long warm-up time after an application pool recycle, an IIS restart, a new code deployment or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the sites and services and keep them warm. Here’s how:

First get tinyget from here. Download and install the IIS 6.0 Resource Kit on some PC. Then copy tinyget.exe from “C:\Program Files (x86)\IIS Resources\TinyGet” to the server where your IIS 6.0 or IIS 7 is running.

Then create a batch file that will hit the pages and webservices. Something like this:

SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe

"%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
"%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

Save this in a batch file and run it as a scheduled task at some interval, like every 10 minutes, and your website will always remain nice and warm.

First I am hitting the homepage to keep the web page warm. Then I am hitting the web service URL with the ?WSDL parameter, which makes ASP.NET compile the service if it isn’t already compiled, walk through all the operations and reflect on them, and thus load all related DLLs into memory, reducing the warm-up time when the service is actually hit.

Tinyget takes the server’s name or IP in the -srv parameter and the actual URI in the -uri parameter. I have specified the expected HTTP response code in the -status parameter. It ensures the site is alive and is returning HTTP 200.

Besides just warming up a site, you can do some load testing with it. Tinyget can run multiple threads and loop over a URL. You can literally blow up a site with commands like this:

"%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

 

Tinyget is also pretty useful for running automated tests. You can record HTTP POSTs in a text file and then use it to replay those POSTs against some page. Then you can add a matching clause to check for a certain string in the output to ensure the correct response is returned. Thus, with a few simple command line calls, you can warm up a site, run some transactions, validate that the site is returning correct responses, and run a load test to ensure the server performs well. A very cheap way to get a lot done.

 

Redirecting traffic from http to https with zero coding in IIS

When you want to enforce HTTPS and redirect any URL hit on HTTP to its exact HTTPS counterpart, you usually do it with an HttpModule written in .NET, or by installing a URL redirector module in IIS, or by setting up a dummy website on HTTP and using a meta refresh tag to send traffic to HTTPS. There are many solutions out there, and they all require some amount of coding skill. Let me show you a zero-coding redirection setup.

First, go to the properties of the real website which is now listening on both http and https. Change the http port to something dummy like 8083. This will prevent the website from holding onto port 80.

Now create a new website on an empty folder that has read permission for the NETWORK SERVICE account. The new website will have the same host headers as the real one, say omaralzabir.com. But it will listen on port 80 only; it won’t listen on the https port 443.

Once created, go to Properties, then the Home Directory tab, and do the following:

IIS website properties to redirect from http to https

Things to do here:

  • Select the “A redirection to a URL” option.
  • Enter https://yourdomain.com followed by $S$Q. Remember, there’s no trailing slash after the domain.
  • Select “The exact URL entered above”.
  • Select “A permanent redirection for this resource”.
  • Set Execute permission to None.
  • You can select the same app pool as the original one.

That’s it.

Update:

The $S holds the subdirectory path: if you hit http://omaralzabir.com/subdir/anotherdir, then $S = /subdir/anotherdir. The $Q represents the query string. Together, they preserve the whole path and query string.
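If you are on IIS 7 and prefer configuration files to the management console, the built-in HTTP Redirect module can express the same idea in the dummy site’s web.config. This is only a sketch; I am assuming the IIS 7 redirect module honors the same $S and $Q variables as the IIS 6 dialog above, so verify this on your version before relying on it:

<configuration>
  <system.webServer>
    <!-- Permanently redirect every http request to its https counterpart -->
    <httpRedirect enabled="true"
                  destination="https://yourdomain.com$S$Q"
                  exactDestination="true"
                  httpResponseStatus="Permanent" />
  </system.webServer>
</configuration>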

Best practices for creating websites in IIS 6.0

Every time I create an IIS website, I follow some steps which I consider best practice for any IIS website, for better performance, maintainability, and scalability. Here are the things I do:

Create a separate application pool for each web application

I always create a separate app pool for each web app because it lets me set a different recycle schedule per app pool. Some heavy-traffic websites get a long recycle schedule, while low-traffic websites get a short recycle schedule to save memory. Moreover, I can choose a different number of processes served by each app pool. Applications built for web garden mode can benefit from multiple processes, whereas applications that use in-process session state or in-memory cache need a single process serving the app pool. Hosting all my applications under the DefaultAppPool does not give me the flexibility to control these per site.

The more app pools you create, the more ASP.NET threads you make available to your applications. Each w3wp.exe has its own thread pool. So, if some application is congesting a particular w3wp.exe process, other applications can run happily on their separate w3wp.exe instances, running under separate app pools. Each app pool hosts its own w3wp.exe instance.

So, my rule of thumb: always create a new app pool for each new web application and name the app pool after the site’s domain name or some internal name that makes sense. For example, if you are creating a new website alzabir.com, name the app pool alzabir.com to easily identify it.

Another best practice: Disable the DefaultAppPool so that you don’t mistakenly keep adding sites to DefaultAppPool.


image

First you create a new application pool. Then you create a new Website or Virtual Directory, go to Properties -> Home Directory tab -> Select the new app pool.


image

Customize Website properties for performance, scalability and maintainability

First, map the right host headers to your website. To do this, go to the Web Site tab and click the “Advanced” button. Add mappings for both domain.com and www.domain.com. Most of the time, people forget to map domain.com, so visitors who skip typing the www prefix get no page served.


image

Next turn on some log entries:


image

These are very handy for analysis. If you want to measure bandwidth consumption for specific sites, you need Bytes Sent. If you want to measure the execution time of different pages and find the slow-running pages, you need Time Taken. If you want to measure unique and returning visitors, you need the Cookie. If you need to know who is sending you the most traffic, search engines or other websites, you need the Referer. Once these entries are turned on, you can use a variety of log analysis tools to do the analysis, for example the open source AWStats.

But if you are using Google Analytics or something else, you should have these turned off, especially the Cookie and Referer, because they take up quite some space in the log. If you are using ASP.NET Forms Authentication, the gigantic cookie coming with every request will produce gigabytes of logs per week on a medium-traffic website.


image

This is kind of a no-brainer. I add Default.aspx as the default content page so that when visitors hit the site without any .aspx page name, e.g. alzabir.com, they get default.aspx served.


image

Things I do here (a web.config sketch of the IIS 7 equivalents follows the list):

  • Turn on Content Expiration. This makes static files remain in the browser cache for 30 days, and the browser serves them from its own cache instead of hitting the server. As a result, when your users revisit, they don’t download all the static files like images, javascripts and css files again and again. This one setting significantly improves your site’s performance.
  • Remove the X-Powered-By: ASP.NET header. You really don’t need it unless you want to attach the Visual Studio Remote Debugger to your IIS. Otherwise, it’s just sending 21 extra bytes on every response.
  • Add a “From” header and set the server name. I do this on each web server and specify a different name on each box. It’s handy to see which server a request was served from. When you are trying to troubleshoot load balancing issues, it comes in handy to see whether a particular server is serving requests.
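If these boxes ever move to IIS 7, the same tweaks can be carried in web.config instead of per-server settings. A rough sketch; the From value "WEB01" is just an illustrative server name:

<configuration>
  <system.webServer>
    <staticContent>
      <!-- Content Expiration: cache static files in the browser for 30 days -->
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="30.00:00:00" />
    </staticContent>
    <httpProtocol>
      <customHeaders>
        <!-- Drop X-Powered-By and identify the serving box instead -->
        <remove name="X-Powered-By" />
        <add name="From" value="WEB01" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>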


image

I set the 404 handler to an ASPX page so that I can show a custom error message. There’s a 404.aspx which shows a nice, friendly message and suggests some other pages the user can visit. Another reason to use this custom mapping is to serve extensionless URLs from IIS. Read this blog post for details.


image

Make sure to set ASP.NET 2.0 for your ASP.NET 2.0, 3.0 and 3.5 websites.

Finally, you must, I repeat you “MUST” turn on IIS 6.0 gzip compression. This turns on the Volkswagen V8 engine that is built into IIS to make your site screaming fast.


My first book – Building a Web 2.0 Portal with ASP.NET 3.5

My first book, “Building a Web 2.0 Portal with ASP.NET 3.5” from O’Reilly, is published and available in the stores. This book explains in detail the architecture design, development, test, deployment, performance and scalability challenges of my open source web portal Dropthings.com. Dropthings is a prototype of a web portal similar to iGoogle or Pageflakes, but it is developed using recently released technologies like ASP.NET 3.5, C# 3.0, LINQ to SQL, LINQ to XML, and Windows Workflow Foundation, and it makes heavy use of ASP.NET AJAX 1.0. Throughout my career I have built several state-of-the-art personal, educational, enterprise and mass consumer web portals. This book collects my experience in building all of those portals.

O’Reilly website: http://www.oreilly.com/catalog/9780596510503/

Amazon: http://www.amazon.com/Building-Web-2-0-Portal-ASP-NET/dp/0596510500

Disclaimer: This book does not show you how to build Pageflakes. Dropthings is entirely different in terms of architecture, implementation and the technologies involved.

You learn how to:

  • Implement a highly decoupled architecture following the popular n-tier, widget-based application model
  • Provide drag-and-drop functionality, and use ASP.NET 3.5 to build the server-side part of the web layer
  • Use LINQ to build the data access layer, and Windows Workflow Foundation to build the business layer as a collection of workflows
  • Build client-side widgets using JavaScript for faster performance and better caching
  • Get maximum performance out of the ASP.NET AJAX Framework for faster, more dynamic, and scalable sites
  • Build a custom web service call handler to overcome shortcomings in ASP.NET AJAX 1.0 for asynchronous, transactional, cache-friendly web services
  • Overcome JavaScript performance problems, and help the user interface load faster and be more responsive
  • Solve various scalability and security problems as your site grows from hundreds to millions of users
  • Deploy and run a high-volume production site while solving software, hardware, hosting, and Internet infrastructure problems

If you’re ready to build state-of-the-art, high-volume web applications that can withstand millions of hits per day, this book has exactly what you need.