Student Data Bank: An ASP.NET MVC, WebAPI, EF Code First, Task, ESB showcase

Imagine this: you are a student at Oxford and you want to join a program at Cambridge. Cambridge wants to pull all the courses you have done at Oxford, along with your course results, and automatically credit them, so that you don’t have to repeat the courses. Eventually, when Cambridge decides to award you a degree, they can automatically check that you have done all the courses required by their program, whether at Cambridge or at Oxford, Harvard, MIT or any other university in the world. Wouldn’t it be nice to have a common system where all your courses and results from any university in the world are securely stored, and universities can exchange with each other what you have done, fully automatically? There would be no need to get transcripts printed, mailed, certified and so on anymore. It would all be done securely online.

Let’s build such an imaginary system using ASP.NET MVC and WebAPI, and expose it over an ESB like WSO2 ESB. You don’t have to use an ESB; you can just run the ASP.NET app directly. The ESB is there to offer secure, reliable exposure of the services.

[Figure: High-level architecture]

Let’s define what features we want on this system:

StudentDataBank.org (SDB – an imaginary org) is an organization that offers secure online storage and controlled exchange of student records between universities, colleges, schools and other types of Educational Institutes (EI). An EI can establish a secure interface with Student Data Bank and upload its Courses, Programs, Students and Student’s Course Results. Then another EI can request a Student’s Record to be fetched from the offering EI. Student Data Bank (SDB) takes care of securely validating the request and fetching Student Records from an EI’s systems via the secure interface that EI has already established with the SDB. Thus SDB works as a broker between multiple EIs, empowering each to share student records with each other in a secure and auditable manner.

An awarding EI can initiate the process of awarding a degree to a student and in the process automatically establish whether the awardee student has completed all the required courses from the awarding EI, as well as from other EIs that the student claims to have attended required courses from. Each EI has a way to define what requirements their courses meet, so that courses can be accepted from other EIs automatically while awarding a degree to a student.
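Since the EI integration is exposed over WebAPI, the pull interface an EI offers could look something like the following minimal sketch (hypothetical types and route, not the article’s actual code):

using System.Collections.Generic;
using System.Web.Http;

// Hypothetical DTO for a completed course; the real article defines its own models.
public class CourseResult
{
    public string CourseCode { get; set; }
    public string Title { get; set; }
    public string Grade { get; set; }
}

// A minimal WebAPI controller an EI might expose so SDB can pull a
// student's course results after the request has been validated.
public class StudentRecordsController : ApiController
{
    // GET api/studentrecords?studentId=...
    public IEnumerable<CourseResult> Get(string studentId)
    {
        // In a real EI integration this would query the EI's own student
        // database; hard-coded here to keep the sketch self-contained.
        return new[]
        {
            new CourseResult { CourseCode = "CS101", Title = "Intro to Computer Science", Grade = "A" }
        };
    }
}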

The Web site looks like this:

[Screenshot: Fetch Courses]

Clicking Fetch Courses triggers a workflow that fetches the courses from the university, via the integration interface already established with that university.

[Figure: Sequence diagram of the course fetch workflow]

Read the full CodeProject article on how this is done:

http://www.codeproject.com/Articles/895310/Student-Data-Bank-An-ASP-NET-MVC-WebAPI-EF-Codefir

 

Faster page rendering by downloading JS/CSS before server generates full page

When a dynamic page is executing on the server, the browser is doing nothing but waiting for the HTML to come back. If your server-side code takes 5 seconds to perform database operations, then for those 5 seconds the user is staring at a blank white screen. Why not take this opportunity to download the JavaScript and CSS in the browser simultaneously? By the time the server finishes its work, it only has to send the dynamic content, and the browser can render it right away. This optimization technique can improve the performance of any dynamic page that takes some time to finish its job on the server and has some JS and CSS to download.

When server and browser are working simultaneously, you get to render the page significantly faster:

[Figure: Server and browser working simultaneously]
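To make the idea concrete, here is a minimal sketch (a hypothetical raw IHttpHandler, not the article’s code) that flushes the head, with the JS/CSS references, before doing the slow server-side work:

using System.Threading;
using System.Web;

public class DashboardHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var response = context.Response;
        response.ContentType = "text/html";

        // Send the <head> with the script/css references and flush right away,
        // so the browser starts downloading them while the server keeps working.
        response.Write("<html><head>");
        response.Write("<link rel=\"stylesheet\" href=\"/css/site.css\" />");
        response.Write("<script src=\"/js/site.js\"></script>");
        response.Write("</head><body>");
        response.Flush();

        // The slow server-side work (database calls etc.) happens here,
        // in parallel with the browser's JS/CSS downloads.
        Thread.Sleep(5000); // stand-in for 5 seconds of real work

        // By now the static assets are (hopefully) downloaded and cached,
        // so the browser can render the dynamic content immediately.
        response.Write("<div>dynamic content</div>");
        response.Write("</body></html>");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}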

 

Read the full article here:

http://www.codeproject.com/Articles/755672/Faster-page-rendering-by-downloading-JS-CSS-before

Safely deploying changes to production servers

When you deploy incremental changes on a production server that is running and live all the time, you sometimes see error messages like “Compiler Error Message: The Type ‘XXX’ exists in both…”. Sometimes you find the Application_Start event not firing although you shipped a new class, DLL or web.config. Sometimes you find static variables not getting initialized, and so on. Many weird things happen on web servers when you incrementally deploy changes to a server that has been up and running for several weeks. So, I came up with a foolproof set of housekeeping steps that we always perform whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all the data stored at Application level is re-initialized.

First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of the production traffic, do your deployment and housekeeping tasks like restarting IIS, and then put it back. Then you can do the same for the second server, and so on. This ensures there’s no outage for customers. If you do it reasonably fast, customers hopefully won’t notice the discrepancy between servers running new code and servers running old code. You should only do this when your changes aren’t drastic, for example when you aren’t delivering a completely revamped UI. In that case, some users hitting server1 with the latest UI would suddenly get a completely different experience, and then on the next page refresh they might hit server2 with the old code and get a totally different experience. This works for incremental, non-dramatic changes only.


During deployment you should follow these steps:

  • Take server X out of load balancer so that it does not get any traffic.
  • Stop all your .NET windows services on the server.
  • Stop IIS.
  • Delete the Temporary ASP.NET Files folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
  • Deploy the changes.
  • Flush any distributed cache you have, for example Velocity or Memcached.
  • Start IIS.
  • Start your .NET windows services on the server.
  • Warm up all websites by hitting major URLs on the websites. You should have some automated script to do this. You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
  • Put server X back to load balancer so that it starts receiving traffic.

That’s it. This should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them on the desks of your deployment guys so that they never forget them under deployment pressure.

Doing all these steps manually is risky. Under deployment time pressure, your production guys can make mistakes and screw up a server for good. So, I always prefer having a batch file that takes a server out and makes it ready for deploying code, and another batch file that, after the deployment is done, puts the server back into load balancer traffic rotation once it is warmed up.

Generally, load balancers are configured to hit some page on your website and keep the server alive if that page returns an HTTP 200. If not, the load balancer assumes the server is dead and takes it out of rotation. For example, say you have an alive.txt file on your website, which is what the load balancer keeps an eye on. If it’s gone, the server is taken out of rotation. In that case, you can create some batch files that take the server out, wait a couple of seconds to ensure the in-flight requests complete, and then stop IIS, delete the Temporary ASP.NET Files folders and make the server ready for deployment. Something like this:

serverout.bat
=====================
REM Rename the health-check file so the load balancer takes this server out
ren alive.txt dead.txt
REM Watch executing requests for 30 samples so in-flight requests can drain
typeperf "\ASP.NET Applications(__Total__)\Requests Executing" -sc 30
iisreset /stop
rmdir /q /s "C:\WINDOWS\Microsoft.NET\Framework64\v1.1.4322\Temporary ASP.NET Files"
rmdir /q /s "C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files"
md "C:\WINDOWS\Microsoft.NET\Framework64\v1.1.4322\Temporary ASP.NET Files"
md "C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files"
xcacls "C:\WINDOWS\Microsoft.NET\Framework64\v1.1.4322\Temporary ASP.NET Files" /E /G MYMACHINE\IIS_WPG:F /Q
xcacls "C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files" /E /G MYMACHINE\IIS_WPG:F /Q

Similarly, you should have a batch file that starts IIS, warms up some pages, and then puts the server back into the load balancer.

serverin.bat
============
SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe
iisreset /start
REM Warm up the site before exposing it to traffic
"%TINYGET%" -srv:localhost -uri:http://localhost/ -status:200
REM Restore the health-check file so the load balancer puts the server back
ren dead.txt alive.txt
typeperf "\ASP.NET Applications(__Total__)\Requests Executing" -sc 30

Always try to automate this kind of admin chore. It’s difficult to do it right every time manually under deployment pressure.

Quick ways to boost performance and scalability of ASP.NET, WCF and Desktop Clients

There are some simple configuration changes you can make in machine.config and IIS to give your web applications a significant performance boost. These are simple, harmless changes that make a lot of difference in terms of scalability. By tweaking the system.net settings, you can increase the number of parallel calls that can be made from the services hosted on your servers, as well as from desktop computers, and thus increase scalability. By changing the WCF throttling config, you can increase the number of simultaneous calls WCF can accept and thus make the most of your hardware power. By changing the ASP.NET process model, you can increase the number of concurrent requests that can be served by your website. And finally, by turning on IIS caching and dynamic compression, you can dramatically increase page download speed in browsers and the overall responsiveness of your applications.
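To give a flavor of two of these tweaks, here is a minimal sketch that applies the system.net connection limit and the WCF throttling settings in code. The service contract, port and limit values are made up for illustration; see the article for the actual config-file changes.

using System;
using System.Net;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IPingService
{
    [OperationContract]
    string Ping();
}

public class PingService : IPingService
{
    public string Ping() { return "pong"; }
}

public static class ThrottlingDemo
{
    public static void Main()
    {
        // system.net: raise the default limit of 2 outgoing connections per
        // host so parallel outbound service calls are not serialized.
        ServicePointManager.DefaultConnectionLimit = 100;

        var host = new ServiceHost(typeof(PingService));
        host.AddServiceEndpoint(typeof(IPingService),
            new BasicHttpBinding(), "http://localhost:8080/ping");

        // WCF throttling: raise the simultaneous call/instance/session caps
        // so the service can use the full hardware capacity.
        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 100;
        throttle.MaxConcurrentSessions = 100;
        throttle.MaxConcurrentInstances = 100;

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}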

Read the CodeProject article for more details.

http://www.codeproject.com/KB/webservices/quickwins.aspx


Please vote for me if you find the article useful.

Finally! Entity Framework working in fully disconnected N-tier web app

Entity Framework was supposed to solve the problems of Linq to SQL, which requires endless hacks to make it work in an n-tier world. Not only did Entity Framework solve none of the L2S problems, it made them even harder to work around in n-tier scenarios. It sits somewhere halfway between a fully disconnected ORM and a fully connected ORM like Linq to SQL. Some useful features of Linq to SQL are gone, like automatic deferred loading. If you try to do a simple select with join, insert, update or delete in a disconnected architecture, you will realize you need not only fundamental changes from the top layer to the very bottom layer, but also endless hacks in basic CRUD operations. In this article I show how I added custom CRUD functions on top of EF’s ObjectContext to finally make it work well in a fully disconnected N-tier web application (my open source Web 2.0 AJAX portal, Dropthings) and how I produced a 100% unit testable, fully n-tier compliant data access layer following the repository pattern.
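The flavor of those CRUD hacks is roughly this: a detached entity coming down from a higher tier has to be attached and explicitly marked as modified before EF will save it. Here is a minimal sketch in EF 4 style; WidgetEntities and Widget stand in for an EF-generated ObjectContext and entity, they are assumptions for this sketch and not the Dropthings code:

using System.Data;

public class WidgetRepository
{
    public void Update(Widget detachedWidget)
    {
        using (var context = new WidgetEntities())
        {
            // Attach the detached entity coming from the web tier, then mark
            // it Modified so SaveChanges issues an UPDATE without re-querying.
            context.Widgets.Attach(detachedWidget);
            context.ObjectStateManager.ChangeObjectState(
                detachedWidget, EntityState.Modified);
            context.SaveChanges();
        }
    }
}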

http://www.codeproject.com/KB/linq/ef.aspx

In .NET 4.0, most of the problems are solved, but not all. So, you should read this article even if you are coding in .NET 4.0. Moreover, there’s enough insight here to help you troubleshoot EF-related problems.

You might think, “Why bother using EF when Linq to SQL is doing well enough for me?” Linq to SQL is not going to get any more innovation from Microsoft. Entity Framework is the future of the persistence layer in the .NET framework. All the innovation is happening in the EF world only, which is frustrating. There’s a big jump in EF 4.0. So, you should plan to migrate your L2S projects to EF soon.

Munq is for web, Unity is for Enterprise

The Unity Application Block (Unity) is a lightweight, extensible dependency injection container with support for constructor, property, and method call injection. It’s a great library for facilitating Inversion of Control, and the recent version supports AOP as well. However, when it comes to performance, it’s CPU hungry. In fact, it’s so CPU hungry that it’s impossible to make it work at Internet scale. I was investigating a CPU issue on a portal that gets around 3MM hits per day, and I found unusually high CPU usage. Here’s why:

[Figure: CPU profile showing high CPU in Unity’s Resolve<>()]

I did some CPU profiling on my open source project Dropthings and found that the biggest CPU consumer is Unity’s Resolve<>(). There’s no funky use of Unity in the project, just straightforward Register<>() and Resolve<>(). But as you can see, Resolve<>() is consuming significantly high CPU even after the site is warm and has been running for a while.
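For context, the usage pattern being profiled was nothing more exotic than this kind of thing (a trivial sketch with made-up types, not the Dropthings code):

using Microsoft.Practices.Unity;

public interface IWidgetService { void Render(); }
public class WidgetService : IWidgetService { public void Render() { } }

public class Bootstrap
{
    public static void Run()
    {
        var container = new UnityContainer();
        container.RegisterType<IWidgetService, WidgetService>();

        // This Resolve<>() call, repeated on every request,
        // is what showed up at the top of the CPU profile.
        var service = container.Resolve<IWidgetService>();
        service.Render();
    }
}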

Then I tried Munq, which is a basic Dependency Injection Container. It has everything you will usually need in a regular project, and it claims to be the fastest DI container out there. So, I converted all the Unity code in Dropthings to Munq, did a CPU profile, and voila!

[Figure: CPU profile after switching to Munq]

 

There’s no trace of any Munq calls anywhere in the profile. That shows Munq is a lot faster than Unity.

Keep website and webservices warm with zero coding

If you want to keep your websites or web services warm and save users from the long warm-up time after an application pool recycle, an IIS restart, a new code deployment, or even a Windows restart, you can use the tinyget command line tool, which comes with the IIS Resource Kit, to hit the sites and services and keep them warm. Here’s how:

First, get tinyget from here: download and install the IIS 6.0 Resource Kit on some PC. Then copy tinyget.exe from “C:\Program Files (x86)\IIS Resources\TinyGet” to the server where your IIS 6.0 or IIS 7 is running.

Then create a batch file that will hit the pages and webservices. Something like this:

SET TINYGET=C:\Program Files (x86)\IIS Resources\TinyGet\tinyget.exe

"%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/ -status:200
"%TINYGET%" -srv:dropthings.omaralzabir.com -uri:http://dropthings.omaralzabir.com/WidgetService.asmx?WSDL -status:200

Save this in a batch file and run it as a scheduled task at some interval, like every 10 minutes, and your website will always remain nice and warm.

First I hit the homepage to keep the web page warm. Then I hit the web service URL with the ?WSDL parameter, which makes ASP.NET compile the service if it isn’t already compiled, walk through all the operations and reflect on them, and thus load all related DLLs into memory, reducing the warm-up time when the service is actually hit.

Tinyget takes the server’s name or IP in the -srv parameter and the actual URI in the -uri parameter. The expected HTTP response code is specified in the -status parameter, which ensures the site is alive and returning HTTP 200.

Besides just warming up a site, you can do some load testing on it. Tinyget can run multiple threads and loop to hit a URL repeatedly. You can literally blow up a site with commands like this:

"%TINYGET%" -threads:30 -loop:100 -srv:google.com -uri:http://www.google.com/ -status:200

 

Tinyget is also pretty useful for running automated tests. You can record HTTP posts in a text file and then use it to make HTTP posts to some page. Then you can add a matching clause to check for a certain string in the output to ensure the correct response was given. Thus, with some simple command line commands, you can warm up a site, perform some transactions, validate that the site gives off the correct responses, and run a load test to ensure the server performs well. A very cheap way to get a lot done.

 

Do not use “using” in WCF Client

You know that any IDisposable object must be disposed using using. So, you have been using using to wrap WCF ChannelFactory and clients like this:

using (var client = new SomeClient())
{
    ...
}

Or, if you are doing it the hard and slow way (without really knowing why), then:

using (var factory = new ChannelFactory<ISomeService>())
{
    var channel = factory.CreateChannel();
    ...
}

That’s what we have all learnt in school, right? Well, we have learnt it wrong!

When there’s a network-related error, or the connection is broken, or the call times out before Dispose is called by the using keyword, the attempt to dispose the channel results in the following exception:

failed: System.ServiceModel.CommunicationObjectFaultedException : 
The communication object, System.ServiceModel.Channels.ServiceChannel,
cannot be used for communication because it is in the Faulted state.

Server stack trace:
at System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout)

Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
at System.ServiceModel.ClientBase`1.System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
at System.ServiceModel.ClientBase`1.Close()
at System.ServiceModel.ClientBase`1.System.IDisposable.Dispose()

There are various reasons why the underlying connection can be in a broken state before the using block completes and .Dispose() is called: the network connection dropping, IIS doing an app pool recycle at that moment, some proxy sitting between you and the service dropping the connection, and so on. The point is, it might seem like a corner case, but it’s a likely corner case. If you are building a highly available client, you need to handle this properly before you go live.

So, do NOT use using on a WCF Channel/Client/ChannelFactory. Instead, you need an alternative. Here’s what you can do:

First create an extension method.

using System;
using System.ServiceModel;

public static class WcfExtensions
{
    public static void Using<T>(this T client, Action<T> work)
        where T : ICommunicationObject
    {
        try
        {
            work(client);
            client.Close();
        }
        catch (CommunicationException)
        {
            // The channel is faulted; Close() would throw, so Abort() instead.
            client.Abort();
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
        catch (Exception)
        {
            // Unknown failure: abort the channel, but let the caller see it.
            client.Abort();
            throw;
        }
    }
}


Then use this instead of the using keyword:

new SomeClient().Using(channel => {
    channel.Login(username, password);
});

Or if you are using ChannelFactory then:

new ChannelFactory<ISomeService>().Using(factory => {
    var channel = factory.CreateChannel();
    channel.Login(username, password);
});

Enjoy!

Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

For those who missed my presentation: I am trying to encourage my team to get into Behavior Driven Development (BDD). So, I made two quick video tutorials to show how BDD can be done from the early requirement collection stage to the late integration tests. The first one explains breaking user stories into behaviors, then developers and test engineers taking the behavior specs and writing a WCF service and its unit tests in parallel, and then eventually integrating the WCF service and running the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows a way to write a test once and run it as both a unit test and an integration test at the flip of a config setting.
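To give a flavor of the style the screencast walks through, here is a minimal sketch (made-up contract and names, not the screencast’s exact code) of a behavior written as an xUnit test with a Moq-mocked dependency:

using Moq;
using Xunit;

public class Customer
{
    public string Email { get; set; }
}

public interface ICustomerRepository
{
    Customer GetByEmail(string email);
}

public class LoginService
{
    private readonly ICustomerRepository _repository;
    public LoginService(ICustomerRepository repository) { _repository = repository; }

    public bool IsRegistered(string email)
    {
        return _repository.GetByEmail(email) != null;
    }
}

// Behavior: "When a registered customer logs in, the system recognizes them."
public class When_a_registered_customer_logs_in
{
    [Fact]
    public void The_system_should_recognize_the_customer_by_email()
    {
        // Given a repository that knows this customer (mocked with Moq)
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetByEmail("jane@example.com"))
                  .Returns(new Customer { Email = "jane@example.com" });

        // When / Then the agreed behavior holds
        var service = new LoginService(repository.Object);
        Assert.True(service.IsRegistered("jane@example.com"));
    }
}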

Watch the screencast here:

The next video tutorial is about using BDD for automated UI tests. It shows how test engineers can take behaviors and write tests that exercise a prototype UI in isolation (just like a Service Contract) to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can run against it and ensure the agreed behaviors are satisfied. I used WatiN to automate the UI and test it for the expected behaviors.

 

Hope you like it!