Defend ASP.NET and WCF from various attacks using Nginx

ASP.NET websites and WCF services can be attacked in many ways to slow down the service and even cause a complete outage. One can perform a Slowloris attack to exhaust all connections and threads on IIS and cause a complete outage. One can hit expensive URLs like download URLs, or exploit an expensive WCF service, to cause high CPU usage and bring down the service. One can open too many parallel connections and stop IIS from accepting more connections. One can exploit a large file download URL with continuous parallel downloads and exhaust the network bandwidth, causing a complete outage or an expensive bandwidth bill at the end of the month. One can also find a way to insert a large amount of data into the database and exhaust database storage.

Thus ASP.NET and WCF services, like all other web technology platforms, need more than a standard network firewall. They need a proper Web Application Firewall (WAF), which can inspect exactly what is being done on the application and block malicious transactions specific to the application being protected. Nginx (engine x) can offer many types of defence for ASP.NET and WCF, and it can significantly speed up an ASP.NET website by offloading static files and large file transfers.

Even if you have zero knowledge of Linux, you can still get a decent nginx setup done.
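To give a flavour of such a setup, here's a minimal illustrative nginx snippet (my own sketch, not from the article; directive values are arbitrary examples) that rate-limits requests, caps parallel connections per IP, and throttles large downloads in front of an IIS-hosted site:

http {
    # One shared-memory zone per client IP for request-rate and connection limits
    limit_req_zone  $binary_remote_addr zone=perip:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        listen 80;

        location / {
            limit_req  zone=perip burst=20;    # absorb short bursts, reject floods
            limit_conn peraddr 10;             # cap parallel connections per IP
            proxy_pass http://127.0.0.1:8080;  # the IIS-hosted ASP.NET site
        }

        location /downloads/ {
            limit_rate 500k;                   # throttle per-connection bandwidth
            proxy_pass http://127.0.0.1:8080;
        }
    }
}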

[Image: deployment diagram]

Read details from this CodeProject article:

http://www.codeproject.com/Articles/1115111/Defend-ASP-NET-and-WCF-from-various-attacks-using

Don’t forget to vote!

Caching WCF javascript proxy on browser

When you use WCF services from Javascript, you have to generate the Javascript proxies by hitting Service.svc/js. If you have five WCF services, that means five javascripts to download. As browsers download javascripts synchronously, one after another, this adds latency to page load and slows down page rendering. Moreover, the same WCF service proxy is downloaded on every page, because the generated javascript file is not cached on the browser. Here is a solution that will ensure the generated Javascript proxies are cached on the browser, and when there is a hit on the service, it will respond with HTTP 304 if the Service.svc file has not changed.
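For context, these Service.svc/js hits typically come from ScriptManager service references like the following (service path is illustrative):

<asp:ScriptManager ID="ScriptManager1" runat="server">
    <Services>
        <!-- Each reference makes the browser download a generated
             proxy script from ~/Services/MyService.svc/js -->
        <asp:ServiceReference Path="~/Services/MyService.svc" />
    </Services>
</asp:ScriptManager>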

Here’s a Fiddler trace of a page that uses two WCF services.

[Image: Fiddler trace showing two sequential /js hits]

You can see there are two /js hits and they are sequential. Every visit to the same page, even within the same browser session, results in those two hits to /js. Here's the second time the same page is browsed:

[Image: Fiddler trace of the second visit, where only the /js hits are not cached]

You can see everything else is cached, except the WCF javascript proxies. They are never cached because the WCF javascript proxy generator does not produce the caching headers necessary to cache the files on the browser.

Here’s an HttpModule for IIS and IIS Express which intercepts calls to the WCF service proxy. It first checks whether the service has changed since the version cached on the browser. If it has not changed, it returns HTTP 304 without going through the proxy generation process, saving some CPU on the server. But if the request is for the first time and there’s no cached copy on the browser, it delivers the proxy and also emits the proper cache headers to cache the response on the browser.
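The complete module is in the article linked below; here is a rough sketch of the idea (hypothetical names, simplified handling; a production module would also need to ensure WCF does not overwrite the cache headers):

using System;
using System.IO;
using System.Web;

// Sketch only: intercepts Service.svc/js hits, returns 304 when the .svc
// file is unchanged, and emits cache headers otherwise.
public class WcfProxyCacheModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private void OnBeginRequest(object sender, EventArgs e)
    {
        var context = ((HttpApplication)sender).Context;
        if (!context.Request.Path.EndsWith(".svc/js", StringComparison.OrdinalIgnoreCase))
            return;

        // The last write time of the physical .svc file decides freshness
        string svcPath = context.Request.Path.Substring(0, context.Request.Path.Length - 3);
        DateTime lastModified = File.GetLastWriteTimeUtc(context.Server.MapPath(svcPath));

        DateTime cachedSince;
        string ifModifiedSince = context.Request.Headers["If-Modified-Since"];
        if (ifModifiedSince != null
            && DateTime.TryParse(ifModifiedSince, out cachedSince)
            && lastModified <= cachedSince.ToUniversalTime().AddSeconds(1))
        {
            // Browser copy is current: short-circuit with 304, skip proxy generation
            context.Response.StatusCode = 304;
            context.Response.SuppressContent = true;
            context.ApplicationInstance.CompleteRequest();
            return;
        }

        // First hit: let WCF generate the proxy, but make the response cacheable
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetLastModified(lastModified);
    }

    public void Dispose() { }
}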

http://www.codeproject.com/Articles/360437/Caching-WCF-javascript-proxy-on-browser

Don’t forget to vote.

Tweaking WCF to build highly scalable async REST API

At 9 AM, during the peak traffic for your business, you get an emergency call that the website you built is no more. It’s not responding to any request. Some people can see some pages after waiting for a long time, but most can’t. So, you think it must be some slow query or the database might need some tuning. You do the regular checks like looking at CPU and Disk on the database server. You find nothing wrong there. Then you suspect it must be the web server running slow. So, you check CPU and Disk on the web servers. No problem there either. Both the web servers and the database servers have very low CPU and Disk usage. Then you suspect the network. So, you try a large file copy from web server to database server and vice versa. Nope, the file copies perfectly fine; the network has no problem. You also quickly check RAM usage on all servers and find it perfectly fine. As a last resort, you run some diagnostics on the Load Balancer, Firewall, and Switches, but find everything in good shape. Yet your website is down. Looking at the performance counters on the web server, you see a lot of requests getting queued, very high request execution time, and high request wait time.

[Image: performance counters showing queued requests and high request execution and wait times]

So, you do an IIS restart. Your website comes back online for a couple of minutes and then goes down again. After restarting several times, you realize it’s not an infrastructure issue. You have a scalability issue in your code. All the things you have read about scalability, and dismissed as fairy tales that would never happen to you, are now happening right in front of you. You realize you should have made your services async.

However, just converting your sync services to async mode does not solve the scalability problem. WCF has a bug due to which it cannot serve requests as fast as you would like. The thread pool it uses to handle the async calls cannot start threads as fast as requests come in; it only adds a new thread to the pool every 500ms. As a result, you get a slow ramp-up of threads:

[Image: graph showing the slow ramp-up of thread pool threads]
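For reference, converting a WCF REST operation to async in .NET 3.5 means the classic Begin/End pattern; here’s a minimal hypothetical contract (the thread pool fix itself is explained in the article):

using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface ICustomerService
{
    // AsyncPattern = true tells WCF to treat Begin/End as one async operation,
    // freeing the request thread while I/O (e.g. a database call) is in flight
    [OperationContract(AsyncPattern = true)]
    [WebGet(UriTemplate = "customers/{id}")]
    IAsyncResult BeginGetCustomer(string id, AsyncCallback callback, object state);

    string EndGetCustomer(IAsyncResult result);
}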

Read my article to learn the details of how WCF works for async services and how to fix this bug to make your async services truly async and able to scale under heavy load.

http://www.codeproject.com/KB/webservices/fixwcf_for_restapi.aspx

Don’t forget to vote.

WCF does not support compression out of the box, so fix it

WCF services and clients do not support HTTP compression out of the box in .NET 3.5, even if you turn on Dynamic Compression in IIS 6 or 7. It has been fixed in .NET 4, but those of you stuck with .NET 3.5 for the foreseeable future are out of luck. First of all, it’s IIS’s fault that it does not enable HTTP compression for SOAP messages even when you turn on Dynamic Compression in IIS 7. Secondly, it’s WCF’s fault that it does not send the Accept-Encoding: gzip, deflate header in HTTP requests to the server, which tells IIS that the client supports compression. Thirdly, it’s again WCF’s fault that even if you make IIS send back a compressed response, WCF can’t process it, since it does not know how to decompress it. So, you have to tweak IIS and the System.Net factories to make compression work for WCF services. Compression is key for performance since it can dramatically reduce the data transferred from server to client, and thus gives a significant performance improvement if you are exchanging medium to large data over a WAN or the internet.

There are two steps: first configure IIS, then configure System.Net. There’s no need to tweak anything in WCF itself, such as using a Message Interceptor to inject HTTP headers, as you’ll find many people trying to do.

Configure IIS to support gzip on SOAP responses

After you have enabled Dynamic Compression on IIS 7 following the guide, you need to add the following block in the <dynamicTypes> section of the applicationHost.config file inside the C:\Windows\System32\inetsrv\config folder. Be very careful about the spaces in mimeType; they need to be exactly the same as you find in the response headers of SOAP responses generated by WCF services.

<add mimeType="application/soap+xml" enabled="true" />
<add mimeType="application/soap+xml; charset=utf-8" enabled="true" />
<add mimeType="application/soap+xml; charset=ISO-8895-1" enabled="true" />

After adding the block, the config file will look like this:

[Image: applicationHost.config with the added dynamicTypes entries]

For IIS 6, you first need to enable dynamic compression and then allow the .svc extension so that IIS compresses responses from WCF services.
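If memory serves, the IIS 6 side is done with adsutil.vbs along these lines (an unverified sketch; double-check the metabase property names against the IIS 6 documentation):

cd C:\Inetpub\AdminScripts
cscript.exe adsutil.vbs set W3Svc/Filters/Compression/Parameters/HcDoDynamicCompression true
cscript.exe adsutil.vbs set W3Svc/Filters/Compression/gzip/HcScriptFileExtensions "asp" "dll" "exe" "svc"
cscript.exe adsutil.vbs set W3Svc/Filters/Compression/deflate/HcScriptFileExtensions "asp" "dll" "exe" "svc"
iisreset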

Next you need to make WCF send the Accept-Encoding: gzip, deflate header as part of request and then support decompressing a compressed response.

Send proper request header in WCF requests

You need to override the System.Net default WebRequest creator to create HttpWebRequest with compression turned on. First you create a class like this:

using System;
using System.Net;
using System.Reflection;

namespace WcfHttpCompressionEnabler
{
    public class CompressibleHttpRequestCreator : IWebRequestCreate
    {
        public CompressibleHttpRequestCreator()
        {
        }

        WebRequest IWebRequestCreate.Create(Uri uri)
        {
            // HttpWebRequest has no public constructor, so create it via reflection
            HttpWebRequest httpWebRequest =
                Activator.CreateInstance(typeof(HttpWebRequest),
                    BindingFlags.CreateInstance | BindingFlags.Public |
                    BindingFlags.NonPublic | BindingFlags.Instance,
                    null, new object[] { uri, null }, null) as HttpWebRequest;

            if (httpWebRequest == null)
            {
                return null;
            }

            // Makes the request send "Accept-Encoding: gzip, deflate" and
            // transparently decompress the compressed response
            httpWebRequest.AutomaticDecompression = DecompressionMethods.GZip |
                DecompressionMethods.Deflate;

            return httpWebRequest;
        }
    }
}

Then in the WCF client application’s app.config or web.config, you need to put this block inside <system.net>, which tells System.Net to use your factory instead of the default one.

<system.net>
  <webRequestModules>
    <remove prefix="http:"/>
    <add prefix="http:"
         type="WcfHttpCompressionEnabler.CompressibleHttpRequestCreator, WcfHttpCompressionEnabler,
               Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  </webRequestModules>
</system.net>

That’s it.

I have uploaded a sample project which shows how all of this works.

Quick ways to boost performance and scalability of ASP.NET, WCF and Desktop Clients

There are some simple configuration changes you can make in machine.config and IIS to give your web applications a significant performance boost. These are simple, harmless changes, but they make a lot of difference in terms of scalability. By tweaking system.net settings, you can increase the number of parallel calls that can be made from the services hosted on your servers as well as on desktop computers, and thus increase scalability. By changing the WCF throttling config, you can increase the number of simultaneous calls WCF can accept and thus make the most of your hardware. By changing the ASP.NET process model, you can increase the number of concurrent requests that can be served by your website. And finally, by turning on IIS caching and dynamic compression, you can dramatically increase page download speed on browsers and the overall responsiveness of your applications.
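To give a flavour of the tweaks involved, here are two of the typical knobs (illustrative values; the article explains what to set and why):

<!-- system.net: raise the default limit of 2 outbound connections per host -->
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="100" />
  </connectionManagement>
</system.net>

<!-- WCF throttling (goes inside a serviceBehaviors/behavior element):
     allow more simultaneous calls, sessions, and instances -->
<serviceThrottling maxConcurrentCalls="100"
                   maxConcurrentSessions="100"
                   maxConcurrentInstances="100" />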

Read the CodeProject article for more details.

http://www.codeproject.com/KB/webservices/quickwins.aspx


Please vote for me if you find the article useful.

Dynamically set WCF Endpoint in Silverlight

When you add a WCF service reference to a Silverlight application, it generates the ServiceReference.ClientConfig file where the URL of the WCF endpoint is defined. When you add the WCF service reference on a development computer, the endpoint URL points to localhost. But when you deploy the Silverlight client and the WCF service to a production server, the endpoint URL is no longer on localhost but on some domain. As a result, the Silverlight application fails to call the WCF services. You have to manually change the endpoint URL in the Silverlight config file to match the production URL before deploying live. Moreover, if you are deploying the Silverlight application and the server-side WCF service as a distributable application, where customers install the service themselves on their own domain, then you don’t know what the production URL will be. As a result, you can’t rely on ServiceReference.ClientConfig. You have to dynamically find out on which domain the Silverlight application is running and what the endpoint URL of the WCF service will be. Here I will show you an approach to dynamically decide the endpoint URL.

First you add a typical service reference and generate a ServiceReference.ClientConfig that looks like this:

<configuration>
    <system.serviceModel>
        <bindings>
            <basicHttpBinding>
                <binding name="BasicHttpBinding_ProxyService" maxBufferSize="2147483647"
                    maxReceivedMessageSize="2147483647">
                    <security mode="None" />
                </binding>
                <binding name="BasicHttpBinding_WidgetService" maxBufferSize="2147483647"
                    maxReceivedMessageSize="2147483647">
                    <security mode="None" />
                </binding>
            </basicHttpBinding>
        </bindings>
        <client>
            <endpoint address="http://localhost:8000/Dropthings/API/Proxy.svc/pox"
                binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_ProxyService"
                contract="DropthingsProxy.ProxyService" name="BasicHttpBinding_ProxyService" />
            <endpoint address="http://localhost:8000/Dropthings/API/Widget.svc/pox"
                binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_WidgetService"
                contract="DropthingsWidgetService.WidgetService" name="BasicHttpBinding_WidgetService" />
        </client>
    </system.serviceModel>
</configuration>

As you see, all the URLs point to localhost in my development environment. The Silverlight application needs to dynamically decide what URL it is running from and then resolve the endpoint URL accordingly.

I do this by creating a helper class that checks the URL of the Silverlight application and then decides what’s going to be the URL of the endpoint.

public class DynamicEndpointHelper
{
    // Put the development server site URL including the trailing slash
    // This should be same as what's set in the Dropthings web project's
    // properties as the URL of the site in development server
    private const string BaseUrl = "http://localhost:8000/Dropthings/";

    public static string ResolveEndpointUrl(string endpointUrl, string xapPath)
    {
        string baseUrl = xapPath.Substring(0, xapPath.IndexOf("ClientBin"));
        string relativeEndpointUrl = endpointUrl.Substring(BaseUrl.Length);
        string dynamicEndpointUrl = baseUrl + relativeEndpointUrl;
        return dynamicEndpointUrl;
    }
}

In the Silverlight app, I construct the Service Client this way:

private DropthingsProxy.ProxyServiceClient GetProxyService()
{
    DropthingsProxy.ProxyServiceClient service = new DropthingsProxy.ProxyServiceClient();
    service.Endpoint.Address = new EndpointAddress(
        DynamicEndpointHelper.ResolveEndpointUrl(service.Endpoint.Address.Uri.ToString(),
        App.Current.Host.Source.ToString()));
    return service;
}

After creating the service client with the default settings, it changes the endpoint URL to the currently running website’s URL. This solution works when the WCF services are exposed from the same web application. If the WCF services are hosted on a different domain and you are making cross-domain calls, this will not work. In that case, you will have to find out the domain of the WCF service and use that instead of localhost.
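For that cross-domain case, a hypothetical helper could swap just the host portion of the configured endpoint, keeping the relative service path:

// Hypothetical helper for cross-domain WCF services: replace the host
// of the configured endpoint URL with the known service domain
public static string ResolveCrossDomainUrl(string endpointUrl, string serviceHost)
{
    var builder = new UriBuilder(endpointUrl) { Host = serviceHost };
    return builder.Uri.ToString();
}

Remember that Silverlight also requires a clientaccesspolicy.xml (or crossdomain.xml) on the service domain for cross-domain calls to succeed.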

Do not use “using” in WCF Client

You know that any IDisposable object must be disposed of with using. So, you have been using using to wrap WCF ChannelFactory and clients like this:

using (var client = new SomeClient())
{
    // ... call service operations ...
}

Or, if you are doing it the hard and slow way (without really knowing why), then:

using (var factory = new ChannelFactory<ISomeService>())
{
    var channel = factory.CreateChannel();
    // ... call service operations ...
}

That’s what we have all learnt in school, right? We have learnt it wrong!

When there’s a network-related error, or the connection is broken, or the call times out before Dispose is called, the using keyword’s attempt to dispose the channel results in the following exception:

failed: System.ServiceModel.CommunicationObjectFaultedException : 
The communication object, System.ServiceModel.Channels.ServiceChannel,
cannot be used for communication because it is in the Faulted state.

Server stack trace:
at System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout)

Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
at System.ServiceModel.ClientBase`1.System.ServiceModel.ICommunicationObject.Close(TimeSpan timeout)
at System.ServiceModel.ClientBase`1.Close()
at System.ServiceModel.ClientBase`1.System.IDisposable.Dispose()

There are various reasons why the underlying connection can be in a broken state before the using block completes and .Dispose() is called: the network connection dropping, IIS doing an app pool recycle at that moment, some proxy sitting between you and the service dropping the connection, and so on. The point is, it might seem like a corner case, but it’s a likely corner case. If you are building a highly available client, you need to handle this properly before you go live.

So, do NOT use using on a WCF Channel/Client/ChannelFactory. Instead, you need an alternative. Here’s what you can do:

First create an extension method.

public static class WcfExtensions
{
    public static void Using<T>(this T client, Action<T> work)
        where T : ICommunicationObject
    {
        try
        {
            work(client);
            client.Close();
        }
        catch (CommunicationException)
        {
            // Channel is faulted; Close() would throw, so Abort() instead
            client.Abort();
        }
        catch (TimeoutException)
        {
            client.Abort();
        }
        catch (Exception)
        {
            // Clean up the channel, then let the unexpected exception bubble up
            client.Abort();
            throw;
        }
    }
}


Then use this instead of the using keyword:

new SomeClient().Using(channel =>
{
    channel.Login(username, password);
});

Or if you are using ChannelFactory then:

new ChannelFactory<ISomeService>().Using(factory =>
{
    var channel = factory.CreateChannel();
    channel.Login(username, password);
});

Enjoy!