I will be unavailable for some days as I am going through difficult times. I had to postpone my US travel due to a family situation, so to those who were expecting me there: I am sorry to say I will be arriving late.
Cost effective, failsafe, 2 server load balanced web & DB servers
We were very poor when we first started Pageflakes, so we needed a really cheap but fully reliable deployment configuration. The requirements were:
- Failsafe: if a server goes down, another one can take over without users noticing
- Database must be mirrored to multiple servers
- Web servers will run in load balanced mode so that in case one goes down, the other takes over immediately
- Do it with the least hardware and cost
So, we thought a lot and could not devise a reliable solution with fewer than three servers. Then we found one that needs only two. Here's the plan:
We handed this plan to our hosting service provider.
We have two Windows servers, both with IIS 6.0 and SQL Server 2005. Let's call them the Web server and the DB server.
The Web server gets 60% of web traffic, configured via Network Load Balancing. We used Windows Network Load Balancing to save the cost of buying a separate load balancer. We also used Windows Firewall instead of an external firewall. You can imagine how poor we were. The DB server gets 40% of web traffic and hosts the database in its SQL Server 2005. We started with SQL Server 2005 Workgroup Edition because it was the only version we could afford. So, we could not use the new Database Mirroring feature; instead we had to use Log Shipping. The Web server also had SQL Server 2005, but it hosted only a standby database, so we did not have to pay a licensing fee. Remember: you do not need to pay for a SQL Server license if the server is hosting standby databases only. The DB server also runs the website in its IIS.
So, we have two servers running the website in NLB; the web tier is properly load balanced and failsafe. Then we have the database on the DB server, which is log shipped to the Web server. In case the DB server goes down, we can divert all traffic to the Web server and bring up the Web server's standby database in order to run the site solely from the Web server. When the DB server comes back online, we configure log shipping the opposite way and divert most of the traffic back to the DB server. Thus the DB server becomes the Web server and the previous Web server becomes the DB server. It requires some manual work and is not fully automated, but it is the cheapest solution for a reliable configuration.
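The failover step above, bringing up the standby database, can be sketched in T-SQL. This is a hedged sketch, not our exact script; the database name and backup path are placeholders:

```sql
-- On the Web server (the log shipping secondary), after the DB server dies:
-- apply the last shipped log backup (if one is available) with NORECOVERY.
RESTORE LOG [Pageflakes]
    FROM DISK = N'D:\LogShip\Pageflakes_last.trn'
    WITH NORECOVERY;

-- Then bring the standby database online so the site can run from this server.
RESTORE DATABASE [Pageflakes] WITH RECOVERY;
```

Once the old DB server is back, log shipping is configured in the reverse direction, as described above.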
And when you need more capacity in the future, it's very easy to add new servers.
Things you must do for ASP.NET 2.0 Membership Provider before going live
Here are some tweaks you should make in your web.config if you are using the ASP.NET 2.0 Membership Provider, before you go live on your production server:
- Add an "applicationName" attribute in the Profile provider configuration. If you do not add a specific name here, the Profile provider will use a GUID. So, on your local machine you will have one GUID and on the production server you will have another. If you copy your local DB to the production server, you won't be able to reuse the records available in your local DB, and ASP.NET will create a new application on production. Here's where you need to add it:

  <profile enabled="true">
    <providers>
      <clear />
      <add name="..." type="System.Web.Profile.SqlProfileProvider"
           connectionStringName="..." applicationName="YourApplicationName"
           description="..." />
    </providers>
  </profile>

- The Profile provider will automatically save the profile whenever a page request completes. This can result in unnecessary UPDATEs on your DB, which carry a significant performance penalty. So, turn off automatic save and do it explicitly from your code using Profile.Save():

  <profile enabled="true" automaticSaveEnabled="false">

- The Role Manager always queries the database to get a user's roles, which has a significant performance penalty. You can avoid this by letting Role Manager cache role information in a cookie. This only works for users whose assigned roles do not exceed the 2 KB cookie limit, but that's not a common scenario. So, you can safely store role info in the cookie and save one DB roundtrip on every request to .aspx and .asmx:

  <roleManager enabled="true" cacheRolesInCookie="true">

- See my previous post on optimizing two important SPs used by the Profile provider.
Redirect Traffic from old to new server when you change hosting service
How do you redirect your users when you change hosting services? Currently your domain maps to an IP address owned by your current hosting provider. When you move to a new hosting provider, you get a new IP range for your new servers. So, even if you change the DNS mapping, it takes at least 4 days to get refreshed all over the world. During those 4 days, some users will go to the new IP and some will go to the old IP.
The problem is redirecting all users to the new IP without letting them know anything happened.
At Pageflakes, we have done this many times. We had all sorts of problems with our hosting providers and changed servers almost once every month. So, we had to come up with a solution that works transparently and without any downtime.
Here's what we do. First, we map a new subdomain like new.pageflakes.com to the new server's IP. Then we create a new website (not a virtual directory) on the old web server called "Redirectors". It maps to a folder which contains nothing but a global.asax and a web.config. Then we go to Website Properties -> Home Directory -> Configuration and map ASP.NET to receive all web requests, including all URLs ending in .html, .gif, .css, .js, etc.
Next, we write the following code in Global.asax which redirects
all traffic to the new server.
protected void Application_BeginRequest(Object sender, EventArgs e)
{
    string url = HttpContext.Current.Request.Url.AbsolutePath;
    string queryParams = HttpContext.Current.Request.QueryString.ToString();

    if (queryParams != "")
    {
        Response.Redirect("http://new.pageflakes.com" + url + "?" + queryParams);
    }
    else
    {
        Response.Redirect("http://new.pageflakes.com" + url);
    }
}
So, anyone trying to go to www.pageflakes.com/aboutus.html
gets redirected to new.pageflakes.com/aboutus.html. The redirector
keeps the query string and logical path intact. So, complicated
URLs like
www.pageflakes.com/flakes/amit/notepad/notepad.html?haha=huhu&hehe=hoho
get converted nicely.
After doing this, we stop the default website listening on port 80 and turn on the redirector. By this time, the new server is already up and running on the new subdomain.
Now we make the DNS change and map the new server's IP to www. Users who still have the old IP in their DNS cache go to the old server and get redirected to the new server. After a while, when their DNS cache gets refreshed and they get the new IP, all their requests go directly to the new server, and we have nothing more to do. After 4 or 5 days, we can safely wipe out the old server and start kicking the old hosting provider's backend.
Anyone has a better idea?
Pageflakes – the AJAX king
Here’s what a Digg article says:
Pageflakes and YouOS – not Google – are the Ajax
Kings
InformationWeek took a close look at Google and about 20
startups to find out who’s the Ajax King. In the “webtop” or
“personalized startpage” category – arguably the most important of
all categories – the winners are Pageflakes and YouOS, ahead of
Google and Microsoft.
I am very happy to see that we were again recognized as the best webtop, ahead of Microsoft and Google. The first recognition was in the Web 2.0 Awards. Pleeeeeeeeeease give us a Digg, and don't miss the article on InformationWeek.
Forms authentication failed for the request. Reason: The ticket supplied was invalid. (Solution)
When you turn on web garden (multiple processes per application pool) or move to a multi-server load balanced deployment where several servers serve the same website, you will have Forms Authentication problems. Users will get logged out automatically or frequently see the yellow screen of death (the ASP.NET error page). This happens because ASP.NET encrypts the login information in a cookie, but the encryption key is unique for each machine and each process. So, if one user hits Server #1 and gets a cookie encrypted with its key, and the next hit goes to Server #2, Server #2 will fail to decrypt the cookie and will log the user out or show the general ASP.NET error page.
In order to prevent this on your production server, you need to
remember this before you go live:
The reason a forms auth ticket fails is normally that either the validation key or the decryption key is not in sync across all servers in a web farm. Another potential reason can be that both ASP.NET 1.1 and ASP.NET 2.0 applications are issuing forms auth tickets with the same domain and path.
For the first case, setting the validationKey and decryptionKey attributes explicitly on each web server will solve the problem.
For the second case, setting the validationKey and decryptionKey attributes explicitly for *both* the ASP.NET 1.1 and ASP.NET 2.0 applications is necessary. Additionally, in the ASP.NET 2.0 apps, the "decryption" attribute in <machineKey> should be set to "3DES".
This is what I learned from Stephan Schackow (Microsoft Atlas team). Here's how the machine.config entry should look:
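The snippet itself did not survive the formatting of this post, so below is a hedged reconstruction of a typical <machineKey> entry. The key values are placeholders; generate your own keys and use the same values on every server in the farm:

```xml
<machineKey
    validationKey="...your-own-hex-validation-key..."
    decryptionKey="...your-own-hex-decryption-key..."
    validation="SHA1"
    decryption="3DES" />
```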
Catch No 1:
How do you generate the machine keys? You need a utility to produce the keys for your PC. I have made a .exe which can generate such keys for you. Here's how you run it:
SecurityKey.exe 24 64
It takes two parameters, and these are the exact values you have to provide.
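I can't ship the binary here, but a minimal sketch of such a key generator follows. It assumes the common approach of filling a byte array from a cryptographic RNG and hex-encoding it; the class name KeyGen is made up, and 24 and 64 are the byte lengths for the decryption and validation keys:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical equivalent of SecurityKey.exe: generates random hex strings
// suitable for the machineKey decryptionKey/validationKey attributes.
public static class KeyGen
{
    public static string CreateKey(int byteLength)
    {
        byte[] bytes = new byte[byteLength];
        // Cryptographically strong randomness (available in .NET 2.0)
        new RNGCryptoServiceProvider().GetBytes(bytes);

        StringBuilder hex = new StringBuilder(byteLength * 2);
        foreach (byte b in bytes)
            hex.AppendFormat("{0:X2}", b); // two hex chars per byte
        return hex.ToString();
    }

    public static void Main(string[] args)
    {
        Console.WriteLine("decryptionKey=\"{0}\"", CreateKey(int.Parse(args[0]))); // e.g. 24
        Console.WriteLine("validationKey=\"{0}\"", CreateKey(int.Parse(args[1]))); // e.g. 64
    }
}
```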
For super cautious IT guys like me: don't worry, it's a .NET 2.0 binary, no virus. Test the strength of your anti-virus software on this .exe if you like. If it can't find any virus, then either I'm smarter than you and have hidden a virus successfully, or there's no virus at all and I am not as smart as I sound.
Catch No 2:
You have put the machine keys in both machine.configs. Restarted IIS. Even restarted your server. But you still see lots of Event Log error entries showing that users are still getting the dreaded "Forms authentication failed for the request. Reason: The ticket supplied was invalid." So, what did you do wrong? You call Microsoft support. You go to forums and make posts. Everyone says what you did is correct.
Here's what you need to do: wait. Wait for 2 or 3 days until all those users come back to your website at least once. Those users have cookies encrypted with the previously assigned encryption key pair. Naturally, those cookies fail to decrypt with the new key pair you have just specified in machine.config. So, until all those users get new cookies, you will keep having the error message. Don't be alarmed if you see this even after a week or a month. It just means some user visited you after a long time, and you are not doing much good in attracting users to your site. So, if you see such event log entries after a week or two, call your marketing team and ask what kind of marketing they are doing.
It would be really good if there were something like a "Checklist for Going Live with ASP.NET" which covered all these issues.
How to change user name in ASP.NET 2.0 Membership Provider
Profile.UserName is a read-only field. So, how do you change a user's name? Many sites use the email address as the user name (like us at Pageflakes), and we want to allow users to change their email address, which in turn changes their user name. So, how do you do it?
It looks like there's no easy way. You have to do the following:
- Create a new user using the new email address
- Get the password of the old account and set it on the new account. If you can't get the old password via the Membership provider, ask the user.
- Create a new profile for the new user account
- Copy all the properties from the old profile to the new profile object
- Log the user out from the old account
- Automatically sign in to the new account so that the user does not notice what an incredible thing just happened
Here’s how I do it:
if (Profile.UserName != newUserName)
{
    // Changing the email address of the user. Delete the current user account
    // and create a new one using the new email address but the same password
    if (null != Membership.GetUser(newUserName))
        throw new ApplicationException(
            "There's another user with the same email. Please enter a different email.");

    // Remember the old user name before signing out
    string oldUserName = Profile.UserName;

    MembershipUser newUser = Membership.CreateUser(newUserName, currentPassword);

    // Create a profile for the new user and copy all values from the
    // current profile to the new profile
    ProfileCommon newProfile = ProfileCommon.Create(newUserName, true) as ProfileCommon;
    newProfile.IsInvited = Profile.IsInvited;
    newProfile.IsRealUser = Profile.IsRealUser;
    newProfile.Name = newUserName;
    newProfile.Save();

    if (Membership.ValidateUser(newUserName, currentPassword))
    {
        FormsAuthentication.SignOut();
        Session.Abandon();

        // Delete the old profile and user
        ProfileManager.DeleteProfile(oldUserName);
        Membership.DeleteUser(oldUserName);

        //FormsAuthentication.SetAuthCookie(newUserName, true);
        FormsAuthentication.RedirectFromLoginPage(newUserName, true);
    }
}
How to use ASP.NET 2.0 Profile object from web service code
You have started using the ASP.NET 2.0 Profile provider and you are very happy with it. All your .aspx pages are full of "Profile.Something". You have also introduced a lot of new properties on the profile object via web.config. Then you add a new web service, and there you want to access the Profile object. You realize you are doomed: you cannot access the Profile object from a web service.
At runtime, ASP.NET generates a class from the specification provided in web.config, which becomes the "Profile" object in .aspx pages. But this object is not available in web services (.asmx.cs), and you cannot see the custom properties you have added to the profile object. HttpContext.Current.Profile will give you a reference to the Profile object, but its type is ProfileBase, which does not expose your custom properties. That makes sense, because the class is generated at runtime. But if it can be made available in .aspx.cs, it should also be available in .asmx.cs.
To overcome this problem, you have to hand code that profile class in your App_Code folder and then configure web.config so that it does not auto-generate the class but instead uses yours.
Here’s what you do in web.config:
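The snippet was lost from this post, so here is a hedged reconstruction. The key part is the inherits attribute naming your hand-coded class (UserProfile here); the provider details are elided as in the earlier example, and the <properties> section is omitted since the properties now live in code:

```xml
<profile enabled="true" inherits="UserProfile">
  <providers>
    <clear />
    <add name="..." type="System.Web.Profile.SqlProfileProvider"
         connectionStringName="..." applicationName="..." />
  </providers>
</profile>
```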
I have set the profile element's inherits attribute to UserProfile. Now go to App_Code and make a UserProfile class like this:
public class UserProfile : System.Web.Profile.ProfileBase
{
    [SettingsAllowAnonymousAttribute(true)]
    public virtual int Timezone
    {
        get
        {
            return ((int)(this.GetPropertyValue("Timezone")));
        }
        set
        {
            this.SetPropertyValue("Timezone", value);
        }
    }
Declare all the properties like this. Don't forget to add the [SettingsAllowAnonymousAttribute(true)] attribute on the properties which you want to make available to anonymous users.
At the end of the class, add this method:
public virtual UserProfile GetProfile(string username)
{
    return ((UserProfile)(ProfileBase.Create(username)));
}
Here's an easy way to avoid hand coding this class: generate it automatically. Before you make the changes in web.config and create the UserProfile class, run your web project as it was before, but first turn off SQL Server. This will make the ASP.NET execution break on the first call to a Profile object property. For example, if you have a custom property Timezone in the Profile object, execution will break on this line:
public virtual int Timezone
{
    get
    {
        return ((int)(this.GetPropertyValue("Timezone")));
It will fail to load the profile object values from the database because the database is down. If you scroll up, you will see that this is the class ASP.NET generates at runtime, with all the properties already declared on it. So, you can just copy & paste it into your own class easily!
But after copying, you will realize there's no [SettingsAllowAnonymousAttribute(true)] attribute, so you will have to add them manually. Also, after making your own custom class, you will have to remove all the custom property declarations from the <properties> section in web.config.
Now that you have your own Profile class, inside a web service you can cast (HttpContext.Current.Profile as UserProfile) and then use all the custom properties.
If you don't want to enjoy strongly typed coding in web services, you can always use the old way of accessing Profile properties via Profile.GetPropertyValue("Timezone"). But it's no fun.
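A minimal sketch of what this looks like inside a web service method; the method name is made up, and the Timezone property follows the example class above:

```csharp
[WebMethod]
public int GetUserTimezone()
{
    // Cast the weakly typed ProfileBase to our hand-coded class
    UserProfile profile = HttpContext.Current.Profile as UserProfile;

    // Now the custom properties are available, strongly typed
    return profile.Timezone;
}
```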
Optimize ASP.NET 2.0 Profile Provider before you go live
Did you know there are two important stored procedures in the ASP.NET 2.0 Profile provider that you can optimize significantly? If you use them without the necessary optimization, your servers will sink under heavy load, taking your business down with them. Here's the story:
During March, Pageflakes was shown at MIX 2006. We were having a glamorous time back then. We were right on the Showcase of the Atlas website. The first company, not the second or third. The number of visits per day was rising sky high. One day we noticed the server was no more. We restarted the server and brought it back; again it died within an hour. After doing a lot of postmortem analysis on the remains of the server's body parts, we found that it had 100% CPU and super high IO usage. The hard drives were overheated and turned themselves off in order to save themselves.
This was quite surprising, because we were very intelligent back then and had profiled every single web service function. So, we went through hundreds of megabytes of logs hoping to find which web service function was taking the time. We suspected one: the first function that loads a user's page setup. We broke it up into smaller parts in order to see which part was taking most of the time.
private void GetPageflake(string source, string pageID, string userUniqueName)
{
    if (Profile.IsAnonymous)
    {
        using (new TimedLog(Profile.UserName, "GetPageflake"))
        {
You see, the entire function body is timed. (If you want to learn how this timing works, wait until I write a post on it.) We also timed smaller parts which we suspected were taking the most resources. But we could not find a single place in our code which was taking any significant time. Our codebase is always super optimized (after all, you know who reviews it: me).
Meanwhile, users were shouting, management was screaming, and support staff were complaining on the phone. Developers were fainting one by one; the remaining ones were furiously sweating, the blood vessels on their foreheads popping out. Nothing special, just a typical situation we have a couple of times every month.
Now you must be shouting, "You could have used SQL Profiler, you idiot!" But we were using SQL Server Workgroup Edition, which does not have SQL Profiler. So, we had to hack our way through to get it running on a server somehow. Don't ask how. After running the SQL Profiler, boy, were we surprised! The name of the honorable SP which was giving us so much pleasure was none other than the famous and most prestigious one, which we all know by name due to its incredible contribution to the development of human civilization. Please welcome the great stored procedure "dbo.aspnet_Profile_GetProfiles"!
We used (and still use) the Profile provider extensively.
Here’s the SP:
CREATE PROCEDURE [dbo].[aspnet_Profile_GetProfiles]
@ApplicationName nvarchar (256),
@ProfileAuthOptions int ,
@PageIndex int ,
@PageSize int ,
@UserNameToMatch nvarchar (256) = NULL ,
@InactiveSinceDate datetime = NULL
AS
BEGIN
DECLARE @ApplicationId uniqueidentifier
SELECT @ApplicationId = NULL
SELECT @ApplicationId = ApplicationId FROM aspnet_Applications
WHERE LOWER (@ApplicationName) = LoweredApplicationName
IF (@ApplicationId IS NULL )
RETURN
-- Set the page bounds
DECLARE @PageLowerBound int
DECLARE @PageUpperBound int
DECLARE @TotalRecords int
SET @PageLowerBound = @PageSize * @PageIndex
SET @PageUpperBound = @PageSize - 1 + @PageLowerBound
-- Create a temp table TO store the select results
CREATE TABLE #PageIndexForUsers
(
IndexId int IDENTITY (0, 1) NOT NULL ,
UserId uniqueidentifier
)
-- Insert into our temp table
INSERT INTO #PageIndexForUsers (UserId)
SELECT u.UserId
FROM dbo.aspnet_Users u, dbo.aspnet_Profile p
WHERE ApplicationId = @ApplicationId
AND u.UserId = p.UserId
AND (@InactiveSinceDate IS NULL OR LastActivityDate <= @InactiveSinceDate)
AND ( (@ProfileAuthOptions = 2)
OR (@ProfileAuthOptions = 0 AND IsAnonymous = 1)
OR (@ProfileAuthOptions = 1 AND IsAnonymous = 0)
)
AND (@UserNameToMatch IS NULL OR LoweredUserName LIKE LOWER (@UserNameToMatch))
ORDER BY UserName
SELECT u.UserName, u.IsAnonymous, u.LastActivityDate, p.LastUpdatedDate,
DATALENGTH (p.PropertyNames) + DATALENGTH (p.PropertyValuesString)
+ DATALENGTH (p.PropertyValuesBinary)
FROM dbo.aspnet_Users u, dbo.aspnet_Profile p, #PageIndexForUsers i
WHERE u.UserId = p.UserId AND p.UserId = i.UserId AND
i.IndexId >= @PageLowerBound AND i.IndexId <= @PageUpperBound
DROP TABLE #PageIndexForUsers
END
First, it looks up the ApplicationID:
DECLARE @ApplicationId uniqueidentifier
SELECT @ApplicationId = NULL
SELECT @ApplicationId = ApplicationId FROM aspnet_Applications
WHERE LOWER (@ApplicationName) = LoweredApplicationName
IF (@ApplicationId IS NULL )
RETURN
Then it creates a temporary table (why not a table variable?) in order to store users' profiles:
-- Create a temp table TO store the select results
CREATE TABLE #PageIndexForUsers
(
IndexId int IDENTITY (0, 1) NOT NULL ,
UserId uniqueidentifier
)
-- Insert into our temp table
INSERT INTO #PageIndexForUsers (UserId)
If it gets called very frequently, there will be very high IO due to the temporary table creation. It also runs through two very big tables: aspnet_Users and aspnet_Profile. The SP is written in such a way that if one user has multiple profiles, it will return all profiles of the user. But normally we store one profile per user, so there's no need to create a temporary table. Moreover, there's no need for doing LIKE LOWER(@UserNameToMatch): the SP is always called with a full user name, which we can match directly using the equality operator.
So, we opened up the stored proc and did open heart bypass surgery, like this:
IF @UserNameToMatch IS NOT NULL
BEGIN
SELECT u.UserName, u.IsAnonymous, u.LastActivityDate, p.LastUpdatedDate,
DATALENGTH (p.PropertyNames) + DATALENGTH (p.PropertyValuesString) + DATALENGTH (p.PropertyValuesBinary)
FROM dbo.aspnet_Users u
INNER JOIN dbo.aspnet_Profile p ON u.UserId = p.UserId
WHERE
u.LoweredUserName = LOWER (@UserNameToMatch)
SELECT @@ROWCOUNT
END
ELSE
BEGIN -- Do the original bad things
It ran fine locally. Now it was time to run it on the server. This is an important SP used by the ASP.NET 2.0 Profile provider, the heart of the ASP.NET framework. If we did something wrong here, we might not see the problem immediately, but maybe after a month we would realize users' profiles were mixed up with no way to get them back. So, it was a pretty hard decision to run this on a live production server directly without sufficient testing. We did not have time to test it properly anyway; we were already down. So, we all gathered, said our prayers and hit the "Execute" button in SQL Server Management Studio.
The SP ran fine. On the server, CPU usage came down from 100% to 30%. IO usage also came down to 40%.
We went live again.
Here's another SP which gets called on every page load and web service call on our site, because we use the Profile provider extensively.
CREATE PROCEDURE [dbo].[aspnet_Profile_GetProperties]
@ApplicationName nvarchar (256),
@UserName nvarchar (256),
@CurrentTimeUtc datetime
AS
BEGIN
DECLARE @ApplicationId uniqueidentifier
SELECT @ApplicationId = NULL
SELECT @ApplicationId = ApplicationId FROM dbo.aspnet_Applications WHERE LOWER (@ApplicationName) = LoweredApplicationName
IF (@ApplicationId IS NULL )
RETURN
DECLARE @UserId uniqueidentifier
SELECT @UserId = NULL
SELECT @UserId = UserId
FROM dbo.aspnet_Users
WHERE ApplicationId = @ApplicationId AND LoweredUserName = LOWER (@UserName)
IF (@UserId IS NULL )
RETURN
SELECT TOP 1 PropertyNames, PropertyValuesString, PropertyValuesBinary
FROM dbo.aspnet_Profile
WHERE UserId = @UserId
IF (@@ROWCOUNT > 0)
BEGIN
UPDATE dbo.aspnet_Users
SET LastActivityDate=@CurrentTimeUtc
WHERE UserId = @UserId
END
END
When you run the SP, see the statistics:
Table 'aspnet_Applications' . Scan count 1, logical reads 2, physical reads 0, read -ahead reads 0, lob logical reads 0, lob physical reads 0, lob read -ahead reads 0.
(1 row(s) affected)
Table 'aspnet_Users' . Scan count 1, logical reads 4, physical reads 0, read -ahead reads 0, lob logical reads 0, lob physical reads 0, lob read -ahead reads 0.
(1 row(s) affected)
(1 row(s) affected)
Table 'aspnet_Profile' . Scan count 0, logical reads 3, physical reads 0, read -ahead reads 0, lob logical reads 0, lob physical reads 0, lob read -ahead reads 0.
(1 row(s) affected)
Table 'aspnet_Users' . Scan count 0, logical reads 27, physical reads 0, read -ahead reads 0, lob logical reads 0, lob physical reads 0, lob read -ahead reads 0.
(1 row(s) affected)
(1 row(s) affected)
This stored proc is called whenever you access a property on the Profile object.
First it does a SELECT on aspnet_Applications to find the application ID from the application name. You can easily replace this with a hard-coded application ID inside the SP and save some effort. Normally you run only one application on your production server, so there's no need to look up the application ID on every single call.
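A hedged sketch of that shortcut, replacing the lookup with a hard-coded ID. The GUID below is a placeholder for your own application's ID, which you can read once from aspnet_Applications:

```sql
-- Instead of:
--   SELECT @ApplicationId = ApplicationId FROM dbo.aspnet_Applications
--   WHERE LOWER(@ApplicationName) = LoweredApplicationName
-- hard-code the single application's ID (look it up once, manually):
DECLARE @ApplicationId uniqueidentifier
SET @ApplicationId = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' -- placeholder GUID
```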
The client statistics show how expensive this lookup is.
Then look at the last block, where the aspnet_Users table is updated with LastActivityDate. This is the most expensive part. It is done so that the Profile provider remembers the last time a user's profile was accessed. We do not need to do this on every single page load and web service call; maybe we can do it when the user first logs in and logs out. In our case, a lot of web service calls are made while the user is on the page, and there's only one page anyway. So, we can easily remove this to save a costly UPDATE on the giant aspnet_Users table on every single web service call.
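The change itself is tiny. A hedged sketch: comment out the UPDATE block at the end of aspnet_Profile_GetProperties (keep a copy of the original SP before altering it):

```sql
SELECT TOP 1 PropertyNames, PropertyValuesString, PropertyValuesBinary
FROM dbo.aspnet_Profile
WHERE UserId = @UserId

-- Removed: updating LastActivityDate on every profile read was the most
-- expensive part of this SP. Do it on login/logout instead if you need it.
-- IF (@@ROWCOUNT > 0)
-- BEGIN
--     UPDATE dbo.aspnet_Users
--     SET LastActivityDate = @CurrentTimeUtc
--     WHERE UserId = @UserId
-- END
```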