Yesterday I took a class on website performance optimization, and I would like to share what we did in it.
I took the class at a company which runs a local blog site. The site has serious performance issues because of both hardware and software limitations. The homepage takes around 3 seconds to prepare on the server side, which is absolutely unacceptable. So, it was an ideal candidate for digging into the architecture and code of the site and trying to improve it part by part.
The blog site has the following tables: Blog, Post, Comment, Category. The database is MySQL.
The Blog table, which represents one blog of a user, looks like this:

| Column | Type                     |
|--------|--------------------------|
| ID     | BIGINT                   |
| Title  | Text                     |
| UserID | BIGINT, FK to User table |
The Post table contains the posts in a blog and looks like this:

| Column   | Type                     |
|----------|--------------------------|
| ID       | BIGINT                   |
| BlogID   | BIGINT, FK to Blog table |
| Title    | Text                     |
| Section1 | Char(1)                  |
| Section2 | Char(1)                  |
| Section3 | Char(1)                  |
| ...      |                          |
Here Section1…Section9 are global sections for blog posts, like Politics, Jokes, Entertainment etc. Each post belongs to one or more of these sections. If Section1 contains '1', then the post belongs to Politics. (I know what you are thinking; hold on.)
The Comment table contains comments submitted against posts:

| Column | Type                     |
|--------|--------------------------|
| ID     | BIGINT                   |
| PostID | BIGINT, FK to Post table |
| Title  | Text                     |
The Category table contains the categories of a blog to which posts belong. Each blog can have its own category list, and each post belongs to one or more categories.

| Column | Type                     |
|--------|--------------------------|
| ID     | BIGINT                   |
| BlogID | BIGINT, FK to Blog table |
| Title  | Text                     |
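To make this concrete, here is a rough sketch of the original schema in MySQL DDL. It is my own reconstruction from the tables above; the real tables certainly have more columns (post body, dates and so on).

```sql
-- Approximate reconstruction of the original schema (not the actual DDL).
CREATE TABLE Blog (
    ID     BIGINT NOT NULL PRIMARY KEY,
    Title  TEXT,
    UserID BIGINT NOT NULL          -- FK to the User table
);

CREATE TABLE Post (
    ID       BIGINT NOT NULL PRIMARY KEY,
    BlogID   BIGINT NOT NULL,       -- FK to Blog
    Title    TEXT,
    Section1 CHAR(1),               -- '1' means the post is in Politics
    Section2 CHAR(1),
    Section3 CHAR(1)
    -- ... up to Section9
);

CREATE TABLE Comment (
    ID     BIGINT NOT NULL PRIMARY KEY,
    PostID BIGINT NOT NULL,         -- FK to Post
    Title  TEXT
);

CREATE TABLE Category (
    ID     BIGINT NOT NULL PRIMARY KEY,
    BlogID BIGINT NOT NULL,         -- FK to Blog
    Title  TEXT
);
```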
Optimization step 1 – Table design and data types
The first step is to change the table design. You see all those BIGINTs? Why do you need BIGINT instead of INT? INT is 32-bit, which (as unsigned) gives you a range from 0 to 4,294,967,295. Even at 13 posts per second, it would take around 10 years to run out of that range. It's good to plan 10 years ahead, but unfortunately we don't have hardware that is 10 years ahead. When you have a 32-bit processor, every time you make it work with a 64-bit number, for example comparing values in WHERE BlogID=1121233388765543246, it has to compare two 32-bit chunks and combine their results in order to reach a decision. So your 32-bit processor is doing roughly double the work. If your CPU is 60% busy with such computations, converting to INT can bring that load down substantially, because comparing one 32-bit value with another is a single operation for a 32-bit processor. A 32-bit data type is the fastest data type for a 32-bit processor. But if your hardware is 64-bit, the OS is 64-bit and the database engine is 64-bit, then you can use 64-bit data types and they will be the fastest. However, if you have 64-bit hardware but a 32-bit OS and a 32-bit database engine, you do not gain much speed improvement.
So, we converted all the BIGINTs to INT: 64-bit down to 32-bit.
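As a sketch, and assuming MySQL syntax with the column names used above, the conversion looks something like this (real foreign key constraints would have to be dropped and recreated around these changes):

```sql
-- Sketch only: shrink the 64-bit keys to 32-bit.
-- UNSIGNED INT still allows IDs from 0 to 4,294,967,295.
ALTER TABLE Blog
    MODIFY COLUMN ID     INT UNSIGNED NOT NULL,
    MODIFY COLUMN UserID INT UNSIGNED NOT NULL;

ALTER TABLE Post
    MODIFY COLUMN ID     INT UNSIGNED NOT NULL,
    MODIFY COLUMN BlogID INT UNSIGNED NOT NULL;

ALTER TABLE Comment
    MODIFY COLUMN ID     INT UNSIGNED NOT NULL,
    MODIFY COLUMN PostID INT UNSIGNED NOT NULL;

ALTER TABLE Category
    MODIFY COLUMN ID     INT UNSIGNED NOT NULL,
    MODIFY COLUMN BlogID INT UNSIGNED NOT NULL;
```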
We also need to think about storage optimization. A 64-bit value takes double the space of a 32-bit value, but that by itself is a negligible addition. We need to think about bigger things like the Text data types. Sometimes we use the variable-length "Text" data type everywhere in order to avoid future database changes. So, trying to save the effort of changing the database design over the next 10 years ultimately results in poor database performance and storage problems for the next 10 years. Here are some tips on choosing the right field type and size:
- If you don't need Unicode support (multiple languages) in a field, use the non-Unicode types. For example, use varchar instead of nvarchar. All the text data types that start with "n" support Unicode and thus take 2 bytes per character. That's double the size of their non-"n" counterparts. Normally we need Unicode only for fields which somehow get into the UI, e.g. First Name, Title, Description etc. But fields like "FileName", "Path", "Url" etc. do not need Unicode support at all (when your server serves an English-speaking audience rather than Japan, China etc.). So you can use varchar in these cases and save 50% of the space.
- Try to avoid using text data types as primary keys. Searching over a text data type is a lot more expensive than searching over an integer data type. I have seen tables with the primary key on "Url", "FileName", "Full Name" type fields. Think about what happens when you make a text field the primary key. The database engine has to match characters in the field data whenever you do queries like WHERE FileName='something'. It has to match all the characters in order to find a complete match. That's way more expensive than just doing WHERE ID=10, which only takes one 32-bit comparison that the processor can do billions of times per second. You can easily avoid text fields as primary keys by adding an integer "ID" column which is an auto number, and using that ID column in hyperlinks and page navigation. For example, instead of using hyperlinks like www.pageflakes.com/downloadfile.aspx?filename=TestFile.html you can do it as www.pageflakes.com/downloadfile.aspx?id=10.
- Fixed-length text data types are much faster to compare and store than variable-length data types. Whenever you use a fixed-length data type like char(255), the database engine allocates a fixed 255 bytes in the row and knows there's no way it is going to grow. So it need not maintain a 'length' indicator, nor does it need to increase/decrease the row size whenever the content changes. This results in less fragmentation. Less fragmentation means less hard drive usage and less moving around between different parts of the hard drive. So, use fixed-length data types whenever possible, and especially for fields that appear in WHERE clauses. But beware that when fixed-length fields are compared, trailing spaces can matter depending on the database engine and collation: if you compare WHERE FirstName = 'Omar' and the field is defined as char(15), you may need to account for the padding up to 15 characters.
- Do not use an expression on the left side of the WHERE clause. For example, you could handle the padding above like this: WHERE Rtrim(FirstName) = 'Omar'. But that makes the database engine trim every single first name it runs through. Similarly, if you do WHERE Age+10 > 20, it has to do the sum for every row it runs through. The idea is to keep the expression on the right side so that the database engine can evaluate it once and then compare the result with the bare field data on the left. So, do this instead: WHERE Age > 10 (a short illustration follows this list).
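To illustrate that last point, here is a small sketch using the FirstName and Age fields from the examples above (they are not part of the blog schema, just stand-ins for any indexed columns):

```sql
-- Slow: wrapping the column in an expression forces the engine to evaluate
-- it for every row, so it cannot simply seek into an index on that column.
SELECT ID FROM User WHERE RTRIM(FirstName) = 'Omar';
SELECT ID FROM User WHERE Age + 10 > 20;

-- Better: leave the bare column on the left and fold the work into the constant.
SELECT ID FROM User WHERE FirstName = 'Omar';
SELECT ID FROM User WHERE Age > 10;
```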
Optimization Step 2 – Using proper indexes
Normally rows are added sequentially, with the last row added at the end of the table. So, blog posts are stored in the database like this:
Post table sample data:

| ID | BlogID | Title |
|----|--------|-------|
| 1  | 1      | ...   |
| 2  | 1      | ...   |
| 3  | 2      | ...   |
| 4  | 1      | ...   |
| 5  | 2      | ...   |
| 6  | 4      | ...   |
| 7  | 2      | ...   |
| 8  | 4      | ...   |
Now if you ask your database engine to find all the posts that belong to BlogID = 2, it has to run through the entire table to find which rows have BlogID = 2 and select those rows. So, if you have 1 million rows, it runs through 1 million rows just to select 10 or 20 of them. If you see that your server's CPU usage is very high and hard drive activity is also very high, the database is sweeping across your hard drive in order to find rows. Here's an interesting way to find out how busy your hard drive is:
Go to Control Panel -> Administrative Tools -> Performance and add the disk counters, in particular "% Disk Read Time" and "Current Disk Queue Length". (The original post showed a Performance Monitor screenshot here with both hard drives maxed out; the read time was more than the drives could handle.)
"% Disk Read Time" was way too high, which means the drive was trying its best to find rows for the database and being pushed past its limit. The other alarming counter was "Current Disk Queue Length": it showed 22 requests waiting to read something from the hard drive, each stuck until the previous request completed. In other words, the hard drive was so busy running here and there that requests for data kept piling up.
By using an index, you can resolve this. Indexes are stored as balanced tree structures (B-trees in most database engines), which organize data so that it can be searched very quickly. If you have 1 million rows in a table, the index can find an entry in roughly 20 steps instead of scanning all 1 million rows. To learn the details of how these trees work, search for "B-tree" on Google.
However, you need to do the indexing properly in order to get the benefit; improper indexing can make the situation worse. Here's what I discovered in an index on the "Post" table:

Index: (BlogID, ID)
The index is on the BlogID field and the ID field together. We definitely need the BlogID field to be indexed because we need to find rows using BlogID. But what happens when you make one index with both BlogID and ID? It creates each entry in the index using the combination of BlogID and ID. So, an entry in the index is only fully useful when you query rows with both BlogID AND ID, something like WHERE BlogID=1 AND ID=12312. But no one will do that, because if you know the ID, you already know the particular row; you don't need BlogID at all. The usual query is WHERE BlogID=something. So, you need one index which has BlogID only. However, sometimes you also need to find an individual post by ID, so you need another index which has the ID field only. So, there should be two separate indexes: one on BlogID and one on ID.
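In MySQL, that change looks roughly like this. The index names are my own invention, and if ID is already the table's primary key, the primary key index already covers lookups by ID.

```sql
-- Drop the old combined index (name is illustrative) and create two separate ones.
DROP INDEX IX_Post_BlogID_ID ON Post;

CREATE INDEX IX_Post_BlogID ON Post (BlogID);
CREATE INDEX IX_Post_ID     ON Post (ID);
```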
After doing this, table scans stopped, the indexes were used properly, and the performance counters came back down to a healthy level. Peace at last.
Some handy tips for making proper indexes:
- You MUST NOT run a query whose WHERE clause contains fields that are not part of any index. As soon as you run a query with a WHERE clause field that is not covered by an index, the database engine has to run through the ENTIRE table to find a match. That's when CPU and hard drive usage go sky high.
- Look at your WHERE clause fields. If you have AND, make an index that contains all the fields in the ANDs. For example, for WHERE A=1 AND B=2 AND C=3, make an index on A, B, C, and it must be in this exact order (see the sketch after this list).
- If you have multiple queries on the same table and each has different fields in the WHERE clause, create different indexes. For example, if you have one query which does WHERE A=1 and another which does WHERE B=1, create two separate indexes: one on field A and one on field B.
- The same applies to clauses that use OR. For example, WHERE A=1 OR B=2 needs a separate index on A and a separate index on B.
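Here is what those tips look like in MySQL, using a hypothetical table T with columns A, B and C (not part of the blog schema):

```sql
-- WHERE A=1 AND B=2 AND C=3  -> one composite index, columns in that exact order.
CREATE INDEX IX_T_ABC ON T (A, B, C);

-- One query filters on A, another filters on B  -> two separate indexes.
CREATE INDEX IX_T_A ON T (A);
CREATE INDEX IX_T_B ON T (B);

-- WHERE A=1 OR B=2 also needs the two separate indexes above;
-- a single composite index cannot serve both sides of the OR.
```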
Optimization step 3 – Caching rows in memory or text file
Let's look at how the page is rendered; we need to decide on a performance optimization strategy based on how things really look. The homepage layout is roughly this:

| Logo                                   | Page title                                                          | Date                                       |
|----------------------------------------|---------------------------------------------------------------------|--------------------------------------------|
| Top Blogger list                       | Blog posts                                                          | Recent comment list                        |
| Blogger1, Blogger2, Blogger3, Blogger4 | Post 1 Title + description + # of comments; Post 2; Post 3; Post 4 | Comment 1, Comment 2, Comment 3, Comment 4 |
There are three major parts: on the left, the list of top bloggers with the highest number of posts; in the middle, the last 10 posts (one page at a time); and on the right, the last 10 comments.
The blogger list on the left is generated by a query that does the following (a sketch of such a query follows the list):
- Find the number of posts made by each user.
- Order them in descending order; the user with the highest number of posts becomes Rank 1.
- Take the top 10 rows.
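In SQL, such a query might look roughly like the sketch below. This is my reconstruction from the description; the real query also touches the Comment and User tables and uses the site's actual column names.

```sql
-- Count posts per user via that user's blog, highest count first, top 10 only.
SELECT   b.UserID, COUNT(p.ID) AS PostCount
FROM     Blog b
JOIN     Post p ON p.BlogID = b.ID
GROUP BY b.UserID
ORDER BY PostCount DESC
LIMIT 10;
```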
As you can guess, it's a complicated query that makes the database engine do a lot of running around across the Comment, Post, Blog and User tables. So, when we run this query on every visit to the homepage, it makes the entire server pretty slow. There are 10 to 15 requests per second on the homepage, which makes the database engine run this query 10 to 15 times per second. That's around 864,000 times per day! It's a miracle that the server is still alive, don't you think? Now think about the scenario: when does a blogger's rank change? When a blogger makes a post. And how often does a blogger make a post? Say twice or thrice per day. So, why do we run this query 10 times per second instead of twice or thrice per day? Now that is the right question, Neo.
Remember this: the frequency of changing data is much lower than the frequency of reading data.
In other words, INSERT, UPDATE and DELETE happen much less often than SELECT. Normally SELECT happens hundreds or thousands of times more often than INSERT/UPDATE/DELETE. So, we do not need to run the same SELECT again and again when the underlying rows are not changing at all. We can SELECT once and store the result somewhere, so that when we would otherwise SELECT the same thing again and the data in the table has not changed, we can serve the same result from memory instead of going to the database.
So, here's the plan: we run the complicated query once, taking whatever time it needs to find the top 10 bloggers, and get the result. Then we store the result either in memory or in a text file on the web server. The homepage ALWAYS reads from that cached copy and renders the list; it never goes to the database to fetch it. This alone saves the database from running a complicated query and reduces the execution time of the homepage.
The same thing can be done for the "Recent Comments" list. This list gets updated only when a new comment is posted, so it can also be cached in memory or in a text file.
By caching these two most expensive queries, we save about 40% of the execution time of the home page. Now that's some improvement!
Now comes the most interesting part: the center section, which shows 10 posts at a time. The Post table is a pretty big table; it can reach millions of rows within a couple of months. So, queries on this table become pretty expensive unless you do the right indexing, and even indexing won't help much if there's too much traffic on the site. So, we need to do intelligent caching here.
We apply the same principle of updating the cache only when something changes. The post list shows recent posts in paging mode, 10 posts at a time. Normally people see the top 10 posts every time they visit the homepage, so that's something we need to cache seriously. This cache also gets refreshed frequently, maybe once every 20 seconds, whenever someone adds or edits a post, so its lifetime is pretty short.
But if you cache the top 100 posts, you can serve cached content to a user for up to 10 clicks on the 'Next 10 posts' link. Normally around 80% of users will click 'Next 10 posts' once, 60% will click twice, 40% will click thrice. Some persistent ones will click it 10 times, but that's pretty rare. So, you can get by with caching the top 100 or 200 posts. This saves you from doing a SELECT on the large "Post" table, ordering it by "PostDate" and then selecting the top 10. You do that work only once, when you build the "Top100Posts" cache table or text file, and you store the data in sorted order. This way, when paging occurs, you only select the 10 rows starting at the Nth position from the "Top100Posts" table or the text file. That is blazingly fast compared to doing it on the actual "Post" table.
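Here is a sketch of that idea as a database cache table. The article also mentions a plain text file as an alternative; the table name Top100Posts comes from the text, while the column layout and the PostDate ordering are my assumptions.

```sql
-- Cache table: Position doubles as the display order (1..100).
CREATE TABLE Top100Posts (
    Position INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    PostID   INT UNSIGNED NOT NULL,
    Title    TEXT
);

-- Refresh step, run from application code whenever a post is added or edited.
-- TRUNCATE resets the AUTO_INCREMENT counter, and rows are inserted in the
-- SELECT's order, so Position 1 is the newest post.
TRUNCATE TABLE Top100Posts;
INSERT INTO Top100Posts (PostID, Title)
SELECT ID, Title
FROM   Post
ORDER BY PostDate DESC
LIMIT 100;

-- Serving page 4 (posts 31..40) is then a cheap primary-key range scan.
SELECT PostID, Title
FROM   Top100Posts
WHERE  Position BETWEEN 31 AND 40;
```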
Optimization Step 4 – Page output rendering
Here's a nice formula you can use for caching:
Output = code(a) + code(b)
where code(a) = db(a) + db(b)
and code(b) = db(c) + db(d)
This means some code uses one part of the database, and some other code uses another part of the database. The final output that the user sees comes from the combined effort of both pieces of code.
So, if db(a) and db(b) are cached, the output of code(a) can also be cached. There is no way the result of code(a) is going to change until either db(a) or db(b) changes. Think about the blog example: if the top 10 bloggers are calculated by code(a), and it uses two database operations db(a) and db(b) to prepare the list, then until db(a) or db(b) returns a different value, the result of code(a) is never going to change. So, when we cache it, the equation becomes:
x = code(a) = db(a) + db(b)
So the formula is: output = x + code(b)
Let's say code(b) generates the 'Recent comments' list. If that is also cached, then the formula becomes:
y = code(b) = db(c) + db(d)
So the formula is: output, z = x + y
Now you see, the right side is entirely cached. So, until either x or y changes, z stays the same. That means if you and I hit the home page of the blog site, we will see the exact same output until someone posts a blog or a comment.
Now think: why do we even need to execute code(a) and code(b) and generate the output at all? If we do it once in a while and store the entire output in a static HTML file, then we can just deliver that static HTML file to the browser. Is there any need to execute .php or .aspx code at all for the homepage? The entire homepage can become just a static HTML file which the web server can cache in memory. Thus the entire page can be served from memory without a single IO operation or database call!
Conclusion
We started with super high CPU and IO usage, thousands of database calls and a super slow web site, and came down to zero database calls, zero IO and a super fast homepage.
Comments
Really great, especially the indexing; I never thought of indexing two fields. But one confusion: are you saying to save the entire first page as HTML for some time, or do you mean to cache the first page for some time?
Great article. This will surely help many who want to gain insight into optimizing database queries.
Hi Raisul Kabir,
I think Misho bhai means fragment caching: it caches each block separately instead of the whole HTML page.
Misho bro, this article shows only a part of the class that you took. Being on the receiving end, I learned the real essence of how to dig into problems and find out a solution. Moreover, I came to discover how farseeing one should be.
I use this example to teach all my fellow developers when I see someone lacking knowledge of databases (indexing, queries, etc.), optimization, caching and so on, though I don't know whether I can come close to the way you make things understandable.
Once again, thanks bro for this guidance.
Thanks for sharing!! A couple more shiny nuggets for my collection.
Hi Omar,
I have just started my career as an ASP.NET developer, and I have been very interested in Ajax since the day I saw Pageflakes. I have become a great fan of your blog and of your book "Building a Web 2.0 Portal with ASP.NET 3.5", which I have just started reading. Keep up the good work of sharing your knowledge; it's really helpful.
Great article. It will help many people like me.
Brilliant. Keep it up.