Tuesday, October 31, 2006

London, HO!

On Friday night, I am getting on a BA flight from Logan to Heathrow to work out of our London office for a week. I love going to London, as it's the only major city I feel comfortable moving around in without a car.

Besides work, highlights include the Victoria and Albert Museum, and possibly the Tate Modern.

If you'll be in London next week, let me know! I will be on Skype and will have a UK number that my desk phone here in the US will be forwarded to.

Monday, October 30, 2006

Web Performance: Optimizing Page Load Time

Aaron Hopkins posted an article detailing all of the Web performance goodness that I have been advocating for a number of years.

To summarize:

  • Use server-side compression

  • Set your static objects to be cacheable in browser and proxy caches

  • Use keep-alives / persistent connections

  • Turn your browser's HTTP pipelining feature on

These ideas are not new, and neither are the findings in his study. As someone who has worked in the Web performance field for nearly a decade, these are old hat. However, it's always nice to have someone new inject some life back into the discussion.
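The first three items map directly onto Apache configuration. Here is a minimal httpd.conf sketch for Apache 1.3.x, assuming mod_gzip and mod_expires are compiled in (the directives are real; the values are illustrative, not recommendations). The fourth item, pipelining, is a browser-side setting and has no server directive.

```apache
# Keep-alives / persistent connections
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 15

# Make static objects cacheable in browser and proxy caches (mod_expires)
ExpiresActive On
ExpiresByType image/gif "access plus 1 month"
ExpiresByType image/jpeg "access plus 1 month"
ExpiresByType text/css "access plus 1 week"

# Server-side compression of text content (mod_gzip)
mod_gzip_on Yes
mod_gzip_item_include mime ^text/.*
```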

Sunday, October 29, 2006

What do you mean you don't think this way?

One of the lengthy conversations I have had with my wife, as I work my way through understanding how my bipolar works and affects my life, focused on how I think and see the world.

I am just now coming to terms with the fact that the filters I process my world through are radically different from those that most people use. This is a breakthrough for me, as I assumed that everyone saw the world as I did and do.

A lot of this comes from my family. Both sides of my family are rife with bipolar and schizophrenia. My mother has it; my father had it to a lesser degree. My family was unusual because of this. Not dysfunctional; just differently functional.

My wife filters the world in a logical, linear way. Imagine one of those orderly mass protests you see on the news. Lots of people, lots of noise, but everyone moving in the same direction, headed for the same goal.

Then there's me. I filter the world as if there was a riot going on. People running everywhere, throwing rocks, Molotov cocktails, screaming. Troops in vehicles rushing through spraying water cannons. But occasionally, one side or the other gathers enough strength to achieve a small tactical victory, push the other side back a little.

When you step back and look at those of us who have bipolar, remember that we see your world very differently. And it is your world, designed to preserve order and organization, protect you from the "madness" in our minds.

Thursday, October 26, 2006

Living with Bipolar: If you could press a button and be cured, would you?

Since August of this year, I have been exploring the insides of my mind in greater detail. If you read this blog regularly, you are likely aware of the fluctuations in my mood and in the rationality of my behaviour.

If you get the chance, find and watch The Secret Life of the Manic Depressive hosted by Stephen Fry. In his open, intelligent and witty way, Fry tackles the topic of Bipolar Disorders (oh yes, there are more than one), including his own. If you can find it (you will have to try all of the usual channels to get it in North America), watch it.

So, why am I openly discussing the fact that I am Bipolar in a public forum? Why would I confess to the world, to people who may in the future meet me, or even consider hiring me?

It's simple. Many months ago, I wrote that if you were going to hire me based on what I had done in the past, or what school I went to, I most likely wouldn't want to work for your company anyway. The same applies to this illness, this condition I suffer from. If you or your company won't hire me because I suffer from an illness that is beyond my control, that I will have for the rest of my life, why would I work for your firm?

I have had Bipolar for a long time. I can track the behaviours that identify the condition back into my childhood, through my teens, through until today. Normally, the cycling that I go through is benign, punctuated by periods of utter and complete hyperfocus. Most of the time, hyperfocus is a benefit for me -- it is what got me through re-building the GrabPERF interface last year, and helped power me to absorb and write as much on Web performance as I have.

The manic side does have its pitfalls. My mania usually results in buying and spending sprees that have often endangered my financial stability. An example of this is my acquisition of stationery supplies: pens, notebooks and books.

Two weeks ago, I cleaned out my desk and aggregated all of the writing instruments I have purchased over the last 12 months. When I was done, I had filled a 1-gallon Zip-Lock baggie with pens, pencils, highlighters and Sharpies.

In my lifetime, I could never use them all.

I fanatically acquire notebooks. Rhodia, Moleskine, Rite-in-the-Rain, anything. How many of them have I written in? Well, let's just say that my kids will be using my blank notebook collection for many years after I have departed this world.

The spending sprees, the intense desire for the acquisition of things, is my most noticeable manifestation of manic behaviour. In most instances, the manic process starts to wind down after a while. In a few instances, it continues upward. It continues upward until my rational mind dissipates, and I start ranting and raving, making irrational and potentially destructive choices in my life. Choices that have (or could have) affected the course of my life.

I suffer from a small subset of the condition, Bipolar I. What differentiates this group from the standard "manic-depressive" or Bipolar diagnosis is that it is more MANIC-depressive, with a sustained emphasis on the manic episodes. Depressive episodes occur, don't get me wrong; but it is the intense and unstoppable mania that has shaped me more than the depression.

However, this condition is not "curable" in the standard way. It also doesn't manifest any physical symptoms. So in most cases, people just say that I need to get a grip and get on with my life. I am grateful that I have an understanding and (in some cases) forgiving wife who is intent on helping me control and regulate my behaviour. I am also extremely lucky that my current manager understands this part of me, and gives me the freedom I need to ebb and flow with the condition.

To wrap this up (I hate long postings), I leave you with this thought. In his programme, Fry asks his interview subjects the following question (and I paraphrase it here):
If there was a button you could push, a button that cured you of this condition, and gave you a normal mind, would you press it?

Only one of the interview subjects said yes. Everyone else said that despite the pain and suffering that accompanies the condition, there is no way that they would be willing to give back the state of mind that allowed them to achieve what they had achieved.

We are not in our right mind. And I am proud of that.


Sunday, October 22, 2006

Fire? What Fire? The flames, smoke and fire engines are part of a cunning training exercise.

Homeland Stupidity is great at reminding us that the security and intelligence community in the United States is insecure and of questionable intelligence.

The military intelligence unit responsible for spying on Americans had to evacuate its Fort Meade, Md., offices Friday after a six-alarm fire broke out.

A fire broke out shortly after 3 p.m. on the roof of Nathan Hale Hall, at 4554 Llewellyn Ave., just on the other side of the golf course from the National Security Agency headquarters. Construction was underway on the part of the roof that caught fire, according to Lt. Col. James Peterson, director of emergency services at Fort Meade.

A fire is unfortunate, and yes, it occurred in a building with sensitive "intelligence" material. However, isn't this quote from later in the post a bit odd?

Jennifer Downing, a spokesman for the post, would only confirm a fire was burning at 4554 Llewellyn Ave., deep inside the west county Army base. She directed calls to a spokesman with the Army’s Criminal Investigation Division, who did not return calls.

Fort Meade’s fire chief also did not return calls for comment. And later, a public affairs officer told The Capital to file a Freedom of Information Act request. — Annapolis Capital


"Dude, I can see flames coming from your offices."

"I can neither confirm nor deny that my hair and clothes are on fire. Excuse me, I must participate in the screaming in pain and running madly away from the fire exercise."

Saturday, October 21, 2006

Kevin Tillman: And just how do you justify Iraq again?

Kevin Tillman, the brother of the late Pat Tillman, has written an essay that will make it hard for anyone to justify Iraq.

How can Rumsfeld, Rice, Cheney, Rove, and Bush argue their way out of this essay?

Remember, the Gettysburg Address took less than 2 minutes to deliver.

Pass it on.

Yet another stationery 12-step program drop-out

Traveling Journal Kit - contents
Originally uploaded by junquegrrl.

Ummm...and I thought I had a problem!

Tuesday, October 17, 2006

Tomorrow: OHIO!

Flying JetBlue into Columbus tomorrow to see a client in the Cincinnati area on Thursday.

Not that anyone ever checks in, but if GrabPERF goes poof, it may be a while before I get to it.

The Office Smells

Either I have lost my mind or the building has become truly evil.

Today, the air in my office is saturated with the smell you can only find in an airplane bathroom.

You all know the smell. It is one of the most unique shared experiences humans can have.

It's evil. And it's everywhere.

Wednesday, October 11, 2006

Web compression: Oh, the irony!

Well, the irony of this is painful.

I went with 1&1 as the hosting location for my personal domains, including

One of the things that I preach there is the use of compression.

Guess what? 1&1 doesn't use Web compression on their servers.


Port80 Software: IIS 6.0 Market Share Increases in Fortune 1000

Port80 Software is reporting that in their survey of Fortune 1000 Web sites, IIS 6.0 has overtaken Apache as the Web server platform of choice. [here]

My two-cents: I respect the Port80 Software team greatly and love their maniacal devotion to ensuring that IIS users actually make use of the HTTP compression and caching that can so greatly improve Web performance.

That said, they are tied to Microsoft and the IIS platform. I would be curious to see if, scratching below the surface, they were able to determine which application platform these companies built their mission-critical Web applications on. I am open-minded and willing to hear that IIS is winning in that area as well. In my mind, it's about Web performance tuning, not what you use to get that performance.

That said, I think a critical Web application survey of these same firms would find that many of these companies rely on JSP servers to run their core business processes.

As well, it would be interesting to see, by Fortune 1000 ranking, which companies are using which server platform.

And...people still use Netscape Enterprise, SunOne, and Domino as production Web servers? YIKES!

Guilty Pleasures: Go Insane

As a teenager growing up in a very small logging town in the BC interior, I had what could be politely termed unusual musical tastes, especially for the mainstream, heavy-metal, hair-banging kids I hung around with.

But when I was alone with my walkman, I listened to the real geniuses of 80s rock: REM, Kate Bush, Talking Heads, and...Lindsey Buckingham.

Lindsey Buckingham?? That guy from Fleetwood Mac?

Want a little aural treat? Listen to Go Insane. I literally wore the oxide off my version of the cassette. Crosses so many different boundaries...and realize that you are pretty much hearing Lindsey Buckingham only. Mick Fleetwood makes a couple appearances, but other than that, it is a one-man show.

Do it. I dare you.

Tuesday, October 10, 2006

Citizens Bank Outage


Originally uploaded by spierzchala.

Some days, your bank needs to get smacked around.

This is one of those times.

What is going on?

AJAX Performance Blog

Ok Web performance gurus, I have been out-cooled by someone I work with. Ryan Breen, VP of Technology at Gomez and overall uber-geek, has managed to register AJAX Performance and has a blog up there that talks all about the freaky twisted goodness of making your AJAX behave.

Ryan knows way more about making apps behave; I just know how to analyze the data that shows that they're broken.

Monday, October 9, 2006

Happy Thanksgiving


Originally uploaded by spierzchala.

To all the folks back home, Happy Thanksgiving!

May your turkey be moist, juicy, and, preferably, smoked.

Saturday, October 7, 2006

This is amazing

Sometimes, you have to be in awe.

USS George H.W. Bush

Today, they christened the Nimitz-class carrier, George H.W. Bush.

Still a few bugs to work out. Seems the navigation system breaks down after it has seen battle, causing it to wander aimlessly, and eventually become lost. It is especially vulnerable to attack by more than one enemy simultaneously, which in some simulations has forced the commander to surrender the vessel.

I also hear that they have started CAD drawings for the Seawolf-class nuclear submarine SSN George W Bush. Not only is it designed to be isolated from and out of contact with the rest of the world for long periods of time, I hear that it will have a new command feature: all fleet orders, battle information, or damage reports are first filtered through the boat's Media Relations Officer before being passed to the commander.

File Under: Humor.

Friday, October 6, 2006

Aren't tracer rounds illegal?

So, after 6 years of controlling and managing my own Web server, I have handed responsibility over to 1 & 1. I wish I could say that there was a really good reason why I've done this, but frankly, it's because I don't need a lot of oooommmmph for my personal domains (they run happily on a low-end Pentium II Celeron), and the price was right.

GrabPERF is still happily hosted by the folks at Technorati, while controls my blog.

In some ways, I am glad that someone else has these headaches now.

Tuesday, October 3, 2006

Performance Improvement From Caching and Compression

This paper is an extension of the work done for another article that highlighted the performance benefits of retrieving uncompressed and compressed objects directly from the origin server. I wanted to add a proxy server into the stream and determine if proxy servers helped improve the performance of object downloads, and by how much.

Using the same series of objects in the original compression article[1], the CURL tests were re-run 3 times:

  1. Directly from the origin server

  2. Through the proxy server, to load the files into cache

  3. Through the proxy server, to avoid retrieving files from the origin.[2]

This series of three tests was repeated twice: once for the uncompressed files, and then for the compressed objects.[3]

As can be seen clearly in the plots below, compression caused web page download times to improve greatly, when the objects were retrieved from the source. However, the performance difference between compressed and uncompressed data all but disappears when retrieving objects from a proxy server on a corporate LAN.



Instead of the linear growth between object size and download time seen in both of the retrieval tests that used the origin server (Source and Proxy Load data), the Proxy Draw data clearly shows the benefits that accrue when a proxy server is added to a network to assist with serving HTTP traffic.

Uncompressed Pages

    Total Time Uncompressed -- No Proxy       0.256
    Total Time Uncompressed -- Proxy Load     0.254
    Total Time Uncompressed -- Proxy Draw     0.110

Compressed Pages

    Total Time Compressed -- No Proxy         0.181
    Total Time Compressed -- Proxy Load       0.140
    Total Time Compressed -- Proxy Draw       0.104

The data above shows just how much of an improvement a local proxy server, explicit caching directives, and compression can add to a Web site. For sites that force a great number of requests to be returned directly to the origin server, compression will be of great help in reducing bandwidth costs and improving performance. However, by allowing pages to be cached in local proxy servers, the difference between compressed and uncompressed pages vanishes.


Compression is a very good start when attempting to optimize performance. The addition of explicit caching messages in server responses, which allow proxy servers to serve cached data to clients on remote local LANs, can improve performance to an even greater extent than compression can. The two should be used together to improve the overall performance of Web sites.
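On Apache, the explicit caching message used in these tests can be emitted with mod_headers. A minimal sketch — the one-hour lifetime matches the max-age used in the test, while the FilesMatch pattern is my assumption:

```apache
# Tell browser and proxy caches they may hold HTML responses for one hour
<FilesMatch "\.html$">
    Header set Cache-Control "max-age=3600"
</FilesMatch>
```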

[1]The test set was made up of the 1952 HTML files located in the top directory of the Linux Documentation Project HTML archive.

[2]All of the pages in these tests announced the following server response header indicating its cacheability:

Cache-Control: max-age=3600

[3]A note on the compressed files: all compression was performed dynamically by mod_gzip for Apache/1.3.27.

Performance Improvement From Compression

How much improvement can you see with compression? The difference in measured download times on a very lightly loaded server indicates that the time to download the Base Page (the initial HTML file) improved by between 1.3 and 1.6 seconds across a very slow connection when compression was used.

Base Page Performance

There is a slightly slower time for the server to respond to a client requesting a compressed page. Measurements show that the median server response time was 0.23 seconds for the uncompressed page and 0.27 seconds for the compressed page. However, most Web server administrators should be willing to accept a 0.04-second increase in response time to achieve a 1.5-second improvement in file transfer time.

First Byte Performance

Web pages are not completely HTML. How do improved HTML (and CSS) download times affect overall performance? The graph below shows that overall download times for the test page were 1 to 1.5 seconds better when the HTML files were compressed.

Total Page Performance

To further emphasize the value of compression, I ran a test on a Web server to see what the average compression ratio would be when requesting a very large number of files. As well, I wanted to determine what the effect on server response time would be when requesting large numbers of compressed files simultaneously. There were 1952 HTML files in the test directory, and I checked the results using CURL across my local LAN.[1]

Large sample of File Requests (1952 HTML Files)

[Table: First Byte, Total Time, Bytes per Page, and Total Bytes for each test run; the values were fused as 123923184716160 and 123923184720735]

Average Compression    0.433    0.438
Median Compression     0.427    0.427

As expected, the First Byte download time was slightly higher with the compressed files than it was with the uncompressed files. But this difference was in milliseconds, and is hardly worth mentioning in terms of on-the-fly compression. It is unlikely that any user, especially dial-up users, would notice this difference in performance.

That the delivered data was transformed to 43% of the original file size should make any Web administrator sit up and notice. The compression ratio for the test files ranged from no compression for files that were less than 300 bytes, to 15% of original file size for two of the Linux SCSI Programming HOWTOs. Compression ratios do not increase in a linear fashion when compared to file size; rather, compression depends heavily on the repetition of content within a file to gain its greatest successes. The SCSI Programming HOWTOs have a great deal of repeated characters, making them ideal candidates for extreme compression.

Smaller files also did not compress as well as larger files, for exactly this reason. Fewer bytes means a lower probability of repeated bytes, resulting in a lower compression ratio.

Average Compression by File Size

50000 and up    0.329    0.331

The data shows that compression works best on files larger than 5000 bytes; after that size, average compression gains are smaller, unless a file has a large number of repeated characters. Some people argue that compressing files below a certain size is a wasteful use of CPU cycles. If you agree with these folks, using the 5000-byte value as a floor for compressing files should be a good starting point. I am of the opposite mindset: I compress everything that comes off my servers because I consider myself an HTTP overclocker, trying to squeeze every last bit of download performance out of the network.
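mod_gzip can enforce such a floor directly. A one-line sketch, assuming mod_gzip is loaded — the directive is real; the value simply mirrors the 5000-byte figure discussed above:

```apache
# Do not compress responses smaller than 5000 bytes
mod_gzip_minimum_file_size 5000
```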


With a few simple commands, and a little bit of configuration, an Apache Web server can be configured to deliver a large amount of content in a compressed format. These benefits are not limited to static pages; dynamic pages generated by PHP and other dynamic content generators can be compressed by using the Apache compression modules. When added to other performance tuning mechanisms and appropriate server-side caching rules, these modules can substantially reduce bandwidth usage at a very low cost.

[1] The files were the top level HTML files from the Linux Documentation Project. They were installed on an Apache 1.3.27 server running mod_gzip and an Apache 2.0.44 server using mod_deflate. Minimum file size was 80 bytes and maximum file size was 99419 bytes.

[2] mod_deflate for Apache/2.0.44 and earlier comes with the compression ratio set for Best Speed, not Best Compression. This configuration can be modified using the tips found here; and starting with Apache/2.0.45, there will be a configuration directive that will allow admins to configure the compression ratio that they want.

In this example, the compression ratio was set to Level 6.

[3] mod_deflate does not have a lower bound for file size, so it attempts to compress files that are too small to benefit from compression. This results in files smaller than approximately 120 bytes becoming larger when processed by mod_deflate.

Baseline Testing With cURL

cURL is an application that can be used to retrieve any Internet file that uses the standard URL format — http://, ftp://, gopher://, etc. Its power and flexibility can be added to applications by using the libcurl library, whose API can be accessed easily using most of the commonly used scripting and programming languages.

So, how does cURL differ from some of the other command-line URL retrieval tools such as WGET? Both do very similar things, and can be coaxed to retrieve large lists of files or even mirror entire Web sites. In fact, for the automated retrieval of single files from the Internet for storage on local filesystems — such as downloading source files onto servers for building applications — WGET's syntax is the simplest to use.

However, for simple baseline testing, WGET lacks cURL's ability to produce timing results that can be written to an output file in a user-configurable format. cURL gathers a large amount of data about a transfer that can then be used for analysis or logging purposes. This makes it a step ahead of WGET for baseline testing.

cURL Installation

For the purposes of our testing, we have used cURL 7.10.5-pre2 as it adds support for downloading and interpreting GZIP-encoded content from Web servers. Because it is a pre-release version, it is currently only available as source for compiling. The compilation was smooth, and straight-forward.

$ ./configure --with-ssl --with-zlib
$ make
$ make test

[...runs about 120 checks to ensure the application and library will work as expected..]

# make install

The application installed in /usr/local/bin on my RedHat 9.0 laptop.

Testing cURL is straight-forward as well.

$ curl

[...many lines of streaming HTML omitted...]

Variations on this standard theme include:

  • Send output to a file instead of STDOUT

  • 	$ curl -o ~/slashdot.txt

  • Request compressed content if the Web server supports it

  • 	$ curl --compressed

  • Provide total byte count for downloaded HTML

  • 	$ curl -w %{size_download}

    Baseline Testing with cURL

    With the application installed, you can now begin to design a baseline test. This methodology is NOT a replacement for true load testing, but rather a method for giving small and medium-sized businesses a sense of how well their server will perform before it is deployed into production, as well as providing a baseline for future tests. This baseline can then be used as a basis for comparing performance after configuration changes in the server environment, such as caching rule changes or adding solutions that are designed to accelerate Web performance.

    To begin, a list of URLs needs to be drawn up and agreed to as a baseline for the testing. For my purposes, I use the files from the Linux Documentation project, intermingled with a number of images. This provides the test with a variety of file sizes and file types. You could construct your own file-set out of any combination of documents/files/images you wish. However, the file-set should be large — mine runs to 2134 files.

    Once the file-set has been determined, it should be archived so that this same group can be used for future performance tests; burning it to a CD is always a safe bet.

    Next, extract the filenames to a text file so that the configuration file for the tests can be constructed. I have done this for my tests, and have it set up in a generic format so that when I construct the configuration for the next test, I simply have to change/update the URL to reflect the new target.

    The configuration of the rest of the parameters should be added to the configuration file at this point. These are all the same as the command line versions, except for the URL listing format.

  • Listing of test_config.txt

  • -A "Mozilla/4.0 (compatible; cURL 7.10.5-pre2; Linux 2.4.20)"
    --location
    -w @logformat.txt
    -D headers.txt
    -H "Pragma: no-cache"
    -H "Cache-control: no-cache"
    -H "Connection: close"

    [...file listing...]

    In the above example, I have set cURL to:

    • Use a custom User-Agent string

    • Follow any re-direction responses that contain a "Location:" response header

    • Dump the server response headers to headers.txt

    • Circumvent cached responses by sending the two main "no-cache" request headers

    • Close the TCP connection after each object is downloaded, overriding cURL's default use of persistent connections

    • Format the timing and log output using the format that is described in logformat.txt

    Another command-line option that I use a lot is --compressed, which, as of cURL 7.10.5, handles both the deflate and gzip encoding of Web content, including decompression on the fly. This is great for comparing the performance improvements and bandwidth savings from compression solutions against a baseline test without compression. Network administrators may also be interested in testing the improvement that they get using proxy servers and client-side caches by inserting --proxy <proxy[:port]> into the configuration, removing the "no-cache" headers, and testing a list of popular URLs through their proxy servers.

    The logformat.txt file describes the variables that I find of interest and that I want to use for my analysis.

  • Listing of logformat.txt

  • \n
    %{url_effective}\t%{http_code}\t%{content_type}\t%{time_total}\t%{time_lookup}\t%{time_connect}\t%{time_starttransfer}\t%{size_download}\n

    These variables are defined as:
  • url_effective: URL used to make the final request, especially when following re-directions

  • http_code: HTTP code returned by the server when delivering the final HTML page requested

  • content_type: MIME type returned in the final HTML request

  • time_total: Total time for the transfer to complete

  • time_lookup: Time from start of transfer until DNS Lookup complete

  • time_connect: Time from start of transfer until TCP connection complete

  • time_starttransfer: Time from start of transfer until data begins to be returned from the server

  • size_download: Total number of bytes transferred, excluding headers

    As time_connect and time_starttransfer are cumulative from the beginning of the transfer, you have to do some math to come up with the actual values.

    TCP Connection Time = time_connect - time_lookup
    Time First Byte = time_starttransfer - time_connect
    Redirection Time = time_total - time_starttransfer
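That arithmetic can be sketched with awk over a single record. The record below is hypothetical (made-up values, example.com URL), and assumes the remaining logformat.txt fields continue in the order the variables are defined above:

```shell
# One hypothetical cURL -w record; tab-separated fields in the order:
# url, http_code, content_type, time_total, time_lookup, time_connect,
# time_starttransfer. Values are invented for illustration only.
record="$(printf 'http://example.com/\t200\ttext/html\t0.500\t0.020\t0.070\t0.210')"

# time_connect and time_starttransfer are cumulative, so subtract the
# previous phase to recover per-phase durations.
printf '%s\n' "$record" | awk -F'\t' '{
    printf "DNS Lookup:          %.3f\n", $5
    printf "TCP Connection Time: %.3f\n", $6 - $5
    printf "Time First Byte:     %.3f\n", $7 - $6
    printf "Redirection Time:    %.3f\n", $4 - $7
}'
```

The same awk body can be dropped into the post-processing step to convert an entire results file in one pass.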

    If you are familiar with cURL, you may wonder why I have chosen not to write the output to a file using the -o <file> option. It appears that this option only records the output for the first file requested, even in a large list of files. I prefer to use the following command to start the test and then post-process the results using grep.

    $ curl -K test_config.txt >> output_raw_1.txt

    [...lines and lines of output...]

    $ grep "^http" output_raw_1.txt >> output_processed_1.txt

    And voila! You now have a tab delimited file you can drop into your favorite spreadsheet program to generate the necessary statistics.

    mod_gzip Compile Instructions

    The last time I attempted to compile mod_gzip into Apache, I found that the instructions for doing so were not documented clearly on the project page. After a couple of failed attempts, I finally found the instructions buried at the end of the ChangeLog document.

    I present the instructions here to preserve your sanity.

    Before you can actually get mod_gzip to work, you have to uncomment it in the httpd.conf file module list (Apache 1.3.x) or add it to the module list (Apache 2.0.x).

    Now there are two ways to build mod_gzip: statically compiled into Apache, or as a DSO file for mod_so. If you want to compile it statically into Apache, just copy the source into a subdirectory named 'gzip' under the Apache src/modules directory. You can then activate it via a parameter of the configure script.

     ./configure --activate-module=src/modules/gzip/mod_gzip.a
    make
    make install

    This will build a new Apache with mod_gzip statically built in.

    The DSO-Version is much easier to build.

     make APXS=/path/to/apxs
    make install APXS=/path/to/apxs
    /path/to/apachectl graceful

    The apxs script is normally located inside the bin directory of Apache.

    Hacking mod_deflate for Apache 2.0.44 and lower

    NOTE: This hack is only relevant to Apache 2.0.44 or lower. Starting with Apache 2.0.45, the server contains the DeflateCompressionLevel directive, which allows for user-configured compression levels in the httpd.conf file.

    One of the complaints leveled against mod_deflate for Apache 2.0.44 and below has been the lower compression ratio that it produces when compared to mod_gzip for Apache 1.3.x and 2.0.x. This issue has been traced to a decision made by the author of mod_deflate to focus on fast compression versus maximum compression.

    In discussions with the author of mod_deflate and the maintainer of mod_gzip, the location of the issue was quickly found. The level of compression can be easily modified by changing the ZLIB compression setting in mod_deflate.c from Z_BEST_SPEED (equivalent to "zip -1") to Z_BEST_COMPRESSION (equivalent to "zip -9"). These defaults can also be replaced with a numeric value between 1 and 9. A "hacked" version of the mod_deflate.c code is available here. In this file, the compression level has been set to 6, which is regarded as a good balance between speed and compression (and also happens to be ZLIB's default ratio). Some other variations are highlighted below.

    Original Code

    zRC = deflateInit2(&ctx->stream, Z_BEST_SPEED, Z_DEFLATED, c->windowSize, c->memlevel, Z_DEFAULT_STRATEGY);

    Hacked Code

    1. zRC = deflateInit2(&ctx->stream, Z_BEST_COMPRESSION, Z_DEFLATED, c->windowSize, c->memlevel, Z_DEFAULT_STRATEGY);

    2. zRC = deflateInit2(&ctx->stream, 6, Z_DEFLATED, c->windowSize, c->memlevel, Z_DEFAULT_STRATEGY);

    3. zRC = deflateInit2(&ctx->stream, 9, Z_DEFLATED, c->windowSize, c->memlevel, Z_DEFAULT_STRATEGY);
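    The practical difference between these level settings can be seen with zlib directly (a Python sketch, independent of Apache; exact sizes depend on the payload):

```python
import zlib

# A repetitive, HTML-like payload; text like this compresses very well.
html = b"<html><body>" + b"<p>Hello, compression!</p>" * 500 + b"</body></html>"

def deflate_size(data, level):
    """Size of `data` after zlib compression at the given level (1-9)."""
    return len(zlib.compress(data, level))

fast = deflate_size(html, 1)      # Z_BEST_SPEED: the old mod_deflate default
default = deflate_size(html, 6)   # zlib's default: a speed/size compromise
best = deflate_size(html, 9)      # Z_BEST_COMPRESSION: mod_gzip-like output
print(fast, default, best)
```

    Higher levels spend more CPU per request to shave off bytes; level 6 is the compromise the patched file above uses.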

    A change has been made to mod_deflate in Apache 2.0.45 that adds a directive named DeflateCompressionLevel to the mod_deflate options. This will accept a numeric value between 1 (Best Speed) and 9 (Best Compression), with the default set at 6.
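    With Apache 2.0.45 or later, no source changes are needed; the level is set in httpd.conf next to the filter directives (a sketch; 6, shown here, is the default):

```apache
# httpd.conf (Apache 2.0.45+): pick a zlib level from 1 (fastest) to 9 (smallest)
SetOutputFilter DEFLATE
DeflateCompressionLevel 6
```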

    Compressing Web Output Using mod_deflate and Apache 2.0.x

    In a previous paper, I described the use of mod_gzip to dynamically compress the output from an Apache server. With the growing use of the Apache 2.0.x family of Web servers, the question arises of how to perform a similar GZIP-encoding function within this server. The developers of Apache 2.0.x have included a module in the server's codebase to perform just this task.

    mod_deflate is included in the Apache 2.0.x source package, and compiling it in is a simple matter of adding it to the configure command.

    	./configure --enable-modules=all --enable-mods-shared=all --enable-deflate

    When the server is made and installed, GZIP-encoding of documents can be enabled in one of two ways: explicit exclusion of files by extension, or explicit inclusion of files by MIME type. Both methods are specified in the httpd.conf file.

    Explicit Exclusion

    SetOutputFilter DEFLATE
    DeflateFilterNote ratio
    SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
    SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
    SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary

    Explicit Inclusion

    DeflateFilterNote ratio
    AddOutputFilterByType DEFLATE text/*
    AddOutputFilterByType DEFLATE application/ms* application/vnd* application/postscript

    Both methods enable the automatic GZIP-encoding of all MIME-types, except image and PDF files, as they leave the server. Image files and PDF files are excluded because they are already in a highly compressed format. In fact, PDFs become unreadable by Adobe's Acrobat Reader if they are further compressed by mod_deflate or mod_gzip.

    On the server used for testing mod_deflate for this article, no Windows executables or compressed files are served to visitors. However, for safety's sake, please ensure that compressed files and binaries are not GZIP-encoded by your Web server application.

    For the file-types indicated in the exclude statements, the server is told explicitly not to send the Vary header. The Vary header indicates to any proxy or cache server which particular condition(s) will cause this response to Vary from other responses to the same request.

    If a client sends a request which does not include the Accept-Encoding: gzip header, then the item which is stored in the cache cannot be returned to the requesting client if the Accept-Encoding headers do not match. The request must then be passed directly to the origin server to obtain a non-encoded version. In effect, proxy servers may store 2 or more copies of the same file, depending on the client request conditions which cause the server response to Vary.

    Removing the Vary response requirement for objects not handled means that if the objects do not vary due to any other directives on the server (browser type, for example), then the cached object can be served up without any additional requests until the Time-To-Live (TTL) of the cached object has expired.
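    The caching behavior described above can be sketched as a cache-key computation (a Python illustration; real proxies also normalize header values and special-case "Vary: *"):

```python
def cache_key(url, request_headers, vary_header=None):
    """Build a proxy cache key: the URL plus the values of every request
    header named in the response's Vary header."""
    if vary_header is None:
        return (url,)
    varying = tuple(
        request_headers.get(name.strip().lower(), "")
        for name in vary_header.split(",")
    )
    return (url,) + varying

# With "Vary: Accept-Encoding", a gzip-capable and a non-gzip client
# map to different cache entries ...
k1 = cache_key("/page.html", {"accept-encoding": "gzip"}, "Accept-Encoding")
k2 = cache_key("/page.html", {}, "Accept-Encoding")
# ... while an image served without Vary is shared by every client.
k3 = cache_key("/logo.gif", {"accept-encoding": "gzip"})
k4 = cache_key("/logo.gif", {})
print(k1 != k2, k3 == k4)
```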

    In examining the performance of mod_deflate against mod_gzip, the one item that distinguished mod_deflate from mod_gzip in versions of Apache prior to 2.0.45 was the amount of compression that occurred. The examples below demonstrate that the compression algorithm for mod_gzip produces between 4-6% more compression than mod_deflate for the same file.[1]

    Table 1 — /compress/homepage2.html

    Compression                    Size           Compressed size (% of original)
    No compression                 56380 bytes    n/a
    Apache 1.3.x / mod_gzip        16333 bytes    29%
    Apache 2.0.x / mod_deflate     19898 bytes    35%

    Table 2 — /documents/

    Compression                    Size           Compressed size (% of original)
    No compression                 63451 bytes    n/a
    Apache 1.3.x / mod_gzip        19758 bytes    31%
    Apache 2.0.x / mod_deflate     23407 bytes    37%

    Attempts to increase the compression ratio of mod_deflate in Apache 2.0.44 and lower using the directives provided for this module produced no further decrease in transferred file size. A comment from one of the authors of the mod_deflate module stated that the module was written specifically to ensure that server performance was not degraded by using this compression method. The module was, by default, performing the fastest compression possible, rather than a mid-range compromise between speed and final file size.

    Starting with Apache 2.0.45, the compression level of mod_deflate is configurable using the DeflateCompressionLevel directive. This directive accepts values between 1 (fastest compression speed; lowest compression ratio) and 9 (slowest compression speed; highest compression ratio), with the default value being 6. This simple change makes the compression in mod_deflate comparable to mod_gzip out of the box.

    Using mod_deflate for Apache 2.0.x is a quick and effective way to decrease the size of the files that are sent to clients. Anything that can produce between 50% and 80% in bandwidth savings with so little effort should definitely be considered for any and all Apache 2.0.x deployments wishing to use the default Apache codebase.

    [1] A note on the compression in mod_deflate for Apache 2.0.44 and lower: The level of compression can be modified by changing the ZLIB compression setting in mod_deflate.c from Z_BEST_SPEED (equivalent to "gzip -1") to Z_BEST_COMPRESSION (equivalent to "gzip -9"). These defaults can also be replaced with a numeric value between 1 and 9.

    More info on hacking mod_deflate for Apache 2.0.44 and lower can be found here.

    Compressing Web Output Using mod_gzip for Apache 1.3.x and 2.0.x

    Web page compression is not a new technology, but it has just recently gained higher recognition in the minds of IT administrators and managers because of the rapid ROI it generates. Compression extensions exist for most of the major Web server platforms, but in this article I will focus on the Apache and mod_gzip solution.

    The idea behind GZIP-encoding documents is very straightforward. Take a file that is to be transmitted to a Web client, and send a compressed version of the data, rather than the raw file as it exists on the filesystem. Depending on the file, the compressed version can run anywhere from 50% down to 20% of the original file size.

    In Apache, this can be achieved using a couple of different methods. One is Content Negotiation, which requires that two separate sets of HTML files be generated: one for clients that can handle GZIP-encoding, and one for those that can't. The problem with this approach should be readily apparent: it makes no provision for GZIP-encoding dynamically-generated pages.

    The more graceful solution for administrators who want to add GZIP-encoding to Apache is the use of mod_gzip. I consider it one of the overlooked gems for designing a high-performance Web server. Using this module, configured file types -- based on file extension or MIME type -- will be compressed using GZIP-encoding after they have been processed by all of Apache's other modules, and before they are sent to the client. The compressed data that is generated reduces the number of bytes transferred to the client, without any loss in the structure or content of the original, uncompressed document.

    mod_gzip can be compiled into Apache as either a static or dynamic module; I have chosen to compile it as a dynamic module in my own server. The advantage of using mod_gzip is that this method requires that nothing be done on the client side to make it work. All current browsers -- Mozilla, Opera, and even Internet Explorer -- understand and can process GZIP-encoded text content.

    On the server side, all the server or site administrator has to do is compile the module, edit the appropriate configuration directives that were added to the httpd.conf file, enable the module in the httpd.conf file, and restart the server. In less than 10 minutes, you can be serving static and dynamic content using GZIP-encoding without the need to maintain multiple codebases for clients that can or cannot accept GZIP-encoded documents.

    When a request is received from a client, Apache determines if mod_gzip should be invoked by noting if the "Accept-Encoding: gzip" HTTP request header has been sent by the client. If the client sends the header, mod_gzip will automatically compress the output of all configured file types when sending them to the client.
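    That request-time decision can be sketched in a few lines (a Python illustration of the logic, not mod_gzip's actual C code):

```python
import gzip

def negotiate(body, accept_encoding=""):
    """Compress the response only when the client sent 'gzip' in its
    Accept-Encoding request header, mirroring mod_gzip's check."""
    headers = {"Content-Type": "text/html"}
    if "gzip" in accept_encoding.lower():
        body = gzip.compress(body)
        headers["Content-Encoding"] = "gzip"
    headers["Content-Length"] = str(len(body))
    return headers, body

page = b"<html><body>" + b"hello " * 1000 + b"</body></html>"
h1, b1 = negotiate(page, "gzip, deflate")   # client advertises gzip support
h2, b2 = negotiate(page)                    # no Accept-Encoding header sent
print(len(b1), len(b2))
```

    The client that did not advertise gzip support receives the raw, uncompressed document.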

    This client header announces to Apache that the client will understand files that have been GZIP-encoded. mod_gzip then processes the outgoing content and includes the following server response headers.

    	Content-Type: text/html
    Content-Encoding: gzip

    These server response headers announce that the content returned from the server is GZIP-encoded, but that when the content is expanded by the client application, it should be treated as a standard HTML file. Not only is this successful for static HTML files, but this can be applied to pages that contain dynamic elements, such as those produced by Server-Side Includes (SSI), PHP, and other dynamic page generation methods. You can also use it to compress your Cascading Stylesheets (CSS) and plain text files. As well, a whole range of application file types can be compressed and sent to clients. My httpd.conf file sets the following configuration for the file types handled by mod_gzip:

    	mod_gzip_item_include mime ^text/.*
    mod_gzip_item_include mime ^application/postscript$
    mod_gzip_item_include mime ^application/ms.*$
    mod_gzip_item_include mime ^application/vnd.*$
    mod_gzip_item_exclude mime ^application/x-javascript$
    mod_gzip_item_exclude mime ^image/.*$

    This allows Microsoft Office and Postscript files to be GZIP-encoded, while not affecting PDF files. PDF files should not be GZIP-encoded, as they are already compressed in their native format, and compressing them leads to issues when attempting to display the files in Adobe Acrobat Reader.[1] For the paranoid system administrator, you may want to explicitly exclude PDF files.

    	mod_gzip_item_exclude mime ^application/pdf$

    Another side-note is that nothing needs to be done to allow the GZIP-encoding of OpenOffice (and presumably, StarOffice) documents. Their MIME-type is already set to text/plain, allowing them to be covered by one of the default rules.

    How beneficial is sending GZIP-encoded content? In some simple tests I ran on my Web server using WGET, GZIP-encoded documents showed that even on a small Web server, there is the potential to produce a substantial savings in bandwidth usage.

    	Original size: 3122 bytes; GZIP-encoded: 1578 bytes
    	Original size: 56279 bytes; GZIP-encoded: 16286 bytes

    Server administrators may be concerned that mod_gzip will place a heavy burden on their systems as files are compressed on the fly. I argue against that, pointing out that this does not seem to concern the administrators of Slashdot, one of the busiest Web servers on the Internet, who use mod_gzip in their very high-traffic environment.

    The mod_gzip project page for Apache 1.3.x is located at SourceForge. The Apache 2.0.x version is available from here.

    [1] From
    "Both Internet Explorer 5.5 and Internet Explorer 6.0 have a bug with decompression that affects some users. This bug is documented in: the Microsoft knowledge Base articles, Q312496 is for IE 6.0 … , the Q313712 is for IE 5.5. Basically Internet Explorer doesn't decompress the response before it sends it to plug-ins like Adobe Photoshop."

    Compressing PHP Output

    A little-used or discussed feature of PHP is the ability to compress output from the scripts using GZIP for more efficient transfer to requesting clients. By automatically detecting the ability of the requesting clients to accept and interpret GZIP encoded HTML, PHP4 can decrease the size of files transferred to the client by 60% to 80%.

    The information given here is known to work on systems running Red Hat 8.0, Apache/1.3.27, Apache/2.0.44 and PHP/4.3.1.

    [Note: Although not re-tested since this article was originally written, compression is still present in the PHP 5.x releases and can be used to effectively compress content on shared or hosted servers where compression is not enabled within the Web server.]

    Configuring PHP

    The configuration needed to make this work is simple. Check your installed Red Hat RPMS for the following two packages:

    1. zlib

    2. zlib-devel

    For those not familiar with zlib, it is a highly efficient, open-source compression library, which PHP uses to compress the output sent to the client.

    Compile PHP4 with your favourite ./configure statement. I use the following:

    For Apache 1.3.x:

    ./configure --without-mysql --with-apxs=/usr/local/apache/bin/apxs --with-zlib

    For Apache 2.0.x:

    ./configure --without-mysql --with-apxs2=/usr/local/apache2/bin/apxs --with-zlib

    After doing make && make install, PHP4 should be ready to go as a dynamic Apache module. Now, you have to make some modifications to the php.ini file. This is usually found in /usr/local/lib, but if it's not there, don't panic; you will find some php.ini* files in the directory where you unpacked PHP4. Simply copy one of those to /usr/local/lib and rename it php.ini.

    Within php.ini, some modifications need to be made to switch on the GZIP compression detection and encoding. There are two methods to do this.

    Method 1:

    output_buffering = On
    output_handler = ob_gzhandler
    zlib.output_compression = Off

    Method 2:

    output_buffering = Off
    output_handler =
    zlib.output_compression = On

    Once this is done, PHP4 will automatically detect if the requesting client accepts GZIP encoding, and will then buffer the output through the gzhandler function to dynamically compress the data sent to the client.

    The ob_gzhandler

    The most important component of this entire process is placing the ob_gzhandler PHP command on the page itself. It needs to be placed in the code at the top of the page, above the HTML tag, in order to work. Adding the following line completes the process:
    <?php ob_start("ob_gzhandler"); ?>

    In WordPress installs, this becomes the first line in the header.php file. But be careful to check that it's working properly. If the Web application has the compression function built into it and you add the ob_gzhandler function, a funky error message will appear at the top of the page telling you that you can't invoke compression twice.

    Web servers with native compression are smarter than that - they realize that the file is already compressed and don't run it through the compression algorithm again.

    Once this is in place, you will be able to verify the decrease in size using any HTTP browser capture tool (Firebug, Safari Web Inspector, Fiddler2, etc.)


    The winning situation here is that for an expenditure of $0 (except your time) and a tiny bit more server overhead (you're probably still using fewer resources than if you were running ASP on IIS!), you will now be sending much smaller, dynamically generated HTML documents to your clients, reducing your bandwidth usage and the amount of time it takes to download the files.

    How much of a size reduction is achieved? Well, I ran a test on my Web server, using WGET to retrieve the file. The configuration and results of the test are listed below.

    Method 0: No Compression
    File Size: 9415 bytes
    Method 1: ob_gzhandler
    wget --header="Accept-Encoding: gzip,*"
    File Size: 3529 bytes
    Method 2: zlib.output_compression
    wget --header="Accept-Encoding: gzip,*"
    File Size: 3584 bytes

    You will have to experiment to find the method that gives the most efficient balance between file size and server overhead and processing time.

    A 62% reduction in transferred file size without affecting the quality of the data sent to the client is a pretty good return for 10 minutes of work. I recommend including this procedure in all of your future PHP4 builds.
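    For reference, the headline figure is simple arithmetic on the Method 1 numbers above:

```python
original, compressed = 9415, 3529        # byte counts from the wget test above
reduction = 100 * (original - compressed) / original
print(f"{reduction:.1f}% reduction")     # roughly 62.5%
```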

    Using Client-Side Cache Solutions And Server-Side Caching Configurations To Improve Internet Performance

    In today's highly competitive e-commerce marketplace, the performance of a web-site plays a key role in attracting new clients and retaining current ones. New technologies are being developed to help speed up the delivery of content to customers while still allowing companies to get their message across using rich, graphical content. However, in the rush to find new technologies to improve internet performance, one low-cost alternative is often overlooked: client-side content caching.

    This process is often overlooked or dismissed by web administrators and content providers seeking to improve performance. The major concern that is expressed by these groups is that they need to ensure that clients always get the freshest content possible. In their eyes, allowing their content to be cached is perceived as losing control of their message.

    This bias against caching is, in most cases, unjustified. By understanding how server software can be used to distinguish unique caching policies for each type of content being delivered, client-side performance gains can be achieved with no new hardware or software being added to an existing web-site system.


    When a client requests web content, this information is either retrieved directly from the origin server, from a browser cache on a local hard drive, or from a nearby cache server[1]. Where and for how long the data is stored depends on how the data is tagged when it leaves the web server. When discussing cached content, there are three states content can be in: non-cacheable, fresh, or stale.

    The non-cacheable state indicates a file that should never be cached by any device that receives it and that every request for that file must be retrieved from the origin server. This places an additional load on both client and server bandwidth, as well as on the server which responds to these additional requests. In many cases, such as database queries, news content, and personalized content marked by unique cookies, the content provider may explicitly not want data to be cached to prevent stale data from being received by the client.

    A fresh file is one that has a clearly defined future expiration date and/or does not indicate that it is non-cacheable. A file with a defined lifespan is only valid for a set number of seconds after it is downloaded, or until the explicitly stated expiry date and time is reached. At that point, the file is considered stale and must be re-verified (preferred as it requires less bandwidth) or re-loaded from the origin server.[2]

    If a file does not explicitly indicate it is non-cacheable, but also does not indicate an explicit expiry period or time, the cache server assigns the file an expiry time defined in the cache server's configuration. When that deadline is reached and the cache server receives a request for that file, the server checks with the origin server to see whether the content has changed. If the file is unchanged, the counter is reset and the existing content is served to the client; if the file has changed, the new content is downloaded, cached according to its settings, and then served to the client.

    A stale file is a file in cache that is no longer valid. A client has requested information that had previously been stored in the cache, and the control data for the object indicates that it has expired or is too stale to be considered for serving. The browser or cache server must now either re-validate the file with the origin server or retrieve it again before the data can be served to the client.
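    The fresh/stale distinction can be sketched as a small classifier (a Python illustration; non-cacheable responses never enter the cache, and `default_ttl` stands in for the cache server's configured fallback, not a standard name):

```python
def classify(now, stored_at, max_age=None, expires=None, default_ttl=300):
    """Classify a cached object as 'fresh' or 'stale' (times in seconds).

    An explicit max-age or Expires wins; otherwise the cache server's
    own configured default TTL applies, as described above."""
    if max_age is not None:
        expiry = stored_at + max_age
    elif expires is not None:
        expiry = expires
    else:
        expiry = stored_at + default_ttl
    return "fresh" if now < expiry else "stale"

t0 = 0
print(classify(t0 + 10, t0, max_age=60))    # within max-age
print(classify(t0 + 90, t0, max_age=60))    # past max-age
print(classify(t0 + 400, t0))               # fell back to the default TTL
```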

    The state of an item being considered for caching is determined using one or more of five HTTP header messages[3]: two server messages, one client message, and two that can be sent by either the client or the server.[4] These headers are: Pragma: no-cache; Cache-Control; Expires; Last-Modified; and If-Modified-Since. Each identifies a particular condition that the proxy server must adhere to when deciding whether the content is fresh enough to be served to the requesting client.

    Pragma: no-cache is an HTTP/1.0 client and server header that informs caching servers not to serve the requested content to the client from their cache (client-side) and not to cache the marked information if they receive it (server-side). This header has been deprecated in favor of the newer HTTP/1.1 Cache-Control header, but is still used by many browsers and servers. Its continued use is necessary to ensure backwards-compatibility, as it cannot be guaranteed that all devices and servers will understand the HTTP/1.1 headers.

    Cache-Control is a family of HTTP/1.1 client and server messages that can be used to clearly define not only if an item can be cached, but also for how long and how it should be validated upon expiry. This more precise family of messages replaces the older Pragma: no-cache message. There are a large number of options for this header field, but four are especially relevant to this discussion.[5]

    Cache-Control: private/public

    This setting indicates what type of devices can cache the data. The private setting allows the marked items to be cached by the requesting client, but not by any cache servers encountered en-route. The public setting indicates that any device can cache this content. By default, public is assumed unless private is explicitly stated.

    Cache-Control: no-cache

    This is the HTTP/1.1 equivalent of Pragma: no-cache and can be used by clients to force an end-to-end retrieval of the requested files and by servers to prevent items from being cached.

    Cache-Control: max-age=x

    This setting allows the indicated files to be cached either by the client or the cache server for x seconds.

    Cache-Control: must-revalidate

    This setting informs the cache server that if the item in cache is stale, it must be re-validated before it can be served to the client.

    A number of these settings can be combined to form a larger Cache-Control header message. For example, an administrator may want to define how long the content is valid for, and then indicate that, at the end of that period, all new requests must be revalidated with the origin server. This can be accomplished by creating a multi-field Cache-Control header message like the one below.
    Cache-Control: max-age=3600, must-revalidate
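    Splitting such a multi-field header back into its directives is straightforward (a minimal Python sketch; a production parser would also handle quoted values):

```python
def parse_cache_control(value):
    """Break a Cache-Control header into a {directive: value-or-None} map."""
    directives = {}
    for part in value.split(","):
        name, _, val = part.strip().partition("=")
        directives[name.lower()] = val or None
    return directives

cc = parse_cache_control("max-age=3600, must-revalidate")
print(cc)
```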

    Expires sets an explicit expiry date and time for the requested file. This is usually in the future, but a server administrator can ensure that an object is always re-validated by setting an expiry date that is in the past; an example of this is shown below.

    Last-Modified can indicate one of several conditions, but the most common is the last time the state of the requested object was updated. The cache server can use this to confirm an object has not changed since it was inserted into the cache, allowing for re-validation, versus completely re-loading, of objects in cache.

    If-Modified-Since is a client-side header message that is sent either by a browser or a cache server and is set by the Last-Modified value of the object in cache. When the origin server has not set an explicit cache expiry value and the cache server has had to set an expiry time on the object using its own internal configuration, the Last-Modified value is used to confirm whether content has changed on the origin server. If the Last-Modified value on an object held by the origin server is newer than that held by the client, the entire file is re-loaded. If these values are the same, the origin server returns a 304 Not Modified HTTP message and the cache object is then served to the client and has its cache-defined counter reset.
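    The origin server's side of that exchange reduces to a single comparison (a Python sketch with times as plain epoch seconds; real servers compare parsed HTTP dates):

```python
def conditional_get(last_modified, if_modified_since=None):
    """Return (status, body) for a GET: 304 with no body when the client's
    cached copy is still current, otherwise 200 and the full document."""
    if if_modified_since is not None and last_modified <= if_modified_since:
        return 304, None
    return 200, b"<html>full document</html>"

t = 975_888_776                                   # some past modification time
print(conditional_get(t, if_modified_since=t))    # client copy still current
print(conditional_get(t + 3600, if_modified_since=t))  # changed since cached
print(conditional_get(t))                         # unconditional request
```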

    Using an application trace program, clients are able to capture the data that flows out of and in to the browser application. The following two examples show how a server can use header messages to mark content as non-cacheable, or set very specific caching values.

    Server Messages for a Non-Cacheable Object

    HTTP/1.0 200 OK
    Content-Type: text/html
    Content-Length: 19662
    Pragma: no-cache
    Cache-Control: no-cache
    Server: Roxen/2.1.185
    Accept-Ranges: bytes
    Expires: Wed, 03 Jan 2001 00:18:55 GMT

    In this example, the server returns three indications that the content is non-cacheable. The first two are the Pragma: no-cache and Cache-Control: no-cache statements. With most client and cache server configurations, one of these headers on its own should be enough to prevent the requested object from being stored in cache. The web administrator in this example has chosen to ensure that any device, regardless of the version of HTTP used, will clearly understand that this object is non-cacheable.

    However, in order to guarantee that this item is never stored in or served from cache, the Expires statement is set to a date and time that is in the past.[6] These three statements should be enough to guarantee that no cache serves this file without performing an end-to-end transfer of this object from the origin server with each request.

    Specific Caching Information in Server Messages

    HTTP/1.1 200 OK
    Date: Tue, 13 Feb 2001 14:50:31 GMT
    Server: Apache/1.3.12
    Cache-Control: max-age=43200
    Expires: Wed, 14 Feb 2001 02:50:31 GMT
    Last-Modified: Sun, 03 Dec 2000 23:52:56 GMT
    ETag: "1cbf3-dfd-3a2adcd8"
    Accept-Ranges: bytes
    Content-Length: 3581
    Connection: close
    Content-Type: text/html

    In the example above, the server returns a header message Cache-Control: max-age=43200. This immediately informs the cache that the object can be stored in cache for up to 12 hours. This 12-hour time limit is further guaranteed by the Expires header, which is set to a date value that is exactly 12 hours ahead of the value set in the Date header message.[7]

    These two examples present two variations of web server responses containing information that makes the requested content either completely non-cacheable or cacheable only for a very specific period of time.

    How does caching work?

    Content is cached by devices on the internet, and these devices then serve this stored content when the same file is requested by the original client or another client that uses that same cache. This rather simplistic description covers a number of different cache scenarios, but two will be the focus of this paper: browser caching and caching servers.[8]

    For the remainder of this paper, the caching environment that will be discussed is one involving a network with a number of clients using a single cache server, the general internet, and a server network with a series of web servers on it.

    Browser Caching

    Browser caching is what most people are familiar with, as all web browsers perform this behavior by default. With this type of caching, the web browser stores a copy of the requested files in a cache directory on the client machine in order to help speed up page downloads. The performance increase comes from serving items already in the cache from the local hard drive, rather than retrieving the same files again from the web server across a much slower connection.

    To ensure that old content is not being served to the client, the browser checks its cache first to see if an item is in cache. If the item is in cache, the browser then confirms the state of the object in cache with the origin server to see if the item has been modified at the source since the browser last downloaded it. If the object has not been modified, the origin server sends a 304 Not Modified message, and the item is served from the local hard drive and not across the much slower internet.

    First Request for a file

    GET /file.html HTTP/1.1
    Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/, application/, application/msword, application/x-comet, */*
    Accept-Language: en-us
    Accept-Encoding: gzip, deflate
    User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
    Connection: Keep-Alive

    HTTP/1.1 200 OK
    Date: Tue, 13 Feb 2001 20:00:22 GMT
    Server: Apache
    Cache-Control: max-age=604800
    Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
    ETag: "1df-28f1-3a2520a6"
    Accept-Ranges: bytes
    Content-Length: 10481
    Keep-Alive: timeout=5, max=100
    Connection: Keep-Alive
    Content-Type: text/html

    In the above example[9], the file is retrieved from the server for the first time; the server sends a 200 OK response and then returns the requested file. The Cache-Control, Last-Modified, and ETag headers carry the cache control data sent to the client by the server.

    Second Request for a file

    GET /file.html HTTP/1.1
    Accept: */*
    Accept-Language: en-us
    Accept-Encoding: gzip, deflate
    If-Modified-Since: Wed, 29 Nov 2000 15:28:38 GMT
    If-None-Match: "1df-28f1-3a2520a6"
    User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
    Connection: Keep-Alive

    HTTP/1.1 304 Not Modified
    Date: Tue, 13 Feb 2001 20:01:07 GMT
    Server: Apache
    Connection: Keep-Alive
    Keep-Alive: timeout=5, max=100
    ETag: "1df-28f1-3a2520a6"
    Cache-Control: max-age=604800

    The second request sees the client ask for the same object 40 seconds later, but with two additions. The client asks if the file has been modified since it was last requested (If-Modified-Since). In case the date in that field cannot be used by the origin server to confirm the state of the requested object, the client also asks whether the object's ETag tracking code has changed, using the If-None-Match header message.[10] The origin server responds by verifying that the object has not been modified, confirming this by returning the same ETag value that was sent by the client. This rapid client-server exchange allows the browser to quickly determine that it can serve the file directly from its local cache directory.
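    The ETag half of that validation is just as small (a Python sketch; real servers also accept comma-separated lists and weak validators in If-None-Match):

```python
def validate_etag(current_etag, if_none_match=None):
    """304 when the client's If-None-Match matches the object's current
    ETag, 200 (send the full object) otherwise."""
    if if_none_match is not None and if_none_match == current_etag:
        return 304
    return 200

etag = '"1df-28f1-3a2520a6"'          # the ETag from the trace above
print(validate_etag(etag, if_none_match=etag))            # cached copy valid
print(validate_etag(etag, if_none_match='"older-etag"'))  # must re-fetch
```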

    Caching Server

    A caching server performs functions similar to those of a browser cache, only on a much larger scale. Where a browser cache stores web objects for a single browser application on a single machine, a cache server stores web objects for a large number of clients, perhaps even an entire network. With a cache server, all web requests from a network are passed through the caching server, which then serves the requested files to the clients, either directly from its own cache of objects, or by retrieving objects from the internet and then serving them. [11]

    Cache servers are more efficient than browser caches, as this network-level caching makes an object available to all users of the network once it has been retrieved. With a browser cache, each user (in fact, each browser application on a specific client) must maintain a unique cache of files that is not shared with other clients or applications.

    Also, cache servers use additional information provided by the web server in the headers sent along with each web request. Browser caches simply re-validate content with each request, confirming that the content has not been modified since it was last requested. Cache servers use the values sent in the Expires and Cache-Control header messages to set explicit expiry times for objects they store.

    First Request for a file through a cache server

    GET HTTP/1.1
    Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/, application/, application/msword, application/x-comet, */*
    Accept-Language: en-us
    Accept-Encoding: gzip, deflate
    User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
    Proxy-Connection: Keep-Alive

    HTTP/1.0 200 OK
    Date: Tue, 16 Jan 2001 15:46:42 GMT
    Server: Apache
    Cache-Control: max-age=604800
    Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
    ETag: "1df-28f1-3a2520a6"
    Content-Length: 10481
    Content-Type: text/html
    Connection: Close

    The first request from the client through a cache server shows two interesting things.[12] The first is that although the client sent its request as HTTP/1.1, the server responded using HTTP/1.0. The browser caching example above showed the responding server using HTTP/1.1, so the change in protocol is the first clue that this response was served by a cache server.

    The second item of interest is that the Date field on the initially served file is set to January 16, 2001. The server is not serving stale data; this is the time the cache server recorded when the new object was inserted into the cache.[13]

    Second Request for a file through a cache server (Second Browser)

    GET HTTP/1.1
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; 0.7) Gecko/20010109
    Accept: */*
    Accept-Language: en
    Accept-Encoding: gzip,deflate,compress,identity
    Keep-Alive: 300
    Connection: keep-alive

    HTTP/1.0 200 OK
    Date: Tue, 16 Jan 2001 15:46:42 GMT
    Server: Apache
    Cache-Control: max-age=604800
    Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
    ETag: "1df-28f1-3a2520a6"
    Content-Length: 10481
    Content-Type: text/html
    Connection: Close

    Third Request for a file through a cache server (Second Client Machine)

    GET HTTP/1.0
    Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/,
    application/, application/msword, */*
    Accept-Language: en-us
    User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
    Proxy-Connection: Keep-Alive

    HTTP/1.0 200 OK
    Date: Tue, 16 Jan 2001 15:46:42 GMT
    Server: Apache
    Cache-Control: max-age=604800
    Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
    ETag: "1df-28f1-3a2520a6"
    Content-Length: 10481
    Content-Type: text/html
    Connection: Close

    A second request through the cache server, using another browser on the same client configured to use the cache server, shows that this client retrieved the file from the cache server, not from the origin server. The Date field matches the initial request, and the protocol has once again been swapped from HTTP/1.1 to HTTP/1.0.

    The third example shows that the object is now available not only to different browsers on the same machine, but also to different machines on the same network using the same cache server. When the same content is requested from another client machine on that network, it is clear that the object is served by the cache server, as the Date field is set to the same value observed in the previous two examples.

    Why should data be cached?

    Many web pages downloaded by web browsers today are marked as non-cacheable. The theory is that there is so much dynamic and personalized content on the internet today that if any of it is cached, people using the web may not receive the freshest possible content, or may end up receiving content that was personalized for another client using the same cache server.

    The dynamic and personalized nature of the web today does make this a challenge, but a close look at how a web-site is designed shows that these newer features of the web can work hand-in-hand with content caching.

    How does caching improve the perceived user experience? The browser caching and caching server discussions above demonstrate that caching attacks the problem of internet performance on three fronts. First, caching moves content closer to the client by placing it on local hard drives or in local network caches. With data stored on or near the client, the network delay encountered when retrieving the data is reduced or eliminated.

    Second, caching reduces network traffic by serving content that is still fresh, as described above. Cache servers will attempt to confirm with the origin server that objects stored in the cache (if not explicitly marked for expiry) are still valid and do not need to be fully re-loaded across the internet. To gain the maximum performance benefit from object caching, it is vital to specify explicit cache expiry dates or periods.

    The final benefit of properly defining caching configurations on an origin server is reduced server load. If the server uses carefully planned, explicit caching policies, its load can be greatly reduced, improving the user experience.

    When examining how the configuration of a web server can be modified to improve content cacheability, it is important to keep in mind two considerations. First, the content and site administrators must have a very granular level of control over how the content being served will or won't be cached once it leaves their server. Second, within this need for control, ways should be found to minimize the impact that client requests have on bandwidth and server load by allowing some content to be cached.

    Take the example of a large, popular site noted for its dynamic content and rich graphics. Even with a great deal of dynamic content, caching can serve a beneficial purpose without compromising the nature of the content being served. The primary focus of the caching evaluation should be the rich graphical content of the site.

    If the images of this site all have unique names not shared by any other object on the site, or the images all reside in the same directory tree, then this content can be marked differently within the server configuration, allowing it to be cached.[14] A policy that allows these objects to be cached for 60, 120, or 180 seconds could have a large effect on reducing bandwidth and server strain at the modified site. During this seemingly short period, several dozen or even several hundred requests for the same object could originate from a large corporate network or ISP. If local cache servers can handle these requests, both the server and client sides of the transaction see immediate performance improvements.
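    The back-of-envelope arithmetic behind this claim is simple: during one caching window, a shared cache fetches the object from the origin once and absorbs every other request. A hypothetical sketch:

    ```python
    def cache_savings(requests_in_window, object_bytes):
        """Requests and bytes kept off the origin server during one caching
        window, assuming a single shared cache serves all the clients."""
        origin_fetches = 1 if requests_in_window else 0
        saved = max(0, requests_in_window - origin_fetches)
        return saved, saved * object_bytes

    # 300 requests for the 10,481-byte object from the traces above,
    # arriving within a single 180-second caching window:
    saved, bytes_saved = cache_savings(300, 10481)  # 299 requests, ~3 MB saved
    ```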

    Taking a server header from an example used earlier in the paper, it can be demonstrated how even a slight change to the server header itself can help control the caching properties of dynamic content.

    Dynamic Content

    HTTP/1.1 200 OK
    Date: Tue, 13 Feb 2001 14:50:31 GMT
    Server: Apache/1.3.12
    Cache-Control: no-cache, must-revalidate
    Expires: Sat, 13 Jan 2001 14:50:31 GMT
    Last-Modified: Sun, 03 Dec 2000 23:52:56 GMT
    ETag: "1cbf3-dfd-3a2adcd8"
    Accept-Ranges: bytes
    Content-Length: 3581
    Connection: close
    Content-Type: text/html

    Static Content

    HTTP/1.1 200 OK
    Date: Tue, 13 Feb 2001 14:50:31 GMT
    Server: Apache/1.3.12
    Cache-Control: max-age=43200, must-revalidate
    Expires: Wed, 14 Feb 2001 02:50:31 GMT
    Last-Modified: Sun, 03 Dec 2000 23:52:56 GMT
    ETag: "1cbf3-dfd-3a2adcd8"
    Accept-Ranges: bytes
    Content-Length: 3581
    Connection: close
    Content-Type: text/html

    As can be seen above, the only differences between the headers sent with the Dynamic Content and the Static Content are the Cache-Control and Expires values. The Dynamic Content example sets Cache-Control to no-cache, must-revalidate and Expires to one month in the past. This should prevent any cache from storing this data or serving it in response to a later request for the same content.

    The Static Content example modifies these two settings, making the requested object cacheable for up to 12 hours: the Cache-Control max-age value is set to 43,200 seconds, and the Expires value is exactly 12 hours in the future. After that period, the browser cache or caching server must re-validate the content before serving it in response to local requests.

    The must-revalidate directive is not strictly necessary, but it adds additional control over content. Some cache servers will attempt to serve stale content under certain circumstances, such as when the origin server for the content cannot be reached. The must-revalidate setting forces the cache server to re-validate the stale content, and to return an error if it cannot be retrieved.
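    The cacheable/non-cacheable split between the two example responses can be captured in a small decision function. This is a hypothetical sketch; real cache servers implement considerably more of RFC 2616 than this.

    ```python
    def is_cacheable(headers):
        """Mirror the two example responses: no-cache (or no-store) blocks
        caching; an explicit max-age or an Expires header permits it."""
        directives = [d.strip() for d in
                      headers.get("Cache-Control", "").split(",")]
        if "no-cache" in directives or "no-store" in directives:
            return False
        return (any(d.startswith("max-age=") for d in directives)
                or "Expires" in headers)

    print(is_cacheable({"Cache-Control": "no-cache, must-revalidate"}))     # False
    print(is_cacheable({"Cache-Control": "max-age=43200, must-revalidate"}))  # True
    ```

    Note that the Dynamic Content example carries an Expires header as well, but the no-cache directive takes precedence, which is why the belt-and-suspenders combination of both headers is common.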

    Differentiating caching policies by the type of content served allows very granular control over what is not cached, what is cached, and how long cached content can be considered fresh. In this way, server and web administrators can improve site performance at little or no additional development or capital cost.

    It is very important to note that defining server-side caching policies will only have a beneficial effect on server performance if explicit object caching configurations are used. The two main types of explicit caching configuration are the Expires header and the Cache-Control family of headers, as seen in the example above. If no explicit expiry value is set, the performance gains that might have been achieved are eliminated by a flood of unnecessary client and cache-server requests to re-validate unchanged objects with the origin server.


    Despite the growth of dynamic and personalized content on the web, there is still a great deal of highly cacheable material served to clients. However, many sites do not take advantage of the performance gains achievable by isolating the dynamic and personalized content of their site from the relatively static content served alongside it.

    Using the ability of most modern web-server applications to set explicit caching policies, the objects handled by a web server can be separated into distinct content groups. With a distinct caching policy for each group of web objects, the web-site administrator, not the cache administrator, controls how long content is served without re-validation or re-loading. This granular control of explicit content caching policies can allow web-sites to achieve noticeable performance gains with no additional outlay for hardware or software.


    [1] The proximity referred to here is network proximity, not physical proximity. For example, AOL's network has some of the world's largest cache servers, and they are concentrated in Virginia; however, because of the structure of AOL's network, these cache servers are not far from the client.

    [2] A re-verify is preferred as it consumes less bandwidth than a full re-load of the object from the origin server. With a re-verification, the origin server just confirms that the file is still valid and the cache server can simply reset the timer on the object.

    [3] An HTTP header message is a data-control message sent by a web client or a web server to indicate a variety of data transmission parameters concerning the requests being made. Caching information is included in the information sent to and from the server.

    [4] There are actually a substantially larger number of header messages that can be applied to a client or a server data transmission to communicate caching information. The most up-to-date list of the messages can be found in section 13 of RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1.

    [5] A complete listing of the Cache-Control settings can be found in RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1, section 14.9.

    [6] The initial file request that generated this header was sent on February 12, 2001.

    [7] The Date header message indicates the date and time on the origin server when it responded to the request.

    [8] A third type of caching, reverse caching (or HTTPD acceleration), is used on the server side to place highly cacheable content on high-speed machines that use solid-state storage, making retrieval of these objects very fast. This reduces the load on the web servers and allows them to concentrate on generating dynamic and personalized content.

    [9] The data shown here is just application trace data.

    [10] The ETag, or entity tag, is used to identify specific objects on a web server. Each item has a unique ETag value, and this value changes each time the file is modified. As an example, the ETag for a local web file was captured, then re-captured after the file was modified (two carriage returns were inserted).

    Test 1: Original File

    ETag: "21ccd-10cb-399a1b33"

    Test 2: Modified File

    ETag: "21ccd-10cd-3a8c0597"

    [11] This is the source of the cache server's other name, as the cache server acts as a proxy for the client making the request. The term proxy server is outdated, as "proxy" implies that the device will do exactly as the client requests; this is not always the case, due to the security and content-control mechanisms that are part of all cache servers today. The client isn't always guaranteed to receive the complete content requested. In fact, many networks do not allow any content in that does not first pass through the cache devices on that network.

    [12] The data shown here is just application trace data. For a more complete example of what the application and network properties of a web object retrieval are, please see Appendix A and B.

    [13] All the data captures used in this example were taken on February 11-14, 2001.

    [14] The description used here is based on the configuration options available with the Apache/1.3.x server family, which allows caching options to be set down to the file level. Other server applications may vary in their methods of applying individual caching policies to different sets of content on the same server.

    Monday, October 2, 2006

    Home Office for a God...errrr, Goddess


    Kathy Sierra and her home office in a Silver Streak trailer [here].

    We are not worthy.

    But the whole idea of a playful office is one that is very powerful to me. The "office" I commute to is a broad open space, with no walls. And as we are growing, the noise is becoming difficult to work around.

    With both boys in school in the mornings, it is now much more peaceful for me to work from home. At MY desk, the one I bought. An old oak teacher's desk, of which there appear to be millions in circulation.

    In my chair, the one I feel comfortable in. I have an Aeron at work, and I think it's overrated.

    It is vital to work where you will be most creative, most comfortable.