Compressing Web content can produce a much faster site for users. Here's how to set it up and measure your success.
Reducing costs is a key consideration for every IT budget. One of the items looked at most closely is the cost of a company's bandwidth. Using content compression on a Web site is one way to reduce both bandwidth needs and cost. With that in mind, this article examines some of the compression modules available for Apache, specifically, mod_gzip for Apache 1.3.x and 2.0.x and mod_deflate for Apache 2.0.x.
Most compression algorithms, when applied to a plain-text file, can reduce its size by 70% or more, depending on the content of the file. The difference between standard and maximum compression levels is small, especially when you consider the extra CPU time needed for the additional compression passes, and this matters when compressing Web content dynamically. Most software content compression techniques use a compression level of 6 (out of 9 levels) to conserve CPU cycles; the file size difference between level 6 and level 9 usually is too small to be worth the extra processing time.
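You can see this tradeoff for yourself with the command-line gzip tool, which uses the same 1–9 levels; a quick sketch, assuming a local text file named page.html:

gzip -6 -c page.html | wc -c
gzip -9 -c page.html | wc -c

On typical HTML, the level 9 output is only marginally smaller than the level 6 output, while the higher level consumes noticeably more CPU time on large files.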
For files identified as text/.* MIME types, compression can be applied to the file prior to placing it on the wire. This simultaneously reduces the number of bytes transferred and improves performance. Testing also has shown that Microsoft Office, StarOffice/OpenOffice and PostScript files can be GZIP-encoded for transport by the compression modules.
Some important MIME types that cannot be GZIP-encoded are external JavaScript files, PDF files and image files. The problem with JavaScript files is due mainly to bugs in browser software; these files really are text files, and overall performance would benefit from compressing them for transport. PDF and image files already are compressed, and attempting to compress them again simply makes them larger and can lead to rendering issues in browsers.
Prior to sending a compressed file to a client, it is vital that the server ensures the client receiving the data correctly understands and renders the compressed format. Browsers that understand compressed content send a variation of the following client request headers:
Accept-Encoding: gzip
Accept-Encoding: gzip, deflate
Current major browsers include some variation of this message with every request they send. If the server sees the header and chooses to provide compressed content, it should respond with the server response header:
Content-Encoding: gzip
This header tells the receiving browser to decompress the content and parse it as it normally would. Alternatively, content may be passed to the appropriate helper application, based on the value of the Content-type header.
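An easy way to confirm that a server is honoring these headers is to request a page with Accept-Encoding set and inspect the response; a sketch using cURL (hypothetical URL):

curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" http://www.example.com/

If compression is active, the dumped response headers include Content-Encoding: gzip; without the -H option, they should not.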
The file size benefits of compressing content can be seen easily by looking at a couple of examples, one an HTML file (Table 1) and the other a PostScript file (Table 2). Performance improvements are examined later in this article.
mod_deflate for Apache versions 2.0.44 and earlier comes with the compression ratio set for best speed, not best compression. This configuration can be modified using the tips found at www.webcompression.org/mod_deflate-hack.php. Starting with Apache 2.0.45, a configuration directive, DeflateCompressionLevel, is included.
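With 2.0.45 or later, setting the level is a one-line change in httpd.conf; for example, to use the commonly chosen level 6 discussed above:

DeflateCompressionLevel 6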
The mod_gzip module is available for both Apache 1.3.x and Apache 2.0.x [3], and it can be compiled into Apache as a dynamic shared object (DSO) or as a static module. The compilation for a DSO is simple; from the uncompressed source directory, perform the following steps as root:
make APXS=/path/to/apxs
make install APXS=/path/to/apxs
/path/to/apachectl graceful
mod_gzip must be loaded last in the module list, as Apache 1.3.x processes content in module order, and compression is the final step performed before data is sent. The installation adds the necessary mod_gzip lines to httpd.conf, but they are commented out.
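The lines to uncomment vary by build, but they typically look something like the following (the module path here is illustrative and depends on where your DSOs are installed):

LoadModule gzip_module libexec/mod_gzip.so
AddModule mod_gzip.c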
A basic configuration for mod_gzip in the httpd.conf should include:
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/postscript$
mod_gzip_item_exclude mime ^application/x-javascript$
mod_gzip_item_exclude mime ^image/.*$
mod_gzip_item_exclude file \.(?:exe|t?gz|zip|bz2|sit|rar)$
This configuration allows PostScript files to be GZIP-encoded while leaving PDF files uncompressed, as application/pdf matches none of the include rules. PDF files should not be compressed; doing so leads to problems when the files are displayed in Adobe Acrobat Reader. To be even more careful, you may want to exclude PDF files explicitly:
mod_gzip_item_exclude mime ^application/pdf$
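Note that none of these item rules has any effect until the module itself is enabled; assuming the stock directive names, that is done in the same httpd.conf section:

mod_gzip_on Yes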
The mod_deflate module for Apache 2.0.x is included with the source for this server, which makes compiling it into the server rather simple:
./configure --enable-modules=all \
            --enable-mods-shared=all --enable-deflate
make
make install
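Because --enable-mods-shared builds mod_deflate as a DSO, confirm that httpd.conf loads it before adding any filter directives; the line typically looks like this:

LoadModule deflate_module modules/mod_deflate.so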
With mod_deflate for Apache 2.0.x, the GZIP encoding of documents can be enabled in one of two ways: explicit exclusion of files by extension or explicit inclusion of files by MIME type. These methods are specified in the httpd.conf file. Explicit exclusion looks like:
SetOutputFilter DEFLATE
DeflateFilterNote ratio
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary
Explicit inclusion looks like:
DeflateFilterNote ratio
AddOutputFilterByType DEFLATE text/*
AddOutputFilterByType DEFLATE application/ms* application/vnd* application/postscript
In the explicit exclusion method, the same exclusions are present as in the mod_gzip file, namely images and PDF files.
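The sample configuration in the Apache documentation pairs these dont-vary exclusions with mod_headers, so that proxies still cache browser-specific variants correctly; a sketch, assuming mod_headers is loaded:

Header append Vary User-Agent env=!dont-vary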
If your site uses dynamic content—XSSI, CGI and the like—nothing special needs to be done to compress the output of these modules. As mod_gzip and mod_deflate process all outgoing content before it is placed on the wire, all content from Apache that matches either the MIME types or the file extensions mapped in the configuration directives is compressed.
The output from PHP, the most popular dynamic scripting language for Apache, also can be compressed in one of three possible ways: using the built-in output handler, ob_gzhandler; using the built-in ZLIB compression; or using one of the Apache compression modules. Configuring PHP's built-in compression is simply a matter of compiling PHP with the --with-zlib configure option and then reconfiguring the php.ini file.
Below is what the output buffer method looks like:
output_buffering = On
output_handler = ob_gzhandler
zlib.output_compression = Off
The ZLIB method uses:
output_buffering = Off
output_handler =
zlib.output_compression = On
The output buffer method produces marginally better compression, but both methods work. The output buffer, ob_gzhandler, also can be added on a script-by-script basis, if you do not want to enable compression across the entire site.
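For script-by-script use, a minimal sketch is a single call at the top of the page, before any output is sent:

<?php
// Compress this script's output only; ob_gzhandler sends plain
// output if the client did not advertise Accept-Encoding: gzip.
ob_start("ob_gzhandler");
?>
<html>
...
</html>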
If you do not want to reconfigure PHP with ZLIB enabled, the Apache compression modules can compress the content generated by PHP. I have configured my server so that Apache modules handle all of the compression, and all pages are compressed in a consistent manner, regardless of their origin.
Can compressed content be cached? The answer is an unequivocal yes. With mod_gzip and mod_deflate, Apache sends the Vary header, indicating to caches that the response differs based on certain request criteria, such as user-agent or character set. When a cache receives a compressed object, it notes that the server returned a Vary: Accept-Encoding response, indicating the response was generated based on the request containing the Accept-Encoding: gzip header.
Caching compressed content can lead to a situation where a cache stores two copies of the same document, one compressed and one uncompressed. This is a design feature of HTTP 1.1, and it allows clients with and without the ability to receive compressed content to benefit from the performance enhancements gained from local proxy caches.
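Putting the negotiation together, an illustrative exchange (hypothetical URL, with most headers trimmed) looks like this:

GET /index.html HTTP/1.1
Host: www.example.com
Accept-Encoding: gzip, deflate

HTTP/1.1 200 OK
Content-Type: text/html
Content-Encoding: gzip
Vary: Accept-Encoding

A cache that stores this response serves it only to clients whose requests also contain an Accept-Encoding header permitting gzip; other clients receive the uncompressed copy.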
When comparing the logging methods of mod_gzip and mod_deflate, there really is no comparison. mod_gzip logging is robust, configurable and based on the Apache log format, which allows mod_gzip logs to be configured for analysis in basically any way you want. The default log formats provided when the module is installed are shown below:
LogFormat "%h %l %u %t \"%r\" %>s %b mod_gzip: \ %{mod_gzip_compression_ratio}npct." \ common_with_mod_gzip_info1 LogFormat "%h %l %u %t \"%r\" %>s %b mod_gzip: \ %{mod_gzip_result}n In:%{mod_gzip_input_size}n \ Out:%{mod_gzip_output_size}n \ Ratio:%{mod_gzip_compression_ratio}npct." \ common_with_mod_gzip_info2 LogFormat "%{mod_gzip_compression_ratio}npct." \ mod_gzip_info1 LogFormat "%{mod_gzip_result}n In:%{mod_gzip_input_size}n \ Out:%{mod_gzip_output_size}n \ Ratio:%{mod_gzip_compression_ratio}npct." \ mod_gzip_info2
Logging allows you to see the file's size prior to and after compression, as well as the compression ratio. After tweaking the log formats to meet your specific configuration, they can be added to a logging system by specifying a CustomLog in the httpd.conf file:
CustomLog logs/gzip.log common_with_mod_gzip_info2
CustomLog logs/gzip.log mod_gzip_info2
Logging in mod_deflate, on the other hand, is limited to one configuration directive, DeflateFilterNote, which records the compression ratio as a note that can be written to an access_log file. Be careful about doing this in your production logs, as it may cause some log analyzers to have issues when examining your files. It is best to start out by logging compression ratios to a separate file:
DeflateFilterNote ratio
LogFormat '"%r" %b (%{ratio}n) "%{User-agent}i"' deflate
CustomLog logs/deflate_log deflate
How much improvement can you see with compression? The difference in measured download times on a lightly loaded server indicates the time to download the base page (the initial HTML file) improved by between 1.3 and 1.6 seconds across a slow connection.
The time for the server to respond to a client requesting a compressed page is slightly slower. Measurements show the server's median response time averaged 0.23 seconds for the uncompressed page and 0.27 seconds for the compressed page. However, most Web server administrators should be willing to accept a 0.04-second increase in response time to achieve a 1.5-second improvement in file transfer time.
Web pages are not completely HTML, however. So, how do improved HTML (and CSS) download times affect overall performance? The graph below shows that overall download times for the test page were 1–1.5 seconds better when the HTML files were compressed.
To emphasize the value of compression further, I ran a test on a Web server to see what the average compression ratio would be when requesting a large number of files. In addition, I wanted to determine what the effect on server response time would be when requesting large numbers of compressed files simultaneously. There were 1,952 HTML files in the test directory, and I checked the results using cURL across my local LAN (Tables 3 and 4). The files I used were the top-level HTML files from the Linux Documentation Project, installed on an Apache 1.3.27 server running mod_gzip. The minimum file size was 80 bytes and the maximum was 99,419 bytes.
Table 3. Large Sample of File Requests (1,952 HTML Files); times in seconds

| | First Byte (Average/Median) | Total Time (Average/Median) | Bytes (Average/Median) | Total Bytes |
|---|---|---|---|---|
| mod_gzip | | | | |
| Uncompressed | 0.091/0.030 | 0.280/0.173 | 6,349/3,750 | 12,392,318 |
| Compressed | 0.084/0.036 | 0.128/0.079 | 2,416/1,543 | 4,716,160 |
| mod_deflate[5] | | | | |
| Uncompressed | 0.044/0.028 | 0.241/0.169 | 6,349/3,750 | 12,392,318 |
| Compressed | 0.046/0.031 | 0.107/0.050 | 2,418/1,544 | 4,720,735 |
As expected, the first byte download time was slightly higher with the compressed files than it was with the uncompressed files. But this difference was in milliseconds and is hardly worth mentioning in terms of on-the-fly compression. It is unlikely that any user, especially dial-up users, would notice this difference in performance.
That the delivered data was reduced to 43% of the original file size should make any Web administrator sit up and take notice. The compression ratio for the test files ranged from no compression for files smaller than 300 bytes to 15% of the original file size for two Linux SCSI Programming HOWTOs. Compression ratios do not scale linearly with file size; rather, compression depends heavily on the repetition of content within a file to achieve its greatest gains. The SCSI Programming HOWTOs contain a great deal of repeated characters, making them ideal candidates for extreme compression.
Smaller files also did not compress as well as larger files, for exactly this reason: fewer bytes mean a lower probability of repeated sequences, and thus a lower compression ratio.
Table 5. Average Compression by File Size (in Bytes)
| Module | 0–999 | 1,000–4,999 | 5,000–9,999 | 10,000–19,999 | 20,000–49,999 | 50,000+ |
|---|---|---|---|---|---|---|
| mod_gzip | 0.713 | 0.440 | 0.389 | 0.369 | 0.350 | 0.329 |
| mod_deflate | 0.777 | 0.440 | 0.389 | 0.369 | 0.350 | 0.331 |
The data in Table 5 shows that compression works best on files larger than 5,000 bytes. Below that size, average compression gains are smaller, unless a file has a large number of repeated characters. Some people argue that compressing files below a certain size wastes CPU cycles; if you agree with them, using 5,000 bytes as the floor for compressing files should be a good starting point. I am of the opposite mindset. I compress everything that comes off my servers, because I consider myself an HTTP over-clocker, trying to squeeze every last bit of download performance out of the network.
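For those who do prefer a floor, mod_gzip can enforce one directly; a sketch, assuming the stock size directive:

mod_gzip_minimum_file_size 5000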
mod_deflate does not have a low-end boundary for file size, so it attempts to compress files too small to benefit from compression. This results in files smaller than approximately 120 bytes becoming larger when processed by mod_deflate.
With a few simple commands and a little bit of configuration, an Apache Web server can deliver a large amount of content in a compressed format. These benefits are not limited to static pages; dynamic pages generated by PHP and other dynamic content generators can be compressed by using the Apache compression modules. When combined with other performance-tuning mechanisms and appropriate server-side caching rules, these modules can substantially reduce a site's bandwidth needs for a very low cost.