Bending the TTLs: Dynamic traffic adaptation by aiScaler
Posted by Max Robbins on October 25th, 2010
The secret of caching is controlling the freshness of content. This is managed using Time To Live (TTL) settings, which balance speed of delivery against freshness of content by controlling how frequently content is regenerated.
Website admins struggle to strike this balance. The challenge is that settings that work well for normal traffic can be ineffective during peak periods.
aiScaler solves this issue by allowing TTLs to adjust dynamically during peak periods, returning to their normal settings when the load diminishes.
Under normal load, aiScaler obeys the configured caching TTLs. When load spikes, whether because content has gone viral or been featured on an aggregator such as Digg or the Drudge Report, aiScaler allows you to bend the TTLs.
aiScaler implements this "bending" by letting you multiply the TTL by a chosen factor for a specified period.
You can tell aiScaler that once load reaches a defined threshold, it should bend the TTLs by a certain factor, for example 5, for a particular length of time. Content that would ordinarily have refreshed every 30 seconds will then be served from aiScaler's cache for 150 seconds.
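The arithmetic above can be sketched in a few lines. This is an illustrative model only: the function name, load metric, and threshold are assumptions, not aiScaler's actual API or configuration.

```python
# Hypothetical sketch of TTL "bending". All names and values here are
# illustrative assumptions, not aiScaler's real interface.

def effective_ttl(base_ttl: float, current_load: float,
                  load_threshold: float, bend_factor: float) -> float:
    """Return the TTL to apply: the configured TTL under normal load,
    or the TTL multiplied by bend_factor while load exceeds the threshold."""
    if current_load > load_threshold:
        return base_ttl * bend_factor
    return base_ttl

# The article's example: a 30-second TTL bent by a factor of 5
# is served from cache for 150 seconds while under peak load.
print(effective_ttl(30, current_load=0.9, load_threshold=0.8, bend_factor=5))  # 150
print(effective_ttl(30, current_load=0.3, load_threshold=0.8, bend_factor=5))  # 30
```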
Because aiScaler can serve cached content thousands of times faster than the origin environment, this simple factoring of the TTL lets the site absorb all of the additional traffic quickly, for as long as it remains under load.
aiScaler analyzes response times every few seconds and reports both when the scale factor is applied and when TTLs return to normal.
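The monitoring step described above can be sketched as a simple decision function. The sampling window, threshold value, and function name are assumptions for illustration; aiScaler's internal logic is not published here.

```python
# Hypothetical sketch of the load check behind TTL bending: sample recent
# response times and decide whether TTLs should currently be bent.
# The 500 ms threshold is an illustrative assumption.
from statistics import mean

def should_bend(recent_response_times_ms, slow_threshold_ms=500):
    """Bend TTLs while the average recent response time exceeds the
    threshold; return to normal TTLs once it drops back below."""
    return mean(recent_response_times_ms) > slow_threshold_ms

print(should_bend([600, 700, 800]))  # True: under load, bend the TTLs
print(should_bend([100, 120, 90]))   # False: normal load, obey the TTLs
```

In a real system this check would run on a timer every few seconds, matching the article's description of continuous response-time analysis.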
The end result? With a simple one-line configuration you get the best of both worlds: optimally fresh content for users under normal conditions, and fast, reliable serving under peak loads.