You can store your assets on S3 and use aiScaler as a reverse proxy to cache them, speeding up delivery and also cutting AWS expenses. Let's suppose you already have an S3 bucket with a URL similar to http://bucket.name.s3-website-us-east-1.amazonaws.com.
First, you need to launch an aiScaler instance on AWS. You can use this guide: Getting Started on the AWS Marketplace.
The next step is to edit the aiScaler configuration file. Open /etc/aicache/aicache.cfg, or edit it in step 3 of the web-based deployment tool.
There are multiple ways to configure how requests are passed to S3:
hostname static.domain.com
min_gzip_size 4000
fallback
logstats
httpheader Connection keep-alive
httpheader Accept */*
httpheader Accept-Encoding gzip
healthcheck /test.html HTTP 5 4
pattern / simple 7d os_tag 1
origin http://bucket.name.s3-website-us-east-1.amazonaws.com 80 1
# static.domain.com/images will be served from
# bucket.name.s3-website-us-east-1.amazonaws.com/images
hostname domain.com
min_gzip_size 4000
fallback
logstats
httpheader Connection keep-alive
httpheader Accept */*
httpheader Accept-Encoding gzip
healthcheck /test.html HTTP 5 4
pattern ^/static simple 7d os_tag 1
origin http://bucket.name.s3-website-us-east-1.amazonaws.com 80 1
# domain.com/static will be served from
# bucket.name.s3-website-us-east-1.amazonaws.com/static
hostname domain.com
min_gzip_size 4000
fallback
logstats
httpheader Connection keep-alive
httpheader Accept */*
httpheader Accept-Encoding gzip
healthcheck /test.html HTTP 5 4
pattern .css simple 7d os_tag 1
pattern .js simple 7d os_tag 1
origin http://bucket.name.s3-website-us-east-1.amazonaws.com 80 1
# serving other requests
origin your_ip_here 80
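To make the last example concrete, here is a small Python sketch of the routing that its pattern directives express: requests whose paths end in .css or .js go to the S3 origin, and everything else falls through to the application origin. This is an illustration only, not aiScaler code, and the origin names are the placeholders from the configuration above.

```python
import re

# Placeholder origins, taken from the example configuration above.
S3_ORIGIN = "http://bucket.name.s3-website-us-east-1.amazonaws.com"
APP_ORIGIN = "http://your_ip_here"

# Mirrors "pattern .css" and "pattern .js": static assets go to S3.
STATIC_PATTERNS = [re.compile(r"\.css$"), re.compile(r"\.js$")]

def route(path: str) -> str:
    """Return the origin URL a request path would be forwarded to."""
    if any(p.search(path) for p in STATIC_PATTERNS):
        return S3_ORIGIN + path   # cached from S3 for 7 days
    return APP_ORIGIN + path      # everything else hits the application

print(route("/assets/site.css"))  # forwarded to the S3 origin
print(route("/index.php"))        # forwarded to the application origin
```

The key design point is that routing is purely pattern-based: you never enumerate individual files, only the expressions that match them.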
After you have made the changes to your config file, don't forget to restart your aiScaler instance. It is now ready for use.
These tests show the performance difference between serving the objects of a dynamic website directly from Amazon Simple Storage Service (S3) and serving the same objects through an aiScaler Dynamic Caching instance placed between S3 and the web tier.
The comparison assumes that users of a site repeatedly request the same files within the TTL (Time to Live) specified for the cache.
Our test setup used an httpd web server running on a small AWS Linux instance. The test page contained 500 embedded image files. The objective was to have a broad enough range of files to simulate random requests from an active web page. Every object had to be downloaded by the client browser and could be cached by aiScaler.
The original site was http://bucket.name.s3-website-us-east-1.amazonaws.com (we tested in the us-east-1 region). aiScaler had a standard configuration pointing to the hostname bucket.name.s3-website-us-east-1.amazonaws.com, with caching enabled for all static objects such as .js, .css, .jpg, .gif, .png, .swf, and .html. All objects matched the caching rules. The caching TTL (the time until the next update of the cache) was 1 day for HTML and 7 days for other content; these values were arbitrary and could range from 1 second to an infinite amount of time.
We accessed the site through aiScaler once first, so all objects were held in cache. This is what the first user to access the page would experience. From that point forward, all content is delivered from aiScaler memory.
We ran several tests to show the difference between accessing the site directly from S3 and accessing the cached version. We also tested from an instance started in the same region (us-east-1) using ApacheBench.
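The core of such a timing test can be sketched in Python. The local stand-in server, object count, and repeat count below are assumptions for illustration only; in the real test the URLs pointed at the S3 website endpoint and at the aiScaler instance.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FixedHandler(BaseHTTPRequestHandler):
    """Serve a small fixed payload, standing in for one of the 500 images."""
    def do_GET(self):
        body = b"x" * 1024
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the benchmark output clean

def average_fetch_ms(urls, repeats=3):
    """Time GET requests against each URL and return the mean latency in ms."""
    samples = []
    for _ in range(repeats):
        for url in urls:
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    # Local stand-in server on an ephemeral port; swap these URLs for the
    # S3 endpoint and the aiScaler instance to reproduce the comparison.
    server = HTTPServer(("127.0.0.1", 0), FixedHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = "http://127.0.0.1:%d" % server.server_address[1]
    urls = ["%s/img%d.png" % (base, i) for i in range(10)]
    print("mean latency: %.2f ms" % average_fetch_ms(urls))
    server.shutdown()
```

Running the same harness once against the S3 URLs and once against the aiScaler URLs gives the two averages being compared.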
This demonstration is not intended to be comprehensive for all sites. It is quite simple to re-create this test environment using an existing site to provide a real-life proof of concept.
The important findings of this test, from our perspective, are:
The average time to receive a cached S3 file was reduced by 3x to 10x.
Files that were not cached, but still accessed via aiScaler, were on average 5% faster (better session management).
We did not need to know in advance the specific files that were to be cached, only the regular expressions.
This environment is limited by the memory of the aiScaler instance and would not work well for very large files.
Network latency was largely ignored in this configuration.
Geographic distribution was not used.
Caching TTL was arbitrary and can range from 1 second to unlimited, set per regular expression.
** BrowserMob is a service of Neustar and is in no way associated with aiScaler.