Amazon S3 Object Expiration

Amazon Simple Storage Service (S3) is a low-cost storage solution for all types of data. One example of its usage is serving static resources for dynamic Web applications. For instance, my company places all the images for our Web applications on S3. This reduces the load on the application servers, leaving them free to deal with requests for dynamic data. Images are uploaded to an S3 bucket – an entity created on S3 to hold your data. Each bucket has a unique URL, which can then be mapped to a CNAME on the domain where the main application is hosted. Such an approach is simple and very cost effective, with current rates for S3 storage being $0.14 per GB. Other costs include requests (PUT, POST, GET, etc.) at $0.01 per 1,000 requests, and data transfer out, which is free for the first GB and then starts at $0.12 per GB.
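To get a feel for those numbers, a quick back-of-the-envelope calculation in Python is helpful. This is a rough sketch using only the rates quoted above (the workload figures – 50 GB stored, 2 million requests, 10 GB transferred out – are hypothetical, and real bills involve tiered rates):

```python
def estimate_s3_monthly_cost(storage_gb, requests, transfer_out_gb):
    """Rough monthly S3 bill using the rates quoted in this post:
    $0.14/GB storage, $0.01 per 1,000 requests, first GB out free
    then $0.12/GB. A simplification of the real tiered pricing."""
    storage_cost = storage_gb * 0.14
    request_cost = (requests / 1000) * 0.01
    billable_out_gb = max(transfer_out_gb - 1, 0)  # first GB out is free
    transfer_cost = billable_out_gb * 0.12
    return round(storage_cost + request_cost + transfer_cost, 2)

# Hypothetical month: 50 GB of images, 2M GETs, 10 GB served out
print(estimate_s3_monthly_cost(50, 2_000_000, 10))  # → 28.08
```

Even with a generous amount of traffic, the monthly cost stays in the tens of dollars, which is what makes S3 so attractive for static assets.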

Another use for S3 is storing application log files. One of the downsides here is that log files build up over time, and after a certain point they are no longer needed. On self-managed servers these files are normally compressed and ultimately removed by logrotate or a similar facility. This can be made to work for S3, but requires a lot of scripting. Today, Amazon released an object expiration facility for S3 storage. This is an elegant solution for automatically deleting S3 objects when they are no longer required, and log files are a perfect example of where the service is invaluable. Object expiration rules can be configured from the AWS management console; objects that match a rule are then deleted automatically once they expire.
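The same rules can also be set programmatically. Below is a minimal sketch of an expiration rule for log files in the shape that boto3's `put_bucket_lifecycle_configuration` expects (the `logs/` prefix, 30-day window, and bucket name are hypothetical choices for illustration):

```python
def log_expiration_rule(prefix, days):
    """Build a lifecycle rule that deletes objects under `prefix`
    a given number of days after creation. The dict matches the
    "Rules" entries accepted by S3 lifecycle configuration APIs."""
    return {
        "ID": f"expire-{prefix.rstrip('/')}",
        "Filter": {"Prefix": prefix},     # only objects under this prefix
        "Status": "Enabled",
        "Expiration": {"Days": days},     # delete this many days after creation
    }

rule = log_expiration_rule("logs/", 30)

# Applying it would look something like (hypothetical bucket name):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-log-bucket",
#     LifecycleConfiguration={"Rules": [rule]},
# )
print(rule["Expiration"]["Days"])  # → 30
```

Once the rule is in place, S3 itself takes care of the cleanup – no cron jobs or custom deletion scripts required.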

Chris Czarnecki
