AWS S3 Replication – 6 things you should know about it

Apr 14, 2015

Amazon announced on their blog the availability of a new AWS S3 Cross-Region Replication feature, which makes it easier to keep copies of S3 objects in a second AWS region. The feature can be used as a backup solution or to meet regulatory requirements for the storage of sensitive financial and personal data.

About the new S3 Cross-Region Replication

Once enabled, new objects uploaded to a particular S3 bucket are automatically replicated to a designated destination bucket located in a different AWS region. The replication process also copies any metadata and ACLs (Access Control Lists) associated with the object and can be enabled and managed through the S3 API.
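As a rough illustration, enabling replication through the API with boto3 (the AWS SDK for Python) might look like the sketch below. The bucket names, regions and role ARN are placeholders, and the IAM role is assumed to already exist (a sketch of such a role appears later in this post):

```python
import boto3

# Hypothetical names for illustration -- replace with your own values.
SOURCE_BUCKET = "my-source-bucket"     # assumed to live in us-east-1
DEST_BUCKET = "my-replica-bucket"      # assumed to live in eu-west-1
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-crr-role"  # hypothetical IAM role

src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="eu-west-1")

# Replication is built on S3 versioning, so both buckets need it enabled.
src.put_bucket_versioning(Bucket=SOURCE_BUCKET,
                          VersioningConfiguration={"Status": "Enabled"})
dst.put_bucket_versioning(Bucket=DEST_BUCKET,
                          VersioningConfiguration={"Status": "Enabled"})

# Attach a replication configuration to the source bucket: every new object
# (empty prefix) is copied to the destination bucket using the given role.
src.put_bucket_replication(
    Bucket=SOURCE_BUCKET,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [{
            "ID": "replicate-all-new-objects",
            "Prefix": "",            # empty prefix matches all keys
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::" + DEST_BUCKET},
        }],
    },
)
```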

What’s in it for me?

  1. It’s (almost) free – Enabling the feature doesn’t cost you a dime. Of course, you will pay the standard storage charges for the data in the destination bucket, as well as the usual AWS price for data transfer between regions (see the AWS S3 Pricing page for more information).
  2. It’s easy to use – You can set this up in minutes. It is built on top of S3’s existing versioning facility. With versioning enabled, you choose the destination region and bucket, set up an IAM role (so that S3 can list and retrieve objects from the source bucket and initiate replication operations on the destination bucket; a minimal sketch of such a role follows this list), and you are done.
  3. Bi-directional replication – You can use the new mechanism to synchronize two buckets in a bi-directional fashion, where each bucket is replicated into the other. This is useful both for Disaster Recovery and for using S3 as a sort of CDN to reduce latency for users in various geographical locations.
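To expand on point 2, the IAM role that S3 assumes needs to be able to read the replication configuration and object versions from the source bucket and write replicated objects into the destination bucket. A minimal boto3 sketch of roughly that kind of role, with hypothetical bucket and role names, might look like this:

```python
import json
import boto3

# Hypothetical names for illustration.
SOURCE_BUCKET = "my-source-bucket"
DEST_BUCKET = "my-replica-bucket"

iam = boto3.client("iam")

# Trust policy: allow the S3 service to assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "s3.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="s3-crr-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Permissions: read from the source bucket, replicate into the destination.
permissions = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetReplicationConfiguration", "s3:ListBucket"],
            "Resource": "arn:aws:s3:::" + SOURCE_BUCKET,
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObjectVersion", "s3:GetObjectVersionAcl"],
            "Resource": "arn:aws:s3:::" + SOURCE_BUCKET + "/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ReplicateObject", "s3:ReplicateDelete"],
            "Resource": "arn:aws:s3:::" + DEST_BUCKET + "/*",
        },
    ],
}

iam.put_role_policy(RoleName="s3-crr-role",
                    PolicyName="s3-crr-permissions",
                    PolicyDocument=json.dumps(permissions))
```

For the bi-directional setup in point 3, you would repeat the same steps with the two buckets swapped, so that each bucket carries a replication rule pointing at the other.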


What’s missing?

  1. 100% up-to-date replication – Replication only applies to objects uploaded after the feature is enabled, so anything stored in the source bucket beforehand is not replicated. You could copy your existing data to the destination bucket yourself (see the sketch after this list); however, with bandwidth bottlenecks and the time the initial copy takes, that copy is likely to be out of date by the time it completes (assuming the source bucket keeps changing).
  2. Failover – Bucket names are globally unique, so if you want to start using your S3 replica, you will need to manually reconfigure your applications to refer to the destination bucket.
  3. Disaster Recovery Drill testing – There is no simple way to test your applications against the replicated S3 storage; as with the missing failover, you will need to “re-wire” your applications to the bucket in the replica region.
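One way to work around point 1 is to seed the destination bucket yourself with a one-off copy of the existing objects, for example with a small boto3 script along the lines of the sketch below (bucket names and regions are placeholders). Objects uploaded after replication is enabled are then handled automatically, but objects that change while the initial copy is running may need a second pass:

```python
import boto3

# Hypothetical names; the replication role does not cover these pre-existing
# objects, so this one-off copy runs under your own credentials.
SOURCE_BUCKET = "my-source-bucket"
DEST_BUCKET = "my-replica-bucket"

src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="eu-west-1")

# Walk every key already in the source bucket and copy it across regions.
# Note: copy_object handles objects up to 5 GB; larger objects would need
# a multipart copy instead.
paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE_BUCKET):
    for obj in page.get("Contents", []):
        dst.copy_object(
            Bucket=DEST_BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": SOURCE_BUCKET, "Key": obj["Key"]},
        )
```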

