Back up files and paths to S3 with write-only keys

Lately I’ve been doing maintenance on several servers, most of which simply had to be turned off, since the legacy services they hosted no longer had any reason to exist.
I don’t know about you, but when it’s time to turn off a VPS I always feel a bit anxious. Even though the app repository and database are already backed up and archived, you sometimes stumble upon snowflake server configurations or application logs that are not backed up and may be of interest in the future.

In such cases I used to back up those files locally on my laptop and then move them somewhere else depending on the situation. Sometimes it was a CD or DVD, sometimes some other kind of medium. Sometimes I thought it would have been tremendously useful to move those files from the server straight to S3, into some kind of backup bucket.

Other times I just needed a quick way to send a bunch of files directly to S3, say for periodic backups of databases or filesystem snapshots.

Then I thought about the security issues of keeping S3 keys on those servers. If a host were compromised for any reason, losing control of a key that allows anyone to read everything from that bucket would be a mess. Backups very often hold all sorts of sensitive information, and having to deal with that kind of security concern was just too much.

S3 and write-only keys

I never really developed a standard procedure for that, until a few days ago. In fact, I thought about the possibility of having write-only keys on several servers, plus some kind of script that lets you just send files to S3, with no possibility of reading anything back.

That sounded great to me. As part of a standard setup for every host I could configure the following:

  • a configuration file with write-only S3 keys and the bucket name (a sample write-only policy is sketched right after this list)
  • a script suitable for use with those write-only keys
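To be clear, a “write-only key” is not a special kind of credential: it’s an ordinary key pair attached to an IAM user whose policy allows s3:PutObject on the backup bucket and nothing else, so the key can upload objects but cannot list or read them back. Roughly, such a policy looks like this (the bucket name is just a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}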

In the beginning I considered using a binary like s3cmd for this purpose, but I found it did not play well with write-only keys. So I decided to build my own script. With a few lines of Ruby it was actually very easy to come up with a script that does just that: read a path from the command line and recursively push the tree to S3.
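The real code lives in the Sink3 repository described below; as a rough idea, the core of such a script can be sketched with the official aws-sdk-s3 gem (the function name and structure here are mine, not Sink3’s):

require "aws-sdk-s3"  # gem install aws-sdk-s3
require "find"

# Recursively push a file or directory tree to S3 under the given key prefix.
# The credentials in use only need s3:PutObject on the target bucket.
def push_to_s3(local_path, bucket:, prefix:, s3: Aws::S3::Client.new)
  base = File.expand_path(local_path)
  Find.find(base) do |entry|
    next if File.directory?(entry)
    # Build the key as prefix + path relative to the argument's parent dir,
    # so a directory argument keeps its tree structure in the bucket.
    relative = entry.sub(File.dirname(base) + "/", "")
    File.open(entry, "rb") do |io|
      s3.put_object(bucket: bucket, key: File.join(prefix, relative), body: io)
    end
  end
end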

Sink3

Sink3 is available here on github. It’s at such an early stage that I felt a little uncomfortable even writing this post. But then I thought “hey! it’s working after all.”

Here is what it does:

  • it uses the hostname to create a root folder on S3
  • it creates a folder named after the current date inside the hostname folder
  • it copies the files or paths it receives as arguments into that date folder (sketched right after this list)
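In terms of the push_to_s3 sketch above, the hostname/date prefix and the argument handling would look roughly like this (again, my naming and a placeholder bucket, not the actual Sink3 source):

require "socket"
require "date"

# Prefix like "tiana/2015-02-03" -- a hostname folder, then a date folder.
prefix = File.join(Socket.gethostname, Date.today.iso8601)

# The real tool takes a "send" subcommand before the paths; skipped here.
ARGV.each do |path|
  push_to_s3(path, bucket: "my-backup-bucket", prefix: prefix)
end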

Working this way, it can even be used to perform periodic backups. Example usage:

Assuming a host named tiana:

sink3 send File1 File2 File3

What you get in the bucket is:

tiana/2015-02-03/File1
tiana/2015-02-03/File2
tiana/2015-02-03/File3

Nice, huh? You don’t have to worry about anything other than avoiding conflicts in filenames, which would overwrite what you backed up previously on the same day.
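For periodic use, a plain cron entry is enough; since a fresh date folder is created each day, daily runs won’t clash with one another. Something along these lines (the install path and the backed-up paths are just examples):

0 3 * * * /usr/local/bin/sink3 send /var/backups/db /var/log/nginx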
