How to disable weak ciphers on nginx

I’ve read and reposted this post here, which explains how to remove some weak ciphers from nginx and Apache.

It has been useful, but I found I needed to edit the string a little and remove some ciphers that the Qualys SSL check still considered weak.

Here’s the string, in case you have a similar need.
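The exact string didn’t survive in this copy of the post, so purely as an illustration, here is a hardened configuration along the same lines (the protocol and cipher choices below are my assumptions, not the original string):

```nginx
# Illustrative hardened TLS settings for nginx (not the post's original string):
# modern protocols only, and only ECDHE suites with AEAD ciphers.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
```

After changing the string, re-run the SSL check to confirm no weak ciphers remain.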



Postgres essentials: window functions

Window functions in Postgres allow you to perform computations over the set of rows related to the current row of your query. Imagine grouping your query by a certain column and having computations limited to the boundaries of that group (in other words, a window).

To better explain the concept, let’s look at a very simple example (data can be downloaded here).

Imagine we have a table with 100 records and four columns:

  • id (primary key)
  • name
  • performance (a score from 1 to 100)
  • marital status

Something like this:
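The original screenshot is gone, so here is a sketch of the schema with a few rows (table and column names are my guesses based on the description above):

```sql
-- Hypothetical schema matching the description above
CREATE TABLE people (
  id             serial PRIMARY KEY,
  name           text NOT NULL,
  performance    integer CHECK (performance BETWEEN 1 AND 100),
  marital_status text NOT NULL
);

INSERT INTO people (name, performance, marital_status) VALUES
  ('Alice', 92, 'married'),
  ('Bob',   75, 'single'),
  ('Carol', 92, 'single');
```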


Now let’s suppose you want to view the best performers in each marital status. The first thing to do is to order the list by marital status and performance: in the resulting list, each run of rows sharing the same marital status is a window.


Now that it’s clear what a window is, we can introduce the concept of window functions. Simply put, window functions are functions that operate on those windows of data, in other words on those sub-recordsets. We can add columns whose values are computed taking into account only the other rows inside the boundaries of the window.

We can use the rank() function to show the rank of each record inside its own window. A rank is not like a row number: rank() outputs the same value for equal inputs inside a given set.
Let’s write this query: 
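The original query didn’t make it into this copy; a sketch of it, using the table and column names assumed earlier, would be:

```sql
-- Rank each person inside the window of their marital status,
-- best performance first.
SELECT id,
       name,
       marital_status,
       performance,
       rank() OVER (PARTITION BY marital_status
                    ORDER BY performance DESC) AS rank
FROM people;
```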
In the result, ranking restarts every time a new window starts.
Now go back and take a look at the query. Right after the call to the rank() function we define a partition criterion and a sort order. Those two parameters are required to specify the scope, the field of action, of the window function.
Now let’s suppose we want to query for just the first-ranked people in each marital status. It’s very easy now that we have the rank column in place: just wrap everything inside a subquery and add a where condition.
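A sketch of that wrapping, again with the assumed names:

```sql
-- Keep only the top-ranked person (or people, in case of ties)
-- for each marital status.
SELECT *
FROM (
  SELECT id, name, marital_status, performance,
         rank() OVER (PARTITION BY marital_status
                      ORDER BY performance DESC) AS rank
  FROM people
) ranked
WHERE rank = 1;
```

Note that because rank() gives ties the same value, this can return more than one row per marital status.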


Before window functions it was not that easy to get the same result. I don’t know precisely how it was done, because I’m not that old; I suppose it was a matter of nested subqueries and similar sorcery.

There’s much more you can do with window functions. Have a look at the documentation pages for the feature and for the available functions.

I hope you found this useful. Here you can download the data I used for this post, in case you want to try it yourself.



Backup files and paths to S3 with write-only keys

Lately I’ve been doing some maintenance on several servers, most of which just had to be turned off since their legacy services no longer had any reason to exist.
I don’t know about you, but when it’s time to turn off a VPS I always feel a bit anxious. Even though the app repository and database are already backed up for archival, you sometimes stumble upon snowflake server configurations or application logs which are not backed up and may be of interest in the future.

In such cases I used to back up those files locally on my laptop and then move them somewhere depending on the specific situation. Sometimes it was a CD or DVD, sometimes some other kind of medium. Sometimes I thought it would have been tremendously useful to move those files from the server to S3 directly, into some kind of backup bucket.

Other times I just needed a quick way to send a bunch of files to S3 directly, say for periodic backups of databases or filesystem snapshots.

Then I thought about the security issues of keeping S3 keys on those servers. If for any reason a host was compromised, losing control of a key that allows anyone to read everything from that bucket would be a mess. Backups very often hold all sorts of sensitive information, and the idea of having to deal with such a security concern was just too much.

S3 and write only keys

I never really developed a standard procedure for that until a few days ago, when I thought about the possibility of having write-only keys on several servers, plus a kind of script that lets you just send files to S3, with no possibility to read anything back.

That sounded great to me. As part of a standard setup for every host I could configure the following:

  • a configuration file with S3 write only keys and bucket name
  • a script suitable to be used with S3 write only keys
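On the AWS side, a write-only key boils down to an IAM policy that grants only `s3:PutObject` on the backup bucket. A minimal sketch (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-backup-bucket/*"
    }
  ]
}
```

A key attached to this policy can upload objects but cannot list or download anything, which is exactly the property we want on a potentially compromisable host.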

In the beginning I considered using a binary like s3cmd for this purpose, but I found it did not play well with write-only keys. Then I decided to build my own script. With a few lines of Ruby it was actually very easy to come up with a script that does just that: read a path from the command line and recursively push the tree to S3.


Sink3 is available here on GitHub. It’s in such an early stage that I felt a little uncomfortable even writing this post. But then I thought “hey! it’s working after all.”

Here is what it does:

  • it uses the hostname to create a root folder on S3
  • it creates a folder named after the current date inside the hostname folder
  • it copies the files or paths it receives as arguments into the date folder
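This is not sink3’s actual code, but the key layout described above can be sketched in a few lines of Ruby (function name and defaults are my own):

```ruby
require 'socket'
require 'date'

# Build the S3 object key for a local file:
# <hostname>/<YYYY-MM-DD>/<path as given on the command line>
def s3_key_for(path, hostname: Socket.gethostname, date: Date.today)
  "#{hostname}/#{date.iso8601}/#{path}"
end

# The actual upload would then go through the AWS SDK, e.g. something like
#   s3.bucket(BUCKET).object(s3_key_for(path)).upload_file(path)
```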

Working this way it can even be used to perform periodic backups. Example usage, assuming a host named tiana:

sink3 send File1 File2 File3

What you get in the bucket is a layout like tiana/&lt;current date&gt;/File1, and so on for each file.

Nice, huh? You don’t have to worry about anything other than avoiding conflicts in filenames: a conflict would overwrite what you backed up previously.

How to setup Zimbra to forward missing recipients to Google Apps

This tip shows how to configure a Zimbra instance to deliver messages to Google Apps when the recipient email address is not found on the local server.

This is the scenario:

  • you have an email domain on Zimbra with some email accounts on it
  • you have Google Apps configured to use Zimbra as a secondary server: Google Apps is published as the only MX DNS record, and forwards all unknown recipients to Zimbra as a fallback
  • you have other email accounts for the same domain configured on Google Apps
  • when you are logged into the Zimbra webmail (or use an IMAP client), you want to be able to send messages to the accounts that live on Google Apps

In normal conditions, Zimbra will reply with a message telling you that the recipient is not found.

The following command tells Zimbra to forward every message for the given domain to Google Apps (example.com is a placeholder for your domain; aspmx.l.google.com is Google’s primary inbound mail host):

zmprov md example.com zimbraMailTransport smtp:aspmx.l.google.com:25

I searched for ways to apply this configuration through the administration interface but I wasn’t able to find anything. So running this at the command line seems to be the only option.

Then, restart postfix:

postfix stop 
postfix start 

From now on, every message for the domain will be forwarded to Google Apps first.

If you are a Zimbra guru and you know a better way to do that please drop me a line in the comments.

Migrate your blog from Jekyll to WordPress in 3 steps

Long story short, you have to use an RSS 2.0 feed. Here is the flow I followed recently, which worked quite well. It did not import images; that was not a big issue in my case, since I had very few of them and a quick review was enough. If you find better ways to do this please drop a comment and let me know.

Install an RSS 2.0 feed in your Jekyll site

My installation didn’t have an RSS feed; it had an Atom feed, which unfortunately didn’t seem to work for this purpose. Find an RSS 2.0 plugin and install it into your _plugins directory. If you don’t have a _plugins directory, make one.

I used this one. It requires two configuration items in _config.yml which I didn’t have: name and url. Be sure to have those in your config. Restart Jekyll and visit /rss.xml on your local site.
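For reference, the two entries in _config.yml look something like this (the values are placeholders):

```yaml
# _config.yml — entries required by the RSS plugin
name: My Blog
url: http://www.example.com
```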

You don’t even need to push this change to production, since the WordPress importer we’ll see next will ask you to upload a file.

So download this file from your local instance by doing something like:

wget http://localhost:4000/rss.xml

Keep the resulting rss.xml file ready for the next step.

Install an RSS import plugin on WordPress

This import process is going to use an RSS feed file as its source. Install this plugin.

Not much to say here. Go to the RSS import tool and upload the file you generated in the previous step.


You should see a long list of “imported successfully” lines.

Import images

This is the boring part. If you find a more clever way to do it, please let me know.

Your images are likely to be in your Jekyll images folder. Open that folder, select all the relevant images and drag them into the WordPress media library. You can imagine what the next step is: relink the images by visiting each post. Delete the image placeholder, which will be broken at this point, and add back the corresponding media item from the library.

As I said, this was not much of an issue for me because I didn’t have many images. If you have tons of images and this step is not viable, I’d suggest a couple of possibilities:

  1. Work on the rss.xml file so that, once the images are uploaded to the WordPress media library, the paths match and no links are broken.
  2. Spend some more time finding a more clever importer, so that images linked from your domain are actually imported into the library, and maybe each image src is rewritten according to its new path.

Other issues

I found some posts had line breaks where there shouldn’t have been any. I don’t know why, but I had to review the posts by hand and fix them during the process.

Another thing you may want to check is code blocks (code samples and the like), which you may find are not appearing as they used to. I’m using the Crayon code highlighter now.

Let me know how your migration goes and whether you found this post useful. If you catch me at coffee time I’d be happy to help you out.

Sudo and tty_tickets option

This one is about ssh and sudo sessions. Working on a new Ubuntu 12.04 instance, I noticed
it was not keeping my sudo sessions open across ssh connections.

This behaviour was the default in previous releases: I never had to configure it,
it just worked. It’s quite annoying to input your password repeatedly
when you are running a bunch of automated scripts to configure a new instance,
especially if you are testing those configurations and running them over and over.

So here is the line you have to add to /etc/sudoers (edit it with visudo) to do the trick:

Defaults        !tty_tickets

Once again I read the docs too late. Please go and read the docs to learn
how this works.


Keep in mind that changing this configuration will allow anyone who is able to
access your shell on the remote machine to run a sudo command without a password.