Let’s say you have an automation tool that patches your servers. What patches the server hosting the automation tool? Is it, as the ancients believed, turtles all the way down?

You could perform some kind of magic rug-pulling to try to get the automation tool to patch its own server without breaking, but package managers can and will restart any daemon they see fit partway through the process. Something your automation depends on, like an agent, or even sshd, might end up getting bounced before the upgrade is complete. So we need something a bit more fire-and-forget.

Why not cron? Going by installations alone, it’s the most popular scheduled job automation tool. It’s simple, it works, and odds are really good that it’s already running on your system.

And in fact, automated tools already exist that use cron or other system scheduling tools to kick off upgrades. yum-cron and unattended-upgrades are designed to silently patch your systems in the background and reboot only when necessary. My gripe with them, though, is that they’re magical by design — they want to run at random times and behave more like desktop upgrades than a server patch. Being Linux utilities, they can of course be wrangled into doing what you want, but at that point I’ve cast aside most of their features in favor of predictability.
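For reference, enabling unattended-upgrades on a Debian or Ubuntu box typically comes down to a couple of APT Periodic settings, roughly like this (the exact file name varies; 20auto-upgrades is the conventional one):

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which packages it will touch, and whether it reboots, is then governed by its own configuration in 50unattended-upgrades — which is exactly the layer of indirection I’d rather not manage.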

Back to basics, then. What’s wrong with sticking the upgrade command in cron directly?

Well, if you use yum, the answer is... nothing. yum upgrade -y will run without any additional user input, just as you’d expect. apt-get, on the other hand, is a different story.
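On a yum-based system, then, the whole job really can be a single crontab line — something like this (the schedule and log path are just examples):

```
0 2 * * 6 /usr/bin/yum upgrade -y >> /var/log/autopatch.log 2>&1
```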

See, apt-get really wants to be run interactively. Even if you run apt-get update && apt-get dist-upgrade -y, apt-get will try its hardest to get you to read mailing lists, configuration updates, and warnings. It doesn’t want to be responsible for breaking your system and tries to give the end user all the information they need to make the right decision at every step in the process.

The thing is, though, if you’re not doing anything funky with your package management, chances are that apt-get knows better than you and that you’d like it to get on with whatever it wants to do. Face it, if you were sitting there reading interactive prompt after interactive prompt, you’d be taking the default option every time anyway. Defaults are safe; hopefully, defaults represent the option the developers judged to be the least likely to brick your system. We just have to give apt-get some confidence to pick them for us.

So here is a simple script that lets apt-get shut up and do what it does best.

#!/bin/sh

set -e

# Must be exported, or apt-get's child processes never see it and
# the frontend stays interactive.
export DEBIAN_FRONTEND=noninteractive
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

apt-get update

# For modified config files: --force-confdef takes dpkg's default
# action when one exists; --force-confold otherwise keeps the
# currently installed version instead of prompting.
apt-get \
  -o Dpkg::Options::=--force-confold \
  -o Dpkg::Options::=--force-confdef \
  -y dist-upgrade

apt-get autoremove -y

shutdown -r now

Put that somewhere useful like /usr/local/bin/autopatch, mark it executable (chmod +x /usr/local/bin/autopatch), and then invoke it via cron during a time when you're not likely to be using your automation server.

0 2 * * 6 /usr/local/bin/autopatch >> /var/log/autopatch.log 2>&1

You might notice that this reboots the server indiscriminately; this originates from my deep-seated mistrust of computers. I don’t care whether the server thinks it needs to reboot. I’d rather reboot it anyway and find out sooner rather than later that something won’t come up from a fresh start.
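If you do trust the server’s opinion, Debian and Ubuntu record a pending reboot request in a flag file, and the script’s final line could instead be a check like this — the cautious variant I’m deliberately not using:

```shell
# Reboot only if some package dropped the reboot-required flag.
if [ -f /var/run/reboot-required ]; then
    shutdown -r now
fi
```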

“But downtime!” you might say. Yes, it does mean the server will go down more than it strictly needs to. But this is an automation server, remember. It shouldn’t be doing anything public-facing! Even if the server goes hard down in the middle of the night, the systems it manages should be stable without further intervention. If that isn’t true, then your infrastructure is a ticking time bomb.

In exchange for a brief weekly interruption, my most sensitive server — the server with privileged access to my other servers and cloud hosting accounts — keeps itself up-to-date and secure.