critchat Episode 0 – Introduction

The introduction episode of my writing podcast, critchat.

Docker containers are awesome little throwaway environments. Often I like to jump into a container to test something out, like installing a project from scratch, swapping runtimes without needing a version manager, or trying something new without gumming up my base system. It’s also useful for debugging problems in other Linux distributions or as a quick approximation of some remote machine.

Usually when I jump into a Docker image, though, I like to bring my files with me by mounting $(pwd). This works pretty well but has a few annoyances:

  • If I do something that generates files in my working directory, like running pytest, I have to clean them up manually once I’m done.
  • File ownership becomes a stew of root and other UIDs.
  • docker run --rm -ti -v $(pwd):/work -w /work $image /bin/sh is a real pain to type, and I can never remember which images have /bin/bash installed, which is generally a more pleasant experience than bare sh.

I started to hack away at these annoyances one at a time in a shell alias, which evolved into a shell function, which further evolved into a full-blown shell project. Meet dockit, the utility that can drop you into a Docker image in a single command:

dockit alpine
Using default tag: latest
latest: Pulling from library/alpine
Digest: sha256:72c42ed48c3a2db31b7dafe17d275b634664a708d901ec9fd57b1529280f01fb
Status: Image is up to date for alpine:latest
docker.io/library/alpine:latest
/docked #

dockit drops you into a directory called /docked with the best available shell. The /docked directory is owned by the most sensible user dockit can find — the image USER if possible and root if not.

Changes made in the /docked directory don’t propagate back to the host, so you can generate cache files, delete stuff, and generally make a mess of things without having to clean it up. If you want to explicitly export something back to the host, you can use the special undock command. undock will copy a file or folder from the /docked directory back to the host directory, fixing its ownership along the way.
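
For instance, after a pytest run inside the container, you might pull just the coverage report back out (a hypothetical invocation; the folder name is illustrative):

/docked # undock htmlcov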

A word of warning: dockit is meant to be used from a smallish directory, like a git repository. Depending on how recent your kernel is, attempting to dockit a large directory may take a lot of time and/or disk space. (That said, because I’m on a kernel version where overlayfs accepts metacopy=on, I can mount my entire 100+ GB home directory in about ten seconds with no appreciable bloat in disk usage, so your mileage may vary!)

If this sounds like a tool you’d like to have in your toolbox, check out dockit on GitHub.

The Toaster Diaries #0: The Parts

I’ve recently come around to the opinion that every hardware enthusiast should own at least two machines: one good one — and one really crap one.

The reasons for owning a good machine are obvious. Everyone loves a top-of-the-line machine that eats modern triple-A games for breakfast. One that shrugs at large codebases, laughs in the face of video rendering, and boots to the desktop in the blink of an eye.

But how much of that beastly machine are you actually using, day in and day out? How often does it sit there, barely more than idle, as you use your thousand-dollar graphics card to play back a 720p YouTube video?

Not that there’s anything wrong with that, of course. The point of having a powerful computer isn’t to use it to its full potential all the time. Computers are flexible things: movie players, web surfers, emulators of games from bygone eras. Owning a powerful computer isn’t about always running at peak capacity. It’s about having the confidence that, if and when you need it, the power is at your disposal.

And that, right there, is where some of the magic is lost.

Once upon a time — or maybe even now — you had a computer where asking it to do just about anything made the fans whine and electronics chatter. And not everything was guaranteed to work, either. Some things needed tweaking, optimization, patching, fixes found only by poring through ancient forums. There’s an entire dimension of computer ownership that disappears once you have a dream machine that can do whatever you ask it to.

So my recommendation to hardware enthusiasts everywhere is this: build one that absolutely, positively, can’t.

Now, there are a lot of ways to go about building a crap PC. You could buy an outdated “gaming rig” from someone looking for the cash to upgrade. You could build something from scratch with a tight budget. You could even nab an OEM tower designed for workstation use and push it beyond its limits.

My recommendation is to go with something you’re curious about, something you’d have no excuse to work with otherwise. Which, in my case, ended up being two things: a Ryzen APU and a miniature “tower” the size of a thick book.

Face the Raven: Ryzen 2400G

Integrated graphics are nothing new, but they’ve never been a good approximation of dedicated graphics either. Intel’s iGPUs, for example, are just shy of a joke: something that lets you plug in a monitor and watch videos on the Internet without the chip burning up. You certainly can’t game on them, at least not if you want to play anything made in the last decade.

AMD’s Ryzen APUs, like the 2400G, changed all that. Now you can have a halfway decent quad-core Zen CPU with a halfway decent Vega GPU. Neither of them is exactly a performance powerhouse, but hey, you got two for one, and if you’re just starting out, it’s not a bad jumping-off point. I was able to pick up a 2400G from eBay for $95. For comparison, a used 1600 and RX 480 — while certainly more powerful — would easily run someone double that.

If I were starting a build on a tight budget but with an eye towards upgrading later, I’d strongly consider an APU. It’s easy to add a dedicated GPU down the line, and you’re not exactly locking yourself out of future CPUs, either, considering the versatility of the AM4 socket. (That’s another thing that can’t be said for Intel and their one-socket-per-generation approach.)

Upgradeability would be far more limited for this build, however, thanks to its real creative constraint: the chassis.

The Black Box: InWin Chopin

The Chopin is one of the smallest computer cases out there, alongside the xCase and Skyreach. Size-wise, it’s little more than a shell around an ITX motherboard and a 150W PSU.

The included internal PSU is nice since it means that the Chopin doesn’t sport a hefty external power brick, but it also doesn’t leave much room for, well, anything else. Drives are mounted in the back of the case. Cable management is handled by a single gutter at the front. All this means onboard graphics are all I’m going to get, making the Chopin an appropriate pairing with the processor.

The Rest

  • Motherboard: Somehow, I managed to snag an ASRock Fatal1ty AB350 Gaming-ITX motherboard for $80 on eBay. Of all the motherboards I have owned, which isn’t many, this one takes the prize for most obnoxious name. It’s tiresome searching for “Fatal1ty,” and Gaming-ITX is a pretty dumb rebranding of Mini-ITX.
  • RAM: I deliberately went for overkill here, but I didn’t realize how much overkill until after I built the thing. I purchased a G.Skill TridentZ 3600 2x8GB kit because I heard Ryzen was very memory-sensitive. Also because I haven’t yet built a system with RGB RAM, which is practically a rite of passage.
  • Storage: Somewhat overkill, but not too much. I bought two Samsung 860 EVO drives, one for Linux and the other for Windows. I’d run dual-boot setups before, but I thought this time I’d save myself some headache and run the operating systems on entirely different devices. This wouldn’t prove to be as much of a panacea as I’d hoped, but that’s a story for a later post.
  • Cooling: The 2400G I purchased didn’t come with a Wraith cooler, which was fine as it wouldn’t have fit in the InWin anyway. I went with the Noctua NH-L9a-AM4 since I’m a recent Noctua convert. Sadly, as of August 2019, there wasn’t a Chromax alternative, but the brown would end up being hardly visible in the build anyway.

All told, the build came out to just shy of $600. More sensible RAM and storage options, like the ones in the table below, would have easily brought it under $500:

Category      Product                             Price           New or Used
Case          InWin Chopin                        $100 (Amazon)   New
Processor     AMD Ryzen 2400G                     $100 (eBay)     Used
Motherboard   ASRock Fatal1ty AB350 Gaming-ITX    $80 (eBay)      Used
Storage       Samsung 860 EVO (500GB)             $75 (Amazon)    New
RAM           Corsair Vengeance 2x8GB 3200        $80 (Amazon)    New
Cooler        Noctua NH-L9a-AM4                   $40 (Amazon)    New
Total         -                                   $475            -

The SSD I bought new as it’s one of the rare pieces of tech that can degrade with heavy usage — unlike, say, a graphics card or processor. The case, cooler, and RAM I would have happily bought used, but eBay listings ran more expensive than Amazon or Newegg.

Next time on The Toaster Diaries: putting it all together.

So you’ve spun up a fresh Linux WorkSpace and discovered in short order that it is ugly as sin. Fret not, this is as configurable a Linux machine as any — but in different ways than you might be used to.

Enable SSH

Fortunately, Linux WorkSpaces run sshd by default. However, there’s no way to get to it from the Internet.

This is worth fixing up sooner rather than later, as you may run into a problem in the future with your WorkSpace becoming Unhealthy. An Unhealthy WorkSpace is a WorkSpace that AWS can’t reach, usually caused by networking problems. You won’t be able to connect to an Unhealthy WorkSpace through the WorkSpaces client, so it’s good to have SSH handy for debugging.

Before exposing port 22 to the Internet, edit /etc/ssh/sshd_config and tighten up the security a little bit: set ChallengeResponseAuthentication to no, and make sure PasswordAuthentication is set to no as well. Together these disable password logins and allow only key-based authentication. An exposed SSH port in an AWS IP block is a massive target for brute-force attacks; no sense in letting anyone try.

Now, sudo systemctl restart sshd. Verify it’s running. Add your public key(s) to ~/.ssh/authorized_keys. Once you’ve done that, open up the AWS Console.
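
A minimal sketch of those steps, assuming your public key sits at ~/my_key.pub (a placeholder path):

sudo sed -i 's/^#\?ChallengeResponseAuthentication.*/ChallengeResponseAuthentication no/' /etc/ssh/sshd_config
grep -i '^PasswordAuthentication' /etc/ssh/sshd_config   # should read "no"
sudo systemctl restart sshd
systemctl status sshd                                    # verify it came back up
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat ~/my_key.pub >> ~/.ssh/authorized_keys               # add your public key(s)
chmod 600 ~/.ssh/authorized_keys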

The WorkSpace itself isn’t visible in the EC2 console, but the network interface attached to it is. Visit “Network Interfaces” and select the ENI “Created by Amazon WorkSpaces.” While you’re here, by the way, I highly recommend registering the Elastic IP attached to your WorkSpace ENI with some kind of DNS. It’s a lot more convenient to access later, and the Elastic IP won’t change.

Edit the ENI’s security group. Add TCP port 22 from “Anywhere” and save. Your WorkSpace is now SSHable! tail -f /var/log/secure for a good laugh.

Note that if you’re SSHing from a shell, you’ll need to escape the backslash that separates your AD domain from your username, i.e. CORP\\bob rather than CORP\bob.
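
For example, with ws.example.com standing in for your WorkSpace’s DNS name:

ssh CORP\\bob@ws.example.com   # the shell eats one backslash; sshd sees CORP\bob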

Disable sudo password prompts

What’s the point of authenticating twice with the same password? Edit /etc/sudoers.d/01-ws-admin-user and change the final ALL to NOPASSWD:ALL.
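
A sketch of the change; the exact user spec inside the file may differ from this hypothetical one:

sudo visudo -f /etc/sudoers.d/01-ws-admin-user
# before (hypothetical): %ws-admin-group ALL=(ALL) ALL
# after:                 %ws-admin-group ALL=(ALL) NOPASSWD:ALL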

Enable EPEL

Nope, yum install epel-release doesn’t work with Amazon’s repositories. You’ll need to install the package from the Internet directly.

sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

Pull in CentOS repositories

The easiest way to do this is to install Docker and copy the repository files straight out of a centos:7 container (see the sketch below).
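
If Docker isn’t installed yet, the Amazon Linux 2 extras channel is one way to get it (a sketch; your install path may differ):

sudo amazon-linux-extras install -y docker   # Docker from the AL2 extras channel
sudo systemctl enable --now docker           # start the daemon now and at boot
sudo usermod -a -G docker $USER              # log out and back in to pick up the group

With Docker running, copy the CentOS repository files to your WorkSpace: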

centos=$(docker create centos:7)  # create (but don't start) a container we can copy files from
sudo docker cp $centos:/etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo
sudo sed -i 's/\$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
sudo sed -i 's/\]$/\]\nenabled=0/' /etc/yum.repos.d/CentOS-Base.repo
sudo docker cp $centos:/etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
docker rm $centos                 # clean up the scratch container

Some explanation: $releasever on Amazon Linux 2 is 2, but the compatible CentOS repositories are the CentOS 7 ones. So $releasever needs to be hardcoded to 7 in CentOS-Base.repo for the repository URLs to resolve.

We also leave these repositories disabled by default to minimize conflicts with Amazon’s own packages. Use them only when you need them by adding --enablerepo when installing a package, e.g. yum install --enablerepo=extras <package>. Some packages require more obscure dependencies than others, so it’s worth having the CentOS repositories handy.
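
For example, to install a package that lives only in the CentOS extras repo (the package name here is purely illustrative):

sudo yum install --enablerepo=extras centos-release-scl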

Change your desktop environment

Personally, I find adding xfce and the Numix theme works wonders for making your WorkSpace more pleasing to the eye with very little effort. If you do run xfce with a custom GTK theme like I do, don’t forget to set the theme style under both Settings Manager > Appearance > Style and Settings Manager > Window Manager > Style > Theme.

sudo yum groupinstall -y xfce
sudo yum install -y numix-gtk-theme numix-icon-theme-circle

To switch desktop sessions, edit /etc/pcoip-agent/pcoip-agent.conf and change pcoip.desktop_session from mate to xfce. If you’ve installed a different desktop environment and are wondering what keyword belongs here, check /usr/share/xsessions:

ls /usr/share/xsessions/*.desktop # List available desktop sessions.

Reboot your WorkSpace and you’ll be in your preferred desktop session.

Change your shell

chsh doesn’t work in this realm because of Active Directory. Instead, add yourself to /etc/passwd and modify your shell the good old-fashioned way.

getent passwd $USER | sudo tee -a /etc/passwd

Then edit /etc/passwd and change your shell to your liking.
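
Alternatively, both steps can be collapsed into one (a minimal sketch, assuming zsh is installed and your current shell is /bin/bash):

getent passwd "$USER" | sed 's|/bin/bash$|/usr/bin/zsh|' | sudo tee -a /etc/passwd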

Set a monospace font

This isn’t really WorkSpace-specific, but I came across this issue while configuring my WorkSpace, so here it is anyway.

Not all desktop environments expose the ability to modify your system’s monospace font. In case you have this issue too, create a ~/.fonts.conf with the following XML format, replacing Terminus with the monospace font of your choice.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="pattern">
    <test qual="any" name="family">
      <string>monospace</string>
    </test>
    <edit name="family" mode="assign">
      <string>Terminus</string>
    </edit>
  </match>
</fontconfig>
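
To check that the override took, fc-match can resolve the alias for you:

fc-match monospace   # should now report the font you chose, e.g. Terminus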

Why not just use EC2?

Your WorkSpace should now be every bit as comfy as a local VM. But you might be wondering why anyone would bother going through all this special configuration when you could just run your own EC2 instance.

The answer is simple: the WorkSpace is cheaper and Just Works. I run the cheapest Amazon WorkSpace, which is 1 vCPU, 2 GB of RAM, 80 GB of root storage, and 10 GB of user storage. It runs me $21 a month. An equivalent EC2 instance plus EBS storage would run $24.23 a month. Why bother?

As for it Just Working, I don’t have to bother with tuning or securing VNC. The WorkSpaces client is fast, secure, and runs on everything — even Linux, if you install it under WINE. The VirtualBox-like automatic desktop resizing is nice too, which is something even NoMachine struggles to pull off. Having messed with other remote clients in the past, I find the WorkSpaces client to be a more seamless and consistent user experience.

With that, hopefully your WorkSpace is set up the way you like it. Happy hacking!

Amazon WorkSpaces are virtual desktops in the cloud. As designed, they’re an office sysadmin’s dream: a simple thin client, integration with Active Directory, and the ability to rebuild WorkSpaces from scratch with the click of a button. But how are they as a personal desktop? I decided to find out.

They can be incredibly useful

A WorkSpace is valuable so long as you 1. switch computers often and 2. have actual work to do. If you’re not a hobbyist developer, chances are a cloud drive service will suit your needs just fine.

But if you are a hobbyist devops kinda person, you will know the nightmare of keeping everything “in sync.” Notes are easy: there’s Evernote, Google Keep, Dropbox, good old-fashioned git; code can be committed to private repositories while in progress. But then there’s all the little things. Shell and IDE history. Bookmarks and tabs. Configuration files that are never up to date. Local webservers that you forget to start after every boot.

It’s never anything major, and it’s certainly nothing that stops you from getting stuff done, just a thousand niggling things that aren’t quite the way you expect them to be in the moment. And yes, all of these things are syncable, but most require individual solutions with varying degrees of robustness and headache.

And even when everything is running smoothly, who among us hasn’t had that oh crap! moment of kicking off a long-running script just before the Internet drops?

The brilliance of a WorkSpace can be summed up with the word continuity. You are always exactly where you left off. Your WiFi or your battery may have betrayed you, but your WorkSpace is eternal, ready for you to reconnect and get on with shit.

They need Active Directory

Yes, even the Linux WorkSpaces. It’s just how authentication is done. If you create a WorkSpace manually in the console, AWS will quietly make you a pleasant-sounding Simple AD domain called CORP.

This might ring some alarm bells. After all, nothing in the cloud is free! Except, as it turns out, this is.

As AWS explains on the directory pricing page:

If you use Amazon WorkSpaces, Amazon WorkDocs, or Amazon WorkMail in conjunction with AWS Directory Service, you will not be charged an additional fee for either Simple AD or AD Connector directories registered with these services, as long as you have active users of Amazon WorkSpaces, Amazon WorkDocs, or Amazon WorkMail.

Specifically, you need one active user to keep your small directory free. That’s you!

Since you’re going to have a domain created one way or another, it’s probably worth creating the domain yourself to avoid being stuck with a dreary, generic CORP. You can create your own domain from the ‘Directory Service’ console. Then, once you’ve made it, go to the WorkSpaces console and deploy a WorkSpace from your new domain.

Hourly WorkSpaces are a scam

As you peruse the pricing page for WorkSpaces, you might be tempted to choose an Hourly WorkSpace over an AlwaysOn one. This is because most things in the cloud are cheaper if they’re not on 24/7, and you’re probably not going to be logged in 24 hours a day anyway.

Don’t do it. The effective per-hour price is over five times higher with Hourly than with AlwaysOn. Even at a modest use of three hours per day, the cheapest Hourly Linux instance will run you more than the AlwaysOn monthly fee. You also get charged the initial flat Hourly fee immediately, before your first hour ticks over, and you will get charged it again if you re-provision your WorkSpace without waiting for the previous WorkSpace to be deleted. Trust me, I speak from experience.

That’s to say nothing of the awful wait for your Hourly WorkSpace to dredge itself out from whatever depths of the cloud it slumbers in, should you dare to rouse it. Hope you like staring at the word RESUMING with increasing incredulity. Just don’t bother with Hourly.

They will get rebooted

Part of my motivation to check out WorkSpaces was my impatience with Windows clobbering the Linux VMs I worked in whenever it felt like updating. Unfortunately, you don’t escape updates here either. Fortunately, they are completely predictable!

WorkSpace maintenance windows are between midnight and 4am Sunday in the WorkSpace’s timezone. I will happily take a predictable reboot period over the surprises of Windows 10. It would be nice if we could define the maintenance period ourselves, but at least AWS picked a time when people are the least likely to be awake, let alone online.

They are free tier eligible

If you still qualify, you can nab a decent-sized WorkSpace under the free tier plan. Go check it out!

As for me, I haven’t been free tier for some time, but I’m finding my $21/month Linux WorkSpace to be fast enough for my needs.

A WorkSpace of my own

While they’re not for everyone, I can report that I’ll probably be staying with my WorkSpace for the immediate future. It’s been an easy transition from having multiple Linux VMs playing configuration leapfrog to a single Linux VM running on someone else’s computer. Having a persistent desktop brings back a certain comfort in a world of increasingly transient devices.