
Posts tagged: "linux"

nix-wrangle: Manage nix sources and dependencies with ease

Update:

Doing things my own way is too much effort, I just use niv these days :)

My last post was more than a year ago, in which I described my long journey towards better package management with nix.

Well, it turns out that journey wasn’t over. Since then, I’ve been using the tools I created and found them unfortunately lacking.

So, I’ve built just one more tool, to replace those described in the previous post.

I’m writing this assuming you’ve read the previous post, otherwise I’d be repeating myself a lot. If you haven’t, just check out nix-wrangle and don’t worry about my previous attempts ;)

Step 7: nix-wrangle for development, dependency management and releases

After a lot of mulling over the interconnected problems described in that previous post (and discussions with the creators of similar tools), I came at it from a new direction. nix-wrangle is the result of that approach. I’ve been using it fairly successfully for a while now, and I’m ready to announce it to the world.

How does it differ from my previous state of using nix-update-source and nix-pin?

  • One JSON file for all dependencies, allowing bulk operations like show and update, as well as automating dependency injection (since all dependencies are known implicitly).
  • A splice command, which takes a nix derivation file and injects a synthetic src attribute, baking in the current source from the nix-wrangle JSON file. This lets you keep an idiomatic nix file for local development (using a local source), and automatically derive an expression using the latest public sources for inclusion in nixpkgs proper (see the sketch after this list).
  • Project-local development overrides. The global heuristic of nix-pin caused issues and some confusion; nix-wrangle supports local sources for exactly the same purpose, but with an explicit, project-level scope.
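
As a rough sketch of the workflow (show, update and splice are the commands described above, but I’m glossing over their exact arguments, and default.nix just stands in for whatever derivation file you keep locally):

$ # list every dependency recorded in the wrangle JSON file
$ nix-wrangle show

$ # bulk-update pinned sources to their latest versions
$ nix-wrangle update

$ # bake the current pinned source into an idiomatic nix file,
$ # producing an expression suitable for nixpkgs proper
$ nix-wrangle splice default.nix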

Please check it out if you use nix! And if you don’t use nix, check that out first :)

A journey towards better nix package development

Update:

Doing things my own way is too much effort, I just use niv these days :)


This post is targeted at users of nix who write / maintain package derivations for software they contribute to. I’ve spent a lot of time doing (and thinking about) this, although it’s probably quite a niche audience ;)

tl;dr: you should check out nix-pin and nix-update-source if you want to have a happy life developing and updating nix expressions for projects you work on.

I believe nix is a technically excellent mechanism for software distribution and packaging.

But it’s also flexible enough that I want to use it for many different use cases, especially during development. Unfortunately, there are a few rough edges which make a good development setup difficult, particularly when you’re trying to build expressions which serve multiple purposes. Each of these purposes comes with quite a few constraints of its own.

Running gnome-shell nested in a Xephyr window

TL;DR: install nix and Xephyr, then try this script.

I’ve worked on a GNOME Shell tiling window extension (shellshape) for 5 years now, since before the first release of gnome-shell. The shell itself is impressively extensible, and it’s pretty amazing that I can distribute a tiling window extension which is just a bunch of JavaScript. But the development process itself has always been awful:

  • you have to restart your window manager all the time, which typically loses the sizing and workspace affinity of every window, leaving you with a tangled mess of windows
  • if your extension doesn’t work then you have a broken shell
  • it is painfully easy to cause a segfault (from JavaScript code :( )
  • you’d better be editing your code in a tmux session so you can fix it from a virtual terminal
  • sometimes when restarting the shell, all your DBus-based integrations get messed up, so you can’t change volume, use multimedia keys or shut down
  • testing against a new gnome-shell version basically means either upgrading your OS or trying to do a fresh install in a VM, which is a whole new layer of annoyance.

Maybe I’m spoiled from working on projects which are easily run in isolation - I bet kernel developers scoff at the above minor inconveniences. But it makes development annoying enough that I dread it, which means I’ll only fix bugs when they get more annoying than development itself.

All of which is to say that this is freakin’ awesome. As of a couple days ago I’ve been able to run the latest version of GNOME Shell (which isn’t packaged for my distro) in a regular window, completely disconnected from my real session, running the development version of shellshape.

Big thanks go to whichever mysterious developers were responsible for fixing whatever gnome-shell / graphics / Xephyr issues have always prevented gnome-shell from running nested (it does now!), and to the nixpkgs folks maintaining the latest GNOME releases so that I can run new versions of GNOME without affecting the rest of my system.

Unfortunately I can’t guarantee it’ll work for you, since this stuff is heavily dependent on your graphics card and drivers; plus, it only seems to work with my system version of Xephyr, not the nixpkgs one. But if this interests you, you should definitely give it a go. You’ll need nix and Xephyr. If you don’t want to use nix, you can probably extract what you need from the script to run your system version of gnome-shell in a Xephyr window.
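
The heart of the trick is something like the following (a sketch only: the display number and resolution are arbitrary, the exact gnome-shell invocation is my assumption, and the real script does a lot more setup):

$ # start a nested X server on display :5
$ Xephyr :5 -screen 1920x1080 &

$ # run gnome-shell inside it, with its own dbus session
$ DISPLAY=:5 dbus-run-session gnome-shell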

Force a specific shell for sshd

I use fish-shell as my default shell on my own computer, because it’s a pretty nice shell.

Occasionally, though, this causes issues. Software tests in particular have a habit of sloppily running shell commands. Typically, this can be fixed by just being more explicit, e.g. execvp("bash", ["bash", "-c", command]) (or using execv* directly instead of going through a shell at all).

But one case I couldn’t figure out is SSH. When you’re testing an SSH client, the most reliable approach is to run some shell scripts over a local SSH session and check that they do what you expect. SSH has no way of passing a nice array of arguments; all you get is a string, which will be interpreted by the user’s shell.

If you want to use bash for an SSH command, many online resources will tell you to run “ssh bash ...”. But that won’t work for tests, which want to run real commands against a POSIX shell (including edge cases around argument parsing). Other suggestions include changing your default shell, but I don’t want to forsake my preferred shell just to appease some automated tests!

What you can do however, is funnel the whole shell string through to your desired shell using this slightly underhand sshd configuration:

ForceCommand sh -c 'eval "$SSH_ORIGINAL_COMMAND"'

This still requires a default shell that’s normal enough to not do any interpolation within single quotes, but that’s a much simpler requirement (and true of fish-shell).

For the Conductance test suite, we run an unprivileged sshd daemon with its own checked-in config during testing, so applying this globally is fine. But if you’re doing this on an actual SSH server, you might want to use a Match directive to make sure this rule only applies to trusted users (e.g. I haven’t tested how this works on a locked-down account with its shell set to /bin/nologin; it could conceivably create a security hole).
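
For example, something like this (an untested sketch; Match and ForceCommand are standard sshd_config directives, but the user name here is made up):

Match User testuser
    ForceCommand sh -c 'eval "$SSH_ORIGINAL_COMMAND"'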

NixOS and Stateless Deployment

If I had my way, I would never deploy or administer a linux server that isn’t running NixOS.

I’m not exactly a prolific sysadmin - in my time, I’ve set up and administered servers numbering in the low tens. And yet every single time, it’s awful.

Firstly, you quickly grow out of the notion of doing anything manually, ever. Any time you do something manually you create a unique snowflake, and then 3 weeks (or 3 years!) down the track you tear your hair out trying to recreate whatever seemingly-unimportant thing it is you did last time that must have made it work.

So you learn about automated deployment. There is no shortage of tools, and they’re mostly pretty similar. I’ve personally used a handful of them, and learned about many more in my quest not to have an awful deployment experience.

All of these work more or less as advertised, but all of them still leave me with a pretty crappy deployment experience.

The problem

Most of those are imperative, in that they boil down to a list of steps - “install X”, “upload file A -> B”, etc. This is the obvious approach to automating deployment, kind of like a shell script is the obvious approach to automating a process. It takes what you currently do, and turns it into one or more concrete files that you can modify and replay later.

And obviously, the entire problem of server deployment is deeply stateful - your server is quite literally a state machine, and each deployment attempts to modify its current state into (hopefully) the expected target state.

Unfortunately, in such a system it can be difficult to predict how the current state will interact with your deployment scripts. Performing the same deployment on two servers that started in different states can have drastically different results, usually with one of them failing.

Puppet is a little different, in that you don’t specify what you want to happen, but rather the desired state. Instead of writing down the steps required to install the package foo, you simply state that you want foo to be installed, and puppet knows what to do to get the current system (whatever its state) into the state you asked for.

Which would be great, if it weren’t a pretty big lie.

The thing is, it’s a fool’s errand to try and specify your system state in puppet. Puppet is built on traditional linux (and even windows) systems, with their stateful package managers and their stateful file systems and their stateful user management and their stateful configuration directories, and… well, you get the idea. There are plenty of places for state to hide, and puppet barely scratches the surface.

If you deploy a puppet configuration that specifies “package foo must be installed”, but then you remove that line from your config at time t, what happens? Well, now any servers deployed before t will have foo installed, but new servers (after t) will not. You did nothing wrong, it’s just that puppet’s declarative approach is only a thin veneer over an inherently stateful system.

To correctly use puppet, you would have to specify not only what you do want to be true about a system, but also all of the possible things that you do not want to be true about a system. This includes any package that may have ever been installed, any file that may have ever been created, any users or groups that may have ever been created, etc. And if you miss any of that, well, don’t worry. You’ll find out when it breaks something.

So servers are deeply stateful. And deployment is typically imperative. This is clearly a bad mix for something that you want to be as reproducible and reliable as possible.

Puppet tries to fix the “imperative” part of deployment, but can’t really do anything about the statefulness of its hosts. Can we do better?

Well, yeah.

Doing stuff when files change

There’s a common pattern in development tools to help with rapid feedback: you run a long-lived process that does a certain task. At the same time, it watches the filesystem and restarts or re-runs that task whenever a file that you’re monitoring changes.

This is an extremely useful tool for rapid feedback (which is why we’ve integrated nodemon into our Conductance app server), but is not very flexible - most tools are integrated into a web framework or other environment, and can’t easily be used outside of it. There are a few generic tools to do this kind of thing - I personally use watchdog a lot, but it’s sucky in various ways:

  • Configuring it to watch the right file types is hard
  • Configuring it to ignore “junk” files is hard, leading to infinite feedback loops if you get it wrong
  • It sees about 6 events from a single “save file” action, and then insists on running my build script 6 times in a row
  • It takes a bash string, rather than a list of arguments - so you have to deal with double-escaping all your special characters

And yet for all of those issues, I haven’t found a better tool that does what I need.

My build workflow

Lately, I’ve been making heavy use of my relatively-new build system, gup. It’s a bit like make, but way better in my humble-and-totally-biased opinion. But going to a terminal window and typing up, enter (or the wrist-saving alternative ctrl-p, ctrl-m) to rebuild things is tedious. And there’s no way I’m going to implement yet another watch-the-filesystem-and-then-re-run-something gup-specific tool, at least not until the lazy alternatives have been exhausted.

Obviously, my workflow isn’t just up, enter. It’s (frequently):

  • save file in vim
  • go to terminal
  • press up, enter
  • go to browser
  • refresh

And you know what? Monitoring every file is kind of dumb for this workflow. I don’t have gremlins running around changing files in my working tree at random (I hope), I almost always want to reload in response to me changing a file (with vim, of course). So why not just cooperate?

The simple fix

So I’ve written a stupid-dumb vim plugin, and a stupid-dumb python script. The vim plugin touches a file in $XDG_USER_DIR whenever vim saves a file. And then the script monitors just this file, and does whatever you told it to do each time the file is modified. The script automatically communicates with vim to enable / disable the plugin as needed, so it has no overhead when you’re not using it.
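
The script’s core loop amounts to something like this (a sketch using inotifywait rather than the actual implementation; the trigger path and build command here are made up):

$ # wait for vim to touch the trigger file, then re-run your command
$ trigger=~/.cache/vim-watch/trigger
$ while inotifywait -qq -e close_write "$trigger"; do your-build-command; done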

It’s called vim-watch, and I just gave you the link.

Addendum: restarting servers

While writing this post, I was a little disappointed that it still doesn’t quite replace tools that automatically restart a server when something changes, because it expects to run a build-style command that exits when it’s done - but servers run forever. Some unix daemons (like apache) restart themselves when you send them a HUP signal, but that’s not so common in app servers. So now huppy exists, too.

It’s a tiny utility that’ll run whatever long-lived process you tell it to, and when it receives a HUP signal it’ll kill that process (with SIGINT) if it’s still running, then restart it. It seems surprising that this didn’t exist before (maybe my google-fu is failing me) but on the other hand it’s less than 60 lines of code - hardly an expensive wheel to reinvent.

You can use it like:

$ # step 1: start your server
$ huppy run-my-server

$ # step 2: use vim-watch to reload the server on changes
$ vim-watch killall -HUP huppy

$ # Or, if you need to rebuild stuff before restarting the server,
$ vim-watch bash -c 'gup && killall -HUP huppy'