This document represents the results of my crazy experiment to manage my UNIX dotfiles using Literate Programming with Emacs Org mode. These dotfiles contain all my personal system configuration that I’m willing to make public (that is, that doesn’t contain passwords or other sensitive information). My hope is that this setup will allow me to both easily migrate between machines and keep track of why my configuration is the way it is.
Literate Programming is a programming methodology first described by Donald Knuth in which the software developer maintains not a source file containing documentation, but rather a prose explanation of the program that contains bits of the source code. The prose explanation can be woven into a typeset document or tangled into a source file. The benefits of this, when done properly, come primarily through ease of maintenance—the prose explanation can explain the reasons for data structure and algorithm selection and program organization in a way that even the best, most lucid source code cannot. In other words, true Literate Programming allows the programmer to explain why, not just what or how.
UNIX dotfiles are files that are generally stored within the user’s $HOME
directory that contain configuration for the user’s software. These files are
so-named because their filenames begin with a period, making them hidden from
most directory listings.
This file represents my attempt to maintain my dotfiles in a Literate Programming way.
In order to accomplish this, I’m using the same technique I used to manage my Emacs configuration: Org mode, and in particular, Org Babel. Org Babel piggybacks on the normal Org mode export functionality to weave documentation, and adds functionality to tangle the configuration files.
In short,

- to run, call org-babel-execute-buffer (C-c C-v b),
- to weave, call org-export-dispatch (C-c C-e) and select the output format, and
- to tangle, call org-babel-tangle (C-c C-v t).
Tangling this file will result in a directory structure usable with GNU Stow. To install Stow, you will want to install the `stow` package or equivalent; the command to do so for Debian is shown below:
DEBIAN_FRONTEND=noninteractive apt-get -y install stow
Once this file has been tangled, you can pick the functionality you need on the system using Stow. To install some set of functionality using Stow, run the following command from the top-level directory containing this file:
stow -t ~ -S feature1 feature2 feature3 …
where each of feature1, feature2, and so on are directories created from tangling this file. This command will install symbolic links to the tangled files under your home directory as needed.
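What Stow does under the hood is straightforward: for each file in a feature directory, it creates a symbolic link from the target directory back into the stow tree. The following is a minimal sketch of the effect, emulated with plain ln -s; the `bash` feature directory and its contents are illustrative, and real Stow also folds directories and detects conflicts.

```shell
# Sketch of what `stow -t ~ -S bash` effectively does for a hypothetical
# feature directory `bash/` containing a single dotfile.
mkdir -p demo/dotfiles/bash demo/home
printf 'alias ll="ls -l"\n' > demo/dotfiles/bash/.bashrc

# Stow would create a link like this when run with -t demo/home from
# demo/dotfiles:
ln -s "$PWD/demo/dotfiles/bash/.bashrc" demo/home/.bashrc

# The "installed" dotfile is just a link back into the stow tree:
ls -l demo/home/.bashrc
```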
I’ve been maintaining my Emacs configuration through Literate Programming in Org mode for a while now, and I’ve found it incredibly useful—although it takes more work to properly maintain the configuration, the payout has been extremely worthwhile. Because I’ve maintained a prose description of why my configuration is the way it is, and because I do not have to organize the source blocks in the same order as they end up in the tangled configuration, I can easily organize the Org file in such a way that all relevant blocks are close together, thus minimizing any long-distance dependencies. Where there are long-distance dependencies, I can hyperlink between them, and thus make sure that any changes I make do not result in stale documentation. Modifying this configuration is incredibly easy, especially compared to how my configuration was before.
In contrast, my dotfiles have been just that: dotfiles. For simple configurations, anything more is overkill. Recently, though, I’ve been running up against my dotfiles themselves. For example, to properly configure GPG, I need to make sure that my shell configuration, environment configuration, and Emacs configuration are in sync (not to mention making sure the multiple GnuPG 2.1 configuration files aren’t contradictory). To make it worse, lots of things depend on a properly configured GPG, and sometimes in subtle ways. I need to keep all these assumptions in mind when I modify my GPG configuration, and that can affect the way I structure my GPG configuration. In particular, if I modify something incorrectly, I may end up (and have ended up) with a machine that mysteriously wouldn’t let me log in, or wouldn’t let me encrypt and decrypt files. This is not something I enjoy fixing, especially when I have other, more pressing things to be doing.
Furthermore, this complexity multiplies as soon as I try to support multiple hosts with different software installed. On my primary laptop, for instance, I have X11 installed; I want X11 configuration, and that means modifying my shell configuration files. On my server, though, I don’t have (or want) X11 installed; I still want a lot of my shell configuration, though. I could maintain separate versions of the shell configuration, but that would mean keeping several almost-identical versions in sync, and that’s certain to result in problems down the line.
What if, though, I take the Literate Programming model I’ve been using to maintain my Emacs configuration and apply it to UNIX dotfiles? This allows me to centralize all my configuration, describe why my configuration looks the way it does, and specify parameters during the process of tangling that allow me to generate different hosts’ configurations, using different subsets of the configuration in this file. This doesn’t work perfectly, but it’s a big step up from how it was before.
Putting this all together, Literate Dotfiles solve the following problems for me:
- I can explain exactly why my configuration is the way it is inline with the actual configuration,
- I can group related configurations right next to each other in this Org file, regardless of whether they are spread across multiple physical configuration files for different software, and
- I can hyperlink between configurations that depend on one another when they cannot or should not be grouped together in this Org file.
Literate Dotfiles are not a completely novel idea (Howard Abrams’ dotfiles and Keifer Miller’s dotfiles are excellent prior art), but they are not very common, and many of the so-called “literate” dotfiles are merely blocks of code organized by headers—something that I can already do with comments and that does not warrant the added complexity of tangling the dotfiles in Org mode. In particular, and I write this mostly as a warning to myself, I do not want my dotfiles to look like those in this repository or this repository. It’s easy to fall into this trap, but there is nothing “literate” about these, and they provide almost none of the benefits I’ve described above.
Dotfiles are not meant to be forked, but I have no problem with anyone taking inspiration from this configuration. In particular, I hope that the prose in this file will help point out pitfalls that you may not be aware of. I’m not much of a fan of copy-paste configuration, as it’s just as great a way of propagating problematic configuration as it is beneficial configuration. I hope that the prose descriptions will help anyone looking through my dotfiles. I don’t think Literate Dotfiles are best for everyone, but they do have the nice benefit of making dotfiles easy to understand.
With that said, I do not want to place any restrictions on the use of the tangled dotfiles or woven documentation. As such, to the extent possible under law, I have waived all copyright and related or neighboring rights to this work. Please see the Creative Commons Zero 1.0 license for details.
I need to make some minimal assumptions about the systems I’m running on.
Nowadays, if I stick to GNU/Linux, I can assume Systemd is the init system.
Systemd has some very nice features, but the most relevant here is the ability
to run per-user Systemd instances. This allows me to manage certain tasks that
I might otherwise have needed to use cron or a $HOME/.bashrc for, in the same
way I can manage system services, with all the same process-tracking benefits.
While this will make porting this dotfiles master file to something like Mac OS
X or FreeBSD more difficult, I think this is a worthwhile price to pay for the
moment, as I am almost exclusively using GNU/Linux systems, and I can live
without a lot of these when I’m on a Macintosh or *BSD system.
On top of this, I have a few requirements of my own for my dotfiles:
- We live in a sad world where dotfiles clutter the $HOME directory. This makes them hard to manage, hard to move, and hard to differentiate from transient data or application save data. Although the XDG Base Directory Specification aims to fix this by creating separate directories for config (generally read-only), data (generally read-write), and cache (safe to delete), there are many pieces of software that don’t follow it by default and have to be coddled into doing so using environment variables or special command-line flags. This is unfortunate, but it’s more important to me to keep my $HOME directory as clean as I can. Here are some links that describe how to do this:
  - Super User: What are the steps to move all your dotfiles into XDG directories?
  - grawity’s Dotfile Notes
  - Move your config files to $XDG_CONFIG_HOME, by Philipp Schmitt
  - woegjiub’s xdg.sh script
  - Arch Linux Forums: XDG Base Directory support
- Sometimes I install software under the $HOME/.local tree, so I want to make sure the $PATH and all related variables will look in the right place for binaries, manpages, headers, libraries, and so forth.
In the old days, the way to set your environment variables was to modify a shell script like .profile or .bashrc, which is run whenever a new shell is launched. Because programs were usually launched from shells, this was good enough. Nowadays, however, more and more of the programs you interact with are not launched from shells, but rather through systemd or other daemons, so they can take advantage of cgroups, namespaces, and other resource-limiting and security technologies. To solve this, a new way of configuring the environment, called environment.d, has been introduced. While this mechanism gives a little less flexibility than a full bash script (it’s not possible, for instance, to set environment variables in a loop), it gives a clean configuration file format that can be shared between user daemons and shells.
For users, the environment is built up by reading configuration files in a handful of directories; the one we as users have control over is the environment.d subdirectory in our .config directory.
The XDG Base Directory variables define where configuration, cache, and data
files for the user should be stored. While this has the nice effect of cleaning
up the home directory, moving dotfiles into subdirectories (something I like
very much), it has an even more important benefit: because it separates
configuration files, cache files, and important data files into separate
folders, it greatly simplifies backup and recovery of these files. Migrating to
a new laptop, for instance, should be as simple as installing the software and
copying over the configuration and data. With the typical dotfiles approach,
there’s nothing that prevents cached data—data that isn’t essential and could
potentially contain system-specific data that would not transfer well—from being
written straight to the home directory. In essence, this mirrors quite closely how UNIX systems break the file system into directories that store configuration (/etc), variable and cached data (/var), shared data (/usr/share), and so forth.
Let’s create a file $HOME/.config/environment.d/00-xdg.conf that, when read, sets these variables correctly. The full listing of this file is shown below:
<<conf-xdg>>
But what are the variables we need to configure? The XDG Base Directory specification lists the following environment variables:

- There is a single base directory relative to which user-specific data files should be written. This directory is defined by the environment variable $XDG_DATA_HOME.
- There is a single base directory relative to which user-specific configuration files should be written. This directory is defined by the environment variable $XDG_CONFIG_HOME.
- There is a single base directory relative to which user-specific executable files should be written. This directory is defined by the environment variable $XDG_BIN_HOME.
- There is a single base directory relative to which user-specific architecture-independent library files should be written. This directory is defined by the environment variable $XDG_LIB_HOME.
- There is a set of preference-ordered base directories relative to which executable files should be searched. This set of directories is defined by the environment variable $XDG_BIN_DIRS.
- There is a set of preference-ordered base directories relative to which library files should be searched. This set of directories is defined by the environment variable $XDG_LIB_DIRS.
- There is a set of preference-ordered base directories relative to which data files should be searched. This set of directories is defined by the environment variable $XDG_DATA_DIRS.
- There is a set of preference-ordered base directories relative to which configuration files should be searched. This set of directories is defined by the environment variable $XDG_CONFIG_DIRS.
- There is a single base directory relative to which user-specific non-essential (cached) data should be written. This directory is defined by the environment variable $XDG_CACHE_HOME.
- There is a single base directory relative to which user-specific runtime files and other file objects should be placed. This directory is defined by the environment variable $XDG_RUNTIME_DIR.
The variables $XDG_BIN_DIRS, $XDG_LIB_DIRS, $XDG_DATA_DIRS, and $XDG_CONFIG_DIRS contain system paths, and they should be set by the system (or applications should use the defaults defined in the specification). Furthermore, $XDG_RUNTIME_DIR is set by the Systemd PAM module, so we don’t need, or want, to set it ourselves.
The remaining variables (namely, $XDG_DATA_HOME, $XDG_CONFIG_HOME, $XDG_BIN_HOME, $XDG_LIB_HOME, and $XDG_CACHE_HOME), though, should be set in our environment configuration. I use the following, which happen to be the defaults anyway:
XDG_DATA_HOME=$HOME/.local/share
XDG_CONFIG_HOME=$HOME/.config
XDG_BIN_HOME=$HOME/.local/bin
XDG_LIB_HOME=$HOME/.local/lib
XDG_CACHE_HOME=$HOME/.cache
As a note, we have to be careful here, as the XDG Base Directory Specification requires us to use absolute paths. Here, we rely on variable expansion: the $HOME variable is interpolated into each path for us. Because $HOME is an absolute path, the resulting paths will all be absolute, too.
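Consumers of these variables are expected to fall back to the defaults above when a variable is unset. When writing scripts against the XDG directories, a sketch like the following (using POSIX parameter expansion) keeps them working whether or not this environment file has been loaded:

```shell
# Resolve the XDG user directories with the spec-mandated defaults as
# fallbacks, using POSIX ${VAR:-default} expansion.
xdg_config_home="${XDG_CONFIG_HOME:-$HOME/.config}"
xdg_data_home="${XDG_DATA_HOME:-$HOME/.local/share}"
xdg_cache_home="${XDG_CACHE_HOME:-$HOME/.cache}"

echo "config: $xdg_config_home"
echo "data:   $xdg_data_home"
echo "cache:  $xdg_cache_home"
```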
The semantics of these environment variables naturally lead us to a backup and recovery strategy:

- $XDG_DATA_HOME contains user-specific data, so we generally want to back it up. Not all of the data in this directory is important, but some is. This may contain sensitive information, so we should encrypt our backups.
- $XDG_CONFIG_HOME contains user-specific configuration, which we want to back up. Hopefully, this contains no sensitive information, but I don’t trust that no passwords or secrets will make it into it, so we encrypt the backups just in case.
- $XDG_BIN_HOME and $XDG_LIB_HOME are for user-installed software that may be system-specific, so we don’t want to back them up. To recover, we reinstall the software.
- $XDG_CACHE_HOME is non-essential data: files that store information locally for performance. These can be deleted at any time and can go out of date, so there is no point in backing them up. Software that uses them should regenerate them on its own.
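As a rough sketch of this strategy, the following function archives only the directories worth keeping. The function name, the archive argument, and the idea of piping the result through gpg afterwards are all illustrative; this is not a hardened backup script.

```shell
# Back up the directories worth keeping (data and config), skipping bin,
# lib, and cache entirely.  A real run would encrypt the result.
backup_user_dirs() {
    # $1 is the output archive name (illustrative).
    data="${XDG_DATA_HOME:-$HOME/.local/share}"
    config="${XDG_CONFIG_HOME:-$HOME/.config}"
    tar -cf "$1" -C / "${data#/}" "${config#/}"
    # then, e.g.: gpg --encrypt --recipient <your-key> "$1"
}
```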
Just configuring this should be enough, but it’s not. There is an annoying amount of software that does not use these directories properly, or at all. We do our best here to configure the problematic software to use them, but we can’t catch all of it.
TeX stores its cache right under the home directory by default, so we set the following environment variable to move it to the cache directory:
TEXMFVAR=$XDG_CACHE_HOME/texmf-var
In addition to (or perhaps complementary to) the XDG Base Directories, we also use the .local tree as an install path for user-local software. Because .local mirrors /usr, this works very well. It’s not quite as simple as adding the binary path to $PATH, though; there are a number of variables we need to set for the software to work correctly.
# Add software installed under `~/.local` tree.
PATH=$HOME/.local/bin:$PATH
MANPATH=$HOME/.local/share/man:$MANPATH
CFLAGS=-I$HOME/.local/include $CFLAGS
CXXFLAGS=-I$HOME/.local/include $CXXFLAGS
LDFLAGS=-L$HOME/.local/lib -Wl,-rpath=$HOME/.local/lib $LDFLAGS
LD_RUNPATH=$HOME/.local/lib:$LD_RUNPATH
PKG_CONFIG_PATH=$HOME/.local/lib/pkgconfig:$PKG_CONFIG_PATH
ACLOCAL_FLAGS=-I $HOME/.local/share/aclocal/
Unfortunately, some applications don’t automatically support Wayland. For these, we set environment variables to force them to use Wayland.
MOZ_ENABLE_WAYLAND=1
Unfortunately, this is not enough. When starting a Wayland session with GNOME on Debian, the PATH environment variable set in environment.d is overwritten by a static string (see this bug; no one wants to claim it as their own fault…). We’ll need to fix this by reloading the environment in our .profile configuration, unfortunately. The way I do this is taken from this answer, which gives a solution that doesn’t rely on Bash-isms, and thus should work well in a real .profile.
set -a
. /dev/fd/0 <<EOF
$(/usr/lib/systemd/user-environment-generators/30-systemd-environment-d-generator)
EOF
set +a
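The trick here is set -a: between set -a and set +a, every variable assignment is automatically marked for export, so sourcing the generator’s plain KEY=value output exports it all. A self-contained sketch of the mechanism, with a stand-in file in place of the real generator:

```shell
# `set -a` auto-exports every assignment until `set +a`, so sourcing
# plain KEY=value lines exports them all.  demo-env.conf stands in for
# the systemd environment generator's output.
printf 'DEMO_VAR=hello\n' > demo-env.conf

set -a
. ./demo-env.conf
set +a

# The variable now reaches child processes' environments:
sh -c 'echo "$DEMO_VAR"'   # prints "hello"
```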
The UNIX shell is at the center of the UNIX CLI experience, so it makes sense to begin with this. There are two particular shells I care about: Bash and standard POSIX shell. The former is what I use for interactive shells outside of Emacs, whereas the latter is what I strive to write my scripts for (so, among other things, they support *BSDs and other UNIXen without modification). This configuration is structured so that I can configure both—although I keep POSIX shell completely vanilla with regard to its functionality, so I don’t get any unexpected surprises when moving my scripts to a new host.
On Debian systems, the POSIX shell is Dash, the Debian Almquist Shell, by default. This shell is POSIX compliant and very lightweight. Other systems use Bash as the POSIX shell, which, as long as it’s configured correctly, is also fine.
To orient readers, my shell configuration is similar to that described in the article _Getting Started With Dotfiles_, by Lars Kappert.
Shell configuration is done in three files, whose semantics are described below:
.profile - This file is sourced by a login shell, which is the root process of almost everything run by the user (with the exception of Systemd units and cron jobs, which are run from a daemon not spawned from the login shell). Because all shells, not just Bash, source this file, we want to avoid anything Bash-specific here.

.bashrc - This file is sourced by interactive Bash shells that are not login shells, so it should contain only configuration that we use while interacting with a shell (as opposed to, for example, configuration that might affect shell scripts). These are mostly conveniences, and are necessarily Bash-specific.

.bash_profile - This file is sourced by Bash in preference to .profile for login shells, but is otherwise the same.
The above descriptions lead to the following plan: we will use .profile for one-time configuration for each login, such as environment variables that are needed by every program; .bashrc will contain Bash-specific configuration that is sourced by every new interactive shell (things like aliases and functions, which aren’t inherited by subshells anyway); and .bash_profile will simply source both .profile and .bashrc, which means interactive Bash login shells will get both the non-Bash-specific configuration and the Bash-specific configuration.
So, let’s take a look at these three configuration files:
# Source installed login shell configurations:
<<sh-profile>>
# Source installed interactive shell configurations:
<<sh-bashrc>>
# Source login shell configuration:
. .profile
# Only source .bashrc when shell is interactive:
case "$-" in *i*) . .bashrc ;; esac
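The effect of this chain can be sketched with stand-in files (the demo.* names are illustrative): .profile is always sourced, while .bashrc only loads when $- contains i, i.e. when the shell is interactive.

```shell
# Simulate the sourcing chain with stand-in files.  Each file appends a
# marker to $loaded so we can see what was sourced.
printf 'loaded="$loaded profile"\n' > demo.profile
printf 'loaded="$loaded bashrc"\n'  > demo.bashrc
cat > demo.bash_profile <<'EOF'
. ./demo.profile
case "$-" in *i*) . ./demo.bashrc ;; esac
EOF

# This script itself is non-interactive, so only .profile is picked up:
. ./demo.bash_profile
echo "loaded:$loaded"
```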
I store aliases in the $HOME/.config/sh/alias.sh file. These aliases apply only to interactive shells, not to scripts; their sole purpose is to make my interactive use more convenient. Here is a full listing of that file:
<<sh-alias>>
We also want to make sure to source this file from .bashrc:
[ -r $HOME/.config/sh/alias.sh ] && . $HOME/.config/sh/alias.sh
The default ls does not automatically print its results in color when the terminal supports it, and it gives rather unhelpful values for file sizes. For usability, we change the default in interactive shells to use color whenever the output terminal supports it and to display file sizes in human-readable format (e.g., 1K, 234M, 2G). Once we’ve done that, we can also add the common and useful ll alias, which displays a long listing format, sorted with directories first.
alias ls="ls -h --color=auto"
alias ll="ls -lv --group-directories-first"
We also define some aliases to easily start Emacs from the terminal.
In addition to aliases, I use some shell functions for functionality that is
more complicated than what aliases can provide but not complicated enough to
warrant a separate shell script. These functions are stored in
$HOME/.config/sh/function.sh
, reproduced below:
<<sh-function>>
Again, we source it from .bashrc:
[ -r $HOME/.config/sh/function.sh ] && . $HOME/.config/sh/function.sh
The functions I use most commonly manage my $PATH variable, the environment variable that contains a colon-separated list of directories in which to look for a command to be executed. Modifying it manually—especially removing directories from it—is tedious and error-prone; these functions, which I found in a StackOverflow question, have served me well:
path_append() { path_remove $1; export PATH="$PATH:$1"; }
path_prepend() { path_remove $1; export PATH="$1:$PATH"; }
path_remove() { export PATH=`<<sh-function-pathremove>>`; }
The path_append() and path_prepend() functions are rather self-explanatory, but the path_remove() function may not be. In fact, it’s slightly modified from the version in the StackOverflow question linked above. Let’s break it down. Our goal is to export the $PATH variable with a new value, so let’s look inside the backtick-quoted string to see what is run:

- First, we print out the current $PATH, which we will use as input. The $PATH variable should not end in a newline, which gives us two options: echo -n, which is not completely portable, or printf. In the name of portability, we will choose the latter.
printf '%s' "$PATH"
- We want to parse this output into a series of records separated by colons. To do this, we turn to awk. The awk RS variable stores the line/record separator used in parsing, and the ORS variable stores the line/record separator used in printing. We can use these two variables to piggyback on awk’s parsing capabilities, setting both of them to colons. Awk can then loop over the parsed directory names to determine whether any of them is the directory we are trying to remove; if it is, we ignore it.
awk -v RS=: -v ORS=: '$0 != "'$1'"'
The expression used here to filter is a little opaque, but works as follows:
  - We have an initial, single-quoted string in which $0 is an awk variable meaning “this record”. This string ends with a double quote.
  - Then, we have a shell variable that interpolates to the first argument to our function.
  - Finally, we have a third string that closes the double quote opened in the first string.
- Unfortunately, awk outputs the value of ORS at the end of the string, too, so we need to chop it off. The following sed invocation does that:

  sed 's/:$//'
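Putting the three stages together, the pipeline that the path_remove body presumably tangles to looks like the following (shown with $(…) instead of backticks for readability, and exercised on a made-up $PATH):

```shell
# The three stages assembled into a standalone path_remove, equivalent
# to the function above.
path_remove() {
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '$0 != "'$1'"' | sed 's/:$//')
    export PATH
}

saved_path=$PATH                # so we can restore the real PATH later
PATH=/usr/bin:/home/user/.local/bin:/bin
path_remove /home/user/.local/bin
echo "$PATH"   # prints "/usr/bin:/bin"
```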
In order to configure our Bash prompt, we make a new file, $HOME/.config/sh/prompt.sh. This file’s job is simply to set the prompt as we want it when it is sourced.
Bash prompt configuration is contained within the $PS1 environment variable, which is extremely terse and hard to work with. The following is my $PS1 configuration:
white='\e[0;37m'
greenbold='\e[01;32m'
bluebold='\e[01;34m'
reset='\e[0m'
# Set prompt
export PS1="<<sh-prompt>>"
# Set xterm title
case "$TERM" in
xterm*|rxvt*) export PS1="<<sh-prompt-title>>$PS1" ;;
*) ;;
esac
unset white
unset greenbold
unset bluebold
unset reset
This will produce a shell prompt that looks as follows:
hostname:~(0)$
The first few lines define ANSI color codes that we will use in the prompt. Because these are unset later, we don’t need to worry about them polluting our environment when we source this file. When we use these color codes, we will enclose them in \[ and \], which tell Bash not to count the enclosed text when moving the cursor. We can use the variables within our $PS1 variable, and they will be interpolated correctly within the double-quoted string.
Let’s break the prompt down some:

- We start out by resetting the color setting of the terminal, just in case some rogue command does not clean up after itself:

  \[$reset\]

- The next part of the $PS1 variable prints out the hostname (\h) in a bold, green color, and then prints out a white colon:

  \[$greenbold\]\h\[$reset\]\[$white\]:

  In the past, I’ve also included the username (\u) before the hostname, but except in specific cases (perhaps when logging in as root, which I tend to disable), I don’t really care about seeing it on every prompt. On the other hand, I often have multiple terminal windows open to multiple different hosts, and I find it easy to get confused, so I always display the hostname.

- The third part of the $PS1 variable prints out the current working directory in a bold, blue color:

  \[$reset\]\[$bluebold\]\W

  The \W escape here only prints out the name of the working directory, not the full path to it (that can be done with the \w escape). I want my prompt to be relatively short, so I can fit the command on the same line as the prompt, and when I want to know the full path, I can always use the pwd command.

- Then, we print out the exit code of the last command run in parentheses, in plain white:

  \[$reset\]\[$white\](\$?)

  The exit code of the last command run is contained in the $? variable. I’ve found this functionality very useful, because I’ve run across tricky commands that don’t print a useful message to stderr to indicate that they’ve failed, but just die with some nonzero exit code.

  Notice that we have to escape the dollar sign of the $?, because otherwise it would be expanded when we set the PS1 variable initially, not expanded each time the shell prompt is printed!

- The final part of the $PS1 variable prints out the actual prompt, a dollar sign and space, and resets the color state:

  \\$ \[$reset\]

  We need to double-escape the dollar sign, because otherwise it would be considered an environment variable expansion when printing the prompt. We really want a literal dollar sign here.

Concatenating these together will set our prompt as we want it.
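For reference, concatenating the pieces above gives the full string that the prompt block presumably expands to; the color definitions are repeated here so the sketch is self-contained.

```shell
# ANSI color definitions, as above.
white='\e[0;37m'
greenbold='\e[01;32m'
bluebold='\e[01;34m'
reset='\e[0m'

# The prompt pieces concatenated: reset, green hostname, white colon,
# blue working directory, white exit code, and the literal dollar sign.
PS1="\[$reset\]\[$greenbold\]\h\[$reset\]\[$white\]:\[$reset\]\[$bluebold\]\W\[$reset\]\[$white\](\$?)\\$ \[$reset\]"
```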
After that, we want to make sure that xterms which are hosting our shell session (potentially xterms on a different machine, that are connecting over SSH) have a useful title. Here, I elect to display the username as well as the hostname and working directory. Unlike in a shell prompt, changing the title will not take up valuable screen real-estate, so this extra information doesn’t have much cost. As long as the terminal is an xterm (which we check by pattern matching), we prepend a string to the prompt which is displayed on the title bar, but otherwise not shown. The string has the following form:
<<sh-prompt-title>>
Let’s look at how this breaks down:

- We start with the same \[ that we used earlier to prevent Bash from considering this text when moving the cursor. We will close this at the end of the title text.
- Then, we add the special escape sequence that an xterm detects to set the title:

  \e]0;

- Then, we set the title using the same escape sequences we used for the prompt above, with the addition of \u, which expands to the current user:

  \u@\h: \W

- Finally, we tell the xterm that the title text is done and close the \[ we opened earlier:

  \a\]
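Combining these pieces gives the full title prefix that the title block presumably expands to; the title_prefix variable name is illustrative.

```shell
# The full xterm title prefix, assembled from the pieces above.  It is
# prepended to $PS1 for xterm-compatible terminals.
title_prefix='\[\e]0;\u@\h: \W\a\]'
PS1="$title_prefix$PS1"

printf '%s\n' "$title_prefix"
```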
Now that we’ve set the prompt and xterm title, let’s make sure to source this configuration from .bashrc:
[ -r $HOME/.config/sh/prompt.sh ] && . $HOME/.config/sh/prompt.sh
Finally, we’re left with some interactive shell customizations that don’t fit under any other heading. These are either set in or conditionally sourced from $HOME/.config/sh/interactive.sh, which is listed below:
<<sh-interactive>>
As these are interactive, Bash-specific customizations, we want to source them from our .bashrc by adding the following line to that file:
[ -r $HOME/.config/sh/interactive.sh ] && . $HOME/.config/sh/interactive.sh
To enable completion in Bash, we source one of two files:
if [ -r /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -r /etc/bash_completion ]; then
. /etc/bash_completion
fi
This configuration is taken from the default .bashrc shipped with Debian; the former path is the one the bash-completion package installs to, and other packages can extend it by dropping their own completion scripts alongside it.
Bash has command history support that allows you to recall previously run commands and run them again, even in a later session. Command history is stored both in memory and in a special file written to disk, $HOME/.bash_history.
I don’t care so much about my command history being written to disk, because my
primary use case is to save on typing during an interactive session. Because of
this, we want to unset the $HISTFILE
variable. This will prevent the command
history from being written to disk when the shell is exited.
unset HISTFILE
When saving command history in memory, I want to prevent two things from being
added: lines beginning with whitespace (in case we have a reason to run a
command and not remember it) and duplicate lines (which are just a nuisance to
scroll through). This can be done by setting the $HISTCONTROL
environment
variable to ignoreboth
. We don’t want this environment variable to leak into
subshells (especially noninteractive subshells), so we don’t export
it.
HISTCONTROL=ignoreboth
We also want to set a few shell options to control how history is stored:

- cmdhist saves all lines of a multi-line command in a single history entry, which makes it easy to recall and modify multi-line commands that we’ve run.
- histreedit allows a user to re-edit a failed history substitution instead of clearing the prompt.
shopt -s cmdhist
shopt -s histreedit
Finally, we have the following configuration options that don’t fit anywhere else.
We want to check the size of the terminal window after each command and, if
necessary, update the values of $LINES
and $COLUMNS
. If any command uses
the size of the terminal window to intelligently format output (think ls
selecting the number of columns to output filenames in), this will give it
up-to-date information on the terminal size. The shell option checkwinsize
does this for us.
shopt -s checkwinsize
GNU Readline is a library used by many programs for interactive command editing and recall. Most importantly for my purposes, it is used by Bash, so this could be considered as an extension of our shell configuration.
Let’s start off by moving the configuration to the correct XDG Basedir by adding
this to the xdg.sh
script we detail in the XDG Basedirs section.
INPUTRC=$XDG_CONFIG_HOME/readline/inputrc
The actual $XDG_CONFIG_HOME/readline/inputrc
file is shown and described
below:
<<inputrc>>
Our first configuration is to make TAB autocomplete regardless of the case of the input. This is somewhat of a trade-off, because it gives worse completion when the case of a prefix really does disambiguate. I find, in practice, this is rather rare, and even rarer in my primary Readline application, Bash.
set completion-ignore-case on
I find the default behavior of Readline with regard to ambiguous completion to be very annoying. By default, Readline will beep at you when you attempt to complete an ambiguous prefix and wait for you to press TAB again to see the alternatives; if the completion is ambiguous, I want to be told of the possible alternatives immediately. Enabling the show-all-if-ambiguous setting accomplishes this.
set show-all-if-ambiguous on
Another setting we want is to not autocomplete hidden files unless the pattern explicitly begins with a dot. Usually I don’t want to deal with hidden files, so this is a good trade-off.
set match-hidden-files off
Also, we want to normalize the handling of directories and symlinks to directories, so there appears to be no difference. The following setting immediately adds a trailing slash when autocompleting symlinks to directories.
set mark-symlinked-directories on
Finally, we add more intelligent UP/DOWN behavior, using the text that has already been typed as the prefix for searching through command history.
"\e[B": history-search-forward
"\e[A": history-search-backward
PGP is annoying and hard to use properly. GnuPG is an implementation of PGP that is also annoying and hard to use properly. I do my best to use other interfaces that work on top of GnuPG (of which there are many), so that I have to deal with it as little as possible.
Not only is GnuPG hard to work with, but it’s also hard to configure properly. Recent versions of GnuPG have changed things for the better, but in incompatible ways. The following configuration makes everything work out, to the best I can tell, but I live in fear that some day something may break without me knowing. It’s happened before.
First, we change the configuration directory for GnuPG to one within the XDG Base Directories:
GNUPGHOME=$XDG_CONFIG_HOME/gnupg
This change seems innocuous. However, GnuPG automatically generates the socket names for its internal gpg-agent daemon based on this variable. What this means is that the default systemd management for sockets will not work correctly, because the packaged units assume the old socket names, and don’t read the GNUPGHOME variable to generate the correct ones. So, we need to modify the systemd unit files ourselves and correct the socket names. We do this by copying the unit files included in the Debian package to a user directory we control and modifying them. Luckily, the socket names are built from a hash of the GNUPGHOME directory, so at least we’re hard-coding a constant:
[Unit]
Description=GnuPG cryptographic agent and passphrase cache (access for web browsers)
Documentation=man:gpg-agent(1)
[Socket]
ListenStream=%t/gnupg/d.3xhj9kn7wba5eojhjbnkjr3n/S.gpg-agent.browser
FileDescriptorName=browser
Service=gpg-agent.service
SocketMode=0600
DirectoryMode=0700
[Install]
WantedBy=sockets.target
[Unit]
Description=GnuPG cryptographic agent and passphrase cache (restricted)
Documentation=man:gpg-agent(1)
[Socket]
ListenStream=%t/gnupg/d.3xhj9kn7wba5eojhjbnkjr3n/S.gpg-agent.extra
FileDescriptorName=extra
Service=gpg-agent.service
SocketMode=0600
DirectoryMode=0700
[Install]
WantedBy=sockets.target
[Unit]
Description=GnuPG cryptographic agent and passphrase cache
Documentation=man:gpg-agent(1)
[Socket]
ListenStream=%t/gnupg/d.3xhj9kn7wba5eojhjbnkjr3n/S.gpg-agent
FileDescriptorName=std
Service=gpg-agent.service
SocketMode=0600
DirectoryMode=0700
[Install]
WantedBy=sockets.target
[Unit]
Description=GnuPG cryptographic agent (ssh-agent emulation)
Documentation=man:gpg-agent(1) man:ssh-add(1) man:ssh-agent(1) man:ssh(1)
[Socket]
ListenStream=%t/gnupg/d.3xhj9kn7wba5eojhjbnkjr3n/S.gpg-agent.ssh
FileDescriptorName=ssh
Service=gpg-agent.service
SocketMode=0600
DirectoryMode=0700
[Install]
WantedBy=sockets.target
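Hand-editing each of the four units above means touching the ListenStream= line in each, so the rewrite can also be scripted. The following is a sketch in shell and sed; the hashed d.… directory component is the one from my machine, and on another machine the real socket directory can be queried with gpgconf --list-dirs socketdir:

```shell
# Sketch: rewrite a gpg-agent socket unit's ListenStream= path to point
# into the hashed per-GNUPGHOME socket directory.  The d.… component is
# machine-specific; `gpgconf --list-dirs socketdir` reports the real one.
sockdir='%t/gnupg/d.3xhj9kn7wba5eojhjbnkjr3n'
rewrite_listenstream() {
  sed "s|^ListenStream=.*/\(S\.gpg-agent[^/]*\)\$|ListenStream=${sockdir}/\1|"
}

# Demonstrate on one line of a stock unit file:
printf 'ListenStream=%%t/gnupg/S.gpg-agent.ssh\n' | rewrite_listenstream
```

In practice this would be run over each copied unit file; the sed expression preserves whichever S.gpg-agent* socket name the unit declares.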
# SSH_AGENT_PID=
# SSH_AUTH_SOCK=$XDG_RUNTIME_DIR/gnupg/S.gpg-agent.ssh
# GSM_SKIP_SSH_AGENT_WORKAROUND=true
My current email setup is probably the biggest improvement I have ever made for my productivity. I have, in the past, used GNOME Evolution for email, which I find to be a really nice program. However, it started to balk at the number of emails I had. Sometimes, its database would become corrupted, and I would have to download all my mail again. Furthermore, as I started using Emacs Org Mode to manage my schedule and notes, I was finding I was only using Evolution for mail. Naturally, I started looking for a more stable and Emacs-compatible solution.
There were some important considerations I had when researching a mail setup:
- I want to be able to work offline, and that includes reading (and even sending) mail! Sometimes this is born of necessity, such as when I’m on a plane or a bus; sometimes it is self-imposed. When I get back online, I want the mail I’ve queued up to be sent to be actually propagated to a server, and all the mail that I’ve received in the meantime to be accessible. Note that this necessitates both a copy of all mail locally on my machine and a sent mail queue.
- I have a lot of email, and managing it all manually is a big chore. I want to be able to search for mail quickly and easily, and I want this to be my primary means of using email.
- I don’t want to be roped into any specific tools. Whenever possible, I want to be using common, open standards. For one, this adds some redundancy to the system, which is a really good thing for such an important tool—that is, if one part of the system breaks somehow, it doesn’t bring down everything else, and I can still potentially work. Furthermore, this means I can easily swap parts of the system out. I’ve done this in the past, swapping mu for notmuch and OfflineIMAP for isync. In the future, I may look at imapfw, which is by the same author as OfflineIMAP—it just doesn’t look stable enough at the moment.
I went through several setups before settling on my current one, which centers on the following loosely-coupled tools:
- isync: a tool for synchronizing a local Maildir with an IMAP server. Because isync only connects to the server intermittently to sync a local copy with a remote copy, I don’t have to have an internet connection at all times to read my mail, satisfying consideration 1 above. Compared to the alternative in the same space, OfflineIMAP, I’ve found isync very fast, even with all the mail I have; this satisfies condition 2. Finally, isync only uses the IMAP4 protocol and the widely-used Maildir format, meaning I’m not locked into it if I want to switch or do something novel with my email, satisfying condition 3.
- lieer: a tool for synchronizing a local notmuch Maildir with Gmail tags.
- msmtp: a sendmail-compatible tool for sending emails through a remote SMTP server. Packaged with it in the Debian archive is a nice script called msmtpq, which, if we can’t send mail to the remote server (if, for instance, we’re not connected to the network), queues the mail locally to be sent later. In doing so, it satisfies my first criterion above, and since it’s an SMTP tool, it satisfies criterion 3 as well. Fortunately, I don’t send all that much mail, so it’s not important for this to scale to a large number of messages (although it might).
- notmuch: a Maildir indexer, which provides lightning-fast tagging and searching for email messages. The search-based paradigm for email is how email should be, as it takes so little maintenance. notmuch only needs a local copy of your email (condition 1), uses a Xapian database stored in your Maildir (condition 3), and is incredibly fast (even faster than its competitor, mu, which I used for some time) and able to cope with very, very large amounts of email (condition 2).
All of these tools combine together to make an incredibly efficient email workflow. To set each of these tools up, though, we need to do some preliminary work.
Let’s create a directory to store our emails first:
mkdir -p ~/Retpoŝtoj
This section describes general configuration of each of the components of the setup. The next section gives the configuration for each account I use.
As described above, the tool we will use to sync mail to and from our IMAP servers is isync, a fast IMAP and Maildir synchronization program written in C. To get started, we need to make sure we have the isync package installed. Let’s install it:
DEBIAN_FRONTEND=noninteractive apt-get -y install isync
Configuration of isync is not too hard, but there are some caveats. As we discussed in the XDG Basedirs section, our ideal is to move all configuration files out of our home directory, usually by setting an environment variable. isync does not support such an environment variable, though. Fortunately, its mbsync executable does support a command-line flag telling it where to look for its configuration file. As long as we only use isync with this flag, we’ll be fine (and we’ll make sure of this later). This means we can place our configuration in a $XDG_CONFIG_HOME/isync/config file, shown below:
# -*- conf -*-
<<mail-isync>>
Before diving into this file, let’s take some time to understand the basic concepts of isync. Isync essentially deals with mappings between two backing stores of email; these mappings are called channels. A channel has a far store (usually the authoritative copy; older versions called this the master) and a near store (usually a replica; formerly the slave). Each of these stores can either be a mailbox stored in a local Maildir or a mailbox stored on a remote server, accessible over IMAP. Finally, for IMAP stores, we also need to set up information about the IMAP connection, called an IMAP account.
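As a minimal sketch of how these pieces fit together (the account name, host, user, and paths here are invented for illustration, and are not part of my real configuration), an account, a pair of stores, and a channel look like this:

```conf
# Hypothetical example, for illustration only.
IMAPAccount example
Host imap.example.org
User alice
PassCmd "pass mail/example"
SSLType IMAPS

IMAPStore example-remote
Account example

MaildirStore example-local
Path ~/Mail/example/
Inbox ~/Mail/example/Inbox

Channel example
Far :example-remote:
Near :example-local:
Patterns *
```

The real configuration for each of my accounts, later in this document, follows exactly this shape.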
We don’t just want to receive mail locally, though; we also want to send it. To do this, we will use msmtp, a sendmail-like program that communicates with external SMTP servers. The msmtp package also contains an implementation of a local mail queue, which I need for sending mail when offline. So, first let’s install the msmtp package from Debian.
DEBIAN_FRONTEND=noninteractive apt-get -y install msmtp
The mail queue scripts are installed with the package documentation, alongside a very useful README file. As described there, the queue scripts are a wrapper around msmtp itself, and so these scripts are what we will be using for our MTA. We need to copy them to our PATH and make sure they are executable.
mkdir -p ~/.local/bin
cp /usr/share/doc/msmtp/examples/msmtpq/msmtp-queue ~/.local/bin/
cp /usr/share/doc/msmtp/examples/msmtpq/msmtpq ~/.local/bin/
chmod +x ~/.local/bin/msmtp-queue ~/.local/bin/msmtpq
Next, we need to tell these scripts where to place the queue. I think the proper place for this is in a subdirectory of $XDG_DATA_HOME, so the queue is persistent between boots (just in case!). Let’s create that directory.
mkdir -p $XDG_DATA_HOME/msmtp/queue
chmod 0700 $XDG_DATA_HOME/msmtp/queue
Next, we need to modify the msmtpq script to use this directory. We do this by rewriting two configuration lines near the top of the script:
s|Q=~/.msmtp.queue|Q=\$XDG_DATA_HOME/msmtp/queue|;
s|LOG=~/log/msmtp.queue.log|LOG=\$XDG_DATA_HOME/msmtp/queue.log|;
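Applied to the installed script, these substitutions might look like the following sketch. The left-hand sides are the defaults shipped in Debian’s msmtpq; the demo operates on a stand-in file rather than the real script, so it is safe to run anywhere:

```shell
# Sketch: apply the two queue-location rewrites to a copy of msmtpq.
# Single quotes keep $XDG_DATA_HOME literal, so msmtpq expands it at
# run time rather than at rewrite time.
fix_msmtpq_paths() {
  sed -i \
    -e 's|Q=~/.msmtp.queue|Q=$XDG_DATA_HOME/msmtp/queue|' \
    -e 's|LOG=~/log/msmtp.queue.log|LOG=$XDG_DATA_HOME/msmtp/queue.log|' \
    "$1"
}

# Demonstrate on a stand-in file instead of the real ~/.local/bin/msmtpq.
printf 'Q=~/.msmtp.queue\nLOG=~/log/msmtp.queue.log\n' > /tmp/msmtpq.demo
fix_msmtpq_paths /tmp/msmtpq.demo
cat /tmp/msmtpq.demo
```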
We are almost ready to use the local msmtpq program as our MTA! However, if we are running AppArmor on our system, the default profile won’t let msmtp read our relocated configuration file. We will add to the whitelist the ability to read any path in the home directory that ends in msmtp/config.
echo 'owner @{HOME}/**/msmtp/config r,' >> /etc/apparmor.d/local/usr.bin.msmtp
Configuring msmtp, like isync, is fairly simple.
# -*- conf -*-
# Set default values for all following accounts.
defaults
auth on
tls on
syslog on
<<mail-msmtp>>
# Set a default account
account default : personal
In order to index and search our mail, we use notmuch. Let’s first install this from the Debian archive:
DEBIAN_FRONTEND=noninteractive apt-get -y install notmuch
Note that we don’t want to install notmuch-emacs, because it pulls in emacs24. We use Emacs 25, so instead we will pull the Emacs interface from MELPA.
By default, notmuch looks for a configuration file directly under the user’s home. We can configure this using an environment variable, though, so we can hide this away within the XDG configuration directory.
NOTMUCH_CONFIG=$XDG_CONFIG_HOME/notmuch/config
Speaking of the configuration file, let’s take a look at it:
[database]
path=/home/pniedzielski/Retpoŝtoj
[user]
name=Patrick M. Niedzielski
[email protected]
[email protected];[email protected];[email protected];[email protected];
[new]
tags=new
ignore=.credentials.gmailieer.json;.gmailieer.json;.state.gmailieer.json;.state.gmailieer.json.bak;.gmailieer.json.bak;.lock;.mbsyncstate;.uidvalidity;.mbsyncstate.journal;.mbsyncstate.new
[search]
exclude_tags=deleted;spam
[maildir]
synchronize_flags=true
[crypto]
gpg_path=gpg
We can automate the synchronizing of mail and tagging using Notmuch’s hooks. There are two hooks that we need to consider:
- pre-new: runs when notmuch new is called, but before the database is updated. This is a good place to synchronize our mail with the network. It is important that we always succeed in this hook, even if the network is down.
- post-new: runs after notmuch new is called, and after the database is updated. At this point, any new messages will have been tagged with new. This is where we want to do initial tagging.
Let’s take a look at the pre-new hook:
# -*- sh -*-
# Flush out the outbox.
msmtp-queue -r
# Pull new mail from our accounts.
(echo -n "Sync Personal…" && mbsync -c ~/.config/isync/config personal && echo "Done!") || echo "Error!" &
(echo -n "Sync MIT…" && mbsync -c ~/.config/isync/config mit && echo "Done!") || echo "Error!" &
(echo -n "Sync Gmail…" && cd ~/Retpoŝtoj/gmail && gmi sync >/dev/null && echo "Done!") || echo "Error!" &
(echo -n "Sync Cornell…" && cd ~/Retpoŝtoj/cornell && gmi sync >/dev/null && echo "Done!") || echo "Error!" &
wait
Syncing my mail used to take quite a long time, because I pulled mail from each account sequentially. The above hook pulls each account in parallel, and then waits for them all to complete before moving on.
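The fan-out-and-wait pattern in the hook is a generic one. The following sketch shows its shape with stand-in commands (true and false in place of the real sync programs, and an invented helper function), so it runs without any mail tooling installed:

```shell
# Sketch of the parallel fan-out used in the pre-new hook, with
# stand-in commands so it runs without mbsync or gmi installed.
sync_one() {
  # $1: account label, $2: command standing in for the real sync
  { printf 'Sync %s…' "$1" && $2 && echo 'Done!'; } || echo 'Error!'
}

# Launch each "account" in the background, then wait for all of them.
out=$( { sync_one personal true & sync_one mit false & wait; } 2>&1 )
echo "$out"
```

Each backgrounded job reports Done! or Error! independently, and wait blocks until every job has finished, just as in the hook above.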
Now, let’s take a look at the tagging in the post-new hook:
# -*- sh -*-
notmuch tag +account/personal -- is:new and path:personal/**
notmuch tag +account/mit -- is:new and path:mit/**
notmuch tag +account/gmail -- is:new and path:gmail/**
notmuch tag +account/cornell -- is:new and path:cornell/**
notmuch tag +to-me -- is:new and to:[email protected]
notmuch tag +to-me -- is:new and to:[email protected]
notmuch tag +to-me -- is:new and to:[email protected]
notmuch tag +to-me -- is:new and to:[email protected]
notmuch tag +sent -- is:new and from:[email protected]
notmuch tag +sent -- is:new and from:[email protected]
notmuch tag +sent -- is:new and from:[email protected]
notmuch tag +sent -- is:new and from:[email protected]
notmuch tag +feeds -- is:new and to:[email protected]
notmuch tag +lists +lists/boston-pm -- is:new and to:[email protected]
notmuch tag +lists +lists/LINGUIST-L -- is:new and list:linguist.listserv.linguistlist.org
notmuch tag +lists +lists/CONLANG-L -- is:new and to:[email protected]
notmuch tag +lists +lists/LCS-members -- is:new and to:[email protected]
notmuch tag +lists +lists/EFFector -to-me -- is:new and from:[email protected]
notmuch tag +lists +lists/SIL-font-news -- is:new and to:[email protected]
notmuch tag +lists +lists/bulletproof-tls -to-me -- is:new and from:[email protected]
notmuch tag +lists +lists/xrds-acm -- is:new and to:[email protected]
notmuch tag +lists +lists/technews-acm -to-me -- is:new and from:[email protected]
notmuch tag +lists +lists/debian-security-announce -- is:new and to:[email protected]
notmuch tag +lists +lists/info-fsf -to-me -- is:new and from:[email protected]
notmuch tag +lists +lists/info-gnu -- is:new and from:[email protected]
notmuch tag +lists +lists/perl-qa -- is:new and to:[email protected]
notmuch tag +lists +lists/c++embedded +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/cxx-abi-dev +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/std-discussion +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/std-proposals +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg2-modules +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg5-tm +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg7-reflection +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg8-concepts +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg9-ranges +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg10-features +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg12-ub +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/sg13-hmi +c++ -- is:new and to:[email protected]
notmuch tag +lists +lists/MIT-daily -to-me -- is:new and list:80f62adc67c5889c8cf03eb72.174773.list-id.mcsv.net
notmuch tag +lists +lists/MITAC -to-me -- is:new and list:7dfb17e8237543c1b898119e1.250537.list-id.mcsv.net
notmuch tag +lists +lists/GSC-anno -to-me -- is:new and list:cdee009ad27356d631e8ca5b8.380005.list-id.mcsv.net
notmuch tag +lists +lists/LSA -to-me -- is:new and list:001f7eb7302f6add98bff7e46.216539.list-id.mcsv.net
notmuch tag +lists +lists/emacs-humanities -to-me -- is:new and to:[email protected]
notmuch tag +OpenSourceCornell +cornell/cs -- is:new and to:[email protected]
notmuch tag +OpenSourceCornell +cornell/cs -- is:new and to:[email protected]
notmuch tag +OpenSourceCornell +cornell/cs -- is:new and to:[email protected]
notmuch tag +OpenSourceCornell +cornell/cs -- is:new and to:[email protected]
notmuch tag +OpenSourceCornell +cornell/cs -- is:new and to:[email protected]
notmuch tag +cornell/cs -- is:new and to:[email protected]
notmuch tag +cornell/cs -- is:new and to:[email protected]
notmuch tag +cornell/linguistics +underlings -- is:new and to:[email protected]
notmuch tag +cornell/linguistics +underlings -- is:new and subject:"underlings-l subscription report"
notmuch tag +cornell/linguistics +underlings -- is:new and to:[email protected]
notmuch tag +cornell/linguistics -- is:new and to:[email protected]
notmuch tag +cornell/linguistics -- is:new and to:[email protected]
notmuch tag +cornell/linguistics -- is:new and to:[email protected]
notmuch tag +cornell/linguistics -- is:new and to:[email protected]
notmuch tag +employment -to-me -- is:new and from:linkedin.com
notmuch tag +twitch -to-me -new -- is:new and from:twitch.tv
notmuch tag +debianchania -- is:new and to:[email protected]
notmuch tag +test-anything-protocol -- is:new and to:[email protected]
notmuch tag +deleted -- is:new and path:personal/Trash/**
notmuch tag +deleted -- is:new and path:gmail/Trash/**
notmuch tag +deleted -- is:new and path:cornell/Trash/**
notmuch tag +deleted -- is:new and path:culc/Trash/**
notmuch tag +deleted -- is:new and path:mit/Deleted\ Items/**
notmuch tag +spam -- is:new and path:personal/Junk/**
notmuch tag +spam -- is:new and path:gmail/Junk/**
notmuch tag +spam -- is:new and path:cornell/Junk/**
notmuch tag +spam -- is:new and path:culc/Junk/**
notmuch tag +spam -- is:new and path:mit/Junk\ E-Mail/**
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- to:[email protected] and [email protected]
notmuch tag +spam -- to:[email protected] and [email protected]
notmuch tag +spam -- to:[email protected] and [email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:"Jessica Lee"
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:@hira
#notmuch tag +spam -- from:"Asia from"
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +spam -- from:[email protected]
notmuch tag +draft -- is:new and path:personal/Draft/**
notmuch tag +draft -- is:new and path:gmail/Draft/**
notmuch tag +draft -- is:new and path:cornell/Draft/**
notmuch tag +draft -- is:new and path:culc/Draft/**
notmuch tag +draft -- is:new and path:mit/Drafts/**
notmuch tag +inbox -- is:new and is:to-me and is:sent
notmuch tag -new -- is:feeds
notmuch tag -new -- is:lists
notmuch tag -new -- is:deleted
notmuch tag -new -- is:spam
notmuch tag -new -- is:sent
notmuch tag -new -- is:draft
notmuch tag +spam -- from:[email protected]
notmuch tag +inbox -new -- is:new
Now that notmuch is configured to synchronize our local mail with our remote accounts and to tag our mail, we want this to happen in the background. We can accomplish this using systemd timers.
First, we need to set up a systemd user unit that, when started, runs notmuch new:
[Unit]
Description=Synchronize local mail with remote accounts
RefuseManualStart=no
RefuseManualStop=no
[Service]
Type=oneshot
ExecStart=notmuch new
Now, we want to run this unit on a timer. Let’s choose once every five minutes:
[Unit]
Description=Synchronize local mail with remote accounts at regular intervals
RefuseManualStart=no
RefuseManualStop=no
[Timer]
Persistent=false
OnBootSec=2min
OnUnitActiveSec=5min
Unit=mail-sync.service
[Install]
WantedBy=default.target
Finally, let’s enable the timer:
systemctl --user enable mail-sync.timer
This is the self-hosted email account that I use for most things.
- Address: [email protected]
- IMAP: tocharian.pniedzielski.net, STARTTLS with ACME-generated certificate
- SMTP: tocharian.pniedzielski.net, STARTTLS with ACME-generated certificate on the message submission port (587)
First, make a directory in the Maildir hierarchy for emails from this account.
mkdir -p ~/Retpoŝtoj/personal/{cur,new,tmp}
###############################################################################
# PERSONAL EMAIL (tocharian.pniedzielski.net) #
###############################################################################
IMAPAccount personal
Host tocharian.pniedzielski.net
User pniedzielski
PassCmd "pass mail/personal"
SSLType imaps
SSLVersions TLSv1.2
IMAPStore personal-remote
Account personal
MaildirStore personal-local
Path ~/Retpoŝtoj/personal/
Inbox ~/Retpoŝtoj/personal/Inbox
SubFolders Legacy
Channel personal
Far :personal-remote:
Near :personal-local:
Patterns * !Archive*
Create Both
CopyArrivalDate yes
SyncState *
###############################################################################
# PERSONAL EMAIL (tocharian.pniedzielski.net) #
###############################################################################
account personal
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
host tocharian.pniedzielski.net
port 587
from [email protected]
user pniedzielski
passwordeval pass mail/personal
This is my university email, which I use for MIT-related/academic work. This account is by far the one that gives me the most trouble. My university hosts mail on an Exchange server that provides IMAP and SMTP, but only barely. I’ve tried several different ways of working with this account locally, including directly using their anemic IMAP and SMTP server, or routing the access through DavMail, but right now I’m forwarding all the mail to my personal hosted email server (which works beautifully), and using IMAP from it. SMTP still goes through the Exchange server, which isn’t ideal, but which works better than the Exchange IMAP does.
What this looks like on my server is an additional mailbox, mit, with its own password and IMAP hierarchy. IMAP accesses the same address as Personal, but uses a different user. Otherwise, the configuration should be identical. For SMTP, I use the Exchange SMTP directly.
- Address: [email protected]
- IMAP: tocharian.pniedzielski.net, STARTTLS with ACME-generated certificate
- SMTP: outgoing.mit.edu, SMTPS
First, make a directory in the Maildir hierarchy for emails from this account.
mkdir -p ~/Retpoŝtoj/mit/{cur,new,tmp}
###############################################################################
# MIT EMAIL (tocharian.pniedzielski.net) #
###############################################################################
IMAPAccount mit
Host tocharian.pniedzielski.net
User mit
PassCmd "pass mail/mit"
SSLType imaps
SSLVersions TLSv1.2
IMAPStore mit-remote
Account mit
MaildirStore mit-local
Path ~/Retpoŝtoj/mit/
Inbox ~/Retpoŝtoj/mit/Inbox
SubFolders Legacy
Channel mit
Far :mit-remote:
Near :mit-local:
Patterns * !Archive*
Create Both
CopyArrivalDate yes
SyncState *
Channel mit-archive
Far :mit-remote:
Near :mit-local:
Patterns Archive*
Create Both
CopyArrivalDate yes
SyncState *
###############################################################################
# MIT EMAIL (outgoing.mit.edu) #
###############################################################################
account mit
tls_starttls off
tls_trust_file /etc/ssl/certs/ca-certificates.crt
host outgoing.mit.edu
port 465
from [email protected]
user pnski
passwordeval pass mit/kerberos
This is an older email account that I mainly use as an archive and for emails I’ll need for self-hosted services, just in case I cannot access tocharian.pniedzielski.net.
- Address: [email protected]
- IMAP: imap.gmail.com, IMAPS
- SMTP: smtp.gmail.com, STARTTLS on the message submission port (587)
First, make a directory in the Maildir hierarchy for emails from this account.
mkdir -p ~/Retpoŝtoj/gmail
###############################################################################
# GMAIL (imap.gmail.com) #
###############################################################################
account gmail
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
host smtp.gmail.com
port 587
from [email protected]
user [email protected]
passwordeval pass mail/gmail
This is the university email that I use for Cornell-related work. This account is hosted by Gmail.
- Address: [email protected]
- IMAP: imap.gmail.com, IMAPS
- SMTP: smtp.gmail.com, STARTTLS on the message submission port (587)
First, make a directory in the Maildir hierarchy for emails from this account.
mkdir -p ~/Retpoŝtoj/cornell/{cur,new,tmp}
###############################################################################
# CORNELL EMAIL (imap.gmail.com) #
###############################################################################
account cornell
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
host smtp.gmail.com
port 587
from [email protected]
user [email protected]
passwordeval pass mail/gmail
It seems like everything in the Haskell ecosystem is moving towards GHCup, which requires me to download whichever versions of GHC I want. I’ve always been a bigger fan of either using my system’s package manager or letting the build system install the proper sandboxed toolchain for me, like Stack does. Until now, I could ignore GHCup for this reason. Recently, however, the Haskell Language Server stopped providing prebuilt binaries that work with Stack’s sandboxed compiler. Now, unless I use GHCup, I have to manually build the Haskell Language Server for each compiler I use, negating the benefits of using Stack. This means we have to do a little extra work coaxing GHCup and Stack to play well with one another. In this section, I deal exclusively with the setup for GHCup; that coaxing happens later on, in the Stack section below.
First, we need to download the GHCup binary:
curl -Lf "https://downloads.haskell.org/~ghcup/x86_64-linux-ghcup" > ~/.local/bin/ghcup
chmod +x ~/.local/bin/ghcup
Next, we need to convince GHCup to use XDG directories, which it doesn’t do by default:
GHCUP_USE_XDG_DIRS=1
I use Stack, which is meant to be both a reproducible build system and a package manager for Haskell. It is very nice, and seemed to be the hot thing a while ago—especially compared with the alternative, Cabal. One of the nice things about Stack is that it automatically downloads a sandboxed compiler for you, so I don’t need to worry about which compilers and versions of base I have installed. Instead, building a project automatically gets me the right version of everything.
Until recently, making Stack work with GHCup was a pain. As of Stack 2.9.1, though, we can make Stack run hooks to install its desired version of GHC. First, we need to set Stack to use the XDG Base Directory specification (yet another tool that doesn’t default to it…):
STACK_XDG=1
Next, we need to set up a GHC installation hook to teach Stack about GHCup. We do this by downloading the GHCup-provided hook from their repository, installing it into the Stack hooks directory, teaching Stack to prefer installing GHC through the hook rather than using any system GHC, and finally telling Stack not to fall back on its internal installation logic when the hook fails.
mkdir -p $XDG_CONFIG_HOME/stack/hooks/
curl https://raw.githubusercontent.com/haskell/ghcup-hs/master/scripts/hooks/stack/ghc-install.sh \
> $XDG_CONFIG_HOME/stack/hooks/ghc-install.sh
chmod +x $XDG_CONFIG_HOME/stack/hooks/ghc-install.sh
# hooks are only run when 'system-ghc: false'
stack config set system-ghc false --global
# when the hook fails, don’t try the internal logic
stack config set install-ghc false --global
Now, so we can easily connect to the Emacs server from an interactive terminal, we define some shorthand shell aliases. I can never remember the command-line arguments to emacsclient, and emacsclient itself is a pretty hefty command name, so these aliases find a lot of use. em opens its argument in an existing frame, emnew opens its argument in a new frame, and emtty opens its argument in the current terminal.
# Arguments typed after an alias are appended to its expansion, so no
# "$@" is needed (inside double quotes it would expand at definition
# time anyway).
alias em='emacsclient -n'
alias emnew='emacsclient -c -n'
alias emtty='emacsclient -t'
For each of these aliases, I used to pass the --alternate-editor flag, which sets an editor to fall back on if Emacs is not running. There is no case when that happens, and if there’s some problem where Emacs is not running, I’d like to be warned, so that I use vi explicitly and don’t get confused.
Finally, we set Emacs as our default editor for the session. We want the behavior to be “open a new buffer in the existing Emacs session; if that session does not exist, start Emacs in daemon mode and then open a terminal frame connected to it.” Setting $VISUAL and $EDITOR to emacsclient accomplishes the first part, and setting $ALTERNATE_EDITOR to an empty string accomplishes the second part, as described in the article _Working with EmacsClient_.
# Use emacsclient as the editor.
EDITOR=emacsclient
VISUAL=emacsclient
ALTERNATE_EDITOR=