Friday, December 22, 2017

It's that time again

I won't go off on a rant about how mandatory password change policies are bad for users at best and actually erode security at worst. Or that NIST now recommends against mandatory password changes.

I'm not going to do that because I'm either preaching to the choir or shouting at the sea, demanding the tide stay out.  I submit that mandatory password change policies are the climate change of IT departments.  There are going to be people and organizations that, in the words of the Captain in Cool Hand Luke, "you just can't reach".

For quite a while now I've been using completely random passwords with high entropy.  I have a few that unlock my keyrings; those I have committed to memory and they are single-purpose (that is, I never use the same password for more than one keyring).  The others, though, I don't even bother trying to remember.  I use programs such as apg or KeePassX to create passwords for everything, each again single-use, all of which look like an encrypted password at first glance.  For example, one I generated yesterday as a joke was x+KQ$zy^lqv<?;XD.3se9.  I'm guessing if you were trying to decipher a hash of my password and got that output, you wouldn't immediately think you'd been successful.
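Nothing about generating such a string needs a dedicated tool, for what it's worth.  Here's a rough sketch of the same idea using nothing but coreutils; the character class is my own choice, so widen or narrow it to taste:

```shell
# Pull random bytes, keep only characters from a password-friendly
# class, and truncate to the length you want.  A crude stand-in for
# apg or KeePassX's generator, not a replacement for it.
pw=$(head -c 4096 /dev/urandom | tr -dc 'A-Za-z0-9!@#$%^&*' | head -c 21)
echo "$pw"
```

apg and KeePassX still do a better job of guaranteeing things like character-class coverage; this just shows there's no magic involved.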

There's a problem with this approach, though.  KeePass and all of its variants (like the one I use on Android) are great for managing and providing those passwords when I need to enter them, but occasionally I still need to access those passwords without the benefit of, say, a browser plugin.

The worst of these examples is mutt.  I cannot imagine a scenario where I would willingly give up mutt as my primary means of using email, but it's really a pain in the butt to have it work with mandatory password changes.  I could just say to heck with it and use a password scheme for my email that is easy for me to remember and sticks to the specific rules of any password change policy, but that actually makes things a lot worse: now I'm doing exactly what all those articles I linked above say is the problem with mandatory password change policies.

Turns out, though, that mutt can do something pretty neat when it comes to saved passwords:

set imap_user="jjm"
set imap_pass=`/usr/bin/gpg --decrypt $HOME/passwd.gpg 2>/dev/null`

Which means now I store my passwords in a file (in my home directory in the example above, for the sake of keeping the example simple) that is encrypted with a password or passphrase of my choosing.  Of course, this is just another keyring, so I need to commit one more password to memory.  But now any time I am required to change my password on that account, I can set it to a new high-entropy password that is completely unrelated to any previous password I've ever used, and I don't need to remember it.

How do I update that with a new password?  Easy (note the single quotes, without them the shell would eat the $zy):
$ echo 'x+KQ$zy^lqv<?;XD.3se9' | gpg -c > passwd.gpg

So in this scenario the new password never even lands on my disk in an un-encrypted state.  Finally, there is a bit of a case for keeping old passwords around, so I can accomplish that with:
$ (gpg --decrypt passwd.gpg 2>/dev/null && gpg --decrypt old-passwds.gpg 2>/dev/null) | gpg -c > old-passwds.gpg.new && mv old-passwds.gpg.new old-passwds.gpg

(Writing to a temporary file and renaming matters here: redirecting straight into old-passwds.gpg would truncate it before the second gpg gets a chance to read it.)

Thus, old-passwds.gpg is an accumulation of previous passwords I've used for this account, also encrypted, never stored as cleartext on my disk, and the worst case is I need to remember two new keyring passwords or passphrases.  I think that's a pretty good compromise for complying with a mandatory password change policy that is widely accepted to be a bad idea.
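Those two commands are easy to fumble when a password change actually comes due, so they can be bundled into a small helper.  This is a hypothetical sketch (the function name is mine, the file names are the ones above); it writes the combined archive through a temporary file and renames it, so old-passwds.gpg is never truncated while still being read:

```shell
# Hypothetical wrapper around the two gpg invocations above.  Archives
# the current password into old-passwds.gpg, then encrypts the new one
# into passwd.gpg.  Writes through a temp file and renames, so
# old-passwds.gpg is never truncated mid-read.
rotate_passwd() {
  new="$1"
  {
    gpg --decrypt passwd.gpg 2>/dev/null
    gpg --decrypt old-passwds.gpg 2>/dev/null
  } | gpg -c > old-passwds.gpg.new && mv old-passwds.gpg.new old-passwds.gpg
  printf '%s\n' "$new" | gpg -c > passwd.gpg
}
```

Expect gpg to prompt for the symmetric passphrase twice, once for the archive and once for the new password file.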

Wednesday, January 18, 2017

Hello patch-id, my old friend.

I started off with a series of patches that I'd cherry-picked out of one branch into my current branch. All seemed good, so I was ready to merge them. Except something happened to the upstream and I was unable to actually push my commits. Now, though, I'm presented with an interesting mess. I have my own series that I know I've tested, the branch from which I've cherry-picked them has a ton of unrelated (and untested by me) commits and I want to enlist someone's help with getting these dozen commits merged into the main development branch.

How do I do this?

Well, I thought, first off we'll ensure my tree is clean.

   # git fetch --all --tags
   # git status
   On branch master
   Your branch is ahead of 'oe/master' by 12 commits.
     (use "git push" to publish your local commits)

   nothing to commit, working directory clean

All's good. Next, let's go ensure all of our commits are in the upstream branch, trusting the summary line to be good enough as a first indicator, because there's no way I would've touched the summary in my tree when cherry-picking or amending or anything. The only failure mode would be if something in my tree has subsequently gone missing from the upstream -next tree. And given this is a -next tree and not intended to be fast-forward, that seems like a reasonable thing to check.

   # git log --pretty="format:'%s'" oe/master..HEAD | \
   > xargs -n1 -i git log --oneline --grep={} oe/master-next

Works pretty well it seems. Numbers check?

   # git log --pretty="format:'%s'" oe/master..HEAD | \
   > xargs -n1 -i git log --oneline --grep={} oe/master-next | wc -l

Yup. Okay, now we have essentially two lists of commit ids. My branch and the upstream branch where I picked from. Let's make sure those commits are basically the same.  We'll do this with git-patch-id, a tool that, in ancient times, I created by hand by formatting patches out of my tree and then doing an md5 (it was ancient times, leave me alone) of the patches without headers.  Apparently I wasn't the only one doing this sort of crime, because now git has something very much like that built right in.  Truly we live in an age of wonders.
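For the curious, that by-hand version is easy enough to reconstruct.  A sketch of the ancient-times approach (the function name is mine): hash a formatted patch with its commit-specific headers stripped, so two patches with the same diff but different authorship hash the same.

```shell
# Hash only the diff portion of a patch file, skipping the From/Date
# headers, so re-picked copies of the same change produce the same id.
# git patch-id does this properly (it also normalizes hunk offsets and
# whitespace); this is the md5 crime described above.
poor_mans_patch_id() {
  sed -n '/^diff /,$p' "$1" | md5sum | cut -d' ' -f1
}
```

These days `git show <commit> | git patch-id` is the built-in equivalent.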

Moving on.

   # git log --pretty="format:'%s'" oe/master..HEAD | \
   > xargs -n1 -i git log --pretty="format:%H " --grep={} oe/master-next | \
   > xargs -n1 -i sh -c 'git show {} | git patch-id' > upstream.txt

   # git log --pretty="format:'%s'" oe/master..HEAD| \
   > xargs -n1 -i git log --pretty="format:%H " --grep={} | \
   > xargs -n1 -i sh -c 'git show {} | git patch-id' > local.txt

So, with a dozen patches, it's easy enough to just visually compare them, but I figured I'd go the one extra step and automate comparison of the lists as well.

   # git log --pretty="format:'%s'" oe/master..HEAD | \
   > xargs -n1 -i git log --pretty="format:%H " --grep={} oe/master-next | \
   > xargs -n1 -i sh -c 'git show {} | git patch-id' | cut -f1 -d' ' \
   > > upstream.txt

   # git log --pretty="format:'%s'" oe/master..HEAD | \
   > xargs -n1 -i git log --pretty="format:%H " --grep={} | \
   > xargs -n1 -i sh -c 'git show {} | git patch-id' | cut -f1 -d' ' \
   > > local.txt

   # diff upstream.txt local.txt

And in my case this revealed that I had one commit that was substantially different, since patch-id returned differences. On closer examination of the commit ids in each tree it was obvious that I'd made a significant change based on a discussion with the author and I'd said I would make the discussed change in place rather than requesting a new patch. So ultimately this was all worth it.

Friday, April 01, 2016

tty Types

I have long ago given up on using anything other than GNU Screen as a terminal emulator.  If you're reading this I'm confident you know what both of those things are and why I would need them.  If not, then you work in a different UNIX-y world than I do.  See you next time.

Seriously, though, all of the other emulators I used to use were so fusty and full of so many eccentricities (when compared to everything else I use on a daily basis) that I nearly lost my mind when I realised I could just do this:

# screen /dev/ttyS0 115200

and like actual magic get something with a civilized scrollback buffer, keyboard-centric cut-and-paste from multiple buffers, split-screen, logging, all of the stuff I love about GNU Screen.

But man, there's been one persistent irritation remaining: the terminal type and window size never get set up properly on these serial consoles.

That drives me nuts.  If I dare start up vi on some file on one of these, then the terminal gets sufficiently mangled that I'm now reduced to an 80x24 window inside my ridiculously large xterm.  There's no need of that.

In ancient times, you used to be able to create a file named /etc/ttytype where you would create a mapping like this:

screen-256color ttyS0

And that would be interpreted by whatever getty variant you were using and set up your login environment sanely.  That's no longer the case in Debian (and probably no longer the case in most Linux systems today), so I've just been suffering with this until recently.

This is what I've come up with to work around the problem.

# cat <<'EOT' >> ~/.profile
> TERM=screen-256color
> export TERM
> eval `tset -s`
> resize
> EOT

Then on my next login I get this:

Muuuuch better.
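One caveat with hard-coding TERM in ~/.profile: the same file tends to travel to logins that aren't on a serial console.  If that matters to you, the assignment can be made conditional.  A sketch, where the helper name and the /dev/ttyS* pattern are my own assumptions:

```shell
# Only force the screen-flavoured TERM on serial consoles; everything
# else keeps whatever TERM it arrived with (defaulting to xterm).
term_for_tty() {
  case "$1" in
    /dev/ttyS*) echo screen-256color ;;
    *)          echo "${TERM:-xterm}" ;;
  esac
}

TERM=$(term_for_tty "$(tty)") ; export TERM
```

Drop that in ~/.profile in place of the unconditional assignment and your X terminals stay untouched.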

I have resize on most every system I use, so that works for me, but then in my digging around I found this on the Arch wiki:

rsz() {
 # zsh function (note: uses print and the $commands array)
 if [[ -t 0 && $# -eq 0 ]];then
  local IFS='[;' escape geometry x y
  print -n '\e7\e[r\e[999;999H\e[6n\e8'
  read -sd R escape geometry
  x=${geometry##*;} y=${geometry%%;*}
  if [[ ${COLUMNS} -eq ${x} && ${LINES} -eq ${y} ]];then
   print "${TERM} ${x}x${y}"
  else
   print "${COLUMNS}x${LINES} -> ${x}x${y}"
   stty cols ${x} rows ${y}
  fi
 else
  [[ -n ${commands[repo-elephant]} ]] && repo-elephant || print 'Usage: rsz'  ## Easter egg here :)
 fi
}

I've not used it, but it looks good.  I probably will try it next time I'm on a seriously limited system.

But this is one where I'd like some feedback.  If you've got this far you must have your own solution to this (or you've got a higher pain threshold than I do), so what've you done to solve this problem?

Monday, March 07, 2016

Creating alternatives

Something much bigger I'm probably gearing up to document here (if it works out, since I'm still trying to determine if I've been wasting my time learning this tool or not) inspired me to again check the manpages on a pretty useful little tool:  update-alternatives.

It's nothing terribly special, it gets used everywhere, and it's not very difficult to use, but I don't use it as often as I should, to be honest.

Today I found myself needing to install a new version of ruby on my system.  On an ageing (seriously?  8.2 is ageing?) Debian system like mine that's an interesting level of hell that I'll also get into at a later date.  Not because I ever expect to do that particular dance again, nor because I expect anyone else to be interested in it, but because it helped me get the rust out of some gears that I find myself needing to turn reasonably often.  The how is interesting even if the what or why are not.

But back to the quickie for today.

update-alternatives lets you have multiple versions of the same executable (or, to a lesser extent, libraries / manuals / shared files / etc.) installed on a system at the same time and identify the one you want to use by default.  So, for example, I use mutt for email.  But which mutt?

# which mutt
/usr/bin/mutt
# ls -l /usr/bin/mutt
lrwxrwxrwx 1 root 22 Oct 25  2013 /usr/bin/mutt -> /etc/alternatives/mutt*
# update-alternatives --list mutt
/usr/bin/mutt-org
/usr/bin/mutt-patched

So from that you can see that I've got two different versions of mutt, but when I type mutt I get mutt-patched.  If I wanted to change that, though, and run the vanilla version of mutt, I could just change the alternative to point at mutt-org instead with a command like:

# update-alternatives --config mutt
There are 2 choices for the alternative mutt (providing /usr/bin/mutt).

  Selection    Path                   Priority   Status
* 0            /usr/bin/mutt-patched   60        auto mode
  1            /usr/bin/mutt-org       50        manual mode
  2            /usr/bin/mutt-patched   60        manual mode

Press enter to keep the current choice[*], or type selection number: 

That's good, but when I started out today there were no alternatives for ruby. Irritating, since /usr/bin/ruby was already a symlink to one of three (!) installed versions, but there was no update-alternatives mechanism to switch (and, of course, my new ruby just landed in the same spot and wouldn't actually get used when I invoked ruby --version, so that didn't solve my problem anyway...). It turns out, though, that update-alternatives does exactly the right thing.

# update-alternatives --list ruby  
update-alternatives: error: no alternatives for ruby
# ls -l /usr/bin/ruby*
lrwxrwxrwx 1 root    7 May 10  2015 /usr/bin/ruby -> ruby2.1*
-rwxr-xr-x 1 root 6264 Dec  1  2013 /usr/bin/ruby1.8*
-rwxr-xr-x 1 root 6336 Feb  8  2015 /usr/bin/ruby1.9.1*
-rwxr-xr-x 1 root 6168 Aug 26  2015 /usr/bin/ruby2.1*
# update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby2.1 100
update-alternatives: using /usr/bin/ruby2.1 to provide /usr/bin/ruby (ruby) in auto mode
# update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 50 
# update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.8 40  
# update-alternatives --list ruby
/usr/bin/ruby1.8
/usr/bin/ruby1.9.1
/usr/bin/ruby2.1
# update-alternatives --config ruby
There are 3 choices for the alternative ruby (providing /usr/bin/ruby).

  Selection    Path                Priority   Status
* 0            /usr/bin/ruby2.1     100       auto mode
  1            /usr/bin/ruby1.8     40        manual mode
  2            /usr/bin/ruby1.9.1   50        manual mode
  3            /usr/bin/ruby2.1     100       manual mode

Press enter to keep the current choice[*], or type selection number: 

The numbers I assigned are purely relative; they don't mean anything except to establish that the preferred version is the one that was previously the direct symlink.  The other two stay in place, I can use them by calling them directly, and if I need to change my default link to one of the others it's as simple as using the --config option again.  And then I installed version 2.3 and gave it a priority of 110, so it's now my new default.

Easy peasy.

Friday, March 04, 2016

Easy automounting sshfs

I use sshfs a lot.  Mostly because Norton Commander rocked my world when I first encountered it in the summer of 1994 (bear with me).  I was about to say I was done after that, but that's not quite true.  In 1996 I discovered pushd/popd (unfortunately limited to bash at the time) and the staggering power of the Korn shell's history editing and I had nearly everything I needed to be happy with my interface to computers.  Then I found Midnight Commander on one of those Linux Software Archive CDs in late 1996 and the final piece fell into place.

Alright, enough ancient history.  The point of all of this is that when I discovered MC's Virtual Filesystem functionality I didn't think I'd ever need any other remote tool.  Being able to treat FTP connections like local filesystems was mind-blowing.  Then I started getting into SSH and found I could do the same thing with SFTP and later SSHFS connections.  Absolutely outstanding.

It's a shame the method to connect to these Virtual FS systems was such an EMACS-ish combination of hotkeys and things (at least in the early days) that looked nothing like a URI and didn't really seem to follow any sensible pattern.  So I used it less than I otherwise would unless it was something I already had set a bookmark for.  Because reading help sucks and because all too often you'd be hitting F1 for help in MC and instead get some garbage from your desktop environment or window manager.  I know, solvable problems which I've long-since resolved, but back when it really mattered, it was an issue for me.

That brings us to the last, say, two years.  I don't use MC anymore.  At least not very much.  Probably more than half the machines I work with don't even have mc installed anymore.  But I use sshfs all the time because I love the ability to treat remote systems like they are local.

So this is what I've been doing until recently.

# sshfs joe@meathead:/ meathead/
# sshfs pi@retropie:/ retropie/

(Yeah, I'm one of those guys.)

That works pretty well.

Now here's something you're not going to hear out of me very often.  I kind of miss NFS.

Okay, what I actually miss is auto-mounted NFS shares.  This week I started down the path of creating a script to automatically mount my regular SSHFS locations on boot.  That had two obvious problems right out of the gate:
  • Mounts won't come back automatically if the machine reboots
  • Mounts would only happen automatically on boot of my machine
Plus the third, more subtle but still core-to-my-very-being one:
  • Someone else must have already solved this problem
Because that's how I approach nearly every problem.  First assume I'm not the first one to trip over something.

Turns out I was right.

There are a lot of solutions to this particular problem, of varying levels of fugly.  This one (specifically the afuse approach) documented over at the Arch Wiki turns out to be just the trick:

# afuse -o mount_template='sshfs -o ServerAliveInterval=10 -o reconnect %r:/ %m' \
> -o unmount_template='fusermount -u -z %m' ~/mnt/ssh

So a little bit of that action and now instead of having a bunch of fixed mounts and a clunky script that'll need a bunch of options and checking to see if something's already mounted or whatever, I just dump the above afuse command in my Openbox autostart script and now I can just do this:

# cd ~/mnt/ssh/pi@retropie/

and magic happens.  Initially I thought it would be a bit of a down-side to dedicating a whole directory to afuse, but it turns out I like it a lot better because now I have all of my 'remote' systems in a single hierarchy but I can still treat them like local filesystems for all intents and purposes.  It's pretty sweet.  Should've done this years ago.

Wednesday, June 25, 2014

mailman Archives Made Useful

I use mutt for all my email needs.  (Not quite true, but I don't count the javascript-fest that is Gmail's interface as an actual mailer.)  It's extremely good at doing what I want it to do:  presenting text in an easily navigable and reasonably easily searchable format.  Yes, I wish I had something that could search my offline mailboxes as efficiently as Gmail's search, but that's a task for another day.

Anyway, a lot of my day is spent working with mailing list traffic.  It's the way of things for a lot of high-tech folks in the modern age.  It's generally a good thing, too.  Often I find myself signing up for a new mailing list for a specific purpose (last week it was to hopefully find answers about strangeness I'm seeing on a board I'm using, this week it was to submit a patch to a project I've been using but until now not contributing to, next week it'll be something else).  When I do that, though, I have a personal code of conduct spawned from one of the touch-stones of my personal philosophy.  Here it is, one of the closest things I have to actual wisdom:

You are not the first one to see this.

There's a lot of corollaries to this theorem and I can think of an excellent counter-example from yesterday, but putting in qualifiers and such (e.g. you are almost certainly not the first one to see this) opens the door to interpretation and short-cutting and, frankly, intellectually lazy behaviour.

So that's why when I go to a new mailing list for something, I make a genuine effort to search the archives before ever posting anything.

That doesn't sound like such a big thing, but searching archives can actually be a significant challenge depending on the list.  In particular, if a list has chosen to use GNU mailman (for reasons that I can only conclude are indicative of inherently anti-social tendencies, because no one, in good conscience, should ever choose it) as the management software, then the "archives", such as they are, are broken down into a nearly useless set of non-searchable HTML-formatted pages, separated by month and year boundaries.  For what the mailman developers most likely laughingly call convenience, you can also download a gzip-compressed flat text file of each month's archive.  Note that these text files, once uncompressed, are not in any standard mailbox format that can be read by civilized mail tools.

But they're close.

In the interests of being completely honest here, I am aware that many mailman installations are configured to actually provide the entire archive in mbox-format (since that is, apparently, how mailman stores the archive anyway) to subscribers provided they use an undocumented URL.  See here for more detail on that.  That does not always work, however.  In about half of the mailman lists I've run into I've found that this feature (the one redeeming feature of mailman, if I may be so bold) is disabled.  So we're back to downloading compressed-not-quite-mbox-files-split-by-month (because a discussion thread never carries over from one month to the next...) and opening them in your favourite text editor.

Or you do what dozens of people have done and you create a tool to turn those text files into something that's actually usable by mail programs.  Go ahead, search.  You'll find a lot of them.  Shell scripts, python scripts, ruby scripts, perl scripts, probably C programs, likely things written in Haskell and Modula-3 and Emacs-lisp as well if you're of a particularly deviant bent.

This week I tried a half-dozen of these things, all with varying levels of success, all falling below "acceptable".  So I wrote my own.  Here it is:

 wget -l 1 -A .gz -r <archive_url>
 for i in *.gz ; do
   gunzip -c $i | sed 's=\(^From.*\) at =\1@=' >> out.mbox
 done

The primary reason I did this is because I'm tired of constantly hacking this together on the command line and futzing about with getting the correct command-line options or the correct sed expression every single time I run into this scenario.  It happens often enough I'm annoyed by it but not often enough that it's become muscle-memory.  So here we are.  I created a script to wrap the core idea, making it a bit friendlier and less likely to leave garbage lying around my disk.
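The sed expression carries the whole trick, so it's worth a standalone look: mailman obfuscates addresses as "user at domain" in the flat-text files, which breaks the mbox "From " separator lines, and the substitution puts the @ back:

```shell
# A sample separator line as mailman's flat-text archive writes it;
# the substitution restores the address so mail tools see a real mbox.
line="From joe at example.com  Wed Jun 25 09:00:00 2014"
fixed=$(printf '%s\n' "$line" | sed 's=\(^From.*\) at =\1@=')
echo "$fixed"
```

One wrinkle: the match is greedy, so a line containing a second " at " later on would be mangled; the full script handles Cc and To headers with the same idea.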

The resulting script:

 #!/bin/bash
 # Copyright (c) 2014, Joe MacDonald <>
 # All rights reserved.
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions are met:
 #   * Redistributions of source code must retain the above copyright
 #     notice, this list of conditions and the following disclaimer.
 #   * Redistributions in binary form must reproduce the above copyright
 #     notice, this list of conditions and the following disclaimer in the
 #     documentation and/or other materials provided with the distribution.
 #   * Neither the name of Joe MacDonald nor the names of its contributors may
 #     be used to endorse or promote products derived from this software
 #     without specific prior written permission.

 function dump_help
 {
   echo "$0 [-w <wget>] [-m <mailbox>] -u <url> [-a]"
   echo ""
   echo "  -w <wget>     Full path to your wget binary"
   echo "  -m <mailbox>  The filename for the newly created archive mailbox"
   echo "                Default: /tmp/archive.mbox"
   echo "  -u <url>      The URL to the mailman archive index page."
   echo "  -a            Append to an existing mbox."
   echo ""
 }

 ARCHIVE_DL_DIR=`mktemp -d -q`

 while getopts "am:u:w:" option ; do
   case $option in
     a ) APPEND="y" ;;
     m ) MAILBOX="$OPTARG" ;;
     u ) URL="$OPTARG" ;;
     w ) WGET="$OPTARG" ;;
     * ) dump_help ; exit ;;
   esac
 done

 # ------------------------------------------------------------------------
 # if you did not provide a URL, we cannot proceed.
 # {{{
 if [ -z "${URL}" ] ; then
   echo You must specify the URL of the mailman archive index page.
   exit 1
 fi
 # }}}
 # ------------------------------------------------------------------------
 # if there's no wget, we cannot proceed.
 # {{{
 if [ -z "${WGET}" ] ; then
   WGET=`which wget`
 fi
 if [ ! -x "${WGET}" ] ; then
   echo Wget not found, unable to proceed.  If you have installed wget
   echo in a location not in your PATH, you can try passing the -w option
   echo to tell the script where to find wget.
   exit 1
 fi
 WGET_OPTS=" \
   -l 1 \
   -r \
   --no-directories \
   -A .gz \
   "
 # }}}
 # ------------------------------------------------------------------------
 # if we cannot create the new mailbox, we cannot proceed either.
 # {{{
 if [ -e ${MAILBOX:=/tmp/archive.mbox} ] ; then
   if [ -z "${APPEND}" ] ; then
     echo "WARNING: The specified mailbox (${MAILBOX}) already"
     echo "         exists.  This script will append the new mbox contents"
     echo "         to it.  If this is what you want, specify the -a (append)"
     echo "         option on the command line and re-run this script."
     exit 1
   fi
 fi
 touch ${MAILBOX}
 if [ ! -w "${MAILBOX}" ] ; then
   echo Unable to write to mailbox file: \"${MAILBOX}\".
   exit 1
 fi
 # Since we're changing directories, we'll want to canonicalize this,
 # otherwise the above check is invalid unless you explicitly set a full path
 # to your new mailbox *or* you took the default.
 MAILBOX=$(readlink -e ${MAILBOX})
 # }}}
 # ------------------------------------------------------------------------
 # main
 # {{{
 if [ -d ${ARCHIVE_DL_DIR} ] ; then
   echo Temporary archive download directory: ${ARCHIVE_DL_DIR}
   pushd ${ARCHIVE_DL_DIR}
   ${WGET} ${WGET_OPTS} ${URL}
   for i in *.gz ; do
     gunzip -c $i | sed 's=\(^\(From\|Cc\|To\).*\) at =\1@=' >> ${MAILBOX}
   done
   popd
   rm -fr ${ARCHIVE_DL_DIR}
 else
   echo Failed to create temporary archive download directory.
   echo Unable to continue.  Consider setting TMPDIR to a writable
   echo location in your environment then re-run the command.
 fi
 # }}}

Saturday, July 27, 2013

Command-line Sound Recording

Chances are if you're running the common varieties of Linux or Windows or MacOS you've got a GUI tool already for recording from either your sound card or an internal/external microphone or whatever.  Turns out I don't since a while back I gave up on my former infatuation with Ubuntu and Xubuntu.  Switching to Crunchbang is a decision I would make again in a heartbeat, but it does mean that I'm back to the days of searching for tools and deciding between options rather than finding them all pre-installed.

This is, for what it's worth, a good thing in my mind.

So my situation is a little complex.  The summary is I wanted to record a phone call and turn it into a sound file that's in some way useful to me.  That is, some raw audio format or high quality MP3 or something.  Should be easy enough, right?  Time was I'd just use Google Voice to record my call, or Google Talk, but the former has become much more difficult to use in Canada these days and the latter no longer appears to support recording of outgoing calls.  I searched, trust me.  I suspect that's a workaround for problems they had with enabling recording and having someone try to dial in to conference bridges or use automated menu systems or such.  I don't miss the loss of that feature at all, by the way, except in this one scenario.

Okay, so I can't use Google Voice or Google Talk.  Next obvious option is hunt through the Google Play store and find something to record calls there, then make the call on my phone.  I won't link to the various apps I tried, I'm sure most of them are quite good and have probably earned their 4+ star ratings.  I tried six different ones before I gave up on that angle.  Not a single one successfully recorded any audio on my phone.  Well, two of them might have, but they only produced 3gpp files and I wasn't able to find anything that played back 3gpp files for me on my laptop.  Or if I did, those files were full of silence as well, it's kind of tough to tell.

So back to the desktop, then.  There are a lot of audio recorder programs in Linux.  The best of them, though, either seem to depend on Gnome or KDE and there is no way I wanted to drag in all of that onto my nice, relatively clean platform just to record a single phone call.  That's ridiculous.

Back to searching, then.

Now anyone who talks to me for any length of time about operating systems knows I absolutely hate two things about Linux.  Both suck without qualification of any kind.  Printers.  Printing in Linux is a disaster and it shows little sign of improving over where it was in the late 1990s when I started in with Linux.  And sound.  When it works well, it's okay.  Often, though, it barely works or produces inconsistent results.  That infuriates me.  Almost enough to make me learn something about sound architectures in Linux and try to fix one of them enough to be usable.  The barrier, there, is the distributed nature of sound in Linux.  It's not like "sound" is even a thing.  There's kernel support for at least two architectures, and there are userspace tools that interact with some set of those architectures.

Of those tools, the one that I have the most distaste for -- though probably because it is the one I encounter the most, not because it's any worse than any other -- is Pulse.  Pulseaudio is gawdawful.  The only redeeming quality it has, if I can feel that it has any, is it is installed by default on my last three chosen distributions and oftentimes it successfully manages to produce sounds from my speakers.  Frequently it is even the sounds I want it to produce (though if I ever sort out the agony it is currently causing me when playing music from Chrome, I'll have another post about that little slice of hell...).

But today it actually turns out to be a good thing for me.  Some refinements on my search criteria turned up pacat.  I've never heard of this thing before, but within a minute of finding the manpage I was done, I had exactly what I needed.

Since I wanted to capture audio that was playing through my speakers, I was looking for a monitor on an alsa output device (I knew ahead of time that I was using Alsa; if you don't know whether you're using Alsa or OSS or something else, you'll need to google that too, I suppose), so I did this:

% pactl list | grep Monitor\ Source
Monitor Source: alsa_output.pci-0000_00_1b.0.analog-stereo.monitor

And searched for monitors.  In my case, there happened to be only one.  That helps.  So then I make my phone call and start up pacat with flac as the file format, because why not?  The call is short and I wanted good quality:

% pacat --verbose --device=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
--record --file-format=flac phone.flac
Opening a recording stream with sample specification 's16le 2ch 44100Hz' and channel map 'front-left,front-right'.
Connection established.
Stream successfully created.
Buffer metrics: maxlength=4194304, fragsize=352800
Using sample spec 's16le 2ch 44100Hz', channel map 'front-left,front-right'.
Connected to device alsa_output.pci-0000_00_1b.0.analog-stereo.monitor (0, not suspended).
Got signal, exiting

And there you are.  phone.flac is a capture of what was playing on the speakers from the time I started until the time I ^C'd it.
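One refinement for next time: the monitor name doesn't need to be copied by hand.  A sketch of pulling it out of pactl's output, run here against the sample line from above so there's nothing to mis-type (the live version would feed `pactl list` into the same awk):

```shell
# The "Monitor Source:" line exactly as pactl printed it above; in
# live use this variable would come from pactl list itself.
line="Monitor Source: alsa_output.pci-0000_00_1b.0.analog-stereo.monitor"
# Field 3 is the device name pacat wants for --device=
mon=$(printf '%s\n' "$line" | awk '/Monitor Source:/ {print $3; exit}')
echo "$mon"
```

With `mon=$(pactl list | awk '/Monitor Source:/ {print $3; exit}')` the record command becomes `pacat --verbose --device="$mon" --record --file-format=flac phone.flac`.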

Definitely want to remember this one.  By far the easiest way to record audio in Linux I've ever encountered.  So, y'know, score one for pulseaudio.