2025-07-20 14:20:21 PDT
I became aware of the Kelly Criterion
a while back through YouTube or something. It's a really interesting
idea. Imagine that you want to maximize your return over the long term
from independent bets in some game.
For example, imagine playing poker with a current stack of $c$ chips. Say
that you can make a bet of size $b$ with expected win probability $p$.
Let the "fractional return" of the bet be $r$: that is, you will win $rb$
chips (on top of getting your bet back) for winning the bet.
The Kelly Criterion suggests that you optimize your long-term return
by choosing
$$b = c \left( p - \frac{1 - p}{r} \right).$$
If you bet more, your long-term growth suffers from the excess risk of
shrinking your bankroll. If you bet less, you will
not get the return you "deserve".
There is a fancy proof of optimality of the Kelly Criterion involving
maximizing expected log wealth.
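To make that concrete, here is a quick Rust sketch with made-up numbers
that evaluates the expected per-bet log growth for a few bet fractions;
the Kelly fraction comes out on top:

// Expected per-bet log growth when betting a fraction `f` of the bankroll,
// with win probability `p` and net fractional return `r`.
fn log_growth(f: f64, p: f64, r: f64) -> f64 {
    p * (1.0 + f * r).ln() + (1.0 - p) * (1.0 - f).ln()
}

fn main() {
    let (p, r) = (0.6, 1.0); // a favorable even-money bet; made-up numbers
    let kelly = p - (1.0 - p) / r; // 0.2 of the bankroll here
    for f in [0.5 * kelly, kelly, 1.5 * kelly, 2.0 * kelly] {
        println!("f = {f:.2}: expected log growth per bet = {:+.5}", log_growth(f, p, r));
    }
}

Betting half Kelly gives up a little growth; betting double Kelly with these
numbers actually shrinks the bankroll over the long run.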
I am thinking about the Kelly Criterion and variants in the context
not just of Poker, but of Yahtzee. I will report here if I figure out
anything interesting that is new to me.
There's some math above. Hooray. I have switched my GitAtom blog software
to use Python's pandoc
library instead of
cmarkgfm
. It still does GitHub-flavored Markdown, but has
an option to generate MathML. I don't like that Pandoc centers display
math, and I'm not sure the HTML renders as well in Chrome or Safari as
it does in Firefox. That said, I like using math, so I will likely stick
with it for now.
2025-07-16 13:37:46 PDT
Bart Massey 2025
So in the previous post here I upgraded my Debian desktop. One of my
more foolish choices was to finally tackle upgrading Firefox.
Firefox has been stuck on an ancient version because I pinned Debian
to default to testing
and the testing
version
of Firefox relied on a library from unstable
. (At least I
think that's what I remember. If so, it's an obvious Debian packaging
bug. I may also have just held Firefox, as I use it extensively in my
work and it really mustn't fail for me in a critical situation.)
I installed the missing library manually and upgraded to current
testing Firefox. Then the fun started.
My bad workflow, which I am trying consciously to get away from, is
to manage my to-do list by keeping a Firefox window open for each
item. Since Firefox tends to gracefully recover these windows
on restarts, this works just well enough that it's hard to get away
from. It's fairly convenient, at least.
So I carefully bookmarked every window I didn't want to lose, and
restarted Firefox. The new Firefox came up and did indeed manage to
restore its ancient state. But the UI was pretty broken — even for new
windows.
All new Firefox windows came up with the Bookmarks Manager open. This
rapidly became unusably annoying. Suggestions from the Interwebs solved
nothing.
Worse was the UI scaling, which had become untenable. My vision is
not great, so I need to have the UI font (as opposed to the web page
font) large enough on my big display. (This LG43UK6200PUA TV-as-monitor
is 42.5" 4K — reported by XFCE's Display widget as 75" and by
xrandr
as about 11", but that's a story for another day.)
Sadly, everything was out of whack: the UI font was unbelievably tiny.
The Internets could only suggest adjusting Firefox's DPI setting, but
that just produced a whole other set of issues. Icon size to text size
ratios on Firefox have always been terrible: this is especially annoying
for the tab close box, which is a tricky target to hit on this
display.
So… to make a several-hours-long story short, I finally gave up. I
moved my whole ~/.mozilla
folder aside and restarted
Firefox. As expected, things worked OK now out of the box. I used
Mozilla Sync to recover most of what I'd lost. The notable exception
was, again as expected, my session: Mozilla Sync claims to save and
restore it, but it just didn't. Good thing I bookmarked everything way
back up there.
Browser cookies and favicons were also lost in all this, so now I'm
logging into all my sites fresh again. I'm not sure I wanted either of
these synced and restored (especially cookies) but it sure would have
been convenient here.
I'll have to figure out what to do about the alternate Firefox
Profiles I had in my old setup. I am afraid to just move them over, for
pretty obvious reasons. I'll probably just try to recreate the couple I
care about; more adventures await.
Had I been a novice Linux user with no particular bond to Firefox and no
particular experience dealing with its vagaries, the solution would have
been relatively simple: I could have switched back to Chrome,
whence I came a year or two ago.
The switch to Firefox was motivated primarily by Google's attempt to
restrict ad-blockers: I rely on mine for the web, and honestly would use
a lot less of the web without it. I occasionally encounter the
non-ad-blocked experience elsewhere and am kind of stunned: you have to
literally mine the nuggets of information out of a pile of intrusive and
noxious advertising. How does anyone live with that?
Anyhow, maybe this will help someone, and the whining felt good. I'm
back to a functional place with Firefox, and that's all I needed. I just
wish it had been easy and seamless.
2025-07-15 21:48:42 PDT
Bart Massey 2025
I got a new second monitor for my home desktop today (2025-07-15). My
old second monitor was pretty great, except it was so old it had a
fluorescent backlight. It was only 30", but put off enough heat to toast
bread.
The new monitor was as close to the old as I could buy in terms of
geometry, but I oddly could not find a new 30" 1600p monitor. I bought a
32" 4K monitor. This second monitor sits next to the 72" TV I use as my
primary display. I have always wanted to be able to move windows from
display to display without them changing size (too noticeably). That
meant I got to use display scaling.
Selecting a monitor
Selecting a monitor from the Internet is harder than it should be.
The key problem for me was that I wanted a monitor with about the same
display height as the old one. The old one was something like 17", so I
went looking for a 17"-high monitor.
No one will tell you the height and width of the display for the
monitor they are trying to sell you. No, no. You get the diagonal
size (in freedom units) and the aspect ratio (a
dimensionless ratio). Clearly these are the important parameters that
normal people want to know.
Anyhow, https://github.com/BartMassey/monitor-dims
encapsulates the math.
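For the record, the underlying math is just the Pythagorean theorem. A quick
sketch (see the repo above for the real tool):

// Given a diagonal (in freedom units) and an aspect ratio w:h, the panel is
// height = diagonal / sqrt(1 + (w/h)^2) tall and width = height * (w/h) wide.
fn monitor_dims(diagonal: f64, aspect_w: f64, aspect_h: f64) -> (f64, f64) {
    let a = aspect_w / aspect_h;
    let height = diagonal / (1.0 + a * a).sqrt();
    (a * height, height)
}

fn main() {
    // A 32" 16:9 panel works out to roughly 27.9" x 15.7".
    let (w, h) = monitor_dims(32.0, 16.0, 9.0);
    println!("{w:.1}\" wide x {h:.1}\" high");
}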
Upgrading Debian
Sadly, the new monitor meant it was time for the dreaded home desktop
Debian upgrade. I mean, not really, but as I suspected it turned out I
had to log out and back in anyway: the combination of X11 and Nvidia in
a dual-monitor situation always gets the display pretty messed up when
configuring, so logouts are necessary. This is a good time for me to do
the Debian upgrades I've been putting off for months because everything
was just working, within reason.
Fixing sources.list.d
For a while now, apt
has been whining at me with "You
need to update your /etc/apt/sources.list.d
real bad!
There's bad keys, and besides, there's a whole archive description file
format you need to migrate to! It's important!"
Ok. So let's do this too while we're at it.
After making appropriate backup copies of stuff, I ran
apt modernize-sources
. Cool.
That command was not smart enough to migrate the
arch
field of an archive listing:
deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main
for example just lost the arch
part. Then
apt
complained at me that the Chrome archive didn't have
x86
support. I know. But I'm on a dual-architecture
platform for gaming purposes.
Much research and agony later I solved this by manually
adding
Architectures: amd64
to every single .sources
file. Note the slight but very
important difference between Architecture:
(silently ignored) and Architectures:
(which works fine).
The problem of "bad" archive signing keys had been around a
while. For Zulip Desktop, for example, apt update --audit
gave the ominous
Warning: https://download.zulip.com/desktop/apt/dists/stable/InRelease: Policy will reject signature within a year, see --audit for details
Audit: https://download.zulip.com/desktop/apt/dists/stable/InRelease:
Sub-process /usr/bin/sqv returned an error code (1), error message is:
Signing key on 69AD12704E71A4803DCA3A682424BE5AE9BD10D9 is not bound: No
binding signature at time 2025-03-13T21:39:32Z because: Policy rejected
non-revocation signature (PositiveCertification) requiring second
pre-image resistance because: SHA1 is not considered secure since
2026-02-01T00:00:00Z
"Since 2026-02-01." Nice. Anyhoo, except for Zulip Desktop
(filed Issue #1437) I
managed to track down proper keys for everything I needed.
Turns out you're supposed to put the signing keys in
/etc/apt/keyrings
and reference them there with a
Signed-By
header — unless the signing key is a "system key"
and needs to go somewhere else. The initial
apt modernize-sources
had done stuff, so it was not too
brutal to figure out. Still…
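For reference, the migrated Chrome entry ends up as a deb822 stanza shaped
roughly like this (illustrative; don't trust the exact keyring filename):

Types: deb
URIs: http://dl.google.com/linux/chrome/deb/
Suites: stable
Components: main
Architectures: amd64
Signed-By: /etc/apt/keyrings/google-chrome.gpg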
Dealing With Linux Nvidia — Again
This one is always comical. This time, I managed to get the Debian
login prompt to come up on my freshly-booted primary screen. But only
the primary screen. The new screen was ignored.
VT-switching to the console showed that the new screen was working
fine. It just didn't light up under X11.
"No problem," I naïvely thought. "I'll just log into my account and
use the Nvidia display tool to fix it. Or xrandr
. Something
will work."
On login the screen froze so hard I gave up and rebooted my
machine.
…
Let's just skip the silly debugging session. Bottom line: everything
works perfectly at this point — as long as I turn off my new
secondary monitor before I log in, and then turn it back on. Srsly. This
is reproducible, and I have only the vaguest guesses why/how this is a
thing.
Epilogue
I burned down about five hours on this — not counting things like
unpacking my new monitor and finding an old adjustable stand to use with
it, and then putting the stand from my soon-to-be-discarded
thermal-generator/monitor onto the monitor I borrowed the stand
from.
The good news is that I am now ready to put off upgrading Debian for
another month, and getting a new monitor for another 10 or 15 years. I
wonder why I do that?
2025-02-03 18:51:54 PST
Bart Massey 2025
I've given an assignment in a couple of Rust classes I've taught
recently involving Cellular
Automaton (CA) "Rule 110". The latest
version of the assignment is at the end of this post — please take a
second to read it, noting especially the transition table there.
Implementing Rule 110
The obvious way to generate a next row using Rule 110 is to walk over
the current row one bit at a time, take the "left" (L), "center" (C) and
"right" (R) bit relative to the current position, and use Rule 110 to
compute an output bit. Note that the output bit must be placed in a new
row, to avoid corrupting the row computation.
This algorithm allows various implementations of the row datatype. A
good starting place in a language with a Boolean type and sized arrays —
Rust — is to use an array of Booleans as the row type. In Rust this
looks like [bool;8]
and occupies one byte per boolean. The
result is pretty inefficient but easy to work with: for example, unlike
in C, Rust array assignment and parameter passing is by value (copied)
instead of by reference.
Once you've fixed a representation of the row, and of the LCR values
— in Rust, just [bool; 3]
— you can write a function that,
given an LCR, returns the correct Boolean; that function finishes the
algorithm. There are many ways to accomplish this: a simple
match
(like C switch
) statement is probably as
easy in Rust as anything. The Rust compiler will even notice and report
errors like incomplete or overlapping matches, which is nice for
catching copy-paste bugs.
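A minimal sketch of that starting point (not the graded solution; the names
and the neighbor convention are arbitrary choices):

// Rule 110 as an exhaustive match over (left, center, right).
fn rule110(l: bool, c: bool, r: bool) -> bool {
    match (l, c, r) {
        (true, true, true) => false,
        (true, true, false) => true,
        (true, false, true) => true,
        (true, false, false) => false,
        (false, true, true) => true,
        (false, true, false) => true,
        (false, false, true) => true,
        (false, false, false) => false,
    }
}

// One generation: read the old row, write a fresh one, wrapping at the ends.
fn next_row(row: [bool; 8]) -> [bool; 8] {
    let mut next = [false; 8];
    for i in 0..8 {
        let l = row[(i + 7) % 8];
        let c = row[i];
        let r = row[(i + 1) % 8];
        next[i] = rule110(l, c, r);
    }
    next
}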
Going Bitwise
An obvious next step is to improve the performance of everything by
changing the data representation. Rows of length up to 64 can
conveniently be represented as a simple unsigned integer:
u64
in Rust, uint64_t
in modern C. (Rust also
supports u128
, but since that is not a native type in most
CPUs, more than 64 bits per row might indicate time to just switch to a
full-on bit-vector package.)
Now some shifting and masking can get the LCR bits at each position
into the low-order position, at which point the Rule 110 calculation can
just take an integer in the range [0..7]
to a Boolean.
Indeed, this works for any CA rule: my code contains this function
fn by_rule(rule: u8, bits: u64) -> u64 {
    ((rule >> (bits & 0x7) as u32) & 1) as u64
}
(Excuse the gross as
casts in the Rust code. Rust does
no automatic numeric promotion, so getting the types right is down to
casting without screwing up: as
casts cannot panic, so will
do weird (deterministic) things if you get them wrong. Not a great
language feature, honestly.)
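For concreteness, the per-bit stepping loop around by_rule() might look
roughly like this (a sketch: the function name, the nbits width parameter,
and treating the higher-order neighbor as "left" are choices made here):

// One generation for an arbitrary rule over the low `nbits` bits of `cur`,
// wrapping at the ends, producing one output bit per loop iteration.
fn step_by_rule(rule: u8, cur: u64, nbits: u32) -> u64 {
    let mut next = 0;
    for i in 0..nbits {
        let l = (cur >> ((i + 1) % nbits)) & 1;
        let c = (cur >> i) & 1;
        let r = (cur >> ((i + nbits - 1) % nbits)) & 1;
        next |= by_rule(rule, (l << 2) | (c << 1) | r) << i;
    }
    next
}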
This optimization provides a substantial speedup in my application,
but it still leaves a bunch on the table. For "short" rows like this, we
should be able to get bitwise parallelism if we are careful.
However, the only way I see to do this is rule-specific: we can
hard-wire a Boolean computation using the structure of (for example)
Rule 110 that produces the right answer. Let's do that.
Optimizing Boolean Functions
Rule 110 is defined by a Boolean truth table. These are familiar
objects to Computer Scientists and Electrical Engineers. Let's write
down the actual table, but upside down.
LCR Q
111 0
110 1
101 1
100 0
011 1
010 1
001 1
000 0
The reason for the name "Rule 110" is thus made apparent: reading
Q
from top to bottom as a binary number yields decimal 110.
(You can tell that rule name inventor Stephen Wolfram is not a "real"
computer person. The rule numbering should have been hex — 6E here — for
ease of use.)
Those who recall days gone by or work with this stuff regularly may
see that there's an obvious Boolean expression here that will produce
the right answer. Let's let the negation of a positive literal be its
lowercase: for example, "not L
" is "l
". Then
the "sum of products" (SOP) expression for Q
is
LCr + LcR + lCR + lCr + lcR
where plus is bitwise "or" and adjacency is bitwise "and".
This is a fairly expensive expression to compute, so we'd like to see
if we can simplify it to something cheaper that gives the same
answer.
Boolean Algebra
One way is just using Boolean algebra: we can easily factor
L
and l
to get
L(Cr + cR) + l(CR + Cr + cR)
The extremely clever will note that we can then use Boolean "xor",
which we will denote with "^" as per horrible C (and Rust) convention.
(Why horrible? Because people confuse it with exponentiation all the
time, and then the typechecker passes it, and then the bad thing
happens.)
L(C ^ R) + l(CR + (C ^ R))
The next steps go as follows
L(C ^ R) + lCR + l(C ^ R)
(L + l)(C ^ R) + lCR
(C ^ R) + lCR
This is a boolean expression we can compute fairly cheaply: it
requires an "xor", an "or", two "and"s and a "not" for a total of 5
operations.
Karnaugh Maps
By using Boolean algebra we got from
LCr + LcR + lCR + lCr + lcR
to
(C ^ R) + lCR
which is quite an improvement. But it was also a fairly involved
process with many fiddly steps to get wrong.
The Karnaugh
Map is a well-known device in Electrical Engineering, but maybe less
well-known in Computer Science: a graphical method of quickly
simplifying a Boolean expression.
(Here we will work with our specific 3-literal SOP expression.
Karnaugh Maps can be made for any number of literals, but they get hard
to reason with as the number of literals becomes large. They also can be
used to simplify product-of-sums (POS) expressions.)
The Karnaugh Map for our initial expression might look like this:
CR Cr cR cr
L 0 1 1 0
l 1 1 1 0
We have done the ordering "backward" because the original truth table
is backward: this is not normal, but it doesn't matter to the method. We
have also chosen to group C and R: any grouping will work, although
changing the representation may make salient features more or less
graphically obvious.
Now, the heart of the Karnaugh Map method is to try to create large
rectangles that cover all-ones areas. Each rectangle corresponds to a
min-term in the simplified SOP formula.
Let's start with the large rectangle in the center, starring each
element.
CR Cr cR cr
L 0 *1 *1 0
l 1 *1 *1 0
This could give us two min-terms either vertically or horizontally.
However, the graphical view gives us an easy way to spot what we spotted
before: we will add the combined term C ^ R
. Now we have
covered all the one-bits but the lower-left one: we will cover that with
a second 1×1 rectangle, getting
CR Cr cR cr
L 0 1 1 0
l *1 1 1 0
This gives us a second min-term lCR
, so our combined
expression is as before.
(C ^ R) + lCR
That was way easier. Our expression still has a cost of 5
operations.
But wait — there's more! In that last step we violated our rule of
making big rectangles. Nothing says the rectangles can't overlap. Making
the 1×3 rectangle
CR Cr cR cr
L 0 1 1 0
l *1 *1 *1 0
gives us l(C + cR)
which isn't as cheap as before. (In
general, crossing even boundaries is expensive.) But making the 1×2
rectangle
CR Cr cR cr
L 0 1 1 0
l *1 *1 1 0
gives us lC
which is a 20% savings! Our expression costs
4 now:
(C ^ R) + lC
This seems counterintuitive, so we should probably check it with a
truth table.
LCR C^R lC Q
111 0 0 0
110 1 0 1
101 1 0 1
100 0 0 0
011 0 1 1
010 1 1 1
001 1 0 1
000 0 0 0
Huh. Works perfectly.
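For the extra-paranoid, the same check as a throwaway Rust program; the
right-hand side of the assertion is exactly what by_rule() computes:

fn main() {
    for bits in 0..8u64 {
        let (l, c, r) = ((bits >> 2) & 1, (bits >> 1) & 1, bits & 1);
        // The 4-operation formula, computed on single bits.
        let fast = (c ^ r) | (!l & 1 & c);
        // The table lookup: bit `bits` of the rule number 110.
        assert_eq!(fast, (110u64 >> bits) & 1);
    }
    println!("formula matches Rule 110 on all eight inputs");
}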
Could we do better? Notice that Q
has more 1 bits than 0
bits. Maybe we should pay for a negation and work with the map for
q
?
CR Cr cR cr
L 1 0 0 1
l 0 0 0 1
This leads to q
as
cr + LCR
This has 7 operations counting the invisible negation, so not as
good.
Since we need Q
we could try to deMorgan-ize and factor
to get
!(cr) !(LCR)
(C + R) (l + c + r)
Cl + Cr + lR + cR
l(C + R) + (C ^ R)
5 operations, but we ended up doing some fiddly Boolean algebra again.
Not an improvement.
Bitwise Parallelism
Now we have a really nice Boolean formula for Rule 110. Yay. Remember
that we started with a really nice formula generalized over all rule
numbers. What does the specialization buy us?
Well, now we can use "bitwise parallelism" to generate all the bits
of the next row at once. Notice that R[i]
is just
(C << 1)[i]
and L[i]
is
(C >> 1)[i]
. Soo…
let left = cur >> 1;
let right = cur << 1;
let next = (cur ^ right) | (!left & cur);
Note that Rust uses !
for bitwise negation of integer
types. Wouldn't have been my choice, but it's fine.
We've computed a whole next row of up to 64 bits at once. This is
nice. Really nice.
There's only one remaining problem. The end positions will not be
computed correctly. Right now we're shifting 0 into the
right
value, and whatever was lying around in the
next-upper bit into the left
. Then we're negating
left
without consideration of how many bits actually need
to flip, essentially trashing the left bit for the next iteration.
Wrapping Around
Recall that we are to "wrap around" at the ends. Let's add a little
masking and rotate action. Let nbits
be the number of
least-significant bits that are supposed to be part of the row. We can
rotate right by shifting right and moving the left bit back around.
Similarly, we can rotate left by shifting left and moving the right bit
back around. The mask at the end ensures that there's a 0 bit left of
the significant bits on the next cycle.
let left = (cur >> 1) | (cur << (nbits - 1));
let right = (cur << 1) | (cur >> (nbits - 1));
let mask = (1u64 << nbits).wrapping_sub(1);
let next = ((cur ^ right) | (!left & cur)) & mask;
The Rust compiler, when run in the default debug mode, inserts
integer wrap assertions in the code. (Honestly, these should be on by
default in optimized "release" compiles as well. Sigh.) The
wrapping_sub()
disables the wrap check, important when
nbits == 64
.
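Pulled together as a function and sanity-checked against the assignment's
first two rows, the step looks roughly like this (a sketch: the function
name and the leftmost-cell-in-the-highest-bit convention are choices made
here, and it keeps things simple by assuming nbits is between 1 and 63):

fn rule110_step(cur: u64, nbits: u32) -> u64 {
    let left = (cur >> 1) | (cur << (nbits - 1));
    let right = (cur << 1) | (cur >> (nbits - 1));
    let mask = (1u64 << nbits) - 1;
    ((cur ^ right) | (!left & cur)) & mask
}

fn main() {
    // "*.*..*.." -> "***.**.*", with the leftmost cell in the highest bit.
    assert_eq!(rule110_step(0b1010_0100, 8), 0b1110_1101);
    println!("matches the assignment's second row");
}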
With the code above we can apparently produce a new row of up to 64
bits in 15 cycles. Of course, that's a really gross approximation to
real performance. The calculation assumes that everything can be in
64-bit registers all the time — not unreasonable on a modern "big" CPU.
The calculation also doesn't take into account cycle times: the integer
operations may issue at two or more per cycle depending on… things. But
still…
Testing
We have done a lot to get to this point. We should probably start by
comparing with a known-good implementation to make sure that our fiddly
bit-wiggling works. I'll check the first 10 rows of the 8-cell table
from our assignment — that should be good enough. (Remember the
assignment? Seems like a long time since we started there.) One moment,
please.
Ok, full disclosure. The version of the row operation given above is
not what I started with. I made one masking bug and missed one
optimization in the first version. This is why we test: the core idea
was sound, but my code was not. Fixed.
The whole point of all this fiddliness was, in some sense, to get a
really fast implementation of Rule 110. Let's see how we did.
The results? The old method (already faster) makes 100M rows in about
3.36s. The new method makes 100M rows in 0.15s. Roughly a 22× speedup.
1.46s for 1B rows, about 685M rows per second on a single core of my
2.2GHz machine.
But wait — there's one more set of tricks to apply. I can set up
rustc
to compile for my real "modern" CPU (AMD Ryzen 9
3900X) instead of some least-common-denominator x86_64
. I
can also turn all the rustc
optimizations I know about up
to 11. The result? Got 1B rows in 1.42s! Yeah, compiler don't care.
After applying the same optimizations to the previous version, I
found 3.07s and 0.14s for 100M rows, respectively — our speedup is about
the same. The compiler can do a lot more with the "slow" version,
apparently, because there's more code there to work with.
One final idea: this benchmark is somewhat specific. The row width
shouldn't matter much to the fast version, but it matters a lot
to the old version. I chose 8 bits just because that is what we started
with. Let's try a couple more widths on 100M rows just to verify a bunch
of assumptions. With width 1, I get 0.75s and 0.14s respectively. Still
a respectable 5× speedup in the worst ("real") case. With width 64, I
get 21.9s and 0.14s. That's kind of a big deal.
We have won.
Future Work
Battles don't stay won. There's things we could do to be stronger,
faster, better than before.
Bit parallelism is the best parallelism. If you want to go wider
than 64-bit rows, though, you'll probably want to investigate the other
kind. The trick here, of course, is the boundaries: some communication
between cores is needed on each cycle. This can slow you down massively.
However, because end corruption can only travel inward at one bit per
row, you can do some batching and fix things up later. It all gets
fiddly and really hard to make fast.
The manual process we went through to get the Rule 110 map seems
automatable. Some kind of nice Boolean formula optimizer seems in order.
Since there are only 256 rules (some of which are utterly boring) we
could pre-compute a Boolean formula for each and hard-code it into a
table. Alternatively, we could find some reasonably fast optimizer and
do this on the fly. We really should handle arbitrary rules.
Conclusions
I set out to write a short little thing to explain to those
interested what I was doing today. I would estimate that I ended up
spending about four hours writing and about two hours actually doing the
thing. This is not an unusual ratio for me, and I think for others.
The good news is that I ended up with some interesting insights for
myself (even on this well-traveled road). Hopefully you learned a thing
or two as well.
Acknowledgments
CA Rule 110 was first introduced to me in Fall 2008 when I was
teaching a Multicore Computing class. (I ended up losing to only a
couple of students from the class of 20 or so, pitting some forgotten
variant of my single-threaded implementation described in this article
against their 16-core things.) Thanks much to the student who brought
this problem to the class.
Keith Packard and others taught me the art of bit-banging, long ago.
Thanks for that! Also thanks to Hank Warren's Hacker's Delight,
the one true bit-parallel book.
Availability
The Rust source code for everything here is available under an
open-source license at https://github.com/pdx-cs-rust/hw-rule110-solution.
Please see the README there for details.
Appendix: Rule 110 Assignment
Bart Massey 2025
Background
In this exercise you will create an ASCII rendering of the Rule 110 Elementary
Cellular Automaton (CA). Not as scary as it sounds, and a nice
simple Rust thing to play with.
A CA starts with a random row of bits. We will represent 1 bits with
*
and 0 bits with .
. Our starting row will
specifically be this 8-bit pattern for now.
*.*..*..
To produce the next row, take bits three at a time, "wrapping around"
if a boundary is hit (bit 7 is next to bit 0 and so forth). A group of
three bits in row n will determine the center bit position in
row n + 1 according to Rule 110:
111 → 0
110 → 1
101 → 1
100 → 0
011 → 1
010 → 1
001 → 1
000 → 0
So for our start, the first two rows will be
*.*..*..
***.**.*
(Notice the wraparound at the beginning and end.)
The Program
Write a program that prints ten rows starting with the two given.
Challenges
(Challenges are not required, but give
a chance to do something extra. Turn them in and write them up in the
README.)
Make your program take a command-line argument for the starting
row.
Time your program. See how fast you can make it go. Compare
performance with a C or C++ implementation. You will probably have to
modify the program to print a lot more rows: maybe some number
given by a command-line argument.
2024-12-30 01:19:18 PST
Bart Massey 2024
I spent tonight continuing to work on moving my web content to my
cloud server. This particular task was bittersweet at best.
My beloved brother Byron passed
away in 2010. I miss him constantly.
Byron's website was at http://nealhere.com
. I let the
domain name go, but kept the website alive in his memory at http://nealhere.po8.org. Take a look:
there's a lot of cool old stuff there.
The move was complicated because Byron had built
nealhere
using a now-ancient version of the excellent
blosxom
CGI static site generator. nealhere
also used server-side includes and .htaccess
. So I figured
out how to make all these things work with my new nginx
installation.
blosxom
needed CGI.pm
to function.
apt install libcgi-pm-perl
.
blosxom
needed plain old CGI. Unfortunately, "slow"
CGI is long gone, and the wrapping needed to make things work with
FastCGI took some discovery. apt install fcgiwrap
, then
look at the nginx
example under
/usr/share/doc/fcgiwrap
to get some setup hints.
Byron's setup used .htaccess
to do rewrites. Found
out how to do this with nginx
rewrite rules.
Unfortunately, the cursed Zulip installation on the cloud server
had set the running userid of nginx
to zulip
.
This caused the fcgiwrap
socket to be inaccessible. It's
also wrong and gross. I edited the fcgiwrap.socket
config
file for systemd
for now.
Everything is hard and I'm tired. I'm regretting not running Zulip in
a VM now. It clearly does not play nice with others.
I miss you, Byron.
2024-12-29 23:32:45 PST
Bart Massey 2024
In concert with the move I reported in my previous blog entry, I
decided to move my RSS feed reader Miniflux to my cloud server. That is the
logical place for it, I think.
I spent six or eight hours trying, and gave up.
Miniflux is a server that polls RSS (and/or Atom) feeds regularly and
provides a web interface to browse them. I like Miniflux's functionality
very much: it became my RSS solution sometime after Google bailed
on Google Reader in 2013. I would love to think that Google killed it
because it was too hard to maintain, and with the hope that others would
take up the slack. The cynic in me, though, thinks that Google killed
Reader because it's really hard to generate advertising dollars or
gather much interesting personal information with an RSS platform, even
one you control.
Anyhow, Miniflux is written in Go, which seems fine for this. It also
uses Postgres for its data store. I am a big fan of Postgres in the
abstract: it is the only truly open-source database I am aware of that
is usable for "big" professional-grade deployments.
Sadly, I am less of a fan of Postgres in the concrete these days. It
has become really "enterprisey", and doesn't seem to have much
innovation or support for "normal" use cases. The big problem with
Postgres — the awful, awful problem that has kept MySQL alive and made
SQLite the correct solution for this kind of deployment — is its
horrible authentication and security setup story. I have watched
students and open source developers struggle here, and ultimately admit
defeat. Heck, I have done it myself.
I don't have the energy to go into the Postgres setup and
administration story in detail. Here's a few of the commands I typed at
Postgres while trying to dump the miniflux database on my old server and
then restore it on my new server. I don't know exactly what these do, so
don't use them unless you know what you are doing. They didn't work for
me much anyhow, but I put them here to give an idea and because I may
want them later. Notes, in other words.
-- Record the current collation version so Postgres stops warning after an OS upgrade.
ALTER DATABASE postgres REFRESH COLLATION VERSION;
-- Create the role Miniflux connects as, plus its database and privileges.
CREATE USER miniflux WITH PASSWORD 'xxxxxxxx';
CREATE DATABASE miniflux WITH OWNER = miniflux;
GRANT ALL PRIVILEGES ON DATABASE miniflux TO miniflux;
-- psql meta-command: connect to the new database.
\c miniflux
-- Needed on newer Postgres (15+), where the public schema is locked down by default.
GRANT ALL PRIVILEGES ON SCHEMA public TO miniflux;
Eventually, after exploring and learning a lot, I got my Miniflux
installation running on my cloud server. It seemed to be working
perfectly, so I patted myself on the back and went to bed.
In the morning, it developed that the feed sync couldn't update the
database. I kept getting a cryptic error about "schema zulip". Something
in the Zulip Postgres installation, I eventually concluded, was keeping
the Miniflux Postgres installation from working. I tried to debug it for
quite a while, but eventually concluded it was beyond me. I filed an issue asking for
help from Miniflux (maybe should have tried Zulip as well) and moved
back to accessing Miniflux on my own server for now.
I clearly have too much infrastructure in play. I need to get it down
somehow. Ah well. Will let you know if anything develops: maybe you'll
see the post via my Atom feed.
2024-12-29 02:19:25 PST
Bart Massey 2024
I've retired the main server that acted as the DMZ box for my home
network. The machine now called bartgone
served me
faithfully for so many years that I have no idea how old it actually is,
but its time has come.
Hmm. Let's see how old bartgone
might be.
- Intel "Core2 6400" @ 2.13GHz (probably Conroe Core 2 Duo E6400,
2006)
- 4GB RAM
- Intel DG965W Motherboard (2006)
So, best guess is late 2006 sometime. That makes
bartgone
about 19 years old. During that time, it has run
24/7/365 faithfully and well, routing packets and providing services to
my home net and the internets.
I've replaced bartgone
with bartlet
, a GMK
NucBox M6 mini-PC. I bought it cheap a couple of months ago.
- AMD Ryzen 5 6600H (6-core/12-thread)
- 16GB RAM
- Dual ethernet onboard
So far looks like it's going to be a fantastic replacement. May it
last me another 20 years.
2024-12-17 21:34:55 PST
Bart Massey 2024
Thought I'd do a quick post-mortem (damn near, anyway) on my big
adventure of the last few hours. It involved reconfiguring a Zulip server I run, and was supposed to be
a quick thing. But Zulip is never quick.
Zulip has an interesting configuration option for allowing multiple
Zulip chat servers on a single host server. They call this "realms" for
some reason. By default you only get the one default realm on your
server, so that's what I got when I very quickly set mine up a couple of
years ago.
I now wanted to reconfigure to allow multiple realms:
https://site1.zulip.example.com
(for example), and
https://site2.zulip.example.com
instead of just
https://zulip.example.com
on my cloud server.
Thus the fun began…
DNS
So the first thing was to get the domain names set up. I run the DNS
for example.com
. I am serving it with bind9
or
bind
or named
— all different names for the
same piece of software in use on my home Linux server, depending on
context. It turns out that systemctl restart named
and
systemctl restart bind9
are just aliases of each other.
Which is weird.
I've spent a lot of time in /etc/bind
configuring this
thing, so I wasn't anticipating any big deal. I slapped
site1.zulip.example.com
as an A record in the zone table
and… nope.
A half-hour of flailing later I called a friend who is both generous
with his time and a genius. He too was confused. The thing we both
thought should work, and the internets thought should work, didn't
work.
Skipping a bunch more flailing, the desired result was achieved by
adding a new zone for zulip.example.com
in the zone file
for example.com
(as zone master, backed up to my friend as
zone, er, alternate). With the NS and CNAME records filled in just
right, it all just worked.
Upgrading Zulip
Before I tried to do anything with Zulip, I figured I should upgrade
first, because it was time anyway and I'd be working from a stable base.
Sadly, Zulip is not packaged for Debian as far as I can tell, so I had
to download a big tarball and have some script from the existing Zulip
installation run the upgrade.
The Zulip install script refused, because "unsupported Debian
version". Much digging around later, it turns out my cloud server
provider, who had graciously installed Debian for me, had done something
that altered both /etc/debian_version
and
/etc/os-release
to say I was running
trixie/sid
. Some careful hand-editing of these files got me
back to where the Zulip script was willing to admit that I had an OS
they supported and install the software.
There was one other quirk: the installer wanted libvips
,
but Debian had only libvips42
. Huh. So I broke down the
upgrade tarball, hand-edited the dependency, and then rebuilt the
tarball and gave it to the installer again. Success.
Move The Existing Zulip
I then wanted to move the existing Zulip from
zulip.example.com
to site1.zulip.example.com
.
I used the Zulip backup script (wouldn't work earlier because of the
version thing) to back the existing Zulip up, then just used another
Zulip script to move the thing. Just worked, which surprised me.
Deal With Nginx and Certificates
Of course, everything has to be TLS now. So I ran another
Zulip script which ran certbot
to get a new TLS certificate
for site1.zulip.example.com
. (Given the number of Zulip
instances I ever expect to run, getting a wildcard cert seemed like
excessive effort.)
I then confronted a couple of sad realities: nothing was working, and
nginx
configuration was the problem. I have been using
Apache since it came out, and I am just not that comfortable with
nginx
. However, it was on this server because reasons and
seemed hard to replace, so I buckled down and started to patch up the
config.
One issue was another service running on my cloud box, "Punchy".
Punchy had its nginx
config installed in
/etc/nginx/conf.d
and really wanted to be in charge of the
TLS for everybody. I finally dpkg-divert
ed it to
sites-available
where it should have been in the first
place.
The key finding of this phase was that every
server
section needed to have a server_name
set. Any server block that lacked one just kind of took over everything else.
Finally sorted that all out.
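In other words, every vhost needs to be pinned down, roughly like this
(illustrative; the TLS and proxy details are whatever the Zulip scripts
generated):

server {
    listen 443 ssl;
    server_name site1.zulip.example.com;
    # certificate paths and proxying for this one site only
}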
One Last Zulip Config
At this point, I had my Zulip desktop client talking successfully to
site1.zulip.example.com
. Hooray.
Unfortunately, browser access not so much. The browser took a login,
but then just hung spinning, with a message that said "if this doesn't
come back in a few seconds try reloading the page". Needless to say, a
reload solved nothing.
Much adventures later, I got out the browser developer
tools, which reported that Zulip was still trying (and failing) to talk
to zulip.example.com
. I then discovered
/etc/zulip/config.py
, which had
zulip.example.com
set as primary, and no entry in the
alternate hostname for site1.zulip.example.com
. I added the
latter, and then altered the nginx
configuration to allow
the former.
Conclusions and Future Work
Hooray. I'm back to where I started. Except now I'm running Zulip the
way I wanted to, and also now I've fixed the Punchy config and also have
figured out how to do a static site for my cloud server using
nginx
. Way too many hours, but a moderate success.
In digging through Zulip stuff I noticed that it may support GitHub
and Google for auth now. I need to look into this: it's way more
convenient.
Now if Zulip would fix alerts on mobile it might become actually
usable for people. Hooray.
2024-12-08 00:11:24 PST
Bart Massey 2024
Just got a Lenovo X1 Carbon Gen 13 Aura Edition (yes, its real name)
running Windows 11. I was setting up to dual-boot Lenovo-endorsed Ubuntu
Wayland, and started this notebook to document the process.
The goal of this notebook is several-fold:
Remind future me what I did. ("'Future Me.' I hate that guy. He
knows what he did." —Julian Kongslie)
Give others a guide with some pitfalls pointed out and
navigated.
Give feedback to the Ubuntu folks.
Background
It may be helpful to know where I'm coming from.
I started using 2.9BSD UNIX on a PDP-11 in 1982, and became a paid
sysadmin and later consultant and teacher of UNIX stuff. I started with
Linux around 1.2.13, using whatever "distros" existed then. Later, I was
a paid admin for a network of Red Hat boxes, then switched to Debian at
slink
. I've been using Debian continuously since then. I
have small patches in the Linux kernel. I was one of the founders of
Linux Plumber's Conference.
What I'm saying is that I'm old enough and arrogant enough that the
ridiculous mess that follows is something I can't imagine a new Linux
user wading through on their own. In my opinion it was utterly
irresponsible of Lenovo to claim they supported Ubuntu on the Carbon Gen
13 when it worked like this.
This is my first serious messing-about with Ubuntu. So
that's on me.
2024-12-04
Prepping For Linux
Previously on this machine, I had installed Debian Sid. Gave it up,
as X11 seemed not to be available and I didn't want to figure out Debian
Wayland. In the process I had pre-done a few things to prep for
Linux:
Modified Secure Boot: As shipped, Secure Boot would not
allow booting non-Microsoft OSes. Turned on laptop, got into BIOS
(repeatedly push F1 during boot) and found the setting for Secure Boot
and modified it to allow Third-Party CA Certificates.
On the next Windows boot, this made me supply my BitLocker key, since
the Secure Boot setting had changed. Did so without incident. I had no
idea the Windows 11 install had shipped with BitLocker.
Disabled Bitlocker: A partition with BitLocker enabled
cannot be resized by any means. However, needed to split the main 2TB
partition between Windows and Linux. So found instructions for turning
off BitLocker on the Windows partition. Should have done this before
modifying Secure Boot.
Shrank Windows Partition: From the Debian installer,
shrank the Windows partition to 500GB (should be plenty since I rarely
use Windows for anything). I then let the installer use its default
partitioning scheme on the resulting free space. Little did I know that
I would reclaim this free space and start over later with
Ubuntu.
The total time for this step was probably around an hour. It would be
much faster if repeated.
Installing Ubuntu
Now I was ready to start over with Ubuntu. Some of these steps were
actually duplicates of Debian install things, but we'll pretend that
they were all new.
Configured DNS: Set up bartcarbon13.po8.org
with IP address 192.168.1.13
using my local DNS
infrastructure.
Configured DHCP: Set up my local DHCP infrastructure to
recognize my machine with dhcp-client-identifier
bartcarbon13
. This was necessary since the ethernet address
would be that of an adapter rather than the machine itself.
Built Ubuntu USB bootstick: Grabbed an old 8GB USB flash
drive and copied a redundantly-named Ubuntu
plucky-mini-iso-amd64.iso
onto the base drive device (not a
partition). Was extremely careful to check the device with
parted
right before the copy: easy mistake to copy your
boot iso onto one of your actual hard drives, leading directly to
backup-recovery-town.
Booted from USB: Turned on the laptop and got into the
boot menu (repeatedly hit F12 during boot). Selected booting from USB.
Was confronted with micro-text menu due to HIDPI display. Got my
super-magnifying-glasses on.
Tried to start install: Turns out the micro-menu just
said "Choose an Ubuntu Distribution To Install". Why is this a menu
item?
Next micro-menu offered me a variety of options, none of which was
the "Plucky" the mini-iso said it would install for me. I wouldn't mind
an earlier Ubuntu, except I doubt it would support the Intel ARC
Graphics on my laptop.
Folks say installing Ubuntu is easier than installing Debian. Don't
think they've tried recently.
Tried installing again: Fetched a daily-build
plucky-desktop-amd64.iso
after a 5-minute download. 5.4GB
should fit on my 8GB USB stick, but not with miles to spare for sure.
Dumped the ISO onto my USB drive in another 18 minutes — that's a very
slow drive and a big image.
This time was presented with a four-item micro-menu. While trying to
figure out which option to select, the bootloader timed out and started
the default "Install Ubuntu." Apparently I was going with that.
After a couple of minutes of staring at a splash screen, I got an
Ubuntu desktop screen. A minute later Ubuntu was more-or-less up,
apparently running as a Live Image off the really slow USB drive. Not
ideal, but I decided I could try working with it.
Tried an actual install: Noticed "Install Ubuntu 25.04"
in the lower-right corner of the desktop, and decided to click on that
once the handy USB drive activity light mostly quit flashing. The only
effect seemed to be refreshing the list of volumes in the hotbar.
After a while I started clicking on things and eventually ended up
stuck in an application search screen. I apparently had windows open
according to the hotbar, but had no idea how to get to them. A quick
Google search of "ubuntu return to desktop from app search screen" gave
as its top hit "Super+D or Ctrl+Alt+D: Show desktop". Nope. Found a
System 76 tutorial on Ubuntu Basics, which was helpful. Still couldn't
get anywhere.
Finally gave up and rebooted.
Tried installing again: This time chose "Ubuntu (safe
graphics)" from the boot menu, because I knew what was coming. Figured
maybe there were graphics issues with the ARC stuff that weren't yet
sorted on the Live CD. Waited again for a couple of minutes for things
to come up.
This time I would be careful not to search for applications.
Got a window that said "Preparing Ubuntu" this time! Seemed more
promising. I suspect graphics were just borked on the previous attempt.
Frickin' bespoke hardware.
Continued installing: OK! Got the expected installation
prompts. Apparently graphics were borked after all.
Everything was still micro-text, of course. Got out my heavy
magnifying glasses again. Then switched to my Optivisor™ because the
focal length of the glasses was like two inches and I wanted my face out
of the screen.
But hey, on the second screen of the install I was offered the
accessibility option of "Desktop Zoom". Perfect! But it turned out to be
pan-and-zoom. Nope. Settled for "Large Text" and continued
magnification, with the plan of fixing later.
Worked through the Ubuntu installer: Other than nice
graphics, the screens and choices seemed to be pretty similar to the
standard Debian installer.
Realized in here somewhere that I probably wanted to set up an Ubuntu
mirror at home. Sigh.
Changed disk partitions: Oh! The installer offered to
keep my old Debian installation and install Ubuntu as a third way for
now. Tough choice. I could always delete the Debian partition later if I
didn't miss it. But I couldn't imagine why I would miss it, and it
would be easier now.
I decided to suck it up, take "Manual Installation" and reuse the
Debian partition for Ubuntu. The partition editor was pretty
Debian-standard, but was not interested in letting me erase the Debian
partition. I didn't know what would happen, but decided to plunge
on.
Continued installation: Having the timezone
configuration auto-locate me, presumably by IP, was a nice touch: even
Windows didn't do that.
Huh. The "Review Your Choices" panel pointed out that I had forgotten
to go back and mark the swap partition as "swap" instead of "leave as
swap". That's gross, but OK. Unfortunately the partition editor then
made me reset everything else about the configuration, including stuff
it had initially auto-guessed. Ah well. I'm kind of used to using a swap
file instead of a swap partition for flexibility, but this was chosen
for me by the Debian install and I decided to go along.
Installed installation: I pressed the "Install"ifier.
Started a copy of files off the USB stick. Wasn't sure this was faster
than from the Internet, but hey. Started seeing a series of
post-purchase ad screens, including "Great for Gaming", which, well,
yeah but also had Windows on the box so…
Rebooted from SSD: Ubuntu came up as the first option in
the micro-menu, so I took it. Looked like Windows was there also though,
as expected. Logged in and there it was: success! Went through a brief
post-installation dialog and was looking at a running system.
The total time for this step was about four hours. If repeated, it
would probably still be about two: there was a lot of sit-and-wait, and
a lot of menus to wade through.
Configuring Linux
The fun doesn't stop when the desktop boots. Oh no.
I needed to configure my machine for my use cases. How hard could it
be?
Installed a terminal: The startup had pointed out the
"App Center" icon in the hotbar. I opened it. Took me a moment to
realize that it was locked to "snap packages". Ugh. Switched to "Debian
packages". Nothing under "terminal". Huh.
Found the "Show Apps" start menu thingy. Found a "Terminal" and
pinned it to the hotbar "dock". Apparently it's called a
dock.
Ah! The battery-looking bar icon in the upper-left corner took me
back to the desktop. So yeah, things were just borked earlier on.
I hoped "Terminal" was Gnome Terminal, my standard ride and one that
would make sense on a Gnome-based distro. When I checked… it was! My old
friend. Finally started to feel a bit comfortable with this Ubuntu
experience. A shell prompt is the key to the world of fixing
stuff.
Things were suspiciously slow on this very modern laptop. I suspected
something was wrong, but wasn't sure what yet. Continued anyway.
At this point I was noticing huge lag spikes during typing in the
terminal. 10 or 15 repeats of a key would get buffered up during a
pause, which was really, really annoying. Obvious starting point would
be a reboot, but wanted to do a couple more things now that I was SSH-ed
in from my desktop where things were more convenient.
It had been another hour. I took a break.
I should take breaks more often during this stuff. I just want to
power through it, though. It's frustrating to notice how much needs to
be done just to make things work.
--
Came back late in the evening, having slept the laptop (I think?)
unplugged for about six hours while I did other things. Battery was down
to 85%, which was not a good sign if it really was asleep — needed to
check that. Continuing with the adventure, the battery seemed to be
dropping really fast. Was curious how the battery would perform in
Ubuntu — really needed to be minimum three hours for some of the stuff I
do.
Performance was garbage. Decided to reboot. Seems to have fixed it
for now. I don't know.
Set up Firefox: This involved a surprising number of
dialogs and options, and a password. But it seemed to work fine in the
end.
Enabled SSH: Hadn't said
systemctl enable ssh
earlier, so needed to enable SSH and
start it so it would be available on future reboots.
Plugged the hard Ethernet back in. My current home WiFi setup is kind
of a mess.
Installed emacs: Specifically emacs-nox
and
emacs-common-non-dfsg
. Needed it, would keep needing
it.
Set the DHCP client identifier: Added it to
/etc/dhcpcd.conf
. Didn't appear to have changed anything,
though. Decided to figure out later how to ensure that I got the right
internal IP from DHCP.
Sent over my dotfile kit: I have built up an
infrastructure where I can run a package-dotfiles
shell
script and have all my portable personal dotfiles, scripts, etc packed into a
tarball that can be copied to a remote machine and extracted into my
home directory.
Did this. Then realized I had accidentally done it as root on the
laptop and extracted into /root
. Sigh. Redid it: worked
surprisingly well. Now I had my emacs configuration, among many other
things.
At this point, the laptop terminal was very difficult to type in
because of lag spikes and key repeating. Something was clearly quite
wrong.
Oh, also, since Wayland means no setxkbmap
I needed to
figure out how to set the Caps-Lock key to Control, or I would go mad.
Found this
blog post which summarized why I had to switch to Wayland, and why I
really didn't want to, and maybe what to do about it. Left it to explore
later.
Oops. A warning popped up on the laptop: "Suspending soon because of
inactivity." Apparently being ssh-ed into the box did not count as
activity. Bleah.
Came back to DHCP: Discovered that Ubuntu uses "netplan"
to manage the network. Found /etc/netplan
which contained
YAML files (ugh) to control stuff, which explained why
dhcpcd.conf
was being ignored. Again left it to explore
later.
Set up SSH key: Used ssh-keygen
to make an
ED25519 keypair for my host to connect to my laptop. Copied the public
key over and set it up in authorized_keys
. Worked.
Used ssh-add
on the host to be able to avoid typing
passwords for a bit. Figured this would speed things up.
OK, power seemed better than feared. About 10% down in an hour of
moderate use.
Fixed up laptop sudo config: I have an… idiosyncratic
idea of how local sudo should work. A bit of visudo
later,
and I had it how I wanted it. Gross but necessary for peace.
Set a root password: Realized Ubuntu has no root
password by default. Admirable, but not OK: wanted it as a backup.
Fixed.
Figured I'd try the next one and then go to bed.
Installed Rust: Yeah, it's a critical use case.
apt install rustup
followed by
rustup install stable
just worked. Zero issues. Nice.
After some messing around, installed my version of the Programming
Rust mandelbrot
. Failed to link because rustc
was using cc
by default? I thought we'd switched to
rust-lld
. What version was this anyhow? Ah, the default
install didn't include llvm-tools
— fixable. Installed
clang
because wanted it anyhow, and rebuilt. Build time was
17 seconds real, vs 15 for my big fast home box. That was
fantastic news! Runtime was not so great: laptop 1.9 seconds vs
home box 1.1 seconds. However, this is a benchmark that uses a
lot of cores: home box is 12-core/24-ht, laptop is 8-core. It's a
laptop: it will be fine. The compiles consumed about 1% of power each.
Fair enough.
Was bedtime. Power was at 70%. Left the laptop unplugged and
suspended overnight to see how it did.
2024-12-05
Got up after seven hours or so. Apparently the laptop hadn't
suspended. Power was down to 35%. Plugged it in and left it for later.
Wow, charging was fast. About 10% in ten minutes, while
running.
Oh, right. Made a list of things marked "deal with later":
Display font size final adjustment
Fixing keymap
Typing issues
DHCP client side
Sound(!)
rust-lld
Installed rust-lld
: A weird side-effect of
using the Debian packaging of rustup
is that I never got to
set up a "normal" Rust environment. cargo install rust-lld
worked, but still didn't have $HOME/.cargo/bin
in my
necessarily in my path because no $HOME/.cargo/env
. After
staring at it for a while, just reinstalled rustup
using
curl
, then apt install --reinstall rustup
to
make sure the one in /usr/bin
was current, then deleted the
rustup
in $HOME/.cargo/bin
to avoid confusion.
There were still a bunch of corresponding files between
/usr/bin
and $HOME/.cargo/bin
, but didn't seem
worth playing with for now.
Set up "reverse" SSH: Made a new SSH key on the laptop and
installed it on my host. Set up .ssh/config
on my
laptop.
Except I didn't. I wiped out my SSH config on my host instead,
because screwup. Complete disaster. Backup time. Hey, at least learned
how to restore files on my new backup system!
Tried again. Succeeded.
Suddenly, a wild failure appeared! The laptop popped up a problem
dialog. Looks like my dhcpcd screwing around earlier left things borked?
While dealing with this, my laptop locked hard and needed to be
"hard" powered-off and rebooted.
I became doubtful I was going to keep this thing.
Restored /etc/dhcpcd.conf
: This was harder
than it looked: apt
really didn't want to install
a fresh copy of the file. Finally fixed it by
apt remove --purge dhcpcd-base
followed by reinstalling it
and all the things that had been removed by uninstalling it.
Ugh.
Fixed the lid switch: Made the lid switch not suspend
the laptop when powered. Consensus of the internets was to just edit
/etc/systemd/logind.conf
. Nice…
Tried disabling "disable touchpad while typing". Didn't seem to fix
the keyboard problems. top
showed nothing interesting.
Laptop latency spikes continued and worsened.
Apparently installing Linux 6.12 will fix sound. Have to go to work.
Will return to it later.
2024-12-06
Installed the latest kernel: Spent a good hour trying to
figure out how to set up https://kernel.ubuntu.com/mainline
as an apt
source. Finally just gave up and clicked on the
four linux-*.deb
files I wanted like a chump, then copied
them to the laptop and used apt
to install Linux
6.13.0-rc1.
The kernel was not signed, so ultimately had to disable Secure Boot
in BIOS; then booted it fine. Made a mental note to fix that in the
indefinite future when a working signed kernel became available. Would
Windows boot without Secure Boot enabled? Who knows?
Even with the newer kernel, sound still didn't work. Surprising. Did
find a Phoronix article saying this kernel should fix the keyboard and
mouse lag spikes and whatnot I had been experiencing. Time would
tell.
Time immediately told. Keyboard and mouse lag spikes continued.
Thought to look in the kernel log and found where the kernel sound
stuff was crashing with a null pointer deref. I love C.
Tried again with Linux 6.12.3. Something hung hard enough I needed a
60-second hard powerdown. Eventually came back up. Seemed to work
without kernel panics, so decided to keep it for now.
Sound still didn't work.
Got sound working: Had found this https://forums.gentoo.org/viewtopic-t-1171193-start-0.html
which mentioned getting Lunar Lake audio working. Having gotten a new
kernel, seemed like time to explore. Unfortunately, looked like I
probably maybe had all the modules I needed.
Then went with this https://forum.manjaro.org/t/no-sound-dummy-output-on-asus-zenbook-s14-lunar-lake-under-6-12-0-rc2/170460
. Went through the complicated process of installing the latest Intel
SOF firmware and… success! As I rebooted I got sound.
The running Pipewire version is a little stale but
functional.
Reviewed my to-do list.
Display font size final adjustment
Fixing keymap
Typing issues (lag spikes)
DHCP client side
Sound(!)
rust-lld
Fixed font sizes: The first item was easy. The display
font I chose was a little large even for the hidpi display. Went back
from 300% display scaling to 200%. This still left the terminal font too
small, so installed Bitstream Vera Sans Mono and set it to a reasonable
size for 80×24.
If it weren't for the lag spikes, this thing would be starting to
feel like home. Had to believe that a kernel update would catch
that.
Fixed lag spikes(!), fixed grub boot menu size: Found
this https://bbs.archlinux.org/viewtopic.php?id=300155
in my backlog, which suggested a kernel boot argument of
intel_idle.max_cstate=1
.
While applying this by editing /etc/default/grub
, also
updated GRUB's idea of the display resolution to something more
sane.
Both of these things seemed to work! Without the lag spikes, this was
probably a viable machine.
To-do list was now
Display font size final adjustment
- Fixing keymap
Typing issues (lag spikes)
- DHCP client side
Sound(!)
rust-lld
This seemed manageable. Break time.
Made the console usable: Ran
dpkg-reconfigure console-setup
and picked the largest VGA
font offered. After much messing around, gave up on making it even
bigger.
Along the way, checked that Windows still booted. Yep. Yay.
Switched to X11 for a bit! This was as easy as finding the little
gear icon while signing in. Didn't work: the system experienced some failure
almost immediately. But good to know X11 should be usable soon. Rebooted and
went back to Wayland for now.
Fixed CapsLock: After a bunch of messing around on the
internets, tried this
gsettings set org.gnome.desktop.input-sources \
    xkb-options "['caps:ctrl_modifier']"
and it immediately worked.
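If you want to check that the option actually stuck, the matching read-back is:
gsettings get org.gnome.desktop.input-sources xkb-options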
Set up my GitHub SSH key: Was fairly
straightforward.
In the process, finally fixed the weird "command-not-found database"
error message I'd been having. Maybe.
apt install --reinstall command-not-found
Nope. Idk. Removed the package. That shut it up.
It was bedtime again. I'd made progress. Getting close.
2024-12-07
The laptop had lost 30% power in 8 hours in unpowered sleep. Clearly
sleeping was not a viable long-term alternative until someone fixed
something.
Boot time through BIOS and Ubuntu seems to be about 25 seconds, with
most of that in BIOS. Ugh.
Fixed DHCP client id: That was exciting. Like several
excruciating hours of exciting. tl;dr…
nmcli con show
nmcli con modify f92bba01-5461-3e40-9d31-610788c0350f ipv4.dhcp-client-id bartcarbon13.po8.org
Then go set the dhcpd.conf
host
entry on
the DHCP server box to have
option dhcp-client-identifier "\000bartcarbon13.po8.org";
The \000
is a "type id", because dhcpd
is
kind of borked.
As far as I can tell, the GNOME NetworkManager GUI provides no way to
set the dhcp-client-id
. (The XFCE Network Manager GUI does
provide this on Debian.)
Learn to love tshark.
tshark -i ent0 -f 'port 67 or port 68' -w /tmp/packets
tshark -r /tmp/packets -V >/tmp/decode
Now some decoded DHCP packets will be in /tmp/decode.
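If you only care about the client-identifier bits, a display filter on DHCP option 61 narrows things down; note that the field name below assumes a reasonably recent Wireshark (older releases call the protocol bootp rather than dhcp):
tshark -r /tmp/packets -Y 'dhcp.option.type == 61' -V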
Checked the Camera: Just worked right out of the box.
Really nice camera, too. Even has a hard camera cover that also turns
off the camera when you close it.
Great feature!
With that, I was done. I'd accomplished everything I could think of
that really mattered. My Dual Boot Notebook dual-booted, and Linux
worked well enough to use.
Conclusions
For all intents and purposes, Linux is not yet supported on the
Lenovo X1 Carbon Gen 13 Aura Edition. I've put myself through a
ridiculous amount of difficult fiddly diagnosis and repair. Even
following my instructions, which may not work next week, it would be a
big job to get this all up and working.
That said, now that I have the laptop mostly working I am rather fond
of it. Assuming that the idle issues can be fixed, the boot time
improved a bit and the X server made to work reliably, I think it will be a
great ride.
("A friend" who knows the insides of the X Server mumbled something
about "atomic KMS modesetting" when I told my stories. Apparently there
is a reasonable path to making X11 work if someone finds the will.)
I found Ubuntu to be nice in a few ways, and annoying in others. The
install experience was pretty bad compared to a Debian install,
something I really did not expect. Ubuntu doesn't seem to have
the package depth of most Debian-derived distros; I tried to install
several things that seemed like they should be there but weren't. On the
other hand, the polish of the UI and the really nice documentation on
the Ubuntu website were great. I'm not sure whether internet searches
for relevant information yielded more or less than they would have with
stock Debian: seemed about even.
Anyhow, I'm going to pretend I have a working new laptop for now, and
start tackling issues lazily as they arise like I usually do.
Thanks for listening.
2022-09-14 22:29:02 PDT
My wife and I have been watching The Rings Of Power. So far,
we haven't been terribly enthused.
There's too many storylines and characters: it's hard to keep
track of, and everything gets diluted. I haven't found any plot line or
character yet that I am really excited to see more of, just a lot of
barely-above-meh.
As someone who has read much of Tolkien's literary output
including The Silmarillion, I still don't understand a lot of
what happens in the show. It feels to me like a lot of barely-disguised
move-the-plot-along fridge logic. My wife keeps asking me "why was
that?" and I keep saying "I don't know." Not a good sign.
One of the classic ways to try to hold interest in a serial, one that
goes way way back, is to have a Mysterious Figure for the audience to
worry about. We have not one, but two here: "Meteor Man" and Halbrand.
Of the two, Meteor Man is the more interesting to me, which is to say I
care a tiny bit.
Sadly, I don't trust The Rings Of Power to have a satisfying
identity set up for Meteor Man at this point. My suspicion is that it
will be something stupid. Here's some guesses about Meteor Man's "true
identity" I've seen from folks I watch on YouTube: I've ranked them in
the order of my belief, although I'm not yet enthused about any of
them.
Gandalf: This is in some ways the obvious choice: a
Maiar Fire Mage, a really familiar and beloved character who is known to
be friends with the Hobbits.
This last was my initial reason for adopting this theory — in the
Tolkien Cinematic Universe, as in any CU, writers can't seem to resist
overexplaining trivial stuff from characters' backstories. "You know why
Gandalf likes Hobbits so much, right? In the Second Age, one of them
helped him and then he befriended a Hobbit clan."
The problem with the Gandalf theory, as pointed out by Robert at
In Deep Geek (an excellent YouTube channel), is that there's a
bunch of hints that Meteor Man is actually evil. There's the corrupted
leaf in Lothlorien as the meteor sweeps by. There's the whole "so evil
the fire gives no heat" thing (that as far as I know was invented for
The Rings Of Power and probably shouldn't have been, as it
doesn't fit at all with Tolkien and is stupid). There's the dead
fireflies, and Nori's dad breaking his leg when Meteor Man breaks a
stick.
Still, the problem with Meteor Man being evil is that we are clearly
expected to trust Hobbit girl Nori's instinct that she should help him.
Indeed, the writers set up a second child character, Bronwyn's son Theo,
as a kind of bad counterpart of Nori who manages to steal some kind of
ultra-evil Morgoth dagger. While Nori ending up doing evil and Theo
doing good is exactly the kind of stupid subversion I would expect from
bad writers, I hope the writers of The Rings Of Power are
better than that.
A Balrog: This is Robert's suggestion, and it's a tricky
one. It's the kind of thing the writers might do, I guess? See his video for some of the
justification for why it could be so.
However, I think there are some issues with this identification.
First, I am not aware that any Balrog took human form anywhere in
Tolkien's stories; they are always described as overt monsters. Second,
it appears from Tolkien lore that the Balrogs all came to Middle-Earth
in the time of Melkor: there is nothing to support any falling from the
sky for no reason at this point. Finally, see the argument against a
truly evil being above.
Still, as Robert repeatedly points out, we are promised a Balrog by
the previews of The Rings Of Power. I suppose it could be this
one. I just doubt it.
Sauron: I mean, why would Sauron come sailing in from
the sky and crash with no knowledge of who or where he was? Why? Maybe I
give the writers too much or too little credit here. Perhaps there's
some truly clever explanation; perhaps the explanation will just be
handwaved.
Anyhow, Sauron is going to have to be charming and deceptive enough
at some point to convince the Elves (Celebrimbor, probably) to forge
The Rings Of Power. This doesn't seem like a great start.
Again, see the argument above about pure evil.
A Blue Wizard. The Blue Wizards are really minor
characters from Tolkien that we know little about. The only real argument
in favor is that they are known to go to the South, where Nori et
al are.
Radagast. Nope. Thought about it early on, because of
firefly powers. But nothing else would really make sense here that I can
think of.
The Witch-King of Angband. I found this suggestion here.
Saruman. Not much fits here. Fire powers, the exact
opposite of beguiling speech, etc etc.
"The
Man in the Moon Came Down Too Soon". This
website suggested this. It is both ingenious and ridiculous.
Hopefully not.
Tom Bombadil. Makes no sense. Not even worth
debunking.
Some newly-invented character not mentioned by Tolkien.
No. Just… please no. If so, the writers better save the reveal for
the end, because I'm quitting the minute they start adding major
characters of their own invention.
Some character only from The Silmarillion. Basically
impossible, since Amazon would not have the rights to any such character
not also mentioned in The Lord of the Rings or The
Hobbit.
One clue that has been much bandied about is the only words spoken by
Meteor Man so far: "Mana Ure". Apparently Ure is a Quenya Elvish word
for something like "fire", "heat" or "the sun". Mana is more
complicated, but may be some sort of interrogative like "where?" or
"what?". My theory is that "Mana Ure" is "Where's the fire?", a
reference to all the rushing around he's seeing. Or maybe "Ure Mana",
"Your Momma".
The one thing I take away from all this is that the writers have
written themselves into a can't win situation for me. Absolutely none of
the above possibilities would be anything but sad and ugly.
It would have been far better to just leave Hobbits and Meteor Man
out of this one. The Hobbits can't become too famous without spoiling
The Lord of the Rings anyhow, since we're told they really
weren't in the old accounts. If Amazon's writers need a Meteor Man to
keep the audience holding on, the series is likely unwatchable. My
curiosity has certainly passed.
I'll leave you with my theory: Smaug. Someone once told me
that it is never a good idea to leave a live dragon out of your calculations.
Dragons are all about fire, they come from the sky, they are magical,
they are evil but not super-evil in the same way as the other stuff in
Tolkien.
This is a really stupid theory. But it's better than Tom
Bombadil.
2022-09-06 22:31:50 PDT
I just spent a bunch of time cleaning up the image links in the
Markdown on this GitAtom blog. The images are now all properly formatted
according to the CommonMark Github-Flavored Markdown
(cmark-gfm
) specification.
As I feared, it was about an hour of messing about without doing much
that was new or constructive or interesting.
I belatedly realized that I could probably leverage the site's
CSS to format the images in a way "good enough" for my purposes. There
wouldn't be much in the way of easy customization, but meh.
So I went and figured out what CSS wanted me to do here, and modified
my CSS accordingly. Big fun. In case you're curious, the new CSS looks
like this:
/* https://www.w3schools.com/css/css_align.asp */
p img {
display: block;
margin-left: 3em;
margin-right: auto;
max-width: 50%;
max-height: 20ex;
}
CSS is nothing if not beautiful.
I copied that addition into GitAtom's default CSS also. It's
something most folks will want, I think.
I developed the new CSS on my development blogsite. Never a great
idea to work directly on the production site. Seemed to work after a
bunch of tweaking. Then I copied it here.
I really wanted a CSS class
tag for img
elements to be generated by the Python cmarkgfm
package
that GitAtom is using for Markdown rendering. cmarkgfm's
idea of a "handy" image tag is like this:
[![A Literal Genie maliciously granting a wish.](/media/0019-literal-genie.png "A 19th century etching of a Literal Genie maliciously granting a wish.")](/posts/dall-e-credit.html)
That is translated by the Markdown renderer to this HTML:
<p><a href="/posts/dall-e-credit.html"><img src="/media/0019-literal-genie.png"
alt="A Literal Genie maliciously granting a wish."
title="A 19th century etching of a Literal Genie
maliciously granting a wish." /></a></p>
I wanted HTML something like this:
<p><a href="/posts/dall-e-credit.html"><img src="/media/0019-literal-genie.png"
class="cmark_image"
alt="A Literal Genie maliciously granting a wish."
title="A 19th century etching of a Literal Genie
maliciously granting a wish." /></a></p>
See the dramatic difference? No. That's computing for ya. But getting
the second thing would allow changing my CSS selector to
img.cmark_image
to ensure that the CSS didn't accidentally catch stuff on this site
that was not part of a blog post: for example the Feed logo.
So I filed an issue
against cmark-gfm. We'll see. I may end up contributing a
few lines of C code there.
I decided this was a good time to move all the images on the site
from the /posts
directory where the blog posts live to a
separate /media
directory. I had been planning to do it
anyhow, because reasons; since I was messing with this stuff now was a
good time.
I'm hoping to someday get GitAtom to be cleverer about attached
media, in particular in Atom feeds. In any case, I'm not too comfortable
with the idea that the images live only in my site directory right now.
Fixing it all will be in the far future though — very low
priority.
I next got to fix all the image links in my previous posts.
That's the problem with co-developing GitAtom with FOB4: retroactive
changes. I did a couple of posts by hand, and got the hang of it. Then I
decided to partially automate the process for the rest of the links by
writing a sed
script. A few minutes of fiddling around
later, I had this beauty:
s@<p align="center"><a href="\([^"]*\)"><img alt="\([^"]*\)" title="\([^"]*\)" src="/posts/\([^"]*\)" width="[^"]*" */></a></p>@[](\1)@
Sorry about the long line: sed
has no great way to stick
line breaks in stuff.
But of course, it wasn't that simple. I had to yak-shave by trying to
get a sed
mode installed for emacs
. I didn't
find anything good, but I now get in a really primitive sed
mode when I open emacs
on a .sed
file.
Of course, after all that I realized that my fancy new
sed
script was only good for auto-fixing exactly two more
links. Ugh. Even then, I had to wrap the HTML in the Markdown so that it
was all on one line, so that sed
could deal with it.
Totally not worth it.
I hand-fixed the rest of the links, which were much simpler.
After all that, I wrote this post.
It's always like that. Things get very incrementally easier after
every little thing like this, of course. Still, it definitely feels like
pushing a boulder up a hill. Unlike the canonical example, the boulder
doesn't roll back down, really. The hill is just endlessly high, and
every time I look back it doesn't look like I've come any distance at
all.
Enjoy my bespoke post on my bespoke blog. Otherwise I'll just feel
silly.
2022-09-03 00:45:35 PDT
I'm very much into Rust
programming (the language, not the video game) these days. The language
seems kind of for real, and has some nice properties.
Back in December 2016 I participated in Advent of Code 2016. This was
my second time doing AoC, I think, and I did it in Rust. This was about
two years after Rust went 1.0, but it was already a very viable
language. I ended up with this
repository, which has over the years become one of my
highest-starred Github repos.
Today I glanced at the repo for some reason and noticed that
Dependabot, Github's security checker, had scattered many, many security
advisories over my code. This is easy for Dependabot to do with Rust,
since the libraries used all come from one place —
crates.io
— and security stuff is tracked really well.

Even though a lot of these security warnings were redundant, this was
still a bit alarming. I decided to see what I could do to bring my code
up to 2022 standards, and eliminate the possible security issues in the
process.
The good news is that this was ridiculously easier than in any other
language I've worked with. Rust tools just automate the heck out of it.
I eventually ended up shell-scripting most of the work. I brought all
the code up to the 2021 Edition, fixed all the warnings, and replaced a
cryptography crate that was stale and vulnerable.
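The flavor of it, per puzzle crate, was roughly the following (a sketch, not the literal script; it assumes cargo-audit has been installed with cargo install cargo-audit):
cargo fix --edition          # mechanical fixes needed for the edition bump
# then set edition = "2021" in Cargo.toml by hand
cargo update                 # pull in newer compatible dependency versions
cargo clippy --all-targets   # surface lint warnings to clean up
cargo audit                  # check deps against the RustSec advisory database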
I also tried, as an experiment, just doing the minimum automated
upgrades to keep things going. That was even easier, but of course I
like the nicer result.
When I contrast this with my experiences with Haskell, Go, C and C++,
it's just night and day. Rust is really a nice language.
2022-09-01 01:46:30 PDT
Welp. Blaugust is done.
It's been a great opportunity to get back into the blogging swing.
I've gotten my software stood up shakily, I've done 31 posts over 31
days (albeit not consistently). I've met some more nice blogger
people.
As I promised earlier, I'm going to scale back a little on output.
I'm also going to work more on GitAtom. I've got a lot of projects to
keep me busy, so it's the usual time balance thing.
Thanks again to Belghast for running Blaugust, and to Bhagpuss for
inspiring me to give it a try.
In the unlikely event that you're reading this, hope you'll keep
reading along.
2022-08-31 01:26:03 PDT
In terms of content, Blaugust participants mostly write about games,
music and geek media. So far, my blog has utterly failed to fit in. But
hey, I can write about these things too. I really can!
Today I was trying to recall the last game I really played hard. It
doesn't happen all that often — I'm a pretty fickle gamer. Turns out it
was probably Breath of the Wild. So… yeah.
I never did finish BotW. I finally beat the four thingummies and
finished most of the non-combat content in the game. I spent countless
hours exploring and poking around. I got the DLC, then discovered it
wasn't really for me.
The sticking point with BotW is that, while I dearly loved the
open-world exploration, I'm not much of a fighter. It isn't that I can't
do it, although I am objectively terrible at it. I just don't like it.
In terms of traditional gamer classifications, I'm discovery then
puzzles then meh.
The sad fact of the matter is that games are more and more stuck on
genre content. Worse yet, pretty much every genre is either fighting,
puzzling, crafting or some combination of those. It's great to see some
open-world exploration games, but the game designers seem unable to get
beyond the idea that the purpose of a vast open world is to provide
things to fight, puzzles to solve, and industries to start.
Conservative genre-mining kind of makes sense for AA and AAA titles.
No taking extra risks with that kind of money. Customers who are going
to pay big and in large numbers for a title want predictability and
familiarity. This is even more true for the free-to-play titles: they
need to be big and sticky to work at all.
Indie game devs don't have so much excuse. The phrase "indie", which
after all is only shorthand for "independent", is a little weird here:
Activision/Blizzard is "independent". Presumably most small game devs
want to be A/B someday, or at least want that kind of money.
What I want is genre-breaking gameplay. In particular, I want to get
away from the toxic parts of genre tropes: no murdering and violence, no
frustrating IQ tests, no dully addictive game loops. That leaves out a
lot, right there.
As a sometimes game designer myself, I've been thinking hard about
this. What space have I left myself?
I don't know. It's late and I don't have answers. Let's all think
about the question. There has to be something there somewhere.
2022-08-29 22:24:01 PDT
Apparently this is Lessons Learned Week in Blaugust. Blaugust is
almost over: just a few more days.
Oh, right. Lessons learned.
I might as well give up on genre blogging. The things I
want to write about are all over the map. It's going to be that way. I
once got kicked from a blog ring about X Development for being too
off-topic about X Development. I guess GitAtom needs tags and topic
filters. Atom supports tags, so there's that. I'll add it to the GitAtom
enhancements list. Holding your breath for anything on that list would
be really dangerous.
Co-developing blog software is a mixed blessing. Modern
freely-available blog platforms are pretty darn sophisticated. Taking a
student project to a usable state while trying to do Blaugust was
questionable at best. I'm relieved to be a little on top of GitAtom now,
but concerned at how much I still want to do. To be fair, without this
blog GitAtom would have ended its life stale and forgotten.
I am not a social blogger. That's an OK thing not to be.
I think it would be silly to blog if I didn't want people to read my
content at all. However, there's no urgency and no pressure for me
there. I haven't engaged at all on the Blaugust Discord. I will
try to do that here at the end, but it's probably too late. I can live
with this.
Schedule and size are a thing. I think I'll keep
blogging for sure, but probably not on a daily schedule. That turns out
to be just a little more than I want to do. My goal after this month is
14+ posts per month. I really am going to start building up a queue,
saving out some non-time-sensitive posts even though they're ready to
go. At some point soon I'll decide on a regular posting schedule: that's
a thing I've never done before.
I want to quit with the tiny "post-for-the-sake-of-posting" posts. I
don't like them. It looks like my most comfortable length so far is
around 400-500 words. Not long by blog standards, but substantial. I
have a few longer posts covering a topic in more depth. I feel like I
can knock out 500 words in 20 minutes or so, usually. That's a fine
amount of time to spend a few times a week on this hobby.
I have some other aspirations for this blog as well. That's a good
topic for a future post. Another 500 words folks can read or not on a
random topic on my GitAtom mess. Riveting, I'm sure.
2022-08-29 18:41:24 PDT
I wrote a couple of days ago about my use of DALL-E for blog
illustrations. My conclusions weren't so positive.
Today I discovered Stable
Diffusion, an open(ish) source project in the style of DALL-E that
you can run on your home computer.
Of course I did. The thing has only been available for a few days,
and already there are reasonably easy ways to use it. My NVIDIA 2080 is
barely powerful enough to run Stable Diffusion's generator: but
it works.
Here's my reaction, in parallel to what I wrote previously.
On Business Models and Creativity… Being able to run SD
at home means I just don't have to think about whether I want to
generate an image. It's about 20 seconds per generated image on my box,
which is fine.
On The Limitations Of 2022 AI… SD is quite comparable to
DALL-E. So far it seems to do better in some ways and worse in
others.
The thing everybody's excited about with SD is the unclamping of
limits on what can be generated. I guess that matters: I can now ask for
a picture of whatever celebrity and get them. Meh.
One thing that is cool about the interface to SD that I'm using is
that it allows "image to image" transformations with text guidance. So
starting with my FOB logo image

I asked for it as a Mondrian

and as an Edward Hopper

These were reasonably successful. Other experiments went pretty
badly.
On GitAtom's Clunkiness And "Ease-Of-Use"… Because of
the way the SD interface I'm using locally generates images, I can
probably include them here a little faster and easier. But that is
offset by the hassle and time of starting up SD in the first
place.
On The Competition… No change. It's still just an
alternative.
So… yeah. Progress marches on too fast to count.
Dinnertime. Peace, y'all.
2022-08-28 02:21:38 PDT
I am posting late today because I spent a long time being involved
with THE WISE CUP. I am spelling it that way because it is a 10-letter
backronym too convoluted to repeat: I'll just go with TWC from here on
out, though.
TWC is an annual event run by our next-door neighbors, for the
benefit of the folks on our street and friends and associates thereof.
Because we have that kind of neighborhood.
TWC is a treasure-hunt-style game with geographic clues that lead
teams of 2-4 people to various landmarks in our small but riche suburban
town. At each stop, participants must solve a puzzle of some sort and
make some observations. At the end, there is a quiz over the
observations. Scoring is based on successful completion plus the quiz
score. There are prize baskets for the top three finishers, plus a
literal trophy cup made by my neighbor, who is quite crafty and does
mosaicing. It's all a great deal of fun.
This is the third year of TWC. The first year, my wife, my
sister-in-law and I won, uh, very handily vs eight other teams. Last
year I was recovering from surgery during TWC, so my friend and his son
filled in for me. It was, uh, even worse. This year I decided to sit
out, because it seemed reasonable to give the other teams a chance. The
outcome was quite a close call, but, uh, my wife + sister-in-law + son +
son's partner won again.
What can you do?
Looking forward to next year's THE WISE CUP.
2022-08-26 17:07:53 PDT
If you've been following this blog (why? how?) you'll know I've been
playing with the DALL-E AI art
generation tool for blog illustration purposes.
I feel like I'm getting to the end of that experiment. Here's a few
thoughts on why.
On Business Models and Creativity… In the world of
modern computing an awful lot is premised on literally "pay to play".
Whether it's games themselves or creative tools, organizations expect to
cover their costs as well as a healthy profit by charging. This seems
quite reasonable on the face of it.
The question of whether a creativity service charge is a workable
business model is thus premised on whether the chargee values the
service enough to pay adequately. I'm grateful that DALL-E gave me a
chance to get an answer for free. The bad news is that I am concluding
that I do not value DALL-E that much.
Pay-per-image, first of all, is deadly for me. I've been really
careful with my 50 free images, as a gamer-inclined person will be with
most any resource. I have skipped asking for illustrations I didn't
really "need", and I've never asked for a re-render of an illustration I
was unhappy with. I get to a spot where a picture was mandatory, pick
the best of the four results, and call it good — regardless of whether
I'm really happy with what went down.
I'd much prefer an unlimited-use (or huge-cap use — say 5000 images
annually) periodic fee model. But I'm really skeptical they could charge
small enough to make it work for me on that basis. I guess I'd happily
pay $10 per year for effectively-unlimited access to DALL-E. I doubt
that would even come close to covering my share of the infrastructure
costs.
On The Limitations Of 2022 AI… The premise of DALL-E and
friends is just insanely aggressive. As an AI Professor, I am more than
aware. The fact that this bear dances at all is nothing short of
astonishing.
That said, once the astonishment wears off, the holes become pretty
apparent. The biggest problem with Magic Neural Nets is that there is
little that can be done with small-scale engineering to uncover obvious
systemic limitations.
For example, since its inception (no pun intended) DALL-E has liked
to put "symbols that look more or less like text" in its images. It does
this the most when it is confused about what is happening. It's
"clever," but it isn't something that's OK in an image that is meant to
stand on its own.
No one knows how to fix this, really. I'm not much exaggerating here.
OpenAI could retrain the net heavily to get DALL-E to stop doing that.
Mostly. But that big a retrain would likely induce other follow-on
problems. You can't just go comment out the "generate texty stuff" code
— there isn't any. What you'd really like is to get DALL-E to put real,
sensible text in the places where it wants to put text. That would
be a whole 'nother grand-challenge-grade AI project.
I look at this, at the failure to faithfully utilize the art styles
I've asked for, the confusion about the subject — all of which are
completely understandable given the tech involved — and I think
"Will I be ok with very slow incremental progress in these areas?" and I
feel like the answer is no.
On GitAtom's Clunkiness And "Ease-Of-Use"… The friction
right now for getting a DALL-E image set up for my blog is surprisingly
high. Much of this is GitAtom's fault. GitAtom doesn't have any real
intrinsic support for images beyond what is provided by the
cmarkgfm
Markdown engine I'm using. It's not great. In
particular, ensuring that an image is rendered at a reasonable size
pretty much eliminates using Markdown-style image links, since the
width
parameter of the image tag wants to be specified.
It's possible that with some CSS magic I could make this smoother, but
it would be a big project.
Further, I can't just drag-and-drop my generated DALL-E image, for
several reasons. I want to preserve the query text I used to get the
image as the image title, which again means I'm stuck copy-pasting it
into an <img>
element somehow. I need to get some alt
text for visually-impaired readers, which is different from the title
because DALL-E is just not that good. Finally, I want the image
wrapped to link to my DALL-E credits page when clicked, which means some
nested magic.
I would guess it takes me about 20 minutes to go from "I want a
DALL-E image here" to "OK ready to go." That's too long.
On The Competition… But if not DALL-E images, where
would my blog pictures come from? Hopefully, this is not too
much of a mystery. I could do pretty much without pictures altogether,
as I basically am doing anyhow. I could use good ol' photos, both
freely-available stock photos and photos I have taken myself. I can't
really afford to pay artists, but I can create my own art in my own
limited primitive way.
I ran a blog for years and years without DALL-E. I can easily survive
without it going forward. "Free is a very good price." —Tom
Peterson
It's been a grand experiment. I'm not saying I will never use another
DALL-E image. But I'm branching out. Thanks to OpenAI for DALL-E anyhow.
It's been an amazing trial.
2022-08-26 16:39:02 PDT
I got into a mild argument with someone I care about today. I had a
plan about how to do a thing. Other Person thought it was a really bad
plan, with too many risks. After some argument, OP won, and I did it
their way. That went fine.
It reminded me, though, of a rule I've been working on following for
some time now. Whenever feasible, allow others to fail.
That sounds like a horrible mistake. Let me clarify.
Let us say that I am a teacher (I am), and one of my students has a
bad plan. They've found a terrible way to try to achieve an
ill-thought-out goal.
As a teacher, I have an immediate obligation to try to help them. I
should absolutely try to work with them to clarify their goal. I should
absolutely try to help them see the issues with their plan and suggest a
better alternative. Perhaps I will even need to discourage them from
proceeding at all, because I cannot see a good plan even with
their help. So it goes.
But I may get an argument. Or a bunch of argument. The
student may be convinced that I am wrong. They may feel that the goal is
clear, and that the plan will work in spite of my objections. Of course,
it is now my job to persuade them otherwise… or is it?
Premise: People learn more from their failures than they
learn from avoiding failure. Real consequences are more real
than hypothetical ones.
Premise: I could be wrong. After all, I may be more expert
than a student, but I'm far, far from omniscient. Maybe the terrible
thing I've pointed out won't happen. Maybe the student will be really
happy in their goal state.
SO I should always state my objections and concerns
mildly, as observations and estimates. If my advice is ignored, I should
default to letting the advisee go on. One of two good things
will result: the advisee will learn, or the advisee will be right. In
either case I have won and they have won.
The exception to this default, of course, is about consequences of
failure. If I can reliably determine that a
particular failure is both actually conceivable and
sufficiently devastating, I need to do everything in my power
to stop that. For example, dead advisees won't have learned anything;
those who have ruined their lives won't appreciate the price of
learning.
The good news is that if I have presented as a risk-tolerant advisor
— somebody who allows advisees to fail — my occasional strong opposition
will be taken much more seriously. Advisees will know it's coming from a
place of goodwill and trust, not just an ego trip or argumentativeness
on my part.
So I try to practice caution and humility. I won't let people go into
failure blind, but I won't close most doors in front of them either.
I guess what I'm saying is that OP should have let me fail. Oh well.
I was reminded of this lesson, and everything came out fine. So it
goes.
2022-08-26 13:24:55 PDT
OK, so after all that talking and caterwauling and whatever, if you
look at the upper-right corner of your browser you should see a Feed
Icon. If you paste the address there into your Feed Reader as per usual,
you should be experiencing the joys of my new Atom feed.
The process of getting this to beta was as involved as getting a
major feature to beta gets. I ended up refactoring most of the GitAtom
codebase. This will make future improvements easier, but yeesh. In the
process of getting the feed going, I closed some other critical
issues.
There was also the usual yak-shaving.
Notably, I wanted to include the Feed Icon as, well, an icon. This led
to the discovery of the previous-decade's initiative for a standard Feed
Icon, which I would have thought would have led to a freely-licensed
Feed Icon to use. Sadly, I can't figure out the licensing terms for the
"standard" icons and whether they license both copyright and trademark:
it all seems to be under fairly fancy open source licenses. I gave up,
and drew my own CC0 Feed Icons. Hopefully
I won't have trademark issues with them — seems unlikely.
There's still likely some pretty bad bugs to be fixed here. Some
things are not yet tested, and nothing is well-tested. Still, nice to be
able to syndicate my site at all.
2022-08-24 23:58:35 PDT
Sometimes my motivation to blog is as simple as not giving up.
The last couple of days have been weird. I've been made tired and
messed up by them.
Still, how hard is it to type a couple of sentences here? I'm
reminded of the biting lyric by the late 1980s Christian songwriter
Keith Green: "Jesus rose from the grave, and you can't even get out of
bed."
So I slog out some typing. I'm still a day behind. But it's better
than two days behind.
Talk to you again soon.
2022-08-22 22:31:25 PDT
I said I was going to add an Atom Feed to GitAtom yesterday. And by
the end of today, I almost have.
I mean, teknicully I akshually have an Atom feed
now. It just doesn't do quite what I want so I am refactoring and
rewriting part of it, and there are a number of bugs that it has exposed
that should still maybe be squashed.
My division of labor so far, as estimated off-the-cuff:
3 hours: acquiring, reading and understanding various technical
specifications around Atom, XML, MIME types, etc, to get the
implementation standards-conformant(ish).
3 hours: acquiring, reading and understanding a bunch of Python
and library stuff needed to get the implementation right.
3 hours: refactoring the existing codebase into some kind of
usable shape.
3 hours: writing new actual code for the Atom feed.
2 hours: finding and fixing bugs related to the
implementation.
2 hours: building a dev instance of GitAtom and some
infrastructure to auto-update it.
2 hours: miscellaneous, including maintaining the GitAtom repo
and issues, prototyping stuff as needed, debugging, etc.
So roughly 18 hours in two days, of which about 3 were the actual
implementation of the actual feature. I'm not quite done yet, but the
proportions should stay about the same.
Am I motivated to program like this because I'm blogging, or
motivated to blog about programming? I dunno.
Starting to burn out on this whole fire. Must finish soon.
2022-08-21 13:05:03 PDT
Yesterday I talked about a really great idea that I needed to share.
Today I've forgotten which idea it was.
Great.
It's now "Staying Motivated Week" in Blaugust. It couldn't be more
timely. Here's what's keeping me motivated in blogging right now:
Getting GitAtom into a truly usable state. I have a million
things to do today, but instead I'm going to start by finally adding a
feed to the software.
Getting back into the swing of blogging. I enjoy blogging quite a
bit while actually doing it. I need to remind myself of that by
repetition. Humans are not one-trial learners.
Getting things written down. There's an old adage in academia,
where I live, that a thing isn't real until you write it down. It's a
hard adage for me to live by.
I have a ton of content and content ideas that I want to be up on the
web in some semi-persistent form. A blog is kind of a perfect way to do
that: low effort, low bar for quality, but still a better result than
carrying ideas in my head and telling them to my friends.
I'm going to get the future post ideas queue going again today. It
will be full of half-started blog entries. At least it will remind me
what my blogging priorities are.
I can't believe I forgot.
2022-08-20 21:54:48 PDT
I have a great blog post idea. It's a topic I have talked about a lot
in the past, and it deserves to be here.
What I don't have is the will to give the post the 20 minutes of
careful effort it really deserves. It's an idea that has to be got right
and expressed right to be of value.
I'm tired. You get this post instead. I can't promise that you will
get the goodpost someday soon: I'll try.
I think this is a common blogging thing, really. The bloggers I've
followed express it all the time. With a daily blog, pacing is
everything. A recurring theme on my blog is that we are all human, that
I am especially human, and that humans need to respect their
limitations. I envy the people who make everything they do great. I
don't know how they do it. I know that will never be me.
So enjoy this meta-post. It's what I have to give. Hope you'll "tune
in" (wow I'm old) going forward and see me at my best. Hope my best is
OK. I'll settle for OK.
2022-08-20 21:47:18 PDT
Turns out my last post contains an off-by-one error. I was only three
posts behind.
This is one of the most common errors in computing: a fact surprising
to non-computingists, I think. Computers are supposed to be really good
at counting things. They are. The problem is that we humans need to let
the computer do the counting. When we try it ourselves, we inevitably
get close but no cigar.
Turns out that the manual blog-post-numbering I was doing as I
manually made Markdown files for my posts was off by a bit here and
there. Fixing it turned into a half-hour mess of renaming files and
editing their contents to make GitAtom happy again.
The obvious solution is to fix my GitAtom blog software to do post
numbering for me. That's hard, and I'll have to think about (a) what I
want to happen, and (b) how to make it happen in the Python codebase.
Again, maybe (a) is the surprising part: it turns out that figuring out
exactly what we want the computer to do is arguably the hardest part of
software development.
The computer is a real-life Literal
Genie. The computer will (software bugs notwithstanding) do the most
annoying version of exactly what we tell it to do. With great power
comes great failure, mostly.

For now, I'll just wade on manually and try to avoid mistakes. Like
that ever worked.
2022-08-20 21:16:15 PDT

I got my Amateur Radio Technician license about 20 years ago, testing
in along with a very good friend. The FCC knows me as KD7SQH, but you
probably do not, since I've barely done any ham radio since then. I got
the license for a purpose — to communicate in the Black Rock Desert
during a rocket launch event, and once it served its purpose I set it
aside.
I'm now getting involved in an amateur radio thing, and so I decided
to upgrade to my General License. A couple of Sundays ago my friend and
I went down and passed that test: my friend went ahead and passed the
Amateur Extra test as well. I will be taking that test shortly with
every expectation of passing.
With the upgrade, the next logical thing to do is to buy a real radio
setup for my home. My hand-held transceiver worked for its purpose, but
long-distance stuff requires more than an HT.
The sad truth is that when I think about the cost and time commitment
of going there, I've pretty much decided to forego this plan for now.
I'm somewhat budget-constrained at the moment: I'm also constrained by
the hundred-foot fir trees surrounding my house, which while beautiful
and wonderful make for unique setup challenges if I'm to get
anywhere.
The fact of the matter is that the last thing I need in my life right
now is to add another half-assed hobby to the dozens of those I already
have. I may try participating in one of my local Amateur Radio Emergency
Service groups in some capacity — that is a fine and noble endeavor. But
I wouldn't need equipment for that right away: I can wait and see what I
need and what I want to do.
Oh for the funds and time. Maybe someday. Maybe.
2022-08-20 21:16:15 PDT
For the last few days, my life has been consumed by doing final setup
and then leading the Rust-Edu
Workshop. I'm really happy with how it turned out. Who knew
that inviting a bunch of brilliant, creative and really friendly people
to work together on a common interest would go so well? As a
completely unbiased observer, I'd give the event a 9.8/10: the
0.2 was all me.
That said, it looks like I am now four posts behind this Blaugust. My
grand ambition of maintaining a queue failed early: it's now just a
struggle to get back to parity again.
Please look forward to three extremely short blog posts,
even by my standards, as I try to maintain some semblance of being able
to blog "regularly."
I am exhausted and suffering from stress recovery, but I'll be fine
in a bit. Thank you for your patience.
2022-08-17 00:03:26 PDT
Bill Wurtz is a guy on YouTube.
Bill Wurtz is a remarkable creator on YouTube.
Bill Wurtz is most famous for history of the entire world, i
guess (mildly NSFW). It is rightly celebrated.
Bill Wurtz makes short music videos like old macdonald.
Bill Wurtz more commonly makes "normal length" music videos like Mount St. Helens is about to
Blow Up, which is perhaps my favorite.
Bill Wurtz writes all of his own music, performs all his own
instruments, does all his own animation.
Bill Wurtz is an international treasure.
Bill Wurtz is a favorite
of Charles
Cornell.
Bill Wurtz may not be for you. If not, I don't understand why.
I really like Bill Wurtz.
2022-08-15 22:18:47 PDT
Let's start this with a note I posted on an obscure Discord back in
February…
In 2006 I started following this obscure webcomic that finished
in 2015. It was a cute romantic comedy: not the kind of thing I usually
read, but it was fun. Last week I got an update from them to check out
their upcoming movie trailer. I thought it was a joke at first, but
nope, it was real.
That was my introduction to the AAA title linked above. I ended up
really enjoying the movie and can recommend it. It was a weirdly
successful cross between a Hallmark Channel movie and an
oddball webcomic.
I've been following webcomics since the web became a thing. Arguably
before that: I used to follow Dilbert on Usenet through
Brad Templeton's Clarinet service back before Scott Adams bailed on the
fanbase that made him. (I suggested "a clueless intern" to Adams as a
character in an email a month or two before Asok entered the strip.
Probably just a coincidence…)
I currently follow about 80 webcomics using http://piperka.net, although most of those
are not currently active. A couple of my student friends created http://comic-rocket.com/ with a bit
of help from me: one of them got me into webcomics on a much bigger
scale 20 years ago.
So all this hipster cred-building is my way of saying…
Thank you. Thank you to the creators that made and continue
to make webcomics. You folks are amazing. I have been but a happy
parasite, for the most part, on your success.
Thank you. Thank you to the webcomic tool and infrastructure
creators that have made my webcomic experience so smooth and special.
Some of you are my friends. All of you have my respect.
Back when I had a full-time blog before, I'd post my current webcomic
list every few years. Probably time to do that again. This is edited
both for taste and to show only currently-running strips. It's
enough.
I haven't included links, because exporting them is a bit troublesome
right now. Again, you can type titles at Piperka or Comic Rocket.
- The Bouletcorp
- Dead Winter
- Dicebox
- Dinosaur Comics
- Existential Comics
- Gunnerkrigg Court
- Guthrum
- Magefront
- The Order of the Stick
- Outsider
- Skin Deep
- Spacetrawler
- Terminal Lance
- The Wandering Ones
- Widdershins
- XKCD
2022-08-14 23:26:15 PDT
I sat down tonight to figure out enough Bevy to be able to
participate in the upcoming Bevy Game Jam.
I failed. I failed to motivate myself sufficiently. Looks like it
would probably take me 10 or 20 hours to get to where I'm comfortable
with the framework, and I just don't feel like spending that right
now.
It made me think, here in Blaugust Creative Appreciation Week, about
the hells of complexity and weirdness that modern digital creators put
themselves through to get stuff built. When I was a boy, the bar for
creating on a computer was so low, and any creative output was highly
regarded. Nothing is easy anymore, and expectations are so high now.
Dozens of great games will be built for the Bevy Game Jam this year.
Each one will be many thousands of lines of clever code, with crazy
amounts of art, music, modeling and other creative assets.
Thank you, those people! Wish I could be one of you. Maybe someday
soon.
2022-08-14 00:05:15 PDT
For many years I had a primary blog I self-hosted using the Drupal
CMS. It was called "Project Resolution" because it started with a New
Year's Resolution, and "FOB" for "Friends of Bart" — a name with quite a
bit of history.
My blog collapsed not so much because of lack of will on my part as
because the underlying blog framework collapsed. I got wedged in the
state where I couldn't keep blogging with the existing Drupal because my
Drupal version had become too stale (security holes and software rot),
couldn't manage to upgrade Drupal to a newer version in spite of
extensive efforts, and couldn't manage to extract my content and move it
to a new platform in spite of efforts that included writing a bunch of
bespoke code for that purpose. All of this provided the impetus for the
creation of the GitAtom platform I am using now.
I did manage to dig much of the content out of Drupal as HTML or
Markdown, but haven't yet posted it anywhere. I should do this in the
next day or two so that I know what's up.
I have had other blogs as well. I had a Ghost instance up to blog EVE
Online stuff; I'm not even sure myself what the current status of that
content is. I think I blogged there for about a year around 2018? I have
had various blogs for Google Summer of Code back when I was
participating there, although they were very light on content.
Notably, I have a super-secret blog. As far as I know, only one very
close friend knows of it. It doesn't have much content, but it contains
things that were on the one hand sensitive enough that I was afraid to
have them associated with me (especially pre-tenure), but on the other
ok enough that it wouldn't be an absolute disaster if it got out.

Things that I decided I didn't want on the Internet at all are
squirreled away in that directory I talked about last post. Thoughts
that must never, ever be on the Internet are not on any computer
anywhere: I'm not silly.
You can also find many years of my comments on Reddit, or even dig my
old Usenet days out of the Usenet archives to a certain extent.
So… yeah. I obviously like to write stuff and publish it. I'm not
sure why I let blogging go for so long. Good to be back.
2022-08-13 00:02:35 PDT
A future blog post here will cover — as thoroughly as I can manage —
my old and dormant blogs. I have had a bunch.
I'm too tired to do a post that detailed and careful tonight.
Instead, I want to talk about a weird side thing I've been doing.
One thing I learned long ago is to carefully read anything I am about
to post on the Interwebs to make sure it actually belongs there. I
think twice and then often cut once. Since 2006, I have saved some 440
files, 109,000 words of text that I wrote and then "threw away". Most
anything complex is there.
109,000 words is a lot. I have no idea how it piled up like that.
The longest single piece, an Ingress leveling guide written in
HTML in 2013, is only about 3800 words, even using an inflated notion of
word length.
Nearly the same size is the second-place piece: a transcript of a
conversation on Hangouts with my friend Sergey in September 2014 while
his hometown of Mariupol, Ukraine was
being shelled by Russians. Dammit. I hope Sergey is OK in the current
conflict. I know he got out of Mariupol years ago, but last I heard his
parents were still there. So scary; so sad.
The third-place piece is a PDF draft, of all things, in which I
rant in 2008 about why I am quitting the GPL. Spoiler
alert: I calmed down a bit and didn't actually quite quit using
the GPL. That said, I still stand by most of my unpublished
writing.
At the other end of the spectrum, a lot of these pieces are just
little snippets like this from August 2016:
I know, right? It's been a good seven years since the SLC
Police charged a gay couple with public
kissing.
Anyway, that could have happened anywhere in the US in 2009.
Right?
...Right?
I have no recollection what prompted me to write that. It was
probably in response to some Reddit
thing. Anyhow, I clipped it instead of posting it, and there it
is.
I keep all of this in a directory called blog. The
intent is that I go back and mine it for future blog posts.
Someday I'll do that. For today, just thought I'd point out what
might be a bit of a novel approach.
2022-08-12 00:13:20 PDT
The problem of commenting on blogs is bigger than many non-bloggers
realize, I think.
I wrote a blog for many years in the now-distant past. My blog
collapsed for technical reasons that I will cover in a future post. (I
have written many blogs. This was just the big one.)
Perhaps the largest non-technical problem I had was dealing with blog
comment spam. I got a high enough pagerank back then that I'd get on
average a spam comment or two a day. Real comments were pretty rare
except on one vaguely viral post.
The spam I was seeing looked like it had been generated by an
extremely low-paid human. I had enough captchas and whatnot in place to
make it hard-ish for a bot to comment, and the comment content seemed to
be vaguely customized.
For me as a casual blogger this constant drip of spam was painful. I
eventually shut off comments. Now I'm up again on GitAtom, which lacks
any comment facilities. What should I do?
My cousin uses some free version of Disqus for blog comments. I could
conceivably set that up. I have no idea how well it would cut the spam
down; I have mixed feelings about Disqus itself. There are various open
source alternatives around. I could look into those, I guess.
There's various friendzone / pre-approval things I could do. I really
do like the world to be able to legit comment though.
I've thought about taking advantage of Gmail's decent spam filtering.
I could request that comments be sent by email to some dedicated Gmail
account, then either manually or automatically forward them to GitAtom
if they missed the filter. A lot of work, but might be worth it.
What do you think? Post your answer below… Nah, just kidding. You can
always email me if you like. I'll try to pass on the good stuff
here.
2022-08-11 01:19:54 PDT
Apparently this is the week in Blaugust where we are supposed to
introduce ourselves. Indeed, this week is perhaps almost over.
I'm dead tired and need to go to sleep. I will introduce myself as
briefly as possible: no stories, no extra stuff.
I can do this, I know I can.
I'm Bart Massey. I've lived most of my life in Oregon, except for a
couple of years in Tennessee when I was in early grade school. I spent
the first part of my life in Portland, and after the Tennessee thing
ended up in North Bend. Upon graduating high school I spent a year at
Oregon State University, then four years at Reed College, graduating
with a physics degree in '87. I spent two years working full-time at
Tektronix, doing UNIX admin, open source tools and digital signal
processing. Then I went back to grad school at University of Oregon,
where I got my thesis MS CS in programming language implementation doing
concurrent logic programming, then my PhD CS doing artifical
intelligence. Slightly before graduation I got a job at Portland State
University, where I've been a CS prof for over 20 years now. If you do
the math, you'll find I'm 58.
I have a loving wife who I've been with continuously since high
school, and one adult son who works as a software engineer here in
Portland.
My hobbies include reading, programming, making music (piano and a
little guitar and voice), playing poker, and too many other things to
list. I am low A Tier at trivia and worthless at athletics and related
pursuits. I am a poor follower of Jesus, but not a participant in any
religion; I am not the least bit evangelical.
I like to explore and create. I am more than usually lazy and
disorganized, with few time management skills.
That is about as small a wall of text as I can manage for now. I
expect I'll be telling you more regularly.
2022-08-09 22:49:04 PDT
I restarted blogging as part of Blaugust.
The soft goal of Blaugust is to blog every day during the month of
August. That is an achievable schedule for me; nonetheless, I already
fell behind this month, though I have now caught up.
I am also working ahead a bit. The conventional solution to the
problem of regularity in the web publishing world is to build a "queue"
or "buffer" of upcoming posts. A buffer famously cannot match rates, but
is great at smoothing irregularities. Readers like regular posts, while
writers like irregular writing.
GitAtom doesn't currently have any explicit support for queueing.
Because of the way the platform works, it is straightforward to write
posts ahead and put them aside for later. This is the main thing that is
needed. However, the process of pulling queue entries and posting them
should arguably be automated based on intended posting frequency. If I
don't have the time or will to write a post on a given day, I probably
also don't have the time or will to interact with my blog at all that
day.
Sigh. Added to the issue list.
2022-08-08 10:35:01 PDT
When I made the decision to blog on GitAtom a week ago, I
knew that dogfooding GitAtom would be both a great way to improve
GitAtom and a giant pain. Both the improvement and the pain have
exceeded my expectations. I've posted a half-dozen new GitAtom issues
and closed a bunch of them. I've engaged with some "hard" issues and
emerged, if not victorious, at least kind of satisfied.
Hopefully FOB4 has become easier to read as the result of my efforts.
It has also become easier to blog with, for what that's worth. I'm
afraid that FOB4/GitAtom remains more creator-friendly than
reader-friendly.
Adding an Atom feed is the penultimate can't-live-without-it reader
improvement on my list. I will try to find time in my currently-insane
schedule (more because of my laziness and disorganization than because I
have more than a person should be able to do) to get a feed up over the
next few days.
(The ultimate must-have feature is allowing some kind of commenting.
More about that in a future blog post.)
Sadly, I think the whole GitAtom hairball is due for some fundamental
rethinking. I don't want to do that rethinking — I just want to blog. I
don't want to document the GitAtom architecture extensively,
re-architect it, document that, and re-implement/refactor big
pieces.
I would switch blog software in a second if someone told me of the
right platform. Unfortunately, I don't know of anything super-close to
what GitAtom provides for blogging.
Such are the woes of a geek who blogs, I guess. Thanks for
listening.
2022-08-08 00:28:44 PDT
Passed my Amateur Radio General Class examination today. (My friend
Keith passed both General and Extra. I'll do extra soon too.)
I didn't do as well as I hoped on the test. Ah well, passing is
passing.
Should be doing voice on HF reasonably soon.
2022-08-07 23:29:19 PDT
At RustConf Friday I had dinner with a bunch of really cool people. I
especially enjoyed talking to a couple that was seated with me; we
chatted for several hours. One was a tech geek like myself (except
smarter and harder working, I suspect). The other was a school nurse in
a rough major city. You don't get to meet IRL heroes that often. It
seemed like we had a lot of common interests and topics…
Ah, yes. I was talking about Jeopardy!, wasn't I? (The bang
[!] is part of the name, if you didn't know. Game shows.) Somehow the
topic came up, and I mentioned my "scoring system" for playing along at
home.
There is a standard scoring system used by those aspiring to become
Jeopardy! contestants: Coryat Scoring, developed for comparing
contestant skill.
Reportedly many home players try to get some minimum average Coryat
score before trying to get on the show.
I don't have any strong ambition to be a Jeopardy!
contestant. It would be pretty cool, but I'm just not willing to put in
the work. For me, trivia is supposed to be relaxing fun. For someone
aspiring to Jeopardy! greatness, there's not so much fun to be
had.
Unfortunately, Jeopardy! recycles a bunch of topics. A lot.
See how many episodes you go without seeing one or more of the
following:
- A question that is easy if you have memorized the US Presidents and
  Vice Presidents, their sequence numbers, and their dates in office.
- A question that requires you to know the capitals of the 50 US states,
  or vice-versa.
- A question that requires you to distinguish between Luxembourg,
  Liechtenstein, and Monaco.
- A question about the life and works of Jane Austen and/or the various
  Brontë sisters.
- A question involving Robert Frost or Robert Burns.
- A question asking about a European mountain range.
I could go on (oh, could I), but you get the idea. To be solid at
Jeopardy!, it's not enough just to know actual trivia. You need to first
survey this body of knowledge by digging through gobs of old game
records and trying to see what the "Clue Crew" (who writes the
questions) is up to. You then need to memorize quite a bit of
information to cover this gap. Sounds like a lot of work for a lazy
person like me.
What I take pride in is my knowledge of actual "trivia trivia". The
kind of stuff that you would never be able to predict or prepare for by
study; you just have to know. I know what a finial is, what Matthias
Rust did, what Hanlon's Razor says and who (probably) coined it. I
didn't learn these things by studying, but by encountering and
remembering them.
Sooo… Here's how I score Jeopardy! when playing at home:
- I may only score on questions that are not correctly answered by the
  in-game contestants. This includes regular questions not correctly
  answered by any of the three, and Daily Double questions not correctly
  answered by the contestant tasked.
- I give myself one point for each such question that I get correct. I
  do not subtract for those I get wrong: I allow myself to guess freely.
- If I answer before a player attempts a question, I may change my
  answer after they fail.
- Final Jeopardy is scored separately. I give myself an "Ace" if I get
  Final Jeopardy and the other players do not.
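If it helps to see the bookkeeping spelled out, here is a tiny sketch of
the tally in Python. It is only an illustration of the rules above: the
record format (one flag for whether the contestants got the clue, one
for whether I did) is made up.

```python
# Sketch of my home-scoring rules. The record format is invented:
# each clue is (contestants_got_it, i_got_it); Final Jeopardy is
# handled separately.

def home_score(clues, final_me, final_contestants):
    """Score only clues the on-air contestants missed; wrong guesses
    cost nothing. Returns (points, ace)."""
    points = sum(
        1
        for contestants_got_it, i_got_it in clues
        if not contestants_got_it and i_got_it
    )
    # An "Ace": I get Final Jeopardy and the contestants do not.
    ace = final_me and not final_contestants
    return points, ace

# Example: the contestants missed three clues and I got two of them,
# and I alone got Final Jeopardy.
print(home_score(
    [(False, True), (False, True), (False, False), (True, True)],
    final_me=True, final_contestants=False))   # (2, True)
```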
It is very rare that I score zero in a game. I would say my average
is around 3-4. The other night I scored 10 plus an Ace.
Am I proud of that? Of course I am. But I'm also concerned. The
quality of contestants has definitely seemed to slip over the last two
years, for obvious reasons. But I was expecting it to come back, and
with a few notable exceptions, I don't feel like it has. The clues seem
like they're getting easier, and yet we still seem to get a lot more
games with a lot of unanswered or incorrectly-answered questions than we
did a few years ago.
The quality of clue-writing also seems to me to have declined lately.
Much more often, a clue comes across as ambiguous or just poorly worded.
Much more often, my wife and I stare at a wall-of-text clue hoping just
to understand what is being asked. For a while there was a very strong
reliance on things not interestingly related to trivia questions (don't
get me started on Celebrity Before and After), although thankfully that
seems to be winding down a bit.
Anyhoowww… I was talking with this nice person at the dinner and
mentioned my scoring system, and she looked at me astonished and said "I
thought I was the only one who scored like that!"
So that was cool.
A: It's the GOAT of game shows, but also a source of
constant friction and confusion: dangerous people on a dangerous
show.
Q: ?
2022-08-07 00:19:18 PDT
So I've had my last two days completely consumed by RustConf 2022. In
particular, I'm somehow leading something called Rust-Edu which is
intended to get the Rust programming language into academia.
tl;dr: I've already missed a Blaugust blog post.
This is a blog post to backfill yesterday's void. Tomorrow I will
write one to fill today's void, and another for tomorrow.
So it goes.
2022-08-04 18:39:02 PDT
Still fighting the technical demons here, but while I do that I
thought it would be good to say what this blog should be about…
- Technology: I am a technologist — a computer scientist, software
  developer, and hobbyist. I sometimes know things, and want to share
  them.
- Creativity: I like to make and build interesting things — writing,
  music, electronics, etc. Sharing that seems fun too.
- Opinion: As someone voted Most Opinionated in my High School class
  (seriously), I sometimes want to share my opinions with the world.
- Miscellaneous: So much miscellaneous.
I hope you will enjoy some of it once I get good and rolling.
2022-08-03 19:53:55 PDT
This week my posts are kind of obsessed with process. Sorry: it's a
boring way to start Blaugust. But it is what it is: I need to understand
how to make my blogging process more permanent, and I'd like to keep a
record of successes and failures.
So… much of yesterday was spent fixing GitAtom issues.
I closed an issue or two and fixed some minor bugs. This site seems
usable now for blogging, if a little fragile and weird.
I'm not yet sure whether GitAtom was the right choice.
Something like Ghost, which I have
used in the past, might have been better. It offers many of the
advantages of GitAtom, along with a fully-developed and polished
experience.
GitAtom offers two things that I don't know how to get otherwise:
- Low-friction Git-backed Markdown blogging with remote publishing.
  Lots of systems offer some subset of these features. I know of none
  that offers the whole package.
- Absolute zero lock-in. By maintaining blog content in 100% standard
  open formats, and code as fully open-source extremely simple Python,
  GitAtom ensures that both repairing the existing codebase and
  migrating content elsewhere are easily achievable.
I built code to extricate my old blog content from the clutches
of an ancient version of Drupal when I
could no longer maintain or upgrade my site. blog-escape
is incomplete and ugly, and does a poor job. I don't want that
experience again.
What GitAtom doesn't
offer is an Atom feed.
Yes, I know. Being easily able to provide an Atom feed was one of the
big selling points of storing all the content in Atom syndication format.
The code never got written. The Atom feed files being saved by GitAtom
appear not to
be RFC compliant, which will need to be fixed before trying to
generate a feed from them.
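As a first step, I will probably want a quick compliance check. Here's a
rough sketch in Python; the command-line interface and file layout are
guesses, but the required child elements (id, title, updated) are
straight out of RFC 4287's rules for an atom:entry.

```python
# Rough sketch of an RFC 4287 sanity check for stored Atom entry files.
# The command-line interface and file layout are guesses; the required
# elements are what the RFC demands of every atom:entry.
import sys
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
REQUIRED = ("id", "title", "updated")

def check_entry(path):
    """Return a list of complaints about one Atom entry file."""
    root = ET.parse(path).getroot()
    problems = []
    if root.tag != ATOM + "entry":
        problems.append(f"root element is {root.tag}, not atom:entry")
    for name in REQUIRED:
        if root.find(ATOM + name) is None:
            problems.append(f"missing required atom:{name}")
    return problems

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for problem in check_entry(path):
            print(f"{path}: {problem}")
```

Running something like that over the stored entries should at least tell
me how far out of compliance they are before I try to stitch them into a
feed.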
So now I have two projects.
- Blog daily for Blaugust.
- Fix my blogging tool — in particular by adding an Atom feed.
I'll admit to some discouragement, but it has to be done. Onward.
P.S. — While trying to find blog-escape
above I
Googled "bartmassey blog rescue". I got a hit for this book,
which confused me mightily for a moment. Needless to say, I am not the
Bart Massey who wrote this book with an oddly relevant keyword.
The interwebs are weird.
2022-08-02 19:49:47 PDT
A couple of decades ago, I had a bunch of students who had all done
substantial research work: I think about eight of them? One or two MS
and a bunch of undergrads. The opportunity came up for the students to
present at a University-wide student research Poster Contest. I
generally don't like Poster Sessions; I think they're kind of a bad
cross between a presentation and an article. But I thought it would be
fun to flood the thing, so I brought the whole bunch of 'em — each with
a lovely poster. (Coincidentally, so did one of our Physics profs. It
was kind of hilarious, actually. Between the two of us I think we had
about half the presenters.)
Anyway, if you're going to bring a team, you need a team uniform. So
I had hats made. I designed a "Friends Of Bart" FOB logo and Keith
Packard and I rendered it in SVG using our Nickle programming language.

We looked ever so stylish.

That day the FOB "brand" was born. I've been using it ever since.
2022-08-02 18:49:47 PDT
I have been using DALL-E to
generate some of the images for this site. These images are captioned
with the prompt I gave DALL-E to produce them: I then selected the best
of the four given choices.
Many thanks to OpenAI for the Beta DALL-E access. Hope this will be
fun for everyone.
2022-08-01 18:20:33 PDT
I decided to take a shot at Blaugust
2022 this August. Sadly, I don't have a blogging tool I'm really
happy with at this point.
I'm going to try to dogfood GitAtom as my blogging
tool. GitAtom was written by my students to make Keith Packard and me
happy. Sadly, it wasn't quite completed, and needs quite a lot of work.
Nonetheless, I'm going to see if it can work for the month, and fix it
as I go.
More tomorrow. Wish me luck!