Contact
Joel Beckmeyer

Matrix: @joel:thebeckmeyers.xyz

Fediverse: @TinfoilSubmarine@social.beckmeyer.us
You can find me on the Fediverse and Matrix.
- - Read More… - -There are many that say
-(and I tend to agree)
-that free software is the best there could be.
But please don’t mistake
-using software that’s free
-as a right to superiority.
There are many that go
-from day to day living
-and don’t give a thought to what they are using.
Are they worse for this?
-Are you better for caring?
-Sometimes the truth can be quite baring.
That not every human
-in present circumstance
-is able or willing to take a chance.
‘Cause that’s what it is,
-taking a chance and going
-into the unknown with fear, and knowing
that what you might find,
-may not truly be better.
But instead simply different;
-and still made by a stranger.
I’ve seen a lot of talk about this stuff:

“Check out my FOSS project (hosted on Github)”

“Wayland is a great innovation and boon to the community! Also, there are very few tools/alternatives available yet for your favorite X11 tool!”

“We love open source! Also, we develop the most popular proprietary operating system!”

“Do as I say, not as I do.”

We love to poke fun at and expose this kind of stuff, which is all fine and dandy. I think it’s an interesting (and important) part of our humanity that this kind of thing bugs us so much. Think about that last point, which, at least in my experience, is something I loved to fault authorities for.

Hypocrisy is fun (and also infuriating) to uncover in others, but how often do we run a “consistency check” on ourselves? Is what we are saying evidenced by the rest of our actions?

That’s a hard look sometimes. I know it is for me, since I’m very quick to judge others but don’t often think about how I fail at my own principles.

An example: as a FOSS advocate, it’s almost second nature to assume that everything will be better and easier with more people using FOSS. When evidence seems to point to the contrary (e.g. fighting with Matrix/Element to get it working for my family and friends), I don’t own up to the fact that it isn’t easier, and that is an actual problem.

If we truly want to build a welcoming and wholesome community, let’s be careful to run a consistency check on ourselves to make sure nothing smells foul.
Today, I switched my workstation from `sudo` to `doas`. I’m running Void Linux, and the process was fairly easy.
First, I needed to figure out how to remove `sudo` (yes, I realize I could have installed `doas` first, then removed `sudo`, but I decided to do it the hard way). As it turns out, the advanced usage section of the XBPS manual details how to use the `ignorepkg` entry in xbps.d with nothing other than this exact use case! I created the file `/etc/xbps.d/20-ignorepkg-sudo.conf` with contents

```
ignorepkg=sudo
```

and then ran `sudo xbps-remove sudo` (an ironic command).
After that, because I was stupid and removed `sudo` before I had set up `doas`, I had to use plain-old `su` to change to the root user and run `xi opendoas`. I also configured `doas` in `/etc/doas.conf` with the following:

```
# see doas.conf(5) for configuration details
permit nopass keepenv :admin
```
I ran `groupadd admin` and `usermod -aG admin joel`, and then logged out so that my user account would see the new group perms.

And just like that, I can now run `doas xbps-install ...` and all of my other commands, just substituting `doas` for `sudo`.
The one thing I immediately missed was `sudoedit`. Before I accidentally tried to use `sudo` for the first time, I had already accidentally tried to run `sudoedit` at least 5 times. I had to fix this. I saw a discussion on Reddit where one user suggested writing a script to replace the `sudoedit` functionality. I quickly started hacking together something like that. I started with:
```sh
#!/bin/sh
mkdir -p /tmp/doasedit
doas cp "$1" /tmp/doasedit/tmp_file
$EDITOR /tmp/doasedit/tmp_file
```
I quickly ran into my first road-block: the script is going to have to change the permissions of that file before the user can edit it. But if the script changes the permissions, how can I restore the file to the original location with the right permissions? `cp /tmp/doasedit/tmp_file "$1"` won’t work. I thought about just using `cat` to overwrite the file contents in-place (`cat /tmp/doasedit/tmp_file > "$1"`), but that could create some issues if a program has the file open. Instead, a better option is to create two copies of the file: one for editing, and one for preserving file attributes:
```sh
#!/bin/sh
mkdir -p /tmp/doasedit
doas cp "$1" /tmp/doasedit/edit
doas chown -R "$USER:$USER" /tmp/doasedit/edit
doas cp "$1" /tmp/doasedit/file
$EDITOR /tmp/doasedit/edit
cat /tmp/doasedit/edit | doas tee /tmp/doasedit/file 1>/dev/null
doas mv -f /tmp/doasedit/file "$1"
rm -rf /tmp/doasedit
```
Of course, the issue with this is that it only works with absolute paths. I want to make it work for relative paths as well. I’m going to take advantage of `realpath`, which is part of the `coreutils` package on Void. As a bonus, this will also take care of the edge case where the given file is a symlink (IIRC, `sudoedit` didn’t follow symlinks, so I may be diverging here):
```sh
#!/bin/sh
mkdir -p /tmp/doasedit
srcfile="$(realpath "$1")"

doas cp "$srcfile" /tmp/doasedit/edit
doas chown -R "$USER:$USER" /tmp/doasedit/edit
doas cp "$srcfile" /tmp/doasedit/file

$EDITOR /tmp/doasedit/edit

cat /tmp/doasedit/edit | doas tee /tmp/doasedit/file 1>/dev/null
doas mv -f /tmp/doasedit/file "$srcfile"

rm -rf /tmp/doasedit
```
At this point, it works… okay-ish. It can only be used in one instance at a time, since I hard-coded `/tmp/doasedit/file` and `/tmp/doasedit/edit`, but that’s easily fixed:
```sh
#!/bin/sh

destfile_pfx="$(cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32)"

while [ -d "/tmp/doasedit/$destfile_pfx" ]; do
    destfile_pfx="$(cat /dev/urandom | tr -cd 'a-f0-9' | head -c 32)"
done

mkdir -p "/tmp/doasedit/$destfile_pfx"
srcfile="$(realpath "$1")"

doas cp "$srcfile" "/tmp/doasedit/$destfile_pfx/edit"
doas chown -R "$USER:$USER" "/tmp/doasedit/$destfile_pfx/edit"
doas cp "$srcfile" "/tmp/doasedit/$destfile_pfx/file"

$EDITOR "/tmp/doasedit/$destfile_pfx/edit"

cat "/tmp/doasedit/$destfile_pfx/edit" | doas tee "/tmp/doasedit/$destfile_pfx/file" 1>/dev/null
doas mv -f "/tmp/doasedit/$destfile_pfx/file" "$srcfile"

rm -rf "/tmp/doasedit/$destfile_pfx"
```
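As an aside: the random-prefix loop above can also be replaced by `mktemp -d`, which creates a unique directory atomically. My script keeps the hand-rolled version, so treat this as an alternative sketch rather than what the repo actually does:

```shell
#!/bin/sh
# Alternative sketch: let mktemp pick the unique working directory.
# It creates the directory atomically, so no existence-check loop is needed.
workdir="$(mktemp -d /tmp/doasedit.XXXXXX)"
echo "$workdir"
```

The trailing `XXXXXX` is the template that `mktemp` fills in with random characters.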
At this point, the only thing missing is a check to see whether the file was actually edited:

```sh
...
cat "/tmp/doasedit/$destfile_pfx/edit" | doas tee "/tmp/doasedit/$destfile_pfx/file" 1>/dev/null

if cmp -s "/tmp/doasedit/$destfile_pfx/file" "$srcfile"; then
    echo "Skipping write; no changes."
else
    doas mv -f "/tmp/doasedit/$destfile_pfx/file" "$srcfile"
fi
...
```
I put this in a repo on GitHub if anyone is interested. I know that a major weakness of this script is the number of times it calls `doas`, which could break flows where a password is required every time `doas` is run.
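One idea for reducing those `doas` calls (hypothetical; this is not in the repo) would be to batch the privileged copies into a single `doas sh -c` invocation, so a password-prompting setup would ask at most once before editing and once after. Here is a rough sketch; for illustration, `doas` is stubbed out with a shell function (and the `chown` step is dropped) so the fragment runs unprivileged:

```shell
#!/bin/sh
# Hypothetical sketch only: batch privileged steps into one doas call.
# Stub doas so this illustration runs without privileges; remove the
# stub in real use.
doas() { "$@"; }

prep_copies() {
    srcfile="$(realpath "$1")"
    workdir="$(mktemp -d)"
    # one privileged call creates both working copies
    doas sh -c "cp '$srcfile' '$workdir/edit' && cp '$srcfile' '$workdir/file'"
    echo "$workdir"
}

# demo on a scratch file
printf 'hello\n' > /tmp/doasedit_demo.txt
wd="$(prep_copies /tmp/doasedit_demo.txt)"
```

The quoting only holds for paths without single quotes, so a real version would need to be more careful.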
Void Linux recently announced that they were going to move back to OpenSSL after originally switching to LibreSSL in 2014. It seems that there are a lot of things at play here.
It seems that the main focus of the recent announcement is on the maintainability and other difficulties of not using the one true SSL/TLS library. To me, this pragmatically makes sense. However, every time something like this happens I get this lingering feeling of worry…

- Microsoft moving their default browser from their own implementation to Chromium, and other browsers following suit.
- Linux distributions moving en masse to systemd.
- Distributed email being slowly crushed and killed by Google with GMail.
- And many other examples that aren’t immediately coming to mind.
I think it’s great that OpenSSL as a project has made a comeback from the Heartbleed fiasco, and that it is apparently more actively developed nowadays, but the fact that we are even at the point of moving back to OpenSSL due to difficulties with building software is worrying. To me, it looks like a symptom of software becoming too entrenched in, and dependent on, a single piece of software.

This kind of accusation coming from anyone is going to be hypocritical, since we all depend on Linux, X11, Wayland, systemd, or some common piece of software that we take for granted and don’t lose sleep over. However, I think what’s categorically different about this one is that an alternative was adopted and worked on, but eventually “failed” (at least for Void, and possibly for Linux as a whole).

I don’t know what the fix for this specific issue would be. I’m not nearly familiar enough with SSL/TLS, or with how you would develop software to be agnostic of dependencies like this. But I think that in order to honor principles like the Unix philosophy, the KISS principle, and countless others, we need to figure out a way to be more modular about dependency issues like this.
I decided to do some work on my Linksys WRT32X running OpenWRT to make it a little more useful.
Unbound is a DNS resolver which I like because it’s recursive, meaning it directly queries the root servers instead of relying on existing DNS servers run by Google, Cloudflare, your ISP, or the like. I already have it running on several of my servers and computers, but I figured it would be great if everything on my network could use Unbound and be, well, unbound from all of those intermediary DNS servers.
Luckily, OpenWRT already has Unbound packaged, and also has a useful LuCI app that goes with it (LuCI is the graphical web interface that comes with OpenWRT). All I had to do was install `luci-app-unbound`, which pulls in all of the necessary dependencies to run unbound. After that finished installing, I refreshed LuCI/OpenWRT and went to “Services” on the top, and there it is!
At this point, you’ll have to get your hands dirty. You can either dig through some LuCI menus or SSH in and make some edits. For reference, I’m using the “Parallel dnsmasq” section from the README for unbound in the OpenWRT packages (which has a lot of other useful information as well!). Essentially, I made the edits to `/etc/config/unbound` and `/etc/config/dhcp` after SSH’ing in. However, you can make the same edits through LuCI.
For the `/etc/config/unbound` edits, you can edit the file in LuCI directly at “Services -> Recursive DNS -> Files -> Edit: UCI”.

For the `/etc/config/dhcp` edits, you can find the same fields under “Network -> DHCP and DNS”. However, the field names there differ from the lines in the config file, so you would need to research which fields in LuCI map to which lines in `/etc/config/dhcp`.
At this point (or maybe after restarting unbound and dnsmasq, which is a lot easier using SSH and `/etc/init.d ... restart` as well), OpenWRT should now be using unbound for resolving all DNS lookups, while dnsmasq is only used for DHCP-DNS.
Bonus: you can also enable a nice status dashboard in LuCI under “Services -> Recursive DNS -> Status”, but this requires installing two more packages: `unbound-control` and `unbound-control-setup`. You will also need to change a line in `/etc/config/unbound`:

```
...
option unbound_control '0'
...
```

becomes

```
...
option unbound_control '1'
...
```
A word of warning: there is another section in the README on “Unbound and odhcpd” which tries to cut out dnsmasq completely. However, when I tried to set this up, I got myself into a lot of trouble (I had to reset OpenWRT, re-install any extra software packages, and restore my configuration from backup). It is also possible that if you mess up the configuration for the “Parallel dnsmasq” method, you could end up in a similar error state and have to start over. Please be careful when doing this and don’t change anything you’re not supposed to.
Now, moving on to adblock, which should be much simpler to set up. First, install `luci-app-adblock` and refresh. Navigate to “Services -> Adblock” and check the settings at the bottom. The only thing you need to get going is to go to the “Blocklist Sources” tab and choose your blocklists. The adblock readme has some more info on what each list is. After that, make sure “Enabled” is checked under the “General Settings” tab, and click the “Refresh” button above. Then you’re good to go; adblock should work out of the box with unbound; cheers!
ADDENDUM: Another word of warning: once you’ve set up adblock, it will download the blocklists, merge them into a single file at `/var/lib/unbound/adb_list.overall`, and try to restart unbound. I recommend not trying to view or interact with adblock or unbound during this restart, which can take anywhere from 30 seconds to 2 minutes. Just leave them alone in LuCI for a little bit…
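If you’re curious what that merged file looks like, it’s essentially one unbound directive per blocked domain. The snippet below is my own illustration of the idea (not adblock’s actual code, and adblock’s exact output format may differ): it converts a couple of hosts-format blocklist entries into `local-zone` lines like the ones unbound consumes:

```shell
#!/bin/sh
# Illustration only: turn hosts-format blocklist entries into
# unbound local-zone directives, roughly what a blocklist merge produces.
printf '0.0.0.0 ads.example.com\n0.0.0.0 tracker.example.net\n' > /tmp/blocklist.hosts
awk '{ printf "local-zone: \"%s\" static\n", $2 }' /tmp/blocklist.hosts > /tmp/adb_list.demo
cat /tmp/adb_list.demo
```

With a list of any real size you can see why the merge-and-restart step takes unbound a while to chew through.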
After talking about the hardware and software problems of digital permanence, I’m struck by a classical Sci-Fi motif with a conundrum: the Generation Ship, a ship outfitted with all of the technology, infrastructure, and storage to support lightyear-scale human travel.

But what about that technology on the ship? If we build one of these ships, we need to accomplish one of several things in regards to information storage:

1. Innovate to the point where the lifetime of the storage devices is able to support lightyear-scale travel.

That’s a tall order, given where we are right now with physical storage devices. As I mentioned in one of my previous posts, the average lifetime of physical storage devices is less than 100 years, no matter if it is a hard drive, solid-state drive, etc.

2. Manufacture replacement storage devices on the ship itself.

Again, in my mind a tall order, since it would require facilities on the ship to create storage devices. The problem of having materials is at least solvable by just sending the ship with all of the materials it needs in advance.

One of the main reasons I’m even thinking about this is because I’m an individual with limited resources. Accordingly, I think about things in terms of broken/working, on/off, etc. With enough resources, there is a much larger chance of being able to repair, re-purpose, and otherwise revitalize storage devices, increasing their lifetime. E.g., if the only failure in a hard drive is the control circuit, that is an “easy enough” repair.

I like to toy with the idea of a generation ship a lot in my head, and I think it’s really fun to consider the technical possibilities and needs of a ship like this.
Note: This is a continuation of the thoughts I started thinking about in my Volatile Mediums blog post.

The next level up from physical mediums for data storage is the way that the data is stored. In the digital age, we have a plethora of formats for storing information. For me, one of the most interesting areas of information storage is the analog-digital space.

The fundamental problem of storing audio, video, and other replications of the physical world is that there is so much information that we can collect with sensors (think microphones, video cameras, etc.). It would be great if we could go get the best camera and microphone out there, record whatever people record these days, and have that exact physical experience “played back” for us on a screen and speaker/headphones.

Unfortunately, there are several problems with this. Among those is the actual design of the sensor. It takes a lot of careful thought, engineering, and the like to create a truly good microphone or camera. And after all of that, this sensor will cost something. Hopefully, that cost will correspond to the actual technical ability of that sensor! In any case, not everyone can have the best camera or microphone, due to any number of constraints, not just those listed above.

The second problem is the sampling issue. The sensor will create some sort of output that can then be measured, or sampled, by an ADC (analog-to-digital converter). The very word “sample” belies what this nearly magical box is doing: it is only looking at certain portions or timestamps of the analog signal. Granted, the time between samples can be very small (e.g. 44.1 kHz is a fairly common sample rate for audio), but there is still some loss of signal. Once the ADC creates these samples, it converts them into a digital format (something that can be stored on a CD, hard drive, thumb drive, etc.).

The third problem is the encoding issue. The ADC creates all of these samples, but we need to start thinking about storage limitations. Storing the raw output of a sensor can take a lot of space: an average album length (40 minutes) could easily take 400MB of space! Now, again, the physical storage space is moving in the upward direction to combat this, but storing isn’t the only problem. One prime issue is internet bandwidth.
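That 400MB figure is easy to sanity-check: at the common CD-quality parameters mentioned above (44.1 kHz sample rate, 16-bit samples, two channels), the raw size of a 40-minute recording works out to roughly 423MB:

```shell
#!/bin/sh
# Raw PCM size of a 40-minute album at CD quality:
# samples/s * bytes per sample * channels * seconds
rate=44100
bytes_per_sample=2      # 16-bit samples
channels=2              # stereo
seconds=$((40 * 60))
total=$((rate * bytes_per_sample * channels * seconds))
echo "$total"           # prints 423360000 (bytes, i.e. ~423MB)
```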
The solution to this is compression, like a ZIP file. It makes big files smaller by doing some fancy math tricks that can be reversed by a computer to reconstruct the original file. However, for audio/video files, another level of compression exists which actually gets rid of some of the information in the original file to save more space. This is called “lossy” compression, as opposed to “lossless” compression.

Great! We’ve found a way to save more space. The problem with lossy compression is that we have to decide which information to throw away. Usually, this is frequencies that the average human ear/eye can’t perceive. But let’s just say that some compression is a bit too “greedy” when it comes to saving space and starts to cut into the band of frequencies that can be perceived. Also note that the design of these compression algorithms is an art form and takes lots of careful consideration.
The final problem I want to mention is the codec problem. There are many different codecs available today, and for each and every one of them to be useful, you need to have a way to decode each and every one of them. Unfortunately, this is sometimes very difficult.

First, it could be a licensing issue, where you don’t have the correct software installed or purchased to actually decode that file on your computer.

Second, it could be a physical-constraints issue, where your computer isn’t powerful enough to decode the file at a fast enough rate for you to view it without stuttering, buffering, etc.

Third, it could be a matter of personal preference. Some people have much more sensitive eyes/ears and need formats that are more transparent, meaning that the lossy file is perceptually identical to the source it was encoded from.
With all of these issues at play, I think there are several key points to make:

- I can’t stress this one enough: we need to make sure we are doing everything possible to not let our information die when a corporation or individual makes a decision that impacts the “who, what, where, when, and how” of their codec usage.
- We need to remember that not everyone has the ability to use lossless codecs, whether that be because of internet bandwidth limitations, storage limitations, or the like. Instead, we need to continue to innovate in the lossy compression space to narrow the perceptual gap between lossy and lossless more and more.
- This one may sound weird, but the fact is, if we’re talking about long-term storage of information, we can’t let codecs die, since there may come a day when we need a codec to decode great-grandpa’s album that never made it big.
I’ve recently been thinking a lot about storage mediums [1] – especially in the long-term.
Technology has made a lot of progress. Digital storage mediums started out only being able to store 224KB on a tape drive, with an average lifetime of up to 30 years. Now, we can store terabytes of data on hard drives and solid-state drives. However, no one ever really answered the question about long-term storage.
(Note: the following is based on the assumption that the storage medium is only being used to make backups or archive data. The device itself could be unplugged and stored when no backup is in progress.)
Even though hard drives could theoretically store data for 20+ years, random bit flips, drive failure, etc. all make hard drives too volatile an option. As always, redundancy takes away some of these issues.
SSDs are in an even worse position: they cost significantly more than hard drives per TB right now, and last I heard, there were still issues with bit fade when unpowered.
CD/DVD is sounding a lot better, but there are some serious issues here too. Variable quality directly impacts the storage lifetime. Physically storing the discs is a lot more risky, since the disc itself doesn’t have as much built-in protection as a hard drive or SSD. You’ll need a much larger quantity of discs to store the terabytes of data that you can easily dump on one hard drive. And finally, life expectancy is still fairly low – while manufacturers of recordable discs (the ‘R’ in CD-R, DVD-R, etc.) claim life expectancies of 100-200 (!) years under optimal conditions, others are slightly more conservative, giving an estimate of 30 years. Oh, and remember how I mentioned this is for recordable discs? That means they’re write-once. The rewritable (RW – CD-RW, DVD-RW, etc.) discs have even lower life expectancies.
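To put a rough number on that “much larger quantity” point: a single-layer recordable DVD holds about 4.7GB, so even one modest 2TB hard drive’s worth of data turns into a sizable spindle of discs (quick ceiling-division arithmetic, ignoring filesystem overhead):

```shell
#!/bin/sh
# How many 4.7GB single-layer DVDs does 2TB of data need?
dvd_mb=4700                      # ~4.7GB per disc, in MB
drive_mb=$((2 * 1000 * 1000))    # 2TB in MB
discs=$(( (drive_mb + dvd_mb - 1) / dvd_mb ))  # ceiling division
echo "$discs"                    # prints 426
```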
All in all, humanity has not gotten very far with the digital storage medium. All of these life expectancies have an inconsequential variance when we zoom out to the century view of history.
[1] And no, I’m not talking about the kind you pay to see your dead great-great-aunt to figure out if you’re actually related to George Washington.

This is intended to be the beginning of a learning series/personal study on the issues surrounding information preservation, digital permanence, and their related issues.