Posts on Joel Beckmeyer's Blog https://beckmeyer.us/posts/ Recent content in Posts on Joel Beckmeyer's Blog Hugo -- gohugo.io en-us joel@beckmeyer.us (Joel Beckmeyer) joel@beckmeyer.us (Joel Beckmeyer) Sun, 04 Apr 2021 00:00:00 -0500 Consistency https://beckmeyer.us/posts/consistency/ Sun, 04 Apr 2021 00:00:00 -0500 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/consistency/ <p>I&rsquo;ve seen a lot of talk about this stuff:</p> <ul> <li>&ldquo;Check out my FOSS project (hosted on Github)&rdquo;</li> <li>&ldquo;Wayland is a great innovation and boon to the community! Also, there are very few tools/alternatives available yet for your favorite X11 tool!&rdquo;</li> <li>&ldquo;We love open source! Also, we develop the most popular proprietary operating system!&rdquo;</li> <li>&ldquo;Do as I say, not as I do.&rdquo;</li> </ul> <p>We love to poke fun at and expose this kind of stuff, which is all fine and dandy. I think it&rsquo;s an interesting (and important) part of our humanity that this kind of thing bugs us so much. Think about that last point, which at least in my experience, is something I <em>loved</em> to fault authorities for.</p> <p>Hypocrisy is fun and also infuriating to uncover in others, but how often do we do a &ldquo;consistency check&rdquo; on ourselves? Is what we are saying evidenced by the rest of our actions?</p> <p>That&rsquo;s a hard look sometimes. I know it is for me, since I&rsquo;m <strong>very</strong> quick to judge others, but don&rsquo;t often think about how I fail at my own principles.</p> <p>Example: As a FOSS advocate, it&rsquo;s nearly natural to assume that everything will be better and easier with more people using FOSS. When evidence seems to point to the contrary (e.g. 
fighting with Matrix/Element to get it working for my family and friends), I don&rsquo;t own up to the fact that it isn&rsquo;t easier, and that is an actual problem.</p> <p>If we truly want to build a welcoming and wholesome community, let&rsquo;s be careful to do a consistency check to make sure nothing smells foul.</p> Better? https://beckmeyer.us/posts/better/ Sat, 03 Apr 2021 22:15:44 -0400 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/better/ <p>There are many that say<br> (and I tend to agree)<br> that free software is the best there could be.</p> <p>But please don&rsquo;t mistake<br> using software that&rsquo;s free<br> as a right to superiority.</p> <p>There are many that go<br> from day to day living<br> and don&rsquo;t give a thought to what they are using.</p> <p>Are they worse for this?<br> Are you better for caring?<br> Sometimes the truth can be quite baring.</p> <p>That not every human<br> in present circumstance<br> is able or willing to take a chance.</p> <p>&lsquo;Cause that&rsquo;s what it is,<br> taking a chance and going<br> into the unknown with fear, and knowing</p> <p>that what you might find,<br> may not truly be better.</p> <p>But instead simply different;<br> and still made by a stranger.</p> Moving Back To OpenSSL https://beckmeyer.us/posts/moving_back_to_openssl/ Mon, 22 Mar 2021 11:00:00 -0400 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/moving_back_to_openssl/ <p>Void Linux <a href="https://voidlinux.org/news/2021/02/OpenSSL.html">recently announced</a> that they were going to move back to OpenSSL after originally <a href="https://voidlinux.org/news/2014/08/LibreSSL-by-default.html">switching to LibreSSL in 2014</a>. It seems that there are a lot of things at play here.</p> <p>The main focus of the recent announcement seems to be the maintainability and other difficulties of not using the <em>one true SSL/TLS library</em>. To me, this pragmatically makes sense.
However, every time something like this happens I get this lingering feeling of worry&hellip;</p> <p>Microsoft moving their default browser from their own implementation to Chromium, and other browsers following suit.</p> <p>Linux distributions moving <em>en masse</em> to <strong>systemd</strong>.</p> <p>Distributed email being slowly crushed and killed by Google with GMail.</p> <p>And many other examples that aren&rsquo;t immediately coming to mind.</p> <p>I think it&rsquo;s great that OpenSSL as a project has made a comeback from the Heartbleed fiasco, and that it is apparently more actively developed nowadays, but the fact that we are even at the point of moving back to OpenSSL due to difficulties with building software is worrying. To me, it looks like a symptom of software becoming too entrenched and dependent on a single piece of software.</p> <p>This kind of accusation coming from anyone is going to be hypocritical, since we all depend on Linux, X11, Wayland, systemd, or some common piece of software that we take for granted and don&rsquo;t lose sleep over. However, I think what&rsquo;s categorically different about this one is that an alternative was adopted, worked on, but eventually &ldquo;failed&rdquo; (at least for Void, but also possibly for Linux as well).</p> <p>I don&rsquo;t know what the fix for this specific issue would be. I&rsquo;m not nearly familiar enough with SSL/TLS or how you would develop software to be agnostic of dependencies like this. 
But I think in order to honor principles like the Unix philosophy, the KISS principle, and countless others, we need to figure out a way to be more modular for dependency issues like this.</p> The Generation Ship Problem https://beckmeyer.us/posts/the_generation_ship_problem/ Fri, 19 Mar 2021 15:00:00 -0400 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/the_generation_ship_problem/ <p>After talking about the hardware and software problems of digital permanence, I&rsquo;m struck by a classical Sci-Fi motif with a conundrum: the <strong>Generation Ship</strong>, a ship outfitted with all of the technology, infrastructure, and storage to support lightyear-scale human travel.</p> <p>But what about that technology on the ship? If we build one of these ships, we need to accomplish one of several things in regard to information storage:</p> <h3 id="1-innovate-to-the-point-where-the-lifetime-of-the-storage-devices-is-able-to-support-lightyear-scale-travel">1. Innovate to the point where the lifetime of the storage devices is able to support lightyear-scale travel.</h3> <p>That&rsquo;s a tall order, given where we are right now with physical storage devices. As I mentioned in one of my previous posts, the average lifetime of physical storage devices is less than 100 years, no matter if it is a hard drive, solid-state drive, etc.</p> <h3 id="2-provide-the-facility-to-create-new-storage-devices-to-replace-the-failing-old-ones">2. Provide the facility to create new storage devices to replace the failing old ones.</h3> <p>Again, in my mind a tall order, since it would require facilities on the ship to create storage devices. The problem of having materials is at least solvable by just sending the ship with all of the materials it needs in advance.</p> <h3 id="3-provide-the-facility-to-revitalize-storage-devices">3.
Provide the facility to revitalize storage devices.</h3> <p>One of the main reasons I&rsquo;m even thinking about this is because I&rsquo;m an individual with limited resources. Accordingly, I think about things in terms of broken/working, on/off, etc. With enough resources, there is a much larger chance of being able to repair, re-purpose, and otherwise revitalize storage devices, increasing their lifetime. E.g., if the only failure in the hard drive is the control circuit, that is an &ldquo;easy enough&rdquo; repair.</p> <p>I like to toy with the idea of a generation ship in my head, and it&rsquo;s really fun to think through the technical possibilities and needs of a ship like this.</p> Volatile Formats https://beckmeyer.us/posts/volatile_formats/ Thu, 18 Mar 2021 14:24:00 -0400 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/volatile_formats/ <p><em>Note: This is a continuation of the thoughts I started in my <a href="https://beckmeyer.us/posts/volatile_mediums/">Volatile Mediums</a> blog post.</em></p> <p>The next level up from physical mediums for data storage is the <em>way</em> that the data is stored. In the digital age, we have a plethora of formats for storing information. For me, one of the most interesting areas of information storage is the analog-digital space.</p> <p>The fundamental problem of storing audio, video, and other replications of the physical world is that there is so much information that we can collect with sensors (think microphones, video cameras, etc.). It would be great if we could go get the best camera and microphone out there, record whatever people record these days, and have that exact physical experience &ldquo;played back&rdquo; for us on a screen and speaker/headphones.</p> <p>Unfortunately, there are several problems with this. Among those is the actual design of the sensor. It takes a lot of careful thought, engineering, and the like to create a truly good microphone or camera.
And after all of that, this sensor will cost something. Hopefully, that cost will correspond to the actual technical ability of that sensor! In any case, not everyone can have the best camera or microphone due to any number of constraints, not just those listed above.</p> <p>The second problem is the sampling issue. The sensor will create some sort of output that can then be measured, or <strong>sampled</strong>, by an ADC (analog-to-digital converter). The very word &ldquo;sample&rdquo; reveals what this nearly magical box is doing: it is only looking at certain portions or timestamps of the analog signal. Granted, the time between samples can be very small (e.g. 44.1 kHz is a fairly common sample rate for audio), but there is still some loss of signal. Once the ADC creates these samples, it converts them into a digital format (something that can be stored on a CD, hard drive, thumb drive, etc.).</p> <p>The third problem is the encoding issue. The ADC creates all of these samples, but we need to start thinking about storage limitations. Storing the raw output of a sensor can take a lot of space: an average album length (40 minutes) could easily take 400MB of space! Now, again, storage capacities keep growing to combat this, but storing isn&rsquo;t the only problem. One prime issue is internet bandwidth.</p> <p>The solution to this is compression, like a ZIP file. It makes big files smaller by doing some fancy math tricks that can be reversed by a computer to reconstruct the original file. However, for audio/video files, another level of compression exists which actually gets rid of some of the information in the original file to save more space. This is called &ldquo;lossy&rdquo; compression, as opposed to &ldquo;lossless&rdquo; compression.</p> <p>Great! We&rsquo;ve found a way to save more space. The problem with lossy compression is that we have to decide which information to throw away.
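That 400MB album estimate is easy to sanity-check with shell arithmetic (a sketch assuming CD-quality raw PCM: 44.1 kHz sample rate, 16-bit samples, two channels, 40 minutes of audio):

```shell
# Raw PCM size = sample_rate * bytes_per_sample * channels * seconds
bytes=$((44100 * 2 * 2 * 40 * 60))
echo "$bytes bytes"              # 423360000 bytes
echo "$((bytes / 1000000)) MB"   # ~423 MB of raw samples for one album
```

So the uncompressed figure holds up; lossless compression (FLAC and friends) typically cuts that roughly in half, while lossy codecs go much further.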
Usually, this is frequencies that the average human ear/eye can&rsquo;t perceive. But, let&rsquo;s just say that some compression is a bit too &ldquo;greedy&rdquo; when it comes to saving space and starts to cut into the band of frequencies that can be perceived. Also note that the design of these compression algorithms is an art form and takes lots of careful consideration.</p> <p>The final problem I want to mention is the codec problem. There are many different codecs available today, and for each one to be useful, you need a way to decode it. Unfortunately, this is sometimes very difficult.</p> <p>It could be a licensing issue, where you don&rsquo;t have the correct software installed or purchased to actually decode that file on your computer.</p> <p>Or it could be a physical constraints issue, where your computer isn&rsquo;t powerful enough to decode the file at a fast enough rate for you to view it without stuttering, buffering, etc.</p> <p>Third, it could be personal preference. Some people have much more sensitive eyes/ears and need to have formats that are more <strong>transparent</strong>, meaning that the lossy file is perceptually identical to the source it was encoded from.</p> <p>With all of these issues at play, I think there are several key points to make:</p> <h3 id="1-codecs-need-to-be-freely-available-for-widespread-use-with-no-strings-attached">1. Codecs need to be freely available for widespread use with no strings attached.</h3> <p>Can&rsquo;t stress this one enough: we need to make sure we are doing everything possible to not let our information die when a corporation or individual makes a decision that impacts the &ldquo;who, what, where, when, and how&rdquo; of their codec usage.</p> <h3 id="2-lossless-compression-is-good-but-it-is-not-the-only-thing-we-need">2.
Lossless compression is good, but it is not the only thing we need.</h3> <p>We need to remember that not everyone has the ability to use lossless codecs, whether that be because of internet bandwidth limitations, storage limitations, or the like. Instead, we need to continue to innovate in the lossy compression space to narrow the perceptual gap between lossy and lossless more and more.</p> <h3 id="3-a-codec-should-never-become-obsolete">3. A codec should never become obsolete.</h3> <p>This one may sound weird, but the fact is, if we&rsquo;re talking about long-term storage of information, we can&rsquo;t let codecs die, since there may come a day when we need a codec to decode great-grandpa&rsquo;s album that never made it big.</p> OpenWRT + Unbound + adblock https://beckmeyer.us/posts/openwrt_plus_unbound/ Fri, 05 Feb 2021 19:03:15 -0500 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/openwrt_plus_unbound/ <p>I decided to do some work on my Linksys WRT32X running OpenWRT to make it a little more useful.</p> <p><a href="https://nlnetlabs.nl/projects/unbound/about/">Unbound</a> is a DNS resolver which I like because it&rsquo;s recursive, meaning it directly queries the root servers instead of relying on existing DNS servers run by Google, Cloudflare, your ISP, or the like. I already have it running on several of my servers and computers, but I figured it would be great if everything on my network could use Unbound and be, well, <em>unbound</em> from all of those intermediary DNS servers.</p> <p>Luckily, OpenWRT already has Unbound packaged, and also has a useful LuCI app that goes with it (LuCI is the graphical web interface that comes with OpenWRT).
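(An aside: the steps below all go through the LuCI web UI, but the package installs can also be done over SSH with opkg, OpenWRT&rsquo;s command-line package manager. A sketch, assuming a stock OpenWRT image on the router:)

```shell
# On the router, over SSH:
opkg update                    # refresh the package lists first
opkg install luci-app-unbound  # pulls in unbound and the LuCI app
```

The same pattern works for the other packages mentioned later (luci-app-adblock, unbound-control, and so on).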
All I had to do was install <code>luci-app-unbound</code>, which pulls in all of the necessary dependencies to run unbound.</p> <p><img src="https://beckmeyer.us/luci_software.png" alt="LuCI: Software"></p> <p><img src="https://beckmeyer.us/luci_install.png" alt="LuCI: Install"></p> <p>After that finished installing, I refreshed LuCI/OpenWRT and went to &ldquo;Services&rdquo; on the top, and there it was!</p> <p><img src="https://beckmeyer.us/luci_services.png" alt="LuCI: Services -&gt; Recursive DNS"></p> <p>At this point, you&rsquo;ll have to get your hands dirty. You can either dig through some LuCI menus or SSH in and make some edits. For reference, I&rsquo;m using the <a href="https://github.com/openwrt/packages/blob/openwrt-19.07/net/unbound/files/README.md#parallel-dnsmasq">&ldquo;Parallel dnsmasq&rdquo;</a> section from the README for unbound in the OpenWRT packages (which has a lot of other useful information as well!). Essentially, I made the edits to <code>/etc/config/unbound</code> and <code>/etc/config/dhcp</code> after SSH&rsquo;ing in. However, you can make the same edits through LuCI.</p> <p>For the <code>/etc/config/unbound</code> edits, you can edit the file in LuCI directly at &ldquo;Services -&gt; Recursive DNS -&gt; Files -&gt; Edit: UCI&rdquo;:</p> <p><img src="https://beckmeyer.us/unbound_config.png" alt="LuCI: Edit /etc/config/unbound"></p> <p>For the <code>/etc/config/dhcp</code> edits, you can find the same fields under &ldquo;Network -&gt; DHCP and DNS&rdquo;:</p> <p><img src="https://beckmeyer.us/dhcp_config.png" alt="LuCI: Edit DHCP and DNS Settings"></p> <p>However, the field names are different from the lines in the config, so they would need to be researched to determine which fields in LuCI map to which lines in <code>/etc/config/dhcp</code>.</p> <p>At this point (or maybe after restarting unbound and dnsmasq, which is a lot easier using SSH and <code>/etc/init.d ...
restart</code> as well), OpenWRT should now be using unbound for resolving all DNS lookups, while dnsmasq is only used for DHCP-DNS.</p> <p>Bonus: you can also enable a nice status dashboard in LuCI under &ldquo;Services -&gt; Recursive DNS -&gt; Status&rdquo;, but this requires installing several more software packages: <code>unbound-control</code> and <code>unbound-control-setup</code>. You will also need to change a line in <code>/etc/config/unbound</code>:</p> <pre tabindex="0"><code>...
option unbound_control &#39;0&#39;
...
</code></pre><p>becomes</p> <pre tabindex="0"><code>...
option unbound_control &#39;1&#39;
...
</code></pre><p>A word of warning: there is another section on &ldquo;Unbound and odhcpd&rdquo; which tries to cut out dnsmasq completely. However, when I tried to set this up, I got myself into a lot of trouble (had to reset OpenWRT, re-install any extra software packages, and restore configuration from backup). It is also possible that if you mess up the configuration for the &ldquo;Parallel dnsmasq&rdquo; method, you could end up in a similar error state and have to start over. Please be careful when doing this and don&rsquo;t change anything you&rsquo;re not supposed to.</p> <p>Now, moving on to adblock, which should be <strong>much</strong> simpler to set up. First, install <code>luci-app-adblock</code> and refresh. Navigate to &ldquo;Services -&gt; Adblock&rdquo;:</p> <p><img src="https://beckmeyer.us/adblock.png" alt="Services -&gt; Adblock"></p> <p>Check the settings at the bottom. The only thing you need to do to get going is go to the &ldquo;Blocklist Sources&rdquo; tab and choose your blocklists.</p> <p><img src="https://beckmeyer.us/adblock_blocklist.png" alt="Adblock: Blacklist sources"></p> <p>The <a href="https://github.com/openwrt/packages/blob/master/net/adblock/files/README.md">adblock readme</a> has some more info on what each list is.
After that, make sure &ldquo;Enabled&rdquo; is checked under the &ldquo;General Settings&rdquo; tab:</p> <p><img src="https://beckmeyer.us/adblock_enable.png" alt="Adblock: enable"></p> <p>and click the &ldquo;Refresh&rdquo; button above:</p> <p><img src="https://beckmeyer.us/adblock_refresh.png" alt="Adblock: refresh"></p> <p>Then you&rsquo;re good to go; adblock should work out of the box with unbound; cheers!</p> <p>ADDENDUM: Another word of warning: once you&rsquo;ve set up adblock, it will download the blocklists, merge them into a single file at <code>/var/lib/unbound/adb_list.overall</code>, and try to restart unbound. I recommend not trying to view/interact with adblock or unbound during this restart, which can take anywhere from 30 seconds to 2 minutes. Just leave them alone in LuCI for a little bit&hellip;</p> Hello doas https://beckmeyer.us/posts/hello_doas/ Sat, 30 Jan 2021 15:15:55 -0500 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/hello_doas/ <p>Today, I switched my workstation from <code>sudo</code> to <code>doas</code>. I&rsquo;m running Void Linux, and the process was fairly easy.</p> <p>First, I needed to figure out how to remove <code>sudo</code> (yes, I realize I could have installed <code>doas</code> first, then removed <code>sudo</code>, but I decided to do it the hard way). As it turns out, the <a href="https://docs.voidlinux.org/xbps/advanced-usage.html#ignoring-packages">advanced usage section of the XBPS manual</a> details how to use the <code>ignorepkg</code> entry in xbps.d, using this exact use case as its example!
I created the file <code>/etc/xbps.d/20-ignorepkg-sudo.conf</code> with contents</p> <pre tabindex="0"><code>ignorepkg=sudo
</code></pre><p>and then ran <code>sudo xbps-remove sudo</code> (an ironic command).</p> <p>After that, because I was stupid and removed <code>sudo</code> before I had set up <code>doas</code>, I had to use plain-old <code>su</code> to change to the root user and run <code>xi opendoas</code>. I also configured <code>doas</code> in <code>/etc/doas.conf</code> with the following:</p> <pre tabindex="0"><code># see doas.conf(5) for configuration details
permit nopass keepenv :admin
</code></pre><p>I ran <code>groupadd admin</code>, <code>usermod -aG admin joel</code>, and then logged out so that my user account would see the new group perms.</p> <p>And just like that, I can now run <code>doas xbps-install ...</code> and all of my other commands, just substituting <code>doas</code> for <code>sudo</code>.</p> <p>The one thing I immediately missed was <code>sudoedit</code>. Before I accidentally tried to use <code>sudo</code> for the first time, I had already accidentally tried to run <code>sudoedit</code> <em>at least</em> 5 times. I had to fix this. I saw a discussion on Reddit where <a href="https://www.reddit.com/r/linux/comments/l6y7nv/is_doas_a_good_alternative_to_sudo/gl4hs42?utm_source=share&amp;utm_medium=web2x&amp;context=3">one user suggested</a> writing a script to replace the <code>sudoedit</code> functionality. I quickly started hacking together something like that. I started with:</p> <pre tabindex="0"><code>#!/bin/sh
mkdir -p /tmp/doasedit
doas cp $1 /tmp/doasedit/tmp_file
$EDITOR /tmp/doasedit/tmp_file
</code></pre><p>And quickly ran into my first road-block. The script is going to have to change the permissions of that file before the user can edit it. But if the script changes the permissions, how can I restore it to the original location with the right permissions? <code>cp /tmp/doasedit/tmp_file $1</code> won&rsquo;t work.
I thought about just using cat to overwrite the file contents in-place (<code>cat /tmp/doasedit/tmp_file &gt; $1</code>). That <em>could</em> create some issues if a program has the file open. Instead, a better option is to create two copies of the file&ndash;one for editing, and one for preserving file attributes:</p> <pre tabindex="0"><code>#!/bin/sh
mkdir -p /tmp/doasedit
doas cp $1 /tmp/doasedit/edit
doas chown -R $USER:$USER /tmp/doasedit/edit
doas cp $1 /tmp/doasedit/file
$EDITOR /tmp/doasedit/edit
cat /tmp/doasedit/edit | doas tee /tmp/doasedit/file 1&gt;/dev/null
doas mv -f /tmp/doasedit/file $1
rm -rf /tmp/doasedit
</code></pre><p>Of course, the issue with this is that it only works with absolute paths. I want to make it work for relative paths as well. I&rsquo;m going to take advantage of <code>realpath</code>, which is part of the <code>coreutils</code> package from Void. As a bonus, this will also take care of the edge case where the given file is a symlink (IIRC, <code>sudoedit</code> didn&rsquo;t follow symlinks, so I may be diverging here):</p> <pre tabindex="0"><code>#!/bin/sh
mkdir -p /tmp/doasedit
srcfile=&#34;$(realpath $1)&#34;
doas cp $srcfile /tmp/doasedit/edit
doas chown -R $USER:$USER /tmp/doasedit/edit
doas cp $srcfile /tmp/doasedit/file
$EDITOR /tmp/doasedit/edit
cat /tmp/doasedit/edit | doas tee /tmp/doasedit/file 1&gt;/dev/null
doas mv -f /tmp/doasedit/file $srcfile
rm -rf /tmp/doasedit
</code></pre><p>At this point, it works&hellip;okay-ish.
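As an aside, since everything so far lives under a fixed <code>/tmp/doasedit</code> path, it&rsquo;s worth mentioning <code>mktemp(1)</code> (also in coreutils), which creates a unique scratch directory atomically; a minimal sketch, not part of the original script:

```shell
#!/bin/sh
# mktemp -d atomically creates a unique directory; the trailing
# XXXXXX is replaced with random characters, so concurrent runs
# cannot collide and there is no check-then-create race.
tmpdir="$(mktemp -d /tmp/doasedit.XXXXXX)" || exit 1
printf '%s\n' "$tmpdir"
rm -rf "$tmpdir"
```

A directory like this could replace the hard-coded path, at the cost of threading <code>$tmpdir</code> through the rest of the script.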
It can only be used by one instance at a time currently, since I hard-coded <code>/tmp/doasedit/file</code> and <code>/tmp/doasedit/edit</code>, but that&rsquo;s easily fixed:</p> <pre tabindex="0"><code>#!/bin/sh
destfile_pfx=&#34;$(cat /dev/urandom | tr -cd &#39;a-f0-9&#39; | head -c 32)&#34;
while [ -d &#34;/tmp/doasedit/$destfile_pfx&#34; ]; do
    destfile_pfx=&#34;$(cat /dev/urandom | tr -cd &#39;a-f0-9&#39; | head -c 32)&#34;
done
mkdir -p /tmp/doasedit/$destfile_pfx
srcfile=&#34;$(realpath $1)&#34;
doas cp $srcfile /tmp/doasedit/$destfile_pfx/edit
doas chown -R $USER:$USER /tmp/doasedit/$destfile_pfx/edit
doas cp $srcfile /tmp/doasedit/$destfile_pfx/file
$EDITOR /tmp/doasedit/$destfile_pfx/edit
cat /tmp/doasedit/$destfile_pfx/edit | doas tee /tmp/doasedit/$destfile_pfx/file 1&gt;/dev/null
doas mv -f /tmp/doasedit/$destfile_pfx/file $srcfile
rm -rf /tmp/doasedit/$destfile_pfx
</code></pre><p>At this point, the only thing missing is the check to see if the file was actually edited:</p> <pre tabindex="0"><code>...
cat /tmp/doasedit/$destfile_pfx/edit | doas tee /tmp/doasedit/$destfile_pfx/file 1&gt;/dev/null
if cmp -s &#34;/tmp/doasedit/$destfile_pfx/file&#34; &#34;$srcfile&#34;; then
    echo &#34;Skipping write; no changes.&#34;
else
    doas mv -f /tmp/doasedit/$destfile_pfx/file $srcfile
fi
...
</code></pre><p>I put this in a <a href="https://github.com/AluminumTank/doasedit">repo on GitHub</a> if anyone is interested. I know that a major weakness of this script is the number of times it calls <code>doas</code>, which could break flows where a password is required every time <code>doas</code> is run.</p> Volatile Mediums https://beckmeyer.us/posts/volatile_mediums/ Fri, 29 Jan 2021 23:36:00 -0500 joel@beckmeyer.us (Joel Beckmeyer) https://beckmeyer.us/posts/volatile_mediums/ <p>I&rsquo;ve recently been thinking a lot about storage mediums [1] &ndash; especially in the long term.</p> <p>Technology has made a lot of progress.
Digital storage mediums started out only being able to store <a href="https://en.wikipedia.org/wiki/Tape_drive">224KB on a tape drive</a> for an average lifetime of <a href="https://blog.storagecraft.com/data-storage-lifespan/"><em>up to</em> 30 years</a>. Now, we can store terabytes of data on hard drives and solid-state drives. However, no one ever really answered the question about long-term storage.</p> <p>(Note: the following is based on the assumption that the storage medium is only being used to make backups or archive data. The device itself could be unplugged and stored when no backup is in progress.)</p> <p>Even though <em>theoretically</em> hard drives could store data for 20+ years, random bit flips, drive failure, etc. all make hard drives too volatile an option. As always, redundancy takes away some of these issues.</p> <p>SSDs are in an even worse position: they cost significantly more than hard drives per TB right now, and last I heard, there were still issues with bit fade when unpowered.</p> <p>CD/DVD is sounding a lot better, but there are some serious issues here too. Variable quality directly impacts the storage lifetime. Physically storing the discs is a lot more risky since the disc itself doesn&rsquo;t have as much built-in protection as a hard drive or SSD has. You&rsquo;ll need a much larger quantity to store the terabytes of data that you can easily dump on one hard drive. And finally, life expectancy is still fairly low &ndash; while manufacturers of recordable discs (the &lsquo;R&rsquo; in CD-R, DVD-R, etc.) claim life expectancies of 100-200 (!) years under optimal conditions, others are <em>slightly</em> more conservative, <a href="https://www.clir.org/pubs/reports/pub121/sec4/">giving an estimate of 30 years</a>. Oh, and remember how I mentioned this is for recordable discs? That means they&rsquo;re single write. The rewritable (RW &ndash; CD-RW, DVD-RW, etc.)
discs have even lower life expectancies.</p> <p>All in all, humanity has not gotten very far with the digital storage medium. All of these life expectancies have an inconsequential variance when we zoom out to the century view of history.</p> <p>[1] And no, I&rsquo;m not talking about the kind you pay to see your dead great-great-aunt to figure out if you&rsquo;re actually related to George Washington.</p> <p><em>This is intended to be the beginning of a learning series/personal study on the issues surrounding information preservation, digital permanence, and their related issues.</em></p>