Once upon a time, during the dark ages, we had to run several shell commands–like savages–to get the Plex plugin in TrueNAS (or FreeNAS, if you go back that far) to update. One had to fetch the tarball, then unpack it, move to the right directory, change ownership, and finally run the script! It was quite a pain when Plex was coming out with a new update every week (or so it seemed), and it got to be more annoying than productive.
Let’s start by assuming that you know how to access your TrueNAS jails. On the Jails dialog, open Plex, then DO NOT CLICK “Update”; click “Shell”. Once you’ve got your root prompt in the shell, download the updater by invoking the following command: fetch https://raw.githubusercontent.com/mstinaff/PMS_Updater/master/PMS_Updater.sh
From here, you can just run the shell script with sh PMS_Updater.sh
Automating Plex updates on TrueNAS with cron jobs
To set up a cron job on your TrueNAS installation, navigate to the Tasks > Cron Jobs dialog. Click the “ADD” button to create a new cron job and give it a descriptive name such as “Plex update”. Then, enter the following in the “Command” field:
/usr/local/bin/iocage exec [plexjail] /bin/sh /usr/local/PMS_Updater/PMS_Updater.sh -r -a -v
Substitute the name of your Plex jail for [plexjail]. Mine is just called “plex“. The -r flag will keep your installation clean by removing the older packages before installing the new one. The -a flag automatically updates to the newest version without user intervention. Finally, the -v flag runs the script in verbose mode, so you’ll have a log available just in case anything goes wrong.
Set the “Run As User” field to root, and set your preferred schedule. I run mine weekly on Sunday nights. From here, make sure your job is enabled, and click “SAVE”. Now, you shouldn’t have to make another manual Plex update again!
I picked up one of those super-cheap ~$200 Windows 10 netbooks from Best Buy a while back because I wanted something inexpensive to keep at the shop for incidental tasks like programming microcontrollers or burning SD cards for Raspberry Pi. Unfortunately, someone at Microsoft boasted that Windows 10 could be installed on a 32GB storage drive, so manufacturers like Asus put exactly 32GB in their discount netbooks, leaving very little space (after updates, usually less than 5GB) to install applications.
Fortunately, Windows 10 comes out of the box with the ability to change the default storage locations for the library folders and installation location for apps installed via the Windows Store. However, I use a lot of applications that aren’t available in the Store and I also require a fair bit of space for screen capture videos when I’m walking through a project, so I’m going to need some extra storage space. At least the little Asus that I selected has a MicroSD card slot so I can just grab a 128GB unit and leave it inserted. Windows treats it like any other removable media, but be warned: the MicroSD read/write speed isn’t anything you’re going to write home about! Applications are going to load more slowly, but in most cases will run just fine from RAM. (Writing that previous sentence gave me weird flashbacks of the 8- and 16-bit eras when we often ran applications directly from a floppy and experienced the associated slowdowns.)
Install Windows Applications To SD Card
In most cases, Windows application installers will offer the option to select the install folder. For these, I just have the folder structure of the C: drive root copied to my MicroSD card, notably the Program Files and Program Files (x86) folders, so I just change the drive letter and everything installs like normal. However, many newer applications take advantage of the Windows User/AppData folder to store local data. One notorious example is Fusion360, which installs to the AppData folder and offers no option to do otherwise! For these exceptions, we’ll need to create a symbolic link from the C: version of the AppData folder to the D: drive location.
Use Symbolic Link To Redirect Folders
First off, using a symbolic link to redirect system folders to another location is generally bad practice because it can expose the machine to symlink attacks, but considering we’ve been backed into a corner by arrogant developers and clueless manufacturers, we’re going to need to pull this trick out of the bag. I wouldn’t use this kind of workaround on any mission critical systems, but it should be just fine for this little auxiliary machine. (Technically, Windows shortcuts are symbolic links, so we’re not doing anything too weird. We’re just forcing Windows to use a different storage location for something it prefers to have on the main storage device). The first step is to create a directory on the D: drive that will hold our AppData (since it’s a hidden folder and I’m the only user, I’ll just put it in D:\AppData). From here, just copy the contents of the AppData directory over to the new location and delete (yes, I said delete) the original.
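The copy-and-delete step can also be scripted from an elevated Command Prompt. Here’s a sketch of how that might look–the username and the D:\AppData target are assumptions from my setup, so adjust them for yours (the /XJ flag matters: AppData contains junction points that would otherwise send robocopy in circles):

```
rem Copy AppData to the SD card, preserving attributes and permissions
rem /E = include subfolders (even empty ones), /COPYALL = copy all file info,
rem /XJ = skip junction points to avoid infinite recursion
robocopy "C:\Users\user\AppData" "D:\AppData" /E /COPYALL /XJ

rem Remove the original so mklink can create the link in its place
rmdir /S /Q "C:\Users\user\AppData"
```

Run this while no applications are using AppData (ideally from a fresh sign-in or another admin account), since files locked by running programs won’t copy or delete cleanly.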
Create Symbolic Links With Windows Command Line
Windows comes with the mklink command to create symbolic links, but you have to use the Command Line terminal to invoke it. Open an elevated command prompt (one with Admin privileges) by pressing Win + X and selecting the appropriate option from the list. The syntax for the command is as follows:
mklink /switch <link> <target>
So, to make C:\Users\<user>\AppData point to the new location on the D: drive, we’ll invoke the following command:
mklink /D "C:\Users\<user>\AppData" "D:\AppData"
Once the link is created, as far as Windows is concerned, the two locations are the same place. If an application needs to access the folder, it will seamlessly connect to the location on D:. As such, Fusion360 will run without a hitch (although it will load more slowly due to the reduced read/write speed)!
Unfortunately, this trick does not work to upgrade Windows 10 to 11, so this machine will be stuck in the late 2010s forever (or until I decide to install Linux on it…again).
As much as I love to dig out one of my vintage laptops to get the full nostalgic retro gaming experience, sometimes it just isn’t very practical to fire up Windows 95 for a quick round of JezzBall. Unfortunately, the newer 64-bit versions of Windows won’t run these old 16-bit software relics. However, unlike Apple, Microsoft actually values backwards-compatibility. (Of course, this is mostly by necessity considering how many enterprises are running legacy software as well as due to the support of independent developers regularly bending the OS to their will, instead of the other way around.)
Virtue signalling aside, as part of my recent two-foot leap back into the daily-driver Windows world, I wanted to revisit one of my favorite casual diversions of the 16-bit era: Microsoft Entertainment Pack.
The only problem, of course, is that Entertainment Pack is a 16-bit application and Windows 11 won’t run 16-bit applications out of the box. Enter otvdm (forked from winevdm). Based on Wine, otvdm is a compatibility layer that adds the ability to run 16-bit Windows binaries in 64-bit exclusive versions of Windows much like the new Windows subsystems for Linux and Android. (At this point, Windows is almost a Swiss Army knife of computing, taking everything I’ve enjoyed from the Linux world without having to spend a few hours either setting up new hardware or reconfiguring everything after an update.)
How To Run 16-bit Windows Applications In 64-bit Windows
To use otvdm, download the latest release from GitHub, then unzip to the directory of your choice. From here, you can simply use otvdm as a portable installation by dragging and dropping your 16-bit application onto the otvdm.exe binary, and you’re all set!
If, like me, you prefer to have a more seamless integration into Windows, you can install otvdm with one of the included installer shortcuts. If you prefer to not show the console window while your application runs, use the “install (no console)” shortcut. At this point, 16-bit Windows applications (and their installers, for that matter) will run just like any other Windows application! Now, I’ll never get any work done ever again.
I’ve been using the Focus FK-2001 for about a week now, and I’m enjoying it. Coming from my Matias Tactile Pro, it’s a pretty easy transition (both using Alps switches and all). I do really enjoy the Tactile Pro, but I wanted something with Windows keys.
I also wanted something with a vintage style, but I didn’t like the prices for a Model M (most Model M keyboards with Windows keys are rubber dome switches anyway, and not the superbly clicky buckling spring switches that the early Model M units employ). Anyway, I now have me a genuine vintage clicky keyboard that is a treat to type on.
Unfortunately, it’s rather…dirty. Time for a deep clean!
Opening up the case, we find the keyboard driver chip. Unfortunately, I haven’t been able to locate any information on this Intel microcontroller. It’s really just an object of curiosity right now, though.
Ran the keycaps through the ultrasonic cleaner for about 30 minutes and lubricated the switches before putting everything back together. These are all the keys with wire stabilizers reinstalled.
Gave the case a good wash with soap and water, too. Of course, I forgot to put the stabilized keys back together first, so I had to take it apart again 🙄
Now it’s complete, and I only messed up one key putting it back together!
I’m one of those weird people that doesn’t really use the standard library folders that come in Windows (or MacOS, for that matter), but while I can easily customize Mac’s Finder menu to not list the libraries I don’t use, customizing the Windows Explorer menu is less straightforward. I like opening new instances of Explorer to the Quick Access view where I have all my attached drives, commonly used libraries (Downloads, Desktop, and Documents), and mapped network drives available at a glance. I also have locations for files synced across my devices and the Recycle Bin pinned to the menu, giving me ahem quick access to these commonly-used locations. Because of this setup, the stock “This PC” listing is redundant–listing many of the same locations twice and taking up precious screen real estate. As such, I wanted to customize this menu as much as possible with the hope of getting it similar to my Mac’s Finder sidebar menu.
Editing the Registry
The Windows Registry holds all the power under the hood in the Windows ecosystem. The problem is that it isn’t always clear which registry key values affect which parts of the OS. Fortunately, a little Google Fu is all that is needed to find the appropriate changes to make. First, open the Registry Editor by launching regedit from the Run (WIN+R) dialog. In the Registry Editor, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\FolderDescriptions, then locate the appropriate folder according to the GUID key list below and expand the folder to show its nested PropertyBag folder.
With the appropriate PropertyBag selected, right-click the ThisPCPolicy value in the main pane and select Modify. Change the value from Show to Hide, then click OK. Conversely, to re-show a particular folder, just change the data back to Show.
In some cases, ThisPCPolicy doesn’t have a defined value in the PropertyBag. In these cases, right-click on the main pane and select New > String Value, naming it ThisPCPolicy. Then set the value appropriately.
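If you’d rather script the change than click through the editor, the same edit can be made with reg.exe from an elevated prompt. This is a sketch–{GUID} is a placeholder for the folder’s GUID from the list, not a real value:

```
rem Hide a folder from "This PC" (replace {GUID} with the target folder's GUID)
rem /f overwrites the value if it already exists; use /d Show to reverse it
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\FolderDescriptions\{GUID}\PropertyBag" /v ThisPCPolicy /t REG_SZ /d Hide /f
```

Because reg add creates the value if it’s missing, this also covers the case above where ThisPCPolicy isn’t defined yet. You may need to restart Explorer (or sign out and back in) for the change to take effect.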
Enabling The Libraries Folder
For convenience’s sake, I do like to have access to the Libraries–just not at the top of the list. You can enable the Libraries folder in the sidebar menu through Folder Options in Windows Explorer. The check box is located under the “Navigation Pane” header below the View tab.
Putting things together like this makes Windows Explorer rather usable at this point!
Windows Terminal has proven to be one of my favorite additions to the PC world in a while. Coming from the Linux and Mac paradigm for the last decade-and-a-half, I felt like I needed a capable terminal emulator if I was going to be running a Windows machine as a daily driver again. Of course, the Windows Subsystem for Linux (WSL) just makes me giggle with glee at being able to run Ubuntu at the hypervisor-level instead of using virtualization software like VMware to do the handful of specialized tasks that I would easily perform in a Mac/Linux terminal window.
Of course, if you’re going to have a modern terminal emulator, you need to be able to customize it. Under MacOS, I was rocking a classic green-on-black look that reminded me of playing Zork on an Apple II and was great for getting that “digging around under the hood” vibe. I’ll probably bring that look back for one of my specialized terminal implementations (maybe the dedicated Telnet profile I’ve set up for dialing into the occasional BBS), but for the Ubuntu profile, I wanted something that evoked the orange and purple color scheme that I’ve come to associate with my distro of choice. These colors aren’t exactly the official “on brand” colors that Canonical uses, but they get the idea across.
In Windows Terminal, you can access settings.json from the Settings tab and add the following data to the schemes section:
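Since the original snippet doesn’t appear here, this is a representative scheme object to drop into the schemes array–the name has to be “Ubuntu” to match the profile setting below, but the hex values are my approximations of Canonical’s orange-and-aubergine palette, not the author’s exact colors:

```json
{
    "name": "Ubuntu",
    "background": "#2C001E",
    "foreground": "#EEEEEC",
    "cursorColor": "#E95420",
    "black": "#2E3436",
    "red": "#CC0000",
    "green": "#4E9A06",
    "yellow": "#C4A000",
    "blue": "#3465A4",
    "purple": "#77216F",
    "cyan": "#06989A",
    "white": "#D3D7CF",
    "brightBlack": "#555753",
    "brightRed": "#EF2929",
    "brightGreen": "#8AE234",
    "brightYellow": "#FCE94F",
    "brightBlue": "#729FCF",
    "brightPurple": "#AD7FA8",
    "brightCyan": "#34E2E2",
    "brightWhite": "#EEEEEC"
}
```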
Once you get this inserted, you should be able to select “Ubuntu” from the Color Scheme drop-down under the Appearance tab in your Ubuntu profile, and you’ll get a terminal that looks like this:
Now that we’ve got the colors all set, we just need to add a custom icon to complete the look. I just grabbed a transparent *.png of the Ubuntu logo and converted it to an *.ico file. I’m weird in that I don’t really use my library folders the way they’re intended. I keep photos on my server, so I don’t have much use for the Pictures library folder. As such, I use the folder for “system” images like custom icons, profile images, and wallpapers. I just dropped the icon into the folder and pointed my Windows Terminal profile to it. Now I’ve got a terminal implementation that reminds me that I’m running a separate operating system on top of Windows and isn’t just another basic grey/white-on-black (that look is reserved for CMD and DOSBox).
There honestly isn’t a lot that I can say about Resident Evil 3: Nemesis that isn’t a rehash of my own thoughts and experiences on the first Resident Evil game. I never got into the franchise when it was new as I’ve never been very interested in zombie fiction or survival horror in general, so I didn’t play any of the Resident Evil games until very recently. I’ve actually owned a copy of RE3 for many years, and–in fact–I really don’t recall how I came across it. I certainly didn’t purchase it myself, so I likely somehow picked it up during some “mergers and acquisitions” in the mid aughts. It was only when I played through the first Resident Evil on the Playstation Classic that I got a taste for what the series is all about: it’s a classic text-based puzzle and resource-management game that just happens to take place in a pre-rendered graphics zombie apocalypse.
Resident Evil 3 is somewhat less linear and certainly more action-oriented than the original. Of course, I can’t tell for certain if this is a trend or just a difference between the two games as I have not played Resident Evil 2 yet. In RE3, you take control of Jill Valentine (from the first game) in an effort to escape the zombie-infested Raccoon City before it’s liquidated by a nuclear weapon. During the course of your escape, Jill is pursued and ambushed by a gigantic mutant known as Nemesis. There’s not much of an explanation as to who or what Nemesis is, except that he just looks like a rejected costume design from the 1994 Frankenstein movie. Unfortunately, these scenes with Nemesis are probably the most interesting parts of the game as the rest consists of tedious fetch quests, occasional jump scares, and very few actual puzzles. There are some interesting timed decision sequences where the player’s choice determines how the next part of the game plays out, but these are too few and far between to make up for the overwhelming boredom in the meantime.
Unfortunately, the difficulty curve doesn’t fare much better than the rest of the gameplay. The original Resident Evil had what felt like a nicely rising difficulty curve, but Nemesis seems to have an inverted curve where the game gets infinitely easier the closer you get to the end. The first major challenge in the game is the initial encounter with Nemesis, where Jill is equipped with only a pistol and has to duck and dodge to avoid being grabbed (impaled by Nemesis’s tentacle arm?) and immediately killed. This title introduces a new dodge mechanic that has to be practiced to get right, but there are so few encounters before Nemesis that a player has no real opportunity to learn it before the boss fight. It’s lazy and sloppy design on Capcom’s part, causing more game overs than I care to recall. After that, though, the game is smooth sailing–especially after finding the grenade launcher.
In a lot of ways, Nemesis reminds me of the 1999 film Universal Soldier: The Return wherein Jean-Claude Van Damme is chased around the suburbs by the cartoonishly superhuman Bill Goldberg in much the same way Nemesis pursues Jill Valentine. Each encounter with Goldberg/Nemesis consists of a brief battle ending with JCVD/Valentine getting the better of their enemy through either superior firepower or–occasionally–a clever trap. Unfortunately for Capcom, I feel like Universal Soldier had the more entertaining sequel. Granted, there are a few novel components to the game–like when Jill gets poisoned and her new friend/ally/potential romantic interest Carlos has to locate an antidote–but the over-reliance on (occasionally branching) fetch quests instead of puzzle mechanics drags this game down to the bin of mediocrity. It’s an important piece of the overall Resident Evil mythology, but unless you’re a big fan of the franchise, I would stick with the original.
When I was a kid, I was obsessed with everything that had to do with espionage and spycraft–that whole fantastic world of cloak and dagger–primarily because of James Bond. I was a Bond fan from a very early age because the films were routinely broadcast on TBS, so I got to experience and enjoy them on a fairly regular basis. My interests branched out from there into things like those children’s science experiment kits–the ones that would show how invisible ink or fingerprinting or Morse code worked–and there was trading cyphers and setting up “treasure hunts” with my friends, coming up with clues and hiding them around the house, anything that would allow me to pursue the fantasy of the secret agent. Because of this, I knew about Mission: Impossible (the television series, as this was well before the Tom Cruise film), but it sort of existed as a cultural meme–I wasn’t really intimately familiar with it like I was James Bond, as it wasn’t something that was on my “cultural radar” at the time (if it didn’t come on TBS or if it wasn’t a cartoon, it practically didn’t exist in my world).
Fast-forward a couple of years, and I’m browsing the Nintendo aisle at Toys R Us when I find that there is a Mission: Impossible game for the NES. Of course, I still know nothing about the franchise except that it’s basically an American James Bond, full of action and spycraft, and I knew that I had to experience it! Like so many other kids of the era, I was completely sold on the game by the cover art alone. It screams action and intrigue! However, apparently unlike many of my peers, I actually like the game! It’s an action game, but it’s not super actiony. It’s actually a fairly “slow” game, incorporating more puzzle-solving and exploration elements along the lines of The Legend of Zelda than the twitchy platforming of Ninja Gaiden. The game is even projected top-down, so it is very much like Zelda except with spying–which makes it awesome. On top of the puzzle-solving elements, you have a character select mechanic like one of my other favorite titles of the era, Teenage Mutant Ninja Turtles, which I thought was awesome because I could play as my newly-adopted favorite character Nicholas Black. Of course, I had no idea that hot-swapping characters was part of the game’s strategy, I just thought it was awesome that one of the characters was a “master of disguise”, was an Australian who carried boomerangs (Crocodile Dundee was one of my absolute favorite movies at the time), wore glasses like I did, AND WAS A FREAKING SECRET AGENT!!! Of course, I never made it very far with Nick by himself, and I learned to begrudgingly use Grant and (ugh!) Max for specific actions in the game.
Despite the difficulty of the game, I always enjoyed playing it. There’s a very focused puzzle-solving mechanic to the sprawling level designs, and I feel like that helped keep my interest in the game piqued over the years. The game is definitely a puzzle adventure first and an action game second, much like its Konami predecessor Metal Gear (M:I was published in the USA by Ultra Games, an “alternate label” that Konami used to get around restrictive quotas set by Nintendo). Mission: Impossible definitely borrows from Hideo Kojima’s masterpiece, but does so in a way that doesn’t feel like a cheap copy. The gameplay is slower and more deliberate with fewer boss battles or run-and-gun opportunities, but you get a sense of the pedigree that M:I inherits: it’s an interesting mix of Metal Gear, The Legend of Zelda, and Teenage Mutant Ninja Turtles that shines as its own clever, if underappreciated, title on the NES.
As much as I personally enjoy the game, it seems that many of my peers do not like the way that Mission: Impossible plays. It is a difficult game, but it is generally not an arbitrarily difficult game in the way that titles like Ghosts n’ Goblins or Ninja Gaiden are. The difficulty of Mission: Impossible lies in its tight tolerances for success–the need to proceed with precision and finesse rather than nimble reflexes–much like Zelda II. Enemies generally do not respawn over the course of a level, and there are ways to navigate around most encounters without taking any damage. Unbeknownst to my younger self, the biggest strategic advantage in the game is knowing which character to use when–each has his specific skills and abilities that make him uniquely qualified to proceed through specific areas. In this way, the game plays more like The Lost Vikings. If you approach the level with the mindset of getting all three agents through the level alive (rather than as three chances for one character to make it through), then the connection to the game’s source material becomes more apparent. The Impossible Mission Force has to work together to complete a mission. Grant is the electronics expert who can break locks, Nick can use his disguises to sneak past impassable gauntlets, and Max is the marksman who can take out enemies before they see him! The game requires practice to get each level’s “choreography” right, but it doesn’t punish you too badly for failure. There are unlimited continues, though you are reset to the beginning of the level (which can be quite frustrating during very long sequences like levels 3 and 6), and the level design rewards exploration despite the dangers faced. Admittedly, to finally finish the game, I used save states on my NES Classic Edition.
The game is still quite difficult, but this took a little bit of the sting out of trying to complete the final level (which, I will admit, suffers from the worst game-lengthening cop out: the “Uh-oh, now you have to play this super difficult level all over again!” trope), and allowed me to continue to enjoy a childhood favorite since adulthood tends to rob me of that precious practice play time.
Of course, every good spy thriller needs a chase sequence, and Mission: Impossible does not disappoint! There are two “chase” levels that evoke the action one would come to expect in such a genre–one in a speedboat and one skiing downhill–and they provide a deliciously novel break from the slower-paced stealth action of the main game. I would often jump to these levels using their respective passwords when I felt like a quick arcade-style distraction without commitment–great for commercial breaks or between homework assignments! These different gameplay elements help to complete the feel of a great piece of spy fiction while Jun “Dog-Man” Funahashi’s banging soundtrack reminds the player that this is definitely a Konami title.
Mission: Impossible is not Metal Gear, nor does it really pretend to be. The latter is definitely the OG granddaddy of the stealth action genre, but M:I stands on its own as a fine entry in the Konami catalog. It’s a cleverly designed homage to spy fiction, and honestly plays more into those tropes than contemporary platform action games based on the James Bond franchise. There are puzzles to solve, chases to be made, sneaking to be done, and worlds to save. If you’re a fan of either Metal Gear or The Legend of Zelda, I would give it a shot. You might be surprised by this undercover gem.
As I explained in the How Hard Could It Be? video, the first objective in getting sound out of a record player is amplifying the phono-level signal from the tonearm (about 5mV) up to line-level (1V). This pre-amplifier stage uses a low-noise operational amplifier to boost the signal to the appropriate level. For Project Califone, I’m building the preamp stage using a Texas Instruments NE5532 op-amp chip. Of course, I was having a little bit of trouble getting the device to work because I neglected to realize that I needed to apply both a positive and a negative voltage to the chip in order for it to function.
After realizing my mistake, I sourced a 10:1 AC-AC transformer that I could use for prototyping purposes. From the wall, I can get down to a manageable 12VAC and with a simple rectifying circuit, split that into +/-12VDC. I will have to adjust the power supply circuit to account for the 30VAC output from the transformer already installed in the phonograph, but that is a problem for another day!
At this point, I have a minimum-viable amplifier circuit for a single audio channel. Note in this schematic that there is no resistance on the input signal, so there is effectively no gain control at this point. The signal is horrendously over-driven–and when piped through the main amplifier becomes so over-modulated that even Luigi Russolo would shiver–but it works! From here, it’s a matter of adding some resistors to control the gain before feeding the output to a single-knob tone control, the second pre-amp stage, then the main amplifier.
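As a sanity check on those numbers: getting from a ~5mV phono signal up to 1V line level takes a voltage gain of about 200 (roughly 46dB). For a non-inverting op-amp stage, the gain is 1 + Rf/Rg, so here’s a quick sketch of the math–the resistor values are illustrative picks, not the ones from my build:

```python
import math

def noninverting_gain(rf: float, rg: float) -> float:
    """Voltage gain of a non-inverting op-amp stage: 1 + Rf/Rg."""
    return 1 + rf / rg

target = 1.0 / 0.005                            # 1V line level from 5mV phono = 200x
gain = noninverting_gain(rf=100_000, rg=510)    # 100k feedback over 510R -> ~197x
gain_db = 20 * math.log10(gain)                 # express the gain in decibels

print(f"target: {target:.0f}x, stage gain: {gain:.1f}x ({gain_db:.1f} dB)")
```

Note that a real phono preamp also needs RIAA equalization in the feedback network, so a flat gain like this is only the starting point.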
Like many of our hosts, Matthew is an aficionado of vintage technology. In this project, Matthew is completely rebuilding a Califone 1400 series portable phonograph from the early 1980s to improve its playback quality. The first obstacle he has to overcome is rebuilding the preamplifier circuit to bring the raw phono signals from the tonearm up to RIAA line level, but he’s having a little trouble with the op-amp chip. How hard could it be to build a simple preamp from scratch?
I had taken a bit of a hiatus from production (as I discussed in the last Surf Report), due to both a sense of being overwhelmed by my new day job and a general lack of enjoyment in the process. Building things became a job, and it stopped being enjoyable for a time. 2021 gave me an opportunity to reflect on my own goals, and I ended up scratching hundreds of projects from my list that I knew I would either never finish or had no interest in pursuing. I’m still culling that list, but the Califone stands firm. I’m still working on it, but I’m doing it slowly and on my own terms.
In an effort to get me back in the rotation on element14 Presents, the Producers and I agreed on this smaller-format video, showing a chunk of the project in the detail that I like to provide. It’s part of a new Friday series that highlights more conceptual projects, asking How Hard Could It Be? and following the trials that go into a simpler idea. In this case, I needed to build a phono-line preamplifier for the record player from scratch, and I made a fatal error along the way. The idea is to highlight how everyone makes simple mistakes and that it’s okay to ask for help.
The video was a nice transition back into work-for-hire and a way for me to warm myself back up for the next stage of the project. Now that I have a basic design for a power supply and preamp, I can get started on breadboarding a class-D main amplifier so these parts won’t have to spend another year on the shelf!