<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:a10="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Smalls.Online Blog</title>
    <link>https://smalls.online/blog/rss</link>
    <description>The blog for Smalls.Online.</description>
    <language>en-US</language>
    <copyright>Copyright © 2023 Tim Small</copyright>
    <a10:author>
      <a10:name>Tim Small</a10:name>
      <a10:uri>https://smalls.online</a10:uri>
    </a10:author>
    <lastBuildDate>Sat, 14 Mar 2026 05:58:12 GMT</lastBuildDate>
    <item>
      <guid isPermaLink="false">3d85cb45-36d8-49bd-b792-5ab5a55b5264</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>The Great OCW.Social Migration</title>
      <description>Why and how I migrated OCW.Social, the private Mastodon server I run, off of Azure and onto a hybrid on-prem/cloud Kubernetes (k3s) setup: the motivation (Mastodon's appetite for memory and rising Azure costs), the new infrastructure (three cloud VMs plus a Raspberry Pi 5, connected over Cloudflare WARP), and the two big post-migration issues I had to fix: database corruption and painfully slow container networking.</description>
      <source>Smalls.Online Blog</source>
      <pubDate>Sat, 03 Aug 2024 19:34:00 GMT</pubDate>
      <a10:link href="https://smalls.online/blog/entry/the-great-ocw-social-migration" />
      <a10:content type="html">&lt;p&gt;You might be wondering... Wtf is OCW.Social? It's the premier private Mastodon server for the cool kids from a late 2000s Nintendo Wi-Fi Connection forum OneClickWifi.&lt;/p&gt;
&lt;p&gt;I've been running and maintaining the server since November 2022. For the longest time it was hosted on Azure but, as of recently, it's hosted on a hybrid on-prem/cloud setup. It's been a doozy.&lt;/p&gt;
&lt;h2 id="why-migrate-off-of-azure"&gt;Why migrate off of Azure?&lt;/h2&gt;
&lt;p&gt;So why did I make the decision to migrate off of Azure? Well... Long story short: I can't afford it anymore.&lt;/p&gt;
&lt;p&gt;Mastodon is a beast to run, but not on the CPU side of things. You can start off with a small number of virtual CPUs (vCPUs) and a modest amount of memory, but, once you start federating with many other servers that use ActivityPub, &lt;strong&gt;it will not be enough&lt;/strong&gt;. Is it processing power that can't keep up? Nope. It's memory. You need a lot of memory. Like &lt;strong&gt;a lot of memory&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="mastodon-gobbler-of-ram"&gt;Mastodon: Gobbler of RAM&lt;/h2&gt;
&lt;p&gt;You will have to segment out the different processes if you want a good experience with performance and uptime. You've got the &lt;a href="https://sidekiq.org/" rel="noopener noreferrer"&gt;Sidekiq queues&lt;/a&gt; that process all of the background jobs: Incoming posts, outgoing posts, media conversion, link crawling, scheduled maintenance jobs, and more. You can definitely run them all in one process, but it will eventually get backlogged and cause a poor user experience. So the three main queues (&lt;code&gt;default&lt;/code&gt;, &lt;code&gt;push&lt;/code&gt;, and &lt;code&gt;pull&lt;/code&gt;) essentially need to be spread out across multiple processes in different priority orders. All of those queues will need to have, at a minimum, &lt;code&gt;512 MB&lt;/code&gt; of memory allocated to them. Then there's the frontend web app, which needs, at a minimum, &lt;code&gt;700 MB&lt;/code&gt; of memory allocated to it. &lt;strong&gt;They will all run out of memory and need to be restarted.&lt;/strong&gt;&lt;/p&gt;
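&lt;p&gt;To make that concrete, here's a rough sketch of what splitting the queues can look like. This is a generic example based on Mastodon's usual scaling advice, not my exact setup, and the concurrency values are placeholders:&lt;/p&gt;

```shell
# Sketch: one Sidekiq process per queue ordering, so no single queue
# can starve the others. Run each line as its own process (its own
# container, pod, or systemd unit), each with its own memory limit.
bundle exec sidekiq -c 25 -q default -q push -q pull
bundle exec sidekiq -c 25 -q push -q default -q pull
bundle exec sidekiq -c 25 -q pull -q default -q push
```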
&lt;p&gt;That presents a problem with keeping the frontend of Mastodon up without interruption. The backend processes/background jobs can restart all they want, but the frontend going down can be a massive pain in the ass for everything (Not just the users).&lt;/p&gt;
&lt;p&gt;I strive to keep services up and running without interruption or slowdowns. When either happens, I feel bad. Not because others will really care, but rather because I don't like it happening.&lt;/p&gt;
&lt;h2 id="so-why-no-more-azure"&gt;So why no more Azure?&lt;/h2&gt;
&lt;p&gt;That brings us back to the original question. Why migrate off of Azure? It's the same answer, but there's more context: &lt;strong&gt;Running Mastodon in Azure required a lot of compute resources to satisfy Mastodon's craving for memory.&lt;/strong&gt; For pretty much every cloud provider, VM sizes scale up vCPU count and memory capacity together, so you can't get more memory without also paying for more vCPUs. Oh! And cost. Can't forget about cost.&lt;/p&gt;
&lt;p&gt;This kinda leaves me in a conundrum: I'm paying for processing power that's going completely unused, but I'm struggling to keep those costs down because &lt;strong&gt;I need the memory&lt;/strong&gt; and I'm still scraping by with the amount of memory I can use. Of course, there's also the other meters I'm being charged for. Like load balancer traffic, storage, and whatnot.&lt;/p&gt;
&lt;p&gt;For a while I could afford it, but maaaaan... The cost of living has gone up. Was it a bad idea to run Mastodon in a Kubernetes cluster in Azure? Not at all. I saved more by cramming everything into a Kubernetes cluster than I would have by hosting it directly on VMs. Not just on money, but sanity too. It meant that I didn't have to maintain the underlying operating systems.&lt;/p&gt;
&lt;h2 id="what-do-it-be-now"&gt;What do it be now?&lt;/h2&gt;
&lt;p&gt;Like I said, our Mastodon server is now a hybrid on-prem/cloud infrastructure. Some hosted in my own network and some hosted in Linode. The costs are much lower. Currently there's only one on-prem server, but I intend to expand that.&lt;/p&gt;
&lt;p&gt;What hardware am I using on-prem though? A Raspberry Pi 5 with 8 GB of RAM. It has plenty of compute power, with its quad-core ARM Cortex-A76 processor, minimal power draw, minimal noise, and it's cheap. In the cloud? I've got three VMs with varying compute and memory.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/raspberry-pi-5.jpg" class="img-fluid" alt="The Raspberry Pi 5 I'm using." /&gt;&lt;/p&gt;
&lt;p&gt;It's still a Kubernetes cluster too. That was going to be a given. I'm utilizing the excellent &lt;a href="https://k3s.io/" rel="noopener noreferrer"&gt;k3s distribution&lt;/a&gt;. The three cloud VMs are the control plane nodes running &lt;code&gt;etcd&lt;/code&gt;, with the on-prem server joining the cluster as an agent. That allows for high availability and spreads the various processes across both the cloud and on-prem.&lt;/p&gt;
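&lt;p&gt;For anyone curious, bootstrapping that kind of topology looks roughly like this per the k3s high-availability docs. The token and addresses are placeholders, and this isn't my literal provisioning script:&lt;/p&gt;

```shell
# First cloud VM: start the cluster with embedded etcd.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Remaining cloud VMs: join as additional control-plane nodes.
curl -sfL https://get.k3s.io | K3S_TOKEN=NODE_TOKEN sh -s - server --server https://FIRST_NODE:6443

# On-prem node (the Raspberry Pi): join as an agent only.
curl -sfL https://get.k3s.io | K3S_TOKEN=NODE_TOKEN sh -s - agent --server https://FIRST_NODE:6443
```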
&lt;p&gt;I'm currently connecting all of these together with Cloudflare WARP's private network capability. I don't intend for that to be the long-term solution, and I'm working on creating a private network with just WireGuard. That being said, it 100% gets the job done and latency between the nodes is within the &lt;code&gt;30-40ms&lt;/code&gt; range. I'm also connecting all of the public-facing sites with &lt;a href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/" rel="noopener noreferrer"&gt;Cloudflare Tunnels&lt;/a&gt;, so I'm not directly exposing my home network to the internet.&lt;/p&gt;
&lt;h2 id="how-did-the-migration-go"&gt;How did the migration go?&lt;/h2&gt;
&lt;p&gt;Initially the migration went fine. I changed the DNS for the Mastodon server to point to a maintenance page so that nothing would interact with the database. Then I migrated the database over to the new infrastructure (Over &lt;code&gt;30 GB&lt;/code&gt; of data to transfer) and started provisioning the Mastodon containers.&lt;/p&gt;
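&lt;p&gt;The post doesn't spell out the exact transfer method, but a plain dump-and-restore of a Mastodon Postgres database looks something like this (hostnames and database names are placeholders):&lt;/p&gt;

```shell
# On the old infrastructure: take a compressed logical dump.
pg_dump -Fc -h OLD_DB_HOST -U mastodon mastodon_production -f mastodon.dump

# On the new infrastructure: restore it into a freshly created database.
pg_restore -h NEW_DB_HOST -U mastodon -d mastodon_production --no-owner mastodon.dump
```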
&lt;p&gt;It seemed to be fine for a bit, but there were some pretty nasty issues that popped up. There was the typical temporary scaling up of the Sidekiq queues to play catch-up with the servers we federate with, but things like that pale in comparison to two issues that took a while to resolve.&lt;/p&gt;
&lt;h3 id="database-corruption"&gt;Database corruption&lt;/h3&gt;
&lt;p&gt;The first major issue was database corruption. I can't remember all of the details about how it got corrupted in the first place, but it was a mess trying to fix it. I woke up the morning after turning the cluster into a highly available cluster and noticed all of the alerts that had sprung up while I was asleep: the server was down. I quickly started looking at the logs and saw that the volume for the database had corrupted. It wasn't just the primary database; the replicas were corrupted too.&lt;/p&gt;
&lt;p&gt;Ah! No biggie! I was doing volume snapshots and backing those up, so it should be fairly easy to recover. Right? &lt;strong&gt;Right!?!&lt;/strong&gt; Yeah, that's going to be a hard no. Each of the volume snapshots I had was corrupted. I was legit panicking.&lt;/p&gt;
&lt;p&gt;The server was down for the majority of the day and I was frantically trying every possible method of repairing/recovering the Postgres database that I knew. None of them worked. I was almost at the point of calling it a loss and having to start fresh, but that would be devastating to both the content we've posted and our state in federating with other servers.&lt;/p&gt;
&lt;p&gt;Wanna know how I was able to recover it? In a Hail Mary attempt, I created a new database and forcefully copied the data from the failed database volume into the new one. &lt;strong&gt;That actually fixed it.&lt;/strong&gt; I shit you not, that got the database into a functional state again. I brute forced my way into recovering the database.&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/db-corruption_fixed-message.jpg" class="img-fluid" alt="The message I sent after I got the database working again." /&gt;&lt;/p&gt;
&lt;p&gt;That's on me. I should have known better: volume snapshots aren't the best way to back up a Postgres database. I have a two-tier backup approach now:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Continuous backup of the WAL files.&lt;/li&gt;
&lt;li&gt;A weekly volume snapshot.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Restoring with the backup of the WAL files is the first option, but, if shit hits the fan, the volume snapshot plus the backup of the WAL files are the second. I should have done this from the beginning, but I didn't. This current setup should make disaster recovery much easier.&lt;/p&gt;
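&lt;p&gt;The post doesn't name the tooling, but continuous WAL archiving in stock Postgres boils down to a couple of &lt;code&gt;postgresql.conf&lt;/code&gt; settings. The archive destination here is a placeholder, and dedicated tools like pgBackRest or WAL-G handle this more robustly:&lt;/p&gt;

```ini
# postgresql.conf: continuous WAL archiving (sketch)
wal_level = replica
archive_mode = on
# Copy each completed WAL segment somewhere off the database volume.
# %p is the path to the segment, %f is its file name.
archive_command = 'cp %p /backups/wal/%f'
```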
&lt;h3 id="slow-network-traffic"&gt;Slow network traffic&lt;/h3&gt;
&lt;p&gt;The second major issue was with network traffic in the container network being stupid slow. Mainly between the cloud nodes and the on-prem node.&lt;/p&gt;
&lt;p&gt;When I first set everything up, I was getting network speeds that would match up to what I would expect: &lt;code&gt;200-300 Mb/s&lt;/code&gt;.&lt;a id="fnref:1" href="#fn:1" class="footnote-ref"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt; Then... It just dropped. Dramatically. We're talking about it going down to only &lt;code&gt;1-15 Mb/s&lt;/code&gt;. It was bad and it would make trying to use Mastodon a real pain. The problem affected all nodes in the cluster, but it was more bearable between the cloud nodes. I essentially had to take the on-prem node out of being assigned the pods for the frontend and the database and relegate it to only the Sidekiq queues.&lt;/p&gt;
&lt;p&gt;That was only a band-aid fix though, because, like I said, it affected all of the nodes. So I had to do a lot of troubleshooting. It took me a week or two to pinpoint the specific problem.&lt;/p&gt;
&lt;p&gt;Direct node-to-node traffic outside of the container network was fine and even node-to-container traffic was fine; however, container-to-container traffic was problematic. It wasn't just slow; network packets were being dropped, which was the real reason it was so slow. I was able to figure that out by deploying a container for &lt;a href="https://github.com/esnet/iperf" rel="noopener noreferrer"&gt;&lt;code&gt;iperf3&lt;/code&gt;&lt;/a&gt; onto all of the nodes and running tests between all of them in different configurations (node-to-node, node-to-container, container-to-node, and container-to-container).&lt;/p&gt;
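&lt;p&gt;That test matrix can be reproduced with a few commands like these. The pod image is just a commonly used &lt;code&gt;iperf3&lt;/code&gt; image, not necessarily the one I used, and the addresses are placeholders:&lt;/p&gt;

```shell
# Node-to-node: run the server directly on one node...
iperf3 -s
# ...and the client directly on another.
iperf3 -c TARGET_NODE_IP -t 30

# Container-to-container: run both ends as pods instead.
kubectl run iperf-server --image=networkstatic/iperf3 -- -s
kubectl run iperf-client --image=networkstatic/iperf3 -- -c SERVER_POD_IP
# Mixing the two (node client against a pod server, and vice versa)
# covers the node-to-container and container-to-node cases.
```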
&lt;p&gt;So what ended up being the issue? There were two:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;There was an MTU mismatch between the container network and the Cloudflare WARP interface. The virtual network interface for WARP has an MTU of &lt;code&gt;1280 bytes&lt;/code&gt;, but everything else typically had an MTU of &lt;code&gt;1500 bytes&lt;/code&gt; or &lt;code&gt;1420 bytes&lt;/code&gt;. This was causing packets to be dropped. I had to force &lt;code&gt;k3s&lt;/code&gt; to bind to a specific address, with &lt;code&gt;--bind-address&lt;/code&gt;, to get it to utilize the correct MTU.&lt;a id="fnref:2" href="#fn:2" class="footnote-ref"&gt;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;There was also an issue with how TCP congestion was being handled. By default, the vast majority of Linux kernel configs ship with &lt;code&gt;CUBIC&lt;/code&gt; as the default TCP congestion control algorithm. Switching to &lt;a href="https://cloud.google.com/blog/products/networking/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster?m=1" rel="noopener noreferrer"&gt;&lt;code&gt;BBR&lt;/code&gt; (Bottleneck Bandwidth and Round-trip propagation time)&lt;/a&gt; helped alleviate that. Pretty drastically too. Considering what &lt;code&gt;BBR&lt;/code&gt; was designed for, it makes sense that it would work much better with our network setup.&lt;/li&gt;
&lt;/ol&gt;
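&lt;p&gt;In concrete terms, the two fixes look roughly like this on each node. The address is a placeholder, and your sysctl file layout may differ:&lt;/p&gt;

```shell
# 1. Confirm the mismatch: the WARP interface reports mtu 1280
#    while the other interfaces report 1500 or 1420.
ip link show

# Pin k3s to the address whose interface has the MTU it should use.
k3s server --bind-address NODE_PRIVATE_IP

# 2. Switch TCP congestion control from CUBIC to BBR.
sudo modprobe tcp_bbr
sudo sysctl -w net.core.default_qdisc=fq
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr
# Persist the two sysctl settings in /etc/sysctl.d/ so they survive reboots.
```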
&lt;p&gt;Making both of those changes fixed the networking issues I was seeing. I haven't seen any problems with it since.&lt;/p&gt;
&lt;h2 id="how-is-it-going-post-migration"&gt;How's it going post-migration?&lt;/h2&gt;
&lt;p&gt;Really well! The initial problems are gone and it's been rather smooth sailing. There have been like one or two minor problems since then, but nothing major. In fact, the hybrid approach I've chosen has worked &lt;strong&gt;extremely well&lt;/strong&gt;. My ISP had an outage late one night, but we only had a few minutes of downtime as all of the containers that were on the on-prem node were spun up on the other nodes. So it's working really well!&lt;/p&gt;
&lt;div class="footnotes"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;You might be thinking, shouldn't that be higher? Not really. There's going to be network performance degradation in a container network, so it's to be expected.&lt;a href="#fnref:1" class="footnote-back-ref"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;Fun fact! I haven't been able to apply updates to &lt;code&gt;k3s&lt;/code&gt; &lt;a href="https://github.com/k3s-io/k3s/issues/10476" rel="noopener noreferrer"&gt;because a bug was introduced in &lt;code&gt;v1.30.1+k3s1&lt;/code&gt;&lt;/a&gt; that, when the &lt;code&gt;--bind-address&lt;/code&gt; argument is provided, causes the &lt;code&gt;kubelet&lt;/code&gt; to not work properly.&lt;a href="#fnref:2" class="footnote-back-ref"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</a10:content>
    </item>
    <item>
      <guid isPermaLink="false">7ce69a20-a3b9-481e-892a-163f88b5782c</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>VSCode Configurator</title>
      <description>Introducing vscode-configurator, a CLI tool I wrote to bootstrap and maintain VSCode workspaces for my projects. Right now it handles C# projects: initializing git, generating the solution file, .gitignore, global.json, and the VSCode settings.json/tasks.json files, and adding new project files to the workspace afterwards. It's written in C# and compiled with Native AOT.</description>
      <source>Smalls.Online Blog</source>
      <pubDate>Sat, 09 Mar 2024 16:35:00 GMT</pubDate>
      <a10:link href="https://smalls.online/blog/entry/vscode-configurator" />
      <a10:content type="html">&lt;p&gt;For a while now, it's been pretty tedious for me to bootstrap new programming projects. I have to go through many steps to get my environment setup for a new project to use in VSCode. Especially if I want to keep things consistent across all of the different random projects I start working on.&lt;/p&gt;
&lt;p&gt;So I finally decided to make something to do that for me: &lt;code&gt;vscode-configurator&lt;/code&gt;, a CLI tool for quickly bootstrapping new projects.&lt;/p&gt;
&lt;h2 id="ewww-vscode"&gt;Ewww, VSCode? 🤮&lt;/h2&gt;
&lt;p&gt;Let me get this out of the way: Yeah... I use VSCode. lol It's what I've been using for quite a while now. For all of its weird quirks, I'm &lt;em&gt;very familiar&lt;/em&gt; with how it works. I've tried many different code editors, but I always come back to VSCode.&lt;/p&gt;
&lt;p&gt;I've never been a big fan of &lt;em&gt;&amp;quot;highly specific&amp;quot;&lt;/em&gt; IDEs. Maybe someday I'll write about that, but the short of it is that I'm &lt;del&gt;a crazy madman&lt;/del&gt; someone who works with many different languages. Yeah, I've been mainly doing things in C# for the past few years; however, I also like to toy around with other languages &lt;strong&gt;and&lt;/strong&gt; my actual job doesn't even involve programming. &lt;a id="fnref:1" href="#fn:1" class="footnote-ref"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;I do use NeoVim too, but I just haven't been able to switch my programming stuff to it. It's kinda relegated to quick file edits and whenever I need to modify a config file on a Linux system. Will I ever make the switch for programming...&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/maybe-someday.gif" class="img-fluid" alt="Maybe someday..." /&gt;&lt;/p&gt;
&lt;h2 id="general-overview"&gt;A general overview of the tool&lt;/h2&gt;
&lt;p&gt;So what exactly does &lt;code&gt;vscode-configurator&lt;/code&gt; do? I guess you could probably compare it to &lt;a href="https://create.t3.gg/" rel="noopener noreferrer"&gt;&lt;code&gt;create-t3-app&lt;/code&gt;&lt;/a&gt;? It's kinda the same concept, but not entirely.&lt;/p&gt;
&lt;p&gt;Right now it only bootstraps a workspace for C# projects, but I do plan on adding other languages and scenarios. I'm mainly writing the tool for myself, so, of course, I'm going to focus on C# first, and it's &lt;em&gt;very&lt;/em&gt; opinionated. It also doesn't create the actual C# projects; the point of it is to initialize and maintain the workspace for VSCode.&lt;/p&gt;
&lt;p&gt;In the case of bootstrapping a workspace for a C# project, it will:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Initialize &lt;code&gt;git&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run &lt;code&gt;dotnet new&lt;/code&gt; to create templated files:
&lt;ul&gt;
&lt;li&gt;A root solution file (&lt;code&gt;.sln&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;.gitignore&lt;/code&gt; file that contains a lot of common files/paths to exclude from &lt;code&gt;git&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;A &lt;code&gt;global.json&lt;/code&gt; file for specifying which version of .NET the &lt;code&gt;dotnet&lt;/code&gt; CLI should use.&lt;/li&gt;
&lt;li&gt;Optionally add a &lt;code&gt;nuget.config&lt;/code&gt; file to specify other NuGet feeds.&lt;/li&gt;
&lt;li&gt;Optionally add &lt;a href="https://gitversion.net/docs/" rel="noopener noreferrer"&gt;GitVersion&lt;/a&gt; to the project, so versioning information for compiled projects is derived from &lt;code&gt;git&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;settings.json&lt;/code&gt; file for VSCode to define some local settings to the project.&lt;/li&gt;
&lt;li&gt;Create a &lt;code&gt;tasks.json&lt;/code&gt; file for VSCode that defines a lot of the common tasks (and inputs for those tasks) I use for C# projects.
&lt;ul&gt;
&lt;li&gt;Tasks like building, compiling, restoring packages, etc.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Running it looks like this:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/vscode-configurator_csharp_init.gif" class="img-fluid" alt="'vscode-configurator csharp init' demo" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pretty cool, right?&lt;/strong&gt; Not only that, but I also have another command that can be run afterwards to add a C# project file to the solution file and the &lt;code&gt;tasks.json&lt;/code&gt; file:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/vscode-configurator_add-project.gif" class="img-fluid" alt="'vscode-configurator csharp add-project' demo" /&gt;&lt;/p&gt;
&lt;p&gt;Now the project is added to the root solution file, and I can select it as an option when I run a task:&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/vscode-configurator_tasks-demo.gif" class="img-fluid" alt="Demo of running a task on the newly added project" /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pretty dank, right?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Honestly, this will help me keep things consistent across all of my projects. No more having to copy/paste files, manually add specific things, or forgetting to run that one specific command.&lt;/p&gt;
&lt;h2 id="techno-mumbo-jumbo"&gt;Techno Mumbo Jumbo&lt;/h2&gt;
&lt;p&gt;I wrote this in... &lt;em&gt;Wait for it...&lt;/em&gt; C#. &lt;strong&gt;SHOCKER!&lt;/strong&gt; I know, right?&lt;/p&gt;
&lt;p&gt;You're probably thinking that this could have easily been a shell script or a PowerShell script. You'd be right. Where's the fun in that, though? It mainly gave me an excuse to learn some more of the ins-and-outs of writing something that's meant to be &lt;a href="https://learn.microsoft.com/en-us/dotnet/core/deploying/native-aot" rel="noopener noreferrer"&gt;Native AOT compiled&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;What's Native AOT in the C#/.NET world? &lt;a id="fnref:2" href="#fn:2" class="footnote-ref"&gt;&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt; You're basically compiling to native code instead of the Just-In-Time (JIT) compiled code that .NET is known for. There's no need to have the .NET runtime installed to run it. There's still &lt;em&gt;technically&lt;/em&gt; a runtime, but it's directly embedded into the binary and it's trimmed down.&lt;/p&gt;
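&lt;p&gt;For context, opting a project into Native AOT is just a project property (This is the standard setup from Microsoft's docs, not anything specific to &lt;code&gt;vscode-configurator&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;!-- In the .csproj file --&amp;gt;
&amp;lt;PropertyGroup&amp;gt;
  &amp;lt;PublishAot&amp;gt;true&amp;lt;/PublishAot&amp;gt;
&amp;lt;/PropertyGroup&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Then running &lt;code&gt;dotnet publish -r linux-x64 -c Release&lt;/code&gt; (swap in whatever runtime identifier you're targeting) spits out a single native binary.&lt;/p&gt;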
&lt;p&gt;This is like the third or fourth C# project I've done that targets Native AOT compilation in just the last few months. &lt;a id="fnref:3" href="#fn:3" class="footnote-ref"&gt;&lt;sup&gt;3&lt;/sup&gt;&lt;/a&gt; I've been tinkering with Native AOT compilation for a few years now, but, with .NET 8, it's gotten to the point where it's a viable option. There are a lot of &lt;em&gt;&amp;quot;limitations&amp;quot;&lt;/em&gt; when it comes to it, so there are things you have to do differently than if you were compiling traditionally.&lt;/p&gt;
&lt;p&gt;One of those is interacting with JSON. &lt;strong&gt;Especially&lt;/strong&gt; when you're trying to work with JSON whose schema you can't reliably know ahead of time (Or it would take too much work creating classes for all of it lol). I ran into that with the &lt;code&gt;tasks.json&lt;/code&gt; file whenever &lt;code&gt;vscode-configurator csharp add-project&lt;/code&gt; is run. Here are three methods I made to handle that, if you're curious:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/Smalls1652/SmallsOnline.VSCode.Configurator/blob/71764acac2b545af489ec37d3b5eaa07723606ba/src/Configurator/External/VSCode/Tasks/AddCsharpProjectToTasksJsonAsync.cs" rel="noopener noreferrer"&gt;&lt;code&gt;src/Configurator/External/VSCode/Tasks/AddCsharpProjectToTasksJsonAsync.cs&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Smalls1652/SmallsOnline.VSCode.Configurator/blob/71764acac2b545af489ec37d3b5eaa07723606ba/src/Configurator/External/VSCode/Tasks/GetInputNodeById.cs" rel="noopener noreferrer"&gt;&lt;code&gt;src/Configurator/External/VSCode/Tasks/GetInputNodeById.cs&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/Smalls1652/SmallsOnline.VSCode.Configurator/blob/71764acac2b545af489ec37d3b5eaa07723606ba/src/Configurator/External/VSCode/Tasks/AddOptionToInputNode.cs" rel="noopener noreferrer"&gt;&lt;code&gt;src/Configurator/External/VSCode/Tasks/AddOptionToInputNode.cs&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
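&lt;p&gt;The general approach in those methods boils down to mutating the document with &lt;code&gt;System.Text.Json&lt;/code&gt;'s &lt;code&gt;JsonNode&lt;/code&gt; API, which doesn't depend on the reflection-based serialization that Native AOT trims away. Here's a stripped-down sketch of the idea (the &lt;code&gt;id&lt;/code&gt; and option values are made up for illustration, not the actual ones from the tool):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;using System.Text.Json.Nodes;

// Parse tasks.json without needing classes that model its full schema.
string json = File.ReadAllText("tasks.json");
JsonNode tasksJson = JsonNode.Parse(json)!;

// Find the input node with a matching "id".
JsonNode? targetInput = null;
foreach (JsonNode? input in tasksJson["inputs"]!.AsArray())
{
    if ((string?)input?["id"] == "projectPath")
    {
        targetInput = input;
        break;
    }
}

// Append a new option to that input and write the file back out.
targetInput?["options"]?.AsArray().Add("src/NewProject/NewProject.csproj");
File.WriteAllText("tasks.json", tasksJson.ToJsonString());
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Since &lt;code&gt;JsonNode&lt;/code&gt; treats the document as a generic tree, you only have to know about the parts you're actually touching.&lt;/p&gt;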
&lt;p&gt;A lot of this is just a good learning experience for me. I'm able to learn more about how things work under the hood. Honestly, if you're a C#/.NET developer, Native AOT compilation will help you understand how a lot of the nice things work.&lt;/p&gt;
&lt;h2 id="wrapping-up"&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;Like I said before, I'm mainly writing this tool for myself. I can't guarantee that it will be good for you; however, if you're interested in it, you can check out my GitHub repo for it &lt;a href="https://github.com/Smalls1652/SmallsOnline.VSCode.Configurator" rel="noopener noreferrer"&gt;here&lt;/a&gt;. I've also got pre-compiled binaries for the latest release &lt;a href="https://github.com/Smalls1652/SmallsOnline.VSCode.Configurator/releases/latest" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I don't know which languages/scenarios I'm going to add next. Maybe Rust or an infrastructure-as-code language (&lt;del&gt;Terraform&lt;/del&gt; OpenTofu or Azure Bicep)? I want to also make it customizable, so you can add whatever you want; however, I'm not focusing on that yet and I'm not even sure if I will add it.&lt;/p&gt;
&lt;div class="footnotes"&gt;
&lt;hr /&gt;
&lt;ol&gt;
&lt;li id="fn:1"&gt;
&lt;p&gt;I'm an IT guy, so I mainly handle data center and cloud infrastructure.&lt;a href="#fnref:1" class="footnote-back-ref"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:2"&gt;
&lt;p&gt;I swear I'm going to get around to finishing writing that blog post one day.&lt;a href="#fnref:2" class="footnote-back-ref"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li id="fn:3"&gt;
&lt;p&gt;The others being &lt;a href="https://github.com/Smalls1652/EntraMfaPrefillinator" rel="noopener noreferrer"&gt;EntraMfaPrefillinator&lt;/a&gt;, &lt;a href="https://github.com/Smalls1652/GitHubReleaseGen" rel="noopener noreferrer"&gt;GitHubReleaseGen&lt;/a&gt;, and &lt;a href="https://github.com/Smalls1652/TwemojiConverter" rel="noopener noreferrer"&gt;TwemojiConverter&lt;/a&gt;.&lt;a href="#fnref:3" class="footnote-back-ref"&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
</a10:content>
    </item>
    <item>
      <guid isPermaLink="false">34ef80ca-79e8-4d96-ab85-28bbb26dd8cb</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>Steam Deck OLED: First Impressions</title>
      <description>I've had the Steam Deck OLED for a few days now, so I want to put down my thoughts about it. First things first: I absolutely love the Steam Deck. I've had the original LCD model since July 2022 and it has been my main gaming device since then. I play games on it just about every day. I'm not a PC gamer (At least I haven't been one since 2016), so I love how "console-ified" the Steam Deck is; however, it still lets me do more advanced things like modding. It's an amazing little device. So here are my thoughts on the new OLED model. tl;dr {#tl-dr} If you don't want to read all of my individual thoughts, here's the tl;dr: The new OLED model is amazing and is definitely worth it. It's the model you should get if you don't already have one. If you already have a LCD model, it depends on how much you use it. If you use it all the time, it is a worthwhile upgrade. My thoughts {#my-thoughts} The screen {#the-screen} The new OLED panel is 100% better than the LCD panel. It looks much better, even with the changes in SteamOS 3.5 that improved the LCD panel's vibrancy. Not only that but it also supports HDR, which is absolutely wild. It's also a 90 Hz panel. Not that you'll be able to take advantage of that in most games, but it does make the UI much smoother and it makes input latency much better (Even if you cap the framerate to 30 FPS). The panel is also slightly bigger. It's only a 0.4 inch increase (From 7 inches to 7.4 inches), but... It's noticable. The original LCD model had huge bezels around it and it legitimately felt cramped at times. The native resolution of both the LCD and OLED panels is 1280x800, which is a 16:10 aspect ratio. Not every game supports that resolution though, so you would have to run the game at 1280x720, which is a 16:9 ratio and is commonly named 720p. 
For those games that run at 1280x720, it felt cramped and didn't look all that great due to the added black bars on the LCD panel; however, it looks so much better on the OLED panel because there's more room to display the image and the black bars blend into the bezels. It is such a small quality of life improvement, but it's a very good one. The battery {#the-battery} The battery is noticably better. I mean it's going from a 40 Whr battery in the LCD model to a 50 Whr battery in the OLED model, but there are also power efficiency improvements coming from the reduced size of the APU. At this point, I'm getting about 3-5 hours of playtime now. On the LCD model I was getting somewhere between 1-2 hours. It still heavily depends on what you're playing and what framerate you have set, but it is a huge improvement. Thermals {#thermals} Picture of the Steam Deck's thermals while playing Borderlands 3. The OLED model runs much cooler. I was seeing between 75-90 °C on the LCD model if I was playing something intensive or at a high framerate, but I'm seeing between 60-75 °C on the OLED model. One benefit from that is that it doesn't get hot to touch often on the back of the Deck where the APU is. Plus the fan is much quieter. You can still hear it when it does kick on, but it's nowhere near as loud as it was on the LCD model. Other things {#other-things} Here are some other things that I noticed too: The analog sticks feel much better. Completely different materials. The haptics are more prominent. If a game didn't handle haptics/vibration/rumble directly through Steam Input, you could barely feel it (If at all). Now it's actually noticable. I was able to notice it in Borderlands 3 when shooting a gun. It's much lighter. You can now wake up the Deck from sleep when you turn on a controller that you have paired to it. Very handy when you have it in a dock. The speakers are slightly better? I think it's pretty subjective. 
I thought the speakers on the LCD model were already really good, but there's a noticable change in how the speakers sound on the OLED model. Some things sound better, but some things don't. If you get the 1 TB model like I did, the included carrying case has a removable liner. You can basically remove the inner portion of the case and use it as a smaller case. The OLED model now supports WiFi 6e, but... I won't be able to take advantage of that just yet. The router I've got right now is WiFi 6, but not WiFi 6e. lol They added the Steam Deck logo on the charger. The logo improves NB performance by 100%. What's NB? It's a nothingburger. Wrapping up {#wrapping-up} The Steam Deck OLED is a great improvement of an already great device. Sure it might not be as powerful as something like the ASUS ROG Ally, the Lenovo Legion Go, or whatever new device Ayaneo or GPD has put out. I still believe it's the best out of all of them because it's efficient at how much power it uses. Plus you don't have to worry about all of the weird quirks with Windows (Though I will point out that it does comes with downsides, like some multiplayer games will not work because the anti-cheat doesn't work on Linux). So should you get the Steam Deck OLED? If you don't already have a Steam Deck and you're looking to get one, get the OLED model. If you already have a Steam Deck and use it all the time, I highly recommend getting the OLED model. All of the improvements made with the OLED model are well worth it. If you already have a Steam Deck and don't use it frequently, don't get the OLED model. </description>
      <source>Smalls.Online Blog</source>
      <pubDate>Sun, 26 Nov 2023 15:19:00 Z</pubDate>
      <a10:link href="https://smalls.online/blog/entry/steam-deck-oled-impressions" />
      <a10:content type="html">&lt;p&gt;I've had the Steam Deck OLED for a few days now, so I want to put down my thoughts about it.&lt;/p&gt;
&lt;p&gt;First things first: I absolutely love the Steam Deck. I've had the original LCD model since July 2022 and it has been my main gaming device since then. I play games on it just about every day. I'm not a PC gamer (At least I haven't been one since 2016), so I love how &lt;em&gt;&amp;quot;console-ified&amp;quot;&lt;/em&gt; the Steam Deck is; however, it still lets me do more advanced things like modding. It's an amazing little device.&lt;/p&gt;
&lt;p&gt;So here are my thoughts on the new OLED model.&lt;/p&gt;
&lt;h2 id="tl-dr"&gt;tl;dr&lt;/h2&gt;
&lt;p&gt;If you don't want to read all of my individual thoughts, here's the &lt;strong&gt;tl;dr&lt;/strong&gt;:&lt;/p&gt;
&lt;p&gt;The new OLED model is amazing and is definitely worth it. It's the model you should get if you don't already have one. If you already have an LCD model, it depends on how much you use it. If you use it all the time, it is a worthwhile upgrade.&lt;/p&gt;
&lt;h2 id="my-thoughts"&gt;My thoughts&lt;/h2&gt;
&lt;h3 id="the-screen"&gt;The screen&lt;/h3&gt;
&lt;p&gt;The new OLED panel is 100% better than the LCD panel. It looks much better, even with the changes in SteamOS 3.5 that improved the LCD panel's vibrancy. Not only that, but it also supports HDR, which is absolutely wild. It's also a &lt;code&gt;90 Hz&lt;/code&gt; panel. Not that you'll be able to take advantage of that in most games, but it does make the UI much smoother and it improves input latency (Even if you cap the framerate to 30 FPS).&lt;/p&gt;
&lt;p&gt;The panel is also &lt;em&gt;slightly&lt;/em&gt; bigger. It's only a &lt;code&gt;0.4 inch&lt;/code&gt; increase (From &lt;code&gt;7 inches&lt;/code&gt; to &lt;code&gt;7.4 inches&lt;/code&gt;), but... It's noticeable. The original LCD model had huge bezels around it and it legitimately felt cramped at times. The native resolution of both the LCD and OLED panels is &lt;code&gt;1280x800&lt;/code&gt;, which is a &lt;code&gt;16:10&lt;/code&gt; aspect ratio. Not every game supports that resolution though, so you would have to run the game at &lt;code&gt;1280x720&lt;/code&gt;, which is a &lt;code&gt;16:9&lt;/code&gt; ratio and is commonly known as &lt;code&gt;720p&lt;/code&gt;. For those games that run at &lt;code&gt;1280x720&lt;/code&gt;, it felt cramped and didn't look all that great due to the added black bars on the LCD panel; however, it looks so much better on the OLED panel because there's more room to display the image and the black bars blend into the bezels. It is such a small quality of life improvement, but it's a very good one.&lt;/p&gt;
&lt;h3 id="the-battery"&gt;The battery&lt;/h3&gt;
&lt;p&gt;The battery is noticeably better. It's going from a &lt;code&gt;40 Wh&lt;/code&gt; battery in the LCD model to a &lt;code&gt;50 Wh&lt;/code&gt; battery in the OLED model, and there are also power efficiency improvements coming from the smaller APU die. At this point, I'm getting about 3-5 hours of playtime. On the LCD model I was getting somewhere between 1-2 hours. It still heavily depends on what you're playing and what framerate you have set, but it is a huge improvement.&lt;/p&gt;
&lt;h3 id="thermals"&gt;Thermals&lt;/h3&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/blog-assets/IMG_7545.jpeg" class="img-fluid" alt="Picture of the Steam Deck's thermals while playing Borderlands 3." /&gt;&lt;/p&gt;
&lt;p&gt;The OLED model runs much cooler. I was seeing between &lt;code&gt;75-90 °C&lt;/code&gt; on the LCD model if I was playing something intensive or at a high framerate, but I'm seeing between &lt;code&gt;60-75 °C&lt;/code&gt; on the OLED model. One benefit is that the back of the Deck, where the APU sits, doesn't often get hot to the touch anymore. Plus the fan is much quieter. You can still hear it when it does kick on, but it's nowhere near as loud as it was on the LCD model.&lt;/p&gt;
&lt;h3 id="other-things"&gt;Other things&lt;/h3&gt;
&lt;p&gt;Here are some other things that I noticed too:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The analog sticks feel much better. Completely different materials.&lt;/li&gt;
&lt;li&gt;The haptics are more prominent. If a game didn't handle haptics/vibration/rumble directly through Steam Input, you could barely feel it (If at all). Now it's actually noticeable. I was able to notice it in Borderlands 3 when shooting a gun.&lt;/li&gt;
&lt;li&gt;It's much lighter.&lt;/li&gt;
&lt;li&gt;You can now wake up the Deck from sleep when you turn on a controller that you have paired to it. Very handy when you have it in a dock.&lt;/li&gt;
&lt;li&gt;The speakers are slightly better? I think it's pretty subjective. I thought the speakers on the LCD model were already really good, but there's a noticeable change in how the speakers sound on the OLED model. Some things sound better, but some things don't.&lt;/li&gt;
&lt;li&gt;If you get the &lt;code&gt;1 TB&lt;/code&gt; model like I did, the included carrying case has a removable liner. You can basically remove the inner portion of the case and use it as a smaller case.&lt;/li&gt;
&lt;li&gt;The OLED model now supports WiFi 6e, but... I won't be able to take advantage of that just yet. The router I've got right now is WiFi 6, but not WiFi 6e. lol&lt;/li&gt;
&lt;li&gt;They added the Steam Deck logo on the charger. The logo improves &lt;code&gt;NB&lt;/code&gt; performance by 100%. What's &lt;code&gt;NB&lt;/code&gt;? It's a nothingburger.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="wrapping-up"&gt;Wrapping up&lt;/h2&gt;
&lt;p&gt;The Steam Deck OLED is a great improvement on an already great device. Sure, it might not be as powerful as something like the ASUS ROG Ally, the Lenovo Legion Go, or whatever new device Ayaneo or GPD has put out. I still believe it's the best out of all of them because of how power-efficient it is. Plus you don't have to worry about all of the weird quirks of Windows (Though I will point out that it does come with downsides, like some multiplayer games not working because their anti-cheat doesn't work on Linux).&lt;/p&gt;
&lt;p&gt;So should you get the Steam Deck OLED?&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;If you don't already have a Steam Deck and you're looking to get one&lt;/em&gt;, &lt;strong&gt;get the OLED model&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;If you already have a Steam Deck and use it all the time&lt;/em&gt;, &lt;strong&gt;I highly recommend getting the OLED model&lt;/strong&gt;. All of the improvements made with the OLED model are well worth it.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;If you already have a Steam Deck and don't use it frequently&lt;/em&gt;, &lt;strong&gt;don't get the OLED model&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
</a10:content>
    </item>
    <item>
      <guid isPermaLink="false">e35395c9-23b4-4aba-91de-eb88872ad77c</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>iOS Ad Blockers</title>
      <description>Did you know that you can easily block ads on iOS? Specifically through Safari? And I'm not talking about DNS filtering. They're called "Content Blockers" and they're not as powerful as something like uBlock Origin; however, they get the job done and I highly suggest you use at least one. A lot of them aren't free, but... They're definitely worth it. ⚠️ Note: Content Blockers only work with Safari. Third-party web browsers, like Google Chrome, Mozilla Firefox, and Microsoft Edge DO NOT implement them. I don't personally use third-party web browsers on iOS, since they're all just re-skins of Safari. Here are the ones I personally use: Wipr ($1.99 USD) This is probably the best one to get. It basically implements the EasyList filters, which is used by a lot of the popular desktop ad blocking extensions. AdGuard (Base - Free | Premium - $4.99/year USD or a one-time purchase of $12.99 USD) I use this for lists that aren't EasyList, since Wipr implements those. It has DNS filtering, but I do not use that. It also has an optional Advanced Protection extension that acts more like a traditional ad-blocking extension on a desktop web browser. </description>
      <source>Smalls.Online Blog</source>
      <pubDate>Sun, 05 Nov 2023 21:25:00 Z</pubDate>
      <a10:link href="https://smalls.online/blog/entry/ios-ad-blockers" />
      <a10:content type="html">&lt;p&gt;Did you know that you can easily block ads on iOS? Specifically through Safari? And I'm not talking about DNS filtering.&lt;/p&gt;
&lt;p&gt;They're called &lt;a href="https://webkit.org/blog/3476/content-blockers-first-look/" rel="noopener noreferrer"&gt;&lt;em&gt;&amp;quot;Content Blockers&amp;quot;&lt;/em&gt;&lt;/a&gt; and they're not as powerful as something like &lt;a href="https://github.com/gorhill/uBlock" rel="noopener noreferrer"&gt;uBlock Origin&lt;/a&gt;; however, they get the job done and &lt;strong&gt;I highly suggest you use at least one&lt;/strong&gt;. A lot of them aren't free, but... They're definitely worth it.&lt;/p&gt;
&lt;blockquote class="blockquote"&gt;
&lt;p&gt;&lt;strong&gt;⚠️ Note:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Content Blockers &lt;strong&gt;only work with Safari&lt;/strong&gt;. Third-party web browsers, like Google Chrome, Mozilla Firefox, and Microsoft Edge &lt;strong&gt;DO NOT&lt;/strong&gt; implement them.&lt;/p&gt;
&lt;p&gt;I don't personally use third-party web browsers on iOS, since they're all just re-skins of Safari.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Here are the ones I personally use:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://apps.apple.com/us/app/wipr/id1030595027" rel="noopener noreferrer"&gt;&lt;strong&gt;Wipr&lt;/strong&gt;&lt;/a&gt; (&lt;code&gt;$1.99 USD&lt;/code&gt;)
&lt;ul&gt;
&lt;li&gt;This is probably the best one to get. It basically implements the &lt;a href="https://easylist.to" rel="noopener noreferrer"&gt;EasyList&lt;/a&gt; filters, which are used by a lot of the popular desktop ad blocking extensions.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apps.apple.com/us/app/adguard-adblock-privacy/id1047223162" rel="noopener noreferrer"&gt;&lt;strong&gt;AdGuard&lt;/strong&gt;&lt;/a&gt; (Base - &lt;code&gt;Free&lt;/code&gt; | Premium - &lt;code&gt;$4.99/year USD&lt;/code&gt; or a one-time purchase of &lt;code&gt;$12.99 USD&lt;/code&gt;)
&lt;ul&gt;
&lt;li&gt;I use this for lists that aren't EasyList, since Wipr implements those. It has DNS filtering, but I do not use that.&lt;/li&gt;
&lt;li&gt;It also has an optional &lt;a href="https://adguard.com/kb/adguard-for-ios/overview/#advanced-protection" rel="noopener noreferrer"&gt;&lt;em&gt;Advanced Protection&lt;/em&gt;&lt;/a&gt; extension that acts more like a traditional ad-blocking extension on a desktop web browser.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
</a10:content>
    </item>
    <item>
      <guid isPermaLink="false">19ba951a-ed10-4601-9d25-efcf6fba5232</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>Blog publishing goes brrr</title>
      <description>It's been a hot minute since I last made a blog post here. There are a few reasons why, so let's delve into that. 😅 Why you no post?{#why-you-no-post} So I started work on making a blogging portion of my site in mid-2022 and managed to get two posts up: A very stupid hello world post and a post about WASM optimizations for my site. I had gotten it "working" and promptly stopped making posts. All the hallmarks of anyone's personal blog! It's not that I didn't want to do that. Well.. There are a few reasons why I left it like that: I didn't have an easy way to publish posts. Time. Now, I've finally fixed ~~both~~ one of those problems. Blog Publishinator 9000{#blog-publishinator-9000} After migrating the codebase for my site to be server-side rendered (SSR) with upcoming changes to .NET's Blazor, I wanted to make a post about the change. It was a bit of a switch going from WebAssembly to server-side rendering, so I wanted to explain all of the dorky details about it (That I'm sure so many people would read). There's one problem: I never got around to actually finishing the mechanisms for publishing blog posts. Ya see, I had this smart idea for it all. Instead of spending a lot of time making a web-based content management system (CMS), why not just use the tools I already use? So I had made this plan to use git for maintaining it all. Specifically through a CI/CD workflow through GitHub Actions. All I needed was something that could process the Markdown files in the repo and push any changes to the database. Enter the, appropriately named, SmallsOnline.Web.Tools.BlogPublisher CLI tool. Great name, right? I thought so too! It's just a simple CLI tool where you pass the path to the file, supply database authentication/connection info (Before you ask, no it's not hard-coded), and it'll push the changes. I had technically started work on it back in December 2022, but, like most things, I just didn't have time to work on it. 
Yes it's very simple, but my ADHD brain jumps from one thing to the next. Add on the fact that I'm constantly busy at work and that can drain me from wanting to work on personal projects. So now it's all in place and that's how this post got published. Why go this route?{#why-go-this-route} As I mentioned previously, making a dedicated CMS for all of this would have been a waste of my time. These are the reasons why I decided to have it controlled through a git repo on GitHub: I'm a dork. I can use the tools I already use. On my laptop, I can use Visual Studio Code. Yeah, I know that can be yucky to some people; however, for me personally, it's a great Markdown editor. On my iPhone or iPad Pro, I can use iA Writer or Runestone to write the posts. Then I can use Working Copy for performing all of the git operations. I can take advantage of version control. I can have two places for my posts: On my website. On GitHub (Or, if I were to move off of GitHub, other cloud-based git providers) That last reason is a bit of a sticking point for me. If something were to happen to my website and/or my website's database, I can at least have a backup location of them. The repo is currently (Has been for a long time lol) set to private, but it'll eventually be public and you'll be able to access it from here. Let's wrap this up{#lets-wrap-this-up} Wow! You made it all the way here? Even after reading that snooze fest? Now that I've finally got everything in place, I'll be posting a lot more. I've got some extra things I need to do, like setting up a RSS feed and figuring out the best way to make attaching images a bit easier; however, I've got something to work with now. </description>
      <source>Smalls.Online Blog</source>
      <pubDate>Fri, 08 Sep 2023 10:27:00 Z</pubDate>
      <a10:link href="https://smalls.online/blog/entry/blog-publishing-goes-brrr" />
      <a10:content type="html">&lt;p&gt;It's been a hot minute since &lt;a href="https://smalls.online/blog/entry/blazor-wasm-optimizations" rel="noopener noreferrer"&gt;I last made a blog post here&lt;/a&gt;. There are a few reasons why, so let's delve into that. 😅&lt;/p&gt;
&lt;h2 id="why-you-no-post"&gt;Why you no post?&lt;/h2&gt;
&lt;p&gt;So I started work on making a blogging portion of my site in mid-2022 and managed to get two posts up: &lt;a href="https://smalls.online/blog/entry/hello-world" rel="noopener noreferrer"&gt;A very stupid hello world post&lt;/a&gt; and &lt;a href="https://smalls.online/blog/entry/blazor-wasm-optimizations" rel="noopener noreferrer"&gt;a post about WASM optimizations for my site&lt;/a&gt;. I had gotten it &lt;em&gt;&amp;quot;working&amp;quot;&lt;/em&gt; and promptly stopped making posts. All the hallmarks of anyone's personal blog! It's not that I didn't want to do that.&lt;/p&gt;
&lt;p&gt;Well... There are a few reasons why I left it like that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;I didn't have an easy way to publish posts.&lt;/li&gt;
&lt;li&gt;Time.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Now, I've &lt;strong&gt;finally&lt;/strong&gt; fixed &lt;del&gt;both&lt;/del&gt; one of those problems.&lt;/p&gt;
&lt;h2 id="blog-publishinator-9000"&gt;Blog Publishinator 9000&lt;/h2&gt;
&lt;p&gt;After &lt;a href="https://github.com/Smalls1652/SmallsOnline.Web/pull/125" rel="noopener noreferrer"&gt;migrating the codebase for my site to be server-side rendered (SSR)&lt;/a&gt; with upcoming changes to .NET's Blazor, I wanted to make a post about the change. It was a bit of a switch going from WebAssembly to server-side rendering, so I wanted to explain all of the dorky details about it (That I'm sure so many people would read).&lt;/p&gt;
&lt;p&gt;There's one problem: I never got around to actually finishing the mechanisms for publishing blog posts.&lt;/p&gt;
&lt;p&gt;Ya see, I had this smart idea for it all. Instead of spending a lot of time making a web-based content management system (CMS), why not just use the tools I already use? So I had made this plan to use &lt;code&gt;git&lt;/code&gt; for maintaining it all. Specifically through a CI/CD workflow through &lt;a href="https://docs.github.com/en/actions" rel="noopener noreferrer"&gt;GitHub Actions&lt;/a&gt;.&lt;/p&gt;
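&lt;p&gt;The rough shape of the idea: push a Markdown file to the repo, and a workflow picks it up and publishes it. Here's a minimal sketch of what that looks like (the paths, names, and tool invocation here are illustrative, not my exact workflow):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;# Illustrative sketch only; the paths and project names are made up.
name: Publish blog posts
on:
  push:
    branches: [ main ]
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Push changed posts to the database
        run: dotnet run --project ./tools/BlogPublisher -- ./posts/my-post.md
&lt;/code&gt;&lt;/pre&gt;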
&lt;p&gt;All I needed was &lt;em&gt;something&lt;/em&gt; that could process the Markdown files in the repo and push any changes to the database. Enter the, appropriately named, &lt;a href="https://github.com/Smalls1652/SmallsOnline.Web/pull/130" rel="noopener noreferrer"&gt;&lt;code&gt;SmallsOnline.Web.Tools.BlogPublisher&lt;/code&gt; CLI tool&lt;/a&gt;. Great name, right? I thought so too! It's just a simple CLI tool where you pass the path to the file, supply database authentication/connection info (Before you ask: no, it's not hard-coded), and it'll push the changes. I had technically started work on it back in December 2022, but, like most things, I just didn't have time to work on it. Yes, it's very simple, but my ADHD brain jumps from one thing to the next. Add in the fact that I'm constantly busy at work, and it's easy to lose the energy for personal projects.&lt;/p&gt;
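&lt;p&gt;Just to give you an idea of the shape of it, an invocation looks something like this (the command and flag names here are made up for illustration; the real tool's options may be named differently):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;# Hypothetical flags; the actual tool's options may differ.
blog-publisher --file ./posts/blog-publishing-goes-brrr.md \
  --connection-string "$DATABASE_CONNECTION_STRING"
&lt;/code&gt;&lt;/pre&gt;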
&lt;p&gt;So now it's all in place and that's how this post got published.&lt;/p&gt;
&lt;h2 id="why-go-this-route"&gt;Why go this route?&lt;/h2&gt;
&lt;p&gt;As I mentioned previously, making a dedicated CMS for all of this would have been a waste of my time. These are the reasons why I decided to have it controlled through a &lt;code&gt;git&lt;/code&gt; repo on GitHub:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I'm a dork.&lt;/li&gt;
&lt;li&gt;I can use the tools I already use.
&lt;ul&gt;
&lt;li&gt;On my laptop, I can use Visual Studio Code. Yeah, I know that can be yucky to some people; however, for me personally, it's a great Markdown editor.&lt;/li&gt;
&lt;li&gt;On my iPhone or iPad Pro, I can use &lt;a href="https://ia.net/writer" rel="noopener noreferrer"&gt;iA Writer&lt;/a&gt; or &lt;a href="https://runestone.app/" rel="noopener noreferrer"&gt;Runestone&lt;/a&gt; to write the posts. Then I can use &lt;a href="https://workingcopy.app/" rel="noopener noreferrer"&gt;Working Copy&lt;/a&gt; for performing all of the &lt;code&gt;git&lt;/code&gt; operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;I can take advantage of version control.&lt;/li&gt;
&lt;li&gt;I can have two places for my posts:
&lt;ul&gt;
&lt;li&gt;On my website.&lt;/li&gt;
&lt;li&gt;On GitHub (Or, if I were to move off of GitHub, other cloud-based &lt;code&gt;git&lt;/code&gt; providers).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That last reason is a bit of a sticking point for me. &lt;strong&gt;If&lt;/strong&gt; something were to happen to my website and/or its database, I'd at least have a backup copy of my posts. The repo is currently (And has been for a long time lol) set to private, but it'll eventually be public and you'll be able to access it from &lt;a href="https://github.com/Smalls1652/SmallsOnline.Blog.Entries" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="lets-wrap-this-up"&gt;Let's wrap this up&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Wow!&lt;/strong&gt; You made it all the way here? Even after reading that snooze fest?&lt;/p&gt;
&lt;p&gt;Now that I've finally got everything in place, I'll be posting a lot more. I've got some extra things I need to do, like setting up an RSS feed and figuring out the best way to make attaching images a bit easier; however, I've got something to work with now.&lt;/p&gt;
</a10:content>
    </item>
    <item>
      <guid isPermaLink="false">fd76fb6f-9eaa-4375-a5e2-7dd6d6123e8c</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>Blazor WebAssembly optimizations I missed...</title>
      <description>I've been unintentionally skipping something in my CI/CD pipeline for my website that reduces the overall file size. 😬 If you didn't know, my website, which is probably where you're reading this, is not your typical website. It's built using Blazor WebAssembly (WASM), so the majority of the site is written in C#. This has a downside of having the overall file size of your website/web app being much larger than what you're used to seeing with just HTML and JavaScript; however, I've apparently been missing one little step that drastically reduces the final size. The Problem{#the-problem} Due to the nature of how Blazor WASM works, there are some extra files required for a website/web app to work in everyone's web browser. More specifically the .NET runtime converted to WebAssembly. By default, this is the uncompressed size of the file: | File name | Size (Uncompressed) | | --- | --- | | dotnet.wasm | 2.36 MB | One thing to keep in mind is that the file distributed to the web browser is actually compressed with Brotli, so you're not actually downloading 2.36 MB. The Solution{#the-solution} tl;dr{#tl-dr} Simple. Make sure the wasm-tools workload is installed for the dotnet SDK. You can install it by running this command: dotnet workload install wasm-tools The long answer{#the-long-answer} Installing wasm-tools adds an extra step when running dotnet publish with the release config: Runtime relinking. This basically trims the .NET runtime of code that your website/web app does not use. This was the size of the .NET runtime WebAssembly: | File name | Size (Uncompressed) | | --- | --- | | dotnet.wasm | 991 KB | That's a 1.39 MB difference! So I added this step to my build and deploy workflow on GitHub before I run dotnet publish: - name: Install wasm-tools   run: dotnet workload install wasm-tools </description>
      <source>Smalls.Online Blog</source>
      <pubDate>Mon, 18 Jul 2022 22:31:00 Z</pubDate>
      <a10:link href="https://smalls.online/blog/entry/blazor-wasm-optimizations" />
      <a10:content type="html">&lt;p&gt;I've been unintentionally skipping something in my CI/CD pipeline for my website that reduces the overall file size. 😬&lt;/p&gt;
&lt;p&gt;If you didn't know, my website, which is probably where you're reading this, is not your typical website. It's built using &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/?view=aspnetcore-6.0" rel="noopener noreferrer"&gt;Blazor WebAssembly (WASM)&lt;/a&gt;, so the majority of the site is written in C#. The downside is that the overall file size of your website/web app ends up much larger than what you're used to seeing with just HTML and JavaScript; however, I've apparently been missing one little step that drastically reduces the final size.&lt;/p&gt;
&lt;h2 id="the-problem"&gt;The Problem&lt;/h2&gt;
&lt;p&gt;Due to the nature of how Blazor WASM works, there are some extra files required for a website/web app to work in everyone's web browser. More specifically, the .NET runtime compiled to WebAssembly. By default, this is the uncompressed size of that file:&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File name&lt;/th&gt;
&lt;th&gt;Size (Uncompressed)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;code&gt;dotnet.wasm&lt;/code&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;2.36 MB&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;One thing to keep in mind is that the file distributed to the web browser is actually compressed with &lt;a href="https://github.com/google/brotli#introduction" rel="noopener noreferrer"&gt;Brotli&lt;/a&gt;, so you're not &lt;em&gt;actually&lt;/em&gt; downloading &lt;code&gt;2.36 MB&lt;/code&gt;.&lt;/p&gt;
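&lt;p&gt;If you want to see the compression savings for yourself, you can run a file through the &lt;code&gt;brotli&lt;/code&gt; CLI (assuming you have it installed; the sample file below is just a stand-in for your published &lt;code&gt;dotnet.wasm&lt;/code&gt;):&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;# Stand-in payload; point this at your published dotnet.wasm instead.
yes 'sample wasm payload' | head -n 100 &gt; sample.bin
raw=$(wc -c &lt; sample.bin)
compressed=$(brotli -c -q 11 sample.bin | wc -c)
echo "raw=$raw bytes, compressed=$compressed bytes"
&lt;/code&gt;&lt;/pre&gt;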
&lt;h2 id="the-solution"&gt;The Solution&lt;/h2&gt;
&lt;h3 id="tl-dr"&gt;tl;dr&lt;/h3&gt;
&lt;p&gt;Simple. Make sure the &lt;code&gt;wasm-tools&lt;/code&gt; workload is installed for the &lt;code&gt;dotnet&lt;/code&gt; SDK.&lt;/p&gt;
&lt;p&gt;You can install it by running this command:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-Shell"&gt;dotnet workload install wasm-tools
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id="the-long-answer"&gt;The long answer&lt;/h3&gt;
&lt;p&gt;Installing &lt;code&gt;wasm-tools&lt;/code&gt; adds an extra step when running &lt;code&gt;dotnet publish&lt;/code&gt; with the release config: &lt;a href="https://docs.microsoft.com/en-us/aspnet/core/blazor/host-and-deploy/webassembly?view=aspnetcore-6.0#runtime-relinking" rel="noopener noreferrer"&gt;&lt;strong&gt;Runtime relinking&lt;/strong&gt;&lt;/a&gt;. This basically trims the .NET runtime of code that your website/web app does not use.&lt;/p&gt;
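&lt;p&gt;There's nothing else to configure after installing the workload; the relinking happens automatically when you publish with the release config:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-shell"&gt;# Runtime relinking runs as part of a Release-config publish.
dotnet publish -c Release
&lt;/code&gt;&lt;/pre&gt;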
&lt;p&gt;After relinking, this was the size of the .NET runtime WebAssembly file:&lt;/p&gt;
&lt;table class="table"&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;File name&lt;/th&gt;
&lt;th&gt;Size (Uncompressed)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;&lt;code&gt;dotnet.wasm&lt;/code&gt;&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;&lt;code&gt;991 KB&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;strong&gt;That's a &lt;code&gt;1.39 MB&lt;/code&gt; difference!&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;So I added &lt;a href="https://github.com/Smalls1652/SmallsOnline.Web.PublicSite/blob/d16ef92df5b4f73b67659eb80bd24dcbd59f0783/.github/workflows/azure-webapp-deploy.yaml#L36-L37" rel="noopener noreferrer"&gt;this step to my build and deploy workflow on GitHub&lt;/a&gt; before I run &lt;code&gt;dotnet publish&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-yaml"&gt;- name: Install wasm-tools
  run: dotnet workload install wasm-tools
&lt;/code&gt;&lt;/pre&gt;
</a10:content>
    </item>
    <item>
      <guid isPermaLink="false">8a87b088-d901-42ac-88cc-36d177fe9baf</guid>
      <a10:author>
        <a10:name>Tim Small</a10:name>
        <a10:uri>https://smalls.online</a10:uri>
      </a10:author>
      <category>blog</category>
      <title>Hey you... You're finally awake</title>
      <description>The dumbest "Hello world" blog entry... Hey, you... Ralof: Hey, you. You’re finally awake. You were trying to cross the border, right? Walked right into that Imperial ambush, same as us, and that thief over there. Lokir: Damn you Stormcloaks. Skyrim was fine until you came along. Empire was nice and lazy. If they hadn't been looking for you, I could've stolen that horse and be halfway to Hammerfell. You there. You and me - we shouldn't be here. It's these Stormcloaks the Empire wants. Ralof: We're all brothers and sisters in binds now, thief. Imperial Guard: Shut up back there! Lokir: And what's wrong with him, huh? Ralof: Watch your tongue. You're speaking to Ulfric Stormcloak, the true High King. Lokir: Ulfric? The Jarl of Windhelm? You're the leader of the rebellion. But if they've captured you... Oh gods, where are they taking us? Ralof: I don't know where we're going, but Sovngarde awaits. Lokir: No, this can't be happening. This isn't happening. Ralof: Hey, what village are you from, horse thief? Lokir: Why do you care? Ralof: A Nord's last thoughts should be of home. Lokir: Rorikstead. I'm... I'm from Rorikstead. </description>
      <source>Smalls.Online Blog</source>
      <pubDate>Tue, 28 Jun 2022 13:37:00 -0400</pubDate>
      <a10:link href="https://smalls.online/blog/entry/hello-world" />
      <a10:content type="html">&lt;p&gt;&lt;strong&gt;The dumbest &amp;quot;Hello world&amp;quot; blog entry...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src="https://cdn.smalls.online/images/misc/skyrim-intro.gif" class="img-fluid" alt="Hey, you..." /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ralof:&lt;/strong&gt; Hey, you. You’re finally awake. You were trying to cross the border, right? Walked right into that Imperial ambush, same as us, and that thief over there.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lokir:&lt;/strong&gt; Damn you Stormcloaks. Skyrim was fine until you came along. Empire was nice and lazy. If they hadn't been looking for you, I could've stolen that horse and been halfway to Hammerfell. You there. You and me - we shouldn't be here. It's these Stormcloaks the Empire wants.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ralof:&lt;/strong&gt; We're all brothers and sisters in binds now, thief.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Imperial Guard:&lt;/strong&gt; Shut up back there!&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lokir:&lt;/strong&gt; And what's wrong with him, huh?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ralof:&lt;/strong&gt; Watch your tongue. You're speaking to Ulfric Stormcloak, the true High King.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lokir:&lt;/strong&gt; Ulfric? The Jarl of Windhelm? You're the leader of the rebellion. But if they've captured you... Oh gods, where are they taking us?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ralof:&lt;/strong&gt; I don't know where we're going, but Sovngarde awaits.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lokir:&lt;/strong&gt; No, this can't be happening. This isn't happening.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ralof:&lt;/strong&gt; Hey, what village are you from, horse thief?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lokir:&lt;/strong&gt; Why do you care?&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ralof:&lt;/strong&gt; A Nord's last thoughts should be of home.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Lokir:&lt;/strong&gt; Rorikstead. I'm... I'm from Rorikstead.&lt;/p&gt;
</a10:content>
    </item>
  </channel>
</rss>