<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>All-Purpose Mat&#39;s Blog</title>
    <link>https://blog.allpurposem.at/</link>
    <description>Monthly-ish projects pushing the boundary of what&#39;s possible</description>
    <pubDate>Sun, 15 Mar 2026 18:18:13 +0100</pubDate>
    <item>
      <title>Linux? On my phone??</title>
      <link>https://blog.allpurposem.at/linux</link>
      <description>&lt;![CDATA[I&#39;ve used Android for longer than I have had a phone. The first computer I owned was an Android 4 tablet[bq], which I received as a gift for my 8th birthday, alongside a shiny Google Mail account (huh, I guess I was breaking TOS since the beginning lol). Since then, I have fully engaged in the Android ecosystem, running relatively up-to-date OS versions on both smartphones I have owned, at least until the last one stopped receiving updates with Android 12L. I then looked into alternatives, and floated around options like GrapheneOS and AICP until I settled on the amazing @LineageOS@fosstodon.org since they kept maintaining the OLED &#34;black&#34; theme (thank you!!) that was removed on vanilla Android 12. Oh, and they support the latest Android releases on otherwise-obsolete devices like my poor Pixel 3a. Unfortunately, all these projects are based on the Android Open Source Project (AOSP), which is developed solely by Google and thus puts them at its whim. Sound familiar? The main reason I still stick with Firefox despite the AI stuff is because it&#39;s the only remaining browser that doesn&#39;t… 100% depend on and is at the whim of Google&#39;s Chromium project.&#xA;&#xA;[bq]: the tablet was made by BQ, a defunct Spanish company which, coincidentally, was the first to launch a Ubuntu Touch device: https://www.zdnet.com/article/first-ubuntu-smartphone-aquaris-e4-5-launches-into-cluttered-mobile-market/&#xA;&#xA;Why I&#39;m leaving my comfort zone&#xA;So: what&#39;s Google done this time to drive me off of Android? Maybe it&#39;s the recent lack of punctuality of AOSP source code releases, being several months late and thus preventing alternative Android distributions from staying up to date (and now as of writing this post, they will reduce the AOSP source code update frequency from quarterly to only twice per year?? this does not bode well, yikes). 
Maybe it&#39;s their (for now slightly less bad) attempt to require passport verification &amp; payment to distribute apps even outside of their Play Store? Perhaps it&#39;s their repeated sabotage of open source apps&#39; ability to function?&#xA;&#xA;Yes to all, but I will blame this little green boi:&#xA;Green location indicator from Android 12&#xA;In case you don&#39;t know (so lucky), this is a feature introduced by Google for Android 12 which helpfully pops up to tell you when a (non-Google Play Services) app accesses your location. While on the surface the location indicator is a good idea, there is no user-facing whitelist and thus the Home Assistant or weather apps, both of which are FOSS and I thus fully trust, would cause the green dot to pop up with all its animations for a few seconds, every five minutes. It literally drove me crazy, and was the last straw for me to finally consider an alternative.&#xA;&#xA;The distro&#xA;Now, there&#39;s a few distributions of Linux for phones, but with my goal of moving away from Google I did not want to settle for something that still depends on Android, otherwise we&#39;d have the Chromium problem. This means that all the distros that depend on Halium to run a Linux userspace don&#39;t qualify, despite being the most likely to have hardware working given they use the Android drivers and kernel. That leaves me basically with only one option, that being the Alpine-based postmarketOS. I got to try a device running it at last year&#39;s FrOSCon and found its performance very impressive, though I got some fair warnings about missing drivers for stuff like the fingerprint reader. I also read @neil@mastodon.neilzone.co.uk&#39;s excellent blogposts about his experience trying it on a OnePlus 6 phone. 
I asked around the @postmarketOS@treehouse.systems Matrix rooms, and my awesome local Aachen hackspace @CCCAC@chaos.social where I spotted someone with postmarketOS stickers on their laptop (if you&#39;re reading this, hi!), and finally settled on buying a Fairphone 5 to run pmOS on. The reason I did not install it on my existing Pixel 3a is twofold:&#xA;&#xA;It has very little RAM. You can run pmOS on 3GB, but it doesn&#39;t feel very future-proof given the state of the web (also, I don&#39;t know how heavy Waydroid is)&#xA;&#xA;I don&#39;t want to sacrifice my existing Android install. What if I need to do something that only works on Android (foreshadowing)? What if I delete important files while flashing pmOS? &#xA;&#xA;I definitely think Mobile NixOS is worth a look for its immutability &amp; reproducibility, which makes a lot of sense on a smartphone. They don&#39;t list my device on their Devices List, but I&#39;m sure I could get it working like pmOS given enough tinkering. NixOS gives me this safety cushion of being able to easily rollback my entire system if something goes wrong. Meanwhile, every time I touch a system file on postmarketOS I feel like I&#39;m committing a crime, and every update I do feels like a gamble on whether my phone will keep booting or not. For now, I&#39;m waiting for postmarketOS Duranium to become usable, and then make a decision whether to distro-hop based on the project state, since I&#39;ll have to reinstall anyways.&#xA;&#xA;Installing postmarketOS&#xA;I received my Fairphone 5 from a non-Amazon online store, skipped thru its Android setup, and directly accessed the hidden developer settings to unlock my bootloader. Fairphone, for whatever reason, has a convoluted step of inputting device data on their website to get a &#34;Bootloader Unlocking Code&#34;. I guess they want to track how many of their devices run unlocked… kind of uncool, as the phone&#39;s unlockability relies on them keeping this web tool up and running. 
I then went to follow the postmarketOS install instructions, but found there were multiple options. For example, if you want full-disk encryption (FDE), the &#34;pre-built image&#34; option does not provide it, and you must use the pmbootstrap CLI tool. Thankfully, the Fairphone 5 pmOS wiki page has instructions for this, which I followed without issue…&#xA;&#xA;…or so I thought! After the install, my phone booted showing the postmarketOS bootanimation (with Linux console output periodically eating away at the anim… looks kind-of broken but should be fixed once pmOS adopts Plymouth), asked for my FDE password, aaaand (:drums:) proceeded to get stuck on a black screen. Thankfully, I did this at CCCAC, and quickly got help troubleshooting. In the end, it turns out the &#34;Plasma Mobile&#34; package was broken, so I picked the other name that sounded like it&#39;d have a usable UI: &#34;GNOME Mobile&#34;.&#xA;&#xA;  [!NOTE]&#xA;  This is the part where I want to be really really clear that I don&#39;t intend to create any negativity toward Linux mobile projects. I will be complaining about things quite extensively, but only because I think it is really important to highlight what the experience is like as an end-user (and was asked to share it!). I have massive respect for the people that write the code that makes Linux on phones possible, and I hope to help make all this a reality. I cannot possibly be more excited about Linux Mobile right now, so much so that I&#39;m fully dedicated to using it day-to-day. &#xA;&#xA;GNOME Mobile&#xA;OK, so I installed GNOME Mobile instead of just going with Phosh. I expected the UI would be somewhat similar to the Phosh screenshots I&#39;d seen, but I was keen to explore different ways of interacting with a phone, and its name sounded more &#34;upstream&#34; than Phosh. 
GNOME is a high-quality desktop (if the defaults suit you), and in general I expect the defaults on a phone to be reasonable.&#xA;&#xA;After booting and entering my FDE password, I am first greeted by the full &#34;desktop&#34; version of GNOME Display Manager (GDM), asking me which user I want to log in as. Once I select myself (the only option), a full QWERTY on-screen keyboard (OSK) pops up and I get to type my user password, which is supposed to be a PIN, so it should just have a numberpad…&#xA;&#xA;After inputting my PIN, it looks like GDM crashes to a black screen, but after a few seconds a familiar GNOME interface pops up! …and proceeds to (only sometimes) ask for my password again, twice (with different-looking modals), to unlock the GNOME Keyring. Thankfully, these two extra password prompts have gone away recently, so I can only assume the bug is fixed. There&#39;s a nice &#34;welcome&#34; program that explains some of the basics, and tells me it&#39;s meant for enthusiasts. Well, here I am :)&#xA;&#xA;Screenshot of &#34;postmarketOS Welcome&#34; program explaining that it&#39;s meant for enthusiasts. That&#39;s me!&#xA;&#xA;GNOME Mobile presents the usual GNOME Desktop app grid, with the ability to swipe between multiple pages. The usual quick settings tiles can be accessed by swiping down from the top, and notifications awkwardly pile up under it, with a very small scrolling area. When an app is open, the homescreen will show that window and move the search bar out of the way. It took a little getting used to, but I find that I quite like this interaction model!&#xA;&#xA;video controls src=&#34;https://allpurposem.at/blog/pmos-gnomemobile.webm&#34; alt=&#34;Video of GNOME Mobile shell. User swipes around app grid, opens weather and files apps, then shows off the dropdown menu and finishes by visiting the settings&#34;&#xA;&#xA;The frame drops are entirely due to software encoding; normally it is snappier. 
Also the double Do Not Disturb button might be an extension messing with things, my bad!&#xA;&#xA;What I like less is how the icons in the app grid behave…&#xA;&#xA;video controls src=&#34;https://allpurposem.at/blog/pmos-appgridmoving.webm&#34; alt=&#34;Video of GNOME Mobile app grid. User tries to move the Camera icon right (to swap with Settings) but it refuses. User then moves Settings right (to the same effect) and it works&#34;&#xA;&#xA;Sometimes you can move icons, sometimes not. Making a folder is a nigh-impossible task (I must have tried a hundred times, and only succeeded three times in total). Also: perhaps I am imagining it, but sometimes I boot my phone and the icons have rearranged themselves, and other times some are just missing. I definitely lose apps every once in a while, but thankfully the search bar has my back to find them again.&#xA;&#xA;When I installed the OS, there were two settings apps, one being GNOME&#39;s own Settings and the other being &#34;postmarketOS Tweaks&#34; which let me change some things like the hostname. The latter app has since disappeared; I think it was moved into the Phosh settings app in this merged MR. I have not yet looked into restoring it as I quite like the hostname I picked: hermes, messenger of the gods and himself the god of trickery, two things that represent this phone quite well.&#xA;&#xA;The lockscreen is very usable. It shows a blurred version of my wallpaper, the time, and notifications I received (most without content… and no way to enable it). I can swipe up to show a PIN entry, which absolutely has to be 6 numbers. Sometimes it will eat some of the numbers I input and I have to try again but, when it works, it works well. The lockscreen does not allow poweroff or reboot; only suspend is available, and I could not figure out how to change that. 
Notifications on the lockscreen often say &#34;Just now&#34; regardless of when they were received.&#xA;&#xA;Screenshot of GNOME lockscreen, with the PIN entry pulled up&#xA;&#xA;Something really cool inherited from desktop GNOME is the &#34;GNOME Online Accounts&#34; feature, which allows signing into many different online services and integrating them into the OS. I added my self-hosted Nextcloud account and was happy to see it import everything. Tapping the date on the top-left shows my upcoming events and, when my modem is working, I can start SMS conversations in Chatty with my contacts imported from Nextcloud.&#xA;&#xA;Screenshot of GNOME Online Accounts open to Nextcloud, with Calendar, Contacts, and Files sync enabled&#xA;&#xA;It&#39;s a little bit funny that all GTK apps seem to insist on showing me their keyboard shortcuts in menus, despite no keyboard being attached. This makes dropdown menus much bigger than they need to be, but isn&#39;t a super big deal.&#xA;&#xA;Screenshot of the context menu on a CMakeLists.txt file in the files app, showing keyboard shortcuts for some actions like Alt-Return to see properties&#xA;&#xA;I found a couple ways to crash the shell, like closing an app while a popup or side menu is open, and apps themselves can crash when they try to do desktop things like lock the mouse cursor. However, when not doing weird stuff, the experience is pretty stable.&#xA;&#xA;Hapticsn&#39;t&#xA;The first thing I noticed after interacting a little bit with the UI is that there is zero haptic feedback. On Android, when you do certain actions such as typing or switching apps, the motor inside the phone makes a nice &#34;buzz&#34; as feedback. I didn&#39;t realize how much I came to rely on this until it was taken away from me. 
I was expecting it to work since the pmOS wiki page for my device says it works:&#xA;&#xA;Screenshot from postmarketOS wiki page showing Haptics should work&#xA;&#xA;After asking in the GNOME Mobile Matrix room, I learned that there is a service called feedbackd, which other interfaces talk to but GNOME Mobile does not. I have to imagine then that haptics would work on Phosh or other UIs, but (this will be a recurring theme) GNOME Mobile wants to do it a different way, and thus… hasn&#39;t done it yet. I plan to work on this at some point, especially after some productive discussion with other GNOME Mobile devs, but I haven&#39;t gotten much further than asking for input from the XDG Portals folks. I aim to take the time to work on it further, especially talking with feedbackd developers, but other matters have been more pressing so far. I at least was able to verify that the vibration motor can work by talking to it directly with the kernel&#39;s force feedback API.&#xA;&#xA;Auto-brightness&#xA;This was originally going to go in the next section, but fortunately while writing this blogpost it started working! Unfortunately, now I want it back off, and I can&#39;t find a way to disable it.&#xA;&#xA;Anyways, auto-brightness is supported in GNOME upstream, but never showed up in my power settings as the article says it should. I asked for help in the Matrix room and we verified that net.hadess.SensorProxy detects and exposes my phone&#39;s light sensor, so I guess the gnome-settings-daemon was failing to pick this up. Not having auto-brightness means that I have to fumble to find the brightness bar every time I go outside, as I can&#39;t see what&#39;s on my screen otherwise. Now that it works, it adjusts extremely quickly to any small changes in ambient light, which unfortunately overrides my own settings. The toggle still isn&#39;t there, so I can&#39;t just turn it off. 
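&#xA;For the curious, the light-sensor check we did amounts to a couple of D-Bus calls against iio-sensor-proxy. Here is a sketch, assuming busctl (from systemd) is on the phone; the object path, method, and property names are from iio-sensor-proxy&#39;s net.hadess.SensorProxy interface:&#xA;

```shell
# Does iio-sensor-proxy see an ambient light sensor at all?
busctl --system get-property net.hadess.SensorProxy \
  /net/hadess/SensorProxy net.hadess.SensorProxy HasAmbientLight \
  || echo "iio-sensor-proxy not reachable"
# Claim the sensor so the proxy starts polling it, then read a value
busctl --system call net.hadess.SensorProxy \
  /net/hadess/SensorProxy net.hadess.SensorProxy ClaimLight || true
busctl --system get-property net.hadess.SensorProxy \
  /net/hadess/SensorProxy net.hadess.SensorProxy LightLevel || true
```

&#xA;The monitor-sensor tool that ships with iio-sensor-proxy does the same thing interactively, which is handy for watching the values change as you cover the sensor.&#xA;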
The biggest issue with auto-brightness, now that it&#39;s here, is that it animates this transition, which causes issues with my OLED display&#39;s driver: these manifest as brief flashes of horizontal bands of garbage pixels, or the color grading of the entire display changing, or (admittedly this one&#39;s pretty cool) the display showing several copies of GNOME in a grid, Andy Warhol-style.&#xA;&#xA;Of course, while doing my final editing, auto-brightness is gone again, and I don&#39;t have to deal with the screen glitching anymore! I opened pma!4274 to track the display artifacts, so I know when I can try enabling auto-brightness again :)&#xA;&#xA;Smaller missing bits in the shell&#xA;&#xA;Flashlight&#xA;There&#39;s no way to toggle the flashlight. I found a merge request to the shell that would add it, but with no activity on that repository in the last 8 months I&#39;m not holding my breath for it to be merged. In the meantime I am running a GNOME Extension that adds this feature, which I had to manually install using the terminal as it does not appear to be packaged anywhere. I&#39;ll count this as the first required use of the terminal.&#xA;&#xA;OLED support&#xA;GNOME Mobile ships a dark theme, but no option for a &#34;pure black&#34; background. Back on Android I distro-hopped to LineageOS just for this feature. The GNOME &#34;dark&#34; theme is a light grey which might make sense on the usual LCDs that desktops or laptops have, but looks really bad on an OLED display, which most phones nowadays have. I tried researching how to theme GTK or &#34;libadwaita&#34; apps, since that seems to be mainly what my install came with, and only found references to a defunct app called &#34;Gradience&#34;. I was able to install a Flatpak of a fork that was updated slightly more recently, and navigated its desktop-only interface very awkwardly to set the background color to black. 
This worked for most apps thankfully, but I did not find a way to theme the actual GNOME shell, which unfortunately is still stuck as this ugly grey, and I&#39;ve no idea where to begin to try fixing this on my device. There&#39;s a &#34;User theme&#34; extension that can apply CSS to the gnome-shell, but no indication of what the CSS would look like to fix the background color.&#xA;&#xA;Qt apps functionality&#xA;I get that it&#39;s the GNOME shell, so I should be using GTK apps, but given the already limited Linux mobile app ecosystem, I need to be able to use apps written in other frameworks, even if they may look slightly different. I have two Qt apps installed at the moment: KDE Connect (integrate phone with desktop), and Kitinerary (find public transport routes). Both of them open with a blinding white background, with a very broken-looking interface. Tapping a text field does not bring up the keyboard (thankfully I can manually double-tap the gesture bar to get it). There is a window bar at the top with minimize, maximize, and close buttons. I haven&#39;t found a solution for these issues, but I did install &#34;Kvantum&#34; and &#34;qt6ct&#34; to at least change the background color. However, these apps still look &amp; feel extremely broken and it makes me sad that my experience of their developers&#39; hard work is ruined by the way they get shipped.&#xA;&#xA;Copy &amp; paste&#xA;It&#39;s not missing, but it might as well be, because the UI is so inconsistent and often buggy that I resort to manually typing out my crazy-long Bitwarden passwords. Though the Wayland clipboard works fine, each app seems to be expected to implement its own way to select text, copy, and paste. Libadwaita (GTK4) apps, which normally have pretty good UX, make it near-impossible to get the popup for copy/paste, and then the popup uses icons with no description, often leaving me guessing at their function (also it has a weird black outline). 
The &#34;Text Editor&#34; app has an especially tricky-to-use selection/popup interaction, probably on account of allowing text input and those interactions conflicting with selection (it took me over 30s to get it to pop up for the screenshot):&#xA;&#xA;Screenshot of the text editor with an array of many icons shown, one of them meaning copy&#xA;&#xA;On-screen keyboard customization&#xA;The default keyboard gets the job done for typing basic text, but (as far as I know) there is no option to change the keys available or how it works. On Android, I used and loved Unexpected Keyboard, which lets you type special characters via quickly &#34;flicking&#34; in many configurable directions from any given key. It also features a Ctrl key, which would entirely solve the copy-paste UI issue from above. It even binds a lot of desktop-like keys, like Escape, which is extremely useful when using e.g. Vim for text editing. I am aware of a similar project for Linux mobile called Unfettered Keyboard, but unfortunately (I think) it cannot run on GNOME Mobile due to GNOME Mobile&#39;s keyboard being part of the shell, rather than a separate program. If I do stick with GNOME Mobile, I will probably learn to write extensions and see if I can write some sort of shim that lets you plug in other keyboard programs like Unfettered Keyboard.&#xA;&#xA;Customization of the quick settings &amp; statusbar&#xA;When I enable location support, there is a constant location icon in the statusbar reminiscent of that green dot which finally drove me off Android. The quick settings tiles also seem to randomly change what&#39;s available, such as auto-rotate appearing and disappearing between reboots, and a mysterious &#34;Wired&#34; connection that&#39;s always on taking up the first slot. 
On a big screen, I wouldn&#39;t mind as much since I have space to spare, but on a phone I need to save space by not showing useless icons, and I especially need things to stay where they are to build up any kind of muscle memory. Thankfully there is the amazing Quick Settings Tweaks shell extension that lets me hide the irrelevant toggles &amp; icons, though it can&#39;t fix the rotation appearing and disappearing. Speaking of screen rotation…&#xA;&#xA;Screen rotation&#xA;When auto-rotate is off, this really behaves like a desktop. You can go in the system settings and manually select &#34;Portrait&#34;, &#34;Landscape Left&#34;, &#34;Landscape Right&#34;, or &#34;Portrait (Flipped)&#34;, then click &#34;Apply&#34; and finally confirm &#34;Keep changes&#34;. Android did this thing where it still reads the sensor, and shows a button for a second when you physically rotate the phone which you can tap to actually do it. I always ran my Android phone like this to prevent accidental rotation, and I would love to see this on GNOME Mobile. Maybe that&#39;s a good first contribution if I decide to dive into UI stuff.&#xA;&#xA;Rarely, I get lucky and an &#34;Auto-rotate&#34; toggle is in the quick settings. When it is enabled, rotation is instant the moment I tilt my phone. While I like the lack of animation, I do wish it were a little less sensitive as it&#39;s very easy to accidentally rotate. Thus, even in the rare event the feature is available, I keep it turned off. 
I think this issue is related to iio-sensor-proxy, as sometimes sudo systemctl restart iio-sensor-proxy brings it back, but other times this command gets stuck, so I am not sure.&#xA;&#xA;Battery life&#xA;Normally this&#39;d have gotten its own h2, but unfortunately the otherwise excellent battery life (which I estimate would last well over a day) gets completely trounced by two issues in (presumably) GNOME Mobile:&#xA;&#xA;Since suspending on mobile is not really a thing, battery life relies on apps using as little power as possible at all times, and suspending features when not used. I&#39;m not sure what part of the stack is responsible for this, but for example I would expect 3D rendering to not use power when the screen is off, or the camera app to suspend the camera hardware when unfocused. This is not the case right now, and for example I have had my phone die while going out because I forgot to swipe away the &#34;Camera&#34; app window, which caused it to stay on for several hours until the battery gave up. I&#39;d love to learn more about this!&#xA;&#xA;The gnome-shell process will randomly start consuming 100% of one CPU core, and not stop until restarted. I have to constantly feel my phone in my pocket in case it starts getting warm, and if so log out &amp; back into the shell to prevent it eating the entire battery life. This is tracked by this issue, and two weeks ago I spent quite some time trying to figure out the root cause and collecting profiler data. Unfortunately I don&#39;t see a fix on the horizon, and if it keeps happening I might have to switch off of GNOME Mobile entirely.&#xA;&#xA;Of note: my phone came with &#34;suspend&#34; enabled, which until recently would cause a kernel panic (I was quite confused why pressing the off button would cause a reboot!), so I disabled it in the GNOME Settings. Suspend is now fixed, but I don&#39;t think I can use it as it prevents all network-based apps from receiving notifications. 
Supposedly SMS &amp; calls can still wake up the device, but 95% of my communications go thru Matrix, and the remaining 5% are Signal, neither of which work while suspended.&#xA;&#xA;Something strange I encountered is that the phone will not charge over a USB-A cable. On Android it would charge slowly, so I could leave the phone plugged into my desktop&#39;s USB port while developing apps and test on it; postmarketOS, however, requires the phone to be plugged in with a proper C-to-C cable connected to a USB-C Power Delivery-capable power supply. The phone will also always show a notification prompt asking whether I want to transfer files via MTP (but doesn&#39;t actually do anything), &#34;developer&#34; (what?), or just charge, even if the cable is power-only.&#xA;&#xA;I usually leave my phone charging next to my bed, and most of the time this works; sometimes, however, I will find the phone really hot and displaying the full-disk encryption password prompt, which I guess means it kernel panicked. This also happened a few times in my pocket, and the password prompt keeping the screen on likely contributed to quite some power loss.&#xA;&#xA;Ok but what about the phone stuff&#xA;Yes yes, I&#39;m getting there. I just have a lot to say about it even before inserting a SIM card!&#xA;&#xA;The Fairphone 5 requires removing the battery to access the SIM slot, which at least they make very easy, but does force me to reboot to switch SIMs. These days I only use one though, so it shouldn&#39;t be a problem. pmOS detected my SIM and it showed up as &#34;Mobile Network&#34; in the Settings app, with the usual toggles expected from other OSes. I had to manually select the correct Access Point Name (APN) for my French cell operator, which I have experience with as I had to do the same on LineageOS (which actually was harder than on pmOS, since Lineage had me manually input all the settings!). 
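&#xA;Under the hood the cellular connection is handled by ModemManager and NetworkManager, so the same APN selection can also be done from a terminal. A hedged sketch (the connection name and APN below are made-up placeholders, not my operator&#39;s real values):&#xA;

```shell
# List the modems ModemManager has found (empty on a machine with no modem)
mmcli -L || echo "ModemManager not available"
# The Settings app stores the cellular connection in NetworkManager;
# the APN lives in the gsm.apn property of that connection
nmcli connection show || true
nmcli connection modify MyCarrier gsm.apn internet.example || true
```

&#xA;This is just the CLI view of what the Settings toggles do; on a working install the graphical APN picker is the easier path.&#xA;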
Once that was set up, I saw for the first time a nice 5G icon in my statusbar, indicating mobile data works! It&#39;s also the first time I get to use the 5G I pay for, since my previous phone only supported 4G.&#xA;&#xA;A picture of the Mobile Network settings page. I can toggle Mobile Data and Data Roaming, as well as select Network Mode, Network (set to o2 - de), Access Point Names, Sim Lock, and view Modem Details&#xA;&#xA;I can&#39;t seem to change the &#34;Network Mode&#34; option from its default setting of preferring 4G to one that prefers 5G: it lets me select it, but doesn&#39;t apply and shows me a popup that says Failed: reloaded modes (allowed &#39;2g,... (and trails off). I can probably find out more by using journalctl, but it has not bothered me enough to do so.&#xA;&#xA;What HAS bothered me is mobile data randomly becoming unavailable. One time it was caused by an update (that&#39;s what I get for running postmarketOS &#34;edge&#34;), which I reported and it got fixed in record time. Other times, however, it seems to happen quite randomly, like the auto-rotation disappearing (though less often thankfully!). This is usually remedied by a reboot, but is quite annoying. I&#39;d expect a notification saying my network connection failed, and I do get these, but only when I leave the range of a wifi network (which is very common given I carry the phone around!) or at other seemingly random times, not when mobile data disappears.&#xA;&#xA;Socials&#xA;The primary reason for carrying my phone around is being reachable, and reaching people when needed.&#xA;&#xA;SMS&#xA;The install came with this very nice app simply called &#34;Chats&#34; (but the process name reveals it is actually Chatty) which allows me to send &amp; receive SMS. It claims to support MMS, but it fails when I try to send one. The app also (!!!) 
let me log into my Matrix account and read messages in unencrypted rooms; sadly, this last bit is not useful to me as most of my communications are encrypted. If you don&#39;t encrypt your messages however, like in SMS, this is an awesome app that runs well and does what it says. I did have some trouble sending SMS to new numbers: one of my family members did not receive a rather time-sensitive message, but I was having similar-ish troubles on my Android phone, so it might be a carrier thing.&#xA;&#xA;Screenshot of Chatty open to a conversation where I sent &#34;hello from linux phone&#34; a month ago, and typed a draft now saying &#34;I don&#39;t really use SMS&#34;&#xA;&#xA;Matrix&#xA;On the Matrix side, I first tried Fractal to have the GTK experience; unfortunately it seems to run into similar crashing &amp; freezing issues to what I reported when I tried Fractal on desktop. Thankfully I was delighted to see that the Matrix client I used on Android is packaged on Flathub! FluffyChat runs great and basic messaging worked without any issues.&#xA;&#xA;Screenshot of FluffyChat open to the postmarketOS room. Several emojis in reactions are not rendered&#xA;&#xA;There&#39;s what is likely a packaging bug that prevents it from loading an emoji font, so I can&#39;t see most emojis. There&#39;s a permanent bar at the top of the app that just says &#34;FluffyChat&#34;, which I think is the GTK app trying to draw client-side decorations (probably GNOME Mobile should tell apps to, uh, not do that). Unfortunately, the Flutter code behaves like the desktop version, and lacks several rather important features. 
A non-exhaustive list:&#xA;&#xA;can&#39;t play or record voice messages, have to manually download them then open in an audio player&#xA;can&#39;t play videos, same deal&#xA;no option to take a photo to send&#xA;no notifications while the app is closed&#xA;notifications for the room you&#39;re looking at get filtered even if the screen is locked or the app window is unfocused&#xA;it makes every picture I send extremely green (flipping endianness I think):&#xA;&#xA;Picture of my cat Athos staring longingly at a door, except everything is Very Green&#xA;&#xA;I have fixed audio in a merge request, and I think video should be fairly simple to enable as well. I fear that taking photos will require some new XDG portal, so I&#39;ll be leaving that one for last. The notifications issue requires a UnifiedPush integration, which the Flutter package supports, but needs some work. I have a test version working on my desktop, but it is lacking a lot of logic for how to handle them. I hope they are merged quickly though, as I don&#39;t want to have to start to rebase a bunch of branches in a fork…&#xA;&#xA;Signal&#xA;Signal is what I give to people who don&#39;t want to invest an hour in picking a Matrix server, client, figuring out encryption, and then not saving their recovery key. This means some of my extended family and contacts from work. Thankfully there is a Signal client for mobile Linux called Flare which can send and receive messages including images (though it strips EXIF metadata, which means some photos get sent sideways or upside-down). It can&#39;t handle calling, but I usually make Signal calls on my desktop anyways, so it&#39;s not a big deal.&#xA;&#xA;Screenshot of Flare client open to a conversation&#xA;&#xA;Fediverse&#xA;I use Mastodon to access the Fediverse, and was very happy to discover @Tuba@floss.social. It implements basically everything I could want out of a Mastodon client, and looks pretty good while doing so. 
Just missing an OLED background, but I&#39;m pretty sure that&#39;s on me for making such a messy GTK theme. I&#39;d like to fix the background color at some point, though.&#xA;&#xA;Screenshot of Tuba open to their profile, showing a boosted toot (on my birthday!)&#xA;&#xA;Sadly, Tuba frequently triggers some bug in the Vulkan driver that causes it to print &#34;LOSTDEVICE&#34;, and the app gets totally frozen midway through sliding a view out. I don&#39;t know where to report this, but it means I can&#39;t navigate the Fediverse for very long before I get stopped in my tracks. Another freeze which might be related occurs when I write a too-long post or attach a picture; it probably triggers some re-layout that hits a GPU bug and freezes. I unfortunately have lost several surely-banger posts to this specific freeze. It also suffers from quite poor scrolling performance sometimes, potentially related to running out of Vulkan memory (I see that log message a lot).&#xA;&#xA;E-Mail&#xA;I installed Thunderbird, which brought along an extension called mobile-config-thunderbird, which promises to make the UI more usable on phones. Unfortunately, something goes terribly wrong and it doesn&#39;t render my inbox at all, so it&#39;s not particularly useful as an email client right now. It does send me notifications though (as long as the app window is open!!!), so at least I can tap on one to read the email, since that does render.&#xA;&#xA;Screenshot of Thunderbird, not rendering the inbox&#xA;&#xA;On the topic of notifications&#xA;Yeah, it&#39;s quite important to be able to see when one of these apps wants my attention! 
Thankfully everything I&#39;m running is FOSS, so there&#39;s no dark patterns to worry about here.&#xA;&#xA;Push notifications&#xA;I&#39;m not super qualified to explain this, but my surface-level understanding is that on both Android and iOS, there is a central server that the OS stays permanently connected to, and services you have apps for can &#34;push&#34; to that server, which then tells the OS to wake up the app so it can show you its notification. This heavily reduces power usage, and saves each app from implementing its own background service. On Android, this is implemented through Google Firebase Cloud Messaging, but thankfully an alternative exists in the form of UnifiedPush, which let me self-host my own push server that supporting services (Matrix and Mastodon, in my case) could use instead. This meant that Android apps like FluffyChat and Tusky didn&#39;t have to run in the background, but still showed me reliable notifications piped through my very own server, which my phone was always connected to.&#xA;&#xA;On postmarketOS, I was very pleased to find a UnifiedPush wiki page, but was a little worried to see only a KDE-specific implementation, with just a single app listed as supported. Thankfully I was able to install kunifiedpush on GNOME Mobile and write a config file to make it connect to my self-hosted Ntfy server. It was all a little manual (and required terminal usage #2, probably due to me running it outside of its native KDE), but it means apps can now register to it and it actually delivers notifications, nice! I am able to receive notifications from the Fediverse via Tuba, which supports UnifiedPush, and as stated earlier I began work on FluffyChat support for UnifiedPush on its Linux builds.&#xA;&#xA;Flare (Signal client) has an optional background service that keeps a connection to their servers, which is unfortunately required as Signal does not support UnifiedPush. 
SMS works fine as well.&#xA;&#xA;Actually seeing the notifications&#xA;Man, I really really hoped this would work! Unfortunately I have some experience with upstream GNOME not really showing me all notifications, so I should have expected this. Even for apps that do consistently send notifications for messages, like FluffyChat and Flare, I will usually only see the first notification in a conversation; subsequent messages get &#34;grouped&#34; (which is a nice feature UI-side! but), meaning I get no sound or pop-up for them. GNOME also doesn&#39;t show me any notifications while fullscreen, which, while I can understand the rationale, is not how I want it to work. This means that if I am watching a video fullscreen, I won&#39;t find out that my cooking timer has gone off until the video ends and I exit fullscreen!&#xA;&#xA;Oftentimes &#34;old&#34; notifications get stuck, and also display wrong times. This happens with FluffyChat notifications quite frequently, where I open my phone and it says I received a message from my dad &#34;Just now&#34; or claims it came recently, when I actually had a full conversation hours ago.&#xA;&#xA;Additionally, as explained earlier when talking about the lockscreen, it by default doesn&#39;t show the notification content for privacy reasons. I can enable showing content per-app in the GNOME settings, which would be great except it does not show every app, especially FluffyChat, which is the one I actually need to be able to read quickly.&#xA;&#xA;Pebble&#xA;Thankfully, I wear a Pebble smartwatch, and an amazing developer who goes by Muhammad maintains a Pebble connector app that can buzz my watch when I get a message (even when GNOME unwisely decides to hide the notification), like my watch used to do back on Android! Rockwork is an unofficial Pebble client for Ubuntu Touch, and with some work I was able to rebase an experimental non-Ubuntu-Touch backend for it written by Xela Geo. 
I abstracted some of the buildsystem further to make it usable as an Alpine package, and have been happily running Rockwork on my postmarketOS phone, with almost everything working. I opened a merge request to upstream, and if/once it is merged I hope to contribute my first package to Alpine.&#xA;&#xA;Screenshot of Rockwork, listing the apps installed on my watch&#xA;&#xA;I can control my music and read notifications on the watch, while opening RockWork lets me switch watchfaces and view historical step counter &amp; sleep data. I cannot overstate how awesome this is. Of course, it&#39;s not all perfect, though most of it can probably be blamed on my porting work (the app seems to work fine on its native Ubuntu Touch platform). I still need to get the app store and calendar sync to work, and there&#39;s a big problem with some apps using the XDG portal Notifications API, which GNOME implements privately and thus Rockwork can&#39;t eavesdrop on to forward to the watch. I don&#39;t know how I will solve this last one, and it currently means I don&#39;t get any SMS notifications.&#xA;&#xA;Using the camera&#xA;One of the things that made me pick the Fairphone 5 over other similar devices is the &#34;Partial&#34; status of the Camera (rather than &#34;Broken&#34;). When I got the phone, I was excited to try out the camera, as I usually take lots of pictures of my cat and of the different places I go. I didn&#39;t expect much given the rating, but I am mostly positively impressed at how well it works given the level of support. Using the built-in &#34;Snapshot&#34; camera app (which is the only one I got working), there is no way to change the focus or the zoom level, but you can take pictures and videos, as well as scan QR codes. The focus appears to be stuck at a fixed setting and does not auto-adjust. Only the wide-angle rear camera and the front selfie cam are supported by postmarketOS at the moment, probably due to a missing driver for the normal one. 
They both seem to have similar picture quality, so I won&#39;t test them separately (but all the pictures shown are taken with the wide-angle). By default, pictures are very dark and green-tinted, especially indoors. However, if I cover the sensor with my hand (or point the camera at a bright light) for a bit, then when I uncover it the colors will briefly be a bit brighter and less green and I can take my picture, which ends up a lot better (but still dark):&#xA;&#xA;Picture of my cat Khoshekh&#xA;&#xA;It doesn&#39;t deal well with shooting when a light is in shot, as it seems to get overexposed.&#xA;&#xA;Picture of sign that says WARNING: Do not dumb here. No dumb area. Somewhat overexposed by LEDs in frame&#xA;&#xA;Trying to take a video used to freeze the phone for a few seconds, then reboot it (kernel panic?); however, as of last week it no longer does this, though recording is extremely laggy. I don&#39;t know for sure, but I think the GPU drivers might not (yet?) support hardware video encoding or decoding (except the wiki says it does, assuming it is &#34;Venus&#34;), so the result is not very usable yet. Here&#39;s a recording of where I&#39;m writing this blogpost (I promise my lights are on):&#xA;&#xA;video controls src=&#34;https://allpurposem.at/blog/pmos-video.mp4&#34; alt=&#34;Video pans around a desk with three monitors and an IBM Model M keyboard. It is very laggy, but recognizable.&#34;&#xA;&#xA;The camera app allows viewing recent photos &amp; videos, but offers no way to zoom into them or rotate media after the fact. There is also no standalone gallery app I could find, so viewing media is unfortunately quite awkward. Maybe once I finally set up Immich, the website can stand in for a gallery app.&#xA;&#xA;Either way, once I find a way around the FluffyChat image endianness bug, I will feel quite happy sending some of these pictures to family &amp; friends.&#xA;&#xA;Audio&#xA;Oh, yeah, I haven&#39;t talked about this one yet. 
The pmOS wiki page lists Audio as &#34;Broken&#34; for my device, and indeed this was the case when I first installed pmOS. However, I saw that a lot of work was being done in this area and felt that I could trust these amazing folks to get it working. My PineBuds (Bluetooth earbuds) paired fine and allowed me to listen to a couple YouTube videos in the meantime. Lo and behold, a few weeks into daily driving this phone I got to start enjoying the speakers on my Fairphone 5 via pma!7700, which I installed on my device thanks to Mr. Test. As I&#39;m writing this, the MR is now merged and should be built soon!&#xA;&#xA;  ![NOTE]&#xA;  It&#39;s called mrtest, a tool for testing Merge Requests, but I keep reading it like Mister Test and so I will make you read it that way at least once   :)&#xA;&#xA;The speakers don&#39;t sound quite right, and sometimes go wonky until I do a suspend-resume cycle, but it&#39;s already extremely impressive work by everyone involved. It was very exciting to follow the discussion and see the first few demos from the devs, featuring classics like Rick Astley singing his one and only hit single through the speakers. I&#39;m told microphone support is coming soon, which will allow me to start doing VoIP calls via Jitsi or MatrixRTC with friends!&#xA;&#xA;Calls&#xA;Note how I specified VoIP… yeah, calls are their own thing. Even with speaker &amp; mic working, more work will need to be done for call audio (which is a separate issue because of weird modem reasons). I can confirm that making phone calls works, as in, I can make someone&#39;s phone buzz, and they can make my phone (not buzz because GNOME doesn&#39;t implement haptics but) show a call notification—if nothing is fullscreen of course—that I can use to pick up.&#xA;&#xA;I believe there is some extra complexity in Germany with VoLTE support being required, but I&#39;ll find out for sure once the call audio stuff is in place. 
Let&#39;s just hope I don&#39;t need to take any important calls anytime soon!&#xA;&#xA;Web browsing&#xA;I only tried Firefox, as it was installed by default on my phone (with the mobile-config-firefox configuration).&#xA;&#xA;Interface&#xA;The UI in portrait mode is reminiscent of Firefox for Android with the URL bar at the bottom, except tabs are always displayed. I would like the tabs to auto-hide or, even better, to browse them in a grid like the Android version provides, but this is perfectly usable. A right-click action can be simulated by long-tapping, which will also select the word you long-pressed. If you tap on the word again, the right-click menu closes and you can drag selection handles to select more/less text, then long-tap again to act on it.&#xA;&#xA;Screenshot of Firefox open to the high CPU usage issue in GNOME Mobile. Some text is selected and the context menu is open with options for Copy, Select All, Print, Translate, and some extensions such as Ffck it&#39;s button, uBlock&#39;s Block element, and Bitwarden&#39;s autofill&#xA;&#xA;In landscape mode, the UI moves to the top of the window and permanently takes up about one-third of the screen, given that both the URL bar and the tabs are always visible and neither can be collapsed. This makes the landscape mode functionally useless, as there is not enough space to interact with page content. The only time I use it is when I want to fullscreen a video, which thankfully can be easily done by double-tapping on the media. The &#34;popout player&#34; is also activatable, though unfortunately GNOME Mobile does not allow floating windows to overlay other apps, so it&#39;s not useful like it is on desktop.&#xA;&#xA;Screenshot of Firefox as described in landscape mode. 
It is open to the legendary YouTube video My Hands Are Bananas&#xA;&#xA;The HTML select tag (used for dropdown selections) works exactly like on desktop, with very small touch targets, and is not scrollable, making only a few entries near the top of the list selectable.&#xA;&#xA;When interacting with the popup menus that appear when tapping one of the many permanently-visible buttons in the bottom bar, I found that there is no intuitive way to close them. Tapping outside of the menu does nothing, and clicking the button that opened it simply flickers it off-then-back-on. Thankfully, I found that tapping the URL bar pops up the OSK, which I can then dismiss to get back to the page. This is quite awkward to do, but lets me use most of the browser features. &#xA;&#xA;Extensions&#xA;This being the full version of Firefox, all extensions are available to install, and I was really happy to get my favorites (uBlock Origin, Dark Reader, LibRedirect+Indie Wiki Buddy, Stylus, and Consent-O-Matic) synced from my Firefox Account. I set Dark Reader to force every page to use an OLED-black background, set up my AI-blocking stuff on Stylus, and configured LibRedirect to point to my favorite frontends for websites I do not wish to send traffic to. I did have to make sure to disable settings sync in the Dark Reader preferences, as otherwise the OLED preference got automatically copied to all my desktops!&#xA;&#xA;Unfortunately extensions suffer from the same &#34;popup menu&#34; behavior described in the previous section, and have the extra issue of only part of the menu being rendered (however, the entire menu is interactive, so if you know your way around you can still blindly navigate):&#xA;&#xA;Consent-O-Matic extension menu gets cut off&#xA;&#xA;Thankfully, the Bitwarden extension has a &#34;pop out&#34; mode that puts it in its own window (and that window does get fully rendered!). The button to trigger this is always in the same spot, so I can reliably blindly tap it. 
However, the popout window replaces Firefox in GNOME Mobile, so if I have Bitwarden open I cannot see Firefox. It also triggers the app overview a couple seconds after opening (probably a GNOME Mobile bug), which often interrupts me typing my master password and causes me to accidentally launch whatever app appeared where the OSK key I was aiming for was. There is a native Bitwarden client called BitRitter, but the last commit was over a year ago so I fear it may suffer a similar fate to Goldwarden. There is also to my knowledge no system-wide &#34;autofill&#34; API for Linux that would allow a password manager to fill login details into non-web apps. &#xA;&#xA;Surfing the web&#xA;Websites themselves render and feel great: despite this technically being &#34;Firefox Desktop&#34;, they correctly detect by other means that this is a phone. I did get locked out of Google.com (something about my browser being unsupported), but it served as a good slap on the wrist, reminding me to instead use an alternative frontend to Google Search, such as Startpage. I checked again while writing and it seems they now &#34;support&#34; my browser. I read news articles, browsed blogs, and used Piped to access YouTube videos without too many issues. It seems that Firefox does not unload tabs very readily, as a few times my phone ran out of RAM, and the entire browser got killed by the OOM daemon, so I&#39;ve been careful to keep my tab count low.&#xA;&#xA;Terminal&#xA;Of course, we can&#39;t talk about a Linux distro without mentioning the terminal. My GNOME Mobile install came with a terminal emulator it calls &#34;Console&#34;, but I was able to determine by inspecting the running processes that it is actually kgx (I see this project explicitly bans LLM contributions, and I applaud that!). The interface is well-adapted to my display, and has a nice feature that shows you a preview of all your open sessions/tabs in a grid. 
When using the terminal, the OSK gains some extra buttons for Tab, Ctrl, Alt, and the arrow keys.&#xA;&#xA;The default shell is Alpine&#39;s own default ash, with no colored prompt, tab completion, or other features I am used to, like this s{imple,yntax} from Bash, which I especially miss when having to type commands via a touchscreen, where every saved keystroke counts. Although fish is available on the Alpine repos, kgx offers no way I could find to launch fish instead of ash. I don&#39;t want to set my system default shell to fish as it&#39;s not POSIX-compliant, and would much prefer to only have interactive sessions (kgx and possibly ssh) launch with it.&#xA;&#xA;Although the interface adapts well to mobile, it does not have any mobile-specific features which would be very welcome, mainly:&#xA;&#xA;Ability to select &amp; copy text. It&#39;s very much mouse controls here: you can double-tap to select a word and triple-tap to select a line, but there is no way to grow/shrink the selection. When I am asked to share log output, I have to triple-tap each line, use the OSK to hit Ctrl+Shift+C (which is only possible thanks to the extra keys that appear), and one-by-one paste them into FluffyChat.&#xA;&#xA;Following the system theme, or allowing a custom theme to be set. An off-grey is used as the background color instead of my configured pure-black background, which is important on OLED displays. The only setting I could find is a toggle in the hamburger menu for switching between light and dark mode. There is an open issue for custom color themes, with a linked merge request which sadly has had no response from the maintainer in the four years since a change was requested (a change which has long since been implemented). I really hope it moves forward…&#xA;&#xA;Pinch-to-resize. This one&#39;s more of a nitpick, but I run into it quite often so I&#39;m putting it here. 
Termux did this on Android, and it meant that any TUI app that wanted more space could be very quickly and easily dialed in. On kgx, I have to use the hamburger menu to access a + and - button, and it only allows going down to half of the default size, which is not always enough to display e.g. btop.&#xA;&#xA;Other than that though, it serves its purpose in a pinch (mostly restarting iio-sensor-proxy every once in a while) and with a couple small-ish changes I&#39;d be very happy to use it.&#xA;&#xA;Some other apps worth mentioning&#xA;&#xA;GNOME Software is an app store that comes with the install and helped me discover a lot of awesome mobile-friendly apps from Flathub. I also believe there is a bunch of software on aports+pmaports, but sadly it is not at all discoverable via GUI; you have to use the terminal and already know the package name. This is the case for polycule, for example, a very functional Matrix client built using the same SDK as FluffyChat, which I tried out but could not get the UI working very well on GNOME Mobile. GNOME Software does seem to have some support for Alpine packages, as it notifies me every day about &#34;System Updates&#34; (despite having explicitly configured it to NOT check updates automatically), which, when tapped, list some Alpine packages. However, when I accept these updates, it doesn&#39;t always finish, and even if it does, running sudo apk update &amp;&amp; sudo apk upgrade in a terminal gets even more updates. So: terminal requirement #3 is for updating the system. Flatpak updates work fine, however. 
I did notice that after installing updates it will show a message saying &#34;Last checked: X days ago&#34;, so I don&#39;t know where it gets these from.&#xA;&#xA;Screenshot of GNOME Software open to the Socialize section, showing a few browsers, a translation app, and an XMPP client&#xA;&#xA;KDE Connect is installable and, after manually enabling the firewall rules with the terminal (terminal requirement #4), was able to connect to my desktops. This lets me use a keyboard &amp; mouse without plugging them in, control media, and do a few other things, though I did not get the file sending feature to work. The theme looks really bad because my Qt themes are broken as discussed earlier.&#xA;&#xA;Screenshot of KDE Connect hooked up to a device called mel, with options such as Multimedia Control and Send Clipboard&#xA;&#xA;My favorite desktop calculator, called Qalculate! (yes with the exclamation mark!), is available but not mobile-friendly; however, it can be made usable by navigating to File→Minimal window, and all the crazy unit conversions and math features are there. It does hide the history though, so I guess a mobile-native client/mode would still make sense.&#xA;&#xA;Screenshot of qalculate-gtk showing a conversion of 3lb/s to m/h, which makes no sense but Qalculate still finds a way to do it&#xA;&#xA;There&#39;s a YouTube client with a quite nice UI called Pipeline (fka. Tubefeeder), but I only got it to play a video once. It uses something called &#34;Clapper enhancements&#34; to play YouTube videos, and this doesn&#39;t seem to pull an up-to-date version of yt-dlp, as I get error messages about formats missing. I also tried the Flatpak, but that one complains about missing video decoding codecs on my phone, so it does not bode well. Something nice about Tubefeeder is that it lets me select a Piped instance, but I did not yet find a way to have it sync my subscriptions like LibreTube does on Android. 
It also suffers from a freezing bug quite similar to Tuba&#39;s, so I assume they both trigger the same Vulkan &#34;lost device&#34; codepath.&#xA;&#xA;Screenshot of Pipeline open to CarlSagan42&#39;s channel&#xA;&#xA;I used an RSS reader called Pulp for a bit until I touched a setting that makes it crash on startup. I&#39;m now using Newsflash, which nicely syncs from my Nextcloud News, but only displays the article content in a narrow centered column that does not follow my libadwaita theme, so I don&#39;t use it very often. I opened an issue to track this.&#xA;&#xA;Screenshot of Newsflash open to the pmOS blog&#xA;&#xA;Android apps&#xA;As much as I&#39;d like to use Linux exclusively, there are some cases where being able to fall back to Android is very useful. This is where Waydroid comes in, running an entire LineageOS image with optional Google Play Services inside a container. I installed it from GNOME Software, selected my desired Android image in a dialog it presented to me, and (after debugging a lot in the terminal due to silent crashing, but I don&#39;t remember what it was so I don&#39;t have a cool story to share sadly) I now have a working Android system I can boot into! &#xA;&#xA;Besides the Waydroid app itself, which contains the Android interface in a window, there&#39;s the really cool feature of running Android apps as their own windows. This means that, once Waydroid has booted, the apps really are quite seamlessly integrated into my shell! They have their own launcher icons, native windows, and can be closed by swiping them away. I can even use Android&#39;s back gesture (only for Waydroid apps), if I enable it inside Android&#39;s own settings app! There is noticeable input lag when interacting with these apps, which is a shame, but not a dealbreaker for me as they are only fallbacks.&#xA;&#xA;I was very excited to run some of the apps I really miss from Android, and can happily report Öffi from F-Droid works great for public transport planning. 
I also tried to install the excellent OpenStreetMap client CoMaps (community fork of Organic Maps), which ran great, but I discovered that GPS is not bridged to Waydroid, so it&#39;s not actually able to give me directions to places. There&#39;s an issue tracking this in the Waydroid repository and some workarounds shared via a debugging &#34;mock GPS&#34; feature, but I didn&#39;t manage to get any working, and I would much rather have this as a hardware bridge that the Android system sees as physical hardware, much like how network is bridged through a fake wired connection.&#xA;&#xA;video controls src=&#34;https://allpurposem.at/blog/pmos-waydroid.webm&#34; alt=&#34;Video shows user opening an app called Sudoku which shows the LineageOS boot animation for a few seconds, then opens up the app as if it were native to the phone. F-Droid is then opened, and both apps are shown side-by-side in the overview.&#34;&#xA;&#xA;I also tried to get my bank app running, as it is one of the only two things still tethering me to my old Android phone. The bank app is required to sign into the bank website from a new device, and to do sensitive operations like sending money to an account, so I unfortunately cannot &#34;just use the website.&#34; The app allows signing into it in two ways:&#xA;&#xA;Taking a photo of my ID and doing a &#34;live selfie.&#34; This is where I discovered Waydroid does not bridge the camera, so that was a dead end.&#xA;&#xA;Proving physical proximity to my old phone, and accepting prompts on the logged in app there. This is where I discovered Waydroid does not bridge Bluetooth devices.&#xA;&#xA;So, I won&#39;t be getting rid of my old phone just yet. 
I have to charge it every few weeks when a bank thing pops up, and I guess Signal will bother me about my &#34;Primary Device&#34; eventually as well (the Android app is on my old phone; though Flare seems to have experimental support for being the primary device, I do not yet fully trust it not to lose my data and likely won&#39;t use it until it&#39;s considered stable). Thankfully these are rather rare occasions, and I can somewhat safely only carry around my postmarketOS phone!&#xA;&#xA;There&#39;s an alternative project to Waydroid called Android Translation Layer which takes the WINE approach of &#34;natively&#34; running programs made for other OSes, foregoing the container approach entirely. In theory, this should let apps integrate even better, and potentially even pass through to apps the hardware access that Waydroid so sorely lacks. There&#39;s a super impressive NewPipe port on Flathub using this. Unfortunately, I was unsuccessful in using the binary to run any of the apps that their own compatibility list says are supported, which I guess is likely a packaging issue on pmOS&#39;s side. I&#39;m keeping my eye on this project though!&#xA;&#xA;Conclusion&#xA;Despite me complaining so much (sorry!), I am extremely impressed with the state of Linux on mobile, and every doubt I had that I would regret moving to it has mostly been erased. Most of the problems I list are minor papercuts and should be relatively easy to solve. They should make for easy targets when any of them annoys me enough that, instead of writing about it, I actually set up a dev environment and try fixing it. The community has been incredible, responding to all sorts of questions and often helping me live-debug issues with whatever crazy thing I&#39;m trying to get working. 
I especially want to re-shout-out the folks at CCCAC, without whom I don&#39;t think I would have taken the plunge and actually spent nearly 500€ on a device exclusively to run their software.&#xA;&#xA;This blog has been dormant for a while, but with my recent adventures I am sure I will have plenty to write about, so perhaps expect some more writings once I land my first contribution to the OS!&#xA;&#xA;While Android is &#34;free,&#34; we all pay for its development by ceding control (and data) to Google, further strengthening its grip on half of the mobile OS duopoly. Since I stopped paying Google, I have now set up a recurring monthly donation to the postmarketOS team, and will look into supporting individual projects that I use every day on my phone to ensure development can continue and the amazing volunteers keeping this dream alive are remunerated for their efforts. A huge THANK YOU to everyone involved in #LinuxMobile for making the computer in my pocket possible!&#xA;&#xA;No LLM was used to write this. As always, feel free to direct any corrections or feedback to my fediverse account @mat@allpurposem.at.&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at]]&gt;</description>
      <content:encoded><![CDATA[<p>I&#39;ve used Android for longer than I have had a phone. The first computer I owned was an Android 4 tablet[^bq], which I received as a gift for my 8th birthday, alongside a shiny Google Mail account (huh, I guess I was breaking TOS since the beginning lol). Since then, I have fully engaged in the Android ecosystem, running relatively up-to-date OS versions on both smartphones I have owned, at least until the last one stopped receiving updates with Android 12L. I then looked into alternatives, and floated around options like GrapheneOS and AICP until I settled on the amazing <a href="https://blog.allpurposem.at/@/LineageOS@fosstodon.org" class="u-url mention">@<span>LineageOS@fosstodon.org</span></a> since they kept maintaining the OLED “black” theme (thank you!!) that was removed on vanilla Android 12. Oh, and they support the latest Android releases on otherwise-obsolete devices like my poor Pixel 3a. Unfortunately, all these projects are based on the Android Open Source Project (AOSP), which is developed solely by Google and thus puts them at its whim. Sound familiar? The main reason I still stick with Firefox despite the AI stuff is that it&#39;s the <em>only</em> remaining browser that doesn&#39;t… 100% depend on and is at the whim of Google&#39;s Chromium project.</p>

<p>[^bq]: the tablet was made by BQ, a defunct Spanish company which, coincidentally, was the first to launch an Ubuntu Touch device: <a href="https://www.zdnet.com/article/first-ubuntu-smartphone-aquaris-e4-5-launches-into-cluttered-mobile-market/">https://www.zdnet.com/article/first-ubuntu-smartphone-aquaris-e4-5-launches-into-cluttered-mobile-market/</a></p>

<h2 id="why-i-m-leaving-my-comfort-zone">Why I&#39;m leaving my comfort zone</h2>

<p>So: what&#39;s Google done this time to drive me off of Android? Maybe it&#39;s the recent lack of punctuality of AOSP source code releases, being <a href="https://www.androidauthority.com/android-16-qpr1-source-code-available-3614853/">several months late</a> and thus preventing alternative Android distributions from staying up to date (and now as of writing this post, they will <a href="https://www.androidauthority.com/aosp-source-code-schedule-3630018/">reduce the AOSP source code update frequency</a> from quarterly to only twice per year?? this does not bode well, yikes). Maybe it&#39;s their (for now <a href="https://android-developers.googleblog.com/2025/11/android-developer-verification-early.html">slightly less bad</a>) attempt to <a href="https://www.ghacks.net/2025/08/26/google-wants-android-app-developers-to-verify-their-identity-this-could-affect-sideloading-apps/">require passport verification &amp; payment</a> to distribute apps <em>even <a href="https://f-droid.org/en/2025/09/29/google-developer-registration-decree.html">outside of their Play Store</a></em>? Perhaps it&#39;s their <a href="https://github.com/nextcloud/android/issues/12729">repeated</a> <a href="https://forum.syncthing.net/t/discontinuing-syncthing-android/23002">sabotage</a> of open source apps&#39; ability to function?</p>

<p>Yes to all, but I will blame this little green boi:
<img src="https://allpurposem.at/blog/pmos-androidlocindicator.png" alt="Green location indicator from Android 12">
In case you don&#39;t know (so lucky), this is <a href="https://source.android.com/docs/core/permissions/privacy-indicators">a feature introduced by Google for Android 12</a> which helpfully pops up to tell you when a (non-Google Play Services) app accesses your location. While on the surface the location indicator is a good idea, <em>there is no user-facing whitelist</em> and thus the Home Assistant or weather apps, both of which are FOSS and which I thus fully trust, would cause the green dot to pop up with all its animations for a few seconds, <strong><em>every five minutes</em></strong>. It literally drove me crazy, and was the last straw for me to finally consider an alternative.</p>

<h2 id="the-distro">The distro</h2>

<p>Now, there are a few distributions of Linux for phones, but with my goal of moving away from Google I did not want to settle for something that still depends on Android; otherwise we&#39;d have the Chromium problem all over again. This means that all the distros that depend on <a href="https://halium.org/">Halium</a> to run a Linux userspace don&#39;t qualify, despite being the most likely to have working hardware, given they use the Android drivers and kernel. That basically leaves only one option: the Alpine-based <a href="https://postmarketos.org">postmarketOS</a>. I got to try a device running it at last year&#39;s FrOSCon and found its performance very impressive, though I got some fair warnings about missing drivers for stuff like the fingerprint reader. I also read <a href="https://blog.allpurposem.at/@/neil@mastodon.neilzone.co.uk" class="u-url mention">@<span>neil@mastodon.neilzone.co.uk</span></a>&#39;s excellent blogposts about his experience trying it on a OnePlus 6 phone. I asked around the <a href="https://blog.allpurposem.at/@/postmarketOS@treehouse.systems" class="u-url mention">@<span>postmarketOS@treehouse.systems</span></a> Matrix rooms, and my awesome local Aachen hackspace <a href="https://blog.allpurposem.at/@/CCCAC@chaos.social" class="u-url mention">@<span>CCCAC@chaos.social</span></a> where I spotted someone with postmarketOS stickers on their laptop (if you&#39;re reading this, hi!), and finally settled on buying a Fairphone 5 to run pmOS on. The reason I did not install it on my existing Pixel 3a is twofold:</p>
<ol><li><p>It has very little RAM. You <em>can</em> run pmOS on 3GB, but it doesn&#39;t feel very future-proof given the state of the web (also, I don&#39;t know how heavy Waydroid is)</p></li>

<li><p>I don&#39;t want to sacrifice my existing Android install. What if I need to do something that only works on Android (foreshadowing)? What if I delete important files while flashing pmOS?</p></li></ol>

<p>I definitely think <a href="https://mobile-nixos.github.io">Mobile NixOS</a> is worth a look for its immutability &amp; reproducibility, which makes a lot of sense on a smartphone. They don&#39;t list my device on their <a href="https://mobile-nixos.github.io/mobile-nixos/devices/index.html">Devices List</a>, but I&#39;m sure I could get it working like pmOS given enough tinkering. NixOS gives me this safety cushion of being able to easily roll back my entire system if something goes wrong. Meanwhile, every time I touch a system file on postmarketOS I feel like I&#39;m committing a crime, and every update I do feels like a gamble on whether my phone will keep booting or not. For now, I&#39;m waiting for postmarketOS <a href="https://gitlab.postmarketos.org/postmarketOS/duranium">Duranium</a> to become usable, and will then decide whether to distro-hop based on the state of each project, since I&#39;ll have to reinstall anyways.</p>

<h2 id="installing-postmarketos">Installing postmarketOS</h2>

<p>I received my Fairphone 5 from a non-Amazon online store, skipped thru its Android setup, and directly accessed the hidden developer settings to unlock my bootloader. Fairphone, for whatever reason, has a convoluted step of <a href="https://www.fairphone.com/en/bootloader-unlocking-code-for-fairphone/">inputting device data on their website</a> to get a “Bootloader Unlocking Code”. I guess they want to track how many of their devices run unlocked… kind of uncool, as the phone&#39;s unlockability relies on them keeping this web tool up and running. I then went to follow the postmarketOS install instructions, but found there were multiple options. For example, if you want full-disk encryption (FDE), the “pre-built image” option does not provide it, and you must use the <code>pmbootstrap</code> CLI tool. Thankfully, <a href="https://wiki.postmarketos.org/wiki/Fairphone_5_(fairphone-fp5)#Installation">the Fairphone 5 pmOS wiki page</a> has instructions for this which I followed without issue…</p>

<p>…or so I thought! After the install, my phone booted showing the postmarketOS bootanimation (with Linux console output periodically eating away at the anim… looks kind-of broken but should be fixed once <a href="https://gitlab.postmarketos.org/postmarketOS/pmaports/-/merge_requests/7482">pmOS adopts Plymouth</a>), asked for my FDE password, aaaand (:drums:) proceeded to get stuck on a black screen. Thankfully, I did this at CCCAC, and quickly got help troubleshooting. In the end, it turned out the “Plasma Mobile” package was broken, so I picked the other name that sounded like it&#39;d have a usable UI: “GNOME Mobile”.</p>

<blockquote><p>[!NOTE]
This is the part where I want to be really <strong>really</strong> clear that I don&#39;t intend to create any negativity toward Linux mobile projects. I <strong>will</strong> be complaining about things <em>quite extensively</em>, but only because I think it is really important to highlight what the experience is like as an end-user (and I was asked to share it!). I have massive respect for the people that write the code that makes Linux on phones possible, and I hope to help make all this a reality. I cannot possibly be more excited about Linux Mobile right now, so much so that I&#39;m fully dedicated to using it day-to-day.</p></blockquote>

<h2 id="gnome-mobile">GNOME Mobile</h2>

<p>OK, so I installed GNOME Mobile instead of just going with <a href="https://phosh.mobi/">Phosh</a>. I expected the UI would be somewhat similar to the Phosh screenshots I&#39;d seen, but I was keen to explore different ways of interacting with a phone, and its name sounded more “upstream” than Phosh. GNOME is a high-quality desktop (if the defaults suit you), and in general I expect the defaults on a phone to be reasonable.</p>

<p>After booting and entering my FDE password, I am first greeted by the full “desktop” version of <a href="https://en.wikipedia.org/wiki/GNOME_Display_Manager">GNOME Display Manager</a> (GDM), asking me which user I want to log in as. Once I select myself (the only option), a full QWERTY on-screen keyboard (OSK) pops up and I get to type my user password, which is supposed to be a PIN, so it should really just show a number pad…</p>

<p>After inputting my PIN, it looks like GDM crashes to a black screen, but after a few seconds a familiar GNOME interface pops up! …and proceeds to (only sometimes) ask for my password again, <em>twice</em> (with different-looking modals), to unlock the GNOME Keyring. Thankfully, these two extra password prompts have gone away recently, so I can only assume the bug is fixed. There&#39;s a nice “welcome” program that explains some of the basics, and tells me it&#39;s meant for enthusiasts. Well, here I am :)</p>

<p><img src="https://allpurposem.at/blog/pmos-welcome.png" alt="Screenshot of &#34;postmarketOS Welcome&#34; program explaining that it&#39;s meant for enthusiasts. That&#39;s me!"></p>

<p>GNOME Mobile presents the usual GNOME Desktop app grid, with the ability to swipe between multiple pages. The usual quick settings tiles can be accessed by swiping down from the top, and notifications awkwardly pile up under it, with a very small scrolling area. When an app is open, the homescreen will show that window and move the search bar out of the way. It took a little getting used to, but I find that I quite like this interaction model!</p>

<p><video controls="" src="https://allpurposem.at/blog/pmos-gnomemobile.webm"></video></p>

<p>The frame drops are entirely due to software encoding; normally it is snappier. Also, the double <code>Do Not Disturb</code> button might be an extension messing with things, my bad!</p>

<p>What I like less is <a href="https://mastodon.gamedev.place/@allpurposemat/115652388771429243">how the icons in the app grid behave…</a></p>

<p><video controls="" src="https://allpurposem.at/blog/pmos-appgridmoving.webm"></video></p>

<p>Sometimes you can move icons, sometimes not. Making a folder is a nigh-impossible task (I must have tried a hundred times, and only succeeded three times in total). Also: perhaps I am imagining it, but sometimes I boot my phone and the icons have rearranged themselves, and other times some are just missing. I definitely lose apps every once in a while, but thankfully the search bar has my back to find them again.</p>

<p>When I installed the OS, there were two settings apps, one being GNOME&#39;s own Settings and the other being “postmarketOS Tweaks” which let me change some things like the hostname. The latter app has since disappeared, I think moved to the Phosh settings app in <a href="https://gitlab.postmarketos.org/postmarketOS/pmaports/-/merge_requests/7678">this merged MR</a>. I have not yet looked into restoring it as I quite like the hostname I picked: <code>hermes</code>, messenger of the gods and himself the god of trickery, two things that represent this phone quite well.</p>

<p>The lockscreen is very usable. It shows a blurred version of my wallpaper, the time, and notifications I received (most without content… and <a href="https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues/3051">no way to enable it</a>). I can swipe up to show a PIN entry, which <em>absolutely has to</em> be 6 numbers. Sometimes it will eat some of the numbers I input and I have to try again but, when it works, it works well. The lockscreen does <em>not</em> allow poweroff or reboot; only suspend is available, and I could not figure out how to fix that. Notifications on the lockscreen often say “Just now” regardless of when they were received.</p>

<p><img src="https://allpurposem.at/blog/pmos-lockscreen-pin.png" alt="Screenshot of GNOME lockscreen, with the PIN entry pulled up"></p>

<p>Something really cool inherited from desktop GNOME is the “GNOME Online Accounts” feature, which allows signing into many different online services and integrates them into the OS. I added my self-hosted Nextcloud account and was happy to see it import everything. Tapping the date at the top-left shows my upcoming events and, when my modem is working, I can start SMS conversations in Chatty with my contacts imported from Nextcloud.</p>

<p><img src="https://allpurposem.at/blog/pmos-gnomeonlineaccounts.png" alt="Screenshot of GNOME Online Accounts open to Nextcloud, with Calendar, Contacts, and Files sync enabled"></p>

<p>It&#39;s a little bit funny that all GTK apps seem to insist on showing me their keyboard shortcuts in menus, despite no keyboard being attached. This makes dropdown menus much bigger than they need to be, but isn&#39;t a super big deal.</p>

<p><img src="https://allpurposem.at/blog/pmos-gtkshortcuts.png" alt="Screenshot of the context menu on a CMakeLists.txt file in the files app, showing keyboard shortcuts for some actions like Alt-Return to see properties"></p>

<p>I found a couple ways to crash the shell, like closing an app while a popup or side menu is open, and apps themselves can crash when they try to do desktop things like lock the mouse cursor. However, when not doing weird stuff, the experience is pretty stable.</p>

<h3 id="hapticsn-t">Hapticsn&#39;t</h3>

<p>The first thing I noticed after interacting a little bit with the UI is that there is zero haptic feedback. On Android, when you do certain actions such as typing or switching apps, the motor inside the phone makes a nice “buzz” as feedback. I didn&#39;t realize how much I had come to rely on this until it was taken away from me. I was expecting it to work, since the pmOS wiki page for my device says it does:</p>

<p><img src="https://allpurposem.at/blog/pmos-hapticsenabled.png" alt="Screenshot from postmarketOS wiki page showing Haptics should work"></p>

<p>After asking in the <a href="https://matrix.to/#/%23mobile:gnome.org">GNOME Mobile Matrix room</a>, I learned that there is a service called <a href="https://gitlab.freedesktop.org/agx/feedbackd"><code>feedbackd</code></a>, which other interfaces talk to but GNOME Mobile does not. I have to imagine then that haptics would work on Phosh or other UIs, but (this will be a recurring theme) GNOME Mobile wants to do it a different way, and thus… hasn&#39;t done it yet. I plan to work on this at some point, especially after some productive discussion with other GNOME Mobile devs, but I haven&#39;t gotten much further than asking for input from the XDG Portals folks. I aim to take the time to work on it further, especially talking with feedbackd developers, but other matters were more pressing thus far. I at least was able to verify that the vibration motor <em>can</em> work by talking to it directly with <a href="https://www.kernel.org/doc/html/latest/input/ff.html">the kernel&#39;s force feedback API</a>.</p>
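<p>For the curious, here&#39;s a hedged sketch of that direct poke at the motor. Everything in it is an assumption to verify on your own device: the parsing of <code>/proc/bus/input/devices</code>, and the <code>fftest</code> utility from the linuxconsole tools.</p>

```shell
#!/bin/sh
# Scan /proc/bus/input/devices for devices advertising force-feedback (FF)
# capability and print their /dev/input/eventN nodes. Each device block lists
# its handlers ("H: Handlers=...") before its capability bitmasks ("B: FF=...").
find_ff_device() {
    awk '/^H: Handlers=/ { handlers = $0 }
         /^B: FF=/ {
             if (match(handlers, /event[0-9]+/))
                 print "/dev/input/" substr(handlers, RSTART, RLENGTH)
         }' /proc/bus/input/devices
}

# Then fire a test rumble at the first match with fftest:
#   fftest "$(find_ff_device | head -n 1)"
```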

<h3 id="auto-brightness">Auto-brightness</h3>

<p>Originally this was going to go in the next section, but fortunately, while writing this blogpost, it started working! <em>Un</em>fortunately, now I want it back off, and I can&#39;t find a way to disable it.</p>

<p>Anyways, auto-brightness is <a href="https://help.gnome.org/gnome-help/power-autobrightness.html">supported in GNOME upstream</a>, but never showed up in my power settings as the article says it should. I asked for help in the Matrix room and we verified that <code>net.hadess.SensorProxy</code> detects and exposes my phone&#39;s light sensor, so I guess the gnome-settings-daemon was failing to pick this up. Not having auto-brightness means that I have to fumble to find the brightness bar every time I go outside, as I can&#39;t see what&#39;s on my screen otherwise. Now that it works, it adjusts extremely quickly to any small changes in ambient light, which unfortunately overrides my own settings. The toggle still isn&#39;t there, so I can&#39;t just turn it off. The <em>biggest</em> issue with auto-brightness, now that it&#39;s here, is that it animates this transition, which causes issues with my OLED display&#39;s driver: these manifest as brief flashes of horizontal bands of garbage pixels, or the color grading of the entire display changing, or (admittedly this one&#39;s pretty cool) the display showing several copies of GNOME in a grid, Andy Warhol-style.</p>
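<p>If you want to poke at the same thing on your device, here&#39;s a hedged sketch of that check. <code>HasAmbientLight</code> is a documented SensorProxy property; the rest assumes systemd&#39;s <code>busctl</code> and the <code>monitor-sensor</code> tool that ships with iio-sensor-proxy are available.</p>

```shell
# Does iio-sensor-proxy see an ambient light sensor at all?
busctl --system get-property net.hadess.SensorProxy /net/hadess/SensorProxy \
    net.hadess.SensorProxy HasAmbientLight

# Claim the sensors and watch raw readings scroll by while covering the
# light sensor (Ctrl-C to stop):
monitor-sensor
```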

<p>Of course, while doing my final editing, auto-brightness is gone again, and I don&#39;t have to deal with the screen glitching anymore! I opened <a href="https://gitlab.postmarketos.org/postmarketOS/pmaports/-/issues/4274">pma!4274</a> to track the display artifacts, so I know when I can try enabling auto brightness again :)</p>

<h3 id="smaller-missing-bits-in-the-shell">Smaller missing bits in the shell</h3>

<h4 id="flashlight">Flashlight</h4>

<p>There&#39;s no way to toggle the flashlight. I found <a href="https://gitlab.gnome.org/verdre/gnome-shell-mobile/-/merge_requests/12">a merge request</a> to the shell that would add it, but with no activity on that repository in the last 8 months I&#39;m not holding my breath for it to be merged. In the meantime I am running <a href="https://github.com/vixalien/gnome-mobile-torch">a GNOME Extension</a> that adds this feature, which I had to manually install using the terminal as it does not appear to be packaged anywhere. I&#39;ll count this as the first <em>required</em> use of the terminal.</p>
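<p>For reference, here&#39;s a hedged sketch of what such a manual install looks like. The repo layout is an assumption (check the extension&#39;s README); the destination path and UUID handling follow GNOME&#39;s usual convention of loading extensions from <code>~/.local/share/gnome-shell/extensions/&lt;uuid&gt;/</code>.</p>

```shell
#!/bin/sh
# Pull the "uuid" field out of an extension's metadata.json (crude, but
# avoids needing jq on the phone):
ext_uuid() {
    grep -o '"uuid"[^,}]*' "$1" | cut -d'"' -f4
}

# Clone an extension repo and copy it where gnome-shell looks for extensions:
install_extension() {
    repo_url=$1
    dir=$(basename "$repo_url" .git)
    git clone "$repo_url" "$dir"
    uuid=$(ext_uuid "$dir/metadata.json")
    mkdir -p ~/.local/share/gnome-shell/extensions/"$uuid"
    cp -r "$dir"/. ~/.local/share/gnome-shell/extensions/"$uuid"/
    gnome-extensions enable "$uuid"   # takes effect after logging out & back in
}

# install_extension https://github.com/vixalien/gnome-mobile-torch.git
```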

<h4 id="oled-support">OLED support</h4>

<p>GNOME Mobile ships a dark theme, but no option for a “pure black” background. Back on Android I distro-hopped to LineageOS <em>just for this feature</em>. The GNOME “dark” theme is a light grey which might make sense on the LCDs that desktops and laptops usually have, but looks really bad on an OLED display, which most phones nowadays have. I tried researching how to theme GTK or “libadwaita” apps, since that seems to be mainly what my install came with, and only found references to a defunct app called “Gradience”. I was able to install a Flatpak of a fork that was updated slightly more recently, and navigated its desktop-only interface very awkwardly to set the background color to black. This thankfully worked for most apps, but I did not find a way to theme the actual GNOME Shell, which unfortunately is still stuck as this ugly grey, and I&#39;ve no idea where to begin fixing this on my device. There&#39;s a “User theme” extension that can apply CSS to the gnome-shell, but no indication of what the CSS to fix the background color would look like.</p>
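<p>As an aside for anyone fighting the same battle: libadwaita apps also read user overrides from <code>~/.config/gtk-4.0/gtk.css</code>, which might be a lighter-weight route than Gradience. A hedged sketch using libadwaita&#39;s documented named colors (apps that hardcode their own colors will ignore this, and it does nothing for the shell itself):</p>

```css
/* ~/.config/gtk-4.0/gtk.css — picked up by libadwaita/GTK4 apps on launch */
@define-color window_bg_color black;
@define-color view_bg_color black;
@define-color headerbar_bg_color black;
@define-color sidebar_bg_color black;
```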

<h4 id="qt-apps-functionality">Qt apps functionality</h4>

<p>I get that it&#39;s the GNOME shell, so I should be using GTK apps, but in the already limited Linux mobile app ecosystem, I <em>need</em> to be able to use apps written in other frameworks, even if they may look slightly different. I have two Qt apps installed at the moment: KDE Connect (integrates the phone with my desktop), and Kitinerary (finds public transport routes). Both of them open with a blinding white background and a very broken-looking interface. Tapping a text field does not bring up the keyboard (thankfully I can manually double-tap the gesture bar to get it). There is a window bar at the top with minimize, maximize, and close buttons. I haven&#39;t found a solution for these issues, but I did install “Kvantum” and “qt6ct” to at least change the background color. However, these apps still look &amp; feel extremely broken, and it makes me sad that my experience of their developers&#39; hard work is ruined by the way they get shipped.</p>
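<p>One knob worth trying for the theming half of this, sketched here with standard Qt environment variables (whether qt6ct or Kvantum actually get picked up depends on which plugins are installed, so treat this as an assumption to test):</p>

```shell
# e.g. in ~/.profile, so the next login session picks them up:
export QT_QPA_PLATFORMTHEME=qt6ct     # route Qt apps through qt6ct's settings
# or force Kvantum's style engine directly (if its Qt plugin is installed):
# export QT_STYLE_OVERRIDE=kvantum
```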

<h4 id="copy-paste">Copy &amp; paste</h4>

<p>It&#39;s not <em>missing</em>, but it might as well be, because the UI is so inconsistent and often buggy that I resort to manually typing out my crazy-long Bitwarden passwords. Though the Wayland clipboard <em>works fine</em>, each app seems to be expected to implement its own way to select text, copy, and paste. Libadwaita (GTK4) apps, which normally have pretty good UX, make it near-impossible to get the popup for copy/paste, and then the popup uses icons with no description, often leaving me guessing at their function (also it has a weird black outline). The “Text Editor” app has an especially tricky to use selection/popup interaction, probably on account of allowing text input, and those interactions conflicting with selection (it took me over 30s to get it to pop up for the screenshot):</p>

<p><img src="https://allpurposem.at/blog/pmos-copypaste.png" alt="Screenshot of the text editor with an array of many icons shown, one of them meaning copy"></p>
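<p>When the touch UI refuses to cooperate, the Wayland clipboard itself can be driven from a terminal. A hedged sketch, assuming the <code>wl-clipboard</code> package is installed:</p>

```shell
printf '%s' 'hunter2' | wl-copy   # put text on the clipboard ('hunter2' is a stand-in)
wl-paste                          # print whatever is currently on the clipboard
wl-copy --clear                   # wipe it afterwards; nice for passwords
```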

<h4 id="on-screen-keyboard-customization">On-screen keyboard customization</h4>

<p>The default keyboard gets the job done for typing basic text, but (as far as I know) there is no option to change the keys available or how it works. On Android, I used and loved <a href="https://github.com/Julow/Unexpected-Keyboard">Unexpected Keyboard</a>, which lets you type special characters by quickly “flicking” in many configurable directions from any given key. It also features a Ctrl key, which would entirely solve the copy-paste UI issue from above. It even binds a lot of desktop-like keys, like Escape, which is extremely useful when using e.g. Vim for text editing. I am aware of a similar project for Linux mobile called <a href="https://gitlab.com/flamingradian/unfettered-keyboard">Unfettered Keyboard</a>, but unfortunately (I think) it cannot run on GNOME Mobile due to GNOME Mobile&#39;s keyboard being part of the shell, rather than a separate program. If I do stick with GNOME Mobile, I will probably learn to write extensions and see if I can write some sort of shim that lets you plug in other keyboard programs like Unfettered Keyboard.</p>

<h4 id="customization-of-the-quick-settings-statusbar">Customization of the quick settings &amp; statusbar</h4>

<p>When I enable location support, there is a constant location icon in the statusbar reminiscent of that green dot which finally drove me off Android. The quick settings tiles also seem to randomly change what&#39;s available, such as auto-rotate appearing and disappearing between reboots, and a mysterious “Wired” connection that&#39;s always on taking up the first slot. On a big screen, I wouldn&#39;t mind as much since I have space to spare, but on a phone I need to save space by not showing useless icons, and I <em>especially</em> need things to stay where they are to build up any kind of muscle memory. Thankfully there is the amazing <a href="https://extensions.gnome.org/extension/5446/quick-settings-tweaker/">Quick Settings Tweaks</a> shell extension that lets me hide the irrelevant toggles &amp; icons, though it can&#39;t fix the rotation appearing and disappearing. Speaking of screen rotation…</p>

<h3 id="screen-rotation">Screen rotation</h3>

<p>When auto-rotate is off, this <em>really</em> behaves like a desktop. You can go into the system settings and manually select “Portrait”, “Landscape Left”, “Landscape Right”, or “Portrait (Flipped)”, then click “Apply” and finally confirm “Keep changes”. Android did this thing where it still reads the sensor and, when you physically rotate the phone, shows a button for a second that you can tap to actually rotate the screen. I <em>always</em> ran my Android phone like this to prevent accidental rotation, and I would love to see this on GNOME Mobile. Maybe that&#39;s a good first contribution if I decide to dive into UI stuff.</p>

<p>Rarely, I get lucky and an “Auto-rotate” toggle is in the quick settings. When it is enabled, rotation is <em>instant</em> the moment I tilt my phone. While I like the lack of animation, I do wish it were a little less sensitive as it&#39;s very easy to accidentally rotate. Thus, even in the rare event the feature is available, I keep it turned off. I think this issue is related to <code>iio-sensor-proxy</code>, as sometimes <code>sudo systemctl restart iio-sensor-proxy</code> brings it back, but other times this command gets stuck, so I am not sure.</p>
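<p>In case it helps anyone with the same symptom, the restart can be wrapped so it at least doesn&#39;t hang the terminal. A hedged sketch: the unit name matches my install, and <code>--no-block</code> (standard systemctl) only stops the command itself from getting stuck, not the underlying wedge.</p>

```shell
#!/bin/sh
# Kick iio-sensor-proxy back to life when auto-rotate vanishes.
kick_sensor_proxy() {
    if ! systemctl is-active --quiet iio-sensor-proxy; then
        # --no-block queues the restart without waiting for it to complete,
        # so this returns even when the service is wedged
        sudo systemctl --no-block restart iio-sensor-proxy
    fi
}

# kick_sensor_proxy
```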

<h3 id="battery-life">Battery life</h3>

<p>Normally this&#39;d have gotten its own h2, but unfortunately the otherwise excellent battery life (which I estimate would last well over a day) gets completely trounced by two issues in (presumably) GNOME Mobile:</p>
<ol><li><p>Since <a href="https://wiki.postmarketos.org/wiki/Power_management#Sleep/Suspend_modes">suspending on mobile is not really a thing</a>, battery life relies on apps using as little power as possible at all times, and suspending features when they are not used. I&#39;m not sure what part of the stack is responsible for this, but for example I would expect 3D rendering to not use power when the screen is off, or the camera app to suspend the camera hardware when unfocused. This is not the case right now: for example, I have had my phone die while out because I forgot to swipe away the “Camera” app window, which kept the camera running for several hours until the battery gave up. I&#39;d love to learn more about this!</p></li>

<li><p>The <code>gnome-shell</code> process will randomly start consuming 100% of one CPU core, and not stop until restarted. I have to constantly feel my phone in my pocket in case it starts getting warm, and if so log out &amp; back into the shell to prevent it eating the entire battery life. This is tracked by <a href="https://gitlab.gnome.org/verdre/gnome-shell-mobile/-/issues/70">this issue</a>, and two weeks ago I spent quite some time trying to figure out the root cause and collecting profiler data. Unfortunately I don&#39;t see a fix on the horizon, and if it keeps happening I might have to switch off of GNOME Mobile entirely.</p></li></ol>

<p>Of note: my phone came with “suspend” enabled, which until recently would cause a kernel panic (I was quite confused why pressing the off button would cause a reboot!), so I disabled it in the GNOME Settings. Suspend is now fixed, but I don&#39;t think I can use it as it prevents all network-based apps from receiving notifications. Supposedly SMS &amp; calls can still wake up the device, but 95% of my communications go thru Matrix, and the remaining 5% are Signal, neither of which work while suspended.</p>

<p>Something strange I encountered is that the phone <em>will not</em> charge over a USB-A cable. On Android it would charge slowly, so I could leave the phone plugged into my desktop&#39;s USB port while developing apps and test on it; postmarketOS, however, requires the phone to be plugged in with a proper C-to-C cable connected to a USB-C Power Delivery capable power supply. The phone will also <em>always</em> show a notification asking whether I want to transfer files via MTP (but it doesn&#39;t actually do anything), “developer” (what?), or just charge, even if the cable is power-only.</p>

<p>I usually leave my phone charging next to my bed, and most of the time this works, however sometimes I will find the phone really hot and displaying the full-disk encryption password prompt, which I guess means it kernel panicked. This also happened a few times in my pocket, and the password prompt keeping the screen on likely contributed to quite some power loss.</p>

<h2 id="ok-but-what-about-the-phone-stuff">Ok but what about the phone stuff</h2>

<p>Yes yes, I&#39;m getting there. I just have a lot to say about it even before inserting a SIM card!</p>

<p>The Fairphone 5 requires removing the battery to access the SIM slot, which at least they make very easy, but does force me to reboot to switch SIMs. These days I only use one though, so it shouldn&#39;t be a problem. pmOS detected my SIM and it showed up as “Mobile Network” in the Settings app, with the usual toggles expected from other OSes. I had to manually select the correct Access Point Name (APN) for my French cell operator, which I have experience with as I had to do the same on LineageOS (which actually was harder than on pmOS, since Lineage had me manually input all the settings!). Once that was set up, I saw for the first time a nice 5G icon in my statusbar, indicating mobile data works! It&#39;s also the first time I get to use the 5G I pay for, since my previous phone only supported 4G.</p>

<p><img src="https://allpurposem.at/blog/pmos-mobilesettings.png" alt="A picture of the Mobile Network settings page. I can toggle Mobile Data and Data Roaming, as well as select Network Mode, Network (set to o2 - de), Access Point Names, Sim Lock, and view Modem Details"></p>

<p>I can&#39;t seem to change the “Network Mode” option from its default of preferring 4G to one that prefers 5G: it lets me select one, but the change doesn&#39;t apply, and a popup shows <code>Failed: reloaded modes (allowed &#39;2g,...</code> (and trails off). I could probably find out more using journalctl, but it hasn&#39;t bothered me enough to do so.</p>
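<p>For whenever I do get around to it, the terminal route would look roughly like this. A hedged sketch using ModemManager&#39;s CLI; the flags are standard <code>mmcli</code>, but whether the modem accepts the change is another story:</p>

```shell
mmcli -L          # list modems and their numbers
mmcli -m any      # modem status, including current & supported modes
journalctl -u ModemManager -b | grep -i mode   # logs around the failed change
sudo mmcli -m any --set-allowed-modes='2g|3g|4g|5g' --set-preferred-mode='5g'
```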

<p>What <em>HAS</em> bothered me is mobile data randomly becoming unavailable. One time it was caused by an update (that&#39;s what I get for running postmarketOS “edge”), which I reported, and it got fixed in record time. Other times, however, it seems to happen quite randomly, like the auto-rotation disappearing (though less often, thankfully!). A reboot usually remedies it, but it is quite annoying. I&#39;d expect a notification saying my network connection failed, and I <em>do</em> get these, but only when I leave the range of a wifi network (which is very common given I carry the phone around!) or at some other random times, <em>not</em> when mobile data disappears.</p>

<h2 id="socials">Socials</h2>

<p>The primary reason for carrying my phone around is being reachable, and reaching people when needed.</p>

<h3 id="sms">SMS</h3>

<p>The install came with a very nice app simply called “Chats” (though the process name reveals it is actually <a href="https://gitlab.gnome.org/World/Chatty/">Chatty</a>) which allows me to send &amp; receive SMS. It claims to support MMS, but it fails when I try to send one. The app also (!!!) let me log into my Matrix account and read messages in unencrypted rooms; sadly this last bit is not useful to me, as most of my communications are encrypted. If your messaging is unencrypted anyway, like SMS, this is an awesome app that runs well and does what it says. I did have some trouble sending SMS to new numbers: one of my family members did not receive a rather time-sensitive message, but I was having similar-ish troubles on my Android phone, so it might be a carrier thing.</p>

<p><img src="https://allpurposem.at/blog/pmos-chatty.png" alt="Screenshot of Chatty open to a conversation where I sent &#34;hello from linux phone&#34; a month ago, and typed a draft now saying &#34;I don&#39;t really use SMS&#34;"></p>

<h3 id="matrix">Matrix</h3>

<p>On the Matrix side, I first tried Fractal for the GTK experience, but it unfortunately seems to run into crashing &amp; freezing issues similar to <a href="https://gitlab.gnome.org/World/fractal/-/issues/1617">what I reported</a> when I tried Fractal on desktop. I was then delighted to see that the Matrix client I used on Android is packaged on Flathub! FluffyChat runs great, and basic messaging worked without any issues.</p>

<p><img src="https://allpurposem.at/blog/pmos-fluffychat.png" alt="Screenshot of FluffyChat open to the postmarketOS room. Several emojis in reactions are not rendered"></p>

<p>There&#39;s what is likely a packaging bug that prevents it from loading an emoji font, so I can&#39;t see most emojis. There&#39;s a permanent bar at the top of the app that just says “FluffyChat”, which I think is the GTK app trying to draw client-side decorations (GNOME Mobile should probably tell apps to, uh, <em>not</em> do that). Unfortunately, the Flutter code behaves like the desktop version, and lacks several rather important features. A non-exhaustive list:</p>
<ul><li>can&#39;t play or record voice messages, have to manually download them then open in an audio player</li>
<li>can&#39;t play videos, same deal</li>
<li>no option to take a photo to send</li>
<li>no notifications while the app is closed</li>
<li>notifications for the room you&#39;re looking at get filtered <strong><em>even if the screen is locked or the app window is unfocused</em></strong></li>
<li>it makes every picture I send <em>extremely <strong>green</strong></em> (flipped endianness, I think):</li></ul>

<p><img src="https://allpurposem.at/blog/pmos-fc-green.jpeg" alt="Picture of my cat Athos staring longingly at a door, except everything is Very Green"></p>

<p>I have fixed audio <a href="https://github.com/krille-chan/fluffychat/pull/2473">in a merge request</a>, and I think video should be fairly simple to enable as well. I fear that taking photos will require some new XDG portal, so I&#39;ll be leaving that one for last. The notifications issue requires a UnifiedPush integration, which the Flutter package supports but which needs some work. I have a test version working on my desktop, but it is lacking a lot of the logic for handling notifications. I hope these changes are merged quickly though, as I don&#39;t want to have to keep rebasing a bunch of branches in a fork…</p>

<h3 id="signal">Signal</h3>

<p>Signal is what I give to people who don&#39;t want to invest an hour in picking a Matrix server, client, figuring out encryption, and then not saving their recovery key. This means some of my extended family and contacts from work. Thankfully there is a Signal client for mobile Linux called <a href="https://gitlab.com/schmiddi-on-mobile/flare/">Flare</a> which can send and receive messages including images (though it strips EXIF metadata, which means some photos get sent sideways or upside-down). It can&#39;t handle calling, but I usually make Signal calls on my desktop anyways, so it&#39;s not a big deal.</p>

<p><img src="https://allpurposem.at/blog/pmos-flare.png" alt="Screenshot of Flare client open to a conversation"></p>

<h3 id="fediverse">Fediverse</h3>

<p>I use Mastodon to access the Fediverse, and <a href="https://mastodon.gamedev.place/@allpurposemat/115618771459175821">was very happy to discover</a> <a href="https://blog.allpurposem.at/@/Tuba@floss.social" class="u-url mention">@<span>Tuba@floss.social</span></a>. It implements basically everything I could want out of a Mastodon client, and looks pretty good while doing so. Just missing an OLED background, but I&#39;m pretty sure that&#39;s on me for making such a messy GTK theme. I&#39;d like to fix the background color at some point, though.</p>

<p><img src="https://allpurposem.at/blog/pmos-tuba.png" alt="Screenshot of Tuba open to their profile, showing a boosted toot (on my birthday!)"></p>

<p>Sadly, Tuba frequently triggers some bug in the Vulkan driver that causes it to print “LOST_DEVICE”, and the app gets totally frozen midway through sliding a view out. I don&#39;t know where to report this, but it means I can&#39;t navigate the Fediverse for very long before I get stopped in my tracks. Another freeze, which might be related, occurs when I write a too-long post or attach a picture: it probably triggers some re-layout that hits a GPU bug and freezes. I unfortunately have lost several surely-banger-posts to this specific freeze. It also suffers from quite poor scrolling performance sometimes, potentially related to running out of Vulkan memory (I see that log message a lot).</p>

<h3 id="e-mail">E-Mail</h3>

<p>I installed Thunderbird, which brought along an extension called <a href="https://gitlab.postmarketos.org/postmarketOS/mobile-config-thunderbird"><code>mobile-config-thunderbird</code></a> that promises to make the UI more usable on phones. Unfortunately, something goes terribly wrong and it doesn&#39;t render my inbox at all, so it&#39;s not particularly useful as an email client right now. It does send me notifications though (as long as the app window is open!!!), so at least I can tap on one to read the email, since that does render.</p>

<p><img src="https://allpurposem.at/blog/pmos-thunderbird.png" alt="Screenshot of Thunderbird, not rendering the inbox"></p>

<h2 id="on-the-topic-of-notifications">On the topic of notifications</h2>

<p>Yeah, it&#39;s quite important to be able to see when one of these apps wants my attention! Thankfully everything I&#39;m running is FOSS, so there&#39;s no dark patterns to worry about here.</p>

<h3 id="push-notifications">Push notifications</h3>

<p>I&#39;m not super qualified to explain this, but my surface-level understanding is that on both Android and iOS, there is a central server that the OS stays permanently connected to, and services you have apps for can “push” to that server, which then tells the OS to wake up the app so it can show you its notification. This heavily reduces power usage, and saves each app from implementing its own background service. On Android, this is implemented through Google Firebase Cloud Messaging, but thankfully an alternative exists in the form of UnifiedPush, which let me self-host my own push server that supporting services (Matrix and Mastodon, in my case) could use instead. This meant that Android apps like FluffyChat and Tusky didn&#39;t have to run in the background, but still showed me reliable notifications piped through my very own server, which my phone was always connected to.</p>
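<p>To make this concrete, the push path is easy to poke at by hand with a self-hosted ntfy server (one UnifiedPush-capable option). A sketch, with placeholder server and topic names:</p>

```shell
# Server URL and topic below are placeholders for a self-hosted ntfy instance.
# A service "pushes" a notification by POSTing the message body to a topic:
curl -d "new Matrix message" https://ntfy.example.com/phone-topic

# The phone side keeps a single long-lived subscription shared by all apps,
# e.g. streaming incoming events as JSON lines:
curl -s https://ntfy.example.com/phone-topic/json
```

The point is that only this one connection has to stay alive; every app just registers a topic with the distributor instead of running its own background service.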

<p>On postmarketOS, I was very pleased to find a <a href="https://wiki.postmarketos.org/wiki/UnifiedPush">UnifiedPush wiki page</a>, but was a little worried to see only a KDE-specific implementation, with just a single app listed as supported. Thankfully I was able to install <a href="https://invent.kde.org/libraries/kunifiedpush"><code>kunifiedpush</code></a> on GNOME Mobile and write a config file to make it connect to my self-hosted Ntfy server. It was all a little manual (and required terminal usage #2, probably due to me running it outside of its native KDE), but it means apps can now register to it and it actually delivers notifications, nice! I am able to receive notifications from the Fediverse via Tuba, which supports UnifiedPush, and as stated earlier I began work on FluffyChat support for UnifiedPush on its Linux builds.</p>
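<p>For reference, the manual part roughly amounted to pointing the distributor at my server with a small config file. The file path and key names below are written from memory, so treat them as assumptions and defer to the wiki page:</p>

```shell
# Hedged sketch: the path and key names are assumptions, not authoritative;
# check the pmOS UnifiedPush wiki page for the real steps.
mkdir -p ~/.config
printf '[PushProvider]\nType=Ntfy\nUrl=https://ntfy.example.com\n' > ~/.config/kunifiedpushrc
```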

<p>Flare (Signal client) has an optional background service that keeps a connection to their servers, which is unfortunately required as Signal does not support UnifiedPush. SMS works fine as well.</p>

<h3 id="actually-seeing-the-notifications">Actually seeing the notifications</h3>

<p>Man, I really really hoped this would work! Unfortunately I have some experience with upstream GNOME not really showing me all notifications, so I should have expected this. Even for apps that do consistently send notifications for messages, like FluffyChat and Flare, I will usually only see the first notification in a conversation; subsequent messages get “grouped” (which is a nice feature UI-side!), meaning I get no sound or pop-up for them. GNOME also doesn&#39;t show me any notifications while fullscreen which, while I can understand the rationale, is not how I want it to work. This means that if I am watching a video fullscreen, I won&#39;t find out that my cooking timer has gone off until the video ends and I exit fullscreen!</p>

<p>Oftentimes “old” notifications get stuck, and also display wrong times. This happens with FluffyChat notifications quite frequently, where I open my phone and it says I received a message from my dad “Just now” or claims it came recently, when I actually had a full conversation hours ago.</p>

<p>Additionally, as explained earlier when talking about the lockscreen, it by default doesn&#39;t show the notification content for privacy reasons. I can enable showing content per-app in the GNOME settings, which would be great <em>except</em> it does not show every app, especially FluffyChat which is the one I actually need to be able to read quickly.</p>

<h3 id="pebble">Pebble</h3>

<p>Thankfully, I wear a Pebble smartwatch, and an amazing developer who goes by Muhammad maintains a Pebble connector app that can buzz my watch when I get a message (even when GNOME unwisely decides to hide the notification), like my watch used to do back on Android! <a href="https://gitlab.com/muhammad23012009/rockwork">Rockwork</a> is an unofficial Pebble client for Ubuntu Touch, and with some work I was able to rebase an experimental non-Ubuntu-Touch backend for it written by Xela Geo. I abstracted some of the buildsystem further to make it usable as an Alpine package, and have been happily running Rockwork on my postmarketOS phone, with almost everything working. I opened <a href="https://gitlab.com/muhammad23012009/rockwork/-/merge_requests/4">a merge request</a> to upstream, and if/once it is merged I hope to contribute my first package to Alpine.</p>

<p><img src="https://allpurposem.at/blog/pmos-rockwork.png" alt="Screenshot of Rockwork, listing the apps installed on my watch"></p>

<p>I can control my music and read notifications on the watch, while opening Rockwork lets me switch watchfaces and view historical step counter &amp; sleep data. I cannot overstate how awesome this is. Of course, it&#39;s not all perfect, though most of the issues can probably be blamed on my porting work (the app seems to work fine on its native Ubuntu Touch platform). I still need to get the app store and calendar sync to work, and there&#39;s a big problem with some apps using the XDG portal Notifications API, which GNOME implements privately and thus Rockwork can&#39;t eavesdrop on to forward to the watch. I don&#39;t know how I will solve this last one, and it currently means I don&#39;t get any SMS notifications.</p>

<h2 id="using-the-camera">Using the camera</h2>

<p>One of the things that made me pick the Fairphone 5 over other similar devices is the “Partial” status of the Camera (rather than “Broken”). When I got the phone, I was excited to try out the camera, as I usually take lots of pictures of <del>my cat</del> different places I go. I didn&#39;t expect much given the rating, but I am mostly positively impressed at how well it works given the level of support. Using the built-in “Snapshot” camera app (which is the only one I got working), there is no way to change the focus or the zoom level, but you <em>can</em> take pictures and videos, as well as scan QR codes. The focus appears to be stuck at a fixed setting and does not auto-adjust. Only the wide-angle rear camera or the front selfie cam are supported by postmarketOS at the moment, probably due to a missing driver for the normal one. They both seem to have similar picture quality, so I won&#39;t test them separately (but all the pictures shown are taken with the wide-angle). By default, pictures are very dark and green-tinted, especially indoors. However, if I cover the sensor with my hand (or point the camera at a bright light) for a bit, then when I uncover it the colors will briefly be a bit brighter and less green and I can take my picture, which ends up a lot better (but still dark):</p>

<p><img src="https://allpurposem.at/blog/pmos-photo-khoshekh.jpeg" alt="Picture of my cat Khoshekh"></p>

<p>It doesn&#39;t deal well with a light in the shot, as the image gets overexposed.</p>

<p><img src="https://allpurposem.at/blog/pmos-photo-cccac.jpeg" alt="Picture of sign that says WARNING: Do not dumb here. No dumb area. Somewhat overexposed by LEDs in frame"></p>

<p>Trying to take a video used to freeze the phone for a few seconds, then reboot it (kernel panic?); however, as of last week it no longer does this, though recording is extremely laggy. I don&#39;t know for sure, but I think the GPU drivers might not (yet?) support hardware video encoding or decoding (except <a href="https://wiki.postmarketos.org/wiki/Hardware_video_acceleration">the wiki says it does, assuming it is “Venus”</a>), so the result is not very usable yet. Here&#39;s a recording of where I&#39;m writing this blogpost (I promise my lights are on):</p>

<p><video controls src="https://allpurposem.at/blog/pmos-video.mp4"></video></p>

<p>The camera app allows viewing recent photos &amp; videos, but there is no way to zoom into them or rotate media after the fact. There is also no standalone gallery app I could find, so viewing media is unfortunately quite awkward. Maybe once I finally set up Immich, the website can stand in for a gallery app.</p>

<p>Either way, once I find a way around the FluffyChat image endianness bug, I will feel quite happy sending some of these pictures to family &amp; friends.</p>

<h2 id="audio">Audio</h2>

<p>Oh, yeah, I haven&#39;t talked about this one yet. The pmOS wiki page lists Audio as “Broken” for my device, and indeed this was the case when I first installed pmOS. However, I saw that <a href="https://wiki.postmarketos.org/wiki/Fairphone_5_(fairphone-fp5)/Audio">a lot of work was being done in this area</a> and felt that I could trust these amazing folks to get it working. My PineBuds (bluetooth earbuds) paired fine and allowed me to listen to a couple YouTube videos in the meantime. Lo and behold, a few weeks into daily driving this phone I got to start enjoying the speakers on my Fairphone 5 via <a href="https://gitlab.postmarketos.org/postmarketOS/pmaports/-/merge_requests/7700">pma!7700</a>, which I installed on my device thanks to <a href="https://wiki.postmarketos.org/wiki/Mrtest">Mr. Test</a>. As I&#39;m writing this, the MR is now merged and should be built soon!</p>

<blockquote><p>[!NOTE]
It&#39;s called <code>mrtest</code>, a tool for testing Merge Requests, but I keep reading it like Mister Test and so I will make you read it that way at least once &gt;:)</p></blockquote>

<p>The speakers don&#39;t sound <em>quite</em> right, and sometimes go wonky until I do a suspend-resume cycle, but it&#39;s already extremely impressive work by everyone involved. It was very exciting to follow the discussion and see the first few demos from the devs, featuring classics like Rick Astley singing his one and only hit single through the speakers. I&#39;m told microphone support is coming soon, which will allow me to start doing VoIP calls like Jitsi or MatrixRTC with friends!</p>

<h2 id="calls">Calls</h2>

<p>Note how I specified VoIP… yeah, calls are their own thing. Even with speaker &amp; mic working, more work will need to be done for call audio (which is a separate issue because of weird modem reasons). I can confirm that making phone calls works, as in, I can make someone&#39;s phone buzz, and they can make my phone (not buzz because GNOME doesn&#39;t implement haptics but) show a call notification—if nothing is fullscreen of course—that I can use to pick up.</p>

<p>I believe there is some extra complexity in Germany with VoLTE support being required, but I&#39;ll find out for sure once the call audio stuff is in place. Let&#39;s just hope I don&#39;t need to take any important calls anytime soon!</p>

<h2 id="web-browsing">Web browsing</h2>

<p>I only tried Firefox, as it was installed by default on my phone, preconfigured with <a href="https://gitlab.postmarketos.org/postmarketOS/mobile-config-firefox">mobile-config-firefox</a>.</p>

<h3 id="interface">Interface</h3>

<p>The UI in portrait mode is reminiscent of Firefox for Android with the URL bar at the bottom, except tabs are always displayed. I would like for the tabs to auto-hide or, even better, browse them in a grid like the Android version provides, but this is perfectly usable. A right-click action can be simulated by long-tapping, which will also select the word you long-pressed. If you tap on the word again, the right-click menu closes and you can drag selection handles to select more/less text, then long-tap again to act on it.</p>

<p><img src="https://allpurposem.at/blog/pmos-firefox.png" alt="Screenshot of Firefox open to the high CPU usage issue in GNOME Mobile. Some text is selected and the context menu is open with options for Copy, Select All, Print, Translate, and some extensions such as Ffck it&#39;s button, uBlock&#39;s Block element, and Bitwarden&#39;s autofill"></p>

<p>In landscape mode, the UI moves to the top of the window and permanently takes up about one-third of the screen, given both the URL bar and the tabs are <em>always</em> visible and neither can be collapsed. This makes the landscape mode functionally useless, as there is not enough space to interact with page content. The only time I use it is when I want to fullscreen a video, which thankfully can be easily done by double-tapping on the media. The “popout player” is also activatable, though unfortunately GNOME Mobile does not allow floating windows to overlay other apps, so it&#39;s not useful like it is on desktop.</p>

<p><img src="https://allpurposem.at/blog/pmos-firefox-landscape.png" alt="Screenshot of Firefox as described in landscape mode. It is open to the legendary YouTube video My Hands Are Bananas"></p>

<p>The HTML <code>&lt;select&gt;</code> tag (used for dropdown selections) works exactly like on desktop, with very small touch targets, and is not scrollable, making only a few entries near the top of the list selectable.</p>

<p>When interacting with the popup menus that appear when tapping one of the many permanently-visible buttons in the bottom bar, I found that there is no intuitive way to close them. Tapping outside of the menu does nothing, and tapping the button that opened it simply flickers it off-then-back-on. Thankfully, I found that tapping the URL bar pops up the OSK, which I can then dismiss to get back to the page. This is quite awkward to do, but lets me use most of the browser features.</p>

<h3 id="extensions">Extensions</h3>

<p>This being the full version of Firefox, all extensions are available to install, and I was really happy to get my favorites <a href="https://ublockorigin.com/">uBlock Origin</a>, <a href="https://darkreader.org">Dark Reader</a>, <a href="https://libredirect.github.io/">LibRedirect</a>+<a href="https://getindie.wiki/">Indie Wiki Buddy</a>, <a href="https://add0n.com/stylus.html">Stylus</a>, and <a href="https://consentomatic.au.dk/">Consent-O-Matic</a> synced from my Firefox Account. I set Dark Reader to force every page to use an OLED-black background, set up my AI-blocking stuff on Stylus, and configured LibRedirect to point to my favorite frontends for websites I do not wish to send traffic to. I did have to make sure to disable settings sync in the Dark Reader preferences, as otherwise the OLED preference got automatically copied to all my desktops!</p>

<p>Unfortunately extensions suffer from the same “popup menu” behavior described in the previous section, and have the extra issue of only part of the menu being rendered (however, the entire menu is interactive, so if you know your way around you can still blindly navigate):</p>

<p><img src="https://allpurposem.at/blog/pmos-firefox-extension-cutoff.png" alt="Consent-O-Matic extension menu gets cut off"></p>

<p>Thankfully, the Bitwarden extension has a “pop out” mode that puts it in its own window (and that window does get fully rendered!). The button to trigger this is always in the same spot, so I can reliably blindly tap it. However, the popout window <em>replaces</em> Firefox in GNOME Mobile, so if I have Bitwarden open I cannot see Firefox. It also triggers the app overview a couple seconds after opening (probably a GNOME Mobile bug), which often interrupts me typing my master password and causes me to accidentally launch whatever app appeared where the OSK key I was aiming for was. There <em>is</em> a native Bitwarden client called <a href="https://codeberg.org/Chfkch/bitritter">BitRitter</a>, but the last commit was over a year ago so I fear it may suffer a similar fate to Goldwarden. There is also to my knowledge no system-wide “autofill” API for Linux, that would allow a password manager to fill login details into non-web apps.</p>

<h3 id="surfing-the-web">Surfing the web</h3>

<p>Websites themselves render and feel great: despite this technically being “Firefox Desktop”, they correctly detect by other means that this is a phone. I <em>did</em> get locked out of Google.com (something about my browser being unsupported), but it served as a good slap on the wrist, reminding me to instead use an alternative frontend to Google Search, such as <a href="https://www.startpage.com/">Startpage</a>. I checked again while writing and it seems they now “support” my browser. I read news articles, browsed blogs, and used <a href="https://github.com/TeamPiped/Piped">Piped</a> to access YouTube videos without too many issues. It seems that Firefox does not unload tabs very readily: a few times my phone ran out of RAM and the entire browser got killed by the OOM daemon, so I&#39;ve been careful to keep my tab count low.</p>

<h2 id="terminal">Terminal</h2>

<p>Of course, we can&#39;t talk about a Linux distro without mentioning the terminal. My GNOME Mobile install came with a terminal emulator it calls “Console”, but I was able to determine by inspecting the running processes that it is actually <a href="https://gitlab.gnome.org/GNOME/console">kgx</a> (I see this project explicitly bans LLM contributions, and I applaud that!). The interface is well-adapted to my display, and has a nice feature that shows you a preview of all your open sessions/tabs in a grid. When using the terminal, the OSK gains some extra buttons for Tab, Ctrl, Alt, and the arrow keys.</p>

<p>The default shell is Alpine&#39;s own default <code>ash</code>, with no colored prompt, no tab completion, and none of the features I am used to, like brace expansion (<code>s{imple,yntax}</code>) from Bash, which I especially miss when having to type commands via a touchscreen where every saved keystroke counts. Although <a href="https://fishshell.com/"><code>fish</code></a> is available in the Alpine repos, kgx offers no way I could find to launch <code>fish</code> instead of <code>ash</code>. I don&#39;t want to set my system default shell to <code>fish</code>, as it&#39;s not POSIX-compliant; I would much prefer that only interactive sessions (kgx and possibly ssh) launch with it.</p>
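<p>One workaround I may try is leaving <code>ash</code> as the login shell but having it exec into <code>fish</code> for interactive sessions only. A sketch, assuming kgx starts a login shell that reads <code>~/.profile</code> (the guard variable is made up, purely to avoid exec loops):</p>

```shell
# Append to ~/.profile. Only switches when stdout is a terminal, fish is
# installed, and we haven't already switched (FISH_STARTED is a made-up guard).
if [ -t 1 ]; then
    if command -v fish >/dev/null; then
        if [ -z "$FISH_STARTED" ]; then
            FISH_STARTED=1
            export FISH_STARTED
            exec fish
        fi
    fi
fi
```

This keeps <code>/bin/sh</code> POSIX-compliant for scripts and system use while interactive terminals land in <code>fish</code>.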

<p>Although the interface adapts well to mobile, it does not have any mobile-specific features which would be very welcome, mainly:</p>
<ol><li><p>Ability to select &amp; copy text. It&#39;s very much mouse controls here: you can double-tap to select a word and triple-tap to select a line, but there is no way to grow/shrink the selection. When I am asked to share log output, I have to triple-tap each line, use the OSK to hit Ctrl+Shift+C (which is only possible thanks to the extra keys that appear), and one-by-one paste the lines into FluffyChat.</p></li>

<li><p>Following the system theme, or allowing a custom theme to be set. An off-grey is used as the background color instead of my configured pure-black background, which is important on OLED displays. The only setting I could find is a toggle in the hamburger menu for switching between light and dark mode. There is <a href="https://gitlab.gnome.org/GNOME/console/-/issues/27">an open issue</a> for custom color themes, with <a href="https://gitlab.gnome.org/GNOME/console/-/merge_requests/92">a linked merge request</a> which sadly has had no response from the maintainer in four years, ever since a change was requested (and long since implemented). I really hope it moves forward…</p></li>

<li><p>Pinch-to-resize. This one&#39;s more of a nitpick, but I run into it quite often so I&#39;m putting it here. Termux did this on Android, and it meant that any TUI app that wanted more space could be very quickly and easily dialed in. On kgx, I have to use the hamburger menu to access a + and – button, and it only allows going down to half of the default size, which is not always enough to display e.g. <code>btop</code>.</p></li></ol>

<p>Other than that though, it serves its purpose as a crutch (mostly restarting <code>iio-sensor-proxy</code> every once in a while), and with a couple of small-ish changes I&#39;d be very happy to use it.</p>

<h2 id="some-other-apps-worth-mentioning">Some other apps worth mentioning</h2>

<p>GNOME Software is an app store that comes with the install and helped me discover a lot of awesome mobile-friendly apps from Flathub. I also believe there is a bunch of software on aports+pmaports, but sadly it is not at all discoverable via GUI: you have to use the terminal and already know the package name. This is the case for <a href="https://polycule.im/">polycule</a>, for example: a very functional Matrix client built using the same SDK as FluffyChat, which I tried out but could not get the UI working very well on GNOME Mobile. GNOME Software does seem to have some support for Alpine packages, as it notifies me every day about “System Updates” (despite having explicitly configured it to <em>NOT</em> check updates automatically) which, when tapped, lists some Alpine packages. However, when I accept these updates, it doesn&#39;t always finish, and even if it does, running <code>sudo apk update &amp;&amp; sudo apk upgrade</code> in a terminal gets even more updates. So: terminal requirement #3 is for updating the system. Flatpak updates work fine, however. I did notice that after installing updates it will show a message saying “Last checked: X days ago”, so I don&#39;t know where it gets these from.</p>

<p><img src="https://allpurposem.at/blog/pmos-gnomesoftware.png" alt="Screenshot of GNOME Software open to Socialize section, showing a few browsers, a translation app, and an XMPP client"></p>

<p>KDE Connect is installable and, after manually enabling the firewall rules with the terminal (required use #4), was able to connect to my desktops. This lets me use a keyboard &amp; mouse without plugging them in, control media, and do a few other things, though I did not get the file sending feature to work. The theme looks really bad because my Qt themes are broken as discussed earlier.</p>

<p><img src="https://allpurposem.at/blog/pmos-kdeconnect.png" alt="Screenshot of KDE Connect hooked up to a device called mel, with options such as Multimedia Control and Send Clipboard"></p>
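<p>For reference, the firewall step boils down to opening KDE Connect&#39;s documented port range (1714-1764, TCP and UDP). A sketch assuming an nftables firewall; the table and chain names below are assumptions that may not match the default postmarketOS ruleset:</p>

```shell
# Assumes an "inet filter" table with an "input" chain; adjust the names
# to whatever `nft list ruleset` shows on your install.
nft add rule inet filter input tcp dport 1714-1764 accept
nft add rule inet filter input udp dport 1714-1764 accept
```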

<p>My favorite desktop calculator, called Qalculate! (yes, with the exclamation mark!), is available but not mobile-friendly; however, it can be made usable by navigating to <code>File</code>→<code>Minimal window</code>, and all the crazy unit conversions and math features are there. It does hide the history though, so I guess a mobile-native client/mode would still make sense.
<img src="https://allpurposem.at/blog/pmos-qalculate.png" alt="Screenshot of qalculate-gtk showing a conversion of `3lb/s to m/h`, which makes no sense but Qalculate still finds a way to do it"></p>

<p>There&#39;s a YouTube client with a quite nice UI called Pipeline (formerly Tubefeeder), but I only got it to play a video once. It uses something called “Clapper enhancements” to play YouTube videos, and this doesn&#39;t seem to pull an up-to-date version of yt-dlp, as I get error messages about missing formats. I also tried the Flatpak, but that one complains about missing video decoding codecs on my phone, so it does not bode well. Something nice about Pipeline is that it lets me select a Piped instance, but I did not yet find a way to have it sync my subscriptions like LibreTube does on Android. It also suffers from a freezing bug quite similar to Tuba&#39;s, so I assume they both trigger the same Vulkan “lost device” codepath.</p>

<p><img src="https://allpurposem.at/blog/pmos-pipeline.png" alt="Screenshot of Pipeline open to CarlSagan42&#39;s channel"></p>

<p>I used an RSS reader called Pulp for a bit until I touched a setting that makes it crash on startup. I&#39;m now using Newsflash, which nicely syncs from my Nextcloud News, but only displays the article content in a narrow centered column that does not follow my libadwaita theme, so I don&#39;t use it very often. I opened <a href="https://gitlab.com/news-flash/news_flash_gtk/-/issues/868">an issue to track this</a>.</p>

<p><img src="https://allpurposem.at/blog/pmos-newsflash.png" alt="Screenshot of Newsflash open to the pmOS blog"></p>

<h2 id="android-apps">Android apps</h2>

<p>As much as I&#39;d like to use Linux exclusively, there are some cases where being able to fall back to Android is very useful. This is where Waydroid comes in, running an entire LineageOS image with optional Google Play Services inside a container. I installed it from GNOME Software, selected my desired Android image in a dialog it presented me, and (after debugging a lot in the terminal due to silent crashing, but I don&#39;t remember what it was so I don&#39;t have a cool story to share sadly) I now have a working Android system I can boot into!</p>

<p>Besides the Waydroid app itself which contains the Android interface in a window, there&#39;s the really cool feature of running Android apps as their own windows. This means that, once Waydroid has booted, the apps really are quite seamlessly integrated into my shell! They have their own launcher icons, native windows, and can be closed by swiping them away. I can even use Android&#39;s back gesture (only for Waydroid apps), if I enable it inside Android&#39;s own settings app! There <em>is</em> noticeable input lag when interacting with these apps, which is a shame, but not a dealbreaker for me as they are only fallbacks.</p>
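<p>If I read the Waydroid documentation correctly, the per-app windows behavior is controlled by a Waydroid property, so it can also be toggled from the terminal. A sketch (treat the property name as an assumption if your version differs):</p>

```shell
# Run apps as individual shell windows instead of one big Android window:
waydroid prop set persist.waydroid.multi_windows true
# Restart the session so the change takes effect:
waydroid session stop
waydroid session start
# List installed apps, along with the names used to launch them:
waydroid app list
```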

<p>I was very excited to run some of the apps I really miss from Android, and can happily report <a href="https://f-droid.org/packages/de.schildbach.oeffi/">Öffi</a> from F-Droid works great for public transport planning. I also tried to install the excellent OpenStreetMap client <a href="https://comaps.app">CoMaps</a> (community fork of Organic Maps), which ran great, but I discovered that GPS is not bridged to Waydroid, so it&#39;s not actually able to give me directions to places. There&#39;s <a href="https://github.com/waydroid/waydroid/issues/226">an issue</a> tracking this in the Waydroid repository and some workarounds shared via a debugging “mock GPS” feature, but I didn&#39;t manage to get any working, and I would much rather have this as a hardware bridge that the Android system sees as physical hardware, much like how network is bridged through a fake wired connection.</p>

<p><video controls src="https://allpurposem.at/blog/pmos-waydroid.webm"></video></p>

<p>I also tried to get my bank app running, as it is one of the only two things still tethering me to my old Android phone. The bank app is required to sign into the bank website from a new device, and to do sensitive operations like sending money to an account, so I unfortunately cannot “just use the website.” The app allows signing into it in two ways:</p>
<ol><li><p>Taking a photo of my ID and doing a “live selfie.” This is where I discovered Waydroid does not bridge the camera, so that was a dead end.</p></li>

<li><p>Proving physical proximity to my old phone, and accepting prompts on the logged in app there. This is where I discovered Waydroid does not bridge Bluetooth devices.</p></li></ol>

<p>So, I won&#39;t be getting rid of my old phone just yet. I have to charge it every few weeks when a bank thing pops up, and I guess Signal will bother me about my “Primary Device” eventually as well (the Android app is on my old phone; though Flare seems to have experimental support for being the primary device, I do not yet fully trust it not to lose my data, and likely won&#39;t use it until it&#39;s considered stable). Thankfully these are rather rare occasions, and I can somewhat safely only carry around my postmarketOS phone!</p>

<p>There&#39;s an alternative project to Waydroid called <a href="https://gitlab.com/android_translation_layer/android_translation_layer">Android Translation Layer</a> which takes the WINE approach of “natively” running programs made for other OSes, foregoing the container approach entirely. In theory, this should let apps integrate even better, and potentially even pass the hardware to the apps that Waydroid so sorely lacks. There&#39;s a super impressive <a href="https://flathub.org/en/apps/net.newpipe.NewPipe">NewPipe port on Flathub</a> using this. Unfortunately, I was unsuccessful in using the binary to run any of the apps that their own compatibility list says are supported, which I guess is likely a packaging issue on pmOS&#39;s side. I&#39;m keeping my eye on this project though!</p>

<h2 id="conclusion">Conclusion</h2>

<p>Despite me complaining so much (sorry!), I am <em>extremely impressed</em> with the state of Linux on mobile, and every doubt I had that I would regret moving to it has mostly been erased. Most of the problems I list are minor papercuts and should be relatively easy to solve. They should make for easy targets when any of them annoys me enough that, instead of writing about it, I actually set up a dev environment and try fixing it. The community has been incredible, responding to all sorts of questions and often helping me live-debug issues with whatever crazy thing I&#39;m trying to get working. I especially want to re-shout-out the folks at CCCAC without whom I don&#39;t think I would have taken the plunge and actually spent nearly 500€ on a device exclusively to run their software.</p>

<p>This blog has been dormant for a while, but with my recent adventures I am sure I will have plenty to write about, so perhaps expect some more writings once I land my first contribution to the OS!</p>

<p>While Android is “free,” we all pay for its development by ceding control (and data) to Google, further strengthening its grip on half of the mobile OS duopoly. Since I stopped paying Google, I have set up a recurring monthly donation to <a href="https://postmarketos.org/donate/">the postmarketOS team</a>, and will look into supporting the individual projects I use every day on my phone, to ensure development can continue and the amazing volunteers keeping this dream alive are remunerated for their efforts. A huge THANK YOU to everyone involved in <a href="https://blog.allpurposem.at/tag:LinuxMobile" class="hashtag"><span>#</span><span class="p-category">LinuxMobile</span></a> for making the computer in my pocket possible!</p>

<p>No LLM was used to write this. As always, feel free to direct any corrections or feedback to my fediverse account <a href="https://blog.allpurposem.at/@/mat@allpurposem.at" class="u-url mention">@<span>mat@allpurposem.at</span></a>.</p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

]]></content:encoded>
      <guid>https://blog.allpurposem.at/linux</guid>
      <pubDate>Tue, 27 Jan 2026 23:01:26 +0000</pubDate>
    </item>
    <item>
      <title>Making a Wii game in 2024</title>
      <link>https://blog.allpurposem.at/making-a-wii-game-in-2024</link>
      <description>&lt;![CDATA[As a game developer, some of my most creative work has come from embracing limitations rather than fighting against them. As counterintuitive as it sounds, clamping down on hardware capabilities or abstractions forces you to think outside the box much more.&#xA;&#xA;To give you this experience, there&#39;s online fantasy consoles such as PICO-8 (nonfree) and TIC80 which make it super accessible to prototype and finish small experiences. There&#39;s also hardware like the Playdate (nonfree) that further plays with input methods and form factors to really constrain your playground. Finally, there&#39;s the thriving homebrew communities around consoles such as the SNES and the N64 (check out this awesome demake of Portal!).&#xA;&#xA;I&#39;ve personally always had a soft spot for the Wii. Partially because I grew up with its incredible games such as Super Mario Galaxy 2 but also because Wii game modding gave me a peek at what would later be my career: game development. Although I&#39;ve dabbled with Wii development in the past, I never felt I really understood what I was doing. A couple months ago, I set out to fix this. Armed with my finished DirectX assignment for a university Graphics Programming course, and the open door of &#34;you can add extra features to raise your grades, but those are not mandatory,&#34; I thought of this: what if I show up to the exams with my Wii, and do the presentation on it?&#xA;&#xA;A picture of a messy table with a Wiimote and GameCube controller, with a CRT hooked up to an offscreen GameCube showing a vehicle with a fire effect behind it&#xA;&#xA;DirectX on the Wii (jk)&#xA;As excited as I was to enact this idea, I knew that I wasn&#39;t just going to compile my DirectX shaders and code for the Wii&#39;s CPU and call it a day. DirectX is, uh, not very portable or compatible with the Wii. 
The Wii is equipped with a GPU codenamed &#34;Hollywood,&#34; which has a whopping 24MB of video RAM as well as featuring no hardware support for any sort of shader. It really makes you appreciate some of the amazing scenes crafted on this console.&#xA;&#xA;A shot of the starting area in Slimy Spring Galaxy&#xA;  Click here to explore Slimy Spring Galaxy on noclip.website&#xA;&#xA;So, we must speak Hollywood&#39;s own API (called GX) to coax it into rendering a mesh with textures and transparency (as required by the assignment).&#xA;&#xA;  NOTE: In the final project, I&#39;ve created a GX folder to hold all GX-specific code, and isolated the DirectX stuff into a separate folder called SDL. This way, I can control which platform-specific code is used via a simple CMake option. If you&#39;d like to follow along, you can find everything here.&#xA;&#xA;libogc&#xA;&#xA;To access this API from C++, there&#39;s a library maintained by the folks at @devkitPro@mastodon.gamedev.place called libogc. This library, combined with the PowerPC toolchain, allows one to build programs targeting the Wii (and the GameCube, since they&#39;re so similar!).&#xA;&#xA;  NOTE: whenever I refer to the Wii from now on, it (mostly) also applies to the GameCube.&#xA;&#xA;Although devkitPro themselves do not have a CMake toolchain file available, I was able to find an MIT licensed one courtesy of the rehover homebrew game. Passing this toolchain file to CMake automatically sets it up to build for the Wii. Cool stuff!&#xA;&#xA;  NOTE: The rest of this post is accurate to the best of my understanding, but it is likely I got some things wrong! If you need accurate info, I suggest you take a look at libogc&#39;s gx.h with all the functions and comments as well as the official devkitPro GX examples rather than following my own code. 
Comments, questions, and corrections are as always welcome at my fedi handle @mat@mastodon.gamedev.place !&#xA;&#xA;Video setup&#xA;I won&#39;t dwell on the init too long, as most of it is just taken from a libogc example it&#39;s not too thrilling. What is cool though is how, to do v-sync, we create two &#34;framebuffers&#34; which are merely integer arrays... on the CPU? This is where one of the big differences in the Wii&#39;s hardware design compared to a modern computer comes in: both the CPU and GPU have access to 24MB of shared RAM. Meanwhile on a modern PC, the GPU will exclusively have its own dedicated RAM which the CPU cannot touch directly.&#xA;&#xA;This shared RAM is where we store these framebuffers arrays, named by the Wii hacking scene &#34;eXternal Frame Buffers&#34; or XFBs (source. Because access to this so-called &#34;main&#34;)). Because having the GPU work on the XFB directly would be slow, the GPU has its own bit of actually private RAM which stores what&#39;s officially called the &#34;Embedded Frame Buffer&#34; (EFB). GX draw commands work on the super-fast EFB, and when our frame is ready we can copy the EFB into the XFB, for the Video Interface to read and finally display to the screen. 
This buffer copy is loosely equivalent to &#34;presenting&#34; the frame as is done in the APIs we&#39;re used to.&#xA;                 ┌─────────┐                           &#xA;                 │         │                           &#xA;       ┌─────────┤   CPU   ├───────────────────┐           &#xA;       │         │         │                   │           &#xA;       │         └─────────┘                   ▼           &#xA;       │                                  GX drawcalls      &#xA;       │                                       │           &#xA;       ▼                          ┌────────────▼──────────┐&#xA;Create XFB arrays                 │                       │&#xA;       │                          │ GPU private RAM (EFB) │&#xA;       │                          │                       │&#xA;       │                          └────────────┬──────────┘&#xA;       │                                       ▼           &#xA;       │                           Copy EFB to current XFB &#xA;       │                                       │           &#xA;       │      ┌───────────────────────┐        │           &#xA;       │      │                       │        │           &#xA;       └──────► Shared MEM1 RAM (24MB) ◄───────┘           &#xA;              │                       │                    &#xA;              └─────────┬─────────────┘                    &#xA;                        ▼                              &#xA;                  Display frame                        &#xA;               ┌────────▼────────┐                     &#xA;               │                 │                     &#xA;               │ Video Interface │                     &#xA;               │                 │                     &#xA;               └─────────────────┘                     &#xA;&#xA;The following code specifically handles finishing the frame and displaying it:&#xA;void GraphicsContext::Swap()&#xA;{&#xA;    GXDrawDone();&#xA;&#xA;    GXSetZMode(GXTRUE, GXLEQUAL, 
GXTRUE);&#xA;    GXSetColorUpdate(GXTRUE);&#xA;    GXCopyDisp(gXfb[gWhichFB], GXTRUE);&#xA;&#xA;    VIDEOSetNextFramebuffer(gXfb[gWhichFB]);&#xA;    VIDEOFlush();&#xA;    VIDEOWaitVSync();&#xA;    &#xA;    gWhichFB ^= 1; // flip framebuffer&#xA;}&#xA;Every time a frame is done, we tell the GPU we&#39;re done via GXDrawDone() and then do our EFB -  XFB copy via GXCopyDisp(gXfb[gWhichFB], GXTRUE) (where gXfb is our two XFBs and gWhichFB is a single bit we flip every frame). Then we notify the Video Interface of the framebuffer it should display with a call to VIDEOSetNextFramebuffer(gXfb[gWhichFB]). Finally, VIDEOFlush() and VideoWaitVSync() ensure we don&#39;t start rendering the next frame before this one is displayed.&#xA;&#xA;Drawing a mesh&#xA;Now that we know how framebuffers work on the Wii, let&#39;s get to drawing our mesh!&#xA;&#xA;Vertex attribute setup&#xA;Before we can start pushing triangles via GX, we must tell it what kind of data to expect. This is done in two steps:&#xA;&#xA;First, we tell GX that we will be giving it our vertex data directly every frame, rather than having it fetch it from an array via indices.  &#xA;GXSetVtxDesc(GXVAPOS, GXDIRECT);  &#xA;We then access the vertex format table at index 0 (GXVTXFMT0), and set its position attribute (GXVAPOS) as follows:&#xA;    The data consists of three values for XYZ coords (GXPOSXYZ)&#xA;    Each value is a 32-bit floating point number (GXF32)&#xA;    I&#39;m not sure what the last argument is for, but zero worked fine for me.&#xA;&#xA;GXSetVtxAttrFmt(GXVTXFMT0, GXVAPOS, GXPOSXYZ, GXF32, 0);&#xA;&#xA;Both of those functions are then repeated for normals and texture data, if needed.&#xA;&#xA;  NOTE: The Wii&#39;s GPU supports indexed drawing, where vertex data is stored in an array and drawn using indices into that array. This allows fewer vertices to be defined while reusing them.  &#xA;I didn&#39;t know about this until I finished this project, so we&#39;ll be sticking with non-indexed drawing. 
The concept is quite similar, but you&#39;d set the vertex desc to GXINDEX8 and bind an array before calling GXBegin. You&#39;d then pass indices rather than vertex data inside the begin/end block.&#xA;&#xA;Drawcalls&#xA;&#xA;Each frame, we must queue up some commands in the GPU&#39;s first-in-first-out buffer. We can tell GX it&#39;s time to draw some primitives via the GXBegin function, passing along the type of primitives (triangles!), the index in the vertex format table we filled in earlier, and the number of vertices we&#39;ll be drawing.&#xA;Afterward, we can give it the data in order by calling the respective function for each attribute we configured.&#xA;Finally, we cap it off with a GXEnd (which libogc just defines as an empty function, so I guess it may just be syntax/API sugar).&#xA;&#xA;GXBegin(GXTRIANGLES, GXVTXFMT0, mVertices.size());&#xA;for(uint32t index : mIndices)&#xA;{&#xA;    // NOTE: really wish I used GX&#39;s indexing support...&#xA;    const Vertex&amp; vert = mVertices[index]; &#xA;&#xA;    GXPosition3f32(vert.pos.x, vert.pos.y, vert.pos.z);&#xA;    GXNormal3f32(vert.normal.x, vert.normal.y, vert.normal.z);&#xA;    GXTexCoord2f32(vert.uv.x, vert.uv.y);&#xA;}&#xA;GXEnd();&#xA;&#xA;Transformations&#xA;&#xA;  NOTE: This section will assume you&#39;re familiar with matrix transformations. If you don&#39;t know what this is, here&#39;s a link to the first of two pages in the OpenGL tutorial discussing this, which is the explanation that finally made it click for me.&#xA;&#xA;The first important matrix is the model matrix. This matrix&#39;s job is to convert model-space vertices into world-space. This is useful when we want to rotate, scale, or translate an object in the world.&#xA;&#xA;To look around in our scene, we need to set up a view matrix, which takes care of translating world-space into view-space. 
Finally, we&#39;ll need a projection matrix that turns the given view-space into clip-space, at which point the GPU takes over and handles stuff like culling and converting to non-homogeneous coordinates.&#xA;&#xA;In normal graphics, we tend to pair view and projection together, and leave model on its own for transforming normals and other data to worldspace in the shader. The Wii however takes a different approach: load the combined modelView matrix, and separately handle the projection.&#xA;&#xA;The reason for this is quite interesting: GX instead expects you to give it light information in view space rather than the usual worldspace. We&#39;ll cover simple lighting in a later section.&#xA;&#xA;So, we must only set these two matrices to handle all of our transformation needs:&#xA;    GXLoadProjectionMtx(projectionMat, GXPERSPECTIVE);&#xA;&#xA;    // use the same matrix for positions and normals&#xA;    GXLoadPosMtxImm(modelViewMat, GXPNMTX0);&#xA;    GXLoadNrmMtxImm(modelViewMat, GXPNMTX0);&#xA;&#xA;A lone untextured cube on a blue background&#xA;&#xA;Textures&#xA;&#xA;Textures are actually really easy! We can directly bind a byte array as a texture, since the CPU and GPU have that 24MB of shared RAM.&#xA;&#xA;I initially tried to use the Wii&#39;s native format (TPL), which has some really cool features such as the CMPR compressed texture encoding, which has the GPU decompress the texture live when it needs the data, at (seemingly) no performance cost. Awesome!&#xA;&#xA;Sadly, I couldn&#39;t get it working...&#xA;&#xA;The vehicle, with a rainbow corrupted-looking texture&#xA;&#xA;Even using basic TPL, there were some gnarly artifacts:&#xA;A close-up of a wing from the vehicle, with bizarre texture artifacts&#xA;&#xA;I finally caved and decided to just use PNG and decode it to a raw RGBA8 byte array, bypassing TPL entirely. 
This got rid of the artifacts, so I guess we&#39;ll never know why they happened!&#xA;GXInitTexObj(&amp;mTexture, decodedData, width, height, GXTFRGBA8, GXCLAMP, GXCLAMP, GXFALSE);&#xA;&#xA;To use the texture, we can simply bind the texture object that we got during init to the index we want to sample from:&#xA;GXLoadTexObj(constcastGXTexObj(&amp;mTexture), GXTEXMAP0);&#xA;By default, GX reads from GXTEXMAP0 when it draws triangles, so this is actually all we needed to do!&#xA;&#xA;The vehicle, with textures&#xA;&#xA;Transparent textures&#xA;We can set up blending with the alpha channel like so:&#xA;GXSetBlendMode(GXBMBLEND, GXBLSRCALPHA, GXBLINVSRCALPHA, GXLOOR);&#xA;This tells GX that, when blending two transparent samples, it should take the previous pixel&#39;s alpha value (GXBLSRCALPHA) and the inverse of the new one&#39;s alpha (GXBLINVSRCALPHA). I&#39;m not sure what the GXLOOR is for, but blending sure does seem to work so I&#39;m keeping it. There&#39;s a good explanation of this exact blend function over at LearnOpenGL.&#xA;&#xA;Although on a first glance transparency seems to work, there&#39;s a pretty big issue that appears if you look at the fire effect from close up (I don&#39;t have a screenshot from the Wii build, so this one&#39;s from DirectX, however the same effect is visible)!&#xA;&#xA;A close-up of the fire effect, where some planes are writing to the Z-buffer and causing fire that should be drawn behind it to get skipped instead&#xA;&#xA;One of the triangles that makes up the effect is getting drawn before another triangle that should render behind it... the first one writes to the Z-buffer, causing the second triangle to get discarded. This is usually good, because it skips drawing pixels that are fully occluded, and makes sure stuff that&#39;s behind a model doesn&#39;t end up getting drawn over it. 
In the case of translucent images however, we get artifacts like the one above.&#xA;&#xA;This image was rendered with the Z buffer entirely disabled, which shows why we need it:&#xA;The vehicle, but with bad Z sorting&#xA;&#xA;The solution is thankfully quite simple:&#xA;if(mUseZBuffer)&#xA;{&#xA;    GXSetZMode(GXTRUE, GXLEQUAL, GXTRUE);&#xA;}&#xA;else&#xA;{&#xA;    GXSetZMode(GXTRUE, GXLEQUAL, GXFALSE);&#xA;}&#xA;Set mUseZBuffer to false for models using transparent textures, and that last GXFALSE in the GXSetZMode disables writing to the Z buffer. Note that we still want reading (the first GXTRUE), as otherwise the fire effect would end up rendering over our vehicle mesh!&#xA;&#xA;&#34;&#34;&#34;Shaders&#34;&#34;&#34;&#xA;&#xA;Unlike modern APIs, the Wii&#39;s GPU is not programmable with arbitrary shaders. Instead, we can play with something quite powerful called texture evaluation (TEV) stages. We&#39;ve got a whopping 16 TEV stages to play with, which Nintendo graciously calls a &#34;flexible fixed-pipeline.&#34; Each stage is essentially a configurable linear interpolation (lerp) between two values A and B by a factor of C. Finally, a fourth value D is added to the result.&#xA;u8 TEVstage(u8 a, u8 b, u8 c, u8 d)&#xA;{&#xA;    return d + (a  (1.0 - c) + b  c);&#xA;}&#xA;  NOTE: There&#39;s also optional negation, scale, bias, and clamping. I&#39;m skipping over them here because I didn&#39;t end up using them. There&#39;s more complete documentation available here.&#xA;&#xA;The source of A, B, C, and D can all be configured per stage. You could, for example, have it lerp between your texture&#39;s color and the light color based on the amount of specular lighting it receives. I tried to set this up with lots of help from Jasper (thanks again!) but ultimately it didn&#39;t work. I&#39;d like to try again sometime in the future!&#xA;&#xA;Diffuse lighting&#xA;&#xA;The Wii&#39;s GPU features built-in per-vertex lighting. 
This means that you can (optionally) tell it to calculate how much light each vertex receives from up to eight light sources, which can be either distance-attenuated (like a lamp) or angle-attenuated (like a spotlight).&#xA;&#xA;GX provides a type GXLightObj that we can load and then set up with all our parameters. For the renderer I was making, I needed to set up a &#34;sun&#34; light, which is a very far away point light with (practically) no attenuation.&#xA;&#xA;  NOTE: normally in graphics programming, this is  be done with a simple directional light. However the way I got it to work on the Wii was by simulating this attenuation-free point light model, so I went with that.&#xA;&#xA;This is the bit of code that initializes it every frame:&#xA;GXSetChanAmbColor(GXCOLOR0, ambientColor);&#xA;GXSetChanMatColor(GXCOLOR0, materialColor);&#xA;&#xA;GXSetChanCtrl(&#xA;        GXCOLOR0, GXENABLE,&#xA;        GXSRCREG, GXSRCREG,&#xA;        GXLIGHT0, GXDFCLAMP, GXAFNONE);&#xA;&#xA;guVector lightPos = { -lightDirWorld.x  100.f, -lightDirWorld.y  100.f, -lightDirWorld.z  100.f };&#xA;guVecMultiply(viewMatNoTrans, &amp;lightPos, &amp;lightPos);&#xA;&#xA;GXLightObj lightObj;&#xA;GXInitLightPos(&amp;lightObj, lightPos.x, lightPos.y, lightPos.z);&#xA;GXInitLightColor(&amp;lightObj, lightColor);&#xA;&#xA;GXLoadLightObj(&amp;lightObj, GXLIGHT0);&#xA;Let&#39;s go over each step.&#xA;&#xA;Color registers&#xA;First, we tell GX what ambient and material colors we&#39;ll use. The ambient color is used for lighting all vertices, no matter of received light. This makes sure the back of our mesh is not just pure black. The material color will tint your whole model (it&#39;s like a global vertex color), so I keep it as white.&#xA;&#xA;Channel setup&#xA;GXSetChanCtrl configures the lighting channel we&#39;ll use. We want the light to affect GXCOLOR0, which is where our texture will be. We tell it to get the ambient and material color from the registers we set just before (GXSRCREG). 
We set GXLIGHT0 as a light that affects this channel, with the default diffuse function GXDFCLAMP. Finally, we disable attenuation by passing GXAFNONE, meaning our light can be infinitely far away but still light our model as if it were right next to it. &#xA;&#xA;Position transformation&#xA;We then calculate the light position, which is very far away opposite to the direction it&#39;ll shine. Note that we multiply it with the view matrix (with the translation part stripped out) as light stuff is in view space!&#xA;&#xA;Light object creation&#xA;Finally we create our GXLightObj, giving it its position and color, and load it into the GXLIGHT0 channel. Make sure to disable lighting on the fire (it makes its own light, wouldn&#39;t make sense to be in shadow) and wham! There&#39;s our sun!&#xA;&#xA;Final picture of the Wii rendering of the vehicle&#xA;&#xA;  You can find all my lighting and TEV code in Effect.cpp. The filename is unfortunate, but as this was initially a DirectX project, I was stuck with that name from the header.&#xA;&#xA;We&#39;re done!&#xA;I quickly built a GameCube version the night before the due date, and submitted the required .exe alongside my sneaky .dol binaries with no further elaboration. I wanted to keep the surprise. I showed up the next day to campus with a very full backpack, and when it was time pulled out the Wii to present my &#34;extra features to raise your grades, but those are not mandatory.&#34; It seemed to make quite a splash! Looks like I&#39;m not the only one who grew up with the Wii :)&#xA;&#xA;You can download a build here. Wiimote and GameCube controls are supported!&#xA;&#xA;Tags for fedi: #homebrew #wii #gamecube #gcn #devkitpro #directx #linux #graphics #gamedev #foss #retrocomputing&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! 
Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at]]&gt;</description>
      <content:encoded><![CDATA[<p>As a game developer, some of my most creative work has come from embracing limitations rather than fighting against them. As counterintuitive as it sounds, clamping down on hardware capabilities or abstractions forces you to think outside the box much more.</p>

<p>To give you this experience, there&#39;s online fantasy consoles such as <a href="https://www.lexaloffle.com/bbs/?cat=7">PICO-8</a> (nonfree) and <a href="https://tic80.com/play">TIC80</a> which make it super accessible to prototype and finish small experiences. There&#39;s also hardware like the <a href="https://play.date/">Playdate</a> (nonfree) that further plays with input methods and form factors to really constrain your playground. Finally, there&#39;s the thriving homebrew communities around consoles such as the SNES and the N64 (check out <a href="https://github.com/mwpenny/portal64-still-alive">this awesome demake of Portal</a>!).</p>

<p>I&#39;ve personally always had a soft spot for the Wii. Partially because I grew up with its incredible games such as Super Mario Galaxy 2 but also because Wii game modding gave me a peek at what would later be my career: game development. Although I&#39;ve dabbled with Wii development in the past, I never felt I really understood what I was doing. A couple months ago, I set out to fix this. Armed with my finished DirectX assignment for a university Graphics Programming course, and the open door of “you can add extra features to raise your grades, but those are not mandatory,” I thought of this: what if I show up to the exams with my Wii, and do the presentation on it?</p>

<p><img src="https://allpurposem.at/blog/wii-directx.png" alt="A picture of a messy table with a Wiimote and GameCube controller, with a CRT hooked up to an offscreen GameCube showing a vehicle with a fire effect behind it"></p>

<h2 id="directx-on-the-wii-jk">DirectX on the Wii (jk)</h2>

<p>As excited as I was to enact this idea, I knew that I wasn&#39;t just going to compile my DirectX shaders and code for the Wii&#39;s CPU and call it a day. DirectX is, uh, not very portable or compatible with the Wii. The Wii is equipped with a GPU codenamed “Hollywood,” which has a whopping 24MB of video RAM and no hardware support for any sort of shader. It really makes you appreciate some of the amazing scenes crafted on this console.</p>

<p><img src="https://allpurposem.at/blog/slimy-spring-noclip.png" alt="A shot of the starting area in Slimy Spring Galaxy"></p>

<blockquote><p><a href="https://noclip.website/#smg2/UnderGroundDangeonGalaxy;ShareData=ASxg2UbN$ZT%5E!t89Z&amp;eU=U~m1Q&amp;VKZUu5B7UU:B&amp;WRQ%7DWT%7Bo3%7B9WZ:AUkC%5BRW~2">Click here to explore Slimy Spring Galaxy on noclip.website</a></p></blockquote>

<p>So, we must speak Hollywood&#39;s own API (called GX) to coax it into rendering a mesh with textures and transparency (as required by the assignment).</p>

<blockquote><p>NOTE: In the final project, I&#39;ve created a GX folder to hold all GX-specific code, and isolated the DirectX stuff into a separate folder called SDL. This way, I can control which platform-specific code is used via a simple CMake option. If you&#39;d like to follow along, you can find everything <a href="https://git.allpurposem.at/mat/GraphicsProg1_DirectX/src/branch/main/source/GX">here</a>.</p></blockquote>

<h2 id="libogc">libogc</h2>

<p>To access this API from C++, there&#39;s a library maintained by the folks at <a href="https://blog.allpurposem.at/@/devkitPro@mastodon.gamedev.place" class="u-url mention">@<span>devkitPro@mastodon.gamedev.place</span></a> called <a href="https://github.com/devkitPro/libogc">libogc</a>. This library, combined with the PowerPC toolchain, allows one to build programs targeting the Wii (and the GameCube, since they&#39;re so similar!).</p>

<blockquote><p>NOTE: whenever I refer to the Wii from now on, it (mostly) also applies to the GameCube.</p></blockquote>

<p>Although devkitPro themselves do not have a CMake toolchain file available, I was able to find an MIT-licensed one courtesy of the <a href="https://github.com/hoverguys/rehover">rehover</a> homebrew game. Passing this toolchain file to CMake automatically sets it up to build for the Wii. Cool stuff!</p>
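
<p>For reference, wiring a toolchain file into CMake looks something like this (the toolchain file path below is a placeholder for this post; point it at wherever you put rehover&#39;s file):</p>

<pre><code class="language-shell"># Configure an out-of-source build that cross-compiles for the Wii.
# NOTE: the toolchain file path is a placeholder, not the real location.
cmake -B build-wii -DCMAKE_TOOLCHAIN_FILE=cmake/wii.toolchain.cmake
cmake --build build-wii
</code></pre>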

<blockquote><p>NOTE: The rest of this post is accurate to the best of my understanding, but it is likely I got some things wrong! If you need accurate info, I suggest you take a look at <a href="https://github.com/devkitPro/libogc/blob/master/gc/ogc/gx.h">libogc&#39;s gx.h with all the functions and comments</a> as well as the <a href="https://github.com/devkitPro/wii-examples/tree/master/graphics/gx">official devkitPro GX examples</a> rather than following my own code. Comments, questions, and corrections are as always welcome at my fedi handle <a href="https://blog.allpurposem.at/@/mat@mastodon.gamedev.place" class="u-url mention">@<span>mat@mastodon.gamedev.place</span></a>!</p></blockquote>

<h3 id="video-setup">Video setup</h3>

<p>I won&#39;t dwell on the init too long, as <del>most of it is just taken from <a href="https://github.com/devkitPro/wii-examples/blob/master/graphics/gx/neheGX/lesson04/source/lesson4.c#L17">a libogc example</a></del> it&#39;s not too thrilling. What is cool though is how, to do v-sync, we create two “framebuffers” which are merely integer arrays... on the CPU? This is where one of the big differences in the Wii&#39;s hardware design compared to a modern computer comes in: both the CPU and GPU have access to 24MB of shared RAM. Meanwhile on a modern PC, the GPU will exclusively have its own dedicated RAM which the CPU cannot touch directly.</p>

<p>This shared RAM is where we store these framebuffer arrays, named “eXternal Frame Buffers” or XFBs by the Wii hacking scene (<a href="https://forums.dolphin-emu.org/Thread-what-is-the-difference-between-xfb-and-efb?pid=505591#pid505591">source</a>). Because access to this so-called “main” RAM is slow, having the GPU work on the XFB directly would be slow too; the GPU therefore has its own bit of actually private RAM which stores what&#39;s officially called the “Embedded Frame Buffer” (EFB). GX draw commands work on the super-fast EFB, and when our frame is ready we can copy the EFB into the XFB, for the Video Interface to read and finally display to the screen. This buffer copy is loosely equivalent to “presenting” the frame as is done in the APIs we&#39;re used to.</p>

<pre><code>                 ┌─────────┐                           
                 │         │                           
       ┌─────────┤   CPU   ├───────────────────┐           
       │         │         │                   │           
       │         └─────────┘                   ▼           
       │                                  GX drawcalls      
       │                                       │           
       ▼                          ┌────────────▼──────────┐
Create XFB arrays                 │                       │
       │                          │ GPU private RAM (EFB) │
       │                          │                       │
       │                          └────────────┬──────────┘
       │                                       ▼           
       │                           Copy EFB to current XFB 
       │                                       │           
       │      ┌───────────────────────┐        │           
       │      │                       │        │           
       └──────► Shared MEM1 RAM (24MB) ◄───────┘           
              │                       │                    
              └─────────┬─────────────┘                    
                        ▼                              
                  Display frame                        
               ┌────────▼────────┐                     
               │                 │                     
               │ Video Interface │                     
               │                 │                     
               └─────────────────┘                     
</code></pre>

<p>The following code specifically handles finishing the frame and displaying it:</p>

<pre><code class="language-cpp">void GraphicsContext::Swap()
{
    GX_DrawDone();

    GX_SetZMode(GX_TRUE, GX_LEQUAL, GX_TRUE);
    GX_SetColorUpdate(GX_TRUE);
    GX_CopyDisp(g_Xfb[g_WhichFB], GX_TRUE);

    VIDEO_SetNextFramebuffer(g_Xfb[g_WhichFB]);
    VIDEO_Flush();
    VIDEO_WaitVSync();
    
    g_WhichFB ^= 1; // flip framebuffer
}
</code></pre>

<p>Every time a frame is done, we tell the GPU we&#39;re done via <code>GX_DrawDone()</code> and then do our EFB → XFB copy via <code>GX_CopyDisp(g_Xfb[g_WhichFB], GX_TRUE)</code> (where <code>g_Xfb</code> is our two XFBs and <code>g_WhichFB</code> is a single bit we flip every frame). Then we notify the Video Interface of the framebuffer it should display with a call to <code>VIDEO_SetNextFramebuffer(g_Xfb[g_WhichFB])</code>. Finally, <code>VIDEO_Flush()</code> and <code>VIDEO_WaitVSync()</code> ensure we don&#39;t start rendering the next frame before this one is displayed.</p>
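
<p>The flip itself is just a one-bit index alternating between the two XFBs. Here&#39;s a tiny self-contained sketch of that bookkeeping (my own toy model with the GX/VIDEO calls replaced by a counter, not code from the project):</p>

<pre><code class="language-cpp">// Toy model of the double-buffered present loop: counts how many times
// each of the two "XFBs" would be handed to the Video Interface over
// `frames` frames, starting on buffer 0.
void presentCounts(int frames, unsigned counts[2])
{
    counts[0] = 0;
    counts[1] = 0;
    unsigned whichFB = 0;
    for (int frame = 0; frame != frames; ++frame)
    {
        counts[whichFB]++; // stand-in for GX_CopyDisp + VIDEO_SetNextFramebuffer
        whichFB ^= 1;      // flip framebuffer, just like g_WhichFB ^= 1
    }
}
</code></pre>

<p>While the Video Interface scans out one XFB, the next frame is copied into the other one, so a buffer is never read and written at the same time.</p>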

<h2 id="drawing-a-mesh">Drawing a mesh</h2>

<p>Now that we know how framebuffers work on the Wii, let&#39;s get to drawing our mesh!</p>

<h3 id="vertex-attribute-setup">Vertex attribute setup</h3>

<p>Before we can start pushing triangles via GX, we must tell it what kind of data to expect. This is done in two steps:</p>
<ol><li>First, we tell GX that we will be giving it our vertex data directly every frame, rather than having it fetch it from an array via indices.<br>
<code>GX_SetVtxDesc(GX_VA_POS, GX_DIRECT);</code><br></li>
<li>We then access the vertex format table at index 0 (<code>GX_VTXFMT0</code>), and set its position attribute (<code>GX_VA_POS</code>) as follows:
<ul><li>The data consists of three values for XYZ coords (<code>GX_POS_XYZ</code>)</li>
<li>Each value is a 32-bit floating point number (<code>GX_F32</code>)</li>
<li>The last argument is the number of fractional bits used by fixed-point formats; it&#39;s unused for <code>GX_F32</code>, so zero is fine.</li></ul></li></ol>

<p><code>GX_SetVtxAttrFmt(GX_VTXFMT0, GX_VA_POS, GX_POS_XYZ, GX_F32, 0);</code></p>

<p>Both of those functions are then repeated for normals and texture data, if needed.</p>
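<p>For reference, the matching calls for normals and texture coordinates would look something like this (a sketch using libogc&#39;s GX constants; adjust to the attributes your mesh actually carries):</p>

<pre><code class="language-cpp">// Same two-step dance, repeated per attribute:
GX_SetVtxDesc(GX_VA_NRM, GX_DIRECT);
GX_SetVtxAttrFmt(GX_VTXFMT0, GX_VA_NRM, GX_NRM_XYZ, GX_F32, 0);

GX_SetVtxDesc(GX_VA_TEX0, GX_DIRECT);
GX_SetVtxAttrFmt(GX_VTXFMT0, GX_VA_TEX0, GX_TEX_ST, GX_F32, 0);
</code></pre>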

<blockquote><p>NOTE: The Wii&#39;s GPU supports indexed drawing, where vertex data is stored in an array and drawn using indices into that array. This allows fewer vertices to be defined while reusing them.<br>
I didn&#39;t know about this until I finished this project, so we&#39;ll be sticking with non-indexed drawing. The concept is quite similar, but you&#39;d set the vertex desc to <code>GX_INDEX8</code> and bind an array before calling <code>GX_Begin</code>. You&#39;d then pass indices rather than vertex data inside the begin/end block.</p></blockquote>

<h3 id="drawcalls">Drawcalls</h3>

<p>Each frame, we must queue up some commands in the GPU&#39;s first-in-first-out buffer. We tell GX it&#39;s time to draw some primitives via the <code>GX_Begin</code> function, passing along the type of primitives (triangles!), the index in the vertex format table we filled in earlier, and the number of vertices we&#39;ll be drawing.
Afterward, we hand over the data in order by calling the respective function for each attribute we configured.
Finally, we cap it off with a <code>GX_End()</code> (which libogc just defines as an empty function, so I guess it may just be syntax/API sugar).</p>

<pre><code class="language-cpp">GX_Begin(GX_TRIANGLES, GX_VTXFMT0, m_Indices.size()); // one vertex pushed per index
for(uint32_t index : m_Indices)
{
    // NOTE: really wish I used GX&#39;s indexing support...
    const Vertex&amp; vert = m_Vertices[index]; 

    GX_Position3f32(vert.pos.x, vert.pos.y, vert.pos.z);
    GX_Normal3f32(vert.normal.x, vert.normal.y, vert.normal.z);
    GX_TexCoord2f32(vert.uv.x, vert.uv.y);
}
GX_End();
</code></pre>

<h3 id="transformations">Transformations</h3>

<blockquote><p>NOTE: This section will assume you&#39;re familiar with matrix transformations. If you don&#39;t know what this is, <a href="https://learnopengl.com/Getting-started/Transformations">here&#39;s a link</a> to the first of two pages in the OpenGL tutorial discussing this, which is the explanation that finally made it click for me.</p></blockquote>

<p>The first important matrix is the model matrix. This matrix&#39;s job is to convert model-space vertices into world-space. This is useful when we want to rotate, scale, or translate an object in the world.</p>

<p>To look around in our scene, we need to set up a view matrix, which takes care of translating world-space into view-space. Finally, we&#39;ll need a projection matrix that turns the given view-space into clip-space, at which point the GPU takes over and handles stuff like culling and converting to non-homogeneous coordinates.</p>

<p>In modern graphics APIs, we tend to pair view and projection together, and keep the model matrix on its own for transforming normals and other data to world space in the shader. The Wii, however, takes a different approach: you load the combined modelview matrix, and handle the projection separately.</p>
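<p>As a sanity check in plain C++ (not GX): combining model and view into one matrix gives the same result as applying them one after the other, which is why handing GX a single modelview matrix loses nothing. A minimal row-major sketch (the layout here is illustrative, not the Wii&#39;s native matrix format):</p>

<pre><code class="language-cpp">#include &lt;array&gt;
#include &lt;cassert&gt;

using Mat4 = std::array&lt;float, 16&gt;; // row-major 4x4
using Vec4 = std::array&lt;float, 4&gt;;

// c = a * b (standard matrix product)
Mat4 Mul(const Mat4&amp; a, const Mat4&amp; b)
{
    Mat4 c{};
    for (int r = 0; r &lt; 4; r++)
        for (int col = 0; col &lt; 4; col++)
            for (int k = 0; k &lt; 4; k++)
                c[r * 4 + col] += a[r * 4 + k] * b[k * 4 + col];
    return c;
}

Vec4 Transform(const Mat4&amp; m, const Vec4&amp; v)
{
    Vec4 out{};
    for (int r = 0; r &lt; 4; r++)
        for (int k = 0; k &lt; 4; k++)
            out[r] += m[r * 4 + k] * v[k];
    return out;
}

Mat4 Translate(float x, float y, float z)
{
    return { 1,0,0,x,  0,1,0,y,  0,0,1,z,  0,0,0,1 };
}
</code></pre>

<p>Transforming a point by <code>Mul(view, model)</code> lands on the same result as transforming by <code>model</code> and then <code>view</code>, so the combined matrix is all GX ever needs for positions.</p>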

<p>The reason for this is quite interesting: GX expects you to give it light information in <em>view</em> space rather than the usual world space. We&#39;ll cover simple lighting in a later section.</p>

<p>So, we must only set these two matrices to handle all of our transformation needs:</p>

<pre><code class="language-cpp">GX_LoadProjectionMtx(projectionMat, GX_PERSPECTIVE);

// use the same matrix for positions and normals
GX_LoadPosMtxImm(modelViewMat, GX_PNMTX0);
GX_LoadNrmMtxImm(modelViewMat, GX_PNMTX0);
</code></pre>

<p><img src="https://allpurposem.at/blog/wii-cube.png" alt="A lone untextured cube on a blue background"></p>

<h2 id="textures">Textures</h2>

<p>Textures are actually really easy! We can directly bind a byte array as a texture, since the CPU and GPU have that 24MB of shared RAM.</p>

<p>I initially tried to use the Wii&#39;s native format (TPL), which has some really cool features such as the <code>CMPR</code> compressed texture encoding, which has the GPU decompress the texture live when it needs the data, at (seemingly) no performance cost. Awesome!</p>

<p>Sadly, I couldn&#39;t get it working...</p>

<p><img src="https://allpurposem.at/blog/wii-tpl-cmpr-lol.png" alt="The vehicle, with a rainbow corrupted-looking texture"></p>

<p>Even using basic TPL, there were some gnarly artifacts:
<img src="https://allpurposem.at/blog/wii-tpl-borked.png" alt="A close-up of a wing from the vehicle, with bizarre texture artifacts"></p>

<p>I finally caved and decided to just use PNG and decode it to a raw RGBA8 byte array, bypassing TPL entirely. This got rid of the artifacts, so I guess we&#39;ll never know why they happened!</p>

<pre><code class="language-cpp">GX_InitTexObj(&amp;m_Texture, decodedData, width, height, GX_TF_RGBA8, GX_CLAMP, GX_CLAMP, GX_FALSE);
</code></pre>

<p>To use the texture, we can simply bind the texture object that we got during init to the index we want to sample from:</p>

<pre><code class="language-cpp">GX_LoadTexObj(const_cast&lt;GXTexObj*&gt;(&amp;m_Texture), GX_TEXMAP0);
</code></pre>

<p>By default, GX reads from <code>GX_TEXMAP0</code> when it draws triangles, so this is actually all we needed to do!</p>

<p><img src="https://allpurposem.at/blog/wii-textures.png" alt="The vehicle, with textures"></p>

<h3 id="transparent-textures">Transparent textures</h3>

<p>We can set up blending with the alpha channel like so:</p>

<pre><code class="language-cpp">GX_SetBlendMode(GX_BM_BLEND, GX_BL_SRCALPHA, GX_BL_INVSRCALPHA, GX_LO_OR);
</code></pre>

<p>This tells GX that, when blending, it should weight the incoming pixel&#39;s color by its alpha (<code>GX_BL_SRCALPHA</code>) and the existing framebuffer pixel&#39;s color by one minus that alpha (<code>GX_BL_INVSRCALPHA</code>). The <code>GX_LO_OR</code> selects a logical operation that only applies in <code>GX_BM_LOGIC</code> mode, so it&#39;s ignored here. There&#39;s a good explanation of this exact blend function over at <a href="https://learnopengl.com/Advanced-OpenGL/Blending">LearnOpenGL.</a></p>
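<p>In plain C++ terms, the math GX performs per color channel with this blend mode is roughly the following (a sketch of the equation, not actual GX code):</p>

<pre><code class="language-cpp">#include &lt;algorithm&gt;

// out = src * srcAlpha + dst * (1 - srcAlpha), per channel.
// &#34;src&#34; is the incoming fragment, &#34;dst&#34; the pixel already in the EFB;
// values are treated as floats in [0, 1] for clarity.
float BlendChannel(float src, float dst, float srcAlpha)
{
    return std::clamp(src * srcAlpha + dst * (1.0f - srcAlpha), 0.0f, 1.0f);
}
</code></pre>

<p>A fully opaque fragment (alpha 1) replaces the destination outright, while alpha 0.5 mixes the two evenly.</p>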

<p>Although at first glance transparency seems to work, a pretty big issue appears if you look at the fire effect up close (I don&#39;t have a screenshot from the Wii build, so this one&#39;s from DirectX, but the same artifact is visible):</p>

<p><img src="https://allpurposem.at/blog/firefx-zbuffer.png" alt="A close-up of the fire effect, where some planes are writing to the Z-buffer and causing fire that should be drawn behind it to get skipped instead"></p>

<p>One of the triangles that makes up the effect is drawn before another triangle that should render behind it: the first one writes to the Z-buffer, causing the second triangle to get discarded. This is usually good, because it skips drawing pixels that are fully occluded, and makes sure stuff that&#39;s <em>behind</em> a model doesn&#39;t end up getting drawn <em>over</em> it. With translucent textures, however, we get artifacts like the one above.</p>

<p>This image was rendered with the Z buffer entirely disabled, which shows why we need it:
<img src="https://allpurposem.at/blog/wii-no-zbuffer.png" alt="The vehicle, but with bad Z sorting"></p>

<p>The solution is thankfully quite simple:</p>

<pre><code class="language-cpp">if(m_UseZBuffer)
{
    GX_SetZMode(GX_TRUE, GX_LEQUAL, GX_TRUE);
}
else
{
    GX_SetZMode(GX_TRUE, GX_LEQUAL, GX_FALSE);
}
</code></pre>

<p>Set <code>m_UseZBuffer</code> to false for models using transparent textures, and that last <code>GX_FALSE</code> in the <code>GX_SetZMode</code> disables writing to the Z buffer. Note that we still want reading (the first <code>GX_TRUE</code>), as otherwise the fire effect would end up rendering <em>over</em> our vehicle mesh!</p>
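<p>To see why this fixes the artifact, here&#39;s a tiny depth-buffer simulation in plain C++ (not GX): two translucent fragments land on the same pixel, near one first. With Z writes on, the far fragment fails the <code>GX_LEQUAL</code> test and is dropped; with writes off, both get blended.</p>

<pre><code class="language-cpp">struct Pixel
{
    float depth = 1.0f;   // cleared to the far plane
    int blended = 0;      // how many fragments made it through
};

// One fragment through an LEQUAL depth test; zWrite mirrors
// the last argument of GX_SetZMode.
void DrawFragment(Pixel&amp; px, float z, bool zWrite)
{
    if (z &gt; px.depth)
        return;           // depth test failed: fragment discarded
    px.blended++;         // fragment survives and gets blended
    if (zWrite)
        px.depth = z;     // only update the Z buffer if writes are on
}

int FragmentsBlended(bool zWrite)
{
    Pixel px;
    DrawFragment(px, 0.5f, zWrite); // near fire triangle, drawn first
    DrawFragment(px, 0.8f, zWrite); // triangle behind it, drawn second
    return px.blended;
}
</code></pre>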

<h2 id="shaders">“““Shaders”””</h2>

<p>Unlike modern APIs, the Wii&#39;s GPU is not programmable with arbitrary shaders. Instead, we can play with something quite powerful called <strong>t</strong>exture <strong>env</strong>ironment (TEV) stages. We&#39;ve got a whopping 16 TEV stages to play with, which Nintendo graciously calls a “flexible fixed-pipeline.” Each stage is essentially a configurable linear interpolation (lerp) between two values A and B by a factor of C. Finally, a fourth value D is added to the result.</p>

<pre><code class="language-cpp">u8 TEV_stage(u8 a, u8 b, u8 c, u8 d)
{
    return d + (a * (1.0 - c) + b * c);
}
</code></pre>

<blockquote><p>NOTE: There&#39;s also optional negation, scale, bias, and clamping. I&#39;m skipping over them here because I didn&#39;t end up using them. There&#39;s more complete documentation available <a href="http://www.amnoid.de/gc/tev.html">here.</a></p></blockquote>

<p>The source of A, B, C, and D can all be configured per stage. You could, for example, have it lerp between your texture&#39;s color and the light color based on the amount of specular lighting it receives. I tried to set this up with lots of help from <a href="https://github.com/magcius">Jasper</a> (thanks again!) but ultimately it didn&#39;t work. I&#39;d like to try again sometime in the future!</p>

<h3 id="diffuse-lighting">Diffuse lighting</h3>

<p>The Wii&#39;s GPU features built-in per-vertex lighting. This means that you can (optionally) tell it to calculate how much light each vertex receives from up to eight light sources, which can be either distance-attenuated (like a lamp) or angle-attenuated (like a spotlight).</p>

<p>GX provides a type <code>GXLightObj</code> that we can set up with all our parameters and then load. For the renderer I was making, I needed to set up a “sun” light, which is a very far away point light with (practically) no attenuation.</p>

<blockquote><p>NOTE: normally in graphics programming, this is done with a simple directional light. However, the way I got it to work on the Wii was by simulating this attenuation-free point light model, so I went with that.</p></blockquote>

<p>This is the bit of code that initializes it every frame:</p>

<pre><code class="language-cpp">GX_SetChanAmbColor(GX_COLOR0, ambientColor);
GX_SetChanMatColor(GX_COLOR0, materialColor);

GX_SetChanCtrl(
        GX_COLOR0, GX_ENABLE,
        GX_SRC_REG, GX_SRC_REG,
        GX_LIGHT0, GX_DF_CLAMP, GX_AF_NONE);

guVector lightPos = { -lightDirWorld.x * 100.f, -lightDirWorld.y * 100.f, -lightDirWorld.z * 100.f };
guVecMultiply(viewMatNoTrans, &amp;lightPos, &amp;lightPos);

GXLightObj lightObj;
GX_InitLightPos(&amp;lightObj, lightPos.x, lightPos.y, lightPos.z);
GX_InitLightColor(&amp;lightObj, lightColor);

GX_LoadLightObj(&amp;lightObj, GX_LIGHT0);
</code></pre>

<p>Let&#39;s go over each step.</p>

<h4 id="color-registers">Color registers</h4>

<p>First, we tell GX what ambient and material colors we&#39;ll use. The ambient color lights all vertices regardless of how much light they receive, which makes sure the back of our mesh isn&#39;t just pure black. The material color tints the whole model (it&#39;s like a global vertex color), so I keep it white.</p>

<h4 id="channel-setup">Channel setup</h4>

<p><code>GX_SetChanCtrl</code> configures the lighting channel we&#39;ll use. We want the light to affect <code>GX_COLOR0</code>, which is where our texture will be. We tell it to get the ambient and material color from the registers we set just before (<code>GX_SRC_REG</code>). We set <code>GX_LIGHT0</code> as a light that affects this channel, with the default diffuse function <code>GX_DF_CLAMP</code>. Finally, we disable attenuation by passing <code>GX_AF_NONE</code>, meaning our light can be infinitely far away yet still light our model as if it were right next to it.</p>
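<p>The diffuse math behind <code>GX_DF_CLAMP</code> is essentially a clamped dot product per vertex. A plain-C++ sketch of the idea (not the exact hardware formula):</p>

<pre><code class="language-cpp">#include &lt;algorithm&gt;

struct Vec3 { float x, y, z; };

float Dot(const Vec3&amp; a, const Vec3&amp; b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// How much light a vertex receives from direction `toLight`,
// clamped so faces pointing away get zero instead of negative light.
// Both vectors must be normalized.
float DiffuseClamped(const Vec3&amp; normal, const Vec3&amp; toLight)
{
    return std::max(0.0f, Dot(normal, toLight));
}

// The channel then combines this with the registers set above:
// ambient lights everything, material scales the lit part.
float VertexIntensity(float ambient, float material, float diffuse)
{
    return std::min(1.0f, ambient + material * diffuse);
}
</code></pre>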

<h4 id="position-transformation">Position transformation</h4>

<p>We then calculate the light position, placing it very far away, opposite the direction it&#39;ll shine. Note that we multiply it by the view matrix (with the translation part stripped out), since light data lives in view space!</p>
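<p>Numerically, that amounts to pushing a point far out along the negated light direction and rotating it with the translation-free view matrix, mirroring the <code>guVecMultiply</code> call above (plain C++; the 100x factor is the same arbitrary “very far away” scale as in the snippet):</p>

<pre><code class="language-cpp">struct Vec3 { float x, y, z; };

// Rotation-only view matrix: 3x3, row-major, no translation,
// standing in for viewMatNoTrans.
struct Mat3 { float m[9]; };

Vec3 MulMat3(const Mat3&amp; r, const Vec3&amp; v)
{
    return {
        r.m[0] * v.x + r.m[1] * v.y + r.m[2] * v.z,
        r.m[3] * v.x + r.m[4] * v.y + r.m[5] * v.z,
        r.m[6] * v.x + r.m[7] * v.y + r.m[8] * v.z,
    };
}

// World-space light *direction* in, far-away *view-space* position out.
Vec3 SunPositionViewSpace(const Mat3&amp; viewRot, const Vec3&amp; lightDirWorld)
{
    Vec3 pos = { -lightDirWorld.x * 100.f,
                 -lightDirWorld.y * 100.f,
                 -lightDirWorld.z * 100.f };
    return MulMat3(viewRot, pos);
}
</code></pre>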

<h4 id="light-object-creation">Light object creation</h4>

<p>Finally we create our <code>GXLightObj</code>, give it its position and color, and load it into the <code>GX_LIGHT0</code> slot. Make sure to disable lighting on the fire (it emits its own light, so it wouldn&#39;t make sense for it to be shaded) and wham! There&#39;s our sun!</p>

<p><img src="https://allpurposem.at/blog/wii-final.png" alt="Final picture of the Wii rendering of the vehicle"></p>

<blockquote><p>You can find all my lighting and TEV code <a href="https://git.allpurposem.at/mat/GraphicsProg1_DirectX/src/branch/main/source/GX/Effect.cpp">in <code>Effect.cpp</code></a>. The filename is unfortunate, but as this was initially a DirectX project, I was stuck with that name from the header.</p></blockquote>

<h2 id="we-re-done">We&#39;re done!</h2>

<p>I quickly built a GameCube version the night before the due date, and submitted the required .exe alongside my sneaky .dol binaries without further elaboration; I wanted to keep the surprise. I showed up to campus the next day with a <em>very</em> full backpack and, when it was time, pulled out the Wii to present my “extra features to raise your grades, but those are not mandatory.” It seemed to make quite a splash! Looks like I&#39;m not the only one who grew up with the Wii :)</p>

<p>You can download a build <a href="https://allpurposem.at/blog/GraphicsProg1_GX.elf">here</a>. Wiimote and GameCube controls are supported!</p>

<p>Tags for fedi: <a href="https://blog.allpurposem.at/tag:homebrew" class="hashtag"><span>#</span><span class="p-category">homebrew</span></a> <a href="https://blog.allpurposem.at/tag:wii" class="hashtag"><span>#</span><span class="p-category">wii</span></a> <a href="https://blog.allpurposem.at/tag:gamecube" class="hashtag"><span>#</span><span class="p-category">gamecube</span></a> <a href="https://blog.allpurposem.at/tag:gcn" class="hashtag"><span>#</span><span class="p-category">gcn</span></a> <a href="https://blog.allpurposem.at/tag:devkitpro" class="hashtag"><span>#</span><span class="p-category">devkitpro</span></a> <a href="https://blog.allpurposem.at/tag:directx" class="hashtag"><span>#</span><span class="p-category">directx</span></a> <a href="https://blog.allpurposem.at/tag:linux" class="hashtag"><span>#</span><span class="p-category">linux</span></a> <a href="https://blog.allpurposem.at/tag:graphics" class="hashtag"><span>#</span><span class="p-category">graphics</span></a> <a href="https://blog.allpurposem.at/tag:gamedev" class="hashtag"><span>#</span><span class="p-category">gamedev</span></a> <a href="https://blog.allpurposem.at/tag:foss" class="hashtag"><span>#</span><span class="p-category">foss</span></a> <a href="https://blog.allpurposem.at/tag:retrocomputing" class="hashtag"><span>#</span><span class="p-category">retrocomputing</span></a></p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

]]></content:encoded>
      <guid>https://blog.allpurposem.at/making-a-wii-game-in-2024</guid>
      <pubDate>Tue, 09 Apr 2024 20:35:52 +0000</pubDate>
    </item>
    <item>
      <title>PythonPlusPlus: Bridging Worlds with Polyglot Code</title>
      <link>https://blog.allpurposem.at/pythonplusplus-bridging-worlds-with-polyglot-code</link>
      <description>&lt;![CDATA[Picture this: you find yourself immersed in a new job, knee-deep in a C++ codebase, yet your heart yearns for the simplicity and elegance of Python syntax. What do you do? You don&#39;t just conform – you innovate, and boldly submit this to code review:&#xA;&#xA;include &#34;pythonstart.h&#34;&#xA;&#xA;def greet(name):&#xA;    print(&#34;hello, &#34; + name + &#34;!&#34;)&#xA;    return&#xA;&#xA;def greet2(name):&#xA;    print(&#34;how are you, &#34; + name + &#34;?&#34;)&#xA;    return&#xA;&#xA;def bye():&#xA;    print(&#34;ok bye!&#34;)&#xA;    return&#xA;&#xA;include &#34;pythonmid.h&#34;&#xA;&#xA;username = &#34;Mat&#34;&#xA;&#xA;print(&#34;Hello from \&#34;Python\&#34;!&#34;)&#xA;&#xA;greet(username)&#xA;greet2(username)&#xA;print(&#34;getting ready to say bye...&#34;)&#xA;bye()&#xA;&#xA;include &#34;pythonend.h&#34;&#xA;!--more--&#xA;The first code review comes in, and it seems your contribution may be in jeopardy:&#xA;&#xA;  That&#39;s just Python code! It won&#39;t work with the rest of our C++ codebase!&#xA;&#xA;Before they can reject your code, you sharply interject:&#xA;&#xA;  Hey! Did you even test my code?&#xA;&#xA;With skepticism in the air, one of your brave teammates steps up and runs the code through a C++ compiler. To everyone&#39;s amazement, the result is identical to that of running it with Python! The code not only speaks Python, but fluently converses in C++:&#xA;&#xA;$ g++ Python.cpp &amp;&amp; ./a.out&#xA;Hello from &#34;Python&#34;!&#xA;hello, Mat!&#xA;how are you, Mat?&#xA;getting ready to say bye...&#xA;ok bye!&#xA;&#xA;The commit is eventually merged, and your unconventional approach not only saves your job but earns you a place in the annals of the team&#39;s most memorable code submissions. 
You are also banned from touching that codebase again.&#xA;&#xA;How it works: unraveling the enigma&#xA;&#xA;This kind of program is termed as &#34;polyglot,&#34; which literally means &#34;written in multiple languages.&#34; The entire idea of writing one of these relies on finding intersections between the two (or more!) languages&#39; syntax. In Python, a # signifies a comment, while in C++ (and C), it denotes a preprocessor directive. These lines are the key to the program. We&#39;ll see how the preprocessor works (and how we can abuse this!) in a little bit.&#xA;&#xA;You&#39;ll notice the only preprocessor directives in our code are #include statements. Unlike other languages, C and C++ opt for a simple but effective solution to calling external library functions: copy-paste. Seriously.&#xA;&#xA;When I write #include iostream at the top of a C++ file, what actually happens is the entire file called &#34;iostream&#34; (installed system-wide as part of the C++ standard library) gets pasted by the preprocessor, residing now where that #include statement once was. 
You don&#39;t technically have to use the #include directive to get a C++ program calling library functions: you can get the same behavior by just copying the file&#39;s contents manually at the top of your code (but that&#39;s a terrible idea!).&#xA;&#xA;For example, here are two C++ header files:&#xA;&#xA;preamble.h:&#xA;int main()&#xA;{&#xA;    int retVal = 0;&#xA;postlude.h:&#xA;}&#xA;And our beautifully readable code:&#xA;include &#34;preamble.h&#34;&#xA;&#xA;for(int i = 0; i &lt; 4; i++)&#xA;{&#xA;    retVal += 1;&#xA;}&#xA;return retVal;&#xA;&#xA;include &#34;postlude.h&#34;&#xA;&#xA;Let&#39;s run our code through the standalone C/C++ Preprocessor cpp:&#xA;$ cpp code.cpp&#xA;1 &#34;code.cpp&#34;&#xA;1 &#34;preamble.h&#34; 1&#xA;int main()&#xA;{&#xA;    int retVal = 0;&#xA;2 &#34;code.cpp&#34; 2&#xA;&#xA;for(int i = 0; i &lt; 4; i++)&#xA;{&#xA;    retVal += 1;&#xA;}&#xA;return retVal;&#xA;&#xA;1 &#34;postlude.h&#34; 1&#xA;}&#xA;10 &#34;code.cpp&#34; 2&#xA;&#xA;You can see the preprocessor outputs a bunch of lines starting with #. These are a kind of comment meant for us puny humans to understand exactly what the preprocessor did. The first number indicates the line number, and the string in quotes is the filename. The optional number at the end of the line represents a flag, where 1 means it&#39;s the start of an include, and 2 means we are returning to a file after an include is done. You can find the full docs here.&#xA;You can see we start at line 1 in code.cpp, which then includes preamble.h. The contents of preamble.h follow, and afterwards we return back to code.cpp. 
So on and so forth, finally copy-pasting together an amalgamate program that consists of a simple main function that returns 4.&#xA;&#xA;The preprocessor is a very powerful tool, and as long as the final text that is passed to the compiler is valid, anything goes!&#xA;&#xA;Let&#39;s break down the polyglot program from the start of the post:&#xA;&#xA;Functions&#xA;In Python, functions are defined as follows:&#xA;def greet(name):&#xA;    print(&#34;hello, &#34; + name + &#34;!&#34;)&#xA;&#xA;Somehow, we need to translate this into working C++ just through the preprocessor. Because Python allows declaring functions anywhere, but C++ does not, we can use a function pointer instead. C++ has a neat trick here called a lambda, which allows us to define unnamed functions inline, and it&#39;s perfect to have our pointer point to.&#xA;&#xA;Armed with this knowledge, we can use #define to create a macro that will turn def into auto (a special C++ keyword that deduces the type of a variable based on what&#39;s assigned to it), and another macro that turns greet(name) into our lambda definition:&#xA;define def auto &#xA;define greet(arg) greet =  {&#xA;&#xA;Applying this to our Python function from above gets us some of the way there&#xA;auto greet =  {:&#xA;    print(&#34;hello, &#34; + name + &#34;!&#34;)&#xA;&#xA;We still have to handle that pesky : that Python requires at the end of function declarations. Now, where does C++ have a :... aha! The revered ternary operator, that everybody totally loves! Its syntax is as follows: condition ? truthy : falsy. 
We don&#39;t care about the logic here, we just want that sweet : character, so we can add the most cursed ternary expression I&#39;ve ever written to the end of the greet macro:&#xA;define greet(arg) greet =  { false?false&#xA;&#xA;Running the preprocessor through our function, we get the following:&#xA;auto greet =  { false?false:&#xA;    print(&#34;hello, &#34; + name + &#34;!&#34;)&#xA;&#xA;That&#39;s some good progress! There&#39;s three main issues left:&#xA;the ternary operator is left hanging there. We need a &#34;falsy&#34; value for this thrilling and definitely-very-useful comparison to compile.&#xA;there&#39;s no print function in C++ (this project was conceived before std::print was added to C++23).&#xA;we need to close that dangling curly bracket and add a semicolon at the end of our lambda, somehow.&#xA;&#xA;Implementing the print function can be done with a simple function-style macro that just plops the argument into std::cout. This only works for simple prints, but I&#39;m not going for anything more here :)&#xA;&#xA;Additionally, we can knock the unfinished ternary issue out by adding a stray false; at the beginning. Usually this will just do nothing as it just gets discarded, but in the case that a print occurs right after a function definition, it will complete the ternary operator. Hooray! &#xA;&#xA;define print(a) false;std::cout &lt;&lt; (a) &lt;&lt; std::endl; &#xA;&#xA;Now for closing the function... there are no keywords left we can use here. I haven&#39;t found a way to make this work consistently without polluting the print macro with closing brackets that would cause it to break if used more than once or outside of a function. 
Thankfully, Python has a return keyword we can add without changing the behavior of the function:&#xA;def greet(name):&#xA;    print(&#34;hello, &#34; + name + &#34;!&#34;)&#xA;    return&#xA;&#xA;Then on the C++ side, we can redefine it to close our lambda!&#xA;define return return; };&#xA;&#xA;Finally, our simple function now preprocesses to this valid albeit cursed C++ code:&#xA;auto greet =  { false?false:&#xA;    false;std::cout &lt;&lt; (&#34;hello, &#34; + name + &#34;!&#34;) &lt;&lt; std::endl;&#xA;    return; };&#xA;&#xA;&#34;int main&#34;&#xA;&#xA;Here&#39;s the next bit we have to tackle, after the function definitions:&#xA;username = &#34;Mat&#34;&#xA;&#xA;print(&#34;Hello from \&#34;Python\&#34;!&#34;)&#xA;&#xA;greet(username)&#xA;greet2(username)&#xA;print(&#34;getting ready to say bye...&#34;)&#xA;bye()&#xA;  This code was given in one of my university courses to showcase the basics of Python. In it, we create a variable, call a couple functions, and call it a day.&#xA;&#xA;Python allows writing code willy-nilly outside of any function, but in C++ this is not exactly the case, especially if we need to call library functions. Our print statements must reside inside the main function. We can have our initial header (the one with all the function macros) also start the main function by adding a lone int main() { at the end of it. 
We also need a header at the end with the sole purpose of closing that opening bracket:&#xA;&#xA;pythonstart.h:&#xA;define greet(arg) greet =  { false?false&#xA;define print(a) false;std::cout &lt;&lt; (a) &lt;&lt; std::endl; &#xA;define return return; };&#xA;&#xA;// start the main function (will be closed by pythonend.h)&#xA;int main() {&#xA;&#xA;pythonend.h (thrilling):&#xA;}&#xA;&#xA;The code&#xA;&#xA;Looking at the first lines of the actual code, a lot of stuff is missing for it to work in C++:&#xA;username = &#34;Mat&#34;&#xA;print(&#34;Hello from \&#34;Python\&#34;!&#34;)&#xA;&#xA;The first obvious issue is that C++ requires types, while Python does not. We will need a pythonmid.h header to plop a std::string in there and so tell username its type:&#xA;&#xA;pythonmid.h:&#xA;define username std::string username&#xA;&#xA;Then, oh no! Our print macro-function inserts a stray false right after my string literal, causing a compile error! We must redefine print to remove the false prefix, but keep the lone semicolon as it can serve to punctuate the username declaration:&#xA;undef print&#xA;define print(a) ;std::cout &lt;&lt; (a) &lt;&lt; std::endl;&#xA;&#xA;Finally, the function calls:&#xA;greet(username)&#xA;greet2(username)&#xA;print(&#34;getting ready to say bye...&#34;)&#xA;bye()&#xA;&#xA;In short, every function must be redefined to expand into a call rather than a declaration, like so:&#xA;undef greet&#xA;define greet(name) greet(name);&#xA;&#xA;That&#39;s it!&#xA;And there we go! 
Here&#39;s the full &#34;Python&#34; file from the start of this post, put through the C++ preprocessor:&#xA;// -snip- the entire contents of the iostream and string C++ headers&#xA;2 &#34;pythonstart.h&#34; 2&#xA;15 &#34;pythonstart.h&#34;&#xA;&#xA;15 &#34;pythonstart.h&#34;&#xA;int main() {&#xA;4 &#34;Python.cpp&#34; 2&#xA;&#xA;auto greet =  { false?false:&#xA;    false;std::cout &lt;&lt; (&#34;hello, &#34; + name + &#34;!&#34;) &lt;&lt; std::endl;&#xA;    return; };&#xA;&#xA;auto greet2 =  { false?false:&#xA;    false;std::cout &lt;&lt; (&#34;how are you, &#34; + name + &#34;?&#34;) &lt;&lt; std::endl;&#xA;    return; };&#xA;&#xA;auto bye =  { false?false:&#xA;    false;std::cout &lt;&lt; (&#34;ok bye!&#34;) &lt;&lt; std::endl;&#xA;    return; };&#xA;&#xA;1 &#34;pythonmid.h&#34; 1&#xA;25 &#34;pythonmid.h&#34;&#xA;std::string&#xA;20 &#34;Python.cpp&#34; 2&#xA;&#xA;username = &#34;Mat&#34;&#xA;&#xA;;std::cout &lt;&lt; (&#34;Running \&#34;Python\&#34;!&#34;) &lt;&lt; std::endl;&#xA;&#xA;greet(username);&#xA;greet2(username);&#xA;;std::cout &lt;&lt; (&#34;getting ready to say bye...&#34;) &lt;&lt; std::endl;&#xA;bye();&#xA;&#xA;1 &#34;pythonend.h&#34; 1&#xA;}&#xA;31 &#34;Python.cpp&#34; 2&#xA;You can find the full sources on my Gitea.&#xA;&#xA;I hope this was a fun introduction to polyglot programming! It&#39;s usually filled with crazy hacks like these, and thus can be very fun whilst being immensely impractical, but believe me: it has its uses!&#xA;&#xA;While researching for a different project in 2020, I came across this perfect example: Cosmopolitan is a project that allows C programs to build to an &#34;actually portable executable&#34;: a file that runs simultaneously on Linux, MacOS, Windows, FreeBSD, OpenBSD, NetBSD, and can also directly boot from the BIOS. I recommend Justine&#39;s blog post for a fascinating read!&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! 
Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at]]&gt;</description>
      <content:encoded><![CDATA[<p>Picture this: you find yourself immersed in a new job, knee-deep in a C++ codebase, yet your heart yearns for the simplicity and elegance of Python syntax. What do you do? You don&#39;t just conform – you innovate, and boldly submit this to code review:</p>

<pre><code class="language-python">#include &#34;pythonstart.h&#34;

def greet(name):
    print(&#34;hello, &#34; + name + &#34;!&#34;)
    return

def greet2(name):
    print(&#34;how are you, &#34; + name + &#34;?&#34;)
    return

def bye():
    print(&#34;ok bye!&#34;)
    return

#include &#34;pythonmid.h&#34;

username = &#34;Mat&#34;

print(&#34;Hello from \&#34;Python\&#34;!&#34;)

greet(username)
greet2(username)
print(&#34;getting ready to say bye...&#34;)
bye()

#include &#34;pythonend.h&#34;
</code></pre>



<p>The first code review comes in, and it seems your contribution may be in jeopardy:</p>

<blockquote><p>That&#39;s just Python code! It won&#39;t work with the rest of our C++ codebase!</p></blockquote>

<p>Before they can reject your code, you sharply interject:</p>

<blockquote><p>Hey! Did you even test my code?</p></blockquote>

<p>With skepticism in the air, one of your brave teammates steps up and runs the code through a C++ compiler. To everyone&#39;s amazement, the result is identical to that of running it with Python! The code not only speaks Python, but fluently converses in C++:</p>

<pre><code>$ g++ Python.cpp &amp;&amp; ./a.out
Hello from &#34;Python&#34;!
hello, Mat!
how are you, Mat?
getting ready to say bye...
ok bye!
</code></pre>

<p>The commit is eventually merged, and your unconventional approach not only saves your job but earns you a place in the annals of the team&#39;s most memorable code submissions. You are also banned from touching that codebase again.</p>

<h2 id="how-it-works-unraveling-the-enigma">How it works: unraveling the enigma</h2>

<p>This kind of program is termed “polyglot,” which literally means “written in multiple languages.” The whole idea relies on finding intersections between the two (or more!) languages&#39; syntax. In Python, a <code>#</code> signifies a comment, while in C++ (and C), it denotes a <em>preprocessor directive</em>. These lines are the key to the program. We&#39;ll see how the preprocessor works (and how we can abuse it!) in a little bit.</p>

<p>You&#39;ll notice the only preprocessor directives in our code are <code>#include</code> statements. Unlike other languages, C and C++ opt for a simple but effective solution to calling external library functions: copy-paste. Seriously.</p>

<p>When I write <code>#include &lt;iostream&gt;</code> at the top of a C++ file, what actually happens is that the entire file called “iostream” (installed system-wide as part of the C++ standard library) gets pasted in by the preprocessor, right where that <code>#include</code> statement once was. You don&#39;t technically have to use the <code>#include</code> directive to get a C++ program calling library functions: you can get the same behavior by manually copying the file&#39;s contents to the top of your code (but that&#39;s a terrible idea!).</p>

<p>For example, here are two C++ header files:</p>

<p><code>preamble.h</code>:</p>

<pre><code class="language-cpp">int main()
{
    int retVal = 0;
</code></pre>

<p><code>postlude.h</code>:</p>

<pre><code class="language-cpp">}
</code></pre>

<p>And our beautifully readable code:</p>

<pre><code class="language-cpp">#include &#34;preamble.h&#34;

for(int i = 0; i &lt; 4; i++)
{
    retVal += 1;
}
return retVal;

#include &#34;postlude.h&#34;
</code></pre>

<p>Let&#39;s run our code through the standalone C/C++ Preprocessor <code>cpp</code>:</p>

<pre><code class="language-cpp">$ cpp code.cpp
# 1 &#34;code.cpp&#34;
# 1 &#34;preamble.h&#34; 1
int main()
{
    int retVal = 0;
# 2 &#34;code.cpp&#34; 2

for(int i = 0; i &lt; 4; i++)
{
    retVal += 1;
}
return retVal;

# 1 &#34;postlude.h&#34; 1
}
# 10 &#34;code.cpp&#34; 2
</code></pre>

<p>You can see the preprocessor outputs a bunch of lines starting with <code>#</code>. These are <em>linemarkers</em>, which tell the compiler (and us puny humans) exactly where each chunk of code originally came from, so error messages can point at the right file and line. The first number indicates the line number, and the string in quotes is the filename. The optional number at the end of the line is a flag, where <code>1</code> means it&#39;s the start of an include, and <code>2</code> means we are returning to a file after an include is done. You can find the full docs <a href="https://gcc.gnu.org/onlinedocs/cpp/Preprocessor-Output.html">here</a>.
We start at line 1 in <code>code.cpp</code>, which then includes <code>preamble.h</code>. The contents of <code>preamble.h</code> follow, and afterwards we return to <code>code.cpp</code>. So on and so forth, finally copy-pasting together an amalgamated program that consists of a simple <code>main</code> function returning 4.</p>

<p>The preprocessor is a very powerful tool, and as long as the final text that is passed to the compiler is valid, anything goes!</p>
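<p>As a toy illustration of “anything goes” (these macros are made up for this example, not part of the polyglot), we can even rename control-flow constructs, and the compiler will happily accept the preprocessed result:</p>

<pre><code class="language-cpp">#include &lt;iostream&gt;

// made-up keywords, valid C++ only after the preprocessor expands them
#define forever for (;;)
#define unless(cond) if (!(cond))

int main()
{
    int tries = 0;
    forever
    {
        ++tries;
        unless (tries &lt; 3) break; // expands to: if (!(tries &lt; 3)) break;
    }
    std::cout &lt;&lt; tries &lt;&lt; std::endl; // prints 3
}
</code></pre>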

<p>Let&#39;s break down the polyglot program from the start of the post:</p>

<h3 id="functions">Functions</h3>

<p>In Python, functions are defined as follows:</p>

<pre><code class="language-py">def greet(name):
    print(&#34;hello, &#34; + name + &#34;!&#34;)
</code></pre>

<p>Somehow, we need to translate this into working C++ purely through the preprocessor. Python allows defining functions anywhere, but C++ only allows function definitions at namespace scope, so instead we can store the function in a <em>variable</em>. C++ has a neat trick here called a lambda, which lets us define unnamed functions inline, and it&#39;s the perfect thing for our variable to hold.</p>
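<p>Before wiring up the macros, here&#39;s a minimal standalone sketch (independent of the polyglot machinery) of a lambda stored in an <code>auto</code> variable acting like a Python-style local function:</p>

<pre><code class="language-cpp">#include &lt;iostream&gt;
#include &lt;string&gt;

int main()
{
    // auto deduces the unnamed closure type; greet can now be called like a function
    auto greet = [](std::string name) { std::cout &lt;&lt; &#34;hello, &#34; + name + &#34;!&#34; &lt;&lt; std::endl; };
    greet(&#34;Mat&#34;); // prints: hello, Mat!
}
</code></pre>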

<p>Armed with this knowledge, we can use <code>#define</code> to create a macro that will turn <code>def</code> into <code>auto</code> (a special C++ keyword that deduces the type of a variable based on what&#39;s assigned to it), and another macro that turns <code>greet(name)</code> into our lambda definition:</p>

<pre><code class="language-cpp">#define def auto 
#define greet(arg) greet = [](std::string arg) {
</code></pre>

<p>Applying this to our Python function from above gets us some of the way there:</p>

<pre><code class="language-cpp">auto greet = [](std::string name) {:
    print(&#34;hello, &#34; + name + &#34;!&#34;)
</code></pre>

<p>We still have to handle that pesky <code>:</code> that Python requires at the end of function declarations. Now, where does C++ have a <code>:</code>... aha! The revered ternary operator, that everybody totally loves! Its syntax is as follows: <code>condition ? truthy : falsy</code>. We don&#39;t care about the logic here, we just want that sweet <code>:</code> character, so we can add the most cursed ternary expression I&#39;ve ever written to the end of the <code>greet</code> macro:</p>

<pre><code class="language-cpp">#define greet(arg) greet = [](std::string arg) { false?false
</code></pre>

<p>Running the preprocessor through our function, we get the following:</p>

<pre><code class="language-cpp">auto greet = [](std::string name) { false?false:
    print(&#34;hello, &#34; + name + &#34;!&#34;)
</code></pre>

<p>That&#39;s some good progress! There are three main issues left:
– the ternary operator is left hanging. We need a “falsy” value for this thrilling and definitely-very-useful comparison to compile.
– there&#39;s no <code>print</code> function in C++ (this project was conceived before <code>std::print</code> was added in C++23).
– we need to close that dangling curly bracket and add a semicolon at the end of our lambda, somehow.</p>

<p>Implementing the <code>print</code> function can be done with a simple function-style macro that just plops the argument into <code>std::cout</code>. This only works for simple prints, but I&#39;m not going for anything more here :)</p>

<p>Additionally, we can knock out the unfinished-ternary issue by adding a stray <code>false;</code> at the beginning of the macro. Usually this does nothing, as the value just gets discarded, but when a print occurs right after a function definition, it completes the ternary operator. Hooray!</p>

<pre><code class="language-cpp">#define print(a) false;std::cout &lt;&lt; (a) &lt;&lt; std::endl; 
</code></pre>

<p>Now for closing the function... there are no keywords left we can use here. I haven&#39;t found a way to make this work consistently without polluting the <code>print</code> macro with closing brackets that would cause it to break if used more than once or outside of a function. Thankfully, Python has a <code>return</code> keyword we can add without changing the behavior of the function:</p>

<pre><code class="language-py">def greet(name):
    print(&#34;hello, &#34; + name + &#34;!&#34;)
    return
</code></pre>

<p>Then on the C++ side, we can redefine it to close our lambda!</p>

<pre><code class="language-cpp">#define return return; };
</code></pre>

<p>Finally, our simple function now preprocesses to this valid albeit cursed C++ code:</p>

<pre><code class="language-cpp">auto greet = [](std::string name) { false?false:
    false;std::cout &lt;&lt; (&#34;hello, &#34; + name + &#34;!&#34;) &lt;&lt; std::endl;
    return; };
</code></pre>
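<p>If you&#39;re skeptical that this cursed expansion really compiles, you can paste it verbatim into a plain C++ file (nothing here depends on the polyglot setup) and run it:</p>

<pre><code class="language-cpp">#include &lt;iostream&gt;
#include &lt;string&gt;

int main()
{
    // the preprocessed lambda: the dangling ternary is completed by the
    // stray false that the print macro inserted
    auto greet = [](std::string name) { false?false:
        false;std::cout &lt;&lt; (&#34;hello, &#34; + name + &#34;!&#34;) &lt;&lt; std::endl;
        return; };

    greet(&#34;Mat&#34;); // prints: hello, Mat!
}
</code></pre>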

<h2 id="int-main">“int main”</h2>

<p>Here&#39;s the next bit we have to tackle, after the function definitions:</p>

<pre><code class="language-py">username = &#34;Mat&#34;

print(&#34;Hello from \&#34;Python\&#34;!&#34;)

greet(username)
greet2(username)
print(&#34;getting ready to say bye...&#34;)
bye()
</code></pre>

<blockquote><p>This code was given in one of my university courses to showcase the basics of Python. In it, we create a variable, call a couple functions, and call it a day.</p></blockquote>

<p>Python allows writing code willy-nilly outside of any function, but in C++ this is not exactly the case, especially if we need to call library functions. Our print statements <em>must</em> reside inside the <code>main</code> function. We can have our initial header (the one with all the function macros) also start the <code>main</code> function by adding a lone <code>int main() {</code> at the end of it. We also need a header at the end with the sole purpose of closing that opening bracket:</p>

<p><code>pythonstart.h</code>:</p>

<pre><code class="language-cpp">#define greet(arg) greet = [](std::string arg) { false?false
#define print(a) false;std::cout &lt;&lt; (a) &lt;&lt; std::endl; 
#define return return; };

// start the main function (will be closed by pythonend.h)
int main() {
</code></pre>

<p><code>pythonend.h</code> (thrilling):</p>

<pre><code class="language-cpp">}
</code></pre>

<h2 id="the-code">The code</h2>

<p>Looking at the first lines of the actual code, a lot of stuff is missing for it to work in C++:</p>

<pre><code class="language-py">username = &#34;Mat&#34;
print(&#34;Hello from \&#34;Python\&#34;!&#34;)
</code></pre>

<p>The first obvious issue is that C++ requires types, while Python does not. We will need a <code>pythonmid.h</code> header to plop a <code>std::string</code> in there, giving <code>username</code> its type:</p>

<p><code>pythonmid.h</code>:</p>

<pre><code class="language-cpp">#define username std::string username
</code></pre>

<p>Then, oh no! Our <code>print</code> macro-function inserts a stray <code>false</code> right after our string literal, causing a compile error! We must redefine <code>print</code> to remove the <code>false</code> prefix, but keep the lone semicolon, as it serves to punctuate the <code>username</code> declaration:</p>

<pre><code class="language-cpp">#undef print
#define print(a) ;std::cout &lt;&lt; (a) &lt;&lt; std::endl;
</code></pre>

<p>Finally, the function calls:</p>

<pre><code class="language-cpp">greet(username)
greet2(username)
print(&#34;getting ready to say bye...&#34;)
bye()
</code></pre>

<p>In short, every function must be redefined to expand into a call rather than a declaration, like so:</p>

<pre><code class="language-cpp">#undef greet
#define greet(name) greet(name);
</code></pre>

<h2 id="that-s-it">That&#39;s it!</h2>

<p>And there we go! Here&#39;s the full “Python” file from the start of this post, put through the C++ preprocessor:</p>

<pre><code class="language-cpp">// -snip- the entire contents of the &lt;iostream&gt; and &lt;string&gt; C++ headers
# 2 &#34;pythonstart.h&#34; 2
# 15 &#34;pythonstart.h&#34;

# 15 &#34;pythonstart.h&#34;
int main() {
# 4 &#34;Python.cpp&#34; 2


auto greet = [](std::string name) { false?false:
    false;std::cout &lt;&lt; (&#34;hello, &#34; + name + &#34;!&#34;) &lt;&lt; std::endl;
    return; };

auto greet2 = [](std::string name) { false?false:
    false;std::cout &lt;&lt; (&#34;how are you, &#34; + name + &#34;?&#34;) &lt;&lt; std::endl;
    return; };

auto bye = []() { false?false:
    false;std::cout &lt;&lt; (&#34;ok bye!&#34;) &lt;&lt; std::endl;
    return; };

# 1 &#34;pythonmid.h&#34; 1
# 25 &#34;pythonmid.h&#34;
std::string
# 20 &#34;Python.cpp&#34; 2

username = &#34;Mat&#34;

;std::cout &lt;&lt; (&#34;Hello from \&#34;Python\&#34;!&#34;) &lt;&lt; std::endl;

greet(username);
greet2(username);
;std::cout &lt;&lt; (&#34;getting ready to say bye...&#34;) &lt;&lt; std::endl;
bye();

# 1 &#34;pythonend.h&#34; 1
}
# 31 &#34;Python.cpp&#34; 2
</code></pre>

<p>You can find the full sources on <a href="https://git.allpurposem.at/mat/PythonPlusPlus">my Gitea.</a></p>

<p>I hope this was a fun introduction to polyglot programming! It&#39;s usually filled with crazy hacks like these, and thus can be very fun whilst being immensely impractical, but believe me: it has its uses!</p>

<p>While researching for a different project in 2020, I came across this perfect example: <a href="https://justine.lol/ape.html">Cosmopolitan</a> is a project that allows C programs to build to an “actually portable executable”: a file that runs simultaneously on Linux, macOS, Windows, FreeBSD, OpenBSD, and NetBSD, and can also boot directly from the BIOS. I recommend Justine&#39;s blog post for a fascinating read!</p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

<p></p>
]]></content:encoded>
      <guid>https://blog.allpurposem.at/pythonplusplus-bridging-worlds-with-polyglot-code</guid>
      <pubDate>Sun, 28 Jan 2024 22:14:49 +0000</pubDate>
    </item>
    <item>
      <title>DirectX? On my Linux??</title>
      <link>https://blog.allpurposem.at/directx</link>
      <description>&lt;![CDATA[  You: DirectX 11? At this time of year? At this time of day? In this part of the country? Localized entirely within your Linux system?&#xA;  Me: Yes.&#xA;  You: May I see it?&#xA;  Me: No. Yes! &#xA;&#xA;Yes, it&#39;s true! Set up all nice and fast, with no WINE. Just a native executable, with full diagnostics for any IDE, compatibility with debuggers, nice and (usually) helpful output, and maintaining full Windows build support at the same time.&#xA;!--more--&#xA;&#xA;One of my favorite courses this year at DAE has to be Graphics Programming 1. In it, we go through the logic and techniques to create a software raytracer (laggy web build), then a rasterizer. I&#39;ve really enjoyed the course so far, and it&#39;s been painless to follow on Linux... with one small looming issue: the last few weeks of the semester involve learning the DirectX 11 API, and porting the rasterizer to it such that you can seamlessly switch between hardware and software rendering. Really cool assignment, but this won&#39;t work on Linux!&#xA;&#xA;An image of a flying vehicle with one seat, with some basic shading.&#xA;Software rasterizer&#xA;&#xA;DirectX is a proprietary closed standard made by Microsoft, and is thus likely never to come to Linux and other platforms.&#xA;&#xA;Wait, but how come I can play DirectX Windows games on Linux?&#xA;&#xA;Aha! I&#39;m glad you asked, even if by proxy of me prewriting that question as the section title.&#xA;As part of Valve&#39;s Proton compatibility layer for playing Windows games seamlessly on Linux, some really smart folks maintain a wonderful project called DXVK. DXVK works together with WINE to translate DirectX 9-11 API calls into Vulkan calls, enabling compatibility with any system that supports a recent enough Vulkan spec. When a game kindly asks DirectX to draw a triangle, DXVK will handle this function and make the equivalent Vulkan API calls for the GPU&#39;s Vulkan driver to handle. 
I&#39;m just now learning about these APIs (I have exclusively used OpenGL in my previous projects), so I can&#39;t explain how this works much further. It&#39;s magic to me, and I don&#39;t cease to be impressed when I can just launch a Windows-only DirectX-based game and it &#34;just works&#34; on Linux, often even with better performance than if it were running on the native Windows DirectX drivers.&#xA;&#xA;Anyways, this is all great, but that covers running Windows games through WINE and having DXVK translate DirectX calls to Vulkan. Having to build .exe files, and especially the tooling for handling them on Linux (WineDbg, profiling...) sucks. It works, and I wrote a blog post about doing exactly this, but I&#39;d still much rather avoid having to do this. So what&#39;s the trick?&#xA;&#xA;The trick&#xA;I asked in a few chat rooms whether there was another way, and by pure chance someone at the ASUS Linux community happened to mention I&#39;d have to use &#34;dxvk native.&#34; Yes, DXVK was designed to work together with WINE. Nowhere in its wiki does it mention any other use cases (that I could find). However, checking its recent releases we can see a file named dxvk-native-X.X-steamrt-sniper.tar.gz. Native? As in, doesn&#39;t depend on WINE? As in I can link against it and produce a normal Linux ELF binary that can be debugged and poked at just like I&#39;d do an OpenGL program?&#xA;&#xA;I immediately got to work, and found shockingly little documentation online about using this. The best resource I found is the build script of the Deus Ex: Human Revolution decompilation, which enables Linux support through DXVK. 
I whipped up a small CMake script, and ended up with this snippet:&#xA;if(UNIX)&#xA;    include(ExternalProject)&#xA;&#xA;    ExternalProjectAdd(dxvk-native&#xA;        GITREPOSITORY https://github.com/doitsujin/dxvk.git&#xA;        GITTAG v2.3&#xA;        CONFIGURECOMMAND meson setup SOURCEDIR&#xA;        BUILDCOMMAND ninja src/d3d11/libdxvkd3d11.so src/dxgi/libdxvkdxgi.so&#xA;        INSTALLCOMMAND &#34;&#34;)&#xA;&#xA;    ExternalProjectGetproperty(dxvk-native SOURCEDIR BINARYDIR)&#xA;    set(DXVKSOURCEDIR ${SOURCEDIR})&#xA;    set(DXVKBINARYDIR ${BINARYDIR}) &#xA;    unset(SOURCEDIR)&#xA;    unset(BINARYDIR)&#xA;    adddependencies(${PROJECTNAME} dxvk-native)&#xA;&#xA;    includedirectories(SYSTEM&#xA;        ${DXVKSOURCEDIR}/include/native/directx&#xA;        ${DXVKSOURCEDIR}/include/native/windows)&#xA;    targetlinkdirectories(${PROJECTNAME} PRIVATE&#xA;        ${DXVKBINARYDIR}/src/d3d11&#xA;        ${DXVKBINARYDIR}/src/dxgi&#xA;    )&#xA;&#xA;    targetlinklibraries(&#xA;        ${PROJECTNAME} PUBLIC dxvkd3d11 dxvkdxgi&#xA;    )&#xA;endif()&#xA;&#xA;Drop this into your CMake project, and you should be able to use and interface with DirectX just like you would on Windows! Just make sure that, instead of a HWND, you give DirectX (well, DXVK) a pointer to your SDLWindow. I added this bit of code to handle that:&#xA;ifdef WIN32&#xA;        // Get the handle (HWND) from the SDL window&#xA;        SDLSysWMinfo sysWMInfo{};&#xA;        SDLGetVersion(&amp;sysWMInfo.version);&#xA;        SDLGetWindowWMInfo(mpWindow, &amp;sysWMInfo);&#xA;        swapChainDesc.OutputWindow = sysWMInfo.info.win.window;&#xA;else&#xA;        swapChainDesc.OutputWindow = mpWindow;&#xA;endif&#xA;&#xA;After a pretty quick build, here it is! A pure DirectX window running on Linux, in all its glory!&#xA;&#xA;A screenshot showing a lengthy info dump from DXVK, with a few warnings and errors amidst the logs. An empty blue window is open next to it, with a glowing shadow effect behind it. 
Framerates around 5000 are reported at the end of the logs.&#xA;&#xA;Well, it&#39;s not drawing anything interesting, but that&#39;s because I still need to port my rasterizer! Seeing the default blue background is already amazing given how far from an officially supported DirectX platform I&#39;m running this on.&#xA;&#xA;Full source code for the project is available on my Gitea instance here.&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at&#xD;&#xA;&#xD;&#xA;link rel=&#34;preload&#34; href=&#34;https://blog.allpurposem.at/lexend.woff2&#34; as=&#34;font&#34; type=&#34;font/woff2&#34; crossorigin=&#34;&#34;]]&gt;</description>
      <content:encoded><![CDATA[<blockquote><p><strong>You</strong>: DirectX 11? At this time of year? At this time of day? In this part of the country? Localized entirely within your Linux system?
<strong>Me</strong>: Yes.
<strong>You</strong>: May I see it?
<strong>Me</strong>: <del>No.</del> Yes!</p></blockquote>

<p>Yes, it&#39;s true! Set up all nice and fast, with no WINE. Just a native executable, with full diagnostics for any IDE, compatibility with debuggers, nice and (usually) helpful output, and maintaining full Windows build support at the same time.
</p>

<p>One of my favorite courses this year at DAE has to be Graphics Programming 1. In it, we go through the logic and techniques to create <a href="https://git.allpurposem.at/mat/GraphicsProg1_Raytracer">a software raytracer</a> (<a href="https://allpurposem.at/raytracer/game.html">laggy web build</a>), then <a href="https://git.allpurposem.at/mat/GraphicsProg1_Rasterizer">a rasterizer</a>. I&#39;ve really enjoyed the course so far, and it&#39;s been painless to follow on Linux... with one small looming issue: the last few weeks of the semester involve learning the DirectX 11 API, and porting the rasterizer to it such that you can seamlessly switch between hardware and software rendering. Really cool assignment, but this won&#39;t work on Linux!</p>

<p><img src="https://allpurposem.at/blog/rasterizer-reference.png" alt="An image of a flying vehicle with one seat, with some basic shading.">
<em>Software rasterizer</em></p>

<p>DirectX is a proprietary closed standard made by Microsoft, and is thus likely never to come to Linux and other platforms.</p>

<h2 id="wait-but-how-come-i-can-play-directx-windows-games-on-linux">Wait, but how come I can play DirectX Windows games on Linux?</h2>

<p>Aha! I&#39;m glad you asked, even if by proxy of me prewriting that question as the section title.
As part of Valve&#39;s Proton compatibility layer for playing Windows games seamlessly on Linux, some really smart folks maintain a wonderful project called <a href="https://github.com/doitsujin/dxvk">DXVK</a>. DXVK works together with WINE to translate DirectX 9-11 API calls into Vulkan calls, enabling compatibility with any system that supports a recent enough Vulkan spec. When a game kindly asks DirectX to draw a triangle, DXVK will handle this function and make the equivalent Vulkan API calls for the GPU&#39;s Vulkan driver to handle. I&#39;m just now learning about these APIs (I have exclusively used OpenGL in my previous projects), so I can&#39;t explain how this works much further. It&#39;s magic to me, and I never cease to be impressed when I can just launch a Windows-only DirectX-based game and it “just works” on Linux, often even with better performance than if it were running on the native Windows DirectX drivers.</p>

<p>Anyways, this is all great, but that covers running <em>Windows</em> games through WINE and having DXVK translate DirectX calls to Vulkan. Having to build .exe files, and especially the tooling for handling them on Linux (WineDbg, profiling...) sucks. It works, and I wrote <a href="https://blog.allpurposem.at/adventures-cross-compiling-a-windows-game-engine">a blog post</a> about doing exactly this, but I&#39;d still much rather avoid having to do this. So what&#39;s the trick?</p>

<h2 id="the-trick">The trick</h2>

<p>I asked in a few chat rooms whether there was another way, and by pure chance someone at the <a href="https://asus-linux.org/">ASUS Linux</a> community happened to mention I&#39;d have to use “dxvk native.” Yes, DXVK was designed to work together with WINE. Nowhere in its wiki does it mention any other use cases (that I could find). However, checking its recent releases we can see a file named <code>dxvk-native-X.X-steamrt-sniper.tar.gz</code>. Native? As in, doesn&#39;t depend on WINE? As in I can link against it and produce a normal Linux ELF binary that can be debugged and poked at just like I&#39;d do an OpenGL program?</p>

<p>I immediately got to work, and found shockingly little documentation online about using this. The best resource I found is the build script of the <a href="https://github.com/rrika/cdcEngineDXHR">Deus Ex: Human Revolution decompilation</a>, which enables Linux support through DXVK. I whipped up a small CMake script, and ended up with this snippet:</p>

<pre><code class="language-cmake">if(UNIX)
    include(ExternalProject)

    ExternalProject_Add(dxvk-native
        GIT_REPOSITORY https://github.com/doitsujin/dxvk.git
        GIT_TAG v2.3
        CONFIGURE_COMMAND meson setup &lt;SOURCE_DIR&gt;
        BUILD_COMMAND ninja src/d3d11/libdxvk_d3d11.so src/dxgi/libdxvk_dxgi.so
        INSTALL_COMMAND &#34;&#34;)

    ExternalProject_Get_property(dxvk-native SOURCE_DIR BINARY_DIR)
    set(DXVK_SOURCE_DIR ${SOURCE_DIR})
    set(DXVK_BINARY_DIR ${BINARY_DIR}) 
    unset(SOURCE_DIR)
    unset(BINARY_DIR)
    add_dependencies(${PROJECT_NAME} dxvk-native)

    include_directories(SYSTEM
        ${DXVK_SOURCE_DIR}/include/native/directx
        ${DXVK_SOURCE_DIR}/include/native/windows)
    target_link_directories(${PROJECT_NAME} PRIVATE
        ${DXVK_BINARY_DIR}/src/d3d11
        ${DXVK_BINARY_DIR}/src/dxgi
    )

    target_link_libraries(
        ${PROJECT_NAME} PUBLIC dxvk_d3d11 dxvk_dxgi
    )
endif()
</code></pre>

<p>Drop this into your CMake project, and you should be able to use and interface with DirectX just like you would on Windows! Just make sure that, instead of an HWND, you give DirectX (well, DXVK) a pointer to your <code>SDL_Window</code>. I added this bit of code to handle that:</p>

<pre><code class="language-cpp">#ifdef WIN32
        // Get the handle (HWND) from the SDL window
        SDL_SysWMinfo sysWMInfo{};
        SDL_GetVersion(&amp;sysWMInfo.version);
        SDL_GetWindowWMInfo(m_pWindow, &amp;sysWMInfo);
        swapChainDesc.OutputWindow = sysWMInfo.info.win.window;
#else
        swapChainDesc.OutputWindow = m_pWindow;
#endif
</code></pre>

<p>After a pretty quick build, here it is! A pure DirectX window running on Linux, in all its glory!</p>

<p><img src="https://allpurposem.at/blog/directx-on-linux.png" alt="A screenshot showing a lengthy info dump from DXVK, with a few warnings and errors amidst the logs. An empty blue window is open next to it, with a glowing shadow effect behind it. Framerates around 5000 are reported at the end of the logs."></p>

<p>Well, it&#39;s not drawing anything interesting, but that&#39;s because I still need to port <a href="https://git.allpurposem.at/mat/GraphicsProg1_Rasterizer">my rasterizer</a>! Seeing the default blue background is already amazing given how far from an officially supported DirectX platform I&#39;m running this on.</p>

<p>Full source code for the project is available on my Gitea instance <a href="https://git.allpurposem.at/mat/GraphicsProg1_DirectX">here</a>.</p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

<p></p>
]]></content:encoded>
      <guid>https://blog.allpurposem.at/directx</guid>
      <pubDate>Sat, 09 Dec 2023 14:44:46 +0000</pubDate>
    </item>
    <item>
      <title>The vector::reserve fallacy</title>
      <link>https://blog.allpurposem.at/the-vector-reserve-fallacy</link>
      <description>&lt;![CDATA[While reading through some code I wrote for a raytracing assignment, I noticed a peculiar function that had never caused any issues, but really looked like it should. After asking a bunch of people, I present this blog post to you! &#xA;!--more--&#xA;&#xA;Ah, C++ standard containers. So delightfully intuitive to work with. The most versatile has to be std::vector, whose job is to wrap a dynamic &#34;C-style&#34; array and manage its capacity for us as we grow and shrink the vector&#39;s size. We can simply call pushback on the vector to add as many elements as we want, and the vector will grow its capacity when needed to fit our new elements.&#xA;&#xA;  If you understand how a std::vector works, feel free to skip to the code.&#xA;&#xA;But is it that simple?&#xA;&#xA;Resizing the vector&#39;s internal array is not cheap! It incurs allocating a whole new (bigger) block of memory, copying all the elements to it, and finally freeing the old block (note that this copy may be a move, see here). Because we add elements one by one, this would trigger a lot of resizes, as the vector keeps having to guess how many elements we plan to add and reallocating a bigger and bigger internal array every time we pushback past its capacity! So, a conforming std::vector implementation will usually try to get ahead of us and secretly allocate a bigger block when it sees we start pushing to it, and then it can just keep track of the size of the vector (how many elements we&#39;ve pushed to it) separately from its capacity (how many elements it can grow to before it needs to resize the internal array again).&#xA;&#xA;std::vector kindly exposes this internal functionality to us through some functions. For example, the capacity() function returns the current capacity of the vector&#39;s internal array. If we know the size it will grow up to ahead of time, we can use the reserve(sizetype capacity) function to have it pre-allocate this capacity for us. 
This avoids reallocating a lot when doing a bunch of pushbacks, which can let us gain a precious bit of performance (see the example here for some actual numbers).&#xA;&#xA;The code&#xA;&#xA;Now that we understand std::vector::reserve, let&#39;s take a look at some C++:&#xA;std::vectorint myVec{}; // create a vector of size 0&#xA;myVec.reserve(1); // reserve a capacity of 1&#xA;myVec[0] = 42; // write 42 to the first element of our empty(!!) vector&#xA;std::cout &lt;&lt; myVec[0];&#xA;&#xA;When run, the above prints 42. I hope I&#39;m not the only one who&#39;s surprised this works! I&#39;m overwriting the value of the first element in a vector... which has no elements. This is an out of bounds write, and should definitely not work.&#xA;Not only that, but on my machine I can replace index 0 with up to index 15187 and it still works fine! Index 15188 segfaults, though, so at least that&#39;s sane behavior (so long as I get far enough away from the start of the vector...).&#xA;So what the peck is going on??&#xA;&#xA;The peck (it&#39;s going on)&#xA;&#xA;Okay, okay, I&#39;ll say the thing. We&#39;ve found what in C++ is called &#34;undefined behavior&#34; (UB). This is a magical realm where anything could happen. Your computer might replace every window title with your username, or your program might send an order to all pizza restaurants in a 5km radius. If you&#39;re lucky, your program will just crash. More likely though, your code will do exactly what you intended it to do, and either subtly break something later on, or never signal anything on your machine... and break on someone else&#39;s.&#xA;&#xA;Why is this undefined behavior, you ask? We told our vector to reserve a size of 1, so 0 is a perfectly valid index in the its internal array. However, the C++ standard never states that vector should have an internal array! 
It only asks for vector implementations to be able to grow and shrink, and for reserve() to &#34;ensure a capacity&#34; up to which no reallocations need to happen.&#xA;&#xA;  NOTE: after lots of research (and asking the smart folks of the #include C++ community), I&#39;ve been unable to find an implementation where this does break. That doesn&#39;t mean it&#39;s okay to rely on this behavior! It&#39;s still UB!&#xA;&#xA;Why it works for us&#xA;&#xA;Despite this being undefined behavior, it works consistently in my program. Why is this?&#xA;When we run the line myVec0] = 42, the std::vector::operator[] function is called with an argument of 0, to return a reference to the location in memory at index 0 for this vector. Let&#39;s look at the [source code for this function in GCC&#39;s libstdc++ (which I used for my testing, though the same issue applies on clang and MSVC):&#xA;&#xA;/*&#xA; @brief  Subscript access to the data contained in the %vector.&#xA; @param _n The index of the element for which data should be&#xA; accessed.&#xA; @return  Read/write reference to data.&#xA;   This operator allows for easy, array-style, data access.&#xA; Note that data access with this operator is unchecked and&#xA; outofrange lookups are not defined. (For checked lookups&#xA; see at().)&#xA; /&#xA;GLIBCXXNODISCARD GLIBCXX20CONSTEXPR&#xA;reference&#xA;operator GLIBCXXNOEXCEPT&#xA;{&#xA;    glibcxxrequiressubscript(n);&#xA;    return (this-  Mimpl.Mstart + n);&#xA;}&#xA;&#xA;Looking past all the macros (the subscript thing expands to an empty line by default, we&#39;ll look into it later), this simply takes the pointer to the start of the internal array (Mimpl.Mstart), adds our argument n, and returns it as a reference. As long as Mstart points to some valid allocated address, we should be fine accessing it within bounds of the array (note, of course, that this is only true for this implementation of libstdc++! 
Other implementations may do different things; we&#39;re in UB-land here). This explains why our index outside of the vector&#39;s size worked: we&#39;re indexing the internal array, not the vector! As long as we call reserve on the vector first, and our index is within that reserved array&#39;s size the data should be perfectly okay being written to and read from an out-of-bounds-but-within-capacity index of a vector (on this specific version of GCC&#39;s libstdc++). If we remove the myVec.reserve(1) line, the program does crash as expected, since Mimpl.Mstart is not initialized and thus points to invalid memory.&#xA;&#xA;Array out of bounds&#xA;&#xA;The reason why accessing an index higher than the array&#39;s size works is covered here, but a tl;dr is that you are indeed overwriting memory you shouldn&#39;t be, and by chance nothing bad is happening. If we run it through the valgrind memory error detector, it indeed detects our error for any index outside the array. Here&#39;s the log for a write at index 1, after a call to reserve(1):&#xA;&#xA;Invalid write of size 4&#xA;   at 0x1091FC: main (ub.cpp:8)&#xA; Address 0x4e21084 is 0 bytes after a block of size 4 alloc&#39;d&#xA;   at 0x4841F11: operator new(unsigned long) (vgreplacemalloc.c:434)&#xA;   by 0x109825: std::newallocatorint::allocate(unsigned long, void const) (newallocator.h:147)&#xA;   by 0x109604: allocate (alloctraits.h:482)&#xA;   by 0x109604: std::Vectorbaseint, std::allocator&lt;int   ::Mallocate(unsigned long) (stlvector.h:378)&#xA;   by 0x1093FF: std::vectorint, std::allocator&lt;int   ::reserve(unsigned long) (vector.tcc:79)&#xA;   by 0x1091EA: main (ub.cpp:6)&#xA;&#xA;Let&#39;s dissect this output:&#xA;The first line indicates that we wrote 4 bytes somewhere that&#39;s &#34;invalid.&#34; That&#39;s the size of a 64-bit int, which is the type we&#39;re writing into index 1.&#xA;The big call stack tells us where the array that we&#39;re accessing out of bounds was allocated. 
The penultimate line points us to that std::vector::reserve call we made, which creates a &#34;block of size 4&#34; (the vector&#39;s internal array, with the capacity for a single 4-byte int).&#xA;&#xA;This indicates that we are indeed accessing the internal array out of bounds, and that it is a memory error that will cause UB even on this implementation of std::vector. So that answers that!&#xA;&#xA;Speed at the cost of safety&#xA;&#xA;Although on my GCC install, using this as actual storage &#34;works&#34; &#34;fine,&#34; it has... issues. When we try to do a range-based loop, it will never get the elements we wrote out of bounds. If the vector gets copied, it will only bring over the data within its size, and leave behind everything else. These kinds of issues would be super hard to diagnose had I not spotted the UB here!&#xA;&#xA;Shouldn&#39;t std::vector::operator[] warn us that we&#39;re accessing an element outside of the vector&#39;s size? Let&#39;s check the C++ standard on vector functions.&#xA;&#xA;  Only at() performs range checking. If the index is out of range, at() throws an outofrange exception. All other functions do not check.&#xA;&#xA;\- The C++ Standard Library: A Tutorial and Reference by Nicolai M. Josuttis (2012), pages 274-275&#xA;&#xA;Well, darn. I can understand why, though. When writing code in C++, we expect to have the lowest possible performance overhead, yet still get to use all these nice abstractions. Performing bounds checks, even if cheap, can really add up if we have to do it for every vector access. 
Changing it to at(0) does indeed print a (relatively) helpful crash message: &#xA;terminate called after throwing an instance of &#39;std::outofrange&#39;&#xA;  what():  vector::Mrangecheck: n (which is 1)   = this-  size() (which is 0)&#xA;&#xA;As I was writing this, an excellent relevant post by @saagar@saagarjha.com graced my Mastodon timeline:&#xA;&#xA;video controls src=&#34;https://federated.saagarjha.com/media/e0fb0d82-7cfe-4d6b-ba5b-a89c7c8d97d6/out.mov&#34;&#xA;Download the&#xA;  a href=&#34;https://federated.saagarjha.com/media/e0fb0d82-7cfe-4d6b-ba5b-a89c7c8d97d6/out.mov&#34;video./a&#xA;/video&#xA;Original source.&#xA;&#xA;That&#39;s not all, though! Remember that curious glibcxxrequiressubscript(_n); macro in the GCC implementation of operator[], which I said we&#39;d look at later? Now is before&#39;s later, so let&#39;s take a look at the definition:&#xA;ifndef GLIBCXXASSERTIONS&#xA;  # define glibcxxrequiressubscript(N)&#xA;else&#xA;  # define _glibcxxrequiressubscript(N)&#x9;\&#xA;  _glibcxxassert(N  this-size())&#xA;endif&#xA;&#xA;So it does* do something! You just have to have GLIBCXXASSERTIONS defined. Indeed, if we define that macro with the -DGLIBCXXASSERTIONS compiler flag, we get this wonderful totally-readable error when the code tries to index out of bounds:&#xA;/usr/include/c++/13.2.1/bits/stlvector.h:1125: std::vectorTp, Alloc::reference std::vectorTp, Alloc::operator [with Tp = int; Alloc = std::allocatorint; reference = int&amp;; sizetype = long unsigned int]: Assertion &#39;__n  this-size()&#39; failed.&#xA;Okay, it&#39;s no &#34;you&#39;re accessing this vector out of bounds, please stop,&#34; but it certainly is better than dealing with the potential mess of undefined behavior that awaits otherwise. I guess I&#39;ll be adding this flag to all my debug builds from now on!&#xA;&#xA;If you&#39;re curious, this is my original code where I found the issue.&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! 
Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at]]&gt;</description>
      <content:encoded><![CDATA[<p>While reading through some code I wrote for a raytracing assignment, I noticed a peculiar function that had never caused any issues, but <em>really</em> looked like it should. After asking a bunch of people, I present this blog post to you!
</p>

<p>Ah, C++ standard containers. So delightfully intuitive to work with. The most versatile has to be <code>std::vector</code>, whose job is to wrap a dynamic “C-style” array and manage its <em>capacity</em> for us as we grow and shrink the vector&#39;s <em>size</em>. We can simply call <code>push_back</code> on the vector to add as many elements as we want, and the vector will grow its capacity when needed to fit our new elements.</p>

<blockquote><p>If you understand how a <code>std::vector</code> works, feel free to skip to <a href="#the-code">the code.</a></p></blockquote>

<h2 id="but-is-it-that-simple">But is it that simple?</h2>

<p>Resizing the vector&#39;s internal array is not cheap! It involves allocating a whole new (bigger) block of memory, copying all the elements over, and finally freeing the old block (note that this copy may be a move, see <a href="http://stackoverflow.com/questions/10127603/why-does-reallocating-a-vector-copy-instead-of-moving-the-elements">here</a>). Because we add elements one by one, this would trigger a lot of resizes, as the vector would have to guess how many elements we plan to add and reallocate a bigger and bigger internal array every time we <code>push_back</code> past its capacity! So, a conforming <code>std::vector</code> implementation will usually get ahead of us and quietly allocate a bigger block than strictly needed, then keep track of the vector&#39;s <em>size</em> (how many elements we&#39;ve pushed to it) separately from its <em>capacity</em> (how many elements it can hold before it needs to resize the internal array again).</p>

<p><code>std::vector</code> kindly exposes this internal functionality to us through some functions. For example, the <code>capacity()</code> function returns the current capacity of the vector&#39;s internal array. If we know the size it will grow up to ahead of time, we can use the <code>reserve(size_type capacity)</code> function to have it pre-allocate this capacity for us. This avoids reallocating a lot when doing a bunch of <code>push_back</code>s, which can let us gain a precious bit of performance (see the example <a href="https://www.codeproject.com/Articles/5425/An-In-Depth-Study-of-the-STL-Deque-Container#_Experiment2">here</a> for some actual numbers).</p>

<h2 id="the-code">The code</h2>

<p>Now that we understand <code>std::vector::reserve</code>, let&#39;s take a look at some C++:</p>

<pre><code class="language-cpp">std::vector&lt;int&gt; myVec{}; // create a vector of size 0
myVec.reserve(1); // reserve a capacity of 1
myVec[0] = 42; // write 42 to the first element of our empty(!!) vector
std::cout &lt;&lt; myVec[0];
</code></pre>

<p>When run, the above prints <code>42</code>. I hope I&#39;m not the only one who&#39;s surprised this works! I&#39;m overwriting the value of the first element in a vector... which has no elements. This is an out-of-bounds write, and should definitely not work.
Not only that, but on my machine I can replace index <code>0</code> with up to index <code>15187</code> and it still works fine! Index <code>15188</code> segfaults, though, so at least that&#39;s sane behavior (so long as I get far enough away from the start of the vector...).
So what the peck is going on??</p>

<h2 id="the-peck-it-s-going-on">The peck (it&#39;s going on)</h2>

<p>Okay, okay, I&#39;ll say the thing. We&#39;ve found what in C++ is called “undefined behavior” (UB). This is a magical realm where anything could happen. Your computer might replace every window title with your username, or your program might send an order to all pizza restaurants in a 5km radius. If you&#39;re lucky, your program will just crash. More likely though, your code will do exactly what you intended it to do, and either subtly break something later on, or never signal anything on your machine... and break on someone else&#39;s.</p>

<p>Why is this undefined behavior, you ask? We told our vector to reserve a capacity of 1, so 0 is a perfectly valid index into its internal array. However, the C++ standard never states that a vector must have an internal array! It only asks for vector implementations to be able to grow and shrink, and for <code>reserve()</code> to “ensure a capacity” up to which no reallocations need to happen.</p>

<blockquote><p>NOTE: after lots of research (and asking the smart folks of the <a href="https://www.includecpp.org/">#include C++ community</a>), I&#39;ve been unable to find an implementation where this does break. That doesn&#39;t mean it&#39;s okay to rely on this behavior! It&#39;s still UB!</p></blockquote>

<h3 id="why-it-works-for-us">Why it works for us</h3>

<p>Despite this being undefined behavior, it works consistently in my program. Why is this?
When we run the line <code>myVec[0] = 42</code>, the <code>std::vector::operator[]</code> function is called with an argument of 0, to return a reference to the location in memory at index 0 for this vector. Let&#39;s look at the <a href="https://gcc.gnu.org/onlinedocs/gcc-4.6.2/libstdc++/api/a01069_source.html#l00695">source code</a> for this function in GCC&#39;s libstdc++ (which I used for my testing, though the same issue applies on clang and MSVC):</p>

<pre><code class="language-cpp">/**
 *  @brief  Subscript access to the data contained in the %vector.
 *  @param __n The index of the element for which data should be
 *  accessed.
 *  @return  Read/write reference to data.
 *
 *  This operator allows for easy, array-style, data access.
 *  Note that data access with this operator is unchecked and
 *  out_of_range lookups are not defined. (For checked lookups
 *  see at().)
 */
_GLIBCXX_NODISCARD _GLIBCXX20_CONSTEXPR
reference
operator[](size_type __n) _GLIBCXX_NOEXCEPT
{
    __glibcxx_requires_subscript(__n);
    return *(this-&gt;_M_impl._M_start + __n);
}
</code></pre>

<p>Looking past all the macros (the subscript check expands to an empty line by default; we&#39;ll look into it later), this simply takes the pointer to the start of the internal array (<code>_M_impl._M_start</code>), adds our argument <code>__n</code>, and returns the result as a reference. As long as <code>_M_start</code> points to a valid allocated block, accesses within that block&#39;s bounds happen to work (note, of course, that this is only true for <strong>this</strong> implementation of libstdc++! Other implementations may do different things; we&#39;re in UB-land here). This explains why our index outside of the vector&#39;s size worked: we&#39;re indexing the internal array, not the vector! As long as we call <code>reserve</code> on the vector first and our index stays within the reserved capacity, writes and reads at an out-of-bounds-but-within-capacity index appear to behave (on this specific version of GCC&#39;s libstdc++). If we remove the <code>myVec.reserve(1)</code> line, the program does crash as expected, since <code>_M_impl._M_start</code> is not initialized and thus points to invalid memory.</p>

<h4 id="array-out-of-bounds">Array out of bounds</h4>

<p>The reason why accessing an index <em>higher</em> than the array&#39;s size works is covered <a href="https://stackoverflow.com/questions/1239938/accessing-an-array-out-of-bounds-gives-no-error-why">here</a>, but a tl;dr is that you are indeed overwriting memory you shouldn&#39;t be, and by chance nothing bad is happening. If we run it through the <code>valgrind</code> memory error detector, it indeed detects our error for any index outside the array. Here&#39;s the log for a write at index <code>1</code>, after a call to <code>reserve(1)</code>:</p>

<pre><code>Invalid write of size 4
   at 0x1091FC: main (ub.cpp:8)
 Address 0x4e21084 is 0 bytes after a block of size 4 alloc&#39;d
   at 0x4841F11: operator new(unsigned long) (vg_replace_malloc.c:434)
   by 0x109825: std::__new_allocator&lt;int&gt;::allocate(unsigned long, void const*) (new_allocator.h:147)
   by 0x109604: allocate (alloc_traits.h:482)
   by 0x109604: std::_Vector_base&lt;int, std::allocator&lt;int&gt; &gt;::_M_allocate(unsigned long) (stl_vector.h:378)
   by 0x1093FF: std::vector&lt;int, std::allocator&lt;int&gt; &gt;::reserve(unsigned long) (vector.tcc:79)
   by 0x1091EA: main (ub.cpp:6)
</code></pre>

<p>Let&#39;s dissect this output:
1. The first line indicates that we wrote 4 bytes somewhere that&#39;s “invalid.” That&#39;s the size of a 32-bit <code>int</code>, which is the type we&#39;re writing into index <code>1</code>.
2. The big call stack tells us where the array that we&#39;re accessing out of bounds was allocated. The penultimate line points us to that <code>std::vector::reserve</code> call we made, which creates a “block of size 4” (the vector&#39;s internal array, with the capacity for a single 4-byte <code>int</code>).</p>

<p>This indicates that we are indeed accessing the internal array out of bounds, and that it is a memory error that will cause UB even on this implementation of <code>std::vector</code>. So that answers that!</p>

<h2 id="speed-at-the-cost-of-safety">Speed at the cost of safety</h2>

<p>Although on my GCC install, using this as actual storage “works” “fine,” it has... issues. A range-based loop over the vector will never visit the elements we wrote out of bounds, because iteration stops at the vector&#39;s size. If the vector gets copied, the copy only brings over the data within its size, and leaves behind everything else. These kinds of issues would be super hard to diagnose had I not spotted the UB here!</p>

<p>Shouldn&#39;t <code>std::vector::operator[]</code> warn us that we&#39;re accessing an element outside of the vector&#39;s size? Let&#39;s check the C++ standard on vector functions.</p>

<blockquote><p>Only <code>at()</code> performs range checking. If the index is out of range, <code>at()</code> throws an <code>out_of_range</code> exception. All other functions do <em>not</em> check.</p></blockquote>

<p>- <em>The C++ Standard Library: A Tutorial and Reference</em> by Nicolai M. Josuttis (2012), pages 274-275</p>

<p>Well, darn. I can understand why, though. When writing code in C++, we expect to have the lowest possible performance overhead, yet still get to use all these nice abstractions. Performing bounds checks, even if cheap, can really add up if we have to do it for every vector access. Changing it to <code>at(0)</code> does indeed print a (relatively) helpful crash message:</p>

<pre><code>terminate called after throwing an instance of &#39;std::out_of_range&#39;
  what():  vector::_M_range_check: __n (which is 1) &gt;= this-&gt;size() (which is 0)
</code></pre>

<p>As I was writing this, an excellent relevant post by <a href="https://blog.allpurposem.at/@/saagar@saagarjha.com" class="u-url mention">@<span>saagar@saagarjha.com</span></a> graced my Mastodon timeline:</p>

<p><video controls="" src="https://federated.saagarjha.com/media/e0fb0d82-7cfe-4d6b-ba5b-a89c7c8d97d6/out.mov">
Download the
  <a href="https://federated.saagarjha.com/media/e0fb0d82-7cfe-4d6b-ba5b-a89c7c8d97d6/out.mov">video.</a>
</video>
<a href="https://federated.saagarjha.com/notice/AbFvHsSx5mhPlyqABk">Original source.</a></p>

<p>That&#39;s not all, though! Remember that curious <code>__glibcxx_requires_subscript(__n);</code> macro in the GCC implementation of <code>operator[]</code>, which I said we&#39;d look at later? Now is before&#39;s later, so let&#39;s take a look at the definition:</p>

<pre><code class="language-cpp">#ifndef _GLIBCXX_ASSERTIONS
  # define __glibcxx_requires_subscript(_N)
#else
  # define __glibcxx_requires_subscript(_N)	\
  __glibcxx_assert(_N &lt; this-&gt;size())
#endif
</code></pre>

<p>So it <em>does</em> do something! You just have to have <code>_GLIBCXX_ASSERTIONS</code> defined. Indeed, if we define that macro with the <code>-D_GLIBCXX_ASSERTIONS</code> compiler flag, we get this wonderful totally-readable error when the code tries to index out of bounds:</p>

<pre><code>/usr/include/c++/13.2.1/bits/stl_vector.h:1125: std::vector&lt;_Tp, _Alloc&gt;::reference std::vector&lt;_Tp, _Alloc&gt;::operator[](size_type) [with _Tp = int; _Alloc = std::allocator&lt;int&gt;; reference = int&amp;; size_type = long unsigned int]: Assertion &#39;__n &lt; this-&gt;size()&#39; failed.
</code></pre>

<p>Okay, it&#39;s no “you&#39;re accessing this vector out of bounds, please stop,” but it certainly is better than dealing with the potential mess of undefined behavior that awaits otherwise. I guess I&#39;ll be adding this flag to all my debug builds from now on!</p>

<p>If you&#39;re curious, <a href="https://git.allpurposem.at/mat/GraphicsProg1/src/commit/b3ef88189ee7d2bec2d1da08edbd6e2e84928496/source/DataTypes.h#L178">this</a> is my original code where I found the issue.</p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

]]></content:encoded>
      <guid>https://blog.allpurposem.at/the-vector-reserve-fallacy</guid>
      <pubDate>Fri, 27 Oct 2023 21:53:41 +0000</pubDate>
    </item>
    <item>
      <title>Adventures cross-compiling a Windows game engine</title>
      <link>https://blog.allpurposem.at/adventures-cross-compiling-a-windows-game-engine</link>
      <description>&lt;![CDATA[As part of my game development major at DAE, I have to work on several projects which were not made with support for my platform of choice (Linux). Thankfully, most of these have been simple frameworks wrapping around SDL and OpenGL, so my job was limited to rewriting the build system from Visual Studio&#39;s .sln project file to a cross-platform CMake project (and fixing some bugs along the way). Not too bad. I&#39;d miss the beginning of the first class, but was up and going shortly after. Among these were the first two semesters of Programming. Here&#39;s a list of school engines I have ported so far:&#xA;&#xA;Programming 1 &#34;SDL Framework&#34;: https://git.allpurposem.at/mat/SDL-Framework&#xA;Programming 2 &#34;GameDevEngine2&#34;: https://git.allpurposem.at/mat/GameDevEngine2&#xA;Graphics Programming &#34;RayTracer&#34;: https://git.allpurposem.at/mat/GraphicsProg1&#xA;Gameplay Programming &#34;FRAMEWORK&#34;: https://git.allpurposem.at/mat/GameplayProg&#xA;&#xA;The versatility of having a cross-platform project allowed me to add tons of niceties for some of these. The one I&#39;m most happy with is the &#34;GameDevEngine2&#34; framework from Programming 2, to which I added web support and ended up using it for my and 2FoamBoards&#39;s entry in the 2023 GMTK game jam.&#xA;&#xA;Programming 3&#xA;&#xA;I&#39;d been having it easy. A couple nonstandard Microsoft Visual C++ (MSVC) bits of syntax here, a couple win32 API calls (functions that are specific to Windows) there... I wasn&#39;t expecting what arrived in my downloads folder today. I applied my usual CMake boilerplate, with SDL support, hit run to see the perhaps 50-100 errors... 
and instead was greeted with a simple but effective singular error.&#xA;&#xA;apm@apg ~/S/Prog3 (main)  clang++ source/GameWinMain.cpp &#xA;In file included from source/GameWinMain.cpp:9:&#xA;source/GameWinMain.h:12:10: fatal error: &#39;windows.h&#39; file not found&#xA;include windows.h&#xA;         ^&#xA;1 error generated.&#xA;&#xA;Oh, no&#xA;There&#39;s no SDL. There&#39;s no OpenGL. No GLFW, Qt, or GTK. It&#39;s all bare Windows API calls. I think I was in some form of state of disbelief, as I spent the next 30 minutes slowly creating #defines and typedefs to patch in all the types. Maybe, just maybe, I could patch around the types and it would magically open a window and I could get started with my classwork. No such thing happened.&#xA;&#xA;!--more--&#xA;Options&#xA;&#xA;So: what are my options? Is this salvageable, without having to boot the dreaded virtual machine? Let&#39;s see... I could:&#xA;&#xA;continue patching around the 3-4k lines of win32 API calls like I was ineffectively doing before&#xA;rewrite the engine from scratch to support SDL&#xA;build the native .sln file by somehow running MSVC on WINE (a Windows compatibility layer for Linux)&#xA;cross-compile from Linux to Windows and run the .exe file with WINE&#xA;&#xA;Obviously the first two options would be preferable, as they don&#39;t come with a hard dependency on the unfamiliar world of WINE. However, they sadly also take the most time. I have not yet discarded the second option (the author of the engine gave me the green light to rewrite it for native Linux, and even use it in exams (that&#39;s a first!!)), but as I have to follow the class from the start, I think I&#39;ll be going with WINE.&#xA;&#xA;aur/msvc-wine-git&#xA;&#xA;Of course, I&#39;m not the first person to want to build a .sln project from Linux. This appears to be a solved problem, with the polished-looking msvc-wine toolchain available as a native package for my distro. 
So I went ahead and installed it:&#xA;&#xA;apm@apg ~/S/Prog3 (main)  gimme msvc-wine-git&#xA;[sudo] password for apm: &#xA;:: Resolving dependencies...&#xA;:: Calculating conflicts...&#xA;:: Calculating inner conflicts...&#xA;&#xA;Aur (1) msvc-wine-git-17.7.r4-2&#xA;&#xA;:: Proceed to review? [Y/n]: &#xA;&#xA;It diligently fetched MSVC, the Windows 11 SDK, and all the necessary components from Microsoft&#39;s servers, while I had time to read the documentation. I happened upon the CMake instructions, which is how I&#39;ve managed all my school-related projects so far, and it didn&#39;t stick in my brain. I don&#39;t intend to criticize the writing, but something about it being all the way in the bottom in a FAQ, with no code blocks or example commands, or having a class going on around me while I was doing this prevented me from understanding how I&#39;m supposed to use it. The only time I&#39;ve ever used a separate toolchain was Emscripten; it provides a nice little emcmake wrapper for CMake which takes care of a lot of the details for you. I gave it a few tries, but seeing I was getting nowhere, and every second was lost class time, I decided to move on to my last option.&#xA;&#xA;LLVM&#xA;&#xA;I knew a little about LLVM before this, from having used clangd as my language server for C++ projects. As I understand it, it&#39;s a group of compilers designed in such a way that the &#34;frontends&#34; (which read the text code and output an intermediate language) and &#34;backends&#34; (read intermediate language and output the final binary) are swappable and interchangeable. This means you can use the same backend to compile both C++ and Rust code, while still getting equally well-optimized machine code out the other side. I enlisted the help of @JohnyTheCarrot@toot.community, who I knew has worked with clang before. 
He told me about the concept of an &#34;LLVM triple&#34;, which is a setting for LLVM compilers that tells it what sort of machine you want it to output code for. Crucially, you can specify a triplet for a completely different system than your own, and it should still work. I tried the following command:&#xA;clang++ -target x8664-w64-mingw32 source/.cpp -o game&#xA;&#xA;This currently outputs 227 linker errors. I know there were many syntax-related compiler errors which I&#39;ve since fixed, but it does get us past the dreaded #include windows.h! All of the linker errors take the following form:&#xA;/usr/bin/x8664-w64-mingw32-ld: /tmp/GameEngine-ac27d8.o:GameEngine.cpp:(.text+0xc95f): undefined reference to `impDeleteObject&#xA;&#xA;Fun with the linker &#xA;&#xA;Each of these is related to a call of a Windows-related function. It looks like we&#39;re missing the libraries! Adding the -mwindows flag tells Clang it&#39;s compiling &amp; linking a GUI Windows app, instead of a command line one. This causes linking against a lot of win32 GUI-related functions, reducing the linker errors to a mere 9. There&#39;s two kinds:&#xA;&#xA;_impAlphaBlend and _impTransparentBlt&#xA;According to the code, these are used for transparency. I have yet to use this engine, but from the names I&#39;m guessing they allow for drawing semi-opaque images on top of each other and blend the colors together. According to Microsoft&#39;s documentation, these are located in Msimg32.dll.&#xA;&#xA;_impmciSendStringA&#xA;These are functions from the defunct Multimedia Control Interface (that&#39;s the mci at the start of the name!), which this engine uses to play audio. Microsoft helpfully kept the legacy documentation online, informing me that these belong to Winmm.dll.&#xA;&#xA;At first, I assumed I&#39;d have to get these from a copy of Windows. 
However, I remembered WINE has a lot of open source reimplementations of these DLLs (Windows&#39;s version of .so shared libraries), and sure enough locate msimg32.dll (note the lowercase: I wasted some time with this because Linux is case sensitive, while Windows is not!) pointed me straight to a DLL I could yoink. I added it to the list of files to compile, and the msimg32-related linker errors were gone. Hooray!&#xA;&#xA;...or so I thought. I excitedly copied in winmm.dll and tried to compile...&#xA;clang-16: error: unable to execute command: Segmentation fault (core dumped)&#xA;clang-16: error: linker command failed due to signal (use -v to see invocation)&#xA;&#xA;Excuse me?? The linker is segfaulting?? To be honest, I have no idea whether this is an actual bug in LLVM&#39;s linker, but it sure did stump me for a while. I thought maybe my copy of winmm.dll was corrupt, or WINE did something weird with it. I went as far as downloading Microsoft&#39;s version of the DLL, but was met with the same sad message. What could I be possibly doing wrong?&#xA;&#xA;Oh. I&#39;m not supposed to be copying the DLLs into here, am I? The last time I used a linker without going through CMake, I was passing libraries to it was -llibname. But it can&#39;t be that easy for this... can it? It&#39;d have to go to my default WINE prefix to fetch them, which sounds plain weird. Libraries come from system paths, not user-specific folders. 
Well, might be worth a try anyways...&#xA;&#xA;apm@apg ~/S/P/build (main)  clang++ -mwindows -target x8664-w64-mingw32 ../source/.cpp -o game -lmsimg32 -lwinmm&#xA;In file included from ../source/GameWinMain.cpp:10:&#xA;../source/GameEngine.h:19:9: warning: &#39;WIN32WINNT&#39; macro redefined [-Wmacro-redefined]&#xA;define WIN32WINNT 0x0A00                             // Windows 10&#xA;        ^&#xA;/usr/x8664-w64-mingw32/include/mingw.h:239:9: note: previous definition is here&#xA;define WIN32WINNT 0xa00&#xA;        ^&#xA;1 warning generated.&#xA;Warning: corrupt .drectve at end of def file&#xA;Warning: corrupt .drectve at end of def file&#xA;Warning: corrupt .drectve at end of def file&#xA;apm@apg ~/S/P/build (main)  ls&#xA;game.exe&#xA;&#xA;wait*. That built?? HUH???? There&#39;s no way it--&#xA;apm@apg ~/S/P/build (main)  ./game.exe&#xA;-snip-&#xA;0130:err:module:importdll Library libgccsseh-1.dll (which is needed by L&#34;Z:\\home\\apm\\School\\Prog3\\build\\game.exe&#34;) not found&#xA;0130:err:module:importdll Library libstdc++-6.dll (which is needed by L&#34;Z:\\home\\apm\\School\\Prog3\\build\\game.exe&#34;) not found&#xA;0130:err:module:LdrInitializeThunk Importing dlls for L&#34;Z:\\home\\apm\\School\\Prog3\\build\\game.exe&#34; failed, status c0000135&#xA;&#xA;Right. Not so fast, heh. Still, this is great news! I don&#39;t know how or why this works, but we&#39;re linking to the DLLs somehow somewhere. WINE can&#39;t find some mingw32 libraries which were pulled in by -mwindows, but we can easily point it to them with export WINEPATH=&#34;/usr/x8664-w64-mingw32/bin&#34;&#xA;&#xA;And that&#39;s it! Here&#39;s the engine in all its glory, with audio support and all! It&#39;s beautiful...&#xA;&#xA;A screenshot of a completely black window with many lines of warnings from WINE behind it&#xA;&#xA;Right, there&#39;s nothing built on it yet. It&#39;s just a blank canvas. 
But hey, it doesn&#39;t crash!&#xA;&#xA;What&#39;s next?&#xA;&#xA;Having this run through WINE does come with a few limitations:&#xA;&#xA;All WINE apps take a long while to launch, though you can vastly improve this by running wineserver --persistent beforehand.&#xA;Usually, I attach gdb (the GNU debugger) to my code from my IDE, neovim. However, with this program running under WINE, I don&#39;t know how I would do that. Debugging remains an unsolved mystery (EDIT: see Addendum, I figured it out!).&#xA;WINE is slowly merging Wayland support, but at the moment it runs under X11, meaning I&#39;m sacrificing some performance and convenience.&#xA;Finally, of course, this will never have Linux support. I don&#39;t like that.&#xA;&#xA;Long-term, depending on the course workload and how complex the engine functions end up being, I think I will rewrite it in SDL. This will have the added bonus of enabling, like with my other engine ports, web support (see my Programming 2 end project here and a game jam game made in the same engine here). However, I think this will take longer than I think is reasonable to spend while procrastinating on other classes, so I&#39;m leaving it here. I wrote down my process while it was still fresh in my mind, so I hope this was an interesting read! As always, any and all constructive feedback is welcome directed to me: @mat@mastodon.gamedev.place .&#xA;&#xA;I am considering writing up my general porting process in a separate blog post, so perhaps expect that next!&#xA;&#xA;---&#xA;&#xA;Addendum&#xA;&#xA;After doing some additional research, and asking around in the very helpful WineHQ IRC room, I found a way to get debugging working! The first step is adding the -g flag to the clang++ invocation, which tells clang we want it to generate debug information (namely source maps, so the debugger can show which line of code we&#39;re at). 
Then I simply have to run winedbg --gdb game.exe, and I am presented with a (nearly) full-featured gdb prompt!&#xA;&#xA;A screenshot of a gdb interface showing source code of a WinMain function which runs the game engine&#xA;&#xA;I&#39;m unsure how to hook this up to neovim (maybe I can look into the Debug Adapter Protocol for this?), but for now just having a gdb environment is awesome enough. Onto more adventures!&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at&#xD;&#xA;&#xD;&#xA;]]&gt;</description>
      <content:encoded><![CDATA[<p>As part of my game development major at DAE, I have to work on several projects which were not made with support for my platform of choice (Linux). Thankfully, most of these have been simple frameworks wrapping around SDL and OpenGL, so my job was limited to rewriting the build system from Visual Studio&#39;s <code>.sln</code> project file to a cross-platform CMake project (and fixing some bugs along the way). Not too bad. I&#39;d miss the beginning of the first class, but was up and going shortly after. Among these were the first two semesters of Programming. Here&#39;s a list of school engines I have ported so far:</p>
<ol><li>Programming 1 “SDL Framework”: <a href="https://git.allpurposem.at/mat/SDL-Framework">https://git.allpurposem.at/mat/SDL-Framework</a></li>
<li>Programming 2 “GameDevEngine2”: <a href="https://git.allpurposem.at/mat/GameDevEngine2">https://git.allpurposem.at/mat/GameDevEngine2</a></li>
<li>Graphics Programming “RayTracer”: <a href="https://git.allpurposem.at/mat/GraphicsProg1">https://git.allpurposem.at/mat/GraphicsProg1</a></li>
<li>Gameplay Programming “_FRAMEWORK”: <a href="https://git.allpurposem.at/mat/GameplayProg">https://git.allpurposem.at/mat/GameplayProg</a></li></ol>

<p>The versatility of having a cross-platform project allowed me to add tons of niceties for some of these. The one I&#39;m most happy with is the “GameDevEngine2” framework from Programming 2, to which I added web support and ended up using it for my and 2FoamBoards&#39;s <a href="https://2foamboards.itch.io/murder">entry in the 2023 GMTK game jam</a>.</p>

<h2 id="programming-3">Programming 3</h2>

<p>I&#39;d been having it easy. A couple of nonstandard Microsoft Visual C++ (MSVC) syntax quirks here, a couple of win32 API calls (functions specific to Windows) there... I wasn&#39;t expecting what arrived in my downloads folder today. I applied my usual CMake boilerplate with SDL support, hit run expecting the usual 50-100 errors... and instead was greeted with a single but very effective error.</p>

<pre><code class="language-shell">apm@apg ~/S/Prog3 (main)&gt; clang++ source/GameWinMain.cpp 
In file included from source/GameWinMain.cpp:9:
source/GameWinMain.h:12:10: fatal error: &#39;windows.h&#39; file not found
#include &lt;windows.h&gt;
         ^~~~~~~~~~~
1 error generated.
</code></pre>

<h3 id="oh-no">Oh, no</h3>

<p>There&#39;s no SDL. There&#39;s no OpenGL. No GLFW, Qt, or GTK. It&#39;s <em>all</em> bare Windows API calls. I must have been in some state of disbelief, as I spent the next 30 minutes slowly writing <code>#define</code>s and <code>typedef</code>s to patch in all the types. Maybe, just maybe, I could shim around them, and the engine would magically open a window and let me get started with my classwork. No such thing happened.</p>
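
<p>To give a flavor of this doomed approach, here is a minimal sketch of the kind of shims involved. The names and definitions below are hypothetical illustrations, not the real win32 declarations:</p>

<pre><code class="language-cpp">// Hypothetical stand-ins for win32 types, for illustration only.
#include &lt;cstdint&gt;
#include &lt;cstdio&gt;

typedef void* HWND;             // opaque window-handle stand-in
typedef std::uint32_t DWORD;    // win32&#39;s 32-bit unsigned integer
#define WINAPI                  // calling-convention macro, stubbed to nothing

// A fake API call that typechecks against the shimmed types.
WINAPI DWORD FakeGetWindowFlags(HWND window) {
    return window == nullptr ? 42u : 0u;
}

int main() {
    std::printf(&#34;%u\n&#34;, FakeGetWindowFlags(nullptr));
}
</code></pre>

<p>Multiply this by the thousands of declarations in <code>windows.h</code>, and it becomes clear why this approach went nowhere.</p>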



<h2 id="options">Options</h2>

<p>So: what are my options? Is this salvageable, without having to boot the dreaded virtual machine? Let&#39;s see... I could:</p>
<ul><li>continue patching around the 3-4k lines of win32 API calls like I was ineffectively doing before</li>
<li>rewrite the engine from scratch to support SDL</li>
<li>build the native <code>.sln</code> file by somehow running MSVC on WINE (a Windows compatibility layer for Linux)</li>
<li>cross-compile from Linux to Windows and run the <code>.exe</code> file with WINE</li></ul>

<p>Obviously the first two options would be preferable, as they don&#39;t come with a hard dependency on the unfamiliar world of WINE. However, they sadly also take the most time. I have not yet discarded the second option (the author of the engine gave me the green light to rewrite it for native Linux, and even use it in exams (that&#39;s a first!!)), but as I have to follow the class from the start, I think I&#39;ll be going with WINE.</p>

<h3 id="aur-msvc-wine-git"><code>aur/msvc-wine-git</code></h3>

<p>Of course, I&#39;m not the first person to want to build a <code>.sln</code> project from Linux. This appears to be a solved problem, with the polished-looking <a href="https://github.com/mstorsjo/msvc-wine">msvc-wine</a> toolchain available as a native package for my distro. So I went ahead and installed it:</p>

<pre><code class="language-shell">apm@apg ~/S/Prog3 (main)&gt; gimme msvc-wine-git
[sudo] password for apm: 
:: Resolving dependencies...
:: Calculating conflicts...
:: Calculating inner conflicts...

Aur (1) msvc-wine-git-17.7.r4-2

:: Proceed to review? [Y/n]: 
</code></pre>

<p>It diligently fetched MSVC, the Windows 11 SDK, and all the necessary components from Microsoft&#39;s servers, which gave me time to read the documentation. I happened upon the CMake instructions (CMake being how I&#39;ve managed all my school-related projects so far), but they didn&#39;t stick in my brain. I don&#39;t mean to criticize the writing, but something about them being tucked away at the bottom of a FAQ with no code blocks or example commands (or maybe the class going on around me at the time) kept me from understanding how I was supposed to use them. The only separate toolchain I&#39;d ever used was <a href="https://emscripten.org">Emscripten</a>, which provides a nice little <code>emcmake</code> wrapper for CMake that takes care of a lot of the details for you. I gave it a few tries, but seeing that I was getting nowhere and every second was lost class time, I decided to move on to my last option.</p>
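
<p>For the record, the standard way to hook an external toolchain into CMake is a toolchain file passed with <code>-DCMAKE_TOOLCHAIN_FILE</code>. A hypothetical sketch follows; the compiler paths are assumptions for illustration, and msvc-wine&#39;s README documents the real setup:</p>

<pre><code class="language-cmake"># toolchain-msvc.cmake: hypothetical sketch, not msvc-wine&#39;s actual file
set(CMAKE_SYSTEM_NAME Windows)                  # tell CMake we are cross-compiling
set(CMAKE_C_COMPILER   /opt/msvc/bin/x64/cl)    # assumed compiler wrapper path
set(CMAKE_CXX_COMPILER /opt/msvc/bin/x64/cl)    # assumed compiler wrapper path
</code></pre>

<p>It would then be used as <code>cmake -DCMAKE_TOOLCHAIN_FILE=toolchain-msvc.cmake ..</code>. Had I pieced this together during class, the story might have ended here.</p>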

<h2 id="llvm">LLVM</h2>

<p>I knew a little about <a href="https://llvm.org">LLVM</a> before this, from having used <code>clangd</code> as my language server for C++ projects. As I understand it, it&#39;s a group of compilers designed so that the “frontends” (which read the source code and output an intermediate language) and “backends” (which read the intermediate language and output the final binary) are swappable and interchangeable. This means you can use the same backend to compile both C++ and Rust code, while still getting equally well-optimized machine code out the other side. I enlisted the help of <a href="https://blog.allpurposem.at/@/JohnyTheCarrot@toot.community" class="u-url mention">@<span>JohnyTheCarrot@toot.community</span></a>, who I knew had worked with <code>clang</code> before. He told me about the concept of an “LLVM target triple”, a setting that tells an LLVM compiler what sort of machine it should output code for. Crucially, you can specify a triple for a completely different system than your own, and it <em>should</em> still work. I tried the following command:</p>

<pre><code class="language-bash">clang++ -target x86_64-w64-mingw32 source/*.cpp -o game
</code></pre>
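
<p>As an aside, the target triple also controls the compiler&#39;s predefined macros, so the same source file can tell which platform it is being compiled for. A quick sketch using only standard predefined macros:</p>

<pre><code class="language-cpp">#include &lt;cstdio&gt;

int main() {
    // _WIN32 is predefined when targeting Windows (e.g. with
    // -target x86_64-w64-mingw32); it is absent on a native Linux build.
#ifdef _WIN32
    std::puts(&#34;built for a Windows target&#34;);
#else
    std::puts(&#34;built for a non-Windows target&#34;);
#endif
}
</code></pre>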

<p>After fixing a batch of syntax-related compiler errors, this command outputs 227 linker errors. That may sound bad, but it means we&#39;re past the dreaded <code>#include &lt;windows.h&gt;</code>! All of the linker errors take the following form:</p>

<pre><code class="language-shell">/usr/bin/x86_64-w64-mingw32-ld: /tmp/GameEngine-ac27d8.o:GameEngine.cpp:(.text+0xc95f): undefined reference to `__imp_DeleteObject&#39;
</code></pre>

<h3 id="fun-with-the-linker">Fun with the linker</h3>

<p>Each of these corresponds to a call to a Windows API function. It looks like we&#39;re missing the libraries! Adding the <code>-mwindows</code> flag tells Clang it&#39;s compiling &amp; linking a GUI Windows app instead of a command-line one. This links in a number of win32 GUI-related libraries, reducing the linker errors to a mere 9. There are two kinds:</p>
<ul><li><p><code>__imp_AlphaBlend</code> and <code>__imp_TransparentBlt</code>
According to the code, these are used for transparency. I haven&#39;t used this engine yet, but from the names I&#39;m guessing they allow drawing semi-opaque images on top of each other and blending the colors together. According to Microsoft&#39;s documentation, they live in <code>Msimg32.dll</code>.</p></li>

<li><p><code>__imp_mciSendStringA</code>
These functions come from the defunct <a href="https://en.wikipedia.org/wiki/Media_Control_Interface">Media Control Interface</a> (that&#39;s the <code>mci</code> at the start of the name!), which this engine uses to play audio. Microsoft helpfully kept the legacy documentation online, informing me that they belong to <code>Winmm.dll</code>.</p></li></ul>

<p>At first, I assumed I&#39;d have to get these from a copy of Windows. However, I remembered WINE has a lot of open source reimplementations of these DLLs (Windows&#39;s version of <code>.so</code> shared libraries), and sure enough <code>locate msimg32.dll</code> (note the lowercase: I wasted some time with this because Linux is case sensitive, while Windows is not!) pointed me straight to a DLL I could yoink. I added it to the list of files to compile, and the <code>msimg32</code>-related linker errors were gone. Hooray!</p>

<p>...or so I thought. I excitedly copied in <code>winmm.dll</code> and tried to compile...</p>

<pre><code class="language-shell">clang-16: error: unable to execute command: Segmentation fault (core dumped)
clang-16: error: linker command failed due to signal (use -v to see invocation)
</code></pre>

<p>Excuse me?? The <em>linker</em> is segfaulting?? To be honest, I have no idea whether this is an actual bug in LLVM&#39;s linker, but it sure did stump me for a while. I thought maybe my copy of <code>winmm.dll</code> was corrupt, or WINE did something weird with it. I went as far as downloading Microsoft&#39;s version of the DLL, but was met with the same sad message. What could I possibly be doing wrong?</p>

<p><strong><em>Oh.</em></strong> I&#39;m not supposed to be copying the DLLs into here, am I? The last time I used a linker without going through CMake, I passed libraries to it with <code>-l&lt;libname&gt;</code>. But it can&#39;t be that easy for this... can it? It&#39;d have to go to my default WINE prefix to fetch them, which sounds plain weird. Libraries come from system paths, not user-specific folders. Well, might be worth a try anyways...</p>

<pre><code class="language-shell">apm@apg ~/S/P/build (main)&gt; clang++ -mwindows -target x86_64-w64-mingw32 ../source/*.cpp -o game -lmsimg32 -lwinmm
In file included from ../source/GameWinMain.cpp:10:
../source/GameEngine.h:19:9: warning: &#39;_WIN32_WINNT&#39; macro redefined [-Wmacro-redefined]
#define _WIN32_WINNT 0x0A00                             // Windows 10
        ^
/usr/x86_64-w64-mingw32/include/_mingw.h:239:9: note: previous definition is here
#define _WIN32_WINNT 0xa00
        ^
1 warning generated.
Warning: corrupt .drectve at end of def file
Warning: corrupt .drectve at end of def file
Warning: corrupt .drectve at end of def file
apm@apg ~/S/P/build (main)&gt; ls
game.exe*
</code></pre>

<p><em>wait</em>. That built?? HUH???? There&#39;s no way it—</p>

<pre><code class="language-shell">apm@apg ~/S/P/build (main)&gt; ./game.exe
-snip-
0130:err:module:import_dll Library libgcc_s_seh-1.dll (which is needed by L&#34;Z:\\home\\apm\\School\\Prog3\\build\\game.exe&#34;) not found
0130:err:module:import_dll Library libstdc++-6.dll (which is needed by L&#34;Z:\\home\\apm\\School\\Prog3\\build\\game.exe&#34;) not found
0130:err:module:LdrInitializeThunk Importing dlls for L&#34;Z:\\home\\apm\\School\\Prog3\\build\\game.exe&#34; failed, status c0000135
</code></pre>

<p>Right. Not so fast, heh. Still, this is great news! I don&#39;t know exactly how or why this works, but we&#39;re finding and linking the DLLs somewhere (most likely, the mingw-w64 toolchain ships import libraries for the standard Windows DLLs, and that&#39;s what <code>-l</code> picked up). WINE can&#39;t find some mingw32 runtime libraries which were pulled in by <code>-mwindows</code>, but we can easily point it to them with <code>export WINEPATH=&#34;/usr/x86_64-w64-mingw32/bin&#34;</code>.</p>

<p>And that&#39;s it! Here&#39;s the engine in all its glory, with audio support and all! It&#39;s beautiful...</p>

<p><img src="https://allpurposem.at/blog/prog3-engine.png" alt="A screenshot of a completely black window with many lines of warnings from WINE behind it"></p>

<p>Right, there&#39;s nothing built on it yet. It&#39;s just a blank canvas. But hey, it doesn&#39;t crash!</p>

<h2 id="what-s-next">What&#39;s next?</h2>

<p>Having this run through WINE does come with a few limitations:</p>
<ul><li>All WINE apps take a long while to launch, though you can vastly improve this by running <code>wineserver --persistent</code> beforehand.</li>
<li>Usually, I attach <code>gdb</code> (the GNU debugger) to my code from my IDE, neovim. However, with this program running under WINE, I don&#39;t know how I would do that. Debugging remains an unsolved mystery (EDIT: see Addendum, I figured it out!).</li>
<li>WINE is slowly merging Wayland support, but at the moment it runs under X11, meaning I&#39;m sacrificing some performance and convenience.</li>
<li>Finally, of course, this will never have Linux support. I don&#39;t like that.</li></ul>
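
<p>For reference, the workarounds above can be rolled into a small launch script. The paths are the ones from earlier in this post; see <code>man wineserver</code> for the details of the persistent flag:</p>

<pre><code class="language-shell">#!/bin/sh
# Keep the wineserver alive and expose the mingw runtime DLLs, then launch.
export WINEPATH=&#34;/usr/x86_64-w64-mingw32/bin&#34;  # libstdc++-6.dll and friends
wineserver --persistent                        # speeds up subsequent launches
wine ./game.exe
</code></pre>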

<p>Long-term, depending on the course workload and how complex the engine functions end up being, I think I will rewrite it in SDL. This will have the added bonus of enabling web support, like with my other engine ports (see my Programming 2 end project <a href="https://allpurposem.at/tdyd.html">here</a> and a game jam game made in the same engine <a href="https://2foamboards.itch.io/murder">here</a>). However, that will take longer than I think is reasonable to spend while procrastinating on other classes, so I&#39;m leaving it here. I wrote down my process while it was still fresh in my mind, so I hope this was an interesting read! As always, any and all constructive feedback is welcome; send it to me at <a href="https://blog.allpurposem.at/@/mat@mastodon.gamedev.place" class="u-url mention">@<span>mat@mastodon.gamedev.place</span></a>.</p>

<p>I am considering writing up my general porting process in a separate blog post, so perhaps expect that next!</p>

<hr>

<h2 id="addendum">Addendum</h2>

<p>After doing some additional research, and asking around in the very helpful <a href="https://www.winehq.org/irc">WineHQ IRC</a> room, I found a way to get debugging working! The first step is adding the <code>-g</code> flag to the <code>clang++</code> invocation, which tells clang to generate debug information (namely mappings from machine code back to source lines, so the debugger can show which line of code we&#39;re at). Then I simply have to run <code>winedbg --gdb game.exe</code>, and I am presented with a (nearly) full-featured gdb prompt!</p>
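
<p>Put together, the whole debug workflow from this addendum is just two commands (the same invocation as before, plus <code>-g</code>):</p>

<pre><code class="language-shell"># Rebuild with debug info, then attach through WINE&#39;s gdb proxy.
clang++ -g -mwindows -target x86_64-w64-mingw32 ../source/*.cpp \
    -o game -lmsimg32 -lwinmm
winedbg --gdb game.exe
</code></pre>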

<p><img src="https://allpurposem.at/blog/prog3-windbg.png" alt="A screenshot of a gdb interface showing source code of a WinMain function which runs the game engine"></p>

<p>I&#39;m unsure how to hook this up to neovim (maybe I can look into the Debug Adapter Protocol for this?), but for now, just having a gdb environment is awesome enough. Onto more adventures!</p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

]]></content:encoded>
      <guid>https://blog.allpurposem.at/adventures-cross-compiling-a-windows-game-engine</guid>
      <pubDate>Thu, 21 Sep 2023 20:20:48 +0000</pubDate>
    </item>
    <item>
      <title>Can you fit Minecraft in a QR code?</title>
      <link>https://blog.allpurposem.at/minecraft-qr</link>
      <description>&lt;![CDATA[Answer: Yes! Here it is:&#xA;&#xA;A QR code which, when scanned, outputs a Minecraft executable&#xA;&#xA;The game launches, and you can move around the 64x64x64 world with WASD. Space is used to jump. Look around with the mouse. You can left click to break a block, and right click to place dirt.&#xA;&#xA;You can scan it with the following command on Linux:&#xA;zbarcam -1 --raw -Sbinary   /tmp/m4k &amp;&amp; chmod +x /tmp/m4k  &amp;&amp; /tmp/m4k&#xA;-1: exit after scanning the code&#xA;--raw: don&#39;t process it as text&#xA;--Sbinary: use the binary configuration&#xA;&#xA;A screenshot of Minecraft4k, a pixelated, randomly-filled world with a low render distance. The whole game fits in a QR code.&#xA;A screenshot of Minecraft4k, showing the word &#34;Hi!&#34; built out of dirt blocks.&#xA;!--more--&#xA;The project is available on GitHub at TheSunCat/Minecraft4k&#xA;&#xA;How???&#xA;&#xA;Short answer: Pain, suffering, evil dark magic, compression, a couple years, some more pain, and a bit of luck.&#xA;&#xA;Long answer: Well, it&#39;s a long story. If you&#39;re ready to learn about various creative game programming techniques and cursed incantations (don&#39;t worry, I will explain each concept as it becomes relevant), strap on!&#xA;&#xA;  NOTE: This is my first ever blog post. I&#39;m trying it out, as I really enjoy writing! I tried my best to keep it accessible and entertaining, but I am very much open to constructive feedback on how it and future posts can be made better. Every time I describe a major change or evolution, there will be a link to the file or commit in question.&#xA;&#xA;The goal&#xA;&#xA;So, how big is a QR code? Truth is, they come in many sizes. The one you&#39;re probably most familiar with is &#34;version 1&#34;, with 21x21 pixels. This can store 25 characters of text, but only 17 8-bit bytes. 
The difference is because text can be more efficiently encoded than bytes can, as there are less possible values for a QR-compatible text character than the 255 values a byte can have.&#xA;A small standard QR code encoding the string &#34;Minecraft4k&#34; &#xA;&#xA;The biggest existing QR code, which you can see at the top of the document, is &#34;version 40&#34; and fits a whopping 2953 bytes. Let&#39;s see how many max-size QR codes it would take to fit the following Villager idle sound from Minecraft:&#xA;&#xA;audio controls=&#34;controls&#34;&#xA;  source type=&#34;audio/ogg&#34; src=&#34;https://allpurposem.at/blog/idle1.ogg&#34;/source&#xA;/audio&#xA;&#xA;That&#39;s, uh, 8605 bytes. It fills up (nearly) three QR codes. We have to fit playable Minecraft into a third of the Villager &#34;hmm&#34; sound.&#xA;&#xA;Exposition&#xA;&#xA;On December 2009, Markus Persson, the creator of Minecraft, released a very cut-down version of Minecraft for the Java 4k Game Programming Contest, in which contestants develop games in Java which fit under 4 kilobytes (4096 bytes). An archived version of the game is available on Archive.org. The game renders a pixelated image vaguely resembling Minecraft. All this in... 2581 bytes. Woah. That&#39;s much less than 4k. AND it already fits in a QR code. Remember, our size limit is 2953 bytes.&#xA;&#xA;So the deed is done, right? Not quite! This version depends on the Java Applets framework, which was phased out by browsers starting in 2013, and is now completely unusable. Also, I think we can do better. The game suffers from bugs, poor performance, and low resolution. Not only that, but running it required a full web browser, Java installation, and the Java Applets plugin enabled. Can a standalone version of Minecraft really fit in 2953 bytes? &#xA;&#xA;The Java era &#xA;To improve upon software, we need to change the source code. Unfortunately, the code for Minecraft4k was never made available, and likely never will be. 
All original posts about it now lead to a 404 error, and the original Minecraft4k page where you could play it now redirects to Microsoft&#39;s Minecraft homepage. Thankfully, Java code is not too hard to retrieve from JAR files! Unlike most compiled languages like C and C++, which compile to optimized assembly language, Java programs compile to an intermediary called Java bytecode (which is then interpreted into normal assembly by the Java Virtual Machine when you run the JAR file!). This bytecode still bears a strong resemblance to the original source code, and preserves a lot of information, such as the names of variables and functions. This means that using tools made by very smart people, we can &#34;de-compile&#34; the bytecode stored inside Minecraft4k and get back usable source code!&#xA;&#xA;Let&#39;s pop the JAR file into the wonderful jd-gui decompiler, and...&#xA;&#xA;float f13 = 0.0F;&#xA;float f14 = 0.0F;&#xA;f14 += (this.M[119] - this.M[115])  0.02F;&#xA;f13 += (this.M[100] - this.M[97])  0.02F;&#xA;f4 = 0.5F;&#xA;f5 = 0.99F;&#xA;f6 = 0.5F;&#xA;f4 += f9  f14 + f10  f13;&#xA;f6 += f10  f14 - f9  f13;&#xA;f5 += 0.003F;&#xA;int m;&#xA;label208: for (m = 0; m &lt; 3; m++) {&#xA;    float f16 = f1 + f4  ((m + 0) % 3 / 2);&#xA;    float f17 = f2 + f5  ((m + 1) % 3 / 2);&#xA;    float f19 = f3 + f6  ((m + 2) % 3 / 2);&#xA;    for (int i12 = 0; i12 &lt; 12; i12++) {&#xA;        int i13 = (int)(f16 + (i12     0 &amp; 0x1)  0.6F - 0.3F) - 64;&#xA;        int i14 = (int)(f17 + ((i12     2) - 1)  0.8F + 0.65F) - 64;&#xA;        int i15 = (int)(f19 + (i12     1 &amp; 0x1)  0.6F - 0.3F) - 64;&#xA;        if (i13  0 || i14 &lt; 0 || i15 &lt; 0 || i13 = 64 || i14   = 64 || i15   = 64 || arrayOfInt2[i13 + i14  64 + i15  4096]   0) {&#xA;            if (m != 1)&#xA;                break label208; &#xA;            if (this.M[32]   0 &amp;&amp; f5   0.0F) {&#xA;                this.M[32] = 0;&#xA;                f5 = -0.1F;&#xA;                break label208;&#xA;       
     } &#xA;            f5 = 0.0F;&#xA;            break label208;&#xA;        } &#xA;    } &#xA;    f1 = f16;&#xA;    f2 = f17;&#xA;    f3 = f19;&#xA;}&#xA;&#xA;Uh-oh. It&#39;s true that Java usually keeps variable and function names when compiling. But Persson likely turned this feature off, as it would take up place in the JAR. We have the code, but we don&#39;t know what any of it does. It looks like we&#39;ll have to figure out what every single statement does by staring at it and poking it repeatedly: let&#39;s do some reverse engineering!&#xA;&#xA;Reverse engineering&#xA;&#xA;The first step is to get it running. The code still uses the Java Applet framework, so I ported everything to use the newer (but still ancient) Java Swing framework. This allows the game to open a window and display (render) pixels inside it. Great! Let&#39;s start reversing the code (on May 30, 2020). I will only go over the major parts, as it took a long while.&#xA;&#xA;The most obvious things are as follows:&#xA;the 214128 BufferedImage is clearly the screen that&#39;s drawn on the window. There&#39;s a big loop that updates every byte inside it, every frame. The dimensions also match the resolution of the pixelated game view.&#xA;the 646464 array of integers is the world. It gets generated at game start, and a single value is modified when you place/destroy a block.&#xA;the game uses fixed-update physics. This means that, instead of multiplying the player&#39;s movement by the length of each frame, it assumes a constant length for each physics step, and does enough steps to catch up with the time elapsed that frame. The benefit of this is that physics calculations are much simpler, since they don&#39;t have to adapt to different step sizes.&#xA;&#xA;I documented the player&#39;s position, velocity, look direction, keyboard and mouse input. My pal @JuPaHe64 (Twitter) documented how the game checks whether your movement is valid and corrects it to not let you clip inside blocks. 
Thanks to this, we fixed a bug which gets the player stuck in the original game if they jump into a wall.&#xA;&#xA;Over the next week, JuPaHe64 and I documented the rest of the game. There&#39;s two very unusual systems operating together to keep the size of the game down.&#xA;&#xA;The texture atlas&#xA;A single 16x16px texture of the side of a block in the original game takes up ~350 bytes. Minecraft4k has 6 distinct blocks, with three unique sides. That&#39;s 6  3  350 = 6300 bytes. Even compressed by the JAR format (it&#39;s just a renamed ZIP file), this would take up a huge amount of our allotted space. So how does Persson do it?&#xA;&#xA;In stead of storing a bitmap of the textures, Minecraft4k opts to generate them at runtime from algorithms.&#xA;Woah.&#xA;&#xA;I&#39;d never seen this before, but it&#39;s a great way to save space. Here&#39;s the texture atlas generated with the default Java Random seed:&#xA;&#xA;An atlas of 6 textures from Minecraft4k, which look like weird imitations of Minecraft Classic textures&#xA;&#xA;One unexpected boon of this is that it becomes really easy to up the resolution of the textures. As they&#39;re algorithmically generated, the same patterns will hold in higher detail, which led to this cursed image:&#xA;&#xA;Way too high-resolution textures in Minecraft&#xA;&#xA;JuPaHe64 created a really nice texture pack for it, showing that this is actually a viable method to generate some kinds of textures for games:&#xA;&#xA;High-resolution stylized texture pack&#xA;&#xA;I am especially fond of the tree bark and stone textures.&#xA;&#xA;Ray&#34;tracing&#34;&#xA;&#xA;No, really. Well, sort of. Minecraft is a very complex game to render, despite its relatively simple graphics. The real game has to do tons of calculations to avoid rendering block faces that are hidden, turn it all into triangles, and do math to distort them from 3D space into the shapes we see on our 2D screen. 
This is an oversimplification of the rendering technique called rasterizing. This complexity would be too much for our size limitations, so instead Minecraft4k employs a specific variation of raytracing: voxel raymarching.&#xA;&#xA;  NOTE: Voxel is just a word for a 3D pixel, which the Minecraft world is made out of.&#xA;&#xA;This is what happens for each pixel that needs to be drawn:&#xA;&#xA;Calculate the direction of the ray, based on where the player is looking and the pixel&#39;s coordinates&#xA;Store the initial position of the ray&#xA;Loop until we hit a block:&#xA;    Step (&#34;march&#34;) forward by one block&#xA;    Check if the ray has hit a solid block&#xA;Color the pixel with that block&#39;s texture&#xA;&#xA;As you can probably guess, this is a pretty simple algorithm to implement, and therefore saves a lot of precious bytes. Additionally, we can use the result of raymarching the pixel at the center of the screen to tell what block the player is looking at. This saves writing a separate function to get the block, and saves a considerable amount of space.&#xA;&#xA;Raytracing is known nowadays for enabling more complex effects, such as lighting and reflections. With minimal modifications to the code, JuPaHe64 was able to add pixel-perfect shadows, and I added ambient light illumination. Paired with a simple world generator, it can make for some interesting shots, despite the poor performance:&#xA;&#xA;A sunset in Minecraft4k with an orange skylight tint&#xA;&#xA;A tree with pixel-perfect shadows&#xA;&#xA;However, a problem arises: even without the fancy world generation and raytraced effects, the Java game now takes up 17757 bytes. That&#39;s over 6 QR codes. The code required to make this work on contemporary systems with Java is simply too big. 
It&#39;s time for a change of approach.&#xA;&#xA;June 2020: Porting to C++&#xA;Let&#39;s kill two birds with one stone, and rewrite the game to:&#xA;Use C++, which is much more familiar to me, so I can make faster progress&#xA;Use the GPU to run the game in real time, as raytracing on the CPU can be very slow&#xA;&#xA;After a few grueling days, the new port was finally functional, and ran beautifully thanks to GPU acceleration. It used an OpenGL compute shader, which is a type of program that can run on the GPU, once per pixel, all at once. This means that, unlike on the CPU, where each pixel has to finish rendering before the next one can be rendered, on the GPU all of it happens at once.&#xA;&#xA;One of my favorite changes happened when my friend @HANSHOTFIRST joined in, and we wrote the commit title &#34;bad the shader&#34;. Here, &#34;bad&#34; is used as a verb, since we pretty much rewrote the entire thing and it, uh, didn&#39;t work. The idea was to reduce the space it took up, and improve performance, by processing all three XYZ axes together. The original game does ray steps per-axis, meaning that first it does the X axis, then the Y, and finally the Z step. This seemed unnecessary to us at the time, but we clearly missed something, as you can see here:&#xA;&#xA;A very buggily rendered Minecraft world&#xA;&#xA;After a small hiatus, and porting the game to Linux on March 2021, I played a bit more with the graphical effects possible with raytracing:&#xA;&#xA;A foggy forest scene with soft sunlight and shadows&#xA;&#xA;I then introduced the first big size improvement: executable packing. gzexe is a tool which uses gzip compression (the same that was used to deliver this page to you!) to reduce the size of an executable. I also implemented usage of a Shader Minifier, whose job it is to automatically reduce the size of the shader code by removing comments, shortening variable names, and getting rid of unneeded newlines and spaces. 
The reason why this is so important to do with the shader is that, unlike C++ code which is compiled into the binary, OpenGL shaders must be stored as source code, and compiled by the GPU drivers at runtime. Therefore, any single character we can save in the shader code should translate directly to a byte saved toward our goal. So, how small did this get it? Well, an impressively small, QR-code fitting... 11314 bytes. Well, peck. That&#39;s four QR codes. We need to divide the size by four. How is that even possible?&#xA;&#xA;C that? It&#39;s my sanity evaporating!&#xA;&#xA;Yeah. I, uh, rewrote it in C. After a long break. In June of 2022, the game is born anew, this time more broken than ever. Once I got all the basic features in by August, I was left with a very functional game, which does everything I think defines Minecraft, in 4598 bytes. Woah. That&#39;s 1645 bytes over our limit, for a total size of just over 1.5 QR codes. Suddenly this looks feasible. By this time, the build process has picked up some dirty tricks. We&#39;re far from done, but there are already a few things that make me mildly uncomfortable. Let&#39;s go through the most egregious ones:&#xA;&#xA;-nostartfiles&#xA;The C compiler we&#39;ll be using is called GCC. There&#39;s a few obvious flags we can pass to it to reduce the size of the output executable (the binary). We can remove the debug information with -s, and optimize the code for size rather than performance with -Os. However, there&#39;s one different argument. We all know the ubiquitous int main function, yes? The entry point to every C and C++ program? The first code that runs? I&#39;m here to reveal to the world that we&#39;ve been lied to: the real entry point is void start, but Big Compiler doesn&#39;t want you to know. This is part of their great ploy to sell more argc and argv. In all seriousness, C programs actually start at, well, start. 
This contains code to set up the stack, global variables, the values of argc and argv, and various parts of the C runtime. Because we don&#39;t need most of this, we can elect to just skip main and use _start instead: &#xA;&#xA;void _start() {&#xA;    // set up the stack&#xA;    asm volatile(&#34;sub $8, %rsp\n&#34;);&#xA;&#xA;    // Minecraft goes here&#xA;&#xA;    // exit&#xA;    asm volatile(&#34;.intel_syntax noprefix&#34;);&#xA;    asm volatile(&#34;push 231&#34;); //exit_group&#xA;    asm volatile(&#34;pop rax&#34;);&#xA;    asm volatile(&#34;xor edi, edi&#34;);&#xA;    asm volatile(&#34;syscall&#34;);&#xA;    asm volatile(&#34;.att_syntax prefix&#34;);&#xA;    __builtin_unreachable();&#xA;}&#xA;&#xA;vondehi&#xA;Like gzexe, vondehi is a tool for shrinking a binary through compression: it&#39;s a bit of very well-optimized assembly that can be prepended to any xzcat-compatible compressed executable, decompressing and running it at launch, which makes the compressed file runnable on Linux. At this point, all semblance of cross-platform support is completely gone.&#xA;&#xA;strip&#xA;Another GNU tool, strip allows you to remove specific sections from a Linux executable (in the Executable and Linkable Format, ELF). It turns out that, even with -s passed to GCC, a lot of nonessential information is still kept in the form of ELF sections. We can use strip -R to get rid of them. I simply tried removing each one, one by one, until the game would no longer run.&#xA;&#xA;With all this, we&#39;re at 4598 bytes. Which is pretty great, but we have a whole journey ahead to get it below 2953. There&#39;s a few obvious shader code optimizations that can be made, such as shortening the names of uniforms (shader variables set from the C++ side, which the Shader Minifier can&#39;t rename automatically), and putting everything in the shader&#39;s main function rather than using function calls. This adds up to a total of 42 bytes. Yikes. Where are we going to get the 1k+ savings we need? 
That&#39;s a great question, and it&#39;s going to take a year of hiatus to answer. I left Minecraft4k at 3786 bytes in September 2022 and focused on my first year at university.&#xA;&#xA;The final run (sanity--)&#xA;It&#39;s September 2, 2023. University starts in less than two weeks. I have recently rediscovered MattKC&#39;s excellent snake game in a QR code. I&#39;m reminded of Minecraft4k, and how close I was to the finish line. It&#39;s time for one last sprint.&#xA;&#xA;Embracing the dark side&#xA;One big chunk of bytes lies in my calls to C standard library functions. sin, cos, fmodf, and friends. I had tried to get around this by implementing some of them myself, wherever my version was smaller than just calling the libc function. &#xA;&#xA;// TODO tune this, or use inline x86 ASM&#xA;#define TRIG_PRECISION 20&#xA;static float my_sin(float x)&#xA;{&#xA;    float t = x;&#xA;    float sine = x;&#xA;    for (int a=1; a &lt; TRIG_PRECISION; ++a)&#xA;    {&#xA;        float mult = -x*x/((2*a+1)*(2*a));&#xA;        t *= mult;&#xA;        sine += t;&#xA;    }&#xA;    return sine;&#xA;}&#xA;I&#39;ll let you guess how the TODO comment was applied.&#xA;&#xA;float my_sin(float x) {&#xA;    float sine;&#xA;    asm (&#xA;        &#34;fsin;&#34;&#xA;        &#34;fstps %0;&#34;&#xA;        : &#34;=m&#34; (sine)&#xA;        : &#34;t&#34; (x)&#xA;    );&#xA;    return sine;&#xA;}&#xA;&#xA;Directly using the x86 instruction fsin, I was able to save 80 whole bytes from the binary.&#xA;&#xA;API Abuse&#xA;OpenGL defines a standard language to talk to the GPU in, so you can get it to do your bidding. This is great, because it will work on any machine, no matter the platform or hardware... supposedly. I&#39;ve already encountered weird crashes running Minecraft4k&#39;s C++ edition on Intel GPUs, because their OpenGL implementation didn&#39;t like my way of storing the world data. Every OpenGL driver has its quirks and bugs, which make programming in OpenGL so much more fun. 
It adds the surprise factor that is very welcome in an otherwise consistent field. Your raytracer might work on one computer, but you&#39;ll never know whether it works on all computers, because of the fun factor that is OpenGL driver bugs.&#xA;&#xA;Conversely, we can take advantage of some of these bugs to get away with removing a lot of otherwise strictly necessary code. That&#39;ll be 26 bytes, coming right up!&#xA;&#xA;dlsym&#xA;Rather than asking Linux to make the OpenGL functions available to us, we can manually load and fetch them using the pair of functions dlopen and dlsym. This requires storing the plain text name of every function we need in the binary, which does take up a lot of bytes, but it ends up being just slightly shorter by 21 bytes. Whew.&#xA;&#xA;This is not okay&#xA;Remember that we&#39;re compressing the binary and attaching a small decompressor to it, so any improvement to the &#34;compressibility&#34; of the binary directly translates to saved bytes for us. So, I opened the binary in a hex editor, and started zeroing out parts of it. Surprisingly, I found that a lot of seemingly important parts, defined in the ELF specification, are simply not necessary. The game still runs after being so heavily mutilated, although I cannot say the same about Linux ELF utilities.  
Check out this very disappointed readelf output, where almost everything is a zero:&#xA;&#xA;apm@apg ~/D/m4k ((37570fc8))   readelf -a Minecraft4k_prepacked.elf&#xA;ELF Header:&#xA;  Magic:   7f 45 4c 46 00 00 00 00 00 00 00 00 00 00 00 00 &#xA;  Class:                             none&#xA;  Data:                              none&#xA;  Version:                           0&#xA;  OS/ABI:                            UNIX - System V&#xA;  ABI Version:                       0&#xA;  Type:                              EXEC (Executable file)&#xA;  Machine:                           Advanced Micro Devices X86-64&#xA;  Version:                           0x0&#xA;  Entry point address:               0x102ca&#xA;  Start of program headers:          0 (bytes into file)&#xA;  Start of section headers:          64 (bytes into file)&#xA;  Flags:                             0x0&#xA;  Size of this header:               0 (bytes)&#xA;  Size of program headers:           0 (bytes)&#xA;  Number of program headers:         0&#xA;  Size of section headers:           0 (bytes)&#xA;  Number of section headers:         0&#xA;  Section header string table index: 0&#xA;readelf: Warning: possibly corrupt ELF file header - it has a non-zero section header offset, but no section headers&#xA;&#xA;There are no section groups in this file.&#xA;&#xA;There are no program headers in this file.&#xA;&#xA;There is no dynamic section in this file.&#xA;I also truncate the last 50 bytes of the uncompressed file, and the last 8 bytes of the compressed archive. This gives us a fun new error when running the program: /usr/bin/xzcat: (stdin): Unexpected end of input, though the game still plays fine! 
That&#39;s 28 more bytes for me!&#xA;&#xA;Transcending sanity&#xA;After rewriting the math in the shader code more than once, drawing many pages of diagrams to figure out how the equations can be simplified, and making a few compromises (goodbye, crosshair and window resizing), Minecraft4k took up a measly 3006 bytes. Where are the remaining 53 expendable bytes?&#xA;&#xA;Excluding the dlsym strings, the biggest data Minecraft4k stores is a couple floating point constants. A float takes up four uncompressed bytes, but Inigo Quilez points out that we often don&#39;t need the last two bytes, and can therefore make floats compressible down to just 2 bytes. That&#39;s a 2x reduction!&#xA;&#xA;Thanks to @b0rk@jvns.ca sharing the very useful float.exposed website, which has a great interface to poke at the floating point binary format, I was able to check that clearing the last two bytes of the constants did not affect their values too much. Sure, I lost some minor precision, but will the player notice if gravity is  0.00299072265625 instead of 0.003? I don&#39;t think we can fit any analytics library to tell us, so it must be good enough. Combining this trick with some tuned compression flags, we have Minecraft4k in 2981 bytes. Just 28 more...&#xA;&#xA;At this point, I was out of ideas. I asked everyone I knew for advice, and we tried a few great ideas. Porting large functions to assembly and hand-optimizing. Getting rid of the stdlib by implementing dlopen and dlsym myself. Linking against the minimal musl libc implementation instead of the GNU C library. None of these ended up working out. So, I asked a very important question:&#xA;&#xA;How bad is this? (image of Minecraft4k with simplified grass texture) Compared to this: (image of Minecraft4k with regular grass texture)&#xA;&#xA;Notice anything that looks wrong? Hopefully not.&#xA;&#xA;I removed the shadow from under the grass tufts. I&#39;m sorry. It was the only way. 
Thankfully nobody I asked noticed it, so it&#39;s fiiiiiine.&#xA;&#xA;And with that, we&#39;re done! Minecraft4k now fits snugly into 2952 bytes, a single byte under the maximum.&#xA;&#xA;---&#xD;&#xA;&#xD;&#xA;Thanks for reading! Feel free to contact me if you have any suggestions or comments.&#xD;&#xA;Find me on Mastodon and Matrix.&#xD;&#xA;&#xD;&#xA;You can follow the blog through:&#xD;&#xA;ActivityPub by inputting @mat@blog.allpurposem.at&#xD;&#xA;RSS/Atom: Copy this link into your reader: https://blog.allpurposem.at&#xD;&#xA;&#xD;&#xA;My website: https://allpurposem.at]]&gt;</description>
      <content:encoded><![CDATA[<p>Answer: Yes! Here it is:</p>

<p><img src="https://allpurposem.at/qr.png" alt="A QR code which, when scanned, outputs a Minecraft executable"></p>

<p>The game launches, and you can move around the 64x64x64 world with WASD. Space is used to jump. Look around with the mouse. You can left click to break a block, and right click to place dirt.</p>

<p>You can scan it with the following command on Linux:</p>

<pre><code class="language-console">zbarcam -1 --raw -Sbinary &gt; /tmp/m4k &amp;&amp; chmod +x /tmp/m4k  &amp;&amp; /tmp/m4k
</code></pre>
<ul><li><code>-1</code>: exit after scanning the code</li>
<li><code>--raw</code>: don&#39;t process it as text</li>
<li><code>-Sbinary</code>: use the binary configuration</li></ul>

<p><img src="https://allpurposem.at/blog/m4k.png" alt="A screenshot of Minecraft4k, a pixelated, randomly-filled world with a low render distance. The whole game fits in a QR code.">
<img src="https://allpurposem.at/blog/m4k-hi.png" alt="A screenshot of Minecraft4k, showing the word &#34;Hi!&#34; built out of dirt blocks.">

The project is available on GitHub at <a href="https://github.com/TheSunCat/Minecraft4k">TheSunCat/Minecraft4k</a></p>

<h2 id="how">How???</h2>

<p>Short answer: Pain, suffering, evil dark magic, compression, a couple years, some more pain, and a bit of luck.</p>

<p>Long answer: Well, it&#39;s a long story. If you&#39;re ready to learn about various creative game programming techniques and cursed incantations (don&#39;t worry, I will explain each concept as it becomes relevant), strap in!</p>

<blockquote><p><strong>NOTE:</strong> This is my first ever blog post. I&#39;m trying it out, as I really enjoy writing! I tried my best to keep it accessible and entertaining, but I am very much open to constructive feedback on how it and future posts can be made better. Every time I describe a major change or evolution, there will be a link to the file or commit in question.</p></blockquote>

<h3 id="the-goal">The goal</h3>

<p>So, how big is a QR code? Truth is, they come in many sizes. The one you&#39;re probably most familiar with is “version 1”, with 21x21 pixels. This can store 25 characters of text, but only 17 8-bit bytes. The difference is because text can be more efficiently encoded than bytes can, as there are fewer possible values for a QR-compatible text character than the 256 values a byte can have.
<img src="https://allpurposem.at/blog/m4k-version1-qr.png" alt="A small standard QR code encoding the string &#34;Minecraft4k&#34;"></p>

<p>The biggest existing QR code, which you can see at the top of the document, is “version 40” and fits a whopping 2953 bytes. Let&#39;s see how many max-size QR codes it would take to fit the following Villager idle sound from Minecraft:</p>

<p><audio controls="controls">
  <source type="audio/ogg" src="https://allpurposem.at/blog/idle1.ogg"></source>
</audio></p>

<p>That&#39;s, uh, 8605 bytes. It fills up (nearly) three QR codes. We have to fit playable Minecraft into a third of the Villager “hmm” sound.</p>

<h3 id="exposition">Exposition</h3>

<p>In December 2009, Markus Persson, the creator of Minecraft, released a very cut-down version of Minecraft for the Java 4k Game Programming Contest, in which contestants develop games in Java which fit under 4 kilobytes (4096 bytes). An archived version of the game is available <a href="https://archive.org/details/Minecraft4K">on Archive.org</a>. The game renders a pixelated image vaguely resembling Minecraft. All this in... 2581 bytes. Woah. That&#39;s much less than 4k. AND it already fits in a QR code. Remember, our size limit is 2953 bytes.</p>

<p>So the deed is done, right? Not quite! This version depends on the <a href="https://en.wikipedia.org/wiki/Java_applet">Java Applets</a> framework, which was phased out by browsers starting in 2013, and is now completely unusable. Also, I think we can do better. The game suffers from bugs, poor performance, and low resolution. Not only that, but running it required a full web browser, Java installation, and the Java Applets plugin enabled. Can a <em>standalone</em> version of Minecraft really fit in 2953 bytes?</p>

<h2 id="the-java-era">The Java era</h2>

<p>To improve upon software, we need to change the source code. Unfortunately, the code for Minecraft4k was never made available, and likely never will be. All original posts about it now lead to a 404 error, and the original Minecraft4k page where you could play it now redirects to Microsoft&#39;s Minecraft homepage. Thankfully, Java code is not too hard to retrieve from JAR files! Unlike most compiled languages like C and C++, which compile to optimized assembly language, Java programs compile to an intermediary called Java bytecode (which is then interpreted into normal assembly by the Java Virtual Machine when you run the JAR file!). This bytecode still bears a strong resemblance to the original source code, and preserves a lot of information, such as the names of variables and functions. This means that using tools made by very smart people, we can “de-compile” the bytecode stored inside Minecraft4k and get back usable source code!</p>

<p>Let&#39;s pop the JAR file into the wonderful <a href="https://java-decompiler.github.io/">jd-gui</a> decompiler, and...</p>

<pre><code class="language-java">float f13 = 0.0F;
float f14 = 0.0F;
f14 += (this.M[119] - this.M[115]) * 0.02F;
f13 += (this.M[100] - this.M[97]) * 0.02F;
f4 *= 0.5F;
f5 *= 0.99F;
f6 *= 0.5F;
f4 += f9 * f14 + f10 * f13;
f6 += f10 * f14 - f9 * f13;
f5 += 0.003F;
int m;
label208: for (m = 0; m &lt; 3; m++) {
    float f16 = f1 + f4 * ((m + 0) % 3 / 2);
    float f17 = f2 + f5 * ((m + 1) % 3 / 2);
    float f19 = f3 + f6 * ((m + 2) % 3 / 2);
    for (int i12 = 0; i12 &lt; 12; i12++) {
        int i13 = (int)(f16 + (i12 &gt;&gt; 0 &amp; 0x1) * 0.6F - 0.3F) - 64;
        int i14 = (int)(f17 + ((i12 &gt;&gt; 2) - 1) * 0.8F + 0.65F) - 64;
        int i15 = (int)(f19 + (i12 &gt;&gt; 1 &amp; 0x1) * 0.6F - 0.3F) - 64;
        if (i13 &lt; 0 || i14 &lt; 0 || i15 &lt; 0 || i13 &gt;= 64 || i14 &gt;= 64 || i15 &gt;= 64 || arrayOfInt2[i13 + i14 * 64 + i15 * 4096] &gt; 0) {
            if (m != 1)
                break label208; 
            if (this.M[32] &gt; 0 &amp;&amp; f5 &gt; 0.0F) {
                this.M[32] = 0;
                f5 = -0.1F;
                break label208;
            } 
            f5 = 0.0F;
            break label208;
        } 
    } 
    f1 = f16;
    f2 = f17;
    f3 = f19;
}
</code></pre>

<p>Uh-oh. It&#39;s true that Java usually keeps variable and function names when compiling. But Persson likely turned this feature off, as it would take up space in the JAR. We have the code, but we don&#39;t know what any of it does. It looks like we&#39;ll have to figure out what every single statement does by staring at it and poking it repeatedly: let&#39;s do some reverse engineering!</p>

<h3 id="reverse-engineering">Reverse engineering</h3>

<p>The first step is to get it running. The code still uses the Java Applet framework, so I ported everything to use the newer (but still ancient) Java Swing framework. This allows the game to open a window and display (render) pixels inside it. Great! Let&#39;s start reversing the code (<a href="https://github.com/TheSunCat/Minecraft4k-Reversed/commit/89c6b775f7e12686267f8bfc29469bc50e73c2cb">on May 30, 2020</a>). I will only go over the major parts, as it took a long while.</p>

<p>The most obvious things are as follows:</p>
<ul><li>the <code>214*128</code> <code>BufferedImage</code> is clearly the <code>screen</code> that&#39;s drawn on the window. There&#39;s a big loop that updates every byte inside it, every frame. The dimensions also match the resolution of the pixelated game view.</li>
<li>the <code>64*64*64</code> array of integers is the <code>world</code>. It gets generated at game start, and a single value is modified when you place/destroy a block.</li>
<li>the game uses fixed-update physics. This means that, instead of multiplying the player&#39;s movement by the length of each frame, it assumes a constant length for each physics step, and does enough steps to catch up with the time elapsed that frame. The benefit of this is that physics calculations are much simpler, since they don&#39;t have to adapt to different step sizes.</li></ul>

<p>I documented the player&#39;s position, velocity, look direction, keyboard and mouse input. My pal @JuPaHe64 (<a href="https://twitter.com/JuPaHe64">Twitter</a>) documented how the game checks whether your movement is valid and corrects it to not let you clip inside blocks. Thanks to this, we fixed a bug which gets the player stuck in the original game if they jump into a wall.</p>

<p>Over the next week, JuPaHe64 and I documented the rest of the game. There&#39;s two very unusual systems operating together to keep the size of the game down.</p>

<h3 id="the-texture-atlas">The texture atlas</h3>

<p>A single 16x16px texture of the side of a block in the original game takes up ~350 bytes. Minecraft4k has 6 distinct blocks, with three unique sides. That&#39;s <code>6 * 3 * 350 = 6300</code> bytes. Even compressed by the JAR format (it&#39;s just a renamed ZIP file), this would take up a huge amount of our allotted space. So how does Persson do it?</p>

<p>Instead of storing a bitmap of the textures, Minecraft4k opts to generate them at runtime from algorithms.
<strong>Woah.</strong></p>

<p>I&#39;d never seen this before, but it&#39;s a great way to save space. Here&#39;s the texture atlas generated with the default Java Random seed:</p>

<p><img src="https://allpurposem.at/blog/textures.png" alt="An atlas of 6 textures from Minecraft4k, which look like weird imitations of Minecraft Classic textures"></p>

<p>One unexpected boon of this is that it becomes really easy to up the resolution of the textures. As they&#39;re algorithmically generated, the same patterns will hold in higher detail, which led to this cursed image:</p>

<p><img src="https://allpurposem.at/blog/textures-cursed.png" alt="Way too high-resolution textures in Minecraft"></p>

<p>JuPaHe64 created a really nice texture pack for it, showing that this is actually a viable method to generate some kinds of textures for games:</p>

<p><img src="https://allpurposem.at/blog/hd-textures.png" alt="High-resolution stylized texture pack"></p>

<p>I am especially fond of the tree bark and stone textures.</p>

<h3 id="ray-tracing">Ray“tracing”</h3>

<p>No, really. Well, sort of. Minecraft is a very complex game to render, despite its relatively simple graphics. The real game has to do tons of calculations to avoid rendering block faces that are hidden, turn it all into triangles, and do math to distort them from 3D space into the shapes we see on our 2D screen. This is an oversimplification of the rendering technique called rasterizing. This complexity would be too much for our size limitations, so instead Minecraft4k employs a specific variation of raytracing: voxel raymarching.</p>

<blockquote><p><strong>NOTE</strong>: Voxel is just a word for a 3D pixel, which the Minecraft world is made out of.</p></blockquote>

<p>This is what happens for each pixel that needs to be drawn:</p>
<ol><li>Calculate the direction of the ray, based on where the player is looking and the pixel&#39;s coordinates</li>
<li>Store the initial position of the ray</li>
<li>Loop until we hit a block:
<ol><li>Step (“march”) forward by one block</li>
<li>Check if the ray has hit a solid block</li></ol></li>
<li>Color the pixel with that block&#39;s texture</li></ol>

<p>As you can probably guess, this is a pretty simple algorithm to implement, and therefore saves a lot of precious bytes. Additionally, we can use the result of raymarching the pixel at the center of the screen to tell what block the player is looking at. This saves writing a separate function to get the block, and saves a considerable amount of space.</p>

<p>Raytracing is known nowadays for enabling more complex effects, such as lighting and reflections. With minimal modifications to the code, JuPaHe64 was able to add pixel-perfect shadows, and I added ambient light illumination. Paired with a simple world generator, it can make for some interesting shots, despite the poor performance:</p>

<p><img src="https://allpurposem.at/blog/rtx-effects.png" alt="A sunset in Minecraft4k with an orange skylight tint"></p>

<p><img src="https://allpurposem.at/blog/rtx-effects2.png" alt="A tree with pixel-perfect shadows"></p>

<p>However, a problem arises: even without the fancy world generation and raytraced effects, the Java game <a href="https://github.com/TheSunCat/Minecraft4k-Reversed/releases/tag/1.0">now takes up 17757 bytes</a>. That&#39;s over 6 QR codes. The code required to make this work on contemporary systems with Java is simply too big. It&#39;s time for a change of approach.</p>

<h2 id="june-2020-https-github-com-thesuncat-minecraft4k-cpp-commit-96f8fd41a44d56494a4844dabf22a1162fdc076e-porting-to-c"><a href="https://github.com/TheSunCat/Minecraft4k-CPP/commit/96f8fd41a44d56494a4844dabf22a1162fdc076e">June 2020</a>: Porting to C++</h2>

<p>Let&#39;s kill two birds with one stone, and rewrite the game to:
1. Use C++, which is much more familiar to me, so I can make faster progress
2. Use the GPU to run the game in real time, as raytracing on the CPU can be very slow</p>

<p>After a few grueling days, the new port was finally functional, and ran beautifully thanks to GPU acceleration. It used an OpenGL compute shader, which is a type of program that can run on the GPU, once per pixel, all at once. This means that, unlike on the CPU, where each pixel has to finish rendering before the next one can be rendered, on the GPU all of it happens at once.</p>

<p>One of my favorite changes happened when my friend @HANSHOTFIRST joined in, and we wrote the commit title <a href="https://github.com/TheSunCat/Minecraft4k-CPP/commit/ae988b3d16f06cfa30017dd0f6279ef1349b35bf">“bad the shader”</a>. Here, “bad” is used as a verb, since we pretty much rewrote the entire thing and it, uh, didn&#39;t work. The idea was to reduce the space it took up, and improve performance, by processing all three XYZ axes together. The original game does ray steps per-axis, meaning that first it does the X axis, then the Y, and finally the Z step. This seemed unnecessary to us at the time, but we clearly missed something, as you can see here:</p>

<p><img src="https://allpurposem.at/blog/bad-the-shader.png" alt="A very buggily rendered Minecraft world"></p>

<p>After a small hiatus, and porting the game to Linux in <a href="https://github.com/TheSunCat/Minecraft4k-CPP/commit/3fc623d490aa980668429a0370763b1f00147ede">March 2021</a>, I played a bit more with the graphical effects possible with raytracing:</p>

<p><img src="https://allpurposem.at/blog/rtx-sun.png" alt="A foggy forest scene with soft sunlight and shadows"></p>

<p>I then introduced the first big size improvement: executable packing. <a href="https://man.archlinux.org/man/gzexe.1.en">gzexe</a> is a tool which uses gzip compression (the same that was used to deliver this page to you!) to reduce the size of an executable. I also implemented usage of a <a href="https://github.com/laurentlb/Shader_Minifier">Shader Minifier</a>, whose job it is to automatically reduce the size of the shader code by removing comments, shortening variable names, and getting rid of unneeded newlines and spaces. The reason why this is so important to do with the shader is that, unlike C++ code which is compiled into the binary, OpenGL shaders must be stored as source code, and compiled by the GPU drivers at runtime. Therefore, any single character we can save in the shader code should translate directly to a byte saved toward our goal. So, how small did this get it? Well, an impressively small, QR-code fitting... 11314 bytes. Well, peck. That&#39;s four QR codes. We need to divide the size by four. How is that even possible?</p>

<h2 id="c-that-it-s-my-sanity-evaporating">C that? It&#39;s my sanity evaporating!</h2>

<p>Yeah. I, uh, rewrote it in C. After a long break. In <a href="https://github.com/TheSunCat/Minecraft4k/commit/9808a68698f229537e28cb48aa516be7ca0f77fb">June of 2022</a>, the game is born anew, this time more broken than ever. Once I got all the basic features in by <a href="https://github.com/TheSunCat/Minecraft4k/commit/e720e3d86d5f84cac2c1bb725510b86028f70698">August</a>, I was left with a very functional game, which does everything I think defines Minecraft, in 4598 bytes. Woah. That&#39;s 1645 bytes over our limit, for a total size of just over 1.5 QR codes. Suddenly this looks feasible. By this time, <a href="https://github.com/TheSunCat/Minecraft4k/blob/e720e3d86d5f84cac2c1bb725510b86028f70698/Makefile">the build process</a> has picked up some dirty tricks. We&#39;re far from done, but there are already a few things that make me mildly uncomfortable. Let&#39;s go through the most egregious ones:</p>

<h3 id="nostartfiles"><code>-nostartfiles</code></h3>

<p>The C compiler we&#39;ll be using is called GCC. There&#39;s a few obvious flags we can pass to it to reduce the size of the output executable (the binary). We can remove the debug information with <code>-s</code>, and optimize the code for size rather than performance with <code>-Os</code>. However, there&#39;s one less obvious flag. We all know the ubiquitous <code>int main</code> function, yes? The entry point to every C and C++ program? The first code that runs? I&#39;m here to reveal to the world that we&#39;ve been lied to: the real entry point is <code>void _start</code>, but Big Compiler doesn&#39;t want you to know. This is part of their great ploy to sell more <code>argc</code> and <code>argv</code>. In all seriousness, C programs actually start at, well, <code>_start</code>. This contains code to set up the stack, global variables, the values of <code>argc</code> and <code>argv</code>, and various parts of the C runtime. Because we don&#39;t need most of this, we can elect to just skip <code>main</code> and use <code>_start</code> instead:</p>

<pre><code class="language-c">void _start() {
    // set up the stack
    asm volatile(&#34;sub $8, %rsp\n&#34;);

    // Minecraft goes here

    // exit
    asm volatile(&#34;.intel_syntax noprefix&#34;);
    asm volatile(&#34;push 231&#34;); //exit_group
    asm volatile(&#34;pop rax&#34;);
    asm volatile(&#34;xor edi, edi&#34;);
    asm volatile(&#34;syscall&#34;);
    asm volatile(&#34;.att_syntax prefix&#34;);
    __builtin_unreachable();
}
</code></pre>

<h3 id="vondehi"><code>vondehi</code></h3>

<p>Like <code>gzexe</code>, <a href="https://gitlab.com/PoroCYon/vondehi"><code>vondehi</code></a> is a tool for shrinking a binary through compression: it&#39;s a bit of very well-optimized assembly that can be prepended to any <code>xzcat</code>-compatible compressed executable, decompressing and running it at launch, which makes the compressed file runnable on Linux. At this point, all semblance of cross-platform support is completely gone.</p>

<h3 id="strip"><code>strip</code></h3>

<p>Another GNU tool, <a href="https://man.archlinux.org/man/strip.1.en"><code>strip</code></a> allows you to remove specific sections from a Linux executable (in the Executable and Linkable Format, ELF). It turns out that, even with <code>-s</code> passed to GCC, a lot of nonessential information is still kept in the form of ELF sections. We can use <code>strip -R</code> to get rid of them. I simply tried removing each one, one by one, until the game would no longer run.</p>

<p>With all this, we&#39;re at 4598 bytes. Which is pretty great, but we have a whole journey ahead to get it below 2953. There&#39;s a few obvious shader code optimizations that can be made, such as shortening the names of uniforms (shader variables set from the C++ side, which the Shader Minifier can&#39;t rename automatically), and putting everything in the shader&#39;s <code>main</code> function rather than using function calls. This adds up to a total of <a href="https://github.com/TheSunCat/Minecraft4k/compare/abcb0e7...759da0b49607711f943ce0d5a7e0eab73c0f5b31">42 bytes</a>. Yikes. Where are we going to get the 1k+ savings we need? That&#39;s a great question, and it&#39;s going to take a year of hiatus to answer. I left Minecraft4k at <a href="https://github.com/TheSunCat/Minecraft4k/commit/ca21596984cdb16932e540a4206ea217cefca8e1">3786 bytes in September 2022</a> and focused on my first year at university.</p>

<h2 id="the-final-run-sanity">The final run (<code>sanity--</code>)</h2>

<p>It&#39;s <a href="https://github.com/TheSunCat/Minecraft4k/commit/8cb6fafd53f65538946e05da564a1359c76a040b">September 2, 2023</a>. University starts in less than two weeks. I have recently rediscovered MattKC&#39;s excellent <a href="https://mattkc.com/etc/snakeqr/">snake game in a QR code</a>. I&#39;m reminded of Minecraft4k, and how close I was to the finish line. It&#39;s time for one last sprint.</p>

<h3 id="embracing-the-dark-side">Embracing the dark side</h3>

<p>One big chunk of bytes lies in my calls to C standard library functions. <code>sin</code>, <code>cos</code>, <code>fmodf</code>, and friends. I had tried to get around this by implementing some of them myself, wherever my version was smaller than just calling the libc function.</p>

<pre><code class="language-c">// TODO tune this, or use inline x86 ASM
#define TRIG_PRECISION 20
static float my_sin(float x)
{
    float t = x;
    float sine = x;
    for (int a=1; a &lt; TRIG_PRECISION; ++a)
    {
        float mult = -x*x/((2*a+1)*(2*a));
        t *= mult;
        sine += t;
    }
    return sine;
}
</code></pre>

<p>I&#39;ll let you guess how the TODO comment was applied.</p>

<pre><code class="language-c">float my_sin(float x) {
    float sine;
    asm (
        &#34;fsin;&#34;
        &#34;fstps %0;&#34;
        : &#34;=m&#34; (sine)
        : &#34;t&#34; (x)
    );
    return sine;
}
</code></pre>

<p>Directly using the x86 instruction <code>fsin</code>, I was able to save <a href="https://github.com/TheSunCat/Minecraft4k/commit/31fbbb6a4779acba8be20cdc9c5a5d92f225aff6">80 whole bytes</a> from the binary.</p>

<h3 id="api-abuse">API Abuse</h3>

<p>OpenGL defines a standard language to talk to the GPU in, so you can get it to do your bidding. This is great, because it will work on any machine, no matter the platform or hardware... supposedly. I&#39;ve already encountered weird crashes running Minecraft4k&#39;s C++ edition on Intel GPUs, because their OpenGL implementation didn&#39;t like my way of storing the world data. Every OpenGL driver has its quirks and bugs, which make programming in OpenGL so much more fun. It adds the surprise factor that is very welcome in an otherwise consistent field. Your raytracer might work on one computer, but you&#39;ll never know whether it works on all computers, because of the fun factor that is OpenGL driver bugs.</p>

<p>Conversely, we can take advantage of some of these bugs to get away with <a href="https://github.com/TheSunCat/Minecraft4k/compare/028f162...0d02c78">removing a lot of otherwise strictly necessary code</a>. That&#39;ll be 26 bytes, coming right up!</p>

<h3 id="dlsym"><code>dlsym</code></h3>

<p>Rather than asking Linux to make the OpenGL functions available to us, we can manually load and fetch them using the pair of functions <code>dlopen</code> and <code>dlsym</code>. This requires storing the plain text name of every function we need in the binary, which does take up a lot of bytes, but it ends up being just slightly shorter <a href="https://github.com/TheSunCat/Minecraft4k/commit/db225ca0969b84fd7391c4c23e631325dbae1e13">by 21 bytes</a>. Whew.</p>
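<p>For illustration, here&#39;s a minimal sketch of the technique (not the actual loader from the commit above, which fetches the GL functions): <code>dlopen</code> opens a shared library, and <code>dlsym</code> looks a function up by its plain text name. I&#39;m using glibc&#39;s <code>libm.so.6</code> and <code>cos</code> as stand-ins so the sketch runs anywhere:</p>

<pre><code class="language-c">#include &lt;dlfcn.h&gt; // dlopen, dlsym (may need -ldl on older glibc)
#include &lt;stdio.h&gt;

int main(void)
{
    // Open the library, then fetch the symbol by its plain text name.
    // Minecraft4k does the same for its OpenGL entry points.
    void *libm = dlopen(&#34;libm.so.6&#34;, RTLD_LAZY);
    if (!libm)
        return 1;

    double (*my_cos)(double) = (double (*)(double))dlsym(libm, &#34;cos&#34;);
    if (!my_cos)
        return 1;

    printf(&#34;%f\n&#34;, my_cos(0.0)); // prints 1.000000
    return 0;
}
</code></pre>

<p>Every one of those name strings lives verbatim in the binary, which is why this trick only barely comes out ahead.</p>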

<h3 id="this-is-not-okay">This is not okay</h3>

<p>Remember that we&#39;re compressing the binary and attaching a small decompressor to it, so any improvement to the “compressibility” of the binary directly translates to saved bytes for us. So, I opened the binary in a hex editor, and started zeroing out parts of it. Surprisingly, I found that a lot of seemingly important parts, defined in the ELF specification, are simply not necessary. The game still runs after being so heavily mutilated, although I cannot say the same about Linux ELF utilities.  Check out this very disappointed <code>readelf</code> output, where almost everything is a zero:</p>

<pre><code class="language-console">apm@apg ~/D/m4k ((37570fc8)) &gt; readelf -a Minecraft4k_prepacked.elf
ELF Header:
  Magic:   7f 45 4c 46 00 00 00 00 00 00 00 00 00 00 00 00 
  Class:                             none
  Data:                              none
  Version:                           0
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              EXEC (Executable file)
  Machine:                           Advanced Micro Devices X86-64
  Version:                           0x0
  Entry point address:               0x102ca
  Start of program headers:          0 (bytes into file)
  Start of section headers:          64 (bytes into file)
  Flags:                             0x0
  Size of this header:               0 (bytes)
  Size of program headers:           0 (bytes)
  Number of program headers:         0
  Size of section headers:           0 (bytes)
  Number of section headers:         0
  Section header string table index: 0
readelf: Warning: possibly corrupt ELF file header - it has a non-zero section header offset, but no section headers

There are no section groups in this file.

There are no program headers in this file.

There is no dynamic section in this file.
</code></pre>

<p>I also truncate the last 50 bytes of the uncompressed file, and the last 8 bytes of the compressed archive. This gives us a fun new error when running the program: <code>/usr/bin/xzcat: (stdin): Unexpected end of input</code>, though the game still plays fine! That&#39;s <a href="https://github.com/TheSunCat/Minecraft4k/commit/37570fc8145f26f0b2de7d648ab2ac8cf376cb45">28 more bytes</a> for me!</p>
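<p>To give an idea of what the hex editor session amounts to, here&#39;s a hypothetical sketch (my own field choices, not an exact replay of the commit) that zeroes the ELF header fields Linux&#39;s program loader never reads: everything past the <code>\x7fELF</code> magic in <code>e_ident</code>, plus the section header bookkeeping that only tools like <code>readelf</code> and the linker care about:</p>

<pre><code class="language-c">#include &lt;elf.h&gt;    // Elf64_Ehdr, EI_NIDENT (Linux-specific header)
#include &lt;string.h&gt;

static void strip_header(Elf64_Ehdr *eh)
{
    memset(eh-&gt;e_ident + 4, 0, EI_NIDENT - 4); // keep only the magic
    eh-&gt;e_version   = 0; // the loader never checks these...
    eh-&gt;e_flags     = 0;
    eh-&gt;e_ehsize    = 0;
    eh-&gt;e_shoff     = 0; // ...and sections don&#39;t exist at exec time
    eh-&gt;e_shentsize = 0;
    eh-&gt;e_shnum     = 0;
    eh-&gt;e_shstrndx  = 0;
}
</code></pre>

<p>Runs of zeroes like these are exactly what the compressor loves.</p>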

<h2 id="transcending-sanity">Transcending sanity</h2>

<p>After <a href="https://github.com/TheSunCat/Minecraft4k/commit/8992954dcc7eb91f5996772ea3fb21e395c84f2d">rewriting</a> the <a href="https://github.com/TheSunCat/Minecraft4k/commit/8de53509b120c84f675a133998cfd6642934a228">math in the shader code</a> more <a href="https://github.com/TheSunCat/Minecraft4k/commit/bfe68c9ba7bf98ec1299be2382bb7247ea07464a">than once</a>, drawing many pages of diagrams to figure out how the equations can be simplified, and making a few compromises (goodbye, crosshair and window resizing), Minecraft4k took up a measly 3006 bytes. Where are the remaining 53 expendable bytes?</p>

<p>Excluding the <code>dlsym</code> strings, the biggest data Minecraft4k stores is a couple floating point constants. A float takes up four uncompressed bytes, but <a href="https://iquilezles.org/articles/float4k/">Inigo Quilez</a> points out that we often don&#39;t need the last two bytes, and can therefore make floats compressible down to just 2 bytes. That&#39;s a 2x reduction!</p>

<p>Thanks to <a href="https://blog.allpurposem.at/@/b0rk@jvns.ca" class="u-url mention">@<span>b0rk@jvns.ca</span></a> sharing the very useful <a href="https://float.exposed">float.exposed</a> website, which has a great interface to poke at the floating point binary format, I was able to check that clearing the last two bytes of the constants did not affect their values too much. Sure, I lost some minor precision, but will the player notice if gravity is <code>0.00299072265625</code> instead of <code>0.003</code>? I don&#39;t think we can fit any analytics library to tell us, so it must be good enough. Combining this trick with some tuned compression flags, we have Minecraft4k in 2981 bytes. Just 28 more...</p>
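<p>The float truncation trick is easy to try yourself. A small sketch (the gravity constant is from this post; the helper function is mine):</p>

<pre><code class="language-c">#include &lt;stdint.h&gt;
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

// Clear the 16 low mantissa bits of a float, so only two of its four
// bytes carry data and the compressor can eat the rest.
static float truncate_float(float f)
{
    uint32_t bits;
    memcpy(&amp;bits, &amp;f, sizeof bits); // type-pun without UB
    bits &amp;= 0xFFFF0000u;
    memcpy(&amp;f, &amp;bits, sizeof f);
    return f;
}

int main(void)
{
    // 0.003f is 0x3B449BA6; masking gives 0x3B440000
    printf(&#34;%.17f\n&#34;, truncate_float(0.003f)); // prints 0.00299072265625000
    return 0;
}
</code></pre>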

<p>At this point, I was out of ideas. I asked everyone I knew for advice, and we tried a few great ideas. Porting large functions to assembly and hand-optimizing. Getting rid of the stdlib by implementing <code>dlopen</code> and <code>dlsym</code> myself. Linking against the minimal <a href="https://www.musl-libc.org/">musl libc</a> implementation instead of the GNU C library. None of these ended up working out. So, I asked a very important question:</p>

<p><img src="https://allpurposem.at/blog/m4k-question.png" alt="How bad is this? (image of Minecraft4k with simplified grass texture) Compared to this: (image of Minecraft4k with regular grass texture)"></p>

<p>Notice anything that looks wrong? Hopefully not.</p>

<p>I removed the shadow from under the grass tufts. I&#39;m sorry. It was the only way. Thankfully nobody I asked noticed it, so it&#39;s fiiiiiine.</p>

<p>And with that, we&#39;re done! Minecraft4k now fits snugly into 2952 bytes, a single byte under the maximum.</p>

<hr>

<p>Thanks for reading! Feel free to contact me if you have any suggestions or comments.
Find me on <a href="https://allpurposem.at/link/mastodon">Mastodon</a> and <a href="https://allpurposem.at/link/matrix">Matrix</a>.</p>

<p>You can follow the blog through:
– ActivityPub by inputting <code><a href="https://blog.allpurposem.at/@/mat@blog.allpurposem.at" class="u-url mention">@<span>mat@blog.allpurposem.at</span></a></code>
– RSS/Atom: Copy this link into your reader: <code>https://blog.allpurposem.at</code></p>

<p>My website: <a href="https://allpurposem.at">https://allpurposem.at</a></p>

]]></content:encoded>
      <guid>https://blog.allpurposem.at/minecraft-qr</guid>
      <pubDate>Mon, 11 Sep 2023 22:44:31 +0000</pubDate>
    </item>
  </channel>
</rss>