Lux's blog

my occasional writeups

Cover art for this post, showing a distorted bright background, with a blooming PostgreSQL logo on the front

Hi! It's been a while. This post has been in my drafts since November of last year. I finally decided to finish it up, since the information presented here was quite hard to track down.

As some of you may be aware, my main server suffered an SSD failure on October 14th, 2023, with no backups available. This SSD had the data for many of the services I host, including my Akkoma instance, the mint.lgbt Matrix and XMPP servers, and even this blog! Thankfully, with the help of the donors now listed on the Coral Castle homepage, I sent it to a local Chilean data recovery service called Nekiori, located in the Los Leones commercial neighborhood of the Providencia district. They were successful at the job, managing to recover around 300 GB of data, not to mention their excellent communication and pricing.

However, no data recovery job is perfect, and in many cases you'll still end up with a few files missing. This is what happened with the salvaged PostgreSQL cluster, and I'll explain how I was able to get a huge chunk of it back within about 2 hours.

Starting up the salvaged cluster

Before beginning, MAKE A COPY OF THE ORIGINAL DATA DIRECTORY. We'll be making lots of destructive changes, so, if anything goes wrong, you'll want a checkpoint you can restart from. I know duplicating clusters is time-consuming and may take up lots of space, but it's better to be safe than sorry (although I'm expecting whoever has to go through this to have already learned this valuable lesson). Additionally, this guide only covers getting past PostgreSQL errors, not regenerating any pages that were permanently lost.

Let's start by installing PostgreSQL on our work environment, making sure to match the version the recovered cluster was originally running on. On Alpine Linux, for instance, we can match a 15.x cluster by running:

# apk add postgresql15

Once installed, we'll skip using the daemons provided by the package, and instead we'll run the server straight from the CLI, in order to have better control of each parameter. I would recommend using a multiplexer like Byobu, in order to view logs easily and be able to gracefully shut down the database when needed. Otherwise, if you're working on a full blown desktop, you can use multiple terminals. Let's break down the following command:

$ postgres -k /tmp -p 5433 -D postgresql/15/data/
  • postgres is the runtime for our PostgreSQL server.
  • -k /tmp sets the Unix socket location to the host's ramdisk. However, you can set this location anywhere you'd like, as long as you use the same one in every other command that interacts with PostgreSQL.
  • -p 5433 sets the port our server will start listening on. Even if you're not connecting over TCP/IP, the port number is part of the socket's file name, so it's still important to set this.
  • -D postgresql/15/data/ is our salvaged cluster. Make sure to point it to the copy we made earlier.

After we've set each parameter, let's run it!

...

All good? No? Don't worry. It's likely to fail on its first launch, complaining that it can't find a directory in the data folder. If that's the case, create the missing directory (the path should be shown in the log) with:

$ mkdir [data]/[dir]

Then, try to execute the server again. Rinse, repeat, and eventually the server should begin running without a crash.
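If you'd rather not play this game of whack-a-mole, a shortcut is to pre-create the subdirectories PostgreSQL expects but which are normally empty, and which therefore tend to be missing from a recovery. This is just a sketch assuming a PostgreSQL 15 layout; if in doubt, only create what your logs actually complain about:

$ cd postgresql/15/data/
$ mkdir -p pg_commit_ts pg_dynshmem pg_notify pg_replslot pg_serial \
    pg_snapshots pg_stat pg_stat_tmp pg_tblspc pg_twophase \
    pg_wal/archive_status pg_logical/snapshots pg_logical/mappings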

Locate a database

Now, since this cluster is in an unstable state, it shouldn't be used for production. Instead, we'll locate the most relevant databases we want to keep, so we can later dump them. Enter the psql shell with the following command:

$ psql -h /tmp -p 5433 postgres postgres 

Again, here's the breakdown:
  • psql is the command for the PostgreSQL shell.
  • -h /tmp is the location of the Unix socket we defined in the last step.
  • -p 5433 is the port we also defined earlier.
  • The first postgres is the database name. Most PostgreSQL clusters use this database as a sort of starting point.
  • The second postgres is the user we're authenticating as, which in most clusters is also set to be a superuser, granting us elevated privileges.

If all went well, you should now be staring at a screen like this:

psql (15.6)
Type "help" for help.

postgres=#

Now, to list our available databases, we need to run the following command:

postgres=# \l

After pressing enter, we should see a table of every database present in the cluster. Write down the names of the databases to preserve, as shown in the “Name” column. If you want to explore the contents of the database, you can execute a series of commands like these:

postgres=# \c shop # Connects to the database "shop"
shop=# \dt # Displays all tables in database
shop=# SELECT * FROM orders; # Displays all rows in the table "orders"

Once we've taken note of all the databases we want to preserve, we'll exit psql by running:

postgres=# \q

Dump away!

In order to ease this process, I've created a script that reads the output of the pg_dump command and creates blank pages in the likely case that some couldn't be found. We can download it from my GitHub Gist using the following command:

$ curl -LO https://gist.githubusercontent.com/otoayana/7623d05b0b60c71160c37771398bfcaf/raw/ada88e9ad936e317ba19787f8886e24e2c96b123/pg_taprec

Once downloaded, we'll make it executable using chmod, and afterwards we'll open it with a text editor. For the sake of simplicity, I'll be using nano as an example.

$ chmod +x pg_taprec
$ nano pg_taprec

Inside this file, we'll need to modify the following variables:
  • DATA: path to the cluster's data directory
  • HOST: the location of the PostgreSQL server's Unix socket, as defined earlier
  • DB: the database we want to recover. Set it to the name of one of the databases we noted down earlier.
  • USER: a superuser within our cluster. By default, it should stay as postgres.
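For the curious, here's a minimal sketch of the idea the script revolves around. This is not the actual pg_taprec; it assumes pg_dump reports unreadable relation files with a "could not open file" error and that the reported path is relative to the data directory, so treat it purely as an illustration of the retry loop:

#!/bin/sh
# Sketch only: retry pg_dump, blanking any relation file it reports as missing.
DATA=postgresql/15/data       # copy of the salvaged cluster
HOST=/tmp                     # Unix socket directory the server was started with
PORT=5433
DB=shop                       # illustrative database name
USER=postgres
OUT="$1"                      # where the dump will be written

while true; do
    if pg_dump -h "$HOST" -p "$PORT" -U "$USER" "$DB" > "$OUT" 2> /tmp/taprec.err; then
        echo "dump finished"
        break
    fi
    # e.g.: pg_dump: error: could not open file "base/16384/2619": No such file or directory
    missing=$(grep -o 'could not open file "[^"]*"' /tmp/taprec.err | head -n 1 | cut -d '"' -f 2)
    if [ -z "$missing" ]; then
        echo "unhandled error, giving up:" >&2
        cat /tmp/taprec.err >&2
        exit 1
    fi
    echo "blanking missing file: $missing"
    : > "$DATA/$missing"      # create an empty placeholder; whatever was in it is gone
done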

Once we've edited the file, let's execute it, making sure to specify the final output as one of the arguments.

$ ./pg_taprec [file]

This process may take a while, as it needs to retry dumping whenever it encounters a missing page. In the meantime, go prepare whatever hot drink you like, and maybe read another article on this blog.

Once it's done, rinse and repeat, changing the DB variable to the next database we want to dump, and changing the first argument in the command to a different path; otherwise we'll overwrite the previous dump.

Shut down, then import

Once we've finished dumping every picked database, let's close the terminal running the server (or terminate the server using Ctrl+C). We can now delete the duplicated cluster, and keep the original somewhere safe, just in case we need it at a later time.
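As an aside, if the server terminal is buried somewhere in your multiplexer, the shutdown can also be done gracefully from any other terminal with pg_ctl, pointed at the same data directory (a sketch; -m fast cleanly aborts any open sessions):

$ pg_ctl -D postgresql/15/data/ stop -m fast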

Let's copy the database dumps into our new server, and install a fresh PostgreSQL server on it, if not yet installed, as specified at the beginning of this post.

Once the new cluster is running, let's create a new database corresponding to a dump, with an owner which can interact with it:

# su - postgres
$ createuser [owner]
$ createdb [database] -O [owner]

Finally, let's import the database dump into it. We can do so using the following command, making sure to replace [dump] with the path where our dump is stored within the server:

$ psql [database] < [dump]
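Putting the last couple of steps together with the hypothetical "shop" database from earlier (the owner name and dump path here are just placeholders), the whole import might look like this:

# su - postgres
$ createuser shop_owner
$ createdb shop -O shop_owner
$ psql shop < /tmp/dumps/shop.sql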

Rinse and repeat for each dumped database.

Conclusion

Congrats! You've managed to salvage what you could. However, it's always better to prepare for an emergency such as this one. Consider the following tips for preventing other catastrophes:

  • By ALL MEANS keep constant, verified backups. If you can't afford downtime, you can replicate a cluster across multiple servers. If you don't have multiple servers, take a pg_basebackup monthly, and then archive WAL segments afterwards (a rough example follows this list). PostgreSQL's official documentation explains how to do so.
  • If you're doing a hardware upgrade, check the hardware you're getting. It's not uncommon to install bad RAM or SSDs that are about to fail, both of which can corrupt your database beyond repair.
  • Keep your backups in multiple locations. If your datacenter/house burns down, at least you'll have the files you kept on the cloud! I'm using Backblaze (not sponsored), and my bills stay safely in the single digits of USD.
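For reference, here's a minimal sketch of that base-backup-plus-WAL-archiving setup. The paths are illustrative, and the archive_command is adapted from the example in the PostgreSQL documentation. In postgresql.conf:

archive_mode = on
archive_command = 'test ! -f /mnt/backups/wal/%f && cp %p /mnt/backups/wal/%f'

Then, for the monthly base backup:

$ pg_basebackup -U postgres -D /mnt/backups/base-$(date +%Y%m) -Ft -z -P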

After the latest September Apple event from this year, many people have been saying something along the lines of the title of this article, so I decided to share my personal opinion regarding this new announcement and tell you why it's not a bad thing that Apple is stagnating the iPhone.

Where's smartphone tech nowadays anyways?

It's been 16 years since the first iPhone was announced and released, but I'd say mass adoption of smartphones worldwide wasn't reached until around 2014. Of course, Americans will point out that the iPhone became rather popular earlier in their country, which, while true, was not a case of "everyone and their dog has a smartphone in their pocket". But after this turning point, phone evolution has slowed down. Smartphones have gotten faster and "prettier", sure, but everyday tasks on smartphones from 10 years ago are not too different from the ones now. The main difference is our software has gotten so overly complex that the older generations can't run it anymore, and now the richer population can pack GPUs capable of ray tracing in their pockets. Will many people use them? Not really. Smartphones are clearly not the same productivity devices desktop computers are, and good luck trying to do advanced 3D work from your pocket without needing an extra monitor and some input devices that aren't a touch screen.

But folding displays!

Folding displays are still experimental technology. They depend on flimsy, fragile panels that don't stand the test of time. Let me give an example with another industry trend that suffers from a similar issue: light trucks. In the US, car companies have started aiming light trucks at the average suburban white family, which, in other words, means they made them look more like SUVs and gave them flashy new designs. They're also horrendous for anyone willing to actually use them for cargo, ergo many professionals opt for older light trucks in order to pursue their work. Let's apply the same example to smartphones. Would you be glad to daily drive a phone that is more likely to be damaged in regular usage and which your workflow depends on, compared to a phone you KNOW can last you more time? I would personally go with the second option.

Conclusion

Now, let me be clear, I am not an Apple fangirl. I despise the MANY unethical practices they've engaged in throughout the years, such as poor factory conditions, child labor, greenwashing, union busting... As a free software user, I also dislike how walled their ecosystem is. But this all seems like a rich kid issue. iPhones are "not exciting" anymore because smartphones have reached their limits. If you actually use your smartphone as a tool, then why would you ever need it to do more than it is already capable of? If you're already happy with your iPhone, keep it until it becomes obsolete and maybe then buy the new one, the slightly older one, or perhaps even a Fairphone if you can! As for the people who decide to get the newest one now, at least they came in at the right time, considering the new USB-C port.

The banner for this blog entry, showing a distorted background with the Threads logo in the middle

The world finally got to see Meta's new copycat beast: Threads. Some see it as the salvation from the apocalyptic climate that the bird app is going through, but the plain truth is that we're dealing with basically similar platforms. This is made more obvious by the fact that some of the most prominent hate groups of the moment, such as Libs of TikTok, Gays Against Groomers and PragerU, have been platformed on this new service. While it's not being driven into the ground, let's be honest, it's not sunshine and rainbows either.

One of the few things Meta has pinky-promised is the intent to federate with ActivityPub servers, which has been received with lots of backlash from users and admins on smaller instances, myself included. The causes of this backlash are lack of trust in Meta due to their past actions, the fact that conversations regarding federation attempts have been kept closed off, and how the Fediverse is a platform mostly used by minorities, leading to potential harassment in the event that Threads federated with ActivityPub servers. This has led to an agreement called the Anti-Meta Fedi Pact, which I have signed amongst others.

But I've just summarized the events from the past few weeks. What I want to focus on today is picturing a world where Meta can be a fair competitor amongst instances and not the monolith they clearly want to build.

A memoriam for some federated platforms

Before attempting to pitch this fix, let's look back at the deaths and near-deaths of other federated platforms, namely email and XMPP.

Email is a tamer example, since it's not an exclusive monopoly and the protocol has not been (completely) extinguished by the big players. However, it still annoys me to a certain degree, since every major organization I've been a part of (that is, my current university and my high school) uses G Suite. The two major players in the game are Google and Microsoft. According to the Web Technology Surveys, they control 18.2% and 13.4% of the market respectively. These may not seem like huge percentages until we compare them to, for example, my current personal email provider, Migadu, which does not even reach 0.1%. Essentially, when an individual or organization looks for a provider, they will always run towards the big players without thinking twice about the risks involved, which include, but are not limited to:

  • limitations when migrating to another provider
  • the risk of having sensitive data leaked in the case the platform gets hijacked (let's remember, no computer is impenetrable, as much as you can try to mitigate attacks)
  • the risk of losing data in case the platform shuts down.

XMPP is a different story however, and is closer to what could've happened if Meta was welcomed with open arms by everyone in the Fediverse. When Google introduced Talk back in 2005, their implementation was just another XMPP (Jabber back then) server on the block. However, since a big player was the one providing the server and they had the capital to market it, users flocked to the flashy new service. Afterwards, Google "expanded" it with proprietary technologies (Orkut, Google+, Gmail, Google Voice...). Finally, they sunsetted the service, moving existing users to a walled garden named Hangouts, leaving XMPP deserted. The protocol lives on to this day as a niche platform, with many of us being forced onto other closed messaging platforms such as WhatsApp, Instagram, Snapchat, Messenger and Discord in order to keep in touch with friends and family. When we talk about "Embrace, Extend, Extinguish", this is one of the most blatant examples. And while Microsoft got sued into the ground when they did this in the 90s by integrating Internet Explorer deeply within Windows in order to eat up Netscape's market share, Google have washed their hands clean of the crime scene.

What is our salvation for ActivityPub then?

Regulation! EU ministers have already been working on standardizing technologies such as USB-C on mobile devices and have forced players such as Apple to allow sideloading of apps in their operating systems. As of now we don't have a standard client API for ActivityPub servers, so a first step would be to stop depending on the Mastodon API on every server and instead work on a "one-size-fits-all" client API. Afterwards, we should convince ministers that platforms federating with ActivityPub must implement this standard, and that clients should present alternative providers as choices the user can pick without major constraints. Let's be fair, however: the current onboarding process in the official Mastodon apps confuses users, so developers could make this process more intuitive.

Conclusion

Our true evils in the tech world are not the existence of these companies, but rather the control that they have. Capitalist preachers love to talk about the freedom of choice our current system gives to users, but are absolutely blind to the monopolies that have been built in the past few years, and instead of limiting their growth to actually allow innovation to thrive, they will yell about how government needs deregulation, which is the exact cause of our monopolistic (tech) world. There are paths out of this, but we need to begin building them soon. As for me, I'm planning to work on a private frontend for the platform. It seems like it's getting a lot of traction, and it might be worth finding a way to look at posts without giving Meta your precious data.

A while back I saw a video posted by the educational tech YouTube channel ColdFusion, called "The Anti-Smartphone Revolution". The video presented a pretty accurate picture of how social media has been linked to mental health issues, such as decreased self-esteem and addiction, plus the political influence these platforms hold. And let me be absolutely clear, mobile devices need to get simpler in order to mitigate these issues. However, dumbphones still present some issues that are deeply tied to how our current mobile infrastructure is planned out. And this is what I'll be talking about.

Smartphone: Hot, sleek, dangerous

While the mobile phone is not a new invention, its full concept didn't materialize until around 2008, after the iPhone 3G introduced the App Store and the market became desperate to launch apps ranging from useless to overcomplicated, not to mention the copycats that also started to appear around the same time, such as the HTC Dream, the Nokia N97 or the BlackBerry Storm. This marked the shift of the spotlight from hardware to software, at least for the most part. While the hardware was refined in later years (bigger screens, smaller bezels, more processing power, bigger batteries), the general form stayed the same. The apps which ended up prevailing were the ones the public got interested in adopting without investing too much effort into tweaking. We ended up with a few services which have abused the trust of their users with the issues I listed in the introduction. In the end, most of the world still uses the same general shape as the iPhone: a black, vertical slab with the front being mostly a touchscreen. However, the phone wasn't always this way. We had a simpler, utilitarian device before this striking change.

Dumbphone: Clunky, small, utilitarian

If you have back problems or you live in a lower-income country, you've most likely seen the dumbphone. These devices played the simple role of allowing you to make phone calls on the go, send short text messages or perhaps play a simple game. We didn't have much computing power in our pockets back in the early 2000s and, ergo, most used their phones for simple social or business interactions. The issue with these devices is their low security. In the case of smartphones we can send messages using end-to-end encrypted messengers or VoIP apps, but, with dumbphones, carriers can listen in on our conversations to this day, which is not ideal for people living in authoritarian regimes or on hijacked networks. Also, currently manufactured dumbphones do not meet the same quality standards as smartphones do, and the ones which do meet them are prohibitively expensive. Take, for instance, the Light Phone II. The international version costs $299 USD excluding shipping and tax. Let me remind you, that's for a phone with limited functionality and specifications.

So where's the middle ground?

There are a few phones which can cover this middle ground. Take, for instance, the Volla Phone. Its customized Volla OS includes a minimalist launcher and a set of apps that surpass the functionality of the dumbphone while decluttering the smartphone. However, we still have to deal with the phone potentially losing updates after just a few years, since Volla OS is a fork of Android (which is also an issue if we declutter existing ROMs using launchers and certain settings as an alternative). Plus, while the launcher is an open-source, buildable app, Volla OS has not yet been distributed outside of Volla's own hardware, which is not affordable for a lot of the population. These also do not provide E2EE communication platforms by default, resorting to SMS apps or Telegram (which, while it has a secret chat function, defaults to server-side encryption).

I personally think the path forward lies with what I see as the community alternative to the Google-Apple mobile OS duopoly: Linux phones, and more specifically those running the mainline kernel. These can be maintained by kernel hackers in the long term and have fewer chances of falling into obsolescence. The issue as of now is that we don't yet have a UI that's minimal yet friendly enough for the average user. Most still replicate the UIs of more popular smartphones, since these are the first usable environments that have been developed for the platform. Further work could be done with the community to bring these ideas to life in the future.

We should also push for decentralized and E2EE chat platforms as the defaults on these kinds of devices. Apple has done this with iMessage, with it becoming the leading chat platform in the US. Matrix and XMPP could come into play, since they're not limited to Linux phones and can be used on their alternatives and even on other devices independently.

Conclusion

While I'm most certain mobile manufacturers and development companies will be reluctant to give up the current mobile status quo, providing viable alternatives is something we need to start doing soon. Let's agree that times have changed: dumbphones are an alternative that most people are still hesitant to go with, and we need to make more ethical smartphones. If we manage to get at least a good fraction of the population on an alternative, we can convince our manufacturers that most of us don't need gimmicks, but rather great tools.

It's been a while, huh? A while back I switched my website to Sourcehut Pages and had to scrap the gemlog I previously had. But now I've decided to return to the web, and what better way to come back than to relaunch my blog using WriteFreely! This software should be able to federate using ActivityPub, so Fediverse users (such as people using Mastodon, Akkoma, Calckey, etc.) should be able to read my posts from there. You can follow me over at @lux@blog.nixgoat.me if you're interested. RSS is also still available by adding https://blog.nixgoat.me/feed/ to your reader.

All entries are still here by the way, and I've also corrected some grammar issues here and there. This doesn't mean I'll abandon my existing gemlog either, as both will receive the same content, so you have choices on where to read my posts.

Anywho, expect more entries here soon! I have some ideas cooked up that I haven't published yet. For now, I'll see you soon.

Hello everyone! It's been a while since I've posted an entry in this gemlog, so I've decided to dust it off and take the chance to talk about my recent migration to Sourcehut and my reasons for doing such a migration, considering I already host my own Git forge (Femgit) and have accounts on multiple others. There's quite a lot of stuff to unwrap here, so bear with me.

Issues with current Git forges

For context, I officially have accounts (at least from what I can remember) on Femgit, GitHub, GitLab, Akkoma's Gitea instance and Codeberg. I've created various projects in these accounts, but I've never really picked a single location to centralize most of my projects. In other words, my body of work tends to be scattered across Git forges, which makes it hard for other people willing to take a look at what I've made to find my projects. The reason why I've tended to scatter them is simply issues arising with one Git forge making me want to try another one. GitHub was closed source, owned by Microsoft, and they started using projects without the owners' permission for their Copilot service, so I decided to switch to GitLab. GitLab was hard to use and is now very limited for anyone willing to use CI unless they pay for Pro, so I switched to my own Git forge. My own Git forge was getting limited for Linux kernel projects, so I switched to GitLab again, then to Codeberg for the CLI utility I made called "Vento". And now Gitea has gone corporate and wants to use DAOs to make decisions over the software (spoiler alert: an idea that will backfire immediately, due to DAOs being antidemocratic structures by design), so I needed a new forge while community members fork Gitea and I switch my Git instance to that... And when a friend of mine switched to Sourcehut, it piqued my interest.

Sourcehut

So, I've known about Sourcehut for a while now; however, I didn't try it at the time due to my initial impression of it being a strictly paid service. While in the future Sourcehut will switch business schemes, the alpha is currently free for project owners to work with. It makes sense from a business standpoint too, since the traffic Sourcehut receives needs to be handled with powerful enough hardware. And for the starting price of $2 a month, what you'll get in the future on Sourcehut is a bargain. It includes:

  • Git hosting
  • Mercurial hosting
  • Mailing lists
  • Issue tracking
  • Wiki hosting
  • Static web hosting
  • Pastebin
  • Build service
  • IRC chat
  • Email hosting (soon)

While on some forges some of these services are already free, I haven't seen a forge include all of them packed in for this cheap. The interface for Sourcehut is also very clean and, even more importantly, totally static. And, on top of this, if you can't afford to pay for it officially, self-hosting is also an option! Projects are grouped in "project hubs", and they can also be totally independent. So you can group repositories, mailing lists and several issue trackers into a single big hub for users to explore. While getting started is a bit harder compared to the bigger forges, which are mainly web-driven, it pays off with a worthy, user-friendly service.
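For what it's worth, moving an existing repository over is mostly standard Git. A sketch, assuming you've already created an empty repository from the web UI, with [user] standing in for your GitHub and Sourcehut usernames:

$ git clone https://github.com/[user]/vento && cd vento
$ git remote set-url origin git@git.sr.ht:~[user]/vento
$ git push --all origin
$ git push --tags origin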

The Plan

So I've already migrated Vento to Sourcehut, which you can find in the following link:

However, this won't be the last project I'll migrate. To be precise, the following projects will be migrated:

Other projects will be migrated if I ever need to update them, but for now they can be found on their original forges. Linux kernel projects will be kept on GitLab, due to their large size, and my active projects will be mirrored over on Femgit. Therefore, the only Git forges I'll be using for my own projects will be:

Conclusion

If I happen to change my opinion in the future regarding any of the choices I've published on this entry, I'll give you an update in this capsule, either through a new entry if it's major, or under this same entry if it's minor. I'll also be updating this gemlog more often now that I've graduated high school (fun stuff). So expect more content here soonish. In the meantime, stay safe and away from Microsoft!

If you live in a country with decent public transportation, it's more than likely that you've used a transport card to pay for bus fares. Most, if not all, of these work by using NFC chips and receivers, and many use NXP's MIFARE chips. I'll be talking more specifically about the bip! transport cards, the ones which are actually used in Santiago de Chile (where I'm currently living). These use MIFARE Classic with shared encryption keys for every card. What does this mean? Horrible security, but lots of fun too! Let's get started.

Disclaimer

This document was written for educational purposes only. Do not modify other people's transport cards and check for the legality of modifying transport cards before starting. I will not take any responsibility for angry roommates, arrests or thermonuclear war. In my specific jurisdiction doing this doesn't seem to be attached to any legal problems, but this may not be the case worldwide.

An accidental and catastrophic discovery

This all started with me trying to get my bip! card working with Metrodroid, an app built to let you check how much money you have on your card, its transaction history and its data. Now, the process indicated in the wiki was a bit involved. Buying components off the internet, soldering, etc. Looking for another option, I spotted an app on F-Droid called "MIFARE Classic Tool". I installed it, and after a few minutes of poking around with it, not only did I find out you can use it to crack the keys for a transport card, I realized you can also completely modify it. This brings me to talking about the card itself!

The card in question

Photo of my bip! card.

As I mentioned in the introduction, I'm using a bip! card, provided by Red Metropolitana de Movilidad (Metropolitan Mobility Network, translated), as my target for these modifications. In terms of security, this card is absolutely open. It uses a MIFARE Classic chip, which was compromised all the way back in 2008, requiring only about 200 seconds to crack on a laptop from back then. Not only that, it uses the same key to decrypt every single card. This means that by getting the key from one card, like I did, you can modify every single card in existence.
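As an aside, you don't strictly need a phone for this: with a libnfc-compatible USB reader, the same kind of key-recovery attack can be run from a laptop using the mfoc tool. A sketch, assuming at least one sector still uses a well-known default key (which mfoc needs as a starting point for its nested attack):

$ mfoc -O bip.dmp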

Modifying the amount of money in the card and trying to use it will lead to the card being blacklisted from the network, my guess being that it gets compared against a database. This may not seem too harmful at first, but it could mean anyone could casually deactivate transport cards with just their phone. This seems to be the reason why the Metropolitan Mobility Network has started pushing for their alternative named bip!QR. This uses an app on your phone to show a QR code which is scanned by their payment terminals. However, this system is less anonymous than the physical cards, since it's linked to your Unique Tax Registry (known locally as RUT). In a nutshell, you're trading away either security or privacy with these two systems.

Modifying the card to store a website

Photo of my Pixel 3a with a bip! card below it. It has the MIFARE Classic Tool app open.

I'm using a Google Pixel 3a as my reader, since this phone is compatible with MIFARE Classic. I will link to a list of phones compatible with MIFARE Classic at the end of the document. Let's start by opening MIFARE Classic Tool and tapping on "READ TAG". Place the card behind the phone and move it until it's recognized. I'd recommend laying the phone with the card on a table in the exact position it got recognized in. Back in MIFARE Classic Tool, we'll map "extended-std.keys" and "std.keys" to our sectors. Once they're selected, tap on "START MAPPING AND READ TAG". This will take a few minutes, so while it's running go make a cup of coffee or play some Minesweeper. Once it's done, tap on the hamburger menu on the top right and tap on "Save Keys". Enter whatever name you like and press Ok. Take the opportunity to also back up your card, since it'll be formatted in the next step. You can do this by tapping on the floppy disk icon on the top right and giving it a more memorable name!

Once that's done, go back to the main page and tap on "WRITE TAG". Select "Factory Format" and press the button with the same name. Select the keys you just saved and tap on "START MAPPING AND FORMAT TAG". This shouldn't take too long. You can now write anything that can be stored in 1 KiB! I'm using NFC Tools to write the URL to my proxied Gemini capsule. And sure enough, it works as a regular NFC chip!

Conclusion

NXP has made newer versions of MIFARE which improve security considerably and are already used in some transport systems nowadays. I think attempts at replacing the card system with deanonymized platforms like bip!QR present a huge privacy issue which could be abused in the future. So, to the Metropolitan Mobility Network of Santiago de Chile: consider safer chips for a new variant of bip! cards instead of pushing this new platform. Anyways, this was a fun project to make, and the card could serve as a business card for me. Who knows? Maybe it'll impress a few people.

Hi there. It's been a while, hasn't it? I'm here to announce an overhaul to my personal website. Let's go briefly through the changes that have been done.

New domain

As you may have seen, I've switched to a new domain: nixgoat.me. The main reason I've decided to do this is to distinguish my own website more from mint.lgbt, since it is only one of the many projects I'm working on. While I'm considering changing my email and XMPP to this domain, those changes might be more long-term.

Gemini as priority

While Hugo isn't a bad framework, I'm honestly not too used to it. This is mainly why I previously resorted to using a premade template, to be more exact panr's terminal theme. Hugo also doesn't have support for protocols like Gemini, which I've been interested in for a while now. So I've decided to rebuild the site before your eyes using gssg, which is a static site generator for Gemini. For HTTPS users, this site is proxied through Kineto.

Conclusion

There's not much else to say really! I'm pretty satisfied with the results of this site and I'm hoping it'll be easier to maintain in the near future. To anyone reading, thank you for passing by my site!

Attributions

Photo of my Nokia N900 running i3wm with a terminal running toot, a TUI Mastodon client.

In August 2009, over 12 years ago, Nokia announced a device with the not-so-attractive name Nokia N900. While it wasn't the first Linux mobile device, even by Nokia (Nokia had a series of what were called "internet tablets" going by the same naming scheme), it was the first and last smartphone made by Nokia with Maemo, marking the merge of the internet tablet software Maemo with their N-series phone lineup. Maemo later evolved into MeeGo, an operating system co-developed with Intel, which was discontinued after Nokia partnered up with Microsoft. With Maemo dead, however, the community got together and began building many alternatives and mods for this device, some going as far as implementing wireless charging.

Over a year ago I purchased a preowned unit on eBay for about €45. It had a broken microUSB port but otherwise worked fine, as stated by the seller in the description. It took about 3 months to arrive (keep in mind the purchase was made mid-pandemic) and I had to purchase a Nokia Asha 201 to charge it and get a battery for it, but after that I turned the phone on and was presented with a heavily customized Maemo 5 install with no way to reflash it back to stock. The good news, however, is that it came with U-Boot preinstalled! This means I can flash one of the many distros available for this device. And now I shall show you the experience I had with it!

Hardware

This device is a joy to use and an absolute dream for tinkerers! While the specification sheet may not sound impressive to 2022 eyes, with a struggling 600 MHz TI OMAP 3430 and 256 MB of RAM that make a Galaxy S2 look like an absolute titan, the device contains many sensors you don't see on modern smartphones anymore. For example, the phone includes an FM transmitter by default. You read that right, an FM transmitter. You have an infrared port, which you barely see outside of some Xiaomi phones nowadays, and even A/V output, so you can watch movies from your tiny Nokia… or maybe not, more on that later.

The device looks unremarkably vanilla, but everyone I've shown it to is caught off guard by how hackery and plain-out ancient it looks. The keyboard is not too big and sometimes the keys are hard to hit correctly, but you can comfortably thumb-type on it.

Another feature that I found pretty neat is the transflective LCD panel the device has, which makes it absolutely usable in daylight if you can get past the scratches on the resistive plastic screen.

postmarketOS

I’m gonna be honest, running postmarketOS wasn’t easy on this device. It turns out either the device may have had a broken back sensor when I got it or I broke it myself trying to repair the microUSB port, but this made booting postmarketOS with the default kernel a chore. At the end I had to patch the pre-existing postmarketOS kernel to get it booting.

However, after that was done it was pretty much smooth sailing ahead! Nearly every CLI app and most GUI apps worked out of the box. cmus worked better after installing PulseAudio, but ran just fine.

The main issue with postmarketOS, however, is the current lack of hardware acceleration. This means you're stuck on Xorg-based UIs like i3wm, sxmo on dwm and Xfce4 which, while not terrible, limit the phone severely in terms of security. Playing videos on postmarketOS is pretty much impossible for this same reason, and don't even attempt to run ANYTHING powered by OpenGL.

If you can get past these limitations this device can serve well as a portable terminal, which is actually exactly how I used it when I had postmarketOS installed. I was sad to see it being dropped to testing but I might switch back to it once I have the time since Danct12 and sicelo are maintaining it once again (thank you!).

Maemo Leste

Installing Maemo Leste was really easy. Using just GNOME Disks was enough to get it fully running and I’m sure if you use any OS that isn’t Linux this will be pretty easy too. Maemo Leste pretty much solves all the issues I had with postmarketOS. The kernel, while older than the one in postmarketOS at the time of writing this article, includes patches that fix booting with the back cover sensor broken and allow for hardware acceleration, which is actually needed to run Hildon, the shell Maemo Leste uses.

However, it introduces some issues that, while not making it unusable, are deal breakers. While you can install packages through APT, they will go into a folder inside your launcher named "Debian", which, while it at least makes launching APT apps easier, makes creating desktop shortcuts for them impossible. The official method to install apps on Maemo Leste is through their application manager, which has a far more limited set of applications. While you have some web browsers like surf2, they don't work well specifically with the N900.

I have to give Maemo Leste credit though for maintaining a UI that’s essentially easier for the end user to understand. And the team is doing a great job, don’t get me wrong. But these issues are not ideal if you are planning to use this device constantly.

Other distros

While these two are not the only distros ever made for the N900, they are the most active ones. After some searching, I found that some users have managed to get Arch Linux ARM and Kali Linux running on this device, but these projects are long abandoned and I doubt they're coming back any time soon. This may change in the future as the device ages more. Who knows!

Conclusion

Would I recommend a Nokia N900 as a main phone to anyone in 2022? No. This device has gone past daily-driver territory. However, if you're a tinkerer or you're interested in Linux, this is a great device to play around with! It's a brilliant little Linux machine that you can bring on the go and play around with, and if you're into that and you can afford it, then go ahead! It's pretty interesting that such an old device is getting a second chance through community ports, and I'm glad people are happily maintaining it.