This will feel extremely simple for some folks, but I was having a hell of a time getting Steam games that had previously worked through Proton running. I scoured the internet for solutions after trying to install proton-ge and testing multiple versions. Eventually someone had the galaxy brain idea to suggest installing WINE. For some reason, that fixed the problem real good.
Not fixed, but there is an Arch problem that is and always will be the bane of my existence.
For some reason, when I click with the trackpad buttons, the touchpad freezes for about a second (it’s as if the system recognises them as keyboard keys; I do have the option enabled that temporarily disables the touchpad while typing).
I’ve spent hours and days going through the libinput documentation and some synaptics libraries, even legacy ones. To this day it is the only problem that has led me to reinstall my system, and the problem still remains.
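In case it helps anyone, on X11 with the libinput driver you can poke at the disable-while-typing behaviour per device; something along these lines (the device name is just an example and the property name is from memory, so check xinput list-props first):

xinput list                                                  # find the touchpad's device name or id
xinput list-props "SYNA8004:00 06CB:CD8B Touchpad"           # example device name, yours will differ
xinput set-prop "SYNA8004:00 06CB:CD8B Touchpad" "libinput Disable While Typing Enabled" 0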
It’s not even like I have some niche setup, I mean, surely there must be thousands of Arch users running with a ThinkPad X1 Carbon Gen 7, and surely not every single one of them must be running it like this, right?
It has come to the point where I’ve just given up and got used to my system as it is, but I’m sure there will be fanfare if I’m ever able to fix it.
It was for work, so it probably doesn’t count, but I needed to write a kernel module to automatically resize an encrypted LUKS volume when the image is written to a larger disk.
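For comparison, the plain userspace route (not the kernel-module approach above) is roughly this, assuming the LUKS container lives on /dev/sda2 and is mapped as cryptroot with ext4 inside (all of those names are examples):

growpart /dev/sda 2                 # grow the partition to fill the bigger disk
cryptsetup resize cryptroot         # extend the dm-crypt mapping to the new partition size
resize2fs /dev/mapper/cryptroot     # grow the filesystem to fill the mapping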
I’ve had to fix so many booting problems using a live USB: GRUB, Xorg, login managers, they’re all difficult.
Using Linux on a GTX660 without proprietary drivers. I never managed to succeed. Desktop would always freeze. Never again.
idk how i would define difficult, but the thing i probably put the most time into figuring out thus far is LXC containers.
Or LXC, if you like not using redundant acronyms. Those containers are good shit, weird shit, but good shit nonetheless.
I managed a CentOS system where someone accidentally deleted everything from /usr, so no lib64, and no bin. I didn’t have a way to get proper files at the time, so I hooked the drive up to my Arch system, made sure glibc matched, and copied yum and other tools from Arch.
Booted the system, reinstalled a whole lot of yum packages, and… the thing still worked.
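If memory serves, the reinstall step was something in this spirit (the exact invocation may well have been different):

rpm -qa | xargs yum -y reinstall     # re-lay-down the files of every installed package from the repos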
That’s almost equivalent to a reinstall, though. As a broke college student, I had a laptop with a loose drive that would fall out very easily. I set it up to load a few crucial things into a ramdisk at boot, so that I could browse the web and take notes even if the drive was disconnected, and it would still load images and things. I could pull the cover off and push the drive back into place to save files, but doing that every time I had class got really tiring, so I wanted it to run a little like a live system.
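Something in the spirit of it, if anyone wants to try (the mount point, size, and paths are all made up):

# /etc/fstab: a ramdisk that exists from boot
tmpfs   /ramdisk   tmpfs   size=512M,mode=0755   0   0

# boot script: copy the essentials into RAM so the drive can drop out mid-session
cp -a /home/user/.mozilla /ramdisk/mozilla
cp -a /home/user/notes    /ramdisk/notes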
Are you including back in the day when we had to use Windows device drivers via ndiswrapper?
I managed to remove a critical library once, but did manage to extract it from an RPM on another machine and install it manually. That was good enough to get me to the point where I could yum reinstall.
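The extraction part goes something like this (the package and library names are only examples):

rpm2cpio somelib-1.2-3.el7.x86_64.rpm | cpio -idmv    # unpack the RPM into the current directory
cp usr/lib64/libsomething.so.1 /usr/lib64/            # put the missing library back by hand
ldconfig                                              # refresh the dynamic linker cache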
Pre-Linux we had an HP workstation where the disk drive died, and of course we had no backups. I managed to frankenstein the disk by connecting the platters of the broken drive to the circuit board of a working one. This worked, and I was able to back up the disk and reload onto a new drive.
And then we bought an 8mm tape drive for backups and I had to port some drivers to HP-UX to get it to work. But we had awesome backups after that!
I can’t remember the details anymore, but for a year or two I had a bad run of absolutely hosing my boot config and leaving myself in a state where the system either couldn’t find its kernel or couldn’t find the root partition and would drop me into an initramfs emergency shell. I got pretty good at booting into a live environment, getting all my dm-raid and LVM disks discovered, mounting all the relevant filesystems in the right places, chrooting in, and rebuilding the pieces that were broken.
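The ritual goes roughly like this (device and volume group names are examples, and the initramfs/bootloader commands depend on the distro):

mdadm --assemble --scan                 # discover and assemble the RAID arrays
vgchange -ay                            # activate the LVM volume groups
mount /dev/vg0/root /mnt
mount /dev/sda1 /mnt/boot
for d in dev proc sys run; do mount --bind /$d /mnt/$d; done
chroot /mnt
mkinitcpio -P                           # or update-initramfs -u on Debian-ish systems
grub-mkconfig -o /boot/grub/grub.cfg    # or grub2-mkconfig, depending on the distro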
More than a decade ago a user came into #ubuntu-server on Freenode (now libera.chat) and said that they had accidentally run “rm -rf /* something*” in a root shell.
Note the errant space that made that a fatal mistake. I don’t remember how far it actually got in deleting files, but all of /bin/, /sbin/, and /usr/ were gone.
He had 1 active ssh connection, and couldn’t start another one.
It was a server that was “in production”, was thousands of miles away from him, and which had no possibility for IPMI / remote hands.
Everyone (but me) in the channel said that he was just SoL and should just give up.
I stayed up most of the night helping him. I like challenges and I like helping people.
This was in the sysv-init (maybe upstart) days, and so a decent number of shell scripts were running, and using basic *nix commands.
We recovered the bash binary by running something along the lines of
bash_binary_contents="$(</proc/self/exe)"; printf "%s" "$bash_binary_contents" > /tmp/bash
(If you can access “lsof” then “sudo lsof | grep deleted” will show you any files that are open, but also “deleted”. You may be surprised at how many there are!)
But bash needed too many shared libraries to make that practical.
Somehow we were able to recover curl and chmod, after which I had him download busybox-static. From there we downloaded an Ubuntu LiveCD iso, loop mounted it, loop mounted the squashfs image inside the iso, and copied all of /bin/, /sbin/, /etc, and so on from there onto his root FS.
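From memory it was roughly this, done with the static busybox since the normal tools were gone (the ISO name and mount points are illustrative; Ubuntu live images keep the squashfs under casper/):

./busybox mkdir -p /mnt/iso /mnt/squash
./busybox mount -o loop ubuntu-live.iso /mnt/iso
./busybox mount -o loop /mnt/iso/casper/filesystem.squashfs /mnt/squash
./busybox cp -a /mnt/squash/bin /mnt/squash/sbin /mnt/squash/lib /    # and /etc and so on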
Then we re-installed missing packages, and fixed up /etc/ (a lot of important daemons, including the one that was production critical, kept their configuration files open, so we were able to use lsof to find the magic symlinks to them in /proc/$pid/fd/ and just cp them back into /etc/).
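Concretely, that trick looks something like this (the PID and fd number are just examples):

ls -l /proc/1234/fd | grep deleted           # the daemon's deleted-but-still-open files show up here
cp /proc/1234/fd/7 /etc/someservice.conf     # reading the fd entry still gives you the file's contents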
We were able to restart openssh-server, log in again, and I don’t remember if we were brave enough to test rebooting.
But we fucking did it!
I am certainly getting a lot of details wrong from memory. It’s all somewhere at irclogs.ubuntu.com though. My nick was / is Jordan_U.
I tried to find it once, and failed.
I just told this story to a friend, but I did the standard rm -rf * as root while in the / directory. And this was back in the day when we NFS-mounted every other machine and root privileges propagated through NFS. I think it was on the 2nd or 3rd machine when I thought, “this seems to be taking longer than I thought”.
I used to main Gentoo.
Breaking the install was more of a guarantee.
I once removed most of X by trying to remove Gnome dependencies, and it led to an interesting couple of hours, but I did have a working system when I was done.
There were countless dependency bugs and broken systems but at least I learned how to use the Gentoo Forum and also a lot of how Linux works.
I kind of want to give it another go.
At some point I installed the Rust implementation of coreutils from the AUR. It worked for a long while, until some SSL vulnerability was discovered and everyone had to update the library. As you can imagine, without working coreutils the system was hard to use. Troubleshooting was also a pain in the ass, because who would blame coreutils of all things? :P
A Gentoo upgrade package list with over 100 packages and conflicts all over the place. Then do it again when the list grows to the same size in a few months.
This is why I don’t use Gentoo anymore.
I did a partial system upgrade when installing nginx without upgrading the rest of my Arch system. One of the things it upgraded was libssl.
Turns out systemd depends on that.
Turns out programs won’t start at all if one of their shared libraries is missing.
Turns out that if you write init=bash in the kernel command line, not even Ethernet connections work if systemd isn’t running.
I had to boot off archiso, chroot into my / partition, and run the system upgrade from there.
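For anyone who hasn’t had the pleasure, that recovery is roughly this (partition names are examples):

# booted from the Arch ISO
mount /dev/sda2 /mnt            # root partition
mount /dev/sda1 /mnt/boot       # boot partition / ESP, if separate
arch-chroot /mnt
pacman -Syu                     # finish the upgrade properly this time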
My first Linux machine crashing. This was way before Redhat, Ubuntu, Arch, or OpenSUSE. This was installed from 60+ floppy disks on a 386-40 with 8MB of RAM.
This machine ran happily, but it crashed under heavy load. I tried generating the load with different applications, but could not pin it on any particular piece of software. So the next thing I checked was the RAM: Memtest86 ran for a day without any problems, but the crashes still came. So I got the infrared camera from the lab to see if some hardware was overheating. Nope, that went nowhere either.
Then I tested the hard disk. A read test of the whole drive went without problems. I copied the data to a backup medium and did a write-and-read test by dd’ing /dev/zero over the whole disk, and then dd’ing the disk to /dev/null. Nothing showed up.
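That test boils down to the following (the device name is an example, and writing zeros obviously destroys everything on the disk):

dd if=/dev/zero of=/dev/hda bs=1M      # write the whole disk
dd if=/dev/hda of=/dev/null bs=1M      # read it all back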
I reinstalled Linux, and it crashed again. But this time I noticed that something was odd with the hard disk. I added a second swap partition, disabled the first, and the machine ran without problems. Strange…
So I wrote a small program that tested the part of the disk occupied by the old swap space: write data, read data, and log everything with timestamps. And there was the culprit: there was an area on the HD where I could write any data, but when I read blocks from that area, a) it took a very long time to read, b) the blocks I read back contained all zeros, regardless of what I had written, and worst of all c) there was no error indication whatsoever from the controller or drive. Down at the kernel level, the zeroed blocks were happily served by the HD with an “OK”. And the faulty area was right in the middle of the original swap partition.
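A rough modern-day equivalent of that little test program, as a shell loop (the device name and block range are made up; direct I/O is used so the readback doesn’t just come out of the page cache):

for blk in $(seq 1000000 1100000); do
    head -c 4096 /dev/urandom > /tmp/pattern
    dd if=/tmp/pattern of=/dev/sdb bs=4096 seek=$blk count=1 oflag=direct conv=fsync 2>/dev/null
    dd if=/dev/sdb of=/tmp/readback bs=4096 skip=$blk count=1 iflag=direct 2>/dev/null
    cmp -s /tmp/pattern /tmp/readback || echo "$(date '+%F %T') mismatch at block $blk"
done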
Nice read! Did you delete the old swap space or leave it as-is?
I took no risks and binned the disk. I wanted to buy a bigger one, anyway.
lol nerd
Blocked
Yes, a compliment
If you were trolling, “Blocked” would definitely be a compliment.
Are you saying this as a compliment? It’s not completely clear. Either way, it is a compliment.