digdilem

@digdilem@lemmy.ml

digdilem,

Agree, and I switched over a couple of years ago. Only yesterday I learned about Mermaid graphs and was impressed that Joplin does them natively.

digdilem,

I don’t think Snowden’s endorsement is the positive you think it is. Even if you can ignore treason, he’s a pretty toxic person, by all accounts.

Linux on old School Machines?

Hi all, the private school I work at has a tonne of old Windows 7/8-era desktops in a student library. The place really needs upgrades, but they never seem to prioritise replacing these machines. I’ve installed Linux on some older laptops of mine and was wondering if you all think it would be worth throwing a light Linux distro on...

digdilem,

Instead of you installing Linux on them, why not make it a project for the kids? Give them a bunch of distros to try and see what they learn.

digdilem,

Linux: 1995, SCO (at work); then got a copy of Slackware on a cover CD around 2000. Shortly after I found Debian, and I’ve been using that at home exclusively for over two decades, now on desktops and laptops as well as a couple of home servers. (I use EL distros, Ubuntu and OpenSUSE at work nowadays.)

Longer history: 1981, ZX81; 1985, Dragon 32; 1988, Amstrad CPC; 1991, an XT; 1992, a 386SX25 with 1MB of RAM, and so on.

DeAmazoning a FireTV

I never want to get a smart TV, but I found this exact TV (Toshiba FireTV) on the side of the road and decided it would be a fun project to try enhancing its privacy as much as I can. It did not come with the remote or any other accessories besides the TV, so if there is any way to pair an iPhone/Pixel as a remote that would...

digdilem,

Why use Kodi *and* Jellyfin? Jellyfin is its own thing, without all the awful cruft that comes with Kodi.

It also has native apps for Windows, Linux and… FireTV.

digdilem,

So you’re using Kodi as the OS on the TV itself? Not the Kodi App or Kodi backend?

I’m still struggling to understand how that would work, and still have Jellyfin in the mix - could you please explain exactly what you mean?

digdilem,

IRC’s not as popular as in its heyday - it was once the main choice for multiplayer gaming chat (QuakeNet et al), and that’s largely gone elsewhere - but it’s still very good for certain technical channels.

IRC has also proved remarkably resistant to commercialisation, mostly thanks to its users. Even when one of the biggest networks, Freenode, got taken over by a drug-addled mentalist who started insisting on all kinds of strange things, the users just upped sticks and created a new network, Libera Chat. A bit of fuss, but the important stuff stayed the same and it’s continued much as before.

digdilem,

Others have answered your question - but it may be worth pointing out the obvious - backups. Annoyances such as you describe are much less of a stress if you know you’re protected - not just against accidental erasure, but malicious damage and technical failure.

Some people think it’s a lot of bother to do backups, but it’s very easily automated with any of the very good free tools around (backup-manager, the timeshift someone’s mentioned, and about a million others). A little time spent planning decent backups now will pay you back in spades one day - it’s a genuine investment of time. And once set up, with some basic monitoring to ensure they’re working and the odd manual check once in a blue moon, you’ll never be in this position again. Linux comes out ahead here in that the performance impact of automated backups can be deprioritised so it doesn’t affect your main usage, even if the machine isn’t on all the time.
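
A minimal sketch of the sort of thing I mean, assuming you’re mirroring /home to a backup drive mounted at /mnt/backup (both paths are examples, not a recommendation - any of the tools above will do this and more for you):

#!/bin/sh
# Nightly mirror of /home to a mounted backup drive (example paths).
# -a preserves permissions and times; --delete keeps the mirror exact;
# the log file gives you something cheap to monitor.
rsync -a --delete --log-file=/var/log/home-backup.log /home/ /mnt/backup/home/

Drop that into cron (e.g. 0 2 * * * as root), glance at the log occasionally, and you’ve got the bare bones of the automation described above.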

Any suggestions for cheap but decent laptops for coding?

I’m currently learning how to code (currently Python, then maybe JavaScript), but I’m not always around my desktop, and learning on my phone is not always an option (also, it can be quite cumbersome at times). Therefore, I’m looking into purchasing a laptop just for learning how to code and stuff....

digdilem,

Some of the cheaper Thinkpads are terribly poor quality. Once a byword for ruggedness, now just another name.

digdilem,

The way I help, as a sysadmin, is primarily by using FOSS software in my job and feeding back bug reports, issues and so on. I’ve raised several hundred issues on GitHub this way, and try to write them concisely, accurately and with as much relevant information as I can.

Did I just solve the packaging problem? (please feel free to tell me why I'm wrong)

You know what I just realised? These “universal formats” were created to make it easier for developers to package software for Linux, and there just so happens to be this thing called the Open Build Service by OpenSUSE, which allows you to package for Debian and Ubuntu (deb), Fedora and RHEL (rpm) and SUSE and OpenSUSE (also...

digdilem,

I like this perspective, but it’s the developers who get to choose in the world of FOSS software, and I suspect most would rather develop than package.

Learning the different formats and methods, then committing to re-packaging every update for eternity when you’re often a single person or a very small group, is a big ask on top of developing the software too, so they’re going to pick whichever method is easiest for them.

So if there was a user-led method, it would still need to appeal to developers as well.

digdilem,

micro looks very impressive. I’m too invested in vi to move away from it, but it’s great to see alternatives, especially those focused on being easy to use (like jed).

The only weird thing I saw from the screencap was that you need to edit a JSON file to change keybindings - doesn’t that go against the ‘easy to use’ ethos, or is that something that’s planned to change?

digdilem,

That’s really neat, and in the Debian main repos.

digdilem,

This is exactly why I never buy Early Access games. The biggest thrill for me is starting a new game, and if that isn’t as good as it can possibly be, then that opportunity has been wasted.

Sure, it *may* get better at some undefined point in the future, but there are just so many games out there that are complete and won’t require revisiting because they got better. Once that first play is gone, it’s gone.

why cant we connect 2 computers using USB

So I tried to connect my Steam Deck to my PC using USB, and I read it’s impossible because the Steam Deck is a computer, and some explanation on Quora about a strong master/slave relationship. But then why is it possible for Android phones to connect to a PC whilst also having the ability to use USB and other USB-C accessories? Also why can’t it...

digdilem,

And it was a good design - its universal (aha) adoption proves that.

Those of us old enough to remember the pain of using 9- and 25-pin serial leads and having to manually set baud rates and protocols, along with LPT, external SCSI and manufacturer-specific sockets, will probably agree this was a problem that needed solving, and USB did solve it.

digdilem,

I’ve had to scroll down eight pages to find a post that seems to actually address the good points raised in the article.

digdilem,

It’s actually 250 euros for the top tier (US$267).

I mean, seriously, what the actual fucking fuck?

Stopping a badly behaved bot the wrong way.

I host a few small low-traffic websites for local interests. I do this for free - and some of them are for a friend who died last year but didn’t want all his work to vanish. They don’t get so many views, so I was surprised when I happened to glance at munin and saw my bandwidth usage had gone up a lot....

digdilem,

I mean - I switched my attention to HAProxy. And yes, no argument there.

digdilem,

Fail2ban is something I’ve used for years - in fact it was working on these very sites before I decided to dockerise them - but I find it a lot less simple in this application, for a couple of reasons:

The logs are in the docker containers. Yes, I could get them squirting to a central logging server, but that’s a chunk of overhead for a home system. (I’ve done that before, so it is possible, just extra time.)

And getting the real IP through from Cloudflare. Yes, CF passes headers with it in, and HAProxy can forward that as well with a bit of tweaking. But not every docker container serving webpages (notably the phpBB one) will correctly log the source IP even when HAProxy passes it through as the forwarded IP; some just show the IP of the proxy. I’ve other containers that do display it, so it can obviously be done, but I’m not clear yet why it’s inconsistent. Without that, there’s no blocking.
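
For reference, the HAProxy side of this can be quite small - a sketch, assuming Cloudflare sits in front (CF-Connecting-IP is the header Cloudflare documents for the original client IP; the frontend and backend names here are invented):

frontend web
    bind :80
    # Add X-Forwarded-For with the connecting IP...
    option forwardfor
    # ...but prefer Cloudflare's original-client header when it's present
    http-request set-header X-Forwarded-For %[req.hdr(CF-Connecting-IP)] if { req.hdr(CF-Connecting-IP) -m found }
    default_backend containers

Whether each container then actually logs that header is, as above, down to the individual image.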

And… you can use the Cloudflare API to block IPs, but there’s a fixed limit on the free accounts. When I set this up before with native webservers, blocking malicious URL-scanning bots via the API, I reached that limit within a couple of days. I don’t think there’s automatic expiry, so I’d need to find or build a tool that manages the blocklist remotely. (Or use HAProxy to block, and accept the overhead.)
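
For anyone wanting to try it, the API call itself is the easy part - a sketch against Cloudflare’s v4 API, with the zone ID, token and address all placeholders:

curl -s -X POST \
  "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/firewall/access_rules/rules" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"mode":"block","configuration":{"target":"ip","value":"203.0.113.7"},"notes":"bad bot"}'

The hard part is the missing expiry: nothing about that rule times out on its own, which is exactly the blocklist-management problem described above.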

It’s probably where I should go next.

And yes - you’re right about scripting. Automation is absolutely how I like to do things. But so many problems only become clear retrospectively.

digdilem,

Some nice evil ideas there!

digdilem,

Maybe? It feels like the kind of stupid where you really need a human half-assing it to achieve it this thoroughly, though.

digdilem,

Doh - another example of my muddled thinking.

Fail2ban can work directly on HAProxy’s log - no need to read the web logs from the containers at all. Much simpler and better.
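
Something like this in jail.local should be all it takes - a sketch, assuming HAProxy logs to /var/log/haproxy.log and that a filter matching the abusive requests exists (the filter name here is hypothetical):

[haproxy-badbots]
enabled  = true
port     = http,https
filter   = haproxy-badbots
logpath  = /var/log/haproxy.log
maxretry = 10
findtime = 60
bantime  = 3600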

digdilem,

Thanks, I’ve not heard of that, it sounds like it’s worth a look.

I don’t think the tunnel would complicate blocking via the Cloudflare API, but there is a limit on the number of IPs you can ban that way, so some expiry rules are necessary.

digdilem,

Yep - agree with all of that. It’s a fault of mine that I don’t always step back and look at the bigger picture first.

digdilem,

I’ve just installed CrowdSec and its HAProxy plugin. Documentation is pretty good. I need to look into getting it to ban the IP at Cloudflare - that would be neat.

Annoyingly, the ClaudeBot spammer is back again today with a new UA. I’ve emailed the address within it politely asking them to desist - it’ll be interesting to see if there’s a reply. And yes, it is ClaudeBot - an AI scraper.

UA: like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)

digdilem,

It’s back today with a new user-agent, this time containing an email address at anthropic.com - so it looks like it’s Claude 3, a scraper for an AI bot.

digdilem,

Anyone else find themselves singing this headline to the tune of The House of the Rising Sun?

non-Euclidean filesystem

I noticed that I only had 5 GiB of free space left today. After quickly deleting some cached files, I tried to figure out what was causing this, but a lot was missing. Every tool gives a different amount of remaining storage space. System Monitor says I’m using 892.2 GiB/2.8 TiB (I don’t even have 2.8 TiB of storage...

digdilem,

This is a common thing one needs to do. Not all Linux GUI tools are perfect, and some calculate numbers differently (1000 vs 1024 soon mounts up to big differences). Also, if you’re running as a user, you’re not going to see all the files.

Here’s how I do it as a sysadmin:

As root, run:

du /* -shc |sort -h

“disk usage for all files in root, displaying a summary instead of listing all sub-files, and human-readable numbers, with a total. Then sort the results so that the largest are at the bottom”

Takes a while (many minutes, up to hours or days if you have slow disks, many files or remote filesystems) to run on most systems, and there’s no output until it finishes because it’s piping to sort. You can speed it up by omitting the “|sort -h” bit, and you’ll get a summary as each top-level dir is checked, but you won’t have a nice sorted output.

You’ll probably get some permission errors when it goes through /proc or /dev.

You can be more targeted by picking some of the common places, like /var - here’s mine from a Debian system; takes a couple of seconds. I’ll often start with /var, as it’s a common place for systems to start filling up, along with /home.


root@scrofula:~# du /var/* -shc |sort -h
0       /var/lock
0       /var/run
4.0K    /var/local
4.0K    /var/mail
4.0K    /var/opt
168K    /var/tmp
4.1M    /var/spool
5.5M    /var/backups
781M    /var/log
787M    /var/cache
8.3G    /var/www
36G     /var/lib
46G     total

Here we can see /var/lib has a lot of stuff in it, so we can look into that with du /var/lib/* -shc|sort -h - it turns out mine has some big databases in /var/lib/mysql and a bunch of docker stuff in /var/lib/docker, not surprising.

Sometimes you just won’t be able to tally what you’re seeing with what you’re using. Often that’s due to a locked file having been deleted or truncated while the lock still prevents the OS from reclaiming the space. That generally sorts itself out with various timeouts, but you can try to find it with lsof, or, if the machine isn’t doing much, a quick reboot.
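
The usual trick for the deleted-but-still-open case: lsof’s +L1 flag lists open files with a link count below one, i.e. deleted files whose space hasn’t been freed yet.

# Deleted files still held open by a process
lsof +L1
# Or, more crudely, grep the full listing
lsof | grep '(deleted)'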

digdilem,

Good thinking. That would speed things up on some systems for sure.

XZ Hack - "If this timeline is correct, it’s not the modus operandi of a hobbyist. [...] It wouldn’t be surprising if it was paid for by a state actor." (lcamtuf.substack.com)

Thought this was a good read exploring some of the “how and why”, including several apparent sock-puppet accounts that convinced the original dev (Lasse Collin) to hand over the baton.

digdilem,

I think a bus-factor loss would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.

In a complete loss of a sole maintainer, then it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and change potentially thousands of packages that rely upon the project as a dependency.

Maybe, before a library or any software gets accepted into a distro, the distro should do more due diligence to ensure it’s a sustainable project and meets requirements like solid ownership?

The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I’ve never tried to get a distro to accept my software.

Nothing I’ve seen would completely avoid risk. Blackmail upon an existing developer is not impossible to imagine. Even in this case, perhaps the new developer in xz started with pure intentions and they got personally compromised later? (I don’t seriously think that is the case here though - this feels very much state sponsored and very well planned)

It’s good we’re asking these questions. None of them are new, but the importance is ever increasing.

digdilem,

Fair point.

If the distro team is compromised, then that leaves all their users open too. I’d hope that didn’t happen, but you’re right, it’s possible.

digdilem,

software developers are criticizing Microsoft and GitHub for taking down some of the affected code repositories

Surely it’s sensible of GitHub to take down malicious code? It’s not just honest, hardworking people trying to make sense of this who have eyes on it; others are looking for inspiration in what appears to be a sophisticated and very dangerous supply-chain attack.

digdilem,

One question and some unfollowable advice.

**Question:** Why not use AppArmor? My understanding is that’s what Debian uses by default, instead of SELinux, which is more native to Enterprise Linux (Fedora, RHEL, Rocky, Alma etc).

**Unfollowable advice:** As an EL admin, where it’s the default and very closely integrated, we have a saying: “It’s not always DNS - mostly it’s SELinux”. For most distro-sourced software, it’s fine. But if you install software from other sources, you’re going to hit problems.

Others have given good answers to your specific questions, but one tip if you go down this route: we use a Red Hat tool, setroubleshoot-server, which helps hugely, both in identifying when something isn’t working because SELinux has blocked it, and in giving you the commands to add an explicit rule to allow it - so you can view the log, understand why it’s blocking, and allow it without needing to get too involved with the complicated file contexts.
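
From memory the workflow looks roughly like this, so treat it as a sketch rather than gospel (package and tool names as on RHEL/Rocky/Alma):

# Install the helper, then analyse the audit log; each denial is
# explained in plain English along with a suggested command
# (often an audit2allow one-liner) to permit it
dnf install setroubleshoot-server
sealert -a /var/log/audit/audit.log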

Sadly, it looks like this tool isn’t available in Debian, which would seem to make life a lot harder when using SELinux. Familiar as I am with SELinux, I don’t run it on my personal servers or this laptop, which are Debian.

Lasse Collin, the other xz maintainer, has acknowledged the backdoor (tukaani.org)

They haven’t particularly made a comment on the situation so much as acknowledged it’s happening. They seem to be going with the story that they had nothing to do with it and this is news to them. Hope to hear more from them soon so we can find out more about the situation, how and why this happened, etc....

digdilem,

Reading that made me sad, angry and scared. Great article, but terrifying.

digdilem,

Good luck with that.

Commercial and closed-source software is no safer, and may even be using the same FOSS third-party libs under the hood that you’re trying to avoid. Just because FOSS licences generally require you to disclose that you’re using them doesn’t mean that’s what actually happens.

And even if, by some miracle, they have a unique codebase - how secure is that? Even if an attacker can’t reach the source, they can still locate exploits and develop successful attacks against it.

At its core, all software relies upon trust. I don’t know the answer to this, and we’ll be here again soon enough.

digdilem,

In what way did I bend your logic? I found your logic quite twisted to start with, and I don’t think I altered it further.

Also - not constructive? But you’re the one that’s being negative. I’m merely trying to point out that you’ll have a very hard job not relying on foss as it stands today. Where we go from here is a much bigger question, but we’ve all got very used to having free software and, as I said, even if we all start paying huge amounts of money for the alternative, that doesn’t mean it’ll be safer. In fact, I rather suspect it’ll be less safe, as issues like this then have a commercial interest in not disclosing security problems. (As evidenced already in numerous commercial security exploits that were known and hidden)

digdilem,

I think there’s a core difference between loot boxes, which are out-and-out gambling, and gameplay. Both can be addictive, but they have very different consequences.

Gameplay addiction steals your time and maybe your social life, but that’s it.

Gambling addiction also steals your money. And when that’s gone, drives you to extremes trying to find more.

digdilem,

I respectfully disagree.

I had Redcare via Age Concern for my mum before she went into a home with dementia - it was a few years ago and it was all that was available.

Nowadays, the panic alarms are, I believe, entirely self-contained, using a SIM card and mobile connectivity, and include location information - so they are not reliant on local power or an internet connection. That location information could be life-saving: one time my mother got very confused, left her flat and was wandering around outside in freezing conditions. Luckily someone heard her calling out and took her home, but she could easily have died that night. She was so confused that she didn’t think to use her dongle, which was still around her neck, and it’s doubtful it would have been in range of her base station anyway. A modern system can also include geofencing and positional data, and can automatically alert if someone falls down, takes the device off, or the battery runs low. And just like Redcare, the modern systems are manned 24/7.

Sometimes old school is not best.

digdilem,

I like the energy, but this doesn’t qualify as “lesser known”

digdilem,

Ever read some of the Microsoft forums? Just as many people seeking help there - the only difference is we don’t have over-eager paid employees replying with scripted answers that don’t help.

Linux is as simple or as complicated as you want it to be. Most of the mainstream distros “just work” on most hardware. I’ve installed Mint, Rocky, Ubuntu and Debian on laptops and desktops for relatives, including those who aren’t remotely technically gifted. It was as easy as, or easier than, Windows to install, set up and get running. The users are happy - they can use cheaper hardware (and don’t need to upgrade a perfectly good laptop for Windows 11) and are entirely free of software costs and subscriptions. Everything works and things don’t break - just like Windows and Macs. Most people just want their computer to turn on and let them run stuff. All three do that equally well.

I’ve also installed Linux on hardware clusters costing hundreds of thousands of pounds, and that definitely wasn’t a simple or quick process - but that’s the nature of the task. Actually, installing the base OS was probably the easiest part. Windows just isn’t an option for that.

You ask a fair question - you’re not unique in your viewpoint and that’s probably hampered takeup more than anything else. What makes you a bit better than most is that you actually ask the question and appear to be open to the answers.

digdilem,

That last point is often under-appreciated in its importance, especially when dealing with hundreds of servers.

digdilem,

htop on our VMs and clusters, because it’s in all the repos, it’s fast, it’s configurable via a deployable config file, it’s very clearly laid out and it does everything I need. I definitely would not call it bloated in any way.

My config includes network and I/O traffic stats and breaks the CPU load down by type - this in particular makes iowait very easy to spot when finding out why something’s racking up big sysloads. Plus, it looks very impressive on a machine with 80 cores…

My brain can’t parse top’s output very well for anything other than finding the highest-CPU process.

But - YMMV. Everyone has a preference, and we have lots of choice; it doesn’t make one thing better or worse than another.
