Veraxis

@Veraxis@lemmy.world


Veraxis,

I cannot say that I have done extensive testing, but the Acer Swift 315-51G and Gigabyte Aero WV8 that I own both worked fine with Linux with zero prior research on my part. No issues with any drivers, even the SD card readers, although I have not checked the fingerprint sensor on the Acer. Maybe I have just been lucky.

Both have hybrid Nvidia graphics, though, and hybrid setups, especially 10-series and older, are known to have high idle power draw unless you manually disable the dGPU when not gaming. I had to do that with envycontrol, which nearly doubled my battery life on both. I would avoid hybrid dGPUs, especially older ones, unless you actually need one.

Used laptop-wise, I agree with others that a used business laptop like a Dell would probably be your best bet.

Linux on old School Machines?

Hi all, the private school I work at has a tonne of old Windows 7/8-era desktops in a student library. The place really needs upgrades, but they never seem to prioritise replacing these machines. I've installed Linux on some older laptops of mine and was wondering if you all think it would be worth throwing a light Linux distro on...

Veraxis, (edited )

That covers a pretty wide range of hardware, but that era would be around 2009-2015, give or take, so you would most likely be looking at Intel 1st gen through 6th gen (let's be honest, there is nearly zero chance institutions were using anything but Intel in that era). Pentium-branded CPUs from that time range unfortunately likely mean low-end dual-core CPUs with no hyperthreading, so 2C/2T, but I have run Linux on Core2-era machines without issue, so hopefully the CPU specs will be okay.

2-8GB of DDR3 RAM is most likely for that era, and as others point out, RAM will be your biggest issue for running browsers. If the RAM is anything like the CPUs, I assume you will be looking at the lower end, 2-4GB, depending on how old the oldest machines are, so I second the recommendation of consolidating the RAM into fewer machines. Or, if you can get any kind of budget at all, DDR3 sticks on eBay are dirt cheap; a quick look shows bulk listings of 20x4GB sticks for $26.
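If it helps with the triage, a quick way to inventory what each machine actually has (assuming you can boot a live USB or existing install on them; `dmidecode` needs root):

```shell
free -h                      # total installed RAM at a glance
sudo dmidecode -t memory |   # per-slot details: size, type, speed
  grep -E 'Size|Type:|Speed'
```

That tells you which machines are worth consolidating sticks into and which slots are already full.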

In terms of distro/DE, I second anything with XFCE, but if you could bump them up to around 8GB RAM, then I think any DE would be feasible.

Hard drives shouldn't be an issue, I think, since desktop hard drives in the 320GB-1TB range would have been standard by then. You are also most likely outside of the "capacitor plague" era, so I would not expect motherboard issues, but you might open them up and dust them out so the fans aren't struggling. Re-pasting the CPUs would also probably be a good idea, so maybe add a couple $5 tubes of thermal paste to a possible budget. Polysynthetic thermal compounds that do not dry out easily would be preferable, and something like Arctic Silver 5 would also be an era-appropriate choice, lol.

Veraxis,

No problem! Mint XFCE sounds perfect to me.

Veraxis,

I am not sure that I can really call what I did distrohopping, but

Mint w/ Cinnamon (several years ago on an old junker laptop and never ended up using it as a daily driver) -> Manjaro w/ KDE Plasma (daily driver for ~1 year) -> Arch w/ KDE Plasma (~2 years and counting).

I have also used Debian with no DE on a file server I made out of an old thin client PC, and I have used Raspbian on a Raspberry Pi.

What're some of the dumbest things you've done to yourself in Linux?

I’m working on some materials for a class wherein I’ll be teaching some young, wide-eyed Windows nerds about Linux, and we’re including a section we’re calling “foot guns”. Basically it’s ways you might shoot yourself in the foot while meddling with your newfound Linux powers....

Veraxis,

The Arch installation tutorial I followed originally advised using LVM to have separate root and user logical volumes. However, after some time my root volume started getting full, so I figured I would take 10GB off of my home volume and add it to the root one. Simple, right?

It turns out that lvreduce --size 10G volgroup0/lv_home doesn’t reduce the size by 10GB, it sets the absolute size to 10GB, and since I had way more than 10GB in that volume, it corrupted my entire system.
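For anyone reading along, the distinction looks like this (volume names are just the ones from my story; do not run these against a volume you care about without a backup):

```shell
# Sets the LV to an ABSOLUTE size of 10G -- this is what bit me:
sudo lvreduce --size 10G volgroup0/lv_home

# Shrinks the LV BY 10G -- note the leading minus:
sudo lvreduce --size -10G volgroup0/lv_home

# Safer still: -r/--resizefs shrinks the filesystem along with the LV,
# instead of leaving the filesystem hanging off the end of a smaller LV:
sudo lvreduce -r --size -10G volgroup0/lv_home
```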

There was a warning message, but it seems my past years of Windows use still have me trained to reflexively ignore dire warnings, and so I did it anyway.

Since then I have learned enough to know that I really have no use for LVM, nor do I see much benefit to separate root/home partitions for desktop Linux use, so I reinstalled without LVM the next time around. This is, to date, the first and only time I have irreparably broken my Linux install.

Veraxis,

Oh no, I’ve been caught, haha. Good memory!

In my defense, the story seemed relevant to OP’s question, and the post it was originally in has apparently been deleted.

Veraxis,

Arch gamer here. I can confirm that it works well.

Veraxis,

I have not played CS2, sorry.

Veraxis, (edited )

What is your use case? For something like a fileserver which I am mainly SSH-ing into anyway, I may not install a DE at all, but if this is going to be a general-use desktop, I see no reason not to install the DE right from the beginning. Selecting a DE is part of the install process of most Linux distros, and some distros offer separate install images for each DE they support.

If you are very concerned about keeping your system lean and want full control of what gets installed, you might want to look up guides for installing Arch Linux. The initial setup process is more involved than other distros, but once you have it installed, I think its reputation for being difficult is overblown.
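As a sketch of how little it takes to bolt a DE onto a minimal Arch install afterwards (these are the usual Arch package names as I know them; double-check the wiki before copying):

```shell
# KDE Plasma, a terminal, and a display manager for graphical login:
sudo pacman -S plasma-meta konsole sddm
sudo systemctl enable sddm.service

# Or a lighter XFCE setup:
sudo pacman -S xfce4 lightdm lightdm-gtk-greeter
sudo systemctl enable lightdm.service
```

Reboot and you land at a graphical login, with nothing else installed that you did not ask for.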

Veraxis, (edited )

I am not sure what graphics you have, but I have an older-ish laptop with hybrid 10-series Nvidia graphics which do not fully power down even with TLP installed. It continued to draw a continuous 7W even at idle. I installed envycontrol so that I can manually switch hybrid graphics on and off or force the use of integrated graphics. My battery life jumped from 2-3 hours to 4-5 hours after I did this, and unless I am gaming (which I rarely do on this laptop) I hardly ever need the dGPU.
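For reference, the envycontrol commands I mean are roughly these (from memory, so verify against `envycontrol --help` for your version; a reboot is needed after switching modes):

```shell
envycontrol -q                   # query the current graphics mode
sudo envycontrol -s integrated   # dGPU powered off entirely
sudo envycontrol -s hybrid       # dGPU available on demand
sudo envycontrol -s nvidia       # dGPU only
```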

I also use TLP. I have tried auto-cpufreq and powertop, and I found TLP offered the most granular control and worked the best for my system/needs.

Veraxis,

Sorry for the late reply. It sounds like it could be due to the dGPU if your battery life is terrible. I don’t know if that method would work or not. I had to try a couple different things before I eventually settled on envycontrol.

Veraxis,

I have a laptop with integrated Intel graphics and a desktop with Nvidia graphics. I use Wayland on the former right now as of KDE 6. I have noticed some odd behaviors, but overall it has been fine. The latter, however, just boots to a black screen. I have neither the time nor the desire to debug that right now, so I will adopt Wayland on that machine when it works with Nvidia to a reasonable degree of stability.

Can Linux be dual booted on a computer with Windows?

I have a Lenovo Yoga running Windows 10 on a 1TB SSD and at some point will probably have to upgrade it to Windows 11. I use it for school and have to keep Windows on it for now because of what I’m currently doing. I want to start getting into Linux in hopes of making the switch sometime down the line. Is partitioning the disk...

Veraxis,

Yep, I dual boot on my laptop so that I can run certain programs for my schoolwork as well. I use rEFInd as my boot manager so that I can easily select one or the other on startup.

Veraxis,

Blah blah blah blah blah…

tl;dr the author never actually gets to the point stated in the title about what the “problem” is with the direction of Linux and/or how knowing the history of UNIX would allegedly solve this. The author mainly goes off on a tangent listing out every UNIX and POSIX system in their history of UNIX.

If I understand correctly, the author sort of backs into the argument that, because certain Chinese distros like Huawei EulerOS and Inspur K/UX were UNIX-certified by Open Group, Linux therefore is a UNIX and not merely UNIX-like. The author seems to be indirectly implying that all of Linux therefore needs to be made fully UNIX-compatible at a native level and not just via translation layers.

Towards the end, the author points out that Wayland doesn’t comply with UNIX principles because the graphics stack does not follow the “everything is a file” principle, despite previously admitting that basically no graphics stack, X11 and macOS’s included, has ever done this.

Help me out if I am missing something, but all of this fails to articulate why any of this is a “problem” which will lead to some kind of dead-end for Linux or why making all parts of Linux UNIX-compatible would be helpful or preferable. The author seems to assume out of hand that making systems UNIX-compatible is an end unto itself.

Veraxis,

I have done some basic testing, and the speed of the USB stick you use does make a noticeable difference on the boot time of whatever you install on it.

If I recall correctly, a low-speed USB 2.0 stick took around 30-60 seconds to load (either time to the login screen, or time to reach a blinking cursor for something like an Arch install disk). If this is something for occasional use, even that works perfectly fine.

Slightly faster USB 3 sticks in the 100MB/s range can be had for only around $5-15 USD and work significantly better, maybe 7-15 seconds. These usually have asymmetric read/write speeds, such as 100MB/s read and 20MB/s write, but for a boot disk the read speed is the primary factor.

Some high end flash drives can reach 500-1000MB/s and would load in only a few seconds. A high speed 256GB stick might cost $25-50, and a 1TB stick maybe $75-150.

An NVMe enclosure might cost $20-30 for a decent quality 1GB/s USB 3 enclosure, or $80-100 for a thunderbolt enclosure in the 3GB/s range so long as your hardware supports it, plus another $50-100 for a 1TB NVMe drive itself. This would of course be the fastest, but it is also bulkier than a simple flash drive, and I think you are at the point of diminishing returns in terms of performance to cost.

I would say up to you on what you are willing to spend, how often you realistically intend to use it, and how much you care about the extra couple seconds. For me, I don’t use boot disks all that often, so an ordinary 100MB/s USB 3 stick is fine for my needs even if I have faster options available.
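If you want to sanity-check a stick yourself, a rough sequential-read test can be done with plain dd. This sketch reads back a dummy file so it is safe to run anywhere; for a real stick you would read from the unmounted device node instead (e.g. /dev/sdX) and add iflag=direct so the page cache does not inflate the number:

```shell
# Create a 64 MiB dummy file, read it back, and let dd report throughput.
dd if=/dev/zero of=/tmp/usbtest.img bs=1M count=64 2>/dev/null
dd if=/tmp/usbtest.img of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/usbtest.img
```

The last line dd prints includes the effective MB/s, which you can compare against the stick's rated read speed.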

Veraxis,

My particular testing was with an SSK SD300, which is roughly 500MB/s up and down. I have benchmarked this and confirmed it meets its rating.

I have thought about buying something like a Team Group C212 or Team Group Spark LED, which are rated at 1000MB/s. The 256GB version of the C212 is available on places like Newegg and Amazon for around $27 USD at time of writing, but they make variants as high as 1TB.
