How so? Honest question: I can't seem to find anything that is not super pro-corporation, like the prohibition on modding consoles, with fines in the tens of thousands of dollars or even prison sentences...
I would pay a lot of money to see Nintendo’s conniption over having to allow home brew and non-approved software on their game consoles. I would love to release emulators for older Nintendo consoles for the Switch so that they don’t get to keep charging people again to play old games on newer consoles.
That is vastly more readable, and not only thanks to the colors: the indentation, new lines, and straightforward section titles are a huge improvement.
I love this change, actually, I’m not a boring-text purist. Proper categorizing of data allows me to spot things at a glance much easier, and I’m all in favor of anything that can improve efficiency and understanding, especially for new folks, so we can improve product adoption.
I’m using sid and I’m loving this change. It’s an obvious visual cue to check whether I’m about to remove something important like my whole desktop environment lol
I love it, but as someone with a red-green colour-blind coworker, I always try to use blue for positive feedback and orange for negative, as it’s better for most types of colour blindness.
You could scroll down to the screenshots on the GitHub page, but I had a friend recommend btop to me and seeing it for the first time running on my own machine was an experience. Highly recommend.
Ah! Just yesterday I configured my router to block all the Apple tracking requests (via DNS)… My Android doesn’t have Google, so they are technically wrong: there is no Apple OS with no tracking (and, as it is closed source, you can’t even verify it).
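For anyone wondering what “via DNS” looks like in practice: on dnsmasq-based router firmware, this kind of blocking is just null-routed answers for chosen hostnames. A minimal sketch (the hostnames below are placeholders, not a vetted telemetry list; substitute a maintained blocklist):

```
# dnsmasq fragment: return 0.0.0.0 for a name and all of its subdomains.
# The names below are illustrative placeholders only.
address=/tracking-host.example/0.0.0.0
address=/telemetry-host.example/0.0.0.0
```

The `address=/domain/ip` form matches the domain and everything under it, which is why a short list can cover many endpoints.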
Compared to the rise of LLMs, containers are positively old hat now
You know, this statement makes the author sound like they think LLMs should replace containers, or that development of better containers is passé because of New and Shiny Things.
Please take care not to sound like a project manager when doing tech journalism.
@vrighter@ylai
That is a really bad analogy. If the "compilation" takes 6 months on a farm of 1000 GPUs and the results are random, then the dataset is basically worthless compared to the model. Datasets are easily available, and always were, but if someone invests the effort in the training, then they don't want to let others use the model as open source. Which is why we want open-source models. But not "openwashed" ones, where they call it "open" while the license is non-commercial, no modifications, no redistribution.
I think technically, the source should be the native format of whatever image manipulation program you use. For vector graphics there is the SVG format, but the native editor format is still preferable. Otherwise, whoever gets the end copy cannot easily modify or reproduce it, only copy it. But of course it depends on the definition of “easy” and a lot of other factors. Licensing is hard, and I am not a lawyer.
It would depend on the format what is counted as source, and what isn’t.
You can create a picture by hand, using no input data.
I challenge you to do the same for model weights. If you truly just sit down and type away numbers in a file, then yes, the model would have no further source. But that is not something that can be done in practice.
Are you sure that you can reproduce the model, given the same inputs? Reproducibility is a difficult property to achieve. I wouldn’t think LLMs are reproducible.
In theory, if you have the inputs, you have reproducible outputs, modulo perhaps some small deviations due to non-deterministic parallelism. But if those effects are large enough to make your model perform differently you already have big issues, no different than if a piece of software performs differently each time it is compiled.
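The “small deviations due to non-deterministic parallelism” come from the fact that floating-point addition is not associative, so the order of a parallel reduction changes the result. A minimal Python illustration (the values are contrived purely to make the effect visible; real training deviations are far subtler):

```python
# Floating-point addition is not associative: summing the same values in a
# different order can give a different result.
vals = [1e16, 1.0, -1e16, 1.0]

# Sequential left-to-right sum: each 1.0 added to 1e16 is absorbed and lost.
sequential = ((vals[0] + vals[1]) + vals[2]) + vals[3]

# Pairwise sum, as a parallel reduction might group the work:
pairwise = (vals[0] + vals[1]) + (vals[2] + vals[3])

# The exact answer is 2.0; the two orders disagree with each other too.
print(sequential, pairwise)
```

A parallel training run is essentially a huge tree of such reductions whose grouping depends on scheduling, which is why bit-exact reproducibility is hard even with identical inputs.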
I would consider the “source code” for artwork to be the project file, with all of the layers intact and whatnot. The Photoshop PSD, the GIMP XCF or the Krita KRA. The “compiled” version would be the exported PNG/JPG.
You can license a compiled binary under CC BY if you want. That would allow users to freely decompile/disassemble it or to bundle the binary for their purposes, but it’s different from releasing source code. It’s closed source, but under a free license.
The situation is somewhat different and nuanced. With weights there are tools for fine-tuning, LoRA/LoHa, PEFT, etc., which presents a different situation than with binaries for programs. You can see that despite e.g. LLaMA being “compiled”, others can significantly use it to make models that surpass the previous iteration (see e.g. recently WizardLM 2 in relation to LLaMA 2). Weights are also to a much larger degree architecturally independent than binaries (you can usually cross train/inference on GPU, Google TPU, Cerebras WSE, etc. with the same weights).
How is that different from e.g. patching a closed-source binary? There are plenty of community patches to old games to e.g. make them work on newer hardware. Architectural independence seems irrelevant; it’s no different than e.g. Java bytecode.
This is a very shallow analogy. Fine-tuning is rather the standard technical approach to reduce compute, even if you have access to the code and all training data. Hence there has always been a rich and established ecosystem for fine-tuning, regardless of “source.” Patching closed-source binaries is not the standard approach, since compilation is far less computationally intensive than today’s large-scale training.
Java bytecode is a far-fetched example. The JVM does assume a specific architecture, particular to the CPU-dominant world in which it was developed, and Java bytecode cannot be trivially (or efficiently) executed on a GPU or FPGA, for instance.
And by the way, the issue of weight portability is far more relevant than the forced comparison to (simple) code can accomplish. Usually today’s large scale training code is very unique to a particular cluster (or TPU, WSE), as opposed to the resulting weight. Even if you got hold of somebody’s training code, you often have to reinvent the wheel to scale it to your own particular compute hardware, interconnect, I/O pipeline, etc… This is not commodity open source on your home PC or workstation.
The analogy works perfectly well. It does not matter how common it is. Patching binaries is very hard compared to e.g. LoRA, but it is still essentially the same thing: making a derivative work by modifying parts of the original.
How does this analogy work at all? LoRA is chosen by the modifier to be low-rank to accommodate some desktop/workstation memory constraint, not because the other weights are “very hard” to modify if you happen to have the necessary compute and I/O. The development of LoRA is also largely directed by storage reduction (hence not too many layers modified) and preservation of generalizability (since training generalizable models is hard). The Kronecker-product versions, in particular, were first developed in the context of federated learning, not desktop/workstation fine-tuning (also, LoRA is fully capable of modifying all weights; it is rather a technique to do so in a correlated fashion to reduce the size of the gradient update). And much of the development of LoRA happened in the context of otherwise fully open datasets (e.g. LAION) that are just not manageable in desktop/workstation settings.
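For readers unfamiliar with the mechanics being argued over: LoRA keeps the original weight matrix frozen and trains a small low-rank correction on top of it. A minimal numpy sketch of the idea (all shapes and names are illustrative, not taken from any particular implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 1024, 8                          # model dimension, LoRA rank (r << d)

W = rng.standard_normal((d, d))         # frozen pretrained weight matrix
A = rng.standard_normal((r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # trainable; zero-init so the model
                                        # starts out unchanged

W_eff = W + B @ A                       # effective ("patched") weights

full_params = W.size                    # parameters in a full update
lora_params = A.size + B.size           # parameters LoRA actually trains
print(full_params, lora_params)         # the LoRA update is ~1.6% the size
```

The point of contention above is whether this resembles binary patching; mechanically it is just a structured additive delta on the original weights, trained rather than hand-crafted.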
This narrow perspective of “source” is taking away the actual usefulness of compute/training here. Datasets from e.g. LAION to Common Crawl have been available for some time, along with training code (sometimes independently reproduced) for the Imagen diffusion model or GPT. It is only when e.g. GPT-J came along that somebody invested into the compute (including how to scale it to their specific cluster) that the result became useful.
It’s actually just the display backlight which is why I had to cover it with aluminium tape instead of just disconnecting the wire. Not only don’t I want an ad on my computer I especially don’t want an illuminated one.
It would be a massive loss for sure. One that will be felt for a long time. It’s the only way I can get around our thoroughly enshittified press up here in Canada. I mean I’d gladly pay, if it was worth paying for, which it’s not.
Yeah, well, they couldn’t “shut it down” before E2E encryption, either, so, obviously, the problem isn’t necessarily the encryption, but that the cops suck at their jobs.
“We couldn’t really catch them before, but now we can’t read their text messages! Merde!”
True by the letter but not really by practice. PC is synonymous with a computer running Windows, or Linux at a push. I don’t know whether that’s because of Microsoft’s early market dominance or because Apple enjoys marketing itself as a totally different entity, or some combination of the two. But yeah, usage determines meaning more than what the individual words mean in a more literal sense.
Originally “PC” meant the IBM PC, or “PC compatible” (as in compatible with IBM’s machine without using their trademark). An IBM PC could have run DOS, Windows, or even OS/2.
It’s funny to me because these days, with all the remote software reinstallation and Windows asking why you want to close OneDrive and such, Windows isn’t exactly very personal either.
You can install your own software on a personal computer; there is freedom of choice. Apple tells you what you can install on a Mac. archive.ph/ks4uO
Will this make Apple Silicon Macs a fully open platform?
No, Apple still controls the boot process and, for example, the firmware that runs on the Secure Enclave Processor. However, no modern device is “fully open” - no usable computer exists today with completely open software and hardware (as much as some companies want to market themselves as such). What ends up changing is where you draw the line between closed parts and open parts. The line on Apple Silicon Macs is when the alternate kernel image is booted, while SEP firmware remains closed - which is quite similar to the line on standard PCs, where the UEFI firmware boots the OS loader, while the ME/PSP firmware remains closed. In fact, mainstream x86 platforms are arguably more intrusive because the proprietary UEFI firmware is allowed to steal the main CPU from the OS at any time via SMM interrupts, which is not the case on Apple Silicon Macs. This has real performance/stability implications; it’s not just a philosophical issue.
And wouldn’t it be a lot cheaper to just build your own PC rather than pay the premium for the apple logo?
It’s not virtualization. It’s actually booted and runs on bare metal, same as the way Windows runs on a normal Windows computer: a proprietary closed UEFI firmware handles the boot process but boots an OS from the “hard drive” portion of non-volatile storage (usually an SSD on Windows machines). Whether you run Linux or Windows, that boot process starts the same.
Asahi Linux is configured so that Apple’s firmware loads a Linux bootloader instead of booting MacOS.
And wouldn’t it be a lot cheaper to just build your own PC rather than pay the premium for the apple logo?
Apple’s base configurations are generally cheaper than similarly specced competitors, because their CPU/GPUs are so much cheaper than similar Intel/AMD/Nvidia chips. The expense comes from exorbitant prices for additional memory or storage, and the fact that they simply refuse to use cheaper display tech even in their cheapest laptops. The entry level laptop has a 13 inch 2560x1600 screen, which compares favorably to the highest end displays available on Thinkpads and Dells.
If you’re already going to buy a laptop with a high quality HiDPI display, and are looking for high performance from your CPU/GPU, it takes a decent amount of storage/memory for a Macbook to overtake a similarly specced competitor in price.
It’s not virtualization. It’s actually booted and runs on bare metal, same as the way Windows runs on a normal Windows computer: a proprietary closed UEFI firmware handles the boot process but boots an OS from the “hard drive” portion of non-volatile storage (usually an SSD on Windows machines). Whether you run Linux or Windows, that boot process starts the same.
Except the boot process on a non-Apple PC is open software. You can create a custom BIOS revision. The firmware on an Apple computer is not open source; AFAIK you cannot create a custom BIOS for an Apple computer.
Apple’s base configurations are generally cheaper than similarly specced competitors, because their CPU/GPUs are so much cheaper than similar Intel/AMD/Nvidia chips.
No idea what you mean by this. You cannot buy Apple’s hardware without the restrictions Apple places on any purchase. Any hardware you can buy from Apple carries a premium.
If you’re already going to buy a laptop with a high quality HiDPI display, and are looking for high performance from your CPU/GPU, it takes a decent amount of storage/memory for a Macbook to overtake a similarly specced competitor in price.
I think you mean that Apple uses its own memory more effectively than a Windows PC does. Yes, it does, but memory is not that expensive to make. Increasing the storage from 256GB to 512GB costs £200; I can buy a 2TB drive for that. More importantly, a drive can be replaced when it wears out. Apple quotes a replacement price that means you might as well buy a new computer.
Apple computers are designed to make repairs expensive. They may have pseudo adopted the right to repair, but let us see how that goes before believing the hype.
Except the boot process on a non apple PC is open software.
For the most part, it isn’t. The typical laptop you buy from the major manufacturers (Lenovo, HP, Dell) has closed-source firmware. They all end up supporting the open UEFI standard, but the implementation is usually closed source. Having the ability to flash new firmware that is mostly open source but with closed-source binary blobs (like coreboot), or fully open source (like libreboot), gets open code closer to the hardware at startup, but it still sits on proprietary implementations.
There’s some movement to open source more and more of this process, but it’s not quite there yet. AMD has the OpenSIL project and has publicly committed to open sourcing a functional firmware for those chips by 2026.
Asahi uses the open source m1n1 bootloader to load U-Boot, which in turn loads desktop Linux bootloaders like GRUB (which generally expect UEFI compatibility), as described here:
The SecureROM inside the M1 SoC starts up on cold boot, and loads iBoot1 from NOR flash
iBoot1 reads the boot configuration in the internal SSD, validates the system boot policy, and chooses an “OS” to boot – for us, Asahi Linux / m1n1 will look like an OS partition to iBoot1.
iBoot2, which is the “OS loader” and needs to reside in the OS partition being booted to, loads firmware for internal devices, sets up the Apple Device Tree, and boots a Mach-O kernel (or in our case, m1n1).
m1n1 parses the ADT, sets up more devices and makes things Linux-like, sets up an FDT (Flattened Device Tree, the binary devicetree format), then boots U-Boot.
U-Boot, which will have drivers for the internal SSD, reads its configuration and the next stage, and provides UEFI services – including forwarding the devicetree from m1n1.
GRUB, booting as a standard UEFI application from a disk partition, works like GRUB on any PC. This is what allows distributions to manage kernels the way we are used to, with grub-mkconfig and /etc/default/grub and friends.
Finally, the Linux kernel is booted, with the devicetree that was passed all the way from m1n1 providing it with the information it needs to work.
If you compare the role of iBoot (proprietary Apple code) to the closed source firmware in the typical Dell/HP/Acer/Asus/Lenovo booting Linux, you’ll see that it’s basically just line drawing at a slightly later stage, where closed-source code hands off to open-source code. No matter how you slice it, it’s not virtualization, unless you want to take the position that most laptops can only run virtualized OSes.
I think you mean that Apple uses its own memory more effectively than a Windows PC does.
No, I mean that when you spec out a base model Macbook Air at $1,199 and compare to similarly specced Windows laptops, whose CPUs/GPUs can deliver comparable performance on benchmarks, and a similar quality display built into the laptop, the Macbook Air is usually cheaper. The Windows laptops tend to become cheaper when you’re comparing Apple to non-Apple at higher memory and storage (roughly 16GB/1TB), but the base model Macbooks do compare favorably on price.
The typical laptop you buy from the major manufacturers (Lenovo, HP, Dell) has closed-source firmware.
FTFY: the typical laptop MOST people buy from the major manufacturers (Lenovo, HP, Dell) has closed-source firmware. I totally agree there are some PC suppliers with shitty practises. Where we disagree is this: if the firmware is fixed by the hardware manufacturer, then the manufacturer has control over everything on the system. It is only when you have control of the base functionality of the system that you can say you are in charge. This may be too literal for you, but I just see it as a matter of how much trust you place in the manufacturer not to abuse that control.
17-inch screen (2560x1440) over the 15.3-inch (2880x1864)
16GB memory (Apple’s 8GB-to-16GB upgrade = +£200)
1TB SSD over 256GB (Apple’s upgrade to 1TB = +£400)
8-core/16-thread CPU (AMD Ryzen 9 5900HX) over an 8-core CPU without multithreading, 4 of whose cores are cheaper efficiency variants
All of the PC’s components can be upgraded at the cost of the part plus labour. Everything on the Apple will cost as much as a new computer to replace, mainly because it is all soldered onto the board to make it harder to replace.
This is a £1,400 laptop from Scan versus the £1,500 MacBook Air currently.
Ah, I see where some of the disconnect is. I’m comparing U.S. prices, where identical Apple hardware is significantly cheaper (that 15" Macbook Air starts at $1300 in the U.S., or £1058).
And I can’t help but notice you’ve chosen a laptop with a worse screen (larger panel with lower resolution). Like I said, once you actually start looking at High DPI screens on laptops you’ll find that Apple’s prices are actually pretty cheap. 15 inch laptops with at least 2600 pixels of horizontal resolution generally start at higher prices. It’s fair to say you don’t need that kind of screen resolution, but the price for a device with those specs is going to be higher.
That laptop’s CPU also benchmarks slightly behind the 15" MacBook Air’s, even though the Air is held back by not having fans to manage thermals.
There’s a huge market for new computers that have lower prices and lower performance than Apple’s cheapest models. That doesn’t mean that Apple’s cheapest models are a bad price for what they are, as Dell and Lenovo have plenty of models that are roughly around Apple’s price range, unless and until you start adding memory and storage. Thus, the backwards engineered pricing formula is that it’s a pretty low price for the CPU/GPU, and a very high price for the Storage/Memory.
All of the PC components can be upgraded at the cost of the part + labour.
Well, that’s becoming less common. Lots of motherboards are now relying on soldered RAM, and a few have started relying on soldered SSDs, too.
I can’t help but notice you’ve chosen a laptop with a worse screen (larger panel with lower resolution).
I would choose a larger screen over that marginal difference in DPI every day of the week. People game on lower-resolution TV screens all the time because, for them, bigger is better.
The CPU benchmarks on that laptop’s CPU are also slightly behind the 15" Macbook Air, too, even held back by not having fans for managing thermals.
You cannot compare an app that runs on two different OSes; that is just plain silly. Cinebench only tests one feature of a system: the CPU’s ability to render a graphic. Apple is built around displaying graphics; a PC is a lot more versatile. There is more to a system than one component. Let’s see you run some ray-tracing benchmarks on that system.
Well, that’s becoming less common. Lots of motherboards are now relying on soldered RAM
I wouldn’t buy one, though you will always find some willing victim. In the future RAM is moving into the CPU package, but that will be done for speed gains. Until then, only a bloody fool would buy into this.
An Apple system has one major benefit over a PC: battery life. Other than that I would not recommend one, and even then I would give stern warnings about repair costs.
I would choose a larger screen over that marginal difference in dpi every day of the week.
Yes, but you’re not addressing my point that the price for the hardware isn’t actually bad, and that people who complain would often just prefer to buy hardware with lower specs for a lower price.
The simple fact is that if you were to try to build a MacBook killer and try to compete on Apple’s own turf by matching specs, you’d find that the entry level Apple devices are basically the same price as other laptops you could configure with similar specs, because Apple’s baseline/entry level has a pretty powerful CPU/GPU and high resolution displays. So the appropriate response is not that they overcharge for what they give, but that they make choices that are more expensive for the consumer, which is a subtle difference that I’ve been trying to explain throughout this thread.
You cannot compare an app that runs on two different OS.
Why not? Half of the software I use is available on both Linux and MacOS, and frankly a substantial amount of what most people do is in the browser anyway. If the software runs better on one device than another, that’s a real-world difference that can be measured. If you’d prefer to use Passmark or whatever other benchmark you like, you’ll still be able to compare specific CPUs.
you’re not addressing my point that the price for the hardware isn’t actually bad,
I disagree. Not only is the hardware cheaper and lower spec (with the exception of the CPU); the design is geared around making upgrades and repairs near impossible or unfeasible. Software also has much more support on Windows. Video editing was the Mac’s bread and butter for many years, but Windows has caught up due to improvements in hardware and software. In my mind this negates the case for buying a Mac currently, though I can easily see it was a good buy in the past.
The outlier is that Macs are good on battery life, so there is a niche market where they are an exceptionally good return on investment.
Why not? Half of the software I use is available on both Linux and MacOS, and frankly a substantial amount of what most people do is in the browser anyway. If the software runs better on one device than another, that’s a real-world difference that can be measured. If you’d prefer to use Passmark or whatever other benchmark you like, you’ll still be able to compare specific CPUs.
Because you cannot use Cinebench unless you are comparing the same system setup. Comparing two OSes is just stupid cherry-picking. Apple has a very trimmed-down OS compared to the complexity of Windows: Apple’s OS dumps the need for legacy code, being a closed system designed for specific hardware, while Windows still caters to code written for DX-era CPUs on the x86 architecture. That, as well as many other reasons, is why not. I notice you ignored my offer of comparing back-to-back ray-tracing results, and now fail to even mention it.
You are obviously enamoured by the Apple model, I am not. There really is nothing that you could say that would convince me otherwise. I will wish you good day, and hope you agree to disagree.
I doubt it’s the last time. Also, while “PC” means personal computer, it was a very specific brand name from IBM, not a general-purpose term. Their computers (and, later, the clones) became synonymous with x86 Windows machines.
Even apple themselves have always distanced themselves from the term (I’m a Mac, and I’m a PC…).
Tbh I am fully behind KDE as the flagship desktop. Dealing with GNOME users’ problems all day in the forum, KDE is just better for usability.
GNOME is reduced beyond the point that makes sense. KDE could use a bit of reduction too, but not as much as GNOME. People need the terminal or random extensions for basic things, and that is not a good experience.
On the other hand, GNOME and KDE both have really nice features, GNOME’s Microsoft integrations being particularly powerful (its account system actually works, unlike KDE’s, which I think nobody uses. But when using Thunderbird, which has standalone Exchange support, you don’t use that account system anyway, so it doesn’t matter).
Also, GNOME has nearly all of its apps on Flathub. GNOME Boxes is particularly impressive, with sandboxed virtualization. This means you can mix and match GNOME Flatpaks on a KDE desktop without any problems; KDE even handles the theming for you. GNOME, on the other hand, actively breaks Qt apps; it’s insane.
So I think GNOME has some great apps (Snapshot, Decoder, Simple Scan, Carburetor, Celluloid…), but you can install them anywhere.
GNOME looks better out of the box, and configuring KDE can be very tricky. There are also a lot of outdated “addons” for KDE, and you need some of them in order to get what you want. Extensions are better integrated in KDE, but it’s not like KDE has everything out of the box. I’d love to see more KDE support.
True. KDE’s virtual desktops are also basically unusable for me; idk, I just don’t see them, so they don’t get used.
There are pros and cons. It’s simply a tie; I stay with KDE because the lack of some things (like close buttons with the hitbox at the very edge) would annoy me.
This is my issue with KDE. Virtual Desktops are too unnecessarily convoluted to use. Even Alt-Tabbing is a pain if you have anything over 1 single workspace. I decided to daily drive KDE for a few months to give it a good chance, because before I would usually just go back to Gnome after a few days. It’s been 2 months now, and I don’t think I can take much more of it.
I actually tweaked it to be more “gnome-like”, but the desktops are a hot mess. At the end of the day, it’s a matter of taste, and I’m a huge fan of Gnome’s simplicity.
I don’t really get this but I’m going to assume it’s that my workflow is just different than yours.
I have keyboard shortcuts I’m happy with that let me navigate my virtual desktops as desired and place windows on them. If I weren’t happy with those shortcuts I could change them. I can see having different preferences, etc., but what makes it a hot mess exactly?
When I Alt-tab it always goes to the apps open on the next desktop, and never shows the apps on the current desktop. So, say I have Vivaldi and KWrite on desktop 1, and Brave and LibreOffice Calc on desktop 2.
If I’m on desktop 1 in Vivaldi and Alt-tab, it’ll move to desktop 2 and cycle between Brave and Calc, but will never show anything from desktop 1 until I release the Alt key and Alt-tab again.
Now, for me it’s even worse since I have 3 Desktops instead of 2.
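For what it’s worth, the behaviour described sounds like KWin’s task-switcher filter setting rather than something baked in. It can be changed in System Settings under Window Management > Task Switcher > Filter windows by > Virtual desktops; a sketch of the corresponding kwinrc fragment (the key name and value mapping here are from memory and may differ between KWin versions, so treat them as an assumption and prefer the GUI):

```
# ~/.config/kwinrc -- task-switcher desktop filter (verify key/values
# against your KWin version before relying on this)
[TabBox]
DesktopMode=1   # believed: 0 = all desktops, 1 = current desktop only
```

If this matches your version, Alt-Tab would then cycle only through windows on the desktop you are on.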
This is what people don’t get. Different DEs best serve different people. We should always push for a better experience, but sniping between DEs makes no sense.
Dealing with GNOME users’ problems all day in the forum, KDE is just better for usability?
It seems not unimaginable that whichever is more popular (/the default) will have more people reporting problems in the forum, regardless of how good it is?
I think Gnome is great. I use KDE on my Steam Deck and it’s fine, but very dated and ugly; it looks too much like Windows. Same reason I won’t recommend Mint.
What is so different? I think GNOME diverged a bit more, by removing window buttons, desktop icons, the dock, etc. And they don’t use blur and transparency at all.
But with Dash to Dock, Blur my Shell, and some window-decoration tweaking it is very similar.
Not that I think this makes sense (I don’t, as having a dock as well as a top panel wastes space), but it is not really a unique workflow.
But there are 3 actions, right? Is there a way to minimize and close too? Triple click? That sounds so counter-functional on paper. I guess I’d have to try it.
You won’t believe me, but minimize is not a thing, as there is no panel or dock. You open stuff, move it somewhere else, and you will never use a dock as a container, just as a quick launcher.
I think that is fair, but it for sure forces many people to adapt their workflows.
Well, the way the workspaces and the overview work is completely different, which means the workflow is night-and-day different. Not to mention the differences in how floating windows work, what role the top panel plays, and things like that.
They might look similar, just like KDE “looks” similar to Windows, but that is only true at the surface level. The way the desktops behave, and hence the workflow, is very different in each case.
I never understand the “Gnome is a MacOS clone” thing.
Other than a black bar at the top which has the time and a few system icons, what do they really have in common?
The workflow is entirely different, the dock is almost always hidden in Gnome, MacOS has no activities view, Gnome doesn’t even use the icon in the top left as a start-menu.
Yes, it is MacOS with the dock hidden. And without window buttons. And they are not on the left and not damn colorblind-unfriendly.
I mean, the top bar is the exact same, as are the app drawer, the workspaces, and the quick settings. They just removed even more stuff.
Edit: there are many things about them that are different, but the overall design seems similar to me. I think GNOME is way more usable and makes more sense. But still, having a top bar at all is kinda odd, and I think using one already makes you “macOS-like”.
The top bar isn’t the exact same, it’s extremely different. Gnome doesn’t use a global menu, doesn’t have a start menu, doesn’t have the clock on the right. The only similarity is the bar being at the top and containing stuff like WiFi and battery icons.
The window decorations are different. The UI looks different. Gnome doesn’t have a permanent dock, doesn’t have stuff on the desktop. Window management works in a very different way, MacOS doesn’t have the activities view, etc.
What I wonder is… how?! A quick search shows that half of the people in the USA use Chrome, another 30% Safari, 8% Edge, and only 5% Firefox. This study was done by Ghostery, so perhaps they chose a biased subset of the population? It just seems weird to me that more than half of average users would be using ad blocking these days.
My mom knows nothing about adblock and is still blocking ads. You’d better believe all the kids having to fix their relatives’ computers will set up some free antivirus and ad blocking right away.
Can’t comment on the sample size, though; Ghostery might indeed be biased somehow, measuring devices where their software is installed versus the total number of internet users, or something. But users of Ghostery are more likely to be tech-savvy, so there’s a higher chance of them having more devices that are equally sanitized.
I’d have to dig through the study and see if the sampling mechanism is made public.
will set up some free antivirus and ad blocking right away.
Those mfs have got a way to go if they’re setting up free antiviruses. A free antivirus will probably hurt your system more, on average, than actual viruses.
I have an inherent distrust of all things Microsoft. And their firewall is so terrible that I don’t want to find out whether they were as negligent when it came to developing their antivirus.
Some years ago Windows Defender certainly was a joke, but currently it is very capable, with a detection rate of 100%. One cause is that Windows, as the most-used OS, was always also the most attacked by malware, but the MS devs have at least done a good job here. Windows is certainly a privacy nightmare, at least with default settings, but in terms of security it is currently maybe the best protected, with Secure Boot, a good sandbox system, and Defender. The firewall is good too, though it sometimes overreacts with the need to whitelist some downloads and apps. All in all, there isn’t any need for third-party AVs anymore.
The better option is not to use Windows at all, but if you do, I don’t think disabling Windows Defender will stop them from getting whatever they want anyway.
Here we are talking about how this spyware is pretty resistant against all kinds of malware, not about the need to gut it of all kinds of telemetry, bloatware, and unneeded services before first use; that is another thing.
Kaspersky Free is top-grade stuff. Bitdefender Free is good but has false positives. Defender is a joke against ransomware and without an internet connection. The rest are bad.
According to statistics on my server, it’s 57% Chrome, 14% Safari and 12% Firefox. Also 10% use Linux. I’m not hosting anything tech related though.
Anyway, ad blocking is kind of essential. Even the boomers ask what’s wrong when ads start showing. The only people I’ve seen browsing without adblock are Apple users.